id (string, 9–9 chars) | title (string, 4–234 chars) | abstract (string, 20–2.86k chars) | article (string, 32–5.03M chars)
---|---|---|---
0911.1675 | on the equivalence of different approaches for generating multisoliton
solutions of the kpii equation | the unexpectedly rich structure of the multisoliton solutions of the kpii
equation has been explored by using different approaches, running from dressing
method to twisting transformations and to the tau-function formulation. all
these approaches proved to be useful in order to display different properties
of these solutions and their related jost solutions. the aim of this paper is
to establish the explicit formulae relating all these approaches. in addition
some hidden invariance properties of these multisoliton solutions are
discussed.
| introduction
the kadomtsev–petviashvili (kp) equation in its version called kpii
(u_t - 6 u u_{x_1} + u_{x_1 x_1 x_1})_{x_1} = -3 u_{x_2 x_2},    (1.1)
where u = u(x, t), x = (x1, x2) and subscripts x1, x2 and t denote partial derivatives, is
a (2+1)-dimensional generalization of the celebrated korteweg–de vries (kdv) equa-
tion.
there are two inequivalent versions of the kp equations, corresponding to
the choice for the sign in the rhs as in (1.1) and to the alternative choice, which is
referred to as kpi equation. the kp equations, originally derived as a model for
small-amplitude, long-wavelength, weakly two-dimensional waves in a weakly disper-
sive medium [1], were known to be integrable since the beginning of the 1970s [2,3],
and can be considered as prototypical (2+1)-dimensional integrable equations.
the kpii equation is integrable via its association to the operator
L(x, \partial_x) = -\partial_{x_2} + \partial_{x_1}^2 - u(x),    (1.2)
which defines the well known equation of heat conduction, or heat equation for short.
the spectral theory of the operator (1.2) was developed in [4–7] in the case of a
real potential u(x) rapidly decaying at spatial infinity, which, however, is not the most
interesting case, since the kpii equation was just proposed in [1] in order to deal with
two dimensional weak transverse perturbation of the one soliton solution of the kdv.
in fact, kpii admits a one soliton solution of the form
u(x, t) = -\frac{(a-b)^2}{2}\, \operatorname{sech}^2\!\left[ \frac{a-b}{2}\, x_1 + \frac{a^2-b^2}{2}\, x_2 - 2(a^3-b^3)\, t \right],    (1.3)
where a and b are real, arbitrary constants. multisoliton solutions have also been ob-
tained by different methods: in [8] through the hirota method, in [9] and [10] via dressing,
in [11] by the wronskian technique and in [12] by using darboux transformations. in
addition, in contrast with the kpi equation, also non elastic and resonant scattering of
solitons was described in [13–17]. the problem of finding the most general n-soliton
solution and their interactions has recently attracted a great deal of attention. in [18] a
sort of dressing method was applied to "superimpose" n-solitons to a generic, smooth
and decaying background and to obtain the corresponding jost solutions. there, the
two soliton case was studied in detail, showing that solitons can interact inelasti-
cally and that they can be created and annihilated. by using a finite dimensional
version of the sato theory for the kp hierarchy [19], in a series of papers [20–24], it
was shown that the general n-soliton solution can be written in terms of τ-functions
and their structure was studied in detail, showing that they exhibit nontrivial spatial
interaction patterns, resonances and web structures. a survey of these results is given
in [25] and applications to shallow water waves in [25,26]. in [27] solutions correspond-
ing to n solitons "superimposed" to a generic smooth decaying background and the
corresponding jost solutions were constructed by means of twisting transformations.
a spectral theory of the heat operator (1.2) that also includes multisolitons has to
be built. in [28] the inverse scattering transform for a perturbed one-soliton potential
was derived. in [29] the initial value problem for the kpii equation with data not
decaying along a line was linearized. however, the case of n-solitons is still open.
in solving the analogous problem for the nonstationary schrödinger operator, associated to the kpi equation, the extended resolvent approach was introduced [30].
accordingly, in order to solve the spectral problem for the heat operator, when the
potential u(x) describes n solitons, one needs to find among different procedures used
in deriving these potentials just that one that can be exploited in building the corre-
sponding extended resolvent. therefore, one needs to explore these different available
procedures and their interrelation, which is the goal of this paper.
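as a quick sanity check of the one-soliton formula (1.3), the following short script (a minimal sketch in python/sympy, not part of the original text) verifies symbolically that (1.3) satisfies the kpii equation (1.1):

```python
# minimal symbolic check that the line soliton (1.3) solves KPII (1.1)
import sympy as sp

x1, x2, t, a, b = sp.symbols('x1 x2 t a b', real=True)
arg = (a - b)/2*x1 + (a**2 - b**2)/2*x2 - 2*(a**3 - b**3)*t
u = -(a - b)**2/2 * sp.sech(arg)**2

# left-hand side minus right-hand side of (1.1); should simplify to zero
residual = sp.diff(u.diff(t) - 6*u*u.diff(x1) + u.diff(x1, 3), x1) + 3*u.diff(x2, 2)
print(sp.simplify(residual.rewrite(sp.exp)))   # expected output: 0
```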
the paper is organized as follows. in section 2 we introduce basic notations and
sketch some basic topics of the standard scattering problem of operator (1.2) for the
case of decaying potential. in section 3 we review the multisoliton potentials con-
structed in [18], solving a \bar\partial-problem given by a rational transformation of generic
spectral data. in section 4 we consider the multisoliton potentials obtained in [27] by
means of twisting transformations and we study their connection with the potentials
given in section 3. in section 5 we derive some alternative, more symmetric repre-
sentations of the pure solitonic potentials, and we show that they coincide with the
representation obtained in [20–24] by using the τ-function approach. in section 6 a
symmetric representation for the jost solutions is also obtained, showing that they
can be obtained by a miwa shift [31]. finally, we discuss the invariance properties of
the multisoliton potentials with respect to transformations of the soliton parameters.
2 background theory
the kpii equation can be expressed as the compatibility condition of a lax pair, [L, T] = 0, where L is the heat operator defined in (1.2) and T is given by
T(x, \partial_x, \partial_t) = \partial_t + 4\partial_{x_1}^3 - 6u\,\partial_{x_1} - 3u_{x_1} - 3\partial_{x_1}^{-1} u_{x_2}.    (2.1)
since the heat operator is not self-dual, one has to consider at the same time its dual L^{d}(x, \partial_x) = \partial_{x_2} + \partial_{x_1}^2 - u(x) and, then, introduce the jost solution φ(x, k) and the dual jost solution ψ(x, k) obeying the equations
L(x, \partial_x)\, φ(x, k) = 0,    L^{d}(x, \partial_x)\, ψ(x, k) = 0,    (2.2)
where k is an arbitrary complex variable, playing the role of a spectral parameter.
it is convenient to normalize the jost and dual jost solutions by multiplying them by an exponential factor as follows
χ(x, k) = e^{i k x_1 + k^2 x_2}\, φ(x, k),    ξ(x, k) = e^{-i k x_1 - k^2 x_2}\, ψ(x, k).    (2.3)
these functions satisfy the differential equations
(-\partial_{x_2} + \partial_{x_1}^2 - 2 i k\, \partial_{x_1} - u(x))\, χ(x, k) = 0,    (2.4a)
(\partial_{x_2} + \partial_{x_1}^2 + 2 i k\, \partial_{x_1} - u(x))\, ξ(x, k) = 0,    (2.4b)
and are chosen to obey normalization conditions at k-infinity
\lim_{k→∞} χ(x, k) = 1,    \lim_{k→∞} ξ(x, k) = 1,    (2.5)
so that, by (2.4), they are related to the potential u(x) of the heat equation by means
of the relations
u(x) = -2 i \lim_{k→∞} k\, \partial_{x_1} χ(x, k) = 2 i \lim_{k→∞} k\, \partial_{x_1} ξ(x, k).    (2.6)
the reality of the potential u(x), that we always assume here, is equivalent to the
conjugation properties
\overline{χ(x, k)} = χ(x, -\bar{k}),    \overline{ξ(x, k)} = ξ(x, -\bar{k}).    (2.7)
in the case of a potential u(x) ≡u0(x) rapidly decaying at spatial infinity, according
to [4–7], the main tools in building the spectral theory of the operator (1.2), as in the
one dimensional case, are the integral equations, whose solutions define the related
normalized jost solution χ0(x, k), and ξ0(x, k), i.e.,
χ_0(x, k) = 1 + \int dx'\, G_0(x - x', k)\, u_0(x')\, χ_0(x', k),    (2.8a)
ξ_0(x, k) = 1 + \int dx'\, G_0(x' - x, k)\, u_0(x')\, ξ_0(x', k),    (2.8b)
where
G_0(x, k) = -\frac{\operatorname{sgn} x_2}{2π} \int dα\, θ\big(α(α - 2 k_{\Re}) x_2\big)\, e^{i α x_1 - α(α - 2k) x_2},    (2.9)
is the green's function of the bare heat operator, (-\partial_{x_2} + \partial_{x_1}^2)\, G_0(x, k) = δ(x).
thanks to (2.8), the functions χ0 and ξ0 have the following asymptotic behaviour
on the x-plane
\lim_{x→∞} χ_0(x, k) = 1,    \lim_{x→∞} ξ_0(x, k) = 1.    (2.10)
however, if the potential u(x) does not decay at spatial infinity, as it is the case
when line soliton solutions are considered, the integral equations (2.8) are ill-defined
and one needs a more general approach.
a spectral theory of the kpii equation
that also includes solitons has been investigated using the resolvent approach. in this
framework it was possible to develop the inverse scattering transform for a solution
describing one soliton on a generic background [28], and to study the existence of
the (extended) resolvent for (some) multisoliton solutions [27]. the general theory
is, to some extent, still a work in progress. in this paper, however, we consider only
different approaches to the construction of soliton solutions and the corresponding
jost solutions.
3 multisoliton potentials via dressing method
in [18] we used a sort of dressing method to construct a potential u(x) describing n
solitons superimposed to a generic background potential u0(x). more precisely, we
considered a potential u(x) with spectral data obtained by a rational transformation
of the spectral data of a generic background u0(x) and we transformed the problem
of finding this u(x) into a \bar\partial-problem on the corresponding jost solutions, which was
solved by a dressing procedure. the rational transformation was chosen to depend on
n pairs of real distinct parameters aj, bj (j = 1, . . . , n) and the jost solutions φ(x, k)
and ψ(x, k) were required to obey suitable analyticity properties, normalization like
in (2.3), (2.5) and conjugation property (2.7).
the transformed potential u(x) is given by the following formula
u(x) = u_0(x) - 2 \partial_{x_1}^2 \log\det[C + F(x)],    (3.1)
where C is a real N × N constant matrix and F(x) is an N × N matrix-valued function of x with entries
F_{lj}(x) = \int_{(b_l - a_j)\infty}^{x_1} dy_1\, ψ_0(y, i b_l)\, φ_0(y, i a_j)\Big|_{y_2 = x_2},    l, j = 1, \dots, N,    (3.2)
where φ0(x, k) and ψ0(x, k) are the jost and dual jost solutions of equations (2.2)
with potential u0(x). notice that the matrix elements of f(x) are given in terms of
values of the so-called cauchy–baker–akhiezer function [32] at points aj and bl with
j, l = 1, . . . , n. in [18] we have shown that the matrix c does not need to be either
regular, or diagonal. we also mentioned that in order to obtain a real potential it is
enough to consider a more general situation, with complex parameters aj and bj such
that
a_j = \bar{a}_{π_a(j)},    b_j = \bar{b}_{π_b(j)},    j = 1, \dots, N,    (3.3)
where the bar denotes the complex conjugate and πa, πb are some permutations of
the indices (with a proper modification of the constraints on the constant matrix c).
however, this case is essentially more complicated to investigate and we do not
consider it here.
if, a posteriori, the background u0 is taken to be identically zero, then the eigen-
functions in (3.2) are pure exponential functions, i.e.,
φ_0(x, i a_j) = e^{A_j(x)},    ψ_0(x, i b_l) = e^{-B_l(x)},
where for future convenience we introduced
A_j(x) = a_j x_1 + a_j^2 x_2,    B_l(x) = b_l x_1 + b_l^2 x_2,    j, l = 1, 2, \dots, N.    (3.4)
then (3.2) takes the form
F_{lj}(x) = e^{-B_l(x)}\, Λ_{lj}\, e^{A_j(x)},    (3.5)
with Λ the cauchy matrix
Λ_{lj} = \frac{1}{a_j - b_l}.    (3.6)
notice that in [18] the time dependence was not specified, but this simply amounts
to taking into account the time dependence of the original jost solutions φ0(x, k) and
ψ0(x, k) fixed by the choice of the second lax operator (2.1). in the case of u0(x, t) ≡0
this means that we have to use
A_j(x, t) = a_j x_1 + a_j^2 x_2 - 4 a_j^3 t,    B_l(x, t) = b_l x_1 + b_l^2 x_2 - 4 b_l^3 t,    (3.7)
instead of (3.4).
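for n = 1 the dressing formula (3.1) with u_0 ≡ 0 and (3.5)–(3.7) reduces to the one-soliton solution (1.3) up to a translation. the following sketch (python/sympy, not from the original text; the positive constant c and the assumption a > b are illustrative, so that the logarithms stay real) checks this explicitly:

```python
# N = 1: u = -2 d^2/dx1^2 log( c + e^{A-B}/(a-b) ) reproduces the sech^2 soliton (1.3)
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', real=True)
a, b, c = sp.symbols('a b c', positive=True)      # illustrative assumption: a > b, c > 0
A = a*x1 + a**2*x2 - 4*a**3*t                     # A(x, t), eq. (3.7)
B = b*x1 + b**2*x2 - 4*b**3*t                     # B(x, t)

u_dressing = -2*sp.diff(sp.log(c + sp.exp(A - B)/(a - b)), x1, 2)

shift = -sp.log(c*(a - b))                        # translation induced by c and (a - b)
u_soliton = -(a - b)**2/2 * sp.sech((A - B + shift)/2)**2

print(sp.simplify((u_dressing - u_soliton).rewrite(sp.exp)))   # expected output: 0
```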
in [18] we addressed the problem of regularity of the potentials, as well as of
their asymptotic behaviour, only for the case n = 2, i.e., for 2-soliton potentials. in
this particular case, we were able to formulate the conditions on the 2 × 2 matrix
c that guarantee the regularity of the potential u(x), and we showed that at large
distances in the x-plane the potential decays exponentially fast except along certain
specific rays, where it has a solitonic one dimensional behaviour of the form (1.3). the
classification of the 2-soliton potentials obtained in [18] was subsequently recovered
and generalized to the n-soliton case in [24,25].
4 multisoliton potentials via twisting transformations
4.1 equivalence of the n-soliton potentials derived in [18] and in [27]
in this section we consider pure soliton potentials of the heat equation obtained by
twisting transformations. the details of the construction of the twisting operators and
of the corresponding potentials can be found in [27]. the transformation of a generic
(smooth, decaying at infinity) background potential u0(x) into a new potential u(x),
which describes n-solitons "superimposed" to the background u0(x), is parameterized
by two sets of real parameters {a1, . . . , ana} and {b1, . . . , bnb}, which we assume all
distinct, and an na × nb real matrix c. in this case we allow na and nb to be not
necessarily equal, but
N_a, N_b ≥ 1,    (4.1)
and denote n = max{na, nb}. the pure n-soliton potential follows directly from the
expressions derived in [27] by taking u0(x) ≡0. precisely, we have
u(x) = -2 \partial_{x_1}^2 \log τ_1(x),    (4.2)
with
τ_1(x) = \det(E_{N_a} + C F(x)) ≡ \det(E_{N_b} + F(x) C),    (4.3)
where we introduced the N_a × N_a (resp. N_b × N_b) identity matrix E_{N_a} (resp. E_{N_b}), the diagonal matrices
E^{A}(x) = \operatorname{diag}\{e^{A_j(x)}\}_{j=1}^{N_a},    E^{-B}(x) = \operatorname{diag}\{e^{-B_l(x)}\}_{l=1}^{N_b},    (4.4)
see (3.4), and the N_b × N_a matrix function F(x) = \|F_{lj}(x)\|_{l=1,\dots,N_b}^{j=1,\dots,N_a}, where notation (3.5) was used, so that
F(x) = E^{-B}(x)\, Λ\, E^{A}(x),    (4.5)
with Λ an N_b × N_a constant matrix with elements given in (3.6).
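the equality of the two determinants in (4.3) is the standard sylvester (weinstein–aronszajn) identity; a small numerical sketch with randomly chosen illustrative parameters (python/numpy, not part of the original text):

```python
# numerical check of det(E_{Na} + C F) = det(E_{Nb} + F C) for the pure-soliton F of (4.5)
import numpy as np

rng = np.random.default_rng(1)
Na, Nb = 2, 3
a = rng.uniform(1.0, 2.0, Na)             # parameters a_j
b = rng.uniform(-2.0, -1.0, Nb)           # parameters b_l, distinct from the a_j
x1, x2 = 0.3, -0.7

A = a*x1 + a**2*x2                        # A_j(x), eq. (3.4)
B = b*x1 + b**2*x2                        # B_l(x)
Lam = 1.0/(a[None, :] - b[:, None])       # Cauchy matrix (3.6), shape (Nb, Na)
F = np.diag(np.exp(-B)) @ Lam @ np.diag(np.exp(A))   # eq. (4.5)
C = rng.uniform(0.0, 1.0, (Na, Nb))       # constant Na x Nb matrix

d1 = np.linalg.det(np.eye(Na) + C @ F)
d2 = np.linalg.det(np.eye(Nb) + F @ C)
print(np.isclose(d1, d2))                 # True
```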
let us show now that the potentials given in (3.1) in terms of N × N matrices C and F coincide with those obtained by the twisting transformations and expressed by means of the matrices C and F in (4.3). this is obvious when N_a = N_b and the constant matrices in (3.1) and in (4.3) are nonsingular: then it is enough to take the one equal to the inverse of the other, so that the determinants in (3.1) and (4.3) are equal up to an unessential constant factor.
the generic situation is a bit more complicated. in order to make explicit the size of the matrices involved we use here the notation A_N for an N × N matrix A. let us consider, for definiteness, the case N_a ≤ N_b ≡ N. let C_N denote the N × N matrix constructed by adding N_b - N_a zero rows to the matrix C. taking into account that the product F C in the second equality in (4.3) is an N × N matrix, it is clear that F_N C_N = F C, where
F_N = (E^{-B})_N\, Λ_N\, (E^{A})_N    (4.6)
and where the parameters a_j with j = N_a+1, N_a+2, \dots, N in (E^{A})_N and in Λ_N can be chosen arbitrarily, provided they are different from the parameters b_l. it is well known that the cauchy matrix Λ_N is invertible and that Λ_N^{-1} = G_N \widetilde{Λ}_N F_N, where the matrix \widetilde{Λ}_N is obtained from the matrix Λ_N by renaming the a_j as b_j and vice versa, and where G_N and F_N are two invertible diagonal matrices. thus, for the second determinant in (4.3) we have
\det(E_{N_b} + F C) = \det(E_N + F_N C_N)
    = \det(E^{-B} Λ E^{A} G F)_N\, \det\big( (E^{-A})_N \widetilde{Λ}_N (E^{B})_N + G_N^{-1} C_N F_N^{-1} \big).
the factor \det(E^{-B} Λ E^{A} G F)_N does not contribute to the potential, due to the derivative in (4.2). moreover, thanks to (3.5) and (4.6), the N × N matrix E^{-A}(x) \widetilde{Λ} E^{B}(x) coincides with the N × N matrix F(x) in (3.5), once the parameters a_j and b_j are exchanged. therefore, relations (4.3) and (4.2) indeed generate the same potential as (3.1), provided the constant matrix C of (3.1) is related to C_N by
C_N = G_N C F_N    (4.7)
and the parameters a_j are renamed b_j and vice versa.
remark 4.1 in the case of a nonzero background potential u0(x) one can prove that
the two potentials obtained by the two approaches are equivalent, up to an additional
term \partial_{x_1}^2 \log \det F_N(x).
remark 4.2 here and below we omitted the time dependence as it can be easily
switched on by using (3.7) instead of (3.4).
4.2 regularity conditions for the potential
in view of the discussion on the necessary and sufficient conditions required to guar-
antee the regularity of the multisoliton solutions of the equations in the hierarchy of
the kpii equation, we recover here, in an equivalent formulation, sufficient conditions
for regularity of multisoliton potentials of the heat equation and, then, of multisoliton
solutions of kpii, already given in [21,25].
let us consider again the second equality in (4.3). we have
\det(E_{N_b} + F C) = \sum_{n=0}^{N_b} \sum_{1 ≤ l_1 < l_2 < \dots < l_n ≤ N_b} (F C)\binom{l_1, l_2, \dots, l_n}{l_1, l_2, \dots, l_n}.    (4.8)
here and in the following A\binom{l_1, l_2, \dots, l_n}{j_1, j_2, \dots, j_n} denotes the minor of the matrix A obtained by selecting rows l_1, l_2, \dots, l_n and columns j_1, j_2, \dots, j_n. the principal minor of the product F C in (4.8) can be written by the binet–cauchy formula as
(F C)\binom{l_1, l_2, \dots, l_n}{l_1, l_2, \dots, l_n} = \sum_{1 ≤ j_1 < j_2 < \dots < j_n ≤ N_a} \left( \prod_{m=1}^{n} e^{-B_{l_m}(x)} \right) \left( \prod_{m=1}^{n} e^{A_{j_m}(x)} \right) Λ\binom{l_1, l_2, \dots, l_n}{j_1, j_2, \dots, j_n}\, C\binom{j_1, j_2, \dots, j_n}{l_1, l_2, \dots, l_n}.    (4.9)
taking into account that F is an N_b × N_a matrix and C an N_a × N_b matrix, we get that all minors of F C with n > \min\{N_a, N_b\} are zero.
recalling that the term corresponding to n = 0 in (4.8) is 1, i.e., greater than
zero, we deduce that a sufficient condition for having a regular solution is that the
real matrix c satisfies the following characterization conditions
Λ\binom{l_1, l_2, \dots, l_n}{j_1, j_2, \dots, j_n}\, C\binom{j_1, j_2, \dots, j_n}{l_1, l_2, \dots, l_n} ≥ 0,    (4.10)
for any 1 ≤ n ≤ \min\{N_a, N_b\} and all minors, i.e., any choice of 1 ≤ j_1 < j_2 < \dots < j_n ≤ N_a and 1 ≤ l_1 < l_2 < \dots < l_n ≤ N_b. in order to get a nontrivial potential under substitution of τ_1 in (4.2) it is then necessary to impose that at least one of the inequalities in (4.10) is strict.
now, since any submatrix of a cauchy matrix Λ is itself a cauchy matrix, for an arbitrary minor of Λ we have
Λ\binom{l_1, l_2, \dots, l_p}{j_1, j_2, \dots, j_p} = \frac{\prod_{1 ≤ i < m ≤ p} (a_{j_i} - a_{j_m})(b_{l_m} - b_{l_i})}{\prod_{i,m=1}^{p} (a_{j_i} - b_{l_m})}.    (4.11)
then, we deduce that there are orderings of the parameters a's and b's for which all minors of the matrix Λ are positive. for instance, if
a_1 < a_2 < \dots < a_{N_a-1} < a_{N_a} < b_1 < b_2 < \dots < b_{N_b-1} < b_{N_b},    (4.12)
the regularity conditions (4.10) become
C\binom{j_1, j_2, \dots, j_n}{l_1, l_2, \dots, l_n} ≥ 0,    (4.13)
i.e., all minors of the matrix C must be nonnegative. such matrices are called totally nonnegative matrices [34]. obviously one obtains the same result by using the first equality in (4.3), and the binet–cauchy formula enables one to check directly that the two determinants in (4.3) are equal.
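a quick numerical illustration of the cauchy-minor formula (4.11) with an ordering of the type (4.12) (a sketch in python/numpy; the chosen submatrix and parameter values are arbitrary):

```python
# check of the explicit Cauchy-minor formula (4.11) on a randomly chosen 3x3 minor
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
a = np.sort(rng.uniform(0.0, 1.0, 4))        # a_1 < ... < a_{Na}
b = np.sort(rng.uniform(2.0, 3.0, 5))        # b_1 < ... < b_{Nb}, ordering as in (4.12)
Lam = 1.0/(a[None, :] - b[:, None])          # Lambda_{lj} = 1/(a_j - b_l)

rows, cols = [0, 2, 4], [0, 1, 3]            # the l's and j's of the minor (0-based)
minor = np.linalg.det(Lam[np.ix_(rows, cols)])

aj, bl = a[cols], b[rows]
num = np.prod([(aj[i] - aj[m])*(bl[m] - bl[i])
               for i, m in combinations(range(3), 2)])
den = np.prod([aj[i] - bl[m] for i in range(3) for m in range(3)])
print(np.isclose(minor, num/den))            # True
```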
if we consider the multi-time multisoliton solution of the entire hierarchy related
to the kpii equation obtained by the miwa shift (see [31]), as reported in (6.22), we
get again an expansion of the form (4.9), where the exponents are now independent
and we conclude that the regularity condition (4.10) is also necessary.
5 equivalence with the τ-function representation
5.1 determinant representations for the potential and jost solutions
in [27] we derived the transformed potential u(x) (see (4.2) and (4.3)) and the jost
solutions of the direct and dual problems (2.2) related to this potential. the corre-
sponding normalized jost solutions are expressed as
χ(x, k) = 1 + \sum_{j=1}^{N_a} \sum_{l,l'=1}^{N_b} e^{A_j(x)}\, C_{jl'}\, (E_{N_b} + F(x) C)^{-1}_{l'l}\, \frac{e^{-B_l(x)}}{b_l + i k}
        ≡ 1 + \sum_{j,j'=1}^{N_a} \sum_{l=1}^{N_b} e^{A_j(x)}\, (E_{N_a} + C F(x))^{-1}_{jj'}\, C_{j'l}\, \frac{e^{-B_l(x)}}{b_l + i k},    (5.1a)
ξ(x, k) = 1 - \sum_{j=1}^{N_a} \sum_{l,l'=1}^{N_b} \frac{e^{A_j(x)}}{a_j + i k}\, C_{jl}\, (E_{N_b} + F(x) C)^{-1}_{ll'}\, e^{-B_{l'}(x)}
        ≡ 1 - \sum_{j,j'=1}^{N_a} \sum_{l=1}^{N_b} \frac{e^{A_j(x)}}{a_j + i k}\, (E_{N_a} + C F(x))^{-1}_{jj'}\, C_{j'l}\, e^{-B_l(x)}.    (5.1b)
in analogy to (4.3), all relations are given in two equivalent forms. while it is enough
to keep only one of them, we continue to consider both of them to highlight the
symmetry property of the whole construction with respect to the numbers na and nb,
which play the role of topological charges.
in order to obtain a representation of
(4.3) and (5.1) in terms of τ-functions
we need to perform some simple algebraic operations. first, thanks to the standard
identity for the determinant of a bordered matrix, we can rewrite (5.1) as
χ(x, k) = \frac{τ_{1,χ}(x, k)}{τ_1(x)},    ξ(x, k) = \frac{τ_{1,ξ}(x, k)}{τ_1(x)},    (5.2)
where τ_1 is given in (4.3) and
τ_{1,χ}(x, k) = \det\begin{pmatrix} E_{N_b} + F(x) C & -\dfrac{e^{-B_*(x)}}{b_* + i k} \\ \sum_{j=1}^{N_a} e^{A_j(x)} C_{j*} & 1 \end{pmatrix}    (5.3a)
            ≡ \det\begin{pmatrix} E_{N_a} + C F(x) & -\sum_{l=1}^{N_b} C_{*l}\, \dfrac{e^{-B_l(x)}}{b_l + i k} \\ e^{A_*(x)} & 1 \end{pmatrix},    (5.3b)
τ_{1,ξ}(x, k) = \det\begin{pmatrix} E_{N_b} + F(x) C & e^{-B_*(x)} \\ \sum_{j=1}^{N_a} \dfrac{e^{A_j(x)}}{a_j + i k}\, C_{j*} & 1 \end{pmatrix}    (5.3c)
            ≡ \det\begin{pmatrix} E_{N_a} + C F(x) & \sum_{l=1}^{N_b} C_{*l}\, e^{-B_l(x)} \\ \dfrac{e^{A_*(x)}}{a_* + i k} & 1 \end{pmatrix}.    (5.3d)
in the above formulas, in the row and column bordering the matrices E_{N_b} + F C and E_{N_a} + C F we use the subscript * to denote an index running, respectively, from 1 to N_b and from 1 to N_a. let us notice that the function τ_1 in (4.3) can also be obtained as a limiting value, i.e.,
τ_1(x) = \lim_{k→∞} τ_{1,χ}(x, k) = \lim_{k→∞} τ_{1,ξ}(x, k).    (5.4)
using (2.3) to go back from (5.2) and (5.3) to the jost solutions φ(x, k) and ψ(x, k), one can check that they have poles, respectively, at k = i b_l (l = 1, \dots, N_b) and k = i a_j (j = 1, \dots, N_a) with residues
φ_{b_l}(x) = \operatorname*{res}_{k = i b_l} φ(x, k),    ψ_{a_j}(x) = \operatorname*{res}_{k = i a_j} ψ(x, k),    (5.5)
which obey the relations
φ_{b_l}(x) = -i \sum_{j=1}^{N_a} φ(x, i a_j)\, C_{jl},    ψ_{a_j}(x) = i \sum_{l=1}^{N_b} C_{jl}\, ψ(x, i b_l).    (5.6)
notice that these equations, together with the requirement of analyticity and the
normalization condition (2.10), allow one to reconstruct the normalized jost solutions
in (5.2) and (5.3) and, then, via (2.6) the potential u(x) in (4.2) and (4.3).
next, we rewrite (5.3) as
τ_{1,χ}(x, k) = \left( \prod_{l=1}^{N_b} \frac{e^{-B_l(x)}}{b_l + i k} \right) \det\begin{pmatrix} (b_* + i k)\,[E^{B}(x) + Λ E^{A}(x) C] & -1_* \\ \sum_{j=1}^{N_a} e^{A_j(x)} C_{j*} & 1 \end{pmatrix}
            ≡ \left( \prod_{j=1}^{N_a} e^{A_j(x)} \right) \det\begin{pmatrix} E^{-A}(x) + C E^{-B}(x) Λ & -\sum_{l=1}^{N_b} C_{*l}\, \dfrac{e^{-B_l(x)}}{b_l + i k} \\ 1_* & 1 \end{pmatrix},
τ_{1,ξ}(x, k) = \left( \prod_{l=1}^{N_b} e^{-B_l(x)} \right) \det\begin{pmatrix} E^{B}(x) + Λ E^{A}(x) C & 1_* \\ \sum_{j=1}^{N_a} \dfrac{e^{A_j(x)}}{a_j + i k}\, C_{j*} & 1 \end{pmatrix}
            ≡ \left( \prod_{j=1}^{N_a} \frac{e^{A_j(x)}}{a_j + i k} \right) \det\begin{pmatrix} [E^{-A}(x) + C E^{-B}(x) Λ]\,(a_* + i k) & \sum_{l=1}^{N_b} C_{*l}\, e^{-B_l(x)} \\ 1_* & 1 \end{pmatrix}.
by elementary transformations of the matrices on the right-hand sides, we can reduce
them to a form where rows and columns with all elements equal to 1 or −1 are
transformed into rows and columns with all elements equal to 0 except 1 at the last
place.
in order to present the result of these transformations we introduce
τ_{2,χ}(x, k) = \left( \prod_{l=1}^{N_b} e^{B_l(x)} (b_l + i k) \right) τ_{1,χ}(x, k),    (5.7a)
τ'_{2,χ}(x, k) = \left( \prod_{j=1}^{N_a} \frac{e^{-A_j(x)}}{a_j + i k} \right) τ_{1,χ}(x, k),    (5.7b)
τ_{2,ξ}(x, k) = \left( \prod_{l=1}^{N_b} \frac{e^{B_l(x)}}{b_l + i k} \right) τ_{1,ξ}(x, k),    (5.7c)
τ'_{2,ξ}(x, k) = \left( \prod_{j=1}^{N_a} e^{-A_j(x)} (a_j + i k) \right) τ_{1,ξ}(x, k).    (5.7d)
then,
τ_{2,χ}(x, k) = \det\left( δ_{ll'} (b_l + i k)\, e^{B_l(x)} + \sum_{j=1}^{N_a} \frac{a_j + i k}{a_j - b_l}\, e^{A_j(x)} C_{jl'} \right)_{l,l'=1}^{N_b},    (5.8a)
τ'_{2,χ}(x, k) = \det\left( δ_{jj'}\, \frac{e^{-A_j(x)}}{a_j + i k} + \sum_{l=1}^{N_b} \frac{C_{jl}\, e^{-B_l(x)}}{(a_{j'} - b_l)(b_l + i k)} \right)_{j,j'=1}^{N_a},    (5.8b)
τ_{2,ξ}(x, k) = \det\left( δ_{ll'}\, \frac{e^{B_l(x)}}{b_l + i k} + \sum_{j=1}^{N_a} \frac{e^{A_j(x)} C_{jl'}}{(a_j - b_l)(a_j + i k)} \right)_{l,l'=1}^{N_b},    (5.8c)
τ'_{2,ξ}(x, k) = \det\left( δ_{jj'} (a_j + i k)\, e^{-A_j(x)} + \sum_{l=1}^{N_b} \frac{C_{jl} (b_l + i k)}{a_{j'} - b_l}\, e^{-B_l(x)} \right)_{j,j'=1}^{N_a}.    (5.8d)
let us now introduce the limits
τ_2(x) = \lim_{k→∞} (i k)^{-N_b}\, τ_{2,χ}(x, k) = \lim_{k→∞} (i k)^{N_b}\, τ_{2,ξ}(x, k),
τ'_2(x) = \lim_{k→∞} (i k)^{N_a}\, τ'_{2,χ}(x, k) = \lim_{k→∞} (i k)^{-N_a}\, τ'_{2,ξ}(x, k),
that, thanks to (5.8), have the following explicit expressions
τ_2(x) = \det\left( δ_{ll'}\, e^{B_l(x)} + \sum_{j=1}^{N_a} \frac{e^{A_j(x)} C_{jl'}}{a_j - b_l} \right)_{l,l'=1}^{N_b},    (5.9a)
τ'_2(x) = \det\left( δ_{jj'}\, e^{-A_j(x)} + \sum_{l=1}^{N_b} \frac{C_{jl}\, e^{-B_l(x)}}{a_{j'} - b_l} \right)_{j,j'=1}^{N_a}.    (5.9b)
by (5.4) we have
τ_2(x) = \left( \prod_{l=1}^{N_b} e^{B_l(x)} \right) τ_1(x),    τ'_2(x) = \left( \prod_{j=1}^{N_a} e^{-A_j(x)} \right) τ_1(x).    (5.10)
both functions in (5.9) are equivalent, in the sense that they generate the same potential
u(x) = -2 \partial_{x_1}^2 \log τ_2(x) = -2 \partial_{x_1}^2 \log τ'_2(x).    (5.11)
for the functions χ(x, k) and ξ(x, k) (see (2.3)) we get by (5.2) and (5.7)
χ(x, k) = \left( \prod_{l=1}^{N_b} (b_l + i k)^{-1} \right) \frac{τ_{2,χ}(x, k)}{τ_2(x)} ≡ \left( \prod_{j=1}^{N_a} (a_j + i k) \right) \frac{τ'_{2,χ}(x, k)}{τ'_2(x)},    (5.12a)
ξ(x, k) = \left( \prod_{l=1}^{N_b} (b_l + i k) \right) \frac{τ_{2,ξ}(x, k)}{τ_2(x)} ≡ \left( \prod_{j=1}^{N_a} (a_j + i k)^{-1} \right) \frac{τ'_{2,ξ}(x, k)}{τ'_2(x)}.    (5.12b)
notice that the functions in (5.8) can be obtained from the functions in (5.9) by performing the following substitutions
τ_2(x) → τ_{2,χ}(x, k)  replacing  e^{A_j} → e^{A_j} (a_j + i k),  e^{B_l} → e^{B_l} (b_l + i k),    (5.13a)
τ'_2(x) → τ'_{2,χ}(x, k)  replacing  e^{A_j} → e^{A_j} (a_j + i k),  e^{B_l} → e^{B_l} (b_l + i k),    (5.13b)
τ_2(x) → τ_{2,ξ}(x, k)  replacing  e^{A_j} → \frac{e^{A_j}}{a_j + i k},  e^{B_l} → \frac{e^{B_l}}{b_l + i k},    (5.13c)
τ'_2(x) → τ'_{2,ξ}(x, k)  replacing  e^{A_j} → \frac{e^{A_j}}{a_j + i k},  e^{B_l} → \frac{e^{B_l}}{b_l + i k}.    (5.13d)
these rules enable one to shorten significantly the list of formulas given below (see also remark 6.1).
remark 5.1 notice that under the transformation x → -x, N_a ↔ N_b and a ↔ b the function χ is transformed into ξ and vice versa.
5.2 symmetric representations for the potential and the comparison with the τ-function approach
here we prove that the expression (5.11) for the potential is equivalent to that one
obtained by means of the τ-function approach in the series of papers quoted in the
introduction and surveyed in [25].
we already mentioned that the double representations for the potential and the
jost solutions derived above highlight the symmetric role played by the parameters
a_j and b_l. to better exploit this fact we introduce the N_a + N_b (real) parameters
\{κ_1, \dots, κ_{N_a+N_b}\} = \{a_1, \dots, a_{N_a}, b_1, \dots, b_{N_b}\},    (5.14)
and, by analogy with (3.4), we introduce
K_n(x) = κ_n x_1 + κ_n^2 x_2,    n = 1, \dots, N_a + N_b,    (5.15)
so that K_n(x) = A_n(x) for n = 1, \dots, N_a, and K_n(x) = B_{n-N_a}(x) for n = N_a+1, \dots, N_a+N_b.
according to (3.7) the time dependence is taken into account simply by adding a term -4 κ_n^3 t to the rhs of (5.15).
let d denote an N_a × N_b real matrix with elements given in terms of the elements of the matrix C as
d_{jl} = \prod_{l'=1}^{N_b} (a_j - b_{l'})^{-1}\, C_{jl} \prod_{l'=1,\, l' \neq l}^{N_b} (b_l - b_{l'}),    j = 1, \dots, N_a,    l = 1, \dots, N_b.    (5.16)
let us also introduce the constant (N_a + N_b) × N_b and N_a × (N_a + N_b) matrices D and D' with, respectively, the following block structures
D = \begin{pmatrix} d \\ E_{N_b} \end{pmatrix},    D' = (E_{N_a}, -d),    (5.17)
and the constant, diagonal, real (N_a + N_b) × (N_a + N_b) matrix
Γ = \operatorname{diag}\left\{ \prod_{n'=1,\, n' \neq n}^{N_a+N_b} (κ_n - κ_{n'})^{-1},\ n = 1, \dots, N_a + N_b \right\}.    (5.18)
let us prove now that with one more rescaling of τ_2(x) and τ'_2(x):
τ(x) = \left( \prod_{1 ≤ l < l' ≤ N_b} (b_{l'} - b_l) \right) τ_2(x),    (5.19a)
τ'(x) = \left( \prod_{1 ≤ j < j' ≤ N_a} (a_j - a_{j'})^{-1} \prod_{j=1}^{N_a} \prod_{l=1}^{N_b} (a_j - b_l)^{-1} \right) τ'_2(x),    (5.19b)
we get instead of (5.9)
τ(x) = \frac{\det\big( K\, E^{K}(x)\, D \big)}{\prod_{1 ≤ l < l' ≤ N_b} (b_l - b_{l'})},    τ'(x) = \frac{\det\big( D'\, E^{-K}(x)\, Γ\, K' \big)}{\prod_{1 ≤ j < j' ≤ N_a} (a_j - a_{j'})},    (5.20)
where in analogy with (4.4) we introduced the diagonal (N_a + N_b) × (N_a + N_b) matrix
E^{K}(x) = \operatorname{diag}\{e^{K_n(x)}\}_{n=1}^{N_a+N_b},    (5.21)
and the new constant N_b × (N_a + N_b) and (N_a + N_b) × N_a matrices
K = \|K_{ln}\|,    K_{ln} = \prod_{l'=1,\, l' \neq l}^{N_b} (κ_n - b_{l'}),    l = 1, \dots, N_b,    n = 1, \dots, N_a + N_b,    (5.22)
K' = \|K'_{nj}\|,    K'_{nj} = \prod_{j'=1,\, j' \neq j}^{N_a} (a_{j'} - κ_n),    n = 1, \dots, N_a + N_b,    j = 1, \dots, N_a.    (5.23)
notice that they have a block structure, since the submatrices \{K_{l, N_a+l'}\} and \{K'_{j', j}\} (l, l' = 1, \dots, N_b, j, j' = 1, \dots, N_a) are diagonal. we also emphasize that, since N_a, N_b ≥ 1, as stated in (4.1), the constant matrices K, K', D and D' are not square and, therefore, the determinants cannot be decomposed into products of determinants, which would imply u(x) ≡ 0.
in order to prove (5.20), let us notice that
\big( K E^{K}(x) D \big)_{l,l'=1}^{N_b} ≡ \left( \sum_{n=1}^{N_a+N_b} \prod_{l''=1,\, l'' \neq l}^{N_b} (κ_n - b_{l''})\, e^{K_n(x)}\, D_{nl'} \right)_{l,l'=1}^{N_b}
= \sum_{j=1}^{N_a} e^{A_j(x)}\, d_{jl'} \prod_{l''=1,\, l'' \neq l}^{N_b} (a_j - b_{l''}) + \sum_{l'''=1}^{N_b} e^{B_{l'''}(x)}\, δ_{l''',l'} \prod_{l''=1,\, l'' \neq l}^{N_b} (b_{l'''} - b_{l''}),
where (5.17), (5.21) and (5.22) were used. all terms in the last sum vanish except for l''' = l. thus by (5.16) we get
\det\big( K E^{K}(x) D \big)_{l,l'=1}^{N_b} = (-1)^{N_b(N_b-1)/2} \prod_{1 ≤ l < l' ≤ N_b} (b_{l'} - b_l)^2\; \det\left( δ_{ll'}\, e^{B_l(x)} + \sum_{j=1}^{N_a} \frac{e^{A_j(x)} C_{jl'}}{a_j - b_l} \right)_{l,l'=1}^{N_b}.
the determinant in the rhs coincides with the determinant in (5.9a), which proves the
first equality in (5.20). the second one is reduced to (5.9b) by analogy, where also
the explicit expression for the matrix Γ in (5.18) must be taken into account.
by considering convenient linear operations on the rows and columns of the matrices in (5.20) one can get more symmetric expressions of τ(x) and τ'(x) which involve only κ's. let us consider the first equality in (5.20). we can subtract the last row of the matrix K from all the preceding rows. then the l-th (l ≠ N_b) row will read as (b_l - b_{N_b}) \prod_{l'=1,\, l' \neq l}^{N_b-1} (κ_n - b_{l'}). each factor b_l - b_{N_b}, l = 1, \dots, N_b - 1, can then be extracted from the determinant, and we repeat the same procedure with the last but one row, and so on up to the second one. the first row will then have all 1's, and the second row will have entries κ_n - b_1 for n = 1, \dots, N_a + N_b. then one can easily shift κ_n - b_1 → κ_n by using the first row. a similar transformation can be used to transform the generic element of the subsequent rows into κ_n^{j}. finally, instead of (5.20) one obtains
τ(x) = \det\big( V\, E^{K}(x)\, D \big),    τ'(x) = \det\big( D'\, E^{-K}(x)\, Γ\, V' \big),    (5.24)
where by V and V' we denote the "incomplete vandermonde matrices," i.e., the N_b × (N_a + N_b) and (N_a + N_b) × N_a matrices given as
V = \begin{pmatrix} 1 & \dots & 1 \\ \vdots & & \vdots \\ κ_1^{N_b-1} & \dots & κ_{N_a+N_b}^{N_b-1} \end{pmatrix},    V' = \begin{pmatrix} 1 & \dots & κ_1^{N_a-1} \\ \vdots & & \vdots \\ 1 & \dots & κ_{N_a+N_b}^{N_a-1} \end{pmatrix}.    (5.25)
remark 5.2 in the expressions (5.20) all objects with the exception of K_n(x) are
invariant with respect to an overall shift of all parameters a's and b's (or, equivalently,
all κ's) by the same constant, while in (5.25) this invariance is not obvious. in fact,
following the same procedure used for transforming (5.20) into (5.25) one can get
matrices v and v ′ constructed from powers of κn + z instead of κn, where z is totally
arbitrary.
the expressions for the potential are invariant with respect to the rescalings (5.10) and (5.19) and, consequently, we have by (4.2)
u(x) = -2 \partial_{x_1}^2 \log τ(x) = -2 \partial_{x_1}^2 \log τ'(x).    (5.26)
this twofold expression for the potential also follows directly since, thanks to (5.10), (5.14), (5.15) and (5.19),
τ(x) = (-1)^{N_a N_b + N_a(N_a-1)/2} \left( \prod_{n=1}^{N_a+N_b} e^{K_n(x)} \right) V(κ_1, \dots, κ_{N_a+N_b})\, τ'(x),    (5.27)
where V denotes the vandermonde determinant
V(κ_1, \dots, κ_{N_a+N_b}) = \det\begin{pmatrix} 1 & \dots & 1 \\ \vdots & & \vdots \\ κ_1^{N_a+N_b-1} & \dots & κ_{N_a+N_b}^{N_a+N_b-1} \end{pmatrix} ≡ \prod_{1 ≤ m < n ≤ N_a+N_b} (κ_n - κ_m).    (5.28)
thus, we have shown the equivalence of the representations (4.2) and (5.11) for the potential with those expressed as determinants of products of three and four matrices given in (5.24). these representations coincide with the representations obtained using the τ-function approach in [21] and studied in detail in [22–25]. notice that the special block form of the matrices D and D' in (5.17), in general, is not preserved when in the determinants (5.24) we perform a renumbering of the parameters κ_n, for instance in order to have κ_1 < κ_2 < \dots < κ_{N_a+N_b}. this problem will be studied in detail in section 6.3.
5.3 explicit representation for the tau-functions
in order to study the behaviour of the potential and jost solutions, which we plan to do in a forthcoming publication, it is convenient to derive explicit representations for the determinants involved, see also [20–25]. these representations involve the maximal minors of the matrices D and D' (see (5.17)), for which we use the simplified notations:
D(n_1, \dots, n_{N_b}) = D\binom{n_1, n_2, \dots, n_{N_b}}{1, 2, \dots, N_b},    (5.29)
i.e., the determinant of the N_b × N_b matrix that consists of the rows \{n_1, \dots, n_{N_b}\} of the matrix D and all its columns, and
D'(n_1, \dots, n_{N_a}) = D'\binom{1, 2, \dots, N_a}{n_1, n_2, \dots, n_{N_a}},    (5.30)
i.e., the determinant of the N_a × N_a matrix that consists of all rows of the matrix D' and the columns \{n_1, \dots, n_{N_a}\}. then, by using the binet–cauchy formula for the determinant of a product of matrices and notation (5.28), we can rewrite relations (5.24) in the form
τ(x) = \sum_{1 ≤ n_1 < n_2 < \dots < n_{N_b} ≤ N_a+N_b} f_{n_1,\dots,n_{N_b}} \prod_{l=1}^{N_b} e^{K_{n_l}(x)},    (5.31a)
τ'(x) = \sum_{1 ≤ n_1 < n_2 < \dots < n_{N_a} ≤ N_a+N_b} f'_{n_1,\dots,n_{N_a}} \prod_{j=1}^{N_a} e^{-K_{n_j}(x)},    (5.31b)
where
f_{n_1,n_2,\dots,n_{N_b}} = V(κ_{n_1}, \dots, κ_{n_{N_b}})\, D(n_1, \dots, n_{N_b}),    (5.32a)
f'_{n_1,n_2,\dots,n_{N_a}} = V(κ_{n_1}, \dots, κ_{n_{N_a}})\, D'(n_1, \dots, n_{N_a}) \prod_{j=1}^{N_a} Γ_{n_j}.    (5.32b)
from (5.27) it follows that the coefficients f and f' in the two expansions (5.31b) and (5.31a) are related by the equation
f'_{n_1,\dots,n_{N_a}} = (-1)^{N_a N_b + N_a(N_a-1)/2}\, \frac{f_{\tilde n_1, \tilde n_2, \dots, \tilde n_{N_b}}}{V(κ_1, \dots, κ_{N_a+N_b})},    (5.33)
where \{n_1, \dots, n_{N_a}\} and \{\tilde n_1, \tilde n_2, \dots, \tilde n_{N_b}\} are two disjoint ordered subsets of the set of numbers running from 1 to N_a + N_b.
let us mention that in our construction the two equivalent representations for the
τ-functions in (5.27) were obtained as a consequence of the two equalities in (2.6),
while, of course, they can be proved directly by means of (5.31) and (5.33). see [24],
where this property is called duality.
remark 5.3 notice that, in agreement with (5.32) and (5.33), we have
\prod_{j=1}^{N_a} Γ_{n_j}\, V(κ_{n_1}, \dots, κ_{n_{N_a}}) = \det π\; (-1)^{N_a N_b + N_a(N_a-1)/2}\, \frac{V(κ_{\tilde n_1}, \dots, κ_{\tilde n_{N_b}})}{V(κ_1, \dots, κ_{N_a+N_b})},    (5.34)
and
D'(n_1, \dots, n_{N_a}) = \det π\, D(\tilde n_1, \dots, \tilde n_{N_b}),    (5.35)
where π is the matrix performing the permutation from (κ_{n_1}, \dots, κ_{n_{N_a}}, κ_{\tilde n_1}, \dots, κ_{\tilde n_{N_b}}) to (κ_1, \dots, κ_{N_a+N_b}).
6 jost solutions and invariance properties
6.1 properties of the matrices D and D'
the matrices D and D' introduced in (5.17) obey rather interesting properties. as follows directly from the definition, they are orthogonal in the sense that
D' D = 0,    (6.1)
where the zero in the rhs is an N_a × N_b matrix. moreover, since the matrices
D^† D = E_{N_b} + d^† d,    D' D'^† = E_{N_a} + d d^†,    (6.2)
where † denotes hermitian conjugation of matrices (in fact, transposition here), are invertible, the matrices (see [33])
D^{(-1)} = (E_{N_b} + d^† d)^{-1} D^†,    (6.3)
D'^{(-1)} = D'^† (E_{N_a} + d d^†)^{-1},    (6.4)
are, respectively, the left inverse of the matrix D and the right inverse of the matrix D', i.e.,
D^{(-1)} D = E_{N_b},    D' D'^{(-1)} = E_{N_a}.    (6.5)
products of these matrices in the opposite order give the real self-adjoint (N_a + N_b) × (N_a + N_b) matrices
P = D D^{(-1)} = D (E_{N_b} + d^† d)^{-1} D^†,    (6.6)
P' = D'^{(-1)} D' = D'^† (E_{N_a} + d d^†)^{-1} D',    (6.7)
which are orthogonal projectors, i.e.,
P^2 = P,    (P')^2 = P',    P P' = 0 = P' P,    (6.8)
and complementary in the sense that
P + P' = E_{N_a+N_b}.    (6.9)
orthogonality of the projectors follows from (6.1), and the last equality from obvious relations of the kind (E_{N_b} + d^† d)^{-1} d^† = d^† (E_{N_a} + d d^†)^{-1}.
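a short numerical sketch of the properties (6.1), (6.5), (6.8) and (6.9) for a randomly chosen small matrix d (python/numpy, illustrative sizes, not part of the original text):

```python
# numerical check of (6.1), (6.5), (6.8), (6.9) for the block matrices D, D' of (5.17)
import numpy as np

rng = np.random.default_rng(5)
Na, Nb = 2, 3
d = rng.normal(size=(Na, Nb))
D = np.vstack([d, np.eye(Nb)])                      # (Na+Nb) x Nb
Dp = np.hstack([np.eye(Na), -d])                    # Na x (Na+Nb)

Dinv = np.linalg.inv(np.eye(Nb) + d.T @ d) @ D.T    # left inverse (6.3)
Dpinv = Dp.T @ np.linalg.inv(np.eye(Na) + d @ d.T)  # right inverse (6.4)
P, Pp = D @ Dinv, Dpinv @ Dp                        # projectors (6.6), (6.7)

checks = [
    np.allclose(Dp @ D, 0),                         # (6.1)
    np.allclose(Dinv @ D, np.eye(Nb)),              # (6.5)
    np.allclose(Dp @ Dpinv, np.eye(Na)),            # (6.5)
    np.allclose(P @ P, P) and np.allclose(Pp @ Pp, Pp) and np.allclose(P @ Pp, 0),  # (6.8)
    np.allclose(P + Pp, np.eye(Na + Nb)),           # (6.9)
]
print(all(checks))                                  # True
```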
6.2 symmetric representations for the jost solutions
in order to get a τ-representation for the jost solutions, we use (5.12) and notice that
rescaling (5.19) does not modify the substitution rules given in (5.13). in terms of
notations (5.14), (5.15) and (5.21) these rules read as
τ(x) → τ_χ(x, k)  replacing  E^{K} → E^{K} (κ + i k),    (6.10a)
τ'(x) → τ'_χ(x, k)  replacing  E^{K} → E^{K} (κ + i k),    (6.10b)
τ(x) → τ_ξ(x, k)  replacing  E^{K} → \frac{E^{K}}{κ + i k},    (6.10c)
τ'(x) → τ'_ξ(x, k)  replacing  E^{K} → \frac{E^{K}}{κ + i k},    (6.10d)
where κ + i k denotes the diagonal (N_a + N_b) × (N_a + N_b) matrix
κ + i k = \operatorname{diag}\{κ_1 + i k, \dots, κ_{N_a+N_b} + i k\}    (6.11)
and analogously for the matrix (κ + i k)^{-1}. explicitly, these replacements give (see (5.24))
τ_χ(x, k) = \det\big( V\, E^{K}(x)\, (κ + i k)\, D \big),    (6.12a)
τ'_χ(x, k) = \det\big( D'\, E^{-K}(x)\, (κ + i k)^{-1}\, Γ\, V' \big),    (6.12b)
τ_ξ(x, k) = \det\big( V\, E^{K}(x)\, (κ + i k)^{-1}\, D \big),    (6.12c)
τ'_ξ(x, k) = \det\big( D'\, E^{-K}(x)\, (κ + i k)\, Γ\, V' \big),    (6.12d)
thus, by (5.12) we have
\left( \prod_{l=1}^{N_b} (b_l + i k) \right) χ(x, k) = \frac{τ_χ(x, k)}{τ(x)} ≡ \left( \prod_{n=1}^{N_a+N_b} (κ_n + i k) \right) \frac{τ'_χ(x, k)}{τ'(x)},    (6.13a)
\left( \prod_{l=1}^{N_b} (b_l + i k)^{-1} \right) ξ(x, k) = \frac{τ_ξ(x, k)}{τ(x)} ≡ \left( \prod_{n=1}^{N_a+N_b} (κ_n + i k)^{-1} \right) \frac{τ'_ξ(x, k)}{τ'(x)}.    (6.13b)
the jost solutions themselves are then given by (2.3) and, in order to simplify their analyticity properties, it is convenient to renormalize them in the following way
φ(x, k) → \frac{φ(x, k)}{\prod_{l=1}^{N_b} (b_l + i k)},    ψ(x, k) → \left( \prod_{l=1}^{N_b} (b_l + i k) \right) ψ(x, k).    (6.14)
then, to preserve the relations given in (2.3) we also normalize χ(x, k) and ξ(x, k) accordingly, so that instead of (6.13) we have
χ(x, k) = \frac{τ_χ(x, k)}{τ(x)} ≡ \left( \prod_{n=1}^{N_a+N_b} (κ_n + i k) \right) \frac{τ'_χ(x, k)}{τ'(x)},    (6.15a)
ξ(x, k) = \frac{τ_ξ(x, k)}{τ(x)} ≡ \left( \prod_{n=1}^{N_a+N_b} (κ_n + i k)^{-1} \right) \frac{τ'_ξ(x, k)}{τ'(x)}.    (6.15b)
now χ(x, k) is a polynomial in k of order k^{N_b} and ξ(x, k) is a meromorphic function of k that becomes a polynomial of order k^{N_a} after multiplication by \prod_{n=1}^{N_a+N_b} (κ_n + i k). in other words, now φ(x, k) is an entire function of k and ψ(x, k) is meromorphic with poles at the points k = i κ_n, n = 1, \dots, N_a + N_b. introducing the discrete values of φ(x, k) at these points as an (N_a + N_b)-row
φ(x, i κ) = \{φ(x, i κ_1), \dots, φ(x, i κ_{N_a+N_b})\},    (6.16)
and the residues of ψ(x, k) at these points
ψ_{κ_n}(x) = \operatorname*{res}_{k = i κ_n} ψ(x, k),    (6.17)
as an (N_a + N_b)-column
ψ_κ(x) = \{ψ_{κ_1}(x), \dots, ψ_{κ_{N_a+N_b}}(x)\}^{T},    (6.18)
we get, thanks to (5.14), (5.16) and (5.17), that the relations (5.6) take the more symmetric form
φ(x, i κ)\, D = 0,    D'\, ψ_κ(x) = 0.    (6.19)
it is necessary to mention that after the renormalization (6.14) the asymptotic conditions (2.5) become
\lim_{k→∞} (i k)^{-N_b} χ(x, k) = 1,    \lim_{k→∞} (i k)^{N_b} ξ(x, k) = 1,    (6.20)
and the relations (2.6) take the form
u(x) = -2 \lim_{k→∞} (i k)^{-N_b+1} \partial_{x_1} χ(x, k) = 2 \lim_{k→∞} (i k)^{N_b+1} \partial_{x_1} ξ(x, k).    (6.21)
finally, let us point out that by introducing an infinite set of times and, precisely, replacing K_n(x) with the formal series (see [31])
K_n(t_1, t_2, \dots) = \sum_{j=1}^{∞} κ_n^{j}\, t_j,    (6.22)
appropriate choices of the times t_j provide the multisoliton solutions of any nonlinear evolution equation in the hierarchy related to the kpii equation, as well as the corresponding jost solutions. in particular, the multisoliton solutions of the kpii equation are obtained by choosing t_1 = x_1, t_2 = x_2, t_3 = -4t and all higher times equal to zero.
remark 6.1 we showed in (6.10) that the jost solutions can be obtained by means of transformations that are equivalent to the formal miwa shift used in the construction of the baker–akhiezer functions in terms of the τ-functions. in fact, substitutions (6.10b) and (6.10a) can be obtained (up to an unessential factor i k) by considering the multi-time τ-function obtained by replacing K_n with the infinite formal series in (6.22) and, then, by shifting t_j → t_j - \frac{1}{j}\left(\frac{i}{k}\right)^{j}. similarly, the substitutions (6.10d) and (6.10c) are obtained (again up to an unessential factor -i/k) by the shifts t_j → t_j + \frac{1}{j}\left(\frac{i}{k}\right)^{j}. then, non-formal jost solutions of the heat equation with a potential being a solution of kpii are derived by choosing t_1 = x_1, t_2 = x_2, t_3 = -4t and all higher times equal to zero.
6.3 invariance properties of the multisoliton potential
in some cases it is useful to rename the spectral parameters κ_n → \tilde κ_n, for instance in such a way that the renamed parameters \tilde κ_n are ordered as follows
\tilde κ_1 < \tilde κ_2 < \dots < \tilde κ_{N_a+N_b}.    (6.23)
this permutation can be performed by means of an (N_a + N_b) × (N_a + N_b) matrix π such that
(\tilde κ_1, \dots, \tilde κ_{N_a+N_b}) = (κ_1, \dots, κ_{N_a+N_b})\, π.    (6.24)
this matrix is unitary,
π^† = π^{-1},    (6.25)
and all its rows and columns have one element equal to 1 and all others equal to 0. it is convenient to write this matrix in the block form
π = \begin{pmatrix} π_{11} & π_{12} \\ π_{21} & π_{22} \end{pmatrix},    (6.26)
where π_{11} is an N_a × N_a matrix, π_{22} an N_b × N_b matrix, π_{12} an N_a × N_b matrix and π_{21} an N_b × N_a matrix.
then, the representations (5.24) under the transformation (6.24) keep the same form if the matrices D and D' are transformed as follows
D → \tilde D = π D = \begin{pmatrix} D_1 \\ D_2 \end{pmatrix},    D' → \tilde D' = D' π^† = (D'_1, -D'_2),    (6.27)
where, from (5.17), we have
D_1 = π_{11} d + π_{12},    D_2 = π_{21} d + π_{22},    (6.28)
D'_1 = π^†_{11} - d\, π^†_{12},    -D'_2 = π^†_{21} - d\, π^†_{22}.    (6.29)
while the sizes of the blocks in (6.27) are the same as in (5.17), the block structure in general is different. nevertheless, thanks to the unitarity of the matrix π, relation (6.1) remains valid for the transformed matrices, which by (6.27) means that
D'_1 D_1 = D'_2 D_2.    (6.30)
also relations (6.19) are preserved for the transformed quantities, and transformed projectors \tilde P and \tilde P' can be built.
let us mention that the special block structure of the matrices D and D' in (5.17) determines them uniquely in correspondence with a given potential. one can relax this condition, without changing the potential, by multiplying the matrix D by any nonsingular N_b × N_b matrix from the right and the matrix D' by any nonsingular N_a × N_a matrix from the left. in fact, the determinants of these matrices cancel out in (5.26) and (6.13). vice versa, one can use this procedure for bringing \tilde D and \tilde D' back from the block structure (6.27) to the special block structure in (5.17). this can be done by using the matrices (D_2)^{-1} and (D'_1)^{-1}, if they exist, to perform the following transformations
\tilde D → \tilde D (D_2)^{-1} = \begin{pmatrix} D_1 (D_2)^{-1} \\ E_{N_b} \end{pmatrix},    \tilde D' → (D'_1)^{-1} \tilde D' = (E_{N_a}, -(D'_1)^{-1} D'_2),    (6.31)
and, then, by noticing that thanks to (6.30)
D_1 (D_2)^{-1} = (D'_1)^{-1} D'_2 = \tilde d.    (6.32)
taking into account that both representations in (5.24) are equivalent, we deduce that the matrices D_2 and D'_1 are invertible simultaneously. thus, we proved that in the case of a permutation π such that the matrix D_2 (or D'_1) is nonsingular, the permutation of the κ's is equivalent to a transformation of the matrix d to \tilde d or, correspondingly, by (5.16), to a transformation of the matrix C. this is always the case when the matrix π has a diagonal block structure, i.e., when π_{12} = π_{21} = 0. in this situation both matrices D_2 and D'_1 are invertible and one can use the above substitution. notice that in this case the permutation of the κ's does not mix the original parameters a's with b's (see (5.14)). in general, in making a permutation of the κ's, say, reducing them to the order given in (6.23), the block structure (5.17) is lost and we can only say that the τ-functions are given by (5.24) and (6.12), where both matrices D and D' have at least two nonzero maximal minors.
7 concluding remarks
here we described relations between different representations, existing in the literature,
of the multisoliton potentials of the heat operator and we derived forms of these
representations that will enable, in a forthcoming publication, a detailed study of
the asymptotic behaviour of the potentials themselves and the corresponding jost
solutions on the x-plane. we also presented various formulations of the conditions
that guarantee the regularity of the potential. nevertheless, the essential problem of determining necessary conditions for the regularity of the multisoliton potential is left open. in this context, let us mention the special interest of the specific subclass of potentials satisfying strict inequalities in (4.10), which by (5.31a) is equivalent to requiring that all f_{n_1,\dots,n_{N_b}} are of the same sign (analogously, by (5.31b), that all f'_{n_1,\dots,n_{N_a}} are of the same sign). these conditions identify fully resonant soliton solutions (cf. [20, 21]). when such conditions are imposed, all maximal minors of the matrices D and D' are different from zero, as follows from (5.32a) and (5.32b), and then, thanks to the invariance properties discussed above, we can always permute the parameters κ in any way, for instance as in (6.23), and at the same time deal with transformed matrices \tilde D and \tilde D', which have the special block structure as in (5.17).
acknowledgments
this work is supported in part by the grant rfbr # 08-01-00501, grant rfbr–ce
# 09-01-92433, scientific schools 795.2008.1, by the program of ras "mathematical
methods of the nonlinear dynamics," by infn and by consortium e.i.n.s.t.e.i.n.
akp thanks department of physics of the university of salento (lecce) for kind
hospitality.
references
[1] b. b. kadomtsev and v. i. petviashvili, "on the stability of solitary waves in
weakly dispersive media," sov. phys. dokl. 192 (1970) 539–541
[2] v. s. dryuma, "analytic solution of the two-dimensional korteweg–de vries
(kdv) equation," sov. jetp lett. 19 (1974) 387–388
[3] v. e. zakharov and a. b. shabat, "a scheme for integrating the non-linear equa-
tions of mathematical physics by the method of the inverse scattering problem,"
func. anal. appl. 8 (1974) 226–235
[4] m. j. ablowitz, d. bar yaacov and a. s. fokas, "on the inverse scattering transform for the kadomtsev–petviashvili equation," stud. appl. math. 69 (1983) 135–143
[5] v. g. lipovsky, "hamiltonian structure of the kadomtsev–petviashvili–ii equa-
tion in the class of decreasing cauchy data," funkts. anal. prilog. 20 (1986)
35-45
[6] m. v. wickerhauser, "inverse scattering for the heat operator and evolutions in
2+1 variables," commun. math. phys. 108 (1987) 67-89
[7] p. g. grinevich and s. p. novikov, "two-dimensional inverse scattering problem
at negative energy and generalized analytic functions. i. energy below a ground
state," func. anal. appl. 22 (1988) 19–27
[8] j. satsuma, "n-soliton solution of the two-dimensional korteweg-de vries equa-
tion," j. phys. soc. jap. 40 (1976) 286–290
[9] s. v. manakov, v. e. zakharov, l. a. bordag, a. r. its, and v. b. matveev,
"two-dimensional solitons of the kadomtsev-petviashvili equation and their in-
teraction," phys. lett. a 63(1977) 205–206
[10] b.g. konopelchenko, solitons in multidimensions: inverse spectral transform
method, world scientific pu. co., singapore (1993)
[11] n.c. freeman, j.j.c. nimmo, "soliton-solutions of the korteweg–de vries and
kadomstev–petviashvili equations: the wronskian technique," phys lett. 95a
(1983) 1–3
[12] v. b. matveev and m. a. salle, darboux transformations and solitons, springer,
berlin (1991)
[13] j.w. miles, "obliquely interacting solitary waves," j. fluid mech. 79 (1977)
157–169 and "resonantly interacting solitary waves," j. fluid mech. 79 (1977)
171–179
[14] v.e. zakharov, "shock waves propagated on solitons on the surface of a fluid,"
radiophysics and quantum electronics 29 (1986) 813-817
[15] m. boiti, f. pempinelli, a. k. pogrebkov and b. prinari, "study of some non-
decaying solutions for the heat equation" in nonlinearity, integrability and all
that. twenty years after needs'79, boiti m, martina l, pempinelli f, prinari
b and soliani g eds, pp 42–50, world scientific pu. co., singapore (2000)
[16] b. prinari, "on some nondecaying potentials and related jost solutions for the
heat conduction equation," inverse problems 16(2000) 589–603
[17] e. medina, "an n soliton resonance solution for the kp equation: interaction
with change of form and velocity," lett. math. phys. 62(2002) 91–99
[18] m. boiti, f. pempinelli, a. pogrebkov and b. prinari, "towards an inverse scat-
tering theory for non decaying potentials of the heat equation," inverse problems
17 (2001) 937–957
[19] m. sato, "soliton equations as dynamical systems on an infinite dimensional
grassmanian manifold," rims kokyuroku (kyoto university) 439(1981) 30-46
[20] g. biondini and y. kodama, "on a family of solutions of the kadomtsev–
petviashvili equation which also satisfy the toda lattice hierarchy," j. phys. a:
math. gen. 36 (2003) 10519–10536
[21] g. biondini and s. chakravarty, "soliton solutions for the kadomtsev-petviashvili
ii equation," j. math. phys.47, 033514 (2006) 1–26
[22] g. biondini and s. chakravarty, "elastic and inelastic line-soliton solutions of the
kadomtsev–petviashvili ii equation," math. comp. simul. 74 (2007) 237–250
[23] g. biondini, "elastic line-soliton interactions of the kadomtsev-petviashvili ii
equation," phys. rev. lett. 99, 064103 (2007) 1–4
[24] s. chakravarty and y. kodama, "classification of the line-soliton solutions of
kpii," j. phys. a. 41(2008) 275209
[25] s. chakravarty and y. kodama, "soliton solutions of the kp equation and ap-
plication to shallow water waves," stud. app. math. 123(2009) 83-151
[26] g. biondini, k.-i. maruno, m. oikawa, h. tsuji, "soliton interactions of the
kadomtsev–petviashvili equation and generation of large-amplitude water
waves," stud. app. math. 122 (2009) 377-394
[27] m. boiti, f. pempinelli, a.k. pogrebkov and b. prinari, "building extended re-
solvent of heat operator via twisting transformations," theor. math. phys. 159
(2009) 721-733
[28] m. boiti, f. pempinelli, a.k. pogrebkov and b. prinari, "inverse scattering theory
of the heat equation for the perturbed 1-soliton potential," j. math. phys. 43
(2002) 1044-1062
[29] j. villarroel and m.j. ablowitz, "the cauchy problem for the kadomtsev–petviashvili ii equation with nondecaying data along a line," stud. appl. math. 109 (2002) 151–162, and "on the initial value problem for the kpii equation with data that does not decay along a line," nonlinearity 17 (2004) 1843–1866
[30] m. boiti, f. pempinelli and a. k. pogrebkov, "scattering transform for the nonstationary schrödinger equation with a bidimensionally perturbed n-soliton potential," j. math. phys. 47 123510 (2006) 1–43
[31] t. miwa, m. jimbo and e. date, solitons: differential equations, symmetries and
infinite dimensional algebra, cambridge university press, cambridge (2000)
[32] p.g. grinevich, a.yu. orlov, "virasoro action on riemann surfaces, grassmannians, det \bar\partial and segal–wilson τ-function," in "problems of modern quantum field theory," ed. a. a. belavin, a. v. klimyk and a. b. zamolodchikov, springer, new york (1989) 86–106
[33] f. r. gantmacher, théorie des matrices, éditions jacques gabay (1990)
[34] f.r. gantmacher and m.g. krein, oscillation matrices and kernels and small vi-
brations of mechanical systems, ams, providence (2002), oszillationsmatrizen,
oszillationskerne und kleine schwingungen mechanischer systeme, akademie-
verlag, berlin (1960)
|
0911.1676 | universal dynamical decoupling: two-qubit states and beyond | uhrig's dynamical decoupling pulse sequence has emerged as one universal and
highly promising approach to decoherence suppression. so far both the
theoretical and experimental studies have examined single-qubit decoherence
only. this work extends uhrig's universal dynamical decoupling from one-qubit
to two-qubit systems and even to general multi-level quantum systems. in
particular, we show that by designing appropriate control hamiltonians for a
two-qubit or a multi-level system, uhrig's pulse sequence can also preserve a
generalized quantum coherence measure to the order of $1+O(T^{N+1})$, with only
$N$ pulses. our results lead to a very useful scheme for efficiently locking
two-qubit entangled states. future important applications of uhrig's pulse
sequence in preserving the quantum coherence of multi-level quantum systems can
also be anticipated.
| introduction
decoherence, i.e., the loss of quantum coherence due
to system-environment coupling, is a major obstacle for
a variety of fascinating quantum information tasks. even
with the assistance of error corrections, decoherence must
be suppressed below an acceptable level to realize a useful
quantum operation. analogous to refocusing techniques
in nuclear magnetic resonance (nmr) studies, the dy-
namical decoupling (dd) approach to decoherence sup-
pression has attracted tremendous interest. the central
idea of dd is to use a control pulse sequence to effectively
decouple a quantum system from its environment.
during the past years several dd pulse sequences have
been proposed. the so-called "bang-bang" control has
proved to be very useful [1, 2, 3] with a variety of exten-
sions. however, it is not optimized for a given period t
of coherence preservation. the carr-purcell-meiboom-
gill (cpmg) sequence from the nmr context can sup-
press decoherence up to O(T^3) [4]. in an approach called "concatenated dynamical decoupling" [5, 6], the decoherence can be suppressed to the order of O(T^{N+1}) with 2^N pulses.
remarkably, in considering a single qubit
subject to decoherence without population relaxation,
uhrig's (optimal) dynamical decoupling (udd) pulse se-
quence proposed in 2007 can suppress decoherence up to
O(T^{N+1}) with only N pulses [4, 7, 8]. in a udd sequence, the j-th control pulse is applied at the time
t_j = T \sin^2\!\left( \frac{jπ}{2N+2} \right),    j = 1, 2, \dots, N.    (1)
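for concreteness, the udd instants in eq. (1) are trivial to tabulate; a minimal sketch (python/numpy, values chosen only as an example):

```python
# UDD pulse times t_j = T sin^2( j*pi/(2N+2) ), eq. (1)
import numpy as np

def udd_times(N, T):
    j = np.arange(1, N + 1)
    return T*np.sin(j*np.pi/(2*N + 2))**2

print(udd_times(5, 1.0))   # approx. [0.0670 0.25 0.5 0.75 0.9330] for N = 5, T = 1
```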
in most cases udd outperforms all other known dd con-
trol sequences, a fact already confirmed in two beautiful
experiments [9, 10, 11]. as a dramatic development in
theory, yang and liu proved that udd is universal for
suppressing single-qubit decoherence [12]. that is, for a
single qubit coupled with an arbitrary bath, udd works
regardless of how the qubit is coupled to its bath.
given the universality of udd for suppression of
single-qubit decoherence, it becomes urgent to examine
whether udd is useful for preserving quantum coher-
ence of two-qubit states. this extension is necessary and
important because many quantum operations involve at
least two qubits. conceptually there is also a big differ-
ence between single-qubit coherence and two-qubit co-
herence: preserving the latter often means the storage
of quantum entanglement. furthermore, because quan-
tum entanglement is a nonlocal property and cannot be
affected by local operations, preserving quantum entan-
glement between two qubits by a control pulse sequence
will require the use of nonlocal control hamiltonians.
in this work, by exploiting a central result in yang
and liu's universality proof [12] for udd in single-qubit
systems and by adopting a generalized coherence mea-
sure for two-qubit states, we show that udd pulse se-
quence does apply to two-qubit systems, at least for pre-
serving one pre-determined type of quantum coherence.
the associated control hamiltonian is also explicitly con-
structed. this significant extension from single-qubit to
two-qubit systems opens up an exciting avenue of dy-
namical protection of quantum entanglement. indeed, it
is now possible to efficiently lock a two-qubit system on
a desired entangled state, without any knowledge of the
bath. encouraged by our results for two-qubit systems,
we then show that in general, the coherence of an arbi-
trary m-level quantum system, which is characterized by
our generalized coherence measure, can also be preserved
by udd to the order of 1 + O(T^{N+1}) with only N pulses,
irrespective of how this system is coupled with its envi-
ronment. hence, in principle, an arbitrary (but known)
quantum state of an m-qubit system with M = 2^m levels
can be locked by udd, provided that the required control
hamiltonian can be implemented experimentally. to es-
tablish an interesting connection with a kicked multi-level
system recently realized in a cold-atom laboratory [13],
we also explicitly construct the udd control hamilto-
nian for decoherence suppression in three-level quantum
systems.
this paper is organized as follows. in sec. ii, we first
briefly outline an important result proved by yang and
liu [12]; we then present our theory for udd in two-qubit
systems, followed by an extension to multi-level quantum
systems. in sec. iii, we present supporting results from
some simple numerical experiments. section iv discusses
the implications of our results and then concludes this
paper.
ii. udd theory for two-qubit and general multi-level systems
a. on yang-liu's universality proof for single-qubit systems
for our later use we first briefly describe one central
result in yang and liu's work [12] for proving the uni-
versality of the udd control sequence applied to single-
qubit systems. let c and z be two time-independent
hermitian operators. define two unitary operators U^{(N)}_± as follows:
U^{(N)}_±(T) = e^{-i[C ± (-1)^N Z](T - t_N)} × e^{-i[C ± (-1)^{N-1} Z](t_N - t_{N-1})} \cdots × e^{-i[C ∓ Z](t_2 - t_1)}\, e^{-i[C ± Z] t_1}.    (2)
yang and liu proved that for t_j satisfying eq. (1), we must have
\big[ U^{(N)}_- \big]^† U^{(N)}_+ = 1 + O(T^{N+1}),    (3)
i.e., the product of \big[ U^{(N)}_- \big]^† and U^{(N)}_+ differs from unity only by the order of O(T^{N+1}) for sufficiently small T. in
the interaction representation,
Z_I(t) ≡ e^{iCt} Z e^{-iCt} = \sum_{p=0}^{∞} \frac{(it)^p}{p!}\, \underbrace{[C, [C, \dots [C, Z] \dots]]}_{p\ \text{folds}},    (4)
hence the above expression for U^{(N)}_± can be rewritten in the following compact form
U^{(N)}_±(T) = e^{-iCT}\, J\left[ e^{-i \int_0^T \pm F_N(t)\, Z_I(t)\, dt} \right],    (5)
where T is the final time, J denotes the time-ordering operator, and
F_N(t) = (-1)^j,  for  t ∈ (t_j, t_{j+1}).    (6)
as an important observation, we note that though ref.
[12] focused on single-qubit decoherence in a bath, eq.
(3) was proved therein for arbitrary hermitian operators
c and z. this motivated us to investigate under what
conditions the unitary evolution operator of a controlled
two-qubit system plus a bath can assume the same form
as eq. (2).
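to make eq. (3) concrete, one can take random hermitian C and Z, build U^{(N)}_± from eq. (2) with the udd times of eq. (1), and watch the error of [U^{(N)}_-]^† U^{(N)}_+ shrink as T^{N+1}. the sketch below (python/numpy/scipy, not part of the original text; dimension, N and the two T values are arbitrary) halves T and checks that the error drops by roughly 2^{N+1}:

```python
# numerical illustration of eq. (3): the error of [U_-]^dagger U_+ scales as T^(N+1)
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
dim, N = 4, 3

def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (A + A.conj().T)/2

C, Z = rand_herm(dim), rand_herm(dim)

def U(T, sign):                                   # eq. (2), built interval by interval
    t = [0.0] + [T*np.sin(j*np.pi/(2*N + 2))**2 for j in range(1, N + 1)] + [T]
    W = np.eye(dim, dtype=complex)
    for j in range(N + 1):                        # on (t_j, t_{j+1}) the sign is +/-(-1)^j
        W = expm(-1j*(C + sign*(-1)**j*Z)*(t[j + 1] - t[j])) @ W
    return W

def err(T):
    return np.linalg.norm(U(T, -1).conj().T @ U(T, +1) - np.eye(dim))

print(err(0.2)/err(0.1))   # close to 2**(N+1) = 16 for small T
```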
b. decoherence suppression in two-qubit systems
quantum coherence is often characterized by the mag-
nitude of the off-diagonal matrix elements of the system
density operator after tracing over the bath. in single-
qubit cases, the transverse polarization then measures
the coherence and the longitudinal polarization measures
the population difference.
such a perspective is often
helpful so long as its representation-dependent nature is
well understood. in two-qubit systems or general multi-
level systems, the concept of quantum coherence becomes
more ambiguous because there are many off-diagonal ma-
trix elements of the system density operator.
clearly
then, to have a general and convenient coherence measure
will be important for extending decoherence suppression
studies beyond single-qubit systems.
here we define a generalized polarization operator to
characterize a certain type of coherence. specifically, as-
sociated with an arbitrary pure state |ψ⟩of our quantum
system, we define the following polarization operator,
P_{|ψ⟩} ≡ 2|ψ⟩⟨ψ| - I,    (7)
where I is the identity operator. this polarization operator has the following properties:
P^2_{|ψ⟩} = I,    P_{|ψ⟩}|ψ⟩ = |ψ⟩,    P_{|ψ⟩}|ψ^⊥⟩ = -|ψ^⊥⟩,    (8)
where |ψ⊥⟩represents all other possible states of the sys-
tem that are orthogonal to |ψ⟩. hence, if the expectation
value of p|ψ⟩is unity, then the system must be on the
state |ψ⟩. in this sense, the expectation value of p|ψ⟩
measures how much coherence of the |ψ⟩-type is con-
tained in a given system. for example, in the single-qubit
case, p|ψ⟩measures the longitudinal coherence if |ψ⟩is
chosen as the spin-up state, but measures the transverse
coherence along a certain direction if |ψ⟩is chosen as a
superposition of spin-up and spin-down states. most im-
portant of all, as seen in the following, the generalized
polarization operator p|ψ⟩can directly give the required
control hamiltonian in order to preserve the quantum
coherence thus defined.
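as a concrete illustration of eqs. (7) and (8), the following short numpy sketch (ours, not part of the original work; the function name is arbitrary) builds p|ψ⟩ for a given state vector and verifies its defining properties numerically.

```python
# a minimal numpy sketch (not from the paper) illustrating eqs. (7)-(8)
import numpy as np

def polarization_operator(psi):
    """return P_psi = 2|psi><psi| - I for a normalized state vector psi."""
    psi = psi / np.linalg.norm(psi)
    return 2.0 * np.outer(psi, psi.conj()) - np.eye(len(psi))

# two-qubit example: |psi> = |up,up> in the basis {|00>, |01>, |10>, |11>}
psi = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
P = polarization_operator(psi)

assert np.allclose(P @ P, np.eye(4))          # P^2 = I
assert np.allclose(P @ psi, psi)              # P|psi> = |psi>
psi_perp = np.array([0.0, 1.0, 0.0, 0.0], dtype=complex)
assert np.allclose(P @ psi_perp, -psi_perp)   # P|psi_perp> = -|psi_perp>
print(np.real_if_close(np.diag(P)))           # diag(1, -1, -1, -1)
```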
we now consider a two-qubit system interacting with an arbitrary bath whose self-hamiltonian is given by $H_E = C_0$. the qubits interact with the environment via the interaction hamiltonian $H_{jE} = \sigma^j_x C_{x,j} + \sigma^j_y C_{y,j} + \sigma^j_z C_{z,j}$ for $j = 1, 2$, where $\sigma^j_x$, $\sigma^j_y$, and $\sigma^j_z$ are the standard pauli matrices, and $C_{\alpha,j}$ are bath operators. we further assume that the qubit-qubit interaction is given by $H_{12} = \sum_{k,l\in\{x,y,z\}} c_{kl}\,\sigma^1_k \sigma^2_l$, where the coefficients $c_{kl}$ may also depend on arbitrary bath operators. a general total hamiltonian describing a two-qubit system in a bath hence becomes
$$\begin{aligned}
H &= H_E + H_{1E} + H_{2E} + H_{12} \\
  &= C_0 + \sigma^1_x C_{x,1} + \sigma^1_y C_{y,1} + \sigma^1_z C_{z,1} + \sigma^2_x C_{x,2} + \sigma^2_y C_{y,2} + \sigma^2_z C_{z,2} \\
  &\quad + \sigma^1_x\sigma^2_x c_{xx} + \sigma^1_x\sigma^2_y c_{xy} + \sigma^1_x\sigma^2_z c_{xz} + \sigma^1_y\sigma^2_x c_{yx} + \sigma^1_y\sigma^2_y c_{yy} + \sigma^1_y\sigma^2_z c_{yz} \\
  &\quad + \sigma^1_z\sigma^2_x c_{zx} + \sigma^1_z\sigma^2_y c_{zy} + \sigma^1_z\sigma^2_z c_{zz}. \quad (9)
\end{aligned}$$
for convenience each term in the above total hamiltonian
is assumed to be time independent (this assumption will
be lifted in the end).
focusing on the two-qubit subspace, the above total
hamiltonian is seen to consist of 16 linearly-independent
terms that span a natural set of basis operators for all
possible hermitian operators acting on the two-qubit sys-
tem. this set of basis operators can be summarized as
$$\{X_i\}_{i=1,2,\ldots,16} = \{\sigma_k \otimes \sigma_l\}, \quad (10)$$
where $\sigma_k, \sigma_l \in \{I, \sigma_x, \sigma_y, \sigma_z\}$, with the orthogonality condition $\mathrm{trace}(X_j X_k) = 4\delta_{jk}$. this choice of basis operators is, however, rather arbitrary; we find it convenient to change to a different operator basis in order to facilitate operator manipulations. in the following we examine the suppression of two types of coherence, one associated with non-entangled states and the other with a bell state.
1. preserving coherence associated with non-entangled states
let the four basis states of a two-qubit system be $|0\rangle = |\!\uparrow\uparrow\rangle$, $|1\rangle = |\!\uparrow\downarrow\rangle$, $|2\rangle = |\!\downarrow\uparrow\rangle$, and $|3\rangle = |\!\downarrow\downarrow\rangle$. the projector associated with each of the four basis states is given by
$$\begin{aligned}
|0\rangle\langle 0| = P_0 &= \tfrac{1}{4}(1+\sigma^1_z)(1+\sigma^2_z), \\
|1\rangle\langle 1| = P_1 &= \tfrac{1}{4}(1+\sigma^1_z)(1-\sigma^2_z), \\
|2\rangle\langle 2| = P_2 &= \tfrac{1}{4}(1-\sigma^1_z)(1+\sigma^2_z), \\
|3\rangle\langle 3| = P_3 &= \tfrac{1}{4}(1-\sigma^1_z)(1-\sigma^2_z). \quad (11)
\end{aligned}$$
as a simple example, the quantum coherence to be pro-
tected here is assumed to be p|0⟩= 2|0⟩⟨0| −i.
we now switch to the following new set of 16 basis operators,
$$\begin{aligned}
Y_1 &= P_{|0\rangle} = 2P_0 - I = \tfrac{1}{2}(-I + \sigma^1_z + \sigma^2_z + \sigma^1_z\sigma^2_z), \\
Y_2 &= P_0 + P_1 = \tfrac{1}{2}(I + \sigma^1_z), \\
Y_3 &= P_0 - P_1 + 2P_2 = \tfrac{1}{2}(I - \sigma^1_z + 2\sigma^2_z), \\
Y_4 &= P_0 - P_1 - P_2 + 3P_3 = \tfrac{1}{2}(I - \sigma^1_z - \sigma^2_z + 3\sigma^1_z\sigma^2_z), \\
Y_5 &= |1\rangle\langle 3| + |3\rangle\langle 1| = \tfrac{1}{2}(\sigma^1_x - \sigma^1_x\sigma^2_z), \\
Y_6 &= -i|1\rangle\langle 3| + i|3\rangle\langle 1| = \tfrac{1}{2}(\sigma^1_y - \sigma^1_y\sigma^2_z), \\
Y_7 &= |2\rangle\langle 3| + |3\rangle\langle 2| = \tfrac{1}{2}(\sigma^2_x - \sigma^1_z\sigma^2_x), \\
Y_8 &= -i|2\rangle\langle 3| + i|3\rangle\langle 2| = \tfrac{1}{2}(\sigma^2_y - \sigma^1_z\sigma^2_y), \\
Y_9 &= |1\rangle\langle 2| + |2\rangle\langle 1| = \tfrac{1}{2}(\sigma^1_x\sigma^2_x + \sigma^1_y\sigma^2_y), \\
Y_{10} &= -i|1\rangle\langle 2| + i|2\rangle\langle 1| = \tfrac{1}{2}(\sigma^1_y\sigma^2_x - \sigma^1_x\sigma^2_y), \\
Y_{11} &= |0\rangle\langle 1| + |1\rangle\langle 0| = \tfrac{1}{2}(\sigma^2_x + \sigma^1_z\sigma^2_x), \\
Y_{12} &= -i|0\rangle\langle 1| + i|1\rangle\langle 0| = \tfrac{1}{2}(\sigma^2_y + \sigma^1_z\sigma^2_y), \\
Y_{13} &= |0\rangle\langle 2| + |2\rangle\langle 0| = \tfrac{1}{2}(\sigma^1_x + \sigma^1_x\sigma^2_z), \\
Y_{14} &= -i|0\rangle\langle 2| + i|2\rangle\langle 0| = \tfrac{1}{2}(\sigma^1_y + \sigma^1_y\sigma^2_z), \\
Y_{15} &= |0\rangle\langle 3| + |3\rangle\langle 0| = \tfrac{1}{2}(\sigma^1_x\sigma^2_x - \sigma^1_y\sigma^2_y), \\
Y_{16} &= -i|0\rangle\langle 3| + i|3\rangle\langle 0| = \tfrac{1}{2}(\sigma^1_x\sigma^2_y + \sigma^1_y\sigma^2_x). \quad (12)
\end{aligned}$$
using this new set of basis operators for a two-qubit system, the total hamiltonian becomes a linear combination of the $Y_j$ ($j = 1\text{--}16$) operators defined above, i.e.,
$$H = \sum_{j=1}^{16} W_j Y_j, \quad (13)$$
where $W_j$ are expansion coefficients that can contain arbitrary bath operators.
the above new set of basis operators has the following properties. first, the operator $Y_1$ in this set is identical with $P_{|0\rangle}$ and hence also satisfies the interesting properties described by eq. (8). second,
$$[Y_j, Y_1] = 0, \ \text{for } j = 1, 2, \ldots, 10; \qquad \{Y_j, Y_1\}_+ = 0, \ \text{for } j = 11, 12, \ldots, 16, \quad (14)$$
where $[\cdot,\cdot]$ denotes the commutator and $\{\cdot,\cdot\}_+$ the anti-commutator. third,
$$\begin{aligned}
\Big[\sum_{i=1}^{10} a_i Y_i,\ \sum_{j=11}^{16} b_j Y_j\Big] &= \sum_{j=11}^{16} c_j Y_j, \\
\Big(\sum_{i=1}^{10} a_i Y_i\Big)\Big(\sum_{j=1}^{10} b_j Y_j\Big) &= \sum_{j=1}^{10} c_j Y_j, \\
\Big(\sum_{i=11}^{16} a_i Y_i\Big)\Big(\sum_{j=11}^{16} b_j Y_j\Big) &= \sum_{j=1}^{10} c_j Y_j. \quad (15)
\end{aligned}$$
with these observations, we next split the total uncontrolled hamiltonian into two terms, i.e., $H = H_0 + H'$, where
$$H_0 = W_1 Y_1 + W_2 Y_2 + \cdots + W_{10} Y_{10}, \quad (16)$$
and
$$H' = W_{11} Y_{11} + \cdots + W_{16} Y_{16}. \quad (17)$$
evidently, we have the anti-commuting relation
$$\{Y_1, H'\}_+ = 0, \quad (18)$$
an important fact for our proof below.
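the splitting into h0 and h′ can also be done mechanically: since y1² = i, the projections h0 = (h + y1 h y1)/2 and h′ = (h − y1 h y1)/2 automatically commute and anti-commute with y1. a small numpy sketch of this (ours, not from the paper; a random hermitian matrix stands in for the full hamiltonian of eq. (9)):

```python
# split an arbitrary hermitian H into a part commuting with Y1 and a part
# anticommuting with Y1, using Y1^2 = I.  bath degrees of freedom are omitted
# here purely for brevity.
import numpy as np

rng = np.random.default_rng(0)
dim = 4                                   # two-qubit system
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                  # random hermitian "total" hamiltonian

P0 = np.diag([1.0, 0, 0, 0])              # |0><0| with |0> = |up,up>
Y1 = 2 * P0 - np.eye(dim)                 # first operator of eq. (12)

H0 = (H + Y1 @ H @ Y1) / 2                # commuting part, cf. eq. (16)
Hp = (H - Y1 @ H @ Y1) / 2                # anticommuting part, cf. eq. (17)

assert np.allclose(H0 + Hp, H)
assert np.allclose(Y1 @ H0 - H0 @ Y1, 0)  # [Y1, H0] = 0
assert np.allclose(Y1 @ Hp + Hp @ Y1, 0)  # {Y1, H'} = 0, eq. (18)
```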
consider now the following control hamiltonian describing a sequence of extended udd $\pi$-pulses
$$H_c = \sum_{j=1}^{N} \pi\,\delta(t - T_j)\,\frac{Y_1}{2}. \quad (19)$$
after the $N$ control pulses, the unitary evolution operator for the whole system of the two qubits plus a bath is given by ($\hbar = 1$ throughout)
$$\begin{aligned}
U(T) &= e^{-i[H_0+H'](T-T_N)}(-iY_1)\, e^{-i[H_0+H'](T_N-T_{N-1})}(-iY_1) \cdots \\
&\quad \times e^{-i[H_0+H'](T_3-T_2)}(-iY_1)\, e^{-i[H_0+H'](T_2-T_1)}(-iY_1)\, e^{-i[H_0+H']T_1}. \quad (20)
\end{aligned}$$
we can then take advantage of the anti-commuting relation of eq. (18) to exchange the order between $(-iY_1)$ and the exponentials in the above equation, leading to
$$\begin{aligned}
U(T) &= (-iY_1)^N e^{-i[H_0+(-1)^N H'](T-T_N)}\, e^{-i[H_0+(-1)^{N-1} H'](T_N-T_{N-1})} \cdots \\
&\quad \times e^{-i[H_0+H'](T_3-T_2)}\, e^{-i[H_0-H'](T_2-T_1)}\, e^{-i[H_0+H']T_1} \\
&= (-iY_1)^N e^{-iH_0 T}\, \mathcal{J}\!\left[ e^{-i\int_0^T F_N(t)\, H'_I(t)\, dt} \right] \\
&\equiv (-iY_1)^N\, \mathcal{U}^{(N)}_+(T). \quad (21)
\end{aligned}$$
here $F_N(t)$ is already defined in eq. (6), the second equality is obtained by using the interaction representation, with $H'_I(t) \equiv e^{iH_0 t} H' e^{-iH_0 t}$, and the last line defines the operator $\mathcal{U}^{(N)}_+(T)$. clearly, $\mathcal{U}^{(N)}_+$ is exactly in the form of $U^{(N)}_+$ defined in eqs. (2) and (5), with $H_0$ replacing $C$ and $H'$ replacing $Z$. this observation motivates us to define
$$\mathcal{U}^{(N)}_-(T) \equiv e^{-iH_0 T}\, \mathcal{J}\!\left[ e^{-i\int_0^T -F_N(t)\, H'_I(t)\, dt} \right], \quad (22)$$
which is completely parallel to $U^{(N)}_-$ defined in eq. (5). as such, eq. (3) directly leads to
$$\mathcal{U}^{(N)\dagger}_- \mathcal{U}^{(N)}_+ = 1 + O(T^{N+1}). \quad (23)$$
with eq. (23) obtained we can now evaluate the coherence measure. in particular, for an arbitrary initial state given by the density operator $\rho_i$, the expectation value of $P_{|0\rangle}$ at time $T$ is given by
$$\begin{aligned}
\mathrm{trace}\{U(T)\rho_i U^\dagger(T) P_{|0\rangle}\}
&= \mathrm{trace}\{(-iY_1)^N \mathcal{U}^{(N)}_+ \rho_i\, \mathcal{U}^{(N)\dagger}_+ (iY_1)^N P_{|0\rangle}\} \\
&= \mathrm{trace}\{(-iY_1)^N \mathcal{U}^{(N)}_+ \rho_i P_{|0\rangle}\, \mathcal{U}^{(N)\dagger}_- (iY_1)^N\} \\
&= \mathrm{trace}\{\mathcal{U}^{(N)\dagger}_- \mathcal{U}^{(N)}_+ \rho_i P_{|0\rangle}\} \\
&= \mathrm{trace}\{\rho_i P_{|0\rangle}\}\,\big[1 + O(T^{N+1})\big], \quad (24)
\end{aligned}$$
where we have used $P_{|0\rangle} = Y_1$, $Y_1^2 = I$, and the anti-commuting relation between $P_{|0\rangle}$ and $H'$. equation (24) clearly demonstrates that, as a result of the udd sequence of $N$ pulses, the expectation value of $P_{|0\rangle}$ is preserved up to a correction of order $O(T^{N+1})$, for an arbitrary initial state. if the initial state is set to be $|0\rangle$, i.e., $\mathrm{trace}\{\rho_i P_{|0\rangle}\} = 1$, then the expectation value of $P_{|0\rangle}$ remains $1 + O(T^{N+1})$ at time $T$, indicating that the udd sequence has locked the system on the state $|0\rangle = |\!\uparrow\uparrow\rangle$.
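as a quick numerical sanity check of eqs. (20)-(24) (our own illustration, not from the paper), one can evolve a small two-qubit-plus-bath model with instantaneous y1 pulses placed at the standard udd instants t_j = t sin²[jπ/(2n+2)] (the timing referred to as eq. (1)) and watch the deviation of ⟨p|0⟩⟩ from unity shrink as the pulse number grows, provided the total time t is small:

```python
# qualitative illustration only: random hermitian H stands in for eq. (9) plus bath,
# pulses are ideal pi-rotations generated by Y1, pulse times follow the UDD formula.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim_s, dim_b = 4, 2                       # two qubits plus a single bath spin
dim = dim_s * dim_b
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                  # random hermitian total hamiltonian

P0 = np.diag([1.0, 0, 0, 0])              # |0><0| on the two qubits, |0> = |up,up>
Y1 = np.kron(2 * P0 - np.eye(dim_s), np.eye(dim_b))
pulse = -1j * Y1                          # exp(-i*pi*Y1/2) = -i*Y1, one ideal pulse

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                             # |up,up> x |bath up>, so <Y1> starts at 1

T = 0.1
for N in (1, 2, 4, 8):
    times = [T * np.sin(j * np.pi / (2 * N + 2)) ** 2 for j in range(1, N + 1)] + [T]
    psi, t_prev = psi0.copy(), 0.0
    for k, t in enumerate(times):
        psi = expm(-1j * H * (t - t_prev)) @ psi
        if k < N:                         # pulses act only at t_1, ..., t_N
            psi = pulse @ psi
        t_prev = t
    err = abs(np.vdot(psi, Y1 @ psi).real - 1.0)
    print(f"N = {N}:  1 - <P> = {err:.2e}")
```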
in our proof of the udd applicability in preserving the
coherence p|ψ⟩associated with a non-entangled state, the
first important step is to construct the control operator
y1 = p|ψ⟩and then the control hamiltonian hc. as is
clear from eq. (8), each application of the control op-
erator y1 = p|0⟩leaves the state |0⟩intact but induces
a negative sign for all other two-qubit states. it is in-
teresting to compare the control operator y1 with what
can be intuitively expected from early single-qubit udd
results.
suppose that the two qubits were not related at all; then, in order to suppress the spin flipping of the first qubit (second qubit), we would need a control operator $\sigma^1_z$ ($\sigma^2_z$).
as such, an intuitive single-qubit-based control hamiltonian would be
$$H_{c,\mathrm{single}} = \frac{\pi}{2}\sum_{j=1}^{N} \delta(t - T_j)\,(\sigma^1_z + \sigma^2_z). \quad (25)$$
this intuitive control hamiltonian differs from eq. (19), hinting at an important difference between the two-qubit and single-qubit cases.
indeed, here the qubit-qubit inter-
action or the system-environment coupling may directly
cause a double-flipping error | ↑↑⟩→| ↓↓⟩, which can-
not be suppressed by hc,single. the second key step is
to split the hamiltonian h into two parts h0 and h′,
with the former commuting with y1 and the latter anti-
commuting with y1. once these two steps are achieved,
the remaining part of our proof becomes straightforward
by exploiting eq. (23). these understandings suggest
that it should be equally possible to preserve the coher-
ence associated with entangled two-qubit states.
2. preserving coherence associated with entangled states

consider a different coherence property as defined by our generalized polarization operator $P_{|\psi\rangle}$, with $|\psi\rangle$ taken as a bell state
$$|\tilde{0}\rangle = \frac{1}{\sqrt{2}}\left[ |\!\uparrow\downarrow\rangle + |\!\downarrow\uparrow\rangle \right]. \quad (26)$$
the other three orthogonal basis states for the two-qubit system are now denoted as $|\tilde{1}\rangle$, $|\tilde{2}\rangle$, and $|\tilde{3}\rangle$. for example, they can be taken to be $|\tilde{1}\rangle = \frac{1}{\sqrt{2}}[|\!\uparrow\uparrow\rangle + |\!\downarrow\downarrow\rangle]$, $|\tilde{2}\rangle = \frac{1}{\sqrt{2}}[|\!\uparrow\uparrow\rangle - |\!\downarrow\downarrow\rangle]$, and $|\tilde{3}\rangle = \frac{1}{\sqrt{2}}[|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle]$. to preserve such a new type of coherence, we follow our earlier procedure and first construct a control operator $\tilde{Y}_1$ and then a new set of basis operators. in particular, we require
$$\tilde{Y}_1 = P_{|\tilde{0}\rangle} = 2|\tilde{0}\rangle\langle\tilde{0}| - I = \tfrac{1}{2}(-I + \sigma^1_x\sigma^2_x + \sigma^1_y\sigma^2_y - \sigma^1_z\sigma^2_z). \quad (27)$$
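a quick numerical check of eq. (27) (ours, not from the paper): build the polarization operator of the bell state directly and compare it with the pauli expression.

```python
# verify that 2|B><B| - I equals the pauli combination of eq. (27),
# with |B> = (|01> + |10>)/sqrt(2).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
kron = np.kron

bell = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
Y1_tilde_direct = 2 * np.outer(bell, bell.conj()) - np.eye(4)
Y1_tilde_pauli = 0.5 * (-np.eye(4) + kron(sx, sx) + kron(sy, sy) - kron(sz, sz))

assert np.allclose(Y1_tilde_direct, Y1_tilde_pauli)   # eq. (27) holds
```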
we then construct nine further basis operators that all commute with $\tilde{Y}_1$, e.g.,
$$\begin{aligned}
\tilde{Y}_2 &= \tfrac{1}{2}(I + \sigma^1_x\sigma^2_x), \\
\tilde{Y}_3 &= \tfrac{1}{2}(I - \sigma^1_x\sigma^2_x + 2\sigma^1_y\sigma^2_y), \\
\tilde{Y}_4 &= \tfrac{1}{2}(I - \sigma^1_x\sigma^2_x - \sigma^1_y\sigma^2_y - 3\sigma^1_z\sigma^2_z), \\
\tilde{Y}_5 &= \tfrac{1}{2}(\sigma^1_z\sigma^2_x - \sigma^1_x\sigma^2_z), \\
\tilde{Y}_6 &= \tfrac{1}{2}(\sigma^2_y - \sigma^1_y), \\
\tilde{Y}_7 &= \tfrac{1}{2}(\sigma^2_x - \sigma^1_x), \\
\tilde{Y}_8 &= -\tfrac{1}{2}(\sigma^1_y\sigma^2_z - \sigma^1_z\sigma^2_y), \\
\tilde{Y}_9 &= \tfrac{1}{2}(\sigma^1_z + \sigma^2_z), \\
\tilde{Y}_{10} &= -\tfrac{1}{2}(\sigma^1_x\sigma^2_y + \sigma^1_y\sigma^2_x). \quad (28)
\end{aligned}$$
the remaining six linearly independent basis operators are found to anti-commute with $\tilde{Y}_1$. they can be written as
$$\begin{aligned}
\tilde{Y}_{11} &= \tfrac{1}{2}(\sigma^1_x + \sigma^2_x), \\
\tilde{Y}_{12} &= -\tfrac{1}{2}(\sigma^1_y\sigma^2_z + \sigma^1_z\sigma^2_y), \\
\tilde{Y}_{13} &= \tfrac{1}{2}(\sigma^1_x\sigma^2_z + \sigma^1_z\sigma^2_x), \\
\tilde{Y}_{14} &= -\tfrac{1}{2}(\sigma^1_y + \sigma^2_y), \\
\tilde{Y}_{15} &= \tfrac{1}{2}(\sigma^1_z - \sigma^2_z), \\
\tilde{Y}_{16} &= \tfrac{1}{2}(\sigma^1_x\sigma^2_y - \sigma^1_y\sigma^2_x). \quad (29)
\end{aligned}$$
the total hamiltonian can now be rewritten as $H = \tilde{H}_0 + \tilde{H}'$, in which
$$\tilde{H}_0 = \tilde{W}_1 \tilde{Y}_1 + \tilde{W}_2 \tilde{Y}_2 + \cdots + \tilde{W}_{10} \tilde{Y}_{10} \quad (30)$$
and
$$\tilde{H}' = \tilde{W}_{11} \tilde{Y}_{11} + \cdots + \tilde{W}_{16} \tilde{Y}_{16}. \quad (31)$$
it is then evident that if we apply the following control hamiltonian, i.e.,
$$\tilde{H}_c = \sum_{j=1}^{N} \pi\,\delta(t - T_j)\,\frac{\tilde{Y}_1}{2} = \sum_{j=1}^{N} \frac{\pi}{4}\,\delta(t - T_j)\,(-I + \sigma^1_x\sigma^2_x + \sigma^1_y\sigma^2_y - \sigma^1_z\sigma^2_z), \quad (32)$$
the time evolution operator of the controlled total system becomes entirely parallel to eqs. (20) and (21) (with every operator $O$ replaced by its counterpart $\tilde{O}$). hence, using the $N$ control pulses described by eq. (32), the quantum coherence defined by the expectation value of $P_{|\tilde{0}\rangle}$ can be preserved up to a correction of order $O(T^{N+1})$, for an arbitrary initial state. if the initial state is already the bell state $|\tilde{0}\rangle$ (i.e., coincides with the $|\psi\rangle$ that defines our coherence measure $P_{|\psi\rangle}$), then our udd control sequence locks the system on this bell state with a fidelity $1 + O(T^{N+1})$, no matter how the system is coupled to its environment.
the constant term in the control hamiltonian $\tilde{H}_c$ can be dropped because it only induces an overall phase of the evolving state. all other terms in $\tilde{H}_c$ represent two-body and hence nonlocal control. this confirms our initial expectation that suppressing the decoherence of entangled two-qubit states is more involved than in single-qubit cases.
we have also considered the preservation of another bell state, $\frac{1}{\sqrt{2}}[|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle]$. following the same procedure outlined above, one finds that the required udd control hamiltonian should be given by
$$\tilde{H}_c = -\sum_{j=1}^{N} \frac{\pi}{4}\,\delta(t - T_j)\,(I + \sigma^1_x\sigma^2_x + \sigma^1_y\sigma^2_y + \sigma^1_z\sigma^2_z), \quad (33)$$
which is a pulsed heisenberg interaction hamiltonian. such an isotropic control hamiltonian is consistent with the fact that the singlet bell state defining our quantum coherence measure is also isotropic.
c. udd in m-level systems

our considerations for two-qubit systems suggest a general strategy for establishing udd in an arbitrary $M$-level system. let $|0\rangle, |1\rangle, \ldots, |M-1\rangle$ be the $M$ orthogonal basis states for an $M$-level system. their associated projectors are defined as $P_j \equiv |j\rangle\langle j|$, with $j = 0, 1, \ldots, M-1$. without loss of generality we assume that the quantum coherence to be preserved is of the $|0\rangle$-type, as characterized by $P_{|0\rangle} = 2|0\rangle\langle 0| - I$. as learned from sec. ii-b, the important control operator is then
$$V_1 = P_{|0\rangle} = 2P_0 - I, \quad (34)$$
with $V_1^2 = I$. a udd sequence of this control operator can be realized by the following control hamiltonian
$$\tilde{H}_c = \sum_{j=1}^{N} \pi\,\delta(t - T_j)\,\frac{V_1}{2}. \quad (35)$$
in the $M$-dimensional hilbert space, there are in total $M^2$ linearly independent hermitian operators. we now divide these $M^2$ operators into two groups, one commuting with $V_1$ and the other anti-commuting with $V_1$. specifically, the following $M-1$ operators
$$\begin{aligned}
V_2 &= P_0 + P_1, \\
V_3 &= P_0 - P_1 + 2P_2, \\
&\ \ \vdots \\
V_M &= P_0 - P_1 - \cdots - P_{M-2} + (M-1)P_{M-1} \quad (36)
\end{aligned}$$
evidently commute with $V_1$. in addition, another $(M-2)(M-1)$ basis operators, denoted $V_{M+1}, V_{M+2}, \ldots, V_{M+(M-2)(M-1)}$, also commute with $V_1$. this is the case because we can construct the following $\frac{1}{2}(M-2)(M-1)$ basis operators
$$|k\rangle\langle l| + |l\rangle\langle k| \quad (37)$$
with $0 < k < M$ and $k < l < M$. the other $\frac{1}{2}(M-2)(M-1)$ basis operators that commute with $V_1$ are constructed as
$$-i|k\rangle\langle l| + i|l\rangle\langle k|, \quad (38)$$
also with $0 < k < M$ and $k < l < M$. all the remaining $2(M-1)$ basis operators are found to anti-commute with $V_1$. specifically, they can be written as
$$\begin{aligned}
V_{M+(M-1)(M-2)+2l-1} &= |0\rangle\langle l| + |l\rangle\langle 0|, \\
V_{M+(M-1)(M-2)+2l} &= -i|0\rangle\langle l| + i|l\rangle\langle 0|, \quad (39)
\end{aligned}$$
where $1 \le l \le M-1$.
the total hamiltonian for an uncontrolled $M$-level system interacting with a bath can now be written as
$$H_M = H_0 + H', \qquad H_0 = \sum_{j=1}^{M^2-2M+2} W_j V_j, \qquad H' = \sum_{j=M^2-2M+3}^{M^2} W_j V_j, \quad (40)$$
where $W_j$ are expansion coefficients that may contain arbitrary bath operators.
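the construction of eqs. (34)-(39) is easy to state programmatically. the following sketch (ours; the function name and the choice m = 5 are arbitrary) builds the two operator groups for an m-level system and verifies the commutation and anti-commutation properties numerically.

```python
import numpy as np

def mlevel_operator_split(M):
    """return (V1, commuting, anticommuting) for an M-level system, eqs. (34)-(39)."""
    basis = np.eye(M, dtype=complex)
    proj = [np.outer(basis[j], basis[j]) for j in range(M)]
    V1 = 2 * proj[0] - np.eye(M)                       # eq. (34)

    commuting = []
    for m in range(2, M + 1):                          # eq. (36)
        commuting.append(proj[0] - sum(proj[1:m - 1]) + (m - 1) * proj[m - 1])
    for k in range(1, M):                              # eqs. (37)-(38)
        for l in range(k + 1, M):
            commuting.append(np.outer(basis[k], basis[l]) + np.outer(basis[l], basis[k]))
            commuting.append(-1j * np.outer(basis[k], basis[l]) + 1j * np.outer(basis[l], basis[k]))

    anticommuting = []
    for l in range(1, M):                              # eq. (39)
        anticommuting.append(np.outer(basis[0], basis[l]) + np.outer(basis[l], basis[0]))
        anticommuting.append(-1j * np.outer(basis[0], basis[l]) + 1j * np.outer(basis[l], basis[0]))
    return V1, commuting, anticommuting

V1, com, anti = mlevel_operator_split(5)
assert len(com) == (5 - 1) ** 2            # (M-1)^2 operators besides V1, i.e. M^2-2M+2 in total
assert len(anti) == 2 * (5 - 1)            # 2(M-1) anti-commuting operators
assert all(np.allclose(V1 @ A - A @ V1, 0) for A in com)
assert all(np.allclose(V1 @ A + A @ V1, 0) for A in anti)
```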
with the udd control sequence described in eq. (35) turned on, the unitary evolution operator can be easily analyzed using $[V_1, H_0] = 0$ and $\{V_1, H'\}_+ = 0$. indeed, it takes exactly the same form (with $Y_1 \rightarrow V_1$) as in eq. (21). we can then conclude that the quantum coherence property $P_{|\psi\rangle}$ associated with an arbitrarily pre-selected state $|\psi\rangle$ in an $M$-level system can be preserved with a fidelity $1 + O(T^{N+1})$, with only $N$ pulses. for an $m$-qubit system, $M = 2^m$. in such a multi-qubit case, our result indicates the following: if the initial state of an $m$-qubit system is known, then by (i) setting $|\psi\rangle$ equal to this initial state and (ii) taking $P_{|\psi\rangle}$ as the control operator, the known initial state will be efficiently locked by udd. certainly, realizing the required control hamiltonian for a multi-qubit system may be experimentally challenging.
recently, a multi-level system subject to pulsed external fields was realized experimentally in a cold-atom laboratory [13]. to motivate possible experiments on udd using an analogous setup, in the following we consider the case of $M = 3$ in detail. to gain more insight into the control operator $V_1$, here we use angular momentum operators in the $j = 1$ subspace to express all nine basis operators. specifically, using the eigenstates of the $J_z$ operator as our representation, we have
$$J_x = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad
J_y = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \qquad
J_z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}. \quad (41)$$
as an example, we use the state $(1, 0, 0)^T$ to define our coherence measure. the associated control operator $V_1$ is then found to be
$$V_1 = J_z + J_z^2 - I. \quad (42)$$
interestingly, this control operator involves a nonlinear function of the angular momentum operator $J_z$.
this
requirement can be experimentally fulfilled, because re-
alizing such kind of operators in a pulsed fashion is one
main achievement of ref. [13], where a "kicked-top" sys-
tem is realized for the first time. the two different con-
texts, i.e., udd by instantaneous pulses and the delta-
kicked top model for understanding quantum-classical
correspondence and quantum chaos [13, 14, 15], can thus
be connected to each other.
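as a quick check of eq. (42) (ours, not from the paper), one can verify directly that for the spin-1 matrices of eq. (41) the combination j_z + j_z² − i coincides with 2|0⟩⟨0| − i for the state (1,0,0)^t.

```python
# both expressions should equal diag(1, -1, -1) in the Jz eigenbasis.
import numpy as np

Jz = np.diag([1.0, 0.0, -1.0])
V1_from_Jz = Jz + Jz @ Jz - np.eye(3)

ket0 = np.array([1.0, 0.0, 0.0])
V1_direct = 2 * np.outer(ket0, ket0) - np.eye(3)

assert np.allclose(V1_from_Jz, V1_direct)
```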
for the sake of completeness, we also present below those operators that commute with $V_1$, namely,
$$\begin{aligned}
V_2 &= I + \tfrac{1}{2}J_z - \tfrac{1}{2}J_z^2, &
V_3 &= -I - \tfrac{1}{2}J_z + \tfrac{5}{2}J_z^2, \\
V_4 &= -\tfrac{1}{\sqrt{2}}(J_+ J_z + J_z J_-), &
V_5 &= \tfrac{i}{\sqrt{2}}(J_+ J_z - J_z J_-), \quad (43)
\end{aligned}$$
where $J_\pm = J_x \pm iJ_y$; and those operators that anti-commute with $V_1$, namely,
$$\begin{aligned}
V_6 &= \tfrac{1}{\sqrt{2}}(J_z J_+ + J_- J_z), &
V_7 &= \tfrac{i}{\sqrt{2}}(J_- J_z - J_z J_+), \\
V_8 &= \tfrac{1}{2}(J_+^2 + J_-^2), &
V_9 &= \tfrac{i}{2}(J_-^2 - J_+^2). \quad (44)
\end{aligned}$$
some linear combinations of these operators will be re-
quired to construct the control hamiltonian to preserve
the coherence associated with other states.
iii. simple numerical experiments
to further confirm the udd control sequences we ex-
plicitly constructed above, we have performed some sim-
ple numerical experiments. we first consider a model of
a two-spin system coupled to a bath of three spins. the
total hamiltonian in dimensionless units is hence given by
$$H = \sum_{m=3}^{5} \sum_{j\in\{x,y,z\}} b_{j,m}\,\sigma^m_j + \sum_{n=1}^{5} \sum_{m>n}^{5} \sum_{j,k\in\{x,y,z\}} c_{jk}\,\sigma^m_j \sigma^n_k + H_c, \quad (45)$$
where the first two spins constitute the two-qubit system
in the absence of any external field, hc represents the
udd control hamiltonian, and the coefficients bj,m and
cjk take randomly chosen values in [0, 1] in dimensionless
units. in addition, to be more realistic, we replace the instantaneous $\delta(t-T_j)$ function in our control hamiltonians by a gaussian pulse, i.e., $\frac{1}{c\sqrt{\pi}}\, e^{-(t-T_j)^2/c^2}$,
[figure 1 here: f(t) versus t, with curves labeled "without pulses", "extended udd, n=8", and "intuitive pulse, n=8".]
fig. 1: (color online) expectation value of the coherence
measure p|ψ⟩, denoted f(t), as a function of time in dimen-
sionless units, with |ψ⟩being the non-entangled state | ↑↑⟩
of a two-qubit system.
the bath responsible for the deco-
herence is modeled by a three-spin system detailed in the
text. the bottom curve is without any control and the de-
coherence is significant. the middle curve is calculated from
a control hamiltonian intuitively based on two independent
qubits. the top solid curve represents significant decoherence
suppression due to our two-qubit udd control hamiltonian
described by eq. (19).
with c = t/100 unless specified otherwise. further, we
set t = 0.1, because this scale is comparable to the de-
coherence time scale.
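for readers who wish to reproduce experiments of this kind, the following python sketch (ours, not the authors' code) sets up the model of eq. (45): five spin-1/2 particles with random couplings drawn from [0, 1], and a gaussian profile replacing the delta pulses. the time integration of the controlled dynamics is omitted here, and since the text does not specify whether the pair couplings c_jk are the same for every spin pair, the sketch simply draws an independent value per term.

```python
import numpy as np

rng = np.random.default_rng(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
paulis = {"x": sx, "y": sy, "z": sz}

def embed(op, site, n_spins=5):
    """place a single-spin operator on one site of the 5-spin chain."""
    mats = [np.eye(2, dtype=complex)] * n_spins
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = np.zeros((2**5, 2**5), dtype=complex)
for m in (2, 3, 4):                       # 0-based sites 2-4 = the paper's bath spins 3-5
    for j in "xyz":
        H += rng.uniform(0, 1) * embed(paulis[j], m)
for n in range(5):                        # all pairwise couplings with m > n
    for m in range(n + 1, 5):
        for j in "xyz":
            for k in "xyz":
                H += rng.uniform(0, 1) * embed(paulis[j], m) @ embed(paulis[k], n)

def gaussian_pulse(t, tj, c):
    """smoothed replacement for the delta function delta(t - tj)."""
    return np.exp(-((t - tj) ** 2) / c**2) / (c * np.sqrt(np.pi))
```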
figure 1 depicts the time dependence of the expecta-
tion value of the coherence measure p|ψ⟩, denoted f(t),
with |ψ⟩being the non-entangled state | ↑↑⟩of the two-
qubit system. the initial state of the system is also taken
as the non-entangled state | ↑↑⟩. as is evident from the
uncontrolled case (bottom curve) , the decoherence time
scale without any decoherence suppression is of the or-
der 0.1 in dimensionless units. turning on the two-qubit
udd control sequence described by eq. (19) for n = 8,
the decoherence (top solid curve) is seen to be greatly
suppressed. we have also examined the decoherence sup-
pression using a udd sequence based on the single-qubit-
based intuitive control hamiltonian hc,single described by
eq. (25). as shown in fig. 1, hc,single can only produce
unsatisfactory decoherence suppression.
similar results are obtained in fig. 2, where we aim
to preserve the coherence measure p|ψ⟩associated with
the bell state defined in eq. (26). apparently, with the
assistance of our two-qubit udd control sequence, the
system is seen to be locked on the bell state with a fi-
delity close to unity at all times. figure 2 also presents
[figure 2 here: f(t) versus t, with curves labeled "without pulses", "extended udd, n=8", and "intuitive pulse, n=8".]
fig. 2: (color online) same as in fig. 1, but for p|ψ⟩associ-
ated with a bell state defined in eq. (26). the smooth dashed
curve represents significant decoherence without control. the
drastically oscillating dashed curve is calculated from an intu-
itive single-qubit-based control hamiltonian, showing strong
population transfer from the initial state to other two-qubit
states. the top solid curve represents significant decoherence
suppression due to our two-qubit udd control sequence in
eq. (32).
the parallel result if the control hamiltonian is given by
hc,single shown in eq. (25). the drastic oscillation of
f(t) in this case indicates that strong population oscilla-
tion occurs, thereby demonstrating again the difference
between single-qubit decoherence suppression and two-
qubit decoherence suppression.
using the same initial state as in fig. 2, fig. 3 depicts $D \equiv \frac{1}{2T}\int_0^T \lVert \rho(t) - \rho_i \rVert\, dt$, i.e., the time-averaged distance between the actual time-evolving density matrix and that of a completely locked bell state, for $c = T/100$ and $c = T/1000$, with different numbers of udd pulses. it
is seen that, at least for the number of udd pulses con-
sidered here, c = t/100 = 1/1000 (about one hundredth
of the decoherence time scale) already suffices to preserve
a bell state. that is, there seems to be no need to use
much shorter pulses such as c = t/1000 = 1/10000, be-
cause the case of c = t/1000 (dashed line) in fig.
3
shows little improvement as compared with the case of
c = t/100 (solid line). this should be of practical in-
terest for experimental studies of two-qubit decoherence
suppression.
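the distance measure d used in fig. 3 can be computed from a sampled trajectory of density matrices as in the following helper (ours); since the norm is not specified in the text, we take the trace norm as one reasonable choice.

```python
import numpy as np

def time_averaged_distance(rhos, rho_i, dt, T):
    """approximate D = (1/2T) * integral_0^T ||rho(t) - rho_i|| dt from sampled states."""
    total = 0.0
    for rho in rhos:
        diff = rho - rho_i
        total += np.sum(np.abs(np.linalg.eigvalsh(diff)))   # trace norm of a hermitian matrix
    return total * dt / (2 * T)
```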
finally, we show in fig. 4 the decoherence suppression
of a three-level quantum system, with the control opera-
tor given by eq. (42). here the bath is modeled by other
four three-level subsystems, and the total hamiltonian is
[figure 3 here: d versus the number of udd pulses n, for c = 1/10000 and c = 1/1000.]
fig. 3: (color online) the time-averaged distance d between
the actual density matrix from that of a completely locked
bell state, for c = t/100 and c = t/1000, versus the number
of udd pulses. the initial state is the same as in fig. 2.
chosen as
$$H = \sum_{m=2}^{5} \sum_{\alpha\in\{x,y,z\}} b_{\alpha,m}\, J_{\alpha,m} + \sum_{n=1}^{5} \sum_{m>n}^{5} \sum_{\alpha,\beta\in\{x,y,z\}} c_{\alpha\beta}\, J_{\alpha,m} J_{\beta,n} + H_c, \quad (46)$$
where jα,m represents the jx, jy, or jz operator associated
with the mth three-level subsystem, with the first being
the central system and the other four being the bath.
the coupling coefficients are again randomly chosen from
[0, 1] with dimensionless units. the results are analogous
to those seen in fig. 1 and fig. 2, confirming the general
applicability of our udd control sequence in multi-level
quantum systems. note also that even for the n = 2
case (middle curve in fig. 4), decoherence suppression
already shows up clearly. the results here may motivate
experimental udd studies using systems analogous to
the kicked-top system realized in ref. [13].
iv. discussion and conclusion
so far we have assumed that the system-bath cou-
pling, the bath self-hamiltonian, and the system hamil-
tonian in the absence of the control sequence are all time-
independent. this assumption can be easily lifted. in-
deed, as shown in a recent study by pasini and uhrig for
[figure 4 here: f(t) versus t for the three-level system, with curves labeled "n=2", "n=10", and "without pulses".]
fig. 4: (color online) expectation value of the coherence
measure p|ψ⟩, denoted f(t), as a function of time in dimen-
sionless units, with |ψ⟩being one basis state of a three-level
system. the central system is coupled with a bath modeled
by other four three-level subsystems. the bottom curve rep-
resents significant decoherence without decoherence control.
the top two curves represent decoherence suppression based
on the control operator constructed in eq. (42), for n = 2
and n = 10.
single-qubit systems [16], the udd result holds even af-
ter introducing a smooth time dependence to these terms.
the proof in ref. [16] is also based on yang and liu's
work [12]. a similar proof can be done for our extension
here. take the two-qubit case with the control operator
y1 as an example. if h0 and h′ are time-dependent, then
the unitary evolution operator in eq. (20) is changed to
$$\begin{aligned}
U(T) &= (-iY_1)^N\, \mathcal{J}\!\left[ e^{-i\int_{T_N}^{T}[H_0+(-1)^N H']\,dt} \right] \mathcal{J}\!\left[ e^{-i\int_{T_{N-1}}^{T_N}[H_0+(-1)^{N-1} H']\,dt} \right] \cdots \\
&\quad \times \mathcal{J}\!\left[ e^{-i\int_{T_2}^{T_3}[H_0+H']\,dt} \right] \mathcal{J}\!\left[ e^{-i\int_{T_1}^{T_2}[H_0-H']\,dt} \right] \mathcal{J}\!\left[ e^{-i\int_{0}^{T_1}[H_0+H']\,dt} \right] \\
&= (-iY_1)^N\, \mathcal{J}\!\left[ e^{-i\int_0^T H_0\,dt} \right] \mathcal{J}\!\left[ e^{-i\int_0^T F_N(t)\, H'_I(t)\,dt} \right], \quad (47)
\end{aligned}$$
with
$$H'_I(t) = \mathcal{J}\!\left[ e^{i\int_0^t H_0\,dt'} \right] H'\, \mathcal{J}\!\left[ e^{-i\int_0^t H_0\,dt'} \right]. \quad (48)$$
because the term $\mathcal{J}\!\left[ e^{-i\int_0^T H_0\,dt} \right]$ in eq. (47) does not affect the expectation value of our coherence measure, the final expression for the coherence measure is essentially the same as before and is hence again given by its initial value multiplied by $1 + O(T^{N+1})$.
our construction of the udd control sequence is based
on a pre-determined coherence measure p|ψ⟩that charac-
terizes a certain type of quantum coherence. this implies
that our two-qubit udd relies on which type of deco-
herence we wish to suppress. indeed, this is a feature
shared by uhrig's work [7] and the yang-liu universal-
ity proof [12] for single-qubit systems (i.e., suppressing
either transverse decoherence or longitudinal population
relaxation). can we also efficiently suppress decoherence
of different types at the same time, or can we simulta-
neously preserve the quantum coherence associated with
entangled states as well as non-entangled states? this is
a significant issue because the ultimate goal of decoher-
ence suppression is to suppress the decoherence of a com-
pletely unknown state and hence to preserve the quan-
tum coherence of any type at the same time. fortunately,
for single-qubit cases: (i) there are already good insights
into the difference between decoherence suppression for a
known state and decoherence suppression for an unknown
state [17, 18] (with un-optimized dd schemes); and (ii) a
very recent study [19] showed that suppressing the longi-
tudinal decoherence and the transverse decoherence of a
single qubit at the same time in a "near-optimal" fashion
is possible, by arranging different control hamiltonians in
a nested loop structure. inspired by these studies, we are
now working on an extended scheme to achieve efficient
decoherence suppression in two-qubit systems, such that
two or even more types of coherence properties can be
preserved.
thanks to our explicit construction of the
udd control sequence for non-entangled and entangled
states, some interesting progress towards this more am-
bitious goal is being made. for example, we anticipate
that it is possible to preserve two types of quantum co-
herence of a two-qubit state at the same time, if we have
some partial knowledge of the initial state.
it is well known that decoherence effects on two-qubit
entanglement can be much different from that on single-
qubit states. one current important topic is the so-called
"entanglement sudden death" [20], i.e., how two-qubit
entanglement can completely disappear within a finite
duration.
since the efficient preservation of two-qubit
entangled states by udd is already demonstrated here,
it becomes certain that the dynamics of entanglement
death can be strongly affected by applying just very few
control pulses.
in this sense, our results on two-qubit
systems are not only of great experimental interest to
quantum entanglement storage, but also of fundamental
interest to understanding some aspects of entanglement
dynamics in an environment.
to conclude, based on a generalized polarization opera-
tor as a coherence measure, we have shown that udd also
applies to two-qubit systems and even to arbitrary multi-
level quantum systems. the associated control fidelity is still given by $1 + O(T^{N+1})$ if $N$ instantaneous control
pulses are applied. this extension is completely general
because no assumption on the environment is made. we
have also explicitly constructed the control hamiltonian
for a few examples, including a two-qubit system and a
three-level system. our results are expected to advance
both theoretical and experimental studies of decoherence
control.
v.
acknowledgments
this work was initiated by an "sps" project in the
faculty of science, national university of singapore. we
thank chee kong lee and tzyh haur yang for discus-
sions. j.g. is supported by the nus start-up fund (grant
no. r-144-050-193-101/133) and the nus "yia" (grant
no. r-144-000-195-101), both from the national univer-
sity of singapore.
[1] l. viola and s. lloyd, phys. rev. a 58, 2733 (1998).
[2] l. viola, e. knill, and s. lloyd, phys. rev. lett. 82,
2417 (1999).
[3] j. j. l. morton et al., nature physics 2, 40 (2006); j. j.
l. morton et al., nature 455, 1085 (2008).
[4] g. s. uhrig, new journal of physics 10, 083024 (2008).
[5] k. khodjasteh and d.a. lidar, phys. rev. lett. 95,
180501 (2005).
[6] k. khodjasteh and d. a. lidar, phys. rev. a 75, 062310
(2007).
[7] g. s. uhrig, phys. rev. lett. 98, 100504 (2007).
[8] b. lee, w. m. witzel, s. das sarma, phys. rev. lett.
100, 160505 (2008).
[9] m. j. biercuk, h. uys, a. p. vandevender, n. shiga, w.
m. itano, and j. j. bolinger, nature 458, 996 (2009).
[10] m. j. biercuk, h. uys, a. p. vandevender, n. shiga, w.
m. itano, and j. j. bolinger, phys. rev. a 79, 062324
(2009).
[11] j. f. du, x. rong, n. zhao, y. wang, j. h. yang, and
r. b. liu, nature 461, 1265 (2009).
[12] w. yang and r. b. liu, phys. rev. lett. 101, 180403
(2008).
[13] s. chaudhury, a. smith, b. e. andersdon, s. ghose, and
p. s. jessen, nature 461, 768 (2009).
[14] f. haake, quantum signatures of chaos, 2nd ed. (springer-verlag, berlin, 1999).
[15] see, for example, j. wang and j. b. gong, phys. rev.
lett. 102, 244102 (2009) for an extensive list of the
kicked-top model literature and its relevance to several
areas.
[16] s. pasini and g. s. uhrig, arxiv:0910.0417 (2009).
[17] w. x. zhang, n. p. konstantinidis, v. v. dobrovitski,
b. n. harmon, l. f. santos, and l. viola, phys. rev. b
77, 125336 (2008).
[18] w. x. zhang, v. v. dobrovitski, l. f. santos, l. viola,
and b.n. harmon, phys. rev. b 75, 201302 (r) (2007).
[19] j. r. west, b. h. fong, and d. a. lidar, arxiv:0908.4490v2 (2009).
[20] t. yu and j. h. eberly, science 323, 598 (2009).
|
0911.1677 | logical primes, metavariables and satisfiability | for formulas f of propositional calculus i introduce a "metavariable" mf and
show how it can be used to define an algorithm for testing satisfiability. mf
is a formula which is true/false under all possible truth assignments iff f is
satisfiable/unsatisfiable. in this sense mf is a metavariable with the
"meaning" 'f is sat'. for constructing mf a group of transformations of the
basic variables ai is used which corresponds to 'flipping" literals to their
negation. the whole procedure corresponds to branching algorithms where a
formula is split with respect to the truth values of its variables, one by one.
each branching step corresponds to an approximation to the metatheorem which
doubles the chance to find a satisfying truth assignment but also doubles the
length of the formulas to be tested, in principle. simplifications arise by
additional length reductions. i also discuss the notion of "logical primes" and
show that each formula can be written as a uniquely defined product of such
prime factors. satisfying truth assignments can be found by determining the
"missing" primes in the factorization of a formula.
| introduction
introductions to the problem of satisfiability can be found in textbooks and reviews,
some of them available in the net (see e.g. [1],[2]). one of the unsolved questions of
the field is whether satisfiability can be determined in polynomial time ("p=np ?").
other questions center around efficient techniques to determine satisfying
assignments (see [3,4] for new approaches), and to identify classes of "hard"
problems which inherently seem to consume large computing time. i believe that
some insight into the difficulties can be gained by using algebraic tools. i have
outlined some of them in a previous note [5]. in particular the notion of 'logical primes'
and the group of flipping transformations appear helpful in analyzing formulas and
deriving general theorems.
i will recall these notions and some consequences in sections i , ii and iii. then i will
introduce the metaformula and a related quantity, the parityformula, which encodes
whether f has an even or an odd number of satisfying solutions in section iv. in
sections v and vi algorithms with encreasing effectiveness in determining
satisfiability are introduced.
i. definitions
we consider a finite algebra v with two operations + and x, and denote by 1 and 0
their neutral elements, respectively, i.e.
(1)
ax1=a, a+0=a
additionally, the operations are associative and commutative, and the distributive law
(2)
ax(b+c)=axb + axc
is assumed to hold in v.
two more properties are required, namely:
3
(3)
a+a=0
(4)
axa=a
it is clear from these definitions that v may be identified with the boolean algebra of
propositional calculus, where "x" corresponds to the logical "and" and "+" to the
logical "xor" (exclusice or).
to each element of v we introduce its "negation" by
(5)
~a := a+1
from (2), (3) and (4) it is clear that ~axa = 0 as is appropriate for a negation.
ii. consequences.
as a first consequence of equ.s (1) - (5) we can state the following theorem:
(ti)
dim(v) = |v| = 2n for some natural number n
i.e. the number of elements of v is necessarily a power of 2.
this is not surprising, of course, if one has the close resemblence of v to
propositional calculus in mind. but here it is to be deduced solely from the algebraic
properties.
all proofs are given in the appendix.
in order to formulate a second consequence it is necessary to introduce the notion of
"logical primes". we define:
4
(di)
p ε v is a (logical) prime, iff
for any a ε v pxa=0 implies a=0 or a=~p.
if not clear by definition, the name "prime" will become clear by the following
theorems
(tii)
there are exactly ld|v|=n many primes in v. and:
(tiii) each element of v has a unique decomposition into primes:
(6)
a = πjpj where the product refers to the x-operation, and jειa, and ιa=ιb iff a=b
this property can be formulated alternatively with the negated primes ~pj via
(7)
a = σj ~pj with jε cιa (cιa is the complement of ιa in {0,1,..., n-1} )
the neutral elements 0 and 1 are special cases. 1 is expressed as the empty
product according to (6), whereas the sum extends over all primes. for 0 the sum-
representation is empty, but the product extends over all possible primes.
a property which is extremely helpful in calculations is
(8)
~pjx~pk = ~pk δjk (δjk = 1 iff j=k, 0 otherwise)
which with the aid of (5) can be written
pjxpk = pj + ~pk = ~pj + pk for k=j
note, that no use has been made of the correspondence of {v,+,x, 0 , 1 } to
propositional calculus, up to now. we can even proceed further and define the
analogue of truth assignments. consider the set of maps t:v {0,1} . we call t
5
"allowed" iff there is a relationship between the image of a "sum" or a "product" and
the image of the single summands or factors. in formula:
(9)
t(a+b) = f(t(a),t(b)) and t(axb) = g(t(a),t(b))
with some functions f and g and all a,bεv.
these relations suffice to show theorem iv
(tiv) there are exactly n different allowed maps tj , and they fulfill:
(10)
tj(~pk) = δjk
given functions f and g of (9) one can also use (10) as a definition and extend tj to
all elements of v via (7).
in one last step we assume n=2n for some natural number n. then
(tv) n distinct elements ak ( different from 0, 1) can be found, such that
(11)
~ps = (πjsjaj)( πk(1-sk)~ak) where s= σr2r-1sr is the binary representation of s.
in words: each element of v can be written as a "sum" of "products" of all ak and ~ak.
e.g. for n=3 one has p2=a2x~a1x~a3 as one of the eight primes.the ak are not
necessarily unique. e.g., for n=3, given ak, the set a1,a3, a1x~a2+~a1xa2 will serve the
same purpose (with a different numbering convention in (11)).
iii. propositional calculus.
propositional calculus (pc) consists of infinitely many formulas which can be
constructed from basic variables ak with logical functions (like "and", "or" and
6
negation). even for a finite set of n basic variables bn={a1,a2,...an} there are infinitely
many formulas arizing from combinations of the basic variables. these formulas can
be grouped into classes of logically equivalent formulas. that is, formulas f and f'
belong to the same class iff their values under any truth assignment t:bn {0,1} are
the same. members of different classes are logically inequivalent, i.e. there is at least
one truth assignment for which their values differ. this finite set of classes for fixed n
can be identified with the algebra v of the foregoing section. neutral elements of the
operations x and +, 1 and 0, are interpreted as complete truth and complete
unsatisfiability.
in order to see how operations + and x correspond to logical operations "and" and
"or" we define a new operation v in v via
(12)
a v b = a + b + axb
with this definition the defining relations (1) - (5) can be reformulated in terms of v
and x, and the algebraic structure of a boolean algebra for formulas becomes
obvious. v is the logical "or", x the logical "and".
relation (12) reduces logical considerations to simple algebraic manipulations in
which + and x can be used as in multiplication and addition of numbers. additionally
the simplifying relations a+a=0 = ~axa and axa=a, a+~a=1 hold. consider for
illustration the so called "resolution" method. it states that avb and ~avc imply bvc. a
"calculational" proof of this statement might run as follows. (from now on we skip the
x-symbol for multiplication) we make use of the fact that in pc the implication a
b is identical to ~a v b :
7
(avb)(~avc) bvc = ~((avb)(~avc))vbvc =
(1+(a+~ab)(~a+ca))+b+c+bc+(b+c+bc)( 1+(a+~ab)(~a+ca)) = 1 +ac+~ab +
+(b+c+bc)(a+~ab)(~a+ca) = 1 +ac+~ab+abc+~ab+~bac = 1 +ac(1 +b+~b) = 1
in other words: the implication is a tautology ( true under all truth assignments) as
claimed.
tiii and tv tell us that each formula f of pc has a unique decomposition into a "sum"
of "products" of its independent variables ak. because of (8) and (12) the sum in (7)
may be written as a "v"-sum. thus (7) takes the form of a disjunctive normal form
(dnf) and it can as well be transformed into a conjunctive normal form (cnf) as
given by (6). for the neutral element 0 one has
(13)
0 = (a1va2v...van)x(~a1v...an)x...x(~a1v...~an)
with all possible primes. according to (6) each formula f has a similar representation,
but with some prime factors missing. from the primes present one can immediately
read off the truth assignments for which f evaluates to 0, thus the missing factors
give the truth assignments for which f is satisfiable.
note, however, that each factor in the prime representation of a formula involves all
ak . so one way of determining satisfying assignments or test a formula for
satisfiability consists of transforming a given cnf representation of the formula to its
standard form (6). this can be done e.g. by "blowing up" each factor until all ak are
present. e.g. avbv~c = (avbv~cvd)(avbv~cv~d) from 3 to 4 variables. since each
new factor has to be treated in the same way, until n is reached, this is a o(2n) -
8
process in principle, which makes the difficulty in finding a polynomial time algorithm
for testing satisfiability understandable.
also from (7) with (10) and (8) it follows that the satisfying assignments of a formula
f= σj ~pj are given by the negated primes which do not show up in the cnf
representation. in particular, the number of satisfying assignments is equal to the
number of summands in this equation. furthermore, they can be read off
immediately, since, according to (10) ts(f) = 1 iff the corresponding ~ps shows up in
the sum. also the ts must coincide with the 2n possible truth assignments t :bn
{0,1}. one may choose the numbering such that the values of ts on bn are given by
the binary representation s= σr2r-1ts(ar).
as a last example for the usefulness of the algebraic approach we consider the
number of satisfying assignments of a formula f of pc , #(f) and show that this
number does not change if some (or all) of the variables ak are "flipped", i.e.
substituted by their negation and vice versa:
(14)
#(f(a1,...,an)) = #(f(a1,...~ai,...~aj,...))
to prove this "conservation of satisfiability" we consider a group of transformations
{r0,...rn-1} which negate the ak according to the following definition: rs negates all ar
(and ~ar likewise) for which sr in the binary representation of s is non zero. in formula,
for any truth assignment tj
(15)
tj(rs(ar)) = (1-sr) tj(ar) + sr(1- tj(ar)) and s= σr2r-1sr.
9
it is easy to see that the rs form a group with r0 = id , and each rs induces a
permutation πs of of the ~pj which is actually a transposition given by
(15') πs(j) = s + j - 2σr2r-1srjr =: (s,j)
thus rs simply permutes the primes pk and therefore in the representation of f in (6)
or (7) their number is not changed. the fact may also be stated as
(16)
tj(rs(f)) = t (s,j) (f)
and therefore #(f)= σjtj(f) = σjtj(rs(f)) = #(rs(f)) which proves (14).
one may also conclude from (16) that for satisfiable f each rs(f) is satisfiable.
more precise: if tj(f) =1 for some j, then for any k there is a flipping operation rs
such that tk(rs(f))=1, namely s=(k,j). likewise, for any rs one can find a tk such that
tk(rs(f))=1.
on the other hand, if f is not satisfiable, none of the rs(f) can be satisfiable,
otherwise one would have tk(rs(f))=1 for some k and thus tj(f) =1 for some j,
contrary to the assumption that f is not sat.
iv. the metaformula.
for any formula f of n variables we write f(a1,...,an) and define the metaformula by
"adding" with respect to the or-operation all "flipped" versions of f:
(17)
mf(a1,...,an) = r0(f) v r1(f) v ... v rn-1(f)
10
where n= 2n. from the considerations at the end of the foregoing section it is
immediately clear that mf is not satisfiable if f is not, and that mf is a tautology if f is
satisfiable:
(18)
mf = 1 iff fε sat; mf = 0 iff fε sat
thus, considered as a logical variable itself, mf represents the satisfiability of f. mf
can only take the two values 0 and 1 depending on whether f is sat or not. that is
why i call mf a metatheorem or metaformula.
very similarly one may introduce a "parityformula" pf in substituting the or-operation
in the definition (17) by the exclusive xor. analogously to (18) one can show that
pf = 0 iff pf has an even number of satisfying truth assignments,
pf = 1 iff pf has un odd number of satisfying assignments.
v. sat algorithm.
we now turn to the question how mf can be utilized to formulate sat algorithms.
since either mf = 1 or mf = 0 it is sufficient to test one single truth assignment in
order to determine whether f is sat or not. thus the satisfiability of f can be
determined in linear time in the length of mf. nothing is gained so far, however, since
the length of mf is of order n times the length of f. thus, instead of testing all n tj on
f to determine its satisfiability in the metatheorem approach one first constructs an
order-n variant of f and checks it with a single tj.
simplifications may arise, however in the process of constructing mf.
11
note first that mf can be constructed in n steps. for this purpose consider the shift
operator
(19)
d(k)(f) = f v rq(f) with q=2k-1 and k=1,...,n
note that all n operators rs can be generated by the n operators rq with q=2k-1 and
k=1,...,n. e.g. r29 = r16r8r4r1.
furthermore it is easy to see that rq flips the variable ak and therefore d(k) is
independent of ak, and (19) may be rewritten as
(20)
d(k)(f) = f(a1,..., ak= 1, ...,an) v f(a1,..., ak= 0, ..., an).
in terms of shift operators mf may be rewritten as
(21)
mf = dn(dn-1(...d2(d1(f)...) = :d(n)...d(1)(f)
we can now consider systematic approximations on mf. namely the series of lth order
approximations
(22)
mf
(l) = d(l)...d(1)(f) = d(l)(mf
(l-1)) ; mf
(0) = f.
from this definition we may write mf
(l) as
(23)
mf
(l) = f v r1(f) v ... v rq-1(f) with q=2l.
we will show next that a properly chosen truth assignment for testing the l-th
approximation can give a wealth of information. check tq with q=2l on mf
(l). let us
assume that tq(mf
(l)) = 1. then one of the ri(f) is true under that truth assignment,
therefore there is a truth assignment which satisfies f, thus f is satisfiable. if on the
other hand tq(mf
(l)) = 0, then we may conclude the following: tq(ri(f)) = 0 for all i
12
ε {1, 2,..., q}. therefore also t(i,q)(f) = 0 according to (16). in conclusion, see (15') for
the definition of (i,q):
(24)
if tq(mf
(l)) = 0 then f is not satisfied by truth assignments tk for
k ε {q, q+1,..., 2q-1} (q= 2l)
an effective check for satisfiability of f may therefore run as follows:
checksat [f,n]
set s=1
1
set f = d(s)(f)
if ts(f)=1 then stop and return "f is sat"
s=s+1
if s=n then stop and return "f is not sat"
goto 1
checksat determines satisfiability in n steps each of which is linear in the length of
the formula. in each step the number of excluded truth assignments is doubled, as
well as the chance to find a satisfying assignment if there is one. however, the
formulas to be checked become longer and longer in each step, therefore it remains
an order-n process in principle. a look at (20) reveals that the procedure
corresponds to a successive elimination of variables. further optimizations require
length reductions in the formulas mf
(l) which arise in the approximation process.
vi. length reduction.
in this section we assume f to be given in conjunctive normal form (cnf):
13
(25)
f = c1c2...cm
with m clauses of the form
(26)
c = l v r
where l is a literal corresponding to one of the variables ak or its negation, and r may
itself be written in the form (26) and so forth until r is a literal.
in the process of eliminating variables described in the foregoing section the following
well known rules can help to reduce formulas in length.
(a)
(lvr)(~lvr) = r
(b)
(lvr)(~lvr') = lr' v ~lr
(27)
(c)
l(~lvr) = lr
(d)
(lvr)(lvr') = l v ~lrr'
(e)
l(lvr) = l
in a cnf-formula one encounters terms of the form (lvr1)...(lvrs)(~lvs1)...(~lvst)
which may be rewritten by the aid of (27):
(28)
f(l):= (lvr1)...(lvrs)(~lvs1)...(~lvst) = ls1s2...st v ~lr1r2...rs
note that the cnf form on the l.h.s. is split into a disjunction of two cnf-formulas on
the r.h.s.. the variable l does not show up in the r an s by definition. if we eliminate
the variable l, which is exactly what happens when the shift operator d is applied,
one reads off the r.h.s.:
(29)
f(l) v f(~l) = s1s2...st v r1r2...rs
14
a disjunction of cnf-forms independent of l.
in any practical application of the approximation algorithm outlined in the foregoing
section to a cnf-formula g one might proceed as follows: collect all clauses with
variable a1 and ~a1. call the remaining factor gr. then one has (in the notation of
(28) with l=a1)
(30)
d(1)(g) = g v r1(g) = g(a1,...) v g(~a1,...)
= (s1s2...st v r1r2...rs)gr
where neither the r and s nor gr depend on a1. the collection procedure is
polynomial and the resulting formula is not longer than the original one in terms of
symbols. but it is not a cnf formula anymore. if one wants to repeat the process and
apply the same rules, one has to split (30) into two cnf formulas and apply the
procedure to each. now in effect the formula length has doubled (nearly) and one
encounters the exponential behaviour typical of np-problems. simplifications might
arise from the s and r factors, however. all of them are shorter than the clauses one
started with because they do not contain a1 anymore. if an s or r is reduced to a
single varible l, the application of (27b) can eliminate several clauses in one stroke.
from this consideration it becomes clear that an effective algorithm will involve a
clever choice of consecutive variables.
conclusion.
two new formal tools to deal with propositional calculus and the problem of
satisfiability were discussed; namely the notion of logical primes [5 ] and the
15
metaformula. it was shown that each equivalence class of boolean formulas has a
unique representation as a product of logical primes. therefore the satisfiability of a
formula can be formulated as a problem of prime factorization.
the notion of the metavariable or metaformula enables one to formulate well known
procedures for determining satisfiability in a systematic manner. a simple program
was formulated which checks for sat in n (number of basic variables) linear steps.
nonetheless the procedure cannot do the job in polynomial time because the length
of the formula to be checked in each step basically doubles. steps to optimize the
procedure by proper length reductions were indicated.
appendix
the proofs for theorems (ti) to (tv) are straightforward and only basic ideas will be
sketched here.
proof of ti: for n=1 v consists only of the trivial elements 0 and 1 . thus we assume
|v|>2. for some nontrivial s define ks={a|axs=0 }. obviously ~s and 0 ε ks.
analogously for k~s. it is easy to show that ks and k~s are subgroups of v with respect
to + , and both have only 0 in common. thus each a ε v has a unique decomposition
a=u+v where u ε ks and v ε k~s . let | ks |=ns, and | k~s |= n~s. next we count
elements which do not belong to ks or k~s. define:
eks(u0) = {u0+v| v ε k~s\ 0} with u0 ε ks. | eks(u0) | = n~s-1 from the definition. next
one shows that eks(a) and eks(b) have no elements in common unless a=b. thus
|v|= ns-1+ n~s +|σu eks(u) |= ns-1+ n~s +( ns-1)|eks(u)|= ( ns-1)(1+ n~s-1)+ n~s
= ns n~s .
since both ks and k~s are subfields of v (with neutral elements ~s and s with respect
to x) one can apply the same line of argument to each of them until one reaches the
trivial field v0={ 0, 1} which has |v0|=2. thus both ns and n~s , and therefore |v| is a
power of 2.
next the proof of (tii) can proceed via induction over N = ld(|v|). again one considers the subfields ks and k~s of a v with |v| = 2^(N+1) and their sets of
primes pj and qj which exist by assumption. then one shows that all pj + s are primes
in v, and qj + ~s dto. furthermore one can show that no two of these primes of v or
their negations coincide, and, secondly, that any possible prime of v is necessarily
one of them. thus the pj + s and qj + ~s constitute the set of primes of v, and their
number is by assumption ld(ns)+ld(n~s) = n+1.
the fact that different negated pk are orthogonal, equ. (8), is proven as follows:
for i=j pjx~pi ε kpi by definition of k. but since pi is prime, kpi = { 0,~pi}. thus
either pjx~pi = 0 which implies (because also pj is prime) that ~pi is either 0 or equal
to ~pj both in contradiction to assumptions, therefore : or pjx~pi = ~pi . which is
equivalent to the claim.
along the same line of thought - considering ks and k~s for s=some prime element of
v - it can be proven that each element of v has a unique decomposition into primes,
equ. (7) or (6).
proof of (tiv).
first note that both functions f(x,y) and g(x,y) in equ. (9) can take values 0 or 1 only,
and they are symmetric because of the commutativity of the operations x and +. then
from (1) and (9) setting t(a)=0 or 1 respectively one gets
0 = g(0,t(1 )) = g(t(1 ),0) and 1 = g(1,t(1 )) = g(t(1 ),1) and
t(0) = g(1, t(0)) = g(0, t(0 )) from ax 0= 0 .
if one chooses t(0) = 0 then t(1) = 0 leads to a contradiction, as well as setting both
values equal to 1. one is left with the choice
(a)
t(0 ) = 0 and t(1) = 1
(b)
t(0 ) = 1 and t(1) = 0
we adopt choice (a) in the following. as a consequence
0=g(0,1) = g(1,0) = g(0,0) and 1 = g(1,1) and, from (1) for +
0=f(0,0)=f(1,1) and 1=f(1,0)=f(0,1) .
let t be fixed. because of (8): 0=g(t(~p),t(~q)) for different p,q. thus either
t(~p)=t(~q)=0 or the two assignments have different value. if t(~pk)=0 for all k, one
gets a contradiction to 1=σk~pk and 0=f(0,0). thus at least for one k t(~pk)=1. but
then for all other j t(~pj)=0 because of 0=g(0,1) and the orthogonality relation (8).
thus for each t there is exactly one ~pk with truth assignment 1, and all other ~p
giving 0. now consider two different maps t, t' with t(~pk)=1 and t'(~pl)=1. then k
and l must be different, otherwise the two maps would coincide. repeating this
argument with a third t'' and so on leads to the conclusion that there are exactly as
many allowed maps as there are primes. we can label the maps as we would like to,
so the most natural choice is equ. (10).
as for theorem v, the easiest way to prove the existence of the n = ld(N) elements a_k is to construct them from the uniquely defined primes:
a_r = Σ_i Σ_s Σ_l ~p_i δ(i, s + 2^r l)
where δ is the kronecker δ and the s and l sums run from 2^(r-1) to 2^r - 1 and from 0 to 2^(n-r) - 1, respectively. constructing them inductively is more instructive because one
encounters choices which lead to different sets of ak. the seemingly complicated
formula above is obsolete once one uses the binary representation of all quantities
which is given by the bijection f → t_{N-1}(f) ... t_i(f) ... t_0(f) for any f. in
particular the ai take the simple form:
a1 = ....1010101010101010
a2 = ....1100110011001100
a3 = ....1111000011110000
and so on.
references.
[1] welzl, emo: boolean satisfiability – combinatorics and algorithms. lecture script,
http://www.inf.ethz.ch/~emo/smallpieces/sat.ps
[2] cook, s. a., and mitchell, d. g.: finding hard instances of the satisfiability
problem: a survey. imacs series in discrete mathematics and theoretical
computer science, vol 5, 1997.
[3] schuh, b. r.:testing satisfiability in polynomial time, submitted for publication in
international journal of unconventional computing
[4] schuh, b. r.: mean value satisfiability, to be published
[5] schuh, b. r.: algebraic properties of propositional calculus, arxiv:0906.2133v1
|
0911.1678 | industrial-strength formally certified sat solving | boolean satisfiability (sat) solvers are now routinely used in the
verification of large industrial problems. however, their application in
safety-critical domains such as the railways, avionics, and automotive
industries requires some form of assurance for the results, as the solvers can
(and sometimes do) have bugs. unfortunately, the complexity of modern, highly
optimized sat solvers renders impractical the development of direct formal
proofs of their correctness. this paper presents an alternative approach where
an untrusted, industrial-strength, sat solver is plugged into a trusted,
formally certified, sat proof checker to provide industrial-strength certified
sat solving. the key novelties and characteristics of our approach are (i) that
the checker is automatically extracted from the formal development, (ii), that
the combined system can be used as a standalone executable program independent
of any supporting theorem prover, and (iii) that the checker certifies any sat
solver respecting the agreed format for satisfiability and unsatisfiability
claims. the core of the system is a certified checker for unsatisfiability
claims that is formally designed and verified in coq. we present its formal
design and outline the correctness proofs. the actual standalone checker is automatically extracted from the coq development. an evaluation of the certified checker on a representative set of industrial benchmarks from the sat race competition shows that, albeit slower than uncertified sat checkers, it is significantly faster than certified checkers implemented on top of an interactive theorem prover.
| introduction
advances in boolean satisfiability (sat) technology have made it possible for sat
solvers to be routinely used in the verification of large industrial problems, including
safety-critical domains that require a high degree of assurance such as the railways,
avionics, and automotive industries [9,15]. however, the use of sat solvers in such do-
mains requires some form of assurance for the results. this assurance can be provided
in two different ways.
first, the solver can be proven correct once and for all. however, this approach
had limited success. for example, lescuyer et al. [11] formally designed and verified
a sat solver using the coq proof-assistant [2], but without any of the techniques and
optimizations used in modern solvers. reasoning about these optimizations makes the
[figure 1 here: an untrusted, proof-generating sat solver (industrial strength, large & complex, ad-hoc) passes the cnf problem and its proof to a trusted, formally certified sat checker (standalone executable, small & clear, proof checking), which answers yes/no; together they form the industrial-strength certified sat solver.]
fig. 1: high-level view of an industrial-strength formally certified sat solver.
formal correctness proofs exceedingly hard. this was shown by the work of marić [12],
who verified the algorithm used in the argo-sat solver but restricted the verification
to the pseudo-code level, and in particular, did not verify the actual solver itself. in
addition, the formal verification has to be repeated for every new sat solver (or even a
new version of a solver), or else the user is locked into using the specific verified solver.
alternatively, a proof checker can be used to validate each individual outcome of the
solver independently; this requires the solver to produce a proof trace that is viewed as a
certificate justifying the outcome of the solver. this approach was used to design several
checkers such as tts, booleforce, picosat and zchaff [16]. however, these checkers
are typically implemented by the developers of the sat solvers whose output they are
meant to check, which can lead to bugs being masked, and none of them was formally
designed or verified, which means that they provide only limited assurance.
the problems of both approaches can be circumvented if the checker rather than
the solver is proven correct, once and for all. this is substantially simpler than proving
the solver correct, because the checker is comparatively small and straightforward. it
does not lead to a system lock-in, because the checker can work for all solvers that can
produce proof traces (certificates) in the agreed format. this approach was followed
by weber and amjad [20] in their formal development of a proof checker for zchaff
and minisat proof traces. their core idea is to replay the derivation encoded in the
proof trace inside lcf-style interactive theorem provers such as hol 4, isabelle, and
hol light. since the design and implementation of these provers is based on a small
trusted kernel of inference rules, assurance is very high. however this comes at the
cost of usability: their checker can run only inside the supporting prover, and not as a
standalone tool. moreover, performance bottlenecks become prominent when the size
of the problems increases.
here, we follow the same general idea of a formally certified proof checker, but
depart considerably from weber and amjad in how we design and implement our solu-
tion. we describe an approach where one can plug an untrusted, industrial-strength sat
solver into a formally certified sat proof checker to provide an industrial-strength cer-
tified sat solver. we designed, formalized and verified the sat proof checker for both
satisfiable and unsatisfiable problems. in this paper, we focus on the more interesting
aspect of checking unsatisfiable claims; satisfiable certificates are significantly easier to
formally certify. our certified checker shruti is formally designed and verified using
the higher-order logic based proof assistant coq [2], but we never use coq as a checker;
instead we automatically extract an ocaml program from the formal development that
is compiled to a standalone executable that is used independently of coq. in this regard,
our approach prevents the user from being locked in to a specific proof assistant, something that was not possible with weber and amjad's approach. a high-level architectural view
of our approach is shown in figure 1. since it combines certification and ease-of-use, it
enables the use of certified checkers as regular components in a sat-based verification
work flow.
2 propositional satisfiability
2.1 satisfiability solving
given a propositional formula, the goal of satisfiability solving is to determine whether
there is an assignment of the boolean truth values (i.e., true, false) to the variables in
the formula such that the formula evaluates to true. if such an assignment exists, the
given formula is said to be satisfiable or sat, otherwise the formula is said to be unsat-
isfiable or unsat. many problems of practical interest in system verification involve proving unsatisfiability, one concrete example being bounded model checking [4].
for efficiency purposes, sat solvers represent the propositional formulas in con-
junctive normal form (cnf), where the entire formula is a conjunction of clauses. each
clause itself denotes a disjunction of literals, which are simply (boolean) variables or
negated variables. an efficient cnf representation uses non-zero integers to represent
literals. a positive literal is represented by a positive integer, whilst a negated one is
denoted by a negative integer. as an example, the (unsatisfiable) formula
(a ∨ b) ∧ (¬a ∨ b) ∧ (a ∨ ¬b) ∧ (¬a ∨ ¬b)
over two propositional variables a and b can thus be represented as
 1  2 0
-1  2 0
 1 -2 0
-1 -2 0
the zeroes are delimiters that separate the clauses from each other.
sat solvers take a boolean formula, for example represented in the dimacs no-
tation used here, and produce a sat/unsat claim. a proof-generating sat solver
produces additional evidence (or certificates) to support its claims. for a sat claim,
the certificate simply consists of an assignment. it is usually trivial to check whether that assignment (and thus the original sat claim) is correct: one simply substitutes the boolean values given by the assignment in the formula and then evaluates the overall formula, checking that it is indeed true. for unsat claims, the evidence is more
complicated, and the solvers need to return a resolution proof trace as certificate. un-
surprisingly, checking these unsat certificates is more complicated as well.
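as a small illustration of this check (our sketch, not code from the paper), clauses can be kept exactly in the integer encoding above and a sat certificate validated by direct evaluation; all names below are ours:

(* the example formula, one clause per list of non-zero integers *)
let cnf = [ [1; 2]; [-1; 2]; [1; -2]; [-1; -2] ]

(* a sat certificate maps each variable (a positive integer) to a boolean *)
let literal_true assign l = if l > 0 then assign l else not (assign (-l))
let clause_true assign c = List.exists (literal_true assign) c
let formula_true assign f = List.for_all (clause_true assign) f

(* e.g. formula_true (fun _ -> true) cnf evaluates to false, as it must for this unsat formula *)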
2.2 proof checking
when the solver claims a given problem is unsat, we can independently re-play the
proofs produced by the solver, to check that the solver's output is correct. for a given
problem, if we can follow the resolution inferences given in the proof trace to derive an
empty clause, then we know that the proof trace correctly denotes an unsat instance
for the problem, and we can conclude that the given problem is indeed unsat.
a proof trace consists of the original clauses used during resolution and the interme-
diate resolvents obtained by resolving the original input clauses. the part of the proof
trace that specifies how the input clauses have been resolved in sequence to derive a
conflict is organized as chains. these chains are often referred to as the regular input
resolution proofs, or the trivial proofs [1,3]. we call the input clauses in a chain its an-
tecedents and its final resolvent simply its resolvent. designing an efficient checking
methodology relies to some extent on the representation of the proof trace produced
by the solvers. representing proof chains as trivial resolution proofs is a key constraint [3]. a trivial resolution proof requires that the clauses forming the proof be aligned in the trace in such a manner that whenever a pair of clauses is used for resolution, at most one complementary pair of literals is deleted. another
important constraint that is needed for efficiency reasons is that at least one pair of
complementary literals gets deleted whenever any two clauses are used in the trace for
resolution. this is needed because we want to avoid search during checking, and if the
proof trace respects these two criteria we can design a checking algorithm that validates the proofs in a single pass and is linear in the size of the input clauses.
2.3 picosat proof representation
most proof-generating sat solvers [3,7,23] respect these two criteria. we carried
out our first set of experiments with picosat [3]. picosat ranked as one of the best
solvers in the industrial category of the sat competitions 2007 and 2009, and in the
sat race 2008. moreover, picosat's proof representation is in ascii format. it was
reported by weber and amjad [22] that some other solvers such as minisat generate
proofs in a more compact, but binary format.
another advantage of picosat's proof trace format, besides using an ascii format, is its simplicity. it does not record information about pivot literals, as some other solvers such as zchaff do. it is however straightforward to develop translators for
formats used by other sat solvers [19]. as a proof of concept, we developed a translator
from zchaff's proof format to picosat's proof format, which can be used for validating
unsatisfiability proofs obtained with zchaff.
a picosat proof trace consists of rows representing the input clauses, followed by
rows encoding the proof chains. each row representing a chain consists of an asterisk
(*) as place-holder for the chain's resolvent, followed by the identifiers of the clauses involved in the chain. (this is the default output of picosat; there is another option where, instead of the asterisk, the actual resolvents are generated, delimited by a single zero from the rest of the chain.) each chain row thus contains at least two clause identifiers, and denotes one or more applications of the resolution inference rule, describing a trivial resolution derivation. each row also starts with a non-zero positive integer denoting the identifier for that row's (input or resolvent) clause. in an actual trace there are additional zeroes as delimiters at the end of each row, but we remove these before
we start proof checking. for the unsat formula shown in the previous section, the
corresponding proof trace generated from picosat looks as follows:
1   1  2
2  -1  2
3   1 -2
4  -1 -2
5   *  3 1
6   *  4 2 5
the first four rows denote the input clauses from the original problem (see above)
that are used in the resolution, with their identifiers referring to the original clause
numbering, whereas rows 5 and 6 represent the proof chains. in row 5, the clauses with identifiers 3 and 1 are resolved using a single resolution step, whilst in row 6 first the original clauses with identifiers 4 and 2 are resolved and then the resulting clause is resolved against the clause denoted by identifier 5 (i.e., the resolvent from the previous chain), in total using two resolution steps.
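to spell this out on the clauses themselves (our reading of the trace): row 5 resolves clause 3, i.e. (1 -2), with clause 1, i.e. (1 2); the complementary pair -2/2 is deleted and the duplicated literal 1 is factored, giving the unit clause (1). row 6 first resolves clause 4, i.e. (-1 -2), with clause 2, i.e. (-1 2), giving (-1), and then resolves this against the row-5 resolvent (1); the pair -1/1 is deleted and the empty clause is derived, which is exactly the evidence required for the unsat claim.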
picosat by default creates a compacted form of proof traces, where the antecedents
for the derived clauses are not ordered properly within the chain. this means that there
are instances in the chain where we resolve a pair of adjacent clauses and no literal is
deleted. this violates one of the constraints we explained above, thus we cannot deduce
an existence of an empty clause for this trace unless we re-order the antecedents in the
chain.
picosat comes with an uncertified proof checker called tracecheck that can not
only check the outcome of picosat but also corrects the mis-alignment of traces. the
outcome of the alignment process is an extended proof trace and these then become the
input to the certified checker that we design.
3 the shruti certified proof checker
our approach to efficient formally certified sat solving relies on the use of a certified
checker program that has been designed and implemented formally, but can be used in
practice, independently of any formal development environment. rather than verifying
a separately developed checker we follow a correct-by-construction approach in which
we formally design and mechanically verify a checker using a proof assistant to achieve
the highest level of confidence, and then use program extraction to obtain a standalone
executable.
our checker shruti takes an input cnf file, which contains the original problem description, and a proof trace file, and checks (i) whether the two together denote an unsat instance, and (ii) whether they are "legitimate". our focus here is on (i), where
we check that each step of the trace is correctly applying the resolution inference rule.
however, in order to gain full assurance about the unsat claim, we also need to check
that all input clauses used in the resolution in the trace are taken from the original
problem, i.e., that the proof trace is legitimate. shruti provides this as an option, but
as far as we are aware most checkers do not do this check.
our formal development follows the lcf style [17], and in particular only uses definitional extensions, i.e., new theorems can only be derived by applying previously derived inference rules. axiomatic extensions, though possible, allow one to assume the existence of a theorem without a proof; thus, we never use them in our own work. we use the coq proof assistant [2] as a development tool. coq is based on
the calculus of inductive constructions and encapsulates the concepts of typed higher-
order logic. it uses the notion of proofs as types, and allows constructive proofs and use
of dependent types. it has been successfully used in the design and implementation of
large scale certification of software such as in the compcert [10] project.
for our development of shruti, we first formalize in coq the definitions of res-
olution and its auxiliary functions and then prove inside coq that these definitions are
correct, i.e., satisfy the properties expected of resolution. once the coq formalization is
complete, ocaml code is extracted from it through the extraction api included in the
coq proof assistant. the extracted ocaml code expects its input in data structures such
as tables and lists. these data structures are built by some glue code that also handles
file i/o and pre-processes the read proof traces (e.g., removes the zeroes used as sepa-
rators for the clauses). the glue code wraps around the extracted checker and the result
is then compiled to a native machine code executable that can be run independently of
the proof-assistant coq.
3.1 formalization in coq
in this section we present the formalization of shruti. its core logic is formalized
as a shallow embedding in coq. in a shallow embedding we identify the object data
types (types used for shruti) with the types of the meta-language, which in our case
happens to be the coq datatypes.
since most checkers read the clause representation as integers (e.g., using the dimacs notation), we incorporate integers as first-class elements in our formalization, so we do not have to map literals to booleans. thus, inside coq, we denote literals by
integers, and clauses by lists of integers. antecedents (denoting the input clauses) in a
proof chain are represented by integers and a proof chain itself by a list of integers. a
resolution proof is represented internally in our implementation by a table consisting
of (key,binding) pairs. the key for this table is the identifier obtained from the proof
chains read from the input proof trace file. the binding of this table is the actual re-
solvent obtained by resolving the clauses specified (in the proof trace). when the input
proof trace is read, the identifier corresponding to the first row of the proof chain be-
comes the starting point for resolution checking. once the resolvent is calculated for this row, the process is repeated for all the remaining rows of the proof chain, until we reach the end of the trace input. if the identifier for the last row of the proof chain denotes an empty resolvent, we conclude that the given problem and its trace represent an unsat
instance.
we use the usual notation for quantifiers (∀, ∃) and logical connectives (∧, ∨, ¬) but
distinguish implication over propositions (⊃) and over types (→) for presentation clar-
ity, though inside coq they are exactly the same. the notation ⇒is used during pattern
matching (using match −with) as in other functional languages. for type annotation
we use :, and for the cons operation on lists we use ::. the empty list is denoted by nil.
the set of integers is denoted by z, the type of polymorphic list by list and the list
of integers by list z. list containment is represented by ∈ and its negation by ∉. the
function abs computes the absolute value of an integer. we use the keyword definition
to present our function definitions. it is akin to defining functions in coq. main data
structures that we used in the coq formalization are lists, and finite maps (hashtables
with integer keys and polymorphic bindings).
we define our resolution function (▷◁) with the help of two auxiliary functions
union and auxunion. both functions compute the union of two clauses represented
as integer lists, but differ in their behavior when they encounter complementary literals:
whereas union removes both literals and then calls auxunion to process the remainder of the lists, auxunion copies both literals into the output and thus produces a tautological clause. ideally, if the sat solver is sound and the proof trace reflects the sound
outcome, then for any pair of clauses that are resolved, there will be only one pair of
complementary literals and we do not need two functions. however in reality, a solver
or its proof trace can have bugs and it can create instances of clauses in the trace with
multiple complementary pairs of literals. hence, we employ the two auxiliary functions
to ensure that the resolution function deals with this in a sound way.
we will later explain in more detail the functionality of the auxiliary functions but
both functions expect the input clauses to respect three well-formedness criteria: there
should be no duplicates in the clauses (nodup); there should be no complementary
pair of literals within any clause (nocomppair), and the clauses should be sorted by
absolute value (sorted). the predicate wf encapsulates these properties.
definition wf c = nocomppair c ∧nodup c ∧sorted c
the assumptions that there are no duplicates and no complementary pair of literals
within a clause are essentially the constraints imposed on input clauses when the reso-
lution function is applied in practice. sorting is enforced by us to keep the complexity
of our algorithm linear.
the union function takes a pair of sorted (by absolute value) lists of integers, and an
accumulator list, and computes the resolvent by doing a pointwise comparison on input
literals.
definition union (c1 c2 : list z)(acc : list z) =
match c1, c2 with
| nil, c2 ⇒app (rev acc) c2
| c1, nil ⇒app (rev acc) c1
| x :: xs, y :: ys ⇒if (x + y = 0) then auxunion xs ys acc
else if (abs x < abs y) then union xs (y :: ys)(x :: acc)
else if (abs y < abs x) then union (x :: xs) ys (y :: acc)
else union xs ys (x :: acc)
we already pointed out above that this function and the auxiliary union function auxunion (shown below) that it employs differ in behaviour if the literals being
compared are complementary. however, when the literals are non-complementary, if
they are equal, only one copy is put in the resolvent whilst when they are unequal both
are kept in the resolvent. once one of the clauses is exhausted, the other
clause is merged with the accumulator. actual sorting in our case is done by simply
reversing the accumulator (since all elements are in descending order).
definition auxunion (c1 c2 : list z)(acc : list z) =
match c1, c2 with
| nil, c2 ⇒app (rev acc) c2
| c1, nil ⇒app (rev acc) c1
| x :: xs, y :: ys ⇒if (abs x < abs y) then auxunion xs (y :: ys) (x :: acc)
else if (abs y < abs x) then auxunion (x :: xs) ys (y :: acc)
else if x=y then auxunion xs ys (x :: acc)
else auxunion xs ys (x :: y :: acc)
we can now show the actual resolution function, denoted by ▷◁ below. it makes use of the union function.
definition c1 ▷◁ c2 = (union c1 c2 nil)
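for concreteness, a direct ocaml transcription of these definitions (our illustration; the actual extracted code is generated by coq) behaves as follows on small clauses:

(* clauses are duplicate-free, tautology-free lists of non-zero integers, sorted by absolute value *)
let rec aux_union c1 c2 acc = match c1, c2 with
  | [], c2 -> List.rev_append acc c2
  | c1, [] -> List.rev_append acc c1
  | x :: xs, y :: ys ->
      if abs x < abs y then aux_union xs (y :: ys) (x :: acc)
      else if abs y < abs x then aux_union (x :: xs) ys (y :: acc)
      else if x = y then aux_union xs ys (x :: acc)
      else aux_union xs ys (x :: y :: acc)        (* further complementary pair: kept, tautological clause *)

let rec union c1 c2 acc = match c1, c2 with
  | [], c2 -> List.rev_append acc c2
  | c1, [] -> List.rev_append acc c1
  | x :: xs, y :: ys ->
      if x + y = 0 then aux_union xs ys acc       (* delete the complementary pair *)
      else if abs x < abs y then union xs (y :: ys) (x :: acc)
      else if abs y < abs x then union (x :: xs) ys (y :: acc)
      else union xs ys (x :: acc)                 (* equal literal: keep a single copy *)

let resolve c1 c2 = union c1 c2 []

(* resolve [1; -2] [1; 2] = [1]   and   resolve [1] [-1] = [] *)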
given a problem representation in cnf form and a proof trace that respects the well-
formedness criterion and is a trivial resolution proof for the given problem, shruti
will deduce the empty clause and thus validate the solver's unsat claim. conversely, whenever shruti validates a claim, the problem is indeed unsat: shruti will never deduce an empty clause for a sat instance and will thus never give a false positive.
if shruti cannot deduce the empty clause, it invalidates the claim. this situation
can arise for three different reasons. first, counter to the claim, the problem is sat.
second, the problem may be unsat but the resolution proof may not represent this
because it may have bugs. third, the traces either do not respect the well-formedness
criteria, or do not represent a trivial resolution proof. in this case both the problem and
the proof may represent an unsat instance but our checker cannot validate it as such.
in this respect our checker is incomplete.
3.2 soundness of the resolution function
here we formalize the soundness criteria for our checker and present the soundness
theorem stating that the definition of our resolution function is sound. we need to prove
that the resolvent of a given pair of clauses is logically entailed by the two clauses. thus
at a high-level we need to prove that:
∀c1 c2 c3 * (c3 = c1 ▷◁ c2) ⊃ {c1, c2} |= c3
where |= denotes logical entailment.
we can use the deduction theorem
∀a b c * {a, b} |= c ≡ (a ∧ b ⊃ c)
to re-state what we intuitively would like to prove:
∀c1 c2 * (c1 ∧ c2) ⊃ (c1 ▷◁ c2)
however, instead of proving the above theorem directly, we prove its contrapositive:
∀c1 c2 * ¬(c1 ▷◁ c2) ⊃ ¬(c1 ∧ c2)
in our formalization a clause is denoted by a list of non-zero integers, and a con-
junction of clauses is denoted by a list of clauses. we now present the definition of the
logical disjunction and conjunction functions that operate on the integer list represen-
tation of clauses. we do this with the help of an interpretation function that maps an
integer to a boolean value.
the function eval shown below maps a list of integers to a boolean value by using the logical disjunction ∨ and an interpretation i of the type z → bool.
definition eval nil i = false
eval (x :: xs) i = (i x) ∨ (eval xs i)
we now define what it means to perform a conjunction over a list of clauses. the
function and shown below takes a list of clauses and an interpretation i (with type
z →bool) and returns a boolean which denotes the conjunction of all the clauses in
the list.
definition and nil i = true
and (x :: xs) i = (eval x i) ∧(and xs i)
the interpretations that we are interested in are the logical interpretations which
means that if we apply an interpretation i on a negative integer the value returned is the
logical negation of the value returned when the same interpretation is applied on the
positive integer.
definition logical i = ∀(x : z) * i(−x) = ¬(i x)
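these semantic definitions can also be read executably; the following ocaml rendering (ours, not the extracted checker code) makes the statement of the soundness theorem below easy to test on concrete clauses:

(* a clause evaluates to true iff some literal is true under the interpretation i : int -> bool *)
let rec eval_clause c i = match c with
  | [] -> false
  | x :: xs -> i x || eval_clause xs i

(* a clause set evaluates to true iff every clause does *)
let rec conj cs i = match cs with
  | [] -> true
  | c :: cs' -> eval_clause c i && conj cs' i

(* a "logical" interpretation satisfies i (-x) = not (i x); for any such i,
   if eval_clause (resolve c1 c2) i is false then conj [c1; c2] i is false as well *)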
thus we can now state the precise statement of the soundness theorem that we
proved for our checker as:
theorem 1. soundness theorem
∀c1 c2 * ∀i * logical i ⊃ ¬(eval (c1 ▷◁ c2) i) ⊃ ¬(and [c1, c2] i)
proof. the proof begins by structural induction on c1 and c2. the first three sub-goals are easily proven by term rewriting and simplification by unfolding the definitions of ▷◁, eval and and. the last sub-goal is proven by doing a case split on if-then-else and then using a combination of the induction hypothesis and deriving a contradiction among some of the assumptions. a detailed transcription of the coq proof is available from
http://sites.google.com/site/certifiedsat/.
3.3 correctness of implementation
to further ensure that the formalization of our checker is correct, we check that the union function is implemented correctly, i.e., that it preserves the following properties of the trivial resolution function:
1. a pair of complementary literals is deleted in the resolvent obtained from resolving
a given pair of clauses (theorem 2).
2. all non-complementary pairs of literals that are unequal are retained in the resolvent (theorem 3).
3. for a given pair of clauses, if there are no duplicate literals within each clause, then
for a literal that exists in both the clauses of the pair, only one copy of the literal is
retained in the resolvent (theorem 4).
we have proven these properties in coq. the actual proof consists of proving several
small and big lemmas - in total about 4000 lines of proof script in coq (see the proofs
online).
the general strategy is to use structural induction on clauses c1 and c2. for each
theorem, this results in four main goals, three of which are proven by contradiction since for all elements l, l ∉ nil. for the remaining goal a case-split is done on if-then-else, thereby producing sub-goals, some of which are proven from induction hypotheses, and some from conflicting assumptions arising from the case-split.
theorem 2. a pair of complementary literals is deleted.
∀c1 c2 * wf c1 ⊃ wf c2 ⊃ uniquecomppair c1 c2 ⊃
∀l1 l2 * (l1 ∈ c1) ⊃ (l2 ∈ c2) ⊃ (l1 + l2 = 0) ⊃
(l1 ∉ (c1 ▷◁ c2)) ∧ (l2 ∉ (c1 ▷◁ c2))
note that to ensure that only a single pair of complementary literals is deleted we
need to assume that there is a unique complementary pair (uniquecomppair). the
above theorem will not hold for the case with multiple complementary pairs.
for the following theorem we need to assert in the assumption that for any literal in
one clause there exists no literal in the other clause such that the sum of two literals is
0. this is defined by the predicate nocomplit.
theorem 3. all non-complementary, unequal literals are retained.
∀c1 c2 * wf c1 ⊃ wf c2 ⊃
∀l1 l2 * (l1 ∈ c1) ⊃ (l2 ∈ c2) ⊃
(nocomplit l1 c2) ⊃ (nocomplit l2 c1) ⊃
(l1 ≠ l2) ⊃ (l1 ∈ (c1 ▷◁ c2)) ∧ (l2 ∈ (c1 ▷◁ c2))
theorem 4. only one copy of equal literals is retained (factoring).
∀c1 c2 * wf c1 ⊃ wf c2 ⊃
∀l1 l2 * (l1 ∈ c1) ⊃ (l2 ∈ c2) ⊃ (l1 = l2) ⊃
((l1 ∈ (c1 ▷◁ c2)) ∧ (count l1 (c1 ▷◁ c2) = 1))
in order to check the resolution steps for each row, one has to collect the actual
clauses corresponding to their identifiers and this is done by the findclause function.
the function findclause takes a list of clause identifiers (dlst), an accumulator (acc) to
collect the list of clauses, and requires as input a table that has the information about all
the input clauses (ctbl). if a clause id refers to a chain that has already been processed, its resolvent is fetched from the resolvent table (rtbl); otherwise the clause is obtained from ctbl. if there is no entry for a given id in either the resolvent table or the clause table, an error is signalled. this error denotes the fact
that there was an input/output problem with the proof trace file due to which some input
clauses in the proof trace could not be accessed properly. this could have happened
either because the proof trace was ill-formed accidentally or wilfully tampered with.
the function that uses the ▷◁ function recursively on a list of input clause chains is called chainresolution, and it simply folds the ▷◁ function from left to right for every row in the proof part of the proof trace file.
definition chainresolution lst =
match (lst : list (list z)) with
| nil ⇒ nil
| (x :: xs) ⇒ list.fold_left (▷◁) xs x
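putting the pieces together on the example trace of section 2.3 (again an illustration in plain ocaml, with the clause look-up that findclause performs done by hand):

let chain_resolution = function
  | [] -> []
  | c :: cs -> List.fold_left resolve c cs

let () =
  let c1 = [1; 2] and c2 = [-1; 2] and c3 = [1; -2] and c4 = [-1; -2] in
  let r5 = chain_resolution [c3; c1] in        (* row 5: the unit clause [1] *)
  let r6 = chain_resolution [c4; c2; r5] in    (* row 6: resolves to the empty clause *)
  assert (r6 = [])                             (* empty clause derived: the unsat claim is validated *)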
the function findandresolve is our last function defined in coq world for unsat check-
ing and provides a wrapper on other functions. once the input clause file and proof trace
files are opened and read into different tables, findandresolve starts the checking pro-
cess by first obtaining the clause ids from the proof part of the proof trace file, and then
invoking findclause to collect all the clauses for each row in the proof part of the proof
trace file. once all the clauses are obtained the function chainresolution is called and
applied on the list of clauses row by row. for each row the resolvent is stored in a sep-
arate table. the checker then simply checks if the last row has an empty clause, and if
there is one, it agrees with the sat solver and says yes, the problem is unsat, else no.
if the proof trace part of the trace file contains nothing (ill-formed) then there would be no entry for an identifier in the trace table (ttbl), and this is signalled by an error state consisting of a list with a single zero. since zeroes cannot otherwise legally occur in the cnf problem description, we use them to signal error states. similarly, if the clause and trace table are both empty, then a list with two zeroes is output as an error.
3.4 program extraction
we extract the ocaml code by using the built-in extraction api in coq. at the time of
extraction we mapped several coq datatypes and data structures to equivalent ocaml
ones. for optimization we made the following replacements:
1. coq booleans by ocaml booleans.
2. coq integers (z) by ocaml int.
3. coq lists by ocaml lists.
4. coq finite map by ocaml's finite map.
5. the combination of app and rev on lists in the function union, and auxunion was
replaced by the tail-recursive list.rev append in ocaml.
replacing coq zs with ocaml integers gave a performance boost by a factor of 7-10. making minor adjustments by replacing the coq finite maps by ocaml ones and using tail-recursive functions gave a further 20% improvement. an important consequence of our extraction is that only some datatypes and data structures get mapped to ocaml's; the key logical functionality is unmodified. such changes of datatypes and data structures are a standard procedure in any extraction process using coq [2].
table 1: comparison of our results with hol 4 and tracecheck. the number of resolutions (inferences) shown for hol 4 is the number that hol 4 calculated from the proof trace obtained from running zverify, the uncertified checker (for zchaff) that amjad used to obtain the proof trace. the shruti resolution count is obtained from the proof trace generated by the uncertified checker tracecheck. in terms of inferences/second, we are 1.5 to 32 times faster than amjad's hol 4 checker, whilst a factor of 2.5 slower than tracecheck. all times shown are total times for all the three checkers. the symbol z? denotes that zchaff timed out after an hour.

no.  benchmark                 hol 4                          shruti                          tracecheck
                               resolutions    time    inf/s   resolutions    time     inf/s   time    inf/s
 1.  een-tip-uns-numsv-t5.b        89136      4.61    19335        122816     0.86   142809    0.36   341155
 2.  een-pico-prop01-75           205807      5.70    36106        246430     1.67   147562    0.48   513395
 3.  een-pico-prop05-50          1804983     58.41    30901       2804173    20.76   135075    8.11   345767
 4.  hoons-vbmc-lucky7           3460518     59.65    58013       4359478    35.18   123919   12.95   336639
 5.  ibm-2002-26r-k45               1448     24.76       58          1105    0.004   276250    0.04    27625
 6.  ibm-2004-26-k25                1020     11.78       86          1132    0.004   283000    0.04    28300
 7.  ibm-2004-3 02 1-k95           69454      5.03    13807        114794     0.71   161681    0.35   327982
 8.  ibm-2004-6 02 3-k100         111415      7.04    15825        126873     0.9    140970    0.40   317182
 9.  ibm-2002-07r-k100            141501      2.82    50177        255159     1.62   157505    0.54   472516
10.  ibm-2004-1 11-k25            534002     13.88    38472        255544     1.77   144375    0.75   340725
11.  ibm-2004-2 14-k45            988995     31.16    31739        701430     5.42   129415    1.85   379151
12.  ibm-2004-2 02 1-k100        1589429     24.17    65760       1009393     7.42   136036    3.02   334236
13.  ibm-2004-3 11-k60                z?        z?        -      13982558   133.05   105092   59.27   235912
14.  manol-pipe-g6bi               82890      2.12    39099        245222     1.59   154227    0.50   490444
15.  manol-pipe-c9nidw s          700084     26.79    26132        265931     1.81   146923    0.54   492464
16.  manol-pipe-c10id s            36682     11.23     3266        395897     2.60   152268    0.82   482801
17.  manol-pipe-c10nidw s             z?        z?        -        458042     3.06   149686    1.21   381701
18.  manol-pipe-g7nidw            325509      8.82    36905        788790     5.40   146072    1.98   398378
19.  manol-pipe-c9                198446      3.15    62998        863749     6.29   137320    2.50   345499
20.  manol-pipe-f6bi              104401      5.07    20591       1058871     7.89   134204    2.97   356522
21.  manol-pipe-c7b i             806583     13.76    58617       4666001    38.03   122692   15.54   300257
22.  manol-pipe-c7b               824716     14.31    57632       4901713    42.31   115852   18      272317
23.  manol-pipe-g10id             775605     23.21    33416       6092862    50.82   119891   21.08   289035
24.  manol-pipe-g10b             2719959     52.90    51416       7827637    64.69   121002   26.85   291532
25.  manol-pipe-f7idw             956072     35.17    27184       7665865    68.14   112501   30.74   249377
26.  manol-pipe-g10bidw          4107275    125.82    32644      14776611   134.92   109521   68.13   216888
4 experimental results
we evaluated our certified checker shruti on a set of benchmarks from the sat races
of 2006 and 2008 and the sat competition of 2007. we present our results on a sample
of the sat race benchmarks in table 1. the results for shruti shown in the table
are for validating proof traces obtained from the picosat solver. our experiments were
carried out on a server running red hat on a dual-core 3 ghz, intel xeon cpu with
28gb memory.
the hol 4 and isabelle based checkers [22] were evaluated on the sat race
benchmarks shown in the table [21]. isabelle reported segmentation faults on most
of the problems, whilst hol 4's results are summarized along with ours in table 1.
hol 4 was run on an amd dual-core 3.2 ghz processor running ubuntu with 4gb of
memory. we also compare our timings with that obtained from the uncertified checker
tracecheck. since the size of the proof traces obtained from zchaff is substantially different from the size of the traces obtained from tracecheck on most problems, we
decided to compare the speed of our checker with hol 4 and tracecheck in terms of
resolutions (inferences) solved per second. we observe that in terms of inferences/sec
we are 1.5 to 32 times faster than hol 4 and 2.5 times slower than tracecheck. times
shown for all the three checkers in the table are the total times including time spent
on actual resolution checking, file i/o and garbage collection. amjad reported that the
version of the checker he has used on these benchmarks is much faster than the one
published in [22].
as a proof of concept we also validated the proof traces from zchaff by translating
them to picosat's trace format. the performance of shruti in terms of inf/sec on
the translated proof traces (from zchaff to picosat) was similar to the performance of
shruti when it checked picosat's traces obtained directly from the picosat solver
– something that is to be expected.
4.1 discussion
the coq formalization consisted of 8 main function definitions amounting to nearly
160 lines of code, and 4 main theorems shown in the paper and 4 more that are about
maps (not shown here due to space). overall the proof in coq was nearly 4000 lines
consisting of the proofs of several big and small lemmas that were essential to prove
the 4 main theorems. the extracted ocaml code was approximately 2446 lines, and the
ocaml glue code was 324 lines.
we found that there is no implementation of the array data type in coq, which meant that we had to use the list type. since lists are defined inductively, it is easier to do reasoning with them, although implementing very fast and efficient functions on them is impossible. in a recent development related to coq, a tool called ynot [14] has emerged that can deal with arrays, pointers and file-related i/o in a hoare type theory. future work in certification using coq should definitely investigate the relevance and use of this.
we noticed that the ocaml compiler's native-code compilation does produce efficient binaries, but the default settings for automatic garbage collection were not useful. we observed that if we do not tune the runtime environment settings of ocaml by setting the values of ocamlrunparam, then as soon as the input proof traces had more than a million inferences, garbage collection would kick in so severely that it would end up consuming as much as 60% of the total time, thereby delaying the overall computation. by setting the initial size of the major heap to a large value such as 2 gb and making the garbage collection less eager, we noticed that the computation times of our checker were reduced by up to a factor of 7 on proof traces with over 1 million inferences.
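for illustration, tuning of this kind can be done either through the ocamlrunparam environment variable when the executable is launched (which is how the initial major-heap size is set), or programmatically in the glue code with the standard gc module; the concrete values below are hypothetical, not the ones used in our experiments:

(* make the collector lazier and grow the heaps in large steps before checking a big trace *)
let tune_gc () =
  let c = Gc.get () in
  Gc.set { c with Gc.space_overhead = 200;             (* default 80: larger means less eager major collections *)
                  Gc.major_heap_increment = 16_000_000;  (* grow the major heap in big chunks (in words) *)
                  Gc.minor_heap_size = 4_000_000 }        (* a larger minor heap (in words) *)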
5 related work
recent work on checking the result of sat solvers can be traced to the work of zhang
& malik [23] and goldberg & novikov [8], with additional insights provided in recent
work [1,19]. besides weber and amjad, others who have advocated the use of a checker
include bulwahn et al. [5], who experimented with the idea of doing reflective theorem proving in isabelle and suggested that it can be used for designing a sat checker.
in a recent paper [12], marić presented a formalization in isabelle of the sat solving algorithms used in modern-day sat solvers. an important difference is that whereas we have formalized a sat checker and extracted executable code from the formalization itself, marić formalizes a sat solver (at the abstract level of state machines) and then implements the verified algorithm in the sat solver off-line.
an alternative line of work involves the formal development of sat solvers. ex-
amples include the work of smith & westfold [18] and the work of lescuyer and con-
chon [11]. lescuyer and conchon have formalized a simplified sat solver in coq and
extracted an executable. however, the performance results have not been reported on
any industrial benchmarks. this is because they have not formalized several of the key
techniques used in modern sat solvers. the work of of smith & westfold involves the
formal synthesis of a sat solver from a high level description. albeit ambitious, the
preliminary version of the sat solver does not include the most effective techniques
used in modern sat solvers.
there has been a recent surge in the area of certifying smt solvers. m. moskal
recently provided an efficient certification technique for smt solvers [13] using term-
rewriting systems. the soundness of the proof checker is guaranteed through a formal-
ization using inference rules provided in a term-rewriting formalism. l. de moura and
n. bjørner [6] presented the proof and model generating features of the state-of-the-art
smt solver z3.
6 conclusion
in this paper we presented a methodology for performing efficient yet formally certified sat solving. the key feature of our approach is that we did a one-off formal design and verification of the checker using the coq proof-assistant and extracted an ocaml program which is used as a standalone executable to check the outcome of industrial-strength sat solvers such as picosat and zchaff. our certified checker can be plugged into any proof-generating sat solver with previously agreed certificates for satisfiable and unsatisfiable problems. on the one hand our checker provides much higher assurance compared to uncertified checkers such as tracecheck, and on the other it enhances usability and performance compared to the certified checkers implemented in hol 4 and isabelle. in this regard our approach provides an arguably optimal middle ground between the two extremes. we are investigating further optimizations of our checker so that the slight difference in overall performance between uncertified checkers and ours can be minimized.
acknowledgements. we thank h. herbelin, y. bertot, p. letouzey, and many more people on
the coq mailing list who helped us with coq questions. we also thank t. weber and h. amjad
for answering our questions on their work and also carrying out industrial benchmark evaluation
on their checker. a. p. landells helped out with server issues. this work was partially funded by
epsrc grant ep/e012973/1, and eu grants ict/217069 and ist/033709.
references
1. p. beame, h. a. kautz, and a. sabharwal. towards understanding and harnessing the poten-
tial of clause learning. j. artif. intell. res. (jair), 22:319–351, 2004.
2. y. bertot and p. castéran. interactive theorem proving and program development. coq'art:
the calculus of inductive constructions, 2004.
3. a. biere. picosat essentials. journal on satisfiability, boolean modeling and computation,
4:75–97, 2008.
4. a. biere, a. cimatti, e. clarke, o. strichman, and y. zhu. advances in computers, chapter
bounded model checking. academic press, 2003.
5. l. bulwahn, a. krauss, f. haftmann, l. erkök, and j. matthews. imperative functional
programming with isabelle/hol. in theorem proving in higher order logics, pages 134–
149, 2008.
6. l. m. de moura and n. bjørner. proofs and refutations, and z3. in proceedings of the lpar
2008 workshops, 2008.
7. n. een and n. sorensson. an extensible sat-solver. in e. giunchiglia and a. tacchella,
editors, sat, volume 2919 of lecture notes in computer science, pages 502–518. springer,
2003.
8. e. i. goldberg and y. novikov. verification of proofs of unsatisfiability for cnf formulas.
in design, automation and test in europe conference, pages 10886–10891, march 2003.
9. j. hammarberg and s. nadjm-tehrani. formal verification of fault tolerance in safety-critical
reconfigurable modules. international journal on software tools for technology transfer
(sttt), 7(3):268–279, 2005.
10. x. leroy and s. blazy. formal verification of a c-like memory model and its uses for
verifying program transformations. journal of automated reasoning, jan 2008.
11. s. lescuyer and s. conchon. a reflexive formalization of a sat solver in coq. in emerging
trends of tphols, 2008.
12. f. marić. formalization and implementation of modern sat solvers. journal of automated
reasoning, 43(1):81–119, june 2009.
13. m. moskal. rocket-fast proof checking for smt solvers. in tools and algorithms for the
construction and analysis of systems, pages 486–500, 2008.
14. a. nanevski, g. morrisett, a. shinnar, p. govereau, and l. birkedal. ynot: dependent types
for imperative programs. in international conference on functional programming, pages
229–240, 2008.
15. m. penicka. formal approach to railway applications. in formal methods and hybrid real-
time systems, pages 504–520, 2007.
16. sat competition. http://www.satcompetition.org/.
17. d. s. scott. a type-theoretical alternative to iswim, cuch, owhy. theor. comput. sci.,
121(1-2):411–440, 1993.
18. d. r. smith and s. j. westfold. synthesis of propositional satisfiability solvers. technical
report, kestrel institute, april 2008.
19. a. van gelder. verifying propositional unsatisfiability: pitfalls to avoid. in theory and
applications of satisfiability testing, pages 328–333, may 2007.
20. t. weber. efficiently checking propositional resolution proofs in isabelle/hol. 6th interna-
tional workshop on the implementation of logics, 2009.
21. t. weber and h. amjad. private communication.
22. t. weber and h. amjad. efficiently checking propositional refutations in hol theorem
provers. journal of applied logic, 7(1):26–40, 2009.
23. l. zhang and s. malik.
validating sat solvers using an independent resolution-based
checker: practical implementations and other applications. in design, automation and test
in europe conference, pages 10880–10885, march 2003.
|
0911.1679 | xor gate response in a mesoscopic ring with embedded quantum dots | we address xor gate response in a mesoscopic ring threaded by a magnetic flux
$\phi$. the ring, composed of identical quantum dots, is symmetrically attached
to two semi-infinite one-dimensional metallic electrodes and two gate voltages,
viz, $v_a$ and $v_b$, are applied, respectively, in each arm of the ring which
are treated as the two inputs of the xor gate. the calculations are based on
the tight-binding model and the green's function method, which numerically
compute the conductance-energy and current-voltage characteristics as functions
of the ring-electrodes coupling strengths, magnetic flux and gate voltages.
quite interestingly it is observed that, for $\phi=\phi_0/2$ ($\phi_0=ch/e$,
the elementary flux-quantum) a high output current (1) (in the logical sense)
appears if one, and only one, of the inputs to the gate is high (1), while if
both inputs are low (0) or both are high (1), a low output current (0) appears.
it clearly demonstrates the xor behavior and this aspect may be utilized in
designing the electronic logic gate.
| introduction
in the present age of nanoscience and technology
quantum confined model systems are used exten-
sively in electronic as well as spintronic engineering
since these simple looking systems are the funda-
mental building blocks of designing nano devices. a
mesoscopic normal metal ring is one such promising
example of quantum confined systems. here we will
explore the electron transport through a mesoscopic
ring, composed of identical quantum dots and at-
tached to two external electrodes, the so-called
electrode-ring-electrode bridge, and show how such
a simple geometric model can be used to design
a logic gate.
the theoretical description of elec-
tron transport in a bridge system has been developed
based on the pioneering work of aviram and rat-
ner [1]. later, many excellent experiments [2, 3, 4]
have been done in several bridge systems to un-
derstand the basic mechanisms underlying the elec-
tron transport.
though many theoretical [5, 6, 7, 8, 9, 10, 11, 12, 13, 14] as well as experimental papers [2, 3, 4] on electron transport are available in the literature, a lot of controversy still remains between theory and experiment, and a complete understanding of the conduction mechanism at this scale is not well established even today.
the electronic transport in the ring significantly de-
pends on the ring-to-electrodes interface structure.
by changing the geometry one can tune the trans-
mission probability of an electron across the ring.
this is solely due to the quantum interference ef-
fect among the electronic waves traversing through
different arms of the ring. furthermore, the elec-
tron transport through the ring can be modulated
in other way by tuning the magnetic flux, the so-
called aharonov-bohm (ab) flux, that threads the
ring. the ab flux threading the ring can change the
phases of the wave functions propagating along the
different arms of the ring leading to constructive
or destructive interferences, and accordingly the
transmission amplitude changes [15, 16, 17, 18, 19].
beside these factors, ring-to-electrodes coupling is
another important issue that controls the electron
transport in a meaningful way [19]. all these are
the key factors which regulate the electron trans-
mission in the electrode-ring-electrode bridge sys-
tem and these effects have to be taken into account
properly to reveal the transport mechanisms.
the aim of the present paper is to describe the
xor gate response in a mesoscopic ring threaded
by a magnetic flux φ. the ring is contacted sym-
metrically to the electrodes, and the two arms of
the ring are subjected to two gate voltages va and
vb, respectively (see fig. 1) those are treated as
the two inputs of the xor gate. here we adopt
a simple tight-binding model to describe the sys-
tem and all the calculations are performed numer-
ically. we address the xor behavior by studying
the conductance-energy and current-voltage charac-
teristics as functions of the ring-electrodes coupling
strengths, magnetic flux and gate voltages.
our
study reveals that for a particular value of the mag-
netic flux, φ = φ0/2, a high output current (1) (in
the logical sense) is available if one, and only one, of
the inputs to the gate is high (1), while if both the
inputs are low (0) or both are high (1), a low output
current (0) is available. this phenomenon clearly
demonstrates the xor behavior which may be uti-
lized in manufacturing the electronic logic gate. to
the best of our knowledge the xor gate response in
such a simple system has not been described earlier
in the literature.
the paper is organized as follows. following the
introduction (section 1), in section 2, we present
the model and the theoretical formulations for our
calculations. section 3 discusses the significant re-
sults, and finally, we summarize our results in sec-
tion 4.
2 model and the synopsis of the theoretical background
we begin by referring to fig. 1. a mesoscopic ring,
composed of identical quantum dots (filled red cir-
cles) and threaded by a magnetic flux φ, is attached
symmetrically to two semi-infinite one-dimensional
metallic electrodes. the ring is placed between two
gate electrodes, viz, gate-a and gate-b. these gate
electrodes are ideally isolated from the ring and can
be regarded as two parallel plates of a capacitor. in
our present scheme we assume that each gate voltage operates only on the dot nearest to its plate. in more complicated geometric models the effect on the other dots must also be taken into account, though that effect becomes very small.
and b in the two arms of the ring are subjected to
the gate voltages va and vb, respectively, and these
are treated as the two inputs of the xor gate. the
actual scheme of connections with the batteries for
the operation of the xor gate is clearly presented
in the figure (fig. 1), where the source and the gate
voltages are applied with respect to the drain.
based on the landauer conductance formula [20,
21] we determine the conductance (g) of the ring.
at very low temperature and bias voltage it can be
expressed in the form,
g = (2e^2/h) t    (1)
where t gives the transmission probability of an
electron through the ring. this (t ) can be repre-
sented in terms of the green's function of the ring
and its coupling to the two electrodes by the rela-
tion [20, 21],
t = tr [γs g^r_r γd g^a_r]    (2)
where g^r_r and g^a_r are respectively the retarded and
advanced green's functions of the ring including the
effects of the electrodes. the parameters γs and γd
describe the coupling of the ring to the source and
drain, respectively.
[figure 1: (color online). the scheme of connections with the batteries for the operation of the xor gate. a mesoscopic ring with embedded quantum dots (filled red circles), threaded by a flux φ, is attached to two semi-infinite one-dimensional metallic electrodes, viz, source and drain. the gate voltages va and vb, which are variable, are applied to the dots a and b of the two arms, respectively, through gate-a and gate-b. the source and the gate voltages are applied with respect to the drain.]
for the full system i.e., the ring,
source and drain, the green's function is defined as,
g = (e − h)^{-1}    (3)
where e is the injecting energy of the source elec-
tron. to evaluate this green's function, the inver-
sion of an infinite matrix is needed since the full
system consists of the finite ring and the two semi-
infinite electrodes. however, the entire system can
be partitioned into sub-matrices corresponding to
the individual sub-systems and the green's func-
tion for the ring can be effectively written as,
gr = (e − hr − σs − σd)^{-1}    (4)
where hr is the hamiltonian of the ring that can be
expressed within the non-interacting picture like,
hr = Σ_i (ǫi0 + va δia + vb δib) c†_i c_i + Σ_{<ij>} t (c†_i c_j e^{iθ} + c†_j c_i e^{-iθ})    (5)
in this hamiltonian the ǫi0's are the on-site energies for all the sites (filled red circles) i, except the sites i = a and b where the gate voltages va and vb, which are variable, are applied. these gate voltages can be incorporated through the site energies as expressed in the above hamiltonian. c†_i (c_i) is the creation (annihilation) operator of an electron at the site i and t is the nearest-neighbor hopping integral. θ = 2πφ/(nφ0) is the phase factor due to the flux φ, where n represents the total number of sites/dots in the ring. a similar kind of tight-binding hamiltonian, without the phase factor θ, is also used to describe the semi-infinite one-dimensional perfect electrodes, where the hamiltonian is parametrized by a constant on-site potential ǫ0 and nearest-neighbor hopping integral t0. the ring is coupled to the electrodes by the parameters τs and τd, which correspond to the coupling strengths with the source and drain, respectively. the parameters σs and σd in eq. (4) represent the self-energies due to the coupling of the ring to the source and drain, respectively; all the information about this coupling is included in these self-energies.
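as an illustration of how eq. (5) enters the numerics (our sketch with hypothetical parameter names, not the authors' code), the n x n ring hamiltonian with the flux entering through peierls phase factors can be assembled as:

(* eps0: common on-site energy; va, vb applied at dots ia, ib; t: hopping; phi: flux in units of phi0 *)
let ring_hamiltonian n eps0 va vb ia ib t phi =
  let pi = 4.0 *. atan 1.0 in
  let theta = 2.0 *. pi *. phi /. float_of_int n in
  let h = Array.make_matrix n n Complex.zero in
  for i = 0 to n - 1 do
    let e_i = eps0 +. (if i = ia then va else 0.0) +. (if i = ib then vb else 0.0) in
    h.(i).(i) <- { Complex.re = e_i; im = 0.0 };
    let j = (i + 1) mod n in                          (* nearest neighbour on the ring *)
    h.(i).(j) <- Complex.mul { Complex.re = t; im = 0.0 } (Complex.polar 1.0 theta);
    h.(j).(i) <- Complex.conj h.(i).(j)
  done;
  h

the green's function of eq. (4) is then obtained by numerically inverting (e − hr − σs − σd), with the lead self-energies added at the dots that touch the electrodes.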
to evaluate the current (i) passing through the ring as a function of the applied bias voltage (v) we use the relation [20],
i(v) = (e/πℏ) ∫_{e_f − ev/2}^{e_f + ev/2} t(e) de    (6)
where e_f is the equilibrium fermi energy. here we make the realistic assumption that the entire voltage is dropped across the ring-electrode interfaces, and it has been verified that under such an assumption the i-v characteristics do not change their qualitative features.
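numerically, the bias-window integral of eq. (6) can be approximated, for instance, with a simple trapezoidal rule (an illustrative sketch, not the authors' code); the current is then the prefactor e/(πℏ) times this integral:

(* integrate a given transmission function t(e) over [ef - v/2, ef + v/2] *)
let bias_window_integral transmission ef v =
  let n = 2000 in
  let a = ef -. v /. 2.0 and b = ef +. v /. 2.0 in
  let de = (b -. a) /. float_of_int n in
  let sum = ref (0.5 *. (transmission a +. transmission b)) in
  for k = 1 to n - 1 do
    sum := !sum +. transmission (a +. float_of_int k *. de)
  done;
  !sum *. de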
in this presentation, all the results are computed
only at absolute zero temperature. these results
are also valid even for some finite (low) tempera-
tures, since the broadening of the energy levels of
the ring due to its coupling with the electrodes be-
comes much larger than that of the thermal broad-
ening [20]. on the other hand, at high tempera-
ture limit, all these phenomena completely disap-
pear. this is due to the fact that the phase coher-
ence length decreases significantly with the rise of
temperature where the contribution comes mainly
from the scattering on phonons, and accordingly,
the quantum interference effect vanishes. for the
sake of simplicity, we take the unit c = e = h = 1
in our present calculations.
3 results and discussion
to illustrate the results, let us first mention the
values of the different parameters used for the nu-
merical calculations. in the ring, the on-site energy
ǫi0 is taken as 0 for all the sites i, except the sites
i = a and b where the site energies are taken as
va and vb, respectively, and the nearest-neighbor
-8
0
8
e
0
1
2
ghel
hcl
-8
0
8
e
-1
0
1
ghel
hdl
-8
0
8
e
-1
0
1
ghel
hal
-8
0
8
e
0
1
2
ghel
hbl
figure 2: (color online). conductance g as a func-
tion of the energy e for a mesoscopic ring with
n = 8 and φ = 0.5 in the limit of weak-coupling.
(a) va = vb = 0, (b) va = 2 and vb = 0, (c) va = 0
and vb = 2 and (d) va = vb = 2.
hopping strength t is set to 3. on the other hand,
for the side attached electrodes the on-site energy
(ǫ0) and the nearest-neighbor hopping strength (t0)
are fixed to 0 and 4, respectively. the fermi en-
ergy e_f is set to 0. to describe the coupling effect, throughout the study we present our results for the two limiting cases depending on the strength of the
coupling of the ring to the source and drain. case
i: the weak-coupling limit. it is described by the
condition τs(d) << t. for this regime we choose
τs = τd = 0.5. case ii: the strong-coupling limit.
this is specified by the condition τs(d) ∼t.
in
this particular regime, we set the values of the pa-
rameters as τs = τd = 2.5. the key controlling
parameter for all these calculations is the magnetic
flux φ which is set to φ0/2 i.e., 0.5 in our chosen
unit c = e = h = 1.
in fig. 2 we present the conductance-energy (g-
e) characteristics for a mesoscopic ring with n = 8
in the limit of weak-coupling, where (a), (b), (c)
and (d) correspond to the results for the different
gate voltages. when both the two inputs va and vb
are identical to zero i.e., both the inputs are low,
the conductance g becomes exactly zero (fig. 2(a))
for all energies. this reveals that the electron cannot conduct through the ring.
[figure 3: (color online). conductance g as a function of the energy e for a mesoscopic ring with n = 8 and φ = 0.5 in the limit of strong-coupling. (a) va = vb = 0, (b) va = 2 and vb = 0, (c) va = 0 and vb = 2 and (d) va = vb = 2.]
similar response is
also observed when both the two inputs are high, i.e., va = vb = 2, and in this case also the ring does not allow an electron to pass from the source to the drain (fig. 2(d)). on the other hand, for the cases
where any one of the two inputs is high and other
is low i.e., either va = 2 and vb = 0 (fig. 2(b))
or va = 0 and vb = 2 (fig. 2(c)), the conduc-
tance exhibits fine resonant peaks for some partic-
ular energies. thus for both these two cases the
electron conduction takes place across the ring. at
the resonances where the conductance approaches
the value 2, the transmission probability t goes
to unity, since the relation g = 2t follows from
the landauer conductance formula (see eq. 1 with
e = h = 1). all these resonant peaks are associ-
ated with the energy eigenvalues of the ring, and
accordingly, we can say that the conductance spectrum manifests the electronic structure of the ring.
[figure 4: (color online). current i as a function of the bias voltage v for a mesoscopic ring with n = 8 and φ = 0.5 in the limit of weak-coupling. (a) va = vb = 0, (b) va = 2 and vb = 0, (c) va = 0 and vb = 2 and (d) va = vb = 2.]
thus more resonant peaks can be obtained
for larger rings corresponding to their energy eigen-
values. now we explain the dependence of the electron transport on the gate voltages for these four different cases. the probability amplitude for transferring an electron across the ring depends on the quantum interference of the electronic waves passing through
the upper and lower arms of the ring. for the sym-
metrically connected ring i.e., when the two arms
of the ring are identical with each other, the proba-
bility amplitude is exactly zero (t = 0) for the flux
φ = φ0/2. this is due to the result of the quantum
interference among the two waves in the two arms
of the ring, which can be obtained in a very sim-
ple mathematical calculation. thus for the cases
when both the two inputs (va and vb) are either
low or high, the transmission probability drops to
zero. on the other hand, for the other two cases the symmetry of the two arms of the ring is broken by applying the gate voltage either to the dot a or to the dot b, and therefore a non-zero value of the transmission probability is achieved, which allows electron conduction across the ring. thus we can pre-
dict that the electron conduction takes place across
the ring if one, and only one, of the inputs to the
gate is high, while if both the inputs are low or
table 1: xor gate behavior in the limit of weak-coupling. the current i is computed at the bias voltage 6.02.
input-i (va)   input-ii (vb)   current (i)
0              0               0
2              0               0.378
0              2               0.378
2              2               0
both are high the conduction is no longer possible.
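the "very simple mathematical calculation" referred to above can be sketched as follows (our illustration, idealizing the two arms as two interfering paths that acquire equal and opposite aharonov-bohm phases):

\[
  t \;\propto\; \left| e^{\,i\pi\phi/\phi_0} + e^{-\,i\pi\phi/\phi_0} \right|^2 \;=\; 4\cos^2\!\left(\frac{\pi\phi}{\phi_0}\right),
\]

which vanishes identically at φ = φ0/2 when the two arms are identical; applying a gate voltage to only one of the dots a or b breaks this symmetry, so the cancellation is incomplete and a finite transmission survives.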
this feature clearly demonstrates the xor behavior. in addition to these characteristics, one more feature appears when the coupling strength of the ring to the electrodes increases from the low regime to the high one. in the limit of strong ring-to-electrode coupling, all these resonances acquire substantial widths compared to the weak-coupling limit. the results
are shown in fig. 3, where all the other parame-
ters are identical to those in fig. 2.
the broadening of the resonant peaks in this strong-coupling limit comes from the imaginary parts of the self-energies σs and σd, respectively [20]. hence by tuning the coupling strength, we can get electron transmission across the ring over a wider range of energies, and this provides an
important signature in the study of current-voltage
(i-v ) characteristics.
all these features of electron transfer become
much more clearly visible by studying the i-v char-
acteristics. the current passing through the ring
is computed from the integration procedure of the
transmission function t as prescribed in eq. 6. the
transmission function varies exactly like the conductance spectrum, differing only in magnitude by the factor 2, since the relation g = 2t follows from the landauer conductance formula, eq. 1.
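a minimal numerical sketch of the current integration just described (our own illustration, assuming zero temperature, e = h = 1, a bias window split symmetrically about a fermi energy set to zero, and a made-up two-peak transmission spectrum; the actual t(e) would come from the green's function calculation of the model) could read:

import numpy as np

def landauer_current(bias_v, energies, transmission, e_fermi=0.0):
    """zero-temperature landauer current: i = 2 * integral of t(e) de over the
    bias window [e_fermi - v/2, e_fermi + v/2], in units where e = h = 1 (g = 2t)."""
    lo, hi = e_fermi - abs(bias_v) / 2.0, e_fermi + abs(bias_v) / 2.0
    window = (energies >= lo) & (energies <= hi)
    de = energies[1] - energies[0]                      # uniform grid assumed
    return np.sign(bias_v) * 2.0 * np.sum(transmission[window]) * de

# toy transmission spectrum: two sharp resonances, mimicking weak-coupling peaks
e_grid = np.linspace(-8.0, 8.0, 4001)
t_of_e = 0.02**2 / ((e_grid - 1.5)**2 + 0.02**2) + 0.02**2 / ((e_grid + 1.5)**2 + 0.02**2)
print(landauer_current(6.02, e_grid, t_of_e))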
as illustrative examples, in fig. 4 we show the current-
voltage characteristics for a mesoscopic ring with
n = 8 in the limit of weak-coupling. for the cases when the two inputs are identical, either both low (fig. 4(a)) or both high (fig. 4(d)), the current is zero over the entire bias voltage range. this behavior is clearly understood from the conductance spectra, figs. 2(a) and (d), since the current is computed by integrating the transmission function t. for the two other cases where
only one of the two inputs is high and the other is low, a high output current is obtained, as clearly seen in figs. 4(b) and (c). from these figures
it is observed that the current exhibits a staircase-like
structure with fine steps as a function of the applied
bias voltage.
this is due to the existence of the
sharp resonant peaks in the conductance spectrum
in the weak-coupling limit, since the current is com-
puted by the integration method of the transmission
figure 5: (color online). current i as a function
of the bias voltage v for a mesoscopic ring with
n = 8 and φ = 0.5 in the limit of strong-coupling.
(a) va = vb = 0, (b) va = 2 and vb = 0, (c) va = 0
and vb = 2 and (d) va = vb = 2.
function t . with the increase of the bias voltage
v , the electrochemical potentials on the electrodes
are shifted gradually, and finally cross one of the
quantized energy levels of the ring.
accordingly,
a current channel is opened up which provides a
jump in the i-v characteristic curve. in addition to
these behaviors, it is also important to note that
the non-zero value of the current appears beyond
a finite value of v, the so-called threshold voltage
(vth). this vth can be controlled by tuning the size
(n) of the ring. from these i-v characteristics the
behavior of the xor gate response is clearly visi-
ble. to make it clearer, in table 1 we present
a quantitative estimate of the typical current am-
plitude, computed at the bias voltage v = 6.02, in
this weak-coupling limit. it shows that i = 0.378
only when one of the two inputs is high and the other is low, while for the other cases, either va = vb = 0 or va = vb = 2, i is zero. in
the same way as above, we now discuss the i-v characteristics in the strong-coupling limit. in
this limit, the current varies almost continuously
with the applied bias voltage and achieves much
larger amplitude than the weak-coupling case as
presented in fig. 5. the reason is that, in the strong-coupling limit, all the energy levels are broadened, which yields a larger current when the transmission function t is integrated. thus by
tuning the strength of the ring-to-electrodes cou-
pling, we can achieve very large current amplitude
table 2: xor gate behavior in the limit of strong-coupling. the current i is computed at the bias voltage 6.02.

  input-i (va)   input-ii (vb)   current (i)
  0              0               0
  2              0               2.846
  0              2               2.846
  2              2               0
from the very low one for the same bias voltage v .
all the other properties, i.e., the dependence of the i-v characteristics on the gate voltages, are exactly the same as in fig. 4. in this strong-
coupling limit we also make a quantitative study
for the typical current amplitude, given in table 2,
where the current amplitude is determined at the
same bias voltage (v = 6.02) as earlier. the response of the output current is exactly the same as in table 1. here the non-zero current takes the value 2.846, which is much larger than the value 0.378 obtained in the weak-coupling case. from these results it is clear that a mesoscopic ring exhibits the xor gate response.
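the logical content of tables 1 and 2 can be condensed into a short sketch (our own illustration; the threshold separating 'high' from 'low' output is arbitrary) that maps the two gate-voltage inputs onto the logical output through the computed current amplitude:

# typical current amplitudes at v = 6.02, taken from tables 1 and 2
current_table = {
    "weak":   {(0, 0): 0.0, (2, 0): 0.378, (0, 2): 0.378, (2, 2): 0.0},
    "strong": {(0, 0): 0.0, (2, 0): 2.846, (0, 2): 2.846, (2, 2): 0.0},
}

def xor_output(va, vb, coupling="weak", threshold=0.1):
    """return 1 (high output current) or 0 (low), mimicking the xor response."""
    return int(current_table[coupling][(va, vb)] > threshold)

for inputs in [(0, 0), (2, 0), (0, 2), (2, 2)]:
    print(inputs, xor_output(*inputs), xor_output(*inputs, coupling="strong"))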
4
concluding remarks
to summarize, we have addressed xor gate re-
sponse in a mesoscopic metallic ring threaded by
a magnetic flux φ. the ring, composed of identi-
cal quantum dots, is attached symmetrically to the
source and drain.
the upper and lower arms of
the ring are subjected to the gate voltages va and
vb, respectively, which are taken as the two inputs
of the xor gate. a simple tight-binding model is
used to describe the full system and all the calcu-
lations are done in the green's function formalism.
we have numerically computed the conductance-
energy and current-voltage characteristics as func-
tions of the ring-electrodes coupling strengths, mag-
netic flux and gate voltages. very interestingly we
have noticed that, for the half flux-quantum value
of φ (φ = φ0/2), a high output current (1) (in the
logical sense) appears if one, and only one, of the
inputs to the gate is high (1). on the other hand,
if both the inputs are low (0) or both are high (1),
a low output current (0) appears. it clearly mani-
fests the xor gate behavior and this aspect may be
utilized in designing a tailor-made electronic logic
gate. in view of the potential application of this
xor gate as a circuit element in an integrated cir-
cuit, we would like to point out that care should be
taken during the application of the magnetic field
in the ring such that the other circuit elements of
the integrated circuit are not affected by this field.
in this presentation, we have calculated all the
results by ignoring the effects of the temperature,
electron-electron correlation, disorder, etc. due to
these factors, any scattering process that appears
in the arms of the ring would have influence on
electronic phases and, as a consequence, can disturb the quantum interference effects. here we have assumed that, in our sample, all these effects are small enough to be neglected in the present study.
the importance of this article is mainly con-
cerned with (i) the simplicity of the geometry and
(ii) the smallness of the size. to the best of our
knowledge the xor gate response in such a sim-
ple low-dimensional system has not been addressed
earlier in the literature.
references
[1] a. aviram, m. ratner, chem. phys. lett. 29
(1974) 277.
[2] t. dadosh, y. gordin, r. krahne, i. khivrich,
d. mahalu, v. frydman, j. sperling, a. ya-
coby, i. bar-joseph, nature 436 (2005) 677.
[3] j. chen, m. a. reed, a. m. rawlett, j. m.
tour, science 286 (1999) 1550.
[4] m. a. reed, c. zhou, c. j. muller, t. p. bur-
gin, j. m. tour, science 278 (1997) 252.
[5] p. a. orellana, m. l. ladron de guevara, m.
pacheco, a. latge, phys. rev. b 68 (2003)
195321.
[6] p. a. orellana,
f. dominguez-adame,
i.
gomez, m. l. ladron de guevara, phys. rev.
b 67 (2003) 085321.
[7] a. nitzan, annu. rev. phys. chem. 52 (2001)
681.
[8] a. nitzan, m. a. ratner, science 300 (2003)
1384.
[9] d. m. newns, phys. rev. 178 (1969) 1123.
[10] v. mujica, m. kemp, m. a. ratner, j. chem.
phys. 101 (1994) 6849.
[11] v. mujica, m. kemp, a. e. roitberg, m. a.
ratner, j. chem. phys. 104 (1996) 7296.
[12] k. walczak, phys. stat. sol. (b) 241 (2004)
2555.
[13] k. walczak, arxiv:0309666.
[14] w. y. cui, s. z. wu, g. jin, x. zhao, y. q.
ma, eur. phys. j. b. 59 (2007) 47.
[15] r. baer, d. neuhauser, j. am. chem. soc. 124
(2002) 4200.
[16] d. walter, d. neuhauser, r. baer, chem.
phys. 299 (2004) 139.
[17] k. tagami, l. wang, m. tsukada, nano lett.
4 (2004) 209.
[18] k. walczak, cent. eur. j. chem. 2 (2004) 524.
[19] r. baer, d. neuhauser, chem. phys. 281
(2002) 353.
[20] s. datta, electronic transport in mesoscopic
systems, cambridge university press, cam-
bridge (1997).
[21] m. b. nardelli, phys. rev. b 60 (1999) 7828.
|
0911.1680 | the heavy quark free-energy at t<tc in ads/qcd | starting with the modified ads/qcd metric developed in ref.[1] we use the
nambu-goto action to obtain the free energy of a quark-antiquark pair at t<tc,
for which we show that the effective string tension goes to zero at tc=154mev.
| introduction
as shown in ref. [2] the free energy of a static, in-
finitely massive quark-antiquark pair f is given by
e^{-\beta F} = \langle L(\vec{r}_q) \, L^\dagger(\vec{r}_{\bar q}) \rangle , \qquad (1.1)
where L(\vec{r}) is the Wilson line
L(\vec{r}) = \frac{1}{N} \, \mathrm{tr}\; \mathrm{T} \exp\left( i \int_0^{\beta} d\tau \, \hat{A}_0(\vec{r}, \tau) \right) , \qquad (1.2)
β = 1/T is the inverse temperature, \hat{A}_0 is the gluon field
in the fundamental representation and τ denotes imag-
inary time. according to the holographic dictionary [3]
the right hand side of equation (1.1) is equal to the string
partition function on the ads5 space with the integration
contours on the boundary of ads5. in saddle-point ap-
proximation:
e^{-\beta F} = \langle L(\vec{r}_q) \, L^\dagger(\vec{r}_{\bar q}) \rangle \approx e^{-S_{NG}} , \qquad (1.3)
where sng is the nambu-goto action
S_{NG} = \frac{1}{2\pi l_s^2} \int d^2\xi \, \sqrt{\det h_{ab}} , \qquad (1.4)
with the induced worldsheet metric
h_{ab} = g_{\mu\nu} \, \frac{\partial x^\mu}{\partial \xi^a} \frac{\partial x^\nu}{\partial \xi^b} . \qquad (1.5)
as described in reference [4], due to the symmetry of the
problem, we may set up a cylindrical coordinate system
in five-dimensional euclidean space. then the five coor-
dinates are:
fig. 1: wilson loops in euclidean time with periodicity β =
2πr.
• t - time
• z - the bulk coordinate (extra 5th dimension)
• x, r, φ - three spatial coordinates
we parameterize an element of the surface connecting
the two wilson loops with dξ1 = rdφ and dξ2 = dx, set
t = 0 and reinterpret dτ ≡rdφ as euclidean time. we
use the modified metric gμν developed in reference [1]
effectively for nc = 3 and nf = 4. this metric has been
further analysed as a possible solution of 5-d gravity in
ref. [5]
ds^2_{\rm eucl} = \frac{h(z)\, l^2}{z^2} \left( r^2 d\varphi^2 + dr^2 + dx^2 + dz^2 \right) , \qquad (1.6)
h(z) = \frac{\log(\epsilon)}{\log\left((\lambda z)^2 + \epsilon\right)} , \qquad (1.7)
\lambda = l^{-1} = 264\,\mathrm{MeV} , \qquad (1.8)
\epsilon = \frac{l_s^2}{l^2} = 0.48 . \qquad (1.9)
the nambu-goto action determines the string surface
with ls as string length:
S_{NG} = \frac{1}{2\pi l_s^2} \int_0^{2\pi} d\varphi \int_{-d/2}^{d/2} dx \, \frac{l^2 h(z)}{z^2} \, r \sqrt{1 + (z')^2 + (r')^2}
       = \frac{1}{\epsilon} \int_{-d/2}^{d/2} dx \, \frac{h(z)}{z^2} \, r \sqrt{1 + (z')^2 + (r')^2} \qquad (1.10)
       =: \frac{1}{\epsilon} \int_{-d/2}^{d/2} dx \, \mathcal{L}[z(x), z'(x), r(x), r'(x)] , \qquad (1.11)
the quark and antiquark are separated by a distance
d along the x-axis. the two polyakov loops are approx-
imated by wilson loops of radius r = β/2π. l is the
lagrangian density and the prime (′) denotes the deriva-
tive with respect to x.
the configuration is shown in
figure 1.
ii.
euler-lagrange equations
since the lagrangian in (1.11) does not depend on x
explicitly, it is invariant under variations x →x + δx. it
follows from noether's theorem that there exists a con-
served quantity k given by
K = \frac{h(z)\, r}{z^2} \, \frac{1}{\sqrt{1 + (z')^2 + (r')^2}} . \qquad (2.1)
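for completeness, K is simply (minus) the beltrami first integral of the lagrangian density; a one-line check of our own (not spelled out in ref. [4]), with \mathcal{L} = \frac{h(z)\,r}{z^2}\sqrt{1+(z')^2+(r')^2}, gives

z'\,\frac{\partial \mathcal{L}}{\partial z'} + r'\,\frac{\partial \mathcal{L}}{\partial r'} - \mathcal{L}
  = \frac{h(z)\,r}{z^2}\,\frac{(z')^2+(r')^2 - \left[1+(z')^2+(r')^2\right]}{\sqrt{1+(z')^2+(r')^2}}
  = -\,\frac{h(z)\,r}{z^2}\,\frac{1}{\sqrt{1+(z')^2+(r')^2}} = -K .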
the euler-lagrange equations corresponding to the nambu-goto action (1.10) can be simplified with eq. (2.1) (see ref. [4]):
r'' - \frac{h^2(z)\, r}{K^2 z^4} = 0 , \qquad
z'' - \frac{h(z)\, r^2 \left( z\,\partial_z h(z) - 2 h(z) \right)}{K^2 z^5} = 0 . \qquad (2.2)
the boundary conditions are
r(\pm d/2) = R = \frac{\beta}{2\pi} = \frac{1}{2\pi T} , \qquad z(\pm d/2) = 0 . \qquad (2.3)
iii.
numerical solutions
analysis of symmetry (cf. fig. 1) gives for the first
derivatives:
r'(0) = 0 , \qquad z'(0) = 0 . \qquad (3.1)
but the conditions eqs. (2.3) and (3.1) are not given at
the same point. it is more convenient to find conditions
for the functions and their derivatives at the same point.
analysis of eq. (2.2) shows that r′ and z′ must diverge
near the boundary x → ±d/2. in order to obtain a
stable numerical solution, we have studied the behavior
of r(x) and z(x) near the boundary, cf. ref. [4]. nu-
merically, a small cutoff v is applied; then r(−d/2 + v),
z(−d/2+v), r′(−d/2+v) and z′(−d/2+v) can be calcu-
lated asymptotically and used as initial conditions. we
do not prescribe the value of d. for a fixed value of r, we
give an arbitrary value to k > 0, then calculate r, z and
consequently the nambu-goto action sng for this value
of k. by changing the value of k we obtain the nambu-
goto action as a function of k, denoted by sng(k). we
can express d as a function of k, and determine the dis-
tance d associated with the nambu-goto action sng(k).
in principle, the constructed numerical solutions r′(x)
and z′(x) can vanish at different points due to the small
difference between our asymptotic solution and the real
solution. we adjust the initial conditions at the point v
keeping the value of k fixed in such a way that r′(0) and
z′(0) vanish at the same point. the nambu-goto action
is given by
S_{NG} = \frac{2}{\epsilon} \int_{-d/2}^{0} dx \, \frac{h(z)\, r}{z^2} \sqrt{1 + (z')^2 + (r')^2}
       = \frac{2}{\epsilon} \int_{-d/2}^{-d/2+v} dx \, \frac{h(z_a)\, r_a}{z_a^2} \sqrt{1 + (z_a')^2 + (r_a')^2}
       + \frac{2}{\epsilon} \int_{-d/2+v}^{0} dx \, \frac{h(z_n)\, r_n}{z_n^2} \sqrt{1 + (z_n')^2 + (r_n')^2} , \qquad (3.2)
where the subscript "a" denotes "asymptotic solution"
near x = −d/2, while "n" means "numerical solution".
in the last expression, the first integral is divergent at
x = −d/2.
but, as we have the explicit form of the
fig. 2: the euler-lagrange equations (2.2) have only so-
lutions for temperature t and quark-antiquark separation d
within the shaded area.
fig. 3: regularized nambu-goto action sng,reg as a function of quark-antiquark separation d for temperatures t = 98, 118, 138 and 154 mev.
asymptotic solutions ra(x) and za(x), we can expand the
first integrand into power series near x = −d/2, and re-
move the divergent terms. to compensate for this removal,
we should add the antiderivative of the divergent terms
at x = −d/2 + v. this way, we obtain the regularized
value of sng.
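a minimal sketch of this numerical procedure (our own illustration: the starting values a cutoff v away from the boundary are placeholders standing in for the asymptotic solution discussed above, and λ is converted to fm^-1 with hbar c ≈ 197.3 mev fm) could read:

import numpy as np
from scipy.integrate import solve_ivp

LAM = 264.0 / 197.3   # lambda = 264 MeV expressed in fm^-1
EPS = 0.48            # epsilon = l_s^2 / l^2

def h(z):
    return np.log(EPS) / np.log((LAM * z) ** 2 + EPS)

def dh_dz(z):
    u = (LAM * z) ** 2 + EPS
    return -np.log(EPS) * 2.0 * LAM ** 2 * z / (u * np.log(u) ** 2)

def rhs(x, y, k):
    """first-order form of the euler-lagrange equations (2.2); y = [r, r', z, z']."""
    r, rp, z, zp = y
    rpp = h(z) ** 2 * r / (k ** 2 * z ** 4)
    zpp = h(z) * r ** 2 * (z * dh_dz(z) - 2.0 * h(z)) / (k ** 2 * z ** 5)
    return [rp, rpp, zp, zpp]

def hit_singularity(x, y, k):
    # stop before z leaves (0, z_ir), z_ir = sqrt(1 - EPS)/LAM being the metric singularity
    return min(y[2] - 1e-4, np.sqrt(1.0 - EPS) / LAM - 1e-4 - y[2])
hit_singularity.terminal = True

# placeholder initial data a cutoff v away from the boundary x = -d/2 (to be tuned by shooting)
d, v, k = 0.30, 1e-3, 8.0
y0 = [0.33, -1.0, 0.15, 1.0]          # [r, r', z, z'] -- illustrative values only
sol = solve_ivp(rhs, (-d / 2 + v, 0.0), y0, args=(k,),
                events=hit_singularity, rtol=1e-8, atol=1e-10)
print("r'(0) =", sol.y[1, -1], "  z'(0) =", sol.y[3, -1])   # adjust y0 and k until both vanish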
integrating the euler-lagrange equations (2.2) for a
wide range of initial values suggests that solutions exist
only for a specific range of temperature t and quark-
antiquark separation d (fig. 2).
figure 3 shows the regularized nambu-goto action
sng,reg for several temperatures t as a function of quark-
antiquark separation d. a fit to the numerical calcula-
tions gives
S^{\rm fit}_{NG,reg} = -\frac{0.48}{T\, d} + d \left( -\frac{7.46}{\mathrm{fm}} + \frac{5.84}{T\, \mathrm{fm}^2} \right) . \qquad (3.3)
iv.
thermodynamic quantities
using S^{\rm fit}_{NG,reg} it is easy to calculate the free energy of the q\bar{q} system as a function of the q\bar{q} separation:
F = T \cdot S^{\rm fit}_{NG,reg} = -\frac{0.48}{d} + d \left( -\frac{7.46}{\mathrm{fm}}\, T + \frac{5.84}{\mathrm{fm}^2} \right) . \qquad (4.1)
fig. 4 shows the free energy. one can recognize the flattening of f when increasing the temperature t at large distances d. the term linear in d yields the effective string tension:
\sigma_{\rm effective} = -\frac{7.46}{\mathrm{fm}}\, T + \frac{5.84}{\mathrm{fm}^2} . \qquad (4.2)
the entropy reads
S = -\frac{\partial F}{\partial T} = \frac{7.46}{\mathrm{fm}}\, d , \qquad (4.3)
fig. 4: free energy as a function of quark-antiquark separation d for temperatures t = 98, 118, 138 and 154 mev.
and does not depend on temperature. such an entropy is
well known for strong-coupling qcd on a 3-dimensional lattice, where S = L \log(2d - 1), with d = 3 and L the length of the random path, in lattice units, connecting the quark and antiquark [6]. internal energy and string tension
also do not depend on temperature and are given by
E = F + T S = -\frac{0.48}{d} + \frac{5.84}{\mathrm{fm}^2}\, d , \qquad (4.4)
and
\sigma = \frac{5.84}{\mathrm{fm}^2} = \sigma_{\rm effective}(T = 0) , \qquad (4.5)
respectively.
v.
confinement and phase transition
the free energy eq. (4.1) contains a linear term and
a coulomb-like term, which is absent in strong cou-
pling lattice qcd. the linear term provides confinement:
when it becomes zero there will be no confinement. from
eq. (4.2) we can estimate the critical temperature for
the confinement/deconfinement phase transition, which
is roughly tc = 154 mev.
this value is in agreement
with lattice qcd results, i.e. T_c^{\rm lattice}(N_c = 3, N_f = 3) =
155 ± 10 mev [7], but one must admit that we only solve
a pure gluon theory without dynamical quarks, where,
however, the input t = 0 potential has been fitted for
nf = 4.
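as a simple arithmetic check of this number (our own, using \hbar c \approx 197.3 mev fm), setting the effective string tension (4.2) to zero gives

\sigma_{\rm effective}(T_c) = 0 \;\Rightarrow\; T_c = \frac{5.84\,\mathrm{fm}^{-2}}{7.46\,\mathrm{fm}^{-1}} \approx 0.78\,\mathrm{fm}^{-1} \approx 0.78 \times 197.3\,\mathrm{MeV} \approx 154\,\mathrm{MeV} .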
vi.
conclusion
we have shown that the modified metric of ads/qcd
proposed in ref. [4] can also be applied to the heavy
quark potential. since we are not using the black hole
metric this theory is restricted to the t < tc regime. we
fig. 5: effective string tension as a function of temperature.
the phase transition happens when the effective string tension
vanishes.
found that the modified ads5-metric produces confine-
ment and the short distance coulombic behavior in this
region. in previous works on loop-loop correlators [8, 9],
these two features had to be added by hand, whereas here
they follow from one action. we can also determine tc
by demanding that the effective string tension vanishes.
because of the singularity in the metric at z_{IR} = \frac{1}{\lambda}\sqrt{1 - \epsilon} \approx 0.54\,\mathrm{fm}, the euler-lagrange equations (2.2) have solutions only for a very limited range of boundary conditions. in particular, for large t they yield solutions only for very small q\bar{q} separations, making the fit (3.3) more hypothetical.
[1] h. j. pirner and b. galow, phys. lett. b679, 51 (2009),
0903.2701.
[2] l. d. mclerran and b. svetitsky, phys. rev. d24, 450
(1981).
[3] e. witten, adv. theor. math. phys. 2, 253 (1998), hep-
th/9802150.
[4] j. nian and h. j. pirner (2009), 0908.1330.
[5] b. galow, e. megias, j. nian, and h. j. pirner (2009),
0911.0627.
[6] h. meyer-ortmanns, rev. mod. phys. 68, 473 (1996),
hep-lat/9608098.
[7] k. yagi, t. hatsuda, and y. miake, camb. monogr. part.
phys. nucl. phys. cosmol. 23, 1 (2005).
[8] a. i. shoshi, f. d. steffen, and h. j. pirner, nucl. phys.
a709, 131 (2002), hep-ph/0202012.
[9] a. i. shoshi, f. d. steffen, h. g. dosch, and h. j. pirner,
phys. rev. d68, 074004 (2003), hep-ph/0211287.
|
0911.1681 | applications of uwb technology | recent advances in wideband impulse technology, low power communication along
with unlicensed band have enabled ultra wide band (uwb) as a leading technology
for future wireless applications. this paper outlines the applications of
emerging uwb technology in a private and commercial sector. we further talk
about uwb technology for a wireless body area network (wban).
| introduction
there have been tremendous research efforts to apply
ultra wide band (uwb) technology to the military and
government sectors. some of them are already accomplished
and some are intended for future. these applications are
mainly categorized into three parts: communications and
sensors, position location and tracking, and radar.
this paper presents a brief discussion on the aforementioned
applications. it is divided into five sections. section 2 outlines
the application of uwb technology in communication and
sensors. section 3 presents discussion on position location and
tracking. in section 4, we talk about radar. section 5 presents
the application of uwb in a wireless body area network
(wban). the final section presents conclusion.
ii. communications and sensors
a. low data rate
the power spectral density (psd) of uwb signals is
extremely low, which enables uwb system to operate in the
same spectrum with narrowband technology without causing
undue interference. the solution on the market for today's
indoor application is infrared or ultrasonic approaches. the
line-of-sight propagation in infrared technology cannot be
guaranteed all the time. it is also affected by shadows and
light-related interferences. ultra sonic approach propagates
with confined penetration. uwb technology is less affected
by shadows and allows the transmission through objects. the
innovative communication method of uwb at low data rate
gives numerous benefits to government and private sectors. for
instance, the wireless connection of computer peripherals such
as mouse, monitor, keyboard, joystick and printer can utilize
uwb technology. uwb allows the operation of multiples
devices without interference at the same time in the same
space. it can be used as a communication link in a sensor
network. it can also create a security bubble around a specific
area to ensure security. it is the best candidate to support a
variety of wban applications. a network of uwb sensors
such as electrocardiogram (ecg), oxygen saturation sensor
(spo2) and electromyography (emg) can be used to develop
a proactive and smart healthcare system. this can benefit patients in chronic conditions and provide long-term health monitoring. in a uwb system, the transmitter is often kept
simpler and most of the complexity is shifted towards receiver,
which permits extremely low energy consumption and thus
extends battery life.
designing a rake receiver for low-power devices is a complicated issue. energy detection receivers are the best approach to build simple receivers [1]. a rake receiver for an on-body network is presented in [2], which shows that 1 or 2 fingers are sufficient to collect 50% to 80% of the maximum energy for links with a distance of 15 cm, independent of its placement on the body. different energy management schemes may also assist in extending the battery life. positioning with previ-
ously unattained precision, tracking, and distance measuring
techniques, as well as accommodating high node densities due
to the large operating bandwidth are also possible [3]. global
positioning system (gps) is often available in low data rate applications and requires new solutions. the reduction in protocol overhead can decrease the energy consumption of the complex gps transceivers and extend the battery life. although low data rate applications using alternative phy concepts are currently discussed in ieee 802.15.4a [4], tremendous research efforts are still required to bring those systems to real-world applications.
b. high data rate
the unique applications of uwb systems in different scenarios have initially drawn much attention, since many applications of uwb address existing market needs for high data rate applications. demand for high-density multimedia applications is increasing, which requires innovative
methods to better utilize the available bandwidth. uwb system
has the property to fill the available bandwidth as demand
increases. the problem of designing receiver and robustness
against jamming are main challenges for high-rate applications
[5]. the large high-resolution video screens can benefit from
uwb. these devices stream video content wirelessly from
video source to a wall-mounted screen. various high data rate
applications include internet access and multimedia services,
wireless peripheral interfaces and location based services.
table i: contents and requirements for home networking and computing [9]

  service         data rate (mbps)   real time feature
  digital video   32                 yes
  dvd, tv         2-16               yes
  audio           1.5                yes
  pc              32                 no
  internet        >10                no
  other           <1                 no
regardless of the environment, very high data rate applications
(>1 gbit/sec) have to be provided. the use of very large
bandwidth at lower spectral efficiency makes the uwb system a suitable candidate for high-speed internet access and
multimedia applications. the conventional narrowband system
with high spectral efficiency may not be suitable for low cost
and low power devices such as pda or other handheld devices.
standardized wireless interconnection is highly desirable to replace cables and proprietary plugs [6]. the interconnectivity of various devices such as laptops and mobile
phones is increasingly important for battery-powered devices.
c. home network applications
home networking is a crucial factor in creating a pervasive home network environment. the wireless connectivity of different home electronic systems removes wiring clutter in the living room. this is particularly important when we consider
the bit rate needed for high definition television that is in
excess of 30 mbps over a distance of at least a few meters
[6]. in ieee 1394, an attempt has been made to integrate
entertainment, consumer electronics and computing within a
home environment. it provides isochronous mode where data
delivery is guaranteed with constant transmission speed. it is
important for real time applications such as video broadcasts.
the required data rates and services for different devices are
given in table i. ieee 1394 also provides asynchronous mode
where data delivery is guaranteed but no guarantee is made
about the time of arrival of the data [7]. the isochronous data
can be transferred using uwb technology. a new method
which allows ieee 1394 equipment to transfer an isochronous
data using a uwb wireless communication network is pre-
sented in [8]. a connection management protocol (cmp)
and ieee 1394 over uwb bridge module can exchange
isochronous data through ieee 1394 over uwb network.
iii. position location and tracking
position location and tracking have wide range of benefits
such as locating patient in case of critical condition, hikers
injured in remote area, tracking cars, and managing a variety
of goods in a big shopping mall. for active rf tracking and
positioning applications, the short-pulse uwb techniques offer
distinct advantages in precision time-of-flight measurement,
multipath immunity for leading edge detection, and low prime
power requirements for extended-operation rf identification
(rfid) tags [10]. the reason for supporting human-space intervention is to identify the persons and the objects the user aims at, and to identify the target task of the user.
knowing where a person is, we can figure out what or whom this person is near and finally make a hypothesis about what the user is aiming at [11]. this human-space intervention could
improve quality of life when used in a wban. in a wban,
a number of intelligent sensors are used to gather the patient's data and forward it to a pda, from which it is further forwarded to a remote server. in case of a critical condition such as arrhythmic
disturbances, the correct identification of patient's location
could assist medical experts in treatment.
iv. radar
short-pulse uwb techniques have several radar applica-
tions such as higher range measurement accuracy and range
resolution, enhanced target recognition, increased immunity to
co-located radar transmissions, increased detection probability
for certain classes of targets and ability to detect very slowly
moving or stationary targets [10]. uwb is a leading technol-
ogy candidate for micro air vehicles (mav) applications [12].
the millions of ultra-wideband pulses created per second are capable of high penetration in a wide range of materials such as building materials, concrete block, plastic and wood.
v. ultra wideband technology in wban
a wban consists of miniaturised, low power, and non-
invasive/invasive wireless biosensors, which are seamlessly
placed on or implanted in the human body in order to provide
a smart and adaptable healthcare system. each tiny biosensor
is capable of processing its own task and communicating with a
network coordinator or a pda. the network coordinator sends
patient's information to a remote server for diagnosis and pre-
scription. a wban requires the resolution of many technical
issues and challenges such as interoperability, qos, scalability,
design of low power rf data paths, privacy and security, low
power communication protocol, information infrastructure and
data integrity of the patient's medical records. the average
power consumption of a radio interface in a wban must be
reduced below 100μw. moreover, a wban is a one-hop star topology where the power budget of the miniaturised sensor nodes is limited, while the network coordinator has a sufficient power budget.
in addition, most of the complexity is shifted to the network
coordinator due to its abundant power
budget. the emerging uwb technology promises to satisfy the
average power consumption requirement of the radio interface
(100μw), which cannot be achieved by using narrowband
radio communication, and increases the operating period of
sensors. in the uwb system, considerable complexity on the
receiver side enables the development of ultra-low-power and
low-complexity uwb transmitters for uplink communication,
thereby making uwb a perfect candidate for a wban. the
noise-like behavior of uwb signals, which makes them difficult to detect, together with their robustness, offers high security and reliability for medical applications [5].
fig. 1.
block diagram of pulse generator [16]
existing technological growth has facilitated research in
promoting uwb technology for a wban. the influence of
human body on uwb channel is investigated in [13] and
results about the path loss and delay spread have been reported.
the behavior of uwb antenna in a wban has also been
presented in [14]. moreover, a uwb antenna for a wban
operating in close vicinity to a biological tissue is proposed
in [15]. this antenna can be used for wban applications
between 3 ghz and 6 ghz bands.
a low-complexity uwb transmitter presented in [16] adopts a pulse-based uwb scheme where a strong duty cycle is produced by restricting the operation of the transmitter to
pulse transmission. this pulse-based uwb scheme allows the
system to operate in burst mode with minimal duty cycle
and thus, reducing the baseline power consumption. a low
power uwb transmitter for a wban requires the calibration
of psd inside the federal communication commission (fcc)
mask for indoor application. the calibration process is a
challenging task due to the discrepancies between higher and
lower frequencies. two calibration circuits are used to calibrate
the spectrum inside fcc mask and to calibrate the bandwidth.
a pulse generator presented in fig. 1 activates triangular pulse
generator and a ring oscillator simultaneously. during pulse
transmission, the ring oscillator is activated by a gating circuit,
thus avoiding extra power consumption. a triangular pulse is
obtained at output when the triangular signal is multiplied with
the output carrier produced by the oscillator.
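a minimal signal-level sketch of this duty-cycled scheme (our own illustration with made-up pulse width, carrier frequency and repetition rate, not the circuit values of [16]) could read:

import numpy as np

FS = 40e9        # sampling rate, Hz (illustrative)
F_CARRIER = 4e9  # ring-oscillator carrier inside the 3-6 GHz band (illustrative)
T_PULSE = 2e-9   # triangular pulse width, s (illustrative)
PRF = 10e6       # pulse repetition frequency, Hz (illustrative)

def uwb_burst(n_pulses=4):
    """duty-cycled pulse-based uwb waveform: a triangular envelope multiplied by the
    carrier, with the transmitter idle between pulses (low baseline power)."""
    t = np.arange(0, n_pulses / PRF, 1.0 / FS)
    tau = t % (1.0 / PRF)
    envelope = np.where(tau < T_PULSE, 1.0 - np.abs(2.0 * tau / T_PULSE - 1.0), 0.0)
    return t, envelope * np.cos(2.0 * np.pi * F_CARRIER * t)

t, x = uwb_burst()
print("duty cycle ~ %.3f" % np.mean(np.abs(x) > 0))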
the finite-difference time-domain (fdtd) method is used to model uwb propagation around the human body, where the path loss depends on the distance and increases with a large fading variance. the narrowband implementation is compared in terms of power consumption using a wban channel model. simulation results showed that the path loss near the body was higher than the path loss in free space. moreover, it was concluded that the performance of a uwb transmitter for a wban is better for the best channels, while for average channels a narrowband implementation is a good solution [16]. considerable research efforts are still required at both the algorithmic and circuit levels to make uwb a key technology for wban applications.
vi. conclusions
the uwb is a leading technology for wireless applications
including numerous wban applications. in this paper, we
discussed the current and future applications of emerging
uwb technology in a private and commercial sector. we
believe that the uwb technology can easily satisfy the energy
consumption requirements of a wban. our future work
includes the investigation of uwb technology for a non-invasive wban.
references
[1] federal communications commission (fcc) fcc noi: rules
regarding ultra-wideband transmission systems, et docket no.
98-153, sept. 1, 1998.
[2] thomas zasowski, frank althaus, mathias stager, a. wittneben,
and g. troster, uwb for non-invasive wireless body area net-
works: channel measurements and results, ieee conference on
ultra wideband systems and technologies, uwbst 2003, reston,
virginia, usa, nov. 2003.
[3] b allen, "ultra wideband wireless sensor networks", iee uwb
symposium, june 2004.
[4] dec. 2007, http://www.ieee802.org/15/pub/tg4a.html
[5] b. allen, t. brown, k. schwieger, e. zimmermann, w. q. malik,
d. j. edwards, l. ouvry, and i. oppermann, ultra wideband: appli-
cations, technology and future perspectives, in proc. int. workshop
convergent tech. oulu, finland, june 2005.
[6] b allen, m ghavami, a armogida, a.h aghvami, the holy grail
of wire replacement, iee communications engineer, oct/nov
2003.
[7] jan. 2008, http://www.vxm.com/21r.49.html
[8] s.h park, s.h lee, isochronous data transfer between av devices
using pseudo cmp protocol in ieee 1394 over uwb network,
ieice trans. commun., vol.e90-b, no.12 december 2007.
[9] ben allen, white paper ultra wideband: technology and future
perspectives v3.0, march 2005.
[10] r.j. fontana, recent system applications of short-pulse ultra-
wideband (uwb) technology, ieee microwave theory and tech.,
vol. 52, no. 9, september 2004.
[11] t. manesis and n. avouris, survey of position location techniques
in mobile systems, mobilehci, salzburg, september 2005.
[12] r. j. fontana, e. a. richley et.al, an ultra wideband radar
for micro air vehicle applications, reprinted from 2002 ieee
conference on ultra wideband systems and technologies, may
2002.
[13] yue ping zhang; qiang li, "performance of uwb impulse radio
with planar monopoles over on-human-body propagation chan-
nel for wireless body area networks," antennas and propagation,
ieee transactions on , vol.55, no.10, pp.2907-2914, oct. 2007
[14] yazdandoost, k.y.; kohno, r., "uwb antenna for wireless body
area network," microwave conference, 2006. apmc 2006. asia-
pacific , vol., no., pp.1647-1652, 12-15 dec. 2006
[15] m. klemm, i.z. kovacs, et.al, comparison of directional and omni-
directional uwb antennas for wireless body area network appli-
cations, 18th international conference on applied electromagnetics
and communications, pg 1-4, icecom 2005.
[16] j. ryckaert, c. desset, a. fort, m. badaroglu, v. de heyn,
p. wambacq, g. van der plas, s. donnay, b. van poucke, b.
gyselinckx, "ultra-wide-band transmitter for low-power wireless
body area networks: design and evaluation," ieee transactions on
circuits and systems i: regular papers, vol.52, no.12, pp. 2515-
2525, dec. 2005
|
0911.1683 | luminescent ions in silica-based optical fibers | we present some of our research activities dedicated to doped silica-based
optical fibers, aiming at understanding the spectral properties of luminescent
ions, such as rare-earth and transition metal elements. the influence of the
local environment on dopants is extensively studied: energy transfer mechanisms
between rare-earth ions, control of the valence state of chromium ions, effect
of the local phonon energy on thulium ions emission efficiency, and broadening
of erbium ions emission induced by oxide nanoparticles. knowledge of these
effects is essential for photonics applications.
| introduction
during the last two decades, the development of sophisticated optical systems and
devices based on fiber optics have benefited from the development of very performant optical
fiber components. in particular, optical fibers doped with 'active' elements such as rare-earth
(re) ions have allowed the extremely fast development of optical telecommunications [i,ii],
lasers [iii] industries and the development of temperature sensors [iv]. the most frequently used
re ions (nd3+, er3+, yb3+, tm3+) have applications in three main spectral windows: around 1,
1.5 and 2μm in fiber lasers and sensors based on absorption/fluorescence and around 1.5 μm for
telecommunications and temperature sensors. re-doped fibers are either doped with one
element (e.g. er3+ in line amplifiers for long haul telecommunications) or two elements (e.g.
yb3+ and er3+ in booster amplifiers or powerful 1.5 μm lasers). in the second case, the non-
radiative energy transfer mechanism from donor to acceptor is implemented to benefit from the
good pump absorption capacity of the donor (e.g. yb3+ around 0.98 μm) and from the good
stimulated emission efficiency of the acceptor (e.g. er3+ around 1.5 μm). all the developed
applications of amplifying optical fibers are the result of long and careful optimization of the
material properties, particularly in terms of dopant incorporation in the glass matrix,
transparency and quantum efficiency.
the exploited re-doped fibers are made of a choice of glasses: silica is the most widely
used, sometimes as the result of some compromises. alternative glasses, including low
maximum phonon energy (mpe) ones, are also used because they provide better quantum
efficiency or emission bandwidth for some particular optical transitions of re ions. the iconic example is the tm3+-doped fiber amplifier (tdfa) for telecommunications in the s-band
(1.48-1.53 μm) [v], for which low mpe glasses have been developed: oxides [vi,vii], fluorides
[viii], chalcogenides [ix]... however, these glasses have some drawbacks that are not acceptable from a commercial point of view: high fabrication costs, low reliability, difficult connection to silica
components and, in the case of fiber lasers, low optical damage threshold and resistance to heat.
to our knowledge, silica glass is the only material able to meet most application requirements, and therefore the choice of vitreous silica for the active fiber material is of critical
importance. however a pure silica tdfa would suffer strong non-radiative de-excitation
(nrd) caused by multiphonon coupling from tm3+ ions to the matrix. successful insulation of
tm3+-ions from matrix vibrations by appropriate ion-site 'engineering' would allow the
development of a practical silica-based tdfa.
other dopants have recently been proposed to explore amplification over new wavelength
ranges. bi-doped glasses with optical gain [x] and fiber lasers operating around 1100-1200 nm
have been developed [xi,xii], although the identification of the emitting center is still not clear,
and optimization of the efficiency is not yet achieved. transition metal (tm) ions of the ti-cu
series would also have interesting applications as broad band amplifiers, super-fluorescent or
tunable laser sources, because they have in principle ten-fold spectrally larger and stronger
emission cross-sections than re ions. however, important nrd strongly reduces the emission
quantum efficiency in silica. the optical properties of bi- and tm-doped fibers are extremely sensitive to the glass composition and/or structure at a very local scale. as for tm3+ ions, practical
applications based on silica would become possible once 'ion site engineering' is performed in a systematic way. such an approach is proposed via 'encapsulation' of dopants inside glassy
or crystalline nanoparticles (np) embedded in the fiber glass, like reported for oxyfluoride
fibers [xiii] and multicomponent silicate fibers [xiv]. in np-doped-silica fibers, silica would act
as support giving optical and mechanical properties to the fiber, whereas the dopant
spectroscopic properties would be controlled by the np nature. the np density, mean diameter
and diameter distribution must be optimized for transparency [xv].
in this context, our group has made contributions in various aspects introduced above.
our motivations are both fundamental and application oriented. first, the selected dopants act as
probes of the local matrix environment, via their spectroscopic variations versus ligand field
intensity, site structure, phonon energy, statistical proximity to other dopants,... the studies are
always dedicated to problems or limitations in applications, such as the erbium-doped fiber amplifier (edfa) and the tdfa, or high-temperature sensors. it is also important to use a
commercially derived fabrication technique, here the modified chemical vapor deposition
(mcvd), to assess the potential of active fiber components for further development.
the aim of this paper is to review our contributions to improving the spectroscopic properties of some re and tm ions doped into silica. the article is organized as follows: section 2 describes the mcvd fabrication method of preform and fiber samples, and the common characterization techniques used in all studies. section 3 is devoted to the study of energy transfers in erbium (er3+) and ytterbium:erbium (yb3+:er3+) heavily (co)doped fibers and the applications to fiber temperature sensors, whereas section 4 summarizes our original investigations on chromium (cr3+ and cr4+) in silica-based fibers. in section 5, we report on the spectroscopic investigations of thulium- (tm3+) doped fibers versus the material composition, including phonon interactions and non-radiative relaxations. in section 6 are reported our recent discoveries on re-doped dielectric nanoparticles grown by phase separation.
experimental
preforms and fibers fabrication
all the fibers investigated in this article were drawn from preforms prepared by the
modified chemical vapor deposition (mcvd) technique [xvi] at laboratoire de physique de la
matière condensée (nice). in this process, chemicals (such as o2, sicl4) are mixed inside a
glass tube that is rotating on a lathe. due to the flame of a burner moving along the tube, they
react and extremely fine silica particles are deposited on the inner side of the tube. this soot is transformed into a glass layer (about a few μm thick) when the burner passes over.
the cladding layers are deposited inside the substrate tube, followed by the core layers.
germanium and phosphorus can be incorporated directly through the mcvd process. they are
added to raise the refractive index. moreover, this last element is also added as a melting agent,
decreasing the melting temperature of the glass. all the other elements (rare-earths, transition
metals, aluminium, ...) are incorporated through the solution doping technique [xvii]. the last
core layer is deposited at lower temperature than the preceding cladding layers, so that they are
not fully sintered and left porous. then the substrate tube is filled with an alcoholic solution of
salts and allowed to impregnate the porous layers. after 1-2 hours, the solution is removed, the
porous layer is dried and sintered. when the deposition is complete, the tube is collapsed at
2000°c into a preform. in our case, the typical length of the preform is about 30 cm and the
diameter is 10 mm. the preform is then put into a furnace for drawing into fiber. the preform
tip is heated to about 2000°c. as the glass softens, a thin drop falls by gravity and pulls a thin
glass fiber. the diameter of the fiber is adjusted by varying the capstan speed. a uv-curable
polymer is used to coat the fiber.
material characterizations
refractive index profiles (rip) of the preforms were measured using a york technology
refractive index profiler (p101), while the rips of the optical fibers were determined using a
york technology refractive index profiler (s14). the oxide core compositions of the samples
were deduced from measurement of the rip in the preform, knowing the correspondence
between the index increase and the alo3/2, geo2, po5/2 concentrations in silica glass from the literature
[xviii,xix]. the composition was also directly measured on some preforms using electron probe
microanalysis technique in order to compare results. a good agreement was found. the
concentration of these elements is generally around a few mol%. luminescent ion concentrations
are too low to be measured through the rip. they were measured through absorption spectra.
for example, the tm3+ ion concentration has been deduced from the 785 nm (3h6 -> 3h4) absorption peak measured in fibers, using the absorption cross-section reported in [xx]: σabs(785 nm) = 8.7x10-25 m2.
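as an illustration of this procedure (the peak absorption value below is a made-up example, since the measured values are not listed here), a minimal sketch of the conversion from a measured peak absorption to an ion density could read:

import math

SIGMA_ABS = 8.7e-25       # m^2, tm3+ absorption cross-section at 785 nm [xx]

def ion_concentration(peak_absorption_db_per_m):
    """convert a measured peak absorption (db/m) into an ion density (ions/m^3),
    assuming the small-signal relation alpha = sigma_abs * n."""
    alpha = peak_absorption_db_per_m * math.log(10) / 10.0   # db/m -> m^-1
    return alpha / SIGMA_ABS

print("%.2e ions/m^3" % ion_concentration(20.0))   # hypothetical 20 db/m peak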
energy transfers in er3+ and yb3+:er3+ heavily doped silica fibers
the non-radiative energy transfer processes are well-known phenomena that influence the
optical properties of doped materials. the first theoretical basis appeared in the 1950s with the förster-dexter model [xxi,xxii], which treats this process as the result of dipole-dipole and
multipole-multipole interactions. two energy transfer processes are described in fig. 1. when
pumping a co-doped material, some ions are promoted to one of their excited levels. if some ions
are close to each other, their wave-functions interpenetrate and the energy stored in the excited
level of the donor ions is non-radiatively transferred to a resonant level of the acceptor ions.
this process was turned to good account in yb3+:er3+ co-doped silica fibers for high power fiber
amplifier [xxiii] and laser [xxiv] applications : it takes advantage of the strong absorption cross-
section of yb3+ at 980 nm and of the high efficiency of the energy transfer. in the case of high
doping levels for both species, another non-radiative energy transfer process can take place and
allows exciting a higher level of the acceptor ion : it is the double energy transfer process (det)
described by auzel [xxv]. this process was first used to convert infrared light from led to
visible emission or to detect weak infrared signals with photomultipliers [xxvi,xxvii].
double energy transfer in er3+-doped fibers
the clustering effect in er3+-doped silica fibers is now a well-accepted phenomenon, and
its detrimental influence on the 1550-nm gain transition of such fibers is well established
[xxviii]. for simplicity, modeling of clusters has consisted of considering that a fraction of the
dopants were organized in ion pairs [xxix], in which an immediate energy transfer leads to an
instantaneous relaxation of one excited ion. this model is in very good agreement with the
experimental results obtained for saturable absorption and for gain measurements at low er3+
doping levels as in fiber amplifiers. at higher doping levels, ainslie et al. [xxx] showed that, in
addition to the ions dispersed in the host, regions in which the rare-earth concentration exceeds 40 wt% - called clusters - appear: in such a material the ion-pair model cannot be
applied. we have developed a cluster model [xxxi] that differs from the ion-pair one by the fact
that we consider that each ion of a cluster can efficiently transfer its energy to any of the other
ions of the same cluster. when n ions of a cluster are excited, a succession of (n-1) fast
relaxations by energy transfer leads to a situation in which all the ions of a cluster but one are
de-excited. this model permits the determination of the proportion of the dopants organized in
clusters and the transfer rate. in order to validate the model we performed a pump-absorption-
versus-pump-power experiment with two fiber samples, er-1 and er-2, doped with 100 ppm
and 2,500 ppm of er3+, respectively (fig. 2). this shows that the non-saturable absorption
(nsa) grows dramatically with the er3+ concentration. we have attributed this behaviour to the
presence of clusters containing a significant percentage of the dopants and in which efficient
energy transfers allow these ions to relax rapidly after the absorption of a first pump photon.
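a minimal numerical sketch of this picture (a schematic two-population absorption model of our own, not the full rate-equation treatment of [xxxi]) shows how a clustered fraction produces a non-saturable pedestal in the pump absorption:

import numpy as np

def absorption_coefficient(p, alpha0, f_cluster, p_sat):
    """schematic pump absorption: a fraction f_cluster of the ions sits in clusters and
    keeps absorbing (fast intra-cluster relaxation), while isolated ions saturate as a
    simple two-level system."""
    saturable = (1.0 - f_cluster) * alpha0 / (1.0 + p / p_sat)
    non_saturable = f_cluster * alpha0
    return saturable + non_saturable

pump = np.logspace(-2, 2, 5)          # pump power in units of p_sat
for f in (0.0, 0.3):                  # isolated ions only vs. 30% of ions in clusters
    print(f, absorption_coefficient(pump, alpha0=1.0, f_cluster=f, p_sat=1.0))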
double energy transfer in highly yb3+:er3+-co-doped fibers
the green fluorescence of er3+-doped optical fiber is a well-known phenomenon in 800
nm-pumped erbium-doped fibers. this emission results from the excited state absorption
phenomenon and is characteristic of the emission from the 2h11/2 and 4s3/2-levels and
consequently can be observed with any pumping scheme leading to the population of these
levels. we have studied how these levels can be excited by det [xxxii] and a schematic energy
diagram is shown in fig. 3.
at low rare-earth concentrations, the large inter-ionic distances permit efficient single
energy transfers, but the second energy transfer is very inefficient. for applications in which the
green fluorescence is desirable, this second energy transfer must be enhanced. for that the rare-
earth concentration must be as high as possible to reduce the distance between neighboring ions.
in this case, a second phase, referred to as clusters, can appear in which the rare-earth ion concentration is particularly high. in order to quantify the fraction of active ions in clusters,
we have studied the yb3+ and er3+ fluorescence dynamics in a highly co-doped fiber
([yb]=[er]=2,500 ppm) : the 1040 nm-fluorescence decay represents the population decay from
the 2f5/2-metastable level of yb3+, and that of the green-fluorescence represents the evolution of
the 2h11/2 and 4s3/2-populations of er3+. our setup allows simultaneous measurements of the
counter-propagative visible emission and the lateral infrared emission. the experimental curves
show two typical decays. fitted with our rate equations model [xxxii], they revealed that
roughly 50% of both ions are organized in clusters in the co-doped fiber. this high percentage
must be associated with very high yb-er transfer rates (3x106 s-1), one order of magnitude
superior to the er3+:4i11/2 intermediate level relaxation rate (3.7x105 s-1): er3+ ions placed in their
short lived 4i11/2 state have a higher probability to be excited to the 4f7/2 upper state than to relax
spontaneously. the strong percentage of ions organized in clusters and the very high transfer
rates are at the origin of the very good up-conversion efficiency.
thermalization effects between excited levels in doped fibers: temperature
sensor based on fluorescence of er3+
though the rare-earth ions are never in thermodynamical equilibrium because of the
metastability of some levels, it has been demonstrated that the populations of the 2h11/2 and 4s3/2
levels responsible for the green emission in er3+-doped fibers are in quasi-thermal equilibrium.
this effect has been observed for the first time in fluoride glass fibers [xxxiii] and can be
attributed to the relatively long lifetime of these levels (400 μs) in that host. in silica, in spite of
the two orders of magnitude shorter lifetime, a fast thermal coupling between both levels has
been proposed [xxxiv] and confirmed experimentally [xxxv] (fig. 4). indeed these levels can be
considered to be in quasi-thermal equilibrium, because of the small energy gap between them,
about 800 cm-1, compared to the high energy gap between them and the nearest lower level,
about 3000 cm-1. in this case, the lifetime of these levels is sufficient (1 μs) to allow populating
the upper level from the lower one by phonon-induced transitions. therefore r, the ratio of the
intensities coming from both levels, can be written as:
R = \frac{I(^2H_{11/2})}{I(^4S_{3/2})} = \frac{\nu(^2H_{11/2})\,\sigma_e(^2H_{11/2})}{\nu(^4S_{3/2})\,\sigma_e(^4S_{3/2})} \exp\!\left(-\frac{\Delta E}{kT}\right)   (1)
where ν is the frequency, σe the emission cross-section, k the boltzmann constant, ΔE the energy gap between the two levels and T the temperature in kelvin. in fig. 4 we
show that the experimental data can be fitted by a function in agreement with equation (1). this
is another example of an energy transfer process, this one being assisted by phonons.
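a minimal numerical sketch of this sensing principle (our own illustration: the cross-section prefactor of eq. (1) and the 300-1000 k range are placeholders, only the ~800 cm-1 gap is taken from the text) could read:

import numpy as np

K_B_CM = 0.695       # boltzmann constant in cm^-1/K
DELTA_E = 800.0      # 2H11/2 - 4S3/2 energy gap, cm^-1 (from the text)
C_PREF = 1.0         # nu*sigma_e prefactor of eq. (1); hypothetical, cancels in db differences

def ratio(temperature_k):
    """fluorescence intensity ratio r(t) of eq. (1)."""
    return C_PREF * np.exp(-DELTA_E / (K_B_CM * temperature_k))

t_low, t_high = 300.0, 1000.0
dynamic_db = 10.0 * np.log10(ratio(t_high) / ratio(t_low))
print("dynamic range: %.1f db, mean slope: %.3f db/K"
      % (dynamic_db, dynamic_db / (t_high - t_low)))
# -> about 11.7 db and 0.017 db/K, close to the ~11 db and ~0.016 db/K quoted below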
in order to take advantage of the high efficiency of the det in highly co-doped yb-er
doped fiber and of the thermalisation effect between the higher levels involved in the green
fluorescence in this kind of fiber, we have developed a new temperature sensor, unsensitive for
strain. the dynamic obtained was 11 db in fig. 4 over the shown temperature range, leading to
a mean rate of change of the green intensity ratio of approximately 0.016 db/k at 300 k.
several temperature cycles have been carried out and we have observed a good repeatability. as
for the stability, no modifications have been observed on the two intensities when the fiber was
heated for several hours at temperatures up to 600°c. due to the strong absorption of the
doped fiber in the signal wavelength range - the green emissions corresponding to transitions
downto the fundamental level - and to the 15 db/km intrinsic absorption of the transparent fiber
in the same wavelength range, such a device would be limited to a point sensor.
we have developed a new sensor based on the 1.13 μm and 1.24 μm emission lines,
coming from the same levels [xxxvi]. these lines present the same temperature behaviour as the
green ones. as the lower level of these transitions is the 4i11/2-level and not the fundamental one
(fig. 3), the signals are absorption free and their wavelengths correspond to a transparency
region of the intermediate fibers. these arguments have permitted the development of an
efficient quasi-distributed configuration without limitation on the sensing line length: the short lifetime of the upper levels (1 μs) could allow realizing a sensor network. each sensitive head is
separated from its neighbors by a 100-meter long transparent silica fiber in order to time-resolve
the counter-propagative signals.
conclusion
energy transfer processes in rare-earth-doped materials have been studied since the
middle of the 20th century. at the beginning, the applications of det were mainly conversion of
infrared light to visible emission or detection of weak infrared signals with photomultipliers. a renewed interest appeared with the development of optical fibers, in which high power densities can be achieved: single energy transfer allows the improvement of high-power fiber amplifiers and lasers, and det permits realizing point and quasi-distributed fiber sensors.
local structure, valency states and spectroscopy of transition metal ions
optical fiber materials with very broad-band gain are of great interest for many
applications. tunability in re-doped fiber devices is already well established, but limited by
shielding of the optically active electronic orbitals of re ions. optically active, unshielded
orbitals are found in transition metal (tm) ions. some tm-doped bulk solid-state lasers
materials, such as cr4+:yag, have demonstrated very good results as broad-band gain media
[xxxvii]. attempts with other tm ions, like ni2+ in vitroceramic fibers, are also promising
[xiv]. more recently, a 400-nm emission bandwidth was observed from a fiber whose cr-doped
core was made of y2o3:al2o3:sio2, obtained by a rod-in-tube technique using a cr4+:yag rod as
core material and a silica tube as cladding material [xxxviii].
little literature exists on chromium- and other tm-doped vitreous bulk silica, although
this issue was addressed in the 70's [xxxix] to improve transmission of silica optical fibers.
some reports on chromium-doped glasses have already shown evidence of absorption and near-
infrared (nir) fluorescence due to cr4+ in these materials [xl,xli]. however their compositions
and preparation techniques greatly differ from those of silica optical fibers. therefore, some
basic studies on the optical properties of tm ions in silica-based optical fibers are needed. in
particular, the final tm oxidation state(s) in the fiber core strongly depend(s) on the preparation
process. also, the optical properties (absorption and luminescence) of one particular oxidation
state of a tm ion varies from one host composition and structure to another, due to variations of
the crystal-field (so-called ligand field in glass) [xlii]. hence the interpretation of absorption and
emission spectroscopy is difficult. because no luminescence spectroscopy of the tm-doped
silica fibers had been reported before, we have contributed to exploring this field. we have
studied the influence of the chemical composition of the doped region on the cr-oxidation states
and on the spectroscopic properties of the samples. we have also studied the optical properties
versus the experimental conditions (temperature and pump wavelength). we describe the
experimental details specifically used for tm-doped fibers, then we summarize all results and
interpretations.
fabrication and characterization of chromium-doped samples
the preforms and fibers were prepared as described in section 2, using a cr3+-salt
alcoholic doping solution and oxygen or nitrogen (neutral) atmosphere for the drying-to-
collapse stages. three different types of samples containing ge or/and al were prepared,
referred to as cr(ge), cr(ge-al) and cr(al), respectively. the total chromium concentration
([cr]) was varied from below 50 mol-ppm to several thousands mol ppm. above several 100s
mol ppm, preform samples had evidence of phase separation causing high background optical
losses, whereas fibers (few 10s mol ppm) did not show phase separation and had low
background losses (<1 db/m). the oxidation states of cr and their relative concentrations were
analyzed by electron paramagnetic resonance, whereas the absolute content of all elements
(including cr) was analyzed by plasma emission spectroscopy.
absorption spectra were analyzed using the tanabe-sugano (t.-s.) formalism [xliii] to compare our assignments of optical transitions with reports on cr3+- and cr4+-doped materials. this formalism helps predict the energy of the electronic states of a tm ion in a known ligand field symmetry as a function of the field strength dq and the phenomenological racah parameters b and c (all in cm-1, fig. 5). the dq/b ratio allows the qualitative determination of some optical properties of tm ions, such as the strength, energy and bandwidth of optical transitions. we have also estimated the absorption cross-sections using results from composition
and valency measurements. absorption and emission spectroscopies including decay
measurements were performed on both preforms and fibers, at room temperature (rt) and low
temperature (lt, either 12 or 77 k), using various pump wavelengths: 673, 900 and 1064 nm.
full details of the experimental procedures are given in [xliv,xlv,xlvi].
principal results
by slightly modifying the germanium and/or aluminium concentration in the core of the samples, their optical properties are greatly modified. in particular, we have shown that:
i)
only cr3+ and cr4+ oxidation states are stabilized. cr3+ is favoured by ge co-doping, and lies in octahedral site symmetry (o), as in other oxide glasses [xlvii]. cr4+ is present in all samples. this valency is promoted by al co-doping or when [cr] is high, and lies in a distorted tetrahedral site symmetry (cs) [xlviii,xlix]. the low-doped cr(al) samples contain only cr4+ and their absorption spectra are similar to those of aluminate [xl] and alumino-silicate glasses [xli] and even of crystalline yag (y3al5o12) [l]. glass modifiers like al induce major spectroscopic changes, even at low concentrations (~1-2 mol%). this would help in engineering the chromium optical properties in silica-based fibers, possibly using alternative modifiers.
ii) the absorption spectra have been interpreted and optical transitions assigned for each valency state present (fig. 6). the absorption cross-section curves (σabs) were estimated. for cr3+, σabs(cr3+, 670 nm) = 43 x 10-24 m2 is consistent with values reported in other materials, such as ruby [li] and silica glass [xxxix], while σabs(cr4+, 1000 nm) ~ 3.5 x 10-24 m2 is lower than in reference crystals for lasers [lii] or saturable absorbers [liii], but consistent with estimated values in alumino-silicate glass [xli].
iii) using the t.-s. formalism, we found dq/b = 1.43, which is lower than the value where the 3t2 and 1e levels cross (dq/b = 1.6, fig. 5). as a consequence, the expected emission is along the 3t2→3a2 transition, as a broad featureless nir band. no narrow emission line from the 1e state is expected, in agreement with fluorescence measurements. dq/b is lower than the values reported for cr4+ in laser materials like yag and forsterite [xlviii].
iv) the lt fluorescence from cr4+ spreads over a broad spectral domain, from 850 to 1700 nm, and strongly varies depending on the core chemical composition, [cr] and the pump wavelength λp. the observed bands were all attributed to cr4+ ions in various sites. fig. 7 shows the fluorescence spectra of cr4+ in two different types of samples and in various experimental conditions. possible emission from other centers (cr3+, cr5+, cr6+) was discussed, but rejected [xlvi]. the fluorescence sensitivity to [cr] and λp suggests that cr ions are located in various host sites, and that several sites are simultaneously selected by an adequate choice of λp (as in cr(ge-al)). it is also suggested that although al promotes cr4+ over cr3+ when [cr] is low, cr4+ is also promoted in ge-modified fibers at high [cr].
v)
the strong decrease of fluorescence from lt to rt is attributed to temperature quenching caused by multiphonon relaxations, as in crystalline materials where the emission typically drops by an order of magnitude from 77 k to 293 k [xxxvii].
vi) the lt fluorescence decays are non-exponential (fig. 8) and depend on [cr] and λs. the fast part of the decay is assigned to cr clusters or cr4+-rich phases within the glass. the 1/e lifetimes (τ) at λs = 1100 nm are all within the 15-35 μs range in al-containing samples, whereas τ ~ 3-11 μs in cr(ge) samples, depending on [cr]. the lifetimes of isolated ions (τiso), measured on the exponential tails of the decay curves (not shown), reach high values: τiso ~ 200 to 300 μs at λs ~ 1100 nm and τiso ~ 70 μs at λs ~ 1400 nm. in the heavily-doped cr(ge) samples, τiso is an order of magnitude lower. hence, cr4+ ions are hosted in various sites: the lowest energy ones suffer more non-radiative relaxations than the higher energy ones. also, the presence of al improves the lifetime, even at high [cr]. it is estimated that at rt the lifetime would be of the order of 1 μs or less. this fast relaxation time, compared to that of re ions (~1 ms), has been exploited to implement a fiberized saturable absorber in a passively q-switched all-fiber laser [liv].
conclusion
the observed lt fluorescence of cr4+ is extremely sensitive to glass composition, total
cr concentration and excitation wavelength. using aluminum as a glass network modifier has
advantages: longer excited state lifetime and broader fluorescence bandwidth than in
germanium-modified silica. a combination of al and ge glass modification induces the
broadest fluorescence emission in the nir range, to our knowledge, exhibiting a 550 nm-
bandwidth. however, increasing the quantum efficiency is now necessary for practical fiber
amplifiers and light sources. further investigations concluded that it is necessary to locally surround the tm ions with a different material, i.e. one having markedly different chemical and physical properties compared to pure silica, in order to improve the local site symmetry and hence minimize nrd. a preliminary implementation of this principle was reported recently, concerning cr3+ ions in post-heat-treated ga-modified silica fibers [lv]. when engineering of the local dopant environment becomes possible, practical tm-doped silica-based amplifying devices will be at hand.
phonon interactions / non-radiative relaxations: improvement of tm3+ efficiency
thulium-doped fibers have been widely studied in the past few years. because of the rich energy diagram of the tm3+ ion, lasing action and amplification at multiple infrared and visible wavelengths are allowed. thanks to the possible stimulated emission peaking at 1.47 μm (3h4 => 3f4, see fig. 9), discovered by antipenko et al. [lvi], one of the most exciting possibilities of the tm3+ ion is the amplification of optical signals in the s-band (1.47–1.52 μm), in order to increase the available bandwidth for future optical communications. unfortunately, the upper 3h4 level of this transition is very close to the next lower 3h5 level, so non-radiative de-excitations (nrd) are likely to happen in high-phonon-energy glass hosts, causing detrimental gain quenching.
oxide modifiers influence on the 3h4-level lifetime
to address this problem, we have studied the effect of some modifications of the tm3+ ion's local environment. keeping the overall fiber composition as close as possible to that of a standard silica fiber, we expect to control the rare-earth spectroscopic properties by co-doping with selected modifying oxides. we have studied the incorporation of modifying elements compatible with mcvd. geo2 and alo3/2 are standard refractive index raisers in silica. alo3/2 is also known to improve some spectroscopic properties of the er3+ ion for c-band amplification [i] and to reduce the quenching effect caused by clustering in highly rare-earth-doped silica [lvii]. both oxides have a lower maximum phonon energy than silica. we use po5/2, which has a high phonon energy, as a counter-example. the geo2 and po5/2 concentrations are 20 and 8 mol%, respectively. the alo3/2 concentration is varied from 5.6 to 17.4 mol%. the tm3+ concentration is less than 200 mol ppm.
to investigate the role of the modification of the local environment, decay curves of the 810 nm fluorescence from the 3h4 level were recorded. all measured decay curves are non-exponential. this can be attributed to several phenomena and is discussed below. here, we study the variations of the 1/e lifetime (τ) versus the concentration of oxides of network modifiers (al or p) and formers (ge). the lifetime strongly changes with the composition of the glass host. the most striking results are observed within the tm(al) sample series: τ increases linearly with increasing alo3/2 content, from 14 μs in pure silica to 50 μs in sample tm(al) containing 17.4 mol% of alo3/2, i.e. the lifetime was increased about 3.6 times. the lifetime of the 20 mol% geo2 doped fiber tm(ge) was increased up to 28 μs, whereas that of the 8 mol% po5/2 doped fiber tm(p) was reduced down to 9 μs. aluminium co-doping therefore seems the most interesting route among the three tested co-dopants.
non-exponential shape of the 810-nm emission decay curves
all fluorescence decay curves from the 3h4 level are non-exponential. we have
investigated the reasons for this non-exponential shape in various silica glass compositions. we
observed that the decay curve shape depends only on the al-concentration, even in the presence
of ge or p in samples tm(ge) and tm(p), respectively [lviii]. it is thought that tm3+ ions are
inserted in a glass which is characterized by a multitude of different sites available for the rare-
earth ion, leading to a multitude of decay constants. this phenomenological model was first
proposed by grinberg et al. and applied to cr3+ in glasses [lix]. here we apply this model, for
the first time to our knowledge, to tm3+-doped glass fibers. in this method, a continuous
distribution of lifetimes rather than a number of discrete contributions is used. the advantage of
this method is that no luminescence decay model or physical model of the material is required a
priori. the luminescence decay is given by:
I(t) = \sum_i a_i \exp(-t/\tau_i)    (2)
where a(τ) is the continuous distribution of decay constants.
the procedure for calculating a(τ) and the fitting algorithm are described in detail in [lix]. for the fitting procedure, we considered 125 different values of τi, logarithmically spaced from 1 to 1000 μs. by applying this procedure to all the decay curves, a good match was generally obtained. for a given composition (fig. 10), we can notice two main distributions of the decay constant. with increasing aluminium concentration, they increase from 6 to 15 μs and from
20 to 50 μs, respectively. for the highest aluminium concentration (9 mol%, in tm(ge) and
tm(p)), these two bands are still present (not shown in the figure). one is around 10 μs and the
second one spreads from 30 to 100 μs, for both compositions (tm(ge) and tm(p)). according
to the phenomenological model, the width of the decay constants distribution is related to the
number of different sites. the large distribution around 80 μs is then due to a large number of
sites available with different environments. it is however remarkable that this distribution at 80
μs is very similar in both sample types. from the tm3+ ion point of view (considering
luminescence kinetics), tm(ge) and tm(p) glasses seem to offer the same sites.
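to make the fitting procedure more concrete, the following minimal python sketch recovers a distribution of decay constants from a non-exponential decay curve using a log-spaced grid of 125 lifetimes between 1 and 1000 μs, as described above; a simple non-negative least-squares fit is assumed here, whereas the actual algorithm of [lix] may differ in detail.

# Minimal sketch: recover a distribution of decay constants a(tau) from a
# non-exponential decay curve I(t) = sum_i a_i exp(-t/tau_i), on a log-spaced
# grid of 125 lifetimes between 1 and 1000 us. A non-negative least-squares
# fit is assumed; the original fitting algorithm [lix] may differ.
import numpy as np
from scipy.optimize import nnls

def decay_time_distribution(t_us, intensity, n_tau=125, tau_min=1.0, tau_max=1000.0):
    """Return (tau_grid, amplitudes) fitted to the measured decay curve."""
    tau = np.logspace(np.log10(tau_min), np.log10(tau_max), n_tau)   # lifetimes, us
    kernel = np.exp(-t_us[:, None] / tau[None, :])                   # exp(-t/tau_i)
    amplitudes, _residual = nnls(kernel, intensity)                   # a_i >= 0
    return tau, amplitudes

# Example with synthetic data: two sites at 10 us and 80 us, as in fig. 10.
t = np.linspace(0.0, 400.0, 400)                                      # time, us
signal = 0.7 * np.exp(-t / 10.0) + 0.3 * np.exp(-t / 80.0)
tau, a = decay_time_distribution(t, signal)
print("dominant decay constants (us):", tau[a > 0.05 * a.max()])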
the meaning of the decay constant values is now discussed. the lifetime constants obtained from the fitting can be correlated with those expected for thulium located in a pure silica or a pure al2o3 environment. the 3h4 lifetime is calculated using the following equation:

\frac{1}{\tau} = \frac{1}{\tau_{rad}} + W_{nr}    (3)

where τrad corresponds to the radiative lifetime, which is given as 670 μs in silica [lx]. Wnr is the non-radiative decay rate, expressed as [lxi]:

W_{nr} = W_0 \exp\left[-\alpha\,(\Delta E - 2 E_p)\right]    (4)

where W0 and α are constants depending on the material, ΔE is the energy difference between the 3h4 and 3h5 levels and Ep is the phonon energy of the glass. W0 and α were estimated for different oxide glasses [lxi, lxii]. the energy difference ΔE was estimated by measuring the absorption spectrum of the fibers. when the al concentration varies, this value is almost constant, around 3700 cm-1 [lxiii].
with these considerations, the expected 3h4 lifetime can be calculated. in the case of silica glass, τsilica = 6 μs, and for an al2o3 environment, τalumina = 110 μs. these two values are in accordance with the ones we obtained from the fitting procedure. the distribution of decay constants around 10 μs corresponds to tm3+ ions located in an almost pure silica environment, while the second distribution is attributed to tm3+ located in al2o3-rich sites.
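as a small numerical check of eqs. (3) and (4), the sketch below implements both expressions and inverts eq. (3) to show which non-radiative rates correspond to the two expected lifetimes quoted above; the host-dependent parameters W0, α and Ep are left as inputs, since their tabulated values are given in [lxi, lxii].

# Numerical check of eqs. (3) and (4). The radiative lifetime (670 us [lx]) and the
# energy gap (~3700 cm^-1 [lxiii]) come from the text; W0, alpha and Ep are
# host-dependent and are only taken as inputs here.
import math

TAU_RAD_S = 670e-6                     # radiative lifetime of the 3H4 level in silica [lx]

def lifetime(w_nr):
    """Eq. (3): observed lifetime (s) for a given non-radiative rate (s^-1)."""
    return 1.0 / (1.0 / TAU_RAD_S + w_nr)

def w_nr(w0, alpha, delta_e_cm, ep_cm):
    """Eq. (4): multiphonon decay rate (s^-1); w0 in s^-1, alpha in cm."""
    return w0 * math.exp(-alpha * (delta_e_cm - 2.0 * ep_cm))

# Which non-radiative rates correspond to the lifetimes discussed above?
for tau_target_us in (6.0, 110.0):
    rate = 1.0 / (tau_target_us * 1e-6) - 1.0 / TAU_RAD_S
    print(f"tau = {tau_target_us:5.1f} us  ->  W_nr = {rate:.2e} s^-1")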
conclusion
by adding oxide network modifiers or formers, we demonstrated that aluminium is the most efficient at improving the 3h4 level lifetime. this was attributed to a lower local phonon energy. the potential for amplification in the s-band was then investigated. in the fiber with the highest aluminium concentration, the gain curve was measured. although the excitation wavelength (1060 nm), the refractive index profile and the thulium concentration were not optimized, a gain of 0.9 db was obtained at 1500 nm [lxiv]. with a numerical model of the tdfa that we developed [lxv], we estimated that a gain higher than 20 db is reachable in a silica-based tdfa.
rare-earth-doped dielectric nanoparticles
erbium-doped materials are of great interest in optical telecommunications due to the er3+
intra-4f emission at 1.54 μm. erbium-doped fiber amplifiers (edfa) were developed in silica
glass because of the low losses at this wavelength and the reliability of this glass. developments
of new rare-earth-doped fiber amplifiers aim to control their spectroscopic properties: the shape and width of the gain curve, the optical quantum efficiency, etc. standard silica glass modifiers, such as aluminum, give very good properties to available edfas. however, for more drastic spectroscopic changes, more substantial modifications of the rare-earth ions' local environment are required. to this aim, we present a fiber fabrication route creating rare-earth-doped calcium-silicate or calcium-phospho-silicate nanoparticles (np) embedded in silica glass.
nanostructured fibers preparation
in the chosen route, the np are not prepared ex-situ and incorporated into the preform. to prepare them, we take advantage of the heat treatment occurring during the mcvd process. their formation is based on the basic principle of phase separation. on the basis of thermodynamic data such as the activity coefficients, entropy of mixing, enthalpy of mixing and gibbs free energy change, the phase diagram of the sio2-cao binary system was derived using the factsage software (fig. 11). a miscibility gap is found when the cao concentration is between 2 and 30 mol%. in this region, cao droplets are formed, like oil in water. such a phenomenon is expected during preform fabrication, as the temperature reaches 2000°c during the collapsing passes.
for calcium doping, cacl2 salt was added to the er3+ containing soaking solution. four
cacl2 concentrations were studied (0, 0.001, 0.1 and 1 mol/l). ge and p were also added by
mcvd. when the ca concentration was increased in the doping solution, the appearance of the central core of the preform turned from transparent to milky. this variation is explained by structural changes of the core. for preforms with a calcium concentration higher than 0.01 mol/l,
np were observed by transmission electron microscopy (tem) on preform samples (fig. 12).
we can clearly observe polydisperse spherical np with an estimated mean diameter of 50 nm.
smaller particles of 10 nm are visible. the size of the biggest particles was around 200 nm (not
shown in fig. 12). when the ca concentration decreases, the size distribution of the particles is
nearly identical but the density is lower. the composition of the core was investigated by
energy dispersive x-ray analysis: the np contained equal amounts of ca, p and si cations,
whereas only si cations were detected in the surrounding matrix. ge seemed to be
homogeneously distributed over the entire glass. the most important finding is that er3+ ions
and ca were detected only within the np.
erbium emission characterizations
spectroscopic characterizations of the emission line associated with the 4i13/2-4i15/2 transition at 1.54 μm were made at room temperature on er-doped samples with (sample a) and without (sample b) calcium. the results are shown in fig. 13, where it is evident that the emission spectrum of sample a is broader than that of sample b. to explain these differences
we have studied the er3+ local environment. exafs measurements at the er-liii edge (e=8358
ev) were carried out at the gilda-crg beamline at the european synchrotron radiation
facility. in sample a the rare earth is linked to o atoms in the first coordination shell and to si or p atoms (these two atoms cannot be distinguished due to their similar backscattering amplitude and phase) in the second shell, in a way similar to that already observed in silicate glasses [lxvi] and phosphate glasses [lxvii]. also, the structural parameters (about 7 o atoms at 2.26 å and si (or p) atoms at 3.6 å, corresponding to an er-o-si (or p) bond angle of ≈140 deg) are in good agreement with the cited literature. si (or p) atoms are visible as they belong to the same sio4 (or po4) tetrahedron as the first-shell o atoms, but no further coordination shells are detected. this permits us to state that an amorphous environment is realized around the er3+ ions. on the other hand, sample b presents a completely different exafs signal, which compares well with the spectrum of erpo4. this means that er in this case is inserted in a locally well-ordered phase extending over a few coordination shells (around 4-5 å around the absorber). the fact that tem reveals a uniform sample is not in contradiction with this result; it just means that this phase is not spatially extended enough to form nm-sized np (in our tem analyses the spatial resolution is limited to a few nm), but the ordering is extremely local, i.e. it is limited to only a few shells around the rare-earth ion.
from these considerations, the broadening of the emission spectrum observed in sample a can be attributed to an inhomogeneous broadening due to er3+ ions located in a more disordered environment compared to sample b. here we see that the combined effects of ca and p within the er-doped np both amorphize the material structure around the er3+ ions and increase the inhomogeneous broadening of the fluorescence.
conclusion
in this section, we have demonstrated that, through the phase separation mechanism, nanoparticles can be obtained in preforms by adding calcium. er3+ ions are found to be located only within these nanoparticles. an inhomogeneous broadening of the emission band is observed, associated with er3+ ions located in a more disordered environment compared to silica. this feature is particularly interesting for the production of materials for wavelength division multiplexing applications, such as erbium-doped fiber amplifiers with a broader gain band.
perspectives and conclusion
the choice of a glass for developing a new optical fiber component is most of the time a result of compromises. silica glass is the most widely used for its many advantages (reliability, low-cost fabrication, etc.). however, it suffers from several drawbacks, such as a high phonon energy or a low solubility of luminescent ions, which affect, for example, the quantum efficiency or the emission bandwidth of luminescent ions. we have shown in several cases that the spectroscopic properties of dopants are not directly related to the average properties of the doped glass, but to their local environment. indeed, by slightly modifying the silica composition, we succeeded in controlling the chromium valence state and in improving the thulium emission efficiency. moreover, we presented the interest of high doping levels to take advantage of energy transfers. finally, nanostructuring of the doped fiber is proposed as a new route to 'engineer' the local dopant environment. all these results will benefit optical fiber components such as lasers, amplifiers and sensors, which can now be realized with silica glass.
references
i
desurvire, e. 1994. in erbium doped fiber amplifiers: principles and applications.
wiley interscience. isbn 0-471-58977-2.
ii
desurvire, e., d. bayart, b. desthieux, s. bigo. 2002. in erbium doped fiber
amplifiers: device and system developments. wiley interscience. isbn 0-471-41903-6.
iii
digonnet, m. 2001. in rare-earth-doped fiber lasers and amplifiers (2nd ed.) crc.
isbn 0-824-70458-2.
iv
grattan, k.t.v. and t. sun. 2000. fiber optic sensor technology: an overview. sens.
actuators a: phys. 82:40.
v
komukai, t., t. yamamoto, t. sugawa and y. miyajima, 1995. upconversion pumped
thulium-doped fluoride fiber amplifier and laser operating at 1.47 μm. ieee j. quant.
electron. 31:1880.
vi
minelly, j and a.ellison. 2002. applications of antimony-silicate glasses for fiber optic
amplifiers. opt. fib. tech. 8:123.
vii
lin, h., s. tanabe, l. lin, y.y. hou, k. liu, d.l. yang, t.c. ma, j.y. yu and e.y.b.
pun. 2007. near-infrared emissions with widely different widths in two kinds of er3+-
doped oxide glasses with high refractive indices and low phonon energies. j. lum.
124:167.
viii
durteste, y., m. monerie, j.-y. allain, and h. poignant. 1991. amplification and lasing
at 1.3 μm in praseodymium-doped fluorozirconate fibers. electron. lett. 27:626
ix
hewak, d.w., r.s. deol, j. wang, g. wylangowski, j.a. mederios neto, b.n. samson,
r.i. laming, w.s. brocklesby, d.n. payne, a. jha, m. poulain, s. otero, s. surinach,
and m.d. baro. 1993. low phonon-energy glasses for efficient 1.3 μm optical fiber
amplifiers. electron. lett. 29:237.
x
murata, k., y. fujimoto, t. kanabe, h. fujita, and m. nakatsuka. 1999. bi-doped sio2
as a new laser material for an intense laser. fusion engineering and design 44:437.
xi
dianov, e.m., v.v. dvoyrin, v.m. mashinsky, a.a. umnikov, m.v. yashkov and a.n.
gur'yanov. 2005. cw bismuth fiber laser. quantum electronics 35:1083.
xii
razdobreev, i., l. bigot, v. pureur, a. favre, g. bouwmans and m. douay. 2007.
efficient all-fiber bismuth-doped laser. appl. phys. lett. 90:031103.
xiii
samson, b.n., p.a. tick, and n.f. borrelli. 2001. efficient neodymium-doped glass-
ceramic fiber laser and amplifier. opt. lett. 26(3):145.
xiv
samson, b.n., l.r. pinckney, j. wang, g.h. beall, and n.f. borrelli. 2002. nickel-doped
nanocrystalline glass-ceramic fiber. opt. lett. 27(15):1309.
xv
tick, p.a., n.f. borrelli, l.k. cornelius, and m.a. newhouse. 1995. transparent glass
ceramics for 1300 nm amplifier applications. j. appl. phys. 78 (ll):6367.
xvi
see for example: nagel, s.r., j.b. macchesney and k.l. walker. 1985. modified
chemical vapor deposition. in optical fiber communications: vol 1 'fiber fabrication',
ed. t. li, orlando: academic press
xvii townsend, j.e., s.b. poole and d.n. payne. 1987. solution doping technique for
fabrication of rare earth doped optical fibers. electron. lett. 23:329.
xviii vienne, g. 1996. fabrication and characterization of ytterbium:erbium codoped
phosphosilicate fibers for optical amplifiers and lasers. phd thesis, southampton, uk.
xix schultz, p.c. 1977. proc 11th int. cong. glass, 3:155.
xx
jackson, s.d. and t.a. king. 1999. theoretical modelling of tm-doped silica fiber
lasers. j. light. technol. 17(5):948.
xxi
förster, t. 1948. zwischenmolekulare energiewanderung und fluoreszenz. ann. phys.
(leipzig) b 2:55.
xxii dexter, d.l. 1953. a theory of sensitized luminescence in solids. j. chem. phys.
21(5):836.
xxiii grubb, s.g., w.f. humer, r.s. cannon, t.h. windhorn, s.w. vendetta, k.l. sweeney,
p.a. leilabady, w.l. barnes, k.p. jedrzejewski and j.e. townsend. 1992. +21-dbm
erbium power-amplifier pumped by a diode-pumped nd-yag laser. ieee photon. tech.
lett. 4(6):553.
xxiv barnes, w.l., s.b. poole, j.e. townsend, l. reekie, d.j. taylor and d.n. payne. 1989.
er3+-yb3+ and er3+ doped fiber lasers. j. lightwave technol. 7(10):1461.
xxv
auzel, f. 1990. upconversion processes in coupled ion systems. j. lumin. 45(1-6):341.
xxvi wright, j.c.. 1976. up conversion and excited state energy transfer in rare-earth doped
materials. in topics in applied physics, vol. 15, ed. f.k. fong, springer verlag, n.y.
xxvii reisfeld, r. and c.k. jorgensen. 1977. in lasers and excited states of rare earths.
springer-verlag, berlin heidelberg.
xxviii delevaque, e., t. georges, m. monerie, p. lamouler and j.-f. bayon. 1993. modeling
of pair-induced quenching in erbium-doped silicate fibers. ieee photon. tech. lett. 5:73.
xxix wagener, j.l., p.f. wysocki, m.j.f. digonnet and h.j. shaw. 1994. modeling of ion-
pairs in erbium-doped fiber amplifiers. opt. lett. 19:347.
xxx ainslie, b.j., s. p. craig and r. wyatt and k. moulding. 1989. optical and structural-
analysis of a neodymium-doped silica-based optical fiber. mater. lett. 8:204.
xxxi maurice, e., g. monnom, b. dussardier and d.b. ostrowsky. 1995. clustering induced
non-saturable absorption phenomenon in heavily erbium-doped silica fibers. opt. lett.
20:2487.
xxxii maurice, e., g. monnom, b. dussardier and d.b. ostrowsky. 1996. clustering effects on
double energy transfer in heavily ytterbium-erbium-codoped silica fibers. j. opt. soc.
am., 13:693.
xxxiii berthou, h. and c.k. jörgensen. 1990. optical-fiber temperature sensor based on
upconversion-excited fluorescence. opt. lett, 15:1100.
xxxiv
krug, p.a., m.g. sceats, g.r. atkins, s.c. guy and s.b. poole. 1991. intermediate
excited-state absorption in erbium-doped fiber strongly pumped at 980 nm.. opt. lett,
16(24):1976.
xxxv maurice, e., g. monnom, a. saissy, d.b. ostrowsky and g.w. baxter. 1994.
thermalization effects between upper levels of green fluorescence in er-doped silica
fibers. opt. lett, 19:990.
xxxvi
maurice, e., g. monnom , d.b. ostrowsky and g.w. baxter. 1995. 1.2 μm-transitions in
erbium-doped fibers: the possibility of quasi-distributed temperature sensors. appl.
optics, 34:4196.
xxxvii
sennaroglu, a., c.r. pollock and h. nathel. 1995. efficient continuous-wave chromium-
doped yag laser. j. opt. soc. am. b. 12:930.
xxxviii chen, j.-c., y.-s. lin, c.-n. tsai, k.-y. huang, c.-c. lai, and w.-z. su. 2007. 400-
nm-bandwidth emission from a cr-doped glass fiber. ieee phot. technol. lett.
19(8):595.
xxxix schultz, p.c.. 1974. optical absorption of the transition elements in vitreous silica, j.
am. ceram. soc. 57:309.
xl
cerqua-richardson, k., b. peng and t. izumitani. 1992. spectroscopic investigation of
cr4+-doped glasses. in osa proc. advanced solid-state lasers (1992), eds. l.l. chase
and a.a. pinto (optical society of america), 13:52.
xli
hömmerich, u., h. eilers, w.m. yen, j.s. hayden and m.k. aston. 1994. near infrared
emission at 1.35 μm in cr-doped glass. j. lum. 60&61:119.
xlii
henderson, b. and g.f. imbush, in optical spectroscopy of inorganic solids (clarendon,
oxford, 1989).
xliii sugano, s., y. tanabe and h. kamimura, in multiplets of transition-metal ions in
crystals (academic press, new york, 1970).
xliv felice, v., b. dussardier, j.k. jones, g. monnom and d.b. ostrowsky. 2001. chromium-
doped silica optical fibers : influence of the core composition on the cr oxidation states
and crystal field. opt. mat. 16:269.
xlv
felice, v., b. dussardier, j.k. jones, g. monnom and d.b. ostrowsky. 2000. cr4+-doped
silica optical fibers : absorption and fluorescence properties. eur. phys. j. ap, 11:107.
xlvi dussardier, b., y. guyot, v. felice, g. monnom and g. boulon. 2002. cr4+-doped silica-
based optical fibers fluorescence from 0.8 μm to 1.7 μm. in proc. advanced solid state
lasers, in trends in optics and photonics series (osa), isbn: 1-55752-697-4, 68:104.
xlvii rasheed, f., k.p. o'donnell, b. henderson and d.b. hollis. 1991. disorder and the
optical spectroscopy of cr3+-doped glasses: i. silicate glasses. j. phys.: condens. matter.
3:1915.
xlviii anino, c., j. théry and d. vivien. 1997. new cr4+ activated compounds in tetrahedral
sites for tunable laser applications. opt. mat. 8:121.
xlix note: in the following the energy states are referred to by their irreducible representation
in the td symmetry coordination (ground state is 3a2).
l
moncorgé, r., h. manaa and g. boulon. 1994. cr4+ and mn5+ actives centers for new
solid state laser materials. opt. mat. 4:139.
li
cronemeyer, d.c. 1966. optical absorption characteristics of pink ruby. j. opt. soc.
am. 56:1703.
lii
sennaroglu, a., u. demirbas, s. ozharar and f. yaman. 2006. accurate determination of
saturation parameters for cr4+-doped solid-state saturable absorbers. j. opt. soc. am. b
23(2):241.
liii
lipavsky, b., y. kalisky, z. burshtein, y. shimony and s. rotman. 1999. some optical
properties of cr4+-doped crystals. opt. mat. 13:117.
liv
tordella, l., h. djellout, b. dussardier, a. saïssy and g. monnom. 2003. high repetition rate
passively q-switched nd3+:cr4+ all-fiber laser. electron. lett. 39:1307.
lv
dvoyrin, v.v., v.m. mashinsky, v.b. neustruev, e.m. dianov, a.n. guryanov and a.a.
umnikov. 2003. effective room-temperature luminescence in annealed chromium-doped
silicate optical fibers. j. opt. soc. am. b. 20:280.
lvi
antipenko, b.m., a.a. mak, o.b. raba, k.b. seiranyan, and t.v. uvarova. 1983. new
lasing transition in the tm3+ ion. sov. j. quant. electron. 13(4):558.
lvii arai, k., h. namikawa, k. kumata, t. honda, y. ishii, and t. handa. 1986. aluminium
or phosphorus co-doping effects on the fluorescence and structural properties of
neodymium-doped silica glass. j. appl. phys. 59(10):3430.
lviii blanc, w., t.l. sebastian, b. dussardier, c. michel, b. faure, m. ude, and g. monnom.
2008. thulium environment in a silica doped optical fiber. j. non-cryst. solids 354(2-
9):435.
lix
grinberg, m., d.l. russell, k. holliday, k. wisniewski, and cz. koepke. 1998.
continuous function decay analysis of a multisite impurity activated solid. opt. comm.
156(4-6):409.
lx
walsh, b.m. and n.p. barnes. 2004. comparison of tm:zblan and tm:silica fiber
lasers: spectroscopy and tunable pulsed laser operation around 1.9 μm. appl. phys. b.
78 (3-4):325.
lxi
van dijk, j.m.f. and m.f.h. schuurmans. 1983. on the nonradiative and radiative
decay rates and a modified exponential energy gap law for 4f–4f transitions in rare-earth
ions. j. chem. phys. 78(9):5317.
lxii layne, c.b., w.h. lowdermilk and m.j. weber. 1977. multiphonon relaxation of rare-
earth ions in oxide glasses. phys. rev. b 16:10.
lxiii faure, b., w. blanc, b. dussardier, and g. monnom. 2007. improvement of the tm3+:3h4
level lifetime in silica optical fibers by lowering the local phonon energy. j. non-cryst.
solids 353(29):2767.
lxiv blanc w., p. peterka, b. faure, b. dussardier, g. monnom, i. kasik, j. kanka, d.
simpson, and g. baxter. 2006. characterization of a thulium-doped silica-based optical
fiber for s-band amplification. spie proc. 6180:181.
lxv peterka p., b. faure, w. blanc, m. karasek, and b. dussardier. 2004. theoretical
modelling of s-band thulium-doped silica fiber amplifiers. opt. quant. electron. 36(1-
3):201.
lxvi d'acapito, f., s. mobilio, l. santos, and r. almeida. 2001. local environment of rare-
earth dopants in silica-titania-alumina glasses: an extended x-ray absorption fine
structure study at the k edges of er and yb. appl. phys. lett. 78(18):2676.
lxvii d'acapito, f., s. mobilio, p. bruno, d. barbier, and j. philipsen. 2001. luminescent
properties of local atomic order of er3+ and yb3+ ions in aluminophosphate glasses. j.
appl. phys. 90(1):265.
figure captions
fig. 1: schematic energy diagram of (a) single energy transfer between two ions, (b) double
energy transfer.
fig. 2 : absorption of the er-1 and er-2 fibers vs launched pump power. squares: experimental
data; solid lines: cluster model for 0%, 10% and 52% of er3+-ions in clusters; vertical
arrows: non-saturated absorption as a difference with simulation for 0% cluster.
fig. 3 : energy scheme for the det process. level energies are in cm-1; their lifetimes between
round brackets.
fig. 4 : natural logarithm of measured intensity ratio (r) plotted against the inverse of
temperature.
fig. 5: normalized tanabe-sugano energy level diagram for cr4+ in tetrahedral ligand field (td
symmetry) showing the energy states of interest, for c/b = 4.1. the free ion states are
shown on the left of the ordinate axis. the dashed line (dq/b = 1.43) reveals the relative
positions of the states found for cr4+ in the silica-based samples: the first excited state
level is 3t2(3f).
fig. 6: background-corrected absorption from (left) a cr(ge) preform ([cr] = 1400 ppm) and (right) a cr(al) fiber ([cr] = 40 ppm). circles: experimental data; solid lines: bands adjusted to cr3+ (left) and cr4+ (right) transitions, respectively, and the resulting absorption spectra. assignments are indicated from the ground level cr3+:4a2 or cr4+:3a2 to the indicated excited level, respectively. the three-fold splitting of the cr4+:3t2 level is due to distortion from perfect tetrahedral symmetry. the spin-forbidden cr3+:4a2→2e and cr4+:3a2→1e transitions are not visible, overlapping with the intense spin-allowed transitions.
fig. 7: fluorescence spectra: (a) fiber cr(al): [cr] = 40 ppm, λp = 900 nm, t = 77 k; (b) preform cr(ge-al): [cr] = 300 ppm, λp = 673 nm, t = 12 k.
fig. 8: fluorescence decays from cr(al) samples, λp = 673 nm, t = 12 k: (a) λs ~ 1100 nm and [cr] = 40 ppm, (b) λs ~ 1100 nm, [cr] = 4000 ppm, and (c) λs ~ 1400 nm, [cr] = 4000 ppm.
fig. 9: schematic energy diagram of the tm3+ ion, showing the relevant multiplets. solid arrows: absorption and emission optical transitions; thick arrow: nrd (non-radiative de-excitation) across the energy gap between the 3h4 and 3h5 multiplets, Δe ~ 3700 cm-1.
fig. 10: histograms of the recovered luminescence decay time distributions obtained for silica-based tm3+-doped fibers with phosphorus incorporated in the core and different al2o3 concentrations.
fig. 11: miscibility-gap in the derived phase-diagram of binary sio2-cao glass
fig. 12: tem image from preform sample doped with ca and p.
fig. 13 : room temperature emission spectra of er-doped preform with (sample a) and without
(sample b) calcium. samples were excited at 980 nm.
|
0911.1685 | multi-objective optimisation method for posture prediction and analysis
with consideration of fatigue effect and its application case | automation technique has been widely used in manufacturing industry, but
there are still manual handling operations required in assembly and maintenance
work in industry. inappropriate posture and physical fatigue might result in
musculoskeletal disorders (msds) in such physical jobs. in ergonomics and
occupational biomechanics, virtual human modelling techniques have been
employed to design and optimize the manual operations in design stage so as to
avoid or decrease potential msd risks. in these methods, physical fatigue is
only considered as minimizing the muscle or joint stress, and the fatigue
effect along time for the posture is not considered enough. in this study,
based on the existing methods and multiple objective optimisation method (moo),
a new posture prediction and analysis method is proposed for predicting the
optimal posture and evaluating the physical fatigue in the manual handling
operation. the posture prediction and analysis problem is mathematically
described and a special application case is demonstrated for analyzing a
drilling assembly operation in european aeronautic defence & space company
(eads) in this paper.
| introduction
although automation techniques have been employed widely in industry, there are still many manual operations, especially in assembly and maintenance jobs, due to the flexibility and adaptability of human beings (forsman et al., 2002). among these manual handling operations, there are occasionally physical operations with high strength demands. among the workers performing such operations, msds are one of the major health problems. the magnitude of the load, the posture, personal factors, and sometimes vibration are potential risk factors for msds (li and buckle, 1999). it is believed that one reason for msds is the physical fatigue resulting from the physical work.
the aim of ergonomics is to generate working conditions that enhance safety, well-being and
performance, and manual operation design and analysis is one of the key methods to improve manual
work efficiency, safety, comfort, as well as job satisfaction. for manual handling operation design, the
strength of the joint and muscle is of importance to guide the design of workspace or equipment to
reduce work-related injuries, and furthermore to help in personnel selection to increase work efficiency.
human strength information can also be used in a human task simulation environment to define the
load or exertion capabilities of each agent and, hence, decide whether a given task can be completed in
a task simulation. it should be noted that physical strength does not remain constant during a working process; in fact it varies according to several conditions, such as the environment, the physical state and the mental state. the diminution of the physical capacity over time is an obvious phenomenon in these manual operations.
physical fatigue is defined as reduction of physical capacity, which is derived from the definition of
muscle fatigue: "any reduction in the maximal capacity to generate force or power output" (vollestad,
1997). physical fatigue mainly results from three factors: the magnitude of the external load, the duration
and frequency of the external load, and vibration. it was proved in (chen, 2000) that the movement
strategy in industrial activities involving combined manual handling jobs, such as a lifting job,
depends on the fatigue state of muscle, and it is obvious that the change of the movement strategy in
the activities directly impacts the motion of the operation and then results in different loads in muscles
and joints. worse, once the required exertion exceeds the physical capacity, cumulative fatigue or injury might appear in the tissues as potential msd risks.
in order to make an appropriate design, the same problem has been encountered by countless
organizations in a variety of industries: the human element is not being considered early or thoroughly
enough in the life cycle of products, from design to recycling. more significantly, this does have a
devastating impact on cost, time to market, quality and safety. using a realistic virtual human is one method to take ergonomics issues into consideration early in the design, and it reduces the design cycle time and cost (badler, 1997; honglun et al., 2007). nowadays, there are several commercial human simulation tools available for job design and posture analysis, such as 3dsspp, jack, vsr and anybody.
3dsspp (three dimensional static strength prediction programme) is a tool developed at the university of michigan (chaffin et al., 1999). originally, this tool was developed to predict population static strengths and low back forces resulting from common manual exertions in industry. the biomechanical models used in 3dsspp are meant to evaluate very slow or static exertions (chaffin, 1997). it predicts static strength requirements for tasks such as lifts, presses, pushes, and pulls. the output includes the percentage of men and women who have the strength to perform the described job, spinal compression forces, and data comparisons to niosh guidelines. however, it does not allow dynamic exertions to be simulated, and no fatigue evaluation is included in this tool.
jack (badler et al., 1993) is a human modelling and simulation software solution that helps
organizations in various industries improve the ergonomics of product designs and refine workplace
tasks. with jack, it is possible to assign a virtual human to a task and analyze the posture and other performance measures of the task using existing posture analysis tools, like owas (ovako working posture analysis system). ptms (predetermined time measurement systems) are also integrated
to estimate the standard working time of a specified task. in this virtual human tool, the fatigue term is
considered in motion planning to avoid a path that has a high torque value maintained over a
prolonged period of time. however, the reduction of the physical capacity is not modelled in the
virtual human, although the work-rest schedule can be determined using its extension package.
in vsr (virtual soldier research), another virtual human was developed for military application. in
this research, the posture prediction is based on moo (multiple-objective optimisation) with three
objective terms of human performance measures: potential energy, joint displacement and joint
discomfort (yang et al., 2004). in santos™, fatigue is modelled based on the physiological principles presented in a series of publications (ding et al., 2000, 2002, 2003). because this muscle fatigue model is based on the physiological mechanisms of muscle, it requires dozens of variables to construct the mathematical model for a single muscle. meanwhile, the parameters for this muscle fatigue model are only available for the quadriceps. in addition, in its posture prediction method, the fatigue effect is not
integrated.
anybody is a system capable of analyzing the musculoskeletal system of humans or other creatures as
rigid-body systems. a modelling interface is designed for the muscle configuration, and an optimisation method is used in the package to resolve the muscle recruitment problem in the inverse dynamics approach (damsgaard et al., 2006). in this system, the recruitment strategy is stated in terms of normalized muscle forces. "however, the scientific search for the muscle recruitment criterion is still ongoing, and it may never be established." (damsgaard et al., 2006). furthermore, in the optimisation criterion, the capacities of the musculoskeletal system are assumed to be constant, and no limitations due to fatigue are taken into account.
in all the posture prediction methods mentioned above, especially in these optimisation methods, the
physical capacity is treated as constant. for example, in anybody or other static optimisation methods,
the muscle strength is proportional to the pcsa (physiological cross section area). in jack, the
strength is the maximum achievable joint torque. in other words, the reduction of the physical capacity
is not considered, and using these tools is not sufficient to predict or analyze the fatigue effect in a real
manual operation.
table 1. comparison of different available virtual human simulation tools
3dsspp(1,2)
anybody(3)
jack
santotm(4)
posture analysis
√
√
√
√
joint effort analysis
√
√
√
√
muscle force analysis
√
posture prediction
√
√
√
√
empirical data based
√
optimization method based
√
√
√
√
soo
√
√
moo
√
joint discomfort guided
√
√
fatigue effect in optimization
√
√
(1)
3dsspp is only suitable for static or quasi-static tasks.
(2)
the motion posture prediction is based on empirical data and optimization based differential inverse
kinematics.
(3)
the objective function is programmable.
(4)
potential energy, joint displacement, joint discomfort and etc are used as objective functions.
in manufacturing and assembly line work, repetitive movements constitute a major facet of several
workplace tasks, and such movements lead to muscle fatigue. muscle fatigue generates influences on
the neuromuscular pathway, postural stability and the global reorganization of posture (fuller et al., 2008). in the tools mentioned above, the fatigue effect can be inferred from posture analysis, but how the human reacts to physical fatigue by adjusting the posture in order to meet the physical requirements cannot be simulated in those tools. physical fatigue, which can be experienced by everyone every day, especially by those engaged in manual handling operations, should therefore be taken into account in human simulation.
a more realistic posture prediction can give a clearer understanding of human movement performance, and it has always been a tempting goal for biomechanics and ergonomics researchers (zhang and chaffin, 2000). the predictive capacity, or realism, is provided by a model in computerized form, and these quantitative models should be able to predict realistically how people move and interact with systems.
therefore, it should be necessary to integrate the feature of fatigue into posture prediction to predict
the possible change of posture along with the reduction of the physical capacity. furthermore, the
fatigue model should have a sufficient precision to reproduce the fatigue correctly.
in this paper, a posture analysis and posture prediction method is proposed to take account of the fatigue effect in manual operations. first, the general modelling procedure of the virtual human is presented. the mathematical description of the posture prediction is formulated based on a muscle fatigue model in the following section. the overall framework involving the posture analysis method is shown to explain the workflow in a virtual working environment. finally, an application case at eads is demonstrated, followed by results and discussion.
2. kinematic modelling and dynamic modelling of virtual human
in this study, the human body is modelled kinematically as a series of revolute joints. the modified denavit-hartenberg notation system (khalil and kleinfinger, 1986; khalil and dombre, 2002) is used
to describe the movement flexibility of the joint. according to its function, one natural joint can be
modelled by 1-3 revolute joints. each revolute joint has its own joint coordinate, labelled q_i, with joint limits: the upper limit q_i^U and the lower limit q_i^L. a set of generalized coordinates q = [q_1, ..., q_i, ..., q_n]^T is defined as a vector to represent the kinematic chain. in fig. 1, the human body is geometrically modelled by 28 revolute joints to represent the main movements of the human body. the posture, velocity and acceleration are expressed by the generalized coordinates q, q̇ and q̈. it is feasible to achieve the kinematic analysis of the virtual human based on this kinematic
model. by implementing existing inverse kinematic algorithms, it is able to predict the posture and
trajectory of the human, particularly for the end effectors, i.e. both hands.
in this operation, it is possible that all the joints are involved in the implementation of the inverse kinematics; therefore, there are many possible solutions with such a high number of dof (28 in total for the main joints). in industry, sedentary operations occupy a large proportion of manual handling jobs, and even in some heavy operations the upper extremity is mainly engaged to finish the task. therefore, in our application case, only the two arms are kinematically and dynamically modelled to analyze the operation.
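as an illustration of the direct kinematics underlying this model, the following python sketch chains homogeneous transformation matrices for a planar two-revolute-joint arm (shoulder and elbow flexion only, with assumed segment lengths) to obtain the hand position from the generalized coordinates; the full model uses 28 revolute joints described with the modified dh notation.

# Minimal sketch of the direct kinematics: the hand position X(q) is obtained by
# chaining homogeneous transformation matrices along the joint coordinates q.
# A planar two-revolute-joint arm with assumed segment lengths is shown here.
import numpy as np

L_UPPER_ARM = 0.30   # assumed segment lengths (m)
L_FOREARM = 0.25

def rot_tz(theta, length):
    """Homogeneous transform: rotation about z, then a translation along the new x axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, length * c],
                     [s,  c, 0.0, length * s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def hand_position(q):
    """X(q): Cartesian position of the hand for joint angles q = [q1, q2]."""
    T = rot_tz(q[0], L_UPPER_ARM) @ rot_tz(q[1], L_FOREARM)
    return T[:3, 3]

print(hand_position(np.radians([30.0, 45.0])))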
figure 1. kinematic modelling of the human body
whether in a static posture or in a dynamic process, the movement and the external efforts generate torques and forces at the joints. therefore, dynamic modelling of the human body is necessary for implementing the inverse dynamic calculation. for each body segment, the most important dynamic parameters are the moment of inertia, the centre of gravity and the mass of the limb. such information can be obtained from anthropometric and biomechanical databases.
3. multi-objective optimisation for posture prediction
the general description of the posture analysis problem based on multiple-objective optimisation
(moo) is to find a set of q in order to minimize several objective functions in eq. (1) simultaneously:
\min_{\mathbf{q} \in \Omega} \; \mathbf{f}(\mathbf{q}) = \left[ f_1(\mathbf{q}), \dots, f_i(\mathbf{q}), \dots, f_n(\mathbf{q}) \right]^T    (1)
subject to equality and inequality constraints in eq. (2).
g_i(\mathbf{q}) \le 0, \quad i = 1, 2, \dots, m
h_j(\mathbf{q}) = 0, \quad j = 1, 2, \dots, e    (2)
where m is the number of inequality constraints and e is the number of equality constraints.
two human performance measures are used to create the global objective function: fatigue and discomfort. besides these two performance measures, there are several other possible objective functions, such as energy expenditure (ren et al., 2007), joint displacement (yang et al., 2004), visibility and accessibility (chedmail et al., 2003), etc. in our current application, only fatigue and joint discomfort are taken into consideration for the posture prediction and evaluation, since the effect of physical fatigue on the posture prediction is the main phenomenon to be verified. if several objective functions were involved in the posture prediction, it would be difficult to analyze the fatigue independently.
fatigue
f_{fatigue}(\mathbf{q}) = \left[ \sum_{i=1}^{n_{DOF}} \left( \frac{\Gamma_i}{\Gamma_i^{cem}} \right)^{p} \right]^{1/p}    (3)
in the literature, the normalized muscle force is often used as a term to determine the muscle force. this term represents the minimization of muscle fatigue, and a similar measure has been used in (ayoub, 1998; ayoub and lin, 1995) for simulating lifting activities. in our application, the summation of the normalized joint torques is used, based on the same concept, in eq. (3). n_DOF is the total number of revolute joints used to model the human body. for each joint, the normalized torque Γ_i/Γ_i^cem represents the relative load of the joint. the summation of the relative loads is one measure to minimize the fatigue of each joint.
in traditional methods, Γ_i^cem is assumed constant during the operation. in order to integrate the fatigue
effect, the fatigue process is mathematically modelled in a differential equation eq. (4). in this model,
the temporal parameters and the physical parameters are taken into consideration, which represents the
magnitude of physical load, duration, and frequency in the conventional ergonomics analysis methods.
the descriptions of all the parameters in the equation are listed in table 2.
\frac{d\Gamma_i^{cem}(t)}{dt} = -k \, \frac{\Gamma_i^{cem}(t)}{\Gamma_i^{max}} \, \Gamma_i^{load}(t)    (4)
table 2: parameters in the muscle fatigue and recovery model
parameter   unit     description
Γ_max       Nm       maximum joint strength
Γ_cem       Nm       current exertable joint strength at time instant t
Γ_load      Nm       torque demand (load) at the joint at time instant t
k           min^-1   fatigue ratio, equal to 1
R           min^-1   recovery ratio, equal to 2.4
t           min      time
the fatigue process is graphically shown in fig. 2. assume that in a static posture the load of the joint is a constant Γ_load. at the very beginning of an operation, the joint has the maximum strength Γ_max. with time, the joint strength decreases from the maximum strength. the maximum endurance time (met) is the duration from the start to the time instant at which the strength has decreased to the torque demand resulting from the external load.
figure 2. fatigue effect on the joint strength: the joint capacity Γ_cem(t) decreases from Γ_max under the constant load Γ_load; postures are safe while the capacity exceeds the load, and beyond the endurance time the demand exceeds the remaining capacity (potential risks).
this fatigue model is based on the motor-unit pattern of muscle (liu et al., 2002; vollestad, 1997). the
joint torque capacity is the overall performance of muscles attached around the joint. in a muscle,
there are mainly three types of muscle motor units: type i, type ii a, and type ii b. the fatigue
resistance in ascending sequence is: type ii b < type ii a < type i. meanwhile, the muscle force
generation capacity is: type i < type ii a < type ii b. muscle motor recruitment sequence starts from
type i, and then goes to type ii a and at last type ii b. therefore, to fulfil the requirement of the larger
external force, more type ii b units are involved, and the faster the muscle becomes fatigued. Γ_i^load can represent the influence of the external load.
the fatigue resistance is determined by the composition of muscle motor units. when the capacity decreases, more and more type ii b and type ii a units become fatigued, while type i units remain unfatigued; the overall fatigue resistance therefore increases and, as a result, the reduction of the capacity slows down. this phenomenon is described by the ratio Γ_i^cem/Γ_i^max.
this model has been mathematically validated by comparison with the existing static met models in the literature (ma et al., 2009). the high correlation found there shows that this model is suitable for static postures or slow operations. the fatigue model has not yet been validated for dynamic operations.
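for a constant torque demand, eq. (4) integrates in closed form, which gives a direct expression of the maximum endurance time illustrated in fig. 2. the following python sketch implements this special case with illustrative elbow values; it is only a numerical illustration of the model above, not part of the original validation.

# Sketch of the fatigue model of eq. (4) for a joint held under a constant load.
# For a constant demand the equation integrates in closed form, and the maximum
# endurance time (MET) is the instant where the remaining capacity Gamma_cem(t)
# equals the demand Gamma_load (fig. 2). The numerical values are illustrative.
import math

K_FATIGUE = 1.0   # fatigue ratio k, min^-1 (table 2)

def capacity(t_min, gamma_max, gamma_load):
    """Gamma_cem(t) under constant load: exponential decay of the joint capacity."""
    return gamma_max * math.exp(-K_FATIGUE * gamma_load * t_min / gamma_max)

def met(gamma_max, gamma_load):
    """Maximum endurance time (min): solve capacity(t) == gamma_load."""
    f_rel = gamma_load / gamma_max            # relative load (fraction of maximum strength)
    return -math.log(f_rel) / (K_FATIGUE * f_rel)

# Elbow flexion example: 70 Nm maximum strength, 21 Nm external demand (30% of max).
print("capacity after 2 min:", round(capacity(2.0, 70.0, 21.0), 1), "Nm")
print("MET:", round(met(70.0, 21.0), 1), "min")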
recovery
\frac{d\Gamma_i^{cem}(t)}{dt} = R \left( \Gamma_i^{max} - \Gamma_i^{cem}(t) \right)    (5)
besides fatigue, the recovery of the physical capacity should also be modelled to predict the work-rest
schedule in order to complete the design of manual handling operations. the recovery model in eq. (5)
predicts the recuperation of the physical capacity; its original form was introduced in the literature
(carnahan et al., 2001; wood, 1997).
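similarly, for a resting joint eq. (5) has a closed-form solution describing an exponential return of the capacity towards Γ_max. the sketch below uses it to estimate the rest time needed to recover a given fraction of the maximum strength; the numerical values are illustrative only.

# Sketch of the recovery model of eq. (5): during rest the capacity returns
# exponentially towards Gamma_max with the recovery ratio R = 2.4 min^-1 (table 2).
import math

R_RECOVERY = 2.4   # recovery ratio R, min^-1 (table 2)

def recovered_capacity(t_min, gamma_max, gamma_start):
    """Gamma_cem(t) during rest, from the closed-form solution of eq. (5)."""
    return gamma_max - (gamma_max - gamma_start) * math.exp(-R_RECOVERY * t_min)

def rest_time(gamma_max, gamma_start, fraction=0.95):
    """Rest time (min) needed to bring the capacity back to `fraction` of Gamma_max."""
    return math.log((gamma_max - gamma_start) / ((1.0 - fraction) * gamma_max)) / R_RECOVERY

print("capacity after 30 s of rest:", round(recovered_capacity(0.5, 70.0, 21.0), 1), "Nm")
print("rest needed to reach 95% of max:", round(rest_time(70.0, 21.0), 2), "min")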
discomfort
another objective function is the joint discomfort. the discomfort measure is taken from vsr (yang et al., 2004). this measure evaluates the joint discomfort level from the rotational position of the joint relative to its upper limit and its lower limit. the discomfort level is formulated in eq. (6) as follows, and it increases significantly as the joint values approach their limits. QU_i (eq. (7)) and QL_i (eq. (8)) are penalty terms corresponding to the upper and lower limits of the joint. γ_i is the weighting value for each joint. the detailed notation of the variables in the discomfort model is listed in table 3.
table 3: parameters in the joint discomfort model
parameter   unit     description
q_i         degree   current position of joint i
q_i^U       degree   upper limit of joint i
q_i^L       degree   lower limit of joint i
q_i^N       degree   neutral position of joint i
G           -        constant, 10^6
QU_i        -        penalty term for the upper limit
QL_i        -        penalty term for the lower limit
γ_i         -        weighting value of joint i
f_discomfort(q) = (1/G) Σ_{i=1}^{DOF} γi [ (Δqi^norm)^2 + G·QUi + G·QLi ]    (6)

QUi = [ 0.5 sin( 5.0 (qi − qi^U)/(qi^U − qi^L) + 1.571 ) + 0.5 ]^100    (7)

QLi = [ 0.5 sin( 5.0 (qi^L − qi)/(qi^U − qi^L) + 1.571 ) + 0.5 ]^100    (8)

where Δqi^norm is the displacement of joint i from its neutral position qi^N, normalized by the joint range qi^U − qi^L.
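a minimal sketch of this discomfort measure, using the reconstructed forms of eqs. (6)–(8); the penalty-term constants should be checked against yang et al. (2004), and the example joint values are ours.

```python
import math

G = 1.0e6  # constant G from Table 3

def penalty_upper(q, qu, ql):
    """Penalty term QU_i of eq. (7): rises sharply as q approaches the upper limit qu."""
    return (0.5 * math.sin(5.0 * (q - qu) / (qu - ql) + 1.571) + 0.5) ** 100

def penalty_lower(q, qu, ql):
    """Penalty term QL_i of eq. (8): mirror of QU_i for the lower limit ql."""
    return (0.5 * math.sin(5.0 * (ql - q) / (qu - ql) + 1.571) + 0.5) ** 100

def discomfort(q, q_upper, q_lower, q_neutral, weights):
    """Joint discomfort of eq. (6): weighted sum over the DOF of the squared
    normalized deviation from the neutral position plus the two limit penalties."""
    total = 0.0
    for qi, qu, ql, qn, gamma_i in zip(q, q_upper, q_lower, q_neutral, weights):
        dq_norm = (qi - qn) / (qu - ql)
        total += gamma_i * (dq_norm ** 2
                            + G * penalty_upper(qi, qu, ql)
                            + G * penalty_lower(qi, qu, ql))
    return total / G

# One-joint example matching Fig. 3: q_U = 180, q_L = 0, q_N = 40 (degrees).
print(discomfort([40.0], [180.0], [0.0], [40.0], [1.0]))   # at the neutral position
print(discomfort([175.0], [180.0], [0.0], [40.0], [1.0]))  # close to the upper limit
```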
an example calculated from the joint discomfort performance is graphically shown in fig.3. it is
apparent that the joint discomfort reaches its minimum value at its neutral position and it increases
when approaching its upper and lower limits.
figure 3. an example of the joint discomfort: discomfort level of a joint over its movement range (qi^U = 180, qi^L = 0, qi^N = 40); the discomfort is minimal at the neutral position and rises steeply towards both limits.
objective function
min f(q) = [ f_fatigue(q), f_discomfort(q) ]    (9)
the overall objective function uses the fatigue measure and the discomfort measure to determine the optimal geometrical configuration of the posture. the biomechanical aspect of the posture is evaluated by the fatigue objective function, while the geometrical constraints of the human body are measured by the discomfort measure.
constraints
in this study, constraints from the kinematic and the biomechanical aspects are used to determine the feasible solution space.
from the kinematic aspect, the cartesian coordinates of the destination of the posture contribute one constraint, eq. (10). [x y z]^T are the cartesian coordinates of the end-effector (right hand and left hand) at the aim of the reach. the function X(q) can be described with the direct kinematic approach: the transformation matrix between the end-effector and the reference coordinates can be modelled with the modified dh notation method.

X(q) = [ x  y  z ]^T    (10)
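since only the forward map X(q) is needed by eq. (10), it can be assembled by chaining one homogeneous transform per joint; the sketch below uses the modified dh convention with a placeholder parameter table, not the actual geometry of the arm in fig. 7.

```python
import numpy as np

def modified_dh(alpha, a, theta, d):
    """Homogeneous transform of one joint in the modified DH convention:
    twist alpha and offset a about/along the previous x axis, rotation theta
    and offset d about/along the current z axis."""
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,      0.0,  a],
                     [st * ca,  ct * ca, -sa, -sa * d],
                     [st * sa,  ct * sa,  ca,  ca * d],
                     [0.0,      0.0,      0.0, 1.0]])

def end_effector_position(q, dh_table):
    """X(q) of eq. (10): chain the per-joint transforms and return [x, y, z]."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(dh_table, q):
        T = T @ modified_dh(alpha, a, theta, d)
    return T[:3, 3]

# Placeholder (alpha, a, d) rows -- NOT the actual geometry of the arm in Fig. 7.
DH_TABLE = [(0.0, 0.0, 0.0), (np.pi / 2, 0.0, 0.0), (np.pi / 2, 0.0, 0.30),
            (-np.pi / 2, 0.0, 0.0), (np.pi / 2, 0.0, 0.25)]
print(end_effector_position(np.deg2rad([30.0, 0.0, 0.0, 90.0, 0.0]), DH_TABLE))
```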
joint limits (ranges of motion) are imposed in terms of inequality constraints in the form of eq. (11).
qi^L ≤ qi ≤ qi^U    (11)
from the biomechanical aspect, there are mainly two constraints: one is the limitation of the joint strength (eq. (12)), and the other is the equilibrium equation described by the inverse dynamics in eq. (13).

Γi − Γi^max ≤ 0    (12)
it should be noted that in eq. (12) the upper limit Γi^max is treated as unchangeable in conventional posture prediction methods. in our optimisation method, the upper limit is replaced by Γcem,i in order to account for the reduction of the physical capacity caused by fatigue.
the joint strength depends on the posture of the human body and on personal factors such as age and gender. in fig. 4, the elbow joint flexion strength is shown for the 95% male adult population according to the literature (chaffin et al., 1999). the elbow flexion strength is related to the elbow flexion angle θe and the shoulder flexion angle θs (shown in fig. 7). over the range of the joint, for the 50th percentile of the population, the joint strength varies from 70 Nm to 40 Nm; across most of the male population, the strength varies from 40 Nm to almost 120 Nm.
figure 4. biomechanical joint flexion strength constraints of elbow
in terms of equality constraints, another constraint is the inverse dynamics in eq. (13). with the displacement, velocity and acceleration in generalized coordinates, the inverse dynamics formulates the equilibrium equation. in eq. (13), Γ(q, q̇, q̈) represents the term related to the external loads, A(q) is the link inertia matrix, B(q, q̇) represents the centrifugal and coriolis terms, and Q(q) is the potential term.

Γ(q, q̇, q̈) = A(q) q̈ + B(q, q̇) q̇ + Q(q)    (13)
in summary, the moo problem can be simplified as follows: for a static posture or a relatively slow motion, we can assume that q̇ = 0 and q̈ = 0; therefore, the joint torque depends only on the joint position and the external load. a set S = { q | g(q) ≤ 0, h(q) = 0 } of solutions satisfying all the constraints can be found. in this case, we are trying to find a configuration q ∈ S that minimizes both the fatigue and the discomfort objective functions.
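purely as a sketch of how this problem statement could be handed to an off-the-shelf solver (scipy's SLSQP), with the objective, kinematic and torque functions left as placeholders to be supplied by the human model; this is not the solver used in the paper, and the weighted sum of the two objectives anticipates the aggregation used later in eq. (17).

```python
import numpy as np
from scipy.optimize import minimize

def solve_posture(f_fatigue, f_discomfort, forward_kin, target_xyz,
                  q_lower, q_upper, torque, torque_limit, q0, w=(0.5, 0.5)):
    """Schematic posture prediction by weighted aggregation of the two objectives.

    f_fatigue, f_discomfort : callables q -> scalar performance measures
    forward_kin             : callable q -> [x, y, z] end-effector position (eq. 10)
    torque, torque_limit    : callables q -> joint torques and their current limits
                              (gamma_cem,i when fatigue is taken into account, eq. 12)
    q_lower, q_upper        : joint ranges of motion (eq. 11)
    """
    objective = lambda q: w[0] * f_discomfort(q) + w[1] * f_fatigue(q)
    constraints = [
        {"type": "eq",   "fun": lambda q: forward_kin(q) - np.asarray(target_xyz)},
        {"type": "ineq", "fun": lambda q: torque_limit(q) - np.abs(torque(q))},
    ]
    return minimize(objective, q0, method="SLSQP",
                    bounds=list(zip(q_lower, q_upper)), constraints=constraints)
```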
4. framework and flowchart
figure 5. framework of the objective work evaluation system (owes). the diagram links the virtual environment (virtual human, virtual interaction, environment), the inputs (human motion, interaction, motion capture, haptic interfaces, virtual reality), the evaluation criteria (fatigue, posture, efficiency, comfort), the analyses (fatigue, comfort, posture), the posture prediction algorithm, the simulated human motion, and the virtual human status update.
the posture analysis with consideration of fatigue is embedded in an objective work evaluation system (owes), shown in fig. 5. the aim of the framework is to enhance the simulated human motion by using motion capture techniques, and mainly two functions are designed: motion analysis and motion prediction.
for motion analysis in such a system, the manual handling operation is either captured by a motion capture system or simulated by virtual human software in a virtual working environment. in this way, data-driven algorithms and computational approaches, the two main methods for human modelling and simulation, can be integrated into the framework. the first method is developed based on experimental data and regression; therefore the most probable posture can be implemented for a specific task. however, a time-consuming data collection process, such as motion tracking, is involved in such a method. the second one can be used for posture prediction based on biomechanics and kinematics; with this tool, it is possible to predict the posture by formulating a set of equations.
the interaction information is detected via haptic interfaces and recorded as external efforts on the joints, noted f_j^ex and Γ_j^ex, where j is the index of the joint. both the motion information and the haptic interaction information are input into the work evaluation module. in this module, a kinematic analysis provides the posture of the human body in each frame, and the inverse dynamics is carried out to determine the corresponding efforts f_j and Γ_j at each joint. using the predefined posture analysis criteria, efficiency criteria, fatigue evaluation tool, etc., the different aspects of the manual handling operation can be assessed.
in traditional systems, there is no feedback from the analysis result to the prediction algorithm. in fact, a human does change posture and trajectory according to different physical or mental statuses. in our framework, the human status, such as the physical capacities, is updated from the analysis result, and the updated status can be used further for posture prediction. therefore, the evaluation result, such as fatigue, needs to be fed back into the simulation to generate a much more realistic human simulation.
5. application case for drilling task
task description
in our research project, the application case is the junction of two fuselage sections with rivets on the assembly line of a virtual aircraft. one part of the job consists of drilling holes all around the section. the properties of this task can be described in natural language as: drilling holes around the fuselage circumference. the number of holes can be up to 2000 under real work conditions. the drilling machine weighs around 5 kg, and even up to 7 kg in the worst condition when the weight of the pipe is taken into account. the drilling force applied to the drilling machine is around 49 N. in general, it takes 30 seconds to finish a hole. the drilling operation is graphically shown in fig. 6.
figure 6. drilling task in virtual aircraft factory in catia
in this application case, there are several ergonomics issues, and several physical exposures contribute to the difficulty and penalty of the job: the posture, the heavy load from the drilling effort, the weight of the drilling machine, and vibration. fatigue is mainly caused by the load in certain postures, whereas the vibration might result in damage to other tissues of the human body. to maintain the drilling work for a certain time, the load could cause fatigue in the elbow, the shoulder, and the lower back. in this paper, the analysis is only carried out to evaluate the fatigue of the right arm, in order to verify the conception of the framework and the posture prediction method based on moo; the vibration is excluded from the analysis. furthermore, we assume that the worker carries the drilling machine symmetrically, so the external loads are divided by two to simplify the calculation.
the upper arm is modelled by five revolute joints in fig. 7. each revolute joint rotates around its z axis, and the function of each joint is defined in table 4. [q1, q2, q3] is used to model the shoulder mobility, and [q4, q5] is used to describe the mobility around the elbow joint. θs is the flexion angle between the shoulder and the body in the sagittal plane, and θe is the angle between the lower arm and the upper arm in a flexion posture.
figure 7. kinematic modelling of the human arm and the two flexion angles of the arm: coordinate frames x0–x5 and z0–z5 are assigned from the waist through the shoulder to the elbow, with θs at the shoulder and θe at the elbow.
table 4. five revolute joints in the arm kinematic model and their corresponding descriptions
joints | description
1 | flexion and extension of shoulder joint
2 | adduction and abduction of shoulder joint
3 | supination and pronation of upper arm
4 | flexion and extension of shoulder joint
5 | supination and pronation of upper arm
the geometrical parameters of the limb are required in order to accomplish the kinematic modelling. such information can be obtained from anthropometric databases in the literature. take the arm as an example: the arm is segmented into two parts, the upper arm and the forearm (hand included). each part of the arm is simplified to a cylinder with a uniform density distribution in order to calculate its moment of inertia. once the height of the virtual human is determined, according to anthropometry and biomechanics, both the length and the radius of the upper arm and of the lower arm can be estimated from eq. (14). the mass of each part can be obtained from occupational biomechanics by eq. (15). once the mass, the cylinder radius and the length are known, the moment of inertia of the segment is given by the diagonal matrix in eq. (16).
table 4: dynamic parameters and their descriptions in arm dynamic modelling
parameters | unit | description
M | kg | mass of the virtual human
H | m | height of the virtual human
m | kg | mass of the segment
f | - | subscript for forearm
u | - | subscript for upper arm
Ig | - | moment of inertia of the segment
h | m | length of the segment
r | m | radius of the segment
h_f = 0.146 H,   r_f = 0.125 h_f
h_u = 0.186 H,   r_u = 0.125 h_u    (14)

m_f = 0.451 × 0.051 M
m_u = 0.549 × 0.051 M    (15)

Ig = diag( m r²/4 + m h²/12,  m r²/4 + m h²/12,  m r²/2 )    (16)
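a short sketch implementing the reconstructed eqs. (14)–(16); the 75 kg / 1.75 m virtual human used as input is an illustrative value, not taken from the paper.

```python
import numpy as np

def arm_segment_parameters(body_mass, body_height):
    """Cylinder model of the two arm segments from the reconstructed eqs. (14)-(16)."""
    segments = {}
    for name, length_coeff, mass_fraction in (("forearm", 0.146, 0.451),
                                              ("upper_arm", 0.186, 0.549)):
        h = length_coeff * body_height         # segment length, eq. (14)
        r = 0.125 * h                          # segment radius, eq. (14)
        m = mass_fraction * 0.051 * body_mass  # segment mass, eq. (15)
        # inertia tensor of a uniform cylinder about its centre of mass, eq. (16)
        i_g = np.diag([m * r**2 / 4 + m * h**2 / 12,
                       m * r**2 / 4 + m * h**2 / 12,
                       m * r**2 / 2])
        segments[name] = {"length": h, "radius": r, "mass": m, "inertia": i_g}
    return segments

# Illustrative 75 kg, 1.75 m virtual human (values not taken from the paper).
for name, p in arm_segment_parameters(75.0, 1.75).items():
    print(name, round(p["length"], 3), "m,", round(p["mass"], 2), "kg")
```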
results
after the kinematic and dynamic modelling of the human arm, the posture analysis and the posture prediction based on moo can be carried out.
posture analysis: fatigue and recovery
in the left subfigure of fig. 8, the reduction of the shoulder strength capacity is graphically presented using the fatigue model. in this case, the arm for the drilling work is configured with θs = 30° and θe = 90°. for maintaining the drilling posture, the torque generated by the external load at each joint remains constant. the joint load Γload is represented by the horizontal solid line, and the reduction of the strength capacity of the 95% male population is represented by the curves. for the male adult population, the strength of the joint lies in the range between 40 Nm and 110 Nm. the endurance time for such a drilling operation varies from 60 seconds to almost 450 seconds, which shows that the strength variation is quite significant and that the operation strategy and the work-rest schedule should be designed with consideration of the individual variation (chaffin, 1997). furthermore, with the fatigue model, the reduction of the capacity is predictable for the manual operation; therefore, the posture prediction can be implemented based on the fatigue model. in the right subfigure of fig. 8, it is apparent that the same external load exerts a different normalized load on different members of the population: a smaller joint capacity results in a more rapid reduction of the capacity.
figure 8. reduction of the joint strength (shoulder) along time in the drilling task, for the geometric configuration θs = 30°, θe = 90° and a drilling machine mass of 3.5 kg. left: reduction of the shoulder flexion strength [Nm] versus holding time [s] for population strength levels from −2σ to +2σ around the mean, together with the constant joint load Γload. right: normalized reduction of the shoulder flexion strength for the same strength levels.
for a complete design of a manual handling operation, the work-rest schedule is also of great importance, especially for manual handling work with relatively high physical requirements. in fig. 9, a drilling process with 30 seconds for drilling a hole and 60 seconds for rest is shown. it can be observed that the capacity goes down during the work cycle and recovers in the following rest period. although there is a slight reduction of the capacity after one work-rest cycle, 95% of the population can maintain the drilling job for a long duration.
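an illustrative sketch of such a 30 s work / 60 s rest cycle, alternating the recovery model of eq. (5) with the same assumed fatigue law used in the earlier sketch; the 70 Nm capacity and 20 Nm joint load are invented example values.

```python
def work_rest_schedule(gamma_max, gamma_load, n_cycles,
                       t_work=0.5, t_rest=1.0, k=1.0, r=2.4, dt=1e-3):
    """Alternate drilling (fatigue) and rest (recovery) phases and return the
    joint capacity at the end of each cycle.  t_work = 0.5 min (30 s per hole),
    t_rest = 1.0 min (60 s rest); k and r are the ratios of Table 2 [1/min]."""
    gamma_cem = gamma_max
    capacities = []
    for _ in range(n_cycles):
        t = 0.0
        while t < t_work:   # drilling: assumed fatigue law, as in the sketch above
            gamma_cem += dt * (-k * gamma_cem / gamma_max * gamma_load)
            t += dt
        t = 0.0
        while t < t_rest:   # rest: recovery model of eq. (5)
            gamma_cem += dt * r * (gamma_max - gamma_cem)
            t += dt
        capacities.append(round(gamma_cem, 1))
    return capacities

# Ten holes with an illustrative 70 Nm shoulder capacity and 20 Nm joint load.
print(work_rest_schedule(70.0, 20.0, 10))
```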
figure 9. work-rest schedule predicted by the fatigue and recovery model: shoulder flexion strength [Nm] versus time [s] for the configuration θs = 30°, θe = 90° and a drilling machine mass of 3.5 kg, alternating drilling and rest phases, for population strength levels from −2σ to +2σ around the mean and the constant joint load Γload.
posture prediction: optimal posture for a drilling task
in a manual handling operation, the workspace parameters are important for determining the posture of the human body. in the case of holding the drilling machine, the distance between the hole and the shoulder is the most important geometrical constraint. within the range from 0.4 m to 0.7 m, the geometrical configuration q can be determined, and then it is possible to calculate the fatigue measure and the discomfort measure; both measures are shown in fig. 10. it is obvious that the longer the distance is, the more the arm is extended and, as a result, the larger the torque applied to the joints, which causes a higher fatigue measure. simultaneously, the discomfort level changes with the distance: the larger the extension of the arm, the more the shoulder joint moves towards its upper limit, while the elbow joint moves towards its neutral position. the combination of both joints results in a decline of the discomfort along the distance.
figure 10. fatigue and discomfort performance measures along the work distance (distance from the right shoulder to the hole, 0.45–0.7 m): fatigue performance measure at the very beginning, fatigue performance measure after drilling 10 holes, and discomfort performance measure.
the optimal posture can be determined using the moo method in fig. 11. the weighted aggregation method is used in this case to convert the multi-objective problem into a single-objective one, in order to reach a pareto optimum on the pareto front represented by the solid curve. the single objective is mathematically formed in eq. (17); both measures are normalized.
min_q z(q) = Σ_{j=1}^{n} w_j f_j(q) = w_1 f_discomfort(q)/max(f_discomfort) + w_2 f_fatigue(q)/max(f_fatigue)    (17)
with w_j ≥ 0 and Σ_{j=1}^{n} w_j = 1, each w_j indicates the importance of the corresponding objective. this objective function can be further transformed into a straight-line equation: f_fatigue = −(w_1/w_2) f_discomfort + z_min/w_2.
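as a small illustration of this weighted aggregation (not the implementation used in the paper), the sketch below scans candidate shoulder-to-hole distances, normalizes both measures and minimizes eq. (17); the fatigue and discomfort trends used in the example are invented placeholders.

```python
import numpy as np

def weighted_scan(distances, fatigue_fn, discomfort_fn, w1=0.5, w2=0.5):
    """Evaluate eq. (17) over candidate shoulder-to-hole distances and return the
    distance with the smallest aggregated objective; the two measure functions
    are placeholders to be provided by the posture model."""
    fatigue = np.array([fatigue_fn(d) for d in distances])
    discomfort = np.array([discomfort_fn(d) for d in distances])
    z = w1 * discomfort / discomfort.max() + w2 * fatigue / fatigue.max()
    return distances[int(np.argmin(z))], z

# Invented monotonic trends: fatigue grows and discomfort declines with distance.
d = np.linspace(0.4, 0.7, 31)
best_d, z = weighted_scan(d, lambda x: (x - 0.35) ** 2, lambda x: 1.2 - x)
print(round(float(best_d), 3), "m")
```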
if we assume that the fatigue and the discomfort have the same importance in the drilling case, the optimal posture can be obtained at the intersection point between the solid straight line with slope k = −1 and the pareto front in fig. 11. however, the selection of the weighting values can have a great influence on the optimal posture. the individual preference can be represented by different weights of the two measures, which results in straight lines with different slopes. in fig. 11, two examples with slope k = −2 (dash-dotted line) and k = −0.5 (dashed line) are illustrated, with different intersection points with the pareto front. these two points represent different posture strategies for posture control: the former with less discomfort, and the latter with less joint stress. all the points on the pareto front are feasible solutions for the posture selection; the selected posture depends on the physical status and the preference of the individual.
figure 11. posture optimisation for the drilling task without fatigue: pareto front curve in the plane of normalized discomfort of the joints f_d versus normalized fatigue of the joints f_f, with the weighted-sum lines for w1/w2 = 1, 2 and 0.5, the corresponding solutions, and the solution zone.
optimal posture changed by fatigue
meanwhile, fatigue influences the posture. in order to evaluate the fatigue effect, we keep the same balance between fatigue and discomfort in our application. in fig. 12, the single objective function of eq. (17) is calculated and shown along the distance from 0.4 m to 0.7 m. the solid curve is the one without fatigue, and the dashed curve is the one with the fatigue status after maintaining the drilling operation for 30 s. from the left subfigure, it is noticeable that the optimal distances for the two situations are different, which maps to different drilling postures. the optimal distance between the shoulder and the hole is smaller with fatigue than without fatigue. this suggests that the manual handling strategy is to bring the arm closer to the body in order to maintain the same load when fatigue occurs; in this posture, the user can handle the weight of the machine more easily. in the right subfigure, the pareto front in the fatigue status is shifted away from the pareto front without fatigue as the fatigue measure increases, as a result of the reduction of the physical capacity.
figure 12. comparison of the optimized work distance in the non-fatigue and fatigue cases: the pareto fronts in the plane of normalized discomfort of the joints f_d versus normalized fatigue of the joints f_f, and the summation of normalized discomfort and normalized fatigue along the distance from the right shoulder to the hole (0.45–0.7 m), without and with fatigue.
6. discussion
in this study, a fatigue model is integrated into a posture analysis and posture prediction method. with this model, it is possible to evaluate and design the posture for a manual handling operation by taking fatigue into consideration. the fatigue model can predict the reduction of the physical capacity in a static posture or a slow operation, and this reduction makes the posture change in order to satisfy the external physical requirement.
one limitation of our framework is that the posture analysis and prediction are carried out only at the joint level, not at the muscle level. the reason is that it is difficult to measure the force of each individual muscle; although optimization methods have been employed to solve the underdetermined problem of the musculoskeletal system, the precision of the results is still questionable (freund and takala, 2001). from another point of view, the joint torque is generated and determined by the group of muscles attached around the joint. the coordination of the muscle group is very complex, and it is believed that calculating the joint torque achieves a higher precision than calculating the individual muscle forces. meanwhile, in several ergonomics measurements, the met is also measured at the joint torque level (mathiassen and ahsberg, 1999).
another limitation is that the result of the posture analysis is only applicable to static and slow operations, because the fatigue model has only been validated by comparison with existing met models, all of which were measured under static postures. dynamic motion and static posture differ in their physiological principles, and fatigue and recovery phenomena might occur alternately and mix in a dynamic process.
finally, the optimal posture is predicted with the moo method. in such a method, the weighting values of the individual items are used to construct the overall objective function. however, this requires a priori knowledge about the relative importance of the objectives, and the trade-off between the fatigue and the discomfort cannot be evaluated very well. "it is believed that the human body has certain strategy to lead the human motion, but it is dictated by just one performance measure; it may be necessary to combine various measures" (yang et al., 2004). two main problems arise for the motion prediction: one is how to model the performance measures, and the other is how to combine all the performance measures together. human motion is very complex due to its large variability, and each single performance measure is difficult to validate experimentally. furthermore, for the combination, the correlation between different performance measures requires a great deal of effort to define and verify. the moo method just provides a reference method in ergonomics simulation, leading to a safer and better design of work.
7. conclusion and perspective
in this paper, a new method based on moo for posture prediction and analysis is presented. different from other virtual human posture prediction methods, the effect of fatigue is taken into account. a fatigue model based on the motor-unit pattern is employed in the moo method to predict the reduction of the physical capacity. meanwhile, the work-rest schedule can be evaluated with the fatigue and recovery model. owing to the scope of the validation of the fatigue model, this method is suitable for static or relatively slow manual handling operations. finally, it is possible to predict the optimal posture of an operation in order to simulate a realistic motion. in the future, the fatigue model for dynamic working processes will be validated and then integrated into the work evaluation system.
8. acknowledgments
this research was supported by the eads and by the région des pays de la loire (france) in the
context of collaboration between the ecole centrale de nantes (nantes, france) and tsinghua
university (beijing, p.r.china).
references
ayoub, m.m. (1998). a 2-d simulation model for lifting activities. computers and industrial
engineering, 35(3-4), 619-622
ayoub, m.m., & lin, c.j. (1995). biomechanics of manual material handling through simulation:
computational aspects. computers and industrial engineering, 29(1-4), 427-431
badler, n.i. (1997). virtual humans for animation, ergonomics, and simulation. proceedings of the ieee workshop on non-rigid and articulated motion, 28-36.
badler, n.i., phillips, c.b., & webber, b.l. (1993). simulating humans. new york: oxford
university press, inc.
carnahan, b.j., norman, b.a., & redfern, m.s. (2001). incorporating physical demand criteria into
assembly line balancing. iie transactions, 33(10), 875-887
chaffin, d.b. (1997). development of computerized human static strength simulation model for job
design. human factors and ergonomics in manufacturing, 7(4), 305-322
chaffin, d.b., andersson, g.b.j., & martin, b.j. (1999). occupational biomechanics (third ed.):
wiley-interscience.
chedmail, p., chablat, d., & le roy, c. (2003). a distributed approach for access and visibility task
with a manikin and a robot in a virtual reality environment. ieee transactions on industrial
electronics, 50, 693-698
chen, y.-l. (2000). changes in lifting dynamics after localized arm fatigue. international journal of
industrial ergonomics, 25(6), 611-619
damsgaard, m., rasmussen, j., christensen, s.t., surma, e., & zee, m.d. (2006). analysis of
musculoskeletal systems in the anybody modeling system. simulation modelling practice and theory,
14(8), 1100-1111
ding, j., wexler, a.s., & binder-macleod, s.a. (2000). a predictive model of fatigue in human
skeletal muscles. journal of applied physiology, 89(4), 1322-1332
ding, j., wexler, a.s., & binder-macleod, s.a. (2002). a predictive fatigue model i: predicting the
effect of simulation frequency and pattern of fatigue. ieee transactions on neural systems and
rehabilitation engineering, 10(1), 48-58
ding, j., wexler, a.s., & binder-macleod, s.a. (2003). mathematical models for fatigue
minimization during functional electrical stimulation. electromyography kinesiology, 13, 575-588
forsman, m., hasson, g.-a., medbo, l., asterland, p., & engstorm, t. (2002). a method for
evaluation of manual work using synchronized video recordings and physiological measurements.
applied ergonomics, 33(6), 533-540.
freund, j., & takala, e.-p. (2001). a dynamic model of the forearm including fatigue. journal of
biomechanics, 34(5), 597-605
fuller, j., lomond, k., fung, j., & côté, j. (2008). posture-movement changes following repetitive motion-induced shoulder muscle fatigue. journal of electromyography and kinesiology, 19(6), 1043-1052
honglun, h., shouqian, s., & yunhe, p. (2007). research on virtual human in ergonomics simulation.
computers & industrial engineering, 53(2), 350-356
khalil, w., & dombre, e. (2002). modeling, identification and control of robots: hermes science
publications.
khalil, w., & kleinfinger, j.f. (1986). a new geometric notation for open and closed-loop robots.
proceedings of the ieee robotics and automation, 1174-1179.
li, g., & buckle, p. (1999). current techniques for assessing physical exposure to work-related
musculoskeletal risks, with emphasis on posture-based methods. ergonomics, 42(5), 674-695
liu, j.z., brown, r.w., & yue, g.h. (2002). a dynamical model of muscle activation, fatigue, and
recovery. biophysical journal, 82(5), 2344-2359
ma, l., chablat, d., bennis, f., & zhang, w. (2009). a new simple dynamic muscle fatigue model
and its validation. international journal of industrial ergonomics, 39(1), 211-220
mathiassen, s.e., & ahsberg, e. (1999). prediction of shoulder flexion endurance from personal
factors. international journal of industrial ergonomics, 24(3), 315-329
ren, l., jones, r.k., & howard, d. (2007). predictive modelling of human walking over a complete
gait cycle. journal of biomechanics, 40(7), 1567-1574
vollestad, n.k. (1997). measurement of human muscle fatigue. journal of neuroscience methods,
74(2), 219-227
wood, d.d. (1997). minimizing fatigue during repetitive jobs: optimal work-rest schedule. human
factors 39(1), 83-101
yang, j., marler, r.t., kim, h., & arora, j.s. (2004). multi-objective optimization for upper body
posture prediction. 10th aiaa/issmo multidisciplinary analysis and optimization conference
zhang, x., & chaffin, d. (2000). a three-dimensional dynamic posture prediction model for simulating in-vehicle seated reaching movements: development and validation. ergonomics, 43(9), 1314-1330
|
0911.1686 | the globular cluster ngc 5286. ii. variable stars | we present the results of a search for variable stars in the globular cluster
ngc 5286, which has recently been suggested to be associated with the canis
major dwarf spheroidal galaxy. 57 variable stars were detected, only 19 of
which had previously been known. among our detections one finds 52 rr lyrae (22
rrc and 30 rrab), 4 lpv's, and 1 type ii cepheid of the bl herculis type.
periods are derived for all of the rr lyrae as well as the cepheid, and bv
light curves are provided for all the variables.
the mean period of the rrab variables is <pab> = 0.656 days, and the number
fraction of rrc stars is n(c)/n(rr) = 0.42, both consistent with an oosterhoff
ii (ooii) type -- thus making ngc 5286 one of the most metal-rich ([fe/h] =
-1.67; harris 1996) ooii globulars known to date. the minimum period of the
rrab's, namely pab,min = 0.513 d, while still consistent with an ooii
classification, falls towards the short end of the observed pab,min
distribution for ooii globular clusters. as was recently found in the case of
the prototypical ooii globular cluster m15 (ngc 7078), the distribution of
stars in the bailey diagram does not strictly conform to the previously
reported locus for ooii stars.
we provide fourier decomposition parameters for all of the rr lyrae stars
detected in our survey, and discuss the physical parameters derived therefrom.
the values derived for the rrc's are not consistent with those typically found
for ooii clusters, which may be due to the cluster's relatively high
metallicity -- the latter being confirmed by our fourier analysis of the
ab-type rr lyrae light curves. we derive for the cluster a revised distance
modulus of (m-m)v = 16.04 mag. (abridged)
| introduction
ngc 5286 (c1343-511) is a fairly bright (mv = −8.26) and
dense globular cluster (gc), with a central luminosity den-
sity ρ0 ≈14,800l⊙/pc3 – which is more than a factor of six
higher than in the case of ω centauri (ngc 5139), according
to the entries in the harris (1996) catalog. in zorotovic et al.
(2009, hereafter paper i) we presented a color-magnitude dia-
gram (cmd) study of the cluster that reveals an unusual hor-
izontal branch (hb) morphology in that it does not contain a
prominent red hb component, contrary to what is normally
found in gcs with comparable metallicity ([fe/h] = −1.67;
harris 1996), such as m3 (ngc 5272) or m5 (ngc 5904). as
a matter of fact, ngc 5286 contains blue hb stars reaching
down all the way to at least the main sequence turnoff level
in v. yet, unlike most blue hb gcs, ngc 5286 is known
to contain a sizeable population of rr lyrae variable stars,
∗based on observations obtained in chile with the 1.3m
warsaw telescope at the las campanas observatory,
and the soar 4.1m telescope.
1 departamento de astronomía y astrofísica, pontificia universidad
católica de chile, av. vicuña mackena 4860, 782-0436 macul, santiago,
chile; e-mail: mzorotov, [email protected]
2 european southern observatory, alonso de cordova 3107, santiago,
chile
3 john simon guggenheim memorial foundation fellow
4 on sabbatical leave at michigan state university, department of physics
and astronomy, east lansing, mi 48824
5 department of physics and astronomy, michigan state university, east
lansing, mi 48824
6 department of physics and astronomy, university of wisconsin,
oshkosh, wi 54901
with at least 15 such variables being known in the field of the
cluster (clement et al. 2001). in this sense, ngc 5286 resem-
bles the case of m62 (ngc 6266; contreras et al. 2005), thus
possibly being yet another member of a new group of gcs
with hb types intermediate between m13 (ngc 6205)-like
(a very blue hb with relatively few rr lyrae variables) and
that of the oosterhoff i (oo i) cluster m3 (a redder hb, with
a well-populated instability strip). ngc 5286 thus constitutes
an example of the "missing link" between m3- and m13-like
gcs (caloi, castellani, & piccolo 1987).
previous surveys for variable stars in ngc 5286 (e.g.,
liller & lichten 1978; gerashchenko et al. 1997) have turned
up relatively large numbers of rr lyrae stars.
however,
such studies were carried out either by photographic meth-
ods, used comparatively few observations, or utilized reduc-
tion methods that have subsequently been superseded by im-
proved techniques, including robust multiple-frame photome-
try (e.g., allframe; stetson 1994) and image subtraction
(e.g., isis; alard 2000). this, together with the large central
surface brightness of the cluster, strongly suggests that a large
population of variable stars remains unknown in ngc 5286,
especially towards its crowded inner regions. in addition, for
the known or suspected variables, it should be possible to ob-
tain light curves of much superior quality to those available,
thus leading to better defined periods, amplitudes, and fourier
decomposition parameters.
indeed, to our knowledge, no modern variability study has
ever been carried out for this cluster. a study of its variable
star population appears especially interesting in view of its
suggested association with the canis major dwarf spheroidal
galaxy (crane et al. 2003; forbes, strader, & brodie 2004),
and the constraints that the ancient rr lyrae variable stars
are able to pose on the early formation history of galax-
ies (e.g., catelan 2004b, 2007, 2009; kinman, saha, & pier
2004; mateu et al. 2009). therefore, the time seems ripe for a
reassessment of the variable star content of ngc 5286 – and
this is precisely the main subject of the present paper.
in §2, we describe the variable stars search techniques and
the conversion from isis relative fluxes to standard magni-
tudes. in §3, we show the results of our variability search, giv-
ing the positions, periods, amplitudes, magnitudes, and colors
for the detected variables. we show the positions of the vari-
ables in the cluster cmd in §4. in §5, we provide the results
of a fourier decomposition of the rr lyrae light curves, ob-
taining several useful physical parameters. we analyze the
cluster's oosterhoff type in §6, whereas §7 is dedicated to the
type ii cepheid that we found in ngc 5286. §8 summarizes
the main results of our investigation. all of the derived light
curves are provided in an appendix.
2. observations and data reduction
the images used in this paper are the same as described in
paper i, constituting a set of 128 frames in v and 133 in b,
acquired with the 1.3m warsaw university telescope at las
campanas observatory, chile, in the course of a one-week
run in april 2003. further details can be found in paper i. in
addition, a few images were taken in feb. 2008 using the 4.1m
southern astrophysical research (soar) telescope, located
in cerro pachón, chile, to further check the positions of the
variables in the crowded regions around the cluster center.
the variable stars search was made using the image sub-
traction package isis v2.2 (alard 2000). in order to convert
the isis differential fluxes to standard magnitudes, we used
daophot ii/allframe (stetson 1987, 1994) to obtain
instrumental magnitudes for each of the variables in the b and
v reference images of the isis reductions. first we obtained
the flux of the variable star in the reference image, given by
fref = 10^((c0 − mref)/2.5),    (1)
where mref is the instrumental magnitude of the star in the ref-
erence image and c0 is a constant which depends on the pho-
tometric reduction package (for daophot ii/allframe
it is c0 = 25). then we derived instrumental magnitudes for
each epoch from the differential fluxes ∆fi = fref −fi given by
isis using the equation
mi = c0 − 2.5 log(fref − ∆fi).    (2)
finally, the equation to obtain the calibrated magnitudes (Mi) from the instrumental magnitudes is of the following form:

Mi = mi + Mstd − mref,    (3)

where Mstd is the calibrated magnitude of the star in the reference image (we used the standard magnitude data from paper i).
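for reference, the conversion of eqs. (1)-(3) is easily scripted; the sketch below assumes the daophot ii/allframe zero point c0 = 25 quoted above, and the numerical inputs in the example are ours, not values from this work.

```python
import numpy as np

C0 = 25.0  # zero-point constant for daophot ii/allframe

def calibrated_magnitudes(m_ref, m_std, delta_flux):
    """convert isis differential fluxes to calibrated magnitudes (eqs. 1-3).

    m_ref      : instrumental magnitude of the star on the reference image
    m_std      : calibrated (standard) magnitude of the same star (from paper i)
    delta_flux : isis differential fluxes, delta_f_i = f_ref - f_i
    """
    f_ref = 10.0 ** ((C0 - m_ref) / 2.5)                          # eq. (1)
    m_inst = C0 - 2.5 * np.log10(f_ref - np.asarray(delta_flux))  # eq. (2)
    return m_inst + m_std - m_ref                                 # eq. (3)

# illustrative numbers only (fluxes in counts, magnitudes in mag)
print(calibrated_magnitudes(m_ref=14.2, m_std=16.8, delta_flux=[0.0, -500.0, 800.0]))
```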
3. variable stars
in our variability search, we found 57 variable stars: 52 rr
lyrae (22 rrc, 30 rrab), 4 lpv's, and 1 type ii cepheid
(more specifically, a bl herculis star). we identified 19 of
the 24 previously catalogued variables, and discovered 38
new variables. a finding chart is provided in figure 1. of
the 16 previously catalogued variables with known periods
(v1-v16) we were able to find 15.
v16 is the only one
not present in our data, because it is not in the chip of the
ccd that we have analyzed for variability. the other 8 pre-
viously catalogued variables (v17-v24) were suggested by
gerashchenko et al. (1997) based just on their position on the
cmd. we found that only 4 of these stars (v17, v18, v20,
and v21) are real variables in our survey. we do not detect
any variable sources at the coordinates that they provide for
the remaining 4 candidate variables in their study (v19, v22,
v23 and v24). the recent images taken with a better spa-
tial resolution at the 4.1m soar telescope reveal that v19 is
very close to two other stars, and probably is not resolved in
the images used by gerashchenko et al. (1997). v22 and v23
are close to the instability strip but they still fall in the blue
part of the hb, so they are not variable stars. v24 is in the
instability strip but very close to the blue part of the hb. it is
possible that this star belongs to the blue hb and is contami-
nated by a redder star.
3.1. periods and light curves
periods were determined using the phase dispersion mini-
mization (pdm; stellingwerf 1978) program in iraf. peri-
ods, along with the coordinates and several important photo-
metric parameters, are provided in table 1. in this table, col-
umn 1 indicates the star's name. columns 2 and 3 provide the
right ascension and declination (j2000 epoch), respectively,
whereas column 4 shows our derived period. columns 5 and 6
list the derived amplitudes in the b and v bands, respectively,
whereas columns 7 and 8 show the magnitude-weighted mean
b and v magnitudes, corrected for differential reddening (see
paper i for details). the corresponding intensity-mean av-
erages are provided in columns 9 and 10 (also corrected for
differential reddening). the average b−v color in magni-
tude units and the intensity-mean color ⟨b⟩−⟨v⟩are given in
columns 11 and 12, respectively, whereas column 13 lists the
b−v color corresponding to the equivalent static star. finally,
the last column indicates the star's variability type.
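as an aside on the period determination, a simplified, single-pass version of the pdm statistic (not the iraf implementation actually used here) can be written as follows; the synthetic light curve in the example is ours.

```python
import numpy as np

def pdm_theta(times, mags, period, n_bins=10):
    """simplified stellingwerf (1978) theta statistic for one trial period:
    pooled variance of the phase-binned data divided by the overall variance."""
    phase = np.mod(np.asarray(times) / period, 1.0)
    mags = np.asarray(mags)
    pooled, dof = 0.0, 0
    for b in range(n_bins):
        in_bin = mags[(phase >= b / n_bins) & (phase < (b + 1) / n_bins)]
        if in_bin.size > 1:
            pooled += (in_bin.size - 1) * np.var(in_bin, ddof=1)
            dof += in_bin.size - 1
    return (pooled / dof) / np.var(mags, ddof=1) if dof > 0 else np.inf

def pdm_search(times, mags, trial_periods):
    """return the trial period giving the deepest theta minimum."""
    thetas = np.array([pdm_theta(times, mags, p) for p in trial_periods])
    return trial_periods[int(np.argmin(thetas))]

# synthetic check: recover a 0.635 d sinusoid from one week of noisy data.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 7.0, 300))
m = 16.8 + 0.4 * np.sin(2 * np.pi * t / 0.635) + rng.normal(0.0, 0.02, t.size)
print(pdm_search(t, m, np.linspace(0.2, 1.0, 4001)))
```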
to derive the color of the equivalent static star (i.e., the
color the star would have if it were not pulsating), we first de-
rived the magnitude-weighted mean color, and then applied an
amplitude-dependent correction by interpolating on table 4
from bono, caputo, & stellingwerf (1995).
we calculate the hb level, vhb = 16.63 ±0.04, as the aver-
age ⟨v⟩magnitude of all the rr lyrae detected.
magnitude data as a function of julian date and phase for
the variable stars detected in our study are given in table 2.
in this table, column 1 indicates the star's name, following the
clement et al. (2001) designation (when available). column 2
indicates the filter used. column 3 provides the julian date of
the observation, whereas column 4 shows the phase according
to our derived period (from table 1). columns 5 and 6 list the
observed magnitude in the corresponding filter and the asso-
ciated error, respectively. light curves based on our derived
periods (when available) are shown in the appendix.
4. color-magnitude diagram
figure 2 shows the variable stars in the ngc 5286 cmd,
decontaminated from field stars as described in paper i. we
can see that all rr lyrae stars fall around the hb region,
whereas the type ii cepheid is brighter than the hb. the four
detected lpvs all fall close to the top of the rgb. these
trends are precisely as expected if all the detected variables
table 1
photometric parameters for ngc 5286 variables
id | ra (j2000) (h:m:s) | dec (j2000) (deg:m:s) | p (days) | ab (mag) | av (mag) | (b)mag | (v)mag | ⟨b⟩ | ⟨v⟩ | (b−v)mag | ⟨b⟩−⟨v⟩ | (b−v)st | comments
v01 | 13:46:21.5 | −51:20:03.8 | 0.635 | 1.26 | 0.99 | 17.435 | 16.743 | 17.539 | 16.818 | 0.692 | 0.721 | 0.680 | rrab
v02 | 13:46:35.1 | −51:23:14.8 | 0.611 | 0.82 | 0.71 | 17.517 | 17.002 | 17.578 | 17.047 | 0.515 | 0.530 | 0.507 | rrab
v03 | 13:46:54.1 | −51:23:08.9 | 0.685 | 0.92 | 0.73 | 17.216 | 16.697 | 17.299 | 16.744 | 0.520 | 0.555 | 0.510 | rrab
v04 | 13:46:19.2 | −51:23:47.2 | 0.352 | 0.60 | 0.46 | 17.067 | 16.639 | 17.114 | 16.668 | 0.428 | 0.446 | 0.423 | rrc
v05 | 13:46:33.4 | −51:22:03.0 | 0.5873 | ... | 1.31 | ... | 17.383 | ... | 17.518 | ... | ... | ... | rrab
v06 | 13:46:32.9 | −51:23:04.3 | 0.646 | 1.38 | 1.08 | 16.850 | 16.363 | 16.990 | 16.485 | 0.487 | 0.504 | 0.476 | rrab
v07 | 13:46:29.4 | −51:23:38.4 | 0.512 | ... | ... | 17.08: | 16.58: | ... | ... | 0.50: | ... | ... | rrab
v08 | 13:46:28.5 | −51:23:10.2 | 2.33 | 1.24 | 1.15 | 15.569 | 15.099 | 15.730 | 15.259 | 0.471 | 0.471 | ... | bl her
v09 | 13:46:39.0 | −51:22:00.0 | 0.3003 | 0.63 | 0.48 | 17.266 | 16.873 | 17.314 | 16.904 | 0.393 | 0.411 | 0.387 | rrc
v10 | 13:46:24.0 | −51:22:46.4 | 0.569 | 1.29 | 1.18 | 17.529 | 17.141 | 17.635 | 17.241 | 0.388 | 0.394 | 0.376 | rrab
v11 | 13:46:26.2 | −51:23:35.7 | 0.652 | 0.94 | 0.69 | 17.120 | 16.530 | 17.194 | 16.519 | 0.590 | 0.676 | 0.580 | rrab
v12 | 13:46:09.1 | −51:22:38.9 | 0.356 | 0.65 | 0.48 | 17.305 | 16.770 | 17.352 | 16.803 | 0.535 | 0.550 | 0.528 | rrc
v13 | 13:46:33.4 | −51:23:33.8 | 0.294 | 0.65 | 0.49 | 16.871 | 16.518 | 16.919 | 16.548 | 0.353 | 0.371 | 0.346 | rrc
v14 | 13:46:23.2 | −51:23:38.9 | 0.415 | 0.60 | 0.47 | 16.994 | 16.535 | 17.037 | 16.562 | 0.458 | 0.474 | 0.454 | rrc
v15 | 13:46:24.1 | −51:22:56.8 | 0.585 | 1.78 | 1.19 | 17.539 | 16.825 | 17.768 | 16.919 | 0.713 | 0.849 | 0.689 | rrab
v17 | 13:46:34.6 | −51:23:29.0 | 0.733 | 0.83 | 0.23 | 17.172 | 16.581 | 17.182 | 16.584 | 0.591 | 0.598 | 0.582 | rrab
v18 | 13:46:33.9 | −51:23:16.0 | 0.781 | 0.20 | ... | 17.265 | ... | 17.280 | ... | ... | ... | ... | rrab
v20 | 13:46:25.2 | −51:21:38.6 | 0.319 | 0.39 | 0.35 | 17.077 | 16.614 | 17.095 | 16.628 | 0.463 | 0.467 | 0.467 | rrc
v21 | 13:46:25.8 | −51:24:02.7 | 0.646 | 0.89 | 0.73 | 17.176 | 16.673 | 17.246 | 16.718 | 0.503 | 0.528 | 0.493 | rrab
nv1 | 13:46:27.4 | −51:25:46.1 | 0.366 | 0.54 | 0.43 | 17.156 | 16.695 | 17.199 | 16.722 | 0.461 | 0.476 | 0.459 | rrc
nv2 | 13:46:17.7 | −51:23:56.8 | 0.354 | 0.60 | 0.50 | 17.284 | 16.758 | 17.336 | 16.789 | 0.526 | 0.547 | 0.521 | rrc
nv3 | 13:46:30.5 | −51:23:32.5 | 0.755 | 0.55 | 0.30 | 17.292 | 16.501 | 17.321 | 16.503 | 0.791 | 0.819 | 0.788 | rrab
nv4 | 13:46:15.9 | −51:23:31.7 | 0.786 | 0.28 | 0.21 | 17.099 | 16.450 | 17.108 | 16.455 | 0.649 | 0.653 | 0.660 | rrab
nv5 | 13:46:17.6 | −51:20:33.0 | 0.357 | 0.60 | 0.45 | 17.166 | 16.659 | 17.207 | 16.685 | 0.507 | 0.522 | 0.502 | rrc
nv6 | 13:46:26.4 | −51:23:27.6 | 0.566 | 1.45 | 1.12 | 16.888 | 16.511 | 17.028 | 16.604 | 0.377 | 0.425 | 0.366 | rrab
nv7 | 13:46:25.8 | −51:21:34.6 | 0.339 | 0.48 | 0.41 | 16.986 | 16.560 | 17.017 | 16.585 | 0.426 | 0.431 | 0.427 | rrc
nv8 | 13:46:26.2 | −51:23:12.4 | 0.80 | 0.30 | 0.30 | 17.103 | 16.463 | 17.113 | 16.473 | 0.640 | 0.640 | 0.649 | rrab
nv9 | 13:46:24.6 | −51:23:11.6 | 0.745 | ... | ... | 17.24: | 16.58: | ... | ... | 0.66: | ... | ... | rrab
nv10 | 13:46:25.1 | −51:23:02.2 | 0.339 | 0.68 | 0.54 | 16.922 | 16.436 | 16.984 | 16.454 | 0.485 | 0.529 | 0.477 | rrc
nv11 | 13:46:26.0 | −51:23:02.7 | 0.536 | 0.85 | 0.71 | 16.813 | 16.347 | 16.889 | 16.402 | 0.466 | 0.487 | 0.457 | rrab
nv12 | 13:46:26.7 | −51:23:02.5 | 0.905 | 0.52 | 0.41 | 16.876 | 16.251 | 16.906 | 16.269 | 0.625 | 0.637 | 0.622 | rrab
nv13 | 13:46:27.3 | −51:22:58.2 | 0.583 | 0.95 | 1.35 | 16.514 | 16.083 | 16.585 | 16.215 | 0.431 | 0.370 | 0.421 | rrab
nv14 | 13:46:29.2 | −51:22:07.8 | 0.284 | 0.64 | 0.44 | 17.319 | 16.803 | 17.366 | 16.826 | 0.516 | 0.540 | 0.509 | rrc
nv15 | 13:46:25.5 | −51:22:53.1 | 0.742 | 0.34 | 0.52 | 17.128 | 16.477 | 17.138 | 16.505 | 0.651 | 0.633 | 0.657 | rrab
nv16 | 13:46:27.0 | −51:22:50.7 | 0.366 | 0.54 | 0.32 | 17.125 | 16.618 | 17.158 | 16.629 | 0.507 | 0.529 | 0.505 | rrc
nv17 | 13:46:26.6 | −51:22:49.3 | 0.322 | 0.49 | 0.50 | 16.923 | 16.490 | 16.946 | 16.431 | 0.433 | 0.415 | 0.433 | rrc
nv18 | 13:46:29.1 | −51:22:46.9 | 0.362 | 0.57 | 0.44 | 17.257 | 16.410 | 17.295 | 16.431 | 0.846 | 0.864 | 0.843 | rrc
nv19 | 13:46:26.8 | −51:22:44.2 | 0.658 | 0.94 | 1.70 | 16.902 | 16.624 | 16.924 | 16.706 | 0.278 | 0.218 | 0.268 | rrab
nv20 | 13:46:22.5 | −51:22:43.2 | 0.3103 | 1.90 | 0.65 | 17.555 | 16.753 | 18.023 | 16.791 | 0.802 | 1.232 | 0.736 | rrc
nv21 | 13:46:29.4 | −51:22:41.0 | 0.570 | 0.95 | 1.06 | 16.830 | 16.583 | 16.902 | 16.669 | 0.247 | 0.233 | 0.237 | rrab
nv22 | 13:46:27.5 | −51:22:39.9 | 0.68 | 1.15 | 1.20 | 16.829 | 16.273 | 17.022 | 16.374 | 0.556 | 0.648 | 0.544 | rrab
nv23 | 13:46:26.7 | −51:22:16.1 | 0.598 | 1.14 | 1.30 | 17.090 | 16.609 | 17.182 | 16.729 | 0.481 | 0.453 | 0.469 | rrab
nv24 | 13:46:29.1 | −51:22:39.1 | 0.60 | 0.75 | 0.70 | 16.927 | 16.346 | 16.965 | 16.382 | 0.581 | 0.583 | 0.574 | rrab
nv25 | 13:46:24.2 | −51:22:17.2 | 0.550 | 1.44 | 1.20 | 17.328 | 16.724 | 17.048 | 16.664 | 0.605 | 0.384 | 0.593 | rrab
nv26 | 13:46:26.0 | −51:22:36.5 | 0.364 | 0.46 | 0.36 | 17.011 | 16.523 | 17.040 | 16.536 | 0.489 | 0.505 | 0.490 | rrc
nv27 | 13:46:30.1 | −51:22:22.0 | 0.706 | 1.17 | 0.80 | 16.952 | 16.599 | 17.489 | 16.811 | 0.490 | 0.530 | 0.340 | rrab
nv28 | 13:46:29.8 | −51:22:35.4 | 0.540 | 0.97 | 0.70 | 16.880 | 16.390 | 16.954 | 16.424 | 0.490 | 0.530 | 0.479 | rrab
nv29 | 13:46:28.8 | −51:22:33.8 | 0.301 | 0.97 | 0.63 | 17.458 | 16.963 | 17.565 | 17.010 | 0.495 | 0.555 | 0.473 | rrc
nv30 | 13:46:29.1 | −51:22:24.1 | 0.72 | 0.65 | 0.55 | 17.251 | 16.641 | 17.292 | 16.672 | 0.610 | 0.620 | 0.605 | rrab
nv31 | 13:46:25.5 | −51:22:31.6 | 0.289 | 0.41 | 0.40 | 17.163 | 16.519 | 17.189 | 16.535 | 0.645 | 0.654 | 0.648 | rrc
nv32 | 13:46:25.2 | −51:22:30.0 | 0.283 | 0.34 | 0.24 | 16.804 | 16.339 | 16.817 | 16.344 | 0.465 | 0.473 | 0.471 | rrc
nv33 | 13:46:27.9 | −51:22:26.6 | 0.294 | 0.34 | 0.50 | 16.801 | 16.457 | 16.815 | 16.483 | 0.345 | 0.333 | 0.351 | rrc
nv34 | 13:46:26.7 | −51:22:24.9 | 0.367 | 0.42 | 0.48 | 16.569 | 16.073 | 16.591 | 16.096 | 0.496 | 0.495 | 0.499 | rrc
nv35 | 13:46:24.9 | −51:23:03.8 | ... | ... | ... | 14.70: | 13.38: | ... | ... | 1.32: | ... | ... | lpv
nv36 | 13:46:29.1 | −51:22:58.4 | ... | ... | ... | 15.30: | 13.40: | ... | ... | 1.90: | ... | ... | lpv
nv37 | 13:46:28.8 | −51:22:12.9 | ... | ... | ... | 15.70: | 14.60: | ... | ... | 1.10: | ... | ... | lpv
nv38 | 13:46:28.8 | −51:20:37.0 | ... | ... | ... | 15.30: | 13.74: | ... | ... | 1.56: | ... | ... | lpv
are cluster members. note, however, that the cmd positions
of the 4 lpvs are not precisely defined, since we lack ade-
quate phase coverage for these stars.
figure 3 is a magnified cmd showing only the hb region.
to assess the effects of crowding, we use different symbol
sizes for the variables in different radial annuli from the clus-
ter center: small sizes for stars in the innermost cluster regions
(r ≤0.29′) which are badly affected by crowding, medium
sizes for stars with 0.29′ < r ≤0.58′, and large sizes for stars
in the outermost cluster regions (r > 0.58′). as expected, the
variables in the innermost cluster region present more scat-
ter. apart from this, in general the detected rr lyrae fall
inside a reasonably well-defined instability strip (is). how-
ever, we can see that there is not a clear-cut separation in
color between rrc's and rrab's. although the rrc's tend to
be found preferentially towards the blue side of the is, as ex-
pected, the rrab's are more homogeneously distributed. this
can be again a scatter effect, because the rrab stars that are
in the less crowded regions of the cluster (large and medium
size circles) are more concentrated at the red part.
4.1. notes on individual stars
v5: this star was only detected in our v images, so that we
were unable to determine its color. it remains unclear to us
fig. 1.- finding chart for the ngc 5286 variables. top: the outer variables in the v image obtained with the 0.9m ctio telescope (note scale at the bottom
right of the image). bottom left: the same image as before, but zoomed in slightly. bottom right: finding chart for the variables located closest to the cluster
center.
why isis was unable to detect this variable in the b data –
note, from figure 1, that it is not located in an especially
crowded region of the cluster.
v7: our derived period, 0.512 d, is slightly longer than the
previously reported period of 0.50667 d in liller & lichten
(1978). as a matter of fact, v7 remains one of the shortest-
period rrab stars among all known ooii gcs.
because
the period is so close to half a day, there is a consider-
able gap in the phased light curve for v7 (as was also the
case, though to a somewhat lesser degree, for the light curve
of liller & lichten). we carefully checked the pdm peri-
odogram of the star in search of acceptable longer periods,
but could find none that fit our data, nor could we find a pe-
riod that reduced significantly the spread seen in the phased
light curve close to minimum light (which is particularly ob-
vious in the v-band light curve). the liller & lichten light
curve also shows substantial scatter close to minimum light,
although in their case much of the scatter is clearly due to
photometric error.
v8: liller & lichten (1978) found a period of 0.7 d for this
table 2
photometry of the variable stars
name | filter | jd (d) | phase | mag (mag) | e_mag (mag)
v01 | v | 2,452,736.54861 | 0.0000 | 16.2351 | 0.0031
v01 | v | 2,452,736.55367 | 0.0080 | 16.2779 | 0.0042
v01 | v | 2,452,736.55977 | 0.0176 | 16.2787 | 0.0051
v01 | v | 2,452,736.57490 | 0.0414 | 16.3361 | 0.0054
v01 | v | 2,452,736.58749 | 0.0612 | 16.3878 | 0.0059
v01 | v | 2,452,736.59664 | 0.0756 | 16.3996 | 0.0059
note. - this table is published in its entirety in the electronic edition of the astronomical journal. a portion is shown here for guidance regarding its form and content.
fig. 2.- cmd for ngc 5286 (from paper i) including the variables ana-
lyzed in this paper. crosses indicate the lpv's, circles the rr lyrae, and the
open square the type ii cepheid. the positions of the variables are based on
the intensity-mean magnitudes ⟨v⟩and on the colors of the equivalent static
stars (b−v)st, as given in table 1.
star and suggested that it is an rr lyrae-type variable. in our
analysis we found an alias at 0.7 d for the period, but the best
fit is obtained with a period of 2.33 d, which corresponds to a
type ii cepheid of the bl her type.
v18: as for v5, we only detected this star in one of the
filters. in this case isis only detected it in our b images, so
that we were unable to determine its color and also to perform
fourier decomposition.
v7 and nv9: we do not see the minimum and maximum,
respectively, for these two variables. for that reason, we could
not perform a fourier analysis, and the mean v and b values
provided in table 1 are just approximate.
5. fourier decomposition
light curves for rr lyrae variables were analyzed by
fourier decomposition using the same equations as in
corwin et al. (2003), namely
mag = a0 + Σ_{j=1}^{n} a_j sin(2π j t/p + φ_j)    (4)
(for rrab variables) and
fig. 3.- as in figure 2, but focusing on the hb region of the cmd. here
we use different symbol sizes for the variables in different radial annuli from
the cluster center: small symbols are related to stars inside the core radius
(r ≤0.29′); medium-sized symbols refer to stars with 0.29′ < r ≤0.58′; and
the larger symbols refer to stars in the outermost cluster regions (r > 0.58′).
table 3
fourier coefficients: rrc stars
id | a21 | a31 | a41 | φ21 | φ31 | φ41
v4 | 0.113 | 0.059 | 0.037 | 4.503 | 3.667 | 2.103
v9 | 0.226 | 0.056 | 0.053 | 4.735 | 2.905 | 1.283
v12 | 0.159 | 0.096 | 0.032 | 5.064 | 3.454 | 2.635
v13 | 0.194 | 0.063 | 0.058 | 4.748 | 2.451 | 1.248
v14 | 0.072 | 0.064 | 0.062 | 5.411 | 4.522 | 3.401
v20 | 0.059 | 0.016 | 0.026 | 4.723 | 3.832 | 3.128
nv1 | 0.117 | 0.091 | 0.053 | 4.768 | 3.635 | 3.295
nv2 | 0.123 | 0.051 | 0.036 | 4.621 | 3.406 | 2.433
nv5 | 0.083 | 0.059 | 0.043 | 4.766 | 4.134 | 2.693
nv7 | 0.151:: | 0.064:: | 0.012:: | 5.443:: | 4.886:: | 5.665::
nv10 | 0.144 | 0.056 | 0.020 | 4.936 | 3.237 | 2.660
nv14 | 0.193 | 0.053 | 0.044 | 4.645 | 2.881 | 1.954
nv16 | 0.069:: | 0.094:: | 0.058:: | 4.448:: | 4.293:: | 2.468::
nv17 | 0.115:: | 0.052:: | 0.085:: | 5.515:: | 3.764:: | 2.698::
nv18 | 0.087: | 0.013: | 0.041: | 4.484: | 5.416: | 3.313:
nv20 | 0.164 | 0.098 | 0.072 | 4.844 | 3.104 | 1.823
nv26 | 0.068: | 0.072: | 0.035: | 5.041: | 4.154: | 3.058:
nv29 | 0.233: | 0.091: | 0.029: | 4.929: | 3.344: | 1.328:
nv31 | 0.152: | 0.002: | 0.046: | 4.557: | 1.086: | 0.590:
nv32 | 0.074: | 0.017: | 0.050: | 5.495: | 3.484: | 3.811:
nv33 | 0.168: | 0.075: | 0.073: | 4.704: | 2.230: | 0.747:
nv34 | 0.043: | 0.103: | 0.017: | 5.336: | 3.616: | 2.641:
mag = a0 + Σ_{j=1}^{n} a_j cos(2π j t/p + φ_j)    (5)
(for rrc variables), where again n = 10 was usually adopted.
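a minimal least-squares sketch of the fits of eqs. (4)-(5); the actual fits in this work may differ in their weighting and outlier treatment.

```python
import numpy as np

def fourier_fit(phase, mag, n_terms=10, sine_series=True):
    """least-squares fit of eqs. (4)-(5) to a folded light curve; returns a0,
    the amplitudes a_j and the phases phi_j of the sine (or cosine) series."""
    phase = np.asarray(phase, dtype=float)
    cols = [np.ones_like(phase)]
    for j in range(1, n_terms + 1):
        cols += [np.sin(2 * np.pi * j * phase), np.cos(2 * np.pi * j * phase)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), np.asarray(mag), rcond=None)
    a0, sc = coeffs[0], coeffs[1:].reshape(-1, 2)        # (sine, cosine) pairs
    amp = np.hypot(sc[:, 0], sc[:, 1])
    phi = np.arctan2(sc[:, 1], sc[:, 0]) if sine_series else np.arctan2(-sc[:, 0], sc[:, 1])
    return a0, amp, np.mod(phi, 2 * np.pi)

def low_order_parameters(amp, phi):
    """amplitude ratios a_j1 = a_j/a_1 and phase differences phi_j1 = phi_j - j*phi_1."""
    return amp[1:4] / amp[0], np.mod(phi[1:4] - np.arange(2, 5) * phi[0], 2 * np.pi)
```

the quantities a21, a31, a41 and φ21, φ31, φ41 obtained in this way are those listed in tables 3 and 5.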
5.1. rrc variables
amplitude ratios a j1 ≡a j/a1 and phase differences φj1 ≡
φj −jφ1 for the lower-order terms are provided in table 3. in
this table, a colon symbol (":") indicates an uncertain value,
whereas a double colon ("::") indicates a very uncertain value.
simon & clement (1993) used light curves of rrc vari-
ables, as provided by their hydrodynamical pulsation mod-
els, to derive equations to calculate mass, luminosity, effective
temperature, and a "helium parameter" for rrc variables – all
table 4
fourier-based physical parameters: rrc stars
id | m/m⊙ | log(l/l⊙) | te (k) | y | [fe/h] | ⟨mv⟩
v4 | 0.563 | 1.725 | 7264 | 0.271 | −1.690 | 0.747
v9 | 0.629 | 1.698 | 7358 | 0.275 | −1.891 | 0.811
v12 | 0.598 | 1.744 | 7229 | 0.264 | −1.681 | 0.737
v13 | 0.625 | 1.619 | 7568 | 0.298 | −1.898 | 0.822
v14 | 0.495 | 1.751 | 7167 | 0.268 | −1.069 | 0.711
v20 | 0.513 | 1.672 | 7382 | 0.289 | −1.837 | 0.763
nv1 | 0.579 | 1.745 | 7218 | 0.265 | −1.605 | 0.745
nv2 | 0.604 | 1.744 | 7231 | 0.264 | −1.673 | 0.742
nv5 | 0.504 | 1.705 | 7290 | 0.280 | −1.656 | 0.748
nv10 | 0.617 | 1.734 | 7259 | 0.266 | −1.766 | 0.764
nv14 | 0.614 | 1.673 | 7418 | 0.282 | −1.896 | 0.816
nv20 | 0.609 | 1.701 | 7342 | 0.275 | −1.887 | 0.830
mean | 0.601 ± 0.049 | 1.715 ± 0.040 | 7276 ± 110 | 0.273 ± 0.011 | −1.713 ± 0.230 | 0.770 ± 0.039
as a function of the fourier phase difference φ31 ≡φ3 −3φ1
and the period. however, catelan (2004b, see his §4) pointed
out that the simon & clement set of equations cannot, in their
current form, provide physically correct values for luminosi-
ties and masses, since they violate the constraints imposed
by the ritter (period-mean density) relation. we still provide
the derived values for the ngc 5286 rrc stars in this paper
though, chiefly for comparison with similar work for other
gcs.
accordingly, we use simon & clement's (1993) equa-
tions (2), (3), (6), and (7) to compute m/m⊙, log(l/l⊙), te,
and y, respectively, for 12 of our rrc variables (i.e., those
with the best defined fourier coefficients). we also use equa-
tion (3) in morgan, wahl, & wieckhorst (2007) to compute
[fe/h], and equation (10) in kovács (1998) to compute mv.
the results are given in table 4.
the unweighted mean values and corresponding standard
errors of the derived mean mass, log luminosity (in so-
lar units), effective temperature, "helium parameter", metal-
licity (in the zinn & west 1984 scale), and mean absolute
magnitude in v are (0.601 ± 0.049)m⊙, (1.715 ± 0.040),
(7276±110) k, (0.273±0.011), (−1.71±0.23), and (0.770±
0.039) mag, respectively.
5.2. rrab variables
amplitude ratios a j1 and phase differences φj1 for the
lower-order terms are provided, in the case of the rrab's,
in table 5.
we also give the jurcsik-kovács dm value
(jurcsik & kovács 1996, computed on the basis of their
eq. [6] and table 6), which is intended to differentiate rrab
stars with "regular" light curves from those with "anomalous"
light curves (e.g., presenting the blazhko effect – but see also
cacciari et al. 2005 for a critical discussion of dm as an in-
dicator of the occurrence of the blazhko phenomenon). as
before, a colon symbol (":") indicates an uncertain value,
whereas a double colon ("::") indicates a very uncertain value.
jurcsik & kovács (1996), kovács & jurcsik (1996, 1997), jurcsik (1998), kovács & kanbur (1998), and kovács & walker (1999, 2001) derived empirical expressions
that relate metallicity, absolute magnitude, and temperature
with the fourier parameters of rrab stars, in the case of
sufficiently "regular" light curves (dm < 3). we accordingly
use equations (1), (2), (5), and (11) in jurcsik (1998) to
compute [fe/h], mv , v −k, and logte⟨v−k⟩, respectively,
for the 12 rrab variables in our sample with dm < 3. the
color indices b−vand v −i come from equations (6) and (9)
of kovács & walker (2001); we then use equation (12) of
kovács & walker (1999), assuming a mass of 0.7m⊙, to
derive temperature values from equation (11) (for b−v) and
equation (12) (for v −i) in kovács & walker (2001). these
results are given in table 6.
fourier parameters suggest a metallicity of [fe/h] =
−1.52 ± 0.21 for ngc 5286 in the jurcsik (1995) scale; this
corresponds to a value of [fe/h] = −1.68 in the zinn & west
(1984) scale. this is consistent with the value derived using
the rrc variables and is in excellent agreement with harris
(1996, [fe/h] = −1.67), and also with the value obtained in
paper i ([fe/h] = −1.70 ± 0.05, again in the zinn & west
scale) on the basis of several different photometric parame-
ters describing the shape and position of the cluster's red giant
branch in the v, b−vdiagram.
we find a mean absolute magnitude of ⟨mv ⟩= 0.717 ±
0.038 mag for the rrab stars. for the same set of 12 rrab
stars used to derive this value, we also find ⟨v⟩= 16.64 ±
0.08 mag. this implies a distance modulus of (m −m)v =
15.92 ± 0.12 for ngc 5286, which is in excellent agree-
ment with the value provided in the harris (1996) catalog,
namely, (m −m)v = 15.95 mag.
if one adopts instead for
the hb an average absolute magnitude of mv = 0.60 mag
at the ngc 5286 metallicity, as implied by equation (4a) in
catelan & cortés (2008) – which is based on a calibration
of the rr lyrae distance scale that uses the latest hippar-
cos and hubble space telescope trigonometric parallaxes for
rr lyr, and takes explicitly into account the evolutionary sta-
tus of this star – one finds for the cluster a distance modu-
lus of (m −m)v = 16.04 mag for ngc 5286. we caution the
reader that the ngc 5286 rr lyrae stars could in principle be
somewhat overluminous for the cluster's metallicity (in view
of the cluster's predominantly blue hb), in which case the
correct distance modulus could be even larger, by an amount
that could be of the order of ∼0.1 mag (e.g., lee & carney
1999; demarque et al. 2000). finally, and as also pointed out
by cacciari et al. (2005), we also warn the reader that intrin-
sic colors and temperatures estimated from fourier decompo-
sition are not particularly reliable, and should accordingly be
used with due caution. the reader is also referred to kovács
(1998) and catelan (2004b) for caveats regarding the validity
of the results obtained based on the simon & clement (1993)
relations for rrc stars.
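as a quick numerical check of the two distance moduli discussed in this section, the following lines simply apply (m − m)v = ⟨v⟩ − ⟨mv⟩ to the values quoted in the text (the helper name is ours):

```python
def distance_modulus(mean_apparent_v, mean_absolute_v):
    """apparent distance modulus from the mean magnitudes of the rrab stars."""
    return mean_apparent_v - mean_absolute_v

mean_v = 16.64                             # <v> of the 12 rrab's with dm < 3
print(distance_modulus(mean_v, 0.717))     # fourier-based mv -> ~15.92 mag
print(distance_modulus(mean_v, 0.60))      # catelan & cortes (2008) mv -> ~16.04 mag
```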
6. the oosterhoff type of ngc 5286
figure 4 shows a histogram of our derived periods in ngc 5286. the bottom panel is similar to the upper panel, but with the rrc periods "fundamentalized" using the equation log pf = log pc + 0.128 (e.g., catelan 2009, and references therein). note the lack of a sharply peaked distribution,
table 5
fourier coefficients: rrab stars

id     a21      a31      a41      φ21      φ31      φ41       dm
v1     0.524    0.359    0.246    2.406    5.113    1.638     0.751
v2     0.379    0.257    0.108    2.293    4.754    1.181     2.534
v3     0.412    0.258    0.126    2.623    5.379    2.263     1.211
v5     0.472    0.303    0.231    2.329    5.040    1.459     5.957
v6     0.563    0.330    0.199    2.419    5.198    1.683    14.276
v7     ...      ...      ...      ...      ...      ...       ...
v10    0.488    0.325    0.193    2.330    4.948    1.148     2.117
v11    0.491    0.226    0.176    2.484    5.228    1.844    16.137
v15    0.448    0.362    0.222    2.363    5.235    1.517     4.592
v17    0.267    0.078    0.019    2.653    5.840    3.345     1.502
v21    0.494    0.315    0.142    2.585    5.403    2.227     1.956
nv3    0.346    0.204    0.023    2.647    5.167    1.894     6.447
nv4    0.246:   0.087:   0.037:   2.712:   6.177:   3.153:    3.197:
nv6    0.458    0.352    0.237    2.260    4.839    1.164     2.117
nv8    0.330    0.135    0.021    2.775    5.838    3.085     0.379
nv9    ...      ...      ...      ...      ...      ...       ...
nv11   0.334    0.210    0.107    2.176    4.616    1.198     3.550
nv12   0.295:   0.080:   0.106:   3.511:   0.525:   4.805:    3.143:
nv13   0.517    0.353    0.217    2.438    5.241    1.588     0.324
nv15   0.439    0.194    0.078    2.619    5.645    2.320     4.390
nv19   0.475    0.231    0.000    2.450    5.234    1.749     5.938
nv21   0.487    0.341    0.245    2.357    4.812    1.232     0.313
nv22   0.566    0.339    0.217    2.670    5.588    2.607    10.859
nv23   0.463    0.321    0.221    2.208    5.015    1.187     2.600
nv24   0.631    0.282    0.136    1.964    4.458    0.793     4.763
nv25   0.448    0.344    0.241    2.277    4.859    1.108     4.151
nv27   0.481    0.296    0.156    2.627    5.540    2.191     0.686
nv28   0.497    0.277    0.158    2.278    4.901    0.799     3.703
nv30   0.444    0.262    0.119    2.474    5.587    2.174     3.312
table 6
fourier-based physical parameters: rrab stars

id     [fe/h]   ⟨mv⟩    ⟨v−k⟩   logte⟨v−k⟩   ⟨b−v⟩   logte⟨b−v⟩   ⟨v−i⟩   logte⟨v−i⟩
v1     −1.587   0.707   1.200   3.801        0.355   3.804        0.450   3.837
v2     −1.938   0.728   1.239   3.798        0.355   3.802        0.443   3.843
v3     −1.501   0.692   1.256   3.794        0.366   3.801        0.475   3.829
v10    −1.452   0.747   1.105   3.811        0.327   3.815        0.398   3.848
v17    −1.139   0.752   1.321   3.786        0.406   3.790        0.534   3.810
v21    −1.529   0.687   1.267   3.793        0.376   3.798        0.489   3.826
nv6    −1.584   0.752   1.135   3.808        0.335   3.812        0.407   3.848
nv8    −1.509   0.645   1.385   3.780        0.409   3.785        0.551   3.810
nv13   −1.521   0.662   1.292   3.790        0.391   3.794        0.518   3.818
nv21   −1.140   0.732   1.069   3.814        0.322   3.819        0.396   3.845
nv23   −1.642   0.752   1.156   3.806        0.339   3.809        0.413   3.847
nv27   −1.397   0.677   1.239   3.796        0.370   3.800        0.484   3.825
mean   −1.515±0.213   0.717±0.038   1.239±0.093   3.797±0.010   0.361±0.029   3.802±0.010   0.462±0.054   3.833±0.015
contrary to what is seen in several gcs of both oosterhoff types, but most notably in m3 (e.g., catelan 2004a; d'antona & caloi 2008, and references therein).
in order to assign an oosterhoff type to ngc 5286, we
must compare its rr lyrae properties with those found in
other oosterhoff (1939, 1944) type i and ii gcs.
in this
sense, clement et al. (2001) found the mean rrab and rrc
(plus rrd) periods for rr lyrae stars in galactic gcs to be
0.559 days and 0.326 days, respectively, for ooi clusters, and
0.659 days and 0.368 days, respectively, in the case of ooii
clusters. in addition, catelan et al. (2009, in preparation) have
recently shown that the minimum period of the ab-type pul-
sators pab,min, when used in conjunction with ⟨pab⟩, provides a particularly reliable diagnostic of oosterhoff status.
the key quantities for the cluster can be summarized as follows:

⟨pab⟩ = 0.656 d,   (6)
⟨pc⟩ = 0.333 d,   (7)
pab,min = 0.512 d,   (8)
nc/(nc+ab) = 0.42.   (9)
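the quantities in equations (6)-(9), as well as the fundamentalized periods used for the bottom panel of figure 4, are straightforward to compute from the period lists; a minimal sketch follows (the period arrays below are placeholders, not the actual table 1 data):

```python
import numpy as np

# placeholder values only; in practice these arrays would hold the rrab and
# rrc periods measured for ngc 5286 (table 1 of this paper)
p_ab = np.array([0.512, 0.536, 0.656])        # days, illustrative
p_c = np.array([0.303, 0.333, 0.363])         # days, illustrative

p_ab_mean = p_ab.mean()                       # eq. (6)
p_c_mean = p_c.mean()                         # eq. (7)
p_ab_min = p_ab.min()                         # eq. (8)
c_fraction = len(p_c) / (len(p_c) + len(p_ab))    # eq. (9), nc/(nc+ab)

# "fundamentalize" the first-overtone periods: log pf = log pc + 0.128
p_fund = 10.0 ** (np.log10(p_c) + 0.128)

print(p_ab_mean, p_c_mean, p_ab_min, c_fraction, p_fund)
```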
one immediately finds that the value of ⟨pab⟩for the clus-
ter points to an ooii status (see also fig. 5, which is based
on the compilation presented in catelan 2009) – which is also
favored by its relatively high c-type number fraction. on the
other hand, the average period of the rrc's is lower than typ-
ically found among ooii globulars, being more typical of ooi
objects. however, as shown by catelan et al. (2009), there is a
large overlap in ⟨pc⟩values between ooi and ooii globulars,
thus making this quantity a poorer indicator of oosterhoff sta-
tus than is often realized.
the situation regarding pab,min is rather interesting, for the
fig. 4.- top: period histogram for the ngc 5286 rr lyrae stars. bottom: same as the top, but with the periods of the rrc's fundamentalized.
fig. 5.- distribution of galactic gcs in the mean rrab period vs. metal-
licity plane. the position of ngc 5286, as derived on the basis of our new
measurements, is highlighted.
value derived for ngc 5286 makes it one of the ooii clusters
with the shortest pab,min values to date – though 0.512 d would
still clearly be too long for ooi clusters, which generally have pab,min < 0.5 d (catelan et al. 2009) – as opposed to
ooii clusters, which typically have instead pab,min > 0.5 d. as
can be seen from table 1 and figure 4 (top), after v7 (the star
with p = 0.512 d), the next shortest-period star in ngc 5286
is nv11, with p = 0.536 d; this is indeed less atypical for an
ooii object.
the position of the cluster in a metallicity versus hb
type diagram is displayed in figure 6.
here hb type
l ≡(b −r)/(b+v +r), where b, r, v are the numbers of
blue, red, and variable (rr lyrae-type) hb stars, respec-
tively; this quantity was derived for ngc 5286 in paper i. as
discussed by catelan (2009), oosterhoff-intermediate gcs,
such as found in several gcs associated with the dwarf satel-
lite galaxies of the milky way, tend to cluster inside the
fig. 6.- distribution of galactic gcs in the metallicity-hb type l plane.
the region marked as a triangle represents the "oosterhoff gap," a seemingly
forbidden region for bona-fide galactic gcs. ooi clusters tend to sit to the
left of the oosterhoff gap, whereas ooii clusters are mostly found to its right.
the overplotted lines are isochrones from catelan & de freitas pacheco
(1993). adapted from catelan (2009).
fig. 7.- position of rr lyrae stars on the bailey (period-amplitude) dia-
gram for v. filled circles show rrab's with dm < 3, open circles those with
dm > 3, and crosses show the rrc's. solid lines are the typical lines for ooi
clusters and dashed lines for ooii clusters, according to cacciari et al. 2005.
as in figure 3, different symbol sizes are related to the radial distance from
the cluster center.
triangular-shaped region marked in this diagram – whereas
galactic gcs somehow are not found in this same region, thus
giving rise to the oosterhoff dichotomy in the galaxy. in this
same plane, ooi clusters tend to fall to the left (i.e., redder hb
types) of the triangular-shaped region, whereas ooii objects
are more commonly found to its right. ngc 5286 falls rather
close to the oosterhoff-intermediate region in this plane, but
its position is indeed still consistent with ooii status (see also
fig. 8 in catelan 2009).
figures 7 and 8 show the positions of the rr lyrae stars
on the bailey (period-amplitude) diagram for v and b mag-
nitudes, respectively. circles indicate the rrab's, whereas
crosses are used for the rrc's. as in figure 3, we use dif-
ferent symbol sizes for the variables in different radial annuli
fig. 8.- same as in figure 7, but for b.
from the cluster center. also shown in this figure are typical
lines for ooi and ooii clusters, which read as follows (see
cacciari et al. 2005):
a_b^ab = −3.123 − 26.331 log p − 35.853 (log p)^2,   (10)
a_v^ab = −2.627 − 22.046 log p − 30.876 (log p)^2,   (11)

for ab-type rr lyrae stars in ooi clusters. for rrab's in ooii clusters, in turn, the same lines can be used, but shifted in period by ∆log p = +0.06 (see footnote 7). in the case of c-type stars, we derive reference lines on the basis of figures 2 and 4 of cacciari et al. (2005); these read as follows:

a_b^c = −0.522 − 2.290 log p,   (12)
a_v^c = −0.395 − 1.764 log p,   (13)

for c-type rr lyrae stars in ooi clusters, and

a_b^c = −0.039 − 1.619 log p,   (14)
a_v^c = −0.244 − 1.834 log p,   (15)
for c-type rr lyrae stars in ooii clusters (again based on
presumably "evolved" rr lyrae stars in m3).
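for reference, the ooi/ooii lines of equations (10)-(15), together with the ∆log p = +0.06 shift used for ooii rrab stars, can be evaluated with a short helper like the one below (our own illustration; function and variable names are not from the paper):

```python
import numpy as np

def a_ab(logp, band="v", oo_type="I"):
    """rrab reference amplitude from eqs. (10)-(11); the ooii curve is the ooi
    one shifted to longer periods by delta(log p) = +0.06."""
    if oo_type == "II":
        logp = logp - 0.06
    if band == "b":
        return -3.123 - 26.331 * logp - 35.853 * logp**2
    return -2.627 - 22.046 * logp - 30.876 * logp**2

def a_c(logp, band="v", oo_type="I"):
    """rrc reference amplitude from eqs. (12)-(15)."""
    coeffs = {("b", "I"): (-0.522, -2.290), ("v", "I"): (-0.395, -1.764),
              ("b", "II"): (-0.039, -1.619), ("v", "II"): (-0.244, -1.834)}
    zero_point, slope = coeffs[(band, oo_type)]
    return zero_point + slope * logp

logp = np.log10(0.656)   # mean rrab period of ngc 5286
print(a_ab(logp, "v", "I"), a_ab(logp, "v", "II"), a_c(np.log10(0.333), "v", "II"))
```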
this kind of diagram can be used as a diagnostic tool to
investigate the oosterhoff classification of rrab stars. how-
ever, the position of a star in this diagram can be strongly af-
fected by the presence of the blazhko effect. in order to min-
imize this problem, we make a distinction between variables
with a jurcsik-kovács compatibility parameter value dm < 3
(filled circles) and dm > 3 (open circles). even considering
only the rr lyrae stars with small dm and hence presum-
ably "regular" light curves (according to the jurcsik-kovács
criterion), we see that there is still a wide scatter among the
rrab's in the bailey diagrams, with no clear-cut tendency for
(footnote 7: cacciari et al. (2005) actually derived their ooii curves based on what appeared to be highly evolved stars in m3, and then verified that the same curves do provide a good description of rr lyrae stars in several bona-fide ooii gcs.)
stars to cluster tightly around either oosterhoff reference line,
particularly in the b case. it is possible that at least some of
the dispersion is caused by unidentified blends in the heavily
crowded inner regions of the cluster, where most of the vari-
ables studied in this paper can be found (see fig. 1). in fact,
the variable stars in the innermost cluster regions (small cir-
cles) show more scatter. if we only look at the variables that
lie outside the core radius (the large and medium-sized cir-
cles, respectively, with r > 0.29′), we can see that they cluster
much more tightly around the oosterhoff ii line. it should
be noted, in any case, that very recently corwin et al. (2008)
have shown that the ab-type rr lyrae stars in the prototypical
ooii cluster m15 (ngc 7078) similarly do not cluster around
the ooii reference line derived on the basis of more metal-
rich clusters, thus casting some doubt on the validity of these
lines as indicators of oosterhoff status, at least at the more
metal-poor end of the rr lyrae metallicity distribution.
as far as the positions of the c-type rr lyrae stars in the
bailey diagrams are concerned, we find that there is also a
wide scatter, without any clearly defined tendency for the data
to clump tightly around either of the oosterhoff reference
lines – although the distribution does seem skewed towards
shorter amplitudes at a given period, compared to the typical
situation in ooii clusters.
finally, we can also check how the average fourier-based
physical parameters derived for the ngc 5286 variables rank
the cluster in terms of oosterhoff status. this exercise is en-
abled by a comparison with the data for several clusters of
different oosterhoff types, as compiled in tables 6 and 7 of
corwin et al. (2003). for the rrc's, we find that the mean
masses, luminosities and temperatures are in fact more sim-
ilar to those found for m3 (a prototypical ooi cluster) than
they are for ooii globulars. part of the problem may be due
to the fact that ngc 5286 is significantly more metal-rich
than all ooii globulars used in the analysis; recall that φ31
is the only fourier parameter used in the simon & clement
(1993) calibration of masses, luminosities, and temperatures,
and that the impact of metallicity on the simon & clement re-
lations has still not been comprehensively addressed (see §5
in clement, jankulak, & simon 1992, and also §4 in catelan
2009 for general caveats regarding the validity of those rela-
tions). as a matter of fact, the recent study by morgan et al.
(2007) clearly shows that, in the case of rrc variables, φ31
depends strongly on the metallicity.
for the rrab's, in turn, both the derived temperatures and
absolute magnitudes are fully consistent with an ooii classi-
fication for the cluster.
7. the type ii cepheid
we find one type ii cepheid (v8) with a period of 2.33 days
and a visual amplitude of av = 1.15 mag, typical for a bl
herculis star. we use equation (3) in pritzl et al. (2003) to
obtain mv = −0.55 ± 0.07 mag for v8. using the intensity-
weighted mean magnitude for v8 from table 1, we obtain for
the cluster a distance modulus (m−m)v = 15.81 ±0.07 mag,
slightly shorter than the values discussed in §5.2. however, as
we can see from figure 9 in pritzl et al., a large dispersion in
mv is indeed present for short-period type ii cepheids, thus
possibly explaining the small discrepancy.
8. summary
in this paper, we present the results of time-series photome-
try for ngc 5286, a gc which has been tentatively associated
with the canis major dwarf spheroidal galaxy. 38 new vari-
ables were discovered in the cluster, and 19 previously known
ones were recovered in our study (including one bl her star
that was previously catalogued as an rr lyrae). the popula-
tion of variable stars consists of 52 rr lyrae (22 rrc and 30
rrab), 4 lpv's, and 1 type ii cepheid.
from fourier decomposition of the rrab light curves, we
obtained a value for the metallicity of the cluster of [fe/h] =
−1.68 ± 0.21 dex in the zinn & west (1984) scale.
we
also derive a distance modulus of (m −m)v = 16.04 mag for
ngc 5286, based on the recent rr lyrae mv −[fe/h] cali-
bration of catelan & cortés (2008).
using a variety of indicators, we discuss in detail the oost-
erhoff type of the cluster, concluding in favor of an ooii clas-
sification. the cluster's fairly high metallicity places it among
the most metal-rich ooii clusters known, which may help ac-
count for what appears to be a fairly unusual behavior for a
cluster of this type, including relatively short values of pab,min
and ⟨pc⟩, and unusual physical parameters, as derived for its
c-type rr lyrae stars on the basis of fourier decomposition
of their light curves.
in regard to the cluster's suggested association to the ca-
nis major dwarf spheroidal galaxy, we note that the metallic-
ity and distance modulus derived in this work are very sim-
ilar to the values previously accepted for the cluster (harris
1996), and thus the conclusions reached by previous authors
(crane et al. 2003; forbes et al. 2004) regarding its possible
association with this dwarf galaxy are not significantly af-
fected by our new metallicity and distance estimates. in ad-
dition, the position of the cluster in the hb morphology-
metallicity plane is fairly similar to that found in several
nearby extragalactic systems.
as far as ngc 5286's rr lyrae pulsation properties are concerned, the present study shows them to be somewhat unusual compared with bona-fide galactic globular clusters, but they still do not classify the cluster as an oosterhoff-intermediate system, as frequently found
among the galaxy's dwarf satellites (e.g., catelan 2009, and
references therein). it is interesting to note, in any case, that
the canis major field, unlike what is found among other dwarf
galaxies, appears to be basically devoid of rr lyrae stars
(kinman et al. 2004; mateu et al. 2009), and also to be chiefly
comprised of fairly high-metallicity ([m/h] ≥−0.7), young
(t ≲10 gyr) stars (e.g., bellazzini et al. 2004). the present
paper, along with paper i, show instead that ngc 5286 is an
rr lyrae-rich, metal-poor globular cluster that is at least as
old as the oldest globular clusters in the galactic halo. it is not
immediately clear that such an object as ngc 5286 would be
easily formed within a galaxy with the properties observed
for the canis major main body – and this should be taken into
account when investigating the physical origin and formation
mechanism for the canis major overdensity and its associated
tidal ring.
we warmly thank i. c. leão for his help producing the
finding chart, and an anonymous referee for several com-
ments that helped improve the presentation of our results.
mz and mc acknowledge financial support by proyecto
fondecyt regular #1071002. support for mc is also pro-
vided by proyecto basal pfb-06/2007, by fondap cen-
tro de astrofísica 15010003, and by a john simon guggen-
heim memorial foundation fellowship. has is supported by
csce and nsf grant ast 0607249.
references
alard, c. 2000, a&as, 144, 363
bellazzini, m., ibata, r., monaco, l., martin, n., irwin, m. j., & lewis, g.
f. 2004, mnras, 354, 1263
bono, g., caputo, f., & stellingwerf, r. f. 1995, apjs, 99, 263
cacciari, c., corwin, t. m., & carney, b. w. 2005, aj, 129, 267
caloi, v., castellani, v., & piccolo, f. 1987, a&as, 67, 181
catelan, m. 2004a, apj, 600, 409
catelan, m. 2004b, in variable stars in the local group, asp conf. ser.,
310, ed. d. w. kurtz & k. r. pollard (san francisco: asp), 113
catelan, m. 2007, revmexaa conf. ser., 26, 93
catelan, m. 2009, ap&ss, 320, 261
catelan, m., & cortés, c. 2008, apj, 676, l135
catelan, m., & de freitas pacheco, j. a. 1993, aj, 106, 1858
clement, c. m., jankulak, m., & simon, n. r. 1992, apj, 395, 192
clement, c. m., muzzin, a., dufton, q., ponnampalam, t., wang, j.,
burford, j., richardson, a., rosebery, t., rowe, j., & hogg, h. s. 2001,
aj, 122, 2587
contreras, r., catelan, m., smith, h. a., pritzl, b. j., & borissova, j. 2005,
apj, 623, l117
corwin, t. m., catelan, m., smith, h. a., borissova, j., ferraro, f. r., &
raburn, w. s. 2003, aj, 125, 2543
corwin, t. m., borissova, j., stetson, p. b., catelan, m., smith, h. a.,
kurtev, r., & stephens, a. w. 2008, aj, 135, 1459
crane, j. d., majewski, s. r., rocha-pinto, h. j., frinchaboy, p. m.,
skrutskie, m. f., & law, d. r. 2003, apj, 594, 119
d'antona, f., & caloi, v. 2008, mnras, 390, 693
demarque, p., zinn, r., lee, y.-w., & yi, s. 2000, aj, 119, 1398
forbes, d. a., strader, j., & brodie, j. p. 2004, aj, 127, 3394
gerashchenko, a. n., kadla, z. i., & malakhova, yu. n. 1997, ibvs, 4418,
1
harris, w. e. 1996, aj, 112, 1487
jurcsik, j. 1995, aca, 45, 653
jurcsik, j. 1998, a&a, 333, 571
jurcsik, j., & kovács, g. 1996, a&a, 312, 111
kinman, t. d., saha, a., & pier, j. r. 2004, apj, 605, l25
kovács, g. 1998, msait, 69, 49
kovács, g., & jurcsik, j. 1996, apj, 466, l17
kovács, g., & jurcsik, j. 1997, a&a, 322, 218
kovács, g., & kanbur, s. m. 1998, mnras, 295, 834
kovács, g., & walker, a. r. 1999, apj, 512, 271
kovács, g., & walker, a. r. 2001, a&a, 371, 579
lee, j.-w., & carney, b. w. 1999, aj, 118, 1373
liller, m. h., & lichten, s. m. 1978, aj, 83, 41
mateu, c., vivas, a. k., zinn, r., miller, l. r., & abad, c. 2009, aj, 137,
4412
morgan, s. m., wahl, j. n., & wieckhorst, r. m. 2007, mnras, 374, 1421
oosterhoff, p. th. 1939, observatory, 62, 104
oosterhoff, p. th. 1944, bull. astron. inst. neth., 10, 55
pritzl, b. j., smith, h. a., stetson, p. b., catelan, m., sweigart, a. v.,
layden, a. c., & rich, r. m. 2003, aj, 126, 1381
simon, n. r., & clement, c. m. 1993, apj, 410, 526
stellingwerf, r. f. 1978, apj, 224, 953
stetson, p. b. 1987, pasp, 99, 191
stetson, p. b. 1994, pasp, 106, 250
zinn, r., & west, m. j. 1984, apjs, 55, 45
zorotovic, m., catelan, m., zoccali, m., pritzl, b. j., smith, h. a., stephens,
a. w., contreras, r., & escobar, m. e. 2009, aj, 137, 257 (paper i)
appendix
light curves
|
0911.1687 | supermagnetosonic jets behind a collisionless quasi-parallel shock | the downstream region of a collisionless quasi-parallel shock is structured
containing bulk flows with high kinetic energy density from a previously
unidentified source. we present cluster multi-spacecraft measurements of this
type of supermagnetosonic jet as well as of a weak secondary shock front within
the sheath, that allow us to propose the following generation mechanism for the
jets: the local curvature variations inherent to quasi-parallel shocks can
create fast, deflected jets accompanied by density variations in the downstream
region. if the speed of the jet is super(magneto)sonic in the reference frame
of the obstacle, a second shock front forms in the sheath closer to the
obstacle. our results can be applied to collisionless quasi-parallel shocks in
many plasma environments.
| introduction.- when the angle between the nominal
shock normal and the upstream magnetic field is small,
the shock transition in a collisionless plasma is much
more complex than in the quasi-perpendicular case [1].
the nonthermal nature of the upstream side of a quasi-
parallel shock has been recognized for decades [2, 3, 4].
the downstream region, however, has only recently come
under active research, both in astrophysical (supernovae
[5]) and solar system (termination shock [6], earth's bow
shock [7, 8]) contexts.
the most detailed and extensive data of collisionless
shock waves are from the earth's bow shock.
in con-
trast to remote observations and laboratory measure-
ments, the near-earth space can be used to study in
situ supersonic plasma flow past a magnetic obstacle-
the flow of the solar wind around the magnetosphere of
the earth. the magnetospheric boundary (the magne-
topause) is usually located at a distance of 10 earth radii
(1 re = 6371 km) in the solar direction. the bow shock
is curved at magnetospheric scales while the structures in
the solar wind and interplanetary magnetic field are large
compared to the size of the magnetosphere. hence the
locations of parallel and perpendicular regions of the bow
shock vary depending on the direction of the interplane-
tary magnetic field. consequently, we can access a wide
range of plasma conditions via spacecraft observations.
recent measurements have shown that the flow in the
downstream region of a quasi-parallel shock is structured:
nemecek et al. [9] have reported observations of transient
ion flux enhancements in the earth's magnetosheath dur-
ing radial interplanetary magnetic field. in subsequent
studies, savin et al. [10] have found more than 140 events
of anomalously high energy density. however, the source
of these jets of high kinetic energy and ion flux has re-
mained unclear. in this letter, we present a set of multi-
spacecraft measurements from cluster [11] that allows us
to suggest a formation mechanism for such jets.
data.- we have analyzed cluster measurements from
the evening of march 17, 2007, when the four spacecraft
(c1-c4) were close to the nose of the magnetosphere.
the spacecraft constellation was quite flat in the nom-
inal plane of the magnetopause, since c3 and c4 were
close to each other (950 km, 0.15 re apart), while the
others were slightly more than 7000 km (>1 re) away.
we have used data from the magnetic field experiment
fgm from all four spacecraft, and from the ion experi-
ment cis-hia from c1 and c3 [11]. information about
the upstream conditions was provided by ace and wind
satellites situated near the lagrangian point l1, as well
as the geotail spacecraft, which at the time was in the
foreshock region near the subsolar point.
the free upstream solar wind flow was quite fast
(v
∼530 km/s) and steady (see the upper panels
of figure 1). the particle number density was around
2 cm−3 and hence the dynamic pressure (ρv^2, where ρ is the mass density) was low, close to 1 npa.
the
interplanetary magnetic field was approximately radial,
i.e., the sunward magnetic field component bx [22] was
dominant. moreover, the angle between the flow direc-
tion and the magnetic field was less than 20◦. conse-
quently, the bow shock was quasi-parallel at the subso-
lar point.
the upstream mach numbers [23] were all of order 10 or larger: ma ∼12, ms ∼16, and mms ∼10.
the location of the bow shock, as observed by geotail at
(x, y, z)gse = ∼(14, −7, −3) re [22] when the shock
moved over the spacecraft several times between 17:30
and 24:00 ut, matches well to the empirical model [12]
for the measured upstream parameters.
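the quoted mach numbers can be reproduced to order of magnitude with the definitions of note [23]; the sketch below is our own, and the field strength and temperature entered there are illustrative values consistent with figure 1 rather than numbers quoted in the text:

```python
import numpy as np

mu0, m_p, k_b = 4.0e-7 * np.pi, 1.673e-27, 1.381e-23   # si constants

n, v = 2.0e6, 530.0e3    # upstream density (m^-3) and speed (m/s), from the text
b, t = 3.0e-9, 8.0e4     # field (t) and temperature (k): assumed, figure-1-like values
gamma = 5.0 / 3.0

rho = n * m_p
p_dyn = rho * v**2                              # dynamic pressure, ~1 npa
v_a = b / np.sqrt(mu0 * rho)                    # alfven speed
v_s = np.sqrt(gamma * k_b * t / m_p)            # sound speed
v_ms = np.sqrt(v_a**2 + v_s**2)                 # magnetosonic speed

# expected output: ~0.9 npa and mach numbers of order 12, 16, and 10
print(p_dyn * 1e9, v / v_a, v / v_s, v / v_ms)
```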
fig. 1: upper panels: upstream solar wind data from the
ace satellite (time-shifted by 44 minutes to account for the
solar wind propagation to the magnetopause). first panel:
magnitude of the interplanetary magnetic field and angle θ
between the field direction vector and the xgse-axis [22].
second panel: plasma number density and dynamic pressure.
ace was located at (x, y, z)gse = (237, 36.4, −18.6) re.
the gray shading marks the period of interest. lower panels:
magnetic field from all four cluster spacecraft in gse coor-
dinates. the quartet was situated around (10.7, 1.5, 3) re.
the color panels mark different plasma regions. white back-
ground between color panels represents transition between
two regions.
the cluster quartet, moving on an outbound or-
bit near the subsolar point, encountered the magne-
topause the first time shortly after 17:00 ut and passed
into the solar wind at 22:30 ut. between 17:00 and
20:00 ut cluster observed multiple magnetopause cross-
ings.
moreover, during this 3-hour period cluster ob-
served several high speed jets (v ∼500 km/s) in the mag-
netosheath behind the quasi-parallel bow shock. here we
concentrate on the jet between 18:13 and 18:17 ut.
as displayed in the lower panels of figure 1, all four
spacecraft were inside the magnetosphere at the begin-
ning of the interval. first the magnetopause moved in-
wards passing over the cluster quartet at 250 km/s (ob-
tained using 4-spacecraft timing). then the spacecraft
observed a weak shock within the magnetosheath mov-
ing in the same direction at 140 km/s. in the first panel
of figure 2, the c1 measurements show that at this mo-
ment the component of the plasma velocity parallel to
the shock normal in the reference frame moving with the
shock vn exceeds the magnetosonic speed vms (as well
as the other characteristic speeds [23]). hence the mag-
netosonic mach number mms > 1. (this is also the case
with respect to the magnetopause.) the same was ob-
served by c3 located about 8000 km away (not shown).
after the shock cluster entered a cold, supermagne-
tosonic jet with a plasma speed close to 500 km/s (see
figure 2, first panel). at the location of c2, the shock
and the magnetopause moved back across the spacecraft
and it re-entered the magnetosphere for several seconds
at 18:15:20 ut. the other spacecraft stayed in the su-
personic jet for over a minute moving gradually back into
normal sheath-type plasma. this transition can be seen
in the ion velocity distributions (not shown): the narrow
(∼1 mk) distribution of the jet was slowly replaced by
a warmer, symmetric quasi-maxwellian after 18:16 ut.
while in the jet, cluster observed a gradual increase in
both plasma density and magnetic field: from low values
of 7 cm−3 and 8 nt at the beginning of the jet to very
high values of 22 cm−3 and 30 nt at the end (see fig-
ure 2, second panel for c1 ion density). consequently,
the dynamic pressure in the jet increased to over 6 npa,
as compared to the nominal pressure of 1 npa. these en-
hancements were accompanied by a substantial deflection
of the bulk flow from its nominal direction, as illustrated
by the third panel of figure 2.
interpretation.- we propose the following mechanism
to explain the formation of the jet: first, consider an
oblique shock with radial upstream conditions (v1 ∥b1)
as illustrated in the inset of figure 3.
the rankine-
hugoniot jump conditions for high ma give v1n = rv2n
and v1t ∼v2t, where r is the shock compression ratio.
we then consider the streamlines of plasma flow across
a curved high ma shock as illustrated in figure 3. we
infer, based on the analysis of the observations as will be
discussed below, that the scale of the shock ripple under
consideration is of the order of the spacecraft separation:
50 −100 ion inertial lengths, 7000−15000 km, 1 −3 re.
as the shock primarily decelerates the component of the
upstream velocity v1 parallel to the shock normal n, the
shock crossing leads to efficient compression and deceler-
ation in regions where the angle α between v1 and n is
small. wherever α is large, however, the shock mainly
deflects the flow while the plasma speed stays close to the
upstream value. the plasma is still compressed so that
the higher density together with the high speed leads to
a jet of very high dynamic pressure. furthermore, if the
speed v2 of this jet on the downstream side is still su-
per(magneto)sonic in the reference frame of the obstacle,
a second shock front forms closer to the obstacle. in addi-
tion, depending on the ripple geometry, the flow behind
the shock can converge causing local density enhance-
ments, or diverge causing density depletions.
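the deflection estimate implicit in this argument is easy to quantify; the following sketch (ours) applies the quoted high-ma jump conditions v1n = r v2n, v1t ≈ v2t to an upstream flow of 530 km/s crossing a ripple where the local normal makes a 65 degree angle with v1:

```python
import numpy as np

v1, r = 530.0, 4.0                      # upstream speed (km/s), compression ratio
alpha = np.deg2rad(65.0)                # angle between v1 and the local normal n

v1n, v1t = v1 * np.cos(alpha), v1 * np.sin(alpha)
v2n, v2t = v1n / r, v1t                 # v1n = r*v2n and v1t ~ v2t (high-m_a limit)
v2 = np.hypot(v2n, v2t)                 # downstream speed stays close to v1
turn = np.degrees(np.arctan2(v2t, v2n)) - 65.0   # extra deflection of the flow

print(v2, turn)                         # ~480 km/s, turned by ~18 degrees
```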
let us compare figure 3 with the c1 measurements
presented in figure 2 where, in the third panel, the bulk
fig. 2: first panel: component of plasma velocity paral-
lel to the local shock normal (−0.59, 0.52, −0.61)gse of the
secondary shock (calculated with minimum variance analy-
sis), in the reference frame moving with the secondary shock
vn (black solid curve) and total plasma speed v (light blue
solid curve). dashed curves show the characteristic speeds: alfvén speed va (blue), sound speed vs (green), and magnetosonic speed vms (red) [23]. second panel: plasma number
density. third panel: bulk velocity projection to (−zgse,
xgse) plane.
fourth panel: the angle α calculated from
the observed velocity deflection using the jump conditions for
high ma and r = 4. the calculation is not expected to be
valid at the edges of the jet where the shock is weak, and
hence α is shown for the center only. all data are from c1.
the color coding for different plasma regions is the same as
in figure 1.
fig. 3: illustration of the effect of local shock curvature. the
variation of the plasma number density in the downstream re-
gion is illustrated by the shading: dark blue indicates density
enhancement, light blue indicates density depletion. the tra-
jectory of c1 in the reference frame moving with the ripple
is sketched with the dashed line. the inset details the flow
deflection when v1 is not parallel to n.
flow direction is displayed in the (−zgse, xgse) plane.
the observed pattern of the supermagnetosonic flow af-
ter the secondary shock suggests that there is a ripple
in the bow shock similar to the one of the illustration
moving in the ∼zgse direction. this interpretation is
supported by the observed density and flow speed pro-
files. the fourth panel of figure 2 shows the upstream
angle α for the supermagnetosonic jet calculated from the
observations (considering both the downstream and up-
stream data and taking r = 4). during the main velocity
deflection, α ∼−65◦. the flow pattern in the ygse (not
shown) reveals more of the three-dimensional structure
of the ripple and will be considered elsewhere. the ob-
servations of c3 are similar, though not identical to c1.
given this and the fact that c2 was outside of the jet, we
infer that the lower limit for the scale of the bow shock
perturbation is of the order of the spacecraft separation
(∼8000 km, 1.2 re).
the ripples we propose to be the source of the high
speed jets stem from the unstable nature of collisionless
quasi-parallel shocks: reflected ions can stream against
the upstream flow and interact with the incident plasma
over long distances before returning (if at all) to the
shock. this interaction triggers instabilities and creates
waves that steepen into large structures convecting back
to the shock front (see [1], and the references therein).
the effect is most pronounced when b1 and v1 are
aligned in the coordinate system of the obstacle.
both observations and simulations have shown that
ripples are inherent to quasi-parallel shocks: observa-
tions of the ion reflection on the upstream side of the
earth's bow shock [4] indicated that the direction of n
varies when the shock is quasi-parallel at the subsolar
point.
such studies on the ion distributions have also
shown that, at times, the solar wind does indeed pass
through the shock layer without significant heating [13].
however, no connection between these two findings was
made. furthermore, recent multi-spacecraft observations
have characterised in detail the short, large amplitude
magnetic structures (slams) [14, 15] convecting in the
upstream of earth's quasi-parallel bow shock towards the
shock front. slams have a scale size up to 1 re compa-
rable to the ripples discussed here. in addition, measure-
ments showed signatures that the shock transition itself
is narrow consisting of only one to a few slams. the
roughness of the parallel part of the shock front due to
slams is clearly seen in the bow shock simulations of,
e.g., blanco-cano et al. [16].
discussion and conclusions.- in previous studies
savin et al. [10] have found several jets with high kinetic energy density (ρv^2/2), of which 33 jets had an energy density larger than 10 kev/cm^3 (compared with 19 kev/cm^3 in this event). nemecek et al. [9] have also reported what they call transient ion flux enhancements, with fluxes of 6 × 10^8 /cm^2 s (∼8 × 10^8 /cm^2 s in this event), during intervals of radial interplanetary mag-
netic field. both transient flux enhancements and high
kinetic energy jets have properties similar to the jets re-
ported here.
neither nemecek et al.
nor savin et al.
could identify a clear source for the jets, but they could
rule out, e.g., reconnection. here, we propose a gener-
ation mechanism for high speed jets in the sheath that
is in agreement with the measurements presented in this
letter and those reported in previous studies. naturally,
we cannot ascertain that all of the previously reported
jets stem from the same shock geometry related origin.
it has become evident that shocks are more structured
than was previously recognized, so that a conventional
plane wave description is not sufficient. in fact, the mech-
anism proposed in this letter for spatial structuring of
the downstream is valid for all rippled shocks regard-
less of magnetic field obliquity, provided that the mach
number is high. as voyager 1 and 2 crossed the helio-
spheric termination shock [6], their observations revealed
a rippled, supercritical (mms ∼10) quasi-perpendicular
shock [17]. likewise, interplanetary shocks seem to be
nonplanar [18] and also oblique ones may be rippling [19].
therefore we expect that the effects of the ripples, includ-
ing supersonic jets, can be observed behind collisionless
shocks in many plasma environments, and especially be-
hind extended, varying shock fronts having quasi-parallel
regions.
in astrophysical context, the high speed jets and
nonthermal structure can act as seeds for magnetic
field amplification and particle acceleration [5], even
for smooth upstream plasma.
in magnetospheric con-
text, the jets with their high dynamic pressure pro-
vide a previously unidentified source for magnetopause
waves during steady solar wind conditions.
a locally
perturbed magnetopause is consistent with the cluster
measurements of c2 being within the magnetosphere
while the other spacecraft were in the jet. in turn, the
large magnetopause perturbation can affect the coupled
magnetosphere-ionosphere dynamics [20]. note also that
this letter presents observations of a weak shock within
the magnetosheath during steady upstream conditions.
previous studies of discontinuities within the sheath have
been related to bow shock interaction with interplanetary
shocks (see [21], and the references therein).
in summary, we propose a generation mechanism for
high speed jets in the downstream side of a quasi-
parallel shock based on a set of multi-point measure-
ments.
quasi-parallel shocks are known to be rippled
even during steady upstream conditions. the local cur-
vature changes of the quasi-parallel shock can create fast
bulk flows: in the regions where the upstream velocity is
quasi-perpendicular to the local shock normal, the shock
mainly deflects plasma flow while the speed stays close
to the upstream value. together with the compression of
the plasma, these localized streams can lead to jets with
a kinetic energy density that is several times higher than
the kinetic energy density in the upstream.
we thank n. ness and d. j. maccomas for ace data,
r. lepping and r. lin for wind data, and t. mukai and
t. nagai for geotail data. we also thank cdaweb at
nssdc and the cluster active archive for data access.
h. h. thanks m. andré and e. k. j. kilpua for useful
comments. the work of h. h. is supported by the vaisala
foundation and the m. ehrnrooth foundation. the work
of t. l., k. a., r. v., and m. p. is supported by the
academy of finland. m. p. is also supported by eu-fp7
erc starting grant 200141-quespace.
∗[email protected]
[1] d. burgess et al., space sc. rev. 118, 205 (2005).
[2] j. r. asbridge, s. j. bame, and i. b. strong, j. geophys.
res. 73, 5777 (1968).
[3] m. m. hoppe and c. t. russell, nature 295, 41 (1982).
[4] t. g. onsager et al., j. geophys. res. 95, 2261 (1990).
[5] j. giacalone and j. r. jokipii, astrophys. j. 663, l41
(2007).
[6] j. r. jokipii, nature 454, 38 (2008).
[7] a. retinò et al., nature physics 3, 236 (2007).
[8] e. yordanova et al., phys. rev. lett. 100, 205003 (2008).
[9] z. nemecek et al., geophys. res. lett. 25, 1273 (1998).
[10] s. savin et al., jetp lett. 87, 593 (2008).
[11] c. p. escoubet, r. schmidt, and m. l. goldstein, space
sci. rev. 79, 11 (1997).
[12] j. merka et al., j. geophys. res. 110, a04202 (2005).
[13] j. t. gosling et al., j. geophys. res. 94, 10027 (1989).
[14] e. a. lucek et al., ann. geophys. 20, 1699 (2002).
[15] e. a. lucek et al., j. geophys. res. 113, a07s02 (2008).
[16] x. blanco-cano, n. omidi, and c. t. russell, j. geo-
phys. res. 114, a01216 (2009).
[17] l. f. burlaga et al., nature 454, 75 (2008).
[18] m. neugebauer and j. giacalone, j. geophys. res. 110,
a12106 (2005).
[19] d. krauss-varban, y. li, and j. g. luhmann, in par-
ticle acceleration and transport in the heliosphere and
beyond, 7th annual astrophysics conference, edited by
g. li, q. hu, o. verkhoglyadova, g. p. zank, r. p. lin,
and j. g. luhmann (aip, 2008), pp. 307–313.
[20] d. g. sibeck et al., j. geophys. res. 94, 2505 (1989).
[21] l. prech, z. nemecek, and j. s. safrankova, geophys.
res. lett. 35, l17s02 (2008).
[22] geocentric solar ecliptic system (gse) coordinates: x-
axis points from the earth towards the sun, y -axis op-
posite to the planetary motion, and z-axis towards the
ecliptic north pole.
[23] characteristic speeds in a plasma are the alfvén speed va = (b^2/μ0 ρm)^(1/2), the sound speed vs = (γ kb t/m)^(1/2), and the magnetosonic speed vms = (va^2 + vs^2)^(1/2). thus the relevant mach numbers are the alfvén mach number ma = vn/va, the sonic mach number ms = vn/vs, and the magnetosonic mach number mms = vn/vms. here vn is the component of velocity parallel to the shock normal in the reference frame moving with the shock front.
|
0911.1688 | two-cooper-pair problem and the pauli exclusion principle | while the one-cooper pair problem is now a textbook exercise, the energy of
two pairs of electrons with opposite spins and zero total momentum has not been
derived yet, the exact handling of pauli blocking between bound pairs being not
that easy for n=2 already. the two-cooper pair problem however is quite
enlightening to understand the very peculiar role played by the pauli exclusion
principle in superconductivity. pauli blocking is known to drive the change
from 1 to $n$ pairs, but no precise description of this continuous change has
been given so far. using richardson procedure, we here show that pauli blocking
increases the free part of the two-pair ground state energy, but decreases the
binding part when compared to two isolated pairs - the excitation gap to break
a pair however increasing from one to two pairs. when extrapolated to the dense
bcs regime, the decrease of the pair binding while the gap increases strongly
indicates that, at odds with common belief, the average pair binding energy
cannot be of the order of the gap.
| introduction
the first step towards understanding the microscopic
grounds of superconductivity was made by fröhlich [1], who
has realized that electrons in metals can form bound pairs
due to their weak interaction with the ion lattice, which
results in an effective electron-electron attraction. a few
years later, cooper has considered [2] a simplified quantum
mechanical problem of two electrons with opposite spins
and zero total momentum added to a "frozen" fermi sea,
i.e., a sea of noninteracting electrons. within the cooper
model, an attractive interaction between these two elec-
trons is introduced, this interaction being localized in a
finite-width layer above the "frozen" fermi sea. cooper
has shown that such an attraction, no matter how weak,
leads to the appearance of a bound state for the two ad-
ditional electrons. this result was demonstrated for a
single pair although it was fully clear that conventional
superconductivity takes place in a macroscopic system of
electrons paired by such an attraction.
one year later, bardeen, cooper and schrieffer [3] (bcs)
have proposed an approximate solution of the quantum
many-body problem for electrons with opposite spins at-
tracting each other. a very important result of the bcs
theory is the existence of a gap in the excitation spectrum
above the ground state. in the bcs model, the potential
layer, in which an attraction between electrons with op-
posite spins acts, extends symmetrically on both sides of
the fermi level. this implies that indeed a macroscopic
number of electrons interact with each other. in order
to avoid the difficult problem associated with the pauli
exclusion principle between a given number of same spin
electrons, the grand canonical ensemble was used. the
original formulation of bcs theory is also based on a
variational ansatz for the ground state wave function:
the wave function is taken with all the electrons feeling
the attraction, paired, i.e., "condensed" into the same
quantum-mechanical state.
it was, however, emphasized by schrieffer that electron
pairs are not elementary bosons because they are con-
structed from two elementary fermions4, so that their
creation and destruction operators do not obey simple
bosonic commutation relations. schrieffer also claimed
that the large overlap which exists between pairs in the
dense bcs configuration cuts any link with the two-body
cooper model, the isolated pair picture thus having little
meaning in the dense regime4. in spite of this claim, it is
rather obvious that the many-electron bcs configuration
can be reached from the one-cooper pair limit by sim-
ply adding more and more electron pairs into the layer
where the attraction acts, until the layer becomes half-
filled. a canonical procedure of this kind would allow
one to see the evolution of correlated electron pairs from
the dilute to the dense regime and to understand deeper
the role of the pauli exclusion principle in fermion pair-
ing. notice that such an approach can also be considered
as a useful and well-defined toy model for the crossover
between local and extended pairs of attracting fermions,
which in the present time attracts large attention within
the field of ultracold gases5–7.
the crossover problem
is still open even for the simplest case of the "reduced"
bcs potential for fermion-fermion interaction: a variational solution has only been proposed, a long time ago, by eagles [8] and also by leggett [9].
it also uses a bcs-like
ansatz for the ground state wave function.
a possible way to tackle the problem in the canoni-
cal ensemble, i.e., for a fixed number of electron pairs, is
to use the procedure developed by richardson [10, 11]. it allows us to write the form of the exact n-pair eigenstate of
the schrödinger equation in the case of the so-called "reduced" bcs potential, which is the simplest formulation of the electron-electron interaction mediated by the ion motion. the eigenstate, as well as the energy of n pairs, read in terms of n parameters r1, ..., rn, which are solutions of n nonlinear algebraic equations. although richardson's approach greatly simplifies the problem by avoiding the resolution of an n-body schrödinger equation,
the solution of these equations for r1, ..., rn in a com-
pact form for general n remains an open problem. one
of the difficulties is due to the fact that n is not a pa-
rameter in these equations but only enters through the
number of equations. this is rather unusual and makes
the n-dependence of the system energy quite difficult to extract. nowadays, richardson's equations are tackled
numerically for small-size superconducting granules con-
taining countable numbers of electron pairs12. we wish
to add that the canonical approach has also been used
in the form of a variational fixed-n projected bcs-like
theories, see e.g. ref13.
the goal of the present paper is to extend cooper's original work for one electron pair to two pairs: we analytically solve the two richardson's equations in the large sample limit. our work can be considered as an initial step towards establishing the precise link which exists between dilute and dense regimes of pairs,
since it indicates a general trend for the evolution of the
ground state energy with the increase of pair number,
i.e., overlap between pairs. richardson's equations are
here solved by three methods. they of course give iden-
tical results but shine different light on these equations.
the approaches to tackle richardson's equations, pro-
posed in this paper, in fact constitute perspectives for
the extension to a larger number of pairs and hopefully
to the thermodynamical limit.
the solution we obtain shows that the average pair
binding energy is smaller in the two-pair configuration
than for one pair.
this result can be physically un-
derstood by noting that electrons which are paired are
fermions; therefore, by increasing the number of pairs, we
decrease the number of states in the potential layer avail-
able to form these paired states. the energy decrease we
here find is actually quite general for composite bosons14.
however, extrapolation of this understanding to the
dense bcs configuration faces difficulty within the com-
mon understanding of bcs results.
indeed, it is gen-
erally believed [15, 16] that the pair binding energy in the dense bcs limit is of the order of the superconducting gap ∆. at the same time, this gap is found to be exponentially larger than the single pair binding energy obtained by cooper. according to the tendency we here
revealed, the average pair binding energy in the dense
regime should be smaller than that in the one-pair prob-
lem.
this discrepancy motivated us to focus on what is
called pair binding energy and more generally "cooper
pair" in the various understanding of the bcs theory.
usually, pairs are said to have a binding energy of the
order of ∆. however, such pairs are introduced not ab
initio, but to provide a physical understanding of the
bcs result for the ground state energy16,17. pairs with
energy of the order of the gap are called "virtual pairs"
by schrieffer4. they represent couples of electrons ex-
cited above the normal fermi level for noninteracting
electrons, as a result of the attraction between up and
down spin electrons.
since the fermi level is smeared
out on a scale of ∆ by the attraction, the number of
such pairs is much smaller than the total number of elec-
tron pairs feeling the attraction. the latter were named
"superfluid pairs" by schrieffer4. by construction, the
concept of "virtual pair" breaks a possible continuity be-
tween the dilute and dense regimes of pairs in a somewhat
artificial way. by contrast, staying within the framework
of "superfluid pairs" greatly facilitates the physical un-
derstanding of the role of pauli blocking in superconduc-
tivity as well as in the bec-bcs crossover problem. our
results in fact demonstrate the importance of a clear sep-
aration between the various concepts of "cooper pair"
found in the literature.
we wish to mention that the results presented in this
paper do not have straightforward experimental appli-
cations.
the main goal of this paper is to reveal the
general trend for the evolution of energy spectrum when
changing the number of pairs and to make a first step to-
wards a fully controllable resolution of the n-pair prob-
lem. however, even a two-pair configuration has a rela-
tion to real materials having correlated pairs of fermions,
because this configuration corresponds to a dilute regime
of pairs, realized in some systems.
conceptually, the
overlap between pairs can be tuned either by changing
fermion-fermion interaction or total number of pairs. we
here show that by increasing the overlap between pairs,
we block more and more states available for the construc-
tion of paired states. for the first time, a dilute regime of
pairs was addressed by eagles8 in the context of super-
conducting semiconductors having a low carrier concen-
tration. in particular, it was shown in this paper that the
excitation spectrum in the dilute regime is controlled by
the binding energy of an isolated pair (in agreement with
our results) rather than by a more cooperative gap which
appears, when pairs start to overlap. thus, this picture
is quite similar to the isolated-pair model considered by
cooper.
there is also a variety of unconventional superconductors which are characterized by a rather short coherence length, implying that pairs do not overlap as strongly as in conventional low-tc materials.
for instance, it was
argued in ref.18 that the bec-bcs crossover might be
relevant for high-tc cuprates.
some experiments seem
to support this idea, for example ref19 where experi-
mental data on the dependence of the superconducting
transition temperature on fermi temperature are col-
lected for various superconducting materials. this anal-
ysis indicates that conventional low-tc superconductors
stay apart from short-coherence length materials, includ-
ing heavy fermion superconductors. thus, it was argued
that to understand these unconventional materials, it is
appropriate to focus on the most basic aspect, i.e., on the
short coherence length, rather than to introduce more ex-
otic and less generic concepts20. it was also shown that
the very recently discovered fe-based pnictides, which
constitute a new class of high-tc superconductors, should
be understood as low-carrier density metals resembling
underdoped cuprates21, so that it is possible that the
bec-bcs crossover phenomenon is relevant for these
materials as well. quite recently, it was demonstrated
in ref.22 that size quantization in nanowires made of
conventional superconductors can result in a dramatic
reduction of the coherence length bringing superconduct-
ing state to the bec-bcs crossover regime. we finally
would like to mention that the two-correlated pair prob-
lem has received great attention within the ultracold gas
field, see e.g. ref.5. all these examples demonstrate that,
paradoxically, the cooper problem seems to be more rel-
evant to modern physics than several decades ago.
it
is also worth mentioning that the bcs hamiltonian, which
only includes interaction between the up and down spin
electrons with zero pair momentum, is oversimplified.
nevertheless, fermionic pairs in the bec-bcs transition
regime have not been described yet in a fully controlled
manner even within this hamiltonian. one of the possi-
ble strategies to tackle this crossover therefore is to find
a precise solution of the problem for the simplest hamil-
tonian and only after that, to turn to more elaborate
hamiltonians.
the paper is organized as follows. in section ii, we
briefly recall the one-cooper pair problem to settle no-
tations. in section iii, we present two solutions to the
two-pair ground state, as well as a discussion of the pos-
sible excited states. we conclude in section iv. in the
appendix, we give another exact solution to the two-pair richardson's equations, which shines a different light on the problem.
ii. the one-cooper pair problem
let us briefly recall the one-cooper pair problem. we
consider a fermi sea |f0⟩made of electrons with up and
down spins. an attractive potential between electrons
with opposite spins and opposite momenta acts above
the fermi level εf0. this potential is taken as constant
and separable to allow analytical calculations. in terms
of free pair creation operators β†k = a†k↑ a†−k↓, it reads as

v = −v ∑k′,k wk′ wk β†k′ βk   (1)

v is a positive constant and wk = 1 for εf0 < εk < εf0 + ω.
we add a pair of electrons with opposite spins to the "frozen" sea |f0⟩. when the pair has a nonzero momentum, it is trivial to see that a†p↑ a†−p′↓ |f0⟩ with p ≠ p′ is an eigenstate of h = h0 + v, where h0 = ∑k,s εk a†ks aks, its energy being (εp + εp′). if the pair has a zero total momentum, the h eigenstates are linear combinations of β†k |f0⟩. we look for them as

|ψ1⟩ = ∑k g(k) β†k |f0⟩   (2)
the schrödinger equation (h − e1) |ψ1⟩ = 0 imposes g(k) to be such that

[2εk − e1] g(k) − v wk ∑k′ wk′ g(k′) = 0   (3)
for 2εk ≠ e1, the eigenfunction g(k) depends on k as wk/(2εk − e1), so that |ψ1⟩ is only made of pairs within the potential layer, as physically expected. the eigenvalues such that 2εk ≠ e1 for all k within the potential layer then follow from eq. (3) as

1 = v ∑k wk/(2εk − e1) ≃ (v ρ0/2) ∫_{εf0}^{εf0+ω} 2dε/(2ε − e1)   (4)
ρ0 is the mean density of states in the potential layer. this leads, for a weak potential, i.e., a dimensionless parameter v = ρ0v small compared to 1, to

e1 ≃ 2εf0 − εc   (5)

εc ≃ 2ω e−2/v   (6)
as seen below, it will be physically enlightening to rewrite this one-cooper pair binding energy as

εc ≃ (ρ0ω)(2e−2/v/ρ0) = nω εv   (7)

nω = ρ0ω is the number of empty pair states in the potential layer ω from which the cooper pair bound state is constructed, these states being all empty in the one-cooper pair problem. εv = 2e−2/v/ρ0 appears as a binding energy unit induced by each of the empty pair states in the potential layer. εv only depends on the potential amplitude v and the density of states ρ0 in the potential layer.
eq.(7) already shows that the wider the potential layer
ω, the larger the number of empty states feeling the po-
tential from which the cooper pair is made and, ulti-
mately, the larger the binding energy εc. we can also
note that the pair binding energy depends linearly on the
number of states available to form a bound state. this
remark is actually crucial to grasp the key role played by
pauli blocking in superconductivity: indeed, this block-
ing makes the number of empty states available to form a
bound state decrease when the pair number increases - or
when one pair is broken as in the case of excited states.
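the weak-coupling result (5)-(7) is easy to check numerically; the sketch below (our own discretization, with arbitrary illustrative parameters) solves eq. (4) on a finite set of levels and compares the binding energy with εc ≃ 2ω e−2/v = nω εv:

```python
import numpy as np
from scipy.optimize import brentq

# discretized potential layer: n_omega pair levels 2*eps_k above 2*eps_f0
eps_f0, omega, n_omega, v = 0.0, 1.0, 2000, 0.5     # illustrative values
rho0 = n_omega / omega                              # density of pair states
pot = v / rho0                                      # potential amplitude v
two_eps = 2.0 * (eps_f0 + (np.arange(n_omega) + 0.5) * omega / n_omega)

# eq. (4): 1 = v * sum_k wk/(2 eps_k - e1), with the root below the lowest level
f = lambda e: pot * np.sum(1.0 / (two_eps - e)) - 1.0
e1 = brentq(f, two_eps[0] - 10.0 * omega, two_eps[0] - 1e-12)

eps_c_numeric = 2.0 * eps_f0 - e1                   # binding energy of the pair
eps_c_weak = 2.0 * omega * np.exp(-2.0 / v)         # eq. (6)
eps_v = 2.0 * np.exp(-2.0 / v) / rho0               # binding unit of eq. (7)
print(eps_c_numeric, eps_c_weak, n_omega * eps_v)   # agree at weak coupling
```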
iii. the two-cooper pair problem
we now add two pairs having opposite spin electrons and zero total momentum to the fermi sea |f0⟩ and we look for the eigenstates (h − e2) |ψ2⟩ = 0 as

|ψ2⟩ = ∑ g(k1, k2) β†k1 β†k2 |f0⟩   (8)
the bosonic character of fermion pairs, which leads to β†k1 β†k2 = β†k2 β†k1, allows us to enforce g(k1, k2) = g(k2, k1) without any loss of generality. the schrödinger equation fulfilled by g(k1, k2) is somewhat more complicated than for one pair. to get it, it is convenient to note that

v β†k1 β†k2 |f0⟩ = −v (1 − δk1k2) (wk1 β†k2 + wk2 β†k1) ∑p wp β†p |f0⟩   (9)

the factor (1 − δk1k2) being necessary for both sides of the above equation to cancel for k1 = k2. when used into (h − e2) |ψ2⟩ = 0 projected upon ⟨f0| βk1 βk2, we get
0 = (1 − δk1k2) { (2εk1 + 2εk2 − e2) g(k1, k2) − v [ wk1 ∑k≠k2 wk g(k, k2) + (k1 ↔ k2) ] }   (10)

the above equation makes g(k1, k1) undefined. this however is unimportant since the k1 = k2 contribution to |ψ2⟩ anyway cancels due to the pauli exclusion principle. for k1 ≠ k2, the equation fulfilled by g(k1, k2) follows from the cancellation of the above bracket. probably with a (k1, k2) decoupling in mind, richardson suggested to split e2 as

e2 = r1 + r2   (11)
with r1 ̸= r2, a requirement mathematically crucial as
seen below. we can then note that
(2εk1 + 2εk2 − e2) / [(2εk1 − r1)(2εk2 − r2)] = 1/(2εk1 − r1) + 1/(2εk2 − r2)   (12)

with (r1, r2) possibly exchanged. this probably led richardson to see that the symmetrical function constructed on the lhs of the above equation, namely

g(k1, k2) = 1/[(2εk1 − r1)(2εk2 − r2)] + (r1 ↔ r2)   (13)

is an exact solution of the schrödinger equation provided that r1 and r2 are such that

1 = v ∑k wk/(2εk − r1) + 2v/(r1 − r2) = (r1 ↔ r2)   (14)

as obtained by inserting eq. (13) into (h − e2) |ψ2⟩ = 0. note that the denominator in the above equation clearly shows why (r1, r2) are required to be different.
the
fundamental advantage of richardson's procedure is to
replace the resolution of a 2-body schr ̈
odinger equation
for g(k1, k2) by a problem far simpler, namely, the res-
olution of two nonlinear algebraic equations.
this procedure nicely extends to n pairs, the equa-
tions for r1, ..., rn reading as eq.(14), with all possible
r differences. however, to the best of our knowledge, the
analytical resolution of these equations for arbitrary n
has stayed an open problem, even when n = 2. we now
show how we can tackle this resolution analytically, first
through a perturbative approach, and then through two
exact procedures.
a.
perturbative approach
a simple way to tackle the richardson's equations an-
alytically is to note that eq.(4) allows to replace 1 in the
lhs of eq.(14) by the same sum with r1 replaced by
e1. if we now add and substract the two richardson's
equations, we get two equations in which the potential v
has formally disappeared, namely
x
wk
2εk −r1
+
wk
2εk −r2
= 2
x
wk
2εk −e1
(15)
x
wk
2εk −r1
−
wk
2εk −r2
= −
4
r1 −r2
(16)
v is in fact hidden into e1. this is a wise way to put the
singular v dependence of cooper pairs into the problem,
at minimum cost.
in view of eq.(15), we are led to expand the sums
appearing in richardson's equations as
x
k
wk
2εk −r1
=
x
k
wk
2εk −e1 + e1 −r1
=
∞
x
n=0
jn(r1 −e1)n
(17)
where j0 = 1/v while jn>0 is a positive constant given
by
jn =
x
k
wk
(2εk −e1)n+1 = ρ0
2
in
nεn
c
(18)
in ≃1 −e−2n/v
(19)
for v small. for this expansion to be valid, we must have
|ri −e1| < 2εk −e1 for all k. this condition is going to
be fulfilled for large samples, as possible to check in the
end.
it is convenient to look for ri
through ci
=
(ri −e1) /εc with i = (1, 2). eqs.(15, 16) then give
∞
x
n=1
in
n (cn
1 + cn
2 ) = 0
(20)
(c1 −c2)
∞
x
n=1
in
n (cn
1 −cn
2 ) = −2γc
(21)
the above formulation evidences that the richard-
son's equations contain a small dimensionless parameter,
namely
γc = 4/nc
(22)
where nc = ρ0εc. indeed, nc is just the pair number
from which pairs start to overlap. this makes nc large,
5
and consequently γc small compared to 1, in the large
sample limit.
for γc = 0 , the solution of the above equations reduces
to c1 = c2 = 0, i.e., e2 = 2e1. the fact that the two-pair
energy e2 differs from the energy of two single pairs 2e1
is physically due to pauli blocking, but mathematically
comes from a small but nonzero value of γc.
to solve eqs.(20, 21) in the small γc limit, it is conve-
nient to set c1 = s + d and c2 = s −d. this allows us
to rewrite eqs.(20, 21) as
−d2 =
(23)
γc/2
i1+ i3
2 (d2+3s2)+* * *
+s [i2+i4(d2+s2)+* * * ]
−s =
i2
2 (d2 + s2) + i4
4 (d4 + 6d2s2 + s4) + * * *
i1 + i3
3 (3d2 + s2) + * * *
(24)
their solution at lowest order in γc reads γc/2i1 ≃
−d2 ≃2si1/i2. when inserted into r1 + r2 = 2e1 +
(c1 + c2)εc, this gives the two-pair energy as
e2 ≃2e1 + γc
i2
2i2
1
εc
≃2e1 + 2
ρ0
1 + 2e−2/v
(25)
using the expression of e1 given in eqs.(5,6), we can
rewrite this energy as
e2 ≃2
2εf0 + 1
ρ0
−εc
1 −
1
nω
≃2
2εf0 + 1
ρ0
−εv (nω−1)
(26)
compared to the energy of two single pairs 2e1
=
2(2εf0 −εc), we see that pauli blocking has two quite
different effects. (i) it first increases the normal part of
this energy as reasonable since the fermi level for free
electrons increases. the first term in eq.(26) is nothing
but 2εf0 +2(εf0 + 1
ρ0 ): one pair has a kinetic energy 2εf0
while the second pair has a slightly larger kinetic energy
2(εf0 + 1
ρ0 ), the fermi level increase when one electron is
added, being 1/ρ0. (ii) another less obvious effect of the
pauli exclusion principle is to decrease the average pair
binding energy. indeed due to pauli blocking, (nω−1)
pair states only are available to form a bound state in
the two-pair configuration, while all the nωpair states
are available in the case of a single cooper pair.
b.
exact approach
the perturbative approach developed above, through
the (ri −e1) expansion of the sum appearing in the
richardson's equations, helped us to easily get the ef-
fect of pauli blocking on the ground state of two cooper
pairs. it is in fact possible to avoid this γc expansion as
we now show.
through the perturbative calculation, we have found
that the difference r1 −r2 is imaginary at first order in
γc. it is possible to prove that this difference is imaginary
at any order in γc: the richardson's procedure amounts
to add an imaginary part to the two-pair energy, in order
to escape into the complex plane and avoid poles in sums
like the one of eq.(4), the two "richardson's energies"
then reading as r1 = r + ir′ and r2 = r −ir′, with r
and r′ real.
r is by construction real since r1 + r2 = 2r is the
energy of the two cooper pairs. in order to show that r′
also is real, let us go back to eq.(16). in terms of (r, r′),
this equation reads
x
wk
x2
k + r′2 =
1
r′2
(27)
where xk = 2εk −r is real.
we then note that this
equation also reads
x
wkx2
k
|x2
k + r′2|2 = r′∗2
"
1
|r′|4 −
x
wk
|x2
k + r′2|2
#
(28)
from it, we readily see that, since the lhs and the
bracket are both real, r′∗2 must be real. r′∗2 can then
either be positive or negative, e.i., r′ can be real or imag-
inary, which produces (r1, r2) either both real or com-
plex conjugate.
to show that (r1, r2) cannot be both real, we go back
to eq.(16). by noting that p wk is nothing but the num-
ber nωof pairs in the potential layer, we can rewrite this
equation as
0 =
x
wk
1
2εk −r1
−
1
2εk −r2
+
4
nω(r1 −r2)
(29)
=
1
r1 −r2
x
wk
ak
(2εk −r1)(2εk −r2)
(30)
where
ak = (r1 −r2)2 +
4
nω
(2εk −r1)(2εk −r2)
(31)
it is possible to rewrite the second term of ak using 4ab =
(a + b)2 −(a −b)2. this leads to
ak = (1 −1
nω
)(r1 −r2)2 + 1
nω
(4εk −r1 −r2)2 (32)
since the number of pairs nωin the potential layer is far
larger than 1, ak would be positive if (r1, r2) were both
real. for (r1, r2) outside the potential layer over which
the sum over k is taken, the sum in eq.(30) would be
made of terms with a given sign, so that this sum cannot
6
cancels.
consequently, solutions outside the potential
layer must be complex conjugate whatever γc.
for (r1, r2) complex conjugate, i.e., r′ real, the sum
over k in eq.27, performed within a constant density of
states, leads to
1
r′2 = ρ0
2
z εf0 +ω
εf0
2dεk
x2
k + r′2
=
ρ0
2r′
arctan 2ω+ xf0
r′
−arctan xf0
r′
(33)
where xf0 = 2εf0 −r. if we now take the tangent of
the above equation, we find
tan
2
ρ0r′ =
2ωr′
r′2 + xf0(2ω+ xf0)
(34)
turning to eq.(15), we find that it reads in terms of
(r, r′) as
x
xk
x2
k + r′2 = 1
v
(35)
if we again perform the integration over k with a constant
density of states, this equation gives
x2
f0 + r′2
(2ω+ xf0)2 + r′2 = e−4/v
(36)
r and r′ then appear as the solutions of two algebraic
equations, namely eqs.(31) and (33).
unfortunately,
they do not have compact form solutions.
it is however possible to solve these equations ana-
lytically in the large sample limit. ρ0 then goes to in-
finity so that nωand nc are both large. in this limit
tan(2/ρ0r′) ≃2/ρ0r′ to lowest order in (1/ρ0). eq.(31)
then gives r′2 ≃xf0(2ω+ xf0)/nω.
for ρ0 infinite, i.e., nωinfinite, r′ reduces to zero so
that, due to eq.(33), z = xf0/(2ω+ xf0) reduces to
e−2/v. eq.(33) can then be rewritten as
z2 ≃e−4/v 1 + z/nω
1 + 1/znω
(37)
since e−2/vnω= nc/2 is also large compared to 1, this
gives the first order correction in 1/ρ0 to z ≃e−2/v as z ≃
e−2/v
1 + (e−2/v −e2/v)/2nω
. from xf0 = 2ωz/(1 −
z) which for z small reduces to xf0 ≃2ω(z +z2), we end
by dropping terms in e−4/v, with r at first order in 1/ρ0
given by
r ≃2ǫf0 −2ωe−2/v
1 −1
nc
−
1
nω
≃2ǫf0 + 1
ρ0
−ǫc(1 −
1
nω
)
(38)
since e2 = 2r, this result is just the one obtained from
the perturbative approach given in eq.(26).
the major advantage of this exact procedure is to
clearly show that the above result corresponds to the
dominant term in both, the large sample limit by drop-
ping terms in (1/ρ0)2 in front 1/ρ0, and the small poten-
tial limit by dropping terms in e−4/v in front of e−2/v. as
seen from the first expression of r in eq.(35), the pauli
exclusion principle induces a double correction, in 1/nc
and in 1/nωto the one-pair binding energy ǫc = 2ωe−2/v.
however, the corrections in 1/nc ends by giving a poten-
tial free correction to the 2-pair energy e2 because, in a
non-obvious way, it in fact comes from a simple change
in the free electron fermi sea filling, as seen from the
second expression of r in eq.(35).
c.
excited state
we now consider the 2-pair excited states with a bro-
ken pair having a nonzero total momentum, as possibly
obtained by photon absorption. such a pair does not feel
the bcs potential, so that it stays uncorrelated. these
excited states thus read
ψ1; k,k′
=
x
f(k1)β†
k1a†
k↑a†
−k′↓|f0⟩
(39)
to derive the equation fulfilled by f(k1), it is conve-
nient to note that, for k ̸= k′,
βpβ†
k1a†
k↑a†
−k′↓|f0⟩
= δk1p (1 −δk1k −δk1k′) a†
k↑a†
−k′↓|f0⟩
(40)
the bracket insuring cancellation for k1 = k or k′, as
necessary due to the lhs. it is then easy to show, from
the schr ̈
odinger equation
h −e1, kk′
ψ1; k,k′
= 0 pro-
jected upon ⟨f0| a−k′↓ak↑βp that
0 =
1 −δpk −δpk′
2εp + εk + εk′ −e1, kk′
f(p)
−v wp
x
q̸=k,k′
wqf(q)
(41)
this makes f(p) undefined for p = k or k′. this is unim-
portant since the corresponding contribution in
ψ1; k,k′
cancels due to the pauli exclusion principle. for p ̸= (k,
k′) the equation fulfilled by f(p) is obtained by enforc-
ing the bracket of the above equation to cancel. follow-
ing the one-cooper pair procedure, we get the eigenvalue
equation for one broken pair (k, −k′) plus one cooper
pair as
1
v =
x
p̸=k,k′
wp
2εp + εk + εk′ −e1, kk′
(42)
a first possibility is to have the two free electrons in the
two lowest states of the potential layer, namely εk = εf0
and εk′ = εf0 + 1/ρ0. the p-state energy in the above
7
equation must then be larger than ε(2)
f0 with ε(n)
f0 = εf0 +
n/ρ0; so that eq.(39) merely gives
1
v = ρ0
2
z εf0 +ω
ε(2)
f0
2dε
2ε + 2εf0 + 1/ρ0 −e1, kk′
(43)
by writing εf0 + ωas ε(n)
f0 + ω(n) with ω(n) = ω−n/ρ0,
eqs.(4, 5) for the single pair energy readily give
e1, kk′−
2εf0 + 1
ρ0
≃2
εf0 + 2
ρ0
−2
ω−2
ρ0
e−2/v
(44)
another possibility is to put the two free electrons in
the second and third lowest states of the potential layer,
namely εk = εf0 + 1/ρ0 and εk′ = εf0 + 2/ρ0. the p-
state energy in eq.(39) can then be equal to εf0 or larger
than ε(3)
f0 . in this case, eq.(39) gives
1
v =
1
2εf0 −e + ρ0
2
z εf0 +ω
ε(3)
f0
2dε
2ε −e
(45)
in which we have set e = e1, kk′ −2εf0 −3/ρ0. by e as
2ε(3)
f0 −2ω(3)e−2/vx, the above equation gives x through
2
xn
′′
c −3 = log
x
1 + xe−2/v
(46)
where n
′′
c
= 2ω(3)e−2/vρ0 is close to nc, i.e., large
compared to 1 in the large sample limit.
this gives
x ≃1 + 2/n
′′
c , so that the energy e1, kk′ would then
be equal to
e′
1, kk′ ≃4εf0 + 7
ρ0
−2
ω−3
ρ0
e−2/v
(47)
this energy is larger than the one given in eq.(41) with
the broken pair in the two lowest energy levels of the
potential layer.
such a conclusion stays valid for broken pair electrons
in higher states: the minimum energy for a broken pair
plus a correlated pair is given by e1, kk′ in eq.(41). the
excitation gap to break one of the two cooper pairs into
two free electrons ∆= e1, kk′ −e2, thus appears to be
∆= εc + 3
ρ0
= εc
1 + 3
nc
(48)
we can then remember that the excitation gap for a
single pair is equal to
h
εf0 + (εf0 + 1
ρ0 )
i
−(2εf0−εc), i.e.,
εc +
1
ρ0 : the broken pair being again in the two lowest
states of the potential layer, this brings an additional
1
ρ0 contribution to the average pair binding energy εc.
eq.(45) thus shows that the gap increases when going
from one to two pairs. this increase in fact comes from a
mere kinetic energy increase induced by pauli blocking.
it is worth noting that, while pauli blocking induces an
increase of the gap, it produces a decrease from εc to
εc (1 −1/nω) of the average pair binding energy when
going from one to two correlated pairs. since nω= ρ0ω
is far larger than nc = ρ0εc, the gap increase however is
far larger than the binding energy decrease.
the changes we obtain in the excitation gap and in
the average pair binding energy when going from one to
two pairs, are a strong indication that the gap in the
dense bcs configuration cannot be simply linked to the
pair binding energy, as commonly said. indeed, the pair
binding energy is going to stay smaller than εc = 2ωe−2/v
due to pauli blocking in the potential layer, while the
experimental gap in the dense regime is known to be of
the order of ωe−1/v which is far larger than εc.
we wish to stress that, in addition to the excited states
considered in this section, in which the broken pair ends
by having a non-zero momentum, there also are excited
states, not included into the present work. in these ex-
cited states, the two pairs still have a zero momentum
but correspond to r's located somewhere in the quasi-
continuum spectrum of the one-electron states, i.e., in-
between two one-electron levels. for such r's, it is not
possible to straightforwardly replace summation by inte-
gration in the richardson's equations as we did through-
out the present paper.
iv.
conclusion
we here extend the well-known one-pair problem,
solved by cooper, and consider two correlated pairs of
electrons added to a fermi sea of noninteracting elec-
trons. the schr ̈
odinger equation for the two-pair ground
state has been reduced by richardson to a set of two
coupled algebraic equations. we here give three differ-
ent methods to solve these two equations analytically in
the large sample limit, providing a unique result. these
methods are perspective for the extension to an arbi-
trary number of pairs in order to hopefully cover the
crossover between dilute and dense regimes of cooper
pairs, as well as to apply them to nanoscopic supercon-
ductors. although the two-pair problem we here solve, is
only a first step toward the resolution of this quite funda-
mental problem, it already allows us to understand more
deeply the role of pauli blocking between electrons from
which pairs are constructed. we show that this blocking
leads to a decrease of the average pair binding energy in
the two-pair system compared to the one-pair configura-
tion. this decrease is due to the fact that by increasing
the number of pairs, we decrease the number of available
states to form bound pairs.
this two-pair problem actually has some direct rela-
tion to real physical systems, where correlated pairs are
more local than in conventional bcs superconductors.
we can mention underdoped cuprates, heavy fermion su-
perconductors, pnictides, and ultracold atomic gases. it
was shown long time ago8, in the context of supercon-
ducting semiconductors with low carrier density, that the
excitation spectrum of such dilute system of pairs is con-
8
trolled by the binding energy of an isolated pair rather
than by a cooperative bcs gap. hence, this picture is
very similar to the classical cooper model. within the
two-pair configuration that we here solve using richard-
son's procedure, we reach the same conclusion for the
excitation spectrum.
we also reveal how the compos-
ite nature of correlated pairs affects their binding ener-
gies through the pauli exclusion principles for elementary
fermions from which the pairs are constructed.
the extrapolation of the tendency we find to the dense
bcs regime of pairs, indicates that the average pair bind-
ing energy in this regime must be smaller than that of an
isolated pair. at the same time, it is generally believed
that the pair binding energy in the bcs configuration is
of the order of the superconducting gap, which is much
larger than the isolated pair binding energy. to under-
stand this discrepancy, we must note that there are two
rather different concepts of "pairs" in the many-particle
bcs configuration. those with energy of the order of
the gap are introduced not ab initio, but enforced to
have a gap energy in order to provide a qualitative un-
derstanding for the expression for the ground state en-
ergy, found within the bcs theory. these entities, called
"virtual pairs" by schrieffer4, correspond to pairs of elec-
trons excited above the fermi sea of noninteracting elec-
trons.
these virtual pairs have to be contrasted with
what schrieffer calls "superfluid pairs"4, made of all the
electrons with opposite momenta feeling the attracting
bcs potential, the number of these pairs being much
larger than the number of "virtual pairs". staying within
the framework of "superfluid pairs" greatly helps to un-
derstand the dilute and dense regimes of pairs on the
same footing.
v.
acknowledgements
w. v. p. acknowledges supports from the french min-
istry of education, rfbr (project no. 09-02-00248), and
the dynasty foundation.
vi.
appendix
in this appendix, we propose another exact approach
to richardson's equations which may turn more conve-
nient for problems dealing with a pair number larger than
two.
we start with eqs.(14) and calculate the sum by again
assuming a constant density
2
v =
z εf0 +ω
εf0
2 dε
2ε −r1
+
4
ρ0(r1 −r2)
(a1)
with r1 possibly complex. instead of ri, we are going to
look for ai = (2ω+2εf0 −ri)/(2εf0 −ri) with i = (1, 2).
since ri = 2εf0 −2ω/(ai −1), the above equation yields
2
v = log a1 +
2
nω
(a1 −1)(a2 −1)
a1 −a2
(a2)
where log denotes the principal value of the complex
logarithmic function, i.e., the one that satisfies −π <
im (log a1) ≤π
by adding the same equation with (1, 2) exchanged,
we readily get
a1a2 = e4/v
(a3)
this equation is nothing but eq.(33), since for r1 = r+
ir′, we do have a1 = (2ω+xf0−r−ir′)/(xf0−r−ir′),
with i changed into −i for (r1, a1) changed into (r2,
a2). from eq.(a3), we conclude that a1 = e2/vt while
a2 = e2/v/t.
next, we note that, since the two-pair energy e2 =
r1 + r2, which also reads
e2 = 4εf0 −2ω
a1 + a2 −2
e4/v + 1 −(a1 + a2)
(a4)
is real, (a1 +a2) must be real. this implies t+1/t = t∗+
1/t∗or equivalently (t −t∗)(tt∗−1) = 0. consequently,
t is either real or such that |t| = 1, i.e., t = eiφ.
to
choose between these two possibilities, we consider the
difference of the two richardson's equations as written
in eq.(a2). this difference which first appears as
0 = log a1
a2
+
4
nω
(a1 −1)(a2 −1)
a1 −a2
(a5)
reads in terms of t as
e2/v + e−2/v = t + 1
t −nω
2
t −1
t
log t
(a6)
for t real, (t −t−1)log t is always positive, except for
t = 1 where it cancels.
this shows that the rhs of
eq.(a6), equal to 2 for t = 1, stays essentially smaller
than 2 for nωfar larger than 1. since e2/v + e−2/v is
far larger than 2 for v small, we conclude that eq.(a6)
cannot be fulfilled for t real.
the other possibility is t = eiφ with 0 < |φ| < π,
so that log t = iφ we then have a1 = a∗
2, i.e., r1 =
r∗
2. this shows that the two richardson's energies are
complex conjugate, as found by the other exact approach.
when used into eq.(a6), this t leads to
φ sin φ +
2
nωcos φ = δc
(a7)
with δc = e2/v+e−2/v
nω
≃
2
nc . once eq.(a7) for φ is solved,
the two-pair energy given in eq.(a4) follows from
e2 = 4εf0 −4ω
e2/v cos φ −1
e4/v + 1 −2e2/v cos φ
(a8)
the solutions of eq.(a7) cannot be expressed in a com-
pact form in terms of classical functions. we however
9
see that the two dimensionless terms in eq.(a7), namely
2/nωand δc, are small. furthermore 2/nωis smaller
than δc. the function in the lhs of eq.(a7) is increas-
ing from 2/nωup to a maximum ≃1.82, then decreasing
down to −2/nωas φ runs in [0, π], and still decreasing
on [π, 2π]. this shows that eq.(a7) admits exactly one
solution in the interval (0, π/2), another one in the inter-
val (π/2, π) but no solution in [π, 2π]. changing φ in −φ
would provide also two solutions on (−π, 0) but this just
corresponds to exchange a1 and a2; so that they cannot
be considered as distinct solutions. for these solutions,
φ sin φ stays close to zero, so that φ is close to 0 or to π.
for φ = 0, the rhs of the above equation reduces, for v
small, to 4εf0 −4ωe−2/v which is just twice the energy
of a single cooper pair as given in eq. (6). the effect of
pauli blocking on this two-single pair energy results from
a large but finite number of pair states nωin the poten-
tial layer, as physically expected. we can note that, by
contrast, φ = π would lead to e2 close to 4εf0 +4ωe−2/v.
this solution has to be sorted out because it corresponds
to r1 and r2 located in the complex plane very close to
the real axis where the one-electron levels are positioned,
so that the distance between them and this real axis is
of the order of 1/ρ0. this prevents substitution of dis-
crete summation by integration, in eq.(a1), as discussed
above.
for φ close to zero, eq.(a7) gives the leading term in
1/nωas φ2 ≃
e1/v −e−1/v)2
nω. the ratio in eq.(a8)
then reads for φ small as
1
e2/v −1
1 −φ2
2
e2/v(e2/v + 1)
(e2/v −1)2
≃
1
e2/v −1 −e2/v
2nω
e4/v −e2/v −1 + e−2/v
e2/v −1
3
(a9)
when inserted into eq.(a8), we end with
e2 ≃4εf0 −4ωe−2/v + 2ω
nω
1 + 2e−2/v
(a10)
which is nothing but eq.(26).
the main advantage of this second exact method is to
have the two-pair energy reading in terms of φ which fol-
lows from a single equation, namely eq.(a7). by con-
trast, to get e2 through xf0 = 2εf0 −e2, as in the
other exact method, we must solve two coupled equa-
tions, namely eqs.(31) and (33).
1 h. frohlich, phys. rev. 79, 845 (1950).
2 l. n. cooper, phys. rev. 104, 1189 (1956).
3 j. bardeen, l.n. cooper, and j.r. schrieffer, phys. rev.
108, 1175 (1957).
4 j. r. schrieffer, theory of superconductivity,
perseus
books group, massachusetts (1999).
5 r. combescot, x. leyronas, and m. y. kagan, phys. rev.
a 73, 023618 (2006).
6 i. bloch, j. dalibard, and w. zwerger, rev. mod. phys.
80, 885 (2008).
7 s. giorgini, l. p. pitaevskii, and s. stringari, rev. mod.
phys. 80, 1215 (2008).
8 d. m. eagles, phys. rev. 186, 456 (1969).
9 a. j. leggett, j. de physique. colloques 41, c7 (1980).
10 r. w. richardson, phys. lett. 3, 277 (1963).
11 r. w. richardson and n. sherman, nucl. phys. 52,
221(1964).
12 j. dukelsky, s. pittel, and g. sierra, rev. mod. phys. 76,
643 (2004).
13 f. braun and j. von delft, phys. rev. lett. 81, 4712
(1998).
14 m. combescot,
o. betbeder-matibet,
and f. dubin,
physics reports 463, 215 (2008).
15 e. m. lifshitz and l. p. pitaevskii, statistical physics, part
2, pergamon, oxford (1980).
16 a. l. fetter and j. d. walecka, quantum theory of many-
particle systems, dover publications, new york (2003).
17 m. tinkham, introduction to superconductivity, dover
publications, new york (2004).
18 m. randeria, in: a. griffin, d. snoke, and s. stringari
(eds.), bose-einstein condensation. cambridge univer-
sity press, campridge, pp. 355-392 (1995).
19 y. j. uemura, physica c 194, 282 (1997).
20 q. chen, j. stajic, s. tan, and k. levin, physics reports
412, 1 (2005).
21 d. j. singh and m.-h. du, phys. rev. lett 100, 237003
(2008); l. craco, m. s. laad, s. leoni, and h. rosner,
phys. rev. b 78, 134511 (2008).
22 a. a. shanenko, m. d. croitoru, a. vagov, and f. m.
peeters, arxiv:0910.2345 (2009).
|
0911.1689 | the factor set of gr-categories of the type $(\pi,a)$ | any $\gamma$-graded categorical group is determined by a factor set of a
categorical group. this paper studies the factor set of the group $\gamma$ with
coefficients in the categorical group of the type $(\pi,a).$ then, an
interpretation of the notion of $\gamma-$operator $3-$cocycle is presented and
the proof of cohomological classification theorem for the a $\gamma-$graded
gr-category is also presented.
| introduction
the notion of a graded monoidal category was presented by fr ̈
ohlich and wall [4] by
generalization some manifolds of categories with the action of a group γ. then, γ will be
also regarded as a category with exactly one object, (say ∗), where the morphisms are the
members of γ and the composition law is the group composition operation. a γ−grading on
a category d is a functor gr : d →γ. the grading is called stable if for all c ∈obd, σ ∈γ,
there is an equivalence u in d with domain c and gr(u) = σ. then σ is the grade of u.
if (d, gr) is a γ−graded category, we define kerd to be the subcategory consisting of all
morphisms of grade 1.
a γ−monoidal category consists of: a stably γ−graded category (d, gr), γ−functors
⊗: d ×γ d −
→d ,
i : γ −
→d
and natural equivalences (of grade 1) ax,y,z : (x⊗y )⊗z
∼
−
→x⊗(y ⊗z), lx : i⊗x
∼
−
→x,
rx : x ⊗i
∼
−
→x, where i = i(∗), satisfying coherence conditions of a monoidal category.
for any γ−graded category (d, gr), authors wrote rep(d, gr) for the category of
γ−fuctors f : γ →d, and natural transformations. an object of rep(d, gr) thus consists
an object c of d with homomorphism γ →autd(c). homomorphism γ →autd(i) is the
right inverse of the graded homomorphism autd(i) →γ. in other words, autd(i) is a split
extension of the normal subgroup n of automorphisms of grade 1 by the subgroup which is
isomorphic to γ. the extension defines an action of γ on n by
σu = i(σ) ◦u ◦i(σ−1).
in [1], authors considered the γ−graded extension problem of categories as a catego-
rization of the group extension problem. the groups a, b in the short exact sequence:
0 →a →b →γ →1
are replaced with the categories c, d. a γ−monoidal extension of the monoidal category c
is a γ−monoidal category d with a monoidal isomorphism j : c −
→kerd.
the construction and classification problems of γ−monoidal extension were solved by
raising the main results of schreier-eilenberg-maclane on group extensions to categorical
1
level. with the notations of factor set and crossed product extension, authors proved that
there exists a bijection
∆: h2(γ, c) ↔ext(γ, c)
between the set of congruence classes of factor sets on γ with coefficients in the monoidal
category c and the set of congruence classes of γ−extensions of c.
the case c is a categorical group (also called a gr-category) was considered in
[2]. then, the γ−equivariant structure appears on π−module a, where π = π0(c), a =
autc(1) = π1(c). γ−extensions and γ−functors are classified by functors
ch :γ cg →h3
γ ;
z
γ
: z3
γ →γ cg,
where γcg is the category of γ−extensions, z3
γ is the category in which any object is a triple
(π, a, h), where (π, a) is a γ−pair and h ∈h3; h3
γ is the category obtained from z3
γ when
h ∈z3
γ(π, a) is replaced with h ∈h3
γ(π, a).
as we know, each categorical group is equivalent to a categorical group of the type
(π, a) and the unit constraint is strict (in the sense lx = rx = idx).
so we may solve
the classification problem for this special case thanks to the desription of gr-functors of
gr-categories of the type (π, a) [6].
we may better describe the factor set, and show
that the γ−equivariant structure of a is a necessary condition of the factor set. thus, we
may construct γ−operator 3−cocycles as a induced version of a factor set, instead of using
complex construction as in [2]. by this way, we may obtain the classification theorem in a
stronger form than the result in [2], that is the bijection:
ω: sγ(π, a) →h3
γ(π, a)
where sγ(π, a) is the set of congruence classes of γ−extensions of gr-categories of the type
(π, a).
the classification problem of γ−functors follows this method will be presented in
another paper.
1
some notions
let π be a group and a be a left π−module. a gr-category of the type (π, a) is a category
s = s(π, a, ξ) in which objects are elements x ∈π, and morphisms are automorphisms
aut(x) = {x} × a.
the composition of two morphisms is defined by
(x, u) ◦(x, v) = (x, u + v)
the operation ⊗is defined by
x ⊗y = xy
(x, u) ⊗(y, v) = (xy, u + xv).
the associative constraint ax,y,z is a normalized 3−cocycle (in the sense of group cohomol-
ogy) ξ ∈z3(π, a), and the unit constraint is strict. then from now on, a gr-category of
the type (π, a) refers to the one with above properties.
2
definition 1.1. let γ be a group and let (c, ⊗) be any monoidal category. we say that a
factor set on γ with coefficients in (c, ⊗) is a pair (θ, f) consisting of: a family of monoidal
autoequivalences
f σ = (f σ, f
f σ, c
f σ) : c −
→c, (σ ∈γ)
and a family of isomorphisms of monoidal functors
θσ,τ : f σf τ
∼
−
→f στ, (σ, τ ∈γ)
satisfying the conditions
i) f 1 = idc
ii) θ1,σ = idf σ = θσ,1 (σ ∈γ)
iii) for all ∀σ, τ, γ ∈γ, the following diagrams are commutative
f σf τf γ
θσ,τf γ
−
−
−
−
−
→f στf γ
f σθτ,γ
y
yθστ,γ
f σf τγ
θσ,τγ
−
−
−
−
→
f στγ
in [6], authors described monoidal functors between monoidal categories of the type
(π, a). thanks to this description, we will prove the necessary conditions of a factor set.
definition 1.2. [6] let s = (π, a, ξ), s′ = (π′, a′, ξ′) be gr-categories. a functor f :
s →s′ is called a functor of the type (φ, f) if
f(x) = φ(x) ,
f(x, u) = (φ(x), f(u))
and φ : π →π′, f : a →a′ is a pair of group homomorphisms satisfying f(xa) = φ(x)f(a)
for x ∈π, a ∈a.
we have
theorem 1.3. [6] let s = (π, a, ξ), s′ = (π′, a′, ξ′) be gr-categories and f = (f, e
f , b
f)
be a gr-functor from s to s′. then, f is a functor of the type (φ, f).
according to this theorem, any monoidal autoequivalence f σ is of the form f σ =
(φσ, f σ). this remark is used frequently throughout this paper.
definition 1.4. [2] let γ be a group, π be a γ−group. a γ−module a is a equivariant
module on γ−group π if a is a π−module satisfying
σ(xa) = (σx)(σa),
for all σ ∈γ, x ∈π and a ∈a.
2
γ−graded extension of a gr−category of the type (π, a)
for a given factor set (θ, f), we may construct a γ−graded crossed product extension of c,
denoted by ∆(θ, f) as follows:
c
j
−
−
−
−
→∆(θ, f)
g
−
−
−
−
→g
where ∆(θ, f) is a category in which objects are objects of c and morphisms are pairs
(u, σ) : a →b where σ ∈γ and u : f σ(a) →b is a morphism in c. the composition of
two morphisms:
a
(u,σ)
−
−
−
−
→b
(v,τ)
−
−
−
−
→c
3
is defined by:
(u, σ) * (v, τ) = (u * f σ(v) * θσ,τ(a)−1, στ).
this composition is associative and the unit exists thanks to cocycle and normalized condi-
tions i), ii), iii) of (θ, f).
a stably γ−grading on ∆(θ, f), g : ∆(θ, f) →γ is defined by g(u, σ) = σ, and the
bijection j : c
∼
−
−
−
−
→ker(∆(α, f)) is defined by:
j(a
u
−
−
−
−
→b) = (a
(u,1)
−
−
−
−
→b)
proposition 2.1. if g : c →c′ and h : c′ →c are monoidal equivalence such that
α : g ◦h ∼
= idc′, and β : h ◦g ∼
= idc, and d is a crossed product γ−extension of c by the
factor set (θ, f), then the quadruple (g, h, α, β) induces:
i) the factor set (θ′, f ′) of c′,
ii) a γ−equivalence
∆(θ, f)) ↔∆(θ′, f ′)).
proof. i) let f ′σ be the composition h ◦f σ ◦g and
θ′σ,τ
x
= g(θσ,τ
hx ◦f σ(βf τ hx)).
one can verify that (θ′, f ′) is a factor set of c′.
ii) we extend the functor g : c →c′ to a functor
e
g : ∆(θ, f)) ↔∆(θ′, f ′))
as follows: for the object x of c, let e
gx = gx; for the morphism (u, σ) : x →y, where
u : f σx →y, let
e
g(u, σ) = (g(u ◦f σ(βx)), σ).
one can verify that e
g is a γ−equivalentce
from the above proposition, it is deduced that
corollary 2.2. any γ−extension of a gr-category is equivariant to a γ−extension of a
gr-category of the type (π, a).
we now prove some necessary conditions for the existence of a factor set.
theorem 2.3. let γ be a group and s = s(π, a, ξ) be a gr-category. if (θ, f) is a factor
set of γ, with coefficients in s, then:
i) there exists a group homomorphism
φ : γ −
→autπ ; f : γ −
→auta,
and a is equiped with a π−module γ−equivariant structure, induced by φ, f,
ii) in definition 2.1, the condition i) of a factor set can be deduced from the remaining
conditions.
proof. i) according to theorem 1.1, any autoequivalence f σ, σ ∈γ, of a factor set is of the
form
(φσ, f σ) : s −
→s.
4
since f
f σx,y : f σ(xy) −
→f σx.f σy, σ ∈γ is a morphism in (π, a), we have
f σ(xy) = f σx.f σy, ∀x, y ∈π.
this stated that φσ = f σ is a endomorphism in π. furthermore, f σ is an equivalence,
so that φσ is an automorphism of group π, that is φσ ∈autπ. on the other hand, since
θσ,τ
x
: f σf τx −
→f στx is an arrow in (π, a), we have
(f σf τ)(x) = f στ(x), ∀x ∈π.
thus, φσφτ = φστ. this proved that
φ : γ
−
→
autπ
σ
7−
→
φσ
is a homomorphism of groups. then φ1 = φ(1) = idπ.
let f
f σx,y = (σ(xy), e
f σ(x, y)), in which σ(xy) = φσ(xy); e
f σ : π2 −
→a and c
f σ =
(1, cσ) : f σ1 −
→1 are maps.
from the definition of the monoidal functor f σ, we have
σx. e
f σ(y, z) −e
f σ(xy, z) + e
f σ(x, yz) −e
f σ(x, y) = a(σx, σy, σz) −f σ(a(x, y, z))
(1)
(φσx)cσ + e
f σ(x, 1) = 0
(2)
cσ + e
f σ(1, x) = 0
(3)
we now observe isomorphisms of monoidal functors θσ,τ = (θσ,τ
x ) where
(θσ,τ
x ) = (φσx, tσ,τ(x)) : f σf τ(x) −
→f στ(x),
in which tσ,τ : π −
→a are maps.
we have the following commutative diagrams
f σf τ(x)
(•,tσ,τ(x))
−
−
−
−
−
−
−
→f στ(x)
(•,f τf σ(a))
y
y(•,f στ(a))
f σf τ(x)
(•,tσ,τ(x))
−
−
−
−
−
−
−
→f στ(x)
f σf τ(xy)
(•, e
f σ(f τx,f τy)+f σ e
f τ(x,y))
−
−
−
−
−
−
−
−
−
−
−
−
−
−
−
−
−
−
→f σf τ(x).f σf τ(y)
(•,tσ,τ(xy))
y
y(f στ x,tσ,τ(x))⊗(f σy,tσ,τ(y))
f στ(xy)
(•, e
f στ(x,y))
−
−
−
−
−
−
−
−
→
f στ(x).f στ(y)
f σf τ1
1
f στ1
❄
(1,tσ,τ(1))
❅
❅
❅
❘
(1,f σ(cτ)+cσ)
✒
(1,cστ)
from which we are led to
f σf τ = f στ
(4)
5
f στ(x)tσ,τ(y) −tσ,τ(xy) + tσ,τ(x) = e
f στ(x, y) −e
f σ(f τ
x , f τ
y ) −f σ( e
f τ(x, y))
(5)
f τ(cτ) + cτ −cστ = tσ,τ(1)
(6)
from the equality (4), we are led to a homomorphism
f : γ −
→auta
given by f(σ) = f σ and so that f 1 = f(1) = ida.
now, let
σx = φσx,
σc = f σc
(7)
for all σ ∈γ, x ∈π, c ∈a. thus, since f σ is a functor of the type (φσ, f σ), φσ(xb) =
φσ(x)f σ(b), or
σ(xb) = σ(x)σ(b),
that is a is a π-module γ-equivariant.
ii) from the condition ii) in the definition of a factor set, we are led to
tσ,1 = t1,σ = 0
from the equality (5), for τ = 1, we obtain e
f 1
x,y = 0, that is f
f 1x,y = id. from the equality
(3), for σ = 1, we have: c1 = 0, that is b
f 1 = id. thus, f 1 is an identity monoidal functor.
the theorem is proved.
3
enough strict factor set and induced 3-cocycle
when the monoidal category c is replaced with the categorical group of the type (π, a) we
obtain better descriptions than in the general case. for example, in theorem 3.2, authors
proved that the condition i) of the definition of factor set of categorical groups of the type
(π, a) is redundant. now, we continue "reducing" this concept in terms of other face.
in this paper, we call a factor set (θ, f) enough strict if c
f σ = idi for all σ ∈γ.
definition 3.1. let γ be a group and c be a gr-category of the type (π, a). factor sets
(θ, f) and (μ, g) on γ with coefficients in c are cohomologous if there exists a family of
isomorphisms of monoidal functors
uσ : (f σ, f
f σ, c
f σ)
∼
−
→(gσ, f
gσ, c
gσ)
(σ ∈γ)
satisfying
u1 = id(π,a)
uστ.θστ = μσ,τ.uσgτ.f σuτ
(σ, τ ∈γ)
remark 3.2. if the two representatives (θ, f), (μ, g) are cohomologous, then f σ = gσ, σ ∈
γ.
indeed, from the definition of cohomologous factor sets, there exists a family of iso-
morphisms of monoidal functors
uσ : (f σ, f
f σ, b
f σ) →(gσ, f
gσ b
gσ)
(σ ∈γ).
sinceuσ
x : f σx →gσx is an arrow in (π, a), we have gσx = f σx.
6
furthermore, for any a ∈a, by the commutativity of the diagram
f σx
gσx
f σx
gσx
✲
uσ
x
❄
f σ(x,a)
❄
gσ(x,a)
✲
uσ
x
by the commutativity of the diagram f σ(x, a) = gσ(x, a).
extending lemma 1.1 [2] for a factor set, we have
lemma 3.3. let s be a categorical group of the type (π, a). any factor set (θ, f) on γ
with cofficients in s is cohomologous to an enough strict factor set (μ, g).
proof. for each σ ∈γ, consider a family of isomorphisms in s:
uσ
x =
(
idf σx
if x ̸= 1
( c
f σ)−1
if x = 1
where 1 ̸= σ ∈γ, and u1 = id.
then, we define gσ in a unique way such that uσ : gσ →f σ is a natural transfor-
mation by setting gσ = f σ and:
f
gσx,y = (uσ
x ⊗uσ
y)−1 f
f σx,y(uσ
xy);
c
gσ = idi
for such setting, clearly we have
gσ = (gσ, f
gσ, c
gσ) : s →s
is a monoidal equivalence.
in particular, we have:
c
gσ = idi.
this states the enough
strictness of the family of functors gσ, as well as of the factor set (μ, g).
now, we set μσ,τ : gσgτ →gστ the natural transformation which makes the following
diagram
gσgτ
gστ
f στ
gσf τ
f σf τ
diagram 1
♣♣♣♣♣♣♣♣♣♣♣♣♣♣♣♣♣♣
✲
μσ,τ
◗◗
◗
s
gσuτ
✲
uστ
✲
uσf τ
✑✑
✑
✸
θσ,τ
commute, for all σ, τ ∈γ. clearly, μσ,τ is a isomorphism of monoidal functors.
we will prove that the family of μσ,τ satisfy the condition ii) of the definition of a
factor set. we now prove that they satisfy the condition iii). consider the diagram:
7
gσgτgγ
gσgτf γ
gσf τf γ
gσf τγ
gσf τγ
f σf τf γ
f σf τγ
gστgγ
gστf γ
f στf γ
f στγ
gστγ
✲
μσ,τgγ
✲
μσ,τf γ
✲
uσf τf γ
✲
θσ,τf γ
✲
uσf τγ
✲
θσ,τγ
✲
μσ,τγ
❄
gσgτuγ
❄
gσuτ f γ
❄
gσθτ,γ
✻
gσuτγ
❄
gστuγ
❄
uστf γ
❄
θστ,γ
✻
uστγ
❄
✲
✛
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
gσμτ,γ
μστ,γ
in this diagram, the region (i) commutes thanks to the naturality of μσ,τ; the regions (ii),
(v), (vi), (vii) commute thanks to the diagram 1; the region (iii) commutes thanks to the
naturality of uσ; the region (iv) commutes thanks to the definition of the factor set (θ, f).
so the perimater commutes. this completes the proof.
we now show that any factor set induces a γ−operator 3−cocycle based on the
following definition.
let a γ−pair (π, a), that is, a π−module γ−equavariant a, cohomology groups
hn
γ(π, a) studied in [3]. we recall that cohomology group hn
γ(π, a), with n ≤3, can be
computed as the cohomology group of the struncated cochain complex:
∼
cγ (π, a) : 0 −
−
−
−
→c1
γ(π, a)
∂
−
−
−
−
→c2
γ(π, a)
∂
−
−
−
−
→z3
γ(π, a) −
−
−
−
→0,
in which c1
γ(π, a) consists of normalized maps f : π →a, c2
γ(π, a) consists of nor-
malized maps g : π2 ∪(π × γ) →a and z3
γ(π, a) consists of normalized maps h :
π3 ∪(π2 × γ) ∪(π × γ2) →a satisfying the following 3−cocycle conditions:
h(x, y, zt) + h(xy, z, t) = x(h(y, z, t)) + h(x, yz, t) + h(x, y, z)
(8)
σ(h(x, y, z)) + h(xy, z, σ) + h(x, y, σ) = h(σ(x), σ(y), σ(z)) + (σ(x))(h(y, z, σ)) + h(x, yz, σ)
(9)
σ(h(x, y, τ)) + h(τ(x), τ(y), σ) + h(x, σ, τ) + (στ(x))(h(y, σ, τ)) = h(x, y, στ) + h(xy, σ, τ)
(10)
σ(h(x, τ, γ)) + h(x, σ, τγ) = h(x, στ, γ) + h(γ(x), σ, τ)
(11)
for all x, y, z, t ∈π; σ, τ, γ ∈γ.
for each f ∈c1
γ(π, a), the coboundary ∂f is given by
(∂f)(x, y) = x(f(y)) −f(xy) + f(y),
(12)
(∂f)(x, σ) = σ(f(x)) −f(σ(x)),
(13)
and for each g ∈c2
γ(π, a), ∂g is given by:
(∂g)(x, y, z) = x(g(y, z)) −g(xy, z) + g(x, yz) −g(x, y),
(14)
(∂g)(x, y, σ) = σ(g(x, y)) −g(σ(x), σ(y)) −σ(x)(g(y, σ)) + g(xy, σ) −g(x, σ),
(15)
(∂g)(x, σ, τ) = σ(g(x, τ)) −g(x, στ) + g(τ(x), σ).
(16)
8
proposition 3.4. any enough strict factor set (θ, f) on γ with coefficients in s = (π, a, ξ)
induces an element h ∈z3
γ(π, a).
proof. suppose f σ = (f σ, f
f σ, id). then, we can write
f
f σx,y = (φσ(xy), e
f σ(x, y)) = (σ(xy), g(x, y, σ))
where e
f : π2 × γ →a is a function.
for the family of isomorphisms of monoidal functors θσ,τ = (θσ,τ
x ), we are able to
write
θσ,τ
x
= (φστx, tσ,τ(x)) = (φστx, t(x, σ, τ))
where t : π × γ2 →a is a function.
from functions ξ, e
f, t , in which ξ is associated with the associative constraint a of
(π, a), we determine the function h as follows:
h : π3 ∪(π2 × γ) ∪(π × γ2) →a
where h = ξ ∪e
f ∪t, in the sense
h |π3= ξ;
h |π2×γ= e
f; and h |π×γ2= t
the above determined h is a γ−operator 3−cocycle. indeed, the equalities (1), (5) turn into:
−σx.h(y, z, σ) + h(xy, z, σ) + h(x, y, σ) −h(x, yz, σ) = h(σx, σy, σz) −σ(h(x, y, z)) (17)
(στ)x.h(y, σ, τ) −h(xy, σ, τ) −h(x, σ, τ) = h(x, y, στ) −h(τx, τx, σ) −σh(x, y, τ)
(18)
moreover, from the relations of 3−cocycle ξ, we obtain:
xh(y, z, t) −h(xy, z, t) + h(x, yz, t) −h(x, y, zt) + h(x, y, z) = 0
(19)
the cocycle condition
θστ,γ.θσ,τf γ = θσ,τγ.f σθτ,γ
yeilds
h(x, στ, γ) + h(γx, σ, τ) = h(x, σ, τγ) + σh(x, τ, γ)
(20)
for all x, y, z ∈π;
σ, τ ∈γ. it follows that h satisfies the relations of a 3−cocycle in
z3
γ(π, a). however, we have to prove the normalized property of h.
first, since the unit constraints of (π, a) are strict and the factor set (θ, f) is enough
strict, the equalities (2), (3), (6) turn into:
h(x, 1, σ) = e
f σ(x, 1) = 0 = e
f σ(1, x) = h(1, x, σ)
h(1, σ, τ) = tσ,τ(1) = 0
since c
f 1 = id, we have
h(x, y, 1γ) = e
f 1(x, y) = 0
since the normalized property of associative constraint a,
h(1, y, z) = h(x, 1, z) = h(x, y, 1) = 0
thanks to ii) in the definition of a factor set, we have
h(x, 1γ, τ) = h(x, σ, 1γ) = 0
let h(θ,f ) = h, we have h(θ,f ) ∈z3
γ(π, a). this completes the proof of theorem.
9
4
classification theorem
let (μ, g) be another enough strict factor set on γ with coefficients in ((π, a, ξ), which
is cohomologous to (θ, f). thus, π−module γ−equivariant structure a and the element
h(μ,g) ∈h3
γ(π, a) defined by (μ, g) has the following property:
proposition 4.1. let two enough strict factor sets (μ, g), (θ, f) on γ with coefficients
in categorical group s of the type (π, a) be cohomologous. then, they determine the same
structure of π−module γ−equivariant on a and 3−cocyles inducing h(θ,f ), h(μ,g) are coho-
mologous.
proof. according to remark 3.2, f σ = gσ(σ ∈γ. then, they induce the same π-module
γ-equivariant structure, according to the relation (7) and theorem 2.3.
now, we prove that elements h(θ,f ) and h(μ,g) are cohomologous. we denote h(μ,g) =
h′. hence, by determining of h(μ,g) referred in proposition 3.4, we have
(σ(xy), h′(x, y, σ)) = g
hσx,y; (στx, h′(x, σ, τ) = μσ,τx,
for all x, y ∈π, σ, τ ∈γ let u : π × γ →a be the function defined by u(x, σ) = uσ
x. it
determines an extending 2−cochain of u, denoted by g, with g|π2 : π2 →a is the null map.
since (uστ.θσ,τ)x = (μσ,τ.uσgτ.f σuτ)x, we have
g(x, στ) + h(x, σ, τ) = h′(x, σ, τ) + g(τx, σ) + σg(x, τ)
(21)
since f
gσx,y * uσ
x⊗y = (uσ
x ⊗uσ
y) * f
f σx,y, we have
h′(x, y, σ) −h(x, y, σ) = g(x, σ) + σxg(y, σ) −g(xy, σ)
(22)
since c
f σ = d
hσ = id, we have uσ
1 = d
hσ( c
f σ)−1 = id1. hence
g(1, σ) = 0
for all σ ∈γ
(23)
since u1 = id((π,a),⊗), we have
g(x, 1γ) = 0
for all x ∈g
(24)
by the determining of g and relations (21)-(24), we have g ∈c2
γ(π, a) and h(θ,f ) −
h(μ,g) = ∂g. this completes the proof of proposition.
thus, any factor set (θ, f) on γ with coefficients in categorical groups of the type
(π, a) determines a structure of g−module γ−equivariant a and an element h(θ,f ) ∈
h3
γ(π, a) uniquely.
now, we consider the problem: giving an element h ∈z3
γ(π, a), does there exist a
factor set (θ, f) on γ with coefficients in (π, a), inducing h.
according to the definition of γ−operator cocycle, we have functions
h |π3= ξ;
h |π2×γ= e
f; and h |π×γ2= t
hence, we may determine the factor set (θ, f):
f σx = σx;
f σ(x, c) = (σx, σc)
that is f σ(c) = σc
c
f σ = id1,
f
f σx,y = (σ(xy), e
f(x, y, σ))
θστx = (στx, t(x, σ, τ)),
for all σ, τ ∈γ, x ∈π, c ∈a. clearly, the above determined factor set induces h.
we now state the main result of the paper:
10
theorem 4.2. there exists a bijection
ω: sγ(π, a) →h3
γ(π, a),
where sγ(π, a) is the set of congruence classes of γ−extensions of categorical groups of the
type (π, a).
proof. any element of sγ(π, a) may have a crossed product γ−extension ∆((θ, f), c) be
a presentative, where c is a categorical group (π, a, ξ). according to proposition 3.3, it is
possible to assume that (θ, f) is enough strict. then, (θ, f) induces 3−cocycle h = h(θ,f )
(proposition 3.4). according to proposition 4.1, the correspondence cl(θ, f) →cl(h(θ,f ))
is a map. thanks to the above remark, this correspondence is a surjection. we now prove
that it is an injection.
let ∆and ∆′ be crossed product γ−extension of gr-categories s = s(π, a, ξ),
s′ = s′(π, a, ξ′) by factor sets (θ, f), (θ′, f ′). moreover, 3−cocycles inducing h, h′ are
cohomologous. we will prove that ∆and ∆′ are equivariant γ−extensions.
according to the determining of h, h′,
h |π3= ξ,
h′ |π3= ξ′,
ξ′ = ξ + δg,
where g : π2 →a is a map. then, gr-categories s and s′ are gr-equivalent, so there exists
a gr-equivalence
(k, e
k) : s →s′,
where e
kx,y = (•, g(x, y)). we may extend the gr-functor (k, e
k) to a γ−functor
(k∆, g
k∆) : ∆→∆′
as follows:
k∆(x) = k(x) ; k∆(a, σ) = (k(a), σ) ; g
k∆= e
k.
it is easy to see that (k∆, g
k∆) is a γ−equivalence. hence, ωis an injection.
references
[1] a.m. ceggara, a.r.garzn and j.a.ortega graded extensions of monoidal categories.
journal of algebra. 241 (2001),620-657.
[2] a. m. cegarra, j. m. garca - calcines and j. a. ortega, on grade categorical groups
and equivariant group extensions. canad. j. math. vol. 54(5), 2002 pp. 970 - 997.
[3] a. m. cegarra, j. m. garca - calcines and j. a. ortega,cohomology of groups with
oprators. homology homotopy appl. (1) 4(2002), 1 - 23.
[4] a. frolich and c. t. c wall, graded monoidal categories. compositio math. 28 (1974).
229-285.
[5] s. mac lane, homology, springer- verlag, berlin and new york, 1963.
[6] n. t. quang, on gr-functors between gr-categories: obstruction theory for gr-functors
of the type (φ, f), arxiv: 0708.1348 v2 [math.ct] 18 apr 2009.
11
address: department of mathematics
hanoi national university of education
136 xuan thuy street, cau giay district, hanoi, vietnam.
email: [email protected]
12
|
0911.1690 | canonical quantization of a dissipative system interacting with an
anisotropic non-linear absorbing environment | a canonical quantization scheme is represented for a quantum system
interacting with a nonlinear absorbing environment. the environment is taken
anisotropic and the main system is coupled to its environment through some
coupling tensors of various ranks. the nonlinear response equation of the
environment against the motion of the main system is obtained. the nonlinear
langevin-schr\"{o}dinger equation is concluded as the macroscopic equation of
motion of the dissipative system. the effect of nonlinearity of the environment
is investigated on the spontaneous emission of an initially excited two
level-atom imbedded in such an environmrnt.
| introduction
the simplest way of describing a damped system in classical dynamics is
by adding a resisting force, generally velocity-dependent, to the equation of
motion of the system. frequently the magnitude of the resisting force may
be closely presented, over a limited range of velocity, by the law fd = avn,
where v is the velocity of the damped system and a and n are constants.
for example for the friction force n = 0, viscous force n = 1 and for high
speed motion n = 2 [1]. such an approach is no longer possible in quantum
mechanics, because one can not find a unitary time evolution operator for
both the states and the observables, consistently.
in order to take into account the dissipation in a quantum system, there are
usually two approaches. the first approach is a phenomenological way, by
which the effect of dissipation is taken into account by constructing a suitable
lagrangian or hamiltonian for the system [2, 3]. following this method the
first hamiltonian was proposed by caldirola [4] and kanai [5] and afterward
by others [6, 7]. there are difficulties about the quantum mechanical solu-
tions of the caldirola-kanai hamiltonian. for example quantization using
this way violates the uncertainty relations or canonical commutation rules.
the uncertainty relations vanishes as time tends to infinity [8]-[11].
the second approach is based on the assumption that the damping forces
is caused by an irreversible transfer of energy from the system to a reservoir
[12, 13]. in this method , modeling the absorptive environment by a col-
lection of harmonic oscillators and choosing a suitable interaction between
the system and the oscillators, a consistent quantization is achieved for both
the main system and the environment[14]-[26]. in the heisenberg picture,
one can obtain the linear langevin-schr ̈
odinger equation, as the macroscopic
equation of motion of the main system.[14, 15].
in the present work, following the second approach, a fully canonical
quantization is introduced for a system moving in an anisotropic non-linear
absorbing environment.
the dissipative system is the prototype of some
important problems which the present approach can be applied to cover such
problems straightforwardly.
the paper is organized as follows: in section 2, a lagrangian for the total
system (the main system and the environment) is proposed and a classical
treatment of the dissipative system is achieved. in section 3, the lagrangian
introduced in the section 2 is used for a canonical quantization of both the
2
main system and the non-linear environment. in section 4, the present quan-
tization is used to investigate the effect of the nonlinearity of the environment
on the spontaneous decay rate of an initially excited two-level atom embedded
in the absorbing environment. finally, the paper is closed with a summary
and some concluding remarks in section 5.
2
three-dimensional quantum dissipative sys-
tems
when an absorbing environment responds non-linearly against the motion of
a system, the non-linear langevin-schr ̈
odinger equation is usually appeared
as the macroscopic equation of motion of the system. as an example, when
the electromagnetic field is propagated in an absorbing non-linear polarizable
medium, the vector potential satisfies the non-linear langevin-schr ̈
odinger
equation. in this section, the motion of a three-dimensional system in the
presence of an anisotropic non-linear absorbing environment is classically
treated. for this purpose , the environment is modeled by a continium of
three dimensional harmonic oscillators labeled by a continuous parameter ω.
the total lagrangian is proposed as
l(t) = le + ls + lint.
(1)
which is the sum of three pars. the part le is the lagrangian of the envi-
ronment
le(t) =
z ∞
0
dω
1
2
̇
x(ω, t) * ̇
x(ω, t) −1
2ω2 x(ω, t) * x(ω, t)
.
(2)
where x(ω, t) is the dynamical variable of the oscillator labeled by ω. the
second part ls in (1) is the lagrangian of the main system.
taking the
system as a particle with mass, m, moving under an external potential v (q),
one can write
ls = 1
2m ̇
q(t) * ̇
q(t) −v (q).
(3)
the last part lint in the total lagrangian (1) is the interaction term between
the system and its absorbing environment and includes both the linear and
3
nonlinear contributions as follows
lint =
z ∞
0
dω f (1)
ij (ω) ̇
qi(t) xj(ω, t)
+
z ∞
0
dω
z ∞
0
dω′f (2)
ijk(ω, ω′) ̇
qi(t) xj(ω, t)xk(ω′, t)
+
z ∞
0
dω
z ∞
0
dω′
z ∞
0
dω′′ f (3)
ijkl(ω, ω′, ω′′) ̇
qi(t) xj(ω, t)xk(ω′, t)xl(ω′′, t) + . . . * * *
(4)
where f (1), f (2), f (3), * * * are the coupling tensors of the main system and
its environment. as it is seen from (4) the coupling tensor f (1) describes
the linear contribution of the interaction part and the sequence f (2), f (3), ....
describe, respectively, the first order of the non-linear interaction part, the
second order of the non-linear interaction part and so on. the interaction la-
grangian (4) is the generalization of the lagrangian that previously has been
applied to quantize the electromagnetic field in the presence of anisotropic
linear magnetodielectric media [27].
the coupling tensors f (1), f (2), f (3), ... in (4) are the key parameters of this
quantization scheme. as it will be seen, in the next section, the susceptibil-
ity tensors of the environment (of the various ranks) are expressed in terms
of the coupling tensors. also the noise forces are obtained in terms of the
coupling tensors and the dynamical variables of the environment at t = −∞.
2.1
the classical lagrangian equations
the classical equations of motion of the total system can be obtained using
the principle of the hamilton's least action, δ
z
dt l(t) = 0. these equations
are the euler-lagrange equations. for the dynamical variables, x(ω, t), the
4
euler-lagrange equations are as
d
dt
δl
δ( ̇
xi(ω, t))
!
−
δl
δ(xi(ω, t)) = 0
i = 1, 2, 3
⇒ ̈
xi(ω, t) + ω2xi(ω, t) = ̇
qj(t)f (1)
ji (ω)
+
z ∞
0
dω′ ̇
qj(t)
h
f (2)
jik(ω, ω′) + f (2)
jki(ω′, ω)
i
xk(ω′, t)
+
z ∞
0
dω′
z ∞
0
dω′′ ̇
qj(t)
h
f (3)
jikl(ω, ω′, ω′′) + f (3)
jkil(ω′, ω, ω′′)
+ f (3)
jkli(ω′, ω′′, ω)
i
xk(ω′, t)xl(ω′′, t) + * * ** * *
(5)
also the lagrange equations for the freedom degrees of the main system are
obtained as follows
d
dt
δl
δ( ̇
qi(t))
!
−
δl
δ(qi(t)) = 0
i = 1, 2, 3
⇒m ̈
q(t) + ▽v (q) = − ̇
r(t)
(6)
where
ri(t) =
z ∞
0
dω f (1)
ij (ω) xj(ω, t) +
z ∞
0
dω
z ∞
0
dω′f (2)
ijk (ω, ω′)xj(ω, t) xk(ω′, t)
+
z ∞
0
dω
z ∞
0
dω′f (3)
ijkl(ω, ω′, ω′′) xj(ω, t) xk(ω, t) xl(ω′′, t) + * * *
(7)
in eq. (6) − ̇
r(t) is the force exerted on the main system due to its motion
inside the absorbing environment. it will be seen that the force − ̇
r(t) can be
separated into two parts. one part is the damping force which is dependent
on the various powers of the velocity of the main system. the second part is
the noise forces which has sinusodial time dependence. both the damping and
the noise forces are necessary for a consistent quantization of a dissipative
system. without the noise forces the quantization of a dissipative system
encounter inconsistency. according to the fluctuation- dissipation theorem
the absence of any of these two parts leads to the vanishing of the other part.
5
3
canonical quantization
in order to represent a canonical quantization, the canonical conjugate mo-
menta corresponding to the dynamical variables x(ω, t) and q should be
computed using the lagrangian (1). these momenta are as follows
qi(ω, t) =
δl
δ( ̇
xi(ω, t)) = ̇
xi(ω, t)
i = 1, 2, 3
(8)
pi(t) = δl
δ( ̇
qi) = m ̇
qi + ri(t)
i = 1, 2, 3.
(9)
having the canonical momenta, both the dissipative system and the envi-
ronment can be quantized in a standard fashion by imposing the following
equal-time commutation rules
[qi(t) , pj(t)] = ıħδij
(10)
[xi(ω, t) , qj(ω′, t)] = ıħδijδ(ω −ω′)
(11)
using the lagrangian (1) and the expressions for the canonical momenta
given by (8) and (9), the hamiltonian of the total system clearly can be
written as
h(t) = [p(t) −r(t)]2
2m
+ v (q) + 1
2
z ∞
0
dω
h
q2(ω, t) + ω2x2(ω, t)
i
(12)
where the cartesian components of r(t) is defined by (7). the hamiltonian
(12) is the counterpart of the hamiltonian of the quantized electromagnetic
field in the presence of magnetodielectric media[27]-[29]. using the commu-
tation relations (10), (11) and applying the total hamiltonian (12), it can
be shown that the combination of the heisenberg equations of motion of the
canonical variables x(ω, t) and q(ω, t) leads to the eq.(5). similarly, one
can obtain eq.(6) as the equation of motion of q(t) in the heisenberg picture.
let us introduce the annihilation and creation operators of the environment
as follows
bi(ω, t) =
s
1
2ħω [ωxi(ω, t) + ıqi(ω, t)] .
(13)
from the commutation relations (11) it is clear that the ladder operators
bi(ω, t) and b†
i(ω, t) obey the commutation relations
h
bi(ω, t), b†
j(ω′, t)
i
= δijδ(ω −ω′)
(14)
6
the hamiltonian (12) can be rewritten in terms of the creation and annihi-
lation operators bi(ω, t) and b†
i(ω, t) as follows
h = (p −r(t))2
2m
+ v (q) + hm
(15)
where
hm =
3
x
i=1
z
dω ħω b†
i(ω, t)bi(ω, t)
(16)
is the hamiltonian of the absorbing environment in the normal ordering form
and
ri(t) =
z ∞
0
dω
s
ħ
2ωf (1)
ij (ω)
h
bj(ω, t) + b†
j(ω, t)
i
+
z ∞
0
dω
z ∞
0
dω′
ħ
2
√
ωω′ f (2)
ijk(ω, ω′)
h
bj(ω, t)bk(ω′, t)+ + b†
j(ω, t)b†
k(ω′, t)
+bj(ω, t)b†
k(ω′, t) + b†
j(ω, t)bk(ω′, t)
i
+ * * *
(17)
are the cartesian components of the operator r, where the summation should
be done over the repeated indices.
3.1
the response equation of the environment
the response equation of the absorbing environment is the base of separating
the force − ̇
r(t) , in the right hand of (6), into two parts, that is, the damping
force and the noise force. if eq.(5) is solved for x(ω, t) and then, the obtained
solution is substituted into the definition of r(t) given by (7), one can obtain
the response equation of the environment. the differential equations (5) are
a continuous collection of coupled non-linear differential equations for the
dynamical variables x(ω, t). the exact solution of this equation is impossible
unless an iteration method to be used.
for simplicity here we apply the
first order of approximation and neglect the terms containing the coupling
tensors f (2), f (3), ... in the right hand of (5) and write the solution of eq.(5),
approximately, as
x(ω, t) = xn(ω, t) +
z t
−∞dt′sin ω(t −t′)
ω
(f (1))†(ω) * ̇
q(t′),
(18)
7
where (f (1))†
ij(ω) = f (1)
ji (ω) and xn(ω, t) is the solution of homogeneous
equation ̈
xn(ω, t) + ω2xn(ω, t) = 0. in fact xn(ω, t) is asymptotic form of
x(ω, t) for very large negative times and can be written as
xni(ω, t) =
s
ħ
2ω
h
bin
i (ω)e−ıωt + b†in
i (ω)eıωti
(19)
where bin(ω) and b†in(ω) are some time independent annihilation and creation
operators which obviously satisfy the same commutation relations (14). the
approximated solution (18) yields the response equation of the environment,
such that, the susceptibility tensors appearing in it, satisfy the various sym-
metry properties reported by the literature [30].
now substituting x(ω, t) from (18) in (7), the response equation of the non-
linear absorbing environment is found as follows
r(t) = r(1) + r(2) + · · ·
r(1)_i(t) = ∫_{−∞}^{+∞} dt′ χ(1)_ij(t − t′) q̇_j(t′) + r(1)_{n i}(t)
r(2)_i(t) = ∫_{−∞}^{+∞} dt′ ∫_{−∞}^{+∞} dt′′ χ(2)_ijk(t − t′, t − t′′) q̇_j(t′) q̇_k(t′′) + r(2)_{n i}(t)   (20)
where χ(1) is the susceptibility tensor of the environment in the linear regime
and is defined by
χ(1)_ij(t) = ∫₀^∞ dω (sin ωt/ω) f(1)_in(ω) f(1)_jn(ω)   for t > 0,   and   χ(1)_ij(t) = 0   for t ≤ 0   (21)
and χ(2)_ijk gives rise to the first order of nonlinearity of the response equation;
for t1, t2 ≥ 0 it is given by
χ(2)_ijk(t1, t2) = ∫₀^∞ dω1 ∫₀^∞ dω2 (sin ω1t1/ω1)(sin ω2t2/ω2) f(2)_inm(ω1, ω2) f(1)_jn(ω1) f(1)_km(ω2)   (22)
and χ(2)
ijk is zero for t1, t2 < 0. in (21) and (22) the summation should be
done over the repeated indices m, n.
from the definition (21) it is clear
that χ(1) is a symmetric tensor, χ(1)_ij = χ(1)_ji. there are also some symmetry
features for the non-linear susceptibility tensors of the various orders. these
symmetry properties can be satisfied by imposing some conditions on the
coupling tensors f (2), f (3), ... .
for example the susceptibility tensor χ(2) should satisfy the symmetry property [30]
χ(2)_ijk(t1, t2) = χ(2)_ikj(t2, t1)   (23)
which is fulfilled provided that the coupling tensor f(2) obeys the symmetry
condition
f(2)_ijk(ω, ω′) = f(2)_ikj(ω′, ω),   (24)
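as a rough numerical illustration of definitions (21)–(24), the following python sketch (not part of the original derivation) evaluates χ(1) and χ(2) by simple quadrature for a toy, frequency-independent coupling matrix with an assumed exponential cut-off, and checks the symmetry properties numerically; every concrete choice below is an assumption made only for this example.

```python
import numpy as np

# toy, assumed coupling tensors (not from the paper): an arbitrary real 3x3 matrix for f1
# and an f2 symmetrized in its last two indices so that condition (24) holds by construction
rng = np.random.default_rng(0)
f1 = rng.normal(size=(3, 3))
f2 = rng.normal(size=(3, 3, 3))
f2 = 0.5 * (f2 + np.swapaxes(f2, 1, 2))           # f2[i, j, k] = f2[i, k, j]

omega = np.linspace(1e-4, 50.0, 20000)            # quadrature grid
dw = omega[1] - omega[0]
cutoff = np.exp(-omega / 10.0)                    # assumed soft cut-off so the integrals converge

def kernel(t):
    """the scalar factor  integral_0^inf domega sin(omega*t)/omega * cutoff(omega)."""
    return np.sum(np.sin(omega * t) / omega * cutoff) * dw

def chi1(t):
    """chi^(1)_ij(t) of eq. (21); zero for t <= 0."""
    return kernel(t) * (f1 @ f1.T) if t > 0 else np.zeros((3, 3))

def chi2(t1, t2):
    """chi^(2)_ijk(t1, t2) of eq. (22); zero unless t1, t2 > 0."""
    if t1 <= 0 or t2 <= 0:
        return np.zeros((3, 3, 3))
    return kernel(t1) * kernel(t2) * np.einsum('inm,jn,km->ijk', f2, f1, f1)

c = chi1(1.0)
print(np.allclose(c, c.T))                                             # chi^(1)_ij = chi^(1)_ji
print(np.allclose(chi2(0.3, 0.7), chi2(0.7, 0.3).transpose(0, 2, 1)))  # relation (23)
```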
similarly, inserting the approximate solution x(ω, t) from (18) into (7), one
can obtain the (n − 1)-th susceptibility tensor of the environment in the
non-linear regime, which for t1, t2, ..., tn ≥ 0 is given by
χ(n)_{i i1...in}(t1, t2, ..., tn) = ∫₀^∞ dω1 ∫₀^∞ dω2 · · · ∫₀^∞ dωn (sin ω1t1/ω1)(sin ω2t2/ω2) · · · (sin ωntn/ωn)
× f(n)_{i j1j2...jn}(ω1, ω2, ..., ωn) f(1)_{i1j1}(ω1) f(1)_{i2j2}(ω2) · · · f(1)_{injn}(ωn)   (25)
and χ(n)_{i i1...in}(t1, t2, ..., tn) is identically zero for t1, t2, ..., tn < 0. the susceptibility
tensor χ(n)_{i i1...in}(t1, t2, ..., tn) should satisfy the symmetry relations [30]
χ(n)_{i i1...ik...il...in}(t1, t2, ..., tk, ..., tl, ..., tn) = χ(n)_{i i1...il...ik...in}(t1, t2, ..., tl, ..., tk, ..., tn)   (26)
where this symmetry relation is clearly fulfilled by imposing the symmetry
conditions
f(n)_{i j1j2...jk...jl...jn}(ω1, ω2, ..., ωk, ..., ωl, ..., ωn) = f(n)_{i j1j2...jl...jk...jn}(ω1, ω2, ..., ωl, ..., ωk, ..., ωn)   (27)
on the n'th coupling tensor in the interaction lagrangian (4).
in eq. (20), r(1)_n(t) and r(2)_n(t) are the noise forces in the linear regime
and in the first order of non-linearity, respectively, and using the symmetry
relation (24) are obtained as
r(1)_{n i}(t) = ∫₀^∞ dω f(1)_ij(ω) x^j_n(ω, t)
r(2)_{n i}(t) = ∫₀^∞ dω ∫₀^∞ dω′ f(2)_ijk(ω, ω′) x^j_n(ω, t) x^k_n(ω′, t)
+ ∫₀^∞ dω ∫₀^∞ dω′ f(2)_inm(ω, ω′) f(1)_jm(ω′) ∫_{−∞}^{t} dt′ (sin ω′(t − t′)/ω′)
× [x^n_n(ω, t) q̇_j(t′) + q̇_j(t′) x^m_n(ω′, t)]   (28)
where the summation should be done over the repeated indices and xn(ω, t)
is the asymptotic solution (19).
it is remarkable that for some known susceptibility tensors χ(1), χ(2), ..., χ(n),
the coupling tensors f (1), f (2), ...f (n) satisfying the definitions (21),(22) and
(25) are not unique.
in fact if the coupling tensors f (1), f (2), ..., f (n) sat-
isfy (21),(22) and (25) for the given susceptibility tensors, also the coupling
tensors f ′(1), f ′(2), ..., f ′(n) defined by
f′(1)_ij = f(1)_im a_jm
f′(n)_{i i1i2...in} = f(n)_{i j1j2...jn} a_{i1j1} a_{i2j2} · · · a_{injn}   (29)
satisfy (21), (22) and (25), where a is an orthogonal matrix, a_im a_mj = δ_ij.
the various choices of the coupling tensors f(1), f(2), ..., f(n) which are related
to each other by the orthogonal transformation (29) do not change the physical
observables. the commutation relations between the dynamical variables
of the total system remain unchanged under the orthogonal transformation
(29). for example, in the next section it is shown that the decay rate of an
initially excited two-level atom, embedded in a non-linear absorbing environment,
is independent of the various choices of the coupling tensors which are
related to each other by the transformation (29).
now the combination of the response equation (20) and equation (6) yields the
non-linear langevin-schrödinger equation
m q̈_i(t) + ∫_{−∞}^{+∞} dt′ χ̇(1)_ij(t − t′) q̇_j(t′)
+ ∫_{−∞}^{+∞} dt′ ∫_{−∞}^{+∞} dt′′ χ̈(2)_ijk(t − t′, t − t′′) q̇_j(t′) q̇_k(t′′) + · · ·
+ ∂v(q)/∂q_i = −ṙ(1)_{n i}(t) − ṙ(2)_{n i}(t) + · · ·   (30)
as the macroscopic equation of motion of the main system in the anisotropic
non-linear absorbing environment. the velocity-dependent terms on the left-hand
side of this equation are the damping forces exerted on the main system.
the noise forces −ṙ(1)_{n i}, −ṙ(2)_{n i}, ... on the right-hand side of (30) are necessary for
a consistent quantization of the dissipative system. as a realization, if this
quantization method were applied to the electromagnetic field in the
presence of an absorbing non-linear dielectric medium, the vector potential
would satisfy equation (30). in that case, the tensors χ(1), χ(2), ... would
play the role of the electric susceptibility tensors and −ṙ(1)_n, −ṙ(2)_n, ... would
be the noise polarization densities of various orders.
4
the effect of nonlinearity of the environment on the spontaneous emission of a two-level atom imbedded in an absorbing environment
in this section the effect of non-linearity of the absorbing environment is
investigated on the spontaneous emission of a two-level atom embedded in
such an environment. to calculate the spontaneous decay rate of an initially
excited two-level atom, the quantization scheme in the preceding section is
used and the theory of damping based on the density operator method is
applied [31].
neglecting the second power of the operator r in (15) the
hamiltonian of the total system can be written as
h = h0 + h′
h0 = hs + hm = p²/(2m) + v(q) + Σ_{i=1}^{3} ∫ dω ħω b†_i(ω) b_i(ω)
h′ = −p · r   (31)
let us suppose the main system is a one-electron atom with two eigenstates
|1⟩ and |2⟩ corresponding to the eigenvalues e1 and e2, respectively (e2 > e1).
the hamiltonian (31) can now be rewritten as [31, 32]
h = h0 + h′
h0 = ħω0 σ†σ + Σ_{i=1}^{3} ∫ dω ħω b†_i(ω) b_i(ω)
ω0 = (e2 − e1)/ħ
h′ = ı mω0 r · [dσ − d∗σ†]   (32)
where m is the electron mass of the atom, σ = |1⟩⟨2|, σ† = |2⟩⟨1| are
the pauli operators and d = ⟨1|r|2⟩, where r is the position vector of the
electron with respect to the center of mass of the atom. dropping the energy
nonconserving terms, which corresponds to the rotating wave approximation, and
regarding the relation (17), the interaction term h′ up to the first order of nonlinearity
in the interaction picture is expressed as
h′_i(t) = e^{ıh0t/ħ} h′(0) e^{−ıh0t/ħ}
= ımω0 ∫₀^∞ dω √(ħ/(2ω)) f(1)_ij(ω) [d_i σ b†_j(ω) e^{ı(ω−ω0)t} − h.c.]
+ ımω0 ∫₀^∞ dω ∫₀^∞ dω′ (ħ/(2√(ωω′))) f(2)_ijk(ω, ω′) [d_i σ b†_j(ω) b†_k(ω′) e^{−ı(ω0−ω−ω′)t}
+ d_i σ b_j(ω) b†_k(ω′) e^{−ı(ω0+ω−ω′)t} + d_i σ b†_j(ω) b_k(ω′) e^{−ı(ω0−ω+ω′)t}
− d∗_i σ† b_j(ω) b_k(ω′) e^{ı(ω0−ω−ω′)t} − d∗_i σ† b_j(ω) b†_k(ω′) e^{ı(ω0−ω+ω′)t}
− d∗_i σ† b†_j(ω) b_k(ω′) e^{ı(ω0+ω−ω′)t}]   (33)
where the symmetry relation (23) has been used. let the combined density
operator of the atom together with the environment be denoted by ρsr in the
interaction picture. then, the reduced density operator of the atom alone,
denoted by ρs, is obtained by taking the trace of ρsr with respect to the
coordinates of the environment, that is ρs = trr[ρsr]. since it is assumed
that h′_i(t) is sufficiently small, according to the density operator approach to
damping theory [31], the time evolution of the reduced density operator
ρs is the solution of the equation
ρ̇s(t) = −(ı/ħ) trr[h′_i(t), ρs(0) ⊗ ρr(0)]
− (1/ħ²) trr ∫₀^t dt′ [h′_i(t), [h′_i(t′), ρs(t) ⊗ ρr(0)]]   (34)
up to order h′²_i, where ρr(0) is the density operator of the environment
at t = 0. in this formalism the environment is taken in equilibrium. also
the markovian approximation has been applied replacing ρs(t′) by ρs(t) in
the integrand in eq.(34).
to calculate the spontaneous emission of the atom, the initial states of the
atom and the environment are taken as
ρr(0) = |0⟩⟨0|
ρs(0) = |2⟩⟨2|
(35)
where |0⟩is the vacuum state of the environment. now substituting h′
i(t)
from (33) into (34) and regarding (35) the time evolution of the reduced
density operator ρs is obtained as
ρ̇s = (mω0/2) ∫₀^∞ (dω/ω) f(2)_ijj(ω, ω) [d_i σ e^{−ıω0t} + h.c.]
− (m²ω0²/(2ħ)) ∫₀^∞ (dω/ω) d∗_i f(1)_ij(ω) f(1)_lj(ω) d_l ∫₀^t dt′ e^{−ı(ω−ω0)(t−t′)} σ†σ ρs(t) + h.c.
+ (m²ω0²/ħ) ∫₀^∞ (dω/ω) d∗_i f(1)_ij(ω) f(1)_lj(ω) d_l σ ρs(t) σ† ∫₀^t dt′ cos(ω − ω0)(t − t′)
− (m²ω0²/2) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d_{i1} f(2)_{i1j1j1}(ω1, ω1) f(2)_{i2j2j2}(ω2, ω2) d_{i2} × ∫₀^t dt′ σ ρs(t) σ e^{−ıω0(t+t′)} + h.c.
+ (m²ω0²/2) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j1}(ω1, ω1) f(2)_{i2j2j2}(ω2, ω2) d_{i2} × [σ† ρs(t) σ + σ ρs(t) σ†] ∫₀^t dt′ cos ω0(t − t′)
+ m²ω0² ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j2}(ω1, ω2) f(2)_{i2j1j2}(ω1, ω2) d_{i2} × σ ρs(t) σ† ∫₀^t dt′ cos(ω0 − ω1 − ω2)(t − t′)
− (m²ω0²/4) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j1}(ω1, ω1) f(2)_{i2j2j2}(ω2, ω2) d_{i2} × ∫₀^t dt′ e^{−ıω0(t−t′)} σσ† ρs(t) + h.c.
− (m²ω0²/4) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j1}(ω1, ω1) f(2)_{i2j2j2}(ω2, ω2) d_{i2} × ∫₀^t dt′ e^{−ıω0(t−t′)} ρs(t) σ†σ + h.c.
− (m²ω0²/2) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j2}(ω1, ω2) f(2)_{i2j1j2}(ω1, ω2) d_{i2} × ∫₀^t dt′ e^{−ı(ω0−ω1−ω2)(t−t′)} σ†σ ρs(t) + h.c.
where the repeated indices imply that the summation should be done
over them. then, the equations of motion of the matrix elements ρs11 =
⟨1|ρs|1⟩, ρs22 = ⟨2|ρs|2⟩ and ρs12 = ρ∗s21 = ⟨1|ρs|2⟩ are obtained as
ρ̇s11 = (m²ω0²/ħ) ∫₀^∞ (dω/ω) d∗_i f(1)_ij(ω) f(1)_lj(ω) d_l ρs22(t) ∫₀^t dt′ cos(ω − ω0)(t − t′)
+ m²ω0² ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j2}(ω1, ω2) f(2)_{i2j1j2}(ω1, ω2) d_{i2} × ρs22(t) ∫₀^t dt′ cos(ω0 − ω1 − ω2)(t − t′)
+ (m²ω0²/2) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j1}(ω1, ω1) f(2)_{i2j2j2}(ω2, ω2) d_{i2} ρs22(t) × [ρs22(t) − ρs11(t)] ∫₀^t dt′ cos ω0(t − t′)   (36)
ρ̇s22 = −(m²ω0²/ħ) ∫₀^∞ (dω/ω) d∗_i f(1)_ij(ω) f(1)_lj(ω) d_l ρs22(t) ∫₀^t dt′ cos(ω − ω0)(t − t′)
− m²ω0² ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j2}(ω1, ω2) f(2)_{i2j1j2}(ω1, ω2) d_{i2} × ρs22(t) ∫₀^t dt′ cos(ω0 − ω1 − ω2)(t − t′)
− (m²ω0²/2) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d∗_{i1} f(2)_{i1j1j1}(ω1, ω1) f(2)_{i2j2j2}(ω2, ω2) d_{i2} ρs22(t) × [ρs22(t) − ρs11(t)] ∫₀^t dt′ cos ω0(t − t′)   (37)
ρ̇s12 = ρ̇∗s21 = (mω0/2) ∫₀^∞ (dω/ω) f(2)_ijj(ω, ω) d_i e^{−ıω0t}
− (m²ω0²/2) ∫₀^∞ dω1 ∫₀^∞ dω2 (1/(ω1ω2)) d_{i1} f(2)_{i1j1j1}(ω1, ω1) f(2)_{i2j2j2}(ω2, ω2) d_{i2} × ρs22(t) ∫₀^t dt′ e^{−ıω0(t+t′)}   (38)
for sufficiently large times the integrals appearing in equations (36) and
(37) can be approximated by
(1/π) ∫₀^t dt′ cos(ω − ω0)(t − t′) ∼ δ(ω − ω0)
(1/π) ∫₀^t dt′ cos(ω0 − ω1 − ω2)(t − t′) ∼ δ(ω0 − ω1 − ω2)
(1/π) ∫₀^t dt′ cos ω0(t − t′) ∼ δ(ω0) = 0   (39)
hence the time evolution of the matrix elements ρs11 and ρs22 for sufficiently
large times is reduced to
ρ̇s11 = γ ρs22
ρ̇s22 = −γ ρs22   (40)
where
γ = (πmω0²/ħ) d∗_i f(1)_ij(ω0) f(1)_lj(ω0) d_l
+ πm²ω0² ∫₀^∞ dω (1/(ω(ω − ω0))) d∗_{i1} f(2)_{i1j1j2}(ω, ω − ω0) f(2)_{i2j1j2}(ω, ω − ω0) d_{i2}   (41)
is the decay rate of the spontaneous emission of the initially excited two-level
atom up to the first order of nonlinearity. the first term in (41) is the
decay rate in the absence of nonlinearity effects and the second term is the
first contribution related to nonlinear effects of the environment. it may be
noted from (40) that ρ̇s11 + ρ̇s22 = 0, which implies the conservation of
probability. an important point is that the decay rate γ is invariant under the
various choices of the coupling tensors which are related to each other by the
transformation (29). this should be so, because the decay rate γ is a physical observable.
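a quick numerical check of this invariance for the linear part of (41) can be sketched as follows; the coupling matrix, the dipole vector (taken real) and the random orthogonal matrix are arbitrary assumptions used only for the test.

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = rng.normal(size=(3, 3))     # assumed linear coupling tensor f^(1)_ij evaluated at omega_0
d = rng.normal(size=3)           # assumed (real) dipole matrix element d_i

a, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal matrix a
f1_prime = f1 @ a.T                            # f'^(1)_ij = f^(1)_im a_jm, transformation (29)

def linear_part(f):
    # d*_i f_ij f_lj d_l = d^T (f f^T) d, the first term of (41) without the constant prefactor
    return d @ (f @ f.T) @ d

print(np.isclose(linear_part(f1), linear_part(f1_prime)))   # True: the rate is unchanged
```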
5
summary
a fully canonical quantization of a quantum system moving in an anisotropic
non-linear absorbing environment was introduced. the main dissipative sys-
tem was coupled with the environment through some coupling tensors of
various ranks. the coupling tensors have an important role in this theory.
based on a response equation, the forces against the motion of the main
system were resolved into two parts, the damping forces and the noise forces.
the response equation of the environment was obtained using the heisenberg
equations describing the time evolution of the coordinates of the system and
the environment. some susceptibility tensors of various ranks were attributed
to the environment. the susceptibility tensors in the linear and non-linear
regimes were defined in terms of the coupling tensors of the system and its
environment. it was shown that, by imposing some symmetry conditions on
the coupling tensors, the susceptibility tensors obey the symmetry proper-
ties reported in the literature. a realization of this quantization method is
the quantized electromagnetic field in the presence of a non-linear absorbing
dielectric. finally the effect of the nonlinearity of the environment was in-
vestigated on the spontaneous decay rate of a two-level atom imbedded in
the non-linear environment.
references
[1] m. razavy, classical and quantum dissipative systems, imperial col-
lege press(2005).
[2] j. messer, acta phys. austriaca 50, 75 (1979).
[3] h. dekker, phys. rep. 80, 1 (1981).
[4] p. caldirola, nuovo cimento 18, 393 (1941).
[5] e. kanai, prog. theoret. phys. 3, 440 (1948).
[6] p. havas, nuovo cim. suppl. 5, 363 (1957).
[7] h. h. denman, am. j. phys. 34, 1147 (1966).
[8] w. e. brittin, phys. rev. 77,396 (1950).
[9] p. havas, bull. am. phys. soc. 1, 337 (1956).
[10] g. valentini, rend. ist. lomb. sci. a 95, 255 (1961).
[11] m. razavy, can. j. phys. 50, 2037 (1972).
[12] h. haken, rev. mod. phys. 47, 67 (1975).
[13] g. nicolis, i. prigogine, self-organization in non-equilibrium systems,
wiley, new york (1977).
[14] g. w. ford, j. t. lewis, r. f. o'connell, phys. rev. a 37, 4419 (1988).
[15] g. w. ford, j. t. lewis, r. f. o'connell, phys. rev. lett. 55, 2273 (1985).
[16] a. o. caldeira, a. j. leggett, phys. rev. lett. 46, 211(1981).
[17] a. o. caldeira, a. j. leggett, ann. phys. (n.y.) 149, 374(1983).
[18] a. h. castro neto, a. o. caldeira, phys. rev. lett. 67, 1960(1991).
[19] b. l. hu, j. p. paz, y. zhang, phys. rev. d 45, 2843(1992).
[20] jie-lou liao, e. pollak, chem. phys. 268, 295 (2001).
[21] a. o. caldeira, a. j. leggett, physica 121a, 585 (1983); 130a, 374 (1985).
[22] a. o. caldeira, a. j. leggett, ann. phys. (n.y.) 153, 445(1984).
[23] a. j. leggett, s. chakravarty, a. t. dorsey, m. p. a. fisher, a. garg,
w. zwerger, rev. mod. phys. 59, 1 (1987).
[24] h. g. shuster, v. r. veria, phys. rev. b 34, 189(1986).
[25] g. w. ford, m. kac, p. mazur, j. math. phys. 6, 504(1964).
[26] f. kheirandish, m. amooshahi, mod. phys. lett. a 20, no. 39, 3025 (2005).
[27] f. kheirandish, m. amooshahi, phys. rev. a 74, 042102 (2006).
[28] m. amooshahi, f. kheirandish, j. phys. a: math. theor. 41, 275402 (2008).
[29] m. amooshahi, j. math. phys. 50, 062301 (2009).
[30] guang s. he, song h. liu, physics of nonlinear optics, world scientific
(1999).
[31] m. o. scully, m. s. zubairy, quantum optics, cambridge, university
press (1997).
[32] p. w. milonni, the quantum vacuum, academic press(1994).
|
0911.1691 | vertical partitioning of relational oltp databases using integer
programming | a way to optimize performance of relational row store databases is to reduce
the row widths by vertically partitioning tables into table fractions in order
to minimize the number of irrelevant columns/attributes read by each
transaction. this paper considers vertical partitioning algorithms for
relational row-store oltp databases with an h-store-like architecture, meaning
that we would like to maximize the number of single-sited transactions. we
present a model for the vertical partitioning problem that, given a schema
together with a vertical partitioning and a workload, estimates the costs
(bytes read/written by storage layer access methods and bytes transferred
between sites) of evaluating the workload on the given partitioning. the cost
model allows for arbitrarily prioritizing load balancing of sites vs. total
cost minimization. we show that finding a minimum-cost vertical partitioning in
this model is np-hard and present two algorithms returning solutions in which
single-sitedness of read queries is preserved while allowing column replication
(which may allow a drastically reduced cost compared to disjoint partitioning).
the first algorithm is a quadratic integer program that finds optimal
minimum-cost solutions with respect to the model, and the second algorithm is a
more scalable heuristic based on simulated annealing. experiments show that the
algorithms can reduce the cost of the model objective by 37% when applied to
the tpc-c benchmark and the heuristic is shown to obtain solutions with cost
close to the ones found using the quadratic program.
| introduction
in this paper we consider oltp databases with an h-store [? ] like architecture
in which we would aim for maximizing the number of single-sited transactions
(i.e.
transactions that can be run to completion on a single site).
given a
database schema and a workload we would like to reduce the cost of evaluating
the workload. in row-stores, where each row is stored as a contiguous segment
and access is done in quantums of whole rows, a significant amount of super-
fluous columns/attributes (we will use the term attribute in the following) are
likely to be accessed during evaluation of a workload. it is easy to see that this
superfluous data access may have a negative impact on performance so in an
optimal world the amount of data accessed by each query should be minimized.
one approach to this is to perform a vertical partitioning of the tables in the
schema. a vertical partitioning is a, possibly non-disjoint, distribution of at-
tributes and transactions onto multiple physical or logical sites. (notice, that
vertical and horizontal partitioning are not mutually exclusive and can perfectly
be used together). the optimality of a vertical partitioning depends on the con-
text: olap applications with lots of many-row aggregates will likely benefit
from parallelizing the transactions on multiple sites and exchanging small sub-
results between the sites after the aggregations. oltp applications on the other
hand, with many short-lived transactions, no many-row aggregates and with few
or no few-row aggregates would likely benefit from gathering all attributes read
by a query locally on the same site: inter-site transfers and the synchronization
mechanisms needed for non-single-sited or parallel queries (e.g. undo and redo
logs) are assumed to be bottlenecks in situations with short transaction dura-
tions. ? ] and ? ] discuss the benefits of single-sitedness in high-throughput
oltp databases in more detail.
this paper presents a cost model together with two algorithms that find
either optimal or close-to-optimal vertical partitionings with respect to the cost
model. the two algorithms are based on quadratic programming and simulated
annealing, respectively. for a given partitioning and a workload, the cost model
estimates the number of bytes read/written by access methods in the storage
layer and the amount of data transfer between sites. our model is made with a
specific setting in mind, captured by five headlines:
oltp the database is a transaction processing system with many short lived
transactions.
aggregates no many-row aggregates and few (or no) aggregates on small row-
subsets.
preserve single-sitedness we should try to avoid breaking single-sitedness
as a large number of single-sited transactions will reduce the need for
inter-site transfers and completely eliminate the need for undo and redo
logs for these queries if the partitioning is performed on an h-store like
dmbs [? ].
workload known transactions used in the workload together with some run-
time statistics are assumed to be known when applying the algorithms.
furthermore, following the consensus in the related work (see section 1.3) we
simplify the model by not considering time spent on network latency (if all
vertical partitions are placed locally on a single site, then time spent on network
latency is trivially zero anyway). a description of how to include latency in the
model at the expense of increased complexity can be found in appendix a.
1.1
outline of approach
the basic idea is as follows. we are given an input in form of a schema together
with a workload in which queries are grouped into transactions, and each query
is described by a set of statistical properties.
for each query q in the workload and for each table r accessed by q the
input provides the average number nr of rows from table r that is retrieved
from or written to storage by query q. together with the (average) width wa of
each attribute a from table r this generally gives a good estimate for how much
attribute a costs in retrievals/writes by access methods for each evaluation of
query q, namely w ′
a,q = wa * nr.
given a set of sites, the challenge is now to find a non-disjoint distribution
of all attributes, and a disjoint distribution of transactions to these sites so
that the costs of retrievals, writes and inter-site transfers, each defined in terms
of w ′
a,q as explained in details below, is minimized.
this means, that the
primary executing site of any given query is assumed to be the site that hosts
the transaction holding that query.
as mentioned above, our algorithms will not break single-sitedness for read
queries and therefore no additional costs are added to the execution of read
queries by applying this algorithm. in contrast, since the storage costs (the sum
of retrieval, write and inter-site transfer costs) for a query are minimized and each
tuple becomes as narrow as possible, the total costs of evaluating the queries (e.g.
processing joins, handling intermediate subresults, etc.) are assumed to be, if
not minimized, then reduced too.
1.2
contributions
this paper contributes with the following:
• an algorithm optimized for h-store like architectures, preserving single-
sitedness for read queries and in which load balance among sites versus
minimization of total costs can be prioritized arbitrarily,
• a more scalable heuristic, and
• a micro benchmark of a) both algorithms based on tpc-c and a set of
random instances, b) a comparison between the benefits of local versus
remote partition location, and c) a comparison between disjoint and non-
disjoint partitioning.
1.3
related work
a lot of work has been done on data allocation and vertical partitioning but
to the best of our knowledge, no work solves the exact same problem as the
present paper: distributing both transactions and attributes to a set of sites,
allowing attribute replication, preserving single-sitedness for read queries and
prioritizing load balancing vs. total cost minimization. we therefore order the
references below by increasing estimated problem similarity and do not mention
work dedicated on vertical partitioning of olap databases.
in ? ? ] reduced the cost of information retrieval by vertically partitioning
records into a primary and a secondary record segment.
this was done by
constructing a bi-partite graph with two node sets: one set with a node for each
attribute and one set with a node for each transaction. by connecting attribute
and transaction nodes with a weighted edge according to their affinity, a min-cut
algorithm could be applied to construct the partitioning.
? ] assumed a set of horizontal and vertical fragments of a database was
known in advance and produced a disjoint distribution of these fragments onto a
set of network-connected processors using a greedy first-fit bin packing heuristic.
similarly, ? ] distributed a set of predefined fragments to a set of sites, but used
a linearized quadratic program to compute the solution.
? ] took as input a geographically distributed database together with statis-
tics for a query pattern on this database and produced as output a non-disjoint
distribution of whole database tables to the physical sites so that the total
amount of transfer was minimized. they modelled the problem as a linearized
quadratic program which was solved in practice using heuristics. the costs of
joins were minimized by first transferring join keys and then transferring the
relevant attributes for the relevant rows to a single collector site.
? ] constructed a disjoint partitioning with non-remote partition placement.
they used an attribute affinity matrix to represent a complete weighted graph
and generated partitions by finding a linearly connected spanning tree and con-
sidering a cycle as a fragment.
? ] generated a non-remote, disjoint partitioning minimizing the amount
of disk access by recursively applying a binary partitioning. the partitioning
decisions were based on an integer program and with strong assumptions on a
system-r like architecture when estimating the amount of disk access.
? ] also constructed a disjoint partitioning with non-remote partition place-
ment. they used a two-phase strategy where the first phase generated all rel-
evant attribute groups using association rules [? ] considering only one query
at a time, and the second phase merged the attribute groups that were useful
across queries.
?
] presented an algorithm for generating disjoint partitioning by either
minimizing costs or by ensuring that exactly k vertical fragments were produced.
inter-site transfer costs were not considered. the partitioning was produced
using a bottom-up strategy, iteratively merging two selected partitions with
the best "merge profit" until only one large super-partition existed. the k-
way partitioning was found at the iteration having exactly k partitions and the
lowest-cost partitioning was found at the iteration with the lowest cost.
? ] minimized the amount of disk access by constructing a non-remote and
non-disjoint vertical partitioning.
two binary partitioning algorithms based
on the branch-and-bound method were presented with varying complexity and
accuracy.
the partitionings were formed by recursively applying the binary
partitioning algorithms on the set of "reasonable cuts".
? ] did not present an algorithm but gave an interesting objective function
for evaluating vertical partitionings. the function was based on the square-error
criterion as given in [? ] for data clustering, but did not cover placement of
transactions which, in our case, has a large influence on the expected costs.
? ] considered the vertical partitioning problem for three different environ-
ments: a) single site with one memory level, b) single site with several memory
levels, and c) multiple sites. the partitions could be both disjoint and non-
disjoint. a clustering algorithm grouped attributes with high affinity by using
an attribute affinity matrix together with a bond energy algorithm [? ]. three
basic algorithms for generating partitions were presented which, depending on
the desired environment, used different prioritization of four access and transfer
cost classes.
1.4
outline of paper
in section 2 we derive a cost model together with a quadratic program defining
the first algorithm. section 3 describes a heuristic based on the cost model
found in section 2, and section 4 discusses a couple of ideas for improvements.
computational results are shown in section 5.
2
a linearized qp approach
in this section we develop our base model, a quadratic program (qp), which
will later be extended to handle load balancing and then linearized in order to
solve it using a conventional mixed integer program (mip) solver.
2.1
the base model
in a vertical partitioning for a schema and a workload we would like to minimize
the sum
a + pb
(1)
where a is the amount of data accessed locally in the storage layer, b is the
amount of data needed to be transferred over the network during query updates
and p is a penalty factor.
we assume that each transaction has a primary executing site. for each
transaction t ∈t , each table attribute a ∈a, and each site s ∈s consider
two decision variables xt,s ∈{0, 1} and ya,s ∈{0, 1} indicating if transaction
t is executed on site s and if attribute a is located on site s, respectively. all
transactions must be located at exactly one site (their primary executing site),
that is
Σ_{s∈s} xt,s = 1,   ∀t ∈ t   (2)
and all attributes must be located at at least one site, that is
Σ_{s∈s} ya,s ≥ 1,   ∀a ∈ a.
to determine the size of a and b from equation (1) introduce five new static
binary constants describing the database schema:
• αa,q indicates if attribute a itself is accessed by query q
• βa,q indicates if attribute a is part of a table that q accesses
• γq,t indicates if query q is used in transaction t
• δq indicates if query q is a write query
• φa,t indicates if any query in transaction t reads attribute a
single-sitedness should be maintained for reads. that is, if a read query in
transaction t accesses attribute a then a and t must be co-located:
xt,s φa,t = 1 ⇒ ya,s = 1,   ∀t ∈ t, a ∈ a
or equivalently
ya,s − xt,s φa,t ≥ 0,   ∀t ∈ t, a ∈ a.
in order to estimate the cost of reading, writing and transferring data, in-
troduce the following weights:
• wa denotes the average width of attribute a
• fq denotes the frequency of query q
• na,q denotes for query q the average number of rows retrieved from or
written to the table holding attribute a
then the cost of reading or writing a in query q is estimated to wa,q = wa*fq*na,q
and the cost of transferring attribute a over the network is estimated to pwa,q.
notice, that wa,q is only an estimate due to fq and na,q.
consider the amount of local data access, a, and let a = ar + aw where
ar and aw are the amounts of read and write access, respectively. for a given
site r and query q, ar is the sum of all attribute weights wa,q for which 1) q
is a read query, 2) attribute a is stored on r, 3) the transaction that executes
query q is executed on r and 4) q accesses any attribute in the table fraction
that holds a. as we maintain single-sitedness for reads, βa,q can be used to
handle 4), resulting in
ar = Σ_{a,t,s,q} wa,q βa,q γq,t (1 − δq) xt,s ya,s.
accounting for local access of write queries, aw, is less trivial. consider the
following three approaches:
access relevant attributes an attribute a at site s should be accounted for
if and only if there exists an attribute a′ on s that q updates so that a
and a′ are attributes of the same table. while this accounting is the most
accurate of the three it is also the most expensive as it implies an element
of the form ya,sya′,s in the objective function which adds an undesirable
amount of |a|²|s| variables and 3|a|²|s| constraints to the problem when
linearized (see section 2.3).
access all attributes we can get around the increased complexity by assum-
ing that write queries q always write to all sites containing table fractions
of tables accessed by q, regardless of whether q actually accesses any of
the attributes of the fractions. while this is correct for insert statements
(assuming that inserts always write complete rows) it is likely an overes-
timation for updates: imagine a lot of single-attribute updates on a wide
table where the above method would have split the attribute in question
to a separate partition. this overestimation will imply that the model will
partition tables that are updated often or replicate attributes less often
than the accounting model described above.
access no attributes another approach to simplify the cost function is to
completely avoid accounting for local access for writes and solely let the
network transfer define the write costs. with this underestimation of write
costs, attributes will then tend to be replicated more often than in the first
accounting model.
in this paper we choose the second approach, which gives a conservative overes-
timate of the write costs as we then obtain more accurate costs for inserts and
avoid extending the model with undesirably many variables and constraints.
intuitively speaking, this choice implies that read queries will tend to partition
the tables for best possible read-performance, and the write queries will tend to
minimize the amount of attribute replication. we now have
aw = Σ_{q,a,s} wa,q βa,q δq ya,s
and thus
a = Σ_{a,t,s,q} wa,q βa,q γq,t (1 − δq) xt,s ya,s + Σ_{q,a,s} wa,q βa,q δq ya,s.   (3)
b accounts for the amount of network transfer and since we enforce single-
sitedness for all reads b is solely the sum of transfer costs for write queries. we
assume that write queries only transfer the attributes they update and do not
transfer to the site that holds their own transaction:
b = Σ_{a,t,s,q} wa,q αa,q γq,t δq (1 − xt,s) ya,s.
by noticing that Σ_{a,t,s,q} αa,q γq,t ya,s = Σ_{a,s,q} αa,q ya,s we can construct the
minimization problem as
min    Σ_{t,a,s} c1(a, t) xt,s ya,s + Σ_{a,s} c2(a) ya,s
s.t.   Σ_s xt,s = 1                  ∀t
       Σ_s ya,s ≥ 1                  ∀a
       ya,s − xt,s φa,t ≥ 0          ∀a, t
       xt,s, ya,s ∈ {0, 1}           ∀t, a, s        (4)
where
c1(a, t) = Σ_q wa,q γq,t (βa,q (1 − δq) − p αa,q δq)
and
c2(a) = Σ_q wa,q δq (βa,q + p αa,q).
both c1 and c2 are completely induced by the schema, query workload and
statistics and can therefore be considered static when the partitioning process
starts.
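as an illustration, c1 and c2 can be precomputed in a few lines once the workload statistics are available; the sketch below uses ad-hoc data structures and example values that are not taken from the paper.

```python
from collections import defaultdict

# hypothetical workload description: every name and number below is an assumption
queries = {
    "q1": {"transaction": "t1", "is_write": False, "freq": 100.0,
           "attrs": {"cust.name", "cust.city"},   # alpha_{a,q}: attributes referenced by q
           "tables": {"cust"}},                   # tables touched by q (induces beta_{a,q})
    "q2": {"transaction": "t2", "is_write": True, "freq": 20.0,
           "attrs": {"cust.city"}, "tables": {"cust"}},
}
width = {"cust.name": 20, "cust.city": 12, "cust.zip": 4}   # attribute -> average width w_a
rows = defaultdict(lambda: 1.0)        # (query, table) -> average row count n_{a,q}
penalty = 8.0                          # network penalty factor p

def table_of(attr):
    return attr.split(".")[0]

def w(a, q):
    return width[a] * queries[q]["freq"] * rows[(q, table_of(a))]

def c1(a, t):
    total = 0.0
    for q, info in queries.items():
        if info["transaction"] != t:          # gamma_{q,t} = 0
            continue
        beta = table_of(a) in info["tables"]
        alpha = a in info["attrs"]
        delta = info["is_write"]
        total += w(a, q) * (beta * (1 - delta) - penalty * alpha * delta)
    return total

def c2(a):
    total = 0.0
    for q, info in queries.items():
        if info["is_write"]:                  # delta_q = 1
            beta = table_of(a) in info["tables"]
            alpha = a in info["attrs"]
            total += w(a, q) * (beta + penalty * alpha)
    return total

print(c1("cust.name", "t1"), c2("cust.city"))
```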
2.2
adding load balancing
we are interested in extending the model in (4) to also handle load balancing
of the sites instead of just minimizing the sum of all data access/transfer. from
equation (3) define the work of a single site s ∈s as
Σ_{a,t} c3(a, t) xt,s ya,s + Σ_a c4(a) ya,s   (5)
where c3(a, t) = Σ_q wa,q γq,t βa,q (1 − δq) and c4(a) = Σ_q wa,q βa,q δq. introduce
the variable m and for each site s let the value of (5) be a lower bound for m.
adding m to the objective function is then equivalent to also minimizing the
work of the maximally loaded site.
in order to decide how to prioritize cost minimization versus load balancing
in the model, introduce a scalar 0 ≤λ ≤1 and weight the original cost from (4)
and m by λ and (1 −λ), respectively. the new objective is then
λ Σ_{a,t,s} c1(a, t) xt,s ya,s + λ Σ_{a,s} c2(a) ya,s + (1 − λ) m   (6)
where m is constrained as follows:
Σ_{a,t} c3(a, t) xt,s ya,s + Σ_{a,q} c4(a) ya,s ≤ m,   ∀s ∈ s.
notice that while we are now minimizing (6), the objective of (4) should still
be considered as the actual cost of a solution.
2.3
linearization
we use the technique discussed in [? , chapter iv, theorem 4] to linearize
the model. this is done by replacing the quadratic terms in the model with a
variable ut,a,s and adding the following new constraints:
ut,a,s ≤ xt,s                      ∀t, a, s
ut,a,s ≤ ya,s                      ∀t, a, s
ut,a,s ≥ xt,s + ya,s − 1           ∀t, a, s
for ut,a,s ≥0, notice that ut,a,s = 1 if and only if xt,s = ya,s = 1 and that ut,a,s
is guaranteed to be binary if both xt,s and ya,s are binary (thus, there is no
need for requiring it explicitly in the model).
now, the model in (4) extended with load balancing looks as follows when
linearized:
min    λ Σ_{t,a,s} c1(a, t) ut,a,s + λ Σ_{a,s} c2(a) ya,s + (1 − λ) m
s.t.   Σ_s xt,s = 1                                        ∀t
       Σ_s ya,s ≥ 1                                        ∀a
       ya,s − xt,s φa,t ≥ 0                                ∀a, t
       Σ_{a,t} c3(a, t) ua,t,s + Σ_{a,q} c4(a) ya,s ≤ m    ∀s
       ut,a,s − xt,s ≤ 0                                   ∀t, a, s
       ut,a,s − ya,s ≤ 0                                   ∀t, a, s
       ut,a,s − xt,s − ya,s + 1 ≥ 0                        ∀t, a, s
       xt,s, ya,s ∈ {0, 1}                                 ∀t, a, s
       ut,a,s ≥ 0                                          ∀t, a, s        (7)
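the three linearization constraints can be written down directly in any mip modelling layer; the sketch below uses pulp (chosen here only as an example library — the paper's experiments used glpk, which pulp can also drive) to show that u is forced to equal the product x·y.

```python
import pulp

prob = pulp.LpProblem("linearized_product", pulp.LpMinimize)

x = pulp.LpVariable("x_t_s", cat="Binary")   # transaction t placed on site s
y = pulp.LpVariable("y_a_s", cat="Binary")   # attribute a placed on site s
u = pulp.LpVariable("u_t_a_s", lowBound=0)   # replaces the quadratic term x*y

prob += -5 * u                               # toy objective rewarding u = 1 (e.g. c1 < 0)

# the three linearization constraints from section 2.3
prob += u - x <= 0
prob += u - y <= 0
prob += u - x - y + 1 >= 0

prob += x == 1                               # fix an example assignment
prob += y == 1
prob.solve()
print(pulp.value(u))                         # 1.0, exactly the value of x*y
```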
2.4
complexity
the objective function in quadratic programs can be written on the form
(1/2) zᵀqz + cz + d
where in our case z = (x1,1, . . . , x|t|,|s|, y1,1, . . . , y|a|,|s|) is a vector containing
the decision variables, q is a cost matrix, c is a cost vector and d a constant.
q can be easily defined from (6) by dividing q into four quadrants, letting the
sub-matrices in the upper-left and lower-right quadrant equal zero and letting
the upper-right and lower-left submatrices be defined by c1(a, t). q is indefinite
and the cost function (6) therefore not convex. as shown by ? ] finding optimum
when q is indefinite is np-hard.
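to make the quadrant layout of q explicit, the following sketch assembles it from a placeholder matrix of c1(a, t) values; the variable ordering and sizes are arbitrary assumptions for the example.

```python
import numpy as np

n_t, n_a, n_s = 3, 4, 2                      # assumed numbers of transactions, attributes, sites
c1 = np.arange(n_a * n_t, dtype=float).reshape(n_a, n_t)   # placeholder c1(a, t) values

n_x = n_t * n_s                              # number of x_{t,s} variables
n_y = n_a * n_s                              # number of y_{a,s} variables
q = np.zeros((n_x + n_y, n_x + n_y))

# only products x_{t,s} * y_{a,s} on the same site s carry a cost; splitting c1 evenly over
# the two symmetric entries reproduces c1 * x * y in the form (1/2) z^T q z
for s in range(n_s):
    for t in range(n_t):
        for a in range(n_a):
            i = t * n_s + s                  # position of x_{t,s} in z
            j = n_x + a * n_s + s            # position of y_{a,s} in z
            q[i, j] = q[j, i] = c1[a, t]

# the (x,x) and (y,y) quadrants stay zero, as described in the text
print(np.allclose(q[:n_x, :n_x], 0.0), np.allclose(q[n_x:, n_x:], 0.0))
```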
3
the sa solver – a heuristic approach
we develop a heuristic based on simulated annealing (see [? ]) and will refer
to it as the sa-solver from now on. the base idea is to alternately fix x and
y and only optimize the not-fixed vector, thereby simplifying the problem. in
each iteration we search in the neighborhood of the found solution and accept
a worse solution as base for a further search with decreasing probability.
let xt,s hold an assignment of transactions to sites and define the neighbor-
hood x′ of x as a change of location for a subset of the transactions so that for
each t ∈ t we still have Σ_s x′_{t,s} = 1. similarly, let ya,s hold an assignment of
attributes to sites but define the neighborhood y′ of y as an extended replication
of a subset of the attributes. that is, for each a ∈ a in that subset we have
ya,s = 1 ⇒ y′_{a,s} = 1 and Σ_s y′_{a,s} > Σ_s ya,s. we found that altering the location
for a constant number of 10% of both transactions/attributes yielded the best
results. the heuristic now looks as pictured in algorithm 1.
algorithm 1 the heuristic based on simulated annealing (sa). it iteratively
fixes x and y and accepts a worse solution from the neighborhood with decreasing
probability.
1: initialize temperature τ > 0 and reduction factor ρ ∈]0; 1[
2: set the number l of inner loops
3: initialize x randomly so that (2) is satisfied
4: fix ←"x"
5: s ←findsolution(fix)
6: while not frozen do
7:   for i ∈ {1, . . ., l} do
8:     x ← neighborhood of x
9:     y ← neighborhood of y
10:    s′ ← findsolution(fix)
11:    ∆ ← cost(s′) − cost(s)
12:    p ← a randomly chosen number in [0; 1]
13:    if ∆ ≤ 0 or p < e^{−∆/τ} then
14:      s ← s′
15:    end if
16:    fix ← the element in {"x","y"} \ {fix}
17:  end for
18:  τ ← ρ · τ
19: end while
notice that the linearization constraints are not needed since either x or y will be
constant in each iteration. this reduces the size of the problem considerably.
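a compact python skeleton of algorithm 1 might look as follows; the neighbourhood moves, the sub-solver and the cost function are left as caller-supplied stubs, and the stopping rule ("frozen") and the parameter values are assumptions rather than prescriptions from the paper.

```python
import math
import random

def simulated_annealing(find_solution, cost, neighbour_x, neighbour_y,
                        x0, y0, rho=0.95, inner_loops=50, freeze_after=20):
    """alternately fix x (transaction placement) and y (attribute placement) and
    optimize the free vector, accepting worse solutions with decreasing probability."""
    x, y = x0, y0
    fix = "x"
    s = best = find_solution(fix, x, y)
    # initial temperature so that a 5% worse solution is accepted with probability 0.5 (sec. 5.1)
    tau = -0.05 * cost(s) / math.log(0.5)
    stale = 0
    while stale < freeze_after:                # "frozen" = no improvement for several rounds
        improved = False
        for _ in range(inner_loops):
            x = neighbour_x(x)                 # relocate ~10% of the transactions
            y = neighbour_y(y)                 # extend replication of ~10% of the attributes
            s_new = find_solution(fix, x, y)
            delta = cost(s_new) - cost(s)
            if delta <= 0 or random.random() < math.exp(-delta / tau):
                s = s_new
            if cost(s) < cost(best):
                best, improved = s, True
            fix = "y" if fix == "x" else "x"   # alternate which vector is held fixed
        tau *= rho
        stale = 0 if improved else stale + 1
    return best
```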
4
further improvements
consider a table with n attributes together with two queries: one accessing at-
tribute 1 through k and one accessing attribute k through n. then it is sufficient
to find an optimal distribution for the three attribute groupings {1, . . ., k −1},
{k} and {k + 1, . . . , n}, considering each group as an atomic unit and thereby
reducing the problem size. in general, it is only necessary to distribute groups
of attributes induced by query access overlaps. ?
] refer to these attribute
overlaps as reasonable cuts. even though this will not improve the worst-case
complexity, this reduction may still have a large performance impact on some
instances.
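one way to obtain these groups is to bucket the attributes of a table by the exact set of queries that access them — attributes with identical access signatures can be treated as one atomic unit; a small sketch with a made-up access map is shown below.

```python
from collections import defaultdict

# assumed example: query -> set of accessed attributes of one table
access = {
    "q1": {"a1", "a2", "a3", "a4"},       # attributes 1..k
    "q2": {"a4", "a5", "a6"},             # attributes k..n
}

signature = defaultdict(set)              # attribute -> set of queries touching it
for q, attrs in access.items():
    for a in attrs:
        signature[a].add(q)

groups = defaultdict(set)                 # identical signature -> atomic attribute group
for a, sig in signature.items():
    groups[frozenset(sig)].add(a)

print(sorted(map(sorted, groups.values())))
# [['a1', 'a2', 'a3'], ['a4'], ['a5', 'a6']]  -> the three groups from the text
```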
also, assuming that transactions follow the 20/80 rule (20% of the transac-
tions generate 80% of the load), the problem can be solved iteratively over t
starting with a small set of the most heavy transactions.
5
computational results
we assume that the context is a database with a very high transaction count
like the memory-only database h-store [? ] (now voltdb1) and thus need to
compare ram access versus network transfer time when deciding an appro-
priate network penalty factor p. a pci express 2.0 bus transfers between 32
gbit/s and 128 gbit/s while the bandwidth of pc3 ddr3-sdram is at least
136 gbit/s so the bus is the bottleneck in ram accesses.
we assume that
the network is well configured and latency is minimal. therefore the network
penalty factor could be estimated to p ∈[3; 128] if either a gigabit or 10-gigabit
network is used to connect the physical sites. we assume the use of a 10-gigabit
network and therefore set p = 8 in our tests unless otherwise stated.
we furthermore mainly focus on minimizing the total costs of execution and
therefore set λ low. if λ is kept positive the model will, however, choose the
more load balanced layout if there is a cost draw between multiple layouts. we
set λ = 0.1 in our tests unless otherwise stated.
all tests were run on a macbook pro with a 2.4 ghz intel core 2 duo
and 4gb 1067 mhz ddr3 ram, running mac os x 10.5. the gnu linear
programming kit2 (glpk) 4.39 was used as mip solver, using only a single
thread.
the test implementation is available upon request.
5.1
initial temperature
the temperature τ used in the heuristic described in section 3 determines how
willing the algorithm is to accept a worse solution than the currently best found.
let c∗and c denote the objective for the best solution so far and the currently
generated solution, respectively. in the computational results provided here we
accept a worse solution with 50% probability in the first set of iterations if
(c − c∗)/c < 5%. referring to the notation used in algorithm 1, we have 50% =
e^{−0.05c∗/τ} and thus an initial temperature of τ = −0.05c∗/ln 0.5.
1http://voltdb.com
2http://gnu.org/software/glpk
5.2
the tpc-c v5 instance
we perform tests on the tpc-c version 5.10.1 benchmark3. the tpc-c specifi-
cation describes transactions, queries and database schema but does not provide
the statistics needed to create a problem instance. we therefore made some sim-
plified assumptions: all queries are assumed to run with equal frequency and
all queries (not transactions) are assumed to access a single row except in the
obvious cases where aggregates are used or the result is being iterated over. in
these cases we assume that the query accesses 10 rows. thereby, the new-order
transaction, for example, is assumed to access 11 rows on average.
we model update queries as two sub-queries: a read-query accessing all
the attributes used in the original query and a write-query only accessing the
attributes actually being written (and thus whose update needs to be distributed
to all replicas).
5.3
random instances
to the best of our knowledge there is no standard library of typical oltp
instances with schemas, workloads and statistics so in order to explore the char-
acteristics of the algorithms we perform some experiments on a set of randomly
generated instances instead, as it turned out to be a considerable administrative
and bureaucratic challenge (if possible at all) to collect appropriate instances
from "real life" databases. the randomly generated instances vary in several
parameters in order to clarify which characteristics that influence the potential
cost reduction by applying our vertical partitioning algorithms. the parame-
ters include: number of transactions in workload, number of tables in schema,
maximum number of attributes per table, maximum number of queries per
transaction, percentage of queries being updates, maximum number of different
tables being referred to from a single query, maximum number of individual at-
tributes being referred to by a single query, the set of allowed attribute widths.
we define classes of problem instances by upper bounds on all parameters. in-
dividual instances are then generated by choosing the value of each parameter
evenly distributed between 1 and its upper bound. that is, if e.g. the maximum
allowed number of attributes in tables is k, the number of table attributes for
each table in the generated instance will be evenly distributed between 1 and k
with a mean of k/2.
5.4
results
in the following we perform a series of tests and display the results in tables
where each entry holds the found objective of (4) for the given instance.
table 1 explores the influence of a set of parameters in the randomly gener-
ated instances by varying one parameter at a time while fixing the rest. we test
two classes of instances using the sa solver: a smaller with #tables = |t | =
20 and a larger with #tables = |t | = 100. the results suggest that the largest
workload reduction is obtained for instances having relatively few queries per
transaction, few updates, many attributes per table and/or a moderate number
of attribute references per query. the number of table references per query and
3http://www.tpc.org/tpcc
the allowed attribute widths, however, only seem to have moderate influence on
the result.
#tables = |t | = 20
#tables = |t | = 100
|s| = 1
|s| = 2
|s| = 3
|s| = 1
|s| = 2
|s| = 3
a
max queries
per transaction
1
0.585
0.309
0.278
3.194
1.784
1.471
3
1.567
1.478
1.386
5.743
4.550
4.189
5
1.305
1.054
0.972
8.840
7.569
6.983
b
percent
updates queries
0
1.747
1.369
1.110
5.959
4.235
3.510
10
1.567
1.478
1.386
5.743
4.550
4.189
30
1.349
1.244
1.263*
5.106
4.555
4.462
c
max attributes
per table
5
0.520
0.520*
0.520*
2.583
2.772*
2.712*
15
1.567
1.478
1.386
5.743
4.550
4.189
35
1.643
0.968
0.850
14.970
7.341
5.355
d
max table
references per
query
2
0.602
0.430
0.356
3.447
3.022
2.865
5
1.567
1.478
1.386
5.743
4.550
4.189
10
2.246
1.607
1.516
8.147
6.063
5.623
e
max attribute
references per
query
5
0.678
0.288
0.199
5.176
2.526
1.969
15
1.567
1.478
1.386
5.743
4.550
4.189
25
1.115
0.988
1.008*
5.641
5.909*
5.684*
f
allowed
attribute widths
{2, 4, 8}
1.194
1.080
1.030
4.456
3.488
3.500*
{4, 8}
1.567
1.478
1.386
5.743
4.550
4.189
{4, 8, 16}
2.387
2.160
2.060
8.912
6.977
7.000
table 1: comparing the effect of parameter changes. results were found
using the sa solver.
we test three possible values for each parameter,
varying one parameter at the time and fixing all other parameters at their
default value (marked with bold). the costs are shown in units of 106. tests
are divided into two classes having both the number of transactions and
schema tables equal to 20 (left) and 100 (right), respectively. the results
suggest that the largest workload reduction, unsurprisingly, is obtained for
instances having relatively few queries per transaction, few updates, many
attributes per table and/or a moderate number of attribute references per
query. the number of table references per query and the allowed attribute
widths, however, only seem to have moderate influence on the result.
table 3 compares the qp and sa solvers on the tpc-c benchmark and a
set of randomly generated larger instances, divided into two classes with either
large or low potential for cost reduction. the random instances are described
in table 2 where the columns here refer to the single-letter labels for the pa-
rameters shown in table 1.
as seen in table 3 the sa solver is generally
faster than the qp solver but the qp solver obtains lower costs when the in-
stances are small.
expectedly, the instances in class "rndb. . . "
with many
attribute references per query but few attributes per table gain little or no cost
reduction by applying the algorithms. tpc-c, on the other hand, gets a cost
reduction of 37% and the random instances in class "rnda. . . ", with many at-
tributes per table and relatively few attribute references per query, get a cost
reduction between 25% and 85%. none of the algorithms found a cost reduction
for the instances rndat4x100 and rndat8x100 because of the "overweight" of
transactions compared to the number of attributes in the schemas.
table 4 depicts an actual partitioning of tpc-c constructed by the qp
solver for three sites.
table 5 illustrates the effect of disjoint versus nondisjoint partitioning, that
is, partitioning without and with attribute replication. as seen, greater cost re-
duction can be obtained when allowing replication, but in exchange for increased
name             a   b    c    d   e    f                |t |   #tables
rndat4x15        3   10   30   3   8    {2, 4, 8, 16}    15     4
rndat8x15        3   10   30   3   8    {2, 4, 8, 16}    15     8
rndat8x15u50     3   50   30   3   8    {2, 4, 8, 16}    15     8
rndat16x15       3   10   30   3   8    {2, 4, 8, 16}    15     16
rndat32x15       3   10   30   3   8    {2, 4, 8, 16}    15     32
rndat4x100       3   10   30   3   8    {2, 4, 8, 16}    100    4
rndat8x100       3   10   30   3   8    {2, 4, 8, 16}    100    8
rndat16x100      3   10   30   3   8    {2, 4, 8, 16}    100    16
rndat32x100      3   10   30   3   8    {2, 4, 8, 16}    100    32
rndbt4x15        3   10   5    6   28   {2, 4, 8, 16}    15     4
rndbt8x15        3   10   5    6   28   {2, 4, 8, 16}    15     8
rndbt16x15       3   10   5    6   28   {2, 4, 8, 16}    15     16
rndbt16x15u50    3   50   5    6   28   {2, 4, 8, 16}    15     16
rndbt32x15       3   10   5    6   28   {2, 4, 8, 16}    15     32
rndbt4x100       3   10   5    6   28   {2, 4, 8, 16}    100    4
rndbt8x100       3   10   5    6   28   {2, 4, 8, 16}    100    8
rndbt16x100      3   10   5    6   28   {2, 4, 8, 16}    100    16
rndbt32x100      3   10   5    6   28   {2, 4, 8, 16}    100    32
table 2: random instances used when comparing the qp and sa solvers
in table 3. the instances in the upper part (rnda. . . ) are expected to
get a large cost reduction while instances in the lower part (rndb. . . ) are
expected to get a small cost reduction. the columns refer to the single-
letter labels for the parameters shown in table 1.
qp
sa
instance
|a|
|t |
|s|
cost
time (s)
cost
time (s)
|s| = 1
tpc-c v5
92
5
2
0.133
1
0.138
5
0.208
tpc-c v5
92
5
3
0.132
6
0.132
5
0.208
tpc-c v5
92
5
4
0.132
33
0.132
5
0.208
rndat4x15
54
15
4
(0.332)
1800
0.396
10
0.933
rndat8x15
105
15
4
(0.324)
1800
0.327
18
0.808
rndat16x15
225
15
4
(0.267)
1800
0.309
41
1.180
rndat32x15
492
15
4
(0.315)
1800
0.217
89
1.491
rndat64x15
1023
15
4
(0.269)
1800
0.268
190
1.452
rndat4x100
54
100
4
(8.001)
1800
8.246
79
7.946
rndat8x100
105
100
4
(7.681)
1800
8.018
150
7.454
rndat16x100
225
100
4
-
t/o
6.525
321
8.741
rndat32x100
492
100
4
-
t/o
4.501
728
8.916
rndat64x100
1023
100
4
-
t/o
4.119
1531
9.591
rndbt4x15
12
15
4
0.303
65
0.303
3
0.303
rndbt8x15
27
15
4
(0.448)
1800
0.424
6
0.440
rndbt16x15
49
15
4
(0.333)
1800
0.334
9
0.385
rndbt32x15
98
15
4
(0.319)
1800
0.319
16
0.361
rndbt64x15
210
15
4
(0.221)
1800
0.221
31
0.229
rndbt4x100
54
100
4
(4.484)
1800
2.251
18
2.251
rndbt8x100
105
100
4
(4.323)
1800
2.419
37
2.419
rndbt16x100
225
100
4
(2.001)
1800
1.774
62
1.774
rndbt32x100
492
100
4
(2.419)
1800
1.999
124
1.999
rndbt64x100
1023
100
4
-
1800
2.473
270
2.473
table 3: comparing the qp algorithm with the simulated annealing based
heuristic (sa), allowing attribute replication and with remote partition
placement. costs are shown in units of 106. the sa algorithm had a 30
second time limit for each iteration and if the limit was reached it pro-
ceeded with another neighborhood. the qp algorithm had a time bound
of 30 minutes and an mip tolerance gap of 0.1%. where the time limit
was reached, the best found cost (if any) is written in parentheses. "t/o"
indicates that no integer solution was found within the time limit.
site 1
transaction payment
customer.c balance
customer.c city
customer.c credit
customer.c credit lim
customer.c data
customer.c discount
customer.c d id
customer.c first
customer.c id
customer.c last
customer.c middle
customer.c phone
customer.c since
customer.c state
customer.c street 1
customer.c street 2
customer.c w id
customer.c zip
district.d city
district.d id
district.d name
district.d state
district.d street 1
district.d street 2
district.d w id
district.d ytd
district.d zip
history.h amount
history.h c d id
history.h c id
history.h c w id
history.h data
history.h date
history.h d id
history.h w id
orderline.ol dist info
orderline.ol number
stock.s order cnt
stock.s remote cnt
stock.s ytd
warehouse.w city
warehouse.w id
warehouse.w name
warehouse.w street 1
warehouse.w street 2
warehouse.w ytd
warehouse.w zip
site 2
transaction stocklevel
customer.c city
customer.c delivery cnt
customer.c payment cnt
customer.c since
customer.c ytd payment
district.d id
district.d next o id
district.d w id
item.i im id
orderline.ol d id
orderline.ol i id
orderline.ol o id
orderline.ol w id
stock.s i id
stock.s quantity
stock.s w id
site 3
transaction delivery
transaction neworder
transaction orderstatus
customer.c balance
customer.c credit
customer.c discount
customer.c d id
customer.c first
customer.c id
customer.c last
customer.c middle
customer.c w id
district.d id
district.d next o id
district.d tax
district.d w id
item.i data
item.i id
item.i name
item.i price
neworder.no d id
neworder.no o id
neworder.no w id
order.o all local
order.o carrier id
order.o c id
order.o d id
order.o entry d
order.o id
order.o ol cnt
order.o w id
orderline.ol amount
orderline.ol delivery d
orderline.ol d id
orderline.ol i id
orderline.ol o id
orderline.ol quantity
orderline.ol supply w id
orderline.ol w id
stock.s data
stock.s dist 01
stock.s dist 02
stock.s dist 03
stock.s dist 04
stock.s dist 05
stock.s dist 06
stock.s dist 07
stock.s dist 08
stock.s dist 09
stock.s dist 10
stock.s i id
stock.s quantity
stock.s w id
warehouse.w id
warehouse.w tax
table 4: the result of a vertical partitioning of the tpc-c benchmark
using the qp solver for three sites. each column represents the contents
of a site and is divided into three sub-sections: a header, a section holding
the transaction names and a longer section holding the attributes assigned
to the respective site.
computation time.
instance       |a|   |t |   |s|   w. repl. cost   time (s)   w/o repl. cost   time (s)   ratio
tpc-c v5       92    5      1     0.208           0          0.208            0          -
tpc-c v5       92    5      2     0.133           1          0.207            1          64%
tpc-c v5       92    5      3     0.132           6          0.207            2          64%
tpc-c v5       92    5      4     0.132           33         0.207            3          64%
rndat4x15      54    15     2     4.855           28         6.799            1          71%
rndat8x15      105   15     2     4.710           517        5.809            6          81%
rndat8x15      27    15     2     4.244           4          4.402            0          96%
rndat16x15     49    15     2     3.410           34         3.852            0          89%
table 5: computational results from solving the tpc-c benchmark and a
few random instances with the qp solver. costs are shown in units of 105.
the table shows that costs can be reduced by allowing attribute replication
and that tpc-c does not benefit noticeably from being partitioned and
distributed to more than two sites. the ratio column displays the ratio
between the replicated and non-replicated cost.
table 6 compares two different kinds of partition placements: 1) all partitions
being located at one single site (thereby avoiding inter-site transfers) and 2)
partitions being located at remote sites. these two situations can be simulated
by setting p = 0 and p > 0, respectively. the benefits of local placements are
given by the amount of updates in the workload as only updates cause inter-
site transfers. more updates implies larger costs for remote placements. for
a somewhat extreme case, instance "rndat8x15u50", with 50% of the queries
being updates, the costs are about 33% lower when placing the partitions locally.
instance         |a|   |t |   |s|   local cost (qp)   local cost (sa)   remote cost (qp)   remote cost (sa)
tpc-c v5         92    5      1     1.916             1.916             1.916              1.916
tpc-c v5         92    5      2     1.210             1.208             1.221              1.273
tpc-c v5         92    5      3     1.208             1.208             1.220              1.220
rndat4x15        54    15     2     4.709             4.742             4.855              4.888
rndat8x15        105   15     2     4.424             4.808             4.710              5.187
rndat8x15u50     105   20     2     3.189             3.313             4.778              4.873
rndbt8x15        27    15     2     4.365             4.332             4.244              4.730
rndbt16x15       49    15     2     3.335             3.387             3.410              3.404
rndbt16x15u50    49    20     2     5.066             5.220             5.438              5.438
table 6: comparing the costs of local (p = 0) versus remote (p > 0)
location of partitions and with attribute replication allowed. costs are in
units of 10⁵. write-rarely instances or instances in class "rndb. . . " do not
benefit noticeably by placing all partitions locally, even the instances with
50% update queries; however, instances in class "rnda. . . " with a large
update ratio do. the reason is that only updates cause inter-site transfer.
that the cost of the local placement for rndbt8x15 is larger than when
placed remotely is because λ > 0.
6
conclusion
we have constructed a cost model for vertical partitioning of relational oltp
databases together with a quadratic integer program that distributes both at-
tributes and transactions to a set of sites while allowing attribute replication,
preserving single-sitedness for read queries and in which load balancing vs. total
cost minimization can be prioritized arbitrarily.
we also presented a more scalable heuristic which seems to deliver good
results. for both algorithms we obtained a cost reduction of 37% in our model
of tpc-c and promising results for the random instances. even though the
latter theoretically can be constructed with arbitrary high/low benefits from
vertical partitioning, the test runs on our selected subset of random instances
seem to indicate that 1) our heuristic scales far better than the qp-solver, and
2) it can obtain valuable cost reductions on many real-world oltp databases,
as we tried to select the parameters realistically.
one thing we miss, however, is an official oltp testbed – a library con-
taining realistic oltp workloads, schemas and statistics. such a collection of
realistic instances could serve as a basis for several interesting and important
studies for understanding the nature and characteristics of oltp databases.
acknowledgements
the author would like to acknowledge daniel abadi for competent and valuable
discussions and feedback. also, rasmus pagh, philippe bonnet and laurent
flindt muller have been very helpful with insightful comments on preliminary
versions of the paper.
a
latency
this section describes how to extend the algorithms to also estimate costs of
network latency for queries accessing attributes on remote sites. we assume
that all remote access (if any) for queries is done in parallel and with a constant
number of requests per query per remote site. let pl denote a latency penalty
factor and introduce a new binary variable ψq for each query q indicating with
ψq = 1 if q accesses any remotely placed attributes.
letting n denote the
number of remotely accessed attributes by q we have n > 0 ⇒ψq = 1 and
n = 0 ⇒ψq = 0, or equivalently (ψq −1)n = 0 and ψq −n ≤0. this results in
the following two classes of new constraints:
(ψq − 1) Σ_{a,s} δq αa,q γq,t (1 − xt,s) ya,s = 0,   ∀q, t
and
ψq − Σ_{a,s} δq αa,q γq,t (1 − xt,s) ya,s ≤ 0,   ∀q, t
the total latency in a given partitioning can now be estimated by the sum
p_l Σ_q fq ψq, which can be added to the cost objective function (4).
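for a fixed assignment the indicator ψq and the latency term are easy to evaluate directly, as the following sketch with purely hypothetical inputs shows; the constraints above are only needed when ψq must be decided by the solver itself.

```python
# assumed inputs, all hypothetical:
#   home_site[q]      - site of the transaction holding write query q
#   written_attrs[q]  - attributes updated by q (alpha with delta_q = 1)
#   placement[a]      - set of sites holding a replica of attribute a
#   freq[q]           - query frequency f_q
home_site = {"q2": 0}
written_attrs = {"q2": {"cust.city"}}
placement = {"cust.city": {0, 1}, "cust.name": {0}}
freq = {"q2": 20.0}
p_l = 2.0                                   # latency penalty factor (assumed value)

def psi(q):
    """1 if write query q touches any replica outside its home site, else 0."""
    remote = sum(len(placement[a] - {home_site[q]}) for a in written_attrs[q])
    return 1 if remote > 0 else 0

latency_cost = p_l * sum(freq[q] * psi(q) for q in written_attrs)
print(latency_cost)        # 40.0 here, since q2 must push cust.city to site 1
```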
|
0911.1692 | energy spectrum of cosmic ray muons in ~ 100 tev energy region
reconstructed from the bust data | differential and integral energy spectra of cosmic ray muons in the energy
range from several tev to ~ 1 pev obtained by means of the analysis of multiple
interactions of muons (pair meter technique) in the baksan underground
scintillation telescope (bust) are presented. the results are compared with
preceding bust data on muon energy spectrum based on electromagnetic cascade
shower measurements and depth-intensity curve analysis, with calculations for
different muon spectrum models, and also with data of other experiments.
| introduction
the energy spectrum of muons plays an important role in the physics of high energy cosmic rays. its characteristics depend on the primary cosmic ray spectrum and composition, and also on the processes of primary particle interactions with nuclei of air atoms. therefore, information on the muon energy spectrum may be used, on the one hand, for extraction of independent estimates of the primary spectrum and composition, assuming that the interaction model is known, and, on the other hand, under certain assumptions about the primary cosmic ray spectrum and composition, for the search for possible changes in the characteristics of hadron interactions above the energy limit reached in accelerator experiments.
the region of muon energies above 100 tev is of special interest. in this region, under reasonable assumptions about the production cross sections, a contribution of "prompt" muons from decays of charmed and other short-lived particles can appear, which should give some excess of such muons. on the other hand, if the knee in the primary energy spectrum really has an astrophysical origin, its influence on the muon spectrum shape is expected; this effect leads to a decrease of the muon flux at such energies. in the alternative case, if the spectrum of primary particles does not change its slope and the appearance of the knee is connected with changes of the interaction model, the inclusion of new physical processes or states of matter is required. these processes can change the whole picture of eas development, and standard estimations of eas energies can turn out to be wrong. as shown in [1], in this case the excess of muons will increase with energy rather sharply.
usually, the possible contribution of any fast (in comparison with pion and kaon decays) processes to muon generation is taken into account by introducing the parameter r into the formula describing the inclusive spectrum of high-energy muons in the atmosphere [2]:
dnμ/deμ = 0.14 eμ^−γμ × [ 1/(1 + 1.1 eμ cos θ / 115) + 0.054/(1 + 1.1 eμ cos θ / 850) + r ]  [cm^−2 s^−1 sr^−1 gev^−1].   (1)
here r is the ratio of the number of these prompt muons to the number of charged pions with the same energy at
production; muon energy is measured in gev; γμ = 2.7. the slope of the energy spectrum of prompt muons is about
a unit less than that from decays of pions and kaons. in contrast to high energy muons from π-, k-decays, the flux
of prompt muons does not exhibit the sec θ enhancement with increasing zenith angle. unfortunately, cross sections of charmed particle production in the necessary range of kinematic variables are poorly known, and existing theoretical estimates of the r value have a large spread. nevertheless, the expected range of energies where the fluxes of prompt muons and of muons from π-, k-decays become comparable is near 100 tev [3]; the differential spectra of prompt muons and "usual" muons (from pion and kaon decays) are equal to each other at this energy for r ≈ 10^−3.
if the observed knee in the extensive air shower (eas) energy spectrum at pev energies is related to the inclusion of new physical processes (or the formation of a new state of matter) with production of very high energy (vhe) muons in the final state [4], then their contribution to the muon energy spectrum may be estimated by the formula:
divheμ/deμ = a γ1 (e0/10^6 gev)^−γ1 / e0 × (nμ^2 / fμ) [1 − (γ1/γ2) (ek/e0)^((γ2−γ1)/γ2)],   (2)

where eμ and primary particle energy e0 are related as

eμ nμ/fμ = ∆(e0) = e0 − ek (e0/ek)^(γ1/γ2),   e0 > ek.   (3)
here γ1 and γ2 are the integral eas energy spectrum slopes below and above the knee energy ek; nμ is a typical multiplicity of produced vhe muons; fμ is the fraction of the difference between the primary particle energy and the measured eas energy which is carried away by vhe muons; a = 1.5 × 10^−10 cm^−2 s^−1 sr^−1 is the integral intensity of primary particles with energy above 1 pev.
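as an aside (our sketch, not the paper's code), equations (2) and (3) can be evaluated numerically by inverting (3) for the primary energy e0; the parameter values below are the "model 4" values quoted later in the text:

    from scipy.optimize import brentq

    A, N_MU, F_MU = 1.5e-10, 1.0, 0.025          # integral intensity, multiplicity, fraction
    E_K, G1, G2 = 5e6, 1.7, 2.0                  # knee energy in gev and spectral slopes

    def delta(e0):
        # eq. (3): muon energy sum as a function of the primary energy e0 (gev)
        return e0 - E_K * (e0 / E_K) ** (G1 / G2)

    def vhe_spectrum(e_mu):
        # eq. (2): differential vhe-muon flux at muon energy e_mu (gev)
        e0 = brentq(lambda e: delta(e) - e_mu * N_MU / F_MU, E_K * (1 + 1e-9), 1e15)
        return (A * G1 * (e0 / 1e6) ** (-G1) / e0 * N_MU ** 2 / F_MU
                * (1 - (G1 / G2) * (E_K / e0) ** ((G2 - G1) / G2)))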
the appearance of vhe muons is also expected at energies of about 100 tev; however, their relative contribution should increase with energy more rapidly compared to muons from charmed particle decays, and this feature is the only one which could allow one to separate the two hypotheses on the possible reasons for changes in the muon energy spectrum behavior. there are few experimental data in the energy region close to 100 tev (including the data obtained with bust), and they have a very wide spread (see, e.g., review [5]). this spread is most probably caused by various uncertainties of the methods used for investigations of the muon energy spectrum in the range ≥10 tev.
unfortunately, the most direct method of muon energy spectrum study – the magnetic spectrometer technique – did not allow reaching energies above 10 tev, for both technical reasons (the necessity to ensure high magnetic field induction simultaneously with a manifold increase of the magnetized volume) and physical ones (the increasing probability of secondary electron contamination in the events as the muon energy grows). therefore, two other methods of muon spectrum investigation – calorimetric measurements of the muon-induced cascade shower spectrum and depth-intensity curve analysis – were mainly used.
the method based on depth-intensity measurements has serious uncertainties in the estimation of the surface muon energy, related to ambiguities in the rock density and composition, their non-uniformity with depth, and, in the case of a mountain overburden, to errors in the slant depth evaluation. besides, this method has a principal upper limitation for accessible muon energies, since at depths of more than about 12 km w.e. (in standard rock) the intensity of atmospheric muons becomes lower than the background flux of muons locally produced by neutrinos in the surrounding material. taking into account energy loss fluctuations, such a depth corresponds to an effective muon threshold energy of about 100 tev.
the method based on calorimetric measurements of the spectrum of electromagnetic cascades induced via muon bremsstrahlung does not have an upper physical limit. however, the possibilities of investigating the muon spectrum at high energies are limited by the low probability of producing bremsstrahlung photons with energies comparable to the muon energy (εγ ∼ eμ), by the rapidly decreasing muon intensity, and consequently by the necessity of a corresponding increase of the detector mass. a special case of this technique is the burst-size technique, when the cascade is detected at one point (in one layer of the detector). such an approach is used in the analysis of horizontal air showers (has), which may be produced deep in the atmosphere only by muons (or neutrinos). however, many questions appear in the interpretation of measurements of this kind: is the shower produced by a single muon or by several particles? how to reject the background contribution from usual (hadron-induced) eas? what is the effective target thickness for such observations? as a rule, there are no simple answers to these questions.
it is important to note that, in contrast to the magnetic spectrometer technique, where the energies (the momenta) of individual particles are measured and the differential energy spectrum may be constructed directly, the two other methods provide essentially integral estimates: the intensity of muons penetrating to the observation depth in depth-intensity measurements, and the number of muons with energies exceeding the energy of the bremsstrahlung photon in measurements of the cascade shower spectrum. moreover, effective muon energies do not strongly exceed the energy threshold (typically, by about a factor of 2).
since the methods discussed above encounter serious difficulties of a principal or technical character, other methods are needed to ensure a breakthrough in the energy region ∼100 tev and higher. from this point of view, the most promising seems to be the pair meter technique [6, 7]. this method of muon energy evaluation is based on measurements of the number and energies of secondary cascades (with ε << eμ) originating from multiple successive interactions of a muon in a thick layer of matter, mainly due to direct electron-positron pair production. at sufficiently high muon energies, in a wide range of relative energy transfers ε/eμ ∼ 10^−1 − 10^−3, pair production becomes the dominating muon interaction process, and its cross section rapidly increases with eμ. a typical ratio of the muon energy and the energy of these secondary cascades is determined by the muon-to-electron mass ratio and is of the order of 100. an important advantage of this technique is the absence of a principal upper limitation on measured muon energies (at least up to 10^16 − 10^17 ev, where the influence of the landau-pomeranchuk-migdal effect on the direct electron pair production cross section may become important). in the case of a sufficient setup thickness (≥500 radiation lengths) and a large number of detecting layers (of the order of a hundred), the pair meter technique allows estimating individual muon energies; the possibilities of the method for relatively thin targets depend on the shape of the investigated muon energy spectrum.
in the present paper, bust data are analyzed on the basis of a modification of the multiple interaction method elaborated for the realization of the pair meter technique in thin setups. the results are compared with earlier bust data on the muon spectrum obtained by means of electromagnetic cascade shower measurements and depth-intensity curve analysis.
1. measurements of depth-intensity curve and spectrum of electromagnetic cascades at bust
bust [8] is located in an excavation under the slope of mt. andyrchy (north caucasus) at an effective rock depth of 850 hg/cm^2, which corresponds to a threshold energy of detected muons of about 220 gev. the telescope (fig. 1) is a four-floor building with a height of 11 m and a base of 17 × 17 m^2. the floors and the walls are entirely covered with scintillation detectors (3152 in total) which form 8 planes (4 vertical and 4 horizontal, two of the latter being internal ones). the upper horizontal plane contains 576 = 24 × 24 scintillation detectors, the other three 400 = 20 × 20 detectors each. the vertical distance between neighboring planes is 3.6 m. the total thickness of one layer (construction materials and scintillator) is about 7.2 radiation lengths.
each detector is an aluminum tank of size 0.7 × 0.7 × 0.3 m^3 filled with a white-spirit-based liquid scintillator and viewed by a 15 cm diameter pmt (feu-49) through a pmma illuminator. the most probable energy deposition in a detector at the passage of a near-vertical muon is 50 mev. the anode output of the pmt serves for measuring the energy deposition in the plane in the range from 12.5 mev to 2.5 gev and for forming master pulses for various physical programs. the pulse channel with an operating threshold of 12.5 mev (since 1991, 10 mev) is connected to the 12th dynode and provides coordinate information ("yes-no" type). the signal from the 5th dynode is used to measure the energy deposition in the detector in the range from 0.5 to 600 gev by means of a logarithmic converter of pulse amplitude to duration.
bust was created as a telescope for investigations of cosmic ray muons and neutrinos, but in principle it can detect both single muons (and muon bundles) and muon-induced cascade showers. therefore, for the analysis of data concerning the muon energy spectrum, three methods can be used: the depth-intensity relation, measurements of the spectrum of electromagnetic cascades, and the pair meter technique.
results of the analysis of the bust data on the depth-intensity dependence are given in [9]. the underground muon intensity was measured in two zenith angle intervals (50◦−70◦ and 70◦−85◦) for slant depths between 1000 and 12000 hg/cm^2. up to 6000 hg/cm^2, in both zenith angle intervals the measured intensities agreed with the expectation for a usual muon spectrum (from pion and kaon decays). however, at greater depths some excess of muons at moderate zenith angles (50◦−70◦) was observed, which was interpreted by the authors as an indication for the appearance of
5
8
7
6
horizontal planes of scintillation detectors
24 24
20 20
20 20
20 20
figure 1: high-energy muon passing through baksan underground scintillation telescope (geant4 simulation). numbers of the planes (5 to 8)
correspond to sequence of their construction.
prompt muons from charmed particle decays. the estimated contribution of prompt muons corresponded to the value
of the parameter r = (1.5 ± 0.5) × 10^−3.
results of the investigation of the muon spectrum by means of measurements of the spectrum of electromagnetic cascades in bust are described in [10]. in this work, the muon energy spectrum ih(> eh) at the depth of the setup location was derived from the spectrum of energy depositions in the telescope, which was used as a 4-layer sampling calorimeter. for re-calculation to the surface muon energy spectrum i0(> eμ), the authors used the solution of the kinetic equation for a muon flux passing through a thick layer of matter. some excess of the number of cascades in the tail of the spectrum found in this experiment could be caused either by methodical or by physical (inclusion of prompt muons) reasons. the authors noted that a similar flattening of the spectrum was also observed in a number of other experiments, but at different energies, which argues in favor of methodical reasons for its appearance.
on the whole, the results of the analysis of the bust data on the depth-intensity curve and on the electromagnetic cascade spectrum agree poorly with each other. below, results of an independent analysis of the available bust data based on the ideas of the pair meter technique are described.
2. application of method of multiple interactions in bust
in order to evaluate individual muon energies (assuming that they have a usual power-type integral spectrum ∼ eμ^−2.7) by means of the pair meter technique with a reasonable accuracy, it is necessary to detect several (≥5) muon interactions in a setup with a total target thickness of several hundred radiation lengths and ∼100 detecting layers. if the number of layers and the setup thickness are low, the pair meter technique turns into the method of "plural" (in a limiting case, twofold) interactions. in this situation, evaluation of the energies of individual muons is practically impossible; however, energy characteristics of the muon flux may be investigated on a statistical basis. the sensitivity of such a method depends on the shape of the muon energy spectrum and, as estimates show, for a flatter spectrum than the usual one, for example nμ ∼ eμ^−1.7 (prompt muons, vhe muons, eas muons in the range eμ << e0), it is sufficient to detect only two interactions even in a setup with a thickness of the order of several tens of radiation lengths.
a significant volume of experimental data accumulated at bust (more than 10 years of observations in combination with the about 200 m^2 sr geometric acceptance of the telescope) allows one to infer conclusions on the behavior of the muon spectrum in the region of very high energies on the basis of the method of multiple interactions, in spite of the small number of layers in the telescope (four) and the low setup thickness (∼30 radiation lengths).
in fact, the structure of bust allows one to distinctly select not more than two successive interactions of a muon in the telescope (fig. 2). in the longitudinal profile of energy depositions (in the horizontal planes) in such events, a minimum ("dip", emin) in one of the inner planes and two maxima ("humps") above and below it must be observed. it is convenient to denote by e1 the energy deposition measured in the higher maximum and by e2 the deposition in the second one; the depth of the dip may then be characterized by the ratio k2 = e2/emin.
figure 2: twofold interactions of muons in bust. left: examples of detected and simulated events; horizontal telescope planes (top plane 5, inner planes 7 and 8, bottom plane 6) are plotted. hit detectors ("yes-no") are shown in grey; colors correspond to different energy depositions. right: longitudinal profile of energy depositions in the telescope and definition of the phenomenological parameters of the event.
simulation of the bust response to the passage of single muons was performed by means of the geant4 toolkit [11, 12]. before production of large-scale simulations, comprehensive tests of the correctness of the implementation of muon electromagnetic interaction processes in geant4 in a wide range of energies and for various materials were done. the number of simulated events for muon energies above 350 gev (at the ground surface) was comparable to the expected number of such muons for the observation period (for the "usual" energy spectrum), and for energies of more than 1 tev, 10 tev, and 100 tev it exceeded the expected muon statistics by about 5, 40, and 500 times, respectively. in every simulated event, information on the energy depositions in the scintillation detectors and on muon interactions with energy transfers of more than 1 gev was recorded.
analysis of the simulation results has shown that, qualitatively, the selection parameters e1, e2, k2 influence the event samples in the following way:
• the shift in e2 is nearly proportional to the shift in muon energy;
• an increase of the minimal value of the relative depth of the dip k2 suppresses the contribution of nuclear showers (from inelastic muon interactions with nuclei) which may imitate multiple interactions;
• an increase of the threshold in e1 decreases the number of muons with moderate energies (∼tev), while most of the high-energy events (hundreds of tev) are retained.
among the possible versions of muon energy estimation in the pair meter technique, a sufficiently effective and convenient one is the use of rank statistics of the energies transferred in muon interactions: the transferred energies ε_j in an individual event are arranged in decreasing order, and the n-th value ε_n is then used to estimate the muon energy [6]. the energy depositions measured in the scintillation planes of the telescope, which determine the longitudinal profile of the event, are not simply related to the transferred energies. this is caused by the random location of the interaction points relative to the detector planes, the superposition of cascades from different interactions, fluctuations of the cascade development, etc. however, analysis of simulated events allows one to conclude that the energy deposition e2 in the second-largest maximum is determined mainly by the second most energetic cascade, related to the production of an e+e− pair by the muon (relative energy transfers ε/eμ ∼ 10^−2 − 10^−3), while the largest cascade (associated with the largest energy deposition e1) is with high probability caused by muon bremsstrahlung or an inelastic muon interaction (with ε/eμ ≥ 0.1). since the spectra of the rank statistics are nearly similar to the spectrum of muons, it is expedient to use for the following analysis the distributions of events in the value of e2, and to vary the other parameters of event selection: e1 (≥5 gev, ≥20 gev, ≥40 gev, etc.) and k2 (≥1, 2, 5, ...).
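as a small illustration (ours, not part of the bust analysis chain), the phenomenological parameters e1, e2 and k2 defined above can be extracted from a four-plane longitudinal profile roughly as follows:

    def event_parameters(profile):
        # profile: energy depositions in the four horizontal planes, top to bottom;
        # returns (e1, e2, k2) as defined in the text, or none if there is no interior dip
        best = None
        for i in (1, 2):                               # the dip must lie in an inner plane
            e_min = profile[i]
            hump_above, hump_below = max(profile[:i]), max(profile[i + 1:])
            if e_min < hump_above and e_min < hump_below:
                e1, e2 = max(hump_above, hump_below), min(hump_above, hump_below)
                k2 = e2 / e_min if e_min > 0 else float("inf")
                if best is None or k2 > best[2]:       # keep the deepest dip (arbitrary choice)
                    best = (e1, e2, k2)
        return best

    print(event_parameters([12.0, 1.2, 46.0, 3.4]))    # illustrative profile in gev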
3. analysis of experimental data on multiple interactions of muons
experimental data accumulated at bust during 12.5 years in 1983-1995 and 2 years (2003-2004) after the restoration of the amplitude measurement system [13] have been analyzed. periods of reliable operation of all systems responsible for energy deposition measurements were selected on the basis of a careful statistical analysis of the data. as a result, the total "live" time of registration amounted to 3.3 × 10^8 s (more than 10 years), and the total number of events after preliminary selection (with total energy deposition ≥10 gev in the horizontal planes of the telescope) was about 10 million. the event selection criteria are described in more detail in [14]. only information from the horizontal telescope planes was used. the total number of experimental events with twofold muon interactions selected with the conditions e1, e2 ≥ 5 gev and with muon tracks crossing all four horizontal planes equals 1831; the corresponding statistics of simulated events amounts to 26951 events.
experimental distributions of the events n(e2) were compared with geant4 simulation results for different selection criteria (e1 ≥ 5 gev and k2 ≥ 1; e1 ≥ 20 gev and k2 ≥ 2, etc.) and four different muon energy spectrum models (fig. 3):
1. usual muon spectrum from π-, k-decays in the atmosphere (equation (1) with r = 0 and γμ = 2.7);
2. usual spectrum with addition of prompt muons at the level of r = 1 × 10^−3;
3. the same, but with a three times higher prompt muon contribution, r = 3 × 10^−3;
4. usual spectrum with inclusion of vhe muons according to equation (2) with the following parameters: nμ = 1, fμ = 0.025, ek = 5 pev, γ1 = 1.7, and γ2 = 2.0.
figure 3: differential muon energy spectra for vertical direction (4 models).
experimental and calculated integral distributions of the events in e2 are presented in fig. 4. on the whole, within statistical uncertainties the data and the calculations for a usual muon spectrum are in good agreement in the range 5 gev ≤ e2 ≤ 30 gev. however, at large values of e2 (more than 80 gev) the expected number of events is several times less (and in the tail of the distribution, almost ten times less) than that observed in the experiment. let us note that it is precisely in the region e2 ∼ 100 gev and higher that the multiple interaction method in bust becomes most sensitive to changes of the muon spectrum shape.
figure 4: integral distributions of experimental events (the points) and expected spectra (the curves) in e2 for 4 different muon spectrum models (see the text) and two sets of selection criteria: a) e1 > 5 gev, k2 > 1; b) e1 > 20 gev, k2 > 2.
when the differential distribution in e2 corresponding to fig. 4a is compared with the one expected under the assumption of a usual muon spectrum (from π- and k-decays), the value of χ2 equals 32.9 (for 8 degrees of freedom), which implies rejection of this hypothesis on the spectrum shape with about 99.9% confidence. the situation remains nearly the same after the inclusion of prompt muons with r = 10^−3 (spectrum model 2, χ2 = 24.7). much better agreement is reached when the data are compared with calculation results for a sufficiently large fraction of prompt muons (model 3, r = 3 × 10^−3) or with the addition of vhe muons (model 4); the corresponding values of χ2 in these cases are 17.4 and 15.6, respectively.
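purely as an illustration of how such χ2 values translate into tail probabilities (the snippet is ours; the confidence figures quoted in the text may include additional considerations), one can write:

    from scipy.stats import chi2

    for label, value in [("model 1", 32.9), ("model 2", 24.7),
                         ("model 3", 17.4), ("model 4", 15.6)]:
        print(label, "p-value =", chi2.sf(value, df=8))   # survival function, 8 d.o.f.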
it is important to note that the observed excess of events with large values of e2 is retained under different approaches to the data analysis and different selection criteria (compare fig. 4a and fig. 4b). the four events with the highest values of e2 (more than 80 gev) are presented in fig. 5. all these events are detected inside the telescope in all horizontal planes and have a clear topology. therefore, in spite of the low statistics, the deviation of the experimental distributions from calculations performed in the framework of muon generation only in π-, k-decays seems to be significant, and points to a possible existence of fluxes of vhe or prompt muons with the considered parameters.
4. muon energy spectrum
in order to pass from the experimental distributions of event characteristics to the muon energy spectrum, it is
necessary to determine which intervals of muon energies give the main contribution to generation of registered events,
to choose effective estimates for them (mean, logarithmic mean, or median muon energies) and to define the conversion
procedure.
distributions of the muon energies contributing to events with several threshold values of e2 at fixed parameter k2, calculated for 4 different assumptions on the muon spectrum shape (spectrum models 1-4 described in the preceding section), are plotted in fig. 6 (a,b,c,d). these distributions are rather wide even for a usual spectrum of muons (fig. 6a), and, in the presence of an additional muon flux with a harder spectrum, become bimodal at high e2 values (figs. 6b-6d). the appearance of the second hump in the region of muon energies of hundreds of tev and higher is caused by the good sensitivity of the multiple interaction method precisely to this harder part of the muon spectrum.
in order to illustrate the decisive role of the direct electron pair production process in the multiple interaction method, calculations for the muon spectrum with the addition of vhe muons (model 4) were repeated with the exclusion of pair
figure 5: experimental events with highest values of e2. energy depositions measured in scintillation planes are indicated.
production. the obtained distributions (fig. 7) appeared to be insensitive to the additional vhe muon flux (compare with fig. 6d), and the effective muon energies in this case would not exceed several tens of tev even for high values of e2.
the energy spectra of muons from the bust data on multiple interactions were obtained in the following way. first, for certain sets of selection criteria (the e1 and k2 parameters), the differential and integral distributions of the observed events nobs with an equal step in the common logarithm of e2 were constructed, namely, the number of events in every bin ∆lg(e2, gev) = 0.7-0.9, 0.9-1.1, ..., 2.3-2.5 for the differential distribution, and the total number of events with lg(e2, gev) ≥ 0.7, ≥ 0.9, etc. for the integral one, were counted.
the expected model distributions nmod of the events in lg e2, as well as the energy distributions of muons contributing to events in a certain interval ∆lg e2 or ≥ lg e2, the corresponding mean, logarithmic mean, and median energies e∗μ for the differential distributions, and the effective threshold muon energies e∗μ0 for the integral ones, were computed on the basis of the results of the geant4 telescope response simulations for the respective combination of selection criteria (e1 and k2) and the four models of the surface muon energy spectrum discussed above.
the dependences of the logarithmic mean, mean, and median muon energies on e2 (for the differential in e2 event distributions) are presented in fig. 8 (a, b, and c respectively) for different spectrum models. these dependences clearly demonstrate the main advantage of the multiple interaction method, namely, the possibility to advance into the muon energy region of hundreds of tev and even a few pev in the case of the presence of a substantial flux of muons with a hard spectrum at these energies. since the energy deposition in the scintillator layers of bust constitutes about 10% of the cascade energy [10], the ratio between the effective muon energy e∗μ and e2 reaches in this case the order of 10^3. in fig. 8d, the dependences of the effective threshold energy e∗μ0 (estimated via logarithmic mean values) for the integral distributions in e2 are shown. qualitatively, it is seen that the dependences in figs. 8a and 8d differ only weakly from each other.
finally, the estimates of the differential and integral muon spectra are found in the following way:

d ñμ(e∗μ)/deμ = dnμ(e∗μ)/deμ × nobs^dif(e2) / nmod^dif(e2),   (4)

ñμ(≥ e∗μ0) = nμ(≥ e∗μ0) × nobs^int(e2) / nmod^int(e2),   (5)

where dnμ(e∗μ)/deμ and nμ(≥ e∗μ0) are the differential and integral muon energy spectra for the respective spectrum model calculated at the corresponding effective muon energy (e∗μ and ≥ e∗μ0).
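in code, this rescaling step is trivial; the sketch below (ours, with illustrative array names) applies eqs. (4)-(5) bin by bin:

    import numpy as np

    def reconstruct_spectrum(model_flux, n_obs, n_mod):
        # per-bin arrays: model flux at the effective energy of each e2 bin,
        # observed event counts, and model-predicted event counts
        return np.asarray(model_flux) * np.asarray(n_obs, float) / np.asarray(n_mod, float)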
figure 6: energy distributions of muons giving contribution to events with different threshold values e2 (> 10, > 40, > 100 gev) for 4 models of muon energy spectrum (a: model 1, b: model 2, c: model 3, d: model 4).
figure 7: energy distributions of muons giving contribution to events with different threshold values e2 for the model 4 of the muon spectrum with the "switched-off" pair production process (compare fig. 6d).
figure 8: dependences of logarithmic mean (a), mean (b), and median (c) muon energies on e2 for differential in e2 distributions. frame (d): effective threshold muon energies for integral in e2 distribution. the curves correspond to four different spectrum models.
the differential muon energy spectra for the vertical direction reconstructed from the experimental data according to the described procedure under four different assumptions on the muon spectrum model are presented in fig. 9. results are shown for one of the combinations of selection criteria with the highest statistics (e1 ≥ 5 gev, k2 ≥ 1). since there is no generally accepted definition of the effective energy of the muons responsible for the observed events, the points corresponding to all three versions (mean, logarithmic mean, and median energies) are given in the figure. the curves in each frame represent the assumed spectrum models.
5. discussion
the following conclusions can be made from the analysis of the results presented in fig. 9. if one assumes that the muon spectrum is formed only due to decays of pions and kaons in the atmosphere (i.e. the "usual" muon spectrum, fig. 9a), then a strong dependence of the spectrum reconstruction results on the choice of the effective muon energy (mean, logarithmic mean, median energy) appears as a large spread of the reconstructed points. furthermore, the muon intensity estimated in the framework of this assumption in the range of several tens of tev (considering the median or logarithmic mean energy) or around 100 tev (according to the mean energy) is practically ten times higher than the expected one, and seriously contradicts the results of other experiments, a compilation of which is given in [5].
the spread of the experimental points relative to the model spectrum curves decreases as the contribution of an additional muon flux with a harder energy spectrum increases (figs. 9b and 9c). at the same time, the agreement is improving
figure 9: differential muon energy spectra reconstructed from bust data on multiple interactions at different assumptions on muon spectrum model (a: model 1, b: model 2, c: model 3, d: model 4) with different choice of effective muon energy: mean (circles), logarithmic mean (diamonds), and median (triangles).
also in the range of moderate muon energies (tens of tev); in other words, the dependence of the results on the choice of the effective muon energy (mean, logarithmic mean, median energy) in the muon spectrum reconstruction disappears. the best agreement of the data with the expectation in a wide range of energies (from a few tev to a few pev) is observed for the spectrum with the addition of the flux of vhe muons with the parameters indicated above (fig. 9d, spectrum model 4); the r.m.s. deviation of the points from the curve in this case is minimal.
in fig. 10, the integral muon energy spectra measured at bust by means of different methods are compared. one of the possible reasons for the difference between the results obtained from the depth-intensity curve and from the electromagnetic cascade spectrum measurements may be related to the different procedures used for the muon spectrum reconstruction from the experimental data. thus, in the paper [9], in order to pass from the depth-intensity dependence (after evaluation of the r parameter) to the integral energy spectrum of muons at the surface (taking into account muon energy loss fluctuations), the mean energy of that part of the spectrum which is responsible for the muon flux intensity at a given depth was used. the authors note that the mean energy of prompt muons (from charmed particle decays), due to their flatter spectrum, is about twice the mean energy of usual muons; therefore a weighted average of the mean energies of the two components of the flux was used for the conversion.
in the paper [10], the transition from the measured spectrum of energy depositions of electromagnetic cascades in the telescope to the muon energy spectrum was performed for the median energies of the muons responsible for events in a given energy deposition bin.
10
3
10
4
10
5
10
6
10
-3
10
-2
10
-1
m
o
d
e
l 4
m
o
d
e
l 3
m
o
d
e
l 2
bust - depth-intensity
bust - calorimeter
bust - pairmeter (model 3)
bust - pairmeter (model 4)
e
2
n
(>e
), cm
-2
s
-1
sr
-1
gev
2
e
, gev
m
o
d
e
l 1
figure 10: integral energy spectrum of muons for vertical direction reconstructed from the bust data on depth-intensity curve (circles, [9]),
spectrum of electromagnetic cascades (triangles, [10]), and by means of multiple interaction method for two models (3 and 4) used in muon
spectrum reconstruction from experimental data. the curves represent calculations for different spectrum models.
a dip in the reconstructed muon energy spectrum around 10 tev (fig. 10) is most probably related to some methodical reasons, since it is difficult to suggest any physical explanation for its appearance. as for the absolute value of the muon flux measured by this method, it is necessary to note that the systematic uncertainty in the muon intensity could reach about 25%, since, as indicated in [10], the accuracy of the absolute energy calibration of the energy deposition measurements was about 10%.
for the reconstruction of the integral muon energy spectrum from the data on multiple interactions of muons in bust, spectrum models 3 and 4 were used (open and solid diamonds in fig. 10, respectively). the estimates of the effective threshold muon energies were obtained on the basis of logarithmic mean values, as optimal ones for quasi-power spectra of particles [6]. as seen from the figure, no deviations from the usual spectrum are observed up to energies of at least ∼10 tev for model 3 and ∼30 tev for model 4, while around 100 tev and higher a considerable excess in comparison with the spectrum of muons from π-, k-decays appears. moreover, the muon energy reconstruction by means of model 4 gives better agreement of the experimental points with the theoretical curve than that by means of model 3.
a natural question may arise in discussions of the obtained results: how was it possible to register pev muons with a relatively small setup (∼200 m^2), bearing in mind that their flux is extremely low? such doubts are indeed justified for usual decay muons; in this case, during the period of the experiment (more than 10 years), at best one or two muons with energy above 1 pev could cross the telescope. however, for flatter spectra (muons from charmed particles or vhe muons from new generation processes) the expected number of such muons may reach several tens. and, even taking into account the relatively low probability of generating events with twofold muon interactions in such a thin setup as bust (of the order of ∼10^−1), the possibility of registering pev muons becomes quite real.
in fig. 11, the differential muon energy spectrum obtained from the bust data by means of the multiple interaction method is compared with the results of other experiments taken from the compilation [5]. as seen from the figure, the present data are the first ones at energies above 100 tev and, in spite of the low statistics, give evidence for a change of the muon spectrum behavior precisely in this region. one may expect that this energy region will soon become accessible for investigations by means of cascade shower spectrum measurements at icecube [15], and the use of the pair meter technique at setups of such a scale would also allow exploring the range of pev muon energies.
figure 11: differential muon energy spectra for vertical direction measured in various experiments (compilation from [5]). the curves correspond to different models of prompt muon contribution considered in [5]. present bust results obtained by means of multiple interaction method are added (open and solid diamonds for models 3 and 4 correspondingly).
conclusion
the method of multiple interactions of muons, based on the ideas of the pair meter technique, makes it possible to use the bust data for estimating the energy spectrum of cosmic ray muons in a wide energy region from several tev to hundreds of tev. the analysis shows that, if the existence of an additional flux of muons with a harder spectrum is taken into account, no serious deviations from the usual spectrum formed as a result of pion and kaon decays are observed up to muon energies ∼20 tev for model 3 and ∼50 tev for model 4. at energies ≳100 tev this additional flux exceeds the expected contribution of muons from charmed particles corresponding to the parameter r ∼ 10^−3, and may be explained with r ∼ 3 × 10^−3, which suggests a faster increase of the charmed particle yield compared to recent theoretical predictions. however, the best description of the experimental data can be achieved by assuming an additional contribution of vhe muons from new physical processes related to the appearance of the observed knee in the cosmic ray energy spectrum.
references
[1] a.a. petrukhin, in proc. xith rencontres de blois frontiers of matter, blois, france, 1999. ed. by j. tran thanh van (the gioi publishers,
vietnam, 2001), p. 401.
[2] k. nakamura et al. (particle data group), j. phys. g 37, 075021 (2010).
[3] l.v. volkova, o. saavedra, astropart. phys., 32, 136 (2009).
[4] a.a. petrukhin, isvhecri 2010, batavia, usa, 2010, arxiv:1101.1900v1 [astro-ph.he].
[5] e.v. bugaev et al., phys. rev. d, 58 (1998) 05401; arxiv:hep-ph/9803488 v3, jan 2000.
[6] r.p. kokoulin, a.a. petrukhin, nucl. instr. meth. a, 263 (1988) 468.
[7] r.p. kokoulin, a.a. petrukhin, sov. j. part. nucl., 21 (1990) 332.
[8] a.e. chudakov et al., proc. 16th icrc, kyoto, 1979, v. 10, p. 276.
[9] yu.m. andreev et al., proc. 21st icrc, adelaide, 1990, v. 9, p. 301.
[10] v.n. bakatanov et al. sov. j. nucl. phys., 55 (1992) 1169.
[11] s. agostinelli et al., nucl. instr. meth. a, 506 (2003) 250.
[12] j. allison et al., ieee trans. nucl. science, 53 (2006) 270.
[13] a.f. yanin et al., instr. experim. techn., 47, no. 3 (2004) 330.
[14] a.g. bogdanov et al., physics of atomic nuclei, 72 (2009) 2049.
[15] f. halzen, s. klein, rev. sci. instrum 81 (2010) 081101.
|
0911.1694 | regularizing portfolio optimization | the optimization of large portfolios displays an inherent instability to
estimation error. this poses a fundamental problem, because solutions that are
not stable under sample fluctuations may look optimal for a given sample, but
are, in effect, very far from optimal with respect to the average risk. in this
paper, we approach the problem from the point of view of statistical learning
theory. the occurrence of the instability is intimately related to over-fitting
which can be avoided using known regularization methods. we show how
regularized portfolio optimization with the expected shortfall as a risk
measure is related to support vector regression. the budget constraint dictates
a modification. we present the resulting optimization problem and discuss the
solution. the l2 norm of the weight vector is used as a regularizer, which
corresponds to a diversification "pressure". this means that diversification,
besides counteracting downward fluctuations in some assets by upward
fluctuations in others, is also crucial because it improves the stability of
the solution. the approach we provide here allows for the simultaneous
treatment of optimization and diversification in one framework that enables the
investor to trade-off between the two, depending on the size of the available
data set.
| introduction
markowitz' portfolio selection theory [1, 2] is one of the pillars of theoretical finance. it
has greatly influenced the thinking and practice in investment, capital allocation, index
tracking, and a number of other fields. its two major ingredients are (i) seeking a trade-off between risk and reward, and (ii) exploiting the cancellation between fluctuations of
(anti-)correlated assets. in the original formulation of the theory, the underlying process
was assumed to be multivariate normal. accordingly, reward was measured in terms of
the expected return, risk in terms of the variance of the portfolio.
the fundamental problem of this scheme (shared by all the other variants that have
been introduced since) is that the characteristics of the underlying process generating
the distribution of asset prices are not known in practice, and therefore averages are
replaced by sums over the available sample. this procedure is well justified as long
as the sample size, t (i.e.
the length of the available time series for each item), is
sufficiently large compared to the size of the portfolio, n (i.e. the number of items).
in that limit, sample averages asymptotically converge to the true average due to the
central limit theorem.
unfortunately, the nature of portfolio selection is not compatible with this limit.
institutional portfolios are large, with n's in the range of hundreds or thousands, while
considerations of transaction costs and non-stationarity limit the number of available
data points to a couple of hundreds at most. therefore, portfolio selection works in a
region, where n and t are, at best, of the same order of magnitude. this, however, is
not the realm of classical statistical methods. portfolio optimization is rather closer to
a situation which, by borrowing a term from statistical physics, might be termed the
"thermodynamic limit", where n and t tend to infinity such that their ratio remains
fixed.
it is evident that portfolio theory struggles with the same fundamental difficulty
that is underlying basically every complex modeling and optimization task: the high
number of dimensions and the insufficient amount of information available about the
system. this difficulty has been around in portfolio selection from the early days and
a plethora of methods have been proposed to cope with it, e.g. single and multi-factor
models [3], bayesian estimators [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], or,
more recently, tools borrowed from random matrix theory [18, 19, 20, 21, 22, 23]. in
the thermodynamic regime, estimation errors are large, sample to sample fluctuations
are huge, results obtained from one sample do not generalize well and can be quite
misleading concerning the true process.
the same problem has received considerable attention in the area of machine
learning. we discuss how the observed instabilities in portfolio optimization (elaborated
in section 2) can be understood and remedied by looking at portfolio theory from the
point of view of machine learning.
portfolio optimization is a special case of regression, and therefore can be
understood as a machine learning problem (see section 3).
in machine learning, as
well as in portfolio optimization, one wishes to minimize the actual risk, which is the
risk (or error) evaluated by taking the ensemble average. this quantity, however, can
not be computed from the data, only the empirical risk can. the difference between the
two is not necessarily small in the thermodynamic limit, so that a small empirical risk
does not automatically guarantee small actual risk [24].
statistical learning theory [24, 25, 26] finds upper bounds on the generalization
error that hold with a certain accuracy.
these error bounds quantify the expected
generalization performance of a model, and they decrease with decreasing capacity of
the function class that is being fitted to the data. lowering the capacity therefore lowers
the error bound and thereby improves generalization. the resulting procedure is often
referred to as regularization and essentially prevents over-fitting (see section 4).
in the thermodynamic limit, portfolio optimization needs to be regularized. we
show in section 5 how the above mentioned concepts, which find their practical
application in support vector machines [27, 28], can be used for portfolio optimization.
support vector machines constitute an extremely powerful class of learning algorithms
which have met with considerable success.
we show that regularized portfolio
optimization, using the expected shortfall as a risk measure, is almost identical to
support vector regression, apart from the budget constraint. we provide the modified
optimization problem which can be solved by linear programming.
in section 6, we discuss the financial meaning of the regularizer:
minimizing
the l2 norm of the weight vector corresponds to a diversification pressure. we also
discuss alternative constraints that could serve as regularizers in the context of portfolio
optimization.
taking this machine learning angle allows one to organize a variety of ideas in the
existing literature on portfolio optimization filtering methods into one systematic and
well developed framework. there are basically two choices to be made: (i) which risk
measure to use, and (ii) which regularizer. these choices result in different methods,
because different optimization problems are being solved.
while we focus here on the popular expected shortfall risk measure (in section 5),
the variance has a long history as an important risk measure in finance. several existing
filtering methods that use the variance risk measure essentially implement regularization,
without necessarily stating so explicitly. the only work we found in this context [7] that
mentions regularization in the context of portfolio optimization has not been noticed by
the ensuing, closely related, literature. it is easy to show that when the l2 norm is used
as a regularizer, then the resulting method is closely related to bayesian ridge regression,
which uses a gaussian prior on the weights (with the difference of the additional budget
constraint). the work on covariance shrinkage, such as [8, 9, 10, 11], falls into the same
category. other priors can be used [17], which can be expected to lead to different results
(for an insightful comparison see e.g. [29]). using the l1 norm has been popularized
in statistics as the "lasso" (least absolute shrinkage and selection operator) [29], and
methods that use any lp norm are also known as the "bridge" [30].
2. preliminaries – instability of classical portfolio optimization.
portfolio optimization in large institutions operates in what we called the
thermodynamic limit, where both the number of assets and the number of data points
are large, with their ratio a certain, typically not very small, number. the estimation
problem for the mean is so serious [31, 32] as to make the trade-off between risk and
return largely illusory. therefore, following a number of authors [8, 9, 33, 34, 35], we
focus on the minimum variance portfolio and drop the usual constraint on the expected
return. this is also in line with previous work (see [36] and references therein), and
makes the treatment simpler without compromising the main conclusions. an extension
of the results to the more general case is straightforward.
nevertheless, even if we forget about the expected return constraint, the problem
still remains that covariances have to be estimated from finite samples.
it is an
elementary fact from linear algebra that the rank of the empirical n × n covariance
matrix is the smaller of n and t. therefore, if t < n, the covariance matrix is singular
and the portfolio selection task becomes meaningless. the point t = n thus separates
two regions: for t > n the portfolio problem has a solution, whereas for t < n, it
does not.
even if t is larger than n, but not much larger, the solution to the minimum
variance problem is unstable under sample fluctuations, which means that it is not
possible to find the optimal portfolio in this way.
this instability of the estimated
covariances, and hence of the optimal solutions, has been generally known in the
community, however, the full depth of the problem has only been recognized recently,
when it was pointed out that the average estimation error diverges at the critical point
n = t [37, 38, 39].
in order to characterize the estimation error, kondor and co-workers used the ratio q0^2 between (i) the risk, evaluated at the optimal solution obtained by portfolio optimization using finite data, and (ii) the true minimal risk. this quantity is a measure of generalization performance, with perfect performance when q0^2 = 1, and increasingly bad performance as q0^2 increases. as found numerically in [38] and demonstrated analytically by random matrix theory techniques in [40], the quantity q0 is proportional to (1 − n/t)^(−1/2) and diverges when t goes to n from above.
the identification of the point n = t as a phase transition [36, 41] allowed for
the establishment of a link between portfolio optimization and the theory of phase
transitions, which helped to organize a number of seemingly disparate phenomena into
a single coherent picture with a rich conceptual content. for example, it has been shown
that the divergence is not a special feature of the variance, but persists under all the
other alternative risk measures that have been investigated so far: historical expected
shortfall, maximal loss, mean absolute deviation, parametric var, expected shortfall,
and semivariance [36, 41, 42, 43]. the critical value of the n/t ratio, at which the
divergence occurs, depends on the particular risk measure and on any parameter that
the risk measure may depend on (such as the confidence level in expected shortfall).
however, as a manifestation of universality, the power law governing the divergence
of the estimation error is independent of the risk measure [36, 41, 42], the covariance
structure of the market [39], and the statistical nature of the underlying process [44].
ultimately, this line of thought led to the discovery of the instability of coherent risk
measures [45].
3. statistical reasons for the observed instability in portfolio optimization
as mentioned above, for simplicity and clarity of the treatment we do not impose a
constraint on the expected return, and only look for the global minimum risk portfolio.
this task can be formalized as follows: given a fixed budget, customarily taken to
be unity, given t past measurements of the returns of n assets: x_i^k, i = 1, . . . , n, k = 1, . . . , t, and given the risk functional f(w * x), find a weighted sum (the portfolio), w * x,‡ such that it minimizes the actual risk

r(w) = ⟨f(w * x)⟩_p(x),   (1)

under the constraint that Σ_i w_i = 1. the central problem is that one does not know the distribution p(x), which is assumed to underlie the generation of the data. in practice,
one then minimizes the empirical risk, replacing ensemble averages by sample averages:
remp(w) = (1/t) Σ_{k=1}^{t} f(w * x^(k))   (2)
now, let us interpret the weight vector as a linear model. the model class given by the
linear functions has a capacity h, which is a concept that has been introduced by vapnik
and chervonenkis in order to measure how powerful a learning machine is [24, 25, 26]. (in
the statistical learning literature, a learning machine is thought of as having a function
class at its disposal, together with an induction principle and an algorithmic procedure
for the implementation thereof [46]). the capacity measures how powerful a function
class is, and thereby also how easy it is to learn a model of that class. the rough idea is
this: a learning machine has larger capacity if it can potentially fit more different types
of data sets.
higher capacity comes, however, at the cost of potentially over-fitting
the data. capacity can be measured, for example, by the vapnik-chervonenkis (vc-)
dimension [24], which is a combinatoric measure that counts how many data points can
be separated in all possible ways by any function of a given class.
to make the idea tangible for linear models, focus on two dimensions (n = 2). for each number of points, n, one can choose the geometrical arrangement of the points in the plane freely. once it is chosen, points are labeled by one of two labels, say "red" and "blue". can a line separate the red points from the blue points for any of the 2^n different ways in which the points could be colored? the vc-dimension is the largest number of points for which this can be done. two points can trivially be separated by a line. three points that are not arranged collinearly can still be separated for any of the 8 possible labelings. however, for four points this is no longer the case, since there is no geometrical arrangement for which one could not find a labeling that cannot be separated by a line. the vc-dimension is 3, and in general, for linear models in n dimensions, it is n + 1 [46, 47].
‡ notation: bold face symbols are understood to denote vectors.
in the regime in which the number of data points is much larger than the capacity
of the learning machine, h/t << 1, a small empirical risk guarantees small actual risk
[24]. for linear functions through the origin that are otherwise unconstrained, the vc-
dimension grows with n. in the thermodynamic regime, where n/t is not very small,
minimizing the empirical risk does not necessarily guarantee a small actual risk [24].
therefore it is not guaranteed to produce a solution that generalizes well to other data
drawn from the same underlying distribution.
in solving the optimization problem that minimizes the empirical risk, eq. (2), in the regime in which n/t is not very small, portfolio optimization over-fits the observed data.
it thereby finds a solution that essentially pays attention to the seeming correlations
in the data which come from estimation noise due to finite sample effects, rather than
from real structure. the solution is thus different for different realizations of the data,
and does not necessarily come close to the actual optimal portfolio.
4. overcoming the instability
the generalization error can be bounded from above (with a certain probability) by
the empirical error plus a confidence term that is monotonically increasing with some
measure of the capacity, and depends on the probability with which the bound holds
[48]. several different bounds have been established, connected with different measures
of capacity, see e.g. [47].
poor generalization and over-fitting can be improved upon by decreasing the
capacity of the model [25, 26], which helps to lower the generalization error. support
vector machines are a powerful class of algorithms that implement this idea.
we suggest that if one wants to find a solution to the portfolio optimization problem
in the thermodynamic regime, then one should not minimize the empirical risk alone,
but also constrain the capacity of the portfolio optimizer (the linear model).
how can portfolio optimization be regularized? portfolio optimization is essentially
a regression problem, and therefore we can apply statistical learning theory, in particular
the work on support vector regression.
note first that the capacity of a linear model class for which the length of the
weight vector is restricted to ∥w∥² ≤ a has an upper bound which is smaller than
the capacity of unconstrained linear models [25, 26]. the capacity is minimized when
the length of the weight vector is minimized [25, 26]. vapnik's concept of structural
risk minimization [48] results in the support vector algorithm [27, 28] which finds the
model with the smallest capacity that is consistent with the data, that is the model
with smallest ∥w∥2. this leads to a convex constrained optimization problem [27, 28]
which can be solved using linear programming.
5. regularized portfolio optimization with the expected shortfall risk
measure.
while the original markowitz' formulation [1] measures risk by the variance, many other
risk measures have been proposed since. today, the most widely used risk measure, both
in practice and in regulation, is value at risk (var) [49, 50]. var has, however, been
criticized for its lack of convexity, see e.g. [51, 52, 53], and an axiomatic approach,
leading to the introduction of the class of coherent risk measures, was put forward [51].
expected shortfall, essentially a conditional average measuring the average loss above a
high threshold, has been demonstrated to belong to this class [54, 55, 56].
expected shortfall has been steadily gaining popularity in recent years.
the
regularization we propose here is intended to cure its weak point, the sensitivity to
sample fluctuations, at least for reasonable values of the ratio n/t.
choose the risk functional f(z) = z θ(z − α_β), where α_β is a threshold, such that
a given fraction β of the (empirical) loss-distribution over z lies above α_β. one now
wishes to minimize the average over the remaining tail distribution, containing the
fraction ν := 1 − β, and defines the expected shortfall as
es = min_ǫ [ ǫ + (1/(νt)) Σ_{k=1}^{t} (1/2) ( −ǫ − w·x(k) + | −ǫ − w·x(k) | ) ] .     (3)
the term in the sum implements the θ-function, while ν in the denominator ensures
normalization of the tail distribution. it has been pointed out [57] that this optimization
problem maps onto solving the linear program:
min_{w,ξ,ǫ} [ (1/t) Σ_{k=1}^{t} ξ_k + νǫ ]                                            (4)
s.t.   w·x(k) + ǫ + ξ_k ≥ 0;   ξ_k ≥ 0;                                               (5)
       Σ_i w_i = 1.                                                                   (6)
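as an illustration of this mapping (our own sketch, not part of the original text), the linear program (4)-(6) can be handed directly to an off-the-shelf lp solver; the function and variable names below are ours, and x is assumed to be a t × n matrix whose rows are the observed returns x(k):

import numpy as np
from scipy.optimize import linprog

def min_expected_shortfall(x, nu):
    # decision variables stacked as z = (w_1..w_n, xi_1..xi_t, eps)
    t, n = x.shape
    c = np.concatenate([np.zeros(n), np.ones(t) / t, [nu]])           # objective, eq. (4)
    a_ub = np.hstack([-x, -np.eye(t), -np.ones((t, 1))])              # -w.x(k) - xi_k - eps <= 0, eq. (5)
    b_ub = np.zeros(t)
    a_eq = np.concatenate([np.ones(n), np.zeros(t), [0.0]])[None, :]  # budget constraint, eq. (6)
    b_eq = np.array([1.0])
    bounds = [(None, None)] * n + [(0, None)] * t + [(None, None)]    # xi_k >= 0, w and eps free
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], res.fun       # optimal weights and minimal expected shortfall

# usage: w, es = min_expected_shortfall(np.random.randn(500, 10), nu=0.05)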
we propose to implement regularization by including the minimization of ∥w∥2. this
can be done using a lagrange multiplier, c, to control the trade-off: as we relax the
constraint on the length of the weight vector, we can, of course, make the empirical
error go to zero and retrieve the solution to the minimal expected shortfall problem.
the new optimization problem reads:
min_{w,ξ,ǫ} [ (1/2) ∥w∥² + c ( (1/t) Σ_{k=1}^{t} ξ_k + νǫ ) ]                         (7)
s.t.   −w·x(k) ≤ ǫ + ξ_k;                                                             (8)
       ξ_k ≥ 0;   ǫ ≥ 0;                                                              (9)
       Σ_i w_i = 1.                                                                   (10)
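a minimal sketch of this regularized problem as a quadratic program, assuming the cvxpy modelling library (the function name, variable names and solver choice are ours, not the authors'):

import numpy as np
import cvxpy as cp

def regularized_expected_shortfall(x, c, nu):
    # x is a (t, n) array of observed returns; c is the trade-off parameter of eq. (7)
    t, n = x.shape
    w = cp.Variable(n)
    xi = cp.Variable(t, nonneg=True)
    eps = cp.Variable(nonneg=True)
    objective = cp.Minimize(0.5 * cp.sum_squares(w)
                            + c * (cp.sum(xi) / t + nu * eps))   # eq. (7)
    constraints = [-x @ w <= eps + xi,                            # eq. (8)
                   cp.sum(w) == 1]                                # eq. (10)
    cp.Problem(objective, constraints).solve()
    return w.value

# usage: w = regularized_expected_shortfall(np.random.randn(500, 10), c=10.0, nu=0.05)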
the problem is mathematically almost identical to a support vector regression (svr)
algorithm called ν-svr. there are two differences: (i) the budget constraint is added,
and (ii) the loss function is asymmetric. expected shortfall is an asymmetric version
of the ǫ-insensitive loss used in support vector regression, defined as the maximum of
{0; |f(x) −y| −ǫ}, where f(x) is the interpolant, and y the measured value (response).
in that sense ǫ measures an allowable error below which deviations are discarded.§
the use of asymmetric risk measures in finance is motivated by the consideration
that investors are not afraid of upside fluctuations. however, to make the relationship
to support vector regression as clear as possible, we will first solve the more general
symmetrized problem, before restricting our treatment to the completely asymmetric
case, corresponding to expected shortfall. in addition, one may argue that focusing
exclusively on large negative fluctuations might not be advisable even from a financial
point of view, especially when one does not have sufficiently large samples. in a relatively
small sample it may happen that a particular item, or a certain combination of items,
dominates the rest, i.e. produces a larger return than any other item in the portfolio
at each time point, even though no such dominance exists on longer time scales. the
probability of such an apparent arbitrage increases with the ratio n/t, and when it
occurs it may encourage an investor acting on a lopsided risk measure to take up very
large long positions in the dominating item(s), which may turn out to be detrimental
in the long run. this is the essence of the argument that has led to the discovery of
the instability of coherent and downside risk measures [43, 45].
according to the above, let us consider the general case where positive deviations
are also penalized. the objective function, eq. (7), then becomes
min_{w,ξ,ǫ} [ (1/2) ∥w∥² + c ( (1/t) Σ_{k=1}^{t} (ξ_k + ξ∗_k) + νǫ ) ] ,              (11)
and additional constraints have to be added to eqs. (8) to (10):
w·x(k) ≤ ǫ + ξ∗_k;   ξ∗_k ≥ 0.                                                        (12)
this problem corresponds to ν-svr, a well understood regression method [60], with
the only difference that the budget constraint, eq. (10) is added here. in the finance
context the associated loss might be called symmetric tail average (sta). solving the
regularized expected shortfall minimization problem, eqs. (7)–(10) is a special case of
solving the regularized sta minimization problem, eq. (11) with the constraints eqs.
(8)–(10) and (12). therefore, we solve the more general problem first (section 5.1),
before providing, in section 5.2, the solution to the regularized expected shortfall, eqs.
(7)–(10).
§ the mathematical similarity between minimum expected shortfall without regularization and the eν-
svm algorithm [58] was pointed out, but incorrectly, in [59]. there is an important difference between
the two optimization problems. in eν-svm, the length of the weight vector, ∥w∥, is constrained, which
implements capacity control. in the pure expected shortfall minimization, eq. (4), this is not done.
instead, the total budget p
i wi is fixed. this difference is not correctly identified in the proof of the
central theorem (theorem 1) in [59].
regularizing portfolio optimization.
9
5.1. regularized symmetric tail average minimization
the solution to the regularized symmetric tail average problem, eq. (11) with the
constraints eqs. (8)–(10) and (12), is found in analogy to support vector regression,
following [60], by writing down the lagrangian, using lagrange multipliers,
{α, α∗, γ, λ, η, η∗}, for the constraints. the solution is then a saddle point, i.e. minimum
over primal and maximum over dual variables. the lagrangian is different from the
one that arises in ν-svr in that it is modified by the budget constraint:
l[w, ξ, ξ∗, ǫ, α, α∗, γ, λ, η, η∗] = (1/2) ∥w∥² + (c/t) Σ_{k=1}^{t} (ξ_k + ξ∗_k) + cνǫ − λǫ + γ ( Σ_i w_i − 1 )
    + Σ_{k=1}^{t} α∗_k ( w·x(k) − ǫ − ξ∗_k ) − Σ_{k=1}^{t} α_k ( w·x(k) + ǫ + ξ_k )
    − Σ_{k=1}^{t} ( η_k ξ_k + η∗_k ξ∗_k )                                             (13)
  = f[w] + ǫ ( cν − λ − Σ_{k=1}^{t} (α_k + α∗_k) ) − γ
    + Σ_{k=1}^{t} [ ξ_k ( c/t − α_k − η_k ) + ξ∗_k ( c/t − α∗_k − η∗_k ) ]            (14)
with
f[w] = w · ( (1/2) w − ( Σ_{k=1}^{t} (α_k − α∗_k) x(k) − γ 1 ) ) ,                    (15)
where 1 denotes the vector of ones of length n. setting the derivative of the lagrangian
w.r.t. w to zero gives:
wopt = Σ_{k=1}^{t} (α_k − α∗_k) x(k) − γ 1                                            (16)
this solution for the optimal portfolio is sparse in the sense that, due to the karush-
kuhn-tucker conditions (see e.g. [61]), only those points contribute to the optimal
portfolio weights for which the inequality constraints in (8), and the corresponding
constraints in eq. (12), are met exactly. the solution wopt contains only those points,
and effectively ignores the rest. this sparsity contributes to the stability of the
solution. regularized portfolio optimization (rpo) operates, in contrast to general
regression, with a fixed budget. as a consequence, the lagrange multiplier γ now
appears in the optimal solution, eq. (16). compared to the optimal solution in support
vector (sv) regression, wsv, the solution vector under the budget constraint, wrpo, is
shifted by γ:
wrpo = wsv − γ 1 .                                                                    (17)
let us now consider the dual problem. the dual is, in general, a function of the
dual variables, which are here {α, α∗, γ, λ, η, η∗}, although we will see in
the following that some of these variables drop out.
the dual is defined as
d := min_{w,ξ,ξ∗,ǫ} l[w, ξ, ξ∗, ǫ, α, α∗, γ, λ, η, η∗], and the dual problem is then to maximize
d over the dual variables. we can replace the minimization over w by evaluating the
lagrangian at wopt. for that we have to evaluate
f[wopt] = −(1/2) ∥wopt∥²                                                              (18)
        = −(1/2) ( Σ_{k=1}^{t} (α_k − α∗_k) x(k) − γ 1 )² .                           (19)
for the other terms in the lagrangian, we have to consider different cases:
(i) if ( cν − λ − Σ_{k=1}^{t} (α_k + α∗_k) ) < 0, then l can be minimized by letting ǫ → ∞,
    which means that d = −∞.
(ii) if ( cν − λ − Σ_{k=1}^{t} (α_k + α∗_k) ) ≥ 0: the term ǫ ( cν − λ − Σ_{k=1}^{t} (α_k + α∗_k) )
    vanishes. reason: if equality holds, this is trivially true, and if the inequality
    holds strictly then l can be minimized by setting ǫ = 0.
similarly, for the other constraints (the notation (∗) means that this is true for variables
with and without the asterisk):
(i) if ( c/t − α(∗)_k − η(∗)_k ) < 0, then l can be minimized by letting ξ(∗)_k → ∞, which
    means that d = −∞.
(ii) if ( c/t − α(∗)_k − η(∗)_k ) ≥ 0, then ξ(∗)_k ( c/t − α(∗)_k − η(∗)_k ) = 0. reason: if the
    inequality holds strictly then l can be minimized by ξ(∗)_k = 0. if equality holds then
    it is trivially true.
by a similar argument, the term γ in eq. (14) disappears in the dual. altogether we
have that either d = −∞, or
d(α, α∗, γ) = min_{ξ,ξ∗,ǫ} f[wopt(α, α∗, γ)] = −(1/2) ∥wopt∥²                         (20)
and
Σ_{k=1}^{t} (α∗_k + α_k) ≤ cν − λ                                                     (21)
and
α(∗)_k + η(∗)_k ≤ c/t .                                                               (22)
note that the variables ξ(∗)_k, η(∗)_k, ǫ, λ do not appear in f[wopt(α, α∗, γ)]. the dual
problem is therefore given by
max_{α,α∗,γ}  −(1/2) ( Σ_{k=1}^{t} (α_k − α∗_k) x(k) − γ 1 )²                         (23)
s.t.   {α_k, α∗_k} ∈ [0, c/t]                                                         (24)
       Σ_{k=1}^{t} (α∗_k + α_k) ≤ cν.                                                 (25)
we can analytically maximize over γ and obtain for the optimal value
γ = (1/n) ( Σ_{k=1}^{t} (α_k − α∗_k) Σ_{i=1}^{n} x(k)_i − 1 )                         (26)
the optimal projection (= optimal portfolio) is given by
wopt · x = Σ_{k=1}^{t} (α_k − α∗_k) x(k)·x − (1/n) ( Σ_{k=1}^{t} (α_k − α∗_k) Σ_{i=1}^{n} x(k)_i − 1 ) 1·x .     (27)
for n → ∞ the second term vanishes and the solution is the same as the solution
in support vector regression. note that the kernel-trick (see e.g. [47]), which is used
in support vector machines to find nonlinear models hinges on the fact that only dot
products of input vectors appear in the support vector expansion of the solution. as a
consequence of the budget constraint, one can no longer use the kernel-trick (compare
eq.
(27)).
as long as we disregard derivatives, this is not a problem for portfolio
optimization.
keep in mind, however, that the budget constraint introduces this
otherwise undesirable property.
support vector algorithms typically solve the dual form of the problem (for a recent
survey see [62]), which is in our case given by
max_{α,α∗,γ}  −(1/2) [ Σ_{k=1}^{t} Σ_{l=1}^{t} (α_k − α∗_k)(α_l − α∗_l) ( x(k)·x(l) − (1/n) Σ_{i=1}^{n} x(k)_i Σ_{i=1}^{n} x(l)_i ) ]     (28)
s.t.   {α_k, α∗_k} ∈ [0, c/t];   Σ_{k=1}^{t} (α∗_k + α_k) ≤ cν.
for n →∞the problem becomes identical to ν-svr, which can be solved by linear
programming, for which software packages are available [63]. for finite n, it can still
be solved with existing methods, because it is quadratic in the αk's. solvers such as
the ones discussed in [64] and [62] can be used, but have to be adapted to this specific
problem.
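for concreteness, the kernel matrix entering eq. (28), i.e. the ordinary dot-product gram matrix corrected by the budget-constraint term, can be assembled in a few lines (an illustration of ours; x is assumed to be a t × n array of observed returns):

import numpy as np

def modified_gram(x):
    # entry (k, l) is x(k).x(l) - (1/n) (sum_i x(k)_i) (sum_i x(l)_i), as in eq. (28)
    n = x.shape[1]
    s = x.sum(axis=1)
    return x @ x.T - np.outer(s, s) / n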
the regularized symmetric tail average minimization problem (eq. (11) with the
constraints eqs. (8)–(10) and (12)) is, as we have shown here, directly related to support
vector regression which uses the ǫ-insensitive loss function. the ǫ-insensitive loss is stable
to local changes for data points that fall outside the range specified by ǫ. this point
is elaborated in section 3 in [60], and relates this method to robust estimation of the
mean. it can also be extended to robust estimation of quantiles [60] by scaling of the
slack variables ξ_k by μ and ξ∗_k by 1 − μ, respectively.
this scaling translates directly to the portfolio optimization problem, which is an
extreme case: downside risk measures penalize only loss, not gain. the asymmetry in
the loss function corresponds to μ = 1.
5.2. regularized expected shortfall.
by this final change we arrive at the regularized portfolio optimization problem, eqs.
(7)–(10), which we originally set out to solve. this is now easily solved in analogy to
the previous paragraphs: the slack variables ξ∗_k disappear, together with the respective
lagrange multipliers which enforce constraints, including α∗_k. the optimal solution is now
wopt = Σ_{k=1}^{t} α_k x(k) − γ 1 ,                                                   (29)
with
γ = (1/n) ( Σ_{k=1}^{t} α_k Σ_{i=1}^{n} x(k)_i − 1 ) .                                (30)
the dual problem is given by
max_{α_k}  −(1/2) [ Σ_{k=1}^{t} Σ_{l=1}^{t} α_k α_l ( x(k)·x(l) − (1/n) Σ_{i=1}^{n} x(k)_i Σ_{i=1}^{n} x(l)_i ) ]
s.t.   α_k ∈ [0, c/t];   Σ_{k=1}^{t} α_k ≤ cν.                                        (31)
which, like its symmetric counterpart, eq. (28), can be solved by adjusting existing
algorithms.
the formalism provides a free parameter, c, to set the balance between the original
risk function and the regularizer. its choice may depend on a number of factors, such
as the investor's time horizon, the nature of the underlying data, and, crucially, on the
ratio n/t. intuitively, there must be a maximum allowable value cmax(n/t) for c,
such that when one puts more emphasis on the data, c > cmax(n/t), then over-fitting
will occur with high probability. it would be desirable to know an analytic expression
for (a bound on) cmax(n/t). in practice, cross-validation methods are often employed
in machine learning to set the value of c. those methods are not free of problems (see,
for example, the treatment in [65]), and the optimal choice of this parameter remains
an open problem.
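a minimal sketch of such a cross-validation loop, assuming a solver callable like the regularized expected-shortfall sketch above (all names are ours; the out-of-sample score used here is simply the realized average loss in the ν-tail):

import numpy as np

def cross_validate_c(x, candidate_cs, nu, solver, n_folds=5, seed=0):
    # x: (t, n) array of returns; solver: callable (x_train, c, nu) -> weights w
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(x.shape[0]), n_folds)

    def realized_es(w, x_test):
        losses = -(x_test @ w)                      # loss = negative portfolio return
        tail = np.sort(losses)[-max(1, int(np.ceil(nu * len(losses)))):]
        return tail.mean()                          # average loss in the nu-tail

    scores = {}
    for c in candidate_cs:
        fold_scores = []
        for i in range(n_folds):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
            w = solver(x[train_idx], c, nu)
            fold_scores.append(realized_es(w, x[test_idx]))
        scores[c] = float(np.mean(fold_scores))
    return min(scores, key=scores.get), scores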
6. regularization corresponds to portfolio diversification.
above, we have controlled the capacity of the linear model by minimizing the l2 norm
of the portfolio weight vector. in the finance context, minimizing
∥w∥² = Σ_i w_i² ≃ 1/n_eff                                                             (32)
corresponds roughly to maximizing the effective number of assets, neff, i.e. to exerting
a pressure towards portfolio diversification [66]. we conclude that diversification of the
portfolio is crucial, because it serves to counteract the observed instability by acting as
a regularizer.
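the effective number of assets implied by eq. (32) is trivial to compute; a one-line illustration (ours):

import numpy as np

def effective_number_of_assets(w):
    # n_eff ≃ 1 / ||w||^2, cf. eq. (32); the equal-weight portfolio of n assets gives n_eff = n
    return 1.0 / np.sum(np.asarray(w, dtype=float) ** 2)

# e.g. effective_number_of_assets(np.ones(50) / 50) -> 50.0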
other constraints that penalize the length of the weight vector could alternatively
be considered as a regularizer, in particular any lp norm. the budget constraint alone,
however, does not suffice as a regularizer, since it does not constrain the length of
the weight vector. adding a ban on short selling, wi ≥ 0, to the budget constraint,
Σ_i wi = 1, limits the allowable solutions to a finite volume in the space of weights and
is equivalent to requiring that Σ_i |wi| ≤ 1.∥ it thereby imposes a limit on the l1 norm,
that is on the sum of the absolute amplitudes of long and short positions.
one may argue that it may be a good idea to use the l1 norm instead of the
l2 norm, because that may make the solution sparser. however, the l1 norm has a
tendency to make some of the weights vanish. indeed, it has been shown that in the
orthonormal design case (using the variance as the risk measure) an l1 regularizer will
set some of the weights to zero, while an l2 regularizer will scale all the weights [29].
the spontaneous reduction of portfolio size has also been demonstrated in numerical
simulations [67]: as one goes deeper and deeper into the regime where t is significantly
smaller than n, under a ban on short selling, more and more of the weights will become
zero. the same "freezing out" of the weights has been observed in portfolio optimization
[68] as an empirical fact.
it is important to stress that the vanishing of some of the weights does not reflect
any structural property of the objective function, it is just a random effect: as clearly
demonstrated by simulations [67], for a different sample a different set of weights
vanishes.
the angle of the weight vector fluctuates wildly from sample to sample.
(the behavior of the solutions is similar for other limit systems as well.) this means
that the solutions will be determined by the limit system and the random sample,
rather than by the structure of the market.
so the underlying instability is merely
"masked", in that the solutions do not run away to infinity, but they are still unstable
under sample fluctuations when t is too small. as it is certainly not in the interest
of the investor to obtain a portfolio solution which sets weights to zero on the basis
of unreliable information from small samples, the above observations speak strongly in
favor of using the l2 norm over the l1 norm.
7. conclusion
we have made the observation that the optimization of large portfolios minimizes the
empirical risk in a regime where the data set size is similar to the size of the portfolio.
in that regime, a small empirical risk does not necessarily guarantee a small actual risk
[24]. in this sense naive portfolio optimization over-fits the data. regularization can
overcome this problem by reducing the capacity of the considered model class.
regularized portfolio optimization involves choices, not only about the risk
function, but also about the regularizer. here, we have focussed on the increasingly
popular expected shortfall risk measure.
using the l2 norm as a regularizer leads
to a convex optimization problem which can be solved with linear programming. we
∥this point has been made independently by [17].
have shown that regularized portfolio optimization is then a variant of support vector
regression. the differences are an asymmetry, due to the tolerance to large positive
deviations, and the budget constraint, which is not present in regression.
our treatment provides a novel insight into why diversification is so important. the
l2 regularizer implements a pressure towards portfolio diversification. therefore, from
a statistical point of view, diversification is important as it is one way to control the
capacity of the portfolio optimizer and thereby to find a solution which is more stable,
and hence meaningful.
in summary, the method we have outlined in this paper allows for the unified
treatment of optimization and diversification in one principled formalism. it shows how
known methods from modern statistics can be used to improve the practice of portfolio
optimization.
8. acknowledgements
we thank leon bottou for helpful discussions and comments on the manuscript. this
work has been supported by the "cooperative center for communication networks data
analysis", a nap project sponsored by the national office of research and technology
under grant no. kckha005. ss thanks the collegium budapest for hosting her during
this collaboration, and the community at the collegium for providing a creative and
inspiring atmosphere.
[1] h. markowitz. portfolio selection. journal of finance, 7:77–91, 1952.
[2] h. markowitz. portfolio selection: efficient diversification of investments. j. wiley and sons,
new york, 1959.
[3] e. j. elton and m. j. gruber. modern portfolio theory and investment analysis. wiley, new
york, 1995.
[4] j.d. jobson and b. korkie.
improved estimation for markowitz portfolios using james-stein
type estimators. proceedings of the american statistical association (business and economic
statistics), 1:279–284, 1979.
[5] p. jorion. bayes-stein estimation for portfolio analysis. journal of financial and quantitative
analysis, 21:279–292, 1986.
[6] p.a. frost and j.e. savarino.
an empirical bayes approach to efficient portfolio selection.
journal of financial and quantitative analysis, 21:293–305, 1986.
[7] r. macrae and c. watkins. safe portfolio optimization. in h. bacelar-nicolau, f. c. nicolau, and
j. janssen, editors, proceedings of the ix international symposium of applied stochastic models
and data analysis: quantitative methods in business and industry society, asmda-99, 14-17
june 1999, lisbon, portugal, page 435. ine, statistics national institute, portugal, 1999.
[8] r. jagannathan and t. ma.
risk reduction in large portfolios:
why imposing the wrong
constraints helps. journal of finance, 58:1651–1684, 2003.
[9] o. ledoit and m. wolf. improved estimation of the covariance matrix of stock returns with an
application to portfolio selection. journal of empirical finance, 10(5):603–621, 2003.
[10] o. ledoit and m. wolf. a well-conditioned estimator for large-dimensional covariance matrices.
j. multivar. anal., 88:365–411, 2004.
[11] o. ledoit and m. wolf. honey, i shrunk the sample covariance matrix. j. portfolio management,
31:110, 2004.
[12] v. demiguel, l. garlappi, and r. uppal. optimal versus naive diversification: how efficient is
the 1/n portfolio strategy? review of financial studies, 2007.
[13] l. garlappi, r. uppal, and t. wang. portfolio selection with parameter and model uncertainty:
a multi-prior approach. review of financial studies, 20:41–81, 2007.
[14] v. golosnoy and y. okhrin. multivariate shrinkage for optimal portfolio weights. the european
journal of finance, 13:441–458, 2007.
[15] r. kan and g. zhou. optimal portfolio choice with parameter uncertainty. journal of financial
and quantitative analysis, 42:621–656, 2007.
[16] g. frahm and ch. memmel. dominating estimators for the global minimum variance portfolio,
2009. deutsche bundesbank, discussion paper, series 2: banking and financial studies.
[17] v. demiguel, l. garlappi, f. j. nogales, and r. uppal.
a generalized approach to portfolio
optimization: improving performance by constraining portfolio norms. management science,
55:798–812, 2009.
[18] l. laloux, p. cizeau, j.-ph. bouchaud, and m. potters. noise dressing of financial correlation
matrices. phys. rev. lett., 83:1467–1470, 1999.
[19] v. plerou, p. gopikrishnan, b. rosenow, l.a.n. amaral, and h.e. stanley. universal and non-
universal properties of cross-correlations in financial time series. phys. rev. lett., 83:1471,
1999.
[20] l. laloux, p. cizeau, j.-p. bouchaud, and m. potters.
random matrix theory and financial
correlations. international journal of theoretical and applied finance, 3:391, 2000.
[21] v. plerou, p. gopikrishnan, b. rosenow, l. a. n. amaral, t. guhr, and h. e. stanley. a random
matrix approach to cross-correlations in financial time-series. phys. rev. e, 65:066136, 2000.
[22] z. burda, a. goerlich, and a. jarosz. signal and noise in correlation matrix. physica, a343:295,
2004.
[23] m. potters and j.-ph. bouchaud. financial applications of random matrix theory: old laces and
new pieces. acta phys. pol., b36:2767, 2005.
[24] v. vapnik and a. chervonenkis. on the uniform convergence of relative frequencies of events to
their probabilities. theory of probability and its applications, 16(2):264–280, 1971.
[25] v. vapnik. the nature of statistical learning theory. springer verlag, new york, 1995.
[26] v. vapnik. statistical learning theory. john wiley and sons, new york, 1998.
[27] b. e. boser, i. m. guyon, and v. n. vapnik. a training algorithm for optimal margin classifiers.
in d. haussler, editor, proc. 5th annual acm workshop on computational learning theory,
pages 144–152. acm press, 1992.
[28] c. cortes and v. vapnik. support vector networks. machine learning, 20:273–297, 1995.
[29] r. tibshirani.
regression shrinkage and selection via the lasso.
j. royal. statist. soc b.,
58(1):267–288, 1996.
[30] i. frank and j. friedman. a statistical view of some chemometrics regression tools. technometrics,
35:109–148, 1993.
[31] v.k. chopra and w.t. ziemba. the effect of errors in means, variances, and covariances on
optimal portfolio choice. journal of portfolio management, 19:611, 1993.
[32] r.c. merton. on estimating the expected return on the market: an exploratory investigation.
journal of financial economics, 8:323–361, 1980.
[33] y. okhrin and w. schmied. distribution properties of portfolio weights. journal of econometrics,
134:235–256, 2006.
[34] a. kempf and c. memmel. estimating the global minimum variance portfolio. schmalenbach
business review, 58:332–348, 2006.
[35] g. frahm.
linear statistical inference for global and local minimum variance portfolios.
statistical papers, 2008. doi: 10.1007/s00362-008-0170-z.
[36] i. kondor, s. pafka, and g. nagy.
noise sensitivity of portfolio selection under various risk
measures. journal of banking and finance, 31:1545–1573, 2007.
[37] s. pafka and i. kondor. noisy covariance matrices and portfolio optimization. eur. phys. j., b
27:277–280, 2002.
[38] s. pafka and i. kondor.
noisy covariance matrices and portfolio optimization ii.
physica, a
319:487–494, 2003.
[39] s. pafka and i. kondor. estimated correlation matrices and portfolio optimization. physica, a
343:623–634, 2004.
[40] z. burda, j. jurkiewicz, and m. a. nowak. is econophysics a solid science? acta physica polonica,
b 34:87–132, 2003.
[41] s. ciliberti, i. kondor, and m. mezard.
on the feasibility of portfolio optimization under expected
shortfall. quantitative finance, 7:389–396, 2007.
[42] s. ciliberti and m. mezard. risk minimization through portfolio replication. eur. phys. j., b 57:175–
180, 2007.
[43] i. varga-haszonits and i. kondor.
the instability of downside risk measures.
j. stat. mech.,
p12007, 2008.
[44] i. varga-haszonits and i. kondor. noise sensitivity of portfolio selection in constant conditional
correlation garch models. physica, a385:307–318, 2007.
[45] i. kondor and i. varga-haszonits.
feasibility of portfolio optimization under coherent risk
measures, 2008. submitted to quantitative finance.
[46] b. schölkopf. support vector learning. gmd-bericht; 287. oldenbourg, münchen, germany,
1997. dissertation: berlin, techn. univ., diss., 1997.
[47] b. schölkopf, c. j. c. burges, and a. j. smola. advances in kernel methods - support vector
learning. mit press, cambridge, ma, 1999.
[48] v. vapnik and a. chervonenkis.
theory of pattern recognition.
nauka, 1974.
[in russian.
german translation available from akademie-verlag, berlin, 1979.].
[49] p. jorion.
var: the new benchmark for managing financial risk.
mcgraw-hill, new york,
2000.
[50] j.p. morgan and reuters. riskmetrics. technical document available at
http://www.riskmetrics.com.
[51] p. artzner, f. delbaen, j.-m. eber, and d. heath.
coherent measures of risk.
mathematical
finance, 9:203–228, 1999.
[52] p. embrechts.
extreme value theory:
potential and limitations as an integrated risk
measurement tool. derivatives use, trading and regulation, 6:449–456, 2000.
[53] c. acerbi, c. nordio, and c. sirtori. expected shortfall as a tool for financial risk management.,
2001. unpublished.
[54] c. acerbi.
spectral measures of risk: a coherent representation of subjective risk aversion.
journal of banking and finance, 26(7):1505–1518, 2002.
[55] c. acerbi and d. tasche.
on the coherence of expected shortfall.
journal of banking and
finance, 26(7):1487–1503, 2002.
[56] c. acerbi. coherent representations of subjective risk-aversion. in g. szegö, editor, risk measures
for the 21st century. john wiley and sons., 2004.
[57] r. t. rockafellar and s. uryasev.
optimization of conditional value-at-risk.
journal of risk,
2(3):21–41, 2000.
[58] f. perez-cruz, j. weston, d.j.l. herrmann, and b. schölkopf. extension of the nu-svm range
for classification. in advances in learning theory: methods, models and applications, volume
190 of nato science series iii: computer and systems sciences, pages 179–196. ios press,
amsterdam, 2003.
[59] a. takeda and m. sugiyama. ν-support vector machine as conditional value-at-risk minimization,
2008.
[60] b. schölkopf, a. j. smola, r. c. williamson, and p. l. bartlett. new support vector algorithms.
neural computation, 12(5):1207–1245, 05 2000.
[61] d. p. bertsekas. nonlinear programming. athena scientific, belmont, ma, 1995.
[62] léon bottou and chih-jen lin. support vector machine solvers. in léon bottou, olivier chapelle,
dennis decoste, and jason weston, editors, large scale kernel machines, pages 301–320. mit
press, cambridge, ma., 2007.
[63] robert j. vanderbei. loqo users manual. software available at
http://www.princeton.edu/ rvdb/loqo/loqo.html.
[64] antoine bordes, seyda ertekin, jason weston, and léon bottou. fast kernel classifiers with online
and active learning. journal of machine learning research, 6:1579–1619, september 2005.
[65] y. bengio and y. grandvalet. no unbiased estimator of the variance of k-fold cross-validation.
in s. becker, l. saul, and b. schölkopf, editors, advances in neural information processing
systems 16 (nips'03). mit press, cambridge, ma., 2004.
[66] j.-ph. bouchaud and m. potters.
theory of financial risk - from statistical physics to risk
management. cambridge university press, cambridge, uk, 2000.
[67] n. gulyas and i. kondor. portfolio instability and linear constraints, 2007. submitted to physica
a.
[68] b. scherer and r. d. martin. introduction to modern portfolio optimization with nuopt and
s-plus. springer, 2005.
|
0911.1695 | h\"ansch--couillaud locking of mach--zehnder interferometer for carrier
removal from a phase-modulated optical spectrum | we describe and analyse the operation and stabilization of a mach--zehnder
interferometer, which separates the carrier and the first-order sidebands of a
phase-modulated laser field, and which is locked using the h\"ansch--couillaud
method. in addition to the necessary attenuation, our interferometer
introduces, via total internal reflection, a significant polarization-dependent
phase delay. we employ a general treatment to describe an interferometer with
an object which affects the field along one path, and we examine how this phase
delay affects the error signal. we discuss the requirements necessary to ensure
the lock point remains unchanged when phase modulation is introduced, and we
demonstrate and characterize this locking experimentally. finally, we suggest
an extension to this locking strategy using heterodyne detection.
| introduction
for many experiments and laser-locking schemes it is
necessary to create light which is phase-coherent with
and frequency shifted relative to a master laser oscilla-
tor. common approaches include acousto-optical mod-
ulation [1], electro-optical modulation [2], and current
modulation of the laser [3–5]. the latter two of these
approaches give, typically, a phase-modulated spectrum.
it is often useful to separate the frequency components
of such a spectrum, and there are several devices which
can perform this task [6,7]. we describe one of these,
a mach–zehnder interferometer, and a locking scheme
based on the method of hänsch and couillaud [8]; we
include a description of a relative phase delay between
the linear polarization components, which is expected
for many real devices, and we take care to ensure the
lock point does not change when phase modulation is
introduced. this property is essential in our application,
where we use a sideband from an electro-optically mod-
ulated laser field to drive raman transitions between
hyperfine states in cold alkali-earth atoms (similar to
ref. [9]), and in which we change the modulation fre-
quency and depth during the experiment. for related
uses, see references [10] and [11]. we will provide a sim-
ple and general analysis, and a demonstration of a robust
embodiment in a realistic experimental setting.
2.
theoretical framework
consider a light field ⃗
e0 incident on an interferometer, as
depicted in fig. 1. one path passes unperturbed to the
output beam-splitter, while the second path is subject to
a phase delay φ, and passes through an object, described
by ˆ
o, before recombining with the first path. the output
field is hence
⃗e_t = (1/2) ( ⃗e_0 − e^{iφ} ô ⃗e_0 ) ;                                               (1)
the minus sign is a consequence of the phase change un-
der reflection.
fig. 1. prototype interferometer showing incident field ⃗e_0, beam-splitters, output
fields ⃗e_t and ⃗e_s (the latter after a quarter-wave plate λ/4) and an internal object
which affects the field which follows the longer path.
we pass this field through a quarter-wave plate (with
fast-axis at 45◦to the vertical) and analyze the resulting
electric field:
⃗e_s = q̂ (1/2) ( ⃗e_0 + e^{iφ} ô ⃗e_0 ) ,   where   q̂ = (1/√2) [ 1  i ; i  1 ]       (2)
in the standard jones matrix representation [12]. we define the error signal s as the
difference between the linear polarization components of the field ⃗e_s, and the
transmission t as the horizontal component of the field ⃗e_t:
s = |⃗e_s · x̂|² − |⃗e_s · ŷ|²   and   t = |⃗e_t · x̂|² ;                              (3)
we have discarded the vertical polarization from the out-
put, which reduces the power but means it is possible to
achieve complete extinction.
by choosing the incident field and the internal object
this framework can be used to describe a variety of lock-
ing schemes. the original hänsch–couillaud scheme, al-
though here phrased in terms of interferometers rather
than a cavity, corresponds to horizontally polarized light
and a rotated linear polarizer [8].
we now apply this framework to describe an internal
object which attenuates and also delays one polarization.
the linearly polarized incident field ⃗e_0, and the object ô are described by the following:
⃗e_0 = e_0 ( cos θ , sin θ )ᵀ   and   ô = [ 1  0 ; 0  (1 − α) e^{iβ} ]                (4)
where α is the attenuation, β is the phase delay, and θ is
the angle by which the linearly polarized light is inclined
from vertical.
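as a check on eqs. (1)-(4), the error signal and transmission can be evaluated numerically with a few lines of jones calculus (a sketch of ours, not part of the original text; function and variable names are ours):

import numpy as np

def signals(phi, alpha, beta, theta):
    # error signal s and transmission t of the interferometer, following eqs. (1)-(4)
    e0 = np.array([np.cos(theta), np.sin(theta)])            # incident field, eq. (4)
    o = np.diag([1.0, (1.0 - alpha) * np.exp(1j * beta)])    # internal object, eq. (4)
    q = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)            # quarter-wave plate, eq. (2)
    s, t = [], []
    for p in np.atleast_1d(phi):
        et = 0.5 * (e0 - np.exp(1j * p) * (o @ e0))           # eq. (1)
        es = q @ (0.5 * (e0 + np.exp(1j * p) * (o @ e0)))     # eq. (2)
        s.append(abs(es[0]) ** 2 - abs(es[1]) ** 2)           # eq. (3)
        t.append(abs(et[0]) ** 2)
    return np.array(s), np.array(t)

# at the transmission minimum (phi = 0) the error signal has magnitude
# (1/2)(1 - alpha) sin(beta) sin(2*theta), as quoted in the text (up to a sign convention).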
in the absence of attenuation, the two polarizations
will be subjected only to different delays, and the dif-
ference between the two displaced interference patterns
could be used as an error signal; here the field before the
quarter-wave plate would be used to derive s.
if the phase delay β were zero, we would recover a
rotated version of the hänsch–couillaud scheme. also,
if the attenuation was total (α = 1), any phase delay
would be inconsequential and we would again recover
this original scheme.
in the intermediate case of partial attenuation (α < 1)
and non-zero phase delay (β ̸= 0), we find that the er-
ror signal crosses zero at the maximum of transmission,
but has the non-zero value (1/2)(1 − α) sin β sin 2θ at the
transmission minimum. its gradient about the extrema is
± (1/4) ( 1 − (1 − α) cos β ) sin 2θ. in this intermediate regime,
the introduction of phase modulation, as discussed in
the next section, affects the positions at which the error
signal crosses zero. as described later, we found that a
phase delay was unavoidable in our device; therefore, in
order to recover the hänsch–couillaud scheme, we in-
troduced total attenuation of the vertical polarization
component.
3.
phase modulated light
we may analyze a modulated field ⃗e(t) = ⃗e_0 e^{i ∫_0^t ω(t′)dt′} in terms of its fourier
components. for the case of simple phase modulation, where ∫_0^t ω(t′)dt′ = ω_0 t + m cos ωt
(ω_0: unmodulated frequency; ω: modulation frequency; m: modulation depth), we expand
using the jacobi–anger identity, treat the sidebands as independent fields, and sum their
contributions to obtain the error signal
s_pm(ω_0 τ) = Σ_{n=−∞}^{+∞} |j_n(m)|² s( ω_0 τ + n ω τ ) ,                            (5)
where the subscript 'pm' indicates that phase modula-
tion is present; τ = φ/ω0 is the optical path delay in our
device, jn(m) is the nth order bessel function, and we
have assumed that all frequency components interact in
the same way with the optical elements.
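the sum in eq. (5) converges quickly because |j_n(m)|² is negligible for |n| appreciably larger than m; a minimal numerical sketch (ours), taking the unmodulated error signal as a generic callable and truncating the sum:

import numpy as np
from scipy.special import jv   # bessel function of the first kind, j_n

def s_pm(s, omega0_tau, omega_tau, m, n_max=20):
    # eq. (5), truncated at |n| <= n_max; s is the unmodulated error signal
    # as a function of omega_0 * tau, and m is the modulation depth
    total = 0.0
    for n in range(-n_max, n_max + 1):
        total = total + np.abs(jv(n, m)) ** 2 * s(omega0_tau + n * omega_tau)
    return total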
we operate the device near the condition ωτ = π
which separates the carrier from the first-order sidebands
or, more generally, ensures odd and even numbered side-
bands exit from opposite ports of the interferometer. un-
less the signal crosses zero at the transmission minimum,
the position where it does cross will shift when phase
modulation is introduced. hence we must ensure the sig-
nal is zero at the transmission extrema while maintaining
a non-zero gradient; this is satisfied for β = 0 or α = 1.
to achieve this, we can compensate for any differential
phase shift β in our device using a waveplate, or we can
introduce complete attenuation of the vertical polariza-
[figure 2: photodiode signal (arb. units) versus piezo voltage offset (volts); traces: transmission and error signal.]
fig. 2. transmission t (black) and error signal s (red)
with (solid) and without (dashed) phase modulation of
the input light, as the path difference is scanned using a
piezo-electric stack. the dashed vertical lines mark the
positions where the error signals coincide, and these are
very close to s = 0. they are, however, displaced from
the transmission extrema, but there is adequate scope
for optimization for a given input field.
tion using a linear polarizer. the overall signal becomes
s_pm(ω_0 τ) = s(ω_0 τ) Σ_{n=−∞}^{+∞} (−1)^n |j_n(m)|² .                               (6)
under these conditions, any change in phase modula-
tion depth does not affect the positions where the error
signal crosses zero; the gradient changes, but for modu-
lation depths m ≲0.38π it maintains the same sign.
4.
experimental implementation
a mach–zehnder interferometer was constructed from readily available components,
including two non-polarizing bk7 beam-splitter cubes, and a bk7 right-
angled prism [13]. the cubes were glued using low-
expansion uv curing glue [14], with care taken to ensure
their faces were parallel, and the pair was mounted on a
kinematic mount. the prism was glued to a translation
stage; a screw was available for coarse path-difference ad-
justment, and a piezo-electric stack was used for small
adjustments and locking feedback. incident light was
spatially filtered and collimated (to ensure the incident
wavefronts were flat) and aligned into the device. beam
overlap was found visually, and then maximized by scan-
ning the path difference using the piezo and observing
the contrast ratio of power exiting each of the output
ports.
in the absence of a relative phase delay between the
polarizations, one could generate an error signal using
the slight polarization selectivity of reflections by the
nominally polarization insensitive beam-splitter cubes.
however, we found that a significant relative phase de-
lay of β ≈78◦was introduced by the two total internal
[figure 3: photodiode signal (arb. units) versus frequency relative to carrier (ghz), for maximized and minimized carrier.]
fig. 3. phase-modulated spectra after filtering by the
mach–zehnder interferometer, showing maximum and
minimum carrier transmission. the modulation fre-
quency (2.7 ghz) is larger than the free-spectral range of
the scanning fabry–pérot cavity (2 ghz) used to obtain
this trace, and so the sidebands appear at ±700 mhz for
the lower- and upper-sidebands, respectively; these are
magnified in the insets. by comparing the amplitudes, we
estimate the modulation depth to be m ≈0.2π, and by
comparing with an unfiltered reference trace (obtained
by blocking one path of the interferometer), we see that,
for the case of minimized carrier transmission, the carrier
is attenuated by more than 20 db while approximately
1 db of sideband power is lost. this trace was smoothed
using a 5 mhz bandwidth moving-average filter.
reflections inside our right-angled prism [12] and, to re-
cover the hänsch–couillaud method, it was necessary to
introduce a linear polarizer to ensure complete attenua-
tion of the vertical polarization (i.e. α = 1).
fig. 2 shows the interferometer transmission meas-
ured with and without the phase-modulated optical side-
bands that we wish to separate. the modulation fre-
quency is ω= 2π × 2.725 ghz and the wavelength is
λ = 780 nm; the device is operated near to a path differ-
ence cτ = cπ/ω≈55 mm, which corresponds to the con-
dition ωτ = π. the photodiodes sample different parts of
the beam cross-section, and also have different respon-
sivities. it may be necessary to adjust the photodiode
balance and the offset, but as demonstrated by this un-
optimized trace, a real device operates approximately as
predicted. from the decrease in visibility and error sig-
nal, we estimate a modulation depth of m ≈0.2π. this
agrees with the more direct measurement with a scan-
ning fabry–pérot cavity, as shown in fig. 3. the imper-
fect behaviour of the device is accounted for partly by
the unequal reflectivities of the beam-splitter cubes, but
a more significant problem is the spatial overlap of the
fields in this free-space device; we would expect improved
performance and more ideal behaviour if the device was
implemented using single-mode optical fibers and fiber-
based beam-splitters.
[figure 4: power spectral density (db, arb. units) versus fourier frequency (hz), unlocked and locked.]
fig. 4. fast fourier transform of the recorded output
when the system is locked (black) and unlocked (red).
the plot has been smoothed using a 10 mhz bandwidth
moving-average filter.
we constructed a feedback circuit with an integrated
high-voltage output, similar to that in ref. [15], and
recorded the transmission with and without feedback;
the fourier transforms of these are shown in fig. 4. the
bandwidth of the circuit is ∼100 hz, and for low fre-
quencies (< 10 hz) the circuit reduces drift by several
orders of magnitude. the very slightly increased noise
at high frequency is expected for a feedback circuit.
5.
alternative: heterodyne detection
another method by which we could ensure the error
signal would remain unchanged when phase modulation
was introduced would be to mix the photodiode signals
with a frequency-shifted reference, derived from the un-
modulated field, and extract electronically a signal cor-
responding to the beat note of this reference mixed with
the carrier. furthermore, it would be possible to tune
the reference field frequency close to, and hence make
the error signal depend upon, any chosen frequency com-
ponent in a complex, more powerful spectrum. alterna-
tively, with a sufficiently fast photodiode, one may tune
the detection electronics to select a desired optical fre-
quency component.
this reference field could be created by using an
acousto-optical modulator (aom) to pick off a small
fraction of the laser field before it passes through, for ex-
ample, an electro-optical phase modulator; see fig. 5. as
an example, consider an interferometer which separates
positive and negative first-order sidebands from a 3 ghz
phase-modulated field. using a 100 mhz aom-shifted
reference field and a fast photodiode, one could detect
beat-frequencies at 2900 mhz, 100 mhz, and 3100 mhz;
using standard electronics, a specific frequency compo-
nent could be extracted and passed to the locking cir-
cuit. here, feedback could be made to depend on one
sideband, leaving the other sideband and a significant
fraction of the carrier. the resulting spectrum would be
fig. 5. a proposed hänsch–couillaud scheme using
heterodyne detection. an acousto-optical modulator
(aom) extracts a reference before sideband modula-
tion. light passes through the interferometer, composed
of non-polarizing beam-splitter cubes and a right-angled
prism, and is detected by photodiodes labeled (±). the
reference light is incident on the detectors and the lin-
ear polarizers (solid black lines) are at 45◦relative to
the beam-splitter cube. the two half-wave plates labeled
λ/2 serve to introduce the small rotation necessary for
the hänsch–couillaud scheme before the mach–zehnder,
and to rotate the reference light by 45◦so that equal
power from this reference falls on the photodiodes af-
ter the polarizing beamsplitter cube in the heterodyne
detection setup.
well suited to driving stimulated raman transitions.
6.
conclusions
we have described and demonstrated a mach–zehnder
interferometer used to separate carrier and first-order
sidebands from a phase-modulated laser field, which we
have locked using the hänsch–couillaud method. the
aim of this article was twofold. firstly, we provided a
simple model that allows the interferometer to be under-
stood in terms of its constituent birefringent and reflec-
tive elements. hänsch and couillaud did not consider a
differential phase-change between the polarisations, but
this arises naturally in our real device and so we have ex-
tended their model. we then applied this model to the
analysis of the interferometer used in our experiments;
our results show that the technique is, despite its sim-
plicity, appropriate for this commonly-encountered situ-
ation.
the polarization-dependent phase delay, originating
from total internal reflection in the corner-reflector of
the mach–zehnder interferometer, affects the error sig-
nal, causing an offset at the transmission minimum and
leaving the locking scheme sensitive to intensity fluctua-
tions. these effects can be eliminated by extinguishing one
polarization using an internal linear polarizer, leaving
the lock point approximately fixed as phase modulation
is introduced. the slight residual offset of the lock points
from the transmission extrema observed experimentally
was readily corrected by adjusting the photodiode bal-
ance, and the interferometer was easily optimized for a
given input spectrum. an alternative approach accom-
plishes this indifference to phase modulation using het-
erodyne detection and a frequency-shifted reference light
field.
references
1. p. bouyer, t. l. gustavson, k. g. haritos, and m. a.
kasevich, "microwave signal generation with optical in-
jection locking," optics letters 21, 1502 (1996).
2. k. szymaniec, "injection locking of diode lasers to
frequency modulated source," optics communications
144, 50–54 (1997).
3. k. y. lau, c. harder, and a. yariv, "direct modula-
tion of semiconductor lasers at f > 10 ghz by low-
temperature operation," applied physics letters 44,
273 (1984).
4. j. ringot, y. lecoq, j. garreau, and p. szriftgiser,
"generation of phase-coherent laser beams for raman
spectroscopy and cooling by direct current modulation
of a diode laser," the european physical journal d 7,
285 (1999).
5. c. affolderbach, a. nagel, s. knappe, c. jung,
d. wiedenmann, and r. wynands, "nonlinear spec-
troscopy with a vertical-cavity surface-emitting laser
(vcsel)," applied physics b: lasers and optics 70,
407–413 (2000).
6. d. haubrich, m. dornseifer, and r. wynands, "lossless
beam combiners for nearly equal laser frequencies," rev.
sci. inst. 71, 338–340 (2000).
7. r. p. abel, u. krohn, p. siddons, i. g. hughes, and
c. s. adams, "faraday dichroic beam splitter for raman
light using an isotopically pure alkali-metal-vapor cell,"
optics letters 34, 3071 (2009).
8. t. w. hänsch and b. couillaud, "laser frequency sta-
bilization by polarization spectroscopy of a reflecting
reference cavity," optics communications 35, 441–444
(1980).
9. m. kasevich and s. chu, "measurement of the gravita-
tional acceleration of an atom with a light-pulse atom
interferometer," applied physics b photophysics and
laser chemistry 54, 321–332 (1992).
10. i. dotsenko, w. alt, s. kuhr, d. schrader, m. muller,
y. miroshnychenko, v. gomer, a. rauschenbeutel, and
d. meschede, "application of electro-optically generated
light fields for raman spectroscopy of trapped cesium
atoms," applied physics b 78, 711–717 (2004).
11. j. schneider, o. glöckl, g. leuchs, and u. l. andersen,
"quadrature measurements of a bright squeezed state
via sideband swapping," optics letters 34, 1186 (2009).
12. e. hecht, optics (addison wesley, 2001), 4th ed.
13. suitable components are available from many suppliers,
including thorlabs (bs011 and ps911).
14. a suitable low-expansion glue is manufactured by dy-
max; part number op-67-ls.
15. v. v. yashchuk, d. budker, and j. r. davis, "laser fre-
quency stabilization using linear magneto-optics," rev.
sci. inst. 71, 341–346 (2000).
|
0911.1696 | a quantum lovasz local lemma | the lovasz local lemma (lll) is a powerful tool in probability theory to show
the existence of combinatorial objects meeting a prescribed collection of
"weakly dependent" criteria. we show that the lll extends to a much more
general geometric setting, where events are replaced with subspaces and
probability is replaced with relative dimension, which allows to lower bound
the dimension of the intersection of vector spaces under certain independence
conditions. our result immediately applies to the k-qsat problem: for instance
we show that any collection of rank 1 projectors with the property that each
qubit appears in at most $2^k/(e \cdot k)$ of them, has a joint satisfiable
state.
we then apply our results to the recently studied model of random k-qsat.
recent works have shown that the satisfiable region extends up to a density of
1 in the large k limit, where the density is the ratio of projectors to qubits.
using a hybrid approach building on work by laumann et al. we greatly extend
the known satisfiable region for random k-qsat to a density of
$\omega(2^k/k^2)$. since our tool allows us to show the existence of joint
satisfying states without the need to construct them, we are able to penetrate
into regions where the satisfying states are conjectured to be entangled,
avoiding the need to construct them, which has limited previous approaches to
product states.
| introduction and results
in probability theory, if a number of events are all independent of one another, then there is a
positive (possibly small) probability that none of the events will occur. the lovász local lemma
(proved in 1975 by erdős and lovász) allows one to relax the independence condition slightly: as
long as the events are "mostly" independent of one another and are not individually too likely,
then there is still a positive probability that none of them occurs. in its simplest form it states
theorem 1 ([el75]). let b1, b2, . . . , bn be events with pr(bi) ≤p and such that each event is mutually
independent of all but d of the others. if p * e * (d + 1) ≤ 1 then pr( ∧_{i=1}^{n} b_i^c ) > 0.
the lovász local lemma (lll) is an extremely powerful tool in probability theory as it sup-
plies a way of dealing with rare events and of showing that a certain event holds with positive
probability. it has found an enormous range of applications (see, e.g., [as04]), for instance to
graph colorability [el75], lower bounds on ramsey numbers [spe77], geometry [mp87], and al-
gorithms [mt09]. for many of these results there is no known proof which does not use the local
lemma.
one notable application of the lll is to determine conditions under which a k-cnf formula is
satisfiable. if each clause of such a formula φ involves a disjoint set of variables, then it is obvious
that φ is satisfiable. one way to see this is to observe that a random assignment violates a clause
with probability p = 2−k and hence the probability that all m clauses are satisfied by a random
assignment is (1 −p)m > 0. but what if some of the clauses share variables, i.e., if they are "weakly
dependent"? this question is readily answered by using the lll:
corollary 2. let φ be a k-sat formula in cnf-form. if every variable appears in at most 2^k/(e * k) clauses
then φ is satisfiable.
this corollary follows from thm. 1 by letting bi be the event that the i-th clause is not satisfied
for a random assignment, which happens with probability p = 2^{−k}, and noting that each clause
depends only on the d ≤ (2^k/e) − k other clauses that share a variable with it. in particular
this corollary gives a better understanding of sat, the prototype np-complete problem in classical
complexity theory.
in the last decade enormous advances have been made in the area of quantum complexity, the
theory of easy and hard problems for a quantum computer. in particular, a natural quantum
analog of k-sat, called k-qsat, was introduced by bravyi [bra06]: instead of clauses we have
projectors π1, . . . , πm, each acting non-trivially on k qubits, and we have to decide if all of them
can be satisfied jointly. more precisely, we ask if there is a state |ψ⟩on all qubits such that πi|ψ⟩=
0 for all 1 ≤i ≤m (in physics language: we ask if the system is frustration-free). this problem1
was shown to be qma1-complete for k ≥4 [bra06] and as such has received considerable attention
[liu06, liu07, bs07, bt08, lmss09, llm+09, bmr09].
note that the question is easy for a set of "disjoint" projectors: if no two projectors share any
qubits, then clearly |ψ⟩= |ψ1⟩⊗* * * ⊗|ψm⟩is a satisfying state, where |ψi⟩is such that πi|ψi⟩= 0,
just like in the case of disjoint k-sat. it is thus very natural to ask if there still is a joint satisfy-
ing state when the projectors are "weakly" dependent, i.e., share qubits only with a few other
projectors. one might speculate that a quantum local lemma should provide the answer.
1when defined with an appropriate promise gap for no-instances
motivated by this question we ask: is there a quantum local lemma? what will take the
role of notions like probability space, probability, events, conditional probability and mutual indepen-
dence? what properties should they have? and can we prove an analogous statement to cor. 2 for
k-qsat?
our results:
we answer all these questions in the positive by first showing how to generalize the
notions of probability and independence in a meaningful way applicable to the quantum setting
and then by proving a quantum local lemma. we then show that it implies a statement analogous
to cor. 2 for k-qsat with exactly the same parameters as in the classical case. as we describe later
in this section, we then combine our results with recent advances in the study of random qsat
to substantially widen the satisfiable range and to provide greatly improved lower bounds on the
conjectured threshold between the satisfiable and the unsatisfiable region.
let us first focus on the conceptual step of finding the right notions of probability and indepen-
dence. in the quantum setting we deal with vector spaces and the probability of a certain event
to happen is determined by its dimension. it is thus very natural to have the following corre-
spondence of classical and "quantum" notions, using the apparent similarity between events and
linear spaces:
definition 3. we define the following, in correspondence with the classical notions:
probability space ω                              →   vector space v
event a ∈ ω                                      →   subspace a ⊆ v
complement a^c = ω \ a                           →   orthogonal subspace a⊥
probability pr(a)                                →   relative dimension r(a) := dim a / dim v
union and disjunction a ∨ b, a ∧ b               →   a + b = {a + b | a ∈ a, b ∈ b}, a ∩ b
conditioning pr(a|b) = pr(a ∧ b)/pr(b)           →   r(a|b) := r(a ∩ b)/r(b) = dim(a ∩ b)/dim(b)
a, b independent: pr(a ∧ b) = pr(a) * pr(b)      →   a, b r-independent: r(a ∩ b) = r(a) * r(b)
this definition by analogy brings us surprisingly far. it can be verified (see sec. 2) that many
useful properties hold for r, like (i) 0 ≤r ≤1, (ii) monotonicity: a ⊆b ⇒r(a) ≤r(b), (iii) the
chain rule, (iv) an "inclusion/exclusion" formula and (v) r(a) + r(a⊥) = 1.
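as a small numerical illustration of these notions (ours, not from the paper), relative dimension and r-independence can be checked directly for subspaces given by basis matrices, using dim(a ∩ b) = dim a + dim b − dim(a + b):

import numpy as np

def rel_dim(basis_a, dim_v):
    # r(a) = dim a / dim v, with the columns of basis_a spanning the subspace a
    return np.linalg.matrix_rank(basis_a) / dim_v

def rel_dim_intersection(basis_a, basis_b, dim_v):
    # r(a ∩ b), via dim(a ∩ b) = dim a + dim b - dim(a + b)
    dim_a = np.linalg.matrix_rank(basis_a)
    dim_b = np.linalg.matrix_rank(basis_b)
    dim_sum = np.linalg.matrix_rank(np.hstack([basis_a, basis_b]))
    return (dim_a + dim_b - dim_sum) / dim_v

# example on 2 qubits (dim v = 4): a = span{|00>, |01>} (first qubit 0),
# b = span{|00>, |10>} (second qubit 0); then r(a ∩ b) = 1/4 = r(a) * r(b),
# so a and b are r-independent, as for projectors acting on disjoint qubits.
a = np.eye(4)[:, [0, 1]]
b = np.eye(4)[:, [0, 2]]
print(rel_dim_intersection(a, b, 4), rel_dim(a, 4) * rel_dim(b, 4))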
there are, however, two important differences between probability and relative dimension.
one concerns the complement of events. for probabilities, the conditional version of property
(v) holds: pr(a|b) + pr(a^c|b) = 1.
for r we can easily find counterexamples to the state-
ment r(a|b) + r(a⊥|b) = 1 (for instance two non-equal non-orthogonal lines a and b in a
two-dimensional space, where r(a|b) + r(a⊥|b) = 0). it is this property that is used in most
proofs of the local lemma, and one of the difficulties in our proof of a quantum lll (qlll) is to
circumvent its use.
the second difference concerns our notion of r-independence. in probability theory, if a and b
are independent, then so are a^c and b. again, this is not true any more for r and easy counterex-
amples can be found (see sec. 2). it is thus important to find the right formulation of a quantum
local lemma concerning mutual independence of events. keeping these caveats in mind and us-
ing our notion of relative dimension, we prove a general quantum lll (see sec. 3), which in its
simplest form gives:
theorem 4. let x1, x2, . . . , xn be subspaces, where r(xi) ≥1 −p and such that each subspace is mutu-
ally r-independent of all but d of the others. if p * e * (d + 1) ≤ 1 then r( ∩_{i=1}^{n} x_i ) > 0.
note that in contrast to the classical lll in thm. 1 which is stated in terms of the "bad" events
bi, here we are working with the "good" events. while in the classical case these two formulations
are equivalent, this is no longer the case for our notion of r-independence.
an immediate application of our qlll is to k-qsat, where we are able to show the exact
analogue of cor. 2.
corollary 5. let {π1, . . . , πm} be a k-qsat instance where all projectors have rank 1. if every qubit appears in at most 2^k/(e * k) projectors, then the instance is satisfiable.
it follows by defining (with a slight abuse of notation) subspaces xi = πi⊥ of satisfying states for πi. noticing that r(xi) = 1 − 2^{−k} and that projectors are mutually r-independent whenever they do not share qubits, and observing that an equivalent formulation of the k-qsat problem is to decide whether dim(⋂_{i=1}^{m} πi⊥) > 0, thm. 4 gives the desired result (see secs. 2 and 3 for details and more applications to k-qsat).
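to make cor. 5 concrete, the following sketch (our own bookkeeping, not part of the paper's formalism) represents an instance by the list of qubit tuples its rank-1 projectors act on and checks the per-qubit occurrence bound 2^k/(e * k):

```python
from collections import Counter
from math import e

def cor5_applies(k, projector_qubits):
    """projector_qubits: one k-tuple of qubit indices per rank-1 projector.
    returns true if every qubit appears in at most 2^k/(e*k) projectors,
    in which case cor. 5 guarantees that the instance is satisfiable."""
    occurrences = Counter(q for qubits in projector_qubits for q in qubits)
    return max(occurrences.values(), default=0) <= 2 ** k / (e * k)

# the per-qubit occurrence bound grows exponentially with k
for k in (5, 10, 15, 20):
    print(k, 2 ** k / (e * k))
```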
random qsat:
over the past few decades a considerable amount of effort was dedicated to un-
derstanding the behavior of random k-sat formulas [ks94, mpz02, mmz05]. research in this area
has witnessed a fruitful collaboration among computer scientists, physicists and mathematicians,
and is motivated in part by an attempt to better understand the class np, as well as some recent
surprising applications to hardness of approximation (see, e.g., [fei02]).
the main focus in this area is an attempt to understand the phase transition phenomenon of
random k-sat, namely, the sharp transition from being satisfiable with high probability at low
clause density to being unsatisfiable with high probability at high clause density. the existence of
this phase transition at a critical density αc was proven by friedgut in 1999 [fri99];² however, only in the case k = 2 is its value known exactly (αc = 1 [cr92, goe92, bbc+01]). a long line of works for k = 3 has narrowed it down to 3.52 ≤ αc ≤ 4.49 [kkl03, hs03, dkmp08] (with evidence that αc ≈ 4.267 [mpz02]), and in the large-k limit it has been shown that 2^k ln 2 − o(k) ≤ αc ≤ 2^k ln 2 [ap04].
the quantum analogue of this question, namely understanding the behavior of random k-qsat
instances, has recently started attracting attention. as in the classical case, the motivation here
comes from an attempt to understand qma1, the quantum analogue of np (of which k-qsat is a
complete problem), as well as the possibility of applications to hardness of approximation, but also
from the hope to obtain insight into phase transition effects in other quantum physical systems.
the definition of a random k-qsat instance is similar to the one in the classical case. fix
some α > 0. then a random k-qsat instance on n qubits of density α is obtained by repeating the
following m = αn times: choose a random subset of k qubits and pick a random rank-1 projector on
them. an equivalent way to describe this is to say that we choose a random k-uniform hypergraph
from the ensemble gk(n, m), in which m = αn k-hyperedges are picked uniformly at random from
the set of all possible k-hyperedges on n vertices (with repetitions) and then a random rank-1
projector is chosen for each hyperedge.
in a first work on the random k-qsat model, laumann et al. [lmss09] fully characterize the k = 2 case and show a threshold at density α^q_c = 1/2 using a transfer matrix approach introduced by bravyi [bra06]. curiously, the satisfying states in the satisfiable region are product states. they also establish the first lower and upper bounds on a possible (conjectured [bmr09]) threshold. in a recent breakthrough bravyi, moore and russell [bmr09] have dramatically improved the upper bound to 0.574 * 2^k, below the large-k limit of ln 2 * 2^k ≈ 0.69 * 2^k for the classical threshold!
² actually, it is still not known whether the critical density converges for large n; see [fri99] for details on this technical (but nontrivial) issue.
recently, laumann et al. [llm+09] have given substantially improved lower bounds, essen-
tially showing the following.
theorem 6. [llm+09] if there is a matching of projectors to qubits such that (i) each projector is matched
to a qubit on which it acts nontrivially and (ii) no qubit is matched to more than one projector, then the
k-qsat instance is satisfiable.
such a matching exists with high probability for random instances of qsat if the density is below some critical value c(k) (hence c(k) ≤ α^q_c), with c(3) ≈ 0.92 and c(k) → 1 for large k.
there remained a distressingly large gap between the best rigorous lower (< 1) and upper (≈ 0.574 * 2^k) bounds for a satisfiable/non-satisfiable threshold of random k-qsat.
using our quantum lll we are able to dramatically improve the lower bound on such a
threshold. to get a better intuition on the kind of bounds the quantum lll can give in this setting,
let us first look at a simple toy example: random k-qsat instances picked according to the uniform distribution on d-regular k-hypergraphs gk(n, d) (so m = dn/k and their density is α = d/k). it is easy to see that a matching as assumed in thm. 6 exists iff k ≥ d, so this technique shows satisfiability only below density 1. our cor. 5, on the other hand, immediately implies that the instance is satisfiable as long as the density α ≤ 2^k/(e * k^2). it is this order of magnitude that we manage to achieve also in the random k-qsat model described above. we show
theorem 7. a random k-qsat instance of density α ≤ 2^k/(12 * e * k^2) is satisfiable with high probability for any k ≥ 1. hence α^q_c ≥ 2^k/(12 * e * k^2).
all previous lower bound proofs [lmss09, llm+09] were based on constructing tensor product
states which satisfy all constraints. in fact it is conjectured [llm+09] that c(k) is the critical density
above which entangled states would necessarily appear as satisfying states. to our knowledge no technique has been able to deal with entangled satisfying states in this setting. using the quantum
lll allows us to show the existence of a satisfying state without the need to generate it, and in
particular the satisfying state need not be a product state (and probably is not). we conjecture that
the improvement in our bound, which is roughly exponential in k, is due to this difference.
the main difficulty we encounter in the proof of thm. 7 (see sec. 4) is that even though the
average degree in gk(n, m = αn) is of the right order of magnitude (≈ 2^k/k) to apply the quantum
lll (cor. 5), the maximum degree can deviate vastly from it (its expected size is roughly logarith-
mic in n), and hence prevent a direct application of the quantum lll. the key insight is that we
can split the graph into two parts, one essentially consisting of high degree vertices that deviate by
too much from the average degree and the other part containing the remaining vertices. we then
show that the first part obeys the matching conditions of thm. 6 [llm+09] and hence has a sat-
isfying state, and the second part obeys the maximum degree requirements of the quantum lll
and is hence also satisfiable. the challenge is to "glue" these two satisfying solutions together. for
this we need to make sure that each edge in the second part intersects the first part in at most one
qubit (by adding all other edges to the first part, while carefully treating the resulting dependen-
cies). we can then create a new (k −1)-local projector of rank 2 for each intersecting edge, which
reflects the fact that one qubit of this edge is already "taken". this allows us to effectively decouple the two parts.
discussion and open problems:
we have shown a general quantum lll. an obvious open
question is whether it has more applications for quantum information.
we call our generalization of the lovász local lemma "quantum" in view of the applications we have given. however, stricto sensu there is nothing quantum in our version of the lll; it is
we have given. however, stricto sensu there is nothing quantum in our version of the lll; it is
a statement about subspaces and the dimensions of their intersections. as such it seems to be
very versatile and we hope that it will find a multitude of other applications, not only in quantum
information, but also in geometry or linear algebra. more generally, our lll holds for any set of objects with a valuation r and operations ∩ and + that obey properties (i)-(iv) (see lemma 8) and
might be applicable even more generally. since the lll has so many applications, we hope that
our "geometric" lll becomes equally useful.
the standard proof of the classical lll is non-constructive in the sense that it asserts the exis-
tence of an object that obeys a system of constraints with limited dependence, but does not yield
an efficient procedure for finding an object with the desired property. in particular, it does not
provide an efficient way to find the actual satisfying assignment in cor. 2. a long line of research
[bec91, alo91, mr98, cs00, sri08, mos08] has culminated in a very recent breakthrough result by
moser [mos09] (see also [mt09]), who gave an algorithmic proof of the lll that allows to effi-
ciently construct the desired satisfying assignment (and more generally the object whose existence
is asserted by the lll [mt09]). moser's algorithm itself is a rather simple random walk on assign-
ments; an innovative information theoretic argument proves its correctness (see also [for09]). this
opens the exciting possibility to draw an analogy for a (possibly quantum) algorithm to construct
the satisfying state in instances of qsat which are known to be satisfiable via our qlll, and we
hope to explore this connection in future work.
structure of the paper:
in sec. 2 we study properties of relative dimension r and of r-independence,
allowing us to prove a general qlll in sec. 3. sec. 4 extends our results to the random k-qsat
model and presents our improved bound on the size of the satisfiable region.
2
properties of relative dimension
here we summarize and prove some of the properties of the relative dimension r and of r-independence
as defined in def. 3, which will be useful in the proof of the quantum lll in the next section.
lemma 8. for any subspaces x, y, z, xi ⊆ v the following hold:
(i) 0 ≤ r(x) ≤ 1.
(ii) monotonicity: x ⊆ y ⇒ r(x) ≤ r(y).
(iii) chain rule: r(⋂_{i=1}^{n} xi | y) = r(x1|y) * r(x2|x1 ∩ y) * r(x3|x1 ∩ x2 ∩ y) * · · · * r(xn | x1 ∩ · · · ∩ xn−1 ∩ y).
(iv) inclusion/exclusion: r(x) + r(y) = r(x + y) + r(x ∩ y).
(v) r(x) + r(x⊥) = 1 and r(x|y) + r(x⊥|y) ≤ 1.
(vi) r(x|z) + r(y|z) − r(x ∩ y|z) ≤ 1.
proof. properties (i), (ii), (iii) and (v) follow trivially from the definition.
property (iv) follows from dim(x) + dim(y) = dim(x ∩y) + dim(x + y), which is an easy
to prove statement about vector spaces (see e.g. [kos97], thm. 5.3).
property (vi) follows from (ii) and (iv): inclusion/exclusion (iv) gives r(x ∩z) + r(y ∩z) =
r(x ∩z + y ∩z) + r(x ∩y ∩z) ≤r(z) + r(x ∩y ∩z), where the last inequality follows from
the monotonicity property (ii) using x ∩z + y ∩z ⊆z. dividing by r(z) gives the desired
result.
we also need to extend our definition of r-independence (def. 3) to the case of several sub-
spaces, in analogy to the case of events.
definition 9 (mutual independence). an event a (resp. subspace x) is mutually independent (resp. mutually r-independent) of a set of events (resp. subspaces) {y1, . . . , yl} if for all s ⊆ [l], pr(a | ⋀_{i∈s} yi) = pr(a) (resp. r(x | ⋂_{i∈s} yi) = r(x)).
note that unlike in the case of probabilities, it is possible that two subspaces a and b are mutually r-independent while a⊥ and b are not mutually r-independent. one example is given by the following subspaces of r^4: a = span({(1, 0, 0, 0), (0, 1, 0, 0)}) and b = span({(1, 0, 0, 0), (0, 1, 1, 0)}). we have r(a|b) = r(a) = 1/2 but r(a⊥|b) = 0 while r(a⊥) = 1/2.
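this counterexample can be verified numerically in a few lines (a standalone sketch; the matrices below simply span the subspaces named in the text, with e1, . . . , e4 the standard basis of r^4):

```python
import numpy as np

def d(m):
    """dimension of the column span of m."""
    return np.linalg.matrix_rank(m)

def inter(x, y):
    """dim(x ∩ y) via dim(x) + dim(y) - dim(x + y)."""
    return d(x) + d(y) - d(np.hstack([x, y]))

a      = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], float)   # span{e1, e2}
a_perp = np.array([[0, 0], [0, 0], [1, 0], [0, 1]], float)   # span{e3, e4}
b      = np.array([[1, 0], [0, 1], [0, 1], [0, 0]], float)   # span{e1, e2 + e3}

dim_v = 4
print(inter(a, b) / dim_v, (d(a) / dim_v) * (d(b) / dim_v))           # 0.25 = 0.25: a, b r-independent
print(inter(a_perp, b) / dim_v, (d(a_perp) / dim_v) * (d(b) / dim_v)) # 0.0 vs 0.25: a⊥, b are not
```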
let us now relate the notion of mutual r-independence to the situation in k-qsat instances.
we first associate a subspace with a projector, in the natural way.
definition 10 (projectors and associated subspace). a k-local projector on n qubits is a projector of the form π ⊗ in−k, where π is a projector on k qubits q1, . . . , qk and in−k is the identity on the remaining qubits. we say that π acts on q1, . . . , qk. for a projector π, let its satisfying space be xπ⊥ := ker π = {|ψ⟩ : π|ψ⟩ = 0}. when there is no risk of confusion we denote xπ⊥ by π⊥ and its complement by π.
recall that in statements like cor. 5 we would like to say that two projectors are mutually
r-independent if they do not share any qubits. this is indeed the case, as the following lemma
shows.
lemma 11. assume a projector π does not share any qubits with projectors π1, . . . , πl. then xπ⊥ is mutually r-independent of {xπ1⊥, . . . , xπl⊥}.
proof. let us split the hilbert space h of the entire system into h = h1 ⊗ h2, where h1 consists of the qubits on which π acts non-trivially (and on which π1, . . . , πl act as identity), and h2 is the remaining space. by assumption there are projectors π and π1, . . . , πl such that π = π ⊗ in−k and πi = ik ⊗ πi. for every s ⊆ [l],
r(π | ⋂_{i∈s} πi) = dim(π ∩ ⋂_{i∈s} πi) / dim(⋂_{i∈s} πi) = dim(π ⊗ ⋂_{i∈s} πi) / dim(i ⊗ ⋂_{i∈s} πi) = (dim(π) dim(⋂_{i∈s} πi)) / (dim(h1) dim(⋂_{i∈s} πi)) = r(π).
remark: in exactly the same way one can show that π is mutually r-independent of {π1⊥, . . . , πl⊥} and that both π and π⊥ are mutually r-independent of {π1, . . . , πl}. hence the property of not sharing qubits (or, for subspaces, having a certain tensor structure), which in particular implies mutual r-independence, is in some sense a stronger notion of independence than r-independence.
to prove our quantum lll we only require the weaker notion of r-independence, which poten-
tially makes the quantum lll more versatile and applicable in settings where there is no tensor
structure.
3
the quantum local lemma
we begin by stating the classical general lovász local lemma. to this end we need to be more precise about what we mean by "weak" dependence, introducing the notion of the dependency
precise about what we mean by "weak" dependence, introducing the notion of the dependency
graph for both events and subspaces (see e.g. [as04] for the case of events), where we use relative
dimension r as in def. 3.
definition 12 (dependency graph for events/subspaces). the directed graph g = ([n], e) is a dependency graph for
(i) the events a1, . . . , an if for every i ∈ [n], ai is mutually independent of {aj : (i, j) ∉ e},
(ii) the subspaces x1, . . . , xn if for every i ∈ [n], xi is mutually r-independent of {xj : (i, j) ∉ e}.
with these notions in place we can state the general lovász local lemma (sometimes also called the asymmetric lll).
theorem 13 ([el75]). let a1, a2, . . . , an be events with dependency graph g = ([n], e). if there exist 0 ≤ y1, . . . , yn < 1 such that pr(ai) ≤ yi * ∏_{(i,j)∈e} (1 − yj), then
pr(a1^c ∧ a2^c ∧ · · · ∧ an^c) ≥ ∏_{i=1}^{n} (1 − yi).
in particular, with positive probability no event ai holds.
we prove a quantum generalization of this lemma with exactly the same parameters. as men-
tioned before, we have to modify the formulation of the lll to account for the unusual way
r-independence behaves under complement. we are now ready to state and prove our main re-
sult.
theorem 14 (quantum lovász local lemma). let x1, x2, . . . , xn be subspaces with dependency graph g = ([n], e). if there exist 0 ≤ y1, . . . , yn < 1 such that
r(xi) ≥ 1 − yi ∏_{(i,j)∈e} (1 − yj),    (1)
then r(⋂_{i=1}^{n} xi) ≥ ∏_{i=1}^{n} (1 − yi).
note that when r is replaced by pr and ∩ by ∧ we recover the lll of thm. 13. our proof uses only properties that hold both for pr and r; in particular it also proves thm. 13. one can say that we generalize the lll to any notion of probability for which properties (i)-(iv) of lemma 8 hold (these are the only properties of r we need in the proof).
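condition (1) can be checked mechanically for a candidate choice of the yi; the sketch below uses our own data structures (a set of directed edges for the dependency graph) and, as an example, instantiates the symmetric setting of thm. 4 with yi = 1/(d + 1):

```python
from math import prod

def qlll_condition_holds(r_values, edges, y):
    """r_values[i] = r(x_i); edges: set of directed pairs (i, j) of the dependency graph;
    y[i] in [0, 1): candidate weights. checks condition (1) of thm. 14; if it holds,
    the theorem guarantees r(x_0 ∩ ... ∩ x_{n-1}) >= prod_i (1 - y_i)."""
    n = len(r_values)
    for i in range(n):
        out_product = prod(1.0 - y[j] for (a, j) in edges if a == i)
        if r_values[i] < 1.0 - y[i] * out_product:
            return False
    return True

# symmetric setting of thm. 4: r(x_i) >= 1 - p, degree <= d, y_i = 1/(d + 1)
p, d, n = 1 / 32, 10, 6
edges = {(i, j) for i in range(n) for j in range(n) if i != j}
print(qlll_condition_holds([1 - p] * n, edges, [1 / (d + 1)] * n))  # True
```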
proof of theorem 14: we modify the proof in [as04] in order to avoid using the property pr(a|b) +
pr(ac|b) = 1 which does not hold for r. to show thm. 14, it is sufficient to prove the following
lemma.
lemma 15. for any s ⊂ [n] and every i ∈ [n], r(xi | ⋂_{j∈s} xj) ≥ 1 − yi.
thm. 14 now follows from the chain rule (lemma 8.iii):
r(⋂_{i=1}^{n} xi) = r(x1) r(x2|x1) r(x3|x1 ∩ x2) · · · r(xn | ⋂_{j=1}^{n−1} xj) ≥ ∏_{i=1}^{n} (1 − yi).
we prove the lemma by complete induction on the size of the set s. for the base case, if s is empty, we have
r(xi) ≥ 1 − yi ∏_{(i,j)∈e} (1 − yj) ≥ 1 − yi.
inductive step: to prove the statement for s we assume it is true for all sets of size < |s|. fix i and define D = s ∩ {j : (i, j) ∈ e} and I = s \ D (I and D are the independent and dependent parts of s with respect to the i'th element). let x_I = ⋂_{j∈I} xj and x_D = ⋂_{j∈D} xj. then
1 − r(xi | ⋂_{j∈s} xj) = 1 − r(xi | x_I ∩ x_D) = 1 − r(xi ∩ x_D | x_I) / r(x_D | x_I) = (r(x_D | x_I) − r(xi ∩ x_D | x_I)) / r(x_D | x_I).    (2)
to show the lemma we need to upper bound this expression by yi. we first upper bound the numerator:
r(x_D | x_I) − r(xi ∩ x_D | x_I) ≤ 1 − r(xi | x_I) = 1 − r(xi) ≤ yi ∏_{(i,j)∈e} (1 − yj),
where for the first inequality we use lemma 8.vi, then the fact that xi and x_I are r-independent, and the assumption on r(xi), eq. (1) in thm. 14.
now, we lower bound the denominator of eq. (2). suppose D = {j1, . . . , j|D|}; then
r(⋂_{j∈D} xj | x_I) = r(xj1 | x_I) * · · · * r(xj|D| | xj1 ∩ · · · ∩ xj|D|−1 ∩ x_I) ≥ ∏_{j∈D} (1 − yj) ≥ ∏_{j:(i,j)∈e} (1 − yj).
the equality follows from the chain rule (lemma 8.iii), the first inequality follows from the inductive assumption, and the second inequality follows from the fact that D = {j : (i, j) ∈ e} ∩ s ⊆ {j : (i, j) ∈ e} and that yj < 1.
for many applications we only need a simpler version of the quantum lll, often called the
symmetric version, which we have already stated in thm. 4.
proof of theorem 4: thm. 4 follows from thm. 14 in the same way the symmetric lll of thm. 1 follows from the more general lll of thm. 13 [as04]; we include it here for completeness. if d = 0 then r(⋂_{i=1}^{n} xi) = ∏_{i=1}^{n} r(xi) > 0 by the chain rule (lemma 8.iii) and mutual r-independence of all subspaces. for d ≥ 1, by the assumption there is a dependency graph g = ([n], e) for the subspaces x1, . . . , xn in which for each i, |{j : (i, j) ∈ e}| ≤ d. taking yi = 1/(d + 1) (< 1) and using that (1 − 1/(d + 1))^d > 1/e for d ≥ 1, we get
r(xi) ≥ 1 − p ≥ 1 − 1/(e(d + 1)) ≥ 1 − (1/(d + 1))(1 − 1/(d + 1))^d ≥ 1 − yi (1 − yi)^{|{j:(i,j)∈e}|},
which is the necessary condition eq. (1) in thm. 14. hence
r(⋂_{i=1}^{n} xi) ≥ (1 − 1/(d + 1))^n > 0.    (3)
note that eq. (3) also allows us to give a lower bound on the dimension of the intersecting
subspace, which might be useful for some applications.
we can now move to the implications of the qlll for "sparse" instances of qsat and prove
cor. 5. it is a special case of the following slightly more general corollary.
corollary 16. let {π1, . . . , πm} be a k-qsat instance where all projectors have rank at most r. if every qubit appears in at most d = 2^k/(e * r * k) projectors, then the instance is satisfiable.
proof. by assumption, each projector shares qubits with at most k(d − 1) other projectors. as we have already shown in lemma 11, each πi⊥ is mutually r-independent of all but D = k(d − 1) of the other πj⊥. with p = r * 2^{−k} we have r(πi⊥) ≥ 1 − p. the corollary follows from thm. 4 because p * e * (D + 1) ≤ r * 2^{−k} * e * (k(2^k/(e * r * k) − 1) + 1) ≤ 1.
4
an improved lower bound for random qsat
this section is devoted to the proof of thm. 7. as mentioned in the introduction, in random k-qsat
we study a distribution over instances of k-qsat with fixed density, defined as follows.
definition 17 (random k-qsat). random k-qsat of density α is a distribution over instances {π1, . . . , πm} on n qubits, where m = αn, obtained as follows:
1. construct a k-uniform hypergraph g with n vertices and m edges (the constraint hypergraph) by choosing m times, uniformly and with replacement, from the (n choose k) possible k-tuples of vertices.
2. for each edge i (1 ≤ i ≤ m) pick a k-qubit state |vi⟩ on the corresponding qubits uniformly from all such states (according to the haar measure) and set πi = |vi⟩⟨vi| ⊗ in−k.
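for concreteness, a random instance in the sense of def. 17 can be sampled as follows (a sketch only; the haar-random k-qubit state is obtained by normalizing a complex gaussian vector, and we return each projector as the pair (qubit tuple, state) rather than as a full 2^n-dimensional matrix):

```python
import numpy as np

def random_kqsat_instance(n, k, alpha, seed=None):
    """sample m = floor(alpha*n) projectors as in def. 17: each acts on a uniformly random
    k-tuple of distinct qubits with a haar-random rank-1 projector |v><v| on those qubits
    (identity elsewhere)."""
    rng = np.random.default_rng(seed)
    m = int(alpha * n)
    instance = []
    for _ in range(m):
        qubits = tuple(rng.choice(n, size=k, replace=False))
        v = rng.standard_normal(2 ** k) + 1j * rng.standard_normal(2 ** k)
        instance.append((qubits, v / np.linalg.norm(v)))   # normalized complex gaussian = haar
    return instance

instance = random_kqsat_instance(n=20, k=3, alpha=0.5, seed=1)
print(len(instance), instance[0][0])
```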
remark: (gk(n, m) vs. gk(n, p)) the distribution on hypergraphs obtained in the first step is denoted by gk(n, m) and has been studied extensively (see, e.g., [bol01, as04]). a closely related model is the so-called erdős-rényi gk(n, p) model, where each of the (n choose k) k-tuples is independently chosen to be an edge with probability p. for p = m/(n choose k) the expected number of edges in gk(n, p) is m and these two distributions are very close to each other. in most cases proving that a certain property holds in one implies that it holds in the other (see [bol01]). there seems to be no consensus whether to define the random k-sat and k-qsat models with respect to the distribution gk(n, m) or gk(n, p); for instance the upper bounds on the random k-qsat threshold of [bmr09] are shown in the gk(n, m) model, whereas the lower bounds [lmss09, llm+09] are given in the gk(n, p) model. this, however, does not matter, as properties such as being satisfiable with high probability will always hold for both models.
as mentioned, for α = c * 2^k/k^2, even though a graph from gk(n, m = αn) has average degree davg = kα = c * 2^k/k, and hence on average each qubit appears in c * 2^k/k projectors, we cannot apply the qlll and its cor. 5 directly: the degrees in gk(n, m) are distributed according to a poisson distribution with mean davg, and hence we expect to see some high-degree vertices (in fact the maximum degree at constant density is expected to be roughly logarithmic in n [bol01]). the idea behind the proof of thm. 7 is to single out the "high-degree" part vh of the graph and to treat it separately. the key is to show (i) that the matching conditions of laumann et al.'s thm. 6 are fulfilled by vh on the one hand and (ii) to demonstrate how to "glue" the solution on vh with the one provided by the qlll on the remaining graph.
we first show how to glue two solutions, which also clarifies the requirements for h.
lemma 18 (gluing lemma). let p = {π1, . . . , πm} be an instance of k-qsat with rank-1 projectors.
assume that there is a subset of the qubits vh and a partition of the projectors into two sets h and l, where
h (possibly empty) consists of all projectors that act only on qubits in vh, such that
1. the reduced instance given by h (restricted to qubits in vh) is satisfiable.
2. each qubit ∉ vh appears in at most 2^k/(4 * e * k) projectors from l.
3. each projector in l has at most one qubit in vh.
then p is satisfiable.
proof. let |φh⟩ be a satisfying state for h on the qubits vh (if h = ∅ this can be any state). to extend it to the whole instance, we need to deal with the projectors in l acting on a qubit from vh. let l = {π1, . . . , πl}. from l we construct a new "decoupled" instance l′ = {q1, . . . , ql} of k-qsat with projectors of rank at most 2 that have no qubits in vh. if πi ∈ l does not act on any qubit in vh, we set qi := πi. otherwise, order the k qubits on which πi acts such that the first one is in vh. πi can be written as πi = |vi⟩⟨vi| ⊗ in−k, where |vi⟩ is a k-qubit state. we can decompose |vi⟩ = a0|0⟩ ⊗ |vi0⟩ + a1|1⟩ ⊗ |vi1⟩, where the first part of the tensor product is the qubit in vh and |vi0⟩ and |vi1⟩ are (k − 1)-qubit states on the remaining qubits. define qi = (|vi0⟩⟨vi0| + |vi1⟩⟨vi1|) ⊗ in−k+1. call vl′ the set of qubits on which the projectors in l′ act. note that by construction vl′ is disjoint from vh, and that vh ∪ vl′ is the set of all qubits in p; hence h and l′ are "decoupled".
claim 19. assume there is a satisfying state |φl′⟩ for l′ on vl′. then |φ⟩ = |φh⟩ ⊗ |φl′⟩ is a satisfying state for p.
proof. by construction, |φ⟩ satisfies all the projectors from h and all projectors in l that do not have qubits in vh. to see that it also satisfies any projector πi from l with a qubit in vh, observe that |φl′⟩ is orthogonal to both |vi0⟩ and |vi1⟩. hence no matter how |φl′⟩ is extended on the qubit of vh in πi, the resulting state is orthogonal to |vi⟩.
it remains to show that l′ is satisfiable. this follows immediately from cor. 16: each projector in l′ can be viewed as a k-local projector of rank at most 4, and by assumption each qubit in vl′ appears in at most 2^k/(4 * e * k) projectors of l′.
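for illustration, the decoupling step of the proof can be carried out explicitly with numpy (a sketch under the assumption that the k-qubit state is stored as a length-2^k amplitude vector whose first tensor factor is the qubit lying in vh; the returned matrix is the rank-at-most-2 projector onto span{|vi0⟩, |vi1⟩} on the remaining k − 1 qubits):

```python
import numpy as np

def decouple(v, tol=1e-12):
    """v: length-2**k amplitude vector of |v_i>, first qubit = the qubit in vh.
    returns the rank-<=2 projector onto span{|v0>, |v1>} on the other k-1 qubits,
    where |v_i> = a0|0>|v0> + a1|1>|v1|; states orthogonal to this span satisfy q_i."""
    half = np.asarray(v).reshape(2, -1)              # rows: a0*v0 and a1*v1 (unnormalized)
    _, s, rows = np.linalg.svd(half, full_matrices=False)
    basis = rows[s > tol]                            # orthonormal basis of span{v0, v1}
    return basis.T @ basis.conj()                    # sum of |w><w| over the basis kets

# example: a random 3-qubit state gives a projector of rank 2 on the remaining 2 qubits
rng = np.random.default_rng(0)
v = rng.standard_normal(8) + 1j * rng.standard_normal(8)
q = decouple(v / np.linalg.norm(v))
print(q.shape, np.linalg.matrix_rank(q))             # (4, 4) 2
```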
the gluing lemma 18 only depends on the underlying constraint hypergraph. we can hence
give the construction of the "high degree" part of the instance purely in terms of hypergraphs,
and will from now on associate subsets of edges with the corresponding subsets of projectors.
motivated by the gluing lemma, our goal is to separate a set of "high-degree" vertices vh (above a certain cut-off degree d) with induced edges h such that each edge outside h has at most one vertex in vh. we achieve this by starting with the high-degree vertices and iteratively adding all those edges that intersect the current vertex set in more than one vertex.
definition 20 (construction of vh). let g = g([n], e) be a k-uniform hypergraph and d > 0. construct sets of vertices v0, v1, . . . ⊆ [n] and edges e1, e2, . . . ⊆ e iteratively in the following steps, starting with all sets empty:
0) let v0 = {v ∈ v : deg(v) > d}.
1) for all e ∈ e \ e0, if e has 2 or more vertices in v0, then add e to e1, and add to v1 all vertices in e not already in v0.
...
i) for all e ∈ e \ (e0 ∪ · · · ∪ ei−1), if e has 2 or more vertices in ∪_{j=0}^{i−1} vj, add e to ei, and add to vi all the vertices in ei which are not already in ∪_{j=0}^{i−1} vj.
stop at the first step s such that es = ∅.
let vh := ∪_{i=0}^{s} vi, h := ∪_{i=1}^{s} ei and l := e \ h.
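the construction of def. 20 amounts to the following fixed-point computation (a sketch using our own data structures; iterating until no further edge qualifies yields the same vh, h and l as the round-by-round description):

```python
def split_high_degree(n, edges, d):
    """edges: list of k-tuples of vertices; d: cut-off degree. returns (vh, h, l):
    h holds the edges absorbed into the high-degree part, l the remaining edges,
    each of which meets vh in at most one vertex."""
    deg = {}
    for e in edges:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    vh = {v for v in range(n) if deg.get(v, 0) > d}          # step 0: high-degree vertices
    in_h = [False] * len(edges)
    changed = True
    while changed:
        changed = False
        for idx, e in enumerate(edges):
            if not in_h[idx] and sum(v in vh for v in e) >= 2:
                in_h[idx] = True
                vh.update(e)                                  # absorb all vertices of e
                changed = True
    h = [e for idx, e in enumerate(edges) if in_h[idx]]
    l = [e for idx, e in enumerate(edges) if not in_h[idx]]
    return vh, h, l
```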
by construction all the vi are disjoint and similarly for the ei. the process of adding edges
stops at some step s (es = ∅), because e \ (e0 ∪. . . ∪es−1) keeps shrinking until this happens.
note that h consists precisely of all those edges in e that have only vertices in vh (i.e. g(vh, h)
is the hypergraph induced by g on vh).
to show that a random k-qsat instance of density α is satisfiable with high probability, we
only need to show that the construction of vh, h and l of def. 20 fulfills the conditions of the
gluing lemma 18 with high probability. we set d = 2^k/(4 * e * k) in def. 20, so that conditions
2. and 3. are fulfilled by construction. to finish the proof of thm. 7 it thus suffices to show that
the instance given by h on qubits in vh is satisfiable. to show this we build on laumann et al.'s
thm. 6.
lemma 21. for a random k-qsat instance with density α ≤ 2^k/(12 * e * k^2), the reduced instance h obtained in the construction of def. 20 with d = 2^k/(4 * e * k) fulfills the matching conditions of thm. 6 with high probability.
proof. the proof of this key lemma proceeds in two parts. the first one (lemma 22) shows that any
hypergraph induced by a small enough subset of vertices in a hypergraph from gk(n, αn) fulfills
the matching conditions. the second part (lemma 23) then shows that vh is indeed small enough
with high probability.
lemma 22 (small subgraphs have a matching). let g be a random hypergraph distributed according to gk(n, αn) and let γ = (e (e^2 * α)^{1/(k−2)})^{−1}. with high probability, for all w ⊂ v with |w| < γn, the induced hypergraph on w obeys the matching conditions of thm. 6.
proof. there is a simple intuition why small sets obey the matching conditions: the density inside a small induced graph is much smaller than the density of g. for simplicity set α = 2^{k−1} and γ = 1/(2 + 2δ) for some δ > 0. imagine fixing w ⊂ v of size γn and then picking the graph g according to gk(n, p) with p = αn/(n choose k) ≈ α/n^{k−1} = (2/n)^{k−1}. the induced graph on w is distributed according to gk(γn, p) and hence its density is α′ = p * (γn choose k)/(γn) ≈ p * (γn)^{k−1} = (1 + δ)^{−(k−1)} ≪ 1. at such low densities the matching conditions are fulfilled with high probability (see the remark below thm. 6). we proceed to prove the somewhat stronger statement that the matching conditions hold for all small subsets.
let us first examine the matching conditions. we can construct a bipartite graph b(g), where
on the left we put the edges of g and on the right the vertices of g. we connect each edge on the
left with those vertices on the right that are contained in that edge. then the matching conditions
of thm. 6 are equivalent to saying that there is a matching in b(g) that covers all left vertices.
by hall's theorem [hal35, die97], such a matching exists iff for all t, every subset of t edges on
the left is connected to at least t vertices on the right. hence, there is a "bad" subset w ⊂v with
|w| < γn not obeying the matching conditions iff for some t < γn there is a subset of vertices of
size t − 1 that contains t edges. let us compute the probability of such a bad event happening.
first, fix a subset s ⊆ v of size t − 1 and let us compute the probability that it contains t edges. the probability that a random edge lands in s is at most ((t − 1)/n)^k. since in gk(n, m) all m edges are picked independently, we get
pr[s contains t edges] ≤ (m choose t) ((t − 1)/n)^{kt}.
by the union bound over all subsets s of size t − 1 (there are (n choose t−1) of them) and all t we get the following bound:
pr[∃ "bad" w] ≤ ∑_{t=1}^{γn} (n choose t−1) (m choose t) ((t − 1)/n)^{kt} ≤ ∑_{t=1}^{γn} (n choose t) (αn choose t) (t/n)^{kt} ≤ ∑_{t=1}^{γn} (ne/t)^t (αne/t)^t (t/n)^{kt} = ∑_{t=1}^{γn} (e^2 α (t/n)^{k−2})^t =: ∑_{t=1}^{γn} a_t.
note that the sum is clearly dominated by the first term (t = 1). more precisely, for all 1 ≤ t < γn − 1 we have
a_{t+1}/a_t = e^2 α ((t + 1)/t)^{(k−2)t} ((t + 1)/n)^{k−2} ≤ e^2 α e^{k−2} γ^{k−2} =: r < 1,
where for the last inequality we have used the bound on γ. hence ∑_{t=1}^{γn} a_t ≤ ∑_{t=1}^{γn} a_1 r^{t−1} = a_1/(1 − r), and we get pr[∃ "bad" w] ≤ (1/(1 − r)) * e^2 α/n^{k−2} → 0.
lemma 23 (vh is small). let g be a hypergraph picked from gk(n, αn) and let vh be the set of vertices generated by the procedure in definition 20 with d = 2^k/(4 * e * k). then for k ≥ 12 and αk ≤ d/3, with high probability |vh| ≤ (ε0 + o(1))n for some ε0 satisfying ε0 < γ, where γ is the constant from lemma 22.
remark: as is standard in the models of random k-sat and random k-qsat, if we look at the large-k limit we will always first take the limit n → ∞ for fixed k and then k → ∞. hence we will always treat k (and d and α) as constants in the asymptotic o(·) terms.
proof. throughout the proof we will set α to its maximum allowed value of d/(3k). the statement
of lemma 23 for smaller α then follows by monotonicity.
for the proof of this lemma, we first replace gk(n, αn) by a slightly different model of random hypergraphs g′k(n, α′n). in g′k(n, α′n), we first generate a random sequence of vertices of length kα′n, with each vertex picked i.i.d. at random. we then divide the sequence into blocks of length k and, for each block that contains k different vertices, we create a hyperedge. (for blocks that contain the same vertex twice, we do nothing.)
the expected number of blocks containing the same vertex twice is o((k choose 2) α′) = o(1). therefore, we can choose α′ = α + o(1) and, with high probability, we will get at least αn edges (and each of those edges will be uniformly random). this means that it suffices to prove the lemma for g′k(n, α′n).
for this model, we will show that |vi| satisfies the following bounds:
claim 24. there is an ε0 < γ/2 and εi := 2^{−i} ε0 such that for all i with 0 ≤ i ≤ l, where l := ⌈(3/2) log n⌉, with probability at least 1 − 2^i/n^2,
|vi| ≤ εi n.    (4)
this implies that vl is empty with probability at least 1 − o(1/√n). in this case, |vh| = ∑_{i=0}^{l−1} |vi|. with probability at least 1 − 2^{l+1}/n^2 = 1 − o(1/√n), (4) is true for all i. then
|vh| = ∑_{i=0}^{l−1} |vi| ≤ 2ε0 n < γn,
which completes the proof of the lemma.
in what follows we will repeatedly use azuma's inequality [azu67, hoe63, as04]: let x0, . . . , xn be a martingale with |xi+1 − xi| ≤ 1 for all 0 ≤ i < n. then for any t > 0,
pr(|xn − x0| ≥ t) ≤ exp(−t^2/(2n)).    (5)
we now prove claim 24 by induction on i. we start with the base case i = 0. here, we will also bound r0, the number of edges incident to v0, and show
pr[r0 ≥ ε0 d n] ≤ 1/(2n^2).    (6)
the i = 0 case. recall that v0 = {v : deg(v) ≥ d}. by linearity of expectation, e[|v0|] = n pr(deg(v) ≥ d). the degree of a vertex is a sum of independent 0-1 valued random variables with expectation slightly less than α′k. in the large-n limit, this becomes a poisson distribution with mean ≤ α′k = d/3 + o(1). using the tail bound for poisson distributions (see, e.g., [as04], thm. a.1.15), we obtain pr(deg(v) ≥ d) ≤ (e^2/27)^{d/3}. note that for k ≥ 12 we have (e^2/27)^{d/3} ≤ (5/8)ε0, where we set
ε0 = α′/(12d^2 k) = (d/(3k) + o(1))/(12d^2 k) ≤ 1/(12dk^2) < γ/2.
then e[|v0|] ≤ (5/8)ε0 n.
to bound e[r0], observe that r0 ≤ ∑_{v∈v0} deg(v) and hence
e[r0] ≤ n pr(deg(v) ≥ d) * e[deg(v) | deg(v) ≥ d] ≤ (5/8)ε0 n * e[deg(v) | deg(v) ≥ d] ≤ (5/6)ε0 n d,
where for the last inequality we have used e[deg(v) | deg(v) ≥ d] ≤ (4/3)d, which follows from the following simple fact:
fact 25. let x be a random variable distributed according to a poisson distribution with mean λ. then for k > 1, e[x | x ≥ kλ] ≤ (k + 1)λ.
proof.
e[x | x ≥ kλ] = ∑_{j≥kλ} j pr(x = j) / pr(x ≥ kλ) = (1/pr(x ≥ kλ)) ∑_{j≥kλ} j e^{−λ} λ^j / j! = (λ/pr(x ≥ kλ)) ∑_{j≥kλ} e^{−λ} λ^{j−1} / (j − 1)!
= λ (1 + pr(x = kλ − 1)/pr(x ≥ kλ)) ≤ λ (1 + pr(x = kλ − 1)/pr(x = kλ)) = λ (1 + kλ/λ) = (1 + k)λ.
to prove (4) and (6), we use azuma's inequality, eq. (5). let x0, x1, . . . , x_{kα′n} be the martingale defined in the following way: we pick the vertices of the sequence defining g at random one by one and let xi be the expectation of |v0| (resp. r0) when the first i vertices of the sequence are already chosen and the rest is still uniformly random. picking one vertex in any particular way changes the size of |v0| by at most 1 and of r0 by at most d (when the degree of a vertex crosses the threshold d to be in v0). therefore, for v0, |xi − xi−1| ≤ 1 (and |xi − xi−1| ≤ d for the bound on r0). for v0, by azuma's inequality
pr[||v0| − e[|v0|]| ≥ t] = pr[|x_{kα′n} − x0| ≥ t] ≤ e^{−t^2/(2kα′n)}.
to make this probability less than 1/n^2, we choose t = 2√(kα′) √(n ln n). then, with probability at least 1 − 1/n^2, |v0| ≤ e[|v0|] + o(√(n log n)) ≤ (5/8)ε0 n + o(√(n log n)) ≤ ε0 n, which gives bound (4). similarly, to show bound (6) for r0, we choose t = 2d√(kα′) √(n(ln n + 1)). then, we get that with probability at least 1 − 1/(2n^2), r0 ≤ e[r0] + o(√(n log n)) ≤ (5/6)ε0 n d + o(√(n log n)) ≤ ε0 n d.
the i > 0 case.
we will first condition on the event f that bound (6) holds and bounds (4) hold
for all previous i. moreover, we fix the following objects:
• the sets v0, . . . , vi−1;
• the edges in e1, . . . , ei−1;
• the degrees of all vertices v ∈v0 ∪. . . ∪vi−1;
conditioning on v0, . . . , vi−1 and their degrees is equivalent to fixing the number of times that
each v ∈v0 ∪. . . ∪vi−1 appears in the sequence defining the graph g according to g′
k(n, α′n).
furthermore, conditioning on e1, . . . , ei−1 means that we fix some blocks of the sequence to be
equal to edges in e1, . . . , ei−1. we can then remove those blocks from the sequence and adjust the
degrees of the vertices that belong to those edges. conditioning on e1, . . . , ei−1 also means that
we condition on the fact that there is no other block containing two vertices from v0 ∪. . . ∪vi−2.
we now consider a random sequence of vertices satisfying those constraints. let b be the total
number of blocks (after removing e0, . . . , ei−1) and call mj the number of blocks that contain one
element of vj for 0 ≤j ≤i −1. (the mj are fixed since the vj are fixed.) the sequence of vertices
on the b blocks is uniformly random among all sequences with a fixed number of occurrences of
elements in vj (a total of mj) and such that no two of them occur in the same block. note that
an edge from ei must have at least one of its vertices in vi−1. we have m0 + · · · + mi−2 blocks containing one vertex from v0 ∪ · · · ∪ vi−2 each. for each of those blocks, the probability that one of the mi−1 occurrences of v ∈ vi−1 ends up in it is at most
(k − 1) mi−1 / (kb − m0 − · · · − mi−2).    (7)
for any other block, the probability that two or more occurrences of v ∈ vi−1 are in it is at most
(k choose 2) mi−1(mi−1 − 1) / ((kb − m0 − · · · − mi−2)(kb − m0 − · · · − mi−2 − 1)) ≤ (k choose 2) (mi−1 / (kb − m0 − · · · − mi−2))^2.    (8)
observe that |ej+1| + mj ≤ d|vj| for j ≥ 1, since each vertex in vj is incident to less than d edges. moreover, |e1| + m0 ≤ r0. note that this implies that kb − (m0 + m1 + · · · + mi−2) ≥ kα′n − k[r0 + d(|v1| + · · · + |vi−2|)]. recall that we are conditioning on the event f that the bounds in (4) and (6) hold, and hence we can further bound kb − (m0 + m1 + · · · + mi−2) ≥ kα′n − k[ε0 n d + d(ε1 n + · · · + εi−2 n)] ≥ kα′n − 2kdε0 n. for our choice of ε0 ≤ α′/(12kd^2) we hence obtain kb − (m0 + · · · + mi−2) ≥ α′kn/2.
by combining (7) and (8), using the union bound for all relevant blocks, we get
e[|ei|] ≤ ( (k − 1)(m0 + · · · + mi−2)/(α′kn/2) + α′n (k choose 2) mi−1/(α′kn/2)^2 ) mi−1 ≤ 2 mi−1 (m0 + · · · + mi−1)/(α′n) ≤ 2d^2 |vi−1| (r0/d + |v1| + · · · + |vi−1|)/(α′n).
since we are conditioning on the event f that (4) and (6) hold, we can bound r0 and the |vj| and obtain
e[|ei|] ≤ 2d^2 |vi−1| (ε0 n + ε1 n + · · · + εi−1 n)/(α′n) ≤ 2d^2 |vi−1| * 2ε0/α′ ≤ |vi−1|/(3k),
where we have substituted ε0 = α′/(12d^2 k). together with the observation that |vi| ≤ k|ei| we have hence shown in our setting that
e[|vi|] ≤ |vi−1|/3.    (9)
the large deviation bound (4) again follows from azuma's inequality (5). we pick the sequence of kb vertices (after removing e0, . . . , ei−1) vertex by vertex and let xi be the expectation of |vi| after picking the first i vertices of the sequence. then x0, x1, . . . , x_{kb} form a martingale, and choosing one vertex of the sequence affects |vi| by at most k. therefore, |xi − xi−1| ≤ k when bounding |vi|. we now apply azuma's inequality (5) with t = 2k√(kα′n(ln n + 1)) and obtain, in our setting of fixed sets vj, fixed degrees of their elements and fixed sets ej for 0 ≤ j ≤ i − 1, and conditioning on the event f,
pr(||vi| − e[|vi|]| ≥ o(√(n log n))) ≤ 1/(2n^2).
using eq. (9), the induction hypothesis and the fact that we are conditioning on bound (4) holding, we get that with probability at least 1 − 1/(2n^2),
|vi| ≤ e[|vi|] + o(√(n log n)) ≤ |vi−1|/3 + o(√(n log n)) ≤ εi−1 n/3 + o(√(n log n)) ≤ εi n.
since this holds for all fixed sets vj, fixed degrees of their elements and fixed sets ej for 0 ≤ j ≤ i − 1, it also holds when we remove this conditioning (while still conditioning on the event f). by the union bound, f fails to hold with probability at most (2^i − 1/2)/n^2. hence, with probability at least 1 − 2^i/n^2, |vi| ≤ εi n and we have shown the bound in (4).
this completes the proof of lemma 21 for all k ≥ 12. for smaller values of k our bound of α ≤ 2^k/(12 * e * k^2) is smaller than the bound obtained by laumann et al. [llm+09], and hence the lemma also holds. hence we have shown thm. 7.
remark: note that in the limit of large k our results can be tightened to give a bound of α ≤ (d − o(√(d log d)))/k = 2^k/(4 * e * k^2) − o(2^{k/2}/√k) for the satisfiable region. the analysis essentially changes only for the bound on e[|v0|] in the beginning of the i = 0 base case, where we have to use the tail bound for the poisson distribution for smaller deviations.
acknowledgments
the authors would like to thank the erwin schrödinger international institute in vienna, where
part of this work was done, for its hospitality. we thank noga alon, chris laumann and oded
regev for valuable discussions.
references
[alo91] n. alon. a parallel algorithmic version of the local lemma. random structures and algorithms, 2(4):367–378, 1991.
[ap04] d. achlioptas and y. peres. the threshold for random k-sat is 2^k log 2 − o(k). journal of the american mathematical society, 17(4):947–974, 2004.
[as04] n. alon and j. h. spencer. the probabilistic method. wiley-interscience, 2004.
[azu67] k. azuma. weighted sums of certain dependent random variables. tohoku mathematical journal, 19(3):357–367, 1967.
[bbc+01] b. bollobás, c. borgs, j. t. chayes, j. h. kim, and d. b. wilson. the scaling window of the 2-sat transition. random structures and algorithms, 18(3):201–256, 2001.
[bec91] j. beck. an algorithmic approach to the lovász local lemma. random structures and algorithms, 2(4):343–365, 1991.
[bmr09] s. bravyi, c. moore, and a. russell. bounds on the quantum satisfiability threshold. arxiv preprint arxiv:0907.1297, 2009.
[bol01] b. bollobás. random graphs. cambridge university press, 2nd edition, 2001.
[bra06] s. bravyi. efficient algorithm for a quantum analogue of 2-sat. arxiv preprint quant-ph/0602108, 2006.
[bs07] s. beigi and p. w. shor. on the complexity of computing zero-error and holevo capacity of quantum channels. arxiv preprint arxiv:0709.2090, september 2007.
[bt08] s. bravyi and b. terhal. complexity of stoquastic frustration-free hamiltonians. arxiv preprint arxiv:0806.1746, june 2008.
[cr92] v. chvátal and b. reed. mick gets some (the odds are on his side). in proceedings of the 33rd annual symposium on foundations of computer science, pages 620–627. ieee computer society, 1992.
[cs00] a. czumaj and c. scheideler. coloring non-uniform hypergraphs: a new algorithmic approach to the general lovász local lemma. in proceedings of the 11th symposium on discrete algorithms, 17:30–39, 2000.
[die97] r. diestel. graph theory (graduate texts in mathematics). springer, heidelberg, 1997.
[dkmp08] j. diaz, l. kirousis, d. mitsche, and x. perez-gimenez. a new upper bound for 3-sat. arxiv preprint arxiv:0807.3600, july 2008.
[el75] p. erdős and l. lovász. problems and results on 3-chromatic hypergraphs and some related questions. infinite and finite sets, 2:609–627, 1975.
[fei02] u. feige. relations between average case complexity and approximation complexity. in proc. 34th annual acm symp. on theory of computing (stoc), pages 534–543, 2002.
[for09] l. fortnow. a kolmogorov complexity proof of the lovász local lemma. http://blog.computationalcomplexity.org/2009/06/kolmogorov-complexity-proof-of-lov.html, 2009.
[fri99] e. friedgut. sharp thresholds of graph properties, and the k-sat problem. journal of the american mathematical society, 12(4):1017–1054, october 1999.
[goe92] a. goerdt. a threshold for unsatisfiability. in mathematical foundations of computer science 1992, pages 264–274, 1992.
[hal35] p. hall. on representatives of subsets. j. london math. soc., 10(1):26–30, 1935.
[hoe63] w. hoeffding. probability inequalities for sums of bounded random variables. journal of the american statistical association, pages 13–30, 1963.
[hs03] m. t. hajiaghayi and g. b. sorkin. the satisfiability threshold of random 3-sat is at least 3.52. arxiv preprint math/0310193, 2003.
[kkl03] a. c. kaporis, l. m. kirousis, and e. lalas. selecting complementary pairs of literals. electronic notes in discrete mathematics, 16:47–70, october 2003.
[kos97] a. i. kostrikin. linear algebra and geometry. taylor & francis, 1997.
[ks94] s. kirkpatrick and b. selman. critical behavior in the satisfiability of random boolean expressions. science, 264(5163):1297–1301, may 1994.
[liu06] y. k. liu. consistency of local density matrices is qma-complete. in approximation, randomization, and combinatorial optimization. algorithms and techniques, pages 438–449, 2006.
[liu07] y. k. liu. the complexity of the consistency and n-representability problems for quantum states. arxiv preprint arxiv:0712.3041, december 2007.
[llm+09] c. r. laumann, a. m. läuchli, r. moessner, a. scardicchio, and s. l. sondhi. on product, generic and random generic quantum satisfiability. arxiv preprint arxiv:0910.2058, october 2009.
[lmss09] c. r. laumann, r. moessner, a. scardicchio, and s. l. sondhi. phase transitions and random quantum satisfiability. arxiv preprint arxiv:0903.1904, march 2009.
[mmz05] m. mézard, t. mora, and r. zecchina. clustering of solutions in the random satisfiability problem. physical review letters, 94(19):197205, may 2005.
[mos08] r. a. moser. derandomizing the lovász local lemma more effectively. arxiv preprint arxiv:0807.2120, july 2008.
[mos09] r. a. moser. a constructive proof of the lovász local lemma. in proceedings of the 41st annual acm symposium on theory of computing, pages 343–350. acm, new york, ny, usa, 2009.
[mp87] p. mani-levitska and j. pach. decomposition problems for multiple coverings with unit balls. manuscript, 1987.
[mpz02] m. mézard, g. parisi, and r. zecchina. analytic and algorithmic solution of random satisfiability problems. science, 297(5582):812–815, 2002.
[mr98] m. molloy and b. reed. further algorithmic aspects of the local lemma. in proceedings of the 30th annual acm symposium on theory of computing, pages 524–529, 1998.
[mt09] r. a. moser and g. tardos. a constructive proof of the general lovász local lemma. arxiv preprint arxiv:0903.0544, 2009.
[spe77] j. spencer. asymptotic lower bounds for ramsey functions. discrete math., 20(1):69–76, 1977.
[sri08] a. srinivasan. improved algorithmic versions of the lovász local lemma. in proceedings of the nineteenth annual acm-siam symposium on discrete algorithms, pages 611–620, 2008.
|
0911.1697 | time-varying autoregressions in speech: detection theory and
applications | this article develops a general detection theory for speech analysis based on
time-varying autoregressive models, which themselves generalize the classical
linear predictive speech analysis framework. this theory leads to a
computationally efficient decision-theoretic procedure that may be applied to
detect the presence of vocal tract variation in speech waveform data. a
corresponding generalized likelihood ratio test is derived and studied both
empirically for short data records, using formant-like synthetic examples, and
asymptotically, leading to constant false alarm rate hypothesis tests for
changes in vocal tract configuration. two in-depth case studies then serve to
illustrate the practical efficacy of this procedure across different time
scales of speech dynamics: first, the detection of formant changes on the scale
of tens of milliseconds of data, and second, the identification of glottal
opening and closing instants on time scales below ten milliseconds.
| introduction
this article presents a statistical detection framework for identifying vocal tract dynamics in speech data
for identifying vocal tract dynamics in speech data
across different time scales. since the source-filter view of
speech production motivates modeling a stationary vocal tract
using the standard linear-predictive or autoregressive (ar)
model [2], it is natural to represent temporal variation in
the vocal tract using a time-varying autoregressive (tvar)
process. consequently, we propose here to detect vocal tract
changes via a generalized likelihood ratio test (glrt) to
determine whether an ar or tvar model is most appropriate
for a given speech data segment. our main methodological
contribution is to derive this test and describe its asymp-
totic behavior. our contribution to speech analysis is then
to consider two specific, in-depth case studies of this testing
framework: detecting change in speech spectra, and detecting
glottal opening and closing instants from waveform data.
earlier work in this direction began with the fitting of
piecewise-constant ar models to test for nonstationarity [3],
[4]. however, in reality, the vocal tract often varies slowly,
rather than as a sequence of abrupt jumps; to this end, [5]–[8]
studied time-varying linear prediction using tvar models. in
a more general setting, kay [9] recently proposed a version of
the rao test for ar vs. tvar determination; however, when
available, likelihood ratio tests often outperform their rao
test counterparts for finite sample sizes [10]. nonparametric
approaches to detecting spectral change in acoustic signals
were proposed by the current authors in [11], [12].
detecting spectral variation across multiple scales is an
important first step toward appropriately exploiting vocal tract
dynamics. this can lead to improved speech analysis algo-
rithms on time scales on the order of tens of milliseconds
for speech enhancement [11], [13]–[15], classification of time-
varying phonemes such as unvoiced stop consonants [7], and
forensic voice comparison [16]. at the sub-segmental time
scale (i.e., less than one pitch period), sliding-window ar
analysis has been used to capture vocal tract variation and to
study the excitation waveform as a key first step in applications
including inverse filtering [17], speaker identification [18],
synthesis [19], and clinical voice assessment [20].
in the first part of this article, we develop a general
detection theory for speech analysis based on tvar models.
in section ii, we formally introduce these models, derive their
corresponding maximum-likelihood estimators, and develop a
glrt appropriate for speech waveforms. after providing ex-
amples using real and synthetic data, including an analysis of
vowels and diphthongs from the timit database [21], we then
formulate in section iii a constant false alarm rate (cfar) test
and characterize its asymptotic behavior. in section iv, we
discuss the relationship of our framework to classical methods,
including the piecewise-constant ar approach of [3].
next, we consider two prototype speech analysis applica-
tions: in section v, we apply our glrt framework to detect
formant changes in both whispered and voiced speech. we
then show how to detect glottal opening and closing instants
via the glrt in section vi. we evaluate our results on
the more difficult problem of detecting glottal openings [22]
using ground-truth data obtained by electroglottograph (egg)
analysis, and also show performance comparable to methods
based on linear prediction and group delay for the task of
identifying glottal closures. we conclude and briefly discuss
future directions in section vii.
ii. time-varying autoregressions and testing
a. model specification
recall the classical pth-order linear predictive model for
speech, also known as an ar(p) autoregression [2]:
ar(p):
x[n] =
p
x
i=1
aix[n −i] + σw[n],
(1)
where the sequence w[n] is a zero-mean white gaussian
process with unit variance, scaled by a gain parameter σ > 0.
a more flexible pth-order time-varying autoregressive model is given by the following discrete-time difference equation [5]:
tvar(p): x[n] = ∑_{i=1}^{p} ai[n] x[n − i] + σw[n].    (2)
in contrast to (1), the linear prediction coefficients ai[n] of (2) are time-dependent, implying a nonstationary random process. the model of (2) requires specification of precisely how the linear prediction coefficients evolve in time. here we choose to expand them in a set of q + 1 basis functions fj[n] weighted by coefficients αij as follows:
ai[n] = ∑_{j=0}^{q} αij fj[n],  for all 1 ≤ i ≤ p.    (3)
we assume throughout that the "constant" function f0[n] = 1 is included in the chosen basis set, so that the classical ar(p) model of (1) is recovered as ai ≡ αi0 * 1 whenever αij = 0 for all j > 0. many choices are possible for the functions fj[n]: legendre [23] and fourier [5] polynomials, discrete prolate spheroidal functions [6], and even wavelets [24] have been used in speech applications.
the functional expansion of (3) was first studied in [23],
[25], and subsequently applied to speech analysis by [5]–[7],
among others. coefficient trajectories ai[n] have also been
modeled as sample paths of a suitably chosen stochastic pro-
cess (see, e.g., [26]). in this case, however, estimation typically
requires stochastic filtering [13] or iterative methods [27] in
contrast to the least-squares estimators available for the model
of (3), which are described in section ii-c below.
b. ar vs. tvar generalized likelihood ratio test (glrt)
we now describe how to test the hypothesis h0 that a given signal segment x = (x[0] x[1] · · · x[n − 1])^t has been
generated by an ar(p) process according to (1), against the
alternative hypothesis h1 of a tvar(p) process as specified
by (2) and (3) above. we introduce a glrt to examine
evidence of change in linear prediction coefficients over time,
and consequently in the vocal tract resonances that they
represent in the classical source-filter model of speech.
according to the functional expansion of (3), the tvar(p) model of (2) is fully described by p(q + 1) expansion coefficients αij and the gain term σ. for convenience we group the coefficients αij into q + 1 vectors αj, 0 ≤ j ≤ q, as
αj ≜ (α1j α2j · · · αpj)^t.
we may then partition a vector α ∈ r^{p(q+1)×1} into blocks associated to the ar(p) portion of the model, αar, and the remainder, αtv, which captures time variation:
α ≜ (αar^t | αtv^t)^t = (α0^t | α1^t α2^t · · · αq^t)^t.    (4)
recalling that the tvar(p) model (hypothesis h1) reduces to an ar(p) model (hypothesis h0) precisely when αj = 0 for all j > 0, we may formulate the following hypothesis test:
model: tvar(p) with parameters α, σ^2;
hypotheses: h0 : αj = 0 for all j > 0;  h1 : αj ≠ 0 for at least one j > 0.    (5)
[fig. 1. computation of the glrt statistic t(x) according to section ii-c.]
each of these two hypotheses in turn induces a data likelihood in the observed signal x ∈ r^{n×1}, which we denote by p_{hi}(·) for i = 0, 1. the corresponding generalized likelihood ratio test comprises evaluation of a test statistic t(x), and rejection of h0 in favor of h1 if t(x) exceeds a given threshold γ:
t(x) ≜ 2 ln [ sup_{α,σ^2} p_{h1}(x; α, σ^2) / sup_{α0,σ^2} p_{h0}(x; α0, σ^2) ] ≷ γ.    (6)
c. evaluation of the glrt statistic
the numerator and denominator of (6) respectively imply maximum-likelihood (ml) parameter estimates of α = (αar^t | αtv^t)^t and α0 in (4) under the specified tvar(p) and ar(p) models, along with their respective gain terms σ^2. intuitively, when h0 is in force, estimates of αtv will be small; we formalize this notion in section iii-a by showing how to set the test threshold γ to achieve a constant false alarm rate.
as we now show, conditional ml estimates are easily obtained in closed form, and the terms in (6) reduce to estimates of σ^2 under hypotheses h0 and h1, respectively. given n observations, partitioned according to
x = (xp | xn−p)^t ≜ (x[0] · · · x[p − 1] | x[p] · · · x[n − 1])^t,
the joint probability density function of the data is given by
p(x; α, σ^2) = p(xn−p | xp; α, σ^2) p(xp; α, σ^2).    (7)
here the notation | reflects conditioning on random variables,
whereas ; indicates dependence of the density on non-random
parameters. as is standard practice, we approximate the un-
conditional data likelihood of (7) by the conditional likelihood
p(xn−p | xp ; α, σ2), whose maximization yields an estimator
that converges to the exact (unconditional) ml estimator as
n →∞(see, e.g., [28] for this argument under h0).
gaussianity of w[n] implies the conditional likelihood
p(xn−p | xp; α, σ^2) = (2πσ^2)^{−(n−p)/2} exp( −∑_{n=p}^{n−1} e^2[n] / (2σ^2) ),
where e[n] ≜ x[n] − ∑_{i=1}^{p} ∑_{j=0}^{q} αij fj[n] x[n − i] is the associated prediction error. the log-likelihood is therefore
ln p(xn−p | xp; α, σ^2) = −((n − p)/2) ln(2πσ^2) − ∥xn−p − hx α∥^2 / (2σ^2),    (8)
where the (n − p + 1)th row of the matrix hx ∈ r^{(n−p)×p(q+1)} is given by the kronecker product (x[n − 1] · · · x[n − p]) ⊗ (f0[n] f1[n] · · · fq[n]) for any p ≤ n ≤ n − 1.
[fig. 2. example of glrt detection performance for a "formant-like" synthetic tvar(2) signal. left: a test signal and its tvar coefficients, with pole trajectories and ar vs. tvar estimates below. right: operating characteristics of the corresponding glrt (p = 2 tvar coefficients, q = 4 legendre polynomials, fs = 16 khz) for various frequency jumps and data lengths.]
maximizing (8) with respect to α therefore yields the least-squares solution of the following linear regression problem:
xn−p = hx α + σw,    (9)
where w ≜ (w[p] · · · w[n − 1])^t. consequently, the conditional ml estimate of α follows from (8) and (9) as
α̂ = (hx^t hx)^{−1} hx^t xn−p.    (10)
the estimator of (10) corresponds to a generalization of the covariance method of linear prediction, to which it exactly reduces when the number q of non-constant basis functions employed is set to zero [5]; we discuss the corresponding generalization of the autocorrelation method in section iv-b.
the conditional ml estimate of σ2 is obtained by substitut-
ing (10) into (8) and maximizing with respect to σ2, yielding
\hat{\sigma}^2 = \frac{1}{N-p} \sum_{n=p}^{N-1} \Big( x[n]\,x[n] - \sum_{i=1}^{p} \sum_{j=0}^{q} \hat{\alpha}_{ij}\, f_j[n]\, x[n]\, x[n-i] \Big).
(11)
under h0 (the time-invariant case), the estimator of (11)
reduces to the familiar \hat{\sigma}^2 = \hat{r}_{xx}[0] - \sum_{i=1}^{p} \hat{\alpha}_{i0}\, \hat{r}_{xx}[i], where
\hat{r}_{xx}[\tau] is the (sample) autocorrelation function of x[n] at lag τ.
in summary, the conditional ml estimates of αar, αtv and
σ2 under h1 are obtained using (10) and (11), respectively.
estimates of αar and σ2 under h0 are obtained by setting
q = 0 in (10) and (11). substituting these estimates into the
glrt statistic of (6), we recover the following intuitive form
for t(x), whose computation is illustrated in fig. 1:
t(x) = (N-p)\, \ln\!\big( \hat{\sigma}^2_{h_0} / \hat{\sigma}^2_{h_1} \big).
(12)
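as a concrete illustration of how (10)–(12) fit together, the following sketch (our own; the variable names, the legendre basis support on [−1, 1], and the use of numpy's least-squares routine are assumptions, not the authors' code) evaluates t(x) for a given segment:

```python
import numpy as np
from numpy.polynomial import legendre

def glrt_statistic(x, p, q):
    """Sketch of the GLRT statistic T(x) of (12): fit the TVAR(p) model with q
    non-constant Legendre basis functions (h1) and the AR(p) model (h0, q = 0),
    then compare the conditional ML noise-variance estimates."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    t = np.linspace(-1.0, 1.0, N)                # assumed basis support
    F = legendre.legvander(t, q)                 # columns f_0[n], ..., f_q[n]

    def sigma2_hat(num_basis):
        # regressor rows: (x[n-1] ... x[n-p]) kron (f_0[n] ...), as below (8)
        rows = [np.kron(x[n - 1::-1][:p], F[n, :num_basis]) for n in range(p, N)]
        H = np.asarray(rows)
        y = x[p:]
        alpha, *_ = np.linalg.lstsq(H, y, rcond=None)   # estimate (10)
        resid = y - H @ alpha
        return float(resid @ resid) / (N - p)    # equals (11) at the LS solution

    s2_h0 = sigma2_hat(1)                        # q = 0: time-invariant AR(p)
    s2_h1 = sigma2_hat(q + 1)                    # full TVAR(p) model
    return (N - p) * np.log(s2_h0 / s2_h1)       # statistic (12)
```

for instance, glrt_statistic(x, p=2, q=4) would evaluate (12) under the model orders used in fig. 2.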
d. evaluation of glrt detection performance
to demonstrate typical glrt behavior, we first consider
an example detection scenario involving a "formant-like"
signal synthesized by filtering white gaussian noise through
a second-order digital resonator. the resonator's center fre-
quency is increased by δ radians halfway through the duration
of the signal, while its bandwidth is kept constant; an example
560-sample signal with δ = 7π/80 radians is shown in fig. 2.
detection performance in this setting is summarized in the
right-hand panel of fig. 2, which shows receiver operating
characteristic (roc) curves for different signal lengths n
and frequency jump sizes δ. these were varied in the ranges
n ∈{80, 240, 400, 560} samples (10 ms increments) and
δ ∈{π/80, 3π/80, 5π/80, 7π/80} radians (200 hz incre-
ments), and 1000 trial simulations were performed for each
combination. to generate data under h0, δ was set to zero. in
agreement with our intuition, detection performance improves
when δ is increased while n is fixed, and vice versa-simply
put, larger changes and those occurring over longer intervals
are easier to detect. moreover, even though the span of the
chosen legendre polynomials does not include the actual
piecewise-constant coefficient trajectories, the norm of their
projection onto this basis set is sufficiently large to trigger a
detection with high probability.
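for reference, roc curves of this kind can be assembled from monte carlo draws of the statistic under the two hypotheses; a minimal bookkeeping sketch (ours, not the simulation code used here) is:

```python
import numpy as np

def empirical_roc(t_h0, t_h1):
    """Empirical ROC from simulated T(x) values under h0 and h1: sweep a
    threshold over the pooled statistics and record (P_FA, P_D) pairs."""
    thresholds = np.sort(np.concatenate([t_h0, t_h1]))[::-1]
    pfa = np.array([(t_h0 > g).mean() for g in thresholds])
    pd = np.array([(t_h1 > g).mean() for g in thresholds])
    return pfa, pd
```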
we next consider a large-scale experiment designed to
test the sensitivity of the test statistic t(x) to vocal tract
variation in real speech data. to this end, we fitted ar(10)
and tvar(10) models (with q = 4 legendre polynomials)
to all instances of the vowels /eh/, /ih/, /ae/, /ah/, /uh/, /ax/
(as in "bet,' "bit," "bat," "but," "book," and "about") and the
diphthongs /ow/, /oy/, /ay/, /ey/, /aw/, /er/ (as in "boat," "boy,"
"bite," "bait," "bout," and "bird") in the training portion of the
timit database [21]. data were downsampled to 8 khz, and
values of t(x) were averaged across all dialects, speakers,
and sentences (50, 000 vowel and 25, 000 diphthong instances
in total).
per-phoneme averages are reported in table i, and indicate
considerably stronger detections of vocal tract variation in
diphthongs than in vowels-and indeed a two-sample t test
easily rejects (p-value ≈ 0) the hypothesis that the average
values of t(x) for the two groups are equal. this finding
is consistent with the physiology of speech production, and
demonstrates the sensitivity of the glrt in practice.

table i
vocal tract variation in timit vowels & diphthongs.

  vowel       eh      ih      ae      ah      uh      ax
  t(x)        67.5    60.5    94.6    63.8    58.9    32.1

  diphthong   ow      oy      ay      ey      aw      er
  t(x)        134.1   302.4   187.4   130.6   161.6   133.0
iii. analysis of detection performance
to apply the hypothesis test of (5), it is necessary to select
a threshold γ as per (6), such that the null hypothesis of a
best-fit ar(p) model is rejected in favor of the fitted tvar(p)
model whenever t(x) > γ. below we describe how to choose
γ to guarantee a constant false alarm rate (cfar) for large
sample sizes, and give the asymptotic (in n) distribution of the
glrt statistic under h0 and h1, showing how these results
yield practical consequences for speech analysis.
a. derivation of glrt asymptotics and cfar test
under suitable technical conditions [29], likelihood ratio
statistics take on a chi-squared distribution \chi^2_d(0) as the sample
size N grows large whenever h0 is in force, with the degrees
of freedom d equal to the number of parameters restricted
under the null hypothesis. in our setting, d = pq, since the pq
coefficients α_tv are restricted to be zero under h0, and we
may write that t(x) ∼ \chi^2_{pq}(0) under h0 as N → ∞.
thus, we may specify an allowable asymptotic constant
false alarm rate for the glrt of (5), defined as follows:
\lim_{N \to \infty} \Pr\{ t(x) > \gamma;\, h_0 \} = \Pr\big\{ \chi^2_{pq}(0) > \gamma \big\}.
(13)
since the asymptotic distribution of t(x) under h0 depends
only on p and q, which are set in advance, we can determine a
cfar threshold γ by fixing a desired value (say, 5%) for the
right-hand side of (13), and evaluating the inverse cumulative
distribution function of \chi^2_{pq}(0) to obtain the value of γ that
guarantees the specified (asymptotic) constant false alarm rate.
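in practice this amounts to a one-line computation; a sketch using scipy (an assumption about tooling, not a prescription) is:

```python
from scipy.stats import chi2

def cfar_threshold(p, q, false_alarm_rate):
    """Asymptotic CFAR threshold of (13): the (1 - alpha) quantile of a
    central chi-squared distribution with p*q degrees of freedom."""
    return chi2.ppf(1.0 - false_alarm_rate, df=p * q)

# example: p = 2 TVAR coefficients, q = 4 Legendre polynomials, 5% false alarms
gamma = cfar_threshold(2, 4, 0.05)
```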
when x is a tvar process so that the alternate hypothesis
h1 is in force, t(x) instead takes on (as N → ∞) a
noncentral chi-squared distribution \chi^2_d(\lambda). its noncentrality
parameter λ > 0 depends on the true but unknown parameters
of the model under h1; thus in general
t(x) \;\overset{N \to \infty}{\sim}\; \chi^2_{pq}(\lambda), \qquad
\begin{cases} \lambda = 0 & \text{under } h_0, \\ \lambda > 0 & \text{under } h_1. \end{cases}
(14)
it is easily shown by the method of [9] that the expression for
λ in the case at hand is given by
\lambda = \alpha_{tv}^T \big( F^T F \otimes \sigma^{-2} R \big)^{\star} \alpha_{tv},
(15)
where (\cdot)^{\star} denotes the schur complement with respect to
the first p × p matrix block of its argument, the (j+1)th
column of the matrix F ∈ \mathbb{R}^{(N-p) \times (q+1)} is given by
\big( f_j[p]\; f_j[p+1] \cdots f_j[N-1] \big)^T, and R is given by:
R \triangleq \begin{pmatrix}
r_{xx}[0] & r_{xx}[1] & \cdots & r_{xx}[p-1] \\
r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[p-2] \\
\vdots & \vdots & \ddots & \vdots \\
r_{xx}[p-1] & r_{xx}[p-2] & \cdots & r_{xx}[0]
\end{pmatrix}.
here {rxx[0], rxx[1], . . . , rxx[p −1]} is the autocorrelation
sequence corresponding to αar (given, e.g., by the "step-down
algorithm" [28]). the expression of (15) follows from the
fact that F^T F ⊗ σ^{-2} R is the fisher information matrix for
our tvar(p) model; its schur complement arises from the
composite form of our hypothesis test, since the parameters
αar, σ2 are unrestricted under h0.
more generally, we may relate this result to the underlying
tvar coefficient trajectories a_i[n], arranged as columns of
a matrix A, with each column-wise mean trajectory value a
corresponding entry in a matrix \bar{A}. letting \tilde{A} \triangleq A - \bar{A} denote
the centered columns of A, and noting both that (F^T F ⊗ R)^{\star} = \tilde{F}^T \tilde{F} ⊗ R
(with \tilde{F} the centered non-constant basis columns of F) and that
F (F^T F)^{-1} F^T A = A when h1 is in force, properties of kronecker
products [30] can be used to show that (15) may be written as
\lambda = \sigma^{-2}\, \operatorname{tr}\!\big( \tilde{A} R \tilde{A}^T \big).
(16)
thus λ depends on the centered columns of A, which contain
the true but unknown coefficient trajectories a_i[n] minus their
respective mean values.
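once the trajectories and the h0 autocorrelation matrix are available, (16) is straightforward to evaluate; a sketch (ours, assuming A holds one trajectory per column) is:

```python
import numpy as np

def noncentrality(A, sigma2, R):
    """Evaluate (16): lambda = tr(A_tilde R A_tilde^T) / sigma^2, where the
    columns of A are the coefficient trajectories a_i[n] and R is the p x p
    autocorrelation matrix of the h0 model."""
    A_tilde = A - A.mean(axis=0, keepdims=True)   # center each column
    return float(np.trace(A_tilde @ R @ A_tilde.T)) / sigma2
```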
b. model order selection
the above results yield not only a practical cfar
threshold-setting procedure, but also a full asymptotic descrip-
tion of the glrt statistic of (6) under both h0 and h1. in
light of this analysis, it is natural to ask how the tvar model
order p should be chosen in practice, along with the number
q of non-constant basis functions. in deference to the large
literature on the former subject [2], we adopt here the standard
"2 coefficients per 1 khz of speech bandwidth" rule of thumb.
intuitively, the choice of basis functions should be well
matched to the expected characteristics of the coefficient tra-
jectories ai[n]. to make this notion quantitatively precise, we
appeal to the results of (14)–(16) as follows. first, the statis-
tical power of our test to successfully detect small departures
from stationarity is measured by the quantity \Pr\{\chi^2_d(\lambda) > \gamma\}.
a result of [31] then shows that for fixed γ, the power function
\Pr\{\chi^2_d(\lambda) > \gamma\} is:
1) strictly monotonically increasing in λ, for fixed d;
2) strictly monotonically decreasing in d for fixed λ.
each of these properties in turn yields a direct and important
consequence for speech analysis:
• test power is maximized when λ attains its largest value:
for fixed p and q, the noncentrality parameter λ of (16)
determines the power of the test as a function of σ2 and
the true but unknown coefficient trajectories a.
• overfitting the data reduces test power: choosing p or
q to be larger than the true data-generating model will
result in a quantifiable loss in power, as λ will remain
fixed while the degrees of freedom increase.
fig. 3. the effect of overfitting on the detection performance of the glrt
statistic for the synthetic signal of fig. 2. an increase in the model order, p
(left) and q (right), decreases the probability of detection at any cfar level.
the first of these consequences follows from property 1
above, and reveals how test power depends on the energy
of the centered tvar trajectories \tilde{A} = A - \bar{A} for fixed
\bar{A} and p, q, σ². to verify the second consequence, observe
that the product \tilde{A} R \tilde{A}^T remains unaffected by an increase in
either p or q beyond that of the true tvar(p) model. then
by property 2, the corresponding increase in the degrees of
freedom pq will lead to a loss of test power.
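both properties are easy to check numerically at the cfar threshold of (13); the following sketch (ours) uses scipy's noncentral chi-squared survival function:

```python
from scipy.stats import chi2, ncx2

def test_power(d, lam, alpha=0.05):
    """Power Pr{chi^2_d(lam) > gamma}, with gamma set to the CFAR threshold (13)."""
    gamma = chi2.ppf(1.0 - alpha, df=d)
    return ncx2.sf(gamma, df=d, nc=lam)

# property 1: power grows with the noncentrality lam (fixed d)
assert test_power(d=8, lam=10.0) > test_power(d=8, lam=5.0)
# property 2: extra degrees of freedom at fixed lam cost detection power
assert test_power(d=8, lam=10.0) > test_power(d=20, lam=10.0)
```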
this analysis implies that care should be taken to ade-
quately capture the energy of tvar coefficient trajectories
while guarding against overfitting; this formalizes our ear-
lier intuition and reinforces the importance of choosing a
relatively low-dimensional subspace formed by the span of
low-frequency basis functions whose degree of smoothness
is matched to the expected tvar(p) signal characteristics
under h1. this conclusion is further illustrated in fig. 3,
which considers the effects of overfitting on the "formant-like"
synthetic example of section ii-d, with p = 2, n = 100 sam-
ples, δ = 7π/80 radians, and piecewise-constant coefficient
trajectories. not only is the effect of overfitting p apparent in
the left-hand panel, but the detection performance also suffers
as the degree q of the legendre polynomial basis is increased,
as shown in the right-hand panel.
iv. relationship to classical approaches
we now relate our hypothesis testing framework to two
classical approaches in the literature. first, we compare its
performance to that of brandt's test [3], which has seen wide
use both in earlier [4], [32] and more recent studies [15],
[33], [34], for purposes of transient detection and automatic
segmentation for speech recognition and synthesis. second,
we demonstrate its advantages relative to the autocorrelation
method of time-varying linear prediction [5], showing that data
windowing can adversely affect detection performance in this
nonstationary setting.
a. classical piecewise-constant ar approach
fig. 4. comparing the detection performance of the statistic of (6) and that
of (17): (top-left) comparison using the piecewise-constant signal (n = 100,
δ = 5π/80) of section ii-d with p = 2 and q = 2 legendre polynomials
used for computing (6); (top-right) piecewise-linear center frequency of the
digital resonator used to generate the 2nd synthetic example; (bottom-left)
comparison using the piecewise-linear signal (n = 300) with p = 2 and q =
3 legendre polynomials used for computing (6); (bottom-right) histogram of
the changepoint r that maximizes the test statistic t′_r(x) for each instantiation
of the signal with piecewise-linear tvar coefficient trajectories.
a related previous approach is to model x as an ar
process with piecewise-constant parameters that can undergo
at most a single change [35]. the essence of this approach,
first employed in the speech setting by [3], is to
split x into two parts according to x = (x_r | x_{N-r}) =
(x[0] \cdots x[r-1] \,|\, x[r] \cdots x[N-1])^T for some fixed r, and
to assume that under h0, x is modeled by an ar(p) process
with parameters α0, whereas under h1, xr and xn−r are
described by distinct ar(p) processes with parameters αr and
αn−r, respectively.
in this context, testing for change in ar parameters at
some known r can be realized as a likelihood ratio test; the
associated test statistic t′_r(x) is obtained by applying the
covariance method to x, x_r, and x_{N-r} in order to estimate
α_0, α_r, and α_{N-r}, respectively. however, since the value of
r is unknown in practice, t′_r(x) must also be maximized over
r, yielding a test statistic t′(x) as follows:
t'(x) \triangleq \max_{r} t'_r(x), \quad 2p \le r < N - 2p, \text{ with}
(17)
t'_r(x) \triangleq \frac{\sup_{\alpha_r,\, \alpha_{N-r}} p_{h_1}(x_r; \alpha_r)\, p_{h_1}(x_{N-r}; \alpha_{N-r})}{\sup_{\alpha_0} p_{h_0}(x; \alpha_0)}.
(18)
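for concreteness, a rough sketch of evaluating (17)–(18) is given below (our own illustration; the conditional log-likelihood normalization is one common choice and may differ in detail from the exact form used in [3], [35]):

```python
import numpy as np

def ar_rss(x, p):
    """Covariance-method AR(p) fit; returns the residual sum of squares."""
    H = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.sum((y - H @ a) ** 2))

def brandt_statistic(x, p):
    """Maximize a single-changepoint log-likelihood ratio over r, as in (17)."""
    N = len(x)
    logl_full = (N - p) * np.log(ar_rss(x, p) / (N - p))
    best = -np.inf
    for r in range(2 * p, N - 2 * p):
        logl_left = (r - p) * np.log(ar_rss(x[:r], p) / (r - p))
        logl_right = (N - r - p) * np.log(ar_rss(x[r:], p) / (N - r - p))
        best = max(best, logl_full - logl_left - logl_right)
    return best
```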
we compared the detection performance of the glrt
statistic of (6) with that of (17) on both the piecewise-constant
signal of fig. 2 and a piecewise-linear tvar(2) signal to
illustrate their respective behaviors-the resulting roc curves
are shown in fig. 4. in both cases, it is evident that the tvar-
based statistic of (6) has more power than that of (17), in part
due to the extra variability introduced by maximizing over all
values of r in (17)-especially those near the boundaries of
its range. even in the case of the piecewise-constant signal,
correctly matched to the assumptions underlying (17), the
tvar-based test is outperformed only when r is known a
priori, and (18) is used. this effect is particularly acute in the
small sample size setting-an important consideration for the
single-pitch-period case study of section vi.
this example demonstrates that any estimates of r can be
misleading under model mismatch. as shown in the bottom-
right panel of fig. 4, the detected changepoint is often esti-
mated to be near the start or end of the data segment, but
no "true" changepoint exists since the time-varying center
frequency is continuously changing. thus piecewise-constant
models are only simple approximations to potentially complex
tvar coefficient dynamics; in contrast, flexibility in the
choice of basis functions implies applicability to a broader
class of time-varying signals.
note also that computing (17) requires brute-force evalua-
tion of (18) for all values of r, whereas (6) need be calculated
once. moreover, t ′(x) fails to yield chi-squared (or any
closed-form) asymptotics [35], thus precluding the design of
a cfar test and any quantitative evaluation of test power.
b. classical linear prediction and windowing
recall that our glrt formulation of section ii, stemming
from the tvar model of (2), generalized the covariance
method of linear prediction to the time-varying setting. the
classical autocorrelation method also yields least-squares esti-
mators, but under a different error minimization criterion than
that corresponding to conditional maximum likelihood. to see
this, consider the tvar model
x[n] = \sum_{i=1}^{p} a_i[n-i]\, x[n-i] + \sigma w[n],
(19)
in lieu of (2). grouping the coefficients α_{ij} into p vectors
\tilde{\alpha}_i \triangleq (α_{i0}\; α_{i1} \cdots α_{iq})^T, 1 ≤ i ≤ p, induces a partition of the
expansion coefficients given by \tilde{\alpha} \triangleq (\tilde{\alpha}_1^T\; \tilde{\alpha}_2^T \cdots \tilde{\alpha}_p^T)^T,
a permutation of the elements of α in (4). the autocorrelation
estimator of \tilde{\alpha} is then obtained by minimizing the prediction
error over all n ∈ \mathbb{Z}, while assuming that x[n] = 0 for all
n ∉ [0, . . . , N−1], and is equivalent to the least-squares solution
of the following linear regression problem:
x = \tilde{H}_x \tilde{\alpha} + \sigma \tilde{w},
(20)
where \tilde{w} = (w[0] \cdots w[N-1])^T and the nth row of
\tilde{H}_x ∈ \mathbb{R}^{N \times p(q+1)} is given by (f_0[n-1]x[n-1] \cdots f_0[n-p]x[n-p] \cdots f_q[n-1]x[n-1] \cdots f_q[n-p]x[n-p]). the
autocorrelation estimate of \tilde{\alpha} then follows from (20) as:1
\hat{\tilde{\alpha}} = \big( \tilde{H}_x^T \tilde{H}_x \big)^{-1} \tilde{H}_x^T x.
(21)
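the following sketch (ours; the basis support on [−1, 1] and the exact coefficient ordering are assumptions) forms the zero-padded regressor of (20) and solves (21) by least squares, returning the coefficients in the α̃ partition described above:

```python
import numpy as np
from numpy.polynomial import legendre

def tvar_autocorrelation_fit(x, p, q):
    """Autocorrelation-method TVAR fit (21): zero-padded data, regressor rows
    built from f_j[n-i] x[n-i] following the synchronous model (19)."""
    N = len(x)
    F = legendre.legvander(np.linspace(-1.0, 1.0, N), q)   # f_j[n], shape (N, q+1)
    xz = np.concatenate([np.zeros(p), x])                  # x[n] = 0 for n < 0
    Fz = np.vstack([np.zeros((p, q + 1)), F])              # pad the basis likewise
    rows = [[Fz[p + n - i, j] * xz[p + n - i]
             for i in range(1, p + 1) for j in range(q + 1)]   # alpha-tilde order
            for n in range(N)]
    H = np.asarray(rows)
    alpha_tilde, *_ = np.linalg.lstsq(H, np.asarray(x, dtype=float), rcond=None)
    return alpha_tilde
```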
moreover, when the autocorrelation method is used for
spectral estimation in the stationary setting, x is often pre-
multiplied by a smooth window. to empirically examine
the role of data windowing in the time-varying setting, we
generated a short 196-sample synthetic tvar(2) signal x
1as noted by [5], \tilde{H}_x^T \tilde{H}_x is a block-toeplitz matrix comprised of p²
symmetric blocks of size (q+1) × (q+1); this special structure arises as a
direct consequence of the synchronous form of the tvar trajectories in (19).
thus, the multichannel levinson-durbin recursion [36] may be used to invert
\tilde{H}_x^T \tilde{H}_x directly.
fig. 5.
comparison of covariance- and autocorrelation-based test statistics,
based on 5000 trials with a short (196-sample) data record. top: roc curves
showing the effects of data windowing on detection performance. bottom:
detail of how windowing changes the distribution of t(x) under h0.
using q = 0 (h0) and q = 2 (h1) non-constant legendre
polynomials, and fitted x using ar(3) and tvar(3) models-
with the extra autoregressive order expected to capture the
effects of data windowing. we then generated an roc curve
associated with the glrt statistic of (12), shown in the top
panel of fig. 5, along with roc curves corresponding to an
evaluation of (12) following the autocorrelation rather than
the covariance method, both with and without windowing.
the bottom panel of fig. 5 shows the empirical distributions
of both autocorrelation-based test statistics under h0, and
indicates how windowing has the inadvertent effect of hinder-
ing detection performance in this setting. we have observed
the effects of fig. 5 to be magnified for even shorter data
records, implying greater precision of the covariance-based
glrt approach, which also has the advantage of known test
statistic asymptotics under correct model specification.
v. case study i: detecting formant motion
we now introduce a glrt-based sequential detection al-
gorithm to identify vocal tract variation on the scale of tens
of milliseconds of speech data, and undertake a more refined
analysis than that of section ii-d to demonstrate its efficacy
on both whispered and voiced speech. our results yield strong
empirical evidence that appropriately specified tvar models
can capture vocal tract dynamics, just as ar models are known
to provide a time-invariant vocal tract representation that is
robust to glottal excitation type.
a. sequential change detection scheme
our basic approach is to divide the waveform into a
sequence of k short-time segments {x1, x2, . . . , xk} using
shifts of a single n0-sample rectangular window, and then to
merge these segments, from left to right, until spectral change
is detected via the glrt statistic of (6). the procedure,
detailed in algorithm 1, begins by merging the first pair of
adjacent short-time segments x1 and x2 into a longer segment
algorithm 1 sequential formant change detector
1) initialization: set γ via (13), input waveform data x
• compute k short-time segments {x1, . . . , xk} of
x using shifts of a rectangular window
• set k = 1, xl = x1, xr = x2
• set a marker array c[k] = 0 for all 1 ≤k < k
2) while k < k
• set xm = xl + xr and compute t(xm) via (6)
• if t(xm) < γ (no formant motion within xm)
– set xl = xm, c[k] = 0
else (formant motion detected within xm)
– set xl = xk, c[k] = 1
• set xr = xk+1, k = k + 1
3) return the set of markers {k : c[k] = 1}
xm and computing t(xm); failure to reject h0 implies that
xm is stationary. thus, the short-time segments remain merged
and the next pair considered is (xm, x3). this procedure
continues until h0 is rejected, indicating the presence of
change within the merged segment under consideration. in
this case, the scheme is re-initialized, and adjacent short-time
segments are once again merged until a subsequent change in
the spectrum is detected.
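a compact sketch of this merging procedure (ours; the segment bookkeeping is one reasonable reading of algorithm 1, and glrt stands for any routine computing (6), e.g. the earlier glrt_statistic sketch with p and q fixed) is:

```python
import numpy as np

def sequential_formant_change_detector(x, n0, gamma, glrt):
    """Merge adjacent n0-sample segments left to right; mark a change whenever
    the GLRT statistic of the merged segment exceeds gamma (algorithm 1)."""
    segments = [x[k:k + n0] for k in range(0, len(x) - n0 + 1, n0)]
    markers = []
    left = segments[0]
    for k in range(1, len(segments)):
        merged = np.concatenate([left, segments[k]])
        if glrt(merged) < gamma:     # no formant motion: keep the merged segment
            left = merged
        else:                        # change detected: mark it and re-initialize
            markers.append(k)
            left = segments[k]
    return markers
```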
in algorithm 1, the cfar threshold γ of (13) is set prior to
observing any data, by appealing to the asymptotic distribution
of t(x) under h0 developed in section iii-a. in principle,
the time resolution to within which change can be detected is
limited only by n0. using arbitrarily short windows, however,
increases the variance of the test statistic and results in
an increase in false alarms-a manifestation of the fourier
uncertainty principle. decreasing γ also serves to increase the
(constant) false alarm rate, and leads to spurious labeling of
local fluctuations in the estimated coefficient trajectories (e.g.,
due to the position of the sliding window relative to glottal
closures) as vocal tract variation.
b. evaluation with whispered speech
in order to evaluate the glrt in a gradually more realistic
setting, we first consider the case of whispered speech to avoid
the effects of voicing, and apply the formant change detection
scheme of algorithm 1 to whispered utterances containing
slowly-varying and rapidly-varying spectra, respectively.
the waveform used in the first experiment comprises a
whispered vowel /a/ (as in "father") followed by a diphthong
/ai/ (as in "liar"). it was downsampled to 4 khz in order to
focus on changes in the first two formants, and algorithm 1
was applied to this waveform as well as to its 0–1 khz and
1–2 khz subbands (containing the first and second formants,
respectively).
results are summarized in fig. 6, and clearly demonstrate
that the glrt is sensitive to formant motion. all three
spectrograms indicate that spectral change is first detected near
the boundary of the vowel and diphthong-precisely when the
vocal tract configuration starts to change. subsequent consec-
utive changes are found when sufficient formant change has
been observed relative to data duration, a finding consistent
with our earlier observation in section ii-d that more data
are required to detect small changes in the ar coefficient
trajectories, and by proxy the vocal tract, at the same level
of statistical significance (i.e., same false alarm rate).
fig. 6. result of applying algorithm 1 (16 ms rectangular windows, p = 4,
q = 2 legendre polynomials, 1% cfar) to detect formant movement in
the whispered waveform /a ai/. spectrograms corresponding to subbands
containing the first formant only (a), second formant only (b), and both formants
(c) were computed using 16 ms hamming windows with 50% overlap, and
are overlaid with formant tracks computed by wavesurfer [37]. black lines
demarcate times at which formant motion was detected; the time-domain
waveform overlaid with these boundaries is also shown (d).
next observe that whereas three "changepoints" are found
when the waveform contains two moving resonances, a total
of three "changepoints" are marked in the single-resonance
waveforms shown in figs. 6(a) and 6(b). intuitively, each
of these signals can be thought of as having "less" spectral
change than the waveform shown in fig. 6(c), which contains
both formants. thus, since the corresponding amounts of
spectral change are smaller, longer short-time segments are
required to detect formant movement-as indicated by the
delays in detecting the vowel-diphthong transition seen in
figs. 6(a) and (b) relative to (c).
we next conducted a second experiment to demonstrate that
the glrt can also detect a more rapid onset of spectral change
as compared to, e.g., the relatively slow change in the spectrum
of the diphthong. to this end we applied algorithm 1 to a
sustained whispered vowel (/i/ as in "beet"), followed by the
plosive /t/ at 10 khz. the results, shown in fig. 7, indicate
that no change is detected during the sustained vowel, whereas
the plosive is clearly identified.
finally, we have observed change detection results such as
these to be robust to not only reasonable choices of p (roughly
2 coefficients per 1 khz of speech bandwidth) and q (1–10),
but also to the size of the initial window length (10–40 ms),
and the constant false alarm rate (1–20%).
fig. 7.
algorithm 1 (16 ms windows, p=10, q =4 legendre polynomials,
1% cfar), applied to detect formant movement in the whispered waveform
/i t/. its spectrogram (top) is overlaid with formant tracks computed by
wavesurfer [37] and black lines demarcating the time instants at which
formant motion was detected; the time-domain signal is also shown (bottom).
c. extension to voiced speech
we next conducted an experiment to show that the tvar-
based glrt is robust to the presence of voicing. we repeated
the first experiment of section v-b above using a voiced
vowel-diphthong pair /a ai/ over the range 0–4 khz. the same
parameter settings were employed, except for the addition of
two poles to take into account the shape of the glottal pulse
during voicing [2]. algorithm 1 yields the results shown in
fig. 8(a), which parallel those shown in fig. 6 for the whis-
pered case. indeed, the first change occurs at approximately the
vowel-diphthong boundary, with subsequent "changepoints"
marked when sufficient formant movement has been observed.
the similarities in these results are due in part to the
fact that the analysis windows employed in both cases span
at least one pitch period. to wit, consider the synthesized
voiced phoneme /a/ and the associated glrt statistic of (6)
shown in the top and bottom panels of fig. 8(b), respectively.
even though the formants of the synthesized phoneme are
constant, the value of t(x) undergoes a stepwise decrease
from over the 1% cfar threshold when < 1 pitch period is
observed, to just above the 50% cfar threshold when < 1.5
periods are observed-and finally stabilizes to a level below
the 50% cfar threshold after more than two periods are seen.
in contrast, the glrt statistic computed for the associated
whispered phoneme, generated by filtering white noise by a
vocal tract parameterized by the same formant values and
shown in the bottom panel of fig. 8(b), remains time-invariant.
these results indicate that the periodic excitation during
voicing has negligible impact on the glrt statistic when
longer (i.e., > 2 pitch periods) speech segments are used,
and explain the robustness of the glrt statistic t(x) to the
presence of voicing in the experiments of this section. on the
other hand, the glrt is sensitive to the glottal flow when
shorter speech segments are employed, suggesting that it can
be also used effectively on sub-segmental time scales, as we
show in section vi.
vi. case study 2: sub-segmental speech analysis
we now demonstrate that our glrt framework can be
used not only to detect formant motion across multiple pitch
(a) spectrogram of the voiced waveform /a ai/ is overlaid with formant
tracks computed by wavesurfer [37] and black lines demarcating the time
instants at which formant motion was detected; the time domain signal is
shown for reference (bottom).
(b) formant-synthesized voiced phoneme /a/ (top) and associated glrt
statistic (bottom, green) are shown along with 1% (solid black) and 50%
cfar (dashed black) thresholds. window lengths of 5-35 ms at 1 ms
(16-sample) increments with p = 6, q = 3 legendre polynomials were
used to calculate t(x). values of t(x) for a whispered /a/ (bottom, blue)
generated using the same formant values are shown for comparison.
fig. 8.
detecting vocal tract dynamics in voiced speech (a) and the impact
of the quasi-periodic glottal flow on the glrt statistic t(x) (b).
periods, as discussed above in section v, but also to detect
vocal tract variations within individual pitch periods. since
the vocal tract configuration is relatively constant during the
glottal airflow closed phase, and undergoes change at its
boundaries [18], a hypothesis test for vocal tract variation
provides a natural way to identify both glottal opening and
closing instants within the same framework.
we show below that this framework is especially well
suited to detecting the gradual change associated with glottal
openings, and can also be used to successfully detect glottal
closures. glottal closure identification is a classical problem
(see, e.g., [19] for a recent review), with mature engineering
solutions typically based on features of the linear prediction
residual or the group delay function (see, e.g., [17], [19],
[22] and references therein). in contrast, the slow onset of the
open phase results in a difficult detection problem, and glottal
fig. 9.
glottal openings and closures demarcated over two pitch periods
of a typical vowel, shown with idealized glottal flow (top), speech (middle),
and egg derivative (bottom) waveforms as a function of time.
opening detection has received relatively little attention in the
literature [22], with preliminary results reported only in recent
conference proceedings [38], [39].
a. physiology of sub-segmental variations
figure 9 illustrates the idealized open and closed glottal
phases associated with a typical vowel, along with the corre-
sponding waveform and derivative electroglottograph (degg)
data indicating approximate opening and closing instants [20],
[40]. in each pitch period, the glottal closure instant (gci) is
defined as the moment at which the vocal folds close, and
marks the start of the closed phase-an interval during which
no airflow volume velocity is measured at the glottis (top
panel), and the acoustic output at the lips takes the form of
exponentially-damped oscillations (middle panel). nominally,
the glottal opening instant (goi) indicates the start of the open
phase: the vocal folds gradually begin to open until airflow
velocity reaches its maximum amplitude, after which they
begin to close, leading to the next gci.
time-invariance of the vocal tract suggests the use of
linear prediction to estimate formant values during the closed
phase [2], and then to use changes in these values across
sliding windows to determine goi and gci locations [18].
indeed, as the vocal folds begin to open at the goi, the
vocal tract gradually lengthens, resulting in a change in the
frequency and bandwidth of the first formant [41]-an effect
that can be explained by a source-filter model with a time-
varying vocal tract. furthermore, the assumption that short-
term statistics of the speech signal undergo maximal change
in the vicinity of a gci implies that such regions will exhibit
large linear-prediction errors.
b. detection of glottal opening instants
we first give a sequential algorithm to detect gois via the
glrt statistic t(x). to study the efficacy of the proposed
method, we assume that the timings of the glottal closures
are available, and use these to process each pitch period
independently. in addition to evaluating the absolute error rates
of our proposed scheme using recordings of sustained vowels,
we also compare it with the method of [17]-a standard
prediction-error-based approach that remains in wide use, and
effectively underlies more recent approaches such as [38].
1) sequential goi detection procedure: in contrast to the
"merging" procedure of algorithm 1, our basic approach here
is to scan a sequence of short-time segments xw, induced
by shifts of an n0-sample rectangular window initially left-
aligned with a glottal closure instant, until spectral change is
detected via the glrt statistic of (6).
at each iteration, the window slides one sample to the right,
and t(xw) is evaluated; this procedure continues until t(xw)
exceeds a specified cfar threshold γ, indicating that spectral
change was detected, and signifying the beginning of the open
phase. in this case, the goi location is declared to be at the
right edge of xw. on the other hand, a missed detection results
if a goi has not been identified by the time the right edge
of the sliding window coincides with the next glottal closure
instant. the exact procedure is summarized in algorithm 2.
algorithm 2 sequential glottal opening instant detector
1) initialization: input one pitch period of data x between
two consecutive glottal closure locations g1 and g2
• set wl = g1, wr = wl + n0, and set γ via (13)
• set xw = (x[wl] * * * x[wr])
2) while t(xw) < γ and wr < g2
• increment wl and wr (slide window to right)
• recompute xw and evaluate t(xw)
3) if wr < g2, then return wr as the estimated glottal
opening location, otherwise report a missed detection.
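in code, the scan reduces to a short loop; a sketch (ours, with glrt again denoting any routine computing (6)) is:

```python
def detect_goi(x, g1, g2, n0, gamma, glrt):
    """Slide an n0-sample window, sample by sample, starting left-aligned with
    the glottal closure at g1; declare a GOI at the right edge of the first
    window whose GLRT statistic exceeds gamma, else report a miss at g2."""
    wl, wr = g1, g1 + n0
    while wr < g2:
        if glrt(x[wl:wr + 1]) >= gamma:
            return wr                 # estimated glottal opening location
        wl, wr = wl + 1, wr + 1
    return None                       # missed detection
```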
since each instantiation of algorithm 2 is confined to a
single pitch period, the parameters n0, p, and q must be
chosen carefully. to ensure robust estimates of the tvar
coefficients, the window length n0 cannot be too small; on the
other hand, if it exceeds the length of the entire closed-phase
region, then the goi cannot be resolved. likewise, choosing a
small number of tvar coefficients results in smeared spectral
estimates, whereas using large values of p leads to high test
statistic variance and a subsequent increase in false alarms; this
same line of reasoning also leads us to keep q small. thus,
in all the experiments reported in section vi-b, we employ
n0 = 50-sample windows, p = 4 tvar coefficients and the
first 2 legendre polynomials as basis functions (q = 1). we
also evaluated the robustness of our results with respect to
these settings, and observed that using window lengths of 40–
60 samples, 3–6 tvar coefficients, and 2–4 basis functions
also leads to reasonable results in practice.
2) evaluation: we next evaluated the ability of algorithm 2
to identify the glottal opening instants in five sustained vowels
uttered by a male speaker (109 hz average f0), synchronously
recorded with an egg signal (center for laryngeal surgery
and voice rehabilitation, massachusetts general hospital),
and subsequently downsampled to 16 khz. the speech filing
system [42] was used to extract degg peaks and dips, which
fig. 10.
algorithm 2 applied to detect the goi in a pitch period of the
vowel /a/ (top left), shown together with its egg derivative (bottom left).
the sliding window is left-aligned with the gci (solid black line); estimated
ar coefficients (top right) and the glrt statistic t(x) of (6) (bottom right)
are then computed for each subsequent window position. the detected goi
(dashed black line) corresponds to the location of the first determined change
(at the 15% cfar level) in vocal tract parameters.
in turn provided a means of experimentally measured ground
truth for our evaluations.
a typical example of goi detection is illustrated in fig. 10,
which shows the results of applying algorithm 2 to an excised
segment of the vowel /a/. the detected goi in this example
was declared to be at the right edge of the first short-
time segment xw for which t(xw) exceeded the 15% cfar
threshold γ, and is marked by a dashed black line in all
four panels of fig. 10. as can be seen in the bottom-right
panel, the estimated goi coincides precisely with a dip in the
degg waveform. moreover, as the top-right panel shows, this
location corresponds to a significant change in the estimated
coefficient trajectories, likely due both to a change in the
frequency and bandwidth of the first formant (resulting from
nonlinear source-filter interaction [18], [41]), as well as an
increase in airflow volume velocity (from zero) at the start of
the open phase.
detection rates were then computed over 75 periods of
each vowel, and detected gois were compared to degg
dips in every pitch period that yielded a goi detection.
the resultant detection rates and root mean-square errors
(rmse, conditioned on successful detection) are reported in
table ii, along with a comparison to the prediction-error-based
approach of [17], which we now describe.
table ii
goi detection accuracy (rmse in ms; no. of missed detections).

                          /a/     /e/     /i/     /o/     /u/
  glrt rmse (ms)          0.69    1.03    1.00    1.15    0.69
  wmg [17] rmse (ms)      1.04    1.78    1.13    1.97    1.10
  glrt missed det.        0       5       0       0       0
  wmg [17] missed det.    8       18      4       6       4
3) comparison with approach of wong, markel, and gray
(wmg) [17]: the approach of [17] involves first computing a
normalized error measure η(xw) for each short-time segment
xw (induced by a sliding window as in algorithm 2), and then
identifying the goi instant with the right edge of xw when
a large increase in η(xw) is observed. the measure η(xw)
is obtained by fitting a time-invariant ar(p) model to xw
(using (10) with q = 0), calculating the norm of the resultant
prediction error, and normalizing by the energy of short-time
segment xw.
fig. 11. comparison of algorithm 2 and the approach of [17] for goi
detection. the egg derivative for 8 periods of the vowel /a/, and estimated
ar coefficients, are shown for all sliding window positions (top two panels)
along with the associated values of t(x) and η(x) (bottom two panels). true
and estimated goi locations are indicated by solid and dashed black lines,
respectively. note the variability in the dynamic range of η(x) from one pitch
period to the next, and the missed detection (2nd pitch period, bottom panel).
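the measure just described is simple to compute; a sketch (ours) following that recipe is:

```python
import numpy as np

def wmg_error_measure(xw, p):
    """Normalized error measure of [17]: fit a time-invariant AR(p) model to
    the short-time segment (covariance method, i.e. (10) with q = 0) and
    normalize the prediction-error energy by the segment energy."""
    H = np.column_stack([xw[p - i - 1:len(xw) - i - 1] for i in range(p)])
    y = xw[p:]
    a, *_ = np.linalg.lstsq(H, y, rcond=None)
    resid = y - H @ a
    return float(np.sum(resid ** 2) / np.sum(xw ** 2))
```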
figure 11 provides a comparison of this approach to that of
algorithm 2, over 8 periods of the vowel /a/. here algorithm 2
is implemented with a 15% cfar level, but the threshold for
η(x) must be set manually, since no theoretical guidelines
are available [17]. indeed, as illustrated in the bottom panel
of fig. 11, variability in the dynamic range of η(x) across
pitch periods implies that any fixed threshold will necessarily
introduce a tradeoff between detection rates and rmse. in
this example, lowering the threshold to intersect with η(x)
in the second pitch period-and thereby removing the missed
detection-results in a 25% increase in rmse.
the denominator of the glrt statistic t(xw) depends on
the same prediction error residual used to calculate η(xw);
however, as indicated by fig. 11, it remains much more stable
across pitch periods. thus, while the approach of [17] relies
on large absolute changes in ar residual energy to detect
glottal openings, that of algorithm 2 explicitly takes into
account the ratio of ar to tvar residual energies-resulting
in improved overall performance. indeed, though thresholds
were set individually for each vowel of table ii, and manually
adjusted to obtain the best rmse performance while keeping
the number of missed detections reasonably small, algorithm 2
with a 15% cfar threshold exhibits both superior detection
rates and rmse.
c. detection of glottal closure instants
although our main focus here is on goi detection, the
glrt statistic of (6) may also be employed to detect glottal
closures. indeed, under the assumption stated earlier that the
speech signal undergoes locally maximal change in the vicinity
of a gci, a simple gci detection algorithm immediately sug-
gests itself: compute (6) for every location of an appropriate
sliding analysis window, and declare the glottal closure to
occur at the midpoint of the window with the largest associated
value of t(x). in this formulation, t(x) is being treated
simply as a signal with features that may be helpful in finding
the gci locations; no test threshold need be set. a typical
result is shown in the third panel of fig. 12, obtained using
the same parameter settings (50-sample window, p = 4, q = 2)
as in the goi detection scheme of section vi-b.
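the heuristic just outlined amounts to an argmax over window positions; a sketch (ours) is:

```python
import numpy as np

def detect_gci(x, n0, glrt):
    """Evaluate T(x) of (6) for every position of an n0-sample sliding window
    and place the glottal closure at the midpoint of the window with the
    largest statistic; no test threshold is needed."""
    scores = np.array([glrt(x[s:s + n0]) for s in range(len(x) - n0 + 1)])
    return int(np.argmax(scores)) + n0 // 2
```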
we compared this method to two others based on linear
prediction and group delay, as described above. first, we
implemented the alternative likelihood-ratio epoch detection
(lred) approach of [32], which tests for a single change
in ar parameters. second we used the "front end" of the
popular dypsa algorithm for goi detection [22], comprising
the generation of gci candidates and their weighting by the
ideal phase-slope function deviation cost as implemented in
the voicebox online toolbox [43]. table iii summarizes the
gci estimation results under the same conditions as reported
in table ii. all three methods are comparable in terms of
accuracy, though the glrt approach proposed here can be
used-with the same parameter settings-for both gci and
goi detection.
table iii
gci detection accuracy (rmse in ms).

  vowel / gci rmse                  /a/     /e/     /i/     /o/     /u/
  glrt (n0 = 50, p = 4, q = 2)      0.47    1.05    0.73    0.97    1.03
  lred [32] (n0 = 72, p = 6)        1.02    0.69    1.00    0.65    1.12
  dypsa front end [22] (n0 = 50)    0.61    0.68    0.70    0.79    1.10
results from both our approach and the dypsa front end
can in turn be propagated across pitch periods (using, e.g.,
dynamic programming [22]) to inform a broader class of
group-delay methods [19], though we leave such a system-
level comparison as the subject of future work.
vii. discussion
the goal of this article has been to develop a statis-
tical framework based on time-varying autoregressions for
the detection of nonstationarity in speech waveforms. this
generalization of linear prediction was shown to yield efficient
fitting procedures, as well as a corresponding generalized
likelihood ratio test. our study of glrt detection performance
yielded several practical consequences for speech analysis.
incorporating these conclusions, we presented two algorithms
to identify changes in the vocal tract configuration in speech
data at different time scales. at the segmental level we demon-
strated the sensitivity of the glrt to vocal tract variations
corresponding to formant changes, and at the sub-segmental
scale, we used it to identify both glottal openings and closures.
fig. 12. using the glrt statistic of (6) to find gci locations in a segment
of the vowel /a/. the speech waveform (top), the egg derivative (middle) and
the glrt statistic (bottom) are overlaid with the true (solid, black line) and
estimated (dashed, black line) glottal closure locations in each pitch period.
methodological extensions include augmenting the tvar
model presented here to explicitly account for the quasi-
periodic nature of the glottal flow (or its time derivative),
and deriving a glrt statistic corresponding to (6) in the
case where only noisy waveform measurements are available.
important next steps in applying the hypothesis testing frame-
work to practical speech analysis include further development
of the glottal closure and opening detection schemes of the
previous section, which were here applied independently in
each pitch period. incorporating the dynamic programming
approach of [22] will likely serve to improve performance, as
will incorporating the glrt statistic as part of global frame-
to-frame cost function in such a framework.
acknowledgements
the authors wish to thank daryush mehta at the center for
laryngeal surgery and voice rehabilitation at massachusetts
general hospital for providing recordings of audio and egg
data, nicolas malyska for helpful discussions, and the anony-
mous reviewers for suggestions that have improved the paper.
references
[1] d. rudoy, t. f. quatieri, and p. j. wolfe, "time-varying autoregressive
tests for multiscale speech analysis," in proc. 10th ann. conf.
intl. speech commun. ass., 2009. [online]. available: http://sisl.seas.
harvard.edu
[2] t. f. quatieri, discrete-time speech signal processing: principles and
practice.
upper saddle river, nj: prentice-hall, 2002.
[3] a. v. brandt, "detecting and estimating parameter jumps using ladder
algorithms and likelihood ratio tests," proc. ieee intl. conf. acoust.
speech signal process., vol. 8, pp. 1017–1020, 1983.
revised manuscript
12
[4] r. andre-obrecht, "a new statistical approach for the automatic seg-
mentation of continuous speech signals," ieee trans. acoust. speech
signal process., vol. 36, pp. 29–40, 1988.
[5] m. g. hall, a. v. oppenheim, and a. s. willsky, "time-varying
parametric modeling of speech," signal process., vol. 5, pp. 267–285,
1983.
[6] y. grenier, "time-dependent arma modeling of nonstationary signals,"
ieee trans. acoust. speech signal process., vol. 31, pp. 899–911, 1983.
[7] k. s. nathan and h. f. silverman, "time-varying feature selection and
classification of unvoiced stop consonants," ieee trans. speech audio
process., vol. 2, pp. 395–405, 1994.
[8] k. schnell and a. lacroix, "time-varying linear prediction for speech
analysis and synthesis," in proc. ieee intl. conf. acoust. speech signal
process., 2008, pp. 3941–3944.
[9] s. m. kay, "a new nonstationarity detector," ieee trans. signal
process., vol. 56, pp. 1440–1451, 2008.
[10] s. kay, fundamentals of statistical signal processing: detection the-
ory.
upper saddle river, nj: prentice-hall, 1998.
[11] d. rudoy, p. basu, t. f. quatieri, b. dunn, and p. j. wolfe, "adaptive
short-time analysis-synthesis for speech enhancement," in proc. ieee
intl. conf. acoust. speech signal process., 2008, pp. 4905–4908.
[online]. available: http://sisl.seas.harvard.edu
[12] d. rudoy, p. basu, and p. j. wolfe, "superposition frames for adaptive
time-frequency representations and fast reconstruction," ieee trans.
signal process., vol. 58, pp. 2581–2596, 2010.
[13] j. vermaak, c. andrieu, a. doucet, and s. j. godsill, "particle methods
for bayesian modeling and enhancement of speech signals," ieee trans.
speech audio process., vol. 10, pp. 173–185, 2002.
[14] r. c. hendriks, r. heusdens, and j. jensen, "adaptive time segmentation
for improved speech enhancement," ieee trans. audio speech lang.
process., vol. 14, pp. 2064–2074, 2006.
[15] v. tyagi, h. bourlard, and c. wellekens, "on variable-scale piecewise
stationary spectral analysis of signals for asr," speech commun.,
vol. 48, pp. 1182–1191, 2006.
[16] g. s. morrison, "likelihood-ratio forensic voice comparison using
parametric representations of the formant trajectories of diphthongs,"
j. acoust. soc. am., vol. 125, pp. 2387–2397, 2009.
[17] d. y. wong, j. d. markel, and a. h. gray, "least squares glottal inverse
filtering from the acoustic speech waveform," ieee trans. acoust.
speech signal process., vol. 27, pp. 350–355, 1979.
[18] m. d. plumpe, t. f. quatieri, and d. a. reynolds, "modeling of the
glottal flow derivative waveform with application to speaker identifica-
tion," ieee trans. speech audio process., vol. 7, pp. 569–586, 1999.
[19] m. brookes, p. a. naylor, and j. gudnasson, "a quantitative assessment
of group delay methods of identifying glottal closures in voiced speech,"
ieee trans. audio speech lang. process., vol. 8, pp. 1017–1020, 2006.
[20] d. g. childers and j. n. larar, "electroglottography for laryngeal
function assessment and speech analysis," ieee trans. on biomed. eng.,
vol. 31, pp. 807–817, 1984.
[21] j. s. garofolo, l. lamel, w. fisher, j. fiscus, d. pallett, n. dahlgren,
and v. zue, timit acoustic-phonetic continuous speech corpus.
philadelphia, pa: linguistic data consortium, 1993.
[22] p. a. naylor, a. kounoudes, j. gudnasson, and m. brookes, "estimation
of glottal closure instants in voiced speech using the dypsa algorithm,"
ieee trans. audio speech lang. process., vol. 15, pp. 34–43, 2007.
[23] l. a. liporace, "linear estimation of non-stationary signals," j. acoust.
soc. am., vol. 58, pp. 1268–1295, 1975.
[24] m. k. tsatsanis and g. b. giannakis, "time-varying system identifica-
tion and model validation using wavelets," ieee trans. signal process.,
vol. 41, pp. 3512–3523, 1993.
[25] t. s. rao, "the fitting of non-stationary time series models with time
dependent parameters," j. roy. stat. soc. b, vol. 32, pp. 312–322, 1970.
[26] g. kitagawa and w. gersch, "a smoothness priors time-varying ar
coefficient modeling of nonstationary covariance time series," ieee
trans. automat. control, vol. 30, pp. 48–56, 1985.
[27] t. hsiao, "identification of time-varying autoregressive systems using
maximum a posteriori estimation," ieee trans. signal process., vol. 56,
pp. 3497–3509, 2008.
[28] s. kay, modern spectral estimation: theory and application.
upper
saddle river, nj: prentice-hall, 1988.
[29] m. kendall, a. stuart, j. k. ord, and s. arnold, kendall's advanced
theory of statistics.
hodder arnold, 1999, vol. 2a.
[30] j. w. brewer, "kronecker products and matrix calculus in system
theory," ieee trans. circuits syst., vol. 25, pp. 772–781, 1978.
[31] s. d. gupta and m. d. perlman, "power of the noncentral f-test: effect
of additional variates on hotelling's t2 test," j. am. statist. ass., vol. 69,
pp. 174–180, 1974.
[32] e. moulines and r. d. francesco, "detection of the glottal closure
by jumps in the statistical properties of the speech signal," speech
commun., vol. 9, pp. 401–418, 1990.
[33] j. p. bello, l. daudet, s. abdallah, c. duxbury, and m. davies, "a
tutorial on onset detection in music signals," ieee trans. speech audio
process., vol. 13, pp. 1035–1048, 2005.
[34] s. jarifia, d. pastora, and o. rosec, "a fusion approach for automatic
speech segmentation of large corpora with application to speech synthe-
sis," speech communication, vol. 50, pp. 67–80, 2008.
[35] r. e. quandt, "tests of the hypothesis that a linear regression system
obeys two separate regimes," j amer. stat. assoc., vol. 55, pp. 324–330,
1960.
[36] s. l. marple, digital spectral analysis. englewood cliffs, nj: prentice-
hall, 1987.
[37] k. sjölander and j. beskow. (2005) wavesurfer 1.8.5 for windows. kth royal institute of technology. [online]. available: http://www.speech.kth.se/wavesurfer/wavesurfer-185-win.zip
[38] t. drugman and t. dutoit, "glottal closure and opening instant detection
from speech signals," in proc. 10th ann. conf. intl. speech commun.
ass., 2009.
[39] m. p. thomas, j. gudnason, and p. a. naylor, "detection of glottal
closing and opening instants using an improved dypsa framework," in
proc. 17th eur. signal process. conf, glasgow, scotland, 2009.
[40] n. henrich, c. d'alessandro, b. doval, and m. castellengo, "on the
use of the derivative of electroglottographic signals for characterization
of nonpathological phonation," j. acoust. soc. am., vol. 115, pp. 1321–
1332, 2004.
[41] t. v. ananthapadmanabha and g. fant, "calculation of true glottal flow
and its components," speech commun., vol. 1, pp. 167–184, 1982.
[42] m. huckvale. (2000) speech filing system: tools for speech research. university college london. [online]. available: http://www.phon.ucl.ac.uk/resource/sfs
[43] m. brookes. (2006) voicebox: a speech processing toolbox for matlab. imperial college london. [online]. available: http://www.ee.imperial.ac.uk/hp/staff/dmb/voicebox/voicebox.html
|
0911.1698 | finite temperature properties of a supersolid: a rpa approach | we study in random-phase approximation the newly discovered supersolid phase
of ${}^4$he and present in detail its finite temperature properties. ${}^4$he
is described within a hard-core quantum lattice gas model, with nearest and
next-nearest neighbour interactions taken into account. we rigorously calculate
all pair correlation functions in a cumulant decoupling scheme. our results
support the importance of the vacancies in the supersolid phase. we show that
in a supersolid the net vacancy density remains constant as function of
temperature, contrary to the thermal activation theory. we also analyze in
detail the thermodynamic properties of a supersolid and calculate the jump in the
specific heat, which compares well to recent experiments.
| introduction
one of the biggest accomplishments of theoretical con-
densed matter physics is the ability to classify various
phases and phase transitions by their mathematical or-
der. these mathematical orders are usually expressed by
order parameters reflecting certain limiting behaviours of
two particle correlation functions. this concept makes it
possible to describe the properties of physically very dif-
ferent systems in a common language and establish a link
between them.
without this concept, the counter-intuitive idea of a su-
persolid, i.e. a solid that also exhibits a superflow, would
not have been conceivable. in this language supersolidity,
firstly proposed by andreev and lifshitz[1] in 1969 and
picked up by leggett and chester in 1970[3,2] is the clas-
sification of systems that simultaneously exhibit diagonal
and off-diagonal long range order. yet the idea of a super-
solid was reluctantly received by the scientific community,
because early experiments failed to detect any such effect
in helium-4. apart from john goodkind[4] who had pre-
viously seen suspicious signals in the ultrasound signals of
solid helium, physicists were surprised when in 2004 kim
and chan [5,6] announced the discovery of supersolid he-
lium. although equipped with a head start of more than
30 years the theoretical understanding of the supersolid
state lags vastly behind, not least because supersolidity
as we observe it today is nothing like what the pioneers
in early 1970's had anticipated.
it is now evident that defectons and impurities play a cru-
cial role in the formation of the supersolid state. however,
the data and the results of various experiments draw a
rather complex picture of the supersolid phase. the nor-
mal solid to supersolid transition is not of the usual first or
second order type transition but bears remarkable resem-
blance to the kosterlitz thouless transition well-known in
two dimensional systems. furthermore annealing experi-
ments show that 3he impurities play a significant role but
the data remain all but conclusive, as the measured su-
perfluid density varies by orders of magnitude among the
different groups. recently a change in the shear modulus
of solid helium was found and the measured signal almost
identically mimics the superfluid density measured by kim
and chan [5,6]. some popular theories give plausible ex-
planations of some aspects of the matter: the vortex liquid
theory suggested by phil anderson [9], for instance, is in good
agreement with properties of the phase transition, while theories
based on dislocation networks are capable of describing the
change in the shear modulus. however,
we feel that a satisfying and comprehensive theory is still
missing. part of the problem stems from the complexity of
the models. many models can be solved to sufficient accuracy
only with numerical monte carlo methods. the results,
doubtlessly useful, are seldom intuitive, and the lack
of analytical results does not meet our notion of the under-
standing of a phenomenon. other approaches, on the other
hand, are limited in their analytical significance and rather
represent a phenomenological ansatz.
in this work we attempt to fill a part of this gap in the the-
oretical understanding of the supersolid phase. we present
a theory of supersolidity in a quantum-lattice gas model
(qlg) beyond simple mean-field approaches. following
the approach of k.s. liu and m.e. fisher[12] we map the
qlg model to the anisotropic heisenberg model (ahm).
the method of green's functions proves to be very suc-
cessful in the description of ferromagnetic and antiferro-
magnetic phases and we use this method to investigate the
supersolid phase which corresponds to a canted antiferro-
magnetic phase. applying the random-phase approximation
(rpa), we break down the higher-order green's functions
using cumulant decoupling to obtain a closed set
of solvable equations. this method gives a fully quantum
mechanical and analytical solution of a supersolid phase.
we will see that quantum fluctuations have a significant
effect on the net vacancy density of the supersolid and we
will also see that the superfluid state is unstable against
zero-energy quasi-particle excitations. we also derive im-
portant thermodynamic properties of this model and de-
rive formulas for interesting properties such as the power
law exponents and the jump of the specific heat across the
normal solid to supersolid line.
the paper is organized as follows: in section 2 we intro-
duce the generic hamiltonian of a bosonic many body sys-
tem and discretize it to a quantum lattice gas model. this
model is equivalent to the anisotropic heisenberg model
in an external field and we will identify the corresponding
phases in section 3. in section 4 we re-derive the classical mean-field solution and briefly discuss its significance, before recapitulating in section 5 the green's functions
for the anisotropic heisenberg model in the random-phase
approximation at zero temperature. the ground state of
the system is obtained by solving the corresponding self-
consistency equation. in the following two sections we de-
rive basic thermodynamic properties as well as ordinary
differential equations to calculate the first and second or-
der phase transition lines. in section 8 we analyse the
quasi-particle energy spectrum and in the last section we
calculate phase diagrams for various parameter sets and
analyse the properties of the supersolid phase and the cor-
responding transitions.
2 generic hamiltonian and anisotropic
heisenberg model
if we neglect possible 3he impurities supersolid helium-4
is a purely bosonic system whose dynamics and structure
are governed by a generic bosonic hamiltonian:
h = ∫ d³x ψ†(x) (−(1/2m)∇² + μ) ψ(x) + (1/2) ∫ d³x d³x′ ψ†(x) ψ†(x′) v(x − x′) ψ(x) ψ(x′)    (1)
here ψ†(x) and ψ(x) are the particle creation and destruction operators and obey the usual bosonic commutation relations. hamiltonians such as eq. (1) are not diagonalisable, even for simple potentials v(x), due to the vast size of the many body hilbert space. the accomplishment of any successful theory is to find an approximation that retains the crucial mechanism while reducing the mathematical complexity to a tractable level. here
we follow the method of matsubara and matsuda [14] who
successfully introduced the quantum lattice gas (qlg)
model to describe superfluid helium. we believe that the
quantum lattice gas model is particularly useful for super-
solids as the spatial discretization of this model serves as a
natural frame for the crystal lattice of (super)-solid helium
and a bipartite lattice elegantly simplifies the problem of
breaking translational invariance symmetry for states that
exhibit diagonal long range order.
according to matsubara and tsuneto [13] the generic hamil-
tonian eq. (1) in the discrete lattice model reads:
h = μ ∑_i n_i + ∑_ij u_ij (a†_i − a†_j)(a_i − a_j) + ∑_ij v_ij n_i n_j    (2)
here u_ij is nonzero only for nearest neighbor and next nearest neighbor hopping. the values of u_nn and u_nnn are chosen such that the kinetic energy is isotropic up to 4th order. as atoms do not penetrate each other, there can be only one atom at a time on a lattice site, and consequently a† and a are the creation and annihilation operators of hard core bosons, which commute on different lattice sites but obey anti-commutation relations on identical sites. equation (2) is the hubbard model in 3 dimensions for hard core bosons. being neither fully bosonic nor fermionic, hard core bosons lack a wick's theorem, which inhibits the application of perturbative field theory; they are, however, algebraically identical to spin systems. a simple transformation, in which the bosonic operators are substituted by spin-1/2 operators generating the su(2) lie algebra, maps the present qlg model onto the anisotropic heisenberg model:
h = hz ∑_i s^z_i + ∑_ij j∥_ij s^z_i s^z_j + ∑_ij j⊤_ij (s^x_i s^x_j + s^y_i s^y_j)    (3)
here the correspondence between the qlg parameters and the spin coupling constants is given by:
j∥_ij = v_ij ,
j⊤_ij = −2 u_ij ,
hz = −μ + ∑_j j⊤_ij − ∑_j j∥_ij .    (4)
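as a simple illustration of this mapping, the short python sketch below converts a set of per-bond qlg couplings into the spin couplings and effective field of eqs. (3)-(4). it is not part of the original derivation; the function name and the numerical inputs are placeholders, and the way the neighbour sums are counted (q1 nearest and q2 next nearest neighbours, as quoted below) is an assumption of this sketch.

def map_qlg_to_heisenberg(mu, u_nn, v_nn, u_nnn, v_nnn, q1=6, q2=8):
    """translate per-bond qlg parameters into the couplings of eq. (3) via eq. (4)."""
    j_perp_nn, j_perp_nnn = -2.0 * u_nn, -2.0 * u_nnn   # j^T_ij = -2 u_ij
    j_par_nn, j_par_nnn = v_nn, v_nnn                   # j^par_ij = v_ij
    # hz collects the chemical potential and the sums over all neighbours of a site
    hz = -mu + (q1 * j_perp_nn + q2 * j_perp_nnn) - (q1 * j_par_nn + q2 * j_par_nnn)
    return j_par_nn, j_par_nnn, j_perp_nn, j_perp_nnn, hz

# illustrative numbers only, not parameters used in this work
print(map_qlg_to_heisenberg(mu=1.0, u_nn=0.2, v_nn=0.5, u_nnn=0.1, v_nnn=0.3))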
3 phases
the self-consistency equations in the random-phase ap-
proximation as we will derive in a later chapter are very
lengthy and therefore it is our primary goal to keep the
present model as simple as possible while still being able to
describe the crucial physics. for this reason we will define
the anisotropic heisenberg model on a bipartite bcc lat-
tice which consists of two interpenetrating sc sub-lattices
as figure 1 shows. in this lattice geometry the perfectly
solid phase is composed of a fully occupied (on-site) sub-
lattice a while sublattice b refers to the empty interstitial
and is consequently vacant. as there is no spatial density
a.j. stoffel, m. gul ́
acsi: finite temperature properties of a supersolid: a rpa approach
3
fig. 1. the bcc lattice consists of two interpenetrating sc sub-lattices. in the perfect solid phase one sub-lattice (i.e. sub-lattice a) serves as on-site centers and is occupied while sub-lattice b represents the empty interstitial and is vacant (for simplicity, we only present the two dimensional case).
table 1. possible magnetic phases and the corresponding phases of the hubbard model. all phases are defined by their long range order. the columns, from left to right, are the spin configurations, magnetic phases, odlro, dlro and corresponding 4he phases, respectively.

spin configuration | magnetic phase | odlro | dlro | 4he phase
↑↑                 | fe             | no    | no   | normal liquid
րր                 | cfe            | yes   | no   | superfluid
րւ                 | caf            | yes   | yes  | supersolid
↑↓                 | af             | no    | yes  | normal solid
variation in the liquid phases both sublattices are equally
occupied, and the mean occupation number simply cor-
responds to the particle density. we are aware that the
above choice of sc sublattices does not reflect the true 4he
crystal structure, which is hexagonal close-packed (hcp).
nevertheless we believe that crucial physical properties
such as phase transition and critical constants do not de-
pend on the specific geometry as long as no other effects
such as frustration are evoked.
table 1 charts the various magnetic phases and identifies the corresponding phases of the 4he system. according to their spin configurations we identify the four magnetic phases to be of ferromagnetic (fe), canted ferromagnetic (cfe), canted anti-ferromagnetic (caf) and antiferromagnetic (af) order. the off-diagonal long range order parameter m1 and the diagonal long range order parameter m2 are:
m1 = ⟨s^x_a⟩ + ⟨s^x_b⟩ ,
m2 = ⟨s^z_a⟩ − ⟨s^z_b⟩ ,    (5)
which readily identify the corresponding phases of the he-
lium system.
fig. 2. the phase diagram for j⊤_1 = 1.498, j⊤_2 = 0.562, j∥_1 = −3.899 and j∥_2 = −1.782 as calculated by mf (axes: hz versus t [k]; the caf, cfe, fe and af phases are indicated).
4 mean-field solution
in our previous work [10] we have already re-derived the classical mean-field solution of the anisotropic heisenberg model at zero temperature. as this approximation provides easy and intuitive access to the physics, we extend it here to finite temperatures, as was done by k.s. liu and m.e. fisher [12], and briefly discuss its solution and
phase diagram. the mean-field hamiltonian is obtained by substituting the spin-1/2 operators with their expectation values:
h_mf = −hz (⟨s^z_a⟩ + ⟨s^z_b⟩) − 2 j∥_1 ⟨s^z_a⟩⟨s^z_b⟩ − j∥_2 (⟨s^z_a⟩² + ⟨s^z_b⟩²) − 2 j⊤_1 ⟨s^x_a⟩⟨s^x_b⟩ − j⊤_2 (⟨s^x_a⟩² + ⟨s^x_b⟩²)    (6)
here j∥_1 = −q1 j∥_{i∈a,j∈b}, j∥_2 = −q2 j∥_{i∈a,j∈a}, j⊤_1 = −q1 j⊤_{i∈a,j∈b} and j⊤_2 = −q2 j⊤_{i∈a,j∈a}, where q1 = 6 and q2 = 8 are the numbers of nearest and next nearest neighbors. the mean value of s^y drops out, as the spontaneously broken symmetry s^x ↔ s^y (odlro) allows for ⟨s^y⟩ = 0.
the standard method of deriving the corresponding self-consistency equations is to minimize the helmholtz free energy f = ⟨h⟩ − t s. the entropy s is given by the pseudo-spin entropy of the system:
s = −(1/2) [ (1/2 + s_a) ln(1/2 + s_a) + (1/2 − s_a) ln(1/2 − s_a) + (1/2 + s_b) ln(1/2 + s_b) + (1/2 − s_b) ln(1/2 − s_b) ]    (7)
where s_a = √(⟨s^z_a⟩² + ⟨s^x_a⟩²) and s_b = √(⟨s^z_b⟩² + ⟨s^x_b⟩²).
in the canted anti-ferromagnetic and the canted ferromagnetic states there are four self-consistency equations; in the ferromagnetic and anti-ferromagnetic phases, where ⟨s^x⟩ = ⟨s^y⟩ = 0, they are reduced by two. these equations are readily obtained by differentiating the free energy with
respect to ⟨sz⟩and ⟨sx⟩respectively. the resulting equa-
tions can be rearranged to yield:
√(⟨s^z_a⟩² + ⟨s^x_a⟩²) = tanh(β ω_a) / 2 ,
√(⟨s^z_b⟩² + ⟨s^x_b⟩²) = tanh(β ω_b) / 2 ,    (8)

hz + 2⟨s^z_a⟩(j∥_2 − j⊤_2) + 2⟨s^z_b⟩ j∥_1 = 2 j⊤_1 (⟨s^x_b⟩/⟨s^x_a⟩) ⟨s^z_a⟩ ,
hz + 2⟨s^z_b⟩(j∥_2 − j⊤_2) + 2⟨s^z_a⟩ j∥_1 = 2 j⊤_1 (⟨s^x_a⟩/⟨s^x_b⟩) ⟨s^z_b⟩ ,    (9)

where

ω_a = [ (2 j⊤_1 ⟨s^x_b⟩ + 2 j⊤_2 ⟨s^x_a⟩)² + (2 j∥_1 ⟨s^z_b⟩ + 2 j∥_2 ⟨s^z_a⟩ + hz)² ]^{1/2} ,
ω_b = [ (2 j⊤_1 ⟨s^x_a⟩ + 2 j⊤_2 ⟨s^x_b⟩)² + (2 j∥_1 ⟨s^z_a⟩ + 2 j∥_2 ⟨s^z_b⟩ + hz)² ]^{1/2} .    (10)
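a minimal numerical sketch of how eqs. (8)-(10) can be solved is given below; it simply iterates the mean fields towards a fixed point. it assumes that each sublattice spin points along its local field, which reproduces eq. (9) as written, and the couplings, starting guess and mixing parameter are illustrative choices of ours rather than values used in this work.

import numpy as np

def solve_mf(jt1, jt2, jp1, jp2, hz, temp, n_iter=20000, mix=0.1):
    """fixed-point iteration of eqs. (8)-(10); returns (sxa, sza, sxb, szb)."""
    beta = 1.0 / temp
    sxa, sza, sxb, szb = 0.3, 0.4, 0.3, -0.4          # canted starting guess
    for _ in range(n_iter):
        # local fields acting on the two sublattices, cf. eq. (10)
        bxa = 2*jt1*sxb + 2*jt2*sxa
        bza = 2*jp1*szb + 2*jp2*sza + hz
        bxb = 2*jt1*sxa + 2*jt2*sxb
        bzb = 2*jp1*sza + 2*jp2*szb + hz
        wa, wb = np.hypot(bxa, bza), np.hypot(bxb, bzb)
        # spin magnitude from eq. (8), direction along the local field
        la, lb = 0.5*np.tanh(beta*wa), 0.5*np.tanh(beta*wb)
        new = (la*bxa/wa, la*bza/wa, lb*bxb/wb, lb*bzb/wb)
        sxa, sza, sxb, szb = [(1 - mix)*old + mix*upd
                              for old, upd in zip((sxa, sza, sxb, szb), new)]
    return sxa, sza, sxb, szb

# illustrative call with placeholder couplings; no convergence check is made
print(solve_mf(jt1=1.5, jt2=0.56, jp1=-3.9, jp2=-1.78, hz=3.0, temp=0.5))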
the two equations of eq. (9) are dropped in the ferromagnetic and the anti-ferromagnetic phases, as the transverse fields ⟨s^x_a⟩ and ⟨s^x_b⟩ become zero. at zero temperature
t = 0 the free energy and ⟨h⟩coincide. this allows us to
deduce the phases at absolute zero in the limiting cases
(limits of hz) from equation (6). in the limit of hz →∞
the hamiltonian reduces to an effective one particle model:
h = hz (⟨s^z_a⟩ + ⟨s^z_b⟩)    (11)
consequently the system will be in the energetically favor-
able ferromagnetic phase. in the opposite limit, hz → 0 and with antiferromagnetic coupling j∥_1 < 0, the nearest neighbor interaction term
h = j∥_1 ⟨s^z_a⟩⟨s^z_b⟩    (12)
is the only term that significantly lowers the energy. therefore the ground state is the anti-ferromagnetic state. at t = 0 and for a suitable choice of parameters, the canted ferromagnetic and the canted anti-ferromagnetic phases are realized in between these limits, as seen in figure 2. however, with increasing temperature the regions of the canted ferromagnetic and canted anti-ferromagnetic phases shrink, and at sufficiently high temperatures only the anti-ferromagnetic and ferromagnetic phases survive. as we can see from the phase diagram in figure 2, most transitions are of second order. only in parameter regimes where the caf phase does not appear is the resulting cfe-af transition of first order. the boundary lines are defined by ordi-
nary differential equations and generally need to be calcu-
lated numerically. nonetheless there exist analytical ex-
pressions for the locations of the transitions at absolute
zero as derived by matsuda and tsuneto [13]. the ferromagnetic to canted ferromagnetic transition point is determined by equations (9) if we set ⟨s^z_a⟩ = ⟨s^z_b⟩ = 1/2 and ⟨s^x_a⟩ = ⟨s^x_b⟩ = 0:
hz_{fe−cfe} = j⊤_1 + j⊤_2 − j∥_1 − j∥_2 .    (13)
equally, the canted anti-ferromagnetic to anti-ferromagnetic transition is defined by ⟨s^z_a⟩ = −⟨s^z_b⟩ = 1/2 and ⟨s^x_a⟩ = ⟨s^x_b⟩ = 0:
hz_{caf−af} = √( (−j∥_1 + j∥_2 − j⊤_2)² − (j⊤_1)² ) .    (14)
the canted ferromagnetic and the canted anti-ferromagnetic phases meet where the order parameter of the dlro, m1 = ⟨s^z_a⟩ − ⟨s^z_b⟩, approaches zero. we replace ⟨s^z_a⟩ and ⟨s^z_b⟩ in equation (9) with m1 and m2 = ⟨s^z_a⟩ + ⟨s^z_b⟩ and retain only terms linear in m1. subtracting and adding the two equations respectively yields:
hz + 2 m2 (j∥_2 − j⊤_2 + j∥_1) = 2 j⊤_1 m2 ,
2 m1 (j∥_2 − j⊤_2 − j∥_1) = −2 j⊤_1 m1 (4 m2² + 1)/(4 m2² − 1) .    (15)
we used that ⟨s^z_a⟩² + ⟨s^x_a⟩² = 1/4 at t = 0. the solution of these two equations determines the corresponding transition point, which is given by:
hz_{cfe−caf} = [ (j∥_1 + j∥_2 − j⊤_1 − j⊤_2) / (−j∥_1 + j∥_2 + j⊤_1 − j⊤_2) ] × √( (−j∥_1 + j∥_2 − j⊤_2)² − (j⊤_1)² ) .    (16)
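the zero temperature transition fields of eqs. (13), (14) and (16) are easy to evaluate numerically; the sketch below (our own helper, not part of the original text) does so for the parameter set used later in section 9.2, for which the magnitude of eqs. (14) and (16) reproduces the value hz ≈ 0.866 quoted there.

import math

def critical_fields(jt1, jt2, jp1, jp2):
    """t = 0 transition fields from eqs. (13), (14) and (16); may be absent."""
    hz_fe_cfe = jt1 + jt2 - jp1 - jp2                        # eq. (13)
    disc = (-jp1 + jp2 - jt2)**2 - jt1**2                    # radicand of eq. (14)
    hz_caf_af = math.sqrt(disc) if disc >= 0 else None       # eq. (14)
    hz_cfe_caf = None
    if hz_caf_af is not None:
        hz_cfe_caf = ((jp1 + jp2 - jt1 - jt2) /
                      (-jp1 + jp2 + jt1 - jt2)) * hz_caf_af  # eq. (16)
    return hz_fe_cfe, hz_caf_af, hz_cfe_caf

# the set of section 9.2 (jt1 = jt2 = 0.5, jp1 = -1.0, jp2 = 0.5) gives
# |hz| ~ 0.866 for eqs. (14) and (16), matching the quoted value 0.86603
print(critical_fields(0.5, 0.5, -1.0, 0.5))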
5 green's functions
although the classical mean-field theory is quite insightful and gives an accurate description of the variously ordered phases, it fails to take quantum fluctuations and quasi-particle excitations into account. hence, in order to overcome these shortcomings and to derive a fully quantum mechanical approximation, we solve the anisotropic heisenberg model in the random-phase approximation, which is based on the green's function technique. at finite temperature the retarded and advanced tyablikov [15,16] commutator green's functions defined in real time are:
g^{μν}_{ij,ret}(t) = −i θ(t) ⟨[s^μ_i(t), s^ν_j]⟩ ,
g^{μν}_{ij,adv}(t) = i θ(−t) ⟨[s^μ_i(t), s^ν_j]⟩ .    (17)
the average ⟨·⟩ involves the usual quantum mechanical as well as thermal averages. the most successful technique for solving many-body green's functions is the method of the equation of motion, which reads:
i ∂_t g^{xy}_{ij,ret}(t) = δ(t) ⟨[s^x_i, s^y_j]⟩ − i θ(t) ⟨[[s^x_i, h], s^y_j]⟩ ,
i ∂_t g^{xy}_{ij,adv}(t) = δ(t) ⟨[s^x_i, s^y_j]⟩ + i θ(−t) ⟨[[s^x_i, h], s^y_j]⟩ .    (18)
the commutator [s^x_i, h] can be eliminated by using the heisenberg equation of motion, giving rise to higher, third order green's functions on the rhs. in order to obtain a closed set of equations we apply the cumulant decoupling procedure, which splits up the third order green's functions into products of single operator expectation values
and two spin green's functions. the cumulant decoupling
[17] is based on the assumption that the last term of the
following equality is negligible:
⟨â b̂ ĉ⟩ = ⟨â⟩⟨b̂ ĉ⟩ + ⟨b̂⟩⟨â ĉ⟩ + ⟨ĉ⟩⟨â b̂⟩ − 2⟨â⟩⟨b̂⟩⟨ĉ⟩ + ⟨(â − ⟨â⟩)(b̂ − ⟨b̂⟩)(ĉ − ⟨ĉ⟩)⟩    (19)
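equation (19) is an exact moment identity when the operators commute, which can be checked numerically with ordinary random variables; the snippet below (our own illustration) does exactly that. it only illustrates the algebraic structure of the decoupling, since for non-commuting spin operators dropping the last (third-cumulant) term is an approximation.

import numpy as np

rng = np.random.default_rng(0)
# three correlated scalar random variables stand in for the operators a, b, c
x = rng.normal(size=(3, 200000))
a, b, c = x[0], x[0] + 0.5*x[1], x[1] - 0.3*x[2]

lhs = np.mean(a*b*c)
rhs = (np.mean(a)*np.mean(b*c) + np.mean(b)*np.mean(a*c) + np.mean(c)*np.mean(a*b)
       - 2*np.mean(a)*np.mean(b)*np.mean(c)
       + np.mean((a - a.mean())*(b - b.mean())*(c - c.mean())))
print(lhs - rhs)   # zero up to floating point noise: eq. (19) is an identity
# the cumulant decoupling amounts to neglecting the last (third-cumulant) term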
the set of differential equations does not depend explicitly on the temperature, i.e. the temperature dependence enters solely through the thermal averaging of the single operator expectation values. therefore the solution is formally identical to the zero temperature solution, and the detailed derivation of the green's functions in their full form can be found in ref. [11].
the averages of the spin operator, appearing in the cu-
mulant decoupling scheme, have to be determined self-
consistently. two self-consistent equations can be derived
from correlation functions corresponding to the green's
functions. the self-consistency equations of the canted
ferromagnetic (superfluid) and canted antiferromagnetic
(supersolid) phases are structurally different from the fer-
romagnetic (normal solid) and antiferromagnetic (normal
fluid) phases, as the off-diagonal long range order increases
the number of degrees of freedom by two and hence two
additional conditions, resulting from the analytical prop-
erties of the commutator green's functions, apply. thus,
the self-consistency equations of the canted phases, can
be written as a function of the temperature, the external
magnetic field and the spins in x-direction:
f^c_a(⟨s^x_a⟩, ⟨s^x_b⟩, hz, t) = 0 ,
f^c_b(⟨s^x_a⟩, ⟨s^x_b⟩, hz, t) = 0 .    (20)
similarly, the self-consistency equations for the ferromagnetic and anti-ferromagnetic phases read:
f^{nc}_a(⟨s^z_a⟩, ⟨s^z_b⟩, hz, t) = 0 ,
f^{nc}_b(⟨s^z_a⟩, ⟨s^z_b⟩, hz, t) = 0 .    (21)
these equations involve a three dimensional integral over the first brillouin zone. as explained in the zero temperature formalism of ref. [10], this integral can be reduced to a two dimensional integral by introducing a generalized density of states (dos), which gives the model a wider applicability.
6 thermodynamic properties
the relation between the qlg and the anisotropic heisenberg model is such that the chemical potential μ corresponds to the external field hz, i.e. the grand canonical partition function of the qlg corresponds to the canonical partition function of the anisotropic heisenberg model, where the number of spins is fixed. consequently, the thermodynamic potentials of the two models are related as:
θ_qlg − μ n = θ_heisenberg    (22)
here θ refers to an arbitrary thermodynamic potential.
in the same way as the ground state minimizes the in-
ternal energy u = ⟨h⟩at absolute zero, the free energy
f = u −t s is minimized at finite temperatures. we wish
to stress that the internal energy in the present approxi-
mation cannot be derived accurately from the expectation
value of the hamiltonian in the following way:
u = hz ∑_i ⟨s^z_i⟩ + ∑_ij j∥_ij ⟨s^z_i s^z_j⟩ + ∑_ij j⊤_ij (⟨s^x_i s^x_j⟩ + ⟨s^y_i s^y_j⟩)    (23)
the cumulant decoupling, though a good approximation
to the anisotropic heisenberg model, is also an exact solu-
tion of an unknown effective hamiltonian heff. therefore
thermodynamically consistent results are only obtained if
the effective hamiltonian is substituted in the equation
above, i.e. u = ⟨heff⟩. here, as we do not know the ex-
plicit form of this effective model, we have to integrate the
free energy from thermodynamic relations:
df = [(⟨s^z_a⟩ + ⟨s^z_b⟩)/2] dhz + s dt    (24)
this equation allows us to select the ground state in re-
gions where two or more phases exist self-consistently ac-
cording to the random-phase approximation. the entropy
of the spin system is given by
s = −∫ d³k { [1/(1 + exp(−β ω(k)))] log[1/(1 + exp(−β ω(k)))] + [1/(1 + exp(β ω(k)))] log[1/(1 + exp(β ω(k)))] }    (25)
this formula is of purely combinatorial origin and reflects
the fact that the hard-core boson system is equivalent to a
fermionic system given by the jordan-wigner transforma-
tion. the ω(k) terms refer to the energies of the spin-wave
excitations. in the solid phases both branches have to be
taken into account. the entropy of the anisotropic heisen-
berg model given for a fixed number of spins corresponds
to the entropy of the qlg model at a constant volume.
therefore, in order to obtain the usual configurational entropy of the qlg, we have to divide by the number of particles per unit cell: s_conf = 2s/(n_a + n_b). here n_a = 1/2 − ⟨s^z_a⟩ and n_b = 1/2 − ⟨s^z_b⟩ are the particle occupation numbers on lattice sites a and b. as the nature of the normal solid to supersolid phase transition is not yet satisfactorily understood, recent experiments [19] have focused on the behavior of the specific heat across the transition line in the hope of shedding light on the matter. the specific heat at constant volume and at constant pressure, respectively, are given by:
c_v = t (∂s_conf/∂t)_{v,n} ,    c_p = t (∂s_conf/∂t)_{p,n} .    (26)
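eqs. (25) and (26) can be evaluated by a straightforward quadrature over the brillouin zone once the quasi-particle spectrum is known. the sketch below is schematic: the cubic grid, the zone, the normalisation and the toy dispersion are our own illustrative choices and not the rpa spectrum of this work, and the entropy is written with the overall sign that makes it non-negative.

import numpy as np

def spin_entropy(omega, beta, n=40):
    """crude brillouin-zone average of the entropy integrand of eq. (25).

    omega(kx, ky, kz) must return the quasi-particle energy; the grid and the
    cubic zone [-pi, pi]^3 are illustrative choices only.
    """
    k = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    w = omega(kx, ky, kz)
    f = 1.0 / (1.0 + np.exp(-beta * w))      # fermi-like occupation (hard-core bosons)
    s_density = -(f*np.log(f) + (1.0 - f)*np.log(1.0 - f))
    return s_density.mean()

def specific_heat(omega, temp, dt=1e-3):
    """c = t ds/dt via a symmetric finite difference, cf. eq. (26)."""
    s_plus = spin_entropy(omega, 1.0/(temp + dt))
    s_minus = spin_entropy(omega, 1.0/(temp - dt))
    return temp * (s_plus - s_minus) / (2.0*dt)

# placeholder linear (phonon-like) dispersion, not the spectrum of eq. (35)
omega_toy = lambda kx, ky, kz: 2.0*np.sqrt(kx**2 + ky**2 + kz**2) + 1e-9
print(spin_entropy(omega_toy, beta=1.0), specific_heat(omega_toy, temp=1.0))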
although the external magnetic field hz in the spin model
is an observable the corresponding quantity in the qlg
model, namely the chemical potential, is not. therefore we are also interested in obtaining a formula for the pressure associated with a given chemical potential. the relationship is most easily derived from the following maxwell relation:
(∂p/∂μ)_{t,v} = (∂n/∂v)_{t,μ} = (#lattice sites / v) (1 − ǫ) ,    (27)
where ǫ := ⟨s^z_a⟩ + ⟨s^z_b⟩. note that, in using this equation to obtain the pressure at a given temperature, we need a reference point, i.e. a chemical potential at which the corresponding pressure is known. this point is given by μ → ∞, which corresponds to n → 0 and, hence, p → 0. consequently, in order to obtain the pressure for a specific chemical potential μ′ we have to integrate over the interval [∞, μ′].
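in practice this integration can be done with a simple quadrature once ǫ(μ) is known on a grid; the sketch below accumulates eq. (27) from the large-μ end, returning the pressure relative to that reference point. the grid, the toy ǫ profile and the site density are placeholders of ours, not results of the model.

import numpy as np

def pressure_from_mu(mu_grid, eps_of_mu, n_sites_per_volume=1.0):
    """pressure relative to the large-mu reference, obtained by integrating eq. (27).

    mu_grid must be increasing; eps_of_mu holds eps = <s^z_a> + <s^z_b> on that grid.
    """
    dp_dmu = n_sites_per_volume * (1.0 - np.asarray(eps_of_mu))
    p = np.zeros_like(dp_dmu)
    # trapezoidal accumulation downward from the last grid point (the reference)
    for i in range(len(mu_grid) - 2, -1, -1):
        dmu = mu_grid[i + 1] - mu_grid[i]
        p[i] = p[i + 1] - 0.5 * (dp_dmu[i] + dp_dmu[i + 1]) * dmu
    return p

# toy profile in which eps -> 1 (empty lattice) at large mu, so dp/dmu -> 0 there
mu = np.linspace(0.0, 10.0, 101)
eps = np.tanh(mu)           # placeholder, not an rpa result
print(pressure_from_mu(mu, eps)[:3])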
7 boundary lines
the state of the system at any point in t and hz is given
by the self-consistency equations and the free energy as
can be derived from eq. (24). nevertheless the resulting
computations come at high computational cost and there-
fore it seems most feasible to derive odes which deter-
mine the first and second order transition lines. first we
will derive the ordinary differential equations which define
the more abundant second order transitions. the normal fluid (fe) and the normal solid (af) phases are determined by equations (21), while the supersolid (caf) and superfluid (cfe) are defined by equations (20) and the condition of equation (9). consequently, on the ss-ns and sf-nf transition lines, where the normal (fe and af) and the super (cfe and caf) phases coexist, the following equations hold:
f^n_a(⟨s^z_a⟩, ⟨s^z_b⟩, hz, t) = 0 ,
f^n_b(⟨s^z_a⟩, ⟨s^z_b⟩, hz, t) = 0 ,
hz + 2⟨s^z_a⟩(j∥_2 − j⊤_2) + 2⟨s^z_b⟩ j∥_1 = 2 j⊤_1 (⟨s^x_b⟩/⟨s^x_a⟩) ⟨s^z_a⟩ ,
hz + 2⟨s^z_b⟩(j∥_2 − j⊤_2) + 2⟨s^z_a⟩ j∥_1 = 2 j⊤_1 (⟨s^x_a⟩/⟨s^x_b⟩) ⟨s^z_b⟩ .    (28)
on the ss-ns boundary line the quotient ⟨s^x_a⟩/⟨s^x_b⟩ is not known a priori and therefore we eliminate it from the equations above, yielding:
f^n_a(⟨s^z_a⟩, ⟨s^z_b⟩, hz, t) = 0 ,
f^n_b(⟨s^z_a⟩, ⟨s^z_b⟩, hz, t) = 0 ,
f(⟨s^z_a⟩, ⟨s^z_b⟩, hz) := [hz + 2⟨s^z_a⟩(j∥_2 − j⊤_2) + 2⟨s^z_b⟩ j∥_1] × [hz + 2⟨s^z_b⟩(j∥_2 − j⊤_2) + 2⟨s^z_a⟩ j∥_1] − (2 j⊤_1 ⟨s^z_b⟩)² = 0 .    (29)
we introduce a variable s which parametrizes the bound-
ary curve. if we for instance choose ds=dt we get the
following set of ordinary differential equations, defining
the nf-sf and the ns-ss transition lines:
| ∂f^n_a/∂⟨s^z_a⟩  ∂f^n_a/∂⟨s^z_b⟩  ∂f^n_a/∂hz  ∂f^n_a/∂t |   | ∂⟨s^z_a⟩/∂s |   | 0 |
| ∂f^n_b/∂⟨s^z_a⟩  ∂f^n_b/∂⟨s^z_b⟩  ∂f^n_b/∂hz  ∂f^n_b/∂t | * | ∂⟨s^z_b⟩/∂s | = | 0 |    (30)
| ∂f/∂⟨s^z_a⟩      ∂f/∂⟨s^z_b⟩      ∂f/∂hz      0         |   | ∂hz/∂s      |   | 0 |
| 0                0                0           1         |   | ∂t/∂s       |   | 1 |
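the system of eq. (30) can be integrated with any standard ode solver once the self-consistency functions are available; below is a schematic implementation in which the trivial row ∂t/∂s = 1 has been eliminated, so the remaining unknowns are propagated in t directly. the functions f^n_a, f^n_b and f are passed in as callables, since their explicit rpa forms (ref. [11]) are not reproduced here, and the finite-difference step is an arbitrary choice of ours.

import numpy as np
from scipy.integrate import solve_ivp

def boundary_line(fa, fb, fcond, x0, t_span):
    """trace a second order boundary line, cf. eq. (30), with s = t.

    fa, fb and fcond are callables of (sza, szb, hz, t) standing in for
    f^n_a, f^n_b and the condition f of eq. (29); x0 = (sza, szb, hz)
    must satisfy all three equations at t_span[0].
    """
    def rhs(t, x):
        def stack(y, tt):
            return np.array([fa(*y, tt), fb(*y, tt), fcond(*y, tt)])
        eps = 1e-6
        base = stack(x, t)
        # jacobian with respect to (sza, szb, hz) by forward differences
        jac = np.column_stack([(stack(x + eps * np.eye(3)[i], t) - base) / eps
                               for i in range(3)])
        dfdt = (stack(x, t + eps) - base) / eps   # explicit temperature derivative
        # total derivative along the line: jac . dx/dt + df/dt = 0
        return np.linalg.solve(jac, -dfdt)
    return solve_ivp(rhs, t_span, np.asarray(x0, dtype=float), max_step=0.01)

the starting point must lie on the boundary line, e.g. a point obtained from the zero temperature expressions of the mean-field section.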
upon crossing the sf-ss transition line, coming from the
superfluid phase the set of possible solutions branches off
into two phases, the supersolid and a non-physical (com-
plex valued) superfluid phase. therefore any matrix of or-
dinary differential equations will render a singularity and
consequently we have to approach the transition line in a
limiting process:
f^s_a(⟨s^x_a⟩, ⟨s^x_b⟩, hz, t) = 0 ,
f^s_b(⟨s^x_a⟩, ⟨s^x_b⟩, hz, t) = 0 ,
lim_{ǫ→0} (⟨s^x_a⟩ − ⟨s^x_b⟩ − ǫ) = 0 .    (31)
the resulting ode is
| ∂f^s_a/∂⟨s^x_a⟩  ∂f^s_a/∂⟨s^x_b⟩  ∂f^s_a/∂hz  ∂f^s_a/∂t |   | ∂⟨s^x_a⟩/∂s |   | 0 |
| ∂f^s_b/∂⟨s^x_a⟩  ∂f^s_b/∂⟨s^x_b⟩  ∂f^s_b/∂hz  ∂f^s_b/∂t | * | ∂⟨s^x_b⟩/∂s | = | 0 |    (32)
| 1                −1               0           0         |   | ∂hz/∂s      |   | ǫ |
| 0                0                0           1         |   | ∂t/∂s       |   | 1 |
which defines the superfluid to supersolid transition. as mentioned previously, there are certain parameter regimes where not all four possible phases appear and consequently a first order transition (mostly superfluid to normal solid) will occur. in the previous section we have seen that such a transition line is difficult to locate. however, a tricritical point frequently appears in the phase diagram, and at this point the first order transition evolves into a second order transition. this tricritical point can be taken as an initial value for a differential equation defining the corresponding first order transition line.
the relevant ode may be derived from a clausius clapey-
ron type equation. on the transition line both phases have
equal free energy. hence:
∆s / ∆(⟨s^z_a⟩ + ⟨s^z_b⟩) = ∂t/∂hz    (33)
where s refers to the spin entropy as derived in the previous section (eq. (25)) and ∆ denotes the difference of either the entropy or the spin mean-fields between the superfluid and the normal solid phases. this equation, together with ds = dt and the total derivatives of the two self-consistency equations for the normal solid and of one equation for the superfluid phase, forms a set of 5 odes determining ⟨s^x⟩ in the superfluid phase and ⟨s^z_a⟩ and ⟨s^z_b⟩ in the normal solid phase along the boundary line in the t−hz plane:
| ∂f^s/∂⟨s^x⟩  0                 0                 ∂f^s/∂hz          ∂f^s/∂t   |
| 0            ∂f^n_a/∂⟨s^z_a⟩   ∂f^n_a/∂⟨s^z_b⟩   ∂f^n_a/∂hz        ∂f^n_a/∂t |
| 0            ∂f^n_b/∂⟨s^z_a⟩   ∂f^n_b/∂⟨s^z_b⟩   ∂f^n_b/∂hz        ∂f^n_b/∂t |    (34)
| 0            0                 0                 ∆s/∆⟨s^z_{a+b}⟩   0         |
| 0            0                 0                 0                 1         |
8 excitation spectrum
the superfluid phase features, due to the spontaneously broken u(1) symmetry, the well known gapless goldstone bosons, i.e. linear phonons. the supersolid phase additionally exhibits a second, gapped branch, which is due to the breakdown of discrete translational symmetry.
figure 4 reveals a zero frequency mode in the supersolid
phase at [100] of the first brillouin zone and consequently
the superfluid to supersolid phase transition is character-
ized by a collapsing roton minimum at [100]. the disper-
sion relation in the superfluid (cfe) phase is given by:
ω(k) = 2 { [j⊤_1 (γ_1(k) − 1) + j⊤_2 (γ_2(k) − 1)] × [ ⟨s^z⟩² (j⊤_1 (γ_1(k) − 1) + j⊤_2 (γ_2(k) − 1)) − ⟨s^x⟩² (j⊤_1 + j⊤_2 − j∥_1 γ_1(k) − j∥_2 γ_2(k)) ] }^{1/2}    (35)
from this equation we can see that the energy can go to zero at [100] (corresponding to γ_1(k) = −1 and γ_2(k) = 1) when the following condition is fulfilled:
j⊤_1 + j⊤_2 + j∥_1 − j∥_2 < 0 .    (36)
hence we obtain a further condition (supplementary to eq. (16)) for the existence of the superfluid to supersolid transition.
equation (35) also allows for a second region of reciprocal space where the dispersion relation might go soft. for γ_1(k) = 0 and γ_2(k) = −1, which corresponds to [111], we obtain the following condition:
−2 j⊤_2 / (j⊤_1 + j⊤_2 + j∥_2) > 0 .    (37)
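the two conditions are easy to check for a given parameter set; the sketch below evaluates the square of eq. (35) at the structure-factor values quoted above and tests the inequalities (36) and (37). the function names are ours, eq. (37) is used in the ratio form in which we have reconstructed it, and the example call uses the parameter set considered later in section 9.3, for which only the [111] condition is met.

def omega_sq_cfe(g1, g2, sx, sz, jt1, jt2, jp1, jp2):
    """square of eq. (35) at given structure-factor values g1 = γ1(k), g2 = γ2(k)."""
    a = jt1*(g1 - 1.0) + jt2*(g2 - 1.0)
    b = jt1 + jt2 - jp1*g1 - jp2*g2
    return 4.0 * a * (sz*sz*a - sx*sx*b)

def softening_conditions(jt1, jt2, jp1, jp2):
    # eq. (36): possible zero at [100], i.e. γ1 = -1, γ2 = 1
    cond_100 = (jt1 + jt2 + jp1 - jp2) < 0.0
    # eq. (37): possible zero at [111], i.e. γ1 = 0, γ2 = -1 (denominator assumed nonzero)
    cond_111 = (-2.0*jt2) / (jt1 + jt2 + jp2) > 0.0
    return cond_100, cond_111

print(softening_conditions(0.5, 0.5, -2.0, -1.5))   # expected: (False, True)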
it is also interesting to study the behavior of the excitation
spectrum with increasing temperature. in a conventional
superfluid the long wave-length behavior is given by:
ω(k) = √( n_o(t) v(0)/m ) k    (38)
here v(0) is the interaction potential at zero momentum
and m is the particles' mass. the density of the conden-
sate no(t ) typically decreases with increasing temperature
and for that reason we expect lower energies with increas-
ing temperature. in figure 3 we can see that the quasi-
particle energies indeed decrease with increasing temper-
ature. apart from the region in the vicinity of [100], the energies at higher temperatures lie significantly lower than those closer to absolute zero. this is important, as it will contribute to the variation of thermodynamic quantities such as the entropy or the specific heat. figure 4 depicts the variation of the excitation spectrum with increasing temperature in the supersolid phase. in this phase the low lying phonon branch mostly softens with increasing temperature, although there exists a region between [110] and [111] where the zero temperature spectrum is
significantly higher. contrary to the phonon branch the
gapped mode lifts the energy around the long wave length
fig. 3. excitation spectrum in the superfluid phase for j⊤_1 = 1.498 k, j⊤_2 = 0.562 k, j∥_1 = −3.899 k, j∥_2 = −1.782 k and hz = 3. the solid line refers to t = 0, the dotted line to t = 0.5, the dashed line to t = 1 and the long dashed line to t = 1.3. (axes: ε_k versus a path through the [000], [100], [110] and [111] points of the first brillouin zone.)
fig. 4. excitation spectrum in the supersolid phase for j⊤_1 = 1.498 k, j⊤_2 = 0.562 k, j∥_1 = −3.899 k, j∥_2 = −1.782 k and hz = 0.8. the solid lines refer to t = 0 and the dashed lines to t = 1. (axes: ε_k versus a path through the [000], [100], [110] and [111] points of the first brillouin zone.)
limit [000] and around [100]. in comparison with the superfluid case, the excitation spectrum changes its shape rather than simply scaling down with increasing temperature. we observe that the supersolid phase exhibits a more complex and diverse structure than the superfluid or normal solid phases alone.
9 discussion
9.1 on finite temperature properties
in this section we will discuss finite temperature properties
of the anisotropic heisenberg model and its solution in
the random-phase approximation. in order to be able to
compare the temperature dependence of the model in the
random-phase approximation with the classical mean-field
fig. 5. the phase diagram for j⊤_1 = 1.498 k, j⊤_2 = 0.562 k, j∥_1 = −3.899 k and j∥_2 = −1.782 k in the rpa (solid lines) and in classical mf (dashed lines). mf overestimates the temperature by about 30%. (axes: hz versus t [k]; the sf (cfe), nf (fe), ss (caf) and ns (af) phases are indicated.)
approximation, we chose the set of parameters that was extensively scrutinized by liu and fisher [12]:
j⊤_1 = 1.498 k ,   j⊤_2 = 0.562 k ,   j∥_1 = −3.899 k ,   j∥_2 = −1.782 k .    (39)
as mentioned in the previous section, liu and fisher [12]
have chosen this set of parameters because it provides ar-
guably the best fit to the phase diagram of helium-4. since
we believe that the validity of the quantum lattice gas
model is too limited to appropriately reproduce the be-
havior of helium-4 over the whole range of temperature
and pressure we do not discuss most properties in the
pressure-temperature space but rather present the major
part of the results in the more comprehensible magnetic field versus temperature coordinates. only where the theory can be compared to relevant experimental data, such as the heat capacity at constant pressure, do we work in the corresponding representation.
the phase diagram of the anisotropic heisenberg model in t and hz coordinates is given in figure 5. the dashed lines correspond to the phase diagram of the mean-field approximation. we see that the diagrams are qualitatively quite similar, but in the mean-field solution the temperature is somewhat overestimated, giving an approximately 30% higher temperature for the tetra-critical point. as mentioned before, the critical magnetic fields hz_c are, due to quantum fluctuations, lower in the random-phase approximation. this effect is most distinctive on the supersolid to superfluid transition line, as there the depletion of the spin magnitude is strongly pronounced.
the net vacancy density ǫ, the density of vacancies minus the density of interstitials, sparked interest as the question arose [18] as to whether the number of vacancies follows the predictions of thermal activation theory or is due to the effects of an incommensurate crystal.
fig. 6. net vacancy density (versus t/t_c) for four different pressures. the dashed line shows the curve expected if the normal solid would exist down to zero temperature.
fig. 7. net vacancy density ǫ = ⟨s^z_a⟩ + ⟨s^z_b⟩ as a function of the external magnetic field hz (which equates to the chemical potential) for four different temperatures. the solid line refers to t = 0 k, the dashed line to t = 0.4 k, the long dashed line to t = 0.7 k and the dotted line to t = 1 k.
figure 6 shows the net vacancy density in the supersolid and the normal solid phase at constant pressure. on isobars, i.e. curves of constant pressure, the magnetic field hz is controlled through equation (27). we see that the net vacancy density is nearly constant in the supersolid phase. only in the normal solid phase does the net vacancy density increase exponentially with increasing temperature, in full agreement with classical thermal activation theory. the almost temperature independent behavior of the net vacancy density in the supersolid phase is an important finding of the quantum lattice gas model and should be observable in high resolution experiments if this effect is real. in figure 7 we
plotted the net vacancy density as a function of hz (or
equivalently μ the chemical potential) across the normal
solid, the supersolid and the superfluid phases for various
temperatures, namely t = 0k, t = 0.4k, t = 0.7k
and t = 1.0k. the term net vacancy density does not
fig. 8. free energy −∆f/k versus t/t_c in the supersolid phase. the leading contribution comes from a t^4.2 term. the inset shows the free energy on a double logarithmic scale; the dashed line is the fit to the t^4.2 term. the leading correction comes from a t^5 term, indicated by the long dashed line.
fig. 9. [color online] 3d plot of the spin entropy. the four
phases are clearly distinguishable as the entropy is non-smooth
across the transition lines.
fig. 10. specific heat c_p at constant pressure versus t/t_c.
fig. 11. net vacancy density ǫ = ⟨s^z_a⟩ + ⟨s^z_b⟩ as a function of temperature on the supersolid to normal solid transition lines. the solid line refers to set 1: j⊤_1 = 1.498 k, j⊤_2 = 0.562 k, j∥_1 = −3.899 k and j∥_2 = −1.782 k, and the dashed line to set 2: j⊤_1 = 0.5 k, j⊤_2 = 0.5 k, j∥_1 = −2.0 k and j∥_2 = −0.5 k.
have a physical meaning in the superfluid phase; there the quantity ǫ rather refers to the particle density of the fluid. the particle density in the superfluid phase is linear in hz and independent of the temperature t, which follows immediately from condition eq. (9), as ⟨s^x_a⟩/⟨s^x_b⟩ is equal to one in the fluid phase. in the supersolid phase the dependence of the net vacancy density on the chemical potential is stronger than in the superfluid phase. the chemical potential (magnetic field hz) is roughly inversely proportional to the pressure, meaning that the superfluid phase exhibits a higher compressibility than the supersolid phase, which is a quite remarkable result. interestingly, the net vacancy density in the supersolid phase also increases linearly with the chemical potential. this is because the ratio of the superfluid order parameters ⟨s^x_a⟩/⟨s^x_b⟩ varies as the square root of the magnetic field in the vicinity of the transition:
⟨s^x_a⟩/⟨s^x_b⟩ ∝ 1 − √( hz − hz_{caf−af} )    (40)
the exponent 1/2 is typical for mean-field type approximations and appears close to all transition lines. figure 11 shows the net vacancy density of the model on the supersolid to normal solid transition line as a function of temperature, and also reveals the mean-field type square root law dependence.
at zero temperature the net vacancy density in the solid phase is equal to zero, hence the crystal is incompressible. in real systems compressibility usually arises from a change in the lattice constant a, characterized by the grüneisen parameter. the quantum lattice gas model does not take this effect into account, since the lattice constant is treated as fixed. however, measurements have shown that the lattice constant in (supersolid) helium is almost constant, indicating that the net vacancy density is the crucial parameter. also of interest are the free energy and the entropy. figure 8 shows the free energy of
the anisotropic heisenberg model in the supersolid (caf)
phase at constant pressure. at low t the free energy typically follows a power law:
f ∝ t^α .    (41)
the exponent α is a universal property of the model which is constant over the entire regime of a phase. it is most easily obtained from the log-log plot shown in the inset of the figure: on this logarithmic scale α is given by the slope of the curve, and the leading contribution is given approximately by
α = 4.2    (42)
in the supersolid region. this is close in value to the usual t^4 term attributed to the linear phonon modes, and follows from equation (25) with ω(k) linear in k. here the t^4 term is solely due to the superfluid mode; in real solids an additional t^4 contribution, accounting for the lattice phonon modes, will appear. the logarithmic plot also reveals the leading correction to the free energy, given by α = 5.
the non-configurational entropy of the system over the en-
tire range of temperature and magnetic field hz is given in
figure 9. all four phases are visible and as expected from
a thermodynamically equilibrated system the entropy is
monotonically increasing with respect to temperature.
figure 10 depicts the configurational specific heat at constant pressure in the supersolid and the normal solid phases. the jump in the specific heat at the critical temperature t_c, in agreement with a second order phase transition, appears to be smeared out by numerical inaccuracies, as the specific heat is the second derivative of the free energy, which had to be integrated over the interval [hz, ∞].
the jump in the specific heat may also be calculated from the following formula, which is analogous to the clausius-clapeyron equation:
∆(∂²f/∂hz²) (dhz/dt)²|_{tl} + ∆(∂²f/∂hz∂t) (dhz/dt)|_{tl} + ∆(∂²f/∂t²) = 0 ,    (43)

∆c_h = −t ∆(∂²f/∂t²) = t ∆[∂(⟨s^z_a⟩ + ⟨s^z_b⟩)/∂hz] (∂hz/∂t)²|_{tl} + t ∆[∂(⟨s^z_a⟩ + ⟨s^z_b⟩)/∂t] (∂hz/∂t)|_{tl} .    (44)

as we have c_p = c_h − t (∂²f/∂t∂hz)(∂hz/∂t)_p, we obtain for the jump of the specific heat at constant pressure:

∆c_p = t ∆[∂(⟨s^z_a⟩ + ⟨s^z_b⟩)/∂hz] (∂hz/∂t)²|_{tl} + t ∆[∂(⟨s^z_a⟩ + ⟨s^z_b⟩)/∂t] [ (∂hz/∂t)|_{tl} − (∂hz/∂t)_p ] .    (45)
fig. 12. the phase diagram (hz versus t [k]) for j⊤_1 = 0.5 k, j⊤_2 = 0.5 k, j∥_1 = −1.0 k and j∥_2 = 0.5 k, with the nf, sf, ns and ss phases indicated; the inset enlarges the low temperature region around hz ≈ 0.86. the supersolid phase vanishes below t = 0.323 k. the resulting sf-ns transition is first order.
for the values corresponding to figure 10 we obtain an
estimated jump of 0.02 which is in good agreement with
the curve.
9.2 first order boundary lines
in parameter regimes where the supersolid phase does not
appear in certain temperature regions there consequently
appears a first order phase transition between the super-
fluid and the normal solid phase. liu and fisher [12] com-
pared the free energies of the competing phases to estab-
lish the transition line. this procedure is not applicable
in the random-phase approximation, as was outlined in
the section on thermodynamic properties. other than the
mean-field approximation where the hamiltonian is given
by equation (6) the effective hamiltonian of the random-
phase approximation is not known. therefore we have
to integrate the first order transition line from a clau-
sius clapeyron like equation as derived in the section on
boundary lines. a set of parameters which exhibits such
a first order transition at low temperatures is given by:
j⊤
1 = 0.5
j⊤
2 = 0.5
j∥
1 = −1.0
j∥
2 = 0.5
(46)
the corresponding phase diagram is shown in figure 12.
according to mean-field eq. (14) and eq. (16) the sec-
ond order superfluid to supersolid transition as well as
the supersolid to normal solid transition is at absolute
zero at hz = 0.86603, implying that the supersolid phase
does not exist at zero temperature. at higher temperatures (above t = 0.323 k) the supersolid phase does exist. at t = 0.323 k, where the sf-ss and the ss-ns transition lines intersect, there is a tricritical point. at this tricritical point the superfluid and normal solid phases coexist (as well as the supersolid phase), and the first order
fig. 13. the phase diagram (hz versus t [k]) for j⊤_1 = 0.38 k, j⊤_2 = 2.5 k, j∥_1 = −1.2 k and j∥_2 = 2.4 k, with the nf, sf, ss and ns phases and the tricritical points indicated. the supersolid phase does not appear above t = 1.785 k. the two tricritical points are connected by a first order sf-ns phase transition.
phase transition line can be calculated from the ordinary
differential equation as has been derived in the section on
boundary lines (eq. (34)). the tricritical point is as such
the starting point for the integration of the ode.
there also exists a regime of parameters where the supersolid phase is suppressed at higher temperatures, as can be seen in figure 13, which corresponds to:
j⊤_1 = 0.38 ,   j⊤_2 = 2.5 ,   j∥_1 = −1.2 ,   j∥_2 = 2.4 .    (47)
the sf-ss and the ss-ns transition lines converge before
the nf-sf line is reached, hence no tetracritical point
such as in figure 5 is present. the two tricritical points
are connected by a first order superfluid to normal solid
transition line.
9.3 beyond the model
in section 8 we have seen that the supersolid phase ap-
pears if and only if the roton dip at γ1 = −1 and γ2 = 1,
i.e. [001] of the first brillouin zone collapses. additionally
we have seen that the model also allows for a collapsing
minimum at [111] if condition eq. (37) is met. a set of
parameters that fulfills this condition is given by:
j⊤
1 = 0.5k
j⊤
2 = 0.5k
j∥
1 = −2k
j∥
2 = −1.5k
(48)
note that the nearest neighbor and the next nearest neighbor constants j⊤_1 and j⊤_2 in this configuration, corresponding to the kinetic energy, are relatively weak and
fig. 14. phase diagram (hz versus t [k]) for j⊤_1 = 0.5 k, j⊤_2 = 0.5 k, j∥_1 = −2 k and j∥_2 = −1.5 k, showing the nf and sf regions and an unknown phase at low hz. the superfluid phase becomes unstable due to an imaginary quasi-particle spectrum (dashed line). rpa does not yield any stable phase beneath that line.
are both of equal strength, leading to a highly anisotropic
kinetic energy. figure 14 shows the corresponding phase
diagram according to the random-phase approximation.
the normal fluid to superfluid transition line starts at
hz = 4.5    (49)
at absolute zero and decreases with increasing tempera-
ture. according to eq. (14) and eq. (16) the critical exter-
nal fields hz defining the superfluid to supersolid and the
supersolid to normal solid transitions are, due to negative
values under the square root, imaginary and hence phys-
ically not relevant. consequently in the classical mean-
field approximation the superfluid phase extends down to
hz = 0 and a first order superfluid to normal solid tran-
sition does not occur as the relatively large negative j∥
2
increases the free energy of a possible solid phase.
the random-phase approximation, however, draws a slightly different picture. analogously to the classical mean-field solution, the random-phase approximation also yields a phase transition near hz = 4.5. but unlike in the classical mean-field solution, the superfluid phase here does not survive all the way down to hz = 0. due to the particular choice of parameters the superfluid phase becomes unstable at around hz = 2; i.e. the quasi-particle spectrum turns imaginary at γ_1(k) = 0 and γ_2(k) = −1 ([111]). in-
terestingly beyond this line no other stable phase exists in
the present approach; there is no set of spin fields ⟨sxa⟩,
⟨sxb⟩, ⟨sza⟩and ⟨szb⟩that solves the self-consistency
equations (20) or (21).
to understand why the present approach breaks down and what the phase below hz = 2 might look like, we first investigate the physical meaning of the collapsing roton minimum at [100] (equivalently [010] and [001]) which leads to the supersolid phase consisting of two sc sub-lattices, as already analyzed in the previous sections: the roton dip at [100] corresponds to a density wave given by:
[cos(2πx/a) + cos(2πy/a) + cos(2πz/a)] / 3 .    (50)
fig. 15. two dimensional projection of the lattice structure
of a (super)-solid phase on three sub-lattices triggered by a
collapse of the roton minimum at [111] of the first brillouin
zone.
the density wave of eq. (50) takes on the value one on sub-lattice a and minus one on sub-lattice b, hence it reproduces the periodicity of the supersolid crystal.
in the same way, a collapsing roton dip on the main diagonals [111], as found here, corresponds to the following density wave:
cos(πx/a) cos(πy/a) cos(πz/a) .    (51)
this density wave yields zero on sub-lattice b and alternatingly one and minus one on sub-lattice a (neighbors have opposite signs). consequently this phase corresponds to a (super)-solid phase exhibiting three sub-lattices a, b and c, where the mean-fields take on three different values (see fig. 15).
fig. 15). it would be quite interesting to study the pos-
sibilities and properties of such a phase and we leave as
future work the extension of the present approach to ac-
count for a three sub-lattice phase and the investigation
its properties.
10 conclusion
in this paper we extended the mean-field theory of the qlg model by liu and fisher [12] to finite temperatures and employed a random-phase approximation, in which we derived green's functions using the equation of motion method. we applied the cumulant decoupling procedure to split up the emergent third order green's functions in the eom. in comparison to the mf theory employed by liu and fisher [12], the rpa is a fully quantum mechanical solution of the qlg model and therefore takes quantum fluctuations as well as quasi-particle excitations into account. the computational results show that these quasi-particle excitations are capable of rendering the superfluid phase unstable and thus evoking a phase transition. quantum fluctuations account for additional vacancies and interstitials even at zero temperature and, most interestingly, the net vacancy density is altered in the supersolid phase.
a potential shortcoming of the rpa is the lack of knowledge of the effective hamiltonian and therefore of the internal and free energies. usually the free energy is needed to compute first order transition lines, as the ground state is given by the lowest energy state. we bypassed this obstacle by deriving a clausius-clapeyron like equation which defines the first order superfluid to normal solid transition line. the entropy, which is an input parameter of this equation, is calculated from the spin wave excitation spectrum. the jump in the specific heat across the second order supersolid to normal solid transition line reveals information about the nature of the transition. however, the specific heat is the second derivative of the free energy and thus the jump is smeared out in the numerical calculations. consequently we derived an equation which gives an alternative estimate of the jump and is in good agreement with the numerical result. most importantly, our theory predicts a net vacancy density which, in the supersolid phase, is significantly different from thermal activation theory. in the normal solid phase the net vacancy density roughly follows the predictions of thermal activation theory, although quantum mechanical effects give a measurable contribution. across the phase transition, however, in the supersolid phase the net vacancy density stays rather constant as t increases.
references
1. a. f. andreev and i. m. lifshitz, zh. eksp. teor. fiz.
56,(1969) 2057 [jetp 29, (1969) 1107].
2. g. v. chester, phys. rev. a 2, (1970) 256.
3. a. j. leggett, phys. rev. lett. 25, (1970) 1543.
4. p. c. ho, i. p. bindloss and j. m. goodkind, j. low temp.
physics 109, (1997) 409.
5. e. kim, m.h.w. chan, nature 427, (2004) 225.
6. e. kim, m.h.w. chan, science 305, (2004) 1941.
7. y. aoki, j. c. graves, h. kojima, phys. rev. lett. 99,
(2007) 015301.
8. j. day, j. beamish, nature 450, (2007) 853-856.
9. p.w. anderson, nature phys. 3,(2007) 160.
10. a. stoffel and m. gulacsi, manuscript submitted to phys.
rev. b.
11. a. stoffel and m. gulacsi, manuscript in preparation.
12. k.-s. liu, m.e. fisher, j. low. temp. phys. 10, (1973)
655.
13. h. matsuda, t. tsuneto, prog. theoret. phys. suppl. 46,
(1970) 411.
14. t. matsubara and h. matsuda, progr. theoret. phys. 16, (1956) 569; 17, (1957).
15. n.n. bogolyubov, s.v. tyablikov, doklady akad. nauk.
s.s.s.r. 126, 53 (1959) [translation: soviet phys. -doklady
4,(1959) 604].
16. s.v. tyablikov, ukrain. mat. zhur. 11, (1959) 287.
17. p.e. bloomfield and e.b. brown, phys. rev. b 22, (1980)
1353.
18. p. w. anderson, w. f. brinkman and david a. huse,
science 310,(2005) 1164.
19. x. lin, a. clark and m. chan, nature 449,(2007) 1025.
|
0911.1699 | review of quasi-elastic charge-exchange data in the nucleon-deuteron
breakup reaction | the available data on the forward charge exchange of nucleons on the deuteron
up to 2gev per nucleon are reviewed. the value of the inclusive nd->pnn/np->pn
cross section ratio is sensitive to the fraction of spin-independent
neutron-proton backward scattering. the measurements of the polarisation
transfer in d(\vec{n},\vec{p})nn or the deuteron analysing power in
p(\vec{d},pp)n in high resolution experiments, where the final nn or pp pair
emerge at low excitation energy, depend upon the longitudinal and transverse
spin-spin np amplitudes. the relation between these types of experiments is
discussed and the results compared with predictions of the impulse
approximation model in order to see what new constraints they can bring to the
neutron-proton database.
| introduction
the charge exchange of neutrons or protons on the deuteron has a very long his-
tory. the first theoretical papers that dealt with the subject seem to date from
the beginning of the 1950s, with papers by chew [1, 2], gluckstein and bethe [3], and
pomeranchuk [4]. the first two groups were strongly influenced by the measure-
ments of the differential cross section of the d(n, p) reaction that were then being
undertaken at ucrl by powell [5]. apart from coulomb effects, by charge sym-
metry the cross section for this reaction should be the same as that for d(p, n).
the spectrum of the emerging neutron in the forward direction here shows a very
strong peaking for an energy that is only a little below that of the incident pro-
ton beam. there was therefore much interest in using the reaction as a means
of producing a good quality neutron beam up to what was then "high" energies,
i.e., a few hundred mev. the theory of this proposal was further developed by
watson [6], shmushkevich [7], migdal [8], and lapidus [9].
since we have recently reviewed the phenomenology of the d(n, p) and d(p, n)
charge exchange [10], the theory will not be treated here in any detail. the aim
of the present paper is rather to discuss the database of the existing inclusive
and exclusive measurements and make comparisons with the information that is
available from neutron-proton elastic scattering data.
the proton and neutron bound in the deuteron are in a superposition of
3s1 and 3d1 states and their spins are parallel. on the other hand, if the four-
momentum transfer t = −q2 between the incident neutron and final proton in
the nd →p{nn} reaction is very small, the pauli principle demands that the two
emerging neutrons be in the spin-singlet states 1s0 and 1d2. in impulse (single-
scattering) approximation, we would then expect the transition amplitude to
be proportional to a spin-flip isospin-flip nucleon-nucleon scattering amplitude
times a form factor that represents the overlap of the initial spin-triplet deuteron
wave function with that of the unbound (scattering-state) nn wave function. the
peaking observed in the energy spectrum of the outgoing proton is due to the
huge neutron-neutron scattering length, which leads to a very strong final state
interaction (fsi) between the two neutrons.
a detailed evaluation of the proton spectrum from the d(n, p)nn reaction
would clearly depend upon the deuteron and nn wave functions, i.e., upon low
energy nuclear physics. however, a major advance was made by dean [11, 12].
he showed that, if one integrated over all the proton energies, there was a closure
sum rule where all the dependence on the nn wave function vanished.
(dσ/dt)_{nd→p{nn}} = (1 − f(q)) (dσ/dt)^{si}_{np→pn} + [1 − (1/3) f(q)] (dσ/dt)^{sf}_{np→pn} ,    (1.1)
where f(q) is the deuteron form factor. here the neutron-proton differential cross
section is split into two parts that represent the contribution that is independent
of any spin transfer (si) between the initial neutron and final proton and one
where there is a spin flip (sf).
if the beam energy is high, then in the forward direction q ≈0, f(0) = 1,
and eq. (1.1) reduces to
(dσ/dt)_{nd→p{nn}} = (2/3) (dσ/dt)^{sf}_{np→pn} .    (1.2)
there are modifications to eq. (1.1) through the deuteron d-state though these
do not affect the forward limit of eq. (1.2) [11, 12, 13]. as a consequence, the
ratio
rnp(0) = (dσ/dt)_{nd→p{nn}} / (dσ/dt)_{np→pn} = (2/3) (dσ/dt)^{sf}_{np→pn} / (dσ/dt)_{np→pn}    (1.3)
is equal to two thirds of the fraction of spin flip in np →pn between the incident
neutron and proton outgoing in the beam direction. it is because the ratio of
two unpolarised cross sections can give information about the spin dependence of
neutron-proton scattering that so many groups have made experimental studies
in the field and these are discussed in section 3. of course, for this to be a useful
interpretation of the cross section ratio the energy has to be sufficiently high for
the dean sum rule to converge before any phase space limitations become impor-
tant. the longitudinal momentum transfer must be negligible and terms other
than the np →pn impulse approximation should not contribute significantly to
the evaluation of the sum rule. although the strong nn fsi helps with these
concerns, all the caveats indicate that eq. (1.3) would provide at best only a
qualitative description of data at the lower energies.
the alternative approach is not to use a sum rule but rather to measure the
excitation energy in the outgoing dineutron or diproton with good resolution
and then evaluate the impulse approximation directly by using deuteron and
nn scattering wave functions, i.e., input information from low energy nuclear
physics. this avoids the questions of the convergence of the sum rule and so might
yield useful results down to lower energies. a second important feature of the
d(p, n)pp reaction in these conditions is that the polarisation transfer between
the initial proton and the final neutron is expected to be very large, provided
that the excitation energy epp in the final two-proton system is constrained to
be only a few mev [14, 15]. in fact the reaction has been used by several groups
to furnish a polarised neutron beam [16, 17, 18] but also as a means to study
neutron-proton charge exchange observables, as described in section 4.
bugg and wilkin [13, 19] realised that in the small epp limit the deuteron
tensor analysing powers in the p(⃗
d, {pp})n reaction should also be large and with
a significant angular structure that was sensitive to the differences between the
neutron-proton spin-flip amplitudes. this realisation provided an impetus for the
study of high resolution p(⃗
d, {pp})n experiments that are detailed in section 5.
the inclusive (p, n) or (n, p) measurements of section 3 and the high resolution
ones of sections 4 and 5 are in fact sensitive to exactly the same physics input.
to make this explicit, we outline in section 2 the necessary np formalism through
which one can relate the forward values of rnp or rpn, the polarisation transfer
in d(⃗
n, ⃗
p )nn and the deuteron tensor analysing power in p(⃗
d, {pp})n in impulse
approximation to the longitudinal and transverse polarisation transfer coefficients
in neutron-proton elastic scattering. predictions for the observables are made
there using an up-to-date phase shift analysis.
data are available on the rnp and rpn parameters in, respectively, inclusive
d(n, p)nn and d(p, n)pp reactions at energies that range from tens of mev up to
2 gev and the features of the individual experiments are examined in section 3,
where the results are compared to the predictions of the phase shift analysis.
polarisation transfer data have become steadily more reliable with time, with
firmer control over the nn excitation energies and better calibrated polarisation
measurements so that the data described in section 4 now extend from 10 mev
up to 800 mev.
four experimental programmes were devoted to the study of the cross sec-
tion and tensor analysing powers of the p(⃗
d, {pp})n reaction using very different
experimental techniques. their procedures are described in section 5 and the re-
sults compared with the predictions of the plane wave impulse approximation. in
general this gives a reasonable description of the data out to a three-momentum
transfer of q ≈mπ by which point multiple scatterings might become important.
these data are however only available in an energy domain where the neutron-
proton database is extensive and reliable and the possible extensions are also
outlined there.
the comparison between the sum-rule and high resolution approaches is one
of the subjects that is addressed in our conclusions of section 6. the consistency
between the information obtained from the d(⃗
n, ⃗
p )nn and p(⃗
d, {pp})n reactions
in the forward direction is striking and the belief is expressed that this must
contribute positively to our knowledge of the neutron-proton charge exchange
phenomenology.
2
neutron-proton and nucleon-deuteron observables
we have shown that the input necessary for the evaluation of the forward charge
exchange observables can be expressed as combinations of pure linearly indepen-
dent np →np observables evaluated in the backward direction [10]. although the
expressions are independent of the scattering amplitude representation, for our
purposes it is simplest to use the results of polarisation transfer experiments. the
nn formalism gives two series of polarisation transfer parameters that are mutu-
ally dependent [20]. using the notation xsrbt for experiments with measured spin
orientations for the scattered (s), recoil (r), beam (b), and target (t) particles,
we have either the polarisation transfer from the beam to recoil particles,
(dσ/dt) k_{0rb0} = (1/4) tr( σ_{2r} m σ_{1b} m† ) ,    (2.1)
or the polarisation transfer from the target to the scattered particle
(dσ/dt) k_{s00t} = (1/4) tr( σ_{1s} m σ_{2t} m† ) .    (2.2)
here σ1s, σ1b, σ2t, and σ2r are the corresponding pauli matrices and m is the
scattering matrix. the unpolarised invariant elastic scattering cross section
dσ/dt = (π/k²) dσ/dω = (1/4) tr( m m† ) ,    (2.3)
where k is the momentum in the cm frame and t is the four-momentum transfer.
a first series of parameters describes the scattering of a polarised neutron
beam on an unpolarised proton target, where the polarisation of the final outgo-
ing protons is measured by an analyser through a second scattering. the spins
of the incident neutrons can be oriented either perpendicularly or longitudinally
with respect to the beam direction, with the final proton polarisations being mea-
sured in the same directions. at θcm = π there are two independent parameters,
k0nn0(π) and k0ll0(π), referring respectively to the transverse (n) and longitudi-
nal (l) directions. it was shown in ref. [10] that the forward d(n, p)n/p(n, p)n
cross section ratio can be written in terms of these as
rnp(0) = (1/6) {3 − 2 k0nn0(π) − k0ll0(π)} .    (2.4)
a second series of parameters describes the scattering of an unpolarised neu-
tron beam on a polarised proton target, where it is the polarisation of the final
outgoing neutron that is determined. this leads to the alternative expression for
rpn(0):
rnp(0) = (1/6) {3 − 2 kn00n(π) + kl00l(π)} ,    (2.5)
where kn00n(π) = k0nn0(π) but kl00l(π) = −k0ll0(π). other equivalent relations are to be found in ref. [20].
it cannot be stressed enough that the small angle (n, p) charge exchange on
the deuteron is sensitive to the spin transfer from the incident neutron to the
outgoing proton and not that to the outgoing neutron. the latter observables
are called the depolarisation parameters d which, for example, are given in the
case of a polarised target by
dσ/dt d0r0t = (1/4) tr(σ2r m σ2t m†) .    (2.6)
if one were to evaluate instead of eq. (2.5) the combination
rnp(0) = (1/6) {3 − 2d0n0n(π) − d0l0l(π)} ,    (2.7)
then one would get a completely independent (and wrong) answer. using the
said sp07 phase shift solution at 100 mev, eq. (2.4) gives rnp(0) = 0.60 whereas
the combination of eq. (2.7) gives 0.13. hence one has to be very careful with the statement that the
np →np spin dependence in the backward direction is weak or strong. it depends
entirely on which particles one is discussing.
in plane wave impulse approximation, the one non-vanishing deuteron ten-
sor analysing power in the p(d, {pp})n reaction in the forward direction can be
expressed in terms of the same spin-transfer parameters, provided that the exci-
tation energy in the pp system is very small such that it is in the 1s0 state [13, 10]:
ann(0) = 2 (k0ll0(π) − k0nn0(π)) / (3 − k0ll0(π) − 2k0nn0(π)) .    (2.8)
in an attempt to minimise confusion, observables in the nucleon-deuteron sector
will be labelled with capital letters and only carry two subscripts.
in the same approximation, the longitudinal and transverse spin-transfer pa-
rameters in the d(⃗p,⃗n)pp between the initial proton and the final neutron emerg-
ing in the beam direction are similarly given by
kll(0) = − (1 − 3k0ll0(π) + 2k0nn0(π)) / (3 − k0ll0(π) − 2k0nn0(π)) ,
knn(0) = − (1 + k0ll0(π) − 2k0nn0(π)) / (3 − k0ll0(π) − 2k0nn0(π)) .    (2.9)
independent of any theoretical model, these parameters are related by [14, 21]
kll(0) + 2knn(0) = −1 .    (2.10)
equally generally, in the 1s0 limit the forward longitudinal and transverse
deuteron tensor analysing powers are trivially related:
all(0) = −2 ann(0) ,    (2.11)
and these are in turn connected to the spin-transfer coefficients through [21]
all(0) = −(1 + 3kll(0))/2   or   ann(0) = −(1 + 3knn(0))/2 .    (2.12)
we stress once again that, although eqs. (2.8,2.9) are model dependent,
eqs. (2.10), (2.11), and (2.12) are exact if the final pp system is in the 1s0 state.
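since eqs. (2.4), (2.8) and (2.9) all take the same two backward np parameters
as input, the identities (2.10)-(2.12) provide a convenient internal check of any
numerical implementation. the short python sketch below does exactly this; the
input values are not quoted from the sp07 solution but were back-calculated so
that the 100 mev row of table 1 (rnp(0) = 0.600, kll(0) = −0.008,
ann(0) = 0.244) is reproduced.

def forward_observables(k0nn0, k0ll0):
    # plane wave impulse approximation relations of eqs. (2.4), (2.8) and (2.9)
    denom = 3.0 - k0ll0 - 2.0 * k0nn0
    r_np = (3.0 - 2.0 * k0nn0 - k0ll0) / 6.0            # eq. (2.4)
    a_nn = 2.0 * (k0ll0 - k0nn0) / denom                 # eq. (2.8)
    k_ll = -(1.0 - 3.0 * k0ll0 + 2.0 * k0nn0) / denom    # eq. (2.9)
    k_nn = -(1.0 + k0ll0 - 2.0 * k0nn0) / denom          # eq. (2.9)
    return r_np, a_nn, k_ll, k_nn

# illustrative inputs, chosen to reproduce the 100 mev row of table 1
r_np, a_nn, k_ll, k_nn = forward_observables(k0nn0=-0.3464, k0ll0=0.0928)

# the 1s0 identities of eqs. (2.10)-(2.12) hold exactly, whatever the input
assert abs(k_ll + 2.0 * k_nn + 1.0) < 1e-12              # eq. (2.10)
a_ll = -2.0 * a_nn                                        # eq. (2.11)
assert abs(a_ll + (1.0 + 3.0 * k_ll) / 2.0) < 1e-12       # eq. (2.12)
assert abs(a_nn + (1.0 + 3.0 * k_nn) / 2.0) < 1e-12       # eq. (2.12)

print(f"rnp(0) = {r_np:.3f}, ann(0) = {a_nn:.3f}, kll(0) = {k_ll:.3f}, knn(0) = {k_nn:.3f}")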
the variation of the np backward elastic cross section with energy and the
values of rnp(0), ann(0), and kll(0) have been calculated using the energy
table 1: values of the np backward differential cross section in the cm system
dσ/dω, and in invariant normalisation dσ/dt. also shown are the forward
d(n, p)nn/p(n, p)n ratio rnp(0), the longitudinal polarisation transfer parameter
kll(0) in the d(⃗p,⃗n)pp reaction, and the deuteron analysing power ann(0) in the
p(⃗d, {pp})n reaction at the same energy per nucleon. these have all been
evaluated from the plane wave impulse approximation using the energy dependent
psa of arndt et al., solution sp07 [22].
tn (gev)   dσ/dω (mb/sr)   dσ/dt (mb/(gev/c)²)   rnp(0)   kll(0)   ann(0)
0.010      78.74           52728                 0.404    -0.370   -0.027
0.020      42.92           14371                 0.433    -0.273    0.045
0.030      29.84            6661                 0.466    -0.167    0.125
0.040      23.56            3944                 0.498    -0.085    0.186
0.050      20.11            2693                 0.525    -0.030    0.227
0.060      18.04            2013                 0.547     0.000    0.250
0.070      16.71            1599                 0.565     0.014    0.260
0.080      15.81            1323                 0.579     0.014    0.261
0.090      15.17            1129                 0.591     0.006    0.255
0.100      14.68             983                 0.600    -0.008    0.244
0.120      13.98             780                 0.613    -0.048    0.214
0.150      13.27             592                 0.627    -0.118    0.162
0.200      12.46             417                 0.639    -0.231    0.077
0.250      11.88             318                 0.645    -0.327    0.005
0.300      11.45             255                 0.645    -0.405   -0.054
0.350      11.19             214                 0.644    -0.472   -0.104
0.400      11.02             184                 0.639    -0.530   -0.148
0.450      10.88             162                 0.631    -0.582   -0.186
0.500      10.62             142                 0.621    -0.630   -0.223
0.550      10.10             123                 0.608    -0.678   -0.259
0.600       9.45             105                 0.596    -0.726   -0.295
0.650       9.07            93.4                 0.588    -0.762   -0.321
0.700       8.96            85.8                 0.586    -0.773   -0.330
0.750       8.95            79.9                 0.588    -0.769   -0.327
0.800       8.93            74.7                 0.592    -0.761   -0.321
0.850       8.98            69.9                 0.596    -0.754   -0.315
0.900       8.81            65.5                 0.601    -0.748   -0.311
0.950       8.73            61.5                 0.605    -0.744   -0.308
1.000       8.65            57.9                 0.609    -0.740   -0.305
1.050       8.57            54.7                 0.613    -0.737   -0.303
1.100       8.50            51.7                 0.616    -0.735   -0.302
1.150       8.44            49.1                 0.620    -0.735   -0.301
1.200       8.40            46.8                 0.623    -0.736   -0.302
1.250       8.38            44.9                 0.626    -0.739   -0.304
1.300       8.39            43.2                 0.629    -0.740   -0.308
dependent gw/vpi psa solution sp07 [22] and are listed in table 1.
the
relations between the observables used in refs. [22] and [20] are to be found in
the said program.
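as a cross-check of the tabulated numbers, the two cross-section columns of
table 1 are related simply through eq. (2.3). a minimal python sketch, assuming
only the standard neutron and proton masses, is

import math

M_N, M_P = 0.93957, 0.93827   # neutron and proton masses in gev

def dsig_dt(t_lab, dsig_domega):
    # eq. (2.3): dsigma/dt = (pi/k²) dsigma/domega, with k the np cm momentum
    s = M_N**2 + M_P**2 + 2.0 * M_P * (t_lab + M_N)               # mandelstam s (gev²)
    k2 = (s - (M_N + M_P)**2) * (s - (M_N - M_P)**2) / (4.0 * s)  # cm momentum squared
    return math.pi / k2 * dsig_domega                             # mb/(gev/c)² if input is mb/sr

# the 1.0 gev row of table 1: 8.65 mb/sr should give about 57.9 mb/(gev/c)²
print(dsig_dt(1.000, 8.65))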
the gw/vpi psa for proton-proton scattering can be used up to 3.0 gev
but, according to the authors, the predictions are at best qualitative above
2.5 gev [22].
because this is an energy dependent analysis, one cannot use
the said program to estimate the errors of any observable. although the equiv-
alent psa for neutron-proton scattering was carried out up to 1.3 gev, very few
spin-dependent observables have been measured above 1.1 gev.
let us summarise the present status of the np database at intermediate en-
ergies. about 2000 spin-dependent np elastic scattering data points, involving
11 to 13 independent observables, were determined at saturne 2 over large
angular intervals mainly between 0.8 and 1.1 gev [23, 24]. a comparable amount
of np data in the region from 0.5 to 0.8 gev was measured at lampf [25] and
in the energy interval from 0.2 to 0.56 gev at psi [26]. the triumf group also
contributed significantly up to 0.515 gev [27].
the saturne 2 and the psi data were together sufficient, not only to imple-
ment the psa procedure, but also to perform a direct amplitude reconstruction
at several energies and angles. it appears that the spin-dependent data are more
or less sufficient for this procedure at the lower energies, whereas above 0.8 gev
there is a lack of np differential cross section data, mainly at intermediate angles.
3 measurements of unpolarised quasi-elastic charge-exchange observables
3.1 the (n, p) experiments
the first measurement of the d(n, p) differential cross section was undertaken
at ucrl by powell in 1951 [5].
these data at 90 mev were reported by
chew [2], though only in graphical form, and from this one deduces that rnp(0) =
0.40 ± 0.04. a year later cladis, hadley, and hess, working also at the ucrl
synchrocyclotron, published data obtained with the 270 mev neutron beam [28].
their value of 0.71 ± 0.02 for the ratio of their own deuteron/hydrogen data is
clearly above the permitted limit of 2/3 by more than the claimed error bar. this
may be connected with the very broad energy spectrum of the incident neutron
beam, which had a fwhm ≈100 mev.
at the dubna synchrocyclotron the first measurements were carried out by
dzhelepov et al. [29, 30] in 1952 - 1954 with a 380 mev neutron beam. somewhat
surprisingly, the authors considered their result, rnp(0) = 0.20 ± 0.04, to
be compatible with the ucrl measurements [5, 28]. in fact, later more refined
experiments [31] showed that the dzhelepov et al. value was far too low and it
should be discarded from the database.
at the end of that decade larsen measured the same quantity at lrl berkeley
at the relatively high energy of 710 mev and obtained rnp(0) = 0.48 ± 0.08 [32].
however, no previous results were mentioned in his publication.
in his contribution to the 1962 cern conference [33], dzhelepov presented the
angular dependence of rnp(θ) at 200 mev. although he noted that the authors
of the experiment were yu. kazarinov, v. kiselev and yu. simonov, no reference
was given and we have found no publication. reading the value from a graph,
one obtains rnp(0) = 0.55 ± 0.03.
one advantage of working at very low energies, as was done in moscow [34],
is that one can obtain a neutron beam from the 3h(d, n)4he reaction that is
almost monochromatic. at 13.9 mev there is clearly no hope at all of fulfilling
the conditions of the dean sum rule so that the value given in table 2 was
obtained with a very severe cut. instead, the group concentrated on the final
state interaction region of the two neutrons which, in some ways, is similar to
the approach of the high resolution experiments to be discussed in section 5. by
comparing the data with the d(p, n)pp results of ref. [35], it was possible to see
the effects of the coulomb repulsion when the two protons were detected in the
fsi peak.
though the value obtained by measday [36] at 152 mev has quite a large
error bar, rnp(0) = 0.65 ± 0.10, this seems to be mainly an overall systematic
effect because the variation of the result with angle is very smooth. these results
show how rnp(θ) approaches two thirds as the momentum transfer gets large and
the pauli blocking becomes less important.
the 794 mev measurement from lampf [37] is especially detailed, with very
fine steps in momentum transfer. extrapolated to t = 0 it yields rnp(0) = 0.56±
0.04. however, the authors suggest that the true value might be a little higher
than this due to the cut that they imposed upon the lowest proton momentum
considered.
by far the most extensive d(n, p)nn data set at medium energies was obtained
by the freiburg group working at psi, the results of which are only available in the
form of a diploma thesis [31]. however, the setup used by the group for neutron-
proton backward elastic scattering is described in ref. [38]. the psi neutron
beam was produced through the interaction of an intense 589 mev proton beam
with a thick nuclear target. this delivered pulses with widths of less than 1 ns and
bunch spacings of 20 or 60 ns. combining this with a time-of-flight path of 61 m
allowed for a good selection of the neutron momentum, with an average resolution
of about 3% fwhm. data were reported at fourteen neutron energies from 300
to 560 mev, i.e., above the threshold for pion production so that the results
could be normalised using the cross section for np →dπ0, which was measured in
parallel [38]. over this range rnp(0) showed very little energy dependence, with
an average value of 0.62 ± 0.01, which is quite close to the upper limit of 2/3.
at the jinr vblhe dubna a high quality quasi-monoenergetic polarised
neutron beam was extracted in 1994 from the synchrophasotron for the purposes
of the ∆σl(np) measurements [39, 40], though this accelerator was stopped in
2005. polarised deuterons are not yet available from the jinr nuclotron but, on
the other hand, intense unpolarised beams with very long spills could be obtained
from this machine. since the final ∆σl set-up included a spectrometer, the study
of the energy dependence of rnp(0) could be extended up to 2.0 gev through
the measurement of seven points [41]. that at 550 mev agrees very well with
the neighbouring psi point [31] while the one at 800 mev is consistent with the
lampf measurement [37]. since the values of rnp(0) above 1 gev could not have
been reliably predicted from previous data, the nuclotron measurements in the
interval 1.0 < tn < 2 gev can be considered to be an important achievement in
this field. it would be worthwhile to complete these experiments by measurements
in smaller energy steps in order to recognise possible anomalies or structures. it
is also desirable to extend the investigated interval up to the highest neutron
energy at the nuclotron (≈3.7 gev) since such measurements are currently only
possible at this accelerator.
the data on rnp(0) from the d(n, p)nn experiments discussed above are sum-
marised in table 2, where the kinetic energy, facility, year of publication, and
reference are also listed. several original papers show the values of the angu-
lar distribution of the charge exchange cross section on the deuteron. in such
cases, the rnp(0) listed here were obtained using the predictions for the free for-
ward np charge-exchange cross sections taken from the said program (solution
sp07) [22]. these values are shown in table 1.
3.2 the (p, n) experiments
although high quality proton beams have been available at many facilities, the
evaluation of a rpn(0) ratio from d(p, n)pp experiments requires the division of
this cross section by that for the charge exchange on a nucleon target. where
necessary, we have done this using the predictions of the sp07 said solution [22]
given in table 1. given also the difficulties in obtaining absolute normalisations
when detecting neutrons, we consider that in general the results obtained using
neutron beams are likely to be more reliable.
the low energy data of wong et al. [35] at 13.5 mev do show evidence of
a peak for the highest momentum neutrons but this is sitting on a background
coming from other breakup mechanisms that are probably not associated with
charge exchange. the value given in table 3 without an error bar is therefore
purely indicative.
in 1953 hofmann and strauch [42], working at the harvard university ac-
celerator, published results on the interaction of 95 mev protons with several
nuclei and measured the d(p, n) reaction for the first time. an estimation of the
charge-exchange ratio from the plotted data gives rpn(0) = 0.48 ± 0.03.
the measurements at 30 and 50 mev were made using the time-of-flight fa-
cility of the rutherford laboratory (rhel) proton linear accelerator [43]. the
table 2: the rnp(0) data measured using the d(n, p)nn reaction. the total
estimated uncertainties quoted do not take into account the influence of the
different possible choices on the cut on the final proton momentum.

tn (mev)   rnp(0)         facility     year   ref.
13.9       0.19           moscow       1965   [34]
90.0       0.40 ± 0.04    ucrl         1951   [5]
152.0      0.65 ± 0.10    harvard      1966   [36]
200.0      0.55 ± 0.03    jinr dlnp    1962   [33]
270.0      0.71 ± 0.02    ucrl         1952   [28]
299.7      0.65 ± 0.03    psi          1988   [31]
319.8      0.64 ± 0.03    psi          1988   [31]
339.7      0.64 ± 0.03    psi          1988   [31]
359.6      0.63 ± 0.03    psi          1988   [31]
379.6      0.64 ± 0.03    psi          1988   [31]
380.0      0.20 ± 0.04    inp dubna    1955   [29]
399.7      0.61 ± 0.03    psi          1988   [31]
419.8      0.62 ± 0.03    psi          1988   [31]
440.0      0.63 ± 0.03    psi          1988   [31]
460.1      0.61 ± 0.03    psi          1988   [31]
480.4      0.61 ± 0.03    psi          1988   [31]
500.9      0.59 ± 0.03    psi          1988   [31]
521.1      0.60 ± 0.03    psi          1988   [31]
539.4      0.62 ± 0.03    psi          1988   [31]
550.0      0.59 ± 0.05    jinr vblhe   2009   [41]
557.4      0.63 ± 0.03    psi          1988   [31]
710.0      0.48 ± 0.08    lrl          1960   [32]
794.0      0.56 ± 0.04    lampf        1978   [37]
800.0      0.55 ± 0.02    jinr vblhe   2009   [41]
1000       0.55 ± 0.03    jinr vblhe   2009   [41]
1200       0.55 ± 0.02    jinr vblhe   2009   [41]
1400       0.58 ± 0.04    jinr vblhe   2009   [41]
1800       0.57 ± 0.03    jinr vblhe   2009   [41]
2000       0.56 ± 0.05    jinr vblhe   2009   [41]
neutron spectrum, especially at 30 mev, does not show a clear separation of the
charge-exchange impulse contribution from other mechanisms and the dean sum
rule is far from being saturated. the same facility was used at the higher energies
of 95 and 144 mev, where the target was once again deuterated polythene [44].
this allowed the spectrum to be studied up to a proton-proton excitation en-
ergy epp ≈14 mev when neutrons from reactions on the carbon in the target
contributed. it was claimed that the cross sections obtained had an overall nor-
malisation uncertainty of about ±10% and that the impulse approximation could
describe the data within this error bar.
the highest energy (p, n) data were produced at lampf [45], where the
charge-exchange peak was clearly separated from other mechanisms, including
pion production, and the conditions for the use of the dean sum rule were well
satisfied. their high value of rpn(0) = 0.66 ± 0.08 at 800 mev would be reduced
to 0.61 if the np data of table 1 were used for normalisation instead of those
available in 1976.
the approach by the ucl group working at tp = 135 mev at harwell was
utterly different to the others. they used a high-pressure wilson cloud chamber
triggered by counters, which resulted in a large fraction of the 1740 photographs
containing events [46]. this led to the 1048 events of proton-deuteron collisions
that were included in the final data analysis. instead of detecting the neutron
from the d(p, n)pp reaction, the group measured both protons. in a sense therefore
the experiment is similar to that of the dubna bubble chamber group [47], but
in inverted kinematics. due to the geometry of the counter selection system, the
apparatus was blind to protons that were emitted in a cone of laboratory angles
θlab < 10◦with energies above 6 mev. although the corrections for the associated
losses are model dependent, these should not affect the neutrons emerging at
small angles and the results were integrated down to a neutron kinetic energy
that was 8 mev below the maximum allowed. the differential cross sections were
compared to the plane wave impulse approximation calculations of castillejo and
singh [48].
the results from the various d(p, n)pp experiments are summarised in table 3.
3.3 the unpolarised dp →ppn reaction
in principle, far more information is available if the two final protons are measured
in the deuteron charge exchange reaction and not merely the outgoing neutron.
this has been achieved by using a beam of deuterons with momentum 3.35 gev/c
incident on the dubna hydrogen bubble chamber.
because of the richness of
the data contained, the experiment has had a very long history with several
reanalyses [49, 50, 51, 52, 47].
of the seventeen different final channels studied, the largest number of events
(over 10^5) was associated with deuteron breakup.
these could be converted
very reliably into cross sections by comparing the sum over all channels with
table 3: the rpn(0) data measured using the d(p, n)pp reaction. the total
estimated uncertainties quoted do not take into account the influence of the
different possible choices on the cut on the final neutron momentum.

tkin (mev)   rpn(0)         facility    year   ref.
13.5         0.18           livermore   1959   [35]
30.1         0.14 ± 0.04    rhel        1967   [43]
50.0         0.24 ± 0.06    rhel        1967   [43]
95.0         0.48 ± 0.03    harvard     1953   [42]
94.7         0.59 ± 0.03    harwell     1967   [44]
135.0        0.65 ± 0.15    harwell     1965   [46]
143.9        0.60 ± 0.06    harwell     1967   [44]
647.0        0.60 ± 0.08    lampf       1976   [45]
800.0        0.66 ± 0.08    lampf       1976   [45]
the known total cross section. corrections were made for the loss of elastic dp
scattering events at very small angles. the dp →ppn events were divided into two
categories, depending upon whether it was the neutron or one of the two protons
that had the lowest momentum in the deuteron rest frame. this identification
of the charge-retention or charge-exchange channels is expected to be subject to
little ambiguity for small momentum transfers. with this definition, the total
cross section for deuteron charge exchange was found to be 5.85 ± 0.05 mb.
the big advantage of the bubble chamber approach is that one can check many
of the assumptions that are made in the analysis. the crucial one is, of course,
the separation into the charge-exchange and charge-retention events. in the latter
case the distribution of "spectator" momenta psp falls smoothly with psp but in
the charge-exchange sample there is a surplus of events for psp ≳200 mev/c
that may be associated with the virtual production of a ∆(1232) that de-excites
through ∆n →pp. perhaps a fifth of the charge-exchange cross section could
be due to this mechanism [51] but, fortunately, such events necessarily involve
significant momentum transfers and would not influence the extrapolation to
q = 0.
after making corrections for events that have larger opening angles [47], the
data analysis gives a value of
dσ/dt (dp →{pp}n)|t=0 = (2/3) dσsf/dt|t=0 = 30 ± 4 mb/(gev/c)² ,    (3.1)
where σsf is the cross section corresponding to the spin flip from the initial proton
to the final neutron and the 2/3 factor comes from the dean sum rule. some of
the above error arises from the estimation of the effects of the wide angle proton
pairs and in the earlier publication of the group [52], where the same data set was
treated somewhat differently, a lower value of 25±3 mb/(gev/c)2 was obtained.
the dubna bubble chamber measurement can lead to a relatively precise
value of the average of the spin-spin amplitudes-squared. using eq. (3.1) one
obtains very similar information to that achieved with the high resolution dp →
{pp}n measurements to be discussed in section 5 and with very competitive error
bars. on the other hand, if the primary aim is to derive estimates for the spin-
independent contribution to the forward np charge-exchange cross section, then it
loses some of the simplicity and directness of the d(n, p)nn/p(n, p)n comparison.
this is because one has to evaluate the ratio of two independently measured
numbers, each of which has its own normalisation uncertainty. the problem is
compounded by the fact that, as we have seen from the direct (n, p) measurements
of rnp(0), the contribution of the spin-independent amplitude represents only a
small fraction of the total.
in the earlier publications by the dubna group, the necessary normalisation
denominator was taken from the elastic neutron-proton scattering measurements
of shepard et al. at the pennsylvania proton accelerator [53]. these were made at
sixteen energies and over wide angular ranges. however they disagreed strongly
with all other existing np data, not only in the absolute values, but also in
the shapes of angular distributions. this problem was already apparent at low
energies, starting at 182 mev. as a result, these data have long been discarded
by physicists working in the field and they have been removed from phase shift
analysis databases, e.g. from the saclay-geneva psa in 1978 [54].
a much more reliable np →pn data set was provided by the er54 group of
bizard et al. [55], numerical values of which are to be found in refs. [56, 57]. fit-
ting these data with two exponentials gives a forward cross section of dσ/dt|t=0 =
54.7 ± 0.2 mb/(gev/c)2, which the dubna group used in their final publica-
tion [47].
it is very different from the shepard et al. result [53] of 36.5 ±
1.4 mb/(gev/c)2, which the group quoted in their earlier work [52]. this dif-
ference, together with the changed analysis corrections, accounts for the diverse
values of rnp(0) from the same experiment that are given in table 4.
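the arithmetic behind the final dubna entry of table 4 is then just the quotient
of two measured numbers with their errors combined in quadrature; the sketch
below reproduces the central value, while the published 0.55 ± 0.08 also folds
in further systematic effects.

import math

def ratio(num, dnum, den, dden):
    # r = num/den with uncorrelated errors added in quadrature
    r = num / den
    return r, r * math.sqrt((dnum / num) ** 2 + (dden / den) ** 2)

# forward dp -> {pp}n value of eq. (3.1) over the free np charge-exchange
# cross section deduced from the bizard et al. data [55]
r, dr = ratio(30.0, 4.0, 54.7, 0.2)
print(f"rnp(0) = {r:.2f} +/- {dr:.2f}")    # about 0.55 +/- 0.07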
3.4 data summary
the values of rnp(0) and rpn(0) from tables 2 and 3 are shown in graphical form
in fig. 1, with only the early dubna point [29] being omitted. the p(d, 2p) values
in table 4 represent the results of increased statistics and a different analysis and
only the point from the last publication is shown [47].
the first comparison of such data with np phase shift predictions was made
in 1991 in a thesis from the freiburg group [58], where both the gw/vpi [59]
and saclay-geneva [54] solutions were studied. the strong disagreement with the results
of the psi measurements [31] was due to the author misinterpreting the relevant
table 4: summary of the available experimental data on the rnp(0) ratio measured
with the dubna bubble chamber using the dp →{pp} n reaction. the kinetic energy
quoted here is the energy per nucleon. the error bars reflect both the statistical
and systematic uncertainties. although the data sets are basically identical,
the 2008 analysis [47] is believed to be the most reliable.

tkin (mev)   rnp(0)         facility     year   ref.
977          0.43 ± 0.22    jinr vblhe   1975   [49]
977          0.63 ± 0.12    jinr vblhe   2002   [52]
977          0.55 ± 0.08    jinr vblhe   2008   [47]
quantity as being the combination of eq. (2.7) instead of rnp(0) of eq. (2.4).
the correct predictions from the current gw/vpi phase shift analysis ob-
tained on the basis of eq. (2.4) are shown in fig. 1 up to the limit of their
validity at 1.3 gev. the small values of rnp(0) at low energies are in part due
to the much greater importance of the spin-independent contribution there, as
indicated by the phase shift predictions. there are effects arising also from the
limited phase space but, when they are included (dashed curve), they change the
results only marginally. a much greater influence is the cut that authors have to
put onto the emerging neutron or proton to try to isolate the charge-exchange
contribution from that of other mechanisms. this procedure becomes far more
ambiguous at low energies when relatively severe cuts have to be imposed.
the data in fig. 1 seem to be largest at around the lowest psi point [31],
where they get close to the allowed limit of 0.67. in fact, if the glauber shadow-
ing effect is taken into account [60], this limit might be reduced to perhaps 0.63.
as already shown by the phase shift analysis, the contribution from the spin-
independent term is very small in this region. on the other hand, in the region
from 1.0 to 1.3 gev the phase shift curve lies systematically above the experi-
mental data. since the conditions for the dean sum rule seem to be best satisfied
at high energies, this suggests that the said solution underestimates the spin-
independent contribution above 1 gev. it has to be noted that the experimental
np database is far less rich in this region.
figure 1: experimental data on the rnp(0) ratio taken in the forward direction.
the closed circles are from the (n, p) data of table 2, the open circles from
the (p, n) data of table 3, and the cross from the (d, 2p) datum of table 4.
these results are compared to the predictions of eq. (2.4) using the current said
solution [22], which is available up to a laboratory kinetic energy of 1.3 gev. the
dashed curve takes into account the limited phase space available at the lower
energies.
4 polarisation transfer measurements in d(⃗p,⃗n)pp
it was first suggested by phillips [14] that the polarisation transfer in the charge
exchange reaction d(⃗p,⃗n)pp should be large provided that the excitation energy
epp in the final pp system is small. under such conditions the diproton is in
the 1s0 state so that there is a spin-flip transition from a jp = 1+ to a 0+
configuration of the two nucleons. this spin-selection argument is only valid for
the highest neutron momentum since, as epp increases, p- and higher waves enter
and the polarisation signal reduces [15]. nevertheless, the reaction has been used
successfully by several groups to produce polarised neutron beams [16, 17, 18].
in the 1s0 limit, there are only two invariant amplitudes in the forward di-
rection and, as pointed out in eq. (2.10), the transverse and longitudinal spin-
transfer coefficients knn and kll are then related by kll(0) + 2knn(0) = −1.
one obvious experimental challenge is to get sufficient energy resolution through
the measurement of the produced neutron to guarantee that the residual pp sys-
tem is in the 1s0 state. the other general problem is knowing sufficiently well the
analysing power of the reaction chosen to measure the final neutron polarisation.
some of the earlier experiments failed on one or both of these counts.
the first measurement of knn(0) for d(⃗p,⃗n)pp seems to have been performed
at the rochester synchrocyclotron at 200 mev in the mid 1960s [61]. a neutron
polarimeter based upon pn elastic scattering was used, with the analysing power
being taken from the existing nucleon-nucleon phase shifts. however, the resolu-
tion on the final proton energies was inadequate for our purposes, with an energy
spread of 12 mev fwhm coming from the primary beam and the finite target
thickness.
a similar experiment was undertaken at 30 and 50 mev soon afterwards at
the rhel proton linear accelerator [62]. the results represent averages over the
higher momentum part of the neutron spectra. a liquid 4he scintillator was used
to measure the analysing power in neutron elastic scattering from 4he, though
the calibration standard was uncertain by about 8%.
although falling largely outside the scope of this review, it should be noted
that there were forward angle measurements of knn(0) at the triangle uni-
versities nuclear laboratory at five very low energies, ranging from 10.6 to
15.1 mev [63]. this experiment also used a 4he polarimeter that in addition
served to measure the neutron energy with a resolution of the order of 200 kev.
although all the data at the lowest epp were consistent with knn(0) ≈−0.2,
a very strong dependence on the pp excitation energy was found, with knn(0)
passing through zero in all cases for epp < 2 mev. hence, after unfolding the
resolution, the true value at epp = 0 is probably slightly more
negative than −0.2. the strong variation with epp is reproduced in a simple
implementation of the faddeev equations that was carried out, though without
the inclusion of the coulomb interaction [64].
the rcnp experiment at 50, 65, and 80 mev used a deuterated polyethy-
lene target [65]. the calibration of the neutron polarimetry was on the basis of
the charge exchange from 6li to the 0+ ground state of 6be, viz 6li(⃗p,⃗n)6begs.
although at the time the polarisation transfer parameters for this reaction had
not been measured, they were assumed to be the same as for the transition to
the first excited (isobaric analogue) state of 6li. this was subsequently shown
to be a valid assumption by a direct measurement of neutron production with
a 6li target [66]. on the other hand, the resolution in epp was of the order of
6 mev, which arose mainly from the measurement of the time of flight over 7 m.
as a consequence, the authors could not identify clearly the strong dependence
of knn(0) on epp that was seen in experiments where the neutron energy was
better measured [63, 67, 68]. such a dependence would have been more evident
in the data if there had not been a contribution at higher epp from the 12c in
the target.
the most precise measurements of the polarisation transfer parameters at
low energies were accomplished in experiments at psi at 56 and 70 mev [67, 68].
one of the advantages of their setup was the time structure of the psi injector
cyclotron, where bursts of width 0.7 ns, separated by 20 ns, were obtained at
72 mev, increasing to about 1.2 ns, separated by 70 ns, at 55 mev. this allowed
the production of a near-monoenergetic neutron beam for use in other low energy
experiments [69]. beams with a good time structure were also obtained after
acceleration of the protons to higher energies and these were necessary for the
measurements of rnp(0) [31].
the target size was small compared to the time-of-flight path of ≈4.3 m in
the initial experiment [67] so that the total timing resolution of typically 1.4 ns
led to one in epp of a few mev. the polarisation of the proton beam was very
well known and that of the recoil neutron was measured by elastic scattering of
the neutrons from 4he. apart from small coulomb corrections, the analysing
power of 4he(⃗n, n)4he should be identical to that of the proton in 4he(⃗p, p)4he,
for which reliable data existed.
the results at both 54 and 71 mev showed that the polarisation transfer
parameters change very strongly with the measured neutron energy and hence
with epp. this must go a long way to explain the anomalous results found by
the rcnp group [65]. at 54 mev both knn and kll were measured and, when
extrapolated to the 1s0 limit of maximum neutron energy, the values gave
kll + 2knn = (−0.1164 ± 0.013) + 2 (−0.4485 ± 0.011) = −1.013 ± 0.026 ,    (4.1)
in very satisfactory agreement with the 1s0 identity of eq. (2.10).
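the quadrature behind eq. (4.1) is easily checked; the sketch below assumes, as
eq. (4.1) implicitly does, that the kll and knn errors are uncorrelated.

import math

k_ll, dk_ll = -0.1164, 0.013     # psi 54 mev values quoted in eq. (4.1)
k_nn, dk_nn = -0.4485, 0.011

combo = k_ll + 2.0 * k_nn
dcombo = math.sqrt(dk_ll**2 + (2.0 * dk_nn)**2)
print(f"kll + 2knn = {combo:.3f} +/- {dcombo:.3f}")   # -1.013 +/- 0.026, cf. eq. (2.10)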
the subsequent psi measurement at 70.4 mev made significant refinements
in two separate areas [68]. the extension of the flight path to 11.6 m improved
the resolution in the neutron energy by about a factor of three, which allowed a
much more detailed study of the epp dependence of knn to be undertaken. the
neutron polarimeter used the p(⃗n, p)n reaction and an independent calibration
was carried out by studying the 14c(⃗p,⃗n)14n2.31 reaction in the forward direction.
the 2.31 mev level in question is the first excited state of 14n, which is the isospin
analogue of the jp = 0+ ground state of 14c. in such a case there can be no spin
flip and the polarisation of the recoil neutron must be identical to that of the
proton beam. in order to isolate this level cleanly, the neutron flight path was
increased further to 16.4 m for this target.
the results confirmed those of the earlier experiment [67] and, in particular,
showed that even in the forward direction knn(0) varied significantly with the
energy of the detected neutron. the dependence of the parameterisation of the
results on epp is shown in fig. 2. near the allowed limit, epp is equal to the
deviation of the neutron energy from its kinematically allowed maximum.
a strong variation of the polarisation transfer parameter with epp is predicted
when using the faddeev equations [68, 70], though these do not give a perfect de-
scription of the data. these calculations represent full multiple scattering schemes
figure 2: fit to the measured values of knn of the d(⃗p,⃗n)pp reaction in the
forward direction at a beam energy of 70.4 mev as a function of the excitation
energy in the pp final state [68].
with all binding corrections and off-shell dependence of the nucleon-nucleon am-
plitudes. nevertheless it is important to note that the kll(0) prediction for very
low epp is quite close to that of the plane wave impulse approximation. on the
other hand, the fact that both the data and a sophisticated theoretical model
show the strong dependence on epp brings into question the hope that the 1s0
proton-proton final state remains dominant in the forward direction for low beam
energies. this is one more reason to doubt the utility of the dean sum rule to
estimate rnp(0) at low energies.
the validity of the plane wave impulse approximation for the unpolarised
d(p, n)pp reaction at 135 mev has also been tested at iucf [71]. the conclusions
drawn here are broadly similar to those from an earlier study at 160 mev [72].
in the forward direction the plane wave approach reproduces the shape of the
dependence on epp out to at least 5 mev, though the normalisation was about
20% too low. on the other hand, the group evaluated the model using an s-state
hulthén wave function for the deuteron and so it is not surprising that some
renormalisation was required.
the epp dependence follows almost exclusively
from the pp wave function, which was evaluated realistically. the comparison
with more sophisticated faddeev calculations was, of course, hampered by the
difficulty of including the coulomb interaction, which is particularly important
for low epp [14].
the values of knn(0) obtained at iucf at 160 mev [72] show a weaker
dependence on epp than that found in the experiments below 100 mev [67, 68].
nevertheless these data do indicate that the influence of p-waves in the final pp
system is not negligible for epp ≈10 mev.
the early measurements of knn(0) and kll(0) at lampf [17, 18] were
hampered by the poor knowledge of the neutron analysing power in ⃗np elas-
tic scattering that was used in the polarimeter. this was noted by bugg and
wilkin [13], who pointed out that, although the data were taken in the forward
direction and with good resolution, they failed badly to satisfy the identity of
eq. (2.10). they suggested that both polarisation transfer parameters should
be renormalised by overall factors so as to impose the condition.
in view of
this argument and the results of the subsequent lampf experiment [73], the
values reported from these experiments in table 5 have been scaled such that
kll(0) + 2knn(0) = −0.98 (to allow for some dilution from the p-waves in the
pp system) and the error bars increased a little to account for the uncertainty in
this procedure.
the above controversy regarding the values of the forward polarisation trans-
fer parameters in the 500 – 800 mev range was conclusively settled by a subse-
quent lampf experiment by mcnaughton et al. in 1992 [73]. following an idea
suggested by bugg [74], the principle was to produce a polarised neutron beam
through the d(⃗p,⃗n)pp reaction, sweep away the charged particles with a bend-
ing magnet, and then let the polarised neutron beam undergo a second charge
exchange through the d(⃗n,⃗p)nn reaction. by charge symmetry, the values of
kll(0) for the two reactions are the same and, if the energy loss in both cases is
minimised, the beam polarisation pb and final proton polarisation pp are related
by
pp = [kll(0)]² pb .    (4.2)
the beauty of this technique is that only proton polarisations had to be measured
with different but similarly calibrated instruments.
also, because the square
occurs in eq. (4.2), the errors in the evaluation of kll are reduced by a factor
of two. the energy losses were controlled by time-of-flight measurements and
very small corrections were made for the fact that the two reactions happened at
slightly different beam energies.
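in practice eq. (4.2) is inverted to obtain the magnitude of kll(0) from the two
measured proton polarisations, with the square root halving the relative error.
the sketch below uses purely illustrative polarisation values, not numbers taken
from ref. [73], and the negative sign of kll(0) in this energy range has to be
supplied by hand.

import math

def k_ll_from_double_ce(p_beam, dp_beam, p_proton, dp_proton):
    # invert eq. (4.2), pp = [kll(0)]² pb; relative errors add in quadrature
    # and are halved by the square root
    ratio = p_proton / p_beam
    k_mag = math.sqrt(ratio)
    dk = 0.5 * k_mag * math.sqrt((dp_proton / p_proton) ** 2 + (dp_beam / p_beam) ** 2)
    return -k_mag, dk          # sign chosen by hand, cf. table 5

# illustrative numbers only
print(k_ll_from_double_ce(p_beam=0.60, dp_beam=0.01, p_proton=0.20, dp_proton=0.01))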
the overall precision achieved in this experiment was typically 3% and the
results clearly demonstrated that there had been a significant miscalibration in
much of the earlier lampf neutron polarisation standards. the group also sug-
gested clear renormalisations of the measured polarisation transfer parameters.
since several of the authors of the earlier papers also signed the mcnaughton
work, this lends a seal of approval to the procedure.
the longitudinal polarisation transfer in the forward direction was measured
later at lampf at 318 and 494 mev [75] with neutron flight paths of, re-
spectively, 200 and 400 m so that the energy resolution was typically 750 kev
(fwhm). this allowed the authors to use the 14c(⃗p,⃗n)14n2.31 reaction to cal-
ibrate the neutron polarimeter, a technique that was taken up afterwards at
psi [68]. including these results, we now have reliable values of either kll(0) or
knn(0) from low energies up to 800 mev.
4.1 data summary
the values of knn(0) and kll(0) measured in the experiments discussed above
are presented in table 5 and shown graphically in fig. 3. the results are com-
pared in the figure with the predictions tabulated in table 1 of the pure 1s0 plane
wave impulse approximation of eq. (2.9) that used the said phase shifts [22] as
input. wherever possible the data are extrapolated to epp = 0. this is especially
important at low energies and, if this causes uncertainties or there are doubts in
the calibration standards, we have tried to indicate such data with open symbols,
leaving closed symbols for cases where we believe the data to be more trustworthy.
figure 3: forward values of the longitudinal and transverse polarisation transfer
parameters kll(0) and knn(0) in the d(⃗p,⃗n)pp reaction as functions of the
proton kinetic energy tn.
in general we believe that greater confidence can
be placed in the data represented by closed symbols, which are from refs. [73]
(stars), [75] (circles), [72] (triangle), [67, 68] (squares), and the average of the five
tunl low energy points [63] (inverted triangle). the open symbols come from
refs. [62] (diamonds), [65] (triangle), [61] (circle), [18] (crosses), and [17] (star),
with the latter two being renormalised as explained in table 5. the curve is the
plane wave 1s0 prediction of eq. (2.9), as tabulated in table 1.
the impulse approximation curve gives a semi-quantitative description of all
the data, especially the more "reliable" results. at low energies we expect that
this approach would be at best indicative but it is probably significant that the
curve falls below the mcnaughton et al.
results [73] in the 500 to 800 mev
range, where the approximation should be much better. it is doubtful whether
the glauber correction [60, 13] can make up this difference and this suggests that
the current values of the said neutron-proton charge-exchange amplitudes [22]
might require some slight modifications in this energy region. similar evidence is
found from the measurements of the deuteron analysing power, to which we now
turn.
table 5: measured values of the longitudinal and transverse polarisation transfer
parameters for the d(⃗p,⃗n)pp reaction in the forward direction. the total
estimated uncertainties quoted do not take into account the influence of the
different possible choices on the cut on the final neutron momentum. data marked ∗
have been renormalised to impose kll(0) + 2knn(0) = −0.98 and the error bar
increased slightly.

tn (mev)   kll(0)           knn(0)            facility    year   ref.
10.6       -                −0.17 ± 0.06      tunl        1980   [63]
12.1       -                −0.20 ± 0.07      tunl        1980   [63]
13.1       -                −0.14 ± 0.05      tunl        1980   [63]
14.1       -                −0.12 ± 0.06      tunl        1980   [63]
15.1       -                −0.22 ± 0.09      tunl        1980   [63]
30         -                −0.13 ± 0.03      rhel        1969   [62]
50         -                −0.23 ± 0.07      rhel        1969   [62]
50         -                −0.27 ± 0.05      rcnp        1986   [65]
54         −0.116 ± 0.013   −0.449 ± 0.011    psi         1990   [67]
65         -                −0.31 ± 0.03      rcnp        1986   [65]
70.4       -                −0.457 ± 0.011    psi         1999   [68]
71         -                −0.480 ± 0.013    psi         1990   [67]
80         -                −0.37 ± 0.04      rcnp        1986   [65]
160        -                −0.43 ± 0.04      iucf        1987   [72]
203        -                −0.27 ± 0.11      rochester   1987   [61]
305        −0.411 ± 0.010   -                 lampf       1992   [73]
318        −0.41 ± 0.01     -                 lampf       1993   [75]
485        −0.579 ± 0.011   -                 lampf       1992   [73]
494        −0.59 ± 0.01     -                 lampf       1993   [75]
500        −0.60 ± 0.03∗    −0.19 ± 0.04∗     lampf       1985   [18]
635        −0.686 ± 0.012   -                 lampf       1992   [73]
650        −0.79 ± 0.03∗    −0.10 ± 0.03∗     lampf       1985   [18]
722        −0.717 ± 0.013   -                 lampf       1992   [73]
788        −0.720 ± 0.017   -                 lampf       1992   [73]
800        −0.68 ± 0.05∗    −0.15 ± 0.04∗     lampf       1981   [17]
800        −0.78 ± 0.04∗    −0.10 ± 0.04∗     lampf       1985   [18]
5 deuteron polarisation studies in high resolution (⃗d, 2p) experiments
we have pointed out through eq. (2.12) that in the 1s0 limit the deuteron (⃗d, 2p)
tensor analysing power in the forward direction can be directly evaluated in terms
of the (⃗p,⃗n) polarisation transfer coefficient.
therefore, instead of measuring
beam and recoil polarisations, much of the same physics can be investigated by
measuring the analysing power with a polarised deuteron beam without any need
to detect the polarisation of the final particles. this is the approach advocated
by bugg and wilkin [19, 13]. unlike the sum-rule methodology applied by a
dubna group [47], only the small part of the p(⃗d, 2p)n final phase space where
epp is at most a few mev needs to be recorded. for this purpose one does not
need the large acceptance offered by a bubble chamber and four separate groups
have undertaken major programmes using different electronic equipment.
we
now discuss their results.
5.1 the spes iv experiments
the franco-scandinavian collaboration working at saclay studied the p(⃗d, 2p)n
reaction at 0.65, 1.6, and 2.0 gev by detecting both protons in the high resolution
spes iv magnetic spectrometer [76, 77, 78, 79]. the small angular acceptance
(1.7◦× 3.4◦) combined with a momentum bite of ∆p/p ≈7% gave access only
to very low pp excitation energies and monte carlo simulations showed that the
peak of the epp distribution was around 650 kev. under these circumstances
any contamination from p-waves in the pp system can be safely neglected. on
the other hand, the small angular acceptance meant that away from the forward
direction the data were primarily sensitive to ann. on account of the small
acceptance, the deflection angle in the spectrometer was adjusted to measure the
differential cross section and ann at discrete values of the momentum transfer
q.
the results for the laboratory differential cross section and ann obtained
at 1.6 gev for both the p(⃗d, 2p)n and quasi-free d(⃗d, 2p)nn reactions are shown
in fig. 4. also shown in the figure are the authors' theoretical predictions of
the plane wave impulse approximation and also ones that included the glauber
double-scattering term [60, 13]. these give quite similar results for momentum
transfers below about 150 mev/c but produce important changes for larger q,
especially in the deuteron analysing power. the neutron-proton charge exchange
amplitudes used were the updated versions of the analysis given in ref. [80] that
were employed in other theoretical estimates [13, 81, 82, 83]. the predictions
were averaged over the spes iv angular acceptance and, in view of the rapid
change in the transition form factor with q, this effect can be significant. the
validity of this procedure was tested by reducing the horizontal acceptance by a
figure 4: the measurements of the p(⃗d, 2p)n laboratory differential cross section
and deuteron tensor analysing power at 1.6 gev by the franco-scandinavian
group [79] are compared to their theoretical impulse approximation estimates
without the double scattering correction (dashed curve) and with (solid line).
the experimental cross section data (stars) have been normalised to the solid
line at q = 0.7 fm−1. it should be noted that the ratio of the data on deuterium
(open circles) to those on hydrogen is not affected by this uncertainty.
factor of two [78].
the acceptance of the spes iv spectrometer for two particles was very hard
to evaluate with any precision and the hydrogen data were normalised to the
theoretical prediction at q = 0.7 fm−1 that included the glauber correction. on
the other hand, the ratio of the cross section with a deuterium and hydrogen
target could be determined absolutely and, away from the forward direction, was
found to be 0.68±0.04. this is reduced even more for small q, precisely because of
the pauli blocking in the unobserved nn system, similar to that we discussed for
the evaluation of rnp(0). since for small epp the np spin-independent amplitude
cannot contribute and the spin-orbit term vanishes at q = 0, the extra reduction
factor should be precisely 2/3, which is consistent with the value observed.
a high precision (unpolarised) d(d, 2p)nn experiment was undertaken at kvi
(groningen) to investigate the neutron-neutron scattering length [84]. in this
case the pp and nn systems were both in the 1s0 region of very small excitation
energies. the shape of the nn excitation energy spectrum was consistent with
that predicted by plane wave impulse approximation with reasonable values of
the nn scattering length.
the primary aim of the franco-scandinavian group was the investigation
of spin-longitudinal and transverse responses in medium and heavy nuclei and
also to extend these studies to the region of ∆(1232) excitation in the
⃗dp →{pp}∆0. nevertheless, it is interesting to ask how useful these data could be
for the establishment or checking of neutron-proton observables.
the (d, 2p)
transition form factor decreases very rapidly with momentum transfer because of
the large deuteron size. as a consequence, the glauber double scattering term,
which shares the momentum transfer between two collisions, becomes relatively
more important.
estimates of this effect are more model dependent [13, 79]
and, as is seen from fig. 4, it may be dangerous to rely on them beyond about
q ≈150 mev/c.
absolute cross sections were not measured in these experiments and there were
only two points in the safe region of momentum transfer and these represented
averages over significant ranges in q. the central values of q marked on fig. 4
were evaluated from a monte carlo simulation of the spectrometer that used the
theoretical model as input. as a consequence, the results give relatively little
information on the magnitudes of the spin-flip compared to the non-spin-flip
amplitudes. it is perhaps salutary to note that at larger q the estimate of the
cross section without the double scattering correction describes the data better
than that which included it. however, the reverse is true for the analysing power.
the major contribution to the np database comes from the measurement of
ann at small q. since the beam polarisation was known with high precision, this
provides a robust relation between the magnitudes of the three spin-flip ampli-
tudes but only at two average values of q. neutron-proton scattering has been
extensively studied in the 800 mev region [22], and so it is not surprising that
this p(⃗d, 2p)n experiment gave results that are completely consistent with its pre-
dictions. the dip in ann in both the theoretical estimates and the experimental
data is due primarily to the expected vanishing of the distorted one-pion-exchange
contribution to one of the spin-spin amplitudes for q ≈mπ.
5.2 the rcnp experiments
almost simultaneously with the start of the spes iv experiments [76], an rcnp
group studied the deuteron tensor analysing power ann in the p(⃗d, 2p)n reaction
at the much lower energies of td = 70 mev [85]. the primary motivation was
to compare the forward angle data with the results of the polarisation transfer
parameter knn that had been measured previously by the same group [65]. for
small angles a magnetic spectrograph was used, which restricted the excitation
energy of the final protons to be less than 200 kev. at larger angles, where the
cross section is much smaller, a si telescope array with a larger acceptance was
employed and the selection epp < 1 mev was imposed in the off-line analysis.
in all cases, the only significant background arose from the random coincidence
of two protons from the breakup of separate deuterons.
this is particularly
important for small angles due to the spectator momentum distribution in the
deuteron.
additional data were taken at 56 mev, but solely in the forward
direction.
at such low energies, the plane wave impulse approximation based upon the
neutron-proton charge exchange amplitudes may provide only a semi-quantitative
description of the experimental data; there are likely to be significant contribu-
tions from direct diagrams. nevertheless, as can be seen in fig. 5, the estimates
given in the paper [85] that were made using the then existing (sp86) said
phase shift solution [22] were reasonable near the forward direction and would be
even closer if modern np solutions were used. at larger angles there is significant
disagreement between the data and model and the authors show that part of
this could be rectified if the np input amplitudes were evaluated at the mean of
the incident and outgoing energies. this feature has been implemented in the
more refined impulse approximation calculations of ref. [81], where the theory
was evaluated in the brick-wall frame.
figure 5: measurements of the deuteron tensor analysing power ann for the
p(⃗d, 2p)n reaction at td = 70 mev by the rcnp collaboration as a function of
momentum transfer q [85]. in all cases epp < 1 mev. the results are compared
to the authors' own theoretical plane wave impulse approximation estimates that
were based upon the said sp86 phase shift solution [22].
the group was disappointed to find that in the forward direction the relation
of eq. (2.12) between their own (⃗p,⃗n) spin transfer data [65] and their deuteron
tensor analysing power was far from being satisfied. this could not be explained
by the difference in beam energy or the smearing over small angles. because the
(d, 2p) results were obtained under the clean 1s0 conditions of epp < 200 kev, the
problem must be laid at the door of the much poorer energy resolution associated
with the detection of neutrons. it was only the later psi experiment [68] which
showed that the spin-transfer parameter varied very strongly with epp and, as
argued in section 4, this is probably the resolution of the discrepancy.
5.3 the emric experiments
the aims and the equipment of the emric collaboration [81, 82, 83], also working
at saclay, were very different and much closer to the original ideas of bugg and
wilkin [19, 13]. the driving force was the desire to use the (⃗d, 2p) reaction as the
basis for the construction of a deuteron tensor polarimeter that could be used
to measure the polarisation of the recoil deuteron in electron-deuteron elastic
scattering. for this purpose the device had to have a much larger acceptance
than that available at spes iv and be compact, so that it could be transported
to and implemented in experiments at an electron machine.
the emric apparatus was composed of an array of 5 × 5 csi scintillator
crystals (4 × 4 × 10 cm3), optically coupled to phototubes, which provided in-
formation on both energy and particle identification. placed at 70 cm from the
liquid hydrogen target, it subtended an angular range of ±7◦so that several
overlapping settings were used in order to increase the angular coverage. since
the orientation of the deuteron polarisation could be rotated through the use of
a solenoid, away from the forward direction this gave access to both transverse
deuteron tensor analysing powers, the sideways ass as well as the normal ann,
under identical experimental conditions.
in the initial experiment at a deuteron beam energy of td = 200 mev [82],
the angular resolution achieved with the csi crystals was only ±1.6◦but in the
second measurement at td = 350 mev the system was further equipped with two
multiwire proportional chambers that improved it to 0.1◦. having identified fast
protons using a pulse-shape analysis technique based on the time-decay properties
of the csi crystals, their energies could be measured with a resolution of the order
of 2%. the missing mass of a proton pair yielded a clean neutron signal with a
fwhm = 14 mev/c2, the only contamination coming from events where not all
the energy was deposited in the csi array.
the compact system allowed measurements over the wide angular and epp
ranges that are necessary for the construction of a polarimeter with a high figure of
merit. however, for the present discussion we concentrate our attention purely on
the data where epp < 1 mev, for which the dilution of the analysing power signal
by the proton-proton p waves is small. the emric results for the differential
cross section and two tensor analysing powers at 350 mev are shown in fig. 6.
due to a slip in the preparation of the publication [83], both the experimental data
and the impulse approximation model were downscaled by a factor of two [86],
which has been corrected in the figure shown here. one should take into account
that there are systematic errors (not shown) arising from the efficiency corrections
that are estimated to be typically of the order of 20%, though they are larger at
the edges of emric [87]. this might account for the slight oscillations of the
data around the theoretical prediction in fig. 6.
figure 6: measurements of the p(⃗d, 2p)n differential cross section and two
deuteron tensor analysing powers for epp < 1 mev at a beam energy of
td = 350 mev by the emric collaboration [83] are compared to the theoretical
plane wave impulse approximation estimates of ref. [81]. the values of both the
experimental cross section data and theoretical model have been scaled up by a
factor of two to correct a presentational oversight in the publication [83].
the plane wave impulse approximation calculation of ref. [81] describes the
data quite well, though one has to note that the presentation is on a logarithmic
scale and that there are at least 20% normalisation uncertainties.
the data
represented three settings of the emric facility and their fluctuations around
the predictions could be partially due to minor imperfections in the acceptance
corrections. the model is also satisfactory for the analysing powers out to at
least q ≈150 mev/c, from which point the ann data remain too negative.
however, as we argued with the spes iv results of fig. 4, it is at about this
value of q that the glauber double scattering correction becomes significant.
we can therefore conclude that the good agreement of the ass and ann data
in the "safe" region of q ≲150 mev/c is confirmation that the ratios of the
different spin-spin contributions given by the bugg amplitudes of ref. [80] are
quite accurate. nevertheless, their overall strength is checked far less seriously
by these data because of the normalisation uncertainty and the logarithmic scale
of fig. 6.
the emric experiment [83] was the only one of those discussed that was
capable of investigating the variation of the deuteron analysing power ann with
excitation energy and, in view of the strong effects found for the d(⃗p,⃗n)pp polari-
sation transfer parameters at 56 and 70 mev [67, 68], it would be interesting to see
if anything similar happened for ann. extrapolating the td = 200 mev results
to the forward direction, it is seen that ann ≈0.23, 0.17 and 0.10 for the three
bins of excitation energy epp < 1 mev, 1 < epp < 4 mev, and 4 < epp < 8 mev,
respectively. this variation is smaller than that found for knn [67, 68]. on the
other hand, since the (longitudinal) momentum transfer remains very small in
the forward direction, the plane wave impulse approximation predicts very little
change with epp.
the aim of the group was to show that the (⃗d, 2p) reaction had a large and
well understood polarisation signal and this was successfully achieved. the expe-
rience gained with the emric device laid the foundations for the development
of the polder polarimeter [88, 86], which was subsequently used to separate
the contributions from the deuteron monopole and quadrupole form factors at
jlab [89].
5.4 the anke experiments
a fourth experimental approach is currently being undertaken using the anke
magnetic spectrometer that is located at an internal target position forming a chi-
cane in the cosy cooler synchrotron. this machine is capable of accelerating
and storing protons and deuterons with momenta up to 3.7 gev/c, i.e., kinetic
energies of tp = 2.9 gev and td = 2.3 gev. the (⃗d, 2p) measurements form part
of a much larger spin programme that will use combinations of polarised beams
and targets [90]. only results from a test experiment at td = 1170 mev are
presently available [91, 92], and these are described below.
there are several problems to be overcome before the p(⃗d, 2p)n reaction could
be measured successfully at anke. the horizontal acceptance for the reaction is
limited to laboratory angles in the range of approximately −2° < θhor < 4° and
much less in the vertical direction. this constrains severely the range of momen-
tum transfers that can be studied. furthermore, the axis of the spin alignment of
the circulating beam is vertical and, unlike the emric case [83], there is insufficient space for a solenoid to rotate the polarisation. as a consequence, the values of ann and ass cannot be extracted under identical conditions. furthermore,
the polarisations of the beam have to be checked independently at the anke
energy. finally, unlike the external beam experiments of spes iv or emric,
the luminosity inside the storage ring has also to be established at the anke
position.
most of the above difficulties can be addressed by using the fact that one
can observe and measure simultaneously in anke the following reactions: ⃗dp →{pp}n, ⃗dp →dp, ⃗dp →3heπ0, and ⃗dp →pspdπ0, where psp is a fast spectator
proton. what cannot, of course, be avoided is the cut in the momentum trans-
fer which at td = 1170 mev means that the deuteron charge exchange reaction
has good acceptance only for q ≲150 mev/c. however, we already saw in the
spes iv case that for larger momentum transfers the double scattering correc-
tions become important and, as a result, the extraction of information on np
amplitudes becomes far more model dependent.
the luminosity, and hence the cross section, was obtained from the mea-
surement of the dp →pspdπ0 reaction, for which the final spectator proton and
produced deuteron fall in very similar places in the anke forward detector to
the two protons from the charge exchange reaction. using only events with small
spectator momenta, and interpreting the reaction as being due to that induced by
the neutron in the beam deuteron, np →dπ0, reliable values could be obtained
for the luminosity. this approach had the subsidiary advantage that to some ex-
tent the glauber shadowing correction [60] cancels out between the dp →pspdπ0
and dp →{pp}n reactions.
the cosy polarised ion source that feeds the circulating beam was pro-
grammed to provide a sequence of one unpolarised state, followed by seven com-
binations of deuteron vector and tensor polarisations. although these were mea-
sured at low energies, it had to be confirmed that there was no loss of polarisation
through the acceleration up to td = 1170 mev. this was done by measuring the
analysing powers of ⃗dp →dp, ⃗dp →3heπ0, and ⃗dp →pspdπ0 and comparing with results given in the literature [93]. as expected, there was no discernible depolarisation.
due to the geometric limitations, the acceptance of the anke forward de-
tector varies drastically with the azimuthal production angle φ. the separation
between ann and ass depends upon studying the variation of the cross section
with φ. an accurate knowledge of the acceptance is not required for this purpose
because one can work with the ratio of the polarised to unpolarised cross section
where, to first order, the acceptance effects drop out. the monte carlo simula-
tion of the acceptance was sufficiently good to give only a minor contribution to
the error in the unpolarised cross section itself. the claimed overall cross section
uncertainty of 6% is dominated by that in the luminosity evaluation.
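the way the acceptance drops out of such a ratio can be illustrated with a toy monte carlo. the acceptance function, polarisation value and azimuthal modulation used below are invented for illustration only and are not those of the anke set-up; the point is simply that the same acceptance multiplies the polarised and unpolarised yields and therefore cancels bin by bin.

    import numpy as np

    rng = np.random.default_rng(1)

    def toy_acceptance(phi):
        """an arbitrary, strongly phi-dependent acceptance (illustrative only)."""
        return 0.2 + 0.8 * np.exp(-((np.mod(phi, 2 * np.pi) - np.pi) / 0.7) ** 2)

    def polarised_weight(phi, pzz=0.6, a_nn=-0.3, a_ss=-0.8):
        """schematic tensor modulation of the cross section with azimuth phi
        (an assumed form, not the exact spin-1 formalism)."""
        return 1.0 + 0.25 * pzz * ((a_nn + a_ss) + (a_nn - a_ss) * np.cos(2 * phi))

    n_events = 1_000_000
    phi = rng.uniform(0.0, 2 * np.pi, n_events)
    accepted = rng.uniform(size=n_events) < toy_acceptance(phi)

    bins = np.linspace(0.0, 2 * np.pi, 13)
    unpol, _ = np.histogram(phi[accepted], bins=bins)
    # the polarised sample is modelled by reweighting the same accepted events
    pol, _ = np.histogram(phi[accepted], bins=bins, weights=polarised_weight(phi[accepted]))

    ratio = pol / unpol                      # the acceptance cancels in this ratio
    centres = 0.5 * (bins[:-1] + bins[1:])
    for c, r in zip(centres, ratio):
        print(f"phi = {c:4.2f}  ratio = {r:.4f}  input modulation = {polarised_weight(c):.4f}")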
the limited anke acceptance also cuts into the epp spectrum and the collab-
oration only quote data integrated up to a maximum of 3 mev. the results shown
in fig. 7 were obtained with a cut of epp < 1 mev, as were the updated theoretical predictions from ref. [81], where the current said np elastic amplitudes at
585 mev were used as input [22].
[figure 7 shows two panels plotted against q in mev/c (0–150): the differential cross section dσ/dq in μb/(mev/c), and the tensor analysing powers ann and ass.]
figure 7:
measurements of the p(⃗d, 2p)n differential cross section and two
deuteron tensor analysing powers for epp < 1 mev at a beam energy of
td = 1170 mev by the anke collaboration [91, 92] are compared to the theo-
retical plane wave impulse approximation estimates of ref. [81].
the agreement between the plane wave impulse approximation and the ex-
perimental data is very good for all three observables over the full momentum
transfer range that is accessible at anke. since there have been many neutron-proton experiments in this region, the np elastic scattering amplitudes should be very reliable at 585 mev. extrapolating the results to q = 0 and
using the impulse approximation model, one finds that ann = −0.26±0.02. this
is to be compared to the said value of −0.28, though no error can be deduced
directly on their prediction [22]. all this suggests that the methodology applied
by the anke collaboration is sufficient to deliver useful np amplitudes at higher
energies, where less is known experimentally. compared to the spes iv and
emric experiments, there are finer divisions in momentum transfer and hence
more points in the safe q region.
apart from taking data up to the maximum cosy energy of td ≈2.3 gev,
there are plans to measure the deuteron charge exchange reaction with a po-
larised beam and target [90]. the resulting values of the two transverse spin
correlation parameters will allow the relative phases of the spin-flip amplitudes
to be determined.
to go higher in energy, it will be necessary to use a proton beam on a deu-
terium target, detecting both slow recoil protons from the p⃗d →{pp}n reaction in the silicon tracking telescopes with which anke is equipped [94]. the drawback
here is that the telescopes require a minimum momentum transfer so that the
energies of the protons can be measured and this is of the order of 150 mev/c at
low epp. this technique has already been used at celsius to generate a tagged
neutron beam on the basis of the pd →npp reaction at 200 mev by measuring
both slow recoil protons in silicon microstrip detectors [95].
5.5
data summary
in table 6 we present the experimental values of the deuteron tensor analysing
power in the ⃗dp →{pp}n reaction extrapolated to the forward direction. the
error bars include some attempt to take into account the uncertainty in the
angular extrapolation. the resulting data are also shown in fig. 8.
table 6: measured values of the forward deuteron tensor analysing power ann in the ⃗dp →{pp}n reaction in terms of the kinetic energy per nucleon tn. the errors include some estimate for the extrapolation to θ = 0°.

  tn (mev)   ann(0)           facility   year   ref.
  28         0.015 ± 0.021    rcnp       1987   [85]
  35         0.134 ± 0.018    rcnp       1987   [85]
  100        0.23 ± 0.03      emric      1993   [83]
  175        0.15 ± 0.03      emric      1993   [83]
  325        −0.05 ± 0.03     spes iv    1995   [79]
  585        −0.26 ± 0.03     anke       2009   [92]
  800        −0.27 ± 0.04     spes iv    1995   [79]
  1000       −0.32 ± 0.04     spes iv    1995   [79]
in the forward direction the plane wave impulse approximation predictions
of eq. (2.8) for the forward analysing power should be quite accurate provided
that the excitation energy in the final diproton is small so that it is in the 1s0
state. this condition is well met by the data described here, where epp is always
below 1 mev [79, 85, 83, 92]. this prediction, which is also tabulated in table 1,
describes the trends of the data very well in regions where the neutron-proton
phase shifts are well determined.
we also show in the figure the values of ann deduced using eq. (2.12) from the d(⃗p,⃗n)pp measurements summarised in table 5. only those data are retained where the neutron polarisation was well measured and the pp excitation energy was small, though generally not as well determined as when the two final protons were detected. the consistency between the (⃗d, pp) and (⃗p,⃗n) data is striking and
it is interesting to note that they both suggest values of ann that are slightly
lower in magnitude at high energies than those predicted by the np phase shifts
of the said group [22]. the challenge now is to continue measuring these data into the uncharted waters of even higher energies.
although we have concentrated here on the results for the forward analysing
power, it is clear that this represents only a small part of the total data set as
demonstrated by the results of figs. 4, 6, and 7.
figure 8: values of the forward deuteron tensor analysing power in the ⃗dp →{pp}n reaction as a function of the kinetic energy per nucleon tn. the directly measured experimental data (closed symbols) from spes iv (squares) [79], emric (closed circles) [83], anke (star) [92], and rcnp (triangles) [85] were all obtained with a pp excitation energy of 1 mev or less. the error bars include some estimate of the uncertainty in the extrapolation to θ = 0. the open symbols were obtained from measurements of the polarisation transfer parameter in d(⃗p,⃗n)pp by using eq. (2.12). the data are from refs. [73] (circles), [75] (squares), [72] (cross), and [67, 68] (triangles). the curve is the plane wave 1s0 prediction of eq. (2.8), as tabulated in table 1.
6
conclusions
originally the deuteron was thought of merely as a useful substitute for a free
neutron target. as an example of this, it has been shown that at large momentum
transfers the spin-dependent parameters measured in free np scattering and quasi-
free in pd collisions give very similar results [24]. the situation is very different
at low momentum transfers where it is not clear which of the nucleons is the
spectator or, indeed, whether the concept of calling one of the nucleons a spectator
makes any sense at all. however, a more interesting effect comes about in the
medium energy neutron charge exchange on the deuteron, nd →p{nn}, when
the excitation energy enn in the two neutron system is very low. under such
conditions the pauli principle demands that the two neutrons should be in a 1s0
state and there then has to be a spin-flip isospin-flip transition from the spin-triplet
np in the deuteron to the singlet nn system. the rate for the charge-exchange
deuteron breakup nd →p{nn} would then depend primarily on the spin-spin
np →pn amplitudes.
the above remarks only assume a practical importance because of an "acci-
dent" in the low energy nucleon-nucleon interaction. in the nn system there is
an antibound (or virtual) state pole only a fraction of an mev below threshold.
although the pole position is displaced slightly in the pp case by the coulomb
repulsion, it results in huge pp and nn scattering lengths. in the nd →pnn re-
action, it leads to the very characteristic peak at the hard end of the momentum
spectrum of the produced proton. since we know that these events are the result
of the spin-flip interaction, we clearly want to use them to investigate in greater
depth this interaction. there are two distinct ways to try to achieve our aims
and we have tried to review them both in this article. these are the inclusive
(sum-rule) approach of section 3 and the high resolution polarisation experiments
of sections 4 and 5.
in impulse approximation, at zero momentum transfer, the d(n, p)nn inter-
action only excites spin-singlet final states and dean [11, 12] has shown that
the inclusive measurement of the proton momentum spectrum can then be in-
terpreted in terms of the spin-flip np amplitudes through the use of a sum rule.
though the shape of the proton momentum spectrum must depend upon the
details of the low energy nn interaction and also on the deuteron d-state, the
integral over all momenta would not, provided that the sum rule has converged
before any of the limitations imposed by the three-body phase space have kicked
in.
the inclusive approach has many positive advantages, in addition to being
independent of the low energy nucleon-nucleon dynamics. in a direct comparison
of the production rates of protons in the d(n, p)nn and p(n, p)n reactions using
the same apparatus, many of the sources of systematic errors drop out in the
evaluation of the cross section ratio rnp(0). these are primarily effects associated
with the neutron flux and uncertainties in the proton detection system.
there are, however, no similar benefits when working with a proton beam,
where one measures instead d(p, n)pp. here one can only construct the rpn(0)
ratio by dividing by a p(n, p)n cross section that has been measured in an in-
dependent experiment. this is probably the reason why there are fewer entries
in table 3 compared to table 2. we must therefore stress that, in general, the
d(n, p)nn determinations of rnp(0) are much to be preferred over those of d(p, n)pp.
on the face of it, the determination of rpn(0) through the measurement of the
two fast protons from the p(d, pp)n reaction in a bubble chamber looks like a very
hard way to obtain a result [47]. in addition to having to use independent data to
provide the normalisation cross section in the denominator, the reaction is first
measured exclusively in order afterwards to construct an inclusive distribution.
on the other hand, a full kinematic determination allows one to check many of
the assumptions made in the analysis and, in particular, those related to the
isolation of the charge-exchange impulse approximation contribution from those
of other possible feynman diagrams.
a major difficulty in any of the inclusive measurements is ensuring that the
phase space is sufficiently large that the sum rule has been saturated without
being contaminated by other driving mechanisms. this means that the low energy
determinations of rnp(0) are all likely to underestimate the "true" value and
there could be some effects from this even through the energy range of the psi
experiments [31]. even more worrying is the fact that at low energies the rapid
variation of knn(0) with epp, as measured in the d(⃗p,⃗n)pp reaction [68], shows
that there are significant deviations from plane wave impulse approximation with
increasing epp. these deviations are probably too large to be ascribed to effects
arising from the variation of the longitudinal momentum transfer with epp. this
brings into question the whole sum rule approach at low energies.
the alternative high resolution approach of measuring the 1s0 peak of the
final state interaction requires precisely that, i.e., high resolution. this can be
achieved in practice by measuring the (n, p) reaction with a very long time-of-
flight path [75] or by measuring the protons in the dp →{pp}n reaction with
either a deuteron beam [79, 83, 92] or a very low density deuterium target [46].
the resulting data are then sensitive to the low energy np interaction in the
deuteron and the pp interaction in the 1s0 final state. however, such interactions
are well understood and lead to few ambiguities in the charge exchange predic-
tions. establishing a good overall normalisation can present more of a challenge.
in addition to obvious acceptance and efficiency uncertainties, if one evaluates
a cross section integrated up to say epp = 3 mev then one has to measure the
3 mev with good absolute precision, which is non-trivial for a deuteron beam in
the gev range. hence it might be that at high energies the inclusive measure-
ments could yield more precise determinations of absolute values of rnp(0) [41]
than could be achieved by using high resolution experiments.
on the other hand, measuring just the fsi peak with good resolution allows
one more easily to follow the variation with momentum transfer and there are also
fewer kinematic ambiguities. more crucially, the spin information from the (⃗n, ⃗p ) or (⃗d, {pp}) reactions enables one to separate the different spin contributions to
the small angle charge exchange cross section. it could of course be argued that
this is not just a benefit for an exclusive reaction since, if the dubna bubble
chamber experiments [47] had been carried out with a polarised deuteron beam,
then these would also have been able to separate the contributions from the two
independent forward spin-spin contributions through the use of the generalised
dean sum rule [13, 10]. it is, however, much more feasible to carry out (d, {pp})
measurements with modern electronic equipment and the hope is that, through
the use of polarised beams and targets, they will lead to evaluations of the relative
phases between the three independent np →pn spin-spin amplitudes out to at
least q ≈mπ [90].
we have been very selective in this review, concentrating our attention on
the forward values of the nd →pnn/np →pn cross section ratio, the (⃗n, ⃗p ) polarisation transfer, and the deuteron tensor analysing power in nucleon-deuteron
charge-exchange break-up collisions. in the latter cases, we have specialised to
the kinematic situations where two of the final nucleons emerge in the 1s0 state.
under these conditions there are strong connections between the three types of
experiment described and this we have tried to stress. however, there is clearly
much additional information in the data at larger angles, which we have here gen-
erally neglected. we have also avoided discussing the extensive data that have
been taken on nuclear targets, where the selectivity of the (⃗n, ⃗p ) or (⃗d, {pp})
reactions can be used to identify particular classes of final nuclear states. at
the higher energies, these states could even include the excitation of the ∆(1232)
isobar.
despite the successful measurements, none of the rnp(0) data nor those from
the exclusive polarised measurements have so far been included in any of the
existing phase shift analyses. they have merely been used as a posteriori checks
on their predictions. we have argued that they could also provide valuable input
into the direct neutron-proton amplitude reconstruction in the backward direc-
tion [10]. for any of these purposes it would be highly desirable to control further
the range of validity of the models used to interpret the data and, in particu-
lar, to examine further the effects of multiple scattering. there remain therefore
theoretical as well as experimental challenges to be overcome.
acknowledgements
we are grateful to j. ludwig for furnishing us with a partial copy of ref. [31]. sev-
eral people gave us further details regarding their own experiments. these include
d. chiladze, m.j. esten, v. glagolev, a. kacharava, s. kox, m.w. mcnaughton,
t. motobayashi, i. sick, and c.a. whitten. there were also helpful discussions
with d.v. bugg, z. janout, n. ladygina, i.i. strakovsky, e. strokovsky, and
yu. uzikov.
one of the authors (cw) wishes to thank the institute of ex-
perimental and applied physics of the czech technical university prague, and
its director stanislav pospíšil, for hospitality and support during the prepara-
tion of this paper. this work has been supported by the research programme
msm 684 077 0029 of the ministry of education, youth, and sport of the czech
republic.
references
[1] g.f. chew, phys. rev. 80, 196 (1950).
[2] g.f. chew, phys. rev. 84, 710 (1951).
[3] r.l. gluckstein and h. bethe, phys. rev. 81, 761 (1950).
[4] i. pomeranchuk, doklady akad. nauk 77, 249 (1951).
[5] w.m. powell, report ucrl 1191, berkeley (1951).
[6] k.m. watson, phys. rev. 88, 1163 (1952).
[7] i.m. shmushkevich, thesis, leningrad physico-technical inst. (1953).
[8] a.b. migdal, j. exp. theor. phys. (ussr) 28 (1955) 3; trans. sov. phys.
jetp 1, 2 (1955).
[9] l.i. lapidus, j. exp. theor. phys. (ussr) 32 (1957) 1437; trans. sov. phys.
jetp 5, 1170 (1957).
[10] f. lehar and c. wilkin, eur. phys. j. a 37, 143 (2008).
[11] n.w. dean, phys. rev. d 5, 1661 (1972).
[12] n.w. dean, phys. rev. d 5, 2832 (1972).
[13] d.v. bugg and c. wilkin, nucl. phys. a 167, 575 (1987).
[14] r.j.n. phillips, rep. prog. phys. 22, 562 (1959); idem proc. phys. soc. a
74, 652 (1959); idem nucl. phys. 53, 650 (1964).
[15] g.v. dass and n.m. queen, j. phys. a 1, 259 (1968).
[16] a.s. clough et al., phys. rev. c 21, 988 (1980).
[17] p.j. riley et al., phys. lett. 103b, 313 (1981).
[18] j.s. chalmers et al., phys. lett. 153b, 235 (1985).
[19] d.v. bugg and c. wilkin, phys. lett. 154b, 243 (1985).
[20] j. bystrický, f. lehar, and p. winternitz, j. phys. (paris) 39, 1 (1978).
[21] g.g. ohlsen, rep. prog. phys. 35, 717 (1972).
[22] r.a. arndt, i.i. strakovsky, and r.l. workman, phys. rev. c 62,
034005 (2000); r.a. arndt, w.j. briscoe, i.i. strakovsky, and r.l. work-
man, phys. rev. c 76, 025209 (2007); said solutions available from
http://gwdac.phys.gwu.edu.
[23] d. adams et al., acta polytechnica (prague) 36, 11 (1996).
[24] a. de lesquen et al., eur. phys. j. c 11, 69 (1999).
[25] m.w. mcnaughton et al., phys. rev. c 53, 1092 (1996) and references therein.
[26] a. ahmidouch et al., eur. phys. j. c 2, 627 (1998) and references therein.
[27] d.v. bugg et al., phys. rev. c 21, 1004 (1980) and references therein.
[28] j.r. cladis, j. hadley, and w.n. hess, phys. rev. 86, 110 (1952).
[29] v.p. dzhelepov et al., izvestia akad. nauk 19, 573 (1955).
[30] v.p. dzhelepov et al., nuovo cim. suppl. 3, 61 (1956).
[31] b. pagels, untersuchung der quasielastischen ladungsaustauschreaktion nd →pnn im neutronenergiebereich von 290 bis 570 mev, diploma thesis, universität freiburg im breisgau (1988).
[32] r.r. larsen, nuovo cim. a 18, 1039 (1960).
[33] v.p. dzhelepov, recent investigations on nucleon-nucleon scattering at the
dubna synchrocyclotron, proc. int. conf. high energy phys., ed. j. prentki
(cern, geneva, 1962), p. 19.
[34] v.k. voitovetskii, i.l. korsunskii, and yu.f. pazhin, nucl. phys. 69, 531
(1965).
[35] c. wong et al., phys. rev. 116, 164 (1959).
[36] d.f. measday, phys. lett. 21, 66 (1966).
[37] b.e. bonner et al., phys. rev. c 17, 664 (1978).
[38] j. franz, e. rössle, h. schmitt, and l. schmitt, physica scripta t87, 14
(1999).
[39] v.i. sharov et al., eur. phys. j. c 37, 79 (2004).
[40] f. lehar, phys. part. nuclei 36, 501 (2005).
[41] v.i. sharov et al., eur. phys. j. a 39, 267 (2009).
[42] j.a. hofmann and k. strauch, phys. rev. 90, 559 (1953).
[43] c.j. batty, r.s. gilmore, and g.h. stafford, phys. lett. 16, 137 (1965).
[44] a. langsford et al., nucl. phys. a 99, 246 (1967).
[45] c.w. bjork et al., phys. lett. 63b, 31 (1976).
[46] m.j. esten, t.c. griffith, g.j. lush, and a.j. metheringham, rev. mod.
phys. 37, 533 (1965); idem nucl. phys. 86, 289 (1966).
[47] v.v. glagolev et al., cent. eur. j. phys. 6, 781 (2008).
[48] l. castillejo and l.s. singh, proc. conf. on nuclear forces and the few
body problem (pergamon press, london, 1960) p. 193.
[49] b.s. aladashvili et al., nucl. phys. b 86, 461 (1975).
[50] b.s. aladashvili et al., j. phys. g 3, 7 (1977).
[51] b.s. aladashvili et al., j. phys. g 3, 1225 (1977).
[52] v.v. glagolev et al., eur. phys. j. a 15, 471 (2002).
[53] p.f. shepard, t.j. devlin, r.e. mishke, and j. solomon, phys. rev. d 10,
2735 (1974).
[54] j. bystrický, c. lechanoine-leluc, and f. lehar, j. phys. (paris) 48, 199
(1987).
[55] g. bizard et al., nucl. phys. b 85, 14 (1975).
[56] j. bystrický and f. lehar, nucleon-nucleon scattering data, ed. h. behrens and g. ebel (fachinformationszentrum karlsruhe), 1978 edition, nr.11-1
(1978), 1981 edition, nr.11-2 and nr.11-3 (1981).
[57] j. bystrický et al., in landolt-börnstein, vol. 9 (springer, berlin, 1980).
[58] r. binz, untersuchung der spinabhängigen neutron-proton wechselwirkung im energiebereich von 150 bis 1100 mev, phd thesis, universität freiburg im breisgau (1991).
[59] r.a. arndt, j.s. hyslop iii, and l.d. roper, phys. rev. d 35, 199 (1987).
[60] r.j. glauber and v. franco, phys. rev. 156, 1685 (1967).
[61] n.w. reay, e.h. thorndike, d. spalding, and a.r. thomas, phys. rev.
150, 806 (1966).
[62] l.p. robertson et al., nucl. phys. a 134, 545 (1969).
[63] p.w. lisowski, r.c. byrd, r.l. walter, and t.b. clegg, nucl. phys. a 334,
45 (1980).
[64] m. jain and g. doolen, phys. rev. c 8, 124 (1973).
[65] h. sakai et al., phys. lett. b 177, 155 (1986).
[66] r. henneck et al., phys. rev. 37, 2224 (1988).
[67] m.a. pickar et al., phys. rev. c 42, 20 (1990).
[68] m. zeier et al., nucl. phys. a 654, 541 (1999); m. zeier, phd thesis, universität basel, 1997.
[69] r. henneck et al., nucl. instr. meth. phys. res. a 259, 329 (1987).
[70] w. glöckle et al., phys. rep. 274, 107 (1996).
[71] b.d. anderson et al., phys. rev. c 54, 1531 (1996).
[72] h. sakai et al., phys. rev. c 35, 344 (1987).
[73] m.w. mcnaughton et al., phys. rev. c 45, 2564 (1992).
[74] d.v. bugg, private communication noted in ref. [73].
[75] d.j. mercer et al., phys. rev. lett. 71, 684 (1993).
[76] c. ellegaard et al., phys. rev. lett. 59, 974 (1987).
[77] c. ellegaard et al., phys. lett. b 231, 365 (1989).
[78] t. sams, ph.d. thesis, niels bohr institute, copenhagen (1990),
available at www.nbi.dk/ sams.
[79] t. sams et al., phys. rev. c 51, 1945 (1995).
[80] r. dubois et al., nucl. phys. a 377, 554 (1982).
[81] j. carbonell, m.b. barbaro, and c. wilkin, nucl. phys. a 529, 653 (1991).
[82] s. kox et al., phys. lett. b 266, 265 (1991).
[83] s. kox et al., nucl. phys. a 556, 621 (1993).
[84] c. bäumer et al., phys. rev. c 71, 044003 (2005).
[85] t. motobayashi et al., nucl. phys. a 481, 207 (1988).
[86] j.-s. réal, ph.d. thesis, university of grenoble (1994).
[87] s. kox, private communication (june 2009).
[88] s. kox et al., nucl. instr. meth. phys. res. a 346, 527 (1994).
[89] d. abbott et al., phys. rev. lett. 84, 5053 (2000).
[90] a. kacharava, f. rathmann, c. wilkin, spin physics from cosy to fair,
cosy proposal 152 (2005), arxiv:nucl-ex/0511028.
[91] d. chiladze et al., phys. lett. b 637, 170 (2006).
[92] d. chiladze et al., eur. phys. j. a 40, 23 (2009).
[93] d. chiladze et al., phys. rev. st accel. beams 9, 050101 (2006).
[94] r. schleichert et al., ieee trans. nucl. sci. 50, 301 (2003).
[95] t. peterson et al., nucl. instr. meth. phys. res. a 527, 432 (2004).
|
0911.1700 | four-dimensional spin foam perturbation theory | we define a four-dimensional spin-foam perturbation theory for the ${\rm bf}$-theory with a $b\wedge b$ potential term defined for a compact semi-simple
lie group $g$ on a compact orientable 4-manifold $m$. this is done by using the
formal spin foam perturbative series coming from the spin-foam generating
functional. we then regularize the terms in the perturbative series by passing
to the category of representations of the quantum group $u_q(\mathfrak{g})$
where $\mathfrak{g}$ is the lie algebra of $g$ and $q$ is a root of unity. the
chain-mail formalism can be used to calculate the perturbative terms when the
vector space of intertwiners $\lambda\otimes \lambda \to a$, where $a$ is the
adjoint representation of $\mathfrak{g}$, is 1-dimensional for each irrep
$\lambda$. we calculate the partition function $z$ in the dilute-gas limit for
a special class of triangulations of restricted local complexity, which we
conjecture to exist on any 4-manifold $m$. we prove that the first-order
perturbative contribution vanishes for finite triangulations, so that we define
a dilute-gas limit by using the second-order contribution. we show that $z$ is
an analytic continuation of the crane-yetter partition function. furthermore,
we relate $z$ to the partition function for the $f\wedge f$ theory.
| introduction
spin foam models are state-sum representations of the path integrals for bf theories on sim-
plicial complexes. spin foam models are used to define topological quantum field theories and
quantum gravity theories, see [1]. however, there are also perturbed bf theories in various di-
mensions, whose potential terms are powers of the b field, see [10]. the corresponding spin-foam
perturbation theory generating functional was formulated in [10], but further progress was hin-
dered by the lack of the regularization procedure for the corresponding perturbative expansion
and the problem of implementation of the triangulation independence.
the problem of implementation of the triangulation independence for general spin foam per-
turbation theory was studied in [2], and a solution was proposed, in the form of calculating
the perturbation series in a special limit. this limit was called the dilute-gas limit, and it was
given by λ →0, n →∞, such that g = λn is a fixed constant, where λ is the perturbation
theory parameter, also called the coupling constant, n is the number of d-simplices in a simplicial decomposition of a d-dimensional compact manifold m and g is the effective perturbation
parameter, also called the renormalized coupling constant. however, the dilute-gas limit could
be used in a concrete example only if one knew how to regularize the perturbative contributions.
the regularization problem has been solved recently in the case of three-dimensional (3d)
euclidean quantum gravity with a cosmological constant [9], following the approach of [3, 8].
the 3d euclidean classical gravity theory is equivalent to the su(2) bf-theory with a b3
perturbation, and the corresponding spin foam perturbation expansion can be written by using
the ponzano–regge model.
the terms in this series can be regularized by replacing all the
spin-network evaluations with the corresponding quantum spin-network evaluations at a root of
unity. by using the chain–mail formalism [23] one can calculate the quantum group perturbative
corrections, and show that the first-order correction vanishes [9]. consequently, the dilute-gas
limit has to be modified so that g = λ2n is the effective perturbation parameter [9].
another result of [9] was to show that the dilute gas limit cannot be defined for an arbitrary
class of triangulations of the manifold. one needs a restricted class of triangulations such that
the number of possible isotopy classes of a graph defined from the perturbative insertions is
bounded. in 3d this can be achieved by using the triangulations coming from the barycentric
subdivisions of a regular cubulation of the manifold [9].
in this paper we are going to define the four-dimensional (4d) spin-foam perturbation theory
by using the same approach and the techniques as in the 3d case. we start from a bf-theory
with a b ∧b potential term defined for a compact semi-simple lie group g on a compact
4-manifold m. in section 2 we define the formal spin foam perturbative series by using the
spin-foam generating functional method. we then regularize the terms in the series by passing
to the category of representations for the quantum group uq(g) where g is the lie algebra of g
and q is a root of unity. in sections 4 and 5 we then use the chain–mail formalism to calculate
the perturbative contributions. the first-order perturbative contribution vanishes, so that we
define the dilute-gas limit in section 6 by using the second-order contribution. we calculate
the partition function z in the dilute-gas limit for a class of triangulations of a 4-dimensional
manifold which are arbitrarily fine and have a controllable local complexity.
we conjecture
that such a class of triangulations always exists for any 4-dimensional manifold, and can be
given by the triangulations corresponding to the barycentric subdivisions of a fixed cubulation
of the manifold. we then show that z is given as an analytic continuation of the crane–yetter
partition function. in section 7 we relate the path-integral for the f ∧f theory with the spin
foam partition function and in section 8 we present our conclusions.
2
spin foam perturbative expansion
let g be the lie algebra of a semisimple compact lie group g. the action for a perturbed
bf-theory in 4d can be written as
$s = \int_m \left( b^i \wedge f^i + \lambda\, g_{ij}\, b^i \wedge b^j \right)$,    (2.1)
where $b = b^i l_i$ is a g-valued two-form, $l_i$ is a basis of g, $f = da + \frac{1}{2}[a, a]$ is the curvature 2-form for the g-connection a on a principal g-bundle over m, $x_i = x^i$ and $g_{ij}$ is a symmetric g-invariant tensor. here if x and y are vector fields in the manifold m then $[a, a](x, y) = [a(x), a(y)]$.
we will consider the case when gij ∝δij, where δij is the kronecker delta symbol. in the
case of a simple lie group, this is the only possibility, while in the case of a semisimple lie
group one can also have gij which are not proportional to δij. for example, in the case of
the so(4) group one can put gab,cd = ǫabcd, where ǫ is the totally antisymmetric tensor and
1 ≤a, . . . , d ≤4. we will also use the notation tr (xy ) = xiyi and ⟨xy ⟩= gijxiy j.
consider the path integral
$z(\lambda, m) = \int da\, db\; e^{\, i\int_m \left( b^i \wedge f^i + \lambda \langle b \wedge b \rangle \right)}$.    (2.2)
it can be evaluated perturbatively in λ by using the generating functional
$z_0(j, m) = \int da\, db\; e^{\, i\int_m \left( b^i \wedge f^i + b^i \wedge j^i \right)}$,    (2.3)
(where $j^i l_i$ is an arbitrary 2-form valued in g) and the formula
$z(\lambda, m) = \exp\Big( -i\lambda \int_m g_{ij}\, \frac{\delta}{\delta j^i} \wedge \frac{\delta}{\delta j^j} \Big)\, z_0(j, m)\Big|_{j=0}$.    (2.4)
the path integrals (2.3) and (2.4) can be represented as spin foam state sums by discretizing
the 4-manifold m, see [10]. this is done by using a simplicial decomposition (triangulation)
of m, t(m). it is useful to introduce the dual cell complex t ∗(m) [24] (a cell decomposition
of m), and we will denote the vertices, edges and faces of t ∗(m) as v, l and f, respectively.
a vertex v of t ∗(m) is dual to a 4-simplex σ of t(m), an edge l of t ∗(m) is dual to a tetra-
hedron τ of t(m) and a face f of t ∗(m) is dual to a triangle ∆of t(m).
the action (2.1) then takes the following form on t(m)
$s = \sum_{\Delta} \mathrm{tr}\,\big( b_\Delta f_f \big) + \frac{\lambda}{5} \sum_{\sigma} \sum_{\Delta',\Delta'' \in \sigma} \langle b_{\Delta'} b_{\Delta''} \rangle$,
where ∆′ and ∆′′ are pairs of triangles in a four-simplex σ whose intersection is a single vertex of σ and $b_\Delta = \int_\Delta b$. the variable $f_f$ is defined as
$e^{f_f} = \prod_{l \in \partial f} g_l$,
where f is the face dual to a triangle ∆, l's are the edges of the polygon boundary of f and gl
are the dual edge holonomies.
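the discretised curvature just defined is nothing but an ordered product of edge holonomies around the boundary of the dual face. the short sketch below is purely illustrative: it assumes g = su(2) realised as 2×2 unitary matrices, picks random edge holonomies, and extracts $f_f$ from the matrix logarithm of their ordered product.

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(0)
    pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]

    def random_su2():
        """a random su(2) element exp(i a.sigma/2) with a small random vector a."""
        a = 0.3 * rng.standard_normal(3)
        return expm(0.5j * sum(ai * s for ai, s in zip(a, pauli)))

    # edge holonomies g_l around the polygonal boundary of a dual face f (5 edges here)
    edge_holonomies = [random_su2() for _ in range(5)]

    face_holonomy = np.linalg.multi_dot(edge_holonomies)   # ordered product = exp(f_f)
    f_f = logm(face_holonomy)                              # discretised curvature variable

    print("unitary:", np.allclose(face_holonomy @ face_holonomy.conj().T, np.eye(2)))
    print("f_f anti-hermitian (in the lie algebra):", np.allclose(f_f, -f_f.conj().T))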
one can then show that
$z_0(j, m) = \sum_{\lambda_f, \iota_l} \prod_f \dim \lambda_f \prod_v a_5(\lambda_{f(v)}, \iota_{l(v)}, j_{f(v)})$,    (2.5)
where the amplitude a5(λf(v), ιl(v), jf(v)), also called the weight for the 4-simplex σ, is given
by the evaluation of the four-simplex spin network whose edges are colored by ten λf(v) irreps
and five ιl(v) intertwiners, while each edge has a d(λ)(ej ) insertion. here d(λ)(ej ) is the repre-
sentation matrix for a group element ej in the irreducible representation (irrep) λ, see [10, 18].
note that a vertex v is dual to a 4-simplex σ, so that the set of faces f(v) intersecting at v is
dual to the set of ten triangles of σ. similarly, the set of five dual edges l(v) intersecting at v
is dual to the set of five tetrahedrons of σ. the sum in (2.5) is over all colorings λf of the
set of faces f of t ∗(m) by the irreps λf of g, as well as over the corresponding intertwiners ιl
for the dual complex edges l. equivalently, λf label the triangles of t(m), while ιl label the
tetrahedrons of t(m).
in the case of the su(2) group and j = 0, the amplitude a5 gives the 15j symbol, see [1, 13].
for the general definition of 15j-symbols a5(λf(v), ιl(v)) see [16, 17]. then z0(j = 0, m) can
be written as
$z_0(m) = \sum_{\lambda_f, \iota_l} \prod_f \dim \lambda_f \prod_v a_5(\lambda_{f(v)}, \iota_{l(v)})$,    (2.6)
which after quantum group regularization (by passing to a root of unity) becomes a manifold
invariant known as the crane–yetter invariant [6].
the formula (2.4) is now given by the discretized expression
$z(\lambda, m, t) = \exp\Big( -\frac{i\lambda}{5} \sum_{\sigma} \sum_{\Delta,\Delta' \in \sigma} g_{ij}\, \frac{\partial^2}{\partial j^i_f\, \partial j^j_{f'}} \Big)\, z_0(j, m)\Big|_{j=0}$,    (2.7)
where t denotes the triangulation of m. the equation (2.7) can be rewritten as
$z(\lambda, m, t) = \sum_{\lambda_f, \iota_l} \prod_f \dim \lambda_f\; \exp\Big( -i\lambda \sum_{\sigma} \hat v_\sigma \Big) \prod_v a_5(\lambda_{f(v)}, \iota_{l(v)})$,    (2.8)
where the operator $\hat v_\sigma$ is given by
$\hat v_\sigma = \frac{1}{5} \sum_{\Delta,\Delta' \in \sigma} g_{ij}\, l^{(\lambda)}_i \otimes l^{(\lambda')}_j \;\equiv\; \frac{1}{5} \sum_{f,f';\, v \in f \cap f'} g_{ij}\, l^{(\lambda_f)}_i \otimes l^{(\lambda_{f'})}_j$.    (2.9)
this operator acts on the σ-spin network evaluation a5 by inserting the lie algebra basis element $l^{(\lambda)}$ for an irrep λ into the spin network edge carrying the irrep λ. the expression (2.9) follows from (2.5), (2.7) and the relation
$\frac{\partial d^{(\lambda)}(e^j)}{\partial j^i}\Big|_{j=0} = l^{(\lambda)}_i$.
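the relation above is just the statement that differentiating the exponentiated group element at the origin returns the generator. the following minimal numerical check assumes g = su(2) in the fundamental (spin-1/2) representation with the convention $l_i = \frac{i}{2}\sigma_i$; the convention is ours, chosen only for the illustration.

    import numpy as np
    from scipy.linalg import expm

    pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]
    gen = [0.5j * s for s in pauli]                  # assumed basis l_i of su(2)

    def rep_matrix(j):
        """d(e^j) for the group element exp(j^i l_i) in the spin-1/2 irrep."""
        return expm(sum(ji * li for ji, li in zip(j, gen)))

    eps = 1e-6
    for i in range(3):
        j = np.zeros(3)
        j[i] = eps
        derivative = (rep_matrix(j) - rep_matrix(-j)) / (2 * eps)   # central difference
        assert np.allclose(derivative, gen[i], atol=1e-8)           # equals l_i at j = 0
    print("d d(e^j)/dj^i at j = 0 reproduces l_i in the spin-1/2 irrep")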
following (2.9), let us define a g-edge in a 4-simplex spin network as a line connecting the
middle points of two edges of the spin network, such that this line is labelled by the tensor gij.
we associate to a g-edge the linear map
$\sum_{ij} g_{ij}\, l^{(\lambda_f)}_i \otimes l^{(\lambda_{f'})}_j$,
where $\lambda_f$ and $\lambda_{f'}$ are the labels of the spin network edges connected by the g-edge and $l^{(\lambda_f)}_i$ denotes the action of the basis element $l_i$ of g in the representation $\lambda_f$.
the action of the operator $g_{ij}\, l^{(\lambda_f)}_i \otimes l^{(\lambda_{f'})}_j$ in a single 4-simplex of the j = 0 spin foam state sum (2.5) can be represented as the evaluation of a spin network $\gamma_{\sigma,g}$ obtained from the 4-simplex spin network $\gamma_\sigma$ by adding a g-edge between the two edges of $\gamma_\sigma$ labeled by $\lambda_f$ and $\lambda_{f'}$.
when gij ∝δij and the intertwiners cλλ′a, from λ ⊗λ′ to a, where a is the adjoint repre-
sentation, form a one-dimensional vector space, γσ,g becomes the 4-simplex spin network with an
insertion of an edge labeled by the adjoint irrep, see fig. 1. this simplification happens because
the matrix elements of $l^{(\lambda)}_i$ can be identified with the components of the intertwiner $c^{\lambda\lambda a}$, since these intertwiners are one-dimensional vector spaces, i.e.
$\big(l^{(\lambda)}_i\big)_{\alpha\beta} = c^{\lambda\lambda a}_{\alpha\,\beta\, i}$,    (2.10)
so that
$\sum_i \big(l^{(\lambda)}_i\big)_{\alpha\beta}\, \big(l^{(\lambda')}_i\big)_{\alpha'\beta'} = \sum_i c^{\lambda\lambda a}_{\alpha\,\beta\, i}\, c^{a\lambda'\lambda'}_{i\,\alpha'\,\beta'}$.    (2.11)
then the right-hand side of the equation (2.11) represents the evaluation of the spin network in
fig. 2. the condition (2.10) is not too restrictive since it includes the su(2) and so(4) groups.
we need to consider this particular case in order to be able to use the chain–mail techniques.
figure 1. a 15j symbol (4-simplex spin network) with a g-edge insertion (dashed line). here a is the
adjoint representation.
figure 2. spin network form of equation (2.11).
the action of $(\hat v_\sigma)^n$ in a5 is given by the evaluation of a $\gamma_{\sigma,n}$ spin network which is obtained from the $\gamma_\sigma$ spin network by inserting n g-edges labeled by the adjoint irrep. these additional edges connect the edges of $\gamma_\sigma$ which correspond to the triangles of the 4-simplex σ where the operators $l^{(\lambda)}_i \otimes l^{(\lambda')}_i$ from $\hat v_\sigma$ act.
let
$z(m, t) = \sum_{n=0}^{\infty} i^n \lambda^n z_n(m, t)$,
then
$z_0(m) = z_0(j, m)\big|_{j=0}$.
the state sum z0 is infinite, unless it is regularized. the usual way of regularization is by
using the representations of the quantum group uq(g) at a root of unity, which, by passing to
a finite-dimensional quotient, yields a modular hopf algebra [26]. there are only finitely many
irreps with non-zero quantum dimension in this case, and the corresponding state sum z0 has
the same form as in the lie group case, except that the usual spin network evaluation used for
the spin-foam amplitudes has to be replaced by the quantum spin network evaluation. in this
way one obtains a finite and triangulation independent z0, usually known as the crane–yetter
invariant [6]. this 4-manifold invariant is determined by the signature of the manifold [23, 26].
the same procedure of passing to the quantum group at a root of unity can be applied to the
perturbative corrections zn, but in order to obtain triangulation independent results, the dilute
gas limit has to be implemented [2, 9].
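to give an idea of how little data survives this truncation, the sketch below lists the irreps of non-zero quantum dimension for $u_q(su(2))$ at $q = e^{i\pi/r}$ and evaluates their quantum dimensions together with $\eta = \sum_\lambda (\dim_q \lambda)^2$. it is a minimal numerical illustration that assumes g = su(2) and the standard convention $[n]_q = \sin(n\pi/r)/\sin(\pi/r)$, not necessarily the normalisation of [3].

    import math

    def quantum_integer(n, r):
        """[n]_q = sin(n*pi/r) / sin(pi/r) for q = exp(i*pi/r)."""
        return math.sin(n * math.pi / r) / math.sin(math.pi / r)

    def admissible_spins(r):
        """spins j = 0, 1/2, ..., (r-2)/2 have non-zero quantum dimension."""
        return [k / 2 for k in range(r - 1)]

    r = 5                                        # q = exp(i*pi/5)
    dims = {j: quantum_integer(int(2 * j) + 1, r) for j in admissible_spins(r)}
    eta = sum(d ** 2 for d in dims.values())

    for j, d in dims.items():
        print(f"spin {j}: dim_q = {d:.6f}")
    print(f"eta = {eta:.6f}")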
2.1
the chain–mail formalism and observables of the crane–yetter invariant
the chain–mail formalism for defining the turaev–viro invariant and the crane–yetter inva-
riant was introduced by roberts in [23]. in the four-dimensional case, the construction of the
related manifold invariant z0(m) had already been implemented by broda in [4]. however, the
equality with the crane–yetter invariant, as well as the relation of z0(m) with the signature
of m appears only in the work of roberts [23].
we will follow the conventions of [3]. let m be a four-dimensional manifold. suppose we
have a handle decomposition [12, 14, 24] of m, with a unique 0-handle, and with hi handles
of order i (where i = 1, 2, 3, 4).
this gives rise to the link chlm in the three-sphere s3,
with h2 + h1 components (the "chain–mail link"), which is the kirby diagram of the handle
decomposition [12, 14]. namely, we have a dotted unknotted (and 0-framed) circle for each
1-handle of m, determining the attaching of the 1-handle along the disjoint union of two balls,
and we also have a framed knot for each 2-handle, along which we attach it. this is the four-
dimensional counterpart of the three-dimensional chain–mail link of roberts, see [23, 3].
the crane–yetter invariant z0(m), which coincides with the invariant z0(j , m)j =0, defined
in the introduction, see equation (2.6), can be represented as a multiple of the spin-network eva-
luation of the chain mail link chlm, colored with the following linear combination of quantum
group irreps (the ω-element):
$\omega = \sum_\lambda (\dim_q \lambda)\, \lambda$,
see [23]. explicitly, by using the normalizations of [3]:
$z_0(m) = \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)}\, \langle \mathrm{chl}_m, \omega^{h_2+h_1} \rangle$,    (2.12)
where
$\eta = \sum_\lambda (\dim_q \lambda)^2$.
roberts also proved in [23] that $z_0(m) = \kappa^{s(m)}$, where κ is the evaluation of a 1-framed unknot colored with the ω-element and s(m) denotes the signature of m.
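to make this statement concrete, note that it fixes the regularized state sum on standard examples purely in terms of classical signature data: since $s(s^4) = 0$, $s(cp^2) = 1$ and $s(k3) = -16$, one gets $z_0(s^4) = 1$, $z_0(cp^2) = \kappa$ and $z_0(k3) = \kappa^{-16}$. the signatures quoted are standard facts; the only other input is the formula $z_0(m) = \kappa^{s(m)}$ above.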
given a triangulated manifold (m, t), consider the natural handle decomposition of m ob-
tained from thickening the dual cell decomposition of m; see [23, 24].
then a handle de-
composition of m (with a single 0-handle), such that (2.12) explicitly gives the formula for
z0(m) = z0(j , m)j =0, appearing in equation (2.5), is obtained from this handle decomposi-
tion by canceling pairs of 0- and 1-handles [12, 14, 24], until a single 0-handle is left; in this case,
in the vicinity of each 4-simplex, the chain–mail link has the form given in fig. 3. this explicit
calculation appears in [23, 3] and essentially follows from the lickorish encircling lemma [13, 15]:
the spin-network evaluation of a graph containing a unique strand (colored with the represen-
tation λ) passing through a zero framed unknot colored with ωvanishes unless λ is the trivial
representation.
a variant of the crane–yetter model z0 in (2.12) is achieved by inspecting its observables,
addressed in [3]. consider a triangulated 4-manifold m. consider the handle decomposition
of m obtained from thickening the dual complex of the triangulation, and eliminating pairs of
0- and 1-handles until a single 0-handle is left. any triangle of the triangulation of m therefore
yields a 2-handle of m.
now choose a set s with ns triangles of m, which will span a (possibly singular) surface σ2
of m. color each t ∈s with a representation λt. the associated observable of the crane–yetter
invariant is:
$z_0(m, s, \lambda_s) = \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)}\, \langle \mathrm{chl}_m;\, \omega^{h_2+h_1-n_s}, \lambda_s \rangle \prod_{t \in s} \dim_q(\lambda_t)$,    (2.13)
figure 3. portion of the chain-mail link corresponding to a 4-simplex; this may have additional meridian
circles (corresponding to 1-handles) since we also eliminate pairs of 0- and 1-handles, until a single 0-
handle is left.
where $\langle \mathrm{chl}_m;\, \omega^{h_2+h_1-n_s}, \lambda_s \rangle$ denotes the spin-network evaluation of the chain–mail link $\mathrm{chl}_m$, where the components associated with the triangles t ∈ s are colored by $\lambda_s$ and the remaining components with ω. we can see $\mathrm{chl}_m$ as a pair $(l_s, k_s)$, where $k_s$ denotes the components of the chain–mail link given by the triangles of s and $l_s$ the remaining components of the chain–mail link. we thus have $\mathrm{chl}_m = l_s \cup k_s$.
let zwrt(n, γ) denote the witten–reshetikhin–turaev invariant of the colored graph γ
embedded in the 3-manifold n, in the normalization of [3]. then theorem 2 of [3] says that:
$z_0(m, s, \lambda_s) = z_{\mathrm{wrt}}\big( \partial(m \setminus \hat\sigma_2), k'_s \big)\, \kappa^{s(m \setminus \hat\sigma_2)}\, \eta^{\frac{\chi(\sigma_2)}{2} - n_s} \prod_{t \in s} \dim_q(\lambda_t)$.    (2.14)
here $\hat\sigma_2$ is an open regular neighborhood of $\sigma_2$ in m, s denotes the signature of the manifold and χ denotes the euler characteristic. the link $k'_s$ is the link in $\partial(m \setminus \hat\sigma_2)$ along which the 2-handles associated to the triangles of $\sigma_2$ would be attached, in order to obtain m. this theorem of [3] essentially follows from the fact that the pair $(l_s, k_s)$ is a surgery presentation of the pair $\big(\partial(m \setminus \hat\sigma_2), k'_s\big)$, a link embedded in a manifold, apart from connected sums with $s^1 \times s^2$.
3
the first-order correction
recall that there are two possible ways of representing the crane–yetter invariant z0: as a state
sum invariant (2.6) and as the evaluation of a chain–mail link (2.12). it follows from (2.8)
that z1 can be written as $n z'_0$ where n is the number of 4-simplices of t(m) and $z'_0$ is the state sum given by a modification of the state sum z0 where a single 4-simplex is perturbed by the operator $\hat v_\sigma$.
in order to calculate $z'_0$ consider a 4-manifold m with a triangulation t whose dual complex is t∗. given a 4-simplex σ ∈t we define an insertion i as being a choice of a pair of triangles
is t ∗. given a 4-simplex σ ∈t we define an insertion i as being a choice of a pair of triangles
of σ which do not belong to the same tetrahedron of σ and have therefore a single vertex in
common (following (2.7) we will distinguish the order in which the triangles appear). given the
colorings λf of the triangles of σ (or of the dual faces f) and the colorings ιl of the tetrahedrons
of σ (or of the dual edges l), then a5(λf, ιl, i) is the evaluation of the spin network of fig. 1.
figure 4. portion of the graph $\mathrm{chl}^i_m$ corresponding to a 4-simplex with an insertion. all strands are to be colored with ω, unless they intersect the insertion, as indicated.
we then have:
$z_1(m, t) = \frac{1}{5} \sum_{\sigma} \sum_{i_\sigma} \sum_{\lambda_f, \iota_l} a_5(\lambda_{f(\sigma)}, \iota_{l(\sigma)}, i_\sigma) \prod_f \dim \lambda_f \prod_{v \neq v(\sigma)} a_5(\lambda_{f(v)}, \iota_{l(v)})$,    (3.1)
where v(σ) is the vertex of t ∗corresponding to σ. this sum is over the set of all 4-simplices σ
of t(m), as well as over the set iσ of all insertions of σ and over the set of all colorings (λf, ιl)
of the faces and the edges of t ∗(m) (or equivalently, a sum over the colorings of the triangles
and the tetrahedrons of t(m).)
the infinite sum in (3.1) is regularized by passing to the category of representations of the
quantum group uq(g), where q is a root of unity. in order to calculate z1(m, t) in this case, let
us represent it as an evaluation of the chain–mail link chl(m, t) [23] in the three-sphere s3.
as explained in subsection 2.1, the invariant z0(m) can be represented as a multiple of
the evaluation of the chain mail link chlm colored with the linear combination of the quan-
tum group irreps $\omega = \sum_\lambda (\dim_q \lambda)\,\lambda$, see equation (2.12). analogously, by extending the 3-
dimensional approach of [9], a chain-mail formulation for the equation (3.1) can be given. con-
sider the handle decomposition of m obtained by thickening the dual cell decomposition of m
associated to the triangulation t of m. one can cancel pairs of 0- and 1-handles, until a single
0-handle is left. let chlm be the associated chain-mail link (the kirby diagram of the handle
decomposition). we then have
$z_1(m, t) = \frac{1}{5} \sum_{i} \sum_{\lambda,\lambda'} \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)}\, \dim_q \lambda\, \dim_q \lambda'\, \langle \mathrm{chl}^i_m, \omega^{h_2+h_1-2}, \lambda, \lambda' \rangle$,    (3.2)
where, as before, an insertion i is the choice of a pair of triangles t1 and t2 in some 4-simplex
of m, such that t1 and t2 have only one vertex in common. given an insertion i, the graph
$\mathrm{chl}^i_m$ is obtained from the chain-mail link $\mathrm{chl}_m$ by inserting a single edge (colored with the adjoint representation of g) connecting the components of $\mathrm{chl}_m$ (colored with λ and λ′) corresponding to t1 and t2; see fig. 4. $\mathrm{chl}^i_m$ can be considered as a pair $(l_i, \gamma_i)$ where $l_i$ denotes the components of $\mathrm{chl}^i_m$ not incident to the insertion i (of which there are exactly $h_2 + h_1 - 2$) and $\gamma_i$ denotes the component of $\mathrm{chl}^i_m$ containing the insertion i. hence we use the notation $\langle \mathrm{chl}^i_m, \omega^{h_2+h_1-2}, \lambda, \lambda' \rangle$ to mean the evaluation of the pair $(l_i, \gamma_i)$ where all components
of $l_i$ are colored with ω and the two circles of $\gamma_i$ are colored with λ and λ′, with an extra edge connecting them, colored with the adjoint representation a.
figure 5. spin network $\gamma_1(\lambda, \lambda')$. here a is the adjoint representation.
consider an insertion i connecting the triangles t1 and t2, which intersect at a single vertex.
equation (3.2) coincides, apart from the inclusion of the single insertion, with the observables of
the crane–yetter invariant [3] (subsection 2.1) for the pair of triangles colored with λ and λ′; see
equation (2.13). by using the discussion in section 6 and theorem 2 of [3] (see equation (2.14))
one therefore obtains, for each insertion i and each pair of irreps λ, λ′:
$\eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)}\, \langle \mathrm{chl}^i_m, \omega^{h_2+h_1-2}, \lambda, \lambda' \rangle = z_{\mathrm{wrt}}(s^3, \gamma(\lambda,\lambda'))\, \eta^{-\frac{3}{2}}\, z_0(m)$,    (3.3)
where γ(λ, λ′) is the colored link of fig. 5. in addition zwrt denotes the witten–reshetikhin–
turaev invariant [27, 22] of graphs in manifolds, in the normalization of [3, 8].
note that in the notation of equation (2.14), $\sigma_2 = t_1 \cup t_2$ is two triangles which intersect at a vertex, thus $\chi(\sigma_2) = 1$, and also its regular neighborhood $\hat\sigma_2$ is homeomorphic to the 4-disk, thus $s(m) = s(m \setminus \hat\sigma_2)$.
equation (3.3) follows essentially from the fact that the pair $\mathrm{chl}^i_m = (l_i, \gamma_i)$ is a surgery presentation [14, 12] of the pair $(s^3, \gamma(\lambda,\lambda'))$, apart from connected sums with $s^1 \times s^2$; c.f. theorem 3, below.
to see this, note that (after turning the circles associated with the 1-handles of m into dotted
circles) the link li is a kirby diagram for the manifold m minus an open regular neighborhood σ′
of the 2-complex σ made from the vertices and edges of the triangulation of m, together with the
triangles t1 and t2. since t1 and t2 intersect at a single vertex, any regular neighborhood $\hat\sigma_2$ of the (singular) surface $\sigma_2$ spanned by t1 and t2 is homeomorphic to the 4-disk $d^4$. therefore σ′ is certainly homeomorphic to the boundary connected sum $\big( \natural_{i=1}^{k} (d^3 \times s^1) \big) \natural\, d^4$, whose boundary is $\big( \#_{i=1}^{k} (s^2 \times s^1) \big) \# s^3$, for some positive integer k. here # denotes the connected sum of manifolds and ♮ denotes the boundary connected sum of manifolds. the circles $c_1, c_2 \subset \gamma_i$ associated with the triangles t1 and t2 define a link which lives in $\partial\hat\sigma_2 \cong s^3 \subset \big( \#_{i=1}^{k} (s^2 \times s^1) \big) \# s^3$.
the two circles c1 and c2 define a 0-framed unlink in s3, with each individual component being
unknotted. let us see why this is the case. we will turn the underlying handle decomposition
of m upside down, by passing to the dual handle decomposition of m, where each i-simplex of
the triangulation of m yields an i-handle of m; see [12, p. 107]. consider the bit p ⊂m of
the handle-body yielded by the 2-complex σ, thus p is (like σ′) a regular neighborhood of σ.
maintaining the 0-handle generated by the vertex t1∩t2, eliminate some pairs of 0- and 1-handles,
in the usual way, until a single 0-handle of p is left. clearly ∂p ∗= ∂(m \ σ′), where ∗denotes
the orientation reversal. the circles c1 and c2, in ∂p ∗, correspond now (since we considered
the dual handle decomposition) to the belt-spheres of the 2-handles of p (attached along ct1
and ct2) and associated with the triangles t1 and t2. since c1 and c2 are 0-framed meridians going
around ct1 and ct2 (see [12, example 1.6.3]) it therefore follows that these circles are unlinked
and are also, individually, unknotted; see fig. 6. given this and the fact that the insertion i colored with a also lives in $s^3$, it follows that $\mathrm{chl}^i_m = (l_i, \gamma_i)$ is a surgery presentation of the pair $(s^3, \gamma(\lambda,\lambda'))$, apart from the connected sums, distant from $\gamma(\lambda,\lambda')$, with $s^1 \times s^2$.
since the evaluation of the tadpole spin network is zero, it follows that zwrt(s3, γ(λ, λ′))= 0,
and consequently
figure 6. the kirby diagram for p in the vicinity of the triangles t1 and t2. we show the belt-spheres c1
and c2 of the 2-handles of p (attaching along ct1 and ct2) associated with the triangles t1 and t2.
theorem 1. for any triangulation t of m we have z1(m, t) = 0.
4
the second-order correction
since z1 = 0, we have to calculate z2 in an appropriate limit such that the partition function z
is different from z0 and such that z is independent of the triangulation [9]. let n be the
number of 4-simplices. from (2.8) we obtain
$z_2(m) = \frac{1}{2} \sum_{\lambda_f, \iota_l} \prod_f \dim \lambda_f \Big( \sum_{\sigma} \hat v^2_\sigma + \sum_{\sigma \neq \sigma'} \hat v_\sigma \hat v_{\sigma'} \Big) \prod_v a_5(\lambda_{f(v)}, \iota_{l(v)})$,    (4.1)
where
$\hat v_\sigma\, a_5(\lambda_{f(v)}, \iota_{l(v)}) = \frac{1}{5} \sum_{\text{insertions } i \text{ of } \sigma} a_5(\lambda_{f(v)}, \iota_{l(v)}, i)$,    (4.2)
if σ is dual to v, see fig. 1. on the other hand, $\hat v_\sigma\, a_5(\lambda_{f(v)}, \iota_{l(v)}) = a_5(\lambda_{f(v)}, \iota_{l(v)})$ if v is not dual to σ. in order to solve the possible framing and crossing ambiguities arising from the equation (4.1), a method analogous to the one used in [9] can be employed. note that there are exactly 30 insertions in a 4-simplex σ, corresponding to pairs of triangles of σ with a single vertex in common. this is because there are exactly three triangles of σ having only one vertex in common with a given triangle of σ.
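this counting is easy to verify directly. the sketch below is purely illustrative: it labels the vertices of the 4-simplex 0, ..., 4, represents its triangles as 3-element vertex sets, and enumerates the ordered pairs of triangles meeting in exactly one vertex.

    from itertools import combinations, permutations

    vertices = range(5)                                        # a 4-simplex has 5 vertices
    triangles = [set(t) for t in combinations(vertices, 3)]    # its 10 triangular faces

    # ordered pairs of distinct triangles sharing exactly one vertex = insertions
    insertions = [(a, b) for a, b in permutations(triangles, 2) if len(a & b) == 1]

    print(len(triangles))    # 10
    print(len(insertions))   # 30: three partners for each of the ten triangles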
analogously to the first-order correction, z2 can be written as
$z_2(m, t) = \frac{1}{2}\, \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)} \sum_{k=1}^{n} \langle \mathrm{chl}_m, \omega^{h_2+h_1}, \hat v^2_k \rangle + \frac{1}{2}\, \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)} \sum_{1 \le k \neq l \le n} \langle \mathrm{chl}_m, \omega^{h_2+h_1}, \hat v_k, \hat v_l \rangle$,    (4.3)
where the first sum denotes the contributions from two insertions $\hat v$ in the same 4-simplex $\sigma_k$ and the second sum represents the contributions when the two insertions $\hat v$ act in different 4-simplices $\sigma_k$ and $\sigma_l$. as in the previous section, we will use the handle decomposition of m with a unique 0-handle naturally obtained from the thickening of t∗(m). note that each $\langle \mathrm{chl}_m, \omega^{h_2+h_1}, \hat v^2_k \rangle$ corresponds to a sum over all the possible choices of pairs of insertions in the 4-simplex $\sigma_k$. the value of $\langle \mathrm{chl}_m, \omega^{h_2+h_1}, \hat v^2_k \rangle$ is obtained from the evaluation of the chain-mail link $\mathrm{chl}_m$ colored with ω, which contains g-edges carrying the adjoint representation, as in the calculation of the first-order correction.
figure 7. colored graph $\gamma'_2(\lambda, \lambda', \lambda'')$, a wedge graph. here a is the adjoint representation.
a configuration c is, by definition, a choice of insertions distributed along a set of 4-simplices
of m. given a positive integer n and a set r of 4-simplices of m, we denote by $c^n_r$ the set of configurations with n insertions distributed along r. by expanding each $\hat v$ into a sum of insertions, the equation (4.3) can be written as:
$z_2(m, t) = \frac{1}{50}\, \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)} \sum_{k=1}^{n} \sum_{c \in c^2_{\{\sigma_k\}}} \langle \mathrm{chl}^c_m, \omega^{h_2+h_1} \rangle + \frac{1}{50}\, \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)} \sum_{1 \le k \neq l \le n} \sum_{c \in c^2_{\{\sigma_k,\sigma_l\}}} \langle \mathrm{chl}^c_m, \omega^{h_2+h_1} \rangle$.    (4.4)
note that each graph $\mathrm{chl}^c_m$ splits naturally as $(l_c, \gamma_c)$, where the first component contains the circles not incident to any insertion of c.
the second sum in equation (4.4) vanishes, because it is the sum of terms proportional to:
$\Big( \sum_{\lambda,\lambda'} \langle \gamma_1(\lambda,\lambda') \rangle \Big)^2 z_0(m)$    (4.5)
and to
$\sum_{\lambda,\lambda',\lambda''} \langle \gamma'_2(\lambda,\lambda',\lambda'') \rangle\, z_0(m)$,    (4.6)
where $\gamma_1$ is the dumbbell spin network of fig. 5, and $\gamma'_2$ is a three-loop spin network, see fig. 7. these spin networks arise from the cases when the pair $(l_c, \gamma_c)$ is a surgery presentation of the disjoint union $(s^3, \gamma_1) \sqcup (s^3, \gamma_1)$ and of $(s^3, \gamma'_2)$, respectively, apart from connected sums with $s^1 \times s^2$; see theorem 3 below. the former case corresponds to a situation where the
two insertions act in pairs of triangles without a common triangle, and the latter corresponds
to a situation where the two pairs of triangles have a triangle in common, which necessarily
is a triangle in the intersection σk ∩σl of the 4-simplices σk and σl. the evaluations in (4.5)
and (4.6) vanish since the corresponding spin networks have tadpole subdiagrams.
the first sum in (4.4) also gives the terms proportional to the ones in equations (4.5) and (4.6).
these terms correspond to two insertions connecting two pairs made from four distinct triangles
of σk and to two insertions connecting two pairs of triangles made from three distinct triangles
of σk, respectively. all these terms vanish.
the non-vanishing terms in equation (4.4) arise from a pair of insertions connecting the same
two triangles in a 4-simplex. there are exactly 30 of these. therefore, by using theorem 3 of
section 5, we obtain:
$z_2(m, t) = \frac{3n}{5}\, \eta^{-2} \sum_{\lambda,\lambda'} \dim_q \lambda\, \dim_q \lambda'\, \langle \gamma_2(\lambda,\lambda') \rangle\, z_0(m)$,
where $\gamma_2$ is a two-handle dumbbell spin network, see fig. 8. we thus have:
figure 8. colored graph γ2(λ, λ′), a double dumbbell graph. as usual a is the adjoint representation.
theorem 2. the second-order perturbative correction z2(m, t) divided by the number n of 4-simplices of the manifold is triangulation independent. in fact:
$\frac{z_2(m, t)}{n} = \frac{3}{5}\, \eta^{-2} \sum_{\lambda,\lambda'} \dim_q(\lambda)\, \dim_q(\lambda')\, \langle \gamma_2(\lambda,\lambda') \rangle\, z_0(m)$.
here $\langle \gamma_2(\lambda,\lambda') \rangle$ denotes the spin-network evaluation of the colored graph $\gamma_2(\lambda,\lambda')$. note that
$\langle \gamma_2(\lambda,\lambda') \rangle = \frac{\theta(a, \lambda, \lambda)\, \theta(a, \lambda', \lambda')}{\dim_q a}$,
which is obviously non-zero, and therefore $z_2(m, t) \neq 0$.
5
higher-order corrections
for n > 2, the contributions to the partition function will be of the form
$\eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)}\, \langle \mathrm{chl}_m, \omega^{h_1+h_2}, (\hat v_1)^{k_1} \cdots (\hat v_n)^{k_n} \rangle$,
where $k_1 + \cdots + k_n = n$. by using the equation (4.2), each of these terms splits as a sum of terms of the form:
$\frac{1}{5^n}\, \eta^{-\frac{1}{2}(h_2+h_1+h_3-h_4+1)}\, \langle \mathrm{chl}^c_m, \omega^{h_1+h_2} \rangle = \langle c \rangle$,    (5.1)
where c is a set of n insertions (a "configuration") distributed among the n 4-simplices of the
chosen triangulation of m, such that the 4-simplex σi has ki insertions. insertions are added
to the chain-mail link $\mathrm{chl}_m$ as in fig. 4, forming a graph $\mathrm{chl}^c_m$ (for framing and crossing ambiguities we refer to [9]). note that each graph $\mathrm{chl}^c_m$ splits as $(l_c, \gamma_c)$ where $l_c$ contains the components of $\mathrm{chl}_m$ not incident to any insertion. as in the n = 1 and n = 2 cases, equation (5.1) coincides, apart
from the extra insertions, with the observables of the crane–yetter invariant defined in [3]; see
subsection 2.1. therefore, by using the same argument that proves theorem 2 of [3] we have:
theorem 3. given a configuration c consider the 2-complex σc spanned by the k triangles
of m incident to the insertions of c. let σ′
c be a regular neighborhood of σc in m. then m
can be obtained from m \ σ′
c by adding the 2-handles corresponding to the faces of σ (and
some further 3- and 4-handles, corresponding to the edges and vertices of σ). these 2-handles
attach along a framed link k in ∂(m \ σ′), a manifold diffeomorphic to ∂(σ′
c) with the reverse
orientation.
the insertions of c can be transported to this link k defining a graph kc in
∂(σ′∗
c) = ∂(m \ σ′
c). we have:
⟨c⟩=
x
λ1,...,λk
zwrt
∂(σ′∗
c), kc; λ1, . . . , λk
κs(m\σ′)η
χ(σ)
2
−k dimq λ1 * * * dimq λk,
where s(m \ σ′) denotes the signature of the manifold m \ σ′ and χ denotes the euler charac-
teristic.
four-dimensional spin foam perturbation theory
13
note that, up to connected sums with s1 × s2, the pair chlc
m = (lc, γc) is a surgery
presentation of (∂(m \ σ′
c), kc) = (∂(σ′∗
c), kc). unlike the n = 1 and n = 2 cases, it is not
possible to determine the pair (∂(σ′∗
c), kc) for n ≥3 without having an additional information
about the configuration.
in fact, considering the set of all triangulations of m, an infinite
number of diffeomorphism classes for (m \σ′, kc) is in general possible for a fixed n; see [9] for
the three dimensional case. this makes it complicated to analyze the triangulation independence
of the formula for zn(m, t) for n ≥3.
since
z(m, t) =
x
n
λnzn(m, t),
where
zn(m, t) =
x
k1+***+kn=n
1
k1! * * * kn!η−1
2 (h2+h1+h3−h4+1)⟨chlm, ωh1+h2, ( ˆ
v1)k1 * * * ( ˆ
vn)kn ⟩,
in order to resolve the triangulation dependence of zn, let us introduce the quantities
zn = lim
n→∞
zn(m, t)
n
n
2 z0(m)
;
(5.2)
see [9, 2]. the limit is to be extended to the set of all triangulations t of m, with n being
the number of 4-simplices of m, in a sense to be made precise; see [9]. from sections 3, 4 and
theorem 2 it follows that
z1 = 0,
z2 =
3η−2
5 dimq a
x
λ
dimq λθ(a, λ, λ)
!2
.
note that the values of z1 and z2 are universal for all compact 4-manifolds. the expression for z2
is finite because there are only finitely many irreps for the quantum group uq(g), of non-zero
quantum dimension, when q is a root of unity.
6
dilute-gas limit
we will now show how to define and calculate the limit in the equation (5.2). let m be a 4-
manifold and let us consider a set s of triangulations of m, such that for any given ǫ > 0 there
exists a triangulation t ∈s such that the diameter of the biggest 4-simplex is smaller than ǫ,
i.e. the triangulations in s can be chosen to be arbitrarily fine. we want to calculate the limit
in equation (5.2) only for triangulations belonging to the set s.
furthermore, we suppose that s is such that (c.f. [9]):
restriction 1 (control of local complexity-i). together with the fact that the triangulations
in s are arbitrarily fine we suppose that:
there exists a positive integer l such that any 4-simplex of any triangulation t ∈s intersects
at most l 4-simplices of t.
let us fix n and consider z2n(m, t) when n →∞. the value of z2n will be given as a sum
of contributions of configurations c such that n1 insertions of ˆ
v act in a 4-simplex σ1, n2 of
insertions of ˆ
v act in the 4-simplex σ2 ̸= σ1 and so on, such that n1 + n2 + * * * + nn = 2n and
nk ≥0.
14
j. faria martins and a. mikovi ́
c
a configuration for which any 4-simplex has either zero or two insertions, with all 4-simplices
which have insertions being disjoint will be called a dilute-gas configuration. there will be
15n
n!
n!(n −n)! −δ(n, t),
dilute-gas configurations, where δ is the number of pairs of 4-simplices in t with non-empty
intersection. from restriction 1 it follows that δ(n, t) = o(n) as n →∞.
each dilute-gas configuration contributes zn
2 z0(m) to z2n(m, t) and we can write
z2n
z0
=
n!
n!(n −n)! −o(n)
zn
2 +
x
non-dilute c
⟨c⟩
z0
,
(6.1)
where ⟨c⟩=
1
5n η−1
2(h2+h1+h3−h4+1)⟨chlc
m, ωh1+h2⟩, denotes the contribution of the configura-
tion c.
let us describe the contribution of the non-dilute configurations c more precisely. recall
that a configuration c is given by a choice of n insertions connecting n pairs of triangles of m,
where each pair belongs to the same 4-simplex of m and the triangles have only one common
vertex.
given a configuration c with n insertions, consider a (combinatorial) graph γc with a vertex
for each triangle appearing in c and for each insertion and edge connecting the corresponding
vertices. the graph γc is obtained from γc by collapsing the circles of γc of it into vertices.
however note that γc is merely a combinatorial graph, whereas γc is a graph in s3, which can
have a complicated embedding.
if γc has a connected component homeomorphic to the graph made from two vertices and an
edge connecting them, then ⟨c⟩vanishes, since in this case the embedded graph whose surgery
presentation is given by (chlc
m, γc) will have a tadpole. in fact, looking at theorem 3, one of
the connected components of (∂(σ′∗), kc) will be (s3, γ1), where γ1 is the graph in fig. 5.
consider a manifold with a triangulation with n 4-simplices, satisfying restriction 1. the
number of possible configurations c with l insertions with make a connected graph γc is bounded
by n(10l)l−1(l−1)!. in particular the number of non-dilute configurations v with 2n insertions
and yielding a non-zero contribution is bounded by
max
l1+***+l2n=2n
li̸=1
∃i : li≥3
b
2n
y
i=1
n(10l)lk−1(lk −1)! = o
n n−1
.
(6.2)
this is simply the statement that if a graph γc has k connected components, then it has o(n k)
possible configurations. since k ≤n −1 for a non-dilute configuration, the bound (6.2) follows.
we now need to estimate the value of ⟨c⟩for a non-dilute configuration c. we will need to
make the following restriction on the set s. we refer to the notation introduced in theorem 3.
restriction 2 (control of local complexity-ii). the set of s of triangulations of m is such
that given a positive integer n then the number of possible diffeomorphism classes for the pair
(∂(σ′
c), kc) is finite as we vary the triangulation t ∈s and the configuration c with n inser-
tions.
in the three-dimensional case a class of triangulations s satisfying restrictions 1 and 2 was
constructed by using a particular class of cubulations of 3-manifolds (which always exist, see [5])
and their barycentric subdivisions, see [9]. these cubulations have a simple local structure,
with only three possible types of local configurations, which permits a case-by-case analysis as
the cubulations are refined through barycentric subdivisions. in the case of four-dimensional
four-dimensional spin foam perturbation theory
15
cubulations, no such list is known, although it has been proven that a finite (and probably huge)
list exists. therefore the approach used in the three-dimensional case cannot be directly applied
to the four-dimensional case. however, it is reasonable to assume that triangulations coming
from the barycentric subdivisons of a cubulation of m satisfy restriction 2.
more precisely, given a cubulation □of m, let ∆□be the triangulation obtained from □by
taking the cone of each i-face of each cube of □, starting with the 2-dimensional faces. consider
the class s = {∆□(n)}∞
n=0, where □(n) is the barycentric subdivision of order n of □. then we
can see that restriction 1 is satisfied by this example, and we conjecture that s also satisfies
restriction 2.
the restriction 2 combined with theorem 3 implies that the value of ⟨c⟩in equation (6.1)
is bounded for a fixed n, considering the set of all triangulations in the set s and all possible
configurations with 2n insertions. since the number of non-dilute configurations c which have
a non-zero contribution is of o(n n−1), it follows that
x
non-dilute c
⟨c⟩
z0
= o
n n−1
.
therefore
lim
n→∞
z2n(m, t)
n nz0(m) = zn
2
n! ,
or
z2n(m, t)
z0(m)
≈zn
2
n! n n,
(6.3)
for large n.
in the case of z2n+3, the dominant configurations for triangulations with a large number n of
4-simplices consist of configurations c (as before called dilute) whose associated combinatorial
graph γc has as connected components a connected closed graph with three edges and (n −1)
connected closed graphs with two edges. we can write:
z2n+3
z0
=
x
dilute c
⟨c⟩+
x
non-dilute c
⟨c⟩.
since the number of dilute configurations is of o(n n), while the second sum is of o(n n−1), due
to the restrictions 1 and 2, we then obtain for large n
z2n+3
z0
= o(n n).
more precisely
z2n+3
z0
≈z3(z2)nn
(n −1)!
n!(n −1 −n)!,
or
z2n+3
z0
≈z3(z2)n n n
n! ,
(6.4)
for large n, where z3 is the sum of two terms. the first term is
30
6 × 53
x
λ,λ′
dimq λ dimq λ′⟨γ3(λ, λ′)⟩,
16
j. faria martins and a. mikovi ́
c
figure 9. colored graph γ3(λ, λ′), a triple dumbbell graph. as usual a is the adjoint representation.
figure 10. colored graph γ′
3(λ, λ′, λ′′). as usual a is the adjoint representation.
where γ3 is the triple dumbbell graph of fig. 9, corresponding to three insertions connecting the
same pair of triangles of the underlying 4-simplex (there are exactly 30 of these). the second
term is
30
6 × 53
x
λ,λ′,λ′′
dimq λ dimq λ′ dimq λ′′⟨γ′
3(λ, λ′, λ′′)⟩,
where γ′
3 appears in fig. 10. this corresponds to three insertions making a chain of triangles,
pairwise having only a vertex in common (there are exactly 30 insertions like these for each
4-simplex).
6.1
large-n asymptotics
let us now study the asymptotics of z(m, t) for n →∞. we will denote z(m, t) as z(λ, n)
and z0(m) as z0(λ0), in order to highlight the fact that the crane–yetter state sum z0(m)
can be understood as a path integral for the bf-theory with a cosmological constant term
λ0
r
m⟨b ∧b⟩, such that λ0 is a certain function of an integer k0, which specifies the quantum
group at a root of unity whose representations are used to construct the crane–yetter state
sum. in the case of a quantum su(2) group it has been conjectured that λ0 = 4π/k0, see [25].
consequently
z(λ) =
z
da db ei r
m⟨b∧f +λb∧b⟩=
z
dadbei r
m⟨b∧f +λ0b∧b⟩ei(λ−λ0) r
m ⟨b∧b⟩,
which means that our perturbation parameter is λ −λ0 instead of λ.
let us consider the partial sums
zp (λ, n) =
p
x
n=0
inzn(n)(λ −λ0)n,
where p =
√
n. in this way we ensure that each perturbative order n in zp is much smaller
than n when n is large. we can then use the estimates from the previous section, and from (6.3)
four-dimensional spin foam perturbation theory
17
and (6.4) we obtain
zp (λ, n)
z0(λ0)
≈
p/2
x
m=0
i2m(λ −λ0)2m n m
m! zm
2 +
(p −3)/2
x
m=0
i2m+3(λ −λ0)2m+3z3(z2)m n m
m!
≈
1 + iz3(λ −λ0)3
∞
x
m=0
(−1)m gm
m! zm
2 =
1 + iz3(λ −λ0)3
e−gz2,
where
g = (λ −λ0)2n.
(6.5)
given that
z(λ, n) = lim
p →∞zp (λ, n)
for |λ −λ0| < r, where r is the radius of convergence of
z(λ, n) =
∞
x
n=0
in(λ −λ0)nzn(n),
(6.6)
then
z(λ, n) ≈z√
n(λ, n)
for large n. therefore
z(λ, n)
z0(λ0) ≈
1 + iz3(λ −λ0)3
exp (−gz2) ,
(6.7)
for λ ≈λ0, where g = (λ −λ0)2n. in the limit n →∞, λ →λ0 and g constant we obtain
z(λ, n)
z0(λ0) →exp(−gz2).
we can rewrite this result as
z(m, g) = e−gz2z0(m),
(6.8)
where z(m, g) is the perturbed partition function in the dilute-gas limit. this value is triangu-
lation independent and it depends on the renormalized coupling constant g.
note that (6.7) can be rewritten as
z∗(λ, g)
z0(λ0) ≈
1 + iz3(λ −λ0)3
exp (−gz2) ,
(6.9)
where we have changed the variable n to variable g = n(λ −λ0)2 and
z∗(λ, g) = z
λ,
g
(λ −λ0)2
.
(6.10)
the approximation (6.9) is valid for λ →λ0 and
g
(λ −λ0)2 →∞.
18
j. faria martins and a. mikovi ́
c
the result (6.8) can be understood as the lowest order term in the asymptotic expansion (6.9)
where g is a constant. however, one can have a more general situation where g = f(λ −λ0)
such that
f(λ −λ0)
(λ −λ0)2 →∞,
(6.11)
for λ →λ0.
in this case
z∗(λ, g) ≈e−z2f(λ−λ0)
1 + iz3(λ −λ0)3
z0(λ0),
(6.12)
which opens the possibility that
z∗(λ, g) ≈[z0(k)]k=φ(λ) ,
(6.13)
where z0(k) is the real number extension of the crane–yetter state sum and k = φ(λ) is the
relation between k and λ.
in the case of a quantum su(2) group at a root of unity
z0(m, k) = e−iπs(m)r(k),
(6.14)
where s(m) is the signature of m and r(k) = k(2k + 1)/4(k + 2), see [23]. the value of λ
which corresponds to k is conjectured to be k = 4π/λ, see [25], and this is an example of the
function φ.
the relation λ ∝1/k could be checked by calculating the large-spin asymptotics of the
quantum 15j-symbol for the case of the root of unity, in analogy with the three-dimensional case,
where by computing the asymptotics of the quantum 6j-symbol one can find that λ = 8π2/k2,
see [21]. the quantum 15j-symbol asymptotics is not known, but for our purposes it is sufficient
to know that λ →0 as k →∞.
equations (6.12), (6.13) and (6.14) imply
f(λ −λ0) ≈iπs(m)
z2
[h(λ) −h(λ0)] + iz3
z2
(λ −λ0)3,
(6.15)
where h(λ) = r(φ(λ)).
the solution (6.15) is consistent with the condition (6.11), since
h′(λ0) ̸= 0. however, f has to be a complex function, although the original definition (6.5)
suggests a real function. this means that λ has to take complex values in order for (6.13) to
hold, i.e. we need to perform an analytic continuation of the function in (6.9).
also note that the definition (6.6) and the fact that z1(n) = 0 (theorem 1) imply that
z′(λ0, n) = 0, but this does not imply that
lim
λ→λ0
dz∗(λ, f(λ −λ0))
dλ
= 0
since z(λ, n) and z∗(λ, g) are different functions of λ due to (6.10) and g = f(λ −λ0). there-
fore the approximation (6.13) is consistent. this also implies that we can define a triangulation
independent z(λ, m) as the real number extension of the function z0(k, m), see (6.14). there-
fore
z(λ, m) = z0(k, m)|k=φ(λ) = ̃
z0(λ, m).
(6.16)
four-dimensional spin foam perturbation theory
19
7
relation to ⟨f ∧f ⟩theory
it is not difficult to see that the equations of motion for the action (2.1) are equivalent to the
equations of motion for the action
̃
s =
z
m
⟨f ∧f⟩,
because the b field can be expressed algebraically as bi = −gijf j/(2λ).
at the path-integral level, this property is reflected by the following consideration. one can
formally perform a gaussian integration over the b field in the path integral (2.2), which gives
the following path integral
z = d(λ, m)
z
da exp
i
4λ
z
m
⟨f ∧f⟩
,
(7.1)
where d(λ, m) is the factor coming from the determinant factor in the gaussian integration
formula.
more precisely, if we discretize m by using a triangulation t with n triangles, then the path
integral (2.2) becomes a finite-dimensional integral
z y
l,i
dai
l
z y
f,i
dbi
f exp
i
x
∆
tr (b∆ff) + iλ
5
x
σ
x
∆′,∆′′∈σ
⟨b∆′b∆′′⟩
.
(7.2)
the integral over the b variables in (7.2) can be written as
z +∞
−∞
* * *
z +∞
−∞
y
k,l
dbkleiλ(b,qb)+i(b,f),
(7.3)
where m = dim a, b = (b11, . . . , bmn) and f = (f11, . . . , fmn) are vectors in rmn, (x, y ) is
the usual scalar product in rmn and q is an mn × mn matrix.
the integral (7.3) can be defined as the analytic continuation λ →iλ, f →if of the formula
z +∞
−∞
* * *
z +∞
−∞
y
k,l
dbkle−λ(b,qb)+(f,b) =
r
πmn
λmn det qe(f,q−1f)/4λ,
(7.4)
so that when n →∞such that the triangulations become arbitrarily fine, we can represent the
limit as the path integral (7.1).
since
r
m⟨f ∧f⟩is a topological invariant of m, which is the second characteristic class
number c2(m), see [7], we can write
z(m, λ) = e(m, λ)eic2(m)/λ,
where
e(m, λ) = d(m, λ)
z
da
and d(m, λ) denotes the (λmn det q)−1/2 factor from (7.4). as we have shown in the previous
section, z(λ, m) = ̃
z0(m, λ), so that in the case of su(2)
e(m, λ) = e−ic2(m)/λ−iπs(m)h(λ).
(7.5)
therefore one can calculate the volume of the moduli space of connections on a principal
bundle provided that the relation k = φ(λ) is known for the corresponding quantum group.
20
j. faria martins and a. mikovi ́
c
8
conclusions
the techniques developed for the 3d spin foam perturbation theory in [9] can be extended to
the 4d case, and hence the 4d partition function has the same form as the corresponding 3d
partition function in the dilute gas limit, see (6.8). the constant z2 depends only on the group g
and an integer, and z2 is related to the second-order perturbative contribution, see section 5.
the constant z2 appears because the constant z1 vanishes for the same reason as in the 3d case,
which is the vanishing of the tadpole spin network evaluation.
the result (6.8) implies that z(m, g) is not a new manifold invariant, but it is propor-
tional to the crane–yetter invariant.
given that the renormalized coupling constant g is
an arbitrary number, a more usefull way of representing our result is the asymptotic for-
mula (6.12). this formula allowed us to conclude that z(m, λ) can be identified as the crane–
yetter partition function evaluated at k = φ(λ), see (6.13) and (6.16).
the formula (6.12)
also applies to the spin foam perturbation expansion in 3d, where z2 and z3 are given as the
state sums of the corresponding 3d graphs, see [9]. therefore the formula (6.12) is the jus-
tification for the conjecture made in [9], where the triangulation independent z(λ, m) was
identified with the turaev–viro partition function ztv (m, k) for k = 4π2/λ2 in the su(2)
case.
the relation (6.16) was useful for determining the volume of the moduli space of connections
on the g-principal bundle for arbitrary values of λ, given that z(m, λ) is related to the path
integral of the ⟨f ∧f⟩theory, see (7.5). however, it still remains to be proved the conjec-
ture that k ∝1/λ for g = su(2), while for the other groups the function k = φ(λ) is not
known.
note that the result (6.8) depends on the existence of a class of triangulations of m which
are arbitrarily fine, but having a finite degree of local complexity. as explained in section 5
it is reasonable to assume that such a class exists, and can be constructed by considering the
triangulations coming from the barycentric subdivisions of a fixed cubulation of m.
our approach applies to lie groups whose vector space of intertwiners λ ⊗λ →a is one-
dimensional for each irreducible representation λ. this is true for the su(2) and so(4) groups,
but it is not true for the su(3) group. this can probably be fixed by adding extra information
to the chain-mail link with insertions at the 3-valent vertices.
also note that we only considered the gij ∝δij case. this is sufficient for simple lie groups,
but in the case of semi-simple groups one can have non-trivial gij. especially interesting is the
so(4) case, where gij ∝ǫabcd. in the general case one will have to work with spin networks
which will have l(λ) and gij insertions, so that it would be very interesting to find out how to
generalize the chain–mail formalism to this case.
one of the original motivations for developing a four-dimensional spin-foam perturbation
theory was a possibility to obtain a nonperturbative definition of the four-dimensional euclidean
quantum gravity theory, see [10] and also [19, 20]. the reason for this is that general relativity
with a cosmological constant is equivalent to a perturbed bf-theory given by the action (2.1),
where g = so(4, 1) for a positive cosmological constant, while g = so(3, 2) for a negative
cosmological constant and gij = ǫabcd in both cases, see [19, 20]. however, the gij in the gravity
case is not a g-invariant tensor, since it is only invariant under a subgroup of g, which is the
lorentz group. consequently this perturbed bf theory is not topological.
in the euclidean gravity case one has g = so(5), and the invariant subgroup is so(4) since
gij = ǫabcd. one can then formulate a spin foam perturbation theory along the lines of section 3.
however, the chain–mail techniques cannot be used, because gij is not a g-invariant tensor
and therefore one lacks an efficient way of calculating the perturbative contributions. in order
to make further progress, a generalization of the chain–mail calculus has to be found in order
to accommodate the case when gij is invariant only under a subgroup of g.
four-dimensional spin foam perturbation theory
21
acknowledgments
this work was partially supported fct (portugal) under the projects ptdc/mat
/
099880/2008,
ptdc/mat/098770/2008, ptdc/mat/101503/2008. this work was also partially supported
by cma/fct/unl, through the project pest-oe/mat/ui0297/2011.
references
[1] baez j., an introduction to spin foam models of quantum gravity and bf theory, in geometry and quantum
physics (schladming, 1999), lecture notes in phys., vol. 543, springer, berlin, 2000, 25–93, gr-qc/9905087.
[2] baez j., spin foam perturbation theory, in diagrammatic morphisms and applications (san francisco, ca,
2000), contemp. math., vol. 318, amer. math. soc., providence, ri, 2003, 9–21, gr-qc/9910050.
[3] barrett j.w., faria martins j., garc ́
ıa-islas j.m., observables in the turaev–viro and crane–yetter models,
j. math. phys. 48 (2007), 093508, 18 pages, math.qa/0411281.
[4] broda b., surgical invariants of four-manifolds, hep-th/9302092.
[5] cooper d., thurston w., triangulating 3-manifolds using 5 vertex link types, topology 27 (1988), 23–25.
[6] crane l., yetter d.a., a categorical construction of 4d topological quantum field theories, in quantum
topology, ser. knots everything, vol. 3, world sci. publ., river edge, nj, 1993, 120–130, hep-th/9301062.
[7] eguchi t., gilkey p.b., hanson a.j., gravitation, gauge theories and differential geometry, phys. rep. 66
(1980), 213–393.
[8] faria martins j., mikovi ́
c a., invariants of spin networks embedded in three-manifolds, comm. math. phys.
279 (2008), 381–399, gr-qc/0612137.
[9] faria martins j., mikovi ́
c a., spin foam perturbation theory for three-dimensional quantum gravity,
comm. math. phys. 288 (2009), 745–772, arxiv:0804.2811.
[10] freidel l., krasnov k., spin foam models and the classical action principle, adv. theor. math. phys. 2
(1999), 1183–1247, hep-th/9807092.
[11] freidel l., starodubtsev a., quantum gravity in terms of topological observables, hep-th/0501191.
[12] gompf r.e., stipsicz a.i., 4-manifolds and kirby calculus, graduate studies in mathematics, vol. 20,
american mathematical society, providence, ri, 1999.
[13] kauffman l.h., lins s.l., temperley–lieb recoupling theory and invariants of 3-manifolds, annals of math-
ematics studies, vol. 134, princeton university press, princeton, nj, 1994.
[14] kirby r.c., the topology of 4-manifolds, lecture notes in mathematics, vol. 1374, springer-verlag, berlin,
1989.
[15] lickorish w.b.r., the skein method for three-manifold invariants, j. knot theory ramifications 2 (1993),
171–194.
[16] mackaay
m.,
spherical 2-categories
and 4-manifold
invariants,
adv. math.
143
(1999),
288–348,
math.qa/9805030.
[17] mackaay m., finite groups, spherical 2-categories, and 4-manifold invariants, adv. math. 153 (2000), 353–
390, math.qa/9903003.
[18] mikovi ́
c a., spin foam models of yang–mills theory coupled to gravity, classical quantum gravity 20 (2003),
239–246, gr-qc/0210051.
[19] mikovi ́
c a., quantum gravity as a deformed topological quantum field theory, j. phys. conf. ser. 33 (2006),
266–270, gr-qc/0511077.
[20] mikovi ́
c a., quantum gravity as a broken symmetry phase of a bf theory, sigma 2 (2006), 086, 5 pages,
hep-th/0610194.
[21] mizoguchi s., tada t., three-dimensional gravity from the turaev–viro invariant, phys. rev. lett. 68
(1992), 1795–1798, hep-th/9110057.
[22] reshetikhin n., turaev v.g., invariants of 3-manifolds via link polynomials and quantum groups,
invent. math. 103 (1991), 547–597.
22
j. faria martins and a. mikovi ́
c
[23] roberts j., skein theory and turaev–viro invariants, topology 34 (1995), 771–787.
[24] rourke c.p., sanderson b.j., introduction to piecewise-linear topology, reprint, springer study edition,
springer-verlag, berlin – new york, 1982.
[25] smolin l., linking topological quantum field theory and nonperturbative quantum gravity, j. math. phys.
36 (1995), 6417–6455, gr-qc/9505028.
[26] turaev v.g., quantum invariants of knots and 3-manifolds, de gruyter studies in mathematics, vol. 18,
walter de gruyter & co., berlin, 1994.
[27] witten e., quantum field theory and the jones polynomial, comm. math. phys. 121 (1989), 351–399.
|
0911.1701 | search for chaos in neutron star systems: is cyg x-3 a black hole? | the accretion disk around a compact object is a nonlinear general
relativistic system involving magnetohydrodynamics. naturally the question
arises whether such a system is chaotic (deterministic) or stochastic (random)
which might be related to the associated transport properties whose origin is
still not confirmed. earlier, the black hole system grs 1915+105 was shown to
be low dimensional chaos in certain temporal classes. however, so far such
nonlinear phenomena have not been studied fairly well for neutron stars which
are unique for their magnetosphere and khz quasi-periodic oscillation (qpo). on
the other hand, it was argued that the qpo is a result of nonlinear
magnetohydrodynamic effects in accretion disks. if a neutron star exhibits
chaotic signature, then what is the chaotic/correlation dimension? we analyze
rxte/pca data of neutron stars sco x-1 and cyg x-2, along with the black hole
cyg x-1 and the unknown source cyg x-3, and show that while sco x-1 and cyg x-2
are low dimensional chaotic systems, cyg x-1 and cyg x-3 are stochastic
sources. based on our analysis, we argue that cyg x-3 may be a black hole.
| introduction
x-ray
binary
systems
vary
on
timescales
rang-
ing
from
months
to
milli-seconds
(see,
e.g.,
(chen et al. 1997; paul et al. 1997; nowak et al. 1999;
cui 1999;
gleissner et al. 2004;
axelsson 2008)).
detailed analysis of their temporal variability and
fluctuation provides important insights into the geom-
etry and physics of emitting regions and the accretion
process.
however, the origin of variability is still not
clear. it could be due to varying external parameters,
like the infalling mass accretion rate.
it could also
be due to possible instabilities in the inner regions of
the accretion disk where the flow is expected to be
nonlinear and turbulent. uttley et al. (2005) (see also
timmer et al. 2000 and thiel et al. 2001) argued that
the non-linear behavior of a system can be understood
from the log-normal distribution of the fluxes and
the rms-flux relation.
this implies that the temporal
behavior of the system may be driven by underlying
stochastic variations. by studying the underlying non-
linear behavior, important constraints can be obtained
on these various possibilities.
an elegant way of obtaining the constraint is to per-
form the nonlinear time series analysis of observed data
and to compute the correlation dimension d2 in a non-
subjective manner. this technique has already been used
to diverse situations ((grassberger & procaccia 1983a;
grassberger & procaccia 1983b;
schreiber 1999;
aberbandel 1996; serre et al. 1996; misra et al. 2004;
misra et al. 2006; harikrishnan et al. 2006), and refer-
ences therein).
by obtaining d2 as a function of the
embedding dimension m, one can infer the origin of the
variability. for example, d2 ≈m for all m corresponds
to the system having stochastic fluctuation which favors
1 bidya [email protected]
2 [email protected]
3 [email protected]
the idea that x-ray variations are driven by variations
of some external parameters.
on the other hand, a
saturated d2 to a finite (low) value, beyond a certain
m, implies a deterministic chaos which argues in favor
of inner disk instability.
however, to implement the
algorithm successfully, the system in question should
provide enough data.
the technique was used earlier to understand the
nonlinear
nature of
a
black hole
system
cyg
x-
1 (unno et al. 1990) and an active galactic nucleus
(agn) ark 564 (gliozzi et al. 2002), but due to in-
sufficient data points the
analyses were hampered
and no concrete conclusions were made about d2.
later on, another black hole system grs 1915+105
was
analyzed
(misra et al. 2004;
misra et al. 2006;
harikrishnan et al. 2006) which was shown to display
low dimensional chaos in certain temporal classes, while
stochastic in other classes.
however, so far none of the neutron star systems
have been analyzed in detail in order to understand
the origin of nonlinearity.
decades back, voges et al.
(1987) attempted to understand the chaotic nature of
her x-1, but the analysis was hampered by low signal
to noise ratio (norris & matilsky 1989).
since then
the investigation of chaotic signature in neutron stars
remains unattended. can a neutron star system not be
deterministic? indeed several features of x-ray binary
systems consisting of a neutron star, such as their magne-
tosphere and khz quasi-periodic oscillation (qpo) and
its possible relation to the spin frequency of the neutron
star, favor the idea that they exhibit nonlinear reso-
nance (e.g.
(blaes et al. 2007; mukhopadhyay 2009)).
while the qpo itself is a mysterious feature whose
origin is still unclear, its possible link to the spin
frequency of the neutron star4 indicates the origin
4 however, some authors (mendez & belloni 2007) suggested
that the khz qpos may not be related to the spin.
2
karak, dutta, mukhopadhyay
of qpo to be from nonlinear phenomena.
several
lmxbs having a neutron star exhibit twin khz qpos
(mendez et al. 1998;
van der klis 2006).
for some
of
them,
e.g.
4u
1636-53
(jonker et al. 2002),
ks
1731-260
(smith et al. 1997),
4u
1702-
429
(markwardt et al. 1999),
4u
1728-34
(van straaten et al. 2002),
the
spin
frequency
of
the neutron star has been predicted from observed data.
however, for the source sco x-1, which exhibits no-
ticeable time variability (mendez & van der klis 2000),
while we observe twin khz qpos, we do not know the
spin frequency yet (but see (mukhopadhyay 2009)).
for another neutron star cyg x-2, we do observe
khz qpos (wijnands et al. 1998) as well.
several
black holes also exhibit qpos, e.g.
grs 1915+105
(belloni et al. 2001;
mcclintock & remillard 2006),
cyg x-1 (angelini et al. 1994).
in the present paper, we first aim at analyzing the time
series of two neutron star sources sco x-1 and cyg x-2 to
understand if a neutron star is a deterministic nonlinear
(chaotic) system. then we try to manifest the knowl-
edge of nonlinear (chaotic/random) property of compact
sources to distinguish a black hole from a neutron star.
subsequently, knowing their difference based on the said
property, we try to identify the nature of a unknown
source (whether it is a black hole or a neutron star).
while the nature of some sources, as mentioned above,
has already been predicted based on alternate method,
for some others, e.g. cyg x-3, ss433, it has not yet been
confirmed.
for the present purpose, we therefore concentrate on
three additional sources cyg x-1, cyg x-2 and cyg x-3.
while cyg x-1 has been predicted to be a black hole and
cyg x-2 be a neutron star, the nature of cyg x-3 is not
confirmed yet. some authors (ergma & yungelson 1998;
schmutz et al. 1996; szostek & zdziarski 2008) argued
for a black hole nature of cyg x-3, on the basis of its
jet, the time variations in the infrared emission lines, the
bepposax x-ray spectra and so on. however, earlier it
was argued for a neutron star (chadwick et al. 1985) by
measuring its 1000 gev γ-rays which suggests a pulsar
period of 12.5908 ± 0.0003 ms. by analyzing the time
series and computing the correlation dimension d2, here
we aim at pinpointing the nature of cyg x-3: whether a
black hole or a neutron star.
in the next section, we briefly outline the procedure
to be followed in understanding the nonlinear nature of
a compact object from observed data and to implement
it to analyze the neutron star source sco x-1. in §3, we
then describe nonlinear behaviors of cyg x-1, cyg x-2
and cyg x-3. subsequently, in §4, we compare all the
results and argue for a black hole nature of cyg x-3.
finally, we summarize in §5.
2. procedure and nonlinear nature of sco x-1
the method to obtain d2 is already established (see,
e.g., (grassberger & procaccia 1983b; misra et al. 2004;
harikrishnan et al. 2006)). therefore, here we discuss it
briefly.
we consider pca data of the rxte satellite
(see table 1 for the observations ids) from the archive
for our analysis. we process the data using the ftools
software. we extract a few continuous data streams of
2500 −3500 sec long. the time resolution used to gen-
erate lightcurves is ∼0.1 −1 sec. this is the range of
optimum resolution, at least for the sources we consider,
to minimize noise without losing physical information of
the sources. a finer time resolution would be poisson
noise dominated and a larger binning might give too few
data points to derive physical parameters from it (see
misra et al. 2004, 2006, for details).
then we calculate the correlation dimension accord-
ing to the grassberger & procaccia (1983a,b) algorithm.
from the time series s(ti) (i = 1,2,...,n), we construct an
m dimensional space (called embedding space ), in which
any vector has the following form:
x(ti) = [s(ti), s(ti + τ), ........., s(ti + (m −1)τ)],
(1)
where τ is the time delay chosen in such a way that
each component of the vector x(ti) is independent of each
other. for a particular choice of embedding dimension
m, we compute the correlation function:
cm(r) =
1
n(nc −1)
n
x
i=1
nc
x
j=1,j̸=i
θ(r −|xi −xj|),
(2)
which is basically the average number of points within a
hypersphere of diameter r, where θ is a heaviside step
function, n the total number of points and nc the num-
ber of centers. if the system has a strange attractor, then
one can show that for a small value of r
d2(m) = d log cm(r)
d log r
.
(3)
in this numerical calculation, we divide the whole
phase space into m cubes of length r around a point
and we count the average number of data points in
these cubes to calculate cm(r). the edge effects, which
come due to the finite number of data points, have been
avoided by calculating cm(r) in the range rmin < r <
rmax, where rmin is the value of r for cm(r) just greater
than one and rmax can be found by restricting the m
cubes to be within the embedding space. in fig. 1, we
show the variation of log(cm(r)) with log(r) for different
values of m for sco x-1 data.
d2(m) can be calculated from the linear part of the
log(cm(r)) vs. log(r) curve and its value depends on the
value of m. for a stochastic system, d2 ≈m for all m.
on the other hand, for a chaotic or deterministic system,
initially d2(m) increases linearly with the increase of m,
then it reaches a certain value and saturates. this satu-
rated value of d2 is taken to be the correlation dimension
of the system which is a non-integer. the standard de-
viation gives the error in d2.
we first concentrate upon the neutron star source
sco x-1.
in figs.
2a,b we show respectively the
lightcurve and the variation of d2 as a function of
m and find that d2 saturates to a value 2.6 (± 0.8).
as this is a non-integer, the system might be chaotic.
on the other hand, we know that the lorenz attrac-
tor is an example of an ideal chaos with d2 = 2.05.
therefore, sco x-1 may be like a lorenz system. but
due to noise its d2 seems appearing slightly higher
(misra et al. 2004; misra et al. 2006) than the actual
value. however, one should be cautious about the fact
that sco x-1 is a bright source (much brighter than other
sources considered later). hence, the dead time effect on
search for chaos in neutron stars: is cyg x-3 a black hole?
3
the detector might affect the actual value of saturated
d2 and the computed value might be slightly different
than the actual one. however, this can not rule out the
signature of chaos in sco x-1, particularly because the
corresponding count rates are confined in the same order
of magnitude and hence the dead time effect, if any, is
expected to affect all the count rates in a similar way.
however, a saturated d2 is necessary but not a suf-
ficient evidence for chaos.
existence of color noise
(for which the power spectrum p(ν) ∝ν−α, where
the power spectral indices α = 0, 1 and 2 correspond
to "white", "pink" and "red" noise respectively) into
a stochastic system might lead to a saturated d2 of
low value as well (e.g.
osborne & provenzale 1989;
theiler et al 1992;
misra et al.
2006;
harikrish-
nan et al.
2006).
therefore, it is customary to an-
alyze data by alternate approach(s) to distinguish it
from a pure noisy time series (kugiumtzis 1999). one
of the techniques is the surrogate data analysis (e.g.
(schreiber & schmitz 1996)), which has been described
earlier in detail and implemented for a black hole
(misra et al. 2006; harikrishnan et al. 2006).
in brief,
surrogate data is random data generated by taking the
original signal and reprocessing it so that data has the
same/similar fourier power spectrum and autocorrela-
tion along with the same distribution, mean and vari-
ance as of the original data, but has lost all deterministic
characters. then the same analysis is carried out to the
original data and the surrogate data to identify any dis-
tinguishable feature(s) between them. the scheme pro-
posed by schreiber & schmitz (1996), known as iter-
ative amplitude-adjusted fourier transform (iaaft),
is more consistent to generate surrogate data.
figures 2c-f compare results for the original data with
the surrogate data. it is clear that while distributions
and power spectra are same/similar for both the data
sets, d2 is much higher for the surrogate data which
suggests existence of low dimensional chaos in sco x-1
with d2 ∼2.6. this confirms, for the first time to best of
our knowledge, a neutron star source to display chaotic
behavior. as the existence of chaos is a plausible signa-
ture of instability in the inner region of accretion flows
which is nonlinear and turbulent, as mentioned in §1, the
corresponding qpo, which is presumably an inner disk
phenomenon as well, is expected to be governed by non-
linear resonance mechanisms (e.g. mukhopadhyay 2009).
in table 1, we enlist the average counts < s >, its root
mean square (rms) variation
√
< s2 > −< s >2/ <
s >, the expected poisson noise < pn >
≡
√
< s >,
and the ratio of the expected poisson noise to the rms
value for all sources. it clearly shows a strong correla-
tion between the inferred behavior of the systems and
the ratio of the expected poisson noise to the rms value.
3. nonlinearity of cyg x-1,2,3
we now look into three additional compact sources:
cyg x-1 (black hole), cyg x-2 (neutron star) and cyg x-
3 (nature is not confirmed yet), and apply the same
analysis as in the case of sco x-1.
figure 3b shows
that d2 for cyg x-1 seems not to saturate and ap-
pears very high.
however, there is no surprise in it
because its variability is similar to the temporal class
χ of the black hole grs 1915+105 which was shown
to be poisson noise dominated and stochastic in nature
(misra et al. 2004). indeed, earlier analysis of cyg x-1
data, while it could not conclusively quantify the under-
lying chaotic behavior due to insufficient data, revealed
very high dimensional chaos. moreover a large < pn >
(as well < pn > /rms) for cyg x-1, compared to that
for sco x-1 given in table 1, reveals the system to be
noise dominated. this ensures cyg x-1 to be different
from sco x-1. however, the variation of d2 as a func-
tion of m for the original data does not deviate notice-
ably from that of corresponding surrogate data, as shown
in fig. 3c, which argues that cyg x-1 is not a chaotic
system.
figures 4b,c show that d2 for cyg x-2 saturates to a
low value ∼4, which is significantly different than that
of corresponding surrogate data. the power spectra and
distributions, on the other hand, for original and surro-
gate data are same/similar (as shown in sco x-1, not
repeated further). the saturated d2 for cyg x-2 is al-
most double than that of lorenz system, possibly due to
high poisson noise to rms ratio (see table-1). this sug-
gests the corresponding system to be a low dimensional
chaos.
from figs. 5b,c we see that for cyg x-3 the varia-
tions of d2 as a function of m for original and surrogates
data are similar to that of cyg x-1. this confirms that
the behavior of the unknown source cyg x-3 is similar
to that of the black hole source cyg x-1 (see, however,
(axelsson et al. 2008)). note from table 1 that cyg x-1,
x-2, x-3 are significantly noise dominated compared to
sco x-1. although noise could not suppress the chaotic
signature in the neutron star cyg x-2, its saturated d2
is higher than that of sco x-1. on the other hand, even
though the poisson noise to rms ratio in cyg x-1 is lower
than that in cyg x-2 (but poisson noise itself is higher
in cyg x-1), its d2 never saturates, which confirms the
source to be non-chaotic; the apparent stochastic signa-
ture is not due to poisson noise present into the system.
4. comparison between cyg x-1, cyg x-2 and
cyg x-3
finally, we compare the variations of d2 for all three
cases of cygnus in fig.
6.
remarkably we find that
d2 values for cyg x-1 and cyg x-3 practically overlap,
appearing much larger compared to that for cyg x-2
which is shown to be a low dimensional chaotic source.
on the other hand, cyg x-2 is a confirmed neutron star
and cyg x-1 a black hole. therefore, cyg x-3 may be a
black hole.
5. summary
the source cyg x-3, whose nature is not confirmed yet,
seems to be a black hole based on the analysis of its non-
linear behavior. on the other hand, we have shown, for
the first time to best of our knowledge, that neutron star
systems could be chaotic in nature. the signature of de-
terministic chaos, which argues in favor of inner disk in-
stability, into an accreting system has implications in un-
derstanding its transport properties particularly in ke-
plerian accretion disk (winters et al. 2003). note that
in keplerian accretion disks transport is necessarily due
to turbulence in absence of significant molecular viscos-
ity. the signature of chaos confirms instability and then
plausible turbulence. on the other hand, for a rotating
neutron star having a magnetosphere, signature of chaos
4
karak, dutta, mukhopadhyay
suggests their qpos to be nonlinear resonance phenom-
ena (mukhopadhyay 2009).
the absence of chaos and
related/plausible signature of instability in cyg x-1 and
cyg x-3 suggests the underlying accretion disk to be
sub-keplerian (narayan & yi 1995; chakrabarti 1996)
in nature which is dominated significantly by gravita-
tional force.
this work is partly supported by a project (grant no.
sr/s2/hep12/2007) funded by department of science
and technology (dst), india.
also the financial sup-
port to one of the authors (jd) has been acknowledged.
the authors would like to thank arnab rai choudhuri
of iisc and the anonymous referee for carefully reading
the manuscript, constructive comments and suggestions.
references
aberbandel, h. d. l. 1996, analysis of observed chaotic data
(springer: new york)
angelini, l., white, n. e., stella, l. 1994, in new horizon of
x-ray astronomy, ed. f. makino, & t. ohashi (tokyo:
universial academy press), 429
axelsson, m. 2008, aipc, 1054, 135
axelsson, m., larsson, s. & hjalmarsdotter, l. 2008, mnras,
394, 1544
belloni, t., m ́
endez, m., s ́
anchez-fern ́
andez, c. 2001, apj, 372,
551
blaes, o. m., srmkov ́
a, e., abramowicz, m. a., kluniak, w.,
torkelsson, u. 2007, apj, 665, 642
chadwick, p. m., dipper, n. a., dowthwaite, j. c., gibson, a. i.
& harrison, a. b. 1985, nature, 318, 642
chakrabarti, s. k. 1996, apj, 464, 664
chen, x., swank, j. h. & taam, r. e. 1997, apj, 477, l41.
cui, w. 1999, apj, 524, l59
ergma, e. & yungelson, l. r. 1998, a&a, 333, 151
gleissner, t., wilms, j., pottschmidt, k., uttley, p., nowak, m.
a., & staubert, r. 2004, a&a, 414, 1091
gliozzi, m., brinkmann, w., r ̈
ath, c., papadakis, i. e., negoro,
h. & scheingraber, h. 2002, a&a, 391, 875
grassberger, p. & procaccia, i. 1983, physica d, 9, 189
grassberger, p. & procaccia, i 1983, phys. rev. lett., 50, 346
harikrishnan, k. p., misra, r., ambika, g. & kembhavi, a. k.
2006, physica d, 215, 137
jonker, p. g., mendez, m., & van der klis, m. 2002, mnras,
336, l1
kugiumtzis, d. 1999, phys. rev. e, 60, 2808
markwardt, craig b., strohmayer, tod e. & swank, jean h,
1999, apj 512, l125
mcclintock, j. e., & remillard, r. a. 2006, in compact stellar
x-ray sources, ed. w. h. g. lewin & m. van der klis,
(cambridge: cambridge univ. press)
mendez, m., & belloni, t. 2007, mnras, 381, 790
mendez, m., van der klis, m., wijnands, r., ford, e. c., van
paradijis, j., & vaughan, b. a. 1998, apj, 505, l23
mendez, m. & van der klis 2000, mnras 318, 938
misra, r., harikrishnan, k. p., ambika, g. & kembhavi, a. k.
2006, apj, 643, 1114
misra, r., harikrishnan, k. p., mukhopadhyay, b., ambika, g. &
kembhavi, a. k. 2004, apj, 609, 313
mukhopadhyay, b. 2009, apj, 694, 387
narayan, r. & yi, i. 1995, apj, 452, 710
norris, j. p. & matilsky, t. a. 1989, apj, 346, 912
nowak, m. a., vaughan, b. a., wilms, j., dove, j. b. &
begelman, m. c. 1999, apj, 510, 874
osborne, a. r. & provenzale, a. 1989, phy. d, 35, 357
paul, b., agrawal, p. c., rao, a. r., vahia, m. n., yadav, j. s.,
marar, t. m. k., seetha, s., kasturirangan, k. 1997, a&a 320
l37
schmutz, w., geballe, t. r. & schild, h. 1996, a&a, 311, 25
schreiber, t. 1999, phys. rep., 308, 1
schreiber, t. & schmitz, a. 1996, phys. rev. lett., 77, 635
serre, t., kollath, z. & buchler, j. r. 1996, a&a, 311, 833
smith, d. a., morgan, e. h., & bradt, h. 1997, apj, 479, 137
szostek, a. & zdziarski, a. a. 2008, mnras, 386, 593
theiler, j., eubank, s., longtin, a., galdrikian, b., doyne, f. j.
1992, physica d, 58, 77
unno, w., et al. 1990, pasj, 42, 269
timmer, j, schwarz, u & voss, h. u et al. 2000, phys. rev. e,
61, 1342
thiel, m., romano, m & schwarz, u et al. 2001, a&a suppl. 276,
187
uttley, p., mchardy, i. m., vaughan, s. 2005, mnras, 359, 345
van der klis, m. 2006, adspr, 38, 2675
van straaten, s., van der klis, m., di salvo, t., & belloni, t.
2002, apj, 568, 912
voges, w., atmanspacher, h., & scheingraber, h. 1987, apj,
320, 794
winters, w. f., balbus, s. a., & hawley, j. f. 2003, mnras,
340, 519
wijnands, r., homan, j., van der klis, m., kuulkers, e., van
paradijs, j., lewin, w. h. g., lamb, f. k., psaltis, d. &
vaughan, b. 1998, apj, 493, l87
search for chaos in neutron stars: is cyg x-3 a black hole?
5
4
5
6
7
8
9
10
11
−15
−10
−5
0
log (cm)
log (r)
m = 2
m = 4
m = 6
m = 8
m = 12
m = 14
m = 10
fig. 1.- variation of log (cm ) as a function of log( r) for different embedding dimensions. the linear scaling range is used to calculate
the correlation dimension.
table 1
observed data
source
obs. i. d.
< s >
rms
< p n >
< p n > /rms
behavior
sco x-1
91012-01-02-00
58226
0.074
0.004
0.054
c
cyg x-1
10512-01-09-01
10176
0.261
0.031
0.119
nc/s
cyg x-2
10063-10-01-00
4779
0.075
0.014
0.191
c
cyg x-3
40061-01-07-00
3075
0.125
0.057
0.455
nc/s
columns:- 1:
name of the source, 2:
rxte observational identification number from which the data has been extracted.
3:
the average count in the lightcurve < s > 4: the root mean square variation in the lightcurve, rms. 5: the expected poisson noise
variation, < p n >≡
√
< s >. 6: the ratio of the expected poisson noise to the actual root mean square variation 7: the behavior of the
system (c: chaotic behavior; s: stochastic behavior; nc: nonchaotic behavior)
6
karak, dutta, mukhopadhyay
38.65745
38.65755
38.65765
38.65775
45
50
55
60
65
70
t/107
rate/103
(a)
5.0 5.5 6.0 6.5
0
50
100
150
200
n
rate/104
(d)
0 2 4 6 8 10 12
0
2
4
6
8
10
12
m
d2
(b)
10
−3
10
−2
10
−1
10
0
10
−8
10
−6
10
−4
10
−2
ν
power
(e)
10
−3
10
−2
10
−1
10
0
10
−8
10
−6
10
−4
10
−2
ν
power
(f)
0 2 4 6 8 10
0
2
4
6
m
d2
(c)
fig. 2.-
sco x-1: (a) variation of count rate as a function of time in units of 107 sec (lightcurve), without subtracting the initial
observation time. (b) variation of correlation dimension, along with error bars, as a function of embedding dimension for original data.
the solid line along the diagonal of the figure indicates an ideal stochastic curve. (c) variation of correlation dimension as a function of
embedding dimension for original (points) and corresponding surrogate (dashed lines) data. (d) variation of number of count rate as a
function of count rate itself in units of 104 sec−1 (distribution) for original (solid line) and surrogate (points) data. power-spectra for (e)
original and (f) surrogate data.
search for chaos in neutron stars: is cyg x-3 a black hole?
7
7.7709
7.7710
7.7711
0
10
20
30
t/107
rate/103
(a)
0 0.5 1.0 1.5 2.0
2.5
0
500
1000
1500
2000
n
rate/104
(d)
0 2 4 6 8 10 12
0
2
4
6
8
10
12
m
d2
(b)
10
0
10
−5
10
−4
10
−3
10
−2
ν
power
(f)
10
−2
10
−1
10
0
10
1
10
−5
10
−4
10
−3
10
−2
ν
power
(e)
0 2 4 6 8 10
0
2
4
6
8
m
d2
(c)
fig. 3.- cyg x-1: same as fig. 2.
8
karak, dutta, mukhopadhyay
32.3262 32.3263 32.3264 32.3265
3
4
5
6
t/107
rate/103
(a)
0
2
4
6
8
10
12
0
2
4
6
8
10
12
m
d2
(b)
0
2
4
6
8
10
0
2
4
6
8
m
d2
(c)
fig. 4.- cyg x-2: (a) variation of count rate as a function of time in units of 107 sec (lightcurve). (b) variation of correlation dimension,
along with error bars, as a function of embedding dimension for original data. the solid line along the diagonal of the figure indicates an
ideal stochastic curve. (c) variation of correlation dimension as a function of embedding dimension for original (points) and corresponding
surrogate (dashed lines) data.
39.0451 39.0452 39.0453 39.0454 39.0455
0
1
2
3
4
5
t/107
rate/103
(a)
0
2
4
6
8
10
12
0
2
4
6
8
10
12
m
d2
(b)
0
2
4
6
8
10
0
2
4
6
8
m
d2
(c)
fig. 5.- cyg x-3: same as fig. 4.
search for chaos in neutron stars: is cyg x-3 a black hole?
9
0
1
2
3
4
5
6
7
8
9
10
0
1
2
3
4
5
6
7
8
9
10
m
d2
cyg x−1
cyg x−2
cyg x−3
fig. 6.-
comparison of the variation of correlation dimension as a function of embedding dimension between cyg x-1 (open circle),
cyg x-2 (star), cyg x-3 (cross).
|
0911.1703 | lorenz cycle for the lorenz attractor | in this note we study energetics of lorenz-63 system through its lie-poisson
structure.
| introduction
in 1955 e. lorenz [1] introduced the concept of energy cycle as a powerful instrument to
understand the nature of atmospheric circulation. in that context conversions between
potential, kinetic and internal energy of a fluid were studied using atmospheric equations of
motion under the action of an external radiative forcing and internal dissipative processes.
following these ideas, in this paper we will illustrate that chaotic dynamics governing
lorenz-63 model can be described introducing an appropriate energy cycle whose
components are kinetic, potential energy and casimir function derived from lie-poisson
structure hidden in the system; casimir functions, like enstrophy or potential vorticity in fluid
dynamical context, are very useful in analysing stability conditions and global description of
a dynamical system.
a typical equation describing dissipative-forced dynamical systems can be written in
einstein notation as:
{
}
,
i
i
ij
j
i
x
x h
x
f
=
−λ
+
&
n
i
...
2
,
1
=
(1)
equations (1) have been written by kolmogorov, as reported in [2], in a fluid dynamical
context, but they are very common in simulating natural processes as useful in chaos
synchronization [3]. here, antisymmetric brackets represent the algebraic structure of
hamiltonian part of a system described by function h , and a cosymplectic matrix j [4],
{
}
g
f
j
g
f
k
i
ik
∂
∂
=
,
. (2)
positive definite diagonal matrix λ represents dissipation and the last term f represents
external forcing. such a formalism, as mentioned before, is particularly useful in fluid
dynamics [5], where navier-stokes equations show interesting properties in their
hamiltonian part (euler equations). moreover, finite dimensional systems as (1) represent
the proper reduction of fluid dynamical equations [6], in terms of conservation of the
symplectic structures in the infinite domain [7]. method of reduction, contrary to the
classical truncation one, leads to the study of dynamics on lie algebras, i.e to the study of
lie-poisson equations on them, which are extremely interesting from the physical viewpoint
and with a mathematical aesthetical appeal [8,9]. given a group g and a real-valued
function (possibly time dependent),
r
g →
*
:
e
t
h
, which plays the role of the hamiltonian,
in the local co-ordinates
i
x the lie-poisson equations read as
h
x
c
x
k
j
j
ik
i
∂
=
&
, (3)
where tensor
j
ik
c represents the constants of structure of the lie algebra g and the
cosymplectic matrix assumes the form
( )
j
ik
ik
j
j
c x
=
x
. it is straightforward to show that, in
this formalism g is endowed with a poisson bracket characterized by expression (2) for
functions
( )
*
g
∞
∈c
g
f,
. casimir functions c are given by the kernel of bracket (2), i.e.
{
}
( )
*
g
∞
∈
∀
=
c
g
g
c
,
0
,
, therefore they represent the constants of motion of the hamiltonian
system,
{
}
,
0
c
c h
=
=
&
; moreover they define a foliation of the phase space [10] .
ii. lorenz system and its geometry
here we will be interested in
( )
3
so
g =
with
i
i
jk
ij
x
j
ε
=
, where
ijk
ε
stands for the levi-civita
symbol; in case of a quadratic hamiltonian function,
1
2
ik
i
k
k
x x
=
ω
(4)
they represent the euler equations for the rigid body, with casimir
1
2
ij
i
j
c
x x
δ
=
and relative
foliation geometry
2
s .
in a previous paper [11] it has been shown that also the famous lorenz-63 system [12]
⎪
⎩
⎪
⎨
⎧
−
=
−
+
−
=
+
−
=
3
2
1
3
2
1
3
1
2
2
1
1
x
x
x
x
x
x
x
x
x
x
x
x
β
ρ
σ
σ
&
&
&
(5)
where
8
10,
,
28
3
σ
β
ρ
=
=
=
, can be written in the kolmogorov formalism as in (1),
(
)
⎪
⎩
⎪
⎨
⎧
+
−
−
=
−
−
−
=
+
−
=
σ
ρ
β
β
σ
σ
σ
3
2
1
3
2
1
3
1
2
2
1
1
x
x
x
x
x
x
x
x
x
x
x
x
&
&
&
.
(6)
assuming the following axially symmetric gyrostat [13] hamiltonian with rotational kinetic
energy k and a linear potential
(
)
k
k
k
u x
x
ω
=
will be written as
h
k
u
=
+
(7)
with:
1
2
3
1
0
0
0
2
0
0
0
2
ω =
⎡
⎤
⎢
⎥
ω =
ω =
⎢
⎥
⎢
⎥
ω =
⎣
⎦
inertia tensor,
1
2
3
0
0
0
1
0
0
0
σ
β
λ =
⎡
⎤
⎢
⎥
λ =
λ =
⎢
⎥
⎢
⎥
λ =
⎣
⎦
dissipation,
internal forcing given by an axisymmetric rotor
[
]
0,0,σ
=
ω
and external forcing
(
)
0,0, β ρ
σ
=
−
+
⎡
⎤
⎣
⎦
f
. in order to distinguish effects of different terms in the energy cycle we
leave the notation
[
]
0,0,ω
=
ω
assuming that for numerical values in lorenz attractor ω
σ
=
.
casimir function
( )
c t , that is constant in the conservative case, will give a useful
geometrical vision to the understanding of dynamical behaviour of (6). this is given by
studying fixed points of
( )
i
i
c t
x
c
=
∂
&
&
, which defines an invariant triaxial ellipsoid
0
ξ with
center 0,0,
2
ρ
σ
+
⎧
⎫
−
⎨
⎬
⎩
⎭
and axes
,
,
2
2
2
f
f
f
a
b
c
β
βσ
β
=
=
=
, having equation
0
ij
i
j
i
i
x x
f x
−λ
+
=
. (8)
fixed points of system (6)
(
)
(
)
{
}
1 ,
1 ,
1
β ρ
β ρ
σ
= ±
−
±
−
−
−
±
x
and
(
)
{
}
0,0,
ρ
σ
=
−
+
0
x
belong to
0
ξ . computing the flux density of vector field
=
u
x
& through this manifold
( )
i
i
u
φ
=
∂ξ
u
, because of reflection symmetry
i
i
x
x
→−
,
1,2
i =
of equations (6), two
symmetric regions are identified by respectively,
0
φ <
and
0
φ >
. lorenz attractor
ψintersects the manifold in these regions at maxima and minima of casimir function,
entering the ellipsoid trough
( )
min c twice where
0
φ <
, right
r
ψ and left lobe
l
ψ , and
symmetrically exiting trough
( )
max c twice where
0
φ >
,
( )
( )
{
}
min
,max
c
c
∩
=
0
ψ
ξ
(fig.1).
fig.1 invariant ellipsoid
0
ξ intersecting the attractor in the set
( )
( )
{
}
min
,max
c
c
∩
=
0
ψ
ξ
. casimir maxima
(red) and minima (blue) are shown. black stars represent 2 of the 3 fixed points. note that
0
x lies on the southern pole.
in order to show points on
0
ξ , only part of trajectory is shown.
the particular choice of parameters
8
10,
28,
3
σ
ρ
β
=
=
=
has moreover the peculiarity
for
( )
max c to be an ordered set [9]. this property gives the opportunity to find a range
of variability for the maximum radius of the casimir sphere ( )
r t
c
=
,
( )
min
0
max
r
r
r t
r
r
±
=
≤
≤
=
, (9)
where
(
) (
)
2
2
1
1
r
β ρ
σ
± =
−
+
+
and
0
r
ρ
σ
=
+
; moreover ellipsoid
0
ξ defines a
natural poincarè section for lorenz equations with interesting properties in the
associated return map for casimir maxima [14].
iii. energy cycle
in order to introduce an energy cycle into system (1), we first consider pure conservative
case. here dynamics lies on
2
s and we note that introduction of potential u produces a
deformation on the geodesics trajectory [15] of the riemannian metric given by the
quadratic form (4). adding the spin rotor
3
x
ω
, the centre of the ellipsoid of revolution is
shifted from the origin, that remains the centre of the casimir sphere of radius
2c . as
regards fixed points, given at
0
ω =
by two isolated centers, namely the poles (
)
2 ,0,0
c
±
of
2
s , and all points belonging to 'equatorial circle' (
)
0,
2
sin ,
2
cos
c
c
θ
θ , introduction of
potential
(
)
3
u x
reduces this set to four equilibrium points located at
(
)
1,2
0,0,
2
f
c
=
±
and
(
)
2
3,4
2
,0,
f
c
ω
ω
= ±
−
−
. this last pair starts to migrate into the south pole of
2
s as ω
grows, and disappears for
2c
ω >
giving rise to an oyster bifurcation [16], leaving two
stable centers
1,2
f . lie-poisson structure of the system permits to analyze nonlinear
stability of these two points introducing pseudoenergy functions [17]
1,2
1,2
i
h
c
λ
=
+
, where
1,2
λ
are the solutions of equation
1,2
1,2
i
i
f
f
h
c
λ
∂
= −∂
(10)
at fixed points
(
)
1,2
0,0,
2
f
c
=
±
. computation of quadratic form
1,2
,
i j
f
i
∂
shows
1
f as a
maximum and
1
f as a minimum for
2c
ω >
. we point out that introduction of potential
reduces number of fixed points on the sphere to the minimal number 2; therefore it
stabilizes the system's dynamics.
total energy e , identified with hamiltonian, does not change and in terms of k and u , a
simple energy cycle, similar to the classical oscillator, can be described using bracket
formalism (4) with relative rules {
} {
}
(
)
,
,
,
u h
u k
u k
=
=c
and {
} {
}
(
)
,
,
,
k h
k u
k u
=
=c
(
)
(
)
,
,
0
k
k u
u
u k
c
⎧
=
⎪
⎪
=
⎨
⎪
=
⎪
⎩
&
&
&
c
c
, (11)
where
(
)
,
u k
c
is positive if energy is flowing from u to k ; the net rate of conversion of
potential into kinetic energy factor is of the form
(
)
(
)
1
2
1
2
,
u k
x x
ω
=
ω −ω
c
(12)
due to the linear dependence of u on
3
x . as a result, a symmetry between quadrants i and
iii holds since
(
)
1
2
0
,
0
x x
u k
>
⇒
<
c
corresponding to a net conversion of kinetic energy
into potential one k
u
→
; the opposite happens in quadrants ii and iv where
(
)
1
2
0
,
0
c
x x
u k
<
⇒
>
and u
k
→
.
following the ideas of extending the algebraic formalism of hamiltonian dynamics to
include dissipation [18] , we introduce a lyapunov function ( )
( )
(
)
*
3
l x
c
so
∞
∈
, together with
a symmetric bracket
,
ik
i
k
f
l f
g
l
f
=
=
∂
∂
&
(13)
where
( )
ik
g
x generally is a symmetric negative matrix. taken alone, formalism (13) gives
rise to a gradient system dynamics
,
i
i
ik
k
x
l x
g
l
=
=
∂
&
(14).
including lie-poisson structure (2), it is possible to study equations (11) adding various
kinds of dissipation models depending on the choice of 'metric tensor'
( )
ik
g
x
(
)
(
)
,
,
,
,
,
k
k u
l k
u
u k
l u
c
l c
⎧
=
+
⎪
⎪
=
+
⎨
⎪
=
⎪
⎩
&
&
&
c
c
(15).
because of
(
)
,
0
c h =
c
last equation in (15) describes the contraction of the manifold
where motion takes place.
in order to find a dissipation process that naturally takes into account the compact and
semisimple structure of
( )
3
so
, we use the so called cartan-killing dissipation derived form
cartan-killing metric [18[
1
2
n
m
ik
im
kn
g
ε ε
=
(16)
physically with this choice, for l
c
α
=
and α
+
∈r , dynamics reduces to an isotropic linear
damping
ik
ik
g
δ
= −
of both energy and casimir functions, miming a rayleigh dissipation; in
this way trajectories approache a stable fixed point at the origin
(
)
0
0,0,0
=
x
.
because of λ term of lorenz equations, we can introduce a lyapunov function with
anisotropic dissipation of the form
1
2
ik
i
k
l
x x
=
λ
(17);
moreover in the spirit of formalism (13), an external torque representing forcing can be
easily included in the symmetric bracket by a translation l
l
g
→
+
where
3
g
f x
= −
⋅
(18).
it is interesting to note that on the ellipsoid
0
ξ , both
(
)
l
g
∇
+
and lorenz field u = x
& are
orthogonal to c
∇
, but since determinant
(
)
,
,
0
c
l
g
∇
∇
+
≠
u
, the 3- vectors do not belong
to the same plane.
energy cycle for lorenz attractor can be finally written as
(
)
(
)
(
)
3
,
,
2
ij
jk
i
k
k
u k
x x
g
u
u k
u
f
c
l
g
β
ω
⎧
= −
−λ ω
−ω
⎪
⎪
=
−
+
⎨
⎪
= −
+
⎪
⎩
&
&
&
c
c
(19),
i.e. defining forcing terms as
3
3
3
,
,
,
k
u
c
f
g k
f
x
f
g u
f
f
g c
fx
ω
⎧
=
=
ω
⎪
=
=
⎨
⎪
=
=
⎩
(20).
in this formalism the first two equations of (19) describe energy variations of a particle
dynamics constrained to move on a spherical surface of variable radius ( )
2
r t
c
=
. it is
easy to verify that for isotropic dissipation l
c
α
=
, even in presence of forcing, equations
(19) describe a purely dissipative dynamics. in spherical coordinates after simplification, it
becomes
2
2
sin
r
r
f
α
θ
= −
−
&
(21).
for lorenz parameters, combined effects of conservative part, anisotropic dissipation and
forcing components of (1), makes dynamics of the spherical radius deterministic, bounded,
recurrent and sensitive to initial conditions, as shown in fig.2, in other words chaotic.
motion on a variable but topologically stable manifold justifies the presence of last equation
in (19) that takes into account the background field
( )
c t .
fig.2 sensitivity to initial conditions for two numerically different casimir functions top: time evolution for two casimir radii
( )
( )
2
r t
c t
=
. bottom:
( )
r t
δ
; with
( )
0
0.008
r t
δ
≈
if we consider steady state conditions, where all three time derivatives in (19) are set to
zero, we note that last equation represents ellipsoid
0
ξ of equation (8) and the only point
solution lying on it is given by fixed point
(
)
{
}
0,0,
ρ
σ
=
−
+
0
x
corresponding to the
asymptotic
max
r
in (9). here
(
)
,
0
u k =
c
,
(
)
2
2
k
c
ρ
σ
=
=
+
and potential reaches its
minimum value
(
)
u
σ ρ
σ
= −
+
.
in order to study behaviour of casimir function and its conversion terms, it will be useful to
rewrite last equation of (19) in terms of lyapunov function l and forcing g , both contained
in a function
( )
(
)
w
2l
g
= −
+
x
. as a matter of facts, substituting
(
)
2
w
c
l
g
= −
+
=
&&
&
&
& in (19),
we have:
.
(
)
(
)
,
,
,
c
c
w
w k
w u
l
g w
=
+
+
+
&
(22)
from which applying antisymmetric properties of lie- poisson bracket, conversion terms for
u and k are written as
(
)
(
)
(
)
(
)
(
)
(
)
(
)
(
)
1
2
3
1
2
1
2
1
2
,
2
,
,
2
,
2
,
,
2
1
0
c
c
c
c
c
c
ijk
j
k
i
w k
k l
k g
x x x
f
x x
w u
u l
u g
x x
ε
ω σ
⎛
⎞
=
+
=
ω λ
+
ω −ω
⎜
⎟
⎝
⎠
=
+
=
−
+
∑
(23)
note that all of terms in (23) are linear functions of
(
)
,
u k
c
. for lorenz parameters it
results
0
ijk
j
k
i
ε ω λ >
∑
,
(
)
1
0
ω σ −
>
and
(
)
1
2
0
f ω −ω
>
from which energy cycle reads as
(
)
(
)
(
)
(
)
(
)
(
)
(
)
(
)
3
2
2
1
2
3
1
2
2
1
2
,
,
1
1
1
2
2
1
,
2
3
ij
jk
i
k
k
u k
x x
g
u
u k
u
f
f
w
u
u k
l
g
lg
β
ω
β
β
σ
σ
σ
ω
ω
⎧
⎪
= −
−λ ω
−ω
⎪
⎪
⎪
=
−
+
⎨
⎪
⎛
⎞
⎡
⎤
−
ω +
−
ω +
−
ω
⎪
⎣
⎦
=
+
−
+
ω −ω
⋅
+
+
+
⎜
⎟
⎪
⎜
⎟
ω −ω
⎪
⎝
⎠
⎩
&
&
&
c
c
c
(24)
this reformulation of energy cycle takes into account dissipation and forcing in conversion
terms between
,
,
k u w where w
c
= & at least formally plays the role of internal energy of
the system.
iv. conversion factors
apart from dissipation and forcing terms from (24) it is clear that energy cycle for lorenz
attractor can be studied by analysing the behaviour of conversion term
(
)
,
u k
c
shown in
fig.3.
fig.3. kinetic energy, potential energy and their relative conversion term
(
)
,
u k
c
are plotted on lorenz attractor. colour
ranges from black to bright copper, going from low to high numerical values. for
(
)
,
u k
c
, regions of negative values
1
2
0
x x <
, or transition regions where
(
)
,
0
u k >
c
, are shown in orange. they cover about 12% of the attractor.
given that dynamics of lorenz system is well described as a sequence of traps and jumps
between the two lobes of the strange attractor; we observe what follows.
when trapped in a lobe, system state experiences spiral-like trajectory, whose centre is in a
fixed point
(
)
(
)
{
}
1 ,
1 ,
1
β ρ
β ρ
σ
± = ±
−
±
−
−
−
x
and whose radius increases in time as
energy and casimir maxima, till touching the boundaries planes
1
0
x =
or
2
0
x =
where a
transition to the opposite lobe occurs.
after lobe transition, trajectory starts from regions close to the opposite unstable fixed point
(
)
(
)
{
}
1 ,
1 ,
1
β ρ
β ρ
σ
=
−
−
−
−
xm
m
m
, moving towards boundary planes and so on.
looking at the attractor as a fractal object, 12% of its whole structure lies in transition
regions where
1
2
0
x x <
.
the above described dynamical behaviour above described is ruled by laws (23) and (24)
as shown in fig.4.
fig.4 time evolution for energy cycle variables. top sign of
( )
1
x t represents jumps and traps. jumps are associated to
spikes in conversion terms. second row:
( )
c t
expansions and contractions of the casimir sphere, in its maxima, are
modulated by the sign of conversion terms values. middle and fourth lines, conversion terms
(
)
,
u k
c
and
(
)
,
w k
c
show an opposite phase dynamics. bottom
(
)
,
w u
c
. note inequality
(
)
(
)
(
)
,
,
,
u k
w u
w k
<
<
c
c
c
reminding that
(
)
(
)
,
2
,
w u
u l
=
c
c
, a conversion from kinetic to potential energy
k
u
→
occurs in trapping regions
1
2
0
x x >
where a number of loops around
±
x of increasing
radius occurs; in the same regions a conversion w
u
→
occurs, (fig.5).
fig.5 potential energy, "internal energy" and their relative conversion term
(
)
,
w u
c
are plotted on lorenz attractor. colour
ranges from black to bright copper, going from low to high numerical values. regions where
(
)
,
0
w u >
c
, are shown in
orange.
as regards to term
(
)
,
w k
c
, coordinate
3
x drives the behaviour of
(
)
,
k l
c
, giving rise to
both conversions k
l
↔
inside a lobe of the attractor, while k
g
→
is the only possible
conversion, (fig.6).
fig.6. kinetic energy, "internal energy" and their relative conversion term
(
)
,
c w k
are plotted on lorenz attractor. colour
ranges from black to bright copper, going from low to high numerical values. regions where
(
)
,
0
w k >
c
are shown in
orange.
in trapping regions function w
c
= & act as a source for total energy h
u
k
=
+
,
(
)
,
0
w k >
c
and
(
)
,
0
w u >
c
.
in this phase, casimir sphere, centred in
{
}
0,0,0
=
0
x
, expands.
in regions
1
2
0
x x <
a drastic change in energy cycle occurs, depending on the bounded
nature of the system. here potential energy is transformed into kinetic energy u
k
→
and
lyapunov function into potential energy, l
u
→
; therefore an implosion of casimir sphere
occurs. also, because of (23) l
k
↔
and g
k
→
.
in lobe-jumping regions function w
c
= & acts as a sink for both potential and kinetic energy,
since
(
)
,
0
w k <
c
and
(
)
,
0
w u <
c
.
concerning numerical values of conversion terms, the following inequalities hold:
(
)
(
)
3
,
2
1
,
ijk
j
k
i
k l
x
k g
f
ε
⎛
⎞
=
ω λ
<
⎜
⎟
⎝
⎠
∑
c
c
, (25)
(
)
(
)
(
)
,
1
1
2
1
,
k u
w u
σ
−
=
<
−
c
c
, (26)
(
)
(
)
(
)(
)
(
)
,
3
2
1
2
1
,
w k
w u
β
ρ
σ
ω σ
−
+
≤
<
−
c
c
, (27)
(since max z
ρ
σ
=
+
):
(
)
(
)
(
)
,
,
,
k u
w k
w u
<
<
c
c
c
. (28)
conversion rules are reassumed in the following diagram where dashed lines represent
cycle in jumping regions
1
2
0
x x <
and continuous lines refer to trapping lobes
1
2
0
x x >
of the
attractor.
v. mechanical ape\upe and predictability
in the spirit of 1955 lorenz work on general circulation of the atmosphere [1], we introduce
for system (6) quantities respectively known as available potential energy
max
min
ape
u
u
=
−
and unavailable potential energy
min
upe
u
=
; they represent, respectively, the portion of
potential energy that can be converted into kinetic energy, and the portion that cannot.
in atmospheric science ape is a very important subject since its variability determines
transitions in the atmospheric circulation. in analogy with the theory of margules [19], in
which a fluid inside a vessel is put into motion and free surface oscillates up and down,
while potential energy is constrained from below by the potential energy of the fluid at rest
we start by considering the conservative system :
1
2
2
1
3
1
3
1
2
x
x
x
x x
x
x
x x
σ
σ
=
⎧
⎪
= −
−
⎨
⎪
=
⎩
&
&
&
(29)
fixing casimir and energy values (
)
0
0
,
c e
, equation
(
)
,
0
c u k =
gives at
1
0
x =
⇒
max
0
k
c
= ω
and
min
0
u
e
c
=
−ω
;
at
2
0
x =
⇒
(
)(
)
(
)
2
2
max
0
0
2
1
1
u
c
e
ω
ω ω
⎡
⎤
= −
+
−
ω −
−
ω −
⎣
⎦
and
min
0
max
k
e
u
=
−
, where
2
3
ω = ω = ω for lorenz system, therefore we get:
(
)(
)
2
2
0
0
0
0
0
0
2
1
1
c
e
ape
e
c
upe
e
c
ω
ω ω
⎧
−
+
−
ω −
−
⎪
=
−
+ ω
⎨
ω −
⎪
=
−ω
⎩
(30)
in case of full lorenz system, energy and casimir are not conserved even though their
associated surfaces intersect instantaneously; therefore introducing dissipation and forcing
one can still consider the evolution of ape and upe as state functions. fig .7 shows a
graphical representation of the two quantities over lorenz attractor; ape increases as
3
x
decreases while upe follows the opposite way.
fig.7 mechanical ape and upe plotted on lorenz attractor.
this behaviour coincides with that of predictability regions over the attractor computed
using breeding vectors technique [20] and shown in fig.8. giving an initial perturbation
δ
0
x , red vector growth g over n=8 steps is computed as
1 log
g
n
δ
δ
⎛
⎞
=
⎜
⎟
⎝
⎠
0
x
x
, regions of the
lorenz attractor within which all infinitesimal uncertainties decrease with time [21] are
located in regions of
(
)
max upe and
(
)
min ape .
fig.8 breeding vectors map for lorenz attractor, solution of equations (6). colour ranges from black to bright copper
meaning low to high predictability.
finally we note that the set
(
)
(
)
{
}
min
,max
ape
ape
∩
=
ape
ψ
ξ
, where
ape
ξ
represents the
surface
(
)
0
d
ape
dt
=
has as ordered subset
(
)
max ape . in this view, lorenz map for
ape maxima ape shown in fig.9 assumes the meaning of 'handbook' for regime
transitions; for low values of ape system is trapped withine one lobe until it reaches the
minimum necessary value to jump into the opposite one.
fig.9 left: ordered set for ape maximum values lying on ellipsoid
ape
ξ
(not shown); colour ranges from black to bright
copper, going from low to high numerical values. right: lorenz map for ape maxima.
vi. energetics and dynamics
physically, introduction of function w has the following meaning: ellipsoid
0
ξ contains sets
( )
min c
l,r
ψ
and
( )
max c
l,r
ψ
for system (6) and acts as a boundary for regions of maximum forcing
and dissipation.
inside solid ellipsoid ξ ,
2
g
l
>
and forcing drives motion, also because of
0
w
c
=
>
&
which
implies that casimir sphere continually expands. and for
(
)
,
0
c u k <
trajectory of (6) links a
point
( )
min c
∈
l,r
ψ
x
to a point
( )
max c
∈
l,r
ψ
y
into the same lobe. otherwise, for
(
)
,
0
c u k >
lorenz equations link a minimum
( )
min c
∈
l,r
ψ
x
of
,
l r
ψ
into a maximum
( )
max c
∈
r,l
ψ
y
of the
opposite lobe.
outside ξ , (more precisely into the region (
)
∪
∩
l
r
ψ
ψ
ξ), 2l
g
>
,
0
w <
and casimir
sphere continually implodes; here dissipation constrains motion to be globally bounded
inside a sphere of radius
max
r
ρ
σ
=
+
, being
(
)
0
tr −λ <
.
for
(
)
,
0
c u k <
, dynamics links a point
( )
max c
∈
l,r
ψ
y
to a point
( )
min c
∈
l,r
ψ
x
in the same lobe;
for
(
)
,
0
c u k >
, instead, a point
( )
max c
∈
l,r
ψ
y
will be linked to a point
( )
min c
∈
r,l
ψ
x
of the
opposite lobe. fig.10 shows these links.
fig. 10 traps and jumps. trajectories start from right lobe
r
ψ
casimir maxima (shown in red). solid line track ends up on
the same set while dash-dot one reaches maxima on the other lobe
l
ψ
. black spots indicate initial conditions for both the
trajectories, minima for casimir are shown in blue, while black stars represent fixed points
±
0
x ,x
it is remarkable to note that using formalism above described, it is possible to proof the
outward spiral motion around fixed points
±
x .
for a trajectory (
)
,
l
⊂
1
2
l,r
p p
ψ
linking two points
( )
( )
( )
1
1
2
2
,
max
t
t
c
∈
l,r
ψ
p
p
(
)
2
1
,
0
c
t
t
u k dτ <
⇒
∫
(
)
2
1
0
t
t
u
u
f
d
β
ω
τ
+
−
<
∫
&
(31)
after integration and indicating by
3
x
ω
the time average of potential energy along
(
)
l
1
2
p ,p
we get
(
)
(
)
(
)
(
)
3
3
3
2
1
0
x
x
x
t
t
β
ρ
σ
−
<
−
+
−
<
−
2
1
p
p
( 32 )
and then
(
)
(
)
(
)
(
)
3
3
3
3
x
x
x
x
± >
>
>
1
2
0
x
p
p
x
.
in order to better understand the statistics of persistence in the lobes, let's consider the
effect of conversion factor (12). system (6) can be written as a particular case of the
following set of equations
(
)
1
1
2
2
1
3
1
2
3
1
2
3
x
x
x
x
x x
x
x
x
x x
x
σ
ω
ω
β
β ρ
ω
⎧
= −
+
⎪
= −
−
−
⎨
⎪
=
−
−
+
⎩
&
&
&
(33).
fig.11 shows that under variation of parameter ω the resulting attractor, while conserving its
global topology and fractal properties, will explore a greater volume domain due to the
increased external forcing.
fig.11 statistical behaviour as a function of ω term. left: from top to bottom (as ω increases),
( )
1
,
1,2,3
k
x
t
k =
shows
less and less persistence of trajectory inside a lobe; right: signs of conversion term over the attractor: regions where
(
)
,
0
u k >
c
(in orange) approach unstable points
±
x
as ω increases and region where
(
)
,
0
u k <
c
(in blue)
decreases in size.
the most important effect, however, is given by a significant change in persistence statistics;
more precisely regions
(
)
,
0
c u k >
will expand to inner regions of attractor close to fixed
points
±
x , increasing the probability of jumping to the opposite lobe.
vii. conclusions
up to now, lorenz system has been studied under many viewpoints in literature; in this paper
energy cycle approach has been fully exploited. the nature of lie-poisson structure in lorenz
equation has been shown to be fruitful, for example in finding a geometrical invariant, ellipsoid
0
ξ , whose physical meaning is the boundary of action between forcing and dissipation. in this
manner, kinetic-potential energy transfer term
(
)
,
c u k keeps track of dynamical behaviour of
trapping and jumping, also giving information about global predictability of the system as
illustrated by direct comparison of conversion factors with classical results on predictability.
acknowledgements. authors warmly thank prof. e.kalnay for having provided codes in
ref.[20] in order to compute breeding vector analysis.
refences
[1] e.lorenz, tellus 7, 157-167 (1955)
[2] v.i. arnold, proc.roy. soc., a 434 19-22 (1991)
[3] a.d'anjous, c.sarasola and f.j. torrealdea, journ. of phys.:conference series 23
238-251. (2005)
[4] j.e. marsden and t. ratiu, introduction to mechanics and symmetry, springer, berlin,
1994
[5] p.j. morrison, rev. mod. phys. 70 467-521 (1998)
[6] a.pasini, v.pelino and s.potestà, phys.lett. a 241 (1998) 77-83
[7] v.zeitlin, phys.rev.lett 93 no 26 264501-1-264501-3 (2004)
[8] v.pelino and a.pasini, , phys.lett. a 291 389-396 (2001)
[9] v.pelino and f.maimone, phys rev.e 76, (2007)
[10] v.i. arnold and b.a. khesin, topological methods in hydrodynamics, springer, berlin
1988
[11] a.pasini and v.pelino, phys.lett. a 275 435-446 (2000)
[12] e.n. lorenz, j.atmos. sci. 20 130 (1963)
[13] a.elipe,v. lanchares, cel. mech dyn astr 101 49-68 (2008)
[14] m.gianfelice,f.maimone, v.pelino, s.vaienti, invariant densities for expanding lorenz-
like maps, in preparation.
[15] k.suzuki,y.watanabe,t.kambe, j.phys.a math.gen 31 6073-6080 (1998)
[16] a.elipe, m.arribas and a.riaguas, j.phys.a:math.gen. 30 587-601 (1997)
[17] r.salmon, lectures on geophysical fluid dynamics oxford (1998)
[18] p.j. morrison, journal of physics 169 1-12 (2009)
[19] j.marshall, r.a. plumb, atmosphere, ocean and cliamte dynamics, academic press
(2007)
[20] e.evans, n.batti,j.kinney,l.pann,m.pena,s.chih,e.kalny,j.hansen, bams 519-524
(2004)
[21] l.a.smith, c.ziehmann, k.fraedrich q.j.r meteorol.soc. 125, 2855-2886 (1999)
|
0911.1704 | universal correlations and coherence in quasi-two-dimensional trapped
bose gases | we study the quasi-two-dimensional bose gas in harmonic traps at temperatures
above the kosterlitz-thouless transition, where the gas is in the normal phase.
we show that mean-field theory takes into account the dominant interaction
effects for experimentally relevant trap geometries. comparing with quantum
monte carlo calculations, we quantify the onset of the fluctuation regime,
where correlations beyond mean-field become important. although the density
profile depends on the microscopic parameters of the system, we show that the
correlation density (the difference between the exact and the mean-field
density) is accurately described by a universal expression, obtained from
classical-field calculations of the homogeneous strictly two-dimensional gas.
deviations from universality, due to the finite value of the interaction or to
the trap geometry, are shown to be small for current experiments. we further
study coherence and pair correlations on a microscopic scale. finite-size
effects in the off-diagonal density matrix allows us to characterize the
cross-over from kosterlitz-thouless to bose-einstein behavior for small
particle numbers. bose-einstein condensation occurs below a characteristic
number of particles which rapidly diverges with vanishing interactions.
| introduction
in recent years, several experiments [1, 2] studied two-
dimensional ultra-cold atomic gases from the normal
phase down in temperature to the kosterlitz–thouless
transition [3] and into the low-temperature superfluid
phase. the interference of two simultaneously prepared
two-dimensional gases evidenced the presence of vortices
[1].
related experiments investigated interaction and
correlation effects [2, 4] in the density profile and in co-
herence patterns. for a quantitative description of the
kosterlitz–thouless transition and of the interaction ef-
fects, it proved necessary to account for the quasi-two-
dimensional nature of the gas, that is to include thermal
excitations in the strongly confined z-axis in addition to
the weak trapping potential in the xy-plane [5, 6].
in weakly interacting two-dimensional bose gases,
the kosterlitz–thouless phase transition occurs at rel-
atively high phase-space density (number of atoms per
phase-space cell λ2
t
= 2πħ2/mt ).
this density is
ncλ2
t ≃log(ξn/ ̃
g) [7–9], where ̃
g characterizes the two-
dimensional interaction strength, t is the temperature,
m the mass of the atoms, and n the density. the coef-
ficient ξn = 380 ± 3 was determined numerically using
classical-field simulations [9]. for the ens experiment
of hadzibabic et al.
[1], the critical phase-space den-
sity is ncλ2
t ∼8 in the center of the trap, whereas in
the nist experiment of clad ́
e et al. [2], ncλ2
t is close
to 10.
for phase-space densities between one and the
∗electronic address: [email protected]
†electronic address: [email protected]
critical number, the gas is quantum degenerate yet nor-
mal. the atoms within one phase-space cell are indistin-
guishable. they lose their particle properties and acquire
the characteristics of a field. the mean-field description
of particles interacting with a local atomic density n(r)
may further be modified through correlations and fluctu-
ations. quantum correlations can be several times larger
than the scale λt. this gives rise to "quasi-condensate"
behavior inside the normal phase.
in this paper, we study the quantum-degenerate regime
at high phase-space density in the normal phase.
we
first discuss the peculiar quasi-two-dimensional thermo-
dynamic limit where, as the number of particles in the
gas is increased, the interactions and the lattice geome-
try are scaled such that a finite fraction of all particles
are in the excited states of the system. in this thermo-
dynamic limit, the kosterlitz–thouless transition takes
place at a temperature comparable to the bose–einstein
transition temperature in the non-interacting case, and
the local-density approximation becomes exact. we first
clarify the relation between different recent versions of
quasi-two-dimensional mean-field theory [6, 10, 11] in the
local-density approximation (lda), and also determine
the finite-size corrections to the lda. we compare mean-
field theory to a numerically exact solution obtained by
path-integral quantum monte carlo (qmc) calculations
with up to n ≳105 interacting particles in a harmonic
trap with parameters chosen to fit the experiments. we
concentrate on the correlation density, the difference be-
tween the exact density and the mean-field density at
equal chemical potential, and show that it is essentially
a universal function, independent of microscopic details.
within classical-field theory, the correlation density is
obtained from a reparametrization of known results for
2
the strictly two-dimensional homogeneous system [12].
the classical-field results hold for small interaction pa-
rameters ̃
g →0, but our full qmc solution accounts
for corrections. we compute the correlation density by
qmc and show that it is largely independent of the trap
geometry, the temperature, and the interaction strength.
we also study off-diagonal coherence properties, and
the density–density correlation function of the quasi-two-
dimensional gas.
it is well known that even at high
temperature, bosonic bunching effects enhance the pair-
correlation function on length scales below λt which for
the ideal bose gas approaches the characteristic value
2n2 at vanishing separation. in our case, interference in
the z-direction reduces the in-plane density fluctuations
even for an ideal gas and within mean-field theory, and
the reduction of the pair correlations from 2n2 no longer
proves the presence of beyond-mean-field effects.
we finally discuss finite-size effects in the quasi-two-
dimensional bose gas. for the density profile, they are
not very large, but we point out their great role for
off-diagonal correlations. the latter are responsible for
a cross-over between the physics of bose–einstein con-
densation at small particle number and the kosterlitz–
thouless physics for larger systems; both regimes are of
relevance for current experiments. this cross-over takes
place at a particle number n ∼ ̃
g−2 which grows very
rapidly as the interaction in the gas diminishes.
ii.
system parameters and mean-field
description
a.
quasi-two-dimensional thermodynamics
we consider n bosons in a three-dimensional pancake-
shaped harmonic potential with parameters ωx = ωy = ω
and ωz ≫ω at inverse temperature β = 1/t . the z
variable is separate from x and y, and we denote the
three-dimensional vectors as ⃗
r = (r, z), and write two-
dimensional vectors as r = (x, y), and r = |r|.
the quasi-two-dimensional regime of the bose gas [6] is
defined through a particular thermodynamic limit n →
∞, where the temperature is a fixed fraction t ≡t/t 2d
bec
of the bose–einstein transition temperature of the ideal
two-dimensional bose gas, t 2d
bec =
√
6nħω/π. although
the two-dimensional trapped bose gas undergoes a bose–
einstein transition only for zero interactions, t 2d
bec still
sets the scale for the kosterlitz–thouless transition in
the interacting gas[6, 8]. in the quasi-two-dimensional
regime, a finite fraction of atoms remains in excited states
in z. the excitation energy is scaled as ħωz ∝t 2d
bec,
which implies that ωz increases as n 1/2 in the thermo-
dynamic limit.
interatomic
collisions
are
intrinsically
three-
dimensional.
here, we consider the experimentally
relevant case where the range of the scattering potential
r0 is much smaller than the typical lateral extension
lz = (mωz/ħ)−1/2, and also where r0 is much smaller
than the inter-particle distances.
the interactions
are then described by the three-dimensional s-wave
scattering length as, and one may characterize the
quasi-two-dimensional gas
through
a
bare
effective
two-dimensional interaction strength, ̃
g,
̃
g = mg
ħ2
z
dz [ψ0(z)]4
(1)
g = 4πħ2as
m
,
(2)
where ψ0(z) is the unperturbed ground state of the con-
fining potential and g is the usual three-dimensional
coupling constant.
for a harmonic confinement, ̃
g =
√
8πas/lz, and as/lz must be kept constant in the ther-
modynamic limit to obtain a fixed two-dimensional in-
teraction strength.
quasi-two-dimensional scattering amplitudes depend
logarithmically on energy, ǫ, in terms of a universal func-
tion of as/lz and of ǫ/ħωz ∼t/ωz, which are both kept
constant in the quasi-two-dimensional thermodynamic
limit. the logarithmic energy dependence yields small
corrections of order (as/lz)2 [13–15] to the bare interac-
tion ̃
g. they can be neglected in the following.
the scaling behavior in the quasi-two-dimensional
limit corresponds to the following reduced variables:
̃
r = r/lt
̃
z = z/lz
t = t/t 2d
bec ̃
ωz = ħωz/t 2d
bec
̃
n = nλ2
t
̃
g = mg/(
√
2πlzħ2),
(3)
where lt = (t/mω2)1/2 is the thermal extension in the
plane. the quasi-two-dimensional limit consists in tak-
ing n →∞, with t, ̃
ωz and ̃
g all constant. in this limit,
lt /λt = t
p
3n/π3 ≫1, so that macroscopic and mi-
croscopic length scales separate, and the scaling of the
three-dimensional density, n3d, is at constant
̃
n3d = n3dλ2
tlz.
(4)
in reduced variables, the normalization condition n =
r
dr n(r) = (lt /λt)2 r
d ̃
r ̃
n( ̃
r) is expressed as
z ∞
0
d ̃
r ̃
r ̃
n( ̃
r) = π2
6t2 .
(5)
the local-density approximation becomes exact in the
quasi-two-dimensional limit.
in the ens experiment, 87rb atoms are trapped at
temperatures t
≈50 −100 nk.
the in-plane trap-
ping frequencies are ω/(2π) ≈50hz whereas the con-
finement is of order ωz/(2π) ≈3khz. with n ∼2 * 104
atoms trapped inside one plane, typical parameters are
t 2d
bec ≈300nk (using ħ/kb ≃7.64 * 10−3nks), so that
̃
ωz ≈0.44 −0.55.
the scattering length as = 5.2nm
leads to an effective coupling constant ̃
g = 0.13, using
ħ/m ≃6.3 * 10−8m2s−1a−1, where a is the atomic mass
number. in the nist experiment, sodium atoms at t ≈
t 2d
bec ≈100nk are confined by harmonic trapping poten-
tials with ω/(2π) ≈20hz, and ωz/(2π) ≈1khz. this is
3
described by reduced parameters ̃
g = 0.02 and ̃
ωz = 0.50.
the critical densities are ̃
nc ≈log(380/ ̃
g) ≈8.2 for the
ens parameters, somewhat lower than the nist value
̃
nc ≈9.9. using the quasi-two-dimensional mean-field es-
timates of ref. [6], the kosterlitz–thouless temperatures
are located at tkt ≡tkt/t 2d
bec ≈0.69 and tkt ≈0.74,
respectively.
the quasi-two-dimensional limit describes a kinemat-
ically two-dimensional gas, whose extension in the z di-
rection is of the order of the thermal wavelength λt. as
̃
ωz ≡λ2
t/(2πtl2
z) is decreased, a system at finite n turns
three-dimensional. this is already the case for the ideal
quasi-two-dimensional gas (with ̃
g = 0) where the bose–
einstein transition temperature crosses over from two-
dimensional to three-dimensional behavior as a function
of ̃
ωz, with asymptotic behavior given by
tbec ∼
h
ζ(2)
ζ(3)
i1/3
̃
ω1/3
z
−1
6
ζ(2)
ζ(3) ̃
ωz
for ̃
ωz ≪1
1 −
1
2ζ(2)3/2 exp (− ̃
ωz)
for ̃
ωz ≫1
(6)
(see [6]).
in eq. (6), the first term for ̃
ωz ≪1 de-
scribes three-dimensional bose–einstein condensation in
an anisotropic trapping potential.
for the interacting bose gas,
the nature of the
kosterlitz–thouless transition in two dimensions dif-
fers from the bose–einstein transition of the three-
dimensional gas.
for large ̃
ωz, universal features of
the kosterlitz–thouless transition are preserved, but the
density profiles and the value of the kosterlitz–thouless
transition temperature depend on ̃
ωz and ̃
g [5, 6]. for
small confinement strength ̃
ωz, a dimensional cross-over
between the two-dimensional kosterlitz–thouless transi-
tion and the three-dimensional bose–einstein condensa-
tion takes place at particle numbers such that the level
spacing in the confined direction is comparable to the
(two-dimensional) correlation energies, t ̃
ωz ≲ ̃
g ̃
n/π.
the quasi-two-dimensional limit differs from the "ex-
perimentalist's" thermodynamic limit where the atom
number is increased in a fixed trap geometry, and at
constant temperature.
in this situation, the ratio be-
tween the microscopic and the macroscopic length scales,
lt /λt = t/(ħω
√
2π), remains constant and finite. the
number of particles in any region of nearly constant den-
sity remains also finite so that, in contrast to the quasi-
two-dimensional thermodynamic limit, corrections to the
lda persist.
b.
n-body and mean-field hamiltonians
the gas specified in section ii a is described by the
hamiltonian
h =h0 + v,
(7)
h0 =
n
x
i=1
−ħ2∇2
i
2m + 1
2m
ω2r2
i + ω2
zz2
i
,
(8)
v =
n
x
i<j=1
v(|⃗
r i −⃗
r j|),
(9)
where v is the three-dimensional interaction potential.
we compute the n-body density matrix at finite temper-
ature using three-dimensional path-integral qmc meth-
ods.
we thus obtain all the thermodynamic observ-
ables [5, 16, 17] for up to n = 106. qmc calculations
have clearly demonstrated the presence of a kosterlitz–
thouless transition [6] for parameters corresponding to
the ens experiment.
in the mean-field approximation, one replaces the n-
body interaction between atoms in eq. (9) by an effec-
tive single-particle potential.
the mean-field hamilto-
nian writes
hmf = h0 + vmf,
(10)
where
vmf =
n
x
i=1
2gn3d(⃗
r i) −g
z
d⃗
r [n3d(⃗
r )]2
(11)
is the mean-field potential energy.
from the corre-
sponding partition function in the canonical or grand-
canonical ensemble, all thermodynamic quantities can
be calculated. the three-dimensional density n3d(⃗
r ) in-
side the mean-field potential must be determined self-
consistently. in all situations treated in the present pa-
per, self-consistency is reached through straightforward
iteration.
mean-field theory leads to an effective schr ̈
odinger
equation for the single-particle wavefunction, ψj(⃗
r ), of
energy ǫj,
−ħ2∇2
r
2m + 1
2mω2r2 −ħ2∂2
2m∂z2 + 1
2mω2
zz2 + 2gn3d(⃗
r )
ψj(⃗
r ) = ǫjψj(⃗
r )
(12)
together with the total density
n3d(⃗
r ) =
x
i
ψ∗
j(⃗
r )ψj(⃗
r )
eβ(μ−ǫj) −1 .
(13)
the exact solution of the mean-field eigenfunctions and
eigenvalues for finite systems, is rather involved, but con-
siderably simplifies in the local-density approximation.
at finite n, in the canonical ensemble, we solve the
mean-field equations through a quantum monte carlo
4
simulation with n particles which avoids an explicit cal-
culation of all eigenfunctions. in contrast to the usual
interaction energy within qmc, consisting, in general,
of a pair interaction potential, the mean-field interaction
energy is simply given in terms of an anisotropic, single-
particle potential proportional to the three-dimensional
density profile, n3d(⃗
r ), as in eqs (10) and (11).
this
interaction potential must be obtained self-consistently
as usual in mean-field. once self-consistency in the den-
sity is reached, one can compute correlation functions
and off-diagonal elements of the reduced one-body den-
sity matrix.
c.
mean-field: local-density approximation
mean-field
theory
simplifies
in
the
quasi-two-
dimensional thermodynamic limit, as the local-density
approximation then becomes exact.
this is because
the natural length scale of the system, λt, separates
from the macroscopic scale lt of variation of the density
(λt/lt →0). the particle numbers inside a region of
constant density diverges.
the decoupling of length
scales implies that the xy-dependence of the single
particle wavefunctions in eq. (12) separates in the
thermodynamic limit.
using scaled variables, eq. (3),
this yields
−1
2
d2
d ̃
z2 + 1
2 ̃
z2 +
2 ̃
gt
√
2π ̃
ωz
̃
nmf
3d( ̃
r, ̃
z)
̃
φν( ̃
r, ̃
z) = ̃
ǫν( ̃
r)
̃
ωz
̃
φν( ̃
r, ̃
z),
(14)
for the eigenfunctions, ̃
φν( ̃
r, ̃
z), and eigenvalues, ̃
ǫν( ̃
r),
in the confined direction, at a given radial distance, ̃
r.
the reduced local density ̃
nmf
3d is given by the normalized
wavefunctions ̃
φ( ̃
r, ̃
z):
̃
nmf
3d( ̃
r, ̃
z) =
x
ν
̃
φ2
ν( ̃
r, ̃
z) ̃
nmf
ν ( ̃
r)
̃
nmf
ν ( ̃
r) = −log [1 −exp ( ̃
μ( ̃
r) − ̃
ǫν( ̃
r)/t)] .
(15)
the position-dependence in eq. (14) and eq. (15) only
enters parametrically through the ̃
r-dependence of the
chemical potential,
̃
μ( ̃
r) = ̃
μ − ̃
r2
2 ,
(16)
and the local-density approximation becomes exact in
the quasi-two-dimensional thermodynamic limit. within
lda, density profiles (as in fig. 1) are directly related
to the equation of state ̃
n( ̃
μ) of a quasi-two-dimensional
system, which is homogeneous in the xy-plane.
the schr ̈
odinger equation of eq. (14) is conveniently
written in the basis {ψ0, ψ1, . . . , ψn, . . . } of the one-
dimensional harmonic oscillator with ω = m = 1, as
it diagonalizes eq. (14) for ̃
g = 0. using
̃
φν( ̃
z) =
x
μ
aμνψμ( ̃
z),
(17)
(where we have dropped the index corresponding to ̃
r or,
equivalently, to ̃
μ), we can write it as a matrix equation
a − ̃
ǫν
̃
ωz
aν = 0
(18)
with
eigenvalues
̃
ǫν/ ̃
ωz
and
eigenvectors
aν
=
{a0ν, . . . , anν}, of the (n + 1) × (n + 1) matrix
aμν = νδμν +
2 ̃
gt
√
2π ̃
ωz
z
d ̃
z ψμ( ̃
z) ̃
nmf
3d( ̃
μ, ̃
z)ψν( ̃
z)
(19)
(where 0 ≤μ, ν ≤n) and the density ̃
nmf
3d( ̃
μ, ̃
z) =
−p
ν ̃
φ2
ν( ̃
z) log[1 −exp( ̃
μ − ̃
ǫν/t)].
the wavefunctions
ψν are easily programmed (see, e.g., [18] sect. 3.1), and
the self-consistent mean-field solutions at each value of ̃
μ
can be found via iterated matrix diagonalization.
this full solution of the lda mean-field equations is
analogous to the one in ref.[10].
the mean-field ver-
sion used in [11], however, neglects the off-diagonal cou-
plings in aνμ with ν ̸= μ. in [6], we used a simplified
mean-field potential in order to reach explicit analytical
expressions. these different mean-field approximations
essentially coincide at all relevant temperatures [11], but
ground-state occupations in z slightly differ. further re-
placing the coupling constant ̃
g by ̃
g tanh1/2( ̃
ωz/2t) has
allowed us, in ref. [6], to improve the agreement with
the qmc results close to the transition. this is because
mean-field theory overestimates the effect of the inter-
actions in the fluctuation regime.
in the following we
always quantify beyond-mean-field corrections with re-
spect to the full lda solution of eq. (19).
iii.
correlation density and
universality
comparisons between the qmc and the mean-field
density profiles are shown in fig. 1 for the ens parame-
ters at reduced temperature t = 0.71, slightly above the
kosterlitz–thouless temperature.
finite-size effects as
well as deviations from mean-field theory are visible for
nλ2
t ≳5. in this section, we concentrate on correlation
corrections to mean-field theory in the thermodynamic
limit and postpone the discussion of finite-size effects to
section iv. we analyze the qmc density profiles within
the validity of the lda, eq. (16), and compare qmc and
mean-field densities at the same local chemical potential
[29] which defines the correlation density, ∆ ̃
n,
∆ ̃
n( ̃
μ) = ̃
n( ̃
μ) − ̃
nmf( ̃
μ).
(20)
as in experiments, the chemical potential is not a control
parameter of the qmc calculation, but it can be obtained
5
from a fit of the wings of the density profile with ̃
n ≲1
to the mean-field equation of state.
mean-field effects take into account the dominant in-
teraction effects which, in particular, determine shape
and energies of the ground and excited states in the
tightly confined direction.
one expects that correla-
tion effects do not modify these high-energy modes, but
merely affects the low-energy distribution of xy-modes in-
side the confining ground state, ̃
φ0( ̃
z). this assumption
is supported by a direct comparison of the normalized
density distribution in ̃
z between the qmc solution and
the mean-field approximation (inset of fig. 1) at differ-
ent radial distances, ̃
r. for small ̃
r, the ground state of
the confining potential is strongly populated. for larger
̃
r, higher modes of the one-dimensional harmonic oscilla-
tor are thermally occupied, and the density distribution
broadens. however, the normalized density profile in ̃
z
is everywhere well described by mean-field theory, and
correlation effects hardly modify the mode structure in
the confined direction.
1
2
3
4
5
6
7
8
0
0.2
0.4
0.6
0.8
1
1.2
1.4
density ~
n = nλt
2
position ~
r = rβ1/2
n=100,000
10,000
1000
mf-lda
0
0.5
0
2
n(~
z) (norm.)
~
z=zωz
1/2
~
r=0
1
2
3
fig. 1: qmc density profile for the ens parameters ( ̃
g =
0.13, ̃
ωz = 0.55) at temperature t = t/t 2d
bec = 0.71 for
different values of the particle number, compared to the lda
mean-field solution of section ii b at the same total number
of particles. the inset shows the density distribution n( ̃
z) at
different values of the radial distance ̃
r.
in the homogeneous two-dimensional gas, corrections
to mean-field theory at small ̃
g are described by classical-
field theory, and correlation effects in the density pro-
file can be expressed in terms of a universal function of
β(μ −2gnmf) [8, 12]. in a quasi-two-dimensional geom-
etry, the corresponding relevant quantity is given by the
local mean-field gap ∆mf between the local ground-state
energy in the confining potential and the local chemical
potential,
∆mf( ̃
r) = ̃
ǫ0( ̃
r)/t − ̃
μ( ̃
r).
(21)
within mean-field theory, it fixes the local xy-density in
the ground state of the confining potential
̃
nmf
0 ( ̃
r) = −log
1 −e−∆mf( ̃
r)
.
(22)
in the strictly two-dimensional limit, we have
∆mf →β(2 ̃
għ2nmf/m −μ) = ̃
g ̃
nmf/π − ̃
μ,
(23)
where nmf is the total mean-field density. deviations are
noticeable for large densities, as illustrated in fig. 2 (we
have absorbed the zero-point energy ̃
ωz/(2t) in the chem-
ical potential). within lda, we expect that the correla-
tion density ∆ ̃
n coincides to leading order in ̃
g with the
classical-field-theory results of the homogeneous strictly
two-dimensional system [12], expressed as functions of
the mean-field gap ∆mf [30].
-2
-1
0
1
2
3
4
5
6
7
8
-1
0
-∆, density ~
n = nλ2
~
μ
lda density
2d density
-gap: ( ~
μ-~
ε0 /t)
~
μ -~
g~
n/π
fig. 2: mean-field equation of state and gap ∆mf for ens
parameters ̃
g = 0.13, ̃
ωz = 0.55, t = 0.71. the gap differs
from the approximation ̃
μ − ̃
g ̃
n/π only at high density. the
2d mean-field curve (see eqs (22) and (23)) illustrates the de-
pendence of the equation of state on microscopic parameters.
in ref. [12], the critical density ̃
nc and chemical po-
tential ̃
μc at the kosterlitz–thouless transition were de-
termined to
̃
nc = log ξn
̃
g ,
ξn = 380 ± 3,
(24)
̃
μc = ̃
g
π log ξμ
̃
g ,
ξμ = 13.2 ± 0.4,
(25)
and the equation of state in the neighborhood of the tran-
sition was written as
̃
n − ̃
nc = 2πλ(x), with x = ( ̃
μ − ̃
μc)/ ̃
g.
(26)
the function λ(x) was tabulated. consistent with the
classical-field approximation, we can expand eq. (22) to
leading order in ∆mf,
̃
nmf
0 (∆mf) = −log(∆mf),
(27)
so that we can express x through ∆mf:
x(∆mf) = ̃
nmf
π
−∆mf
̃
g
− ̃
μc
̃
g
= −
∆mf
̃
g
+ 1
π log
ξμ
∆mf
̃
g
.
(28)
6
thus, we obtain the correlation density as a function of
∆mf:
∆ ̃
n = 2πλ[x(∆mf)] + log (ξn∆mf/ ̃
g) ,
(29)
and a straightforward inversion of eq. (28) allows us to
translate the data of [12] in order to obtain the correla-
tion density as a function of the mean-field gap. an em-
pirical interpolation of the numerical data with ≲10%
error inside the fluctuation regime, ∆c
mf ≤∆mf ≤∆f
mf
(∆c
mf and ∆f
mf are defined below), is given by
∆ ̃
n(∆mf/ ̃
g) ≃1
5
−1 +
1
∆mf/ ̃
g
1
1 + π∆2
mf/ ̃
g2
(30)
where positivity is imposed since, within classical field
theory, the correlation density must be positive and of
order (∆mf/ ̃
g)−2 for ∆mf/ ̃
g →∞.
0
0.5
1
1.5
2
2.5
3
3.5
0
0.1
0.2
0.3
0.4
correlation density ∆ ~
n
∆mf/~
g
mean-field
fluctuation region
kt
class.-field interpolation
(ω,t,~
g) = (0.55,0.71,0.13)
(1.64,0.57,0.14)
(0.82,0.57,0.14
(1.64,0.57,0.07)
(0.82,0.57,0.07)
fig. 3: correlation density ∆ ̃
n vs. rescaled mean-field gap
∆mf/ ̃
g. qmc data for various interaction strengths and con-
finements ̃
ωz are compared with the interpolation eq. (30) of
classical-field results [12].
in fig. 3, we plot the classical-field results for the
correlation density as a function of the rescaled mean-
field gap. we also indicate the onset of the kosterlitz–
thouless transition ( ̃
μ = ̃
μc, x = 0 in eq. (29)) at
∆c
mf/ ̃
g = 0.0623, which yields ∆ ̃
nc = 3.164. the cor-
relation density in the normal phase is thus finite for all
interactions, whereas the mean-field density diverges as
̃
nmf = −log ∆c
mf ∝−log g for small interactions at tkt.
in fig. 3, we furthermore compare the classical-field data
for the correlation density with the results of qmc simu-
lations of quasi-two-dimensional trapped bose gases with
different coupling constants ̃
g and confinement strengths
̃
ωz. the qmc data illustrates that the external trapping
and the quasi-two-dimensional geometry preserve univer-
sality in the experimental parameter regime. however,
the finite coupling constant ̃
g introduces small deviations
due to quantum corrections.
from fig. 3, we further see that the correlation den-
sity is reduced to roughly 10% of its critical value for
mean-field gaps ∆f
mf ≃ ̃
g/π. thus, only densities with
̃
n ≳ ̃
nf ≈ ̃
nmf(∆mf ≈ ̃
g/π) are significantly affected by
correlations, and ̃
nf can be considered as the boundary
of the fluctuation regime. in fact, perturbation theory
fails inside this regime. for a strictly two-dimensional
system, we have
̃
nf ≈log(π/ ̃
g),
(31)
and the fluctuation regime is reached for densities ̃
n ≳
̃
nf. outside the fluctuation regime, ̃
n ≲ ̃
nf, mean-field
theory is rather accurate, and can be improved pertur-
batively, if necessary.
to understand this criterion, which is important for
the kosterlitz–thouless to bose–einstein cross-over at
small n (see section iv), we briefly analyze the per-
turbative structure of the two-dimensional single-particle
green's function beyond mean-field theory [8]. within
classical-field theory, second-order diagrams are ultravi-
olet convergent. each additional higher order brings in a
factor ħ2 ̃
g/m for the interaction vertex, one integration
over two-momenta, a factor t , and two green's functions
(the internal lines). dimensional analysis of the integrals
involved shows that each vertex insertion adds a factor
̃
g/∆mf. this implies that perturbation theory fails for
̃
g/∆mf ≳1.
for lower densities, 1 ≲ ̃
n ≲ ̃
nf, the gas is quantum
degenerate, yet it is accurately described by mean-field
theory. in contrast to fully three-dimensional gases, the
quantum-degenerate regime can be rather broad in two
dimensions for gases with ̃
g ≪1. in this regime, the den-
sity, yet normal, is no longer given by a thermal gaussian
distribution. this was observed in the nist experiment
[2] where ̃
nf ≃5. in fig. 4, we illustrate this effect via
the approximations for the tails of the distribution ̃
n( ̃
r)
with between one and five gaussians, as
̃
n( ̃
r) = π2
6t2
kmax
x
k=1
kπk exp
−k ̃
r2
2
.
(32)
where πk is determined by the formal expansion of the
logarithm in eq. (15), but also appears as a cycle weight
in the path-integral representation of the bosonic den-
sity matrix where they can be measured (see the inset
of fig. 4). the successive approximations have no free
parameters.
figure 5 summarizes the density profiles of a strictly
two-dimensional bose gas in the limit ̃
g →0 where the
classical-field calculations determine the correlation den-
sity.
at the critical temperature tkt, the density in
the center of the trap is critical, ̃
n(0) = ̃
nc.
correla-
tion effects are important only in the fluctuation regime,
̃
r ≲1.25√ ̃
g, where ̃
n( ̃
r) ≳ ̃
nf. however, the distribution
of the correlation density introduces no further qualita-
tive features to the mean-field component. the density
profile may be integrated using the interpolation formula
for the correlation density, eq. (30). the critical tem-
perature of the strictly two-dimensional bose gas as a
7
1
2
3
4
5
6
7
8
0
0.2
0.4
0.6
0.8
1
1.2
1.4
density ~
n = nλt
2
position ~
r = rβ1/2
1 gauss.
4 gauss.
n=100,000
10,000
1,000
0.001
0.01
0.1
1
5
20
cycle weight πκ
k
fig. 4: qmc density profile for the nist parameters t =
t/t 2d
bec = 0.75, ̃
g = 0.02, and ̃
ωz = 0.5, for different n. the
data are compared to the expansion of eq. (32), considering
the largest (k = 1) and the four largest terms (k = 1, . . . , 4).
the inset shows the qmc cycle weights πk for n = 100 000
(see [6]).
function of the total number of particles is given by
tkt
t 2d
bec
≃
1 + 3 ̃
g
π3 log2 ̃
g
16 +
6 ̃
g
16π2
15 + log ̃
g
16
−1/2
.
this expression includes correction of order ̃
g log ̃
g com-
pared to the mean-field estimate of refs [6, 8].
since
corrections beyond classical-field theory are rather small
(see fig. 3), the kosterlitz–thouless temperature of the
strictly two-dimensional trapped bose gas is accurately
described by this equation even for large coupling con-
stants.
for general quasi-two-dimensional gases, fig. 5 re-
mains qualitatively correct, but the lda mean-field den-
sity profile in the quasi-two-dimensional geometry must
be used.
numerical integration of this ̃
g →0 den-
sity profile for the ens parameters with ̃
ωz = 0.55
and ̃
g = 0.13 leads tkt ≃0.71 t 2d
bec, in close agree-
ment with tkt ≃0.70 t 2d
bec determined in ref. [5] di-
rectly from qmc calculations using finite-size extrapola-
tions. the quasi-two-dimensional transition temperature
is smaller than the one of the strictly two-dimensional gas
(t 2d
kt ≃0.86 t 2d
bec for ̃
g = 0.13). for the nist param-
eters ( ̃
g = 0.02, ̃
ωz = 0.5), we have tkt ≃0.74 t 2d
bec
from the integration of the ̃
g →0 density, and qmc
data indicate a transition slightly below this value.
iv.
finite-size effects and
bose–einstein cross-over
a.
central coherence
in the normal phase, the off-diagonal elements of
the single-particle density matrix remain short-ranged,
-log(~
g)
-log(~
g)+3
-log(~
g)+6
0
~
g1/2
2 ~
g1/2
position ~
r
ideal gas
~
n
mean-field gas
~
nmf
~
nmf+∆~
n
0
1
-log(~
g)
~
n
exp(-~
r2/2)
~
r
fig. 5: schematic density profile of a strictly two-dimensional
trapped bose gas at tkt for ̃
g →0. for ̃
r ≫√ ̃
g, ̃
n coincides
with the ideal gas at t 2d
bec (inset, the classical boltzmann dis-
tribution e− ̃
r2/2 is given for comparison). in the fluctuation
regime, for ̃
r ≲1.25√ ̃
g, mean-field and correlation effects be-
come important. the density diverges as ∼log(1/ ̃
g), yet the
correlation contribution ∆ ̃
n remains finite.
so that they can be described locally.
from the self-
consistent eigenfunctions of the mean-field schr ̈
odinger
equation, eq. (12) and eq. (13), we also obtain the off-
diagonal reduced single-body density matrix:
̃
n(1)
mf (⃗
r ;⃗
r ′) = λ2lz
x
j
ψ∗
j(⃗
r ) ̃
ψj(⃗
r ′)
exp ( ̃
μ −βǫj) −1.
(33)
in the local-density approximation, we can separate the
contributions of the different transverse modes, and we
obtain
̃
n(1)
mf (⃗
r ;⃗
r ′) =
x
ν
̃
n(1)
mf,ν(r; r′) ̃
φν( ̃
z) ̃
φν( ̃
z′)
(34)
with
̃
n(1)
mf,ν(r; r′) =
z
d2k
(2π)2
λ2
teik*(r−r′)
eβħ2k2/2m+∆mf( ̃
r) −1.
(35)
here we have used that within the lda, the density re-
mains constant on the scale λt, so that the mean-field
gaps at ̃
r and ̃
r′ are the same.
at low densities, where the mean-field gap is large,
∆mf ≫1, we can expand the bose function in eq. (35)
in powers of exp (−∆mf), and off-diagonal matrix ele-
ments rapidly vanish for distances larger than the ther-
mal wavelength λt. at higher densities, in the quantum-
degenerate regime, ∆mf ≪1, many gaussians con-
tribute, and coherence is maintained over larger dis-
tances. in the limit ∆mf →0, we can expand the de-
nominator in eq. (35), exp
βħ2k2/2m + ∆mf
−1 ≈
βħ2k2/2m + ∆mf, and the off-diagonal density matrix
decays exponentially. in this regime, the local mean-field
coherence length is given by ξmf = λt/√4π∆mf.
8
in fig. 6 and fig. 7 we compare the normalized off-
diagonal coherence function in the center of the trap
c(r) =
r
dz n(1)
3d (r, z; 0, 0)
r
dz n(1)
3d (0, z; 0, 0)
(36)
from qmc calculations with lda for the ens and nist
conditions. we see that for ̃
n ≲ ̃
nf, as in the case of the
density profile, mean-field theory accurately describes the
single-particle coherence. however, it is evident that at
higher densities, ̃
n ≳ ̃
nf, where correlation effects for the
diagonal elements of the density matrix are important,
mean-field theory also fails to describe the off-diagonal
matrix elements.
to characterize the decay of the off-diagonal density
matrix in the fluctuation regime, ̃
n ≳ ̃
nf, we consider
a simple one-parameter model which neglects the mo-
mentum dependence of the self-energies in the ground
state of the confining potential. the single parameter of
the model, the effective local gap ∆( ̃
r), is chosen such
that it reproduces the local density of the qmc data.
the density matrix of this "gap"-model, ̃
n(1)
∆(⃗
r ;⃗
r ′) =
p
ν ̃
n(1)
∆,ν(r; r′) ̃
φν( ̃
z) ̃
φν( ̃
z′), is a straightforward general-
ization of mean-field theory, where in eq. (35), we replace
∆ν =
(
∆
for ν = 0
∆mf
otherwise .
(37)
to fix the gap ∆of this model, we require that the diag-
onal elements of the density matrix reproduces the exact
density
̃
n(1)
∆,0(r; r) = ̃
nmf
0 ( ̃
r) + ∆ ̃
n( ̃
r).
(38)
0
0.2
0.4
0.6
0.8
1
0
5
10
15
20
central coherence c(r)
position r/λt
n=160,000
40,000
10,000
c∆(r)
cmf(r)
0
0.2
0.4
0.6
0.8
1
1
2
3
4
r/λt
c(r)
fig. 6: off-diagonal coherence c(r) for ens parameters with
t = 0.71 (main graph, ̃
n > ̃
nf) and t = 0.769 (inset, ̃
n < ̃
nf)
compared to the mean-field prediction cmf(r) and the gap
model of eq. (37). in the fluctuation regime, finite-size effects
for off-diagonal correlations are more pronounced than for the
density (see fig. 1).
outside the fluctuation regime the gap model reduces
to the mean-field limit.
inside the fluctuation regime,
where a direct comparison of the coherence with mean-
field theory is not very useful, the gap model provides
the basis to quantify off-diagonal correlations. it cannot
describe the build-up of quasi-long-range order at the
kosterlitz–thouless transition, but its correlation length
ξ∆= λt /
p
4π∆( ̃
r) > ξmf bounds from below the true
correlation length in the normal phase.
in fig. 6, we
show that the gap model accounts for the increase of
the coherence length inside the fluctuation regime, ̃
n >
̃
nf for the ens parameters. for smaller interactions, as
in the nist experiment, finite-size effects qualitatively
change the off-diagonal elements of the density matrix
(see fig. 7).
0
0.2
0.4
0.6
0.8
1
0
5
10
15
20
25
30
central coherence c(r)
position r/λt
n=640,000
160,000
10,000
c∆(r)
cmf(r)
0
0.2
0.4
0.6
0.8
1
1
2
3
4
r/λt
c(r)
fig. 7: off-diagonal coherence c(r) for nist parameters with
t = 0.74 (main graph) and t = 0.769 (inset) in comparison
with the mean-field prediction, cmf(r), and the gap model,
c∆(r), defined in eq. (37). at t = 0.769, the total central
density is ̃
n(0) ≃5.1 < ̃
nf, and the system is outside the
fluctuation regime. at t = 0.74, ̃
n(0) ≃10.5 > ̃
nf, and the
system is close to the kosterlitz–thouless transition, ∆mf/ ̃
g ≃
0.08. strong finite-size effects are evident in the fluctuation
regime.
b.
density profile
finite-size effects in the density profile are less dra-
matic than for the coherence (see fig. 1). within mean-
field theory, we have compared the density profiles of the
finite system directly with those in the thermodynamic
limit (lda), using the finite n solution obtained by the
adapted qmc calculation described in section ii b. the
mean-field analysis indicates that correlation effects are
at the origin of the size-effects of the full qmc den-
sity profiles in fig. 1, in particular, at small system size,
n = 1000.
9
c.
bose–einstein cross-over
the finite-size effects in the coherence reflect the un-
derlying discrete mode structure of level spacing ∼ħω.
off-diagonal properties for ξ∆≳lr are cut offby the
extension of the unperturbed ground-state wavefunc-
tion, lr = (mω/ħ)−1/2, and resemble those of a bose-
condensed system with a significant ground-state occu-
pation.
whereas in the thermodynamic limit, the in-
teracting quasi-two-dimensional trapped bose gas under-
goes a kosterlitz–thouless phase transition, the cross-
over to bose–einstein condensation sets in when ∆≈
βħω.
if this happens outside the fluctuation regime,
∆f
mf ≲∆mf ≲βħω, the bose condensation will essen-
tially have mean-field character. since the temperature
scale is given by ħω/t 2d
bec = π/
√
6n, the discrete level
spacing is important for small system sizes, n ≲nfs,
with
nfs(∆mf) ≈1
6
π2
∆2
mft2 =
π2
6 ̃
g2t2(∆mf/ ̃
g)2 .
(39)
for small ̃
g, close to t 2d
bec where ∆mf is of order ̃
g, these
finite-size effects trigger bose–einstein condensation for
small n. in particular, for systems with n ≲nfs(∆f
mf) ≈
π4/(6 ̃
g2), a cross-over to a mean-field-like bose conden-
sation occurs[31], whereas for n ≳nfs(∆c
mf) ≈400 ̃
g−2
kosterlitz–thouless-like behavior sets in (see inset of
fig. 10). we notice that the finite-size scale nfs ∝1/g2
diverges very rapidly with vanishing interactions, which
could make the cross-over experimentally observable.
for a finite system with n ≲ nfs(∆^f_mf), the condensate wavefunction does not develop immediately a thomas–fermi shape, but remains close to the gaussian ground-state wavefunction of the ideal gas with typical extension lr = (mω/ħ)^{−1/2}. thus, for small condensate fraction n0, deviations of the moment of inertia i of the trapped gas from its classical value, icl = ∫ d²r r² n(r) ∼ n lt², are negligible, of order (icl − i)/icl ∼ n0 lr²/lt² ∼ n^{−1/2}. only for larger condensates with g̃ n0 ≫ 2π, the self-interaction energy dominates the kinetic energy, and the condensate wavefunction approaches the thomas–fermi distribution of radius ∼ lt, resulting in a non-classical value of the moment of inertia. in this low-temperature regime, the system can be described by a condensate with a temperature-dependent fluctuating phase [13]. therefore, for small systems, a non-classical moment of inertia only occurs at lower temperatures than condensation, roughly, at a condensate fraction n0 ≳ g̃.
to illustrate the cross-over between the bose–einstein
regime at small n and the kosterlitz–thouless regime at
large n, we have calculated the condensate fraction and
condensate wavefunction for the ens parameters in fig. 8.
to determine both quantities in inhomogeneous systems, n^(1)_3d(r⃗, r⃗′) must be explicitly diagonalized, as the eigenfunctions of the single-particle density matrix are not fixed by symmetry alone. in quasi-two-dimensional systems, the full resolution of the off-diagonal density matrix in the tightly confined z-direction is difficult. it is more appropriate to consider the in-plane density matrix, n^(1)(r, r′), where the confined direction is integrated over, n^(1)(r, r′) = ∫dz ∫dz′ n^(1)_3d(r, z; r′, z′). because of rotational symmetry, n^(1)(r, r′) is block-diagonal in angular-momentum fourier components
n^(1)(r, r′) = Σ_{n=0}^{∞} Σ_{l=−∞}^{∞} n_nl φ*_nl(r′) φ_nl(r) e^{ilα(r,r′)},    (40)
where α(r, r′) denotes the angle between r and r′, and n_nl is the occupation number of the normalized eigenmode φ_nl. the (in-plane) condensate fraction, n0 = n_00/n, corresponds to the largest eigenvalue with l = 0, and the condensate wavefunction is φ0(r) = φ_00(r). projection on the fourier components is convenient for determining the condensate fraction and wavefunction within qmc.
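as a minimal illustration of this diagonalization step (our sketch, not the authors' code), the l = 0 block of the in-plane density matrix can be discretized on a radial grid and its largest eigenvalue extracted; the gaussian toy density matrix below merely stands in for qmc data, and all grid parameters are assumptions.

import numpy as np

# extract the condensate occupation n00 and wavefunction phi_00(r) from a
# sampled l = 0 block n0[i, j] ~ n^(1)_{l=0}(r_i, r_j) on a radial grid.
nr, r_max = 200, 8.0                      # radial grid, in assumed units of l_r
r = (np.arange(nr) + 0.5) * (r_max / nr)
dr = r_max / nr
w = 2 * np.pi * r * dr                    # radial integration weights

n_tot, l_r = 1000.0, 1.0                  # toy parameters, not from the paper
phi = np.exp(-r**2 / (2 * l_r**2)) / (np.sqrt(np.pi) * l_r)   # normalized: sum_i w_i phi_i^2 = 1
n0_matrix = 0.1 * n_tot * np.outer(phi, phi)                  # toy stand-in for qmc data

# symmetrize so that  sum_j w_j n0[i, j] f_j = n_nl f_i  becomes an ordinary
# symmetric eigenproblem for k = sqrt(w) n0 sqrt(w)
k = np.sqrt(w)[:, None] * n0_matrix * np.sqrt(w)[None, :]
vals, vecs = np.linalg.eigh(k)

n00 = vals[-1]                            # largest eigenvalue = condensate occupation
phi0 = vecs[:, -1] / np.sqrt(w)           # corresponding condensate wavefunction
print(f"condensate fraction n00/n = {n00 / n_tot:.3f}")   # prints 0.100 for the toy input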
for the ens parameters in fig. 8, the central density is already inside the fluctuation regime, and the condensate wavefunction differs from the gaussian ground state of an ideal gas. however, for small systems, n ≲ 10³, it still has a gaussian shape, indicating that the condensate kinetic energy dominates the potential energy. the condensate fraction vanishes as n0 ∼ n^{−1/2}.
the qmc calculations of [5] demonstrated that the
condensate fraction of the quasi-two-dimensional bose
gas vanishes in the normal and in the superfluid phase
in the thermodynamic limit, n →∞. however, in the
low-temperature superfluid phase, the condensate frac-
tion approaches zero very slowly with increasing system
size, so that an extensive condensate remains for practi-
cally all mesoscopic systems.
[fig. 8 plot: condensate wavefunction versus r/lr for n = 250; 1,000; 10,000; 40,000; 160,000, compared with the oscillator ground state ψ0; inset: condensate fraction n0/n versus n^{−1/2}.]
fig. 8:
condensate fraction n0 (inset) and wavefunction
φ0(r) (main graph), computed by qmc for ens parameters
with t = 0.71 for various system sizes. the condensate wave-
function is modified with respect to the ground-state wave-
function ψ0(r) of the unperturbed harmonic oscillator, but
the gaussian shape is preserved. the two largest systems are
above the scale nfs (see eq. (39))
v.
two-particle correlations
a.
pair-correlation function
density–density correlations can be analyzed by considering the three-dimensional pair-correlation function, n^(2)(r⃗; r⃗′). this quantity factorizes within mean-field theory into terms described by the one-particle density matrix, n^(1)_mf(r⃗1; r⃗2) (see section iv a):
n^(2)_mf(r⃗1; r⃗2) = nmf(r⃗1) nmf(r⃗2) + [n^(1)_mf(r⃗1; r⃗2)]².    (41)
for vanishing distances, r⃗1 → r⃗2, mean-field theory predicts n^(2)_mf(r⃗, r⃗) = 2 nmf²(r⃗). for bose-condensed atoms, this bunching effect is absent.
in two dimensions, deviations from 2n² of the pair-correlation function at contact signal beyond-mean-field fluctuations [12, 19, 21]. in ref. [12], the universal character of the contact value was used to define the quasi-condensate density, nqc(r) ≡ [2[n(r)]² − n^(2)(r, r)]^{1/2}. this quantity has been studied in ref. [22] for a quasi-two-dimensional trapped gas within classical-field theory.
the pair-correlation function of the quasi-two-dimensional gas is obtained by integrating both coordinates over the confined direction,
n^(2)(r1; r2) = ∫dz1 ∫dz2 n^(2)(r1, z1; r2, z2),    (42)
and mean-field expressions for this quantity follow from
eq. (41) together with eqs (34) and (35).
inside the
fluctuation regime, the gap model (using eq. (37) in the
mean-field expressions) again leads to an improved pair-
correlation function n(2)
∆.
figure 9 illustrates that outside the fluctuation regime
mean-field theory describes the pair-correlation function
well. in contrast to a strictly two-dimensional gas, the
contact value of the pair correlation function is below 2.
even in the mean-field regime, the occupation of more
than one mode in the confined direction causes a notice-
able reduction of the pair-correlation function at contact.
the above definition of the quasi-condensate must there-
fore be modified in this geometry to maintain its univer-
sal character.
at short distances r ∼r0, the pair-correlation function
depends on the specific form of the interaction.
this
cannot be reproduced by the single-particle mean-field
approximation. however, two-particle scattering proper-
ties dominate for small enough distances as, for example,
the wavefunction of hard spheres must vanish for overlap-
ping particles. this feature can be included in mean-field
theory by multiplying its pair-correlation functions by a
short-range term χ2d(r), which accounts for two-particle
scattering[23]. in two dimensions, χ2d shows a charac-
teristic logarithmic behavior for short distances:
χ2d(r → 0) ≃ [ 1 + (g̃/2π) log( √(π e^c/2) r/λt ) ]².
fig. 9: central pair correlations, g(r) = n^(2)(r, 0)/[n(r)n(0)], of the quasi-two-dimensional trapped bose gas at temperature t = t^2d_bec (main figure) and t = 0.75 t^2d_bec (inset) for n = 100,000 atoms (ens parameters), together with the prediction of the mean-field gap model, g∆(r) = n^(2)_∆(r, 0)/[n(r)n(0)], and the short-range improved mean-field model, χ2d(r) g∆(r).
factorizing out the short-range behavior from the pair-correlation function, the correlation part of the renormalized pair-correlation function, ñ^(2)_∆ − ñ^(2)/χ2d, should be dominated by contributions from classical-field theory, its contact value is universal, and it might be used to define a quasi-condensate density in quasi-two dimensions via
ñqc² = lim_{r→0} [ ñ^(2)_∆(r, 0) − ñ^(2)(r, 0)/χ2d(r) ]
(see fig. 10). similar to the correlation density, the quasi-condensate density is universal. at tkt, the classical-field result is ñqc ≃ 7.2, whereas it is around 2.7 at the onset of the fluctuation regime, so that, in the normal phase, nqc/n vanishes as |log g̃|^{−1} for g̃ → 0.
b.
local-density correlator
due to the three-dimensional nature of the underly-
ing interaction potential, observables which couple di-
rectly to local three-dimensional density fluctuations in-
volve the following density correlator
k^(2) = lim_{δ→0} √(2π) lz ∫dz n^(2)(r, z; r + δ, z) / χ3d(δ),    (43)
where χ3d(r ≪ λt) ≃ (1 − as/r)² describes the universal short-distance behavior of the three-dimensional two-body wavefunction in terms of the s-wave scattering length as.
arguments similar to those in section v a show that the local-density correlator in general differs from the contact value of the quasi-two-dimensional pair-correlation function; further, the integration over the square of the ground-state density in z leads to a
[fig. 10 plot: ñqc versus ∆mf/g̃ for n = 100,000, across the mean-field, fluctuation and kosterlitz–thouless regions, compared with classical-field theory (cft); the inset marks the boundaries nfs(∆^f_mf) ≈ 16 g̃⁻² and nfs(∆^c_mf) ≈ 400 g̃⁻² of the strong finite-size region in the (∆mf, n) plane.]
fig. 10: quasi-condensate density nqc obtained by qmc from the renormalized pair-correlation function at t = 0.71 t^2d_bec and t = 0.75 t^2d_bec (see fig. 9) for ens parameters, plotted as a function of ∆mf, and compared to classical-field simulations [12]. the inset shows the boundary of the region with strong finite-size effects (see eq. (39)).
phase-space-density dependence which destroys the simple mean-field property k^(2) ∝ 2n of strictly two-dimensional bose gases.
vi.
conclusions
in this paper, we have studied the quasi-two-dimensional trapped bose gas in the normal phase above the kosterlitz–thouless temperature for small interactions g̃ < 1. we have discussed the three qualitatively distinct regimes of this gas: for phase-space densities ñ ≲ 1, it is classical. at higher density, 1 ≲ ñ ≲ ñf, the gas is in the quantum mean-field regime, and its coherence can be maintained over distances much larger than λt. finally, mean-field theory fails in the fluctuation regime ñf ≲ ñ ≤ ñc, and beyond-mean-field corrections must be taken into account.
in the fluctuation regime, for small interactions, the
deviations of the density profile with respect to the mean-
field profile are universal.
mean-field theory thus ac-
counts for most microscopic details of the gas (which
depend on the interactions and on the trap geometry).
we have shown in detail how to extract the correlation
density (the difference between the density and the mean-
field density at equal chemical potential) from qmc den-
sity profiles and the lda mean-field results, and com-
pared it to the universal classical field results. quantum
corrections to the equation of state, expected of order g̃, were demonstrated to be small for current experiments with g̃ ≲ 0.2.
the smooth behavior of quantum cor-
rections, which has been already noticed in qmc cal-
culations of the kosterlitz–thouless transition tempera-
ture in homogeneous systems [24], strongly differs from
the three-dimensional case[25, 26] where quantum correc-
tions to universality are non-analytic [27, 28], and where
the universal description holds only asymptotically. it
would be interesting if these universal deviations from
mean-field theory could be observed experimentally.
correlation effects in local observables, e.g. the den-
sity profile, and local-density correlators, converge rather
quickly to their thermodynamic limit value, and corre-
lation effects for mesoscopic systems are well described
by the local-density approximation. off-diagonal coher-
ence properties show much larger finite-size effects, in
particular for weak interactions. this introduces quali-
tative changes for mesoscopic system sizes and, in par-
ticular, the cross-over between bose–einstein physics at
small particle number n ≲const/ ̃
g2 and the kosterlitz–
thouless physics for larger systems. tuning the inter-
action strength via a feshbach resonance might make it
possible to observe the cross-over between bose–einstein
condensation and kosterlitz–thouless physics in current
experiments.
acknowledgments
w. k. acknowledges the hospitality of aspen center
for physics, where part of this work was performed.
nb: the computer programs used in this work are avail-
able from the authors.
[1] z. hadzibabic, p. krüger, m. cheneau, b. battelier, and j. dalibard, nature 441, 1118 (2006).
[2] p. clade, c. ryu, a. ramanathan, k. helmerson, w.d.
phillips phys. rev. lett. 102, 170401 (2009).
[3] j. m. kosterlitz and d. j. thouless, j. phys. c 6, 1181
(1973); j. m. kosterlitz, j. phys. c 7, 1046 (1974);
v. l. berezinskii, sov. phys. jetp 32, 493 (1971); 34,
610 (1972).
[4] p. krüger, z. hadzibabic, and j. dalibard, phys. rev. lett. 99, 040402 (2007).
[5] m. holzmann and w. krauth, phys. rev. lett. 100
190402 (2008).
[6] m. holzmann, m. chevallier, and w. krauth, epl 82
30001 (2008).
[7] d.s. fisher and p.c. hohenberg, phys. rev. b 37, 4936
(1988).
[8] m. holzmann, g. baym, j.-p. blaizot, and f. laloë, proc. nat. acad. sci. 104, 1476 (2007).
[9] n. prokof'ev, o. ruebenacker, and b. svistunov, phys.
rev. lett. 87, 270402 (2001).
[10] z. hadzibabic, p. krüger, m. cheneau, s. p. rath, and j. dalibard, new j. phys. 10, 045006 (2008).
[11] r. n. bisset, d. baillie, and p. b. blakie, phys. rev. a
79, 013602 (2009).
[12] n. prokof'ev and b. svistunov, phys. rev. a 66, 043608
(2002).
[13] d. s. petrov, m. holzmann, and g. v. shlyapnikov,
phys. rev. lett. 84, 2551 (2000).
[14] d. s. petrov and g. v. shlyapnikov, phys. rev. a 64,
012706 (2001).
[15] l.-k. lim, c. m. smith, and h.t.c. stoof, phys. rev. a
78, 013634 (2008).
[16] w. krauth, phys. rev. lett. 77, 3695 (1996).
[17] m. holzmann, w. krauth, and m. naraschewski, phys.
rev. a 59, 2956 (1999).
[18] w. krauth, statistical mechanics: algorithms and com-
putations, oxford university press (oxford, uk) (2006).
[19] yu. kagan, v.a. kashurnikov, a.v. krasavin, n.v.
prokof'ev, and b.v. svistunov, phys. rev. a 61, 43608
(2000).
[20] m. chevallier and w. krauth, phys. rev. e 76 051109
(2007).
[21] l. giorgetti, i. carusotto, and y. castin, phys. rev. a
76, 013613 (2007).
[22] r.n. bisset, m.j. davis, t.p. simula, and p.b. blakie,
phys. rev. a 79, 033626 (2009).
[23] m. holzmann and y. castin, eur. phys. j. d 7, 425
(1999).
[24] s. pilati, s. giorgini, and n. prokof'ev, phys. rev. lett.
100, 140405 (2008).
[25] g. baym, j.-p. blaizot, m. holzmann, f. laloë, and d. vautherin, phys. rev. lett. 83, 1703 (1999); eur. phys. j. b 24, 107 (2001).
[26] m. holzmann and g. baym, phys. rev. lett. 90, 040402
(2003).
[27] m. holzmann, g. baym, j.-p. blaizot, and f. laloë, phys. rev. lett. 87, 120403 (2001).
[28] p. arnold, g. moore, and b. tomášik, phys. rev. a 65, 013606 (2002).
[29] in general, at equal chemical potential, the mean-field
density differs from the exact one, and so does also the
total number of particles in the trap.
[30] similar to the mean-field gap, one may introduce an effective mean-field coupling constant g̃mf = g̃ ∫dz̃ |φ̃0(z̃)|⁴ / ∫dz̃ |φ̃0(z̃)|² which accounts for modifi-
cations of the in-plane interactions in the ground state of
the confining potential. this leads to small corrections,
not visible for the experimental parameters considered in
this paper.
[31] in contrast to the infinite mean-field gas, these finite
mean-field systems undergo a bose–einstein condensa-
tion slightly below t^2d_bec [8].
|
0911.1705 | simulation-based model selection for dynamical systems in systems and
population biology | computer simulations have become an important tool across the biomedical
sciences and beyond. for many important problems several different models or
hypotheses exist and choosing which one best describes reality or observed data
is not straightforward. we therefore require suitable statistical tools that
allow us to choose rationally between different mechanistic models of e.g.
signal transduction or gene regulation networks. this is particularly
challenging in systems biology where only a small number of molecular species
can be assayed at any given time and all measurements are subject to
measurement uncertainty. here we develop such a model selection framework based
on approximate bayesian computation and employing sequential monte carlo
sampling. we show that our approach can be applied across a wide range of
biological scenarios, and we illustrate its use on real data describing
influenza dynamics and the jak-stat signalling pathway. bayesian model
selection strikes a balance between the complexity of the simulation models and
their ability to describe observed data. the present approach enables us to
employ the whole formal apparatus to any system that can be (efficiently)
simulated, even when exact likelihoods are computationally intractable.
| introduction
mathematical models are widely used to describe and analyze complex systems and
processes. formulating a model to describe, e.g. a signalling pathway or host par-
asite system, requires us to condense our assumptions and knowledge into a single
coherent framework [1]. mathematical analysis and computer simulations of such
models then allow us to compare model predictions with experimental observations
in order to test, and ultimately improve these models. the continuing success, for
example of systems biology, relies on the judicious combination of experimental and
theoretical lines of argument.
because many of the mathematical models in biology (as in many other disciplines)
∗to whom correspondence should be addressed: [email protected], [email protected]
are too complicated to be analyzed in a closed form, computer simulations have
become the primary tool in the quantitative analysis of very large or complex bio-
logical systems. this, however, can complicate comparisons of different candidate
models in light of (frequently sparse and noisy) observed data. whenever proba-
bilistic models exist, we can employ standard model selection approaches of either
a frequentist, bayesian, or information theoretic nature [2,3]. but if suitable prob-
ability models do not exist, or if the evaluation of the likelihood is computationally
intractable, then we have to base our assessment on the level of agreement between
simulated and observed data. this is particularly challenging when the parameters
of simulation models are not known but must be inferred from observed data as
well. bayesian model selection side-steps or overcomes this problem by marginal-
izing (that is integrating) over model parameters, thereby effectively treating all
model parameters as nuisance parameters.
for the case of parameter estimation when likelihoods are intractable, approximate
bayesian computation (abc) frameworks have been applied successfully [4–9]. in
abc the calculation of the likelihood is replaced by a comparison between the ob-
served data and simulated data. given the prior distribution p(θ) of parameter
θ, the goal is to approximate the posterior distribution, p(θ|d0) ∝f(d0|θ)p(θ),
where f(d0|θ) is the likelihood of θ given the data d0. abc methods have the
following generic form:
1 sample a candidate parameter vector θ∗from prior distribution p(θ).
2 simulate a data set d∗from the model described by a conditional probability
distribution f(d|θ∗).
3 compare the simulated data set, d∗, to the experimental data, d0, using
a distance function, d, and tolerance ε; if d(d0, d∗) ≤ε, accept θ∗.
the
tolerance ε ≥0 is the desired level of agreement between d0 and d∗.
the output of an abc algorithm is a sample of parameters from the distribution
p(θ|d(d0, d∗) ≤ε). if ε is sufficiently small then this distribution will be a good
approximation for the "true" posterior distribution, p(θ|d0). a tutorial on abc
methods is available in the suppl. material.
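a minimal python sketch of this generic scheme (our illustration, not code from the paper), written for a scalar parameter with a uniform prior; the functions simulate and distance are assumed to be supplied by the user.

import numpy as np

def abc_rejection(simulate, distance, data, eps, prior_low, prior_high,
                  n_accept=1000, rng=None):
    # generic abc rejection sampler for a scalar parameter theta ~ u(low, high);
    # simulate(theta) returns a synthetic data set, distance(data, sim) a scalar
    rng = rng or np.random.default_rng()
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(prior_low, prior_high)     # 1. sample theta* from the prior
        sim = simulate(theta)                          # 2. simulate a data set d*
        if distance(data, sim) <= eps:                 # 3. compare and accept
            accepted.append(theta)
    return np.array(accepted)   # sample from p(theta | d(data, d*) <= eps)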
such a parameter estimation approach can be used whenever the model is known.
however, when several plausible candidate models are available we have a model
selection problem, where both the model structure and parameters are unknown. in
the bayesian framework, model selection is closely related to parameter estimation,
but the focus shifts onto the marginal posterior probability of model m given data
d0,
p(m|d0) = p(d0|m) p(m) / p(d0),
where p(d0|m) is the marginal likelihood and p(m) the prior probability of the
model [10].
this framework has some conceptual advantages over classical hy-
pothesis testing: for example, we can rank an arbitrary number of different non-
nested models by their marginal probabilities; and rather than only considering ev-
idence against a model the bayesian framework also weights evidence in a model's
favour [11]. in practical applications, however, a range of potential pitfalls need con-
sidering: model probabilities can show strong dependence on model and parameter
priors; and the computational effort needed to evaluate these posterior distributions
can make these approaches cumbersome.
the computationally expensive step in bayesian model selection is the evaluation of
the marginal likelihood, which is obtained by marginalizing over model parameters;
i.e. p(d0|m) = ∫ f(d0|m, θ) p(θ|m) dθ, where p(θ|m) is the parameter prior for
model m. here we develop a computationally efficient abc model selection formal-
ism based on a sequential monte carlo (smc) sampler. we show that our abc
smc procedure allows us to employ the whole paraphernalia of the bayesian model
selection formalism, and we illustrate the use and scope of our new approach in a
range of models: chemical reaction dynamics, gibbs random fields, and real data
describing influenza spread and jak-stat signal transduction.
2
abc for model selection
our goal is to estimate the marginal posterior distribution of a model, p(m|d0),
and in this section we explain two ways in which this problem can be approached.
in the joint space based approach we define a joint space of model indicators, m =
1, 2, . . . , |m|, and corresponding model parameters, θ, obtain the joint posterior
distribution over the combined space of models and parameters, p(θ, m|d0), and
finally marginalize over parameters to obtain p(m|d0). in the second, marginal
likelihood based approach, we estimate marginal likelihoods (also called the evidence),
p(d0|m), for each given model, and use these to calculate the marginal posterior
model distributions through
p(m|d0) = p(d0|m) p(m) / Σ_{m′} p(d0|m′) p(m′).
both approaches have been applied under the abc rejection scheme, which is com-
putationally prohibitive for models with even an only moderate number of parame-
ters [12,13]. here we incorporate ideas from smc to both of the above approaches,
making them computationally more efficient. in this section we present only the
more powerful approach abc smc model selection on the joint space. we refer the
reader to the suppl. material for derivations and details, as well as discussion on
the abc smc model selection algorithm based on the marginal likelihood approach.
in model selection based on abc rejection we adapt the basic abc procedure
(presented in the introduction) to the joint space, where particles (m, θ) consist of a
model indicator m and a parameter θ. the abc rejection model selection algorithm
on the joint space proceeds as follows [13]:
1 draw m∗from the prior p(m).
2 sample θ∗from the prior p(θ|m∗).
3 simulate a candidate data set d∗∼f(d|θ∗, m∗).
4 compute the distance. if d(d0, d∗) ≤ε, accept (m∗, θ∗), otherwise reject it.
5 return to 1.
once a sample of n particles has been accepted, the marginal posterior distribution
is approximated by
p(m = m′|d0) ≈ #accepted particles (m′, ·) / n.
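the joint-space scheme can be sketched in the same way (our illustration; a uniform model prior is assumed, and priors[m]() returns a draw from p(θ|m)).

import numpy as np

def abc_rejection_model_selection(simulators, priors, distance, data, eps,
                                  n_accept=1000, rng=None):
    # abc rejection on the joint (m, theta) space with a uniform model prior;
    # simulators[m](theta) returns a synthetic data set for model m
    rng = rng or np.random.default_rng()
    counts = np.zeros(len(simulators))
    accepted = 0
    while accepted < n_accept:
        m = rng.integers(len(simulators))                 # 1. draw m* from p(m)
        theta = priors[m]()                               # 2. draw theta* from p(theta|m*)
        if distance(data, simulators[m](theta)) <= eps:   # 3.-4. simulate, compare, accept
            counts[m] += 1
            accepted += 1
    return counts / n_accept   # approximation of p(m | d0)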
in the abc smc model selection algorithm on the joint space, particles (parame-
ter vectors) {(m1, θ1), . . . , (mn, θn)} are sampled from the prior distribution, p(m, θ),
and propagated through a sequence of intermediate distributions, p(m, θ|d(d0, d∗) ≤
εi), i = 1, . . . , t −1, until they represent a sample from the target distribution,
p(m, θ|d(d0, d∗) ≤εt ). the tolerances εi are chosen such that ε1 > . . . > εt ≥0,
and the distributions thus gradually evolve towards the target posterior distribution.
the algorithm is presented below (and explained in the suppl. tutorial).
abc smc model selection algorithm on the joint space
ms1 initialize ε1, . . . , εt .
set the population indicator t = 1.
ms2.0 set the particle indicator i = 1.
ms2.1 if t = 1, sample (m∗∗, θ∗∗) from the prior distribution p(m, θ).
if t > 1, sample m∗with probability pt−1(m∗) and draw m∗∗∼kmt(m|m∗).
sample θ∗from previous population {θ(m∗∗)t−1} with weights wt−1 and draw
θ∗∗∼kpt,m∗∗(θ|θ∗).
if p(m∗∗, θ∗∗) = 0, return to ms2.1.
simulate a candidate data set d(b) ∼f(d|m∗∗, θ∗∗) bt times (b = 1, . . . , bt)
and calculate bt(m∗∗, θ∗∗).
if bt(m∗∗, θ∗∗) = 0, return to ms2.1.
ms2.2 set (m_t^(i), θ_t^(i)) = (m∗∗, θ∗∗) and calculate the weight of the particle as
w_t^(i)(m_t^(i), θ_t^(i)) = bt(m_t^(i), θ_t^(i))   if t = 1,
w_t^(i)(m_t^(i), θ_t^(i)) = p(m_t^(i), θ_t^(i)) bt(m_t^(i), θ_t^(i)) / s   if t > 1,
where
bt(m_t^(i), θ_t^(i)) = (1/bt) Σ_{b=1}^{bt} 1(d(d0, d∗_b) ≤ εt)
and
s = [ Σ_{j=1}^{|m|} pt−1(m_{t−1}^(j)) kmt(m_t^(i)|m_{t−1}^(j)) ] × [ Σ_{k; m_{t−1}=m_t^(i)} w_{t−1}^(k) kpt,m_t^(i)(θ_t^(i)|θ_{t−1}^(k)) / pt−1(m_{t−1} = m_t^(i)) ].
if i < n set i = i + 1, go to ms2.1.
ms3 normalize the weights wt.
sum the particle weights to obtain marginal model probabilities,
pt(mt = m) = Σ_{i; m_t^(i) = m} w_t^(i)(m_t^(i), θ_t^(i)).
if t < t, set t = t + 1, go to ms2.0.
particles sampled from a previous distribution are denoted by a single asterisk, and
after perturbation by a double asterisk. km is a model perturbation kernel which
allows us to obtain model m from model m∗and kp is the parameter perturba-
tion kernel. bt ≥ 1 is the number of replicate simulation runs for a fixed particle
(for deterministic models bt = 1) and |m| denotes the number of candidate models.
the output of the algorithm, i.e. the set of particles {(mt , θt )} associated with
weights wt , is the approximation of the full posterior distribution on the joint model
and parameter space. the approximation of the marginal posterior distribution of
the model obtained by marginalization is
pt(mt = m) = Σ_{i; m_t^(i) = m} w_t^(i)(m_t^(i), θ_t^(i)),
and we can also straightforwardly obtain the marginalized parameter distributions.
the algorithm requires the user to define the prior distribution, distance function,
tolerance schedule and perturbation kernels. in all examples presented in the results
section we choose uniform prior distributions for all parameters and models; that is
all models are a priori equally plausible. such priors are informative in a sense that
they define a feasible parameter region (e.g. reaction rates are positive), but they
are predominantly non-informative as they do not specify any further preference for
particular parameter values. this way the inference will mostly be informed by the
information contained in the data. a good tolerance schedule can be found empirically by trying to reach the lowest feasible distance and arrive at the posterior distribution in a computationally efficient way. our perturbation kernels are component-wise
in a computationally efficient way. our perturbation kernels are component-wise
truncated uniform or gaussian and are automatically adapted by feeding back in-
formation on the obtained parameter ranges from the previous population. distance
functions are defined for each model as specified in the results section. the algo-
rithm presented in toni et al. [8] is a special case of the above algorithm for discrete
uniform km kernel and uniform prior distribution of the model p(m).
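to make the steps above concrete, the following condensed python sketch (our own simplification, not the authors' implementation) runs the joint-space algorithm for models with a single rate parameter k ~ u(0, prior_max[m]), a uniform parameter perturbation kernel, bt = 1, and a model kernel of the kind used in the examples of the next section; models that lose all their particles are simply skipped.

import numpy as np

def abc_smc_model_selection(simulate, distance, data, epsilons, prior_max,
                            n_particles=1000, p_stay=0.7, rng=None):
    # simulate(m, k) returns a synthetic data set for model m with rate k;
    # distance(data, sim) returns a scalar; epsilons is the decreasing schedule
    rng = rng or np.random.default_rng()
    n_models = len(prior_max)

    # population t = 1: plain rejection from the prior, uniform weights
    ms, ks = [], []
    while len(ms) < n_particles:
        m = rng.integers(n_models)
        k = rng.uniform(0, prior_max[m])
        if distance(data, simulate(m, k)) <= epsilons[0]:
            ms.append(m)
            ks.append(k)
    ms, ks = np.array(ms), np.array(ks)
    ws = np.full(n_particles, 1.0 / n_particles)

    for eps in epsilons[1:]:
        pm = np.array([ws[ms == m].sum() for m in range(n_models)])     # p_{t-1}(m)
        sig = np.array([max(ks[ms == m].max() - ks[ms == m].min(), 1e-6)
                        if (ms == m).any() else 1.0 for m in range(n_models)])
        new_m, new_k, new_w = [], [], []
        while len(new_m) < n_particles:
            m_star = rng.choice(n_models, p=pm)                         # sample a model
            if rng.random() < p_stay or n_models == 1:                  # model kernel km_t
                m2 = m_star
            else:
                m2 = rng.choice([j for j in range(n_models) if j != m_star])
            if pm[m2] == 0:                                             # dead model: skip
                continue
            sel = ms == m2
            idx = rng.choice(np.flatnonzero(sel), p=ws[sel] / ws[sel].sum())
            k2 = ks[idx] + rng.uniform(-sig[m2], sig[m2])               # parameter kernel kp_t
            if not (0.0 < k2 < prior_max[m2]):                          # prior density is zero
                continue
            if distance(data, simulate(m2, k2)) > eps:
                continue
            # importance weight = prior(m2, k2) / s, with s as in step ms2.2
            km = sum(pm[j] * (p_stay if j == m2 else (1 - p_stay) / max(n_models - 1, 1))
                     for j in range(n_models))
            kp = np.sum(ws[sel] * (np.abs(ks[sel] - k2) <= sig[m2]) / (2 * sig[m2])) / pm[m2]
            prior = (1.0 / n_models) * (1.0 / prior_max[m2])
            new_m.append(m2)
            new_k.append(k2)
            new_w.append(prior / (km * kp))
        ms, ks = np.array(new_m), np.array(new_k)
        ws = np.array(new_w)
        ws /= ws.sum()

    return {m: ws[ms == m].sum() for m in range(n_models)}   # approximate p(m | d0)

for the chemical-kinetics example of the next section this sketch could be called with prior_max = [100, 100], simulate(m, k) a gillespie run of model m + 1 recorded at the observation times, and the tolerance schedule quoted in figure 1.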
3
results
in this section we illustrate abc smc for model selection on a simple example of
stochastic reaction kinetics. we then compare the computational efficiency of abc
smc for stochastic models of gibbs random fields with that of the abc rejection
model selection method. finally, we apply the algorithm to several real datasets:
first we select between different stochastic models of influenza epidemics (where we
can compare our approach with previously published results obtained using exact
bayesian model selection), and then apply our approach to choose from among
different mechanistic models for the stat5 signaling pathway.
3.1
chemical reaction kinetics
we illustrate our algorithm for the stochastic reaction kinetic models x + y --k1--> 2y and x --k2--> y.
product y is the catalyst for the reaction. in the second, molecules y do not need to
be present for a change from x to y to occur. such models have, for example, been
considered in the context of prion replication dynamics [14,15], where x represents
a healthy form of a prion protein and y a diseased form.
we simulate synthetic datasets of y measured at 20 time points using the gillespie algorithm [16] from model 2 with parameter k2 = 30 and initial conditions x0 = 40,
y0 = 3 (figure 1(a), suppl. table 1). we apply our abc smc algorithm for model
selection, which identifies the correct model with high confidence (figure 1(b)).
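a minimal gillespie-type simulator for the two candidate models (our sketch): the rate value, the initial conditions and the number of observation points follow the text, whereas the exact observation times are an assumption.

import numpy as np

def gillespie(model, k, x0=40, y0=3, t_obs=np.linspace(0.005, 0.1, 20), rng=None):
    # stochastic simulation of model 1 (x + y -> 2y, propensity k*x*y) or
    # model 2 (x -> y, propensity k*x); returns y at the observation times
    rng = rng or np.random.default_rng()
    x, y, t = x0, y0, 0.0
    out = np.empty(len(t_obs))
    i = 0
    while i < len(t_obs):
        a = k * x * y if model == 1 else k * x            # total propensity
        t_next = t + rng.exponential(1.0 / a) if a > 0 else np.inf
        while i < len(t_obs) and t_obs[i] < t_next:       # record y before the next jump
            out[i] = y
            i += 1
        if a == 0:
            break
        t = t_next
        x, y = x - 1, y + 1                               # both reactions convert one x into one y
    return out

# synthetic data from model 2 with k2 = 30, as used for figure 1(a)
data = gillespie(model=2, k=30.0)
distance = lambda d0, d: np.mean((np.asarray(d0) - np.asarray(d)) ** 2)   # mean squared error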
3.2
gibbs random fields
gibbs random fields have become staple models in machine learning, including appli-
cations in computational biology and bioinformatics (see for example [13,17]). here
we use two gibbs random field models [18], for which closed form posterior distribu-
tions are available. this allows us to compare the abc smc approximated posterior
distributions of the models to the true posterior distributions, and to demonstrate the
computational efficiency of our approach when compared to model selection based
on abc rejection sampling.
both models, m0 and m1, are defined on a sequence of n binary random variables,
figure 1: (a) stochastic trajectories of species x (red) and y (blue). model 1 is simulated
for k1 = 2.1 (dashed line), model 2 for k2 = 30 (full line). data points are represented by
circles. (b) we have repeated the model selection run 20 times; the red sections present
25% and 75% quantiles around the median. prior distribution p(m) is chosen uniform and
k1, k2 ∼u(0, 100). perturbation kernels are chosen as follows: kpt(k|k∗) = k∗+ u(−σ, σ),
σ = 2(max{k}t−1 −min{k}t−1) and kmt(m|m∗) = 0.7 if m = m∗and 0.3 otherwise.
number of particles n = 1000.
bt = 1.
distance function is mean squared error and
tolerance schedule ε = {3000, 1400, 600, 140, 40}.
x = (x1, . . . , xn), xi ∈ {0, 1}; m0 is a collection of n iid bernoulli random variables with probability exp(θ0)/(1 + exp(θ0)); m1 is equivalent to a standard ising model, i.e. x1 is taken to be a binary random variable and p(xi+1 = xi|xi) = exp(θ1)/(1 + exp(θ1)) for i = 1, . . . , n − 1. the likelihood functions are
f0(x|θ0) = e^{θ0 s0(x)} / (1 + e^{θ0})^n   and   f1(x|θ1) = e^{θ1 s1(x)} / [ 2 (1 + e^{θ1})^{n−1} ],
where s0(x) = Σ_{i=1}^{n} 1(xi = 1) and s1(x) = Σ_{i=2}^{n} 1(xi = xi−1) are the sufficient statistics, respectively.
we simulate 1000 datasets from both models for different values of parameters
θ0 ∼u(−5, 5), θ1 ∼u(0, 6) and n = 100.
using abc smc for model selec-
tion allows us to estimate posterior model distributions correctly and demonstrate
a considerable computational speed-up in abc smc compared to abc rejection
(figure 2).
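for reference, both gibbs random field models are easy to simulate and to treat exactly; the sketch below (ours) draws data from either model, computes the sufficient statistics, and evaluates the exact posterior p(m = 0|x) by numerically averaging the closed-form likelihoods over the uniform priors stated above.

import numpy as np

def simulate_m0(theta0, n=100, rng=None):
    rng = rng or np.random.default_rng()
    p = np.exp(theta0) / (1 + np.exp(theta0))       # iid bernoulli success probability
    return (rng.random(n) < p).astype(int)

def simulate_m1(theta1, n=100, rng=None):
    rng = rng or np.random.default_rng()
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)                          # x1 uniform on {0, 1}
    p_same = np.exp(theta1) / (1 + np.exp(theta1))  # p(x_{i+1} = x_i | x_i)
    for i in range(1, n):
        x[i] = x[i - 1] if rng.random() < p_same else 1 - x[i - 1]
    return x

def suff_stats(x):
    return np.sum(x == 1), np.sum(x[1:] == x[:-1])  # s0(x), s1(x)

def exact_post_m0(x, grid=2000):
    # exact p(m = 0 | x) for a uniform model prior, theta0 ~ u(-5,5), theta1 ~ u(0,6);
    # the evidences are evaluated on a grid in log space to avoid overflow
    n = len(x)
    s0, s1 = suff_stats(x)
    th0 = np.linspace(-5, 5, grid)
    th1 = np.linspace(0, 6, grid)
    log_f0 = th0 * s0 - n * np.logaddexp(0, th0)
    log_f1 = th1 * s1 - (n - 1) * np.logaddexp(0, th1) - np.log(2)
    ev0 = np.exp(log_f0).mean()                     # ~ (1/10) * integral of f0 over (-5, 5)
    ev1 = np.exp(log_f1).mean()                     # ~ (1/6)  * integral of f1 over (0, 6)
    return ev0 / (ev0 + ev1)

x = simulate_m1(theta1=2.0)                         # one synthetic data set from m1
print(suff_stats(x), exact_post_m0(x))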
3.3
influenza infection outbreaks
we next apply abc smc for model selection to models of the spread of different
strains of the influenza virus. we use data from influenza a (h3n2) outbreaks that
occurred in 1977-78 and 1980-81 in tecumseh, michigan [19] (suppl. table 2), and
a second dataset of an influenza b infection outbreak in 1975-76 and influenza a
(h1n1) infection outbreak in 1978-79 in seattle, washington [20] (suppl. table 3).
figure 2: (a) true vs. inferred posterior model distribution. in abc smc we use the euclidean distance d(d0, x) = √[ (s0(d0) − s0(x))² + (s1(d0) − s1(x))² ]. n = 500. bt = 1. tolerance schedule: ε = {9, 4, 3, 2, 1, 0}. perturbation kernels: kmt(m|m∗) = 0.75 if m = m∗ and 0.25 otherwise; kpt(θ|θ∗) = θ∗ + u(−σ, σ), σ = 0.5(max{θ}t−1 − min{θ}t−1). we have excluded from the analysis those datasets for which all states are 0 or 1 (for which p(m = 0) ≈ 0.3094 is also correctly inferred). (b) comparison of the number of simulation steps needed by abc rejection (nrej) and abc smc (nsmc); abc smc yields an approximately 50-fold speed-up on average.
the basic questions to be addressed here are whether (i) different outbreaks of the
same strain and (ii) outbreaks of different molecular strains of the influenza virus
can be described by the same model of disease spread.
we assume that virus can spread from infected to susceptible individuals and distin-
guish between spread inside households or across the population at large [20]. let
qc denote the probability that a susceptible individual does not get infected from
the community and qh the probability that a susceptible individual escapes infection
within their household. then wjs, the probability that j out of the s susceptibles
in a household become infected, is given by
wjs =
s
j
wjj(qcqj
h)s−j,
(1)
where w0s = qs
c, s = 0, 1, 2, . . ., and wjj = 1 −pj−1
i=0 wij. we are interested in
inferring the pair of parameters qh and qc of the model (1) using the data from
suppl.
table 2.
these data were obtained from two separate outbreaks of the
same strain, h3n2, and the question of interest is whether these are characterized
by the same epidemiological parameters (this question was previously considered
in [21, 22]). to investigate this issue, we consider two models: one with four pa-
rameters, qh1, qc1, qh2, qc2, which describes the hypothesis that each outbreak has
its own characteristics; the second models the hypothesis that both outbreaks share
the same epidemiological parameter values for qh and qc. prior distributions of all
parameters are chosen to be uniform over the range [0, 1].
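the recursion (1) is straightforward to evaluate; the sketch below (ours) builds the table of probabilities wjs for household sizes up to s_max, which is what a simulated final-size data set d∗(qh, qc) can be based on (the parameter values in the example are purely illustrative).

import numpy as np
from math import comb

def household_probs(qc, qh, s_max=5):
    # w[j, s] = probability that j of the s susceptibles in a household become
    # infected, following eq. (1): w_js = c(s, j) w_jj (qc qh^j)^(s-j),
    # with w_0s = qc^s and w_jj = 1 - sum_{i<j} w_ij
    w = np.zeros((s_max + 1, s_max + 1))
    w[0, :] = [qc ** s for s in range(s_max + 1)]
    for j in range(1, s_max + 1):
        w[j, j] = 1.0 - w[:j, j].sum()
        for s in range(j + 1, s_max + 1):
            w[j, s] = comb(s, j) * w[j, j] * (qc * qh ** j) ** (s - j)
    return w   # each column s sums to 1 by construction

w = household_probs(qc=0.9, qh=0.8)        # illustrative parameter values
print(w[:, 3], w[:, 3].sum())              # final-size distribution for households of size 3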
to apply abc smc, we use a distance function
d(d0, d∗) = (1/2) ( ||d1 − d∗(qh1, qc1)||f + ||d2 − d∗(qh2, qc2)||f ),
where || · ||f denotes the frobenius norm, d0 = d1 ∪ d2 with d1 the 1977-78 outbreak and d2 the 1980-81 outbreak datasets from suppl. table 2, and d∗ is the simulation output from model (1). the results we obtain are summarized in figure
3(a) - 3(b) and strongly suggest that the two outbreaks appear to have shared the
same epidemiological characteristics. figure 3(a) shows the posterior distribution of
the four-parameter model. the marginal posterior distributions of qh1 and qc1 are
largely overlapping with the marginal posterior distributions of qh2 and qc2 and we
therefore, unsurprisingly, get strong evidence in favour of the two-parameter model.
figure 3(b) shows the marginal posterior distribution of the model; the posterior
probability of model 1 is 0.98 (median over 10 runs), which gives unambiguous sup-
port to model 1, meaning that outbreaks of the same strain share the same dynamics.
outbreaks due to a different viral strain (suppl. table 3) have different characteris-
tics as indicated by the posterior distribution of the four-parameter model presented
in figure 3(c). this was confirmed by applying our model selection algorithm; the
inferred posterior marginal model probability of a two-parameter model was negli-
gible (results not shown). from figure 3(c) we also see that these differences are
due to differences in viral spread across the community whereas within-household
dynamics are comparable. we thus explore a further model with three parameters,
qc1, qc2, qh (model 1), where the two outbreaks share the same within-household
characteristics (qh), and compare it against the four-parameter model (model 2). the obtained bayes factor suggests that there is only very weak evidence in
favour of model 1 (figure 3(d)), which is in agreement with the result of [21].
in general, genetic predisposition, differences in immunity, lifestyle, etc. will
lead to heterogeneity in susceptibility to viral infection among the host population.
such a model can be written as [22]
wjs(v) = Σ_{i=0}^{s−j} ( s choose i ) v^i (1 − v)^{s−i} wj,s−i.    (2)
on the basis of the previous results, we combine both outbreak data sets from suppl.
table 2, and find some evidence that model (2) explains the data better than model
(1), suggesting that the host-virus dynamics are shaped by the molecular nature of
the viral strain, as well as by variability in the host population (see suppl. figure
2).
3.4
jak-stat signaling pathway
having convinced ourselves that the novel abc smc model selection approach
agrees with the analytical model probabilities, and those obtained using conven-
figure 3: (a) abc smc posterior distributions for parameters inferred for a four-parameter
model from the data in suppl. table 2. marginal posterior distributions of parameters qc1,
qh1 (red) and qc2, qh2 (blue). (b) estimation of a posterior marginal distribution p(m|d0).
model 1 is a two-parameter and model 2 a four-parameter model (1).
all intermediate
populations are shown in suppl. figure 1(a). (c) the same as (a) but here the data used
is from suppl. table 3. (d) estimation of a posterior marginal distribution. model 1 is a
three-parameter and model 2 a four-parameter model (1). all intermediate populations are
shown in suppl. figure 1(b).
tional bayesian model selection, while outperforming conventional abc rejection
model selection approaches, we can now turn our attention to real world scenarios
that have not previously been considered from a bayesian (exact or approximate)
perspective. here we consider models of signaling though the erythropoietin recep-
tor (epor), transduced by stat5 (figure 4(a)) [23, 24]. signaling through this
receptor is crucial for proliferation, differentiation, and survival of erythroid progen-
itor cells [25]. when the epo hormone binds to the epor receptor, the receptor's
cytoplasmic domain is phosporylated, which creates a docking site for signaling
molecules, in particular stat5. upon binding to the activated receptor, stat5
first becomes phosphorylated, then dimerizes and translocates to the nucleus, where
it acts as a transcription factor. there have been competing hypotheses about what
happens with the stat5 in the nucleus.
originally it had been suggested that
stat5 gets degraded in the nucleus in a ubiquitin-associated way [26], but other
evidence suggests that they are dephosphorylated in the nucleus and then trafficked
back to the cytoplasm [27].
the ambiguity of the shutoff mechanism of stat5 in the nucleus triggered the
development of several mathematical models [29,30,32] describing different hypothe-
ses. all models assume mass action kinetics and denote the amount of activated
epo-receptors by epora, monomeric unphosphorylated and phosporylated stat5
molecules by x1 and x2, respectively, dimeric phosphorylated stat5 in the cyto-
plasm by x3 and dimeric phosphorylated stat5 in the nucleus by x4. the most
basic model timmer et al. developed, under the assumption that phosphorylated
stat5 does not leave the nucleus, consists of the following kinetic equations,
ẋ1 = −k1 x1 epora    (3)
ẋ2 = −k2 x2² + k1 x1 epora
ẋ3 = −k3 x3 + (1/2) k2 x2²
ẋ4 = k3 x3.    (4)
one can then assume that phosphorylated stat5 dimers dissociate and leave the
nucleus; this is modelled by adding appropriate kinetic terms to the equations (3)
and (4) of the basic model to obtain
ẋ1 = −k1 x1 epora + 2 k4 x4
ẋ4 = k3 x3 − k4 x4.
the cycling model can be developed further by assuming a delay before stat5
leaves the nucleus:
ẋ1 = −k1 x1 epora + 2 k4 x3(t − τ)
ẋ4 = k3 x3 − k4 x3(t − τ).    (5)
this model was chosen as the best model in the original analyses [29,30] based on a
numerical evaluation of the likelihood, followed by a likelihood ratio test and boot-
strap procedure for model selection. the data are partially observed time course
figure 4: (a) stat5 signaling pathway, adapted from [28]. (b) histograms show populations of the model parameter m. population 20 represents the approximation of the marginal posterior distribution of m. tolerance schedule: ε = {200, 100, 50, 35, 30, 25, 22, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8}. perturbation kernels: kmt(m|m∗) = 0.6 if m = m∗ and 0.2 otherwise; kpt(θ|θ∗) = θ∗ + u(−σ, σ), σ = 0.5(max{θ}t−1 − min{θ}t−1). n = 500. distance function: d(d0, d∗) = √( Σ_t [ (y0^(1)(t) − y∗^(1)(t)) / σ_d0^(1)(t) ]² + [ (y^(2)(t) − y∗^(2)(t)) / σ_d0^(2)(t) ]² ), with d0 = {y0^(1), y0^(2)}, d∗ = {y∗^(1), y∗^(2)} and y^(1) the total amount of phosphorylated stat5 in the cytoplasm and y^(2) the total amount of stat5 in the cytoplasm. σ_d0^(1) and σ_d0^(2) are the associated confidence intervals; reassuringly, other distance functions, e.g. the square root of the sum of squared errors, yield identical model selection results (data not shown).
measurements of the total amount of stat5 in the cytoplasm, and the amount of
phosphorylated stat5 in the cytoplasm; both are only known up to a normalizing
factor.
we propose a further model with clear physical interpretation where the delay acts
on stat5 inside the nucleus (x4) rather than on x3 (in equation (5)), for which a
biological interpretation is difficult. instead of x3(t −τ), we propose to model the
delay of phosphorylated stat5 x4 in the nucleus directly and obtain [31]:
ẋ1 = −k1 x1 epora + 2 k4 x4(t − τ)
ẋ4 = k3 x3 − k4 x4(t − τ).
we perform the abc smc model selection algorithm on the following non-nested
models: (1) cycling delay model with x3(t −τ), (2) cycling delay model with
x4(t−τ), (3) cycling model without a delay. the model parameter m can therefore
take values 1, 2 and 3.
for each proposed model and parameter combination we numerically solve the ode
equations of the model and add ε ∼n(0, σ) to obtain the simulated time course
data. the noise parameter σ can be either fixed or treated as another parameter to
be estimated; we consider the latter option, under the assumption that the experi-
mental noise is independent and identically distributed for all time points.
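to illustrate how a single candidate trajectory is generated, the sketch below (ours) integrates the models without delay using scipy; the receptor input epora(t), the rate constants and the observation grid are placeholders rather than values from the paper, and the observables are formed up to the unknown normalizing factors mentioned above.

import numpy as np
from scipy.integrate import solve_ivp

def epor_a(t):
    # assumed stand-in for the measured epo-receptor activity (a smooth pulse);
    # in the actual analysis this input is determined experimentally
    return np.exp(-((t - 10.0) / 8.0) ** 2)

def stat5_rhs(t, x, k1, k2, k3, k4, cycling=True):
    # right-hand side of the basic model (3)-(4); with cycling=True the terms of
    # the no-delay cycling model (model 3 in the text) are added
    x1, x2, x3, x4 = x
    dx1 = -k1 * x1 * epor_a(t) + (2 * k4 * x4 if cycling else 0.0)
    dx2 = -k2 * x2 ** 2 + k1 * x1 * epor_a(t)
    dx3 = -k3 * x3 + 0.5 * k2 * x2 ** 2
    dx4 = k3 * x3 - (k4 * x4 if cycling else 0.0)
    return [dx1, dx2, dx3, dx4]

t_obs = np.linspace(0, 60, 16)                            # assumed observation times
sol = solve_ivp(stat5_rhs, (0, 60), [1.0, 0.0, 0.0, 0.0],
                args=(0.02, 0.1, 0.05, 0.03), t_eval=t_obs)
x1, x2, x3 = sol.y[0], sol.y[1], sol.y[2]
y1 = x2 + 2 * x3                                          # phosphorylated stat5 in the cytoplasm
y2 = x1 + x2 + 2 * x3                                     # total stat5 in the cytoplasm
noise = np.random.default_rng(0).normal(0.0, 0.02, (2, len(t_obs)))
simulated_data = np.vstack([y1, y2]) + noise              # candidate data set d*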
figure 4(b) shows intermediate populations leading to the abc smc marginal
posterior distribution over the model parameters m (population 20). bayes factors
can be calculated from the last population and according to the conventional in-
terpretation of bayes factors [33], it can be concluded that there is strong evidence
in favour of model 3 compared to model 1, positive evidence in favour of model 3
compared to model 2, and positive evidence in favour of model 2 compared to model
1. thus cycling appears to be clearly important and the model that receives the
most support is the cycling model without a time-delay. here the flexibility of abc
smc has allowed us to perform simultaneous model selection on non-nested models
of ordinary and time-delay differential equations.
4
discussion
we have developed a novel model selection methodology based on approximate
bayesian computation and sequential monte carlo. the results obtained in our ap-
plications illustrate the usefulness and wide applicability of our abc smc method,
even when experimental data are scarce, there are no measurements for some of
the species, temporal data are not measured at equidistant time points, and when
parameters such as kinetic rates are unknown. in the context of dynamical systems
our method can be applied across all simulation and modelling (including qualitative
modelling) frameworks; for jak-stat signal transduction dynamics, for example,
we have been able to compare the relative explanatory power of ode and time-delay
differential equation models. our model selection procedure is also not confined to
dynamical systems; in fact the scope for application is immense and limited only by
the availability of efficient simulation approaches.
routine application to complex models in systems, computational and population
biology with hundreds or thousands of parameters [34] will require further numeri-
cal developments due to the high computational cost of repeated simulations. smc
based abc methods are, however, highly parallelizable and we believe that future
work should exploit this property to make these methods computationally more ef-
ficient. further potential improvements might come from (i) regression adjustment
techniques that have so far been applied in the parameter estimation abc frame-
work [4,35,36] (ii) from automatic generation of the tolerance schedules [37], and (iii)
by developing more sophisticated perturbation kernels that exploit inherent prop-
erties of biological dynamical systems such as sloppiness [38,39]; here especially we
feel that there is substantial room for improvement as the likelihoods of dynamical
systems contain information about the qualitative behaviour [40] which can also be
exploited in abc frameworks.
5
conclusion
we conclude by emphasizing the need for inferential methods which can assess the
relative performance and reliability of different models. the need for such reliable
model selection procedures can hardly be overstated: with an increasing number of
biomedical problems being studied using simulation approaches, there is an obvious
and urgent need for statistically sound approaches that allow us to differentiate
between different models. if parameters are known or the likelihood is available in
a closed form, then the model selection is generally straightforward. however, for
many of the most interesting systems biology (and generally, scientific) problems
this is not the case and here abc smc can be employed.
acknowledgement
we are especially grateful to paul kirk for his insightful comments and many valu-
able discussions. we furthermore thank the members of theoretical systems biology
group for discussions and comments on earlier versions of this paper.
funding: this work was supported through a mrc priority studentship (t.t.)
and bbsrc grant bb/g009374/1.
references
[1] may rm. uses and abuses of mathematics in biology. science, 303(5659):790–3, 2004.
[2] burnham k and anderson d.
model selection and multimodel inference:
a practical
information-theoretic approach. springer-verlag new york, inc., 2002.
[3] vyshemirsky v and girolami ma. bayesian ranking of biochemical system models. bioinfor-
matics, 24(6):833–9, 2008.
[4] beaumont ma, zhang w and balding dj. approximate bayesian computation in population
genetics. genetics, 162(4):2025–2035, 2002.
[5] marjoram p, molitor j, plagnol v and tavare s. markov chain monte carlo without likeli-
hoods. proc natl acad sci usa, 100(26):15324–8, 2003.
[6] sisson sa, fan y and tanaka mm. sequential monte carlo without likelihoods. proc natl
acad sci usa, 104(6):1760–5, 2007.
[7] ratmann o, jorgensen o, hinkley t, stumpf m, richardson s and wiuf c. using likelihood-
free inference to compare evolutionary dynamics of the protein networks of h. pylori and p.
falciparum. plos comput biol, 3(11):2266–2278, 2007.
[8] toni t, welch d, strelkowa n, ipsen a and stumpf mph. approximate bayesian computation
scheme for parameter inference and model selection in dynamical systems. j. r. soc. interface,
6:187–202, 2009.
[9] ratmann o, andrieu c, wiuf c and richardson s.
model criticism based on likelihood-
free inference, with an application to protein network evolution. proc natl acad sci usa,
106(26):10576–81, 2009.
[10] gelman a, jb c, stern h and rubin d. bayesian data analysis. chapman & hall/crc,
2nd edition, 2003.
[11] jeffreys h. theory of probability. 1st ed. the clarendon press, oxford, 1939.
[12] wilkinson rd. bayesian inference of primate divergence times. phd thesis, university of
cambridge, 2007.
[13] grelaud a, robert cp, marin jm, rodolphe f and taly jf. abc likelihood-free methods for
model choice in gibbs random fields. bayesian analysis, 4(2):317–336, 2009.
[14] prusiner sb. novel proteinaceous infectious particles cause scrapie. science, 216(4542):136–44,
1982.
[15] eigen m. prionics or the kinetic basis of prion diseases. biophys chem, 63(1):a1–18, 1996.
[16] gillespie d. exact stochastic simulation of coupled chemical reactions. the journal of physical
chemistry, 1977.
[17] wei, z. and li h. (2007). a markov random field model for network-based analysis of genomic
data. bioinformatics, 23(12), 1537–44.
[18] møller j. spatial statistics and computational methods. springer, 2003.
[19] addy c, jr il and haber m. a generalized stochastic model for the analysis of infectious
disease final size data. biometrics, 961–974, 1991.
[20] jr il and koopman j. household and community transmission parameters from final distri-
butions of infections in households. biometrics, 115–126, 1982.
[21] clancy d and o'neill pd. exact bayesian inference and model selection for stochastic models
of epidemics among a community of households. scand j stat, 34(2):259–274, 2007.
[22] o'neill p, balding d, becker n, eerola m and mollison d.
analyses of infectious disease
data from household outbreaks by markov chain monte carlo methods. j roy stat soc c-app,
49:517–542, 2000.
[23] darnell je. stats and gene regulation. science, 277(5332):1630–5, 1997.
[24] horvath cm. stat proteins and transcriptional responses to extracellular signals. trends
biochem sci, 25(10):496–502, 2000.
[25] klingmuller u, bergelson s, hsiao jg and lodish hf.
multiple tyrosine residues in the
cytosolic domain of the erythropoietin receptor promote activation of stat5. proc natl acad
sci usa, 93(16):8324–8, 1996.
[26] kim tk and maniatis t. regulation of interferon-gamma-activated stat1 by the ubiquitin-
proteasome pathway. science, 273(5282):1717–9, 1996.
[27] köster m and hauser h. dynamic redistribution of stat1 protein in ifn signaling visualized by gfp fusion proteins. eur j biochem, 260(1):137–44, 1999.
[28] arbouzova ni and zeidler mp.
jak/stat signalling in drosophila: insights into conserved
regulatory and cellular functions. development, 133(14):2605–16, 2006.
[29] swameye i, muller tg, timmer j, sandra o and klingmuller u. identification of nucleocy-
toplasmic cycling as a remote sensor in cellular signaling by databased modeling. proc natl
acad sci usa, 100(3):1028–33, 2003.
[30] muller tg, faller d, timmer j, swameye i, sandra o and klingmüller u. tests for cycling in a signalling pathway. journal of the royal statistical society series c, 53(4):557, 2004.
[31] zi, z. and klipp, e. (2006) sbml-pet: a systems biology markup language-based parameter
estimation tool. bioinformatics, 22(21), 2704–5.
[32] timmer j and muller t. modeling the nonlinear dynamics of cellular signal transduction. int
j bifurcat chaos, 14:2069–2079, 2004.
[33] kass r and raftery a. bayes factors. journal of the american statistical association, 90:773–
795, 1995.
[34] chen ww, schoeberl b, jasper pj, niepel m, nielsen ub, lauffenburger da and sorger pk.
input-output behavior of erbb signaling pathways as revealed by a mass action model trained
against dynamic data. mol syst biol, 5:239, 2009.
[35] blum mg and françois o. non-linear regression models for approximate bayesian computation. statistics and computing, in press, 2009.
[36] excoffier cldwl.
bayesian computation and model selection in population genetics.
arxiv:0901.2231v1 [stat.me], 2009.
[37] del moral, p., doucet, a., and jasra, a. (2009). an adaptive sequential monte carlo method
for approximate bayesian computation. imperial college technical report.
[38] gutenkunst r, waterfall j, casey f, brown k, myers c and sethna j. universally sloppy
parameter sensitivities in systems biology models. plos comput biol, 3(10):e189, 2007.
[39] secrier, m., toni, t., and stumpf, m. p. h. (2009). the abc of reverse engineering biological
signalling systems. mol. biosyst, (in press) doi: 10.1039/b908951a.
[40] kirk pdw, toni t and stumpf mph.
parameter inference for biochemical systems that
undergo a hopf bifurcation. biophysical journal, 95(2):540–9, 2008.
[41] robert cp and casella g. monte carlo statistical methods. springer, 2004.
[42] doucet a, freitas nd and gordon n. sequential monte carlo methods in practice. springer,
2001.
[43] moral pd, doucet a and jasra a. sequential monte carlo samplers. j. royal statist. soc. b,
2006.
supplementary material a: derivation of abc smc model
selection algorithms
we start this section by briefly reviewing the building blocks of the abc smc algorithm of toni
et al. [8], which is based on sequential importance sampling (sis). the main idea of importance
sampling is to sample from the desired target distribution π (which can be impossible or hard to
sample from) indirectly through sampling from a proposal distribution η [41]. to get a sample from
π, one can instead sample from η and weight the samples by importance weights
w(x) = π(x)/η(x).
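a small numerical illustration (ours) of this reweighting, estimating a second moment under a standard normal target π from samples of a wider normal proposal η:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(0.0, 3.0, 100_000)               # sample from the proposal eta = n(0, 3)
w = norm.pdf(x, 0, 1) / norm.pdf(x, 0, 3)       # importance weights w(x) = pi(x)/eta(x)
print((w * x**2).sum() / w.sum())               # self-normalized estimate of e_pi[x^2] ~ 1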
in sis one reaches the target distribution πt through a series of intermediate distributions, πt,
t = 1, . . . , t −1 [42, 43].
if it is hard to sample from these distributions one can use the idea
of importance sampling described above to sample from a series of proposal distributions ηt and
weight the obtained samples by importance weights
wt(xt) = πt(xt)/ηt(xt).    (6)
in sis the proposal distributions are defined as
ηt(xt) = ∫ ηt−1(xt−1) κt(xt−1, xt) dxt−1,    (7)
where ηt−1 is the previous proposal distribution and κt is a markov kernel.
to apply sis, we need to define the intermediate and the proposal distributions.
in an abc
framework [4,5], which is based on comparisons between simulated and experimental datasets, we
define the intermediate distributions as [6,8]
πt(x) = p(x) (1/bt) Σ_{b=1}^{bt} 1( d(d0, d^(b)(x)) ≤ εt ),    (8)
where p(x) denotes the prior distribution and d^(1), . . . , d^(bt) are bt ≥ 1 data sets generated for a fixed parameter x, d^(b) ∼ f(d|x). 1(x) is an indicator function and εt is the tolerance required from particles contributing to the intermediate distribution t. to simplify the notation we define bt(x) = (1/bt) Σ_{b=1}^{bt} 1( d(d0, d^(b)(x)) ≤ εt ).
we define the first proposal distribution to equal the prior distribution, η1(x) = p(x). the proposal
distribution at time t (t = 2, . . . , t), ηt, is defined as
ηt(xt) = 1(p(xt) > 0) 1(bt(xt) > 0) ∫ πt−1(xt−1) kt(xt|xt−1) dxt−1,    (9)
where kt denotes the perturbation kernel (e.g. random walk around the particle). for details of
how this proposal distribution was obtained, see [8].
in the remainder of this section we introduce three different ways in which abc smc ideas pre-
sented above can be used in the model selection framework. we start by proposing a simple and
naive incorporation of the above building blocks for model selection. we then continue by deriving
an abc smc model selection algorithm on the joint model and parameter space, which is presented
in the methods section of the paper. in the end we present abc smc algorithm for approximation
of the marginal likelihood, which can also be employed for model selection.
the only of these three algorithms that we present in the main part of the paper and use in
examples is algorithm ii (abc smc model selection on the joint space), since the other two algo-
rithms (i and iii) are computationally too expensive and impractical to use.
i) abc smc_m rej_θ model selection algorithm
very naively and straightforwardly, the intermediate distributions can be defined as
πt(m) = p(m) bmt(m),
where
bmt(m) := Σ_{θ∼p(θ|m)} 1(d(d0, d(θ, m)) < εt) / Σ_{θ∼p(θ|m)} 1(p(θ|m) > 0).
this means that for each model m we calculate bmt(m) as the ratio between the number of accepted
particles (where the distance falls below εt) and all sampled particles, where parameters θ of model
m are sampled from the prior distribution p(θ|m).
if a set of candidate models m of a finite size |m| is being considered, and n denotes the number
of particles, then we can write the algorithm as follows:
ms1 initialize ε1, . . . , εt .
set the population indicator t = 1.
ms2 for i = 1, . . . , |m|, calculate the weights as
w_t^(i)(m_t^(i)) = bmt(m_t^(i))   if t = 1,
w_t^(i)(m_t^(i)) = p(m_t^(i)) bmt(m_t^(i)) / Σ_{j=1}^{n} w_{t−1}^(j) kmt(m_t^(i)|m_{t−1}^(j))   if t > 1.
ms3 normalize the weights.
if t < t, set t = t + 1, go to ms2.
in this algorithm we estimate the posterior distribution of the model indicator m sequentially (i.e.
using ideas from sis), but the integration over model parameters is not sequential; we always sample
them from the prior distribution p(θ|m) (i.e. in the rejection sampling manner). this algorithm is
therefore computationally very expensive. it would be computationally more efficient to generate
θt by exploiting the knowledge about θ that is contained in {θ}t−1. in addition to learning m
sequentially, i.e. by exploiting {m}t−1 for generating mt, we would also like to learn θ sequentially.
in order to do this, we define
ii) abc smc model selection on the joint space
let (m, θ) denote a particle from a joint space, where m corresponds to the model indicator and θ
are the parameters of model m. we define the intermediate distributions by
πt(m, θ) = p(m, θ)bt(m, θ),
where
b_t(m, θ) = \frac{1}{b_t} \sum_{b=1}^{b_t} 1\big( d(d_0, d^{(b)}(m, θ)) \le ε_t \big).
in the following equations kmt denotes the perturbation kernel for the model parameter, kpt,m
denotes the perturbation kernel for the parameters of model m, and t is the population number.
now we derive the sequential importance sampling weights
w_t(m_t, θ_t) = π_t(m_t, θ_t) / η_t(m_t, θ_t).
for a particle (mt, θt) from population t, we define the proposal distribution ηt(mt, θt) as
η_t(m_t, θ_t) = 1\big( p(m_t, θ_t) > 0 \big) \, 1\big( b_t(m_t, θ_t) > 0 \big)    (10)
    \times \int_{m_{t-1}} π_{t-1}(m_{t-1}) km_t(m_t | m_{t-1}) dm_{t-1}
    \times \int_{θ_{t-1} | m_{t-1} = m_t} π_{t-1}(θ_{t-1}) kp_t(θ_t | θ_{t-1}) dθ_{t-1}
  \propto 1\big( p(m_t, θ_t) > 0 \big) \, 1\big( b_t(m_t, θ_t) > 0 \big)
    \times \sum_{j=1}^{|m|} p_{t-1}(m_{t-1}^{(j)}) km_t(m_t | m_{t-1}^{(j)})
    \times \sum_{k;\, m_{t-1} = m_t} \frac{ w_{t-1}^{(k)} }{ p_{t-1}(m_{t-1} = m_t) } kp_{t, m_t}(θ_t | θ_{t-1}^{(k)}),
where intermediate marginal model probabilities pt(m) are defined as
p_t(m_t = m) = \sum_{m_t = m} w_t(m_t, θ_t).
the weights for all accepted particles are (obtained by including (8) and (10) in equation (6))
w_t(m_t, θ_t) = \frac{ p(m_t, θ_t) \, b_t(m_t, θ_t) }{ \sum_{j=1}^{|m|} p_{t-1}(m_{t-1}^{(j)}) km_t(m_t | m_{t-1}^{(j)}) \; \sum_{k;\, m_{t-1} = m_t} \frac{ w_{t-1}^{(k)} }{ p_{t-1}(m_{t-1} = m_t) } kp_{t, m_t}(θ_t | θ_{t-1}^{(k)}) }.
the resulting abc smc algorithm is presented in the methodology section of the main part of the
paper.
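to make the weight above concrete, a small python sketch computing w_t(m_t, θ_t) for one accepted particle, assuming the previous population is kept as (m, theta, w) tuples with normalized weights (all names are illustrative placeholders):

def joint_space_weight(m_t, theta_t, prior_joint, b_t_value, prev_particles,
                       km_kernel, kp_kernel, models):
    """sketch of the weight for a particle on the joint (model, parameter) space."""
    # intermediate marginal model probabilities p_{t-1}(m)
    p_prev = {m: sum(w for (mj, _, w) in prev_particles if mj == m) for m in models}
    model_term = sum(p_prev[mj] * km_kernel(m_t, mj) for mj in models)
    param_term = sum((w / p_prev[m_t]) * kp_kernel(m_t, theta_t, th)
                     for (mj, th, w) in prev_particles if mj == m_t)
    return prior_joint(m_t, theta_t) * b_t_value / (model_term * param_term)
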
iii) abc smc approximation of the marginal likelihood p(d0|m)
if we can calculate the marginal likelihood p(d0|m) for each of the candidate models that we
consider in the model selection problem, then we can calculate the marginal posterior distribution
of a model m as
p(m | d_0) = \frac{ p(d_0 | m) \, p(m) }{ \sum_{m'} p(d_0 | m') \, p(m') }.    (11)
we now explain how to calculate p(d0|m) for model m. in the abc rejection-based approach the
posterior distribution of the parameters for each model m are estimated independently by employing
abc rejection; the marginal likelihood then equals the acceptance rate,
p(d_0 | m) \approx \frac{ \#\{\text{accepted particles given model } m\} }{ n_m },    (12)
i.e. the ratio between the number of accepted versus the number of proposed particles nm. we can
use this marginal likelihood estimate to calculate p(m|d0) using equation (11). this approach has
been used in [12].
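as a minimal illustration, a python sketch of this acceptance-rate estimate (12); simulate, dist and sample_theta_prior are hypothetical callables:

def marginal_likelihood_rejection(m, sample_theta_prior, simulate, dist, d0, eps, n_m):
    """abc rejection estimate of p(d0 | m) as the ratio of accepted to proposed particles."""
    accepted = 0
    for _ in range(n_m):
        theta = sample_theta_prior(m)
        if dist(d0, simulate(m, theta)) <= eps:
            accepted += 1
    return accepted / n_m
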
we now derive how abc smc can be used for estimating the marginal likelihood, which can
be then used for model selection. in a usual abc smc setting for drawing samples from the pos-
terior parameter distribution p(θ|m, d0) for a given model m, we define intermediate distributions
as
π_t(θ) = p(θ) \, 1\big( d(d_0, d(θ)) \le ε_t \big).    (13)
the target distribution πt is an unnormalized approximation of the posterior distribution p(θ|m, d0).
we are now interested in its normalization constant, i.e. the marginal likelihood,
p(d_0 | m) \approx \int_θ π_T(θ) dθ.
let us call the integrals of π_t(θ), \int_θ π_t(θ) dθ, the intermediate marginal likelihoods.
in the usual abc smc parameter estimation setting, our goal is to obtain samples from distribu-
tion πt (θ), whereas our goal here is to obtain its normalization constant. while this distribution as
defined in equation (13) is in general unnormalized, the abc smc parameter estimation algorithm
performs normalization of weights at every t and therefore returns its normalized version [8]. so
we cannot use the usual output of abc smc directly. instead we proceed as follows.
we would like to draw particles from the following target distribution:
tt (θ) = p(θ)1[d(d0, d(θ)) ≤εt ] + p(θ)1[d(d0, d(θ)) > εt ],
where p(θ) is the prior distribution. to draw samples from tt we can use abc smc, where we
define the intermediate distributions as
t_t(θ) = p(θ) \, 1[d(d_0, d(θ)) \le ε_t] + p(θ) \, 1[d(d_0, d(θ)) > ε_t] = t_t^1(θ) + t_t^2(θ).
in each population we accept n particles, and a particle is only rejected if it falls outside the
boundaries of t_t. we classify the accepted particles in two sets, θ_t^1 := \{θ;\, d(d_0, d(θ)) \le ε_t\} and
θ_t^2 := \{θ;\, d(d_0, d(θ)) > ε_t\}, depending on the distance reached. in each population t we can then
calculate the intermediate marginal likelihoods by
\int_θ t_t^1(θ) dθ = \sum_{θ \in θ_t^1} w_t(θ).
the target marginal likelihood, \int_θ t_T^1(θ) dθ, is our approximation of p(d_0|m). in an abc rejection
setting, where T = 1 and all weights are equal, this result corresponds to (12).
after calculating p(d0|m) for each m, we can use equation (11) to calculate the marginal pos-
terior distributions for model m,
p(m | d_0) \approx \frac{ p(m) \sum_{θ \in θ_T^1} w_T(θ) }{ \sum_{m'} p(m') \sum_{θ' \in θ_T'^1} w_T'(θ') }.
the model selection algorithm based on approximating the marginal likelihood proceeds as follows:
algorithm
m1 for model mj, j = 1, . . . , |m| do steps s1 to s4. then go to m2.
s1 initialize ε1, . . . , εt .
set the population indicator t = 1.
s2.0 set the particle indicator i = 1.
s2.1 if t = 1, sample θ** independently from p(θ).
if t > 1, sample θ* from the previous population {θ_{t-1}^{(i)}} with weights w_{t-1} and perturb the
particle to obtain θ** ~ k_t(θ | θ*), where k_t is a perturbation kernel.
if p(θ**) = 0, return to s2.1.
for a particle θ** simulate a candidate data set d and calculate d(d_0, d(θ**)).
if d(d_0, d(θ**)) ≤ ε_t, add θ** to θ_t^1(m_j). if d(d_0, d(θ**)) > ε_t, add θ** to θ_t^2(m_j).
s2.2 calculate the weight for particle θ_t^{(i)} = θ**:
w_t^{(i)}(θ_t^{(i)}) = 1,   if t = 1,
w_t^{(i)}(θ_t^{(i)}) = p(θ_t^{(i)}) / \sum_{j=1}^{N} w_{t-1}^{(j)} k_t(θ_t^{(i)} | θ_{t-1}^{(j)}),   if t > 1.
if i < N, set i = i + 1, go to s2.1.
s3 normalize the weights.
if t < t, set t = t + 1, go to s2.0.
s4 calculate
p(d_0 | m_j) \approx \frac{ p(m_j) \sum_{θ \in θ_T^1(m_j)} w_T(θ) }{ \sum_{m'} p(m') \sum_{θ' \in θ_T'^1(m')} w_T'(θ') }.
m2 for each m_j calculate p(m_j | d_0) using equation
p(m | d_0) = \frac{ p(d_0 | m) \, p(m) }{ \sum_{m'} p(d_0 | m') \, p(m') }.
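a short python sketch of how s4 and m2 can be assembled once the last population of each model is available; here the per-model estimate is taken as the summed (normalized) weights of θ_T^1 and the prior enters only in the m2 normalization — a simplification relative to the s4 formula as printed, with illustrative data structures:

def marginal_likelihood_from_population(particles):
    """particles is a list of (theta, weight, in_theta1) tuples from population T,
    with weights normalized over theta_T^1 and theta_T^2 together."""
    return sum(w for (_, w, in_theta1) in particles if in_theta1)

def model_posteriors(marglik, prior_m):
    """m2: posterior model probabilities from per-model marginal likelihood estimates."""
    z = sum(marglik[m] * prior_m(m) for m in marglik)
    return {m: marglik[m] * prior_m(m) / z for m in marglik}
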
the computational advantage of this model selection algorithm compared to the marginal likelihood
model selection based on abc rejection can be obtained by (i) starting with a small number of
particles n in population 1 and increasing it in each subsequent population. this way not much
computational effort is spent on simulations in earlier populations, but we nevertheless have a big
enough sample set in the last population to obtain a reliable estimate; (ii) exploiting the property
that intermediate distributions in the parameter estimation framework should be included in one
another, and so
range(θ_t^1) ⊇ range(θ_{t+1}^1),   t = 1, . . . , T − 1.
in other words, a proposed particle in population t cannot belong to θ1
t if it cannot be obtained
by perturbing any of the particles in θ1
t−1. we can therefore reject some of the proposed particles
without simulation. this means a huge saving in computational time, since simulations are the
most expensive part of abc-based algorithms. however, one of the obvious ways to exploit this
property would be to use a truncated perturbation kernel whose range is smaller than the range of the
prior distribution. but we find this unsatisfactory and, in the present form, feel that
evaluating the marginal model likelihood directly is not practical.
supplementary material b: tutorial on abc rejection
and abc smc for parameter estimation and model se-
lection
available in arxiv (reference arxiv:0910.4472v2 [stat.co]).
supplementary material c: supplementary figures and
datasets
available on the bioinformatics webpage.
|
0911.1706 | lp convergence with rates of smooth poisson-cauchy type singular
operators | in this article we continue the study of smooth poisson-cauchy type singular
integral operators on the line regarding their convergence to the unit operator
with rates in the lp norm, p greater equal one. the related established
inequalities involve the higher order lp modulus of smoothness of the engaged
function or its higher order derivative.
| introduction
the rate of convergence of singular integrals has been studied in [9], [13], [14], [15], [7], [8], [4], [5], [6]
and these articles motivate our work. here we study the lp, p ≥1, convergence of smooth poisson-
cauchy type singular integral operators over r to the unit operator with rates over smooth functions
with higher order derivatives in lp(r). we establish related jackson type inequalities involving the
higher lp modulus of smoothness of the engaged function or its higher order derivative.
the
discussed operators are not in general positive, see [10], [11]. other motivation comes from [1], [2].
2 results
in the next we introduce and deal with the smooth poisson-cauchy type singular integral operators
mr,ξ(f; x) defined as follows.
for r ∈ n and n ∈ z+ we set
α_j = (−1)^{r−j} \binom{r}{j} j^{−n},   j = 1, . . . , r,
α_0 = 1 − \sum_{j=1}^{r} (−1)^{r−j} \binom{r}{j} j^{−n},   j = 0,    (1)
that is \sum_{j=0}^{r} α_j = 1.
let f ∈ c^n(r) and f^{(n)} ∈ l_p(r), 1 ≤ p < ∞, α ∈ n, β > 1/(2α). we define for x ∈ r, ξ > 0 the
lebesgue integral
m_{r,ξ}(f; x) = w \int_{−∞}^{∞} \frac{ \sum_{j=0}^{r} α_j f(x + jt) }{ (t^{2α} + ξ^{2α})^{β} } dt,    (2)
where the constant is defined as
w = \frac{ Γ(β) α ξ^{2αβ−1} }{ Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α}) }.
note 1. the operators mr,ξ are not, in general, positive. see [10], (18).
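purely as a numerical illustration (not part of the paper), a python sketch that evaluates (2) by quadrature; it assumes β > 1/(2α) so the kernel is integrable, and uses scipy only for the improper integral:

from math import comb, gamma
from scipy.integrate import quad

def m_r_xi(f, x, r, n, alpha, beta, xi):
    """numerical evaluation of the operator m_{r,xi}(f; x) defined in (2)."""
    a = [(-1) ** (r - j) * comb(r, j) * j ** (-n) for j in range(1, r + 1)]
    alphas = [1.0 - sum(a)] + a                    # alpha_0, ..., alpha_r as in (1)
    w = (gamma(beta) * alpha * xi ** (2 * alpha * beta - 1)
         / (gamma(1.0 / (2 * alpha)) * gamma(beta - 1.0 / (2 * alpha))))
    def integrand(t):
        num = sum(alphas[j] * f(x + j * t) for j in range(r + 1))
        return num / (t ** (2 * alpha) + xi ** (2 * alpha)) ** beta
    val, _ = quad(integrand, -float("inf"), float("inf"), limit=200)
    return w * val
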
we notice by w \int_{−∞}^{∞} \frac{1}{(t^{2α} + ξ^{2α})^{β}} dt = 1 that m_{r,ξ}(c; x) = c, c constant, see also [10], [11], and
m_{r,ξ}(f; x) − f(x) = w \sum_{j=0}^{r} α_j \left( \int_{−∞}^{∞} [f(x + jt) − f(x)] \frac{1}{(t^{2α} + ξ^{2α})^{β}} dt \right).    (3)
we use also that
\int_{−∞}^{∞} \frac{t^k}{(t^{2α} + ξ^{2α})^{β}} dt =
  0,   k odd,
  \frac{1}{ξ^{2αβ−k−1} α} \frac{ Γ(\frac{k+1}{2α}) Γ(β − \frac{k+1}{2α}) }{ Γ(β) },   k even, with β > \frac{k+1}{2α},    (4)
see [16].
we need the rth lp-modulus of smoothness
ω_r(f^{(n)}, h)_p := \sup_{|t| \le h} ∥Δ_t^r f^{(n)}(x)∥_{p,x},   h > 0,    (5)
where
Δ_t^r f^{(n)}(x) := \sum_{j=0}^{r} (−1)^{r−j} \binom{r}{j} f^{(n)}(x + jt),    (6)
see [12], p. 44. here we have that ω_r(f^{(n)}, h)_p < ∞, h > 0.
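again only as an illustration, a crude python estimate of the modulus (5)-(6): the supremum is taken over finitely many t and the l_p norm is approximated on a uniform grid, both being discretization assumptions, and g is a vectorized stand-in for f^(n):

import numpy as np
from math import comb

def lp_modulus_of_smoothness(g, r, h, p, xs, num_t=50):
    """rough numerical estimate of omega_r(g, h)_p on the grid xs."""
    dx = xs[1] - xs[0]
    best = 0.0
    for t in np.linspace(-h, h, num_t):
        diff = sum((-1) ** (r - j) * comb(r, j) * g(xs + j * t) for j in range(r + 1))
        norm = (np.sum(np.abs(diff) ** p) * dx) ** (1.0 / p)
        best = max(best, norm)
    return best
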
we need to introduce
δ_k := \sum_{j=1}^{r} α_j j^k,   k = 1, . . . , n ∈ n,    (7)
and denote by ⌊·⌋ the integral part. call
τ(w, x) := \sum_{j=0}^{r} α_j j^n f^{(n)}(x + jw) − δ_n f^{(n)}(x).    (8)
notice also that
− \sum_{j=1}^{r} (−1)^{r−j} \binom{r}{j} = (−1)^r \binom{r}{0}.    (9)
according to [3], p. 306, [1], we get
τ(w, x) = Δ_w^r f^{(n)}(x).    (10)
thus
∥τ(w, x)∥_{p,x} \le ω_r(f^{(n)}, |w|)_p,   w ∈ r.    (11)
using taylor's formula, and the appropriate change of variables, one has (see [6])
\sum_{j=0}^{r} α_j [f(x + jt) − f(x)] = \sum_{k=1}^{n} \frac{f^{(k)}(x)}{k!} δ_k t^k + r_n(0, t, x),    (12)
where
r_n(0, t, x) := \int_0^t \frac{(t − w)^{n−1}}{(n − 1)!} τ(w, x) dw,   n ∈ n.    (13)
using the above terminology we obtain for β > \frac{2⌊n/2⌋ + 1}{2α} that
Δ(x) := m_{r,ξ}(f; x) − f(x) − \sum_{m=1}^{⌊n/2⌋} \frac{f^{(2m)}(x) δ_{2m}}{(2m)!} \frac{ Γ(\frac{2m+1}{2α}) Γ(β − \frac{2m+1}{2α}) }{ Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α}) } ξ^{2m} = r_n^*(x),    (14)
where
r_n^*(x) := w \int_{−∞}^{∞} r_n(0, t, x) \frac{1}{(t^{2α} + ξ^{2α})^{β}} dt,   n ∈ n.    (15)
in Δ(x), see (14), the sum collapses when n = 1.
we present our first result.
theorem 1. let p, q > 1 such that \frac{1}{p} + \frac{1}{q} = 1, n ∈ n, α ∈ n, β > \frac{1}{α}(\frac{1}{p} + n + r) and the rest as above. then
∥Δ(x)∥_p \le \frac{ (2α)^{1/p} Γ(β) Γ(\frac{qβ}{2} − \frac{1}{2α})^{1/q} ξ^n τ^{1/p} }{ Γ(\frac{qβ}{2})^{1/q} Γ(\frac{1}{2α})^{1/p} Γ(β − \frac{1}{2α}) (rp + 1)^{1/p} [(n − 1)!] (q(n − 1) + 1)^{1/q} } ω_r(f^{(n)}, ξ)_p,    (16)
where
0 < τ := \left[ \int_0^{∞} (1 + u)^{rp+1} \frac{u^{np−1}}{(u^{2α} + 1)^{pβ/2}} du − \int_0^{∞} \frac{u^{np−1}}{(u^{2α} + 1)^{pβ/2}} du \right] < ∞.    (17)
hence as ξ → 0 we obtain ∥Δ(x)∥_p → 0.
if additionally f^{(2m)} ∈ l_p(r), m = 1, 2, . . . , ⌊n/2⌋, then ∥m_{r,ξ}(f) − f∥_p → 0, as ξ → 0.
proof. we observe that
|∆(x)|p = w p
z ∞
−∞
rn(0, t, x)
1
t2α + ξ2αβ dt
p
≤w p
z ∞
−∞
|rn(0, t, x)|
1
t2α + ξ2αβ dt
!p
≤w p
z ∞
−∞
z |t|
0
(|t| −w)n−1
(n −1)!
|τ(sign(t) * w, x)|dw
1
t2α + ξ2αβ dt
!p
.
(18)
hence we have
i :=
z ∞
−∞
|∆(x)|pdx ≤w p
z ∞
−∞
z ∞
−∞
γ(t, x)
1
t2α + ξ2αβ dt
!p
dx
!
,
(19)
where
γ(t, x) :=
z |t|
0
(|t| −w)n−1
(n −1)!
|τ(sign(t) * w, x)|dw ≥0.
(20)
therefore by using h ̈
older's inequality suitably we obtain
r.h.s.(19) = w p
z ∞
−∞
z ∞
−∞
γ(t, x)
1
t2α + ξ2αβ dt
!p
dx
!
= w p *
z ∞
−∞
z ∞
−∞
γ(t, x)
1
t2α + ξ2αβ/2
1
t2α + ξ2αβ/2 dt
p
dx
≤w p *
z ∞
−∞
z ∞
−∞
γ(t, x)
1
t2α + ξ2αβ/2
p
dt
z ∞
−∞
1
t2α + ξ2αβ/2
q
dt
p
q
dx
= w p *
z ∞
−∞
z ∞
−∞
γp(t, x)
1
t2α + ξ2αpβ/2 dt
z ∞
−∞
1
t2α + ξ2αqβ/2 dt
p
q
dx
= w p *
z ∞
−∞
z ∞
−∞
γp(t, x)
1
t2α + ξ2αpβ/2 dt
dx
γ
1
2α
γ
qβ
2 −
1
2α
γ
qβ
2
αξqαβ−1
p
q
=
ξpαβ−1α [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
qβ
2
p
q γ
1
2α
γ
β −
1
2α
p
z ∞
−∞
z ∞
−∞
γp(t, x)
1
t2α + ξ2αpβ/2 dt
dx
.
(21)
again by h ̈
older's inequality we have
γp(t, x) ≤
r |t|
0 |τ(sign(t) * w, x)|pdw
((n −1)!)p
|t|np−1
(q(n −1) + 1)p/q .
(22)
consequently we have
r.h.s.(21) ≤
ξpαβ−1α [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
qβ
2
p
q γ
1
2α
γ
β −
1
2α
p
*
z ∞
−∞
z ∞
−∞
r |t|
0 |τ(sign(t) * w, x)|pdw
((n −1)!)p
|t|np−1
(q(n −1) + 1)p/q
1
t2α + ξ2αpβ/2 dt
dx
= : (∗),
(calling
c1 :=
ξpαβ−1α [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
qβ
2
p
q γ
1
2α
γ
β −
1
2α
p ((n −1)!)p(q(n −1) + 1)p/q
)
(23)
and
(∗) = c1
z ∞
−∞
z ∞
−∞
z |t|
0
|τ(sign(t) * w, x)|pdw
!
|t|np−1
1
t2α + ξ2αpβ/2 dx
dt
= c1
z ∞
−∞
z ∞
−∞
z |t|
0
|∆r
sign(t)*wf (n)(x))|pdw
!
|t|np−1
1
t2α + ξ2αpβ/2 dx
dt
= c1
z ∞
−∞
z ∞
−∞
z |t|
0
|∆r
sign(t)*wf (n)(x))|pdw
!
dx
!
|t|np−1
1
t2α + ξ2αpβ/2 dt
= c1
z ∞
−∞
z |t|
0
z ∞
−∞
|∆r
sign(t)*wf (n)(x))|pdx
dw
!
|t|np−1
1
t2α + ξ2αpβ/2 dt
≤c1
z ∞
−∞
z |t|
0
ωr(f (n), w)p
pdw
!
|t|np−1
1
t2α + ξ2αpβ/2 dt
.
(24)
so far we have proved
i ≤c1
z ∞
−∞
z |t|
0
ωr(f (n), w)p
pdw
!
|t|np−1
1
t2α + ξ2αpβ/2 dt
.
(25)
by [12], p. 45 we have
(r.h.s.(25)) ≤c1
ωr(f (n), ξ)p
p
z ∞
−∞
z |t|
0
1 + w
ξ
rp
dw
!
|t|np−1
1
t2α + ξ2αpβ/2 dt
=: (∗∗).
(26)
but we see that
(∗∗) =
ξc1
rp + 1
ωr(f (n), ξ)p
p
j ,
(27)
where
j =
z ∞
−∞
1 + |t|
ξ
rp+1
−1
!
|t|np−1
1
t2α + ξ2αpβ/2 dt
= 2
z ∞
0
1 + t
ξ
rp+1
−1
!
tnp−1
1
t2α + ξ2αpβ/2 dt.
(28)
here we find
j = 2ξp(n−αβ)
z ∞
0
(1 + u)rp+1 −1
unp−1
1
(u2α + 1)pβ/2 du
= 2ξp(n−αβ)
"z ∞
0
(1 + u)rp+1
unp−1
(u2α + 1)pβ/2 du −
z ∞
0
unp−1
(u2α + 1)pβ/2 du
#
.
(29)
thus by (17) and (29) we obtain
j = 2ξp(n−αβ)τ.
(30)
we notice that
0 < τ <
z ∞
0
(1 + u)rp+1 unp−1
(u2α + 1)pβ/2
du
<
z ∞
0
(1 + u)rp+1 (1 + u)np−1
(u2α + 1)pβ/2
du
=
z ∞
0
(1 + u)p(n+r)
(u2α + 1)pβ/2 du =: i1.
also call
k :=
z 1
0
(1 + u)p(n+r)
(u2α + 1)pβ/2 du < ∞.
then we can write
i1 = k +
z ∞
1
(1 + u)p(n+r)
(u2α + 1)pβ/2 du < k + 2p(n+r)
z ∞
1
up(n+r)
(u2α + 1)pβ/2 du = k + 2p(n+r)i2,
where i2 := r ∞
1
up(n+r)
(u2α+1)pβ/2 du.
since
1
1+u2α <
1
u2α , we have
1
(1+u2α)pβ/2 <
1
upαβ , for u ∈[1, ∞).
so we get
i2 <
z ∞
1
up(n+r−αβ)du = lim
ε→∞
z ε
1
up(n+r−αβ)du
= lim
ε→∞
εp(n+r−αβ)+1 −1
p (n + r −αβ) + 1
=
−1
p (n + r −αβ) + 1,
which is a positive number since β > 1
α
1
p + n + r
.
consequently i2 is finite, so is i1, proving τ < ∞.
using (27) and (30) we get
(∗∗) =
ξc1
rp + 1
ωr(f (n), ξ)p
p
2ξp(n−αβ)τ
=
2α [γ (β)]p γ
qβ
2 −
1
2α
p
q τ
(rp + 1) γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q ((n −1)!)p(q(n −1) + 1)p/q
ξpn
ωr(f (n), ξ)p
p
.
(31)
i.e. we have established that
i ≤
2α [γ (β)]p γ
qβ
2 −
1
2α
p
q τ
(rp + 1) γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q ((n −1)!)p(q(n −1) + 1)p/q
ξpn
ωr(f (n), ξ)p
p
.
(32)
that is finishing the proof of the theorem.
■
the counterpart of theorem 1 follows, case of p = 1.
theorem 2. let f ∈ c^n(r) and f^{(n)} ∈ l_1(r), n ∈ n, α ∈ n, β > \frac{n + r + 1}{2α}. then
∥Δ(x)∥_1 \le \frac{1}{(r + 1)(n − 1)! Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α})} \left[ \sum_{k=1}^{r+1} \binom{r+1}{k} Γ(\frac{n + k}{2α}) Γ(β − \frac{n + k}{2α}) \right] ω_r(f^{(n)}, ξ)_1 ξ^n.    (33)
hence as ξ → 0 we obtain ∥Δ(x)∥_1 → 0.
if additionally f^{(2m)} ∈ l_1(r), m = 1, 2, . . . , ⌊n/2⌋, then ∥m_{r,ξ}(f) − f∥_1 → 0, as ξ → 0.
proof. it follows
|∆(x)| = w
z ∞
−∞
rn(0, t, x)
1
t2α + ξ2αβ dt
≤w
z ∞
−∞
|rn(0, t, x)|
1
t2α + ξ2αβ dt
≤w
z ∞
−∞
z |t|
0
(|t| −w)n−1
(n −1)!
|τ(sign(t) * w, x)|dw
!
1
t2α + ξ2αβ dt.
(34)
thus
∥∆(x)∥1 =
z ∞
−∞
|∆(x)|dx ≤w *
(35)
z ∞
−∞
z ∞
−∞
z |t|
0
(|t| −w)n−1
(n −1)!
|τ(sign(t) * w, x)|dw
!
1
t2α + ξ2αβ dt
!
dx
=: (∗)
but we see that
z |t|
0
(|t| −w)n−1
(n −1)!
|τ(sign(t) * w, x)|dw ≤
|t|n−1
(n −1)!
z |t|
0
|τ(sign(t) * w, x)|dw.
(36)
therefore it holds
(∗) ≤w
z ∞
−∞
z ∞
−∞
|t|n−1
(n −1)!
z |t|
0
|τ(sign(t) * w, x)|dw
!
1
t2α + ξ2αβ dt
!
dx
=
w
(n −1)!
z ∞
−∞
z |t|
0
z ∞
−∞
|τ(sign(t) * w, x)|dx
dw
!
|t|n−1
t2α + ξ2αβ dt
!
≤
w
(n −1)!
z ∞
−∞
z |t|
0
ωr(f (n), w)1dw
!
|t|n−1
t2α + ξ2αβ dt
!
.
(37)
i.e. we get
∥∆(x)∥1 ≤
w
(n −1)!
z ∞
−∞
z |t|
0
ωr(f (n), w)1dw
!
|t|n−1
t2α + ξ2αβ dt
!
.
(38)
consequently we have
∥∆(x)∥1 ≤wωr(f (n), ξ)1
(n −1)!
z ∞
−∞
z |t|
0
1 + w
ξ
r
dw
!
|t|n−1
t2α + ξ2αβ dt
!
= 2ξwωr(f (n), ξ)1
(r + 1) (n −1)!
z ∞
0
1 + t
ξ
r+1
−1
!
tn−1
t2α + ξ2αβ dt
!
=
2γ (β) αξ2αβωr(f (n), ξ)1
(r + 1) (n −1)!γ
1
2α
γ
β −
1
2α
z ∞
0
1 + t
ξ
r+1
−1
!
tn−1
t2α + ξ2αβ dt
!
. (39)
we have gotten so far
∥∆(x)∥1 ≤
2γ (β) αξ2αβωr(f (n), ξ)1 * λ
(r + 1) (n −1)!γ
1
2α
γ
β −
1
2α
,
(40)
where
λ :=
z ∞
0
1 + t
ξ
r+1
−1
!
tn−1
t2α + ξ2αβ dt.
(41)
one easily finds that
λ =
z ∞
0
r+1
x
k=1
r + 1
k
t
ξ
k!
tn−1
t2α + ξ2αβ dt
= ξn−2αβ
r+1
x
k=1
r + 1
k
z ∞
0
t n+k−1
(t 2α + 1)β dt
= ξn−2αβ
r+1
x
k=1
r + 1
k
kn+k.
(42)
where
kn+k :=
z ∞
0
t n+k−1
(t 2α + 1)β dt = γ
n+k
2α
γ
β −n+k
2α
γ (β) 2α
.
(43)
∥∆(x)∥1 ≤
1
(r + 1) (n −1)!γ
1
2α
γ
β −
1
2α
"r+1
x
k=1
r + 1
k
γ
n + k
2α
γ
β −n + k
2α
#
ωr(f (n), ξ)1ξn.
we have proved (33).
■
the case n = 0 is met next.
proposition 1. let p, q > 1 such that \frac{1}{p} + \frac{1}{q} = 1, α ∈ n, β > \frac{1}{α}(r + \frac{1}{p}) and the rest as above. then
∥m_{r,ξ}(f) − f∥_p \le \frac{ (2α)^{1/p} Γ(β) Γ(\frac{qβ}{2} − \frac{1}{2α})^{1/q} θ^{1/p} }{ Γ(\frac{1}{2α})^{1/p} Γ(β − \frac{1}{2α}) Γ(\frac{qβ}{2})^{1/q} } ω_r(f, ξ)_p,    (44)
where
0 < θ := \int_0^{∞} (1 + t)^{rp} \frac{1}{(t^{2α} + 1)^{pβ/2}} dt < ∞.    (45)
hence as ξ → 0 we obtain m_{r,ξ} → unit operator i in the l_p norm, p > 1.
proof. by (3) we notice that,
mr,ξ(f; x) −f(x) = w
r
x
j=0
αj
z ∞
−∞
(f(x + jt) −f(x))
1
t2α + ξ2αβ dt
= w
z ∞
−∞
r
x
j=0
αj (f(x + jt) −f(x))
1
t2α + ξ2αβ dt
= w
z ∞
−∞
r
x
j=1
αjf(x + jt) −
r
x
j=1
αjf(x)
1
t2α + ξ2αβ dt
= w
z ∞
−∞
r
x
j=1
(−1)r−j
r
j
j−n
f(x + jt) −
r
x
j=1
(−1)r−j
r
j
j−n
f(x)
1
t2α + ξ2αβ dt
= w
z ∞
−∞
r
x
j=1
(−1)r−j
r
j
f(x + jt) −
r
x
j=1
(−1)r−j
r
j
f(x)
1
t2α + ξ2αβ dt
(9)
= w
z ∞
−∞
r
x
j=1
(−1)r−j
r
j
f(x + jt) +
(−1)r−0
r
0
f(x + 0t)
1
t2α + ξ2αβ dt
= w
z ∞
−∞
r
x
j=0
(−1)r−j
r
j
f(x + jt)
1
t2α + ξ2αβ dt
(6)
= w
z ∞
−∞
(∆r
tf) (x)
1
t2α + ξ2αβ dt
!
.
(46)
and then
|mr,ξ(f; x) −f(x)| ≤w
z ∞
−∞
|∆r
tf(x)|
1
t2α + ξ2αβ dt
!
.
(47)
we next estimate
z ∞
−∞
|mr,ξ(f; x) −f(x)|pdx ≤
z ∞
−∞
(w)p
z ∞
−∞
|∆r
tf(x)|
1
t2α + ξ2αβ dt
!p
dx
= (w)p
z ∞
−∞
z ∞
−∞
|∆r
tf(x)|
1
t2α + ξ2αβ/2
1
t2α + ξ2αβ/2
dt
p
dx
≤(w)p
z ∞
−∞
z ∞
−∞
|∆r
tf(x)|
1
t2α + ξ2αβ/2
p
dt
1
p
z ∞
−∞
1
t2α + ξ2αβ/2
q
dt
1
q
p
dx
= (w)p
z ∞
−∞
z ∞
−∞
|∆r
tf(x)|p
1
t2α + ξ2αpβ/2 dt
z ∞
−∞
1
t2α + ξ2αqβ/2 dt
p
q
dx
= (w)p
z ∞
−∞
z ∞
−∞
|∆r
tf(x)|p
1
t2α + ξ2αpβ/2 dt
γ
1
2α
γ
qβ
2 −
1
2α
γ
qβ
2
αξqαβ−1
p
q
dx
= (w)p
γ
1
2α
γ
qβ
2 −
1
2α
γ
qβ
2
αξqαβ−1
p
q z ∞
−∞
z ∞
−∞
|∆r
tf(x)|p
1
t2α + ξ2αpβ/2 dt
dx
=
αξαβp−1 [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q
z ∞
−∞
z ∞
−∞
|∆r
tf(x)|p
1
t2α + ξ2αpβ/2 dx
dt
=
αξαβp−1 [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q
z ∞
−∞
z ∞
−∞
|∆r
tf(x)|p dx
1
t2α + ξ2αpβ/2 dt
≤
αξαβp−1 [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q
z ∞
−∞
ωr(f, |t|)p
p
1
t2α + ξ2αpβ/2 dt
=
2αξαβp−1 [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q
z ∞
0
ωr(f, t)p
p
1
t2α + ξ2αpβ/2 dt
≤
2αξαβp−1 [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q
ωr(f, ξ)p
p
z ∞
0
1 + t
ξ
rp
1
t2α + ξ2αpβ/2 dt
=
2αξαβp−1 [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q
ωr(f, ξ)p
p
z ∞
0
(1 + t )rp
1
(t 2α + 1)pβ/2 ξαpβ ξdt
=
2α [γ (β)]p γ
qβ
2 −
1
2α
p
q
γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p
q ωr(f, ξ)p
p
z ∞
0
(1 + t)rp
1
(t2α + 1)pβ/2 dt.
(48)
we have established (44).
we also notice that
θ =
z ∞
0
(1 + t)rp
(1 + t2α)pβ/2 dt =
z 1
0
(1 + t)rp
(1 + t2α)pβ/2 dt +
z ∞
1
(1 + t)rp
(1 + t2α)pβ/2 dt
<
z 1
0
(1 + t)rp
(1 + t2α)pβ/2 dt + 2rp
z ∞
1
trp
(1 + t2α)pβ/2 dt
<
z 1
0
(1 + t)rp
(1 + t2α)pβ/2 dt + 2rp
z ∞
1
tp(r−αβ)dt
=
z 1
0
(1 + t)rp
(1 + t2α)pβ/2 dt −
2rp
p (r −αβ) + 1,
the last, since β > 1
α
r + 1
p
, is a finite positive constant. thus 0 < θ < ∞.
■
we also give
proposition 2. assume β > \frac{r + 1}{2α}. it holds
∥m_{r,ξ} f − f∥_1 \le \frac{2α Γ(β)}{Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α})} \left( \int_0^{∞} (1 + t)^r \frac{1}{(t^{2α} + 1)^{β}} dt \right) ω_r(f, ξ)_1.    (49)
hence as ξ → 0 we get m_{r,ξ} → i in the l_1 norm.
proof. by (47) we have again
|mr,ξ(f; x) −f(x)| ≤w
z ∞
−∞
|∆r
tf(x)|
1
t2α + ξ2αβ dt
!
.
next we estimate
z ∞
−∞
|mr,ξ(f; x) −f(x)| dx ≤w
z ∞
−∞
z ∞
−∞
|∆r
tf(x)|
1
t2α + ξ2αβ dt
!
dx
= w
z ∞
−∞
z ∞
−∞
|∆r
tf(x)| dx
1
t2α + ξ2αβ dt
≤w
z ∞
−∞
ωr(f, |t|)1
1
t2α + ξ2αβ dt
≤w2ωr(f, ξ)1
z ∞
0
1 + t
ξ
r
1
t2α + ξ2αβ dt
=
γ (β) ξ2αβ−12α
γ
1
2α
γ
β −
1
2α
ωr(f, ξ)1
z ∞
0
ξ (1 + t)r
1
(t2α + 1)β ξ2αβ dt
=
γ (β) 2α
γ
1
2α
γ
β −
1
2α
ωr(f, ξ)1
z ∞
0
(1 + t)r
1
(t2α + 1)β dt.
(50)
we have proved (49).
we also notice that
0 <
z ∞
0
(1 + t)r
1
(t2α + 1)β dt
=
z 1
0
(1 + t)r
(t2α + 1)β dt +
z ∞
1
(1 + t)r
(t2α + 1)β dt
<
z 1
0
(1 + t)r
(t2α + 1)β dt + 2r
z ∞
1
tr−2αβdt
=
z 1
0
(1 + t)r
(t2α + 1)β dt −
2r
(r −2αβ + 1),
which is a positive finite constant.
■
in the next we consider f ∈ c^n(r) and f^{(n)} ∈ l_p(r), n = 0 or n ≥ 2 even, 1 ≤ p < ∞, and the
similar smooth singular operator of symmetric convolution type
m_ξ(f; x) = w \int_{−∞}^{∞} f(x + y) \frac{1}{(y^{2α} + ξ^{2α})^{β}} dy,   for all x ∈ r, ξ > 0,    (51)
that is
m_ξ(f; x) = w \int_0^{∞} (f(x + y) + f(x − y)) \frac{1}{(y^{2α} + ξ^{2α})^{β}} dy,
for all x ∈ r, ξ > 0. notice that m_{1,ξ} = m_ξ. let the central second order difference
(\tilde{Δ}_y^2 f)(x) := f(x + y) + f(x − y) − 2f(x).    (52)
notice that
(\tilde{Δ}_{−y}^2 f)(x) = (\tilde{Δ}_y^2 f)(x).
when n ≥ 2 even, using taylor's formula with cauchy remainder we eventually find
(\tilde{Δ}_y^2 f)(x) = 2 \sum_{ρ=1}^{n/2} \frac{f^{(2ρ)}(x)}{(2ρ)!} y^{2ρ} + r_1(x),    (53)
where
r_1(x) := \int_0^y (\tilde{Δ}_t^2 f^{(n)})(x) \frac{(y − t)^{n−1}}{(n − 1)!} dt.    (54)
notice that
m_ξ(f; x) − f(x) = w \int_0^{∞} (\tilde{Δ}_y^2 f(x)) \frac{1}{(y^{2α} + ξ^{2α})^{β}} dy.    (55)
furthermore by (4), (53) and (55) we easily see that
k(x) := m_ξ(f; x) − f(x) − \sum_{ρ=1}^{n/2} \frac{f^{(2ρ)}(x)}{(2ρ)!} \frac{ Γ(\frac{2ρ+1}{2α}) Γ(β − \frac{2ρ+1}{2α}) }{ Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α}) } ξ^{2ρ}    (56)
     = w \int_0^{∞} \left( \int_0^y (\tilde{Δ}_t^2 f^{(n)})(x) \frac{(y − t)^{n−1}}{(n − 1)!} dt \right) \frac{1}{(y^{2α} + ξ^{2α})^{β}} dy,
where β > \frac{n + 1}{2α}.
therefore we have
|k(x)| \le w \int_0^{∞} \left( \int_0^y |\tilde{Δ}_t^2 f^{(n)}(x)| \frac{(y − t)^{n−1}}{(n − 1)!} dt \right) \frac{1}{(y^{2α} + ξ^{2α})^{β}} dy.    (57)
here we estimate in l_p norm, p ≥ 1, the error function k(x). notice that we have ω_2(f^{(n)}, h)_p < ∞,
h > 0, n = 0 or n ≥ 2 even. operators m_ξ are positive operators.
the related main lp result here comes next.
theorem 3. let p, q > 1 such that \frac{1}{p} + \frac{1}{q} = 1, n ≥ 2 even, α ∈ n, β > \frac{1}{α}(\frac{1}{p} + n + 2) and the rest as above. then
∥k(x)∥_p \le \frac{ \tilde{τ}^{1/p} α^{1/p} Γ(\frac{qβ}{2} − \frac{1}{2α})^{1/q} Γ(β) }{ 2^{1/q} Γ(\frac{1}{2α})^{1/p} Γ(β − \frac{1}{2α}) Γ(\frac{qβ}{2})^{1/q} (q(n − 1) + 1)^{1/q} (2p + 1)^{1/p} (n − 1)! } ξ^n ω_2(f^{(n)}, ξ)_p,    (58)
where
0 < \tilde{τ} = \int_0^{∞} \big( (1 + u)^{2p+1} − 1 \big) u^{pn−1} \frac{1}{(1 + u^{2α})^{pβ/2}} du < ∞.    (59)
hence as ξ → 0 we get ∥k(x)∥_p → 0.
if additionally f^{(2m)} ∈ l_p(r), m = 1, 2, . . . , n/2, then ∥m_ξ(f) − f∥_p → 0, as ξ → 0.
proof. we observe that
|k(x)|p ≤w p
z ∞
0
z y
0
̃
∆2
tf (n)
(x)(y −t)n−1
(n −1)! dt
1
y2α + ξ2αβ dy
!p
.
(60)
call
̃
γ(y, x) :=
z y
0
| ̃
∆2
tf (n)(x)|(y −t)n−1
(n −1)! dt ≥0,
y ≥0,
(61)
then we have
|k(x)|p ≤w p
z ∞
0
̃
γ(y, x)
1
y2α + ξ2αβ dy
!p
.
(62)
consequently
λ : =
z ∞
−∞
|k(x)|pdx ≤w p
z ∞
−∞
z ∞
0
̃
γ(y, x)
1
y2α + ξ2αβ dy
!p
dx
= w p
z ∞
−∞
z ∞
0
̃
γ(y, x)
1
y2α + ξ2αβ/2
1
y2α + ξ2αβ/2 dy
p
dx
(by h ̈
older's inequality)
≤w p
z ∞
−∞
z ∞
0
( ̃
γ(y, x))p
1
y2α + ξ2αpβ/2 dy
z ∞
0
1
y2α + ξ2αqβ/2 dy
p/q
dx
= w p
γ
1
2α
γ
qβ
2 −
1
2α
2γ
qβ
2
αξqαβ−1
p/q
z ∞
−∞
z ∞
0
( ̃
γ(y, x))p
1
y2α + ξ2αpβ/2 dy
dx
=
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q
z ∞
−∞
z ∞
0
( ̃
γ(y, x))p
1
y2α + ξ2αpβ/2 dy
dx
= : (∗).
(63)
by applying again h ̈
older's inequality we see that
̃
γ(y, x) ≤
r y
0 | ̃
∆2
t f (n)(x)|pdt
1/p
(n −1)!
y(n−1+ 1
q )
(q(n −1) + 1)1/q .
(64)
therefore it holds
(∗) ≤
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q
[(n −1)!]p (q(n −1) + 1)p/q
*
z ∞
−∞
z ∞
0
z y
0
| ̃
∆2
tf (n)(x)|pdt
yp(n−1+ 1
q )
1
y2α + ξ2αpβ/2 dy
dx
=
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q
[(n −1)!]p (q(n −1) + 1)p/q
*
z ∞
0
z ∞
−∞
z y
0
| ̃
∆2
tf (n)(x)|pdt
yp(n−1+ 1
q )
1
y2α + ξ2αpβ/2 dx
dy
= : (∗∗).
(65)
we call
c2 :=
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q
[(n −1)!]p (q(n −1) + 1)p/q
.
(66)
and hence
(∗∗) = c2
z ∞
0
z ∞
−∞
z y
0
| ̃
∆2
tf (n)(x)|pdt
dx
yp(n−1+ 1
q )
1
y2α + ξ2αpβ/2 dy
= c2
z ∞
0
z y
0
z ∞
−∞
| ̃
∆2
tf (n)(x)|pdx
dt
ypn−1
1
y2α + ξ2αpβ/2 dy
= c2
z ∞
0
z y
0
z ∞
−∞
|∆2
tf (n)(x −t)|pdx
dt
ypn−1
1
y2α + ξ2αpβ/2 dy
= c2
z ∞
0
z y
0
z ∞
−∞
|∆2
tf (n)(x)|pdx
dt
ypn−1
1
y2α + ξ2αpβ/2 dy
≤c2
z ∞
0
z y
0
ω2(f (n), t)p
pdt
ypn−1
1
y2α + ξ2αpβ/2 dy
≤c2ω2(f (n), ξ)p
p
z ∞
0
z y
0
1 + t
ξ
2p
dt
!
ypn−1
1
y2α + ξ2αpβ/2 dy
.
(67)
i.e. so far we proved that
λ ≤c2ω2(f (n), ξ)p
p
z ∞
0
z y
0
1 + t
ξ
2p
dt
!
ypn−1
1
y2α + ξ2αpβ/2 dy
.
(68)
but
r.h.s.(68) =
c2ξ
2p + 1ω2(f (n), ξ)p
p
z ∞
0
1 + y
ξ
2p+1
−1
!
ypn−1
1
y2α + ξ2αpβ/2 dy
.
(69)
call
m :=
z ∞
0
1 + y
ξ
2p+1
−1
!
ypn−1
1
y2α + ξ2αpβ/2 dy,
(70)
and
̃
τ :=
z ∞
0
(1 + u)2p+1 −1
upn−1
1
(1 + u2α)pβ/2 du.
(71)
that is
m = ξp(n−αβ) ̃
τ.
(72)
therefore it holds
λ ≤
̃
τ [γ (β)]p αξpnγ
qβ
2 −
1
2α
p/q
ω2(f (n), ξ)p
p
2
p
q (2p + 1) γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q
[(n −1)!]p (q(n −1) + 1)p/q
.
(73)
we have established (58).
■
the counterpart of theorem 3 follows, p = 1 case.
theorem 4. let f ∈ c^n(r) and f^{(n)} ∈ l_1(r), n ≥ 2 even, α ∈ n, β > \frac{n + 3}{2α}. then
∥k(x)∥_1 \le \frac{1}{6 Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α}) (n − 1)!} \left[ 3 Γ(\frac{n + 1}{2α}) Γ(β − \frac{n + 1}{2α}) + 3 Γ(\frac{n + 2}{2α}) Γ(β − \frac{n + 2}{2α}) + Γ(\frac{n + 3}{2α}) Γ(β − \frac{n + 3}{2α}) \right] ω_2(f^{(n)}, ξ)_1 ξ^n.    (74)
hence as ξ → 0 we obtain ∥k(x)∥_1 → 0.
if additionally f^{(2m)} ∈ l_1(r), m = 1, 2, . . . , n/2, then ∥m_ξ(f) − f∥_1 → 0, as ξ → 0.
proof. notice that
̃
∆2
tf (n)(x) = ∆2
t f (n)(x −t),
(75)
all x, t ∈r. also it holds
z ∞
−∞
|∆2
tf (n)(x −t)|dx =
z ∞
−∞
|∆2
tf (n)(w)|dw ≤ω2(f (n), t)1,
all t ∈r+.
(76)
here we obtain
∥k(x)∥1 =
z ∞
−∞
|k(x)|dx
(57)
≤w
z ∞
−∞
z ∞
0
z y
0
̃
∆2
tf (n)(x)
(y −t)n−1
(n −1)! dt
1
y2α + ξ2αβ dy
!
dx
≤w
z ∞
−∞
z ∞
0
yn−1
(n −1)!
z y
0
̃
∆2
tf (n)(x)
dt
1
y2α + ξ2αβ dy
!
dx
= w
z ∞
0
z ∞
−∞
z y
0
̃
∆2
tf (n)(x)
dt
dx
yn−1
(n −1)!
1
y2α + ξ2αβ
!
dy
(75)
= w
z ∞
0
z ∞
−∞
z y
0
∆2
tf (n)(x −t)
dt
dx
yn−1
(n −1)!
1
y2α + ξ2αβ
!
dy
= w
z ∞
0
z y
0
z ∞
−∞
∆2
tf (n)(x −t)
dx
dt
yn−1
(n −1)!
1
y2α + ξ2αβ
!
dy
(76)
≤w
z ∞
0
z y
0
ω2(f (n), t)1dt
yn−1
(n −1)!
1
y2α + ξ2αβ
!
dy
≤wω2(f (n), ξ)1
z ∞
0
z y
0
1 + t
ξ
2
dt
!
yn−1
(n −1)!
1
y2α + ξ2αβ dy
!
= wω2(f (n), ξ)1
z ∞
0
1 + y
ξ
3
−1
!
ξ
3
yn−1
(n −1)!
1
y2α + ξ2αβ dy
!
=
γ (β) αξn
γ
1
2α
γ
β −
1
2α
(n −1)!3ω2(f (n), ξ)1
z ∞
0
(1 + y )3 −1
y n−1
1
(y 2α + 1)β dy
!
=
γ (β) αξn
γ
1
2α
γ
β −
1
2α
(n −1)!3ω2(f (n), ξ)1
z ∞
0
3y + 3y 2 + y 3
y n−1
1
(y 2α + 1)β dy
!
=
γ (β) αξn
γ
1
2α
γ
β −
1
2α
(n −1)!3ω2(f (n), ξ)1
z ∞
0
3y n + 3y n+1 + y n+2
1
(y 2α + 1)β dy
!
=
1
6γ
1
2α
γ
β −
1
2α
(n −1)!
3γ
n + 1
2α
γ
β −n + 1
2α
+3γ
n + 2
2α
γ
β −n + 2
2α
+ γ
n + 3
2α
γ
β −n + 3
2α
ω2(f (n), ξ)1ξn.
(77)
we have proved (74).
■
the related case here of n = 0 comes next.
proposition 3. let p, q > 1 such that \frac{1}{p} + \frac{1}{q} = 1, α ∈ n, β > \frac{1}{α}(2 + \frac{1}{p}) and the rest as above. then
∥m_ξ(f) − f∥_p \le \frac{ ρ^{1/p} Γ(β) α^{1/p} Γ(\frac{qβ}{2} − \frac{1}{2α})^{1/q} }{ 2^{1/q} Γ(\frac{1}{2α})^{1/p} Γ(β − \frac{1}{2α}) Γ(\frac{qβ}{2})^{1/q} } ω_2(f, ξ)_p,    (78)
where
0 < ρ := \int_0^{∞} (1 + y)^{2p} \frac{1}{(y^{2α} + 1)^{βp/2}} dy < ∞.    (79)
hence as ξ → 0 we obtain m_ξ → i in the l_p norm, p > 1.
proof. from (55) we get
|mξ(f; x) −f(x)|p ≤w p
z ∞
0
̃
∆2
yf(x)
1
y2α + ξ2αβ dy
!p
.
(80)
we then estimate
z ∞
−∞
|mξ(f; x) −f(x)|pdx ≤w p
z ∞
−∞
z ∞
0
̃
∆2
yf(x)
1
y2α + ξ2αβ dy
!p
dx
= w p
z ∞
−∞
z ∞
0
̃
∆2
yf(x)
1
y2α + ξ2αβ/2
1
y2α + ξ2αβ/2 dy
p
dx
≤w p
z ∞
−∞
z ∞
0
̃
∆2
yf(x)
1
y2α + ξ2αβ/2
p
dy
1
p
z ∞
0
1
y2α + ξ2αβ/2
q
dy
1
q
p
dx
= w p
z ∞
−∞
z ∞
0
̃
∆2
yf(x)
p
1
y2α + ξ2αβp/2 dy
z ∞
0
1
y2α + ξ2αβq/2 dy
p
q
dx
= w p
z ∞
0
z ∞
−∞
̃
∆2
yf(x)
p
1
y2α + ξ2αβp/2 dx
dy
γ
1
2α
γ
qβ
2 −
1
2α
2γ
qβ
2
αξqαβ−1
p
q
=
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q
*
z ∞
0
z ∞
−∞
∆2
yf(x −y)
p dx
1
y2α + ξ2αβp/2 dy
≤
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q
z ∞
0
ω2(f, y)p
p
1
y2α + ξ2αβp/2 dy
≤
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q ω2(f, ξ)p
p
z ∞
0
1 + y
ξ
2p
1
y2α + ξ2αβp/2 dy
=
[γ (β)]p αξαβp−1γ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q ω2(f, ξ)p
p
z ∞
0
(1 + y )2p
1
(y 2α + 1)βp/2 ξαβp ξdy
=
[γ (β)]p αγ
qβ
2 −
1
2α
p/q
2
p
q γ
1
2α
γ
β −
1
2α
p γ
qβ
2
p/q ω2(f, ξ)p
p
z ∞
0
(1 + y)2p
1
(y2α + 1)βp/2 dy.
(81)
the proof of (78) is now completed.
■
also we give
proposition 4. for α ∈ n, β > \frac{3}{2α}, it holds
∥m_ξ f − f∥_1 \le \left[ \frac{1}{2} + \frac{ Γ(\frac{1}{α}) Γ(β − \frac{1}{α}) }{ Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α}) } + \frac{ Γ(\frac{3}{2α}) Γ(β − \frac{3}{2α}) }{ 2 Γ(\frac{1}{2α}) Γ(β − \frac{1}{2α}) } \right] ω_2(f, ξ)_1.    (82)
hence as ξ → 0 we get m_ξ → i in the l_1 norm.
proof. from (55) we have
|mξ(f; x) −f(x)| ≤w
z ∞
0
̃
∆2
yf(x)
1
y2α + ξ2αβ dy
!
.
(83)
hence we get
z ∞
−∞
|mξ(f; x) −f(x)|dx ≤w
z ∞
−∞
z ∞
0
| ̃
∆2
yf(x)|
1
y2α + ξ2αβ dy
!
dx
= w
z ∞
0
z ∞
−∞
| ̃
∆2
yf(x)|dx
1
y2α + ξ2αβ dy
= w
z ∞
0
z ∞
−∞
|∆2
yf(x −y)|dx
1
y2α + ξ2αβ dy
= w
z ∞
0
z ∞
−∞
|∆2
yf(x)|dx
1
y2α + ξ2αβ dy
≤w
z ∞
0
ω2(f, y)1
1
y2α + ξ2αβ dy
george a. anastassiou, razvan a. mezei
18
≤wω2(f, ξ)1
z ∞
0
1 + y
ξ
2
1
y2α + ξ2αβ dy
= wω2(f, ξ)1
z ∞
0
(1 + x)2
1
(x2α + 1)β ξ2αβ ξdx
=
γ (β) α
γ
1
2α
γ
β −
1
2α
ω2(f, ξ)1
z ∞
0
(1 + x)2
1
(x2α + 1)β dx
=
"
1
2 + γ
1
α
γ
β −1
α
γ
1
2α
γ
β −
1
2α
+ γ
3
2α
γ
β −
3
2α
2γ
1
2α
γ
β −
1
2α
#
ω2(f, ξ)1.
(84)
we have established (82).
■
references
[1] g.a. anastassiou, rate of convergence of non-positive linear convolution type operators. a
sharp inequality, j. math. anal. and appl., 142 (1989), 441–451.
[2] g.a. anastassiou, sharp inequalities for convolution type operators, journal of approximation
theory, 58 (1989), 259–266.
[3] g.a. anastassiou, moments in probability and approximation theory, pitman research notes
in math., vol. 287, longman sci. & tech., harlow, u.k., 1993.
[4] g.a. anastassiou, quantitative approximations, chapman & hall/crc, boca raton, new
york, 2001.
[5] g.a. anastassiou, basic convergence with rates of smooth picard singular integral operators,
j. of computational analysis and applications,vol.8, no.4 (2006), 313-334.
[6] g.a. anastassiou, lp convergence with rates of smooth picard singular operators, differential
& difference equations and applications, hindawi publ. corp., new york, (2006), 31–45.
[7] g.a. anastassiou and s. gal, convergence of generalized singular integrals to the unit, uni-
variate case, math. inequalities & applications, 3, no. 4 (2000), 511–518.
[8] g.a. anastassiou and s. gal, convergence of generalized singular integrals to the unit, mul-
tivariate case, applied math.
rev., vol.
1, world sci.
publ.
co., singapore, 2000, pp.
1–8.
[9] g.a. anastassiou and r. mezei, lp convergence with rates of smooth gauss-weierstrass
singular operators, nonlinear studies, accepted 2008.
[10] g.a. anastassiou and r. a. mezei, global smoothness and uniform convergence of smooth
poisson-cauchy type singular operators, submitted 2009.
[11] g.a. anastassiou and r. a. mezei, a voronovskaya type theorem for poisson-cauchy type
singular operators, submitted 2009.
[12] r.a. devore and g.g. lorentz, constructive approximation, springer-verlag, vol.
303,
berlin, new york, 1993.
[13] s.g. gal, remark on the degree of approximation of continuous functions by singular integrals,
math. nachr., 164 (1993), 197–199.
[14] s.g. gal, degree of approximation of continuous functions by some singular integrals, rev. anal. numér. théor. approx., (cluj), tome xxvii, no. 2 (1998), 251–261.
[15] r.n. mohapatra and r.s. rodriguez, on the rate of convergence of singular integrals for hölder continuous functions, math. nachr. 149 (1990), 117–124.
[16] d. zwillinger, crc standard mathematical tables and formulae, 30th edition, chapman &
hall/crc, boca raton, 1995.
|
0911.1707 | a dynamic vulnerability map to assess the risk of road network traffic
utilization | le havre agglomeration (codah) includes 16 establishments classified seveso
with high threshold. in the literature, we construct vulnerability maps to help
decision makers assess the risk. such approaches remain static and do take into
account the population displacement in the estimation of the vulnerability. we
propose a decision making tool based on a dynamic vulnerability map to evaluate
the difficulty of evacuation in the different sectors of codah. we use a
geographic information system (gis) to visualize the map which evolves with the
road traffic state through a detection of communities in large graphs
algorithm.
| introduction
the population of the seine estuary is exposed to several types of natural and industrial hazards. it is included in the
drainage basin of the "lézarde" and is also exposed to significant technological risks. the modeling and assessment of
the danger is useful when it intersects with the exposed stakes. the most important factor is people. recent events have
shown that our agglomerations are vulnerable in front of emergency situations. the examination of impacted
populations remains a difficult exercise. in this context, the major risk management direction team (dirm) of le
havre agglomeration (codah) has developed a model of spatial and temporal population exposed allocation pret-
resse; the scale is the building (bourcier and mallet 2006); thus by distinguishing their day and night occupation. the
model was able to locate people during the day both in their workplace and their residence (the unemployed and
retirees). although the model was able to locate the diurnal and nocturnal population, it remains static because it does
not take into account the daily movement of people and the road network utilization.
for a better evacuation of people in the case of a major risk, we need to know the state of road traffic to determine how
to allocate the vehicles on the road network and model the movement of these vehicles. in fact, the panic effect of some
people can lead to accidents and traffic jams, which may be too grievous with a danger that spreads quickly. the panic
generally results from the lack of coordination and dialogue between individuals (provitolo, 2007).
in the literature, several models were developed to calculate a score of the vulnerability related to the road network
utilization. this score may depend on social, biophysical, demographical or other aspects. most of these models adopt a
pessimistic approach to calculate this vulnerability: this case is met when a group of people in a hazardous area decide
all to take the same route to evacuate this area, which unfortunately happens quite often in the real world evacuation
situations. although it helps decision-makers to estimate the risk by a census vulnerability map, this approach remains
static and does not take into account the evolution of the road network traffic.
in this paper, we have to simplify the representation of the population displacement, which is a complex phenomenon.
we also propose a dynamic and pessimistic approach related to the access to the road network. to this end, we model
the road network by a dynamic graph (the dynamics is due to the traffic evolution). a simple model based on traffic
flow will also be proposed. then, we apply a self-organization algorithm to detect communities on the graph belonging
to the collective intelligence algorithms. the algorithm allows us to define the different vulnerable neighborhoods of the
agglomeration in the case of an evacuation due to a potential danger, while taking into account the evolution of the road
network traffic. the result of this algorithm will be visualized into a gis on a dynamic vulnerability map which
categorizes various sectors depending on the difficulty of access to the road network. the map will help decision-
makers in a better estimation of risk in the communes of the codah. it will enrich pret-ress static model
developed at the codah, taking into account the mobility of the population.
3 directive seveso is an european directive, it lays down to the states to identify potential dangerous site. it intends to prevent
major accidents involving dangerous substances and limit their consequences for man and the environment, with a view to ensuring
high levels of protection throughout the community.
2 vulnerability assessment approaches
traditional methods of conception and evaluation of the population at risk do not sometimes treat the behavioral
evacuee's response (e.g. initial response to an evacuation, travel speed, family interactions / group, and so on.); they
describe prescriptive rules as the travel distance. these traditional methods are not very sensitive to human behavior for
different emergency scenarios. the computerized models offer the potential to evaluate the evacuation of a
neighborhood in emergency situations and overcome these limitations (castel 2006).
recently, some interesting applications have been developed by including the population dynamics, the models of urban
growth patterns and land use.
for computer modelers, this integration provides the ability to have computing entities as agents that are linked to real
geographical locations. for gis users, it provides the ability to model the emergence of phenomena by various
interactions of agents in time and space by using a gis (najlis and north 2004).
many researchers have emphasized the need to create models of based vector multi agent systems (mas), which may
require topological data structure provided in gis (parker 2004). in this way, one can represent an abm in which may
coexist n levels of organizations with several classes of agents (e.g. level 1: individuals or companies, level 2 and 3:
agents, cities, communities. . .) (daude 2005).
in (cutter et al 2000), the author presents a method to spatially estimate the vulnerability and treats the biophysical and
social aspects (access to resources, people with evacuation special needs, people with reduced mobility ...).
several layers are created into the gis (a layer by a danger), and all these layers are combined into one composed of
intersecting polygons to build a generic vulnerability map. to complete, it was necessary to take into account the
infrastructure and various possible emergency exits. so, a new map has been constructed and a new layer has been
incorporated. this work has been applied to the george town canton in which we find various natural and industrial
risks, and where there are different types of people.
in the neighborhood evacuation cases on a micro scale, a number of studies based on micro simulation have been
developed. in their paper (church and cova 2000), the authors presented a model to estimate the necessary time to
evacuate a neighborhood according to the effective of the population, number of vehicles, roads capacity and number of
vehicles per minute. the model is based on the optimization in order to find the critical area around a point at a
potential danger in a pessimistic way. this model has been coupled with a gis (arcinfo) to visualize the results
(identify evacuation plans) and construct an evacuation vulnerability map for the city (santa barbara).
cova and church (1997) opened the way on the study based on geographic information systems to evacuate people.
their study identified the communities that may face transport difficulties during an evacuation. research has modeled
the population by lane occupation during an evacuation emergency using the city of santa barbara.
an optimization based model (graph partitioning problem) was realized to find the neighborhood that causes the highest
vulnerability around each node in the graph and a vulnerability map around nodes in the city was constructed. a
constructive heuristic has been used to calculate the best cluster around each node. this heuristic was developed in c
and the result was displayed on a map (with arcinfo).
nevertheless, in this approach, we predefine the maximum number of nodes in a neighborhood, which may not always
be realistic and does not take into account the traffic evolution during the calculation of critical neighborhoods. so, the
vulnerable neighborhoods don't evolve according to traffic state.
in our work, we try to build a dynamic vulnerability map evolving with the traffic dynamics, in which the number of nodes
in a critical neighborhood is not predetermined and can change depending on the traffic state.
3 problematic
in this paper, the vulnerability is related to the access to the road network. we are persuaded that an accident on the
network may cause traffic jams and therefore serious problems. so, it is important to have a pessimistic approach which
takes into account the worst behavior of evacuees during a disaster (e.g. all individuals evacuate by taking the same exit
route).
to address this vulnerability, we have to finely represent the population and the dynamic state of road traffic. in pret-
resse model developed within the major risks management team of codah, we have ventilated the day / night
population at the buildings. the model was able to locate people during the day both in their workplace and their
residence (the unemployed and retirees). it has been estimated that people will be in their residence during the night. a
displacement survey was also realized in codah agglomeration and will be included in our work. pret-ress will be
enriched by our model to dynamically assess the vulnerability related to the road traffic evolution.
4 dynamic model
4.1
system architecture
the system consists of two related modules. the first one is dedicated to simulate the flow and apply the communities
detection algorithm on the graph. the second one allows visualizing the result into a gis. the architecture is illustrated
in the following figure.
figure 1: system architecture
the simulation module contains three components:
- the dynamic graph extracted from the road network layer and detailed in the following section.
- the flow management component consists of vehicles flow simulator applied on the graph.
- the communities' detection component, detailed in section 5. it takes the extracted graph and the current flow as
an input and returns the formed communities according to the current state of road traffic.
the visualization module consists of the road network layer integrated into the gis. this module communicates with
the simulation module: the graph is constructed from this module, which in turn get the simulation result and visualize
it as a dynamic vulnerability map.
4.2
environment modeling
the road network is integrated as a layer in the geographic information system (gis). from this layer, we extract the
data by using the open source java gis toolkit geotools. this toolkit provides several methods to manipulate geospatial
data and implements open geospatial consortium (ogc) specifications, so we can read and write esri shapefile
format. once the necessary road network data extracted, we use the graphstream tool developed within litis
laboratory of le havre to construct a graph. this tool is designed for modeling; processing and visualizing graphs.
the data extracted from network layer contains the roads circulation direction, roads id, roads type, their lengths and
geometry.
the extracted multigraph g (t) = (v (t), e (t)) represents the road network where v (t) is the set of nodes and e (t) the
set of arcs. we deal with a multigraph because there was sometimes more than one oriented arc in the same direction
between two adjacent nodes due to multiple routes between two points in le havre road network. graphstream
facilitates this task because it is adapted to model and visualize multigraphs. in the constructed multigraph:
- the nodes represent roads intersections;
- the arcs represent the roads taken by vehicles;
- the weight on each arc represents the needed time to cross this arc, depending on the current load of the traffic;
- dynamic aspect relates to the weights of the arcs, which can evolve in time, according to the evolution of the fluidity
of circulation;
we have also constructed a voronoi tessellation (thiessen polygon) around nodes and projected the population in
buildings on these nodes. the population in buildings is extracted from pret-ress model.
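the implementation described here relies on the java geotools/graphstream toolchain; as an illustrative stand-in only, a python sketch with networkx of the same multigraph structure (the record field names such as length_m, capacity or free_speed are assumptions, not the actual shapefile schema):

import networkx as nx

def build_road_multigraph(road_segments):
    """directed multigraph of the road network: nodes are intersections, arcs are
    roads, and the arc weight is the crossing time under the current load."""
    g = nx.MultiDiGraph()
    for seg in road_segments:
        free_time = seg["length_m"] / seg["free_speed"]
        g.add_edge(seg["from_node"], seg["to_node"], key=seg["road_id"],
                   capacity=seg["capacity"], load=0.0,
                   free_time=free_time, weight=free_time)
    return g

def update_weights(g, alpha=1.0):
    """dynamic aspect: the crossing time grows with the load/capacity ratio."""
    for _, _, _, data in g.edges(keys=True, data=True):
        data["weight"] = data["free_time"] * (1.0 + alpha * data["load"] / data["capacity"])
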
4.3
traffic management
for a better evacuation of people in a major risk situation, we need to know the state of the road network to determine
how to allocate the vehicles on this network and to model the movement of these vehicles. different types of models
can be adopted:
- the microscopic model details the behavior of each individual vehicle by representing interactions (modeled by a car
following model) with other vehicles and in general by using a spatialization. we can extend the model by adding a
regulation model with priorities rules, traffic lights... microscopic models may be applied on crowds movement as in
boids collective approaches (reynolds 1987). in our problem, a microscopic model is used on the scale of a sector or a
neighborhood. it has the advantage to model vehicle behavior in an evacuation of a neighborhood, people panic,
interactions between vehicles, accidents...
-the macroscopic model is based on the analogy between vehicular traffic and the fluid flow within a canal. it allows us
to visualize the flow on the roads rather than individual vehicles. it is used at many sectors or the entire city scale.
the hybrid model allows coupling the two types of dynamics flow models within the same simulation. several works
have already borrowed this direction (hennecke et al. 2000, bourrel and henn 2002, magne et al. 2000). however, this
approach is relatively new and very few have adopted it to our knowledge (hman et al. 2006).
in risk context, the use of a hybrid model is very important especially when dealing with large volume of data: changing
the scale from micro to macro in a region where we haven't a crisis situation (everything is normal) allows to
economize the computation and the change from macro to micro in a critical situation allows to zoom and detect the
behaviors and interactions between entities in danger.
in this paper, we used a simple macroscopic flow model:
- a set of car flows moving from one arc to an adjacent one.
- the arcs have a limited vehicle capacity.
- a flow can be split, and two or more flows can merge on a node.
- traffic jams may appear in certain places of the road network; those places will be more vulnerable than others.
we have adopted a macroscopic model in which flows circulate normally (no accidents) because the goal now is to
establish a dynamic pessimistic vulnerability map which is not always the case in the real world (90% of people takes
an exit route and the rest takes another route for example). hence, it is important to have in the near future a micro
approach with a change of scale (from micro to macro and vice versa during the simulation) to simulate scenarios of
danger in real time (accidents, behavior of drivers, vehicle interactions ...), a study on which we are currently working.
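a naive python sketch of one macroscopic update consistent with the flow rules listed above (flows move to adjacent arcs, bounded by their remaining capacity; what cannot move stays in place, which materializes a traffic jam); this is only an illustration, not the simulator used here:

def macroscopic_step(g, turn_fraction=1.0):
    """one synchronous flow update on the road multigraph built above."""
    moves = []
    for u, v, k, data in g.edges(keys=True, data=True):
        outgoing = list(g.out_edges(v, keys=True, data=True))
        if not outgoing or data["load"] == 0.0:
            continue
        share = data["load"] * turn_fraction / len(outgoing)   # split the flow
        for _, w, kk, d in outgoing:
            room = max(d["capacity"] - d["load"], 0.0)         # capacity limit
            moves.append(((u, v, k), (v, w, kk), min(share, room)))
    for src, dst, q in moves:                                  # apply all moves
        g.edges[src]["load"] -= q
        g.edges[dst]["load"] += q
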
4.4
complex system
a complex system is characterized by numerous entities of the same or different nature that interact in a non-trivial way
(non-linear, feedback loop ...); the global emergence of new properties not seen in these entities: properties or evolution
cannot be predicted by simple calculations.
codah can be seen as a complex system in which the environment may influence on evacuation by imposing some
rules which may reduce the flow fluidity (existence or not of safe refuge and emergency exits, routes traffic direction,
priorities, traffic lights...) and vice versa (a fire or an accident may cause damages and change the environment). this
system is in perpetual evolution; it is far from equilibrium dynamics with an absence of any global control. some
organizations may appear or disappear according to different interactions. entities as vehicles and pedestrians interact
between them and with environment.
5 communities detection algorithm
our aim is to identify communities in graphs, i.e. dense areas strongly linked to each other and more weakly linked to
the outside world.
the concept of communities in a graph is difficult to explain formally. it can be seen as a set of nodes whose internal
connections density is higher than the outside density without defining formal threshold (pons 2005). the goal is to find
a partition of nodes in communities according to a certain predefined criteria without fixing the number of such
communities or the number of nodes in a community.
radicchi (radicchi et al., 2004) proposes two possible definitions to quantify a community definition:
• community c in a strong sense: d_c^{in}(i) > d_c^{out}(i), ∀ i ∈ c. a node belongs to a strong community if it has more
connections within the community than outside.
• community c in a weak sense: \sum_{i ∈ c} d_c^{in}(i) > \sum_{i ∈ c} d_c^{out}(i). a community is qualified as weak if the sum of all
degrees inside is more important than the sum of degrees towards the rest of the graph.
d_c^{in}(i) is the number of edges from a node i belonging to the community c towards the nodes of the same
community.
d_c^{out}(i) is the number of edges from a node i belonging to the community c towards the nodes of other communities.
finding organizations is a new field of research (albert and barabasi, 2002; newman, 2004a). interesting works were
developed in the literature on the detection of structure in large communities in graphs (clauset et al. 2004, newman
2004a; b, pons and latapy 2006). in our problem, we look for a self-organization in networks with an evolutionary
algorithm close to the detection of communities in large graphs and belonging to collective intelligence algorithms. this
algorithm is based on the spread of forces in graphs.
figure 2: communities detection in a graph
the algorithm principle is to color the graph using pheromones and it uses several colonies of ants, each of a distinct
color. each colony will collaborate to colonize zones, whereas colonies compete to maintain their own colored zone
(see figure 2). solutions will therefore emerge and be maintained by the ant behavior. the solutions will be the color of
each vertex in the graph. indeed, colored pheromones are deposited by ants on edges. the color of a vertex is obtained
from the color having the largest proportion of pheromones on all incident edges.
we have an interaction between each pair of adjacent nodes according to the attraction force that exists between
them. this force depends in our case on n/c, where n is the number of vehicles on the arc between two neighboring nodes
and c represents the vehicle capacity of the arc. this ratio was chosen because, in each community, we will have a large
number of vehicles which all decide to exit through the same route in the case of a potential danger; this responds well
to one of the purposes listed at the beginning, to have a pessimistic approach in the calculation of vulnerability. the
algorithm has the advantage of not allowing the breaking of a link between two adjacent nodes, so as to maintain the structure of
the road network. when the traffic evolves, the algorithm detects it, and communities can change or disappear as a
result of the local forces that change between the nodes.
at each simulation time step, the flow on the arcs change following traffic conditions and the attraction forces may
change also. once communities are formed on the graph, the result will be transmitted to the road network layer into the
gis to be visualized.
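the colored-pheromone colonies themselves are not reproduced here; as a rough illustration of how the n/c attraction force can drive a community labelling that follows the traffic state, a python sketch of a force-weighted label propagation over the same multigraph (an approximation of the idea, not the authors' algorithm):

def relabel_communities(g, labels, iterations=10):
    """each node repeatedly adopts the label gathering the largest total
    attraction force n/c on its incident arcs; labels maps node -> community."""
    for _ in range(iterations):
        for node in g.nodes:
            pull = {}
            incident = list(g.in_edges(node, keys=True, data=True)) + \
                       list(g.out_edges(node, keys=True, data=True))
            for u, v, _, data in incident:
                neighbor = u if v == node else v
                force = data["load"] / data["capacity"]        # attraction force n/c
                pull[labels[neighbor]] = pull.get(labels[neighbor], 0.0) + force
            if pull:
                labels[node] = max(pull, key=pull.get)
    return labels
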
6 conclusion
in this paper, we have proposed a decision making tool to assess the danger depending on the road network use by the
vehicles. this tool enables decision makers to visualize, on a geographic information system, a dynamic vulnerability
map linked to the difficulty of evacuating the various streets in the metropolitan area of le havre agglomeration. we
simulated the road network traffic by using a simple model of vehicles flow. a communities detection algorithm in the
large graphs was adopted. it enabled us to form communities in a graph thanks to local force propagation rules between
adjacent nodes. the communities evolve according to the current state of road network traffic. the result of the
evolution of communities is visualized by using a gis.
the adopted approach allowed us to estimate the risk due to the use of the road network by vehicles and categorize le
havre agglomeration areas by their vulnerability. we will complete our work by using a micro model of traffic with a
possibility of change from micro to macro and vice versa when necessary, and this depending on the situation.
7 references
[1] aaron clauset, m. e. j. newman and cristopher moore (2004). finding community structure in very large
networks. phys. rev. e 70, 066111.
[2] a. hennecke, m. treiber and d. helbing (2000). macroscopic simulation of open systems and micro-macro link. in
m. schreckenberg d. helbing and h. j. herrmann, editor, traffic and granular flow '99 : social, traffic, and
granular dynamics, pages 383–388. springer.
[3] castel, c.j.e. (2006). developing a prototype agent-based pedestrian evacuation model to explore the evacuation of
king's cross st pancras underground station, centre for advanced spatial analysis (university college london): working
paper 108, london.
[4] church, r.l. and cova, t.j. (2000). mapping evacuation risk on transportation networks using a spatial
optimization model. transportation research part c: emerging technologies, 8(1-6): 321-336.
[5] cova, t.j. and church, r.l. (1997) modelling community evacuation vulnerability using gis. international journal
of geographical information science, 11(8): 763-784.
[6] cutter, s.l., j.t. mitchell and m.s. scott (2000). revealing the vulnerability of people and places: a case study of
georgetown county, south carolina. annals of the association of american geographers 90 (4): 713-737.
[7] e. bourrel and v. henn (2002). mixing micro and macro representation of traffic flow: a first theoretical step. in
proceeding of the 9th meeting of the euro working group on transportation.
[8] e. daudé (2005). systèmes multi-agents pour la simulation en géographie : vers une géographie artificielle, chapter
13, pages 355–382. in y. guermont (dir.), modélisation en géographie : déterminismes et complexités, hermès, paris.
[9] f. radicchi, c. castellano, f. cecconi, v. loreto and d. parisi. defining and identifying communities in networks.
in proceedings of the national academy of sciences, volume 101, pages 2658–2663, 2004.
[10] graphstream : un outil de modélisation et de visualisation de graphes dynamiques. distribué sous licence libre
(gpl).http://graphstream.sourceforge.net.
[11] j.c. bourcier and p. mallet (2006). allocation spatio-temporelle de la population exposée aux risques majeurs.
contribution à l'expologie sur le bassin de risques majeurs de l'estuaire de la seine: modèle pret-resse. revue
internationale de géomatique, 16(10) : 457-478.
[12] l. magne, s. rabut and j. f. gabard (2000). toward an hybrid macro and micro traffic flow simulation model. in
informs spring 2000 meeting.
[13] m. e. j. newman (2004). detecting community structure in networks, eur. phys. j. b 38, 321–330.
[14] m. e. j. newman (2004). fast algorithm for detecting community structure in networks. phys. rev. e 69, 066133.
[15] m.s. el hman, h. abouaäissa, d. jolly and a. benasser (2006). simulation hybride de flux de trafic basée sur les
systèmes multi-agents. in 6e conférence francophone de modélisation et simulation - mosim'06.
[16] najlis, r. and m. j. north (2004). repast for gis. proceedings of agent 2004: social dynamics: interaction,
reflexivity and emergence, university of chicago and argonne national laboratory, il, usa.
[17] d. c. parker (2004). integration of geographic information systems and agent-based models of land use :
challenges and prospects. in maguire, d., j. m. f., goodchild,and m., batty, (eds), gis, spatial analysis and
modelling, redlands, ca: esri. press.
[18] pascal pons (2005). détection de structures de communautés dans les grands réseaux d'interactions. septièmes
rencontres francophones sur les aspects algorithmiques des télécommunications. giens, france.
[19] pascal pons and matthieu latapy(2006). computing communities in large networks using random walks. journal
of graph algorithms and applications. vol. 10, no. 2, pp. 191-218.
[20] d. provitolo (2007). a proposition for a classification of the catastrophe systems based on complexity criteria. in
european conference complex systems-epnacs'07, emergent properties in natural and artificial complex systems,
dresden, pages 93–106.
[21] reynolds c. (1987). flocks, herds, and schools: a distributed behavioral model. computer graphics.
siggraph 87 conference, vol. 21(4), 25–34.
|
0911.1708 | different goals in multiscale simulations and how to reach them | in this paper we sum up our works on multiscale programs, mainly simulations.
we first describe what multiscaling is about and how it helps to perceive a signal
against background noise in a flow of data, for example, either for direct perception
by a user or for further use by another program. we then give three examples of
multiscale techniques we used in the past: maintaining a summary, using an
environmental marker that introduces a history into the data, and finally using
knowledge of the behavior of the different scales to handle them at the same time.
| introduction: what this paper is about, and what it's not
although we have delved into different applications and application domains, the
computer science research goal of our team has remained centered on the same
subject for years. it can be expressed in different ways that we feel are, if not
exactly equivalent, at least closely connected. it can be defined as managing
multiple scales in a simulation. it also consists in handling emergent structures
in a simulation. it can often also be seen as dynamic heuristic clustering of
dynamic data (we will of course describe in more detail later what we mean by
all this). this paper is about this theme, about why we think
it is of interest and what we've done so far in this direction. it is therefore akin
to a state-of-the-art article, except more centered on what we did. we will allude
to what others have done, but the focus of the article is to present our techniques
and what we're trying to do, like most articles, and not to give an objective
description of the whole field, as the variety of application examples might
suggest: we're sticking to the same computer science principles throughout.
we're taking one step back from our works to contemplate them all, and not the
three steps which would be necessary to encompass the whole domain, as that
would take us beyond the scope of this book.
2 perception: filtering to make decisions
i look at a fluid flow simulation but all i'm interested in is where does the
turbulence happen, in a case where i couldn't know before the simulation
[tranouez 2005a]. i use a multi-participant communication system in a crisis
management piece of software and i would like to know what are the main in-
terests of each communicant based on what they are saying [lesage 1999]. i use
an individual-based model (ibm) of different fish species but i'm interested
in the evolution of the populations, not the individual fish [prevost 2004]. i
use a traffic simulation with thousands of cars and a detailed town but what
i want to know is where the traffic jams are (coming soon).
in all those examples, i use a piece of software which produces huge
amounts of data but i'm interested in phenomena of a different scale than
the raw basic components. what we aim at is helping the user of the program
to reach what he is interested in, be this user a human (clarification of the
representation) or another program (automatic decision making). although we're
trying to stay general in this part, we focus on our past experience of what we
actually managed to do, as described in "some techniques to make these
observations in a time scale comparable to the observed"; this is not gratuitous
philosophy.
2.1 clarification of the representation
this first step of our work intends to extract the patterns on the carpet from
its threads [tranouez 1984]. furthermore, we want it to be done in "real (pro-
gram) time", meaning not a posteriori once the program is ended by examining
its traces [servat 1998], and sticking as close as possible to the under layer,
the one pumping out dynamic basic data. we don't want the discovery of our
structures to be of a greater time scale than a step of the program it works
upon.
how to detect these structures? for each problem the structure must be
analyzed, to understand what makes it stand out for the observer. this im-
plies knowing the observer purpose, so as to characterize the structure. the
answers are problem specific, nevertheless rules seem to appear.
in many situations, the structures are groups of more basic entities, which
then leads to try to fathom what makes it a group, what is its inside, its
outside, its frontier, and what makes them so.
quite often in the situation we dealt with, the groups members share some
common characteristics. the problem in that case belongs to a subgenre of
clustering, where the data changes all the time and the clusters evolve with
them, they are not computed from scratch at each change.
the other structures we managed to isolate are groups of strongly com-
municating entities in object-oriented programs like multiagent simulations.
we then endeavored to manage these cliques.
in both cases, the detected structures are emphasized in the graphical rep-
resentation of the program. this clarification lets the user of the simulation
understand what happens in its midst. because modeling, and therefore
understanding, means clarifying and simplifying a multi-sided problem or
phenomenon in a chosen direction, our change of representation contributes to the
understanding of the operator. it is therefore also a necessary part of automating
the whole understanding, aiming for instance at computing an artificial deci-
sion making.
2.2 automatic decision making
just like the human user makes something of the emerging phenomena the
course of the program made evident, other programs can use the detected
organizations.
for example, in the crisis management communication program, the detected
favorite subject of interest of each communicant will be used as a filter for
future incoming communications, favoring the ones on connected
subjects. other examples are developed below, but the point is once the struc-
tures are detected and clearly identified, the program can use models it may
have of them to compute its future trajectory. it must be emphasized that at
this point the structures can themselves combine into groups and structures
of yet another scale, recursively. we're touching there an important compo-
nent of complex system [simon 1996]. we may hope the applications of this
principle to be numerous, such as robotics, where perceiving structures in vast
amounts of data relatively to a goal, and then being able to act upon these
accordingly is a necessity.
we're now going to develop these notions in examples coming from our
past works.
3 some techniques to make these observations in a time
scale comparable to the observed
the examples of handling dynamic organization we chose are taken from two
main applications, one of a simulation of a fluid flow, the other of the simula-
tion of a huge cluster of computed processes, distributed over a dynamic net-
work of computing resources, such as computers. the methods titled "main-
taining a summary of a simulation" and "reification: behavioral methods"
are theories from the hydromechanics simulation, while "traces of the past
help understand the present" refers to the computing resources management
simulation. we will first describe these two applications, so that an eventual
misunderstanding of what they are does not hinder later the clarity of our
real purpose, the analysis of multiscale handling methods.
in a part of a more general estuarine ecosystem simulation, we developed
a simulation of a fluid flow. this flow uses a particle model [leonard 1980],
and is described in details in [tranouez 2005a] or [tranouez 2005b]. the basic
idea is that each particle is a vorticity carrier, each interacting with all the
others following the biot-savart law. as fluid flows tend to organize themselves
into vortices, at all spatial scales from a few tens of angstroms to the atlantic
ocean, it is these vortices we tried to handle as the multiscale characteristic
of our simulation. the two methods we used are described below.
the other application, described in depth in [dutot 2005], is a step toward
automatic distribution of computing over computing resources in difficult con-
ditions, as:
• the resources we want to use can each appear and disappear, increase or
decrease in number.
• the computing distributed is composed of different object-oriented entities,
each probably a thread or a process, like in a multiagent system for example
(the system was originally imagined for the ecosystem simulation alluded to
above, and the entities would have been fish, plants, fluid vortices etc., each
acting, moving . . . )
furthermore, we want the distribution to follow two guidelines:
• as much of the resources as possible must be used,
• communications between the resources must be kept as low as possible, as is
desirable for example if the resources are computers and the communications
therefore happen over a network, bandwidth limited compared to the internals
of a computer.
this is the ultimate goal of this application, but the step we're interested
in today consists in a simulation of our communicating processes, and of a
program which, at the same time as the simulated entities act and communicate,
fig. 1. studies of water passing obstacles and falling by leonardo da vinci, c. 1508-
9. in codex leicester.
advises how they should be regrouped and to which computing resource they
should be allocated, so as to satisfy the two guidelines above.
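to make the two guidelines concrete, here is a minimal sketch of ours (not code from [dutot 2005]; the function and parameter names are hypothetical) of the cost that such a distribution advisor tries to keep low: a load-imbalance term plus the weight of the communications that cross resource boundaries.

def allocation_cost(assignment, comm_edges, n_resources, alpha=1.0):
    """score an entity-to-resource assignment: load imbalance plus cut communications.

    assignment: dict entity -> resource id in range(n_resources)
    comm_edges: iterable of (entity_a, entity_b, weight) communication links
    """
    loads = [0] * n_resources
    for resource in assignment.values():
        loads[resource] += 1
    mean_load = sum(loads) / n_resources
    imbalance = sum(abs(load - mean_load) for load in loads)
    # communications whose endpoints sit on different resources use the network
    cut = sum(weight for a, b, weight in comm_edges
              if assignment[a] != assignment[b])
    return alpha * imbalance + cut

the two terms pull in opposite directions, which is exactly the tradeoff discussed below in "traces of the past help understand the present".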
3.1 maintaining a summary of a simulation
the first method we would like to describe here relates to the fluid flow sim-
ulation. the hydrodynamic model we use is based on an important number
of interacting particles. each of these influences all the others, which makes
n^2 interactions, where n is the number of particles used. this makes a great
number of computations. luckily, the intensity of the influence is inversely pro-
portional to the square of the distance separating two particles. we therefore
use an approximation called fast multipoles method (fmm), which consists
in covering the simulation space with grids, of a density proportional to the
density of particles (see figure 2-a). the computation of the influence of its
colleagues over a given particle is then done exactly for the ones close enough,
and averaged on the grid for those further. all this is absolutely monoscale.
as the particles are vorticity carriers, it means that the more numerous
they are in a region of space, the more agitated the fluid they represent is.
we would therefore be interested in the structures built of close, dense par-
ticles, surrounded by sparser ones. a side effect of the grids of the fmm, is
that they help us do just that. it is not that this clustering is much easier on
the grids, it's above all that they are an order of magnitude less numerous,
and organized in a tree, which makes the group detection much faster than if
the algorithm were run on the particles themselves. furthermore, the step by
step management of the grids is not only cheap (it changes the constant of
the complexity of the particles movement method but not the order) but also
needed for the fmm.
we therefore detect structures on
• dynamic data (the particles' characteristics)
• with little computing added to the simulation,
which is what we aimed at.
the principle here is that through the grids we maintain a summary of
the simulation, upon which we can then run static-data algorithms, all this at
a cheap computing price.
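purely as an illustration of this principle (a sketch of ours, not the code of [tranouez 2005a]; cell size and density threshold are made-up parameters), one can bin the particles into the coarse cells the fmm already maintains and then run a cheap cluster detection on those cells instead of on the particles.

from collections import defaultdict

def grid_summary(particles, cell_size):
    """bin particles (x, y, vorticity) into coarse grid cells, as the fmm grids do."""
    cells = defaultdict(list)
    for x, y, w in particles:
        cells[(int(x // cell_size), int(y // cell_size))].append(w)
    return cells

def dense_cell_clusters(cells, min_count=10):
    """group neighbouring dense cells: cluster detection run on the summary,
    not on the particles themselves."""
    dense = {c for c, ws in cells.items() if len(ws) >= min_count}
    clusters, seen = [], set()
    for cell in dense:
        if cell in seen:
            continue
        stack, group = [cell], []
        while stack:
            cx, cy = stack.pop()
            if (cx, cy) in seen or (cx, cy) not in dense:
                continue
            seen.add((cx, cy))
            group.append((cx, cy))
            stack.extend([(cx + dx, cy + dy)
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)])
        clusters.append(group)
    return clusters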
3.2 traces of the past help understand the present
the second method relates to the detection of communication clusters inside
a distributed application. the applications we are interested in are composed
of a large number of object-oriented entities that execute in parallel, appear,
evolve and, sometimes, disappear. aside from some very regular applications,
entities often tend to communicate more with some than with others. for example,
in a simulation of an aquatic ecosystem, entities representing a species of fish
may stay together, interacting with one another, but flee predators. indeed,
organizations appear: groups of entities form. such simulations are a good ex-
ample of applications we intend to handle, where the number of entities is
often too large to compute a result in an acceptable time on one unique com-
puter.
to distribute these applications it would be interesting both to have ap-
proximately the same number of entities on each computing resource to bal-
ance the load, and to avoid as much as possible using the network, which
costs significantly more in terms of latency than the internals of a computer.
our goal is therefore to balance the load and minimize network communica-
tions. sadly, these criteria are conflicting, and we must find a tradeoff.
our method consists in the use of an ant metaphor. applications we use
are easily seen as a graph, which is a set of connected entities. we can map
a - each color corresponds to a detected aggregate
b - each color corresponds to a computing resource
fig. 2. detection of emergent structures in two applications with distinct methods
entities to vertices of the graph, and communications between these entities
to the edges of the graph. this graph will follow the evolution of the sim-
ulation. when an entity appears, a vertex appears in the graph; when a
communication is established between two entities, an edge appears
between the two corresponding vertices. we will use such a graph to repre-
sent the application, and will try to find clusters of highly communicating
entities in this graph by coloring it, assigning a color to each cluster. this
will allow us to identify clusters as a whole and use this information to assign
not entities, but at another scale, clusters to computing resources (figure 2-b).
for this, we use numerical ants that crawl the graph as well as their
pheromones, olfactory messages they drop, to mark clusters of entities. we
use several distinct colonies of ants, each of a distinct color, that drop colored
pheromones. each color corresponds to one of the computing resources at our
disposal. ants drop colored pheromones on edges of the graph when they cross
them. we mark a vertex as being of the color of the dominant pheromone on
each of its incident edges. the color indicates the computing resource where
the entity should run.
to ensure our ants color groups of highly communicating entities of the
same color to minimize communications, we use the collaboration between
ants: ants are attracted by pheromones of their own color, and attracted by
highly communicating edges. to ensure the load is balanced, that is to ensure
that the whole graph is not colored only in one color if ten colors are avail-
able, we use competition: ants are repelled by the pheromones of other colors.
pheromones in nature being olfactory molecules, they tend to evaporate.
ants must maintain them so they do not disappear. consequently, only the
interesting areas, zones where ants are attracted, are covered by pheromones
and maintained. when a zone becomes less interesting, ants leave it and
pheromone disappear. when an area becomes of a great interest, ants col-
onize it by laying down pheromones that attract more ants, and the process
self-amplifies.
we respect the metaphor here since it brings us the very interesting prop-
erty of handling the dynamics. indeed, our application continuously changes,
the graph that represents it follows this, and we want our method to be able to
discover new highly communicating clusters, while abandoning vertices that
are no more part of a cluster. as ants continuously crawl through the graph,
they maintain the pheromone color on the highly communicating clusters. if
entities and communications of the simulation appear or disappear, ants can
quickly adapt to the changes. colored pheromones on parts where a cluster
disappeared evaporate and ants colonize new clusters in a dynamic way. in-
deed, the application never changes completely all the time; it modifies itself
smoothly. ants lay down "traces" of pheromones and do not recompute the
color of each vertex at each step; they reuse the already dropped pheromone,
therefore continuously giving distribution advice at a small computing price
and adapting to the reconfigurations of the underlying application.
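a minimal sketch of the colored-pheromone idea follows (our own simplification, not the algorithm of [dutot 2005]; the number of ants, steps and evaporation rate are illustrative assumptions). ants of each color walk the communication graph, preferring heavy edges marked with their own color and avoiding other colors; pheromones evaporate, and each vertex finally takes the dominant color on its incident edges.

import random
from collections import defaultdict

def ant_coloring(edges, n_colors, n_ants=50, steps=200, evaporation=0.05):
    """edges: dict (u, v) -> communication weight (one orientation per undirected edge).
    returns vertex -> color, i.e. the distribution advice."""
    pheromone = defaultdict(float)                     # (edge, color) -> amount
    vertices = list({v for e in edges for v in e})
    neighbors = defaultdict(list)
    for (u, v), w in edges.items():
        neighbors[u].append((v, (u, v), w))
        neighbors[v].append((u, (u, v), w))
    ants = [(random.choice(vertices), a % n_colors) for a in range(n_ants)]
    for _ in range(steps):
        for key in list(pheromone):
            pheromone[key] *= (1.0 - evaporation)      # pheromones evaporate
        new_ants = []
        for position, color in ants:
            options = neighbors[position]
            if not options:
                new_ants.append((position, color))
                continue
            def score(opt):
                _, edge, weight = opt
                own = pheromone[(edge, color)]
                other = sum(pheromone[(edge, c)] for c in range(n_colors)) - own
                # attraction: heavy edges and own color; repulsion: other colors
                return weight * (1.0 + own) / (1.0 + other)
            weights = [score(o) for o in options]
            target, edge, _ = random.choices(options, weights=weights)[0]
            pheromone[(edge, color)] += 1.0            # drop colored pheromone
            new_ants.append((target, color))
        ants = new_ants
    coloring = {}
    for v in vertices:
        totals = [sum(pheromone[(e, c)] for _, e, _ in neighbors[v])
                  for c in range(n_colors)]
        coloring[v] = max(range(n_colors), key=lambda c: totals[c])
    return coloring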
3.3 reification: behavioral methods
this last example of our multiscale handling methods was also developed on
the fluid flow simulation (figure 3). once more, we want to detect structures
in a dynamic flow of data, without getting rid of the dynamicity by doing a
full computation on each step of the simulation. the idea here is doing the full
computation only once in a good while, and only relatively to the unknown
parts of our simulation.
fig. 3. fluid flow around an obstacle. on the left, the initial state. on the right, a
part of the flow, some steps later (the ellipses are vortices)
we begin with detecting vortices on the basic particles once. vortices will
be a rather elliptic set of close particles of the same rotation sense. we then
introduce a multiagent system of the vortices (figure 3-right). we have in-
deed a general knowledge of the way vortices behave. we know they move like
a big particle in our biot-savart model, and we model their structural stabil-
ity through social interactions with the surrounding basic particles, the other
vortices and the obstacles, through which they can grow, shrink or die (be dis-
sipated into particles). the details on this can be found in [tranouez 2005a].
later on, we occasionally make a full-blown vortex detection, but only on the
remaining basic particles, as the already detected vortices are managed by
the multiagent system.
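a minimal sketch of this "detect once, then delegate to agents" loop (ours; detect_vortices and the agent methods are placeholders, not the actual model of [tranouez 2005a]):

def simulation_step(particles, vortex_agents, detect_vortices, full_detection_period, step):
    """advance one step: vortex agents handle themselves; the expensive full detection
    is only run once in a while, and only on particles not already absorbed by an agent."""
    for agent in list(vortex_agents):
        agent.move()                          # behaves like a big particle (biot-savart)
        agent.interact(particles, vortex_agents)
        if agent.dissipated():                # social interactions may kill the vortex
            particles.extend(agent.release_particles())
            vortex_agents.remove(agent)
    if step % full_detection_period == 0:
        for vortex in detect_vortices(particles):
            vortex_agents.append(vortex)
            for p in vortex.absorbed_particles():
                particles.remove(p)
    return particles, vortex_agents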
in this case, we possess knowledge on the structures we want to detect,
and we use it to actually build the upper scale level of the simulation, which
at the same time lightens subsequent structure detection. we are definitely in
the category described in automatic decision making.
4 conclusion
our research group works on complex systems and focuses on the computer
representation of their hierarchical/holarchical characteristics [koestler 1978],
[simon 1996], [kay 2000]. we tried to illustrate that describing a problem at
different scales is a widespread practice, at least in the modeling and simulat-
ing community. we then presented some methods for handling the different
scales, with maintaining a summary, using an environmental marker introduc-
ing a history in the data and finally using knowledge on the behavior of the
different scales to handle them at the same time.
we now believe we start to have sound multiscale methods, and must
focus on the realism of the applications, to compare the sacrifice in details we
make when we model the upper levels rather than just heavily computing the
lower ones. we save time and lose precision, but what is this trade-off worth
precisely?
references
[dutot 2005] dutot, a. (2005) distribution dynamique adaptative à l'aide de mécanismes d'intelligence collective. phd thesis, le havre university.
[kay 2000] kay, j. (2000) ecosystems as self-organising holarchic open systems: narratives and the second law of thermodynamics. in jorgensen, s.e. and f. müller (eds.), handbook of ecosystems theories and management, lewis publishers.
[koestler 1978] koestler, a. (1978) janus. a summing up. vintage books, new york.
[leonard 1980] leonard, a. (1980) vortex methods for flow simulation, journal of computational physics, vol. 37, 289-335.
[lesage 1999] lesage, f.; cardon, a.; and p. tranouez (1999) a multiagent based prediction of the evolution of knowledge with multiple points of view, kaw'99.
[prevost 2004] prevost, g.; tranouez, p.; lerebourg, s.; bertelle, c. and d. olivier (2004) methodology for holarchic ecosystem model based on ontological tool. esm 2004, 164-171.
[servat 1998] servat, d.; perrier, e.; treuil, j.-p.; and a. drogoul (1998) when agents emerge from agents: introducing multi-scale viewpoints in multi-agent simulations. mabs 98, 183-198.
[simon 1996] simon, h. (1996) the sciences of the artificial (3rd edition). mit press.
[tranouez 1984] tranouez, pierre (1984) fascination et narration dans l'oeuvre romanesque de barbey d'aurevilly. doctorat d'état.
[tranouez 2005a] tranouez, p.; bertelle, c.; and d. olivier (2006) changing levels of description in a fluid flow simulation. in m.a. aziz-alaoui and c. bertelle (eds), "emergent properties in natural and artificial dynamical systems", understanding complex systems series, 87-99.
[tranouez 2005b] tranouez, p. (2005) contribution à la modélisation et à la prise en compte informatique de niveaux de descriptions multiples. application aux écosystèmes aquatiques (penicillo haere, nam scalas aufero), phd thesis, le havre university.
|
0911.1709 | on a conjecture of v. v. shchigolev | v. v. shchigolev has proven that over any infinite field k of characteristic
p>2, the t-space generated by g={x_1^p,x_1^px_2^p,...} is finitely based, which
answered a question raised by a. v. grishin. shchigolev went on to conjecture
that every infinite subset of g generates a finitely based t-space. in this
paper, we prove that shchigolev's conjecture is correct by showing that for
any field of characteristic p>2, the t-space generated by any subset
{x_1^px_2^p...x_{i_1}^p, x_1^px_2^p...x_{i_2}^p,...}, i_1<i_2<i_3<..., of g has
a t-space basis of size at most i_2-i_1+1.
| introduction
in [2] (and later in [3], the survey paper with v. v. shchigolev), a. v. grishin
proved that in the free associative algebra with countably infinite generating
set { x_1, x_2, . . . } over an infinite field of characteristic 2, the t-space generated
by the set { x_1^2, x_1^2 x_2^2, . . . } is not finitely based, and he raised the question as
to whether or not over a field of characteristic p > 2, the t-space generated
by { x_1^p, x_1^p x_2^p, . . . } is finitely based. this was resolved by v. v. shchigolev in
[4], wherein he proved that over an infinite field of characteristic p > 2, this t-
space is finitely based. shchigolev then raised the question in [4] as to whether
every infinite subset of { x_1^p, x_1^p x_2^p, . . . } generates a finitely based t-space. in
this paper, we prove that over an arbitrary field of characteristic p > 2, every
subset of { x_1^p, x_1^p x_2^p, . . . } generates a t-space that can be generated as a t-space
by finitely many elements, and we give an upper bound for the size of a minimal
generating set.
let p be a prime (not necessarily greater than 2) and let k denote an arbitrary
field of characteristic p. let x = { x_1, x_2, . . . } be a countably infinite set, and
let k_0⟨x⟩ denote the free associative k-algebra over the set x.
definition 1.1. for any positive integer d, let
s^{(d)} = s^{(d)}(x_1, x_2, \ldots, x_d) = \sum_{\sigma \in \sigma_d} \prod_{i=1}^{d} x_{\sigma(i)},
where \sigma_d is the symmetric group on d letters. then define s^{(d)}_1 = { s^{(d)} }^s, the
t-space generated by { s^{(d)} }, and for all n ≥ 1, s^{(d)}_{n+1} = (s^{(d)}_n s^{(d)}_1)^s.
let i : i_1 < i_2 < \cdots be a sequence of positive integers (finite or infinite),
and then for each n ≥ 1, let r^{(d)}_{n,i} = \sum_{j=1}^{n} s^{(d)}_{i_j}. when the sequence i is
understood, we shall usually write r^{(d)}_n instead of r^{(d)}_{n,i}. finally, let r^{(d)}_{∞,i} (even
if the sequence is finite) denote the t-space generated by { s^{(d)}_i | i ∈ i }. we
shall prove that r^{(d)}_{∞,i} has a t-space basis of size at most i_2 − i_1 + 1.
definition 1.2. let h_1 = { x_1^p }^s, and for each n ≥ 1, let h_{n+1} = (h_n h_1)^s.
then for any positive integer n, let l_{n,i} = \sum_{j=1}^{n} h_{i_j}, and let l_{∞,i} denote
the t-space generated by { h_i | i ∈ i }. we prove that l_{∞,i} is finitely generated
as a t-space, with a t-space basis of size at most i_2 − i_1 + 1. in particular, this
proves that shchigolev's conjecture is valid.
2 preliminaries
in this section, p denotes an arbitrary prime, k an arbitrary field of characteristic
p, and v_i, i ≥ 1, denotes a sequence of t-spaces of k_0⟨x⟩ satisfying the following
two properties:
(i) (v_i v_j)^s = v_{i+j};
(ii) for all m ≥ 1, v_{2m+1} ⊆ v_{m+1} + v_1.
lemma 2.1. for any integers r and s with 0 < r < s, v_{s+t(s−r)} ⊆ v_r + v_s for
all t ≥ 0.
proof. the proof is by induction on t. there is nothing to show for t = 0. for
t = 1, let m = s − r in (ii) to obtain that v_{2s−2r+1} ⊆ v_{s−r+1} + v_1, then multiply
by v_{r−1} to obtain v_{r−1} v_{2s−2r+1} ⊆ v_{r−1} v_{s−r+1} + v_{r−1} v_1 ⊆ (v_{r−1} v_{s−r+1})^s +
(v_{r−1} v_1)^s = v_s + v_r. but then v_{2s−r} = (v_{r−1} v_{2s−2r+1})^s ⊆ v_s + v_r, as required.
suppose now that t ≥ 1 is such that the result holds. then v_{s+(t+1)(s−r)} =
(v_{s+t(s−r)} v_{s−r})^s ⊆ ((v_s + v_r) v_{s−r})^s = v_{2s−r} + v_s ⊆ v_r + v_s + v_s = v_r + v_s.
the result follows now by induction.
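as a quick illustration (ours, not part of the original text), the smallest-gap case in latex:
\[
  s = r + 1 \;\Longrightarrow\; v_{(r+1)+t} \subseteq v_r + v_{r+1} \qquad (t \ge 0),
\]
so for a sequence whose first two members differ by 1, every later member of the sequence already lies in the sum of the first two.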
for any increasing sequence i : i_1 < i_2 < \cdots of positive integers, we shall
refer to i_2 − i_1 as the initial gap of i.
proposition 2.1. for any increasing sequence i = { i_j }_{j≥1} of positive integers,
there exists a set j of positive integers of size at most i_2 − i_1 + 1 such
that the following hold:
(i) 1, 2 ∈ j;
(ii) \sum_{j=1}^{∞} v_{i_j} = \sum_{j∈j} v_{i_j}.
proof. the proof of the proposition shall be by induction on the initial gap.
by lemma 2.1, for a sequence with initial gap 1, we may take j = { 1, 2 }.
suppose now that l > 1 is an integer for which the result holds for all increasing
sequences with initial gap less than l, and let i_1 < i_2 < \cdots be a sequence with
initial gap i_2 − i_1 = l. if for all j ≥ 3, v_{i_j} ⊆ v_{i_1} + v_{i_2}, then j = { 1, 2 } meets
the requirements, so we may suppose that there exists j ≥ 3 such that v_{i_j} is
not contained in v_{i_1} + v_{i_2}. by lemma 2.1, this means that there exists j ≥ 3
such that i_j ∉ { i_2 + ql | q ≥ 0 }. let r be least such that i_r ∉ { i_2 + ql | q ≥ 0 },
so that there exists t such that i_2 + tl < i_r < i_2 + (t + 1)l. form a sequence
i′ from i by first removing all entries of i up to (but not including) i_r, then
prepend the integer i_2 + tl. thus i′_1, the first entry of i′, is i_2 + tl, while for all
j ≥ 2, i′_j = i_{r+j−2}. note that i′_2 − i′_1 = i_r − (i_2 + tl) ≤ l − 1. by hypothesis,
there exists a subset j′ of size at most i′_2 − i′_1 + 1 ≤ l = i_2 − i_1 that contains 1
and 2 and is such that \sum_{j=1}^{∞} v_{i′_j} = \sum_{j∈j′} v_{i′_j}. set
j = { 1, 2 } ∪ { r + j − 2 | j ∈ j′, j ≥ 2 }.
then |j| = |j′| + 1 ≤ i_2 − i_1 + 1 and
v_{i_2+tl} + \sum_{j=r}^{∞} v_{i_j} = \sum_{j=1}^{∞} v_{i′_j} = \sum_{j∈j′} v_{i′_j} = v_{i_2+tl} + \sum_{j∈j′, j≥2} v_{i′_j} = v_{i_2+tl} + \sum_{j∈j, j≥3} v_{i_j},
and by lemma 2.1, v_{i_2+tl} ⊆ v_{i_1} + v_{i_2}, so
v_{i_1} + v_{i_2} + \sum_{j=r}^{∞} v_{i_j} = v_{i_1} + v_{i_2} + v_{i_2+tl} + \sum_{j=r}^{∞} v_{i_j} = v_{i_1} + v_{i_2} + v_{i_2+tl} + \sum_{j∈j, j≥3} v_{i_j} = v_{i_1} + v_{i_2} + \sum_{j∈j, j≥3} v_{i_j}.
finally, the choice of r implies that
\sum_{j∈j} v_{i_j} = v_{i_1} + v_{i_2} + \sum_{j∈j, j≥3} v_{i_j} = v_{i_1} + v_{i_2} + \sum_{j=r}^{∞} v_{i_j} = \sum_{j=1}^{∞} v_{i_j}.
this completes the proof of the inductive step.
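purely as an illustration of the combinatorial skeleton of this construction (a sketch of ours, not code from the paper; the function name is hypothetical), the index set j for a finite increasing sequence can be computed by following the induction literally:

def index_set(seq):
    """return a set of 1-based positions j with 1, 2 in j, following the
    induction on the initial gap in proposition 2.1 (combinatorial part only)."""
    if len(seq) <= 2:
        return set(range(1, len(seq) + 1))
    gap = seq[1] - seq[0]
    # positions j >= 3 whose entries are not of the form i_2 + q*gap
    bad = [j for j in range(3, len(seq) + 1)
           if (seq[j - 1] - seq[1]) % gap != 0]
    if not bad:
        return {1, 2}
    r = bad[0]
    t = (seq[r - 1] - seq[1]) // gap            # i_2 + t*gap < i_r < i_2 + (t+1)*gap
    new_seq = [seq[1] + t * gap] + seq[r - 1:]  # prepend i_2 + t*gap, keep the tail from i_r
    j_prime = index_set(new_seq)                # strictly smaller initial gap
    return {1, 2} | {r + j - 2 for j in j_prime if j >= 2}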
we remark that in proposition 2.1, it is possible to improve the bound from
i_2 − i_1 + 1 to 2 log_2(2(i_2 − i_1)).
in the sections to come, we shall examine some important situations of the
kind described above.
3 the r^{(d)}_n sequence
we shall have need of certain results that first appeared in [1]. for completeness,
we include them with proofs where necessary. in this section, p denotes an
arbitrary prime, k an arbitrary field of characteristic p, and d an arbitrary
positive integer.
the proof of the first result is immediate.
lemma 3.1. let d be a positive integer. then
s^{(d+1)}(x_1, x_2, \ldots, x_{d+1}) = \sum_{i=1}^{d+1} s^{(d)}(x_1, x_2, \ldots, \hat{x}_i, \ldots, x_{d+1}) x_i    (1)
  = s^{(d)}(x_1, x_2, \ldots, x_d) x_{d+1} + \sum_{i=1}^{d} s^{(d)}(x_1, x_2, \ldots, x_{d+1}x_i, \ldots, x_d)    (2)
  = x_{d+1} s^{(d)}(x_1, x_2, \ldots, x_d) + \sum_{i=1}^{d} s^{(d)}(x_1, x_2, \ldots, x_i x_{d+1}, \ldots, x_d).    (3)
corollary 3.1. let d be any positive integer. then modulo s^{(d)}_1,
s^{(d+1)}(x_1, x_2, \ldots, x_{d+1}) ≡ s^{(d)}(x_1, \ldots, x_d) x_{d+1} ≡ x_{d+1} s^{(d)}(x_1, \ldots, x_d).
proof. this is immediate from (2) and (3) of lemma 3.1.
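for concreteness (a small check of ours, not in the paper), the case d = 2 of (2) reads
\[
 s^{(3)}(x_1,x_2,x_3) = s^{(2)}(x_1,x_2)\,x_3 + s^{(2)}(x_3x_1,\,x_2) + s^{(2)}(x_1,\,x_3x_2),
\]
and expanding both sides gives the same six monomials x_1x_2x_3 + x_2x_1x_3 + x_3x_1x_2 + x_2x_3x_1 + x_1x_3x_2 + x_3x_2x_1; the last two summands on the right are substitution instances of s^{(2)}, hence lie in s^{(2)}_1, which is corollary 3.1 for d = 2.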
we remark that corollary 3.1 implies that for every u ∈s(d)
1
and v ∈k0⟨x⟩,
[ u, v ] ∈s(d)
1 . while we shall not have need of this fact, we note that in [4],
shchigolev proves that if the field is infinite, then for any t -space v , if v ∈v ,
then [ v, u ] ∈v for any u ∈k0⟨x⟩.
the next proposition is a strengthened version of proposition 2.1 of [1].
proposition 3.1. for any u, v ∈k0⟨x⟩,
(i) (s(d)
1 uv)s ⊆s(d)
1
+ (s(d)
1 u)s + (s(d)
1 v)s; and
(ii) (uvs(d)
1 )s ⊆s(d)
1
+ (us(d)
1 )s + (vs(d)
1 )s.
proof. we shall prove the first statement; the proof of the second is similar and
will be omitted. by (1) of lemma 3.1,
d
x
i=1
s(d)(x1, . . . , ˆ
xi, . . . , xd+1)xi = s(d+1)(x1, . . . , xd+1) −s(d)(x1, . . . , xd)xd+1
and by (2) of lemma 3.1, s(d+1)(x1, . . . , xd+1) −s(d)(x1, . . . , xd)xd+1 ∈s(d)
1 .
let v ∈k0⟨x⟩. then
s(d)(x2, . . . , xd+1)x1v+
d
x
i=2
s(d)(x1, . . . , ˆ
xi, . . . , xd+1)xiv
=
d
x
i=1
s(d)(x1, x2, . . . , ˆ
xi, . . . , xd+1)xiv ∈(s(d)
1 v)s
.
now for each i = 2, . . . , d, we use two applications of corollary 3.1 to obtain
s(d)(x1, . . . , ˆ
xi, . . . , xd+1)xiv ≡s(d+1)(x1, . . . , ˆ
xi, . . . , xd+1, xiv)
≡s(d)(x2, . . . , ˆ
xi, . . . , xd+1, xiv)x1
mod s(d)
1 .
thus
s(d)(x2, . . . , xd+1)x1v +
(
d
x
i=2
s(d)(x2, . . . , ˆ
xi, . . . , xiv)
x1 ∈(s(d)
1 v)s + s(d)
1 .
thus for u ∈k0⟨x⟩, we obtain s(d)(x2, . . . , xd+1)uv ∈(s(d)
1 u)s+(s(d)
1 v)s+s(d)
1 ,
and so
(s(d)
1 uv)s ⊆(s(d)
1 u)s + (s(d)
1 v)s + s(d)
1 ,
as required.
corollary 3.2. let d be any positive integer. then the sequence s(d)
n , n ≥1,
satisfies
(i) for all m, n ≥1, (s(d)
m s(d)
n )s = s(d)
m+n;
(ii) for all m ≥1, s(d)
2m+1 ⊆s(d)
m+1 + s(d)
1 .
proof. the first statement follows immediately from definition 1.1 by an ele-
mentary induction argument. for the second statement, let m ≥1. then by
proposition 3.1, for any u, v ∈s(d)
m , (s(d)
1 uv)s ⊆s(d)
1
+ (s(d)
1 u)s + (s(d)
1 v)s,
which implies that (s(d)
1 s(d)
m s(d)
m )s ⊆s(d)
1
+ (s(d)
1 s(d)
m )s.
by (i), this yields
s(d)
2m+1 ⊆s(d)
1
+ s(d)
m+1, as required.
theorem 3.1. let i denote any increasing sequence of positive integers with
initial gap g. then r^{(d)}_{∞,i} is finitely based, with a t-space basis of size at most
g + 1.
proof. denote the entries of i in increasing order by i_j, j ≥ 1. by corollary 3.2
and proposition 2.1, there exists a set j of positive integers with |j| ≤ i_2 − i_1 + 1
and r^{(d)}_{∞,i} = \sum_{j∈j} s^{(d)}_{i_j}. since for each i, the t-space s^{(d)}_i has a basis
consisting of a single element, the result follows.
4 the l_n sequence
we shall make use of the following well known result. an element u ∈ k_0⟨x⟩ is
said to be essential if u is a linear combination of monomials with the property
that each variable that appears in any monomial appears in every monomial.
lemma 4.1. let v be a t-space and let f ∈ v. if f = \sum_i f_i denotes the
decomposition of f into its essential components, then f_i ∈ v for every i.
proof. we induct on the number of essential components, with obvious base
case. suppose that n > 1 is an integer such that if f ∈v has fewer than n
essential components, then each belongs to v , and let f ∈v have n essential
components. since n > 1, there is a variable x that appears in some but not all
essential components of f. let zx and fx denote the sum of the essential com-
ponents of f in which x appears, respectively, does not appear. then evaluate
at x = 0 to obtain that fx = f|x=0∈v , and thus zx = f −fx ∈v as well. by
hypothesis, each essential component of fx and of zx belongs to v , and thus
every essential component of f belongs to v , as required.
corollary 4.1. s^{(p)}_1 ⊆ h_1.
proof. s^{(p)} is one of the essential components of (x_1 + x_2 + \cdots + x_p)^p, and since
(x_1 + x_2 + \cdots + x_p)^p ∈ h_1, it follows from lemma 4.1 that s^{(p)} ∈ h_1. thus
s^{(p)}_1 ⊆ h_1.
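for instance (an illustration of ours, not from the paper), for p = 3:
\[
 (x_1+x_2+x_3)^3 \in h_1, \qquad (x_1+x_2+x_3)^3 = \sum_{\emptyset \ne a \subseteq \{1,2,3\}} f_a,
\]
where f_a collects the monomials involving exactly the variables x_i with i ∈ a; the essential component f_{{1,2,3}} is exactly s^{(3)} = \sum_{\sigma \in \sigma_3} x_{\sigma(1)}x_{\sigma(2)}x_{\sigma(3)}, so lemma 4.1 gives s^{(3)} ∈ h_1.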
corollary 4.2. for every m ≥ 1, s^{(p)}_m ⊆ h_m.
proof. the proof is an elementary induction, with corollary 4.1 providing the
base case.
corollary 4.3. for any u ∈ h_1 and any v ∈ k_0⟨x⟩, [u, v] ∈ h_1.
proof. it suffices to observe that
[x^p, v] = \sum_{i=0}^{p-1} x^i [x, v] x^{p-1-i} = \frac{1}{(p-1)!} s^{(p)}(x, x, \ldots, x, [x, v]),
which belongs to h_1 by virtue of corollary 4.1.
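for example (our own check, not in the paper), for p = 3 the sum telescopes:
\[
 x^2[x,v] + x[x,v]x + [x,v]x^2 = (x^3v - x^2vx) + (x^2vx - xvx^2) + (xvx^2 - vx^3) = [x^3, v],
\]
and s^{(3)}(x, x, [x,v]) = 2!\,(x^2[x,v] + x[x,v]x + [x,v]x^2), consistent with the displayed identity.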
we remark again that in [3], shchigolev proves that if k is infinite, then
every t -space in k0⟨x⟩is closed under commutator in the sense of corollary
4.3. since we have not required that k be infinite, we have provided this closure
result (see also lemma 4.4 below).
lemma 4.2. for any m, n ≥1, (hmhn)s = hm+n.
proof. the proof is by an elementary induction on n, with definition 1.2 pro-
viding the base case.
lemma 4.3. for any m ≥1, (s(p)
1 h2m)s ⊆h1 + hm+1 and (h2ms(p)
1 )s ⊆
h1 + hm+1.
proof. by proposition 3.1 (i), for any u, v ∈hm, we have s(p)
1 uv ⊆s(p)
1
+
(s(p)
1 u)s + (s(p)
1 v)s. by corollary 4.2, this gives s(p)
1 hmhm ⊆h1 + (h1hm)s,
and then from lemma 4.2, we obtain s(p)
1 h2m ⊆h1 + hm+1. the proof of the
second part is similar.
lemma 4.4. let m ≥1. for every u ∈hm and v ∈k0⟨x⟩, [ u, v ] ∈hm.
proof. the proof is by induction on m, with corollary 4.3 providing the base
case. suppose that m ≥1 is such that the result holds. it suffices to prove that
for any v ∈k0⟨x⟩, [ xp
1xp
2 * * * xp
mxp
m+1, v ] ∈hm+1. we have
[ xp
1xp
2 * * * xp
mxp
m+1, v ] = [ xp
1xp
2 * * * xp
m, v ]xp
m+1 + xp
1xp
2 * * * xp
m[ xp
m+1, v ].
by hypothesis, [ xp
1xp
2 * * * xp
m, v ] ∈hm, while xp
m+1 ∈h1 and thus by corollary
4.3, [ xp
m+1, v ] ∈h1 as well. now by definition, [ xp
1xp
2 * * * xp
m, v ]xp
m+1 ∈hm+1
and xp
1xp
2 * * * xp
m[ xp
m+1, v ] ∈hm+1, which completes the proof of the inductive
step.
lemma 4.5. let m ≥1. then his(p)h2m−i ⊆h1 + hm+1 for all i with
1 ≤i ≤2m −1.
proof. let m ≥1. we consider two cases: 2m −i ≥m and 2m −i < m.
suppose that 2m −i ≥m, and let u ∈hi, w ∈hm−1 and z ∈hm−i+1. then
us(p)wz = ([ u, s(p)w ] + s(p)wu)z = [ u, s(p)w ]z + s(p)wuz. since u ∈hi,
it follows from lemma 4.4 that [ u, s(p)w ] ∈hi.
but then by lemma 4.2,
[ u, s(p)w ]z ∈hi+m−i+1 = hm+1. as well, by corollary 4.1 and lemma 4.2,
s(p)wuz ∈s(p)
1 hm−1+i+m−i+1 = s(p)
1 h2m, and by lemma 4.3, s(p)
1 h2m ⊆h1+
hm+1. thus us(p)wz ∈h1 + hm+1. this proves that his(p)hm−1hm−i+1 ⊆
h1 + hm+1, and so by lemma 4.2, his(p)h2m−i = his(p)(hm−1hm−i+1)s ⊆
h1 + hm+1. the argument for the case when 2m −i < m is similar and is
therefore omitted.
proposition 4.1. let p > 2. then for every m ≥1, h2m+1 ⊆h1 + hm+1.
proof. first, consider the expansion of (x + y)p for any x, y ∈k0⟨x⟩. it will
be convenient to introduce the following notation. let jp = { 1, 2, . . ., p }. for
any j ⊆jp, let pj = qp
i=1 zi, where for each i, zi = x if i ∈j, otherwise
zi = y. as well, for each i with 1 ≤i ≤p −1, we shall let s(p)(x, y; i) =
s(p)(x, x, . . . , x
|
{z
}
i
, y, y, . . ., y
|
{z
}
p−i
).
observe that s(p)(x, y; i) = i!(p −i)! p
j⊆jp
|j|=i
pj.
we have
(x + y)p =
p
x
i=0
x
j⊆jp
|j|=i
pj = yp + xp +
p−1
x
i=1
1
i!(p −i)!s(p)(x, y; i).
let u = pp−1
i=1
1
i!(p−i)!s(p)(x, y; i), so that (x + y)p = xp + yp + u, and note that
u ∈s(p)
1 . then
(x + y)2p = y2p + x2p + 2xpyp + [ yp, xp ] + u2 + (xp + yp)u + u(xp + yp).
since (x+y)2p, x2p, y2p, and, by lemma 4.4, [ yp, xp ] all belong to h1, it follows
(making use of corollary 4.2 where necessary) that 2xpyp ∈h1 + h1s(p)
1
+
s(p)
1 h1.
consequently, for any m ≥1,
xp
1
m
y
i=1
(2xp
2ixp
2i+1) ∈h1(h1 + h1s(p)
1
+ s(p)
1 h1)m.
by corollary 4.1, lemma 4.2, and lemma 4.5, h1(h1 + h1s(p)
1
+ s(p)
1 h1)m ⊆
h1 + hm+1, and since p > 2, it follows that q2m+1
i=1
xp
i ∈h1 + hm+1. thus
h2m+1 ⊆h1 + hm+1, as required.
theorem 4.1 (shchigolev's conjecture). let p > 2 be a prime and k a field of
characteristic p. for any increasing sequence i = { i_j }_{j≥1}, l_{∞,i} is a finitely
based t-space of k_0⟨x⟩, with a t-space basis of size at most i_2 − i_1 + 1.
proof. by lemma 4.2 and proposition 4.1, the sequence h_n of t-spaces of
k_0⟨x⟩ meets the requirements of section 2. thus by proposition 2.1, for any
increasing sequence i = { i_j }_{j≥1} of positive integers, there exists a set j of
positive integers such that |j| ≤ i_2 − i_1 + 1 and l_{∞,i} = \sum_{j=1}^{∞} h_{i_j} = \sum_{j∈j} h_{i_j}.
since for each i, h_i has t-space basis { x_1^p x_2^p \cdots x_i^p }, it follows that l_{∞,i} has a
t-space basis of size |j| ≤ i_2 − i_1 + 1.
shchigolev's original result was that for the sequence i^+ of all positive in-
tegers, l_{∞,i^+} is a finitely based t-space, with a t-space basis of size at most
p. it was then shown in [1], a precursor to this work, that l_{∞,i^+} has in fact
a t-space basis of size at most 2 (the bound of theorem 4.1, since i_1 = 1 and
i_2 = 2).
it is also interesting to note that the results in this paper apply to finite
sequences. of course, if i is a finite increasing sequence of positive integers,
then l_{∞,i} has a finite t-space basis, but by the preceding work, we know that
it has a t-space basis of size at most i_2 − i_1 + 1.
references
[1] c. bekh-ochir and s. a. rankin, on a problem of a. v. grishin, preprint,
arxiv:0909.2266.
[2] a. v. grishin, t-spaces with an infinite basis over a field of characteristic 2,
international conference in algebra and analysis commemorating the hun-
dredth anniversary of n. g. chebotarev, proceedings, kazan 5–11, june,
1994, p. 29 (russian).
[3] a. v. grishin and v. v. shchigolev, t-spaces and their applications, journal
of mathematical sciences, vol 134, no. 1, 2006, 1799–1878 (translated from
sovremennaya matematika i ee prilozheniya, vol. 18, algebra, 2004).
[4] v. v. shchigolev, examples of t -spaces with an infinite basis, sbornik math-
ematics, vol 191, no. 3, 2000, 459–476.
|
0911.1710 | moduli spaces of $j$-holomorphic curves with general jet constraints | in this paper, we prove that the tangent map of the holomorphic $k$-jet
evaluation $j^k_{hol}$ from the mapping space to the holomorphic $k$-jet bundle,
when restricted to the universal moduli space of simple j-holomorphic curves
with one marked point, is surjective. from this we derive that for generic $j$,
the moduli space of simple $j$-holomorphic curves in class $\beta\in h_2(m)$
with general jet constraints at marked points is a smooth manifold of expected
dimension.
| introduction
let (m, ω) be a symplectic manifold of dimension 2n. denote by jω the set of
almost complex structures j on m compatible with ω. let σ be a compact oriented
surface without boundary, and (j, u) a pair of complex structure j on σ and a map
u : σ → m. we say (j, u) is a j-holomorphic curve if ∂_{j,j}u := (1/2)(du + j ∘ du ∘ j) = 0.
we let m_1(σ, m, j; β) be the standard moduli space of j-holomorphic curves
in class β ∈ h_2(m, z) with one marked point, and m^*_1(σ, m, j; β) be the set of
simple (i.e. somewhere injective) j-holomorphic curves in m_1(σ, m; β).
since the birth of the theory of j-holomorphic curves, moduli spaces of j-
holomorphic curves with constraints at marked points have led to finer symplectic
invariants like gromov-witten invariants and quantum cohomology. j-holomorphic
curves with the embedding property also play an important role in low-dimensional
symplectic geometry, as in the works of [ht] and [wen]. these constraints can all be
viewed as partial differential relations in the 0-jet and 1-jet bundles. in relative
gromov-witten theory, the contact order of j-holomorphic curves with given symplec-
tic hypersurfaces (divisors) was used to define the relevant moduli spaces. in the
work of cieliebak-mohnke [cm] and oh [oh], the authors studied the moduli space
of j-holomorphic curves with prescribed vanishing orders of derivatives at marked
points. all these are vanishing conditions in k-jet bundles. it is then natural to
ask what properties we can expect for moduli spaces of j-holomorphic curves with
general constraints in jet bundles (while all constraints in the previous examples are
zero sections in various jet bundles).
the main purpose of this paper is to confirm that for a wide class of closed
partial differential relations in holomorphic jet bundles (definition 1, originally de-
fined in [oh]), the moduli spaces of j-holomorphic curves from σ to m with given
constraints at marked points behave well for generic j (theorem 3). namely, they
are smooth manifolds of the dimension predicted by the index theorem, and all elements in
the moduli spaces are fredholm regular.
date: november 9, 2009.
key words and phrases. j-holomorphic curve, jet evaluation map, transversality.
during the proof it appears that holomor-
phic jet bundles are the natural framework to put jet constraints for j-holomorphic
curves in order to obtain regularity of their moduli spaces. the regularity of j-
holomorphic curve moduli spaces fails for general constraints in the usual jet bundle
(remark 2), but still holds in the special case when the moduli space consists of
immersed j-holomorphic curves (theorem 2).
the key to the proof is to establish the surjectivity of the linearization of
the k-jet evaluation on the universal moduli space of j-holomorphic curves at marked
points inside the mapping space, including the parameter j ∈ j_ω (theorem 1). it
is important to take the evaluations in holomorphic jet bundles in order to get the
surjectivity of the linearization of the k-jet evaluation map.
since j_ω is a huge parameter space in which to deform j-holomorphic curves, the
surjectivity here is reminiscent of the classical thom transversality theorem, which
says that the k-jet evaluation from the smooth mapping space to the k-jet bundle is
transversal to any section there.
the framework of the paper is similar to [oh], which in turn is a higher jet
generalization of [oz] for 1-jet transversality of j-holomorphic curves. the main
steps of the paper are in order:
(1) we set up the banach bundle including the finite dimensional holomorphic
k-jet subbundle j^k_{hol}(σ, m) over the mapping space f_1(σ, m) × j_ω and
define the section υ_k = (∂, j^k_{hol}), where
υ_k : ((u, j, z_0), j) → (∂_{j,j}u, j^k_{hol}(u)(z_0)).
we interpret the universal j-holomorphic curve moduli space as
m(σ, m) = ∂^{-1}(0) = υ_k^{-1}(0, j^k_{hol}(σ, m)).
(2) we compute the linearization dυ_k of the section υ_k. we express the
submersion property of υ_k as the solvability of a system of equations
dυ_k(ξ, b) = (γ, α) for any (γ, α), where (ξ, b) ∈ t_u f_1(σ, m) × t_j j_ω,
or equivalently, the vanishing of the cokernel element (η, ζ) in the fredholm
alternative system: ⟨(ξ, b), (η, ζ)⟩ = 0 for all (ξ, b). this is called
the cokernel equation.
(3) using the abundance of b ∈ t_j j_ω we get supp η ⊂ {z_0}. then we use a
structure theorem for distributions to write η as a linear combination of the δ
function and its derivatives at z_0, up to (k − 1)-th order derivatives.
(4) since supp η ⊂ {z_0}, the cokernel equation is supported at z_0. we replace the
ξ in the cokernel equation by ξ + h, where h = h(z, \bar{z}) is a suitable polynomial
in local coordinates near z_0, and set b = 0, so that the cokernel equation
is reduced to ⟨d_u∂_{j,j}ξ, η⟩ = 0 for all ξ. the crucial observation is that to
get ⟨d_u∂_{j,j}ξ, η⟩ = 0 we do not need conditions on the vanishing of the
first k derivatives of u at z_0 as strong as those in [oh] and [cm]. this is by exploiting
the flexibility of h to get rid of redundant terms in the original cokernel
equation.
(5) then we apply elliptic regularity to conclude η = 0 and consequently ζ = 0.
therefore we get the surjectivity of dυ_k and dj^k_{hol}.
(6) finally, there is an obstruction in step 4 to getting h when ζ^k = 0, where ζ^k
is the k-th component of ζ. but when ζ^k = 0 the cokernel equation is
reduced to the (k − 1)-jet evaluation setting, so we still get (η, ζ) = (0, 0)
by induction on k.
acknowledgement. the author would like to thank yakov eliashberg for
suggesting the generalization from [oh] to general pde relations. he would also like
to thank yong-geun oh for past discussions on holomorphic jet transversality.
2. holomorphic jet bundle
we recall the holomorphic jet bundle from [oh]. given σ, m, and (z, x) ∈ σ × m,
the k-jet space with source z and target x is defined as (see [hir])
j^k_{z,x}(σ, m) = \prod_{l=0}^{k} sym^l(t_z σ, t_x m),
where sym^l(t_z σ, t_x m) is the set of l-multilinear maps from t_z σ to t_x m for l ≥ 1.
here for convenience we have set sym^0(t_z σ, t_x m) = m. let
j^k(σ, m) = \bigcup_{(z,x) ∈ σ × m} j^k_{z,x}(σ, m)
be the k-jet bundle over σ × m. for the mapping space
f_1(σ, m; β) = { ((σ, j), u, z) | j ∈ m(σ), z ∈ σ, u : σ → m, [u] = β },
we consider the map
f_1(σ, m; β) → σ × m, (u, j, z) → (z, u(z)).
by this map we can pull back the bundle j^k(σ, m) → σ × m to the base f_1(σ, m; β).
by abuse of notation, we still denote the resulting bundle by j^k(σ, m). then j^k(σ, m) →
f_1(σ, m; β) is a finite dimensional vector bundle over the banach manifold f_1(σ, m; β).
we define the k-jet evaluation
j^k : f_1(σ, m; β) → j^k(σ, m),   j^k((u, j), z) = j^k_z u ∈ j^k_{z,u(z)}(σ, m).
then j^k is a smooth section. the classical thom transversality theorem says that j^k is
transversal to any section of j^k(σ, m).
now we turn to the case when σ and m are equipped with (almost) complex
structures j and j respectively.
the corresponding concept is the holomorphic
jet bundle defined in [oh].
with respect to (jz, jx), syml
z,x (σ, m) splits into
summands indexed by the bigrading (p, q) for p + q = k:
syml
z,x (σ, m) = sym(l,0) (tzς, txm) ⊕sym(0,l) (tzς, txm) ⊕"mixed parts"
let
h(l,0)
jz,jx (σ, m)
=
sym(l,0) (tzς, txm) ,
h(l,0)
j,j
(σ, m)
=
[
(z,x)∈σ×m
h(l,0)
jz,jx (σ, m) .
given (j, j), the (j, j)-holomorphic jet bundle jk
(j,j)hol (σ, m) is defined as
(2.1)
jk
(j,j)hol (σ, m) =
k
y
l=0
h(l,0)
j,j
(σ, m) ,
which is a finite dimensional vector bundle over σ × m.
we define the bundle
jk
hol (σ, m) =
[
(j,j)∈m(σ)×jω
jk
(j,j)hol (σ, m) .
jk
hol (σ, m) →σ × m × m (σ) × jω is a finite dimensional vector bundle over the
base banach manifold. using the pull back of the map
ev : f1 (σ, m; β) × jω →σ × m × m (σ) × jω, ((u, j) , z, j) →(z, u (z) , j, j) ,
ev∗ jk
hol (σ, m)
is a finite dimensional vector bundle over the banach manifold
f1 (σ, m; β) × jω.
by abusing of notation, we still call ev∗ jk
hol (σ, m)
by
jk
hol (σ, m).
definition 1. jk
hol (σ, m) →f1 (σ, m; β) × jω is called the holomorphic k-jet
bundle.
let πhol : jk (σ, m) →jk
hol (σ, m) be the bundle projection. we define the
holomorphic k-jet evaluation
jk
hol = πhol ◦jk.
it is not hard to see jk
hol is a smooth section of the banach bundle f1 (σ, m; β) ×
jω →jk
hol (σ, m). according to the summand (2.1), we write jk
hol in components
jk
hol =
k
y
l=0
σl,
where the l-th component is
σl : f1 (σ, m; β) × jω →h(l,0)
j,j
(σ, m) , ((u, j) , z, j) →πhol
j,j
dlu (z)
.
we remark that if j is integrable, σl corresponds to the l-th holomorphic derivative
∂l
∂zl u of u at z.
the important point is that the holomorphic k-jet bundle and the section jk
hol
are canonically associated to the pair (σ, j) and (m, j) in the "off-shell level", i.e.
on the space of all smooth maps, not only j-holomorphic maps. this enables us to
formulate the jet constraints for j-holomorphic maps as some submanifold in the
bundle jk
hol (σ, m) →f1 (σ, m; β) × jω.
3. fredholm set up
the fredholm set up is the same as in [oh], with the simplification that we only
need one marked point on σ. the case with more marked points has no essential
difference. we introduce the standard bundle
h
′′ =
[
((u,j),j)
h
′′
((u,j),j),
h
′′
((u,j),j) = ω(0,1)
j,j
(u∗t m)
and define the section
υk : f1 (σ, m; β) × jω →h
′′ × jk (σ, m)
as
υk ((u, j) , z, j) =
∂(u, j, j) ; jk
hol (u, j, j, z)
,
where
∂(u, j, j) = ∂j,j (u) = du + j ◦du ◦j
2
.
given β ∈h2 (m, z), let
m1 (σ, m; β) =
[
j∈jω
m1 (σ, m, j; β)
be the universal moduli space of j-holomorphic curves in class β with one marked
point.
its open subset consisting of somewhere injective j-holomorphic curves
is denoted by m∗
1 (σ, m; β).
it is a standard fact in symplectic geometry that
m∗
1 (σ, m; β) is a banach manifold.
now we make precise the necessary regularity requirement for the banach man-
ifold set-up:
(1) to make sense of the evaluation of jku at a point z on σ, we need to take at
least w k+1,p-completion with p > 2 of f1 (σ, m; β) so jku ∈w 1,p ֒
→c0.
to make the section υk differentiable we need to take w k+2,p completion,
since in (4.2) (k + 1)-th derivatives of u are involved. to apply sard-smale
theorem, we actually need to take w n,p completion with sufficiently large
n = n (β, k).
(2) we provide h′′ with topology of a w n,p banach bundle.
(3) we also need to provide the banach manifold structure of jω. we can
borrow floer's scheme [f, f] for this whose details we refer readers thereto.
4. transversality
theorem 1. at every j-holomorphic curve ((u, j), z, j) ∈ m^*_1(σ, m; β) ⊂ f_1(σ, m; β) × j_ω,
the linearization dυ_k of the map
υ_k = (∂, j^k_{hol}) : f_1(σ, m; β) × j_ω → h'' × j^k_{hol}(σ, m)
is surjective. in particular, the linearization dj^k_{hol} of the holomorphic k-jet evaluation
j^k_{hol} : f_1(σ, m; β) × j_ω → j^k_{hol}(σ, m)
on m^*_1(σ, m; β) is surjective.
to prove the theorem we need to verify that at each ((u, j), z, j) ∈ m^*_1(σ, m; β),
the system of equations
dj,(j,u)∂(b, (b, ξ))
=
γ
(4.1)
dj,(j,u)jk
hol (b, (b, ξ)) (z) + ∇v
jk
hol (u)
(z)
=
α
(4.2)
has a solution (b, (b, ξ) , v) ∈tjjω × tjm (σ) × tuf1 (σ, m; β) × tzς for each
given data
γ ∈ω(0,1)
n−1,p (u∗t m) ,
ζ =
ζ0, ζ1, . . . ζk
∈jk
hol
tzς, tu(z)m
.
it will be enough to consider the triple with b = 0 and v = 0 which we will assume
from now on.
we compute the dj,(j,u)jk
hol (b, (b, ξ)) (z). it is enough to compute dj,(j,u)σl (b, (0, ξ)) (z)
for l = 0, 1, * * *k. we have
(4.3)
dj,(j,u)σl (b, (0, ξ)) (z) = πhol
(∇du)l ξ (z)
+σ0≤s,t≤lb (z)*fst (z)
(∇du)s ξ (z) , ∇tu (z)
where fst (z) (*, *) is some vector-valued monomial, and b (z) is a matrix val-
ued function, both smoothly depending on z.
there is no derivative of b in
the above formula, because for any l, σl is the projection of the tensor dlu ∈
syml tzς, tu(x)m
to sym(l,0)
j,j
tzς, tu(x)m
, and the projection only involves j
but not its derivatives. since u is (j,j)-holomorphic, it also follows that
(4.4)
πhol
(∇du)l ξ (z)
=
∇
′
du
l
ξ (z) ,
where ∇
′
du = πhol∇du = du∂j,j. there is a formula for du∂j,j and du∂j,j nearby
z0 (see [si]):
du∂j,jξ
=
∂ξ + a (z) ∂ξ + c (z) ξ
(4.5)
du∂j,jξ
=
∂ξ + g (z) ∂ξ + h (z) ξ
where a (z) , c (z) , g (z) , h (z) are matrix-valued smooth functions, all vanishing
at z0.
now we study the solvability of (4.1) and (4.2) by fredholm alternative. we
regard
ω(0,1)
n−1,p (u∗t m) × jk
hol
tzς, tu(z)m
as a banach space with the norm
∥*∥n−1,p + σk
l=1 |*|l
where |*|l is any norm induced by an inner product on the 2n-dimensional vector
space sym(l,0)
j,j
tzς, tu(z)m
≃cn.
we denote the natural pairing
ω(0,1)
n−1,p (u∗t m) ×
ω(0,1)
n−1,p (u∗t m)
∗
→r
by ⟨*, *⟩and the inner product on sym(l,0)
j,j
tzς, tu(z)m
by (*, *)z.
let (η, ζ) ∈
ω(0,1)
n−1,p (u∗t m)
∗
× jk
hol
tzς, tu(z)m
for ζ =
ζ1,*** ,ζk
such
that
(4.6)
dj,(j,u)∂(b, (0, ξ)) , η
+ σk
l=1
dj,(j,u)σl (b, (0, ξ)) (z) , ζl
z = 0
for all ξ ∈ω(0,1)
n−1,p (u∗t m) and b ∈tjjω. we want to show (η, ζ) = (0, 0). the
idea is to change the above equation into
dj,(j,u)∂(b, (0, ξ)) , η
= 0
for all ξ and b by judiciously modifying ξ by a taylor polynomial nearby z, and
then use standard techniques in j-holomorphic curve theory to show η = 0, and
after that use cauchy integral to show ζ = 0. we first deal with n = k case, and
later raise the regularity by ellipticity of cauchy-riemann equation.
let ξ = 0, then (4.6) becomes
1
2b ◦du ◦j, η
= 0.
using the abundance of b ∈tjjω, and that u is a simple j-holomorphic curve, by
standard technique (for example [ms]) we get η = 0 on σ\ {z0}, namely suppη ⊂
{z0}.
since η ∈ (w^{k,p})^*, by the structure theorem for distributions with point
support (see [gs]), we have
(4.7)   η = p(∂/∂z, ∂/∂\bar{z}) δ_{z_0},
where δ_{z_0} is the delta function supported at z_0, and p is a polynomial in two
variables of degree ≤ k − 1: this is because the evaluation at a point of the k-th
derivative of w^{k,p} maps does not define a continuous functional on w^{k,p}.
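(for the reader's convenience, a standard statement we are invoking here, not a new claim: any distribution supported at a single point is a finite linear combination of derivatives of the delta function. in the present two-dimensional situation, with order at most k − 1, this reads)
\[
  \eta \;=\; \sum_{a+b \le k-1} c_{a,b}\,
  \frac{\partial^{a+b}}{\partial z^a \partial \bar z^b}\,\delta_{z_0},
  \qquad c_{a,b} \text{ constant (valued in the relevant dual fiber)},
\]
which is what the polynomial p in (4.7) packages.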
let b = 0. by (4.3) and (4.7), (4.6) becomes
(4.8)
du∂j,jξ, η
+
dj,(j,u)jk
hol
ξ (z0) , ζ
z0 = 0.
since ξ is arbitrary, we can replace ξ by ξ + χ (z) h (z, z) in the above identity,
where h(z, \bar{z}) is a vector-valued polynomial in z and \bar{z}, and χ(z) is a smooth cut-
off function equal to 1 in a coordinate neighborhood of z_0 and 0 outside a slightly
larger neighborhood, so that χ(z)h(z, \bar{z}) is well defined and smooth on the whole of σ.
we want (4.8) to become ⟨d_u∂_{j,j}ξ, η⟩ = 0 after that replacement. for this purpose
h(z, \bar{z}) should satisfy
(4.9)
du∂j,jh, p
∂
∂z , ∂
∂z
δz0
+
dj,(j,u)jk
hol
h (z0) , ζ
z0 = −
dj,(j,u)jk
hol
ξ (z0,z0) , ζ
z0
after simplification, the above is a differential equation about h:
(4.10)
q
∂
∂z , ∂
∂z
h (z0,z0) = w
where q (s, t) is a vector-valued polynomial in two variables s, t , q
∂
∂z, ∂
∂z
acts
on h (z, z) with vector coefficients paired with those of h by inner product, and
w := −
dj,(j,u)jk
hol
ξ (z0) , ζ
z0 is a constant.
here comes the crucial observation: when ζk ̸= 0, the highest degree of s in
q (s, t) is in the term ζksk. this is because p (s, t) has degree≤k −1 and after
integration by parts,
∂
∂z can fall at du∂j,jh of most (k −1) times, and in (4.5)
du∂j,jh = ∂h + a (z) ∂h + c (z) h,
where a (z0) = 0. on the other hand, in
dj,(j,u)jk
hol
h, by (4.3) , (4.4) , (4.5), the
highest derivative for
∂
∂z is
∂
∂z
k, and is paired with the coefficient ζk in (4.9).
when ζk ̸= 0, we take h (z, z) =
ζk
|ζk|2 1
k! (z −z0)k w, then h solves (4.10). this
is because of the following: h is holomorphic nearby z0, so we can ignore all terms
in q
∂
∂z, ∂
∂z
involving
∂
∂z; for the remaining terms in q
∂
∂z, ∂
∂z
, they must be of
the form
∂
∂z
l with 0 ≤l ≤k, and only
∂
∂z
k h (z0) ̸= 0.
with this h, we reduce the cokernel equation to
du∂j,jξ, η
= 0. since η is
a weak solution of
du∂j,j
∗η = 0 on σ, by ellipticity of the
du∂j,j
∗operator,
the distribution solution η is smooth on σ (see [ho]). since η = 0 on σ\ {z0} ,
η = 0 on σ. then it is not hard to conclude ζ = 0 by cauchy integral formula as
in [oz] and [oh]. therefore the system of equations (4.1) and (4.2) is solvable for
any η ∈w k,p and α ∈jk
hol
tz0σ, tu(z0)m
.
there is one case left: that is when ζk = 0. we still need to show (η, ζ) = (0, 0).
if k = 1, then ζ1 = 0 ⇔ζ = 0 so it has been done as above. if k > 1, we notice that
the cokernel equation (4.6) now is the cokernel equation for the section dυ_{k−1},
since the k-th jet is paired with ζk there,
dj,(j,u)jk
hol
ξ (z0) , ζ
z0 =
dj,(j,u)jk−1
hol
ξ (z0) , ζ
z0 .
by the induction hypothesis on k, dυ_{k−1} has trivial cokernel, hence (η, ζ) = (0, 0).
last we raise the regularity from w k+1,p to w n,p, for any n > k. for η ∈
w n−1,p ⊂w k,p, by the above argument we can find a solution ξ ∈w k+1,p in
(4.1). by elliptic regularity, the solution ξ ∈w n,p. therefore (4.1) and (4.2) is
solvable in w n,p setting. this finishes induction hence the proof of theorem.
remark 1. in the above proof, the induction starts from k = 1. in [oz], k = 1
case was treated in the framework of 1-jet transversality at (u, z0) where du (z0) = 0.
the above proof includes the k = 1 case as well, but the way of choosing h does not
rely on du (z0) = 0 and applies to any z0 on σ.
remark 2. it is crucial that we use the holomorphic k-jet bundle instead of the
usual k-jet bundle to get the surjectivity of dυ_k. otherwise, as the usual
jet evaluation involves mixed derivatives, given ζ(k,0) = 0 we can not reduce the
cokernal equation to the (k −1) case by induction, and when k = 1, ζ(1,0) = 0 does
not imply ζ = 0. in the case k = 1, we can explicitly see why this submersion
property fails in the usual 1-jet bundle: for a j-holomorphic curve u with du (z0) =
0, and γ1 =
∂, jk
: f1 (σ, m; β) × jω →h
′′ × j1 (σ, m), calculations in [oz]
yield
dγ1 (ξ, b) =
du∂j,jξ; du∂j,jξ (z0) , du∂j,jξ (z0)
therefore there is no solution for
η, α(0,1),α(1,0)
if η (z0) ̸= α(0,1).
however, if du(z_0) ≠ 0 then the surjectivity still holds in the usual jet
bundles. more precisely we have the following
theorem 2. at any j-holomorphic curve ((u, j), z_0, j) ∈ m^*_1(σ, m; β) ⊂ f_1(σ, m; β) × j_ω
with du(z_0) ≠ 0, the linearization dγ_k of the section
γ_k = (∂, j^k) : f_1(σ, m; β) × j_ω → h'' × j^k(σ, m)
is surjective. in particular, the linearization dj^k of the k-jet evaluation
j^k : f_1(σ, m; β) × j_ω → j^k(σ, m)
at ((u, j), z_0, j) is surjective.
proof. it is enough to show that the cokernel equation
du∂j,jξ + 1
2b ◦du ◦j, η
+
dj,(j,u)jk
ξ (z0) , ζ
z0 = 0,
for all ξ, b
only has trivial solution (η, ζ) = (0, 0). to do this, using standard argument in [ms]
we again get suppη ⊂{z0}. given ζ ∈jk tz0σ, tu(z0)m
, by taylor polynomial
we can construct a smooth ξ supported in arbitrarily small neighborhood of z0 ∈σ,
such that
dj,(j,u)jk
ξ (z0) = ζ. when du (z0) ̸= 0, by linear algebra (namely the
abundance of tjjω) and perturbation method we can construct b ∈tjjω such
that du∂j,jξ + 1
2b ◦ du ◦ j = 0 on σ (see [ms]). so we get from the cokernel
equation that 0 + |ζ|^2 = 0, i.e. ζ = 0. letting b = 0 in the cokernel equation, we get
du∂j,jξ, η
= 0 for all ξ. then by elliptic regularity we conclude η = 0 on the
whole σ.
□
the following theorem is a direct consequence of theorem 1 by applying the sard-smale theorem.
theorem 3. let s be any smooth section of the holomorphic k-jet bundle j^k_hol(σ, m) → f1(σ, m; β) × jω. then the section υk is transversal to the section (0, s). the moduli space
m^s := (j^k_hol)^{−1}(s) ∩ m*_1(σ, m; β) = υ_k^{−1}(0, s)
is a banach submanifold of codimension 2kn in m*_1(σ, m; β). under the natural projection π : f1(σ, m; β) × jω → jω, there exists j_reg ⊂ jω of second category, such that for any j ∈ j_reg, the moduli space m^s_j := m^s ∩ π^{−1}(j) is a smooth manifold in m*_1(σ, m, j; β), with dimension
dim m^s_j = dim m*_1(σ, m, j; β) − 2kn,
and all the elements in m^s_j are fredholm regular.
remark 3. in [oh], s is the zero section of the holomorphic k-jet bundle j^k_hol(σ, m), so m^s_j is the set of j-holomorphic curves with prescribed ramification degrees at the marked points. the j-holomorphic curves in our moduli space m^s_j can obey a more general constraint s. similar to [oh], the theorem also has a version with more than one marked point. also, the constraint s need not be a full section over the base, but only a closed submanifold in j^k_hol(σ, m) whose tangent space projects onto the horizontal distribution of the bundle j^k_hol(σ, m) → f1(σ, m; β) × jω, because the essential part in the proof of the theorem is that dυk|_{(0,s)} is surjective.
the theorem appears to be a good starting point for studying moduli spaces of j-holomorphic curves satisfying general jet constraints in the holomorphic jet bundle; for example, moduli spaces of j-holomorphic curves with self-tangency. also, in [cm], jet constraints from symplectic hypersurfaces were used to get rid of multicovering bubbling spheres. this enables them to define genus zero gromov-witten invariants without abstract perturbations.
the above theorem tells us that the moduli spaces m^s_j are well-behaved, and the {jt}_{0≤t≤1} family version of the above theorem tells us that they are cobordant to each other by moduli spaces {m^s_{jt}}_{0≤t≤1} for a generic path jt ⊂ jω. it is interesting to see if the moduli spaces m^s_j can be used to construct new symplectic invariants.
references
[cm] cieliebak, k., mohnke, k., "symplectic hypersurfaces and transversality in gromov-witten theory", journal of symplectic geometry (2008)
[f]
floer, a., "the unregularized gradient flow of symplectic action", comm. pure appl. math.
41 (1988), 775-813
[gs] gelfand, i.m., shilov, g.e., "generalized functions", vol 2, academic press, new york and
london, 1968
[hir] hirsch, m., "differential topology", gtm 33, springer-verlag, 1976
[ho] hörmander, l., "the analysis of linear partial differential operators ii", compr. studies in math. 257, springer-verlag, berlin, 1983
[ht] m. hutchings and c. h. taubes, gluing pseudo-holomorphic curves along branched covered
cylinders ii, j. symplectic geom., 5 (2007), pp. 43–137; math.sg/0705.2074.
[ms] mcduff, d., salamon, d., "j-holomorphic curves and symplectic topology", colloquium publications, vol 52, ams, providence ri, 2004.
[oh] oh, y.-g., "higher jet evaluation transversality of j-holomorphic curves", math.sg/0904.3573
[oz] oh,y-g., zhu, k., "embedding property of j-holomorphic curves in calabi-yau manifolds
for generic j", math.sg/0805.3581, asian j. of math, vol 13, no.3, 2009
[oz1] oh,y-g., zhu, k., floer trajectories with immersed nodes and scale-dependent gluing,
submitted, sg/0711.4187
[si] sikorav, j.c., "some properties of holomorphic curves in almost complex manifolds", 165-189, in "holomorphic curves in symplectic geometry", audin, m. and lafontaine, j. (eds.), birkhäuser, basel, 1994
[wen] c. wendl, automatic transversality and orbifolds of punctured holomorphic curves in di-
mension 4, arxiv:0802.3842v1.
department of mathematics, the chinese university of hong kong, shatin, hong
kong
e-mail address: [email protected]
|
0911.1711 | a new class of exact hairy black hole solutions | we present a new class of black hole solutions with minimally coupled scalar
field in the presence of a negative cosmological constant. we consider a
one-parameter family of self-interaction potentials parametrized by a
dimensionless parameter $g$. when $g=0$, we recover the conformally invariant
solution of the martinez-troncoso-zanelli (mtz) black hole. a non-vanishing $g$
signals the departure from conformal invariance. all solutions are
perturbatively stable for negative black hole mass and they may develop
instabilities for positive mass. thermodynamically, there is a critical
temperature at vanishing black hole mass, where a higher-order phase transition
occurs, as in the case of the mtz black hole. additionally, we obtain a branch
of hairy solutions which undergo a first-order phase transition at a second
critical temperature which depends on $g$ and it is higher than the mtz
critical temperature. as $g\to 0$, this second critical temperature diverges.
| introduction
four-dimensional black hole solutions of einstein gravity coupled to a scalar field have
been an avenue of intense research for many years. questions pertaining to their exis-
tence, uniqueness and stability were seeking answers over these years. the kerr-newman
solutions of four-dimensional asymptotically flat black holes coupled to an electromagnetic
field or in vacuum, imposed very stringent conditions on their existence in the form of
"no-hair" theorems. in the case of a minimally coupled scalar field in asymptotically flat
spacetime the no-hair theorems were proven imposing conditions on the form of the self-
interaction potential [1]. these theorems were also generalized to non-minimally coupled
scalar fields [2].
for asymptotically flat spacetime, a four-dimensional black hole coupled to a scalar field
with a zero self-interaction potential is known [3]. however, the scalar field diverges on
the event horizon and, furthermore, the solution is unstable [4], so there is no violation of
the "no-hair" theorems. in the case of a positive cosmological constant with a minimally
coupled scalar field with a self-interaction potential, black hole solutions were found in [5]
and also a numerical solution was presented in [6], but it was unstable. if the scalar field
is non-minimally coupled, a solution exists with a quartic self-interaction potential [7], but
it was shown to be unstable [8, 9].
in the case of a negative cosmological constant, stable solutions were found numerically
for spherical geometries [10, 11] and an exact solution in asymptotically ads space with
hyperbolic geometry was presented in [12] and generalized later to include charge [13]. this
solution is perturbatively stable for negative mass and may develop instabilities for positive
mass [14]. the thermodynamics of this solution were studied in [12] where it was shown
that there is a second order phase transition of the hairy black hole to a pure topological
black hole without hair.
the analytical and numerical calculation of the quasi-normal
modes of scalar, electromagnetic and tensor perturbations of these black holes confirmed
this behaviour [15].
recently, a new exact solution of a charged c-metric conformally
coupled to a scalar field was presented in [16, 17].
a schwarzschild-ads black hole in
five-dimensions coupled to a scalar field was discussed in [18], while dilatonic black hole
solutions with a gauss-bonnet term in various dimensions were discussed in [19].
from a known black hole solution coupled to a scalar field other solutions can be gen-
erated via conformal mappings [20]. in all black hole solutions in the einstein frame the
scalar field is coupled minimally to gravity. applying a conformal transformation to these
solutions, other solutions can be obtained in the jordan frame which are not physically
equivalent to the untransformed ones [21]. the scalar field in the jordan frame is coupled
to gravity non-minimally and this coupling is characterized by a dimensionless parameter
ξ. there are strong theoretical, astrophysical and cosmological arguments (for a review
see [21]) which fix the value of this conformal coupling to ξ = 1/6. if the scalar poten-
tial is zero or quartic in the scalar field, the theory is conformally invariant; otherwise a
non-trivial scalar potential introduces a scale in the theory and the conformal invariance is
broken.
in this work we present a new class of black hole solutions of four-dimensional einstein
gravity coupled to a scalar field and to vacuum energy.
we analyse the structure and
study the properties of these solutions in the einstein frame. in this frame, the scalar
self-interaction potential is characterised by a dimensionless parameter g. if this parameter
vanishes, then the known solutions of black holes minimally coupled to a scalar field in
(a)ds space are obtained [7, 12]. transforming these solutions to the jordan frame, the
parameter g can be interpreted as giving the measure of departure from conformal invari-
ance. this breakdown of conformal invariance allows the back-scattering of waves of the
scalar field off of the background curvature of spacetime, and the creation of "tails" of radiation. this effect may have sizeable observational signatures in cosmology [22].
following [23], we perform a perturbative stability analysis of the solutions. we find
that the hairy black hole is stable near the conformal point if the mass is negative and may
develop instabilities in the case of positive mass. we also study the thermodynamics of our
solutions. calculating the free energy we find that there is a critical temperature above
which the hairy black hole loses its hair to a black hole in vacuum. this critical temperature
occurs at a point where the black hole mass flips sign, as in the case of the mtz black hole
[12]. interestingly, another phase transition occurs at a higher critical temperature which is
of first order and involves a different branch of our solution. this new critical temperature
diverges as the coupling constant in the potential g →0. these exact hairy black hole
solutions may have interesting applications to holographic superconductors [24, 25], where
new types of holographic superconductors can be constructed [14, 26].
our discussion is organized as follows. in section 2 we introduce the self-interaction
potential and we present the hairy black hole solution. in section 3 we discuss the thermo-
dynamics of our solution. in section 4 we perform a stability analysis. finally, in section 5
we summarize our results.
2
black hole with scalar hair
to obtain a black hole with scalar hair, we start with the four-dimensional action
consisting of the einstein-hilbert action with a negative cosmological constant λ, along
with a scalar,
i = ∫ d⁴x √−g [ (r − 2λ)/(16πg) − (1/2) g^{μν} ∂μφ ∂νφ − v(φ) ] ,   (2.1)
where g is newton's constant and r is the ricci scalar. the corresponding field equations
are
gμν + λ gμν = 8πg t^{matter}_{μν} ,   □φ = dv/dφ ,   (2.2)
where the energy-momentum tensor is given by
t^{matter}_{μν} = ∂μφ ∂νφ − (1/2) gμν g^{αβ} ∂αφ ∂βφ − gμν v(φ) .   (2.3)
the potential is chosen as
v(φ) = (λ/4πg) sinh²( √(4πg/3) φ ) + (gλ/24πg) [ 2√(3πg) φ cosh( √(16πg/3) φ ) − (9/8) sinh( √(16πg/3) φ ) − (1/8) sinh( 4√(3πg) φ ) ]   (2.4)
and it is given in terms of a coupling constant g. setting g = 0 we recover the action that
yields the mtz black hole [12]. this particular form of the potential is chosen so that
the field equations can be solved analytically. the qualitative nature of our results does
not depend on the detailed form of the potential. a similar potential was considered in a
different context in [5] (see also [26] for the derivation of a potential that yields analytic
solutions in the case of a spherical horizon). if one goes over to the jordan frame, in which
the scalar field obeys the klein-gordon equation
□φ − ξrφ − dv/dφ = 0 ,   (2.5)
with ξ = 1/6, the scalar potential has the form
v(φ) = −(2πgλ/9) φ⁴ − (gλ/16πg) [ √(16πg/3) φ (1 − (4πg/3)φ²) + (16πg/9) φ² (1 − (4πg/3)φ²) − (1 − (4πg/3)φ²)(1 + (4πg/3)φ²) ln( (1 + √(4πg/3) φ)/(1 − √(4πg/3) φ) ) ] .   (2.6)
evidently, the scalar field is conformally coupled but the conformal invariance is broken by
a non-zero value of g.
the mass of the scalar field is given by
m² = v″(0) = −2/l²   (2.7)
where we defined λ = −3/l². notice that it is independent of g and coincides with the scalar mass that yields the mtz black hole [12]. asymptotically (r → ∞), the scalar field behaves as φ ∼ r^{−∆±} where ∆± = 3/2 ± √(9/4 + m²l²). in our case ∆+ = 2 and ∆− = 1. both boundary conditions are acceptable as both give normalizable modes. we shall adopt the mixed boundary conditions (as r → ∞)
φ(r) = α/r + cα²/r² + . . . ,   c = −√(4πg/3) < 0 .   (2.8)
this choice of the parameter c coincides with the mtz solution [12].
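for illustration (this is not part of the original text), the relation m² = v″(0) = −2/l² and the weights ∆± can be verified symbolically; the minimal sketch below assumes the reconstruction of the potential (2.4) given above, and the symbol names are ours.

```python
import sympy as sp

phi, G, l, g = sp.symbols('phi G l g', positive=True)
Lam = -3/l**2                                   # cosmological constant Λ = -3/l²

# potential (2.4) as reconstructed above (G = newton's constant, g = coupling)
V = (Lam/(4*sp.pi*G))*sp.sinh(sp.sqrt(4*sp.pi*G/3)*phi)**2 \
    + (g*Lam/(24*sp.pi*G))*(2*sp.sqrt(3*sp.pi*G)*phi*sp.cosh(sp.sqrt(16*sp.pi*G/3)*phi)
                            - sp.Rational(9, 8)*sp.sinh(sp.sqrt(16*sp.pi*G/3)*phi)
                            - sp.Rational(1, 8)*sp.sinh(4*sp.sqrt(3*sp.pi*G)*phi))

m2 = sp.simplify(sp.diff(V, phi, 2).subs(phi, 0))   # eq. (2.7): expect -2/l², the g terms drop out
print(m2)

m2l2 = sp.simplify(m2*l**2)                          # = -2
print(sp.Rational(3, 2) + sp.sqrt(sp.Rational(9, 4) + m2l2),   # Δ+ = 2
      sp.Rational(3, 2) - sp.sqrt(sp.Rational(9, 4) + m2l2))   # Δ- = 1
```

the g-dependent part of (2.4) contributes nothing to v″(0), consistent with the remark below (2.7) that the scalar mass does not depend on g.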
solutions to the einstein equations with the boundary conditions (2.8) have been found
in the case of spherical horizons and shown to be unstable [23]. in that case, for α > 0,
it was shown that c < 0 always and the hairy black hole had positive mass. on the other
hand, mtz black holes, which have hyperbolic horizons and obey the boundary conditions
(2.8) with c < 0, can be stable if they have negative mass [14]. this is impossible with
spherical horizons, because they always enclose black holes of positive mass. the numerical
value of c is not important (except for the fact that c ̸= 0) and is chosen as in (2.8) for
convenience.
the field equations admit solutions which are black holes with topology r2 × σ, where
σ is a two-dimensional manifold of constant negative curvature. black holes with constant
negative curvature are known as topological black holes (tbhs - see, e.g., [27, 28]). the
simplest solution for λ = −3/l2 reads
ds² = −ftbh(ρ) dt² + dρ²/ftbh(ρ) + ρ² dσ² ,   ftbh(ρ) = ρ²/l² − 1 − ρ0/ρ ,   (2.9)
where ρ0 is a constant which is proportional to the mass and is bounded from below (ρ0 ≥ −2l/(3√3)), dσ² is the line element of the two-dimensional manifold σ which is locally
isomorphic to the hyperbolic manifold h2 and of the form
σ = h²/γ ,   γ ⊂ o(2, 1) ,   (2.10)
with γ a freely acting discrete subgroup (i.e., without fixed points) of isometries. the line
element dσ2 of σ can be written as
dσ2 = dθ2 + sinh2 θdφ2 ,
(2.11)
with θ ≥0 and 0 ≤φ < 2π being the coordinates of the hyperbolic space h2 or pseu-
dosphere, which is a non-compact two-dimensional space of constant negative curvature.
this space becomes a compact space of constant negative curvature with genus g ≥2 by
identifying, according to the connection rules of the discrete subgroup γ, the opposite edges
of a 4g-sided polygon whose sides are geodesics and is centered at the origin θ = φ = 0
of the pseudosphere. an octagon is the simplest such polygon, yielding a compact surface
of genus g = 2 under these identifications. thus, the two-dimensional manifold σ is a
compact riemann 2-surface of genus g ≥2. the configuration (2.9) is an asymptotically
locally ads spacetime. the horizon structure of (2.9) is determined by the roots of the
metric function ftbh(ρ), that is
ftbh(ρ) = ρ²/l² − 1 − ρ0/ρ = 0 .   (2.12)
for −2l/(3√3) < ρ0 < 0, this equation has two distinct non-degenerate solutions, corresponding to an inner and to an outer horizon ρ− and ρ+ respectively. for ρ0 ≥ 0, ftbh(ρ) has just one non-degenerate root and so the black hole (2.9) has one horizon ρ+. the horizons for both cases of ρ0 have the non-trivial topology of the manifold σ. we note that for ρ0 = −2l/(3√3), ftbh(ρ) has a degenerate root, but this horizon does not have an interpretation as a black hole horizon.
the boundary has the metric
ds²_∂ = −dt² + l² dσ² ,   (2.13)
so spatially it is a hyperbolic manifold of radius l (and of curvature −1/l).
the action (2.1) with a potential as in (2.4) has a static black hole solution with topology
r2 × σ and with scalar hair, and it is given by
ds² = [ r(r + 2r0)/(r + r0)² ] [ −f(r)dt² + dr²/f(r) + r²dσ² ] ,   (2.14)
where
f(r) = r²/l² − (g r0/l²) r − 1 + g r0²/l² − (1 − 2g r0²/l²)(r0/r)(2 + r0/r) + (g r²/2l²) ln(1 + 2r0/r) ,   (2.15)
and the scalar field is
φ(r) = √(3/4πg) arctanh( r0/(r + r0) ) ,   (2.16)
obeying the boundary conditions (2.8) by design.
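for illustration only, the following minimal numerical sketch (ours, not the authors') evaluates the solution just written down, assuming the reconstructions of (2.15) and (2.16) above; it checks that g = 0 collapses f(r) to the mtz form r²/l² − (1 + r0/r)² and that the hair decays like r0/r, as required by (2.8). the value used for newton's constant is only a placeholder.

```python
import numpy as np

def f_metric(r, r0, g, l=1.0):
    """metric function f(r) of eq. (2.15), as reconstructed above."""
    return (r**2/l**2 - g*r0*r/l**2 - 1 + g*r0**2/l**2
            - (1 - 2*g*r0**2/l**2)*(r0/r)*(2 + r0/r)
            + g*r**2/(2*l**2)*np.log(1 + 2*r0/r))

def phi_scalar(r, r0, G=1/(8*np.pi)):
    """scalar hair of eq. (2.16); G (newton's constant) is a placeholder value."""
    return np.sqrt(3/(4*np.pi*G))*np.arctanh(r0/(r + r0))

r = np.linspace(0.5, 50.0, 1000)
r0 = 0.3

# g = 0 collapses (2.15) to the mtz form r²/l² - (1 + r0/r)²
assert np.allclose(f_metric(r, r0, g=0.0), r**2 - (1 + r0/r)**2)

# at large r the hair falls off like sqrt(3/4πG) r0/r, consistent with (2.8)
print(phi_scalar(r[-1], r0), np.sqrt(3/(4*np.pi*(1/(8*np.pi))))*r0/r[-1])
```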
3
thermodynamics
to study the thermodynamics of our black hole solutions we consider the euclidean
continuation (t →iτ) of the action in hamiltonian form
i = ∫ [ π^{ij} ġ_{ij} + p φ̇ − n h − n^i h_i ] d³x dt + b ,   (3.1)
where πij and p are the conjugate momenta of the metric and the field respectively; b is a
surface term. the solution reads:
ds² = n²(r) f²(r) dτ² + f^{−2}(r) dr² + r²(r) dσ²   (3.2)
where
n(r) = r(r + 2r0)/(r + r0)² ,   f²(r) = [ (r + r0)²/(r(r + 2r0)) ] f(r) ,   r²(r) = r³(r + 2r0)/(r + r0)² ,   (3.3)
with a periodic τ whose period is the inverse temperature, β = 1/t.
the hamiltonian action becomes
i = −βσ ∫_{r+}^{∞} n(r) h(r) dr + b ,   (3.4)
where σ is the area of σ and
h = n r² { (1/8πg) [ (f²)′r′/r + 2f²r″/r + (1/r²)(1 + f²) + λ ] + (1/2) f²(φ′)² + v(φ) } .   (3.5)
the euclidean solution is static and satisfies the equation h = 0. thus, the value of the
action in the classical limit is just the surface term b, which should maximize the action
within the class of fields considered.
we now compute the action when the field equations hold. the condition that the
geometries which are permitted should not have conical singularities at the horizon imposes
t = f′(r+)/4π .   (3.6)
(3.6)
using the grand canonical ensemble (fixing the temperature), the variation of the surface
term reads
δb ≡δbφ + δbg ,
where
δb_g = (βσ/8πg) [ n ( r r′ δf² − (f²)′ r δr ) + 2f² r ( n δr′ − n′ δr ) ] |^{∞}_{r+} ,   (3.7)
and the contribution from the scalar field equals
δb_φ = βσ n r² f² φ′ δφ |^{∞}_{r+} .   (3.8)
for the metric, the variation of fields at infinity yields
δf²|_∞ = [ 2r0/l² − 2(3 + (9 − 8g)r0²/l²)/(3r) − 4r0(1 − 4r0²/l²)/r² + o(r^{−3}) ] δr0 ,
δφ|_∞ = √(3/4πg) [ 1/r − 2r0/r² + o(r^{−3}) ] δr0 ,
δr|_∞ = [ −r0/r + 3r0²/r² + o(r^{−3}) ] δr0 ,   (3.9)
so
δb_g|_∞ = (βσ/8πg) [ 6r0(r − 4(1 − 2g/9)r0)/l² − 2 + o(r^{−1}) ] δr0 ,
δb_φ|_∞ = (βσ/8πg) [ −6r0(r − 4r0)/l² + o(r^{−1}) ] δr0 .   (3.10)
the surface term at infinity is
b|_∞ = −βσ(3 − 8g r0²/l²) r0/(12πg) .   (3.11)
the variation of the surface term at the horizon may be found using the relations
δr|_{r+} = δr(r+) − r′|_{r+} δr+ ,   δf²|_{r+} = −(f²)′|_{r+} δr+ .
we observe that δb_φ|_{r+} vanishes, since f²(r+) = 0, and
δb|_{r+} = −(βσ/16πg) n(r+) (f²)′|_{r+} δr²(r+) = −(σ/4g) δr²(r+) .
thus the surface term at the horizon is
b|_{r+} = −(σ/4g) r²(r+) .   (3.12)
therefore, provided the field equations hold, the euclidean action reads
i = −βσ(3 − 8g r0²/l²) r0/(12πg) + (σ/4g) r²(r+) .   (3.13)
the euclidean action is related to the free energy through i = −βf. we deduce
i = s − βm ,   (3.14)
where m and s are the mass and entropy respectively,
m = σ(3 − 8g r0²/l²) r0/(12πg) ,   s = (σ/4g) r²(r+) = a_h/4g .   (3.15)
it is easy to show that the law of thermodynamics dm = tds holds. for g = 0, these
expressions reduce to the corresponding quantities for mtz black holes [12]. alternatively,
the mass of the black hole can be found by the ashtekar-das method [29]. a straightforward
calculation confirms the expression (3.15) for the mass.
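the first law quoted above can also be checked numerically along the family of hairy solutions. the sketch below is ours, not the authors': it parametrizes the solutions by ξ = r0/r+ as in the next subsection, assumes the reconstructed expressions (3.15) and (3.20)-(3.21), and sets σ = l = 1 and newton's constant to 1 purely for illustration.

```python
import numpy as np

def r_plus(xi, g):                      # horizon radius, eq. (3.20), l = 1
    return (1 + xi)/np.sqrt(1 + g*xi*(1 + xi)*(-1 + 2*xi + 2*xi**2)
                            + 0.5*g*np.log(1 + 2*xi))

def temperature(xi, g):                 # eq. (3.21)
    root = np.sqrt(1 + g*xi*(1 + xi)*(-1 + 2*xi + 2*xi**2) + 0.5*g*np.log(1 + 2*xi))
    num = (1 + xi*(1 + xi)*(4 - g*(1 + 2*xi + 2*xi**2))
           + 0.5*g*(1 + 2*xi)**2*np.log(1 + 2*xi))
    return num/(2*np.pi*(1 + 2*xi)*root)

def mass(xi, g):                        # eq. (3.15) with σ = l = 1, newton's constant = 1
    r0 = xi*r_plus(xi, g)
    return (3 - 8*g*r0**2)*r0/(12*np.pi)

def entropy(xi, g):                     # s = r²(r+)/4 with r²(r) = r³(r + 2r0)/(r + r0)²
    rp = r_plus(xi, g)
    r0 = xi*rp
    return rp**3*(rp + 2*r0)/(4*(rp + r0)**2)

g, xi, h = 3.0, 0.2, 1e-5
dM  = (mass(xi + h, g) - mass(xi - h, g))/(2*h)
TdS = temperature(xi, g)*(entropy(xi + h, g) - entropy(xi - h, g))/(2*h)
print(dM, TdS)   # the two derivatives agree, confirming dm = t ds along the family
```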
in the case of the topological black hole (2.12) the temperature, entropy and mass are given respectively by
t = (3/4πl)(ρ+/l − l/(3ρ+)) ,   s_tbh = σρ+²/4g ,   m_tbh = (σρ+/8πg)(ρ+²/l² − 1) ,   (3.16)
and also the law of thermodynamics dm = t ds is obeyed.
we note that, in the limit r0 → 0, f(r) → r²/l² − 1 from eq. (2.15) and the corresponding temperature (3.6) reads t = 1/(2πl), which equals the temperature of the topological black hole (3.16) in the limit ρ0 → 0 (ρ+ → l). the common limit
ds²_ads = −(r²/l² − 1) dt² + (r²/l² − 1)^{−1} dr² + r² dσ²   (3.17)
is a manifold of negative constant curvature possessing an event horizon at r = l. the tbh and our hairy black hole solution match continuously at the critical temperature
t0 = 1/(2πl) ,   (3.18)
which corresponds to mtbh = m = 0, with (3.17) a transient configuration. evidently,
at the critical point (3.18) a scaling symmetry emerges owing to the fact that the metric
becomes pure ads.
at the critical temperature (3.18) a higher-order phase transition occurs as in the case
of the mtz black hole (with g = 0). introducing the terms with g ≠ 0 in the potential does not alter this result qualitatively.
next we perform a detailed analysis of thermodynamics and examine several values of
the coupling constant g. henceforth we shall work with units in which l = 1. we begin
with a geometrical characteristic of the hairy black hole, the horizon radius r+ (root of
f(r), eq. (2.15)). in figure 1 we show the r0 dependence of the horizon for representative
values of the coupling constant, g = 3 and g = 0.0005. we observe that, for g = 3, the
horizon may correspond to more than one value of the parameter r0. for g = 0.0005 we see
that, additionally, there is a maximum value of the horizon radius.
we note that one may express the radius of the horizon r+ in terms of the dimensionless
parameter
ξ = r0/r+   (3.19)
as
r+ = (1 + ξ) / √( 1 + gξ(1 + ξ)(−1 + 2ξ + 2ξ²) + (g/2) ln(1 + 2ξ) ) .   (3.20)
the temperature reads
t = [ 1 + ξ(1 + ξ)(4 − g(1 + 2ξ + 2ξ²)) + (g/2)(1 + 2ξ)² ln(1 + 2ξ) ] / [ 2π(1 + 2ξ) √( 1 + gξ(1 + ξ)(−1 + 2ξ + 2ξ²) + (g/2) ln(1 + 2ξ) ) ] ,   (3.21)
or equivalently
t = (r+ + r0)(r+² + 4r0r+ + 4r0² − 8g r0³ r+ − 8g r0⁴) / (2π r+³) ,   (3.22)
a third order equation in r+, showing that, for a given temperature, there are in general
three possible values of ξ. thus we obtain up to three different branches of our hairy black
hole solution.
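as an illustration of the multiple branches just described (this sketch is ours and assumes the reconstructed (3.20)-(3.21), with l = 1), one can scan ξ at a fixed temperature and collect the values of r0 for which (3.21) reproduces that temperature; the grid and target temperature below are purely illustrative.

```python
import numpy as np

def r_plus(xi, g):                      # eq. (3.20)
    return (1 + xi)/np.sqrt(1 + g*xi*(1 + xi)*(-1 + 2*xi + 2*xi**2)
                            + 0.5*g*np.log(1 + 2*xi))

def temperature(xi, g):                 # eq. (3.21)
    root = np.sqrt(1 + g*xi*(1 + xi)*(-1 + 2*xi + 2*xi**2) + 0.5*g*np.log(1 + 2*xi))
    num = (1 + xi*(1 + xi)*(4 - g*(1 + 2*xi + 2*xi**2))
           + 0.5*g*(1 + 2*xi)**2*np.log(1 + 2*xi))
    return num/(2*np.pi*(1 + 2*xi)*root)

g, T_target = 3.0, 0.16                 # a temperature inside the multi-branch window
xi = np.linspace(-0.3, 0.8, 400000)
T = temperature(xi, g)
s = np.sign(T - T_target)
for i in np.where(s[:-1] != s[1:])[0]:  # each sign change is one branch
    x = 0.5*(xi[i] + xi[i+1])
    print("xi ~ %+.4f   r0 ~ %+.4f" % (x, x*r_plus(x, g)))
```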
we start our analysis with a relatively large value of the coupling constant g, namely
g = 3 and calculate the horizon radius, temperature and euclidean action for various
values of r0. in figure 2, left panel, we depict r0 versus t and it is clear that there is a t
interval for which there are really three corresponding values for r0. outside this interval,
there is just one solution. the corresponding graph for the euclidean actions may be seen
in the right panel of the same figure. the action for the topological black hole with the
common temperature t is represented by a continuous line, while the actions for the hairy
black holes are shown in the form of points. we note that equation (2.12) yields for the
temperature of the topological black hole t = (1/4π)(3ρ+/l² − 1/ρ+), which for l = 1 gives ρ+ = 2πt/3 + √((2πt/3)² + 1/3).
the largest euclidean action (smallest free energy) will dominate.
there are three branches for the hairy black hole, corresponding to the three different
values of r0. in particular, for some fixed temperature (e.g., t = 0.16) the algebraically
lowest r0 corresponds to the algebraically lowest euclidean action; similarly, the medium
and largest r0 parameters correspond to the medium and largest euclidean actions. the
medium euclidean action for the hairy black hole is very close to the euclidean action for the
topological black hole. in fact, it is slightly smaller than the latter for t < 1/2π ≈ 0.159 and
slightly larger after that value. if it were the only branch present, one would thus conclude
that the hairy black hole dominates for small temperatures, while for large temperatures
figure 1: horizon versus parameter r0 for g = 3 (left panel) and g = 0.0005 (right panel).
the topological black hole would be preferred. this would be a situation similar to the one
of the mtz black hole.
however, the two additional branches change completely our conclusions. the upper
branch shows that the hairy black hole dominates up to t ≈0.20. when the coupling
constant g decreases, equation (3.21), along with the demand that the temperature should be positive, shows that the acceptable values of r0 are two rather than three, as may be seen
in figure 3, left panel. the lowest branch of the previous corresponding figure 2 shrinks
for decreasing g and it finally disappears. an interesting consequence of this is that the
temperature has an upper limit. the graph for the euclidean actions (figure 3, right panel)
is influenced accordingly. there are just two branches for the hairy black hole, rather than
three in figure 2 and the figure ends on its right hand side at t ≈1.25. the continuous line
represents the euclidean action for the topological black hole with the same temperature.
similar remarks hold as in the previous case, e.g., the phase transition moves to t ≈0.80.
in addition, the largest value of r0 corresponds to the upper branch of the hairy black hole.
to understand the nature of this phase transition it is instructive to draw a kind of
phase diagram, so that we can spot which is the dominant solution for a given pair of g and
t. we depict our result in figure 4. the hairy solution dominates below the curve which
shows the critical temperature as a function of the coupling constant g. the most striking
feature of the graph is that the critical temperature diverges as g →0. thus, it does not
converge to the mtz value 1/2π ≈ 0.159 at g = 0. for even the slightest nonzero values of
g the critical temperature reaches extremely large values! this appears to put the conformal
point (mtz black hole) in a special status within the set of these hairy black holes. in
other words, the restoration of conformal invariance is not a smooth process, and the mtz
black hole solution cannot be obtained in a continuous way as g →0. in fact, it seems that
(even infinitesimally) away from the conformal point g = 0 black holes are mostly hairy!
4
stability analysis
to perform the stability analysis of the hairy black hole it is more convenient to work in
the einstein frame. henceforth, we shall work in units in which the radius of the boundary
figure 2: parameter r0 (left panel) and euclidean actions versus temperature for g = 3
(right panel).
is l = 1.
we begin with the hairy black hole line element,
ds²_0 = [ r̂(r̂ + 2r0)/(r̂ + r0)² ] [ −f(r̂)dt² + dr̂²/f(r̂) + r̂²dσ² ] ,   (4.1)
which can be written in the form
ds² = −(f0/h0²) dt² + dr²/f0 + r²dσ²   (4.2)
using the definitions
f0(r) = f(r̂) [ 1 + r0²/((r̂ + 2r0)(r̂ + r0)) ]² ,   h0(r) = [ 1 + r0²/((r̂ + 2r0)(r̂ + r0)) ] (r̂ + r0)/√( r̂(r̂ + 2r0) ) ,   (4.3)
figure 3: parameter r0 (left panel) and euclidean actions versus temperature for g = 0.0005
(right panel).
figure 4: phase diagram. for points under the curve the hairy solution will be preferred.
under the change of coordinates
r = r̂^{3/2}(r̂ + 2r0)^{1/2}/(r̂ + r0) .   (4.4)
the scalar field solution reads
φ0(r) = √(3/4πg) tanh^{−1}( r0/(r̂ + r0) ) ,   (4.5)
obeying the boundary conditions (2.8) with
α = α0 = √(3/4πg) |r0| .   (4.6)
we are interested in figuring out when the black hole is unstable (losing its hair to turn
into a tbh) and discuss the results in the context of thermodynamic considerations. to
this end, we apply the perturbation
f(r, t) = f0(r) + f1(r) e^{ωt} ,   h(r, t) = h0(r) + h1(r) e^{ωt} ,   φ(r, t) = φ0(r) + (φ1(r)/r) e^{ωt} ,   (4.7)
which respects the boundary conditions (2.8) with ω > 0 for an instability to develop.
the field equations read:
−1 − f − r f′ + r f h′/h + 8πg r² v(φ) = 0 ,   (4.8)
ḟ + r f φ̇ φ′ = 0 ,   (4.9)
2h′ + r h ( (h²/f²) φ̇² + φ′² ) = 0 ,   (4.10)
( (h/f) φ̇ )˙ − (1/r²) ( (r²f/h) φ′ )′ + (1/h) v′(φ) = 0 .   (4.11)
the field equations give a schrödinger-like wave equation for the scalar perturbation,
−d²φ1/dr*² + v φ1 = −ω²φ1 ,   (4.12)
where we defined the tortoise coordinate
dr*/dr = h0/f0 ,   (4.13)
and the effective potential is given by
v = (f0/h0²) [ −(1/2)(1 + r²φ0′²)φ0′² f0 + (1 − r²φ0′²) f0′/r + 2rφ0′ v′(φ0) + v″(φ0) ] .   (4.14)
the explicit form of the schrödinger-like equation reads:
−f(r̂) d/dr̂ ( f(r̂) dφ1/dr̂ ) + v φ1 = −ω²φ1 ,   (4.15)
where the functional form of the function f has been given in equation (2.15) and 1
v = [ r0² f(r̂) / ( r̂² (1 + 2r0/r̂)² (1 + 3r0/r̂ + 3r0²/r̂²)² ) ] { 5 + [2 + (11g + 54)r0²]/(r0 r̂) + [29 + (47g + 189)r0²]/r̂² + r0[150 + (−3g + 270)r0²]/r̂³ + r0²[396 + (−351g + 135)r0²]/r̂⁴ + r0³(612 − 873g r0²)/r̂⁵ + r0⁴(582 − 1047g r0²)/r̂⁶ + 324 r0⁵(1 − 2g r0²)/r̂⁷ + 81 r0⁶(1 − 2g r0²)/r̂⁸ + (g/2) [ 5 + 54 r0/r̂ + 189 r0²/r̂² + 270 r0³/r̂³ + 135 r0⁴/r̂⁴ ] ln(1 + 2r0/r̂) }   (4.16)
near the horizon the schrödinger-like equation simplifies to
−[f′(r̂+)]² ε d/dε ( ε dφ1/dε ) = −ω²φ1 ,   ε = r̂ − r̂+ ,   (4.20)
and its acceptable solution reads
φ1 ∼ ε^{κω} ,   κ = 1/f′(r̂+) ,   ω > 0 .   (4.21)
regularity of the scalar field at the horizon (r → r+) requires the boundary conditions
φ1 = 0 ,   (r − r+)φ1′ = κωφ1 ,   κ > 0 .   (4.22)
1we have set 8πg = 1 .
for a given ω > 0, they uniquely determine the wavefunction.
at the boundary (ˆ
r →∞), the wave equation is approximated by
−d2φ1
dr2
∗
+ 5r2
0φ1 = −ω2φ1 ,
(4.23)
with solutions
φ1 = e±er∗,
e =
q
ω2 + 5r2
0 ,
(4.24)
where r∗=
r
dˆ
r
f(ˆ
r) = −1
r + . . . . therefore, for large r,
φ1 = a + b
r + . . .
(4.25)
to match the boundary conditions (2.8), we need
b
a = 2cα0 = −2r0 .
(4.26)
since the wavefunction has already been determined by the boundary conditions at the
horizon and therefore also the ratio b/a, this is a constraint on ω. if (4.26) has a solution,
then the black hole is unstable. if it does not, then there is no instability of this type
(however, one should be careful with non-perturbative instabilities).
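the procedure just described can be organized as a simple shooting computation: integrate (4.15) outwards from the horizon with the behaviour (4.21), read off a and b from the large-r form (4.25), and compare b/a with −2r0 as in (4.26). the sketch below is ours and purely schematic: it assumes the reconstructed metric function (2.15), takes the effective potential as a user-supplied function (e.g. the expression (4.16)), and uses l = 1; the grid, bracket and fitting window are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def f_metric(r, r0, g):                 # eq. (2.15) with l = 1
    return (r**2 - g*r0*r - 1 + g*r0**2
            - (1 - 2*g*r0**2)*(r0/r)*(2 + r0/r)
            + 0.5*g*r**2*np.log(1 + 2*r0/r))

def b_over_a(omega, r0, g, V_eff, r_max=200.0):
    """integrate (4.15) out of the horizon and read off b/a from (4.25)."""
    rp = brentq(f_metric, 0.05, 50.0, args=(r0, g))   # assumes a single horizon in the bracket
    h = 1e-6
    kappa = 2*h/(f_metric(rp + h, r0, g) - f_metric(rp - h, r0, g))   # κ = 1/f'(r+), eq. (4.21)
    eps = 1e-3                                        # start slightly outside the horizon
    y0 = [eps**(kappa*omega), kappa*omega*eps**(kappa*omega - 1)]     # φ1 ~ ε^{κω}

    def rhs(r, y):                                    # (4.15) rewritten as a first-order system
        f = f_metric(r, r0, g)
        df = (f_metric(r + h, r0, g) - f_metric(r - h, r0, g))/(2*h)
        return [y[1], -(df/f)*y[1] + (omega**2 + V_eff(r, r0, g))*y[0]/f**2]

    sol = solve_ivp(rhs, [rp + eps, r_max], y0, rtol=1e-8, atol=1e-12, dense_output=True)
    rs = np.linspace(0.5*r_max, r_max, 200)
    b, a = np.polyfit(1.0/rs, sol.sol(rs)[0], 1)      # fit φ1 ≈ a + b/r, eq. (4.25)
    return b/a

# an instability requires b/a = -2 r0, eq. (4.26); scanning ω > 0 and looking for a crossing
# reproduces the analysis of figures 5 and 6 once the potential (4.16) is supplied as V_eff.
# smoke test with a dummy potential:
# print(b_over_a(0.5, 0.4, 3.0, V_eff=lambda r, r0, g: 0.0))
```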
in figure 5 (left panel) we show the ratio b/a for the standard mtz black hole (corre-
sponding to g = 0) versus ω at a typical value of the mass parameter, namely r0 = −0.10.
it is obvious that the value of the ratio lies well below the value −2r0 = +0.20. it is clearly
impossible to have a solution to this equation, so the solution is stable. in fact this value of
the mass parameter lies in the interesting range for this black hole, since thermodynamics
dictates that for negative values of r0 mtz black holes are favored against topological black
holes. we find that the mtz black holes turn out to be stable.
next we examine the case g = 0.0005, for which we have presented data before in figure
3. as we have explained there, the most interesting branch of the graphs is the upper
branch, on the right panel, which dominates the small t part of the graph and corresponds
to large values of r0 (typically around 30 on the left panel of the same figure). thus we set
g = 0.0005, r0 = +30 and plot b/a versus ω in the right panel of figure 5. it is clear that
the curve lies systematically below the quantity −2r0 = −60 and no solution is possible,
so the hairy black hole with these parameters is stable.
finally we come to the case with g = +3, which has three branches. in the left panel of
figure 6 we show the results for r0 = +0.40, which corresponds to the upper branch of figure
2; the curve again lies below the quantity −2r0 = −0.80 and no solution is possible, so this
hairy black hole is stable. in the right panel of figure 6 we show the results for r0 = −0.30,
which corresponds to the lowest branch of figure 2, which disappears for decreasing g. in
this case we find something qualitatively different: the curve cuts the line −2r0 = +0.60
around ω ≈0.40 and a solution is possible, signaling instability. thus, for g = 3 the hairy
black hole may be stable or unstable, depending on the value of r0.
figure 5: stability of the standard mtz black hole (left panel) for r0 = −0.10; similarly
for the hairy black hole at g = 0.0005, r0 = +30 (right panel).
figure 6: stability of the hairy black hole for g = 3 and r0 = +0.40 (left panel); similarly
for the hairy black hole at r0 = −0.30 (right panel).
5
conclusions
we presented a new class of hairy black hole solutions in asymptotically ads space.
the scalar field is minimally coupled to gravity with a non-trivial self-interaction potential.
a coupling constant g in the potential parametrizes our solutions. if g = 0 the conformally invariant mtz black hole solution, conformally coupled to a scalar field, is obtained. if
g ̸= 0 a whole new class of hairy black hole solutions is generated. the scalar field is
conformally coupled but the solutions are not conformally invariant. these solutions are
perturbatively stable near the conformal point for negative mass and they may develop
instabilities for positive mass.
we studied the thermodynamical properties of the solutions. calculating the free energy we showed that for general g, apart from the phase transition of the mtz black hole at the critical temperature t = 1/(2πl), there is another critical temperature, higher than the mtz critical temperature, which depends on g and at which a first-order phase transition occurs from the vacuum black hole towards a hairy one. the existence of a second critical temperature
is a manifestation of the breaking of conformal invariance. as g →0 the second critical
temperature diverges, indicating that there is no smooth limit to the mtz solution.
the solutions presented and discussed in this work have hyperbolic horizons. there are also hairy black hole solutions with flat or spherical horizons of similar form. however, these solutions are pathological. in the solutions with flat horizons, the scalar field diverges at the horizon, in accordance with the "no-hair" theorems. in the case of spherical horizons, calculating the free energy we find that the vacuum solution is always preferred over the hairy configuration. moreover, studying the asymptotic behaviour of the solutions, we found that they are unstable for any value of the mass.
acknowledgments
g. s. was supported in part by the us department of energy under grant de-fg05-
91er40627.
references
[1] j. e. chase, "event horizons in static scalar-vacuum space-times," commun. math.
phys. 19, 276 (1970);
j. d. bekenstein, "transcendence of the law of baryon-number conservation in black-
hole physics," phys. rev. lett. 28, 452 (1972);
j. d. bekenstein, "nonexistence of baryon number for static black holes," phys.
rev. d 5, 1239 (1972);
j. d. bekenstein, "nonexistence of baryon number for black holes. ii," phys. rev.
d 5, 2403 (1972);
m. heusler, "a mass bound for spherically symmetric black hole spacetimes," class.
quant. grav. 12, 779 (1995) [arxiv:gr-qc/9411054];
m. heusler and n. straumann, "scaling arguments for the existence of static, spher-
ically symmetric solutions of self-gravitating systems," class. quant. grav. 9, 2177
(1992);
d. sudarsky, "a simple proof of a no-hair theorem in einstein-higgs theory," class.
quant. grav. 12, 579 (1995);
j. d. bekenstein, "novel 'no-scalar-hair' theorem for black holes," phys. rev. d 51,
r6608 (1995).
[2] a. e. mayo and j. d. bekenstein, "no hair for spherical black holes: charged and
nonminimally coupled scalar field with self–interaction," phys. rev. d 54, 5059 (1996)
[arxiv:gr-qc/9602057];
j. d. bekenstein, "black hole hair: twenty-five years after," arxiv:gr-qc/9605059.
[3] n. bocharova, k. bronnikov and v. melnikov, vestn. mosk. univ. fiz. astron. 6, 706
(1970);
j. d. bekenstein, annals phys. 82, 535 (1974);
j. d. bekenstein, "black holes with scalar charge," annals phys. 91, 75 (1975).
[4] k. a. bronnikov and y. n. kireyev, "instability of black holes with scalar charge,"
phys. lett. a 67, 95 (1978).
[5] k. g. zloshchastiev, "on co-existence of black holes and scalar field," phys. rev. lett.
94, 121101 (2005) [arxiv:hep-th/0408163].
[6] t. torii, k. maeda and m. narita, "no-scalar hair conjecture in asymptotic de sitter
spacetime," phys. rev. d 59, 064027 (1999) [arxiv:gr-qc/9809036].
[7] c. martinez, r. troncoso and j. zanelli, "de sitter black hole with a conformally
coupled scalar field in four dimensions," phys. rev. d 67, 024008 (2003) [arxiv:hep-
th/0205319].
[8] t. j. t. harper, p. a. thomas, e. winstanley and p. m. young, "instability of a
four-dimensional de sitter black hole with a conformally coupled scalar field," phys.
rev. d 70, 064023 (2004) [arxiv:gr-qc/0312104].
[9] g. dotti, r. j. gleiser and c. martinez, "static black hole solutions with a self interact-
ing conformally coupled scalar field," phys. rev. d 77, 104035 (2008) [arxiv:0710.1735
[hep-th]].
[10] t. torii, k. maeda and m. narita, "scalar hair on the black hole in asymptotically
anti-de sitter spacetime," phys. rev. d 64, 044007 (2001).
[11] e. winstanley, "on the existence of conformally coupled scalar field hair for black
holes in (anti-)de sitter space," found. phys. 33, 111 (2003) [arxiv:gr-qc/0205092].
[12] c. martinez, r. troncoso and j. zanelli, "exact black hole solution with a minimally
coupled scalar field," phys. rev. d 70, 084035 (2004) [arxiv:hep-th/0406111].
[13] c. martinez, j. p. staforelli and r. troncoso, "topological black holes dressed with a
conformally coupled scalar field and electric charge," phys. rev. d 74, 044028 (2006)
[arxiv:hep-th/0512022];
c. martinez and r. troncoso, "electrically charged black hole with scalar hair," phys.
rev. d 74, 064007 (2006) [arxiv:hep-th/0606130].
[14] g. koutsoumbas, e. papantonopoulos and g. siopsis, "exact gravity dual of a gap-
less superconductor," jhep 0907, 026 (2009) [arxiv:0902.0733 [hep-th]].
[15] g. koutsoumbas, s. musiri, e. papantonopoulos and g. siopsis, "quasi-normal modes
of electromagnetic perturbations of four-dimensional topological black holes with scalar
hair," jhep 0610, 006 (2006) [arxiv:hep-th/0606096];
g. koutsoumbas, e. papantonopoulos and g. siopsis, "phase transitions in charged
topological-ads black holes," jhep 0805, 107 (2008) [arxiv:0801.4921 [hep-th]];
g. koutsoumbas, e. papantonopoulos and g. siopsis, "discontinuities in scalar
perturbations of topological black holes," class. quant. grav. 26, 105004 (2009)
[arxiv:0806.1452 [hep-th]].
[16] c. charmousis, t. kolyvaris and e. papantonopoulos, "charged c-metric with con-
formally coupled scalar field," class. quant. grav. 26, 175012 (2009) [arxiv:0906.5568
[gr-qc]].
[17] a. anabalon and h. maeda, "new charged black holes with conformal scalar hair,"
arxiv:0907.0219 [hep-th].
[18] k. farakos, a. p. kouretsis and p. pasipoularides, "anti de sitter 5d black hole
solutions with a self-interacting bulk scalar field: a potential reconstruction approach,"
phys. rev. d 80, 064020 (2009) [arxiv:0905.1345 [hep-th]].
[19] n. ohta and t. torii, "black holes in the dilatonic einstein-gauss-bonnet theory
in various dimensions iv - topological black holes with and without cosmological
term," arxiv:0908.3918 [hep-th].
[20] k.-i. maeda, "towards the einstein-hilbert action via conformal transformation,"
phys. rev. d 39 3159 (1989).
[21] v. faraoni, e. gunzig and p. nardone, fund. cosmic phys. 20, 121 (1999) [arxiv:gr-
qc/9811047].
[22] v. faraoni and s. sonego, "on the tail problem in cosmology," phys. lett. a 170,
413 (1992) [arxiv:astro-ph/9209004];
t. w. noonan, "huygen's principle in conformally flat spacetimes," class. quant.
grav. 12, 1087 (1995).
[23] t. hertog and k. maeda, "stability and thermodynamics of ads black holes with
scalar hair," phys. rev. d 71, 024001 (2005) [arxiv:hep-th/0409314].
[24] s. a. hartnoll, c. p. herzog and g. t. horowitz, "building a holographic supercon-
ductor," phys. rev. lett. 101, 031601 (2008) [arxiv:0803.3295 [hep-th]].
[25] s. a. hartnoll, c. p. herzog and g. t. horowitz, "holographic superconductors,"
jhep 0812, 015 (2008) [arxiv:0810.1563 [hep-th]].
[26] d. f. zeng, "an exact hairy black hole solution for ads/cft superconductors,"
arxiv:0903.2620 [hep-th].
[27] r. b. mann, "topological black holes – outside looking in," arxiv:gr-qc/9709039.
[28] d. birmingham, "topological black holes in anti-de sitter space," class. quant.
grav. 16, 1197 (1999) [arxiv:hep-th/9808032].
[29] a. ashtekar and s. das, "asymptotically anti-de sitter space-times: conserved quan-
tities," class. quant. grav. 17, l17 (2000) [arxiv:hep-th/9911230].
|
0911.1712 | spin gaps and spin-flip energies in density-functional theory | energy gaps are crucial aspects of the electronic structure of finite and
extended systems. whereas much is known about how to define and calculate
charge gaps in density-functional theory (dft), and about the relation between
these gaps and derivative discontinuities of the exchange-correlation
functional, much less is know about spin gaps. in this paper we give
density-functional definitions of spin-conserving gaps, spin-flip gaps and the
spin stiffness in terms of many-body energies and in terms of single-particle
(kohn-sham) energies. our definitions are as analogous as possible to those
commonly made in the charge case, but important differences between spin and
charge gaps emerge already on the single-particle level because unlike the
fundamental charge gap spin gaps involve excited-state energies. kohn-sham and
many-body spin gaps are predicted to differ, and the difference is related to
derivative discontinuities that are similar to, but distinct from, those
usually considered in the case of charge gaps. both ensemble dft and
time-dependent dft (tddft) can be used to calculate these spin discontinuities
from a suitable functional. we illustrate our findings by evaluating our
definitions for the lithium atom, for which we calculate spin gaps and spin
discontinuities by making use of near-exact kohn-sham eigenvalues and,
independently, from the single-pole approximation to tddft. the many-body
corrections to the kohn-sham spin gaps are found to be negative, i.e., single
particle calculations tend to overestimate spin gaps while they underestimate
charge gaps.
| introduction
there is hardly any electronic property of a system
that does not depend on whether there is an energy gap
for charge excitations, or for particle addition and re-
moval. similarly, there is hardly any magnetic property
of a system that does not depend in some way on whether
there is an energy gap for flipping a spin, or for adding
and removing spins from the system.
the reliable calculation of charge gaps1 from first prin-
ciples is nontrivial and still faces practical problems (rel-
evant aspects are reviewed below), but at least con-
ceptually it is clear how charge gaps are to be defined
and quantified within modern electronic-structure meth-
ods, such as density-functional theory (dft).2,3,4 on the
other hand, much less is is known about how to calculate,
or even define, spin gaps.
in the present paper we show how to define and cal-
culate the spin gap in spin-dft (sdft), and predict
that such calculations will encounter a spin-gap problem
similar to the band-gap problem familiar from applica-
tions of dft to semiconductors or to strongly correlated
systems.
section ii of this paper is devoted to charge gaps. in
sec. ii a we recapitulate the conceptual difference be-
tween fundamental gaps and excitation gaps. in sec. ii b
we then recall the quantitative definition of the funda-
mental gap and related quantities, such as the single-
particle gap, and particle addition and removal energies.
section ii c summarizes key aspects of the derivative dis-
continuity, while sec. ii d describes the connection be-
tween gaps and discontinuities within the framework of
ensemble dft. although the final results of these sec-
tions are well known, our treatment is different from the
usual one in so far as we introduce many-body corrections
to the gap and derivative discontinuities in completely
independent ways, related only a posteriori via ensemble
dft. this way of proceeding is useful for performing the
generalization to the spin case.
for both the fundamental gap and the optical exci-
tation gap, the gapped degree of freedom is related to
particles: either particles are added to or removed from
the system, or particles are excited to higher energy lev-
els within the system under study. in ordinary atoms,
molecules and solids, these particles are electrons, and
the particle gaps of many-electron systems are a key
property in determining the functionality of today's elec-
tronic devices.
the last decade has witnessed an enormous growth of
interest in another type of system, and in devices result-
ing from them, in which the key degree of freedom is the
spin. in the resulting field of spintronics, and the develop-
ment of spintronic devices, one is interested in controlling
and manipulating the spin degrees of freedom indepen-
dently of, or in addition to, the charge degrees of freedom.
here, the issue of the spin gap arises, and a number of
questions for electronic-structure and many-body theory
appear: what is the energy required to add a spin to
the system? what is the energy cost of flipping a spin?
how do these concepts differ from the fundamental and
optical gaps involving particles? can we calculate spin
gaps from spin-density-functional theory, and if yes, what
type of exchange-correlation (xc) functional is required?
in sec. iii, we answer these questions.
in sec. iii a we contrast spin gaps with charge gaps,
and in sec. iii b we propose a set of many-body and
single-particle definitions for quantities related to the
spin gap, such as spin-flip energies and the spin stiffness.
we take care to ensure that all quantities appearing in
our definitions can, in principle, be calculated from con-
ventional sdft or time-dependent sdft (tdsdft),
and try to make the definitions in the spin case as analo-
gous as possible to the charge case. however, this analogy
can only be carried up to a certain point, and important
differences between charge gaps and spin gaps emerge al-
ready at this level. as a simple example, we consider,
in sec. iii c, the lithium atom, for which we confront
calculated and experimental spin gaps.
in sec. iii d we then use ensembles dft to relate
the spin gap to a derivative discontinuity that is simi-
lar to, but distinct from, the one usually considered in
the charge case. finally, in sec. iii f, we investigate the
connection to excitation gaps calculated from tdsdft.
equations are given that allow one to extract the var-
ious spin gaps and related quantities from noncollinear
tdsdft calculations. for illustrative purposes we eval-
uate these for the lithium atom, and compare the gaps
and discontinuities obtained from time-dependent dft
to those obtained in sec. iii c from time-independent
considerations.
sec. iv contains our conclusions.
ii.
charge gap
a.
fundamental gaps vs. excitation gaps
to provide the background for this investigation, let us
first briefly recapitulate pertinent aspects of charge (or
particle1) gaps.
while by definition all gaps involve energy differences
between a lower-lying state (in practice often the ground
state) and a state of higher energy, important differences
depend on how the extra energy is added to the system
and what degrees of freedom absorb it. therefore, differ-
ent notions of gap are appropriate for different purposes.
for processes in which particles are added to or removed
from the system, which is subsequently allowed to relax
to the ground state appropriate to the new particle num-
ber, the key quantity is the fundamental gap (sometimes
also called the quasiparticle gap) which is calculated from
differences of ground-state energies of systems with dif-
ferent particle number. as such, it is relevant for instance
in transport phenomena and electron-transfer reactions.
if energy is added by means of radiation, on the other
hand, the particle number does not change, and the rele-
vant gap is an excitation energy of the n-particle system.
this excitation gap (sometimes also called the optical
gap), is relevant, e.g., in spectroscopy.
in first-principles electronic-structure calculations, ex-
citation gaps are today often calculated from time-
dependent density-functional theory (tddft). funda-
mental gaps, on the other hand, involve ground-state en-
ergies of systems with different particle numbers, and
should thus, in principle, be accessible by means of static
(ground-state) dft. however, it is well known that com-
mon approximations to dft encounter difficulties in this
regard. in semiconductors, for example, calculated fun-
damental gaps are often greatly underestimated relative
to experiment, and in strongly-correlated systems such as
transition-metal oxides, gapped materials are frequently
incorrectly predicted to be metallic, i.e., to have no gap
at all. the resulting band-gap problem of dft has been
intensely studied for many decades.
a major breakthrough in this field was the discov-
ery of the derivative discontinuity of the exact exchange-
correlation (xc) functional of dft, which was shown to
account for the difference between the gap obtained from
solving the single-particle kohn-sham (ks) equations of
dft, and the true fundamental gap.5,6,7 the problems
occurring in practice for semiconductors and strongly-
correlated systems are therefore attributed to the fact
that common local and semilocal approximations to the
exact xc functional do not have such a discontinuity. the
development of dft-based methods allowing to nonem-
pirically predict the presence and size of gaps in many-
electron systems continues to be a key issue of electronic-
structure theory and computational materials science.
b.
definition of fundamental charge gaps
the fundamental charge gap eg is defined as the dif-
ference
eg(n) = i(n) −a(n),
(1)
where the electron affinity (energy gained by bringing in
a particle from infinity) and ionization energy (energy it
costs to remove a particle to infinity) are defined in terms
of ground-state energies of the n-particle system, as
a(n) = e(n) −e(n + 1)
(2)
i(n) = e(n −1) −e(n).
(3)
the order of terms in these differences is the conventional
choice. the definition of the fundamental gap is in terms
of processes involving addition and removal of charge and
spin. the change in the respective quantum numbers is
±1 in n, and ±1/2 in s. in chemistry,3 the average of
i and a is identified with the electronegativity of the
n-particle system: (i(n) + a(n))/2 = χ(n).
the corresponding kohn-sham gap is defined analo-
gously as
eg,ks(n) = iks(n) −aks(n),
(4)
where iks(n) = eks(n −1)−eks(n) and aks(n) =
eks(n) −eks(n + 1). since the ks total energy is
simply the sum of the ks eigenvalues, eks = Σ_{k=1}^{n} ǫk, this reduces to iks(n) = −ǫn(n) and aks(n) =
−ǫn+1(n), from which one obtains the usual form
eg,ks(n) = ǫn+1(n) −ǫn(n),
(5)
where ǫn(n) and ǫn+1(n) are the highest occupied and
the lowest unoccupied state of the n-particle system, re-
spectively.
the fundamental gap can also be written in terms of
ks eigenvalues by means of the ionization-potential theo-
rem (sometimes known as koopmans' theorem of dft),
which states
i(n) = −ǫn(n)
(6)
a(n) = i(n + 1) = −ǫn+1(n + 1),
(7)
so that i(n) ≡iks(n), and
eg(n) = ǫn+1(n + 1) −ǫn(n).
(8)
note that in contrast with the ks gap (5) these eigen-
values pertain to different systems.
the relation between both gaps is established by
rewriting the fundamental gap as
eg = eg,ks + ∆xc,
(9)
which defines ∆xc as the xc correction to the single-
particle gap.
by making use of the previous relations
we can cast ∆xc as8,9,10,11
∆xc = ǫn+1(n +1)−ǫn+1(n) = aks(n)−a(n). (10)
the important thing to notice in these expressions is
that, due to protection by koopmans' theorem, the ion-
ization energy does not contribute to the xc correction
∆xc, so that the correction of the affinity and of the fun-
damental gap are one and the same quantity. also, note
that all of these definitions can be made without any
recourse to ensemble dft and without any mention of
derivative discontinuities.
c.
nonuniqueness and derivative discontinuities
the basic euler equation of dft is
δe[n]/δn(r) = μ .   (11)
since e[n] = f[n] +
r
d3r vext(r)n(r) and eks[n] =
ts[n] + r d3r vs(r)n(r), this implies
δf[n]/δn(r) = μ − vext(r)   (12)
and
δts[n]/δn(r) = μ − vs(r) ,   (13)
where ts[n] is the noninteracting kinetic energy func-
tional, f[n] = t [n] + u[n] is the internal energy func-
tional, expressed in terms of the interacting kinetic en-
ergy t [n] and the interaction energy u[n], μ is the chem-
ical potential, vext(r) is the external potential and vs(r)
the effective ks potential.
both the effective and the external potential are only
defined up to a constant, which does not change the form
of the eigenfunctions. consider now a gapped open sys-
tem, connected to a particle reservoir with fixed chemical
potential initially in the gap, and gradually change the
constant. as long as the change is sufficiently small, the
chemical potential remains in the gap, the density n(r)
does not change, and the derivatives on the left-hand side
of eqs. (12) and (13) change continuously.
however, once the change in the constant is large
enough to affect the number of occupied levels, the sit-
uation changes: as soon as a new level falls below the
chemical potential, or emerges above it, the number of
particles in the system changes discontinuously by an in-
teger, and the chemical potential adjusts itself to the new
total particle number. for later convenience we call the
two values of μ on the left and the right of integer particle
number μ−and μ+, respectively.
when the right-hand side of eqs.
(12) and (13)
changes discontinuously, the left-hand side must also
change discontinuously. this means that the functional
derivatives of f[n] and ts[n] change discontinuously for
variations δn(r) such that n passes through an integer,
and are not defined precisely at the integer. we can also
argue conversely that if the functional derivatives existed
at all n(r) they would determine the potentials uniquely.
since the potentials are unique only up to a constant, the
functional derivatives cannot exist for the density varia-
tions δn arising from changing the potential by a con-
stant. in a gapped system, these are the δn integrating
to an integer.
either way, we see that the indeterminacy of the po-
tentials with respect to a constant implies that the func-
tionals f[n] and ts[n] display derivative discontinuities
for certain directions in density space along which the
total particle number changes by an integer. this is the
famous integer discontinuity of dft.5,6,7
d.
connection of discontinuities and gaps:
ensemble dft
up to this point we have defined ∆xc as a many-body
correction to the single-particle gap, and deduced the
existence of derivative discontinuities from noting the
nonuniqueness of the external potentials with respect to
a constant. these two conceptually distinct phenomena
are related by ensemble dft for systems with fractional
particle number, describing open systems in contact with
a particle reservoir.5,6,7 for such systems ensemble dft
guarantees that the ground-state energy as a function of
particle number, e(n), is a set of straight lines connecting values at integer particle numbers.
for straight lines, the derivative at any n can be obtained from the values at the endpoints:
−a = e(n+1) − e(n) = ∂e/∂n |_{n+δn} = μ+ = δe/δn(r) |_{n+δn}   (14)
and
−i = e(n) − e(n−1) = ∂e/∂n |_{n−δn} = μ− = δe/δn(r) |_{n−δn} .   (15)
the many-body fundamental gap is thus the derivative discontinuity of the total energy across densities integrating to an integer:
eg = i − a = δe[n]/δn(r) |_{n+δn} − δe[n]/δn(r) |_{n−δn} .   (16)
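a toy numerical version of this statement (ours, purely illustrative): interpolating placeholder integer-n energies linearly, the left and right derivatives at integer n reproduce −i and −a, and their difference is the fundamental gap of eq. (16).

```python
import numpy as np

E_int = {7: -100.00, 8: -110.50, 9: -115.20}   # placeholder integer-n ground-state energies

def E(n):
    """piecewise-linear ensemble energy for fractional particle number."""
    lo = int(np.floor(n))
    w = n - lo
    return (1 - w)*E_int[lo] + w*E_int[lo + 1]

N, d = 8, 1e-6
mu_minus = (E(N) - E(N - d))/d      # left  derivative = -i, eq. (15)
mu_plus  = (E(N + d) - E(N))/d      # right derivative = -a, eq. (14)
print(mu_minus, mu_plus, mu_plus - mu_minus)   # the jump μ+ - μ- equals eg = i - a, eq. (16)
```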
this energy functional is commonly written as e =
ts + v + eh + exc, where the external potential energy
v and the hartree energy eh are manifestly continuous
functionals of the density. hence, the energy gap reduces
to the sum of the discontinuity of the noninteracting ki-
netic energy ts and that of the xc energy exc.
the entire argument up to this point can be repeated
for a noninteracting system in external potential vs. the
energy of this system is eks = ts+vs, of which only the
first term can be discontinuous. hence the fundamental
gap of the ks system is given by the discontinuity of ts
eg,ks = δts[n]/δn(r) |_{n+δn} − δts[n]/δn(r) |_{n−δn} .   (17)
returning now to the many-body gap, written as the
sum of the discontinuities of ts and exc, we arrive at
eg = eg,ks + δexc[n]/δn(r) |_{n+δn} − δexc[n]/δn(r) |_{n−δn} ,   (18)
or, by means of eq. (9),
∆xc = δexc[n]/δn(r) |_{n+δn} − δexc[n]/δn(r) |_{n−δn} .   (19)
this identifies the xc correction to the single-particle
gap as the derivative discontinuity arising from the
nonuniqueness of the potentials with respect to an ad-
ditive constant.5,6,7
importantly, this connection is not required to define
the xc corrections and neither is its existence enough to
conclude that these corrections are nonzero. many-body
corrections to the single-particle gap can be defined inde-
pendently of any particular property of the density func-
tional (or even without using any density-functional the-
ory), and whether for a given system these corrections
are nonzero or not depends on the electronic structure of
that particular system, and does not follow from the for-
mal possibility of a derivative discontinuity, because this
discontinuity itself might be zero. thus, the question of
the existence and size of xc corrections to the charge gap
must be asked for each system anew. as we will see in
the next section, the same is true for the spin gap.
iii.
spin gap
we have provided the above rather detailed summary
of the definition of the fundamental charge gap and its
connection to nonuniqueness and to derivative disconti-
nuities to prepare the ground for the following discussion
of the spin gap. in order to arrive at a consistent dft
definition of spin gaps, we follow the steps outlined in
the charge case: (i) define appropriate gaps and their
xc corrections, (ii) use the nonuniqueness of the sdft
potentials to show the existence of spin derivative dis-
continuities, and (iii) identify a suitable spin ensemble to
connect the two.
a.
spin gap vs. charge gap
to introduce a spin gap or a spin-flip energy (see be-
low for precise definitions) we consider processes in which
only the total spin of the system is changed, while the
particle number remains the same. there cannot be any
definition in terms of particle addition and removal en-
ergies, since in these processes the charge changes, too,
which is not what one wants the spin gap to describe. in
other words, the change of quantum numbers related to
a spin flip is ±1 for the spin and 0 for charge. note that
this is an excitation energy, where the excitation takes
place under the constraints of constant particle number
and change of total spin by one unit. this is the key
difference to the previous section, from which all other
differences follow.
b.
definition of spin gaps: spin-flip energies and
spin stiffness
first, we define the spin up-flip energy and the spin
down-flip energy in terms of many-body energies as
esf+(n) = e(n, s + 1) −e(n, s)
(20)
esf−(n) = e(n, s −1) −e(n, s).
(21)
here e(n, s) is the lowest energy in the n-particle spin-
s subspace, where s is the eigenvalue of the z-component
of the total spin, and we assumed that spin-up and spin-
down are good quantum numbers. this implies, in par-
ticular, that spin-orbit coupling is excluded from our
analysis. (of course these definitions only apply if the
respective flips are actually possible; in other words, if
s does not yet have the maximal or minimal value for a
given n.)
the differences esf+(n) and esf−(n) are similar to
the concepts of affinity and ionization energy, eqs. (2)
and (3). however, affinities and ionization energies are
always defined with the smaller value (of n) as the first
term in the differences, whereas spin-flip energies are con-
ventionally defined as final state minus initial state, i.e.
fig. 1: left: spin-resolved single-particle (ks) density-of-
states of a spin-polarized insulator.
two spin-flip energies
and two spin-conserving gaps can be defined.
right: the
half-metallic ferromagnet is a special case in which the gap in
one spin channel (say spin up) is zero. in this case, there is
only one spin-conserving gap, equal to the sum of both spin-
flip energies, esf−_ks + esf+_ks = esc_s,ks, and the ks charge gap
is zero, due to the presence of the gapless spin down channel.
figure courtesy of daniel vieira.
both spin-flip energies measure an energy cost. there-
fore, the down-flip is the spin counterpart to the ioniza-
tion energy, while the up-flip is the spin counterpart to
minus the electron affinity.
a more important difference is that the spin-flip en-
ergies involve excited-state energies e(n, s + 1) and
e(n, s −1) of the n-particle system, instead of ground-
state energies, and in this sense are more similar to the
optical gap in the charge case than to the quantities used
in evaluating the fundamental gap. alternatively, these
energies can also be considered ground-state energies for
sectors of hilbert space restricted to a given total s, but
we will not make use of this alternative interpretation in
the following.
ks spin-flip energies are related analogously to single-
particle eigenvalues, according to
esf+_ks = ǫ_{l(↑)} − ǫ_{h(↓)}   (22)
esf−_ks = ǫ_{l(↓)} − ǫ_{h(↑)},   (23)
where all energies are calculated at the same n, s. here
l(σ) means the lowest unoccupied spin σ state, and h(σ)
means the highest occupied spin σ state, with analogous labels for the lowest occupied and the highest unoccupied spin σ states.
this notation is nonstandard, but helpful, and further
illustrated in fig. 1.
in the same way, we can also define the spin conserving
(sc) single-particle gaps in each spin channel, as
esc,↑_g,ks = ǫ_{l(↑)} − ǫ_{h(↑)}   (24)
esc,↓_g,ks = ǫ_{l(↓)} − ǫ_{h(↓)}.   (25)
the spin-conserving gaps and the spin-flip energies are
necessarily related by esc↑_g,ks + esc↓_g,ks = esf−_ks + esf+_ks. a
look at fig. 1 clarifies these definitions. if the system is
non spin polarized, both spin-flip energies and both spin-
conserving gaps become equal to the ordinary ks charge
gap, which in our present notation reads ǫl −ǫh.
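the bookkeeping of eqs. (22)-(25) and of the sum rule quoted above is easily made explicit; the python sketch below uses as input the quasi-exact ks eigenvalues of the lithium atom listed in table i below (column ks, ref. 12), for which h(↑) = 2s↑, l(↑) = 2p↑, h(↓) = 1s↓ and l(↓) = 2s↓ (see sec. iii c):

```python
# quasi-exact ks eigenvalues of li (ev), from table i, column ks^a
eps = {"h_up": -5.39, "l_up": -3.54, "h_dn": -64.41, "l_dn": -8.16}

e_sf_plus_ks = eps["l_up"] - eps["h_dn"]   # eq. (22): 60.87 ev
e_sf_minus_ks = eps["l_dn"] - eps["h_up"]  # eq. (23): -2.77 ev
e_sc_up_ks = eps["l_up"] - eps["h_up"]     # eq. (24): 1.85 ev
e_sc_dn_ks = eps["l_dn"] - eps["h_dn"]     # eq. (25): 56.25 ev

# sum rule below eq. (25): sc gaps and spin-flip energies add up to the same number
assert abs((e_sc_up_ks + e_sc_dn_ks) - (e_sf_plus_ks + e_sf_minus_ks)) < 1e-9
print(e_sf_plus_ks, e_sf_minus_ks, e_sc_up_ks, e_sc_dn_ks)
```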
in the same way as for charge gaps, we can now also
consider the sum and the difference of the spin-flip ener-
gies. the sum
es = esf−+ esf+
(26)
of the energies it costs to flip a spin up and a spin down
is formally analogous to the fundamental gap (1), but
with the important difference that es involves excited-
state energies. the unusual sign (sum instead of differ-
ence) arises simply because both spin-flip energies mea-
sure costs, whereas the affinity featured in eq. (1) mea-
sures an energy gain.
the formal analogy to eq. (1) suggests that the quan-
tity defined in eq. (26) be called the fundamental spin
gap. in practice, however, the name spin gap is more
appropriately applied to the individual spin-flip energies.
the physical interpretation of their sum, eq. (26), is re-
vealed by expressing it in terms of the many-body ener-
gies by means of eqs. (20) and (21):
es = e(n, s + 1) + e(n, s −1) −2e(n, s).
(27)
this is of the form of a discretized second derivative
∂2e(n, s)/∂s2, which identifies es as the discretized
spin stiffness [we anticipated this interpretation when at-
taching a subscript s for stiffness to the sum in eq. (26)].
we note that half of i−a is known in quantum chemistry
as chemical hardness, which conveys a very similar idea
as stiffness. generically, we refer to all three quantities
esf−, esf+ and es as spin gaps.
the spin electronegativity can be defined as half of the
difference of the spin-flip energies, χs = (esf−−esf+)/2.
this quantity has the following interpretation: if χs > 0,
it costs less energy to flip a spin up than to flip a spin
down, whereas if χs < 0 the down flip is energetically
cheaper.
the ks spin stiffness is defined as the sum of ks spin-
flip energies,
es,ks = esf−_ks + esf+_ks,   (28)

or, with eqs. (22) and (23),

es,ks = [ǫ_{l(↓)} − ǫ_{h(↑)}] + [ǫ_{l(↑)} − ǫ_{h(↓)}].   (29)
this is analogous to (5), except that in spin flips nothing
is removed to infinity or brought in from infinity. thus,
differently from the ks ionization energy and electron
affinity, the spin-flip energies require two single-particle
energies for their definition instead of one, and in contrast
with the ks charge gap the ks spin stiffness requires four
single-particle energies instead of two.
this missing analogy is physically meaningful: con-
ventional gaps are defined in terms of particle addition
and removal processes and are ground-state properties.
table i: kohn-sham energy eigenvalues (in ev) for the lithium atom. the 1s↑, 2s↑ and 1s↓ levels are occupied. ks: energy eigenvalues obtained by inversion from quasi-exact densities. xx denotes exact exchange,12 and kli is the krieger-li-iafrate approximation.14

          ks^a    ks^b    xx^a    kli-xx   lsda
−ǫ1s↑    55.97   58.64   55.94   56.64    51.02
−ǫ2s↑     5.39    5.39    5.34    5.34     3.16
−ǫ2p↑     3.54    -       3.48    3.50     1.34
−ǫ1s↓    64.41   64.41   67.18   67.14    50.81
−ǫ2s↓     8.16    5.87    8.25    8.23     2.09

^a ref. 12
^b ref. 13
to define pure spin gaps (i.e., spin-flip energies and spin
stiffness) in which the charge does not change, we cannot
make use of particle addition and removal processes but
have to use spin flip processes instead.
however, spin
flips are excitation energies, and we must specify both
initial and final states to define them properly.
we also note that the many-body spin stiffness has no
simple expression in terms of eigenvalues which would be
analogous to eq. (8). such an expression would require
the spin counterpart to koopmans' theorem i(n) ≡
iks(n), which is not available for spin-flip energies.
hence, in general both spin-flip energies esf− and esf+ may be individually different from their ks counterparts esf−_ks and esf+_ks:
esf− = esf−_ks + ∆sf−_xc   (30)
esf+ = esf+_ks + ∆sf+_xc.   (31)
we can, moreover, establish a relation between the many-
body spin stiffness and the ks spin stiffness by rewriting
the former as
es = es,ks + ∆s_xc = esf−_ks + esf+_ks + ∆s_xc,   (32)
which defines ∆s_xc as the xc correction to the ks spin stiffness. the important thing to notice in eq. (32) is that there is no reason to attribute ∆s_xc only to the up-
flip energy. this is a key difference to the charge case,
where i(n) ≡iks(n) and the xc correction could thus
be attributed only to the electron affinity. rather, the
spin-flip corrections are connected by
∆s_xc = ∆sf−_xc + ∆sf+_xc.   (33)
c.
example: the li atom
to give an explicit example of the quantities intro-
duced in the previous section, we now consider the li
atom.
for this system, ks eigenvalues ǫh↑= ǫ2s↑,
ǫl↑= ǫ2p↑, ǫl↓= ǫ2s↓and ǫh↓= ǫ1s↓have been obtained
table ii: single-particle spin-flip energies (34) and (35) and spin stiffness (29), their experimental (exp) counterparts, (20), (21) and (26), and the resulting xc corrections defined in (30), (31) and (32), for the lithium atom. in the columns labelled ks we employ ks eigenvalues obtained from near-exact densities, while in the columns labelled xx, kli-xx and lsda we use approximate eigenvalues obtained from standard sdft calculations. the experimental values were obtained using spectroscopic data for the lowest quartet state 4p 0 from ref. 15 as well as accurate wave-function based theory from ref. 16. all values are in ev.

            ks^a     ks^b     xx^a     kli-xx    lsda     exp
esf+        60.87    60.87    63.70    63.64     49.47    57.41
esf−        −2.77    −0.48    −2.91    −2.89      1.07     0
es          58.10    60.39    60.79    60.75     50.54    57.41
∆sf+_xc     −3.46    −3.46    −6.29    −6.23      7.94
∆sf−_xc      2.77     0.48     2.91     2.89     −1.07
∆s_xc       −0.69    −2.98    −3.38    −3.34      6.87

^a ref. 12
^b ref. 13, taking −ǫ2p↑ = 3.54 ev from ref. 12
by numerical inversion of the ks equation starting from
near-exact densities (see table i).12,13
the ks spin-flip energies are obtained as
esf+_ks = ǫ2p↑ − ǫ1s↓   (34)
esf−_ks = ǫ2s↓ − ǫ2s↑.   (35)
they are given in table ii, together with the spin stiff-
ness es, see eq. (26). table ii also presents the cor-
responding experimental many-body energy differences
for the li atom, which were obtained using spectroscopic
data for the lowest quartet state 4p 0 and accurate wave-
function based theory.15,16 relativistic effects and other
small corrections included in the experimental data are
ignored since they are too small on the scale of energies
we are interested in.
table ii also gives the xc corrections to the single-
particle spin flip energies, see eqs. (30) and (31), and
the xc correction to the spin stiffness, ∆s_xc, see eq. (32).
as a consistency test we verified that relation (33), which
connects the xc corrections of the spin-flip energies to the
xc corrections of the spin stiffness, is satisfied.
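the consistency check just described amounts to simple arithmetic; a python sketch using the ks^a eigenvalues of table i and the experimental spin-flip energies of table ii reads:

```python
# ks^a eigenvalues of li (ev), from table i
eps = {"2s_up": -5.39, "2p_up": -3.54, "1s_dn": -64.41, "2s_dn": -8.16}

e_sf_plus_ks = eps["2p_up"] - eps["1s_dn"]      # eq. (34): 60.87
e_sf_minus_ks = eps["2s_dn"] - eps["2s_up"]     # eq. (35): -2.77
e_s_ks = e_sf_plus_ks + e_sf_minus_ks           # eq. (28): 58.10

e_sf_plus_exp, e_sf_minus_exp = 57.41, 0.0      # experimental values from table ii
e_s_exp = e_sf_plus_exp + e_sf_minus_exp        # eq. (26): 57.41

delta_sf_plus = e_sf_plus_exp - e_sf_plus_ks    # eq. (31): -3.46
delta_sf_minus = e_sf_minus_exp - e_sf_minus_ks # eq. (30): +2.77
delta_s = e_s_exp - e_s_ks                      # eq. (32): -0.69

# relation (33): the spin-flip corrections add up to the stiffness correction
assert abs(delta_s - (delta_sf_plus + delta_sf_minus)) < 1e-9
print(round(delta_sf_plus, 2), round(delta_sf_minus, 2), round(delta_s, 2))
```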
we also carried out calculations using the exact-
exchange (xx) eigenvalues of ref. 12 in order to sepa-
rately assess the size of exchange and correlation effects.
the resulting value of ∆s_x = −3.38 ev indicates a larger
(more negative) correction than in the calculation includ-
ing correlation. an approximate kli-xx calculation14
yields very similar results, while the lsda data are com-
pletely different and do not even reproduce the correct
sign.
three of the required "exact" ks single-particle eigen-
values are also reported in ref. 13 (we use the result of
ref. 12 for the missing value of ǫ2p↑). the value of ǫ2s↓ is quite different than the value reported in ref. 12 (−5.87 ev versus −8.16 ev), and consequently we obtain a rather different value of ∆s_xc (−2.98 ev versus −0.69 ev). nev-
ertheless, both sets of data sustain our main conclusions
in this section:
(i) simple lsda calculations give rise to serious qual-
itative errors. as can be seen from table ii, one obtains
spin-flip energies that are drastically too small (esf+) or
have the wrong sign (esf−). the resulting xc corrections
also suffer from having the wrong sign. these shortcom-
ings of the lsda are hardly surprising in view of its
well-established failure to describe the charge gap.
(ii) even the precise ks eigenvalues do not predict the
exact spin flip energies and spin stiffness, i.e. the xc cor-
rections introduced in sec. iii b on purely formal grounds
are indeed nonzero. the absolute size of these corrections
implies that a simple ks eigenvalue calculation of spin
gaps can be seriously in error.
(iii) exchange-only calculations overestimate (in mod-
ulus) the size of the gap corrections. this implies that
there is substantial cancellation between the exchange
and the correlation contribution to the full correction.
this is the same trend known for charge gaps.
(iv) the xc corrections to both the up-flip energy and
the spin stiffness turn out to be negative; in other words
the ks calculation overestimates these quantities. this
is the opposite of what occurs in the case of the funda-
mental charge gap, which is underestimated by the ks
calculation. we note that hints of an overestimation
of the experimental spin-flip energies by ks eigenvalue
differences have also been observed for half-metallic fer-
romagnets. in the case of cro2, for example, ref. 17
reports experimental spin-flip energies in the range 0.06
to 0.25 ev and compiles sdft predictions that range
from 0.2 to 0.7 ev (and in one case even 1.7 ev).
d.
nonuniqueness and derivative discontinuities in
sdft
above we pointed out that the effective and external
potentials of dft are determined by the ground-state
density up to an additive constant. however, this state-
ment only holds when one formulates dft exclusively
in terms of the charge density, as we have done in dis-
cussing charge gaps. it does not hold when one works
with spin densities, as in sdft, or current densities, as
in current-dft (cdft).
in these cases the densities still determine the wave
function, but they do not uniquely determine the corre-
sponding potentials. a first example of this nonunique-
ness problem of generalized dfts was already encoun-
tered in early work on sdft, for the single-particle ks
hamiltonian.18 later, this observation was extended to
the sdft many-body hamiltonian,19,20 and further ex-
amples were obtained in cdft21 and dft on lattices.22
nonuniqueness is a generic feature of generalized (mul-
tidensity) dfts, consequences of which are still under
investigation.23,24,25,26,27,28 in particular, refs. 19 and
20 already point out that the nonuniqueness of the po-
tentials of sdft implies that the sdft functionals can
have additional derivative discontinuities, because, if the
functional derivatives of f and ts in multi-density dfts
such as sdft and cdft existed for all densities, they
would determine the corresponding potentials uniquely.
very recently, gál and collaborators28 pointed out that
one-sided derivatives may still exist, and explored con-
sequences of this for the dft description of chemical
reactivity indices.
just as in the charge case, derivative discontinuities re-
sult from the nonuniqueness of the spin-dependent poten-
tials, while corrections to single-particle gaps result from
the auxiliary nature of kohn-sham eigenvalues. in the
charge case, both distinct phenomena could be connected
by means of ensemble dft for systems of fractional par-
ticle number. the question then arises if a similar con-
nection can also be established in the spin case. this
requires an investigation of spin-ensembles.
e.
spin ensembles
consequences of the nonuniqueness of the potentials
of sdft for the calculation of spin gaps were already
hinted at in refs. 19 and 20, where it was pointed out
that there may be a spin-gap problem in sdft similarly
to the well known band-gap problem of dft.
to make these hints more precise, we first recall, from
the above, that the quantity usually called the spin gap
is actually what we here called the spin-flip gap, and is
analogous to the ionization energy or the electron affin-
ity in the charge case, not to the fundamental particle
gap. the spin-dependent quantity that is most analo-
gous to the fundamental particle gap is the discretized
spin stiffness of eqs. (26) and (27). however, regardless
of whether one focuses on the spin-flip energies or on the
spin stiffness, the spin situation is not completely anal-
ogous to the charge situation because both the spin-flip
gaps and the spin stiffness are defined in terms of excited
states of an n-particle hamiltonian, while charge gaps
are defined in terms of ground-state energies of hamilto-
nians with different particle numbers.
to identify a suitable ensemble, we write the energy
associated with a generic ensemble of two systems, a
and b, as
ew = (1 − w) ea + w eb,   (36)
where 0 ≤w ≤1 is the ensemble weight. if a and b have
different particle numbers, na and nb = na ± 1, this
becomes the usual fractional-particle number ensemble,
which is unsuitable for our present investigation where
the involved systems differ in the spin but not the charge
quantum numbers.
a spin-dependent ensemble was recently constructed
by yang and collaborators29,30 in order to understand the
static correlation error of common density functionals. in
8
this spin ensemble, a and b have different (possibly frac-
tional) spin, but are degenerate in energy. the constancy
condition, whose importance and utility is stressed in
refs. 29,30, arises directly from the restriction of the en-
semble to degenerate states. while useful for the pur-
poses of analyzing the static correlation error, this spin
ensemble is too restrictive for our purposes, as it excludes
the excited states involved in the definition of spin-flip
gaps and of the spin stiffness.
ensembles involving excited states have been employed
in dft in connection with the calculation of excitation
energies.31,32,33 here a and b differ in energy but stem
from the same hamiltonian, with fixed particle number.
excited-state ensemble theory leads to a simple expres-
sion relating the first excitation energy to a ks eigenvalue
difference32
eb − ea = ǫw_{m+1} − ǫw_m + ∂ew_xc[n]/∂w |_{n=nw},   (37)
where eb and ea are the energies of the first excited and the ground state of the many-body system, respectively, ǫw_{m+1} and ǫw_m are the lowest unoccupied and the highest occupied ks eigenvalues, respectively, and ew_xc is the ensemble xc func-
tional. equation (37) holds for ensemble weights in the
range 0 ≤w ≤1/2. levy showed34 that the last term
in this equation is related to a derivative discontinuity
according to
∂ew_xc[n]/∂w |_{n=nw} = δew=0_xc[n]/δn(r) |_{n=nw=0} − δew_xc[n]/δn(r) |_{n=nw}   (38)
for w →0. here nw = (1 −w)na + wnb is the ensemble
density, and the discontinuity arises because even in the
w →0 limit the ensemble density does contain an admix-
ture of the state b with energy eb > ea and thus decays
differently from n0 as r →∞.34 levy developed his ar-
gument explicitly only for the spin-unpolarized case, but
already pointed out in the original paper that the results
carry over to spin-polarized situations.
in our case, we take a to be the ground state and b to
be the lowest-lying state differing from it by a spin flip.
to be specific, let us assume that the spin is flipped up.
in this case we obtain from eqs. (37) and (38) in the
limit w →0, and using our present notation,
esf+(n) = e(n, s + 1) − e(n, s)   (39)
= ǫw_{l(↑)} − ǫw_{h(↓)} + ∂ew_xc[n↑, n↓]/∂w |_{n↑=nw↑, n↓=nw↓}   (40)
= ǫw_{l(↑)} − ǫw_{h(↓)} + δew=0_xc[n↑, n↓]/δn↓(r) |_{n↑=nw=0↑, n↓=nw=0↓} − δew_xc[n↑, n↓]/δn↓(r) |_{n↑=nw↑, n↓=nw↓}   (41)
= esf+_{w,ks}(n) + ∆sf+_{w,xc}   (42)
for w →0. equation (42), which is the ensemble ver-
sion of our eq. (31), illustrates that ks spin-flip excita-
tions, too, acquire a many-body correction arising from
a derivative discontinuity.
in the particular case in which the spin flip costs no
energy in the many-body and in the ks system, the pre-
ceding equation reduces to ∆sf+_{w,xc} = 0, which is the con-
stancy condition derived in refs. 29,30 for spin ensembles
of degenerate states.
we note that the ks eigenvalues and the discontinu-
ity in eqs. (40) to (42) must be evaluated by taking
the w →0 limit of the w-dependent quantities, while
the quantities in eq. (31) have no ensemble dependence.
this complicates the evaluation of spin-flip energies and
their discontinuities, as defined in sec. iii b, from en-
semble dft. therefore, we turn to still another density-
functional approach to excited states in order to evaluate
these quantities: tddft.
f.
connection to tddft
tddft has established itself as the method of choice
for calculating excitation energies in atomic and molec-
ular systems, and is making rapid progress in nanoscale
systems and solids as well.35,36 in this section we will
make a connection between the preceding discussion and
tddft, which will allow us to derive simple approxima-
tions for the xc corrections to the single-particle spin-flip
excitation energies and the spin stiffness.
to calculate the spin-conserving and the spin-flip exci-
tation energies, it is necessary to use a noncollinear spin-
density response theory, even if the system under study
has a ground state with collinear spins (i.e., spin-up and
-down with respect to the z axis are good quantum num-
bers).
in this way the spin-up and spin-down density
responses can become coupled, and the description of
spin-flip excitations (for instance, due to a transverse
magnetic perturbation) becomes possible. in tddft,
the spin-conserving and the spin-flip excitation energies
can be obtained from the following eigenvalue equations,
which are a generalization of the widely used casida
equations37 for systems with noncollinear spin:38
Σ_{σσ′} Σ_{i′a′} { [δ_{i′i} δ_{a′a} δ_{σα} δ_{σ′α′} ω_{a′σ′i′σ} + k^{αα′,σσ′}_{iαaα′,i′σa′σ′}] x_{i′σa′σ′} + k^{αα′,σ′σ}_{iαaα′,i′σa′σ′} y_{i′σ,a′σ′} } = −ω x_{iα,aα′}   (43)

Σ_{σσ′} Σ_{i′a′} { k^{α′α,σσ′}_{iαaα′,i′σa′σ′} x_{i′σa′σ′} + [δ_{a′a} δ_{i′i} δ_{σ′α′} δ_{σα} ω_{a′σ′i′σ} + k^{α′α,σ′σ}_{iαaα′,i′σa′σ′}] y_{i′σ,a′σ′} } = ω y_{iα,aα′},   (44)
where we use the standard convention that i, i′ and a, a′ are indices of occupied and unoccupied ks orbitals, respectively, αα′, σσ′ are spin indices, and ω_{a′σ′i′σ} = ǫ_{a′σ′} − ǫ_{i′σ}. choosing the ks orbitals to be real, without
loss of generality, we have
k^{αα′,σσ′}_{iαaα′,i′σa′σ′}(ω) = ∫d^3r ∫d^3r′ ψ_{iα}(r) ψ_{aα′}(r) f^{hxc}_{αα′,σσ′}(r, r′, ω) ψ_{i′σ}(r′) ψ_{a′σ′}(r′).   (45)
here, the subscript indices of the matrix elements k refer
to the ks orbitals in the integrand, and the superscript
spin indices refer to the hartree-xc kernel
f^{hxc}_{αα′,σσ′}(r, r′, ω) = δ_{αα′} δ_{σσ′}/|r − r′| + f^{xc}_{αα′,σσ′}(r, r′, ω),   (46)
where the frequency-dependent xc kernel is defined as the
fourier transform of the time-dependent xc kernel
f^{xc}_{αα′,σσ′}(r, t, r′, t′) = δv^{xc}_{αα′}(r, t)/δn_{σσ′}(r′, t′) |_{n(r,t)=n0(r)}.   (47)
here, n(r, t) and n0(r) are the time-dependent and the
ground-state 2×2 spin-density matrix, which follow from
the dft formalism for noncollinear spins.38,39,40,41,42
eqs. (43) and (44) give, in principle, the exact spin-
conserving and spin-flip excitation energies of the system,
provided the exact ks orbitals and energy eigenvalues are
known, as well as the exact functional form of f^{xc}_{αα′,σσ′}.
we will now consider a simplified solution known as the
single-pole approximation.43,44 it is obtained from the
full system of equations (43), (44) by making the tamm-
dancoff approximation (i.e., neglecting the off-diagonals)
and focusing only on the h(σ) →l(σ′) excitations. in
other words, we need to solve the 4 × 4 problem
Σ_{σσ′} [δ_{σ′α′} δ_{σα} ω_{lσ′hσ} + k^{α′α,σ′σ}_{hαlα′,hσlσ′}] y_{hσ,lσ′} = ω y_{hα,lα′}.   (48)
for ground states with collinear spins, the only nonva-
nishing elements of the hartree-xc kernel are
f^{hxc}_{↑↑,↑↑}, f^{hxc}_{↓↓,↓↓}, f^{hxc}_{↑↑,↓↓}, f^{hxc}_{↓↓,↑↑}, f^{xc}_{↑↓,↑↓}, f^{xc}_{↓↑,↓↑}   (49)
(notice that there is no hartree term in the spin-flip
channel), and the spin-conserving and spin-flip excita-
tion channels decouple into two separate 2 × 2 problems.
for the spin-conserving case, we have
det[(ω↑↑ − ωsc + m↑↑,↑↑ , m↑↑,↓↓) ; (m↓↓,↑↑ , ω↓↓ − ωsc + m↓↓,↓↓)] = 0,   (50)
where we abbreviate m_{αα′,σσ′} = k^{α′α,σ′σ}_{hαlα′,hσlσ′}(ω) and ω_{σ′σ} = ω_{lσ′,hσ} = ǫ_{lσ′} − ǫ_{hσ}. from this, we get the two
spin-conserving excitation energies as
esc↑,↓ = (ω↑↑ + ω↓↓ + m↑↑,↑↑ + m↓↓,↓↓)/2 ± [m↑↑,↓↓ m↓↓,↑↑ + (1/4)(ω↑↑ − ω↓↓ + m↑↑,↑↑ − m↓↓,↓↓)^2]^{1/2},   (51)
with the spin-conserving kohn-sham single-particle gaps esc↑(↓)_g,ks = ω↑↑(↓↓). the two spin-flip excitations follow immediately as
esf+ = ω↑↓ + m↑↓,↑↓   (52)
esf− = ω↓↑ + m↓↑,↓↑,   (53)
where esf+_ks = ω↑↓ and esf−_ks = ω↓↑.
this gives a simple approximation for the xc correction
to the spin stiffness:
∆s_xc = m↑↓,↑↓ + m↓↑,↓↑.   (54)
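the single-pole formulas (50)-(54) are simple to evaluate once the ks excitation energies and the kernel matrix elements are known; the following python sketch (with placeholder numbers rather than results for any particular system) shows the bookkeeping:

```python
# minimal sketch of eqs. (50)-(54); all inputs below are placeholder numbers
import math

def spin_conserving(w_uu, w_dd, m_uuuu, m_dddd, m_uudd, m_dduu):
    # eq. (51): the two roots of the 2x2 determinant condition (50)
    avg = 0.5 * (w_uu + w_dd + m_uuuu + m_dddd)
    rad = math.sqrt(m_uudd * m_dduu + 0.25 * (w_uu - w_dd + m_uuuu - m_dddd) ** 2)
    return avg + rad, avg - rad

def spin_flip(w_ud, w_du, m_udud, m_dudu):
    e_sf_plus = w_ud + m_udud      # eq. (52)
    e_sf_minus = w_du + m_dudu     # eq. (53)
    delta_s_xc = m_udud + m_dudu   # eq. (54): xc correction to the spin stiffness
    return e_sf_plus, e_sf_minus, delta_s_xc

print(spin_conserving(2.0, 50.0, 0.2, 0.3, 0.1, 0.1))
print(spin_flip(60.0, -3.0, -1.5, -0.1))
```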
explicit expressions for f^{xc}_{αα′,σσ′} can be obtained from the local spin-density approximation (lsda), and we list them here for completeness (see also wang and ziegler38):

f^{xc}_{↑↑,↑↑} = ∂^2(n e^h_xc)/∂n^2 + 2(1 − ζ) ∂^2 e^h_xc/∂n∂ζ + [(1 − ζ)^2/n] ∂^2 e^h_xc/∂ζ^2

f^{xc}_{↓↓,↓↓} = ∂^2(n e^h_xc)/∂n^2 − 2(1 + ζ) ∂^2 e^h_xc/∂n∂ζ + [(1 + ζ)^2/n] ∂^2 e^h_xc/∂ζ^2

f^{xc}_{↑↑,↓↓} = ∂^2(n e^h_xc)/∂n^2 − 2ζ ∂^2 e^h_xc/∂n∂ζ − [(1 − ζ^2)/n] ∂^2 e^h_xc/∂ζ^2

f^{xc}_{↑↓,↑↓} = (2/nζ) ∂e^h_xc(n, ζ)/∂ζ,   (55)
where f^{xc}_{↓↓,↑↑} = f^{xc}_{↑↑,↓↓} and f^{xc}_{↓↑,↓↑} = f^{xc}_{↑↓,↑↓}, and it is under-
stood that all expressions are multiplied by δ(r −r′) and
evaluated at the local ground-state density and spin po-
larization, n0(r) = n0↑(r) + n0↓(r) and ζ0(r) = [n0↑(r) −
n0↓(r)]/n0(r).
for the xc energy density of the spin-
polarized homogeneous electron gas we take the standard
interpolation formula
e^h_xc(n, ζ) = e^h_xc(n, 0) + {[(1 + ζ)^{4/3} + (1 − ζ)^{4/3} − 2]/(2^{4/3} − 2)} × [e^h_xc(n, 1) − e^h_xc(n, 0)].   (56)
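as a concrete illustration of the last line of eq. (55) combined with the interpolation (56), the python sketch below evaluates the local spin-flip kernel in the exchange-only limit; the exchange energy per particle of the homogeneous electron gas, e_x(n,0) = −(3/4)(3/π)^{1/3} n^{1/3} with e_x(n,1) = 2^{1/3} e_x(n,0) (hartree atomic units), is used here only as an assumed input, and a full lsda evaluation would add the correlation contribution:

```python
# exchange-only sketch of the spin-flip kernel of eq. (55) with the interpolation (56)
import math

def e_x(n, zeta):
    # exchange energy per particle of the homogeneous electron gas, interpolated via eq. (56)
    e0 = -0.75 * (3.0 / math.pi) ** (1.0 / 3.0) * n ** (1.0 / 3.0)
    e1 = 2.0 ** (1.0 / 3.0) * e0
    f = ((1.0 + zeta) ** (4.0 / 3.0) + (1.0 - zeta) ** (4.0 / 3.0) - 2.0) / (2.0 ** (4.0 / 3.0) - 2.0)
    return e0 + f * (e1 - e0)

def f_x_updown(n, zeta, dz=1e-6):
    # last line of eq. (55): (2/(n*zeta)) * d e_xc/d zeta, derivative taken numerically;
    # the overall delta(r - r') of the local kernel is left implicit
    de_dz = (e_x(n, zeta + dz) - e_x(n, zeta - dz)) / (2.0 * dz)
    return 2.0 / (n * zeta) * de_dz

print(f_x_updown(n=0.01, zeta=0.5))   # local kernel value for an illustrative density
```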
the case of exact exchange (xx) in linear response can
be treated exactly, though with considerable technical
and numerical effort.45,46 a simplified expression of the
xx xc kernel was developed by petersilka et al.43, and we
have generalized their expression for the linear response
table iii: top part: spin-conserving and spin-flip excitation energies, calculated with lsda and kli-xx using differences of ks eigenvalues and tddft in the single-pole approximation (51)–(53). bottom part: tddft xc corrections to the ks spin-flip excitation energies, from eqs. (52), (53), and to the ks spin gap, eq. (54). all numbers are in ev.

             lsda              kli-xx             exact
             ks      tddft     ks      tddft      ks^a     exp^b
esc↑         1.83    2.00      1.84    2.01       1.85     1.85
esc↓        48.72   48.89     58.90   59.31      56.25    56.36
esf+        49.47   48.23     63.64   62.12      60.87    57.41
esf−         1.07    0.99     −2.89   −2.97      −2.77     0.0
∆sf+_xc             −1.24             −1.52               −3.46
∆sf−_xc             −0.08             −0.07               +2.77
∆s_xc               −1.32             −1.59               −0.69

^a evaluated from the ks eigenvalues of ref. 12
^b spectroscopic data from ref. 47 (esc↑,↓) and refs. 15,16 (esf+)
of the spin-density matrix.
we obtain (details will be
published elsewhere):
f^x_{↑↑,↑↑}(r, r′) = − Σ_{i,k}^{n↑} ψ_{k↑}(r) ψ*_{k↑}(r′) ψ*_{i↑}(r) ψ_{i↑}(r′) / [|r − r′| n↑(r) n↑(r′)]   (57)

and similarly for f^x_{↓↓,↓↓}(r, r′), and

f^x_{↑↓,↑↓}(r, r′) = − Σ_{i,k}^{n↑,n↓} ψ_{k↑}(r) ψ*_{k↑}(r′) ψ*_{i↓}(r) ψ_{i↓}(r′) / [|r − r′| (n↑(r) n↓(r) n↑(r′) n↓(r′))^{1/2}] = f^x_{↓↑,↓↑}(r, r′).   (58)

here, n↑ and n↓ are the numbers of occupied spin-up and spin-down orbitals.
we have evaluated eqs.
(51)–(54) for the spin-con-
serving and spin-flip excitation energies of the lithium
atom involving the h(σ) and l(σ′) orbitals. the lsda
and kli-xx orbital eigenvalues that are needed as input
are given in table i.
the associated excitation energies are shown in table
iii, where we compare ks excitations, i.e., differences of
ks eigenvalues, with tddft excitations obtained using
the single-pole approximation described above. all in all,
the tddft excitation energies are not much improved
compared to the ks orbital eigenvalue differences. the
main reason is that the lsda and kli-xx ks energy
eigenvalues are not particularly close to the exact ks
energy eigenvalues, and furthermore that the single-pole
approximation is too simplistic for this open-shell atom.
however, we observe that the xc correction ∆s_xc to the
spin stiffness es, when directly calculated within lsda
or kli-xx using the tddft formula (54), is reasonably
close to the exact value, and has the correct sign. this
tells us that, even though the ks spin gap itself may not be very good, the simple tddft expression (54) gives
a reasonable approximation for the xc correction to it.
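up to the rounding of the tabulated values, the tddft entries of table iii can be checked with a few lines of python: the xc corrections in the bottom part are the differences between the single-pole excitation energies and the bare ks eigenvalue differences, and they add up according to relation (33):

```python
# arithmetic check of the lsda and kli-xx columns of table iii (values in ev)
table = {
    "lsda":   {"sf+_ks": 49.47, "sf+_tddft": 48.23, "sf-_ks": 1.07,  "sf-_tddft": 0.99},
    "kli-xx": {"sf+_ks": 63.64, "sf+_tddft": 62.12, "sf-_ks": -2.89, "sf-_tddft": -2.97},
}
for name, row in table.items():
    d_plus = row["sf+_tddft"] - row["sf+_ks"]    # delta_sf+_xc from eq. (52)
    d_minus = row["sf-_tddft"] - row["sf-_ks"]   # delta_sf-_xc from eq. (53)
    # relation (33): their sum is the correction to the spin stiffness, eq. (54)
    print(name, round(d_plus, 2), round(d_minus, 2), round(d_plus + d_minus, 2))
```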
iv.
conclusion
the calculation of spin gaps and related quantities is
important for phenomena like spin-flip excitations in fi-
nite systems,38 the magnetic and transport properties
of extended systems such as half-metallic ferromagnets17
and, quite generally, in the emerging field of spintronics
and spin-dependent transport.
our aim in this paper was to show how to define and
calculate spin gaps and related quantities from density-
functional theory. the proper definition of spin gaps in
sdft is by no means obvious, and the straightforward
extrapolation of concepts and properties from the charge
case to the spin case is fraught with dangers.
there-
fore, we started our investigation by disentangling two
aspects of the gap problem that in the charge case are
usually treated together: the derivative discontinuity and
the many-body correction to single-particle gaps.
on this background, we then provided a set of dft-
based definitions of quantities that are related to spin
gaps, such as spin-conserving gaps, spin-flip gaps and
the spin stiffness, pointing out in each case where pos-
sible analogies to the charge case exist, and when these
analogies break down.
in particular, spin-flips involve
excitations, while particle addition and removal involves
ground-state energies. as a consequence, single-particle
spin-flip energies involve two eigenvalues (and not one)
and single-particle spin gaps involve four (and not two).
moreover, each spin-flip energy may have its own xc cor-
rection (there is no koopmans' theorem for spin flips).
an evaluation of our definitions for the lithium atom,
making use of highly precise kohn-sham eigenvalues and
spectroscopic data, shows that the many-body correction
to spin gaps can indeed be nonzero. in fact, unlike what
is common in the charge case, this correction turns out
to be negative, i.e. the single-particle calculation overes-
timates the spin gap while it underestimates the charge
gap.
while this result for a single atom is consistent
with available data on half-metallic ferromagnets,17 sim-
ilar calculations must be performed for other systems be-
fore broad trends can be identified.
next, we connected the many-body corrections to
the spin gap and related quantities to ensemble dft
and to tddft. the former connection makes use of
a suitable excited-state spin ensemble (different from
the degenerate-state spin ensemble recently proposed by
yang and collaborators29,30) and depends on a crucial in-
sight of levy34 regarding excited-state derivative discon-
tinuities. the latter connection employs a noncollinear
version of the casida equations,38 which we evaluate,
again for the lithium atom, within the single-pole ap-
proximation, in lsda and for exact exchange.
the development of approximate density functionals
and computational methodologies that permit the reli-
able calculation of spin gaps and related quantities, in-
cluding their many-body (xc) corrections, remains a chal-
lenge for the future.
acknowledgments k.c. thanks the physics depart-
ment of the university of missouri-columbia, where part
of this work was done, for generous hospitality, and
d. vieira for preparing and discussing fig. 1. k.c. is
supported by brazilian funding agencies fapesp and
cnpq. c.a.u. acknowledges support from nsf grant
no.
dmr-0553485.
c.a.u. would also like to thank
the kitp santa barbara for its hospitality and partial
support under nsf grant no. phy05-51164. g.v. ac-
knowledges support from nsf grant no. dmr-0705460.
1 the expression 'charge gap' is actually a misnomer, as
what is added, removed or excited is a particle (with charge
and spin) not just charge. charge gap is the common ex-
pression, however, and we will use it interchangeably with
the more correct 'particle gap'. in one dimensional systems,
on the other hand, it is important to distinguish charge and
particle gaps, due to the possibility of spin-charge separa-
tion.
2 r. m. dreizler and e. k. u. gross, density functional
theory (springer, berlin, 1990).
3 r. g. parr and w. yang, density-functional theory of
atoms and molecules (oxford university press, oxford,
1989).
4 g. f. giuliani and g. vignale, quantum theory of the
electron liquid (cambridge university press, cambridge,
2005).
5 j. p. perdew, r. g. parr, m. levy, and j. l. balduz, phys.
rev. lett. 49, 1691 (1982).
6 j. p. perdew and m. levy, phys. rev. lett. 51, 1884
(1983).
7 l. j. sham and m. schlüter, phys. rev. lett. 51, 1888
(1983).
8 l. j. sham and m. schlüter, phys. rev. b 32, 3883 (1985).
9 a. seidl, a. görling, p. vogl, j. a. majewski, and m. levy,
phys. rev. b 53, 3764 (1996).
10 k. capelle, m. borgh, k. karkkainen, and s.m. reimann
phys. rev. lett. 99, 010402 (2007).
11 f. p. rosselli,
a. b. f. da silva,
and k. capelle,
arxiv:physics/0611180v2 (2006).
12 j. chen, j. b. krieger, r. o. esquivel, m. j. stott, and g.
j. iafrate, phys. rev. a 54, 1910 (1996).
13 o. v. gritsenko and e. j. baerends, j. chem. phys. 120,
8364 (2004).
14 j. b. krieger, y. li, and g. j. iafrate, phys. rev. a 45,
101 (1992).
15 s. mannervik and h. cederquist, phys. scr. 27, 175 (1983).
16 c. f. bunge and a. v. bunge, phys. rev. a 17, 816 (1978);
c. f. bunge, j. phys. b: at. mol. opt. phys. 14, 1 (1981);
j.-j. hsu, k. t. chung, and k.-n. huang, phys. rev. a
44, 5485 (1991).
17 j. m. d. coey and m. venkatesan, j. appl. phys. 91, 8345
(2002).
18 u. von barth and l. hedin, j. phys. c 5, 1629 (1972).
19 h. eschrig and w. e. pickett, solid state commun. 118,
123 (2001).
20 k. capelle and g. vignale, phys. rev. lett. 86, 5546
(2001).
21 k. capelle and g. vignale, phys. rev. b 65, 113106
(2002).
22 c. a. ullrich, phys. rev. b 72, 073102 (2005).
23 o. gritsenko and e. j. baerends, j. chem. phys. 120,
8364 (2004).
24 n. argaman and g. makov, phys. rev. b 66, 052413
(2002).
25 n. i. gidopoulos, phys. rev. b 75, 134408 (2007).
26 w. kohn, a. savin, and c. a. ullrich, int. j. quantum
chem. 100, 20 (2004).
27 k. capelle, c. a. ullrich, and g. vignale, phys. rev. a
76, 012508 (2007).
28 t. gál, p. w. ayers, f. de proft, and p. geerlings, j.
chem. phys. 131, 154114 (2009).
29 a. j. cohen, p. mori-sánchez, and w. yang, science 321,
792 (2008).
30 p. mori-sánchez, a. j. cohen, and w. yang, phys. rev.
lett. 102, 066403 (2009).
31 a. k. theophilou, j. phys. c 12, 5419 (1979).
32 e. k. u. gross, l. n. oliveira, and w. kohn, phys. rev.
a 37, 2805 (1988); ibid 2809; ibid 2821.
33 a. nagy, phys. rev. a 49, 3074 (1994); ibid 42, 4388
(1990).
34 m. levy, phys. rev. a 52, r4313 (1995).
35 time-dependent density functional theory, edited by m.
a. l. marques, c. a. ullrich, f. nogueira, a. rubio, k.
burke, and e. k. u. gross, lecture notes in physics 706
(springer, berlin, 2006).
36 p. elliott, k. burke, and f. furche, in recent advances in
density functional methods 26, edited by k. b. lipkowitz
and t. r. cundari (wiley, hoboken, nj, 2009), p. 91.
37 m. e. casida, in recent advances in density functional
methods, edited by d. e. chong (world scientific, singa-
pore, 1995), vol. 1, p. 155.
38 f. wang and t. ziegler, j. chem. phys. 121, 12191 (2004);
j. chem. phys. 122, 074109 (2005); int. j. quant. chem.
106, 2545 (2006).
39 j. sticht, k. h. höck, and j. kübler, j. phys.: condens.
matter 1, 8155 (1989).
40 l. m. sandratskii, adv. phys. 47, 91 (1998).
41 o. heinonen, j. m. kinaret, and m. d. johnson, phys.
rev. b 59, 8073 (1999).
42 c. a. ullrich and m. e. flatté, phys. rev. b 66, 205305
(2002) and phys. rev. b 68, 235310 (2003).
43 m. petersilka, u. j. gossmann, and e. k. u. gross, phys.
rev. lett. 76, 1212 (1996).
44 h. appel, e. k. u. gross, and k. burke, phys. rev. lett.
90, 043005 (2003).
45 y.-h. kim and a. görling, phys. rev. lett. 89, 096402
(2002).
46 m. hellgren and u. von barth, phys. rev. b 78, 115107
(2008).
47 j. e. sansonetti, w. c. martin, and s. l. young, handbook
of basic atomic spectroscopic data (national institute of
standards and technology, gaithersburg, 2005).
|
0911.1714 | adhesion of surfaces via particle adsorption: exact results for a
lattice of fluid columns | we present here exact results for a one-dimensional gas, or fluid, of
hard-sphere particles with attractive boundaries. the particles, which can
exchange with a bulk reservoir, mediate an interaction between the boundaries.
a two-dimensional lattice of such one-dimensional gas `columns' represents a
discrete approximation of a three-dimensional gas of particles between two
surfaces. the effective particle-mediated interaction potential of the
boundaries, or surfaces, is calculated from the grand-canonical partition
function of the one-dimensional gas of particles, which is an extension of the
well-studied tonks gas. the effective interaction potential exhibits two
minima. the first minimum at boundary contact reflects depletion interactions,
while the second minimum at separations close to the particle diameter results
from a single adsorbed particle that crosslinks the two boundaries. the second
minimum is the global minimum for sufficiently large binding energies of the
particles. interestingly, the effective adhesion energy corresponding to this
minimum is maximal at intermediate concentrations of the particles.
| introduction
the interactions of surfaces are often affected by nanoparticles or macro-
molecules in the surrounding medium. non-adhesive particles cause attrac-
tive depletion interactions between the surfaces, since the excluded volume
of the molecules depends on the surface separation [2, 3, 4]. adhesive parti-
cles, on the other hand, can directly bind two surfaces together if the surface
separation is close to the particle diameter [5, 6, 7]. in a recent letter [8],
we have presented a general, statistical-mechanical model for two surfaces in
contact with adhesive particles. in this model, the space between the sur-
faces is discretized into columns of the same diameter d as the particles. the
approximation implied by this discretization is valid for small bulk volume
fractions of the particles, since three-dimensional packing effects relevant at
larger volume fractions are neglected. for short-ranged particle-surface in-
teractions, the gas of particles between the surfaces is as dilute as in the
bulk for large surface separations, except for the single adsorption layers of
particles at the surfaces.
in this article, we present an exact solution of the one-dimensional gas of
hard-sphere particles in a single column between two 'surfaces'. our aim here
is two-fold. first, the exact solution presented here corroborates our previ-
ous, approximate solution for this one-dimensional gas obtained from a virial
expansion in the particle concentration [8].
second, the exactly solvable,
one-dimensional model considered here is a simple toy model to study the
interplay of surface adhesion and particle adsorption. exactly solvable, one-
dimensional models have played an important role in statistical mechanics
[9, 10]. one example is the kac-baker model [11, 12, 13], which has shed light
on the statistical origin of phase transitions of the classical van der waals
type. more recent examples are models for one-dimensional interfaces, or
strings, which have revealed the relevance of entropy and steric interactions
in membrane unbinding and wetting transitions [14, 15, 16]. other examples
are the tonks model [1] and its various generalizations [17, 18, 19, 20], which
have influenced our understanding of the relations between short-ranged par-
ticle interactions, thermodynamics, and statistical correlations in simple flu-
ids.
the tonks model has been exploited also in soft-matter physics to
investigate structures of confined fluids [21, 22], depletion phenomena in two-
component mixtures [23], thermal properties of columnar liquid crystals [24]
and the phase behavior of polydisperse wormlike micelles [25]. a recent bio-
physical modification of the tonks model addresses the wrapping of dna
around histone proteins [26]. the model considered here is a novel extension
of the tonks model.
in our model, a one-dimensional gas of hard-sphere particles is attracted
to the system boundaries, or 'surfaces', by short-ranged interactions. we
calculate the effective, particle-mediated interaction potential between the
surfaces, v , by explicit integration over the particles' degrees of freedom in
the partition function. the potential v is a function of the surface separa-
tion l and exhibits a minimum at surface contact, which reflects depletion
interactions, and a second minimum at separations close to the diameter of
the adhesive particles. the effective, particle-mediated adhesion energy of
figure 1: a) a one-dimensional gas, or fluid, of hard-sphere particles in a
column of the same diameter d as the particles. the interaction between
the particles and the boundaries, or 'surfaces', is described by a square-well
potential of depth u and range lr < d/2. a particle thus gains the binding
energy u if its center is located at a distance smaller than lr+d/2 from one of
the surfaces. the length l of the column corresponds to the separation of the
surfaces. we consider the grand-canonical ensemble in which the particles
in the column exchange with a bulk reservoir. - b) for small bulk volume
fractions, a two-dimensional lattice of such columns represents a discrete
approximation of a three-dimensional gas of particles between two surfaces
[8]. since the particle-surface interactions are short-ranged in our model,
the particle gas between the surfaces is as dilute as in the bulk for large
surfaces separations, except for the adsorption layer of particles at each of
the surfaces.
the surfaces, w, can be determined from the interaction potential v . the
adhesion energy is the minimal work that has to be performed to bring the
surfaces apart from the equilibrium state corresponding to the deepest well of
the potential v (l). interestingly, the adhesion energy w attains a maximum
value at an optimal particle concentration in the bulk, and is considerably
smaller both for lower and higher particle bulk concentrations.
this article is organized as follows. in section 2, we introduce our model
and define the thermodynamic quantities of interest. in section 3, we cal-
culate the particle-mediated interaction potential v (l) of the surfaces. the
global minimum of this potential is determined in section 4, and the effective
adhesion energy of the surfaces in section 5. in section 6, we show that the
interaction potential v (l) exhibits a barrier at surface separations slightly
larger than the particle diameter, because a particle bound to one of the sur-
faces 'blocks' the binding of a second particle to the apposing surface. the
particle binding probability is calculated and analyzed in section 7.
2
model and definitions
we consider a one-dimensional gas of particles with attractive boundaries,
see figure 1. the particles are modeled as hard spheres, and the attractive in-
teraction between the particles and the boundaries, or 'surfaces', is described
by a square-well potential with depth u and range lr. the length l of the
gas 'column' corresponds to the separation of the surfaces and the width of
the column is chosen to be equal to the particle diameter d. the particles in
the column exchange with a bulk reservoir of particles.
the position of the center of mass of particle k is denoted by xk, and its
momentum by pk. for the system of n hard particles confined in the column
of length l > nd, one has d/2 < x1, x1 < x2 − d, x2 < x3 − d, . . ., xn < l − d/2. we assume that the 1-st and n-th particle interact with the surfaces,
i.e. with the bases of the columns, via the square-well potential
vn{xk} = −u θ(d/2 + lr − x1) − u θ(xn − l + lr + d/2),   (1)
where u > 0 and lr > 0 are the potential depth and range, respectively. we
also assume that lr < d/2. here and below, θ denotes the heaviside step
function with θ(x) = 1 for x > 0 and θ(x) = 0 for x < 0. the configuration
energy for the system of n particles in the column is
hn{xk, pk} = vn{xk} + Σ_{k=1}^{n} p_k^2/(2m)   (2)
and the corresponding canonical partition function can be written as
zn = (1/λ^n) ∫_{(n−1/2)d}^{l−d/2} dxn ∫_{(n−3/2)d}^{xn−d} dx_{n−1} . . . ∫_{(3/2)d}^{x3−d} dx2 ∫_{(1/2)d}^{x2−d} dx1 e^{−vn{xk}/t}   (3)
after integration over the momenta of the particles, see, e.g., [1, 21, 26]. here,
λ = h/(2πmt)1/2 is the thermal de broglie wavelength, and t denotes the
temperature times the boltzmann constant. in other words, t is the basic
energy scale.
since the particles can exchange with the bulk solution, the number n
of particles in the column is not constant. such a system is described by
the grand-canonical ensemble in which the temperature, the column length
l, and the particle chemical potential μ are fixed. the corresponding grand-
canonical partition function
z = 1 + Σ_{n=1}^{⌊l/d⌋} zn e^{nμ/t}   (4)
is a sum of a finite number of elements, where ⌊l/d⌋ denotes the largest integer less than or equal to l/d. the upper limit of the sum on the right hand
side of equation (4) is the largest number of hard particles with diameter d
that can be accommodated in a column of length l. the partition function
z given by equation (4) determines the grand potential

F_gc = −t ln z,   (5)

the bulk density of the grand potential

f_gc = lim_{l→∞} F_gc d/l   (6)
and, hence, the surface contribution to the grand potential F^(s)_gc = F_gc − f_gc l/d. the effective interaction potential of the surfaces 1,

v = F^(s)_gc / d^2 = (F_gc d − f_gc l) / d^3,   (7)
is defined as the density of the surface contribution to the grand potential F_gc. for consistency with our previous model for particle-mediated surface
interactions [8], the column bases are chosen here to be squares of side length
d. thus the d2 in the denominator of equation (7) is the column base area.
the surface potential v defined by equation (7) is the main quantity of
interest here and will be determined in the next section.
3
effective surface interaction potential
equations (4) - (7) imply that
exp(−v d^2/t) = [1 + Σ_{n=1}^{⌊l/d⌋} zn e^{nμ/t}] exp(f_gc l/(t d)).   (8)
1the surface interaction potential v was called the 'effective adhesion potential' in
reference [8].
to determine the surface interaction potential v , we thus have to calculate
the canonical partition function zn and the bulk density of the grand po-
tential, fgc, defined in equation (6). the n-particle partition function zn
is defined in equation (3). the change of variables yk = xk − (k − 1/2) d in
equation (3), with k = 1, 2 . . ., n, leads to
zn = (1/λ^n) ∫_0^{l−nd} dyn e^{u θ(yn − l + nd + lr)/t} ∫_0^{yn} dy_{n−1} . . . ∫_0^{y3} dy2 ∫_0^{y2} dy1 e^{u θ(lr − y1)/t},   (9)
where u is the binding energy of the particles.
the integral (9) can be
evaluated, see a, and after some computation we arrive at
zn = (e^{u/t} − 1)^2 φn(l − 2lr) − 2 e^{u/t} (e^{u/t} − 1) φn(l − lr) + e^{2u/t} φn(l),   (10)

with

φn(l0) = (1/n!) [(l0 − nd)/λ]^n θ(l0 − nd)   (11)
for any length l0. for u = 0, equation (10) reduces to the partition function

zn = [1/(λ^n n!)] (l − nd)^n θ(l − nd)   (12)
of the classical tonks gas [1]. the grand potential density fgc can be derived
from this exact result as shown in the following subsection.
3.1
thermodynamic potentials in the bulk
the canonical and grand-canonical ensembles are equivalent in the thermo-
dynamic limit, i.e. for infinite surface separation l. in this limit, the grand
potential density fgc therefore can be obtained from the canonical poten-
tial density via legendre transformation. first, we define the canonical free
energy F_ca = −t ln zn and the free energy density in the bulk,

f_ca = lim_∞ F_ca d/l,   (13)
where lim_∞ denotes the thermodynamic limit in which both the column length l and the particle number n go to infinity while the particle 'volume fraction'

φ = n d / l   (14)
remains constant. the particle volume fraction in one dimension defined by
equation (14) attains values 0 ≤φ ≤1, and could also be called a 'length
fraction'. in the one-dimensional model considered here, close packing cor-
responds to φ = 1.
the free energy density fca defined by equation (13) is an intensive quan-
tity in the thermodynamic limit and, therefore, does not depend on the
boundary conditions. in particular, the free energy density fca and its deriva-
tives do not depend on the boundary potential (1), which is characterized by
the binding energy u and range lr. with the exact expression for the canon-
ical partition function zn given in equation (10) and the stirling formula

ln(n!) ≈ n ln n − n + (1/2) ln(2πn),   (15)
we get
f_ca = tφ {−1 + ln[(λ/d) φ/(1 − φ)]}.   (16)
as expected, the particle-surface interactions characterized by the binding
energy u and range lr do not affect the free energy density fca in the ther-
modynamic limit. from the canonical free energy density fca, we obtain the
chemical potential
μ = (∂f_ca/∂φ)_t   (17)
for the particles in the bulk. equations (16) and (17) lead to
μ = t ln[(λ/d) φ/(1 − φ)] + t φ/(1 − φ)   (18)
which can be rewritten as
e^{μ/t} = (λ/d) [φ/(1 − φ)] exp[φ/(1 − φ)].   (19)
finally, the grand potential density fgc follows from the legendre transfor-
mation
f_gc = f_ca − μ φ.   (20)
equations (16), (18) and (20) lead to
f_gc = −tφ/(1 − φ),   (21)
with the dependence between the chemical potential μ and the particle bulk
volume fraction φ given in equation (18).
basic thermodynamics implies
that the gas pressure in the bulk is p = −fgc/d3. this leads to the pressure
p = tφ/[(1 −φ)d3] here, which is the correct equation of state for the tonks
gas [1].
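the chain of relations (16)-(21) can also be verified symbolically; a short python/sympy sketch (with t, φ, λ and d treated as positive symbols) reads:

```python
# symbolic check of eqs. (16)-(21): chemical potential, legendre transform, tonks pressure
import sympy as sp

t, phi, lam, d = sp.symbols("t phi lam d", positive=True)

f_ca = t * phi * (-1 + sp.log((lam / d) * phi / (1 - phi)))   # eq. (16)
mu = sp.diff(f_ca, phi)                                       # eq. (17)
f_gc = f_ca - mu * phi                                        # eq. (20)

mu_expected = t * sp.log((lam / d) * phi / (1 - phi)) + t * phi / (1 - phi)  # eq. (18)
print(sp.simplify(mu - mu_expected))                          # -> 0
print(sp.simplify(f_gc + t * phi / (1 - phi)))                # -> 0, eq. (21)
print(sp.simplify(-f_gc / d**3 - t * phi / ((1 - phi) * d**3)))  # -> 0, tonks pressure
```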
3.2
potential profile
by combining equations (8), (19) and (21), the effective, particle-mediated
interaction potential v of the surfaces can be expressed as a function of the
particle bulk volume fraction φ and the separation l between the surfaces:

v = −(t/d^2) ln{1 + Σ_{n=1}^{⌊l/d⌋} zn [(λ/d) φ/(1 − φ)]^n exp[n φ/(1 − φ)]} + (l t/d^3) φ/(1 − φ),   (22)
with the exact expression for the canonical partition function zn given in
equations (10) and (11), the potential v can be evaluated numerically for any
finite surface separation l, see figures 2 and 3. for large bulk volume fractions
φ (dashed curve in figure 2), the potential v exhibits oscillations up to surface
separations l of the order of several particle diameters before approaching a
constant, asymptotic value v∞for large surface separations. the oscillations
are related to successive layers of particles formed in the space between the
surfaces.
for small bulk volume fractions φ, in contrast, the interaction
potential v attains an approximately constant value for surface separations
l> 2(d + lr), see solid curve in figure 2.
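a minimal python sketch of this numerical evaluation, implementing eqs. (10), (11) and (22) with all lengths in units of d and all energies in units of t, is given below for the parameters quoted for figure 2 (u = 5 t, lr = 0.3 d); the thermal wavelength λ cancels between zn and the factor (λ/d)^n in eq. (22) and is therefore simply set equal to d:

```python
# numerical sketch of the surface potential v(l), eqs. (10), (11) and (22)
import math

def phi_n(l0, n, d=1.0, lam=1.0):
    """phi_n(l0) of eq. (11), including the heaviside cutoff theta(l0 - n d)."""
    if l0 - n * d <= 0.0:
        return 0.0
    return ((l0 - n * d) / lam) ** n / math.factorial(n)

def z_n(n, l, u, lr, d=1.0, lam=1.0):
    """canonical partition function z_n of eq. (10); u in units of t, lengths in units of d."""
    a = math.exp(u)
    return ((a - 1.0) ** 2 * phi_n(l - 2 * lr, n, d, lam)
            - 2.0 * a * (a - 1.0) * phi_n(l - lr, n, d, lam)
            + a ** 2 * phi_n(l, n, d, lam))

def v_surface(l, phi, u, lr, d=1.0):
    """surface interaction potential v(l) of eq. (22), in units of t/d^2 (lambda set to d)."""
    zeta = phi / (1.0 - phi)
    s = 1.0
    for n in range(1, int(l // d) + 1):
        s += z_n(n, l, u, lr, d, lam=d) * (zeta * math.exp(zeta)) ** n
    return -math.log(s) / d ** 2 + (l / d ** 3) * zeta

if __name__ == "__main__":
    for phi in (0.05, 0.5):
        print(phi, [round(v_surface(l, phi, u=5.0, lr=0.3), 3)
                    for l in (0.5, 1.3, 2.0, 6.0)])
```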
3.3
potential asymptote
since the interactions between the particles and the surfaces are short-ranged,
the potential v has a horizontal asymptote, i.e. v (l) approaches a constant
value v∞for large surface separation l, see figures 2 and 3. in this subsection,
we calculate the position of the asymptote. to simplify the notation, we first
introduce the auxiliary variable
ζ = φ/(1 − φ),   (23)
and the function
g(l/d, l0/d, ζ) = Σ_{n=1}^{⌊l/d⌋} (ζ^n/n!) exp[−(l − nd) ζ/d] [(l − l0)/d − n]^n θ[(l − l0)/d − n]   (24)
defined for an arbitrary length l0. by combining equations (8), (10), (19)
and (21), we then express the potential v as
e^{−v(l) d^2/t} = e^{−ζ l/d} + (e^{u/t} − 1)^2 g(l/d, 2lr/d, ζ) − 2 e^{u/t} (e^{u/t} − 1) g(l/d, lr/d, ζ) + e^{2u/t} g(l/d, 0, ζ).   (25)
figure 2: rescaled surface interaction potential v d2/t, given by equation
(22), as a function of the rescaled surface separation l/d where d is the par-
ticle diameter and t denotes the temperature in energy units. the particle
binding energy here is u = 5 t, the binding range is lr = 0.3 d, and the
particle bulk volume fraction is φ = 0.05 (solid line) and φ = 0.5 (dashed
line), respectively. for large surface separations l≫d, the potential v (l)
attains an approximately constant value v∞. according to equation (28),
the asymptotic values are v∞≈−6.6446 t/d2 for φ = 0.5 (dashed line) and
v∞≈−2.3422 t/d2 for φ = 0.05 (solid line), in excellent agreement with
numerical values obtained from equation (22).
using the saddle-point approximation and stirling's formula (15), one can
prove that
lim_{l→∞} g(l/d, l0/d, ζ) = e^{−ζ l0/d}/(1 + ζ),   (26)
see b. we now apply this result to equation (25) and arrive at
lim_{l→∞} exp(−v d^2/t) = [e^{u/t} − (e^{u/t} − 1) e^{−ζ lr/d}]^2 · 1/(1 + ζ).   (27)
hence, the asymptotic value of the potential v (l) is given by the following
exact expression:
v∞ = −(t/d^2) ln(1 − φ) − (2t/d^2) ln{e^{u/t} − (e^{u/t} − 1) exp[−(lr/d) φ/(1 − φ)]}.   (28)
for non-adhesive particles with u = 0 or lr = 0, the asymptotic value of the
potential v (l) is v∞= −(t/d2) ln(1 −φ). for small bulk volume fractions
φ ≪1 of the particles and large binding energy u with eu/t ≫1, we obtain
v∞ ≈ −(2t/d^2) ln(1 + φ e^{u/t} lr/d).   (29)
we will use equations (28) and (29) in section 5 to calculate the effective
adhesion energy of surfaces. in the following section 4, we determine the
global minimum of the potential v (l).
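a quick numerical check of eqs. (28) and (29) for the parameters of figure 2 (u = 5 t, lr = 0.3 d) reproduces the asymptotic values quoted in the figure caption:

```python
# check of the exact asymptote (28) and its dilute-limit form (29); energies in t/d^2
import math

def v_inf_exact(phi, u, lr_over_d):
    zeta = phi / (1.0 - phi)
    a = math.exp(u)
    return -math.log(1.0 - phi) - 2.0 * math.log(a - (a - 1.0) * math.exp(-lr_over_d * zeta))

def v_inf_dilute(phi, u, lr_over_d):
    return -2.0 * math.log(1.0 + phi * math.exp(u) * lr_over_d)

for phi in (0.05, 0.5):
    # figure 2 quotes v_inf ~ -2.3422 (phi = 0.05) and -6.6446 (phi = 0.5) in units of t/d^2
    print(phi, round(v_inf_exact(phi, 5.0, 0.3), 4), round(v_inf_dilute(phi, 5.0, 0.3), 4))
```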
4
global minimum of the surface interaction
potential
in the present calculation we have chosen the free energy reference state
in such a way that the effective surface interaction potential vanishes at
surface contact l= 0, i.e. v (l= 0) = 0, see equation (7). for surface
separations 0 < l< d, the potential v (l) increases linearly with l, i.e. v (l) =
ltφ/[(1 −φ)d3], since F_gc = 0 for these separations. the potential v (l)
decreases for separations d < l< d+lr, and increases again for d+lr < l< 2d.
the potential thus attains a minimum at l= d + lr, see figures 2 and 3. the
value vmin = v (l= lr + d) at this minimum can be calculated again from
equation (22). for l= d + lr, equation (10) reduces to z1 = e2u/tlr/λ and
zn = 0 for n ≥2 since only a single particle fits into the column. insertion
of these results into equation (22) leads to
vmin = −(t/d^2) ln{1 + (lr/d) [φ/(1 − φ)] exp[2u/t + φ/(1 − φ)]} + (t/d^2) (lr/d + 1) φ/(1 − φ).   (30)
the potential v (l) thus has two local minima, one located at l= 0 with
v (l= 0) = 0 and the other at l= lr + d with v (l= lr + d) = vmin. the
two minima result from the interplay of depletion interactions and adhesive
interactions. the global minimum of v (l) is located at l= d+lr if vmin < 0,
i.e. for
e^{−ζ} + (lr/d) ζ e^{2u/t} − e^{ζ lr/d} > 0   (31)
with ζ = φ/(1 −φ), see equation (23). the inequality (31) is fulfilled for
sufficiently large particle binding energies u. for small binding energies u,
in contrast, the potential v (l) has its global minimum at surface contact
l= 0, see figure 3.
in the experimentally relevant case of small particle bulk volume fractions
φ ≪1 and large binding energy u with eu/t ≫1, equation (30) reduces to
vmin ≈ −(t/d^2) ln(1 + φ e^{2u/t} lr/d)   (32)
figure 3: rescaled surface interaction potential v d2/t, given by equation
(22), versus the rescaled surface separation l/d. the bulk volume fraction of
the particles here is φ = 0.1, the binding range is lr = 0.3 d, and the binding
energy is u = 0.9 t (dashed line) and u = 0.6 t (solid line). the global
minimum of the potential v (l) is located at the separation l= d + lr for
u = 0.9 t (dashed line) and at surface contact l= 0 for u = 0.6 t (solid
line).
and the inequality (31) simplifies to
u > (t/2) ln(1 + d/lr).   (33)
and, thus, to a relation that is independent of the particle bulk volume frac-
tion φ.
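for the parameters of figure 3 (φ = 0.1, lr = 0.3 d) this criterion is easily checked numerically: the threshold (33) is (t/2) ln(1 + d/lr) ≈ 0.73 t, so u = 0.9 t places the global minimum at l = d + lr while u = 0.6 t leaves it at surface contact, in agreement with the figure. a short python sketch evaluating both the full inequality (31) and the threshold (33) reads:

```python
# check of the minimum condition (31) and the dilute-limit threshold (33), figure-3 parameters
import math

phi, lr = 0.1, 0.3                      # lr in units of d
zeta = phi / (1.0 - phi)
u_threshold = 0.5 * math.log(1.0 + 1.0 / lr)   # eq. (33), approximately 0.733 t
print(round(u_threshold, 3))

for u in (0.6, 0.9):                    # binding energies in units of t
    lhs = math.exp(-zeta) + lr * zeta * math.exp(2.0 * u) - math.exp(zeta * lr)  # eq. (31)
    print(u, round(lhs, 4), "minimum at l = d + lr" if lhs > 0 else "minimum at contact")
```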
5
adhesion energy
in this section, we assume that the binding energy u of the particles is
sufficiently large so that the inequality (31) is fulfilled. the global minimum
of the interaction potential v (l) then is located at l= d + lr. the minimum
value vmin = v (l= d + lr) is given by equation (30).
for large surface
separations l≫d, the potential v (l) attains a constant value v∞given
by equation (28). the difference between the asymptotic and the minimum
value of the potential v (l) is the effective adhesion energy
w = v∞ − vmin   (34)

of the surfaces. the effective adhesion energy is the minimal work that has to be performed to bring the two surfaces far apart from the separation l= d + lr.
from equations (28), (30), and (34), we obtain the exact result
w = (t/d^2) ln{1 + (lr/d) [φ/(1 − φ)] exp[2u/t + φ/(1 − φ)]} − (t/d^2) (lr/d + 1) φ/(1 − φ)
− (t/d^2) ln(1 − φ) − (2t/d^2) ln{e^{u/t} − (e^{u/t} − 1) exp[−(lr/d) φ/(1 − φ)]}.   (35)
in figure 4, the adhesion energy w is plotted as a function of the particle bulk
volume fraction φ. interestingly, the adhesion energy w exhibits a maximum
at an optimal bulk volume fraction φ⋆of the particles.
figure 4: rescaled adhesion energy w d^2/t as a function of the particle bulk volume fraction φ. the solid line corresponds to the exact result (35), and the dashed line to the approximation (36). the particle binding energy here is u = 5 t and the binding range is lr = 0.3 d. the adhesion energy has a maximum at φ = φ⋆ with φ⋆ ≈ e^{−u/t} d/lr.
for small bulk volume fractions φ ≪1 and large binding energy u with
eu/t ≫1, the asymptotic value and minimum value of v (l) are approxi-
mately given by equations (29) and (32), respectively. the adhesion energy
w = v∞ − vmin then simplifies to
w ≈ (t/d^2) ln[(1 + φ e^{2u/t} lr/d) / (1 + φ e^{u/t} lr/d)^2]. (36)
this expression is identical with our previous result obtained from a virial
expansion in φ up to second order terms [8]. for φ ≪1 and eu/t ≫1, the
adhesion energy (36) is a good approximation of the exact result (35), see
figures 4 and 5. from equation (36), we obtain the approximate expression
φ⋆ ≈ (d/lr) e^{−u/t} (37)
for the optimum bulk volume fraction φ⋆at which the adhesion energy w
attains its maximum value.
the adhesion energy (36) can be understood as the difference of two lang-
muir adsorption free energies per fluid column, or pair of apposing binding
sites [8]: (i) the adsorption free energy (t/d^2) ln(1 + q φ e^{2u/t}) for small surface separations at which a particle binds both surfaces with total binding energy 2u, and (ii) the adsorption free energy (t/d^2) ln(1 + q φ e^{u/t}) for large surface separations, counted twice in (36) because we have two surfaces.
faces. these langmuir adsorption free energies result from a simple two-state
model in which a particle is either absent (boltzmann weight 1) or present
(boltzmann weights q φ e2u/t and q φ eu/t, respectively) at a given binding
site, see e.g. [20]. the factor q depends on the degrees of freedom of a single
adsorbed particle. in our model, we obtain q = lr/d.
to assess the quality of approximate expression (36), we analyze its rela-
tive error in reference to the exact result (35). the relative error is the mag-
nitude of the difference between the exact result (35) and the approximate
expression (36) divided by the magnitude of the exact result (35). figure 5
shows parameter regions in which the relative error of the expression (36)
is smaller or larger than 1%, 2% and 5%, respectively. in this example, the
binding range is lr = 0.3 d. for intermediate and large binding energies with
u > 6 t, we find that the relative error of the approximate expression (36)
is smaller than 1% in a broad range of volume fractions φ.
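as an illustration of this comparison, the exact result (35) and the approximation (36) can be scanned over φ to locate the optimal volume fraction numerically; a minimal sketch (python, units t = d = 1, with the parameters of figure 4) follows.

```python
import math

def w_exact(phi, lr, u):
    # effective adhesion energy, equation (35), units t = d = 1
    z = phi / (1.0 - phi)
    return (math.log(1.0 + lr * z * math.exp(2.0 * u + z))
            - (lr + 1.0) * z
            - math.log(1.0 - phi)
            - 2.0 * math.log(math.exp(u) - (math.exp(u) - 1.0) * math.exp(-lr * z)))

def w_approx(phi, lr, u):
    # approximation (36), valid for phi << 1 and exp(u/t) >> 1
    return math.log((1.0 + phi * math.exp(2.0 * u) * lr) / (1.0 + phi * math.exp(u) * lr) ** 2)

lr, u = 0.3, 5.0                                   # binding range and energy of figure 4
phis = [0.0005 * i for i in range(1, 400)]         # volume fractions up to about 0.2
phi_star = max(phis, key=lambda p: w_approx(p, lr, u))
print(phi_star, math.exp(-u) / lr)                 # numerical optimum vs estimate (37)
print(w_exact(phi_star, lr, u), w_approx(phi_star, lr, u))
```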
6
potential barrier
for large binding energies u with eu/t ≫1, the effective interaction potential
has a barrier at surface separations d + 2lr < l< 2(d + lr), see figure 2. at
these separations, only a single particle fits between the surfaces, but this particle can bind only one of the surfaces. the particle thus 'blocks' the binding site at the apposing surface.
the potential barrier attains its maximum value vba = v (l= 2d) at the
separation l = 2d, see figure 2. from equation (25), we obtain
vba = −(t/d^2) ln[1 + (2 (lr/d) (e^{u/t} − 1) + 1) (φ/(1−φ)) exp(φ/(1−φ))] + 2 (t/d^2) φ/(1−φ) (38)
figure 5: relative error of the approximate expression (36) for the binding range lr = 0.3 d of the particles. in the parameter region above the dotted
line, the relative error is smaller than 1%. below this line, the relative error
is larger than 1%. the relative error is smaller than 2% above the dashed
line, and smaller than 5% above the solid line. for binding energies u > 6 t,
the simple expression (36) approximates the exact result (35) very well since
the relative error is smaller than 1% for a broad range of volume fractions φ.
for φ ≪ 1 and e^{u/t} ≫ 1, we get
vba ≈ −(t/d^2) ln(1 + 2 φ e^{u/t} lr/d). (39)
the barrier height uba = vba − v∞ then is
uba ≈ (t/d^2) ln[(1 + φ e^{u/t} lr/d)^2 / (1 + 2 φ e^{u/t} lr/d)] (40)
since the asymptotic value v∞is given by equation (29) in this limiting case.
the width of the barrier is approximately lba ≈d, see figure 2. equation (40)
is again identical with our previous result obtained from a virial expansion
in φ up to second order terms [8].
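for completeness, the barrier expressions can be evaluated in the same way; the following is a minimal sketch (python, units t = d = 1) of equations (38) and (40), with illustrative parameter values that are not taken from the figures.

```python
import math

def v_barrier(phi, lr, u):
    # maximum of the potential barrier at l = 2d, equation (38), units t = d = 1
    z = phi / (1.0 - phi)
    return -math.log(1.0 + (2.0 * lr * (math.exp(u) - 1.0) + 1.0) * z * math.exp(z)) + 2.0 * z

def u_barrier(phi, lr, u):
    # barrier height u_ba = v_ba - v_inf for phi << 1 and exp(u/t) >> 1, equation (40)
    x = phi * math.exp(u) * lr
    return math.log((1.0 + x) ** 2 / (1.0 + 2.0 * x))

print(v_barrier(0.01, 0.3, 5.0), u_barrier(0.01, 0.3, 5.0))
```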
7
binding probability
another quantity of interest here is the binding probability
ns = (1/2) ⟨θ(lr − x1 + d/2)⟩ + (1/2) ⟨θ(xn − l + lr + d/2)⟩ (41)
defined as the probability that the separation of the closest particle from
a column base is smaller than the binding range lr.
in other words, the
binding probability ns is the probability of finding a particle bound to one of
the bases. the binding probability corresponds to the surface coverage in the
case of a three-dimensional gas of particles between two parallel attractive
surfaces.
equations (1)–(5) imply that the binding probability can be calculated by differentiation of the grand potential fgc with respect to the binding energy u, i.e. ns = −(1/2) ∂fgc/∂u. since the grand potential density fgc given by equation (21) does not depend on the binding energy u, the binding probability can also be obtained from the effective surface interaction potential via
ns = −(d^2/2) ∂v/∂u. (42)
with the exact expression (25) for the interaction potential v , the binding
probability ns can be determined numerically for any finite separation l.
in figure 6, the binding probability ns is plotted as a function of surface
separation lfor three different volume fractions φ around the optimal volume
fraction φ⋆at which the adhesion energy w is maximal. in the vicinity of
φ⋆, the binding probability at large separations l> 2(d + lr) is sensitive to
small variations of φ, while the binding probability at separations l in the surface binding range d < l < d + 2lr remains practically constant at almost 100%.
for small bulk volume fractions φ of the particles and large particle bind-
ing energies u, the asymptotic and minimum value of the interaction po-
tential v (l) are given by equations (29) and (32), respectively. from these
equations and relation (42), we obtain the approximate expressions
ns,∞ ≈ φ/(φ + φ⋆) (43)
for the binding probability at large surface separation l and
ns,min ≈ φ/(φ + φ⋆ e^{−u/t}), (44)
figure 6: binding probability ns, calculated numerically from equations (42)
and (25), as a function of rescaled surface separation l/d where d is the
diameter of the adhesive particles.
the binding energy here is u = 7 t,
the binding range is lr = 0.3 d and the particle bulk volume fraction is φ =
0.001 < φ⋆(dashed line), φ = 0.003 ≈φ⋆(solid line) and φ = 0.007 > φ⋆
(dotted line). the optimal volume fraction φ⋆at which the adhesion energy
becomes maximal is given by equation (37).
for the particle binding probability at the binding separation l= d+lr of the
surfaces, with the optimum bulk volume fraction φ⋆given in equation (37).
these expressions correspond to the well-known langmuir adsorption equa-
tion [20]. at the optimal volume fraction φ⋆, the particle binding probability
for unbound and bound surfaces is ns, ∞= 1/2 and ns, min ≈1, respectively.
bringing the surfaces from large separations l> 2(d + lr) within binding
separations d < l< d + 2lr thus does not require desorption or adsorption of
particles at φ = φ⋆.
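the langmuir-type expressions (43) and (44) follow from relation (42) applied to the approximate potentials; the sketch below (python, units t = d = 1) obtains them by numerical differentiation, taking vmin from equation (32) and assuming the asymptotic value in the form v∞ ≈ −(2t/d^2) ln(1 + φ e^{u/t} lr/d), which is implied by equations (32) and (36).

```python
import math

def v_min_approx(phi, lr, u):
    # equation (32), units t = d = 1
    return -math.log(1.0 + phi * math.exp(2.0 * u) * lr)

def v_inf_approx(phi, lr, u):
    # asymptotic value implied by equations (32) and (36)
    return -2.0 * math.log(1.0 + phi * math.exp(u) * lr)

def binding_probability(v_of_u, phi, lr, u, h=1e-5):
    # relation (42): n_s = -(d^2/2) dv/du, here by a central finite difference
    return -0.5 * (v_of_u(phi, lr, u + h) - v_of_u(phi, lr, u - h)) / (2.0 * h)

phi, lr, u = 0.003, 0.3, 7.0              # parameters close to those of figure 6
phi_star = math.exp(-u) / lr              # optimal volume fraction, equation (37)
print(binding_probability(v_inf_approx, phi, lr, u), phi / (phi + phi_star))                 # (43)
print(binding_probability(v_min_approx, phi, lr, u), phi / (phi + phi_star * math.exp(-u)))  # (44)
```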
8
conclusions
we have considered a one-dimensional gas of hard-sphere particles with at-
tractive boundaries, a novel extension of the tonks model [1].
we have
solved this model analytically in the whole range of parameters by explicit
integration over the particles' degrees of freedom in the partition function.
in contrast to other studies on one-dimensional models for hard spheres
[18, 19, 20, 21, 22, 23, 24, 25, 26], we have focused on the boundary contribution to the free energy of the system, which corresponds to the effective,
particle-mediated interaction potential between the boundaries, or surfaces,
see figures 2 and 3. the effective adhesion energy obtained from the inter-
action potential depends non-monotonically on the volume fraction φ of the
particles in the bulk, see figure 4. the adhesion energy exhibits a maximum
at an optimum volume fraction, which can lead to reentrant transitions in
which the surfaces first bind with increasing volume fraction φ, and unbind
again when the volume fraction φ is increased beyond its optimum value.
a lattice of such one-dimensional gas columns represents a discrete ap-
proximation of a three-dimensional gas of particles between two adsorbing
surfaces, see figure 1 and reference [8]. for small volume fractions φ and
short-ranged particle-surface interactions considered here, the gas of parti-
cles between two well-separated surfaces is as dilute as in the bulk, except for
the single adsorption layers of particles at the surfaces. at larger volume frac-
tions, three-dimensional packing effects become relevant. these effects are
not captured correctly in the one-dimensional model. however, it has been
pointed out [22] that approximations based on one-dimensional models do
well in comparison to density functional theories for three-dimensional hard
sphere fluids confined in planar, non-adsorbing pores [27]. in principle, the
quality of the one-dimensional approximation can be tested by monte carlo
or molecular dynamics simulations, which have been used to study various
three-dimensional systems of hard spheres confined between non-adsorbing
surfaces [28, 29, 30, 31, 32].
for simplicity, and for consistency with our previous publication [8], we
have considered here a square lattice of columns between the surfaces. in
particular, the factor d2 in the denominator of equation (7) is the column
base area in the square lattice.
for a hexagonal lattice of columns, the
corresponding area is (√3/2) d^2, and the corresponding effective interaction potential of the surfaces is thus obtained by multiplying the right hand side of equation (22) with a factor 2/√3 ≈ 1.1547.
the adhesion energy of
the surfaces has to be rescaled with the same factor in the case of hexagonal
lattice of columns, but its functional dependence on the bulk volume fraction
φ, binding energy u, and binding range lr remains unchanged.
we have considered an equilibrium situation in which the particles ex-
change with a bulk solution. for polymers between surfaces, such an equi-
librium has been termed 'full equilibrium'. in a 'restricted equilibrium', in
contrast, the polymers are trapped between the surfaces [33, 34, 35], which
is less likely for the spherical particles considered here.
a
canonical partition function
in this section we calculate the n-particle partition function zn as given by
equation (9) for l> nd. first, if one notices that
e^{u θ(lr − y1)/t} = 1 + (e^{u/t} − 1) θ(lr − y1) (45)
and
e^{u θ(yn − l + nd + lr)/t} = 1 + (e^{u/t} − 1) θ(yn − l + nd + lr), (46)
the integral (9) can be written as a sum of four terms
zn = (1/λ^n) i1 + (1/λ^n) (e^{u/t} − 1) i2 + (1/λ^n) (e^{u/t} − 1) i3 + (1/λ^n) (e^{u/t} − 1)^2 i4, (47)
with
i1 = ∫_0^{l−nd} dyn ∫_0^{yn} dy_{n−1} ... ∫_0^{y3} dy2 ∫_0^{y2} dy1, (48)
i2 = ∫_0^{l−nd} dyn θ(yn − l + nd + lr) ∫_0^{yn} dy_{n−1} ... ∫_0^{y3} dy2 ∫_0^{y2} dy1, (49)
i3 = ∫_0^{l−nd} dyn ∫_0^{yn} dy_{n−1} ... ∫_0^{y3} dy2 ∫_0^{y2} dy1 θ(lr − y1), (50)
i4 = ∫_0^{l−nd} dyn θ(yn − l + nd + lr) ∫_0^{yn} dy_{n−1} ... ∫_0^{y2} dy1 θ(lr − y1). (51)
the first integral
i1 = [1/(n−1)!] ∫_0^{l−nd} yn^{n−1} dyn = (1/n!) (l − nd)^n (52)
and the second integral
i2 = [1/(n−1)!] ∫_0^{l−nd} yn^{n−1} θ(yn − l + nd + lr) dyn = (1/n!) [(l − nd)^n − (l − nd − lr)^n θ(l − nd − lr)] (53)
can be easily calculated.
to calculate the third integral, we start from
∫_0^{y2} θ(lr − y1) dy1 = min[y2, lr] = y2 − (y2 − lr) θ(y2 − lr). (54)
18
in the next steps, we find
∫_0^{y3} [y2 − (y2 − lr) θ(y2 − lr)] dy2 = (1/2) y3^2 − (1/2) (y3 − lr)^2 θ(y3 − lr) (55)
and
∫_0^{y4} [(1/2) y3^2 − (1/2) (y3 − lr)^2 θ(y3 − lr)] dy3 = (1/6) y4^3 − (1/6) (y4 − lr)^3 θ(y4 − lr). (56)
iterating these results leads to
i3 = [1/(n−1)!] ∫_0^{l−nd} [yn^{n−1} − (yn − lr)^{n−1} θ(yn − lr)] dyn. (57)
the integral i3 can now be evaluated as
i3 = (1/n!) (l − nd)^n − [1/(n−1)!] ∫_0^{l−nd} (yn − lr)^{n−1} θ(yn − lr) dyn = (1/n!) (l − nd)^n − (1/n!) (l − nd − lr)^n θ(l − nd − lr). (58)
note that i2 = i3.
the fourth integral, i4, can be brought to the form
i4 = [1/(n−1)!] ∫_0^{l−nd} θ(yn − l + nd + lr) [yn^{n−1} − (yn − lr)^{n−1} θ(yn − lr)] dyn (59)
if one uses again (54) and iterates the integration as in (55) and (56). the first term on the right hand side of equation (59) is equal to i2, see equation (53). thus i4 = i2 − i5, where
i5 = [1/(n−1)!] ∫_0^{l−nd} θ(yn − l + nd + lr) θ(yn − lr) (yn − lr)^{n−1} dyn. (60)
to determine the integral i5, one has to distinguish three cases: (i) for l > nd + 2lr, we obtain
i5 = (1/n!) [(l − nd − lr)^n − (l − nd − 2lr)^n], (61)
(ii) for nd + lr < l < nd + 2lr, we obtain
i5 = (1/n!) (l − nd − lr)^n (62)
and (iii) for l < nd + lr one gets i5 = 0. in summary
i4 = i2 − (1/n!) (l − nd − lr)^n θ(l − nd − lr) + (1/n!) (l − nd − 2lr)^n θ(l − nd − 2lr). (63)
if one now gathers the results (52), (53), (58), (63) and returns to equation
(47), one obtains the partition function
zn = [1/(λ^n n!)] [j1 + 2 (e^{u/t} − 1) (j1 − j2) + (e^{u/t} − 1)^2 (j1 − 2 j2 + j3)] (64)
with
j1 = (l − nd)^n θ(l − nd), (65)
j2 = (l − nd − lr)^n θ(l − nd − lr), (66)
j3 = (l − nd − 2lr)^n θ(l − nd − 2lr). (67)
this result can be written as
zn = [1/(λ^n n!)] [(e^{u/t} − 1)^2 (l − nd − 2lr)^n θ(l − nd − 2lr) − 2 e^{u/t} (e^{u/t} − 1) (l − nd − lr)^n θ(l − nd − lr) + e^{2u/t} (l − nd)^n θ(l − nd)] (68)
which simplifies to equation (10).
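the closed form (68) can be cross-checked against a direct monte carlo estimate of the configurational integral; the sketch below (python, with λ = 1 and t = 1) assumes that equation (9) is the ordered integral over 0 < y1 < ... < yn < l − nd weighted by the two boundary boltzmann factors appearing in (45) and (46).

```python
import math, random

def z_n_exact(n, l, d, lr, u):
    # closed form (68), equivalently equation (10), with lambda = 1 and t = 1
    def p(x):
        return x ** n if x > 0.0 else 0.0
    eu = math.exp(u)
    return ((eu - 1.0) ** 2 * p(l - n * d - 2.0 * lr)
            - 2.0 * eu * (eu - 1.0) * p(l - n * d - lr)
            + eu ** 2 * p(l - n * d)) / math.factorial(n)

def z_n_monte_carlo(n, l, d, lr, u, samples=200000):
    # estimate of the ordered integral: sample n points uniformly in [0, L] with L = l - nd,
    # identify y1 = min and yn = max, and apply the boundary boltzmann factors of (45), (46)
    big_l = l - n * d
    acc = 0.0
    for _ in range(samples):
        ys = [random.uniform(0.0, big_l) for _ in range(n)]
        y1, yn = min(ys), max(ys)
        weight = 1.0
        if y1 < lr:
            weight *= math.exp(u)
        if yn > big_l - lr:
            weight *= math.exp(u)
        acc += weight
    return (big_l ** n / math.factorial(n)) * acc / samples

print(z_n_exact(3, 5.0, 1.0, 0.3, 1.0), z_n_monte_carlo(3, 5.0, 1.0, 0.3, 1.0))
```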
b
proof of equality (26)
here, we explore the asymptotics of the function g(l/d, l0/d, ζ) defined in equation (24) and, hence, prove the equality (26). to simplify the notation, let l/d = N, where N is a large integer number, and l0/d = λ. then
g(N, λ, ζ) = Σ_{n=1}^{N} (ζ^n/n!) e^{−(N−n)ζ} (N − n − λ)^n θ(N − n − λ). (69)
in the next step, we introduce the auxiliary function
χ(n) = −(N − n) ζ + n ln ζ + n ln(N − n − λ) − n ln n + n − (1/2) ln(2πn) (70)
and rewrite the function g(N, λ, ζ) given by equation (69) in the form
g = Σ_{n=1}^{N} e^{χ(n)} θ(N − n − λ), (71)
using stirling's formula (15). for large N, one can replace the sum on the right hand side of equation (71) by an integral and write
g ≈ ∫_1^{N−λ} e^{χ(n)} dn. (72)
the function χ(n) has a global maximum at n = n0 with
n0 ≈ (N − λ) ζ/(1 + ζ) (73)
for large N. note that for large numbers N, the location n0 ≈ (N − λ) φ of the global maximum scales linearly with N. if we now expand the function
χ(n) around the point n0 up to second order terms and apply the saddle-point
approximation, we get
g ≈ e^{χ(n0)} ∫_1^{N−λ} exp[(1/2) χ''(n0) (n − n0)^2] dn. (74)
a simple change of variables m = n − n0 leads to
g ≈ e^{χ(n0)} ∫_{−(N−λ)φ+1}^{(N−λ)(1−φ)} e^{χ''(n0) m^2/2} dm (75)
where we have used n0 ≈ (N − λ) φ, which follows from equation (73) and relation (23) between the variables ζ and φ. in the limit of large N, we thus obtain
g ≈ e^{χ(n0)} ∫_{−∞}^{∞} e^{χ''(n0) m^2/2} dm (76)
(76)
because 0 < φ < 1. now, we can calculate the gaussian integral in (76) to
get
g ≈ e^{χ(n0)} √[2π/(−χ''(n0))]. (77)
from the definition (70) of the auxiliary function χ(n) and equation (73) for the point n = n0 at which the function χ(n) has its global maximum, we get
χ(n0) ≈ −λ ζ − (1/2) ln(2π n0) (78)
and
χ''(n0) ≈ −(1 + ζ)^2/n0. (79)
note that χ''(n0) < 0, and that the function χ(n) has indeed a maximum at n = n0. combining equations (77), (78) and (79) leads to
g ≈ e^{−λ ζ}/(1 + ζ) (80)
for large N values and, thus, to equation (26), quod erat demonstrandum.
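the convergence of the sum (69) to the limit (80) can also be checked numerically; a minimal sketch (python) follows.

```python
import math

def g_sum(big_n, lam, zeta):
    # finite sum (69) for g(N, lambda, zeta)
    total = 0.0
    for n in range(1, big_n + 1):
        x = big_n - n - lam
        if x <= 0.0:
            continue
        total += math.exp(n * math.log(zeta) - math.lgamma(n + 1.0)
                          - (big_n - n) * zeta + n * math.log(x))
    return total

zeta, lam = 0.1 / 0.9, 0.3                     # e.g. phi = 0.1 and l0 = 0.3 d
limit = math.exp(-lam * zeta) / (1.0 + zeta)   # right hand side of (80)
for big_n in (10, 50, 200, 1000):
    print(big_n, g_sum(big_n, lam, zeta), limit)
```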
references
[1] tonks l, 1936 phys. rev. 50 955
[2] asakura s and oosawa f, 1954 j. chem. phys. 22 1255
[3] dinsmore a d, yodh a g and pine d j, 1996 nature 383 239
[4] anderson v j and lekkerkerker h n w, 2002 nature 416 811
[5] baksh m m, jaros m and groves j t, 2004 nature 427 139
[6] winter e m and groves j t, 2006 anal. chem. 78 174
[7] hu y, doudevski i, wood d, moscarello m, husted c, genain c, za-
sadzinski j a and israelachvili j, 2004 proc. natl. acad. sci. usa 101
13466
[8] różycki b, lipowsky r and weikl t r, 2008 epl 84 26004
[9] lieb e h and mattis d c, mathematical physics in one dimension
(london, academic press, 1966)
[10] baxter r j, exactly solved models in statistical mechanics (london,
academic press, 1982)
[11] kac m, 1959 phys. fluids 2 8
[12] baker g, 1961 phys. rev. 126 1477
[13] kac m, uhlenbeck g and hemmer p, 1963 j. math. phys. 4 216
[14] jülicher f, lipowsky r and müller-krumbhaar h, 1990 europhys. lett. 11 657
[15] lipowsky r, 1995 z. phys. b 97 193
[16] różycki b and napiórkowski m, 2003 j. phys. a: math. gen. 36 4551; różycki b and napiórkowski m, 2004 europhys. lett. 66 25
[17] gursey f, 1950 proc. cambridge phil. soc. 46 182
[18] salsburg z, zwanzig r and kirkwood j, 1953 j. chem. phys. 21 1098
[19] baur m and nosanow l, 1962 j. chem. phys. 37 153
[20] davis h t, statistical mechanics of phases, interfaces, and thin films
(vch publishers, new york, 1996)
[21] davis h t, 1990 j. chem. phys. 93 4339
[22] henderson t, 2007 mol. phys. 105 2345
[23] lekkerkerker h n w and widom b, 2000 physica a 285 483
[24] wensink h h, 2004 phys. rev. lett. 93 157801
[25] van der schoot p, 1996 j. chem. phys. 104 1130
[26] chou t, 2003 europhys. lett. 62 753
[27] pizio o, patrykiejew a and sokołowski s, 2001 mol. phys. 99 57
[28] chu x l, nikolov a d and wasan d t, 1994 langmuir 10 4403
[29] schmidt m and löwen h, 1997 phys. rev. e 55 7228
[30] schoen m, gruhn t and diestler d j, 1998 j. chem. phys. 109 301
[31] fortini a and dijkstra m, 2006 j. phys.: condens. matter 18 l371
[32] mittal j, truskett t m, errington j r and hummer g, 2008 phys. rev.
lett. 100 145901
[33] ennis j and jönsson b, 1999 j. phys. chem. b 103 2248
[34] bleha t and cifra p, 2004 langmuir 20, 764
[35] leermakers f a m and butt h j, 2005 phys. rev. e 72, 021807
|
0911.1715 | doppler-tuned bragg spectroscopy of excited levels in he-like uranium: a
discussion of the uncertainty contributions | we present the uncertainty discussion of a recent experiment performed at the
gsi storage ring esr for the accurate energy measurement of the he-like uranium
1s2p 3p2 - 1s2s 3s1 intra-shell transition. for this purpose we used a johann-type
bragg spectrometer that enables us to obtain a relative energy measurement between
the he-like uranium transition, about 4.51 kev, and a calibration x-ray source.
as reference, we used the kα fluorescence lines of zinc and the li-like uranium
1s2 2p 2p3/2 - 1s2 2s 2s1/2 intra-shell transition from fast ions stored in the
esr. a comparison of the two different references, i.e., stationary and moving
x-ray source, and a discussion of the experimental uncertainties is presented.
| introduction
we present the uncertainty discussion of a recent experiment performed at the gsi (darmstadt,
germany) for the accurate energy measurement of the he-like uranium 1s2p 3p2 →1s2s 3s1
intra-shell transition. this measurement allows, for the first time, a test of two-photon quantum electrodynamics in he-like heavy ions. in this article we describe the techniques adopted in
the measurement where x rays emitted from fast ions in a storage ring have been detected with
a bragg spectrometer. in particular, we study the contribution of the different uncertainties
related to the relativistic velocity of the ions when a stationary or moving calibration x-ray
source is considered. additional details of such an experiment can be found in ref. [1].
2. description of the set-up
the experiment was performed at the gsi experimental storage ring esr [2] in august 2007.
here, an h-like uranium beam with up to 10^8 ions was stored, cooled, and decelerated to an energy
of 43.57 mev/u. excited he-like ions were formed by electron capture during the interaction
of the ion beam with a supersonic nitrogen gas-jet target. at the selected velocity, electrons
figure 1.
reflection of the zinc kα (left) and li-like uranium intra-shell 1s22p 2p3/2 →
1s22s 2s1/2 transition (right) on the bragg spectrometer ccd. the transition energy increases
with increasing x-position. the slightly negative slope of the line is due to the relativistic
velocity of the li-like ions.
are primarily captured into shells with principal quantum number of n ≤20, which efficiently
populate the n = 2 3p2 state via cascade feeding. this state decays to the n = 2 3s1 state via an
electric dipole (e1) intra-shell transition (branching ratio 30%) with the emission of photons
of an energy close to 4.51 kev detected by a bragg spectrometer.
the crystal spectrometer [3] was mounted in the johann geometry in a fixed angle
configuration allowing for the detection of x rays with a bragg angle θ around 46.0◦.
the
spectrometer was equipped with a ge(220) crystal cylindrically bent, with a radius of curvature
r = 800 mm, and a newly fitted x-ray ccd camera (andor do420) as position sensitive
detector. the imaging properties of the curved crystal were used to resolve spectral lines from
fast x-ray sources nearly as well as for stationary sources [4]. for this purpose, it was necessary
to place the rowland-circle plane of the spectrometer perpendicular to the ion–beam direction.
to minimize the systematic effects due to the ion velocity and alignment uncertainties, the observation angle θ = 90◦ was chosen.
the value of the ion velocity v was selected such that the photon energy, e in the ion
frame, was doppler-shifted to the value elab = 4.3 kev in the laboratory frame, where
elab = e/[γ(1 − β cos θ)], with β = v/c (c is the speed of light) and γ = 1/√(1 − β^2). this value
of elab was chosen to have the he-like uranium spectral line position on the ccd close to the
position of the 8.6 kev kα1,2 lines of zinc, which were observed in second order diffraction. the
zinc lines were used for calibration and they were produced by a commercial x-ray tube and a
removable zinc plate between the target chamber and the crystal. an image of the zinc kα lines
from the bragg spectrometer is presented in fig. 1 (left side).
as an alternative method for the measurement of the he-like uranium intra-shell transition, we used a calibration line originating from fast ions, rather than one from the stationary source.
for this purpose the 1s22p 2p3/2 →1s22s 2s1/2 transition in li-like u at 4459.37 ± 0.21 ev [5, 6]
was chosen. at the esr, the li-like ions were obtained by electron capture into he-like uranium
ions. to match the energy of the he-like transition, an energy of 32.63 mev/u was used to
doppler-shift the li-like transition. an image of the li-like uranium transition in the bragg
spectrometer is presented in fig. 1 (right side).
starting from bragg's law in differential form, ∆e ≈ −e ∆θ/tan θ, one obtains an approximate dispersion formula that is valid for small bragg angle differences ∆θ. taking into account the relativistic doppler effect, the measured value of the he-like u transition is given by
e = e0 (n/n0) × [1 − δ(e0)/sin^2 θ0]/[1 − δ(e)/sin^2 θ] × [γ(1 − β cos θ)]/[γ0(1 − β0 cos θ)] × [1 + ∆x/(d tan θ0)], (1)
where n0 and n are the diffraction orders of the he-like u and reference lines, respectively, θ0 and θ = θ0 + ∆θ the corresponding bragg angles, ∆x is the position difference of the spectral lines on the ccd along the dispersion direction and d is the crystal–ccd distance. δ(e) is the deviation of the index of refraction nr(e) = 1 + δ(e) of the crystal material from unity, which depends on the energy e of the reflected x ray (δ ∼ 10^{−5}–10^{−6} typically).
in the case n = n0, the corrections due to the refraction index and other energy dependent
corrections for curved crystals [7, 8] are negligible. in the case of a stationary calibration source,
γ0 = 1 and β0 = 0.
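for reference, relation (1) can be written as a single function; the sketch below (python) is a direct transcription in which the refraction corrections enter through the numbers δ(e) and δ(e0) (negligible for n = n0), θ0 and ∆θ are the bragg angle of the reference line and the small angle difference, ∆x is the line separation on the ccd, d_ccd the crystal–ccd distance, and the doppler factor is evaluated at the observation angle, 90◦ in this set-up, so the cosine terms drop out.

```python
import math

def transition_energy(e0, n, n0, theta0, dtheta, dx, d_ccd,
                      gamma, gamma0=1.0, beta=0.0, beta0=0.0,
                      theta_obs=math.pi / 2.0, delta_e=0.0, delta_e0=0.0):
    # direct transcription of equation (1); for n = n0 the refraction terms are negligible,
    # and at an observation angle of 90 degrees the doppler factor reduces to gamma/gamma0
    theta = theta0 + dtheta
    refraction = (1.0 - delta_e0 / math.sin(theta0) ** 2) / (1.0 - delta_e / math.sin(theta) ** 2)
    doppler = (gamma * (1.0 - beta * math.cos(theta_obs))) / (gamma0 * (1.0 - beta0 * math.cos(theta_obs)))
    dispersion = 1.0 + dx / (math.tan(theta0) * d_ccd)
    return e0 * (n / n0) * refraction * doppler * dispersion
```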
the measured value of the he-like uranium transition energy and additional information can
be found in ref. [1]. in the following section we present in detail the analysis of the systematic
uncertainties.
3. evaluation of the experimental uncertainties
one of the main sources of uncertainty in the present experiment is the low amount of collected
data (see fig. 1, right). this limits the accuracy of the he- and li-like uranium line position,
i.e., the accuracy of ∆x, which is proportional to the energy uncertainty, (δe)stat ∝δ(∆x) (see
eq. (1)). in our specific experiment, characterized by the parameters listed in table 1, numerical
values of (δe)stat are presented in table 2. due to the high statistics in the stationary calibration
source measurement, the zn kα spectrum, (δe)stat is ∼√2 smaller than when the moving li-like ion emission is used as a reference.
in the case of systematic uncertainties, three major sources dominate: the accuracy of the
reference energy e0, of the ion velocity and of the observation angle θ. similarly to the statistical
uncertainty, the contribution of δe0 is much smaller when zn lines are used for calibration
instead of the li-like u transition (see table 2). this is due to the high accuracy of the zinc kα
transition energy, which in the case of kα1 is 8638.906 ± 0.073 ev [9], compared to the li-like
u transition accuracy of 0.21 ev [5, 6].
if, on the one hand, doppler tuning of the photon energy in the laboratory frame produces two important systematic uncertainty contributions, due to the ion velocity and the observation angle, on the other hand it allows for detecting the different spectral lines in the same narrow spatial region of the ccd detector, i.e., ∆x/d ≪ 1.
this results in a drastic reduction of
other systematic effects such as the influence of uncertainty of the crystal–ccd distance d, the
accuracy of the ccd pixel size [10], the accuracy of the inter-plane distance of the crystal and
effects from the optical aberrations in the johann geometry set-up. systematic uncertainties
related to the relativistic velocity of the ions are treated in detail in the following subsections.
3.1. ion velocity uncertainty
the ion velocity in the storage ring is imposed by the velocity of the electrons in the electron
cooler [2]. this is related to the cooler voltage v by the simple relations
γ = 1 + ev/(me c^2), β = √[2 ev/(me c^2) + (ev/(me c^2))^2]/[1 + ev/(me c^2)] ≃ √[2 ev/(me c^2)], (2)
where me and e are the mass and charge of the electron, respectively. the factor ev/(mec2) is in
general very small, of the order of 4 × 10−2 in our specific case. the cooler voltage uncertainty
δv propagates to the energy uncertainty via the parameters γ and β in eq. (1). more precisely,
(δγ)v = e/(me c^2) δv and (δβ)v ≃ [e/(2 me c^2 v)]^{1/2} δv. we note that δγ/δβ = o(√(ev/(me c^2))).
for this reason an observation angle of 90◦, where the effect of δβ is minimal, was chosen. in
the following formulas we will consider only the case θ = 90◦.
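the voltage-related entries of table 2 can be reproduced from equations (2)–(4); a minimal sketch (python), assuming me c^2 ≈ 511.0 kev and a transition energy of about 4.51 kev.

```python
import math

MEC2 = 510998.95      # electron rest energy in ev
E_TRANS = 4510.0      # he-like u transition energy, about 4.51 kev

def beta_gamma(voltage):
    # equation (2): velocity parameters imposed by the cooler voltage (in volt)
    x = voltage / MEC2                       # e v / (m_e c^2)
    gamma = 1.0 + x
    beta = math.sqrt(2.0 * x + x * x) / gamma
    return beta, gamma

v_he, v_li = 23900.0, 17898.0                # cooler voltages of table 1
da, db = 10.0, 2e-4                          # offset and linearity uncertainties of table 1

beta, gamma = beta_gamma(v_he)
beta0, gamma0 = beta_gamma(v_li)
print(beta, gamma, beta0, gamma0)            # reproduces the beta and gamma values of table 1

# stationary reference (zn), equations (3)
print(E_TRANS * da / (MEC2 * gamma), E_TRANS * (v_he / MEC2) * db / gamma)
# moving reference (li-like u), equations (4)
print(E_TRANS * abs(v_he - v_li) * da / (MEC2 ** 2 * gamma * gamma0),
      E_TRANS * abs(v_he - v_li) * db / (MEC2 * gamma * gamma0))
```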
the uncertainty of the cooler voltage has two principal sources: the accuracy of the absolute
value and its linearity. the relation between the real voltage value vreal and the set value v
can be written as v = a + b vreal, where the uncertainties on the factors a and b have to be considered in our analysis. with this notation, for an observation angle of θ = 90◦, the
uncertainty propagation to the energy value is, for the case of a stationary calibration source,
(δe)a/e = (1/γ) (∂γ/∂a) δa = [e/(me c^2)] (1/γ) δa,
(δe)b/e = (1/γ) (∂γ/∂b) δb = [e v/(me c^2)] (1/γ) δb, (3)
and
(δe)a/e = (γ0/γ) [∂(γ/γ0)/∂a] δa = [e^2/(me^2 c^4)] [|v − v0|/(γ γ0)] δa,
(δe)b/e = (γ0/γ) [∂(γ/γ0)/∂b] δb = [e/(me c^2)] [|v − v0|/(γ γ0)] δb, (4)
in the case of a moving calibration source, where v0 and γ0 are the corresponding parameters
and where the approximation v ≈vreal has been applied. a reduction of the uncertainties is
obtained when the moving calibration source is used. the uncertainty (δe)a due to the offset error of v is drastically decreased, by a factor e|v − v0|/(me c^2 γ0) ≪ 1, whereas the uncertainty due to the linearity is also reduced, but only by a factor |v − v0|/(v γ0). numerical values for our experiment are given in tables 1 and 2.
3.2. observation angle uncertainty
while the effect of the uncertainty due to the ion velocity is minimal at θ = 90◦, the effect of the uncertainty δθ of the observation angle itself is maximal there. in the case θ = 90◦, starting from eq. (1), for a stationary calibration source we have
(δe)θ/e = [1/(1 − β cos θ)] [d(1 − β cos θ)/dθ]_{θ=90◦} δθ = β δθ, (5)
and
(δe)θ/e = [(1 − β0 cos θ)/(1 − β cos θ)] [d((1 − β cos θ)/(1 − β0 cos θ))/dθ]_{θ=90◦} δθ = |β − β0| δθ (6)
table 1. principal parameters of the ion beams for the he- and li-like transition measurement (see text).
            he-like u        li-like u
β           0.295578         0.257944
γ           1.046771         1.035026
v (volt)    23900            17898
θ           90.00◦ ± 0.38◦
a (volt)    0 ± 10
b           1 ± 2 × 10^{−4}
table 2. different uncertainty contributions (in ev) when li-like uranium and zinc transitions are used as the reference.
            li-like u     zn kα
(δe)stat    0.43          0.30
(δe)e0      0.21          0.04
(δe)a       1 × 10^{−3}   0.08
(δe)b       0.01          0.04
(δe)θ       0.11          0.88
(δe)tot     0.50          0.93
for a moving calibration source. again, analogously to the calculation of the preceding subsection, a reduction by a factor |β − β0|/β in the uncertainty is obtained when x rays from fast ions are used for the calibration. in our experimental set-up, the value of δθ is principally due to the accuracy of the position of the gas-jet target with respect to the main axis of the spectrometer (±0.5 mm). numerical values for our experiment are given in tables 1 and 2. a direct evaluation of the deviation ∆θ from 90◦ can be obtained via eq. (6) with the measurement of the
li-like uranium energy 4460.12 ± 0.31 ev (statistical uncertainty only) using the zinc kα lines
as reference, and its comparison with the literature value 4459.37±0.21 ev [5, 6]. we estimated
∆θ = −0.37◦± 0.18◦in agreement with the expected deviation (see table 1).
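equations (5) and (6) imply that the sensitivity to δθ is reduced by the factor |β − β0|/β when the moving reference is used; the short sketch below (python) evaluates this factor for the β values of table 1 and compares it with the ratio of the two (δe)θ entries of table 2.

```python
beta_he, beta_li = 0.295578, 0.257944    # table 1
print(abs(beta_he - beta_li) / beta_he)  # reduction factor, about 0.13
print(0.11 / 0.88)                       # ratio of the two (delta e)_theta entries in table 2
```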
4. conclusions
we present the uncertainty discussion of an accurate energy measurement of the he-like uranium intra-shell transition obtained via bragg spectroscopy of doppler-tuned x rays emitted from
fast ions. we have evaluated and compared the systematic uncertainties when a stationary or
moving calibration source is used. in particular, in our experiment the use of the x-ray emission of fast li-like uranium ions as reference enables us to reduce systematic uncertainties by a factor of about 4.
references
[1] trassinelli m, kumar a, beyer h, indelicato p, märtin r, reuschl r, kozhedub y, brandau c, bräuning h, geyer s, gumberidze a, hess s, jagodzinski p, kozhuharov c, trotsenko s, weber g and stöhlker t 2008 submitted to phys. rev. lett.
[2] franzke b 1987 nucl. instrum. meth. phys. res. b 24-25 18–25
[3] beyer h f, indelicato p, finlayson k d, liesen d and deslattes r d 1991 phys. rev. a 43 223
[4] beyer h f and liesen d 1988 nucl. instrum. meth. phys. res. a 272 895–905
[5] beiersdorfer p, knapp d, marrs r e, elliott s r and chen m h 1993 phys. rev. lett. 71 3939
[6] beiersdorfer p 1995 nucl. instrum. meth. phys. res. b 99 114–116
[7] cembali f, fabbri r, servidori m, zani a, basile g, cavagnero g, bergamin a and zosi g 1992 j. appl.
crystallogr. 3 424–31
[8] chukhovskii f n, hölzer g, wehrhan o and förster e 1996 j. appl. crystallogr. 29 438–445
[9] deslattes r d, kessler jr e g, indelicato p, de billy l, lindroth e and anton j 2003 rev. mod. phys. 75
35–99
[10] indelicato p, le bigot e o, trassinelli m, gotta d, hennebach m, nelms n, david c and simons l m 2006
rev. sci. inst. 77 043107 (pages 10)
|
0911.1717 | optical and near-ir spectroscopy of candidate red galaxies in two z~2.5
proto-clusters | we present a spectroscopic campaign to follow-up red colour-selected
candidate massive galaxies in two high redshift proto-clusters surrounding
radio galaxies. we observed a total of 57 galaxies in the field of mrc0943-242
(z=2.93) and 33 in the field of pks1138-262 (z=2.16) with a mix of optical and
near-infrared multi-object spectroscopy.
we confirm two red galaxies in the field of pks1138-262 at the redshift of
the radio galaxy. based on an analysis of their spectral energy distributions,
and their derived star formation rates from the h-alpha and 24um flux, one
object belongs to the class of dust-obscured star-forming red galaxies, while
the other is evolved with little ongoing star formation. this result represents
the first red and mainly passively evolving galaxy to be confirmed as a companion galaxy in a z>2 proto-cluster. both red galaxies in pks1138-262 are massive,
of the order of 4-6x10^11 m_sol. they lie along a colour-magnitude relation
which implies that they formed the bulk of their stellar population around z=4.
in the mrc0943-242 field we find no red galaxies at the redshift of the radio
galaxy but we do confirm the effectiveness of our jhk_s selection of galaxies
at 2.3<z<3.1, finding that 10 out of 18 (56%) of jhk_s-selected galaxies whose
redshifts could be measured fall within this redshift range. we also
serendipitously identify an interesting foreground structure of 6 galaxies at
z=2.6 in the field of mrc0943-242. this may be a proto-cluster itself, but
complicates any interpretation of the red sequence build-up in mrc0943-242
until more redshifts can be measured.
| introduction
at every epoch, galaxies with the reddest colours are known to
trace the most massive objects (tanaka et al. 2005). the study
of massive galaxies at high redshift therefore places important
constraints on the mechanisms and physics of galaxy formation
(e.g., hatch et al. 2009). studies of clusters at both low and high
redshift also yield valuable insights into the assembly and evo-
lution of large-scale structure, which itself is a sensitive cosmo-
logical probe (e.g., eke et al. 1996).
the most distant spectroscopically confirmed cluster lies
at z = 1.45 and has 17 confirmed members, a large fraction
of which are consistent with massive, passively evolving el-
lipticals (stanford et al. 2006; hilton et al. 2007). high redshift
proto-clusters have been inferred through narrow-band imaging
⋆based in part on data collected at subaru telescope, which is oper-
ated by the national astronomical observatory of japan, and in part on
data collected with eso's very large telescope ut1/antu, under pro-
gram id 080.a-0463(b). and eso's new technology telescope under
program id 076.a-0670(b)
searches for ly-α emitters in the vicinity of radio galaxies (e.g.,
kurk et al. 2004a; venemans et al. 2005). such searches have
been highly successful, with follow-up spectroscopy confirm-
ing 15 −35 ly-α emitters per proto-cluster. however, the ly-α
emitters are small, faint, blue star-forming galaxies, and likely
constitute a small fraction of both the number of cluster galaxies
and the total mass budget in the cluster.
massive elliptical galaxies are seen to dominate cluster cores
up to a redshift of z ≈1 (e.g., van dokkum et al. 2001), with
the red sequence firmly in place by that epoch (ellis et al. 2006;
blakeslee et al. 2003) and in fact showing very little evolution
even out to z ∼1.4 (e.g., lidman et al. 2008). what remains
uncertain is whether this holds true at yet higher redshift. the
fundamental questions are (1) at what epoch did massive, rich
clusters first begin to assemble, and (2) from what point do they
undergo mostly passive evolution? there is certainly evidence
that the constituents of high redshift clusters are very differ-
ent from that of rich clusters in the local universe, with the
distant clusters containing a higher fraction of both blue spi-
rals (e.g., ellis et al. 2006) and active galaxies (galametz et al.
2009). van dokkum & stanford (2003) found early-type galax-
ies with signatures of recent star formation in a z = 1.27 clus-
ter, perhaps symptomatic of a recently formed (or still forming)
cluster. locating massive galaxies in higher redshift clusters will
therefore yield valuable insight into the earliest stages of cluster
formation and the build-up of complex structures which are ob-
served in the local universe.
in two previous papers, we found over-densities of red
colour-selected galaxies in several proto-clusters associated with
high redshift radio galaxies (kajisawa et al. 2006; kodama et al.
2007). follow up studies of distant red galaxies (drgs) have
shown that this colour selection criterion contains two pop-
ulations: (1) dusty star-forming galaxies and (2) passively
evolved galaxies. the exact fraction between these two pop-
ulations is still a matter of debate (f ̈
orster schreiber et al.
2004; grazian et al. 2006, 2007). with our new jhk colour-
selection, we hope to be more efficient in identifying the evolved
galaxy population. we have now embarked upon a spectroscopic
follow-up investigation with the aim of confirming cluster mem-
bers and determining their masses and recent star formation his-
tories. such a goal is ambitious as redshifts of these red galax-
ies are generally very difficult to obtain, especially amongst the
population with no on-going star formation (i.e., lacking emis-
sion lines). here we present the first results from optical and
near-infrared spectroscopy of targets in two of the best studied
proto-clusters, mrc 1138−262 (z = 2.16) and mrc 0943−242
(z = 2.93).
in section 2 we explain the target selection and in section
3 outline the observations and data reduction steps. section 4
presents the redshifts identified and in section 5 we briefly anal-
yse the properties of the two galaxies discovered at the redshift
of mrc 1138−262,including their relative spatial location to the
rg, ages, masses and star formation rates from spectral energy
distribution fits and star formation rates from the hα emission
line flux. in section 6 we summarise the results and draw some
wider conclusions.
throughout this paper we assume a cosmology of h0 = 71 km s−1 mpc−1, ωm = 0.27, ωλ = 0.73 (spergel et al. 2003) and magnitudes are on the vega system.
2. target selection
2.1. proto-clusters
we have concentrated the initial stages of our spectroscopic
campaign on mrc 1138−262 and mrc 0943−242 for several
reasons. first, these two proto-clusters fields were both observed
in the space infrared by a spitzer survey of 70 high redshift radio
galaxies (rgs; seymour et al. 2007). they are also amongst the
eight z > 2 radio galaxies observed in the broad and narrow-band
imaging survey of venemans et al. (2007). both targets therefore
are among the best studied high-redshift radio galaxies and have
a large quantity of broadband data available from the u-band
to the spitzer bands, over a wide field of view. their redshifts
span the 2 < z < 3 region in which it has been shown that the
red sequence most likely starts to build up kodama et al. (2007),
and they therefore probe an interesting and poorly understood
redshift regime for understanding the build-up of red, massive
galaxies in proto-clusters.
our group previously obtained deep, wide-field near-infrared
imaging of mrc 1138−262 and mrc 0943−242 with the
multi-object infrared camera and spectrograph (moircs;
suzuki et al. 2008; ichikawa et al. 2006), the relatively new 4′ ×
7′ imager on subaru (kodama et al. 2007). the data were ob-
tained in exceptional conditions, with seeing ranging from 0.5′′ to 0.7′′. we use our previously published photometric catalogues,
and quote the total magnitudes (magauto in sextractor) for
individual band photometry, and 1.5′′ diameter aperture magni-
tudes for all colours. we refer to (kodama et al. 2007) for fur-
ther details on the photometry. we also used an h−band image
of mrc 1138−262 obtained with the son of isaac (sofi) on
the new technology telescope (ntt) on ut 2006 march 23.
the total exposure time was 9720 s. we reduced the data using
the standard procedures in iraf.
using this data set, we selected galaxies expected to lie at
the same redshifts as the radio galaxies on the basis of their j −
ks or jhks colours. these deep data indicate a clear excess of
the near-infrared-selected galaxies clustered towards the radio
galaxies (e.g., figures 3–5 in kodama et al. 2007). the excess is
a factor of ≈2 −3 relative to the goods-s field.
2.2. mrc 1138–262 at z = 2.16
the red galaxies in mrc 1138−262 were selected according to
the 'classic' drg criterion of j −ks > 2.3 (van dokkum et al.
2004). we note that in kodama et al. (2007) the zero points of
the photometry in this field were incorrect by 0.25 and 0.30 mag-
nitudes in j and ks bands, respectively, with j′ = j − 0.25, k′s = ks − 0.30 for this field, where the primed photometry
represents the corrected values. since the corrections for both
bands are similar, there is little change in the sample of drgs
identified, which was the primary thrust of that paper. we have
adjusted the magnitudes and colours before revising the tar-
get selection for the current paper. we also include three near-
infrared spectroscopic targets in the field of mrc 1138−262
from the hubble space telescope nicmos imaging reported
by zirm et al. (2008), two of which fulfill the drg criterion and
one of which is slightly bluer. we targeted 33 red galaxies in this
field, down to a ks magnitude of 21.7.
2.3. mrc 0943–242 at z = 2.93
for the redshift z ∼3 proto-cluster mrc 0943−242, we de-
fine two classes of colour-selected objects: blue jhks galaxies
(bjhks) defined to have:
j −ks > 2(h −ks) + 0.5 and j −ks > 1.5,
and red jhks galaxies (rjhks) defined to have:
j −ks > 2(h −ks) + 0.5 and j −ks > 2.3.
figure 1 shows these colour cuts and the objects which were
targeted for spectroscopy. note that although we have not used
h−band data in the selection of targets for mrc 1138−262,
we include a jhks diagram for that field as a comparison to
mrc 0943−242 - it is obvious from these diagrams that the two
colour cuts select distinct populations of objects.
figure 2 demonstrates how our jhks selection works to se-
lect candidate proto-cluster members around mrc 0943−242.
the solid curves represent the evolutionary tracks of galaxies
over 0 < z < 4 with different star formation histories (passive, in-
termediate, and active) formed at zform = 5 (kodama et al. 1999).
the dashed line connects the model points at z = 3, and indicates
the region where we expect to find galaxies at a similar redshift
to the radio galaxy. we therefore apply the colour-cut (shown by
the dot-dashed lines) to exclusively search for galaxies associ-
ated with the radio galaxies at 2.3 < z < 3.1. this technique was
originally used by kajisawa et al. (2006) and has a great advan-
tage over the drg selection used at lower redshift since we also
select the bluer populations within the redshift range of interest.
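the jhks cuts above are easily expressed as a selection function; a minimal sketch (python), using the 1.5′′ aperture colours and making the two classes exclusive, with the red cut taking precedence.

```python
def jhks_class(j, h, ks):
    # jhks colour cuts used for mrc 0943-242; the red cut takes precedence, so the
    # two classes returned here are exclusive
    jks, hks = j - ks, h - ks
    if jks > 2.0 * hks + 0.5:
        if jks > 2.3:
            return "rjhk"
        if jks > 1.5:
            return "bjhk"
    return None

# example: j - ks = 2.5 and h - ks = 0.6 satisfies both conditions of the red cut
print(jhks_class(22.5, 20.6, 20.0))      # -> "rjhk"
```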
fig. 1. spectroscopic targets shown in colour-colour (left) and
colour-magnitude (right) space for mrc 0943−242 (top) and
mrc 1138−262 (bottom). objects observed with optical spec-
troscopy by either fors2 or focas are indicated by large
circles; objects observed with near-infrared spectroscopy by
moircs are indicated by squares. with respect to the radio
galaxy, filled blue symbols are confirmed foreground objects and
red filled symbols are background galaxies. green symbols show
galaxies confirmed to lie at the redshift of the radio galaxy.
we targeted 38 jhks selected galaxies (including the rg itself)
down to a magnitude ks = 22. in section 4.1, we will show that
we could measure redshifts of half of these sources with a suc-
cess rate of our selection technique of ∼55%. in the remaining
spectroscopic slits, we included 22 additional "filler" targets (see
table 2), mostly those with colours just outside our selection cri-
terion, but also two distant red galaxies (j−k > 2.3) and three
lyα emitters.
where two or more objects clashed in position, we priori-
tised objects with 24μm detections from the multiband imaging
photometer for spitzer (mips; rieke et al. 2004). for the op-
tical spectroscopy, we also prioritised objects with brighter i-
band magnitudes. after selecting primary targets according to
the above colour criteria, a couple of additional 'filler' targets
were selected depending on the geometry of the mask. these
filler objects were drawn from a list of candidate ly-α emit-
ters which do not yet have confirmed redshifts (venemans et al.
2007). figure 1 shows the distribution of selected targets in
colour-colour and colour-magnitude space. whilst by no means a complete sample, we believe it to be representative of the candidate objects.
3. observations and data reduction
we obtained optical multi-object spectra of targeted galaxies
in the two proto-clusters using the faint object camera and
fig. 2. colour selection technique designed to select galaxies at
2.3 < z < 3.1. the solid curves represent the evolutionary tracks
of galaxies over 0 < z < 4 with different star formation histories
(passive, intermediate, and active, from top to bottom, respec-
tively) formed at zform = 5 (kodama et al. 1999). the dashed line
connects the model points at z = 3. objects targeted spectroscop-
ically are represented by circles; filled circles indicate where a
redshift identification has been made. blue circles indicate ob-
jects at z < 2.3 and red circles indicate sources at z > 3.1 - i.e.,
outside of the targeted redshift range. cyan circles show objects
whose redshifts fall within the targeted range.
spectrograph (focas; kashikawa et al. 2000) on subaru, and
the focal reducer and low dispersion spectrograph-2 (fors2;
appenzeller & rupprecht 1992) on the vlt1. we obtained near-
infrared multi-object spectra of targeted galaxies using moircs
on subaru. for one galaxy, we also obtained a spectrum with x-
shooter (d'odorico et al. 2006) on the vlt during a com-
missioning run. a summary of the observations is presented in
table 1.
3.1. optical spectra
we were scheduled two nights, ut 2007 march 14-15, of
focas spectroscopy on subaru. however, 1.5 out of the two
nights were lost to bad weather. in the first half of the second
night, we obtained three hours integration on mrc 0943−242 in
variable seeing. we used the 300b grism with the l600 filter and
binned the readout 3 × 2 (spatial × spectral). this set-up gives a
dispersion of 1.34 å pix−1 and a spectral resolution of 9 å full
width at half maximum (fwhm; as measured from sky lines).
there were 22 slits on the mask, but the spectra for three objects
fell on bad ccd defects so we treat these as unobserved. two
ly-α emitter candidates were also placed on the mask to fill in
space where there were no suitable near-infrared selected galax-
ies. the data were reduced in a standard manner using iraf and
custom software. although the night was not photometric, spec-
tra were flux calibrated with standard stars observed on the same
night, as it is important to calibrate at least the relative spectral
response given that we are interested in breaks across the spec-
tra.
we obtained fors2/mxu spectra with ut2/kueyen on ut
2008 march 11-12. the nights were clear with variable seeing
averaging around 1′′. the 300v grating was used, providing a
1 eso program 080.a-0463(b).
dispersion of 3.36 å pix−1 and a resolution of 12 å (fwhm,
measured from sky lines). slitlets were 1′′ wide and 6−10′′ long.
we used 2′′ nods to shift objects along the slitlets, thereby cre-
ating a-b pairs to improve sky subtraction. we first created
time-contiguous pairs of subtracted frames, and then averaged
all of the a−b pairs together in order to improve the result-
ing sky subtraction and to reject cosmic rays (in both the pos-
itive and negative images). these combined frames were then
processed using the fors2 pipeline (ver. 3.1.7) with standard
inputs. the extracted one-dimensional spectra were boxcar ex-
tracted over a 1′′ aperture and calibrated with flux standards
taken on the same night. in total, we obtained 10 hours of in-
tegration on mrc 0943−242 and five hours of integration on
mrc 1138−262.
3.2. near-infrared spectra
we obtained moircs spectra for mrc 1138−262 over the lat-
ter halves of ut 2008 january 11-12. both nights were photo-
metric, and we obtained a total of five hours of integration. since
chip 1 of moircs was the engineering grade chip at that time,
we gained no useful data on that chip and essentially lost half
of the field coverage. this detector suffered a loss of sensitivity
compared to chip 2. more problematic was a prominent large
ring-like structure with significantly high (and variable) dark
noise, as well as several smaller and patchy structures, which we
were unable to calibrate out successfully. we obtained k-band
spectra with the medium resolution (r1300) grating, yielding a
dispersion of 3.88 å pix−1 and a resolution of 26 å (fwhm,
measured from sky lines). the objects were nodded along the
slitlets and a-b pairs combined to remove the sky background.
the frames were then flat-fielded and corrected for optical dis-
tortion using the mscgeocorr task in iraf. they were then ro-
tated to correct for the fact that the slits are tilted on the detector.
we extracted spectra using a box-car summation over the width
of the continuum, or, in the case of no continuum, over the spa-
tial extent of any detected line emission.
we obtained moircs data for mrc 0943−242 on ut 2009
january 10, with four hours total integration time over the latter
half of the night. the low resolution hk500 grating was used,
providing a dispersion of 7.72 å pix−1 and a resolution of 42 å
(fwhm, measured from sky lines).
3.3. uv/optical/nir spectrum
we also obtained data for a single target in the field of
mrc 0943−242 with x-shooter during a commissioning run
on 2009 june 5 under variable seeing conditions (between 0.5′′
and 2′′ during the exposure). slit widths of 1.0′′, 0.9′′ and 0.9′′
were used in the uv–blue, visual–red and near–ir arm, re-
spectively, resulting in resolutions of ruvb=5100, rvis=8800
and rnir=5100. the data were reduced using a beta version
of the pipeline developed by the x-shooter consortium
(goldoni et al. 2006). the visual-red arm spectrum (550nm–
1000nm) presented below is a combination of three 1200s expo-
sures. the near-ir arm spectrum (1000nm-2500nm) is a combi-
nation of two 1300s exposures in a nodding sequence.
4. results/redshift identifications
redshifts were identified by visual inspection of both the pro-
cessed, stacked two-dimensional data and the extracted, one-
dimensional spectra. we based redshifts on both emission and
absorption line features, as well as possible continuum breaks
and assigned confidence flags.
table 2 shows the sources observed with optical spec-
troscopy and the corresponding redshift identifications and qual-
ity flags. quality 1 indicates confirmed, confident redshifts; qual-
ity 2 indicates doubtful redshifts; quality 3 indicates that we de-
tected continuum emission but no identification was made, typi-
cally due to the low signal-noise ratio (s/n) of the data; and qual-
ity 4 indicated that no emission was detected. in some cases we
can infer an upper limit to the redshift if the continuum extends
to the blue wavelength cutoffof the optical spectrum. this is
useful for distinguishing foreground objects. in practice, due to
the lower efficiency of the grisms, the spectra become too noisy
to distinguish the continuum below 3900 å for fors and 4100
å for focas, corresponding to ly-α at z=2.3 and z=2.4, re-
spectively.
4.1. mrc 0943−242
out of 32 objects observed with fors2 in mrc 0943−242, 16
redshifts have been identified (fig. 3). nineteen objects were ob-
served with focas, including 17 near-infrared-selected galax-
ies and two ly-α candidates observed as filler objects (fig. 4).
we obtained five redshifts, only one of which lies close to the
redshift of the radio galaxy, lae#381 at z = 2.935 (quality 1)
and this is not one of our red galaxy candidate members but was
one of the lyman-alpha emitter candidates. the low success rate
with focas is most likely due to the short exposure time since
that observing run was largely weathered out.
we identified a further 8 redshifts with moircs (out of 21
targeted sources), giving a total of 27 sources in the field of
mrc 0943−242 with spectroscopic redshifts, out of 57 targeted
(fig. 5).
we also observed one object (#792) with x-shooter. the
visual-red arm spectrum (fig. 6) shows one line at 8985.7å.
the near-ir arm spectrum (fig. 7) shows two lines at 15815.7å
and 15868.2å. we identify those lines as [oii]λ3727, hα and
[nii]λ6583 at z=1.410. this redshift is consistent with the
one determined independently from the spectrum obtained with
fors.
from the remaining 30 sources, we obtained five redshift up-
per limits from the optical spectroscopy, excluding these sources
from being members of the proto-cluster. the remaining 25 ob-
jects cannot be excluded as either foreground or background ob-
jects, and an unknown fraction of these may even be at the red-
shift of the radio galaxy. however, we found an extended fore-
ground structure in this field at redshift z ∼2.6. the redshift
distribution is shown in figure 8. this foreground structure com-
plicates any interpretation of the red sequence, or lack thereof,
in this proto-cluster, as a large number of galaxies along the 'red
sequence' in mrc 0943−242 may be members of this interven-
ing structure. however, it is in itself very interesting as it lies at a
high redshift and may also be a young, forming cluster. we find
six galaxies in this structure, with a mean redshift < z >= 2.65
and ∆z = 0.03. several of these sources may contain agn fea-
tures (civ, heii lines), though the s/n of our spectra is insuffi-
cient to make a clear statement about this.
for one object, #420, we obtained two different redshifts
from fors2 and moircs. upon closer inspection of the i-
band and k−band images, we found out that this is actually a
pair of galaxies: (i) a blue one at z =0.458 detected in the i−band
image , and used to centre our fors2 spectroscopic slit (fig. 3),
and (ii) a red one at z = 2.174 detected in the k−band image and
fig. 3. fors2 spectra of objects with identified redshifts in the field of mrc 0943−242. objects are ordered by redshift from low
to high. the vertical green line indicates the region of atmospheric absorption at 7600 å. the horizontal dashed line indicates zero flux.
fig. 3. cont.
fig. 3. cont.
fig. 4. focas spectra of objects with identified spectra in the field mrc 0943−242.
fig. 5. moircs spectra of objects with identified redshifts in the field mrc 0943−242. sky lines are shown as a dotted green
spectrum beneath each object spectrum.
table 1. summary of observations.
proto-cluster | instrument | fov | filter | grating | spectral coverage^a (μm) | ut date obs. | total exp.
mrc 1138−262 | moircs | 4′ × 3.5′ | k | r1300 | 2.0 − 2.4 | 11-12 jan 2008 | 5 hr
mrc 1138−262 | fors2 | 7′ × 7′ | – | 300v | 0.33 − 0.9 | 11-12 mar 2008 | 5 hr
mrc 0943−242 | fors2 | 7′ × 7′ | – | 300v | 0.33 − 0.9 | 11-12 mar 2008 | 10 hr
mrc 0943−242 | focas | 6′ diameter | l600 | 300b | 0.37 − 0.6 | 15 mar 2007 | 3 hr
mrc 0943−242 | moircs | 4′ × 7′ | oc hk | hk500 | 1.3 − 2.5 | 10 jan 2009 | 4 hr
mrc 0943−242 | x-shooter | 1′′ × 11′′ | – | – | 0.3 − 2.5 | 5 jun 2009 | 1 hr
a exact spectral coverage for each object depends on the position of the slit on the detector.
fig. 6. x-shooter visual–red arm spectrum of object #792 in
the field of mrc 0943−242.
used to centre the moircs slit (fig. 5). these two galaxies are
separated by only 1.1′′. a hint of the blue galaxy is seen in the
k−band image, and vice versa, a hint of the red galaxy is seen in
the i−band image, but none of these can be properly deblended.
the red galaxy at z = 2.174 is the one selected by our colour
criterion.
more than half (10/18, or 56%) of the jhks selected galax-
ies whose redshifts were successfully measured turn out to be
actually located in the redshift range 2.3 < z < 3.1. of the
red sequence candidates (i.e., rjhk galaxies), 6 out of 10 are
within the range 2.3 < z < 3.1. these numbers are summarised
in table 4. the majority of redshift identifications were based on
emission lines, though in several cases the fors2 data yielded
absorption line identifications where the continuum is strong
enough (e.g., see objects #749 and #760 in figure 3).
4.2. mrc 1138−262
we observed 27 objects (25 red galaxies and 2 lα emitters) in
the mrc 1138−262 field with optical spectroscopy, but could
identify only two redshifts, both of which are background ly-α
emitters at z > 3 (figure 9). no redshifts were confirmed at the
fig. 7. x-shooter nir arm spectrum of object #792 in the
field of mrc 0943−242.
redshift of the radio galaxy from optical spectroscopy. although
the total exposure time is half of that spent on mrc 0943−242,
given the 50% confirmation rate in that field we may naively ex-
pect a higher success rate here given the lower redshift of this
radio galaxy. however, about twice as many sources (25 out of
57) in the mrc 0943−242 field have i < 24.5, compared to the
mrc 1138−262 field (7 out of 33), so the lower success rate
in mrc 1138−262 may be partially due to the sources being
fainter. we are confident that the astrometry in both masks is
accurate, as confirmed by several stars and bright galaxies (including
the radio galaxy itself) placed on the slitmasks. the astrometry
was internally consistent as the slit positions were derived
from preimaging obtained with fors2. however, the efficiency
of the 300v grism drops off quickly at the blue end, and since
the sources observed in this field have such faint magnitudes, it
is likely not possible to detect lyman breaks at the redshift of
the radio galaxy; we estimate a minimum redshift of z = 2.2
to detect ly-α with fors2, and the sources were too faint to
identify redshifts from absorption lines. furthermore, given the
difference in colour cuts from mrc 0943−242 which selects in-
trinsically different populations (figure 1), we do not expect to
find many higher redshift interlopers.
table 2. results of optical and near-ir spectroscopy in the mrc 0943−242 field.
id | type^a | ra (j2000) | dec. (j2000) | vtotal | itotal | jtotal | ks,total | j−ks (1.″5) | instr. | z | quality^b
897 | bjhk | 09 45 25.34 | −24 25 40.3 | >26.8 | >25.6 | 23.69 | 21.65 | 2.15 | fors2 | ... | 4
420 | rjhk | 09 45 25.57 | −24 28 59.8 | ...^c | ...^c | 23.65 | 21.35 | 2.65 | focas | ... | 3
    | | | | | | | | | moircs | 2.174 | 2
413 | rjhk | 09 45 25.74 | −24 30 28.8 | >26.8 | >25.6 | 21.85 | 19.79 | 2.41 | moircs | ... | 3
410 | bjhk | 09 45 26.07 | −24 30 20.2 | 23.73 | 22.85 | 21.78 | 20.36 | 1.52 | moircs | 1.420 | 2
402 | rjhk | 09 45 26.15 | −24 29 38.2 | >26.8 | >25.6 | 23.38 | 20.34 | 2.96 | moircs | 2.466 | 2
873 | bjhk | 09 45 26.16 | −24 26 48.2 | 24.54 | 23.27 | 21.64 | 19.99 | 1.59 | focas | 0.272 | 1
860 | filler | 09 45 26.55 | −24 28 14.8 | 27.2 | 24.99 | 21.28 | 19.01 | 2.25 | fors2 | ... | 4
387 | bjhk | 09 45 26.90 | −24 31 08.2 | >26.8 | >25.6 | 23.29 | 20.02 | 2.28 | fors2 | ... | 4
845 | rjhk | 09 45 27.07 | −24 28 44.5 | 25.3 | 24.01 | 22.24 | 19.60 | 2.72 | fors2 | 2.335 | 2
    | | | | | | | | | moircs | ... | 3
373 | filler | 09 45 27.27 | −24 30 37.7 | 24.70 | 23.52 | 21.63 | 20.06 | 1.59 | fors2 | <2.3 | 3
830 | rjhk | 09 45 27.39 | −24 26 45.9 | >26.8 | >25.6 | 22.94 | 20.39 | 2.93 | fors2 | ... | 4
    | | | | | | | | | moircs | ... | 4
368 | filler | 09 45 27.46 | −24 30 46.8 | 24.68 | 22.79 | 20.95 | 18.93 | 2.18 | focas | ... | 4
803 | bjhk | 09 45 28.25 | −24 26 03.8 | 24.31 | 23.82 | 22.89 | 21.10 | 1.66 | fors2 | 2.68 | 1
792 | bjhk | 09 45 28.52 | −24 26 16.8 | 24.74 | 23.19 | 21.43 | 19.67 | 1.78 | fors2 | 1.408 | 1
    | | | | | | | | | focas | <2.4 | 3
    | | | | | | | | | moircs | ... | 3
    | | | | | | | | | xshoot | 1.410 | 1
341 | bjhk | 09 45 28.76 | −24 29 10.9 | 25.20 | 23.12 | 21.53 | 19.44 | 2.07 | focas | <2.4 | 3
775 | filler | 09 45 29.12 | −24 28 23.9 | 26.47 | 24.60 | 22.75 | 20.35 | 2.14 | fors2 | ... | 4
760 | rjhk | 09 45 29.58 | −24 28 02.5 | >26.8 | >25.6 | 23.70 | 21.25 | 2.33 | fors2 | 2.65 | 2
749 | rjhk | 09 45 29.91 | −24 27 24.7 | >26.8 | >25.6 | 22.01 | 19.71 | 2.53 | fors2 | 2.635 | 1
    | | | | | | | | | moircs | ... | 3
722 | filler | 09 45 30.36 | −24 25 26.8 | 23.96 | 22.40 | 22.18 | 20.44 | 1.62 | fors2 | ... | 3
    | | | | | | | | | moircs | 1.507 | 1
700 | bjhk | 09 45 31.14 | −24 27 01.3 | 24.58 | 23.91 | 22.75 | 20.43 | 2.26 | focas | 2.386 | 1
267 | rjhk | 09 45 31.31 | −24 29 34.8 | 24.96 | 24.60 | 23.24 | 20.83 | 2.43 | fors2 | <2.3 | 3
    | | | | | | | | | focas | ... | 4
    | | | | | | | | | moircs | 2.419 | 1
694 | filler | 09 45 31.38 | −24 27 39.1 | 26.81 | 24.43 | 23.01 | 21.21 | 1.93 | fors2 | ... | 4
690 | filler | 09 45 31.50 | −24 27 17.8 | 24.31 | 23.78 | 22.48 | 20.99 | 1.68 | focas | <2.4 | 3
686 | rjhk | 09 45 31.58 | −24 26 41.0 | >26.8 | >25.6 | 23.14 | 20.60 | 2.83 | moircs | ... | 3
250 | filler | 09 45 31.77 | −24 29 24.4 | >26.8 | >25.6 | 22.34 | 20.59 | 1.71 | moircs | ... | 3
675 | rjhk | 09 45 31.99 | −24 28 14.8 | 24.96 | 24.51 | 22.89 | 20.33 | 2.47 | focas | 2.520 | 1
241 | bjhk | 09 45 32.28 | −24 28 50.6 | >26.8 | >25.6 | 22.88 | 20.96 | 1.89 | fors2 | 1.16 | 2
660 | rjhk | 09 45 32.53 | −24 26 38.0 | 25.06 | 24.26 | 22.87 | 21.07 | 2.63 | fors2 | 1.12 | 2
hzrg | bjhk | 09 45 32.76 | −24 28 49.3 | ... | ... | 20.65 | 19.12 | 1.70 | moircs | 2.923 | 1
221 | rjhk | 09 45 32.81 | −24 29 28.2 | >26.8 | >25.6 | 22.26 | 20.38 | 2.39 | moircs | ... | 4
646 | bjhk | 09 45 32.97 | −24 27 59.6 | 25.73 | 24.26 | 23.35 | 21.34 | 1.93 | focas | ... | 4
623 | rjhk | 09 45 33.66 | −24 27 13.0 | 26.45 | 25.04 | 24.85 | 22.54 | 2.74 | fors2 | ... | 4
175 | filler | 09 45 35.06 | −24 29 24.3 | 25.02 | 24.09 | 22.23 | 20.15 | 1.89 | fors2 | <2.8 | 3
174 | rjhk | 09 45 35.11 | −24 31 34.6 | 25.46 | 25.37 | 22.51 | 19.82 | 2.75 | focas | ... | 4
167 | drg | 09 45 35.32 | −24 29 46.8 | >26.8 | >25.6 | 22.24 | 20.59 | 3.02 | fors2 | 0.464 | 2
153 | rjhk | 09 45 35.72 | −24 31 38.1 | 26.51 | 23.66 | 23.00 | 20.62 | 2.59 | fors2 | 3.94 | 2
143 | bjhk | 09 45 36.10 | −24 30 34.4 | 24.98 | 24.00 | 23.33 | 21.25 | 1.96 | focas | ... | 3
139 | rjhk | 09 45 36.34 | −24 29 56.9 | >26.8 | >25.6 | 21.99 | 19.80 | 2.52 | moircs | ... | 3
138 | filler | 09 45 36.42 | −24 30 25.6 | 24.29 | 23.65 | 22.27 | 20.95 | 1.37 | fors2 | 1.34 | 1
544 | bjhk | 09 45 36.52 | −24 26 11.8 | 24.59 | 23.75 | 23.14 | 21.72 | 1.58 | focas | ... | 3
130 | rjhk | 09 45 36.82 | −24 29 51.8 | >26.8 | 25.33 | 23.53 | 19.90 | 3.63 | moircs | ... | 3
123 | filler | 09 45 37.03 | −24 31 18.1 | >26.8 | >25.6 | 21.60 | 19.78 | 1.93 | fors2 | ... | 3
116 | filler | 09 45 37.23 | −24 31 48.5 | 26.45 | 23.78 | 23.11 | 21.75 | 1.34 | fors2 | 2.58 | 2
114 | rjhk | 09 45 37.23 | −24 29 59.2 | >26.8 | >25.6 | 24.91 | 21.10 | 3.38 | fors2 | ... | 4
110 | filler | 09 45 37.26 | −24 30 20.8 | >26.8 | >25.6 | 22.22 | 20.39 | 1.87 | fors2 | 2.65 | 1
103 | bjhk | 09 45 37.41 | −24 30 12.1 | 24.81 | 24.23 | 23.61 | 21.36 | 2.10 | fors2 | 2.62 | 1
    | | | | | | | | | focas | 2.625 | 1
522 | bjhk | 09 45 37.44 | −24 27 50.4 | 27.5 | 25.23 | 23.15 | 21.42 | 1.78 | fors2 | ... | 4
100 | rjhk | 09 45 37.47 | −24 29 09.5 | 24.61 | 23.73 | 23.10 | 20.16 | 2.55 | moircs | 2.649 | 2
511 | rjhk | 09 45 37.84 | −24 28 43.9 | 25.7 | 24.91 | 22.75 | 20.04 | 2.88 | moircs | ... | 3
498 | drg | 09 45 38.26 | −24 28 08.6 | 26.4 | 24.59 | 22.34 | 19.99 | 2.39 | focas | ... | 4
72 | rjhk | 09 45 38.30 | −24 30 46.0 | >26.8 | >25.6 | 22.86 | 20.81 | 2.35 | fors2 | ... | 4
495 | filler | 09 45 38.33 | −24 27 03.3 | >26.8 | >25.6 | 22.20 | 20.44 | 1.73 | fors2 | ... | 3
64 | filler | 09 45 38.82 | −24 30 39.0 | 26.0 | 25.07 | 22.86 | 20.83 | 2.12 | focas | ... | 4
39 | filler | 09 45 39.72 | −24 32 02.6 | 25.4 | 24.13 | 22.22 | 20.57 | 1.59 | fors2 | 3.76 | 2
34 | rjhk | 09 45 39.92 | −24 29 19.7 | 27.9 | 25.75 | 22.60 | 19.99 | 2.60 | moircs | ... | 3
23 | rjhk | 09 45 40.40 | −24 29 00.6 | >26.8 | >25.6 | 24.31 | 21.00 | 3.23 | moircs | ... | 4
16 | filler | 09 45 40.57 | −24 31 24.7 | 24.16 | 23.48 | 22.34 | 20.93 | 1.47 | fors2 | 2.44 | 1
    | | | | | | | | | focas | <2.7 | 2
lae1141 | lae | 09 45 29.518 | −24 29 15.72 | (remaining photometry entries garbled in extraction) | moircs | 3.495 | 2
table 3. results of optical and near-ir spectroscopy in the mrc 1138−262 field.
id | ra (j2000) | dec. (j2000) | btotal | itotal | jtotal | ks,total | j−ks (1.″5) | instr. | z | quality^a
493 | 11 40 33.42 | −26 29 14.6 | >27 | >26.2 | 22.89 | 20.55 | 2.56 | fors2 | ... | 4
867 | 11 40 34.09 | −26 30 38.7 | >27 | 25.32 | >23.3 | 21.30 | 2.49 | fors2 | ... | 4
156 | 11 40 34.82 | −26 27 46.4 | >27 | >26.2 | >23.3 | 21.15 | 4.23 | fors2 | ... | 4
174 | 11 40 35.45 | −26 27 51.8 | 25.35 | 25.06 | >23.3 | >22 | 2.51 | fors2 | ... | 3
676 | 11 40 36.96 | −26 29 55.6 | >27 | >26.2 | 22.57 | 19.79 | 3.07 | fors2 | ... | 4
808 | 11 40 38.19 | −26 30 24.3 | >27 | 24.87 | 22.91 | 20.44 | 2.56 | fors2 | ... | 4
552 | 11 40 38.28 | −26 29 25.1 | >27 | 25.37 | 22.67 | 20.47 | 2.41 | moircs | ... | 4
364 | 11 40 39.76 | −26 28 45.4 | 24.17 | 22.87 | 21.24 | 19.03 | 2.21 | moircs | ... | 3
877 | 11 40 40.24 | −26 30 41.2 | 25.96 | 24.91 | 23.44 | 21.30 | 2.35 | fors2 | ... | 3
341 | 11 40 41.47 | −26 28 37.2 | >27 | 24.81 | 23.63 | 20.48 | 3.06 | fors2 | 3.263 | 1
147 | 11 40 43.48 | −26 27 44.1 | >27 | >26.2 | 21.98 | 20.04 | 2.43 | fors2 | ... | 3
369 | 11 40 43.46 | −26 28 45.2 | >27 | >26.2 | >23.3 | 21.30 | 3.83 | moircs | ... | 4
905 | 11 40 44.01 | −26 30 47.0 | >27 | >26.2 | 23.45 | 20.86 | 2.69 | moircs | ... | 4
456 | 11 40 44.28 | −26 29 07.7 | >27 | 24.71 | 21.28 | 18.93 | 2.45 | fors2 | ... | 3
    | | | | | | | | moircs | 2.172 | 2
464 | 11 40 46.09 | −26 29 11.5 | >27 | >26.2 | 21.54 | 19.06 | 2.41 | moircs | 2.149 | 2
558 | 11 40 46.52 | −26 29 27.1 | >27 | 24.71 | 21.64 | 19.03 | 2.55 | moircs | ... | 4
476 | 11 40 46.68 | −26 29 10.4 | >27 | 24.90 | 22.30 | 19.86 | 2.46 | fors2 | ... | 3
605 | 11 40 47.58 | −26 29 38.2 | >27 | >26.2 | >23.3 | 21.07 | 3.51 | fors2 | ... | 4
    | | | | | | | | moircs | ... | 4
830 | 11 40 48.36 | −26 30 30.6 | >27 | 20.48 | 20.67 | 18.86 | 2.20 | moircs | ... | 3
492 | 11 40 49.59 | −26 29 07.7 | >27 | 25.60 | 23.09 | 20.02 | 2.84 | fors2 | ... | 4
392 | 11 40 50.35 | −26 28 49.6 | >27 | >26.2 | 23.47 | 20.56 | 2.47 | fors2 | ... | 4
573 | 11 40 50.75 | −26 29 32.5 | >27 | 24.68 | 21.98 | 20.04 | 2.06 | moircs | ... | 4
601 | 11 40 51.30 | −26 29 38.5 | >27 | 23.45 | 21.27 | 19.05 | 2.35 | fors2 | ... | 3
506 | 11 40 53.13 | −26 29 18.1 | >27 | 23.55 | 21.33 | 19.01 | 2.38 | fors2 | ... | 3
165 | 11 40 54.01 | −26 27 48.1 | >27 | >26.2 | 23.35 | 20.85 | 2.59 | fors2 | ... | 4
241 | 11 40 54.77 | −26 28 03.6 | >27 | 25.44 | 23.16 | 20.78 | 2.76 | fors2 | ... | 3
223 | 11 40 55.99 | −26 28 02.9 | 26.13 | 24.30 | >23.3 | 21.36 | 2.37 | fors2 | 3.455 | 1
268 | 11 40 57.02 | −26 28 17.5 | >27 | 23.02 | 20.57 | 18.35 | 2.64 | fors2 | ... | 3
206 | 11 40 57.87 | −26 27 59.3 | >27 | 24.52 | 23.51 | 20.28 | 2.40 | fors2 | ... | 3
565 | 11 41 00.10 | −26 29 28.1 | >27 | >26.2 | 22.72 | 20.31 | 2.57 | fors2 | ... | 4
451 | 11 41 00.84 | −26 29 04.7 | >27 | >26.2 | >23.3 | 21.26 | 3.11 | fors2 | ... | 4
90 | 11 41 01.54 | −26 27 31.2 | >27 | 23.55 | 21.38 | 18.89 | 2.51 | fors2 | ... | 3
728 | 11 41 02.69 | −26 30 07.8 | >27 | >26.2 | 21.03 | 18.89 | 2.56 | fors2 | ... | 4
... | 11 40 43.85 | −26 31 26.4 | ly-α emitter candidate | fors2 | ... | 3
... | 11 40 50.68 | −26 31 00.0 | ly-α emitter candidate | fors2 | ... | 4
a quality flags are the same as in table 2. limiting magnitudes are 5σ.
table 4. spectroscopy summary for near-ir selected candidates.
selection criterion | # candidates | # targeted | # confirmed redshifts | # @ 2.3 < z < 3.1 | # @ zrg
mrc 0943−242:
rjhk | 62 | 23 | 10 | 6 | 0
bjhk | 70 | 15 | 8 | 4 | 1^a
total | 132 | 38 | 18 | 10 | 1^a
mrc 1138−262:
drg | 97 | 30 | 15 | 4 | 2
a this is the radio galaxy itself, which is also selected as a bjhk candidate.
from the moircs spectra, we detected two emission line
objects (out of 12 targets), which, if identified as hα, places the
objects at the same redshift as the radio galaxy. one of these
is a clear detection, the other is marginal (3σ). both the two-
dimensional and extracted one-dimensional spectra are shown in
figures 10 and 11. the target information for these two objects
is shown in table 3.
the small number of confirmed redshifts in this field is
in fact not surprising given that detecting interstellar absorp-
tion lines and the lyman-break in the optical was unsuccessful.
as we seem to detect only a surprisingly low number of star-
forming red galaxies in our colour selection, the remaining frac-
tion of passively evolving red sequence galaxies will not have
strong emission lines, and hence it will be harder to confirm their
redshifts. our current results show that the most effective strategy
would in fact be to observe much deeper in the near-ir so that we
can detect the 4000å continuum break feature, if present (e.g.,
kriek et al. 2006). in fact, there are a total of 13 objects in which
we detect continuum in our near-ir spectra, but in general it is
too weak and/or diffuse to identify any features with the current
data.
fig. 9. fors2 spectra of background ly-α emitters identified in the field of mrc 1138−262.
5. properties of the two confirmed red galaxies in
mrc 1138−262
these are the first examples of red galaxies confirmed as mem-
bers in a proto-cluster above z > 2. here we compare their physi-
cal properties, insofar as possible with the information available,
with the other confirmed proto-cluster members which are gen-
erally small, blue, star-forming, ly-α and hα emitters.
we make use of the multi-wavelength broad-band data to fit
spectral energy distributions (seds) and deduce ages, masses
and star formation rates (sfrs) for these two galaxies. we also
calculate the sfrs from the hα line fluxes in the moircs
spectra and from spitzer 24 μm imaging of the mrc 1138−262
field. finally, given the projected spatial location with respect
to the rg, and the calculated stellar masses, we attempt to in-
fer whether or not these galaxies will eventually merge with
mrc 1138−262 .
5.1. stellar masses
using broad-band ubrijhks[3.6][4.5][5.8][8.0] photometry,
we fit the seds of the two galaxies confirmed to lie at the red-
shift of the mrc 1138−262 to model templates with two aims.
the first is to test if the photometric redshifts produced are reli-
able, which is useful for future studies. the second is to derive
estimates of the physical properties of these galaxies - in par-
ticular, their stellar masses. tanaka et al. (in prep.) describes the
details of the model fitting, but we briefly outline the procedure
here.
the model templates were generated using the updated ver-
sion of bruzual & charlot (2003) population synthesis code
(charlot & bruzual in prep), which takes into account the effects
of thermally pulsating agb stars. we adopt the salpeter (1955)
initial mass function (imf) and solar and sub-solar metallicities
(z = 0.02 and 0.008). we generated model templates assuming
an exponentially decaying sfr with time scale τ, dust extinc-
tion, and age. we implement effects of the intergalactic extinc-
tion following furusawa et al. (2000), who used the recipe by
madau (1995), as we are exploring the z > 2 universe. we use
conventional χ2 minimizing statistics to fit the models to the ob-
served data.
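
as a rough illustration of this fitting step (not the actual code of tanaka et al., in prep.), the python sketch below performs a χ2 template fit with an analytic normalization; the template grid, band set and the 10% error floor (our reading of the 0.1 mag added in quadrature) are illustrative assumptions.

import numpy as np

def chi2_sed_fit(flux, err, templates, syst_frac=0.1):
    """pick the best-fitting model template by chi^2 minimization.

    flux, err : observed broad-band fluxes and errors (same linear units)
    templates : dict mapping a parameter tuple (age, tau, tau_v, z) to
                model fluxes evaluated in the same bands (assumed inputs)
    syst_frac : fractional error added in quadrature (~0.1 mag floor)
    """
    err_tot = np.sqrt(err**2 + (syst_frac * flux)**2)
    best, best_chi2 = None, np.inf
    for params, model in templates.items():
        # the free normalization (i.e. the stellar mass) is fixed analytically
        norm = np.sum(flux * model / err_tot**2) / np.sum(model**2 / err_tot**2)
        chi2 = np.sum(((flux - norm * model) / err_tot)**2)
        if chi2 < best_chi2:
            best, best_chi2 = (params, norm), chi2
    return best, best_chi2
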
firstly, we generate model templates at various redshifts and
perform the sed fit. errors of 0.1 magnitudes are added in
quadrature to all bands to ensure that systematic zero point er-
rors do not dominate the overall error budget. the resulting pho-
tometric redshift of the object #456 is 2.25+0.06
−0.09, which is consis-
tent with the spectroscopic redshift (zspec = 2.1719).
we then fix the redshift of the templates at the spectroscopic
redshift and fit the sed again. we impose the logical constraint
that the model galaxies must be younger than the age of the
universe. the derived properties of the galaxy #456 are an age
of 1.6^{+1.1}_{−0.7} gyr, an e-folding time scale τ = 0.1^{+0.4}_{−0.1} gyr, and dust
extinction of τv = 0.4^{+1.4}_{−0.4}, where τv is the optical depth in the v-
band. it is a relatively old galaxy with a modest amount of dust,
fig. 8. redshift distribution for confirmed sources in the field of
mrc 0943−242 at z = 2.93 (redshift indicated by an arrow). the
proportion of sources in each bin with quality flag 1 (i.e. reliable
redshifts) is shown with red hatching. the blue hatching shows
sources with quality 3 (i.e., representing an upper limit rather
than a confirmed redshift). no filling (i.e., simply the black out-
line) shows all sources with redshifts (quality flags 1 and 2).
fig. 10. (bottom) 2d and (top) 1d spectra for the object #464
with z=2.149. the oh skylines are shown in red underneath the
1d spectrum. the 2d and 1d spectra are on the same wavelength
scale.
although the error on the extinction is large. the apparent red
color of the galaxy is probably due to its old stellar populations.
we find a high photometric stellar mass of 2.8^{+1.5}_{−1.0} × 10^{11} m⊙
for this galaxy. the short time scale derived (τ of only 0.1 gyr)
suggests that the galaxy formed in an intense burst of star for-
mation. the low current star formation rate of 0.0^{+1.0}_{−0.0} m⊙ yr−1
derived from the sed fit indeed confirms that the high star for-
mation phase has ended and the galaxy is now in a quiescent
phase.
fig. 11. bottom: 2d spectrum of object #456 showing h-α detec-
tion, middle: 2d spectrum rebinned in 2x2 pixels to emphasize
the detection and top: 1d spectrum extracted from the rebinned
2d spectrum.
for the galaxy #464, the best fit photometric redshift is
zphot = 2.05^{+0.26}_{−0.13}, consistent with the spectroscopic redshift
(zspec = 2.149). the best fit at the spectroscopic redshift yields
an age of 2.4^{+0.4}_{−1.0} gyr, an e-folding time τ = 0.2^{+0.1}_{−0.2} gyr, dust
extinction of τv = 0.7^{+0.7}_{−0.6}, a stellar mass of 5.1^{+1.5}_{−2.0} × 10^{11} m⊙
and a sfr of 0.0^{+1.0}_{−0.0} m⊙ yr−1.
the best fitting seds for each galaxy are shown in
figures 13 and 12. the broad-band magnitudes of the two galax-
ies are summarized in table 5. a more extensive analysis of the
properties of galaxies in the mrc 1138−262 field will be pre-
sented in tanaka et al. (in prep.).
several earlier papers presented measurements of different
classes of objects in the field of mrc 1138−262. kurk et al.
(2004b) converted k-band magnitudes to masses, for the ly-α
and hα emitters, estimating masses of 0.3 −3 × 1010m⊙for the
ly-α emitters (except for one object with a mass of 7.5×1010m⊙)
and 0.3 −11 × 1010m⊙for the hα emitters, the latter on average
being clearly more massive than the ly-α emitters. furthermore
hatch et al. (2009) find stellar masses between 4 × 10^8 and
3 × 10^{10} m⊙ for candidate ly-α emitting companions within 150 kpc
of the central rg. our red galaxies are thus an order of magni-
table 5. total magnitudes of the two hα detected galaxies. in this table, the magnitudes are on the ab system for the ease of fitting
seds. magnitude limits are 3σ limits.
id | u | b | r | i | z | j | h | ks | 3.6μm | 4.5μm | 5.8μm | 8.0μm
456 | > 26.6 | > 26.0 | > 25.2 | 25.15 ± 0.41 | > 24.2 | 22.19 ± 0.08 | 21.27 ± 0.04 | 20.72 ± 0.06 | 20.11 ± 0.03 | 20.00 ± 0.04 | 19.69 ± 0.13 | 20.24 ± 0.25
464 | > 26.6 | > 26.0 | > 25.2 | > 26.1 | > 24.2 | 22.45 ± 0.11 | 21.39 ± 0.04 | 20.86 ± 0.09 | 19.96 ± 0.03 | 19.80 ± 0.03 | 19.63 ± 0.12 | 20.05 ± 0.21
fig. 12. sed fit for object #456, with the fainter hα emission
line detection.
fig. 13. sed fit for object #464, with the brighter hα emission
line detection.
tude more massive than the most massive ly-α emitters and 2–3
times more massive than the most massive hα emitters discov-
ered through nb imaging searches in this field.
5.2. star formation rates
we now use the line fluxes for the two hα detections to infer a
lower limit to the instantaneous sfr. we obtained a rough flux
calibration of the spectra by scaling the total flux in the spec-
trum to the observed ks magnitude of the two objects, and as-
suming a flat spectral throughput in the ks-band. as the ob-
served lines are near the centre of the ks-band, this approx-
imation does not significantly affect the derived hα flux, and
we assume a calibration uncertainty of ≈20%. both line de-
tections lie on the edge of weak telluric absorption features be-
tween 20400-20800 å and also very close to sky lines (espe-
cially #456), increasing the measurement uncertainties. the hα
flux of object #464 thus derived is 1.3 ± 0.3 × 10−16 erg s−1 cm−2
and the fainter object, #456, has an hα flux of 6.2 ± 2 × 10−17
erg s−1 cm−2. the measured line flux in #464 is fully consistent
with the published f(hα)=1.35 × 10−16 erg s−1 cm−2 derived
from narrow-band imaging (object 229 of kurk et al. 2004b).
the previously published near-ir spectroscopy of kurk et al.
(2004a) found f(hα)=7.1±1.9 × 10−17 erg s−1 cm−2, which is
somewhat lower than our flux, suggesting possible slit losses.
in the same narrow-band image, object #456 does not show any
excess flux compared to the full k−band image, suggesting that
our line flux may be overestimated due to incomplete skyline
subtraction.
using the kennicutt (1998) relation,
sfr (m⊙ yr−1) = 7.9 × 10−42 l(hα) [erg s−1],
which assumes solar metallicities and a salpeter (1955) imf, this
leads to sfrs of 35±8 and 17±6 m⊙yr−1, respectively.
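
a short python sketch reproduces this conversion from line flux to sfr; the adopted cosmology (h0 = 71, Ωm = 0.27) and the use of astropy are our assumptions, not part of the original analysis.

import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)  # assumed cosmology

def sfr_from_halpha(flux_cgs, z):
    """kennicutt (1998): sfr [msun/yr] = 7.9e-42 * l(h-alpha) [erg/s]."""
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    l_ha = 4.0 * np.pi * d_l**2 * flux_cgs
    return 7.9e-42 * l_ha

# the two moircs detections at the proto-cluster redshift
print(sfr_from_halpha(1.3e-16, 2.149))  # object #464, ~35 msun/yr
print(sfr_from_halpha(6.2e-17, 2.172))  # object #456, ~17 msun/yr
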
the sfrs derived from hα are higher than those from the
sed fits (figs 12 and 13), which give sfrs up to 4 m⊙ yr−1 at
2σ, i.e. a factor of 5–10 lower. an additional independent esti-
mate of the sfr can be obtained from the mips 24μm imag-
ing of the field (seymour et al. 2007). object #464 is detected
at s(24μm) = 470±30 μjy, while object #456 remains undetected
at the 2σ < 60 μjy level. we first convert the 24μm flux to the
total ir flux using the relation of reddy et al. (2006), and then con-
vert the latter to a sfr using the formula of kennicutt (1998).
the derived sfrs are 34 and <4 m⊙ yr−1 for objects #464 and
#456, respectively. for #464, this is fully consistent with the
sfr derived from hα, but for #456, the value derived from hα
seems strongly over-estimated. this may either be due to the
low s/n and incomplete skyline removal in our near-ir spec-
troscopy, or it may indicate that the hα has a contribution
from an agn. to confirm the latter hypothesis, we need to obtain
other emission lines such as [oiii], but a possible agn contri-
bution would not be surprising given the higher fraction of agns
observed in proto-clusters (e.g. galametz et al. 2009).
in summary, we find that object #456 most likely belongs to
the class of passively evolving galaxies, while #464 belongs to
the class of dusty star-forming red galaxies.
5.3. merging timescales
the spatial locations of the two red galaxies identified at the red-
shift of mrc 1138−262 are shown in figure 14. they both lie
along the east-west axis, where most of the confirmed agn in
this field were found (croft et al. 2005; pentericci et al. 2002)
and where the brighter drgs in kodama et al. (2007) also trace
a filament. the two galaxies are relatively central, being located
within 1′ (500 kpc, physical) of the radio galaxy. this is signif-
icant given that no obvious density gradient has been observed
for the lyman-α emitters (e.g., pentericci et al. 2000; kurk et al.
2003), yet kurk et al. (2004b) argue there is an indication that
the hα emitters and extremely red objects have a higher density
near the rg than further out. our new results and these former
results are consistent with the picture that accelerated evolution
takes place in high density environments.
to properly assess the possibility of a future merger be-
tween the rg and one or both of the red companion galax-
ies, numerical simulations are needed. however, we can at least
gain an idea of the relative timescales involved as follows. the
crossing time can be approximated by t_cross = (r^3 / (g m_total))^{1/2}
(binney & tremaine 2008), where r is the distance between the
two galaxies and m_total is the total mass (dark matter + stellar) of the
central galaxy, approximately 10^{13} m⊙ in this case (hatch et al.
2009). for the galaxy closer to the rg, #464, the distance is
r ∼ 300 kpc (physical) and the crossing time is therefore
roughly 0.7 gyr. the crossing time of the second galaxy is between
2 and 3 times longer, say approximately 2 gyr. for a major
merger (unless it is a high speed encounter, in which case the
galaxies would pass through each other with little disruption) the
galaxies should merge into a single system within a few cross-
ing times (binney & tremaine 2008). there is therefore enough
time for the two red galaxies to merge with the central rg before
redshift z = 0, increasing its stellar mass of 10^{12} m⊙ (hatch et al.
2009) by at least 50%.
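
a quick numerical check of this estimate, using the distance and halo mass quoted above (the 600 kpc value for the second galaxy is only illustrative):

import numpy as np
from astropy import units as u
from astropy.constants import G

def crossing_time(r, m_total):
    """t_cross = (r^3 / (g m_total))^(1/2), binney & tremaine (2008)."""
    return np.sqrt(r**3 / (G * m_total)).to(u.Gyr)

m_halo = 1e13 * u.Msun                        # total mass of the central galaxy
print(crossing_time(300 * u.kpc, m_halo))     # ~0.7-0.8 gyr for object #464
print(crossing_time(600 * u.kpc, m_halo))     # roughly 2-3 times longer (illustrative distance)
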
fig. 14. relative spatial distribution of the two confirmed mas-
sive galaxies in mrc 1138−262. the central radio galaxy is
marked by a large green asterisk. the red galaxy candidates
are marked as red points; those targeted spectroscopically are
marked with circles. the two filled red circles are the galaxies
confirmed to lie at the redshift of the radio galaxy. confirmed
lyman-α emitters are marked as blue triangles (pentericci et al.
2000), and confirmed agn are marked as the smaller green
squares (croft et al. 2005; pentericci et al. 2002). objects which
have been excluded as background objects are marked with a
plus sign.
6. conclusions
we observed a total of 57 near-infrared selected galaxies in
mrc 0943−242 and 33 in mrc 1138−262 with a mix of op-
tical and near-infrared multi-object spectroscopy to attempt the
confirmation of massive galaxies in the vicinity of the cen-
tral radio galaxy in these candidate forming proto-clusters. we
have determined that the jhks selection criteria presented in
kajisawa et al. (2006) select mostly 2.3 < z < 3.1 galaxies, thus
confirming the validity of this colour selection technique (10 out
of 18 confirmed redshifts of jhks-selected galaxies are in this
redshift range). of the 57 sources in the field of the radio galaxy
mrc 0943−242 we identify 27 spectroscopic redshifts and for
the remaining 30 sources we obtained five upper limits on the
redshifts, excluding them from being part of the proto-cluster. the re-
maining 25 objects could not be excluded as either foreground
or background. an unknown fraction of these may be at the red-
shift of the radio galaxy. however, we also pinpoint a foreground
(but still quite distant) large scale structure at redshift z ≈2.65
in this field, so it is likely that some of the remaining galaxies
are also associated with this structure.
of the 33 galaxies observed in mrc 1138−262 we exclude
two background objects and confirm two objects at the redshift
of the radio galaxy (all on the basis of emission lines). we can-
not exclude the remaining 29 objects as being potential proto-
cluster members - we require deep near-infrared spectroscopy
to locate the 4000å break. on the basis of sed fits, and the sfr
derived from both the hα line flux and the 24 μm flux, we deduce
that one of the two red galaxies confirmed to be at the redshift
of the proto-cluster belongs to the class of dusty star-forming
(but still massive) red galaxies, while the other is evolved and
massive, and formed rapidly in intense bursts of star formation.
these are the first red galaxies to be confirmed as mem-
bers of a proto-cluster at z > 2. the only other galaxies of
the same class are evolved galaxies found in an overdensity at
z = 1.6 in the galaxy mass assembly ultra-deep spectroscopic
survey (kurk et al. 2009), and red massive galaxies in the multi-
wavelength survey by yale-chile (musyc) sample at z ∼2.3
(e.g. kriek et al. 2006). since their phase of high star forma-
tion has ended (the current sfr is of order 1 m⊙yr−1), we only
just detect them through their emission line fluxes. however, this
shows that even in these galaxies where we do not expect strong
emission lines, it is still sometimes possible to derive redshifts
from the hα line. it is highly likely that there are more galaxies
on the red sequence in this field which have completely ceased
star formation and are thus very difficult to identify spectroscop-
ically. the next stage of our spectroscopic campaign is to obtain
deep near-infrared spectroscopy in the j- and h-bands to search
for objects with 4000å continuum breaks.
given the inferred sfrs and stellar masses of the two con-
firmed z ∼2.15 galaxies, they must have formed quite early,
which fits into the down-sizing scenario (cowie et al. 1996).
although this conclusion is drawn from limited numbers, the
low fraction of sources with hα emission lines in our sample
suggests that most of the sources probably do not have much on-
going star formation (sfr < 0.5 m⊙ yr−1) and so their formation
epoch might be quite high. this would be contrary to the idea
of the giant elliptical assembly epoch being z ∼2 −3 (e.g.,
van dokkum et al. 2008; kriek et al. 2008), but is consistent
with results from sed age fitting of stellar populations, which
point to zform > 3 (e.g., eisenhardt et al. 2008). the discrepancy
may be resolved if subclumps form early and merge without in-
ducing much star formation (i.e. dry merging). alternatively, the
fact that these galaxies are located in over–densities may predis-
pose us to structures that formed early.
finally, the proximity of these two massive galaxies to the
rg implies that they will have an important impact on its future
evolution. given the crossing times, compared with the time re-
maining until z = 0, it is plausible that one or both of the galaxies
may eventually merge with the rg.
acknowledgements. we thank the referee g. zamorani for a very constructive
referee report, which has substantially improved this paper. this work was fi-
nancially supported in part by the grant-in-aid for scientific research (no.s
18684004 and 21340045) by the japanese ministry of education, culture, sports
and science. cl is supported by nasa grant nnx08aw14h through their
graduate student researcher program (gsrp). we thank dr. bruzual and dr.
charlot for kindly providing us with their latest population synthesis code. md
thanks andy bunker and rob sharp for useful discussions on manipulating
moircs data. the work of ds was carried out at jet propulsion laboratory,
california institute of technology, under a contract with nasa. jk acknowl-
edges financial support from dfg grant sfb 439. the authors wish to respect-
fully acknowledge the significant cultural role and reverence that the summit
of mauna kea has always had within the indigenous hawaiian community. we
are fortunate to have the opportunity to conduct scientific observations from this
mountain.
references
appenzeller, i. & rupprecht, g. 1992, the messenger, 67, 18
binney, j. & tremaine, s. 2008, galactic dynamics: second edition (princeton
university press)
blakeslee, j. p., franx, m., postman, m., et al. 2003, apj, 596, l143
bruzual, g. & charlot, s. 2003, mnras, 344, 1000
cowie, l. l., songaila, a., hu, e. m., & cohen, j. g. 1996, aj, 112, 839
croft, s., kurk, j., van breugel, w., et al. 2005, aj, 130, 867
d'odorico, s., dekker, h., mazzoleni, r., et al. 2006, in society of photo-
optical instrumentation engineers (spie) conference series, vol. 6269,
society of photo-optical instrumentation engineers (spie) conference
series
eisenhardt, p. r. m., brodwin, m., gonzalez, a. h., et al. 2008, apj, 684, 905
eke, v. r., cole, s., & frenk, c. s. 1996, mnras, 282, 263
ellis, s. c., jones, l. r., donovan, d., ebeling, h., & khosroshahi, h. g. 2006,
mnras, 368, 769
förster schreiber, n. m., van dokkum, p. g., franx, m., et al. 2004, apj, 616, 40
furusawa, h., shimasaku, k., doi, m., & okamura, s. 2000, apj, 534, 624
galametz, a., stern, d., eisenhardt, p. r. m., et al. 2009, apj, 694, 1309
goldoni, p., royer, f., françois, p., et al. 2006, in society of photo-optical
instrumentation engineers (spie) conference series, vol. 6269, society of
photo-optical instrumentation engineers (spie) conference series
grazian, a., fontana, a., moscardini, l., et al. 2006, a&a, 453, 507
grazian, a., salimbeni, s., pentericci, l., et al. 2007, a&a, 465, 393
hatch, n. a., overzier, r. a., kurk, j. d., et al. 2009, mnras, 395, 114
hilton, m., collins, c. a., stanford, s. a., et al. 2007, apj, 670, 1000
ichikawa, t., suzuki, r., tokoku, c., et al. 2006, in presented at the society
of photo-optical instrumentation engineers (spie) conference, vol. 6269,
society of photo-optical instrumentation engineers (spie) conference
series
kajisawa, m., kodama, t., tanaka, i., yamada, t., & bower, r. 2006, mnras,
371, 577
kashikawa, n., inata, m., iye, m., et al. 2000, in society of photo-optical
instrumentation engineers (spie) conference series, vol. 4008, society
of photo-optical instrumentation engineers (spie) conference series, ed.
m. iye & a. f. moorwood, 104–113
kennicutt, r. c. 1998, ara&a, 36, 189
kodama, t., bell, e. f., & bower, r. g. 1999, mnras, 302, 152
kodama, t., tanaka, i., kajisawa, m., et al. 2007, mnras, 377, 1717
kriek, m., van der wel, a., van dokkum, p. g., franx, m., & illingworth, g. d.
2008, apj, 682, 896
kriek, m., van dokkum, p. g., franx, m., et al. 2006, apj, 645, 44
kurk, j., cimatti, a., zamorani, g., et al. 2009, arxiv e-prints
kurk, j., röttgering, h., pentericci, l., miley, g., & overzier, r. 2003, new
astronomy review, 47, 339
kurk, j. d., pentericci, l., overzier, r. a., röttgering, h. j. a., & miley, g. k.
2004a, a&a, 428, 817
kurk, j. d., pentericci, l., röttgering, h. j. a., & miley, g. k. 2004b, a&a,
428, 793
lidman, c., rosati, p., tanaka, m., et al. 2008, a&a, 489, 981
madau, p. 1995, apj, 441, 18
pentericci, l., kurk, j. d., carilli, c. l., et al. 2002, a&a, 396, 109
pentericci, l., kurk, j. d., röttgering, h. j. a., et al. 2000, a&a, 361, l25
reddy, n. a., steidel, c. c., fadda, d., et al. 2006, apj, 644, 792
rieke, g. h., young, e. t., engelbracht, c. w., et al. 2004, apjs, 154, 25
salpeter, e. e. 1955, apj, 121, 161
seymour, n., stern, d., de breuck, c., et al. 2007, apjs, 171, 353
spergel, d. n., verde, l., peiris, h. v., et al. 2003, apjs, 148, 175
stanford, s. a., romer, a. k., sabirli, k., et al. 2006, apj, 646, l13
suzuki, r., tokoku, c., ichikawa, t., et al. 2008, pasj, 60, 1347
tanaka, m., kodama, t., arimoto, n., et al. 2005, mnras, 362, 268
van dokkum, p. g., franx, m., förster schreiber, n. m., et al. 2004, apj, 611,
703
van dokkum, p. g., franx, m., kriek, m., et al. 2008, apj, 677, l5
van dokkum, p. g. & stanford, s. a. 2003, apj, 585, 78
van dokkum, p. g., stanford, s. a., holden, b. p., et al. 2001, apj, 552, l101
venemans, b. p., röttgering, h. j. a., miley, g. k., et al. 2005, a&a, 431, 793
venemans, b. p., röttgering, h. j. a., miley, g. k., et al. 2007, a&a, 461, 823
zirm, a. w., stanford, s. a., postman, m., et al. 2008, apj, 680, 224
|
0911.1718 | time-dependent quantum transport with superconducting leads: a discrete
basis kohn-sham formulation and propagation scheme | in this work we put forward an exact one-particle framework to study
nano-scale josephson junctions out of equilibrium and propose a propagation
scheme to calculate the time-dependent current in response to an external
applied bias. using a discrete basis set and peierls phases for the
electromagnetic field we prove that the current and pairing densities in a
superconducting system of interacting electrons can be reproduced in a
non-interacting kohn-sham (ks) system under the influence of different peierls
phases {\em and} of a pairing field. an extended keldysh formalism for the
non-equilibrium nambu-green's function (negf) is then introduced to calculate
the short- and long-time response of the ks system. the equivalence between the
negf approach and a combination of the static and time-dependent
bogoliubov-degennes (bdg) equations is shown. for systems consisting of a
finite region coupled to ${\cal n}$ superconducting semi-infinite leads we
numerically solve the static bdg equations with a generalized wave-guide
approach and their time-dependent version with an embedded crank-nicholson
scheme. to demonstrate the feasibility of the propagation scheme we study two
paradigmatic models, the single-level quantum dot and a tight-binding chain,
under dc, ac and pulse biases. we provide a time-dependent picture of single
and multiple andreev reflections, show that andreev bound states can be
exploited to generate a zero-bias ac current of tunable frequency, and find a
long-living resonant effect induced by microwave irradiation of appropriate
frequency.
| introduction
in the last two decades superconducting nanoelectron-
ics has emerged as an interdisciplinary field bridging dif-
ferent areas of physics like superconductivity, quantum
transport and quantum computation.1–3 for practical ap-
plications the reduction of heat losses in superconducting
circuits constitutes a major advantage over semiconduc-
tor electronics where a molecular junction is more subject
to thermal instabilities.4–7
the idea of exploiting atomic-size quantum point con-
tacts or quantum dots coupled to superconducting leads
as quantum bits (qubit) has received significant atten-
tion both theoretically and experimentally.8–11 the state
of a qubit evolves in time according to the schrödinger
equation for open quantum systems and can be manip-
ulated using electromagnetic pulses of the duration of
few nano-seconds or even faster.
due to the reduced
dimensionality and the high speed of the pulses these
systems can be classified as ultrafast josephson nano-
junctions (uf-jnj). the microscopic description of the
out-of-equilibrium properties of an uf-jnj is not only
of importance for their potential applications in future
electronics but also of considerable fundamental interest.
the quantum nature of the nanoscale device leads to a
sub-harmonic gap structure,12–16 ac characteristics,17,18
current-phase relation,19,20 etc. that differ substantially
from those of a macroscopic josephson junction. further-
more, there are regimes in which the electron-electron
scattering inside the device plays an important role.21–25
we here focus on a different relevant aspect of uf-
jnj, namely the ab initio description of their short time
responses. considerable theoretical progresses have been
made to construct a first-principle scheme of electron
transport through molecules placed between normal met-
als.
on the contrary, despite the recent experimental
advances in fabricating superconducting quantum point
contacts, a first-principle approach to superconducting
nanoelectronics is still missing.
furthermore, time-
dependent (td) properties like the switch on/off time of
the current or the response to time-dependent ac fields
or train pulses has remained largely unexplored. there
are several difficulties related to the construction of a
feasible time-dependent approach already at a mean-field
level. the system is open, the electronic energy scales are
2-3 orders of magnitude larger than a typical supercon-
ducting gap, the problem is intrinsically time-dependent
(even for dc biases), and the possible formation of an-
dreev bound states (abs) give rise to persistent oscilla-
tions in the density and current. the time-evolution of lo-
calized wave-packets scattering across a superconductor-
normal interface was explored long ago.26–28 more re-
cently the analysis has been extended to scattering states
in superconductor-device-normal (s-d-n) junctions us-
ing the wide-band-limit (wbl) approximation29 and in
superconductor-device-superconductor (s-d-s) junctions
by approximating the leads with finite size reservoirs.30
however, there has been no attempt to calculate the re-
sponse of s-d-s junctions to td applied voltages using
truly semi-infinite leads.
in this work we propose a one-particle framework to
study td quantum transport in uf-jnj, construct a
suitable propagation scheme and apply it to study gen-
uine td properties like the switch on/off of the current,
the onset of a josephson regime, abs oscillations, ac
transport and the time-evolution of multiple andreev re-
flections.
the one-particle framework, described in section ii a
and ii b, is an extension of td superconducting density
functional theory31 to systems with a discrete basis and
is built on the mapping from densities to potentials pro-
posed by van leeuwen32 and vignale.33 it is shown that
under reasonable assumptions the current density and
pairing density of an interacting system perturbed by a
td electromagnetic field can be reproduced in a kohn-
sham system of non-interacting electrons perturbed by
a td electromagnetic and pairing fields, and that these
fields are unique. in the special case of normal systems
such result provides a formulation of td current density
functional theory in tight-binding models.
an
extended
keldysh
formalism
for
the
non-
equilibrium nambu-green's function is introduced in
section ii c and used to calculate the time-dependent
current, density and pairing density of the kohn-sham
hamiltonian. by adding a vertical imaginary track to
the original keldysh contour34–36 we are able to extract
the response of the system just after the application of
the bias (transient regime) and to describe the onset of
the josephson regime. we also show the equivalence be-
tween the equations of motion for the nambu-green's
function on the extended contour and the combination
of the static and td bogoliubov-degennes equations.
in section iii we illustrate a procedure for the calcula-
tion of the one-particle eigenstates of a system consisting
of n semi-infinite superconducting leads coupled to a fi-
nite region c. these states are then propagated in time
according to the td bogoliubov-degennes equations us-
ing an embedded crank-nicholson algorithm which re-
duces to that of refs. 37,38 in the case of normal leads.
the propagation scheme is unitary (norm conserving)
and incorporates exactly the transparent boundary con-
ditions.
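
to make the idea of the propagation concrete, the following is a minimal, self-contained sketch of a crank-nicholson step for a closed (finite) bdg hamiltonian; the embedding self-energies that implement the transparent boundary conditions of the semi-infinite leads are omitted here, so this is not the full algorithm of section iii.

import numpy as np

def crank_nicolson_step(psi, h_bdg, dt):
    """one step of the unitary propagator (1 + i h dt/2)^-1 (1 - i h dt/2).

    psi   : (2n,) complex vector of electron/hole (nambu) amplitudes
    h_bdg : (2n, 2n) hermitian bdg hamiltonian at the midpoint of the step
    dt    : time step
    """
    identity = np.eye(h_bdg.shape[0], dtype=complex)
    a = identity + 0.5j * dt * h_bdg
    b = identity - 0.5j * dt * h_bdg
    return np.linalg.solve(a, b @ psi)

# example: propagate a wave packet on a 4-site normal chain (delta = 0)
n = 4
h = -np.eye(n, k=1) - np.eye(n, k=-1)        # nearest-neighbour hopping
h_bdg = np.block([[h, np.zeros((n, n))],
                  [np.zeros((n, n)), -h]])   # particle/hole blocks
psi = np.zeros(2 * n, dtype=complex); psi[0] = 1.0
for _ in range(100):
    psi = crank_nicolson_step(psi, h_bdg, dt=0.05)
print(np.vdot(psi, psi).real)                # norm stays 1 (unitary scheme)
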
the feasibility of the method is demonstrated in sec-
tion iv where we calculate the td current, density and
pairing density of s-d-s junctions under dc, ac and pulse
biases.
the paradigmatic model with a single atomic
level connected to a left and right superconducting leads
is investigated in detail. we provide a time-dependent
picture of single and multiple andreev reflections and of
the consequent formation of cooper pairs at the inter-
face. we show that the smaller the bias, the longer and
the more complex the transient regime. we also study
how the system relaxes after the bias is switched off. due
to the presence of abs a tiny difference in the switch-off
time can cause a large difference in the relaxation be-
havior with persistent oscillations of tunable frequency.
abs also play a crucial role in microwave ac transport.
tuning the frequency of the microwave field according
to the abs energy difference one produces a long-living
transient resonant effect in which the amplitude of the
ac current is about an order of magnitude larger than
that of the current out of resonance. finally we consider
one-dimensional atomic chains coupled to superconduct-
ing leads. we calculate the td current density pattern
along the chain for dc (ac) biases and show a clear-cut
transient scenario of the multiple (photon-assisted) an-
dreev reflections. a summary of the main findings and
an outlook on future perspectives are drawn in section
v.
ii.
general formulation
a.
hamiltonian of the system
the hamiltonian of a system of interacting electrons
can be written in terms of the field operators ψ̂σ(r)
(ψ̂†σ(r)) which destroy (create) an electron of spin σ
in position r. we expand the field operators in some
suitable basis of localized orbitals φm(r) as ψ̂σ(r) =
Σm ĉmσ φm(r). assuming, for simplicity, that the φm's
are orthonormal, the ĉ operators obey the anticommutation relations

\{\hat c_{m\sigma}, \hat c^{\dagger}_{n\sigma'}\} = \delta_{\sigma\sigma'}\,\delta_{nm}.   (1)
in the presence of an external static electromagnetic and
pairing field the hamiltonian has the general form

\hat H_0 = \hat K_0 + \hat\Delta_0 + \hat\Delta^{\dagger}_0 + \hat H_{\rm int}.   (2)

the first term is the free-electron part and reads

\hat K_0 = \sum_{\sigma}\sum_{mn} t_{mn}\, e^{i\gamma_{mn}}\, \hat c^{\dagger}_{m\sigma}\hat c_{n\sigma}   (3)

with real symmetric hopping parameters tmn = tnm and
real antisymmetric phases γmn = −γnm. the phases account
for the presence of an external vector potential
a(r), in accordance with the peierls prescription. if we
use a grid basis for the expansion of the field operators
with grid points rm then γmn = (1/c)\int_{r_n}^{r_m} dl·a(r). the second
term in eq. (2) represents the pairing field operator,
which couples the pairing density operator to an external
field, and reads

\hat\Delta_0 = \sum_{m} \Delta_m\, \hat c^{\dagger}_{m\uparrow}\hat c^{\dagger}_{m\downarrow}.   (4)

we notice that the pairing field ∆m is local in the chosen
basis. this term is usually set to zero since the transition
to a superconducting state is caused by the interaction
part. our motivation to include it at this stage will soon
become clear. the interaction part of the hamiltonian
\hat H_{\rm int} contains terms more than quadratic in the ĉ operators.
we do not specify the form of \hat H_{\rm int}, which can
be any. we, however, require that it commutes with the
density operator \hat n_{m\sigma} \equiv \hat c^{\dagger}_{m\sigma}\hat c_{m\sigma}:

[\hat H_{\rm int}, \hat n_{m\sigma}] = 0,   ∀m, σ.   (5)

the above condition is fulfilled on a grid basis as well as
in tight-binding models with hubbard-like interactions.
we are interested in the dynamics of the system when
an extra time-dependent electromagnetic field and pair-
ing potential is switched on at t = 0. the pairing po-
tential must here be considered as an independent ex-
ternal field. since the time-dependent part of the scalar
potential can always be gauged away we restrict to time-
dependent hamiltonians of the form
\hat H(t) = \hat K(t) + \hat\Delta(t) + \hat\Delta^{\dagger}(t) + \hat H_{\rm int},   (6)

where

\hat K(t) = \sum_{\sigma}\sum_{mn} t_{mn}\, e^{i\gamma_{mn}(t)}\, \hat c^{\dagger}_{m\sigma}\hat c_{n\sigma}   (7)

and

\hat\Delta(t) = \sum_{m} \Delta_m(t)\, \hat c^{\dagger}_{m\uparrow}\hat c^{\dagger}_{m\downarrow}.   (8)
in 1994 wacker, kümmel and gross31 put forward a
rigorous framework, known as td density functional
theory for superconductors (scdft), to study the dy-
namics of a superconducting system in the continuum
case. the continuum hamiltonian can be obtained from
the hamiltonian in eq.
(6) with the φm's a grid ba-
sis in the limit of zero spacing. they proved that given
an initial many-body state |φ0⟩the current and pair-
ing densities evolving under the influence of two different
vector potentials a and a′ and/or two different pair-
ing potentials ∆and ∆′ are always different. this re-
sult renders all observable quantities functionals of the
current and pairing densities, which can therefore be cal-
culated in a one-particle manner.31 the original formu-
lation relies on the assumption that the time-dependent
current and pairing densities of the interacting hamil-
tonian can be reproduced in a non-interacting hamil-
tonian under the influence of another vector and pair-
ing potential, i.e., that the interacting a-∆ densities are
also non-interacting a-∆ representable.
the interact-
ing versus non-interacting representability assumption is
present also in the original formulation of td density
functional theory (dft) by runge and gross39 and td
current density functional theory (cdft) by ghosh
and dhara.40 the representability problem in tddft
was solved by van leeuwen who proved that the td den-
sity of a system with interaction ˆ
hint under the influence
of a td scalar potential v can be reproduced in another
system with interaction ˆ
h′
int under the influence of a td
scalar potential v ′ and that v ′ is unique.32 we will re-
fer to such result as the van leeuwen theorem. taking
ˆ
h′
int = 0 the van leeuwen theorem implies that the td
interacting density can be reproduced in a system of non-
interacting electrons.
later vignale extended the van
leeuwen theorem to solve the representability problem in
tdcdft.33 in the next section we show that the results
by van leeuwen and vignale can be further extended
to solve the representability problem in tdscdft. the
theory is formulated on a discete basis and it is not lim-
ited to pure states, implying that we also have access to
the finite-temperature domain.
b.
the one-particle kohn-sham scheme of
tdscdft
let ρ̂(t) be the density matrix at time t of the system
described by the hamiltonian in eq. (6). we denote
by o(t) ≡ tr{ρ̂(t) Ô(t)} the time-dependent ensemble
average of a generic operator Ô(t), where the "tr" sym-
bol signifies the trace over a complete set of many-body
states. the average o(t) obeys the equation of motion

\frac{d}{dt}\, o(t) = \frac{\partial}{\partial t}\, o(t) + i\, {\rm tr}\,\{\hat\rho(t)\,[\hat H(t), \hat O(t)]\}.   (9)

it is easy to verify that when Ô(t) is the density operator
n̂m ≡ Σσ ĉ†mσ ĉmσ, eq. (9) yields
\frac{d}{dt}\, n_m(t) = \sum_{n} j_{mn}(t) - 4\,{\rm Im}\!\left[\Delta^{*}_{m}(t)\, p_m(t)\, e^{-2i t_{mm} t}\right],   (10)

where jmn(t) and pm(t) are the expectation values of the
bond-current operator

\hat j_{mn}(t) \equiv \frac{1}{i}\sum_{\sigma}\left[ t_{mn}\, e^{i\gamma_{mn}(t)}\, \hat c^{\dagger}_{m\sigma}\hat c_{n\sigma} - {\rm h.c.}\right]   (11)

and pairing density operator

\hat p_m(t) \equiv \hat c_{m\downarrow}\hat c_{m\uparrow}\, e^{2i\int_0^t dt'\, t_{mm}} = \hat c_{m\downarrow}\hat c_{m\uparrow}\, e^{2i t_{mm} t}.   (12)
equation (10) is the proper extension of the continuity
equation to systems exposed to a pairing field. the term
Δ̂(t) + Δ̂†(t) acts as if there were td sources and sinks.
notice that under the gauge transformation ĉnσ →
e^{iβn(t)} ĉnσ (with βn(0) = 0) the on-site energies change as
tmm → tmm − dβm(t)/dt while the phases and the pair-
ing field change according to γmn(t) → γmn(t) + βm(t) −
βn(t) and ∆m(t) → ∆m(t) exp[2iβm(t)]. therefore the
bond-current operator ĵmn and pairing density operator
p̂m are gauge invariant. in a grid basis representation
with grid points rm the phases βm(t) are the discretized
values of the scalar function λ(rm, t) which defines the
gauge-transformed vector potential a and scalar poten-
tial v: a → a + c∇λ and v → v − ∂λ/∂t.
the equation of motion for the bond-current jmn(t)
can be cast as follows:

\frac{d}{dt}\, j_{mn}(t) = k_{mn}(t)\, \frac{d}{dt}\gamma_{mn}(t) + f_{mn}(t).   (13)

the first term in the r.h.s. is exactly ∂jmn(t)/∂t; the
operator \hat k_{mn}(t) \equiv \sum_{\sigma}\left[t_{mn}\, e^{i\gamma_{mn}(t)}\, \hat c^{\dagger}_{m\sigma}\hat c_{n\sigma} + {\rm h.c.}\right] is
the energy density of the bond m-n. the second term
in the r.h.s. is, therefore, the average of \hat f_{mn}(t) \equiv
i[\hat H(t), \hat j_{mn}(t)], see eq. (9).
the derivation of the equation of motion for the pairing
density pm(t) is also straightforward and leads to

\left[\frac{d}{dt} - 2i t_{mm}\right] p_m(t) = i\Delta_m(t)\,[n_m(t) - 1]\, e^{2i t_{mm} t} + i\, g_m(t)\, e^{2i t_{mm} t},   (14)

with \hat g_m(t) \equiv [\hat K(t) + \hat H_{\rm int}, \hat c_{m\downarrow}\hat c_{m\uparrow}].
we now ask the question whether the densities jmn(t),
for all bonds m-n with tmn ≠ 0, and pm(t) can be re-
produced in a system with a different interaction hamil-
tonian Ĥ′int under the influence of td phases γ′(t) and
pairing potential ∆′(t) starting from an initial density
matrix ρ̂′(0).
for the densities to be the same at time t = 0 we have
to choose ρ̂′(0) and γ′(0) in such a way that

{\rm tr}\,\{\hat\rho'(0)\, \hat j'_{mn}(0)\} = {\rm tr}\,\{\hat\rho(0)\, \hat j_{mn}(0)\},   (15)

{\rm tr}\,\{\hat\rho'(0)\, \hat p_{m}(0)\} = {\rm tr}\,\{\hat\rho(0)\, \hat p_{m}(0)\}.   (16)

notice that in the primed system the bond-current op-
erator ĵ′mn is different from ĵmn since the phases γ′ are
generally different from γ. on the contrary the pairing
density operator is the same in the two systems. equa-
tions (15,16) define the compatible initial configurations
of the primed system.
we answer the above question affirmatively by showing
that given a compatible initial configuration [ρ̂′(0), γ′(0)]
and under reasonable conditions there exist γ′(t) and
∆′(t) for which the bond-current and pairing density of
the original and primed system are the same at all times.
the formal statement is enunciated in the following
theorem: given a compatible initial configuration
[ρ̂′(0), γ′(0)] such that

k'_{mn}(0) = {\rm tr}\,\{\hat\rho'(0) \sum_{\sigma}( t_{mn}\, e^{i\gamma'_{mn}(0)}\, \hat c^{\dagger}_{m\sigma}\hat c_{n\sigma} + {\rm h.c.})\} \neq 0   (17)

for all bonds m-n with tmn ≠ 0, and

n'_{m}(0) = {\rm tr}\,\{\hat\rho'(0)\, \hat n_m\} \neq 1,   (18)

which implies that at time t = 0 none of the orbitals φm
are half filled in the primed system, there exists a unique
set of continuous phases γ′(t) and pairing potential ∆′(t)
that reproduce in the primed system the densities jmn(t)
and pm(t) of the original system.
remarks: before presenting the proof of the theorem
we discuss a few relevant implications. (1) if the
original system is a superconducting system with an at-
tractive interaction Ĥint and a vanishing pairing field,
i.e., Δ̂ = 0, the theorem implies that the bond-currents
and pairing densities can be reproduced in a system of
non-interacting electrons, i.e., Ĥ′int = 0, perturbed by
td phases γ′ and pairing field ∆′. in the following we
will refer to such a non-interacting system as the kohn-
sham (ks) system and to the td perturbation as the
ks phases and ks pairing potential. in section iii
we describe how to perform the time-evolution of such
ks systems for geometries relevant to quantum trans-
port. (2) for interacting systems with ∆ = 0 and ini-
tially in equilibrium in the absence of electromagnetic
fields the phases γ(0) = 0 and hence jmn(0) = 0 for all
bonds. in the ks system a possible compatible initial
configuration is therefore γ′(0) = 0 and ρ̂′(0) such that
the expectation value of the one-particle density matrix
n′mn(0) = Σσ tr{ρ̂′(0) ĉ†mσ ĉnσ} is real. for such initial
configurations the condition (17) becomes n′mn(0) ≠ 0
for all bonds m-n with tmn ≠ 0. (3) if we ask the ques-
tion whether only the bond-currents jmn(t) of a system
with hamiltonian (6) and zero pairing field, i.e., ∆ = 0,
can be reproduced in a system with zero pairing field,
i.e., ∆′ = 0, and different interactions Ĥ′int under the in-
fluence of different phases γ′ starting from some initial
density matrix ρ̂′(0), the answer is affirmative provided
that ρ̂′(0) and γ′(0) fulfill eqs. (15,17). this corollary ex-
tends tdcdft to tight-binding models using the peierls
phases as the basic ks fields and lays down the basis for
a density functional td theory in discrete systems.41
we conclude this section with the proof of the theo-
rem.
proof: the current and pairing densities of the
primed system obey the equations of motion (13,14) with
kmn(t) → k′mn(t), fmn(t) → f′mn(t) and nm(t) →
n′m(t), gm(t) → g′m(t). therefore, for a generic time
t the densities of the two systems are the same provided
that

k'_{mn}(t)\, \frac{d}{dt}\gamma'_{mn}(t) = k_{mn}(t)\, \frac{d}{dt}\gamma_{mn}(t) + f_{mn}(t) - f'_{mn}(t),   (19)

[n'_{m}(t) - 1]\,\Delta'_{m}(t) = [n_{m}(t) - 1]\,\Delta_{m}(t) + g_{m}(t) - g'_{m}(t).   (20)
a discussion on the existence and uniqueness of the so-
lution for the coupled eqs. (19-20) is rather complicated
since the dependence on the phases γ′ and potentials ∆′
in f′ and g′ enters implicitly via the td density matrix
ρ̂′(t). to proceed further we then follow the approach
of vignale and assume that the time-dependent phases
and pairing potentials, and hence all expectation values,
are analytic functions of time around t = 0.33 expand-
ing all quantities in eqs. (19-20) in their taylor series
and equating the coefficients with the same power of t
we obtain

(l+1)\, k'^{(0)}_{mn}\, \gamma'^{(l+1)}_{mn} = -\sum_{k=0}^{l-1} (k+1)\, k'^{(l-k)}_{mn}\, \gamma'^{(k+1)}_{mn} + \sum_{k=0}^{l} (k+1)\, k^{(l-k)}_{mn}\, \gamma^{(k+1)}_{mn} + f'^{(l)}_{mn} - f^{(l)}_{mn},   (21)

[n'^{(0)}_{m} - 1]\, \Delta'^{(l)}_{m} = -\sum_{k=0}^{l-1} n'^{(l-k)}_{m}\, \Delta'^{(k)}_{m} + \sum_{k=0}^{l} n^{(l-k)}_{m}\, \Delta^{(k)}_{m} - \Delta^{(l)}_{m} + g'^{(l)}_{m} - g^{(l)}_{m},   (22)
where for a generic analytic function f(t) we defined f^{(l)}
as the l-th coefficient of the taylor expansion. we now
show that eqs. (21-22) constitute a set of recursive re-
lations to calculate all γ′^{(l)} and ∆′^{(l)} once all γ′^{(k)} and
∆′^{(k)} are known for k < l. we first observe that the
l-th derivative of the density matrix ρ̂′(t) in t = 0 de-
pends at most on the (l − 1) derivative of γ′ and ∆′ since
i dρ̂′(t)/dt = [Ĥ′(t), ρ̂′(t)]. the quantity f′mn depends on
(γ′, ∆′) implicitly through ρ̂′(t) and explicitly through
the commutator [Ĥ′(t), ĵ′mn(t)]. since the l-th derivative
of the commutator depends on all (γ′^{(k)}, ∆′^{(k)}) with k ≤ l,
the quantity f′^{(l)}_{mn} is a function of (γ′^{(k)}, ∆′^{(k)}) with k ≤ l.
on the contrary, the quantities k′, g′ depend implicitly
on (γ′, ∆′) through ρ̂′(t) but they explicitly depend only
on γ′, i.e., there is no explicit dependence on the pair-
ing potential ∆′. we therefore conclude that k′^{(l)} and
g′^{(l)} depend on the γ′^{(k)} with k ≤ l and on ∆′^{(k)} with
k < l. finally, from eq. (10) we see that the l-th deriva-
tive of the density n′m(t) depends at most on the l − 1
derivative of γ′ and ∆′. the table below summarizes the
dependency of the various quantities on the order of the
derivatives of γ′ and ∆′:

            f′^{(l)}   k′^{(l)}   g′^{(l)}   n′^{(l)}
{γ′^{(k)}}   k ≤ l    k ≤ l    k ≤ l    k < l
{∆′^{(k)}}   k ≤ l    k < l    k < l    k < l      (23)

from the above considerations it follows that eq. (22)
with l = 0 can be used to determine ∆′^{(0)} since the r.h.s.
depends only on γ′^{(0)} = γ′(0) and from eq. (18) the pref-
actor [n′^{(0)}_{m} − 1] ≠ 0. having ∆′^{(0)} we can easily calculate
γ′^{(1)} from eq. (21) with l = 0 since the r.h.s. depends
only on γ′^{(0)} and ∆′^{(0)} and from eq. (17) k′^{(0)}_{mn} ≠ 0.
with γ′^{(1)}, γ′^{(0)} and ∆′^{(0)} we can use eq. (22) with l = 1
to extract ∆′^{(1)}, then eq. (21) with l = 1 to extract γ′^{(2)},
and so on and so forth.
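
the recursive structure of this construction can be summarized in the following python sketch; the callables delta_rhs and gamma_rhs stand for the right-hand sides of eqs. (22) and (21) divided by their prefactors and are placeholders, since their explicit evaluation requires the taylor coefficients of ρ̂′(t).

def construct_ks_fields(order, gamma0_prime, delta_rhs, gamma_rhs):
    """build the taylor coefficients of the ks phases and pairing field.

    gamma0_prime : initial phases gamma'^(0) (compatible configuration)
    delta_rhs(l, gammas, deltas) : r.h.s. of eq. (22) divided by [n'^(0) - 1]
    gamma_rhs(l, gammas, deltas) : r.h.s. of eq. (21) divided by (l+1) k'^(0)
    both callables may use coefficients up to the orders listed in table (23).
    """
    gammas = {0: gamma0_prime}   # gamma'^(k)
    deltas = {}                  # delta'^(k)
    for l in range(order + 1):
        # eq. (22) at order l: needs gamma'^(k<=l) and delta'^(k<l)
        deltas[l] = delta_rhs(l, gammas, deltas)
        # eq. (21) at order l: determines gamma'^(l+1)
        gammas[l + 1] = gamma_rhs(l, gammas, deltas)
    return gammas, deltas
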
c.
keldysh-green's function in the nambu space
1.
keldysh contour
we now specialize to interacting systems which are ini-
tially in equilibrium at temperature t = 1/β and chem-
ical potential μ; such initial configurations are the rele-
vant ones in quantum transport experiments, see section
ii d.42 from static scdft43 we can choose the initial
density matrix of the ks system as the thermal density
matrix of a system described by the equilibrium hamilto-
nian (2) with Ĥint = 0 and ks phases γ and pairing po-
tentials ∆, and from the results of the previous section we
know that such a ks system can reproduce the td bond-
currents and pairing densities of the interacting system if
perturbed by td ks phases γ(t) and pairing potentials
∆(t). denoting by Ĥs(t) = K̂(t) + Δ̂(t) + Δ̂†(t) the td
hamiltonian and by ρ̂s(t) the td density matrix of the
ks system we then have

\hat\rho_s(t) = \frac{1}{z}\, \hat S_s(t)\, e^{-\beta(\hat H_s - \mu\hat N)}\, \hat S^{\dagger}_s(t),   (24)

where z = {\rm tr}\,\{e^{-\beta(\hat H_s - \mu\hat N)}\} is the partition function
and Ŝs(t) is the ks evolution operator to be determined
from i dŜs(t)/dt = Ĥs(t) Ŝs(t) with boundary condition
Ŝs(0) = 1. the hamiltonian Ĥs = Ĥs(0) is the equi-
librium ks hamiltonian while N̂ is the total number of
particles operator. it is worth noticing that in general
[Ĥs, N̂] ≠ 0 due to the presence of the pairing field. the
td expectation value os(t) of a generic operator Ô(t) is
in the ks system given by34–36,44

o_s(t) = {\rm tr}\,\{\hat\rho_s(t)\, \hat O(t)\} \equiv \langle T_k\{\hat O(z = t_\pm)\}\rangle,   (25)

where we have introduced the short-hand notation

\langle T_k\{\ldots\}\rangle = \frac{{\rm tr}\!\left[ T_k\left\{ e^{-i\int_{\gamma_k} d\bar z\, \hat H_{\mu,s}(\bar z)} \ldots \right\}\right]}{{\rm tr}\!\left[ T_k\left\{ e^{-i\int_{\gamma_k} d\bar z\, \hat H_{\mu,s}(\bar z)} \right\}\right]}.   (26)
in the above equation γk is the keldysh contour45 il-
lustrated in fig. 1, which is an oriented contour com-
posed of an upper branch going from 0 to ∞, a lower
branch going from ∞ to 0 and a purely imaginary (ther-
mal) segment going from 0 to −iβ. the operator Tk
is the contour ordering operator and moves operators
with later contour variable to the left (an extra minus
sign has to be included for odd permutations of fermion
fields). finally Ĥμ,s(z̄ = t̄±) = Ĥs(t̄), where the contour
points t̄−/t̄+ lie on the upper/lower branch at a distance
t̄ from the origin, while for z̄ on the thermal segment
Ĥμ,s(z̄ = −iτ) = Ĥs − μN̂. thus, the denominator in
eq. (26) is simply the partition function z. in eq. (25)
the variable z on the contour can be taken either on the
upper (t−) or lower (t+) branch at a distance t from the
origin.
fig. 1: the keldysh contour γk described in the main text.
the contour variable z = t−/t+ denotes a point on the up-
per/lower branch at a distance t from the origin while z = −iτ
denotes a point on the imaginary track at a distance τ from
the origin. in the figure we also illustrate the points 0−(ear-
liest point on γk), 0+ and −iβ (latest point on γk).
2.
keldysh-nambu-green's function
the ks expectation value os(t) of an operator ˆ
o(t)
is in general different from the expectation value o(t)
produced by the original system. however if ˆ
o(t) is the
ks bond-current operator or the pairing density operator
the average over the ks system yields exactly the bond-
current and pairing density of the original system.
it
is therefore convenient to introduce the non-equilibrium
nambu-green's functions (negf) from which the ex-
pectation value of any one-particle operator can be ex-
tracted. a further reason for us to introduce the negf
is that the equilibrium and time-dependent bogoliubov-
degennes equations can be elegantly derived from them,
thus illustrating the equivalence between the negf and
the bogoliubov-degennes formalisms. the normal and
anomalous components of the negf are defined accord-
ing to46
gσ,mn(z; z′) = (1/i) ⟨tk{ĉmσ(z) ĉ†nσ(z′)}⟩,   (27)

fmn(z; z′) = (1/i) ⟨tk{ĉm↓(z) ĉn↑(z′)}⟩,   (28)

f̄mn(z; z′) = −(1/i) ⟨tk{ĉ†n↑(z′) ĉ†m↓(z)}⟩,   (29)
where z, z′ run on the keldysh contour γk.34,35,44,47 the
ˆ
c operators carry a dependence on the z variable; such de-
pendence simply specifies their position along the contour
so to have a well defined action of tk.44 the td bond-
current and pairing density can be expressed in terms of
gσ(z; z′) and f(z; z′) as
jmn(t) = −Σσ [tmn e^{iγmn(t)} gσ,nm(t−; t+) + h.c.],   (30)

pm(t) = i fmm(t+; t−) e^{2itmmt}.   (31)
3.
equations of motion
the negf of the ks system obey the following equa-
tions of motion
(i d/dz 1 − hμ(z)) g(z; z′) = 1 δ(z − z′),   (32)

g(z; z′) (−i d/dz′ 1 − hμ(z′)) = 1 δ(z − z′),   (33)

where all underlined quantities are 2 × 2 matrices in the nambu space with matrix elements

1mn = [ δmn   0   ]
      [ 0     δmn ]

and

gmn(z; z′) = [ g↑,mn(z; z′)   −fnm(z′; z)   ]
             [ fmn(z; z′)     −g↓,nm(z′; z) ],   (34)

hμ,mn(z) = [ kμ,mn(z)     δmn ∆m(z)  ]
           [ δmn ∆∗m(z)   −kμ,nm(z)  ].   (35)
the matrix elements of hμ(z) are
kμ,mn(t±) = tmneiγmn(t)
∆m(t±) = ∆m(t)
(36)
for z = t± on the horizontal branches and
kμ,mn(−iτ) = tmneiγmn −μδmn
∆m(−iτ) = ∆m
(37)
for z = −iτ on the imaginary track. since hμ(−iτ) is
independent of τ we write hμ(−iτ) = h0 −μσ with
σmn = σz1mn and σz the third pauli matrix.
in the next section we show that the solution of the
equations of motion is equivalent to first solve the static
bogoliubov-degennes (bdg) equations and then their
td version.
4.
keldysh components and bogoliubov-degennes equations
we introduce the left and right contour evolution ma-
trices sr/l(z) which satisfy
i d/dz sr(z) = hμ(z) sr(z),   (38)

−i d/dz′ sl(z′) = sl(z′) hμ(z′),   (39)

with boundary conditions sr/l(0−) = 1. the most general solution of the equations of motion (32,33) can then be written as

g(z; z′) = sr(z) [θ(z; z′) g> + θ(z′; z) g<] sl(z′),   (40)
with g>−g< = −i1 and the contour heaviside function
θ(z; z′) = 1 if z is later than z′ and zero otherwise. equa-
tion (40) is a solution for all matrices g> = −i1 + g<.
in order to determine g> or g< we use the boundary
conditions
g(0−; z′) = −g(−iβ; z′),
(41)
g(z; 0−) = −g(z; −iβ),
(42)
which follow directly from the definitions (27-29) of the
negf. using eq. (40) one finds g(0−; z′) = g<sl(z′)
and g(−iβ; z′) = sr(−iβ)g>sl(z′) from which we con-
clude that
g< = −sr(−iβ)g>.
(43)
similarly, from eq. (42) one finds
g> = −g<sl(−iβ).
(44)
exploiting the fact that hμ(−iτ) = h0 −μσ is con-
stant along the imaginary track one readily realizes that
sr/l(−iβ) = exp[±β(h0 −μσ)] and hence
g< = i / (1 + exp[β(h0 − μσ)]).   (45)
from the exact solution (40) we can extract any observ-
able quantity at times t ≥0 and not only its limiting
behavior at t →∞.
below we calculate the different
components of the negf.
we introduce the eigenstates ψq, with eigenenergies
eq, of the matrix h0 −μσ. the vector ψq = [uq, vq]
is a two-dimensional vector in the nambu space and, by
definition, satisfies the eigenvalue problem
Σn tmn e^{iγmn} uq(n) + ∆m vq(m) = (eq + μ) uq(m),   (46)

−Σn tnm e^{iγnm} vq(n) + ∆∗m uq(m) = (eq − μ) vq(m).   (47)
due to the presence of the pairing field the components
uq and vq are coupled and the eigenstates ψq are a
mixture of one-particle spin-up electron states and spin-
down hole states.
we will refer to the eigenstates ψq
as bogolons.
the above equations have the structure
of the static bdg equations which follow from the bcs
approximation.48,49 in our case eqs. (46,47) follow from
scdft43 and therefore yield the exact equilibrium bond-
current and pairing density provided that the exact ks
phases and pairing fields are used.
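for a finite lattice with m sites, eqs. (46,47) amount to diagonalizing a 2m × 2m hermitian nambu matrix. a minimal numpy sketch of this step (assuming a given hermitian matrix k of peierls-dressed hoppings and a local pairing field delta, both supplied only for illustration) reads:

import numpy as np

def static_bdg(k, delta, mu):
    """diagonalize h0 - mu*sigma of eqs. (46,47) for a finite lattice.

    k     : (m, m) hermitian matrix with elements t_mn * exp(i*gamma_mn)
    delta : (m,) local pairing field delta_m
    mu    : chemical potential
    returns eigenenergies e_q and nambu components u_q(m), v_q(m)."""
    m = k.shape[0]
    h = np.zeros((2 * m, 2 * m), dtype=complex)
    h[:m, :m] = k - mu * np.eye(m)              # spin-up electron block
    h[:m, m:] = np.diag(delta)                  # pairing block
    h[m:, :m] = np.diag(np.conj(delta))
    h[m:, m:] = -k.conj() + mu * np.eye(m)      # spin-down hole block, -k^T + mu
    e, psi = np.linalg.eigh(h)                  # hermitian diagonalization
    return e, psi[:m, :], psi[m:, :]            # columns of psi are the bogolons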
inserting the complete set of eigenstates in eq. (40)
and taking into account eq. (45) we find the following
expansion for the negf
g(z; z′) = i Σq sr(z) ψq [θ(z; z′) f>(eq) + θ(z′; z) f<(eq)] ψ†q sl(z′),   (48)
where f <(ω) = 1/[1 + exp(βω)] is the fermi function
and f >(ω) = f <(ω)−1. taking z and z′ on the real axis
but on different branches of the keldysh contour, we can
extract the lesser and greater component of the negf.
we first notice that for z = t± the contour evolution
operators reduce to the standard evolution operators, i.e.,
sr(t±) = s(t) and sl(t±) = s†(t) with
i d/dt s(t) = h(t) s(t),   s(0) = 1,   (49)
and h(t) = hμ(t±), see eq. (36). then, in terms of the
evolved states ψq(t) = s(t)ψq with components ψq(t) =
[uq(t), vq(t)] we find
g≶(t; t′) ≡ g(t∓; t′±) = [ g≶↑(t; t′)   −f≷,t(t′; t)  ]
                          [ f≶(t; t′)    −g≷,t↓(t′; t) ]

= i Σq f≶(eq) [ uq(t) u†q(t′)   uq(t) v†q(t′) ]
              [ vq(t) u†q(t′)   vq(t) v†q(t′) ],   (50)
where the superscript t in f≷,t and g≷,t
↓
denotes the
transpose of the matrix, see also eq. (34). the func-
tions uq(t) and vq(t) can be determined by solving a cou-
pled system of first-order differential equations.
from
eq. (49) it follows that
i d/dt uq(m, t) = Σn tmn e^{iγmn(t)} uq(n, t) + ∆m(t) vq(m, t),   (51)

i d/dt vq(m, t) = −Σn tnm e^{iγnm(t)} vq(n, t) + ∆∗m(t) uq(m, t),   (52)
which have the structure of the td bdg equations.26,50
as in the static case, however, the solution of eqs. (51-
52) yields the exact densities and not their bcs approx-
imation.
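for a finite (truncated) system, eqs. (51,52) can be integrated with any norm-conserving scheme; a minimal sketch of a single cayley (crank-nicholson-like) step on the full nambu matrix is given below, purely for illustration — the treatment of genuinely semi-infinite leads is postponed to section iii.

import numpy as np

def td_bdg_step(u, v, k_t, delta_t, dt):
    """one cayley step of eqs. (51,52) for a finite cluster.

    u, v    : (m,) components of a bogolon at the current time
    k_t     : (m, m) hermitian matrix t_mn * exp(i*gamma_mn(t)) at the mid-step time
    delta_t : (m,) pairing field at the mid-step time
    dt      : time step"""
    m = len(u)
    h = np.block([[k_t, np.diag(delta_t)],
                  [np.diag(np.conj(delta_t)), -k_t.conj()]])
    psi = np.concatenate([u, v])
    one = np.eye(2 * m)
    psi = np.linalg.solve(one + 0.5j * dt * h, (one - 0.5j * dt * h) @ psi)
    return psi[:m], psi[m:]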
we notice that for the ks system to reproduce the
time-independent densities of an interacting system in
equilibrium it must be
∆m(t) = e−2iμt∆m
(53)
for which one finds the solutions uq(t) = e−i(eq+μ)tuq
and vq(t) = e−i(eq−μ)tvq. the above time-dependence of
the pairing field is the same as in the bcs approximation.
using eq.
(50) the retarded (r) and advanced (a)
negf are
gr/a(t; t′) ≡ ±θ(±t ∓ t′) [g>(t; t′) − g<(t; t′)] = ∓i θ(±t ∓ t′) s(t) s†(t′),   (54)

with components

gr/a_mn(t; t′) = [ gr/a↑,mn(t; t′)   −fa/r_nm(t′; t)  ]
                 [ fr/a_mn(t; t′)    −ga/r↓,nm(t′; t) ].   (55)
it follows that g≶(t; t′) can also be written as
g≶(t; t′) = gr(t; 0) g≶(0; 0) ga(0; t′).
(56)
d.
application to quantum transport
we here apply the above formalism to systems de-
scribed by α = 1, . . . , n bulk superconducting leads in
contact with a central region c which can be, e.g., a
quantum dot, a molecule or a nanostructure. assuming
no direct coupling between the leads the hamiltonian
hμ is written in terms of its projections on different sub-
spaces as
hμ = Σ_{α=1}^{n} hμ,αα + hμ,cc + Σ_{α=1}^{n} (hμ,αc + hμ,cα),   (57)
where hμ,αα describes the α-th lead, hμ,cc the nanos-
tructure c and hμ,αc + hμ,cα the coupling between
lead α and c.
we assume region c to be a constric-
tion so small that the bulk equilibrium of the leads is
not altered by the coupling to c. furthermore we con-
sider time-dependent perturbations which correspond to
the switching on of a longitudinal electric field in lead
α. the time to screen the external electric field in the
leads is in the plasmon time-scale region. if we are in-
terested in external fields which vary on a much longer
time-scale it is reasonable to expect that the leads remain
in local equilibrium. therefore the coarse-grained time
evolution of the system can be described by the following
td hamiltonian hμ(t±) = h(t)
hαα(t) = exp(−iμtσz) hαα(0) exp(iμtσz),   (58)

hαc(t) = exp[i ∫_0^t dt̄ uα(t̄) σz] hαc(0),   (59)

hcα(t) = [hαc(t)]†.   (60)
we do not specify the time dependence of hcc(t) since
it can be any, see below. the td field uα(t) is the sum of
the external and hartree field and is homogeneous, i.e.,
it does not carry any dependence on the internal struc-
ture of the leads, in accordance with the above discus-
sion. it has been shown that for macroscopic leads the
assumption of homogeneity is verified with rather high
accuracy.51
as for the case of normal leads the equations of motion
for the keldysh-green's function can be solved by an em-
bedding procedure. we define the uncontacted green's
function g which obeys the equations of motion (32,33)
with hμ,αc = hμ,cα = 0 and the same boundary con-
ditions as g.
then, the equation of motion for gcc
projected onto regions cc takes the form
(i d/dz 1cc − hμ,cc(z)) gcc(z; z′) = 1cc δ(z − z′) + ∫ dz̄ σ(z; z̄) gcc(z̄; z′),   (61)

where the embedding self-energy is expressed in terms of g as

σ(z̄; z̄′) = Σ_{α=1}^{n} σα(z̄; z̄′) = Σ_{α=1}^{n} hμ,cα(z̄) gαα(z̄; z̄′) hμ,αc(z̄′).   (62)
the above equation of motion is defined on the keldysh
contour of fig. 1. converting eq. (61) in equations for
real times results in a set of coupled equations known as
kadanoff-baym equations34,52–56 recently implemented
to study transient responses of interacting electrons in
model molecular junctions.51,57 the use of the kadanoff-
baym equations to address transient and relaxation ef-
fects in other contexts has been pioneered by schäfer,58 bonitz et al.,59 and binder et al.60
the importance of using an uncontacted green's func-
tion g with boundary conditions (41,42) for a proper de-
scription of g≶(t; t′) at finite times has been discussed
elsewhere in the context of transient regimes36,51 and it
has been shown that it leads to coupled equations be-
tween the keldysh-green's function with two real times
and those with one real and one imaginary time.
in the next section we propose a wave-function based
propagation scheme to solve eq. (61) for td hamiltoni-
ans of the form (58-60).
iii.
numerical algorithm
we consider semi-infinite periodic leads with a super-
cell of dimension n α
cell for lead α. the projected hamil-
tonian h0,αα = hαα(0) can then be organized as follows
h0,αα = [ hα    tα    0α    . . . ]
        [ t†α   hα    tα    . . . ]
        [ 0α    t†α   hα    . . . ]
        [ . . . . . . . . . . . . ],   (63)

where hα is the 2nαcell × 2nαcell nambu hamiltonian of the supercell with matrix structure

hα = [ ǫα     ∆α   ]
     [ ∆∗α   −ǫtα  ],   (64)

while tα describes the contact between two nearest-neighbor supercells. since the pairing field is local the off-diagonal terms of tα are zero and therefore the general structure of the hopping matrix is

tα = [ tα    0α   ]
     [ 0α   −ttα  ].   (65)

the matrices ǫα, ∆α and tα in hα and tα have the dimension of the unit cell, i.e., nαcell × nαcell. in particular ∆α is a diagonal matrix.
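for concreteness, for a one-dimensional s-wave lead with one orbital per site the blocks of eqs. (63-65) can be assembled as in the following sketch; the on-site energy, hopping and pairing values used here are illustrative only.

import numpy as np

def lead_blocks(ncell, eps=0.0, hop=1.0, delta=1.0):
    """build the nambu supercell hamiltonian h_alpha and hopping t_alpha,
    eqs. (64,65), for a 1d tight-binding lead with a local pairing field."""
    e = eps * np.eye(ncell) + hop * (np.eye(ncell, k=1) + np.eye(ncell, k=-1))
    d = delta * np.eye(ncell)                       # local (diagonal) pairing
    t = np.zeros((ncell, ncell)); t[-1, 0] = hop    # last site -> first site of next cell
    h_alpha = np.block([[e, d], [np.conj(d), -e.T]])
    t_alpha = np.block([[t, np.zeros_like(t)], [np.zeros_like(t), -t.T]])
    return h_alpha, t_alpha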
a.
calculation of initial states
given the above structure of the leads hamiltonian
the eigenstates of h0 −μσ can be grouped in scattering
states with incoming bogolons from lead α = 1, . . . , n
and andreev bound states (abs).
1.
scattering states
the lead α is characterized by energy bands eα
ν (p)
with ν = 1, . . . , 2n α
cell and p ∈(0, π).
for a given p
the energies eα
ν (p) are the solutions of the eigenvalue
problem
[hα + tα e^{ip} + t†α e^{−ip} − μσα] uα_νp = eα_ν(p) uα_νp   (66)
with u α
νp the nambu-bloch eigenvectors. we write the
index of the localized orbital φm as m = s, j, α; here s
labels the orbital within the supercell, j the supercell and
α the lead. the index s runs between 1 and n α
cell while
the supercell index j = 0, . . . , ∞. the scattering state for
an incoming bogolon from lead α has the general form
ψα_νp(m) = { uα_νp(s) e^{−ipj} + Σρ rαα_νp,ρ wαα_νp,ρ(s) e^{iqαα_νp,ρ j}   for m = s, j, α
           { ψα_νp,c(m)                                                  for m ∈ c
           { Σρ tαβ_νp,ρ wαβ_νp,ρ(s) e^{iqαβ_νp,ρ j}                     for m = s, j, β ≠ α   (67)
with reflection coefficients r and transmission coefficients
t . the momenta qαβ
νp,ρ (for all leads β including β = α)
are associated to states with energy e = eα
ν (p) and can
therefore be obtained from the roots of
det[hβ + tβ e^{iq} + t†β e^{−iq} − μσβ − e 1β] = 0.   (68)
the above equation admits, in general, complex solu-
tions for q.
in eq.
(67) the sums over ρ run over
real solutions q for which the sign of the fermi velocity
vβ
ρ (q) = ∂eβ
ρ (q)/∂q is opposite to the sign of the fermi
velocity vα
ν (p) of the incoming bogolon and over all com-
plex solutions q for which im[q] > 0 (evanescent states).
once the qαβ
νp,ρ are known the bloch state w αβ
νp,ρ is sim-
ply the eigenvector with zero eigenvalue of the matrix
hβ + tβeiqαβ
νp,ρ + t†
βe−iqαβ
νp,ρ −μσβ −e1β.
for the cal-
culation of the reflection and transmission coefficients as
well as of the amplitude ψα
νp,c(m) in the central region
we extended a recently proposed wave-guide approach.61
the method is based on projecting the schr ̈
odinger equa-
tion (h0 −μσ)ψ = eψ onto the central region and onto
all the supercells in contact with the central region, i.e.,
with j = 0. the projection onto a j = 0 supercell leads
to an equation which couples the amplitude of ψ in j = 0
with that in j = 1. exploiting the analytic form of the
eigenstate in eq.
(67) the amplitude in the leads can
entirely be expressed in terms of the unknown r's and
t 's for all j. in this way the equations can be closed
and the problem is mapped into a simple linear system
of equations for the unknown rαβ
νp,ρ, t αβ
νp,ρ and ψα
νp,c(m).
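one convenient way to obtain the complex momenta entering eq. (67) is to recast the determinant condition (68) as a quadratic eigenvalue problem in λ = e^{iq}; a minimal sketch of this step is given below. for brevity the roots are classified only by |λ|; the group-velocity selection of the propagating solutions described above is not implemented here.

import numpy as np
from scipy.linalg import eig

def lead_momenta(h_a, t_a, sigma_a, mu, energy):
    """roots of det[h + t*lam + t^dag/lam - mu*sigma - e] = 0, lam = exp(i*q).

    multiplying by lam gives t*lam**2 + (h - mu*sigma - e)*lam + t^dag = 0,
    which is linearized into a generalized eigenvalue problem."""
    n = h_a.shape[0]
    a1 = h_a - mu * sigma_a - energy * np.eye(n)
    zero, one = np.zeros((n, n)), np.eye(n)
    big_a = np.block([[zero, one], [-t_a.conj().T, -a1]])
    big_b = np.block([[one, zero], [zero, t_a]])
    lam, vec = eig(big_a, big_b)            # lam = exp(i*q), vec[:n] = bloch amplitude w
    evanescent = np.abs(lam) < 1 - 1e-8     # im[q] > 0, decaying for j -> +infinity
    propagating = np.abs(np.abs(lam) - 1) < 1e-8
    return lam, vec[:n], evanescent, propagating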
2.
andreev bound states
the presence of a gap in the spectrum of the supercon-
ducting leads may lead to the formation of localized abs
within the gap. the procedure to calculate the abs is
slightly different from the one previously presented since
the abs energy is not an input parameter and the abs
state is normalized to 1 over the whole system. the en-
ergy eb of an abs ψb is outside the lead continua. pro-
jecting the schr ̈
odinger equation (h0 −μσ)ψb = ebψb
onto different regions and solving for the projection ψb,c
in region c one finds (heff
0,cc(eb)−μσcc)ψb,c = ebψb,c
where
heff
0,cc(e)=h0,cc+
x
α
h0,cα
1
e−(h0,αα−μσαα)h0,αc.
(69)
the abs energies eb can then be extracted from the
roots of det[heff
0,cc(e) −μσcc −e1cc] = 0 and the
eigenvector with zero eigenvalue of heff
0,cc(eb)−μσcc −
eb1cc is proportional to the projection ψb,c of the abs
in region c. we call cb the unknown constant of propor-
tionality. as for the scattering states we can construct
the abs everywhere in the system according to
ψb(m) = { Σρ bα_b,ρ wα_b,ρ(s) e^{iqα_b,ρ j}   for m = s, j, α
        { ψb,c(m)                             for m ∈ c.   (70)
the momenta qα
b,ρ and bloch states w α
b,ρ are calculated
in the same way as for the scattering states. by definition
all momenta have a finite imaginary part and the sum in
eq. (70) runs over those with a positive imaginary part.
the constants bα
b,ρ can be simply obtained by project-
ing the schr ̈
odinger equation (h0 −μσ)ψb = ebψb onto
the supercells in contact with region c, i.e., with j = 0.
the resulting equation couples the amplitude of ψb in
j = 0 with that in j = 1 and with the known amplitude
cbψb,c(m). exploiting the analytic form of ψb in the
leads the amplitude in j = 1 can entirely be expressed
in terms of the constants cbbα
b,ρ thus yielding a linear
system of equations for each lead. once the cbbα
b,ρ are
known the constant of proportionality cb is fixed by im-
posing that the abs is normalized to 1.
this can be
easily done since the sums over j are geometrical series.
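a minimal sketch of the abs search is given below. to keep it self-contained the semi-infinite leads entering eq. (69) are replaced by long but finite chains — an approximation made only for illustration (it produces spurious structure near the discrete lead levels); the paper works with truly semi-infinite leads.

import numpy as np

def abs_energies(h_cc, h_calpha, h_alphac, h_leads, sigma_cc, sigma_leads,
                 mu, energies):
    """locate abs from sign changes of det[h_eff(e) - mu*sigma_cc - e], eq. (69).

    h_leads[a] is a large finite matrix standing in for semi-infinite lead a;
    h_calpha[a], h_alphac[a] are the corresponding coupling blocks."""
    dets = []
    for e in energies:
        h_eff = h_cc.astype(complex).copy()
        for a in range(len(h_leads)):
            g_lead = np.linalg.inv(e * np.eye(h_leads[a].shape[0])
                                   - (h_leads[a] - mu * sigma_leads[a]))
            h_eff += h_calpha[a] @ g_lead @ h_alphac[a]
        m = h_eff - mu * sigma_cc - e * np.eye(h_cc.shape[0])
        dets.append(np.linalg.det(m).real)      # hermitian matrix -> real determinant
    dets = np.array(dets)
    crossings = np.where(np.sign(dets[:-1]) != np.sign(dets[1:]))[0]
    return [(energies[i] + energies[i + 1]) / 2 for i in crossings]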
b.
embedded crank-nicholson propagation
scheme
to propagate the generic eigenstate ψ of h0 −μσ we
extend the embedded crank-nicholson37,38 scheme to su-
perconducting leads.
the equations of motion (51,52)
can be written in a compact form as
i d/dt ψ(t) = h(t) ψ(t),   ψ(0) = ψ,   (71)
where the components of the td hamiltonian are given
in eqs. (58-60). we first perform the gauge transfor-
mation ψα(t) = exp[−iμσααt]φα(t) for the projection of
the state ψ onto lead α and ψc(t) = φc(t) for region c.
the state φ(t) obeys the equation
i d/dt φ(t) = h̃(t) φ(t),   φ(0) = ψ,   (72)

with

h̃αα(t) = hαα(0) − μσαα,   (73)

h̃αc(t) = exp[i (μt + ∫_0^t dt̄ uα(t̄)) σαα] hαc(0),   (74)

and h̃cc(t) = hcc(t).
the advantage of the gauge
transformed equations is that the lead hamiltonian is
now independent of time.
we discretize the time as tm = 2mδ and define φ(m) = φ(tm) and h̃(m) = [h̃(tm+1) + h̃(tm)]/2. the differential operator in eq. (72) is then approximated by the cayley propagator

(1 + iδ h̃(m)) φ(m+1) = (1 − iδ h̃(m)) φ(m).   (75)
the above propagation scheme is known as crank-
nicholson algorithm and it is norm-conserving and ac-
curate up to second order in δ. as the matrix ̃
h is in-
finite dimensional the direct implementation of eq. (75)
is not possible. a significant progress can be done using
an embedding procedure which, as we shall see, entails
perfect transparent boundary conditions at the interfaces
between region c and leads α. projecting eq. (75) onto
lead α and iterating one finds
φ(m+1)α = g^{m+1}αα φ(0)α − [iδ / (1αα + iδ h̃αα)] Σ_{j=0}^{m} g^{j}αα h̃(m−j)αc [φ(m+1−j)c + φ(m−j)c],   (76)

where we have defined the propagator

gαα = (1αα − iδ h̃αα) / (1αα + iδ h̃αα),   (77)

and made use of the fact that h̃αα(t) ≡ h̃αα is time-independent.
the time-dependence of the contacting hamiltonian can be easily extracted from eq. (74) and reads

h̃(m)αc = {exp[iμ(m+1)α σαα] + exp[iμ(m)α σαα]}/2 · h̃αc(0),   (78)

where we have defined

μ(m)α = μtm + ∫_0^{tm} dt̄ uα(t̄).   (79)
at this point comes a crucial observation which allows
for extending the propagation scheme of refs. 37,38 to
the superconducting case. since the pairing field is local
in the chosen basis the off-diagonal part of the contacting
hamiltonian is zero and hence h̃cα σαα = σcc h̃cα. it follows that eq. (78) can also be rewritten as

h̃(m)αc = h̃αc(0) {exp[iμ(m+1)α σcc] + exp[iμ(m)α σcc]}/2 ≡ h̃αc(0) z̄(m)α,   (80)

which implicitly defines the matrices z̄(m)α = (z(m)α)∗.
next we project eq.
(75) onto region c and use eq.
(76) to express the φα at a given time step in terms of
the φc at all previous time steps. the resulting equation
is
(1cc + iδ h̃(m)eff) φ(m+1)c = (1cc − iδ h̃(m)eff) φ(m)c + Σα [s(m)α + m(m)α]   (81)
and contains only quantities with the dimension of region
c. we emphasize that eq. (81) is an exact reformulation
of the original eq. (75) but it has the advantage of being
implementable. indeed, exploiting the result in eq. (80)
the boundary term s(m)α and memory term m(m)α read

s(m)α = −iδ z(m)α h̃cα(0) g^mαα (1αα + gαα) φ(0)α,   (82)
m(m)α = −δ² Σ_{j=0}^{m−1} z(m)α [q(j+1)α + q(j)α] z̄(m−1−j)α [φ(m−j)c + φ(m−1−j)c],   (83)
while the effective hamiltonian is given by

h̃(m)eff = h̃(m)cc − iδ Σα z(m)α q(0)α z̄(m)α,   (84)
where the embedding matrices q(m)α have twice the dimension of region c and are defined according to

q(m)α = h̃cα(0) (1αα − iδ h̃αα)^m (1αα + iδ h̃αα)^{−(m+1)} h̃αc(0).   (85)
in appendix a we describe a recursive scheme to calcu-
late the embedding matrices. in appendix b we further
show that the boundary term s(m)
α
can be expressed in
terms of the qα's thus rendering eq. (81) a well defined
equation for time propagations.
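putting eqs. (80)-(84) together, a single time step of the embedded scheme can be sketched as follows. the boundary terms s_m of eq. (82), the embedding matrices q of eq. (85) and the phase matrices z, z̄ are assumed to be precomputed; the sketch only displays the memory convolution of eq. (83) and the linear solve of eq. (81).

import numpy as np

def embedded_cn_step(phi_hist, h_cc_m, q, z_m, zbar_hist, s_m, delta):
    """advance phi_c by one step according to eq. (81).

    phi_hist  : list [phi_c^(0), ..., phi_c^(m)] of past central-region states
    h_cc_m    : averaged central hamiltonian h~_cc^(m)
    q[a][j]   : embedding matrices q_a^(j) of eq. (85), j = 0..m
    z_m[a]    : phase matrix z_a^(m); zbar_hist[a][j] = zbar_a^(j)
    s_m[a]    : boundary term s_a^(m) of eq. (82)
    delta     : time-step parameter of eq. (75)"""
    m = len(phi_hist) - 1
    nc = len(phi_hist[0])
    one = np.eye(nc)
    # effective hamiltonian, eq. (84)
    h_eff = h_cc_m - 1j * delta * sum(z_m[a] @ q[a][0] @ zbar_hist[a][m]
                                      for a in range(len(q)))
    rhs = (one - 1j * delta * h_eff) @ phi_hist[m]
    for a in range(len(q)):
        # memory term, eq. (83)
        mem = np.zeros(nc, dtype=complex)
        for j in range(m):
            mem += z_m[a] @ (q[a][j + 1] + q[a][j]) @ zbar_hist[a][m - 1 - j] \
                   @ (phi_hist[m - j] + phi_hist[m - 1 - j])
        rhs += s_m[a] - delta**2 * mem
    return np.linalg.solve(one + 1j * delta * h_eff, rhs)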
in the next section we apply the numerical scheme to
uf-jnj model systems and obtain results for the td
densities and currents.
iv.
real-time simulations of s-d-s
junctions
due to the vast phenomenology of s-d-s junctions it
is not possible to address these systems in a single work.
furthermore the analysis of the time-dependent regime
is generally more complex than that in the josephson
regime and it is therefore advisable to first gain some
insight by investigating simple cases. our intention in
this section is to demonstrate the feasibility of the prop-
agation scheme and to present genuine td properties of
simple model systems.
we consider a tight-binding chain (region c) with
nearest neighbor hopping tc and on-site energy ǫc con-
nected to a left (l) and right (r) wide-band leads. the
α = l, r lead is described by a semi-infinite tight-binding
chain with nearest neighbor hopping tα and a constant
pairing field ∆α, and is coupled to the α end-point of
the central chain through its surface site with a hop-
ping tcα = tαc. the system is initially in equilibrium
at temperature t = 0 and chemical potential μ = 0
and driven out of equilibrium by a td bias voltage
uα(t) applied to lead α at positive times.
from sec-
tion ii d, the hamiltonian for this kind of systems reads ĥ(t) = Σα (ĥαα(t) + ĥαc(t) + ĥcα(t)) + ĥcc where

ĥαα(t) = Σ_{j=0}^{∞} [ tα Σσ (ĉ†_{j+1σα} ĉ_{jσα} + h.c.) + (e^{−2iμt} ∆α ĉ†_{j↑α} ĉ†_{j↓α} + h.c.) ]   (86)
describes the lead α = l, r,
ĥlc(t) = tlc e^{i∫_0^t dt′ ul(t′)} Σσ ĉ†_{0σl} ĉ_{0σ} + h.c.   (87)

ĥrc(t) = trc e^{i∫_0^t dt′ ur(t′)} Σσ ĉ†_{0σr} ĉ_{nσ} + h.c.   (88)
accounts for the coupling between region c and the leads,
and
ĥcc = tc Σ_{m=0}^{n−1} Σσ (ĉ†_{m+1σ} ĉ_{mσ} + h.c.) + ǫc Σ_{m=0}^{n} Σσ ĉ†_{mσ} ĉ_{mσ}   (89)
is the hamiltonian of the chain with n + 1 atomic sites.
the currents jl(t) ≡j0l,0(t) and jr(t) ≡jn,0r(t)
through the bonds connecting the chain to the left and
right leads are obtained from eq. (30) and eq. (50) and
read
jl(t) = −i tlc e^{iγlc(t)} [ Σq f<(eq) uq(0l, t) u∗q(0, t) − Σq f>(eq) vq(0, t) v∗q(0l, t) ] + h.c.,   (90)

jr(t) = −i t∗rc e^{−iγrc(t)} [ Σq f<(eq) uq(0, t) u∗q(0r, t) − Σq f>(eq) vq(0r, t) v∗q(0, t) ] + h.c.,   (91)
where γαc(t) = i
r t
0 dt′uα(t′) and the sum over q runs
over all abs and scattering states. similarly, the pairing
density pm(t) on an arbitrary site of the chain is obtained
from eq. (31) and eq. (50) and reads
pm(t) = Σq f<(eq) uq(m, t) v∗q(m, t) e^{2iǫct}.   (92)
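given the propagated amplitudes, eqs. (90) and (92) are simple sums over the evolved states; a short sketch of their evaluation follows. the array layout u[q, site, step] and the indices i_0l, i_0 of the lead surface site and of site 0 of the chain are conventions introduced here only for illustration.

import numpy as np

def left_current(u, v, e_q, t_lc, gamma_lc, f_less, f_gtr, i_0l, i_0, step):
    """eq. (90): current through the bond connecting lead l to site 0."""
    term = (f_less(e_q) * u[:, i_0l, step] * np.conj(u[:, i_0, step])
            - f_gtr(e_q) * v[:, i_0, step] * np.conj(v[:, i_0l, step])).sum()
    j = -1j * t_lc * np.exp(1j * gamma_lc) * term
    return 2 * j.real                       # adding the hermitian conjugate

def pairing_density(u, v, e_q, f_less, eps_c, m, t, step):
    """eq. (92): pairing density on site m of the chain."""
    return (f_less(e_q) * u[:, m, step] * np.conj(v[:, m, step])).sum() \
           * np.exp(2j * eps_c * t)

# at zero temperature one may take, e.g., f_less = lambda e: (e < 0).astype(float)
# and f_gtr = lambda e: f_less(e) - 1.0, as in the text below eq. (48).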
we will write the pairing field as ∆α = ξα e^{iχα} ∆ and measure energies in units of ∆, times in units of ħ/∆ and currents in units of |e|∆/ħ, with |e| the absolute charge of the carriers. since we consider wide-band leads with tα ≫ tαc, tc and the chemical potential is set to zero, the results depend only on the ratio γα ≡ 2t²αc/tα
the following we therefore specify the value of γα only.
in practical calculations the longitudinal vector p ∈(0, π)
of the scattering states, see eq. (67), is discretized with
np mesh points and only states with energy within the
range (μ−λ, μ+ λ) are propagated in time. we will call
np,α the number of scattering states from lead α that
are propagated. the cutoffλ is chosen about an order
of magnitude larger than the typical energy scales of the
problem, i.e., uα, γα, ∆α, tc, ǫc.
a.
the single-level quantum dot model
the single-level quantum dot (qd) model corresponds
to a central chain with only one atomic site (n = 0). for
∆l = ∆r = 0 (n-qd-n) the td response of this system
has been investigated by several authors and an analytic
formula for the td current is also available.36,62,63 scarce
attention, however, has been devoted to the system with
one superconducting lead29 (n-qd-s) and to the best
of our knowledge the only available results when both
leads are superconducting (s-qd-s) have been published
in ref. 30.
1.
n-qd-s model under dc bias
we first consider the n-qd-s case schematically illus-
trated in fig. 2(a). to highlight the different scattering
mechanisms we shift the central level by ǫc = 0.5, choose
weak couplings to the leads γl = γr = 0.2, and drive
the system out of equilibrium by applying four different
biases ul = 0.3, 0.6, 0.9, 1.2 to the left normal lead. for
biases in the subgap region, i.e., ul < ∆r = 1, transport
is dominated by andreev reflections (ar). in fig. 2(b)
we show the currents jl(t) and jr(t) of eqs. (90,91).
for ul = 0.3 < ǫc the ar are strongly suppressed since
electrons at the left electrochemical potential μl = ul
have just enough energy to enter the resonant window
(ǫc −2γ, ǫc + 2γ), where 2γ = γl + γr. resonant ar
can occur for ul > ǫc and constitute the dominant mech-
anism for electron tunneling. this is clearly visible in the
second panel of fig. 2(b) where the steady-state values
of jr for ul = 0.6 and ul = 0.9 are approximatively
the same. at larger biases ul = 1.2 > ∆r electrons can
also tunnel via standard quasi-particle scattering and the
steady-state current increases. this interpretation is con-
firmed by the behavior of the pairing density p0(t) on the
qd, third panel of fig. 2(b). for times up to ∼5 the
pairing density decreases since pre-existent cooper pairs
in lead r move away from the qd. however, while |p0(t)|
remains below its equilibrium value at ul = 0.3, for all
other biases, ul > ǫc, |p0(t)| increases after t ∼5, mean-
ing that a cooper pair is forming at the interface. we
also notice that the values of |p0(t →∞)| for ul = 0.9
and ul = 1.2 are very close while the corresponding cur-
rents jr differ appreciably. this is again in agreement
with the fact that electrons with energy larger than ∆r
do not undergo ar and thus no extra cooper pairs are
formed. finally we observe that the transient regime is
longer in the n-qd-s case than in the n-qd-n case, see
inset in panel 2 and 3 of fig. 2(b), as also pointed out in
ref. 29.
2.
s-qd-s model under dc bias
fig. 2: (a) schematic of the transport setup. a single-level qd with on-site energy εc = 0.5 is weakly connected (γl = γr = 0.2) to a left normal lead and a right superconducting lead. in equilibrium both temperature t and chemical potential μ are zero. the system is driven out of equilibrium by a step-like voltage bias ul = 0.3, 0.6, 0.9, 1.2 in the normal lead. for ul < ∆r the dominant scattering mechanism is the ar in which an electron is reflected as a hole and a cooper pair is formed in lead r. (b) time-dependent current at the left interface (first panel), right interface (second panel) and absolute value of the pairing density on the qd (third panel). the insets show the td current for the same parameters but ∆r = 0, i.e., for a normal r lead. the results are obtained with a time-step δ = 0.05, cutoff λ = 6 and a number of scattering states np,l = 1070, np,r = 1056.
we now turn to the more interesting case in which the qd is connected to a left and right superconducting lead (s-qd-s), see fig. 3(a).
we focus on sym-
metric couplings γl = γr = γ = 1 and on pairing
fields ∆l = ∆reiχ = eiχ with the same magnitude
but different phase.
fig. 3: (a) schematic of the s-qd-s model with γl = γr = 1.0, ∆l = ∆r = 1, and ǫc = 0. this system admits two abs in the gap. the abs energy depends on the superconducting phase difference χ as illustrated in the inset. (b-c) time-dependent current jl(t) at the left interface as a function of time for (b) ul = 3.0, 2.0, 1.0 [the curves corresponding to bias ul = n.0 are shifted upward by 0.3(n−1)] and (c) ul = 0.5, 0.4, 0.3, 0.2 [the curves corresponding to bias ul = 0.n are shifted upward by 0.6(n−2)]. the results are obtained with a time-step δ = 0.05, cutoff λ = 12.1, and a number of scattering states np,l = np,r = 768 for panel (b) and δ = 0.05, λ = 4, np,l = np,r = 788 for panel (c).
this system always supports two andreev bound states (abs) in the gap. their energy can be obtained analytically from the solution of det[heff_0,cc(e) − μσcc − e1cc] = 0 (see section iii a 2) which, in terms of the dimensionless variables x = e/∆, γ = γ/∆ and e = (ǫc − μ)/∆, reads

x²(1 + γ/√(1 − x²))² − e² − α²γ²/(1 − x²) = 0,   (93)

where α = √[(1 + cos χ)/2] varies in the range (0, 1). in fig. 3(a) we plot the solutions of eq. (93) as a function of χ
for ǫc = μ = 0. in equilibrium and at zero temperature
one abs is fully occupied and the other is empty. at time
t = 0 a constant bias ul is applied to the left lead. in
fig. 3(b) we display the td current at the left interface
jl(t) for χ = 0 and ul = 3, 2, 1. after a transient the
current oscillates in time with period tj = 2π/(2ul),
as expected.
for ul > 2 the s-qd-s system behaves
similarly to a macroscopic josephson junction with an
almost pure monochromatic response, albeit the average
value jdc of the current over a period is different from
zero. for ul = 1 < 2∆, i.e., in the subgap region, the
transient regime becomes much longer and jl(t) deviates from a perfect monochromatic function. at ul = 1 the dominant scattering mechanism is the single ar.
fig. 4: (a) discrete fourier transform of jl(t) in arbitrary units [the curves corresponding to bias ul = 0.n are shifted upward by 0.7(n − 3) while that corresponding to bias ul = 1.0 is shifted upward by 2.8]. (b) values of the average current for biases in the subgap region. (c) abs contribution to the current jl(t) for biases ul = 0.2, 0.3, 0.4, 0.5, 0.6 [the curves corresponding to bias ul = 0.n are shifted upward by 0.8(n − 2)]. the numerical parameters are the same as in fig. 3.
as discussed in ref. 15 the presence of the resonant
level modifies substantially the jdc −v (v = ul −ur)
characteristic and for γ = 1 the subharmonic gap struc-
ture is almost entirely washed out. however, a very rich
structure is observed in the td current. in fig. 3(c)
we display jl(t) for biases ul = 0.5, 0.4, 0.3, 0.2. the
charge carriers undergo multiple ar (mar) before ac-
quiring enough energy and escaping from the qd. the
dwelling time increases with decreasing bias and the tran-
sient current has a highly non-trivial behavior before the
josephson regime sets in. from the simulations in fig.
3(c) at bias ul = 0.2 the propagation time t = 250 is
not sufficient for the development of the josephson oscil-
lations. we also observe that the smaller is the bias the
larger is the contribution of high-order harmonics, which
is in contrast with what one would naively expect from linear
response theory.
in fig.
4(a) we display the fourier transform of
jl(t)−jdc in the josephson regime. replica of the main
josephson frequency ωj = 2ul are clearly visible for
ul < ∆. the values of jdc as obtained from time propa-
gation are reported in fig. 4(b) and are consistent with
a smeared sub-harmonic gap structure.
from the curves jl(t) it is not evident how to estimate
the duration of the transient time. we found it useful to
look at the contribution of the abs, jl,abs, to the total
current jl, since jl,abs(t →∞) = 0. this quantity is
evaluated from eq. (90) by restricting the sum over q to
the abs and is shown in fig. 4(c). abs play a crucial role in the relaxation mechanism as we shall see in the next section.
fig. 5: time-dependent current at the right interface jr (first panel) as well as the density n0 (second panel) and pairing density |p0| (third panel) on the qd. the curves from bottom to top correspond to a switch-off time t(n)off = 5π + nπ/8, with n = 0, 1, 2, 3, 4. since the bias is ul = 1 the accumulated phase difference χ(n) at the end of the pulse is χ(n) = 2t(n)off = nπ/4. for the switch-off time t(n)off the curves of jr are shifted upward by 0.3n, those of n0 by 0.5n and those of |p0| by 0.2n. the results are obtained with a time-step δ = 0.05, cutoff λ = 12.1, and a number of scattering states np,l = np,r = 768.
3.
s-qd-s model under dc pulses
as mentioned in the introduction the possibility of em-
ploying uf-jnj in future electronics rely on our under-
standing of their td properties. in the previous section
we studied the transient behavior of a s-qd-s system
under the sudden switch-on of an applied bias. equally
important is to study how the system responds when the
bias is switched off. we therefore consider the same s-
qd-s model as before with γl = γr = 1, ǫc = 0,
∆l = ∆r = 1 initially in equilibrium at zero temper-
ature and chemical potential. at time t = 0 a constant
bias ul = 1 is applied to lead l until the time toffat
which the bias is switched off. how does the system re-
lax?
in fig.
5 we show the current jr at the right
interface as well as the density n0 and pairing density
|p0| on the qd for switch-off times t(n)off = 5π + nπ/8 with n = 0, 1, 2, 3, 4. despite the fact that the switch-off times are all very close [t(0)off ∼ 15.71 and t(4)off ∼ 17.28]
the system reacts in different ways and actually relaxes
only in one case. the strong dependence on toffis due
to the two abs in the gap. similarly to what happens
in normal systems64 the asymptotic (t →∞) form of the
density on the qd is
n0(t) − n0,cont ∼ Σ_{ij} fij cos((ǫ(i)abs − ǫ(j)abs) t),   (94)
where ǫ(i)
abs, i = 1, 2, are the abs eigenenergies of the
hamiltonian after the bias has been switched offand
n0,cont is the contribution of the continuum states to
the density.
the coefficients fij = fji are matrix ele-
ments of the fermi function f( ˆ
h(0)) calculated at the
equilibrium hamiltonian and depend on the history of
the applied bias.65,66 contrary to the normal case, how-
ever, the energy of the abs depends on when the bias
is switched offsince after a time toffthe phase difference
χ changes from zero to 2ultoff. this fact together with
eq. (94) explains the persistent oscillations at different
frequencies.
indeed χ(n) = 2ul t(n)off = nπ/4 and from fig. 3(a) we see that [ǫ(1)abs(χ(n)) − ǫ(2)abs(χ(n))] varies
from ∼1.08 to zero when n varies from zero to 4. the
amplitude of the oscillations as well as the average value
of the density n0, however, do not depend only on χ but
also on the history of the applied bias.
two different
biases ul(t) and u′l(t) yielding the same phase difference χ = 2∫_0^{toff} dτ ul(τ) = 2∫_0^{toff} dτ u′l(τ) give rise to
different persistent oscillations, albeit with the same fre-
quency.
from the results of this section we conclude that for
devices coupled to superconducting leads a small differ-
ence in the switch-offtime of the bias can cause a large
difference in the relaxation time of the device. this prop-
erty may be exploited to generate zero bias ac currents
of tunable frequency.
4.
s-qd-s model under ac bias
the time-propagation approach has the merit of not
being limited to step-like biases as it can deal with
any td bias at the same computational cost. of spe-
cial importance is the case of ac biases where a mi-
crowave radiation ur sin(ωrt) is superimposed to a dc sig-
nal v = ul −ur. the study of uf-jnj in the presence
of microwave radiation started with the work of cuevas
et al.67 who predicted the occurrence of subharmonic
shapiro spikes in the jdc −v characteristic of supercon-
ducting point contacts. later on zhu et al.68 extended
the analysis to the s-qd-s model and discuss how the
abs modify the jdc −v characteristic. the replicas of
the shapiro spikes have been experimentally observed69
and can be explained in terms of photon-assisted mul-
tiple andreev reflections.
fig. 6: (a) td current at the left interface for ul = 0, ur = 0.05ωr with ωr = 0.5, 1.08 [the curve is shifted upward by 0.4], and 1.5 [the curve is shifted upward by 0.8]. (b) abs and continuum contribution to the total current in the resonant case ωr = 1.08, ur = 0.05ωr and ul = 0. (c) pairing potential on the qd for the same parameters as in panel (b). the results are obtained with a time-step δ = 0.05, cut-off λ = 4, and a number of scattering states np,l = np,r = 788.
using a generalized floquet formalism one can show that in the long-time limit67

jl(t) = Σ_{mn} j^n_m(v, γ, ωr) e^{i(mωj + nωr)t}   (95)
where γ = ur/ωr and ωj = 2v is the josephson fre-
quency. the calculation of jn
m is, in general, rather com-
plicated and to the best of our knowledge the full td
profile of jl(t) as well as the duration of the transient
time before the photon-assisted josephson regime sets in
have not been addressed before.
we here consider the s-qd-s model with γl = γr = 1,
εc = 0, ∆l = ∆r = 1 under a dc bias and in the presence
of a superimposed microwave radiation ul(t) = ul +
ur sin(ωrt) and ur = 0. in fig. 6(a) we display the td
current at the left interface for fixed γ = ur/ωr = 0.05
and different values of the frequency ωr = 0.5, 1.08, 1.5.
the first striking feature is the occurrence of a transient
resonant effect at ωr = 1.08 ∼ ωabs ≡ ǫ(1)abs − ǫ(2)abs. at
the resonant frequency the amplitude of the oscillations
increases linearly in time till a maximum value ∼0.3.
the fourier decomposition (not shown) reveals that the
peak at ω = 1.08 splits into two peaks, one above and one
below 1.08, which is consistent with the observed beat-
ing. the effect is absent at larger (ωr = 1.5) and smaller
(ωr = 0.5) frequencies for which the amplitude of the
oscillations remains below 0.05 and two main harmonics,
one at ωr and the other at ωabs, are visible in the fourier
decomposition (not shown). the peak at ω = ωabs is due
to a transient excitation with a long life-time and cannot
be described using floquet based approaches.
the abs play a crucial role in determining the td
profile of jl at the resonant frequency. the total current
jl(t) = jl,cont(t) + jl,abs(t) is the sum of the current
jl,cont coming from the evolution of the continuum states
and the abs current jl,abs(t). these two currents are
shown in fig. 6(b) from which it is evident that abs
carry an important amount of current not only in the dc
josephson effect30,70 but also in the transient regime. in
fig. 6(c) we show the pairing density on the qd for the
resonant frequency ωr = 1.08.
in the presence of an external bias the abs contribute
to the current only in the transient regime. the duration
of the transient is investigated in fig. 7 where we show
jr,abs for dc biases with a superimposed microwave
radiation described by ul(t) = ul + ur sin(ωrt), with
ur = 0.05ωr, ωr = 1.08, and ul = 0.0, 0.03, 0.1, 0.3.
the interplay between the ac josephson effect and the
resonant microwave driving leads to complicated td pat-
terns for small ul.
increasing ul the life-time of the
quasi abs decreases resulting in a fast damping of the
oscillations, see fig. 7 with ul = 0.3.
b.
long atomic chains
we consider a chain of n + 1 = 21 atomic sites with
onsite energy ǫc = 0 and nearest neighbor hopping tc =
1, see eq.
(89), symmetrically coupled, γl = γr =
γ, to superconducting electrodes with |∆l| = |∆r| =
∆. in the limit of long chains one can prove that the
current phase relation (at zero bias) is linear if tc =
γ/2.30,70 this is the so called ishii's sawtooth behavior71
and is due to perfect ar. to better visualize the mar
in the transient regime we therefore choose tc = γ/2. in
equilibrium there are 16 abs in the gap. at time t = 0
the system is driven out of equilibrium by a dc bias ul
applied to lead l.
in fig.
8 we display the contour plot of the cur-
rents jn,n+1(t) along the bond (n, n + 1) of region c
as a function of time for different values of ul
=
2∆/4, 2∆/3, 2∆/2.
the mar pattern is illustrated
with black arrows.
there is a clear-cut transient sce-
nario during which electrons undergo n ar before the
ac josephson regime sets in, with n = ul/2∆. at every
ar the current increases since the electrons are mainly
reflected as holes and holes as electrons. the same nu-
merical simulation in a normal system would have given a
current in region 1ar smaller than the current in region
0ar.
for the same system parameters we also considered a
dc bias ul = 0.8 for which the dominant scattering mech-
anism is the 3-rd order ar.
fig. 7: abs contribution to the current at the right interface for dc biases with a superimposed microwave radiation described by ul(t) = ul + ur sin(ωrt), with ur = 0.05ωr, ωr = 1.08 and ul = 0.0, 0.03, 0.1, 0.3. the system is the same as in fig. 6 with ∆l = ∆r = 1, γl = γr = 1 and ǫc = 0. the time-step is δ = 0.05.
fig. 8: td picture of mar. a chain of 21 atomic sites is symmetrically connected with γl = γr = 2tc = 2 to two identical superconducting leads with ∆l = ∆r = 1. a dc bias ul = 2∆/n, n = 4, 3, 2, is applied to lead l at time t = 0. the panels show the contour plots of the bond-current jn,n+1(t) across the atomic bonds of region c. the results are obtained with a time-step δ = 0.05, cut-off λ = 4 and a number of scattering states np,l = np,r = 1232.
the contour plot of the bond current is displayed in the top-left panel of fig. 9 and is
similar to the case ul = 2∆/3 of fig. 8. a new scat-
tering channel does, however, open if a microwave radia-
tion of appropriate frequency is superimposed to ul. we
therefore applied an ac bias ur(t) = ur sin(ωrt) to lead
r and choose ωr to fulfill 2ul + ωr = 2∆, i.e., ωr = 0.4.
in fig. 9 we report the contour plot of the bond-current
for different values of ur = 0.0, 0.1, 0.3, 0.5. at ur ̸= 0
the right-going wave-front reduces its intensity just af-
ter crossing the bond 10 due to scattering against the
left-going wave-front from lead r, see the characteris-
tic λ-shape in the bottom-right panel. when the right-
going wave-front hits the right interface the bond cur-
rent sharply increases. furthermore, the larger is ur the
shorter is the transient regime. this can be explained as
follows. at large ur the dominant scattering mechanism
is the one in which an electron from lead l and energy
ul is reflected as a hole and at the same time absorbs
a photon of energy ωr. the energy of the reflected hole
is 2ul + ωr = 2∆, no extra ar are needed for charge
transfer and the photon-assisted josephson regime sets
in.
v.
conclusions and outlooks
in this paper we proposed a one-particle framework
and a propagation scheme to study the td response of
uf-jnj. by projecting the continuum hamiltonian onto
a suitable set of localized states we reduced the problem
to the solution of a discrete system in which the electro-
magnetic field is described in terms of peierls phases. the
latter provide the basic quantities to construct a density
functional theory of superconducting (and as a special
case normal) systems. we proved that under reasonable
conditions the td bond current and pairing density of an
interacting system driven out of equilibrium by peierls
phases γ(t) can be reproduced in a system of noninter-
acting ks electrons under the influence of peierls phases
γ′(t) and pairing field ∆′(t) and that γ′(t) and ∆′(t) are
unique. we considered the ks system initially in equilib-
rium at given temperature and chemical potential when
at time t = 0 an external electromagnetic field is switched
on. to calculate the response of the system at times t > 0
we used a non-equilibrium formalism in which the normal
and anomalous propagators are defined on an extended
keldysh contour that includes a purely imaginary (ther-
fig. 9: photon-assisted mar in a chain of 21 atomic sites.
the equilibrium parameters are the same as in fig. 8. an ac
bias ur = ur sin(ωrt) in lead r is superimposed to a dc bias
ul = 0.8 in lead l. the panels show the contour plots of the
bond-current jn,n+1(t) across the atomic bonds of region c
for different values of ur = 0.0, 0.1, 0.3, 0.5 and ωr = 0.4.
the results are obtained with a time-step δ = 0.05, cut-off
λ = 4 and a number of scattering states np,l = np,r = 1232.
mal) path going from 0 to −iβ.
we showed that the
solution of the equations of motion for the negf are
equivalent to first solve the static bdg equations and
then the td bdg equations.
it is worth emphasizing
that in tdscdft the bdg equations do not follow from
the bcs approximation and that their solution yields the
exact bond-current and pairing density of an interacting
system provided that the exact ks peierls phases and
pairing field are used.
for systems consisting of n superconducting leads in
contact with a finite region c and driven out of equilib-
rium by a longitudinal electric field a numerical algorithm
is proposed. the initial eigenstates are obtained from a
recent generalized wave-guide approach properly adapted
to the superconducting case.61 the initial states are prop-
agated in time using an embedded crank-nicholson al-
gorithm which is norm-conserving, accurate up to sec-
ond order in the time-step and that exactly incorpo-
rates transparent boundary conditions. the propagation
scheme reduces to the one of refs. 37,38 in the case of
normal leads.
the method described in this work allows for obtain-
ing the td current across an uf-jnj and hence to fol-
low the time evolution of several ar until the josephson
regime sets in. as a first calculation of these kind we
explored in detail the popular single-level qd model in
the weak and intermediate coupling regime. we demon-
strated that the transient time increases with decreasing
bias and provided a quantitative picture of the mar. the
rich structure of the transient regime is due to the abs
which play a crucial role in the relaxation process. for dc
pulses we showed that abs can be exploited to generate
zero bias ac currents of tunable frequency. furthermore,
irradiating the biased system with a microwave field of
appropriate frequency the abs give rise to a long-living
transient resonant effect. the transient regime increases
also with the length of the junction. we considered one-
dimensional atomic chains coupled to superconducting
leads under dc and ac biases. here we showed that in
conditions of perfect ar there exists a clear-cut transient
scenario for mar. for biases ul = 2∆/n the dominant
scattering channel is the n-th order ar and the transient
regime lasts for about nn/vc where n is the length of
the chain and vc the electron velocity at the fermi level.
similar considerations apply to photon assisted mar. a
more careful analysis of the transient regime is beyond
the scope of the present paper. however such analysis is
of utmost importance if the ultimate goal of supercon-
ducting nanoelectronics is to use these devices for ultra-
fast operations.
the td properties presented in this work have been
obtained using rather simple, yet so far unexplored, mod-
els.
a more sophisticated description of the hamilto-
nian is, however, needed for a quantitative parameter-
free comparison with experiments. theoretical advances
also involve the development of approximate function-
als for the self-consistent calculation of the td pairing
potential and peierls phases. self-consistent calculations
have so far been restricted to equilibrium s-d-s mod-
els with a point-like attractive interaction treated in the
bcs approximation.72–75 for biased systems, however,
the pairing potential and peierls phases must be treated
on equal footing and a first step in this direction would
be the bcs approximation for the pairing field and the
hartree-fock approximation for the peierls phases. more
difficult is the study of uf-jnj in the coulomb blockade
regime for which electron correlations beyond hartree-
fock must be incorporated.
finally, the approach presented in this work is not lim-
ited to two terminal systems. the coupling of the cen-
tral region to a third normal lead, or gate, allows for
controlling the josephson current by varying the gate
voltage.25,76,77 these systems can be potentially used for
fast switches and transistors,78,79 and a microscopic un-
derstanding of their ultrafast properties is therefore nec-
essary to optimize their functionalities.
appendix a: calculation of the embedding matrices
without loss of generality we include a few layers of each lead in the explicitly propagated region c. then, the embedding matrix q(m)α is zero everywhere except in the block of dimension 2nαcell × 2nαcell which is connected to the α lead. denoting with q(m)α such non-vanishing block in q(m)α we have

q(m)α = tα [(1αα − iδ h̃αα)^m (1αα + iδ h̃αα)^{−(m+1)}]_{0,0} t†α,   (a1)
where the subscript (0, 0) denotes the first diagonal block
(supercell with j = 0) of the matrix in the square brack-
ets. we notice that from eq. (73) the matrix ̃
hαα is the
same as the matrix hαα(0) in eq. (63) but with renor-
malized diagonal blocks ̃
hα = hα −μσα.
in order to
compute the q(m)
α
's we introduce the generating matrix
function
qα(x, y) ≡ tα [(x1αα + iyδ h̃αα)^{−1}]_{0,0} t†α,   (a2)
which can also be expressed in terms of continued matrix
fractions
qα(x, y) = tα [x1α + iyδ h̃α + y²δ² tα [x1α + iyδ h̃α + y²δ² tα [· · ·] t†α]^{−1} t†α]^{−1} t†α

= tα [x1α + iyδ h̃α + y²δ² qα(x, y)]^{−1} t†α ≡ tα pα(x, y) t†α,   (a3)
where the last step is an implicit definition of pα(x, y).
the q(m)
α
's are obtained from the generating matrix func-
tion as
q(m)α = tα (1/m!) [(−∂/∂x + ∂/∂y)^m pα(x, y)]_{x=y=1} t†α = tα p(m)α t†α.   (a4)
using the identity (1/m!) [−∂/∂x + ∂/∂y]^m [p^{−1}α(x, y) pα(x, y)] = 0, we derive the following recursive scheme

(1α + iδ h̃α) p(m)α = (1α − iδ h̃α) p(m−1)α − δ² Σ_{k=0}^{m} (q(k)α + 2q(k−1)α + q(k−2)α) p(m−k)α   (a5)
with p(m)α = q(m)α = 0 for m < 0. the above relation can be used to calculate q(m)α provided that all p(k)α are known for k < m. to obtain p(0)α we can use eq. (a3) with x = y = 1, in which the continued fraction is truncated after a number nlevel of levels. convergence can be easily checked by increasing nlevel.
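as a cross-check of the recursion, the q(m)α can also be evaluated directly from eq. (a1) once the semi-infinite lead is replaced by a finite but long chain of supercells; this truncation is introduced here only for illustration and is not the procedure used in the text. a minimal sketch reads:

import numpy as np

def embedding_matrices(h_a, t_a, mu_sigma_a, nlayers, delta, mmax):
    """evaluate q_a^(m) of eq. (a1) on a lead truncated to nlayers supercells."""
    nb = h_a.shape[0]
    n = nb * nlayers
    h = np.zeros((n, n), dtype=complex)
    for j in range(nlayers):                     # block-tridiagonal lead, eq. (63)
        h[j*nb:(j+1)*nb, j*nb:(j+1)*nb] = h_a - mu_sigma_a
        if j + 1 < nlayers:
            h[j*nb:(j+1)*nb, (j+1)*nb:(j+2)*nb] = t_a
            h[(j+1)*nb:(j+2)*nb, j*nb:(j+1)*nb] = t_a.conj().T
    inv_plus = np.linalg.inv(np.eye(n) + 1j * delta * h)
    cayley = (np.eye(n) - 1j * delta * h) @ inv_plus
    x = inv_plus                                 # (1 + i*delta*h)^(-1)
    q_list = []
    for m in range(mmax + 1):
        q_list.append(t_a @ x[:nb, :nb] @ t_a.conj().T)   # (0,0) block of eq. (a1)
        x = cayley @ x                           # next power of the cayley factor
    return q_list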
appendix b: calculation of the boundary term
from eq. (81) we see that in order to propagate an
eigenstate of h0 −μσ we need to know the boundary
term defined in eq. (82). the state φ(0) can be either a
scattering state or an abs. as shown in section iii a the
projection onto lead α of a generic eigenstate with energy
e can be written as a linear combination of states of the
form
φα_k(m = s, j, α) = zα_k(s) e^{ikj},   (b1)

where the amplitude zα_k satisfies the eigenvalue equation

[hα + tα e^{ik} + t†α e^{−ik} − μσα] zα_k = e zα_k.   (b2)
in the following we show how to compute the action of
the operator h̃cα(0) g^mαα (1αα + gαα) on φα_k. we define the nambu vector in region c

φα(m)_c,k ≡ h̃cα(0) g^mαα (1αα + gαα) φα_k = 2 h̃cα(0) (1αα − iδ h̃αα)^m (1αα + iδ h̃αα)^{−(m+1)} φα_k,   (b3)
from which the boundary term can easily be extracted by
taking the appropriate linear combination of the φα(m)
c,k
and then multiplying by −iδz(m)
α
, see eq. (82). since re-
gion c includes few layers of the leads the vector φα(m)
c,k
is
zero everywhere except for the components correspond-
ing to orbitals in contact with lead α. if we call φα(m)
c,k
the vector with such components from eq. (b3) we can
write
φα(m)
c,k
= 2tα
1αα −iδ ̃
hαα
m
1αα + iδ ̃
hαα
m+1 φα
k
j=0
≡2tαv α(m)
k
,
(b4)
where the subscript j = 0 in the square brackets denotes
the vector of dimension 2n α
cell with components given by
the projection of the full vector onto the first (j = 0)
supercell. as for the embedding matrices we introduce
the generating function
v α
k (x, y) =
1
x1αα + iyδ ̃
hαα
φα
k
j=0
(b5)
from which the v α(m)
k
are obtained via multiple deriva-
tives
v α(m)
k
= 1
m!
−∂
∂x + ∂
∂y
m
v α
k (x, y)
x=y=1
.
(b6)
the generating function can be obtained as follows. tak-
ing φα
k as in eq. (b1) and exploiting the property in eq.
(b2) it is easy to realize that
h
̃
hααφα
k
i
j = (e −δj,0e−ikt†
α) [φα
k]j ,
(b7)
where the subscript j denotes the vector of dimension
2n α
cell with components given by the projection of the
full vector onto the j-th supercell. then, multiplying the
dyson identity
1
x1αα + iδy ̃
hαα
= 1
x −iyδ
x
1
x1αα + iyδ ̃
hαα
̃
hαα
(b8)
on the right by φα
k , using eq.
(b7) and solving for
v α
k (x, y) we obtain the following result
v α
k (x, y) =
1 + iyδe−ikpα(x, y)t†
α
x + iyδe
zα
k ,
(b9)
where pα(x, y) is the generating function defined in eq.
(a3). the quantity v α(m)
k
can now be obtained from eq.
(b6) and reads

vα(m)_k = [(1 − iδe)^m / (1 + iδe)^{m+1}] zα_k + iδ e^{−ik} Σ_{n=0}^{m} [(1 − iδe)^{m−n} / (1 + iδe)^{m−n+1}] [p(n)α + p(n−1)α] t†α zα_k.   (b10)

this concludes the calculation of the boundary term.
1 c. j. lambert and r. raimondi, j. phys.: condens. mat-
ter 10, 901 (1998).
2 y. makhlin, g. sch ̈
on, and a. shnirman, rev. mod. phys.
73, 357 (2001).
3 k. k. likharev superconductor devices for ultrafast com-
puting in applications of superconductivity, ed. h. wein-
stock, kluwer (1999).
4 r. s. sorbello, solid state phys. 51, 159 (1997).
5 t. n. todorov, j. hoekstra, and a. p. sutton, phys. rev.
lett. 86, 3606 (2001).
6 m. di ventra, s. t. pantelides, and n. d. lang, phys. rev.
lett. 88, 046801 (2002).
7 c. verdozzi, g. stefanucci, and c.-o. almbladh, phys.
rev. lett. 97, 046603 (2006).
8 j. clarke and f. k. wilhelm, nature 453, 1031 (2008).
9 a. zazunov, v. s. shumeiko, e. n. bratus, j. lantz, and
g. wendin, phys. rev. lett. 90, 087003 (2003).
10 g. wendin and v. s. shumeiko, low temp. phys. 33, 724
(2007).
11 c. buizert, a. oiwa, k. shibata, k. hirakawa, and
s. tarucha, phys. rev. lett. 99, 136806 (2007).
12 d. c. ralph, c. t. black, and m. tinkham, phys. rev.
lett. 74, 3241 (1995).
13 e. n. bratus, v. s. shumeiko, and g. wendin, phys. rev.
lett. 74, 2110 (1995).
14 m. c. koops, g. v. van duyneveldt, and r. de bruyn ouboter, phys. rev. lett. 77, 2542 (1995).
15 a. l. yeyati,
j. c. cuevas,
a. lopez-davalos,
and
a. martin-rodero, phys. rev. b 55, r6137 (1997).
16 g. johansson,
e. n. bratus,
v. s. shumeiko,
and
g. wendin, phys. rev. b 60, 1382 (1999).
17 j. c. cuevas, a. martin-rodero, and a. l. yeyati, phys.
rev. b 54, 7366 (1996).
18 q. f. sun, h. guo, and j. wang, phys. rev. b 65, 075315
(2002).
19 c. w. j. beenakker, phys. rev. lett. 67, 3836 (1991).
20 u. gunsenheimer, u. schüssler, and r. kümmel, phys. rev. b 49, 6111 (1994).
21 r. fazio and r. raimondi, phys. rev. lett. 80, 2913
(1998).
22 a. a. clerk and v. ambegaokar, phys. rev. b 61, 9109
(2000).
23 y. avishai, a. golub, and a. d. zaikin, phys. rev. b 67,
041301 (2003).
24 e. vecino, a. martin-rodero, and a. l. yeyati, phys. rev.
b 68, 035105 (2003).
25 m. governale, m. g. pala, and j. k ̈
onig, phys. rev. b 77,
134513 (2008).
26 r. k ̈
ummel, z. phys. 218, 472 (1969).
27 h. d. raedt, k. michielsen, and t. m. klapwijk, phys.
rev. b 50, 631 (1994).
28 a. jacobs and r. k ̈
ummel, phys. rev. b 64, 104515
(2001).
29 y. xing, q. f. sun, and j. wang, phys. rev. b 75, 125308
(2007).
30 e. perfetto, g. stefanucci, and m. cini, phys. rev. b 80,
205408 (2009).
31 o.-j. wacker, r. k ̈
ummel, and e. k. u. gross, phys. rev.
lett. 73, 2915 (1994).
32 r. van leeuwen, phys. rev. lett. 82, 3863 (1999).
33 g. vignale, phys. rev. b 70, 201102(r) (2004).
34 p. danielewicz, ann. phys. (n.y.) 152, 239 (1984).
35 m. wagner, phys. rev. b 44, 6104 (1991).
36 g. stefanucci and c.-o. almbladh, phys. rev. b 69,
195318 (2004).
37 s. kurth, g. stefanucci, c.-o. almbladh, a. rubio, and
e. k. u. gross, phys. rev. b 72, 035308 (2005).
38 g. stefanucci, e. perfetto, and m. cini, phys. rev. b 78,
075425 (2008).
39 e. runge and e. k. u. gross, phys. rev. lett. 52, 997
(1984).
40 s. k. ghosh and a. k. dhara, phys. rev. a 38, 1149
(1988).
41 at present there is no rigorous formulation of standard
tddft in discrete systems. the difficulty here consists
in finding simple criteria for the existence of a ks system,
see c. verdozzi, phys. rev. lett. 101, 166401 (2008) and
references therein.
42 for a discussion on general initial configurations the reader
is referred to refs. 34,35 and d. semkat, d. kremp and m.
bonitz, phys. rev. e 59, 1557 (1998); k. morawetz, m.
bonitz, v. g. morozov, g. r ̈
opke, d. kremp, phys. rev.
e 63, 020102 (2001).
43 n. n. oliveira, e. k. u. gross, and w. kohn, phys. rev.
lett. 60, 2430 (1988).
44 r. van leeuwen, n. e. dahlen, g. stefanucci, c. o. alm-
bladh, and u. von barth, lect. notes phys. 706, 33 (2006).
45 l. v. keldysh, jetp 20, 1018 (1965).
46 y. nambu, phys. rev. 117, 648 (1960).
47 j. rammer and h. smith, rev. mod. phys. 58, 323 (1986).
48 a. f. andreev, sov. phys. jetp 19, 1228 (1964).
49 p. g. de gennes, superconductivity of metals and alloys
(benjamin, new york, 1966).
50 r. k ̈
ummel, in physics and applications of mesoscopic
josephson junctions, edited by h. ohta and c. ishii (the
physical society of japan, tokyo, 1999), p. 19.
51 p. my ̈
oh ̈
anen, a. stan, g. stefanucci, and r. van leeuwen,
phys. rev. b 80, 115107 (2009).
52 l. p. kadanoffand g. baym, quantum statistical mechan-
ics (benjamin, new york, 1962).
53 n.-h. kwong and m. bonitz, phys. rev. lett. 84, 1768
(2000).
54 n. e. dahlen and r. van leeuwen, phys. rev. lett. 98,
153004 (2007).
55 a. stan, n. e. dahlen, and r. van leeuwen, j. chem.
phys. 130, 224101 (2009).
56 m. p. von friesen, c. verdozzi, and c.-o. almbladh, phys.
rev. lett. 103, 176404 (2009).
57 p. my ̈
oh ̈
anen, a. stan, g. stefanucci, and r. van leeuwen,
europhys. lett. 84, 67001 (2008).
58 w. sch ̈
afer, j. opt. soc. am. b 13, 1291 (1996).
59 m. bonitz, d. kremp, d. c. scott, r. binder, w. d.
kraeft, and h. s. k ̈
hler, j. phys.: condens. matter 8,
6057 (1996).
60 r. binder, h. s. k ̈
ohler, m. bonitz, and n. kwong, phys.
rev. b 55, 5110 (1997).
61 g. stefanucci, e. perfetto, s. bellucci, and m. cini, phys.
rev. b 79, 073406 (2009).
62 a.-p. jauho, n. s. wingreen, and y. meir, phys. rev. b
50, 5528 (1994).
63 g. schaller, p. zedler, and t. brandes, phys. rev. a 79,
032110 (2009).
64 g. stefanucci, phys. rev. b 75, 195115 (2007).
65 e. khosravi, s. kurth, g. stefanucci, and e. k. u. gross,
applies phys. a 93, 355 (2008).
66 e. khosravi, g. stefanucci, s. kurth, and e. k. u. gross,
phys. chem. chem. phys. 11, 4535 (2009).
67 j. c. cuevas, j. heurich, a. martin-rodero, a. l. yeyati,
and g. sch ̈
on, phys. rev. lett. 88, 157001 (2002).
68 y. zhu, w. li, z. s ma, and t. h lin, phys. rev. b 69,
024518 (2004).
69 m. chauvin, p. vom stein, h. pothier, p. joyez, m. e.
huber, d. esteve, and c. urbina, phys. rev. lett. 97,
067006 (2006).
70 i. affleck, j. s. caux, and a. m. zagoskin, phys. rev. b
62, 1433 (2000).
71 c. ishii, prog. theor. phys. 44, 1525 (1970).
72 a. martin-rodero, f. j. garcia-vidal, and a. l. yeyati,
phys. rev. lett. 72, 554 (1994).
73 a.
spuntarelli,
p.
pieri
and
g.
c.
strinati,
con-
mat/0911.4026.
74 a. m. martin and j. f. annett, phys. rev. b 57, 8709
(1998).
75 j. j. hogan-o'neill, a. m. martin, and j. f. annett, phys.
rev. b 60, 3568 (1999).
76 f. k. wilhelm, g. sch ̈
on, and a. d. zaikin, phys. rev.
lett. 81, 1682 (1998).
77 p. samuelsson, j. lantz, v. s. shumeiko, and g. wendin,
phys. rev. b 62, 1319 (2000).
78 t. akazaki, h. takayanagi, and j. nitta, appl. phys. lett.
68, 418 (1996).
79 j. j. a. baselmans, a. f. morpurgo, b. j. van wees, and
t. m. klapwijk, nature 397, 43 (1999).
|
0911.1719 | bosonic colored group field theory | bosonic colored group field theory is considered. focusing first on dimension
four, namely the colored ooguri group field model, the main properties of
feynman graphs are studied. this leads to a theorem on optimal perturbative
bounds of feynman amplitudes in the "ultraspin" (large spin) limit. the results
are generalized in any dimension. finally integrating out two colors we write a
new representation which could be useful for the constructive analysis of this
type of models.
| introduction
group field theories (gft's) or quantum field theories over group manifolds were introduced in
the beginning of the 90's [1, 2] as generalizations of matrix and tensor field theories [3, 4]. they
are currently under active investigation as they provide one of the most complete definitions of
spin foam models, themselves acknowledged as good candidates for a background independent
quantum theory of gravity [5, 6]. also they are more and more studied per se as a full theory
of quantum gravity [4, 7, 8]. gft's are gauge invariant theories characterized by a non-local
interaction which pairs the field arguments in a dual way to the gluing of simplicial complexes.
hence, the feynman diagrams of a gft are fat graphs, and their duals are triangulations of
(pseudo)manifolds made of vertices, edges, faces and higher dimensional simplices discretizing a
particular spacetime. the first and simplest gft's [1, 2] have feynman amplitudes which are
products of delta functions on the holonomies of a group connection associated to the faces of the
feynman diagram. these amplitudes are therefore discretizations of the topological bf theory.
but the gft is more than simply its perturbative feynman amplitudes. a quantum field
theory formulated by a functional integral also assigns a precise weight to each feynman graph,
and should resum all such graphs. in particular, it sums over different spacetime topologies1 thus
providing a so-called "third quantization". however a difficulty occurs, shared by other discrete
approaches to quantum gravity: since a gft sums over arbitrary spacetime topologies, it is not
clear how the low energy description of such a theory would lead to a smooth and large manifold,
namely our classical spacetime (see [7] for more details).
in order to overcome this difficulty we propose to include the requirement of renormaliz-
ability as a guide [8]. the declared goal would be to find, using a novel scale analysis, which of
the gft models leads to a physically relevant situation after passing through the renormaliza-
tion group analysis. let us mention that a number of non-local noncommutative quantum field
theories have been successfully renormalized in the recent years (see [9] and references therein).
even though the gft's graphs are more complicated than the ribbon graphs of noncommutative
field theories, one can hope to extend to gft's some of the tools forged to renormalize these
noncommutative quantum field theories.
recent works have addressed the first steps of a renormalization program for gft's [7, 8]
and for spin foam models [10, 11] combining topological and asymptotic large spin (also called
"ultraspin") analysis of the amplitudes. a first systematic analysis of the boulatov model was
started in [7]. the authors have identified a specific class of graphs called "type i" for which
a complete procedure of contraction is possible.
the exact power counting for these graphs
has been established. they also formulated the conjecture that these graphs dominate in the
ultraspin regime. the boulatov model was also considered in [8] as well as its freidel-louapre
constructive regularization [12]. here, feynman amplitudes have been studied in the large cutoff
limit and their optimal bounds have been found (at least for graphs without generalized tadpoles,
see appendix below). these bounds show that the freidel-louapre model is perturbatively more
divergent than the ordinary one. at the constructive field theory level, borel summability of
the connected functions in the coupling constant has been established via a convergent "cactus"
expansion, together with the correct scaling of the borel radius.
these two seminal works on the general scaling properties in gft [7, 8] were restricted
to three dimensions. the purpose of this paper is to extend the perturbative bounds of [8] to
any dimension. a difficulty (which was overlooked in [8], see the appendix of this paper) comes
from the fact that the power counting of the most general topological models is governed by
"generalized tadpoles".
1 in this sense, gft's are quantum field theories of the spacetime and not solely on a spacetime [4].
in the meantime, a fermionic colored gft model possessing an su(d + 1) symmetry
in dimension d has been introduced by gurau [13], together with the homology theory of the
corresponding colored graphs. in contradistinction with the general bosonic theory, the "bubbles"
of this theory can be easily identified and tadpoles simply do not occur.
apart from the su(d + 1) symmetry, these nice features are shared by the bosonic version of this colored model, which is the one considered in this paper. we prove that a vacuum graph of such a theory is bounded in dimension 4 by k^n \lambda^{9n/2}, where \lambda is the "ultraspin" cutoff and n the number of vertices of the graph. for any dimension d, similar bounds are also derived. we
also take the first steps towards the constructive analysis of the model.
the paper's organization is as follows. section 2 is devoted to the definition of the colored
models, the statement of some properties of their feynman graphs, and the perturbative bounds
which can be proved in any dimension. section 3 further analyses the model by integrating out
two particular colors. this leads to a matthews-salam formulation which reveals an interesting
hidden positivity of the model encouraging for a constructive analysis. a conclusion is provided
in section 4 and an appendix discusses the not yet understood case of graphs with tadpoles in
general gft's.
2
perturbative bounds of colored models
in this section, we introduce the colored ooguri model or colored su(2) bf theory in four
dimensions2. some useful (feynman) graphical properties are stated and allow us to bound a
general feynman amplitude. these bounds are then generalized to any dimension.
2.1
the colored ooguri model
the dynamical variables of a d dimensional gft are fields, defined over d copies of a group g.
for the moment, let us specialize to d = 4 and g = su(2), hence to ooguri-type models.
in the colored bosonic model, these fields are themselves d + 1 complex valued functions \phi^l, l = 1, 2, . . . , d + 1 = 5. the upper index l denotes the color index of the field \phi = (\phi^1, . . . , \phi^5).
the fields are required to be invariant under the "diagonal" action of the group g
\phi^l(g_1 h, g_2 h, g_3 h, g_4 h) = \phi^l(g_1, g_2, g_3, g_4), \qquad h \in g,
\bar\phi^l(g_1 h, g_2 h, g_3 h, g_4 h) = \bar\phi^l(g_1, g_2, g_3, g_4), \qquad h \in g,   (1)
but are not symmetric under any permutation of their arguments. we will use the shorthand notation \phi^l_{\alpha_1,\alpha_2,\alpha_3,\alpha_4} := \phi^l(g_{\alpha_1}, g_{\alpha_2}, g_{\alpha_3}, g_{\alpha_4}).
the dynamics is traditionally written in terms of an action
s[\phi] := \int \prod_{i=1}^{4} dg_i \sum_{l=1}^{5} \bar\phi^l_{1,2,3,4}\, \phi^l_{4,3,2,1}
  + \lambda_1 \int \prod_{i=1}^{10} dg_i\; \phi^1_{1,2,3,4}\, \phi^5_{4,5,6,7}\, \phi^4_{7,3,8,9}\, \phi^3_{9,6,2,10}\, \phi^2_{10,8,5,1}
  + \lambda_2 \int \prod_{i=1}^{10} dg_i\; \bar\phi^1_{1,2,3,4}\, \bar\phi^5_{4,5,6,7}\, \bar\phi^4_{7,3,8,9}\, \bar\phi^3_{9,6,2,10}\, \bar\phi^2_{10,8,5,1}   (2)
supplemented by the gauge invariance constraints (1); \bar\phi^l \phi^l is a quadratic mass term. integrations are performed over copies of g using products of invariant haar measures dg_i of this group, and \lambda_{1,2} are coupling constants.
2su(2) is chosen here for simplicity. the so(4) or so(3, 1) bf theories could be treated along the same lines.
supplemented with plebanski constraints they are a starting point for four dimensional quantum gravity.
figure 1: the propagator or covariance of the colored model.
figure 2: the vertices of the colored model: pentachores i and ii are associated with interactions of the form \phi^5 and \bar\phi^5, respectively.
the initial model of this type was fermionic [13], hence the fields were complex grassmann
variables ψland ̄
ψl. the corresponding monomials ψ1ψ2ψ3ψ4ψ5 and ̄
ψ1 ̄
ψ2 ̄
ψ3 ̄
ψ4 ̄
ψ5 are su(5)
invariant. in the bosonic model, this invariance is lost as the monomials are only invariant under
transformations with permanent 1, which do not form a group.
more rigorously one should consider the gauge invariance constraints as part of the prop-
agator of the group field theory. the partition function of this bosonic colored model is then
rewritten as,
z(\lambda_1, \lambda_2) = \int d\mu_c[\bar\phi, \phi]\; e^{-\lambda_1 t_1[\phi] - \lambda_2 t_2[\bar\phi]},   (3)
where t_{1,2} stand for the interaction parts in the action (2) associated with the \phi's and \bar\phi's respectively, and d\mu_c[\bar\phi, \phi] denotes the degenerate gaussian measure (see a concise appendix in [8]) which, implicitly, combines the (ordinary, not well defined) lebesgue measure of fields d[\bar\phi, \phi] = \prod_l d\bar\phi^l d\phi^l, the gauge invariance constraint (1) and the mass term. hence d\mu_c[\bar\phi, \phi] is associated with the covariance (or propagator) c given by
\int \bar\phi^l_{1,2,3,4}\, \phi^{l'}_{4',3',2',1'}\, d\mu_c[\bar\phi, \phi] = c^{ll'}(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1) = \delta^{ll'} \int dh \prod_{i=1}^{4} \delta\big(g_i h (g'_i)^{-1}\big).   (4)
as in the ordinary gft situation, the covariance c, a bilinear form, can also be considered as an operator, which is a projector (satisfying c^2 = c) and acts on the fields as follows:
[c\phi]^l_{1,2,3,4} = \sum_{l'} \int dg'_i\; c^{ll'}(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1)\, \phi^{l'}_{4',3',2',1'} = \int dh\; \phi^l(g_4 h, g_3 h, g_2 h, g_1 h).   (5)
2.2
properties of feynman diagrams
in a given dimension d, feynman graphs of a gft are dual to d dimensional simplicial complexes
triangulating a topological spacetime. in this subsection, we illustrate the particular features of
this duality in the above colored model.
associating the field \phi^l to a tetrahedron or 3-simplex, with its group element arguments g_i representing its faces (triangles), it is well known that the order of the arguments of the fields in the quintic interactions (2) follows the pattern of the gluing of the five tetrahedra l = 1, ..., 5, along
one of their faces in order to build a 4-simplex or pentachore (see fig.2). besides, the propagator
can be seen as the gluing rule for two tetrahedra belonging to two neighboring pentachores. for
the bf theory this gluing is made so that each face is flat. in the present situation, the model
(2) adds new features in the theory: the fields are complex valued and colored. consequently,
we can represent the complex nature of the fields by a specific orientation of the propagators
(big arrows in fig.1) and the color feature of the fields is reflected by a specific numbering (from
1 to 5, see fig.2) of the legs of the vertex. hence, the only admissible propagation should be
between a field \bar\phi^l and a field \phi^l of the same color index, and "physically" it means that only tetrahedra of the same color belonging to two neighboring pentachores can be glued together. finally, given a propagator with color l = 1, 2, ..., 5, it has itself sub-colored lines (sometimes called "strands")
that we write cyclically lj, and the gluing also respects these subcolors (because our model does
not include any permutation symmetry of the strands).
let us enumerate some useful properties of the colored graphs. the following lemmas hold.
lemma 2.1 given an n-point graph with N internal vertices, if one color is missing on the external legs, then
(i) N is an even number;
(ii) n is also even and the external legs have colors which appear in pairs.
proof. let us consider an n-point graph with N internal vertices. consequently, one has N fields of each color. since one color is missing on the external legs, and since, by parity, any contraction creating an internal line consumes two fields of the same color, the full contraction process for that missing color consumes all the N fields of that color, that is an even number of fields. hence N must be even. this proves point (i).
we now prove point (ii). we know that the number N of internal vertices is even. now, if a color on the external legs appeared an odd number of times, the complete internal contraction for that color would involve an odd number of fields and hence would be impossible. □
an interesting corollary is that for n < d + 1 the conclusions of lemma 2.1 must hold. in particular the colored theory in dimension d has no odd point functions with fewer than d arguments. for example, the colored ooguri model in four dimensions has no one and three point functions. this property is reminiscent of ordinary even field theories like the \phi^4_4 model, which also has neither one nor three point functions. it may simplify considerably the future
analysis of renormalizable models of this type.
we recall that in a colored gft model,
(i) a face (or closed cycle) is bi-colored with an even number of lines;
(ii) a chain (open cycle) of length > 1 is bi-colored.
figure 3: generalized tadpoles.
this is in fact the definition of a face in [13] but can also be easily deduced from the fact that each strand in a line l^{(0)}_a joining a vertex v^{(0)} to a vertex v^{(1)} possesses a double label that we denote by ab (see fig. 2): a is the color index of l^{(0)}_a and b denotes the color of the line l^{(1)}_b after the vertex v^{(1)}, where the strand ab will propagate. in short, a vertex connects in a unique way the strand ab of the line l_a to the strand ba of the line l_b. the point is that, in return, the strand ba belonging to l^{(1)}_b can only be connected to a strand of the form ab in a line l^{(2)}_a through another vertex v^{(2)}. hence only chains of the bi-colored form abbaabba . . . ba (for closed cycles) and abbaabba . . . ab (for open chains) can be obtained. closed cycles clearly include an even number of lines. □
definition 2.1 a generalized tadpole is a (n < 5)-point graph with only one external vertex (see
fig.3).
in the case of a generalized tadpole with one external leg, that external leg must contract to a
single vertex, creating a new tadpole but with four external legs.
theorem 2.1 there is no generalized tadpole in the colored ooguri group field model.
proof. by definition, a generalized tadpole is a (n < 5)-point graph, meaning that at least one
of the colors is missing on the external legs. then, lemma 2.1 tells us that n should be even
(n = 2 or 4) and the external colors appear in pairs. having in mind that, still by definition, an
ordinary vertex of the colored theory has no common color in its fields, the external vertex of a
generalized tadpole having colors appearing in pairs cannot be an ordinary colored vertex. □
hence, the colored ooguri model has no generalized tadpole and so, a fortiori, no tadpole.
this property does not actually depend on the dimension.
the same strategy as in [8] is used in order to determine the types of vertex operators from
which the feynman amplitude of a general vacuum graph will be bounded in the next subsection.
we first start by some definitions.
definition 2.2
(i) a set a of vertices of a graph g is called connected if the subgraph made of
these vertices and all their inner lines (that is all the lines starting and ending at a vertex
of a) is connected.
(ii) an (a, b)-cut of a two-point connected graph g with two external vertices va and vb is a
partition of the vertices of g into two subsets a and b such that va ∈a, vb ∈b and a
and b are still connected.
(iii) a line joining a vertex of a to a vertex of b in the graph is called a frontier line for the
(a, b)-cut.
(iv) a vertex of b is called a frontier vertex with respect to the cut if there is a frontier line
attached to that vertex.
(v) an exhausting sequence of cuts for a connected two-point graph g of order n is a sequence
a_0 = \emptyset \subsetneq a_1 \subsetneq a_2 \subsetneq \cdots \subsetneq a_{n-1} \subsetneq a_n = g such that (a_p, b_p := g \setminus a_p) is a cut of g for any p = 1, \cdots, n - 1.
given a graph g, an exhausting sequence of cuts is a kind of total ordering of the vertices of g,
such that each vertex can be 'pulled' successively through a 'frontier' from one part b to another
part a, and this without disconnecting a or b.
lemma 2.2 let g be a colored connected two-point graph. there exists an exhausting sequence
of cuts for g.
the proof can be worked out by induction along the lines of a similar lemma in [8]. the main
ingredient for this proof in the colored model is the absence of generalized tadpoles as estab-
lished by theorem 2.1. we reproduce here this proof in detail in the four dimensional case for
completeness, but the result holds in any dimension.
proof of lemma 2.2. let us consider a two-point graph g with n vertices and assume that
a sequence a_0 = \emptyset \subsetneq a_1 \subsetneq a_2 \subsetneq \cdots \subsetneq a_p and its corresponding sequence \{b_j\}_{j=0,1,...,p} have been
defined for 0 ≤p < n −1, such that for all j = 0, 1, . . . , p, (aj, bj) is a cut for g. then another
frontier vertex vp+1 has to be determined such that ap+1 = ap ∪{vp+1} and bp+1 = g \ ap+1
define again a cut for g.
let us consider a tree tp with a fixed root vb which spans the remainder set of vertices bp
and give them a partial ordering. the set bp being finite, we can single out a maximal frontier
vertex vmax with respect to that ordering, namely a frontier vertex such that there is no other
frontier vertex in the "branch above vmax" in tp.
we prove first that vmax ̸= vb. the proposition vmax = vb would imply that vb is the
only frontier vertex left in bp. the vertex vb has four internal lines and therefore two cases may
occur: (1) all these are frontier lines then this means that {vb} = bp which contradicts the fact
that p < n −1; (2) not all lines are frontier lines which implies that some of these lines span a
generalized tadpole. this possibility does not occur by theorem 2.1.
let us then assume that vmax ̸= vb and choose vp+1 = vmax. we would want to prove that
the set bp+1 = bp \ {vp+1} is still connected through internal lines. cutting lp+1, the unique
link between vp+1 and vb in tp splits the tree into two connected components. we call rp the
part containing vb, and sp the other one. note that sp becomes a rooted tree with root and only
frontier vertex vp+1 (remember that vp+1 is maximal). from the property of vp+1 to be a frontier
vertex, one deduces that it has at most four lines in bp (and so at least one frontier line) and
hence there are at most three lines from vp+1 to bp+1 distinct from lp+1. since the tree rp does
not have any line hooked to vp+1, its lines remain inner lines of bp+1 confining all of its vertices
to a single connected component of bp+1. assume that bp+1 is not connected, this would imply
first, that sp contains other vertices than vp+1. second, removing the root vp+1 from sp and the
at most three lines hooked to it, one, two, or three connected components would be obtained.
these components are made of the vertices of s_p \setminus \{v_{p+1}\} plus their inner lines, which no longer
hook to rp through inner lines of bp+1. due to the fact that these components have no frontier
vertices hence no other frontier lines, it would mean that they must have been hooked to the total
graph g through at most three lines from vp+1 to bp+1 distinct from lp+1, hence they would have
formed a generalized tadpole, which is not admissible. □
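a minimal toy sketch (ours, on an ordinary graph rather than a colored gft graph) illustrates the combinatorial content behind the exhausting sequence of cuts: in a connected graph one can always peel off vertices one at a time while keeping the remainder connected.

```python
# hypothetical illustration: peel vertices from a connected graph one by one,
# keeping the remainder connected (a toy analogue of an exhausting sequence of cuts).
def connected(vertices, edges):
    vertices = set(vertices)
    if not vertices:
        return True
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for (u, w) in edges if u == v and w in vertices)
        stack.extend(u for (u, w) in edges if w == v and u in vertices)
    return seen == vertices

def peeling_order(vertices, edges):
    remaining = set(vertices)
    order = []
    while remaining:
        # a non-disconnecting vertex always exists (e.g. a leaf of a spanning tree)
        v = next(v for v in remaining if connected(remaining - {v}, edges))
        order.append(v)
        remaining.remove(v)
    return order

# a small test graph: a square with one diagonal
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(peeling_order({1, 2, 3, 4}, edges))
```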
figure 4: the vertex operators.
figure 5: the chains of graphs g^{23}_n and g^{14}_n with 2n vertices.
figure 6: elements in the chain g^{14}_n.
2.3
perturbative bounds
to begin with the study of perturbative bounds of the feynman graphs, let us introduce a cutoff
of the theory in order to define a regularized version of the partition function z(λ1, λ2) (3). we
then truncate the peter-weyl field expansion as
\phi^l_{1,2,3,4} = \sum_{j_1,j_2,j_3,j_4}^{\lambda} \mathrm{tr}\big[\phi_{j_1,j_2,j_3,j_4}\, d^{j_1}(g_1)\, d^{j_2}(g_2)\, d^{j_3}(g_3)\, d^{j_4}(g_4)\big],   (6)
where the summation is over the spin indices j_{1,...,4}, up to \lambda, d^j(g) denotes the (2j+1)-dimensional matrix representing g and \phi_{j_1,j_2,j_3,j_4} are the corresponding modes. on the group, the delta function with cutoff is of the form \delta_\lambda(h) = \sum_j^{\lambda} (2j + 1)\, \mathrm{tr}\, d^j(h). this function behaves as usual as \int dg\, \delta_\lambda(hg^{-1})\delta_\lambda(gk^{-1}) = \delta_\lambda(hk^{-1}), \int dg\, \delta(gh) = 1, and diverges as \sum_j^{\lambda} j^2 \sim \lambda^3.
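as a quick numerical illustration (our own sketch, not part of the original argument), one can check the cubic growth of the cutoff delta function at the identity, \delta_\lambda(1) = \sum_j (2j+1)^2 with half-integer spins j up to \lambda:

```python
# hypothetical check of the cutoff delta function at the identity:
# delta_lambda(1) = sum over spins j = 0, 1/2, 1, ..., lambda of (2j+1)^2,
# which grows like lambda^3 (up to a constant prefactor).
import math

def delta_at_identity(lam):
    return sum((2 * (m / 2) + 1) ** 2 for m in range(0, int(2 * lam) + 1))

for lam in (10, 100, 1000):
    value = delta_at_identity(lam)
    # the effective exponent log(value)/log(lam) slowly approaches 3
    print(lam, value, math.log(value) / math.log(lam))
```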
we mention that the bounds obtained hereafter are only valid for vacuum graphs. for
general graphs with external legs, a bound can be easily obtained by cauchy-schwarz inequalities
by taking the product of the l2-norms of these external legs (seen as test functions) times the
amplitude of a vacuum graph.
we now prove the following theorem
theorem 2.2 there exists a constant k such that for any connected colored vacuum graph g of
the ooguri model with n internal vertices, we have
|a_g| \le k^n \lambda^{9n/2+9}.   (7)
this bound is optimal in the sense that there exists a graph g_n with n internal vertices such that |a_{g_n}| \simeq k^n \lambda^{9n/2+9}.
proof. lemma 2.2 shows that the vertices can be "pulled out" one by one from the connected
graph g. from this procedure, we build two kinds of vertex operators which compose the graph
g. these are the operators o^{14} and o^{23} (see fig.4) and their adjoints, acting as
o^{14} : h^\lambda_0 \to h^\lambda_0 \otimes h^\lambda_0 \otimes h^\lambda_0 \otimes h^\lambda_0, \qquad o^{23} : h^\lambda_0 \otimes h^\lambda_0 \to h^\lambda_0 \otimes h^\lambda_0 \otimes h^\lambda_0,   (8)
where h^\lambda_0 := h^\lambda \cap \mathrm{im}\, c is the subspace of su(2)-right invariant functions belonging also to h^\lambda \subset l^2(su(2)^4), the subspace of l^2 integrable functions with \lambda-truncated peter-weyl expansion.
the norm of these operators can be computed according to the formula
||h|| = \lim_{n\to\infty} \big(\mathrm{tr}[h^\dagger h]^n\big)^{1/2n},   (9)
where h^\dagger denotes the adjoint operator associated with h.
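a minimal numerical sketch of formula (9) (ours, using a generic random matrix rather than the actual gft operators): for a finite-dimensional operator, (tr[(h^\dagger h)^n])^{1/2n} converges to the largest singular value, i.e. to the operator norm.

```python
# hypothetical check of the trace formula for the operator norm, eq. (9):
# (tr[(H^dagger H)^n])^(1/(2n)) -> largest singular value of H as n grows.
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))  # generic complex matrix

m = h.conj().T @ h
exact_norm = np.linalg.norm(h, 2)  # spectral norm = largest singular value

for n in (1, 5, 20, 80):
    approx = np.trace(np.linalg.matrix_power(m, n)).real ** (1.0 / (2 * n))
    print(n, approx, exact_norm)
```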
we start with the calculation of \mathrm{tr}(o^{14}o^{41})^n, using the formula [7]
\mathrm{tr}(o^{14}o^{41})^n = \int \prod_{l \in \mathcal{l}_{g^{14}_n}} dh_l \prod_{f \in \mathcal{f}_{g^{14}_n}} \delta_\lambda\Big(\overrightarrow{\prod_{l \in \partial f}} h_l\Big)   (10)
with g^{14}_n (see fig.5 and fig.6) the vacuum graph obtained from \mathrm{tr}(o^{14}o^{41})^n, \mathcal{l}_{g^{14}_n} its set of lines and \mathcal{f}_{g^{14}_n} its set of faces (closed cycles of strands). the oriented product in the argument of the delta function is to be performed on the h_l belonging to each oriented line of each oriented face. the rule is that if the orientation of the line and the face coincide, one takes h_l in the product; if the orientations disagree one takes the value h_l^{-1}. we get after a reduction (we omit henceforth the subscript \lambda in the notation of the truncated delta functions)
\mathrm{tr}(o^{14}o^{41})^n = \int \prod_{i=1}^{n}\prod_{j=0}^{4} dh^i_j \Big\{ \delta\Big(\prod_{i=1}^{n} h^i_1 h^i_0\Big)\delta(h^n_0 h^1_1)\, \delta\Big(\prod_{i=1}^{n} h^i_2 h^i_0\Big)\delta(h^n_0 h^1_2)\, \delta\Big(\prod_{i=1}^{n} h^i_3 h^i_0\Big)\delta(h^n_0 h^1_3)\, \delta\Big(\prod_{i=1}^{n} h^i_4 h^i_0\Big)\delta(h^n_0 h^1_4) \prod_{i=1}^{n} \delta(h^i_1 (h^i_2)^{-1})\, \delta(h^i_1 (h^i_3)^{-1})\, \delta(h^i_1 (h^i_4)^{-1})\, \delta(h^i_2 (h^i_3)^{-1})\, \delta(h^i_2 (h^i_4)^{-1})\, \delta(h^i_3 (h^i_4)^{-1}) \Big\}.   (11)
after integration, it is simple to deduce that \mathrm{tr}(o^{14}o^{41})^n \le \lambda^{9n+9} and the operator norm is bounded by
||o^{14}|| \le \lambda^{9/2}.   (12)
a similar calculation allows us to write \mathrm{tr}(o^{23}o^{32})^{2n} \le \lambda^{6n+15} (see graph g^{23}_n in fig.5) such that we get the bound
||o^{23}|| \le \lambda^{3/2}. □   (13)
the meaning of the norms of the operators o^{14} and o^{23} is the following: each vertex in a graph g_n diverges at most as \lambda^{9/2}, which is the bound on ||o^{14}||. roughly speaking, the amplitude of a colored graph is bounded by k^n \lambda^{9n/2}, where n is its number of vertices.
figure 7: propagator and vertex \phi^{d+1} in d dimensional gft.
2.4
d dimensional colored gft
we treat in this subsection, in a streamlined analysis, how to extend the above perturbative
bounds to any dimension d.
the d dimensional gft model. in dimension d, the action (2) finds the following extension
s_d[\phi] := \int \prod_{i=1}^{d} dg_i \sum_{l=1}^{d+1} \bar\phi^l_{1,2,...,d}\, \phi^l_{d,...,2,1}
  + \lambda_1 \int \prod dg_{ij}\; \phi^1_{12,13,...,1(d+1)}\, \phi^{d+1}_{(d+1)1,(d+1)2,(d+1)3,...,(d+1)d}\, \phi^d_{d(d+1),d1,d2,...,d(d-1)} \cdots \phi^3_{34,35,...,3(d+1),31,32}\, \phi^2_{23,24,...,2(d+1),21} \prod_{j \neq i}^{d+1} \delta\big(g_{ij}(g_{ji})^{-1}\big)
  + \lambda_2 \int \prod dg_{ij}\; \bar\phi^1_{12,13,...,1(d+1)}\, \bar\phi^{d+1}_{(d+1)1,(d+1)2,(d+1)3,...,(d+1)d}\, \bar\phi^d_{d(d+1),d1,d2,...,d(d-1)} \cdots \bar\phi^3_{34,35,...,3(d+1),31,32}\, \bar\phi^2_{23,24,...,2(d+1),21} \prod_{j \neq i}^{d+1} \delta\big(g_{ij}(g_{ji})^{-1}\big)   (14)
where the complex colored fields \phi^l : g^d \to \mathbb{c} will be denoted by \phi^l(g_{li}, g_{lj}, . . . , g_{lk}) = \phi^l_{li,lj,...,lk}, and g_{lk} is a group element materializing the link between the two colors l and k in the vertex (the general propagator and vertex \phi^{d+1} are pictured in fig.7; the vertex for \bar\phi^{d+1} can be easily found by conjugation).
perturbative bounds. an extension of theorem 2.1 can be easily realized here in any dimension, using conditions similar to those of lemma 2.1 for any n-point function (the argument on the parity of the number of lines in the complete contraction procedure still holds here). since there is again no generalized tadpole in the d dimensional theory, we are still able to provide an exhausting sequence of cuts for any colored graph. from this point, we can formulate the following statement:
theorem 2.3 there exists a constant k such that for any connected colored vacuum graph g of the d dimensional gft model with n internal vertices, we have
|a_g| \le k^n \lambda^{3(d-1)(d-2)n/4+3(d-1)}.   (15)
figure 8: the operator o(d + 1 - p, p).
this bound is optimal in the sense that there exists a graph g_n with n internal vertices such that |a_{g_n}| \simeq k^n \lambda^{3(d-1)(d-2)n/4+3(d-1)}.
proof. given 0 < p \le d, the generalized vertex operator is an operator with d + 1 - p legs in a part a and the remaining p legs in a part b, for an (a, b)-cut of g (see fig.8). from the generalized formula
\mathrm{tr}(o^{d+1-p,p}o^{p,d+1-p})^{2n} = \lambda^{3[((d-p-1)(d-p)+(p-1)(p-2))n+(d+1-p)p-1]},   (16)
we can determine the bound on the norm of the operator o^{d+1-p,p} as
||o^{d+1-p,p}|| \le \lambda^{\frac{3(d-1)(d-2)-6(p-1)(d-p)}{4}}.   (17)
then the maximum of the bound occurs for p = 1, giving an operator o^{d,1}. for this operator we have
||o^{d,1}|| \le (\lambda^{3/2})^{\frac{(d-1)(d-2)}{2}}. □   (18)
setting d = 4 and p = 1, (18) recovers the bound (12) on o^{14}. in the specific d = 3 dimensional case, (18) reduces to the \lambda^{3/2} bound of the vertex operators of boulatov's model, as established in [8].
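as a small consistency sketch (ours), the per-vertex exponent 3(d-1)(d-2)/4 appearing in (15) and (18) can be evaluated for low dimensions, recovering the 3/2 of the boulatov case and the 9/2 of the ooguri case:

```python
# hypothetical consistency check of the per-vertex exponent in eqs. (15), (18):
# exponent(d) = 3 (d-1)(d-2) / 4, expected to give 3/2 for d = 3 and 9/2 for d = 4.
from fractions import Fraction

def per_vertex_exponent(d):
    return Fraction(3 * (d - 1) * (d - 2), 4)

for d in (3, 4, 5, 6):
    print(d, per_vertex_exponent(d))
# expected output: 3 -> 3/2, 4 -> 9/2, 5 -> 9, 6 -> 15
```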
3
integration of two fields: first steps of constructive analysis
a most interesting feature of the colored theory is the possibility of an explicit integration of two fields, leading to a determinant as in the matthews-salam formalism [14]. we explore this possibility in four dimensions in this section, but the result is again general. let us start by
writing the vertex terms in the form (setting henceforth λ1 = λ2 = λ)
s_v = \lambda \Big\{ \int \prod_{i=1}^{4} dg_i dg'_i\; \phi^1_{1,2,3,4} \Big[ \delta\big(g_1(g'_1)^{-1}\big) \int dg_5 dg_6 dg_7\; \phi^5_{4,2',5,6}\, \phi^4_{6,3,3',7}\, \phi^3_{7,5,2,4'} \Big] \phi^2_{4',3',2',1'}
  + \int \prod_{i=1}^{4} dg_i dg'_i\; \bar\phi^1_{4',3',2',1'} \Big[ \delta\big(g'_4(g_4)^{-1}\big) \int dg_5 dg_6 dg_7\; \bar\phi^5_{1',3,5,6}\, \bar\phi^4_{6,2,2',7}\, \bar\phi^3_{7,5,3',1} \Big] \bar\phi^2_{1,2,3,4} \Big\},   (19)
such that the following operators
h(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1) = \delta\big(g_1(g'_1)^{-1}\big) \int dg_5 dg_6 dg_7\; \phi^5_{4,2',5,6}\, \phi^4_{6,3,3',7}\, \phi^3_{7,5,2,4'},   (20)
h^*(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1) = \delta\big(g_4(g'_4)^{-1}\big) \int dg_5 dg_6 dg_7\; \bar\phi^5_{1',3,5,6}\, \bar\phi^4_{6,2,2',7}\, \bar\phi^3_{7,5,3',1}   (21)
allow us to express the partition function (3) as
z(\lambda) = \int d\mu_c[\bar\phi, \phi] \exp\Big[-\lambda \int \prod_{i=1}^{4} dg_i dg'_i \Big( \phi^1_{1,2,3,4}\, h(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1)\, \phi^2_{4',3',2',1'} + \bar\phi^2_{1,2,3,4}\, h^*(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1)\, \bar\phi^1_{4',3',2',1'} \Big)\Big].   (22)
note the important property that h^* is the adjoint of h. indeed, a quick inspection shows that h^* = (\bar h)^t, which can be checked by a complex conjugation of the fields and a symmetry of the arguments such that 1 \to 4 and 2 \to 3.
the integration over these fields follows standard techniques in quantum field theory. we introduce the vector v = (\mathrm{re}\,\phi^1, \mathrm{im}\,\phi^1, \mathrm{re}\,\phi^2, \mathrm{im}\,\phi^2) and its transpose v^t, such that \phi^1 h \phi^2 and \bar\phi^1 \bar h \bar\phi^2 and the mass terms \bar\phi^{l=1,2}_{1,2,3,4}\phi^{l=1,2}_{4,3,2,1} can be expressed in the matrix form
z(\lambda) = \int d\mu_c[\bar\phi, \phi]\; e^{-\lambda v^t a v},   (23)
where the matrix operator a can be expressed as
a = \begin{pmatrix} 0 & 0 & h & ih \\ 0 & 0 & ih & -h \\ h^* & -ih^* & 0 & 0 \\ -ih^* & -h^* & 0 & 0 \end{pmatrix}.   (24)
after integration over the colors 1 and 2, using the normalized gaussian measure d\mu_c[\bar\phi, \phi] = d\mu'_{c'}[\bar\phi^{1,2}; \phi^{1,2}]\, d\mu''_{c''}[\bar\phi^{3,4,5}, \phi^{3,4,5}], we get
z(\lambda) = \int d\mu''_{c''}[\bar\phi^{3,4,5}, \phi^{3,4,5}]\; k\,[\det(1 + \lambda c a)]^{-1} = \int d\mu''_{c''}[\bar\phi^{3,4,5}, \phi^{3,4,5}]\; k\, e^{-\mathrm{tr} \log(1+\lambda c a)},   (25)
where k is an inessential normalization constant that we omit in the sequel. the operator product ca can be calculated by composing ch and ch^*. we obtain for ch and ch^*, respectively, the following operators defined by their kernels
(ch)(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1) = \int \prod_{i=1}^{4} dg''_i\; c(g_1, g_2, g_3, g_4; g''_4, g''_3, g''_2, g''_1)\, h(g''_1, g''_2, g''_3, g''_4; g'_4, g'_3, g'_2, g'_1)   (26)
  = \int dg_5 dg_6 dg_7 \int dh\; \delta\big(g_1 h (g'_1)^{-1}\big)\, \phi^5(g_4 h, g'_2, g_5, g_6)\, \phi^4(g_6, g_3 h, g'_3, g_7)\, \phi^3(g_7, g_5, g_2 h, g'_4),
(ch^*)(g_1, g_2, g_3, g_4; g'_4, g'_3, g'_2, g'_1) = \int \prod_{i=1}^{4} dg''_i\; c(g_1, g_2, g_3, g_4; g''_4, g''_3, g''_2, g''_1)\, h^*(g''_1, g''_2, g''_3, g''_4; g'_4, g'_3, g'_2, g'_1)   (27)
  = \int dg_5 dg_6 dg_7 \int dh\; \delta\big(g_4 h (g'_4)^{-1}\big)\, \bar\phi^5(g'_1, g_3 h, g_5, g_6)\, \bar\phi^4(g_6, g'_2, g_2 h, g_7)\, \bar\phi^3(g_7, g_5, g'_3, g_1 h).
it is convenient to see ca as a sum of two matrices h + h^*, defined by the off-block-diagonal parts of ca involving at each matrix element h or h^*. then the determinant integrand of (25) can be expressed, up to a constant, more simply as
e^{-\mathrm{tr}\log(1+\lambda(h+h^*))} = e^{+\mathrm{tr}\sum_{n=1}^{\infty}\frac{(-\lambda)^n}{n}(h+h^*)^n} = e^{+\mathrm{tr}\sum_{p=1}^{\infty}\frac{\lambda^{2p}}{2p}[(h+h^*)^2]^p} = e^{\frac{1}{2}\mathrm{tr}\sum_{p=1}^{\infty}\frac{\lambda^{2p}}{p} q^p} = e^{-\frac{1}{2}\mathrm{tr}\log(1-\lambda^2 q)}   (28)
where we have used the fact that \mathrm{tr}((h+h^*)^{2p+1}) = 0 for all p, and q := (h+h^*)^2 = hh^* + h^*h, since h^2 = 0 = (h^*)^2. the operator q is hermitian and is given as
q = \begin{pmatrix} 2hh^* & -2ihh^* & 0 & 0 \\ 2ihh^* & 2hh^* & 0 & 0 \\ 0 & 0 & 2h^*h & 2ih^*h \\ 0 & 0 & -2ih^*h & 2h^*h \end{pmatrix}.   (29)
thus, q has positive real eigenvalues and -\lambda^2 q is positive for \lambda = ic, c \in \mathbb{r}.
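the block structure of q can be checked numerically; the sketch below (ours, with random finite-dimensional matrices standing in for the actual kernels) verifies that (h+h^*)^2 reproduces (29) and that the odd traces used in (28) vanish.

```python
# hypothetical check of eqs. (28)-(29) with random finite-dimensional matrices
# standing in for the kernel h; hs denotes its adjoint h^*.
import numpy as np

rng = np.random.default_rng(1)
d = 4
h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
hs = h.conj().T
z = np.zeros((d, d), dtype=complex)

# off-block-diagonal matrices built from eq. (24)
big_h = np.block([[z, z, h, 1j * h], [z, z, 1j * h, -h], [z, z, z, z], [z, z, z, z]])
big_hs = np.block([[z, z, z, z], [z, z, z, z], [hs, -1j * hs, z, z], [-1j * hs, -hs, z, z]])
m = big_h + big_hs

# the matrix q of eq. (29)
q = np.block([[2 * h @ hs, -2j * h @ hs, z, z],
              [2j * h @ hs, 2 * h @ hs, z, z],
              [z, z, 2 * hs @ h, 2j * hs @ h],
              [z, z, -2j * hs @ h, 2 * hs @ h]])

print(np.allclose(m @ m, q))                                  # (h+h^*)^2 = q
print(abs(np.trace(np.linalg.matrix_power(m, 3))) < 1e-9)     # odd traces vanish
```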
the next purpose is to investigate a bound on the radius of the series f(\lambda) = \log z(\lambda), the free energy, which computes the sum of the amplitudes of connected feynman graphs. for this, we will use a cactus expansion [17] and the brydges-kennedy forest formula (see [18] and references therein; for a short pedagogical approach see [8]) on the partition function z(\lambda). expanding e^{-\frac{1}{2}\mathrm{tr}\log(1-\lambda^2 q)} in terms of v_\lambda = -\frac{1}{2}\mathrm{tr}\log(1-\lambda^2 q), using the replica trick, and then applying the brydges-kennedy formula, one comes to
z(\lambda) = \int d\nu_c[\bar\phi^{5,4,3}, \phi^{5,4,3}]\; e^{-\frac{1}{2}\mathrm{tr}\log(1-\lambda^2 q)}
  = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{f \in f_n} \Big(\prod_{l \in f} \int_0^1 dh_l\Big)\Big(\prod_{l \in f} \frac{\partial}{\partial h_l}\Big) \int d\nu^{h^f}_n(\sigma_1, . . . , \sigma_n) \prod_{v=1}^{n} v_\lambda(\sigma_v),   (30)
where we have changed the notation d\nu_c = d\mu''_{c''} and each of the \sigma_i represents an independent copy of the six fields (\bar\phi^{5,4,3}, \phi^{5,4,3})_i; the second sum is over the set f_n of forests built on n points or vertices; the product is over lines l in a given forest f; h^f is an n(n-1)/2-tuple with elements h^f_l = \min_p h_p, where p takes values in the unique path in f connecting the source s(l) and target t(l) of a line l \in f, if such a path exists, otherwise h^f_l = 0. it is well known that the summand factorizes along connected components of each forest. therefore \log z(\lambda) is given by the same series in terms of trees (connected forests) as
f(\lambda) = \sum_{n=1}^{\infty} \frac{1}{n!} \sum_{t \in t_n} \Big(\prod_{l \in t} \int_0^1 dh_l\Big)\Big(\prod_{l \in t} \frac{\partial}{\partial h_l}\Big) \int d\nu^{h^t}_n(\sigma_1, . . . , \sigma_n) \prod_{v=1}^{n} v_\lambda(\sigma_v).   (31)
the trees t join the new vertices v_\lambda(\sigma_v), often called "loop vertices". the covariance of d\nu^{h^t}_n(\sigma_1, . . . , \sigma_n) can be expressed by
c^{h^t}_{ij;ab}(g_i; g_j) = \begin{cases} 1 & \text{if } i = j \text{ and } a = b \\ \delta_{ab}\, h^t_l\, c(g_i; g_j) & \text{if } i \neq j \\ 0 & \text{otherwise} \end{cases}   (32)
where l = \{ij\} denotes the line with source s(l) = i and target t(l) = j. we also use the notation (g_i, g_j) = ((g_i)_{k=1,2,3,4}, (g_j)_{k=1,2,3,4}) \in g^{4\times 4}, and a and b are color indices, such that formally we have
d\nu^{h^t}_n(\sigma_1, . . . , \sigma_n) = e^{\int dg\, dg' \sum_{i,j=1}^{n} \sum_{a,b=1}^{5} \frac{\delta}{\delta\bar\phi^a_i(g_i)}\, c^{h^t}_{ij;ab}(g_i, g_j)\, \frac{\delta}{\delta\phi^b_j(g_j)}}.   (33)
the partial derivative \partial/\partial h_l in (31) acts on the measure d\nu^{h^t}_n(\sigma_1, . . . , \sigma_n) and one obtains
f(\lambda) = \sum_{n=1}^{\infty} \frac{1}{n!} \sum_{t \in t_n} \Big(\prod_{l \in t} \int_0^1 dh_l\Big) \int d\nu^{h^t}_n(\sigma_1, . . . , \sigma_n) \prod_{l \in t} \int d^4g_{s(l)}\, d^4g_{t(l)}\; c(g_{s(l)}; g_{t(l)})\, \frac{\delta^2}{\delta\bar\phi^{a(l)}_{s(l)}(g_{s(l)})\, \delta\phi^{a(l)}_{t(l)}(g_{t(l)})} \prod_{v=1}^{n} v_\lambda(\sigma_v),   (34)
with the color index a(l) denoting the color of the line l.
let us denote by k_v the coordination number of a given loop vertex v_\lambda(\sigma_v), hooked to the tree t by half lines l_v = 1, . . . , k_v such that s(l_v) = v or t(l_v) = v. since each half line corresponds to a derivative, we can decompose the k_v derivatives into two parts p_v + q_v = k_v such that, up to inessential constants,
t(\lambda; k_v) = \prod_{l^1_v=1}^{p_v} \frac{\delta}{\delta\bar\phi^{a(l^1_v)}_v(g_{l^1_v})} \prod_{l^2_v=1}^{q_v} \frac{\delta}{\delta\phi^{a(l^2_v)}_v(g_{l^2_v})} \Big(-\frac{1}{2}\mathrm{tr}\log\big(1 - \lambda^2 q[\bar\phi^{5,4,3}_v; \phi^{5,4,3}_v]\big)\Big)   (35)
  = \mathrm{tr} \sum_{\substack{r,\; p_r+q_r>0 \\ \sum p_r = p_v;\; \sum q_r = q_v}} \prod_{j=1}^{r} \frac{\delta^{p_j+q_j}\, \lambda^2 q}{\prod_{l^{(1;j)}_v=1}^{p_j} \delta\bar\phi^{a(l^{(1;j)}_v)}_v(g_{l^{(1;j)}_v}) \prod_{l^{(2;j)}_v=1}^{q_j} \delta\phi^{a(l^{(2;j)}_v)}_v(g_{l^{(2;j)}_v})}\; \frac{1}{1 - \lambda^2 q}.
note that, as previously mentioned, the coupling constant \lambda = ic is a purely imaginary complex number, so that the denominator 1 + c^2 q is positive.
a rigorous bound for (35) is complicated to determine. however, an encouraging remark is to consider the constant field modes, or constant fields themselves (even if the physical relevance of these "background" modes is not clear at this stage). if we restrict to these constant modes, then (35) can be bounded as follows. noting that q is a polynomial of degree six in the fields \phi^a, the product of derivatives acting on it behaves like
t(\lambda; k_v) \le \sum_{\substack{r,\; p_r+q_r>0 \\ \sum p_r = p_v;\; \sum q_r = q_v}} \prod_{j=1}^{r} \frac{|\lambda^{1/3}\phi|^{6-p_j-q_j}\, (\lambda^{1/3})^{p_j+q_j}}{1 + |\lambda^{1/3}\phi|^6} \le \sum_{\substack{r,\; p_r+q_r>0 \\ \sum p_r = p_v;\; \sum q_r = q_v}} \prod_{j=1}^{r} (\lambda^{1/3})^{p_j+q_j}.   (36)
note that similar formulas exist in dimension d, with q a polynomial of degree 2(d - 1). hence this method opens a constructive perspective for colored gft in any dimension. this perspective is distinct from, or could be a complement to, the freidel-louapre approach.
4
conclusion
perturbative bounds of a general vacuum graph of colored gft have been obtained in any
dimension. we have shown that, in dimension d, the scaling of the amplitude of any vacuum
graph, in the "ultraspin" cutoffλ, behaves like knλ3(d−1)(d−2)n/4+3(d−1) where n is the number
of vertices. in addition, it has been revealed that, by an integration of two colors, the model
is positive if the coupling constant is purely imaginary. we did not need any further regularization procedure, such as the inclusion of a freidel-louapre interaction term, which in return may be more divergent than the ordinary theory. using a loop vertex expansion, we have reached the property
that for at least the constant modes of the fields the forest-tree formula leads to a convergent
series (which should be the borel-le roy sum of appropriate order of the ordinary perturbation
series). these results are encouraging for a constructive program.
the bounds obtained in this paper for colored gft models should now be completed
into a more precise power counting and scaling analysis. they should also be extended to the
physically more interesting models in particular the so called eprl-fk model [16, 15].
we
recall that the most simple divergencies of the eprl-fk model have been studied in [10]. this
work analyzes the elementary "bubble" (built out of two vertices) corresponding to the one-loop
self-energy correction and the "ball" (built out of five vertices), corresponding to the one-loop
vertex correction. the authors prove that putting the external legs at zero spin, the degree of
figure 9:
nonplanar tadpoles with their additional divergence (dashed line) when inserted in more
involved graphs.
divergence in the cutoff of the ball could be logarithmic. this result sounds very promising from
the renormalization point of view.
the first step in this program is to establish the power counting of a simplified colored
ooguri model with commutative group, which we call "linearized" colored ooguri model. we
checked that power counting in this case is given by a homology formula [13, 19]. then we intend
to perform a similar analysis of the colored linearized eprl-fk model, and find out the analogs
of the multiscale renormalization group analysis in this context. this should help for the more
complicated study of the "non-linear" models in which homotopy rather than homology should
ultimately govern the power counting.
acknowledgments
the work of j.b.g. is supported by the laboratoire de physique théorique d'orsay (lpto, université paris sud xi) and the association pour la promotion scientifique de l'afrique (apsa). discussions with r. gurau are gratefully acknowledged, which in particular led to the representation of section 3.
5
appendix: the case of tadpoles
in this appendix, we give some details on the scaling properties of amplitudes of graphs containing generalized tadpoles, as they occur in the non-colored gft. we will restrict the discussion to the case of dimension 3 [8, 7], but similar properties definitely appear in higher dimensions.
referring to [8], the vertex operators of boulatov's model are bounded by \lambda^{3/2}, such that any connected vacuum graph without generalized tadpoles is bounded by k^n \lambda^{3n/2} (omitting the overall trace factor), n being as usual the number of vertices. however, nothing more can be said in the case of graphs with generalized tadpoles. for instance, there exist non-planar tadpoles (see fig.9) which contribute more than \lambda^{3/2} per vertex. indeed, in the typical situation of fig.9, these tadpoles cost a factor \lambda^{3/2+3/2} per vertex, and violate the ordinary vertex bound.
finally, one concludes that the bounds of [8] are correct for vacuum graphs without gen-
eralized tadpoles, such as the vacuum graphs that occur in the colored models, but that, when
tadpoles are present, they are wrong for the most general model. in any non-colored case and
any dimension, we can expect similar features.
references
[1] d. v. boulatov, mod. phys. lett. a7 (1992) 1629; eprint hep-th/9202074.
[2] h. ooguri, mod. phys. lett. a7 (1992) 2799; eprint hep-th/9205090.
[3] l. freidel, int. j. theor. phys. 44 (2005) 1769; eprint hep-th/0505016.
[4] d. oriti, "the group field theory approach to quantum gravity," eprint gr-qc/0607032
(2006).
[5] c. rovelli, quantum gravity (cambridge university press, cambridge, 2004).
[6] t. thiemann, modern canonical quantum general relativity (cambridge university press,
cambridge 2007).
[7] l. freidel, r. gurau and d. oriti, phys. rev. d80 (2009) 044007; eprint 0905.3772[hep-th].
[8] j. magnen, k. noui, v. rivasseau and m. smerlak, "scaling behaviour of three dimensional
group field theory," eprint 0906.5477[hep-th] (2009).
[9] v. rivasseau, noncommutative renormalization, poincaré seminar x, "espaces quantiques",
ed. b. duplantier et al, (2007) 15-95; eprint 0705.0705[hep-th].
[10] c. perini, c. rovelli and s. speziale, "self-energy and vertex radiative corrections in lqg,"
eprint 0810.1714[gr-qc] (2008).
[11] j. w. barrett, r. j. dowdall, w. j. fairbairn, h. gomes and f. hellmann, "asymptotic
analysis of the eprl four-simplex amplitude," eprint 0902.1170[gr-qc] (2009).
[12] l. freidel and d. louapre, phys. rev. d68 (2003) 104004; eprint hep-th/0211026.
[13] r. gurau, "colored group field theory," eprint 0907.2582[hep-th] (2009).
[14] p. t. matthews and a. salam, nuovo cimento 12 (1954) 563; ibid 2 (1955) 120.
[15] l. freidel and k. krasnov, class. quant. grav. 25 (2008) 125018; eprint 0708.1595[gr-qc].
[16] j. engle, e. livine, r. pereira and c. rovelli, nucl. phys. b799 (2008) 136; eprint
0711.0146[gr-qc].
[17] v. rivasseau, jhep 9 (2007) 008; eprint 0706.1224[hep-th].
[18] v. rivasseau, from perturbative to constructive renormalization (princeton university
press, princeton, 1991); a. abdesselam and v. rivasseau, trees, forests and jungles: a
botanical garden for cluster expansions, in constructive physics, ed. v. rivasseau, lecture
notes in physics 446, springer verlag, 1995.
[19] j. ben geloun, j. magnen and v. rivasseau, in progress.
|
0911.1720 | evolutionary game theory: temporal and spatial effects beyond replicator
dynamics | evolutionary game dynamics is one of the most fruitful frameworks for
studying evolution in different disciplines, from biology to economics. within
this context, the approach of choice for many researchers is the so-called
replicator equation, that describes mathematically the idea that those
individuals performing better have more offspring and thus their frequency in
the population grows. while very many interesting results have been obtained
with this equation in the three decades elapsed since it was first proposed, it
is important to realize the limits of its applicability. one particularly
relevant issue in this respect is that of non-mean-field effects, that may
arise from temporal fluctuations or from spatial correlations, both neglected
in the replicator equation. this review discusses these temporal and spatial
effects focusing on the non-trivial modifications they induce when compared to
the outcome of replicator dynamics. alongside this question, the hypothesis of
linearity and its relation to the choice of the rule for strategy update is
also analyzed. the discussion is presented in terms of the emergence of
cooperation, as one of the current key problems in biology and in other
disciplines.
| introduction
2 basic concepts and results of evolutionary game theory
2.1 equilibria and stability
2.2 replicator dynamics
2.3 the problem of the emergence of cooperation
3 the effect of different time scales
3.1 time scales in the ultimatum game
3.2 time scales in symmetric binary games
3.2.1 slow selection limit
3.2.2 fast selection limit
email addresses: [email protected] (carlos p. roca), [email protected] (josé a. cuesta), [email protected] (angel sánchez)
4 structured populations
4.1 network models and update rules
4.2 spatial structure and homogeneous networks
4.3 synchronous vs asynchronous update
4.4 heterogeneous networks
4.5 best response update rule
4.6 weak selection
5 conclusion and future prospects
a characterization of birth-death processes
b absorption probability in the hypergeometric case
1. introduction
the importance of evolution can hardly be overstated, in so far as it permeates all sciences. indeed, in
the 150 years that have passed since the publication of on the origin of species [1], the original idea of
darwin that evolution takes place through descent with modification acted upon by natural selection has
become a key concept in many sciences. thus, nowadays one can speak of course of evolutionary biology, but
there are also evolutionary disciplines in economics, psychology, linguistics, or computer science, to name a
few.
darwin's theory of evolution was based on the idea of natural selection. natural selection is the process
through which favorable heritable traits become more common in successive generations of a population
of reproducing organisms, displacing unfavorable traits in the struggle for resources. in order to cast this
process in a mathematically precise form, j. b. s. haldane and sewall wright introduced, in the so-called
modern evolutionary synthesis of the 1920's, the concept of fitness. they applied theoretical population
ideas to the description of evolution and, in that context, they defined fitness as the expected number of
offspring of an individual that reach adulthood. in this way they were able to come up with a well-defined
measure of the adaptation of individuals and species to their environment.
the simplest mathematical theory of evolution one can think of arises when one assumes that the fitness
of a species does not depend on the distribution of frequencies of the different species in the population,
i.e., it only depends on factors that are intrinsic to the species under consideration or on environmental
influences. sewall wright formalized this idea in terms of fitness landscapes ca. 1930, and in that context r.
fisher proved his celebrated theorem, that states that the mean fitness of a population is a non-decreasing
function of time, which increases proportionally to variability. since then, a lot of work has been done on
this kind of models; we refer the reader to [2, 3, 4, 5] for reviews.
the approach in terms of fitness landscapes is, however, too simple and, in general, it is clear that the
fitness of a species will depend on the composition of the population and will therefore change accordingly as
the population evolves. if one wants to describe evolution at this level, the tool of reference is evolutionary
game theory. brought into biology by maynard smith [6] as an "exaptation"1 of the game theory developed
originally for economics [8], it has since become a unifying framework for other disciplines, such as sociology
or anthropology [9]. the key feature of this mathematical apparatus is that it allows one to deal with evolution
on a frequency-dependent fitness landscape or, in other words, with strategic interactions between entities,
these being individuals, groups, species, etc. evolutionary game theory is thus the generic approach to
evolutionary dynamics [10] and contains as a special case constant, or fitness landscape, selection.
in its thirty year history, a great deal of research in evolutionary game theory has focused on the properties
and applications of the replicator equation [11]. the replicator equation was introduced in 1978 by taylor
and jonker [12] and describes the evolution of the frequencies of population types taking into account their
1borrowing the term introduced by gould and vrba in evolutionary theory, see [7].
mutual influence on their fitness. this important property allows the replicator equation to capture the
essence of selection and, among other key results, it provides a connection between the biological concept of
evolutionarily stable strategies [6] with the economical concept of nash equilibrium [13].
as we will see below, the replicator equation is derived in a specific framework that involves a number of
assumptions, beginning with that of an infinite, well-mixed population with no mutations. by well-mixed
population it is understood that every individual either interacts with every other one or at least has the
same probability to interact with any other individual in the population.
this hypothesis implies that
any individual effectively interacts with a player which uses the average strategy within the population
(an approach that has been traditionally used in physics under the name of mean-field approximation).
deviations from the well-mixed population scenario affect strongly and non-trivially the outcome of the
evolution, in a way which is difficult to apprehend in principle. such deviations can arise when one considers,
for instance, finite size populations, alternative learning/reproduction dynamics, or some kind of structure
(spatial or temporal) in the interactions between individuals.
in this review we will focus on this last point, and discuss the consequences of relaxing the hypothesis
that every player interacts or can interact with every other one. we will address both spatial and temporal
limitations in this paper, and refer the reader to refs. [10, 11] for discussions of other perturbations. for
the sake of definiteness, we will consider those effects, that go beyond replicator dynamics, in the specific
context of the emergence of cooperation, a problem of paramount importance with implications at all levels,
from molecular biology to societies and ecosystems [14]; many other applications of evolutionary dynamics
have also been proposed but it would be too lengthy to discuss all of them here (the interested reader should
see, e.g., [10]).
cooperation, understood as a fitness-decreasing behavior that increases others' fitness,
is an evolutionary puzzle, and many researchers have considered alternative approaches to the replicator
equation as possible explanations of its ubiquity in human (and many animal) societies. as it turns out,
human behavior is unique in nature. indeed, altruism or cooperative behavior exists in other species, but
it can be understood in terms of genetic relatedness (kin selection, introduced by hamilton [15, 16]) or of
repeated interactions (as proposed by trivers [17]). nevertheless, human cooperation extends to genetically
unrelated individuals and to large groups, characteristics that cannot be understood within those schemes.
subsequently, a number of theories based on group and/or cultural evolution have been put forward in
order to explain altruism (see [18] for a review). evolutionary game theory is also being intensively used for
this research, its main virtue being that it allows one to pose the dilemmas involved in cooperation in a simple,
mathematically tractable manner. to date, however, there is not a generally accepted solution to this puzzle
[19].
considering temporal and spatial effects means, in the language of physics, going beyond mean-field to
include fluctuations and correlations. therefore, a first step is to understand what are the basic mean field
results. to this end, in section 2 we briefly summarize the main features of replicator equations and introduce
the concepts we will refer to afterwards. subsequently, section 3 discusses how fluctuations can be taken
into account in evolutionary game theory, and specifically we will consider that, generically, interactions and
dynamics (evolution) need not occur at the same pace. we will show that the existence of different time
scales leads to quite unexpected results, such as the survival and predominance of individuals that would
be the least fit in the replicator description. for games in finite populations with two types of individuals
or strategies, the problem can be understood in terms of markov processes and the games can be classified
according to the influence of the time scales on their equilibrium structure. other situations can be treated
by means of numerical simulations with similarly non-trivial results.
section 4 deals with spatial effects. the inclusion of population structure in evolutionary game theory
has been the subject of intense research in the last 15 years, and a complete review would be beyond our
purpose (see e.g. [20]). the existence of a network describing the possible interactions in the population
has been identified as one of the factors that may promote cooperation among selfish individuals [19]. we
will discuss the results available to date and show how they can be reconciled by realizing the role played
by different networks, different update rules for the evolution of strategies and the equilibrium structure of
the games. as a result, we will be able to provide a clear-cut picture of the extent as to which population
structure promote cooperation in two strategy games.
finally, in section 5 we discuss the implications of the reviewed results on a more general context. our
major conclusion will be the lack of generality of models in evolutionary game theory, where details of
the dynamics and the interaction modify the results qualitatively. such a behavior is not intuitive to physicists, who are used to disregarding those details as unimportant. therefore, until we are able to discern what is
and what is not relevant, when dealing with problems in other sciences, modeling properly and accurately
specific problems is of utmost importance. we will also indicate a few directions of research that arise from
the presently available knowledge and that we believe will be most appealing in the near future.
2. basic concepts and results of evolutionary game theory
in this section, we summarize the main facts about evolutionary game theory that we are going to need
in the remainder of the paper. the focus is on the stability of strategies and on the replicator equation, as
an equivalent to the dynamical description of a mean field approach which we will be comparing with. this
summary is by no means intended to be comprehensive and we encourage the reader to consult the review
[21] or, for full details, the books [6, 9, 11].
2.1. equilibria and stability
the simplest type of game has only two players and, as this will be the one we will be dealing with, we
will not dwell on further complications. player i is endowed with a finite number n_i of strategies. a game
is defined by listing the strategies available to the players and the payoffs they yield: when a player, using
strategy s_i, meets another, who in turn uses strategy s_j, the former receives a payoff w_{ij} whereas the latter receives a payoff z_{ij}. we will restrict ourselves to symmetric games, in which the roles of both players are exchangeable (except in the example considered in section 3.1); mathematically, this means that the set of strategies is the same for both players and that w = z^t. matrix w is then called the payoff matrix of
the normal form of the game. in the original economic formulation [8] payoffs were understood as utilities,
but maynard smith [6] reinterpreted them in terms of fitness, i.e. in terms of reproductive success of the
involved individuals.
the fundamental step to "solving" the game or, in other words, to find what strategies will be played,
was put forward by john nash [13] by introducing the concept of equilibrium. in 2 × 2 games, a pair of
strategies (si, sj) is a nash equilibrium if no unilateral change of strategy allows any player to improve her
payoff. when we restrict ourselves to symmetric games, one can say simply, by an abuse of language [21],
that a strategy s_i is a nash equilibrium if it is a best reply to itself: w_{ii} ≥ w_{ij}, ∀ s_j (a strict nash equilibrium
if the inequality is strict). this in turn implies that if both players are playing strategy si, none of them
has any incentive to deviate unilaterally by choosing other strategy. as an example, let us consider the
famous prisoner's dilemma game, which we will be discussing throughout the review. prisoner's dilemma
was introduced by rapoport and chammah [22] as a model of the implications of nuclear deterrence during
the cold war, and is given by the following payoff matrix (we use the traditional convention that the matrix indicates payoffs to the row player)

        c   d
    c   3   0
    d   5   1        (1)
the strategies are named c and d for cooperating and defecting, respectively. this game is referred to as
prisoner's dilemma because it is usually posed in terms of two persons that are arrested accused of a crime.
the police separates them and makes the following offer to them: if one confesses and incriminates the other,
she will receive a large reduction in the sentence, but if both confess they will only get a minor reduction;
and if nobody confesses then the police is left only with circumstantial evidence, enough to imprison them
for a short period. the amounts of the sentence reductions are given by the payoffs in (1). it is clear from
it that d is a strict nash equilibrium: to begin with, it is a dominant strategy, because no matter what the
column player chooses to do, the row player is always better off by defecting; and when both players defect,
neither will improve her situation by cooperating. in terms of the prisoners, this translates into the fact that
both will confess if they behave rationally. the dilemma arises when one realizes that both players would
be better off cooperating, i.e. not confessing, but rationality leads them unavoidably to confess.
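as a quick illustration of these definitions, the following minimal sketch (ours, not part of the original analysis; the payoff values are those of matrix (1) and the helper names are our own) checks dominance and the strict nash condition directly:

```python
import numpy as np

# payoff matrix of the prisoner's dilemma, eq. (1): rows/columns ordered as (c, d);
# w[i, j] is the payoff to the row player using strategy i against an opponent using j
w = np.array([[3.0, 0.0],
              [5.0, 1.0]])
c, d = 0, 1

# d is dominant if it does at least as well as c against every opponent strategy
dominant_d = all(w[d, j] >= w[c, j] for j in (c, d))

# s_i is a (strict) nash equilibrium if it is a (strictly) best reply to itself:
# w[i, i] >= w[j, i] for every alternative strategy j
def is_nash(i, strict=False):
    better = (lambda a, b: a > b) if strict else (lambda a, b: a >= b)
    return all(better(w[i, i], w[j, i]) for j in range(len(w)) if j != i)

print(dominant_d, is_nash(d, strict=True), is_nash(c))  # True True False
```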
the above discussion concerns nash equilibria in pure strategies. however, players can also use the
so-called mixed strategies, defined by a vector with as many entries as available strategies, every entry
indicating the probability of using that strategy. the notation changes then accordingly: we use vectors
x = (x1 x2 . . . xn)^t, which are elements of the simplex sn spanned by the vectors ei of the standard unit
base (vectors ei are then identified with the n pure strategies). the definition of a nash equilibrium in mixed
strategies is identical to the previous one: the strategy profile x is a nash equilibrium if it is a best reply
to itself in terms of the expected payoffs, i.e. if x^t wx ≥ y^t wx, ∀y ∈ sn. once mixed strategies have
been introduced, one can prove, following nash [13] that every normal form game has at least one nash
equilibrium, albeit it need not necessarily be a nash equilibrium in pure strategies. an example we will
also be discussing below is given by the hawk-dove game (also called snowdrift or chicken in the literature
[23]), introduced by maynard smith and price to describe animal conflicts [24] (strategies are labeled h and
d for hawk and dove, respectively)
\[
\begin{array}{c|cc}
 & d & h \\ \hline
d & 3 & 1 \\
h & 5 & 0
\end{array}
\tag{2}
\]
in this case, neither h nor d are nash equilibria, but there is indeed one nash equilibrium in mixed
strategies, that can be shown [6] to be given by playing d with probability 1/3. this makes sense in terms
of the meaning of the game, which is an anti-coordination game, i.e. the best thing to do is the opposite of
what the other player does. indeed, in the snowdrift interpretation, two people are trapped by a snowdrift at the two
ends of a road. for every one of them, the best option is not to shovel snow off to free the road and let the
other person do it; however, if the other person does not shovel, then the best option is to shovel oneself.
there is, hence, a temptation to defect that creates a dilemmatic situation (in which mutual defection leads
to the worst possible outcome).
in the same way as he reinterpreted monetary payoffs in terms of reproductive success, maynard smith
reinterpreted mixed strategies as population frequencies. this allowed him to leave behind the economic
concept of the rational individual and move towards biological applications (as well as applications in other fields). as a
consequence, the economic evolutionary idea in terms of learning new strategies gives way to a genetic
transmission of behavioral strategies to offspring. therefore, maynard smith's interpretation of the above
result is that a population consisting of one third of individuals that always use the d strategy and two thirds
of h-strategists is a stable genetic polymorphism. at the core of this concept is his notion of evolutionarily
stable strategy. maynard smith defined a strategy as evolutionarily stable if the following two conditions
are satisfied
\[
x^T W x \ge y^T W x, \quad \forall y \in S_n, \tag{3}
\]
\[
\text{if } x \ne y \text{ and } y^T W x = x^T W x, \text{ then } x^T W y > y^T W y. \tag{4}
\]
the rationale behind this definition is again of a population theoretical type: these are the conditions that
must be fulfilled for a population of x-strategists to be non-invadable by any y-mutant. indeed, either
x performs better against itself than y does or, if they perform equally well, x performs better against y
than y does against itself. these two conditions guarantee non-invasibility of the population. on the other hand, comparing
the definitions of evolutionarily stable strategy and nash equilibrium one can immediately see that a strict
nash equilibrium is an evolutionarily stable strategy and that an evolutionarily stable strategy is a nash
equilibrium.
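the following sketch (ours; payoffs are those of the hawk-dove matrix (2), with strategies ordered as (d, h)) computes the mixed equilibrium from the indifference condition and then tests conditions (3)-(4). for a 2 × 2 game with an interior equilibrium the sign of the stability comparison is the same for every mutant, so testing the two pure mutants suffices:

```python
import numpy as np

# hawk-dove payoffs of eq. (2), strategies ordered as (d, h);
# w[i, j] is the payoff to a player using i against an opponent using j
w = np.array([[3.0, 1.0],
              [5.0, 0.0]])

# mixed equilibrium from the indifference condition (w x)_d = (w x)_h
a, b = w[0], w[1]
p = (b[1] - a[1]) / (a[0] - a[1] - b[0] + b[1])   # probability of playing d
x = np.array([p, 1.0 - p])
print(x)                                           # approx [1/3, 2/3]

def payoff(mine, other):
    return mine @ w @ other

# evolutionary stability, conditions (3)-(4), tested against the pure mutants e_d, e_h
mutants = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ess = all(
    payoff(x, x) > payoff(y, x)
    or (np.isclose(payoff(x, x), payoff(y, x)) and payoff(x, y) > payoff(y, y))
    for y in mutants
)
print(ess)                                         # True: the mixed strategy is an ess
```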
2.2. replicator dynamics
after nash proposed his definition of equilibrium, the main criticism that the concept has received relates
to how equilibria are reached. in other words, nash provided a rule to decide which are the strategies that
rational players should play in a game, but how do people involved in actual game-theoretical settings, yet
without knowledge of game theory, find the nash equilibrium? furthermore, in case there is more than one
nash equilibrium, which one should be played, i.e., which one is the true "solution" of the game? these
questions gave rise to a great number of works dealing with learning and with refinements of the concept
that allow one to distinguish among equilibria, particularly within the field of economics. this literature is
out of the scope of the present review and the reader is referred to [25] for an in-depth discussion.
one of the answers to the above criticism arises as a bonus from the ideas of maynard smith. the
notion of evolutionarily stable strategy carries implicitly some kind of dynamics when we speak of invasibility by
mutants: a population is stable if, when a small proportion of it mutates, it eventually evolves back to the
original state. one could therefore expect that, starting from some random initial condition, populations
would evolve to an evolutionarily stable strategy, which, as already stated, is nothing but a nash equilibrium.
thus, we would have solved the question as to how the population "learns" to play the nash equilibrium
and perhaps the problem of selecting among different nash equilibria. however, so far we have only spoken
of an abstract dynamics; nothing has been specified as to how the population, or the strategies it contains,
actually evolves.
the replicator equation, due to taylor and jonker [12], was the first and most successful proposal of an
evolutionary game dynamics. within the population dynamics framework, the state of the population, i.e.
the distribution of strategy frequencies, is given by x as above. a first key point is that we assume that
the xi are differentiable functions of time t: this requires in turn assuming that the population is infinitely
large (or that xi are expected values for an ensemble of populations). within this hypothesis, we can now
postulate a law of motion for x(t). assuming further that individuals meet randomly, engaging in a game
with payoff matrix w, then (wx)i is the expected payoff for an individual using strategy si, and x^t wx is
the average payoff in the population state x. if we, consistently with our interpretation of payoff as fitness,
postulate that the per capita rate of growth of the subpopulation using strategy si is proportional to its
payoff, we arrive at the replicator equation (the name was first proposed in [26])
\[
\dot{x}_i = x_i \left[ (W x)_i - x^T W x \right], \tag{5}
\]
where the term x^T W x arises to ensure the constraint Σ_i x_i = 1 (ẋ_i denotes the time derivative of x_i).
this equation translates into mathematical terms the elementary principle of natural selection: strategies,
or individuals using a given strategy, that reproduce more efficiently spread, displacing those with smaller
fitness. note also that states with xi = 1, xj = 0, ∀j ≠ i are solutions of eq. (5) and, in fact, they are
absorbing states, playing a relevant role in the dynamics of the system in the absence of mutation.
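to make the dynamics concrete, here is a minimal numerical sketch (ours; the integration scheme and parameter values are purely illustrative) that integrates eq. (5) for the two games introduced in section 2.1:

```python
import numpy as np

def replicator_trajectory(w, x0, dt=0.01, steps=5000):
    """euler integration of eq. (5): dx_i/dt = x_i [ (w x)_i - x.w.x ]."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        fitness = w @ x              # expected payoff of each strategy, (w x)_i
        mean_fitness = x @ fitness   # average payoff in the population, x^t w x
        x = x + dt * x * (fitness - mean_fitness)
        x = np.clip(x, 0.0, 1.0)
        x /= x.sum()                 # guard against numerical drift off the simplex
    return x

w_pd = np.array([[3.0, 0.0], [5.0, 1.0]])   # prisoner's dilemma, eq. (1), order (c, d)
w_hd = np.array([[3.0, 1.0], [5.0, 0.0]])   # hawk-dove, eq. (2), order (d, h)

print(replicator_trajectory(w_pd, [0.9, 0.1]))  # approx [0, 1]: defection takes over
print(replicator_trajectory(w_hd, [0.9, 0.1]))  # approx [1/3, 2/3]: stable mixed state
```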
once an equation has been proposed, one can resort to the tools of dynamical systems theory to de-
rive its most important consequences. in this regard, it is interesting to note that the replicator equation
can be transformed, by an appropriate change of variables, into a system of lotka-volterra type [11]. for
our present purposes, we will focus only on the relation of the replicator dynamics with the two equilib-
rium concepts discussed in the preceding subsection. the rest points of the replicator equation are those
frequency distributions x that make the rhs of eq. (5) vanish, i.e. those that verify either xi = 0 or
(wx)i = xt wx, ∀i = 1, . . . , n. the solutions of this system of equations are all the mixed strategy nash
equilibria of the game [9]. furthermore, it is not difficult to show (see e.g. [11]) that strict nash equilibria
are asymptotically stable, and that stable rest points are nash equilibria. we thus see that the replicator
equation provides us with an evolutionary mechanism through which the players, or the population, can
arrive at a nash equilibrium or, equivalently, at an evolutionarily stable strategy. the different basins of
attraction of the different equilibria further explain which of them is selected in case there is more than
one.
for our present purposes, it is important to stress the hypotheses involved (explicitly or implicitly) in
the derivation of the replicator equation:
1. the population is infinitely large.
2. individuals meet randomly or play against every other one, such that the payoff of strategy si is
proportional to the payoff averaged over the current population state x.
3. there are no mutations, i.e. strategies increase or decrease in frequency only due to reproduction.
4. the variation of the population is linear in the payoff difference.
assumptions 1 and 2 are, as we stated above, crucial in deriving the replicator equation, because they allow one to replace
the fitness of a given strategy by its mean value when the population is described in terms of frequencies. of
course, finite populations deviate from the values of frequencies corresponding to infinite ones. in a series
of recent works, traulsen and co-workers have considered this problem [27, 28, 29]. they have identified
different microscopic stochastic processes that lead to the standard or the adjusted replicator dynamics,
showing that differences on the individual level can lead to qualitatively different dynamics in asymmetric
conflicts and, depending on the population size, can even invert the direction of the evolutionary process.
their analytical framework, which they have extended to include an arbitrary number of strategies, provides
good approximations to simulation results for very small sizes. for a recent review of these and related issues,
see [30]. on the other hand, there has also been some work showing that evolutionarily stable strategies
in infinite populations may lose their stable character when the population is small (a result not totally
unrelated to those we will discuss in section 3). for examples of this in the context of hawk-dove games,
see [31, 32].
assumption 3 does not pose any severe problem. in fact, mutations (or migrations among physically
separated groups, whose mathematical description is equivalent) can be included, yielding the so-called
replicator-mutator equation [33]. this is in turn equivalent to the price equation [34], in which a term
involving the covariance of fitness and strategies appears explicitly. mutations have been also included in
the framework of finite size populations [29] mentioned above. we refer the reader to references [33, 35] for
further analysis of this issue.
assumption 4 is actually the core of the definition of replicator dynamics. in section 4 below we will come
back to this point, when we discuss the relation of replicator dynamics to the rules used for the update of
strategies in agent-based models. work beyond the hypothesis of linearity can proceed in different
directions, by considering generalized replicator equations of the form
\[
\dot{x}_i = x_i \left[ w_i(x) - x^T w(x) \right]. \tag{6}
\]
the precise choice for the functions wi(x) depends of course on the particular situation one is trying to
model. a number of the results on the replicator equation carry over to several such choices. this topic is well
summarized in [21] and the interested reader can proceed from there to the relevant references.
assumption 2 is the one to which this review is devoted and, once again, there are very many different
possibilities in which it may not hold. we will discuss in depth below the case in which the time scale
of selection is faster than that of interaction, leading to the impossibility that a given player can interact
with all others. interactions may be also physically limited, either for geographical reasons (individuals
interact only with those in their surroundings), for social reasons (individuals interact only with those with
whom they are acquainted) or otherwise. as in previous cases, these variations prevent one from using the
expected value of the fitness of a strategy in the population as a good approximation for its growth rate. we
will see the consequences this has in the following sections.
2.3. the problem of the emergence of cooperation
one of the most important problems to which evolutionary game theory is being applied is the under-
standing of the emergence of cooperation in human (albeit non-exclusively) societies [14]. as we stated
in the introduction, this is an evolutionary puzzle that can be accurately expressed within the formalism
of game theory. one of the games that has been most often used in connection with this problem is the
prisoner's dilemma introduced above, eq. (1). as we have seen, rational players should unavoidably defect
and never cooperate, thus leading to a very bad outcome for both players. on the other hand, it is evident
that if both players had cooperated they would have been much better off. this is a prototypical example
of a social dilemma [36] which is, in fact, (partially) solved in societies. indeed, the very existence of human
society, with its highly specialized labor division, is a proof that cooperation is possible.
in more biological terms, the question can be phrased using again the concept of fitness. why should an
individual help another achieve more fitness, implying more reproductive success and a chance that the helper
is eventually displaced? it is important to realize that such a cooperative effort is at the root of very many
biological phenomena, from mutualism to the appearance of multicellular organisms [37].
when one considers this problem in the framework of the replicator equation, the conclusion is immediate and
disappointing: cooperation is simply not possible. as defection is the only nash equilibrium of prisoner's
dilemma, for any initial condition with a positive fraction of defectors, replicator dynamics will inexorably
take the population to a final state in which all individuals are defectors. therefore, one needs to understand
how the replicator equation framework can be supplemented or superseded for evolutionary game theory to
come closer to what is observed in the real world (note that there is no hope for classical game theory in
this respect, as it is based on the perfect rationality of the players). relaxing the above discussed assumptions
leads, in some cases, to possible solutions to this puzzle, and our aim here is to summarize and review what
has been done along these lines with assumption 2.
3. the effect of different time scales
evolution is generally supposed to occur at a slow pace: many generations may be needed for a noticeable
change to arise in a species. this is indeed how darwin understood the effect of natural selection, and he
always referred to its cumulative effects over very many years. however, this need not be the case and, in
fact, selection may occur faster than the interaction between individuals (or of the individuals with their
environment). indeed, recent experimental studies have reported observations of fast selection [38, 39, 40].
it is also conceivable that, in man-monitored or laboratory processes, one might make selection, rather than
interaction, the more rapid influence. another context where these ideas apply naturally is that of cultural
evolution or social learning, where the time scale of selection is much closer to the time scale of interaction.
therefore, it is natural to ask about the consequences of the above assumption and the effect of relaxing it.
this issue has already been considered from an economic viewpoint in the context of equilibrium selection
(but see an early biological example breaking the assumption of purely random matching in [41], which
considered hawk-dove games where strategists are more likely to encounter individuals using their same
strategy). this refers to a situation in which for a game there is more than one equilibrium, like in the stag
hunt game, given e.g. by the following payoff matrix
\[
\begin{array}{c|cc}
 & c & d \\ \hline
c & 6 & 1 \\
d & 5 & 2
\end{array}
\tag{7}
\]
this game was already posed as a metaphor by rousseau [42], and it reads as follows: two people go out
hunting for stag, because two of them are necessary to hunt down such a big animal. however, either one of
them can cheat the other by hunting hare, which one can do alone, thus making it impossible for the other
to get the stag. therefore, we have a coordination game, in which the best option is to do as the other does:
hunt stag together, or both hunt hare separately. in game theory, this translates into the fact that both
c and d are nash equilibria, and in principle one is not able to determine which one would be selected by
the players, i.e., which one is the solution of the game. one rationale to choose between them was proposed by harsanyi
and selten^2 [43], who classified c as the pareto-efficient equilibrium (hunting stag is more profitable than
hunting hare), because that is the most beneficial for both players, and d as the risk-dominant equilibrium,
because it is the strategy that is better in case the other player chooses d (one can hunt hare alone). here
the tension arises then from the risk involved in cooperation, rather than from the temptation to defect of
snowdrift games [44] (note that both tensions are present in prisoner's dilemma).
kandori et al [45] showed that the risk-dominant equilibrium is selected when using a stochastic evo-
lutionary game dynamics, proposed by foster and young [46], that considers that every player interacts
with every other one (implying slow selection). however, fast selection leads to another result. indeed,
robson and vega-redondo [47] considered the situation in which every player is matched to another one
and therefore they only play one game before selection acts. in that case, they showed that the outcome
changed and that the pareto-efficient equilibrium is selected. this result was qualified later by miekisz [48],
who showed that the selected equilibrium depended on the population size and the mutation level of the
dynamics. recently, this issue has also been considered in [49], which compares the situation where the
^2 harsanyi and selten received the nobel prize in economics for this contribution, along with nash, in 1994.
contribution of the game to the fitness is small (weak selection, see section 4.6 below) to the one where the
game is the main source of the fitness, finding that in the former the results are equivalent to the well-mixed
population, but not in the latter, where the conclusions of [47] are recovered. it is also worth noticing in
this regard the works by boylan [50, 51], where he studied the types of random matching that can still
be approximated by continuous equations. in any case, even if the above are not general results and their
application is mainly in economics, we have already a hint that time scales may play a non-trivial role in
evolutionary games.
in fact, as we will show below, rapid selection affects evolutionary dynamics in such a dramatic way
that for some games it even changes the stability of equilibria. we will begin our discussion by briefly
summarizing results on a model for the emergence of altruistic behavior, in which the dynamics is not
replicator-like, but that illustrates nicely the very important effects of fast selection. we will subsequently
proceed to present a general theory for symmetric 2 × 2 games.
there, in order to make explicit the
relation between selection and interaction time scales, we use a discrete-time dynamics that produces results
equivalent to the replicator dynamics when selection is slow. we will then show that the pace at which
selection acts on the population is crucial for the appearance and stability of cooperation. even in non-
dilemma games such as the harmony game [52], where cooperation is the only possible rational outcome,
defectors may be selected for if population renewal is very rapid.
3.1. time scales in the ultimatum game
as a first illustration of the importance of time scales in evolutionary game dynamics, we begin by dealing
with this problem in the context of a specific set of experiments related to the ultimatum game [53, 54].
in this game, under conditions of anonymity, two players are shown a sum of money. one of the players,
the "proposer", is instructed to offer any amount to the other, the "responder". the proposer can make
only one offer, which the responder can accept or reject.
if the offer is accepted, the money is shared
accordingly; if rejected, both players receive nothing. note that the ultimatum game is not symmetric, in
so far as proposer and responder have clearly different roles and are therefore not exchangeable. this will
be our only such example, and the remainder of the paper will only deal with symmetric games. since
the game is played only once (no repeated interactions) and anonymously (no reputation gain; for more on
explanations of altruism relying on reputation see [55]), a self-interested responder will accept any amount
of money offered. therefore, self-interested proposers will offer the minimum possible amount, which will
be accepted.
the above prediction, based on the rational character of the players, contrasts clearly with the results of
actual ultimatum game experiments with human subjects, in which average offers do not even approximate
the self-interested prediction. generally speaking, proposers offer respondents very substantial amounts
(50% being a typical modal offer) and respondents frequently reject offers below 30% [56, 57]. most of the
experiments have been carried out with university students in western countries, showing a large degree
of individual variability but a striking uniformity between groups in average behavior. a large study in
15 small-scale societies [54] found that, in all cases, respondents or proposers behave in such a reciprocal
manner. furthermore, the behavioral variability across groups was much larger than previously observed:
while mean offers in the case of university students are in the range 43%-48%, in the cross-cultural study
they ranged from 26% to 58%.
how does this fit into our focus topic, namely the emergence of cooperation? the fact that indirect
reciprocity is excluded by the anonymity condition and that interactions are one-shot (repeated interaction,
the mechanism proposed by axelrod to foster cooperation [58, 59], does not apply) allows one to interpret
rejections in terms of the so-called strong reciprocity [60, 61].
this amounts to considering that these
behaviors are truly altruistic, i.e. that they are costly for the individual performing them in so far as they
do not result in direct or indirect benefit. as a consequence, we return to our evolutionary puzzle: the
negative effects of altruistic acts must decrease the altruist's fitness as compared to that of the recipients
of the benefit, ultimately leading to the extinction of altruists. indeed, standard evolutionary game theory
arguments applied to the ultimatum game lead to the expectation that, in a well-mixed population, punishers
(individuals who reject low offers) have less chance to survive than rational players (individuals who accept
any offer) and eventually disappear. we will now show that this conclusion depends on the dynamics, and
that different dynamics may lead to the survival of punishers through fluctuations.
figure 1: left: mean acceptance threshold as a function of simulation time. initial condition is that all agents have ti = 1.
right: acceptance threshold distribution after 10^8 games (note that this distribution, for small s, is not stationary). initial
condition is that all agents have uniformly distributed, random ti. in both cases, s is as indicated in the plot.
consider a population of N agents playing the ultimatum game, with a fixed sum of money M per
game. random pairs of players are chosen, of which one is the proposer and the other the responder.
in its simplest version, we will assume that players are capable of other-regarding behavior (empathy);
consequently, in order to optimize their gain, proposers offer the minimum amount of money that they
would accept. every agent has her own, fixed acceptance threshold, 1 ≤ ti ≤ M (ti are always integer
numbers for simplicity). agents have only one strategy: respondents reject any offer smaller than their
own acceptance threshold, and accept offers otherwise. money shared as a consequence of accepted offers
accumulates to the capital of each player, and is subsequently interpreted as fitness as usual. after s games,
the agent with the overall minimum fitness is removed (randomly picked if there are several) and a new
agent is introduced by duplicating that with the maximum fitness, i.e. with the same threshold and the same
fitness (again randomly picked if there are several). mutation is introduced in the duplication process by
allowing changes of ±1 in the acceptance threshold of the newly generated player with probability 1/3 each.
agents have no memory (interactions are one-shot) and no information about other agents (no reputation
gains are possible). we note that the dynamics of this model is not equivalent to the replicator equation,
and therefore the results do not apply directly in that context. in fact, such an extremal dynamics leads to
an amplification of the effect of fluctuations that allows one to observe more clearly the influence of time scales.
this is the reason why we believe it will help make our main point.
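a condensed re-implementation of the model just described (our own sketch, written directly from the description above; the parameter values are illustrative and ties are broken at random, as stated) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def ultimatum_model(N=1000, M=100, s=1, cycles=10000):
    """agents offer and accept according to a single threshold t_i (empathy).
    each cycle: s random games, then the poorest agent is replaced by a mutated
    copy of the richest one, as described in the text."""
    thresholds = np.ones(N, dtype=int)   # initial condition: all self-interested, t_i = 1
    fitness = np.zeros(N)
    for _ in range(cycles):
        for _ in range(s):
            proposer, responder = rng.choice(N, size=2, replace=False)
            offer = thresholds[proposer]              # empathy: offer what you would accept
            if offer >= thresholds[responder]:
                fitness[proposer] += M - offer
                fitness[responder] += offer
        # selection: remove a poorest agent and duplicate a richest one (random tie-breaking),
        # mutating the copied threshold by -1, 0 or +1 with probability 1/3 each
        worst = rng.choice(np.flatnonzero(fitness == fitness.min()))
        best = rng.choice(np.flatnonzero(fitness == fitness.max()))
        thresholds[worst] = np.clip(thresholds[best] + rng.choice([-1, 0, 1]), 1, M)
        fitness[worst] = fitness[best]
    return thresholds

print(ultimatum_model(N=100, s=1, cycles=5000).mean())   # mean acceptance threshold
```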
fig. 1 shows the typical outcome of simulations of our model for a population of N = 1000 individuals.
an important point to note is that we are not plotting averages but a single realization for each value of
s; the realizations we plot are not specially chosen but rather are representative of the typical simulation
results. we have chosen to plot single realizations instead of averages to make clear for the reader the large
fluctuations arising for small s, which are the key to understand the results and which we discuss below.
as we can see, the mean acceptance threshold rapidly evolves towards values around 40%, while the whole
distribution of thresholds converges to a peaked function, with the range of acceptance thresholds for the
agents covering about 10% of the available ones. these values are compatible with the experimental
results discussed above. the mean acceptance threshold fluctuates during the length of the simulation,
never reaching a stationary value for the durations we have explored. the width of the peak fluctuates
as well, but in a much smaller scale than the position. the fluctuations are larger for smaller values of s,
and when s becomes of the order of N or larger, the evolution of the mean acceptance threshold is very
smooth. as is clear from fig. 1, for very small values of s, the differences in payoff arising from the fact
that only some players play are amplified by our extreme dynamics, resulting in a very noisy behavior of
the mean threshold. this is a crucial point and will be discussed in more detail below. importantly, the
typical evolution we are describing does not depend on the initial condition. in particular, a population
consisting solely of self-interested agents, i.e. all initial thresholds set to ti = 1, evolves in the same fashion.
indeed, the distributions shown in the left panel of fig. 1 (which again correspond to single realizations)
have been obtained with such an initial condition, and it can be clearly observed that self-interested agents
disappear in the early stages of the evolution. the number of players and the value m of the capital at stake
in every game are not important either, and increasing m only leads to a higher resolution of the threshold
distribution function, whereas smaller mutation rates simply change the pace of evolution.
to realize the effect of time scales, it is important to recall previous studies of the ultimatum game by
page and nowak [62, 63]. the model introduced in those works has a dynamics completely different from
ours: following standard evolutionary game theory, every player plays with every other one in both roles
(proposer and responder), and afterwards players reproduce with probability proportional to their payoff
(which is fitness in the reproductive sense). simulations and adaptive dynamics equations show that the
population ends up composed of players with fair (50%) thresholds. note that this is not what one would
expect on a rational basis, but page and nowak traced this result back to empathy, i.e. the fact that the
model is constrained to offer what one would accept. in any event, what we want to stress here is that their
findings are also different from our observations: we only reach an equilibrium for large s. the reason for
this difference is that the page-nowak model dynamics describes the s/N → ∞ limit of our model, in which
between death-reproduction events the time-averaged gain obtained by all players is, with high accuracy, a
constant O(N) times the mean payoff. we thus see that our model is more general because it has one free
parameter, s, that allows one to select different regimes, whereas the page-nowak dynamics is only one limiting
case. those different regimes are what we have described as fluctuation dominated (when s/N is finite and
not too large) and the regime analyzed by page and nowak (when s/N → ∞). this amounts to saying that
by varying s we can study regimes far from the standard evolutionary game theory limit. as a result, we
find a variability of outcomes for the acceptance threshold consistent with the observations in real human
societies [54, 57]. furthermore, if one considers that the acceptance threshold and the offer can be set
independently, the results differ even more [64]: while in the model of page and nowak both magnitudes
evolve to take very low values, close to zero, in the model presented here the results, when s is small, are very
similar to the one-threshold version, leading again to values compatible with the experimental observations.
this in turn implies that rapid selection may be an alternative to empathy as an explanation of human
behavior in this game.
the main message to be taken from this example is that fluctuations due to the finite number of games s
are very important. among the results summarized above, the evolution of a population entirely formed by
self-interested players into a diversified population with a large majority of altruists is the most relevant and
surprising one. one can argue that the underlying reason for this is precisely the presence of fluctuations
in our model. for the sake of definiteness, let us consider the case s = 1 (agent replacement takes place
after every game) although the discussion applies to larger (but finite) values of s as well. after one or
more games, a mutation event will take place and a "weak altruistic punisher" (an agent with ti = 2) will
appear in the population, with a fitness inherited from its ancestor. for this new agent to be removed at
the next iteration, our model rules imply that this agent has to have the lowest fitness, and also that it
does not play as a proposer in the next game (if playing as a responder the agent will earn nothing because
of her threshold). in any other event this altruistic punisher will survive at least one cycle, in which an
additional one can appear by mutation. it is thus clear that fluctuations indeed help altruists to take over:
as soon as a few altruists are present in the population, it is easy to see analytically that they will survive
and proliferate even in the limit s/N → ∞.
3.2. time scales in symmetric binary games
the example in the previous subsection suggests that there certainly is an issue of relative time scales
in evolutionary game theory that can have serious implications. in order to gain insight into this question,
it is important to consider a general framework, and therefore we will now look at the general problem of
symmetric 2 × 2 games. asymmetric games can be treated similarly, albeit in a more cumbersome manner,
and their classification involves many more types; we feel, therefore, that the symmetric case is a much
clearer illustration of the effect of time scales. in what follows, we review and extend previous results of ours
[65, 66], emphasizing the consequences of the existence of different time scales.
let us consider a population of n individuals, each of whom plays with a fixed strategy, that can be
either c or d (for "cooperate" and "defect" respectively, as in section 2). we denote the payoff that an
x-strategist gets when confronted with a y-strategist (x and y are c or d) by the matrix element wxy.
for a certain time individuals interact with other individuals in pairs randomly chosen from the population.
during these interactions individuals collect payoffs. we shall refer to the interval between two interaction
events as the interaction time. once the interaction period has finished reproduction occurs, and in steady
state selection acts immediately afterwards restoring the population size to the maximum allowed by the
environment. the time between two of these reproduction/selection events will be referred to as the evolution
time.
reproduction and selection can be implemented in at least two different ways. the first one is through
the fisher-wright process [5] in which each individual generates a number of offspring proportional to her
payoff.
selection acts by randomly killing individuals of the new generation until the size of
the population is restored back to N individuals. the second option for the evolution is the moran process [5, 67].
it amounts to randomly choosing an individual for reproduction, proportionally to payoffs, whose single
offspring replaces another randomly chosen individual, in this case with the same probability 1/N for all. in
this manner the population always remains constant. the fisher-wright process is an appropriate model for
species which produce a large number of offspring in the next generation but only a few of them survive, and
the next generation replaces the previous one (like insects or some fishes). the moran process is a better
description for species which give rise to few offspring and reproduce in continuous time, because individuals
neither reproduce nor die simultaneously, and death occurs at a constant rate. the original process was
generalized to the frequency-dependent fitness context of evolutionary game theory by taylor et al. [68],
and used to study the conditions for selection favoring the invasion and/or fixation of new phenotypes. the
results were found to depend on whether the population was infinite or finite, leading to a classification of
the process in three or eight scenarios, respectively.
both the fisher-wright and moran processes define markov chains [69, 70] on the population, charac-
terized by the number of its c-strategists n ∈ {0, 1, . . . , N}, because in both cases it is assumed that the
composition of the next generation is determined solely by the composition of the current generation. each
process defines a stochastic matrix P with elements P_{n,m} = P(m|n), the probability that the next generation
has m c-strategists provided the current one has n. while for the fisher-wright process all the elements of
P may be nonzero, for the moran process the only nonzero elements are those for which m = n or m = n ± 1.
hence moran is, in the jargon of markov chains, a birth-death process with two absorbing states, n = 0 and
n = N [69, 70]. such a process is mathematically simpler, and for this reason it will be the one we will
choose for our discussion on the effect of time scales.
to introduce time scales explicitly, we will implement the moran process in the following way, generalizing
the proposal by taylor et al. [68]. during s time steps pairs of individuals will be chosen to play, one pair
every time step. after that the above described reproduction/selection process will act according to the
payoffs collected by players during the s interaction steps. then, the payoffs of all players are set to zero
and a new cycle starts. notice that in general players will play a different number of times (some not at
all) and this will be reflected in the collected payoffs. if s is too small, most players will not have the opportunity
to play, and chance will have a more prominent role in driving the evolution of the population.
quantifying this effect requires that we first compute the probability that, in a population of N individuals
of which n are c-strategists, an x-strategist is chosen to reproduce after the s interaction steps. let
n_XY denote the number of pairs of x- and y-strategists that are chosen to play. the probability of forming
a given pair, denoted p_XY, will be
\[
p_{CC} = \frac{n(n-1)}{N(N-1)}, \qquad
p_{CD} = \frac{2n(N-n)}{N(N-1)}, \qquad
p_{DD} = \frac{(N-n)(N-n-1)}{N(N-1)}. \tag{8}
\]
then the probability of a given set of n_XY is dictated by the multinomial distribution
\[
M(\{n_{XY}\}; s) =
\begin{cases}
\dfrac{s!\; p_{CC}^{\,n_{CC}}\, p_{CD}^{\,n_{CD}}\, p_{DD}^{\,n_{DD}}}{n_{CC}!\, n_{CD}!\, n_{DD}!}, & \text{if } n_{CC}+n_{CD}+n_{DD}=s,\\[2mm]
0, & \text{otherwise.}
\end{cases} \tag{9}
\]
for a given set of variables n_XY, the payoffs collected by c- and d-strategists are
\[
W_C = 2 n_{CC} W_{CC} + n_{CD} W_{CD}, \qquad
W_D = n_{CD} W_{DC} + 2 n_{DD} W_{DD}. \tag{10}
\]
then the probabilities of choosing a c- or d-strategist for reproduction are
\[
P_C(n) = E_M\!\left[\frac{W_C}{W_C + W_D}\right], \qquad
P_D(n) = E_M\!\left[\frac{W_D}{W_C + W_D}\right], \tag{11}
\]
where the expectations E_M[·] are taken over the probability distribution M of eq. (9). notice that we have to
guarantee W_X ≥ 0 for the above expressions to define a true probability. this forces us to choose all payoffs
W_XY ≥ 0. in addition, we have studied the effect of adding a baseline fitness to every player, which is
equivalent to a translation of the payoff matrix W, obtaining the same qualitative results (see below).
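eq. (11) can be estimated by straightforward monte carlo sampling of the pairings. the sketch below (ours) assumes, as required above, that all payoffs are strictly positive; the example values are those of the harmony game used in section 3.2.2:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_c_estimate(n, N, s, Wcc, Wcd, Wdc, Wdd, samples=200000):
    """monte carlo estimate of P_C(n), eq. (11): average W_C/(W_C + W_D) over the
    multinomial distribution (9) of the s pairings."""
    pvals = np.array([n * (n - 1), 2 * n * (N - n), (N - n) * (N - n - 1)], dtype=float)
    pvals /= N * (N - 1)                               # eq. (8): (p_CC, p_CD, p_DD)
    counts = rng.multinomial(s, pvals, size=samples)   # columns: n_CC, n_CD, n_DD
    Wc = 2 * counts[:, 0] * Wcc + counts[:, 1] * Wcd   # eq. (10)
    Wd = counts[:, 1] * Wdc + 2 * counts[:, 2] * Wdd
    return np.mean(Wc / (Wc + Wd))                     # payoffs > 0, so no division by zero

# example: n = 50 cooperators in a population of N = 100, for fast and slow selection
print(p_c_estimate(50, 100, s=1,   Wcc=1, Wcd=0.25, Wdc=0.75, Wdd=0.01))
print(p_c_estimate(50, 100, s=100, Wcc=1, Wcd=0.25, Wdc=0.75, Wdd=0.01))
```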
once these probabilities are obtained, the moran process gives the transition probabilities from a
state with n c-strategists to another with n ± 1 c-strategists. for n → n + 1, a c-strategist must be selected
for reproduction (probability P_C(n)) and a d-strategist must be replaced (probability (N − n)/N). thus
\[
P_{n,n+1} = P(n+1 \mid n) = \frac{N-n}{N}\, P_C(n). \tag{12}
\]
for n → n − 1, a d-strategist must be selected for reproduction (probability P_D(n)) and a c-strategist must be replaced (probability n/N). thus
\[
P_{n,n-1} = P(n-1 \mid n) = \frac{n}{N}\, P_D(n). \tag{13}
\]
finally, the transition probabilities are completed by
\[
P_{n,n} = 1 - P_{n,n-1} - P_{n,n+1}. \tag{14}
\]
3.2.1. slow selection limit
let us assume that s → ∞, i.e. the evolution time is much longer than the interaction time. then the
distribution (9) will be peaked at the values n_XY = s p_XY; the larger s, the sharper the peak. therefore in
this limit
\[
P_C(n) \to \frac{W_C(n)}{W_C(n) + W_D(n)}, \qquad
P_D(n) \to \frac{W_D(n)}{W_C(n) + W_D(n)}, \tag{15}
\]
where
\[
W_C(n) = \frac{n}{N}\left[\frac{n-1}{N-1}\,(W_{CC}-W_{CD}) + W_{CD}\right], \qquad
W_D(n) = \frac{N-n}{N}\left[\frac{n}{N-1}\,(W_{DC}-W_{DD}) + W_{DD}\right]. \tag{16}
\]
in general, for a given population size N we have to resort to a numerical evaluation of the various
quantities that characterize a birth-death process, according to the formulas in appendix a. however, for
large N the transition probabilities can be expressed in terms of the fraction of c-strategists x = n/N as
\[
P_{n,n+1} = x(1-x)\, \frac{w_C(x)}{x\, w_C(x) + (1-x)\, w_D(x)}, \tag{17}
\]
\[
P_{n,n-1} = x(1-x)\, \frac{w_D(x)}{x\, w_C(x) + (1-x)\, w_D(x)}, \tag{18}
\]
where
\[
w_C(x) = x\,(W_{CC} - W_{CD}) + W_{CD}, \qquad
w_D(x) = x\,(W_{DC} - W_{DD}) + W_{DD}. \tag{19}
\]
the terms w_C and w_D are, respectively, the expected payoff of a cooperator and of a defector in this case
of large s and N. the factor x(1−x) in front of P_{n,n+1} and P_{n,n−1} arises as a consequence of n = 0 and
n = N being absorbing states of the process. there is another equilibrium x* where P_{n,n±1} = P_{n±1,n}, i.e.
w_C(x*) = w_D(x*), with x* given by
\[
x^* = \frac{W_{CD} - W_{DD}}{W_{DC} - W_{CC} + W_{CD} - W_{DD}}. \tag{20}
\]
for x* to be a valid equilibrium, 0 < x* < 1, we must have
\[
(W_{DC} - W_{CC})(W_{CD} - W_{DD}) > 0. \tag{21}
\]
this equilibrium is stable^3 as long as the function w_C(x) − w_D(x) is decreasing at x*, for then if x < x*,
P_{n,n+1} > P_{n+1,n}, and if x > x*, P_{n,n−1} > P_{n−1,n}, i.e. the process will tend to restore the equilibrium, whereas
if the function is increasing the process will be driven away from x* by any fluctuation. in terms of (19) this implies
\[
W_{DC} - W_{CC} > W_{DD} - W_{CD}. \tag{22}
\]
notice that the two conditions
\[
w_C(x^*) = w_D(x^*), \qquad w_C'(x^*) < w_D'(x^*), \tag{23}
\]
are precisely the conditions arising from the replicator dynamics for x* to be a stable equilibrium [10, 11],
albeit expressed in a different manner than in section 2 (w_X' represents the derivative of w_X with respect
to x). out of the classic dilemmas, condition (21) holds for stag hunt and snowdrift games, but condition
(22) only holds for the latter. thus, as we have already seen, only snowdrift has a dynamically stable mixed
population.
this analysis leads us to conclude that the standard setting of evolutionary games as advanced above,
in which the time scale for reproduction/selection is implicitly (if not explicitly) assumed to be much longer
than the interaction time scale, automatically yields the distribution of equilibria dictated by the replicator
dynamics for that game. we have explicitly shown this to be true for binary games, but it can be extended
to games with an arbitrary number of strategies. in the next section we will analyze what happens if this
assumption on the time scales does not hold.
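as a quick consistency check (our own sketch), one can evaluate eq. (20) together with conditions (21) and (22) for concrete payoff values; with the stag hunt and snowdrift payoffs used in section 3.2.2 below, this reproduces the thresholds x* ≈ 0.49 and x* ≈ 0.19 quoted there, and classifies only the latter as stable:

```python
def mixed_equilibrium(Wcc, Wcd, Wdc, Wdd):
    """x* of eq. (20), with the existence condition (21) and the stability condition (22)."""
    exists = (Wdc - Wcc) * (Wcd - Wdd) > 0            # eq. (21): 0 < x* < 1
    stable = (Wdc - Wcc) > (Wdd - Wcd)                # eq. (22)
    x_star = (Wcd - Wdd) / (Wdc - Wcc + Wcd - Wdd) if exists else None
    return x_star, exists, stable

# stag hunt payoffs used below: unstable mixed equilibrium (coordination game)
print(mixed_equilibrium(Wcc=1, Wcd=0.01, Wdc=0.8, Wdd=0.2))    # (approx 0.49, True, False)
# snowdrift payoffs used below: stable mixed equilibrium (coexistence)
print(mixed_equilibrium(Wcc=1, Wcd=0.2, Wdc=1.8, Wdd=0.01))    # (approx 0.19, True, True)
```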
3.2.2. fast selection limit
when s is finite, considering all the possible pairings and their payoffs, we arrive at
\[
P_C(n) = \sum_{j=0}^{s} \sum_{k=0}^{s-j}
2^{\,s-j-k}\,
\frac{s!\; n^{s-k} (n-1)^{j} (N-n)^{s-j} (N-n-1)^{k}}{j!\, k!\, (s-j-k)!\; N^{s} (N-1)^{s}}
\times
\frac{2 j W_{CC} + (s-j-k) W_{CD}}{2 j W_{CC} + 2 k W_{DD} + (s-j-k)(W_{CD} + W_{DC})}, \tag{24}
\]
and P_D(n) = 1 − P_C(n). we have not been able to write this formula in a simpler way, so we have
to evaluate it numerically for every choice of the payoff matrix. however, in order to have a glimpse of
the effect of reducing the number of interactions between successive reproduction/selection events, we can
examine analytically the extreme case s = 1, for which
\[
P_{n,n+1} = \frac{n(N-n)}{N(N-1)} \left[ \frac{2 W_{CD}}{W_{DC} + W_{CD}} + \frac{n}{N}\,\frac{W_{DC} - W_{CD}}{W_{DC} + W_{CD}} - \frac{1}{N} \right], \tag{25}
\]
\[
P_{n,n-1} = \frac{n(N-n)}{N(N-1)} \left[ 1 + \frac{n}{N}\,\frac{W_{DC} - W_{CD}}{W_{DC} + W_{CD}} - \frac{1}{N} \right]. \tag{26}
\]
^3 here the notion of stability implies that the process will remain near x* for an extremely long time because, as long as N
is finite, no matter how large, the process will eventually end up in x = 0 or x = 1, the absorbing states.
from these equations we find that
\[
\frac{P_{n,n-1}}{P_{n,n+1}} = \frac{D n + S(N-1)}{D(n+1) + S(N-1) - D(N+1)},
\qquad D = W_{DC} - W_{CD}, \quad S = W_{DC} + W_{CD}, \tag{27}
\]
and this particular dependence on n allows us to find the following closed-form expression for C_n, the
probability that, starting with n cooperators, the population ends up with all cooperators (see appendix b):
\[
C_n = \frac{R_n}{R_N}, \qquad
R_n =
\begin{cases}
\displaystyle \prod_{j=1}^{n} \frac{S(N-1) + D j}{S(N-1) - D(N+1-j)} \; - \; 1, & \text{if } D \ne 0,\\[3mm]
n, & \text{if } D = 0.
\end{cases} \tag{28}
\]
the first thing worth noticing in this expression is that it only depends on the two off-diagonal elements
of the payoff matrix (through their sum, S, and their difference, D). this means that in an extreme situation
in which the evolution time is so short that it only allows a single pair of players to interact, the outcome
of the game only depends on what happens when two players with different strategies play. the reason is
obvious: only those two players that have been chosen to play will have a chance to reproduce. if both
players have strategy x, an x-strategist will be chosen to reproduce with probability 1. only if each player
uses a different strategy does the choice of the player that reproduces depend on the payoffs, and in this case
they are precisely W_CD and W_DC. of course, as s increases this effect crosses over to recover the outcome
for the case s → ∞.
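the following sketch (ours) puts the pieces together for finite s: it evaluates eq. (24) by direct summation (adequate for small s; for large s one should work with logarithms), builds the transition probabilities (12)-(13), and obtains the absorption probabilities C_n from the standard first-step recursion for birth-death chains (the quantity worked out in appendix b, which is not reproduced in this excerpt). the payoff values are those of the harmony game of fig. 2:

```python
import numpy as np
from math import factorial

def p_c(n, N, s, Wcc, Wcd, Wdc, Wdd):
    """P_C(n) of eq. (24): probability that a c-strategist reproduces after s games."""
    if n == 0:
        return 0.0
    if n == N:
        return 1.0
    total = 0.0
    for j in range(s + 1):                # j pairs of type cc
        for k in range(s - j + 1):        # k pairs of type dd, m = s-j-k pairs of type cd
            m = s - j - k
            weight = (2.0 ** m * factorial(s) / (factorial(j) * factorial(k) * factorial(m))
                      * n ** (s - k) * (n - 1) ** j * (N - n) ** (s - j) * (N - n - 1) ** k
                      / (float(N) ** s * float(N - 1) ** s))
            Wc = 2 * j * Wcc + m * Wcd
            Wd = 2 * k * Wdd + m * Wdc
            if Wc + Wd > 0:
                total += weight * Wc / (Wc + Wd)
    return total

def absorption_probabilities(N, s, payoffs):
    """C_n: probability of absorption into n = N, from the birth-death ratios gamma_n."""
    gammas = []
    for n in range(1, N):
        Pc = p_c(n, N, s, *payoffs)
        up = (N - n) / N * Pc             # eq. (12)
        down = n / N * (1.0 - Pc)         # eq. (13)
        gammas.append(down / up)
    # C_n = (1 + sum_{k=1}^{n-1} prod_{j=1}^{k} gamma_j) / (1 + sum_{k=1}^{N-1} prod_{j=1}^{k} gamma_j)
    partial = np.concatenate(([1.0], np.cumprod(gammas)))
    sums = np.cumsum(partial)
    return np.concatenate(([0.0], sums[:-1] / sums[-1], [1.0]))

harmony = (1.0, 0.25, 0.75, 0.01)         # (Wcc, Wcd, Wdc, Wdd)
C = absorption_probabilities(N=100, s=1, payoffs=harmony)
print(C[50])                              # fixation probability of c starting from n = 50
```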
we can extend our analysis further to the case of large populations. if we denote x = n/N and C(x) = C_n,
then we can write, as N → ∞,
\[
C(x) \sim \frac{e^{N\phi(x)} - 1}{e^{N\phi(1)} - 1}, \qquad
\phi(x) = \int_0^x \left[ \ln(S + Dt) - \ln(S + D(t-1)) \right] dt. \tag{29}
\]
then
\[
\phi'(x) = \ln \frac{S + D x}{S + D(x-1)}, \tag{30}
\]
which has the same sign as D, and hence φ(x) is increasing for D > 0 and decreasing for D < 0.
thus, if D > 0, because of the factor N in the argument of the exponentials and the fact that φ(x) > 0
for x > 0, the exponential will increase sharply with x. then, expanding around x = 1,
\[
\phi(x) \approx \phi(1) - (1-x)\,\phi'(1), \tag{31}
\]
so
\[
C(x) \sim \exp\{-N \ln(1 + D/S)(1-x)\}. \tag{32}
\]
the outcome for this case is that absorption will take place at n = 0 for almost any initial condition, except
if we start very close to the absorbing state n = N, namely for n ≳ N − 1/ln(1 + D/S).
on the contrary, if D < 0 then φ(x) < 0 for x > 0 and the exponential will be peaked at 0. so, expanding
around x = 0,
\[
\phi(x) \approx x\,\phi'(0), \tag{33}
\]
and
\[
C(x) \sim 1 - \exp\{-N \ln(1 - D/S)\, x\}. \tag{34}
\]
the outcome in this case is therefore symmetrical with respect to the case D > 0, because now the probability
of ending up absorbed into n = N is 1 for nearly all initial conditions, except for a small range near n = 0
determined by n ≲ 1/ln(1 − D/S). in both cases the range of exceptional initial conditions increases with
decreasing |D|, and in particular when D = 0 the evolution becomes neutral,^4 as reflected in the fact
that in that special case C_n = n/N (cf. eq. (28)) [5].
^4 notice that if D = 0 then W_DC = W_CD and therefore the evolution does not favor either of the two strategies.
figure 2: absorption probability C_n to the state n = N starting from initial state n, for a harmony game (payoffs W_CC = 1,
W_CD = 0.25, W_DC = 0.75 and W_DD = 0.01), population sizes N = 10 (a), N = 100 (b) and N = 1000 (c), and for values of
s = 1, 2, 3, 10 and 100. the values for s = 100 are indistinguishable from the results of replicator dynamics.
in order to illustrate the effect of a finite s, even in the case when s > 1, we will consider all possible
symmetric 2 × 2 games. these were classified by rapoport and guyer [71] into 12 non-equivalent classes
which, according to their nash equilibria and their dynamical behavior under replicator dynamics, fall into
three different categories:
(i) six games have wcc > wdc and wcd > wdd, or wcc < wdc and wcd < wdd. for them, their
unique nash equilibrium corresponds to the dominant strategy (c in the first case and d in the second
case). this equilibrium is the global attractor of the replicator dynamics.
(ii) three games have wcc > wdc and wcd < wdd. they have several nash equilibria, one of them
with a mixed strategy, which is an unstable equilibrium of the replicator dynamics and therefore acts
as a separator of the basins of attraction of two nash equilibria in pure strategies, which are the
attractors.
(iii) the remaining three games have wcc < wdc and wcd > wdd.
they also have several nash
equilibria, one of them with a mixed strategy, but in this case this is the global attractor of the
replicator dynamics.
examples of the first category are the harmony and prisoner's dilemma games. category (ii) includes the
stag hunt game, whereas the snowdrift game belongs to category (iii).
we will begin by considering one example of category (i): the harmony game. to that aim we will
choose the parameters wcc = 1, wcd = 0.25, wdc = 0.75 and wdd = 0.01. the name of this game refers
to the fact that it represents no conflict, in the sense that all players get the maximum payoff by following
strategy c. the values of C_n obtained for different population sizes N and several values of s are plotted in fig. 2.
the curves for large s illustrate the conflict-free character of this game, as the probability C_n is almost 1
for every initial fraction of c-strategists. the results for small s also illustrate the effect of fast
selection, as the inefficient strategy, d, is selected for almost any initial fraction of c-strategists. the effect
is more pronounced the larger the population. the crossover between the two regimes takes place at s = 2
or 3, but it depends on the choice of payoffs. a look at fig. 2 reveals that the crossover to the s → ∞
regime as s increases has no connection whatsoever with N, because it occurs at nearly the same values for
any population size N. it does depend, however, on the precise values of the payoffs. as a further check,
in fig. 3 we plot the results for s = 1 for different population sizes N and compare with the asymptotic
prediction (32), showing its great accuracy for values of N = 100 and higher; even for N = 10 the deviation
from the exact results is not large.
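for completeness, a short sketch (ours) of the closed form (28) and of the asymptotic estimate (32), evaluated with the off-diagonal harmony payoffs of fig. 2; as in fig. 3, the two expressions approach each other as N grows:

```python
import numpy as np

def c_exact_s1(n, N, Wcd, Wdc):
    """fixation probability C_n for s = 1, closed form of eq. (28)."""
    D, S = Wdc - Wcd, Wdc + Wcd
    if D == 0:
        return n / N
    def R(m):
        j = np.arange(1, m + 1)
        return np.prod((S * (N - 1) + D * j) / (S * (N - 1) - D * (N + 1 - j))) - 1.0
    return R(n) / R(N)

def c_asymptotic(n, N, Wcd, Wdc):
    """large-N estimate of eq. (32), valid for D = Wdc - Wcd > 0."""
    D, S = Wdc - Wcd, Wdc + Wcd
    return np.exp(-N * np.log(1.0 + D / S) * (1.0 - n / N))

# off-diagonal harmony payoffs of fig. 2 (only these matter for s = 1), starting at n = N - 2
for N in (10, 100, 1000):
    print(N, c_exact_s1(N - 2, N, 0.25, 0.75), c_asymptotic(N - 2, N, 0.25, 0.75))
```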
let us now move to category (ii), well represented by the stag hunt game, discussed in the preceding
subsection. we will choose for this game the payoffs wcc = 1, wcd = 0.01, wdc = 0.8 and wdd = 0.2.
the values of C_n obtained for different population sizes N and several values of s are plotted in fig. 4. the panel
(c) for s = 100 reveals the behavior of the system according to the replicator dynamics: both strategies
are attractors, and the crossover fraction of c-strategists separating the two basins of attraction (given by
eq. (20)) is, for this case, x* ≈ 0.49.
figure 3: same as in fig. 2, plotted against N − n, for s = 1 and N = 10, 100 and 1000. the solid line is the asymptotic
prediction (32).
figure 4: absorption probability C_n to the state n = N starting from initial state n, for a stag hunt game (payoffs W_CC = 1,
W_CD = 0.01, W_DC = 0.8 and W_DD = 0.2), population sizes N = 10 (a), N = 100 (b) and N = 1000 (c), and for values of
s = 1, 3, 5, 10 and 100. results from replicator dynamics are also plotted for comparison.
we can see that the effect of decreasing s amounts to shifting this
crossover towards 1, thus increasing the basin of attraction of the risk-dominant strategy. in the extreme
case s = 1 this strategy is the only attractor. of course, for small population sizes (fig. 4(a)) all these
effects (the existence of the threshold and its shifting with decreasing s) are strongly softened, although still
noticeable. an interesting feature of this game is that the effect of a finite s is more persistent than
in the harmony game: whereas in the latter the replicator dynamics was practically
recovered for values of s ≥ 10, we have to go up to s = 100 to find the same behavior in stag hunt.
finally, a representative of category (iii) is the snowdrift game, for which we will choose the payoffs
wcc = 1, wcd = 0.2, wdc = 1.8 and wdd = 0.01. for these values, the replicator dynamics predicts that
both strategies coexist, with population fractions given by x* in (20), which for these parameters takes
the value x* ≈ 0.19. however, a birth-death process for finite N always ends up in absorption into one of
the absorbing states. in fact, for any s and N and this choice of payoffs, the population always ends up
absorbed into the n = 0 state, except when it starts very close to n = N. but this case has a peculiarity
that makes it entirely different from the previous ones. whereas for the former cases the absorption time
(50) is τ = O(N) regardless of the value of s, for snowdrift the absorption time is O(N) for s = 1 but grows
very fast with s towards an asymptotic value τ∞ (see fig. 5(a)), and τ∞ grows exponentially with N (see
fig. 5(b)). this means that, while for s = 1 the process behaves as in previous cases, being absorbed into
the n = 0 state, as s increases there is a crossover to a regime in which the transient states become more
relevant than the absorbing state, because the population spends an extremely long time in them. in fact,
the process oscillates around the mixed equilibrium predicted by the replicator dynamics. this is illustrated
by the distribution of visits to the states 0 < n < N before absorption (48), shown in fig. 6. thus the effect of
fast selection on snowdrift games amounts to a qualitative change from the mixed equilibrium to the pure
equilibrium at n = 0.
figure 5: absorption time starting from the state n/N = 0.5 for a snowdrift game (payoffs W_CC = 1, W_CD = 0.2, W_DC = 1.8
and W_DD = 0.01), as a function of s for population size N = 100 (a) and as a function of N in the limit s → ∞ (b). note the
logarithmic scale for the absorption time.
having illustrated the effect of fast selection in these three representative games, we can now present the
general picture. similar results arise in the remaining 2 × 2 games, with fast selection favoring in all cases the
strategy with the highest payoff against the opposite strategy. for the remaining five games of category (i)
this means favoring the dominant strategy (prisoner's dilemma is a prominent example of it). the other
two cases of category (ii) also experience a change in the basins of attraction of the two equilibria. finally,
the remaining two games of category (iii) experience the suppression of the coexistence state in favor of
one of the two strategies. the conclusion of all this is that fast selection changes completely the outcome
of replicator dynamics. in terms of cooperation, as the off-diagonal terms of social dilemmas verify
W_DC > W_CD, this change in outcome has a negative influence on cooperation, as we have seen in all the
games considered. even for some payoff matrices of a non-dilemma game such as harmony, it can make
defectors invade the population.
two final remarks are in order. first, these results do not change qualitatively with the population size.
in fact, eqs. (32) and (34) and fig. 3 illustrate this very clearly. second, there might be some concern
about this analysis, raised by the extreme s = 1 case: all this might just be an effect of the fact
that most players do not play and therefore have no chance to be selected for reproduction. in order to sort
this out we have made a similar analysis but introducing a baseline fitness for all players, so that even if a
player does not play she can still be selected for reproduction. this probability will be, of course, smaller
than that of the players who do play; however, we should bear in mind that when s is very small, the
overwhelming majority of players are of this type and this compensates for the smaller probability. thus,
let f_b be the total baseline fitness that all players share per round, so that s f_b/N is the baseline fitness every
player has at the time reproduction/selection occurs. this choice implies that if f_b = 1 the overall baseline
fitness and that arising from the game are similar, regardless of s and N. if f_b is very small (f_b ≲ 0.1), the
result is basically the same as that for f_b = 0. the effect for f_b = 1 is illustrated in fig. 7 for the harmony and
stag hunt games. note also that at very large baseline fitness (f_b ≳ 10) the evolution is almost neutral,
although the small deviations induced by the game (which are determinant for the ultimate fate of the
population) still follow the same pattern (see fig. 8). interestingly, traulsen et al. [72] arrive at similar
results by using a fermi-like rule (see sec. 4.1 below) to introduce noise (temperature) in the selection
process, and an interaction probability q between individuals, leading to heterogeneity in the
payoffs, i.e., in the same terms as above, to fluctuations, which in turn reduce the intensity of selection, as is
the case when we introduce a very large baseline fitness.
figure 6: distribution of visits to state n before absorption for population N = 100, initial number of cooperators n = 50 and
several values of s. the game is the same snowdrift game of fig. 5. the curve for s = 100 is indistinguishable from the one
for s → ∞ (labeled 'replicator').
figure 7: absorption probability starting from state n for the harmony game of fig. 2 (a) and the stag hunt game of fig. 4
(b) when N = 100 and baseline fitness f_b = 1.
figure 8: same as fig. 7 for f_b = 10.
4. structured populations
having seen the highly non-trivial effects of considering temporal fluctuations in evolutionary games, in
this section we are going to consider the effect of relaxing the well-mixed hypothesis by allowing the existence
of spatial correlations in the population. recall from section 2.2 that a well-mixed population presupposes
that every individual interacts with equal probability with every other one in the population, or equivalently
that each individual interacts with the "average" individual. it is not clear, however, that this hypothesis
holds in many practical situations. territorial or physical constraints may limit the interactions between
individuals, for example. on the other hand, an all-to-all network of relationships does not seem plausible
in large societies; other key phenomena in social life, such as segregation or group formation, challenge the
idea of a mean player that everyone interacts with.
it is adequate, therefore, to take into consideration the existence of a certain network of relationships in
the population, which determines who interacts with whom. this network of relationships is what we will
call from now on the structure of the population. consistently, a well-mixed population will be labeled as
unstructured and will be represented by a complete graph. games on many different types of networks have
been investigated, examples of which include regular lattices [73, 74], scale-free networks [75], real social
networks [76], etc. this section is not intended to be an exhaustive review of all this existent work and we
refer the reader to [20] for such a detailed account. we rather want to give a panoramic and a more personal
and idiosyncratic view of the field, based on the main available results and our own research.
it is at least reasonable to expect that the existence of structure in a population could give rise to the
appearance of correlations and that they would have an impact on the evolutionary outcome. investigation
into these phenomena has been a hot topic of research for more than fifteen years, ever since the seminal result
by nowak and may [73], which reported an impressive fostering of cooperation in the prisoner's dilemma on
spatial lattices, triggered a wealth of work focused on the extension of this effect to other games, networks
and strategy spaces.
on the other hand, the impossibility in most cases of analytical approaches and
the complexity of the corresponding numerical agent-based models have made any attempt at an exhaustive
approach very demanding. hence most studies have concentrated on concrete settings with a particular
kind of game, which in most cases has been the prisoner's dilemma [73, 77, 78, 79, 80, 81, 82, 83, 84, 85,
86, 87, 88, 89, 90, 91]. other games have been much less studied in what concerns the influence of population
structure, as shown by the comparatively much smaller number of works on snowdrift or hawk-dove games
[92, 93, 94, 95, 96, 97], or stag hunt games [98, 99, 100]. moreover, comprehensive studies in the space of
2 × 2 games are very scarce [74, 75]. as a result, many interesting features of population structure and its
influence on evolutionary games have been reported in the literature, but the scope of these conclusions is
rather limited to particular models, so a general understanding of these issues, in the broader context of
2 × 2 games and different update rules, is generally missing.
however, the availability and performance of computational resources in recent years have allowed us
to undertake a systematic and exhaustive simulation program [101, 102] on these evolutionary models.
as a result of this study we have reached a number of conclusions that are obviously in relation with
previous research and that we will discuss in the following.
in some cases, these are generalizations of
known results to wider sets of games and update rules, as for example for the issue of the synchrony of
the updating of strategies [73, 77, 78, 95, 96, 100, 103] or the effect of small-world networks vs regular
lattices [84, 96, 104, 105]. in other cases, the more general view of our analysis has allowed us to integrate
apparently contradictory results in the literature, as the cooperation on prisoner's dilemma vs. snowdrift
games [73, 92, 93, 94, 96], or the importance of clustering in spatial lattices [85, 89, 96]. other conclusions
of ours, however, refute what seems to be established opinions in the field, as the alleged robustness of the
positive influence of spatial structure on prisoner's dilemma [73, 74, 77]. and finally, we have reached novel
conclusions that have not been highlighted by previous research, as the robustness of the influence of spatial
structure on coordination games, or the asymmetry between the effects on games with mixed equilibria
(coordination and anti-coordination games) and how it varies with the intensity of selection.
it is important to make clear from the beginning that evolutionary games on networks may be sensitive
to another source of variation with respect to replicator dynamics besides the introduction of spatial cor-
relations. this source is the update rule, i.e. the rule that defines the evolution dynamics of individuals'
strategies, whose influence seems to have been overlooked [74]. strictly speaking, only when the model
implements the so-called replicator rule (see below) is one considering the effect of the restriction of rela-
tionships that the population structure implies, in comparison with standard replicator dynamics. when
using a different update rule, however, we are adding a second dimension of variability, which amounts to
relaxing another assumption of replicator dynamics, namely number 4, which posits a population variation
linear in the difference of payoffs (see section 2). we will show extensively that this issue may have a huge
impact on the evolutionary outcome.
in fact, we will see that there is not a general influence of population structure on evolutionary games.
even for a particular type of network, its influence on cooperation depends largely on the kind of game
and the specific update rule. all one can do is to identify relevant topological characteristics that have a
consistent effect on a broad range of games and update rules, and explain this influence in terms of the
same basic principles. to this end, we will be looking at the asymptotic states for different values of the
game parameters, and not at how the system behaves when the parameters are varied, which would be
an approach of a more statistical mechanics character. in this respect, it is worth pointing out that some
studies did use this perspective: thus, it has been shown that the extinction transitions when the temptation
parameter varies within the prisoner's dilemma game and the evolutionary dynamics is stochastic fall in
the directed percolation universality class, in agreement with a well known conjecture [106]. in particular,
some of the pioneering works in using a physics viewpoint on evolutionary games [82, 107] have verified this
result for specific models. the behavior changes under deterministic rules such as unconditional imitation
(see below), for which this extinction transition is discontinuous.
although our ultimate interest may be the effect on the evolution of cooperation, measuring to what
extent cooperation is enforced or inhibited is not enough to clarify this effect. as in previous sections, our
basic observables will be the dynamical equilibria of the model, in comparison with the equilibria of our
reference model with standard replicator dynamics –which, as we have explained in section 2, are closely
related to those of the basic game–. the understanding of how the population structure modifies qualitatively
and quantitatively these equilibria will give us a much clearer view on the behavior and properties of the
model under study, and hence on its influence on cooperation.
4.1. network models and update rules
many kinds of networks have been considered as models for population structure (for recent reviews on
networks, see [108, 109]). a first class includes networks that introduce a spatial arrangement of relationships,
which can represent territorial or physical constraints in the interactions between individuals. typical
examples of this group are regular lattices, with different degrees of neighborhood. another important group
is that of synthetic networks that try to reproduce important properties that have been found in real
networks, such as the small-world or scale-free properties. prominent examples among these are watts-
strogatz small-world networks [110] and barabási-albert scale-free networks [111]. finally, "real" social
networks that come directly from experimental data have also been studied, as for example in [112, 113].
as was stated before, one crucial component of the evolutionary models that we are discussing in this
section is the update rule, which determines how the strategy of individuals evolves in time. there is a
very large variety of update rules that have been used in the literature, each one arising from different
backgrounds. the most important for our purposes is the replicator rule, also known as the proportional
imitation rule, which is inspired by replicator dynamics and which we describe in the following.5 let i = 1 . . . n
label the individuals in the population. let s_i be the strategy of player i, w_i her payoff and n_i her
neighborhood, with k_i neighbors. one neighbor j of player i is chosen at random, j ∈ n_i. the probability
of player i adopting the strategy of player j is given by
$$
p^t_{ij} \equiv p\{s^t_j \to s^{t+1}_i\} =
\begin{cases}
(w^t_j - w^t_i)/\phi\,, & w^t_j > w^t_i\,,\\
0\,, & w^t_j \le w^t_i\,,
\end{cases}
\qquad (35)
$$
with φ appropriately chosen as a function of the payoffs to ensure p{·} ∈ [0, 1].
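as a concrete illustration, a minimal python sketch of a single update event following (35) could look as follows; the function and variable names are ours, and the normalization φ is simply passed in as a parameter.

```python
import random

def replicator_update(strategies, payoffs, neighbors, i, phi):
    """single update event of the replicator (proportional imitation) rule, eq. (35).

    strategies: dict mapping player -> 'C' or 'D'
    payoffs:    dict mapping player -> accumulated payoff w_i in the current round
    neighbors:  dict mapping player -> list of neighbors n_i
    phi:        normalization chosen so that the probability lies in [0, 1]
    """
    j = random.choice(neighbors[i])            # one neighbor chosen at random
    diff = payoffs[j] - payoffs[i]
    p_ij = diff / phi if diff > 0 else 0.0     # eq. (35)
    return strategies[j] if random.random() < p_ij else strategies[i]
```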
5 to our knowledge, helbing was the first to show that a macroscopic population evolution following replicator dynamics
could be induced by a microscopic imitative update rule [114, 115]. schlag proved later the optimality of such a rule under
certain information constraints, and named it proportional imitation [116].
figure 9: asymptotic density of cooperators x∗with the replicator rule on a complete network, when the initial density of
cooperators is x0 = 1/3 (left, a), x0 = 1/2 (middle, b) and x0 = 2/3 (right, c). this is the standard outcome for a well-mixed
population with replicator dynamics, and thus constitutes the reference to assess the influence of a given population structure
(see main text for further details).
the reason for the name of this rule is the fact that the equation of evolution with this update rule,
for large sizes of the population, is equal, up to a time scale factor, to that of replicator dynamics [9, 11].
therefore, the complete network with the replicator rule constitutes the finite-size, discrete-time version of
replicator dynamics on an infinite, well-mixed population in continuous time. fig. 9 shows the evolutionary
outcome of this model, in the same type of plot as subsequent results in this section. each panel of this
figure displays the asymptotic density of cooperators x∗for a different initial density x0, in a grid of points
in the st-plane of games. the payoff matrix of each game is given by
$$
\begin{array}{c|cc}
 & c & d \\ \hline
c & 1 & s \\
d & t & 0
\end{array}
\qquad (36)
$$
we will consider the generality of this choice of parameters at the end of this section, after introducing
the other evolutionary rules. note that, in the notation of section 3, we have taken wcc = 1, wcd =
s, wdc = t, wdd = 0; note also that for these payoffs, the normalizing factor in the replicator rule can be
chosen as φ = max(ki, kj)(max(1, t)−min(0, s)). in this manner, we visualize the space of symmetric 2×2
games as a plane of co-ordinates s and t –for sucker's and temptation–, which are the respective payoffs
of a cooperator and a defector when confronting each other. the four quadrants represented correspond to
the following games: harmony (upper left), stag hunt (lower left), snowdrift or hawk-dove (upper right)
and prisoner's dilemma (lower right). as expected, these results reflect the close relationship between the
equilibria of replicator dynamics and the equilibria of the basic game. thus, all harmony games end up in
full cooperation and all prisoner's dilemmas in full defection, regardless of the initial condition. snowdrift
games reach a mixed strategy equilibrium, with density of cooperators x_e = s/(s + t − 1). stag hunt games
are the only ones whose outcome depends on the initial condition, because of their bistable character with
an unstable equilibrium also given by x_e. to allow a quantitative comparison of the degree of cooperation in
each game, we have introduced an index, the average cooperation over the region corresponding to each
game, which appears beside each quadrant. the results in fig. 9 constitute the reference against
which the effect of population structure will be assessed in the following.
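as a quick illustration of how the st-plane organizes the four games and of the mixed equilibrium x_e = s/(s + t − 1) quoted above, one may use a small helper like the following sketch (the quadrant boundaries simply follow the description in the text; the function names are ours).

```python
def classify_game(s, t):
    """quadrant of the st-plane for the payoff matrix (36), i.e. payoffs (1, s, t, 0)."""
    if s > 0 and t < 1:
        return "harmony"
    if s > 0 and t > 1:
        return "snowdrift"
    if s < 0 and t < 1:
        return "stag hunt"
    return "prisoner's dilemma"

def mixed_equilibrium(s, t):
    """density of cooperators x_e = s / (s + t - 1), defined when s + t != 1."""
    return s / (s + t - 1.0)

print(classify_game(0.5, 1.5), mixed_equilibrium(0.5, 1.5))   # snowdrift, x_e = 0.5
```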
one interesting variation of the replicator rule is the multiple replicator rule, which differs in that the whole
neighborhood is checked simultaneously, thus making a strategy change more probable. with
this rule the probability that player i maintains her strategy is
$$
p\{s^t_i \to s^{t+1}_i\} = \prod_{j \in n_i} \left(1 - p^t_{ij}\right),
\qquad (37)
$$
with p^t_{ij} given by (35). in case the strategy update takes place, the neighbor j whose strategy is adopted
by player i is selected with probability proportional to p^t_{ij}.
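a possible sketch of the multiple replicator rule, reusing the pairwise probabilities of (35), is the following (again with illustrative names; the imitated neighbor is drawn with probability proportional to p_ij, as stated above).

```python
import random

def multiple_replicator_update(strategies, payoffs, neighbors, i, phi):
    """single update event of the multiple replicator rule, eqs. (35) and (37)."""
    p = {j: max(payoffs[j] - payoffs[i], 0.0) / phi for j in neighbors[i]}
    p_keep = 1.0
    for p_ij in p.values():                    # probability of keeping the strategy, eq. (37)
        p_keep *= (1.0 - p_ij)
    if random.random() < p_keep:
        return strategies[i]
    # otherwise imitate neighbor j with probability proportional to p_ij
    total = sum(p.values())
    r, acc = random.random() * total, 0.0
    for j, p_ij in p.items():
        acc += p_ij
        if r <= acc:
            return strategies[j]
    return strategies[i]                       # numerical safety fallback
```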
a different option is the following moran-like rule, also called death-birth rule, inspired by the moran
dynamics described in section 3. with this rule a player chooses the strategy of one of her neighbors, or
her own, with a probability proportional to the payoffs
$$
p\{s^t_j \to s^{t+1}_i\} = \frac{w^t_j - \psi}{\sum_{k \in n^*_i} \left(w^t_k - \psi\right)},
\qquad (38)
$$
with n^*_i = n_i ∪ {i}. because payoffs may be negative in prisoner's dilemma and stag hunt games, the
constant ψ = max_{j ∈ n^*_i}(k_j) min(0, s) is subtracted from them. note that with this rule a player can adopt,
with low probability, the strategy of a neighbor that has done worse than herself.
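the moran-like (death-birth) rule can be sketched in the same spirit (illustrative names; ψ is the shift defined above, which requires the degrees k_j of the players in n_i ∪ {i} and the payoff parameter s).

```python
import random

def moran_update(strategies, payoffs, neighbors, degrees, i, s):
    """single update event of the moran-like (death-birth) rule, eq. (38)."""
    pool = neighbors[i] + [i]                              # n_i* = n_i U {i}
    psi = max(degrees[j] for j in pool) * min(0.0, s)      # shift making all weights >= 0
    weights = [payoffs[j] - psi for j in pool]
    total = sum(weights)
    if total <= 0.0:                                       # degenerate case: keep strategy
        return strategies[i]
    r, acc = random.random() * total, 0.0
    for j, w in zip(pool, weights):
        acc += w
        if r <= acc:
            return strategies[j]
    return strategies[i]
```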
the three update rules presented so far are imitative rules. another important example of this kind is
the unconditional imitation rule, also known as the "follow the best" rule [73]. with this rule each player
chooses the strategy of the individual with the largest payoff in her neighborhood, provided this payoff is greater
than her own. a crucial difference with the previous rules is that this one is a deterministic rule.
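being deterministic, unconditional imitation reduces to an argmax over the neighborhood, as in this sketch (the names are ours; ties are broken implicitly by the order of the neighbor list).

```python
def unconditional_imitation_update(strategies, payoffs, neighbors, i):
    """deterministic "follow the best" rule: copy the most successful neighbor,
    but only if she outperforms the focal player."""
    best = max(neighbors[i], key=lambda j: payoffs[j])
    return strategies[best] if payoffs[best] > payoffs[i] else strategies[i]
```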
another rule that has received a lot of attention in the literature, especially in economics, is the best
response rule. in this case, instead of some kind of imitation of neighbors' strategies based on payoff scoring,
the player has enough cognitive abilities to realize whether she is playing an optimum strategy (i.e. a best
response) given the current configuration of her neighbors. if that is not the case, she adopts with probability
p that optimum strategy. it is clear that this rule is innovative, as it is able to introduce strategies not
present in the population, in contrast with the previous purely imitative rules.
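a myopic best response step under the parameterization (36) can be sketched as follows (illustrative names; with probability 1 − p the player simply keeps her current strategy, and ties are resolved in favor of defection in this sketch).

```python
import random

def best_response_update(strategies, neighbors, i, s, t, p):
    """myopic best response against the current neighborhood (innovative rule)."""
    if random.random() >= p:
        return strategies[i]                   # no strategy revision this time
    n_c = sum(1 for j in neighbors[i] if strategies[j] == 'C')
    n_d = len(neighbors[i]) - n_c
    payoff_c = 1.0 * n_c + s * n_d             # payoff of playing c, matrix (36)
    payoff_d = t * n_c + 0.0 * n_d             # payoff of playing d
    return 'C' if payoff_c > payoff_d else 'D'
```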
finally, an update rule that has been widely used in the literature, because it is analytically tractable,
is the fermi rule, based on the fermi distribution function [82, 117, 118]. with this rule, a neighbor j of
player i is selected at random (as with the replicator rule) and the probability of player i acquiring the
strategy of j is given by
$$
p\{s^t_j \to s^{t+1}_i\} = \frac{1}{1 + \exp\left[-\beta\,(w^t_j - w^t_i)\right]}.
\qquad (39)
$$
the parameter β controls the intensity of selection, and can be understood as the inverse of temperature or
noise in the update rule. low β represents high temperature or noise and, correspondingly, weak selection
pressure. whereas this rule has been employed to study resonance-like behavior in evolutionary games on
lattices [119], we use it in this work to deal with the issue of the intensity of selection (see subsection 4.6).
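the fermi rule is equally compact; β acts as an inverse temperature, with small β meaning weak selection (a sketch with illustrative names).

```python
import math
import random

def fermi_update(strategies, payoffs, neighbors, i, beta):
    """single update event of the fermi rule, eq. (39)."""
    j = random.choice(neighbors[i])
    p_ij = 1.0 / (1.0 + math.exp(-beta * (payoffs[j] - payoffs[i])))
    return strategies[j] if random.random() < p_ij else strategies[i]
```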
having introduced the evolutionary rules we will consider, it is important to recall our choice for the payoff
matrix (36), and discuss its generality. most of the rules (namely the replicator, the multiple replicator,
the unconditional imitation and the best response rules) are invariant on homogeneous networks6 under
translation and (positive) scaling of the payoff matrix. among the remaining rules, the dynamics changes
upon translation for the moran rule and upon scaling for the fermi rule. the corresponding changes in
these last two cases amount to a modification of the intensity of selection, which we also treat in this work.
therefore, we consider that the parameterization of (36) is general enough for our purposes.
it is also important to realize that for a complete network, i.e. for a well-mixed or unstructured population,
the differences between update rules may not be relevant, insofar as they do not in general change the
evolutionary outcome [121].
these differences, however, become crucial when the population has some
structure, as we will point out in the following.
6 the invariance under translations of the payoff matrix does not hold if the network is heterogeneous. in this case, players
with higher degrees receive comparatively more (less) payoff under positive (negative) translations. only very recently has this
issue been studied in the literature [120].
figure 10: asymptotic density of cooperators x∗in a square lattice with degree k = 8 and initial density of cooperators
x0 = 0.5, when the game is the prisoner's dilemma as given by (40), proposed by nowak and may [73]. note that the outcome
with replicator dynamics on a well-mixed population is x∗= 0 for all the displayed range of the temptation parameter t.
notice also the singularity at t = 1.4 with unconditional imitation. the surrounding points are located at t = 1.3999 and
t = 1.4001.
the results displayed in fig. 9 have been obtained analytically, but the remaining results of this section
come from the simulation of agent-based models. in all cases, the population size is n = 10^4 and the
allowed time for convergence is 10^4 time steps, which we have checked is enough to reach a stationary
state. one time step represents one update event for every individual in the population, exactly in the
case of synchronous update and on average in the asynchronous case, so it could be considered as one
generation. the asymptotic density of cooperators is obtained by averaging over the last 10^3 time steps, and
the values presented in the plots are the result of averaging over 100 realizations. cooperators and defectors
are randomly located at the beginning of evolution and, when applicable, networks have been built with
periodic boundary conditions. see [101] for further details.
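to fix ideas, a stripped-down agent-based simulation of the kind just described (square lattice with k = 8, i.e. a moore neighborhood, periodic boundaries, random initial condition, and synchronous update with the replicator rule) could be organized as in the sketch below. population size, convergence time and averaging window are deliberately reduced here; this is not the code behind the figures, only an illustration of the protocol.

```python
import random

L, s, t = 50, -0.4, 0.4                  # lattice side (n = L*L) and stag hunt payoffs
steps, measure = 500, 100                # reduced times for this illustrative sketch
strat = [[random.choice('CD') for _ in range(L)] for _ in range(L)]

def neighbors(x, y):                     # moore neighborhood with periodic boundaries
    return [((x + dx) % L, (y + dy) % L)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def payoff(x, y):                        # accumulated payoff of player (x, y), matrix (36)
    pay = {'CC': 1.0, 'CD': s, 'DC': t, 'DD': 0.0}
    return sum(pay[strat[x][y] + strat[nx][ny]] for nx, ny in neighbors(x, y))

phi = 8 * (max(1.0, t) - min(0.0, s))    # normalization of the replicator rule
history = []
for step in range(steps):
    w = [[payoff(x, y) for y in range(L)] for x in range(L)]
    new = [row[:] for row in strat]
    for x in range(L):
        for y in range(L):               # synchronous update: everyone uses the old payoffs
            nx, ny = random.choice(neighbors(x, y))
            diff = w[nx][ny] - w[x][y]
            if diff > 0 and random.random() < diff / phi:
                new[x][y] = strat[nx][ny]
    strat = new
    if step >= steps - measure:          # average cooperation over the final steps
        history.append(sum(row.count('C') for row in strat) / (L * L))

print('asymptotic density of cooperators ~', sum(history) / len(history))
```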
4.2. spatial structure and homogeneous networks
in 1992 nowak and may published a very influential paper [73], where they showed the dramatic effect
that the spatial distribution of a population could have on the evolution of cooperation. this has become
the prototypical example of the promotion of cooperation favored by the structure of a population, also
known as network reciprocity [19]. they considered the following prisoner's dilemma:
$$
\begin{array}{c|cc}
 & c & d \\ \hline
c & 1 & 0 \\
d & t & \varepsilon
\end{array}\,,
\qquad (40)
$$
with 1 ≤t ≤2 and ε ≲0. note that this one-dimensional parameterization corresponds in the st-plane
to a line very near the boundary with snowdrift games.
fig. 10 shows the great fostering of cooperation reported by [73]. the authors explained this influence
in terms of the formation of clusters of cooperators, which give cooperators enough payoffto survive even
when surrounded by some defectors. this model has a crucial detail, whose importance we will stress later:
the update rule used is unconditional imitation.
since the publication of this work many studies have investigated related models with different games
and networks, reporting qualitatively consistent results [20]. however, hauert and doebeli published in 2004
another important result [93], which cast a shadow of doubt on the generality of the positive influence of
spatial structure on cooperation. they studied the following parameterization of snowdrift games:
$$
\begin{array}{c|cc}
 & c & d \\ \hline
c & 1 & 2 - t \\
d & t & 0
\end{array}\,,
\qquad (41)
$$
figure 11: asymptotic density of cooperators x∗in a square lattice with degree k = 8 and initial density of cooperators
x0 = 0.5, when the game is the snowdrift game as given by (41), proposed by hauert and doebeli [93]. the result for a
well-mixed population is displayed as a reference as a dashed line.
with 1 ≤t ≤2 again.
the unexpected result obtained by the authors is displayed in fig. 11. only for low t is there some
improvement of cooperation, whereas for medium and high t cooperation is inhibited. this is a surprising
result, because the basic game, the snowdrift, is in principle more favorable to cooperation. as we have
seen above, its only stable equilibrium is a mixed strategy population with some density of cooperators,
whereas the unique equilibrium in prisoner's dilemma is full defection (see fig. 9). in fact, a previous paper
by killingback and doebeli [92] on the hawk-dove game, a game equivalent to the snowdrift game, had
reported an effect of spatial structure equivalent to a promotion of cooperation.
hauert and doebeli explained their result in terms of the hindrance to cluster formation and growth, at
the microscopic level, caused by the payoffstructure of the snowdrift game. notwithstanding the different
cluster dynamics in both games, as observed by the authors, a hidden contradiction looms in their argument,
because it implies some kind of discontinuity in the microscopic dynamics in the crossover between prisoner's
dilemma and snowdrift games (s = 0, 1 ≤t ≤2). however, the equilibrium structure of both basic games,
which drives this microscopic dynamics, is not discontinuous at this boundary, because for both games the
only stable equilibrium is full defection. so, where does this change in the cluster dynamics come from?
the fact is that there is not such a difference in the cluster dynamics between prisoner's dilemma
and snowdrift games, but different update rules in the models. nowak and may [73], and killingback and
doebeli [92], used the unconditional imitation rule, whereas hauert and doebeli [93] employed the replicator
rule. the crucial role of the update rule becomes clear in fig. 12, where results in prisoner's dilemma and
snowdrift are depicted separately for each update rule. it shows that, if the update rule used in the model
is the same, the influence on both games, in terms of promotion or inhibition of cooperation, has a similar
dependence on t. for both update rules, cooperation is fostered in prisoner's dilemma and snowdrift at
low values of t, and cooperation is inhibited at high t. note that with unconditional imitation the crossover
between both behaviors takes place at t ≈1.7, whereas with the replicator rule it occurs at a much lower
value of t ≈1.15. the logic behind this influence is better explained in the context of the full st-plane, as
we will show later.
the fact that this apparent contradiction has been resolved considering the role of the update rule is
a good example of its importance. this conclusion is in agreement with those of [96], which performed
an exhaustive study on snowdrift games with different network models and update rules, but refutes those
of [74], which defended that the effect of spatial lattices was almost independent of the update rule. in
consequence, the influence of the network models that we consider in the following is presented separately
for each kind of update rule, highlighting the differences in results when appropriate. apart from this,
to assess and explain the influence of spatial structure, we need to consider it along with games that have
different equilibrium structures, not only a particular game, in order to draw sufficiently general conclusions.
figure 12: asymptotic density of cooperators x∗in square lattices with degree k = 8 and initial density of cooperators
x0 = 0.5, for both prisoner's dilemma (40) and snowdrift games (41), displayed separately according to the update rule: (a)
unconditional imitation (nowak and may's model [73]), (b) replicator rule (hauert and doebeli's model [93]). the result for
snowdrift in a well-mixed population is displayed as a reference as a dashed line. the similar influence of regular
lattices on both games is clear once the key role of the update rule is taken into account (see main text for details).
one way to do it is to study their effect on the space of 2 × 2 games described by the parameters s and t
(36). a first attempt was done by hauert [74], but some problems in this study make it inconclusive (see
[101] for details on this issue).
apart from lattices of different degrees (4, 6 and 8), we have also considered homogeneous random
networks, i.e. random networks where each node has exactly the same number of neighbors. the aim of
comparing with this kind of networks is to isolate the effect of the spatial distribution of individuals from that
of the mere limitation of the number of neighbors and the context preservation [85] of a degree-homogeneous
random network. the well-mixed population hypothesis implies that every player plays with the "average"
player in the population. from the point of view of the replicator rule this means that every player samples
successively the population in each evolution step. it is not unreasonable to think that if the number of
neighbors is sufficiently restricted the result of this random sampling will differ from the population average,
thus introducing changes in the evolutionary outcome.
fig. 13 shows the results for the replicator rule with random and spatial networks of different degrees.
first, it is clear that the influence of these networks is negligible on harmony games and minimal on
prisoner's dilemmas, given the reduced range of parameters where it is noticeable. there is, however, a
clear influence on stag hunt and snowdrift games, which is always of opposite sign: an enhancement of
cooperation in stag hunt and an inhibition in snowdrift. second, it is illuminating to consider the effect
of increasing the degree. for the random network, it means that its weak influence vanishes. the spatial
lattice, however, whose result is very similar to that of the random one for the lowest degree (k = 4), displays
remarkable differences for the greater degrees (k = 6 and 8). these differences are a clear promotion of
cooperation in stag hunt games and a lesser, but measurable, inhibition in snowdrift games, especially for
low s.
the relevant topological feature that underlies this effect is the existence of clustering in the network,
understood as the presence of triangles or, equivalently, common neighbors [108, 109]. in regular lattices,
for k = 4 there is no clustering, but there is for k = 6 and 8. this point explains the difference between
the conclusions of cohen et al. [85] and those of ifti et al. [89] and tomassini et al. [96], regarding the role
of network clustering in the effect of spatial populations. in [85], rectangular lattices of degree k = 4 were
considered, which have strictly zero clustering because there are not closed triangles in the network, hence
finding no differences in outcome between the spatial and the random topology. in the latter case, on the
contrary, both studies employed rectangular lattices of degree k = 8, which do have clustering, and thus
they identified it as a key feature of the network, for particular parameterizations of the games they were
figure 13: asymptotic density of cooperators x∗in homogeneous random networks (upper row, a to c) compared to regular
lattices (lower row, d to f), with degrees k = 4 (a, d), 6 (b, e) and 8 (c, f). the update rule is the replicator rule and the
initial density of cooperators is x0 = 0.5. the plots show that the main influence occurs in stag hunt and snowdrift games,
especially for regular lattices with large clustering coefficient, k = 6 and 8 (see main text).
studying, namely prisoner's dilemma [89] and snowdrift [96].
additional evidence for this conclusion is the fact that small-world networks, which include random
links to reduce the average path between nodes while maintaining the clustering, produce almost indistin-
guishable results from those of fig. 13 d-f. this conclusion is in agreement with existent theoretical work
about small-world networks, on prisoner's dilemma [84, 104, 105] and its extensions [122, 123], on snowdrift
games [96], and also with experimental studies on coordination games [124]. the difference between the
effect of regular lattices and small-world networks consists, in general, in a greater efficiency of the latter in
reaching the stationary state (see [101] for a further discussion on this comparison).
the mechanism that explains this effect is the formation and growth of clusters of cooperators, as
fig. 14 displays for a particular realization. the outcome of the population is then totally determined by
the stability and growth of these clusters, which in turn depend on the dynamics of clusters interfaces.
this means that the result is no longer determined by the global population densities but by the local
densities that the players at the cluster interfaces see in their neighborhood. in fact, the primary effect that
the network clustering causes is to favor, i.e. to maintain or to increase, the high local densities that were
present in the population from the random beginning. this favoring produces opposite effects in stag hunt
and snowdrift games. as an illustrating example, consider that the global density is precisely that of the
mixed equilibrium of the game. in stag hunt games, as this equilibrium is unstable, a higher local density
induces the conversion of nearby defectors to cooperators, thus making the cluster grow. in snowdrift games,
on the contrary, as the equilibrium is stable, it causes the conversion of cooperators to defectors. see [101]
for a full discussion on this mechanism.
in view of this, recalling that these are the results for the replicator rule, and that therefore they corre-
spond to the correct update rule to study the influence of population structure on replicator dynamics, we
figure 14: snapshots of the evolution of a population on a regular lattice of degree k = 8, playing a stag hunt game (s = −0.65
and t = 0.65). cooperators are displayed in red and defectors in blue. the update rule is the replicator rule and the initial
density of cooperators is x0 = 0.5. the upper left label shows the time step t. during the initial steps, cooperators with low
local density of cooperators in their neighborhood disappear, whereas those with high local density grow into the clusters that
eventually take up the complete population.
figure 15: asymptotic density of cooperators x∗in homogeneous random networks (upper row, a to c) compared to regular
lattices (lower row, d to f), with degrees k = 4 (a, d), 6 (b, e) and 8 (c, f). the update rule is unconditional imitation and
the initial density of cooperators is x0 = 0.5. again as in fig. 13, spatial lattices have greater influence than random networks
when the clustering coefficient is high (k = 6 and 8). in this case, however, the beneficial effect for cooperation goes well into
snowdrift and prisoner's dilemma quadrants.
can state that the presence of clustering (triangles, common neighbors) in a network is a relevant topological
feature for the evolution of cooperation. its main effects are, on the one hand, a promotion of cooperation
in stag hunt games, and, on the other hand, an inhibition (of lower magnitude) in snowdrift games. we
note, however, that clustering may not be the only relevant factor governing the game asymptotics: one
can devise peculiar graphs, not representing proper spatial structure, where other influences prove relevant.
this is the case of networks consisting of complete subgraphs connected to each other by a few connections
[119], a system whose behavior, in spite of the high clustering coefficient, is similar to that observed on the
traditional square lattice where the clustering coefficient is zero. this was subsequently related [125] to the
existence of overlapping triangles that support the spreading of cooperation. we thus see that our claim
about the outcome of evolutionary games on networks with clustering is anything but general and depends
on the translational invariance of the network.
other stochastic non-innovative rules, such as the multiple replicator and moran rules, yield similar
results, without qualitative differences [101]. unconditional imitation, on the contrary, has a very different
influence, as can be seen in fig. 15.
in the first place, homogeneous random networks themselves have a marked influence that increases with
network degree for stag hunt and snowdrift games, but decreases for prisoner's dilemmas. secondly, there
are again no important differences between random and spatial networks if there is no clustering in the
network (note how the transitions between the different regions in the results are the same). there are,
however, stark differences when there is clustering in the network. interestingly, these are the cases with an
important promotion of cooperation in snowdrift and prisoner's dilemma games.
in this case, the dynamical mechanism is the formation and growth of clusters of cooperators as well,
figure 16: snapshots of the evolution of a population on a regular lattice of degree k = 8, playing a stag hunt game (s = −0.65
and t = 0.65). cooperators are displayed in red and defectors in blue. the update rule is unconditional imitation and the
initial density of cooperators is x0 = 1/3 (this lower value than that of fig. 14 has been used to make the evolution longer and
thus more easily observable). the upper left label shows the time step t. as with the replicator rule (see fig. 14), during the
initial time steps clusters emerge from cooperators with high local density of cooperators in their neighborhood. in this case,
the interfaces advance deterministically at each time step, thus giving a special significance to flat interfaces and producing a
much faster evolution than with the replicator rule (compare time labels with those of fig. 14).
figure 17: asymptotic density of cooperators x∗in regular lattices of degree k = 8, for different initial densities of cooperators
x0 = 1/3 (a, d), 1/2 (b, e) and 2/3 (c, f). the update rules are the replicator rule (upper row, a to c) and unconditional
imitation (lower row, d to f). with the replicator rule, the evolutionary outcome in stag hunt games depends on the initial
condition, as is revealed by the displacement of the transition line between full cooperation and full defection. however, with
unconditional imitation this transition line remains in the same position, thus showing the insensitivity to the initial condition.
in this case, the outcome is determined by the presence of small clusters of cooperators in the initial random population, which
takes place for a large range of values of the initial density of cooperators x0.
and the fate of the population is again determined by the dynamics of cluster interfaces. with unconditional
imitation, however, given its deterministic nature, interfaces advance one link every time step. this makes
the calculation of the conditions for their advancement very easy, because these conditions come down
to those of a flat interface between cooperators and defectors [101]. see fig. 16 for a typical example of
evolution.
an interesting consequence of the predominant role of flat interfaces with unconditional imitation is that,
as long as there is in the initial population a flat interface (i.e. a cluster having one, as for example a 3 × 2
cluster in an 8-neighbor lattice), the cluster will grow and eventually extend to the entire population. this
feature corresponds to the 3×3 cluster rule proposed by hauert [74], which relates the outcome of the entire
population to that of a cluster of this size. this property makes the evolutionary outcome quite independent
of the initial density of cooperators, because even for a low initial density the probability that a suitable
small cluster exists will be high for sufficiently large populations; see fig. 17 d-f about the differences in
initial conditions. nevertheless, it is important to realize that this rule is based on the dynamics of flat
interfaces and, therefore, it is only valid for unconditional imitation. other update rules that also give rise
to clusters, such as the replicator rule, develop interfaces with different shapes, rendering the particular
case of flat interfaces irrelevant. as a consequence, the evolutionary outcome becomes dependent on the initial
condition, as fig. 17 a-c displays.
in summary, the relevant topological feature of these homogeneous networks, for the games and update
rules considered so far, is the clustering of the network. its effect depends largely on the update rule, and
the most that can be said in general is that, besides not affecting harmony games, it consistently promotes
cooperation in stag hunt games.
figure 18: asymptotic density of cooperators x∗ in regular lattices of degree k = 8, with synchronous update (left, a and c)
compared to asynchronous (right, b and d). the update rules are the replicator rule (upper row) and unconditional imitation
(lower row). the initial density of cooperators is x0 = 0.5. for the replicator rule, the results are virtually identical, showing
the lack of influence of the synchronicity of update on the evolutionary outcome. in the case of unconditional imitation the
results are very similar, but there are differences for some points, especially snowdrift games with s ≲ 0.3 and t > 5/3 ≈ 1.67.
the particular game studied by huberman and glance [103], which reported a suppression of cooperation due to asynchronous
update, belongs to this region.
4.3. synchronous vs asynchronous update
huberman and glance [103] questioned the generality of the results reported by nowak and may [73],
in terms of the synchronicity of the update of strategies. nowak and may used synchronous update, which
means that every player is updated at the same time, so the population evolves in successive generations.
huberman and glance, in contrast, employed asynchronous update (also called random sequential update),
in which individuals are updated independently one by one, hence the neighborhood of each player always
remains the same while her strategy is being updated.
they showed that, for a particular game, the
asymptotic cooperation obtained with synchronous update disappeared. this has become since then one of
the most well-known and cited examples of the importance of synchronicity in the update of strategies in
evolutionary models. subsequent works have, in turn, criticized the importance of this issue, showing that
the conclusions of [73] are robust [77, 126], or restricting the effect reported by [103] to particular instances
of prisoner's dilemma [78] or to the short memory of players [100]. other works, however, in the different
context of snowdrift games [95, 96] have found that the influence on cooperation can be positive or negative,
in the asynchronous case compared with the synchronous one.
we have thoroughly investigated this issue, finding that the effect of synchronicity in the update of
strategies is the exception rather than the rule. with the replicator rule, for example, the evolutionary
outcome in both cases is virtually identical, as fig. 18 a-b shows. moreover, in this case, the time evolution
is also very similar (see fig.19 a-b). with unconditional imitation there are important differences only in
figure 19: time evolution of the density of cooperators x in regular lattices of degree k = 8, for typical realizations of stag
hunt (left, a and c) and snowdrift games (right, b and d), with synchronous (continuous lines) or asynchronous (dashed
lines) update. the update rules are the replicator rule (upper row) and unconditional imitation (lower row). the stag hunt
games for the replicator rule (a) are: a, s = −0.4, t = 0.4; b, s = −0.5, t = 0.5; c, s = −0.6, t = 0.6; d, s = −0.7, t = 0.7;
e, s = −0.8, t = 0.8. for unconditional imitation the stag hunt games (c) are: a, s = −0.6, t = 0.6; b, s = −0.7, t = 0.7;
c, s = −0.8, t = 0.8; d, s = −0.9, t = 0.9; e, s = −1.0, t = 1.0. the snowdrift games are, for both update rules (b, d):
a, s = 0.9, t = 1.1; b, s = 0.7, t = 1.3; c, s = 0.5, t = 1.5; d, s = 0.3, t = 1.7; e, s = 0.1, t = 1.9. the initial density
of cooperators is x0 = 0.5. the time scale of the asynchronous realizations has been re-scaled by the size of the population,
so that for both kinds of update a time step represents the same number of update events in the population. figures a
and b show that, in the case of the replicator rule, not only the outcome but also the time evolution is independent of the
update synchronicity. with unconditional imitation the results are also very similar for stag hunt (c), but somehow different
in snowdrift (d) for large t, displaying the influence of synchronicity in this subregion. note that in all cases unconditional
imitation yields a much faster evolution than the replicator rule.
one particular subregion of the space of parameters, corresponding mostly to snowdrift games, to which the
specific game studied by huberman and glance belongs (see fig. 18 c-d and 19 c-d).
figure 20: asymptotic density of cooperators x∗ with the replicator update rule, for model networks with different degree
heterogeneity: homogeneous random networks (left, a), erdős-rényi random networks (middle, b) and barabási-albert scale-
free networks (right, c). in all cases the average degree is k̄ = 8 and the initial density of cooperators is x0 = 0.5. as degree
heterogeneity grows, from left to right, cooperation in snowdrift games is clearly enhanced.
figure 21: asymptotic density of cooperators x∗ with unconditional imitation as update rule, for model networks with different
degree heterogeneity: homogeneous random networks (left, a), erdős-rényi random networks (middle, b) and barabási-albert
scale-free networks (right, c). in all cases the average degree is k̄ = 8 and the initial density of cooperators is x0 = 0.5.
as degree heterogeneity grows, from left to right, cooperation in snowdrift games is enhanced again. in this case, however,
cooperation is inhibited in stag hunt games and reaches a maximum in prisoner's dilemmas for erdős-rényi random networks.
4.4. heterogeneous networks
the other important topological feature for evolutionary games was introduced by santos and co-workers
[75, 127, 128], who studied the effect of degree heterogeneity, in particular scale-free networks. their main
result is shown in fig. 20, which displays the variation in the evolutionary outcome induced by increasing
the variance of the degree distribution in the population, from zero (homogeneous random networks) to a
finite value (erdős-rényi random networks), and then to infinity (scale-free networks). the enhancement of
cooperation as degree heterogeneity increases is very clear, especially in the region of snowdrift games. the
effect is not so strong, however, in stag hunt or prisoner's dilemma games. similar conclusions are obtained
with other scale-free topologies, as for example with klemm-eguíluz scale-free networks [129]. very recently,
it has been shown [130] that much as we discussed above for the case of spatial structures, clustering is also
a factor improving the cooperative behavior in scale-free networks.
the positive influence on snowdrift games is quite robust against changes in network degree and the
use of other update rules. on the other hand, the influence on stag hunt and prisoner's dilemma games is
quite restricted and very dependent on the update rule, as fig. 21 reveals. in fact, with unconditional imi-
tation cooperation is inhibited in stag hunt games as the network becomes more heterogeneous, whereas in
prisoner's dilemmas it seems to have a maximum at networks with finite variance in the degree distribution.
a very interesting insight from the comparison between the effects of network clustering and degree het-
erogeneity is that they mostly affect games with one equilibrium in mixed strategies, and that in addition the
effects on these games are different. this highlights the fact that they are different fundamental topological
properties, which induce mechanisms of different nature. in the case of network clustering we have seen the
formation and growth of clusters of cooperators. for network heterogeneity the phenomenon is the bias and
stabilization of the strategy oscillations in snowdrift games towards the cooperative strategy [131, 132], as
we explain in the following. the asymptotic state of snowdrift games in homogeneous networks consists of
a mixed strategy population, where every individual oscillates permanently between cooperation and defec-
tion. network heterogeneity tends to prevent this oscillation, making players in more connected sites more
prone to be cooperators. at first, having more neighbors makes any individual receive more payoff, regardless
of her strategy, and hence she has an evolutionary advantage. for a defector, this is a short-lived advantage,
because it triggers the change of her neighbors to defectors, and she thus loses payoff. a high-payoff cooperator,
on the contrary, will cause the conversion of her neighbors to cooperators, increasing even more her own
payoff. these highly connected cooperators constitute the hubs that drive the population, fully or partially,
to cooperation. it is clear that this mechanism takes place when cooperators collect more payoff from a
greater neighborhood, independently of their neighbors' strategies. this only happens when s > 0, which is
the reason why the positive effect on cooperation of degree heterogeneity is mainly restricted to snowdrift
games.
figure 22: asymptotic density of cooperators x∗in a square lattice with degree k = 8 and best response as update rule, in the
model with snowdrift (41) studied by sysi-aho and co-workers [94]. the result for a well-mixed population is displayed as a
reference. note how the promotion or inhibition of cooperation does not follow the same variation as a function of t as in
the case with the replicator rule studied by hauert and doebeli [93] (fig. 11).
4.5. best response update rule
so far, we have dealt with imitative update rules, which are non-innovative. here we present the results
for an innovative rule, namely best response. with this rule each player chooses, with certain probability p,
the strategy that is the best response for her current neighborhood. this rule is also referred to as myopic
best response, because the player only takes into account the last evolution step to decide the optimum
strategy for the next one. compared to the rules presented previously, this one assumes more powerful
cognitive abilities on the individual, as she is able to discern the payoffs she can obtain depending on
her strategy and those of her neighbors, in order to choose the best response. from this point of view, it
constitutes a next step in the sophistication of update rules.
an important result of the influence of this rule for evolutionary games was published in 2005 by sysi-
aho and co-workers [94]. they studied the combined influence of this rule with regular lattices, in the same
one-dimensional parameterization of snowdrift games (41) that was employed by hauert and doebeli [93].
they reported a modification in the cooperator density at equilibrium, with an increase for some subrange
of the parameter t and a decrease for the other, as fig. 22 shows.
at the time, it was intriguing that regular lattices had opposite effects (promotion or inhibition of
cooperation) in some ranges of the parameter t, depending on the update rule used in the model. very
recently we have carried out a thorough investigation of the influence of this update rule on a wide range
of networks [102], focusing on the key topological properties of network clustering and degree heterogeneity.
the main conclusion of this study is that, with only one relevant exception, the best response rule suppresses
the effect of population structure on evolutionary games. fig. 23 shows a summary of these results. in all
cases the outcome is very similar to that of replicator dynamics on well-mixed populations (fig. 9), despite
the fact that the networks studied explore different options of network clustering and degree heterogeneity.
the steps in the equilibrium density of snowdrift games, such as those reported in [94], show up in all cases, with
slight variations which depend mostly on the mean degree of the network.
the exception to the absence of network influence is the case of regular lattices, and consists of a
modification of the unstable equilibrium in stag hunt games, in the sense that it produces a promotion
of cooperation for initial densities lower than 0.5 and a corresponding symmetric inhibition for greater
densities. an example of this effect is given in fig. 24, where the outcome should be compared to that of
well-mixed populations in fig. 9 a. the reason for this effect is that the lattice creates special conditions
for the advancement (or receding) of the interfaces of clusters of cooperators. we refer the interested reader
to [102] for a detailed description of this phenomenon. very remarkably, in this case network clustering is not
relevant, because the effect also takes place for degree k = 4, at which there is no clustering in the network.
figure 23: asymptotic density of cooperators x∗in random (left, a and d), regular (middle, b and e), and scale-free networks
(right, c and f) with average degrees k̄ = 4 (upper row, a to c) and 8 (lower row, d to f). the update rule is best response
with p = 0.1 and the initial density of cooperators is x0 = 0.5. differences are negligible in all cases; note, however, that the
steps appearing in the snowdrift quadrant are slightly different.
figure 24: asymptotic density of cooperators x∗in regular lattices with initial density of cooperators x0 = 1/3. the degrees
are k = 4 (left, a), k = 6 (middle, b) and k = 8 (right, c). the update rule is best response with p = 0.1. comparing with
fig. 9 a, there is a clear displacement of the boundary between full defection and full cooperation in stag hunt games, which
amounts to a promotion of cooperation. the widening of the border in panel c is a finite size effect, which disappears for
larger populations. see main text for further details.
4.6. weak selection
so far, we have considered the influence of population structure in the case of strong selection pressure,
which means that the fitness of individuals is totally determined by the payoffs resulting from the game.
in general this may not be the case, and then to relax this restriction the fitness can be expressed as
f = 1 −w + wπ [133]. the parameter w represents the intensity of selection and can vary between w = 1
(strong selection limit) and w ≳0 (weak selection limit). with a different parameterization, this implements
the same idea as the baseline fitness discussed in section 3. we note that another interpretation has been
recently proposed [134] for this limit, namely δ-weak selection, which assumes that the game means much to
the determination of reproductive success, but that selection is weak because mutant and wild-type strategies
are very similar. this second interpretation leads to different results [134] and we do not deal with it here,
but rather we stick with the first one, which is by far the most generally used.
the weak selection limit has the nice property of being analytically tractable. for instance, ohtsuki and
nowak have studied evolutionary games on homogeneous random networks using this approach [135], finding
an interesting relation with replicator dynamics on well-mixed populations. using our normalization of the
game (36), their main result can be written as the following payoff matrix
$$
\begin{pmatrix}
1 & s + \Delta \\
t - \Delta & 0
\end{pmatrix}.
\qquad (42)
$$
this means that the evolution in a population structured according to a random homogeneous network,
in the weak selection limit, is the same as that of a well-mixed population with a game defined by this
modified payoffmatrix. the effect of the network thus reduces to the term ∆, which depends on the game,
the update rule and the degree k of the network. with respect to the influence on cooperation it admits a
very straightforward interpretation: if both the original and the modified payoff matrices correspond to a
harmony or prisoner's dilemma game, then there is logically no influence, because the population ends up
equally in full cooperation or full defection; otherwise, cooperation is enhanced if ∆ > 0, and inhibited if
∆ < 0.
the actual values of ∆, for the update rules pairwise comparison (pc), imitation (im) and death-birth
(db) (see [135] for full details), are
$$
\Delta_{pc} = \frac{s - (t - 1)}{k - 2}
\qquad (43)
$$
$$
\Delta_{im} = \frac{k + s - (t - 1)}{(k + 1)(k - 2)}
\qquad (44)
$$
$$
\Delta_{db} = \frac{k + 3\,(s - (t - 1))}{(k + 3)(k - 2)}\,,
\qquad (45)
$$
k being the degree of the network. a very remarkable feature of these expressions is that for every pair
of games with parameters (s1, t1) and (s2, t2), if s1 −t1 = s2 −t2 then ∆1 = ∆2. hence the influence
on cooperation for such a pair of games, even if one is a stag hunt and the other is a snowdrift, will be
the same. this stands in stark contrast to all the reported results with strong selection, which generally
exhibit different, and in many cases opposite, effects on both games. besides this, as the term s −(t −1)
is negative in all prisoner's dilemmas and half the cases of stag hunt and snowdrift games, the beneficial
influence on cooperation is quite reduced for degrees k such as those considered above [101].
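as a quick numerical check of these expressions and of the invariance just mentioned, the following sketch evaluates ∆ for the three rules on a stag hunt and a snowdrift game sharing the same value of s − t (the function names are ours).

```python
def delta_pc(s, t, k):
    return (s - (t - 1.0)) / (k - 2.0)                             # eq. (43)

def delta_im(s, t, k):
    return (k + s - (t - 1.0)) / ((k + 1.0) * (k - 2.0))           # eq. (44)

def delta_db(s, t, k):
    return (k + 3.0 * (s - (t - 1.0))) / ((k + 3.0) * (k - 2.0))   # eq. (45)

k = 8
for s, t in [(-0.3, 0.5), (0.4, 1.2)]:     # stag hunt and snowdrift with s - t = -0.8
    print(s, t, delta_pc(s, t, k), delta_im(s, t, k), delta_db(s, t, k))
# both games yield exactly the same deltas, as implied by eqs. (43)-(45)
```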
another way to investigate the influence of the intensity of selection is to employ the fermi update rule,
presented above, which allows one to study numerically the effect of varying the intensity of selection on any
network model. figs. 25 and 26 display the results obtained, for different intensities of selection, on networks
that are prototypical examples of strong influence on evolutionary games, namely regular lattices with high
clustering and scale-free networks, with large degree heterogeneity. in both cases, as the intensity of selection
is reduced, the effect of the network becomes weaker and more symmetrical between stag hunt and snowdrift
games. therefore, these results show that the strong and weak selection limits are not comparable from the
viewpoint of the evolutionary outcome, and that weak selection largely inhibits the influence of population
structure.
figure 25: asymptotic density of cooperators x∗in regular lattices of degree k = 8, for the fermi update rule with β equal to
10 (a), 1 (b), 0.1 (c) and 0.01 (d). the initial density of cooperators is x0 = 0.5. for high β the result is quite similar to that
obtained with the replicator rule (fig. 13 f). as β decreases, or equivalently for weaker intensities of selection, the influence
becomes smaller and more symmetrical between stag hunt and snowdrift games.
figure 26: asymptotic density of cooperators x∗ in barabási-albert scale-free networks of average degree k̄ = 8, for the fermi
update rule with β equal to 10 (a), 1 (b), 0.1 (c) and 0.01 (d). the initial density of cooperators is x0 = 0.5. as in fig. 25,
for high β the result is quite similar to that obtained with the replicator rule (fig. 20 c), and analogously, as β decreases the
influence of the network becomes smaller and more symmetrical between stag hunt and snowdrift games.
5. conclusion and future prospects
in this review, we have discussed non-mean-field effects on evolutionary game dynamics. our reference
framework for comparison has been the replicator equation, a pillar of modern evolutionary game theory
that has produced many interesting and fruitful insights on different fields. our purpose here has been to
show that, in spite of its many successes, the replicator equation is only a part of the story, much in the same
manner as mean-field theories have been very important in physics but they cannot (nor are they intended
to) describe all possible phenomena. the main issues we have discussed are the influence of fluctuations, by
considering the existence of more than one time scale, and of spatial correlations, through the constraints on
interaction arising from an underlying network structure. in doing so, we have shown a wealth of evidence
supporting our first general conclusion: deviations with respect to the hypothesis of a well-mixed population
(including nonlinear dependencies of the fitness on the payoff or not) have a large influence on the outcome
of the evolutionary process and in a majority of cases do change the equilibria structure, stability and/or
basins of attraction.
the specific question of the existence of different time scales was discussed in section 3.
this is a
problem that has received some attention in economics but otherwise it has been largely ignored in biological
contexts. in spite of this, we have shown that considering fast evolution in the case of the ultimatum game
may lead to a non-trivial, unexpected conclusion: that individual selection may be enough to explain the
experimental evidence that people do not behave rationally. this is an important point in so far as, to date,
simple individual selection was believed not to provide an understanding of the phenomena of altruistic
punishment reported in many experiments [56]. we thus see that the effect of different time scales might be
determinant and therefore must be considered among the relevant factors with an influence on evolutionary
phenomena.
this conclusion is reinforced by our general study of symmetric 2 × 2 games, which shows
that the equilibria of about half of the possible games change when considering fast evolution. changes
are particularly surprising in the case of the harmony game, in which it turns out that when evolution is
fast, the selected strategy is the "wrong" one, meaning that it is less profitable for the individual and
for the population. such a result implies that one has to be careful when speaking of adaptation through
natural selection, because in this example we have a situation in which selection leads to a bad outcome
through the influence of fluctuations. it is clear that similar instances may arise in many other problems.
on the other hand, as for the particular question of the emergence of cooperation, our results imply that in
the framework of the classical 2 × 2 social dilemmas, fast evolution is generally bad for the appearance of
cooperative behavior.
the results reported here concerning the effect of time scales on evolution are only the first ones in this
direction and, clearly, much remains to be done. in this respect, we believe that it would be important to
work out the case of asymmetric 2 × 2 games, trying to reveal possible general conclusions that apply to
families of them. the work on the ultimatum game [64] is just a first example, but no systematic analysis of
asymmetric games has been carried out. a subsequent extension to games with more strategies would also be
desirable; indeed, the richness of the structures arising in those games (such as, e.g., the rock-scissors-paper
game [11]) suggests that considering fast evolution may lead to quite unexpected results. this has been very
recently considered in the framework of the evolutionary minority game [136] (where many strategies are
possible, not just two or three), once again from an economics perspective [137]; the conclusion of this paper,
namely that there is a phase transition as a function of the time scale parameter that can be observed in
the predictability of market behavior, is a further hint of the interest of this problem.
in section 4 we have presented a global view of the influence of population structure on evolutionary
games. we have seen a rich variety of results, of unquestionable interest, which on the downside reflect
the non-generality of this kind of evolutionary model. almost every detail of the model matters for the
outcome, and some of them dramatically.
we have provided evidence that population structure does not necessarily promote cooperation in evolu-
tionary game theory, showing instances in which population structure enhances or inhibits it. nonetheless,
we have identified two topological properties, network clustering and degree heterogeneity, as those that
allow a more unified approach to the characterization and understanding of the influence of population
structure on evolutionary games. for a certain subset of update rules, and for some subregion in the space of
games, they induce consistent modifications in the outcome. in summary, network clustering has a positive
impact on cooperation in stag hunt games and degree heterogeneity in snowdrift games. therefore, it would
be reasonable to expect similar effects in other networks which share these key topological properties. in
fact, there is another topological feature of networks that conditions evolutionary games, albeit of a different
type: the community structure [108, 109]. communities are subgraphs of densely interconnected nodes, and
they represent some kind of mesoscopic organization. a recent study [76] has pointed out that communities
may have their own effects on the game asymptotics in otherwise similar graphs, but more work is needed
to assess this influence.
on the other hand, the importance of the update rules cannot be overstated. we have seen that for the
best response and fermi rules even these "robust" effects of population structure are greatly reduced. it
is very remarkable from an applications point of view that the influence of population structure is inhibited
so greatly when update rules more sophisticated than merely imitative ones are considered, or when the
selection pressure is reduced. it is evident that a sound justification of several aspects of the models is
mandatory for applications. crucial details, such as the payoff structure of the game, the characteristics of the
update rule or the main topological features of the network, are critical for obtaining significant results. for
the same reasons, unchecked generalizations of the conclusions obtained from a particular model, which
go beyond the kind of game, the basic topology of the network or the characteristics of the update rule,
are very risky in this field of research. the evolutionary outcome of the model could very easily change
dramatically, making such generalizations invalid.
this conclusion has led a number of researchers to address the issue from a further evolutionary viewpoint:
perhaps, among the plethora of possible networks one can think of, only some of them (or some values of
their magnitudes) are really important, because the rest are not found in actual situations. this means that
networks themselves may be subject to natural selection, i.e., they may co-evolve along with the game under
consideration. this promising idea has already been proposed [138, 139, 140, 141, 142, 143, 144, 145] and a
number of interesting results, which would deserve a separate review in their own right7, have been obtained
regarding the emergence of cooperation. in this respect, it has been observed that co-evolution seems to
favor the stabilization of cooperative behavior, more so if the network is not rewired from a preexisting
one but rather grows upon arrival of new players [147].
a related approach, in which the dynamics of
the interaction network results from the mobility of players over a spatial substrate, has been the focus of
recent works [148, 149]. although these lines of research are appealing and natural when one thinks of possible
applications, we believe the same caveat applies: it is still too early to draw general conclusions and it
might well be that details are again important. nevertheless, work along these lines is needed to assess the
potential applicability of these types of models. interestingly, the same approach is also being introduced
to understand which strategy update rules should be used, once again as a manner to discriminate among
the very many possibilities. this was pioneered by harley [150] (see also the book by maynard smith [6],
where the paper by harley is presented as a chapter) and a few works have appeared in the last few years
[151, 152, 153, 154, 155]; although the available results are too specific to allow for a glimpse of any general
feature, they suggest that continuing this research may render fruitful results.
we thus reach our main conclusion: the outcome of evolutionary game theory depends to a large extent
on the details, a result that has very important implications for the use of evolutionary game theory to model
actual biological, sociological or economic systems. indeed, in view of this lack of generality, one has to
look carefully at the main factors involved in the situation to be modeled, because they need to be included
as closely to reality as necessary to produce conclusions relevant for the case of interest. note that this does
not mean that it is not possible to study evolutionary games from a more general viewpoint; as we have seen
above, general conclusions can be drawn, e.g., about the beneficial effects of clustering for cooperation or the
key role of hubs in highly heterogeneous networks. however, what we do mean is that one should not take
such general conclusions for granted when thinking of a specific problem or phenomenon, because it might
well be that some of its specifics render these abstract ideas inapplicable. on the other hand, it might be
7for a first attempt, see sec. 5 of [146].
possible that we are not looking at the problem in the right manner; there may be other magnitudes we have
not identified yet that allow for a classification of the different games and settings into something similar
to universality classes. whichever the case, it seems clear to us that much research is yet to be done along
the lines presented here. we hope that this review encourages others to walk off the beaten path in order
to make substantial contributions to the field.
acknowledgements
this work has been supported by projects mosaico, from the spanish ministerio de educación y ciencia, and mossnoho and simumat, from the comunidad autónoma de madrid.
references
[1] c. darwin, on the origin of species by means of natural selection, or the preservation of favoured races in the struggle
for life, 1st edition, john murray, london, 1859.
[2] j. roughgarden, theory of population genetics and evolutionary ecology: an introduction, macmillan publishing co.,
new york, 1979.
[3] l. peliti, fitness landscapes and evolution, in: t. riste, d. sherrington (eds.), physics of biomaterials: fluctuations,
self-assembly and evolution, kluwer academic, dordrecht, 1996, pp. 287–308.
[4] b. drossel, biological evolution and statistical physics, adv. phys 50 (2001) 209–295.
[5] w. j. ewens, mathematical population genetics, 2nd edition, springer, new york, 2004.
[6] j. maynard smith, evolution and the theory of games, cambridge university press, cambridge, 1982.
[7] s. j. gould, e. s. vrba, exaptation - a missing term in the science of form, paleobiology 8 (1982) 4–15.
[8] j. von neumann, o. morgenstern, the theory of games and economic behavior, princeton university press, princeton,
1947.
[9] h. gintis, game theory evolving, princeton university press, princeton, 2000.
[10] m. a. nowak, evolutionary dynamics: exploring the equations of life, the belknap press of harvard university press,
cambridge, 2006.
[11] j. hofbauer, k. sigmund, evolutionary games and population dynamics, cambridge university press, cambridge, 1998.
[12] p. d. taylor, l. jonker, evolutionarily stable strategies and game dynamics, math. biosci. 40 (1978) 145–156.
[13] j. nash, equilibrium points in n-person games, proc. natl. acad. sci. usa 36 (1950) 48–49.
[14] e. pennisi, how did cooperative behavior evolve?, science 309 (2005) 93.
[15] w. d. hamilton, the genetical evolution of social behaviour i, j. theor. biol. 7 (1964) 1–16.
[16] w. d. hamilton, the genetical evolution of social behaviour ii, j. theor. biol. 7 (1964) 17–52.
[17] r. l. trivers, the evolution of reciprocal altruism, q. rev. biol. 46 (1971) 35–57.
[18] p. hammerstein (ed.), genetic and cultural evolution of cooperation (dahlem workshop report 90), mit press,
cambridge, 2003.
[19] m. a. nowak, five rules for the evolution of cooperation, science 314 (2006) 1560–1563.
[20] g. szabó, g. fáth, evolutionary games on graphs, phys. rep. 446 (2007) 97–216.
[21] j. hofbauer, k. sigmund, evolutionary game dynamics, bull. am. math. soc. 40 (2003) 479–519.
[22] a. rapoport, a. m. chammah, prisoner's dilemma, university of michigan press, ann arbor, 1965.
[23] r. sugden, the economics of rights, co-operation and welfare, basil blackwell,, oxford, 1986.
[24] j. maynard smith, g. price, the logic of animal conflict, nature 246 (1973) 15–18.
[25] f. vega-redondo, economics and the theory of games, cambridge university press, cambridge, 2003.
[26] p. schuster, k. sigmund, replicator dynamics, j. theor. biol. 100 (1983) 533–538.
[27] a. traulsen, j. c. claussen, c. hauert, coevolutionary dynamics: from finite to infinite populations, phys. rev. lett.
95 (2005) 238701.
[28] j. c. claussen, a. traulsen, non-gaussian fluctuations arising from finite populations: exact results for the evolutionary
moran process, phys. rev. e 71 (2005) 25101.
[29] a. traulsen, j. c. claussen, c. hauert, coevolutionary dynamics in large, but finite populations, phys. rev. e 74 (2006)
011901.
[30] j. c. claussen, discrete stochastic processes, replicator and fokker-planck equations of coevolutionary dynamics in finite
and infinite populations, banach center publications 80 (2008) 17–31.
[31] d. b. fogel, g. b. fogel, p. c. andrews, on the instability of evolutionary stable strategies, biosys. 44 (1997) 135–152.
[32] g. b. fogel, p. c. andrews, d. b. fogel, on the instability of evolutionary stable strategies in small populations, ecol.
modelling 109 (1998) 283–294.
[33] k. m. page, m. a. nowak, unifying evolutionary dynamics, j. theor. biol. 219 (2002) 93–98.
[34] g. r. price, selection and covariance?, nature 227 (1970) 520–521.
[35] s. a. frank, george price's contributions to evolutionary genetics, j. theor. biol. 175 (1995) 373–388.
[36] p. kollock, social dilemmas: the anatomy of cooperation, annu. rev. sociol. 24 (1998) 183–214.
[37] j. maynard smith, e. szathm ́
ary, the major transitions in evolution, oxford university press, oxford, 1995.
[38] a. p. hendry, m. t. kinnison, perspective: the pace of modern life: measuring rates of contemporary microevolution,
evolution 53 (1999) 1637–1653.
[39] a. p. hendry, j. k. wenburg, p. bentzen, e. c. volk, t. p. quinn, rapid evolution of reproductive isolation in the wild:
evidence from introduced salmon, science 290 (2000) 516–518.
[40] t. yoshida, l. e. jones, s. p. ellner, g. f. fussmann, n. g. h. jr., rapid evolution drives ecological dynamics in a
predator-prey system, nature 424 (2003) 303–306.
[41] r. m. fagen, when doves conspire: evolution of non-damaging fighting tactics in a non-random encounter animal conflict
model, am. nat. 115 (1980) 858–869.
[42] b. skyrms, the stag hunt and evolution of social structure, cambridge university press, cambridge, 2003.
[43] j. c. harsanyi, r. selten, a general theory of equilibrium selection in games, mit, cambridge, massachussets, 1988.
[44] m. w. macy, a. flache, learning dynamics in social dilemmas, proc. nat. acad. sci. usa 99 (2002) 7229–7236.
[45] m. kandori, g. j. mailath, r. roy, learning, mutation and long-run equilibria in games, econometrica 61 (1993) 29–56.
[46] d. foster, p. h. young, stochastic evolutionary game dynamics, theor. popul. biol. 38 (1990) 219–232.
[47] a. j. robson, f. vega-redondo, efficient equilibrium selection in evolutionary games with random matching, j. econ.
theory 70 (1996) 65–92.
[48] j. miekisz, equilibrium selection in evolutionary games with random matching of players, j. theor. biol. 232 (2005)
47–53.
[49] b. wölfing, a. traulsen, stochastic sampling of interaction partners versus deterministic payoff assignment, j. theor. biol. 257 (2009) 689–695.
[50] r. t. boylan, laws of large numbers for dynamical systems with randomly matched individuals, j. econ. theory 57
(1992) 473–504.
[51] r. t. boylan, continuous approximation of dynamical systems with randomly matched individuals, j. econ. theory 66
(1995) 615–625.
[52] a. n. licht, games commissions play: 2x2 games of international securities regulation, yale j. int. law 24 (1999) 61–125.
[53] w. g ̈
uth, r. schmittberger, b. schwarze, an experimental analysis of ultimate bargaining, j. econ. behav. org. 3 (1982)
367–388.
[54] j. henrich, r. boyd, s. bowles, c. camerer, e. fehr, h. gintis (eds.), foundations of human sociality: economic
experiments and ethnographic evidence from fifteen small-scale societies, oxford university press, oxford, 2004.
[55] m. a. nowak, k. sigmund, evolution of indirect reciprocity by image scoring, nature 393 (1998) 573–577.
[56] c. f. camerer, behavioral game theory, princeton university press, princeton, 2003.
[57] e. fehr, u. fischbacher, the nature of human altruism, nature 425 (2003) 785–791.
[58] r. axelrod, w. d. hamilton, the evolution of cooperation, science 211 (1981) 1390–1396.
[59] r. axelrod, the evolution of cooperation, basic books, new york, 1984.
[60] h. gintis, strong reciprocity and human sociality, j. theor. biol. 206 (2000) 169–179.
[61] e. fehr, u. fischbacher, s. g ̈
achter, strong reciprocity, human cooperation and the enforcement of social norms, hum.
nat. 13 (2002) 1–25.
[62] k. page, m. nowak, a generalized adaptive dynamics framework can describe the evolutionary ultimatum game, j.
theor. biol. 209 (2000) 173–179.
[63] k. page, m. nowak, empathy leads to fairness, bull. math. biol. 64 (2002) 1101–1116.
[64] a. sánchez, j. a. cuesta, altruism may arise from individual selection, j. theor. biol. 235 (2005) 233–240.
[65] c. p. roca, j. a. cuesta, a. sánchez, time scales in evolutionary dynamics, phys. rev. lett. 97 (2006) 158701.
[66] c. p. roca, j. a. cuesta, a. sánchez, the importance of selection rate in the evolution of cooperation, eur. phys. j. special topics 143 (2007) 51–58.
[67] p. a. p. moran, the statistical processes of evolutionary theory, clarendon press, oxford, 1962.
[68] c. taylor, d. fudenberg, a. sasaki, m. a. nowak, evolutionary game dynamics in finite populations, bull. math. biol.
66 (2004) 1621–1644.
[69] s. karlin, h. m. taylor, a first course in stochastic processes, 2nd edition, academic press, new york, 1975.
[70] c. m. grinstead, j. l. snell, introduction to probability, 2nd edition, american mathematical society, providence, 1997.
[71] a. rapoport, m. guyer, a taxonomy of 2 × 2 games, general systems 11 (1966) 203–214.
[72] a. traulsen, m. a. nowak, j. m. pacheco, stochastic payoffevaluation increases the temperature of selection, j. theor.
biol. 244 (2007) 349–356.
[73] m. a. nowak, r. m. may, evolutionary games and spatial chaos, nature 359 (1992) 826–829.
[74] c. hauert, effects of space in 2 × 2 games, int. j. bifurc. chaos 12 (2002) 1531–1548.
[75] f. c. santos, j. m. pacheco, t. lenaerts, evolutionary dynamics of social dilemmas in structured heterogeneous popu-
lations, proc. natl. acad. sci. usa 103 (2006) 3490–3494.
[76] s. lozano, a. arenas, a. sánchez, mesoscopic structure conditions the emergence of cooperation on social networks, plos one 3 (2008) e1892.
[77] m. a. nowak, s. bonhoeffer, r. m. may, spatial games and the maintenance of cooperation, proc. natl. acad. sci. usa
91 (1994) 4877–4881.
[78] k. lindgren, m. g. nordahl, evolutionary dynamics of spatial games, physica d 75 (1994) 292–309.
[79] v. hutson, g. vickers, the spatial struggle of tit-for-tat and defect, phil. trans. r. soc. lond. b 348 (1995) 393–404.
[80] p. grim, spatialization and greater generosity in the stochastic prisoner's dilemma, biosys. 37 (1996) 3–17.
[81] m. nakamaru, h. matsuda, y. iwasa, the evolution of cooperation in a lattice-structured population, j. theor. biol.
184 (1997) 65–81.
[82] g. szab ́
o, c. t ̋
oke, evolutionary prisoner's dilemma game on a square lattice, phys. rev. e 58 (1998) 69–73.
[83] k. brauchli, t. killingback, m. doebeli, evolution of cooperation in spatially structured populations, j. theor. biol.
200 (1999) 405–417.
[84] g. abramson, m. kuperman, social games in a social network, phys. rev. e 63 (2001) 030901(r).
[85] m. d. cohen, r. l. riolo, r. axelrod, the role of social structure in the maintenance of cooperative regimes, rationality
and society 13 (2001) 5–32.
[86] m. h. vainstein, j. j. arenzon, disordered environments in spatial games, phys. rev. e 64 (2001) 051905.
[87] y. lim, k. chen, c. jayaprakash, scale-invariant behavior in a spatial game of prisoners' dilemma, phys. rev. e 65
(2002) 026134.
[88] f. schweitzer, l. behera, h. m ̈
uhlenbein, evolution of cooperation in a spatial prisoner's dilemma, adv. complex sys.
5 (2002) 269–299.
[89] m. ifti, t. killingback, m. doebeli, effect of neighbourhood size and connectivity on the spatial continuous prisoner's
dilemma, j. theor. biol. 231 (2004) 97–106.
[90] c.-l. tang, w.-x. wang, x. wu, b.-h. wang, effects of average degree on cooperation in networked evolutionary game,
eur. phys. j. b 53 (2006) 411–415.
[91] m. perc, a. szolnoki, social diversity and promotion of cooperation in the spatial prisoner's dilemma, phys. rev. e 77
(2008) 011904.
[92] t. killingback, m. doebeli, spatial evolutionary game theory: hawks and doves revisited, proc. r. soc. lond. b 263
(1996) 1135–1144.
[93] c. hauert, m. doebeli, spatial structure often inhibits the evolution of cooperation in the snowdrift game, nature 428
(2004) 643–646.
[94] m. sysi-aho, j. saram ̈
aki, j. kert ́
esz, k. kaski, spatial snowdrift game with myopic agents, eur. phys. j. b 44 (2005)
129–135.
[95] a. kun, g. boza, i. scheuring, asynchronous snowdrift game with synergistic effect as a model of cooperation, behav.
ecol. 17 (2006) 633–641.
[96] m. tomassini, l. luthi, m. giacobini, hawks and doves on small-world networks, phys. rev. e 73 (2006) 016132.
[97] l.-x. zhong, d.-f. zheng, b. zheng, c. xu, p. hui, networking effects on cooperation in evolutionary snowdrift game,
europhys. lett. 76 (2006) 724–730.
[98] l. e. blume, the statistical mechanics of strategic interaction, games econ. behav. 5 (1993) 387–424.
[99] g. ellison, learning, local interaction, and coordination, econometrica 61 (1993) 1047–1071.
[100] o. kirchkamp, spatial evolution of automata in the prisoners' dilemma, j. econ. behav. org. 43 (2000) 239–262.
[101] c. p. roca, j. cuesta, a. s ́
anchez, effect of spatial structure on the evolution of cooperation, phys. rev. e, in press.
[102] c. p. roca, j. cuesta, a. s ́
anchez, promotion of cooperation on networks? the myopic best response case, eur. phys.
j. b, in press.
[103] b. a. huberman, n. s. glance, evolutionary games and computer simulations, proc. natl. acad. sci. usa 90 (1993)
7716–7718.
[104] n. masuda, k. aihara, spatial prisoner's dilemma optimally played in small-world networks, phys. lett. a 313 (2003)
55–61.
[105] m. tomochi, defectors' niches: prisoner's dilemma game on disordered networks, soc. netw. 26 (2004) 309–321.
[106] h. hinrichsen, non-equilibrium critical phenomena and phase transitions into absorbing states, adv. phys. 49 (2000)
815–958.
[107] j. r. n. chiappin, m. j. de oliveira, emergence of cooperation among interacting individuals, phys. rev. e 59 (1999)
6419–6421.
[108] m. newman, the structure and function of complex networks, siam review 45 (2003) 167–256.
[109] s. bocaletti, v. latora, y. moreno, m. chavez, d. u. hwang, complex networks: structure and dynamics, phys. rep.
424 (2006) 175–308.
[110] d. j. watts, s. h. strogatz, collective dynamics of 'small-world' networks, nature 393 (1998) 440–442.
[111] r. albert, a.-l. barab ́
asi, statistical mechanics of complex networks, rev. mod. phys. 74 (2002) 47–97.
[112] p. holme, a. trusina, b. j. kim, p. minnhagen, prisoners dilemma in real-world acquaintance networks: spikes and
quasi-equilibria induced by the interplay between structure and dynamics, phys. rev. e 68 (2003) 030901.
[113] r. guimerà, l. danon, a. díaz-guilera, f. giralt, a. arenas, self-similar community structure in a network of human interactions, phys. rev. e 68 (2003) 065103.
[114] d. helbing, interrelations between stochastic equations for systems with pair interactions, physica a 181 (1992) 29–52.
[115] d. helbing, a mathematical model for behavioral changes through pair interactions, in: g. hagg, u. mueller, k. g.
troitzsch (eds.), economic evolution and demographic change, springer–verlag, berlin, 1992, pp. 330–348.
[116] k. h. schlag, why imitate, and if so, how? a boundedly rational approach to multi-armed bandits, j. econ. theory 78
(1998) 130–156.
[117] l. blume, now noise matters, games econ. behav. 44 (2003) 251–271.
[118] a. traulsen, m. a. nowak, j. m. pacheco, stochastic dynamics of invasion and fixation, phys. rev. e 74 (2006) 011909.
[119] g. szab ́
o, j. vukov, a. szolnoki, phase diagrams for an evolutionary prisoners dilemma game on two-dimensional lattices,
phys. rev. e 72 (2005) 047107.
[120] l. luthi, m. tomassini, e. pestelacci, evolutionary games on networks and payoffinvariance under replicator dynamics,
biosys. 96 (2009) 213–222.
[121] c. p. roca, j. cuesta, a. s ́
anchez, in preparation.
[122] z.-x. wu, x.-j. xu, y. chen, y.-h. wang, spatial prisoner's dilemma game with volunteering in newman-watts small-
world networks, phys. rev. e 71 (2005) 037103.
[123] z.-x. wu, x.-j. xu, y.-h. wang, prisoner's dilemma game with heterogeneous influential effect on regular small-world
networks, chin. phys. lett. 23 (2006) 531–534.
[124] a. cassar, coordination and cooperation in local, random and small world networks: experimental evidence, games
econ. behav. 58 (2007) 209–230.
[125] j. vukov, g. szab ́
o, a. szolnoki, cooperation in the noisy case: prisoner's dilemma game on two types of regular random
graphs, phys. rev. e 73 (2006) 067103.
[126] c. hauert, g. szab ́
o, game theory and physics, am. j. phys. 73 (2005) 405.
[127] f. c. santos, j. m. pacheco, scale-free networks provide a unifying framework for the emergence of cooperation, phys.
rev. lett. 95 (2005) 98104.
[128] f. c. santos, j. m. pacheco, t. lenaerts, graph topology plays a determinant role in the evolution of cooperation, proc.
roy. soc. b 273 (2006) 51–56.
[129] k. klemm, v. m. egu ́
ıluz, growing scale-free networks with small-world behavior, phys. rev. e 65 (2002) 057102.
[130] s. assenza, j. g ́
omez-garde ̃
nes, v. latora, enhancement of cooperation in highly clustered scale-free networks, phys.
rev. e 78 (2009) 017101.
[131] j. g ́
omez-garde ̃
nes, m. campillo, l. m. flor ́
ıa, y. moreno, dynamical organization of cooperation in complex networks,
phys. rev. lett. 98 (2007) 108103.
[132] j. poncela, j. g ́
omez-garde ̃
nes, l. m. flor ́
ıa, y. moreno, robustness of cooperation in the prisoner's dilemma in complex
networks, new j. phys. 9 (2007) 184.
[133] m. a. nowak, a. sasaki, c. taylor, d. fudenberg, emergence of cooperation and evolutionary stability in finite popu-
lations, nature 428 (2004) 646–650.
[134] g. wild, a. traulsen, the different limits of weak selection and the evolutionary dynamics of finite populations, j. theor.
biol. 247 (2007) 382–390.
[135] h. ohtsuki, m. a. nowak, the replicator equation on graphs, j. theor. biol. 243 (2006) 86–97.
[136] d. challet, y. c. zhang, emergence of cooperation and organization in an evolutionary game, physica a 246 (1997)
407–418.
[137] l.-x. zhong, t. qiu, b.-h. chen, c.-f. liu, effects of dynamic response time in an evolving market, physica a 388
(2009) 673–681.
[138] m. g. zimmermann, v. m. egu ́
ıluz, m. san miguel, coevolution of dynamical states and interactions in dynamic
networks, phys. rev. e 69 (2004) 65102.
[139] m. marsili, f. vega-redondo, f. slanina, the rise and fall of a networked society: a formal model, proc. nat. acad.
sci. usa 101 (2004) 1439–1442.
[140] v. egu ́
ıluz, m. zimmermann, m. g. cela-conde, m. san miguel, cooperation and the emergence of role differentiation
in the dynamics of social networks, am. j. sociol. 110 (2005) 977–1008.
[141] f. c. santos, j. m. pacheco, t. lenaerts, cooperation prevails when individuals adjust their social ties, plos comput.
biol. 2 (2006) e140.
[142] j. m. pacheco, a. traulsen, m. a. nowak, coevolution of strategy and structure in complex networks with dynamical
linking, phys. rev. lett. 97 (2006) 258103.
[143] h. ohtsuki, m. a. nowak, j. m. pacheco, breaking the symmetry between interaction and replacement in evolutionary
dynamics on graphs, phys. rev. lett. 98 (2007) 108106.
[144] f. mengel, c. fosco, cooperation through imitation and exclusion in networks, mpra paper 5258, university library
of munich, germany (oct. 2007).
url http://ideas.repec.org/p/pra/mprapa/5258.html
[145] j. m. pacheco, a. traulsen, h. ohtsuki, m. a. nowak, repeated games and direct reciprocity under active linking, j.
theor. biol. 250 (2008) 723–734.
[146] t. gross, b. blasius, adaptive coevolutionary networks: a review, j. r. soc. interface 5 (2008) 259–271.
[147] j. poncela, j. gómez-gardeñes, l. m. floría, a. sánchez, y. moreno, complex cooperative networks from evolutionary preferential attachment, plos one 3 (2008) e2449.
[148] e. a. sicardi, h. fort, m. h. vainstein, j. j. arenzon, random mobility and spatial structure often enhance cooperation,
j. theor. biol. 256 (2009) 240–246.
[149] d. helbing, w. yu, the outbreak of cooperation among success-driven individuals under noisy conditions, proc. natl.
acad. sci. usa 106 (2009) 3680–3685.
[150] c. b. harley, learning the evolutionarily stable strategy., j. theor. biol. 89 (1981) 611–633.
[151] o. kirchkamp, simultaneous evolution of learning rules and strategies., j. econ. behav. org. 40 (1999) 295–312.
[152] l. g. moyano, a. s ́
anchez, evolving learning rules and emergence of cooperation in spatial prisoner's dilemma, j. theor.
biol. 259 (2009) 84–95.
[153] g. szabó, a. szolnoki, j. vukov, application of darwinian selection to dynamical rules in spatial prisoners dilemma games, preprint.
url http://www.mfa.kfki.hu/~szabo/szaborms.html
[154] a. szolnoki, m. perc, coevolution of teaching activity promotes cooperation, new j. phys 10 (2008) 043036.
[155] a. szolnoki, m. perc, promoting cooperation in social dilemmas via simple coevolutionary rules, eur. phys. j. b 67
(2008) 337–342.
a. characterization of birth-death processes
one of the relevant quantities to determine in a birth-death process is the probability $c_n$ that, starting
from state n, the process eventually ends up absorbed into the absorbing state $n = N$. there is a simple
relationship between $c_n$ and the stochastic matrix $P$, namely
$$c_n = p_{n,n-1}c_{n-1} + p_{n,n}c_n + p_{n,n+1}c_{n+1}, \qquad 0 < n < N, \qquad (46)$$
with the obvious boundary conditions $c_0 = 0$ and $c_N = 1$. the solution to this equation is [69]
$$c_n = \frac{Q_n}{Q_N}, \qquad Q_n = \sum_{j=0}^{n-1} q_j, \qquad q_0 = 1, \qquad q_j = \prod_{i=1}^{j}\frac{p_{i,i-1}}{p_{i,i+1}} \;\; (j > 0). \qquad (47)$$
another relevant quantity is $v_{k,n}$, the expected number of visits that, starting from state k, the process pays
to site n before it enters one absorbing state. if $V = (v_{k,n})$, with $0 < k, n < N$, then
$$V = I + R + R^2 + \cdots = (I - R)^{-1}, \qquad (48)$$
where $I$ is the identity matrix and $R$ is the submatrix of $P$ corresponding to the transient (non-absorbing)
states. the series converges because $R$ is substochastic [70]. thus $V$ fulfills the equation $V = VR + I$, which
amounts to an equation similar to (46) for every row of $V$, namely
$$v_{k,n} = v_{k,n-1}p_{n-1,n} + v_{k,n}p_{n,n} + v_{k,n+1}p_{n+1,n} + \delta_{k,n}, \qquad 0 < k, n < N, \qquad (49)$$
where δk,n = 1 if k = n and 0 otherwise. contrary to what happens with eq. (46), this equation has no
simple solution and it is better solved as in (48). finally, τk, the number of steps before absorption occurs
into any absorbing state, when starting at state k, is obtained as
$$\tau_k = \sum_{n=1}^{N-1} v_{k,n}. \qquad (50)$$
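as a quick numerical illustration of eqs. (46)-(50), the following python sketch (our own example; the transition matrix is an arbitrary placeholder supplied by the user) evaluates the absorption probabilities, the visit matrix and the mean absorption times directly from a stochastic matrix p of a birth-death chain with absorbing states 0 and N:

    import numpy as np

    def birth_death_summary(p):
        # p: (N+1)x(N+1) stochastic matrix of a birth-death chain, absorbing at 0 and N
        N = p.shape[0] - 1
        # eq. (47): q_0 = 1, q_j = prod_{i=1..j} p_{i,i-1} / p_{i,i+1}
        q = np.ones(N)
        for j in range(1, N):
            q[j] = q[j - 1] * p[j, j - 1] / p[j, j + 1]
        Q = np.concatenate(([0.0], np.cumsum(q)))   # Q_n = sum_{j<n} q_j
        c = Q / Q[N]                                 # absorption probabilities into n = N
        # eq. (48): visits to transient states, V = (I - R)^(-1)
        R = p[1:N, 1:N]
        V = np.linalg.inv(np.eye(N - 1) - R)
        # eq. (50): mean number of steps before absorption, starting from k = 1..N-1
        tau = V.sum(axis=1)
        return c, V, tau

one can check that the returned c satisfies the recursion (46) row by row, which is a useful sanity test when p comes from a concrete update rule.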
b. absorption probability in the hypergeometric case
for the special case in which
$$\frac{p_{n,n-1}}{p_{n,n+1}} = \frac{\alpha n + \beta}{\alpha(n+1) + \gamma} \qquad (51)$$
the absorption probability into state $n = N$, $c_n$, can be obtained in closed form. according to (47) the
sequence $q_j$ fulfills the hypergeometric relation
$$\frac{q_j}{q_{j-1}} = \frac{\alpha j + \beta}{\alpha(j+1) + \gamma}, \qquad (52)$$
from which
$$(\alpha(j+1) + \gamma)\, q_j = (\alpha j + \beta)\, q_{j-1}. \qquad (53)$$
adding this equation up for $j = 1, \ldots, n-1$ we get
$$\alpha \sum_{j=1}^{n-1} (j+1)\, q_j + \gamma (Q_n - 1) = \alpha \sum_{j=0}^{n-2} (j+1)\, q_j + \beta (Q_n - q_{n-1}), \qquad (54)$$
and therefore
$$(\gamma - \beta)\, Q_n = \gamma + \alpha - (\beta + \alpha n)\, q_{n-1}. \qquad (55)$$
thus, provided $\gamma \neq \beta$, we obtain
$$Q_n = \frac{\gamma + \alpha}{\gamma - \beta}\left(1 - \prod_{j=1}^{n} \frac{\alpha j + \beta}{\alpha j + \gamma}\right). \qquad (56)$$
if $\alpha = 0$ this has the simple form
$$Q_n = \frac{\gamma}{\gamma - \beta}\left(1 - (\beta/\gamma)^n\right). \qquad (57)$$
if $\alpha \neq 0$, then we can rewrite
$$Q_n = \frac{\gamma + \alpha}{\gamma - \beta}\left(1 - \frac{\Gamma(\beta/\alpha + n + 1)\,\Gamma(\gamma/\alpha + 1)}{\Gamma(\gamma/\alpha + n + 1)\,\Gamma(\beta/\alpha + 1)}\right). \qquad (58)$$
the case $\gamma = \beta$ can be obtained from (55) or as the limit of expression (58) when $\gamma \to \beta$. both ways yield
$$Q_n = \sum_{j=1}^{n} \frac{\alpha + \gamma}{\alpha j + \gamma}. \qquad (59)$$
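as a consistency check of (56)-(59) (our own sketch, with arbitrary parameter values), one can compare the closed forms with the direct accumulation of the q_j defined by the ratio (52):

    def Q_direct(n, alpha, beta, gamma):
        # Q_n = sum_{j=0}^{n-1} q_j, built from q_0 = 1 and the ratio in eq. (52)
        q, Q = 1.0, 0.0
        for j in range(n):
            Q += q
            q *= (alpha * (j + 1) + beta) / (alpha * (j + 2) + gamma)
        return Q

    def Q_closed(n, alpha, beta, gamma, tol=1e-12):
        # eq. (59) in the degenerate case gamma = beta, eq. (56) otherwise
        if abs(gamma - beta) < tol:
            return sum((alpha + gamma) / (alpha * j + gamma) for j in range(1, n + 1))
        prod = 1.0
        for j in range(1, n + 1):
            prod *= (alpha * j + beta) / (alpha * j + gamma)
        return (gamma + alpha) / (gamma - beta) * (1.0 - prod)

    # the two evaluations agree to machine precision, e.g.
    print(Q_direct(20, 0.3, 0.7, 1.1), Q_closed(20, 0.3, 0.7, 1.1))

the absorption probability then follows as c_n = Q_direct(n, ...) / Q_direct(N, ...), in agreement with eq. (47).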
|
0911.1721 | phase diagram of the lattice su(2) higgs model | we perform a detailed study of the phase diagram of the lattice higgs su(2)
model with fixed higgs field length. consistently with predictions based on the
fradkin-shenker theorem we find a first order transition line with an endpoint
whose position we determined. the diagram also shows cross-over lines: the
cross-over corresponding to the pure su(2) bulk is also present at nonzero
coupling with the higgs field and merges with the one that continues the line
of first order transition beyond the critical endpoint. at high temperature the
first order line becomes a crossover, whose position moves by varying the
temperature.
| introduction
in this work we present a detailed study of the phase diagram of the lattice higgs
su(2) model with higgs field in the fundamental representation and of fixed
length1. the model in which higgs length is allowed to change has received quite
a lot of attention in the past for its possible phenomenological implications (see
e.g. [2]), so that its phase diagram is known with good precision. no systematic
study exists instead for the case in which the higgs length is frozen. however
this last model is often used, mainly because of its computational simplicity in
numerical simulations, as the prototype of a non-abelian gauge theory coupled
with matter in the fundamental representation. in particular some work has
been done to study confinement in this model ([3] and [4]). in those works some
properties of the phase diagram are usually taken for granted, like the existence
of a line of first order transitions for β ≳2.3, but they have never really been
tested in simulations. as a first step towards a complete understanding of this
model, we thus started to systematically investigate its phase diagram, in order
to obtain precise estimates of the location of its critical points.
the action of the lattice higgs su(2) model we adopt is
$$s = \beta \sum_{x,\mu<\nu} \left[1 - \tfrac{1}{2}\,\mathrm{re\,tr}\, p_{\mu\nu}(x)\right] - \frac{\kappa}{2} \sum_{x,\mu>0} \mathrm{tr}\left[\varphi^\dagger(x)\, u_\mu(x+\hat\mu)\, \varphi(x+\hat\mu)\right] \qquad (1)$$
where the first term is the standard wilson action and the higgs field φ (which
transforms in the fundamental representation) is rewritten as an su(2) matrix
(see e.g. [5], [6]). since the action is linear in the fields at each point, standard
heatbath ([7], [8]) and overrelaxation ([9]) algorithms can be used for the monte
carlo update. if the higgs length is allowed to change this is no longer true, since
1a first report of this study was presented at the lattice 2008 conference, [1].
in this case a quartic term is needed, which has the form $\lambda\left\{\tfrac{1}{2}\mathrm{tr}[\varphi^\dagger(x)\varphi(x)] - 1\right\}^2$
and destroys linearity.
in the limit β →∞the theory reduces to an o(4) non-linear sigma model,
which is known to have a mean field phase transition for κ ≈0.6 (see e.g. [10]);
in [11] it was shown that this second order transition becomes first order à la
coleman-weinberg when the gauge field is introduced as a small perturbation (β
large). the authors of [11], using the cluster expansion developed in [12], were
also able to prove the existence of a region of parameter space near the axis β = 0
and for κ →∞where every local observable is analytic. this statement is often
referred to as the fradkin-shenker (fs) theorem. it is important to notice that,
in this context, an observable is defined as local when its support is contained
in a compact set in the thermodynamic limit; observables not satisfying this
requirement can have non-analytic behaviour (see e.g. [13, 14]). these two results
suggest a phase diagram like that shown in fig. 1: the analyticity region is
indicated by ar and is limited by the dotted line, the thick line is the line
of first order transitions and the two dots are its second order endpoints. as
long as we consider the model at zero temperature, the ǫ-expansion predicts the
endpoints to be in the mean field universality class.
figure 1: phase diagram of the higgs su(2) model as predicted in [11].
a similar phase diagram was observed in the works [2], [15] for the lattice
higgs su(2) model with fourth-order scalar coupling λ ≤0.5, while in the
model considered here λ →∞. because of the supposed triviality of the φ4
model in four dimensions, λ is expected to be a marginally irrelevant parameter
and therefore the phase diagram not to change qualitatively for λ →∞(see
e.g. [16]); however it has long been observed that the first order transition
gets weaker as λ is increased, so that the phase diagram of fig. 1 has not really
been checked at large values of λ.
after the seminal work ref.
[17] on very small lattices, in ref.
[18] the
observation of a double peak structure was reported at β = 2.3 on a $16^4$ lattice,
which however was probably only a consequence of the poor statistics, since in a
later study, ref. [19], no double peak was found at β = 2.3. there it was stated
that "the system exhibits a transient behaviour up to l = 24 along which the
order of the transition cannot be discerned". in this paper we produce the first
clear evidence of the line of first order transition and we obtain an estimate of
the endpoint position.
2 numerical results
the obvious observables to look at for this system are
• the gauge-higgs coupling, $g = \frac{1}{2}\langle \mathrm{tr}[\varphi^\dagger(x)\, u_\mu(x+\hat\mu)\, \varphi(x+\hat\mu)]\rangle$
• the plaquette, $p = \frac{1}{2}\langle \mathrm{tr}\, p_{\mu\nu}\rangle$
• the energy density $e = 6\beta p + 4\kappa g$
besides these natural ones, we monitored also the following observables:
• the polyakov loop $pl(\vec{x}) = \frac{1}{2}\mathrm{tr}\left[\prod_{t=0}^{l_t-1} u_0(t,\vec{x})\right]$, $pl = \frac{1}{v}\left\langle \sum_{\vec{x}} pl(\vec{x})\right\rangle$
• the $z_2$ monopole density, $m = 1 - \frac{1}{n_c}\sum_{c} \sigma_c$, where c stands for the elementary cube and $\sigma_c = \prod_{p_{\mu\nu}\in\partial c} \mathrm{sign}\,\mathrm{tr}\, p_{\mu\nu}$
the polyakov loop behaviour is used as an indicator of confinement. the study
of the z2 monopoles is motivated by the similarity of the first order transition
with the bulk transition of the su(2) pure gauge theory, which is driven by
lattice artefacts such as the z2 monopoles. both these points will be discussed
more accurately in the following.
data were analyzed by using the optimized histogram method ([20]) and the
statistical errors were estimated by using the moving block bootstrap method
(see e.g. [21]).
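for readers unfamiliar with the latter, a minimal moving block bootstrap along the lines of [21] could look as follows (our own sketch; the block length, the number of resamplings and the estimator are placeholder choices that in practice have to be tuned to the autocorrelation of the data):

    import numpy as np

    def moving_block_bootstrap(data, estimator, block_len=50, n_boot=200, rng=None):
        # error estimate for a correlated monte carlo time series: resample
        # overlapping blocks of length block_len and take the spread of the estimator
        rng = rng if rng is not None else np.random.default_rng()
        data = np.asarray(data)
        n_blocks = len(data) - block_len + 1
        blocks = np.array([data[i:i + block_len] for i in range(n_blocks)])
        estimates = []
        for _ in range(n_boot):
            picks = rng.integers(0, n_blocks, size=max(1, len(data) // block_len))
            estimates.append(estimator(np.concatenate(blocks[picks])))
        return np.std(estimates)

    # e.g. statistical error on a susceptibility chi = V * (<o^2> - <o>^2)
    # err = moving_block_bootstrap(o_series, lambda s: volume * s.var())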
the presentation of the simulation results will be divided in several parts
1. we will show that at β = 2.5 there is no signal of a phase transition and
data are consistent with a smooth cross-over
2. we will show that for β ≥2.775 the scaling is consistent with a first order
transition and we will present evidence of a double peak structure
3. two independent estimates of the endpoint will be obtained
4. we will give hints that the above transitions are not related to confinement
5. we will investigate the relation between the line of first order transition,
which becomes a smooth cross-over beyond the endpoint, and the pure
su(2) bulk transition
6. finally we will present some exploratory results at t ̸= 0
all results in the first four parts have been obtained by fixing the value of β
and by looking for transitions in κ.
2.1 cross-over region
in fig. 2 the maxima of the susceptibilities of g, p and m are plotted for
various lattice sizes and β = 2.5. for lattices up to l ≈18 the maxima of
susceptibilities are well described by a function of the form $a + b\,l^4$, so that
they seem to scale linearly with volume as expected for a first order transition.
however on larger lattices all susceptibilities saturate and no singularity seems
to develop in the thermodynamical limit. this means that the system has a
figure 2: top: g susceptibility peak heights at β = 2.5 (the continuous line is a fit to $a + b\,x^4$). center: p susceptibility peak heights at β = 2.5. bottom: m susceptibility peak heights at β = 2.5. clearly they all saturate at large l.
4
correlation length of order ≈ 10 lattice spacings, so that the increase of the
susceptibilities with volume, which in previous studies was interpreted as due to
a first order transition, is just a signal that the lattices were too small.
to look for transitions, besides susceptibilities we also check the behaviour
of the energy binder cumulant, which is defined as $b = 1 - \langle e^4\rangle/(3\langle e^2\rangle^2)$. it
can be shown (see e.g. [22]) that near a transition b develops minima whose
depth scales as
$$b|_{\min} = \frac{2}{3} - \frac{1}{12}\left(\frac{e_+}{e_-} - \frac{e_-}{e_+}\right)^2 + o(l^{-4}) = \frac{2}{3} - \frac{1}{3}\left(\frac{\Delta}{\epsilon}\right)^2 + o(\Delta^3) + o(l^{-4}) \qquad (2)$$
where $e_\pm = \lim_{\beta\to\beta_c^\pm}\langle e\rangle$, $\Delta = e_+ - e_-$ and $\epsilon = \tfrac{1}{2}(e_+ + e_-)$. in particular
the thermodynamical limit of b|min is less than 2/3 if and only if a latent heat
is present.
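the cumulant itself is cheap to evaluate from the monte carlo time series of the energy; a minimal python sketch (our own notation, not the analysis code of this work) is:

    import numpy as np

    def binder_cumulant(e):
        # b = 1 - <e^4> / (3 <e^2>^2), from a time series of the energy density
        e = np.asarray(e)
        return 1.0 - np.mean(e**4) / (3.0 * np.mean(e**2)**2)

for a distribution sharply peaked around a single nonzero value the ratio tends to 1/3 and b -> 2/3, while for a two-peak (coexistence) distribution the cumulant stays below 2/3, which is the content of eq. (2).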
the scaling of the energy binder cumulant minima at β = 2.5 is shown in
fig. 3. also here two different behaviours are clearly visible: on small lattices
there is scaling consistent with a first order transition. as the volume increases,
b → 2/3, indicating a smooth cross-over or a second order transition.
figure 3: binder cumulant minima of e at β = 2.5 ($v = l^4$).
these results indicate that at β = 2.5 there is no transition; this is in sharp
contrast with all previous studies of this model, which concluded that for β ≥2.3
the system undergoes a first order transition. this wrong conclusion was based
on the analysis of lattices of size up to $25^4$, which we have just shown to be too
small.
2.2 first order region
at β = 2.775, β = 2.79 and β = 2.8 the scaling of the susceptibilities and
of the energy binder cumulant remains consistent with first order also for the
figure 4: scaling of the maxima of the susceptibilities and of the e binder cumulant minima for β = 2.775 ($v = l^4$ and the larger v values correspond to lattice sizes l = 30, 35, 40, 45).
figure 5: histogram of the observable g for β = 2.8 on the two largest lattices.
larger lattices, as shown for example in fig. 4 for β = 2.775. in this range of
β values the transition gets stronger as β increases. however we know that the
transition at β → ∞ has to become second order, so that at high enough β
the transition has to get weaker. we could not reach this regime in our
simulations since when increasing the β value it is also necessary to use larger
lattices. to have results free of spurious finite size transitions the lattice must
be large enough for the corresponding pure su(2) gauge theory to be confined
and exponentially large lattices in β are needed.
although all observables scale consistently with a first order transition, a
clear signal of metastability was revealed only at β = 2.8, where the transition
is stronger, and only in the two largest lattices, namely l = 45 and l = 50;
the histograms for the observable g on these two lattices are shown in fig. 5,
where the formation of a double peak structure is visible.
2.3 endpoint
having three sets of data with first order scaling and knowing the universality
class of the endpoint, we can estimate its position βc assuming that we are close
enough to it. we can do this in two independent ways:
1. from the second form of equation 2 we know the dependence of b on the
discontinuity ∆, and for a mean-field transition we have $\Delta \propto t^{1/2}$, where t
is the reduced temperature, $t = (T - T_c)/T_c$. in our case, near the critical
figure 6: top: determination of the critical point using the method (1). bot-
tom: determination of the critical point using the method (2).
point, we can use $t \propto (\beta - \beta_c)$ and therefore equation 2 can be written as²
$$\lim_{l\to\infty} b_l|_{\min} = \frac{2}{3} - \gamma_1(\beta - \beta_c) + o(\beta - \beta_c)^2 \qquad (3)$$
2. for a first order transition the susceptibilities scale for large volume as
$$\chi = v\left(\langle o^2\rangle - \langle o\rangle^2\right) \approx \mathrm{const} + v\,\Delta^2 \qquad (4)$$
so that fitting the χ maxima to the relation $\chi = \mathrm{const} + v\,\Delta^2$ and using
again $\Delta \propto t^{1/2}$, we obtain
$$\Delta^2 = \gamma_2(\beta - \beta_c) + o(\beta - \beta_c)^2 \qquad (5)$$
the fits in the planes (β, b) and (β, ∆²) are shown in fig. 6 (we use the
g susceptibility); in both cases the fit is very good and the estimates for the
critical point position are
$$\beta_c^{(1)} = 2.7266(16), \qquad \beta_c^{(2)} = 2.7299(36), \qquad (6)$$
which agree with each other within errors.
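both determinations reduce to straight-line fits of the form $y = \gamma(\beta - \beta_c)$; schematically (our own sketch, with placeholder data arrays), the endpoint is the intercept of the fit with the β axis:

    import numpy as np

    def endpoint_from_linear_fit(beta_vals, y_vals):
        # fit y = gamma*(beta - beta_c); y is 2/3 - b_min (method 1) or delta^2 (method 2)
        slope, intercept = np.polyfit(np.asarray(beta_vals), np.asarray(y_vals), 1)
        return -intercept / slope   # beta_c

in practice the statistical errors of the individual points have to be propagated through the fit to obtain uncertainties such as those quoted in (6).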
2.4 relation to confinement
the next question is to understand if the two "phases" found at large β have
different confinement properties. we recall that they cannot be considered as
different thermodynamical phases as a consequence of the fs theorem.
in this model wilson loops never obey the area law because of the presence
of the higgs field, which destroys the center symmetry of the pure gauge theory;
as a consequence the polyakov loop is always non-zero (a direct check of this
is shown in fig. 8) and cannot be an order parameter. nevertheless the polyakov
loop is commonly used as a confinement tracker also in theories where it is not
an order parameter, e.g. in qcd, since it jumps abruptly at the deconfinement
transition. for the lattice su(2) higgs model the polyakov loop does not seem to
be influenced in any way by the transition: for small lattices it slightly increases,
but for larger ones, while the transition gets stronger, pl gets flatter in the
transition region, as can be seen in fig. 8 (the transition is at κ = 0.704675(30)).
also polyakov loop correlators, measured by using the multilevel algorithm of
ref. [24], do not show any significant change across the transition, as shown in
the bottom of fig. 8.
an alternative possibility is to use the order parameter introduced in [25],
which we will denote by of m.
this is defined by the limit for r →∞of
the quantity of m(r), which is constructed as the square of the mean value
of a staple-shaped parallel transport connecting two higgs fields divided by a
wilson loop; the size of the staple is related to that of the wilson loop as shown
in fig. 7, where higgs fields are represented by dots. to our knowledge of m
2a subtlety has to be considered here: since in this model the two relevant operators at
the mean field critical point are not related to any symmetry of the system, there is in general
mixing between the magnetic and thermal relevant operators (see e.g. [23]). however, since
the ∆'s are measured along the coexistence curve, near the critical endpoint the mixing is
negligible and the relation between ∆ and t is as usual $\Delta \propto t^\beta$ (here "β" is the β critical
exponent).
figure 7: of m(r) definition.
has never been measured in a monte carlo simulation, so we have no previous
results to compare with; its strong coupling limit was computed in [26]. of m(r)
measures the overlap between a higgs-higgs dipole of linear dimension r and the
vacuum; if asymptotic colored states exist they are orthogonal to the vacuum,
so of m(r) →0 for r →∞, otherwise we must have of m(r) →α > 0.
the results obtained by measuring of m(r) on a $40^4$ lattice at β = 2.775 for κ
slightly above and slightly below the transition point are shown in fig. 9, where
the two lines are fits to the function³ $f(x) = a + b\,x^{-c}$ (symmetrized because of
the periodic boundary conditions).
for the parameters a, c we have the estimates
$$a(0.7046) = 4.9(1)\cdot 10^{-5}, \qquad c(0.7046) = 2.98(5), \qquad (7)$$
$$a(0.7048) = 10.5(5)\cdot 10^{-5}, \qquad c(0.7048) = 3.30(23), \qquad (8)$$
so on both sides of the transition charge is screened and according to the inter-
pretation of [25] there is color confinement.
2.5 the pure gauge su(2) bulk
another interesting point to study is what relation (if any) the first order transition has
with the bulk transition of the su(2) pure gauge theory (we call this "transition" for
the sake of simplicity, although it is only a rapid but smooth cross-over). a
hint of the existence of such a relation can be obtained by noting that the value
β ≈ 2.3, previously thought to be the first order line endpoint position, is the
value of the bulk su(2) transition.
since some test simulations indicated that the position of the su(2) bulk
is very stable for small values of κ (so that in the plane (β, κ) this crossover
follows a line perpendicular to the β axis), it seems more convenient to hold κ
fixed and vary the β value; the results obtained on a $30^4$ lattice for the plaquette
susceptibilities for several κ values are shown in fig. 10.
from these data a structure of cross-over lines emerges as shown in fig. 11.
for κ < 0.6 the bump in β of the plaquette susceptibility which corresponds to
the su(2) bulk transition is independent of κ both in position and in shape
(vertical dotted line in fig. 11). for κ larger than 0.6 two peaks in β appear, the
bulk one and the first order transition studied above. the latter peak persists
also when κ is greater than the critical value corresponding to the endpoint
3this ansatz is motivated by the observation that for r big enough such that the wilson
loop scales with perimeter, the exponentials in numerator and denominator are the same,
leaving a power law.
figure 8: top: polyakov loop for β = 2.775, κ = 0.70464 and various l; the line is a fit to $a + b\exp(-cl)$ and the result for the asymptotic value is $a = 1.7(1)\cdot 10^{-4}$. center: polyakov loop for β = 2.775 and various l. bottom: polyakov loop correlators measured on a $40^4$ lattice for β = 2.775 and κ slightly below (0.7046) and above (0.7048) the critical value.
figure 9: results for of m(r) at β = 2.775 on a $40^4$ lattice slightly above and below the transition point.
and there is no real transition, although it is smaller than the su(2) bulk signal
(see fig. 10, κ = 0.75). by increasing the κ value the cross-over remnant of
the first order transition moves towards smaller β values, until it intersects the
su(2) bulk in the point indicated by a in fig. 11. in the neighborhood of
the point a the "first order continuation" peak gets stronger and the su(2)
bulk disappears. this increase could have been misinterpreted as the first order
transition line endpoint in previous works. a direct check to ensure that in this
region there is no transition is shown in fig. 12. for still larger κ values only
one maximum is present in susceptibilities, which gets weaker as κ →∞.
this interplay between the first order transition line and the bulk su(2)
suggests that the first order transition could be in some way related to the same
lattice artifacts that drive the bulk su(2) transition, namely the z2 monopoles.
the density of z2 monopoles seems indeed to have a jump across the transition
(see fig. 4).
2.6 finite temperature
motivated by the last observation we try to investigate whether the first order transi-
tion line can itself be thought of as a lattice artefact, like the pure gauge so(3)
first order bulk transition. to answer this question the simplest method is to
consider the system at finite temperature: bulk transitions are insensitive to the
temporal extension lt of the lattice, while physical transitions scale by varying
lt.
for each lt value we simulate the system with several spatial extents ls, in
figure 10: up: plaquette susceptibility. down: plaquette susceptibility with the value at κ = 0 subtracted.
figure 11: phase diagram of the higgs su(2) with rapid crossovers indicated
by dotted lines.
order to perform a finite size analysis. some of the results obtained are shown in
fig. 13. it is clear that the position of the maxima of the g susceptibility moves
by varying lt, so that the line of first order transition seen at zero temperature
cannot be a lattice artifact. moreover for all the values of lt considered the
maxima saturate by increasing ls, indicating the absence of a phase transition.
we expect that for sufficiently small temperature (i.e. large enough lt) the first
order transition is restored, however to observe this behaviour numerically we
should use lattices with $l_t > 20$ and $l_s \gg l_t$, typically $l_s \gtrsim 3 l_t$. instead we
have to restrict ourselves to $l_s^3 l_t \lesssim 4 \times 10^6$ because of the available computer capability. in
any case we see that the cross-over gets stronger by increasing the lattice temporal
extension.
3 conclusions
this paper is a study of the phase diagram of the su(2) higgs gauge theory with
the higgs in the fundamental representation and fixed length. we find that to
investigate the phase structure of this system it is necessary to use much larger
lattices than the ones adopted in the past due to the large correlation length.
we present the first clear evidence of the presence of a line of first order
transition and we estimate its endpoint position to be at βc = 2.7266(16), thus
confirming an expectation based on the fradkin-shenker theorem.
we give indications that this transition is not a deconfinement transition.
we attempt an exploration of the system at finite temperature; the first
order transition then becomes a cross-over whose position moves by varying the
temperature.
4 acknowledgments
simulations were performed using the italian grid infrastructure and one of us
(c. b.) wishes to thank tommaso boccali for his technical assistance in its use.
we thank a. d'alessandro for substantial help in the early stages of this work.
figure 12: plaquette susceptibility for κ = 0.85.
a data
in this appendix we give some details of our numerical results. the notation is
as follows
• $\kappa_{pc}$ is the location in κ of the g susceptibility maximum
• $\chi_o$ is the peak value of the susceptibility of the o observable
• b is the value at minimum of the energy binder cumulant
β = 2.725
l    κpc           χg          χp             χm           χe          2/3-b
20   0.7090(2)     0.615(18)   0.005575(88)   0.1147(19)   10.85(30)   6.36(18)e-07
25   0.7089(2)     0.692(16)   0.005831(71)   0.1194(15)   11.89(26)   2.839(62)e-07
30   0.7088(1)     0.858(20)   0.006485(78)   0.1319(15)   14.51(32)   1.679(38)e-07
35   0.70873(5)    1.018(24)   0.007082(89)   0.1431(17)   16.64(37)   1.039(23)e-07
40   0.70866(5)    1.022(24)   0.00704(12)    0.1426(18)   16.85(37)   6.17(13)e-08
45   0.70886(5)    0.994(31)   0.00725(23)    0.1469(25)   16.25(47)   3.71(10)e-08
figure 13: g susceptibility for β = 2.775; the vertical line indicates the place
where on symmetric lattices the transition is observed.
β = 2.775
l    κpc            χg           χp             χm            χe          2/3-b
25   0.7049(1)      0.5740(65)   0.005459(31)   0.10807(62)   9.460(98)   2.156(22)e-07
27   0.70480(8)     0.6211(82)   0.005528(40)   0.10884(74)   10.16(12)   1.702(21)e-07
30   0.70476(5)     0.734(13)    0.005769(52)   0.1135(10)    11.82(19)   1.299(21)e-07
35   0.70466(5)     0.812(16)    0.006044(64)   0.1182(12)    12.88(24)   7.64(14)e-08
40   0.70469(3)     0.956(17)    0.006573(77)   0.1285(12)    14.66(25)   5.099(88)e-08
45   0.704675(30)   1.114(24)    0.00698(10)    0.1359(16)    16.72(34)   3.631(75)e-08
β = 2.79
l    κpc            χg           χp             χm            χe          2/3-b
25   0.70380(10)    0.5512(97)   0.005301(48)   0.10362(93)   9.05(14)    2.030(32)e-07
30   0.70360(5)     0.646(12)    0.005481(49)   0.10632(95)   10.39(17)   1.125(19)e-07
40   0.70356(3)     0.912(19)    0.006187(78)   0.1200(13)    14.01(27)   4.799(93)e-08
45   0.703525(30)   1.140(23)    0.006901(97)   0.1318(15)    17.01(33)   3.639(70)e-08
β = 2.8
l    κpc          χg           χp             χm            χe          2/3-b
25   0.7032(2)    0.683(22)    0.005080(80)   0.0978(15)    11.25(34)   2.500(75)e-07
30   0.70298(8)   0.7212(85)   0.005279(38)   0.10181(60)   11.63(13)   1.246(13)e-07
35   0.70286(5)   0.760(16)    0.005685(62)   0.1093(11)    11.64(22)   6.74(13)e-08
40   0.70282(4)   0.946(22)    0.006239(75)   0.1192(13)    14.36(31)   4.87(11)e-08
45   0.70279(3)   1.219(27)    0.007033(89)   0.1339(16)    17.75(37)   3.759(78)e-08
references
[1] c. bonati, g. cossu, a. d'alessandro, m. d'elia, a. di giacomo on
the phase diagram of the higgs su(2) model. pos(lattice2008)252
(arxiv:0901.4429).
[2] z. fodor, j. hein, k. jansen, a. jaster, i. montvay simulating the elec-
troweak phase transition in the su(2) higgs model. nucl. phys. b 439,
147 (1995) (arxiv:hep-lat/9409017).
[3] j. greensite, š. olejník vortices, symmetry breaking, and temporary confinement in su(2) gauge-higgs theory. phys. rev. d 74, 014502 (2006) (arxiv:hep-lat/0603024).
[4] w. caudy, j. greensite on the ambiguity of spontaneously broken gauge
symmetry, phys. rev. d 78, 025018 (2008) (arxiv:0712.0999 [hep-lat]).
[5] i. montvay correlations and static energies in the standard higgs model.
nucl. phys. b 269, 170 (1986).
[6] i. montvay, g. münster quantum fields on a lattice. cambridge university press (1994).
[7] m. creutz monte carlo study of quantized su(2) gauge theory. phys. rev.
d 21, 2308 (1980).
[8] a. d. kennedy, b. j. pendleton improved heatbath method for monte carlo
calculations in lattice gauge theories. phys. lett. b 156, 393 (1985).
[9] m. creutz overrelaxation and monte carlo simulation. phys. rev. d 36,
515 (1987).
[10] a. hasenfratz, k. jansen, j. jers ́
ak, b.b. lang, t. neuhaus, h. yoneyama
study of the four component φ4 model. nucl. phys. b 317, 81 (1987).
[11] e. fradkin, s. h. shenker phase diagrams of lattice gauge theories with
higgs fields. phys. rev. d 19, 3682 (1979).
[12] k. osterwalder, e. seiler gauge field theories on a lattice. ann. phys.
110, 440 (1978).
[13] r. bertle,
m. faber,
j. greensite,
ˇ
s. olejn ́
ık center dominance
in
su(2)
gauge-higgs
theory.
phys.
rev.
d
69,
014007
(2004)
(arxiv:hep-lat/0310057).
[14] m. grady reconsidering gauge-higgs continuity. phys. lett. b 626, 161
(2005) (arxiv:hep-lat/0507037).
[15] w. bock, h. g. evertz, j. jers ́
ak, d. p. landau, t. neuhaus, j. l. xu
search for critical points in the su(2) higgs model. phys. rev. d 41, 2573
(1990).
[16] d. j. e. callaway triviality pursuit: can elementary scalar particles exist?
phys. rep. 167, 241 (1988).
17
[17] c.b. lang, c. rebbi, m.virasoro the phase structure of a non-abelian
gauge higgs field system. phys. lett. b 104, 294 (1981).
[18] w. langguth, i. montvay two-state signal at the deconfinement-higgs
phase transition in the standard su(2) higgs model. phys. lett. b 165,
135 (1985).
[19] i. campos on the su(2)-higgs phase transition. nucl. phys. b 514, 336
(1998) (arxiv:hep-lat/9706020).
[20] a. m. ferrenberg, r. h. swendsen optimized monte carlo data analysis.
phys. rev. lett. 63, 1195 (1989).
[21] s. mignani, r. rosa the moving block bootstrap to asses the accuracy of
statistical estimates in ising model simulations. comp. phys. comm. 92,
203 (1995).
[22] j. lee, j. m. kosterlitz finite-size scaling and monte carlo simulations
of first-order phase transitions. phys. rev. b 43, 3265 (1991).
[23] n. b. wilding simulation studies of fluid critical behaviour. j. phys.: con-
dens. matter 9, 585 (1997) (arxiv:cond-mat/9610133).
[24] m. luscher, p. weisz locality and exponential error reduction in numerical
lattice gauge theory. jhep 09, 010 (2001) (arxiv:hep-lat/0108014).
[25] k. fredenhagen, m. marcu confinement criterion for qcd with dynam-
ical quarks. phys. rev. lett. 56, 223 (1986).
[26] k. holland, p. minkowki, m. pepe, u.-j. wiese exceptional con-
finement in g(2) gauge theory. nucl. phys. b 668,
207 (2003)
(arxiv:hep-lat/0302023).
18
|
0911.1722 | universality of brunnian ($n$-body borromean) four and five-body systems | we compute binding energies and root mean square radii for weakly bound
systems of $n=4$ and $5$ identical bosons. ground and first excited states of
an $n$-body system appear below the threshold for binding the system with $n-1$
particles. their root mean square radii approach constants in the limit of weak
binding. their probability distributions are on average located in
non-classical regions of space which result in universal structures. radii
decrease with increasing particle number. the ground states for more than five
particles are probably non-universal whereas excited states may be universal.
| introduction.
the efimov effect was predicted in [1]
as the appearance of a series of intimately related excited
three-body states when at least two scattering lengths
are infinitely large. these states can appear at all length
scales and their properties are independent of the de-
tails of the potentials.
this effect has in recent years
been studied intensively and extended to a wider group
of physical phenomena, beginning to be known as efimov
physics. in general we define efimov physics as quantum
physics where universality and scale invariance apply.
universality means independence of the shape of the in-
terparticle potential.
scale invariance means indepen-
dence of the length scale of the system. these conditions
are rather restrictive but a number of systems are known
to exist within this window [2–7]. the great advantage is
that one theory is sufficient to explain properties without
any detailed knowledge of the interactions [8]. further-
more, properties in different subfields of physics are de-
scribed as manifestations of the same underlying theory.
our physical definition of scale invariance originates
from the halo physics first realized and discussed in nu-
clei [9], but quickly observed as applicable also to small
molecules like the helium trimers [5]. this original defi-
nition of scale invariance, that the concept applies to any
length scale is obviously continuous as exemplified by nu-
clei, atoms and molecules. often the notion of scale in-
variance is used in a different mathematical sense where
the spatial extension of the structures in one given sys-
tem repeats itself in discrete steps like the factor 22.7 for
identical particles [1]. this is a result of the independence
of potential details and here precisely defining our mean-
ing with the notion of universality. as the concepts can
be defined in different ways we will use throughout the
paper this original physical meaning of scale invariance.
together, these two concepts constitute our meaning of
efimov physics which to the best of our knowledge has
been left undefined in all previous publications.
the range of validity for such a global theory is only
well described for two and three particles [2, 5, 10]. for
n = 4 two states were found in the zero-range, inherently
universal, effective field model [11]. these states also ap-
peared as universal in finite-range models in connection
with each efimov state [12]. this is in contrast to [13]
where the disentanglement of the scales used to regularize
the three and four-body zero-range faddeev-yakubovsky
equations gives rise to a dependence of the four-body
ground state on interaction details. then a four-body
scale is needed in analogy to the three-body scale ap-
pearing independently on top of the two-body proper-
ties. this apparent discrepancy between refs. [11, 12]
and [13] is not yet resolved.
recently, three experiments evidenced two four-body
bound states connected to an efimov trimer [14–16] in
accordance with the theoretical predictions of ref. [12].
in two of these experiments were also observed deviations
from universality [15, 16]. surprisingly, the greatest devi-
ation were observed for large scattering lengths (→±∞)
- exactly at the region where universality should apply
[16]. this requires a theoretical explanation where some-
thing should be added in the universal model.
very little is known for five particles with complete so-
lutions containing all correlations as dictated by the in-
teraction. with specific assumptions about only s-waves
and essentially no correlations it was concluded in [5, 17]
that ground state halos cannot exist for n > 3. these as-
sumptions are rather extreme and could be wrong or only
partly correct. however, if halos exist they have universal
structures as the n = 4 states obtained in [11, 12]. these
results can only be reconciled by wrong assumptions in
the halo discussion or by impermissible comparison be-
tween halo ground states and excited states.
it was concluded in [18] that efimov states do not exist
for n > 3 and furthermore for three particles exist only
for dimensions between 2.3 and 3.8 [19].
however, by
restricting to two-body correlations within the n-body
system, a series of (highly) excited n-body states were
found with the characteristic efimov scaling of energies
and radii [20, 21]. whether they maintain their identity
and the universal character, when more correlations are
allowed in the solutions, remains to be seen.
two limits to the universality are apparent. the first
appears for large binding energy where the resulting
small radii locate the system within the range of the po-
tentials and sensitivity to details must appear. the less
strict second limit is for excitation energies above the
threshold for binding subsystems with fewer particles.
structures with such energies are necessarily continuum
states which may, or more often may not, be classified
as universal states depending on their structures and the
final states reached after the decay.
even for four particles where universality is found
[11, 12], a number of questions are still unanswered. for
five and more particles the information becomes very
scarce. a novel study claiming universality for ground
states of a van der waals potential has appeared for par-
ticle number less than 40 [22]. the critical mass is found
as a substitute for the critical strength, but the computed
radii at threshold cannot be reliably extracted.
the purpose of the present paper is to explore the win-
dow for efimov physics. we shall investigate the bound-
aries for universality preferentially leading to general con-
clusions applicable to systems of n-particles. we first
discuss qualitative features and basic properties, then ex-
tract numerical results for 4 and 5 particles very close to
thresholds of binding, and relate to classically allowed
regions.
we only investigate brunnian (n-body bor-
romean) systems [23] where no subsystem is bound.
qualitative considerations.
for two particles the in-
finite scattering length corresponds to a bound state at
zero energy. variation of 1/a around zero produces ei-
ther a bound state of spatial extension a or a continuum
state corresponding to spatial configurations correlated
over the radius a.
for three particles the efimov effect appears, i.e. for
the same interaction, a = ±∞(1/a = 0), infinitely
many three-body bound states emerge with progressively
smaller binding and correspondingly larger radii [1]. the
ratios of two and three-body threshold strengths for sev-
eral potentials were derived in [22, 24, 25]. these thresh-
olds for binding one state can be characterized by a value
of 1/a [10, 12]. infinitely many bound three-body states
appear one by one as 1/a is changed from the three-
body threshold for binding to the threshold for two-
body binding 1/a = 0.
moving opposite by decreas-
ing the attraction these states one by one cease to be
bound. they move into the continuum and continue as
resonances [26]. for asymmetric systems with a bound
two-body subsystem the three-body bound state passes
through the particle-dimer threshold becoming a virtual
state [27]. this behavior holds even for particles with
different masses [28].
all three-body s-wave states from a certain energy and
up are universal. however, this is not an a priori obvious
conclusion but nevertheless true because two effects work
together, i.e., for 1/a = 0 the system is large for the ex-
cited efimov states and for finite 1/a the binding is weak
and the radius diverges with binding [5]. both efimov
states and weakly bound states are much larger than the
range of the interaction. the continuous connection of
these bound states and resonances is therefore also in the
universal region.
the recent results for four particles were that each
three-body state has two four-body states attached with
larger binding energy [11, 12]. these four-body states
are both described as having universal features unam-
biguously related to the corresponding three-body states
for interactions of both positive and negative scattering
lengths. detailed information of structure, correlations,
and posssible limits to universality are not available.
the one-to-one correspondence between the two four-
body states and one three-body state can perhaps be
extended such that two weakly bound n-body states ap-
pear below the ground state of the (n −1)-body state.
this seems to be rather systematic for n-body efimov
states obtained with only two-body correlations [20, 21].
if these n-body efimov states remain after extension of
the hilbert space to allow all correlations, we can ex-
pect these sequences to be continued to the thresholds for
binding by decreasing the attraction. however, ground
and lowest excited states may be outside the universal
region but the sequences may still exist. in any case the
scaling properties are different for the n-body efimov
states in [20, 21] and the universal four-body states in
[11, 12].
the basic reason for the difficulties in finding detailed
and general answers is related to the fact that the thresh-
olds for binding are moving monotonously towards less
attraction with n [24, 25].
for n = 2 weak binding
and large scattering length is synonymous. already for
n = 3 this connection is broken but the weak binding
still causes the size to diverge [5]. the indications are
that for n > 3 the size remains finite even in the limit
of zero binding.
basic properties.
we consider a system of n identical
bosons each of mass m. they are confined by a harmonic
trap of frequency ωt corresponding to a length parameter
bt² = ħ/(mωt). the particles interact pairwise through a
potential v of short range b ≪bt. we shall use the gaus-
sian shape v = v0 exp(−r2/b2). the chosen values of
v0, b, and m lead to a two-body scattering length a and
an effective range re. the solution to the schrödinger
equation is approximately found by the stochastic vari-
ational method [29]. the results are energies and root
mean square radii.
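as a concrete illustration of the two-body input (this is not the stochastic variational calculation used in the paper), the s-wave scattering length of a single gaussian pair potential can be obtained by integrating the zero-energy radial equation; the dimensionless strengths used below are arbitrary illustrative values, not the ones adopted in this work.

import numpy as np

def scattering_length(g, x_max=25.0, n_steps=200000):
    # zero-energy radial equation u'' = (m/hbar^2) v(r) u for two identical
    # bosons of mass m (reduced mass m/2), with v = v0 exp(-r^2/b^2).
    # lengths are in units of b; g = m v0 b^2 / hbar^2 (g < 0: attraction).
    x = np.linspace(1e-6, x_max, n_steps)
    dx = x[1] - x[0]
    u, du = x[0], 1.0                 # regular solution, u ~ r near the origin
    for xi in x:
        du += g * np.exp(-xi**2) * u * dx
        u += du * dx
    return x[-1] - u / du             # asymptotically u ∝ (x - a/b)

print(scattering_length(-2.0))   # subcritical attraction: a < 0, no two-body bound state
print(scattering_length(-2.7))   # close to the two-body threshold: |a/b| becomes large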
for two-body systems we know that the n′th radial
moment only diverges at threshold of binding when the
angular momentum l ≤(n + 1)/2, see [5]. the equality
sign implies a logarithmic divergence with binding b2 in contrast to the normal power law b2^(l−n/2−1/2). for the
mean square radius this implies divergence for l ≤3/2.
for an n-body system with all contributions entirely
from s-waves we can generalize these rigorous results
from two-body systems [5]. the number of degrees of
freedom is f = 3(n −1) and the generalized centrifugal
barrier is obtained with an effective angular momentum
l∗= (f −3)/2. divergent root mean square radius is
then expected when l∗≤3/2 or equivalently when f ≤6
or n ≤3. if this result holds, four-body systems should
have finite root mean square radii even at the threshold
of binding.
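the counting in the preceding paragraph can be tabulated directly; the small sketch below only restates the criterion l* ≤ 3/2 (for l* = 3/2 the divergence is merely logarithmic).

# effective angular momentum l* = (f-3)/2 with f = 3(n-1) degrees of freedom,
# and the divergence criterion l* <= 3/2 for the root mean square radius.
for n in range(2, 8):
    f = 3 * (n - 1)
    l_star = (f - 3) / 2
    status = "rms radius diverges at threshold" if l_star <= 1.5 else "rms radius stays finite"
    print(n, f, l_star, status)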
the size of the system is measured by the square root
of the mean square radius, ⟨r2/b2 ⟩, which is expressed in
units of the "natural" size of the systems, i.e. the range
of the binding potential. the dimensionless unit of the
binding energy bn of the system is b̄ = mb²|bn|/ħ².
both universality and scale invariance is therefore de-
tected by inspection of these quantities as functions of
parameters and shape of the potentials. in regions where
the curves are proportional, we conclude that the prop-
erties are universal and scale invariant. results for dif-
ferent potential shapes can be expressed in terms of a
standard potential by scaling the range. then the in-
dividual curves would fall on top of each other in the
universal regime.
classical allowed region.
universal properties can in-
tuitively only appear when the structures are outside
the potentials because otherwise any small modification
would have an effect on the wavefunction. consequently
the property would be dependent on these details in con-
flict with the assumption of universality. for two-body
systems the relative wavefunction is therefore universal
only if the largest probability is found outside the poten-
tial. this means that this classically forbidden region is
occupied. the system is extremely quantum mechanical
and very far from obeying the laws of classical physics.
to investigate the relation between universality and
the classical forbidden regions for n particles we need
to compare features of universality with occupation of
classical forbidden regions. for two-body systems this is
straightforward since the coordinate of the wavefunction
and the potential is the same. the probability of finding
the system where the energy is smaller than the potential
energy is then easy to compute as a simple spatial integral
over absolute square of the wavefunction.
for more than two particles the problem is well de-
fined but the classically forbidden regions (total energy
is smaller than the potential energy) themselves are diffi-
cult to locate. we attempt a crude estimate which at best
can only be valid on average. the energy is computed by
adding kinetic and potential energy, i.e.
−bn = n⟨t1⟩ + (1/2) n(n−1) ⟨v12⟩,   (1)
where we choose an arbitrary particle 1 to get the kinetic
part and a set of particles 1 and 2 to get the potential
energy. the classical region is defined by having positive
kinetic energy.
for a two-body gaussian potential we
then obtain an estimate of an average, rcl, for the classical
radius from
⟨v12⟩ > −2bn/[n(n−1)] = v0 exp(−rcl²/b²) .   (2)
if the distance between two particles is larger than rcl
we should be in the universal region.
this value can
then be compared to the size obtained from the average
distance between two particles, ⟨r12²⟩, computed in the
n-body system from the mean square radius [20], i.e.
⟨r²/b²⟩ = [(n−1)/(2n)] ⟨r12²/b²⟩ .   (3)
thus in the classical forbidden region rcl²/b² from eq.(2) should be smaller than ⟨r12²/b²⟩ from eq.(3).
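the comparison between the classical radius of eq.(2) and the pair distance of eq.(3) is a one-line computation; in the sketch below the five-body binding energy is the value quoted later in the paper, while the potential depth and the mean square radius are assumed, illustrative numbers (energies in units of ħ²/(mb²), lengths in units of b).

import math

def r_cl(n, b_n, v0_abs):
    # eq.(2): |<v12>| = 2 b_n / (n (n-1)) = v0_abs * exp(-r_cl^2)
    return math.sqrt(math.log(v0_abs * n * (n - 1) / (2.0 * b_n)))

def pair_rms(r2_over_b2, n):
    # eq.(3) inverted: <r12^2>/b^2 = 2n/(n-1) * <r^2>/b^2
    return math.sqrt(2.0 * n / (n - 1.0) * r2_over_b2)

n, b_n = 5, 0.0281          # five-body ground state energy quoted in the text
v0_abs = 1.0                # assumed illustrative depth, not a value from the paper
r2_over_b2 = 1.94**2        # illustrative size, of the order of the quoted radii
print(r_cl(n, b_n, v0_abs), pair_rms(r2_over_b2, n))
# here the pair distance exceeds r_cl, i.e. the probability sits on average
# outside the classical region, the situation identified with universality.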
the four-body system.
we show size versus binding
energy for n = 4 in fig.1. the variation arises by change
of the strength, v0, of the attractive gaussian. the sys-
tem is for numerical convenience confined by an external
one-body field. however, we are only interested in struc-
tures independent of that field, i.e. intrinsic properties
of the four-body system. we therefore increase the trap
size until the states are converged and located at dis-
tances much smaller than the confining walls. we now
know that this happens for four particles in contrast to
the three-body system where the size diverges when the
binding energy approaches zero.
in fig.1 we show results for two trap sizes deviating
by an order of magnitude and larger than the interaction
range b = 11.65a0 (a0 is the bohr radius) by a factor of 20
and 200, respectively. for large binding in the lower right
corner the results for the ground state is independent of
trap size. when the probability extends by about a factor
of 2 further out than b the effect of the small trap can
be seen. the tail of the distribution then extends out to
20b even though the mean square is 10 times smaller.
in the limit of very small binding energy the radius
approaches a constant independent of the binding. the
trap size has to be increased to 200b before the trap has
no influence which implies that the probability distribu-
tion is entirely within that distance when the threshold
for zero binding is reached. the converged size is about
2.7b for the ground state.
somewhat surprisingly also
the first excited state, which also is below the energy
of the three-body state, has converged to a value, 7.6b,
independent of the trap size. a shape different from a
gaussian would again lead to constants related through
specific properties of the potentials, but the ratio would
remain unchanged. this is precisely as found in two di-
mensions for three particles [30]. both states are at the
threshold on average very much smaller than both traps.
fig. 1: (color online) the mean square radius ⟨r²/b²⟩ as a function of the four- and five-body binding energies b̄, all in dimensionless units. the trap sizes for four particles are bt = 230.942 a0 (red lines, labelled 4<) and 2630.956 a0 (black lines, labelled 4>), and bt = 372.073 a0 (blue lines, labelled 5) for five particles. here a0 is the bohr radius. we show ground (solid) and excited states (long-dashed, with particle numbers tagged with an *), and the "classical" two-body radius (dotted (4) and dot-dashed (5)) translated by eq.(3).
nevertheless the smallest trap would still influence the
tail of the distribution.
in fig.1 we also show the estimated classical average
distance between pairs of particles within the n-body
system. this curve is above the ground state radius for
large binding. here the probability is mostly found in
the classical region within the potential, i.e. in the non-
universal region.
another potential shape would then
move these curves. the classical and root mean square
radius cross each other when the size is slightly larger
than the range b. this limit for universality is similar to
the halo condition for universality established in [5, 17].
at smaller binding energy the classical radius becomes
less than the size of the system and the probability is on
average located outside the potential in the non-classical,
universal region.
for the extremely small binding energies close to the
threshold our estimate of the classical radius diverges log-
arithmically with binding energy. thus at some point it
has to exceed the size of the system which we concluded
converge to a finite value for zero binding. this is sim-
ply due to the character of the gaussian potential which
approaches zero for large radii. zero energy must then
be matched by an infinite radius. however, this gaus-
sian tail is too small to obstruct the convergence of the
probability distribution to a finite size. this cannot de-
stroy universality because the tail has no influence on
the wavefunction in this region far outside the range of
the potential. for universality only the binding energy
is decisive as one can see explicitly for the two-body sys-
tem. for n-body systems the same result follows from
the asymptotic large distance behavior of the wavefunc-
tion expressed in hyperspherical coordinates [31]. thus
the classical average radius argument fails for these ex-
treme energies when the probability has settled outside
the range of the short-range potential.
five-body system.
in fig.1 we also show results for
n = 5 where convergence is reached for the trap size
of bt = 372a0. we found two pentamers with energies
−0.0281 ħ²/(mb²) and −0.0113 ħ²/(mb²), below the four-body threshold (−0.0103 ħ²/(mb²)) for infinite scattering length. the sizes
for both ground and excited states increase again with
decreasing binding energy b5 and approach finite values
when b5 = 0. these limiting radii of about 1.94b and 4.0b
are substantially smaller than corresponding values for
four particles. still the largest probability is found out-
side the potential providing the binding. this strongly
indicates that also these structures are in the universal
region. again their ratio is anticipated to be essentially
independent of potential shape. this conclusion is sup-
ported by the comparison in fig.1 to the classical ra-
dius which always is smaller than the radius of the ex-
cited state and comparable to the radius of the ground
state. as argued for four particles the largest binding for
the ground state corresponds to non-universal structure.
when the binding energy is about 0.3 in the dimension-
less units on the figure the universal structure appears.
this happens at about the same energy as for four parti-
cles. in both cases the probability is pushed outside the
potential and universality is expected for smaller binding
energies.
conclusions.
we have investigated the behavior of
brunnian systems near threshold for binding.
ground
and first excited state for four and five identical bosons
appear below the threshold for binding three and four
particles, respectively. their radii are for small binding
energies larger than the range of the potential holding
them together.
the largest part of the probability is
found in non-classical regions resulting in universal struc-
tures. for six and more particles the ground states would
be located inside the potential and thus of non-universal
structures.
excited states are larger and may still be
universal but already for seven or eight particles also the
first excited state is expected to be non-universal. the
numerical results are obtained for a two-body gaussian
potential but the features originating from wavefunctions
in non-classical regions of space are expected to be inde-
pendent of the potential shape.
acknowledgement.
we are grateful for helpful discus-
sions with m. thøgersen.
mty thanks fapesp and
cnpq for partial support.
[1] v. efimov, phys.lett. b 33, 563 (1970); nature physics
5, 533 (2009).
[2] k. riisager, a.s. jensen and p. møller, nucl. phys.
a548, 393 (1992).
[3] e. nielsen, d.v. fedorov and a.s. jensen, j. phys. b31,
4085 (1998).
[4] d.v. fedorov, a.s. jensen and k. riisager, phys. rev.
c49, 201 (1994); phys. rev. c50, 2372 (1994).
[5] a.s. jensen, k. riisager, d.v. fedorov and e. garrido,
rev. mod. phys. 76 215 (2004).
[6] m.t. yamashita, t. frederico, l.tomio and a. delfino,
phys. rev. a 68, 033406 (2003).
[7] m.t. yamashita, r.s. marques de carvalho, l. tomio
and t. frederico, phys. rev. a68, 012506 (2003).
[8] a.e.a. amorim, t. frederico, l. tomio, phys.rev. c 56
r2378 (1997).
[9] p.g. hansen and b. jonson, europhys. lett. 4, 409
(1987).
[10] e. braaten and h.-w. hammer, phys. rep. 428, 259
(2006).
[11] h.w. hammer and l. platter, eur. phys. j. a32, 113
(2007).
[12] j. von stecher, p. d'incao, c.h. greene, nature phys. 5,
417 (2009).
[13] m.t. yamashita, l. tomio, a. delfino, and t. frederico,
europhys. lett. 75, 555 (2006).
[14] f. ferlaino et al., phys. rev. lett. 102, 140401 (2009).
[15] m. zaccanti et al., nature phys. 5, 586 (2009).
[16] s. e. pollack, d. dries and r. g. hulet, science 326,
1683 (2009).
[17] k. riisager, d.v. fedorov and a.s. jensen europhys.
lett. 49, 547 (2000).
[18] r.d. amado and j. v. noble, phys. lett. b35, 25 (1971);
r.d. amado and f.c. greenwood phys. rev. d7, 2517
(1973).
[19] e. nielsen, d.v. fedorov, a.s. jensen and e. garrido,
phys. rep. 347, 373 (2001).
[20] o. sørensen, d.v. fedorov and a.s. jensen, phys. rev.
lett. 89, 173002 (2002).
[21] m. thøgersen, d.v.fedorov, and a.s. jensen, europhys.
lett. 83 30012 (2008).
[22] g.j. hanna and d. blume, phys. rev. a74, 063604
(2006).
[23] h. brunn, über verkettung, s.-b. math.-phys. kl. bayer. akad. wiss. 22, 77 (1892).
[24] j.-m. richard and s. fleck, phys. rev. lett. 73 1464
(1994).
[25] j. goy, j.-m. richard, and s. fleck, phys.rev. a52 3511
(1995).
[26] f. bringas, m.t yamashita and t. frederico, phys. rev.
a 69, 040702(r) (2004).
[27] m.t. yamashita, t. frederico, a. delfino and l. tomio,
phys. rev. a 66, 052702 (2002).
[28] m.t. yamashita, t. frederico and l. tomio, phys. rev.
lett. 99, 269201 (2007); phys. lett. b660, 339 (2008).
[29] y. suzuki and k. varga, stochastic variational approach
to quantum-mechanical few-body problems, springer
(1998).
[30] e. nielsen, d.v. fedorov, and a.s. jensen, phys. rev.
a56, 3287 (1997).
[31] s.p. merkuriev, c. gignoux, a. laverne, ann phys. 99,
30 (1976).
|
0911.1723 | the last breath of the young gigahertz-peaked spectrum radio source pks
1518+047 | we present the results from multi-frequency vlba observations from 327 mhz to
8.4 ghz of the gigahertz-peaked spectrum radio source pks 1518+047 (4c 04.51)
aimed at studying the spectral index distribution across the source. further
multi-frequency archival vla data were analysed to constrain the spectral shape
of the whole source. the pc-scale resolution provided by the vlba data allows
us to resolve the source structure in several sub-components. the analysis of
their synchrotron spectra showed that the source components have steep spectral
indices, suggesting that no supply/re-acceleration of fresh particles is
currently taking place in any region of the source. by assuming the
equipartition magnetic field of 4 mg, we found that only electrons with
$\gamma$ < 600, are still contributing to the radio spectrum, while electrons
with higher energies have been almost completely depleted. the source radiative
lifetime we derived is 2700+/-600 years. considering the best fit to the
overall spectrum, we find that the time in which the nucleus has not been
active represents almost 20% of the whole source lifetime, indicating that the
source was 2150+/-500 years old when the radio emission switched off.
| introduction
the radio emission of extragalactic powerful radio sources
is due to synchrotron radiation from relativistic particles
with a power-law energy distribution. they are produced
in the "central engine", namely the active galactic nucleus
(agn), and reaccelerated in the hot spot, that is the region
where the particles, channelled through the jets, interact
with the external medium. the energy distribution of the
plasma is described by a power-law: n(e) = noe−δ that
results in a power-law radio spectrum sν ∝ν−α, with a
moderate steepening sν ∝ν−α+0.5 caused by energy losses
in the form of radio emission. the spectral steepening
occurs at the break frequency, νb, which is related to the
age of relativistic electrons (see e.g. pacholczyk
1970).
this model, known as the continuous injection (ci) model,
requires a continuous injection of power-law distributed
electrons into a volume permeated by a constant magnetic
field. the model has been applied to the interpretation of
the total spectra of the radio sources (see e.g. murgia et al.
⋆e-mail: [email protected]
1999), assuming that the emission of the lobes (supposed
to have a constant and uniform magnetic field) dominates
over that of core, jets and/or hot spots.
it is nowadays clear that powerful radio sources represent a
small fraction of the population of the active galactic nuclei
associated with elliptical galaxies, implying that the radio
activity is a transient phase in the life of these systems. the
onset of radio activity is currently thought to be related to
merger or accretion events occurring in the host galaxy.
the evolutionary stage of each powerful radio source is
indicated by its linear size. intrinsically compact radio
sources with linear size ls ⩽1 kpc, are therefore inter-
preted as young radio objects in which the radio emission
originated a few thousand years ago (e.g. fanti et al. 1995;
snellen et al. 2000). these sources are characterised by a
rising synchrotron spectrum peaking at frequencies around
a few gigahertz and are known as gigahertz-peaked spec-
trum (gps) objects. when imaged with high resolution,
they often display a two-sided morphology which looks like
a scaled-down version of the classical and large (up to a
few mpc) double radio sources (phillips & mutel
1982).
for these sources, kinematic (polatidis & conway
2003)
and radiative (murgia 2003) studies provided ages of the
order of 103 and 104 years, supporting the idea that they
are young radio sources whose fate is probably to become
classical doubles. however, it has been claimed (alexander
2000; marecki et al.
2006) that a fraction of young radio
sources would die in an early stage, never becoming the
classical extended radio galaxies with linear sizes of a few
hundred kpc and ages of the order of 107 – 108 yr. support
for this scenario comes from a statistical study of gps
sources by gugliucci et al.
(2005): they showed that the
age distribution of the small radio sources they considered
has a peak around 500 years.
so far, only a few objects have been recognised as dy-
ing radio sources. they are difficult to find owing to
their extremely steep radio spectrum which makes them
under-represented in flux-limited catalogues. the objects 0809+404 (kunert-bajraszewska et al. 2006) and 1542+323 (kunert-bajraszewska et al. 2005) are possible
examples of young radio sources that are fading. these
sources, with an estimated age of 104 – 105 years, have been
suggested as "faders" due to the lack of active structures,
like cores and hot-spots, although their optically-thin radio
spectra do not display the typical form expected for a fader.
in this paper, we present results from multi-frequency vlba and vla data of the gps radio source pks 1518+047 (ra = 15h 21m 14s, dec = +04° 30′ 21″, j2000), identified with a quasar at redshift
z = 1.296 (stickel & kuhr
1996). it is a powerful radio
source (log p1.4 ghz (w/hz) = 28.53), with a linear size of
1.28 kpc. this source was selected from the gps sample of
stanghellini et al. (1998) on the basis of its steep spectrum
α(1.4-8.4 ghz) = 1.2, uncommon in young radio sources. the goal
of this paper is to determine by means of the analysis of the
radio spectrum and its slope in the optically thin region,
whether this source is in a phase where its radio activity
has, perhaps temporarily, switched off.
throughout this paper, we assume the following cosmology: h0 = 71 km s^−1 mpc^−1, ωm = 0.27 and ωλ = 0.73, in a flat universe. at the redshift of the target 1″ = 8.436 kpc. the spectral index is defined as s(ν) ∝ ν^−α.
2 radio data
the target source was observed on march 5, 2008 (project
code bd129) with the vlba at 0.327, 0.611 (p band), 1.4
and 1.6 ghz (l band) with a recording band width of 16
mhz at 256 mbps for 20 min in l band and 1 hr in p band.
the correlation was performed at the vlba correlator in
socorro. these observations have been complemented with
archival vlba data at 4.98 (c band) and 8.4 ghz (x
band) carried out on march 28, 2001 (project code bs085).
the data reduction was carried out with the nrao aips
package.
during the observations, the gain corrections were found to
have variations within 5% in l, c and x bands, and 10%
in p band respectively. in p band, the antennas located
in kitt peak, mauna kea and north liberty had erratic
system temperatures, probably due to local rfi, and their
freq. (ghz)   tot (mjy)^a    n (mjy)^b    s (mjy)^b
0.32          2278±38        1136±117     512±58
0.61          4050±200^c     1778±182     897±98
0.96          4720±240^d     -            -
1.4           3937±120       2468±123     1555±78
1.6           3376±101       2008±101     1311±56
3.9           1460±50^d      -            -
4.5           1164±30        -            -
4.9           1013±30        392±19       479±24
5.0           994±52^e       -            -
8.1           486±15         -            -
8.4           441±13         148±7        232±11
11.1          305±30^d       -            -
15            165±1          -            -
22            80±5           -            -
table 1. multi-frequency flux density of pks 1518+047 and of its northern and southern components. a = archival vla data; b = vlba data; c = wsrt data from stanghellini et al. (1998); d = ratan data from stanghellini et al. (1998); e = wsrt data from xiang et al. (2006).
data had to be flagged out completely. the resulting loss
of both resolution and sensitivity did not affect the source
structure and flux density.
to constrain the spectral shape of the whole source, we
analysed archival vla data at 0.317, 1.365, 1.665, 4.535,
4.985, 8.085, 8.465, 14.940 and 22.460 ghz, obtained on
august 22, 1998 (project code as637). the data reduction
was carried out with the standard procedure implemented
in the nrao aips package. uncertainties in the deter-
mination of the flux density are dominated by amplitude
calibration errors, which are found to be around 3% at all
frequencies.
vla and vlba final images were obtained after a number
of phase-only self-calibration iterations. at 4.9 and 8.4 ghz,
besides the full resolution vlba image, we also produced
a lower resolution image using the same uv-range, image
sampling and restoring beam of the 1.6 ghz, in order to
produce spectral index maps of the source. images at the
various frequencies were aligned using the task lgeom
by comparing the position of the compact and bright
components at each frequency.
flux density and deconvolved angular sizes were measured
by means of the task jmfit which performs a gaussian
fit to the source components on the image plane, or, in the
case of extended components, by tvstat which performs
an aperture integration on a selected region on the image
plane.
the parameters derived in this way are reported in table
1. about 8% and 20% of the total flux density is missing
in our vlba images at 4.9 and 8.4 ghz respectively if we
consider the vla measurements. it is likely that such a
missing flux density is from a steep-spectrum and diffuse
emission which is not sampled by the vlba data due to
the lack of appropriate short spacings.
3 results
3.1 source structure
multi-frequency vlba observations with parsec-scale res-
olution allow us to resolve the structure of pks 1518+047
into two main source components (fig. 1), separated by
135 mas (1.1 kpc), in agreement with previous images
by dallacasa et al. (1998), xiang et al. (2002), and xiang et al. (2006). both the northern and southern components have angular size of 23×15 mas².
at the highest frequencies, where we achieve the best
resolution, the northern component is resolved into two
sub-structures (labelled n1 and n2) separated by 11 mas
(90 pc) and position angle of 11◦. n1 accounts for 295 and
119 mjy at 4.9 and 8.4 ghz, respectively, with angular size
of 7.7×6.8 mas2. component n2 is fainter than n1 and it
accounts for 97 and 29 mjy at 4.9 and 8.4 ghz, respectively,
with angular size of 7.4 ×5 mas2. the southern complex
is resolved into 4 compact regions located at 3, 7 and 16
mas (25, 60 and 130 pc for s2, s3 and s4 respectively)
with respect to s1, all with almost the same position angle
of 40◦. the resolution, not adequate to properly fit the
several components of the southern lobe, does not allow us
to reliably determine the observational parameters of each
single sub-component.
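the projected linear sizes quoted above follow from the angular scale given in the introduction (1″ = 8.436 kpc at z = 1.296); the short sketch below simply repeats that arithmetic.

# convert vlba angular separations to projected linear sizes
KPC_PER_ARCSEC = 8.436          # scale quoted for z = 1.296

def mas_to_pc(theta_mas):
    return theta_mas * 1e-3 * KPC_PER_ARCSEC * 1e3   # mas -> arcsec -> kpc -> pc

for theta in (135, 11, 3, 7, 16):
    print(theta, "mas ->", round(mas_to_pc(theta)), "pc")
# 135 mas -> ~1139 pc (the 1.1 kpc separation of n and s), 11 mas -> ~93 pc,
# and 3, 7, 16 mas -> ~25, 59, 135 pc, close to the values quoted in the text.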
the spectral index in both the northern and southern
complexes, computed considering the integrated component
flux density, is steep, with α(1.4-8.4 ghz) = 1.5 ± 0.1 and 1.0 ± 0.1,
respectively. errors on the spectral indices were calculated
following the error propagation theory. the analysis of
the spectral index distribution in the southern complex
(fig. 2) shows a steepening of the spectral index going
from s1 inwards to s4, suggesting that s1 is likely the last
place where relativistic electrons coming from the source
core were re-accelerated. in the northern lobe, the spectral
index between 1.6 and 4.9 ghz is almost constant across
the component, showing a small gradient going from n1
towards n2, while in the spectral index maps at higher
frequencies it steepens from n1 to n2 (fig. 2). the steep
spectral indices showed by the northern and southern
components suggest that no current particle acceleration
across the source is taking place, indicating that active
regions, like conventional jet-knots and hot spots, are no
longer present.
at 327 and 611 mhz, i.e. in the optically-thick part of
the spectra, the low spatial resolution does not allow us
to separate the individual sub-components. however, their
spectral indices, obtained considering the integrated flux
density, are α(327-611 mhz) ≈ −0.6±0.2 and −0.9±0.2 for the northern
and southern component respectively, that is much flatter
than the value expected in the presence of "classical" syn-
chrotron self-absorption from a homogeneous component,
indicating that what we see is the superposition of the
spectra of the various sub-components, each characterised
by its own spectral peak occurring in a range of frequencies
and then causing the broadening of the overall spectrum
(i.e. the northern and southern components are far from be-
ing homogeneously filled by magnetised relativistic plasma).
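as a rough cross-check (not part of the paper's analysis), the two-point spectral indices can be recomputed from the flux densities of table 1; small differences from the quoted values presumably reflect the exact frequency pairs and error treatment used by the authors.

import math

def alpha(s1, s2, nu1, nu2):
    # two-point spectral index for s(nu) ∝ nu^(-alpha)
    return math.log(s1 / s2) / math.log(nu2 / nu1)

print(alpha(2468, 148, 1.4, 8.4))      # northern component: ~1.6 (quoted 1.5 ± 0.1)
print(alpha(1555, 232, 1.4, 8.4))      # southern component: ~1.1 (quoted 1.0 ± 0.1)
print(alpha(1136, 1778, 0.327, 0.611)) # n, optically thick: ~ -0.7 (quoted -0.6 ± 0.2)
print(alpha(512, 897, 0.327, 0.611))   # s, optically thick: ~ -0.9 (quoted -0.9 ± 0.2)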
3.2 the spectral shape
to understand the physical processes taking place in this
source we fit the optically-thin part of the overall spectrum,
as well as the spectra of the northern and southern com-
ponents, assuming two different models. the first model
assumes that fresh relativistic particles are continuously
injected in the source (ci model), while in the second model
the continuous supply of particles is over and the radio
source is already in the relic phase (ci off model).
the ci synchrotron model is described by three param-
eters:
i) αinj, the injection spectral index;
ii) νb, the break frequency;
iii) norm, the flux normalization.
as the injection of fresh particles stops (ci off model),
a second break, νb2, appears at high frequencies, and beyond
that the spectrum cuts off exponentially. this second break
is related to the first according to:
νb2 = νb (ts/toff)²   (1)
where ts is the total source's age and toff is the time elapsed since the injection switched off (see e.g. komissarov & gubanov 1994; slee et al. 2001; parma et al. 2007). indeed, compared to the basic ci
model, the ci off model is characterized by one more free
parameter:
i) αinj, the injection spectral index;
ii) νb, the lowest break frequency;
iii) norm, the flux normalization;
iv) toff/ts, i.e. the relic to total source age ratio.
the spectral shapes of both models cannot be described
by analytic equations and must be computed numerically
(see e.g. murgia et al. 1999 and slee et al. 2001 for further
details).
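for orientation only (this is not part of the fitting), eq. (1) can be evaluated with the best-fit numbers quoted later for the overall spectrum (νb ≈ 2.4 ghz, toff/ts ≈ 0.2): the second, high-frequency break would then lie well above the observed band.

def second_break(nu_b_ghz, toff_over_ts):
    # eq.(1): nu_b2 = nu_b * (t_s / t_off)^2
    return nu_b_ghz * (1.0 / toff_over_ts) ** 2

print(second_break(2.4, 0.2))   # ~60 ghz, far above the highest observed frequency (22 ghz)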
in the study of the overall spectrum, in addition to
the archival vla data analysed in this paper, we con-
sider also ratan observations at 0.96, 3.9, and 11.1 ghz
(stanghellini et al.
1998), and wsrt observations at 5
ghz (xiang et al. 2006), to have a better frequency sam-
pling. due to the presence of another radio source located
at about 1 arcminute from the target (fig. 3), we do not
consider observations with resolution worse than ∼1′.
during the fitting procedure, particular care was taken
in choosing the most accurate injection spectral index. the
peculiar shape of the radio spectrum of this source does not
allow us to directly derive the injection spectral index from
the optically-thin emission below the break, because it falls
below the peak frequency where the spectrum is absorbed.
for this reason, we fit the spectrum assuming various in-
jection spectral indices, choosing the one that provides the
best reduced chi-square (fig. 4c).
the best fit we obtain with the ci model implies a very
steep injection spectral index αinj = 1.1, that is reflected
in an uncommon electron energy distribution δ = 3.2, (fig.
4a). on the other hand, the ci off model provides a more
common injection spectral index αinj = 0.7 (fig. 4b). fur-
thermore, the comparison between the reduced chi-square of
figure 1. vlba images at 0.327, 0.611, 1.6, 4.9 and 8.4 ghz of pks 1518+047; the components n, n1, n2, s and s1-s4 are labelled on the panels. on the images we show the observing frequency, the peak flux density and the first contour intensity (f.c.), which is 3 times the 1σ noise level measured on the image plane. contours increase by a factor of 2. the beam is plotted in the bottom left corner.
the different models (fig. 4c) shows that the ci off model
is more accurate in fitting the data. in the context of the ci
off model, we derive a break frequency νbr = 2.4 ghz, and
toff/ts = 0.2. a similar result is found for the spectrum of
the southern component. as in the overall spectrum, both
ci and ci off models well reproduce the spectral shape
(fig. 5), but the former implies an uncommonly steep injec-
tion spectral index.
a different result is found in the analysis of the northern
lobe. in this case, we find that the ci off model with
αinj = 0.7 well reproduces the spectral shape, providing a
break frequency νbr = 0.8 ghz and toff/ts ∼0.27 (fig.
6b), while the ci model provides a worse chi-square even
considering a steep injection spectral index (fig. 6a).
the optically-thick part of the spectra is well modelled
by an absorbed spectrum with αthick = −1.2 ± 0.1, that
is different from the canonical -2.5 expected in the pres-
ence of synchrotron self-absorption from a homogeneous
component. this result, together with the source structure
resolved in several sub-components, indicates that in the
optically-thick regime the observed spectra are the super-
position of the spectra of many components that cannot be
resolved due to resolution limitation.
the best fit parameters obtained with the ci and ci off
models, and assuming synchrotron self-absorption (ssa)
are reported in table 2.
3.3 physical parameters
physical parameters for the radio source components were
computed assuming equipartition condition and using stan-
dard formulae (pacholczyk 1970). we considered particles
with energy between γmin =100 and γmax =600. the high
energy cut-offcorresponds to the value for which the break
frequency in the observer frame occurs at 2.4 ghz, as ob-
tained from the best fit to the model (section 3.2). pro-
ton and electron energy densities are assumed to be equal,
and the spectral index is α = 0.7 (see section 3.2). we as-
sume that the volume v of the emitting regions is a prolate
spheroid:
v = (π/6) a b² φ
figure 2. vlba spectral index images of pks 1518+047 between 1.6 and 4.9 ghz (panel a), between 4.9 and 8.4 ghz (panel b), and between 1.6 and 8.4 ghz (panel c). on the images we show the observing frequency of the contours, the peak flux density and the first contour intensity (f.c.), which is 3 times the 1σ noise level measured on the image plane. contours increase by a factor of 2. the beam is plotted in the bottom left corner. the grey scale is shown by the wedge at the top of each image.
where a and b are the spheroid major and minor axes,
and φ is the filling factor. in the case of the entire radio
source, we consider a =140 mas, b =25 mas, and φ = 1.
with these assumptions, we obtain an equipartition mag-
netic field heq = 4 mg, and a minimum energy density
umin = 1.4×10^−6 erg/cm³. in the case of components n and
s we assume a = 23 mas, b = 15 mas and φ = 1, and we infer
a magnetic field of 7 mg, and a minimum energy density
of 5×10^−6 erg/cm³, in agreement with the values derived
for the entire source structure.
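purely as an illustration of the adopted geometry (the equipartition formulae themselves follow pacholczyk 1970 and are not reproduced here), the assumed emitting volumes can be evaluated from the quoted angular axes and the 8.436 kpc/arcsec scale.

import math

CM_PER_KPC = 3.086e21

def mas_to_cm(theta_mas):
    return theta_mas * 1e-3 * 8.436 * CM_PER_KPC   # mas -> arcsec -> kpc -> cm

def spheroid_volume(a_mas, b_mas, phi=1.0):
    # v = (pi/6) a b^2 phi for a prolate spheroid, as assumed in the text
    return math.pi / 6.0 * mas_to_cm(a_mas) * mas_to_cm(b_mas) ** 2 * phi

print(spheroid_volume(140, 25))   # whole source: ~8e62 cm^3
print(spheroid_volume(23, 15))    # n or s component: ~5e61 cm^3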
4 discussion
in general the shape of the synchrotron spectrum of active
radio sources is the result of the interplay between freshly
injected relativistic particles and energy losses that cause a
depletion of high-energy particles which results in a steep-
ening of the spectrum at high frequencies. the discovery of
a gps radio source, considered to be a young object where
the radio emission started a few thousand years ago, but dis-
playing a steep spectral index (α(1.6-8.4 ghz) > 1.0) in all its compo-
nents is somewhat surprising. indeed, the strong steepening
found across the components of pks 1518+047 is incompat-
ible with the moderate high-frequency steepening predicted
by a continuous injection of particles, suggesting that no in-
jection/acceleration of fresh relativistic particles is currently
occurring in any region of the source. furthermore, when the
injection of fresh particles is over, the strong adiabatic losses
cause a fast shift of the spectral turnover towards lower fre-
quencies and a decrement of the peak flux density. therefore,
detections of dying radio sources with a spectral peak still
occurring in the ghz regime are extremely rare. a possible
figure 3. vla image at 1.6 ghz of pks 1518+047. another source, 1518+047b, is located at ∼1 arcminute in the south-east direction from the target.
explanation may arise from the presence of a dense ambient
medium enshrouding the radio source which may confine
the relativistic particles reducing the adiabatic losses. this
is probably the case of fading radio sources found in galaxy
clusters where the intracluster medium (icm) is likely limit-
ing adiabatic cooling (slee et al. 2001). however, in the case
of pks 1518+047 the source is surrounded by the interstel-
lar medium (ism) of the host galaxy, but its confinement
cannot be related to neutral hydrogen, because observations
searching for hi absorption did not provide evidence of a
comp.   model    αinj   νp (mhz)          νb (mhz)            toff/ts   χ²red
n       ci       1.1    1300 +80/−80      870 +630/−440       -         12.0
n       ci off   0.7    1620 +380/−280    800 +1230/−410      0.27      5.4
s       ci       1.1    1400 +880/−190    <35000              -         0.7
s       ci off   0.7    1720 +880/−540    1260 +15870/−1110   >0.01     0.57
tot     ci       1.1    1050 +46/−40      7130 +1740/−1470    -         5.6
tot     ci off   0.7    1000 +110/−74     2380 +1130/−950     0.21      0.46
table 2. best fit parameters obtained with the ci and ci off models + ssa for the source pks 1518+047 and for its northern and southern components.
dense environment (gupta et al. 2006).
from the analysis of the optically-thin spectrum of
pks 1518+047, we find that the steepening may be ex-
plained assuming that the supply of relativistic plasma is
still taking place, but the energy distribution of the injected
particles is uncommonly steep. another explanation is that
the steep spectral shape is due to the absence of freshly in-
jected/reaccelerated particles and the energy losses are driv-
ing the spectrum evolution. support to this scenario comes
from the best fit to the spectrum of the northern compo-
nent, where the continuous injection model fails in repro-
ducing the spectral shape, even assuming that relativistic
particles are injected with a steep energy distribution. this
steep spectrum can be best reproduced by a synchrotron
model in which no particle supply is taking place. further-
more, the time spent by the source in its "fader" phase is
about 20% of the whole source age.
we estimate the break energy γb of the electron spectrum
and the electron radiative time by:
γb ∼ 487 νb^1/2 h^−1/2 (1 + z)^1/2   (2)
and
ts = 5.03 × 10^4 h^−3/2 νb^−1/2 (1 + z)^−1/2  (yr)   (3)
where h is in mg and νb in ghz.
if in eqs. 2 and 3 we consider the equipartition magnetic
field and νb derived from the fits, we find that the energy
of the electrons responsible for the break should be γ ∼
400 – 600 and the source radiative lifetime is ts = 2700 ±
600 yr. as a consequence, electrons with γ > 600 (i.e. with
shorter radiative lifetime) have already depleted their energy
and they contribute only to the high-frequency tail of the
spectrum.
if we consider the toff/ts ∼0.2, as derived from the fit
to the overall spectrum, we find that toff = 550 ± 100
years, and this represents the time elapsed since the last
supply/acceleration of relativistic particles. these values
indicate that the radio source was 2150±500 years old when
the radio emission switched off.
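the central values quoted above follow from eqs. (2) and (3) with the equipartition field and the fitted break frequency; the sketch below reproduces that arithmetic (the quoted uncertainties would require propagating the errors on h and νb, which is not attempted here).

# eqs.(2)-(3) with h in milligauss and nu_b in ghz
h, nu_b, z = 4.0, 2.4, 1.296

gamma_b = 487.0 * nu_b**0.5 * h**-0.5 * (1.0 + z)**0.5
t_s = 5.03e4 * h**-1.5 * nu_b**-0.5 * (1.0 + z)**-0.5   # radiative age in years

t_off = 0.2 * t_s          # using the fitted t_off/t_s ~ 0.2
t_active = t_s - t_off     # age at which the radio emission switched off

print(round(gamma_b), round(t_s), round(t_off), round(t_active))
# ~570, ~2680 yr, ~540 yr, ~2140 yr, consistent with gamma ~ 400-600,
# t_s = 2700 ± 600 yr, t_off = 550 ± 100 yr and 2150 ± 500 yr in the text.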
the small toff derived suggests that adiabatic losses have not had enough time to shift the spectral peak of pks 1518+047
far from the ghz regime. in the presence of adiabatic
expansion, and considering a magnetic field frozen in
the plasma, the spectral peak is shifted towards lower
frequencies:
νp,1 = νp,0 (t0/t1)^4   (4)
where νp,0 and νp,1 are the peak frequency at the time
t0 and t1 respectively (orienti & dallacasa
2008). as
the time elapsed after the switch off increases, the peak
moves to lower and lower frequencies, making the source
unrecognisable as a young gps. for example, when toff
will represent more than 40% of the total source lifetime,
the peak should be below 300 mhz, so it will be difficult to
identify as a dying young radio source. future instruments
like lofar may unveil a population of such fading radio
sources.
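the 300 mhz figure can be recovered, at least roughly, from eq. (4) under the assumption (ours, for illustration) that t0 and t1 are total source ages, that the active phase stays fixed at about 2150 yr while toff keeps growing, and that the present peak is νp ≈ 1 ghz as suggested by the fits.

# adiabatic drift of the spectral peak, eq.(4), under the assumptions above
nu_p0, t_active, t0 = 1000.0, 2150.0, 2700.0   # mhz, yr, yr (present age)

for f_off in (0.2, 0.3, 0.4, 0.5):             # t_off as a fraction of the total age
    t1 = t_active / (1.0 - f_off)
    print(f_off, round(nu_p0 * (t0 / t1) ** 4), "mhz")
# at f_off ~ 0.4 the peak has dropped to ~320 mhz, of the order of the
# ~300 mhz quoted in the text.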
5 conclusions
we presented results from multi-frequency vlba and vla
observations of the gps radio source pks 1518+047. the
analysis of the spectral index distribution across the whole
source structure showed that all the source components
have very steep optically-thin synchrotron spectra. the
radio spectra are well explained by energy losses of rel-
ativistic particles after the cessation of the injection of
new plasma in the radio lobes. this result, together with
the lack of the source core and active non-steep spectrum
components like hot spots and knots in the jets, suggests
that no injection/acceleration of fresh particles is currently
occurring in any region of the source. for this reason,
pks 1518+047 can be considered a fading young radio
source, in which the radio emission switched off shortly
after its onset. when the supply of energy is switched off,
given the high magnetic fields in the plasma, the spectral
turnover moves rapidly towards low frequencies, making
the source undetectable at the frequencies commonly used
for radio surveys. however, in pks 1518+047 the time
elapsed since the last particle acceleration is of the order
of a few hundred years, suggesting that pks 1518+047 still
has a gps spectrum because adiabatic losses have not had
enough time to affect the source spectrum, shifting the peak
far from the ghz regime. if the interruption of the radio
activity is a temporary phase and the radio emission from
the central engine will re-start soon, it is possible that the
source will appear again as a gps without the severe steep-
ening at high frequencies. if this does not happen, the fate
of this radio source is to emit at lower and lower frequencies,
figure 4. the best fit to the overall spectrum of pks 1518+047 using the ci (left, panel a) and the ci off (centre, panel b) models, and the reduced chi-square versus the injection spectral index (right, panel c). error bars are rather small, and often fall within the symbols.
figure 5. the best fit to the spectrum of the southern component of pks 1518+047 using the ci (top, panel a) and the ci off (bottom, panel b) models.
figure 6. the best fit to the spectrum of the northern component of pks 1518+047 using the ci (top, panel a) and the ci off (bottom, panel b) models.
until it disappears at frequencies well below the mhz regime.
acknowledgement
we thank the anonymous referee for carefully reading the
manuscript and valuable suggestions. the vlba is oper-
ated by the us national radio astronomy observatory
which is a facility of the national science foundation
operated under a cooperative agreement by associated
universities, inc. this work has made use of the nasa/ipac
extragalactic database (ned), which is operated by the jet
propulsion laboratory, california institute of technology,
under contract with the national aeronautics and space
administration.
references
alexander, a., 2000, mnras, 319, 8
dallacasa, d., bondi, m., alef, w., mantovani, f., 1998,
a&as, 129, 219
fanti, c., fanti, r., dallacasa, d., schilizzi, r. t., spencer,
r. e., stanghellini, c., 1995, a&a, 302, 317
gugliucci, n.e., taylor, g.b., peck, a.b., giroletti, m.,
2005, apj,622, 136
gupta, n., salter, c.j., saikia, d.j., ghosh, t., jeyaku-
mar, s., 2006, mnras, 373, 972
komissarov, s.s., gubanov, a.g., 1994, a&a, 285, 27
kunert-bajraszewska, m., marecki, a., thomasson, p.,
spencer, r. e., 2005, a&a, 440, 93
kunert-bajraszewska, m., marecki, a., thomasson, p.,
2006, a&a, 450, 945
marecki, a., kunert-bajraszewska, m., spencer, r.e.,
2006, a&a, 449, 985
murgia, m., fanti, c., fanti, r., gregorini, l., klein, u.,
mack, k.-h., vigotti, m., 1999, a&a, 345, 769
murgia, m., 2003, pasa, 20, 19
orienti, m., dallacasa, d., 2008, a&a, 477, 807
pacholczyk, a.g., 1970, radio astrophysics, (san fran-
cisco: freeman & co.)
parma, p., murgia, m., de ruiter, h.r., fanti, r., mack,
k.-h., govoni, f., 2007, a&a, 470, 875
phillips, r.b., mutel, r.l., 1982, a&a, 106, 21
polatidis, a.g., & conway, j.e., 2003, pasa, 20, 69
slee, o.b., roy, a.l., murgia, m., andernach, h., ehle,
m., 2001, aj, 122, 1172
snellen, i. a. g., schilizzi, r. t., miley, g. k., de bruyn, a. g., bremer, m. n., röttgering, h. j. a., 2000, mnras, 319, 445
stanghellini, c., o'dea, c.p., dallacasa, d., baum, s.a.,
fanti, r., fanti, c., 1998, a&as, 131, 303
stickel, m., kuhr, h., 1996, a&as, 115, 11
xiang, l., stanghellini, c., dallacasa, d., haiyan, z., 2002,
a&a, 385, 768
xiang, l., reynolds, c., strom, r.g., dallacasa, d., 2006,
a&a, 454, 729
|
0911.1724 | on the approach to thermal equilibrium of macroscopic quantum systems | we consider an isolated, macroscopic quantum system. let h be a
micro-canonical "energy shell," i.e., a subspace of the system's hilbert space
spanned by the (finitely) many energy eigenstates with energies between e and e
+ delta e. the thermal equilibrium macro-state at energy e corresponds to a
subspace h_{eq} of h such that dim h_{eq}/dim h is close to 1. we say that a
system with state vector psi in h is in thermal equilibrium if psi is "close"
to h_{eq}. we show that for "typical" hamiltonians with given eigenvalues, all
initial state vectors psi_0 evolve in such a way that psi_t is in thermal
equilibrium for most times t. this result is closely related to von neumann's
quantum ergodic theorem of 1929.
| introduction
if a hot brick is brought in contact with a cold brick, and the two bricks are otherwise
isolated, then energy will flow from the hot to the cold brick until their temperatures
become equal, i.e., the system equilibrates. since the bricks ultimately consist of elec-
trons and nuclei, they form a quantum system with a huge number (> 1020) of particles;
this is an example of an isolated, macroscopic quantum system.
from a microscopic point of view the state of the system at time t is described by a
vector
ψ(t) = e−ihtψ(0)
(1)
in the system's hilbert space or a density matrix
ρ(t) = e−ihtρ(0)eiht ,
(2)
where h is the hamiltonian of the isolated system and we have set ħ= 1. in this
paper we prove a theorem asserting that for a sufficiently large quantum system with a
"typical" hamiltonian and an arbitrary initial state ψ(0), the system's state ψ(t) spends
most of the time, in the long run, in thermal equilibrium. (of course, before the system
even reaches thermal equilibrium there could be a waiting time longer than the present
age of the universe.) this implies the same behavior for an arbitrary ρ(0).
this behavior of isolated, macroscopic quantum systems is an instance of a phe-
nomenon we call normal typicality [5], a version of which is expressed in von neumann's
quantum ergodic theorem [17]. however, our result falls outside the scope of von neu-
mann's theorem, because of the technical assumptions made in that theorem. our result
also differs from the related results in [4, 15, 16, 12, 13, 10], which use different notions
of when a system is in an equilibrium state. in particular they do not regard the ther-
mal equilibrium of an isolated macroscopic system as corresponding to its wave function
being close to a subspace heq of hilbert space. see section 6 for further discussion.
the rest of this paper is organized as follows. in the remainder of section 1, we
define more precisely what we mean by thermal equilibrium. in section 2 we outline
the problem and our result, theorem 1. in section 3 we prove the key estimate for the
proof of theorem 1. in section 4 we describe examples of exceptional hamiltonians,
illustrating how a system can fail to ever approach thermal equilibrium. in section 5
we compare our result to the situation with classical systems. in section 6 we discuss
related works.
1.1
the equilibrium subspace
let htotal be the hilbert space of a macroscopic system in a box λ, and let h be its
hamiltonian. let {φα} be an orthonormal basis of htotal consisting of eigenvectors of h
with eigenvalues eα. consider an energy interval [e, e + δe], where δe is small on the
macroscopic scale but large enough for the interval [e, e + δe] to contain very many
eigenvalues. let h ⊆htotal be the corresponding subspace,
h = span{ φα : eα ∈[e, e + δe] } . (3)
a subspace such as h is often called a micro-canonical energy shell. let d be the
dimension of h , i.e., the number of energy levels, including multiplicities, between e
and e + δe. in the following we consider only quantum states ψ that lie in h , i.e., of
the form
ψ = Σα cα φα (4)
with cα ̸= 0 only for α such that eα ∈[e, e + δe].
according to the analysis of von neumann [17, 18] and others (cf. [6]), the macro-
scopic (coarse-grained) observables in a macroscopic quantum system can be naturally
"rounded" to form a set of commuting operators,
{mi}i=1,...,k . (5)
the operators are defined on htotal, but since we can take them to include (and thus
commute with) a coarse-grained hamiltonian, we can (and will) take them to commute
with the projection to h , and thus to map h to itself. we write ν = (m1, . . . , mk) for
a list of eigenvalues mi of the restriction of mi to h , and hν for the joint eigenspace.
such a set of operators generates an orthogonal decomposition of the hilbert space
h = ⊕ν hν , (6)
where each hν, called a macro-space, represents a macro-state of the system.
the
dimension of hν is denoted by dν; note that Σν dν = d. if any hν has dimension 0,
we remove it from the family {hν}. in practice, dν ≫1, since we are considering a
macroscopic system with coarse-grained observables.
it can be shown in many cases, and is expected to be true generally, that among
the macro-spaces hν there is a particular macro-space heq, the one corresponding to
thermal equilibrium, such that
deq/d ≈1 ,
(7)
indeed with the difference 1 −deq/d exponentially small in the number of particles.1
this implies, in particular, that each of the macro-observables mi is "nearly constant"
on the energy shell h in the sense that one of its eigenvalues has multiplicity at least
deq ≈d. we say that a system with quantum state ψ (with ∥ψ∥= 1) is in thermal
equilibrium if ψ is very close (in the hilbert space norm) to heq, i.e., if
⟨ψ|peq|ψ⟩≈1 ,
(8)
where peq is the projection operator to heq. the corresponding relation for density
matrices is
tr(peqρ) ≈1 .
(9)
1this dominance of the equilibrium state can be expressed in terms of the (boltzmann) entropy
sν of a macroscopic system in the macro-state ν, be it the equilibrium state or some other (see [9]),
defined as sν = kb log dν, where kb is the boltzmann constant: deq/d being close to 1 just expresses
the fact that the entropy of the equilibrium state is close to the micro-canonical entropy smc, i.e.,
seq = kb log deq ≈kb log d = smc.
the condition (8) implies that a quantum measurement of the macroscopic observable mi
on a system with wave function ψ will yield, with probability close to 1, the "equilibrium"
value of mi. likewise, a joint measurement of m1, . . . , mk will yield, with probability
close to 1, their equilibrium values.
let μ(dψ) be the uniform measure on the unit sphere in h [14, 19]. it follows from
(7) that most ψ relative to μ are in thermal equilibrium. indeed,
∫ ⟨ψ|peq|ψ⟩ μ(dψ) = (1/d) tr peq = deq/d ≈ 1 . (10)
since the quantity ⟨ψ|peq|ψ⟩is bounded from above by 1, most ψ must satisfy (8).2
1.2
examples of equilibrium subspaces
to illustrate the decomposition into macro-states, we describe two examples. as exam-
ple 1, consider a system composed of two identical subsystems designated 1 and 2, e.g.,
the bricks mentioned in the beginning of this paper, with hilbert space htotal = h1⊗h2.
the hamiltonian of the total system is
h = h1 + h2 + λv ,
(11)
where h1 and h2 are the hamiltonians of subsystems 1 and 2 respectively, and λv is
a small interaction between the two subsystems. we assume that h1, h2, and h are
positive operators. let h be spanned by the eigenfunctions of h with energies between
e and e + δe.
in this example, we consider just a single macro-observable m, which is a projected
and coarse-grained version of h1/e, i.e., of the fraction of the energy that is contained
in subsystem 1 alone. we cannot take m to simply equal h1/e because h1 is defined
on htotal, not h , and will generically not map h to itself, while we would like m to be
an operator on h . to obtain an operator on h , let p be the projection htotal →h
and set
h′1 = p h1 p (12)
(more precisely, h′1 is p h1 restricted to h ). note that h′1 is a positive operator, but might have eigenvalues greater than e. now define³
m = f(h′1/e) (13)
with the coarse-graining function
f(x) = 0 if x < 0.01, 0.02 if x ∈[0.01, 0.03), 0.04 if x ∈[0.03, 0.05), etc. (14)
²it should in fact be true for a large class of observables a on h that, for most ψ relative to μ, ⟨ψ|a|ψ⟩ ≈ tr(ρmc a), where ρmc is the micro-canonical density matrix, i.e., 1/d times the identity on h . this is relevant to the various results on thermalization described in section 6.
³recall that the application of a function f to a self-adjoint matrix a is defined to be f(a) = Σ f(aα)|φα⟩⟨φα| if the spectral decomposition of a reads a = Σ aα|φα⟩⟨φα|.
the hν are the eigenspaces of m; clearly, ⊕νhν = h . if, as we assume, λv is small,
then we expect h0.5 = heq to have the overwhelming majority of dimensions. in a
thorough treatment we would need to prove this claim, as well as that h′
1 is not too
different from h1, but we do not give such a treatment here.
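a small numerical sketch may make the construction of the macro-observable m more concrete. the code below is purely illustrative and not from the paper: the subsystem dimensions, the energy window and the coupling strength are assumed values, and the coarse-graining follows (14) with bins of width 0.02.

import numpy as np

rng = np.random.default_rng(0)

def coarse_grain(x, width=0.02):
    # eq. (14): map x to the nearest multiple of `width` (0 for x < 0.01, 0.02 on [0.01, 0.03), ...)
    return width * np.floor((x + width / 2) / width)

# toy two-subsystem model (illustrative sizes and coupling, not a realistic macroscopic system)
d1 = d2 = 6
h1 = np.diag(np.sort(rng.uniform(0.0, 1.0, d1)))
h2 = np.diag(np.sort(rng.uniform(0.0, 1.0, d2)))
v = rng.normal(size=(d1 * d2, d1 * d2)); v = (v + v.T) / 2
h = np.kron(h1, np.eye(d2)) + np.kron(np.eye(d1), h2) + 1e-3 * v

e, de = 0.8, 0.4                                     # assumed energy window [e, e + de]
evals, evecs = np.linalg.eigh(h)
shell = evecs[:, (evals >= e) & (evals <= e + de)]   # orthonormal basis of the energy shell

h1_full = np.kron(h1, np.eye(d2))
h1_prime = shell.T @ h1_full @ shell                 # h1' = p h1 p restricted to the shell, eq. (12)
m_eigenvalues = np.linalg.eigvalsh(h1_prime) / e
print("macro-state labels present on the shell:", np.unique(coarse_grain(m_eigenvalues)))

the operator m = f(h1'/e) of (13) is then diagonal in the eigenbasis of h1', and the subspaces hν of (6) are its eigenspaces; for a genuinely macroscopic system one of them is expected to dominate, which a toy model of this size is of course far too small to exhibit.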
as example 2, consider n bosons (fermions) in a box λ = [0, l]3 ⊆r3; i.e., htotal
consists of the square-integrable (anti-)symmetric functions on λn. let the hamiltonian
be
h = −(1/2m) Σi=1..n ∇²i + Σi<j v(|qi −qj|) , (15)
where the laplacian ∇²i has dirichlet boundary conditions, v(r) is a given pair potential,
and qi is the triple of position coordinates of the i-th particle. let h again be spanned
by the eigenfunctions with energies between e and e + δe.
in this example, we consider again a single macro-observable m, based on the oper-
ator nleft for the number of particles in the left half of the box λ:
nleft ψ(q1, . . . , qn) = #{ i : qi ∈[0, l/2] × [0, l]² } ψ(q1, . . . , qn) . (16)
note that the spectrum of nleft consists of the n + 1 eigenvalues 0, 1, 2, . . ., n. to obtain an operator on h , let p be the projection htotal →h and set n′left = p nleft p. note that the spectrum of n′left is still contained in [0, n]. now define m = f(n′left/n) with the coarse-graining function (14). we expect that for large n, the eigenspace with eigenvalue 0.5, heq = h0.5, has the overwhelming majority of dimensions (and that n′left ≈nleft).
2
formulation of problem and results
our goal is to show that, for typical macroscopic quantum systems,
⟨ψ(t)|peq|ψ(t)⟩≈1
for most t .
(17)
to see this, we compute the time average of ⟨ψ(t)|peq|ψ(t)⟩. we denote the time average
of a time-dependent quantity f(t) by a bar,
f̄(t) = lim_{T→∞} (1/T) ∫₀^T f(t) dt . (18)
since ⟨ψ(t)|peq|ψ(t)⟩is always a real number between 0 and 1, it follows that if its time
average is close to 1 then it must be close to 1 most of the time. moreover, for μ-most
ψ(0), where μ is the uniform measure on the unit sphere of h , ψ(t) is in thermal
equilibrium most of the time. this result follows from fubini's theorem (which implies
that taking the μ-average commutes with taking the time average) and the unitary
invariance of μ:
∫ ⟨ψ(t)|peq|ψ(t)⟩ μ(dψ) = ∫ ⟨ψ|e^{iht} peq e^{−iht}|ψ⟩ μ(dψ) = ∫ ⟨ψ|peq|ψ⟩ μ(dψ) ≈ 1 . (19)
that is, the ensemble average of the time average is near 1, so, for μ-most ψ(0), the
time average must be near 1, which implies our claim above. so the interesting question
is about the behavior of exceptional ψ(0), e.g., of systems which are not in thermal
equilibrium at t = 0. do they ever go to thermal equilibrium?
as we will show, for many hamiltonians the statement (17) holds in fact for all
ψ(0) ∈h . from now on, let h denote the restriction of the hamiltonian to h , and let
φ1, . . . , φd be an orthonormal basis of h consisting of eigenvectors of the hamiltonian
h with eigenvalues e1, . . . , ed. if
ψ(0) = Σα=1..d cα φα , cα = ⟨φα|ψ(0)⟩ (20)
then
ψ(t) = Σα=1..d e^{−i eα t} cα φα . (21)
thus,
⟨ψ(t)|peq|ψ(t)⟩ = Σα,β=1..d e^{i(eα−eβ)t} c*α cβ ⟨φα|peq|φβ⟩ . (22)
if h is non-degenerate (which is the generic case) then eα −eβ vanishes only for α = β,
so the time averaged exponential is δαβ, and
⟨ψ(t)|peq|ψ(t)⟩ = Σα=1..d |cα|² ⟨φα|peq|φα⟩ . (23)
thus, for the system to be in thermal equilibrium most of the time it is necessary and
sufficient that the right hand side of (23) is close to 1.
now if an energy eigenstate φα is not itself in thermal equilibrium then when ψ(0) =
φα the system is never in thermal equilibrium, since this state is stationary. conversely,
if we have that
⟨φα|peq|φα⟩≈1
for all α ,
(24)
then the system will be in thermal equilibrium most of the time for all ψ(0). this follows
directly from (23) since the right hand side of (23) is an average of the ⟨φα|peq|φα⟩. we
show below that (24) is true of "most" hamiltonians, and thus, for "most" hamiltonians
it is the case that every wave function spends most of the time in thermal equilibrium.
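the mechanism behind (23) and (24) is easy to probe numerically. the following sketch is illustrative only (the dimensions, the choice of heq as the span of the first deq reference basis vectors, and the out-of-equilibrium initial state are assumptions, not taken from the paper); it draws a haar-random eigenbasis and evaluates the right hand side of (23):

import numpy as np

rng = np.random.default_rng(1)
d, d_eq = 200, 190                         # illustrative dimensions with d_eq/d close to 1

# haar-random eigenbasis: qr decomposition of a complex gaussian matrix, with phases fixed
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
q, r = np.linalg.qr(z)
q = q * (np.diag(r) / np.abs(np.diag(r)))  # columns of q = energy eigenvectors phi_alpha

# h_eq = span of the first d_eq reference basis vectors; diagonal elements of p_eq, cf. (24)
diag = np.sum(np.abs(q[:d_eq, :])**2, axis=0)
print("min_alpha <phi_alpha|p_eq|phi_alpha> =", diag.min())

# an initial state orthogonal to h_eq; its time average via the right hand side of (23)
psi0 = np.zeros(d, dtype=complex); psi0[-1] = 1.0
c = q.conj().T @ psi0                      # c_alpha = <phi_alpha|psi(0)>, eq. (20)
print("time-averaged <psi(t)|p_eq|psi(t)> =", float(np.sum(np.abs(c)**2 * diag)))

for these sizes both printed numbers typically come out near d_eq/d = 0.95 (the minimum slightly below it), even though the initial state satisfies ⟨ψ(0)|peq|ψ(0)⟩ = 0; this is the behaviour that lemma 1 and theorem 1 below quantify.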
2.1
main result
the measure of "most" we use is the following: for any given d (distinct) energy
values e1, . . . , ed, we consider the uniform distribution μham over all hamiltonians
with these eigenvalues.
choosing h at random with distribution μham is equivalent
to choosing the eigenbasis {φα} according to the uniform distribution μonb over all
orthonormal bases of h , and setting h = Σα eα|φα⟩⟨φα|. the measure μonb can be
defined as follows: choosing a random basis according to μonb amounts to choosing φ1
according to the uniform distribution over the unit sphere in h , then φ2 according to
the uniform distribution over the unit sphere in the orthogonal complement of φ1, etc.
alternatively, μonb can be defined in terms of the haar measure μu(d) on the group
u(d) of unitary d × d matrices: any given orthonormal basis {χα} of h defines a
one-to-one correspondence between u(d) and the set of all orthonormal bases of h ,
associating with the matrix u = (uαβ) ∈u(d) the basis
φα = Σβ=1..d uαβ χβ ; (25)
the image of the haar measure under this correspondence is in fact independent of the
choice of {χβ} (because of the invariance of the haar measure under right multiplication),
and is μonb.
put differently, the ensemble μham of hamiltonians can be obtained by starting from
a given hamiltonian h0 on h (with distinct eigenvalues e1, . . ., ed) and setting
h = uh0u−1
(26)
with u a random unitary matrix chosen according to the haar measure. note that,
while considering different possible hamiltonians h in h , we keep heq fixed, although
in practice it would often be reasonable to select heq in a way that depends on h (as
we did in the examples of section 1.2).
for our purpose it is convenient to choose the basis {χα} in such a way that the first
deq basis vectors lie in heq and the other ones are orthogonal to heq. then, we have
that
⟨φα|peq|φα⟩ = Σβ=1..deq |uαβ|² (27)
with uαβ the unitary matrix satisfying (25).
we will show first, in lemma 1, that for every 0 < ε < 1, if d is sufficiently large
and deq/d sufficiently close to 1, then most orthonormal bases {φα} are such that
⟨φα|peq|φα⟩> 1 −ε
for all α .
(28)
this inequality is a precise version of (24). how close to 1 should deq/d be? the fact
that the average of ⟨ψ|peq|ψ⟩over all wave functions ψ on the unit sphere of h equals
deq/d, mentioned already in (10), implies that (28) cannot be true of most orthonormal
bases if deq/d ≤1 −ε. to have enough wiggling room, we require that
deq/d > 1 −ε/2 . (29)
we will show then, in theorem 1, that for every (arbitrarily small) 0 < η < 1 and for
sufficiently large d, most h are such that for every initial wave function ψ(0) ∈h with
∥ψ(0)∥= 1, the system will spend most of the time in thermal equilibrium with accuracy
1 −η, where we say that a system with wave function ψ is in thermal equilibrium with
accuracy 1 −η if
⟨ψ|peq|ψ⟩> 1 −η .
(30)
this inequality is a precise version of (8). in order to have no more exceptions in time
than the fraction 0 < δ′ < 1, we need to set the ε in (28) and (29) equal to ηδ′.
lemma 1. let μu(d) denote the haar measure on u(d), and
sε := { u ∈u(d) : ∀α : Σβ=1..deq |uαβ|² > 1 −ε } . (31)
then for all 0 < ε < 1 and 0 < δ < 1, there exists d0 = d0(ε, δ) > 0 such that
if d > d0 and deq > (1 −ε/2)d then μu(d)(sε) ≥1 −δ .
(32)
the proof of lemma 1 is given in section 3. it also shows that d0 can for example
be chosen to be
d0(ε, δ) = max{ 10³ ε⁻² log(4/δ), 10⁶ ε⁻⁴ } . (33)
from (27), we obtain:
theorem 1. for all η, δ, δ′ ∈(0, 1), all integers d > d0(ηδ′, δ) and all integers deq >
(1−ηδ′/2)d the following is true: let h be a hilbert space of dimension d; let heq be a
subspace of dimension deq; let peq denote the projection to heq; let e1, . . ., ed be pairwise
distinct but otherwise arbitrary; choose a hamiltonian at random with eigenvalues eα
and an eigenbasis φα that is uniformly distributed. then, with probability at least 1 −δ,
every initial quantum state will spend (1 −δ′)-most of the time in thermal equilibrium
as defined in (30), i.e.,
lim inf_{T→∞} (1/T) |{ 0 < t < T : ⟨ψ(t)|peq|ψ(t)⟩ > 1 −η }| ≥ 1 −δ′ , (34)
where |m| denotes the size (lebesgue measure) of the set m.
proof. it follows from lemma 1 that, under the hypotheses of theorem 1, the time average satisfies
⟨ψ(t)|peq|ψ(t)⟩ ≥ 1 −ηδ′
with probability at least 1 −δ. thus, since ηδ′ ≥ 1 −⟨ψ(t)|peq|ψ(t)⟩ ≥ η δ̃, where δ̃ is the lim sup_{T→∞} of the fraction of the time in (0, T) for which ⟨ψ(t)|peq|ψ(t)⟩ ≤ 1 −η, it follows that δ̃ ≤ δ′.
2.2
remarks
normal typicality. theorem 1 can be strengthened; with the same sense of "most" as
in theorem 1, we have that for most hamiltonians and for all ψ(0)
⟨ψ(t)|pν|ψ(t)⟩ ≈ dim hν / dim h for all ν (35)
for most t. for ν = eq, this implies that ⟨ψ(t)|peq|ψ(t)⟩≈1. this stronger statement
we have called normal typicality [5]. a version of normal typicality was proven by von
neumann [17]. however, because of the technical assumptions he made, von neumann's
result, while much more difficult, does not quite cover the simple result of this paper.
typicality and probability. when we express that something is true for most h or
most ψ relative to some normalized measure μ, it is often convenient to use the language
of probability theory and speak of a random h or ψ chosen with distribution μ. however,
by this we do not mean to imply that the actual h or ψ in a concrete physical situation
is random, nor that one would obtain, in repetitions of the experiment or in a class of
similar experiments, different h's or ψ's whose empirical distribution is close to μ. that
would be a misinterpretation of the measure μ, one that suggests the question whether
perhaps the actual distribution in reality could be non-uniform. this question misses
the point, as there need not be any actual distribution in reality. rather, theorem 1
means that the set of "bad" hamiltonians has very small measure μham.
consequences for example 2. from lemma 1 it follows for example 2 that typical
hamiltonians of the form (26) with h0 given by the right hand side of (15) are such
that all eigenfunctions are close to h0.5; this fact in turn strongly suggests (though
we have not proved this) that the eigenfunctions are essentially concentrated on those
configurations that have approximately 50% of the particles in the left half and 50% in
the right half of the box.
equilibrium statistical mechanics.
theorem 1 implies that, for typical h, every
ψ(0) ∈h is such that for most t,
⟨ψ(t)|mi|ψ(t)⟩≈tr(ρmcmi) ,
(36)
where ρmc is the standard micro-canonical density matrix (i.e., 1/d times the projection
htotal →h ), for all macro-observables mi as described in section 1.1. this justifies
replacing |ψ(t)⟩⟨ψ(t)| by ρmc as far as macro-observables in equilibrium are concerned.
however, this does not, by itself, justify the use of ρmc for observables a not among the
{mi}. for example, consider a microscopic observable a that is not "nearly constant"
on the energy shell h . then, standard equilibrium statistical mechanics tells us to use
ρmc for the expected value of a in equilibrium. we believe that this is in fact correct for
most such observables, but it is not covered by theorem 1. results concerning many
such observables are described in section 6. these results, according to which, in an
appropriate sense,
⟨ψ(t)|a|ψ(t)⟩≈tr(ρmca)
(37)
for suitable a and ψ(0), are valid only in quantum mechanics. the justification of the
broad use of ρmc in classical statistical mechanics relies on rather different sorts of results
requiring different kinds of considerations.
3
proof of lemma 1
proof. let us write p for the haar measure μu(d), and let
p := p( ∩α=1..d { Σβ=1..deq |uαβ|² > 1 −ε } ) . (38)
observe that
p = 1 −p( ∪α=1..d { Σβ=1..deq |uαβ|² ≤ 1 −ε } ) (39)
≥ 1 −d maxα p{ Σβ=1..deq |uαβ|² ≤ 1 −ε } . (40)
since u = (uαβ) is a random unitary matrix with haar distribution, its α-th column is a random unit vector u⃗ := (uαβ)β whose distribution is uniform over the unit sphere of c^d (i.e., the distribution is, up to a normalizing constant, the surface area measure). therefore, the probability in the last line does not, in fact, depend on α, and so the step of taking the maximum over α can be omitted.
a random unit vector such as u⃗ can be thought of as arising from a random gaussian vector g⃗ by normalization: let gβ for β = 1, . . . , d be independent complex gaussian random variables with mean 0 and variance e|gβ|² = 1/d; i.e., re gβ and im gβ are independent real gaussian random variables with mean 0 and variance 1/2d. then the distribution of g⃗ = (g1, . . . , gd) is symmetric under rotations from u(d), and thus
g⃗/∥g⃗∥ = u⃗ in distribution, with ∥g⃗∥² = Σβ=1..d |gβ|² . (41)
we thus have that
p ≥ 1 −d p{ Σβ=1..deq |gβ|²/∥g⃗∥² ≤ 1 −ε } . (42)
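the gaussian representation (41) also makes the concentration behind (42) easy to see empirically; the sketch below is illustrative (dimensions, ε and the number of trials are assumed values):

import numpy as np

rng = np.random.default_rng(2)
d, d_eq, eps, trials = 1000, 990, 0.1, 2000

# complex gaussian vectors with e|g_beta|^2 = 1/d, as in (41)
g = (rng.normal(size=(trials, d)) + 1j * rng.normal(size=(trials, d))) / np.sqrt(2 * d)
frac = np.sum(np.abs(g[:, :d_eq])**2, axis=1) / np.sum(np.abs(g)**2, axis=1)

print("mean fraction of weight on h_eq:", frac.mean())          # concentrates near d_eq/d
print("empirical p[fraction <= 1 - eps]:", np.mean(frac <= 1 - eps))

with d_eq/d = 0.99 and ε = 0.1 the empirical probability in the last line is essentially zero; this is the event whose probability is bounded by the large-deviation estimate carried out in the remainder of the proof.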
to estimate the probability on the right hand side of (42), we introduce three different
events:
a(η′) := { | ∥g⃗∥² −1 | < η′ } , (43)
b(η′′) := { (1 −η′′) deq/d < Σβ=1..deq |gβ|² < (1 + η′′) deq/d } , (44)
c(η′′′) := { (1 −η′′′) deq/d < Σβ=1..deq |gβ|²/∥g⃗∥² < (1 + η′′′) deq/d } . (45)
let us now assume that
deq/d > 1 −ε/2 . (46)
we then have that
(1 −ε/2) deq/d > 1 −ε + ε²/4 > 1 −ε , (47)
so that
c(ε/2) ⊆ { (1 −ε/2) deq/d < Σβ=1..deq |gβ|²/∥g⃗∥² } ⊆ { 1 −ε < Σβ=1..deq |gβ|²/∥g⃗∥² } (48)
and thus
p ≥ 1 −d p(cc(ε/2)) , (49)
where the superscript c means complement. our goal is to find a good upper bound for
p(cc(ε/2)).
if the event a(η′) occurs for 0 < η′ < 1/2 then
1 −η′ < 1/∥g⃗∥² < 1 + 2η′ , (50)
and consequently, if a(η′) ∩b(η′′) occurs then
(deq/d)(1 −η′)(1 −η′′) < Σβ=1..deq |gβ|²/∥g⃗∥² < (deq/d)(1 + 2η′)(1 + η′′) . (51)
it is now easy to see that a(η′) ∩b(η′′) ⊆ c(2η′ + η′′ + 2η′η′′), so if we choose η′ = η′′ = ε/8 we obtain that
a(ε/8) ∩b(ε/8) ⊆ c(3ε/8 + ε²/32) ⊆ c(ε/2) for 0 < ε < 1 . (52)
we thus have the following upper bound:
p(cc(ε/2)) ≤p(ac(ε/8)) + p(bc(ε/8)) .
(53)
to find an estimate of p(a(ε/8)) and p(b(ε/8)) we use the large deviations prin-
ciple. it is convenient to use a slightly stronger version of this principle than usual, see
section 2.2.1 of [3], which states that for a sequence of n i.i.d. random variables xi,
p{ | Σi=1..n xi/n −e(x1) | > δ } ≤ 2 e^{−n i(e(x1)+δ)} (54)
where i(x) is the rate function [3] associated with the distribution of the xi, defined to be
i(x) = sup_{θ>0} ( θx −log e(e^{θxi}) ) . (55)
in our case, where xi will be the square of a standard normal random variable, the rate
function is
i(x) = (1/2)(x −1 −log x) ∀x > 1 , (56)
as a simple calculation shows.
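for the reader's convenience, here is the calculation sketched; it uses only the standard moment generating function e(e^{θx}) = (1 −2θ)^{−1/2}, θ < 1/2, of the square of a standard normal variable:

$$
  i(x) = \sup_{0<\theta<1/2}\Bigl(\theta x + \tfrac{1}{2}\log(1-2\theta)\Bigr),
  \qquad
  x - \frac{1}{1-2\theta} = 0 \;\Longrightarrow\; \theta^{*} = \frac{x-1}{2x}\in\bigl(0,\tfrac12\bigr)\ \text{for } x>1,
$$
$$
  i(x) = \theta^{*}x + \tfrac{1}{2}\log\bigl(1-2\theta^{*}\bigr)
       = \frac{x-1}{2} - \tfrac{1}{2}\log x
       = \tfrac{1}{2}\,(x-1-\log x).
$$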
to estimate p(a(ε/8)), set
n = 2d , xβ = 2d (re gβ)² , xd+β = 2d (im gβ)² for β = 1, . . . , d . (57)
thus, for i = 1, . . . , 2d, the xi are i.i.d. variables with mean exi = 2d e(re gi)2 = 1;
we thus obtain
p(ac(ε/8)) = p{ | ∥g⃗∥² −1 | > ε/8 } (58)
= p{ | Σβ=1..d |gβ|² −1 | > ε/8 } (59)
= p{ | Σi=1..2d xi/(2d) −1 | > ε/8 } (60)
≤ 2 e^{−2d i(1+ε/8)} (61)
= 2 e^{−d(ε/8 −log(1+ε/8))} (62)
≤ 2 exp(−d ε²/192) . (63)
in the last step we have used that log(1 + x) ≤x −x2/3 for 0 < x < 1/2.
we use a completely analogous argument for b, setting
n = 2deq , xβ = 2d (re gβ)² , xd+β = 2d (im gβ)² , for β = 1, . . . , deq , (64)
and obtain that
p(bc(ε/8)) = p{ | Σβ=1..deq |gβ|² −deq/d | / (deq/d) > ε/8 } (65)
= p{ | Σi=1..2deq xi/(2deq) −1 | > ε/8 } (66)
≤ 2 exp(−deq ε²/192) . (67)
from (53), (63), and (67) it follows that
p(cc(ε/2)) ≤ 2 exp(−deq ε²/192) + 2 exp(−d ε²/192) ≤ 4 exp(−d ε²/384) , (68)
where we have used that deq > d/2. therefore, by (49),
p ≥ 1 −4d exp(−d ε²/384) . (69)
the last term converges to 0 as d →∞, so there exists a d0 > 0 such that for all
d > d0,
p ≥1 −δ ,
(70)
which is what we wanted to show. in order to check this for the d0 specified in (33)
right after lemma 1, note that the desired relation
4d exp(−d ε²/384) ≤ δ (71)
is equivalent to
d ( ε²/384 −(log d)/d ) ≥ log(4/δ) . (72)
thus, it suffices that d > 10³ ε⁻² log(4/δ) and
(log d)/d < 10⁻³ ε² . (73)
since log d < √d for all positive numbers d, condition (73) will be satisfied if √d > 10³ ε⁻², i.e., if d > 10⁶ ε⁻⁴.
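the sufficiency of the explicit choice (33) is also easy to verify numerically; the following check is illustrative (the (ε, δ) pairs are arbitrary) and works in log space to avoid underflow:

import numpy as np

def d0(eps, delta):
    # eq. (33)
    return max(1e3 * eps**-2 * np.log(4 / delta), 1e6 * eps**-4)

def bound_holds(d, eps, delta):
    # eq. (71) in logarithmic form: log(4d) - d*eps^2/384 <= log(delta)
    return np.log(4 * d) - d * eps**2 / 384 <= np.log(delta)

for eps, delta in [(0.5, 0.2), (0.1, 0.05), (0.01, 0.01)]:
    d = d0(eps, delta)
    print(f"eps={eps}, delta={delta}, d0={d:.3g}, bound holds: {bound_holds(d, eps, delta)}")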
4
examples of systems that do not approach thermal equilibrium
we shall now present examples of atypical behavior, namely examples of "bad" hamilto-
nians, i.e., hamiltonians for which not all wave functions approach thermal equilibrium
(or, equivalently, for which (24) is not satisfied). according to theorem 1, bad hamil-
tonians form a very small subset of the set of all hamiltonians. of course, to establish
that (24) holds for a particular hamiltonian can be a formidable challenge. moreover,
the small subset might include all standard many-body hamiltonians (e.g., all those
which are a sum of kinetic and potential energies). but there is no a priori reason to
believe that this should be the case.
the first example consists of two non-interacting subsystems. this can be expressed
in the framework provided by example 1 in section 1.2 with the hamiltonian h =
h1 + h2 + λv by setting λ = 0. let {φ1_i} be an orthonormal basis of h1 consisting of eigenvectors of h1 with eigenvalues e1_i, and {φ2_j} one of h2 consisting of eigenvectors of h2 with eigenvalues e2_j. clearly, for λ = 0 not every wave function will approach thermal equilibrium. after all, in this case, the φ1_i ⊗φ2_j form an eigenbasis of h, while
h = span{ φ1_i ⊗φ2_j : e1_i + e2_j ∈[e, e + δe] } (74)
and
heq = span{ φ1_i ⊗φ2_j : e1_i ∈[0.49e, 0.51e) and e1_i + e2_j ∈[e, e + δe] } . (75)
thus, any φ1_i ⊗φ2_j such that e1_i + e2_j ∈[e, e + δe] but, say, e1_i < 0.49e, will be an example of an element of h that is orthogonal to heq and, as it is an eigenfunction of h, forever remains orthogonal to heq.
as another example, we conjecture that some wave functions will fail to approach
thermal equilibrium also when λ is nonzero but sufficiently small. we prove this now
for a slightly simplified setting, corresponding to the following modification of example
1 of section 1.2. for the usual energy interval [e, e + δe], let h be, independently of
λ, given by (74), and, instead of h1 + h2 + λv , let h be given by
h = h(λ) = p(h1 + h2 + λv )p ,
(76)
where p is the projection to h . then h defines a time evolution on h that depends
on λ. (note that h is still an "energy shell" for all sufficiently small λ, as all nonzero
eigenvalues of h(λ) are still contained in an interval just slightly larger than [e, e+δe],
and the corresponding eigenvectors lie in h .) let heq for λ ̸= 0 also be given by (75).
again, choose one particular φ1_i and one particular φ2_j (independently of λ) so that e1_i + e2_j ∈[e, e + δe] and e1_i < 0.49e, and consider as the initial state of the system again
ψ(t = 0) = φ1_i ⊗φ2_j , (77)
which evolves to
ψ(λ, t) = e^{−ih(λ)t} φ1_i ⊗φ2_j . (78)
suppose for simplicity that h(λ = 0) = h1 + h2 is non-degenerate.⁴ then, according to standard results of perturbation theory [7], also h(λ), regarded as an operator on h , is non-degenerate for all λ ∈(−λ0, λ0) for some λ0 > 0; moreover, its eigenvalues e(λ) depend continuously (even analytically) on λ, and so do the eigenspaces. in particular, it is possible to choose for every λ ∈(−λ0, λ0) a normalized eigenstate φ(λ) ∈h of h(λ) with eigenvalue e(λ) in such a way that φ(λ) and e(λ) depend continuously on λ, and φ(λ = 0) = φ1_i ⊗φ2_j.
⁴since this requires that no eigenvalue difference of h1, e1_i −e1_i′, coincides with an eigenvalue difference of h2, e2_j −e2_j′, we need to relax our earlier assumption that system 1 and system 2 be identical; so, let them be almost identical, with slightly different eigenvalues, and let h1 and h2 each be non-degenerate.
we are now ready to show that for sufficiently small λ > 0,
⟨ψ(λ, t)|peq|ψ(λ, t)⟩≈0
(79)
for all t; that is, ψ(λ, t) is nearly orthogonal to heq for all t, and thus is never in thermal
equilibrium. to see this, note first that since φ(0) ≈φ(λ) for sufficiently small λ, and
since e−ih(λ)t is unitary, also
e−ih(λ)tφ(0) ≈e−ih(λ)tφ(λ)
(80)
(with error independent of t). since the right hand side equals
e−ie(λ)tφ(λ) ≈e−ie(λ)tφ(0) ,
(81)
we have that
⟨e−ih(λ)tφ(0)|peq|e−ih(λ)tφ(0)⟩≈⟨φ(0)|peq|φ(0)⟩= 0 .
(82)
this proves (79) with an error bound independent of t that tends to 0 as λ →0.
another example of "bad" hamiltonians is provided by the phenomenon of anderson
localization (see in particular [1, 11]): certain physically relevant hamiltonians possess
some eigenfunctions φα that have a spatial energy density function that is macroscopi-
cally non-uniform whereas wave functions in heq should have macroscopically uniform
energy density over the entire available volume. thus, some eigenfunctions are not close
to heq, violating (24).
5
comparison with classical mechanics
in classical mechanics, one would expect as well that a macroscopic system spends
most of the time in the long run in thermal equilibrium. let us define what thermal
equilibrium means in classical mechanics. (we defined it for quantum systems in (8).)
we denote a point in phase space by x = (q1, . . . , qn, p1, . . . , pn).
instead of the
orthogonal decomposition of h into subspaces hν we consider a partition of an energy
shell γ in phase space, γ = {x : e ≤h(x) ≤e + δe}, into regions γν corresponding
to different macro-states ν, i.e., if the micro-state x of the system is in γν then the
macro-state of the system is ν. it has been shown [8] for realistic systems with large
n that one of the regions γν, corresponding to the macro-state of thermal equilibrium
and denoted γeq, is such that, in terms of the (uniform or liouville) phase space volume
measure μ on γ,
μ(γeq)/μ(γ) ≈ 1 . (83)
though the subspaces hν play a role roughly analogous to the regions γν, a basic
difference between the classical and the quantum cases is that while every classical
phase point in γ belongs to one and only one γν, and thus is in one macro-state, a
quantum state ψ need not lie in any one hν, but can be a non-trivial superposition of
vectors in different macro-states. (indeed, almost all ψ do not lie in any one hν. that
is why we defined being in thermal equilibrium in terms of ψ lying in a neighborhood
of heq, rather than lying in heq itself.)
the time evolution of the micro-state x is given by the solution of the hamiltonian
equations of motion, which sends x (at time 0) to xt (at time t), t ∈r. we expect that
for realistic systems with a sufficiently large number n of constituents and for every
macro-state ν, most initial phase points x ∈γν will be such that xt spends most of
the time in the set γeq. this statement follows if the system is ergodic,5 but in fact is
much weaker than ergodicity. theorem 1 is parallel to this statement in that it implies,
for typical hamiltonians, that initial states (here, ψ(0)) out of thermal equilibrium will
spend most of the time in thermal equilibrium; it is different in that it applies, for typical
hamiltonians, to all, rather than most, initial states ψ(0).
6
comparison with the literature
von neumann [17] proved, as his "quantum ergodic theorem," a precise version of
normal typicality (defined in section 2.2); his proof requires much more effort, and
more refined methods, than our proof of theorem 1. however, his theorem assumes
that the dimension dν of each macro-space hν is much smaller than the full dimension
d, and thus does not apply to the situation considered in this paper, in which one of the
macro-spaces, heq, has the majority of dimensions. the reason von neumann treated
the more difficult case of small dν but left out the easier and particularly interesting
case of the thermal equilibrium macrostate is that he had in mind a notion of thermal
equilibrium different from ours. he thought of a thermal equilibrium wave function ψ,
not as one in (or close to) a particular hν, but as one with ∥pνψ∥2 ≈dν/d for every
ν, i.e., one for which |ψ⟩⟨ψ| ≈ρmc in a suitable coarse-grained sense. because of this
different focus, he did not consider the situation presented here. we also note that von
neumann's quantum ergodic theorem makes an assumption on h that we do not need in
our theorem 1; this assumption, known as a "no resonances" [6, 16] or "non-degenerate
energy gaps" [10] condition, asserts that
eα −eβ ≠ eα′ −eβ′ unless either α = α′, β = β′, or α = β, α′ = β′ . (84)
⁵a classical system is ergodic if and only if the time evolved micro-state xt spends, in the long run, a fraction of time in each (measurable) set b ⊆γ that is equal to μ(b)/μ(γ) for μ-almost all x.
the schnirelman theorem [2] states that, in the semi-classical limit and under suitable hypotheses, the wigner distribution corresponding to an eigenstate φα becomes the micro-canonical measure. that is, the φα have a property resembling thermal equilib-
rium, similar to our condition (24) expressing that all eigenstates are in thermal equilib-
rium. srednicki [15] observed other thermal equilibrium properties in energy eigenstates
of example systems, a phenomenon he referred to as "eigenstate thermalization."
the results of [16, 12, 10] also concern conditions under which a quantum system
will spend most of the time in "thermal equilibrium." for the sake of comparison, their
results, as well as ours, can be described in a unified way as follows. let us say that a
system with initial wave function ψ(0) equilibrates relative to a class a of observables
if for most times τ,
⟨ψ(τ)|a|ψ(τ)⟩ ≈ tr( |ψ(t)⟩⟨ψ(t)| a ) for all a ∈a . (85)
we then say that the system thermalizes relative to a if it equilibrates and, moreover,
tr( |ψ(t)⟩⟨ψ(t)| a ) ≈ tr( ρmc a ) for all a ∈a , (86)
with ρmc the micro-canonical density matrix (in our notation, 1/d times the projection
p to h ). with these definitions, the results of [16, 12, 10] can be formulated by saying
that, under suitable hypotheses on h and ψ(0) and for large enough d, a system will
equilibrate, or even thermalize, relative to a suitable class a .
our result is also of this form. we have just one operator in a , namely peq. we
establish thermalization for arbitrary ψ(0) assuming h is non-degenerate and satisfies
⟨φα|peq|φα⟩≈1 for all α, which (we show) is typically true.
von neumann's quantum ergodic theorem [17] establishes thermalization for a fam-
ily a of commuting observables; a is the algebra generated by {m1, . . . , mk} in the
notation of section 1.1. he assumes that the dimensions of the joint eigenspaces hν
are not too small and not too large; that h obeys (84); he makes an assumption about
the relation between h and the subspaces hν that he shows is typically true; and he
admits arbitrary ψ(0). see [5] for further discussion. rigol, dunjko, and olshanii [13]
numerically simulated an example system and concluded that it thermalizes relative to
a certain class a consisting of commuting observables.
tasaki [16] as well as linden, popescu, short, and winter [10] consider a system
coupled to a heat bath, htotal = hsys ⊗hbath, and take a to contain all operators
of the form asys ⊗1bath. tasaki considers a rather special class of hamiltonians and
establishes thermalization assuming that
maxα |⟨φα|ψ(0)⟩|² ≪ 1 , (87)
a condition that implies that many eigenstates of h contribute to ψ(0) appreciably and that can (more or less) equivalently be rewritten as
Σα |⟨φα|ψ(0)⟩|⁴ ≪ 1 . (88)
under the assumption (88) on ψ(0), linden et al. establish equilibration for h satisfying
(84). they also establish a result in the direction of thermalization under the additional
hypothesis that the dimension of the energy shell of the bath is much greater than
dim hsys.
reimann's mathematical result [12] can be described in the above scheme as follows.
let a be the set of all observables a with (possibly degenerate) eigenvalues between
0 and 1 such that the absolute difference between any two eigenvalues is at least (say)
10−1000. he establishes equilibration for h satisfying (84), assuming that ψ(0) satisfies
(88).
acknowledgements. we thank matthias birkner (lmu münchen), peter reimann (biele-
feld), anthony short (cambridge), avraham soffer (rutgers), and eugene speer (rut-
gers) for helpful discussions. s. goldstein was supported in part by national science
foundation [grant dms-0504504]. n. zanghì is supported in part by istituto nazionale
di fisica nucleare. j. l. lebowitz and c. mastrodonato are supported in part by nsf
[grant dmr 08-02120] and by afosr [grant af-fa 09550-07].
references
[1] p. w. anderson: absence of diffusion in certain random lattices. phys. rev. 109,
1492–1505, 1958.
[2] y. colin de verdière: ergodicité et fonctions propres du laplacien. commun. math. phys. 102, 497–502, 1985.
[3] a. dembo, o. zeitouni: large deviations techniques and applications, 2nd ed.
springer, new york, 1998.
[4] j. m. deutsch: quantum statistical mechanics in a closed system. phys. rev. a
43, 2046–2049, 1991.
[5] s. goldstein, j. l. lebowitz, c. mastrodonato, r. tumulka, n. zanghì: normal typicality and von neumann's quantum ergodic theorem. http://arxiv.org/abs/0907.0108, 2009.
[6] r. jancel: foundations of classical and quantum statistical mechanics. oxford: pergamon, 1969. translation by w. e. jones of les fondements de la mécanique statistique classique et quantique. paris: gauthier-villars, 1963.
[7] t. kato: a short introduction to perturbation theory for linear operators. new york:
springer-verlag, 1982.
[8] o. e. lanford: entropy and equilibrium states in classical statistical mechanics.
in a. lenard (ed.), lecture notes in physics 2, 1–113, springer-verlag, 1973.
[9] j. l. lebowitz: from time-symmetric microscopic dynamics to time-asymmetric
macroscopic behavior: an overview. in g. gallavotti , w. l. reiter, j. yngva-
son (editors), boltzmann's legacy, 63–88. european mathematical society (2007).
http://arxiv.org/abs/0709.0724
[10] n. linden, s. popescu, a. j. short, a. winter: quantum mechanical evolution towards thermal equilibrium. phys. rev. e 79, 061103, 2009. http://arxiv.org/abs/0812.2385
[11] v. oganesyan, d. a. huse: localization of interacting fermions at high tempera-
ture. phys. rev. b 75, 155111, 2007.
[12] p. reimann: foundation of statistical mechanics under experimentally realistic
conditions. phys. rev. lett. 101, 190403, 2008.
[13] m. rigol, v. dunjko, m. olshanii: thermalization and its mechanism for generic
isolated quantum systems. nature 452, 854–858, 2008.
[14] e. schrödinger: statistical thermodynamics. second edition, cambridge university press, 1952.
[15] m. srednicki: chaos and quantum thermalization. phys. rev. e 50, 888, 1994.
[16] h. tasaki: from quantum dynamics to the canonical distribution: general pic-
ture and a rigorous example. phys. rev. lett. 80, 1373-1376, 1998.
[17] j. von neumann: beweis des ergodensatzes und des h-theorems in der neuen
mechanik. z. physik 57, 30, 1929.
[18] j. von neumann:
mathematical foundation of quantum mechanics. princeton
university press, 1955. translation of mathematische grundlagen der quanten-
mechanik. springer-verlag, berlin, 1932.
[19] j. d. walecka: fundamentals of statistical mechanics. manuscript and notes of
felix bloch. stanford university press, stanford, ca, 1989.
|
0911.1726 | higher-order phase transitions with line-tension effect | the behavior of energy minimizers at the boundary of the domain is of great
importance in the van de waals-cahn-hilliard theory for fluid-fluid phase
transitions, since it describes the effect of the container walls on the
configuration of the liquid. this problem, also known as the liquid-drop
problem, was studied by modica in [21], and in a different form by alberti,
bouchitte, and seppecher in [2] for a first-order perturbation model. this work
shows that using a second-order perturbation cahn-hilliard-type model, the
boundary layer is intrinsically connected with the transition layer in the
interior of the domain. precisely, considering the energies $$
\mathcal{f}_{\varepsilon}(u) := \varepsilon^{3} \int_{\omega} |d^{2}u|^{2} +
\frac{1}{\varepsilon} \int_{\omega} w (u) + \lambda_{\varepsilon}
\int_{\partial \omega} v(tu), $$ where $u$ is a scalar density function and $w$
and $v$ are double-well potentials, the exact scaling law is identified in the
critical regime, when $\varepsilon \lambda_{\varepsilon}^{{2/3}} \sim 1$.
| introduction
in this paper we seek to estimate the asymptotic behavior of the family of energies
ε3
z
ω
|d2u|2 dx + 1
ε
z
ω
w(u) dx + λε
z
∂ω
v (tu) dhn−1,
where u ∈h2(ω), ωis a bounded open set in rn of class c2, tu is the trace of u on ∂ω, w and v are continuous
and non-negative double-well potentials with quadratic growth at infinity, and lim
ε→0+ λε = ∞.
it is known that the transition layer in the interior of the domain has width of order ε (see [22], [20], [21], [2], [13],
[9], [15]). to formally find the order of the width of the transition layer on the boundary, it suffices to study the case
n = 2. therefore, by focusing on a neighborhood of a point on the boundary (assuming the boundary is flat), consider
a 2 −d energy in the half ball of radius δ centered at that point x0 of the boundary, and changing variables to a fixed
domain, e.g. the unit ball, we obtain
(ε³/δ²) ∫∫_{b+} |d²u|² dx dy + (δ²/ε) ∫∫_{b+} w(u) dx dy + λε δ ∫_e v (tu) dh¹.
equi-partition of energy between the first and last terms leads to δ ≈ ε λε^{−1/3} which, in turn, yields δ²/ε ≈ ε λε^{−2/3}, which vanishes with ε, which seems to indicate that the middle term will not contribute for the transition on the boundary. one also concludes that on the boundary, the energy will scale as ε³/δ² ≈ λε δ ≈ ε λε^{2/3}. hence there are three essential regimes for this energy depending on how the quantity ε λε^{2/3} behaves as ε →0+.
in this paper we study the case in which ε λε^{2/3} converges to a finite and strictly positive value. the other two regimes
will be treated in a forthcoming paper.
consider the functional
fε(u) := ε³ ∫_ω |d²u|² dx + (1/ε) ∫_ω w(u) dx + λε ∫_{∂ω} v (tu) dh^{n−1} if u ∈h²(ω), and fε(u) := ∞ otherwise. (1.1)
theorem 1.1 (compactness). let ω⊂rn be a bounded open set of class c2 and let w : r →[0, ∞) be such that
(hw1) w is continuous and w⁻¹({0}) = {a, b} for some a, b ∈r, a < b;
(hw2) w(z) ⩾ c|z|² −1/c for all z ∈r and for some c > 0.
let v : r →[0, ∞) be such that
(hv1) v is continuous and v⁻¹({0}) = {α, β} for some α, β ∈r, α < β;
(hv2) v(z) ⩾ c|z|² −1/c for all z ∈r and for some c > 0;
(hv3) v(z) ⩾ (1/c) min{|z −β|, |z −α|}² for all z ∈(α −ρ, α + ρ) ∪(β −ρ, β + ρ) and for some c, ρ > 0.
assume that ε λε^{2/3} →l ∈(0, ∞) as ε →0+ and consider a sequence {uε} ⊂h²(ω) such that supε>0 fε(uε) < ∞. then there exist a subsequence {uε} (not relabeled), u ∈bv(ω; {a, b}), and v ∈bv(∂ω; {α, β}) such that uε →u in l²(ω) and tuε →v in l²(∂ω).
the next theorem concerns the critical regime where ε and λε are "balanced", i.e. ελ
2
3
ε ∼1, and all terms play an
important role. here λε is large enough to render the energy sensitive to the transition that occurs on the boundary,
but not too big as to force the value on the boundary to converge to a constant.
we define
(i) ea := {x ∈ω: u(x) = a} for all u ∈bv(ω; {a, b});
(ii) m is the energy density per unit area on the transition interfaces between the interior potential wells (a numerical sketch of this constant follows the present list), precisely,
m := inf{ ∫_{−r}^{r} ( w(f(t)) + |f′′(t)|² ) dt : f ∈h²_loc(r), f(−t) = a, f(t) = b for all t ⩾r, r > 0 }; (1.2)
(iii) σ is the interaction energy on the transition interface between bulk wells and boundary wells, i.e.,
σ(z, ξ) := inf{ ∫_{0}^{r} ( w(f(t)) + |f′′(t)|² ) dt : f ∈h²_loc((0, ∞)), f(0) = ξ, f(t) = z for all t ⩾r, r > 0 }; (1.3)
(iv) fα := {x ∈∂ω: v(x) = α} for all v ∈bv(∂ω; {α, β});
(v) c̲ is a lower bound to the energy on a transition interface between the wells of the boundary potential,
c̲ := inf{ (1/8) ∫_{−r}^{r} ∫_{−r}^{r} |f′(x) −f′(y)|²/|x −y|² dx dy + ∫_{−r}^{r} v(f(x)) dx : f ∈h^{3/2}_loc(r), f′ ∈h^{1/2}(r), f(−t) = α, f(t) = β for all t ⩾r, r > 0 }; (1.4)
(vi) c̄ is an upper bound to the energy on a transition interface between the wells of the boundary potential,
c̄ := inf{ (7/16) ∫_{−∞}^{∞} ∫_{−∞}^{∞} |f′(x) −f′(y)|²/|x −y|² dx dy + ∫_{−∞}^{∞} v(f(x)) dx : f ∈h^{3/2}_loc(r), f(−t) = α, f(t) = β for all t ⩾r, r > 0 }. (1.5)
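as referenced in item (ii), the constant m of (1.2) can be approximated numerically by discretizing the profile f on a bounded interval and minimizing the resulting finite-dimensional energy. the sketch below is illustrative only: the double-well potential, the wells a = −1 and b = 1, the truncation length and the grid size are all assumed choices, and the requirement that f be constant outside [−r, r] is only imposed approximately through the clamped boundary values.

import numpy as np
from scipy.optimize import minimize

a, b = -1.0, 1.0
w = lambda z: (z**2 - 1.0)**2          # assumed double-well with wells at a and b

r, n = 5.0, 201                         # truncation length and number of grid points
t = np.linspace(-r, r, n)
h = t[1] - t[0]

def energy(f_inner):
    f = np.concatenate(([a], f_inner, [b]))          # clamp f(-r) = a, f(r) = b
    fpp = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2    # centred second difference
    return h * (np.sum(w(f)) + np.sum(fpp**2))       # discretization of the integrand in (1.2)

res = minimize(energy, np.tanh(t[1:-1]), method="l-bfgs-b")
print("approximate value of m:", res.fun)

the analogous constants σ(z, ξ) of (1.3) can be approximated with the same scheme on (0, r) after fixing f(0) = ξ and the far-field value z.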
theorem 1.2 (critical case). under the same hypotheses of theorem 1.1 the following statements hold:
(i) (lower bound) for every u ∈bv (ω; {a, b}) and v ∈bv (∂ω; {α, β}) and for every sequence {uε} ⊂h2(ω)
such that uε →u in l2(ω), tuε →v in l2(∂ω), we have
lim inf_{ε→0+} fε(uε) ⩾ m per_ω(ea) + Σ_{z=a,b} Σ_{ξ=α,β} σ(z, ξ) h^{n−1}({tu = z} ∩{v = ξ}) + c̲ l per_{∂ω}(fα);
(ii) (upper bound) for every u ∈bv (ω; {a, b}) and v ∈bv (∂ω; {α, β}), there exists a sequence {uε} ⊂h²(ω) such that uε →u in l²(ω), tuε →v in l²(∂ω), and
lim sup_{ε→0+} fε(uε) ⩽ m per_ω(ea) + Σ_{z=a,b} Σ_{ξ=α,β} σ(z, ξ) h^{n−1}({tu = z} ∩{v = ξ}) + c̄ l per_{∂ω}(fα).
the main results, theorems 1.1 and 1.2, imply, in particular, that
min{ fε : a < ⨍_ω u dx < b, α < ⨍_{∂ω} v dh^{n−1} < β } = o(1) as ε →0+,
where we impose a mass constraint to avoid trivial solutions which yield no energy. note that these conditions pose no difficulties to the γ-convergence due to the strong convergence of uε and tuε. thus we identify the precise scaling law for the minimum energy in the parameter regime ε λε^{2/3} ∼1.
observe that, although theorem 1.2 does not prove that the sequence {fε}ε>0 γ-converges as ε →0+, since the
constants of the lower and upper bounds for the last transition term do not match, we can apply theorem 8.5
from [18] to prove that there exists a subsequence εn →0+ such that the corresponding subsequence of functionals
γ-converges.
hence theorem 1.2 shows that the limiting functional concentrates on the three different kinds of transition layers: an
interior transition layer of dimension n −1, where the limiting value of u makes the transition between a and b; the
boundary of the domain, also of dimension n −1, where there is the transition between the interior phases a and b
and the boundary phases α and β; and a transition interface on the boundary, of dimension n −2, where the limiting
value of the trace tu makes the transition between α and β.
the difficulties in proving a γ-convergence result arise mainly from the nature of the functional under consideration.
on one hand, the energy involves second-order derivatives, which prevents us from following the usual techniques in
phase transitions, such as truncation and rearrangement arguments to obtain monotonically increasing test functions
for the constant c. in [2], these techniques are crucial to find a test function that matches both the lifting constant
and the optimal profile problem for the boundary wells. on the other hand, for the boundary term, the functionals
are also nonlocal. thus the estimates for the recovery sequence have to be sharper, since the nonlocality extends its
contribution beyond the characteristic length of the phase transition. the usual methods for localization make use of
truncation arguments, which do not apply in this setting due to the fact that the fractional seminorm is of higher-order.
similar difficulties can also be found in the papers [6,7,8,5] where, similarly, the γ-convergence is not established.
the difference between the constants c̲ and c̄ arises from two factors. first, from proposition 2.9 it does not follow
that the lifting constant is independent of the value of the trace g. and second, when estimating the upper bound for
the recovery sequence, the transition between α and β is accomplished on a layer of thickness δε = o(ε). so we rescale
the integrals by δε, but because of the non-locality of the fractional energy, it obtains a contribution from a layer of
thickness ε, which after rescaling becomes of thickness ε/δε →∞. this accounts for the fact that the integration limits
of the constant c extend to infinity, while for c they are bounded.
the proofs of theorems 1.1 and 1.2 are divided through the next sections. we begin by studying two auxiliary
one-dimensional problems. more precisely, let i, j ⊂r be two open intervals and define the following functionals
fε(u; i) := ε³ ∫_i |u′′(x)|² dx + (1/ε) ∫_i w(u(x)) dx if u ∈h²(i), and fε(u; i) := ∞ otherwise, (1.6)
and
gε(v; j) := (ε³/8) ∫_j ∫_j |v′(x) −v′(y)|²/|x −y|² dx dy + λε ∫_j v (v(x)) dx if v ∈h^{3/2}(j), and gε(v; j) := ∞ otherwise, (1.7)
in sections 4.1 and 4.2 we prove a compactness result and a lower bound for fε which follows the techniques developed
in [13]. in section 4.3 we will prove a compactness result for gε, while in section 4.4 we will prove a lower bound by
finding "good points" x±
i such that most of the transition energy is concentrated between x−
i and x+
i and we modify
4
b. galv ̃
ao-sousa
the original sequence {un} on a small set to be admissible for c. in section 5.1 we will prove theorem 1.1 in the critical
regime using a slicing argument to reduce the compactness in the interior to the auxiliary problem studied in section
4.1, and analogously, we reduce the compactness on the boundary to the one-dimensional problem for gε studied in
section 4.3. in section 5.2 we prove the lower bound result for theorem 1.2 using the fact that the energy concentrates
in different mutually singular sets. finally, in section 5.3 we prove the upper bound for theorem 1.2.
from theorem 1.2, we deduce the following corollary.
corollary 1.3. under the same hypotheses of theorem 1.1, and assuming that α = β, then the sequence {fε}ε>0
γ-converges as ε →0+ to
f0(u) := m per_ω(ea) + Σ_{z=a,b} σ(z, α) h^{n−1}({tu = z}) if u ∈bv (ω; {a, b}), and f0(u) := ∞ otherwise,
where m is defined as in (1.2) and σ is defined as in (1.3).
from the result of theorem 1.2, we know that the γ-limit of the functionals fε as ε →0+ will concentrate its energy
on three surfaces: the discontinuity surface of u, the boundary ∂ω, and the discontinuity surface of v. moreover, we
know the precise energy of the first two terms. for the last term, we expect it to be the product of the perimeter of the
surface times the value c of the transition between the two boundary preferred phases α and β. since the fractional
norm on the boundary is non-local, the definition of c should span the whole real line and the lifting constant should
be independent of the function g, as in the first-order case (see [2]). we offer the following conjecture.
conjecture. under the same hypotheses of theorem 1.1, then the sequence {fε}ε>0 γ-converges as ε →0+ to
f0(u, v) :=
mperω(ea) +
x
z=a,b
x
ξ=α,β
σ(z, ξ)hn−1 {tu = z} ∩{v = ξ}
+ clper∂ω(fα)
if (u, v) ∈v,
∞
otherwise,
where v := bv (ω; {a, b}) × bv (∂ω; {α, β}), m is defined as in (1.2), σ is defined as in (1.3), and c is defined by
c := inf
ζ
z ∞
−∞
z ∞
−∞
|f ′(x) −f ′(y)|2
|x −y|2
dx dy +
z ∞
−∞
v
f(x)
dx : f ∈h
3
2
loc(r), lim
x→∞f(−x) = α, lim
x→∞f(x) = β
, (1.8)
and ζ is defined by
ζ := inf
rr
r×r+
d2u(x, y)
2 dx dy
r
r
r
r
g′(x)−g′(y)
2
|x−y|2
dx dy
: u ∈h2(r × r+), tu(*, 0) = g in r
,
(1.9)
which is independent of g ∈h
3
2
loc(r) such that lim g(−x) = α as x →∞and lim g(x) = β as x →∞.
2. preliminaries
2.1. slicing
we now show a slicing argument introduced by [2] and improved in [13]. first we fix some notation. given a bounded
open set a ⊂rn, a unit vector e in rn, and a function u : a →r, we denote by
m the orthogonal complement of e,
ae the projection of a onto m,
ay
e := {t ∈r : y + te ∈a}, for all y ∈ae,
uy
e the trace of u on ay
e, i.e., uy
e(t) := tu(y + te), for all y ∈ae.
definition 2.1. for every δ > 0, two sequences {vε}, {wε} ⊂l1(e) are said to be δ−close if for every ε > 0
∥vε −wε∥l1(e) < δ.
proposition 2.2. assume that e is a lipschitz, bounded and open subset of rn−1. if {wε} ⊂l1(e) is equi-integrable
and if there are n −1 linearly independent unit vectors ei such that for every δ > 0 and for every fixed i = 1, . . . , n −1,
there exist a sequence {vε} (depending on i) that is δ−close to {wε} with {vy
ε} precompact in l1(ey
ei) for hn−2-a.e.
y ∈eei, then {wε} is precompact in l1(e).
2.2. fractional order sobolev spaces
we will use the norms and seminorms of several fractional order spaces, introduced by besov and nikol'skii and
summarized in [1] and [27]. consider the following norms and seminorms for the space w
3
2 ,2(j) where j ⊂r is an
open interval.
|u|2
h
1
2 (j) :=
z
j
z
j
u(x) −u(y)
2
|x −y|2
dx dy,
|u|2
h
3
2 (j) :=
z
j
z
j
u(x) −2u
x+y
2
+ u(y)
2
|x −y|4
dx dy,
∥u∥2
w
3
2 ,2(j) := ∥u∥2
h1(j) + |u′|2
h
1
2 (j),
∥u∥2
h
3
2 (j) := ∥u∥2
l2(j) + |u|2
h
3
2 (j).
we will need to compare the two seminorms and for that we invoke an auxiliary result (see [12,25]).
proposition 2.3. let r > 1 and let u : (a, b) →[0, ∞] be a borel function. then
∫_a^b (x −a)^{−r} ( ∫_a^x u(y) dy ) dx ⩽ (1/(r −1)) ∫_a^b u(x)/(x −a)^{r−1} dx.
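this hardy-type inequality follows from a short application of tonelli's theorem; a sketch of the computation (not spelled out in the text) is:

$$
\int_a^b \frac{1}{(x-a)^r}\int_a^x u(y)\,dy\,dx
= \int_a^b u(y)\int_y^b \frac{dx}{(x-a)^r}\,dy
\le \int_a^b u(y)\,\frac{(y-a)^{-(r-1)}}{r-1}\,dy ,
$$

since $\int_y^b (x-a)^{-r}\,dx \le \int_y^\infty (x-a)^{-r}\,dx = (y-a)^{1-r}/(r-1)$ for $r > 1$.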
lemma 2.4. let j ⊂r be an open interval and let u ∈h
3
2 (j). then
|u|2
h
3
2 (j) ⩽1
8|u′|2
h
1
2 (j).
proposition 2.5 (gagliardo-nirenberg-type inequality). let j ⊂r be an open interval. then there exists c =
c(j) > 0 such that
∥u∥h1(j) ⩽c
∥u∥
1
3
l2(j)|u′|
2
3
h
1
2 (j) + ∥u∥l2(j)
for all u ∈h
3
2 (j).
we recall two inequalities due to gagliardo and nirenberg (see [14,24]).
proposition 2.6. let ω⊂rn be a bounded open set satisfying the cone property. if u ∈l2(ω) and ∇2u ∈l2(ω),
then u ∈h2(ω) and
∥∇u∥l2(ω) ⩽cln(ω)
∥u∥
1
2
l2(ω)∥∇2u∥
1
2
l2(ω;rn×n) + ∥u∥l2(ω)
,
where c > 0 is independent of u and ω.
proposition 2.7. let j ⊂r be an open bounded interval. if u ∈l1(j) and u′′ ∈l2(j) then u ∈h2(j) and
∥u′∥l
4
3 (j) ⩽c
∥u∥
1
2
l1(j)∥u′′∥
1
2
l2(j) + ∥u∥l1(j)
,
for some constant c > 0.
2.3. lifting inequalities
we need to relate the l2 norm of the hessian with its equivalent on the boundary, i.e., the h
1
2 fractional seminorm
of the derivative of the trace. in this section, we estimate the ratio between these two seminorms. we start with an
auxiliary lemma from [10].
lemma 2.8. let 1 ⩽p < ∞, let e ⊂rn and f ⊂rm be measurable sets and let u ∈lp(e × f). then
z
f
z
e
|u(x, y)| dx
p
dy
1
p
⩽
z
e
z
f
|u(x, y)|p dy
1
p
dx.
proposition 2.9. let g ∈h
3
2 (0, r) and consider the triangle t +
r := {(x, y) ∈r2 : 0 < y < r
2 , y < x < r −y}. then,
1
8 ⩽ζr,g := inf
rr
t +
r
d2u(x, y)
2 dx dy
r r
0
r r
0
g′(x)−g′(y)
2
|x−y|2
dx dy
: u ∈h2(t +
r ), tu(*, 0) = g in (0, r)
⩽7
16.
(2.1)
proof. we divide the proof in two steps.
step 1: upper bound in (2.1).
define the diamond
\[
t_{r} := \big\{ (x, y) \in \mathbb{R}^{2} : 0 \leqslant x \leqslant r,\ |y| \leqslant \min\{x, r - x\} \big\}.
\tag{2.2}
\]
given a function g ∈ H^{3/2}(0, r), we lift it to the diamond t_r by
\[
u(x, y) := \frac{1}{2y} \int_{x-y}^{x+y} g(t)\,dt.
\]
we are only interested in the lifting on the positive part of the diamond, i.e., on the triangle t⁺_r, but observe that u(x, ·) is even, and we will take advantage of that fact for some estimates. since g is continuous, one deduces immediately that u is continuous and
\[
Tu'(x, 0) = \lim_{y\to 0^{+}} \frac{\partial u}{\partial x}(x, y) = \lim_{y\to 0^{+}} \frac{g(x+y) - g(x-y)}{2y} = g'(x).
\]
moreover,
\[
\frac{\partial^{2} u}{\partial x^{2}}(x, y) = \frac{g'(x+y) - g'(x-y)}{2y},
\qquad
\frac{\partial^{2} u}{\partial x \partial y}(x, y) = \frac{g'(x+y) + g'(x-y)}{2y} - \frac{g(x+y) - g(x-y)}{2y^{2}},
\]
\[
\frac{\partial^{2} u}{\partial y^{2}}(x, y) = \frac{g'(x+y) - g'(x-y)}{2y} - \frac{g(x+y) + g(x-y)}{y^{2}} + \frac{1}{y^{3}} \int_{x-y}^{x+y} g(t)\,dt.
\]
we can easily deduce that
\[
\left\| \frac{\partial^{2} u}{\partial x^{2}} \right\|^{2}_{L^{2}(t^{+}_{r})} = \frac{1}{4}\,|g'|^{2}_{H^{1/2}(0,r)},
\]
and note that
\[
\frac{\partial^{2} u}{\partial x \partial y}(x, y) = \frac{1}{2y^{2}} \int_{0}^{y} \big( g'(x+y) - g'(s+x) + g'(x-y) - g'(s+x-y) \big)\,ds.
\]
use hardy's inequality from proposition 2.3 to obtain
\[
\left\| \frac{\partial^{2} u}{\partial x \partial y} \right\|^{2}_{L^{2}(t^{+}_{r})} \leqslant \frac{1}{16}\,|g'|^{2}_{H^{1/2}(0,r)}.
\]
finally, notice that
\[
\frac{\partial^{2} u}{\partial y^{2}}(x, y) = \frac{1}{y^{3}} \int_{0}^{y} f_{2}(r; x, y)\,dr,
\]
where
\[
f_{2}(r; x, y) := \int_{r+x}^{x+y} \big( g'(x+y) - g'(s) \big)\,ds + \int_{x-y}^{r+(x-y)} \big( g'(s) - g'(x-y) \big)\,ds.
\]
using hardy's inequality in proposition 2.3 again, we deduce that
\[
\left\| \frac{\partial^{2} u}{\partial y^{2}} \right\|^{2}_{L^{2}(t^{+}_{r})} \leqslant \frac{1}{16}\,|g'|^{2}_{H^{1/2}(0,r)}.
\]
we finally put the three estimates for the second-order partial derivatives of u together to obtain
\[
\iint_{t^{+}_{r}} |\nabla^{2} u|^{2}\,dx\,dy \leqslant \frac{7}{16}\,|g'|^{2}_{H^{1/2}(0,r)}.
\]
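the averaging lift of step 1 is simple to reproduce numerically. the sketch below, with an arbitrary sample datum g, builds u(x, y) as the average of g over (x − y, x + y) and checks that ∂u/∂x(x, y) → g′(x) and u(x, y) → g(x) as y → 0⁺, as used above; the evaluation points and quadrature are illustrative choices.

```python
import numpy as np

g = np.cos                                   # sample boundary datum
gp = lambda x: -np.sin(x)                    # its derivative

def lift(x, y, m=10_000):
    """u(x, y) = average of g over (x - y, x + y), midpoint quadrature."""
    t = np.linspace(x - y, x + y, m, endpoint=False) + y / m
    return np.mean(g(t))

def du_dx(x, y):
    # exact x-derivative of the lift: (g(x+y) - g(x-y)) / (2y)
    return (g(x + y) - g(x - y)) / (2.0 * y)

x0 = 1.3
for y in [1e-1, 1e-2, 1e-3]:
    print(y, du_dx(x0, y), gp(x0))           # second column tends to g'(x0)
print(lift(x0, 1e-3), g(x0))                 # the lift itself tends to g(x0)
```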
step 2: lower bound in (2.1).
case 1: assume that v ∈ L¹(t⁺_r; ℝ²) ∩ C^∞(t⁺_r; ℝ²) is such that ∇v ∈ L²(t⁺_r; ℝ^{2×2}).
first, it is easy to prove that
\[
\left| \frac{v(x+y, 0) - v(x-y, 0)}{2y} \right|^{2} \leqslant \frac{1}{2} \left( \int_{0}^{1} \big| \nabla v(x+y-ty,\, ty) \big|\,dt + \int_{0}^{1} \big| \nabla v(x-y+ty,\, ty) \big|\,dt \right)^{2}.
\]
by estimating the right-hand side using lemma 2.8 and the minkowski inequality, we obtain
\[
|v(\cdot, 0)|^{2}_{H^{1/2}(0,r)} \leqslant 8\,\|\nabla v\|^{2}_{L^{2}(t^{+}_{r})}.
\]
case 2: assume that v ∈ L¹(t⁺_r; ℝ²) is such that ∇v ∈ L²(t⁺_r; ℝ^{2×2}).
first, by reflection, extend the function to v ∈ L¹(t_r; ℝ²) with ∇v ∈ L²(t_r; ℝ^{2×2}). let φ_ε be the standard mollifiers and consider v_ε := v ⋆ φ_ε, defined in t^ε_r := {(x, y) ∈ t_r : dist((x, y), ∂t_r) > ε}. then v_ε → v in L¹_loc(a; ℝ²), ∇v_ε → ∇v in L²(a; ℝ^{2×2}), and v_ε(·, 0) → Tv in L¹(a ∩ (ℝ × {0}); ℝ²) for any open set a ⋐ t_r. we can find a subsequence (not relabeled) such that v_ε(x, 0) → Tv(x) for L¹-a.e. x ∈ a ∩ (ℝ × {0}). then, by case 1, we have
\[
\int_{a\cap(\mathbb{R}\times\{0\})} \int_{a\cap(\mathbb{R}\times\{0\})} \left| \frac{Tv(x) - Tv(y)}{x - y} \right|^{2} dx\,dy
\leqslant \liminf_{\varepsilon\to 0^{+}} \int_{a\cap(\mathbb{R}\times\{0\})} \int_{a\cap(\mathbb{R}\times\{0\})} \left| \frac{v_\varepsilon(x, 0) - v_\varepsilon(y, 0)}{x - y} \right|^{2} dx\,dy
\leqslant 8 \lim_{\varepsilon\to 0^{+}} \iint_{a\cap t^{+}_{r}} |\nabla v_\varepsilon|^{2}\,dx\,dy = 8 \iint_{a\cap t^{+}_{r}} |\nabla v|^{2}\,dx\,dy.
\]
let a_n ⊂ a_{n+1} ⋐ t_r be such that t_r = ⋃ a_n. then one deduces that
\[
\int_{0}^{r} \int_{0}^{r} \left| \frac{Tv(x) - Tv(y)}{x - y} \right|^{2} dx\,dy \leqslant 8 \iint_{t^{+}_{r}} |\nabla v|^{2}\,dx\,dy.
\]
apply this result to v := ∇u to deduce
\[
\int_{0}^{r} \int_{0}^{r} \left| \frac{g'(x) - g'(y)}{x - y} \right|^{2} dx\,dy \leqslant 8 \iint_{t^{+}_{r}} |\nabla^{2} u|^{2}\,dx\,dy,
\]
which proves the lower bound in (2.1).
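the lower bound just proved, |g′|²_{H^{1/2}(0,r)} ⩽ 8 ∬_{t⁺_r} |∇²u|², can be probed numerically for any explicit H² function; the sketch below does so for the arbitrary sample u(x, y) = sin x · cos y on the triangle with r = 2, using crude midpoint quadrature with the diagonal excluded.

```python
import numpy as np

R, n = 2.0, 300
uxx = lambda x, y: -np.sin(x) * np.cos(y)          # Hessian entries of u = sin(x)cos(y)
uxy = lambda x, y: -np.cos(x) * np.sin(y)
uyy = lambda x, y: -np.sin(x) * np.cos(y)
gp  = lambda x: np.cos(x)                           # derivative of the trace g(x) = u(x, 0)

# right-hand side: 8 * int over T_R^+ of |D^2 u|^2, midpoint rule on the triangle
hx, hy = R / n, (R / 2) / n
xs = np.linspace(0, R, n, endpoint=False) + hx / 2
ys = np.linspace(0, R / 2, n, endpoint=False) + hy / 2
X, Y = np.meshgrid(xs, ys, indexing="ij")
mask = (Y < X) & (X < R - Y)
hess_sq = uxx(X, Y) ** 2 + 2 * uxy(X, Y) ** 2 + uyy(X, Y) ** 2
rhs = 8.0 * np.sum(hess_sq[mask]) * hx * hy

# left-hand side: |g'|_{H^{1/2}(0,R)}^2
gx = gp(xs)
diff = gx[:, None] - gx[None, :]
dist = xs[:, None] - xs[None, :]
np.fill_diagonal(dist, np.inf)
lhs = np.sum((diff / dist) ** 2) * hx * hx

print(lhs, rhs, lhs <= rhs)
```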
2.4. slicing on bv
we use here the same notation as in section 2.1.
theorem 2.10 (slicing of bv functions). let u ∈ L¹(ω). then u ∈ BV(ω) if and only if there exist n linearly independent unit vectors e_i such that u_{e_i}^y ∈ BV(ω_{e_i}^y) for ℒⁿ⁻¹-a.e. y ∈ ω_{e_i} and
\[
\int_{\omega_{e_i}} |Du_{e_i}^{y}|(\omega_{e_i}^{y})\,dy < \infty
\]
for all i = 1, …, n.
we state an immediate corollary of theorem 1.24 from [16].
proposition 2.11. let ω ⊂ ℝⁿ be a bounded open lipschitz set and let e ⊂ ω be a set of finite perimeter. then there are sets e_n ⊂ ω of class C² such that
\[
\mathcal{L}^{n}(e \,\triangle\, e_n) \to 0, \qquad \mathcal{H}^{n-1}(\partial e \,\triangle\, \partial e_n) \to 0.
\tag{2.3}
\]
proposition 2.12 (see section 5.10 in [11]). let a ⊂ ℝⁿ be an open set, let E ⊂ a be a borel set, let e be an arbitrary unit vector, and assume E has finite perimeter in a. then E_e^y has finite perimeter in a_e^y, ∂E_e^y ∩ a_e^y = (∂E ∩ a)_e^y, and
\[
\int_{a_{e}} \mathcal{H}^{0}(\partial E_{e}^{y} \cap a_{e}^{y})\,dy = \int_{\partial E \cap a} |\langle \nu_{E}, e \rangle|\,d\mathcal{H}^{n-1}.
\]
conversely, E has finite perimeter in a if there exist n linearly independent unit vectors e_i, i = 1, …, n, such that
\[
\int_{a_{e_i}} \mathcal{H}^{0}(\partial E_{e_i}^{y} \cap a_{e_i}^{y})\,dy < \infty
\]
for all i = 1, …, n.
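the slicing identity of proposition 2.12 can be illustrated on a disk sliced along e = e₁: each horizontal slice contributes two boundary points, and integrating the count in y reproduces ∫_{∂E} |⟨ν_E, e⟩| dℋ¹ = 4ρ for a disk of radius ρ. the sketch below (radius and grids are arbitrary) checks this numerically.

```python
import numpy as np

rho = 0.7                                   # disk of radius rho, sliced along e = e_1
ys = np.linspace(-1, 1, 20001)
dy = ys[1] - ys[0]

# number of boundary points of each one-dimensional slice E^y = {t : (t, y) in E}
counts = np.where(np.abs(ys) < rho, 2, 0)
slice_integral = np.sum(counts) * dy        # int H^0(boundary of slice) dy

thetas = np.linspace(0, 2 * np.pi, 20001)
dtheta = thetas[1] - thetas[0]
boundary_integral = np.sum(np.abs(np.cos(thetas))) * rho * dtheta   # int_{dE} |<nu, e>| dH^1

print(slice_integral, boundary_integral, 4 * rho)   # all three are close to 2.8
```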
2.5. functions of bounded variation on a manifold
we consider several spaces of functions with domains a ⊂rn which are not open. specifically, a will be the boundary
of an open and bounded set ωof class c2 and so it will be a compact riemannian manifold (without boundary) of
class c2 and dimension n −1 in rn. such a manifold is endowed with a unit normal field ν which is continuous and
defined for every x ∈a. in this section we give a brief definition of these spaces. for more details see [3,11,17].
the space of integrable functions on a manifold. let a ⊂ ℝⁿ be a compact riemannian manifold (without boundary) of class C¹ and dimension n−1, and define the restriction measure ℋⁿ⁻¹⌊a(e) := ℋⁿ⁻¹(e ∩ a). a function v is said to be integrable on a, and we write v ∈ L¹(a; ℋⁿ⁻¹⌊a), if and only if v is ℋⁿ⁻¹⌊a-measurable and ℋⁿ⁻¹⌊a-summable; precisely, v⁻¹(j) is ℋⁿ⁻¹⌊a-measurable for every open set j ⊂ ℝ, and ∫_a |v(x)| dℋⁿ⁻¹(x) < ∞.
the space of functions of bounded variation on a manifold. we give a short introduction to the space of
functions of bounded variation on a manifold. for more details we refer to [19].
let T*a be the cotangent bundle of a and let Γ(T*a) be the space of 1-forms on a. then, given a function v ∈ L¹(a), define the variation of v by
\[
|Dv|(a) := \sup\left\{ \int_{a} v\, \operatorname{div} w \,d\mathcal{H}^{n-1} : w \in \Gamma_{c}(T^{*}a),\ |w| \leqslant 1 \right\}.
\tag{2.4}
\]
then v ∈ L¹(a) is said to be a function of bounded variation, i.e., v ∈ BV(a), if |Dv|(a) < ∞. moreover, if v = χ_e for some set e ⊂ a, then e has finite perimeter if and only if v ∈ BV(a), and
\[
\operatorname{per}_{a}(e) = |Dv|(a) = \mathcal{H}^{n-2}(\partial_{a} e) < \infty.
\]
proposition 2.13. let ω ⊂ ℝⁿ be an open bounded set of class C² and let e ⊂ ∂ω be a set of finite perimeter with respect to ℋⁿ⁻². then there are sets e_n ⊂ ∂ω of class C² such that
\[
\mathcal{H}^{n-1}(e \,\triangle\, e_n) \to 0, \qquad \mathcal{H}^{n-2}(\partial_{\partial\omega} e \,\triangle\, \partial_{\partial\omega} e_n) \to 0.
\]
3. characterization of constants
lemma 3.1. assume that V : ℝ → [0, ∞) satisfies (H_V^1)–(H_V^3). then the constant c defined in (1.4) belongs to (0, ∞).
proof. assume by contradiction that c = 0. then there exist two sequences {f_n} ⊂ H^{3/2}_{loc}(ℝ) and {r_n} ⊂ (0, ∞) satisfying
\[
f_n(-x) = \alpha, \qquad f_n(x) = \beta \qquad \text{for all } x \geqslant r_n,
\tag{3.1}
\]
\[
\frac{1}{8} \int_{-r_n}^{r_n} \int_{-r_n}^{r_n} \frac{|f_n'(x) - f_n'(y)|^{2}}{|x-y|^{2}}\,dx\,dy + \int_{-r_n}^{r_n} V\big(f_n(x)\big)\,dx \;\longrightarrow\; 0 \quad \text{as } n\to\infty.
\tag{3.2}
\]
let 0 < 2δ < β − α. since f_n(−r_n) = α, f_n(r_n) = β, and f_n is continuous, there exists an interval (s_n, t_n) such that
\[
f_n(s_n) = \alpha + \delta < \beta - \delta = f_n(t_n), \qquad f_n\big([s_n, t_n]\big) = [\alpha + \delta, \beta - \delta].
\tag{3.3}
\]
by (H_V^1) and the continuity of V we have that c_δ := min_{z ∈ [α+δ, β−δ]} V(z) > 0. then, by (3.2),
\[
0 = \lim_{n\to\infty} \int_{-r_n}^{r_n} V(f_n(x))\,dx \geqslant \lim_{n\to\infty} \int_{s_n}^{t_n} V(f_n(x))\,dx \geqslant \liminf_{n\to\infty} c_\delta (t_n - s_n),
\]
and so t_n − s_n → 0. for any t ∈ [0, 1], define g_n(t) := f_n(t_n t + s_n(1 − t)).
then g_n(0) = α + δ and g_n(1) = β − δ. changing variables in (3.2) yields
\[
\int_{s_n}^{t_n} \int_{s_n}^{t_n} \frac{|f_n'(x) - f_n'(y)|^{2}}{|x-y|^{2}}\,dx\,dy = \frac{1}{(t_n - s_n)^{2}} \int_{0}^{1} \int_{0}^{1} \frac{|g_n'(s) - g_n'(t)|^{2}}{|s-t|^{2}}\,ds\,dt \;\to\; 0.
\]
this implies that |g_n'/(t_n − s_n)|_{H^{1/2}(0,1)} → 0, and so, up to a subsequence (not relabeled), g_n'/(t_n − s_n) → constant in L²(0, 1). since t_n − s_n → 0, this implies that g_n' → 0 in L²(0, 1).
on the other hand,
\[
0 < \beta - \delta - (\alpha + \delta) = g_n(1) - g_n(0) = \int_{0}^{1} g_n'(t)\,dt.
\]
letting n → ∞, we obtain a contradiction. this shows that c > 0.
to prove that c < ∞, take any function f ∈ C²(ℝ) such that f(t) = α for t ⩽ −1 and f(t) = β for t ⩾ 1. it is easy to verify that its energy is finite.
remark. from the proof of the previous lemma, it follows that for every 0 < δ < (β − α)/2, the constant
\[
c_\delta := \inf\left\{ \frac{1}{8} \int_{s}^{t} \int_{s}^{t} \frac{|f'(x) - f'(y)|^{2}}{|x-y|^{2}}\,dx\,dy + \int_{s}^{t} V\big(f(x)\big)\,dx \;:\; f \in H^{3/2}_{\mathrm{loc}}(\mathbb{R}),\ f(s) = \alpha + \delta,\ f(t) = \beta - \delta,\ f\big([s, t]\big) = [\alpha + \delta, \beta - \delta], \text{ for some } s, t \in \mathbb{R} \right\}
\tag{3.4}
\]
also belongs to (0, ∞).
lemma 3.2. define the constant c as before by
\[
c := \inf\left\{ \frac{7}{16} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{|f'(x) - f'(y)|^{2}}{|x-y|^{2}}\,dx\,dy + \int_{-\infty}^{\infty} V\big(f(x)\big)\,dx \;:\; f \in H^{3/2}_{\mathrm{loc}}(\mathbb{R}),\ f(-t) = \alpha,\ f(t) = \beta \text{ for all } t \geqslant r, \text{ for some } r > 0 \right\},
\]
where V satisfies the properties of theorem 1.1. then c ∈ (0, ∞).
proposition 3.3. under the conditions of theorem 1.1, c = c⋆, where c⋆ is defined by
\[
c^{\star} := \inf\left\{ \frac{3}{2^{5/3}} \left( \int_{-1}^{1} \int_{-1}^{1} \frac{|g'(x) - g'(y)|^{2}}{|x-y|^{2}}\,dx\,dy \right)^{1/3} \left( \int_{-1}^{1} V\big(g(x)\big)\,dx \right)^{2/3} : g \in H^{3/2}_{\mathrm{loc}}(\mathbb{R}),\ g' \in H^{1/2}(\mathbb{R}),\ g(-t) = \alpha,\ g(t) = \beta \text{ for all } t \geqslant 1 \right\}.
\]
proof. first we prove that c ⩾ c⋆. let η > 0, and let f ∈ H^{3/2}_{loc}(ℝ) and r > 0 be such that f' ∈ H^{1/2}(ℝ), f(−t) = α, f(t) = β for all t ⩾ r, and
\[
\frac{1}{8} \int_{-r}^{r} \int_{-r}^{r} \frac{|f'(x) - f'(y)|^{2}}{|x-y|^{2}}\,dx\,dy + \int_{-r}^{r} V\big(f(x)\big)\,dx \leqslant c + \eta.
\]
setting g_r(x) := f(rx), which is admissible for c⋆, and changing variables, we obtain
\[
c + \eta \;\geqslant\; \frac{1}{8r^{2}} \int_{-1}^{1} \int_{-1}^{1} \frac{|g_r'(x) - g_r'(y)|^{2}}{|x-y|^{2}}\,dx\,dy + r \int_{-1}^{1} V\big(g_r(x)\big)\,dx
\;\geqslant\; \frac{1}{8s_r^{2}} \int_{-1}^{1} \int_{-1}^{1} \frac{|g_r'(x) - g_r'(y)|^{2}}{|x-y|^{2}}\,dx\,dy + s_r \int_{-1}^{1} V\big(g_r(x)\big)\,dx \;\geqslant\; c^{\star},
\]
where
\[
s_r := \arg\min_{s>0} \left\{ \frac{1}{8s^{2}} \int_{-1}^{1} \int_{-1}^{1} \frac{|g_r'(x) - g_r'(y)|^{2}}{|x-y|^{2}}\,dx\,dy + s \int_{-1}^{1} V\big(g_r(x)\big)\,dx \right\}
= \left( \frac{\int_{-1}^{1} \int_{-1}^{1} \frac{|g_r'(x) - g_r'(y)|^{2}}{|x-y|^{2}}\,dx\,dy}{4 \int_{-1}^{1} V\big(g_r(x)\big)\,dx} \right)^{1/3}.
\]
let η → 0⁺ to deduce that c ⩾ c⋆. the converse inequality follows by running the first part of the proof backwards.
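the constant 3/2^{5/3} above and the constant 3·7^{1/3}/4 in proposition 3.4 below both come from minimizing the elementary function s ↦ κA/s² + sB over s > 0, with κ = 1/8 and κ = 7/16 respectively; here A and B are generic placeholders for the seminorm and potential integrals. the sympy sketch below reproduces that computation (illustrative only).

```python
import sympy as sp

s, A, B, kappa = sp.symbols("s A B kappa", positive=True)
phi = kappa * A / s**2 + s * B                     # rescaled energy as a function of s

s_star = (2 * kappa * A / B) ** sp.Rational(1, 3)  # critical point of phi
assert sp.simplify(sp.diff(phi, s).subs(s, s_star)) == 0

min_val = sp.simplify(phi.subs(s, s_star))         # optimal value of the rescaled energy

unit = A ** sp.Rational(1, 3) * B ** sp.Rational(2, 3)
c_18  = sp.simplify(min_val.subs(kappa, sp.Rational(1, 8)) / unit)
c_716 = sp.simplify(min_val.subs(kappa, sp.Rational(7, 16)) / unit)
print(sp.simplify(c_18 - 3 / 2 ** sp.Rational(5, 3)))        # 0, i.e. c_18 = 3 / 2^{5/3}
print(sp.simplify(c_716 - 3 * 7 ** sp.Rational(1, 3) / 4))   # 0, i.e. c_716 = 3 * 7^{1/3} / 4
```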
proposition 3.4. under the conditions of theorem 1.1, c = c⋆, where c⋆ is defined by
\[
c^{\star} := \inf\left\{ \frac{3\cdot 7^{1/3}}{4} \left( \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{|g'(x) - g'(y)|^{2}}{|x-y|^{2}}\,dx\,dy \right)^{1/3} \left( \int_{-\infty}^{\infty} V\big(g(x)\big)\,dx \right)^{2/3} : g \in H^{3/2}_{\mathrm{loc}}(\mathbb{R}),\ g(-t) = \alpha,\ g(t) = \beta \text{ for all } t \geqslant 1 \right\}.
\]
4. two auxiliary one-dimensional problems
4.1. compactness for fε
theorem 4.1. assume that w : r →[0, ∞) satisfies (hw
1 ) −(hw
2 ). let i ⊂r be an open, bounded interval, let
{εn} be a positive sequence converging to 0, and let {un} ⊂h2(i) be such that
sup
n fεn(un; i) < ∞.
(4.1)
then there exist a subsequence (not relabeled) of {un} and a function u ∈bv
i; {a, b}
such that un →u in l2(i).
proof. given a sequence {un} ⊂h2(i) satisfying (4.1), by the compactness result in [13] and (hw
2 ), we obtain a
subsequence {un} (not relabeled) and a function u ∈bv
i; {a, b}
such that un →u in l2(i).
4.2. lower bound for fε
theorem 4.2 (lower bound estimate for fε). let i ⊂ ℝ be an open and bounded interval and let w : ℝ → [0, ∞) satisfy (H_W^1)–(H_W^2). let u ∈ BV(i; {a, b}), let v ∈ BV(∂i; {α, β}), and let {u_ε} ⊂ H²(i) be such that sup_ε f_ε(u_ε; i) =: c < ∞, u_ε → u in L²(i) and Tu_ε → v in H⁰(∂i). then
\[
\liminf_{\varepsilon\to 0^{+}} f_\varepsilon(u_\varepsilon; i) \;\geqslant\; m\, \mathcal{H}^{0}(S(u)) + \int_{\partial i} \sigma\big(Tu(x), v(x)\big)\,d\mathcal{H}^{0}(x),
\]
where m and σ are defined in (1.2) and (1.3), respectively.
proof. passing to a subsequence (not relabeled), we can assume that
lim inf
ε→0+ fε(uε; i) = lim
ε→0+ fε(uε; i).
since uε →u in l1(i) and ∥w(uε)∥l1(i) ⩽cε, by the growth condition (hw
2 ), we have that, up to a subsequence
(not relabeled), uε →u in l2(i), and supε ∥uε∥l2(i) ⩽c.
in turn, by proposition 2.6 and the fact that ∥u′′
ε∥l2(i) ⩽cε−3
2 , we deduce that ∥u′
ε∥l2(i) ⩽cε−3
4 , and so
lim
ε→0+
z
i
|εu′
ε(x)|2 dx = 0.
thus, up to a subsequence (not relabeled), we may assume that
εu′
ε(x) →0,
and
uε(x) →u(x)
(4.2)
for l1-a.e. x ∈i. since u ∈bv (i; {a, b}), its jump set is a finite set, so we can write s(u) := {s1, . . . , sl}, where
s0 := inf i < s1 < * * * < sl< sl+1 := sup i.
fix 0 < η <
β−α
2 , and 0 < δ0 :=
1
2 min {si+1 −si : i = 0, . . . , l}. using (4.2), for every i = 1, . . . , l, we may find
x±
i ∈(si −δ0, si + δ0) such that
|uε(x+
i ) −b| < η,
|uε(x−
i ) −a| < η,
and
|εu′
ε(x±
i )| < η.
(4.3)
moreover, since u is a constant in (s0, s1), we assume that u(x) ≡a in this interval (the case u(x) ≡b is analogous),
and we have that tu(s0) = a. using (4.2) once more, we may find a point x+
0 such that
|uε(x+
0 ) −a| < η,
and
|εu′
ε(x+
0 )| < η
(4.4)
for all ε sufficiently small.
on the other hand, tuε(s0) →v(s0), and so |tuε(s0) −v(s0)| < η
2 for all ε sufficiently small. since limx→s+
0 uε(x) =
tuε(s0), there is 0 < ρε < x+
0 < s0 such that |uε(x) −v(s0)| < η for all x ∈(s0, s0 + ρε).
there are now two cases. if uε(xε) = v(s0) for some xε ∈(s0, x+
0 ), then take x−
0,ε := xε. if uε(xε) ̸= v(s0) for all
xε ∈(s0, x+
0 ), then we claim that there exists x−
0,ε ∈(s0, x+
0 ) such that
u′
ε(x−
0,ε)
uε(x−
0,ε) −v(s0)
> 0.
indeed, if say uε(x) > v(s0) in (s0, x+
0 ), then for η > 0 such that |v(s0) −a| > 2η, we have that
uε(x+
0 ) −v(s0)
⩾|v(s0) −a| −|uε(x+
0 ) −a| > η,
and so there exists a first point xε ∈(s0, x+
0 ) such that
uε(xε) = v(s0) + η.
hence, by the mean value theorem, there is x−
0,ε ∈(s0, xε) such that
u′
ε(x0,ε) = uε(xε) −tuε(s0)
xε −s0
> v(s0) + η −v(s0) −η
2
xε −s0
> 0.
thus, we have found x0,ε ∈(s0, x+
0 ) such that
|uε(x−
0,ε) −v(s0)| < η
and
u′
ε(x−
0,ε)
uε(x−
0,ε) −v(s0)
⩾0,
(4.5)
for all ε sufficiently small. for simplicity of notation, we write x−
0 := x−
0,ε and x−
l+1 := x−
l+1,ε. from the facts that the
intervals [x−
i , x+
i ] are disjoint for i = 0, . . . , l+ 1, and that w is nonnegative, we have that
fε(uε; i) ⩾
l+1
x
i=0
z x+
i
x−
i
ε3
u′′
ε(x)
2 + 1
εw
uε(x)
dx.
(4.6)
we claim that
z x+
i
x−
i
ε3
u′′
ε(x)
2 + 1
εw
uε(x)
dx ⩾ml−o(η) −o(ε)
(4.7)
for all i = 1, . . . , l, that
z x+
0
x−
0
ε3
u′′
ε(x)
2 + 1
εw
uε(x)
dx ⩾σ
tu(s0), v(s0)
−o(η) −o(ε),
(4.8)
and that
z x+
l+1
x−
l+1
ε3
u′′
ε(x)
2 + 1
εw
uε(x)
dx ⩾σ
tu(sl+1), v(sl+1)
−o(η) −o(ε).
(4.9)
if (4.7), (4.8), and (4.9) hold, then from (4.6) we deduce that
lim inf
ε→0+ fε(uε; i) ⩾ml+ σ
tu(s0), v(s0)
+ σ
tu(sl+1), v(sl+1)
−o(η).
letting η →0+ yields
lim inf
ε→0+ fε(uε; i) ⩾ml+
z
∂i
σ
tu(x), v(x)
dh0(x).
the remaining of the proof is devoted to the proof of (4.7), (4.8), and (4.9).
step 1.
proof of (4.7).
define the functions
\[
g(w, z) := \inf\left\{ \int_{0}^{1} \Big( W\big(g(t)\big) + |g''(t)|^{2} \Big)\,dt : g \in C^{2}\big([0,1]; \mathbb{R}\big),\ g(0) = w,\ g(1) = b,\ g'(0) = z,\ g'(1) = 0 \right\},
\tag{4.10}
\]
\[
h(w, z) := \inf\left\{ \int_{0}^{1} \Big( W\big(h(t)\big) + |h''(t)|^{2} \Big)\,dt : h \in C^{2}\big([0,1]; \mathbb{R}\big),\ h(0) = a,\ h(1) = w,\ h'(0) = 0,\ h'(1) = z \right\}.
\tag{4.11}
\]
note that, considering third-order polynomials, one deduces that these functions satisfy
\[
\lim_{(w,z)\to(b,0)} g(w, z) = 0, \qquad \lim_{(w,z)\to(a,0)} h(w, z) = 0.
\tag{4.12}
\]
from (4.3), for ε sufficiently small, we have g
uε(x+
i ), εu′
ε(x+
i )
, h
uε(x−
i ), εu′
ε(x−
i )
⩽η. by (4.10) and (4.11), we
can find admissible functions b
gi and b
hi for g
uε(x+
i ), εu′
ε(x+
i )
and h
uε(x−
i ), εu′
ε(x−
i )
, respectively, such that
z 1
0
b
gi
′′(x)
2 + w
b
gi(x)
dx ⩽g
uε(x+
i ), εu′
ε(x+
i )
+ η ⩽2η,
(4.13)
z 1
0
b
hi
′′(x)
2 + w
b
hi(x)
dx ⩽h
uε(x−
i ), εu′
ε(x−
i )
+ η ⩽2η.
(4.14)
we now rescale and translate these functions, precisely,
gi(x) := b
gi
x −x+
i
ε
,
hi(x) := b
hi
x −x−
i
ε + 1
.
define
wε,i(x) :=
b
if x ⩾x+
i
ε + 1,
gi(t)
if x+
i
ε ⩽x ⩽x+
i
ε + 1,
uε(εx)
if x−
i
ε ⩽x ⩽x+
i
ε ,
hi(t)
if x−
i
ε −1 ⩽x ⩽x−
i
ε ,
a
if x ⩽x−
i
ε −1.
by construction wε,i ∈h2 x−
i
ε −1, x+
i
ε + 1
and wε,i is admissible for the constant m given in (1.2). hence for all ε
sufficiently small,
z x+
i
x−
i
ε3
u′′
ε(x)
2 + 1
εw
uε(x)
dx =
z
x+
i
ε
x−
i
ε
w′′
ε,i(y)
2 + w
wε,i(y)
dy
=
z
x+
i
ε +1
x−
i
ε −1
w′′
ε,i(y)
2 + w
wε,i(y)
dy −
z 1
0
b
gi
′′(y)
2 + w
b
gi(y)
dy
−
z 1
0
b
hi
′′(y)
2 + w
b
hi(y)
dy ⩾ml−4η,
where we used (4.13) and (4.14).
step 2.
proof of (4.8).
define the functions
\[
l(w, z) := \inf\left\{ \int_{0}^{1} \Big( W\big(f(t)\big) + |f''(t)|^{2} \Big)\,dt : f \in C^{2}\big([0,1]; \mathbb{R}\big),\ f(0) = w,\ f(1) = a,\ f'(0) = z,\ f'(1) = 0 \right\},
\tag{4.15}
\]
\[
j(w, z) := \inf\left\{ \int_{0}^{r} \Big( W\big(j(t)\big) + |j''(t)|^{2} \Big)\,dt : j \in C^{2}\big([0,r]; \mathbb{R}\big),\ j(0) = v(s_0),\ j(r) = w,\ j'(r) = z, \text{ for some } r > 0 \right\}.
\tag{4.16}
\]
analogously to (4.10),
lim
(w,z)→(a,0) l(w, z) = 0, and from (4.4), for all ε sufficiently small we have l
uε(x+
0 ), εu′
ε(x+
0 )
⩽
η. hence we can find an admissible function b
f0 for l
uε(x+
0 ), εu′
ε(x+
0 )
such that
z 1
0
w
b
f0(x)
+
b
f0
′′(x)
2 dx ⩽l
uε(x+
0 ), εu′
ε(x+
0 )
+ η ⩽2η.
(4.17)
we now prove that
lim
w→v(s0)
z(w−v(s0))⩾0
j(w, z) = 0.
(4.18)
fix η > 0 and let w, z ∈r be such that |w −v(0)| < η and z(w −v(0)) ⩾0. if |z| ⩽√η, then take j(x) :=
w + z(x −r) + (v(0)−w)+rz
r2
(x −r)2, which is admissible for j(w, z), to obtain
j(w, z) ⩽c
r + (v(0) −w + rz)2
r3
⩽c
r + η2
r3 + η
r
.
choosing r = √η, we deduce that j(w, z) = o(√η).
if |z| ⩾√η, then let r := w−v(0)
z
> 0, which satisfies 0 < r < √η. then, let j(x) := w + z(x −r), which is admissible
for j(w, z) because j(0) = w −zr = v(0) and j′′(x) = 0, so j(w, z) = o(√η). this proves (4.18).
by (4.18) and (4.5), we may find a function j admissible for j
uε(x−
0,ε), εu′
ε(x−
0,ε)
such that
z r
0
w
j(t)
+
j′′(t)
2 dt ⩽j
uε(x−
0,ε), εu′
ε(x−
0,ε)
+ η ⩽2η,
(4.19)
for some r = r(η) > 0.
set f0(x) := b
f0
x −r −
x+
0 −x−
0,ε
ε
, and define
wε,0(x) :=
a
if x ⩾1 + r +
x+
0 −x−
0,ε
ε
,
f0(x)
if r +
x+
0 −x−
0,ε
ε
⩽x ⩽1 + r +
x+
0 −x−
0,ε
ε
,
uε
ε(x −r) + x−
0,ε
if r ⩽x ⩽r +
x+
0 −x−
0,ε
ε
,
j(x)
if 0 ⩽x ⩽r.
by construction wε,0 belongs to h2
loc(0, ∞) and is admissible for σ
tu(s0), v(s0)
as defined in (1.3). hence for all ε
sufficiently small, we have that
z x+
0
x−
0
ε3
u′′
ε(x)
2 + 1
εw
uε(x)
dx =
z r+
x+
0 −x−
0
ε
r
w′′
ε,0(y)
2 + w
wε,0(y)
dy
=
z 1+r+
x+
0 −x−
0
ε
0
w′′
ε,0(y)
2 + w
wε,0(y)
dy −
z 1
0
b
f0
′′(y)
2 + w
b
f0(y)
dy
−
z r
0
j′′(y)
2 + w
j(y)
dy ⩾σ
tu(s0), v(s0)
−4η,
where we have used (4.17) and (4.19). this proves (4.8). the proof of (4.9) is analogous.
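the quadratic correction used in the proof of (4.18) is again an elementary matching construction; the sympy lines below (illustrative only, with v0 standing for the boundary value v(s₀)) confirm that the competitor j(x) = w + z(x − r) + ((v0 − w) + rz)r⁻²(x − r)² satisfies j(r) = w, j′(r) = z, and j(0) = v0.

```python
import sympy as sp

x, w, z, r, v0 = sp.symbols("x w z r v0")
j = w + z * (x - r) + ((v0 - w) + r * z) / r**2 * (x - r) ** 2

print(sp.simplify(j.subs(x, r) - w))                 # 0: j(r) = w
print(sp.simplify(sp.diff(j, x).subs(x, r) - z))     # 0: j'(r) = z
print(sp.simplify(j.subs(x, 0) - v0))                # 0: j(0) = v0
```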
4.3. compactness for gε
to prove compactness for the functional gε defined in (1.7), we begin with an auxiliary result.
lemma 4.3. let θ ∈l1(j; [0, 1]) and let
x :=
(
x ∈j : −
z
j∩b(x;δ)
θ(s) ds ∈(0, 1) for all 0 < δ < δ0, for some δ0 = δ0(x) > 0
)
be a finite set. then θ ∈bv (j; {0, 1}) and s(θ) ⊂x.
theorem 4.4 (compactness for gε). assume that V : ℝ → [0, ∞) satisfies (H_V^1)–(H_V^3). let j ⊂ ℝ be an open, bounded interval, let {ε_n} be such that ε_n λ_n^{2/3} → l ∈ (0, ∞), and let {v_n} ⊂ H^{3/2}(j) be such that
\[
\sup_{n}\, g_{\varepsilon_n}(v_n; j) < \infty.
\tag{4.20}
\]
then there exist a subsequence (not relabeled) of {v_n} and a function v ∈ BV(j; {α, β}) such that v_n → v in L²(j).
proof. since λn →∞, by (4.20) we have that
c1 := sup
n
z
j
v (vn) dx < ∞.
by condition (hv
3 ) and the fact that j is bounded, we have that
1
c l1(j) + c
z
j
|vn|2 dx ⩽
z
j
v (vn) dx ⩽c1,
and so {vn} is bounded in l2(j). thus by the fundamental theorem of young measures (for a comprehensive exposition
on young measures, see [26,4,23,12]), there exists a subsequence (not relabeled) generating a young measure {νx}x∈j.
letting f(z) := min
v (z), 1
, since λn →∞, we have that
0 = lim
n
z
j
f(vn) dx =
z
j
z
r
f(z) dνx(z) dx.
since f(z) = 0 if and only if z ∈{α, β}, we have that for l1-a.e. x ∈j,
νx = θ(x)δα +
1 −θ(x)
δβ
(4.21)
for some θ ∈l∞ j; [0, 1]
. define
x :=
(
x ∈j : −
z
b(x;δ)
θ(s) ds ∈(0, 1) for all 0 < δ < δ0, for some δ0 = δ0(x) > 0
)
.
(4.22)
we claim that x is finite. to establish this, let s1, . . . , slbe distinct points of x and let 0 < d0 < 1
2 min{|si −sj| : i ̸=
j, i, j = 1, . . . , l}. since si ∈x, we may find di > 0 so small that di ⩽d0 and
−
z
b(si;di)
θ(s) ds > 0,
−
z
b(si;di)
1 −θ(s)
ds > 0.
(4.23)
define d := min{d_1, …, d_l}. let 0 < η < (β − α)/2, let φ_η ∈ C_c^∞(ℝ; [0, 1]) be such that supp φ_η ⊂ B(α; η) and φ_η(α) = 1, and let γ_η ∈ C_c^∞(ℝ; [0, 1]) be such that supp γ_η ⊂ B(β; η) and γ_η(β) = 1.
using the fundamental theorem of young measures with
\[
f(x, z) := \chi_{B(s_i; d_i)}(x)\, \varphi_\eta(z),
\]
we obtain
\[
\lim_{n\to\infty} \int_{B(s_i; d_i)} \varphi_\eta\big(v_n(x)\big)\,dx = \int_{j} \int_{\mathbb{R}} f(x, z)\,d\nu_x(z)\,dx = \int_{B(s_i; d_i)} \theta(x)\,dx > 0.
\tag{4.24}
\]
similarly,
\[
\lim_{n\to\infty} \int_{B(s_i; d_i)} \gamma_\eta\big(v_n(x)\big)\,dx = \int_{B(s_i; d_i)} \big(1 - \theta(x)\big)\,dx > 0.
\tag{4.25}
\]
in view of (4.24) and (4.25), we may find x±
n,i ∈(si −d, si + d) such that
|vn(x−
n,i) −α| < η,
and
|vn(x+
n,i) −β| < η.
let wn(x) := vn
εnλ
−1
3
n x
, which is admissible for the constant cη defined in (3.4). then by (4.20),
∞> c ⩾lim inf
n
gεn(vn; j) ⩾lim inf
n
l
x
i=1
gεn
vn; (x−
n,i, x+
n,i)
⩾lim inf
n
l
x
i=1
εnλ
2
3
ng1
wn;
x−
n,i
εnλ
−1
3
n
,
x+
n,i
εnλ
−1
3
n
⩾cηll.
we conclude that
h0(x) ⩽
c
cηl < ∞.
by lemma 4.3, this implies that θ ∈bv
j; {0, 1}
. in particular, we may write θ = χe, and so νx = δv(x), where
v(x) :=
(
α
if x ∈e,
β
if x ∈j\e.
it follows that {vn} converges in measure to v. by condition (hv
2 ), there are c, t > 0 such that v (z) ⩾c|z|2 for all
|z| ⩾t, and so
z
e∩{|vn|⩾t }
|vn(x′)|2 dx′ ⩽1
c
z
e
v (vn(x′)) dx′ ⩽c1
c
1
λn
.
this implies that {vn} is 2-equi-integrable. apply vitali's convergence theorem to deduce that vn →v in l2(j).
4.4. lower bound for gε
in this section we prove the following theorem.
theorem 4.5 (lower bound for gε). let j ⊂r be an open and bounded interval and let v : r →[0, ∞) satisfy
(hv
1 ) −(hv
3 ). assume that ελ
2
3
ε →l ∈(0, ∞). let v ∈bv
j; {α, β}
and let {vε} ⊂h
3
2 (j) be such that
sup
ε>0
gε(vε; j) =: c < ∞
(4.26)
and vε →v in l2(j) as ε →0+. then
lim inf
ε→0+ gε(vε; j) ⩾clh0(s(v)),
where c ∈(0, ∞) is the constant defined in (1.4).
we begin with some preliminary results.
lemma 4.6. let v : r →[0, ∞) satisfy (hv
1 ) −(hv
3 ), and let v ∈h
3
2 (c, d) be such that tv(c) = w and tv′(c) = z,
for some c, d, z, w ∈r, with c < d and |z| + |w −α| ⩽1. let
f(x) :=
(
v(x)
if c ⩽x ⩽d,
p(x)
if c −1 ⩽x ⩽c,
where p is the polynomial given by
p(x) := α + (3w −3α −z)(x −c + 1)2 + (z + 2α −2w)(x −c + 1)3.
then f ∈h
3
2 (c −1, d),
z d
c−1
z d
c−1
|f ′(x) −f ′(y)|2
|x −y|2
dx dy −
z d
c
z d
c
|v′(x) −v′(y)|2
|x −y|2
dx dy
=
z c
c−1
z c
c−1
|p′(x) −p′(y)|2
|x −y|2
dx dy + 2
z c
c−1
z d
c
|v′(x) −p′(y)|2
|x −y|2
dx dy
⩽c
|z| + |α −w|
2 + 2sv(c),
(4.27)
and
z d
c−1
v (f(x)) dx −
z d
c
v (v(x)) dx =
z c
c−1
v (p(x)) dx ⩽c
|z| + |α −w|
2,
(4.28)
for some constant c = c(v, α) > 0, and where
sv(c) :=
z d
c
|v′(x) −v′(c)|2
|x −c|2
dx.
(4.29)
moreover,
z c
c−1
|p′(x)|2
|x −c + 1|2 dx ⩽c
|z| + |α −w|
2,
(4.30)
z c
c−1
|p′(x)|2
|x −d −1|2 dx ⩽c
|z| + |α −w|
2.
(4.31)
proof. since v ∈c2(r), and v (α) = v ′(α) = 0, by taylor's formula, for any t ∈r, there exists t0 between α and t
such that v (t) = v ′′(t0)
2
(t −α)2.
on the other hand, we have that
|p(x) −α| ⩽|3w −3α −z|(x −c + 1)2 + |z + 2α −2w||x −c + 1|3 ⩽5
|z| + |α −w|
for all x ∈[c −1, c], and so
z c
c−1
v (p(x)) dx ⩽1
2
max
ξ∈[α−5,α+5] |v ′′(ξ)|
z c
c−1
(p(x) −α)2 dx ⩽c
|z| + |α −w|
2.
to estimate the first integral in (4.27), write p′(x) in the following form
p′(x) = z + 2(2z + 3α −3w)(x −c) + 3(z + 2α −2w)(x −c)2,
for all x ∈[c −1, c]. then, for x ∈[c −1, c],
|p′(x) −z| ⩽12
|z| + |α −w|
|x −c|,
(4.32)
while for x, y ∈[c −1, c],
|p′(x) −p′(y)| ⩽18
|z| + |α −w|
|x −y|,
(4.33)
and so
z c
c−1
z c
c−1
|p′(x) −p′(y)|2
|x −y|2
dx dy ⩽c
|z| + |α −w|
2.
to estimate the second integral in (4.27), we have that
z c
c−1
"z d
c
|v′(x) −p′(y)|2
|x −y|2
dx
#
dy ⩽2
z c
c−1
"z d
c
|v′(x) −v′(c)|2
|x −y|2
dx
#
dy + 2
z c
c−1
"z d
c
|p′(y) −z|2
|x −y|2
dx
#
dy,
where we have used the fact that v′(c) = z.
by fubini's theorem,
z c
c−1
"z d
c
|v′(x) −v′(c)|2
|x −y|2
dx
#
dy ⩽
z d
c
|v′(x) −v′(c)|2
|x −c|2
dx = sv(c),
while
z c
c−1
"z d
c
|p′(y) −z|2
|x −y|2
dx
#
dy = (d −c)
z c
c−1
|p′(y) −z|2
(c −y)(d −y) dy ⩽c
2
|z| + |α −w|
2.
this concludes the first part of the proof. to estimate (4.30), we write p′(x) = 2(3w −3α −z)(x −c + 1) + 3(z + 2α −
2w)(x −c + 1)2, so for x ∈(c −1, c) we have
|p′(x)|2 ⩽c
|z| + |α −w|
2(x −c + 1)2.
hence
z c
c−1
|p′(x)|2
|x −c + 1|2 dx ⩽c
|z| + |α −w|
2,
while
z c
c−1
|p′(x)|2
|x −d −1|2 dx ⩽c
|z| + |α −w|
2 z c
c−1
x −(c −1)
x −(d + 1)
2
dx ⩽c
|z| + |α −w|
2.
the estimate for (4.31) is analogous. this completes the proof.
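the extension polynomial p in lemma 4.6 is the cubic hermite interpolant matching the value/derivative pair (α, 0) at c − 1 and (w, z) at c; the short sympy sketch below (illustrative only) confirms the four matching conditions.

```python
import sympy as sp

x, c, w, z, alpha = sp.symbols("x c w z alpha")
p = alpha + (3*w - 3*alpha - z) * (x - c + 1)**2 + (z + 2*alpha - 2*w) * (x - c + 1)**3

checks = [
    sp.simplify(p.subs(x, c - 1) - alpha),            # p(c-1) = alpha
    sp.simplify(sp.diff(p, x).subs(x, c - 1)),        # p'(c-1) = 0
    sp.simplify(p.subs(x, c) - w),                    # p(c)   = w
    sp.simplify(sp.diff(p, x).subs(x, c) - z),        # p'(c)  = z
]
print(checks)   # all four differences simplify to 0
```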
corollary 4.7. let v : r →[0, ∞) satisfy (hv
1 )−(hv
3 ) and let v ∈h
3
2 (c, d) be such that tv(c) = w1 and tv′(c) = z1,
tv(d) = w2 and tv′(d) = z2, for some c, d, z1, z2, w1, w2 ∈r, with c < d and |z1|+|w1 −α| ⩽1 and |z2|+|w2 −β| ⩽1.
let
\[
f(x) :=
\begin{cases}
p_2(x) & \text{if } d \leqslant x \leqslant d+1,\\
v(x) & \text{if } c \leqslant x \leqslant d,\\
p_1(x) & \text{if } c-1 \leqslant x \leqslant c,
\end{cases}
\tag{4.34}
\]
where p_1 and p_2 are the polynomials given by
\[
p_1(x) := \alpha + (3w_1 - 3\alpha - z_1)(x - c + 1)^{2} + (z_1 + 2\alpha - 2w_1)(x - c + 1)^{3},
\]
\[
p_2(x) := \beta + (3w_2 - 3\beta + z_2)(d + 1 - x)^{2} + (2\beta - 2w_2 - z_2)(d + 1 - x)^{3}.
\tag{4.35}
\]
then f ∈h
3
2 (c −1, d + 1),
z d
c
z d
c
|v′(x) −v′(y)|2
|x −y|2
dx dy ⩾
z d+1
c−1
z d+1
c−1
|f ′(x) −f ′(y)|2
|x −y|2
dx dy
−c
|z1| + |z2| + |α −w1| + |β −w2|
2 −cqv(c, d) −2sv(c) −2sv(d),
(4.36)
and
z d
c
v (v(x)) dx ⩾
z d+1
c−1
v (f(x)) dx −c
|z1| + |z2| + |α −w1| + |β −w2|
2,
(4.37)
where c = c(v, α, β) > 0,
qv(c, d) := |v′(c) −v′(d)|2
|c −d|2
,
(4.38)
and sv(*) is defined in (4.29).
proof. the estimate (4.37) follows by applying twice (4.28) in lemma 4.6.
to obtain (4.36), by figure 1, it suffices to estimate the double integrals over the sets s1, s2, and s0.
fig. 1. scheme for the estimates (the regions s₀, s₁, s₂ of the square (c−1, d+1)²).
the estimates on s1 and s2 are a direct consequence of lemma 4.6. to estimate the integral over s0, we observe that
p′
1(x) = z1 + 2(2z1 + 3α −3w1)(x −c) + 3(z1 + 2α −2w1)(x −c)2,
p′
2(x) = z2 + 2(−2z2 + 3β −3w2)(x −d) + 3(z2 −2β + 2w2)(x −d)2,
so for x ∈(c −1, c) and y ∈(d, d + 1), we deduce that
|p′
1(x) −p′
2(y)| ⩽|z1 −z2| + c
|z1| + |z2| + |α −w1| + |β −w2|
|x −y|.
this implies that
z d+1
d
z c
c−1
|p′
1(x) −p′
2(y)|2
|x −y|2
dx dy ⩽c |z1 −z2|2
|d −c|2
+ c
|z1| + |z2| + |α −w1| + |β −w2|
2.
corollary 4.8. let v : r →[0, ∞) satisfy (hv
1 )−(hv
3 ) and let v ∈h
3
2 (c, d) be such that tv(c) = w1, d tv′(c) = z1,
tv(d) = w2, and tv′(d) = z2, for some c, d, z1, z2, w1, w2 ∈r, with c < d, |z1| + |w1 −α| ⩽1, and |z2| + |w2 −β| ⩽1.
let
f(x) :=
β
if x ⩾d + 1,
p2(x)
if d ⩽x ⩽d + 1,
v(x)
if c ⩽x ⩽d,
p1(x)
if c −1 ⩽x ⩽c,
α
if x ⩽c −1,
(4.39)
where p1 and p2 are the polynomials defined in (4.35).
then f ∈h
3
2 (c −1, d + 1), f ′ ∈h
1
2 (r), and
z d
c
z d
c
|v′(x) −v′(y)|2
|x −y|2
dx dy ⩾
z ∞
−∞
z ∞
−∞
|f ′(x) −f ′(y)|2
|x −y|2
dx dy −c
|z1| + |z2| + |α −w1| + |β −w2|
2
−cqv(c, d) −2(1 + d −c)
sv(c) −2sv(d)
−2 log(1 + d −c)
|z1|2 + |z2|2
,
(4.40)
where c = c(v, α, β) > 0, qv(*, *) is defined in (4.38), and sv(*) is defined in (4.29).
proof. by corollary 4.7, we know that
z c+1
−c−1
z d+1
c−1
|f ′(x) −f ′(y)|2
|x −y|2
dx dy −
z d
c
z d
c
|v′(x) −v′(y)|2
|x −y|2
dx dy
⩽c
|z1| + |z2| + |α −w1| + |β −w2|
2 + cqv(c, d) + 2sv(c) + 2sv(d),
so to prove estimate (4.40), it suffices to estimate
z ∞
−∞
z ∞
−∞
|f ′(x) −f ′(y)|2
|x −y|2
dx dy −
z c+1
−c−1
z d+1
c−1
|f ′(x) −f ′(y)|2
|x −y|2
dx dy = 2(i±
4 + i±
5 + i±
6 ),
where the ii's are defined by
i−
4 :=
z c−1
−∞
z c
c−1
|p′
1(x)|2
|x −y|2 dx dy,
i+
4 :=
z ∞
d+1
z d+1
d
|p′
2(x)|2
|x −y|2 dx dy,
i−
5 :=
z ∞
d+1
z c
c−1
|p′
1(x)|2
|x −y|2 dx dy,
i+
5 :=
z c−1
−∞
z d+1
d
|p′
2(x)|2
|x −y|2 dx dy,
i−
6 :=
z c−1
−∞
z d
c
|v′(x)|2
|x −y|2 dx dy,
i+
6 :=
z ∞
d+1
z d
c
|v′(x)|2
|x −y|2 dx dy.
to estimate i−
4 , we compute
i−
4 =
z c
c−1
|p′
1(x)|2
x −c + 1 dx ⩽c
|z1| + |α −w1|
2
by (4.33) (with p replaced by p1), and analogously, i+
4 ⩽c
|z2| + |β −w2|
2.
for i±
5 , we have that
i−
5 =
z c
c−1
|p′
1(x)|2
d + 1 −x dx ⩽
z c
c−1
|p′
1(x)|2
x −c + 1 dx = i−
4 ⩽c
|z1| + |α −w1|
2,
and analogously i+
5 ⩽c
|z2| + |β −w2|
2.
to estimate i−
6 , we write
i−
6 =
z d
c
|v′(x) ± v′(c)|2
x −c + 1
dx ⩽2
z d
c
|v′(x) −v′(c)|2
x −c + 1
dx + 2
z d
c
|v′(c)|2
x −c + 1 dx
(4.41)
⩽2
z d
c
|v′(x) −v′(c)|2
(x −c)2
(x −c)2
x −c + 1 dx + 2|z1|2 log(1 + d −c)
⩽2(d −c)sv(c) + 2|z1|2 log(1 + d −c).
analogously, i+
6 ⩽2(d −c)sv(d) + 2|z2|2 log(1 + d −c). this completes the proof.
proposition 4.9. let j ⊂r be an open and bounded interval and let v : r →[0, ∞) satisfy (hv
1 ) −(hv
3 ). assume
that ελ
2
3
ε →l ∈(0, ∞), and consider a sequence {vε} ⊂h
3
2 (j) such that sup
ε>0
gε
vε; (x−, x+)
< ∞, for some x± ∈j,
with
|vε(x−) −α| ⩽η,
|vε(x+) −β| ⩽η,
εv′
ε(x±)
⩽c,
ε3svε(x±)
⩽c,
ε3qvε(x−, x+)
⩽c,
(4.42)
where c > 0, η > 0, and sv(*) and qv(*, *) are defined in (4.29) and (4.38), respectively.
then
lim inf
ε→0+ gε
vε; (x−, x+)
⩾cl,
(4.43)
where c ∈(0, ∞) is the constant defined in (1.4).
proof. define wε(t) := vε
ελ
−1
3
ε
t
for x ∈j. by the change of variables x = ελ
−1
3
ε
t, y = ελ
−1
3
ε
s, we have
gε
vε; (x−, x+)
= ε
8
z x+
x−
z x+
x−
v′
ε(x) −v′
ε(y)
2
|x −y|2
dx dy + λε
z x+
x−v
vε(x)
dx
= ελ
2
3
ε
1
8
z
x+
ελ
−1
3
ε
x−
ελ
−1
3
ε
z
x+
ελ
−1
3
ε
x−
ελ
−1
3
ε
w′
ε(t) −w′
ε(s)
2
|t −s|2
dt ds +
z
x+
ελ
−1
3
ε
x−
ελ
−1
3
ε
v
wε(t)
dt
(4.44)
let fε be the function given in (4.39) with the choice of parameters
v := wε,
c :=
x−
ελ
−1
3
ε
,
d :=
x+
ελ
−1
3
ε
,
w1 := wε
x−
ελ
−1
3
ε
!
= vε(x−),
w2 := wε
x+
ελ
−1
3
ε
!
= vε(x+),
z1 := w′
ε
x−
ελ
−1
3
ε
!
= ελ
−1
3
ε
v′
ε(x−),
z2 := w′
ε
x+
ελ
−1
3
ε
!
= ελ
−1
3
ε
v′
ε(x+).
by corollary 4.7, (4.44), and the fact that ελ
2
3
ε →l, we have that
gε
vε; (x−, x+)
⩾(l + o(1))
1
8
z
x+
ελ
−1
3
ε
+1
x−
ελ
−1
3
ε
−1
z
x+
ελ
−1
3
ε
+1
x−
ελ
−1
3
ε
−1
|f ′
ε(t) −f ′
ε(s)|2
|t −s|2
dt ds +
z
x+
ελ
−1
3
ε
+1
x−
ελ
−1
3
ε
−1
v (fε(t)) dt
−c
h
ελ
−1
3
ε
|v′
ε(x−)| + |v′
ε(x+)|
+ |α −vε(x−| + |β −vε(x+)|
+qwε
x−
ελ
−1
3
ε
,
x+
ελ
−1
3
ε
!
+ 2swε
x−
ελ
−1
3
ε
!
+ 2swε
x+
ελ
−1
3
ε
!#
.
we claim that f ′
ε ∈h
1
2 (r). if the claim holds, since fε is admissible for the constant c defined in (1.4), and by (4.42),
we have that
gε
vε; (x−, x+)
⩾(l+o(1))c−c(2λ
−1
3
ε
+2η)2−cqwε
x−
ελ
−1
3
ε
,
x+
ελ
−1
3
ε
!
−cswε
x−
ελ
−1
3
ε
!
−cswε
x+
ελ
−1
3
ε
!
. (4.45)
since λ_ε → ∞, to conclude the first part of the proof it remains to estimate the last three terms on the right-hand side of (4.45). by (4.42),
qwε
x−
ελ
−1
3
ε
,
x+
ελ
−1
3
ε
!
=
w′
ε
x+
ελ
−1
3
ε
−w′
ε
x−
ελ
−1
3
ε
x+
ελ
−1
3
ε
−
x−
ελ
−1
3
ε
2
⩽cελ
−4
3
ε
,
(4.46)
while
swε
x−
ελ
−1
3
ε
!
=
z
x+
ελ
−1
3
ε
x−
ελ
−1
3
ε
w′
ε(t) −w′
ε
x−
ελ
−1
3
ε
2
t −
x−
ελ
−1
3
ε
,
2
dt ⩽cλ−1
ε ,
(4.47)
and similarly
swε
x+
ελ
−1
3
ε
!
⩽cλ−1
ε .
(4.48)
thus, by (4.45)–(4.48),
gε
vε; (x−, x+)
⩾(l −o(1))c −c(2λ
−1
3
ε
+ 2η)2 −cελ
−4
3
ε
−cλ−1
ε .
letting first ε →0+ and then η →0+ we obtain (4.43). to complete the proof, we show that
sup
ε |f ′
ε|h
1
2 (r) ⩽c.
starting again from (4.44), but using corollary 4.8 in place of corollary 4.7, we obtain
c ⩾gε
vε; (x−, x+)
⩾(l + o(1))1
8
z ∞
−∞
z ∞
−∞
|f ′
ε(t) −f ′
ε(s)|2
|t −s|2
dt ds
−c
"
ελ
−1
3
ε
|v′
ε(x−)| + |v′
ε(x+)|
+ |α −vε(x−| + |β −vε(x+)|
+ qwε
x−
ελ
−1
3
ε
,
x+
ελ
−1
3
ε
!
+
1 + x+ −x−
ελ
−1
3
ε
!
swε
x−
ελ
−1
3
ε
!
+ swε
x+
ελ
−1
3
ε
!!
+ ε2λ
−2
3
ε
log
1 + x+ −x−
ελ
−1
3
ε
!
|v′
ε(x−)|2 + |v′
ε(x+)|2
#
.
(4.49)
by (4.42), and (4.45)–(4.48), we have
c ⩾gε
vε; (x−, x+)
⩾(l + o(1))|f ′
ε|2
h
1
2 (r) −c(2λ
−1
3
ε
+ 2η)2 −cελ
−4
3
ε
−c
λε
1 + x+ −x−
ελ
−1
3
ε
!
−cλ
−2
3
ε
log
1 + x+ −x−
ελ
−1
3
ε
!
.
(4.50)
since ελ
2
3
ε →l, it follows that for all ε > 0 sufficiently small, we have
(l −o(1))|f ′
ε|2
h
1
2 (r) ⩽c(1 + l) + cη,
where c depends also on x+ −x−. this proves that f ′
ε ∈h
1
2 (r), which completes the proof.
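the rescaling at the start of the proof, w_ε(t) = v_ε(ελ_ε^{−1/3} t), is what produces the common factor ελ_ε^{2/3} in (4.44). assuming, as in (4.42), that the seminorm term of g_ε carries the weight ε³, the sympy lines below verify the elementary change-of-variables bookkeeping for both terms; the symbols are placeholders.

```python
import sympy as sp

eps, lam = sp.symbols("epsilon lambda", positive=True)
rho = eps * lam ** sp.Rational(-1, 3)       # the rescaling x = rho * t

# seminorm term: eps^3 * rho^2 / (rho^2 * rho^2) = eps^3 / rho^2
#   (dx dy = rho^2 dt ds, |x-y|^2 = rho^2 |t-s|^2, v' = w'/rho)
print(sp.simplify(eps**3 / rho**2))         # epsilon * lambda**(2/3)

# potential term: lambda * rho  (from dx = rho dt)
print(sp.simplify(lam * rho))               # epsilon * lambda**(2/3)
```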
proof of theorem 4.5. passing to a subsequence (not relabeled), we can assume that
lim inf
ε→0+ gε(vε; j) = lim
ε→0+ gε(vε; j).
this will allow us to take further subsequences (not relabeled). by proposition 2.5, (4.26), and the growth condition
(hv
2 ), we know that ∥vε∥h1(j) ⩽cε−1.
since v ∈bv
j; {α, β}
, its jump set s(v) is finite, and we write
s(v) = {s1, . . . , sl},
where s1 < * * * < sl. let 0 < d < 1
2 min {si −si−1 : i = 2, . . . , l}, and assume that v = α in (s2j, s2j+1) for j = 0, . . .,
where s0, sl+1 are the endpoints of j. then
lim
k→∞lim inf
ε→0+
z s1
s1−d
"
k|vε(x) −α| + 1
k ε|v′
ε(x)| + ε3
k
z
j
v′
ε(x) −v′
ε(y)
2
|x −y|2
dy
#
dx = 0.
hence, we may find k0 ∈n such that for all k ⩾k0,
lim inf
ε→0+
z s1
s1−d
"
k|vε(x) −α| + 1
k ε|v′
ε(x)| + ε3
k
z
j
v′
ε(x) −v′
ε(y)
2
|x −y|2
dy
#
dx ⩽d.
by fatou's lemma, we have that for k ⩾k0,
1
d
z s1
s1−d
lim inf
ε→0+
"
k|vε(x) −α| + 1
k ε|v′
ε(x)| + ε3
k
z
j
v′
ε(x) −v′
ε(y)
2
|x −y|2
dy
#
dx ⩽1
fix k1 > max
k0, 1
η
. by the mean value theorem, there exists x−
1 ∈(s1 −d, s1) such that
lim inf
ε→0+
"
|vε(x−
1 ) −α| + 1
k2
1
ε|v′
ε(x−
1 )| + ε3
k2
1
z
j
v′
ε(x−
1 ) −v′
ε(y)
2
|x−
1 −y|2
dy
#
< η.
so, up to a subsequence (not relabeled),
|vε(x−
1 ) −α| < η,
ε|v′
ε(x−
1 )| < ηk2
1,
and
ε3
z
j
v′
ε(x−
1 ) −v′
ε(y)
2
|x−
1 −y|2
dy < ηk2
1.
(4.51)
analogously, considering
lim
k→∞lim inf
ε→0+
z s1+d
s1
"
k|vε(x) −α| + 1
k ε|v′
ε(x)| + ε3
k
z
j
v′
ε(x) −v′
ε(y)
2
|x −y|2
dy + ε3
k
v′
ε(x) −v′
ε(x−
1 )
2
|x −x−
1 |2
#
dx = 0,
we may find x+
1 ∈(s1, s1 + d) such that (up to a further subsequence)
|vε(x+
1 ) −β| < η,
ε|v′
ε(x+
1 )| < ηk2
2,
and
ε3
z
j
v′
ε(x+
1 ) −v′
ε(y)
2
|x+
1 −y|2
dy < ηk2
2,
(4.52)
and
ε3
v′
ε(x+
1 ) −v′
ε(x−
1 )
2
|x+
1 −x−
1 |2
< ηk2
2.
(4.53)
we now repeat the process to find points x±
i in (si −d, si + d) with the properties (4.51)–(4.53).
by proposition 4.9, we deduce that
lim inf
ε→0+ gε(vε; j) ⩾
l
x
i=1
lim inf
ε→0+ gε
vε; (x−
i , x+
i )
⩾llc = clh0(s(v)).
5. the n-dimensional case
in this section we prove theorems 1.1 and 1.2.
5.1. compactness
in this subsection we prove theorem 1.1. we follow the argument of [13], which we reproduce for the convenience of
the reader.
theorem 5.1 (compactness in the interior). let ω, w, and v satisfy the hypotheses of theorem 1.1, and let
ελ
2
3
ε →l ∈(0, ∞). consider a sequence {uε} ⊂h2(ω) such that
c1 := sup
ε fε(uε) < ∞,
where fε is the functional defined in (1.1). then there exist a subsequence of {uε} (not relabeled) and a function
u ∈bv (ω; {a, b}) such that uε →u in l2(ω).
proof. for simplicity of notation, we suppose n = 2. the higher dimensional case is treated analogously.
step 1. assume that ω= i × j, where i, j ⊂r are open bounded intervals.
for x ∈ω, we write x = (y, z), with y ∈i, z ∈j. for every function u defined on ωand every y ∈i we denote by
uy the function on j defined by uy(z) := u(y, z), and for every z ∈j we denote by uz the function on i defined by
uz(y) := u(y, z). the functions uy and uz are called one-dimensional slices of u.
we recall that by slicing, if u ∈h2(ω), then uy ∈h2(j) for l1-a.e. y ∈i, uz ∈h2(i) for l1-a.e. z ∈j, and
∂2u
∂z2 (y, z) = d2uy
dz2 (z),
∂2u
∂y2 (y, z) = d2uz
dy2 (y),
for l1-a.e. y ∈i and for l1-a.e. z ∈j.
since |∇²u|² ⩾ max{ |∂²u/∂z²|², |∂²u/∂y²|² }, we immediately obtain that
\[
c_1 \geqslant F_\varepsilon(u) \geqslant \int_{i} f_\varepsilon(u^{y}; j)\,dy,
\qquad
c_1 \geqslant F_\varepsilon(u) \geqslant \int_{j} f_\varepsilon(u^{z}; i)\,dz,
\tag{5.1}
\]
where f_ε is the functional defined in (1.6).
consider a family {uε} ⊂h2(ω) such that fε(uε) ⩽c1 < ∞. then we have that w(uε) →0 in l1(ω). from
condition (hw
2 ), we have the existence of c, t > 0 such that for all |z| ⩾t, w(z) ⩾c|z|2. this implies that {uε}
is 2-equi-integrable and, in particular, it is equi-integrable. therefore, fix δ > 0 and let η > 0 be such that for any
measurable set e ⊂r, with l2(e) ⩽η,
sup
ε>0
z
e
|uε(x)| + |b|
dx ⩽δ.
(5.2)
for ε > 0 we define v_ε : ω → ℝ by
\[
v_\varepsilon(y, z) :=
\begin{cases}
u_\varepsilon^{y}(z) & \text{if } y \in i,\ z \in j, \text{ and } f_\varepsilon(u_\varepsilon^{y}; j) \leqslant \dfrac{c_1\, \mathcal{L}^{1}(j)}{\eta},\\[2mm]
b & \text{otherwise.}
\end{cases}
\]
we claim that {vε} and {uε} are δ-close, i.e., ∥uε −vε∥l1(ω) < δ.
indeed, let zε := {y ∈i : uy
ε ̸= vy
ε}. by (5.1), we have
c1 ⩾
z
i
fε(uy; j) dy,
and so
\[
\mathcal{L}^{1}(z_\varepsilon) \leqslant \mathcal{L}^{1}\left( \left\{ y \in i : f_\varepsilon(u_\varepsilon^{y}; j) > \frac{c_1 \mathcal{L}^{1}(j)}{\eta} \right\} \right) \leqslant \frac{\eta}{c_1 \mathcal{L}^{1}(j)} \int_{i} f_\varepsilon(u_\varepsilon^{y}; j)\,dy \leqslant \frac{\eta}{\mathcal{L}^{1}(j)}.
\]
it follows that l2(zε × j) ⩽η. thus, by (5.2),
∥uε −vε∥l1(ω) ⩽
z
zε×j
|uε(x) −b| dx ⩽
z
zε×j
|uε(x)| + |b|
dx ⩽δ.
moreover, for every y ∈ i we have f_ε(v_ε^{y}; j) ⩽ c₁ℒ¹(j)/η, where we have used the fact that f_ε(b; j) = 0, and therefore theorem 4.1 yields L²(j) precompactness of {v_ε^{y}}. similarly, we can construct a sequence {w_ε} δ-close to {u_ε} so that {w_ε^{z}} is precompact in L²(i) for every z ∈ j.
ε} is precompact in l2(i) for every z ∈j.
using proposition 2.2 we conclude that the sequence {uε} is precompact in l2(ω).
step 2. general case.
this case can be proved by decomposing ωinto a countable union of closed rectangles with disjoint interiors. the fact
that the limit u belongs to bv (ω; {a, b})) is a direct consequence of theorem 2.10.
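the replacement v_ε used in step 1 — keep a slice when its one-dimensional energy is below the threshold c₁ℒ¹(j)/η, overwrite it with the constant b otherwise — is a chebyshev-type selection, and the resulting bound on the overwritten set can be mimicked discretely. the sketch below uses placeholder slice energies drawn at random; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_slices = 10_000
dy = 1.0 / n_slices                        # i = (0, 1), so L^1(i) = 1
len_J = 2.0                                # L^1(j), the slice length
eta = 0.05

energies = rng.exponential(scale=3.0, size=n_slices)   # stand-ins for f_eps(u^y; j)
c1 = np.sum(energies) * dy                              # plays the role of the energy bound

threshold = c1 * len_J / eta
bad = energies > threshold                              # slices that get overwritten by b
measure_bad = np.sum(bad) * dy

# chebyshev: the measure of the overwritten set is at most eta / L^1(j)
print(measure_bad, eta / len_J, measure_bad <= eta / len_J)
```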
theorem 5.2 (compactness at the boundary). let ω, w, and v satisfy the hypotheses of theorem 1.1, and let
ελ
2
3
ε →l ∈(0, ∞). consider a sequence {uε} ⊂h2(ω) such that
c := sup
ε fε(uε) < ∞,
where fε is the functional defined in (1.1). then there exist a subsequence of {uε} (not relabeled) and a function
v ∈bv (∂ω; {α, β}) such that tuε →v in l2(∂ω).
to prove this theorem we introduce the localization of the functionals fε: for every open set a ⊂ωwith boundary of
class c2, for every borel set e ⊂∂a, and for every u ∈h2(a), we set
fε(u; a, e) :=
z
a
ε2
∇2u
2 + 1
εw(u)
dx + λε
z
e
v (tu) dhn−1.
note that for u ∈h2(ω), fε(u) = fε(u; ω, ∂ω).
we begin by proving compactness on the boundary in the special case in which a = ω∩b, where b is a ball centered
on ∂ωand e = b ∩∂ωis a flat disk. later on we will show that this flatness assumption can be dropped when b is
sufficiently small.
proposition 5.3. for every r > 0, let dr be the open half-ball
dr := {x = (x′, xn) ∈rn : |x| < r, xn > 0}
and let
er := {x = (x′, 0) ∈rn : |x| < r}.
let w and v satisfy the hypotheses of theorem (1.1), and let ελ
2
3
ε →l ∈(0, ∞). consider a sequence {uε} ⊂h2(dr)
such that
c1 := sup
ε>0
fε(uε; dr, er) < ∞.
then there exist a subsequence of {u_ε} (not relabeled) and a function v ∈ BV(e_r; {α, β}) such that Tu_ε → v in L²(e_r).
proof. to simplify the notation, we write d and e in place of dr and er.
the idea of the proof is to reduce to the statement of theorem 4.4 via a suitable slicing argument.
fix i = 1, . . . , n −1 and let eei := {y ∈rn−2 : (y, xi, 0) ∈e for some xi ∈r}. for every y ∈eei, define the sets
dy := {(xi, xn) ∈r2 : (y, xi, xn) ∈d},
ey := {xi ∈r : (y, xi, 0) ∈e}.
for every y ∈eei and every function u : d →r, let uy : dy →r be the function defined by
uy(xi, xn) := u(y, xi, xn),
(xi, xn) ∈dy,
and for every function v : e →r, let vy : ey →r be defined by
vy(xi) := v(y, xi),
xi ∈ey.
if u ∈h2(d), then by the slicing theorem in [27] for ln−2-a.e. y ∈eei, the function uy belongs to h2(dy), for l2-a.e.
(xi, xn) ∈d,
∂u
∂xk
(y, xi, xn) = ∂uy
∂xk
(xi, xn),
for k = i, n,
and
∂2u
∂xk∂xj
(y, xi, xn) =
∂2uy
∂xk∂xj
(xi, xn),
for k, j = i, n,
and the trace of uy on ey agrees l1-a.e. in ey with (tu)y. taking into account these facts and fubini's theorem, for
every ε > 0 we get
fε(u; d, e) ⩾ε3
z
d
|d2u(x)|2 dx + λε
z
e
v (tu(x′, 0)) dx′
⩾
z
eei
ε3
z
dy |d2
xi,xn uy(xi, xn)|2 dxi dxn + λε
z
ey v (tuy(xi, 0)) dxi
dy.
we apply the trace inequality (2.1) to each function uy to obtain
fε(u; d, e) ⩾
z
eei
gε(tuy; ey) dy,
(5.3)
where gε is the functional defined in (1.7) to prove that the sequence {tuε} is precompact in l2(e), it is enough to
show that it satisfies the conditions of proposition 2.2. since
c1 = sup
ε>0
fε(uε; d, e) < ∞,
(5.4)
we have that
sup
ε>0
λε
z
e
v
tuε(x′, 0)
dx′ ⩽c1.
from condition (hv
2 ), we may find c, t > 0 such that for all |z| ⩾t, v (z) ⩾c|z|2, and so
z
e∩{|t uε|⩾t}
tuε(x′, 0)
2 dx′ ⩽2
c
z
e
v
tuε(x′, 0)
dx′ ⩽2c1
c
1
λε
.
this implies that {tuε} is 2-equi-integrable. in particular, it is equi-integrable. thus to apply proposition 2.2, it
remains to show that for every δ > 0 there is a sequence {vε} ⊂l1(e) that is δ-close to tuε, in the sense of definition
2.1, and such that {vy
ε} is precompact in l1(ey) for ln−2-a.e. y ∈eei.
fix δ > 0, let η > 0 be a constant that will be fixed later, and let
vε(y, xi) :=
(
tuy
ε(xi)
if y ∈eei, x ∈ey, and gε(tuy
ε; ey) ⩽c1
η ,
α
otherwise.
(5.5)
note that although vε is no longer in h
3
2 (e), for every y ∈eei, either vy
ε = tuy
ε ∈h
3
2 (ey), or vy
ε ≡α, and so vy
ε
always belongs to h
3
2 (ey). we claim that {vε} is δ-close to {tuε}. indeed, by fubini's theorem,
∥tuε −vε∥l1(e) ⩽
z
zei
z
ey |tuy
ε(xi) −α| dxi dy ⩽
z
zei
z
ey (|tuy
ε(xi)| + |α|) dxi dy,
where zei := {y ∈eei : tuy
ε ̸= vy
ε} =
y ∈eei : gε(tuy
ε; ey) > c1
η
. since {tuε} is equi-integrable, to prove that
the right-hand side of the previous inequality is less than δ, it suffices to show that the ln−1 measure of the set
h := {(y, xi) : y ∈zei, xi ∈ey} can be made arbitrarily small. again by fubini's theorem and the definition of zei,
ln−1(h) =
z
zei
l1(ey) dy ⩽2rln−2(zei) ⩽η
c1
z
zei
gε(tuy
ε; ey) dy ⩽η,
where we have used (5.4) and the fact that l1(ey) ⩽2r ⩽1 for r ⩽1
2. thus if η is chosen sufficiently small, we have
that {vε} is δ-close to {tuε}.
to prove that {v_ε^{y}} is precompact for ℒⁿ⁻²-a.e. y ∈ e_{e_i}, it suffices to consider only those y ∈ e_{e_i} such that g_ε(Tu_ε^{y}; e^{y}) ⩽ c₁/η (since otherwise v_ε^{y}(x_i) ≡ α and there is nothing to prove). for these y ∈ e_{e_i}, the precompactness follows from theorem 4.4.
hence we are in a position to apply proposition 2.2 to conclude that {tuε} is precompact in l1(e). thus, up to a
subsequence (not relabeled), we may assume that there exists a function v ∈l1(e) such that tuε →v in l1(e). note
that since {tuε} is 2-equi-integrable, it follows by vitali's convergence theorem that tuε →v in l2(e).
it remains to show that v ∈bv (e; {α, β}). indeed, replacing u by uε in (5.3), and passing to the limit as ε →0+, by
fatou's lemma we deduce that
∞> lim inf
ε→0+ fε(uε; d, e) ⩾
z
eei
lim inf
ε→0+ gε(tuy
ε; ey) dy,
which implies that lim inf
ε→0+ gε(tuy
ε; ey) is finite for ln−2-a.e. y ∈eei. since tuε →v in l2(e), up to a subsequence (not
relabeled), we have that tuy
ε →vy in l2(e) for ln−2-a.e. y ∈eei. then proposition 2.12 yields vy ∈bv (ey; {α, β})
and
lim inf
ε→0+ fε(uε; d, e) ⩾
z
eei
clh0(svy) dy.
(5.6)
the right-hand side of (5.6) is finite, so proposition 2.12 implies that v ∈bv (e; {α, β}), and that svy agrees with
sv ∩ey for a.e. y ∈eei.
to prove compactness in the general case, i.e., where ωis not flat, we introduce the notion of isometry defect following
[2].
definition 5.4 (isometry defect). given a1, a2 ⊂rn open sets and a bi-lipschitz homeomorphism ψ : a1 →a2 of
class c2(ai; rn), the isometry defect δ(ψ) of ψ is the smallest constant δ such that
ess sup
x∈a1
dist
dψ(x), o(n)
+ dist
d2ψ(x), 0
⩽δ,
where o(n) :=
a : rn →rn linear mappings, aat = in
.
proposition 5.5. let ω, w, and v satisfy the hypotheses of theorem 1.1. given a1, a2 ⊂rn open sets and a
bi-lipschitz homeomorphism ψ : a1 →a2 of class c2(ai; rn) such that ψ has finite isometry defect and maps a set
a′
1 ⊂∂a1 onto a′
2 ⊂∂a2. then for every u ∈h2(a2) there holds
fε(u; a2, a′
2) ⩾
1 −δ(ψ)
n+4fε(u ◦ψ; a1, a′
1) −δ(ψ)
1 −δ(ψ)
2ε3
z
a2
(
d2u
du
+ δ(ψ)
du
2
dx.
(5.7)
proposition 5.6. let ω⊂rn be an open and bounded set of class c2 and let dr := {x ∈rn : |x| < r, xn > 0}.
then for every x ∈∂ω, there exists rx > 0 such that for every 0 < r < rx, there exists a bi-lipschitz homeomorphism
ψr : dr →ω∩b(x; r) such that
(i) ψr maps dr onto ω∩b(x; r) and er := br ∩{xn = 0} onto ∂ω∩b(x; r);
(ii) ψr is of class c2 in dr and ∥dψr −in∥∞+ ∥d2ψr∥∞⩽δr, where δr
r→0+
−
−
−
−
→0+.
for a proof of propositions 5.5 and 5.6 we refer to [2]. we now turn to the proof of theorem 5.2.
proof of theorem 5.2. in view of proposition 5.6 and a simple compactness argument we can cover ∂ωwith finitely
many balls bi centered on ∂ωso that ω∩bi is the image of a half-ball under a map ψi with isometry defect smaller
than 1. hence it suffices to show that the sequence {tuε} is precompact in l2(∂ω∩bi) for every i.
fix i and let e
uε := uε ◦ψi. since the isometry defect of ψi is smaller than 1, proposition 5.5 implies that
supε fε(e
uε; dr, er) < ∞. hence the precompactness of the traces tuε in l2(∂ω∩bi) is a consequence of the precom-
pactness of the traces t e
uε in l2(er), which follows from proposition 5.3. this completes the proof.
we are finally ready to prove theorem 1.1.
proof of theorem 1.1. let {uε} ⊂h2(ω) be a sequence such that c := supε fε(uε) < ∞. then, by theorem 5.1, we
may find a subsequence uεn ∈h2(ω) and a function u ∈bv (ω; {a, b}) such that uεn →u in l2(ω).
on the other hand, by applying theorem 5.2 to the sequences {εn} and {uεn}, which still satisfy c = supn fεn(uεn) <
∞, we may find a further subsequence {uεnk } of {uεn} and a function v ∈bv (∂ω; {α, β}) such that tuεnk →
v in l2(∂ω). note that we still have uεnk →u in l2(ω). this completes the proof.
5.2. lower bound in rn
before proving the lower bound estimate in the general n-dimensional case, we state an auxiliary result.
lemma 5.7. let μ, μ1, and μ2 be nonnegative finite radon measures on rn, such that μ1 and μ2 are mutually
singular, and μ(b) ⩾μi(b) for i = 1, 2, and for any open ball b such that μ(∂b) = 0.
then for any borel set e, μ(e) ⩾μ1(e) + μ2(e).
proof of theorem 1.2(i). we now have all the necessary auxiliary results to prove the lower bound estimate for the
critical regime.
consider a sequence {uε} ⊂h2(ω) and two functions u ∈bv (ω; {a, b}) and v ∈bv (∂ω; {α, β}) such that uε →u in
l2(ω) and tuε →v in l2(∂ω).
we claim that
lim inf
ε→0+ fε(uε; ω) ⩾mperω(ea) +
x
z=a,b
x
ξ=α,β
σ(z, ξ)hn−1 {tu = z} ∩{v = ξ}
+ clper∂ω(fα).
(5.8)
without loss of generality, we may assume that
∞> lim inf
ε→0+ fε(uε; ω) = lim
ε→0+ fε(uε; ω).
(5.9)
for every ε > 0 we define a measure με for all borel sets e ⊂rn by
με(e) := ε3
z
ω∩e
|d2uε|2 dx + 1
ε
z
ω∩e
w(uε) dx + λε
z
∂ω∩e
v (tuε) dhn−1.
since με = fε(uε), it follows by (5.9) that by taking a subsequence (not relabeled), we obtain a finite measure μ such
that με
⋆
⇀μ in the sense of measures.
for every borel set e ⊂rn define the measures:
μ1(e) := mperω∩e(ea);
μ2(e) :=
x
z=a,b
x
ξ=α,β
σ(z, ξ)hn−1 {tu = z} ∩{v = ξ} ∩e
;
μ3(e) := clper∂ω∩e(fα).
these three measures are mutually singular and so, by lemma 5.7, (5.8) is a consequence of μ(b) ⩾μi(b) for i = 1, 2, 3
for any ball b with μ(∂b) = 0, which we prove next.
take b an open ball such that μ(∂b) = 0.
using a slicing argument as in theorem 5.1 (see (5.1) for n = 2) and fatou's lemma, we have
μ(b) = lim
ε→0+ με(b) ⩾
z
ωe∩b
lim inf
ε→0+ fε(uy
ε; by) dhn−1(y)
⩾
z
ωe∩b
mh0(suy
ε ∩by) +
z
∂by σ(tuy(s), vy(s))dh0(s)
dhn−1(y)
⩾mperω∩b(ea) +
z
∂ω∩b
σ(tu(s), v(s))dhn−1(s) = μ1(b) + μ2(b),
where we have used theorem 4.2 and proposition 2.12.
by section 2.5, the jump set of v, sv, is (n −2)-rectifiable. hence by the lebesgue decomposition theorem, the
radon-nikodym theorem, and the besicovitch derivation theorem, for hn−2-a.e. x ∈sv,
dμ
dhn−2⌊sv
(x) = lim
r→0+
μ
b(x; r)
hn−2 b(x; r) ∩sv
∈r.
(5.10)
fix a point x ∈sv for which (5.10) holds and that has density 1 for sv with respect to the hn−2 measure. take r > 0
such that μ
∂b(x; r)
= 0. find ψr as in proposition 5.6 and set uε := uε◦ψr and v := v◦ψr. then v ∈bv (er; {α, β})
and tuε →v in l2(er), where er is defined in proposition 5.6. since μ
∂b(x; r)
= 0, we have
μ
b(x; r)
= lim
ε με
b(x; r)
= lim
ε fε
uε; ω∩b(x; r), ∂ω∩b(x; r)
⩾(1 −δ(ψr))n+4 lim inf
ε
z
(er)e
gε(tuy
ε; ey
r ) dhn−2(y)
⩾cl (1 −δ(ψr))n+4
z
(er)e
h0(sv ∩ey
r ) dhn−2(y).
hence,
dμ
dhn−2⌊sv
(x) ⩾lim
r→0+
μ
b(x; r)
αn−2rn−2 ⩾cl lim
r→0+ −
z
(er)e
h0(sv ∩ey
r ) dhn−2(y) = cl,
and so
μ(b) ⩾
z
sv∩b
dμ
dhn−2⌊sv
(x) dhn−2(x) ⩾clper∂ω∩b(fα) = μ3(b).
this concludes the proof of the theorem.
5.3. upper bound
in this subsection we will obtain an estimate for the upper bound.
first we prove the result on a smooth setting, i.e., assuming that both su and sv are of class c2. we define a recovery
sequence separately in the different regions of figure 2. in proposition 5.8, we define it on a2, then we construct the
recovery sequence on a1 in proposition 5.9 and in corollary 5.10 we glue the last two sequences together to make
{u}n. then in proposition 5.11, on the setting of a flat domain where sv has also been flattened, we first construct
the recovery sequence on t1 and then glue it to the previously constructed sequence {un} on t2. in proposition 5.12
we adapt the sequence of proposition 5.11 to a general domain, but still under smooth assumptions.
finally, using a diagonalization argument, we prove the upper bound result without regularity conditions.
fig. 2. partition of ω for the construction of the recovery sequence (regions a₁, a₂, b₁, t₁, t₂; u = a, b in the bulk and v = α, β on the boundary).
in what follows, given a set e ⊂rn and ρ > 0 we denote by eρ the set eρ := {x ∈rn : dist (x, e) < ρ}.
proposition 5.8. let w : ℝ → [0, ∞) satisfy (H_W^1)–(H_W^2), let ε_n → 0⁺, let η > 0, and let u ∈ BV(ω; {a, b}) be such that S_u is an (n−1)-dimensional manifold of class C². then there exists a sequence {z_n} ⊂ H²(ω) such that z_n → u in L²(ω),
\[
z_n = u \quad \text{in } \omega \setminus (S_u)_{c\varepsilon_n},
\tag{5.11}
\]
\[
\|z_n\|_\infty \leqslant c, \qquad \|\nabla z_n\|_\infty \leqslant \frac{c}{\varepsilon_n}, \qquad \|\nabla^{2} z_n\|_\infty \leqslant \frac{c}{\varepsilon_n^{2}},
\tag{5.12}
\]
and
\[
F_{\varepsilon_n}(z_n; \omega, \emptyset) \leqslant (m + \eta)\, \mathcal{H}^{n-1}(S_u) + o(1),
\tag{5.13}
\]
where m is the constant defined in (1.2) and c > 0.
proof. by the definition of m, we may find r > 0 and a function f ∈h2
loc(r) such that f(−t) = a and f(t) = b for
all t ⩾r, and
z r
−r
|f ′′(t)|2 + w
f(t)
dt ⩽m + η.
(5.14)
since su is a manifold of class c2 in rn, there exists δ0 > 0 such that for all 0 < δ ⩽δ0 the points in the tubular
neighborhood uδ := {x ∈rn : dist (x, su) < δ} of the manifold su admit a unique smooth projection onto su. define
the function zn : ω→r by
zn(x) :=
f
du(x)
εn
if x ∈urεn ∩ω,
a
if x ∈ea\urεn,
b
if x ∈ω\(ea ∪urεn),
where du : rn →r is the signed distance to su, negative in ea and positive outside ea and where we recall that
ea := {x ∈ω: u(x) = a}.
we then have
fεn(zn; ω, ∅) =
z
ω
"
ε3
n
1
ε2
n
f ′′ du(x)
εn
∇du(x) × ∇du(x) + 1
εn
f ′ du(x)
εn
hu(x)
2
+ 1
εn
w
f
du(x)
εn
#
dx,
where hu is the hessian matrix of du. change variable via the diffeomorphism x := ψ1(y, t), where ψ1 : su×(−δ0, δ0) →
uδ0 is defined by ψ1(y, t) := y + tνu(y), with νu(y) the normal vector to su at y pointing away from ea. let ju(y, t)
denote the jacobian of this map. then
fεn(zn; ω, ∅) ⩽1
εn
z
su
z rεn
−rεn
f ′′ t
εn
2
|∇du(ψ1(y, t))|2 + w
f
t
εn
ju(y, t) dt dhn−1(y)
+ εn
z
su
z rεn
−rεn
f ′ t
εn
2
|hu(ψ1(y, t))|2ju(y, t) dt dhn−1(y)
+ c
z
su
z rεn
−rεn
f ′′ t
εn
f ′ t
εn
|∇du(ψ1(y, t))|2|hu(ψ1(y, t))|ju(y, t) dt dhn−1(y),
which reduces to
fεn(zn; ω, ∅) ⩽1
εn
z
su
z rεn
−rεn
f ′′ t
εn
2
+ w
f
t
εn
ju(y, t) dt dhn−1(y)
+ c
z
su
z rεn
−rεn
εn
f ′ t
εn
2
+
f ′′ t
εn
2
f ′ t
εn
2
dt dhn−1(y) =: i1 + i2,
where we took into account the facts that the gradient of the distance is 1, and the jacobian ju and the hessian hu
of the distance are uniformly bounded. we have
i1 ⩽
sup
y∈su,
t∈(−rεn,rεn)
ju(y, t)
!
1
εn
z
su
z rεn
−rεn
f ′′ t
εn
2
+ w
f
t
εn
dt dhn−1(y)
=
1 + o(1)
z
su
z r
−r
h
|f ′′(s)|2 + w (f(s))
i
ds dhn−1(y),
⩽
1 + o(1)
(m + η)hn−1(su),
where we used (5.14) and the fact that since su is a compact manifold, ju(y, t) converges to 1 uniformly as t →0.
on the other hand, by (5.14),
i2 ⩽cεn
z r
−r
h
εn |f ′(s)|2 + |f ′′(s)| |f ′(s)|
i
ds ⩽cεn.
we conclude that fεn(zn; ω, ∅) ⩽(m + η)hn−1(su) + o(1). this completes the proof.
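the scaling used in the proof above is already visible in one dimension: for z_ε(x) = f(x/ε) one has ∫ (ε³|z_ε″|² + ε⁻¹w(z_ε)) dx = ∫ (|f″|² + w(f)) dt exactly, by the change of variables x = εt. the sketch below checks this numerically for a sample double-well w and a sample C² profile f; neither is the optimal profile, they are illustrative choices only.

```python
import numpy as np

a, b = 0.0, 1.0
W = lambda s: (s - a) ** 2 * (s - b) ** 2          # sample double-well potential

def f(t):
    """C^2 transition profile: a for t <= -1, b for t >= 1 (quintic smoothstep)."""
    s = np.clip((t + 1.0) / 2.0, 0.0, 1.0)
    return a + (b - a) * (6 * s**5 - 15 * s**4 + 10 * s**3)

def energy_1d(eps, n=200_000, half_width=1.5):
    x = np.linspace(-half_width, half_width, n)
    dx = x[1] - x[0]
    z = f(x / eps)
    zxx = np.gradient(np.gradient(z, dx), dx)
    return np.sum(eps**3 * zxx**2 + W(z) / eps) * dx

t = np.linspace(-1.5, 1.5, 200_000)
dt = t[1] - t[0]
fpp = np.gradient(np.gradient(f(t), dt), dt)
reference = np.sum(fpp**2 + W(f(t))) * dt           # int |f''|^2 + W(f) dt

for eps in [0.1, 0.05, 0.02]:
    print(eps, energy_1d(eps), reference)            # all printed values should be close
```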
proposition 5.9. let w : r →[0, ∞) satisfy (hw
1 )−(hw
2 ), let v : r →[0, ∞) satisfy (hv
1 )−(hv
3 ) let εn →0+ be
such that εnλ
2
3
n →l ∈(0, ∞), let η > 0, let ωδ := {x ∈ω: dist (x, ∂ω) < δ} for δ > 0, and let u ∈bv (ω; {a, b}) and
v ∈bv (∂ω; {α, β}), with su an n −1 manifold of class c2 such that hn−1(∂ω∩su) = 0 and sv an n −2 manifold
of class c2 . then there exist r = r(η) > 0 and a sequence {vn} ⊂h2(ωrεn) such that tvn →v in l2(∂ω),
ln n
x ∈ωrεn\ωrεn
2
: vn(x) ̸= u(x)
o
⩽cε2
n,
(5.15)
∥vn∥∞⩽c,
∥∇vn∥∞⩽c
εn
,
∥∇2vn∥∞⩽c
ε2
n
,
(5.16)
and
fεn(vn; ωrεn, ∅) ⩽
x
z=a,b
x
ξ=α,β
(σ(z, ξ) + η)hn−1({x ∈∂ω: tu(x) = z, v(x) = ξ}) + o(1),
(5.17)
where σ(z, ξ) is the constant defined in (1.3).
proof. by the definition of σ(*, *), for every z ∈{a, b} and ξ ∈{α, β} there exist rzξ > 0 and gzξ ∈h2
loc(r) such that
gzξ(0) = z, gzξ(x) = ξ for all x ⩾rzξ, and
z rzξ
0
|g′′
zξ(x)|2 + w(gzξ(x))
dx ⩽σ(z, ξ) + η.
(5.18)
define r := max{r, raα, rbα, raβ, rbβ}, where r is the number r given in the previous proposition. since ∂ωis an
n −1 manifold of class c2, there exists δ0 > 0 such that every point x ∈ωδ0 admits a unique projection π(x) onto
∂ωand the map x ∈ωδ0 7→π(x) is of class c2. hence we may partition ωδ0 as follows
ωδ0 =
[
z=a,b
[
ξ=α,β
azξ
∪su ∪π−1(sv),
where azξ :=
x ∈ωδ0\
su ∪π−1(sv)
: (tu)(π(x)) = z, v(π(x)) = ξ
. let n be so large that rεn ⩽δ0 and define
gn : ωrεn →r as follows
gn(x) :=
gzξ
d(x)
εn
if x ∈azξ ∩ωrεn for some z ∈{a, b} and ξ ∈{α, β},
0
if x ∈
su ∪π−1(sv)
∩ωrεn,
where, as before, d : ω→[0, ∞) is the distance to ∂ω.
note that the functions gn are discontinuous across
su ∪π−1(sv)
∩ωrεn, and so they are not admissible for
fεn. to solve this problem, let φ ∈c∞((0, ∞); [0, 1]) be such that φ ≡0 in
0, 1
3
and φ ≡1 in
1
2, ∞
, and let
du : ωδ0 →[0, ∞) and dv : ωδ0 →[0, ∞) denote the distance to su and to π−1(sv), respectively. since su is an
n −1 manifold of class c2, it follows that du is of class c2 in a neighborhood p1 := {x ∈ωδ0 : du(x) < δ1} of su.
similarly, since sv is an n −2 manifold of class c2 by taking δ0 smaller, if necessary, we may assume that π−1(sv) is
an n −1 dimensional manifold of class c2 and thus dv is of class c2 in a neighborhood p2 := {x ∈ωδ0 : dv(x) < δ2}
of π−1(sv). let n be so large that rεn < 1
3 min{δ1, δ2} and for x ∈ωrεn define
vn(x) := φ
du(x)
rεn
φ
dv(x)
rεn
gn(x).
since φ ≡0 in
0, 1
3
, it follows that vn(x) = 0 for all x ∈ωrεn such that du(x) < 1
3rεn or dv(x) < 1
3rεn. as gn is
regular away from su ∪π−1(sv), it follows that vn ∈h2(ωrεn).
we claim that tvn →v in l2(∂ω). indeed, since hn−1(∂ω∩su) = 0, we know that
hn−1
x ∈∂ω: du(x) < 1
2rεn
⩽cεn,
(5.19)
and similarly, since sv is an n −2 manifold contained in ∂ω,
hn−1
x ∈∂ω: dv(x) < 1
2rεn
⩽cεn.
(5.20)
on the other hand, if x ∈∂ωis such that du(x) ⩾1
2rεn and dv(x) ⩾1
2rεn, then vn = gn in a neighborhood of x,
and so by the definition of the sets azξ and the fact that gzξ(0) = z, it follows that vn(x) = v(x). hence by (5.19) and
(5.20), ∥vn −v∥l2(∂ω) →0, which proves the claim.
it remains to prove (5.17). let
ln :=
x ∈ωrεn : du(x) < 1
2rεn
,
mn :=
x ∈ωrεn : dv(x) < 1
2rεn
.
step 1.
we begin by estimating fε in the set ωrεn\(ln ∪mn). since in this set vn = gn, we have that
fεn(vn; ω\(ln ∪mn), ∅) ⩽
x
z=a,b
x
ξ=α,β
fεn (gn; azξ ∩ωrεn, ∅) .
thus it suffices to estimate fεn(gn; azξ ∩ωrεn).
let a′
zξ := azξ ∩∂ω, which satisfies a′
zξ = {x ∈∂ω: tu(x) = z, v(x) = ξ}. we have
fεn(gn; azξ, ∅) =
z
azξ
"
ε3
n
1
ε2
n
g′′
zξ
d(x)
εn
∇d(x) × ∇d(x) + 1
εn
g′
zξ
d(x)
εn
h(x)
2
+ 1
εn
w
gzξ
d(x)
εn
#
dx,
where h is the hessian matrix of d. change variable via the diffeomorphism x := ψ2(y, t), where ψ2 : ∂ω×
0, δ1
→ωδ1,
defined by ψ2(y, t) := y + tν(y), with ν(y) the normal vector to ∂ωat y pointing to the inside of ω. we write j(y, t)
the jacobian of this map. then
fεn(gn; azξ, ∅) ⩽
z
a′
zξ
z rεn
0
1
εn
g′′
zξ
t
εn
2
|∇d(ψ2(y, t))|2 + 1
εn
w
g′
zξ
t
εn
+ εn
g′′
zξ
t
εn
2
|h(ψ2(y, t))|2
+ c
g′′
zξ
t
εn
g′
zξ
t
εn
|∇d(ψ2(y, t))|2|h(ψ(y, t))|
j(y, t) dt dhn−1(y),
which reduces to
fεn(gn; azξ, ∅) ⩽
1
εn
z
a′
zξ
z rεn
0
g′′
zξ
t
εn
2
+ w
gzξ
t
εn
j(y, t) dt dhn−1(y)
+ c
z
a′
zξ
z rεn
0
εn
g′
zξ
t
εn
2
+
g′
zξ
t
εn
g′
zξ
t
εn
dt dhn−1(y)
=: i1 + i2,
where we took into account the facts that the gradient of the distance is 1, and the jacobian j and the hessian h of
the distance are uniformly bounded. we have
i1 ⩽
sup
y∈a′
zξ,
t∈(0,rεn)
j(y, t)
!
1
εn
z
a′
zξ
z rεn
0
g′′
zξ
t
εn
2
+ w
gzξ
t
εn
dt dhn−1(y)
⩽
1 + o(1)
(σ(z, ξ) + η)hn−1({tu = z, v = ξ}),
where we used the fact that since ∂ωis a compact manifold, j(y, t) converges to 1 uniformly as t →0. on the other
hand
i2 ⩽cεn
z r
0
h
εn
g′
zξ(s)
2 +
g′′
zξ(s)
2
g′
zξ(s)
2i
ds ⩽cεn.
we conclude that fεn(gn; azξ, ∅) ⩽(σ(z, ξ) + η)hn−1({tu = z, v = ξ}) + o(1).
step 2.
we estimate the energy in ln ∪mn.
we have
fεn(vn; ln\mn, ∅) =
z
ln\mn
ε3
n
φ
du(x)
rεn
g′′
n(x) +
2
rεn
g′
n(x)φ′ du(x)
rεn
∇du × ∇du
+
1
r2ε2
n
gn(x)φ′′ du(x)
rεn
hu
2
+ 1
εn
w
φ
du(x)
rεn
gn(x)
dx,
where hu is the hessian matrix of du. then
fεn(vn; ln\mn, ∅) ⩽c
z
ln\mn
ε3
n|g′′
n(x)|2 + εn|g′
n(x)|2 +
1
r4εn
|vn(x)|2
+ lim sup
n
1
εn
w
φ
du(x)
rεn
gn(x)
dx ⩽c 1
εn
|ln| ⩽cεn,
where we took into account the facts that the hessian hu is uniformly bounded, and that vn is uniformly bounded,
g′
n is bounded by
c
εn , and g′′
n is bounded by
c
ε2
n .
we conclude that fεn(vn; ln\mn, ∅) = o(1). similarly, we may prove that fεn(vn; mn, ∅) = o(1). this concludes the
proof.
corollary 5.10. let w : r →[0, ∞) satisfy (hw
1 ) −(hw
2 ), let v : r →[0, ∞) satisfy (hv
1 ) −(hv
3 ). let εn →0+
be such that εnλ
2
3
n →l ∈(0, ∞), let η > 0, and let u ∈bv (ω; {a, b}) and v ∈bv (∂ω; {α, β}), with su an n −1
manifold of class c2 such that hn−1(∂ω∩su) = 0 and sv an n −2 manifold of class c2 . then there exists a
sequence {un} ⊂h2(ω) such that un →u in l2(ω), tun →v in l2(∂ω),
∥un∥∞⩽c,
∥∇un∥∞⩽c
εn
,
∥∇2un∥∞⩽c
ε2
n
,
(5.21)
and
fεn(un; ω, ∅) ⩽(m + η)hn−1(su) +
x
z=a,b
x
ξ=α,β
(σ(z, ξ) + η)hn−1({x ∈∂ω: tu(x) = z, v(x) = ξ}) + o(1)
(5.22)
where m and σ(z, ξ) are the constant defined, respectively, in (1.2) and (1.3).
proof. let φ ∈c∞((0, ∞); [0, 1]) be such that φ ≡0 in
0, 1
2
and φ ≡1 in (1, ∞) and let
un(x) := φ
d(x)
rεn
zn(x) +
1 −φ
d(x)
rεn
vn(x),
for x ∈ω, where the functions zn and vn are defined, respectively, in propositions 5.8 and 5.9, r is the number given
in the previous proposition, and d is the distance to the boundary.
fig. 3. scheme for the gluing of the discontinuity set of u to the boundary ∂ω when there is no discontinuity in v.
since tun = tvn, it follows that tun →v in l2(∂ω).
on the other hand, since ∥vn∥∞⩽c, ln ({x ∈ω: d(x) ⩽rεn}) →0, and zn →u in l2(ω), we have that un →u in
l2(ω). moreover, by (5.13) and (5.17),
fεn(un; ω; ∅) ⩽fεn(zn; ω\ω2rεn; ∅) + fεn(vn; ωrεn; ∅) + fεn(un; pn; ∅)
⩽(m + η)hn−1(su) +
x
z=a,b
x
ξ=α,β
(σ(z, ξ) + η)hn−1({x ∈∂ω: tu(x) = z, v(x) = ξ})
+ lim sup
n
fεn(un; pn; ∅) + o(1),
where pn :=
x ∈ω: 1
2rεn < d(x) < 2rεn
.
to estimate the last term, note that by (5.11) and (5.15), ℒⁿ({x ∈ p_n : u_n(x) ≠ u(x)}) ⩽ cε_n², and so by the continuity of w,
\[
\frac{1}{\varepsilon_n} \int_{p_n} w(u_n)\,dx \;\leqslant\; \frac{1}{\varepsilon_n} \Big( \max_{\overline{B(0; l)}} w \Big)\, \mathcal{L}^{n}\big( \{x \in p_n : u_n(x) \neq u(x)\} \big) \;\leqslant\; c\varepsilon_n \to 0,
\]
where l := sup_n ∥u_n∥_∞.
on the other hand, we have that ∇un(x) = 0 and ∇2un(x) = 0 for ln-a.e. x ∈en := {x ∈pn : un(x) = u(x)}, while
for x ∈pn\en,
|∇2un(x)|2 ⩽c
1
ε4
n
|
|zn(x)|2 + |vn(x)|2
+ 1
ε2
n
|
|∇zn(x)|2 + |∇vn(x)|2
+ |
|∇2zn(x)|2 + |∇2vn(x)|2
⩽c
ε4
n
,
where we used the bounds on zn and vn given in (5.12) and (5.16). hence
ε3
n
z
pn
|∇2un|2 dx = ε3
n
z
pn\en
|∇un|2 dx ⩽c
εn
ln(pn\en) ⩽cεn,
which completes the proof.
proposition 5.11. let w : r → [0, ∞) satisfy (hw1)-(hw2), let v : r → [0, ∞) satisfy (hv1)-(hv3). let εn → 0+
be such that εnλn^(2/3) → l ∈ (0, ∞), let η > 0, let dr := {x ∈ rn : |x| < r, xn > 0}, and let er := {(x′, 0) ∈ rn−1 × r :
|x′| < r}. also let u ∈ bv(dr; {a, b}) and v ∈ bv(er; {α, β}), with su an n−1 manifold of class c2 such that
hn−1(er ∩ su) = 0 and sv = {x ∈ er : xn−1 = 0}. then there exists {un} ⊂ h2(dr) such that un → u in l2(dr),
tun → v in l2(er), and
lim sup_n fεn(un; dr, er) ⩽ (m + η)hn−1(su) + Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)hn−1({tu = z, v = ξ}) + (c + η) l hn−2(sv),
where m, σ, and c are the constants defined in (1.2), (1.3), and (1.5), respectively.
proof. first we prove the result for n = 2 and then treat the n-dimensional case.
step 1.
assume that n = 2.
substep 1a.
by the definition of c there exists r > 0 and a function h ∈ h^(3/2)_loc(r) satisfying h(−t) = α and h(t) = β
for all t ⩾ r, and
(7/16) ∫∫_{r²} |h′(t) − h′(s)|²/|t − s|² dt ds + ∫_r v(h(t)) dt ⩽ c + η.  (5.23)
define
w(t, s) := (1/(2s)) ∫_{t−s}^{t+s} h(τ) dτ.  (5.24)
by proposition 2.9, we have that w ∈ h²_loc(r × (0, ∞)), tw = h, and
∫∫_{△r} |d²w(t, s)|² dt ds ⩽ (7/16) ∫_{−r}^{r} ∫_{−r}^{r} |h′(t) − h′(s)|²/|t − s|² dt ds,
where △r := t+_{2r} − (r, 0) and t+_{2r} := {(t, s) ∈ r² : 0 < s < r, s < t < 2r − s}. for (x, y) ∈ △rρn define
wn(x, y) := w(x/ρn, y/ρn), where ρn = εnλn^(−1/3).
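for convenience, the scaling behind the next chain of identities can be recorded explicitly (a short aside added here, in the notation of the text): since wn(x, y) = w(x/ρn, y/ρn),
\[
  \bigl|\nabla^2_{(x,y)} w_n(x,y)\bigr|^2
   = \rho_n^{-4}\,\bigl|\nabla^2_{(t,s)} w\bigr|^2\!\Bigl(\tfrac{x}{\rho_n},\tfrac{y}{\rho_n}\Bigr),
  \qquad dx\,dy=\rho_n^{2}\,dt\,ds,
\]
\[
  \varepsilon_n^3\iint\bigl|\nabla^2_{(x,y)}w_n\bigr|^2\, dx\,dy
   = \frac{\varepsilon_n^3}{\rho_n^{2}}\iint\bigl|\nabla^2_{(t,s)}w\bigr|^2\, dt\,ds
   = \varepsilon_n\lambda_n^{2/3}\iint\bigl|\nabla^2_{(t,s)}w\bigr|^2\, dt\,ds,
\]
using ρn = εnλn^(−1/3); the same substitution produces the factors εnλn^(−2/3) for the potential term and εnλn^(2/3) for the boundary term in the computation below.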
then
fε(wn, △rρn, (−rρn, rρn) × {0}) = ∫∫_{△rρn} [ ε³n|∇²_(x,y)wn(x, y)|² + (1/εn) w(wn(x, y)) ] dx dy + λn ∫_{−rρn}^{rρn} v(twn(x)) dx
= ∫∫_{△rρn} [ (ε³n/ρ⁴n) |∇²_(t,s)w(x/ρn, y/ρn)|² + (1/εn) w(w(x/ρn, y/ρn)) ] dx dy + λn ∫_{−rρn}^{rρn} v(tw(x/ρn)) dx
= ∫∫_{△r} [ εnλn^(2/3)|∇²_(t,s)w(t, s)|² + εnλn^(−2/3) w(w(t, s)) ] dt ds + εnλn^(2/3) ∫_{−r}^{r} v(tw(t)) dt
⩽ (l + o(1)) [ (7/16) ∫_{−r}^{r} ∫_{−r}^{r} |h′(t) − h′(s)|²/|t − s|² dt ds + ∫_{−r}^{r} v(th(t)) dt ] + cε²n,
where we used the fact that w is continuous and ∥w∥∞ ⩽ c. thus
fε(wn, △rρn, (−rρn, rρn) × {0}) ⩽ l [ (7/16) ∫_{−r}^{r} ∫_{−r}^{r} |h′η(t) − h′η(s)|²/|t − s|² dt ds + ∫_{−r}^{r} v(thη(t)) dt ] + o(1).  (5.25)
substep 1b.
to complete this step, we need to match the function wn to the function un given in corollary 5.10
(with n = 2 and ω := dr).
consider the function ũn(x, y) := ψn(x, y)wn(x, y) + (1 − ψn(x, y)) un(x, y) for (x, y) ∈ r², where ψn ∈ c∞(r ×
(0, ∞); [0, 1]) satisfies ψn ≡ 1 in t+_{rρn}, ψn ≡ 0 in rn\t+_{rεn}, and
∥∇ψn∥∞ ⩽ c/εn  and  ∥∇²ψn∥∞ ⩽ c/ε²n.  (5.26)
fig. 4. (a) close-up view of t2rεn and t2rρn; (b) domain of integration after change of variables, divided into regions.
since twn = tun = v in (△rεn\△rρn) ∩ er, we have that tũn = v on (△rεn\△rρn) ∩ er. hence v(tũn) = 0 in
(△rεn\△rρn) ∩ er. thus, it suffices to estimate
fε(ũn; ln, ∅) = ∫_{ln} [ ε³n|∇²ũn(x, y)|² + (1/εn) w(ũn(x, y)) ] dx dy,  (5.27)
where ln := △rεn\△rρn.
by young's inequality and (5.26), for (x, y) ∈ ln we have
|∇²ũn(x, y)|² ⩽ (1 + η)|∇²wn(x, y)|² + cη [ |∇²un(x, y)|² + (1/ε²n)( |∇wn(x, y)|² + |∇un(x, y)|² )
+ (1/ε⁴n)( |wn(x, y)|² + |un(x, y)|² ) ],  (5.28)
and so
ε³n ∫∫_{ln} |∇²ũn(x, y)|² dx dy ⩽ (1 + η)ε³n ∫∫_{ln} |∇²wn(x, y)|² dx dy + c ∫∫_{ln} [ εn|∇wn(x, y)|² + (1/εn)|wn(x, y)|² ] dx dy
+ c ∫∫_{ln} [ ε³n|∇²un(x, y)|² + εn|∇un(x, y)|² + (1/εn)|un(x, y)|² ] dx dy =: i1 + i2 + i3.  (5.29)
to estimate i1, note that
ε³n ∫∫_{ln} |∇²_(x,y)wn(x, y)|² dx dy = (ε³n/ρ⁴n) ∫∫_{ln} |∇²_(t,s)w(x/ρn, y/ρn)|² dx dy = εnλn^(2/3) ∫∫_{(1/ρn)ln} |∇²w(t, s)|² dt ds.  (5.30)
extend w to t_{rεn/ρn} using (5.24). since w(t′, ·) is even, by proposition 2.9 and (5.30), we have
ε³n ∫∫_{ln} |∇²_(x,y)wn(x, y)|² dx dy = εnλn^(2/3) ∫∫_{(1/ρn)ln} |∇²_(t,s)w(t, s)|² dt ds
⩽ (7/16)εnλn^(2/3) ∫∫_{(1/ρn)ln} |(h′(s + t) − h′(s − t))/(2t)|² ds dt
⩽ (7/32)εnλn^(2/3) ∫∫_{(1/ρn)ln} |(h′(s + t) − h′(s − t))/(2t)|² ds dt
= (7/64)εnλn^(2/3) ∫∫_{(−rλn^(1/3), rλn^(1/3))²\[−r,r]²} |(h′(w) − h′(z))/(w − z)|² dz dw.
the integral we are estimating is the integral over the "square annulus" in figure 4(b). note that on all four corner
squares of figure 4(b), we have h′(z) = h′(w) = 0, so the integral reduces to
i1 ⩽ (7/32)εnλn^(2/3) [ ∫_{−r}^{r} ∫_{r}^{rλn^(1/3)} |h′(w)/(w − z)|² dz dw + ∫_{−r}^{r} ∫_{−rλn^(1/3)}^{−r} |h′(w)/(w − z)|² dz dw ]  (5.31)
⩽ (7/32)(l + o(1)) [ ∫_{−r}^{r} ∫_{r}^{∞} |h′(w)/(w − z)|² dz dw + ∫_{−r}^{r} ∫_{−∞}^{−r} |h′(w)/(w − z)|² dz dw ].
to estimate i2, note that since ∥wn∥∞ ⩽ c, we have
(1/εn) ∫_{ln} |wn(t, s)|² dt ds ⩽ cεn  (5.32)
and by hölder's inequality, proposition 2.6 and (5.31), we obtain that
εn ∫∫_{ln} |∇wn(t, s)|² dt ds ⩽ cεn [ ∥wn∥_{l²(ln)}∥∇²wn∥_{l²(ln)} + ∥wn∥²_{l²(ln)} ]
⩽ c [ ( εn^(−1/2)∥wn∥_{l²(ln)} )( εn^(3/2)∥∇²wn∥_{l²(ln)} ) + εn∥wn∥²_{l²(ln)} ] ⩽ c(√εn + ε³n).  (5.33)
combining (5.32) and (5.33) yields
i2 ⩽ c√εn.  (5.34)
we estimate i3 using (5.21). precisely,
i3 ⩽ (c/εn) l²(ln) ⩽ cεn.  (5.35)
finally, using the fact that ũn is bounded in l∞(ln), we have
(1/εn) ∫∫_{ln} w(ũn(x, y)) dx dy ⩽ (c/εn) l²(ln) ⩽ cεn.  (5.36)
using (5.27), (5.29), (5.31), (5.34), (5.35), and (5.36), we obtain that
fε(ũn, ln, ∅) ⩽ (7l/32) [ ∫_{−r}^{r} ∫_{r}^{∞} |h′(w)/(w − z)|² dz dw + ∫_{−r}^{r} ∫_{−∞}^{−r} |h′(w)/(w − z)|² dz dw ] + o(1).  (5.37)
combine (5.25) and (5.37) to obtain
fε(ũn, △rεn, (−rεn, rεn) × {0}) ⩽ l [ (7/16) ∫_{−∞}^{∞} ∫_{−∞}^{∞} |h′(w) − h′(z)|²/|w − z|² dz dw + ∫_{−∞}^{∞} v(h(z)) dz ] + o(1) ⩽ c + η + o(1).  (5.38)
on the other hand, from corollary 5.10 we know that
fε(ũn, dr\t+_{rεn}, ∅) = fε(un, dr\t+_{rεn}, ∅)
⩽ (m + η)h¹(su) + Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)h¹({tu = z, v = ξ}) + o(1).  (5.39)
the result follows by combining (5.38) and (5.39).
step 2.
general n-dimensional problem.
in this case, we define un(x) := ũn(xn−1, xn) for x = (x′′, xn−1, xn) ∈ dr. by fubini's theorem and step 1, we
deduce that
fε(un, b+r, er) = ∫_{bn−2r} fεn(ũn; dx′′r, ex′′r) dx′′
⩽ ∫_{bn−2r} [ (m + η)h¹(su ∩ ({x′′} × r²)) + Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)h¹({tu = z, v = ξ} ∩ ({x′′} × r²))
+ (c + η) l h⁰(sv ∩ ({x′′} × r²)) ] dx′′ + o(1).
using theorem 2.12, we then deduce that
lim sup_{n→∞} fε(un, b+r, er) ⩽ (m + η)hn−1(su) + Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)hn−1({tu = z, v = ξ}) + (c + η) l hn−2(sv).
this completes the proof.
proposition 5.12. let w : r → [0, ∞) satisfy (hw1)-(hw2), let v : r → [0, ∞) satisfy (hv1)-(hv3), let εn → 0+
with εnλn^(2/3) → l ∈ (0, ∞), let η > 0, and let u ∈ bv(ω; {a, b}) and v ∈ bv(∂ω; {α, β}), with su an n−1 manifold of
class c2 such that hn−1(∂ω ∩ su) = 0 and sv an n−2 manifold of class c2. then there exists {un} ⊂ h2(ω) such
that un → u in l2(ω), tun → v in l2(∂ω), and
lim sup_n fεn(un) ⩽ (m + η)hn−1(su) + Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)hn−1({tu = z, v = ξ}) + (c + η) l hn−2(sv),
where m, σ, and c are the constants defined in (1.2), (1.3), and (1.5), respectively.
proof. from corollary 5.10, it suffices to prove that
lim sup_n fεn(un; ω ∩ b(x0; r), ∂ω ∩ b(x0; r)) ⩽ (m + η)hn−1(su ∩ b(x0; r))
+ Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)hn−1({tu = z, v = ξ} ∩ b(x0; r)) + (c + η) l hn−2(sv ∩ b(x0; r)),  (5.40)
for each point x0 ∈ sv and for some neighborhood a of x0.
fig. 5. scheme for the flattening of the boundary and of sv.
first we fix a point x0 ∈ sv. since the domain is of class c2, we can find r > 0 such that, up to a rotation,
∂ω ∩ b(x0; r) = {x ∈ rn : xn = γ1(x′)},  (5.41)
for some function γ1 ∈ c2(rn−1). so we define ψ1(x) := (x′, xn − γ1(x′)), u(y) := (u ◦ ψ1^(−1))(y), and v(y) :=
(v ◦ ψ1^(−1))(y).
moreover, sv is also of class c2, so we can find 0 < r < r such that, up to a "horizontal rotation", i.e., r = [ r′ 0 ; 0 1 ],
with r′ ∈ so(n − 1), we have
sv ∩ b(x0; r) = {y ∈ rn−1 × {0} : yn−1 = γ2(y′′)},
for some function γ2 ∈ c2(rn−2). let ψ2(y) := (y′′, yn−1 − γ2(y′′), yn), u(z) := (u ◦ ψ2^(−1))(z) = (u ◦ (ψ2 ◦ ψ1)^(−1))(z),
v(z) := (v ◦ (ψ2 ◦ ψ1)^(−1))(z).
let φ := ψ2 ◦ ψ1 : rn → rn, which is a bi-lipschitz homeomorphism. moreover, its isometry defect δr vanishes as
r → 0 due to the regularity of both ∂ω and sv.
let z0 := φ(x0) ∈ sv. note that dr := φ(ω ∩ b(x0; r)) is a neighborhood of z0, and set er := φ(∂ω ∩ b(x0; r)). let
{un} ⊂ h2(dr) be defined as in proposition 5.11 with u and v. then from proposition 5.5, we have that
fεn(un ◦ φ; ω ∩ b(x0; r), ∂ω ∩ b(x0; r)) ⩽ (1 − δr)^(−(n+4)) fεn(un; dr, er)
+ (δr/(1 − δr)^(n+2)) ε³n ∫_{dr} |∇²un(z)| ( |∇un(z)| + δr|∇un(z)|² ) dz.
on the other hand, by hölder and propositions 2.6 and 5.11, we have that
ε³n ∫_{b+(z0;r)} |∇²un(z)| ( |∇un(z)| + δr|∇un(z)|² ) dz ⩽ cεn,
and that
fεn(un; dr, er) ⩽ (m + η)hn−1(su ∩ dr) + Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)hn−1({tu = z, v = ξ} ∩ er)
+ (c + η) l hn−2(sv ∩ er) + o(1).
moreover,
hn−1(su ∩ dr) = hn−1(s(u ◦ φ^(−1)) ∩ dr) = hn−1( φ(su) ∩ φ(ω ∩ b(x0; r)) ) ⩽ hn−1( φ(su ∩ b(x0; r)) )
⩽ lip(φ)^(n−1) hn−1( su ∩ b(x0; r) ) = hn−1( su ∩ b(x0; r) ),
because φ is an isomorphism. analogously, we deduce that
hn−1({tu = z, v = ξ} ∩ dr) ⩽ hn−1({tu = z, v = ξ} ∩ b(x0; r)),
hn−2(sv ∩ dr) ⩽ hn−2(sv ∩ b(x0; r)).
hence
lim sup_n fεn(un ◦ φ; ω ∩ b(x0; r), ∂ω ∩ b(x0; r)) ⩽ (1 − δr)^(−(n+4)) [ (m + η)hn−1(su ∩ b(x0; r))
+ Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + η)hn−1({tu = z, v = ξ} ∩ b(x0; r)) + (c + η) l hn−2(sv ∩ b(x0; r)) ].
this proves the result.
proof of theorem 1.2(ii). since u ∈ bv(ω; {a, b}), we may write u as
u(x) = a if x ∈ ea,  u(x) = b if x ∈ ω\ea,
where ea is a set of finite perimeter in ω. similarly, since v ∈ bv(∂ω; {α, β}), we may write v as
v(x) = α if x ∈ fα,  v(x) = β if x ∈ ∂ω\fα,
where fα is a set of finite perimeter in ∂ω. apply proposition 2.11 to the set ea to obtain a sequence of sets ek of
class c2 such that ln(ea△ek) → 0 and hn−1(∂ea ∩ ∂ek) → 0. by slightly modifying each ek, we may assume that
hn−1(∂ω ∩ ∂ek) = 0. similarly, by proposition 2.13 applied to the set fα, we may find a sequence of sets fk ⊂ ∂ω
of class c2 such that hn−1(fα△fk) → 0 and hn−2(∂∂ωfα△∂∂ωfk) → 0. define the sequences of functions
uk(x) := a if x ∈ ω ∩ ek,  uk(x) := b if x ∈ ω\ek,   and   vk(x) := α if x ∈ ∂ω ∩ fk,  vk(x) := β if x ∈ ∂ω\fk.
apply proposition 5.12 to find {uk,n} ⊂ h2(ω) such that uk,n → uk in l2(ω) as n → ∞, tuk,n → vk in l2(∂ω) as n → ∞, and
lim sup_n fεn(uk,n) ⩽ (m + 1/k) perω(ek) + Σ_{z=a,b} Σ_{ξ=α,β} (σ(z, ξ) + 1/k) hn−1({tuk = z} ∩ {vk = ξ}) + (c + 1/k) l per∂ω(fk).
since uk → u in l2(ω) and vk → v in l2(∂ω), we have
lim_k lim_n ∥uk,n − u∥_{l2(ω)} = 0,  lim_k lim_n ∥tuk,n − v∥_{l2(∂ω)} = 0,
lim sup_k lim sup_n fεn(uk,n) ⩽ m perω(ea) + Σ_{z=a,b} Σ_{ξ=α,β} σ(z, ξ) hn−1({tu = z} ∩ {v = ξ}) + c l per∂ω(fα).
diagonalize to get a subsequence kn → ∞ and obtain un := ukn,n → u in l2(ω), tun → v in l2(∂ω), and
lim sup_n fεn(un) ⩽ m perω(ea) + Σ_{z=a,b} Σ_{ξ=α,β} σ(z, ξ) hn−1({tu = z} ∩ {v = ξ}) + c l per∂ω(fα).
this completes the proof.
acknowledgements this research was partially funded by fundação para a ciência e a tecnologia under grant
sfrh/bd/8582/2002, the department of mathematical sciences of carnegie mellon university and its center for
nonlinear analysis (nsf grants no. dms-0405343 and dms-0635983), irene fonseca (nsf grant dms-0401763)
and giovanni leoni (nsf grants no. dms-0405423 and dms-0708039).
the author thanks vincent millot and dejan slepčev for the fruitful conversations, luc tartar for useful conversations
on proposition 2.9, and is indebted to irene fonseca and giovanni leoni for uncountable discussions and advice as
the work progressed that largely influenced its course.
|
0911.1727 | electric field generation by the electron beam filamentation
instability: filament size effects | the filamentation instability (fi) of counter-propagating beams of electrons
is modelled with a particle-in-cell simulation in one spatial dimension and
with a high statistical plasma representation. the simulation direction is
orthogonal to the beam velocity vector. both electron beams have initially
equal densities, temperatures and moduli of their nonrelativistic mean
velocities. the fi is electromagnetic in this case. a previous study of a small
filament demonstrated that the magnetic pressure gradient force (mpgf) results
in a nonlinearly driven electrostatic field. the probably small contribution of
the thermal pressure gradient to the force balance implied that the
electrostatic field performed undamped oscillations around a background
electric field. here we consider larger filaments, which reach a stronger
electrostatic potential when they saturate. the electron heating is enhanced
and electrostatic electron phase space holes form. the competition of several
smaller filaments, which grow simultaneously with the large filament, also
perturbs the balance between the electrostatic and magnetic fields. the
oscillations are damped but the final electric field amplitude is still
determined by the mpgf.
| introduction
the filamentation instability (fi) driven by counterpropagating electron beams amplifies
magnetic fields in astrophysical and solar flare plasmas [1-5] and it is also relevant for
inertial confinement fusion (icf) [6] and laser-plasma interactions in general [7, 8]. it
has been modelled with particle-in-cell (pic) and vlasov codes [9-17] taking sometimes
into account the ion response and a guiding magnetic field. it turns out that the fi
is important, when the beam speeds are at least mildly relativistic and if the beams
have a similar density [18]. otherwise its linear growth rate decreases below those of
the competing two-stream instability or mixed mode instability [19].
the saturation of the fi is attributed to magnetic trapping [9]. more recently, it has
been pointed out [13, 14] that the electric fields are also important in this context. an
electric field component along the beam velocity vector vb is driven by the fi through
the displacement current. this component is typically weak and its relevance to the
plasma dynamics is negligible compared to that of the magnetic and the electrostatic
fields. the fi is partially electrostatic during its linear growth phase, if the electron
beams are asymmetric due to different densities. symmetric electron beams result in
purely electromagnetic waves with wavevectors k ⊥vb [19, 20]. a nonlinear growth
mechanism is provided in this case by the current of the electrons, which have been
accelerated by the magnetic pressure gradient force (mpgf).
the electromagnetic and electrostatic components separate in a 1d simulation box,
because the gradients along two directions vanish in the maxwell's equations.
the
electrostatic field is polarized in the simulation direction, while the electromagnetic
components are polarized orthogonal to it. if both electron beams have an equal density
and temperature, the electrostatic field component along the wavevector k can only
be driven nonlinearly. we select here a direction of our 1d pic simulation box that
is orthogonal to vb, through which this nonlinear mechanism can be examined in an
isolated form. the equally dense and warm counterstreaming beams of electrons have
the velocity modulus |vb| = 0.3c. the ions are immobile and compensate the electron
charge. the mildly relativistic relative streaming speed ≈0.55c implies that the growth
rate of the fi is significant. at the same time, any relativistic mass changes can be
neglected during the growth phase and the saturation of the fi.
the initial conditions of the plasma equal those in the refs. [21, 22]. the size
distribution of the filaments could be sampled with the help of the long 1d simulation
box in ref. [21]. a pair of current filaments, which are small according to this size
distribution, has been isolated in ref. [22]. it could be shown that the electrostatic field
is indeed driven by the mpgf for this filament pair. the electrostatic field performed
undamped oscillations around a background one. the latter exerted the same force on
the electrons as the mpgf. here we assess the influence of the filament size.
this paper is structured as follows. section 2 discusses briefly the pic code, the
initial conditions and the key nonlinear processes.
the results are presented in the
section 3, which can be summarized as follows. the electrons are heated up along the
wavevector k by their interaction with the wave fields. as we increase the filament size
the peak amplitudes grow, which are reached by the magnetic and by the electrostatic
field when the fi saturates. the electron heating increases with the filament size and
large electron phase space holes form, which interact with the electromagnetic fields of
the filamentation modes. the large box sizes allow the growth of more than one wave
and the filamentation modes compete. the electrostatic field oscillations are damped
or inhibited and the amplitude modulus converges to one, which equals that expected
from the mpgf. we confirm that the strength of the electrostatic force on an electron
is comparable to that of the magnetic force, when the fi saturates. the extraordinary
modes are pumped by the fi [14]. the results are discussed in section 4.
2. the pic simulation, the initial conditions and the nonlinear terms
the pic simulation method is detailed in ref. [23]. our code is based on the numerical
scheme proposed by [24]. the phase space fluid is approximated by an ensemble of
computational particles (cps) with a mass mcp and charge qcp that can differ from
those of the represented physical particles. the charge-to-mass ratio must be preserved
though. the maxwell-lorentz equations are solved. the plasma frequency of each beam
with the density ne that we model is ωp = (e²ne/meε0)^0.5, and the total plasma
frequency of both beams is √2 ωp. the electric
and magnetic fields are normalized to en = ee/cmeωp and bn = eb/meωp. the
current is normalized to jn = j/2neec and the charge to ρn = ρ/2nee. the physical
position, the time and speed are normalized as xn = x/λs with λs = c/ωp, tn = tωp
and vn = v/c. the normalized frequency ωn = ω/ωp. we drop the indices n and
x, t, ω, e, b, j and ρ are specified in normalized units. the equations are
∇ × e = −∂tb ,  ∇ × b = j + ∂te,  (1)
∇ · e = ρ,  ∇ · b = 0,  (2)
dtpcp = qcp ( e[xcp] + vcp × b[xcp] ) ,  dtxcp = vcp,x,  (3)
with pcp = mcpγcpvcp.
here vcp,x is the component along x of vcp.
the currents
jcp ∝qcpvcp of each cp are interpolated to the grid. the summation over all cps gives
j, which is defined on the grid. the j updates e and b through (1). our numerical
scheme fulfills (2) as constraints. the new fields are interpolated to the position of each
cp and advance its position xcp and pcp through (3). all components of p are resolved.
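as an illustration of one such update cycle, a minimal sketch in normalized units is given below. this is not the production code of refs. [23, 24]: the nearest-grid-point interpolation, the simple explicit time stepping and the omitted (charge-conserving) current deposition are simplifying assumptions of ours.

import numpy as np

ng = 500                      # grid cells, as in the simulations
lx = 2.0                      # box length l1 of simulation 1
dx = lx / ng
dt = 0.5 * dx                 # time step below the light-crossing time of a cell

ex = np.zeros(ng); ez = np.zeros(ng); by = np.zeros(ng)

def advance_fields(ex, ez, by, jx, jz):
    """faraday's and ampere's laws with d/dy = d/dz = 0 (eqs. 1)."""
    by += dt * np.gradient(ez, dx)          # d_t by = d_x ez
    ez += dt * (np.gradient(by, dx) - jz)   # d_t ez = d_x by - jz
    ex += dt * (-jx)                        # d_t ex = -jx (no curl term in 1d)
    return ex, ez, by

def push_particles(x, px, pz, ex, ez, by, q=-1.0):
    """nonrelativistic sketch of eq. (3) with b = (0, by, 0); electrons have q = -1."""
    i = np.rint(x / dx).astype(int) % ng    # nearest-grid-point interpolation (assumed)
    px += q * dt * (ex[i] - pz * by[i])     # d_t px = q (ex - vz by)
    pz += q * dt * (ez[i] + px * by[i])     # d_t pz = q (ez + vx by)
    x = (x + dt * px) % lx                  # only the position along x is advanced
    return x, px, pz

the actual scheme additionally deposits the particle currents jx and jz onto the grid before the field update, which is the step omitted in this sketch.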
two spatially uniform beams of electrons with qcp/mcp = −e/me move along z.
beam 1 has the mean speed vb1 = vb and the beam 2 has vb2 = −vb1 with vb = 0.3.
both beams have a maxwellian velocity distribution in their respective rest frame with
a thermal speed vth = c−1(kbt/me)0.5 of vb/vth = 18. the negative electron charge is
compensated by an immobile positive charge background. the initial conditions are
ρ, j, e, b = 0. figure 1 displays the k spectrum of the unstable waves. the growth
rates of the fi modes are close to the maximum value, while relativistic effects are
still negligible.
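the beam initialization described above can be written down schematically as follows (our own illustrative loading, not the code's; the particle number is scaled down and the lab-frame gaussian sampling neglects the small relativistic correction at vb = 0.3):

import numpy as np

rng = np.random.default_rng(0)
n_cp = 605_000                         # the simulations use 6.05e7 cps per beam
lx, vb = 2.0, 0.3
vth = vb / 18.0                        # from vb/vth = 18

x   = rng.uniform(0.0, lx, n_cp)       # spatially uniform beams
vx  = rng.normal(0.0, vth, n_cp)       # thermal spread along the simulation direction
vz1 = rng.normal(+vb, vth, n_cp)       # beam 1 drifts along +z
vz2 = rng.normal(-vb, vth, n_cp)       # beam 2 drifts along -z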
the growth rate spectrum with k∥= 0 relevant for our simulations
peaks with δm = 0.29 at kmλs ≈10. a filamentation mode with kmλs = 7 has been
figure 1.
(colour online) the growth rates in units of ωp as a function of the
wavenumber in the full k space, where λsk∥(λsk⊥) points along (orthogonal) to vb.
the growth rates of the fi modes with k∥= 0 are comparable to that of the two-stream
mode with k⊥= 0 and to those of the oblique modes. the growth rates for k∥= 0
decrease to zero for k⊥→0 and they are stabilized at high k⊥by thermal effects. the
growth rate maximum for k∥= 0 is δm = 0.29 and it is reached at kmλs ≈10.
considered in detail previously [22], while we investigate here larger filaments. the box
length l1 = 2 for the simulation 1 and the filamentation mode with k1 = 2π/l1 grows
at the exponential rate 0.92 δm. the box length of the simulation 2 is l2 = 2.8 and the
growth rate of the filamentation mode with k2 = 2π/l2 is 0.86 δm. the growth rates
decrease rapidly for lower k and these modes are no longer observed in pic simulations
[21]. both simulations resolve x by ng = 500 grid cells with the length ∆x and use
periodic boundary conditions. the phase space distributions f1(x, v) of beam 1 and
f2(x, v) of beam 2 are each sampled by np = 6.05 * 107 cps. the total phase space
density is defined as f(x, v) = f1(x, v) + f2(x, v).
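a quick arithmetic check of the statistical resolution (our own back-of-the-envelope estimate) underlines the "high statistical plasma representation" mentioned in the abstract:

n_cp, ng = 6.05e7, 500
print(n_cp / ng)    # ≈ 1.2e5 computational particles per beam and per grid cell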
prior to the saturation of the fi, each electron beam constitutes a fluid with
the index j, which has the density nj(x) = ∫ fj(x, v) dv and the mean velocity
vj(x) = ∫ v fj(x, v) dv. the normalized momentum equation for such a fluid is
∂t(njvj) + ∇(njvjvj) = −∇pj −nje + ∇(bb) −∇b2/2 + b × ∂te, (4)
where the thermal pressure tensor pj is normalized to 2menec². the restriction to one
spatial dimension implies that the gradients along y and z vanish. the fi results in
this case in the initial growth of by and of a weaker electric ez. the thermal pressure
is initially diagonal due to the spatially uniform single-maxwellian velocity distribution.
the x-component of the simplified fluid momentum equation is
∂t(njvj,x) + dx(njv²j,x) = −v²th dxnj − njex − bydxby + by∂tez.  (5)
the thermal pressure gradient v²th dxnj is valid as long as the electron beams have
not been heated up. let us assume that the displacement current and the thermal
pressure gradient can be neglected, leaving us with the term njex and the mpgf as
the key nonlinear terms. the fluid momentum equations can be summed over both
beams and we consider the right hand side of (5). as long as ex is small, the electron
density is not spatially modulated and n1 + n2 ≈ 1. the nonlinear terms cancel out if
ex = −2bydxby. it could be demonstrated for a short filament in ref. [22] that this
is the case, even when the fi just saturated. after the saturation, the ex oscillated in
time with the amplitude eb = −bydxby around a time-stationary eb.
3. simulation results
3.1. the scaling of by, ex and eb with the box length
the beam velocity vb ∥z and the electrons of both beams and their micro-currents are
re-distributed by the fi only along x. the initially charge- and current-neutral plasma
is transformed into one with jz(x, t) ̸= 0. the gradients along the y, z-direction vanish
in our 1d geometry. ampere's law simplifies to dxby = jz + ∂tez, resulting in the
growth of by and ez. the mpgf drives ex. the bx = 0 in the 1d geometry and
ey, bz remain at noise levels. the right-hand side of (5) depends on ex, ez and by, as
well as on their spatial gradients, which should vary with the filament size.
we want to gain qualitative insight into the scaling of the field amplitudes with the
filament size and determine if ex is driven by the mpgf also for the large filaments.
the fields that grow in simulation 1 and 2 are compared to those discussed previously
in ref. [22] that used the box size lc = 0.89. figure 2 shows the respective dominant
fourier component of by, of ex and of 2eb. the amplitude moduli of the mode with
ks = 2π/ls are considered for by and those of the 2ks mode for ex and 2eb. the
subscript s is 1, 2 or c and refers to the respective simulation.
the amplitudes of
by increase with an increasing box size.
after the fi has saturated, we find that
by(k1, t) ≈2by(kc, t) and by(k2, t) ≈2.5by(kc, t). the increase of the saturation value
of by(ks, t) with ls is consistent with magnetic trapping [9]. the magnetic bouncing
frequency ωb = (vbksb[ks, t])1/2 in our normalization. the fi should saturate once ωb is
comparable to the linear growth rate of the fi, which is approximately constant for the
box sizes lc, l1 and l2 (fig. 1). a lower ks supports a larger by(ks, t). the ωb ≈0.2
for simulation 1 is comparable to the linear growth rate ωi ≈0.25.
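the diagnostics behind figure 2 and the magnetic-trapping estimate above can be reproduced schematically as follows (an illustrative sketch only; the synthetic field amplitude, the fft normalization and the finite-difference derivative are our own choices, not those of the simulation code):

import numpy as np

def mode_amplitude(field, m):
    """modulus of the m-th spatial fourier harmonic of a periodic 1d field."""
    return 2.0 * np.abs(np.fft.rfft(field))[m] / field.size

def eb_from_by(by, dx):
    """the nonlinear drive eb(x) = -by d_x by evaluated on the grid."""
    return -by * np.gradient(by, dx)

# synthetic check for the box l1 = 2 with a saturated by of the order seen in fig. 2(a)
l1, ng = 2.0, 500
x = np.linspace(0.0, l1, ng, endpoint=False)
by = 0.04 * np.sin(2.0 * np.pi * x / l1)          # assumed amplitude of order 0.04
eb = eb_from_by(by, l1 / ng)
print(mode_amplitude(by, 1), 2.0 * mode_amplitude(eb, 2))   # by(k1) and 2 eb(2k1)

# magnetic bouncing frequency omega_b = (vb k1 by)^0.5 for these numbers
vb, k1 = 0.3, 2.0 * np.pi / l1
print(np.sqrt(vb * k1 * 0.04))                    # ≈ 0.19, close to the quoted 0.2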
after the saturation, the ex(2k1,2, t) > 2ex(2kc, t) and ex(2k1, t) > ex(2k2, t). the
ex(2k1, t) > 3ex(2kc, t) while l1/lc ≈2.2. the electrostatic potential in simulation 1
is thus larger by a factor 6, which should result in a more violent electron acceleration
than in the box with the length lc. the thermal pressure gradient force is potentially
figure 2.
(colour online) panel (a) compares the by(ks, t) and panel (b) the
ex(2ks, t) in the boxes with the size l1, l2 and lc (dashed curve). eb(2ks, t) (dashed
curve) is compared with ex(2ks, t) (solid curves) for the box size l1 (c) and l2 (d).
more important for larger filaments and it may modify the balance between the
nonlinearly driven ex and the mpgf. however, an excellent match between ex(2k1, t)
and 2eb(2k1, t) is observed for t < 50, due to which the two nonlinear terms on the
right hand side of (5) practically cancel for simulation 1. the ex(2k2, t) ≈2eb(2k2, t)
in simulation 2 for 30 < t < 42 and for 46 < t < 53. both fields disagree in between
these time intervals and a local minimum is observed. the field and electron dynamics
is now examined in more detail for the box lengths l1 and l2.
3.2. simulation 1: box length l1 = 2
figure 3 displays the evolution of the relevant field components. the by(x, t) rapidly
grows and saturates at t ≈45. it is initially stationary in space but it oscillates in
time until t ≈65, which implies that by(x, t) does not immediately go into its stable
saturated state. the by(x, t) shows only one spatial oscillation and the filamentation
mode with the wavelength k1 = 2π/l1 is thus strongest. however, the interval with
the large positive by(x, t ≈45) covers 0 < x < 0.9, while that with the large negative
by(x, t ≈45) is limited to 1.2 < x < 1.7. this mode is thus initially not monochromatic.
the saturated structure formed by by(x, t) drifts after t ≈65 to lower x at a speed
< 0.01 and it remains stationary in its moving rest frame. the ez(x, t) grows initially
in unison with by(x, t) and it is shifted in space by 90◦with respect to by(x, t), as
expected from ampere's law. oscillations of ez(x, t) are spatially correlated with those
of the by(x, t) for 45 < t < 65. the ez(x, t) undergoes a mode conversion at t ≈65
into a time-oscillatory and spatially uniform ez(x, t). figure 3(c) demonstrates that
ex(x, t) is following the drift of by(x, t) towards decreasing x, but that its wavenumber
is twice that of by(x, t).
the by(x, t) is stationary in its moving rest frame, while
ex(x, t > 70) is oscillating around an equilibrium electric field with an amplitude and
spatial distribution that resembles eb(x, t) in fig. 3(d). the electric and the magnetic
figure 3. (colour online) the field amplitudes in the box l1: the panels (a-d) show
by, ez, ex and eb, respectively.
the amplitude of by reaches a time-stationary
distribution, which convects to decreasing x at a speed < 0.01.
the ez and ex
components are oscillatory in space and in time.
the ez is phase-shifted by 90◦
relative to by when the fields saturate at t ≈45. the ex and the eb are co-moving
and ex oscillates in time around a mean amplitude comparable to eb for t > 70.
figure 4. the relevant part of the amplitude spectrum ex(k, t) is displayed for low k
in (a) and (b) shows that of eb(k, t). the wavenumbers are expressed in units of k1.
the amplitude moduli of the dominant modes are displayed for k = 2k1 in (c) and its
first harmonic with k = 4k1 in (d), where the dashed curves correspond to eb.
forces are comparable in their strength, but their distribution differs.
figure 4 compares in more detail the moduli of the amplitude spectra ex(k, t) and
eb(k, t). the amplitudes of the strongest modes fulfill ex(2k1, t) ≈2eb(2k1, t) until
t = 50 (see also fig. 2). the ex(2k1, t) thus overshoots eb(2k1, t) and it oscillates
around it after t = 50. the oscillation is damped and the amplitudes of ex(2k1, t) and
eb(2k1, t) converge. the full spectra ex(k, t) and eb(k, t) reveal that the mode k = 4k1
is also important for t > 100. it probably is a harmonic of the mode with k = 2k1 and
figure 5. (colour online) the 10-logarithm of the phase space densities in units of
cps at the time t = 50 (a-c) and t = 120 (d-f) in the box l1: panels (a,d) show the
total phase space density f(x, pz) with the beam momentum p0 = mevbγ(vb). the
phase space density f1(x, px) of beam 1 is shown in (b,e) and the f2(x, px) of beam
2 in (c,f). both beams are spatially separated and (e,f) reveal cool electron clouds
immersed in a hot electron background with momenta of up to ≈p0.
not an independently growing fi mode. otherwise we would expect that the mode with
k ≈3k1 also grows. the amplitude of ex(4k1, t) is close to that of eb(4k1, t).
a dissipation mechanism for the interplaying jx and ex is present, which causes
the damping and the convergence of ex(x, t) to eb(x, t). the damping persists after
t = 65, when by is quasi-stationary in its moving reference frame. the term by∂tez in
(5) could, in principle, be one dissipation mechanism. however, even at t ≈50 when
∂tez is largest and by has developed in full, this term is weaker by more than one order
of magnitude than the mpgf and the term njex in simulation 1 (not shown). if the
term by∂tez were the damping mechanism, it should have resulted in a noticeable
field damping also in the short simulation box with length lc. a damping of ex(x, t)
has not been observed in ref. [22]. the thermal pressure gradient force may provide
this damping and we examine now the electron phase space density distribution.
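the phase space densities shown in the following figures are two-dimensional histograms of the computational particles; a schematic way of producing one such panel from raw particle data (illustrative only, with assumed array names and bin counts) is

import numpy as np

def phase_space_density(x, px, lx, p_max, bins=(500, 400)):
    """2d histogram of the cps in the (x, px) plane, as used for figs. 5 and 9."""
    f, _, _ = np.histogram2d(x, px, bins=bins, range=[[0.0, lx], [-p_max, p_max]])
    return np.log10(np.maximum(f, 1.0))   # 10-logarithm, empty bins clipped

f1 and f2 would be accumulated from the particles of beam 1 and beam 2 separately, and f(x, pz) is obtained by histogramming pz instead of px and summing both beams before taking the logarithm.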
figure 5 displays the phase space distributions f1(x, px) of the beam 1 and the
f2(x, px) of the beam 2 at the times t = 50 and t = 120. the total phase space density
f(x, pz) is shown for the same times. the beams reveal a high degree of symmetry
already at t = 50 and the filament centres are shifted along x by l1/2. the phase space
structures in fig. 5(b,c) are, however, different at the filament boundaries x ≈0.5 and
x ≈1.5. this difference is responsible for the deviation of the initial by(x, t) from a
sine curve in fig. 3(a). the phase space distribution at late times reveals that the
electrons are heated along px but not along pz. the filament drift to lower x is visible
from figs. 5(a,d) and agrees with the observed one of by(x, t) in fig. 3. the electrons
are accelerated along x to a peak speed ∼vb, which is more than twice that observed
in the box with the length lc [22]. the peak electron kinetic energy due to the velocity
figure 6. the field amplitudes in the box l2: the panels (a-d) show by, ez, ex
and eb = −bydxby, respectively. the amplitude of by reaches a steady state value,
which convects to increasing x at a speed < 0.01. the ez and ex components are
oscillatory in space and in time. the ez is phase-shifted by 90◦relative to by when
the fields saturate at t ≈50. the ex and the eb are co-moving and ex(x) oscillates
in time around a mean amplitude comparable to eb for t > 100.
component along x thus increases by a factor, which is comparable to the increase in
the electrostatic potential as we go from a box with length lc to one with l1. this
strong electron heating is likely to result in higher thermal pressure gradient forces.
the expression dxn1(x) ∫ vx f(x, vx) dvx has been evaluated (not shown) at t = 75 and
its peaks reach values ≈0.1, which are comparable to the mpgf. the width of these
peaks is small compared to the electron skin depth.
movie 1 animates in time the 10-logarithmic phase space distributions f1(x, px)
and f1(x, pz) of the beam 1 in the simulation 1.
the formation of the filaments is
demonstrated.
we observe a dense untrapped electron component immersed in an
electron cloud that has been heated along the simulation direction by the saturation
of the fi. the spatial width of the plasmon containing the dense bulk of the confined
electrons in f1(x, px) oscillates in time. the overlap of the filaments in fig. 5(e,f) is
thus time dependent and related through its current jx(x, t) to the oscillating ex(x, t)
in fig. 3(b). the phase space distribution f1(x, px) reveals small-scale structures (phase
space holes) that gyrate around the centre of the filament. these coherent structures
result in jumps in the thermal pressure.
3.3. simulation 2: box length l2 = 2.8
figure 6 displays the fields that grow in the simulation with the box length l2 = 2.8. the
growth rate map in fig. 1 demonstrates that the fi can drive simultaneously several
modes in the simulation box. the mode with k2 = 2π/l2 ≈2.25 has, for example,
a lower growth rate than that with k ≈2k2.
we observe consequently oscillations
figure 7. the relevant part of the amplitude spectrum ex(k, t) is displayed for low k
in (a) and (b) shows that of eb(k, t). the wavenumbers are expressed in units of k2.
the amplitude moduli of the dominant modes are displayed for k = 2k2 in (c) and its
first harmonic with k = 4k2 in (d), where the dashed curves correspond to eb.
in by(x, t) along x, which are a superposition of several waves with a k ≥k2 during
the initial growth phase 40 < t < 50. these oscillations merge and only one spatial
oscillation of by(x, t) and, thus, a single pair of filaments survive after the saturation
at t ≈50. the magnetic field structure convects to increasing values of x at a speed
< 0.01, but it is stationary in its rest frame after t ≈70. the phase of ez(x, t) is
shifted by 90◦with respect to by(x, t) for 40 < t < 60. the oscillations of ez(x, t)
undergo a mode conversion during 60 < t < 100 and we observe undamped oscillations
with k = 0 for t > 100. the amplitude of these oscillations is modulated on a long
timescale. the ex(x, t) and the by(x, t) show no correlation until t ≈70. thereafter
the spatial amplitude of ex(x, t) oscillates in time around eb(x, t). the force on an
electron imposed by ex(x, t) is comparable to that imposed by vbby(x, t).
a more accurate comparison of ex(x, t) and eb(x, t) is again provided by the
moduli of their spatial amplitude (fourier) spectra, ex(k, t) and eb(k, t).
figure 7
displays ex(k, t) and eb(k, t) and compares in more detail ex(2k2, t) with eb(2k2, t) as
well as ex(4k2, t) with eb(4k2, t). the amplitudes ex(2k2, t) ≈2eb(2k2, t) during the
exponential growth phase of the fi for 25 < t < 45 (see fig. 2), the amplitude moduli
then have a local minimum and continue to grow after this time. we identify the likely
reason from eb(k, t) in fig. 7(b). the eb(3k2, t) competes with eb(2k2, t) at t ≈50.
a large amplitude modulus of eb(3k2, t) evidences that by(x, t) is not a sine wave
at this time. if by ∝sin (k2x), then eb ∝sin (k2x) cos (k2x) and eb(k, t) would be
composed of a wave with k = 2k2. the periodic boundary conditions would also allow
for a by ∝ sin(2k2x) and here eb would involve a wave with k = 4k2. an eb(3k2, t) can
thus not be connected to a single filamentation mode. during the linear growth phase
of the fi prior to t ≈40, the jz(x, t) can form structures with a wideband wavenumber
spectrum (see fig. 1) and their associated by can grow independently.
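the appearance of the 3k2 component of eb can be made explicit with a simple two-mode example (an illustrative calculation of ours, not taken from the paper): for by = b1 sin(k2x) + b2 sin(2k2x) one finds
\[
  e_b = -b_y\,\partial_x b_y
      = -\frac{k_2}{2}\Bigl[\,b_1^2\sin(2k_2x) + 2b_2^2\sin(4k_2x)
        + 3b_1b_2\sin(3k_2x) - b_1b_2\sin(k_2x)\,\Bigr],
\]
so a pure k2 mode of by only feeds the 2k2 component of eb, while any admixture of the 2k2 mode generates the k2, 3k2 and 4k2 components, consistent with the interpretation given above.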
figure 8. (colour online) a time-interval of ez(x, t) and the 10-logarithm of its power
spectrum pez(k, ω) are displayed in (a) and (b). wavenumbers are given in units of
k2. peak 1 is at ω < 0.5 and k = k2. peak 2 is observed at k = k2 and ω ≈1 and peak
3 at k = 0 and ω ≈1. the by(k2, t) is shown in (c), the ez(k2, t) in (d) and ez(0, t)
in (e), all normalized to the maximum of by(k2, t).
once the mpgf in eq.
5 has reached a significant strength, the fi saturates.
the strength of the mpgf increases with k, due to the larger dxby(x, t) of the rapid
oscillations. the by ∝sin (k2x) should maximize the magnetic field strength for a given
mpgf. this may explain why this mode is the dominant one after t = 70 despite its
lower growth rate. the decrease of eb(2k2, t) in fig. 2(d) at t ≈45 is tied to the
saturation of eb(3k2, t). the ex(k, t) in fig. 7(a) has a broadband spectrum within
50 < t < 75, which is probably caused by the current jx arising from the rearrangement
of the filaments. after this time, ex(2k2, t) ≈eb(2k2, t) and ex(4k2, t) ≈eb(4k2, t).
the ex(2k2, t) does not show oscillations around eb(2k2, t), in contrast to simulation 1. the
filament rearrangement inhibits an oscillatory equilibrium between jx and ex.
figure 8 examines the mode conversion of the electromagnetic ez component
observed in fig. 6(b). the pez(k, ω) is the squared modulus of the fourier transform
of ez(x, t) over space and over 45 < t < 100.
the dispersion relation shows three
peaks. peak 1 has a k = k2 and ω < 0.5 and it is tied to the ez(x, t) of the fi mode.
this mode grows exponentially and aperiodically. its frequency spectrum is thus spread
out along ω. its energy can leak into the peak 2 at k = k2 and ω ≈1. the ez(x, t)
is orthogonal to by(x, t) and peak 2 corresponds to an extraordinary mode, similar
to the slow extraordinary mode. peak 3 has a k = 0 and ω ≈1 and it corresponds
to a spatially uniform oscillation in an extraordinary mode branch. the intermittent
behaviour of ez(x, t) in fig. 8(a) results in a broadband spectrum in k and ω. these
turbulent wave fields can couple energy directly to the high-frequency electromagnetic
modes and excite a discrete spectrum if the boundary conditions are periodic [14].
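the dispersion-relation diagnostic pez(k, ω) described above is essentially a windowed two-dimensional fourier transform of the stored field; a schematic implementation (our own sketch, with an assumed array layout ez[t, x], a uniform output cadence and a hanning window) is

import numpy as np

def power_spectrum_k_omega(ez_tx, dt_out, dx):
    """squared modulus of the space-time fourier transform of ez(x, t)."""
    nt, nx = ez_tx.shape
    window = np.hanning(nt)[:, None]                 # reduce leakage from the finite interval
    spec = np.fft.fftshift(np.fft.fft2(ez_tx * window))
    k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx)) * 2.0 * np.pi
    omega = np.fft.fftshift(np.fft.fftfreq(nt, d=dt_out)) * 2.0 * np.pi
    return np.abs(spec) ** 2, k, omega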
the interplay of the waves belonging to the three peaks in fig. 8(a) is assessed
with the moduli of the amplitude spectra by(k2, t), ez(k2, t) and ez(0, t) in figs. 8(c-
figure 9. the 10-logarithmic phase space densities in units of cps at t = 50 (a-c)
and t = 120 (d-f) in the box l2: panels (a,d) show the total distribution f(x, pz)
with p0 = mevbγ(vb). the beam temperature along pz is unchanged. the distribution
f1(x, px) of beam 1 is shown in (b,e) and the f2(x, px) of beam 2 in (c,f). the electrons
of both beams spatially separate and (e,f) reveal a dense electron component immersed
in a tenuous hot electron background, which reaches a thermal width ≈p0.
e). the by(k2, t) and ez(k2, t) grow at the same exponential rate until they saturate
at t ≈50, evidencing that they belong to the same fi mode. the by(k2, t) maintains
its amplitude after t = 50, while ez(k2, t) decreases until t ≈120 and remains constant
thereafter. the ez(0, t) grows in the same time interval to its peak amplitude, which
suggests a parametric interaction between these modes.
the amplitude modulation
in fig. 8(e) must be caused by a beat between two waves, which are similar to the
slow- and fast extraordinary modes in the limit k = 0. both modes are undamped on
the resolved timescales. one may interpret the parametric interaction as a three-wave
coupling between the waves corresponding to the peaks 1-3 in fig. 8(b), resembling
the system of ref. [25]. however, here the by(x, t) varies spatially and the parametric
interaction may involve more of the waves of the spectrum in fig. 8(b).
figure 9 displays the phase space densities f1,2(x, px) and f(x, pz) at the times
t = 50 and t = 120. figure 9(a) demonstrates that the electrons of both beams have
been rearranged by the fi. the filaments have not yet reached the stable symmetric
configuration, because the most pronounced density minima at x ≈1.5 for beam 2 and at
x ≈2.5 for beam 1 are not shifted by l2/2. this asymmetry results in the eb(3k2, t) ̸= 0
and in the broadband ex(k, t) at this time in fig. 7. the spatial gradients of by(x, t)
and ex(x, t) are high at t ≈50 and the lorentz force changes rapidly with x, explaining
the complex phase space structuring in fig. 9(b,c). the phase shift of l2/2 of the
density maxima of both beams has been reached at t = 120 in fig. 9(d). the by(x, t)
is stationary in its rest frame at this time in fig. 6(a). the electrons are heated up
from an initial thermal spread of px/p0 ≈0.05 with p0 = mevbγ(vb) to a peak value of
px ≈p0 in fig. 9(e,f). the mean momentum of each beam varies along pz in response
to a drift imposed by ex(x, t) and by(x, t) but no heating is observed in this direction.
movie 2 shows the 10-logarithmic phase space density projections f1(x, px, t) and
f1(x, pz, t) of beam 1 in the simulation 2. it demonstrates that only the core electrons in
fig. 9 remain spatially confined. the heated electrons, which have in some cases reached
a momentum px that is comparable to the initial beam momentum, are untrapped. the
heated electrons move practically freely and they ensure that the beam confinement is
not perfect. the trapped electrons maintain the jz(x, t) ̸= 0 and, thus, the by(x, t) ̸= 0.
the trapped electrons slowly move to larger values of x. the associated shift of jz(x, t)
causes the slow drift of by(x, t) in fig. 6(a). the movie visualizes the formation of the
phase space beams and their evolution into phase space holes in f1(x, px).
4. discussion
we have examined here the electron beam filamentation instability (fi) in one dimension
and in an initially unmagnetized plasma with immobile ions. the fi has been driven
by nonrelativistic symmetric electron beams with the same initial conditions as those
considered previously [21, 22]. the electric field along the one-dimensional box, which
is oriented orthogonally to the beam velocity vector, can only be generated nonlinearly
if the beams are symmetric [20]. the fluid equations show that the relevant nonlinear
mechanisms can be the magnetic pressure gradient force (mpgf), the thermal pressure
gradient force and a term due to the displacement current. the magnetic tension may
become important in multi-dimensional simulations, but not for initial conditions similar
to ours [26]. the term due to the displacement current is weak in our simulations.
it has been observed in ref.
[22] that the electrostatic field performs after
the saturation of the fi undamped oscillations around a time-stationary background
electric field.
the amplitude of the oscillatory and of the background electric field
are both given by eb(x, t) ≈−bydxby.
the phases of both fields are fixed such
that ex(x, t0) = 2eb(x, t0) at the saturation time t0.
this amplitude ensures that
the nonlinear terms due to the mpgf and due to the electrostatic field cancel each
other approximately in the fluid equations when the fi saturates. the thermal pressure
gradient force did not visibly contribute in the simulation of the small filament pair [26],
possibly because of the only modest heating of the initially cool beams. here we have
assessed the importance of the filament size with the help of two 1d pic simulations,
which used two different box lengths that were larger than that of the 1d box in ref.
[22, 26]. the initial conditions for the plasma were otherwise identical.
we summarize our findings as follows. we have demonstrated for both simulations,
that ex(x, t ≤t0) ≈2eb(x, t ≤t0) during the full exponential growth phase and not
just at the saturation time t0. the fi thus adjusts the electrostatic field during its
exponential growth phase such that the dominant nonlinear terms cancel each other.
magnetic trapping states that the fi saturates, when the magnetic bouncing frequency
is comparable to the linear growth rate.
the exponential growth rates for the two
simulations considered here and that in ref. [22] are close. the amplitude reached by
the magnetic field prior to its saturation thus increases with the box length. we found
that the electrostatic potential driven by the mpgf is 5-6 times stronger for the box
sizes used here than for the short box in ref. [22], while the initial mean kinetic energy
of the electrons is the same. consequently, the electron heating is stronger and the
plasma processes more violent for large filaments. magnetic trapping is, however, not
the exclusive saturation mechanism. the electrostatic forces are comparable in strength
to the magnetic forces when the fi saturates [13, 14].
the electrostatic field during the intermittent phase has differed in our two
simulations from that observed in ref. [22]. the movies demonstrated that this phase
involves the formation of large nonlinear structures (phase space holes) in the electron
distribution, which can result in steep gradients of the thermal pressure and in the
generation of solitary (bipolar) electrostatic wave structures that are independent of the
fields produced by the fi. the thermal pressure gradient force is comparable to that
of the other nonlinear terms, but only over limited spatial intervals. the electric field
component along the beam velocity vector has undergone a mode conversion. its energy
leaked into the high-frequency electromagnetic modes [14].
the wavenumber spectrum of the electrostatic field correlated well with that of
the mpgf in simulation 1, but the peak electric field overshot the expected one.
the electrostatic field performed damped oscillations around eb and both converged
eventually to the same value. the wavenumber spectrum of the electrostatic field in
simulation 2 deviated from that of the mpgf in the intermittent phase. its wavenumber
spectrum was broadband, while that of the mpgf was quasi-monochromatic.
the
amplitude modulus of the electrostatic field at the wavenumber, which corresponds to
the dominant fourier component of the mpgf, jumped to the value expected from the
mpgf. it did not overshoot and it was non-oscillatory.
both simulations here have evidenced that the magnetic field driven by the fi
organized itself such that we obtained one oscillation in the simulation box after the
intermittent phase.
this is remarkable, because the exponential growth rate of the
fundamental wavenumber is below that of its first harmonic. long waves exert a lower
mpgf for a given amplitude and the dominance of the fundamental wavenumber may
thus result from the lower nonlinear damping of this mode compared to that of its
harmonics. the mode with the fundamental wavenumber considered in ref. [22] has
a higher growth rate than its harmonics and the absent mode competition may have
facilitated the undamped oscillations around the equilibrium. however, the amplitude
of the electrostatic field in the two simulations discussed here eventually converged to
that expected from the mpgf and eb is thus a robust estimate for the electrostatic
field driven by the mpgf for the considered case. this robustness explains why a
connection between the electrostatic field and the mpgf has been observed in a 2d
pic simulation [26], where no equilibrium can be reached due to the filament mergers.
this estimate does, however, not apply if positrons are present.
their current
reduces that of the electrons. if equal amounts of electrons and positrons are present,
the electrostatic field driven by the mpgf is suppressed altogether [27]. mobile protons
will react in particular to the stationary electric field [14] and they will modify through
their charge modulation the balance between the electrostatic field and the mpgf.
highly relativistic beam velocities will probably also modify the balance between the
mpgf and the electron currents it drives. we leave relativistic beams to future work.
acknowledgements the authors acknowledge the support by vetenskapsrådet
and by the projects ftn 2006-05389 of the spanish ministerio de educacion y
ciencia and pai08-0182-3162 of the consejeria de educacion y ciencia de la junta
de comunidades de castilla-la mancha. the hpc2n has provided the computer time.
references
[1] yang t y b, gallant y, arons j and langdon a b 1993 phys. fluids b 5 3369
[2] petri j and kirk j g 2007 plasma phys. controll. fusion 49 297
[3] karlicky m, nickeler d h and barta m 2008 astron. astrophys. 486 325
[4] medvedev mv and loeb a 1999 astrophys. j. 526 697
[5] lazar m, schlickeiser r and shukla p k 2006 phys. plasmas 13 102107
[6] tabak m et al. 1994 phys. plasmas 1 1626
[7] ruhl h, sentoku y, mima k, tanaka k a and kodama r 1999 phys. rev. lett. 82 743
[8] key m h et al. 2008 phys. plasmas 15 022701
[9] davidson r c, wagner c e, hammer d a and haber i 1972 phys. fluids 15 317
[10] lee r and lampe m 1973 phys. rev. lett. 31 1390
[11] molvig k 1975 phys. rev. lett. 35 1504
[12] honda m, meyer-ter-vehn j and pukhov a 2000 phys. rev. lett. 85 2128
[13] honda m, meyer-ter-vehn j and pukhov a 2000 phys. plasmas 7 1302
[14] califano f, cecchi t and chiuderi c 2002 phys. plasmas 9 451
[15] sakai j i, schlickeiser r and shukla p k 2004 phys. lett. a 330 384
[16] medvedev m v, fiore m, fonseca r a, silva l o and mori w b 2005 astrophys. j. 618 l75
[17] stockem a, dieckmann m e and schlickeiser r 2008 astrophys. j. 50 025002
[18] bret a, gremillet l and bellido j c 2007 phys. plasmas 14 032103
[19] bret a, gremillet l, benisti d and lefebvre e 2008 phys. rev. lett. 100 205008
[20] tzoufras m, ren c, tsung f s, tonge j w, mori w b, fiore m, fonseca r a and silva l o 2006
phys. rev. lett. 96 150002
[21] rowlands g, dieckmann m e and shukla p k 2007 new j. phys. 9 247
[22] dieckmann m e, kourakis i, borghesi m and rowlands g 2009 phys. plasmas 16 074502
[23] dawson j m 1983 rev. mod. phys. 55 403
[24] eastwood j w 1991 comput. phys. commun. 64 252
[25] sharma r p, tripathi y k and kumar a 1987 phys. rev. a 35 3567
[26] dieckmann m e 2009 plasma phys. controll. fusion in press (2009)
[27] dieckmann m e, shukla p k and stenflo l 2009 plasma phys. controll. fusion 51 065015
|
0911.1728 | mass varying neutrinos, quintessence, and the accelerating expansion of
the universe | we analyze the mass varying neutrino (mavan) scenario. we consider a minimal
model of massless dirac fermions coupled to a scalar field, mainly in the
framework of finite temperature quantum field theory. we demonstrate that the
mass equation we find has non-trivial solutions only for special classes of
potentials, and only within certain temperature intervals. we give most of our
results for the ratra-peebles dark energy (de) potential. the thermal
(temporal) evolution of the model is analyzed. following the time arrow, the
stable, metastable and unstable phases are predicted. the model predicts that
the present universe is below its critical temperature and accelerates. at the
critical point the universe undergoes a first-order phase transition from the
(meta)stable oscillatory regime to the unstable rolling regime of the de field.
this conclusion agrees with the original idea of quintessence as a force making
the universe roll towards its true vacuum with zero \lambda-term. the present
mavan scenario is free from the coincidence problem, since both the de density
and the neutrino mass are determined by the scale m of the potential. choosing
m ~ 10^{-3} ev to match the present de density, we can obtain the present
neutrino mass in the range m ~ 10^{-2}-1 ev and consistent estimates for other
parameters of the universe.
| introduction
neutrino mass related questions are of great interest for particle physics as well as for
cosmology (for reviews see ref. [1] and references therein). current upper limits on the
sum of neutrino masses from cosmological observations are of the order of 1 ev [2–4], while
neutrino oscillations give a lower bound of roughly 0.01 ev [5, 6], making neutrino mass an
established element of particle physics. furthermore, understanding the origin of neutrino
mass opens a window into understanding physical processes beyond the standard model of
particle physics [7–10].
it is now well established that about seventy four percent of the universe is comprised
of dark energy (de) (for reviews see ref. [11] and citation therein). the present stage of
evolution of the universe is governed by this dominant de contribution, and the universe
experiences an accelerating expansion [12, 13]. the nature of de is still unknown, and it is
one of the major questions of modern cosmology. there are, broadly speaking, three major
possibilities proposed to explain the de [11]. most straightforwardly, and in good agreement
with the current observational data, it can be present just as the cosmological constant [11].
secondly, the de can be accommodated in some framework of the modified non-einsteinian
gravity theories (see, e.g., refs. [14, 15]). and lastly, following the original proposals [16, 17]
on the de originating from a scalar field action similar to the inflaton field, there has been
a lot of activity in constructing and analyzing various trial scalar field lagrangians to model
the de [13]. note, that it is even unclear what kind of scalar field potential governs the
inflationary expansion of the universe [18], and as the result, the effective quantum field
that adequately describes inflation is still under debate [19]. a similar observation can be
drawn from analyzing many potentials proposed for the de action [13].
on the other hand, several cosmological and astrophysical observations imply that about
twenty two percent of the universe consists of dark matter (dm) [11], if we admit the general
relativity theory of gravity. most probably dm is formed through massive weakly interacting
particles (wimps), and the nature of these particles is also still unknown. there are several
recent observations performed by pamela [20] and glast missions which indicate dm
particle annihilations [21]. recently it was proposed that both these observations could be
used to test baryogenesis [22] which is one of the important problems of the standard particle
physics model.
another puzzling question in modern cosmology is the coincidence problem - the density
of de is comparable to the present energy density of dm. in turn, the latter is comparable
(within an order of magnitude) to the energy density of cosmological neutrinos [1, 2]. is
there a mechanism explaining this coincidence? a very convincing answer to this question is
given by the mechanism of dm mass generation via various types of dm-de couplings, rang-
ing from yukawa to more exotic ones. [23–28] the mass of the dm particle in this approach
is naturally time-dependent, and they were coined varying mass particles (vamps). vari-
ous de–dm interaction models have been constrained by observations of supernovae type
ia [29], the age of the universe [30–32], cosmic microwave background (cmb) anisotropies
[33, 34], and large scale structure (lss) formation [35].
fardon, nelson and weiner elaborated on the vamp mechanism in the context of neu-
trinos [36].1 in their model the relic neutrinos, i.e., fermionic field(s), interact with a scalar
field via the yukawa coupling. if the decoupled neutrino field is initially massless, then the
coupling generates a (varying) mass of neutrinos in this de-neutrinos model. this mass
varying neutrino (mavan) scenario is quite compelling, since it connects the origin of neu-
trino mass to the de, and solves the additional coincidence problem of why the neutrino
mass and de are of comparable scales [38]. (for more on the coincidence, see, e.g. [39]). to
consider neutrinos as particles which get their mass through the coupling is attractive for
particle physics, as well as for its cosmological consequences. however there are significant
issues that have to be resolved for the sake of viability of the mavan scenario. most notably,
it has been shown [40] that the model of ref. [36] suffers from a strong instability due to
the negative sound speed squared of the de-neutrino fluid (see also [41]).
any dm-de coupling induces observable changes in large scale structure formation [42].
the main reason for this is due to the presence of additional dm contributions (perturba-
tions) in the equation of motion which determines the dynamics of the scalar field. the
changes in the dynamics are drastic when massive neutrinos are coupled to de [40]. in
this case the squared sound speed of the de-neutrino fluid, defined as c_s^2 = δp/δρ (where
δ represents the variation, and p and ρ are the pressure and energy density of the de-neutrino
fluid), is negative. the negative squared sound speed results in an exponential growth of
scalar perturbations [43–46].
1 the de-neutrino coupling and the baryogenesis constraints have also been studied in ref. [37]
after the critique in ref. [40], the issue of stability of the de-neutrinos fluid has been
addressed by many authors [41, 46–53]. various physical assumptions were made in those
references in order to avoid the exponential clustering of neutrinos. in particular, to achieve
stability, proposals were put forward to make the de-dm model more complicated, e.g., by
extending it to a multi-component scalar field, or by promoting its supersymmetry. [49, 51]
we however are not inclined to pursue this line of thought and will explore the simplest
possible "minimal" model. as we will demonstrate, the occurrence of the instability in the
coupled de-neutrinos model is meaningful, and we will explore the physical implications of
this phenomenon. note that wetterich and co-workers [46] have already analyzed various
implications of the instability in the mavan model on the dynamics of neutrino clustering.
in this paper we re-address the analysis of the de-neutrinos coupled model. what is
really new in our results, to the best of our knowledge, apart from a consistent equation
for the equilibrium condition, is the analysis of the thermal (i.e. temporal) evolution of the
mavan model and prediction of its stable, metastable and unstable phases. the analysis
of the dynamics in the unstable phase results in, for the first time in the framework of the
mavan scenario, a picture of the present-time universe totally consistent with observations.
our findings are in line with the original proposal [16, 17] of the de potential (quintessence)
to model the universe slowly rolling towards its true vacuum (λ = 0). as it turns out, the
present universe, seen as a system of the coupled de (quintessence) field and fermions
(neutrinos) is below its critical temperature. it is similar to a supercooled liquid which has
not crystallized yet: its high temperature (meta)stable phase became unstable, but the new
low-temperature stable phase (λ = 0) is still to be reached. the afshordi-zaldarriaga-kohri
instability corresponding to c_s^2 < 0 is just telling us this.
the rest of the paper is organized as follows: in section ii we give the outlook of the
model and formalism applied and derive the basic equations for the coupled model. in
section iii we present the qualitative analysis of the equation which yields the fermionic
(neutrino) mass. section iv contains analysis of the coupled model with the ratra-peebles
de potential at equilibrium. the dynamics of the model applied to the whole universe is
studied in section v. the results are summarized in the concluding section vi.
ii. model and formalism. basic equations
a. outlook
in this paper we focus on the case when the scalar field potential u(φ) does not have
a non-trivial minimum, and the generation of the fermion mass is due to the breaking of
chiral symmetry in the dirac sector of the lagrangian. a non-trivial solution of the fermionic
mass equation is a result of the interplay between the scalar and fermionic contributions.
we consider the most natural and intuitively plausible yukawa coupling between the dirac
and the scalar fields.
the key assumption is that the fermionic mass generation can be obtained from mini-
mization of the thermodynamic potential. that is, the coupled system of the scalar bosonic
and fermionic fields is at equilibrium, at least at some temperatures. this will be analyzed
below more specifically. we assume the cosmological evolution, governed by the scale factor
a(t) to be slow enough that the coupled system is at equilibrium at a given temperature
t(a). then the methods of thermal quantum field theory [54, 55] can be applied.
this problem is rather well studied with quantum field theory and statistical physics
in different contexts [54–56]. the major conceptual difficulty in applying quantum field-
theoretical methods for the dark-energy scalar field is the lack of "well-behaved" potentials
interesting for cosmological applications. for instance, a class of the very popular inverse
power law slow-rolling quintessence potentials [13] are singular at the origin. consequently,
the field theory should be understood as a sort of effective theory, and we plan to address
this issue more deeply in our future work.
as far as the fermionic sector of the theory is concerned, one needs to distinguish two
different cases pertinent for neutrino applications:
(i) an equal number of fermions and antifermions, i.e., zero chemical potential μ = 0;
(ii) a surplus of particles over antiparticles, and small non-zero chemical potential.
for the bounds on the neutrino chemical potential, see refs. [1, 57]. if experiments
confirm neutrinoless double beta decay, i.e., that neutrinos are majorana fermions, then
the lepton number is not conserved [8], and one cannot introduce a (non-zero) chemical
potential. then case (i) above is applicable, provided that the majorana fields are utilized
instead of the dirac ones. for the case (i) with dirac fermions the ground state corresponds
to a complete annihilation of fermion-antifermion pairs, i.e. the fermions completely vanish
in the zero-temperature limit.
assumption of the fermion-antifermion asymmetry and (conserving) particle surplus, i.e.,
of a non-zero chemical potential, results in the fermionic contributions which survive the zero-
temperature limit. however the smallness of the zero-temperature contribution renders this
issue rather academic. indeed, for the neutrinos we are interested in this study, by assuming
the maximal particle surplus n◦∼115 cm−3, one gets the fermi momentum kf ∼3*10−4 ev.
for m ∼ 10^{-2} ev, one obtains μ(t = 0) = ε_f = \sqrt{k_f^2 + m^2} = m + o(10^{-4} ev). this results
in a non-trivial vacuum with the particle surplus frozen within an extremely narrow fermi
shell m ≤ε ≤εf. thus, trying to grasp the essential physics in this study from possibly
the simplest "minimal model", we assume the fermions to be described by a dirac spinor
field with zero chemical potential.
in this work we will use the standard methods of general relativity and finite-temperature
quantum field theory extended for fields living in a spatially flat universe with the
friedmann-lemaître-robertson-walker (flrw) metric, where the line element is ds^2 =
dt^2 − a^2(t) dx^2. here t is the physical time and a(t) is the scale factor, which can be obtained
from the friedmann equations [9, 10]
h^2(t) = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi g}{3}\,\rho_{tot} ,  (1)
\dot h(t) + h^2(t) = \frac{\ddot a}{a} = -\frac{4\pi g}{3}\,(\rho_{tot} + 3 p_{tot}) .  (2)
eqs. (1)-(2) also lead to the continuity equation
\dot\rho_{tot} + 3\,\frac{\dot a}{a}\,(\rho_{tot} + p_{tot}) = 0 .  (3)
here the dot represents the physical time derivative and ρtot and ptot are the total energy
density and pressure of the universe. in accordance with the (standard) λcdm model, the
universe is assumed to consist of (1) de, (2) cold dm (cdm) made of weakly interacting
massive particles, presumably m_dm ∼ 1–10 gev, (3) photons, and (4) baryons. the
dm and baryon density parameters today are ω_dm = ρ_dm(t_now)/ρ_cr ≈ 0.22 and ω_b =
ρ_b(t_now)/ρ_cr ≈ 0.04. here ρ_cr = 3h_0^2/(8πg) = 8.1 h^2 × 10^{-47} gev^4 is the critical density
today, tnow defines the current time, h0 = 2.1h×10−42 gev is the present hubble parameter,
g is the newton constant, and h ≈0.72 is the hubble parameter in units of 100 km/sec/mpc.
the photon contribution to the energy density today can be neglected. the flatness of the
universe leads to the relative energy density of the de-neutrino coupled fluid ωφν ≈0.74.
to ensure the accelerated expansion of the universe today, the r.h.s. of eq. (2) must be
positive at t = tnow.
in this paper we will not assume the existence of the cosmological constant λ, as the
λcdm model suggests. instead we accept the hypothesis of the dynamical dark energy
described by a scalar field. this is a bold assumption and a highly debatable issue. we
vindicate our approach a posteriori by the consistent picture we arrive at the end. for a
review and/or alternative approaches, see, e.g., refs. [13, 58, 59]. the massless neutrinos
are described by the conventional dirac lagrangian. the resulting model is given by the
coupled dirac and scalar fields. the grand thermodynamic potential of the coupled model
can be derived from the euclidian functional integral representation of the grand partition
function. the dynamics of the coupled model is governed by the friedmann equations.
throughout the paper we use natural units where ħ= c = kb = 1.
b. bosonic scalar field
the bosonic scalar field hamiltonian in the flrw metric reads as [9, 60]
h_b = \int a^3 d^3x \left[ \frac{1}{2}\dot\phi^2 + \frac{1}{2a^2}(\nabla\phi)^2 + u(\phi) \right] ,  (4)
where the comoving volume v = \int d^3x, while the physical volume v_phys = a^3(t)\, v. since
this field does not carry a conserved charge (number), the chemical potential μ = 0. the
grand partition function in the functional integral representation:
z_b \equiv tr\, e^{-\beta\hat h} = \int d\phi\; e^{-s^e_b} ,  (5)
with the bosonic euclidian action
s^e_b = \int_0^\beta d\tau \int a(t)^3 d^3x \left[ \frac{1}{2}(\partial_\tau\phi)^2 + \frac{1}{2a^2}(\nabla\phi)^2 + u(\phi) \right] ,  (6)
where φ = φ(x, τ).
it is instructive to find the partition function of the free scalar field u(φ) = \frac{1}{2} m_b^2 \phi^2
following the methods explained by kapusta and gale [54] for the case of the minkowski
metric. rescaling of the field
\tilde\phi = a^{3/2}\phi  (7)
changes the partition function (5) by a thermodynamically irrelevant prefactor. the func-
tional integration over \tilde\phi of the gaussian action gives
\log z_b = -v \int \frac{d^3k}{(2\pi)^3} \left[ \beta\sqrt{m_b^2 + k^2/a^2} + \log\left(1 - e^{-\beta\sqrt{m_b^2 + k^2/a^2}}\right) \right] .  (8)
then the density (with respect to the physical volume) of the thermodynamic potential is
given by
\omega_b \equiv -\frac{1}{\beta a^3 v}\,\log z_b = -p_b = \int \frac{d^3k}{(2\pi)^3} \left[ \varepsilon + \frac{1}{\beta}\log\left(1 - e^{-\beta\varepsilon}\right) \right] ,  (9)
where \varepsilon = \sqrt{m_b^2 + k^2} and p_b is the pressure due to the bosonic field.
c. free dirac spinor field
the dirac hamiltonian in the flrw metric is [60]
h_d = \int a^3 d^3x\; \bar\psi \left( -\frac{\imath}{a}\,\gamma\cdot\nabla + m \right) \psi .  (10)
the grand partition function is given by the following grassmann functional integral:
z_d \equiv tr\, e^{-\beta(\hat h - \mu\hat q)} = \int d\bar\psi\, d\psi\; e^{-s^e_d} ,  (11)
where the conserved charge (lepton number) operator \hat q = \int a^3 d^3x\, \psi^\dagger\psi and the euclidian
action
s^e_d = \int_0^\beta d\tau \int a(t)^3 d^3x\; \bar\psi(x,\tau) \left( \gamma^0\frac{\partial}{\partial\tau} - \frac{\imath}{a}\,\gamma\cdot\nabla + m - \mu\gamma^0 \right) \psi(x,\tau) .  (12)
by rescaling the grassmann fields (7) and using the standard techniques [54], we get the
thermodynamic potential density (pressure) as a function of the chemical potential and
temperature:
\omega_d \equiv -\frac{1}{\beta a^3 v}\,\log z_d = -p_d = -2 \int \frac{d^3k}{(2\pi)^3} \left[ \varepsilon + \frac{1}{\beta}\log\left(1 + e^{-\beta\varepsilon_-}\right) + \frac{1}{\beta}\log\left(1 + e^{-\beta\varepsilon_+}\right) \right] ,  (13)
where
\varepsilon(k) = \sqrt{m^2 + k^2} ,  (14)
and ε± = ε(k) ± μ. the first term on the r.h.s. of eq. (13) corresponds to the vacuum
contribution to the thermodynamic potential (pressure):
-\omega_0 = p_0 = 2 \int \frac{d^3k}{(2\pi)^3}\,\varepsilon(k) .  (15)
introducing the notation for the fermi distribution function
n_f(x) \equiv \frac{1}{e^{\beta x} + 1} ,  (16)
eq. (13) can be brought to the following form:
-\omega_d = p_d = p_0 + \frac{1}{3\pi^2} \int_0^\infty \frac{k^4\, dk}{\varepsilon(k)} \left[ n_f(\varepsilon_-) + n_f(\varepsilon_+) \right] .  (17)
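the thermal part of the pressure (17) is a one-dimensional integral and is easy to evaluate numerically. the following lines are our own illustrative sketch (not part of the paper), written in python with scipy in natural units; the massless limit is compared against the ultra-relativistic result quoted later in eq. (45).

# numerical sketch (ours, natural units): thermal fermionic pressure, eq. (17),
# with the vacuum term p0 subtracted, cf. the convention (29).
import numpy as np
from scipy.integrate import quad

def fermi(x):
    # n_f(x) = 1/(e^x + 1), evaluated in an overflow-safe way
    if x > 0:
        return np.exp(-x)/(1.0 + np.exp(-x))
    return 1.0/(1.0 + np.exp(x))

def pressure_thermal(T, m, mu=0.0, s=1):
    """thermal part of p_d for s flavours, eq. (17)."""
    beta = 1.0/T
    def integrand(k):
        eps = np.sqrt(k*k + m*m)
        return k**4/eps*(fermi(beta*(eps - mu)) + fermi(beta*(eps + mu)))
    return s/(3.0*np.pi**2)*quad(integrand, 0.0, np.inf, limit=200)[0]

# massless limit: should reproduce 7*pi^2*s/180 * T^4 (cf. eq. (45))
T = 1.0
print(pressure_thermal(T, m=0.0), 7.0*np.pi**2/180.0*T**4)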
d. coupled model: scalar field and dirac massless fermions
let us consider a scalar bosonic field interacting via a yukawa coupling with massless
dirac fermions. the euclidian action of the model in the flrw metric reads:
s = s^e_b + s^e_d\big|_{m=0} + g \int_0^\beta d\tau \int a^3 d^3x\; \phi\,\bar\psi\psi .  (18)
the path integral for the partition function of the coupled model is:
z = \int d\phi\, d\bar\psi\, d\psi\; e^{-s} .  (19)
the grassmann fields can be formally integrated out resulting in
z = \int d\phi\; e^{-s(\phi)} = \int d\phi\; \exp\left[ -s^e_b + \log\det\hat d(\phi) \right] ,  (20)
where the dirac operator
\hat d(\phi) = \gamma^0\frac{\partial}{\partial\tau} - \frac{\imath}{a}\,\gamma\cdot\nabla + g\phi(x,\tau) - \mu\gamma^0 .  (21)
the thermodynamic potential ω of the model (18) at tree level can be found by evaluating
the path integral (20) in the saddle-point approximation. assuming the existence of a
constant (x, τ)-independent field φ_c which minimizes the action s(φ), the term \log\det\hat d
can be evaluated exactly, and fermionic contribution to the thermodynamic potential is
given by eqs. (13) or (17) with the fermionic mass
m = g\phi_c .  (22)
the bosonic contribution to the partition function in this approximation is simply z \propto \exp[-\beta a^3 v\, u(\phi_c)]. the thermodynamic potential density is given then by
\omega(\phi_c) = u(\phi_c) + \omega_d(\phi_c) .  (23)
self-consistency of the employed saddle-point approximation naturally coincides with the
condition of minimum of the thermodynamic potential at equilibrium (at fixed temperature
and chemical potential):
\left.\frac{\partial\omega(\phi)}{\partial\phi}\right|_{\phi=\phi_c} = 0 ,  (24)
and
\left.\frac{\partial^2\omega(\phi)}{\partial\phi^2}\right|_{\phi=\phi_c} > 0 .  (25)
note that a non-trivial solution φ_c of eq. (24) (if it exists) is called the classical field: it
is the average of the bosonic field, i.e., φ_c = ⟨φ⟩. eqs. (22,23,24) can be brought to the
equivalent form:
u'(\phi_c) + g\rho_s = 0 ,  (26)
where the scalar fermionic density (a.k.a. the chiral density) ρ_s is given by the following
expression:
\rho_s \equiv \frac{\langle\hat n\rangle}{v} = \frac{\partial\omega_d}{\partial m} = \rho_0 + \frac{m}{\pi^2} \int_0^\infty \frac{k^2\, dk}{\varepsilon(k)} \left[ n_f(\varepsilon_-) + n_f(\varepsilon_+) \right] ,  (27)
and \hat n = \int d^3x\, \bar\psi\psi. here ρ_0 stands for the vacuum contribution to the chiral condensate:
\rho_0 \equiv \frac{\partial\omega_0}{\partial m} = -\frac{m}{\pi^2} \int_0^\infty \frac{k^2\, dk}{\varepsilon(k)} .  (28)
note that even if the time, i.e., a(t), does not enter explicitly in the equations for the
thermodynamic quantities of the coupled, fermionic or bosonic models (9,13,23,26,27), and
they look like their counterparts in a flat static universe, such parameters as, e.g., the
temperature and chemical potential in those equations are time-dependent, i.e., t = t(a)
and μ = μ(a). the particular form of the dependencies t(a) and μ(a) must be determined
from the friedmann continuity equation (3) which relates the energy density ρ(t) and
pressure p(t) to the evolution of a(t)[9, 10]. in addition, the fermionic mass m ∝φc in the
coupled model is also time varying, since the time enters into φc (26) via t, μ, and all three
functions m(a), t(a) and μ(a) are governed by the friedmann equations (1,2,3).
the present theory works consistently for the physical quantities (bosonic or fermionic)
measured with respect to their vacuum contributions.
so, in the rest of the paper we
will employ the thermodynamic quantities with subtracted vacuum contributions, keeping
however, the same notations, e.g.:
\omega_d \mapsto \omega_d - \omega_0 ,\quad p_d \mapsto p_d - p_0 ,\quad \rho_s \mapsto \rho_s - \rho_0 .  (29)
then, according to volovik [61], the pressure and energy of the pure and equilibrium vacuum
is exactly zero. (the renormalization of the vacuum terms is, of course, a very subtle issue;
there are alternative approaches to this problem known from the literature, see, e.g., [62, 63].)
iii. analysis of the mass (gap) equation: general properties
in cases interesting for cosmological applications, the scalar field potential u(φ) does not
have a non-trivial minimum, and the generation of the fermion mass (i.e. a solution of (24)
0 < φc < ∞) is due to the interplay between the scalar and fermionic contributions to the
total thermodynamic potential (23).
from now on we adapt our equations for the case of equal number of fermions and
antifermions and μ = 0, as discussed in sec. ii a. keeping in mind the neutrinos, we assume
an extra flavor index of fermions with the number of flavors s. (for neutrinos s = 3.) we
also assume the flavor degeneracy of the fermionic sector.
before proceeding further, we need to make some important observations regarding the
behavior of the coupled model in two limiting cases. assuming that a non-trivial solution
of (24) with finite m exists, the fermionic contribution to the thermodynamic potential
(pressure) (17) can be written as:
-\omega_d = p_d = \frac{2s}{3\pi^2\beta^4}\, i_p(\beta m) ,\quad \mu = 0 ,  (30)
where the integral defined as
i_p(\kappa) \equiv \int_\kappa^\infty \frac{(z^2-\kappa^2)^{3/2}}{e^z + 1}\, dz  (31)
can be evaluated analytically in two cases:
i_p(\kappa) = \begin{cases} \frac{7\pi^4}{120} - \frac{\pi^2}{8}\kappa^2 + o(\kappa^4) , & \kappa < 1 \\ 3\kappa^2 k_2(\kappa) + o(e^{-2\kappa}) , & \kappa \gtrsim 1 \end{cases}  (32)
where kν(x) is the modified bessel function of the second kind.
in the (classical) low-temperature regime
\beta m \equiv \frac{m}{t} \gg 1  (33)
the above equation results in
-\omega_d = p_d = \frac{2 s m^2}{\pi^2\beta^2}\, k_2(\beta m) + o(e^{-2\beta m}) .  (34)
to leading order
-\omega_d = p_d \approx \frac{\sqrt{2}\, s}{\pi^{3/2}}\, t\,(t m)^{3/2} e^{-m/t} .  (35)
the chiral condensate density (27)
\rho_s = \frac{2 s m}{\pi^2\beta^2} \int_{\beta m}^\infty \frac{(z^2 - (\beta m)^2)^{1/2}}{e^z + 1}\, dz ,\quad \mu = 0  (36)
can also be evaluated in the low-temperature limit as
\rho_s = \frac{2 s m^2}{\pi^2\beta}\, k_1(\beta m) + o(e^{-2\beta m}) ,  (37)
which gives to leading order
\rho_s \approx \frac{\sqrt{2}\, s}{\pi^{3/2}} (t m)^{3/2} e^{-m/t} .  (38)
in this limit the fermions enter the regime of a classical ideal gas. indeed, the fermionic
particle (antiparticle) density
n_+ = n_- = \frac{s}{\pi^2\beta^3} \int_{\beta m}^\infty \frac{z\,(z^2 - (\beta m)^2)^{1/2}}{e^z + 1}\, dz  (39)
in the low-temperature limit yields
n_\pm = \frac{s m^2}{\pi^2\beta}\, k_2(\beta m) + o(e^{-2\beta m}) ,  (40)
and to leading order:
n_\pm \approx \frac{s}{\sqrt{2}\,\pi^{3/2}} (t m)^{3/2} e^{-m/t} .  (41)
we see from eqs. (34,40) that up to terms o(e^{-2\beta m}), the fermions satisfy the ideal gas
equation of state
p_d \approx (n_+ + n_-)\, t ,  (42)
and the chiral density is equal to the total particle density n:
\rho_s \approx n \equiv n_+ + n_- .  (43)
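this classical-gas limit is easy to verify numerically; the short check below is ours (python, natural units, s = 1) and simply compares the exact integrals (36),(39) with the bessel-function forms (37),(40).

# numerical sketch (ours): the classical low-temperature limit at beta*m = 25.
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

s, T, m = 1, 1.0, 25.0                      # beta*m = 25: deep in the classical regime
bm = m/T
tail = lambda f: quad(lambda z: f(z)/(np.exp(z) + 1.0), bm, np.inf)[0]

rho_s = 2.0*s*m*T**2/np.pi**2 * tail(lambda z: np.sqrt(z*z - bm*bm))   # eq. (36)
n_pm  = s*T**3/np.pi**2 * tail(lambda z: z*np.sqrt(z*z - bm*bm))       # eq. (39)

print(rho_s/(2.0*s*m*m*T/np.pi**2*kn(1, bm)))   # vs eq. (37): close to 1
print(n_pm/(s*m*m*T/np.pi**2*kn(2, bm)))        # vs eq. (40): close to 1
print(rho_s/(2.0*n_pm))                         # rho_s vs n_+ + n_-: tends to 1 as beta*m grows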
in the (ultra-relativistic) high-temperature regime
\frac{m}{t} \ll 1  (44)
one obtains
-\omega_d = p_d \approx \frac{7\pi^2 s}{180}\, t^4 - \frac{s}{12}(m t)^2 .  (45)
to leading order the chiral condensate is
\rho_s \approx \frac{s}{6}\, m t^2 ,  (46)
while the particle density is
n_\pm \approx \frac{3 s\,\zeta(3)}{2\pi^2}\, t^3 .  (47)
now we can make some general observations of the fermionic mass generation in the
coupled model:
(i) it is obvious from the sign of ρs (cf. 27,36) that non-trivial solutions of (26) are
impossible for a monotonically increasing potential u(φ).
that rules out some popular
potentials, e.g., u ∝log(1 + φ/m) [13, 36] for this yukawa-coupling driven scenario of the
mass generation.
(ii) the monotonously decreasing slow-rolling de potentials ([16, 17] and for reviews,
see [11, 13]), e.g., u ∝φ−α or u ∝exp[−aφγ], do have a window of parameters wherein
non-trivial solutions of (26) exist. as we can see from (38), for those decreasing potentials
the mass equation (26) always has a trivial solution m = gφ_c = ∞ for the minimum of
the thermodynamic potential (23).2 this solution corresponds to a "doomsday" vacuum
state [61], when the universe reached its true ground state with zero dark energy density
and completely frozen out fermions. a non-trivial solution of (26), corresponding to another
minimum of the potential (23), is totally due to the fermionic contribution. since the latter
freezes out in the limit t →0, it is clear qualitatively that such a solution 0 < m < ∞can
exist only above a certain temperature. for a more quantitative account of these phenomena
we need to assume some specific form of the de potential. this will be done in the following
section.
(iii) to explain the differences between the present study and earlier related work on
mass varying fermions (see [23, 24, 36, 40] and more references there), some clarifications
are warranted.
it is usually assumed in the literature that the low-temperature regime
formulas are applicable, and according to (43) ρs = n. the approximation for (26) then can
be written as ∂u/∂m + n = 0. the latter is interpreted as a result of minimization of some
2 recall that the grand thermodynamical potential is equal to the free energy for the case μ = 0.
effective potential u_eff = u + n m with fixed n, which always has a non-trivial minimum
0 < m < ∞ for the class of decreasing potentials u, see, e.g., [23, 24]. it turns out that such
an approximation changes the picture qualitatively.
in what follows, we explore in detail the predictions of the consistent mass equation (26)
on the mass varying scenario for the coupled model with a specific de potential ansatz.
iv. coupled model with the ratra-peebles quintessence potential
a. mass equation and critical temperature
now we analyze in detail our coupled model for a particular choice of u(φ), the so-called
ratra-peebles quintessence potential [16]:
u(\phi) = \frac{m^{\alpha+4}}{\phi^\alpha} ,  (48)
where α > 0. it is convenient to introduce the dimensionless parameters
\Delta \equiv \frac{m}{t} ,\quad \kappa \equiv \frac{g\phi}{t} ,\quad \omega_r \equiv \frac{\omega}{m^4} .  (49)
then the mass equation (26) can be written as:
\frac{\alpha\pi^2}{2s}\, g^\alpha \Delta^{\alpha+4} = i_\alpha(\kappa) ,  (50)
where we introduced
i_\alpha(\kappa) \equiv \kappa^{\alpha+2} \int_\kappa^\infty \frac{\sqrt{z^2-\kappa^2}}{e^z + 1}\, dz .  (51)
according to the relation eq. (22) between the fermionic mass m and the classical field, we
get m = t\kappa_c, where κ_c is the solution of eq. (50) corresponding to the minimum of the
thermodynamic potential, which reads now as (cf. eq. (31)):
\omega_r = g^\alpha \left(\frac{\Delta}{\kappa}\right)^\alpha - \frac{2}{3\pi^2}\,\frac{1}{\Delta^4}\, i_p(\kappa) .  (52)
the dimensionless yukawa coupling constant g ∼ 1. to reduce the number of model parameters we can set g = 1. this is equivalent to the simultaneous rescaling g\phi \mapsto \tilde\phi and m\, g^{\alpha/(\alpha+4)} \mapsto \tilde m.3 for simplicity, we also restrict the number of flavors to s = 1.
we define the mass of the scalar field as:
m_\phi^2 = \left.\frac{\partial^2 u(\phi)}{\partial\phi^2}\right|_{\phi=\phi_c} .  (53)
in terms of the dimensionless parameters it reads
\frac{m_\phi}{m} = \sqrt{\alpha(\alpha+1)} \left(\frac{\Delta}{\kappa_c}\right)^{\frac{\alpha+2}{2}} .  (54)
3 one can check this scaling also holds for the dynamics of the model, considered in section v. in particular,
the neutrino masses do not depend on the value of g. to avoid cluttering of notations we will drop tildes
in the rescaled parameters.
it is important to realize that the integral iα(κ) on the r.h.s. of the mass equation is
bounded. the quantitative parameters of the function iα(κ) depend on α, but its shape
is always similar to the curve shown in fig. 1 for α = 1. so, there exists a maximal ∆crit
(critical temperature tcrit) such that for ∆> ∆crit (t < tcrit) only a trivial solution m = ∞
exists, and the stable vacuum has zero energy and pressure.
fig. 1: (color online) left: graphical solutions of the mass equation (50) for different values
of ∆≡m/t (α = 1). right: dimensionless density of the thermodynamic potential (52). the
thermodynamically stable solutions of eq. (50) indicated by the large dots correspond to the minima
of the potential. the arrows indicate the unstable solutions of the mass equation, corresponding
to the maxima of the potential.
the mass equation eq. (50) is solved numerically for various values of its parameters, and
the characteristic results are shown in fig. 1. the numerical results can be complemented
by an approximate analytical treatment of the problem. the latter turns out to be quite
accurate and greatly helps in gaining intuitive understanding of the results.
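as an illustration of such a numerical solution, a minimal sketch is given below (our own code, not the authors'); it assumes g = s = 1 and finds the physical root κ_c of eq. (50) as the smaller intersection of i_α(κ) with the constant left-hand side, bracketing the root with the approximate position ν of the maximum of i_α from eq. (56).

# numerical sketch (ours; g = s = 1): physical root kappa_c of the gap equation (50),
# i.e. the minimum of the dimensionless potential (52).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def I_alpha(kappa, alpha):
    # eq. (51)
    tail = quad(lambda z: np.sqrt(z*z - kappa*kappa)/(np.exp(z) + 1.0),
                kappa, np.inf)[0]
    return kappa**(alpha + 2)*tail

def kappa_c(Delta, alpha):
    """smaller root of I_alpha(kappa) = (alpha*pi^2/2)*Delta^(alpha+4), if it exists."""
    level = 0.5*alpha*np.pi**2*Delta**(alpha + 4)
    nu = alpha + 2.5                    # approximate location of the maximum, eq. (56)
    if I_alpha(nu, alpha) < level:      # above the critical Delta only m = infinity survives
        return np.inf
    return brentq(lambda k: I_alpha(k, alpha) - level, 1e-8, nu)

for Delta in (0.2, 0.5, 0.8, 0.9):
    print(Delta, kappa_c(Delta, alpha=1.0))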
it is easy to evaluate i_α(κ) to leading order:
i_\alpha(\kappa) \approx \begin{cases} \frac{\pi^2}{12}\kappa^{\alpha+2} , & \kappa < 1 \\ \kappa^{\alpha+3} k_1(\kappa) , & \kappa \gtrsim 1 \end{cases}  (55)
for the critical point where i'_\alpha(\kappa_{crit}) = 0, we obtain:
\kappa_{crit} \approx \nu ,\quad \nu \equiv \alpha + \frac{5}{2} ;  (56)
i_\alpha(\kappa_{crit}) \approx \sqrt{\frac{\pi}{2}}\,\nu^\nu e^{-\nu} .  (57)
the most important conclusion we draw from fig. 1 is that there are three phases in the
model's phase diagram. we analyze each of them in the following subsections.
1. stable (massive) phase: ∆ < ∆◦ (t◦ < t < ∞)
in this range of parameters the equation (50) has two nontrivial solutions. the root
κ_c < κ_◦ indicated with a large dot in fig. 1 (case a) gives the fermionic mass and corresponds
to a global minimum of the potential. so it is a thermodynamically stable state. in this
phase ω(κ_c) < 0, so the pressure is positive p > 0. another non-trivial root of (50)
corresponds to a thermodynamically unstable state (maximum of ω indicated with an arrow
in fig. 1). there is a trivial third root of the mass equation κ = ∞. at these temperatures
it corresponds to the metastable vacuum state ω = 0.
in the high-temperature region of this phase where ∆ ≪ 1 the fermionic mass is small
(see fig. 2):
\frac{m}{m} \approx \left(\frac{\sqrt{6\alpha}\, m}{t}\right)^{\frac{2}{\alpha+2}} \propto t^{-\frac{2}{\alpha+2}} .  (58)
the fermionic contribution to the thermodynamic potential is dominant, and it behaves to
leading order as the potential of the ultra-relativistic fermion gas (cf. eq. (45)):
\omega = -p = -\frac{7\pi^2}{180}\, t^4 + o(t^{\frac{2\alpha}{\alpha+2}}) .  (59)
one can check that the subleading term in the above expression combines the de potential
contribution and the first fermionic mass correction, which are both of the same order.
it is important to stress that in this coupled model with the slow-rolling potential eq. (48),
the mass generation does not follow a conventional landau thermal phase transition scenario.
there is no critical temperature below which the chiral symmetry is spontaneously broken
and the mass is generated. instead the mass grows smoothly as \kappa_c \propto \Delta^{\frac{\alpha+4}{\alpha+2}}, albeit starting
from the "point" t = ∞. on physical grounds we expect the applicability of the model
to have the upper temperature bound:
t \lesssim t_{rd} ,  (60)
where trd is roughly the temperature of the boundary between inflation and the radiation-
dominated era. the high-temperature result (59) shows that the stable massive phase of
the present model can indeed be extended up to those temperatures.
the scalar field and fermionic masses demonstrate opposite temperature dependencies.
the scalar field is "heavy" at high temperatures:
m_\phi \approx \sqrt{\frac{\alpha+1}{6}}\; t ,\quad \Delta \ll 1 ,  (61)
however its mass decreases together with the temperature. on the contrary, the fermionic mass
m monotonously increases with decreasing temperature. the exact numerical results for the
two masses are shown in fig. 2.
fig. 2: (color online) masses of the fermionic and scalar fields (m and mφ resp.) as functions of
∆≡m/t, α = 1. at ∆> ∆crit (t < tcrit) the stable phase corresponds to m = ∞and mφ = 0
2. metastable (massive) phase: ∆◦ < ∆ < ∆crit (tcrit < t < t◦)
upon increasing ∆we reach a certain value ∆◦corresponding to a critical temperature t◦
when the thermodynamic potential has two degenerate minima ω(κ◦) = p(κ◦) = ω(∞) = 0.
this is shown in fig. 1 (case b). after this point, when the temperature decreases further in
the range ∆◦< ∆< ∆crit (here ∆crit stands for the maximal value of ∆when a non-trivial
solution of the gap equation (50) exists, see fig. 1), the two minima of the thermodynamic
potential exchange their roles. the root κc now becomes a metastable state with ω(κc) > 0,
i.e., with the negative pressure p(κc) < 0, while the stable state of the system corresponds
to the true stable vacuum of the universe [61] ω(∞) = p(∞) = 0. see fig. 1 (case c).
the system's state in the local minimum ω(κc) is analogous to a metastable supercooled
liquid. we disregard the exponentially small probability of tunneling of the fermions from
the metastable state ω(κc) into the vacuum state ω(∞) = 0[18]. accordingly, the fermionic
mass in this phase is determined by the root κc of (50).
in the metastable phase κ_c ≳ 1, so by using eqs. (52,32,50) we obtain the potential:
\omega_r \approx \left(\frac{\Delta}{\kappa_c}\right)^\alpha \left\{ 1 - \frac{\alpha}{\kappa_c} - \frac{3\alpha}{2\kappa_c^2} \right\} .  (62)
from the above result we can find the metastability point ω(κ◦) = 0 as
\kappa_\circ \approx \frac{\alpha}{2}\left( 1 + \sqrt{1 + \frac{6}{\alpha}} \right) .  (63)
expanding i_α(κ) near its maximum and using eqs. (55,56,57) along with the gap equation
eq. (50), we obtain the following equation:
\frac{(\kappa_c - \kappa_{crit})^2}{2\nu} \approx 1 - \left(\frac{\Delta}{\Delta_{crit}}\right)^{\alpha+4} .  (64)
one finds from the above equation, e.g., how the mass approaches its critical value:
m_{crit} - m \propto \left(\frac{t}{t_{crit}} - 1\right)^{1/2} ,  (65)
or the ratios of temperatures and masses at the metastable and critical points. these latter
parameters are given in table i.
table i: masses, critical temperatures and potentials for various values of α. all the parameters
used in this table are defined in the text.
α | t_crit/t_◦ | ∆_crit | m_◦/m_crit | m_crit/m | m_φ^crit/m | t_crit/m | ω_crit/m^4 | ρ_crit/m^4 | w(t_crit)
1 | 0.90 | 0.91 | 0.558 | 3.86 | 0.187 | 1.10 | 0.15 | 0.84 | -0.18
2 | 0.95 | 1.04 | 0.70 | 4.35 | 0.130 | 0.97 | 0.02 | 0.25 | -0.09
4 | 0.98 | 1.44 | 0.81 | 4.52 | 0.048 | 0.70 | 6×10^-4 | 0.02 | -0.03
10 | 0.99 | 3.00 | 0.91 | 4.16 | 2×10^-3 | 0.33 | 7×10^-8 | 9×10^-6 | -0.008
3. critical point: ∆ = ∆crit (t = tcrit) and phase transition
the critical point of the model corresponds to the case when the two roots of the mass
equation eq. (50) merge, and the minimum of the potential disappears. one can check that
instead of the minimum this is an inflection point of the potential, i.e., ω''_r(κ_crit) = 0.
this situation is shown in fig. 1 (case d). at this point the system is in the unstable state
with the fermionic mass
\frac{m_{crit}}{t_{crit}} = \kappa_{crit} \approx \nu .  (66)
in particular, this implies that the fermions are non-relativistic at the critical temperature.
from eqs. (57,50) we find the critical parameter (see table i for its numerical values)
\Delta_{crit} \approx \left( \frac{\sqrt{2}}{\alpha\pi^{3/2}}\,\nu^\nu e^{-\nu} \right)^{\frac{1}{\alpha+4}} ,  (67)
which allows us to evaluate the critical temperature
t_{crit} = \frac{m}{\Delta_{crit}} .  (68)
we can also find the potential at t_crit:
\omega_{crit} \approx \frac{5}{2\nu} \left(\frac{\Delta_{crit}}{\nu}\right)^\alpha m^4 .  (69)
thus, from the viewpoint of equilibrium thermodynamics at t = t_crit the model must
undergo a first-order (discontinuous) phase transition and reach its third thermodynamically
stable (at t < t_crit) phase corresponding to the vacuum ω(κ = ∞) = p(κ = ∞) = 0. during
this transition the fermionic mass, given at the critical point by eq. (66), and the scalar field
mass
m_\phi^{crit} \approx \sqrt{\alpha(\alpha+1)} \left(\frac{\Delta_{crit}}{\nu}\right)^{\frac{\alpha+2}{2}} m  (70)
both jump to their values in the vacuum state m = ∞ and m_φ = 0. see fig. 2.
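the approximate critical parameters (67)-(68) can be tabulated in a few lines; the snippet below is our own check and should land close to the ∆_crit column of table i.

# numerical sketch (ours): approximate Delta_crit from eq. (67) and t_crit = m/Delta_crit.
import numpy as np

def delta_crit(alpha):
    nu = alpha + 2.5                                  # eq. (56)
    return (np.sqrt(2.0)/(alpha*np.pi**1.5)*nu**nu*np.exp(-nu))**(1.0/(alpha + 4.0))

for alpha in (1, 2, 4, 10):
    dc = delta_crit(alpha)
    print(f"alpha={alpha:2d}  Delta_crit~{dc:5.2f}  t_crit/m~{1.0/dc:5.2f}")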
however, the above arguments are based on the minimization of the thermodynamic
potential (i.e. maximization of entropy) at equilibrium. to address the question of how
such a system behaves as the universe evolves towards the new equilibrium vacuum state,
we need to analyze the dynamics of this phase transition. more qualitatively, we need to
study how the particle at the point κcrit at the critical temperature (see fig. 1) rolls down
towards its equilibrium at infinity. this issue will be addressed in section v.
b. equation of state
we define the equation of state in the standard form:
p = w\rho ,  (71)
where the total pressure in this model is obtained from eq. (52), while the total energy
density (ρ) and its dimensionless counterpart (ρ_r) are determined by the following equation:
\rho_r \equiv \frac{\rho}{m^4} = \left(\frac{\Delta}{\kappa}\right)^\alpha + \frac{2}{\pi^2}\,\frac{1}{\Delta^4}\, i_\varepsilon(\kappa) .  (72)
here we define the integral
i_\varepsilon(\kappa) \equiv \int_\kappa^\infty \frac{z^2\sqrt{z^2-\kappa^2}}{e^z + 1}\, dz ,  (73)
which can be evaluated in two limits of our interest:
i_\varepsilon(\kappa) = \begin{cases} \frac{7\pi^4}{120} - \frac{\pi^2}{24}\kappa^2 + o(\kappa^4) , & \kappa < 1 \\ 3\kappa^2 k_2(\kappa) + \kappa^3 k_1(\kappa) + o(e^{-2\kappa}) , & \kappa \gtrsim 1 \end{cases}  (74)
in the high-temperature region of the stable massive phase where ∆ ≪ 1, the fermionic
contribution is dominant, and the energy density to leading order is that of the ultra-
relativistic fermion gas (cf. eq. (59))
\rho = \frac{7\pi^2}{60}\, t^4 + o(t^{\frac{2\alpha}{\alpha+2}}) .  (75)
thus, in this regime the model follows approximately the equation of state of a relativistic
gas with w ≈ 1/3.
in the region κ_c ≳ 1, which includes the metastable phase and the critical point, we obtain
by using eqs. (72,74,50,62):
\rho \approx \left(\frac{\Delta}{\kappa_c}\right)^\alpha \left\{ 1 + \alpha + \frac{3\alpha}{\kappa_c} + \frac{9\alpha}{2\kappa_c^2} \right\} ,  (76)
and
w \approx -\,\frac{1 - \frac{\alpha}{\kappa_c} - \frac{3\alpha}{2\kappa_c^2}}{1 + \alpha + \frac{3\alpha}{\kappa_c} + \frac{9\alpha}{2\kappa_c^2}} .  (77)
the last equation follows very closely the results of the exact numerical calculations shown
in fig. 3.
fig. 3: (color online) w ≡ p/ρ for several values of α. at ∆ > ∆crit(α), i.e., t < tcrit(α), the
equilibrium value is w = −1 exactly.
at the critical point we evaluate
\rho_{crit} \approx \left(\frac{\Delta_{crit}}{\nu}\right)^\alpha \left\{ 1 + \alpha + \frac{3\alpha}{\nu} \right\} m^4 ,  (78)
and making a rough estimate, we get a lower bound:
w \approx -\frac{5}{2}\,\frac{1}{\nu\left(1 + \alpha + \frac{3\alpha}{\nu}\right)} \geq -\frac{1}{4} ,\quad \forall\,\alpha \geq 1 .  (79)
thus for any power law α ≥ 1, the parameter w of this model at equilibrium cannot cross
the bound w < −1/3, necessary for accelerating expansion of the universe (ä > 0).4
at t < t_crit we obtain the equilibrium value of w in the stable vacuum state from
eqs. (52,72):
w = \lim_{\kappa\to\infty} \frac{p(\kappa)}{\rho(\kappa)} = -1 .  (80)
so the true vacuum in this model corresponds to the universe with a cosmological constant
in the limit λ → 0.
4 the relation (79) w(t_crit) ≳ −1/4 holds for the model which contains only the de-neutrino coupled fluid.
in a more realistic model for the universe, baryons and dm also contribute to the total energy density,
and as a consequence w(t_crit) increases, see sec. v.
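to make this concrete, the sketch below (ours; g = s = 1, natural units) evaluates w = p/ρ along the massive branch from eqs. (50), (52) and (72); below the critical temperature it simply returns the vacuum value w = −1 of eq. (80).

# numerical sketch (ours; g = s = 1): equation of state w(Delta) on the massive branch.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

fint = lambda f, k: quad(lambda z: f(z)/(np.exp(z) + 1.0), k, np.inf)[0]
I_a  = lambda k, a: k**(a + 2)*fint(lambda z: np.sqrt(z*z - k*k), k)      # eq. (51)
I_p  = lambda k: fint(lambda z: (z*z - k*k)**1.5, k)                      # eq. (31)
I_e  = lambda k: fint(lambda z: z*z*np.sqrt(z*z - k*k), k)                # eq. (73)

def kappa_c(D, a):
    level = 0.5*a*np.pi**2*D**(a + 4)
    nu = a + 2.5
    if I_a(nu, a) < level:
        return np.inf                      # no massive solution: vacuum state
    return brentq(lambda k: I_a(k, a) - level, 1e-8, nu)

def w_of_Delta(D, a):
    k = kappa_c(D, a)
    if not np.isfinite(k):
        return -1.0                        # eq. (80)
    p   = -(D/k)**a + 2.0*I_p(k)/(3.0*np.pi**2*D**4)   # p_r = -omega_r, eq. (52)
    rho =  (D/k)**a + 2.0*I_e(k)/(np.pi**2*D**4)       # eq. (72)
    return p/rho

for D in (0.1, 0.4, 0.7, 0.9, 1.1):
    print(D, round(w_of_Delta(D, 1.0), 3))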
c. speed of sound
we define the sound velocity as
c_s^2 = \frac{dp}{dt}\Big/\frac{d\rho}{dt} = \frac{dp}{d\Delta}\Big/\frac{d\rho}{d\Delta} ,  (81)
where to obtain the second expression we used the fact that the time enters our formulas
only through the temperature t(a(t)), so
\frac{d}{dt} = \frac{d\Delta}{dt}\,\frac{d}{d\Delta} .  (82)
let us first consider the temperatures t ≥ t_crit, i.e., ∆ ≤ ∆_crit. then
\frac{d\rho}{d\Delta} = \frac{\partial\rho}{\partial\Delta} + \left.\frac{\partial\rho}{\partial\kappa}\cdot\frac{d\kappa}{d\Delta}\right|_{\kappa=\kappa_c} ,  (83)
where κ is related to ∆ through the gap equation (50):
\left.\frac{d\kappa}{d\Delta}\right|_{\kappa=\kappa_c} \equiv \dot\kappa_c = \frac{\alpha+4}{\Delta}\,\frac{i_\alpha(\kappa_c)}{i'_\alpha(\kappa_c)} = \frac{\alpha+4}{\Delta}\left[\frac{d\log i_\alpha(\kappa_c)}{d\kappa}\right]^{-1} .  (84)
note that for the pressure the following relation
\frac{dp}{d\Delta} = \frac{\partial p}{\partial\Delta}  (85)
holds, since
\left.\frac{\partial p}{\partial\kappa}\right|_{\kappa=\kappa_c} = 0  (86)
is just another form of the gap equation (24). thus
c_s^2 = \left.\frac{\partial p/\partial\Delta}{\partial\rho/\partial\Delta + (\partial\rho/\partial\kappa)\,\dot\kappa_c}\right|_{\kappa=\kappa_c} .  (87)
in the high-temperature regime ∆ ≪ 1 (κ_c ≪ 1), it is even easier to use the explicit
asymptotic expansions for p(∆) and ρ(∆) in the definition (81) instead of the above formula
(87). a straightforward calculation gives the result
c_s^2 \approx \frac{1}{3} - b\,\Delta^{\frac{2(\alpha+4)}{\alpha+2}} ,\quad b > 0 ,  (88)
consistent with the earlier observation that for ∆ ≪ 1 the model behaves as an ultra-
relativistic fermi gas.
in the case κ_c ≳ 1 we find
\dot\kappa_c \approx \frac{\alpha+4}{\Delta}\,\frac{\kappa_c}{\nu - \kappa_c} ,  (89)
and
c_s^2 \approx \frac{\nu - \kappa_c}{\alpha(\alpha+4)\left(1 + \frac{4}{\alpha\nu}\right)} .  (90)
everywhere at t > t_crit, including the stable and metastable massive phases, c_s^2 > 0, so the
model is stable with respect to the density fluctuations. the sound velocity vanishes in the
limit t → t_crit^+ as
c_s \propto \sqrt{\nu - \kappa_c} \to 0 .  (91)
qualitatively, the vanishing speed of sound is due to the divergent \dot\kappa_c (84,89) at the critical
point.
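the same machinery gives the sound speed by direct numerical differentiation of p(∆) and ρ(∆) along the branch, a crude but adequate stand-in for eq. (87); the compact sketch below is ours (g = s = 1) and is meant only for ∆ safely below the critical value.

# numerical sketch (ours; g = s = 1): c_s^2 from finite differences of p(Delta), rho(Delta)
# on the massive branch kappa = kappa_c(Delta); valid only below Delta_crit.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

fint = lambda f, k: quad(lambda z: f(z)/(np.exp(z) + 1.0), k, np.inf)[0]
I_a  = lambda k, a: k**(a + 2)*fint(lambda z: np.sqrt(z*z - k*k), k)
I_p  = lambda k: fint(lambda z: (z*z - k*k)**1.5, k)
I_e  = lambda k: fint(lambda z: z*z*np.sqrt(z*z - k*k), k)

def kc(D, a):
    lev = 0.5*a*np.pi**2*D**(a + 4)
    return brentq(lambda k: I_a(k, a) - lev, 1e-8, a + 2.5)

p_r   = lambda D, a: -(D/kc(D, a))**a + 2.0*I_p(kc(D, a))/(3.0*np.pi**2*D**4)
rho_r = lambda D, a:  (D/kc(D, a))**a + 2.0*I_e(kc(D, a))/(np.pi**2*D**4)

def cs2(D, a, h=1e-3):
    return (p_r(D + h, a) - p_r(D - h, a))/(rho_r(D + h, a) - rho_r(D - h, a))

for D in (0.1, 0.4, 0.7, 0.88):
    print(D, round(cs2(D, 1.0), 3))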
fig. 4: (color online) the square of the sound velocity for several values of α. at ∆ > ∆crit(α),
i.e., t < tcrit(α), the equilibrium value is c_s^2 = −1 exactly.
the above analytical results are in excellent agreement with the numerical calculations
of c_s^2 from the formula (87) shown in fig. 4. at the temperatures t < t_crit there is no gap
equation relating κ and ∆, so the sound velocity is easily calculated to yield the value in
the equilibrium vacuum state:
c_s^2 = \lim_{\kappa\to\infty} \frac{\partial p/\partial\Delta}{\partial\rho/\partial\Delta} = -1 .  (92)
that is what is expected for a barotropic perfect liquid with a constant w, where c_s^2 = w.
v. dynamics of the coupled model and observable universe
a. scales and observable universe
in order to make a connection between the above model results and the observable uni-
verse, we need to first conclude where we are now with respect to the critical temperatures
t◦and tcrit. as one can see from table i for α ∼1, the model has t◦∼tcrit ∼m. we
identify the current equilibrium temperature of the universe with the cosmic background
radiation temperature t = 2.725 k = 2.4 × 10^{-4} ev. then we see right away that we cannot
be above the critical temperature of the coupled model, since:
(i) the assumption t > t_crit leads to m ≲ 10^{-4} ev, which in turn implies too small densities
ρ ∼ m^4 ∼ 10^{-16} ev^4, i.e., four orders of magnitude less than the observable density;
(ii) at t > t_crit the equation of state has w > −1/4 (see fig. 3), which is not even enough to
get a positive acceleration ä > 0, while the observable value is w ≈ −1 [13].
so, the first qualitative conclusion is that we are currently below the critical temperature.
the universe has already passed the stable and metastable phases and is now unstable, i.e.
it is in the transition toward the stable "doomsday" vacuum m = ∞ and ω = 0.
since at the temperature of metastability p◦ = 0, the transition occurs somewhere between
the beginning of the matter-dominated era (t_md ≈ 16500 k ≈ 1.42 ev) and now,
i.e., 1.4 ev ≳ t_crit > t_now ∼ 2.4 × 10^{-4} ev. because of eq. (68) this inequality gives us the
possible range of the model's single parameter m:
2.4 × 10^{-4} ev < m ≲ 1.4 ev .  (93)
as we will show in the following, other consistency checks of the model bring the upper
bound of m much lower.
b. universe before the phase transition
in order to apply the results of the coupled model for the calculation of the parameters
of the observable universe, we need to incorporate the matter (we will just add up the dark
and conventional baryonic matter together) and the radiation. assuming a spatially flat
universe, the total energy density is critical, so
\rho_{tot} = \rho_{\gamma,now}/a^4 + \rho_{m,now}/a^3 + \rho_{\phi\nu}(\Delta) = \rho_{cr} = \frac{3h^2}{8\pi g} ,  (94)
where from now on we denote by ρ_φν the energy density of the coupled model given by eq. (72).
to relate our model's parameters to the standard cosmological notations, we assume that
the temperature is evolving as that of the blackbody radiation, i.e., t = t_now/a. then
\Delta \equiv \frac{m}{t} = \frac{m\,a}{t_{now}} = \frac{m}{t_{now}(1 + z)} .  (95)
we know that
\rho_\gamma = \frac{\pi^2}{15}\, t^4 ,  (96)
and we set the current density of the coupled scalar field to the observable value of the dark
energy, i.e., 3/4 of the critical density:
\rho_{\phi\nu,now} = \frac{3}{4}\cdot\frac{3h_0^2}{8\pi g} \approx 31\times(10^{-3}\ \mathrm{ev})^4 ,  (97)
and
\rho_{m,now} \approx \frac{1}{4}\cdot\frac{3h_0^2}{8\pi g} .  (98)
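the number quoted in (97) follows from the values of h_0 and g listed after eq. (3); a few lines of arithmetic (ours) reproduce it.

# quick arithmetic check (ours) of eq. (97): 3/4 of the critical density in units of (10^-3 ev)^4.
import math
h = 0.72
H0 = 2.1*h*1e-42                 # present hubble parameter in gev
G_N = 6.709e-39                  # newton constant in natural units, 1/m_planck^2 (gev^-2)
rho_cr = 3.0*H0**2/(8.0*math.pi*G_N)   # gev^4
print(0.75*rho_cr/1e-48)         # (10^-3 ev)^4 = 1e-48 gev^4; result ~ 31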
the equations above allow us to plot the relative energy densities
\omega_\# \equiv \rho_\#/\rho_{tot}  (99)
as functions of redshift (or temperature) up to the critical point, see fig. 5.5 in the
high-temperature limit, the matter term is sub-leading and
\rho_{tot} \approx \rho_\gamma + \rho_{\phi\nu} \approx \frac{\pi^2}{15}\left(1 + \frac{7}{4}\right) t^4 .  (100)
5 we apologize for some abuse of notations, but using the same greek letter for the grand thermodynamic
potential and relative densities seems to be standard now. since these quantities are mainly discussed in
different sections of the paper, we hope the reader will not be confused.
fig. 5: (color online) relative energy densities plotted up to the current redshift (temperature,
upper axis): ωφν – coupled de and neutrino contribution; ωγ – radiation; ωm – combined baryonic
and dark matters. parameter m = 2.39 * 10−3 ev (α = 0.01), chosen to fit the current densities,
determines the critical point of the phase transition zcr ≈3.67. the crossover redshift z∗≈0.83
corresponds to the point where the universe starts its accelerating expansion.
in this limit, then
\omega_{\phi\nu} = \frac{7}{11} \approx 0.636 ,\quad \omega_\gamma = \frac{4}{11} \approx 0.363 ,  (101)
which agrees well with the numerical results displayed in fig. 5. at the critical point the
matter strongly dominates and ρm/ργ,φν ≳102.
the equation of state parameter of the entire universe, wtot, is given by ptot = wtotρtot.
since the matter contribution p_m = 0, then p_tot = p_γ + p_φν, where p_γ = ρ_γ/3 and the
pressure of the coupled model pφν is obtained from eq. (52). the numerical results of wtot
are given in fig. 6.
fig. 6: (color online) equation of state parameter wtot = ptot/ρtot for m = 2.39 × 10−3 ev
(α = 0.01) plotted up to the current redshift (temperature, upper axis).
to analyze the dynamics of the coupled model we need, in principle, to go beyond the
saddle-point approximation applied in the previous sections and solve the equation of motion:
\ddot\phi + 3h\dot\phi + \frac{\partial\omega}{\partial\phi} = 0 .  (102)
above the transition point (t > t_crit) the dynamics is quite simple. let us analyze perturbations to the saddle-point solution of (24):
\phi(t) \equiv \phi_c + \psi(t) .  (103)
taylor-expanding the thermodynamic potential of the coupled model,
\frac{\partial\omega}{\partial\phi} = \omega^2\psi + \frac{1}{2}\omega'''(\phi_c)\psi^2 + \dots  (104)
with \omega^2 \equiv \omega''(\phi_c), we obtain from (102) the equation of a damped harmonic oscillator to
the leading order:
\ddot\psi + 3h\dot\psi + \omega^2\psi = 0 .  (105)
so, the quintessence field φ(t) oscillates around its saddle-point value φ_c with \psi(t) \propto e^{\imath\omega t - \frac{3}{2}h t}. the damping is very small, since as one can check
\omega \gg \frac{3}{2}h .  (106)
the violation of the above condition and the breakdown of the oscillating regime occur in
the vicinity of the critical point, which is the inflection point of the potential (ω = 0). this
is the well-known phenomenon of critical slowing down near a phase transition. retaining
the first non-vanishing term in (104), the equation of motion in the vicinity of the critical
point reads:
\ddot\psi + 3h\dot\psi + \frac{1}{2}\omega'''(\phi_c)\psi^2 = 0 .  (107)
neglecting the small damping term in this equation, its solution can be found analytically
via a hypergeometric function. since the explicit form of this solution is not very interesting
at this point, we just emphasize the qualitative conclusion of the analysis: the fluctuation
ψ(t) oscillates near the classical field φ_c in the stable (metastable) phase at t > t_crit, and
it enters the run-away (power-law) regime when t → t_crit^+ [64].
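this oscillation-versus-run-away picture can be illustrated with a toy integration of eqs. (105) and (107); the sketch below is ours, with arbitrary illustrative parameters (ω² = ω''' = 1 and h chosen small so that condition (106) holds).

# toy sketch (ours, arbitrary units): damped oscillation, eq. (105), versus the
# run-away regime at the critical point, eq. (107) with the quadratic force only.
import numpy as np
from scipy.integrate import solve_ivp

H = 0.05            # hubble rate, << omega to mimic eq. (106)
omega2 = 1.0        # omega^2 = omega''(phi_c) away from the critical point
w3 = 1.0            # omega'''(phi_c), the surviving term at the critical point

def rhs(t, y, critical):
    psi, dpsi = y
    force = 0.5*w3*psi**2 if critical else omega2*psi + 0.5*w3*psi**2
    return [dpsi, -3.0*H*dpsi - force]

y0 = [0.1, 0.0]
osc = solve_ivp(rhs, (0.0, 20.0), y0, args=(False,), max_step=0.05)
run = solve_ivp(rhs, (0.0, 20.0), y0, args=(True,), max_step=0.05)
print("t > t_crit: max|psi| =", float(np.max(np.abs(osc.y[0]))))   # stays ~ initial amplitude
print("t = t_crit: psi(20)  =", float(run.y[0, -1]))               # grows well beyond it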
c. late-time acceleration of the universe. towards the end of times
the equilibrium methods are not applicable below the phase transition, and we study
the dynamics of the model from the equation of motion (102) together with the friedmann
equations (1,2,3). solution of the dirac equations yields ρ_s ∝ a^{-3} for the chiral density [66],
so the equation of motion (102) at a ≥ a_crit reads:
\ddot\phi + 3h\dot\phi = -\frac{\partial u}{\partial\phi} - \rho_{s,crit}\left(\frac{a_{crit}}{a}\right)^3 .  (108)
from the results of the previous section we evaluate the chiral density at the critical point:
\rho_{s,crit} \approx \alpha\left(\frac{\Delta_{crit}}{\nu}\right)^{\alpha+1} m^3 .  (109)
the system of the integro-differential equations (108,1,2,3) was solved numerically. all the
quantities entering those equations are defined in the previous subsection, except that one
needs to include the extra term \frac{1}{2}\dot\phi^2 in the computation of both ρ_tot and p_tot. however,
the numerical results show that in the regimes of the parameters we are interested in, the
kinetic term can be safely neglected. since the critical point of the model lies in the matter-
dominated regime (cf. fig. 5), we start with the hubble parameter h = 2/(3t) (a ∝ t^{2/3}).
at the latest times (z ≲ 1) the hubble parameter was determined self-consistently from the
numerical solution of the friedmann equations.
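a stripped-down version of such an integration, keeping only the matter-dominated hubble rate h = 2/(3t) and the mean-field force of eq. (108) in units m = 1, is sketched below as our own illustration; the full computation in the paper also feeds ρ_tot and p_tot back into the friedmann equations at late times.

# stripped-down sketch (ours; units m = 1, alpha = 1): eq. (108) on a fixed
# matter-dominated background a ~ t^(2/3), started at the critical point.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.0
nu = alpha + 2.5
Dc = (np.sqrt(2.0)/(alpha*np.pi**1.5)*nu**nu*np.exp(-nu))**(1.0/(alpha + 4.0))  # eq. (67)
phi_c = nu/Dc                               # phi_crit in units of m
rho_sc = alpha*(Dc/nu)**(alpha + 1.0)       # eq. (109) in units of m^3

t_c = 1.0e3                                 # critical time in units 1/m, so that h << m_phi
a = lambda t: (t/t_c)**(2.0/3.0)            # a(t_c) = 1

def rhs(t, y):
    phi, dphi = y
    H = 2.0/(3.0*t)
    force = alpha/phi**(alpha + 1.0) - rho_sc/a(t)**3   # -dU/dphi - rho_s(a), eq. (108)
    return [dphi, -3.0*H*dphi + force]

sol = solve_ivp(rhs, (t_c, 5.0*t_c), [phi_c, 0.0], max_step=1.0, rtol=1e-8)
phibar = phi_c*a(sol.t)**(3.0/(alpha + 1.0))            # mean-field solution, eq. (110)
print("max relative deviation from phibar:", float(np.max(np.abs(sol.y[0]/phibar - 1.0))))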
we find numerically that the quintessence field φ(t) from the critical point to the present
time oscillates quickly (with the period τ ∼ 10^{-27} gyr) around the smooth ("mean value")
solution \bar\phi(t), where the "mean" \bar\phi nullifies the r.h.s. of the equation of motion (108).
relating the mean values with the physically relevant observable quantities, we can easily
obtain the key results analytically. (they are checked against direct numerical calculations
and found to be accurate within 5% at most.) thus we get
\bar\phi = \phi_{crit}\cdot\left(\frac{1 + z_{crit}}{1 + z}\right)^{\frac{3}{\alpha+1}} ,  (110)
\rho_{\bar\phi} = \rho_{\phi,crit}\cdot\left(\frac{1 + z}{1 + z_{crit}}\right)^{\frac{3\alpha}{\alpha+1}} ,  (111)
where \phi_{crit} \approx \frac{\nu}{\Delta_{crit}}\, m and \rho_{\phi,crit} \approx \left(\frac{\Delta_{crit}}{\nu}\right)^\alpha m^4. having a free model parameter m, we'll set
it by matching the current density of the scalar field ρ_φ,now to the observable value of the
de density (97), so
m = \left(\nu^\alpha\,\rho_{\phi,now}\right)^{\frac{\alpha+1}{\alpha+4}}\,\Delta_{crit}^{-\alpha}\; t_{now}^{-\frac{3\alpha}{\alpha+4}} .  (112)
the exponent of the quintessence potential α is now the only parameter which can be varied.
we define the time-dependent mass via the solution of the motion equation as m(t) = \bar\phi(t),
thus obtaining an estimate for the present-time neutrino mass. results for various α are
given in table ii. there we also calculate the critical points parameterized by the redshifts
z_crit and the crossover points z∗. the latter is defined as the redshift at which the universe
starts its late-time acceleration, i.e., where w_tot = −1/3. for the present time we find
w_{tot}^{now} \approx -\frac{3}{4} .  (113)
as we infer from the data of table ii, the range of exponents α ≪1 corresponds to more
realistic predictions for the neutrino mass [1, 7, 8] and for the crossover redshift z∗[65].
for α = 0.01 we plot the evolution of the relative energy densities, the equation of state
parameter, and the neutrino mass in figs. 5,6,7.
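the chain of estimates behind table ii can be reproduced with the approximate ∆_crit of eq. (67); the sketch below is ours, and for the values of α we tried it lands close to the tabulated numbers.

# sketch (ours): fix m from eq. (112), then estimate z_crit and the present
# neutrino mass m_now = phibar(z=0) from eq. (110). uses the approximate
# Delta_crit of eq. (67), so the numbers are indicative.
import numpy as np

T_now = 2.4e-4                 # ev
rho_de_now = 31.0e-12          # ev^4, eq. (97): 31*(10^-3 ev)^4

def estimates(alpha):
    nu = alpha + 2.5
    Dc = (np.sqrt(2.0)/(alpha*np.pi**1.5)*nu**nu*np.exp(-nu))**(1.0/(alpha + 4.0))   # eq. (67)
    M = (nu**alpha*rho_de_now)**((alpha + 1.0)/(alpha + 4.0)) \
        * Dc**(-alpha) * T_now**(-3.0*alpha/(alpha + 4.0))                           # eq. (112)
    z_crit = M/(Dc*T_now) - 1.0                       # t_crit = m/Delta_crit, t = t_now(1+z)
    phi_crit = nu*M/Dc
    m_now = phi_crit*(1.0 + z_crit)**(3.0/(alpha + 1.0))   # eq. (110) at z = 0
    return M, m_now, z_crit

for alpha in (1.0, 0.1, 0.01):
    M, m_now, z_crit = estimates(alpha)
    print(f"alpha={alpha}:  M={M:.2e} ev,  m_now={m_now:.3g} ev,  z_crit={z_crit:.3g}")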
we consider the quite artificial case of small quintessence exponent α as an ansatz crossing
over smoothly from physically plausible potentials with, say, α = 1 or 2 to the logarithmic
potential
u(\phi) = m^4\left(1 + \alpha\log\frac{m}{\phi}\right) .  (114)
the latter often appears in various contexts [13, 36].6
6 the numerical results for small parameter α, e.g. α = 0.01 taken for the plots, are virtually indistinguishable
for the cases of the ratra-peebles (48) or logarithmic (114) potentials. however the ratra-peebles potential
at more "natural" α = 1, 2 allows one to probe the coupled fermionic-quintessence models in the search of
heavy dm particle candidates.
table ii: model's parameters and observables for various α. all the entries in this table are
defined in the text.
α | m (ev) | m_now (ev) | z_crit | z∗
2 | 9.75 × 10^-2 | 167 | 392 | 4.9
1 | 1.69 × 10^-2 | 44.6 | 76.6 | 2.3
1/2 | 6.33 × 10^-3 | 17.0 | 27.7 | 1.5
10^-1 | 2.81 × 10^-3 | 2.82 | 8.73 | 0.93
10^-2 | 2.39 × 10^-3 | 0.27 | 3.67 | 0.83
10^-3 | 2.36 × 10^-3 | 0.027 | 1.60 | 0.82
fig. 7: neutrino mass m for m = 2.39 * 10−3 ev (α = 0.01) plotted up to the current redshift
(temperature, upper axis).
vi. conclusions
in this paper we analyzed the mavan scenario in a framework of a simple "minimal"
model with only one species of the (initially) massless dirac fermions coupled to the scalar
quintessence field. by using the methods of thermal quantum field theory we derived for
the first time (in the context of the mavan or, even more broadly, the vamp models) a
consistent equation for fermionic mass generation in the coupled model.
we demonstrated that the mass equation has non-trivial solutions only for special classes
of potentials and only within certain temperature intervals. it appears that these results
have not been reported in the literature on vamps before now.
we gave most of the results for the particular choice of a trial de potential – the ratra-
peebles quintessence potential. this potential has all the necessary properties we needed for
our task: it is simple, it satisfies the criteria we found for non-trivial solutions of the mass
equation to exist, and it has only one dimensionfull parameter- the energy scale m to tune.
also, at small values of the exponent α it effectively crosses over to the case of a logarithmic
potential. we have checked that other potentials, e.g., exponential, lead to a qualitatively
similar picture, but they have at least one more energy scale to handle, which we consider
as an unnecessary complication at this point.
we analyzed the thermal (i.e.
temporal) evolution of the model, following the time
arrow. contrary to what one might expect from analogies with other contexts, like, e.g.,
condensed matter, the model does not generate the mass via a conventional spontaneous
symmetry breaking below a certain temperature. instead it has a non-trivial solution for
the fermionic mass evolving "smoothly" from zero at the "point" t = ∞. the scalar field
is infinitely heavy at the same point. more realistically, we assumed the model is applicable
starting at the temperatures somewhere in the beginning of the radiation-dominated era.
we found that the de contribution in this regime is subleading, and the model behaves as
an ultra-relativistic fermi gas at those temperatures.
this regime corresponds to a stable phase of the model given by a global minimum of the
thermodynamic potential ω(φ). the temperature/time dependent minimum ⟨φ⟩generates
the varying fermionic mass m ∝⟨φ⟩.
with increase in time, as the temperature decreases, the model reaches the point of
metastability where its pressure (p) vanishes. from our estimates of the model's scales,
we showed that this happens during the matter-dominated era of the universe. at this
point the system's ground state becomes doubly degenerate, and the potential ω= 0 at the
non-trivial (finite) minimum ⟨φ⟩as well as at the trivial vacuum φ = ∞.
further on, at lower temperatures the system stays in the metastable (supercooled) state
until it reaches the critical point where the local minimum of the thermodynamic potential
disappears and it becomes an inflexion point. at this critical temperature the model un-
dergoes a first-order (discontinuous) phase transition. at the critical point the equilibrium
values of the fermionic and the scalar field masses discontinuously jump to the "doomsday"
vacuum state values m = ∞and mφ = 0, respectively. the square of the sound velocity and
equation of state parameter w have the equilibrium values corresponding to the de sitter
universe with a cosmological constant, i.e. c2
s = w = −1. it is worth pointing out that
c2
s > 0 in both the stable and metastable phases, and the sound velocity vanishes reaching
the critical temperature from above.
since the equilibrium approach is not applicable below the critical temperature, we find
parameters of the model from direct numerical solution of the equation of motion and the
friedmann equations. the single scale m of the quintessence potential is chosen to match
the present de density, then other parameters of the universe are determined. we obtain a
consistent picture: the phase transition has occurred rather recently at zcrit ≲5 during the
matter-dominated era, and the universe is now being driven towards the stable vacuum with
zero λ-term. the expansion of the universe accelerates starting from z∗≈0.83. setting
α = 0.01 for m ≈2.4 * 10−3 ev, we end up with the neutrino mass m ≈0.27 ev.
the present results allow us to propose a completely new viewpoint not only on the
mavan, but on the quintessence scenario for the universe as well. the common concerns
about the slow-rolling mechanism for the de relaxation toward the λ = 0 vacuum are related
to the question of what is the mechanism to set the initial value of the scalar field φ where
it evolves (rolls down) from. our results demonstrate that up to recent times (i.e. above the
critical temperature) the quintessence field was locked around its average (classical) value
⟨φ⟩. its value is determined by the scale m and the temperature. the average ⟨φ⟩gives
the fermionic mass at the same time. the scalar field is rigid (i.e. massive), although it
softens (i.e., its mass decreases) as the system approaches the critical temperature. above
the critical temperature the scalar field can only oscillate around its equilibrium value ⟨φ⟩.
at the critical point the minimum of the thermodynamic potential becomes the inflexion
point, the scalar field looses its rigidity (mass). then the field can only roll down towards
the new stable ground state ω= 0 at φ = ∞. so physically, the critical point corresponds
to the transition of the universe from the stable oscillatory to the unstable rolling regime.
a more sophisticated numerical study of the kinetics after the critical point is warranted
in order to address such issues as the detailed description of the crossover between different
regimes, and the clustering of neutrinos. these and some other questions are relegated to
our future work.
acknowledgments
we highly appreciate useful comments and discussions with d. marfatia, b. ratra, and n.
weiner. we are grateful to n. arhipova, d. boyanovsky, r. brandenberger, o. chkvorets,
h. feldman, a. gruzinov, l. kisslinger, the late l. kofman, s. lukyanov, and u. wi-
choski for helpful discussions and communications. we thank the anonymous referee for
constructive criticism and comments which stimulated us to undertake deeper analyses of
the model, and especially of its dynamics. g.y.c. thanks the center for cosmology and par-
ticle physics at new york university for hospitality. we acknowledge financial support from
the natural science and engineering research council of canada (nserc), the laurentian
university research fund (lurf), scientific co-operation programme between eastern
europe and switzerland (scopes), the georgian national science foundation grants #
st08/4-422. t.k. acknowledges the support from nasa astrophysics theory program
grant nnxloac85g and the ictp associate membership program. a.n. thanks the bruce
and astrid mcwilliams center for cosmology for financial support.
[1] a. d. dolgov, phys. atom. nucl. 71, 2152 (2008) [arxiv:0803.3887 [hep-ph]]; physics reports
370, 333 (2002).
[2] j. lesgourgues and s. pastor, phys. rept. 429, 307 (2006) [arxiv:astro-ph/0603494].
[3] a. strumia and f. vissani, arxiv:hep-ph/0606054.
[4] s. hannestad, ann. rev. nucl. part. sci. 56, 137 (2006) [arxiv:hep-ph/0602058].
[5] s. eidelman et al. [particle data group], phys. lett. b 592, 1 (2004).
[6] h. abele, prog. part. nucl. phys. 60, 1 (2008).
[7] r. n. mohapatra, et al, rep. prog. phys. 70, 1757 (2007).
[8] f. t. avignone, s. r. elliott and j. engel, rev. mod. phys. 80, 481 (2008) [arxiv:0708.1033
[nucl-ex]].
[9] s. weinberg, cosmology, oxford university press, new york (2008).
[10] s. dodelson, modern cosmology, academic press, san diego (2003).
[11] p. j. e. peebles and b. ratra, rev. mod. phys. 75, 559 (2003) [arxiv:astro-ph/0207347].
[12] s. m. carroll, econf c0307282, tth09 (2003) [aip conf. proc. 743, 16 (2005)] [arxiv:astro-ph/0310342].
[13] e. j. copeland, m. sami and s. tsujikawa, int. j. mod. phys. d 15, 1753 (2006)
[arxiv:hep-th/0603057].
[14] s. nojiri and s. d. odintsov, phys. rev. d 68, 123512 (2003).
[15] s. m. carroll, a. de felice, v. duvvuri, d. a. easson, m. trodden and m. s. turner, phys.
rev. d 71, 063513 (2005).
[16] b. ratra and p. j. e. peebles, phys. rev. d 37, 3406 (1988)
[17] c. wetterich, nucl. phys. b 302, 668 (1988).
[18] a. linde, lect. notes phys. 738, 1 (2008) [arxiv:0705.0164 [hep-th]]; particle physics and
inflationary cosmology, chur, switzerland: harwood (1990) [arxiv: hep-th/0503203].
[19] s. weinberg, phys. rev. d 77, 123541 (2008) [arxiv:0804.4291 [hep-th]].
[20] e. mocchiutti et al., arxiv:0905.2551 [astro-ph.he].
[21] e. a. baltz et al., jcap 0807, 013 (2008) [arxiv:0806.2911 [astro-ph]].
[22] k. kohri, a. mazumdar, n. sahu and p. stephens, phys. rev. d 80, 061302 (2009)
[arxiv:0907.0622 [hep-ph]].
[23] g. w. anderson and s. m. carroll, arxiv:astro-ph/9711288.
[24] m. b. hoffman, arxiv:astro-ph/0307350.
[25] g. r. farrar and p. j. e. peebles, astrophys. j. 604, 1 (2004) [arxiv:astro-ph/0307316].
[26] m. kawasaki, h. murayama and t. yanagida, mod. phys. lett. a 7, 563 (1992).
[27] j. garcia-bellido, int. j. mod. phys. d 2, 85 (1993) [arxiv:hep-ph/9205216].
[28] d. comelli, m. pietroni and a. riotto, phys. lett. b 571, 115 (2003) [arxiv:hep-ph/0302080].
[29] u. alam, v. sahni, t. d. saini and a. a. starobinsky, mon. not. roy. astron. soc. 354, 275
(2004) [arxiv:astro-ph/0311364]; l. amendola, m. gasperini and f. piazza, jcap 0409, 014
(2004) [arxiv:astro-ph/0407573]. m. szydlowski, t. stachowiak and r. wojtak, phys. rev. d
73, 063516 (2006) [arxiv:astro-ph/0511650]; h. li, b. feng, j. q. xia and x. zhang, phys.
rev. d 73, 103503 (2006) [arxiv:astro-ph/0509272]. z. k. guo, n. ohta and s. tsujikawa,
phys. rev. d 76, 023508 (2007) [arxiv:astro-ph/0702015].
[30] u. franca and r. rosenfeld, phys. rev. d 69, 063517 (2004) [arxiv:astro-ph/0308149].
[31] u. franca, m. lattanzi, j. lesgourgues and s. pastor, phys. rev. d 80, 083506 (2009)
arxiv:0908.0534 [astro-ph.co].
[32] g. huey and b. d. wandelt, phys. rev. d 74, 023519 (2006) [arxiv:astro-ph/0407196].
[33] b. wang, j. zang, c. y. lin, e. abdalla and s. micheletti, cosmological parameters," nucl.
phys. b 778, 69 (2007) [arxiv:astro-ph/0607126].
[34] r. mainini and s. bonometto, jcap 0709, 017 (2007) [arxiv:0709.0174 [astro-ph]].
[35] t. koivisto, phys. rev. d 72, 043516 (2005) [arxiv:astro-ph/0504571]; r. rosenfeld, phys.
rev. d 75, 083509 (2007) [arxiv:astro-ph/0701213]; s. das, p. s. corasaniti and j. khoury,
phys. rev. d 73, 083509 (2006) [arxiv:astro-ph/0510628]; x. j. bi, b. feng, h. li and
x. m. zhang, neutrinos," phys. rev. d 72, 123523 (2005) [arxiv:hep-ph/0412002]; m. man-
era and d. f. mota, mon. not. roy. astron. soc. 371, 1373 (2006) [arxiv:astro-ph/0504519];
l. amendola, g. camargo campos and r. rosenfeld, phys. rev. d 75, 083506 (2007)
28
[arxiv:astro-ph/0610806]; g. j. stephenson, j. t. goldman and b. h. j. mckellar, int. j. mod.
phys. a 13, 2765 (1998) [arxiv:hep-ph/9603392]; s. matarrese, m. pietroni and c. schimd,
jcap 0308, 005 (2003) [arxiv:astro-ph/0305224]; a. v. maccio, c. quercellini, r. mainini,
l. amendola and s. a. bonometto, phys. rev. d 69, 123516 (2004) [arxiv:astro-ph/0309671];
f. vernizzi, phys. rev. d 69, 083526 (2004) [arxiv:astro-ph/0311167]; h. li, m. z. li and
x. m. zhang, phys. rev. d 70, 047302 (2004) [arxiv:hep-ph/0403281]; k. cheung and
o. seto, phys. rev. d 69, 113009 (2004) [arxiv:hep-ph/0403003]; a. nusser, s. s. gubser
and p. j. e. peebles, phys. rev. d 71, 083505 (2005) [arxiv:astro-ph/0412586]. s. s. gubser
and p. j. e. peebles, phys. rev. d 70, 123510 (2004) [arxiv:hep-th/0402225].
[36] r. fardon, a. e. nelson and n. weiner, jcap 0410, 005 (2004) [arxiv:astro-ph/0309800].
[37] p. gu, x. wang and x. zhang, phys. rev. d 68, 087301 (2003).
[38] r. d. peccei, phys. rev. d 71, 023527 (2005) [arxiv:hep-ph/0411137].
[39] j. grande, j. sola and h. stefancic, jcap 0608, 011 (2006) [arxiv:gr-qc/0604057]; j. grande,
a. pelinson and j. sola, phys. rev. d 79, 043006 (2009) [arxiv:0809.3462 [astro-ph]].
[40] n.
afshordi,
m.
zaldarriaga
and
k.
kohri,
phys.
rev.
d
72,
065024
(2005)
[arxiv:astro-ph/0506663].
[41] m. kaplinghat and a. rajaraman, phys. rev. d 75, 103504 (2007) [arxiv:astro-ph/0601517].
[42] v. pettorino and c. baccigalupi, phys. rev. d 77, 103003 (2008) [arxiv:0802.1086 [astro-ph]].
[43] o. e. bjaelde, a. w. brookfield, c. van de bruck, s. hannestad, d. f. mota, l. schrempp
and d. tocchini-valentini, jcap 0801, 026 (2008) [arxiv:0705.2018 [astro-ph]]; o. e. bjaelde
and s. hannestad, phys. rev. d 81, 063001 (2010) [arxiv:0806.2146 [astro-ph]].
[44] r. bean, e. e. flanagan and m. trodden, phys. rev. d 78, 023009 (2008) [arxiv:0709.1128
[astro-ph]]; new j. phys. 10, 033006 (2008) [arxiv:0709.1124 [astro-ph]].
[45] j. valiviita, e. majerotto and r. maartens, jcap 0807, 020 (2008) [arxiv:0804.0232 [astro-
ph]].
[46] c. wetterich and v. pettorino, arxiv:0905.0715 [astro-ph.co]; v. pettorino, d. f. mota,
g. robbers and c. wetterich, aip conf. proc. 1115, 291 (2009) [arxiv:0901.1239 [astro-ph]];
l. amendola, m. baldi and c. wetterich, phys. rev. d 78, 023015 (2008) [arxiv:0706.3064
[astro-ph]];
c. wetterich,
phys. lett. b 655,
201 (2007) [arxiv:0706.4427 [hep-ph]];
d. f. mota, v. pettorino, g. robbers and c. wetterich, phys. lett. b 663, 160 (2008)
[arxiv:0802.1515 [astro-ph]]; v. pettorino, d. f. mota, g. robbers and c. wetterich, aip
conf. proc. 1115, 291 (2009) [arxiv:0901.1239 [astro-ph]].
[47] a. w. brookfield, c. van de bruck and l. m. h. hall, phys. rev. d 77, 043006 (2008)
[arxiv:0709.2297 [astro-ph]]; a. w. brookfield, c. van de bruck, d. f. mota and d. tocchini-
valentini, phys. rev. lett. 96, 061301 (2006) [arxiv:astro-ph/0503349]; phys. rev. d 73,
083515 (2006) [erratum-ibid. d 76, 049901 (2007)] [arxiv:astro-ph/0512367].
[48] a. e. bernardini and o. bertolami, phys. rev. d 77, 083506 (2008) [arxiv:0712.1534 [astro-
ph]]; phys. lett. b 662, 97 (2008) [arxiv:0802.4449 [hep-ph]]; phys. lett. b 684, 96 (2010)
[arxiv:0909.1280 [gr-qc]]; phys. rev. d 80, 123011 (2009) [arxiv:0909.1541 [gr-qc]].
[49] r. takahashi and m. tanimoto, phys. lett. b 633, 675 (2006) [arxiv:hep-ph/0507142].
[50] r. takahashi and m. tanimoto, jhep 0605, 021 (2006) [arxiv:astro-ph/0601119].
[51] r. fardon, a. e. nelson and n. weiner, jhep 0603, 042 (2006) [arxiv:hep-ph/0507235].
[52] c. spitzer, arxiv:astro-ph/0606034.
[53] k. ichiki and y. y. keum, jcap 0806, 005 (2008) [arxiv:0705.2134 [astro-ph]]; jhep 0806,
058 (2008) [arxiv:0803.2274 [hep-ph]].
[54] j.i. kapusta and c. gale, finite-temperature field theory: principles and applications,
29
second edition, cambridge university press, cambridge (2006).
[55] j. zinn-justin, quantum field theory and critical phenomena, 4th edition, clarendon press,
oxford (2002).
[56] w. greiner, l. neise, and h. st ̈
ocker, thermodynamics and statistical mechanics, springer-
verlag, new york (1995).
[57] s. hannestad, jcap 0305, 004 (2003) [arxiv:astro-ph/0303076]; v. barger, j. p. kneller,
h. s. lee, d. marfatia and g. steigman, phys. lett. b 566, 8 (2003) [arxiv:hep-ph/0305075];
a. melchiorri and c. j. odman, phys. rev. d 67, 081302 (2003) [arxiv:astro-ph/0302361];
s. d. stirling and r. j. scherrer, phys. rev. d 66, 043531 (2002) [arxiv:astro-ph/0206173];
j. hamann, j. lesgourgues and g. mangano, jcap 0803, 004 (2008) [arxiv:0712.2826 [astro-
ph]].
[58] i. l. shapiro and j. sola, phys. lett. b 475, 236 (2000) [arxiv:hep-ph/9910462]; jhep 0202,
006 (2002) [arxiv:hep-th/0012227]; i. l. shapiro, j. sola, c. espana-bonet and p. ruiz-
lapuente, phys. lett. b 574, 149 (2003) [arxiv:astro-ph/0303306]; i. l. shapiro, j. sola and
h. stefancic, jcap 0501, 012 (2005) [arxiv:hep-ph/0410095]; j. sola, j. phys. a 41, 164066
(2008) [arxiv:0710.4151 [hep-th]].
[59] f. bauer, j. sola and h. stefancic, phys. lett. b 678, 427 (2009) [arxiv:0902.2215 [hep-th]];
phys. lett. b 688, 269 (2010) [arxiv:0912.0677 [hep-th]]; d. a. demir, found. phys. 39,
1407 (2009) [arxiv:0910.2730 [hep-th]]. s. basilakos, m. plionis and j. sola, phys. rev. d 80,
083511 (2009) [arxiv:0907.4555 [astro-ph.co]].
[60] n.d. birrel and p.c.w. davies, quantum fields in curved space, cambridge university press,
cambridge (1982).
[61] g. e. volovik, phil. trans. roy. soc. lond. a 366, 2935 (2008) [arxiv:0801.0724 [gr-qc]];
f. r. klinkhamer and g. e. volovik, phys. rev. d 77, 085015 (2008) [arxiv:0711.3170 [gr-
qc]]; g. e. volovik, the universe in a helium droplet, clarendon press, oxford (2003);
g. e. volovik, jetp lett. 80, 531 (2004); jetp lett. 77, 769 (2003).
[62] h. j. de vega and n. g. sanchez, arxiv:astro-ph/0701212.
[63] e. v. gorbar and i. l. shapiro, jhep 0302, 021 (2003) [arxiv:hep-ph/0210388]; jhep 0306,
004 (2003) [arxiv:hep-ph/0303124]; i. l. shapiro and j. sola, phys. lett. b682, 105 (2009)
[arxiv:0910.4935[hep-th]].
[64] a detailed analysis of the vicinity of the critical point shows that it is an unstable degenerate
saddle–node of the differential equation (107). see, e.g., n.n. bautin and e.a. leontovich,
methods of the qualitative analysis of dynamical systems on a plane, second edition, nauka,
moscow (1990).
[65] e. e. o. ishida, r. r. r. reis, a. v. toribio and i. waga, astropart. phys. 28, 547 (2008)
[arxiv:0706.0546 [astro-ph]].
[66] s. m. r. micheletti, arxiv:1009.6198 [gr-qc].
|
0911.1730 | variational approach for electrolyte solutions: from dielectric
interfaces to charged nanopores | a variational theory is developed to study electrolyte solutions, composed of
interacting point-like ions in a solvent, in the presence of dielectric
discontinuities and charges at the boundaries. three important and non-linear
electrostatic effects induced by these interfaces are taken into account:
surface charge induced electrostatic field, solvation energies due to the ionic
cloud, and image charge repulsion. our variational equations thus go beyond the
mean-field theory. the influence of salt concentration, ion valency, dielectric
jumps, and surface charge is studied in two geometries. i) a single neutral
air-water interface with an asymmetric electrolyte. a charge separation and
thus an electrostatic field gets established due to the different image charge
repulsions for coions and counterions. both charge distributions and surface
tension are computed and compared to previous approximate calculations. for
symmetric electrolyte solutions close to a charged surface, two zones are
characterized. in the first one, with size proportional to the logarithm of the
coupling parameter, strong image forces impose a total ion exclusion, while in
the second zone the mean-field approach applies. ii) a symmetric electrolyte
confined between two dielectric interfaces as a simple model of ion rejection
from nanopores. the competition between image charge repulsion and attraction
of counterions by the membrane charge is studied. for small surface charge, the
counterion partition coefficient decreases with increasing pore size up to a
critical pore size, contrary to neutral membranes. for larger pore sizes, the
whole system behaves like a neutral pore. the prediction of the variational
method is also compared with mc simulations and a good agreement is observed.
| introduction
the first experimental evidence for the enhancement
of the surface tension of inorganic salt solutions com-
pared to that of pure water was obtained more than eight
decades ago [1, 2]. wagner proposed the correct physi-
cal picture [3] by relating this effect to image forces that
originate from the dielectric discontinuity and act on ions
close to the water–air interface. he also correctly pointed
out the fundamental importance of the ionic screening of
image forces and formulated a theoretical description of
the problem by establishing a differential equation for the
electrostatic potential and solving it numerically to com-
pute the surface tension. using series expansions, on-
sager and samaras found the celebrated limiting law [4]
that relates the surface tension of symmetric electrolytes
to the bulk electrolyte density at low salt concentration.
however, it is known that the consideration of charge
asymmetry leads to a technical complication. indeed, im-
age charge repulsion, whose amplitude is proportional to
the square of ion valency, leads to a split of concentration
profiles for ions of different charge, which in turn causes
a local violation of the electroneutrality and induces an
electrostatic field close to a neutral dielectric interface.
bravina derived five decades ago a poisson-boltzmann
type of equation for this field [5] and used several ap-
proximations in order to derive integral expressions for
the charge distribution and the surface tension.
these image charge forces play also a key role in slit-
like nanopores which are model systems for studying ion
rejection and nanofiltration by porous membranes (see
the review [6] and references therein, and
[7] for a re-
view of nano-fluidics). several results have been found in
this geometry and also for cylindrical nanopores beyond
the mean-field approach (using the debye closure and the
bbgky hierarchical equations) and averaging all dielec-
tric and charge effects over the pore cross section. within
these two approximations, the salt reflection coefficient
has been studied as a function of the pore size, the bulk
salt concentration and the pore surface charge.
more precisely, the strength of electrostatic correla-
tions of ions in the presence of charged interfaces without
dielectric discontinuity is quantified by one unique cou-
pling parameter
ξ = 2πq³lb²σs   (1)
where q is the ion valency, and σs the fixed surface
charge [8–10]. the bjerrum length in water for mono-
valent ions, lb = e2/(4πεwkbt) ≈0.7 nm (εw is the
dielectric permittivity of water) is defined as the dis-
tance at which the electrostatic interaction between two
elementary charges is equal to the thermal energy kbt.
the second characteristic length is the gouy-chapman
length lg = 1/(2πqlbσs) defined as the distance at which
the electrostatic interaction between a single ion and a
charged interface is equal to kbt. the coupling param-
eter can be reexpressed in terms of these two lengths
as ξ = q2lb/lg. on the one hand, the limit ξ →0,
called the weak coupling (wc) limit, is where the physics
of the coulomb system is governed by the mean-field
or poisson-boltzmann (pb) theory, and thermal fluctu-
ations overcome electrostatic interactions.
it describes
systems characterized by a high temperature, low ion va-
lency or weak surface charge. on the other hand, ξ →∞
is the strong coupling (sc) limit, corresponding to low
temperature, high valency of mobile ions or strong sur-
face charge. in this limit, ion–charged surface interac-
tions control the ion distribution perpendicularly to the
interface. for single interface and slab geometries, several
perturbative approaches going beyond the wc limit [11]
or below the sc limit [8, 12, 13] have been developed.
although these calculations were able to capture impor-
tant phenomena such as charge renormalization [14], ion
specific effects at the water-air interface [15, 16], man-
ning condensation [17], effect of monopoles [18] or attrac-
tion between similarly charged objects, they also showed
slow convergence properties, which indicates the inabil-
ity of high-order expansions to explore the intermediate
regime, ξ ≃1. this is quite frustrating since the com-
mon experimental situation usually corresponds to the
range 0.1 < ξ < 10 where neither wc nor sc theory is
totally adequate.
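as a quick numerical illustration of these definitions (added here as a sketch, not part of the original paper), the short python snippet below evaluates lg and the coupling parameter ξ = q²lb/lg for a few valencies; the surface charge density σs used is an arbitrary, assumed value and lb ≈ 0.7 nm is the value quoted above.

import math

l_b = 0.7  # bjerrum length in water at room temperature, in nm (value quoted in the text)

def gouy_chapman_length(q, sigma_s):
    # lg = 1/(2*pi*q*lb*sigma_s); sigma_s in e/nm^2, q is the (dimensionless) valency
    return 1.0 / (2.0 * math.pi * q * l_b * sigma_s)

def coupling_parameter(q, sigma_s):
    # xi = 2*pi*q^3*lb^2*sigma_s, equivalently q^2*lb/lg
    return 2.0 * math.pi * q**3 * l_b**2 * sigma_s

if __name__ == "__main__":
    sigma_s = 1.0  # assumed surface charge density, e/nm^2 (illustrative only)
    for q in (1, 2, 3):
        lg = gouy_chapman_length(q, sigma_s)
        xi = coupling_parameter(q, sigma_s)
        assert abs(xi - q**2 * l_b / lg) < 1e-9  # the two expressions for xi agree
        print(f"q={q}: lg={lg:.3f} nm, xi={xi:.2f}")

for monovalent ions at this assumed surface charge the snippet lands at ξ ≃ 3, i.e. inside the intermediate range 0.1 < ξ < 10 discussed above.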
consequently, a non-perturbative approach valid for
the whole range of ξ is needed. a first important attempt
in this direction has been made by netz and orland [19]
who derived variational equations within the primitive
model for point-like ions and solved them at the mean-
field level in order to illustrate charge renormalization ef-
fect. interestingly, these differential equations are equiv-
alent to the closure equations established in the context
of electrolytes in nanopores [6].
they are too compli-
cated to be solved analytically or even numerically for
general ξ. a few years later, curtis and lue [20] and
hatlo et al. [21] investigated the partition of symmetric
electrolytes at neutral dielectric surfaces using a similar
variational approach (see also the review [22]). they have
also recently proposed a new variational scheme based on
a hybrid low fugacity and mean-field expansion [23], and
showed that their approach agrees well with monte-carlo
simulation results for the counterions-only case.
how-
ever, this method is quite difficult to handle, and one has
to solve two coupled variational equations, i.e. a sixth
order differential equation for the external potential to-
gether with a second algebraic equation.
within this
approach, these authors generalized the study of ion-ion
correlations for counterions close to a charged dielectric
interface, first done by netz in the wc and sc limits [24],
to intermediate values of ξ. they also studied an elec-
trolyte between two charged surfaces without dielectric
discontinuities at the pore boundary, in two cases: coun-
terions only and added salt, handled at the mean-field
level [25]. although this simplification allows one to focus
exclusively on ion-ion correlations induced by the surface
charge, the dielectric discontinuity can not be discarded
in synthetic or biological membranes. indeed, it is known
that image forces play a crucial role in ion filtration mecha-
nisms [6]. the main goal of this work is to propose a
variational analysis which is simple enough to intuitively
illustrate ionic exclusion in slit pores, by focusing on the
competition between image charge repulsion and surface
charge interaction. moreover, our approach allows us to
connect nanofiltration studies [26–28] with field-theoretic
approaches of confined electrolyte solutions within a gen-
eralized onsager-samaras approximation [4] character-
ized by a uniform variational screening length. this vari-
ational parameter takes into account the interaction with
both image charge and surface charge. we also compare
the prediction of the variational theory with monte-carlo
simulations [29] and show that the agreement is good.
the paper is organized as follows.
the variational
formalism for coulombic systems in the presence of di-
electric discontinuities is introduced in section ii. sec-
tion iii deals with a single interface. we show that the
introduction of simple variational potentials allows one to
fully account for the physics of asymmetric electrolytes
at dielectric interfaces (e.g. water–air, liquid–liquid and
liquid–solid interfaces, see ref. [30]), first studied by brav-
ina [5] using several approximations, as well as the case of
charged surfaces. in section iv, the variational approach
is applied to a symmetric electrolyte confined between
two dielectric surfaces in order to investigate the prob-
lem of ion rejection from membrane nanopores. using
restricted variational potentials, we show that due to the
interplay between image charge repulsion and direct elec-
trostatic interaction with the charged surface, the ionic
partition coefficient has a non-monotonic behaviour as a
function of pore size.
ii. variational calculation
in this section, the field theoretic variational approach
for many body systems composed of point-like ions in the
presence of dielectric interfaces is presented. since the
field theoretic formalism as well as the first order varia-
tional scheme have already been introduced in previous
works [19, 20], we only illustrate the general lines.
the grand-canonical partition function of p ion species
in a liquid of spatially varying dielectric constant ε(r) is
z = ∏_{i=1}^p Σ_{ni=0}^∞ [e^{ni μi}/(ni! λt^{3ni})] ∫ ∏_{j=1}^{ni} drij e^{−(h−es)}   (2)
where λt is the thermal wavelength of an ion, μi denotes
the chemical potential and ni the total number of ions of
type i. for sake of simplicity, all energies are expressed
in units of kbt. the electrostatic interaction is
h = (1/2) ∫ dr′ dr ρc(r) vc(r, r′) ρc(r′)   (3)
where ρc is the charge distribution (in units of e)
ρc(r) = Σ_{i=1}^p Σ_{j=1}^{ni} qi δ(r −rij) + ρs(r),   (4)
and qi denotes the valency of each species, ρs(r) stands
for the fixed charge distribution and vc(r, r′) is the
coulomb potential whose inverse is defined as
vc^{−1}(r, r′) = −(kbt/e²) ∇·[ε(r)∇δ(r −r′)]   (5)
where ε(r) is a spatially varying permittivity. the self-
energy of mobile ions, which is subtracted from the total
electrostatic energy, is
es = (vb_c(0)/2) Σ_{i=1}^p ni qi²   (6)
where vb_c(r) = lb/r is the bare coulomb potential for ε(r) = εw.
after performing a hubbard-stratonovitch
transformation and the summation over ni in eq. (2),
the grand-canonical partition function takes the form of
a functional integral over an imaginary electrostatic aux-
iliary field φ(r), z = ∫ dφ e^{−h[φ]}. the hamiltonian is
h[φ] = ∫ dr { [∇φ(r)]²/(8πlb(r)) −iρs(r)φ(r) −Σi λi e^{qi² vb_c(0)/2 + iqiφ(r)} }   (7)
where a rescaled fugacity
λi = e^{μi}/λt³   (8)
has been introduced. the variational method consists in
optimizing the first order cumulant
fv = f0 + ⟨h −h0⟩0.   (9)
where averages ⟨· · ·⟩0 are to be evaluated with respect to the most general gaussian hamiltonian [19],
h0[φ] = (1/2) ∫_{r,r′} [φ(r) −iφ0(r)] v0^{−1}(r, r′) [φ(r′) −iφ0(r′)]   (10)
and f0 = −(1/2) tr ln v0. the variational principle consists
in looking for the optimal choices of the electrostatic
kernel v0(r, r′) and the average electrostatic potential
φ0(r) which extremize the variational grand potential
eq. (9). the variational equations δfv/δv0^{−1}(r, r′) = 0 and δfv/δφ0(r) = 0, for a symmetric electrolyte and ε(r) = εw, yield
∆φ0(r) −8πlb q λ e^{−q²W(r)/2} sinh[qφ0(r)] = −4πlb ρs(r)   (11)
−∆v0(r, r′) + 8πlb q² λ e^{−q²W(r)/2} cosh[qφ0(r)] v0(r, r′) = 4πlb δ(r −r′).   (12)
where we have defined
W(r) ≡ lim_{r′→r} [v0(r, r′) −vb_c(r −r′)]   (13)
whose physical significance will be given below. the
second terms on the lhs of eq. (11) and of eq. (12)
have simple physical interpretations: the former is 4πlb
times the local ionic charge density and the latter is
4πlbq2 times the local ionic concentration.
the rela-
tions eqs. (11)–(12) are respectively similar in form to
the non-linear poisson-boltzmann (nlpb) and debye-hückel (dh) equations, except that the charge and salt
sources due to mobile ions are replaced by their local
values according to the boltzmann distribution. on the
one hand, eq. (11) is a poisson-boltzmann like equa-
tion where appears the local charge density proportional
to sinh φ0.
this equation handles the asymmetry in-
duced by the surface through the electrostatic potential
φ0, which ensures electroneutrality. this asymmetry may
be due to the effect of the surface charge on anion- and
cation-distributions (see section iii b) or due to dielec-
tric boundaries and image charges at neutral interfaces,
which give rise to interactions proportional to q2, and in-
duce a local non-zero φ0 for asymmetric electrolytes (see
section iii a). on the other hand, the generalized dh
equation eq. (12), where appears the local ionic concen-
tration proportional to cosh φ0, fixes the green's function
v0(r, r′) evaluated at r with the charge source located at
r′ and takes into account dielectric jumps at boundaries.
these variational equations were first obtained within
the variational method by netz and orland [19]. they
were also derived in ref. [31] within the debye closure
approach and the bbgky hierarchic chain. yaroshchuk
obtained an approximate solution of the closure equa-
tions for confined electrolyte systems in order to study
ion exclusion from membranes [6].
equations (11)–(12) enclose the limiting cases of wc
(ξ →0) and sc (ξ →∞). to see that, it is interesting to
rewrite these equations by renormalizing all lengths and the fixed charge density, ρs(r), by the gouy-chapman length according to r̃ = r/lg, ρ̃s(r̃) = lg ρs(r)/σs (σs is the average surface charge density). by introducing a new electrostatic potential φ̃0(r) = qφ0(r), one can express the same set of equations in an adimensional form
∆̃ φ̃0(r̃) −Λ e^{−ξW̃(r̃)/2} sinh φ̃0(r̃) = −2ρ̃s(r̃)   (14)
−∆̃ ṽ0(r̃, r̃′) + Λ e^{−ξW̃(r̃)/2} cosh φ̃0(r̃) ṽ0(r̃, r̃′) = 4πδ(r̃ −r̃′)   (15)
where ṽ0 = v0 lg/lb, W̃ = W lg/lb and we have also introduced the rescaled fugacity Λ = 8πλ lg³ ξ [32]. now,
one can check that, in both limits ξ →0 and ξ →∞, the
coupling between φ0 and v0 in eq. (11) disappears and
the theory becomes integrable. finally, it is important to
note that this adimensional form of variational equations
allows one to focus on the role of v0(r, r) whose strength
is controlled by ξ in eqs. (14) and (15). however, even
at the numerical level, their explicit coupling does not
allow for exact solutions for general ξ.
in the present work, we make a restricted choice
for v0(r, r′) and replace the local salt concentration in
the form of a local debye-hückel parameter (or inverse screening length) κ(r) in eq. (12),
κ(r)² = 8πlb q² λ e^{−q²W(r)/2} cosh[qφ0(r)],   (16)
by a constant piecewise one κv(r) = κv in the presence
of ions and κv(r) = 0 in the salt-free parts of the sys-
tem. note that it has been recently shown that many
thermodynamic properties of electrolytes are successfully
described with a debye-hückel kernel [33].
the inverse kernel (or the green's function) v0(r, r′)
is then taken to be the solution to a generalized debye-hückel (dh) equation
[−∇·(ε(r)∇) + ε(r)κv²(r)] v0(r, r′) = (e²/kbt) δ(r −r′)   (17)
with the boundary conditions associated with the dielectric discontinuities of the system
lim_{r→σ−} v0(r, r′) = lim_{r→σ+} v0(r, r′),   (18)
lim_{r→σ−} ε(r)∇v0(r, r′) = lim_{r→σ+} ε(r)∇v0(r, r′)   (19)
where σ denotes the dielectric interfaces.
we now re-
strict ourselves to planar geometries. we split the grand
potential (9) into three parts, fv = f1 + f2 + f3, where
f1 is the mean electrostatic potential contribution,
f1 = s ∫ dz { −[∇φ0(z)]²/(8πlb) + ρs(z)φ0(z) −Σi λi e^{−qi²W(z)/2 −qiφ0(z)} },   (20)
f2 the kernel part and f3 the unscreened van der waals
contribution.
the explicit forms of f2 and f3 are re-
ported in appendix a. the first variational equation is
given by ∂fv/∂κv = ∂(f1 + f2) /∂κv = 0. this equa-
tion is the restricted case of eq. (12).
as we will see
below, its explicit form depends on the confinement ge-
ometry of the electrolyte system as well as on the form
of ε(r).
the variational equation for the electrostatic
potential [34] δfv/δφ0(z) = 0 yields regardless of the
confinement geometry
∂²φ0/∂z² + 4πlb ρs(z) + Σi 4πlb qi λi e^{−qi²W(z)/2 −qiφ0(z)} = 0.   (21)
the second-order differential equation (21), which is
simply the generalization of eq. (11) for a general elec-
trolyte in a planar geometry, does not have closed-form
solutions for spatially variable W(z). in what follows,
we optimize the variational grand potential fv using re-
stricted forms for the electrostatic potential φ0(z) and
compare the result to the numerical solution of eq. (21)
for single interfaces and slit-like pores.
the single ion concentration is given by
ρi(z) = λi e^{−qi²W(z)/2 −qiφ0(z)}   (22)
and its spatial integral by
∫ dz ρi(z) = −λi ∂fv/∂λi.   (23)
we define the potential of mean force (pmf) of ions of
type i, φi(z), as
φi(z) ≡ −ln[ρi(z)/ρb]   (24)
by defining
w(z) ≡ W(z) −Wb   (25)
where Wb is the value of W(z) in the bulk and comparing eqs. (22)–(24), we find
φi(z) = (qi²/2) w(z) + qiφ0(z)   (26)
(qi²/2) Wb = ln γi^b ≡ μi −ln(ρbλt³)   (27)
hence, qi²Wb/2 is nothing else but the excess chemical potential of ion i in the bulk and qi²w(z)/2 = ln γi(z) is its generalization for ion i at distance z from the interface. they are related to the activity coefficients γi^b
and γi(z). note that the zero of the chemical potential is
fixed by the condition that φ0 vanishes in the bulk. the
pmf, eq. (26), is thus the mean free energy per ion (or
chemical potential) needed to bring an ion from the bulk
at infinity to the point at distance z from the interface,
taking into account correlations with the surrounding
ionic cloud.
before applying the variational procedure to single and
double interfaces, let us consider the variational approach
in the bulk. in this case, the variational potential φ0 is
fig. 1: (color online) geometry for a single dielectric inter-
face (e.g. water–air) (a) and double interfaces or slit-like pores
(b).
equal to 0, and the variational grand potential fv only
depends on κv.
two minima appear: one metastable minimum κv⁰ at low values of κv, and a global minimum at infinity (fv → −∞ for κv → ∞) which is unphysical since at these large concentration values, finite size effects should be taken into account. it has been shown by introducing a cutoff at small distances [20], that, for physical temperatures, this instability disappears and the global minimum of fv is κv⁰ ≈ κb given by the debye–hückel limiting law:
μi = ln(ρbλt³) −(qi²/2) κb lb,   κb² = 4πlb Σi qi² ρi,b   (28)
from eq. (27), we thus find Wb ≈ −κb lb and the potential w(z) reduces to
w(z) ≈ v0(z, z) −vb_c(0) + κb lb,   (29)
which will be adopted in the rest of the paper. further-
more, problems due to the formation of ion pairs do not
enter at the level of the variational approach we have
adopted. let us also report the following conversion re-
lations
ib ≃ 0.19 (κb lb)² mol.l⁻¹,   κb ≃ 3.29 √i nm⁻¹,   (30)
where i = (1/2) Σi qi² ρi,b is the ionic strength expressed in mol.l⁻¹. finally, the single-ion densities are given by
ρi(z) = ρi,b e^{−qi²w(z)/2 −qiφ0(z)}.   (31)
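the conversion relations (30) are straightforward to script; the sketch below (python, illustrative values, not taken from the paper) computes κb from a given ionic strength and checks the approximate inverse relation.

import math

l_b = 0.7  # bjerrum length in water, nm

def kappa_b_from_ionic_strength(i_mol_per_l):
    # eq. (30): kappa_b ~ 3.29 sqrt(i) nm^-1, with i in mol/l
    return 3.29 * math.sqrt(i_mol_per_l)

def ionic_strength_from_kappa(kappa_b):
    # eq. (30): i ~ 0.19 (kappa_b l_b)^2 mol/l
    return 0.19 * (kappa_b * l_b) ** 2

if __name__ == "__main__":
    for i in (0.001, 0.01, 0.1, 1.0):  # ionic strength in mol/l
        kb = kappa_b_from_ionic_strength(i)
        print(f"i = {i} mol/l -> kappa_b = {kb:.2f} nm^-1 -> back to i ~ {ionic_strength_from_kappa(kb):.3f} mol/l")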
iii. single interface
the single interfacial system considered in this section
consists of a planar interface separating a salt-free left
half-space from a right half-space filled up with an elec-
trolyte solution of different species (fig. 1). in the gen-
eral case, the dielectric permittivity of the two half spaces
may be different (we note ε the permittivity in the salt-
free part). the green's function, which is chosen to be
solution of the dh equation with ε(z) = εθ(−z) + εwθ(z)
fig. 2: (color online) potential w(z) in units of kbt for ε = 0
(black solid curve), eq. (36), and ε = εw (red dashed curve)
and κblb = 4.
and κ(z) = κvθ(z) where θ(z) stands for the heaviside
distribution, is given for z > 0 by [5]
w(z) = lb(κb −κv) + lb ∫0^∞ [k dk/√(k²+κv²)] ∆(k/κv) e^{−2√(k²+κv²) z}   (32)
where
∆(x) = [εw√(x²+1) −εx] / [εw√(x²+1) + εx].   (33)
and f2 [eq. (a4)] can be analytically computed [20]
f2 = v κv³/(24π) + s ∆κv²/(32π),   (34)
where
∆ ≡ ∆(x →∞) = (εw −ε)/(εw + ε).   (35)
the first term on the rhs. of eq. (34) is simply the volu-
mic debye free energy associated with a hypothetic bulk
with a debye inverse screening length κv and the second
term on the rhs. involves interfacial effects, including the
dielectric jump ∆, and κv.
for the single interface system, as seen in section ii, f3
is independent of κv and φ0(z), which means that it does
not contribute to the variational equations. by minimiz-
ing eqs. (20) and (34) with respect to κv for fixed φ0(r)
and taking v →∞, one exactly find the same variational
equation for κv as for the bulk case. hence, as discussed
above, we have κv = κb given by eq. (28) and the first
term of the rhs. of eq. (32) vanishes. this result was
obtained in [20] for the special case φ(r) = 0. it is of
course not surprising to end up with the same result for
finite φ(r) since we know that the electrostatic potential
should vanish in the bulk. this potential combines in an
intricate way both the image charge and solvation contri-
butions due to the presence of the interface. the image
force corresponds to the interaction of a given ion with
the polarized charges at the interface and is equivalent
to the interaction of the charged ion with its image lo-
cated at the other side of the dielectric surface. as it
is well known, the image charge interaction is repulsive
for ε < εw (e.g. water-air interface) and attractive for
ε > εw (the case for an electrolyte-metal interface) [35].
the interfacial reduction in solvation arises because an
ion always prefers to be screened by other ions in order
to reduce its free energy. hence, it is attracted towards
areas where the ion density is maximum (at least at not
too high concentrations for which steric repulsion may
predominate).
this term is non-zero even for ε = εw
since for an ion close to the interface, there is a "hole"
of screening ions in the salt-free region (where κv = 0).
although our choice of homogeneous variational inverse
screening length allows us to handle the deformation of
ionic atmospheres near interfaces that are impermeable
to ions, it does not allow us to treat in detail the local
variations in ion solvation free energy arising from ion-
ion correlations (except in an average way in confined
geometries where κv can differ from the bulk value of the
inverse screening length, see section iv below).
equation (32) simplifies in three cases :
1) for ε = 0 (∆= 1), where the solvation effect vanishes
because the lines of forces are totally excluded from the
air region [35], eq. (32) reduces to
w0(z) = lb e^{−2κb z}/(2z).   (36)
this is the case where the image charge repulsion is the
strongest (see fig. 2).
2) a slightly better approximation for ε ≠ 0 can be ob-
tained by artificially allowing salt to be present in the air
region. this gives rise to the "undistorted ionic atmo-
sphere" approximation [6], for which w(z) in eq. (36) is
multiplied by ∆:
w(z) = ∆ lb e^{−2κb z}/(2z).   (37)
solvation effects are now absent and salt exclusion arises
solely from dielectric repulsion. eq. (37) is exact for ar-
bitrary κb and ∆= 1, or arbitrary ∆and κb = 0.
3) in the absence of a dielectric discontinuity ε = εw (∆ = 0), the potential can be expressed as
w(z) = κb lb f(κb z),   f(x) = (1 + x)² e^{−2x}/(2x³) −k2(2x)/x   (38)
where k2(x) is the bessel function of the second kind.
one notices that unlike the case ∆> 0, the potential has
a finite value at the interface, i.e. w(0) = κblb/3.
we note that in this case of one interface, we have
limz→∞φ0(z) = 0 and the fugacity λi of each species is
fixed by its bulk concentration according to
ρi,b = lim_{z→∞} ρi(z) = λi e^{qi²κb lb/2}   (39)
where we used eq. (22).
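for concreteness, the small python sketch below (an added illustration, not the authors' code) evaluates the three limiting forms (36)–(38) of the single-interface potential; lengths are in units of lb, the modified bessel function k2 comes from scipy, and the small-z check reproduces the value w(0) = κblb/3 quoted above for eq. (38).

import numpy as np
from scipy.special import kn  # modified bessel function of the second kind, k_n(x)

l_b = 1.0  # lengths measured in units of the bjerrum length

def w_eq36(z, kappa_b):
    # eq. (36): eps = 0 (delta = 1), image-charge repulsion only
    return l_b * np.exp(-2.0 * kappa_b * z) / (2.0 * z)

def w_eq37(z, kappa_b, delta):
    # eq. (37): "undistorted ionic atmosphere" approximation, eq. (36) times delta
    return delta * w_eq36(z, kappa_b)

def w_eq38(z, kappa_b):
    # eq. (38): no dielectric jump (delta = 0), pure solvation contribution
    x = kappa_b * z
    f = (1.0 + x) ** 2 * np.exp(-2.0 * x) / (2.0 * x ** 3) - kn(2, 2.0 * x) / x
    return kappa_b * l_b * f

if __name__ == "__main__":
    kappa_b = 4.0  # kappa_b * l_b = 4, as in fig. 2
    z = np.array([0.05, 0.1, 0.2, 0.5])
    print("eq.(36):", w_eq36(z, kappa_b))
    print("eq.(37), delta=0.5:", w_eq37(z, kappa_b, 0.5))
    print("eq.(38):", w_eq38(z, kappa_b))
    print("eq.(38) at z->0:", w_eq38(np.array([1e-4]), kappa_b), "expected ~", kappa_b * l_b / 3.0)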
a. neutral dielectric interface
we investigate in this section the physics of an asym-
metric electrolyte close to a neutral dielectric interface
(e.g.
water–air, liquid–liquid or liquid–solid interface)
located at z = 0 (σs = 0). for the sake of simplicity, we
assume ε = 0, which is a very good approximation for
the air-water interface characterized by ε = 1 (see the
discussion in ref. [4]). hence we keep the approxima-
tion w(z) = w0(z) given by eq. (36). the electrolyte is
composed of two species of bulk density ρ+ and ρ−and
charge (q+e),−(q−e) with q+ > q−. in order to satisfy the
electroneutrality in the bulk, we impose ρ+q+ = ρ−q−.
according to eq. (28), the bulk inverse screening length
noted κb is given by
κb² = 4πlb q−ρ−(q−+ q+)   (40)
and the variational equation (21) for the electrostatic po-
tential is a modified poisson-boltzmann equation
∂²φ0/∂z² + 4πlb ρch(z) = 0   (41)
with a local charge concentration
ρch(z) = ρ−q− [e^{−q+²w(z)/2 −q+φ0(z)} −e^{−q−²w(z)/2 + q−φ0(z)}].   (42)
equation (41) can not be solved analytically. its numer-
ical solution, obtained using a 4th order runge-kutta
method, is plotted in fig. 3(a) for asymmetric elec-
trolytes with divalent and quadrivalent cations and the
local charge density is plotted in fig. 3(b).
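the text mentions a 4th-order runge–kutta solution of eq. (41); as an alternative sketch (an assumption of convenience, not the authors' script), the python snippet below solves the same boundary-value problem with scipy for the 1:2 electrolyte at ρ−lb³ = 0.05 and ε = 0, with w(z) taken from eq. (36) and the boundary conditions φ0′(0) = 0 (neutral surface) and φ0(∞) = 0; the value of φ0 at the wall can be compared with the φ ≃ −0.10 quoted in the caption of fig. 3.

import numpy as np
from scipy.integrate import solve_bvp

# 1:2 electrolyte, lengths in units of l_b, densities in units of l_b^-3
q_m, q_p = 1.0, 2.0                     # anion and cation valencies
rho_m = 0.05                            # rho_- l_b^3 = 0.05, as in the example of fig. 3
kappa_b = np.sqrt(4.0 * np.pi * q_m * rho_m * (q_m + q_p))   # eq. (40)

def w(z):
    # eq. (36): image-charge potential for eps = 0
    return np.exp(-2.0 * kappa_b * z) / (2.0 * z)

def rho_ch(z, phi):
    # eq. (42): local charge density
    return rho_m * q_m * (np.exp(-0.5 * q_p**2 * w(z) - q_p * phi)
                          - np.exp(-0.5 * q_m**2 * w(z) + q_m * phi))

def rhs(z, y):
    # y[0] = phi_0, y[1] = d(phi_0)/dz; eq. (41): phi_0'' = -4 pi l_b rho_ch
    return np.vstack([y[1], -4.0 * np.pi * rho_ch(z, y[0])])

def bc(ya, yb):
    # neutral surface: phi_0'(0) = 0; bulk: phi_0(infinity) = 0
    return np.array([ya[1], yb[0]])

z = np.linspace(1e-3, 10.0 / kappa_b, 400)
sol = solve_bvp(rhs, bc, z, np.zeros((2, z.size)), max_nodes=20000)
print("converged:", sol.status == 0, "  phi_0 at the wall ~", sol.y[0, 0])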
fig. 3 clearly shows that, very close to the dielectric
interface for z < a, image charge repulsion expels all
ions (since ρch(z) ∼exp(−1/z) has an essential singu-
larity) and φ0 is flat. for z > a, but still close to the
interface, there is a layer where the electrostatic field is
almost constant (φ0 increases linearly), which is created
by the charge separation of ions of different valency due
to repulsive image interactions. the intensity of image
forces increases with the square of ion valency and close
to the interface, ρch(z) < 0 since we assumed q+ > q−
(the case for mgi2). to ensure electroneutrality, the lo-
cal charge then becomes positive when we move away
from the surface (fig. 3(b)), and the electrostatic poten-
tial goes exponentially to zero with a typical relaxation
constant κφ. moreover, in fig. 3(a) one observes that
when the charge asymmetry increases, the electrostatic
potential also increases.
knowing that for symmetric
electrolytes, φ0 = 0, our results confirm that the charge
asymmetry is the source of the electrostatic potential φ0.
fig. 3(b) is qualitatively similar to fig. 1 of bravina who
had derived an integral solution of eq. (41) by using an
approximation valid for κblb ≪1 [5]. in order to go fur-
ther in the description of the interfacial distribution of
ions, we look for a restricted variational function φ0(z)
which not only contains a small number of variational
fig. 3:
(color online) (a) electrostatic potential φ0 (in
kbt units) for asymmetric electrolytes: numerical solution
of eq. (41) (symbols) and variational choice, eq. (44) (solid
lines), for divalent and quadrivalent ions and ρ−= 0.242 m.
variational parameters are κφ ≃1.4κb, a/lb = 0.12; 0.21 and
φ = −0.10; −0.156. (b) associated local charge density profile
(thick lines) and anion (dashed lines) and cation (thin solid
lines) concentrations.
parameters (such as a and κφ) but also is as close as
possible to the numerical solution. as suggested by the
description of fig. 3, a continuous piecewise φ0(z) is nec-
essary to account for the essential singularity of ρch(z).
to show this, let us expand eq. (41) to order φ0:
∂²φ0/∂z² ≈ −4πlb ρ−q− [e^{−q+²w(z)/2} −e^{−q−²w(z)/2}] + 4πlb ρ−q− [q+ e^{−q+²w(z)/2} + q−e^{−q−²w(z)/2}] φ0.   (43)
this linearization is legitimate,
as seen in fig. 3:
q+|φ0(z)| < 1 is satisfied for physical valencies. the first
term in the rhs. of eq. (43) corresponds to an effective
local charge source while the second term is responsi-
ble for the screening of the potential. if we observe the
charge distribution for q2w(z) > qφ0(z) and z > a, i.e.
the first term of the rhs. of eq. (43), we notice that it
behaves like a distorted peak. the simplest function hav-
ing a similar behavior is f(z) = cze−κφz, where c and κφ
are constants. hence, we choose a restricted variational
piecewise solution φ0(z)
φ0(z) = φ  for z ≤ a,   φ0(z) = φ [1 + κφ(z −a)] e^{−κφ(z−a)}  for z ≥ a.   (44)
whose derivation is explained in appendix b. the vari-
ational parameters are the constant potential φ, the de-
pletion distance a and the inverse screening length κφ.
the grand potential (b5) derived for this solution was
optimized with respect to the variational parameters us-
ing the mathematica software. the restricted variational
potential (44) is compared to the numerical solution of
eq. (41) in fig. 3 for electrolytes 1 : 2 and 1 : 4 and
ρ−lb³ = 0.05. the agreement is excellent. one notices that the screening of the effective surface charge created by dielectric exclusion enters into play when z > κφ⁻¹. fi-
nally, let us note that since κblb = 1.37 and κblb = 1.77
respectively for the monovalent and quadrivalent elec-
trolytes in fig. 3, the method adopted by bravina is not
valid.
to summarize, the charge separation is taken into ac-
count by the potential φ (which increases with q+/q−)
and the relaxation constant κφ ≃1.4κb is almost inde-
pendent of q+/q−. interestingly, the variational parame-
ter a/lb ≃0.1 −0.2 is less than 1 nm. indeed, for finite
size ions, w(z) differs from eq. (36) very close to the in-
terface and reaches a finite value at z = 0. the size of
this region exactly corresponds to a which is of the order
of an ion radius. this is thus an artifact of our point-like
ion model and occurs only for asymmetric electrolytes at
neutral surfaces.
the surface tension σ is equal to the excess grand po-
tential defined as the difference between the grand poten-
tial of the interfacial system and that of the bulk system:
σ = ∆κb²/(32π) −κφφ²/(32πlb) −ρ−∫0^∞ dz { [e^{−q−²w(z)/2 + q−φ0(z)} −1] + (q−/q+) [e^{−q+²w(z)/2 −q+φ0(z)} −1] }.   (45)
the surface tension for electrolytes characterized by q−=
1 and q+ = 1 to 4 is plotted in fig. 4 as a function
of ρ−, because the anion density is an experimentally
accessible parameter. unlike symmetric electrolytes [20],
a plot with respect to κ2
b may lead to a different behavior.
one notices that the increase in valency asymmetry leads
to an important increase of the surface tension. this is of
course mainly due to the reduction of the cation density
in the bulk by a factor of q−/q+ necessary to satisfy the
bulk electroneutrality (see the second term in the integral
of eq. (45)).
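as a numerical illustration (not a result quoted in the paper), the surface tension (45) can be evaluated by simple quadrature once w(z) and φ0(z) are specified; the python sketch below uses the restricted potential (44) with the 1:2 parameters quoted in the caption of fig. 3 (κφ ≃ 1.4κb, a/lb = 0.12, φ = −0.10), w(z) from eq. (36) and ∆ = 1.

import numpy as np
from scipy.integrate import quad

# 1:2 electrolyte, eps = 0 (delta = 1); lengths in units of l_b
q_m, q_p = 1.0, 2.0
rho_m = 0.05                                                 # rho_- l_b^3
kappa_b = np.sqrt(4.0 * np.pi * q_m * rho_m * (q_m + q_p))   # eq. (40)
a, kappa_phi, phi_bar = 0.12, 1.4 * kappa_b, -0.10           # restricted parameters from the caption of fig. 3

def w(z):                                                    # eq. (36)
    return np.exp(-2.0 * kappa_b * z) / (2.0 * z)

def phi0(z):                                                 # eq. (44)
    return np.where(z <= a, phi_bar,
                    phi_bar * (1.0 + kappa_phi * (z - a)) * np.exp(-kappa_phi * (z - a)))

def integrand(z):                                            # integrand of eq. (45)
    return ((np.exp(-0.5 * q_m**2 * w(z) + q_m * phi0(z)) - 1.0)
            + (q_m / q_p) * (np.exp(-0.5 * q_p**2 * w(z) - q_p * phi0(z)) - 1.0))

integral, _ = quad(integrand, 1e-6, 50.0, limit=400)
delta = 1.0
sigma = delta * kappa_b**2 / (32.0 * np.pi) - kappa_phi * phi_bar**2 / (32.0 * np.pi) - rho_m * integral
print("surface tension, in units of k_b t / l_b^2:", sigma)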
fig. 4: (color online) surface tension lb²σ/kbt for asymmetric electrolytes vs. the anion bulk concentration, for increasing asymmetry q+/q− = 1 to 4 from bottom to top.
b. charged surfaces
we now consider a symmetric electrolyte in the prox-
imity of an interface of constant surface charge σs < 0
located at z = 0. the variational equation (21) simplifies
to
∂²φ̃0/∂z̃² = 2δ(z̃) + κ̃b² e^{−ξw̃(z̃)/2} sinh φ̃0.   (46)
the mean-field limit (ξ →0) of this equation corre-
sponds to the nlpb equation, whose solution reads
φ̃0(z̃) = 4 arctanh(γb e^{−κ̃b z̃})   (47)
where γb = κ̃b −√(1 + κ̃b²). in this section, we show that a
piecewise solution for the electrostatic potential similar
to the one introduced in section iii a agrees very well
with the numerical solution of eq.
(46).
inspired by
the existence of a salt-free layer close to the interface
and a mean-field regime far from the interface (wc),
we propose two types of piecewise variational functions
(see appendix c). the first variational choice obeys the
poisson equation in the first zone of size h and the non-
linear poisson-boltzmann solution in the second zone:
φ̃0^nl(z̃) = 4 arctanh γ + 2(z̃ −h̃)  for z̃ ≤ h̃,   φ̃0^nl(z̃) = 4 arctanh(γ e^{−κ̃φ(z̃−h̃)})  for z̃ ≥ h̃,   (48)
where γ = κ̃φ −√(1 + κ̃φ²). variational parameters are h
and an effective inverse screening length κφ. the second
type of trial potential obeys the laplace equation with a
charge renormalization in the first zone and the linearized
poisson-boltzmann solution in the second zone:
φ̃0^l(z̃) = −2η/κ̃φ + 2η(z̃ −h̃)  for z̃ ≤ h̃,   φ̃0^l(z̃) = −(2η/κ̃φ) e^{−κ̃φ(z̃−h̃)}  for z̃ ≥ h̃.   (49)
variational parameters are h̃, κ̃φ, and the charge renor-
malization η, which takes into account the non-linear ef-
fects at the mean-field level [19]. the explicit form of the
fig. 5: (color online) electrostatic potential, φ0 (in units of
kbt): numerical solution of eq. (46) (symbols) and restricted
variational choices eqs. (48) and (49) for ε = 0, κblg = 4,
and ξ = 1, 10, 100, and 1000 (from top to bottom). the vari-
ational parameters are respectively κφ = 3.83, 3.74, 3.69, 3.66
and η ≃ 1. markers on the x-axis denote, for each curve, the size, h̃, of the sc zone, plotted vs. ln ξ in the inset.
associated variational free energies are reported in ap-
pendix c. the inset of fig. 5 displays the size of the sc
layer h against ξ. our approach predicts a logarithmic
dependence h̃ ∝ ln ξ, the factor behind the logarithm being κ̃b⁻¹ for κ̃b ≫ 1. the restricted choices for φ0 are
compared with the full numerical solution of eq. (41) in
the same figure for ε = 0. we see that, as in the pre-
vious section, the numerical solution and the restricted
ones match perfectly. hence salt-exclusion effects are es-
sentially carried by the parameter h. furthermore, one
notices that φ̃0(z̃) relaxes to zero between z̃ = h̃ and z̃ = h̃ + 2κ̃φ⁻¹. at κblg = 4 we are in the linear regime
of the pb equation and therefore one has η ≃1. the
charge renormalization idea was introduced by alexan-
der et al. [14], who showed that the non-linearity of the
pb equation can be effectively taken into account at long
distances by renormalizing the fixed charge source and
extending the linearized zone where |φ̃0| < 1 to the whole
domain. a linear solution of the form eq. (49) can be
very helpful for complicated geometries or in the presence
of a non-uniform charge distribution where the nlpb
equation does not present an analytical solution even at
the mean-field level. these issues will be discussed in a
future work.
figure 6 displays the ion concentrations ρi(z)/ρi,b =
e−φi, which are related to the ion pmf eq. (24), com-
puted with the restricted solution eq. (48) for several val-
ues of ξ. as already said in the introduction, in rescaled
distance, the coupling parameter ξ measures the strength
of the excess chemical potential, w(z). we first see that
for coions as well as for counterions, the depletion layer
in rescaled units in the proximity of the dielectric inter-
face increases with ξ due to the image charge repulsion
and/or solvation effect, i.e. the term e^{−ξw̃(z̃)/2} in eq. (46).
furthermore, one notices that the counterion density ex-
fig. 6: (color online) ion densities for κblg = 4, and (a)
ε = 0 and (b) ε = εw, for increasing coupling parameter: from
left to right, ξ = 1, 10, 100, and 1000. solid lines correspond
to counterions, dashed lines to coions and dashed-dotted lines
to the poisson-boltzmann result (47).
hibits a maximum.
this concentration peak is due to
the competition between the attractive force towards the
charged wall and the repulsive image and solvation in-
teractions. it is important to note that in the particular
case ε = εw, there is no depletion layer for ξ < 10.
iv. double interface
in this section, the variational method is applied to a
double interface system which consists of a slit-like pore
of thickness d, in contact with an external ion reservoir
at its extremities (fig. 1). the dielectric constant is εw
inside the pore and ε in the outer space. the electrolyte
occupies the pore and the external space is salt-free. the
solution of the dh equation (a2) in this geometry is [6]
w(z) = (κb −κv)lb + lb ∫0^∞ [k dk/√(k²+κv²)] ∆(k/κv)/[e^{2d√(k²+κv²)} −∆²(k/κv)] × [2∆(k/κv) + e^{2(d−z)√(k²+κv²)} + e^{2z√(k²+κv²)}]   (50)
where ∆(x) is given in eq. (33).
the variational pa-
rameter of the green's function is the variational inverse
screening length κv which is taken uniform (generalized
onsager-samaras approximation, see
[6, 21]). a more
complicated approach has been previously developed in
ref. [21] where the authors introduced a piecewise form
for the variational screening length, i.e. κ(z) = κv over
a layer of size h and κv = κb in the middle of the pore.
although this choice is more general than ours, the min-
imization procedure with respect to κv is significantly
longer than in our case and the variational equation is
much more complicated.
consequently, this piecewise
approach is not very practical when one wishes to study
a charged membrane where the external field created by
the surface charge considerably complicates the technical
task (see section iv b). we show that the simple varia-
tional choice adopted here captures the essential physics
with less computational effort.
as in eq. (32), the integral on the rhs. of eq. (50)
takes into account both image charge and solvation ef-
fects due to the two interfaces, whereas the first term
is the debye result for the difference between the bulk
and a hypothetic bulk of inverse screening length κv. we
should emphasize that, in the present case, the spatial in-
tegrations in eqs. (a3)-(a4) run over the confined space,
that is from z = 0 to z = d. by substituting the so-
lution eq. (50) into eqs. (20)-(a5) and performing the
integration over z, one finds [22]
(f2 + f3)/s = dκv³/(24π) + ∆κv²/(16π) + (κv²/4π) ∫1^∞ dx x ln[1 −∆̄²(x) e^{−2κv d x}] + (κv²/8π) ∫1^∞ dx {[∆̄(x) −∆̄³(x)]/x −2κv d ∆̄²(x)}/[e^{2dκv x} −∆̄²(x)]   (51)
where we have defined ∆̄(x) = ∆(√(x²−1)).
the limiting case ε = 0 allows for closed-form expres-
sions. this limit is a good approximation for describing
biological and artificial pores characterized by an exter-
nal dielectric constant much lower than the internal one.
in the following part of the work, we will deal most of the
time with the special case ε = 0, unless stated otherwise.
in this limit, eq. (51) simplifies to
(f2 + f3)/s = κv³d/(24π) + (κv²/16π)[1 + 2 ln(1 −e^{−2dκv})] −(κv/8πd) li2(e^{−2dκv}) −li3(e^{−2dκv})/(16πd²)   (52)
where lin(x) stands for the polylogarithm function and ζ(x) for the riemann zeta function (see appendix d).
within the same limit (ε = 0), ∆(x) = 1 and we obtain
an analytical expression for the green's function eq. (50)
w0(z) = (κb −κv)lb −(lb/d) ln(1 −e^{−2dκv}) + (lb/2d) [β(e^{−2dκv}; 1 −z/d, 0) + (d/z) e^{−2κv z} 2f1(1, z/d; 1 + z/d; e^{−2dκv})]   (53)
where β(x; y, z) is the incomplete beta function and
2f1(a, b; c; d) the hypergeometric series. the definitions
of these special functions are given in appendix d. at
this step, the pmf thus depends on three adimensional
parameters, namely dκv, dκb, and d/lb.
for the system with a single interface, the ion fugacity
λi was fixed by the bulk density.
in the present case
where the confined system is in contact with an external
reservoir, λi is fixed by chemical equilibrium:
λi = λi,b = ρi,b e^{−qi²κb lb/2},   (54)
where κb and λi,b are respectively the inverse debye
screening length and the fugacity in the bulk reservoir
[see eq. (28)].
once this constraint is taken into account, the last term of the electrostatic part of the variational grand potential eq. (20) can be written as −Σi ρi,b ∫0^d dz e^{−qi²w(z)/2 −qiφ0(z)}.
eq. (21) then becomes for a symmetric q : q electrolyte:
∂²φ̃0/∂z² −κb² e^{−q²w(z)/2} sinh φ̃0 = −4πqlb σs [δ(z) + δ(z −d)]   (55)
the optimization of fv = f1 +f2 +f3 given by eq. (20)
and (52) with respect to the inverse trial screening length
κv leads to the following variational equation for κv:
(dκv)² + dκv tanh(dκv) = (dκb)² ∫0^1 dx e^{−q²w0(xd)/2} cosh[φ̃0(xd)] {1 + cosh[(2x −1)dκv]/cosh(dκv)}.   (56)
within the particular choice that fixed the functional
form of the κv dependent green's function eq. (53), the
two coupled equations (55) and (56) are the most general
variational equations. in the following, we first consider
the case of neutral pores and then the more general case
of charged pores.
a. neutral pore, symmetric electrolyte
in the case of a symmetric q : q electrolyte and a neu-
tral membrane, σs = 0, the solution of eq. (55) is natu-
rally φ0 = 0. the variational parameter κv is solution of
eq. (56) with φ0 = 0 and w(z) = w0(z) given by eq. (53)
when ε = 0, which can be written as dκv = f(dκb, lb/d).
let us note that eq. (56) can be solved with the mathe-
matica software in a fraction of a second.
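a possible numerical route, sketched below in python as an alternative to the mathematica solution mentioned above (this is not the authors' script), is to evaluate w0(z) by direct quadrature of eq. (50) with ∆ = 1 and then solve the variational condition (56) with φ0 = 0 for κv with a bracketing root finder; the values of q, κblb and d/lb are illustrative.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# lengths in units of l_b; eps = 0 (delta = 1), neutral pore, phi_0 = 0
q = 1.0          # ion valency (illustrative)
kappa_b = 1.0    # kappa_b * l_b = 1, as in fig. 7(b)
d = 3.0          # pore size d / l_b (illustrative)

def w0(z, kv):
    # eq. (50) with delta = 1, rewritten with u = sqrt(k^2 + kv^2) and only decaying exponentials
    f = lambda u: (2.0 * np.exp(-2.0 * d * u) + np.exp(-2.0 * z * u)
                   + np.exp(-2.0 * (d - z) * u)) / (1.0 - np.exp(-2.0 * d * u))
    tail, _ = quad(f, kv, np.inf, limit=200)
    return (kappa_b - kv) + tail

def g(kv):
    # residual of eq. (56) with phi_0 = 0; the walls are totally depleted, so a small cutoff is harmless
    integrand = lambda x: np.exp(-0.5 * q**2 * w0(x * d, kv)) * \
        (1.0 + np.cosh((2.0 * x - 1.0) * d * kv) / np.cosh(d * kv))
    integral, _ = quad(integrand, 0.01, 0.99, limit=200)
    return (d * kv)**2 + d * kv * np.tanh(d * kv) - (d * kappa_b)**2 * integral

# bracket a root by scanning down from kappa_b; no sign change means a salt-free pore (kappa_v = 0)
grid = np.linspace(kappa_b, 0.02 * kappa_b, 12)
vals = [g(k) for k in grid]
kv_sol = 0.0
for k_hi, k_lo, v_hi, v_lo in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if v_hi * v_lo < 0.0:
        kv_sol = brentq(g, k_lo, k_hi)
        break
print("kappa_v / kappa_b =", kv_sol / kappa_b)

with w0 set to zero, condition (56) is satisfied exactly by κv = κb, which provides a convenient sanity check of the implementation.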
within the debye-hückel closure approach, yaroshchuk (see eq. (59) of ref. [6]) obtains a self-consistent approximation for constant κv by replacing the exponential term of eq. (12) with its average value in the pore:
κv² = κb² ∫0^1 dx e^{−q²w(xd)/2},   (57)
fig. 7: (color online) inverse screening length inside the neu-
tral membrane (monovalent ions) normalized by κb vs. the
pore size d/lb for ε = 0 and (a) κblb = 0.1 (ρb = 1.926 mm),
(b) κblb = 1 (ρb = 0.1926 m). dashed lines correspond to
the mid-point approximation, eq. (58). the inset shows the
characteristic pore size corresponding to total ionic exclusion
as a function of the inverse bulk screening length. the bot-
tom curve corresponds to monovalent ions and the top curve
to divalent ions.
which should be compared with eq. (56) with φ0 = 0. in
order to simplify the numerical task, yaroshchuk intro-
duces a further approximation in which he replaces the
potential w(z) inside the depletion term of eq. (57) by its
value in the middle of the pore, w(d/2). then eq. (57)
takes the simpler form
κv² = κb² e^{−q²w(d/2)/2}.   (58)
the self-consistent midpoint approximation is frequently
used in nanofiltration theories [6, 28, 36]. for ε = 0, the
mid-point potential has the simple form w(d/2) = (κb −
κv)lb −2lb ln(1 −e−κvd)/d. this approach is compared
with the full variational treatment in fig. 7 where the
adimensional inverse screening length in the pore κv/κb
is plotted as a function of the pore size d. we first note
that as d decreases below a critical value d∗, the pore is
empty of salt and κv = 0. the inset of fig. 7 shows d∗
versus the inverse bulk screening length. searching for d
such that κv = 0 in eq. (56) leads to the same equation
as eq. (57), thus the value of d∗is identical within both
approaches. however, fig. 7 shows that the mid-point
approximation, eq. (58), overestimates the internal salt
concentration as well as the abruptness of the crossover
to an ion-free regime for decreasing pore size. indeed,
this approximation is equivalent to neglecting the strong
ion exclusion close to the pore surfaces (which is larger
than in the middle of the pore). a similar behavior was
also observed in fig. 6 of ref. [21] for the screening length
in the neighborhood of the dielectric interface.
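the mid-point closure (58) reduces to a single scalar equation for κv; a minimal python sketch (illustrative parameters, ε = 0) is given below, using the explicit form of w(d/2) quoted above.

import numpy as np
from scipy.optimize import brentq

# lengths in units of l_b; eps = 0, neutral pore
q = 1.0
kappa_b = 1.0    # kappa_b * l_b
d = 2.0          # pore size d / l_b

def w_mid(kv):
    # mid-point potential for eps = 0: w(d/2) = (kappa_b - kv) l_b - 2 l_b ln(1 - exp(-kv d)) / d
    return (kappa_b - kv) - 2.0 * np.log(1.0 - np.exp(-kv * d)) / d

def residual(kv):
    # eq. (58): kv^2 = kappa_b^2 exp(-q^2 w(d/2) / 2)
    return kv**2 - kappa_b**2 * np.exp(-0.5 * q**2 * w_mid(kv))

kv_lo, kv_hi = 1e-6, kappa_b
if residual(kv_lo) * residual(kv_hi) < 0.0:
    kv = brentq(residual, kv_lo, kv_hi)
    print("mid-point approximation: kappa_v / kappa_b =", kv / kappa_b)
else:
    print("no sign change found: the pore is essentially salt-free in this approximation")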
the effect of the dielectric discontinuity is illustrated
in fig. 8(a) where the inverse internal screening length is
fig. 8:
(color online) inverse screening length inside the
membrane vs. the pore size d/lb (εw = 78, κblb = 1). (a)
from bottom to top: ε = 0 (∆= 1), ε = 3.2 (∆= 0.92),
ε = 39 (∆= 1/3), and ε = 78 (∆= 0). (b) log-linear plot
for monovalent, divalent and trivalent ions, from left to right
(ε = 0).
compared for ε between 0 and εw = 78 where the image-
charge repulsion is absent and the solvation effect is solely
responsible for ion repulsion. first of all, one observes
that the total exclusion of ions in small pores is specific to
the case ε = 0. moreover, in the solvation only case, the
inverse screening length inside the pore only slightly de-
viates from the bulk value, 0.8 ≤κv/κb ≤1. this clearly
indicates that, within the point-like ion model consid-
ered in this work, the image-charge interaction brings the
main contribution to salt-rejection from neutral mem-
branes. roughly speaking, the image-charge and solva-
tion effects come into play when the surface of the ionic
cloud of radius κ−1
b
around a single ion located at the pore
center touches the pore wall, i.e. for κ−1
b
> d/2. this
simple picture fixes a characteristic length dch ≃2κ−1
b
below which the internal ion density significantly devi-
ates from the bulk value and ion-rejection takes place.
this can be verified for intermediate salt densities in the
bottom plot of fig. 7 and the top plot of fig. 8.
since image-charge effects are proportional to q2, we
illustrate in fig. 8(b) the effect of ion valency q.
at
pore size d ≃2.5lb ≃1.8 nm, where the inverse inter-
nal screening length for monovalent ions is close to 80 %
fig. 9: (color online) salt reflection coefficient (dimension-
less) against the logarithm of the inverse bulk screening length
for ε = 0 and two pore sizes, d/lb = 2 and 5 (red lines corre-
spond to the mid-point approximation, eq. (58)).
of its saturation value κb, the exclusion of divalent ions
from the membrane is total. this effect driven by image
interactions is even much more pronounced for trivalent
ions. since the typical pore size of nano-filtration mem-
branes ranges between 0.5 and 2 nm, we thus explain why
ion valency can play a central role in ion selectivity, even
inside neutral pores.
the salt reflection coefficient, frequently used in mem-
brane transport theories to characterize the maximum
salt-rejection (obtained at high pressure) is related to the
ratio of the net flux of ions across the membrane to that
of the solvent volume flux j per unit transverse surface :
σs ≡ 1 −(1/(jρb)) ∫0^d v||(z) ρ(z) dz = 1 −12 ∫_{1/2}^1 x(1 −x) e^{−q²w(xd)/2} dx,   (59)
where we have used, in the second equality, the poiseuille velocity profile, v||(z) = (6j/d³) z(d −z) in the pore and the
pmf given by eq. (24). it depends only on the parame-
ters κblb and d/lb. in certain nanopores with hydropho-
bic surfaces, the solvent flux may considerably deviate
from the poiseuille profile (see [37]). in this case, the ve-
locity profile is flat, v||(z) = j/d. we emphasize that since
the velocity profile is normalized in both cases, the mid-
point approximation is unable to distinguish between a
poiseuille and a plug flow velocity profile. fig. 9 displays
σs as a function of the inverse bulk screening length
for two pore sizes d = 2lb and d = 5lb. as seen by
yaroshchuk, decreasing the pore size shifts the curves to
higher bulk concentration and thus increases the range of
bulk concentration where nearly total salt rejection oc-
curs. however, quantitatively, the difference between the
variational and mid-point approaches becomes significant
at high bulk concentrations and this difference is accen-
tuated in the case of plug-flow (for which σs is higher
when compared to the poiseuille case because the flow
velocity no longer vanishes at the pore wall where the
salt exclusion is strongest). this deviation is again due
to the midpoint approximation of eq. (58) in which the
image interactions are underestimated.
however, since
the velocity profile vanishes at the solid surface for the
poiseuille flow, the deficiencies of the mid-point approx-
imation are less visible in σs than in κv in this case.
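the second form of eq. (59) is a one-dimensional quadrature and can be evaluated directly once the single-ion pmf is known. the sketch below is only illustrative: the pmf w(z) of eq. (24) is not reproduced here, so it is taken as a user-supplied callable (the demo uses a hypothetical wall-peaked profile), and the plug-flow case simply replaces the weight 12x(1−x) by a flat one.

```python
import numpy as np
from scipy.integrate import quad

def sigma_s(w, d, q=1, flow="poiseuille"):
    """maximum salt reflection coefficient of a neutral pore, second form of eq. (59).
    w : single-ion pmf (in kT) across the slit, assumed symmetric about z = d/2.
    flow : 'poiseuille' uses the weight 12 x (1 - x); 'plug' uses a flat profile v = j/d."""
    boltz = lambda x: np.exp(-0.5 * q**2 * w(x * d))
    if flow == "poiseuille":
        integrand = lambda x: 12.0 * x * (1.0 - x) * boltz(x)
    else:
        integrand = lambda x: 2.0 * boltz(x)
    val, _ = quad(integrand, 0.5, 1.0)
    return 1.0 - val

if __name__ == "__main__":
    d = 2.0                                   # pore size (illustrative units)
    # hypothetical pmf, large at the walls and small at mid-pore; stands in for eq. (24)
    w_demo = lambda z: 1.5 * (np.exp(-2.0 * z) + np.exp(-2.0 * (d - z)))
    for flow in ("poiseuille", "plug"):
        print(flow, sigma_s(w_demo, d, flow=flow))
```

as expected from the discussion above, the plug-flow value comes out larger than the poiseuille one, since the flat profile does not suppress the strongly depleted wall region.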
finally, we compute the disjoining pressure within our
variational approach. we compare in appendix e the
result with that of the more involved variational scheme
presented in ref. [21] and show that one gets a very
similar behaviour, revealing that the simpler variational
method is able to capture the essential physics of the slit
pore.
as stressed above, the main benefit obtained from the
simpler approach proposed in this work is that the min-
imization procedure is much less time consuming. this
point becomes crucial when considering the fixed charge
of the membrane, which is thoroughly studied in the next
section.
b. charged pore, symmetric electrolyte
in this section, we apply the variational approach to a
slit-like pore of surface charge σs < 0. in the following,
we will solve eqs. (55) and (56) numerically in order to
test, as in the case of a single charged surface, the validity
of restricted trial forms for φ0(z). we define the partition
coefficients in the pore for counterions and coions, k+ and
k−, as
$$k_\pm \equiv \frac{\rho_\pm}{\rho_b} = \int_0^d \frac{dz}{d}\; e^{-\phi_\pm(z)} \qquad(60)$$

where φ±(z) is given by eq. (26).
1. effective donnan potential
when one considers a charged nanopore, because of its small size, gradients of the potential φ0 can be neglected as a first approximation. we thus assume a constant potential φ̄0. the so-called effective donnan potential φ̄0 introduced by yaroshchuk [6] will be fixed by the variational principle. by differentiating the grand potential eq. (20) with respect to φ̄0 (or, equivalently, integrating eq. (55) from z = 0 to z = d with ∇φ̄0 = 0), we find

$$2|\sigma_s| = -2q\rho_b\sinh(q\bar\phi_0)\int_0^d dz\; e^{-\frac{q^2}{2}w(z)} \qquad(61)$$

which is simply the electroneutrality relation in the pore, enforced by the electrostatic potential φ̄0.
fig. 10: (color online) inverse internal screening length κv against ξ for κblg = 2, ε = 0 and (a) d = 3lg and (b) d = 10lg. comparison of various approximations: yaroshchuk, eq. (71) (diamonds), variational donnan potential (dashed line), piecewise solutions (solid line), and numerical results (squares). horizontal lines correspond to the wc limit, eq. (67) (top), and the sc limit, eq. (70) (bottom).
by defining

$$\gamma = \int_0^1 dx\; \exp\!\left[-\frac{q^2 l_b\,\bar w(xd)}{2d}\right] \qquad(62)$$

$$\;\;= \int_0^1 dx\; \exp\!\left[-\frac{\xi\,\bar w(x\tilde d)}{2\tilde d}\right], \qquad(63)$$

where w̄(x) ≡ w(x)d/lb, we have k± = γ exp(∓qφ̄0) and eq. (61) can be rewritten as

$$k_+ - k_- = \frac{2|\sigma_s|}{q\rho_b d} = \frac{x_m}{q\rho_b} = \frac{8}{\kappa_b^2 d\, l_g} = \frac{8}{\tilde\kappa_b^2\tilde d} \qquad(64)$$
where the second equality contains the gouy-chapman length lg and the quantity xm = 2|σs|/d, frequently used in nanofiltration theories, corresponds to the volume charge density of the membrane. hence, the partition coefficient of the charge, k+ − k−, does not depend on ξ, i.e. on image-charge and solvation forces. by using eq. (61) in order to eliminate the potential φ̄0 from eq. (60), one can rewrite the partition coefficients in the form
$$k_\pm = \gamma\, e^{\mp q\bar\phi_0} = \sqrt{\gamma^2 + \left(\frac{4}{\tilde\kappa_b^2\tilde d}\right)^2} \pm \frac{4}{\tilde\kappa_b^2\tilde d} \qquad(65)$$
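eqs. (61), (64) and (65) can be transcribed directly: once γ (which requires the pmf through eq. (62)) and the reduced parameters κ̃b and d̃ are given, the donnan potential and the partition coefficients follow in closed form. the sketch below is a minimal transcription, with the sign of φ̄0 fixed by eq. (65).

```python
import numpy as np

def donnan_partition(gamma, kappa_b_t, d_t, q=1):
    """counterion/coion partition coefficients and donnan potential in a charged slit,
    eqs. (61), (64) and (65).
    gamma     : pmf factor of eq. (62) (gamma = 1 in the mean-field / wc limit)
    kappa_b_t : bulk inverse screening length in units of 1/l_g
    d_t       : pore size in units of l_g"""
    x = 4.0 / (kappa_b_t**2 * d_t)              # half of k+ - k-, eq. (64)
    root = np.sqrt(gamma**2 + x**2)
    k_plus, k_minus = root + x, root - x        # eq. (65)
    qphi0 = -np.arcsinh(x / gamma)              # from k_pm = gamma * exp(-+ q phi0)
    return k_plus, k_minus, qphi0 / q

if __name__ == "__main__":
    print(donnan_partition(gamma=1.0, kappa_b_t=2.0, d_t=3.0))
```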
by substituting into eq. (56) the analytical expression for φ̄0 obtained from eq. (61) (or eq. (65)), one obtains a single variational equation for κv to be solved numerically,

$$(\tilde d\tilde\kappa_v)^2 + \tilde d\tilde\kappa_v\tanh(\tilde d\tilde\kappa_v) = (\tilde d\tilde\kappa_b)^2\sqrt{\gamma^2 + \left(\frac{4}{\tilde\kappa_b^2\tilde d}\right)^2}\left[1 + \frac{\int_0^1 dx\; e^{-\frac{\xi}{2}\tilde w(x\tilde d)}\cosh\!\left[(2x-1)\tilde d\tilde\kappa_v\right]}{\gamma\cosh(\tilde d\tilde\kappa_v)}\right]. \qquad(66)$$
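eq. (66) is a single scalar equation for κ̃v once γ and the reduced pmf w̃ are specified. a possible numerical treatment, sketched below, brackets the root and uses scipy's brentq; since eq. (24) for the pmf is not reproduced here, w̃ is passed as a callable (the demo uses w̃ = 0, i.e. a wc-like case, for which γ = 1).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def solve_kappa_v(d_t, kappa_b_t, xi, w_t, gamma):
    """reduced inverse screening length kappa_v * l_g inside a charged slit, eq. (66).
    d_t, kappa_b_t : pore size and bulk screening parameter in gouy-chapman units
    xi             : coupling parameter
    w_t            : reduced pmf, callable of the reduced coordinate in [0, d_t];
                     gamma must be consistent with it through eq. (62)"""
    x = 4.0 / (kappa_b_t**2 * d_t)

    def residual(kv_t):
        a = d_t * kv_t
        integ, _ = quad(lambda u: np.exp(-0.5 * xi * w_t(u * d_t))
                        * np.cosh((2.0 * u - 1.0) * a), 0.0, 1.0)
        rhs = (d_t * kappa_b_t)**2 * np.sqrt(gamma**2 + x**2) \
              * (1.0 + integ / (gamma * np.cosh(a)))
        return a**2 + a * np.tanh(a) - rhs

    return brentq(residual, 1e-6, 50.0)

if __name__ == "__main__":
    print(solve_kappa_v(d_t=3.0, kappa_b_t=2.0, xi=1.0, w_t=lambda z: 0.0, gamma=1.0))
```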
fig. 11: (color online) ionic partition coefficients, k±, vs. ξ for κblg = 2, ε = 0, and (a) d = 3lg and (b) d = 10lg. the horizontal line corresponds to the sc limit for counterions. as explained in the text, we note that k+ − k− = 8/(κb²dlg).
the numerical solution of eq. (66) is plotted in fig. 10 as
a function of the coupling parameter ξ. we see that as we
move from the wc limit to the sc one by increasing ξ,
the pore evolves from a high to a low salt regime. this
quite rapid crossover, which results from the exclusion
of ions from the membrane, is mainly due to repulsive
image-charge and solvation forces controlled by γ whose
effects increase with increasing ξ.
in fig. 11 are plotted the partition coefficients of coun-
terions and coions, eq. (60), as a function of ξ. here
again, k± decreases with increasing ξ.
moreover, we
clearly see that the rejection of coions from the mem-
brane becomes total for ξ > 4.
in other words, even
for intermediate coupling parameter values, we are in a
counterion-only state. this is obviously related to the
electrical repulsion of coions by the charged surface.
in the asymptotic wc limit (ξ → 0), γ = 1 and we find the classical donnan results in mean-field where k− = k+⁻¹ = e^{qφ̄0} with qφ̄0 = −arcsinh[4/(κ̃b²d̃)]. the variational equations (66) and (65) reduce to

$$\kappa_v^2 = \kappa_b^2\sqrt{1 + \left(\frac{4}{\tilde\kappa_b^2\tilde d}\right)^2} \qquad(67)$$

$$k_\pm = \sqrt{1 + \left(\frac{4}{\tilde\kappa_b^2\tilde d}\right)^2} \pm \frac{4}{\tilde\kappa_b^2\tilde d} \qquad(68)$$
quite interestingly, the relation eq. (67) shows that, even in the mean-field limit, due to the ion charge imbalance created by the pore surface charge, the inverse screening length is larger than the debye-hückel value κb. in the case of small pores or strongly charged pores or at low values of the bulk ionic strength, i.e. κb²lgd ≪ 1 or dρb ≪ |σs|/q, we find κv ≃ 2/√(lgd), ρ− = 0 and ρ+ = 2|σs|/(dq). we thus find the classical poisson-boltzmann result for counterions only [24]. the counterion-only case is also called good coion exclusion limit (gce), a notion introduced in the context of nanofiltration theories [6, 38, 39]. hence, in this limit the quantity of counterions in the membrane is independent of the bulk density and depends only on the pore size d and the surface charge density σs. in the case of a membrane of size d ≃ 1 nm and fixed surface charge σs ≃ 0.03 nm⁻², this limit can be reached with an electrolyte of bulk concentration ρb ≃ 50 mm. in the opposite limit κb²lgd ≫ 1, one finds κv ≃ κb and ρ± = ρb.
in the sc limit ξ → ∞, γ = 0 and eq. (66) simplifies to

$$(\tilde d\tilde\kappa_v)^2 + \tilde d\tilde\kappa_v\tanh(\tilde d\tilde\kappa_v) = 4\tilde d\left[1 + \mathrm{sech}(\tilde d\tilde\kappa_v)\right]. \qquad(69)$$

for d > lg (d̃ > 1), the solution of eq. (69) yields with a high accuracy

$$\tilde\kappa_v \simeq \frac{\sqrt{1 + 16\tilde d} - 1}{2\tilde d}. \qquad(70)$$
the partition coefficients simplify to k− = 0 and k+ = 8/(d̃κ̃b²) = 2|σs|/(dqρb) and we find the counterion-only
case (or gce limit) without image charge forces dis-
cussed by netz [24]. partition coefficients in the sc limit
and variational inverse screening length in both limits,
eqs. (67) and (70), are illustrated in figs. 10 and 11
by dotted reference lines. consequently, one reaches for
ξ = 0 the gce limit exclusively for low salt density
or small pore size, while the sc limit leads to gce for
arbitrary bulk density. it is also important to note that
although the pore-averaged densities of ions are the same
in the gce limit of wc and sc regimes, the density pro-
files are different since when one moves away from the
pore center, the counterion densities close to the inter-
face increase in the wc limit due to the surface charge
attraction and decrease in the sc limit due to the image
charge repulsion.
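in the sc limit the pmf drops out and eq. (69) depends on d̃ alone, so the accuracy of the closed form (70) is easy to check numerically; the sketch below solves eq. (69) by bracketing and compares it with eq. (70).

```python
import numpy as np
from scipy.optimize import brentq

def kappa_v_sc(d_t):
    """sc-limit reduced inverse screening length: numerical root of eq. (69)."""
    f = lambda kv: (d_t * kv)**2 + d_t * kv * np.tanh(d_t * kv) \
                   - 4.0 * d_t * (1.0 + 1.0 / np.cosh(d_t * kv))
    return brentq(f, 1e-8, 20.0)

def kappa_v_sc_approx(d_t):
    """closed-form approximation of eq. (70), accurate for d_t > 1."""
    return (np.sqrt(1.0 + 16.0 * d_t) - 1.0) / (2.0 * d_t)

if __name__ == "__main__":
    for d_t in (1.0, 3.0, 10.0):
        print(d_t, kappa_v_sc(d_t), kappa_v_sc_approx(d_t))
```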
it is interesting to compare this variational approach to
the approximate mid-point approach of yaroshchuk [6].
for charged membranes, he considers a constant po-
tential and replaces the exponential term of eqs. (11)
and (12) by its value in the middle of the pore. he ob-
tains the following self-consistent equations:
$$\kappa^2 = \kappa_b^2\, e^{-\frac{q^2}{2}w(d/2)}\cosh\!\left(q\bar\phi_0\right) \qquad(71)$$

$$2|\sigma_s| = -2 q d\rho_b \sinh\!\left(q\bar\phi_0\right) e^{-\frac{q^2}{2}w(d/2)}. \qquad(72)$$
the above set of equations is frequently used in nanofiltration theories [6, 28, 36]. by combining these equations in order to eliminate φ̄0, one obtains an approximate non-linear equation for κv (approximation cyar in fig. 10). in the limit of a high surface charge, the non-linear equations (71)–(72) depend only on the pore size d and the surface charge density σs:

$$\kappa^2 \simeq \frac{8\pi l_b q|\sigma_s|}{d} = \frac{4}{l_g d}. \qquad(73)$$
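the mid-point equations can also be combined analytically: eq. (72) fixes sinh(qφ̄0), and eq. (71) then gives κ through cosh = √(1 + sinh²). the sketch below is a direct transcription; the only external inputs are the mid-pore pmf value w(d/2) and a bjerrum length (assumed 0.7 nm here) used to build κb².

```python
import numpy as np

LB_NM = 0.7  # assumed bjerrum length of water, nm

def yaroshchuk_midpoint(sigma_s, rho_b, d, w_mid, q=1):
    """mid-point (yaroshchuk) approximation, eqs. (71)-(72), in nm-based units.
    sigma_s : surface charge density |sigma_s| (nm^-2)
    rho_b   : bulk salt density (nm^-3)
    d       : pore size (nm)
    w_mid   : pmf at the pore centre, w(d/2), in kT
    returns (kappa^2 in nm^-2, q * phi0)."""
    boltz = np.exp(-0.5 * q**2 * w_mid)
    sinh_qphi = -sigma_s / (q * d * rho_b * boltz)            # eq. (72)
    kappa2_b = 8.0 * np.pi * LB_NM * q**2 * rho_b             # bulk screening parameter
    kappa2 = kappa2_b * boltz * np.sqrt(1.0 + sinh_qphi**2)   # eq. (71)
    return kappa2, np.arcsinh(sinh_qphi)

if __name__ == "__main__":
    # illustrative inputs: sigma_s = 0.03 nm^-2, rho_b = 50 mM, d = 1 nm, w(d/2) = 1 kT
    print(yaroshchuk_midpoint(sigma_s=0.03, rho_b=0.05 * 0.602, d=1.0, w_mid=1.0))
```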
(73)
14
σ lb
2
0.0
1.0
2.0
3.0
κvlb
donnan variational
cyar
asymptotic limit
(a)
0.0
0.1
0.2
0.3
0.4
0.0
1.0
2.0
3.0
(b)
(a)
κvlb
fig. 12: (color online) inverse internal screening length κv
against the reduced surface charge ̄
σ = l2
bσs for d = lb, ε = 0
and (a) κblb = 1, (b) κblb = 2: constant variational donnan
approximation (solid line), asymptotic result eq. (73) (dotted
line) and yaroshchuk approximation eq. (71) (dashed line).
one can verify that in the regime of strong surface
charge, eq. (73) is also obtained from the asymptotic
solution eq. (70) since the dependence of the pmf on
z is killed when ξ → ∞ and only the mid-pore value contributes. the numerical solution of eqs. (71)-(72)
is illustrated as a function of ξ in fig. 10, and as a
function of the surface charge in fig. 12, together with
the asymptotic formula eq. (73).
for the parameter
range considered in fig. 10, the solution of eq. (71)
strongly deviates from the result of the full variational
calculation. for ξ < 2, the mid-point approach follows
an incorrect trend with increasing ξ.
it is clearly seen that at some values of the coupling parameter, eqs. (71)-(72) do not even admit a numerical solution. using the relations d/lb = d̃/ξ and lbκb = ξκ̃b for
monovalent ions, one can verify that the regime where
the important deviations take place corresponds to high
ion concentrations.
this is confirmed in fig. 12: the
error incurred by the approximate mid-point solution of
yaroshchuk increases with the electrolyte concentration.
in section iv a on neutral nanopores, it has been un-
derlined that, due to the image charge repulsion, the ionic
concentration inside the pore increases with the pore size
d (see fig. 8). in the present case of charged nanopores,
this result is modified: eqs. (67), (70) and (73) show that
for strongly charged nanopores the concentration of ions
inside the pore decreases with d. moreover, the very high
charge limit is a counterion-only state and eq. (61) shows
that, for a fixed surface charge density, electroneutrality
alone fixes the number of counterions, n−, in a layer
of length d joining both interfaces, and image charge
interactions play little role. this is the reason why κv² ∝ ρ− = n−/(sd) decreases for increasing d.
fig. 13: (color online) partition coefficient in the pore of coions (a) and counterions (b) vs. the pore size d/lb for increasing surface charge density, σslb² = 0, 0.004, 0.02, 0.04, 0.08, 0.12, from left to right, and κblb = 1. inset: critical pore size dcr vs. the surface charge density σs (ε = 0).
hence, we expect an intermediate charge regime which
interpolate between image force counterion repulsion
(case of neutral pores, see section iv a) and counterion
attraction by the fixed surface charge. this is illustrated
in fig. 13 where the partition coefficients are plotted vs.
d for increasing σs. as expected, coions are electrostat-
ically pushed away by the surface charge which adds to
the repulsive image forces, leading to a stronger coion ex-
clusion than for neutral pores. the issue is more subtle
for counterions: obviously, increasing the surface charge,
σs, at constant pore size, d, increases k+. however, for
small fixed σs, a regime where image charge and direct
electrostatic forces compete, k+ is non-monotonic with
d. below a characteristic pore size, d < dcr, the electro-
static attraction dominates over image charge repulsion
and due to the mechanism explained above, k+ decreases
for increasing d. for d > dcr, the effect of the surface
charge weakens and k+ starts increasing with d. in this
regime, the pore behaves like a neutral system. the inset
of fig. 13 shows that dcr increases when σs increases.

fig. 14: (color online) ion densities in the nanopore for ε = εw, ξ = 1 and h/lg = 2. the continuous lines correspond to the prediction of the variational method with four parameters, the dashed-dotted line the variational solution with a single parameter (see the text), the symbols are mc results (fig. 2 of [29]) and the dashed lines denote the numerical solution of the non-linear pb result.

for highly charged membranes lb²σs ≫ 0.1, there is no minimum in k+(d), and the average counterion density inside the membrane monotonically decreases towards the
bulk value. experimental values for surface charges are 0 ≤ σs ≤ 0.5 nm⁻² (or 0 ≤ lb²σs ≤ 0.25), which cor-
responds to physically attainable values of dcr. the in-
terplay between image forces and direct electrostatic at-
traction is thus relevant to the experimental situation.
the variational donnan potential approximation is
thus of great interest since it yields physical insight into
the exclusion mechanism and allows a reduction of the
computational complexity.
however, membranes and
nanopores are often highly charged and spatial variations
of the electrostatic potential inside the pore may play an
important role. in the following we seek a piecewise so-
lution for φ0(z).
2. piecewise solution
fig. 15: (color online) variational electrostatic potential (in units of kbt) in the nanopore. comparison of the numerical solution of eq. (55) with the homogeneous (h = 0) and piecewise solution of eq. (c5) for lgκb = 3 and (a) ξ = 1, (b) ξ = 100. the horizontal line is the donnan potential obtained from eq. (61) (ε = 0).

the variational modified pb equation (55) for φ̃0 shows that as one goes closer to the dielectric interface, w(z) increases and the screening experienced by the potential φ̃0 gradually decreases because of ionic exclusion. this non-perturbative effect which originates from the strong charge-image repulsion inspires our choice for the variational potential φ̃0(z). we opt for a piecewise solution as in section iii: a salt-free solution in the zone 0 < z < h and the solution of the linearized pb equation for h < z < d/2, with a charge renormalization parameter η taking into account non-linear effects. by inserting the boundary conditions ∂φ̃0/∂z|z=0 = 2η/lg and ∂φ̃0/∂z|z=d/2 = 0 and imposing the continuity of φ̃0 and its first derivative at z = h [eq. (b3)], the piecewise
potential, solution of eq. (55) with κb² exp[−q²w(z)/2] replaced by κφ², takes the form

$$\tilde\phi_0(z) = \begin{cases} \bar\phi - \dfrac{2\eta}{l_g}\left(\dfrac{d}{2} - z\right) & \text{for } 0 < z \le h, \\[2mm] \phi - \dfrac{2\eta}{l_g\kappa_\phi}\,\dfrac{\cosh[\kappa_\phi(d/2 - z)]}{\sinh[\kappa_\phi(d/2 - h)]} & \text{for } h \le z \le d/2, \end{cases} \qquad(74)$$

where

$$\bar\phi = \phi + \frac{2\eta}{l_g}\left(\frac{d}{2} - h\right) - \frac{2\eta}{l_g\kappa_\phi}\coth\!\left[\kappa_\phi\!\left(\frac{d}{2} - h\right)\right] \qquad(75)$$

is imposed by continuity, and κφ, φ, h and η are the variational parameters. by injecting the piecewise solution eq. (c5) into eq. (20), we finally obtain
$$\frac{f_1}{s} = -\frac{2|\sigma_s|}{q}\left\{\eta(\eta - 2)\frac{h}{l_g} - \frac{\eta^2(d/2 - h)}{2 l_g\sinh^2[\kappa_\phi(d/2 - h)]} + \frac{\eta(\eta - 4)}{2 l_g\kappa_\phi}\coth[\kappa_\phi(d/2 - h)] + \phi\right\} - \frac{\kappa_b^2}{4\pi l_b q^2}\int_0^d dz\; e^{-\frac{q^2}{2}w(z)}\cosh\tilde\phi_0(z). \qquad(76)$$
the solution to the variational problem is found by optimization of the total grand potential f = f1 + f2 + f3 with respect to κφ, φ, h, η and κv, where f2 + f3 is given by eq. (52) for a general value of ε and by eq. (52) for ε = 0.
this was easily carried out with mathematica software.
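the same extremization can be set up with standard scientific-python tools. the sketch below is schematic: the total grand potential assembling eq. (76) and the kernel contribution of eq. (52) is not reproduced here and is passed as a user-supplied callable; following note [34], the optimum is a stationary point of f rather than a plain minimum, so the sketch solves ∇f = 0 with a finite-difference gradient.

```python
import numpy as np
from scipy.optimize import root

def extremize_grand_potential(f_total, x0, eps=1e-6):
    """stationary point of f_total(params), params = (kappa_v, kappa_phi, phi, h, eta).
    f_total must assemble eq. (76) plus the kernel part (eq. (52)), not shown here."""
    x0 = np.asarray(x0, dtype=float)

    def grad(x):
        f0 = f_total(x)
        g = np.empty_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            g[i] = (f_total(xp) - f0) / eps
        return g

    sol = root(grad, x0, method="hybr")
    return sol.x, sol.success

if __name__ == "__main__":
    # toy quadratic stand-in for the true functional, for illustration only
    target = np.array([1.5, 1.2, 0.1, 0.2, 1.0])
    print(extremize_grand_potential(lambda p: np.sum((np.asarray(p) - target)**2),
                                    x0=np.ones(5)))
```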
a posteriori, we checked that two restricted forms for
φ0, homogeneous with h = 0 and piecewise with φ = 0,
were good variational choices. fig. 14 compares the ion
densities obtained from the variational approach (with
homogeneous φ0) with the predictions of the mc simu-
lations [29] and the nlpb equation for ε = εw, d̃ = 2 and ξ = 1. two variational choices are displayed in this
figure, namely, the homogeneous approach with four pa-
rameters κv = 1.68, κφ = 1.36, φ = 0.16, η = 0.97 and a
simpler choice with η = 1, κφ = κv and two variational
parameters: κv = 1.69, φ = −0.18. in the latter case,
one can obtain an analytical solution for φ and injecting
this solution into the free energy, one is left with a single
parameter κv to be varied in order to find the optimal
solution. we notice that with both choices, the agree-
ment between the variational method and mc result is
good. it is clearly seen that the proposed approach can
reproduce with a good quantitative accuracy the reduced
solvation induced ionic exclusion, an effect absent at the
mean-field level. moreover, we verified that with the sin-
gle parameter choice, one can reproduce at the mean-field
variational level the ion density profiles obtained from
the numerical solution of the nlpb equation (dashed
lines in fig. 14) almost exactly. we finally note that the
small discrepancy between the predictions of the varia-
tional approach and the mc results close to the interface
may be due to either numerical errors in the simulation,
or our use of the generalized onsager-samaras approx-
imation (our homogeneous choice for the inverse effec-
tive screening length appearing in the green's function
v0 does not account for local enhancement or diminution
of ionic screening due to variations in local ionic density).
for ε = 0, the piecewise and homogeneous solu-
tions are compared with the full numerical solution of
eqs. (55)–(56) in fig. 15 for ξ = 1 and 100. first of
all, one observes that for ξ = 1, both variational solu-
tions match perfectly well with the numerical solutions.
for ξ = 100, the piecewise solution matches also per-
fectly well with the numerical one, whereas the match-
ing of the homogeneous one is poorer. the optimal val-
ues of the variational parameters (κv, κφ, η, h) for the
piecewise choice are (2.57, 2.6, 0.98, 0.15) for ξ = 1 and
(0.83, 0.13, 0.97, 1.37) for ξ = 100.
the form of the electrostatic potential φ0(z) is inti-
mately related to ionic concentrations. ion densities in-
side the pore are plotted in fig. 16 for ξ = 1 and ξ = 100.
we first notice that even at ξ = 1, the counterion density
is quite different from the mean-field prediction. further-
more, due to image charge and electrostatic repulsions
from both sides, the coion density has its maximum in
the middle of the pore. on the other hand, the coun-
terion density exhibits a double peak, symmetric with
respect to the middle of the pore, which originates from
the attractive force created by the fixed charge and the
repulsive image forces. when ξ increases, we see that the
counterion density close to the wall shrinks and becomes
practically flat in the middle of the pore. hence the po-
tential φ0 linearly increases with z until the counterion-
peak is reached and then it remains almost constant since
the counterion layer screens the electrostatic field created
by the surface charge (since in fig. 15, z is renormalized
by the gouy-chapman length which decreases with in-
creasing σs, one does not see the increase of the slope of φ0 at z = 0). in agreement with the variational donnan
approximation above, coions are totally excluded from the pore for large ξ.

fig. 16: (color online) local ionic partition coefficient in the nanopore (same parameters as in fig. 15 and ε = 0) computed with the piecewise solution. (a) ξ = 1, (b) ξ = 100. the dotted line in the top plot corresponds to the mean-field prediction for counterion density.

hence, the piecewise potential al-
lows one to go beyond the variational donnan approxi-
mation within which the density profile does not exhibit
any concentration peak.
the inverse screening length κv obtained with the
piecewise solution is compared in fig. 10 with the pre-
diction of the donnan approximation and that of the
numerical solution.
the agreement between piecewise
and numerical solutions is extremely good.
although
the donnan approximation slightly underestimates the
salt density in the pore, its predictions follow the correct
trend.
v. conclusion
in this study, we applied the variational method to
interacting point-like ions in the presence of dielectric
discontinuities and charged boundaries. this approach
interpolates between the wc limit (ξ ≪1) and the sc
one (ξ ≫1), originally defined for charged boundaries
without dielectric discontinuity, and takes into account
image charge repulsion and solvation effects. the vari-
ational green's function v0 has a debye-hückel form
with a variational parameter κv and the average vari-
ational electrostatic potential φ0(z) is either computed
numerically or a restricted form is chosen with varia-
tional parameters. the physical content of our restricted
variational choices can be ascertained by inspecting the
general variational equations (eqs. (11)-(12) for symmet-
ric salts).
the generalized onsager-samaras approxi-
mation that we have adopted for the green's function
replaces a local spatially varying screening length by a
constant variational one; although near a single interface
this screening length is equal to the bulk one, in confined
geometries the constant variational screening length can
account in an average way for the modified ionic environ-
ment (as compared with the external bulk with which the
pore is in equilibrium) and can therefore strongly devi-
ate from the bulk value.
this modified ionic environ-
ment arises both from dielectric and reduced solvation
effects present even near neutral surfaces (encoded in the
green's function) and the surface charge effects encoded
in the average electrostatic potential. our restricted vari-
ational choice for φ0(z) is based on the usual non-linear
poisson-boltzmann type solutions with a renormalized
inverse screening length that may differ from the one
used for v0 and a renormalized external charge source.
the coupling between v0 and φ0 arises because the in-
verse screening length for v0 depends on φ0 and vice-
versa. the optimal choices are the ones that extremize
the variational free energy.
in the first part of the work, we considered single in-
terface systems. for asymmetric electrolytes at a single
neutral interface, the potential φ0(z) created by charge
separation was numerically computed. it was satisfac-
torily compared to a restricted piecewise variational so-
lution and both charge densities and surface tension are
calculated in a simpler way than bravina [5] and valid
over a larger bulk concentration range. the variational
approach was then applied to a single charged surface and
it was shown that a piecewise solution, characterized by
two zones, can accurately reproduce the correlations and
non-linear effects embodied in the more general varia-
tional equation. the first zone of size h is governed by a
salt-free regime, while the second region corresponds to
an effective mean-field limit. the variational calculation
predicts a relation between h and the surface charge of
the form h ∝(c + ln |σs|)/|σs| where the parameter c
depends on the temperature and ion valency.
in the second part, we dealt with a symmetric elec-
trolyte confined between two dielectric interfaces and in-
vestigated the important problem of ion rejection from
neutral and charged membranes. we illustrated the ef-
fects of ion valency and dielectric discontinuity on the
ion rejection mechanism by focusing on ion partition and
salt reflection coefficients. we computed within a vari-
ational donnan potential approximation, the inverse in-
ternal screening length and ion partition coefficients, and
showed that for ξ > 4 one reaches the sc limit, where
the partition coefficients are independent of the bulk con-
centration and depend only on the size and charge of the
nanopore. this result has important experimental appli-
cations, since it indicates that complete filtration can be
done at low bulk salt concentration and/or high surface
charge. furthermore, we showed that, due to image inter-
actions, the quantity of salt allowed to penetrate inside a
neutral nanopore increases with the pore-size. in the case
of strongly charged membranes, this behavior is reversed
for the whole physical range of pore size.
we quanti-
fied the interplay between the image charge repulsion and
the surface charge attraction for counterions and found
that even in the presence of a weak surface charge, the
competition between them leads to a characteristic pore
size dcr below which the counterion partition coefficient
rapidly decreases with increasing pore size. on the other
hand, for nanopores of size larger than dcr the system be-
haves like a neutral pore. our variational calculation was
compared to the debye closure approach and the mid-
point approximation used by yaroshchuk [6]. the clo-
sure equations have no exact solution even at the numer-
ical level. our approach, based on restricted variational
choices, shows significant deviations from yaroshchuk's
mid-point approach at high ion concentrations and small
pore size. finally, the introduction of a simple piecewise
trial potential for φ0, which perfectly matches the nu-
merical solutions of the variational equations, enabled us
to go beyond the variational donnan potential approxi-
mation and thus account for the concentration peaks in
counterion densities. we computed ion densities in the
pore and showed that for ξ > 4, the exclusion of coions
from the pore is total. we also compared the ionic den-
sity profiles obtained from the variational method with
mc simulation results and showed that the agreement is
quite good, which illustrates the accuracy of the varia-
tional approach in handling the correlation effects absent
at the mean-field level.
the main goal in this work was first to connect two
different fields in the chemical physics of ionic solutions
focusing on complex interactions with surfaces:
field-
theoretic calculations and nanofiltration studies. more-
over, on the one hand, this variational method allows one
to consider, in a non-perturbative way, correlations and
non-linear effects; on the other hand the choice of one
constant variational debye-h ̈
uckel parameter is simple
enough to reproduce previous results and to illuminate
the mechanisms at play. this approach should also be able to handle, in the near future, more complicated geometries, such as cylindrical nanopores, or a non-uniform surface charge distribution.
the present variational scheme also neglects ion-size
effects and gives rise to an instability of the free energy
at extremely high salt concentration.
second order
corrections to the variational method may be necessary
in order to properly consider ionic correlations leading
to pairing [40, 41] and to describe the physics of charged
liquids at high valency, high concentrations or low
temperatures. introducing ion size will also allow us to
introduce an effective dielectric permittivity εp for water
confined in a nanopore intermediate between that of
the membrane matrix and bulk water, leading naturally
to a born-self energy term that varies inversely with
ion size and depends on the difference between 1/εw
and 1/εp [42, 43].
furthermore, the incorporation of
the ion polarizability [44] will yield a more complete
physical description of the behavior of large ions [45].
charge inversion for planar and curved interfaces is another important phenomenon that we
would like to consider in the future [46]. note, however,
that our study of asymmetric salts near neutral surfaces
reveals a closely related phenomenon:
the generation
of an effective non-zero surface charge due to the
unequal ionic response to a neutral dielectric interface
for asymmetric salts.
a further point that possesses
experimental relevance is the role played by surface
charge inhomogeneity.
strong-coupling calculations
show that an inhomogeneous surface charge distribution
characterized by a vanishing average value gives rise to
an attraction of ions towards the pore walls, but this
effect disappears at the mean-field level [47]. for a better
understanding of the limitations of the proposed model,
more detailed comparisons with mc/md simulations are in order [48, 49]. finally, dynamical hindered transport
effects [27, 48] such as hydrodynamic forces deserve to be
properly included in the theory for practical applications.
acknowledgments
we would like to thank david s. dean for numer-
ous helpful discussions.
this work was supported in
part by the french anr program nano-2007 (simo-
nanomem project, anr-07-nano-055).
appendix a: variational free energy
for planar geometries (charged planes), the translational invariance parallel to the plane allows us to significantly simplify the problem by introducing the partial fourier transformation of the trial potential in the form

$$v_0(z, z', \mathbf{r}_\parallel - \mathbf{r}'_\parallel) = \int \frac{d\mathbf{k}}{(2\pi)^2}\; e^{i\mathbf{k}\cdot(\mathbf{r}_\parallel - \mathbf{r}'_\parallel)}\; \hat v_0(z, z', k). \qquad(a1)$$
by injecting the fourier decomposition (a1) into eq. (17), the dh equation becomes

$$\left\{-\frac{\partial}{\partial z}\,\varepsilon(z)\frac{\partial}{\partial z} + \varepsilon(z)\left[k^2 + \kappa_v^2(z)\right]\right\}\hat v_0(z, z', k; \kappa_v(z)) = \frac{e^2}{k_B t}\,\delta(z - z'). \qquad(a2)$$

the translational symmetry of the system enables us to express any thermodynamic quantity in terms of the partially fourier-transformed green's function v̂0(z, z, k). the average electrostatic potential contribution to fv that follows from the average ⟨h⟩0 reads
$$f_1 = s\int dz\,\left\{-\frac{[\nabla\phi_0(z)]^2}{8\pi l_b} + \rho_s(z)\phi_0(z) - \sum_i \lambda_i\, e^{-\frac{q_i^2}{2}w(z) - q_i\phi_0(z)}\right\}, \qquad(a3)$$
the kernel part is

$$f_2 = \frac{s}{16\pi^2}\int_0^1 d\xi\int_0^\infty dk\, k\int dz\,\frac{\kappa_v^2(z)}{l_b(z)}\left[\hat v_0\!\left(z, z, k; \kappa_v(z)\sqrt{\xi}\right) - \hat v_0\!\left(z, z, k; \kappa_v(z)\right)\right] \qquad(a4)$$
where the first term in the integral follows from f0 and the second term from ⟨h0⟩0. finally, the unscreened van der waals contribution, which comes from the unscreened part of f0, is given by

$$f_3 = \frac{s}{8\pi}\int_0^1 d\xi\int_0^\infty dz\left[\frac{1}{l_b(z)} - \frac{1}{l_b}\right]\left\langle(\nabla\phi)^2\right\rangle_\xi - \ln\int d\phi\; e^{-\int\frac{d\mathbf{r}}{8\pi l_b}(\nabla\phi)^2} \qquad(a5)$$
the technical details of the computation of f3 can be found in ref. [50]. the last term of eq. (a5) simply corresponds to the free energy of a bulk electrolyte with a dielectric constant εw. in the above relations, s stands for the lateral area of the system. the dummy "charging" parameter ξ is usually introduced to compute the debye-hückel free energy [51]. it multiplies the debye lengths of v̂0(z, z, k; κv(z)) in eq. (a4) and the dielectric permittivities contained in the thermal average of the gradient in eq. (a5). the latter is defined as

$$\left\langle(\nabla\phi)^2\right\rangle_\xi = -(\nabla\phi_0)^2 + \int\frac{d\mathbf{k}}{(2\pi)^2}\left[k^2 + \partial_z\partial_{z'}\right]\hat v_c\!\left[z, z', k; l_\xi(z)\right]\Big|_{z=z'} \qquad(a6)$$
where we have introduced $l_\xi^{-1}(z) \equiv l_b^{-1} + \xi\left[l_b^{-1}(z) - l_b^{-1}\right]$ and v̂c[z, z′, k; lξ(z)] stands for the fourier-transformed coulomb operator given by eq. (5) with bjerrum length lξ(z). the quantity f3 defined in eq. (a5) does not depend on the inverse screening length κv. moreover, in order to satisfy the electroneutrality, φ0(z) must be constant in the salt-free parts of the system where lb(z) ̸= lb. hence, f3 does not depend on the potential φ0(z).
appendix b: variational choice for the neutral
dielectric interface
we report in this appendix the restricted variational piecewise φ0(z) for a neutral dielectric interface which is a solution of

$$\frac{\partial^2\phi_0}{\partial z^2} = 0 \qquad\text{for}\quad z \le a \qquad(b1)$$

$$\frac{\partial^2\phi_0}{\partial z^2} - \kappa_\phi^2\phi_0 = c\,z\,e^{-\kappa_\phi z} \qquad\text{for}\quad z \ge a \qquad(b2)$$
where φ0(z) in both regions is joined by the continuity conditions

$$\phi_0^<(a) = \phi_0^>(a), \qquad \frac{\partial\phi_0^<}{\partial z}\bigg|_{z=a} = \frac{\partial\phi_0^>}{\partial z}\bigg|_{z=a}. \qquad(b3)$$
we also tried to introduce different variational screening lengths in the second term of the lhs and in the rhs of eq. (b2), without any significant improvement at the variational level. for this reason, we opted for a single inverse variational screening length, κφ. the solution of eqs. (b1)-(b2) is
$$\phi_0(z) = \begin{cases} \phi & \text{for } z \le a, \\ \phi\left[1 + \kappa_\phi(z - a)\right]e^{-\kappa_\phi(z - a)} & \text{for } z \ge a, \end{cases} \qquad(b4)$$
where the coefficient c disappears when we impose the
boundary and continuity conditions, eq. (b3). the re-
maining variational parameters are the constant poten-
tial φ, the distance a and the inverse screening length
κφ. by substituting eq. (b4) into eq. (a3), we obtain
the variational grand potential
$$f_v = v\,\frac{\kappa_b^3}{24\pi} + \frac{s}{32\pi}\left(\Delta\kappa_b^2 - \frac{\kappa_\phi}{l_b}\phi^2\right) - s\rho_-\int_0^\infty dz\left[e^{-\frac{q_-^2}{2}w(z) + q_-\phi_0(z)} + \frac{q_-}{q_+}\,e^{-\frac{q_+^2}{2}w(z) - q_+\phi_0(z)}\right] \qquad(b5)$$
appendix c: variational choice for the charged
dielectric interface
the two types of piecewise variational functions used
for single charged surfaces are reported below.
• the first trial potential obeys the salt-free equa-
tion in the first zone and the nlpb solution in the
second zone,

$$\frac{\partial^2\tilde\phi_0^{\,nl}}{\partial\tilde z^2} = 2\delta(\tilde z) \quad\text{for}\;\; \tilde z \le \tilde h, \qquad\qquad \frac{\partial^2\tilde\phi_0^{\,nl}}{\partial\tilde z^2} - \tilde\kappa_\phi^2\sinh\tilde\phi_0^{\,nl} = 0 \quad\text{for}\;\; \tilde z \ge \tilde h, \qquad(c1)$$
whose solution is

$$\tilde\phi_0^{\,nl}(\tilde z) = \begin{cases} 4\,\mathrm{arctanh}\,\gamma + 2(\tilde z - \tilde h) & \text{for } \tilde z \le \tilde h, \\ 4\,\mathrm{arctanh}\!\left[\gamma\,e^{-\tilde\kappa_\phi(\tilde z - \tilde h)}\right] & \text{for } \tilde z \ge \tilde h, \end{cases} \qquad(c2)$$

where γ = κ̃φ − √(1 + κ̃φ²). variational parameters are h and κφ, and the electrostatic contribution of the variational grand potential eq. (a3) is
$$\frac{f_1}{\tilde s} = \frac{\tilde h + \gamma - 4\,\mathrm{arctanh}\,\gamma}{2\pi\xi} - \frac{\tilde\kappa_b^2}{4\pi\xi}\int d\tilde z\; e^{-\frac{\xi}{2}\tilde w(\tilde z)}\cosh\tilde\phi_0^{\,nl}. \qquad(c3)$$
• the second type of trial potential obeys the salt-
free equation with a charge renormalization in the
first zone and the linearized poisson-boltzmann so-
lution in the second zone,

$$\frac{\partial^2\tilde\phi_0^{\,l}}{\partial\tilde z^2} = 2\eta\,\delta(\tilde z) \quad\text{for}\;\; \tilde z \le \tilde h, \qquad\qquad \frac{\partial^2\tilde\phi_0^{\,l}}{\partial\tilde z^2} - \tilde\kappa_\phi^2\tilde\phi_0^{\,l} = 0 \quad\text{for}\;\; \tilde z \ge \tilde h, \qquad(c4)$$
whose solution is given by

$$\tilde\phi_0^{\,l}(\tilde z) = \begin{cases} -\dfrac{2\eta}{\tilde\kappa_\phi} + 2\eta(\tilde z - \tilde h) & \text{for } \tilde z \le \tilde h, \\[2mm] -\dfrac{2\eta}{\tilde\kappa_\phi}\,e^{-\tilde\kappa_\phi(\tilde z - \tilde h)} & \text{for } \tilde z \ge \tilde h. \end{cases} \qquad(c5)$$
variational parameters introduced in this case are h̃, κ̃φ, and the charge renormalization η, which takes into account non-linearities at the mean-field level [19]. the variational grand potential reads

$$\frac{f_1}{\tilde s} = \frac{2\eta(1 + \tilde h\tilde\kappa_\phi) - \eta^2(1/2 + \tilde h\tilde\kappa_\phi)}{2\pi\xi\,\tilde\kappa_\phi} - \frac{\tilde\kappa_\phi^2}{4\pi\xi}\int d\tilde z\; e^{-\frac{\xi}{2}\bar w(\tilde z)}\cosh\tilde\phi_0^{\,l}(\tilde z). \qquad(c6)$$
in both cases, the boundary condition satisfied by φ0 is the gauss law

$$\frac{\partial\tilde\phi_0}{\partial\tilde z}\bigg|_{\tilde z = 0} = 2\eta \qquad(c7)$$

where η = 1 for the non-linear case. it is important to stress that in the case of a charged interface, eq. (c7) holds even if ε ̸= 0. in fact, since the left half-space is ion-free, φ̃0(z) must be constant for z < 0 in order to satisfy the global electroneutrality in the system.
appendix d: definition of the special functions
the definitions of the four special functions used in this work are reported below.

$$\mathrm{li}_n(x) = \sum_{k\ge1}\frac{x^k}{k^n}, \qquad \zeta(n) = \mathrm{li}_n(1) \qquad(d1)$$

$$\beta(x; y, z) = \int_0^x dt\; t^{y-1}(1 - t)^{z-1} \qquad(d2)$$

$$_2f_1(a, b; c; x) = \sum_{k\ge0}\frac{(a)_k(b)_k}{(c)_k}\,\frac{x^k}{k!} \qquad(d3)$$

where $(a)_k = a(a+1)\cdots(a+k-1)$ is the pochhammer (rising factorial) symbol.
appendix e: disjoining pressure for the neutral pore
the net pressure between plates is defined as

$$p = -\frac{1}{s}\frac{\partial f_v}{\partial d} - \left(2\rho_b - \frac{\kappa_b^3}{24\pi}\right) \qquad(e1)$$

where the subtracted term on the rhs is the pressure of the bulk electrolyte.
the total van der waals free energy, which is simply the zeroth order contribution f0 to the variational grand potential eq. (9), is with the constraint κv = κb (there is no renormalization of the inverse screening length at this order),

$$f_{vdw} = \frac{d\,\kappa_b^3}{24\pi} - \frac{\kappa_b}{8\pi d}\,\mathrm{li}_2\!\left(e^{-2d\kappa_b}\right) - \frac{1}{16\pi d^2}\,\mathrm{li}_3\!\left(e^{-2d\kappa_b}\right) \qquad(e2)$$

and

$$p_{vdw} = -\frac{1}{s}\frac{\partial f_{vdw}}{\partial d} + \frac{\kappa_b^3}{24\pi}. \qquad(e3)$$

fig. 17: (color online) difference between the pressure and the screened van der waals contribution vs d/lb for κblb = 0.5, 1 and 1.5, from left to right (ε = 0).
we illustrate in fig. 17 the difference between the van
der waals pressure and the prediction of the variational
calculation for κblb = 0.5, 1 and 1.5. we notice that the
prediction of our variational calculation yields a very sim-
ilar behavior to that illustrated in fig. 8 of ref. [21]. the
origin of the extra-attraction that follows from the vari-
ational calculation was discussed in detail in the same
article. this effect originates from the important ionic
exclusion between the plates at small interplate separa-
tion, an effect that can be captured within the variational
approach.
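the screened van der waals reference of eqs. (e2)-(e3) involves only polylogarithms of an argument smaller than one, so it can be evaluated with a short truncated series; the sketch below treats fvdw of eq. (e2) as a free energy per unit area (so the 1/s of eq. (e3) is already absorbed) and takes the d-derivative by a centred difference. lengths in units of the bjerrum length and energies in kT are assumptions of this illustration.

```python
import numpy as np

def polylog(n, x, kmax=200):
    """truncated series li_n(x) = sum_{k>=1} x^k / k^n, valid for |x| < 1."""
    k = np.arange(1, kmax + 1)
    return np.sum(x**k / k**n)

def f_vdw(d, kappa_b):
    """screened van der waals free energy per unit area, eq. (e2)."""
    u = np.exp(-2.0 * d * kappa_b)
    return (d * kappa_b**3 / (24.0 * np.pi)
            - kappa_b / (8.0 * np.pi * d) * polylog(2, u)
            - 1.0 / (16.0 * np.pi * d**2) * polylog(3, u))

def p_vdw(d, kappa_b, h=1e-5):
    """reference pressure of eq. (e3), with a centred-difference d-derivative."""
    dfdd = (f_vdw(d + h, kappa_b) - f_vdw(d - h, kappa_b)) / (2.0 * h)
    return -dfdd + kappa_b**3 / (24.0 * np.pi)

if __name__ == "__main__":
    for d in (0.5, 1.0, 2.0):   # plate separation in units of l_b
        print(d, p_vdw(d, kappa_b=1.0))
```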
[1] a. heydweiller, ann. d. phys., 4, 33145 (1910)
[2] p.k. weissenborn and r.j. pugh, langmuir, 11, 1422
(1995)
[3] g. wagner, phys. zs., 25, 474 (1924)
[4] l. onsager and n. samaras, j. chem. phys., 2 528 (1934)
[5] v.e. bravina, soviet phys. doklady, 120, 381 (1958)
[6] a.e. yaroshchuk, adv. colloid interf. sci., 85, 193 (2000)
[7] r.b. schoch, j. han and p. renaud, rev. mod. phys.
80, 839 (2008)
[8] a.g. moreira and r.r. netz, phys. rev. lett., 87,
078301 (2000)
[9] a.g. moreira and r.r. netz, europhys. lett., 52, 705
(2000)
[10] h. boroudjerdi et al., physics reports, 416, 129 (2005)
[11] r.r. netz and h. orland, eur. phys. j. e, 1, 203 (2000)
[12] a.g. moreira and r.r. netz, eur. phys. j. e, 8, 33
(2002)
[13] a. naji, s. jungblut, a. moreira and r.r. netz , physica
a, 352, 131 (2005)
[14] s. alexander, p.m. chaikin, p. grant, g.j. morales, p.
pincus and d. hone, j. chem. phys., 80, 5776 (1984)
[15] y. levin, rep. prog. phys., 65, 1577 (2002)
[16] y. levin, j. chem. phys., 113, 9722 (2000)
[17] g.s. manning, j. chem. phys., 51, 924 (1969)
[18] m. kanduc, a naji, y.s. jho, p.a. pincus and r. pod-
gornik, j. phys.: condens. matter, 21, 424103 (2009)
[19] r.r. netz and h. orland, eur. phys. j. e, 11, 301 (2003)
[20] r.a. curtis and l. lue,. j. chem. phys., 123, 174702
(2005)
[21] m.m. hatlo, r.a. curtis and l. lue, j. chem. phys.,
128, 164717 (2008)
[22] m.m. hatlo and l. lue, soft matter, 4, 1582 (2008)
[23] m.m. hatlo and l. lue, soft matter, 5, 125 (2009)
[24] r.r. netz, eur. phys. j. e, 5, 557 (2001)
[25] the case with dielectric discontinuity has been stud-
ied without added salt in m.m. hatlo and l. lue,
arxiv:0806.3716, (2008)
[26] v.m. starov and n.v. churaev, adv. colloid interf. sci.
43, 145 (1993)
[27] x. lefebvre, j. palmeri and p. david, j. phys. chem. b
108, 16811 (2004)
[28] a. yaroshchuk, sep. purif. technology 22-23, 143 (2001)
[29] c. li and h. r. ma, j. chem. phys, 121, 1917 (2004)
[30] i. benjamin, annu. rev. phys. chem., 48, 407-51 (1997)
[31] s.m. avdeev and g.a. martynov, colloid j. ussr, 48, 632 (1968)
[32] note that the factor ξ−1 in front of v0 in eq. (12) found
in [19] should be cancelled.
[33] j. janeček and r.r. netz, j. chem. phys., 130, 074502 (2009)
[34] this is in fact a maximization with respect to φ0 or
to any variational parameter associated to a restricted
choice for φ0. indeed, the physical electrostatic potential
φ0 corresponds to a pure imaginary auxiliary field φ in
the functional integral z. hence the (complex) minimum
of ⟨h −h0⟩0 with respect to φ corresponds to a (real)
maximum with respect to φ0.
[35] j.d. jackson, classical electrodynamics (wiley 2nd ed.,
new york, 1975)
[36] a. szymczyk and p. fievet, j. membrane sci. 252, 77
(2005)
[37] f. fornasiero et al., proc. nat. acad. sci. usa, 105,
17250 (2008)
[38] x. lefebvre and j. palmeri, j. phys. chem. b 109, 5525
(2005)
[39] a. yaroshchuk, adv. colloid interf. sci. 60 1 (1995)
[40] m.e. fisher and y. levin, phys. rev. lett., 71, 3826
(1993)
[41] j.p. simonin et al., j. phys chem. b, 103, 699 (1999)
[42] s. senapati and a. chandra, j. phys. chem. b, 105,
5106 (2001)
[43] j. marti et al., j. phys. chem. b, 110, 23987 (2006)
[44] s. buyukdagli, d.s. dean, m. manghi and j. palmeri, in
preparation.
[45] p. jungwirth and b. winter, annu. rev. phys. chem.,
59, 343 (2008)
[46] c.d. lorenz and a. travesset, phys. rev. e, 75, 061202
(2007)
[47] a. naji and r. podgornik, phys. rev. e, 72, 041402
(2005)
[48] y. ge et al., phys. rev. e, 80, 021928 (2009)
[49] k. leung and s.b. rempe, j. comput. theoret. nanosci., 6, 1948 (2009)
[50] r.r. netz, eur. phys. j. e, 5, 189 (2001)
[51] d.a. mcquarrie, statistical mechanics chap.15 (univer-
sity science book, new york, 2000)
|
0911.1731 | material properties of caenorhabditis elegans swimming at low
reynolds number | undulatory locomotion, as seen in the nematode \emph{caenorhabditis elegans},
is a common swimming gait of organisms in the low reynolds number regime, where
viscous forces are dominant. while the nematode's motility is expected to be a
strong function of its material properties, measurements remain scarce. here,
the swimming behavior of \emph{c.} \emph{elegans} are investigated in
experiments and in a simple model. experiments reveal that nematodes swim in a
periodic fashion and generate traveling waves which decay from head to tail.
the model is able to capture the experiments' main features and is used to
estimate the nematode's young's modulus $e$ and tissue viscosity $\eta$. for
wild-type \emph{c. elegans}, we find $e\approx 3.77$ kpa and $\eta \approx-860$
pa$\cdot$s; values of $\eta$ for live \emph{c. elegans} are negative because
the tissue is generating rather than dissipating energy. results show that
material properties are sensitive to changes in muscle functional properties,
and are useful quantitative tools with which to more accurately describe new
and existing muscle mutants.
| introduction
motility analysis of model organisms, such as the nematode caenorhabditis elegans (c.
elegans), is of great scientific and practical interest. it can provide, for example, a powerful
tool for the analysis of genetic diseases in humans such as muscular dystrophy (md) [1, 2, 3]
since c. elegans have muscle cells that are highly similar in both anatomy and molecular
makeup to vertebrate skeletal muscles [4, 5]. due to the nematode's small size (l ≈1 mm),
the motility of c. elegans swimming in a simple, newtonian fluid is usually investigated in
the low reynolds numbers (re) regime, where linear viscous forces dominate over nonlinear
inertial forces [6, 7]. at low re, locomotion results from non-reciprocal deformations to
break time-reversal symmetry [8]; this is the so-called "scallop theorem" [9]. experimental
observations have shown that motility of swimming nematodes including c. elegans results
from the propagation of bending waves along the nematode's body length [10, 11, 12]. these
waves consist of alternating phases of dorsal and ventral muscle contractions driven by the
neuromuscular activity of muscle cells. while it is generally accepted that during locomotion
the nematode's tissues obey a viscoelastic reaction [13, 14, 15], quantitative data on c.
elegans' material properties such as tissue viscosity and young's modulus remain largely
unexplored.
motility behavior of c. elegans is a strong function of its body material properties.
recent investigations have provided valuable data on c. elegans' motility, such as velocity,
bending frequency, and body wavelength [11, 12, 13, 16, 17, 18]. however, only recently
have the nematode's material properties been probed using piezoresistive cantilevers [19].
such invasive measurements provided young's modulus values of the c. elegans' cuticle on
the order of 400 mpa; this value is closer to stiff rubber than to soft tissues.
in this paper, we investigate the motility of c. elegans in both experiments and in a
model in order to estimate the nematode's material properties.
experiments show that
nematodes swim in a highly periodic fashion and generate traveling waves which decay from
head to tail. a dynamic model is proposed based on force and moment (torque) balance. a
simplified version of the model is able to capture the main features of the experiments such
as the traveling waves and their decay. the model is used to estimate both the young's
modulus and tissue viscosity of c. elegans. such estimates are used to characterize motility
phenotypes of healthy nematodes and mutants carrying muscular dystrophy (md).
ii. experimental methods
experiments are performed by imaging c. elegans using standard microscopy and a high-
speed camera at 125 frames per second. we focus our analysis on forward swimming in
shallow channels to minimize three-dimensional motion. channels are machined in acrylic
3
and are 1.5 mm wide and 500 μm deep; they are sealed with a thin (0.13 mm) cover glass.
channels are filled with an aqueous solution of m9 buffer [20], which contains 5 to 10
nematodes. the buffer viscosity μ and density ρ are 1.1 cp and 1.0 g/cm3, respectively.
under such conditions, the reynolds number, defined as re = ρul/μ, is less than unity,
where u and l are the nematode's swimming speed and length, respectively.
in fig. 1(a), we display nematode tracking data over multiple bending cycles for a healthy,
wild-type nematode. results show that the nematode swims with an average speed ⟨u⟩ =
0.45 mm/s and with a beating pattern of period t = 0.46 s. this periodic behavior is
also qualitatively observed in the motion of the nematode tail (fig. 1a); see also video 1
(supplementary material). under such conditions, re ≈0.4. snapshots of the nematode
skeletons over one beating cycle (fig. 1b) reveal an envelope of well-confined body postures
with a wavelength of approximately 1 mm, which corresponds nearly to the nematode's
body length. the displacement amplitudes at the head and tail are similar with 465 μm
and 400 μm, respectively. however, the amplitudes of the curvature at the head and tail
differ sharply with approximately 6.07 mm−1 and 2.21 mm−1, respectively. the tail/head
curvature ratio of 0.36 suggests that the bending motion is initiated at the head.
extensive genetic analysis in c. elegans has identified numerous mutations affecting ne-
matode motility. one such mutant, dys-1, encodes a homolog of the human dystrophin
protein, which is mutated in duchenne's and becker's muscular dystrophy (md). using qual-
itative observation, dys-1 mutants have an extremely subtle movement defect [21], which
includes slightly exaggerated head bending and time-dependent decay in movement. the
quantitative imaging platform presented here is able to robustly differentiate between wild
type and dys-1 mutants, as shown in fig. 1(c) and video 2 (supplementary material).
results show that the dys-1 mutant swims with an average speed ⟨u⟩ = 0.17 mm/s and
re ≈0.15; both values are significantly smaller than for the wild-type nematode. although
the dys-1 mutant suffers from severe motility defects [21], it still moves in a highly periodic
fashion with t = 0.63 s. snapshots of nematode skeletons over one beating cycle (fig. 1d)
also reveal an envelope of well-confined body postures with a wavelength corresponding to
the nematode's body length. the dys-1 mutant exhibits a tail/head curvature ratio of 0.23,
which is similar to the value found for wild type nematodes. however, the corresponding
displacement amplitudes of the mutant are much smaller than those observed for the wild-
type (fig. 1b). the displacement amplitudes at the head and tail are 330 μm and 155 μm,
respectively. this observation suggests that the dys-1 mutant body, and in particular the
tail, are becoming inactive as the bending motion is not able to deliver as much body
displacement.
to further characterize the motility of c. elegans, we measure the curvature κ(s, t) =
dφ/ds along the nematode's body (fig. 2a). here, φ is the angle made by the tangent to
the x-axis at each point along the centerline and s is the arc-length coordinate spanning
the nematode's head (s = 0) to its tail (s = l). the spatio-temporal evolution of κ for a
swimming nematode is shown in fig. 2(a). approximately 6 bending cycles are illustrated
and curvature values are color-coded; red and blue represent positive and negative values of
κ, respectively. the y-axis in fig. 2(a) corresponds to the non-dimensional body position
s/l. the contour plot shows the existence of highly periodic, well-defined diagonally oriented
lines. these diagonal lines are characteristic of bending waves, which propagate in time along
the body length. note that as the wave travels along the nematode body, the magnitude of κ
decays from head to tail. such behavior contrasts sharply with that observed for undulatory
swimmers of the inertial regime (e.g. eel, lamprey), where amplitudes of body displacement
grow instead from head to tail [22, 23].
the body bending frequency (f) is obtained from the one-dimensional fast fourier trans-
form (fft) of the curvature field κ at multiple body positions s/l (fig. 2b). here, the
body bending frequency is defined as f = ω/2π. the angular frequency ω is calculated by
first extracting multiple lines from the curvature field at distinct body positions s/l, and
then computing the one-dimensional fft. the wave speed c is extracted from the slope of
the curvature κ propagating along the nematode's body; the wavelength λ is computed from
the expression λ = c/f. a single frequency peak f = 2.17 ± 0.18 hz (n = 25) is found in
the fourier spectrum, where n is the number of nematodes. this single peak is irrespective
of body position and corresponds to a wave speed c = 2.14 ± 0.16 mm/s.
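the post-processing described above can be reproduced with a few lines of numpy, assuming the tracked centreline is available as arrays x(s, t) and y(s, t) sampled at the camera frame rate; the sketch below computes the tangent angle φ, the curvature κ = dφ/ds and the dominant bending frequency at a few body positions from a one-dimensional fft (the wave speed then follows from the phase slope, not shown). the array layouts and the synthetic demo data are assumptions of this illustration, not the authors' code.

```python
import numpy as np

def curvature_field(x, y):
    """curvature kappa(s, t) = d(phi)/ds from tracked centreline coordinates.
    x, y : arrays of shape (n_frames, n_points) along the nematode body."""
    dx, dy = np.gradient(x, axis=1), np.gradient(y, axis=1)
    phi = np.unwrap(np.arctan2(dy, dx), axis=1)   # tangent angle along the body
    ds = np.sqrt(dx**2 + dy**2)                   # local arc-length increment
    return np.gradient(phi, axis=1) / ds

def bending_frequency(kappa, fps, positions=(0.25, 0.5, 0.75)):
    """dominant temporal frequency of kappa(s, t) at selected body positions s/l."""
    n_frames, n_points = kappa.shape
    freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
    out = {}
    for p in positions:
        spectrum = np.abs(np.fft.rfft(kappa[:, int(p * (n_points - 1))]))
        out[p] = freqs[1:][np.argmax(spectrum[1:])]   # skip the dc component
    return out

if __name__ == "__main__":
    # synthetic travelling wave standing in for tracked data (f = 2 hz, 125 fps)
    fps = 125.0
    t = np.arange(0.0, 4.0, 1.0 / fps)
    s = np.linspace(0.0, 1.0, 100)                      # body coordinate, mm
    y = 0.2 * np.exp(-s) * np.cos(2.0 * np.pi * (2.0 * t[:, None] - s[None, :]))
    x = np.tile(s, (t.size, 1))
    print(bending_frequency(curvature_field(x, y), fps))
```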
iii. mathematical methods
the swimming motion of c. elegans is modeled as a slender body in the limit of low
reynolds numbers (re) [24, 25, 26].
this model is later used to estimate the material
properties of c. elegans. we assume that the nematode is inextensible [26]; the uncertainty
in the measured body lengths is less than 3%. the nematode's motion is restricted to the
xy-plane and is described in terms of its center-line y(s, t), where s is the arc-length along the
filament [27]. the swimming c. elegans experiences no net total force or torque (moments)
such that, in the limit of low re, the dynamic equations of motion are

$$\frac{\partial\vec f}{\partial s} = c_t\vec u_t + c_n\vec u_n, \qquad(1)$$

$$\frac{\partial m}{\partial s} = -\left[f_y\cos(\phi) - f_x\sin(\phi)\right]. \qquad(2)$$
in eq. (1), ⃗f(s, t) is the internal force in the nematode, ci is the drag coefficient experienced by the nematode, ⃗ui is the nematode velocity, and the subscripts t and n correspond
to the tangent and normal directions, respectively. the drag coefficients ct and cn are ob-
tained from slender body theory [26]. due to the finite confinement of nematodes between
parallel walls, corrections for wall effects on the resistive coefficients are estimated for slender
cylinders [7, 28].
in eq. (2), m = mp + ma, where mp is a passive moment and ma is an active moment
generated by the muscles of the nematode; the active and passive moments are parts of a
total internal moment [13, 14]. the passive moment is given by the voigt model [15] such
that mp = eiκ + ηpi(∂κ/∂t), where i is the second moment of inertia of the nematode
cross section. the voigt model is one of the simplest models for muscle and is extensively
used in the literature [29]. qualitatively, the elastic part of the voigt model is represented
by a spring of stiffness e while the dissipative part of the voigt model is represented by
a dashpot filled with a fluid of viscosity ηp (fig. 3). here, we assume two homogeneous
effective material properties, namely (i) a constant young's modulus e and (ii) a constant
tissue viscosity ηp.
the active moment generated by the muscle is given by ma = −(eiκa + ηai∂κ/∂t),
where κa is a space and time dependent preferred curvature produced by the muscles of
the nematode and ηa is a positive constant [30]. a simple form for κa can be obtained by
assuming that κa is a sinusoidal function of time with an amplitude that decreases from
the nematode's head to its tail (see appendix). note that if η = ηp −ηa > 0, there is net
dissipation of energy in the tissue; conversely, if η = ηp −ηa < 0, there is net generation
of energy in the tissue. experiments have shown that the force generated by active muscle
decreases with increasing velocity of shortening [29], so that the force-velocity curve for
active muscle has a negative slope (fig. 3). such negative viscosity has been derived by a
mathematical analysis of the kinetics of the mechano-chemical reactions in the cross-bridge
cycle of active muscles [30]. for live nematodes, we expect η = ηp −ηa < 0 because the net
energy produced in the (muscle) tissue is needed to overcome the drag from the surrounding
fluid.
equations (1) and (2) are simplified by noting that the nematode moves primarily in the
x-direction (see video 1, s.m.) and that the deflections of its centerline from the x-axis are
small. in such case, s ≈x and cos(φ) ≈1. this results in a linearized set of equations given
by

$$\frac{\partial f_y}{\partial x} - c_n\frac{\partial y}{\partial t} = 0, \qquad(3)$$

$$\frac{\partial m}{\partial x} + f_y = 0. \qquad(4)$$
differentiating eq. (4) with respect to x and combining with eq. (3), we obtain

$$\frac{\partial^2 m}{\partial x^2} + c_n\frac{\partial y}{\partial t} = 0. \qquad(5)$$
substituting for m(x, t) in terms of κ(x, t) and its time derivative yields a bi-harmonic equation for the displacement y(x, t) of the type

$$\frac{\partial^4 y}{\partial x^4} + \xi\frac{\partial y}{\partial t} = 0, \qquad(6)$$
which can be solved analytically for appropriate boundary conditions, where ξ is a con-
stant that depends on the nematode's material properties and the fluid drag coefficient (see
appendix).
the boundary conditions are such that both the force and moment at the nematode's
head and tail are equal to zero. that is, fy(0, t) = fy(l, t) = 0 and m(0, t) = m(l, t) = 0.
note that the zero moment boundary conditions at the head and tail imply that eiκ(0, t)+
ηi(∂κ(0, t)/∂t) = eiκa(0, t) and eiκ(l, t) + ηi(∂κ(l, t)/∂t) = eiκa(l, t).
experiments show that the curvature κ has non-zero amplitudes (fig. 2a) both at the head
(x = 0) and the tail (x = l). in order to capture this observation, we assume that κa(x, t) is a
sinusoidal wave with decreasing amplitude of the form κa(x, t) = q0 cos ωt+q1x cos(ωt−b),
where q0, q1 and b are inferred from the experiments. note that if the curvature amplitude
7
at the head is larger than that at the tail, then the nematode swims forward. conversely,
if the curvature amplitude is smaller at the head than at the tail, the nematode swims
backward; if the amplitudes are equal at the head and tail then it remains stationary. we
note, however, that other forms of the preferred curvature κa are possible and could replicate
the behavior seen in experiments.
iv. results & discussion
equation (6) is solved for the displacement y(x, t) in order to obtain the curvature
κ(x, t) = ∂2y/∂x2. the solution for y(x, t) is a superposition of four traveling waves of
the general form ai exp(−βx cos pi) cos(βx sin pi − ωt − φi) where β = (cnω/kb)^{1/4} and
pi is a function of the phase angle ψ. the amplitude ai and phase φi are constants to
be determined by enforcing the boundary conditions discussed above (see appendix). the
solution reveals both the traveling bending wave and the characteristic decay in κ, as seen in
experiments. note that our formulation does not assume a wave-functional form for κ(x, t).
rather the wave is obtained as part of the solution.
next, the curvature amplitude |κ(x)| predicted by the model is fitted to those obtained
from experiment to estimate the bending modulus kb = i√(e² + ω²η²) and the phase angle ψ = tan⁻¹(ηω/e). the nematode is assumed to be a hollow, cylindrical shell [19, 31] such
that i = π((rm + t/2)4 −(rm −t/2)4)/4, where the mean nematode radius rm ≈35 μm and
the cuticle thickness t ≈0.5 μm [32]. for the population of wild-type c. elegans tested here
(n = 25), the best fit values are kb = (4.19 ± 0.49) × 10⁻¹⁶ n·m² and ψ = −45.3° ± 3.0°.
in fig. 4, the experimental values of |κ(x)| along the body of a wild-type c. elegans are
displayed together with theoretical values of |κ(x)|, which are obtained by using the best
fit value of the bending modulus for this nematode and by changing the phase angle from
ψ = 0o to ψ = −90o. figure 4 shows that the model is able to capture the decay in |κ| as a
function of body length and the nematode's viscoelastic behavior.
the values of young's modulus e and tissue viscosity η can now be estimated based on
the values of ψ and kb discussed above. results show that, for the wild-type nematodes,
e = 3.77 ± 0.62 kpa and η = −860.2 ± 99.4 pa·s. the estimated value of e lies in the
range of values of tissue elasticity measured for isolated brain (0.1 −1 kpa) and muscle
cells (8 −17 kpa) [33]. the values of η for live c. elegans are negative because the tissue
is generating rather than dissipating energy [30, 34, 35, 36]. we note, however, that the
absolute values of tissue viscosity |η| are within the range (102 −104 pa*s) measured for
living cells [37, 38]. in order to determine whether the nematode's material properties can
be extracted reliably from shape measurements alone, experiments in solutions of different
viscosities were conducted [39]. the inferred young's modulus and effective tissue viscosity
remain constant for up to a 5-fold increase in the surrounding fluid viscosity (or mechanical
load).
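for completeness, the relations quoted above can be inverted directly: with kb = i√(e² + ω²η²) and ψ = tan⁻¹(ηω/e), one has e = (kb/i)cos ψ and η = (kb/i)sin ψ/ω. the sketch below does this using the hollow-shell geometry quoted above; it is indicative only, since the population-averaged values reported in the text come from per-nematode fits and depend on the exact cross-section assumed, so a direct inversion of the averaged numbers need not reproduce them exactly.

```python
import numpy as np

def material_properties(kb, psi_deg, freq_hz, r_m=35e-6, t=0.5e-6):
    """young's modulus e (pa) and tissue viscosity eta (pa s) from the fitted bending
    modulus kb (n m^2) and phase angle psi, assuming the hollow cylindrical shell of
    the text (mean radius r_m, cuticle thickness t) and a beat frequency freq_hz."""
    i_shell = np.pi * ((r_m + t / 2.0)**4 - (r_m - t / 2.0)**4) / 4.0
    omega = 2.0 * np.pi * freq_hz
    psi = np.deg2rad(psi_deg)
    e = (kb / i_shell) * np.cos(psi)
    eta = (kb / i_shell) * np.sin(psi) / omega
    return e, eta

if __name__ == "__main__":
    print(material_properties(kb=4.19e-16, psi_deg=-45.3, freq_hz=2.17))
```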
the nematode's curvature κ(x, t) is now determined from the estimated values of e and
η. figure 2(c) shows a typical curvature κ(x, t) contour plot obtained from the solution of
the above equations using the estimated values of e and η. while the influence of non-
linearities is neglected for the scope of the present paper, the analytical results show that
our linearized model, while not perfect, is nevertheless able to capture the main features
observed in experiments (fig. 2d).
next, the method described above is used to quantify motility phenotypes of three distinct
mutant md strains (see table 1 in supplementary material): one with a well-characterized
muscle defect (dys-1;hlh-1); one with a qualitatively subtle movement defect (dys-1), and
one mutant that has never been characterized with regards to motility phenotypes but
is homologous to a human gene that causes a form of muscular dystrophy expressed in
nematode muscle (fer-1). note that while both fer-1 and dys-1 genes are expressed in c.
elegans muscle, they exhibit little, if any, change in whole nematode motility under standard
lab assays [40].
figure 5 displays results of both kinematics (5a) and tissue material properties (5b) for all
nematodes investigated here. quantitative results are summarized in table 1 (supplemen-
tary material). we find that all three mutants exhibit significant changes in both motility
kinematics and tissue properties. for example, fer-1 mutants exhibit defects in motility
kinematics which are not found with standard assays [40]. specifically, the maximum
amount of body curvature attained in fer-1 mutants and the rate of curvature decay along
the body are both increased by ∼5% (fig. 5a). these data show that
fer-1(hc24) mutants exhibit small yet noteworthy defects in whole nematode motility and
exhibit an uncoordinated (unc) phenotype. in comparison, kinematics data on dys-1;hlh-
1 show that body curvature at the head of such mutant nematodes increases by ∼70%
compared to wild-type nematodes, while the rate of decay along the body is increased by
approximately 40%. these results are useful to quantify the paralysis seen earlier in the
tail motion of such md mutants (fig. 1c).
the young's modulus (e) and the absolute values of tissue viscosity (|η|) of wild-type
and mutant strains are shown in fig. 5(b). results show that mutants have lower values
of e when compared to wild-type nematodes. in other words, dys-1, dys-1;hlh-1, and fer-1
mutant c. elegans are softer than their wild-type counterparts. the values of |η| of fer-1
mutants are similar to wild-type nematodes, within experimental error. however, the values
of |η| for dys-1 mutants are lower than wild-type c. elegans. since muscle fibers are known
to exhibit visible damage for dys-1;hlh-1 mutants [21], we hypothesize that the deterioration
of muscle fibers may be responsible for the lower values of e and |η| found for dys-1 and
dys-1;hlh-1 mutants.
v. conclusion
in summary, we characterize the swimming behavior of c. elegans at low re. results
show a distinct periodic swimming behavior with a traveling wave that decays from the
nematode's head to tail. by coupling experiments with a linearized model based on force
and torque balance, we are able to estimate, non-invasively, the nematode's tissue material
properties such as young's modulus (e) and viscosity (η) as well as bending modulus (kb).
results show that c. elegans behaves effectively as a viscoelastic material with e ≈3.77 kpa,
|η| ≈860.2 pa*s, and kb ≈4.19 × 10−16 nm2. in particular, the estimated values of e are
much closer to those of biological tissues than previously reported values obtained using piezoresistive
cantilevers [19]. we demonstrate that the methods presented here may be used, for example,
to quantify motility phenotypes and tissue properties associated with muscular dystrophy
mutations in c. elegans. overall, by combining kinematic data with a linearized model,
we are able to provide a robust and highly quantitative phenotyping tool for analysis of
c. elegans motility, kinematics, and tissue mechanical properties. given the rapid non-
invasive optical nature of this method, it may provide an ideal platform for genetic and
small molecule screening applications aimed at correcting phenotypes of mutant nematodes.
our method also sheds new light on our understanding of muscle function, physiology, and
animal locomotion in general.
appendix:
in this appendix, we detail the model for the motion of the nematode c. elegans. the
nematode is modeled as a slender filament at low reynolds numbers [26]. in our experiments,
the uncertainty in the measured body lengths is less than 3% and we assume inextensibility.
the nematode's motion is described in terms of its center-line y⃗(s, t), where s is the arc-length
along the filament and t is time [27]. we assume that the nematode moves in the
xy-plane. the swimming c. elegans experiences no net total force or torque (moments) such
that, in the limit of low re, the equations of motion are
∂f⃗/∂s = ct u⃗t + cn u⃗n , (a.1)
(∂m/∂s) e⃗z = −t̂ × f⃗ , (a.2)
where t̂(s, t) = ∂y⃗/∂s is the tangent vector to the center-line, f⃗(s, t) is the internal force,
and m(s, t) e⃗z = m⃗(s, t) is the internal moment consisting of a passive and an active part
[13, 14]. the tangential and normal velocities are respectively given by u⃗t = (∂y⃗/∂t · t̂) t̂ and
u⃗n = (i − t̂⊗t̂) ∂y⃗/∂t. the drag coefficients, ct and cn, are obtained from slender body theory
[26]. due to the finite confinement of nematodes between the parallel walls, corrections for
wall effects on the resistive coefficients are estimated for slender cylinders [7]. the local
body position y⃗, the velocity at any body position ∂y⃗/∂t, and the tangent vector t̂ are all
experimentally measured.
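the explicit resistive-coefficient formulas are not reproduced in this appendix; purely as an illustration, a gray-hancock-type slender-body estimate (one common choice, and an assumption of this sketch; the wall corrections of [7], which substantially increase the coefficients for a nematode confined between plates, are not included) can be written as:

import numpy as np

def gray_hancock_coeffs(mu, wavelength, radius):
    # resistive-force coefficients per unit length for a slender filament;
    # this particular functional form is an assumption of this example.
    log_term = np.log(2*wavelength/radius)
    c_t = 2*np.pi*mu/(log_term - 0.5)   # tangential drag coefficient
    c_n = 4*np.pi*mu/(log_term + 0.5)   # normal drag coefficient
    return c_t, c_n

# water-like buffer, ~1 mm undulation wavelength, ~35 um body radius
c_t, c_n = gray_hancock_coeffs(mu=1.0e-3, wavelength=1.0e-3, radius=35e-6)
print(f"c_t = {c_t:.2e} n s/m^2, c_n = {c_n:.2e} n s/m^2, c_n/c_t = {c_n/c_t:.2f}")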
the constitutive relation for the moment m(s, t) in our inextensible filament is assumed
to be given by
m = mp + ma,
(a.3)
where mp(s, t) is a passive moment and ma(s, t) is an active moment generated by the
muscles of the nematode. the passive moment is given by a viscoelastic voigt model [15]
mp = e i κ + ηp i ∂κ/∂t , (a.4)
where κ(s, t) is the curvature along the nematode.
here, we assume two homogeneous
effective material properties, namely (i) a constant young's modulus e and (ii) a constant
tissue viscosity ηp. the active moment generated by the muscle is assumed to be given by
ma = −(e i κa + ηa i ∂κ/∂t) , (a.5)
where κa = q0 cos ωt + q1 s cos(ωt − b) is a preferred curvature and ηa is a positive
constant [30]. q0, q1 and b must be obtained by fitting to experiments. q1^2 l^2 +
2 q0 q1 l cos(b) < 0 means that the curvature amplitudes at the head are larger than those at
the tail and we should expect traveling waves going from head to tail causing the nematode
to swim forward; similarly, q1^2 l^2 + 2 q0 q1 l cos(b) > 0 should give traveling waves going
from tail to head so that the nematodes swim backward [12]. the total moment m(s, t) can
be written as
m = mp + ma = e i (κ − κa) + (ηp − ηa) i ∂κ/∂t = e i (κ − κa) + η i ∂κ/∂t . (a.6)
note that if η = ηp −ηa > 0 then there is net dissipation of energy in the tissue; if
η = ηp −ηa < 0 then there is net generation of energy in the tissue. for live nematodes
that actively swim in the fluid we expect η = ηp −ηa < 0 since the net energy produced
in the (muscle) tissue is needed to overcome the drag from the surrounding fluid. in other
words, the driving force for the traveling waves seen in the nematode has its origins in the
contractions of the muscle.
we model the nematode as a hollow cylindrical shell with outer radius ro = rm + t/2 and inner radius
ri = rm − t/2 [19], such that the principal moment of inertia (second moment of the area of cross-section)
i along the entire length of the nematode is given by i = (π/4) [(rm + t/2)^4 − (rm − t/2)^4], where
rm is the mean radius and t is the cuticle thickness.
consistent with experimental observations, we assume that the nematode moves along
the x-axis and that the deflections of the centerline of the nematode from the x-axis are
small. this allows us to take s = x and write κ(x, t) = ∂φ/∂x = ∂^2y/∂x^2, where y(x, t) is
the deflection of the centerline of the nematode and φ(x, t) is the angle made by the tangent
t̂ to the x-axis. the equations of motion can then be written as
∂fx/∂x = ct vx , ∂fy/∂x = cn ∂y/∂t , (a.7)
∂m/∂x + fy = 0 , (a.8)
where f⃗ = fx e⃗x + fy e⃗y. vx is the velocity of the nematode along the x-axis and corresponds
to the average forward speed u. combining a linearized formulation of eqs. (a.1) and (a.2)
along with the viscoelastic model of eq. (a.6) offers a direct route towards (i) a closed-form
analytical solution for the curvature κ(x, t) and (ii) an estimate of tissue properties (e and
η). note that due to the assumption of small deflections [26], the x-component of the force
balance becomes decoupled from the rest of the equations. as a result, we will not be able
to predict vx even if y(x, t) is determined. but, we can solve equations (a.6), (a.7), and
(a.8) for appropriate boundary conditions on fy and m to see if we get solutions that look
like traveling waves whose amplitude is not a constant, but in fact, is decreasing from the
head to the tail of the nematode (fig. 2a). to do so, we observe that the nematode's body
oscillates at a single frequency irrespective of position x along its centerline (fig. 2b). we
therefore assume y(x, t) and m(x, t) to have a form that involves a single frequency ω, so
that y(x, t) = f(x) cos ωt + g(x) sin ωt, and
m(x, t) = kb(ω) ∂^2f/∂x^2 cos(ωt + ψ(ω)) + kb(ω) ∂^2g/∂x^2 sin(ωt + ψ(ω)) , (a.9)
where f(x) and g(x) are as yet unknown functions, ψ(ω) is a frequency-dependent phase
angle, and kb(ω) is a frequency-dependent bending modulus of the homogeneous viscoelastic
material making up the nematode. in particular,
tan ψ = ηω/e , kb = i √(e^2 + ω^2η^2) , (a.10)
where the parameters kb, e, and η correspond to the effective tissue properties of the nematode.
to be consistent with the observation of a single frequency ω and non-zero amplitudes
of the curvature κ at the head and tail, we apply the boundary conditions
fy(0, t) = 0, m(0, t) = 0, fy(l, t) = 0, m(l, t) = 0. (a.11)
we can make further progress by differentiating the balance of moments once with respect
to x and substituting the y-component of the balance of forces into it to get
∂^2m/∂x^2 + cn ∂y/∂t = 0 . (a.12)
substituting for m(x, t) from eqn.(a.9) yields a biharmonic equation for the functions f
and g which can be solved to give the following solution for y(x, t)
y(x, t) = a01 exp(βx cos(π/8 + ψ/4)) cos(βx sin(π/8 + ψ/4) − ωt − φ01)
+ a23 exp(−βx cos(π/8 + ψ/4)) cos(−βx sin(π/8 + ψ/4) − ωt − φ23)
+ a45 exp(βx cos(ψ/4 − 3π/8)) cos(βx sin(ψ/4 − 3π/8) − ωt − φ45)
+ a67 exp(−βx cos(ψ/4 − 3π/8)) cos(−βx sin(ψ/4 − 3π/8) − ωt − φ67) , (a.13)
where β = (cn ω/kb)^(1/4), and a01, a23, a45, a67, φ01, φ23, φ45 and φ67 are eight constants to
be determined from the eight equations resulting from the sin ωt and cos ωt coefficients of the
boundary conditions. note that these are four waves of the type y(x, t) = b(x) cos(2π(x +
vwt)/λ) which is the form originally assumed by gray and hancock based on experiment
[26]. in contrast, we have obtained such waves as a solution to the equations of motion. we
plot the amplitude and phase of these waves for a particular choice of parameters in fig. 6.
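as a quick numerical illustration (a sketch using the parameter values listed in the fig. 6 legend, which are specific to that example and are the only assumptions here), one can evaluate β and the head-to-tail amplitude factors exp(±βl cos(·)) of the two pairs of component waves:

import numpy as np

omega, length = 4*np.pi, 1.0e-3     # rad/s, body length l [m]
kb, cn        = 5.0e-16, 0.06       # n m^2, n s/m^2
psi           = np.deg2rad(-45.0)

beta = (cn*omega/kb)**0.25
print(f"beta = {beta:.0f} 1/m, beta*l = {beta*length:.2f}")

for name, ang in {"a01/a23 pair": np.pi/8 + psi/4,
                  "a45/a67 pair": psi/4 - 3*np.pi/8}.items():
    decay = np.exp(-beta*length*np.cos(ang))   # decaying member of the pair
    print(f"{name}: decaying factor {decay:.3f}, growing factor {1/decay:.0f}")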
but, the exact solution above is cumbersome to use. we need simple expressions that can
be easily fit to some observable in the experiment to obtain β and ψ, or equivalently, e and
η. one such parameter is the amplitude of the traveling waves as a function of position x
along the nematode. we develop a strategy to obtain this amplitude in the following.
based on the exact solutions to the equations we approximate the displacement y(x, t)
as follows:
y(x, t) ≈ a1 exp(−βx cos(ψ/4 − 3π/8)) cos(−βx sin(ψ/4 − 3π/8) − ωt)
+ a2 exp(βx cos(ψ/4 + π/8)) cos(βx sin(ψ/4 + π/8) − ωt + φ2)
+ a2 exp(βx cos(ψ/4 − 3π/8)) cos(βx sin(ψ/4 − 3π/8) − ωt + φ2 − 3π/4)
+ a1 exp(−βx cos(ψ/4 + π/8)) cos(−βx sin(ψ/4 + π/8) − ωt − π/2) . (a.14)
for experimental values of κa(l, t)/κa(0, t) = 0.33 we find a1/a2 ≈443 so that a2 is much
smaller in comparison to a1. note that
y(0, t) = a1 cos ωt + a2 cos(ωt − φ2) + a2 cos(ωt − φ2 + 3π/4) + a1 cos(ωt + π/2)
= √2 a1 cos(ωt + π/4) + 2 a2 cos(3π/8) cos(ωt − φ2 + 3π/8) . (a.15)
the curvature can then be calculated as:
κ(x, t) = ∂^2y/∂x^2 ≈ β^2 a1 exp(−βx cos(ψ/4 − 3π/8)) cos(−βx sin(ψ/4 − 3π/8) − ωt − 3π/4 + ψ/2)
+ β^2 a2 exp(βx cos(ψ/4 + π/8)) cos(βx sin(ψ/4 + π/8) − ωt + φ2 + π/4 + ψ/2)
+ β^2 a2 exp(βx cos(ψ/4 − 3π/8)) cos(βx sin(ψ/4 − 3π/8) − ωt + φ2 − 3π/4 − 3π/4 + ψ/2)
+ β^2 a1 exp(−βx cos(ψ/4 + π/8)) cos(−βx sin(ψ/4 + π/8) − ωt − π/2 + π/4 + ψ/2) . (a.16)
note again that
κ(0, t) = β^2 a1 cos(ωt + 3π/4 − ψ/2) + β^2 a2 cos(ωt − φ2 − π/4 − ψ/2)
+ β^2 a2 cos(ωt − φ2 + 3π/4 + 3π/4 − ψ/2) + β^2 a1 cos(ωt + π/4 − ψ/2)
= √2 β^2 a1 cos(ωt + π/2 − ψ/2) + 2 β^2 a2 cos(7π/8) cos(ωt − φ2 − ψ/2 + 5π/8) . (a.17)
it is possible to determine β and ψ from (a.15) and (a.17) alone. if we recognize that
a1 ≫ a2 at the head, then we see that the ratio of the amplitude of the curvature to the
amplitude of the displacement at the head is simply β^2 and the phase difference between
them is (π/4 − ψ/2) (the phase difference between ∂y/∂x and ∂^2y/∂x^2 at the head is
19π/16 − ψ/4). we find by comparing with the exact solution that an estimate of β using
this method is accurate to within 1% and that of ψ is accurate to within 2 or 3 degrees.
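a small sketch of this head-based estimate (with synthetic signals standing in for the tracked head displacement and curvature; the numbers used to build them are illustrative) is:

import numpy as np

def estimate_beta_psi(t, y_head, kappa_head, omega):
    # fit a*cos(omega t) + b*sin(omega t) to each signal, then use
    # |kappa|/|y| ~ beta^2 and phase lag ~ pi/4 - psi/2 (valid since a1 >> a2)
    basis = np.column_stack([np.cos(omega*t), np.sin(omega*t)])
    (ay, by), *_ = np.linalg.lstsq(basis, y_head, rcond=None)
    (ak, bk), *_ = np.linalg.lstsq(basis, kappa_head, rcond=None)
    amp_y, ph_y = np.hypot(ay, by), np.arctan2(-by, ay)
    amp_k, ph_k = np.hypot(ak, bk), np.arctan2(-bk, ak)
    dphi = np.angle(np.exp(1j*(ph_k - ph_y)))     # wrap to (-pi, pi]
    return np.sqrt(amp_k/amp_y), np.pi/2 - 2*dphi

# synthetic check built from eqs. (a.15) and (a.17)
omega, beta0, psi0, a1 = 2*np.pi*2.4, 6.0e3, np.deg2rad(-45.0), 1.0e-4
t = np.linspace(0.0, 2.5, 600)
y = np.sqrt(2)*a1*np.cos(omega*t + np.pi/4)
kap = np.sqrt(2)*beta0**2*a1*np.cos(omega*t + np.pi/2 - psi0/2)
print(estimate_beta_psi(t, y, kap, omega))    # ~ (6000, -pi/4)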
the curvature is an oscillatory function and we can determine its amplitude a(x) simply
by isolating the coefficients of cos ωt and sin ωt and then squaring and adding them. we get
the following expression as a result of this exercise:
a^2(x) = β^4 a1^2 exp(−2βx cos(ψ/4 − 3π/8)) + β^4 a2^2 exp(2βx cos(ψ/4 + π/8))
+ β^4 a2^2 exp(2βx cos(ψ/4 − 3π/8)) + β^4 a1^2 exp(−2βx cos(ψ/4 + π/8))
− 2 β^4 a1 a2 exp(−√2 βx sin(ψ/4 − π/8)) cos(−√2 βx sin(ψ/4 − π/8) − φ2)
+ 2 β^4 a1 a2 cos(−2βx sin(ψ/4 − 3π/8) − φ2 + 3π/4)
− 2 β^4 a2^2 exp(√2 βx cos(ψ/4 − π/8)) cos(√2 βx cos(ψ/4 − π/8) + 3π/4)
+ 2 β^4 a1 a2 cos(2βx sin(ψ/4 + π/8) + π/2 + φ2)
+ 2 β^4 a1 a2 exp(√2 βx sin(ψ/4 − π/8)) cos(√2 βx sin(ψ/4 − π/8) + φ2 − 5π/4)
+ 2 β^4 a1^2 exp(−√2 βx cos(ψ/4 − π/8)) cos(√2 βx cos(ψ/4 − π/8) − π/2) . (a.18)
we can fit the experimental data of curvature as a function of x using the expression above.
there are five fit parameters – a1, a2, β, ψ and φ2. the value of φ2 mostly affects the
curvature profile near the tail. values of φ2 ≈π/3 seem to give good fits for the curvature
data of the nematodes. a1 can be determined from the amplitude of the displacement at
the head. this leaves three fit parameters – a2, β and ψ. we can use this fit to check if the
parameters β and ψ obtained from analyzing the motion of the head alone are reasonable
or not.
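a minimal fitting sketch along these lines (keeping only the dominant a1 terms of (a.18), a simplification justified by a1 ≫ a2, and using synthetic data in place of measured |κ(x)| profiles; the initial guesses are illustrative) is:

import numpy as np
from scipy.optimize import curve_fit

def kappa_amp(x, a1, beta, psi):
    # a1-part of eq. (a.18): curvature amplitude with the small a2 terms dropped
    c1, c2, cm = (np.cos(psi/4 - 3*np.pi/8), np.cos(psi/4 + np.pi/8),
                  np.cos(psi/4 - np.pi/8))
    env2 = (np.exp(-2*beta*x*c1) + np.exp(-2*beta*x*c2)
            + 2*np.exp(-np.sqrt(2)*beta*x*cm)
              * np.cos(np.sqrt(2)*beta*x*cm - np.pi/2))
    return beta**2*a1*np.sqrt(env2)

x = np.linspace(0.0, 1.0e-3, 50)                        # positions along the body [m]
data = kappa_amp(x, 1.0e-4, 6.0e3, np.deg2rad(-45.0))   # stand-in for measured |kappa|
data *= 1 + 0.05*np.random.default_rng(0).standard_normal(x.size)
popt, _ = curve_fit(kappa_amp, x, data, p0=(5e-5, 5e3, -0.5))
print("a1, beta, psi =", popt)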
methods summary
c. elegans strain. all strains were maintained using standard culture methods and fed
with the e. coli strain op50. the following muscular dystrophic (md) strains were used:
fer-1(hc24ts), dys-1(cx18)i and hlh-1(cc561)ii;dys-1(cx18)i double mutant. note that hlh-1
is a myod mutant that qualitatively reveals the motility defects of dys-1 mutants. analyses
were performed on hypochlorite-synchronized young adult animals. fer-1 mutants were
hatched at the restrictive temperature of 25 °c and grown until they reached the young adult
stage. dys-1(cx18) i; hlh-1(cc561) ii mutants were grown at the permissive temperature
of 16 °c. wild-type nematodes grown at the appropriate temperature were used as controls.
strains were obtained from the caenorhabditis elegans genetic stock center.
acknowledgements
the authors would like to thank j. yasha kresh, y. goldman, p. janmey, and t. shinbrot
for helpful discussions. we also thank r. sznitman for help with vision algorithms and p.
rockett for manufacturing acrylic channels. some nematode strains used in this work were
provided by the caenorhabditis genetics center, which is funded by the nih national
center for research resources (ncrr).
references
[1] bargmann, c. i., science 282, 2028 (1998).
[2] mendel, j. e., h. korswagen, k. s. liu, y. m. hadju-cronin, m. i. simon, r. h. plasterk,
and p. w. sternberg, science 267, 1652 (1995).
[3] nelson, l. s., m. l. rosoff, and c. li, science 281, 1686 (1998).
[4] white, j. g., e. southgate, j. n. thomson, and s. brenner, phil. trans. r. soc. lond. b
biol. sci. 275, 327 (1976).
[5] white, j. g., e. southgate, j. n. thomson, and s. brenner, phil. trans. r. soc. lond. b
biol. sci. 314, 1 (1986).
[6] childress, s., mechanics of swimming and flying (cambridge university press, 1981).
[7] brennen, c. and h. winet, annu. rev. fluid mech. 9, 339 (1977).
[8] taylor, g. i., proc. r. soc. a 209, 447 (1951).
[9] purcell, e. m., am. j. phys. 45, 3 (1977).
[10] gray, j. and h. w. lissmann, j. exp. biol. 41, 135 (1964).
[11] korta, j., d. a. clark, c. v. gabel, l. mahadevan, and a. d. t. samuel, j. exp. biol. 210,
2383 (2007).
[12] pierce-shimomura, j. t., b. l. chen, j. j. mun, r. ho, r. sarkis, and s. l. mcintire, proc.
natl. acad. sci. usa 105, 2098220987 (2008).
[13] karbowski, j., c. j. cronin, a. seah, j. e. mendel, d. cleary, and p. w. sternberg, j. theor.
biol. 242, 652 (2006).
[14] guo, z. v. and l. mahadevan, proc. natl. acad. sci. usa 105, 3179 (2008).
[15] fung, y. c., a first course in continuum mechanics (prentice hall, 1993).
[16] cronin, c. j., j. e. mendel, s. mukhtar, y.-m. kim, r. c. stirb, j. bruck, and p. w.
sternberg, bmc genetics 6, 5 (2005).
[17] feng, z., c. j. cronin, j. h. wittig, p. w. sternberg, and w. r. schafer, bmc bioinformatics
5, 115 (2004).
[18] ramot, d., b. e. johnson, t. l. berry, l. carnell, and m. b. goodman, plos one 3, e2208
(2008).
[19] park, s.-j., m. b. goodman, and b. l. pruitt, proc. natl. acad. sci. usa 104, 17376 (2007).
[20] brenner, s., genetics 77, 71 (1974).
[21] gieseler, k., k. grisoni, and l. ségalat, current biology 10, 1092 (2000).
[22] tytell, e., and g. v. lauder, j. exp. biol. 207, 1825 (2004).
[23] kern, s., and p. koumoutsakos, j. exp. biol. 209, 4841 (2006).
[24] qian, b., t. r. powers, and k. s. breuer, phys. rev. let. 100, 078101 (2008).
[25] yu, t. s., e. lauga, and a. e. hosoi, phys. fluids 18, 091701 (2006).
[26] gray, j. and g. hancock, j. exp. biol. 32, 802 (1955).
[27] antman, s. s., nonlinear problems in elasticity (springer-verlag, new york, 1995).
[28] katz, d. f., j. r. blake, and s. l. paveri-fontana, j. fluid mech. 72, 529 (1975).
[29] linden, r. j., recent advances in physiology (churchill livingstone, edinburgh and london,
1974).
[30] thomas, n. and r. thornhill, j. phys. d: appl. phys. 31, 253 (1998).
[31] zelenskaya, a., j. b. de monvel, d. pesen, m. radmacher, j. h. hoh, and m. ulfendahl,
biophys. j. 88, 2982 (2005).
[32] cox, g. n., m. kusch, and r. s. edgar, j. cell biol. 90, 7 (1981).
[33] engler, a. j., s. sen, h. l. sweener, and d. e. discher, cell 126, 677 (2006).
[34] feit, h., m. kawai, and m. i. schulman, muscle nerve 8, 503 (1985).
[35] kawai, m. and p. w. brandt, j. muscle res. cell motil. 1, 279 (1980).
[36] tawada, a. and m. kawai, biophys. j. 57, 643 (1990).
[37] yamada, s., d. wirtz, and s. c. kuo, biophys. j. 78, 1736 (2000).
[38] thoumine, o. and a. ott, j. cell sci. 110, 2109 (1997).
[39] purohit, p. k., x. shen, and p. e. arratia, exp. mech. under review, (2009).
[40] bessou, c., j.-b. giugia, c. j. franks, l. holden-dye, and l. segalat, neurogenetics 2, 61 (1998).
figure legends
figure 1:
motility of wild-type c. elegans and dys-1;hlh-1 muscular dystrophic (md)
mutant swimming at low reynolds number. (a) and (c): visualization of c. elegans
motion illustrating instantaneous body centerline or skeleton. also shown are the nematode's
(i) centroid and (ii) tail-tip trajectories over multiple body bending cycles. (b) and (d):
color-coded temporal evolution of c. elegans skeletons over one beating cycle.
results
reveal a well-defined envelope of elongated body shapes with a wavelength corresponding
approximately to the nematode's body length.
figure 2:
spatio-temporal kinematics of c. elegans forward swimming gait. (a) represen-
tative contour plot of the experimentally measured curvature (κ) along the nematode's body
centerline for approximately 6 bending cycles. red and blue colors represent positive and
negative κ values, respectively. the y-axis corresponds to the dimensionless position s/l
along the c. elegans' body length where s = 0 is the head and s = l is the tail. (b)
nematode's body bending frequency obtained from fast fourier transform of κ at different
s/l. the peak is seen at a single frequency (∼2.4 hz) irrespective of the location s/l.
(c) contour plot of curvature κ values obtained from the model. the model captures the
longitudinal bending wave with decaying magnitude, which travels from head to tail. (d)
comparison between experimental and theoretical curves of κ at s/l = 0.1 and s/l = 0.4;
dashed lines correspond to model predictions (root mean square error is ∼10% of peak-to-
peak amplitude).
figure 3:
schematic of the analytical model for the total internal moment m. muscle tissue
is described by a visco-elastic model containing both passive and active elements.
the
passive moment (mp) is described by the voigt model consisting of a passive elastic element
(spring) of stiffness e (i.e. young's modulus) and a passive viscous element (dashpot) of
tissue viscosity ηp. the active moment (ma) is described by an active muscular element of
viscosity ηa and illustrates a negative slope on a force-velocity plot. since there is a net
generation of energy in the muscle to overcome drag from the surrounding fluid, we expect
η = ηp −ηa < 0.
figure 4:
typical c. elegans viscoelastic material properties. typical experimental profile of
the curvature amplitude |κ| decay as a function of body position s/l. color-coded theoretical
profiles of the curvature amplitude decay |κ| at fixed value of the bending modulus kb.
curves vary from ψ = 0o (red) to ψ = −90o (blue), which corresponds to η = 0 and e = 0,
respectively.
figure 5:
kinematics and material properties of wild type and three muscle mutants of c.
elegans. (a) measured kinematic data and (b) estimated young's modulus e and absolute
values of tissue viscosity |η| for wild-type, fer-1(hc24), dys-1(cx18), and dys-1(cx18); hlh-
1(cc561) adult nematodes (n = 7-25 nematodes for each genotype. * - p < 0.01).
figure 6:
amplitude and phase of traveling waves obtained from enforcing force and mo-
ment boundary conditions at x = 0, l.
(a) amplitudes of the a23 and a67 waves
decrease from x = 0 to x = l while the amplitudes of the a01 and a45 waves increase from
x = 0 to x = l. the amplitudes of the former two waves are larger than the latter two.
(b) phases φ01, φ23, φ45 and φ67 are all constant from x = 0 to x = l. the phase difference
between the a67 wave and a23 wave is approximately 90o. the parameters used to obtain
these plots are: ω = 4π radians/sec, l = 1.0mm, kb = 5.0 × 10−16nm2, cn = 0.06ns/m2,
ψ = −45o, eiq0 = 4.35 × 10−12nm, q1l = −1.054q0 and b = 198.4o.
|
0911.1732 | a deep dive into ngc 604 with gemini/niri imaging | the giant hii region ngc 604 constitutes a complex and rich population for
studying in detail many aspects of massive star formation, such as their
environments and physical conditions, the evolutionary processes involved, the
initial mass function for massive stars and star-formation rates, among many
others. here, we present our first results of a near-infrared study of ngc 604
performed with niri images obtained with gemini north. based on deep jhk
photometry, 164 sources showing infrared excess were detected, pointing to the
places where we should look for star-formation processes currently taking
place. in addition, the color-color diagram reveals a great number of objects
that could be giant/supergiant stars or unresolved, small, tight clusters. an
extinction map obtained from narrow-band images is also shown.
| introduction
ngc 604 is a giant hii region (ghr) located in an outer spiral arm of m33, at a
distance of 840 kpc. it is the second most luminous hii region in the local group,
after 30 doradus in the lmc. both are nearby examples of giant star-forming re-
gions whose individual objects can be spatially resolved for further study. ngc 604 has
been the target of many studies during the past few decades. a brief summary of
known facts about ngc 604 includes the following. this ghr is ionized by a mas-
sive, young cluster, with at least 200 o stars (some as early as o3-o4). the cluster
does not exhibit a central core distribution. instead, the stars are widely spread over
its projected area in a structure called 'scaled ob association' (soba; hunter et al.
(1996); maíz-apellániz et al. (2004); bruhweiler et al. (2003)). wolf-rayet stars, a confirmed
and many candidate red supergiant stars, a luminous blue variable and a supernova
remnant are all part of ngc 604's stellar population (conti & massey (1981);
d'odorico & rosa (1981); drissen et al. (1993); díaz et al. (1996); churchwell & goss
(1999); terlevich et al. (1996); barbá et al. (2009)). the age of the central ionizing cluster
has been determined by different authors as between 3 and 5 myr (gonzález delgado & pérez (2000);
bruhweiler et al. (2003); díaz et al. (1996); hunter et al. (1996); relaño & kennicutt (2009)).
the interstellar medium reveals a complex structure with a high-excitation central region
(made up of multiple two-dimensional structures), asymmetrically surrounded by a
low-excitation halo. the whole region shows a very complex geometry of cavities, expanding
table 1. main characteristics of broad-band and narrow-band filters used in our gemini/niri
observations.
broad bands                                     narrow bands
filter   central λ (μm)   coverage (μm)         filter     central λ (μm)   coverage (μm)
j        1.25             0.97-1.07             paβ        1.282            ∼0.1
h        1.65             1.49-1.78             brγ        2.16             ∼0.1
ks       2.15             1.99-2.30             h2(2-1)    2.24             ∼0.1
shells and filaments, as well as dense molecular regions. all of these structures show
different kinematic behaviour (maíz-apellániz et al. (2004); tenorio-tagle et al. (2000);
sabalisck et al. (1995); relaño & kennicutt (2009)).
aiming to characterize the youngest stellar population and its environment, we performed
near-infrared (nir) photometry (j h k) and analysed narrow-band images in paβ, brγ
and h2(2-1). taking into account that nir observations are less affected by dust extinc-
tion characteristic of star-formation environments, we can take a deep dive into ngc 604
to study those very young objects which are still immersed in their parental clouds at
the sites of current star formation.
2. images and data processing
the images were obtained with the near infrared imager and spectrometer (niri)
at gemini north. the resulting plate scale is 0.117 arcsecpixel−1, with a field of view of
120 × 120 arcsec2. the filters used and their main characteristics are listed in table 1.
the images were taken under excellent seeing conditions, on average ∼0.35" in the
j, h, and ks images. a set of approximately 10 individual exposures was taken in each
band to combine into the final image. data reduction and processing were performed
with specific tasks using the gemini-niri iraf package. images were sky subtracted and
flat fielded and short darks exposures were used to identify bad pixels. stellar magnitudes
were obtained by point-spread-function (psf) fitting in crowded fields using daophot
software (stetson 1987) in iraf. although a standard procedure, psf construction and
fitting involves an iterative and careful process in which several tries were made to get
the best results. the effective area covered by our photometry is ∼107 × 107 arcsec2
(∼430 × 430 pc2 at the distance of m33). the average photometric errors are 0.09,
0.11 and 0.21 mags in the h, j and ks filters, respectively, and the completeness limits
are 22 mags in j and 21 mags in h and ks. magnitudes in the individual filters were
matched in a unique list containing 5566 objects in the field in which all three j, h and
ks magnitudes were measured. astrometry was derived using 35 objects in common in
our field of ngc 604 and the gsc-ii catalog, version 2.3.2 (2006) in the icrs, equinox
j2000.0.
3. results and discussion
the resulting color-magnitude (cm) and color-color (cc) diagrams are shown in fig-
ures 1 and 2, respectively. we have included ∼2000 selected objects, located within a
radius of 48 arcsec (∼200 pc) and centered on ngc 604, meeting a certain photometric
quality level (magnitude error ⩽median(error)+ 1.0 × standard deviation(error)). red
symbols in both plots are objects that lie on the right side of the reddening line for a
o6-o8 v star.for each of these objects, the error in (h-ks) color is smaller than its
distance to the reddening line, so that we can ensure that they undoubtedly show an ir
iaus266. ngc 604: near-infrared gemini/niri imaging
121
-0.75
-0.5
-0.25
0
0.25
0.5
0.75
1
1.25
1.5
1.75
-0.75
-0.5
-0.25
0
0.25
0.5
0.75
1
1.25
1.5
1.75
j-h
h-ks
06-8v
b5v
f5v
k5v
m5v
g0iii
k2iii
m0iii
m7iii
06-8v
b5v
f5v
k5v
m5v
g0iii
k2iii
m0iii
m7iii
ngc 604 field
objects with ir excess
iii
v
av = 10 mag
15
16
17
18
19
20
21
22
-0.75
-0.5
-0.25
0
0.25
0.5
0.75
1
1.25
1.5
ks
h-ks
03v
06v
09.5v
k2iii
m1iii
m5iii
03z
09z
o7z
09i
a5i
f8i
m4i
b3i
ngc 604 field
objects with ir excess
av = 10 mag
ms
zams
i
iii
figure 1. color-color (left) and color-magnitude (right) diagrams for objects observed in a
circular area centered on ngc 604. red squares are objects that show an ir excess.
excess.
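purely as an illustration of this selection criterion (a sketch, not the actual photometric pipeline; the intrinsic o6-8v colors and the reddening-vector slope below are placeholder values that would have to be replaced by the ones adopted in the analysis), the cut can be written as:

import numpy as np

jh0, hk0 = -0.10, -0.05   # assumed intrinsic (j-h), (h-ks) of an o6-8v star
slope    = 1.7            # assumed reddening slope e(j-h)/e(h-ks)

def ir_excess_mask(jh, hk, hk_err):
    # distance (in h-ks) of each source from the o6-8v reddening line,
    # measured at the source's j-h; keep sources to the right of the line
    # whose (h-ks) error is smaller than that distance
    dist = hk - (hk0 + (jh - jh0)/slope)
    return (dist > 0) & (hk_err < dist)

jh  = np.array([0.2, 0.8, 1.2])      # toy catalog
hk  = np.array([0.15, 0.9, 0.5])
err = np.array([0.05, 0.10, 0.05])
print(ir_excess_mask(jh, hk, err))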
among the objects with intrinsic ir excesses in ghrs we expect to find wolf-rayet
stars, early supergiants and of stars, and massive young stellar objects (mysos). many
of these types of objects (or candidates) have already been found in ngc 604. based
on our deep j h k photometry we found a total of 164 objects that show ir excesses.
as can be expected, ∼70% of these lie in a small area near the region's center and
a large proportion are tightly grouped in regions coincident with the radio-continuum
peaks at 8.4 ghz from churchwell & goss (1999), as shown in figure 3 (left panel).
these results are in complete agreement with the analysis of barbá et al. (2009) based
on hst/nicmos nir images, where myso candidates also appeared aligned with the
radio peak structures.
what we found supports the idea mentioned by many authors, that those areas may
be embedded star-forming regions, also taking into account that regions coincident with
the radio knots show conditions of high temperature and density (tosaki et al. (2007)),
and in two of them maíz-apellániz et al. (2004) identified compact hii regions. most
figure 2. (left) three-color (j, h and ks) composite image of ngc 604 with 8.4 ghz radio-continuum
contours overlaid (adapted from churchwell & goss (1999)). red circles are objects
that show ir excess in our photometry. (right) extinction map from brγ/hα; regions with
higher extinction are in red/white.
objects with ir excesses will be the targets for further observing programs (making use
of the integral-field spectroscopic facilities at gemini north) to elucidate their nature
and derive accurate properties by means of spectroscopic study. also, those objects that
are located at the bright, massive end on the cm diagram deserve further observation
and study.
on the basis of our narrow-band images we generated an extinction map, derived from
the observed variations of the brγ to hα ratio (with the hα image taken from bosch et al.
(2002)). the color scale used in the map shows regions with higher extinction in red/white.
the present and future results of our nir study will be placed in the context of previ-
ous studies of ngc 604 to complete the picture of the overall formation and evolution
scenario of this ghr.
acknowledgements
rb acknowledges partial support from the universidad de la serena, project diuls
cd08102.
references
barbá, rodolfo h.; maíz apellániz, jesús; perez, enrique; rubio, monica; bolatto, alberto;
fariña, cecilia; bosch, guillermo; walborn, nolan r. 2009, arxiv:0907.1419
bosch, guillermo; terlevich, elena; terlevich, roberto 2002 mnras 329, 481
bruhweiler, fred c.; miskey, cherie l.; smith neubig, margaret 2003, aj, 125, 3082
churchwell, e. & goss, w. m. 1999, apj, 514, 188
conti, p. s. & massey, p. 1981, apj, 249, 471
diaz, a. i.; terlevich, e.; terlevich, r.; gonzalez-delgado, r. m.; perez, e.; garcia-vargas, m.
l. 1996, asp-cs, 98, 399
d'odorico, s. & rosa, m. 1981, apj, 248, 1015
drissen, laurent; moffat, anthony f. j.; shara, michael m. 1993, aj, 105, 1400
gonzález delgado, r. m. & pérez, e. 2000, mnras, 317, 64
hunter, deidre a.; baum, william a.; o'neil, earl j., jr.; lynds, roger 1996, apj, 456, 174
maíz-apellániz, j.; pérez, e.; mas-hesse, j. m. 2004, aj, 128, 1196
relaño, mónica & kennicutt, robert c. 2009, apj, 699, 1125
sabalisck, nanci s. p.; tenorio-tagle, guillermo; castaneda, hector o.; munoz-tunon, casiana
1995, apj, 444, 200
stetson p. b. 1987, pasp, 99, 191
tenorio-tagle, guillermo; muñoz-tuñón, casiana; pérez, enrique; maíz-apellániz, jesús;
medina-tanco, gustavo 2000, apj, 541, 720
terlevich, e.; díaz, a. i.; terlevich, r.; gonzález-delgado, r. m.; pérez, e.; garcía vargas, m. l.
1996, mnras, 279, 1219
tosaki, t.; miura, r.; sawada, t.; kuno, n.; nakanishi, k.; kohno, k.; okumura, s. k.; kawabe,
r. 2007, apj, 664, 27
|
0911.1733 | the interaction of kelvin waves and the non-locality of the energy
transfer in superfluids | we argue that the physics of interacting kelvin waves (kws) is highly
non-trivial and cannot be understood on the basis of pure dimensional
reasoning. a consistent theory of kw turbulence in superfluids should be based
upon explicit knowledge of their interactions. to achieve this, we present a
detailed calculation and comprehensive analysis of the interaction coefficients
for kw turbulence, thereby, resolving previous mistakes stemming from
unaccounted contributions. as a first application of this analysis, we derive a
new local nonlinear (partial differential) equation. this equation is much
simpler for analysis and numerical simulations of kws than the biot-savart
equation, and in contrast to the completely integrable local induction
approximation (in which the energy exchange between kws is absent), describes
the nonlinear dynamics of kws. secondly, we show that the previously suggested
kozik-svistunov energy spectrum for kws, which has often been used in the
analysis of experimental and numerical data in superfluid turbulence, is
irrelevant, because it is based upon an erroneous assumption of the locality of
the energy transfer through scales. moreover, we demonstrate the weak
non-locality of the inverse cascade spectrum with a constant particle-number
flux and find resulting logarithmic corrections to this spectrum.
| introduction to the problem
to derive an effective kw hamiltonian leading to the
lne (8), we first briefly overview the hamiltonian de-
scription of kws initiated in [1] and further developed
in [4, 13].
the main goal of sec. i b is to start with
the so-called "bare" hamiltonian (5) for the biot-savart
description of kws (2) and obtain expressions for the
frequency eqs. (21), four- and six-kws interaction coeffi-
cients eqs. (22) and (23) and their 1/λ-expansions, which
will be used in further analysis. eqs. (21), (22) and (23)
are starting points for further modification of the kw
description, given in sec. i c, in which we explore the
consequences of the fact that non-trivial four-wave inter-
actions of kws are prohibited by the conservation laws
of energy and momentum:
ω1 + ω2 = ω3 + ω4 ,
k1 + k2 = k3 + k4 ,
(13)
where ωj ≡ω(kj) is the frequency of the kj-wave. only
the trivial processes with k1 = k3, k2 = k4, or k1 = k4,
k2 = k3 are allowed.
it is well known (see, e.g. ref. [14]) that in the case when nonlinear wave processes of the
same kind (e.g. 1 → 2) are forbidden by conservation laws, the terms corresponding to this
kind of processes can be eliminated from the interaction hamiltonian by a weakly nonlinear
canonical transformation. a famous example [14] of this procedure comes from a system of
gravity waves on the water surface, in which three-wave resonances ω1 = ω2 + ω3 are
forbidden. then, by a canonical transformation to a new variable b = a + o(a^2), the old
hamiltonian h(a, a*) is transformed to a new one, h̃(b, b*), where the three-wave (cubic)
interaction coefficient v^{2,3}_1 is eliminated at the expense of the appearance of an
additional contribution to the next-order term (i.e. the four-wave interaction coefficient
fig. 1: examples of contributions of the triple vertices to the four-wave hamiltonian when
three-wave resonances are forbidden (left), and contributions of the quartet vertices to the
six-wave hamiltonian when four-wave resonances are forbidden (right). intermediate virtual
waves are shown by dashed lines.
t^{3,4}_{1,2}) of the type
v^{3,5}_1 (v^{2,5}_4)* / [ω5 + ω3 − ω1] . (14)
one can consider this contribution as a result of the
second-order perturbation approach in the three-wave
processes k1 →k3 + k5 and k5 + k2 →k4, see fig. 1,
left.
the virtual wave k5 oscillates with a forced fre-
quency ω1 −ω3 which is different from its eigenfrequency
ω5.
the inequality ω(k1) −ω(k3) ̸= ω(|k1 −k3|) is a
consequence of the fact that the three-wave processes
ω(k1)−ω(k3) = ω(|k1 −k3|) are forbidden. as a result,
the denominator in eq. (14) is non-zero and the pertur-
bation approach leading to eq. (14) is applicable when
the waves' amplitudes are small.
strictly speaking our problem is different: as we men-
tioned above, not all 2 ↔2 processes (13) are forbidden,
but only the non-trivial ones that lead to energy exchange
between kws. therefore, the use of a weakly nonlinear
canonical transformation (9) (as suggested in [1]) should
be done with extra caution. the transformation (9) is
supposed to eliminate the fourth-order terms from the bse-based interaction hamiltonian at
the price of the appearance of extra contributions to the "full" six-wave interaction
amplitude w̃^{4,5,6}_{1,2,3}, (25), of the following type (see fig. 1, right):
t^{4,7}_{1,2} t^{5,6}_{3,7} / (ω7 + ω4 − ω1 − ω2) ,   ω7 ≡ ω(|k1 + k2 − k4|) . (15)
here all wave vectors are taken on the six-wave resonant
manifold
ω1 + ω2 + ω3 = ω4 + ω5 + ω6 ,
(16)
k1 + k2 + k3 = k4 + k5 + k6 .
the danger is seen from a particular example when
k1 →k4 , k2 →k5 and k3 →k6, so ω7 →ω2, and the de-
nominator of eq. (15) goes to zero, while the numerator
remains finite. this means, that the perturbation con-
tribution (15) diverges and this approach becomes ques-
tionable.
however a detailed analysis of all contributions of the
type (15) performed a-posteriori and presented in sec. i c
demonstrates cancelations of diverging terms with oppo-
site signs such that the resulting "full" six-wave interac-
tion coefficient remains finite, and the perturbation ap-
proach (15) appears to be eligible. the reason for this
cancelation is hidden deep within the symmetry of the
problem, which will not be discussed here.
moreover, finding the "full" hamiltonian is not enough
for formulating the effective model. such a model must
include all contributions in the same o(1/λ), namely the
leading order allowing energy transfer in the k-space.
the "full" hamiltonian still contains un-expanded in
(1/λ) expressions for the kw frequencies. leaving only
the leading (lia) contribution in the kw frequency ω,
as it was done in [1, 2], leads to a serious omission of an
important leading order contribution. indeed, the first
sub-leading contribution to ω shifts the lia resonant
manifold, which upsets the integrability. as a result, the
lia part of f
w 4,5,6
1,2,3 yields a contribution to the effective
model in the leading order. this (previously overlooked)
contribution will be found and analyzed in sec. i d.
b. "bare" hamiltonian dynamics of kws
1. canonical form of the "bare" kw hamiltonian
let us postulate that the motion of a tangle of quan-
tized vortex lines can be described by the bse model (2),
and assume that
λ0 ≡ ln(l/a0) ≫ 1 , (17)
where a0 is the vortex core radius. the bse can be writ-
ten in the hamiltonian form (4) with hamiltonian (5).
without the cut-off, the integral in h
bse, eq. (5), would
be logarithmically divergent with the dominant contribu-
tion given by the leading order expansion of the integrand
in small z1 −z2, that corresponds to h
lia, eq. (6).
as we have already mentioned, lia represents a com-
pletely integrable system and it can be reduced to the
one-dimensional nonlinear schr ̈
odinger (nls) equation
by the hasimoto transformation [12]. however, it is the
complete integrability of lia that makes it insufficient
for describing the energy cascade and which makes it
necessary to consider the next order corrections within
the bse model.
for small-amplitude kws when w′(z) ≪1, we can
expand the hamiltonian (5) in powers of w′ 2, see ap-
pendix a 1:
h = h2 + h4 + h6 + . . . .
(18)
here we omitted the constant term h0 which does not
contribute to the equation of motion eq. (4). assuming
that the boundary conditions are periodical on the length
l (the limit kl ≫1 to be taken later) we can use the
fourier representation
w(z, t) = κ^{-1/2} Σ_k a(k, t) exp(ikz) . (19)
the bold face notation of the one-dimensional wave vec-
tor is used for convenience only. indeed, such a vector
is just a real number, k ∈r, in our case for kws. for
further convenience, we reserve the normal face notation
for the length of the one-dimensional wave vector, i.e.
k = |k| ∈r+. in fourier space, the hamiltonian equa-
tion also takes a canonical form:
i ∂a(k, t)/∂t = δh{a, a*}/δa*(k, t) . (20a)
the new hamiltonian h is the density of the old one:
h{a, a*} = h{w, w*}/l = h2 + h4 + h6 + . . . (20b)
the hamiltonian
h2 = Σ_k ωk ak ak* , (20c)
describes the free propagation of linear kws with the dispersion law ωk ≡ ω(k), given by
eq. (21), and the canonical amplitude ak ≡ a(k, t). the interaction hamiltonians h4 and h6
describe the four-wave processes of 2 ↔ 2 scattering and the six-wave processes of 3 ↔ 3
scattering, respectively. using the short-hand notation aj ≡ a(kj, t), they can be written
as follows:
h4 = (1/4) Σ_{1+2=3+4} t^{3,4}_{1,2} a1 a2 a3* a4* , (20d)
h6 = (1/36) Σ_{1+2+3=4+5+6} w^{4,5,6}_{1,2,3} a1 a2 a3 a4* a5* a6* . (20e)
here t^{3,4}_{1,2} ≡ t(k1, k2|k3, k4) and w^{4,5,6}_{1,2,3} ≡ w(k1, k2, k3|k4, k5, k6) are the
"bare" four- and six-wave interaction coefficients, respectively. summations over k1 . . . k4
in h4 and over k1 . . . k6 in h6 are constrained by k1 + k2 = k3 + k4 and by
k1 + k2 + k3 = k4 + k5 + k6, respectively.
2. λ-expansion of the bare hamiltonian function
as will be seen below, the leading terms in the hamil-
tonian functions ωk, t 3,4
1,2 and w 4,5,6
1,2,3, are proportional to
λ, which correspond to the lia (6). they will be de-
noted further by a front superscript " λ ", e.g. λωk, etc.
because of the complete integrability, there are no dy-
namics in the lia. therefore, the most important terms
for us will be the ones of zeroth order in λ, i.e. the ones
proportional to λ0 = o(1). these will be denoted by a
front superscript " 1 " , e.g. 1ωk, etc.
explicit calculations of the hamiltonian coefficients
must be done very carefully, because even a minor mis-
take in the numerical prefactor can destroy various can-
celations of large terms in the hamiltonian coefficients.
this could change the order of magnitude of the answers
and the character of their dependence on the wave vectors
in the asymptotical regimes. details of these calculations
are presented in appendix a 1, whereas the results are
given below.
together with eq. (10), the kelvin wave frequency is:
ωk = λωk + 1ωk + o(λ^{-1}) , where (21a)
λωk = κ λ k^2/(4π) , (21b)
1ωk = −κ k^2 ln(kl)/(4π) . (21c)
the "bare" 4-wave interaction coefficient is:
t^{3,4}_{1,2} = λt^{3,4}_{1,2} + 1t^{3,4}_{1,2} + o(λ^{-1}) , (22a)
λt^{3,4}_{1,2} = −λ k1 k2 k3 k4/(4π) , (22b)
1t^{3,4}_{1,2} = −[5 k1 k2 k3 k4 + f^{3,4}_{1,2}]/(16π) . (22c)
the function f^{3,4}_{1,2} is symmetric with respect to k1 ↔ k2, k3 ↔ k4 and
{k1, k2} ↔ {k3, k4}; its definition is given in appendix a 2.
the "bare" 6-wave interaction coefficient is
w^{4,5,6}_{1,2,3} = λw^{4,5,6}_{1,2,3} + 1w^{4,5,6}_{1,2,3} + o(λ^{-1}) , (23a)
λw^{4,5,6}_{1,2,3} = (9λ/8πκ) k1 k2 k3 k4 k5 k6 , (23b)
1w^{4,5,6}_{1,2,3} = (9/32πκ) [7 k1 k2 k3 k4 k5 k6 − g^{4,5,6}_{1,2,3}] . (23c)
the function g^{4,5,6}_{1,2,3} is symmetric with respect to k1 ↔ k2 ↔ k3, k4 ↔ k5 ↔ k6 and
{k1, k2, k3} ↔ {k4, k5, k6};
its definition is given in appendix a 3.
note that the full expressions for ωk, t^{3,4}_{1,2} and w^{4,5,6}_{1,2,3} do not contain l but
rather ln(1/a0). this is natural because in the respective expansions l was introduced as an
auxiliary parameter facilitating the calculations, and it does not necessarily have to coincide
with the inter-vortex distance. more precisely, we should have used some effective
intermediate length-scale, l_eff, such that l ≪ l_eff ≪ 2π/k. however, since l_eff is
artificial and would have to drop from the full expressions anyway, we chose to simply write
l, omitting the subscript "eff". cancelation of l is a useful check for verifying the derivations.
c. full "six-kw" hamiltonian dynamics
1. full six-wave interaction hamiltonian h̃6
importantly, the four-wave dynamics in one-dimensional media with concave dispersion laws
ω(k) are absent, because the conservation laws of energy and momentum allow only the
trivial processes with k1 = k3, k2 = k4, or k1 = k4, k2 = k3. this means that by a proper
nonlinear canonical transformation {a, a*} ⇒ {b, b*}, h4 can be eliminated from the
hamiltonian description. this comes at the price of the appearance of additional terms in
the full interaction hamiltonian h̃6:
h{a, a*} ⇒ h̃{b, b*} = h̃2 + h̃4 + h̃6 + . . . , (24a)
h̃2 = Σ_k ωk bk bk* , h̃4 ≡ 0 , (24b)
h̃6 = (1/36) Σ_{1+2+3=4+5+6} w̃^{4,5,6}_{1,2,3} b1 b2 b3 b4* b5* b6* , (24c)
w̃^{4,5,6}_{1,2,3} = w^{4,5,6}_{1,2,3} + q^{4,5,6}_{1,2,3} , (24d)
q^{4,5,6}_{1,2,3} = (1/8) Σ_{i,j,m=1; i≠j≠m}^{3} Σ_{p,q,r=4; p≠q≠r}^{6} q^{p,q,r}_{i,j,m} , (24e)
q^{p,q,r}_{i,j,m} ≡ t^{j,m}_{r, j+m−r} t^{q,p}_{i, p+q−i} / ω^{r, j+m−r}_{j,m}
+ t^{q,r}_{m, q+r−m} t^{i,j}_{p, i+j−p} / ω^{m, q+r−m}_{q,r} ,
ω^{3,4}_{1,2} ≡ ω1 + ω2 − ω3 − ω4 = λω^{3,4}_{1,2} + 1ω^{3,4}_{1,2} + o(λ^{-1}) . (24f)
the q-terms in the full six-wave interaction coefficient w̃^{4,5,6}_{1,2,3} can be understood as
contributions of two four-wave scatterings into the resulting six-wave process, via a virtual
kw with k = kj + km − kr in the first term in q and via a kw with k = kq + kr − km in the
second term; see fig. 1, right.
2. 1/λ-expansion of the full interaction coefficient w̃^{4,5,6}_{1,2,3}
similarly to eq. (23a), we can present w̃^{4,5,6}_{1,2,3} in the 1/λ-expanded form:
w̃^{4,5,6}_{1,2,3} = λw̃^{4,5,6}_{1,2,3} + 1w̃^{4,5,6}_{1,2,3} + o(λ^{-1}) , (25)
due to the complete integrability of the kw system in the lia, even the six-wave dynamics
must be absent in the interaction coefficient λw̃^{4,5,6}_{1,2,3}. this is true if the function
λw̃^{4,5,6}_{1,2,3} vanishes on the lia resonant manifold:
λw̃^{4,5,6}_{1,2,3} δ^{4,5,6}_{1,2,3} δ(λω̃^{4,5,6}_{1,2,3}) ≡ 0 , (26a)
where
δ^{4,5,6}_{1,2,3} = δ(k1 + k2 + k3 − k4 − k5 − k6) , (26b)
λω̃^{4,5,6}_{1,2,3} = (κλ/4π)(k1^2 + k2^2 + k3^2 − k4^2 − k5^2 − k6^2) . (26c)
explicit calculation of λw̃^{4,5,6}_{1,2,3} in appendix b shows that this is indeed the case:
the contributions λw^{4,5,6}_{1,2,3} and λq^{4,5,6}_{1,2,3} in eq. (24d) cancel each other (see
section 1 in appendix b). (such cancelation of complicated expressions was one of the tests
of consistency and correctness of our calculations and our mathematica code). the same is
true for all the higher interaction coefficients: they must be zero within lia.
thus we need to study the first-order correction to the lia, which for the interaction
coefficient can be schematically represented as follows:
1w̃^{4,5,6}_{1,2,3} = 1w^{4,5,6}_{1,2,3} + 1_1q^{4,5,6}_{1,2,3} + 1_2q^{4,5,6}_{1,2,3} + 1_3q^{4,5,6}_{1,2,3} , (27a)
1_1q ∼ (λt ⊗ 1t)/λω , 1_2q ∼ (1t ⊗ λt)/λω , (27b)
1_3q ∼ −1ω (λt ⊗ λt)/[λω]^2 . (27c)
here 1w is the λ^0-order contribution to the bare vertex w, given by eq. (23c); 1q is the
λ^0-order contribution to q, which consists of 1_1q and 1_2q originating from the part 1t in
the four-wave interaction coefficient t, and 1_3q originating from the 1ω corrections to the
frequencies ω in eqs. (24e) and (24f). explicit eqs. (b3) for 1_1q^{4,5,6}_{1,2,3},
1_2q^{4,5,6}_{1,2,3} and 1_3q^{4,5,6}_{1,2,3} are presented in appendix b 2. they are very
lengthy and were analyzed using mathematica, see sec. i d 2.
d. effective six-kw dynamics
1. effective equation of motion
the lia cancelation (26a) on the full manifold (16) is not exact:
λw̃^{3,4,5}_{k,1,2} δ^{3,4,5}_{k,1,2} δ(ω̃^{3,4,5}_{k,1,2}) ≠ 0 ,
ω̃^{3,4,5}_{k,1,2} ≡ ωk + ω1 + ω2 − ω3 − ω4 − ω5 = λω̃^{3,4,5}_{k,1,2} + 1ω̃^{3,4,5}_{k,1,2} + o(λ^{-1}) .
the residual contribution due to 1ω̃^{3,4,5}_{k,1,2} has to be accounted for – an important
fact overlooked in the previous kw literature, including the formulation of the effective kw
dynamics recently presented by ks in [2]. now we are prepared to take another crucial step
on the way to the effective kw model by replacing the frequency ωk by its leading-order
(lia) part (21b) and simultaneously compensating for the respective shift in the resonant
manifold by correcting the effective vertex h̃. this corresponds to the following hamiltonian
equation,
i ∂bk/∂t = λωk bk + (1/12) Σ_{k+1+2=3+4+5} w^{3,4,5}_{k,1,2} b1 b2 b3* b4* b5* , (28)
where the constraint k + k1 + k2 = k3 + k4 + k5 holds. here w^{3,4,5}_{k,1,2} is a corrected
interaction coefficient, which is calculated in appendix b 3:
w^{3,4,5}_{k,1,2} = 1w̃^{3,4,5}_{k,1,2} + 1s̃^{3,4,5}_{k,1,2} , (29a)
1s̃^{4,5,6}_{1,2,3} = (2π/9κ) 1ω̃^{4,5,6}_{1,2,3} Σ_{i∈{1,2,3}, j∈{4,5,6}} (∂j + ∂i) λw̃^{4,5,6}_{1,2,3} / [(kj − ki) λ] , (29b)
where ∂j(·) ≡ ∂(·)/∂kj.
eq. (28) represents a correct effective model and will serve as a basis for our future analysis
of kw dynamics and kinetics. however, to make this equation useful we need to complete the
calculation of the effective interaction coefficient w^{3,4,5}_{k,1,2} and simplify it to a
reasonably tractable form. the key to achieving this is a remarkably simple asymptotical
behavior of w^{3,4,5}_{k,1,2}, which will be demonstrated in the next section. such
asymptotical expressions for w^{3,4,5}_{k,1,2} will allow us to establish nonlocality of the ks
theory, and thereby establish precisely that the asymptotical ranges with widely separated
scales are the most dynamically active, which leads us to the remarkably simple effective
model expressed by the lne (8).
2. analysis of the effective interaction coefficient w^{3,4,5}_{k,1,2}
now we will examine the asymptotical properties of the interaction coefficient, which will be
important for our study of the locality of the kw spectra and the formulation of the lne (8).
the effective six-kw interaction coefficient w^{3,4,5}_{k,1,2} consists of five contributions
given by eqs. (27) and (29). the explicit form of w^{3,4,5}_{k,1,2} involves about 2×10^4
terms. however, its asymptotic expansion in various regimes, analyzed with mathematica,
demonstrates a very clear and physically transparent behavior, which we will study on the
lia resonance manifold
k + k1 + k2 = k3 + k4 + k5 , (30a)
k^2 + k1^2 + k2^2 = k3^2 + k4^2 + k5^2 . (30b)
if the smallest wave vector (say k5) is much smaller than the largest wave vector (say k) we
have a remarkably simple expression:
w^{3,4,5}_{k,1,2} → −(3/4πκ) k k1 k2 k3 k4 k5 , (31)
as min{k, k1, k2, k3, k4, k5}/max{k, k1, k2, k3, k4, k5} → 0 .
we emphasize that in the expression (31), it is enough for the minimal wave number to be
much less than the maximal wave number, and not all of the remaining five wave numbers in
the sextet. this was established using mathematica by taylor expanding w^{3,4,5}_{k,1,2}
with respect to one, two and four wave numbers [20]. all of these expansions give the same
leading term as in (31), see apps. b 4 and b 5.
the form of expression (31) demonstrates a very sim-
ple physical fact: long kws (with small k-vectors) can
contribute to the energy of a vortex line only when they
produce curvature.
the curvature, in turn, is propor-
tional to wave amplitude bk and, at a fixed amplitude,
is inversely proportional to their wave-length, i.e. ∝k.
therefore, in the effective motion equation each bj has
to be accompanied by kj, if kj ≪k. this statement is
exactly reflected by formula (31).
furthermore, a numerical evaluation of w 3,4,5
k,1,2 on a
set of 210 randomly chosen wave numbers, different from
each other at most by a factor of two, indicate that in
the majority of cases its values are close to the asymp-
totical expression (31) (within 40%). therefore, for most
purposes we can approximate the effective six-kw inter-
action coefficient by the simple expression (31). finally,
our analysis of locality seen later in this paper, indicates
that the most important wave sextets are those which
include modes with widely separating wavelengths, i.e.
precisely those described by the asymptotic formula (31).
this leads us to the conclusion that the effective model
for kw turbulence should use the interaction coefficient
(31). returning back to the physical space, we thereby
obtain the desired local nonlinear equation (lne) for
kws given by (8).
as we mentioned in the beginning of the present paper,
lne is very close (isomorphous for small amplitudes) to
the tlia model (7) introduced and simulated in [13]. it
was argued in [13] that the tlia model is a good al-
ternative to the original biot-savart formulation due to
its dramatically greater simplicity. in the present paper
we have found a further support for this model, which
is strengthened by the fact that now it follows from a
detailed asymptotical analysis, rather than being intro-
duced ad hoc.
3. partial contributions to the 6-wave effective interaction coefficient
it would be instructive to demonstrate the relative importance of the different partial
contributions, 1w, 1_1q, 1_2q, 1_3q and 1s̃ [see eqs. (27) and (29)], to the full effective
six-wave interaction coefficient. for this, we consider the simplest case, when four wave
vectors are small, say k1, k2, k3, k5 → 0. we have (see appendix b 5):
1w/w → −1 + (3/2) ln(kl) , (32a)
1_1q/w → +1/2 − (3/2) ln(kl) − (1/6) ln(k3/k) , (32b)
1_2q/w → +1/2 − (3/2) ln(kl) − (1/6) ln(k3/k) , (32c)
1_3q/w → +1 + (3/2) ln(kl) + (1/6) ln(k3/k) , (32d)
1s̃/w → (1/6) ln(k3/k) . (32e)
one sees that eqs. (32) for the partial contributions involve the artificial separation scale l,
which cancels out from 1w̃ = 1w + 1_1q + 1_2q + 1_3q. this is not surprising because the
initial expressions, eqs. (23), do not contain l but rather ln(1/a0). this cancelation serves as
one more independent check of the consistency of the entire procedure.
notice that in the ks paper [1], contributions (32d) and (32e) were mistakenly not accounted
for. therefore the resulting ks expression for the six-wave effective interaction coefficient
depends on the artificial separation scale l. this fact was missed in their numerical
simulations [1]. in their recent paper [2], the lack of contribution (32d) in the previous work
was acknowledged (also in [13]), but the contribution (32e) was still missing.
ii. kinetic description of kw turbulence
a. effective kinetic equation for kws
the statistical description of weakly interacting waves can be reached [14] in terms of the
kinetic equation (ke) shown below for the continuous limit kl ≫ 1,
∂n(k, t)/∂t = st(k, t) , (33a)
for the spectra n(k, t), which are the simultaneous pair correlation functions defined by
⟨b(k, t) b*(k′, t)⟩ = (2π/l) δ(k − k′) n(k, t) , (33b)
where ⟨. . .⟩ stands for proper (ensemble, etc.) averaging. in the classical limit [21], when the
occupation numbers of bose particles N(k, t) ≫ 1, n(k, t) = ħ N(k, t), the collision integral
st(k, t) can be found in various ways [1, 4, 14], including the golden rule of quantum mechanics.
for the 3 ↔3 process of kw scattering, described by
the motion eq. (28):
st_{3↔3}(k) = (π/12) ∫∫∫∫∫ |w^{3,4,5}_{k,1,2}|^2 δ^{3,4,5}_{k,1,2} δ(λω^{3,4,5}_{k,1,2})
× [n_k^{-1} + n_1^{-1} + n_2^{-1} − n_3^{-1} − n_4^{-1} − n_5^{-1}]
× n_k n_1 n_2 n_3 n_4 n_5 dk1 dk2 dk3 dk4 dk5 . (33c)
ke (33) conserves the total number of (quasi)particles n and the total (bare) energy of the
system λe, defined respectively as follows:
n ≡ ∫ n_k dk , λe ≡ ∫ λω_k n_k dk . (34)
ke (33) has a rayleigh-jeans solution,
n_t(k) = t/(ħ λω_k + μ) , (35)
which corresponds to thermodynamic equilibrium of kws with temperature t and chemical
potential μ.
in various wave systems, including the kws described by ke (33a), there also exist flux-equilibrium solutions, $n_E(k)$ and $n_N(k)$, with constant $k$-space fluxes of energy and particles, respectively. the corresponding solution for $n_E(k)$ was suggested in the ks paper [1] under an (unverified) assumption of locality of the $E$-flux. in sec. ii c we will analyze this assumption in the framework of the derived ke (33) and will prove that it is wrong. the $N$-flux solution $n_N(k)$ was discussed in [3]. in sec. ii c we will show that this spectrum is marginally nonlocal, which means that it can be "fixed" by a logarithmic correction.
b. phenomenology of the e- and n-flux equilibrium solutions for kw turbulence
conservation laws (34) for $E$ and $N$ allow one to introduce the continuity equations for $n_k$ and ${}^{\Lambda}E_k \equiv {}^{\Lambda}\omega_k n_k$ and their corresponding fluxes in $k$-space, $\mu_k$ and $\varepsilon_k$:
$$\frac{\partial n_k}{\partial t} + \frac{\partial \mu_k}{\partial k} = 0\,, \qquad \mu_k \equiv -\int_0^k \mathrm{St}_{3\leftrightarrow3}(k')\, dk'\,, \qquad (36a)$$
$$\frac{\partial\,{}^{\Lambda}E_k}{\partial t} + \frac{\partial \varepsilon_k}{\partial k} = 0\,, \qquad \varepsilon_k \equiv -\int_0^k {}^{\Lambda}\omega_{k'}\, \mathrm{St}_{3\leftrightarrow3}(k')\, dk'\,. \qquad (36b)$$
in scale-invariant systems, when the frequency and interaction coefficients are homogeneous functions of the wave vectors, eqs. (36) allow one to guess the scale-invariant flux-equilibrium solutions of ke (33) [14]:
$$n_E(k) = a_E\, k^{-x_E}\,, \qquad n_N(k) = a_N\, k^{-x_N}\,, \qquad (37)$$
where $a_E$ and $a_N$ are some dimensional constants. the scaling exponents $x_N$ and $x_E$ can be found in the case of locality of the $N$- and $E$-fluxes, i.e. when the integrals over $k_1,\dots,k_5$ in eqs. (36) and (33c) converge. in this case the leading contribution to these integrals originates from regions where $k_1\sim k_2\sim k_3\sim k_4\sim k_5\sim k$, and thus the fluxes (36) can be estimated as follows:
$$\mu_k \simeq \frac{k^5\,[W(k,k,k|k,k,k)]^2\, n^5(k)}{\omega_k}\,, \qquad (38a)$$
$$\varepsilon_k \simeq k^5\,[W(k,k,k|k,k,k)]^2\, n^5(k)\,. \qquad (38b)$$
stationarity of the solutions of eqs. (36) requires constancy of the respective fluxes, i.e. $\mu_k$ and $\varepsilon_k$ should be $k$-independent. together with eqs. (38) this allows one to find the scaling exponents in eq. (37).
our formulation (33) of kw kinetics belongs to the scale-invariant class [22]:
$${}^{\Lambda}\omega_k \propto k^2\,, \quad\text{and for}\ \forall\,\eta:\quad W(\eta k,\eta k_1,\eta k_2\,|\,\eta k_3,\eta k_4,\eta k_5) = \eta^6\, W(k,k_1,k_2\,|\,k_3,k_4,k_5)\,.$$
estimating $W(k,k,k|k,k,k) \simeq k^6/\kappa$ and ${}^{\Lambda}\omega_k \simeq \kappa\Lambda k^2$ in eqs. (38), one gets for the $N$-flux spectrum [3]:
$$n_N(k) \simeq \left(\frac{\mu\kappa}{\Lambda}\right)^{1/5} k^{-3}\,, \qquad x_N = 3\,, \qquad (39a)$$
and for the $E$-flux ks spectrum [1]:
$$n_E(k) \simeq \left(\varepsilon\kappa^2\right)^{1/5} k^{-17/5}\,, \qquad x_E = 17/5\,. \qquad (39b)$$
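the power counting behind these exponents is a one-liner to reproduce. a sketch (ours) using sympy, where the $\kappa$ and $\Lambda$ prefactors are dropped since only the exponent of $k$ matters:

```python
import sympy as sp

k, x = sp.symbols('k x', positive=True)
n   = k**(-x)                         # trial spectrum (37)
W2  = (k**6)**2                       # [W(k,...,k)]^2, cf. the estimate W ~ k^6/kappa
mu_k  = sp.powsimp(k**5 * W2 * n**5 / k**2)   # eq. (38a), with omega_k ~ k^2
eps_k = sp.powsimp(k**5 * W2 * n**5)          # eq. (38b)

x_N = sp.solve(sp.Eq(mu_k.as_base_exp()[1], 0), x)[0]   # k-independent N-flux
x_E = sp.solve(sp.Eq(eps_k.as_base_exp()[1], 0), x)[0]  # k-independent E-flux
print(x_N, x_E)                       # 3 and 17/5, as in eqs. (39)
```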
c. non-locality of the n- and e-fluxes by 3 ↔ 3 scattering
consider the $3\leftrightarrow3$ collision term (33c) for kws with the interaction amplitude $W^{4,5,6}_{1,2,3}$. note that in (33c) the $\int dk_j$ are one-dimensional integrals $\int_{-\infty}^{\infty} dk_j$. let us examine the "infrared" (ir) region ($k_5 \ll k, k_1, k_2, k_3, k_4$) in the integral (33c), taking into account the asymptotics (31) and observing that the expression
$$\delta^{3,4,5}_{k,1,2}\,\delta\!\left({}^{\Lambda}\omega^{3,4,5}_{k,1,2}\right)\left(n_k^{-1}+n_1^{-1}+n_2^{-1}-n_3^{-1}-n_4^{-1}-n_5^{-1}\right) n_k n_1 n_2 n_3 n_4 n_5$$
$$\to\ \delta^{3,4}_{k,1,2}\,\delta\!\left({}^{\Lambda}\omega^{3,4}_{k,1,2}\right)\left(n_k^{-1}+n_1^{-1}+n_2^{-1}-n_3^{-1}-n_4^{-1}\right) n_k n_1 n_2 n_3 n_4 n_5\ \sim\ n_5 \sim k_5^{-x}\,.$$
thus the integral over $k_5$ in the ir region can be factorized and written as follows:
$$2\int_0 k_5^2\, n(k_5)\, dk_5\ \propto\ 2\int_{1/l} k_5^{2-x}\, dk_5\,. \qquad (40)$$
the factor 2 here originates from the symmetry of the integration area and the evenness of the integrand: $\int_{-\infty}^{\infty} = 2\int_0^{\infty}$. the lower limit 0 in this expression should be replaced by the smallest wave number at which the assumed scaling behavior (37) holds; moreover, it depends on the particular way the wave system is forced. for example, this cutoff wave number could be $1/l$, where $l$ is the mean inter-vortex separation, at which one expects a cutoff of the wave spectrum. the crucial assumption of locality, under which both the $E$-flux (ks) and the $N$-flux spectra were obtained, implies that the integral (40) is independent of this cutoff in the limit $1/l \to 0$. clearly, integral (40) depends on the ir cutoff if $x \ge 3$, which is the case for both the $E$-flux (ks) and the $N$-flux spectra (39). note that all other integrals over $k_1$, $k_2$, $k_3$ and $k_4$ in (33c) diverge in exactly the same manner as the integral over $k_5$, i.e. each of them leads to expression (40).
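the cutoff dependence is immediate to see symbolically. a small sympy illustration (ours) of the ir behavior of integral (40) for the exponents of interest:

```python
import sympy as sp

q, k, l = sp.symbols('q k l', positive=True)

# marginal case x = 3 (N-flux spectrum): the integral grows like ln(k*l)
print(sp.simplify(sp.integrate(q**(2 - 3), (q, 1/l, k))))
# ks case x = 17/5 (E-flux spectrum): the integral contains a term ~ l**(2/5)
print(sp.simplify(sp.integrate(q**(2 - sp.Rational(17, 5)), (q, 1/l, k))))
# for x < 3 the same integral has a finite limit as the cutoff 1/l is removed
print(sp.limit(sp.integrate(q**(2 - sp.Rational(5, 2)), (q, 1/l, k)), l, sp.oo))
```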
even stronger ir divergence occurs when two wave numbers on the same side of the sextet (e.g. $k_1$ and $k_2$, or $k_3$ and $k_4$, etc.) are small. in this case, the integrations over both of the small wave numbers lead to the same contribution, namely integral (40), i.e. the result is the integral (40) squared. this appears to be the strongest ir singularity, and the resulting behavior of the collision integral (33c) is
$$\mathrm{St}_{\rm ir} \sim \left(\int_{1/l} k_5^{2-x}\, dk_5\right)^2. \qquad (41)$$
when two wave numbers from the opposite sides of the sextet (e.g. $k_2$ and $k_5$) tend to zero simultaneously, we get an extra small factor in the integrand, because in this case $n_k^{-1}+n_1^{-1}-n_3^{-1}-n_4^{-1} \to 0$: the residual $2\leftrightarrow2$ resonance $k+k_1=k_3+k_4$, ${}^{\Lambda}\omega_k+{}^{\Lambda}\omega_1={}^{\Lambda}\omega_3+{}^{\Lambda}\omega_4$ has, in one dimension with ${}^{\Lambda}\omega \propto k^2$, only the trivial solutions $\{k_3,k_4\}=\{k,k_1\}$. as a result we get ir convergence in this range. one can also show ir convergence when two wave numbers from one side and one from the other side of the sextet are small (the resulting integral is ir convergent for $x < 9/2$).
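the triviality of the residual $2\leftrightarrow2$ resonance invoked above is easy to confirm. a quick sympy check (ours):

```python
import sympy as sp

k, k1, k3, k4 = sp.symbols('k k1 k3 k4', real=True)
sols = sp.solve([sp.Eq(k + k1, k3 + k4),                 # wave-number resonance
                 sp.Eq(k**2 + k1**2, k3**2 + k4**2)],    # frequency resonance, omega ~ k^2
                [k3, k4], dict=True)
print(sols)   # [{k3: k, k4: k1}, {k3: k1, k4: k}] -- only the trivial permutations
```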
divergence of the integrals in eq. (33c) means that both spectra (39), with $x_N = 3$ and $x_E = 17/5 > 3$, obtained under the opposite assumption of convergence of these integrals in the limit $l\to\infty$, are not solutions of the $3\leftrightarrow3$ ke (33c) and thus cannot be realized in nature. one should find another self-consistent solution of this ke. note that the proof of divergence at the ir limits is sufficient for discarding the spectra under test, whereas proving convergence would require considering all the singular limits, including the ultraviolet (uv) ranges. however, we have examined these limits too. at the uv end we have obtained convergence for the ks and for the inverse-cascade spectra. thus the most dangerous singularity appears to be in the ir range, when two wave numbers from the same side of the wave sextet are small simultaneously.
d. logarithmic corrections for the n-flux spectrum (39a)
note that for the $N$-flux spectrum (39a), $n_N(k)\propto k^{-3}$, the integrals (40) and (41) diverge only logarithmically. the same situation happens, e.g., for the direct enstrophy cascade in two-dimensional turbulence: dimensional reasoning leads to the kraichnan-1967 [15] turbulent energy spectrum
$$E(k) \propto k^{-3}\,, \qquad (42a)$$
for which the integral for the enstrophy flux diverges logarithmically. using a simple argument of constancy of the enstrophy flux, kraichnan suggested [16] a logarithmic correction to the spectrum,
$$E(k) \propto k^{-3}\,\ln^{-1/3}(kl)\,, \qquad (42b)$$
that makes the enstrophy flux $k$-independent. here $l$ is the enstrophy pumping scale.
using the same arguments, we can substitute into eq. (33c) a logarithmically corrected spectrum $n_N(k) \propto k^{-3}\ln^{-y}(kl)$ and find $y$ from the requirement that the resulting $N$-flux $\mu_k$, eq. (36a), is $k$-independent. having in mind that, according to eq. (38a), $\mu_k \propto n_N^5$, we can guess that $y = 1/5$. then the divergent integral (40) will be $\propto \ln^{4/5}(kl)$, while the remaining convergent integrals in eq. (33c) will be $\propto \ln^{-4/5}(kl)$. therefore the resulting flux $\mu_k$ will be $k$-independent, as it should be [16]. so, our prediction is that instead of the non-local spectrum (39a) we have a slightly steeper log-corrected spectrum
$$n_N(k) \simeq \frac{(\mu\kappa)^{1/5}}{k^3\,\ln^{1/5}(kl)}\,. \qquad (43)$$
the difference is not large, but the underlying physics
must be correct; as one says on the odessa market: "we
can argue the price, but the weight must be correct".
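the bookkeeping behind eq. (43) is easy to check numerically. a small sketch (ours): after the substitution $u = \ln(ql)$, the marginally divergent integral (40) with the log-corrected spectrum becomes $\int_0^{\ln(kl)} u^{-1/5}\,du = \tfrac{5}{4}\ln^{4/5}(kl)$, exactly compensating the $\ln^{-4/5}(kl)$ carried by the four convergent integrations:

```python
import numpy as np
from scipy.integrate import quad

# int_{1/l}^{k} q^2 n_N(q) dq with n_N(q) ~ q^-3 ln^(-1/5)(q*l), written in u = ln(q*l)
for kl in (1.0e2, 1.0e4, 1.0e8):                    # kl = k*l
    val, _ = quad(lambda u: u**-0.2, 0.0, np.log(kl))
    print(kl, val, 1.25*np.log(kl)**0.8)            # quadrature vs (5/4) ln^(4/5)(kl)
```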
conclusions
in this paper we have derived an effective theory of kw turbulence based on asymptotic expansions of the biot-savart model in powers of small $1/\Lambda$ and small nonlinearity, by applying a canonical transformation eliminating the non-resonant low-order (quartic) interactions, and by using the standard wave-turbulence approach based on random phases [14]. in doing so, we have fixed errors arising in the previous derivations, particularly the latest one by ks [2], by taking into account previously omitted and important contributions to the effective six-wave interaction coefficient. we have examined the resulting six-wave interaction coefficient in several asymptotic limits when one or several wave numbers are in the ir range. these limits are summarized in a remarkably simple expression (31). this allowed us to achieve three goals:
• to derive a simple effective model for kw turbulence expressed in the local nonlinear equation (lne) (8). in addition to small $1/\Lambda$ and weak nonlinearity, this model relies on our finding that, for dynamically relevant wave sextets, the interaction coefficient is a simple product of the six wave numbers, eq. (31). for weak nonlinearities, the lne is isomorphic to the previously suggested tlia model [13].
• to examine the locality of the e-flux (ks) and the
n-flux spectra. we found that the ks spectrum is
non-local and therefore cannot be realized in na-
ture.
• to show that the $N$-flux spectrum is marginally non-local and can be "rescued" by a logarithmic correction, which we constructed following a qualitative kraichnan-type approach. however, it remains to be seen whether such a spectrum can be realized in quantum turbulence, because, as was shown in [17], vortex-line reconnections can generate only the forward cascade and not the inverse one (i.e. the reconnections produce an effectively large-scale wave forcing).
finally, let us discuss the numerical studies of kw turbulence. the earliest numerics by ks were reported in [1]. they claimed to observe the ks spectrum; at the same time they gave a value of the $E$-flux constant $\sim 10^{-5}$, which is unusually small. we have already mentioned that this work failed to take into account several important contributions to the effective interaction coefficient, and thus these numerical results cannot be trusted. in particular, we showed that their interaction coefficient must have contained a spurious dependence on the scale $l$, which makes the numerical results arbitrary and dependent on the choice of such a cutoff. in addition, even if the interaction coefficient were correct, the monte-carlo method used by ks is a rather dangerous tool when one deals with slowly divergent integrals (in this case $\int_0 x^{-7/5}\,dx$).
on the other hand, recent numerical simulations of the tlia model also reported agreement with the ks scaling (as well as agreement with the inverse-cascade scaling) [13]. how can one explain this now that we have shown analytically that the ks spectrum is non-local? it turns out that the correct kw spectrum, which takes into account the non-local interactions with long kws, has an index which is close (but not equal) to the ks index, and it is also consistent with the data of [13]. we will report these results in a separate publication.
acknowledgments
we are very grateful to mark vilensky for fruitful dis-
cussions and help. we acknowledge the support of the
u.s. - israel binational science foundation, the sup-
port of the european community – research infrastruc-
tures under the fp7 capacities specific programme, mi-
crokelvin project number 228464.
appendix a: bare interactions
1. actual calculation of the bare interaction coefficients
the geometrical constraint of a small-amplitude perturbation can be expressed in terms of the parameter
$$\epsilon(z_1,z_2) = |w(z_1)-w(z_2)|/|z_1-z_2| \ll 1\,. \qquad (A1)$$
this allows one to expand hamiltonian (5) in powers of $\epsilon$ and to rewrite it as a series of terms classified by the number of interacting waves, according to eq. (18). ks found the exact expressions for $H_2$, $H_4$ and $H_6$ [1]:
$$H_2 = \frac{\kappa}{8\pi}\int\frac{dz_1\,dz_2}{|z_1-z_2|}\left[2\,\mathrm{Re}\!\left(w'^*(z_1)\,w'(z_2)\right) - \epsilon^2\right], \qquad (A2)$$
$$H_4 = \frac{\kappa}{32\pi}\int\frac{dz_1\,dz_2}{|z_1-z_2|}\left[3\epsilon^4 - 4\epsilon^2\,\mathrm{Re}\!\left(w'^*(z_1)\,w'(z_2)\right)\right],$$
$$H_6 = \frac{\kappa}{64\pi}\int\frac{dz_1\,dz_2}{|z_1-z_2|}\left[6\epsilon^4\,\mathrm{Re}\!\left(w'^*(z_1)\,w'(z_2)\right) - 5\epsilon^6\right].$$
the explicit analytical calculation of these integrals was carried out in [13], by evaluating the terms in (a2) in fourier space and then expressing each integral through various cosine expressions [1]. hamiltonian (a2) can be expressed in terms of a wave representation variable $a_k = a(k,t)$ by applying the fourier transform (19) in the variables $z_1$ and $z_2$ (for details see [1, 13]). the result is given by eqs. (20), in which the cosine expressions for $\omega_k$, $T^{34}_{12}$ and $W^{456}_{123}$ were derived in [1]. in our notations they are
$$\omega_k = \frac{\kappa}{2\pi}\,[A-B]\,, \qquad T^{34}_{12} = \frac{1}{4\pi}\,[6D-E]\,, \qquad W^{456}_{123} = \frac{9}{4\pi\kappa}\,[3P-5Q]\,, \qquad (A3)$$
where
$$A = \int_{a_0}^{\infty}\frac{dz_-}{z_-}\,k^2 c_k\,, \qquad B = \int_{a_0}^{\infty}\frac{dz_-}{z_-^3}\left(1-c_k\right),$$
$$D = \int_{a_0}^{\infty}\frac{dz_-}{z_-^5}\left(1 - c_1 - c_2 - c_3 - c_4 + c^3_2 + c^{43} + c^4_2\right),$$
$$E = \int_{a_0}^{\infty}\frac{dz_-}{z_-^3}\Big[\,k_1k_4\left(c_4 + c_1 - c^{43} - c^4_2\right) + k_1k_3\left(c_3 + c_1 - c^{43} - c^3_2\right) + k_3k_2\left(c_3 + c_2 - c^{43} - c^3_1\right) + k_4k_2\left(c_4 + c_2 - c^{43} - c^3_2\right)\Big]\,, \qquad (A4)$$
$$P = \int_{a_0}^{\infty}\frac{dz_-}{z_-^5}\,k_6k_2\Big[c_2 - c^5_2 - c^{23} + c^5_{23} - c^4_2 + c^{45}_2 + c^4_{23} - c^6_1 + c_6 - c^{56} - c^6_3 + c^{56}_3 - c^{46} + c^{456} + c^{46}_3 - c^{12}\Big]\,,$$
$$Q = \int_{a_0}^{\infty}\frac{dz_-}{z_-^7}\Big[1 - c_4 - c_1 + c^4_1 - c_6 + c^{46} + c^6_1 - c^{46}_1 - c_5 + c^{45} + c^5_1 - c^{45}_1 + c^{65} - c^{456} - c^{56}_1 + c^{23} - c_3 + c^4_3 + c^{13} - c^4_{13} + c^6_3 - c^{46}_3 - c^6_{13} + c^5_2 + c^5_3 - c^{45}_3 - c^5_{13} + c^6_2 - c^{56}_3 + c^{12} + c^4_2 - c_2\Big]\,.$$
here the variable $z_- = |z_1 - z_2|$, and the $c$'s are cosine functions such that $c_1 = \cos(k_1 z_-)$, $c^4_1 = \cos((k_4-k_1)z_-)$, $c^{45}_1 = \cos((k_4+k_5-k_1)z_-)$, $c^{45}_{12} = \cos((k_4+k_5-k_1-k_2)z_-)$ and so on (superscript indices enter with a plus sign, subscript indices with a minus sign). the lower limit of integration $a_0$ is the cutoff induced by the vortex core radius, $a_0 < |z_1-z_2|$.
the trick used for the explicit calculation of the analytical form of these integrals was suggested and used in [13]. first one should integrate all the cosine integrals by parts, so that they can be expressed through the form $\int_{a_0}^{\infty}\frac{\cos z}{z}\,dz$. then one can use a cosine identity for this integral [19],
$$\int_{a_0}^{\infty}\frac{\cos z}{z}\,dz = -\gamma - \ln(a_0) - \int_0^{a_0}\frac{\cos z - 1}{z}\,dz = -\gamma - \ln(a_0) - \sum_{k=1}^{\infty}\frac{(-a_0^2)^k}{2k\,(2k)!} = -\gamma - \ln(|a_0|) + O(a_0^2)\,, \qquad (A5)$$
where $\gamma = 0.5772\dots$ is the euler constant. therefore, in the limit of a small vortex core radius $a_0$, we can neglect terms of order $a_0^2$ and higher. for example, let us consider the following general cosine expression that can be found in eqs. (a4): $\int_{a_0}^{\infty} z^{-3}\cos(kz)\,dz$, where $k$ stands for a linear combination of wave numbers, e.g. $k = k_1 - k_4$. integration by parts yields the following result for this integral:
$$\int_{a_0}^{\infty}\frac{\cos(kz)}{z^3}\,dz = \left.-\frac{\cos(kz)}{2z^2}\right|_{a_0}^{\infty} + \left.\frac{k\sin(kz)}{2z}\right|_{a_0}^{\infty} - \frac{k^2}{2}\int_{a_0}^{\infty}\frac{\cos(kz)}{z}\,dz = \frac{\cos(ka_0)}{2a_0^2} - \frac{k\sin(ka_0)}{2a_0} - \frac{k^2}{2}\int_{ka_0}^{\infty}\frac{\cos y}{y}\,dy\,.$$
we then expand $\cos(ka_0)$ and $\sin(ka_0)$ in powers of $a_0$ and apply the cosine formula (a5) to the last integral, where in the last step we have also changed the integration variable, $y = kz$. the final expression is then
$$\int_{a_0}^{\infty}\frac{\cos(kz)}{z^3}\,dz = \frac{1}{2a_0^2} - \frac{3k^2}{4} + \frac{k^2}{2}\left[\gamma + \ln(|ka_0|)\right] + O(a_0^2)\,.$$
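this small-$a_0$ formula is straightforward to cross-check numerically. a sketch (ours), using scipy's fourier-integral routine for the oscillatory integral with an infinite upper limit; the values of the wave-number combination and of $a_0$ are purely illustrative:

```python
import numpy as np
from scipy.integrate import quad

gamma = 0.5772156649015329          # euler constant
kk, a0 = 1.7, 0.01                  # illustrative wave-number combination and core radius

# left-hand side: direct quadrature of int_{a0}^{inf} cos(kk*z)/z^3 dz
numeric, _ = quad(lambda z: z**-3, a0, np.inf, weight='cos', wvar=kk)
# right-hand side: the small-a0 asymptotic form derived above
asympt = 1/(2*a0**2) - 3*kk**2/4 + (kk**2/2)*(gamma + np.log(abs(kk*a0)))
print(numeric, asympt, numeric - asympt)   # the difference is of order a0^2
```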
by applying a similar procedure to the other cosine integrals, we find that all terms with negative powers of $a_0$ (which would diverge in the limit $a_0\to0$) actually cancel in the final expression for each interaction coefficient. applying this strategy to all interaction coefficients, we get the following analytical evaluation of the hamiltonian functions [13]:
$$\Lambda_0 = \ln(l/a_0)\,, \qquad \omega_k = \frac{\kappa k^2}{4\pi}\left[\Lambda_0 - \gamma - \frac{3}{2} - \ln(kl)\right], \qquad (A6)$$
$$T^{3,4}_{1,2} = \frac{1}{16\pi}\left[k_1k_2k_3k_4\,(1 + 4\gamma - 4\Lambda_0) - F^{3,4}_{1,2}\right],$$
$$W^{4,5,6}_{1,2,3} = \frac{9}{32\pi\kappa}\left[k_1k_2k_3k_4k_5k_6\,(1 - 4\gamma + 4\Lambda_0) - G^{4,5,6}_{1,2,3}\right].$$
explicit equations for $F^{3,4}_{1,2}$ and $G^{4,5,6}_{1,2,3}$ are given below in appendices a 2 and a 3. in the main text we introduced $\Lambda \equiv \Lambda_0 - \gamma - 3/2$. writing $\Lambda = \ln(l/a)$, we see that $a = a_0\,e^{\gamma+3/2} \simeq 8\,a_0$.
2. bare 4-wave interaction function $F^{3,4}_{1,2}$
a rather cumbersome calculation, outlined above, results in an explicit equation for the 4-wave interaction function $F^{3,4}_{1,2}$ in eqs. (a6) and (22b). the function $F^{3,4}_{1,2}$ is a symmetrized version of $f^{3,4}_{1,2}$: $F^{3,4}_{1,2} \equiv \big\{f^{3,4}_{1,2}\big\}_s$, where the operator $\{\dots\}_s$ stands for the symmetrization $k_1\leftrightarrow k_2$, $k_3\leftrightarrow k_4$ and $\{k_1,k_2\}\leftrightarrow\{k_3,k_4\}$. in its turn, $f^{3,4}_{1,2}$ is defined as follows:
$$f^{3,4}_{1,2} \equiv \sum_{k\in \mathcal K_1} k^4\,\ln(|k|l) + 2\sum_{i,j}\ \sum_{k\in \mathcal K_{ij}} k_ik_j\, k^2\,\ln(|k|l)\,. \qquad (A7a)$$
the $\sum_{i,j}$ denotes a sum of four terms with $(i,j) = (4,1),\,(3,1),\,(3,2),\,(4,2)$, and $k$ is either a single wave number or a linear combination of wave numbers belonging to one of the following sets:
$$\mathcal K_1 = \big\{ -[^1],\,-[^2],\,-[^3],\,-[^4],\,+[^3_2],\,+[^{43}],\,+[^4_2] \big\}\,,$$
$$\mathcal K_{41} = \big\{ +[^4],\,+[^1],\,-[^{43}],\,-[^4_2] \big\}\,, \qquad (A7b)$$
$$\mathcal K_{31} = \big\{ +[^3],\,+[^1],\,-[^{43}],\,-[^3_2] \big\}\,,$$
$$\mathcal K_{32} = \big\{ +[^3],\,+[^2],\,-[^{43}],\,-[^3_1] \big\}\,,$$
$$\mathcal K_{42} = \big\{ +[^4],\,+[^2],\,-[^{43}],\,-[^4_1] \big\}\,.$$
here we used the following shorthand notation, with $\alpha,\beta,\gamma = 1,2,3,4$: $[^\alpha] \equiv k_\alpha$, $[_\beta] \equiv -k_\beta$, $[^\alpha_\beta] \equiv k_\alpha - k_\beta$, $[^{\alpha\gamma}] \equiv k_\alpha + k_\gamma$, $[_{\beta\gamma}] \equiv -k_\beta - k_\gamma$, and the $+$ or $-$ signs before $[\dots]$ should be understood as prefactors $+1$ or $-1$ of the corresponding term in the sum. for example:
$k^4\ln(|k|l)$ for $k\in\{-[^1]\}$ is $-k_1^4\,\ln(k_1 l)$;
$k^4\ln(|k|l)$ for $k\in\{+[^4_2]\}$ is $+(k_4-k_2)^4\,\ln(|k_4-k_2|\,l)$;
$k_ik_j\,k^2\ln(|k|l)$ for $i=4$, $j=1$, $k\in\{-[^{43}]\}$ is $-k_4k_1\,(k_4+k_3)^2\,\ln(|k_4+k_3|\,l)$.
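to make the bookkeeping concrete, here is a literal transcription (ours) of eqs. (a7) into code; each set element is stored as a (sign, wave-number combination) pair, and the values of $k_1,\dots,k_4$ and $l$ are purely illustrative:

```python
import numpy as np

k1, k2, k3, k4, l = 0.7, 1.1, 0.5, 1.3, 100.0   # illustrative values

K1  = [(-1, k1), (-1, k2), (-1, k3), (-1, k4),
       (+1, k3 - k2), (+1, k4 + k3), (+1, k4 - k2)]
K41 = [(+1, k4), (+1, k1), (-1, k4 + k3), (-1, k4 - k2)]
K31 = [(+1, k3), (+1, k1), (-1, k4 + k3), (-1, k3 - k2)]
K32 = [(+1, k3), (+1, k2), (-1, k4 + k3), (-1, k3 - k1)]
K42 = [(+1, k4), (+1, k2), (-1, k4 + k3), (-1, k4 - k1)]

term1 = sum(s * k**4 * np.log(abs(k)*l) for s, k in K1)
pairs = [((k4, k1), K41), ((k3, k1), K31), ((k3, k2), K32), ((k4, k2), K42)]
term2 = 2*sum(ki*kj*sum(s * k**2 * np.log(abs(k)*l) for s, k in Kij)
              for (ki, kj), Kij in pairs)
print(term1 + term2)    # f_{1,2}^{3,4} of eq. (A7a) for these wave numbers
```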
3. bare 6-wave interaction function $G^{4,5,6}_{1,2,3}$
the function $G^{4,5,6}_{1,2,3} \equiv \big\{g^{4,5,6}_{1,2,3}\big\}_s$. the operator $\{\dots\}_s$ stands for the symmetrization $k_1\leftrightarrow k_2\leftrightarrow k_3$, $k_4\leftrightarrow k_5\leftrightarrow k_6$ and $\{k_1,k_2,k_3\}\leftrightarrow\{k_4,k_5,k_6\}$, and $g^{4,5,6}_{1,2,3}$ is defined as follows:
$$g^{4,5,6}_{1,2,3} \equiv \sum_{k\in \mathcal K_3} k_6k_2\,k^4\,\ln(|k|l) + \frac{1}{18}\sum_{k\in \mathcal K_4} k^6\,\ln(|k|l)\,, \qquad (A8a)$$
where
$$\mathcal K_3 = \big\{ +[^2],\,-[^5_2],\,-[^{23}],\,+[^5_{23}],\,-[^4_2],\,+[^{45}_2],\,+[^4_{23}],\,-[^6_1],\,+[^6],\,-[^{56}],\,-[^6_3],\,+[^{56}_3],\,-[^{46}],\,+[^{456}],\,+[^{46}_3],\,-[^{12}] \big\}\,, \qquad (A8b)$$
$$\mathcal K_4 = \big\{ -[^4],\,-[^1],\,+[^4_1],\,-[^6],\,+[^{46}],\,+[^6_1],\,-[^{46}_1],\,-[^5],\,+[^{45}],\,+[^5_1],\,-[^{45}_1],\,+[^{65}],\,-[^{456}],\,-[^{56}_1],\,+[^{23}],\,-[^3],\,+[^4_3],\,+[^{13}],\,-[^4_{13}],\,+[^6_3],\,-[^{46}_3],\,-[^6_{13}],\,+[^5_2],\,+[^5_3],\,-[^{45}_3],\,-[^5_{13}],\,+[^6_2],\,-[^{65}_3],\,+[^{12}],\,+[^4_2],\,-[^2] \big\}\,. \qquad (A8c)$$
appendix b: effective six-kw interaction coefficient
1. absence of 6-wave dynamics in lia
according to eqs. (24d) and (24e), the expression for ${}^{\Lambda}\widetilde W^{4,5,6}_{1,2,3}$ is given by
$${}^{\Lambda}\widetilde W^{4,5,6}_{1,2,3} = {}^{\Lambda}W^{4,5,6}_{1,2,3} + {}^{\Lambda}Q^{4,5,6}_{1,2,3}\,, \qquad (B1a)$$
$${}^{\Lambda}Q^{4,5,6}_{1,2,3} = \frac{1}{8}\sum_{\substack{i,j,m=1 \\ i\neq j\neq m}}^{3}\ \sum_{\substack{p,q,r=4 \\ p\neq q\neq r}}^{6} {}^{\Lambda}Q^{\,p,q,r}_{\,i,j,m}\,, \qquad (B1b)$$
$${}^{\Lambda}Q^{\,p,q,r}_{\,i,j,m} \equiv \frac{{}^{\Lambda}T^{\,j,m}_{\,r,\,j+m-r}\ {}^{\Lambda}T^{\,q,p}_{\,i,\,p+q-i}}{{}^{\Lambda}\omega^{\,r,\,j+m-r}_{\,j,m}} + \frac{{}^{\Lambda}T^{\,q,r}_{\,m,\,q+r-m}\ {}^{\Lambda}T^{\,i,j}_{\,p,\,i+j-p}}{{}^{\Lambda}\omega^{\,m,\,q+r-m}_{\,q,r}}\,, \qquad (B1c)$$
where ${}^{\Lambda}\omega^{3,4}_{1,2} \equiv {}^{\Lambda}\omega_1 + {}^{\Lambda}\omega_2 - {}^{\Lambda}\omega_3 - {}^{\Lambda}\omega_4$.
we want to compute this expression on the lia manifold (30). to do this, we express two wave numbers in terms of the other four [23] using the lia manifold constraint (30):
$$k_1 = \frac{(k_3-k)(k_2-k_3)}{k+k_2-k_3-k_5} + k_5\,, \qquad (B2a)$$
$$k_4 = \frac{(k_3-k)(k_2-k_3)}{k+k_2-k_3-k_5} + k + k_2 - k_3\,. \qquad (B2b)$$
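that this parametrization indeed satisfies both lia resonance conditions (wave-number conservation and, since ${}^{\Lambda}\omega \propto k^2$, conservation of the squared wave numbers) is quickly confirmed with sympy. a small check (ours):

```python
import sympy as sp

k, k2, k3, k5 = sp.symbols('k k2 k3 k5', real=True)
common = (k3 - k)*(k2 - k3)/(k + k2 - k3 - k5)
k1 = common + k5                    # eq. (B2a)
k4 = common + k + k2 - k3           # eq. (B2b)

print(sp.simplify(k + k1 + k2 - (k3 + k4 + k5)))                    # -> 0
print(sp.simplify(k**2 + k1**2 + k2**2 - (k3**2 + k4**2 + k5**2)))  # -> 0
```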
then ${}^{\Lambda}\widetilde W^{4,5,6}_{1,2,3}$ is easily simplified to zero with the help of mathematica. this gives an independent verification of the validity of our initial eqs. (24) for the full interaction coefficient, which is needed for the calculation of the $O(1)$ contribution $\Delta\widetilde W^{4,5,6}_{1,2,3}$. another way to see the cancellation is to use the zakharov–schulman variables [18] that parametrize the lia manifold (30).
2. exact expression for $\Delta\widetilde W$
we get the expressions for $\Delta_1Q$, $\Delta_2Q$ and $\Delta_3Q$, introduced in eqs. (27), from eqs. (24d) and (24e). namely:
$$\Delta_1Q^{4,5,6}_{1,2,3} = \frac{1}{8}\sum_{\substack{i,j,m=1 \\ i\neq j\neq m}}^{3}\ \sum_{\substack{p,q,r=4 \\ p\neq q\neq r}}^{6} \left[ \frac{{}^{\Lambda}T^{\,j,m}_{\,r,\,j+m-r}\ \Delta T^{\,q,p}_{\,i,\,p+q-i}}{{}^{\Lambda}\omega^{\,r,\,j+m-r}_{\,j,m}} + \frac{{}^{\Lambda}T^{\,q,r}_{\,m,\,q+r-m}\ \Delta T^{\,i,j}_{\,p,\,i+j-p}}{{}^{\Lambda}\omega^{\,m,\,q+r-m}_{\,q,r}} \right], \qquad (B3a)$$
$$\Delta_2Q^{4,5,6}_{1,2,3} = \frac{1}{8}\sum_{\substack{i,j,m=1 \\ i\neq j\neq m}}^{3}\ \sum_{\substack{p,q,r=4 \\ p\neq q\neq r}}^{6} \left[ \frac{\Delta T^{\,j,m}_{\,r,\,j+m-r}\ {}^{\Lambda}T^{\,q,p}_{\,i,\,p+q-i}}{{}^{\Lambda}\omega^{\,r,\,j+m-r}_{\,j,m}} + \frac{\Delta T^{\,q,r}_{\,m,\,q+r-m}\ {}^{\Lambda}T^{\,i,j}_{\,p,\,i+j-p}}{{}^{\Lambda}\omega^{\,m,\,q+r-m}_{\,q,r}} \right], \qquad (B3b)$$
$$\Delta_3Q^{4,5,6}_{1,2,3} = \frac{1}{8}\sum_{\substack{i,j,m=1 \\ i\neq j\neq m}}^{3}\ \sum_{\substack{p,q,r=4 \\ p\neq q\neq r}}^{6} \left[ \frac{{}^{\Lambda}T^{\,j,m}_{\,r,\,j+m-r}\ {}^{\Lambda}T^{\,q,p}_{\,i,\,p+q-i}}{\left({}^{\Lambda}\omega^{\,r,\,j+m-r}_{\,j,m}\right)^2}\,\Delta\omega^{\,r,\,j+m-r}_{\,j,m} + \frac{{}^{\Lambda}T^{\,q,r}_{\,m,\,q+r-m}\ {}^{\Lambda}T^{\,i,j}_{\,p,\,i+j-p}}{\left({}^{\Lambda}\omega^{\,m,\,q+r-m}_{\,q,r}\right)^2}\,\Delta\omega^{\,m,\,q+r-m}_{\,q,r} \right]. \qquad (B3c)$$
again using mathematica, we substitute eqs. (b2) into eqs. (b3a)–(b3c). clearly, the resulting expressions are too cumbersome to be presented here, but we analyze them in various limiting cases below.
3. derivation of eq. (29b) for $\Delta\widetilde S^{3,4,5}_{k,1,2}$
first of all, let us find a parametrization of the full resonant manifold by calculating the correction to the lia parametrization (b2), namely
$$k_1 = {}^{\Lambda}k_1 + \Delta k_1\,, \qquad k_4 = {}^{\Lambda}k_4 + \Delta k_4\,, \qquad (B4)$$
where ${}^{\Lambda}k_1$ and ${}^{\Lambda}k_4$ are given by the right-hand sides of eqs. (b2), respectively. the corrections $\Delta k_1$ and $\Delta k_4$ are found so that the resonances in $k$, eq. (30a), and in the (full) $\omega$ are satisfied. the resonance in $k$ fixes $\Delta k_1 = \Delta k_4$. then the $\omega$-resonance in the leading order in $1/\Lambda$ gives
$$\widetilde\omega^{4,5,6}_{1,2,3} = \Delta k_1\,\frac{\partial\,{}^{\Lambda}\omega_1}{\partial k_1} - \Delta k_4\,\frac{\partial\,{}^{\Lambda}\omega_4}{\partial k_4} + \Delta\widetilde\omega^{4,5,6}_{1,2,3} + O(\Lambda^{-1}) = 0\,. \qquad (B5)$$
thus
$$\Delta k_1 = \Delta k_4 \approx \frac{2\pi}{\Lambda\kappa}\,\frac{\Delta\widetilde\omega^{4,5,6}_{1,2,3}}{k_4 - k_1}\,. \qquad (B6)$$
this allows us to write down the contribution of ${}^{\Lambda}\widetilde W$ arising from the deviation from the lia resonant surface:
$$\Delta\widetilde S^{4,5,6}_{1,2,3} = \Delta k_1\,\frac{\partial\,{}^{\Lambda}\widetilde W^{4,5,6}_{1,2,3}}{\partial k_1} + \Delta k_4\,\frac{\partial\,{}^{\Lambda}\widetilde W^{4,5,6}_{1,2,3}}{\partial k_4} + O(\Lambda^{-1}) \approx \frac{2\pi}{\Lambda\kappa}\,\Delta\widetilde\omega^{4,5,6}_{1,2,3}\,\frac{(\partial_4 + \partial_1)\,{}^{\Lambda}\widetilde W^{4,5,6}_{1,2,3}}{k_4 - k_1}\,, \qquad (B7)$$
with $\partial_j(\ast) = \partial(\ast)/\partial k_j$. obviously, instead of $k_1$ and $k_4$ we could use parametrizations in terms of other pairs $k_i$ and $k_j$ with $i = 1, 2$ or $3$ and $j = 4, 5$ or $6$. this enables us to write a fully symmetric expression for $\Delta\widetilde S$:
$$\Delta\widetilde S^{4,5,6}_{1,2,3} = \frac{2\pi}{9\Lambda\kappa}\,\Delta\widetilde\omega^{4,5,6}_{1,2,3} \sum_{\substack{i\in\{1,2,3\} \\ j\in\{4,5,6\}}} \frac{(\partial_j + \partial_i)\,{}^{\Lambda}\widetilde W^{4,5,6}_{1,2,3}}{k_j - k_i}\,. \qquad (B8)$$
this is the required expression, eq. (29b).
4. analytical expression for $W$ on the lia manifold when two wave numbers are small
let us put together the contributions to the interaction coefficient $W^{3,4,5}_{k,1,2}$ given in (27a), (27b), (27c), (23c) and (29), and use in these expressions the formulae obtained in the previous appendices together with the parametrization of the lia surface (b2). using mathematica and taylor expanding $W^{3,4,5}_{k,1,2}$ with respect to one wave number, e.g. $k_5$, we obtain a remarkably simple result, expression (31).
now we consider the asymptotic limit when two of the wave numbers, say $k_2$ and $k_5$ (let them be on opposite sides of the resonance conditions), are much smaller than the other wave numbers in the sextet. using mathematica and taylor expanding $W^{3,4,5}_{k,1,2}$ with respect to the two wave numbers $k_2$ and $k_5$, we have
$$\lim_{\substack{k_2\to0 \\ k_5\to0}} W^{3,4,5}_{k,1,2} = -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5\,; \qquad (B9)$$
simultaneously, see eq. (b2),
$$\lim_{\substack{k_2\to0 \\ k_5\to0}} k_1 \to k_3\,, \qquad \lim_{\substack{k_2\to0 \\ k_5\to0}} k_4 \to k\,. \qquad (B10)$$
therefore, (b9) coincides with (31). note that this was not obvious a priori, because formally (31) was obtained when $k_5$ is much smaller than the rest of the wave numbers, including $k_2$.
for reference, we provide expressions for the different contributions to the interaction coefficient $W^{3,4,5}_{k,1,2}$ given in eqs. (27) and (29). for $k_2, k_5 \to 0$:
$$\Delta W \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left\{ \frac{3}{2}\ln(kl) - \frac{1}{24}\left[ 49 - \frac{(1-x)^2(7+10x+7x^2)}{x^2}\ln|1-x| + 2x(12+7x)\ln|x| - \frac{7(1+x)^4}{x^2}\ln|1+x| \right] \right\}, \qquad (B11a)$$
$$\Delta_1Q = \Delta_2Q \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left\{ -\frac{3}{2}\ln(kl) + \frac{1}{48}\left[ 59 - \frac{(1-x)^2(9+10x+9x^2)}{x^2}\ln|1-x| + 2\left(9x^2+14x-6+\frac{2}{1-x}\right)\ln|x| - \frac{9(1+x)^4}{x^2}\ln|1+x| \right] \right\}, \qquad (B11b)$$
$$\Delta_3Q \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left\{ \frac{3}{2}\ln(kl) + \frac{1}{48}\left[ 7 + \frac{(1-x)^2(1+x^2)}{x^2}\ln|1-x| + \frac{2\left(1-5x+x^3\right)}{1-x}\ln|x| + \frac{(1+x)^4}{x^2}\ln|1+x| \right] \right\}, \qquad (B11c)$$
$$\Delta\widetilde S \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left[ \frac{1}{6}\,\frac{1+x}{1-x}\,\ln|x| \right], \qquad (B11d)$$
$$\Delta\widetilde W \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left[ 1 - \frac{1}{6}\,\frac{1+x}{1-x}\,\ln|x| \right], \qquad x \equiv k_3/k\,. \qquad (B11e)$$
another possibility is for two small wave numbers to be
on the same side of the sextet. we have checked that on
the resonant manifold, this also leads to (31).
5. analytical expression for $W$ on the lia manifold when four wave numbers are small
now let us, using mathematica, calculate the asymptotic behavior of $W$ when four wave numbers are smaller than the other two; on the lia manifold this automatically reduces to $k_1, k_2, k_3, k_5 \ll k, k_4$ (recall that on the lia manifold $k_1$ and $k_4$ are expressed in terms of the other wave numbers via eq. (b2), so this follows from (b10)). we have
$$\lim_{k_{1,2,3,5}\to0} W^{3,4,5}_{k,1,2} = -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5\,. \qquad (B12)$$
again we have obtained an expression which coincides with (31). we emphasize that this was not obvious a priori, because formally (31) was obtained when $k_5$ is much smaller than the rest of the wave numbers, including $k_1, k_2, k_3$. we therefore conclude that expression (31) is valid when $k_5$ is much smaller than just one other wave number in the sextet, say $k$, and not only when it is much smaller than all of the remaining wave numbers.
for reference, we give the term-by-term results for the limit $k_1, k_2, k_3, k_5 \ll k, k_4$:
$$\Delta W \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left[\, -1 + \frac{3}{2}\ln(kl) + 0 \,\right],$$
$$\Delta_1Q \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left[\, +\frac{1}{2} - \frac{3}{2}\ln(kl) - \frac{1}{6}\ln\frac{k_3}{k} \,\right],$$
$$\Delta_2Q \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left[\, +\frac{1}{2} - \frac{3}{2}\ln(kl) - \frac{1}{6}\ln\frac{k_3}{k} \,\right],$$
$$\Delta_3Q \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left[\, +1 + \frac{3}{2}\ln(kl) + \frac{1}{6}\ln\frac{k_3}{k} \,\right],$$
$$\Delta\widetilde S \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5 \left[\, 0 + 0 + \frac{1}{6}\ln\frac{k_3}{k} \,\right].$$
the sum of these contributions is very simple:
$$\Delta\widetilde W \to -\frac{3}{4\pi\kappa}\,k^2k_2^2k_3^2k_5\,\left[\, +1 + 0 + 0 \,\right].$$
[1] e. kozik and b. svistunov, phys. rev. lett. 92, 035301 (2004), doi: 10.1103/physrevlett.92.035301.
[2] e. kozik and b. svistunov, journal of low temp. phys. 156, 215-267 (2009), doi: 10.1007/s10909-009-9914-y.
[3] s. nazarenko, jetp letters 83, 198-200 (2006), doi: 10.1134/s0021364006050031.
[4] v. s. l'vov, s. v. nazarenko and o. rudenko, phys. rev. b 76, 024520 (2007), doi: 10.1103/physrevb.76.024520.
[5] v. s. l'vov, s. v. nazarenko and o. rudenko, journal of low temp. phys. 153, 140-161 (2008).
[6] e. v. kozik and b. v. svistunov, phys. rev. b 77, 060502(r) (2008).
[7] r. j. donnelly, quantized vortices in he ii (cambridge university press, cambridge, 1991).
[8] quantized vortex dynamics and superfluid turbulence, ed. by c. f. barenghi et al., lecture notes in physics 571 (springer-verlag, berlin, 2001).
[9] k. w. schwarz, phys. rev. b 31, 5782 (1985), doi: 10.1103/physrevb.31.5782, and 38, 2398 (1988), doi: 10.1103/physrevb.38.2398.
[10] b. v. svistunov, phys. rev. b 52, 3647 (1995), doi: 10.1103/physrevb.52.3647.
[11] r. j. arms and f. r. hama, phys. fluids 8, 553 (1965).
[12] h. hasimoto, journal of fluid mech. 51, 477-485 (1972), doi: 10.1017/s0022112072002307.
[13] g. boffetta, a. celani, d. dezzani, j. laurie and s. nazarenko, journal of low temp. phys. 156, 193-214 (2009), doi: 10.1007/s10909-009-9895-x.
[14] v. e. zakharov, v. s. l'vov and g. e. falkovich, kolmogorov spectra of turbulence (springer-verlag, 1992).
[15] r. kraichnan, phys. fluids 10, 1417 (1967), doi: 10.1063/1.1762301.
[16] r. kraichnan, j. fluid mech. 47, 525 (1971), doi: 10.1017/s0022112071001216.
[17] s. v. nazarenko, jetp letters 84, 585-587 (2007), doi: 10.1134/s0021364006230032.
[18] v. e. zakharov and e. i. schulman, physica d: nonlinear phenomena 4, 270-274 (1982), doi: 10.1016/0167-2789(82)90068-9.
[19] i. gradstein and i. ryzhik, table of integrals, series, and products (academic press, new york, 1980).
[20] the limit of three small wave numbers is not allowed by the resonance conditions. indeed, putting three wave numbers to zero, we get a $1\leftrightarrow2$ process, which is not allowed in 1d for ${}^{\Lambda}\omega \sim k^2$.
[21] here we invoke a quantum-mechanical analogy as an elegant shortcut, allowing us to introduce the ke and the respective solutions easily. however, the reader should not be confused by this analogy: our kw system is purely classical. in particular, planck's constant $\hbar$ is irrelevant outside of this analogy and should simply be replaced by 1.
[22] this is evident for the approximation eq. (31). for the full expression eq. (29a) it was confirmed by symbolic computation with the help of mathematica.
[23] it is appropriate to remind the reader that we use the boldface notation of a one-dimensional wave vector for convenience only. indeed, such a vector is just a real number, $k\in\mathbb R$.
|