HILBERT SPACES WITH GENERIC PREDICATES
ALEXANDER BERENSTEIN, TAPANI HYTTINEN, AND ANDRÉS VILLAVECES
Abstract. We study the model theory of expansions of Hilbert spaces by generic
predicates. We first prove the existence of model companions for generic ex-
pansions of Hilbert spaces, first in the form of a distance function to a random
substructure, then as a distance to a random subset. The theory obtained with the
random substructure is ω-stable, while the one obtained with the distance to a
random subset has TP2 and is NSOP1. This example is the first continuous struc-
ture in that class.
1. Introduction
This paper deals with Hilbert spaces expanded with random predicates in the
framework of continuous logic as developed in [2]. The model theory of Hilbert
spaces is very well understood, see [2, Chapter 15] or [5]. However, we briefly
review some of its properties at the end of this section.
In this paper we build several new expansions of Hilbert spaces by various kinds
of random predicates (a random substructure, and the distance to a random subset),
and study them within the framework of continuous logic. While our
constructions are not exactly metric Fraïssé classes (they fail the hereditary property),
some of them are indeed amalgamation classes, and we study the model theory of their
limits.
Several papers deal with generic expansions of Hilbert spaces. Ben Yaacov,
Usvyatsov and Zadka [3] studied the expansion of a Hilbert space with a generic
This work was partially supported by Colciencias grant Métodos de Estabilidad en Clases No Esta-
bles. The second and third authors were also sponsored by Catalonia's Centre de Recerca Matemàtica
(Intensive Research Program in Strong Logics) and by the University of Helsinki in 2016 for part
of this work. The third author was also partially sponsored by Colciencias (Proy. 1101-05-13605
CT-210-2003).
http://arxiv.org/abs/0704.1633v2
automorphism. The models of this theory are expansions of Hilbert spaces with a
unitary map whose spectrum is S1. A model of this theory can be constructed by
amalgamating the collection of n-dimensional Hilbert spaces with a uni-
tary map whose eigenvalues are the n-th roots of unity, as n ranges over the positive
integers. More work on generic automorphisms can be found in [4], where the
first author of this paper studied Hilbert spaces expanded with a random group
of automorphisms G.
There are also several papers about expansions of Hilbert spaces with random
subspaces. In [5] the first author and Buechler identified the saturated models of
the theory of beautiful pairs of a Hilbert space. An analysis of lovely pairs (the
generalization of beautiful pairs (belles paires) to simple theories) in the setting of
compact abstract theories is carried out in [1]. In the (very short) second section of
this paper we build the beautiful pairs of Hilbert spaces as the model companion
of the theory of Hilbert spaces with an orthonormal projection. We provide an
axiomatization for this class and we show that the projection operator into the
subspace is interdefinable with a predicate for the distance to the subspace. We
also prove that the theory of beautiful pairs of Hilbert spaces is ω-stable. Many
of the properties of beautiful pairs of Hilbert spaces are known from the literature
or folklore, so this section is mostly a compilation of results.
In the third section we add a predicate for the distance to a random subset. This
construction was inspired by the idea of finding an analogue to the first order
generic predicates studied by Chatzidakis and Pillay in [7]. The axiomatization
we found for the model companion was inspired by the ideas of [7], together with
the following observation: in Hilbert spaces there is a definable function that
measures the distance between a point and a model. We prove that the theory
of Hilbert spaces with a generic predicate is unstable. We also study a natural
notion of independence in a monster model of this theory and prove some of its
properties. Several natural independence notions have various good properties,
but the theory fails to be simple and even fails to be NTP2.
1.1. Model theory of Hilbert spaces (quick review).
1.1.1. Hilbert spaces. We follow [2] in our study of the model theory of a real
Hilbert space and its expansions. We assume the reader is familiar with the basic
concepts of continuous logic as presented in [2]. A Hilbert space H can be seen as
a multi-sorted structure (Bn(H), 0, +, 〈, 〉, {λr : r ∈ R})0<n<ω, where Bn(H) is the
ball of radius n, + stands for addition of vectors (defined from Bn(H) × Bn(H) into
B2n(H)), 〈, 〉 : Bn(H) × Bn(H) → [−n2, n2] is the inner product, 0 is a constant
for the zero vector, and λr : Bn(H) → Bn⌈|r|⌉(H) is multiplication by r ∈ R.
We denote by L the language of Hilbert spaces and by T the theory of Hilbert
spaces.
By a universal domain H of T we mean a Hilbert space H which is κ-saturated
and κ-strongly homogeneous with respect to types in the language L, where κ is
a cardinal larger than 2ℵ0. Constructing such a structure is straightforward: just
consider a Hilbert space with an orthonormal basis of cardinality at least κ.
We will assume that the reader is familiar with the metric versions of definable
closure and non-dividing. The reader can check [2, 5] for the definitions.
1.1. Notation. Let dcl stand for the definable closure and acl stand for the alge-
braic closure in the language L.
1.2. Fact. Let A ⊂ H be small. Then dcl(A) = acl(A) = the smallest Hilbert
subspace of H containing A.
Proof. See Lemma 3 in [5, p. 80] �
Recall a characterization of non-dividing in pure Hilbert spaces (that will be
useful in the more sophisticated constructions in forthcoming sections):
1.3. Proposition. Let B, C ⊂ H be small, let (a1, . . . , an) ∈ Hn and assume that
C = dcl(C), so C is a Hilbert subspace of H. Denote by PC the projection onto C. Then
tp(a1, . . . , an/C ∪ B) does not divide over C if and only if for all i ≤ n and all
b ∈ B, ai − PC(ai) ⊥ b − PC(b).
Proof. Proved as Corollary 2 and Lemma 8 of [5, pp. 81–82]. �
For A,B,C ⊂ H small, we say that A is independent from B over C if for all
n ≥ 1 and ā ∈ An, tp(ā/C ∪ B) does not divide over C.
Under non-dividing independence, types over sets are stationary. In particular,
the independence theorem holds over sets, and we may refer to this property as
3-existence. It is also important to point out that non-dividing is trivial, that is,
for all sets B,C and tuples (a1, . . . , an) from H, tp(a1, . . . , an/C ∪ B) does not
divide over C if and only if tp(ai/B ∪ C) does not divide over C for i ≤ n.
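The orthogonality criterion of Proposition 1.3 is easy to check numerically in a finite-dimensional toy case. The following sketch is our own illustration (the helper names are not from the paper): we take C = span{e1} in R³ and test whether the residuals a − PC(a) and b − PC(b) are orthogonal.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def proj_C(v):
    # Orthogonal projection onto C = span{e1}: keep the first coordinate.
    return [v[0], 0.0, 0.0]

def independent_over_C(a, b):
    # Proposition 1.3: tp(a/C u {b}) does not divide over C iff
    # a - P_C(a) is orthogonal to b - P_C(b).
    ra = [x - y for x, y in zip(a, proj_C(a))]
    rb = [x - y for x, y in zip(b, proj_C(b))]
    return abs(dot(ra, rb)) < 1e-12

a = [1.0, 1.0, 0.0]        # a = e1 + e2
print(independent_over_C(a, [1.0, 0.0, 1.0]))  # b = e1 + e3: residuals e2 ⊥ e3 → True
print(independent_over_C(a, [1.0, 1.0, 0.0]))  # b = e1 + e2: same residual → False
```

Triviality of non-dividing is also visible here: the criterion is checked coordinate-by-coordinate, one pair (ai, b) at a time.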
2. Random subspaces and beautiful pairs
We now deal with the easiest situation: a Hilbert space with an orthonormal
projection operator onto a subspace. Let LP = L ∪ {P}, where P is a new unary
function symbol, and we consider structures of the form (H, P), where P : H → H is a
projection onto a subspace. Note that P : Bn(H) → Bn(H) and that P is determined
by its action on B1(H). Recall that projections are bounded linear operators,
characterized by two properties:
(1) P2 = P
(2) P∗ = P
The second condition means that for any u, v ∈ H, 〈P(u), v〉 = 〈u, P(v)〉.
A projection also satisfies, for any u, v ∈ H, ‖P(u) − P(v)‖ ≤ ‖u − v‖. In
particular, it is a uniformly continuous map and its modulus of uniform continuity
is ∆P(ǫ) = ǫ.
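As a sanity check, the two defining properties and the 1-Lipschitz bound can be verified numerically for a concrete finite-dimensional projection. This is our own illustration, not part of the paper's argument; P below is the coordinate projection onto span{e1, e2} inside R³.

```python
import random

# P is the coordinate projection onto span{e1, e2} inside R^3.
P = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(v):
    return dot(v, v) ** 0.5

random.seed(0)
for _ in range(100):
    u = [random.uniform(-1, 1) for _ in range(3)]
    v = [random.uniform(-1, 1) for _ in range(3)]
    assert apply(P, apply(P, u)) == apply(P, u)                    # (1) P^2 = P
    assert abs(dot(apply(P, u), v) - dot(u, apply(P, v))) < 1e-12  # (2) P* = P
    Pu_Pv = [x - y for x, y in zip(apply(P, u), apply(P, v))]
    u_v = [x - y for x, y in zip(u, v)]
    assert norm(Pu_Pv) <= norm(u_v) + 1e-12                        # 1-Lipschitz
print("ok")
```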
We start by showing that the class of Hilbert spaces with projections has the
free amalgamation property:
2.1. Lemma. Let (H0, P0) ⊂ (Hi, Pi) for i = 1, 2 be (possibly
finite dimensional) Hilbert spaces with projections, with H1 |⌣H0 H2.
Then H3 = span{H1, H2} is a
Hilbert space and P3(v3) = P1(v1) + P2(v2) is a well defined projection, where
v3 = v1 + v2 with v1 ∈ H1, v2 ∈ H2.
Proof. It is clear that H3 = span{H1 ∪ H2} is a Hilbert space containing H1 and
H2. It remains to prove that P3 is well defined and is a projection map. We
denote by Q0, Q1, Q2 the projections onto the spaces H0, H1 and H2 respectively.
Since H0 ⊂ H1, we can write H1 = H0 ⊕ (H1 ∩ H⊥0). Similarly H2 = H0 ⊕
(H2 ∩ H⊥0). Finally, since H1 |⌣H0 H2,
H3 = H0 ⊕ (H1 ∩ H⊥0) ⊕ (H2 ∩ H⊥0).
Let v3 ∈ H3. Let u0 = Q0(v3), u1 = PH⊥0∩H1(v3) = Q1(v3) − u0, and
u2 = PH⊥0∩H2(v3) = Q2(v3) − u0. Then v3 = u0 + u1 + u2.
As H1 ∩ H2 = H0, we can write v3 in many different ways as a sum of elements
in H1 and H2. Given one such expression, v3 = v1 + v2, with v1 ∈ H1 and v2 ∈ H2,
it is easy to see that P1(v1) + P2(v2) = P0(u0) + P1(u1) + P2(u2), which proves
that P3 is well defined. �
Let TP be the theory of Hilbert spaces with a projection. It is axiomatized by
the theory of Hilbert spaces together with the axioms (1) and (2) that say that P
is a projection.
Consider first the finite dimensional models. Given an n-dimensional Hilbert
space Hn, there are, up to isomorphism, only n + 1 pairs (Hn, P) with P a
projection. They are classified by the dimension of P(Hn), which ranges
from 0 to n.
In order to characterize the existentially closed models of TP, note the following
facts:
(1) Let (H, P) be existentially closed, and let (Hn, Pn) be an n-dimensional Hilbert
space with an orthonormal projection such that Pn(Hn) =
Hn. Then (H, P) ⊕ (Hn, Pn) is an extension of (H, P) with dim([P ⊕
Pn](H ⊕ Hn)) ≥ n. Since n can be chosen as big as we want and (H, P)
is existentially closed, dim(P(H)) = ∞.
(2) Let (H, P) be existentially closed, and let (Hn, P0) be an n-dimensional Hilbert
space with an orthonormal projection such that P0(Hn) = {0}. Then
(H, P) ⊕ (Hn, P0) is an extension of (H, P) such that dim(([P ⊕ P0](H ⊕
Hn))⊥) ≥ n. Since n can be chosen as big as we want and (H, P) is
existentially closed, dim(P(H)⊥) = ∞.
The theory TPω extending TP, stating that P is a projection and that there
are infinitely many pairwise orthonormal vectors v satisfying P(v) = v and
also infinitely many pairwise orthonormal vectors u satisfying P(u) = 0, gives
an axiomatization for the model companion of TP, which corresponds to the
theory of beautiful pairs of Hilbert spaces. We will now study some properties of
this theory.
Let (H, P) |= TPω and for any v ∈ H let dP(v) = ‖v − P(v)‖. Then dP(v)
measures the distance between v and the subspace P(H). The distance function
dP(x) is definable in (H, P). We will now prove the converse, that is, that we can
definably recover P from dP.
2.2. Lemma. Let (H, P) |= TPω. For v ∈ H let dP(v) = ‖v − P(v)‖. Then
P(v) ∈ dcl(v) in the structure (H, dP).
Proof. Note that P(v) is the unique element x of P(H) satisfying ‖v − x‖ = dP(v).
Thus P(v) is the unique realization of the condition ϕ(x) = max{dP(x), |‖v − x‖ −
dP(v)|} = 0. �
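The uniqueness argument in the proof can be illustrated in finite dimensions. In the sketch below (our own toy example, with hypothetical helper names), the subspace is span{e1} ⊂ R³, dP is the distance to it, and P(v) is the only point of the subspace at distance exactly dP(v) from v.

```python
def d_P(v):
    # Distance from v to the subspace span{e1} in R^3.
    return (v[1] ** 2 + v[2] ** 2) ** 0.5

def norm_diff(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

v = [2.0, 3.0, 4.0]
Pv = [2.0, 0.0, 0.0]          # the projection of v onto span{e1}

# P(v) lies in the subspace and realizes the distance:
assert d_P(Pv) == 0.0
assert norm_diff(v, Pv) == d_P(v)

# Every other point of the subspace is strictly farther from v:
for t in [-1.0, 0.0, 1.0, 1.9, 2.1, 5.0]:
    assert norm_diff(v, [t, 0.0, 0.0]) > d_P(v)
print("ok")
```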
2.3. Proposition. Let (H, P) |= TPω. For v ∈ H let dP(v) = ‖v − P(v)‖.
Then the projection function P(x) is definable in the structure (H, dP).
Proof. Let (H, P) |= TPω be κ-saturated for some κ > ℵ0 and let dP(v) = ‖v − P(v)‖.
Since dP is definable in the structure (H, P), the new structure (H, dP) is still
κ-saturated. Let GP be the graph of the function P. By the previous lemma
GP is type-definable in (H, dP), and thus by [2, Proposition 9.24] P is definable in
the structure (H, dP). �
2.4. Notation. We write tp for L-types, tpP for LP-types and qftpP for quantifier-
free LP-types. We write aclP for the algebraic closure in the language LP. We
follow a similar convention for dclP.
2.5. Lemma. TPω has quantifier elimination.
Proof. It suffices to show that quantifier-free LP-types determine the LP-types.
Let (H, P) |= TPω and let ā = (a1, . . . , an), b̄ = (b1, . . . , bn) ∈ Hn. Assume that
qftpP(ā) = qftpP(b̄). Then
tp(P(a1), . . . , P(an)) = tp(P(b1), . . . , P(bn)),
tp(a1 − P(a1), . . . , an − P(an)) = tp(b1 − P(b1), . . . , bn − P(bn)).
Let H0 = P(H) and let H1 = H⊥0; both are then infinite dimensional Hilbert
spaces and H = H0 ⊕ H1. Let f0 ∈ Aut(H0) satisfy f0(P(a1), . . . , P(an)) =
(P(b1), . . . , P(bn)) and let f1 ∈ Aut(H1) be such that f1(a1 − P(a1), . . . , an −
P(an)) = (b1 − P(b1), . . . , bn − P(bn)). Let f be the automorphism of H induced
by f0 and f1, that is, f = f0 ⊕ f1. Then f ∈ Aut(H, P) and f(a1, . . . , an) =
(b1, . . . , bn), so tpP(ā) = tpP(b̄). �
Characterization of types: By the previous lemma, the LP-type of an n-tuple
ā = (a1, . . . , an) inside a structure (H, P) |= TPω is determined by the type of its
projections, tp(P(a1), . . . , P(an), a1 − P(a1), . . . , an − P(an)). In particular, we
may regard (H, P) as the direct sum of the two independent pure Hilbert spaces
(P(H), +, 0, 〈, 〉) and (P(H)⊥, +, 0, 〈, 〉).
We may therefore characterize definable and algebraic closure, as follows.
2.6. Proposition. Let (H, P) |= TPω and let A ⊂ H. Then dclP(A) = aclP(A) =
dcl(A ∪ P(A)).
We leave the proof to the reader. Another consequence of the description of
types is:
2.7. Proposition. The theory TPω is ω-stable.
Proof. Let (H, P) |= TPω be separable and let A ⊂ H be countable. Replacing
(H, P) by (H, P) ⊕ (H, P) if necessary, we may assume that
P(H) ∩ dclP(A)⊥ is infinite dimensional and that P(H)⊥ ∩ dclP(A)⊥ is infinite
dimensional. Thus every LP-type over A is realized in the structure (H, P) and
(S1(A), d) is separable. �
2.8. Proposition. Let (H, P) |= TPω be a κ-saturated domain and let A,B,C ⊂ H
be small. Then tpP(A/B ∪C) does not fork over C if and only if tp(A ∪ P(A)/B ∪
P(B) ∪ C ∪ P(C)) does not fork over C ∪ P(C).
Again the proof is straightforward.
3. Continuous random predicates
We now come to our main theory and to our first set of results. We study the
expansion of a Hilbert space with a distance function to a subset of H. Let dN
be a new unary predicate and let LN be the language of Hilbert spaces together
with dN. We denote the LN structures by (H, dN), where dN : H → [0, 1] and we
want to consider the structures where dN is a distance to a subset ofH. Instead of
measuring the actual distance to the subset, we truncate the distance at one. We
start by characterizing the functions dN corresponding to distances.
3.1. The basic theory T0. We denote by T0 the theory of Hilbert spaces together
with the next two axioms (compare with Theorem 9.11 in [2]):
(1) supx min{1 −· dN(x), infy max{|dN(x) − ‖x − y‖|, dN(y)}} = 0
(2) supx supy [dN(y) − ‖x − y‖ − dN(x)] ≤ 0
We say a point x is black if dN(x) = 0 and white if dN(x) = 1. All other points
are gray: darker if dN(x) is close to zero and whiter if dN(x) is close to one. This
terminology follows [10]. From the second axiom we get that dN is uniformly
continuous (with modulus of uniform continuity ∆(ǫ) = ǫ). Thus we can apply
the tools of continuous model theory to analyze these structures.
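A minimal concrete model of the intended interpretation, ignoring saturation, is dN(x) = min{1, dist(x, N)} for a finite black set N. The sketch below is our own illustration (the black points are arbitrary) and spot-checks the Lipschitz bound coming from axiom (2).

```python
import itertools, random

N = [(0.0, 0.0), (1.0, 0.5)]      # an arbitrary finite set of black points in R^2

def dist(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def d_N(x):
    # The predicate: distance to the black set, truncated at 1.
    return min(1.0, min(dist(x, p) for p in N))

random.seed(1)
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(40)]
for x, y in itertools.product(pts, pts):
    # Axiom (2): d_N is 1-Lipschitz, d_N(y) <= ||x - y|| + d_N(x).
    assert d_N(y) <= dist(x, y) + d_N(x) + 1e-12
print("ok")
```

Note that the truncation at 1 preserves the Lipschitz bound, which is why axiom (2) survives it.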
3.1. Lemma. Let (H, dN) |= T0 be ℵ0-saturated and let N = {x ∈ H : dN(x) = 0}.
Then for any x ∈ H, dN(x) = dist(x, N).
Proof. Let v ∈ H and let w ∈ N. Then by the second axiom dN(v) ≤ ‖v − w‖,
and thus dN(v) ≤ dist(v, N).
Now let v ∈ H. If dN(v) = 1 there is nothing to prove, so we may assume
that dN(v) < 1. Consider the set of statements p(x) given by dN(x) = 0,
‖x − v‖ = dN(v).
Claim: the type p(x) is approximately satisfiable.
Let ε > 0. We want to show that there is a realization of the statements
dN(x) ≤ ε, dN(v) ≤ ‖x − v‖ + ε. By the first axiom there is w such that
dN(w) ≤ ε and dN(v) ≤ ‖v − w‖ + ε.
Since (H, dN) is ℵ0-saturated, there is w ∈ N such that ‖v − w‖ = dN(v), as
we wanted. �
There are several ages that need to be considered. We fix r ∈ [0, 1) and we
consider the class Kr of all models of T0 such that dN(0) = r. Note that in all
finite dimensional spaces in Kr there is at least one point v with dN(v) = 0.
3.2. Notation. If (Hi, diN) |= T0 for i ∈ {0, 1}, we write (H0, d0N) ⊂ (H1, d1N) if
H0 ⊂ H1 and d0N = d1N ↾H0 (for each sort).
We will work in Kr. We start with constructing free amalgamations:
3.3. Lemma. Let (H0, d0N) ⊂ (Hi, diN) for i = 1, 2, with H1 |⌣H0 H2, be Hilbert
spaces with distance functions, all of them in Kr. Let H3 = span{H1, H2} and let

d3N(v) = min{ √(d1N(PH1(v))2 + ‖PH2∩H⊥0(v)‖2), √(d2N(PH2(v))2 + ‖PH1∩H⊥0(v)‖2) }.

Then (Hi, diN) ⊂ (H3, d3N) for i = 1, 2, and (H3, d3N) ∈ Kr.
Proof. For arbitrary v ∈ H1, √(d1N(PH1(v))2 + ‖PH2∩H⊥0(v)‖2) = d1N(v). Since
(H0, d0N) ⊂ (Hi, diN) we also have

√(d2N(PH2(v))2 + ‖PH1∩H⊥0(v)‖2) ≥ √(d0N(PH0(v))2 + ‖PH1∩H⊥0(v)‖2) ≥ d1N(v).

Similarly, for any v ∈ H2, d3N(v) = d2N(v).
Therefore (H3, d3N) ⊃ (Hi, diN) for i ∈ {1, 2}. Now we have to prove that the
function d3N that we defined is indeed a distance function.
Geometrically, d3N(v) takes the minimum of the distances of v to the selected
black subsets of H1 and H2. That is, the random subset of the amalgamation of
(H1, d1N) and (H2, d2N) is the union of the two random subsets. It is easy to check
that (H3, d3N) |= T0. Since each of (H1, d1N), (H2, d2N) belongs to Kr, we have
d1N(0) = r = d2N(0) and thus d3N(0) = r. �
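The geometric content of the amalgamation formula can be checked numerically in a three-dimensional toy case. This is our own construction, with the truncation of the metric at 1 ignored for simplicity: taking H0 = span{e0}, H1 = span{e0, e1} and H2 = span{e0, e2} inside R³, the formula computes exactly the distance to the union of the two black sets.

```python
import random

N1 = [(0.0, 0.0), (0.5, 0.4)]   # black points of (H1, d1), coordinates (e0, e1)
N2 = [(0.0, 0.0), (0.3, 0.6)]   # black points of (H2, d2), coordinates (e0, e2)

def dist2(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d1(p):
    return min(dist2(p, q) for q in N1)

def d2(p):
    return min(dist2(p, q) for q in N2)

def d3(v):
    # The amalgamation formula for v = (x0, x1, x2): P_{H1}(v) = (x0, x1),
    # and the leftover component |x2| lies in H2 ∩ H0-perp (and symmetrically).
    via_H1 = (d1((v[0], v[1])) ** 2 + v[2] ** 2) ** 0.5
    via_H2 = (d2((v[0], v[2])) ** 2 + v[1] ** 2) ** 0.5
    return min(via_H1, via_H2)

def dist_to_union(v):
    # Direct computation: distance to N1 ∪ N2 embedded in R^3.
    pts = [(q[0], q[1], 0.0) for q in N1] + [(q[0], 0.0, q[1]) for q in N2]
    return min(sum((a - b) ** 2 for a, b in zip(v, p)) ** 0.5 for p in pts)

random.seed(2)
for _ in range(200):
    v = tuple(random.uniform(-1, 1) for _ in range(3))
    assert abs(d3(v) - dist_to_union(v)) < 1e-9
print("ok")
```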
The class K0 also has the JEP: let (H1, d1N), (H2, d2N) belong to K0 and assume
that H1 ⊥ H2. Let N1 = {v ∈ H1 : d1N(v) = 0} and let N2 = {v ∈ H2 :
d2N(v) = 0}. Let H3 = span(H1 ∪ H2), let N3 = N1 ∪ N2 ⊂ H3 and, finally,
let d3N(v) = dist(v, N3). Then (H3, d3N) witnesses the JEP in K0.
3.4. Lemma. There is a model (H, dN) |= T0 such that H is a 2n-dimensional
Hilbert space and there are orthonormal vectors v1, . . . , vn, u1, . . . , un ∈ H
such that dN((ui + vj)/2) = √2/2 for i ≤ j, dN(0) = 0 and dN((ui + vj)/2) = 0
for i > j.
Proof. Let H be a Hilbert space of dimension 2n, and fix an orthonormal basis
〈v1, . . . , vn, u1, . . . , un〉 for H. Let N = {(ui + vj)/2 : i > j} ∪ {0} and let
dN(x) = dist(x, N). Then dN(0) = 0 and dN((ui + vj)/2) = 0 for i > j. Since
‖(ui + vj)/2 − (uk + vj)/2‖ = √2/2 for i ≠ k and ‖(ui + vj)/2 − 0‖ = √2/2,
we get that dN((ui + vj)/2) = √2/2 for i ≤ j. �
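The computation in the proof can be replayed numerically for n = 2 (our own sketch; the coordinates and helper names are ours):

```python
import math

def e(k):
    v = [0.0] * 4
    v[k] = 1.0
    return v

v1, v2, u1, u2 = e(0), e(1), e(2), e(3)   # an orthonormal basis of R^4

def mid(u, v):
    return [(x + y) / 2 for x, y in zip(u, v)]

def dist(p, q):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

N = [mid(u2, v1), [0.0] * 4]   # for n = 2, "i > j" means exactly (i, j) = (2, 1)

def d_N(x):
    return min(dist(x, p) for p in N)

for u, v in [(u1, v1), (u1, v2), (u2, v2)]:        # the pairs with i <= j
    assert abs(d_N(mid(u, v)) - math.sqrt(2) / 2) < 1e-12
assert d_N(mid(u2, v1)) == 0.0
print("ok")
```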
A similar construction can be made in order to get the lemma with dN(0) = r
for any r ∈ [0, 1]. In particular, if we fix an infinite cardinal κ and we amalga-
mate all possible pairs (H, dN) in Kr with dim(H) ≤ κ, the theory of the resulting
structure will be unstable.
3.2. The model companion.
3.2.1. Basic notations. We now provide the axioms of the model companion of
T0 ∪ {dN(0) = 0}.
Call Td0 the theory of the structure built by amalgamating all separable
Hilbert spaces together with a distance function belonging to the age K0. Infor-
mally speaking, Td0 = Th(lim→(K0)). We show how to axiomatize Td0.
The idea for the axiomatization of this part (unlike our third example, in the next
section) follows the lines of Theorem 2.4 of [7]. There are, however, important
differences in the behavior of algebraic closures and independence, due to the
metric character of our examples.
Let (M, dN) in K0 be an existentially closed structure and take some extension
(M1, dN) ⊃ (M, dN). Let x̄ = (x1, . . . , xn+k) be elements in M1 \ M and let
z1, . . . , zn+k be their projections on M. Assume that for i ≤ n there are ȳ =
(y1, . . . , yn) in M1 \ M that satisfy dN(xi) = ‖xi − yi‖ and dN(yi) = 0. Also
assume that for i > n, the witnesses for the distances to the black points belong
to M, that is, d2N(xi) = ‖xi − zi‖2 + d2N(zi) for i > n. Also, let us assume that
all points in x̄, ȳ live in a ball of radius L around the origin. Let ū = (u1, . . . , un)
be the projection of ȳ = (y1, . . . , yn) on M.
Let ϕ(x̄, ȳ, z̄, ū) be a formula such that ϕ(x̄, ȳ, z̄, ū) = 0 describes the values
of the inner products between all the elements of the tuples, that is, it de-
termines the (Hilbert space) geometric locus of the tuple (x̄, ȳ, z̄, ū). The state-
ment ϕ(x̄, ȳ, z̄, ū) = 0 expresses the position of the potentially new points x̄, ȳ
with respect to their projections into a model. Since dN(xi) = ‖xi − yi‖ and
dN(yj) = 0, we have ‖xi − yj‖ ≥ ‖xi − yi‖ for i, j ≤ n. Also, for i > n,
d2N(xi) = ‖xi − zi‖2 + d2N(zi), and we get ‖xi − yj‖2 ≥ ‖xi − zi‖2 + d2N(zi) for
j ≤ n.
Note that, as (M1, dN) ⊃ (M, dN), for all z ∈ M, d2N(z) ≤ ‖z − yi‖2 =
‖z − ui‖2 + ‖yi − ui‖2 for i ≤ n. We may also assume that there is a positive
real ηϕ such that ‖xi − zi‖ ≥ ηϕ for i ≤ n + k and ‖yi − ui‖ ≥ ηϕ for i ≤ n.
3.2.2. An informal description of the axioms. We want to express that, for any
parameters z̄, ū in the structure,
if we can find realizations x̄, ȳ of ϕ(x̄, ȳ, z̄, ū) = 0 such that for all w and i ≤ n,
d2N(w) ≤ ‖w − ui‖2 + ‖ui − yi‖2, ‖xi − yi‖2 ≤ ‖xi − zi‖2 + d2N(zi) for i ≤ n,
and ‖xi − yj‖2 ≥ ‖xi − zj‖2 + d2N(zj) for i > n and j ≤ n,
then there are tuples x̄′, ȳ′ such that ϕ(x̄′, ȳ′, z̄, ū) = 0, dN(y′i) = 0, dN(x′i) =
‖x′i − y′i‖ for i ≤ n and d2N(x′j) = ‖x′j − zj‖2 + d2N(zj) for j > n.
That is, for any z̄, ū in the structure, if we can find realizations x̄, ȳ of the
Hilbert space locus given by ϕ, and we prescribe "distances" dN that do not clash
with the dN information we already had, in such a way that for i ≤ n the yi's
are black and witness the distance to the black set for the xi's, and for
i > n the xi's do not require new witnesses, then we can actually find arbitrarily
close realizations with the prescribed distances.
The only problem with this idea is that we do not have an implication in contin-
uous logic. We replace the expression "p → q" by a sequence of approximations
indexed by ε.
3.2.3. The axioms of TN.
3.5. Notation. Let z̄, ū be tuples in M and let x ∈ M1. By Pspan(z̄ū)(x) we mean
the projection of x onto the space spanned by (z̄, ū).
For fixed ε ∈ (0, 1), let f : [0, 1] → [0, 1] be a continuous function such that
whenever ϕ(t̄) < f(ε) and ϕ(t̄′) = 0, then:
(a): ‖Pspan(z̄ū)(xi) − zi‖ < ε.
(b): ‖Pspan(z̄ū)(yi) − ui‖ < ε.
(c): |‖ti − tj‖ − ‖t′i − t′j‖| < ε, where t̄ is the concatenation of x̄, ȳ, z̄, ū.
Choosing ε small enough, we may assume that:
(d): ‖xi − Pspan(z̄ū)(xi)‖ ≥ ηϕ/2 for i ≤ n + k.
(e): ‖yi − Pspan(z̄ū)(yi)‖ ≥ ηϕ/2 for i ≤ n.
Let δ = 2√(ε(L + 2)) and consider the following axiom ψϕ,ε (which we write
as a positive bounded formula for clarity), where the quantifiers range over the ball
of radius L + 1:
∀z̄ ∀ū [
∀x̄ ∀ȳ ( ϕ(x̄, ȳ, z̄, ū) ≥ f(ε) ∨ ∃w ⋁i≤n (d2N(w) ≥ ‖w − ui‖2 + ‖yi − ui‖2 + ε2) ∨
⋁i>n,j≤n (‖xi − yj‖2 ≤ ‖xi − zi‖2 + d2N(zi) + ε2) ∨
⋁i,j≤n,j≠i (‖xi − yj‖ ≤ ‖xi − yi‖ − ε) ∨ ⋁i≤n (‖xi − zi‖2 + d2N(zi) ≤ ‖xi − yi‖2 − ε2) )
∨ ∃x̄ ∃ȳ ( ϕ(x̄, ȳ, z̄, ū) ≤ f(ε) ∧ ⋀i≤n (dN(yi) ≤ δ) ∧
⋀i≤n (|dN(xi) − ‖xi − yi‖| ≤ 2δ) ∧ ⋀i>n (|d2N(xi) − ‖xi − zi‖2 − d2N(zi)| ≤ 4δL) ) ]
Let TN be the theory T0 together with this scheme of axioms ψϕ,ε, indexed by all
Hilbert space geometric locus formulas ϕ(x̄, ȳ, z̄, ū) = 0 and ε ∈ (0, 1) ∩ Q. The
radius L of the ball that contains all elements, as well as n and k, are determined
by the configuration of points described by the formula ϕ(x̄, ȳ, z̄, ū) = 0.
3.2.4. Existentially closed models of T0.
3.6. Theorem. Assume that (M, dN) |= T0 is existentially closed. Then (M, dN) |= TN.
Proof. Fix ε > 0 and ϕ as above. Let z̄ ∈ Mn+k, ū ∈ Mn and assume that there
are x̄, ȳ with ϕ(x̄, ȳ, z̄, ū) < f(ε) and d2N(w) < ‖w − ui‖2 + ‖yi − ui‖2 + ε2 for all
w ∈ M, ‖xi − yi‖2 < ‖xi − zi‖2 + d2N(zi) + ε2 for i ≤ n, ‖xi − yj‖ > ‖xi − yi‖ − ε
for i, j ≤ n, i ≠ j, and ‖xi − yj‖2 > ‖xi − zi‖2 + d2N(zi) + ε2 for i > n, j ≤ n. Let
ε′ < ε be such that ϕ(x̄, ȳ, z̄, ū) < f(ε′) and
(f): d2N(w) < ‖w − ui‖2 + ‖yi − ui‖2 + ε′2 for all w ∈ M.
(g1): ‖xi − yi‖2 < ‖xi − zi‖2 + d2N(zi) + ε′2 for i ≤ n.
(g2): ‖xi − yj‖ > ‖xi − yi‖ − ε′ for i, j ≤ n, i ≠ j.
(h): ‖xi − yj‖2 ≥ ‖xi − zi‖2 + d2N(zi) + ε′2 for i > n, j ≤ n.
We construct an extension (H, dN) ⊃ (M, dN) in which the conclusion of the
axiom indexed by ε′ holds. Since (M, dN) is existentially closed and the conclu-
sion of the axiom is true in (H, dN) with ε′ < ε in place of ε, the conclusion
of the axiom indexed by ε will hold in (M, dN).
So let H ⊃ M be such that dim(H ∩ M⊥) = ∞. Let a1, . . . , an+k and
c1, . . . , cn ∈ H be such that tp(ā, c̄/z̄ū) = tp(x̄, ȳ/z̄ū) and āc̄ |⌣z̄ū M. We
can write ai = a′i + z′i and ci = c′i + u′i, where z′i = Pspan(z̄ū)(ai),
u′i = Pspan(z̄ū)(ci), and a′i, c′i ∈ M⊥.
By (d) and (e), ‖a′i‖ ≥ ηϕ/2 for i ≤ n + k and ‖c′i‖ ≥ ηϕ/2 for i ≤ n. Now let
ĉi = c′i + u′i + δ′c′i/‖c′i‖, where δ′ = 2√(ε′(L + 2)).
Let the black points in H be the ones from M plus the points ĉ1, . . . , ĉn. Now
we check that the conclusion of the axiom ψϕ,ε′ holds.
(1) ϕ(ā, c̄, z̄, ū) ≤ f(ε′), since tp(ā, c̄/z̄ū) = tp(x̄, ȳ/z̄ū).
(2) Since ‖ci − ĉi‖ ≤ δ′ and ĉi is black, we have dN(ci) ≤ δ′.
(3) We check that the distance from ai to the black set is as prescribed for
i ≤ n: dN(ai) ≤ ‖ai − ĉi‖ ≤ ‖ai − ci‖ + δ′ for i ≤ n.
Also, for i ≠ j, i, j ≤ n, using (g2) we prove ‖ai − ĉj‖ ≥ ‖ai − cj‖ − δ′ ≥
‖ai − ci‖ − ε′ − δ′ ≥ ‖ai − ci‖ − 2δ′. Finally, by (a), ‖ai − PM(ai)‖2 +
d2N(PM(ai)) ≥ (‖ai − zi‖ − ε′)2 + (dN(zi) − ε′)2 ≥ ‖ai − zi‖2 − 2Lε′ +
ε′2 + d2N(zi) − 2ε′ + ε′2, and by (g1) we get ‖ai − zi‖2 − 2Lε′ + ε′2 +
d2N(zi) − 2ε′ + ε′2 ≥ ‖ai − ci‖2 − 2Lε′ − 2ε′ ≥ ‖ai − ci‖2 − 4δ′2.
(4) We check that dN(aj) is as desired for j > n. Clearly ‖aj − ĉi‖ ≥ ‖aj −
ci‖ − δ′, so ‖aj − ĉi‖2 ≥ ‖aj − ci‖2 + δ′2 − 4δ′L, and by (h) we get
‖aj − ci‖2 + δ′2 − 4δ′L ≥ ‖aj − zj‖2 + d2N(zj) − 4δ′L − ε′2 + δ′2 ≥
‖aj − zj‖2 + d2N(zj) − 4δ′L.
It remains to show that (M, dN) ⊂ (H, dN), i.e., that the function dN on H extends
the function dN on M. Since we added the black points inside the ball of radius
L + 1, we only have to check that for any w ∈ M in the ball of radius L + 2,
d2N(w) ≤ ‖w − ĉi‖2 = ‖w − u′i‖2 + ‖c′i + δ′(c′i/‖c′i‖)‖2.
But by (f), d2N(w) ≤ ‖w − ui‖2 + ‖ci − ui‖2 + ε′2, so it suffices to show that
‖w − ui‖2 + ‖ci − ui‖2 + ε′2 ≤ ‖w − u′i‖2 + ‖c′i‖2 + 2δ′‖c′i‖ + δ′2.
By (b), ‖w − u′i‖2 ≥ (‖w − ui‖ − ε′)2, and it is enough to prove that
‖w − ui‖2 + ‖ci − ui‖2 + ε′2 ≤ (‖w − ui‖ − ε′)2 + ‖c′i‖2 + 2δ′‖c′i‖ + δ′2.
But (‖w − ui‖ − ε′)2 + ‖c′i‖2 + 2δ′‖c′i‖ + δ′2 = ‖w − ui‖2 − 2ε′‖w − ui‖ + ε′2 +
‖c′i‖2 + 2δ′‖c′i‖ + δ′2, and ‖ci − ui‖2 ≤ ‖ci − u′i‖2 + 2ε′‖ci − u′i‖ + ε′2 = ‖c′i‖2 +
2ε′‖c′i‖ + ε′2. Thus, after simplifying, we only need to check that 2ε′‖w − ui‖ + ε′2 ≤
δ′2, which is true since 2ε′‖w − ui‖ + ε′2 ≤ 2ε′(2L + 2) + ε′2 ≤ 4ε′(L + 2). �
3.7. Theorem. Assume that (M,dN) |= TN. Then (M,dN) is existentially closed.
Proof. Let (H, dN) ⊃ (M, dN) and assume that (H, dN) is ℵ0-saturated. Let
ψ(x̄, v̄) be a quantifier-free LN-formula, where x̄ = (x1, . . . , xn+k) and v̄ = (v1, . . . , vl).
Suppose that there are a1, . . . , an+k ∈ H \ M and e1, . . . , el ∈ M such that
(H, dN) |= ψ(ā, ē) = 0. After enlarging the formula ψ if necessary, we may
assume that ψ(x̄, v̄) = 0 describes the values of dN(xi) for i ≤ n + k, the values
of dN(vj) for j ≤ l, and the inner products between those elements. We may
assume that for i ≤ n there is ρ > 0 such that d(ai, z) − dN(ai) ≥ 2ρ for all z ∈ M
with dN(z) ≤ ρ. Since (H, dN) is ℵ0-saturated, there are c1, . . . , cn ∈ H such that
dN(ai) = ‖ai − ci‖ and dN(ci) = 0. Then d(ci, M) ≥ ρ. Fix ε with 0 < ε < min{ρ, 1}. We
may also assume that for i > n, |d2N(ai) − ‖ai − PM(ai)‖2 − d2N(PM(ai))| ≤ ε/2.
Also, assume that all points mentioned so far live in a ball of radius L around the
origin. Let b1, . . . , bn+k ∈ M be the projections of a1, . . . , an+k onto M and let
d1, . . . , dn ∈ M be the projections of c1, . . . , cn onto M. Let ϕ(x̄, ȳ, z̄, ū) = 0
be an L-statement that describes the inner products between the elements listed,
and such that ϕ(ā, c̄, b̄, d̄) = 0. Using the axioms we can find ā′, c̄′ in M such
that ϕ(ā′, c̄′, b̄, d̄) ≤ f(ε), dN(c′i) ≤ δ for i ≤ n, |dN(a′i) − ‖a′i − c′i‖| ≤ δ for
i ≤ n, and |d2N(a′i) − ‖a′i − bi‖2 − d2N(bi)| ≤ 4Lδ for i > n, where δ = 2√(ε(L + 2)).
Since ε > 0 was arbitrary, we get (M, dN) |= infx1 . . . infxn+k ψ(x̄, ē) = 0. �
4. Model theoretic analysis of TN
We prove three theorems in this section about the theory TN:
• TN is not simple.
• TN is not even NTP2. (Of course, this implies the previous item, but we will
provide the proof of non-simplicity as well.)
• TN is NSOP1. Therefore, in spite of having a tree property, our theory is
still "close to being simple" in the precise sense of not having the SOP1
tree property.
These results place TN in a very interesting situation in the stability hierarchy
for continuous logic.
4.1. Notation. We write tp for types of elements in the language L and tpN for
types of elements in the language LN. Similarly we denote by aclN the algebraic
closure in the language LN and by acl the algebraic closure for pure Hilbert spaces.
Recall that for a set A, acl(A) = dcl(A), and this corresponds to the closure of
the space spanned by A (Fact 1.2).
4.2. Observation. The theory TN does not have elimination of quantifiers. We
use the characterization of quantifier elimination given in Theorem 8.4.1 of
[9]. Let H1 be a two-dimensional Hilbert space, let {u0, u1} be an orthonormal
basis for H1, let N1 = {0, u0 + (1/4)u1} and let d1N(x) = min{1, dist(x, N1)}.
Then (H1, d1N) |= T0. Let a = u0, b = u0 − (1/4)u1 and c = u0 + (1/4)u1. Note
that d1N(b) = 1/2. Let (H′1, d1N) ⊃ (H1, d1N) be existentially closed. Now let H2
be an infinite dimensional separable Hilbert space and let {vi : i ∈ ω} be an
orthonormal basis. Let N2 = {x ∈ H2 : ‖x − v1‖ = 1/4, Pspan(v1)(x) = v1} ∪ {0}
and let d2N(x) = min{1, dist(x, N2)}. Let (H′2, d2N) ⊃ (H2, d2N) be existentially
closed. Then (span(a), d1N ↾span(a)) ∼= (span(v1), d2N ↾span(v1)) and they can be
identified, say by a function F. But (H′1, d1N) and (H′2, d2N) cannot be amalgamated
over this common substructure: if they could, then we would have dist(F(b), v1 +
(1/4)vi) = dist(b, v1 + (1/4)vi) < 1/2 for some i > 1, and thus d1N(b) < 1/2, a
contradiction.
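The numbers in Observation 4.2 can be checked numerically. This is our own sketch; we assume the elided coefficients are 1/4 (matching the radius-1/4 ring in N2), and we place u1 orthogonal to vi, as in a free amalgam.

```python
import math

u0 = (1.0, 0.0, 0.0)            # identified with v1 under F
b = (1.0, -0.25, 0.0)           # b = u0 - u1/4, a point of H1'
c = (1.0, 0.25, 0.0)            # the black point u0 + u1/4 of N1
ring_pt = (1.0, 0.0, 0.25)      # a black point v1 + v_i/4 of N2, i > 1
origin = (0.0, 0.0, 0.0)

def dist(p, q):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

# Inside (H1', d1): the truncated distance from b to the black set {0, c}
d1N_b = min(1.0, dist(b, c), dist(b, origin))
assert d1N_b == 0.5

# In the amalgam, the ring point would be strictly closer than 1/2:
assert abs(dist(b, ring_pt) - math.sqrt(2) / 4) < 1e-12
assert dist(b, ring_pt) < d1N_b
print("ok")
```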
In this case, the main reason for this failure of amalgamation resides in the
fact that (span(a), d1N ↾span(a)) ∼= (span(v1), d2N ↾span(v1)) is not a model of T0:
informally, the distance values around v1 are determined by an "external attractor"
(the black point u0 + (1/4)u1, or the black ring orthogonal to v1 at distance 1/4) that
the subspace (span(a), d1N ↾span(a)) simply cannot see. This violates Axiom (1) in
the description of T0. This "noise external to the substructure" accounts for the
failure of amalgamation, and ultimately for the lack of quantifier elimination.
In [7, Corollary 2.6], the authors show that the algebraic closure in the expan-
sion of a simple structure by a generic subset coincides with the algebraic closure
in the original language. However, in our setting, the new algebraic closure aclN(X)
does not agree with the old algebraic closure acl(X):
4.3. Observation. The previous construction shows that aclN does not coincide
with acl. Indeed, c ∈ aclN(a) \ acl(a): the set of solutions of the type tpN(c/a)
is {c}, but c /∈ dcl(a) since c /∈ span(a).
However, models of the basic theory T0 are LN-algebraically closed. The proof
is similar to [7, Proposition 2.6(3)]:
4.4. Lemma. Let (M,dN) |= TN and let A ⊂ M be such that A = dcl(A) and
(A,dN ↾A) |= T0. Let a ∈M. Then a ∈ aclN(A) if and only if a ∈ A.
Proof. Assume a /∈ A. We will show that a /∈ aclN(A). Let a′ |= tp(a/A) be
such that a′ |⌣A M. Let (M′, dN) be an isomorphic copy of (M, dN) over A
through f : M →A M′ such that f(a) = a′. We may assume that M′ |⌣A M.
Since (A, dN ↾A) is an amalgamation base, (N, dN) = (M ⊕A M′, dN) |= T0.
Let (N′, dN) ⊃ (N, dN) be an existentially closed structure. Then tpN(a/A) =
tpN(a′/A), and therefore a /∈ aclN(A). �
As TN is model complete, the types in the extended language are determined by
the existential formulas they contain, i.e., formulas of the form inf ȳ ϕ(ȳ, x̄) = 0.
Another difference with the work of Chatzidakis and Pillay is that the analogue
of [7, Proposition 2.5] no longer holds. Let a, b, c be as in Observation 4.3; notice
that (span(a), dN ↾span(a)) ∼= (span(v1), dN ↾span(v1)). However, (H′1, dN, a) 6≡
(H′2, dN, v1). Instead, we can show the following weaker version of the proposi-
tion.
4.5.Proposition. Let (M,dN) and (N,dN) bemodels of TN and letA be a common
subset ofM and N such that (span(A), dN ↾span(A)) |= T0. Then
(M,dN) ≡A (N,dN).
Proof. Assume thatM∩N = span(A). Since (span(A), dN ↾span(A)) |= T0, it is an
amalgamation base and therefore we may consider the free amalgam (M⊕span(A)
N,dN) of (M,dN) and (N,dN) over (span(A), dN ↾span(A)). Let now (E, dN) be
a model of TN extending (M⊕span(A) N,dN). By the model completeness of TN,
we have that (M,dN) ≺ (E, dN) and (N,dN) ≺ (E, dN) and thus (M,dN) ≡A
(N,dN). □
4.1. Generic independence. In this section we define an abstract notion of in-
dependence and study its properties.
Fix (U, dN) |= TN, a κ-universal domain.
4.6. Definition. Let A, B, C ⊂ U be small sets. We say that A is ∗-independent
from B over C, and write A |⌣∗C B, if aclN(A ∪ C) is independent (in the sense of
Hilbert spaces) from aclN(C ∪ B) over aclN(C). That is, A |⌣∗C B if for all a ∈
aclN(A ∪ C), PaclN(C∪B)(a) = PaclN(C)(a), where PD denotes the orthogonal
projection onto the closed subspace generated by D.
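Since ∗-independence reduces to Hilbert-space independence of algebraic closures, the defining condition can be tested concretely with orthogonal projections. The following numpy sketch is only an illustration of that projection identity in a toy finite-dimensional space (the ambient space R^4 and the particular vectors are assumptions of the example, not part of the construction above):

```python
import numpy as np

def proj(vectors, v):
    """Orthogonal projection of v onto the span of the given vectors."""
    M = np.column_stack(vectors)
    Q, _ = np.linalg.qr(M)          # orthonormal basis of the span
    return Q @ (Q.T @ v)

e = np.eye(4)                        # ambient real Hilbert space R^4
C = [e[0], e[1]]                     # base set C
B = [e[1] + e[2]]                    # B adds the fresh direction e2
a_ind = e[0] + e[3]                  # A-side vector adding the fresh direction e3
a_dep = e[0] + e[2]                  # A-side vector leaning into B's direction e2

# A is independent from B over C iff P_{span(B u C)}(a) = P_{span(C)}(a)
independent = np.allclose(proj(B + C, a_ind), proj(C, a_ind))
dependent   = np.allclose(proj(B + C, a_dep), proj(C, a_dep))
print(independent, dependent)        # True False
```

Here a_ind satisfies the projection identity (its extra direction is orthogonal to everything B contributes), while a_dep does not.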
4.7. Proposition. The relation |⌣∗ satisfies the following properties (here A, B, etc.,
are any small subsets of U):
(1) Invariance under automorphisms of U.
(2) Symmetry: A |⌣∗C B ⇐⇒ B |⌣∗C A.
(3) Transitivity: A |⌣∗C BD if and only if A |⌣∗C B and A |⌣∗CB D.
(4) Finite Character: A |⌣∗C B if and only if ā |⌣∗C B for all finite ā ∈ A.
(5) Local Character: If ā is any finite tuple, then there is a countable B0 ⊆ B such
that ā |⌣∗B0 B.
(6) Extension property over models of T0: If (C, dN ↾C) |= T0, then we can find
A′ such that tpN(A/C) = tpN(A′/C) and A′ |⌣∗C B.
(7) Existence over models: ā |⌣∗M M for any ā.
(8) Monotonicity: āā′ |⌣∗C b̄b̄′ implies ā |⌣∗C b̄.
Proof. (1) Is clear.
(2) It follows from the fact that independence in Hilbert spaces satisfies Symmetry (see Proposition 1.3).
(3) It follows from the fact that independence in Hilbert spaces satisfies Transitivity (see Proposition 1.3).
(4) Clearly A |⌣∗C B implies that ā |⌣∗C B for all finite ā ∈ A. On the other
hand, if ā |⌣∗C B for all finite ā ∈ A, then for a dense subset A0 of A,
A0 |⌣∗C B and thus A |⌣∗C B.
(5) Local Character: let ā be a finite tuple. Since independence in Hilbert
spaces satisfies local character, there is a countable B1 ⊆ aclN(B) such that
ā |⌣B1 B. Now let B0 ⊆ B be countable such that aclN(B0) ⊃ B1. Then
ā |⌣∗B0 B.
(6) Let C be such that (C, dN ↾C) |= T0. Let D ⊃ A ∪ C be such that (D, dN ↾D) |= T0
and let E ⊃ B ∪ C be such that (E, dN ↾E) |= T0. Changing D for
another set D′ with tpN(D′/C) = tpN(D/C), we may assume that the
space generated by D′ ∪ E is the free amalgamation of D′ and E over C.
By Lemma 4.4, D′ and E are algebraically closed and D′ |⌣∗C E.
(7) It follows from the definition of ∗-independence.
(8) It follows from the definition of ∗-independence and transitivity. □
Therefore we have a natural independence notion that satisfies many good
properties, but not enough to guarantee the simplicity of TN.
We will show below that the theory TN has both TP2 and NSOP1. This places
it in an interesting area of the stability hierarchy for continuous model theory:
while having the tree property TP2, and therefore lacking the good properties of
NTP2 theories, it still has a quite well-behaved independence notion |⌣∗, good
enough to guarantee that it does not have the SOP1 tree property. Therefore,
although the theory is not simple, it is reasonably close to this family of theories.
4.2. The failure of simplicity.
4.8. Theorem. The theory TN is not simple.
The proof’s idea uses a characterization of simplicity in terms of the number of
partial types due to Shelah (see [11]; see also Casanovas [6] for further analysis): T
is simple iff for all κ and λ we have NT(κ, λ) ≤ 2^κ + λ^ω, where NT(κ, λ) is the
supremum of the cardinalities of families of pairwise incompatible partial types
of size ≤ κ over a set of cardinality ≤ λ. This holds for continuous logic as well.
We show that TN fails this criterion.
Proof. Fix an infinite cardinal κ and λ ≥ κ. We will find a complete submodel
Mf of the monster model, of density character λ, and λ^κ many types over subsets
of Mf of power κ, in such a way that we guarantee that they are pairwise
incompatible in a uniform way.
Now also fix some orthonormal basis ofMf, listed as
{bi|i < κ} ∪ {aj|j < λ} ∪ {cX|X ∈ Pκ(λ)}.
Also fix, for every X ∈ Pκ(λ), a bijection
fX : {bi|i < κ} → {aj|j ∈ X}.
Let the “black points” ofMf consist of the set
N = {cX + bi + (1/2)fX(bi) | i < κ,X ∈ Pκ(λ)} ∪ {0}
and as usual define dN(x) as the distance from x to N. This is a submodel of the
monster.
Let AX := {bi|i < κ} ∪ {aj|j ∈ X} for each X ∈ Pκ(λ).
The crux of the proof is to notice that if X ≠ Y then the types tp(cX/AX) and
tp(cY/AY) are incompatible, thereby witnessing that there are λ^κ many pairwise
incompatible types:
Suppose there is some c such that tp(c/AX) = tp(cX/AX) and tp(c/AY) =
tp(cY/AY). Take (wlog) j ∈ Y \ X. Pick ℓ < κ such that fY(bℓ) = aj. Let k ∈ X
be such that fX(bℓ) = ak.
In Mf, the distance to black of cX + bℓ − (1/2)ak is 1: by definition, cX + bℓ +
(1/2)fX(bℓ) = cX + bℓ + (1/2)ak ∈ N, and the only difference between
cX + bℓ − (1/2)ak and cX + bℓ + (1/2)ak is the sign in front of an element of an
orthonormal basis. Therefore the distance to black of d = c + bℓ − (1/2)ak is also 1
(in the monster). However, e = c + bℓ + (1/2)aj must be a black point, since
e′ = cY + bℓ + (1/2)aj is black (by definition of N, and since aj = fY(bℓ) and
tp(c/AY) = tp(cY/AY)). On the other hand, the distance from e to d is
‖(1/2)aj + (1/2)ak‖ = √2/2 < 1. This contradicts that the distance from d to the
black points is 1. □
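The contradiction rests on two elementary norm computations. In the numpy sketch below, four orthonormal coordinate vectors stand in for c, b_ℓ, a_k and a_j; this is a finite-dimensional toy check of the arithmetic, not the actual model Mf:

```python
import numpy as np

# orthonormal directions standing in for c, b_l, a_k, a_j of the proof
c, b_l, a_k, a_j = np.eye(4)

black = c + b_l + 0.5 * a_k      # a black point of N (by construction)
d     = c + b_l - 0.5 * a_k      # same vector with the sign of a_k flipped
e     = c + b_l + 0.5 * a_j      # forced to be black if tp(c/A_Y) = tp(c_Y/A_Y)

dist_d_black = np.linalg.norm(d - black)   # = ||a_k|| = 1
dist_e_d     = np.linalg.norm(e - d)       # = ||(a_j + a_k)/2|| = sqrt(2)/2
print(round(dist_d_black, 6), round(dist_e_d, 6))   # 1.0 0.707107
```

So d sits at distance exactly 1 from the nearby black point, while the alleged black point e is strictly closer than 1 to d, which is the contradiction.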
This stands in sharp contrast with the result of Chatzidakis and
Pillay in the (discrete) first order case. The existence of these incompatible types
is rendered possible here by the presence of “euclidean” interactions between the
elements of the basis chosen.
So far we have two kinds of expansions of Hilbert spaces by predicates: either
they remain stable (as in the case of the distance to a Hilbert subspace as in the
previous section) or they are not even simple.
4.3. TN has the tree property TP2.
4.9. Theorem. The theory TN has the tree property TP2.
Proof. We will construct a complete submodel M |= T0 of the monster model,
of density character 2^ℵ0, and a quantifier free formula ϕ(x; y, z) that witnesses
TP2 inside M. Since this model can be embedded in the monster model of TN
preserving the distance to black points, this will show that TN has TP2.
We fix some orthonormal basis ofM, listed as
{bi|i < ω} ∪ {cn,i|n, i < ω} ∪ {af|f : ω→ ω}.
Also let the “black points” ofM consist of the set
N = {af + bn + (1/2)cn,f(n) | n < ω, f : ω→ ω} ∪ {0}
and as usual define dN(x) as the distance from x to N. This is a model of T0 and
thus a submodel of the monster.
Let ϕ(x, y, z) = max{1 − dN(x + y − (1/2)z), dN(x + y + (1/2)z)}.
Claim 1. For each i, the conditions {ϕ(x, bi, ci,j) = 0 : j ∈ ω} are 2-inconsistent.
Assume otherwise, so we can find a (in an extension of M) such that dN(a +
bi + (1/2)ci,j) = 0 and dN(a + bi − (1/2)ci,l) = 1 for some j < l. But then
d(a + bi + (1/2)ci,j, a + bi − (1/2)ci,l) = d((1/2)ci,j, −(1/2)ci,l) = √2/2 < 1.
Since a + bi + (1/2)ci,j is a black point, we get that dN(a + bi − (1/2)ci,l) ≤ √2/2 < 1,
a contradiction.
Claim 2. For each f, the conditions {ϕ(x, bi, ci,f(i)) = 0 : i ∈ ω} are consistent.
Indeed, fix f and consider af. By construction dN(af + bn + (1/2)cn,f(n)) =
0 and d(af + bn − (1/2)cn,f(n), af + bn + (1/2)cn,f(n)) = 1, so dN(af + bn −
(1/2)cn,f(n)) ≤ 1.
Now we check the distance to the other points in N. It is easy to see that
d(af + bn − (1/2)cn,f(n), af + bm + (1/2)cm,f(m)) > 1 for m ≠ n, and d(af + bn −
(1/2)cn,f(n), ag + bk + (1/2)ck,g(k)) > 1 for g ≠ f and all indexes k. Finally,
d(af + bn − (1/2)cn,f(n), 0) > 1. This shows that af is a witness for the claim. □
4.4. TN and the property NSOP1. Chernikov and Ramsey have proved that
whenever a first order discrete theory satisfies the following properties (for arbitrary
models and tuples), then the theory satisfies the NSOP1 property (see [8, Prop. 5.3]).
• Strong finite character: whenever ā depends on b̄ over M, there is a formula
ϕ(x̄, b̄, m̄) ∈ tp(ā/b̄M) such that every ā′ |= ϕ(x̄, b̄, m̄) depends on b̄ over M.
• Existence over models: ā |⌣M M for any ā.
• Monotonicity: āā′ |⌣M b̄b̄′ implies ā |⌣M b̄.
• Symmetry: ā |⌣M b̄ ⇐⇒ b̄ |⌣M ā.
• Independent amalgamation: c̄0 |⌣M c̄1, b̄0 |⌣M c̄0, b̄1 |⌣M c̄1 and b̄0 ≡M b̄1
imply that there exists b̄ with b̄ ≡c̄0M b̄0 and b̄ ≡c̄1M b̄1.
We prove next that in TN, |⌣∗ satisfies analogues of these five properties;
we may thereby conclude that TN can be regarded (following the analogy) as an
NSOP1 continuous theory.
In what remains of the paper, we prove that TN satisfies these properties.
We focus our efforts on strong finite character and independent amalgamation;
the other properties were proved in Proposition 4.7.
We need the following setting: let M be the monster model of TN and A ⊂ M. Fix Ā
with A ⊂ Ā ⊂ M such that Ā |= T0, and let ā = (a0, . . . , an) ∈ M. We say that
(ā, Ā, B) is minimal if tp(B/A) = tp(Ā/A) and, for all b̄ ∈ M, if tp(b̄/A) = tp(ā/A) then
‖prB(b0)‖ + · · · + ‖prB(bn)‖ ≥ ‖prB(a0)‖ + · · · + ‖prB(an)‖.
By compactness, for all p ∈ S(A) there is a minimal (ā, Ā, B) such that ā |= p.
Now let cl0(A) be the set of all x such that for some minimal (ā, Ā, B), x =
prB(a0) (a0 being the first coordinate of ā).
4.10. Lemma. If tp(B/A) = tp(Ā/A) and x ∈ cl0(A) then x ∈ B.
Proof. Suppose not. Let B witness this, and let C and ā be such that (ā, Ā, C) is
minimal and x = prC(a0). Since C |= T0, wlog B |⌣C ā (independence in the
sense of Hilbert spaces). But then prB(ai) = prB(prC(ai)) for each i, and thus
‖prB(a0)‖ + · · · + ‖prB(an)‖ < ‖prC(a0)‖ + · · · + ‖prC(an)‖. This contradicts
minimality. □
A direct consequence of the previous lemma is that cl0(A) ⊂ bclN(A) :=
⋂_{A⊂B|=TN} B, since cl0(A) is contained in every model of the theory TN containing A.
We now define the essential closure ecl. Let clα+1(A) = cl0(clα(A)) for all
ordinals α, clδ(A) = ⋃_{α<δ} clα(A) for limit ordinals δ, and ecl(A) = ⋃_{α∈On} clα(A).
4.11. Lemma. For all ā, B, A: if ecl(A) = A then there is b̄ such that tp(b̄/A) =
tp(ā/A) and b̄ |⌣A B.
Proof. Choose Ā |= T0 such that A ⊂ Ā, and c̄ such that tp(c̄/A) = tp(ā/A)
and (c̄, Ā, Ā) is minimal. Since cl0(A) = A, prĀ(ci) ∈ A for all i ≤ n (where c̄ =
(c0, . . . , cn)), i.e. c̄ |⌣A Ā. Now choose b̄ such that tp(b̄/Ā) = tp(c̄/Ā) and
b̄ |⌣Ā B. Then b̄ is as needed. □
4.12. Corollary. ecl(A) = aclN(A).
Proof. Clearly aclN(A) ⊂ bclN(A). On the other hand, assume that x /∈ aclN(A).
Let B be a model of TN such that A ⊂ B. By Lemma 4.11, we may assume that
x |⌣A B. Then x /∈ B, so x /∈ bclN(A), so x /∈ ecl(A). □
4.13. Theorem. Suppose ecl(A) = A, A ⊂ B, C, B |⌣∗A C (i.e. ecl(B) |⌣A ecl(C)),
ā |⌣∗A B, b̄ |⌣∗A C and tp(ā/A) = tp(b̄/A). Then there is c̄ such that tp(c̄/B) =
tp(ā/B), tp(c̄/C) = tp(b̄/C) and c̄ |⌣∗A BC.
Proof. Wlog ecl(B) = B and ecl(C) = C. By Lemma 4.11 we can find models A0,
A1, B∗ and C∗ of T0 such that Aā ⊂ A0, Ab̄ ⊂ A1, B ⊂ B∗ and C ⊂ C∗, and such
that B∗ |⌣∗A C∗, A0 |⌣B B∗ and A1 |⌣C C∗. We can also find models A∗0, A∗1
and D∗ of T0 such that A0B∗ ⊂ A∗0, A1C∗ ⊂ A∗1 and B∗C∗ ⊂ D∗, and wlog we may
assume that ā and b̄ are chosen so that A∗0 |⌣B∗ D∗, A∗1 |⌣C∗ D∗, and that there is
an automorphism F of the monster model fixing A pointwise such that F(ā) = b̄,
F(A0) = A1 and F(A∗0) |⌣A1 A∗1. Notice that now
A0 |⌣B∗ D∗ and A1 |⌣C∗ D∗.
We can now find Hilbert spaces A∗, A∗∗0, A∗∗1 and E such that
(i) E is generated by D∗A∗∗0A∗∗1,
(ii) A ⊂ A∗ ⊂ A∗∗0 ∩ A∗∗1, B∗ ⊂ A∗∗0, C∗ ⊂ A∗∗1,
(iii) there are Hilbert space isomorphisms G : A∗∗0 → A∗0 and H : A∗∗1 → A∗1
such that
a) F ◦ G ↾ A∗ = H ↾ A∗,
b) G ↾ B∗ = idB∗, H ↾ C∗ = idC∗,
c) G ∪ idD∗ generate an isomorphism
〈A∗∗0 D∗〉 → 〈A∗0 D∗〉,
d) H ∪ idD∗ generate an isomorphism
〈A∗∗1 D∗〉 → 〈A∗1 D∗〉,
e) F ∪ G ∪ H generate an isomorphism
〈A∗∗0 A∗∗1〉 → 〈F(A∗0) A∗1〉.
We can find these because non-dividing independence in Hilbert spaces has 3-
existence (the independence theorem holds for types over sets).
Now we choose the “black points” of our model: a ∈ E is black if one of the
following holds:
(i) a ∈ A∗∗0 and G(a) is black,
(ii) a ∈ A∗∗1 and H(a) is black,
(iii) a ∈ D∗ and a is black.
Then in E we define the “distance to black” function simply as the real distance.
Then in D∗ there is no change, and G and H remain isomorphisms after adding
this structure; D∗, A∗0, A∗1 and F(A∗0) witness this.
So we can assume that E is a submodel of the monster, and letting c̄ = G−1(ā),
G witnesses that tp(c̄/B) = tp(ā/B) and H witnesses that tp(c̄/C) = tp(b̄/C).
We have already seen that A∗ |⌣A D∗ and thus c̄ |⌣∗A BC. □
4.14. Proposition. Suppose b̄ ̸|⌣∗A C, A ⊂ B ∩ C and (wlog) C = bcl(C). Then
there exists a formula χ ∈ tp(b̄/C) such that for all ā |= χ, ā ̸|⌣∗A C.
Proof. By compactness, we can find ε > 0 such that (letting b̄ = (b0, . . . , bn) and
ā = (a0, . . . , an))
(1) ∀ā |= tp(b̄/B), ‖prC(a0)‖ + · · · + ‖prC(an)‖ ≥ ε + ‖prbcl(A)(a0)‖ + · · · + ‖prbcl(A)(an)‖.
Again by compactness, we can find χ ∈ tp(b̄/B) such that (1) holds when we
replace tp(b̄/B) by χ and ε by ε/2, that is:
∀ā |= χ, ‖prC(a0)‖ + · · · + ‖prC(an)‖ ≥ ε/2 + ‖prbcl(A)(a0)‖ + · · · + ‖prbcl(A)(an)‖,
and in particular ā ̸|⌣∗A C, as we wanted. □
References
[1] Itaï Ben Yaacov, Lovely pairs of models: the non first order case, Journal of Symbolic Logic, volume
69 (2004), 641-662.
[2] Itaï Ben Yaacov, Alexander Berenstein, Ward C. Henson and Alexander Usvyatsov, Model The-
ory for metric structures in Model Theory with Applications to Algebra and Analysis, Volume
2, Cambridge University Press, 2008.
[3] Itaï Ben Yaacov, Alexander Usvyatsov and Moshe Zadka, Generic automorphism of a Hilbert
space, preprint available at http://ptmat.fc.ul.pt/~alexus/papers.html.
[4] Alexander Berenstein, Hilbert Spaces with Generic Groups of Automorphisms, Arch. Math. Logic,
vol 46 (2007) no. 3, 289–299.
[5] Alexander Berenstein and Steven Buechler, Simple stable homogeneous expansions of Hilbert
spaces. Ann. Pure Appl. Logic 128 (2004), no. 1-3, 75–101.
[6] Enrique Casanovas, The number of types in simple theories. Ann. Pure Appl. Logic 98 (1999),
69–86.
[7] Zoé Chatzidakis and Anand Pillay, Generic structures and simple theories, Ann. Pure Appl. Logic
95 (1998), no. 1-3, 71–92.
[8] Artem Chernikov and Nicholas Ramsey, On model-theoretic tree properties, Journal of Mathe-
matical Logic, 16 (2), 2016.
[9] Wilfrid Hodges, Model Theory, Cambridge University Press, 1993.
[10] Bruno Poizat, Le carré de l’égalité, Journal of Symbolic Logic 64 (1999), 1339–1355.
[11] Saharon Shelah, Simple Unstable Theories, Ann. Math Logic 19 (1980), 177–203.
Alexander Berenstein, Universidad de los Andes, Departamento de Matemáticas, Cra
1 # 18A-10, Bogotá, Colombia.
URL: www.matematicas.uniandes.edu.co/~aberenst
Tapani Hyttinen, University of Helsinki, Department of Mathematics and Statistics,
Gustaf Hällströminkatu 2b. Helsinki 00014, Finland.
E-mail address: [email protected]
Andrés Villaveces, Universidad Nacional de Colombia, Departamento de Matemáticas,
Av. Cra 30 # 45-03, Bogotá 111321, Colombia.
E-mail address: [email protected]
|
0704.1634 | Riggings of locally compact abelian groups | RIGGINGS OF LOCALLY COMPACT
ABELIAN GROUPS.
M. Gadella, F. Gómez, S. Wickramasekara.
Abstract
We obtain a set of generalized eigenvectors that provides a generalized
spectral decomposition for a given unitary representation of a commutative,
locally compact topological group. These generalized eigenvectors are
functionals belonging to the dual space of a rigging on the space of square
integrable functions on the character group. These riggings are obtained
through suitable spectral measure spaces.
1 Introduction
The purpose of the present paper is to take a first step towards a general
formalism of unitary representations of groups and semigroups on rigged
Hilbert spaces. To begin with, we want to introduce the theory correspond-
ing to Abelian locally compact groups, leaving the more general nonabelian
case as well as semigroups for a later work. We recall that a rigged Hilbert
space or a rigging of a Hilbert space H is a triplet of the form
Φ ⊂ H ⊂ Φ× , (1)
where Φ is a locally convex space dense in H with a topology stronger than
that inherited from H and Φ× is the dual space of Φ. In this paper, we
shall always assume that H is separable.
To each self adjoint operator A on H, the von Neumann theorem [1]
associates a spectral measure space. This is the quadruple (Λ,A,H, P ),
where H is the Hilbert space on which A acts, Λ = σ(A) is the spectrum of A,
A is the family of Borel sets in Λ, and P is the projection valued measure on
A determined by A through the von Neumann theorem. Obviously Λ ⊂ R.
A complete discussion on the relation between these concepts can be found
in [2]. We say that the topological vector space (Φ, τΦ) (vector space Φ with
the locally convex topology given by τΦ) equips or rigs the spectral measure
(Λ,A,H, P ) if the following conditions hold:
i. There exists a one-to-one linear mapping I : Φ 7−→ H with range
dense in H. We can assume that Φ ⊂ H is a dense subspace of H and
I, the canonical injection from Φ into H.
ii. There exists a σ-finite measure µ on (Λ,A), a set Λ0 ⊂ Λ with zero µ
measure and a family of vectors in Φ× of the form
{|λk×〉 ∈ Φ× : λ ∈ Λ\Λ0, k ∈ {1, 2, . . . ,m}}, (2)
where m ∈ {∞, 1, 2, . . .}, such that
(φ, P (E)ϕ)H = ∑_{k=1}^{m} ∫_E 〈φ|λk×〉 〈ϕ|λk×〉∗ dµ(λ), ∀φ,ϕ ∈ Φ, ∀E ∈ A. (3)
Each family of the form (2) satisfying (3) is called a complete system of
Dirac kets of the spectral measure (Λ,A,H, P ) in (Φ, τΦ). In this case, the
triplet Φ ⊂ H ⊂ Φ× is a rigged Hilbert space, which is called a rigging of
(Λ,A,H, P ).
Conversely, the von Neumann theorem asserts that a projection valued
measure defined on the σ-algebra of Borel sets on a subset of the real line
determines a self adjoint operator A. If (Λ,A,H, P ), where Λ ⊂ R, is such a
measure space, then for ϕ and φ on a suitable dense domain, the self-adjoint
operator A such that Λ = σ(A) is defined by
(φ,Aϕ)H = ∑_{k=1}^{m} ∫_Λ λ 〈φ|λk×〉 〈ϕ|λk×〉∗ dµ(λ) (4)
where µ, |λk×〉 and m are as defined in (3). Further, if f(λ) is a measurable
complex valued function on Λ, then, for φ,ϕ on a suitable dense domain,
which is the whole of H if f(λ) is bounded, the operator valued function
f(A) is defined by
(φ, f(A)ϕ)H = ∑_{k=1}^{m} ∫_Λ f(λ) 〈φ|λk×〉 〈ϕ|λk×〉∗ dµ(λ) . (5)
The functionals |λk×〉 ∈ Φ× and the complex numbers f(λ) are the generalized
eigenvectors and respective generalized eigenvalues of f(A) [3]. In
particular, if f(λ) = e^{itλ}, where t ∈ R, the set of operators e^{itA} forms a
one-parameter commutative group of unitary operators, and Φ ⊂ H ⊂ Φ× as
defined above is a rigging for this group.
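In finite dimensions the projection valued measure of a self-adjoint matrix is a finite sum of rank-one projections, so (4)-(5) become finite sums and can be checked directly. The following numpy sketch of this functional calculus is only an illustration (the matrix, dimension and names are assumptions of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = (X + X.T) / 2                      # a self-adjoint "observable"

lam, V = np.linalg.eigh(A)             # spectral decomposition A = V diag(lam) V*

def f_of_A(f):
    """Finite analogue of (5): f(A) = sum_k f(lam_k) |v_k><v_k|."""
    return (V * f(lam)) @ V.conj().T   # scales column k of V by f(lam_k)

U = lambda t: f_of_A(lambda l: np.exp(1j * t * l))   # e^{itA}

# e^{itA} is a one-parameter unitary group, and f(lam) = lam recovers A
ok_group   = np.allclose(U(0.3) @ U(0.5), U(0.8))
ok_unitary = np.allclose(U(0.7).conj().T @ U(0.7), np.eye(4))
ok_calc    = np.allclose(f_of_A(lambda l: l), A)
print(ok_group, ok_unitary, ok_calc)   # True True True
```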
One can expect that similar riggings exist for unitary representations of
arbitrary groups and semigroups and that the operators of the representa-
tions can be expanded in terms of generalized eigenvectors and eigenvalues
as in (5). Riggings that make use of Hardy functions on a half plane exist
for one-parameter dynamical semigroups e^{−itH}, t ≤ 0, and e^{−itH}, t ≥ 0,
where H is the Hamiltonian [3].
In the present paper, we show that riggings along the above lines al-
ways exist for unitary representations of Abelian locally compact groups.
In particular, let G be an Abelian locally compact group and π, a unitary
representation of G on a separable Hilbert space H. We will see that the
Fourier transform on G, or equivalently, the Gelfand transformation on the
C∗-algebra L1(G) allows us to represent π in terms of generalized eigenfunc-
tions and riggings of H in a manner similar to the description given in [2]
for the action of a spectral measure.
2 Characters of Abelian Locally Compact Groups.
Let G be a locally compact abelian group with Haar measure µ. A character
χ of G is any continuous mapping from G into the set of complex numbers
C such that χ(g1g2) = χ(g1)χ(g2) for all g1, g2 ∈ G and |χ(g)| = 1 for all
g ∈ G, i.e., a character of G is a continuous homomorphism from G into the
unit circle T. The set of all the characters of G forms a group, Ĝ, which is
often called the dual group of G. We shall use the notation χ(g) := 〈g|χ〉.
Let L1(G) be the space of complex valued functions, integrable in the
modulus with respect to the Haar measure µ on G. L1(G) is an abelian ∗-
algebra, with the convolution product. The dual group Ĝ can be identified
with the set of maximal ideals of L1(G) [4]. When endowed with the Gelfand
topology, Ĝ is a compact Hausdorf space (see [5] page 268).
For any χ ∈ Ĝ, we may define a linear functional Λχ on L1(G) by
Λχ(f) = ∫_G 〈g|χ〉∗ f(g) dµ(g) . (6)
Let C(Ĝ) be the space of complex continuous functions on Ĝ with the
supremum norm topology. The Gelfand-Fourier transform is the mapping
F : L1(G) 7−→ C(Ĝ) defined by:
[Ff](χ) = f̂(χ) = Λχ(f) = ∫_G 〈g|χ〉∗ f(g) dµ(g) . (7)
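For G = Z/NZ with counting Haar measure, (7) is exactly the discrete Fourier transform, and the Gelfand-Fourier transform turns convolution into pointwise multiplication — the algebra-homomorphism property underlying the Gelfand theory. A numpy check on a finite toy example:

```python
import numpy as np

N = 8
rng = np.random.default_rng(1)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def gelfand(f):
    """(7) for G = Z/NZ: f_hat(chi_k) = sum_g <g|chi_k>* f(g)."""
    g = np.arange(N)
    return np.array([np.sum(np.exp(-2j * np.pi * k * g / N) * f)
                     for k in range(N)])

def convolve(f, h):
    """Convolution product on L^1(G): (f*h)(x) = sum_g f(g) h(x - g)."""
    return np.array([sum(f[g] * h[(x - g) % N] for g in range(N))
                     for x in range(N)])

# the transform is multiplicative: (f*h)^ = f^ . h^ , and it is the DFT
ok       = np.allclose(gelfand(convolve(f, h)), gelfand(f) * gelfand(h))
also_dft = np.allclose(gelfand(f), np.fft.fft(f))
print(ok, also_dft)   # True True
```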
Let (π,H) be a unitary representation of G. Then (see [6] page 105),
there is a unique spectral measure (Ĝ,B,H, P ), where B is the σ-algebra of
Borel sets on Ĝ, such that for all g ∈ G and all f ∈ L1(G), we have
π(g) = ∫_Ĝ 〈g|χ〉 dP(χ) ; π(f) = ∫_Ĝ Λχ(f) dP(χ) . (8)
There is a one-to-one correspondence between unitary representations of G
and non-degenerate ∗-representations1 of L1(G), as given by (7) and (8).
2.1 Riggings of functions of characters.
Let us consider the spectral measure space (Ĝ,B,H, P ) introduced in the
previous section. For simplicity in the discussion, we assume the existence
of a cyclic vector u ∈ H. This means that the subspace spanned by the
vectors of the form P (E)u with E ∈ A is dense in H. The general case can
be easily obtained as a finite or countable direct sum of cyclic subspaces of H.
Then, the von Neumann decomposition theorem [1] establishes that, being
given the spectral measure space (Ĝ,B,H, P) and a positive measure
ν on (Ĝ,B) with maximal spectral type2 [P] (ν ∈ [P]), there exists a unitary
mapping U : H 7−→ L2(Ĝ, dν), such that πν(g) := Uπ(g)U−1 is the
multiplication by 〈g|χ〉 on L2(Ĝ, dν):
πν(g)φ(χ) = Uπ(g)U−1φ(χ) = 〈g|χ〉φ(χ) , ∀φ(χ) ∈ L2(Ĝ, dν) . (9)
Since πν(g) is a multiplication operator, it is easy to see that the Dirac
delta type Radon measures λ(χ)δχ form a complete system of Dirac kets for
the spectral measure space (Ĝ,B,H, P ) in the sense given by (3). For any
f(χ) ∈ Φ these deltas satisfy
∫_Ĝ f(χ) δχ′ dν(χ) = f(χ′) . (10)
Thus, a possible choice for Φ is C(Ĝ), the space of continuous functions
on Ĝ endowed with a topology τΦ stronger than both the topologies of the
supremum norm and the || · ||L2(Ĝ,dν) norm. In this case, the dual Φ× of Φ includes
the space of all Radon measures on (Ĝ,B). We have the rigged Hilbert space
Φ ⊂ L2(Ĝ) ⊂ Φ×.
1Non-degenerate means that π(f)v = 0 for every f implies v = 0. The representation
also has the property that π(f∗) = π†(f), where f 7→ f∗ is the involution on L1(G).
2For a definition and properties of the spectral type, see [1, 2].
3 Positive Type Functions and Riggings.
Next, we shall introduce another representation πφ of G linked to a function
of positive type, that can be defined as follows: Let φ(g) ∈ L∞(G). We say
that φ(g) is a function of positive type if for any f(g) ∈ L1(G), we have that
∫_G ∫_G f∗(g) f(gg′) φ(g′) dµ(g) dµ(g′) ≥ 0 , (11)
where the star ∗ denotes complex conjugation.
If φ(g) is a function of positive type, then, the following positive Hermi-
tian form on L1(G)
〈h|f〉φ := ∫_G ∫_G h∗(g′) f(g) φ(g−1g′) dµ(g′) dµ(g) (12)
is semi-definite in the sense that there may exist non-zero functions f ∈ L1(G)
such that 〈f|f〉φ = 0. These functions form a subspace of L1(G) that we
denote by N. Consider the factor space L1(G)/N and again denote by 〈·|·〉φ
the scalar product induced on L1(G)/N by the Hermitian form (12). The
completion of L1(G)/N by 〈·|·〉φ gives a Hilbert space usually denoted as
Hφ. Then, for any g ∈ G and f(g) ∈ L1(G), we define:
(Lgf)(g′) := f(g−1g′) . (13)
Note that Lg preserves the scalar product 〈·|·〉φ:
〈Lgh|Lgf〉φ = ∫_G ∫_G h∗(g−1g′) f(g−1g′′) φ(g′−1g′′) dµ(g′) dµ(g′′)
= ∫_G ∫_G h∗(g′) f(g′′) φ((gg′)−1(gg′′)) dµ(g′) dµ(g′′) = 〈h|f〉φ , (14)
for all f(g) ∈ L1(G). This also shows that LgN ⊂ N and therefore Lg
induces a transformation on the factor space L1(G)/N , that we also denote
as Lg, defined as
Lg(f(g′) + N) := f(g−1g′) + N = Lg(f(g′)) + N . (15)
By (14), we easily see that Lg preserves the scalar product on L1(G)/N. It
is obviously invertible. Therefore, it can be uniquely extended into a unitary
operator on Hφ. Then, if for each g ∈ G we write
πφ(g)f := Lgf , ∀ f ∈ Hφ , (16)
then, πφ(g) determines a unitary representation of G on Hφ. The proof of
this statement is straightforward.
The representation πφ(g) of G on Hφ can be lifted to a unitary repre-
sentation of the group algebra L1(G) on Hφ that we shall also denote as πφ.
In this case, for all f ∈ L1(G), we have πφ(f)h := f ∗ h. Here, ∗ denotes
convolution.
The existence of a cyclic vector η ∈ Hφ for the representation πφ is proven
in [6]. Recall that η is a cyclic vector if the subspace {πφ(f)η : f ∈ L1(G)}
is dense in Hφ. In addition, this result also gives the following formula that
allows to find the function φ(g) in terms of η and the unitary representation
πφ of G on Hφ:
φ(g) = 〈η|πφ(g)η〉 . (17)
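Formula (17) does produce functions of positive type: in the discrete analogue of (11), the kernel (g, g′) ↦ φ(g⁻¹g′) equals the Gram matrix 〈π(g)η, π(g′)η〉 and is therefore positive semidefinite. A numpy check with the regular representation of Z/NZ and a random vector η (illustrative choices only):

```python
import numpy as np

N = 7
rng = np.random.default_rng(2)
eta = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def pi(g):
    """Regular representation of Z/NZ by cyclic shifts."""
    return np.roll(np.eye(N), g, axis=0)

# phi(g) = <eta | pi(g) eta>, as in (17)
phi = np.array([np.vdot(eta, pi(g) @ eta) for g in range(N)])

# discrete analogue of (11): the kernel (g, g') -> phi(g^{-1} g') = phi(g' - g)
K = np.array([[phi[(gp - g) % N] for gp in range(N)] for g in range(N)])
eigs = np.linalg.eigvalsh(K)       # K is Hermitian, so eigvalsh applies
print(eigs.min() >= -1e-9)         # positive semidefinite => True
```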
Now, let us consider the unitary representation πν of G given by (9) with
cyclic vector ξ, and define the following complex valued function on G:
φ(g−1g′) := 〈ξ|πν(g−1g′)ξ〉L2(Ĝ,dν) = 〈πν(g)ξ|πν(g′)ξ〉L2(Ĝ,dν) . (18)
Then, as shown in [6], Chapter 3,
i.) the function φ is of positive type in the sense of (11), and
ii.) the representation of G on Hφ given by πφ, where φ is as in (18), is equivalent
to πν.
Note that this result implies in particular that for this φ as in (18)
φ(g) = 〈η|πφ(g)η〉φ = 〈ξ|πν(g)ξ〉L2(Ĝ,dν) , ∀ g ∈ G . (19)
According to (9) and (18), we have that
φ(g−1g′) = 〈πν(g)ξ|πν(g′)ξ〉L2(Ĝ,dν) = ∫_Ĝ [〈g|χ〉 ξ(χ)]∗ 〈g′|χ〉 ξ(χ) dν(χ) . (20)
If we carry this formula into (12) and apply the Fubini theorem to change
the order of integration, we have for all f, h ∈ L1(G):
〈f|h〉φ = ∫_Ĝ ( ∫_G [f(g)〈g|χ〉]∗ dµ(g) ) ( ∫_G h(g′)〈g′|χ〉 dµ(g′) ) |ξ(χ)|2 dν(χ)
= ∫_Ĝ [f̂(χ)]∗ ĥ(χ) |ξ(χ)|2 dν(χ) = ∫_Ĝ 〈f̂|χ〉〈χ|ĥ〉 |ξ(χ)|2 dν(χ) . (21)
This latter formula shows that the generalized eigenvectors Fχ of πφ(g) are
the following: if f ∈ Φ := L1(G) ∩ L2(G),
|Fχ〉 ≡ Fχ : f 7−→ |η(χ)| f̂∗(χ) = |η(χ)| ∫_G 〈g|χ〉 f∗(g) dµ(g) . (22)
We endow Φ with any topology stronger than the topologies of L1(G) and
L2(G). For instance, we can choose a locally convex topology given by the
seminorms p1(f) := ||f||L1(G) and p2(f) := ||f||L2(G), for all f ∈ Φ. With this
topology, or another stronger one, the antilinear functional Fχ is continuous.
Then, if we use (8) in the scalar product on Hφ, we have:
〈f|πφ(g)h〉φ = ∫_Ĝ 〈g|χ〉 d〈f|P(χ)h〉φ = ∫_Ĝ 〈g|χ〉 |η(χ)|2 f̂∗(χ) ĥ(χ) dν(χ)
= ∫_Ĝ 〈g|χ〉 〈f|Fχ〉〈Fχ|h〉 dν(χ) . (23)
If we omit the arbitrary f, h ∈ Φ in (23), we have the following spectral
decomposition for πφ(g) for all g ∈ G:
πφ(g) = ∫_Ĝ 〈χ|g〉 |Fχ〉〈Fχ| dν(χ) . (24)
Note that in the antidual space Φ×, the generalized eigenvalue equation
πφ(g)|Fχ〉 = 〈χ|g〉|Fχ〉 is valid, where we use the same notation πφ(g) for
the extensions of these unitary operators into Φ×.
In conclusion, for each unitary representation of a locally compact Abelian
topological group, we have found an equivalent representation and a rigged
Hilbert space such that each of the unitary operators of the representation
admits a generalized spectral decomposition in terms of generalized eigen-
vectors of them. The eigenvectors of the decomposition are labeled by the
group characters only and their respective eigenvalues, complex numbers
with modulus one, depend on both the corresponding character and the
group element. The spectral decomposition and the corresponding rigging
both come from the existence of a spectral measure space.
Note that the Abelian property is crucial in our derivation and in partic-
ular in the existence of the spectral measure space (Ĝ,B,H, P ), since then,
the group algebra is also Abelian and the Gelfand theory applies. An ex-
tension of the present formalism to nonabelian locally compact groups will
require an extension of the Gelfand formalism that at least allows for a new
and consistent definition of the Gelfand-Fourier transform (7), an essential
feature of our construction.
Acknowledgements
We acknowledge the financial support from the Junta de Castilla y León
Project VA013C05 and the Ministry of Education and Science of Spain,
projects MTM2005-09183 and FIS2005-03988. S.W. acknowledges the fi-
nancial support from the University of Valladolid where he was a visitor
while this work was done and additional financial support from Grinnell
College.
References
[1] J. von Neumann, Mathematical Foundations of Quantum Mechanics
(Princeton University, Princeton, N.J., 1955).
[2] M. Gadella, F. Gómez, Foundations of Physics, 32, 815 (2002); M.
Gadella, F. Gómez, International Journal of Theoretical Physics,
42, 2225-2254 (2003); M. Gadella, F Gómez, Bulletin des Sciences
Mathèmatiques, 129, 567 (2005); M. Gadella, F. Gómez, Reports on
Mathematical Physics, 59, 127 (2007).
[3] A. Bohm, M. Gadella, Dirac kets, Gamow vectors and Gelfand triplets,
Springer Lecture Notes in Physics, 348 (Springer, Berlin 1989).
[4] M.A. Naimark, Normed Rings (Wolters-Noordhoff, Groningen, The
Netherlands, 1970).
[5] W. Rudin, Functional Analysis (McGraw-Hill, New York 1973).
[6] G. B. Folland, A Course in Abstract Harmonic Analysis (CRC, Boca
Raton, London, 1995).
M. Gadella
Departamento de Física Teórica
Facultad de Ciencias
c. Real de Burgos, s.n.
47011 Valladolid, Spain
E-mail address:
[email protected]
F. Gómez
Departamento de Análisis Matemático
Facultad de Ciencias
c. Real de Burgos, s.n.
47011 Valladolid, Spain
E-mail address:
[email protected]
S. Wickramasekara
Department of Physics,
Grinnell College, Grinnell, IA 50112, USA
E-mail address:
[email protected]
|
0704.1635 | Weak Amenability of Hyperbolic Groups | WEAK AMENABILITY OF HYPERBOLIC GROUPS
NARUTAKA OZAWA
Abstract. We prove that hyperbolic groups are weakly amenable. This par-
tially extends the result of Cowling and Haagerup showing that lattices in
simple Lie groups of real rank one are weakly amenable. We take a combina-
torial approach in the spirit of Haagerup and prove that for the word length
distance d of a hyperbolic group, the Schur multipliers associated with the
kernel rd have uniformly bounded norms for 0 < r < 1. We then combine
this with a Bożejko-Picardello type inequality to obtain weak amenability.
1. Introduction
The notion of weak amenability for groups was introduced by Cowling and
Haagerup [CH]. (It has almost nothing to do with the notion of weak amenability
for Banach algebras.) We use the following equivalent form of the definition. See
Section 2 and [BO, CH, Pi] for more information.
Definition. A countable discrete group Γ is said to be weakly amenable with
constant C if there exists a sequence of finitely supported functions ϕn on Γ
such that ϕn → 1 pointwise and supn ‖ϕn‖cb ≤ C, where ‖ϕ‖cb denotes the
(completely bounded) norm of the Schur multiplier on B(ℓ2Γ) associated with
(x, y) 7→ ϕ(x−1y).
In the pioneering paper [Ha], Haagerup proved that the group C∗-algebra of a
free group has a very interesting approximation property. Among other things, he
proved that the graph distance d on a tree Γ is conditionally negatively definite;
in particular, the Schur multiplier on B(ℓ2Γ) associated with the kernel r^d has
(completely bounded) norm one for every 0 < r < 1. For information on Schur
and Picardello [BP] proved that the Schur multiplier associated with the charac-
teristic function of the subset {(x, y) : d(x, y) = n} has (completely bounded)
norm at most 2(n + 1). These two results together imply that a group acting
properly on a tree is weakly amenable with constant one. Recently, this result was
extended to the case of finite-dimensional CAT(0) cube complexes by Guentner
and Higson [GH]. See also [Mi]. Cowling and Haagerup [dCH, Co, CH] proved that
lattices in simple Lie groups of real rank one are weakly amenable and computed
explicitly the associated constants. It is then natural to explore this property for
hyperbolic groups in the sense of Gromov [GdH, Gr]. We prove that hyperbolic
2000 Mathematics Subject Classification. Primary 20F67; Secondary 43A65, 46L07.
Key words and phrases. hyperbolic groups, weak amenability, Schur multipliers.
Supported by Sloan Foundation and NSF grant.
http://arxiv.org/abs/0704.1635v3
groups are weakly amenable, without giving estimates of the associated constants.
The results and proofs are inspired by and partially generalize those of Haagerup
[Ha], Pytlik-Szwarc [PS] and Bożejko-Picardello [BP]. We denote by N0 the set of
non-negative integers, and by D the unit disk {z ∈ C : |z| < 1}.
Theorem 1. Let Γ be a hyperbolic graph with bounded degree and d be the graph
distance on Γ. Then, there exists a constant C such that the following are true.
(1) For every z ∈ D, the Schur multiplier θz on B(ℓ2Γ) associated with the
kernel
Γ × Γ ∋ (x, y) ↦ z^{d(x,y)} ∈ C
has (completely bounded) norm at most C|1−z|/(1−|z|). Moreover, z ↦ θz
is a holomorphic map from D into the space V2(Γ) of Schur multipliers.
(2) For every n ∈ N0, the Schur multiplier on B(ℓ2Γ) associated with the
characteristic function of the subset
{(x, y) ∈ Γ× Γ : d(x, y) = n}
has (completely bounded) norm at most C(n+ 1).
(3) There exists a sequence of finitely supported functions fn : N0 → [0, 1] such
that fn → 1 pointwise and that the Schur multiplier on B(ℓ2Γ) associated
with the kernel
Γ × Γ ∋ (x, y) ↦ fn(d(x, y)) ∈ [0, 1]
has (completely bounded) norm at most C for every n.
Let Γ be a hyperbolic group and d be the word length distance associated with
a fixed finite generating subset of Γ. Then, for the sequence fn as above, the
sequence of functions ϕn(x) = fn(d(e, x)) satisfy the properties required for weak
amenability. Thus we obtain the following as a corollary.
Theorem 2. Every hyperbolic group is weakly amenable.
This solves affirmatively a problem raised by Roe at the end of [Ro]. We close
the introduction with a few problems and remarks. Is it possible to construct a
family of uniformly bounded representations as it is done in [Do, PS]? Is it true
that a group which is hyperbolic relative to weakly amenable groups is again weakly
amenable? There is no serious difficulty in extending Theorem 1 to (uniformly)
fine hyperbolic graphs in the sense of Bowditch [Bo]. Ricard and Xu [RX] proved
that weak amenability with constant one is closed under free products with finite
amalgamation. The author is grateful to Professor Masaki Izumi for conversations
and encouragement.
2. Preliminary on Schur multipliers
Let Γ be a set and denote by B(ℓ2Γ) the Banach space of bounded linear oper-
ators on ℓ2Γ. We view an element A ∈ B(ℓ2Γ) as a Γ× Γ-matrix: A = [Ax,y]x,y∈Γ
with Ax,y = 〈Aδy, δx〉. For a kernel k : Γ×Γ → C, the Schur multiplier associated
with k is the map mk on B(ℓ2Γ) defined by mk(A) = [k(x, y)Ax,y]. We recall the
necessary and sufficient condition for mk to be bounded (and everywhere-defined).
See [BO, Pi] for more information of completely bounded maps and the proof of
the following theorem.
Theorem 3. Let a kernel k : Γ × Γ → C and a constant C ≥ 0 be given. Then
the following are equivalent.
(1) The Schur multiplier mk is bounded and ‖mk‖ ≤ C.
(2) The Schur multiplier mk is completely bounded and ‖mk‖cb ≤ C.
(3) There exist a Hilbert space H and vectors ζ+(x), ζ−(y) in H with norms
at most √C such that ⟨ζ−(y), ζ+(x)⟩ = k(x, y) for every x, y ∈ Γ.
We denote by V2(Γ) = {mk : ‖mk‖ < ∞} the Banach space of Schur multipliers.
The above theorem says that the sesquilinear form
ℓ∞(Γ,H) × ℓ∞(Γ,H) ∋ (ζ−, ζ+) ↦ mk ∈ V2(Γ),
where k(x, y) = 〈ζ−(y), ζ+(x)〉, is contractive for any Hilbert space H.
Let Pf(Γ) be the set of finite subsets of Γ. We note that the empty set ∅ belongs
to Pf(Γ). For S ∈ Pf(Γ), we define ξ̃+_S and ξ̃−_S ∈ ℓ2(Pf(Γ)) by

ξ̃+_S(ω) = 1 if ω ⊂ S, and 0 otherwise;   ξ̃−_S(ω) = (−1)^{|ω|} if ω ⊂ S, and 0 otherwise.

We also set ξ+_S = ξ̃+_S − δ_∅ and ξ−_S = −(ξ̃−_S − δ_∅). Note that ξ±_S ⊥ ξ±_T
if S ∩ T = ∅.
The following lemma is a trivial consequence of the binomial theorem.
Lemma 4. One has ‖ξ±_S‖^2 + 1 = ‖ξ̃±_S‖^2 = 2^{|S|} and

⟨ξ−_T, ξ+_S⟩ = 1 − ⟨ξ̃−_T, ξ̃+_S⟩ = 1 if S ∩ T ≠ ∅, and 0 otherwise,

for every S, T ∈ Pf(Γ).
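Since Lemma 4 reduces to the binomial identity Σ_{ω ⊂ S∩T} (−1)^{|ω|} = 0 whenever S ∩ T ≠ ∅, it can be verified by brute force on a small finite set. A minimal sketch (the function names and the choice Γ = {0, 1, 2} are ours, for illustration only):

```python
from itertools import combinations

GAMMA = [0, 1, 2]  # a small finite stand-in for Gamma, for the check only

def subsets(S):
    """All subsets of S, viewed inside P_f(Gamma)."""
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def xi_tilde(S, sign):
    """Coordinates of xi~^+_S (sign=+1) or xi~^-_S (sign=-1) in l2(P_f(Gamma))."""
    return {w: sign ** len(w) for w in subsets(S)}

def inner(f, g):
    """<f, g> for finitely supported real vectors stored as dicts."""
    return sum(f[w] * g[w] for w in f.keys() & g.keys())

# Lemma 4: ||xi~^{+/-}_S||^2 = 2^{|S|}, and 1 - <xi~^-_T, xi~^+_S> is the
# indicator of S and T intersecting
for S in map(set, subsets(GAMMA)):
    for T in map(set, subsets(GAMMA)):
        plus, minus = xi_tilde(S, +1), xi_tilde(T, -1)
        assert inner(plus, plus) == 2 ** len(S)
        assert inner(minus, minus) == 2 ** len(T)
        assert 1 - inner(minus, plus) == (1 if S & T else 0)
```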
3. Preliminary on hyperbolic graphs
We recall and prove some facts of hyperbolic graphs. We identify a graph Γ
with its vertex set and equip it with the graph distance:
d(x, y) = min{n : ∃x = x0, x1, . . . , xn = y such that xi and xi+1 are adjacent}.
We assume the graph Γ to be connected so that d is well-defined. For a subset
E ⊂ Γ and R > 0, we define the R-neighborhood of E by
NR(E) = {x ∈ Γ : d(x,E) < R},
where d(x,E) = inf{d(x, y) : y ∈ E}. We write BR(x) = NR({x}) for the ball
with center x and radius R. A geodesic path p is a finite or infinite sequence of
points in Γ such that d(p(m), p(n)) = |m − n| for every m,n. Most of the time,
we view a geodesic path p as a subset of Γ. We note the following fact (see e.g.,
Lemma E.8 in [BO]).
Lemma 5. Let Γ be a connected graph. Then, for any infinite geodesic path
p : N0 → Γ and any x ∈ Γ, there exists an infinite geodesic path px which starts at
x and eventually flows into p (i.e., the symmetric difference p△ px is finite).
Definition. We say a graph Γ is hyperbolic if there exists a constant δ > 0 such
that for every geodesic triangle each edge is contained in the δ-neighborhood of
the union of the other two. We say a finitely generated group Γ is hyperbolic if its
Cayley graph is hyperbolic. Hyperbolicity is a property of Γ which is independent
of the choice of the finite generating subset [GdH, Gr].
From now on, we consider a hyperbolic graph Γ which has bounded degree:
supx |BR(x)| < ∞ for every R > 0. We fix δ > 1 satisfying the above definition.
We fix once for all an infinite geodesic path p : N0 → Γ and, for every x ∈ Γ,
choose an infinite geodesic path px which starts at x and eventually flows into p.
For x, y, w ∈ Γ, the Gromov product is defined by
⟨x, y⟩_w = (1/2)(d(x,w) + d(y,w) − d(x, y)) ≥ 0.
See [BO, GdH, Gr] for more information on hyperbolic spaces and the proof of
the following lemma which says every geodesic triangle is “thin”.
Lemma 6 (Proposition 2.21 in [GdH]). Let x, y, w ∈ Γ be arbitrary. Then, for
any geodesic path [x, y] connecting x to y, one has d(w, [x, y]) ≤ 〈x, y〉w + 10δ.
Lemma 7. For x ∈ Γ and k ∈ Z, we set
T (x, k) = {w ∈ N100δ(px) : d(w, x) ∈ {k − 1, k} },
where T (x, k) = ∅ if k < 0. Then, there exists a constant R0 satisfying the
following: For every x ∈ Γ and k ∈ N0, if we denote by v the point on px such
that d(v, x) = k, then
T (x, k) ⊂ BR0(v).
Proof. Let w ∈ T (x, k) and choose a point w′ on px such that d(w,w′) < 100δ.
Then, one has |d(w′, x)− d(w, x)| < 100δ and
d(w, v) ≤ d(w,w′) + d(w′, v) ≤ 100δ + |d(w′, x)− k| < 200δ + 1.
Thus the assertion holds for R0 = 200δ + 1. □
Lemma 8. For k, l ∈ Z, we set
W(k, l) = {(x, y) ∈ Γ × Γ : T(x, k) ∩ T(y, l) ≠ ∅}.
Then, for every n ∈ N0, one has
E(n) := {(x, y) ∈ Γ × Γ : d(x, y) ≤ n} = ⋃_{k=0}^{n} W(k, n − k).
Moreover, there exists a constant R1 such that
W (k, l) ∩W (k + j, l − j) = ∅
for all j > R1.
Proof. First, if (x, y) ∈ W (k, n−k), then one can find w ∈ T (x, k)∩T (y, n−k) and
d(x, y) ≤ d(x,w) + d(w, y) ≤ n. This proves that the right hand side is contained
in the left hand side. To prove the other inclusion, let (x, y) and n ≥ d(x, y) be
given. Choose a point p on px ∩ py such that d(p, x) + d(p, y) ≥ n, and a geodesic
path [x, y] connecting x to y. By Lemma 6, there is a point a on [x, y] such that
d(a, p) ≤ 〈x, y〉p + 10δ. It follows that
〈x, p〉a + 〈y, p〉a = d(a, p)− 〈x, y〉p ≤ 10δ.
We choose a geodesic path [a, p] connecting a to p and denote by w(m) the point
on [a, p] such that d(w(m), a) = m. Consider the function f(m) = d(w(m), x) +
d(w(m), y). Then, one has that f(0) = d(x, y) ≤ n ≤ d(p, x) + d(p, y) = f(d(a, p))
and that f(m + 1) ≤ f(m) + 2 for every m. Therefore, there is m0 ∈ N0 such
that f(m0) ∈ {n− 1, n}. We claim that w := w(m0) ∈ T (x, k) ∩ T (y, n− k) for
k = d(w, x). First, note that d(w, y) = f(m0)− k ∈ {n− k − 1, n− k}. Since
⟨x, p⟩_w ≤ (1/2)(d(x, a) + d(a, w) + d(p, w) − d(x, p)) = (1/2)(d(x, a) + d(p, a) − d(x, p)) = ⟨x, p⟩_a ≤ 10δ,
one has that d(w, px) ≤ 20δ by Lemma 6. This proves that w ∈ T (x, k). One
proves likewise that w ∈ T(y, n − k). Therefore, T(x, k) ∩ T(y, n − k) ≠ ∅ and
(x, y) ∈ W (k, n− k).
Suppose now that (x, y) ∈ W (k, l) ∩ W (k + j, l − j) exists. We choose v ∈
T (x, k) ∩ T (y, l) and w ∈ T (x, k + j) ∩ T (y, l− j). Let vx (resp. wx) be the point
on px such that d(vx, x) = k (resp. d(wx, x) = k + j). Then, by Lemma 7, one
has d(v, vx) ≤ R0 and d(w,wx) ≤ R0. We choose vy, wy on py likewise for y. It
follows that d(vx, vy) ≤ 2R0 and d(wx, wy) ≤ 2R0. Choose a point p on px ∩ py.
Then, one has |d(vx, p)− d(vy , p)| ≤ 2R0 and |d(wx, p)− d(wy , p)| ≤ 2R0. On the
other hand, one has d(vx, p) = d(wx, p) + j and d(vy , p) = d(wy , p)− j. It follows
2j = d(vx, p)− d(wx, p)− d(vy, p) + d(wy , p) ≤ 4R0.
This proves the second assertion for R1 = 2R0. □
Lemma 9. We set
Z(k, l) = W(k, l) ∩ ⋂_{j=1}^{R1} W(k + j, l − j)^c.
Then, for every n ∈ N0, one has
χ_{E(n)} = Σ_{k=0}^{n} χ_{Z(k, n−k)}.
Proof. We first note that Lemma 8 implies Z(k, l) = W(k, l) ∩ ⋂_{j=1}^{∞} W(k + j, l − j)^c,
and hence ⋃_{k=0}^{n} Z(k, n − k) ⊂ ⋃_{k=0}^{n} W(k, n − k) = E(n). It is left to show that for every
(x, y) and n ≥ d(x, y), there exists one and only one k such that (x, y) ∈ Z(k, n − k).
For this, we observe that (x, y) ∈ Z(k, n − k) if and only if k is the largest integer
that satisfies (x, y) ∈ W(k, n − k). □
4. Proof of Theorem
Proposition 10. Let Γ be a hyperbolic graph with bounded degree and define
E(n) = {(x, y) : d(x, y) ≤ n}. Then, there exist a constant C0 > 0, subsets
Z(k, l) ⊂ Γ × Γ, a Hilbert space H and vectors η+_k(x) and η−_l(y) in H which satisfy the
following properties:
(1) η±_m(w) ⊥ η±_{m′}(w) for every w ∈ Γ and m, m′ ∈ N0 with |m − m′| ≥ 2.
(2) ‖η±_m(w)‖ ≤ √C0 for every w ∈ Γ and m ∈ N0.
(3) ⟨η−_l(y), η+_k(x)⟩ = χ_{Z(k,l)}(x, y) for every x, y ∈ Γ and k, l ∈ N0.
(4) χ_{E(n)} = Σ_{k=0}^{n} χ_{Z(k, n−k)} for every n ∈ N0.
Proof. We use the same notations as in the previous sections.
Let H = ℓ2(Pf(Γ))^{⊗(1+R1)} and define η+_k(x) and η−_l(y) in H by
η+_k(x) = ξ+_{T(x,k)} ⊗ ξ̃+_{T(x,k+1)} ⊗ · · · ⊗ ξ̃+_{T(x,k+R1)},
η−_l(y) = ξ−_{T(y,l)} ⊗ ξ̃−_{T(y,l−1)} ⊗ · · · ⊗ ξ̃−_{T(y,l−R1)}.
If |m − m′| ≥ 2, then T(w,m) ∩ T(w,m′) = ∅ and ξ±_{T(w,m)} ⊥ ξ±_{T(w,m′)}. This implies
the first assertion. By Lemma 7 and the assumption that Γ has bounded degree,
one has C1 := sup_{w,m} |T(w,m)| ≤ sup_v |B_{R0}(v)| < ∞. Now the second assertion
follows from Lemma 4 with C0 = 2^{C1(1+R1)}. Finally, by Lemma 4, one has
⟨η−_l(y), η+_k(x)⟩ = χ_{W(k,l)}(x, y) ∏_{j=1}^{R1} χ_{W(k+j, l−j)^c}(x, y) = χ_{Z(k,l)}(x, y).
This proves the third assertion. The fourth is nothing but Lemma 9. □
Proof of Theorem 1. Take η±_m ∈ ℓ∞(Γ,H) as in Proposition 10 and set C = 2C0.
For every z ∈ D, we define ζ±_z ∈ ℓ∞(Γ,H) by the absolutely convergent series
ζ+_z(x) = √(1 − z) Σ_{k=0}^{∞} z^k η+_k(x)   and   ζ−_z(y) = √(1 − z̄) Σ_{l=0}^{∞} z̄^l η−_l(y),
where √(1 − z) denotes the principal branch of the square root. The construction
of ζ±_z draws upon [PS]. We note that the map D ∋ z ↦ (ζ±_z(w))_w ∈ ℓ∞(Γ,H) is
(anti-)holomorphic. By Proposition 10, one has
⟨ζ−_z(y), ζ+_z(x)⟩ = (1 − z) Σ_{k,l=0}^{∞} z^{k+l} χ_{Z(k,l)}(x, y)
= (1 − z) Σ_{n=0}^{∞} z^n χ_{E(n)}(x, y)
= (1 − z) Σ_{n=d(x,y)}^{∞} z^n
= z^{d(x,y)}
for all x, y ∈ Γ, and, writing z^+ = z and z^− = z̄,
‖ζ±_z(w)‖^2 ≤ 2|1 − z| Σ_{j=0,1} ‖ Σ_{m=0}^{∞} (z^±)^{2m+j} η±_{2m+j}(w) ‖^2
= 2|1 − z| Σ_{j=0,1} Σ_{m=0}^{∞} |z|^{4m+2j} ‖η±_{2m+j}(w)‖^2
≤ 2|1 − z| (1/(1 − |z|^2)) C0
≤ C |1 − z| / (1 − |z|)
for all w ∈ Γ. Therefore the Schur multiplier θz associated with the kernel z^d has
(completely bounded) norm at most C|1 − z|/(1 − |z|) by Theorem 3. Moreover,
the map D ∋ z ↦ θz ∈ V2(Γ) is holomorphic.
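The key series manipulation in this computation, (1 − z) Σ_{n ≥ d} z^n = z^d for |z| < 1, is easy to confirm numerically; a quick sketch using partial sums (our own notation):

```python
import cmath

def kernel_partial(z, d, N=4000):
    """Partial sum (1 - z) * sum_{n=d}^{N} z^n; for |z| < 1 this tends to z^d."""
    return (1 - z) * sum(z ** n for n in range(d, N + 1))

z = 0.7 * cmath.exp(0.5j)  # an arbitrary point of the open unit disk D
for d in range(6):
    assert abs(kernel_partial(z, d) - z ** d) < 1e-9
```

The exact partial sum telescopes to z^d − z^{N+1}, so the error is |z|^{N+1}, negligible here.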
For the second assertion, we simply write ‖Z‖ for the (completely bounded)
norm of the Schur multiplier associated with the characteristic function χZ of a
subset Z ⊂ Γ× Γ. By Proposition 10 and Theorem 3, one has
‖E(n)‖ ≤ Σ_{k=0}^{n} ‖Z(k, n − k)‖ ≤ C0(n + 1)
and ‖{(x, y) : d(x, y) = n}‖ = ‖E(n) \ E(n − 1)‖ ≤ C(n + 1). This proves the
second assertion. The third assertion follows from the previous two, by choosing
f_n(d) = χ_{E(K_n)}(d) r_n^d for suitable 0 < r_n < 1 and K_n ∈ N0 with r_n → 1 and
K_n → ∞. We refer to [BP, Ha] for the proof of this fact. □
References
[Bo] B.H. Bowditch, Relatively hyperbolic groups. Preprint. 1999.
[BP] M. Bożejko and M.A. Picardello, Weakly amenable groups and amalgamated products.
Proc. Amer. Math. Soc. 117 (1993), 1039–1046.
[BO] N. Brown and N. Ozawa, C∗-algebras and Finite-Dimensional Approximations. Graduate
Studies in Mathematics, 88. American Mathematical Society, Providence, RI, 2008.
[dCH] J. de Cannière and U. Haagerup, Multipliers of the Fourier algebras of some simple Lie
groups and their discrete subgroups. Amer. J. Math. 107 (1985), 455–500.
[Co] M. Cowling, Harmonic analysis on some nilpotent Lie groups (with application to the
representation theory of some semisimple Lie groups). Topics in modern harmonic anal-
ysis, Vol. I, II (Turin/Milan, 1982), 81–123, Ist. Naz. Alta Mat. Francesco Severi, Rome,
1983.
[CH] M. Cowling and U. Haagerup, Completely bounded multipliers of the Fourier algebra of
a simple Lie group of real rank one. Invent. Math. 96 (1989), 507–549.
[Do] A.H. Dooley, Heisenberg-type groups and intertwining operators. J. Funct. Anal. 212
(2004), 261–286.
[GdH] E. Ghys and P. de la Harpe, Sur les groupes hyperboliques d’aprés Mikhael Gromov.
Progress in Math., 83, Birkhäuser, 1990.
[Gr] M. Gromov, Hyperbolic groups. Essays in group theory, 75–263, Math. Sci. Res. Inst.
Publ. 8, Springer, New York, 1987.
[GH] E. Guentner and N. Higson, Weak amenability of CAT(0) cubical groups. Preprint
arXiv:math/0702568.
[Ha] U. Haagerup, An example of a nonnuclear C∗-algebra, which has the metric approximation
property. Invent. Math. 50 (1978/79), 279–293.
[Mi] N. Mizuta, A Bożejko-Picardello type inequality for finite dimensional CAT(0) cube com-
plexes. J. Funct. Anal., in press.
[Pi] G. Pisier, Similarity problems and completely bounded maps. Second, expanded edition.
Includes the solution to "The Halmos problem". Lecture Notes in Mathematics, 1618.
Springer-Verlag, Berlin, 2001.
[PS] T. Pytlik and R. Szwarc, An analytic family of uniformly bounded representations of free
groups. Acta Math. 157 (1986), 287–309.
[RX] É. Ricard and Q. Xu, Khintchine type inequalities for reduced free products and appli-
cations. J. Reine Angew. Math. 599 (2006), 27–59.
[Ro] J. Roe, Lectures on coarse geometry, University Lecture Series, 31. American Mathemat-
ical Society, Providence, RI, 2003.
Department of Mathematical Sciences, University of Tokyo, Komaba, 153-8914,
Department of Mathematics, UCLA, Los Angeles, CA 90095-1555
E-mail address: [email protected]
|
0704.1636 | Light Curves of Dwarf Plutonian Planets and other Large Kuiper Belt Objects: Their Rotations, Phase Functions and Absolute Magnitudes | arXiv:0704.1636v2 [astro-ph] 13 Apr 2007
Light Curves of Dwarf Plutonian Planets and other Large Kuiper
Belt Objects: Their Rotations, Phase Functions and Absolute
Magnitudes
Scott S. Sheppard
Department of Terrestrial Magnetism, Carnegie Institution of Washington,
5241 Broad Branch Rd. NW, Washington, DC 20015
[email protected]
ABSTRACT
I report new time-resolved light curves and determine the rotations and phase
functions of several large Kuiper Belt objects, including the dwarf planet
Eris (2003 UB313). Three of the new sample of ten Trans-Neptunian objects
display obvious short-term periodic light curves. (120348) 2004 TY364 shows a
light curve which if double-peaked has a period of 11.70±0.01 hours and a peak-
to-peak amplitude of 0.22±0.02 magnitudes. (84922) 2003 VS2 has a well defined
double-peaked light curve of 7.41±0.02 hours with a 0.21±0.02 magnitude range.
(126154) 2001 YH140 shows variability of 0.21± 0.04 magnitudes with a possible
13.25±0.2 hour single-peaked period. The seven new KBOs in the sample which
show no discernible variations within the uncertainties on short rotational time
scales are 2001 UQ18, (55565) 2002 AW197, (119979) 2002 WC19, (120132) 2003
FY128, (136199) Eris 2003 UB313, (90482) Orcus 2004 DW, and (90568) 2004
GV9. Four of the ten newly sampled Kuiper Belt objects were observed over a
significant range of phase angles to determine their phase functions and absolute
magnitudes. The three medium to large sized Kuiper Belt objects 2004 TY364,
Orcus and 2004 GV9 show fairly steep linear phase curves (∼ 0.18 to 0.26 mags
per degree) between phase angles of 0.1 and 1.5 degrees. This is consistent with
previous measurements obtained for moderately sized Kuiper Belt objects. The
extremely large dwarf planet Eris (2003 UB313) shows a shallower phase curve
(0.09 ± 0.03 mags per degree) which is more similar to the other known dwarf
planet Pluto. It appears the surface properties of the largest dwarf planets in the
Kuiper Belt may be different from those of the smaller Kuiper Belt objects. This may have
to do with the larger objects' ability to hold more volatile ices as well as sustain
atmospheres. Finally, it is found that the absolute magnitudes obtained using
the phase slopes found for individual objects are several tenths of magnitudes
different than that given by the Minor Planet Center.
Subject headings: Kuiper Belt — Oort Cloud — minor planets, asteroids — solar
system: general — planets and satellites: individual (2001 UQ18, (126154) 2001
YH140, (55565) 2002 AW197, (119979) 2002 WC19, (120132) 2003 FY128, (136199)
Eris 2003 UB313, (84922) 2003 VS2, (90482) Orcus 2004 DW, (90568) 2004 GV9,
and (120348) 2004 TY364)
1. Introduction
To date, only about 1% of the nearly one hundred thousand Trans-Neptunian
objects (TNOs) expected to be larger than about 50 km in radius just beyond
Neptune's orbit are known (Trujillo et al. 2001). The majority of the largest Kuiper Belt objects (KBOs) now
being called dwarf Plutonian planets (radii > 400 km) have only recently been discovered
in the last few years (Brown et al. 2005). The large self-gravity of the dwarf planets will
allow them to be near hydrostatic equilibrium, to have possible tenuous atmospheres, and to
retain extremely volatile ices such as methane, and they are likely to be differentiated. Thus
the surfaces as well as the interior physical characteristics of the largest TNOs may be
significantly different from those of the smaller TNOs.
The largest TNOs have not been observed to have any remarkable differences from the
smaller TNOs in optical and near infrared broad band color measurements (Doressoundiram
et al. 2005; Barucci et al. 2005). But near-infrared spectra have shown that only the three
largest TNOs (Pluto, Eris (2003 UB313) and (136472) 2005 FY9) have obvious methane on
their surfaces while slightly smaller objects are either spectrally featureless or have strong
water ice signatures (Brown et al. 2005; Licandro et al. 2006). In addition to the near-infrared
spectral differences, the albedos of the larger objects appear to be predominantly
higher than those for the smaller objects (Cruikshank et al. 2005; Bertoldi et al. 2006; Brown
et al. 2006). A final indication that the larger objects are indeed different is that the shapes
of the largest KBOs seem to signify they are more likely to be in hydrostatic equilibrium
than that for the smaller KBOs (Sheppard and Jewitt 2002; Trilling and Bernstein 2006;
Lacerda and Luu 2006).
The Kuiper Belt has been dynamically and collisionally altered throughout the age of
the solar system. The largest KBOs should have rotations that have been little influenced
since the sculpting of the primordial Kuiper Belt. This is not the case for the smaller KBOs
where recent collisions and fragmentation processes will have highly modified their spins
throughout the age of the solar system (Davis and Farinella 1997). The large volatile rich
KBOs show significantly different median period and possible amplitude rotational differ-
ences when compared to the rocky large main belt asteroids which is expected because of
their differing compositions and collisional histories (Sheppard and Jewitt 2002; Lacerda and
Luu 2006).
I have furthered the photometric monitoring of large KBOs (absolute magnitudes H <
5.5 or radii greater than about 100 km assuming moderate albedos) in order to determine
their short term rotational and long term phase related light curves to better understand
their rotations, shapes and possible surface characteristics. This is a continuation of previous
works (Jewitt and Sheppard 2002; Sheppard and Jewitt 2002; Sheppard and Jewitt 2003;
Sheppard and Jewitt 2004).
2. Observations
The data for this work were obtained at the Dupont 2.5 meter telescope at Las Campanas
in Chile and the University of Hawaii 2.2 meter telescope atop Mauna Kea in Hawaii.
Observations at the Dupont 2.5 meter telescope were performed on the nights of Febru-
ary 14, 15 and 16, March 9 and 10, October 25, 26, and 27, November 28, 29, and 30 and
December 1, 2005 UT. The instrument used was the Tek5 with a 2048 × 2048 pixel CCD
with 24 µm pixels giving a scale of 0.′′259 pixel−1 at the f/7.5 Cassegrain focus for a field
of view of about 8′.85 × 8′.85. Images were acquired through a Harris R-band filter while
the telescope was autoguided on nearby bright stars at sidereal rates (Table 1). Seeing was
generally good and ranged from 0.′′6 to 1.′′5 FWHM.
Observations at the University of Hawaii 2.2 meter telescope were obtained on the nights
of December 19, 21, 23 and 24, 2003 UT and used the Tektronix 2048 × 2048 pixel CCD.
The pixels were 24 µm in size giving 0.′′219 pixel−1 scale at the f/10 Cassegrain focus for
a field of view of about 7′.5 × 7′.5. Images were obtained in the R-band filter based on the
Johnson-Kron-Cousins system with the telescope auto-guiding at sidereal rates using nearby
bright stars. Seeing was very good over the several nights ranging from 0.′′6 to 1.′′2 FWHM.
For all observations the images were first bias subtracted and then flat-fielded using the
median of a set of dithered images of the twilight sky. The photometry for the KBOs was
done in two ways in order to optimize the signal-to-noise ratio. First, aperture correction
photometry was performed by using a small aperture on the KBOs (0.′′65 to 1.′′04 in radius)
and both the same small aperture and a large aperture (2.′′63 to 3.′′63 in radius) on several
nearby unsaturated bright field stars. The magnitude within the small aperture used for the
KBOs was corrected by determining the correction from the small to the large aperture using
the PSF of the field stars. Second, I performed photometry on the KBOs using the same field
stars but only using the large aperture on the KBOs. The smaller apertures allow better
photometry for the fainter objects since it uses only the high signal-to-noise central pixels.
The range of radii varies because the actual radii used depends on the seeing. The worse
the seeing the larger the radius of the aperture needed in order to optimize the photometry.
Both techniques found similar results, though as expected, the smaller aperture gives less
scatter for the fainter objects while the larger aperture is superior for the brighter objects.
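In magnitude units, the aperture-correction step described above amounts to adding the mean star offset (large-aperture magnitude minus small-aperture magnitude, measured on the field stars) to the KBO's small-aperture magnitude. A schematic version, with hypothetical function names and an arbitrary zero point:

```python
import math

def to_mag(flux, zero_point=25.0):
    """Instrumental magnitude from background-subtracted counts."""
    return zero_point - 2.5 * math.log10(flux)

def aperture_correction(star_fluxes_small, star_fluxes_large):
    """Mean small -> large aperture correction, in magnitudes, from field stars."""
    corrections = [to_mag(fl) - to_mag(fs)
                   for fs, fl in zip(star_fluxes_small, star_fluxes_large)]
    return sum(corrections) / len(corrections)

def kbo_corrected_mag(kbo_flux_small, star_fluxes_small, star_fluxes_large):
    """Small-aperture KBO magnitude corrected to the large aperture."""
    return to_mag(kbo_flux_small) + aperture_correction(
        star_fluxes_small, star_fluxes_large)
```

Since the stars gather more flux in the large aperture, the correction is negative (brightens the magnitude), compensating for the KBO light falling outside the small aperture.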
Photometric standard stars from Landolt (1992) were used for calibration. Each in-
dividual object was observed at all times in the same filter and with the same telescope
setup. Relative photometric calibration from night to night was very stable since the same
fields stars were observed. The few observations that were taken in mildly non-photometric
conditions (i.e. thin cirrus) were easily calibrated to observations of the same field stars on
the photometric nights. Thus, the data points on these mildly non-photometric nights are
almost as good as the other data with perhaps a slightly larger error bar. The dominant
source of error in the photometry comes from simple root-N noise.
3. Light Curve Causes
The apparent magnitude or brightness of an atmosphereless inert body in our solar system
is mainly from reflected sunlight and can be calculated as
mR = m⊙ − 2.5 log [ pR r^2 φ(α) / (2.25 × 10^16 R^2 ∆^2) ]     (1)
in which r [km] is the radius of the KBO, R [AU] is the heliocentric distance, ∆ [AU] is
the geocentric distance, m⊙ is the apparent red magnitude of the sun (−27.1), mR is the
apparent red magnitude, pR is the red geometric albedo, and φ(α) is the phase function in
which the phase angle α = 0 deg at opposition and φ(0) = 1.
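As a sanity check, the formula above can be put straight into code (a sketch; the function name and the example numbers are ours):

```python
import math

M_SUN_R = -27.1  # apparent red magnitude of the Sun

def apparent_red_mag(p_R, r_km, R_au, delta_au, phi=1.0):
    """Apparent red magnitude of an inert body of radius r_km [km] with red
    geometric albedo p_R, at heliocentric distance R_au and geocentric
    distance delta_au [AU]; phi is the phase function value, phi(0) = 1."""
    return M_SUN_R - 2.5 * math.log10(
        p_R * r_km ** 2 * phi / (2.25e16 * R_au ** 2 * delta_au ** 2))

# e.g. a Pluto-like body (p_R ~ 0.5, r ~ 1150 km) at R = 30 AU, Delta = 29 AU
# comes out near m_R ~ 14, in the right range for Pluto.
```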
The apparent magnitude of a TNO may vary for the following main reasons:
1) The geometry in which R,∆ and/or α changes for the TNO. Geometrical consider-
ations at the distances of the TNOs are usually only noticeable over a few weeks or longer
and thus are considered long-term variations. These are further discussed in section 5.
2) The TNO's albedo, pR, may not be uniform on its surface, causing the apparent
magnitude to vary as the different albedo markings on the TNO's surface rotate in and out
of our line of sight. Albedo or surface variations on an object usually cause less than a 30%
difference from maximum to minimum brightness of an object. (134340) Pluto, because of
its atmosphere (Spencer et al. 1997), has one of the highest known amplitudes from albedo
variations (∼ 0.3 magnitudes; Buie et al. 1997).
3) Shape variations or elongation of an object will cause the effective radius of an object
to our line of sight to change as the TNO rotates. A double peaked periodic light curve
is expected to be seen in this case since the projected cross section would go between two
minima (short axis) and two maxima (long axis) during one complete rotation of the TNO.
Elongation from material strength is likely for small TNOs (r < 100 km) but for the larger
TNOs observed in this paper no significant elongation is expected from material strength
because of their large self-gravity.
A large TNO (r > 100 km) may be significantly elongated if it has a large amount of
rotational angular momentum. An object will be near breakup if it has a rotation period
near the critical rotation period (Pcrit) at which centripetal acceleration equals gravitational
acceleration towards the center of a rotating spherical object,
Pcrit = ( 3π / (Gρ) )^{1/2}     (2)
where G is the gravitational constant and ρ is the density of the object. With ρ = 10^3 kg m^−3
the critical period is about 3.3 hours. At periods just below the critical period the object
will likely break apart. For objects with rotations significantly above the critical period the
shapes will be oblate Maclaurin spheroids which do not show any significant rotational
light curves produced by shape (Jewitt and Sheppard 2002). For periods just above the
critical period the equilibrium figures are triaxial ellipsoids which are elongated from the
large centripetal force and usually show prominent rotational light curves (Weidenschilling
1981; Holsapple 2001; Jewitt and Sheppard 2002).
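The critical-period formula above is a one-liner to evaluate, and with ρ = 1000 kg m⁻³ it reproduces the ≈ 3.3 hour figure quoted in the text (a sketch, our naming):

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def critical_period_hours(rho):
    """Breakup spin period [hours] of a strengthless sphere of density
    rho [kg m^-3]: P_crit = sqrt(3 * pi / (G * rho))."""
    return math.sqrt(3.0 * math.pi / (G * rho)) / 3600.0
```

Denser bodies can spin faster before shedding mass, so the critical period shrinks as ρ grows.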
For an object that is triaxially elongated the peak-to-peak amplitude of the rotational
light curve allows for the determination of the projection of the body shape into the plane
of the sky by (Binzel et al. 1989)
∆m = 2.5 log(a/b) − 1.25 log [ (a^2 cos^2 θ + c^2 sin^2 θ) / (b^2 cos^2 θ + c^2 sin^2 θ) ]     (3)
where a ≥ b ≥ c are the semiaxes with the object in rotation about the c axis, ∆m is
expressed in magnitudes, and θ is the angle at which the rotation (c) axis is inclined to the
line of sight (an object with θ = 90 deg. is viewed equatorially). The amplitudes of the light
curves produced from rotational elongation can range up to about 0.9 magnitudes (Leone et
al. 1984).
Assuming θ = 90 degrees gives a/b = 10^{0.4∆m}. Thus the easily measured quantities of
the rotation period and amplitude can be used to determine a minimum density for an object
if it is assumed to be rotationally elongated and strengthless (i.e. the body's structure behaves
like a fluid, Chandrasekhar 1969). The two best cases of this high angular momentum
elongation in the Kuiper Belt are (20000) Varuna (Jewitt and Sheppard 2002) and (136108)
2003 EL61 (Rabinowitz et al. 2006).
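Equation 3 and its equator-on inversion a/b = 10^{0.4∆m} are also easy to script; for the ≈ 0.2 mag amplitudes reported below this gives a/b ≈ 1.2, the projected axis ratio used in Sections 4.1 and 4.2 (a sketch, our naming):

```python
import math

def light_curve_amplitude(a, b, c, theta_deg):
    """Equation 3: peak-to-peak amplitude [mag] of a triaxial body with
    semiaxes a >= b >= c, rotating about the c axis, viewed at angle
    theta_deg to the rotation axis (90 deg = equator-on)."""
    t = math.radians(theta_deg)
    cos2, sin2 = math.cos(t) ** 2, math.sin(t) ** 2
    return (2.5 * math.log10(a / b)
            - 1.25 * math.log10((a * a * cos2 + c * c * sin2)
                                / (b * b * cos2 + c * c * sin2)))

def axis_ratio_equator_on(delta_m):
    """Invert Equation 3 at theta = 90 deg: a/b = 10^(0.4 * delta_m)."""
    return 10.0 ** (0.4 * delta_m)
```

At θ = 0 (pole-on) the two terms cancel and the amplitude vanishes, as expected for a body rotating about the line of sight.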
4) Periodic light curves may be produced if a TNO is an eclipsing or contact binary. A
double-peaked light curve would be expected with a possible characteristic notch shape near
the minimum of the light curve. Because the two objects may be tidally elongated the light
curves can range up to about 1.2 magnitudes (Leone et al. 1984). The best example of such
an object in the Kuiper Belt is 2001 QG298 (Sheppard and Jewitt 2004).
5) A non-periodic short-term light curve may occur from a complex rotational state, a
recent collision, a binary with each component having a large light curve amplitude and a
different rotation period or outgassing/cometary activity. These types of short term vari-
ability are expected to be extremely rare and none have yet been reliably detected in the
Kuiper Belt (Sheppard and Jewitt 2003; Belskaya et al. 2006).
4. Light Curve Results and Analysis
The photometric measurements for the 10 newly observed KBOs are listed in Table 1,
where the columns include the start time of each integration, the corresponding Julian date,
and the magnitude. No correction for light travel time has been made. Results of the light
curve analysis for all the KBOs newly observed are summarized in Table 2.
The phase dispersion minimization (PDM) method (Stellingwerf 1978) was used to
search for periodicity in the individual light curves. In PDM, the metric is the so-called
Theta parameter, which is essentially the variance of the data when phased by a given
period divided by the variance of the unphased data. The best fit period should have a very
small dispersion compared to the unphased data and thus Theta << 1 indicates that a good
fit has been found. In practice, a Theta less than 0.4 indicates a possible periodic signature.
4.1. (120348) 2004 TY364
Through the PDM analysis I found a strong Theta minimum for 2004 TY364 near a period
of P = 5.85 hours with weaker alias periods flanking this (Figure 1). Phasing the data to all
possible periods in the PDM plot with Theta < 0.4 found that only the single-peaked period
near 5.85 hours and the double-peaked period near 11.70 hours fits all the data obtained
from October, November and December 2005. Both periods have an equally low Theta
parameter of about 0.15 and either could be the true rotation period (Figures 2 and 3). The
peak-to-peak amplitude is 0.22± 0.02 magnitudes.
If 2004 TY364 has a double-peaked period it may be elongated from its high angular
momentum. If the TNO is assumed to be observed equator on then from Equation 3 the
a : b axis ratio is about 1.2. Following Jewitt and Sheppard (2002) I assume the TNO is
a rotationally elongated strengthless rubble pile. Using the spin period of 11.7 hours, the
1.2 a : b axis ratio found above and the Jacobi ellipsoid tables produced by Chandrasekhar
(1969) I find the minimum density of 2004 TY364 is about 290 kg m^−3 with an a : c axis
ratio of about 1.9. This density is quite low which leads one to believe either the TNO is
not being viewed equator on or the relatively long double-peaked period is not created from
high angular momentum of the object.
4.2. (84922) 2003 VS2
The KBO 2003 VS2 has a very low Theta of less than 0.1 near 7.41 hours in the PDM
plot (Figure 4). Phasing the December 2003 data to this period shows a well defined double-
peaked period (Figure 5). The single peaked period for this result would be near 3.71 hours
which was a possible period determined for this object by Ortiz et al. (2006). The 3.71 hour
single-peaked period does not look as convincing (Figure 6) which confirms the PDM result
that the single-peaked period has about three times more dispersion than the double-peaked
period. This is likely because one of the peaks is taller in amplitude (∼ 0.05 mags) and a
little wider. The other single-peaked period of 4.39 hours (Figure 7) and the double-peaked
period of 8.77 hours (Figure 8) mentioned by Ortiz et al. (2006) do not show a low Theta
in the PDM and also do not look convincing when examining the phased data. The peak-
to-peak amplitude is 0.21 ± 0.02 magnitudes, which is similar to that detected by Ortiz et
al. (2006).
The fast rotation of 7.41 hours and the double-peaked nature suggest that 2003 VS2 may
be elongated from its high angular momentum. Using Equation 3 and assuming the TNO
is observed equator on the a : b axis ratio is about 1.2. Using the spin period of 7.41 hours,
the 1.2 a : b axis ratio and the Jacobi ellipsoid tables produced by Chandrasekhar (1969) I
find the minimum density of 2003 VS2 is about 720 kg m−3 with an a : c axis ratio of about
1.9. This result is similar to other TNO densities found through the Jacobi ellipsoid
assumption (Jewitt and Sheppard 2002; Sheppard and Jewitt 2002; Rabinowitz et al. 2006)
as well as recent thermal results from the Spitzer space telescope (Stansberry et al. 2006).
4.3. (126154) 2001 YH140
(126154) 2001 YH140 shows variability of 0.21 ± 0.04 magnitudes. The PDM for this
TNO shows possible periods near 8.5, 9.15, 10.25 and 13.25 hours though only the 13.25
hour period has a Theta less than 0.4 (Figure 9). Visual examination of the phased data shows
that only the 13.25 hour period is viable (Figure 10). This is consistent with the observation that
one minimum and one maximum were shown on December 23, 2003 in about six and a half
hours, which would give a single-peaked light curve of twice this time or about 13.25 hours.
Ortiz et al. (2006) found this object to have a similar variability but with very limited data
could not obtain a reliable period. Ortiz et al. did report one period of 12.99 hours, which
may be consistent with the result found here.
4.4. Flat Rotation Curves
Seven of the ten newly observed KBOs: 2001 UQ18, (55565) 2002 AW197, (119979) 2002
WC19, (120132) 2003 FY128, (136199) Eris 2003 UB313, (90482) Orcus 2004 DW, and (90568)
2004 GV9 showed no variability within the photometric uncertainties of the observations
(Table 2; Figures 11 to 21). These KBOs thus either have extremely long rotational periods,
are viewed nearly pole-on or most likely have small peak-to-peak rotational amplitudes.
The upper limits for the objects' short-term rotational variability, as shown in Table 2, were
determined through a Monte Carlo simulation, which determined the
lowest amplitude that would have been visible in the data given the time sampling and variance
of the photometry as well as the errors on the individual points.
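The Monte Carlo procedure is only summarized in the text; the sketch below is one plausible reading of it (sinusoids injected at random periods and phases into the real time sampling, with a simple scatter-based detection criterion). The period range, detection threshold and 95% recovery fraction are choices made here for illustration, not the paper's exact settings:

```python
import numpy as np

def amplitude_upper_limit(t, sigma, rng=None, n_trials=200):
    """Smallest peak-to-peak sinusoid amplitude (mag) that would have
    been recovered in >=95% of trials, given the actual time sampling t
    (hours) and per-point errors sigma (mag).  "Detected" here means
    the scatter of the injected light curve exceeds 1.5x the noise
    level -- an illustrative criterion, not the paper's exact one."""
    if rng is None:
        rng = np.random.default_rng(1)
    noise = np.mean(sigma)
    for amp in np.arange(0.01, 0.51, 0.01):
        hits = 0
        for _ in range(n_trials):
            period = rng.uniform(4.0, 12.0)        # trial period, hours
            phase = rng.uniform(0.0, 2.0 * np.pi)  # random phase
            model = 0.5 * amp * np.sin(2.0 * np.pi * t / period + phase)
            obs = model + rng.normal(0.0, sigma)
            if np.std(obs, ddof=1) > 1.5 * noise:
                hits += 1
        if hits >= 0.95 * n_trials:
            return amp
    return None
```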
Ortiz et al. (2006) reported a possible 0.04± 0.02 photometric range for (90482) Orcus
2004 DW and a period near 10 hours. I do not confirm this result here. Ortiz et al. (2006)
also reported a marginal 0.08± 0.03 photometric range for (55565) 2002 AW197 with no one
clear best period. I cannot confirm this result and find that for 2002 AW197 the rotational
variability appears significantly less than 0.08 magnitudes.
Some of the KBOs in this sample appear to have variability which is just below the
threshold of the data detection and thus no significant period could be obtained with the
current data. In particular, 2001 UQ18 appears to have a light curve with a significant
amplitude above 0.1 magnitudes, but the data are sparser for this object than for most of the
others and thus no significant period is found. Followup observations will be required in order
to determine if most of these flat light curve objects do have any significant short-term
variability.
4.5. Comparisons with Size, Amplitude, Period, and MBAs
In Figures 22 and 23 are plotted the diameters of the largest TNOs and Main Belt
Asteroids (MBAs) versus rotational amplitude and period, respectively. Most outliers on
Figure 22 can easily be explained from the discussion in section 3. Varuna, 2003 EL61 and
the other unmarked TNOs with photometric ranges above about 0.4 magnitudes are all
spinning faster than about 8 hours. They are thus likely hydrostatic equilibrium triaxial
Jacobi ellipsoids which are elongated by their rotational angular momentum (Jewitt
and Sheppard 2002; Sheppard and Jewitt 2002; Rabinowitz et al. 2006). 2001 QG298's
large photometric range is probably because this object is a contact binary, as indicated by its
longer period and notch-shaped light curve (Sheppard and Jewitt 2004). Pluto's relatively
large amplitude light curve is best explained through its active atmosphere (Spencer et
al. 1997). Like the MBAs, the photometric amplitudes of the TNOs start to increase
significantly at sizes less than about 300 km in diameter. The likely reason is that this size range
is where the objects are still large enough to be dominated by self-gravity and are not easily
disrupted through collisions but can still have their angular momentum highly altered from
the collisional process (Farinella et al. 1982; Davis and Farinella 1997). Thus this is the
region most likely to be populated by high angular momentum triaxial Jacobi ellipsoids
(Farinella et al. 1992).
The data obtained here for Eris (2003 UB313) are among the highest signal-to-noise time-resolved
photometry measurements of any TNO searched for a rotational period. There is no obvious
rotational light curve larger than about 0.01 magnitudes in the extensive data, which
indicates a very uniform surface, a rotation period of over a few days or a pole-on view-
ing geometry. Carraro et al. (2006) suggest a possible 0.05 magnitude variability for Eris
between nights but this is not obvious in this data set. The similar inferred composition
and size of Eris compared to Pluto suggest these objects should behave very similarly (Brown et al.
2005, 2006). Since Pluto has a relatively substantial atmosphere at its current position of
about 30 AU (Elliot et al. 2003; Sicardy et al. 2003) it is very likely that Eris has an active
atmosphere when near its perihelion of 38 AU. At Eris’ current distance of 97 AU its surface
thermal temperature should be over 20 degrees colder than when at perihelion. Like Pluto,
Eris’ putative atmosphere near perihelion would likely be composed of N2, CH4 or CO which
would mostly condense when near aphelion (Spencer et al. 1997; Hubbard 2003), effectively
resurfacing the TNO every few hundred years. This is the most likely explanation as to why
the surface of Eris appears so uniform. This may also be true for 2005 FY9 which appears
compositionally similar to Pluto (Licandro et al. 2006) and at 52 AU is about 15 degrees
colder than Pluto.
Figure 23 shows that the median rotation period for TNOs is about 9.5 ± 1
hours, which is marginally longer than for similarly sized main belt asteroids (7.0 ± 1
hours) (Sheppard and Jewitt 2002; Lacerda and Luu 2006). If confirmed, the likely
reason for this difference is the different collisional histories of each reservoir as well as the objects'
compositions.
5. Phase Curve Results
The phase function of an object's surface mostly depends on the albedo, texture and
particle structure of the regolith. Four of the newly imaged TNOs (Eris 2003 UB313, (120348)
2004 TY364, Orcus 2004 DW, and (90568) 2004 GV9) were viewed on two separate telescope
observing runs occurring at significantly different phase angles (Figures 24 to 27). This
allowed their linear phase functions,
φ(α) = 10^(−0.4βα)    (4)
to be estimated, where α is the phase angle in degrees and β is the linear phase coefficient in
magnitudes per degree (Table 3). The phase angles for TNOs are always less than about 2
degrees as seen from the Earth. Most atmosphereless bodies show opposition effects at such
small phase angles (Muinonen et al. 2002). The TNOs appear to have mostly linear phase
curves between phase angles of about 2 and 0.1 degrees (Sheppard and Jewitt 2002,2003;
Rabinowitz et al. 2007). For phase angles smaller than about 0.1 degrees TNOs may display
an opposition spike (Hicks et al. 2005; Belskaya et al. 2006).
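With two epochs per object, the linear phase coefficient follows directly from the reduced magnitudes at the two phase angles. The sketch below is a hedged two-point estimate (the fits reported in Table 3 may weight multiple nights per run); it assumes only Equation 4 in magnitude form, m(1, 1, α) = m(1, 1, 0) + βα:

```python
import math

def reduced_magnitude(m_app, r_helio_au, delta_geo_au):
    """m(1,1,alpha) = m - 5 log10(R * Delta): the apparent magnitude
    scaled to heliocentric (R) and geocentric (Delta) distances of 1 AU."""
    return m_app - 5.0 * math.log10(r_helio_au * delta_geo_au)

def linear_phase_coefficient(m1, r1, d1, a1, m2, r2, d2, a2):
    """beta (mag/deg) from two epochs at phase angles a1, a2 (deg),
    assuming the linear phase law phi(alpha) = 10**(-0.4*beta*alpha),
    i.e. m(1,1,alpha) = m(1,1,0) + beta * alpha."""
    mr1 = reduced_magnitude(m1, r1, d1)
    mr2 = reduced_magnitude(m2, r2, d2)
    return (mr1 - mr2) / (a1 - a2)
```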
The moderate to large KBOs Orcus, 2004 TY364, and 2004 GV9 show steep linear R-band
phase slopes (0.18 to 0.26 mags per degree) similar to previous measurements of similarly
sized moderate to large TNOs (Sheppard and Jewitt 2002,2003; Rabinowitz et al. 2007). In
contrast the extremely large dwarf planet Eris (2003 UB313) has a shallower phase slope (0.09
mags per degree) more similar to Charon (∼ 0.09 mags/deg; Buie et al. (1997)) and possibly
Pluto (∼ 0.03 mags/deg; Buratti et al. (2003)). Empirically, lower phase coefficients between
0.5 and 2 degrees may correspond to bright icy objects whose surfaces have probably been
recently resurfaced such as Triton, Pluto and Europa (Buie et al. 1997; Buratti et al. 2003;
Rabinowitz et al. 2007). Thus Eris’ low β is consistent with it having an icy surface that
has recently been resurfaced.
In Figures 28 to 32 are plotted the linear phase coefficients found for several TNOs versus
several different parameters (reduced magnitude, albedo, rotational photometric amplitude
and B − I broad band color). Table 4 shows the significance of any correlations. Based on
only a few large objects it appears that the larger TNOs may have lower β values. This
is true for the R-band and V-band data at the 97% confidence level but interestingly using
data from Rabinowitz et al. (2007) no correlation is seen in the I-band (Table 4). Thus
further measurements are needed to determine if there is a significantly strong correlation
between the size and phase function of TNOs. Further, it may be that the albedos are anti-
correlated with β, but because so few albedos are known, the statistics do not give good
confidence in this correlation. If confirmed with additional observations, these
correlations may be an indication that larger TNOs' surfaces are less susceptible to phase
angle opposition effects at optical wavelengths. This could be because the larger TNOs
have different surface properties from smaller TNOs due to active atmospheres, stronger
self-gravity or different surface layers from possible differentiation.
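The confidence levels quoted above (e.g. Table 4) can be illustrated with a rank statistic. The paper does not state which statistic was used, so the Spearman coefficient with a permutation p-value below is an assumed stand-in for illustration only:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling -- adequate for small samples of distinct values)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))

def permutation_pvalue(x, y, n_perm=2000, rng=None):
    """Two-sided permutation p-value for the observed |rho|; the
    corresponding confidence level is 1 - p."""
    if rng is None:
        rng = np.random.default_rng(0)
    obs = abs(spearman_rho(x, y))
    y = np.asarray(y, dtype=float)
    exceed = sum(
        abs(spearman_rho(x, rng.permutation(y))) >= obs
        for _ in range(n_perm)
    )
    return exceed / n_perm
```

A permutation test is attractive here because the samples are small (only a handful of TNOs have measured β and albedo), so asymptotic significance formulas are unreliable.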
5.1. Absolute Magnitudes
From the linear phase coefficient the reduced magnitude, mR(1, 1, 0) = mR − 5 log(R∆), or
absolute magnitude H (Bowell et al. 1989), which is the magnitude of an object if it could be
observed at heliocentric and geocentric distances of 1 AU and a phase angle of 0 degrees, can
be estimated (see Sheppard and Jewitt 2002 for further details). The results for mR(1, 1, 0)
and H are found to be consistent to within a couple hundredths of a magnitude (Table 3 and
Figures 24 to 27). It is found that the R-band empirically determined absolute magnitudes
of individual TNOs appear to be several tenths of a magnitude different from what is given
by the Minor Planet Center (Table 3). This is likely because the MPC assumes a generic
phase function and color for all TNOs while these two physical properties appear to be
significantly different for individual KBOs (Jewitt and Luu 1998). The work by Romanishin
and Tegler (2005) attempts to determine various absolute magnitudes of TNOs by using
main belt asteroid type phase curves which are not appropriate for TNOs (Sheppard and
Jewitt 2002).
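Once β is known, the reduction described above is a one-line correction. A hedged sketch (the H values in Table 3 also involve the Bowell et al. 1989 formalism, which is not reproduced here):

```python
import math

def absolute_magnitude(m_app, r_au, delta_au, alpha_deg, beta):
    """m(1,1,0): the apparent magnitude reduced to heliocentric and
    geocentric distances of 1 AU and zero phase angle, assuming the
    linear phase law m(1,1,alpha) = m(1,1,0) + beta * alpha."""
    return m_app - 5.0 * math.log10(r_au * delta_au) - beta * alpha_deg
```

Because the Minor Planet Center assumes a generic phase function and color for all TNOs, values computed this way from a measured β can differ from the MPC values by several tenths of a magnitude, as Table 3 shows.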
6. Summary
Ten large trans-Neptunian objects were observed in the R-band to determine photometric
variability on time scales of hours, days and months.
1) Three of the TNOs show obvious short-term photometric variability which is taken
to correspond to their rotational states.
• (120348) 2004 TY364 shows a double-peaked period of 11.7 hours, or 5.85 hours if
single-peaked. The peak-to-peak amplitude of the light curve is 0.22± 0.02 mags.
• (84922) 2003 VS2 has a well defined double-peaked period of 7.41 hours with a peak-
to-peak amplitude of 0.21± 0.02 mags. If the light curve is from elongation then 2003
VS2’s a/b axis ratio is at least 1.2 and the a/c axis ratio is about 1.9. Assuming 2003
VS2 is elongated from its high angular momentum and is a strengthless rubble pile it
would have a minimum density of about 720 kg m−3.
• (126154) 2001 YH140 has a single-peaked period of about 13.25 hours with a photo-
metric range of 0.21± 0.04 mags.
2) Seven of the TNOs show no short-term photometric variability within the measure-
ment uncertainties.
• Photometric measurements of the large TNOs (90482) Orcus and (55565) 2002 AW197
showed no variability within the uncertainties. Thus these measurements do not confirm
possible small photometric variability found for these TNOs by Ortiz et al. (2006).
• No short-term photometric variability was found for (136199) Eris 2003 UB313 to about
the 0.01 magnitude level. This high signal to noise photometry suggests Eris is nearly
spherical with a very uniform surface. Such a nearly uniform surface may be explained
by an atmosphere which is frozen onto the surface of Eris when near aphelion. The
atmosphere, like Pluto’s, may become active when near perihelion effectively resurfac-
ing Eris every few hundred years. The methane-rich TNO 2005 FY9 may also be in a
similar situation.
3) Four of the TNOs were observed over significantly different phase angles allowing
their long term photometric variability to be measured between phase angles of 0.1 and 1.5
degrees.
• TNOs Orcus, 2004 TY364 and 2004 GV9 show steep linear R-band phase slopes between
0.18 and 0.26 mags/degree.
• Eris 2003 UB313 shows a shallower R-band phase slope of 0.09 mags/degree. This is
consistent with Eris having a high albedo, icy surface which may have recently been
resurfaced.
• At the 97% confidence level the largest TNOs have shallower R-band linear phase
slopes compared to smaller TNOs. The largest TNOs' surfaces may differ from the
smaller TNOs because of their more volatile ice inventory, increased self-gravity, active
atmospheres, differentiation process or collisional history.
4) The absolute magnitudes determined for several TNOs through measuring their phase
curves show a difference of several tenths of a magnitude from the Minor Planet Center values.
• The values found for the reduced magnitude, mR(1, 1, 0), and absolute magnitude, H ,
are similar to within a few hundredths of a magnitude for most TNOs.
Acknowledgments
Support for this work was provided by NASA through Hubble Fellowship grant # HF-
01178.01-A awarded by the Space Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS
5-26555.
REFERENCES
Barucci, M., Belskaya, I., Fulchignoni, M. & Birlan, M. 2005, AJ, 130, 1291
Belskaya, I., Ortiz, J., Rousselot, P., Ivanova, V., Borisov, G., Shevchenko, V. & Peixinho,
N. 2006, Icarus, 184, 277
Bertoldi, F., Altenhoff, W., Weiss, A., Menten, K. & Thum, C. 2006, Nature, 439, 563
Binzel, R., Farinella, P., Zappala V., & Cellino, A. 1989, in Asteroids II, ed. R. Binzel, T.
Gehrels, and M. Matthews (Tucson: Univ. of Arizona Press), 416
Bowell, E., Hapke, B., Domingue, D., Lumme, K., Peltoniemi, J., & Harris, A. 1989, in
Asteroids II, ed. R. Binzel, T. Gehrels, and M. Matthews (Tucson: Univ. of Arizona
Press), 524
Brown, M., Trujillo, C. & Rabinowitz, D. 2005, ApJ, 635, L97
Brown, M., Schaller, E., Roe, H., Rabinowitz, D., & Trujillo, C. 2006, ApJ, 643, L61
Buie, M., Tholen, D. & Wasserman, L. 1997, Icarus, 125, 233
Buratti, B., Hillier, J., Heinze, A., Hicks, M., Tryka, K., Mosher, J., Ward, J., Garske, M.,
Young, J. and Atienza-Rosel, J. 2003, Icarus, 162, 171
Carraro, G., Maris, M., Bertin, D. and Parisi, M. 2006, A&A, 460, L39
Chandrasekhar, S. 1969, Ellipsoidal Figures of Equilibrium. Yale Univ. Press, New Haven,
Conn.
Cruikshank, D., Stansberry, J., Emery, J., Fernandez, Y., Werner, M., Trilling, D. & Rieke,
G. 2005, ApJ, 624, 53
Cruikshank, D., Barucci, M., Emery, J., Fernandez, Y., Grundy, W., Noll, K. and Stansberry,
J. 2006, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, and K. Keil (Tucson:
Univ. of Arizona Press), in press
Davis, D. and Farinella, P. 1997, Icarus, 125, 50
de Bergh, C., Delsanti, A., Tozzi, G., Dotto, E., Doressoundiram, A. and Barucci, M. 2005,
A&A, 437, 1115
Doressoundiram, A., Peixinho, N., Doucet, C., Mousis, O., Barucci, M., Petit, J. and Veillet,
C. 2005, Icarus, 174, 90
Elliot, J., Ates, A., Babcock, B. et al. 2003, Nature, 424, 165
Farinella, P. and Paolicchi, P. 1982, Icarus, 52, 409
Farinella, P., Davis, D., Paolicchi, P., Cellino, A. and Zappala, V. 1992, A&A, 253, 604
Hicks, M., Simonelli, D. and Buratti, B. 2005, Icarus, 176, 492
Holsapple, K. 2001, Icarus, 154, 432
Hubbard, W. 2003, Nature, 424, 137
Jewitt, D. & Luu, J. 1998, AJ, 115, 1667
Jewitt, D. & Sheppard, S. 2002, AJ, 123, 2110
Lacerda, P. & Luu, J. 2006, AJ, 131, 2314
Landolt, A. 1992, AJ, 104, 340
Leone, G., Farinella, P., Paolicchi, P. & Zappala, V. 1984, A&A, 140, 265
Licandro, J., Pinilla-Alonso, N., Pedani, M., Oliva, E., Tozzi, G. and Grundy, W. 2006,
A&A, 445, L35
Muinonen, K., Piironen, J., Shkuratov, Y., Ovcharenko, A. and Clark, B. 2002, in Asteroids
III, ed. W. Bottke, A. Cellino, P. Paolicchi and R. Binzel (Tucson: Univ. of Arizona
Press), 123
Ortiz, J., Gutierrez, P., Santos-Sanz, P., Casanova, V. and Sota, A. 2006, A&A, 447, 1131
Rabinowitz, D., Barkume, K., Brown, M. et al. 2006, ApJ, 639, 1238
Rabinowitz, D., Schaefer, B. and Tourtellotte, S. 2007, AJ, 133, 26
Romanishin, W. and Tegler, S. 2005, Icarus, 179, 523
Sheppard, S. & Jewitt, D. 2002, AJ, 124, 1757
Sheppard, S. & Jewitt, D. 2003, EM&P, 92, 207
Sheppard, S. & Jewitt, D. 2004, AJ, 127, 3023
Sicardy, B., Widemann, T., Lellouch, E. et al. 2003, Nature, 424, 168
Spencer, J., Stansberry, J., Trafton, L, Young, E., Binzel, R. and Croft, S. 1997, in Pluto
and Charon, ed. S. Stern and D. Tholen (Tucson: Univ. of Arizona Press), 435
Stansberry, J., Grundy, W., Margot, J., Cruikshank, D., Emery, J., Rieke, G. and Trilling,
D. 2006, ApJ, 643, 556
Stellingwerf, R. 1978, ApJ, 224, 953
Trilling, D. & Bernstein, G. 2006, AJ, 131, 1149
Trujillo, C., Jewitt, D. & Luu, J. 2001, AJ, 122, 457
Weidenschilling, S. 1981, Icarus, 46, 124
This preprint was prepared with the AAS LaTeX macros v5.2.
Table 1. Observations of Kuiper Belt Objects
Name Imagea Airmass Expb UT Datec Mag.d Err
(sec) yyyy mm dd.ddddd (mR) (mR)
2001 UQ18 uq1223n3025 1.17 320 2003 12 23.22561 22.20 0.04
uq1223n3026 1.15 320 2003 12 23.23060 22.30 0.04
uq1223n3038 1.02 320 2003 12 23.28384 22.38 0.04
uq1223n3039 1.01 320 2003 12 23.28882 22.56 0.04
uq1223n3051 1.01 350 2003 12 23.34333 22.40 0.04
uq1223n3052 1.01 350 2003 12 23.35123 22.48 0.04
uq1223n3070 1.15 350 2003 12 23.41007 22.37 0.04
uq1223n3071 1.17 350 2003 12 23.41540 22.28 0.04
uq1224n4024 1.21 350 2003 12 24.21433 22.30 0.03
uq1224n4025 1.19 350 2003 12 24.21969 22.15 0.03
uq1224n4033 1.03 350 2003 12 24.27591 22.07 0.03
uq1224n4034 1.02 350 2003 12 24.28125 22.04 0.03
uq1224n4041 1.00 350 2003 12 24.31300 22.11 0.03
uq1224n4042 1.00 350 2003 12 24.31834 22.14 0.03
uq1224n4051 1.04 350 2003 12 24.36433 22.22 0.03
uq1224n4052 1.05 350 2003 12 24.36967 22.18 0.03
uq1224n4061 1.17 350 2003 12 24.41216 22.27 0.03
uq1224n4062 1.20 350 2003 12 24.41750 22.22 0.03
uq1224n4072 1.50 350 2003 12 24.46253 22.14 0.03
uq1224n4073 1.56 350 2003 12 24.46781 22.09 0.03
(126154) 2001 YH140 yh1219n1073 1.10 300 2003 12 19.42900 20.85 0.02
yh1219n1074 1.08 300 2003 12 19.43381 20.82 0.02
yh1219n1084 1.01 300 2003 12 19.47450 20.81 0.02
yh1219n1085 1.01 300 2003 12 19.47935 20.79 0.02
yh1219n1092 1.00 300 2003 12 19.51172 20.77 0.02
yh1219n1093 1.00 300 2003 12 19.51657 20.80 0.02
yh1219n1112 1.06 300 2003 12 19.56332 20.86 0.02
yh1219n1113 1.08 300 2003 12 19.56815 20.80 0.02
yh1219n1116 1.15 350 2003 12 19.59215 20.81 0.02
yh1219n1117 1.18 350 2003 12 19.59764 20.79 0.02
yh1219n1122 1.36 350 2003 12 19.63042 20.87 0.02
yh1219n1123 1.41 350 2003 12 19.63587 20.85 0.02
yh1219n1125 1.50 350 2003 12 19.64669 20.95 0.03
yh1221n2067 1.46 300 2003 12 21.35652 20.98 0.02
yh1221n2068 1.42 300 2003 12 21.36124 20.92 0.02
yh1223n3059 1.24 300 2003 12 23.38285 20.86 0.02
yh1223n3060 1.21 300 2003 12 23.38762 20.89 0.02
yh1223n3078 1.03 300 2003 12 23.44626 20.88 0.02
yh1223n3079 1.02 300 2003 12 23.45102 20.87 0.02
yh1223n3086 1.00 300 2003 12 23.48192 20.92 0.02
yh1223n3087 1.00 300 2003 12 23.48668 20.91 0.02
yh1223n3091 1.00 300 2003 12 23.51268 20.94 0.02
yh1223n3092 1.01 300 2003 12 23.51744 20.95 0.02
yh1223n3101 1.08 300 2003 12 23.55695 20.92 0.02
yh1223n3102 1.09 300 2003 12 23.56169 20.96 0.03
yh1223n3106 1.18 300 2003 12 23.58754 20.98 0.03
yh1223n3107 1.20 300 2003 12 23.59231 21.01 0.03
yh1223n3114 1.31 300 2003 12 23.61182 21.03 0.03
yh1223n3115 1.34 300 2003 12 23.61659 20.99 0.03
yh1223n3119 1.56 300 2003 12 23.64084 20.99 0.03
yh1224n4047 1.49 300 2003 12 24.34589 20.90 0.02
yh1224n4048 1.44 300 2003 12 24.35066 20.91 0.02
yh1224n4057 1.18 300 2003 12 24.39217 20.85 0.02
yh1224n4058 1.16 300 2003 12 24.39693 20.85 0.02
yh1224n4068 1.03 300 2003 12 24.44421 20.87 0.02
yh1224n4069 1.02 300 2003 12 24.44898 20.87 0.02
yh1224n4080 1.00 300 2003 12 24.49899 20.84 0.02
yh1224n4081 1.00 300 2003 12 24.50375 20.86 0.02
yh1224n4088 1.02 300 2003 12 24.52567 20.82 0.02
yh1224n4089 1.03 300 2003 12 24.53043 20.83 0.02
yh1224n4093 1.08 300 2003 12 24.55588 20.81 0.02
yh1224n4094 1.09 300 2003 12 24.56065 20.82 0.02
yh1224n4102 1.23 300 2003 12 24.59447 20.87 0.02
yh1224n4103 1.25 300 2003 12 24.59926 20.82 0.02
yh1224n4107 1.44 300 2003 12 24.62661 20.87 0.02
(55565) 2002 AW197 aw1223n3088 1.09 220 2003 12 23.49223 19.89 0.01
aw1223n3089 1.08 220 2003 12 23.49606 19.89 0.01
aw1223n3093 1.03 220 2003 12 23.52320 19.87 0.01
aw1223n3094 1.03 220 2003 12 23.52704 19.88 0.01
aw1223n3103 1.02 220 2003 12 23.56663 19.89 0.01
aw1223n3104 1.02 220 2003 12 23.57046 19.89 0.01
aw1223n3108 1.05 220 2003 12 23.59807 19.89 0.01
aw1223n3109 1.06 220 2003 12 23.60190 19.89 0.01
aw1223n3116 1.11 220 2003 12 23.62171 19.87 0.01
aw1223n3117 1.12 220 2003 12 23.62556 19.89 0.01
aw1223n3122 1.26 220 2003 12 23.65822 19.87 0.01
aw1223n3123 1.28 220 2003 12 23.66201 19.89 0.01
aw1224n4066 1.34 220 2003 12 24.43521 19.87 0.01
aw1224n4067 1.32 220 2003 12 24.43903 19.86 0.01
aw1224n4078 1.09 220 2003 12 24.48975 19.89 0.01
aw1224n4079 1.08 220 2003 12 24.49358 19.89 0.01
aw1224n4086 1.04 220 2003 12 24.51683 19.86 0.01
aw1224n4087 1.03 220 2003 12 24.52066 19.89 0.01
aw1224n4091 1.01 220 2003 12 24.54768 19.90 0.01
aw1224n4092 1.01 220 2003 12 24.55158 19.90 0.01
aw1224n4100 1.04 220 2003 12 24.58659 19.86 0.01
aw1224n4101 1.04 220 2003 12 24.59042 19.86 0.01
aw1224n4105 1.10 220 2003 12 24.61789 19.86 0.01
aw1224n4106 1.12 220 2003 12 24.62172 19.87 0.01
aw1224n4111 1.25 220 2003 12 24.65382 19.86 0.01
aw1224n4112 1.27 220 2003 12 24.65766 19.87 0.01
aw1224n4113 1.30 220 2003 12 24.66150 19.88 0.01
aw1224n4114 1.32 220 2003 12 24.66534 19.88 0.01
(119979) 2002 WC19 wc1219n1033 1.19 350 2003 12 19.27766 20.56 0.02
wc1219n1045 1.05 300 2003 12 19.32341 20.61 0.02
wc1219n1046 1.04 300 2003 12 19.32826 20.59 0.02
wc1219n1057 1.00 300 2003 12 19.36042 20.60 0.02
wc1219n1058 1.00 300 2003 12 19.36529 20.57 0.02
wc1219n1066 1.01 300 2003 12 19.40263 20.56 0.02
wc1219n1067 1.02 300 2003 12 19.40748 20.57 0.02
wc1219n1077 1.11 300 2003 12 19.44804 20.61 0.02
wc1219n1078 1.12 300 2003 12 19.45289 20.58 0.02
wc1219n1088 1.33 300 2003 12 19.49419 20.59 0.02
wc1219n1089 1.37 300 2003 12 19.49909 20.55 0.02
wc1219n1094 1.58 300 2003 12 19.52222 20.57 0.02
wc1219n1095 1.64 300 2003 12 19.52704 20.58 0.02
wc1221n2026 1.64 300 2003 12 21.21505 20.56 0.02
wc1221n2027 1.59 300 2003 12 21.21980 20.57 0.02
wc1221n2042 1.26 300 2003 12 21.25881 20.55 0.02
wc1221n2043 1.24 300 2003 12 21.26356 20.53 0.02
wc1221n2065 1.01 300 2003 12 21.33897 20.58 0.02
wc1221n2066 1.01 300 2003 12 21.34373 20.63 0.02
wc1223n3027 1.38 300 2003 12 23.23616 20.57 0.02
wc1223n3028 1.34 300 2003 12 23.24092 20.60 0.02
wc1223n3044 1.05 300 2003 12 23.30891 20.57 0.02
wc1223n3045 1.04 300 2003 12 23.31367 20.57 0.02
wc1223n3057 1.00 300 2003 12 23.37221 20.58 0.02
wc1223n3058 1.00 300 2003 12 23.37696 20.56 0.02
wc1223n3076 1.10 320 2003 12 23.43506 20.57 0.02
wc1223n3077 1.12 320 2003 12 23.44005 20.60 0.02
wc1223n3084 1.25 320 2003 12 23.47067 20.61 0.02
wc1223n3085 1.28 320 2003 12 23.47566 20.60 0.02
wc1224n4026 1.44 300 2003 12 24.22597 20.55 0.02
wc1224n4027 1.40 300 2003 12 24.23073 20.56 0.02
wc1224n4035 1.10 300 2003 12 24.28804 20.58 0.02
wc1224n4036 1.09 300 2003 12 24.29281 20.61 0.02
wc1224n4043 1.02 300 2003 12 24.32492 20.58 0.02
wc1224n4044 1.02 300 2003 12 24.32969 20.58 0.02
wc1224n4053 1.00 300 2003 12 24.37574 20.54 0.02
wc1224n4054 1.01 300 2003 12 24.38049 20.56 0.02
wc1224n4063 1.07 350 2003 12 24.42313 20.57 0.02
wc1224n4076 1.32 300 2003 12 24.47888 20.58 0.02
wc1224n4077 1.35 300 2003 12 24.48365 20.61 0.02
(120132) 2003 FY128 fy0309n037 1.16 350 2005 03 09.30416 20.29 0.02
fy0309n038 1.17 350 2005 03 09.30906 20.31 0.02
fy0309n045 1.32 350 2005 03 09.34449 20.29 0.02
fy0309n046 1.35 350 2005 03 09.34942 20.28 0.02
fy0309n051 1.55 350 2005 03 09.37484 20.28 0.02
fy0309n052 1.60 350 2005 03 09.37975 20.30 0.02
fy0310n113 1.37 300 2005 03 10.13114 20.33 0.02
fy0310n114 1.34 300 2005 03 10.13543 20.31 0.02
fy0310n121 1.15 300 2005 03 10.18141 20.27 0.02
fy0310n122 1.14 300 2005 03 10.18572 20.29 0.02
fy0310n131 1.08 250 2005 03 10.23854 20.28 0.02
fy0310n132 1.08 250 2005 03 10.24229 20.27 0.02
fy0310n142 1.17 300 2005 03 10.30636 20.29 0.02
fy0310n146 1.27 300 2005 03 10.33107 20.27 0.02
fy0310n147 1.29 300 2005 03 10.33564 20.27 0.02
fy0310n152 1.51 300 2005 03 10.36726 20.25 0.02
fy0310n153 1.55 300 2005 03 10.37157 20.22 0.02
(136199) Eris 2003 UB313 ub1026c142 1.71 350 2005 10 25.02653 18.372 0.006
ub1026c143 1.65 350 2005 10 25.03144 18.374 0.005
ub1026c150 1.37 300 2005 10 25.06369 18.370 0.005
ub1026c156 1.35 300 2005 10 25.06800 18.361 0.005
ub1026c162 1.11 250 2005 10 25.19559 18.361 0.005
ub1026c170 1.11 250 2005 10 25.19931 18.364 0.005
ub1026c171 1.16 250 2005 10 25.22449 18.360 0.005
ub1026c174 1.16 250 2005 10 25.22825 18.370 0.005
ub1026c175 1.25 300 2005 10 25.25305 18.327 0.005
ub1026c178 1.27 300 2005 10 25.25694 18.365 0.005
ub1026c179 1.38 300 2005 10 25.27710 18.350 0.005
ub1026c183 1.41 300 2005 10 25.28146 18.369 0.005
ub1026c184 1.54 300 2005 10 25.29766 18.365 0.005
ub1026c187 1.58 300 2005 10 25.30202 18.351 0.005
ub1026c188 1.78 350 2005 10 25.31871 18.364 0.006
ub1026c189 1.85 350 2005 10 25.32363 18.364 0.006
ub1027c043 1.83 250 2005 10 26.01513 18.356 0.006
ub1027c044 1.78 250 2005 10 26.01890 18.362 0.006
ub1027c049 1.34 200 2005 10 26.06597 18.360 0.005
ub1027c050 1.18 300 2005 10 26.10460 18.352 0.005
ub1027c069 1.10 300 2005 10 26.14440 18.348 0.005
ub1027c070 1.11 250 2005 10 26.19650 18.352 0.005
ub1027c074 1.11 250 2005 10 26.20049 18.365 0.005
ub1027c075 1.15 300 2005 10 26.21742 18.348 0.005
ub1027c084 1.16 300 2005 10 26.22174 18.359 0.005
ub1027c085 1.20 300 2005 10 26.23858 18.351 0.005
ub1027c088 1.22 300 2005 10 26.24295 18.331 0.005
ub1027c089 1.36 300 2005 10 26.27118 18.352 0.005
ub1027c092 1.39 300 2005 10 26.27555 18.345 0.005
ub1027c093 1.51 350 2005 10 26.29210 18.344 0.005
ub1027c096 1.56 350 2005 10 26.29702 18.345 0.005
ub1027c097 1.61 350 2005 10 26.30193 18.357 0.005
ub1028c240 1.65 300 2005 10 27.02670 18.350 0.005
ub1028c246 1.33 300 2005 10 27.06572 18.362 0.005
ub1028c258 1.10 250 2005 10 27.14482 18.357 0.005
ub1028c266 1.12 250 2005 10 27.19952 18.359 0.005
ub1028c267 1.12 250 2005 10 27.20321 18.367 0.005
ub1028c271 1.18 300 2005 10 27.22789 18.370 0.005
ub1028c272 1.19 300 2005 10 27.23216 18.366 0.005
ub1028c276 1.29 300 2005 10 27.25643 18.272 0.005
ub1028c277 1.31 300 2005 10 27.26070 18.376 0.005
ub1028c280 1.42 300 2005 10 27.27739 18.375 0.005
ub1028c281 1.45 300 2005 10 27.28171 18.369 0.005
ub1028c282 1.48 300 2005 10 27.28598 18.371 0.005
ub1028c283 1.52 300 2005 10 27.29033 18.372 0.005
ub1028c284 1.56 300 2005 10 27.29462 18.371 0.005
ub1028c285 1.61 300 2005 10 27.29900 18.381 0.005
ub1028c286 1.65 300 2005 10 27.30333 18.392 0.005
ub1028c287 1.70 300 2005 10 27.30761 18.393 0.006
ub1028c288 1.76 300 2005 10 27.31197 18.383 0.006
ub1028c289 1.82 300 2005 10 27.31625 18.369 0.006
ub1028c290 1.89 300 2005 10 27.32052 18.388 0.006
ub1028c291 1.96 300 2005 10 27.32484 18.367 0.006
ub1028c292 2.04 300 2005 10 27.32916 18.405 0.007
ub1028c293 2.13 300 2005 10 27.33347 18.378 0.007
ub1128n027 1.11 250 2005 11 28.10847 18.389 0.005
ub1128n028 1.12 250 2005 11 28.11219 18.382 0.005
ub1128n029 1.12 250 2005 11 28.11593 18.401 0.005
ub1128n032 1.16 250 2005 11 28.13378 18.391 0.005
ub1128n033 1.17 250 2005 11 28.13749 18.383 0.005
ub1128n034 1.18 250 2005 11 28.14118 18.391 0.005
ub1128n035 1.19 250 2005 11 28.14496 18.389 0.005
ub1128n036 1.21 250 2005 11 28.14866 18.386 0.005
ub1128n037 1.22 250 2005 11 28.15236 18.380 0.005
ub1128n038 1.23 250 2005 11 28.15614 18.376 0.005
ub1128n039 1.25 250 2005 11 28.15983 18.397 0.005
ub1128n040 1.27 250 2005 11 28.16353 18.393 0.005
ub1128n041 1.28 250 2005 11 28.16732 18.379 0.005
ub1128n042 1.30 250 2005 11 28.17102 18.378 0.005
ub1128n043 1.32 250 2005 11 28.17472 18.402 0.005
ub1128n044 1.34 250 2005 11 28.17850 18.379 0.005
ub1128n045 1.37 250 2005 11 28.18220 18.385 0.005
ub1128n046 1.39 250 2005 11 28.18589 18.380 0.005
ub1128n047 1.42 250 2005 11 28.18969 18.375 0.005
ub1128n048 1.44 250 2005 11 28.19339 18.387 0.005
ub1128n049 1.47 250 2005 11 28.19708 18.396 0.005
ub1128n050 1.51 250 2005 11 28.20084 18.393 0.005
ub1128n051 1.54 250 2005 11 28.20453 18.390 0.005
ub1128n052 1.58 250 2005 11 28.20822 18.391 0.005
ub1128n053 1.61 250 2005 11 28.21195 18.391 0.005
ub1128n054 1.66 250 2005 11 28.21565 18.376 0.005
ub1128n055 1.70 250 2005 11 28.21935 18.386 0.005
ub1128n056 1.75 250 2005 11 28.22311 18.377 0.006
ub1128n057 1.80 250 2005 11 28.22680 18.379 0.006
ub1128n058 1.85 250 2005 11 28.23050 18.382 0.006
ub1128n059 1.91 250 2005 11 28.23437 18.393 0.006
ub1128n060 1.98 250 2005 11 28.23800 18.386 0.006
ub1128n061 2.05 250 2005 11 28.24173 18.376 0.007
ub1128n062 2.13 250 2005 11 28.24546 18.383 0.007
ub1129n112 1.15 250 2005 11 29.02268 18.424 0.005
ub1129n119 1.09 250 2005 11 29.08304 18.432 0.005
ub1129n120 1.09 250 2005 11 29.08673 18.429 0.005
ub1129n121 1.10 250 2005 11 29.09043 18.421 0.005
ub1129n122 1.10 250 2005 11 29.09412 18.430 0.005
ub1129n123 1.10 250 2005 11 29.09782 18.426 0.005
ub1129n124 1.11 250 2005 11 29.10161 18.426 0.005
ub1129n125 1.11 250 2005 11 29.10530 18.431 0.005
ub1129n126 1.12 250 2005 11 29.10900 18.435 0.005
ub1129n127 1.12 250 2005 11 29.11269 18.418 0.005
ub1129n128 1.13 250 2005 11 29.11639 18.422 0.005
ub1129n129 1.14 250 2005 11 29.12018 18.435 0.005
ub1129n130 1.14 250 2005 11 29.12387 18.425 0.005
ub1129n131 1.15 250 2005 11 29.12757 18.418 0.005
ub1129n132 1.16 250 2005 11 29.13136 18.421 0.005
ub1129n133 1.17 250 2005 11 29.13506 18.421 0.005
ub1129n134 1.18 250 2005 11 29.13876 18.420 0.005
ub1129n135 1.20 250 2005 11 29.14254 18.415 0.005
ub1129n136 1.21 250 2005 11 29.14624 18.419 0.005
ub1129n137 1.22 250 2005 11 29.14993 18.424 0.005
ub1129n138 1.24 250 2005 11 29.15373 18.426 0.005
ub1129n139 1.25 250 2005 11 29.15742 18.422 0.005
ub1129n142 1.35 250 2005 11 29.17679 18.418 0.005
ub1129n143 1.37 250 2005 11 29.18049 18.421 0.005
ub1129n144 1.40 250 2005 11 29.18418 18.408 0.005
ub1129n145 1.43 250 2005 11 29.18788 18.422 0.005
ub1129n146 1.45 250 2005 11 29.19158 18.397 0.005
ub1129n147 1.48 250 2005 11 29.19531 18.412 0.005
ub1129n148 1.52 250 2005 11 29.19901 18.403 0.005
ub1129n149 1.55 250 2005 11 29.20270 18.394 0.005
ub1129n150 1.59 250 2005 11 29.20640 18.401 0.005
ub1129n151 1.63 250 2005 11 29.21010 18.400 0.005
ub1129n152 1.67 250 2005 11 29.21388 18.405 0.005
ub1129n153 1.71 250 2005 11 29.21758 18.401 0.005
ub1129n154 1.76 250 2005 11 29.22127 18.391 0.005
ub1129n155 1.81 250 2005 11 29.22496 18.397 0.006
ub1129n156 1.87 250 2005 11 29.22866 18.396 0.006
ub1129n157 1.93 250 2005 11 29.23238 18.415 0.006
ub1129n158 2.00 250 2005 11 29.23607 18.399 0.006
ub1130n226 1.13 250 2005 11 30.11178 18.386 0.005
ub1130n227 1.13 250 2005 11 30.11548 18.386 0.005
ub1130n228 1.14 250 2005 11 30.11918 18.394 0.005
ub1130n229 1.15 250 2005 11 30.12288 18.394 0.005
ub1130n230 1.16 250 2005 11 30.12657 18.390 0.005
ub1130n231 1.17 250 2005 11 30.13027 18.383 0.005
ub1130n232 1.18 250 2005 11 30.13397 18.398 0.005
ub1130n233 1.19 250 2005 11 30.13766 18.394 0.005
ub1130n234 1.20 250 2005 11 30.14136 18.392 0.005
ub1130n235 1.21 250 2005 11 30.14515 18.384 0.005
ub1130n236 1.23 250 2005 11 30.14884 18.391 0.005
ub1130n237 1.24 250 2005 11 30.15254 18.387 0.005
ub1130n238 1.26 250 2005 11 30.15624 18.391 0.005
ub1130n239 1.28 250 2005 11 30.15993 18.397 0.005
ub1130n240 1.29 250 2005 11 30.16370 18.388 0.005
ub1130n241 1.31 250 2005 11 30.16740 18.405 0.005
ub1130n242 1.33 250 2005 11 30.17110 18.379 0.005
ub1130n243 1.36 250 2005 11 30.17480 18.388 0.005
ub1130n244 1.38 250 2005 11 30.17849 18.383 0.005
ub1130n247 1.50 250 2005 11 30.19434 18.386 0.005
ub1130n248 1.53 250 2005 11 30.19804 18.394 0.005
ub1130n249 1.57 250 2005 11 30.20173 18.393 0.005
ub1130n250 1.60 250 2005 11 30.20543 18.400 0.005
ub1130n251 1.64 250 2005 11 30.20912 18.397 0.005
ub1130n252 1.69 250 2005 11 30.21281 18.390 0.005
ub1130n253 1.73 250 2005 11 30.21651 18.387 0.005
ub1130n254 1.78 250 2005 11 30.22020 18.403 0.006
ub1130n255 1.84 250 2005 11 30.22389 18.399 0.006
ub1130n256 1.90 250 2005 11 30.22768 18.379 0.006
ub1130n257 1.96 250 2005 11 30.23138 18.394 0.006
ub1130n258 2.03 250 2005 11 30.23514 18.393 0.007
ub1201n327 1.10 300 2005 12 01.04542 18.378 0.005
ub1201n333 1.10 250 2005 12 01.08574 18.376 0.005
ub1201n334 1.10 250 2005 12 01.08943 18.397 0.005
ub1201n335 1.10 250 2005 12 01.09313 18.386 0.005
ub1201n338 1.12 250 2005 12 01.10868 18.391 0.005
ub1201n339 1.13 250 2005 12 01.11237 18.381 0.005
ub1201n340 1.14 250 2005 12 01.11606 18.398 0.005
ub1201n341 1.15 250 2005 12 01.11976 18.382 0.005
ub1201n342 1.16 250 2005 12 01.12347 18.385 0.005
ub1201n343 1.17 250 2005 12 01.12725 18.389 0.005
ub1201n344 1.18 250 2005 12 01.13095 18.388 0.005
ub1201n345 1.19 250 2005 12 01.13465 18.386 0.005
ub1201n346 1.20 250 2005 12 01.13843 18.384 0.005
ub1201n347 1.21 250 2005 12 01.14212 18.381 0.005
ub1201n348 1.23 250 2005 12 01.14581 18.381 0.005
ub1201n351 1.30 250 2005 12 01.16207 18.379 0.005
ub1201n352 1.32 250 2005 12 01.16577 18.394 0.005
ub1201n353 1.34 250 2005 12 01.16946 18.394 0.005
ub1201n354 1.36 250 2005 12 01.17316 18.385 0.005
ub1201n355 1.39 250 2005 12 01.17685 18.383 0.005
ub1201n356 1.41 250 2005 12 01.18055 18.391 0.005
ub1201n357 1.44 250 2005 12 01.18424 18.379 0.005
ub1201n358 1.47 250 2005 12 01.18793 18.377 0.005
ub1201n359 1.50 250 2005 12 01.19163 18.381 0.005
ub1201n360 1.53 250 2005 12 01.19548 18.394 0.005
ub1201n361 1.57 250 2005 12 01.19918 18.389 0.005
ub1201n362 1.61 250 2005 12 01.20287 18.388 0.005
ub1201n363 1.65 250 2005 12 01.20688 18.388 0.005
ub1201n364 1.69 250 2005 12 01.21058 18.377 0.005
ub1201n365 1.74 250 2005 12 01.21427 18.390 0.006
ub1201n366 1.79 250 2005 12 01.21797 18.396 0.006
ub1201n367 1.85 250 2005 12 01.22166 18.396 0.006
ub1201n368 1.96 250 2005 12 01.22846 18.390 0.007
ub1201n369 2.03 250 2005 12 01.23228 18.382 0.007
(84922) 2003 VS2 vs1219n1031 1.07 250 2003 12 19.26838 19.39 0.01
vs1219n1032 1.06 250 2003 12 19.27266 19.36 0.01
vs1219n1043 1.02 250 2003 12 19.31376 19.37 0.01
vs1219n1044 1.02 250 2003 12 19.31810 19.41 0.01
vs1219n1055 1.03 220 2003 12 19.35203 19.53 0.01
vs1219n1056 1.04 220 2003 12 19.35594 19.52 0.01
vs1219n1064 1.11 220 2003 12 19.39432 19.53 0.01
vs1219n1065 1.12 220 2003 12 19.39821 19.52 0.01
vs1219n1075 1.29 220 2003 12 19.43959 19.39 0.01
vs1219n1076 1.31 220 2003 12 19.44346 19.38 0.01
vs1219n1086 1.67 230 2003 12 19.48544 19.34 0.01
vs1219n1087 1.72 230 2003 12 19.48944 19.38 0.01
vs1221n2024 1.26 220 2003 12 21.20617 19.52 0.01
vs1221n2025 1.24 220 2003 12 21.21014 19.52 0.01
vs1221n2040 1.10 220 2003 12 21.24958 19.53 0.01
vs1221n2041 1.09 220 2003 12 21.25340 19.52 0.01
vs1221n2046 1.05 220 2003 12 21.27486 19.46 0.01
vs1221n2047 1.05 220 2003 12 21.27870 19.45 0.01
vs1223n3022 1.21 200 2003 12 23.21214 19.41 0.01
vs1223n3023 1.19 200 2003 12 23.21579 19.44 0.01
vs1223n3024 1.17 250 2003 12 23.22026 19.46 0.01
vs1223n3042 1.02 220 2003 12 23.30008 19.34 0.01
vs1223n3043 1.02 220 2003 12 23.30391 19.33 0.01
vs1223n3055 1.07 220 2003 12 23.36359 19.46 0.01
vs1223n3056 1.07 220 2003 12 23.36743 19.50 0.01
vs1223n3074 1.28 220 2003 12 23.42704 19.47 0.01
vs1223n3075 1.31 220 2003 12 23.43087 19.46 0.01
vs1223n3082 1.55 200 2003 12 23.46289 19.36 0.01
vs1223n3083 1.59 200 2003 12 23.46644 19.35 0.01
vs1224n4022 1.23 250 2003 12 24.20522 19.35 0.01
vs1224n4023 1.21 250 2003 12 24.20940 19.38 0.01
vs1224n4030 1.05 220 2003 12 24.26469 19.37 0.01
vs1224n4031 1.05 220 2003 12 24.26848 19.38 0.01
vs1224n4039 1.02 220 2003 12 24.30385 19.51 0.01
vs1224n4040 1.02 220 2003 12 24.30871 19.52 0.01
vs1224n4049 1.06 220 2003 12 24.35630 19.45 0.01
vs1224n4050 1.06 220 2003 12 24.36014 19.44 0.01
vs1224n4059 1.19 220 2003 12 24.40395 19.34 0.01
vs1224n4060 1.20 220 2003 12 24.40778 19.32 0.01
vs1224n4070 1.50 220 2003 12 24.45457 19.43 0.01
vs1224n4071 1.53 220 2003 12 24.45839 19.48 0.01
(90482) Orcus 2004 DW dw0214n028 1.22 200 2005 02 14.11873 18.63 0.01
dw0214n029 1.21 200 2005 02 14.12189 18.65 0.01
dw0215n106 1.84 250 2005 02 15.03735 18.64 0.01
dw0215n107 1.78 250 2005 02 15.04156 18.66 0.01
dw0215n108 1.73 250 2005 02 15.04534 18.65 0.01
dw0215n109 1.69 250 2005 02 15.04911 18.64 0.01
dw0215n113 1.42 250 2005 02 15.07806 18.65 0.01
dw0215n114 1.39 250 2005 02 15.08183 18.65 0.01
dw0215n118 1.26 220 2005 02 15.10658 18.65 0.01
dw0215n119 1.25 220 2005 02 15.10998 18.64 0.01
dw0215n128 1.11 220 2005 02 15.17545 18.65 0.01
dw0215n129 1.11 220 2005 02 15.17885 18.65 0.01
dw0215n140 1.18 220 2005 02 15.24664 18.65 0.01
dw0215n141 1.19 220 2005 02 15.25007 18.65 0.01
dw0215n147 1.33 220 2005 02 15.28450 18.63 0.01
dw0215n148 1.35 220 2005 02 15.28789 18.66 0.01
dw0215n155 1.68 230 2005 02 15.32663 18.65 0.01
dw0215n156 1.73 230 2005 02 15.33014 18.64 0.01
dw0216n199 1.76 250 2005 02 16.04005 18.65 0.01
dw0216n200 1.72 250 2005 02 16.04379 18.67 0.01
dw0216n205 1.51 250 2005 02 16.06390 18.66 0.01
dw0216n206 1.47 250 2005 02 16.06767 18.67 0.01
dw0216n209 1.37 250 2005 02 16.08251 18.65 0.01
dw0216n210 1.35 250 2005 02 16.08625 18.66 0.01
dw0216n217 1.17 250 2005 02 16.13055 18.66 0.01
dw0216n218 1.16 250 2005 02 16.13437 18.66 0.01
dw0216n235 1.21 250 2005 02 16.25223 18.64 0.01
dw0216n247 1.81 300 2005 02 16.33285 18.66 0.01
dw0309n014 1.21 250 2005 03 09.05919 18.71 0.01
dw0309n015 1.20 250 2005 03 09.06295 18.70 0.01
dw0309n022 1.11 300 2005 03 09.11334 18.72 0.01
dw0309n023 1.11 300 2005 03 09.11762 18.71 0.01
dw0309n027 1.11 300 2005 03 09.13928 18.69 0.01
dw0309n028 1.11 300 2005 03 09.14363 18.70 0.01
dw0310n091 1.43 250 2005 03 10.01315 18.71 0.01
dw0310n092 1.40 250 2005 03 10.01688 18.71 0.01
dw0310n097 1.27 250 2005 03 10.04113 18.72 0.01
dw0310n098 1.25 250 2005 03 10.04487 18.72 0.01
dw0310n107 1.12 250 2005 03 10.09569 18.71 0.01
dw0310n108 1.12 250 2005 03 10.09945 18.72 0.01
dw0310n125 1.26 250 2005 03 10.20552 18.70 0.01
dw0310n126 1.28 250 2005 03 10.20927 18.71 0.01
dw0310n135 1.67 300 2005 03 10.26076 18.72 0.01
(90568) 2004 GV9 gv0215n130 1.75 250 2005 02 15.18402 19.75 0.03
gv0215n131 1.70 250 2005 02 15.18792 19.81 0.03
gv0215n142 1.19 250 2005 02 15.25502 19.77 0.03
gv0215n143 1.17 250 2005 02 15.25891 19.73 0.03
gv0215n153 1.03 250 2005 02 15.31558 19.74 0.03
gv0215n154 1.03 250 2005 02 15.31931 19.79 0.03
gv0215n159 1.00 250 2005 02 15.34893 19.77 0.03
gv0215n160 1.00 250 2005 02 15.35268 19.80 0.03
gv0215n165 1.01 250 2005 02 15.38618 19.79 0.03
gv0215n166 1.02 250 2005 02 15.38992 19.83 0.03
gv0216n229 1.41 250 2005 02 16.21394 19.74 0.03
gv0216n230 1.38 250 2005 02 16.21768 19.76 0.03
gv0216n242 1.05 300 2005 02 16.29937 19.76 0.03
gv0216n254 1.01 300 2005 02 16.37745 19.75 0.03
gv0309n029 1.46 300 2005 03 09.14960 19.64 0.02
gv0309n030 1.42 300 2005 03 09.15392 19.66 0.02
gv0309n033 1.01 300 2005 03 09.27972 19.75 0.02
gv0309n034 1.00 300 2005 03 09.28405 19.73 0.02
gv0309n039 1.01 300 2005 03 09.31533 19.70 0.02
gv0309n040 1.01 300 2005 03 09.31965 19.67 0.02
gv0309n047 1.06 300 2005 03 09.35654 19.70 0.02
gv0309n048 1.07 300 2005 03 09.36090 19.68 0.02
gv0309n054 1.18 300 2005 03 09.39525 19.68 0.02
gv0309n055 1.20 300 2005 03 09.39959 19.69 0.02
gv0310n115 1.51 300 2005 03 10.14103 19.64 0.02
gv0310n116 1.44 300 2005 03 10.14905 19.66 0.02
gv0310n127 1.11 300 2005 03 10.21489 19.64 0.02
gv0310n128 1.09 300 2005 03 10.21918 19.68 0.02
gv0310n143 1.01 300 2005 03 10.31197 19.75 0.02
gv0310n148 1.04 300 2005 03 10.34106 19.72 0.02
gv0310n149 1.05 300 2005 03 10.34539 19.77 0.02
gv0310n155 1.14 300 2005 03 10.38172 19.76 0.02
gv0310n158 1.20 300 2005 03 10.39711 19.72 0.02
gv0310n159 1.22 300 2005 03 10.40147 19.73 0.02
(120348) 2004 TY364 ty1025n041 1.89 400 2005 10 25.01425 19.89 0.01
ty1025n042 1.81 400 2005 10 25.01944 19.86 0.01
ty1025n047 1.45 400 2005 10 25.05162 19.87 0.01
ty1025n048 1.41 400 2005 10 25.05711 19.92 0.01
ty1025n067 1.04 350 2005 10 25.18450 19.98 0.01
ty1025n068 1.04 350 2005 10 25.18939 19.99 0.01
ty1025n072 1.06 350 2005 10 25.21357 19.95 0.01
ty1025n073 1.07 350 2005 10 25.21847 19.92 0.01
ty1025n082 1.12 350 2005 10 25.24193 19.89 0.01
ty1025n083 1.13 350 2005 10 25.24683 19.90 0.01
ty1025n086 1.19 350 2005 10 25.26632 19.90 0.01
ty1025n087 1.21 350 2005 10 25.27123 19.90 0.01
ty1025n090 1.29 350 2005 10 25.28657 19.89 0.01
ty1025n091 1.32 350 2005 10 25.29147 19.95 0.01
ty1025n094 1.42 350 2005 10 25.30718 19.95 0.01
ty1025n095 1.46 350 2005 10 25.31207 20.00 0.01
ty1025n098 1.63 350 2005 10 25.32928 19.98 0.01
ty1025n099 1.69 350 2005 10 25.33418 19.97 0.01
ty1025n100 1.76 350 2005 10 25.33909 20.02 0.01
ty1025n101 1.83 350 2005 10 25.34410 20.03 0.01
ty1026n144 1.71 400 2005 10 26.02367 19.85 0.01
ty1026n145 1.64 400 2005 10 26.02917 19.86 0.01
ty1026n151 1.30 350 2005 10 26.07121 20.02 0.01
ty1026n157 1.13 400 2005 10 26.11043 20.07 0.01
ty1026n163 1.06 400 2005 10 26.14943 20.03 0.01
ty1026n172 1.06 400 2005 10 26.20516 19.92 0.01
ty1026n176 1.09 400 2005 10 26.22689 19.88 0.01
ty1026n177 1.10 400 2005 10 26.23234 19.90 0.01
ty1026n181 1.16 450 2005 10 26.25425 19.94 0.01
ty1026n182 1.19 450 2005 10 26.26393 19.92 0.01
ty1026n185 1.27 400 2005 10 26.28048 19.95 0.01
ty1026n186 1.30 400 2005 10 26.28599 19.98 0.01
ty1026n190 1.45 450 2005 10 26.30748 20.03 0.01
ty1026n191 1.50 450 2005 10 26.31356 20.09 0.01
ty1026n192 1.56 450 2005 10 26.31963 20.09 0.01
ty1026n193 1.62 450 2005 10 26.32568 20.07 0.01
ty1026n194 1.70 450 2005 10 26.33173 20.09 0.01
ty1026n195 1.78 450 2005 10 26.33774 20.10 0.01
ty1026n196 1.87 450 2005 10 26.34378 20.08 0.01
ty1027n241 1.58 400 2005 10 27.03210 20.01 0.01
ty1027n247 1.28 400 2005 10 27.07100 20.04 0.01
ty1027n264 1.05 400 2005 10 27.18749 19.89 0.01
ty1027n265 1.05 400 2005 10 27.19292 19.87 0.01
ty1027n269 1.07 400 2005 10 27.21551 19.91 0.01
ty1027n270 1.08 400 2005 10 27.22094 19.90 0.01
ty1027n274 1.14 400 2005 10 27.24440 19.90 0.01
ty1027n275 1.15 400 2005 10 27.24988 19.89 0.01
ty1027n278 1.22 350 2005 10 27.26622 19.98 0.01
ty1027n279 1.24 350 2005 10 27.27115 19.99 0.01
ty1027n294 1.83 400 2005 10 27.33881 20.09 0.01
ty1027n295 1.92 400 2005 10 27.34429 20.09 0.01
ty1027n296 2.02 400 2005 10 27.34976 20.10 0.01
ty1128n030 1.07 350 2005 11 28.12175 19.99 0.01
ty1128n031 1.07 350 2005 11 28.12665 19.99 0.01
ty1128n063 1.87 400 2005 11 28.25204 20.17 0.01
ty1128n064 1.96 400 2005 11 28.25747 20.15 0.01
ty1129n117 1.05 350 2005 11 29.07101 20.00 0.01
ty1129n118 1.04 350 2005 11 29.07586 20.00 0.01
ty1129n140 1.17 350 2005 11 29.16375 20.11 0.01
ty1129n141 1.19 350 2005 11 29.16860 20.12 0.01
ty1129n159 1.76 350 2005 11 29.24196 20.17 0.01
ty1129n160 1.83 350 2005 11 29.24682 20.13 0.01
ty1129n161 1.91 350 2005 11 29.25167 20.15 0.01
ty1129n162 1.99 350 2005 11 29.25657 20.08 0.01
ty1130n224 1.05 350 2005 11 30.10029 19.98 0.01
ty1130n225 1.05 350 2005 11 30.10514 20.00 0.01
ty1130n245 1.27 350 2005 11 30.18325 20.18 0.01
ty1130n246 1.30 350 2005 11 30.18810 20.15 0.01
ty1130n259 1.77 350 2005 11 30.23992 20.12 0.01
ty1130n260 1.84 350 2005 11 30.24478 20.10 0.01
ty1130n261 1.92 350 2005 11 30.24963 20.13 0.01
ty1130n262 2.01 350 2005 11 30.25454 20.11 0.01
ty1201n328 1.06 400 2005 12 01.05104 19.99 0.01
ty1201n336 1.05 350 2005 12 01.09816 20.08 0.01
ty1201n337 1.05 350 2005 12 01.10301 20.07 0.01
ty1201n349 1.14 350 2005 12 01.15006 20.18 0.01
ty1201n350 1.16 350 2005 12 01.15492 20.13 0.01
ty1201n370 1.77 350 2005 12 01.23777 20.05 0.01
ty1201n371 1.85 350 2005 12 01.24262 20.07 0.01
ty1201n372 1.93 350 2005 12 01.24748 20.08 0.01
ty1201n373 2.02 350 2005 12 01.25239 20.04 0.01
^a Image number.
^b Exposure time for the image.
^c Decimal Universal Date at the start of the integration.
^d Apparent red magnitude.
^e Uncertainties on the individual photometric measurements.
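Rows in the layout above are typically combined into per-night statistics when building the light curves discussed later. A minimal sketch of an inverse-variance nightly average; the parsing, row tuples, and sample values here are illustrative placeholders, not entries copied from Table 1:

```python
from collections import defaultdict

# Illustrative rows in the layout of Table 1:
# (image, airmass, exposure s, year, month, decimal day, m_R, error)
rows = [
    ("ex0001", 1.10, 300, 2005, 12, 1.04542, 18.378, 0.005),
    ("ex0002", 1.10, 250, 2005, 12, 1.08574, 18.376, 0.005),
    ("ex0003", 1.10, 250, 2005, 12, 1.08943, 18.397, 0.005),
    ("ex0004", 1.05, 250, 2005, 12, 2.10000, 18.390, 0.010),
]

def nightly_means(rows):
    """Inverse-variance weighted mean magnitude per UT night."""
    nights = defaultdict(list)
    for _, _, _, year, month, day, mag, err in rows:
        nights[(year, month, int(day))].append((mag, err))
    means = {}
    for night, data in sorted(nights.items()):
        weights = [1.0 / err ** 2 for _, err in data]
        means[night] = sum(m * w for (m, _), w in zip(data, weights)) / sum(weights)
    return means

means = nightly_means(rows)
```

With equal errors the weighted mean reduces to the plain average, which is a quick sanity check on the weighting.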
Table 2. Properties of Observed KBOs
Name    H^a (mag)    mR^b (mag)    Nights^c (#)    ∆mR^d (mag)    Single^e (hrs)    Double^f (hrs)
2001 UQ18 5.4 22.3 2 < 0.3 - -
(126154) 2001 YH140 5.4 20.85 4 0.21± 0.04 13.25± 0.2 -
(55565) 2002 AW197 3.3 19.88 2 < 0.03 - -
(119979) 2002 WC19 5.1 20.58 4 < 0.05 - -
(120132) 2003 FY128 5.0 20.28 2 < 0.08 - -
(136199) Eris 2003 UB313 -1.2 18.36 7 < 0.01 - -
(84922) 2003 VS2 4.2 19.45 4 0.21± 0.02 - 7.41± 0.02
(90482) Orcus 2004 DW 2.3 18.65 5 < 0.03 - -
(90568) 2004 GV9 4.0 19.68 4 < 0.08 - -
(120348) 2004 TY364 4.5 19.98 7 0.22± 0.02 5.85± 0.01 11.70± 0.01
^a The visible absolute magnitude of the object from the Minor Planet Center. The values from
the MPC differ from the R-band absolute magnitudes found for the few objects for which we have
actual phase curves, as shown in Table 3.
^b Mean red magnitude of the object. For the four objects observed at significantly different phase
angles, the data near the lowest phase angle are used: Eris in Oct. 2005, Orcus in Feb. 2005, 90568
in Mar. 2005, and 120348 in Oct. 2005.
^c Number of nights on which data were taken to determine the lightcurve.
^d The peak-to-peak range of the lightcurve.
^e The lightcurve period if there is one maximum per period.
^f The lightcurve period if there are two maxima per period.
Table 3. Phase Function Data for KBOs
Name    mR(1, 1, 0)^a (mag)    H^b (mag)    MPC^c (mag)    β(α < 2°)^d (mag/deg)
(136199) Eris 2003 UB313 −1.50± 0.02 −1.50± 0.02 −1.65 0.09± 0.03
(90482) Orcus 2004 DW 1.81± 0.05 1.81± 0.05 1.93 0.26± 0.05
(90568) 2004 GV9 3.64± 0.06 3.62± 0.06 3.5 0.18± 0.06
(120348) 2004 TY364 3.91± 0.03 3.90± 0.03 4.0 0.19± 0.03
^a The R-band reduced magnitude determined from the linear phase coefficient found
in this work.
^b The R-band absolute magnitude determined as described in Bowell et al. (1989).
^c The R-band absolute magnitude from the Minor Planet Center, converted to the
R-band from the V-band values shown in Table 2 using the known colors of the objects:
V-R = 0.45 for Eris (Brown et al. 2005), V-R = 0.37 for Orcus (de Bergh et al. 2005),
and a nominal value of V-R = 0.5 for 90568 and 120348, since these objects do not have
known V-R colors.
^d β(α < 2°) is the phase coefficient in magnitudes per degree at phase angles < 2°.
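The quantities in this table follow from two standard steps: the reduced magnitude is the apparent magnitude corrected to unit heliocentric (r) and geocentric (∆) distances, m(1, 1, α) = m − 5 log10(r∆), and the linear phase coefficient β is the slope of m(1, 1, α) against phase angle α. A minimal least-squares sketch; the distances and angles fed to it would come from an ephemeris and are not listed in this chunk:

```python
import math

def reduced_mag(m_app, r_au, delta_au):
    """m(1,1,alpha): apparent magnitude corrected to r = delta = 1 AU."""
    return m_app - 5.0 * math.log10(r_au * delta_au)

def fit_linear_phase(alpha_deg, red_mag):
    """Least-squares fit of m(1,1,alpha) = m(1,1,0) + beta * alpha.
    Returns (absolute magnitude m(1,1,0), phase coefficient beta)."""
    n = len(alpha_deg)
    sx, sy = sum(alpha_deg), sum(red_mag)
    sxx = sum(a * a for a in alpha_deg)
    sxy = sum(a * m for a, m in zip(alpha_deg, red_mag))
    beta = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    m0 = (sy - beta * sx) / n
    return m0, beta
```

Fitting synthetic data generated with the Orcus-like values m(1, 1, 0) = 1.81 and β = 0.26 recovers both parameters exactly, which checks the algebra.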
Table 4. Phase Function Correlations
β vs.^a    rcorr^b    N^c    Sig^d
mR(1, 1, 0) 0.50 19 97%
mV (1, 1, 0) 0.54 16 97%
mI(1, 1, 0) 0.12 14 < 60%
pR -0.51 5 65%
pV -0.38 9 70%
pI -0.27 10 < 60%
∆m -0.21 19 < 60%
B − I -0.20 11 < 60%
^a β is the linear phase coefficient in magnitudes per degree at phase
angles < 2°. The column lists the quantities against which β is compared
in order to look for correlations: mR(1, 1, 0), mV (1, 1, 0) and mI(1, 1, 0) are
the reduced magnitudes in the R, V and I-band respectively, each
compared to the value of β determined at the same wavelength; pR, pV
and pI are the geometric albedos compared to β in the R, V and I-band
respectively; ∆m is the peak-to-peak amplitude of the rotational light
curve; and B − I is the color. The phase curves in the R-band are
from this work and Sheppard and Jewitt (2002, 2003), while the V and
I-band data are from Buie et al. (1997) and Rabinowitz et al. (2007).
The albedo information is from Cruikshank et al. (2006) and the colors
from Barucci et al. (2005).
^b rcorr is the Pearson correlation coefficient.
^c N is the number of TNOs used for the correlation.
^d Sig is the significance (confidence level) of the correlation.
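The correlation entries above can be reproduced from the Pearson coefficient plus a significance estimate. The paper does not state which significance test was used, so the sketch below uses the Fisher z-transform with a normal approximation, which is only one reasonable choice:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_confidence(r, n):
    """Two-sided confidence that a correlation r from n points is nonzero,
    using the Fisher z-transform and a normal approximation."""
    z = 0.5 * math.log((1.0 + r) / (1.0 - r)) * math.sqrt(n - 3)
    return math.erf(abs(z) / math.sqrt(2.0))
```

For r = 0.50 with N = 19 this approximation gives a confidence of about 0.97, consistent with the 97% entry in the first row of Table 4.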
Fig. 1.— The Phase Dispersion Minimization (PDM) plot for (120348) 2004 TY364. The
best fit single-peaked period is near 5.85 hours.
– 39 –
Fig. 2.— The phased best fit single-peaked period for (120348) 2004 TY364 of 5.85 hours. The
peak-to-peak amplitude is about 0.22 magnitudes. The data from November and December
has been vertically shifted to correspond to the same phase angle as the data from October
using the phase function found for this object in this work. Individual error bars for the
measurements are not shown for clarity but are generally ±0.01 mags as seen in Table 1.
– 40 –
Fig. 3.— The phased double-peaked period for (120348) 2004 TY364 of 11.70 hours. The
data from November and December has been vertically shifted to correspond to the same
phase angle as the data from October using the phase function found for this object in this
work. Individual error bars for the measurements are not shown for clarity but are generally
±0.01 mags as seen in Table 1.
– 41 –
Fig. 4.— The Phase Dispersion Minimization (PDM) plot for (84922) 2003 VS2. The best
fit is the double-peaked period near 7.41 hours.
– 42 –
Fig. 5.— The phased best fit double-peaked period for (84922) 2003 VS2 of 7.41 hours. The
peak-to-peak amplitude is about 0.21 magnitudes. The two peaks have differences since one
is slightly wider while the other is slightly shorter in amplitude. This is the best fit period
for (84922) 2003 VS2. Individual error bars for the measurements are not shown for clarity
but are generally ±0.01 mags as seen in Table 1.
– 43 –
Fig. 6.— The phased single-peaked period for (84922) 2003 VS2 of 3.70 hours. The single
peaked period for 2003 VS2 does not look well matched and has a larger scatter about the
solution compared to the double-peaked period shown in Figure 5. Individual error bars for
the measurements are not shown for clarity but are generally ±0.01 mags as seen in Table
– 44 –
Fig. 7.— The phased single-peaked period for (84922) 2003 VS2 of 4.39 hours. Again, the
single peaked period for 2003 VS2 does not look well matched and has a larger scatter about
the solution compared to the double-peaked period shown in Figure 5. Individual error bars
for the measurements are not shown for clarity but are generally ±0.01 mags as seen in Table
– 45 –
Fig. 8.— The phased double-peaked period for (84922) 2003 VS2 of 8.77 hours. This double-
peaked period for 2003 VS2 does not look well matched and has a larger scatter about the
solution compared to the 7.41 hour double-peaked period shown in Figure 5. Individual
error bars for the measurements are not shown for clarity but are generally ±0.01 mags as
seen in Table 1.
– 46 –
Fig. 9.— The Phase Dispersion Minimization (PDM) plot for 2001 YH140. The best fit is
the single-peaked period near 13.25 hours. The other possible fits near 8.5, 9.15 and 10.25
hours don’t look good when phasing the data and viewing the result by eye.
– 47 –
Fig. 10.— The phased best fit single-peaked period for 2001 YH140 of 13.25 hours. The peak-
to-peak amplitude is about 0.21 magnitudes. Individual error bars for the measurements are
not shown for clarity but are generally ±0.02 mags as seen in Table 1.
– 48 –
Fig. 11.— The flat light curve of 2001 UQ18. The KBO may have a significant amplitude
light curve but further observations are needed to confirm.
– 49 –
Fig. 12.— The flat light curve of (55565) 2002 AW197. The KBO has no significant short-term
variations larger than 0.03 magnitudes over two days.
– 50 –
Fig. 13.— The flat light curve of (119979) 2002 WC19. The KBO has no significant short-
term variations larger than 0.03 magnitudes over four days.
– 51 –
Fig. 14.— The flat light curve of (119979) 2002 WC19. The KBO has no significant short-
term variations larger than 0.03 magnitudes over four days.
– 52 –
Fig. 15.— The flat light curve of (120132) 2003 FY128. The KBO has no significant short-
term variations larger than 0.08 magnitudes over two days.
– 53 –
Fig. 16.— The flat light curve of Eris (2003 UB313) in October 2005. The KBO has no
significant short-term variations larger than 0.01 magnitudes over several days.
– 54 –
Fig. 17.— The flat light curve of Eris (2003 UB313) in November and December 2005. The
KBO has no significant short-term variations larger than 0.01 magnitudes over several days.
– 55 –
Fig. 18.— The flat light curve of (90482) Orcus 2004 DW in February 2005. The KBO has
no significant short-term variations larger than 0.03 magnitudes over several days.
– 56 –
Fig. 19.— The flat light curve of (90482) Orcus 2004 DW in March 2005. The KBO has no
significant short-term variations larger than 0.03 magnitudes over several days.
– 57 –
Fig. 20.— The flat light curve of (90568) 2004 GV9 in February 2005. The KBO has no
significant short-term variations larger than 0.1 magnitudes over several days.
– 58 –
Fig. 21.— The flat light curve of (90568) 2004 GV9 in March 2005. The KBO has no
significant short-term variations larger than 0.1 magnitudes over several days.
– 59 –
Fig. 22.— This plot shows the diameter of asteroids and TNOs versus their light curve
amplitudes. The TNOs sizes if unknown assume they have moderate albedos of about 10
percent. For objects with flat light curves they are plotted at the variation limit found by
observations.
– 60 –
Fig. 23.— Same as the previous figure except the diameter versus the light curve period
is plotted. The dashed line is the median of known TNOs rotation periods (9.5 ± 1 hours)
which is significantly above the median large MBAs rotation periods (7.0± 1 hours). Pluto
falls off the graph in the upper right corner because of its slow rotation created by the tidal
locking to its satellite Charon.
– 61 –
Fig. 24.— The phase curve for Eris (2003 UB313). The dashed line is the linear fit to the
data while the solid line uses the Bowell et al. (1989) H-G scattering formalism. In order to
create only a few points with small error bars, the data has been averaged for each observing
night.
– 62 –
Fig. 25.— The phase curve for (90482) Orcus 2004 DW. The dashed line is the linear fit
to the data while the solid line uses the Bowell et al. (1989) H-G scattering formalism. In
order to create only a few points with small error bars, the data has been averaged for each
observing night.
– 63 –
Fig. 26.— The phase curve for (120348) 2004 TY364. The dashed line is the linear fit to the
data while the solid line uses the Bowell et al. (1989) H-G scattering formalism. In order to
create only a few points with small error bars, the data has been averaged for each observing
night.
– 64 –
Fig. 27.— The phase curve for (90568) 2004 GV9. The dashed line is the linear fit to the
data while the solid line uses the Bowell et al. (1989) H-G scattering formalism. In order to
create only a few points with small error bars, the data has been averaged for each observing
night.
– 65 –
Fig. 28.— The R-band reduced magnitude versus the R-band linear phase coefficient β(α < 2
degrees) for TNOs. R-band data is from this work and Sheppard and Jewitt (2002),(2003)
as well as Sedna from Rabinowitz et al. (2007) and Pluto from Buratti et al. (2003). A
linear fit is shown by the dahsed line. Larger objects (smaller reduced magnitudes) may
have smaller β at the 97% confidence level using the Pearson correlation coefficient.
– 66 –
Fig. 29.— Same as Figure 28 except for the V-band (squares) and I-band (diamonds). Pluto
and Charon data are from Buie et al. (1997) and the other data are from Rabinowitz et al.
(2007). Error bars are usually less than 0.04 mags/deg. The V-band data shows a similar
correlation (97% confidence, dashed line) as found for the R-band data in Figure 28, that
is larger objects may have smaller β. There is no correlation found using the I-band data
(dotted line).
– 67 –
Fig. 30.— Same as Figures 28 and 29 except is the albedo versus linear phase coefficient for
TNOs. Filled circles are R-band data, squares are V-band and diamonds are I-band data.
Albedos are from Cruikshank et al. (2006).
– 68 –
Fig. 31.— Same as Figure 28 except is the light curve amplitude versus the linear phase
coefficient for TNOs. TNOs with no measured rotational variability are plotted with their
possible amplitude upper limits. No significant correlation is found.
– 69 –
Fig. 32.— Same as Figure 28 except is the B-I broad band colors versus the linear phase
coefficient for TNOs. Colors are from Barucci et al. (2005). No significant correlation is
found.
|
0704.1637 | Fischler-Susskind holographic cosmology revisited | Fischler-Susskind holographic cosmology revisited
Pablo Diaz∗ M. A. Per†, Antonio Segui‡
Departamento de Fisica Teorica
Universidad de Zaragoza. 50009-Zaragoza. Spain
Abstract
When Fischler and Susskind proposed a holographic prescription based on the
Particle Horizon, they found that spatially closed cosmological models do not verify
it due to the apparently unavoidable recontraction of the Particle Horizon area.
In this article, after a short review of their original work, we expose graphically
and analytically that spatially closed cosmological models can avoid this problem if
they expand fast enough. It has been also shown that the Holographic Principle is
saturated for a codimension one brane dominated Universe. The Fischler-Susskind
prescription is used to obtain the maximum number of degrees of freedom per Planck
volume at the Planck era compatible with the Holographic Principle.
∗e-mail: [email protected]
†e-mail: [email protected]
‡e-mail: [email protected]
http://arxiv.org/abs/0704.1637v2
1 Introduction
One of the most promising ideas that emerged in theoretical physics during the last decade
was the Holographic Principle according to which a physical system can be described
uniquely by degrees of freedom living on its boundary [1, 2]. If the Holographic Principle
is indeed a primary principle of fundamental physics it should be verified when the entire
universe is considered as a physical system. That is, the physical information inside
any cosmological domain should be holographically codified on its boundary area. But
obviously, if an unlimited region of scale L is considered, its entropy content will scale like
volume L3 and its boundary area like L2; so inevitably the former will grow quicker than
the second and the holographic codification will be impossible for big size cosmological
domains. The origin of the Holographic Principle is related to black hole horizons; so,
it seems natural to relate it now to any kind of cosmological horizon. It is at this stage
when the causal relationship that gives rise to cosmological horizons should be taken into
account. William Fischler and Leonard Susskind proposed a cosmological holographic
prescription based on the particle horizon [3]
SPH ≤
. (1)
The entropy content inside the particle horizon of a cosmological observer cannot be
greater than one quarter of the horizon area in Planck units. Enforcing this condition
for the future of any cosmological model with constant ω = p/ρ (Friedmann-Robertson-
Walker models, FRW) spatially flat, Fischler and Susskind found the limit ω < 1. The
compatibility of this limit with the dominant energy condition seems to support the
Fischler-Susskind (FS) holographic prescription. In section 2, a detailed deduction of this
limit is shown. Moreover, the verification of the FS prescription in the past is enforced,
finding a limit for the entropy density in the Planck era.
On the other hand, in spatially closed cosmological models, the FS holographic pre-
scription yields to apparently unavoidable problems. Indeed, if the model has compact
homogeneous spatial sections, all of them of finite volume, then a physical system cannot
have an arbitrary big size at a given time. But for this kind of cosmological models the
boundary area does not grow uniformly when the size of a cosmological domain increases.
Graphically, it is shown that when the domain crosses the equator the boundary area
begins to decrease, going to zero when the domain reaches the antipodes and covers the
entire universe [3, 4]. Figure 1 show this behavior for spatial dimension n = 2.
Raphael Bousso proposed a different holographic prescription [4, 5] based on the evalua-
tion of the entropy content over certain null sections named light-sheets. This prescription
solves the problems associated to spatially closed cosmological models, but it also lacks
the simplicity of the FS prescription. The Bousso prescription will not be used here but
it can be shown that both prescriptions are closely related: Two of the light-sheets de-
fined by Bousso give rise to the past light cone of a cosmological observer1. According to
our previous work [6], the entropy content over the past light cone is proportional to the
entropy content over the particle horizon (defined over the homogeneous spatial section
of the observer), and for adiabatic expansion both will be exactly the same. In fact, the
1According to Bousso's nomenclature, every past light cone can be built with the light-sheets (+−)
and (−+) associated with the maximum of that cosmological light cone, also called the apparent horizon [4, 5].
Figure 1: Decrease of the area of a domain defined in a compact spatial section when its volume
increases and goes beyond one half of the total volume (further than the equator).
original FS prescription applies to the entropy content over the ingoing past directed null
section associated to a given spherical boundary; the key is that the verification for the
particle horizon (1) guarantees the verification for every spherical boundary. In conclu-
sion, the FS holographic prescription (1) also imposes a limit on the entropy content over
the past light cone, and then it may also be regarded covariant as well as the Bousso
prescription.
In section 3 of this paper, general explicit solutions for the area and the volume of spherical
cosmological domains are obtained in spatially closed (n+1)-dimensional FRW models.
It is shown that the boundary area of the particle horizon in recontracting models
(dominated by conventional matter) does in fact tend to zero; so, the FS holographic
prescription will be violated for this kind of model. But it is also shown that non-recontracting
models, that is, spatially closed (n+1)-dimensional FRW models dominated
by quintessence matter (bouncing models), do not necessarily present this problematic
behavior. These models present accelerated expansion, and particularly only the most
accelerated models avoid the collapse of the particle horizon. So, it is deduced that a
rapid enough cosmological expansion does not allow the particle horizon to evolve enough
over the hyperspherical spatial section to reach the antipodes, so the boundary area never
decreases. It will be shown that the sufficiently accelerated FRW model corresponds to
universes dominated by a codimension one brane gas; thus, such a fluid could saturate
the Holographic Principle.
Section 3 concludes with a discussion of our results in contrast with other related works.
Especially interesting are the recent works on holographic dark energy. The simplified
argument is that a holographic limit on the entropy of a cosmological domain could also
imply a limit on its energy content; thus, the Holographic Principle applied to cosmology
might illuminate the dark energy problem [7, 8]. It is argued how our results could improve
the compatibility between the particle horizon and holographic dark energy. Finally,
section 4 presents the basic conclusions of our work.
2 Fischler-Susskind holography in flat universes
We will consider (n+1)-dimensional cosmological models with constant parameter ω = p/ρ
(FRW models). Here we study the spatially flat case k = 0; the scale factor grows
according to the power law

R(t) = R0 (t/t0)^{2/(n(1+ω))} ∝ t^{1−1/α} ,    (2)
where subscript 0 refers to the value of a magnitude at an arbitrary reference time t0. For
later convenience we have defined
α ≡ n(1+ω) / ( n(1+ω) − 2 ) ,    (3)
n being the spatial dimension of the model. In this section, only conventional matter
dominated models –which are decelerated and verify α > 1– will be considered, and
quintessence dominated models –which are accelerated and verify α < 0– are left for the
next section. Table 1 summarizes these cases and gives the specific limiting values
acceleration | ω-range           | α-range          | denomination
R̈ < 0        | 2/n − 1 < ω ≤ +1 | α ≥ n/(n−1) > 0  | conventional matter
R̈ = 0        | ω = 2/n − 1      | α = ∞            | curvature dominated
R̈ > 0        | −1 ≤ ω < 2/n − 1 | α ≤ 0            | quintessence matter

Table 1: Relation among the cosmological acceleration, the dynamically dominant matter and
the parameters of its equation of state, ω and α. The ranges can be obtained from the spatially
flat case (2), but they are also valid for the positively (18) and negatively curved cases. The
dominant energy condition |ω| ≤ 1 and the value ω = −1, related to a cosmological constant
(de Sitter universe), have also been included.
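The map α(ω, n) of Eq. (3) and the classification of Table 1 can be sketched numerically; a minimal illustration (function and threshold names are ours):

```python
# Sketch: alpha(w) from Eq. (3) and the Table 1 classification.
def alpha(w, n=3):
    return n * (1 + w) / (n * (1 + w) - 2)

def regime(w, n=3):
    w_lim = 2.0 / n - 1.0              # Eq. (11): curvature-dominated limit
    if w > w_lim:
        return "conventional matter (decelerated, alpha > 0)"
    if w == w_lim:
        return "curvature dominated (alpha = infinity)"
    return "quintessence matter (accelerated, alpha <= 0)"

print(alpha(0.0), regime(0.0))                # dust, n = 3: alpha = 3
print(alpha(1.0 / 3.0), regime(1.0 / 3.0))    # radiation: alpha = 2
print(alpha(-2.0 / 3.0), regime(-2.0 / 3.0))  # limiting case alpha = -1
```

For instance, dust (ω = 0, n = 3) gives α = 3 and radiation (ω = 1/3) gives α = 2, both decelerated, while ω = −2/3 gives the limiting accelerated value α = −1 used in section 3.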
Given the scale factor, the particle horizon (named future event horizon in [9]) for
decelerated FRW models can be obtained as [10, 11, 12]

DPH(t) = R(t) ∫_0^t dt′/R(t′) = α t .    (4)
Assuming adiabatic expansion, the entropy in a comoving volume must be constant; so,
the spatial entropy density scales like
s(t) R(t)^n = s0 R0^n = constant ⇒ s(t) = s0 R0^n R(t)^{−n} .    (5)
Now the entropy content inside the particle horizon can be computed
SPH(t) = s(t) VPH(t) = s0 R0^n R(t)^{−n} (ω_{n−1}/n) DPH(t)^n ,    (6)
where ω_{n−1} is the area of the unit sphere. The FS holographic prescription [3] demands
that the above entropy content must not be greater than one quarter of the particle
horizon area (1). Then
SPH(t) = s(t) (ω_{n−1}/n) DPH(t)^n ≤ (1/4) APH(t) = (1/4) ω_{n−1} DPH(t)^{n−1} ;    (7)

performing some cancellations and introducing (5), we arrive at

DPH(t) ≤ n/(4 s(t)) = ( n/(4 s0 R0^n) ) R(t)^n .    (8)
This inequality is the simplified form of the FS holographic prescription for spatially flat
cosmological models. Now, according to the FS work the inequality should be imposed
in the future of any FRW model. For this purpose, comparing the exponents of temporal
evolution is sufficient: the particle horizon evolves linearly (4) and the scale factor evolves
according to (2). Thus, we obtain a family of cosmological models which will verify the
FS holographic prescription in the future
1 < 2n/( n(1+ω) ) = 2/(1+ω) ⇒ ω < 1 .    (9)
This bound on the parameter of the equation of state coincides with the limit of Special
Relativity; the sound speed in a fluid given by v2 = δp/δρ must not be greater than
the speed of light. When ω = 1, the entropic limit could be also verified depending on
the numerical prefactors (see condition (11) below). So, according to this, the dominant
energy condition enables the verification of the FS holographic prescription2 in the future.
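The exponent comparison behind (9) can be made explicit: by (4) and (5), the ratio SPH/APH = s(t)DPH(t)/n scales as t^{1−2/(1+ω)}, so the FS bound holds at late times exactly when this exponent is negative. A minimal numeric sketch (function name ours):

```python
# Sketch: S_PH/A_PH ~ s(t) D_PH(t) ~ t^(1 - 2/(1+w)); the FS bound holds
# in the future iff this exponent is negative, i.e. iff w < 1.
def ratio_exponent(w):
    return 1.0 - 2.0 / (1.0 + w)

for w in (1.0 / 3.0, 0.9, 1.0, 1.1):
    e = ratio_exponent(w)
    print(f"w = {w:.2f}: exponent {e:+.3f} -> "
          f"{'violated' if e > 0 else 'verified'} in the future")
```

The sign flips precisely at ω = 1, where the prefactors of condition (11) decide the marginal case.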
But the previous FS argument faces an objection that we will not sidestep. If we
enforce that in the future the particle horizon area dominates over its entropy content,
then, both being power laws, it is unavoidable that in the past the entropy content dominated
over the horizon area. In other words, these functions intersect at a given
time, so that at any earlier time the holographic codification will be impossible. This
intersection time depends on the numerical prefactors that we have previously left out.
Our proposal is the enforcement of the intersection time near the Planck time; thus,
the apparent violation of the holographic prescription will be restricted to the Planck era.
Imposing this limit we will obtain an interesting relation involving the numerical prefactors;
so, we have to enforce the simplified holographic relation (8) at the Planck time (tP l = 1).
Using (4) and (3) we reach
SPH(tPl) ≤ (1/4) APH(tPl) ⇒ α < n/(4 sPl) ⇒ sPl < ( n(1+ω) − 2 )/( 4(1+ω) ) .    (10)
The first idea about this result is that the verification of the Holographic Principle needs,
in general, not too high an entropy density; concretely, the FS prescription gives us a limit
2The reverse implication is not valid: the FS prescription allows temporal violations of the dominant
energy condition [13].
on the entropy density at the Planck time. This fact is usually skipped in the literature.
Perhaps it is assumed that an entropy density at the Planck time sP l of the same order
as one is not problematic. A second look at the previous result leads one to interpret
it as a restriction that the Holographic Principle imposes on the complexity of our world: the
number of degrees of freedom per Planck volume at the Planck era must not be greater
than the previous value. Thus, taking n = 3 and assuming a radiation dominated universe
(ω = 1/3) at early times, we get sP l < 3/8. Note also that this result does not depend
on the final behavior of the model, so it is also valid for our universe, which is
supposed to be dominated now by some kind of dark energy.
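As a concrete check of the bound (10) (helper name ours):

```python
# Sketch: Planck-era entropy density bound of Eq. (10),
# s_Pl < (n(1+w) - 2) / (4(1+w)).
def s_planck_bound(w, n=3):
    return (n * (1 + w) - 2) / (4 * (1 + w))

print(s_planck_bound(1.0 / 3.0))   # radiation, n = 3: 3/8
print(s_planck_bound(1.0))         # stiff fluid w = 1: 1/2
```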
Restriction (10) is not trivial. If we consider a cosmological model dynamically dominated
by a fluid with ω very near the limit

ω_lim = 2/n − 1    (α = ∞) ,    (11)
then the entropy density required at the Planck time by (10) will be absurdly small. This is
because models with matter driven by (11) do not have a particle horizon
(R(t) ∝ t); near this limit the particle horizon becomes arbitrarily large, so the entropy
content –which scales with the volume– can hardly be codified on the horizon area. Moreover,
according to [14] the observational data are compatible with a universe very near the
linear evolution; so this case cannot be discarded.
Bousso [4], Kaloper and Linde [15] proposed an ad hoc solution based on a redefinition
of the particle horizon. They took integral (4) from the Planck time t = 1 instead of
t = 0 as the starting point. However, it is not a valid solution for accelerated models
(ω < ω_lim, i.e. α < 0); let us see why. According to the new prescription, the
redefined particle horizon D̃PH grows like the scale factor (2):

D̃PH(t) = R(t) ∫_1^t dt′/R(t′) = α( t − t^{1−1/α} ) ∼ −α t^{1−1/α} .    (12)
So, computing the associated entropy content S̃PH –with the entropy density (5)– leads
to a function that approaches a constant value; it can be simplified taking the Planck
time as reference time
S̃PH(t) = s0 R0^n R(t)^{−n} (ω_{n−1}/n) D̃PH(t)^n ⇒ lim_{t→∞} S̃PH(t) = sPl |α|^n ω_{n−1}/n .    (13)
This limit for the entropy content seems fairly unnatural because it is of the same order
as one.
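This constant limit can be checked numerically with the closed forms (12)-(13); a sketch for α = −2 (parameter values and names are illustrative):

```python
# Sketch of Eqs. (12)-(13): with the horizon redefined to start at t = 1,
# the entropy content approaches the constant s_Pl |alpha|^n omega_{n-1}/n.
import math

alpha, n, s_pl = -2.0, 3, 1.0
omega2 = 4.0 * math.pi                       # area of the unit 2-sphere

def R(t):                                    # scale factor, Planck units
    return t ** (1.0 - 1.0 / alpha)

def D_tilde(t):                              # redefined horizon, Eq. (12)
    return alpha * (t - t ** (1.0 - 1.0 / alpha))

def S_tilde(t):                              # entropy content, Eq. (13)
    return s_pl * R(t) ** (-n) * (omega2 / n) * D_tilde(t) ** n

limit = s_pl * abs(alpha) ** n * omega2 / n
for t in (1e2, 1e4, 1e6):
    print(t, S_tilde(t) / limit)             # monotonically approaches 1
```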
3 Fischler-Susskind holography in closed universes
Let us focus on Robertson-Walker metrics with closed spatial sections (curvature param-
eter k = +1). The line element in conformal coordinates (η, χ) reads
ds² = R²(η) [ −dη² + dχ² + sin²(χ) dΩ²_{n−1} ] ,    (14)
where dΩ²_{n−1} is the metric of the (n−1)-dimensional unit sphere. The inner volume and
area of a spherical domain of coordinate radius χ can be obtained by integrating this
metric at a given cosmological time
A(η, χ) = ω_{n−1} R(η)^{n−1} sin^{n−1}(χ) ,    (15)

V(η, χ) = R(η)^n ω_{n−1} ∫_0^χ sin^{n−1}(χ′) dχ′ .    (16)
The entropy content inside this volume is obtained using the entropy density (5)
S(χ) = s0 R0^n ω_{n−1} ∫_0^χ sin^{n−1}(χ′) dχ′ ,    (17)
where scale factors R(t) have been cancelled; thus, the entropy content inside a comoving
volume is constant (adiabatic expansion). Note that S(χ) strictly grows with the conformal
size χ of the spherical domain; however, the boundary area A(η, χ) reaches a maximum
at the equator: for χ > π/2 the boundary area decreases, going to zero at the antipodes,
where χ → π (see Fig. 1). Similar problems appear when the cosmological model recontracts
to a Big Crunch, because every boundary area will shrink to zero. In both cases
holographic codification will be impossible. This problem will be reviewed in detail, and
a solution based on the cosmological acceleration will be proposed, in the next section.
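For n = 3 the integral in (16)-(17) is elementary, ∫ sin²χ′ dχ′ = (χ − sin χ cos χ)/2, and a short numeric sketch makes the geometric point explicit (units s0 = R0 = 1):

```python
# Sketch of Eqs. (15)-(17), n = 3: the boundary area peaks at the equator
# chi = pi/2 and vanishes at the antipodes, while S(chi) grows monotonically.
import math

omega2 = 4.0 * math.pi                       # area of the unit 2-sphere

def area(chi, R=1.0):                        # Eq. (15)
    return omega2 * R ** 2 * math.sin(chi) ** 2

def entropy(chi):                            # Eq. (17), with s0 = R0 = 1
    return omega2 * (chi - math.sin(chi) * math.cos(chi)) / 2.0

print(area(math.pi / 2), area(math.pi - 1e-6))   # maximum, then -> 0
print(entropy(math.pi / 2) < entropy(3.0))       # True: monotone growth
```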
3.1 Conventional matter dominated cosmological models
Fischler and Susskind applied the previous ideas to a FRW (3+1)-dimensional spatially
closed cosmological model, dynamically dominated by conventional matter [3]; the explicit
solution for the scale factor is
R(η) = Rm [ sin( η/(α−1) ) ]^{α−1} .    (18)
Here Rm is the maximum value of the scale factor for decelerated models (α > 1 for
conventional matter, see Table 1); it depends on the relation Ω between the energy density
of the model and the critical density
Rm ≡ R0 ( 1 − Ω0^{−1} )^{(1−α)/2} .    (19)
Introducing this scale factor in (15), and computing (17) for the usual case n = 3, the
relation between the entropy content and the boundary area of a spherical domain of
coordinate size χ at the conformal time η is obtained:

(S/A)(η, χ) = ( s0 R0³ / 4Rm² ) (2χ − sin 2χ) / [ sin^{2(α−1)}( η/(α−1) ) sin²χ ] .    (20)
It should also be kept in mind that the maximum domain accessible at a given time η is
the particle horizon; so this relation must be evaluated for χPH(η), the value that locates
the particle horizon for each η [10, 12]
χPH(η) = η − ηBB, (21)
where ηBB is the value of the conformal time assigned to the beginning of the universe
(usually the Big Bang). A quick observation of relation (20) shows that the denominator
goes to zero at χPH = π (antipodes) and also when the scale factor collapses in a Big
Crunch; for both cases the ratio SPH/APH diverges and so the holographic codification (1)
is impossible. All spatially closed FRW models dynamically dominated by conventional
matter (that is, −1/3 < ω ≤ 1 for n = 3) will finally recollapse; so, these models will
violate the FS holographic prescription.
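The divergence can be reproduced numerically for the dust case (α = 3, for which (18) gives R(η) = Rm sin²(η/2) and ηBB = 0), in units s0 = R0 = Rm = 1; the blow-up at the antipodes is independent of the overall prefactors:

```python
# Sketch: S/A for a matter-dominated closed model (n = 3, alpha = 3);
# with chi_PH = eta, the ratio blows up as the horizon nears chi = pi.
import math

def R(eta):                                  # dust solution of (18)
    return math.sin(eta / 2.0) ** 2

def ratio(eta):
    chi = eta                                # Eq. (21) with eta_BB = 0
    S = (2 * chi - math.sin(2 * chi)) / 4.0  # int_0^chi sin^2, from (17)
    A = R(eta) ** 2 * math.sin(chi) ** 2     # from (15)
    return S / A

for eta in (1.0, 2.0, 3.0, 3.14):
    print(eta, ratio(eta))                   # grows without bound near pi
```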
3.2 Quintessence dominated cosmological models
As seen in the last section, some scenarios can become problematic for the holographic
prescription. This section aims to present an alternative solution for some of those
troubling cosmological models. The key point in what follows is that not all
spatially closed cosmological models recollapse; for example a positive cosmological
constant could avoid the recontraction and finally provide an accelerated expansion. The
same can be said for different mechanisms which drive acceleration. The present study
provides an example where the final accelerated expansion is driven by a negative pres-
sure fluid; this means considering FRW spatially closed (curvature parameter k = +1)
cosmological models dynamically dominated by quintessence matter, that is α < 0 (see
Table 1).
The explicit solution for this kind of model is (18) as well, but its behavior is very different:
a negative exponent for the scale factor prevents it from reaching the problematic zero
value, so these models are safe from recollapsing in a Big Crunch and from presenting
a singular Big Bang. Now the scale factor takes a minimum value at some η; first the
universe contracts, but after this minimum it undergoes an accelerated expansion forever;
these are called bouncing models [16]. Bouncing models present the obvious advantage
of being free of singularities [17], and they also enjoy a renewed interest [18] due to the
observed cosmological acceleration [21], especially in relation to brane cosmology
[16]3. On the other hand, bouncing cosmologies meet many problems when trying
to reproduce the universe we observe; so the solution (18) must be considered only as
a toy model to study the final behavior of a spatially closed and finally accelerated
cosmological model. Now, formula (19) gives the minimum value of the scale factor Rm, and
according to it Rm tends to zero when the energy density tends to the critical density
(Ω → 1). For an almost flat bouncing cosmology, near the minimum of the scale
factor Rm quantum gravity effects could dominate, erasing every correlation coming from the
previous era4. So, in the following calculations the beginning of cosmological time is going
to be taken at the minimum of the scale factor (like a non-singular Big Bang); according
to (18), this corresponds to a conformal time ηBB = π(1−α)/2. The coordinate distance
to the particle horizon (21) is then
χPH(η) = η − ηBB = η − (π/2)(1 − α) .    (22)
3However, the simplest bouncing models associated with the general solution (18) are usually not
considered in the literature.
4George Gamow's words referring to bouncing models: “from the physical point of view we must forget
entirely about the precollapse period” [19].
It also follows from (18) that the scale factor diverges at η∞ = π(1 − α). This
bounded value of the conformal time implies a bounded value for the coordinate size of
the particle horizon χPH(η∞) too.
As argued before, problems for the FS holographic prescription arise at χPH = π, i. e. the
value at which a refocusing of the particle horizon on the antipodes of the observer takes
place (the horizon area goes to zero). However, this scenario can be avoided by preventing
the conformal time from reaching the problematic value (see Fig. 2); such FRW spatially
closed models will never present any particle horizon recontraction
χPH∞ < π ⇔ η∞ − ηBB = (π/2)(1 − α) < π ⇔ α > −1 .    (23)
Quintessence models also verify α < 0; the allowed range then becomes 0 > α > −1, which
corresponds to very accelerated cosmological models.
This result can be physically interpreted as follows: for very accelerated spatially closed
cosmological models the growth rate of the scale factor is so high that it does not permit
null geodesics to complete even half a rotation over the spatial sections (see Fig. 3). So the
particle horizon, far from reaching the antipodal point, presents an eternally increasing
area. This also happens for the limiting case α = −1 (ω = −2/3 if n = 3), due to the divergence
of the scale factor. This can be summarized in the following statement: every spatially
closed quintessence model with α ≥ −1 has an eternally increasing particle horizon area.
The volume of the spatial sections of spatially closed cosmological models is always
finite, and so is the entropy content; moreover, the entropy content of the universe
under adiabatic expansion is constant. Then, in accordance with the previous result, the
ratio SPH/APH remains finite and goes to zero (see Fig. 4); now, using (3) leads to the
conclusion that the FS holographic limit is also compatible with spatially closed FRW
models verifying

α ≥ −1 ⇔ ω ≤ 1/n − 1    (n = 3 : ω ≤ −2/3) .    (24)
D. Youm [22] applies the same argument to brane universes and arrives at similar conclusions.
Note that the limiting value ω = 1/n − 1 corresponds to a gas of codimension-one
branes [23]; with this kind of matter the FS holographic limit could be saturated,
depending on the numerical prefactors (like the value of the entropy density s0).
The FS prescription is not violated in the past either, since the entropy content SPH goes
to zero faster than the particle horizon area APH as the beginning is approached, in such
a way that the ratio SPH/APH also goes to zero. This behavior may be checked by introducing
(22) into the general equation (20):
(S/A)(χPH) = sm (χPH − sin χPH cos χPH)/(2 sin²χPH) [ cos( χPH/(α−1) ) ]^{2(1−α)} ,    (25)

χPH ≪ π :  (S/A) ≃ (1/3) sm χPH ,    (26)
where sm is the spatial entropy density at the beginning of the universe, which is chosen
as reference time (so s0 = sm and R0 = Rm). Fig. 4 shows function (25) for different
values of α(ω); there, the behavior that has been analytically deduced may be graphically
Figure 2: Penrose diagrams for spatially closed FRW universes dominated by quintessence
(spatial dimension n = 3); at the “Big-Bounce” the scale factor reaches a minimum but at the
“future infinite” diverges. Depending on the particle horizon behavior two very different cases
are shown:
• On the left, the particle horizon reaches the antipodes χ = π; in this case the particle horizon
area first grows, but later the horizon surpasses the equator of the hyperspherical spatial section
and the area finally decreases and shrinks to zero (see Fig. 1) in a finite time. In this case the
holographic codification will be impossible.
• But on the right the model is more accelerated, so the scale factor diverges for a lower
value of the conformal time; the diagram height is therefore shorter and the particle horizon
cannot reach the antipodes. In this case the particle horizon area diverges (due to the divergence
of the scale factor at future infinity) and the holographic codification is always possible.
The height of the diagram, ∆η, discriminates between the two behaviors; the limiting case is
obviously ∆η = π, from which the limit value ω = −2/3 is obtained. For this limiting case the
particle horizon reaches the antipodes at future infinity; the scale factor diverges, the particle
horizon area also diverges and, as a consequence, the holographic codification is allowed. So,
the ω-range compatible with holographic codification on the particle horizon is −1 ≤ ω ≤ −2/3,
which corresponds to very accelerated spatially closed cosmological models. In general, a
sufficient cosmological acceleration does not permit the recontraction of the particle horizon at
the antipodes and enables the Fischler-Susskind holographic prescription.
[Figure 3 plot: “Different Particle Horizon Behavior” — particle horizons in accelerated
(α < 0, quintessence) spatially closed FRW models, for an observer at the Big Bounce.]
Figure 3: Polar representation of particle horizons for quintessence-dominated (α < 0) spatially
closed FRW models. Future light cones are represented from the beginning η = ηBB (Big
Bounce) for an observer at χ = 0. For α < −1 the particle horizon reconverges at the antipodes
(it reaches and surpasses the value χ = π), so the particle horizon area shrinks to zero; this
shrinkage is also shown in the figure for a particular future light cone. However, for α ≥ −1 the
particle horizon does not reconverge, since the cosmological acceleration does not allow it. The
FS holographic prescription would be verified in this case. A thick line has been used to show
the limiting case α = −1 (ω = −2/3 if n = 3).
The accelerated growth of the closed spatial sections (3-spheres) is shown by concentric circles;
the smallest of them is considered the beginning of the universe, so all the particle horizons
(future light cones) arise from it. In this kind of representation the radial distance coincides
with the physical radius of the spatially closed model. So, in the figure, light cones do not
show the usual 45-degree evolution. In fact, at the beginning, the future light cones are very
flattened, since the scale factor of bouncing models evolves very slowly near the minimum,
which is considered the beginning of time.
[Figure 4 plot: “Evolution of the Entropy - Area relation” — SPH/APH versus χPH ∈ (0, π)
for α = −0.2 (ω = −8/9), α = −1.6 (ω = −3/5) and the limiting case α = −1 (ω = −2/3).]
Figure 4: Evolution of the quotient SPH/APH as a function of the coordinate distance χPH as
the particle horizon evolves, assuming sm = 1. Functions for different values of the parameter
α(ω) are shown. A thick line represents the limit case α = −1. For α < −1 (ω > −2/3 if
n = 3) the quotient diverges as the particle horizon reaches χPH = π (the particle horizon area
shrinks to zero at the antipodes of a fiducial observer). But for very accelerated models, α ≥ −1
(ω ≤ −2/3 if n = 3), the quotient is always finite which is a necessary condition for the FS
holographic prescription to be verified.
verified. Looking at the maxima of the SPH/APH functions shows that, for the non-problematic
cases (α ≥ −1), the value 0.5 is an upper bound, so that

α ≥ −1  (n = 3, ω ≤ −2/3)  ⇒  (SPH/APH)(η) < 0.5 sm .    (27)
The maximum initial entropy density compatible with the FS entropic limit depends on
this bound and this turns out to be
sm ≤ 1/2 ⇒ SPH ≤ APH/4 .    (28)
This means that imposing no more than one degree of freedom per two Planck volumes
is enough to ensure the verification of the FS prescription for spatially closed and
accelerated FRW models with α ≥ −1.
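The bound (27) can be probed numerically. The sketch below scans the ratio obtained by inserting (18) and (22) into (15) and (17) for n = 3 (our reconstruction; units sm = Rm = 1, overall prefactor set to one), checking that the maximum stays below 0.5 for α ≥ −1:

```python
# Sketch: scan S_PH/A_PH over the full horizon range chi in (0, chi_max),
# chi_max = (pi/2)(1 - alpha), for quintessence closed models (alpha < 0).
import math

def ratio(chi, alpha):
    geom = (chi - math.sin(chi) * math.cos(chi)) / (2.0 * math.sin(chi) ** 2)
    # scale-factor term: sin(eta/(alpha-1)) becomes cos(chi/(alpha-1)) via (22)
    return geom * math.cos(chi / (alpha - 1.0)) ** (2.0 * (1.0 - alpha))

def max_ratio(alpha, samples=2000):
    chi_max = math.pi / 2.0 * (1.0 - alpha)
    return max(ratio(chi_max * k / samples, alpha) for k in range(1, samples))

for a in (-0.2, -1.0):
    print(a, max_ratio(a))        # both maxima stay below 0.5
```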
3.3 A more realistic cosmological model
The previous results are based on a simple explicit solution for the scale factor (18), but its
beginning (the bounce) is probably far from the real evolution of our universe. Here the
opposite point of view is taken: an explicit, but not simple, two-fluid solution mimics a
spatially closed cosmological model with the observed behavior. The Friedmann
equations with curvature parameter k = +1 can be solved exactly for a universe initially
dominated by radiation plus a positive cosmological constant Λ that provides the
desired final acceleration5. The scale factor then evolves as
R(t) = [ √(3Cγ/Λ) sinh( 2√(Λ/3) t ) + (3/(4Λ)) ( 2 − 2 cosh( 2√(Λ/3) t ) ) ]^{1/2} ,    (29)
where Cγ is a constant related to the radiation density ργ0 measured at an arbitrary
reference time:

Cγ ≡ (8πG/3) ργ0 R0⁴ .    (30)
Due to the initial deceleration (radiation dominated era) this model presents a genuine
particle horizon defined by the future light-cone from the Big-Bang. The evolution of this
light-front over the compact spatial sections is better described by the conformal angle
χPH(t) = ∫_0^t dt′/R(t′) .    (31)
As in the previous section, if this conformal angle reaches the value π in a finite time,
the particle horizon has covered the whole spatial section, that is, it has
reached the antipodes. There the particle horizon area is zero and the FS holographic
prescription is not verified. But the proposed model is finally dominated by a positive
Λ that provides an extreme (exponential) cosmological acceleration that could prevent
the refocusing of the particle horizon. It can be checked that the conformal angle never
reaches the problematic value π when the parameters verify CγΛ > 1.2482 (in Planck
units).
Experimental measurements suggest that our universe is flat or almost flat; here the
second case is assumed, based on the value Ω = 1.02±0.02 from the combination of SDSS
and WMAP data [20]. The best fit of the scale factor (29) to the standard cosmological
parameters H0, t0 and ΩΛ takes place for CγΛ ∼ 700. Thus, the final acceleration of our
universe seems to be enough to avoid the refocusing of the particle horizon; in particular,
it will tend to the asymptotic value χPH∞ ∼ 0.5 rad. The conclusion is that if our
universe is positively curved and its evolution is similar to (29), then it could verify the
FS holographic prescription far from saturation, due to the ever-increasing character of
the particle horizon area.
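The refocusing condition can be checked numerically. The sketch below integrates (31) with one exact radiation-plus-Λ, k = +1 scale factor consistent with (29)-(30), namely R²(t) = √(3Cγ/Λ) sinh(2√(Λ/3) t) − (3/2Λ)(cosh(2√(Λ/3) t) − 1) (our reconstruction; Λ = 1, Planck units):

```python
# Sketch: chi_PH = int_0^t dt'/R(t') for radiation + Lambda, k = +1;
# for C_gamma*Lambda = 700 the angle saturates near ~0.5 rad (< pi),
# while for C_gamma*Lambda = 1 it crosses pi (antipodal refocusing).
import math

LAM = 1.0

def R(t, Cg):
    w = 2.0 * math.sqrt(LAM / 3.0)
    R2 = (math.sqrt(3.0 * Cg / LAM) * math.sinh(w * t)
          - 1.5 / LAM * (math.cosh(w * t) - 1.0))
    return math.sqrt(R2)

def chi_ph(t_max, Cg, steps=50000):
    # midpoint rule; the 1/sqrt(t) behaviour at t = 0 is integrable
    h = t_max / steps
    return sum(h / R((k + 0.5) * h, Cg) for k in range(steps))

print(chi_ph(30.0, 700.0))    # ~0.5 rad, well below pi
print(chi_ph(30.0, 1.0))      # exceeds pi: horizon refocusing
```

By t = 30 (Planck units) both integrals have essentially saturated, since the late Λ-dominated phase makes 1/R decay exponentially.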
3.4 Discussion and related works
After Fischler and Susskind exposed the problematic application of the Holographic
Principle to spatially closed models [3], and R. Easther and D. Lowe confirmed
these difficulties [24], several authors proposed feasible solutions. Kalyana Rama [25]
5For a small enough Λ the attractive character of the radiation always dominates and the universe
recollapses in a Big Crunch. As in the classical Lemaître model (initially dominated by pressureless
matter), there exists a critical value Λc which provides a static but unstable model.
proposed a two-fluid cosmological model, and found that when one fluid was of quintessence
type the FS prescription would be verified under some additional conditions. N. Cruz
and S. Lepe [26] studied cosmological models with spatial dimension n = 2, and also found
that models with negative pressure could verify the FS prescription. There are some
alternative approaches, such as [13], which are worth mentioning. All these authors analyzed
mathematically the functional behavior of the ratio S/A; our work, however, aims to support
the mathematical analysis with a simple picture: ever-expanding spatially closed cosmological
models could verify the FS holographic prescription since, due to the cosmological
acceleration, future light cones could not reconverge into focal points and, so, the particle
horizon area would never shrink to zero.
As one can imagine, by virtue of the previous argument there are many spatially closed
cosmological models which fulfill the FS holographic prescription; ensuring a sufficiently
accelerated final era is enough. Examples other than quintessence include spatially closed
models with conventional matter and a positive cosmological constant, the so-called oscillating
models of the second kind [27]. In fact, the late evolution of this family of models is
dominated by the cosmological constant, which corresponds to ω = −1, and this value
verifies (24). Roughly speaking, an asymptotically exponential expansion will provide
enough acceleration to avoid the reconvergence of future light cones.
One more remark about observational results comes to support the study of quintessence
models. If the fundamental character of the Holographic Principle as a primary principle
guiding the behavior of our universe is assumed, it looks reasonable to suppose the
saturation of the holographic limit. This is one of the arguments used by T. Banks and
W. Fischler [28, 29] to propose a holographic cosmology based on an early universe,
spatially flat, dominated by a fluid with ω = 1⁶.
the FS prescription for spatially flat FRW models, but it seems fairly incompatible with
observational results. However, for spatially closed FRW cosmological models, it has been
found that the saturation of the Holographic Principle is related to the value ω = −2/3
which is compatible with current observations (according to [30], ω < −0.76 at the 95%
confidence level). It is likely that the simplest bouncing model (18) does not describe our
universe correctly; however, as shown in this paper, the initial behavior of the universe
can drive the evolution of the particle horizon (the future light cone from the beginning)
to a saturated scenario compatible with the observed cosmological acceleration7. Thus,
the dark energy computation based on the Holographic Principle [7, 8] seems much more
plausible:
ρDE ∼ s T ∼ (SPH/VPH)(APH/VPH) ∼ DPH^{−2} .    (32)

Taking DPH ∼ 10 Gy gives ρDE ∼ 10^{−10} eV⁴, in agreement with the measured value [31].
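As a unit check of this estimate (conversion constants are standard values assumed here; DPH taken as ten giga-(light-)years):

```python
# Order-of-magnitude sketch of Eq. (32): rho_DE ~ M_P^2 / D_PH^2 in eV^4.
M_P = 1.22e28          # Planck mass in eV
GY_IN_M = 9.46e24      # 10^9 (light-)years in metres
M_IN_INV_EV = 5.07e6   # one metre in eV^-1 (from hbar*c ~ 197 MeV fm)

D_PH = 10.0 * GY_IN_M * M_IN_INV_EV     # horizon scale in eV^-1
rho_de = M_P ** 2 / D_PH ** 2           # energy density in eV^4
print(f"rho_DE ~ {rho_de:.1e} eV^4")    # order 1e-10 eV^4
```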
Finally, two recent conjectures concerning holography in spatially closed universes deserve
some comments. W. Zimdahl and D. Pavon [32] claim that the dynamics of holographic
dark energy in a spatially closed universe could solve the coincidence problem; however,
the cosmological scale necessary for the definition of the holographic dark energy seems
to be incompatible with the particle horizon [7, 8, 33]. In a more recent paper F. Simpson
6Banks and Fischler propose a scenario where black holes of the maximum possible size –the size of
the particle horizon– coalesce saturating the holographic limit; this “fluid” evolves according to ω = 1.
7Work in progress.
[34] proposed an imaginative mechanism in which the non-monotonic evolution of the
particle horizon over a spatially closed universe controls the equation of state of the dark
energy. The abundant work along these lines is still inconclusive, but it seems to be a fairly
promising direction.
4 Conclusions
It is usually believed that we live in a very complex and chaotic universe. The Holographic
Principle puts a bound on the complexity of our world, arguing that a more complex
universe would undergo a gravitational collapse. So, one might dare to say that the
gravitational interaction is responsible for the simplicity of our world. In this paper a
measure of the maximum complexity of the universe compatible with the FS prescription
of the Holographic Principle has been deduced: the maximum entropy density at the
Planck era has been computed under the assumption of a flat FRW universe (10), as well
as for a quintessence-dominated spatially closed FRW universe (28).
One of the main points of this paper is to overcome an extended prejudice which states that
the FS holographic prescription is, in general, incompatible with spatially closed cosmological
models. Only two very particular solutions –[25] and [26]– solved the problem, but
no physical arguments were given. It has been shown throughout this paper that cosmological
acceleration actually allows the verification of the FS prescription for a wide range of
spatially closed cosmological models.
Finally, let us take a further step, towards a clearer suggestion. Let us first assume
that the FS prescription is a correct method for the application of the Holographic
Principle in Cosmology; then, if our universe is spatially closed (although almost flat), it
should be accelerated by virtue of the FS prescription. In this sense, the observed
acceleration [30] reinforces the previous assumption. In fact, the experimental results are compatible
with k = 0 [31], but a very small positive curvature cannot be discarded [20, 30, 35, 36].
This reductionist use of the Holographic Principle is not usual in the literature. The most
common way is to search for a valid prescription for every cosmological model and every
scenario (like the Bousso solution [4, 5]). However, the only possible world we have evidence
of is the one which is observed, and maybe it is so because the Holographic Principle does
not permit a different one.
Acknowledgements
We acknowledge R. Bousso for his criticism and suggestions. This work has been supported by
MCYT (Spain) under grant FPA 2003-02948.
References
[1] G. ’t Hooft: Dimensional reduction in quantum gravity; in Salamfestschrift, pp. 284-296,
ed. A. Ali, J. Ellis, S. Randjbar-Daemi, World Scientific Co, Singapore (1993)
[gr-qc/9310026].
[2] L. Susskind: The world as a hologram; J. Math. Phys. 36, 6377 (1995)
[hep-th/9409089].
[3] W. Fischler, L. Susskind: Holography and Cosmology ; [hep-th/9806039].
[4] R. Bousso: The Holographic Principle; Rev. Mod. Phys. 74, 825 (2002)
[hep-th/0203101].
[5] R. Bousso: Holography in general space-times ; JHEP 9906, 028 (1999)
[hep-th/9906022].
[6] M. A. Per, A. J. Segui: Encoding the scaling of the cosmological variables with the
Euler Beta function; Int. J. Mod. Phys. A20, 4917 (2005) [hep-th/0210266].
[7] S. D. H. Hsu: Entropy bounds and dark energy ; Phys. Lett. B594, 13 (2004)
[hep-th/0403052].
[8] M. Li: A model of holographic dark energy ; Phys. Lett. B603, 1 (2004)
[hep-th/0403127].
[9] S. W. Hawking, G. F. R. Ellis: The Large Scale Structure of Spacetime; Cambridge
University Press, Cambridge (1973).
[10] M. Trodden, S. M. Carroll: TASI Lectures: Introduction to Cosmology ;
[astro-ph/0401547].
[11] G. F. R. Ellis, T. Rothman: Lost horizons ; Am. J. Phys. 61, 10 (1993).
[12] W. Rindler: Visual horizons and world models ; Mon. Not. Roy. Astr. Soc. 116, 662
(1956).
[13] D. N. Vollick: Holography in closed universes ; [hep-th/0306149].
[14] M. Kaplinghat; G. Steigman; I. Tkachev; T. P. Walker: Observational Constraints
On Power-Law Cosmologies ; Phys. Rev. D59, 043514 (1999) [astro-ph/9805114].
[15] N. Kaloper, A. Linde: Cosmology vs. holography ; Phys. Rev. D60, 103509 (1999)
[hep-th/9904120].
[16] C. P. Burgess, F. Quevedo, R. Rabadan, G. Tasinato, I. Zavala: On bouncing brane-
worlds, S-branes and branonium cosmology ; JCAP 0402 008 (2004) [hep-th/0310122].
[17] J. D. Bekenstein: Nonsingular General Relativistic Cosmologies ; Phys. Rev. D 11,
2072 (1975).
[18] C. Molina-Paris, M. Visser: Minimal conditions for the creation of a Friedmann-
Robertson-Walker universe from a ‘bounce’ ; Phys. Lett. B455, 90 (1999)
[gr-qc/9810023].
[19] H. Kragh: George Gamow and the “factual approach” to relativistic cosmology ; In
The universe of general relativity, A. J. Kox, Jean Eisenstaedt (eds.), Einstein Stud-
ies, Vol. 11 p. 175, Boston (2005).
[20] M. Tegmark, M. A. Strauss, M. R. Blanton et al.: Cosmological parameters from
SDSS and WMAP ; Phys. Rev. D69, 103501 (2004) [astro-ph/0310723].
[21] Benjamin K. Tippett, Kayll Lake: Energy conditions and a bounce in FLRW cosmolo-
gies ; [gr-qc/9810023].
[22] D. Youm: A Note on Thermodynamics and Holography of Moving Giant Gravitons ;
Phys. Rev. D64, 065014 (2001) [hep-th/0104011].
[23] A. Karch, L. Randall: Relaxing to Three Dimensions ; Phys. Rev. Lett. 95, 161601
(2005) [hep-th/0506053].
[24] R. Easther, D. A. Lowe: Holography, cosmology and the second law of thermodynam-
ics ; Phys. Rev. Lett. 82 4967 (1999) [hep-th/9902088].
[25] S. Kalyana Rama: Holographic Principle in the closed universe: a resolution with
negative pressure matter ; Phys. Lett. B457, 268 [hep-th/9904110].
[26] N. Cruz, S. Lepe: Closed universes can satisfy the holographic principle in three
dimensions ; Phys. Lett. B521, 343 (2002) [hep-th/0110175].
[27] J. V. Narlikar: Introduction to Cosmology p.136; Cambridge University Press, Cam-
bridge (1993).
[28] T. Banks, W. Fischler: Holographic Cosmology 3.0 ; Phys. Scripta T 117, 56-63 (2005)
[hep-th/0310288].
[29] T. Banks, W. Fischler, L. Mannelli: Microscopic Quantum Mechanics of the p = ρ
Universe; [hep-th/0408076].
[30] A. G. Riess, L. G. Strolger, J. Tonry et al.: Type Ia Supernova Discoveries at z > 1
From the Hubble Space Telescope: Evidence for Past Deceleration and Constraints
on Dark Energy Evolution; Astrophys. J. 607, 665 (2004) [astro-ph/0402512].
[31] D. N. Spergel, R. Bean, O. Dore et al. : Wilkinson Microwave Anisotropy Probe
(WMAP) Three Year Results: Implications for Cosmology ; [astro-ph/0603449].
[32] W. Zimdahl, D. Pavon: Spatial curvature and holographic dark energy ;
[hep-th/0606555].
[33] M. R. Setare: Interacting holographic dark energy model in non-flat universe; Phys.
Lett. B 642, 1-4 (2006) [hep-th/0609069].
[34] F. Simpson: An alternative approach to holographic dark energy ; [astro-ph/0609755].
[35] M. Tegmark: Measuring Spacetime: from Big Bang to Black Holes ; Lect. Notes Phys.
646, 169 (2004) [astro-ph/0207199].
[36] K. Ichikawa, M. Kawasaki, T. Sekiguchi et al.: Implication of dark en-
ergy parametrizations on the determination of the curvature of the universe;
[astro-ph/0605481].
|
0704.1638 | Accelerated expansion of the Universe filled up with the scalar
gravitons | Accelerated expansion of the Universe
filled up with the scalar gravitons
Yu. F. Pirogov
Theory Division, Institute for High Energy Physics, Protvino,
RU-142281 Moscow Region, Russia
Abstract
The concept of the scalar graviton as the source of the dark matter and dark energy
of gravitational origin is applied to study the evolution of the isotropic homogeneous
Universe. A realistic self-consistent solution to the modified pure gravity
equations, which correctly describes the accelerated expansion of the spatially flat
Universe, is found and investigated. It is argued that the scenario with the scalar
gravitons filling up the Universe may emulate the LCDM model, reducing thus the
true dark matter to an artefact.
1 Introduction
According to the present-day cosmological paradigm, our Universe is fairly isotropic,
homogeneous, spatially flat, and presently experiences accelerated expansion.
The conventional description of the latter phenomenon is given by the model with
the Λ-term and the cold dark matter (CDM).1 Nevertheless, such a description may
be just a phenomenological reflection of a more fundamental mechanism. A realistic
candidate for such a role is presented in this paper.
In a preceding paper [2], we proposed a modification of General Relativity
(GR), with the massive scalar graviton in addition to the massless tensor one.2 The
scalar graviton was put forward as a source of the dark matter (DM) and the dark
energy (DE) of the gravitational origin. In ref. [4], this concept was applied to study
the evolution of the isotropic homogeneous Universe. The evolution equations were
derived and the plausible arguments in favour of the reality of the evolution scenario
with the scalar gravitons were presented.
In the present paper, we expose an explicit solution to the evolution equations
in the vacuum, which gives the correct description of the accelerated expansion of
the spatially flat Universe. It is shown that the emulation of the LCDM model can
indeed be reached, as was anticipated earlier [4]. In Section 2, we first briefly
recall the evolution equations in the vacuum filled only with the scalar gravi-
tons. Then the master equation for the Hubble parameter is presented. Finally, a
self-consistent solution of the latter equation, possessing the desired properties, is
found and investigated. In the Conclusion, the proposed solution to the DM and
DE problems is recapitulated.
1Hereof, the LCDM model. For a review on cosmology, see, e.g., ref. [1].
2For a brief exposition of such a modified GR, see ref. [3].
http://arxiv.org/abs/0704.1638v1
2 Accelerated expansion
Evolution equations We consider the isotropic homogeneous Universe without
the true DM. Besides, we neglect the luminous matter, thus missing the initial
period of the evolution of the Universe. Then, the vacuum evolution equations look like3

(ȧ/a)² + κ²/a² = (1/3m_P²)(ρ_s + ρ_Λ),
2ä/a + (ȧ/a)² + κ²/a² = −(1/m_P²)(p_s + p_Λ),   (1)
with a(t) being the dynamical scale factor of the Universe, t being the comoving
time and ȧ = da/dt, etc. In the above, κ2 is proportional to the spatial curvature,
with κ2 = 0 for the spatially flat Universe. The parameter mP is the Planck mass.
On the r.h.s. of eq. (1), ρ_Λ and p_Λ are the energy density and the pressure
corresponding to the cosmological constant Λ: ρ_Λ = −p_Λ = m_P²Λ ≥ 0. Likewise, ρ_s
and ps are, respectively, the energy density and pressure of the scalar gravitons:
ρ_s = f_s²(σ̇²/2 + 3(ȧ/a)σ̇ + σ̈) + m_P²Λ_s(σ),
p_s = f_s²(σ̇²/2 − 3(ȧ/a)σ̇ − σ̈) − m_P²Λ_s(σ).   (2)
Here, fs = O(mP) is a constant with the dimension of mass entering the kinetic
term of the scalar graviton field σ. The latter in the given context looks like
σ = 3 ln(a/ã),   (3)
with ã(t) being a nondynamical scale factor given a priori. The σ-field is defined up
to an additive constant. Without any loss of generality, we can fix the constant
by the asymptotic condition: σ(t) → 0 at t → ∞.
In eq. (2), we put
V_s + ∂V_s/∂σ ≡ m_P²(Λ_s(σ) + Λ),   (4)
where Vs(σ) is the scalar graviton potential. More particularly, we put
V_s = V_0 + (m_s²f_s²/2)(σ − σ_0)² + O((σ − σ_0)³),   (5)

with σ_0 being a constant, f_s(σ − σ_0) the physical field of the scalar graviton and m_s
the mass of the latter. By their nature, Λs and Λ are quite similar. To make the
division onto these two parts unambiguous we normalize Λs by an additive constant
so that Λ_s(0) = 0. Clearly, we get from eq. (2) that ρ_s + p_s = f_s²σ̇². Here, the
contribution of Λ_s exactly cancels, which is quite similar to the relation ρ_Λ + p_Λ = 0.
So, the contribution of Λ_s is a kind of dark energy. In what follows, we put
σ_0 = 0 and V_0 = m_P²Λ, with σ → 0 at t → ∞ becoming the ground state.
The nondynamical functions Vs and ã being the two characteristics of the vacuum
are not quite independent. More particularly, adopting the isotropic homogeneous
ansatz for the solution of the gravity equations, with only one dynamical variable a,
we tacitly put a consistency relation between ã and Vs. As a result, only one
combination of the two lines of eq. (1) is the true equation of evolution, with the
second independent combination giving just the required consistency condition.
3We refer the reader to ref. [4] for more details.
Master equation In what follows, we restrict ourselves to the case of the spa-
tially flat Universe, κ = 0. Subtracting the first line of eq. (1) from the second one
and accounting for eq. (2) we get the relation

Ḣ = −(α/4)σ̇²,   (6)

where H ≡ ȧ/a is the Hubble parameter and

α = 2(f_s/m_P)².   (7)

We assume that α = O(1). Substituting σ̇ given by eq. (6) into the first line of
eq. (1) we get the integro-differential master equation for the Hubble parameter:

H² = −(1/3)Ḣ − (√α/(6√(−Ḣ)))(Ḧ + 6HḢ) + (1/3)(Λ_s(σ) + Λ),   (8)

where it is to be understood that

σ(t) = −(2/√α) ∫_t^∞ √(−Ḣ(τ)) dτ.   (9)
Recall that we assume σ(t) → 0 at t → ∞. Equations (8) and (6) supersede the
pair of the original evolution equations (1).
Self-consistent solution Let us put in what follows Λ_s ≡ 0. This will be
justified afterwards. Iterating eq. (8), with Λ considered as a perturbation, we can
get the solution with any desired accuracy. In particular, substituting into the r.h.s.
of eq. (8) the solution H = α/t of the zeroth approximation (Λ = 0), we get the
first approximation as follows:

H² = α²/t² + Λ/3 + corrections of O(1) at t → 0 and of O(1/t³) at t → ∞,   (10)

or otherwise

H² = (α/t_Λ)²[(t_Λ/t)² + 1] ≃ { α²/t² at t/t_Λ < 1;  α²/t_Λ² at t/t_Λ > 1 },   (11)

with t_Λ = α(3/Λ)^{1/2} being the characteristic time of the evolution of the Universe. Numerically, t_Λ ∼
10¹⁰ yr is of order the age of the Universe. Equation (11) is the basis for the quali-
tative discussion in what follows.
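As a numerical sanity check (a sketch only, taking eq. (11) to be the interpolation H(t) = α(1/t² + 1/t_Λ²)^{1/2} between its two quoted limits), the first-iteration Hubble parameter indeed reduces to the power-law value α/t at early times and to the constant α/t_Λ at late times:

```python
import math

alpha, t_lam = 2.0 / 3.0, 1.0   # the "exceptional" value alpha = 2/3; t_Lambda sets the time unit

def hubble(t):
    # Interpolating form consistent with the two limits of eq. (11):
    # H^2 = alpha^2 (1/t^2 + 1/t_lam^2)
    return alpha * math.sqrt(1.0 / t ** 2 + 1.0 / t_lam ** 2)

# Early times, t << t_Lambda: H ~ alpha/t (power-law expansion, a ~ t^alpha)
t = 1e-4 * t_lam
assert abs(t * hubble(t) - alpha) / alpha < 1e-6

# Late times, t >> t_Lambda: H ~ alpha/t_Lambda = const (exponential expansion)
t = 1e4 * t_lam
assert abs(hubble(t) - alpha / t_lam) / (alpha / t_lam) < 1e-6
```

The crossover between the two regimes happens at t ∼ t_Λ, where both terms contribute equally.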
Integrating eq. (11) we get the scale factor as follows:

ln(a/a₀) = α[ ((t/t_Λ)² + 1)^{1/2} − ln( (((t/t_Λ)² + 1)^{1/2} + 1)/(t/t_Λ) ) ]
         ≃ { α ln(t/t_Λ) at t/t_Λ < 1;  αt/t_Λ at t/t_Λ > 1 },   (12)

where a₀ is an integration constant.4 Explicitly, the scale factor looks like

a/a₀ ∼ { (t/t_Λ)^α at t/t_Λ < 1;  exp(αt/t_Λ) at t/t_Λ > 1 },   (13)
4To phenomenologically account for the effect of the initial inflation we could formally shift the origin
of time: t → t+ t0, with t0 > 0.
with t_Λ thus bordering the epoch of the power-law expansion from the epoch of
the exponential expansion. Equation (13) gives the two-parametric representation
for the scale factor of the acceleratedly expanding Universe after the initial period.
With account for eq. (9), the σ-field behaves as

σ = −∫_{(t/t_Λ)²}^∞ dξ/(ξ(1 + ξ)^{1/4})   (14)
  ≃ { 2 ln(t/t_Λ) at t/t_Λ < 1;  −t_Λ/t at t/t_Λ > 1 }.   (15)
Note that at tΛ → ∞ or, equivalently, Λ → 0 the integral above diverges and the
σ-field cannot be normalized properly. Λ ≠ 0 is thus necessary as a regulator in
the theory. Now, the consistency condition looks like

ã = a exp(−σ/3) ∼ { (t/t_Λ)^{α−2/3} at t/t_Λ < 1;  exp(αt/t_Λ) at t/t_Λ > 1 }.   (16)
Clearly, Λ should be already presupposed in ã. Note that in the case α = 2/3, the
parameter ã is approximately constant at t/tΛ < 1.
Substituting equations (11) and (6) into the first line of eq. (2) we can explicitly
verify that

ρ_s/m_P² = 3α²/t².   (17)

This is to be anticipated already from the relation ρ_Λ/m_P² = Λ, as well as eq. (10)
and the first line of eq. (1). Clearly, ρ_s is positive. At t/t_Λ < 1 we get from
equations (14) and (17) that ρ_s a³ ∼ t^{3α−2}. In the case α = 2/3, we have ρ_s ∼ 1/a³
as it should be for the true CDM. On the other hand, the pressure of the scalar
gravitons is as follows:

p_s/m_P² ≃ { 3α(2/3 − α)/t² − α/t_Λ² at t/t_Λ < 1;  (2α/t_Λ²)(t_Λ/t)³ at t/t_Λ > 1 }.   (18)

At the same conditions as before, the pressure is p_s/m_P² ≃ −Λ/2, being nearly constant,
though not zero as would be anticipated for the true CDM. Nevertheless, we see
that the value α = 2/3 is exceptional in many respects. Conceivably, such a value
is distinguished by a more fundamental theory.
Introducing the critical energy density ρ_c = 3m_P²H², we get for the partial energy
densities Ω_s = ρ_s/ρ_c and Ω_Λ = ρ_Λ/ρ_c, respectively, of the scalar gravitons and the
Λ-term the following:

Ω_s = 1/(1 + (t/t_Λ)²) ≃ { 1 − (t/t_Λ)² at t/t_Λ < 1;  (t_Λ/t)² at t/t_Λ > 1 },   (19)

with Ω_Λ = 1 − Ω_s. Note that Ω_s = Ω_Λ = 1/2 at t/t_Λ = 1. Presently, we have
Ω_s/Ω_Λ ≃ 1/3 and thus the respective time t, neglecting the effect of the
initial inflation, is somewhat larger than t_Λ.
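The arithmetic of this paragraph can be checked directly (a sketch only, assuming eq. (19) reads Ω_s = 1/(1 + (t/t_Λ)²), which reproduces both quoted limits and the stated equality Ω_s = Ω_Λ = 1/2 at t = t_Λ):

```python
import math

def omega_s(t, t_lam=1.0):
    # eq. (19): Omega_s = 1/(1 + (t/t_lam)^2), with Omega_Lambda = 1 - Omega_s
    return 1.0 / (1.0 + (t / t_lam) ** 2)

# The two components are equal at t = t_Lambda:
assert abs(omega_s(1.0) - 0.5) < 1e-12

# Omega_s/Omega_Lambda = 1/3 (the present ratio) is reached at t = sqrt(3) t_Lambda,
# i.e. at a time "somewhat larger" than t_Lambda:
t = math.sqrt(3.0)
assert abs(omega_s(t) / (1.0 - omega_s(t)) - 1.0 / 3.0) < 1e-10
```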
Finally, the condition Λ_s = 0 adopted earlier can be justified as follows. First
of all, Λ_s is indeed negligible at t → ∞ due to σ → 0 and Λ_s(0) = 0. On the other
hand, at t ∼ t_Λ we have |σ| ∼ 1 and hence Λ_s ∼ m_s². For Λ_s to be negligible in this
region, too, we should require m_s ≤ √Λ ∼ 1/t_Λ. Nevertheless, in the early period
of evolution, when |σ| > 1, the contribution of Λ_s may be significant. With the parameter
α = 2/3 fixed, the theory may be determined by just two mass parameters:
the ultraviolet m_P and the infrared t_Λ⁻¹ or, equivalently, m_s.
3 Conclusion
To conclude, let us recapitulate the proposed solution to the DM and DE problems
in the context of the evolution of the Universe. According to the viewpoint adopted,
there is neither true DM nor DE in the Universe (at least, in a sizable amount).
Instead, the field σ of the scalar graviton serves as a common source of both the DM
and DE of the gravitational origin. DM is represented by the derivative contribution
of σ, with DE being reflected by the derivativeless contribution. In this, the constant
part of the latter contribution corresponds to the conventional Λ-term, while the
σ-dependent part corresponds to DE. The latter is less important than the Λ-term
at present, becoming conceivably more crucial at the early time.
The self-consistent evolution of the Universe may be considered as the transition
of the “sea” of scalar gravitons, produced in the early period, from the excited
state with |σ| > 1 to the ground state with σ = 0. The ground state is characterized
by the cosmological constant Λ which, in turn, predetermines the characteristic
evolution time of the Universe, t_Λ ∼ 1/√Λ. The scenario is thus able to
naturally describe the accelerated expansion of the spatially flat Universe, correctly
emulating the conventional LCDM model. A more complete study of the
scenario, including the initial period of the evolution, is in order.
The author is grateful to O. V. Zenin for the useful discussions.
References
[1] M. Trodden and S. M. Carroll, astro-ph/0401547.
[2] Yu. F. Pirogov, Phys. Atom. Nucl. 69, 1338 (2006) [Yad. Fiz. 69, 1374 (2006)];
gr-qc/0505031.
[3] Yu. F. Pirogov, gr-qc/0609103.
[4] Yu. F. Pirogov, gr-qc/0612053.
|
0704.1641 | U Geminorum: a test case for orbital parameters determination | U Geminorum: a test case for orbital parameters determination
Juan Echevarría1, Eduardo de la Fuente2 and Rafael Costero3
Instituto de Astronomía, Universidad Nacional Autónoma de México,
Apartado Postal 70-264, México, D.F., México
ABSTRACT
High-resolution spectroscopy of U Gem was obtained during quiescence. We
did not find a hot spot or gas stream around the outer boundaries of the accretion
disk. Instead, we detected a strong narrow emission near the location of the
secondary star. We measured the radial velocity curve from the wings of the
double-peaked Hα emission line, and obtained a semi-amplitude value that is
in excellent agreement with that obtained from observations in the ultraviolet
spectral region by Sion et al. (1998). We present also a new method to obtain
K2, which enhances the detection of absorption or emission features arising in
the late-type companion. Our results are compared with published values derived
from the near-infrared NaI line doublet. From a comparison of the TiO band with
those of late type M stars, we find that a best fit is obtained for a M6V star,
contributing 5 percent of the total light at that spectral region. Assuming that
the radial velocity semi-amplitudes reflect accurately the motion of the binary
components, then from our results: Kem = 107 ± 2 km s
−1; Kabs = 310 ± 5
km s−1, and using the inclination angle given by Zhang & Robinson (1987); i =
69.7◦ ± 0.7, the system parameters become: MWD = 1.20 ± 0.05M⊙; MRD =
0.42 ± 0.04M⊙; and a = 1.55± 0.02R⊙. Based on the separation of the double
emission peaks, we calculate an outer disk radius of Rout/a ∼ 0.61, close to the
distance of the inner Lagrangian point L1/a ∼ 0.63. Therefore we suggest that,
at the time of observations, the accretion disk was filling the Roche-Lobe of the
primary, and that the matter leaving the L1 point was colliding with the disc
directly, producing the hot spot at this location.
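As a cross-check of the quoted system parameters (a sketch only, not the authors' code; it assumes standard values of G, M⊙ and R⊙), the masses and separation follow from K_em, K_abs, P_orb and i alone, via Kepler's third law for a double-lined binary:

```python
import math

G = 6.674e-11                  # m^3 kg^-1 s^-2
M_sun, R_sun = 1.989e30, 6.957e8
K1, K2 = 107e3, 310e3          # K_em, K_abs in m/s
P = 0.1769061911 * 86400.0     # orbital period (Marsh et al. 1990) in s
sin_i = math.sin(math.radians(69.7))

# M1 = P K2 (K1 + K2)^2 / (2 pi G sin^3 i); M2 follows from M2/M1 = K1/K2;
# the separation from a sin i = P (K1 + K2) / (2 pi).
M1 = P * K2 * (K1 + K2) ** 2 / (2.0 * math.pi * G * sin_i ** 3)
M2 = M1 * K1 / K2
a = P * (K1 + K2) / (2.0 * math.pi * sin_i)

assert abs(M1 / M_sun - 1.20) < 0.02   # M_WD = 1.20 +/- 0.05 M_sun
assert abs(M2 / M_sun - 0.42) < 0.02   # M_RD = 0.42 +/- 0.04 M_sun
assert abs(a / R_sun - 1.55) < 0.02    # a = 1.55 +/- 0.02 R_sun
```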
Subject headings: binaries: close — novae, cataclysmic variables — stars: indi-
vidual (U Geminorum)
1email: [email protected]
2present address: Departamento de F́ısica, CUCEI, Universidad de Guadalajara. Av. Revolución 1500
S/R Guadalajara, Jalisco, Mexico. email: [email protected]
3email: [email protected]
1. Introduction
Discovered by Hind (1856), U Geminorum is the prototype of a subclass of dwarf novae,
a descriptive term suggested by Payne-Gaposchkin & Gaposchkin (1938) due to the small
scale similarity of the outbursts in these objects to those of Novae.
After the work by Kraft (1962), who found U Gem to be a single-lined spectroscopic
binary with an orbital period around 4.25 hr, and from the studies by Krzeminski (1965),
who established the eclipsing nature of this binary, Warner & Nather (1971) and Smak (1971)
established the classical model for Cataclysmic Variable stars. The model includes a white
dwarf primary surrounded by a disc accreted from a Roche-Lobe filling late-type secondary
star. The stream of material, coming through the L1 point intersects the edge of the disc
producing a bright spot, which can contribute a large fraction of the visual flux. The bright
spot is observed as a strong hump in the light curves of U Gem and precedes a partial eclipse
of the accretion disk and bright spot themselves (the white dwarf is not eclipsed in this
object).
A mean recurrence time for U Gem outbursts of ≈ 118 days, with ∆mV =5 and out-
burst width of 12 d, was first found by Szkody & Mattei (1984). However, recent analysis
shows that the object has a complex outburst behavior (Cook 1987; Mattei et al. 1987;
Cannizo, Gehrels & Mattei 2002). Smak (2004), using the AAVSO data on the 1985 out-
burst, has discovered the presence of super-humps, a fact that challenges the current theories
of super-outbursts and super-humps for long period system with mass ratios above 1/3. The
latter author also points out the fact that calculations of the radius of the disc – obtained
from the separation of the emission peaks (Kraft 1975) in quiescence – are in disagreement
with the calculations of the disc radii obtained from the photometric eclipse data (Smak
2001).
Several radial velocity studies have been conducted since the first results published by
Kraft (1962). In the visible spectral range, where the secondary star has not been detected,
their results are mainly based on spectroscopic radial velocity analysis of the emission lines
arising from the accretion disc (Kraft 1962; Smak 1976; Stover 1981; Unda-Sanzana et al.
2006). In other wavelengths, works are based on absorption lines: in the near-infrared,
on the Na I doublet from the secondary star (Wade 1981; Friend et al. 1990; Naylor et al.
2005) and in the ultraviolet, on lines coming from the white dwarf itself (Sion et al. 1998;
Long & Gilliland 1999).
Although the research work on U Gem has been of paramount importance in our under-
standing of cataclysmic variables, the fact that it is a partially-eclipsed and – in the visual
range – a single-lined spectroscopic binary, make the determination of its physical
parameters difficult to achieve through precise measurements of the semi-amplitudes K1,2 and of
the inclination angle i of the orbit. Spectroscopic results of K1,2 differ in the ultraviolet,
visual and infrared ranges. Therefore, auxiliary assumptions have been used to derive its
more fundamental parameters (Smak 2001). In this paper we present a value of K1, obtained
from our high-dispersion Echelle spectra, which is in agreement with the ultraviolet results,
and of K2 from a new method applicable to optical spectroscopy. By chance, the system was
observed at a peculiar low state, when the classical hot spot was absent.
2. Observations
U Geminorum was observed in 1999, January 15 with the Echelle spectrograph at the
f/7.5 Cassegrain focus of the 2.1 m telescope of the Observatorio Astronómico Nacional at San
Pedro Mártir, B.C., México. A Thomson 2048×2048 CCD was used to cover the spectral
range between λ5200 and λ9100 Å, with spectral resolution of R=18,000. An echellette
grating of 150 l/mm, with Blaze around 7000 Å , was used. The observations were obtained
at quiescence (V ≈ 14), about 20 d after a broad outburst (data provided by the AAVSO:
www.aavso.org). The spectra show a strong Hα emission line. No absorption features were
detected from the secondary star. A first complete orbital cycle was covered through twenty-
one spectra, each with 10 min exposure time. Thirteen further spectra were subsequently
acquired with an exposure of 5 min each. The latter cover an additional half orbital period.
The heliocentric mid-time of each observation is shown in column one in Table 1. The
flux standard HR17520 and the late spectral M star HR3950 were also observed on the same
night. Data reduction was carried out with the IRAF package1. The spectra were wavelength
calibrated using a Th-Ar lamp and the standard star was also used to properly subtract the
telluric absorption lines using the IRAF routine telluric.
3. Radial Velocities
In this section we derive radial velocities from the prominent Hα emission line observed
in U Gem, first by measuring the peaks, secondly by using a method based on a cross-
correlating technique, and thirdly by using the standard double-Gaussian technique designed
to measure only the wings of the line. In the case of the secondary star, we were unable to
detect any single absorption line in the individual spectra; therefore it was not possible to
1IRAF is distributed by the National Optical Observatories, operated by the Association of Universities
for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
use any standard method. However, here we propose and use a new method, based on a co-
adding technique, to derive the semi-amplitude of the orbital radial velocity of the companion
star. In this section, we compare our results with published values for both components in
the binary. We first discuss the basic mathematical method used here to derive the orbital
parameters and its limitation in the context of Cataclysmic Variables; then we present our
results for the orbital parameters – calculated from the different methods – and finally discuss
an improved ephemeris for U Gem.
3.1. Orbital Parameters Calculations
To find the orbital parameters of the components in a cataclysmic variable – in which
no eccentricity is expected (Zahn 1966; Warner 1995) – we use an equation of the form
V(t)_{em,abs} = γ + K_{em,abs} sin[2π(t − HJD⊙)/P_orb],
where V(t)(em,abs) are the observed radial velocities as measured from the emission lines
in the accretion disc or from the absorption lines of the red star; γ is the systemic velocity;
K(em,abs) are the corresponding semi-amplitudes derived from the radial velocity curve; HJD⊙
is the heliocentric Julian time of the inferior conjunction of the companion; and Porb is the
orbital period of the binary.
A minimum least-squares sinusoidal fit is run, which uses initial values for the four
(Porb, γ, Kem,abs, and HJD⊙) orbital parameters. The program allows for one or more of
these variables to be fixed, i.e. they can be set to constant values in the initial parameters
file.
If the orbital period is not previously known, a frequency search – using a variety of
methods for evenly- or unevenly-sampled time series data (Schwarzenberg-Czerny 1999) –
may be applied to the measured radial velocities in order to obtain an initial value for Porb,
which is then used in the minimum least-squares sinusoidal fit. If the time coverage of the
observations is not sufficient or is uneven, period aliases may appear and their values have to
be considered in the least-squares fits. A tentative orbital period is selected by comparing the
quality of each result. In these cases, additional radial velocity observations should be sought,
until the true orbital period is found unequivocally. Time series photometric observations
are usually helpful to find orbital modulations and are definitely important in establishing
the orbital period of eclipsing binaries. In the case of U Gem, the presence of eclipses and
the ample photometric coverage since the early work of Kreminski (1965), has permitted to
establish its orbital period with a high degree of accuracy (Marsh et al. 1990). Although in
eclipsing binaries a zero phase is also usually determined, in the case of U Gem the variable
positions of the hot spot and stream, causes the zero point to oscillate, as mentioned by
the latter authors. Accurate spectroscopic observations are necessary to correctly establish
the time when the secondary star is closest to Earth, i.e in inferior conjunction. Further
discussion on this subject is given in section 4.
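Folding a heliocentric Julian date on the Marsh et al. (1990) ephemeris quoted in section 3.1 is a one-line computation; the sketch below uses a hypothetical helper `orbital_phase` (the tabulated phases of Table 1 differ slightly, since they use the improved ephemeris of section 4):

```python
P_ORB = 0.1769061911      # d (Marsh et al. 1990)
HJD0 = 2437638.82325      # d, heliocentric time of inferior conjunction

def orbital_phase(hjd):
    """Orbital phase in [0, 1), zero at inferior conjunction of the secondary."""
    return ((hjd - HJD0) / P_ORB) % 1.0

# First exposure of Table 1, HJD 2451193.67651: this ephemeris gives phase ~0.70,
# versus the tabulated 0.68 computed from the revised zero point.
phi = orbital_phase(2451193.67651)
assert abs(phi - 0.70) < 0.01
```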
To obtain the real semi-amplitudes of the binary, i.e. K_{em,abs} = K_{1,2}, some reasonable
auxiliary assumptions are made. First, that the measurements of the emission lines, produced
in the accretion disc, are free from distortions and accurately follow the orbital motion of the
unseen white dwarf. Second, that the profiles of the measured absorption lines are symmetric,
which implies that the brightness at the surface of the secondary star is the same for all its
longitudes and latitudes. Certainly, a hot spot in the disc or irradiation in the secondary
from the energy sources related to the primary will invalidate either the first, the second, or
both assumptions. Corrections may be introduced if these effects are present. In the case of
U Gem, a three-body correction was introduced by Smak (1976) in order to account for the
radial velocity distortion produced by the hot spot, and a correction to heating effects on
the face of the secondary star facing the primary was applied by Friend et al. (1990) before
equating Kabs = K2.
As initial values in our least-squares sinusoidal fits, we use P_orb = 0.1769061911 d and
HJD⊙ = 2,437,638.82325 d from Marsh et al. (1990), a systemic velocity of 42 km s⁻¹ from
Smak (2001), and K1 = 107 km s⁻¹ and K2 = 295 km s⁻¹ from Long & Gilliland (1999) and
Friend et al. (1990), respectively. In our calculations, the orbital period was fixed at the
above-mentioned value, since our observations have a very limited time coverage. This allows
us to increase the precision for the other three parameters.
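A minimal sketch of such a fixed-period fit (not the authors' code; all numbers below are mock values): with P_orb held constant, the model V(t) = γ + K sin[2π(t − t₀)/P] becomes linear in (γ, A, B) after expanding the sine, so ordinary least squares recovers the three remaining parameters directly:

```python
import numpy as np

P = 0.1769061911                        # d, fixed (Marsh et al. 1990)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 0.3, 30))  # ~1.5 cycles of mock timestamps (d)
gamma0, K0, t00 = 42.0, 107.0, 0.05     # "true" parameters of the mock curve
v = gamma0 + K0 * np.sin(2.0 * np.pi * (t - t00) / P)
v += rng.normal(0.0, 2.0, t.size)       # 2 km/s of measurement noise

# K sin(w(t - t0)) = A sin(wt) + B cos(wt), with A = K cos(w t0), B = -K sin(w t0)
w = 2.0 * np.pi / P
X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
gamma, A, B = np.linalg.lstsq(X, v, rcond=None)[0]
K = np.hypot(A, B)
t0 = (np.arctan2(-B, A) / w) % P

assert abs(gamma - gamma0) < 2.0 and abs(K - K0) < 3.0 and abs(t0 - t00) < 0.01
```

When the period itself must also be searched for, the problem becomes nonlinear and a frequency scan of the kind described above is needed first.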
3.2. The Primary Star
In this section we compare three methods for determining the radial velocity of the
primary star, based on measurements of the Hα emission line. Although, as we will see
in the next subsections, the last method results in far better accuracy and agrees with the
ultraviolet results, we have included all of them here because the first method essentially
provides an accurate way to determine the separations of the blue and red peaks, which is
an indicator of the outer radius of the disc (Smak 2001), and the second yields a Kem value
much closer to that obtained from UV results than any other published method. This cross-
correlation method might be worthwhile to consider for its use in other objects. Furthermore,
as we will see in the discussion, all three methods yield a consistent value of the systemic
velocity, which is essential to the understanding of other parameters in the binary system.
Table 1: Measured Hα Radial Velocities.
HJD (2400000+)   φ∗   Peaks^a   Fxc^b   Wings^c   (km s⁻¹)
51193.67651 0.68 166.1 139.1 121.1
51193.68697 0.75 183.4 130.0 133.8
51193.69679 0.80 181.9 125.0 126.9
51193.70723 0.86 167.9 102.0 101.1
51193.71744 0.92 137.1 81.7 90.9
51193.72726 0.97 90.0 46.8 41.7
51193.73581 0.02 14.0 -17.9 6.9
51193.74700 0.09 -47.9 -48.1 -27.1
51193.75691 0.14 -67.1 -66.7 -48.2
51193.76743 0.20 -99.6 -84.6 -79.3
51193.77738 0.26 -132.3 -86.1 -75.7
51193.78900 0.32 -152.6 -60.2 -48.8
51193.80174 0.39 -77.9 -32.9 -33.6
51193.81211 0.45 9.0 10.9 14.5
51193.82196 0.51 104.3 79.2 65.1
51193.83176 0.56 134.6 113.7 107.0
51193.84175 0.62 141.0 142.8 124.9
51193.85156 0.67 159.3 158.6 147.6
51193.86133 0.73 165.6 148.0 131.7
51193.87101 0.79 192.9 142.8 130.3
51193.88116 0.84 175.0 120.7 110.6
51193.88306 0.91 154.6 106.5 91.1
51193.90530 0.98 90.6 32.3 31.9
51193.91751 0.05 -70.5 8.0 -23.1
51193.93029 0.12 -88.5 -71.8 -51.6
51193.94259 0.19 -97.1 -79.0 -66.7
51193.95483 0.26 -114.4 -88.8 -75.6
51193.95955 0.29 -142.2 -70.9 -67.9
∗Orbital phases derived from the ephemeris given in section 4
aVelocities derived as described in section 3.2.1
bVelocities derived as described in section 3.2.2
cVelocities derived as described in section 3.2.3
To match the signal to noise ratio of the first twenty-one spectra, we have co-added,
in pairs, the thirteen 5-minute exposures. The last three spectra were added to form two
different spectra, in order to avoid losing the last single spectrum. A handicap to this
approach is that, due to the large read-out time of the Thomson CCD, we are effectively
smearing the phase coverage of the co-added spectra to nearly 900 s. However, the mean
heliocentric time was accordingly corrected for each sum. This adds to a total sample of
twenty-eight 600 s spectra.
3.2.1. Measurements from the double-peaks
We have measured the position of the peaks using a double-gaussian fit, with their sepa-
ration, width and position as free parameters. The results yield a mean half-peak separation
Vout of about 460 km s−1. The average value of the velocities of the red and blue peaks, for
each spectrum, is shown in column 3 of Table 1. We then applied our nonlinear least-squares
fit to these radial velocities. The obtained orbital parameters are shown in column 2 of
Table 2. The numbers in parentheses after the zero point results are the evaluated errors of
the last digit. We will use this notation for large numbers throughout the paper. The radial
velocities are also shown in Figure 1, folded with the orbital period and the time of inferior
conjunction adopted in section 4. The solid lines in this figure correspond to sinusoidal
fits using the derived parameters in our program. Although we have not independently tab-
ulated the measured velocities of the blue and red peaks, they are shown in Figure 1 together
with their average. The semi-amplitudes of the plotted curves are 154 km s−1 and 167 km
s−1 for the blue and red peaks, respectively.
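The peak measurement just described (a two-Gaussian fit with separation, width and position free) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names and starting guesses are our own, and scipy's generic least-squares fitter stands in for the actual fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(v, amp1, amp2, center, half_sep, width):
    """Two Gaussians of common width, placed symmetrically about `center`
    and separated by 2 * half_sep (velocities in km/s)."""
    blue = amp1 * np.exp(-0.5 * ((v - (center - half_sep)) / width) ** 2)
    red = amp2 * np.exp(-0.5 * ((v - (center + half_sep)) / width) ** 2)
    return blue + red

def fit_double_peak(v, flux, guess):
    """Fit the double-peaked profile; returns (center, half_sep, width)."""
    popt, _ = curve_fit(double_gaussian, v, flux, p0=guess)
    _, _, center, half_sep, width = popt
    return center, abs(half_sep), abs(width)
```

The fitted `center` gives the mean velocity of the two peaks (column 3 of Table 1) and `half_sep` the half-peak separation Vout.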
3.2.2. Cross Correlation using a Template
We have also cross-correlated the Hα line in our spectra with a template constructed
as follows: First, we selected a spectrum from the first observed orbital cycle close to phase
0.02 when, in the case of our observations, we should expect a minimum distortion in the
double-peaked line due to asymmetric components (see section 5). The blue peak in this
spectrum is slightly stronger than the red one. This is probably caused by the hot spot
near the L1 point (see section 3.3.2), which might be visible at this phase due to the fact
that the binary has an inclination angle smaller than 70 degrees. The half-separation of the
peaks is 470 km s−1, a value similar to that measured in a spectrum taken during the same
orbital phase in the next cycle. The chosen spectrum was then highly smoothed to minimize
high-frequency correlations. The resulting template is shown in Figure 2. A radial velocity
Fig. 1.— Radial velocity curve of the double peaks. The half-separation of the peaks, shown
at the top of the diagram, has a mean value of about 460 km s−1. The middle curve is the
mean of the blue (bottom) and red (top) curves.
Fig. 2.— Hα template near phase 0.02. The half-separation of the peaks has a value of 470
km s−1.
for the template was derived from the wavelength measured at the dip between the two peaks
and corrected to give a heliocentric velocity. The IRAF fxc task was then used to derive
the radial velocities, which are shown in column 4 of Table 1. As in the previous section, we
have fitted the radial velocities with our nonlinear least-squares fit algorithm. The resulting
orbital parameters are given in column 3 of Table 2. In Figure 3 the obtained velocities and
the corresponding sinusoidal fit (solid line) are plotted.
3.2.3. Measurements from the wings and Diagnostic Diagrams
The Hα emission line was additionally measured using the standard double Gaussian
technique and its diagnostic diagrams, as described in Shafter, Szkody and Thorstensen
(1986). We refer to this paper for the details on the interpretation of our results. We
have used the convolve routine from the IRAF rvsao package, kindly made available to us by
Thorstensen (private communication). The double peaked Hα emission line – with a sepa-
ration of about 20 Å – shows broad wings reaching up to 40 Å from the line center. Unlike
the case of low resolution spectra – where for over-sampled data the fitting is made with
individual Gaussians having a FWHM of about one resolution element – in our spectra, with
resolution ≈ 0.34 Å, such Gaussians would be inadequately narrow, as they will cover only a
very small region in the wings. To measure the wings appropriately and, at the same time,
avoid possible low velocity asymmetric features, we must select a σ value which fits the line
regions corresponding to disc velocities from about 700 to 1000 km s−1.
As a first step, we evaluated the width of the Gaussians by setting this as a free pa-
rameter from 10 to 40 pixels and for a wide range of Gaussian separations (between 180 and
280 pixels). For each run, we applied a nonlinear least-squares fit of the computed radial
velocities to sinusoids of the form described in section 3.1. The results are shown in Figure 4,
in particular for three different Gaussian separations: a = 180, 230 and 280 pixels. These
correspond to the lower and upper limits as well as to the value for a preferred solution, all of
which are self-consistent with the second step (see below). In the bottom panel of the figure
we have plotted the overall rms value for each least-squares fit, as this parameter is very sen-
sitive to the selected Gaussian separations. As expected at this high spectral resolution, the
parameters in the diagram change rapidly for low values of σ, and there are even cases when
no solution was found. At low values of a (e.g. crosses) there are no solutions for widths
narrower than 20 pixels. The rms values increase rapidly with width, while the σ(K)/K,
γ and phase shift values differ strongly from the other cases. For higher values of a (open
circles) we obtain lower values for σ(K)/K, but the rms results are still large, in particular
for intermediate values of the width of the Gaussians. For the middle solution (dots) the
Fig. 3.— Radial velocities obtained from cross correlation using the template. The solid line
corresponds to the solution from column 3 in Table 2.
results are comparable with those for large a values, but the rms is much lower. Similar
results were found for other intermediate values of a, and they all converge to a minimum
rms for a width of 26 pixels at a = 230 pixels.
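The double-Gaussian centering that underlies these diagnostic diagrams can be sketched as follows: the line center is taken as the wavelength at which the fluxes transmitted by two Gaussian bandpasses, offset by ± a/2, balance. This is a schematic of the Shafter, Szkody & Thorstensen (1986) technique under our own naming, not the IRAF convolve routine itself.

```python
import numpy as np
from scipy.optimize import brentq

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def wing_center(wave, flux, sep, sigma, search=5.0):
    """Wavelength at which the fluxes through two Gaussian bandpasses,
    placed at mu - sep/2 and mu + sep/2, are equal (line units)."""
    def imbalance(mu):
        blue = np.sum(flux * gaussian(wave, mu - sep / 2.0, sigma))
        red = np.sum(flux * gaussian(wave, mu + sep / 2.0, sigma))
        return blue - red
    guess = wave[np.argmax(flux)]
    return brentq(imbalance, guess - search, guess + search)
```

The wing radial velocity then follows from the Doppler shift of this center relative to the rest wavelength of Hα; repeating the measurement over a grid of separations and widths produces the diagnostic diagrams.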
For the second step we have fixed the width to a value of 26 Å and ran the double-
Gaussian program for a range of a separations, from about 60 to 120 Å. The results obtained
are shown in Figure 5.
If only an asymmetric low velocity component is present, the semi-amplitude should
decrease asymptotically as a increases, until K1 reaches the correct value. Here we observe
such behavior, although there is a K1 increase for values of a up to 40 Å, before K1 decreases
strongly at high values of a. This behavior might be due to
the fact that we are observing a narrow hot-spot near the L1 point (see section 5). On
the other hand, as expected, the σ(K)/K vs a curve has a change in slope, at a value
of a for which the individual Gaussians have reached the velocity width of the line at the
continuum. For larger values of a the velocity measurements become dominated by noise.
For low values of a, the phase shift usually gives spurious results, although in our case it
approaches a stable value around 0.015. We believe this value reflects the difference between
the eclipse ephemeris, which is based mainly on the eclipse of the hot spot, and the true
inferior conjunction of the secondary star. This problem is further discussed in section 5.
Finally, we must point out that the systemic velocity smoothly increases up to a maximum
of about 40 km s−1 at Gaussian separation of nearly 42 Å, while the best results, as seen
from the Figure, are obtained for a = 31 Å. This discrepancy may also be related to the
narrow hot-spot near the L1 point and might be due to the phase-shift between the hot-spot
eclipse and the true inferior conjunction. This problem will also be addressed in section 4. The
radial velocities, corresponding to the adopted solution, are shown in column 5 of Table 1
and plotted in Figure 6, while the corresponding orbital parameters – obtained from the
nonlinear least-squares fit – are given in column 4 of Table 2.
3.3. The Secondary Star
We were unable to detect single features from the secondary star in any individual
spectra, after careful correction for telluric lines. In particular we found no radial velocity
results using a standard cross-correlation technique near the NaI λλ8183.3, 8194.8 Å doublet.
As we will see below, this doublet was very weak compared with previous observations (Wade
1981; Friend et al. 1990; Naylor et al. 2005). We have been able, however, to detect the NaI
doublet and the TiO Head band around λ7050 Å with a new technique, which enables us to
derive the semi-amplitude Kabs of the secondary star velocity curve. We first present here
the general method for deriving the semi-amplitude and then apply it to U Gem, using not
only the absorption features but the Hα emission as well.
3.3.1. A new method to determine K2
In many cataclysmic variables the secondary star is poorly visible, or even absent, in the
optical spectral range. Consequently, no V (t) measurements are feasible for this component.
Among these systems are dwarf novae with orbital periods under 0.25 days, for which it
is thought that the disc luminosity dominates over the luminosity of the Roche-Lobe filling
secondary, whose brightness depends on the orbital period of the binary (Echevarría & Jones
1984). For such binaries, the orbital parameters have been derived only for the white-dwarf-
accretion disc system, in a way similar to that described in section 3.1.
In order to determine a value of Kabs from a set of spectra of a cataclysmic variable, for
which the orbital period and time of inferior conjunction have been already determined from
the emission lines, we propose to reverse the process: derive V (t)abs using Kpr as the initial
value for the semi-amplitude, and set the values of Porb and HJD⊙, derived from the emission
lines, as constants. The initial value for the systemic velocity is set to zero, and its final value
may be calculated later (see below). The individual spectra are then co-added in the frame
of reference of the secondary star, i.e. by Doppler-shifting the spectra using the calculated
V (t)calc from the equation given in section 3.1, and then add them together. Hereinafter we
will refer to this procedure as the co-phasing process. Ideally, as the proposed Kpr is changed
through a range of possible values, there will be one for which the co-phased spectral
features associated with the absorption spectrum will have an optimal signal-to-noise ratio.
In fact, this will also be the case for any emission line features associated with the red
star, if present. In a way, this process works in a similar fashion as the double Gaussian
fitting used in the previous section, provided that adequate criteria are set in order to select
the best value for Kabs. We propose three criteria or tests that, for late type stars, may
be used with this method: The first one consists in analyzing the behavior of the measured
depths or widths of a well identified absorption line in the co-phased spectra, as a function of
the proposed Kpr; one would expect that the width of the line will show a minimum and its
depth a maximum value at the optimal solution. This method could be particularly useful for
K-type stars which have strong single metallic lines like Ca I and Fe I. The second criterion is
based upon measurements of the slope of head-bands, like that of TiO at λ7050 Å. It should
be relevant to short period systems, with low mass M-type secondaries with spectra featuring
strong molecular bands. In this case one could expect that the slope of the head-band will be
a function of Kpr, and will have a maximum negative value at the best solution. A third test
is to measure the strength of a narrow emission arising from the secondary. This emission,
if present, would be particularly visible in the co-phased spectrum and will have minimum
width and maximum height at the best selected semi-amplitude Kpr.
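The co-phasing process described above can be sketched as follows; the function and variable names are ours, and simple linear interpolation stands in for whatever resampling the actual reduction used.

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def cophase(waves, fluxes, times, K_pr, period, t0, gamma=0.0):
    """Co-add spectra in the rest frame of the secondary: each spectrum is
    shifted by the predicted V(t) = gamma + K_pr*sin(2*pi*(t - t0)/period)
    (sign convention assumed) and the shifted spectra are averaged."""
    ref = waves[0]
    shifted = []
    for wave, flux, t in zip(waves, fluxes, times):
        v = gamma + K_pr * np.sin(2.0 * np.pi * (t - t0) / period)
        rest = wave / (1.0 + v / C_KMS)   # remove the predicted Doppler shift
        shifted.append(np.interp(ref, rest, flux))
    return ref, np.mean(shifted, axis=0)
```

Scanning K_pr and recording, for example, the depth of the co-phased Na I lines then implements the first of the three criteria.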
We have tested these three methods by means of an artificial spectrum with simulated
narrow absorption lines, a TiO-like head band and a narrow emission line. The spectrum with
these artificial features was then Doppler shifted using pre-established inferior conjunction
phase and orbital period, to produce a series of test spectra. An amount of random Gaussian
noise was added to each Doppler shifted spectrum, sufficient to mask the artificial features.
We then proceeded to apply the co-phasing process to recover our pre-determined orbital
values. All three criteria reproduced back the original set of values, as long as the random
noise amplitude was of the same order of magnitude as the strength of the clean artificial
features.
3.3.2. Determination of K2 for U Gem
We have applied the above-mentioned criteria to U Gem. The time of the inferior
conjunction of the secondary and the orbital period were taken from section 4. To attain the
best signal to noise ratio we have used all the 28 observed spectra. Although they span over
slightly more than 1.5 orbital periods, any departure from a real K2 value will not depend
on selecting data in exact multiples of the orbital period, as any possible deviation from the
real semi-amplitude will already be present in one complete orbital period and will depend
mainly on the intrinsic intensity distribution of the selected feature around the secondary
itself (also see below the results for γ).
Figure 7 shows the application of the first test to the NaI doublet λλ 8183,8195 Å. The
spectra were co-phased varying Kpr between 250 and 450 km s−1. The line depth of the blue
and red components of the doublet (stars and open circles, respectively), as well as their mean
value (dots), are shown in the diagram. We find a best solution for K2 = 310 ± 5 km s−1.
The error has been estimated from the intrinsic modulation of the solution curve. As it
approaches its maximum value, the line depth value oscillates slightly, but in the same way
for both lines. A similar behavior was present when low signal to noise features were used
on the artificial spectra process described above. Figure 8 shows the co-phased spectrum
of the NaI doublet of our best solution for K2. These lines appear very weak as compared
with those reported by Friend et al. (1990) and Naylor et al. (2005). We have also measured
the gamma velocity from the co-phased spectrum by fitting a double-gaussian to the Na I
doublet (dotted line in Figure 7) and find a mean value γ = 69± 10 km s−1 (corrected to
the heliocentric standard of motion). We did a similar calculation for γ by co-phasing the
selected spectra used in section 5, covering a full cycle only. The results were very similar
to those obtained by using all spectra.
The second test, measuring the slope of the TiO band head at λ7050 Å, was not successful.
The solution curve oscillates strongly near values between 250 and 350 km s−1. We
believe that the signal to noise ratio in our spectra is too poor for this test and that more ob-
servations, accumulated during several orbital cycles, have to be obtained in order to attain
a reliable result using this method.
However, we have co-phased our spectra for K2 = 310 km s−1, with the results shown in
Figure 9. The TiO band is clearly seen while the noise is prominent, particularly along the
slope of the head-band. We have used this co-added spectrum to compare it with several
late-type M stars extracted from the published data by Montes et al. (1997) fitted to our
co-phased spectrum. A gray continuum has been added to the comparison spectra in order
to compensate for the fill-in effect arising from the other light sources in the system, so as
to obtain the best fit. In particular, we show in the same figure the fits when two close
candidates – GJ406 (M6 V, upper panel) and GJ402 (M4-5 V, lower panel) – are used. The
best fit is obtained for the M6 V star, to which we have added a 95 percent continuum. For
the M4-5 V star the fit is poor, as we observe a flux excess around 7000 Å and a stronger
TiO head-band. Increasing the grey flux contribution will fit the TiO head band, but will
result in a larger excess at the 7000 Å region. On the other hand, the fit with the M6 V
star is much better all along the spectral interval. There are a number of publications which
assign to U Gem spectral types M4 (Harrison et al. 2000), M5 (Wade 1981) and possibly as
late as M5.5 (Berriman et al. 1983). Even in the case that the spectral type of the secondary
star were variable, its spectral classification is still incompatible with its mass determination
(Echevarría 1983).
For the third test, we have selected the region around Hα, as in the individual spectra
we see evidence of a narrow spot, which is very well defined in our spectrum near orbital
phase 0.5. In this test we have co-phased the spectra as before, and have adopted as the test
parameter the peak intensity around the emission line. The results are shown in Figure 10. A
clear and smooth maximum is obtained for Kpr = 310 ± 3 km s−1. The co-phased spectrum
obtained from this solution is shown in Figure 11. The double-peak structure has been
completely smeared – as expected when co-adding in the reference frame of the secondary
star, as opposed to that of the primary star – and instead we observe a narrow and strong
peak at the center of the line. We have also fitted the peak to find the radial velocity of the
spot. We find γ = 33± 10 km s−1, compatible with the gamma velocity derived from the
radial velocity analysis of the emission line, γ = 34± 2 km s−1 (see section 3.2.3). This is a
key result for the determination of the true systemic velocity and can be compared with the
values derived from the secondary star (see section 7).
4. Improved Ephemeris of U Gem
As mentioned in section 3.1, the presence of eclipses in U Gem and ample photometric
coverage over 30 years have made it possible to establish, with a high degree of accuracy,
the value of the orbital period. This has been discussed in detail by Marsh et al. (1990).
However, as pointed out by these authors, this object shows erratic variations in the timing of the
photometric mid-eclipse that may be caused either by orbital period changes, variations in
the position of the hot spot, or they may even be the consequence of the different methods
of measuring the eclipse phases. A variation in position and intensity of the gas stream
will also contribute to such changes. A date for the zero phase determined independently
from spectroscopic measurements would evidently be desirable. Marsh et al. (1990) discuss
two spectroscopic measurements by Marsh & Horne (1988) and Wade (1981), and conclude
that the spectroscopic inferior conjunction of the secondary star occurs about 0.016 in phase
prior to the mean photometric zero phase. There are two published spectroscopic studies
(Honeycutt et al. 1987; Stover 1981), as well as one in this paper, that could be used
to confirm this result. Unfortunately there is no radial velocity analysis in the former paper,
nor in the excellent Doppler Imaging paper by Marsh et al. (1990) based on their original
observations. However, the results by Stover (1981) are of particular interest since he finds
the spectroscopic conjunction in agreement with the time of the eclipse when using the pho-
tometric ephemerides by Wade (1981), taken from Arnold et al. (1976). The latter authors
introduce a small quadratic term which is consistent with the O-C oscillations shown in
Marsh et al. (1990).
It is difficult to compare results derived from emission lines to those obtained from
absorption lines, especially if they are based on different ephemerides. Furthermore, the
contamination on the timing of the spectroscopic conjunction – either caused by a hot spot,
by gas stream or by irradiation on the secondary – has not been properly evaluated. However,
since our observations were made at a time when the hot spot is absent (or, at least, lies along
the line between the two components in the binary) and the disc was very symmetric (see
section 5), we can safely assume that in our case, the photometric and spectroscopic phases
must coincide. If we then take the orbital period derived by Marsh et al. (1990) and use the
zero point value derived from our measurements of the Hα wings, (section 3.2.3), we can
improve the ephemeris:
HJD = 2,437,638.82566(4) + 0.1769061911(28) E,
for the inferior conjunction of the secondary star. This ephemeris is used throughout
this paper for all our phase folded diagrams and Doppler Tomography.
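Folding an observation time with this ephemeris reduces to a one-line computation; a minimal sketch:

```python
def orbital_phase(hjd, t0=2437638.82566, period=0.1769061911):
    """Orbital phase (phase 0 = inferior conjunction of the secondary)
    from the ephemeris HJD = t0 + period * E of section 4."""
    return ((hjd - t0) / period) % 1.0
```

All phase-folded diagrams and the tomography below use phases computed this way.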
5. Doppler Tomography
Doppler Tomography is a useful and powerful tool to study the material orbiting the
white dwarf, including the gas stream coming from the secondary star as well as emission
regions arising from the companion itself. It uses the emission line profiles observed as a
function of the orbital phase to reconstruct a two-dimensional velocity map of the emitting
material. A detailed formulation of this technique can be found in Marsh & Horne (1988).
A careful interpretation of these velocity maps has to be made, as the main assumption
invoked by tomography is that all the observed material is in the orbital plane and is visible
at all times.
The Doppler Tomography, derived here from the Hα emission line in U Gem, was
constructed using the code developed by Spruit (1998). Our observations of the object
cover 1.5 orbital cycles. Consequently – to avoid disparities on the intensity of the trailed
and reconstructed spectra, as well as on the tomographic map – we have carefully selected
spectra covering a full cycle only. For this purpose we discarded the first 3 spectra (which
have the largest airmass) and used only 18 of the first twenty-one 600 s exposures, starting
with the spectrum at orbital phase 0.88 and ending with the one at phase 0.86 (see Table 1).
In addition, in generating the Tomography map we have excluded the spectra taken during
the partial eclipse of the accretion disc (phases between 0.95 and 0.05). The original and
reconstructed trailed spectra are shown in Figure 12. They show the sinusoidal variation
of the blue and red peaks, which are strong at all phases. The typical S-wave is also seen
showing the same simple sinusoidal variation, but shifted by 0.5 in orbital phase with respect
to the double-peaks. The Doppler tomogram is shown in Figure 13; as customary, the oval
represents the Roche-Lobe of the secondary and the solid lines the Keplerian (upper) and
ballistic (lower) trajectories. The tomogram reveals a disc reaching close to the
inner Lagrangian point at most phases. A compact and strong emission is seen close to the
center of velocities of the secondary star. A blow-up of this region is shown in Figure 14.
Both maps have been constructed using the parameters shown at the top of the diagrams
and a γ velocity of 34 km s−1. The velocity resolution of the map near the secondary star is
about 10 km s−1. The V (x, y) position of the hot-spot (in km s−1) is (-50,305), within the
uncertainties.
The tomography shown in Figure 13 is very different from what we expected to find
and from what has been observed by other authors. We find a very symmetric full disc,
reaching close to the inner Lagrangian point and a compact bright spot also close to the L1
point, instead of a complex system like that observed by Unda-Sanzana et al. (2006), who
find U Gem at a stage when the Doppler Tomographs show: emission at low velocity close
to the center of mass; a transient narrow absorption in the Balmer lines; as well as two
distinct spots, one very narrow and close in velocity to the accretion disc near the impact
region and another much broader, located between the ballistic and Keplerian trajectories.
They also present tentative evidence of a weak spiral structure, which has been seen as
strong spiral shocks during an outburst observed by Groot (1991). Our results also differ
from those of Marsh et al. (1990), who also find that the bulk of the bright spot arising from
the Balmer, He I and He II emission come from a region between the ballistic and Keplerian
trajectories. We interpret the difference between our results and previous studies simply by
the fact that we have observed the system at a peculiar low state not detected before (see
sections 1 and 7). This should not be at all surprising because, although U Gem is a well
observed object, it is also a very unusual and variable system.
Figure 14 shows a blow-up of the region around the secondary star. The bright spot is
shown close to the center of mass of the late-type star, slightly located towards the leading
hemisphere. Since this is a velocity map and not a geometrical one, there are two possible
interpretations of the position in space of the bright spot (assuming the observed material is
in the orbital plane). The first one is that the emission is being produced at the surface of the
secondary, i.e. still attached to its gravitational field. The second is that the emission is the
result of a direct shock front with the accretion disc and that the compact spot is starting
to gain velocity towards the Keplerian trajectory. We believe that the second explanation
is more plausible, as it is consistent with the well accepted mechanism to produce a bright
spot. On the other hand, at this peculiar low state it is difficult to invoke an external source
strong enough to produce a back-illuminated secondary and especially a bright and compact
spot on its leading hemisphere.
6. Basic system parameters
Assuming that the radial velocity semi-amplitudes reflect accurately the motion of the
binary components, then from our results – Kem = K1 = 107 ± 2 km s−1 and Kabs = K2 =
310 ± 5 km s−1 – and adopting P = 0.1769061911 days, we obtain:
q = K1/K2 = 0.35 ± 0.05,

M1 sin³ i = PK2(K1 + K2)²/(2πG) = 0.99 ± 0.03 M⊙,

M2 sin³ i = PK1(K1 + K2)²/(2πG) = 0.35 ± 0.02 M⊙,

a sin i = P(K1 + K2)/(2π) = 1.46 ± 0.02 R⊙.
Using the inclination angle derived by Zhang & Robinson (1987), i = 69.7◦ ± 0.7, the
system parameters become: MWD = 1.20 ± 0.05 M⊙; MRD = 0.42 ± 0.04 M⊙; and a =
1.55 ± 0.02 R⊙.
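The spectroscopic masses quoted above follow from the standard circular-orbit relations M1,2 sin³ i = P K2,1 (K1 + K2)²/(2πG) and a sin i = P(K1 + K2)/(2π); a minimal sketch of the arithmetic, with rounded physical constants:

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
DAY = 86400.0          # seconds per day

def binary_parameters(K1, K2, P_days):
    """Spectroscopic masses and separation (times the sin i factors) for a
    circular orbit; K1, K2 in km/s, P in days."""
    k1, k2 = K1 * 1e3, K2 * 1e3          # to m/s
    P = P_days * DAY                     # to seconds
    m1_sin3i = P * k2 * (k1 + k2) ** 2 / (2.0 * np.pi * G) / M_SUN
    m2_sin3i = P * k1 * (k1 + k2) ** 2 / (2.0 * np.pi * G) / M_SUN
    a_sini = P * (k1 + k2) / (2.0 * np.pi) / R_SUN
    return m1_sin3i, m2_sin3i, a_sini
```

With K1 = 107, K2 = 310 and P = 0.1769061911 d this reproduces 0.99 M⊙, 0.35 M⊙ and 1.46 R⊙, and dividing by sin³(69.7°) recovers the quoted MWD ≈ 1.20 M⊙.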
6.1. The inner and outer size of the disc
A first order estimate of the dimensions of the disc – the inner and outer radius – can
be made from the observed Balmer emission line. Its peak-to-peak velocity separation is
related to the outer radius of the accreted material, while the wings of the line, coming
from the high velocity regions of the disc, can give an estimate of the inner radius (Smak
2001). The peak-to-peak velocity separations of the 31 individual spectra were measured (see
section 3.2.1), as well as the velocities of the blue and red wings of Hα at the ten percent
level of the continuum. From these measurements we derive mean values of Vout = 460 km s−1
and Vin = 1200 km s−1.
These velocities can be related to the disc radii from numerical disc simulations, tidal
limitations and analytical approximations (see Warner (1995) and references therein). If we
assume the material in the disc at radius r is moving with Keplerian rotational velocity V (r),
then the radius in units of the binary separation is given by (Horne, Wade & Szkody 1986):
r/a = (Kem + Kabs)Kabs/V(r)².
The observed maximum intensity of the double-peak emission in Keplerian discs occurs
close to the velocity of its outer radius (Smak 1981). From the observed Vout and Vin values
we obtain an outer radius of Rout/a = 0.61 and an inner radius of Rin/a = 0.09. If we take
a = 1.55±0.02R⊙ from the last section we obtain an inner radius of the disc Rin = 0.1395R⊙
equivalent to about 97 000 km. This is about 25 times larger than the expected radius of the
white dwarf (see section 7). On the other hand, the distance from the center of the primary
to the inner Lagrangian point, RL1/a, is
RL1/a = 1 − w + (1/3)w² + (1/9)w³,

where w³ = q/(3(1 + q)) (Kopal 1959). Using q = 0.35 we obtain RL1/a = 0.63. The
disc, therefore, appears to be large, almost filling the Roche-Lobe of the primary, with the
matter leaving the secondary component through the L1 point colliding with the disc directly
and producing the hot spot near this location.
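The radius estimates of this section can be reproduced directly from the two relations quoted above; a minimal sketch:

```python
def disc_radius(K1, K2, V):
    """Radius, in units of the binary separation a, at which Keplerian
    material has projected velocity V; all velocities in km/s
    (Horne, Wade & Szkody 1986)."""
    return (K1 + K2) * K2 / V ** 2

def roche_lobe_L1(q):
    """Distance from the primary to the inner Lagrangian point, R_L1/a,
    from the series approximation quoted from Kopal (1959)."""
    w = (q / (3.0 * (1.0 + q))) ** (1.0 / 3.0)
    return 1.0 - w + w ** 2 / 3.0 + w ** 3 / 9.0
```

Evaluating disc_radius at Vout = 460 and Vin = 1200 km s−1 gives Rout/a = 0.61 and Rin/a = 0.09, and roche_lobe_L1(0.35) gives 0.63, as in the text.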
7. Discussion
For the first time, a radial velocity semi-amplitude of the primary component of U Gem
has been obtained in the visual spectral region, which agrees with the value obtained from
ultraviolet observations by Sion et al. (1998) and Long & Gilliland (1999). In a recent paper,
Unda-Sanzana et al. (2006) present high-resolution spectroscopy around Hα and Hβ and
conclude that they cannot recover the ultraviolet value for K1 to better than about 20
percent by any method. Although the spectral resolution at Hα of the instrument they used
is only a factor of two smaller than that of the one we used, the diagnostic diagrams they
obtain show a completely different behavior as compared to those we present here, with
best values for K1 of about 95 km s−1 from Hα and 150 km s−1 from Hβ (see their Figures
13 and 14, respectively). We believe that the disagreement with our result lies not in the
quality of the data or the measuring method, but in the distortion of the emission lines due
to the presence of a complex accretion disc at the time of their observations, as the authors
themselves suggest. Their Doppler tomograms show emission at low velocity, close to the
center of mass, two distinct spots, a narrow component close to the L1 point, and a broader
and larger one between the Keplerian and the ballistic trajectories. There is even evidence
of a weak spiral structure. In contrast, we have observed U Gem during a favorable stage,
one in which the disc was fully symmetric, and the hot-spot was narrow and near the inner
Lagrangian point. This allowed us to measure the real motion of the white dwarf by means
of the time-resolved behavior of the Hα emission line.
Our highly consistent results for the systemic velocity derived from the Hα spot (γ =
33 ± 10 km s−1) and those found from the different methods used for the radial velocity
analysis of the emission arising from the accretion disk (see section 3.2 and Table 2), give
strong support to our adopting a true systemic velocity value of γ = 34± 2 km s−1. If we are
indeed detecting the true motion of the white dwarf, we can use this adopted value, to make
an independent check on the mass of the primary: The observed total redshift of the white
dwarf (gravitational plus systemic) – found by Long & Gilliland (1999) – is 172 km s−1, from
which, after subtraction of the adopted systemic velocity, we derive a gravitational shift of
the white dwarf of 138 km s−1. From the mass-radius relationship for white dwarfs (Anderson
1988), we obtain consistent results for Mwd = 1.23M⊙ and Rwd = 3900 km (see Figure 7
of Long & Gilliland 1999). This mass is in excellent agreement with that obtained in this
paper from the radial velocity analysis.
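The consistency check above, subtracting the adopted systemic velocity from the total white-dwarf redshift and comparing with the weak-field gravitational redshift GM/(Rc), can be sketched as follows, using the mass-radius pair quoted from Long & Gilliland (1999):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8       # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

def grav_redshift_kms(mass_msun, radius_km):
    """Weak-field gravitational redshift, v = GM/(R c), in km/s."""
    return G * mass_msun * M_SUN / (radius_km * 1e3 * C) / 1e3
```

For Mwd = 1.23 M⊙ and Rwd = 3900 km this gives roughly 140 km s−1, consistent with the 138 km s−1 obtained as 172 − 34 km s−1 in the text.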
From our new method to determine the radial velocity curve of the secondary (sec-
tion 3.3.2), we obtain a value for the semi-amplitude close to 310 km s−1. Three previous
papers have determinations of the radial velocity curves from the observed Na I doublet
in the near-infrared. In order to evaluate whether our method is valid, we compare our
result with these direct determinations. The published values are: Krd = 283 km s−1
(Wade 1981); Krd = 309 ± 3 km s−1, before correction for irradiation effects (Friend et al.
1990); and Krd = 300 km s−1 (Naylor et al. 2005). Wade (1981) notes that an elliptical
orbit (e = 0.086) may better fit his data, as the velocity extremum near phase 0.25 ap-
pears somewhat sharper than that near phase 0.75 (see his Figure 3). However, he also
finds a very large systemic velocity, γ = 85 km s−1, much larger than the values found by
Kraft (1962) (γ = 42 km s−1) and Smak (1976) (γ = 40± 6 km s−1), both obtained from
the emission lines. Since the discrepancy with the results of these two authors was large,
Wade (1981) defers this discussion to further confirmation of his results. Instead, and more
important, this author discusses two scenarios that may significantly alter the real value of
K2: the non-sphericity and the back-illumination of the secondary. In the latter effect, each
particular absorption line may move further away from, or closer to the center of mass of
the binary. He estimates the magnitude of this effect and concludes that the deviation of
the photocenter would probably be much less than 0.1 radii. Friend et al. (1990) further
discuss the circumstances that might cause the photocenter to deviate, and conclude that
their observed value for the semi-amplitude should be corrected down by 3.5 percent, to yield
K2 = 298 ± 9 km s−1. Although they discuss the results by Martin (1988) – which indicate
that the relatively small heating effects in quiescent dwarf novae always lead to a decrease in
the measured Krd for the Na I lines – they argue that line quenching, produced by ionization
of the same lines, may also be important, and result in an increased Krd. Another disturbing
effect, considered by the same authors, is line contamination by the presence of weak disc
features, like the Paschen lines. In this respect we point out here that a poor correction for
telluric lines will function as an anchor, reducing also the amplitude of the radial velocity
measurements. Friend et al. (1990) also find an observed systemic velocity of γ = 43± 6
km s−1 and a small eccentricity of e = 0.027. Naylor et al. (2005) also discuss the distortion
effects on the Na I lines and, based on their fit residuals, argue in favor of a depletion of the
doublet in the leading hemisphere of the secondary, around phases 0.4 and 0.6, as removing
flux from the blueward wing of the lines results in an apparent redshift, which would explain
the observed residuals. However, they additionally find that fitting the data to an eccentric
orbit, with e = 0.024, results in a significant decrease in the residuals caused by this deple-
tion, and conclude that it may be unnecessary to further correct the radial velocity curve.
We must point out that a depletion of the blueward wing of the Na I lines will result in
a contraction of the observed radial velocity curves, as the measured velocities – especially
around phases 0.25 and 0.75 – will be pulled towards the systemic velocity. Naylor et al.
(2005) present their results derived from the Na I doublet and the K I/TiO region (around
7550-7750 Å), compared with several spectral standards, all giving values between 289 and
305 km s−1 (no errors are quoted). Based on the radial velocity measurements for Na I,
obtained by these authors in 2001 January (115 spectra), and using GJ213 as template (see
their Table 1), we have recalculated the circular orbital parameters through our nonlinear
least-squares fit. We find K2 = 300 ± 1 km s−1, in close agreement with their published
value.
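A circular-orbit fit of the kind used here, V(φ) = γ + K sin 2π(φ − φ0), is linear in (γ, A, B) once the sine is expanded, so ordinary least squares recovers γ and K. The sketch below uses synthetic data with invented values; it only illustrates the procedure, not our actual fit.

```python
import numpy as np

# Circular-orbit model: V(phi) = gamma + K*sin(2*pi*(phi - phi0)).
# Expanding the sine gives V = gamma + A*sin(2*pi*phi) + B*cos(2*pi*phi),
# with K = hypot(A, B), so the fit reduces to linear least squares.
def fit_circular_orbit(phi, v):
    """Return (gamma, K, phi0) from orbital phases phi and velocities v."""
    X = np.column_stack([np.ones_like(phi),
                         np.sin(2 * np.pi * phi),
                         np.cos(2 * np.pi * phi)])
    gamma, A, B = np.linalg.lstsq(X, v, rcond=None)[0]
    return gamma, np.hypot(A, B), (np.arctan2(-B, A) / (2 * np.pi)) % 1.0

# Synthetic data (values invented for illustration only).
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 1.0, 115)
v = 40.0 + 300.0 * np.sin(2 * np.pi * (phi - 0.1))
v += rng.normal(0.0, 5.0, phi.size)          # 5 km/s Gaussian noise

gamma, K, phi0 = fit_circular_orbit(phi, v)
```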
It would be advisable to establish a link between the observed gamma velocity of the
secondary and the semi-amplitude K2, under the assumption that its value may be distorted
by heating effects. We take as a reference our results from the radial velocity analysis of the
broad Hα line and the hot-spot from the secondary, which support a true systemic velocity
of 34 km s−1. However, we find no positive correlation in the available results derived from
the Na I lines, either between different authors or even among one data set. In the case of
Naylor et al. (2005), the gamma values show a range between 11 and 43 km s−1, depending on
the standard star used as a template, for K2 velocities in the range 289 to 305 km s−1.
Wade (1981) finds γ = 85 ± 10 km s−1 for a low K2 value of 283 km s−1, while Friend et al.
(1990) find γ = 43 ± 6 km s−1 for K2 of about 309 km s−1, and we obtain a large gamma
velocity of about 69 km s−1 for a K2 value of 310 km s−1. We believe that further and more specific
spectroscopic observations of the secondary star should be conducted in order to understand
the possible distortion effects on lines like the Na I doublet, and their implications on the
derived semi-amplitude and systemic velocity values.
Acknowledgments
E. de la F wishes to thank Andrés Rodriguez J. for his useful computer help. The
Thomson detector, used in our observations, was obtained through PACIME-CONACYT
project F325-E9211.
REFERENCES
Anderson, N., 1988, ApJ., 326, 266
Arnold, S., Berg, R.A. & Duthie, J.G., 1976, ApJ., 206, 790
Berriman, G., Beattie, I.A., Lee, T.J., Mochnacki, S.W. & Szkody, P., 1983, MNRAS, 204,
Cannizzo, J. K., Gehrels, N., & Mattei, J. A., 2002, ApJ, 579, 760
Cook, L.M., 1987, JAVSO, 16, 83
Echevarŕıa, J, 1983, RMAA, 8, 109
Echevarŕıa, J & Jones, D.H.P., 1984, MNRAS, 206, 919
Friend, M. T., Martin, J. S., Connon Smith, R., & Jones, D. H. P., 1990, MNRAS, 246, 637
Groot, P.J., 1991, ApJ, , 2649
Harrison, T.E., McNamara, B.J., Szkody, P. & Gilliland, R.L., 2000, ApJ, 120, 2649
Hind, J. R., 1856, MNRAS, 16, 56
Honeycutt, R. K., Kaitchuck, R. H., & Schlegel, E. M., 1987, ApJS, 65, 451
Horne, K., Wade, R.A. & Szkody, P., 1986, MNRAS, 219, 791
Kopal, Z., Close Binary Systems, Chapman & Hall, London.
Kraft, R. P., 1962, ApJ, 135, 408
Kraft, R. P., 1975, private communication in Smak (1976)
Krzeminski, W., 1965, ApJ, 142, 1051
Long, K. S., & Gilliland, R. L., 1999, ApJ, 511, 916
Marsh, T.R., & Horne, K., 1988, MNRAS, 235, 269
Marsh, T. R., Horne, K., Schlegel, E. M., Honeycutt, R. K. & Kaitchuck, R. H., 1990, ApJ,
364, 637
Martin, J.S., 1988, D.Phil. thesis, University of Sussex
Mattei, J.A., Saladyga, M., Wagen, E.O. & Jones, C.M., 1987, AAVSO, Monograph, Cam-
bridge, Mass.
Montes, D., Mart́ın, E.L., Fernández-Figueroa, M.J., Cornide, M. & De Castro, E., 1997,
A&ASuppl.Ser., 123, 473
Naylor, T., Allan, A. & Long, K. S., 2005, MNRAS, 361, 1091
Payne-Gaposchkin, C. & Gaposchkin, 1938, Variable Stars, Harv. Obs. Mono. No. 5, Cam-
bridge, Mass.
Schwarzenberg-Czerny, A., 1999 , Astrophys. J., 516, 315
Shafter, A.W., Szkody, P. & Thorstensen, J. R. 1986, ApJ, 308, 765
Sion, E.M., Cheng, F.H., Szkody, P., Sparks, W., Gänsicke, B., Huang, M. & Mattei, J., 1998,
ApJ, 496, 449
Smak, J., 1971, Acta Astr., 21, 15
Smak, J., 1976, Acta Astr., 26, 277
Smak, J., 1981, Acta Astr., 31, 395
Smak, J., 2001, Acta Astr., 51, 279
Smak, J., 2004, Acta Astr., 54, 433
Spruit, H. C., 1998, astro-ph/9806141
Stover, R. J., 1981, ApJ, 248, 684
Szkody, P., & Mattei, J. A., 1984, PASP, 96, 988
Unda-Sanzana, E., Marsh, T. R. & Morales-Rueda, L., 2006, MNRAS, 369, 805
Wade, R. A., 1981, ApJ, 246, 215
Warner, B., 1995, “Cataclysmic Variable Stars”, Cambridge University Press
Warner, B. & Nather, R.E., 1971, MNRAS, 152, 219
Zahn, J.-P., 1966. Ann. d’Astrophys., 29, 489.
Zhang, E. H., & Robinson, E. L., 1987, ApJ, 321, 813
This preprint was prepared with the AAS LATEX macros v5.2.
Table 2: Orbital parameters derived from several radial velocity calculations of the Hα
emission line.

Orbital parameter         Peaks (a)     Fxc (b)       Wings (c)
γ (km s−1)                38 ± 5        35 ± 3        34 ± 2
K (km s−1)                162 ± 7       119 ± 3       107 ± 2
HJD⊙ (+2437638 days)      0.8259(2)     0.82462(6)    0.82152(9)
Porb (days)               (d)           (d)           (d)
σ                         25.2          12.2          9.1

(a) Derived from measurements of the double peaks
(b) Derived from cross-correlation methods
(c) Results from the fitting of fixed double Gaussians to the wings
(d) Period fixed, P = 0.1769061911 d
Fig. 4.— Diagnostic Diagram One. Orbital parameters as a function of the width of the individual
Gaussians, for several separations. Crosses correspond to a = 180 pixels; dots to a = 230 pixels
(≈ 34 Å); and open circles to a = 280 pixels.
Fig. 5.— Diagnostic Diagram Two. The best estimate of the semi-amplitude of the white
dwarf is 107 km s−1, corresponding to a ≈ 34 Å.
Fig. 6.— Radial velocities for U Gem. The open circles correspond to the measurements
of the first 21 single spectra, while the dots correspond to those of the co-added
spectra (see section 3.2). The solid line, close to the points, corresponds to the solution with
Kem = 107 km s−1 (see text), while the large-amplitude line corresponds to the solution found
for K2 (see section 3.3.1).
Fig. 7.— Maximum flux depth of the individual NaI lines λ8183.3 Å (top), λ8194.8 Å (bot-
tom) and mean (middle) as a function of Kpr.
Fig. 8.— Co-phased spectrum around the NaI doublet.
Fig. 9.— U Gem TiO Head Band near 7050 Å compared with GJ406, an M6V star (upper
diagram), and GJ402, an M4 V star (lower diagram) (see text).
Fig. 10.— Maximum peak flux of the co-added Hα spectra as a function of Kpr
Fig. 11.— Shape of the co-added Hα spectrum for K2 = 310 km s−1.
Fig. 12.— Trailed spectra of the Hα emission line. Original (left) and reconstructed data
(right).
Fig. 13.— Doppler Tomography of U Gem. The various features are discussed in the text.
The vx and vy axes are in km s−1. A compact hot spot, close to the inner Lagrangian point, is
detected instead of the usual bright spot and/or broad stream, where the material, following
a Keplerian or ballistic trajectory strikes the disc. The Tomogram reveals a full disc whose
outer edge is very close to the L1 point (see text).
Fig. 14.— Blow-up of the region around the hot spot. Note that this feature is slightly
ahead of the center of mass of the secondary star. Since this is a velocity map and not a
geometrical one, its physical position in the binary is carefully discussed in the text.
Collective excitations of hard-core Bosons at half filling on square and triangular
lattices: Development of roton minima and collapse of roton gap
Tyler Bryant and Rajiv R. P. Singh
Department of Physics, University of California, Davis, CA 95616, USA
(Dated: November 28, 2018)
We study ground state properties and excitation spectra for hard-core Bosons on square and tri-
angular lattices, at half filling, using series expansion methods. Nearest-neighbor repulsion between
the Bosons leads to the development of short-range density order at the antiferromagnetic wavevec-
tor, and simultaneously a roton minima in the density excitation spectra. On the square-lattice, the
model maps on to the well studied XXZ model, and the roton gap collapses to zero precisely at the
Heisenberg symmetry point, leading to the well known spectra for the Heisenberg antiferromagnet.
On the triangular-lattice, the collapse of the roton gap signals the onset of the supersolid phase.
Our results suggest that the transition from the superfluid to the supersolid phase may be
weakly first order. We also find several features in the density of states, including two peaks and a sharp
discontinuity, which may be observable in experimental realizations of such systems.
PACS numbers:
I. INTRODUCTION
A microscopic theory for rotons in the excitation spec-
tra of superfluids was first developed by Feynman, where
he showed that the roton minima was related to a
peak in the static structure factor.1 This study has had
broad impact in condensed matter physics ranging from
Quantum Hall Effect2 to frustrated antiferromagnets.3,4,5
In recent years considerable interest has also centered
on Supersolid phases of matter.6 While the existence
of such homogeneous bulk phases in Helium remains
controversial,7,8 in the case of lattice models such phases have
been clearly established. One such example is that of
hard-core Bosons hopping on a triangular-lattice, where
a large enough nearest-neighbor repulsion leads to su-
persolid order.9,10,11,12,13,14 The nature of the excitation
spectra in the superfluid phase and on approach to the
supersolid transition has not been addressed for the spin-
half model.
Here we use series expansion methods to study the
ground state properties and excitation spectra of hard-
core Bosons, at half filling, on square and triangular lat-
tices, with nearest neighbor repulsion. On the square-
lattice, the model is equivalent to the antiferromagnetic
XXZ model, and we present the elementary excitation
spectra for the XXZ model with XY type anisotropy. To
our knowledge this calculation has not been done before.
It should be useful for experimental studies of antiferro-
magnetic materials with XY anisotropy. We set the XY
coupling to unity and study the spectra as a function of
the Ising coupling Jz. For the XY model, the spectra is
gapless at q = 0 (the Goldstone mode of the superfluid)
and has a maximum at the antiferromagnetic wavevector
(π, π). As the Ising coupling is increased a roton minima
develops at the antiferromagnetic wavevector, which goes
to zero at the point of Heisenberg symmetry (Jz = 1), as
expected for the system with doubled unit cell.
For the triangular-lattice, the hard-core Boson model
maps onto a ferromagnetic XY model, which is unfrus-
trated. The nearest-neighbor repulsion, on the other
hand corresponds to an antiferromagnetic Ising coupling,
which is frustrated. This model cannot be mapped onto
an antiferromagnetic XXZ model on the triangular lat-
tice. For this model, we calculate the equal-time struc-
ture factor S(q) as well as the excitation spectra, ω(q).
Once again, we find that in the absence of nearest-
neighbor repulsion, the excitation spectra is gapless at
q = 0 and has a maximum at the antiferromagnetic
wavevector ((4π/3, 0) and equivalent points). As the re-
pulsion is increased, a pronounced peak develops in S(q)
at these wavevectors and simultaneously a sharp roton
minima develops in the spectra. Series extrapolations
suggest that the roton gap vanishes when the repulsion
term (Jz) reaches a value of ≈ 4.5. However, we are un-
able to estimate any critical exponents for the vanishing
of the gap or for the divergence of the structure factor. A
comparison of our structure factor data with the Quan-
tum Monte Carlo data of Wessel and Troyer leads us to
suggest that the transition to the supersolid phase may be
weakly first order and occurs for a value of Jz slightly less
than 4.5.
Our calculations also show a near minimum and flat re-
gions in the spectra at the wavevectors (π, π/√3), which
correspond to the midpoint of the faces of the Bril-
louin zone. These are points where the antiferromagnetic
Heisenberg model has a well defined minima.4 In our case
the dispersion is very flat along some directions and a
minimum along others. There are several distinguishing
features in the density of states (DOS) of the excitation
spectra. The largest maximum in the DOS is close to the
maximum excitation energy and is not unlike many other
antiferromagnets. But, here, in addition, we get a second
maximum in the DOS from the flat regions in the spectra
at the midpoint of the faces of the Brillouin zone and a
sharp drop in the DOS at the roton energy. It may be
possible to engineer such hard-core Boson systems on a
triangular lattice in cold atomic gases. It should, then,
be possible to excite these collective excitations either
optically or by driving the system out of equilibrium. A
measurement of the energies associated with the charac-
teristic features in the density of states can be used to
accurately determine the microscopic parameters of the
system.
II. METHOD
The linked-cluster series expansions performed here in-
volve writing the Hamiltonian of interest as
H = H0 + λH1 (1)
where the eigenstates of H0 define the basis to be used
and H1 is the perturbation to be applied in a linked clus-
ter expansion. Ground state properties are then obtained
as a power series in λ using Raleigh-Schrodinger pertur-
bation theory.
Excited state properties are obtained following the pro-
cedure outlined in22, in which a similarity transformation
is obtained in order to block diagonalize the Hamiltonian
where the ground state sits in a block by itself and the
one-particle states form another block.
Heff = S−1HS (2)
where Heff is an effective Hamiltonian for the states
which are the perturbatively constructed extensions of
the single spin-flip states. The effective Hamiltonian
is then used to obtain a set of transition amplitudes
Σ_r λ^r c_{r,m,n} that describe propagation of the excita-
tion through a distance (mx̂ + nŷ) for the square lattice
and (mx̂ + √3 nŷ)/2, with m and n both even or both odd,
for the triangular lattice.
These transition amplitudes are used to obtain the
transition amplitudes for the bulk lattice by summing
over clusters. Fourier transformation of the bulk transi-
tion amplitudes then gives the excitation energy in mo-
mentum space.
∆(qx, qy) = Σ_{r,m,n} λ^r c_{r,m,n} f_{m,n}(qx, qy) (3)
where fm,n(qx, qy) is given by the symmetry of the lattice,
f^sqr_{m,n}(qx, qy) = [cos(mqx + nqy) + cos(mqx − nqy)
+ cos(nqx + mqy) + cos(nqx − mqy)]/4 (4)
for the square lattice and
f^tri_{m,n}(qx, qy) = [cos(mqx/2) cos(√3 nqy/2)
+ cos((m − 3n)qx/4) cos(√3 (m + n)qy/4)
+ cos((m + 3n)qx/4) cos(√3 (m − n)qy/4)]/3 (5)
for the triangular lattice.
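To make Eqs. (3) and (4) concrete, a few of the low-order Jz = 0 coefficients from Table II can be summed directly. This is our own illustrative sketch; the truncation to five coefficients (the full series runs to much higher order) and the choice of wavevector are ours.

```python
import numpy as np

def f_sqr(m, n, qx, qy):
    """Square-lattice symmetry factor of Eq. (4)."""
    return (np.cos(m * qx + n * qy) + np.cos(m * qx - n * qy)
            + np.cos(n * qx + m * qy) + np.cos(n * qx - m * qy)) / 4.0

def dispersion(coeffs, lam, qx, qy):
    """Eq. (3): Delta(q) = sum over (r, m, n) of lam^r c_{r,m,n} f_{m,n}(q)."""
    return sum(lam ** r * c * f_sqr(m, n, qx, qy)
               for (r, m, n), c in coeffs.items())

# A few low-order (r, m, n) coefficients for Jz = 0 from Table II.
coeffs = {(0, 0, 0): 2.0, (1, 1, 0): -1.0, (2, 0, 0): -4.166667e-2,
          (2, 1, 1): -0.25, (2, 2, 0): -0.125}

w_max = dispersion(coeffs, 1.0, np.pi, np.pi)   # near the zone-corner maximum
```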
In order to access values of the expansion parameter λ
up to and including λ = 1, we use standard first order
integrated differential approximants18 (IDAs) of the form
QL(x) f′(x) + RM(x) f(x) + ST(x) = 0 (6)
where QL,RM ,ST are polynomials of degree L,M,and T
determined uniquely from the expansion coefficients.
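In practice the polynomials of Eq. (6) are found by matching the differential relation order by order against the known series coefficients and solving the resulting linear system. The sketch below is our own minimal implementation (with the normalization q0 = 1, our choice); it recovers the exact relation satisfied by the test series f(x) = (1 − x)^(−1/2).

```python
import numpy as np
from math import comb

def first_order_ida(a, L, M, T):
    """Fit Q_L(x) f'(x) + R_M(x) f(x) + S_T(x) = 0 to the series
    coefficients a of f, normalizing q_0 = 1.  Returns (q, r, s)."""
    n = L + (M + 1) + (T + 1)        # unknowns: q1..qL, r0..rM, s0..sT
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n):               # match the coefficient of x^k
        b[k] = -(k + 1) * a[k + 1]   # q0 = 1 term of Q f', moved to RHS
        for j in range(1, L + 1):    # remaining Q f' terms
            if k - j >= 0:
                A[k, j - 1] += (k - j + 1) * a[k - j + 1]
        for j in range(M + 1):       # R f terms
            if k - j >= 0:
                A[k, L + j] += a[k - j]
        if k <= T:                   # S terms
            A[k, L + M + 1 + k] += 1.0
    x = np.linalg.solve(A, b)
    return np.r_[1.0, x[:L]], x[L:L + M + 1], x[L + M + 1:]

# Test series: f(x) = (1 - x)^(-1/2), a_k = C(2k, k)/4^k, which obeys
# (1 - x) f' - f/2 = 0, i.e. q = (1, -1), r = (-1/2,), s = (0,).
a = [comb(2 * k, k) / 4.0 ** k for k in range(6)]
q, r, s = first_order_ida(a, 1, 0, 0)
```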
When gapless modes are present, estimates of the spin-
wave velocity are made using the technique of Singh and
Gelfand15. For small q = |q| the spectrum is assumed
to have the form ∆(q) ∼ [A(λ) + B(λ)q²]^{1/2}. To calcu-
late the spin-wave velocity, we expand ∆(q) in powers
of q, ∆(q) = C(λ) + D(λ)q² + ..., and identify C = A^{1/2}
and D = B/(2A^{1/2}). Thus the series 2C(λ)D(λ) provides
an estimate for B, which is the square of the spin-wave
velocity.
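The algebra behind this estimate is immediate: with C = A^{1/2} and D = B/(2A^{1/2}), the product 2CD returns B exactly. A short numerical check, using arbitrary test values of our own:

```python
import math

A, B = 0.7, 2.3                     # arbitrary positive test values
C = math.sqrt(A)                    # constant term of the small-q expansion
D = B / (2.0 * math.sqrt(A))        # q^2 coefficient
v_squared = 2.0 * C * D             # recovers B, the squared spin-wave velocity

# small-q cross-check: sqrt(A + B*q^2) vs. C + D*q^2
q = 1e-4
err = abs(math.sqrt(A + B * q * q) - (C + D * q * q))
```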
III. SQUARE LATTICE
On the square lattice we perform two distinct types
of expansions. For Jz ≥ J⊥, one can expand directly in
J⊥/Jz by choosing
H0 = Σ_{<i,j>} S^z_i S^z_j ,    H1 = Σ_{<i,j>} (S^x_i S^x_j + S^y_i S^y_j) (7)
In this case λ in (1) is J⊥ (setting Jz = 1). Since H1
conserves the total Sz, one can perform the computation
to high order by restricting the full Hilbert space to the
total Sz sector of interest, which in this paper will be
restricted to total Sz = 0 (half filling).
Series expansion studies of the excitation spectra by
Singh et al.15 and subsequently extended by Zheng et
al.16 have been performed for Jz ≥ J⊥, with expansions
involving linked clusters of up to 11 sites (λ10) and 15
sites (λ14) respectively.
Fig. 1 shows the results of the spin-wave disper-
sion analysis for J⊥ from the dispersionless Ising model
J⊥ = 0 to the Heisenberg model J⊥ = 1. One can see the
development of minima at (0, 0) and (π, π) with increas-
ing J⊥, with the gap completely closing at J⊥ = 1. Since
IDAs are not accurate near the gapless points, the dot-
ted line shows the estimated spin-wave velocity v = 1.666
when J⊥ = 1.
To obtain the spectra with XY anisotropy (Jz ≤ J⊥),
we need to develop a different type of expansion. We
consider the following break up of the Hamiltonian: (for
Jz ≤ J⊥)
H0 = Σ_{<i,j>} S^x_i S^x_j ,    H1 = Σ_{<i,j>} (S^y_i S^y_j + Jz S^z_i S^z_j) (8)
FIG. 1: (Color online) The spin-wave dispersion of the XXZ
model on the square lattice for various values of J⊥ (Jz = 1).
The error bars give an indication of the spread of various
IDAs. The lines around the gapless points for J⊥ = 1 show
the calculated spin-wave velocity.
where J⊥ = 1. Now, a new series is obtained for each
value of Jz, and the XXZ model is only obtained upon
extrapolation to λ = 1. In contrast to the first type of
expansion, H1 does not conserve total Sz, and so the
entire Hilbert space must be used, limiting the order of
computation of the series to λ10 (11 sites).
Fig. 2 shows the results of the spin-wave dispersion
analysis for several values of Jz from the XY model
(Jz = 0) to the Heisenberg model (J⊥ = 1). We find
that for the pure XY model, there are gapless excitations
at q = 0 (Goldstone modes of the superfluid phase),
but there is no roton minimum at the antiferromagnetic
wavevector. As Jz is increased, the spin-wave veloc-
ity increases and a clear roton minimum develops at the
antiferromagnetic wavevector. This minimum collapses to
zero as the Heisenberg point is approached. In fact, the
doubling of the unit cell implies that for the Heisenberg
limit, the spectra at q and at q+(π, π) become identi-
cal. Another point of interest is that along the direction
(π, 0) to (π/2, π/2), which corresponds to the antiferro-
magnetic zone boundary, the dispersion is very flat for
the pure XY model. A weak minimum develops at (π, 0)
as the Heisenberg symmetry point is reached. These re-
sults should be useful in comparing with spectra of two-
dimensional antiferromagnets, where there is significant
exchange anisotropy.
IV. TRIANGULAR LATTICE
There has been much recent interest in the XXZ model
on the triangular lattice. The spin-1/2 XXZ model with
ferromagnetic in-plane coupling J⊥ < 0 and antiferro-
magnetic coupling in the z direction Jz > 0 can be
mapped to a hard-core boson model with nearest neigh-
FIG. 2: (Color online) The spin-wave dispersion of the XXZ
model on the square lattice for various values of Jz (J⊥ = 1).
The error bars give an indication of the spread of various
IDAs. The lines around the gapless points show the calculated
spin-wave velocities.
TABLE I: Series coefficients for the ground state energy per
site E0/N and M
n E0/N for Jz=0 M for Jz=0
0 -5.000000e-01 1.250000e-01
2 -4.166667e-02 -6.944444e-03
4 -4.282407e-03 -2.267072e-03
6 -1.251190e-03 -1.141688e-03
8 -5.538567e-04 -7.184375e-04
10 -2.990401e-04 -5.039687e-04
12 -1.823004e-04 -3.784068e-04
14 -1.015895e-04 -2.494459e-04
bor repulsion.
Hb = −t Σ_{<i,j>} (b†_i b_j + b_i b†_j) + V Σ_{<i,j>} n_i n_j (9)

where b†_i is the bosonic creation operator and n_i = b†_i b_i. The
parameters are related by t = −J⊥/2 and V = Jz . For
the rest of this section, we let J⊥ = −1, and so V/t =
−2Jz/J⊥ = 2Jz.
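The mapping can be checked explicitly on two sites: with b†_i → S+_i and n_i → S^z_i + 1/2, the spin and hard-core-boson Hamiltonians agree up to a chemical-potential term and a constant, which do not matter at fixed filling. The numpy sketch below is our own check, not code from the paper.

```python
import numpy as np

# Basis per site: (occupied, empty) = (spin up, spin down).
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+  (= b^dag for hard-core bosons)
sz = np.diag([0.5, -0.5])                 # S^z
n = sp @ sp.T                             # number operator b^dag b = S^z + 1/2
I2 = np.eye(2)

Jperp, Jz = -1.0, 2.0
t, V = -Jperp / 2.0, Jz                   # parameter map quoted below Eq. (9)

# Spin form on two sites, using SxSx + SySy = (S+S- + S-S+)/2:
H_spin = 0.5 * Jperp * (np.kron(sp, sp.T) + np.kron(sp.T, sp)) \
         + Jz * np.kron(sz, sz)

# Hard-core boson form, Eq. (9):
H_boson = -t * (np.kron(sp, sp.T) + np.kron(sp.T, sp)) + V * np.kron(n, n)

# The two differ only by -(Jz/2)*(n1 + n2) + Jz/4: a chemical potential
# plus a constant.
shift = -0.5 * Jz * (np.kron(n, I2) + np.kron(I2, n)) + 0.25 * Jz * np.eye(4)
```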
We will continue to use the spin language as it is natu-
ral for our study. For Jz = 0, the ferromagnetic in-plane
coupling is unfrustrated. As Jz is increased, the com-
peting interaction leads to an emergence of a supersolid
order.
We have performed expansions for the triangular lat-
tice XXZ model of the form
H0 = − Σ_{<i,j>} S^x_i S^x_j ,    H1 = Σ_{<i,j>} (−S^y_i S^y_j + Jz S^z_i S^z_j) (10)
where J⊥ = −1. Series are obtained for each value of
Jz, and the XXZ model is obtained upon extrapolation
to λ = 1.
The static structure factor
S(k) = Σ_r e^{ik·r} ⟨S^z_0 S^z_r⟩ (11)
is shown in Fig. 3 along contours shown in Fig. 4. As
Jz increases, a peak forms at wavevector q=(4π/3, 0). A
plot of this point is shown in Fig. 5 along with QMC data
from Wessel and Troyer.10
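Eq. (11) evaluated on an idealized frozen three-sublattice Sz pattern shows why the peak sits at q = (4π/3, 0): the Fourier sum picks out exactly that ordering wavevector. The toy computation below, on a 12×12 cluster, is ours and does not use the series-expansion correlations.

```python
import numpy as np

L = 12
a1 = np.array([1.0, 0.0])                     # triangular lattice vectors
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])
R = np.array([m * a1 + n * a2 for m in range(L) for n in range(L)])

Q = np.array([4.0 * np.pi / 3.0, 0.0])        # ordering wavevector
sz = 0.5 * np.cos(R @ Q)                      # idealized three-sublattice pattern

def structure_factor(k):
    """Eq. (11) for a frozen classical configuration: |sum_r e^{ik.r} Sz_r|^2 / N."""
    return np.abs(np.exp(1j * (R @ k)) @ sz) ** 2 / len(R)

S_peak = structure_factor(Q)                       # large, Bragg-like peak
S_zero = structure_factor(np.array([0.0, 0.0]))    # vanishes for this pattern
```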
FIG. 3: (Color online) The static structure factor of the
XXZ model on the triangular lattice for various values of Jz
(J⊥=−1). The error bars give an indication of the spread of
IDAs.
FIG. 4: The hexagonal first Brillouin zone of the triangu-
lar lattice and the path ABOCPQBE along which the static
structure factor and spin-wave dispersion have been plotted
in Figs. 3 and 6.
FIG. 5: (Color online) The static structure factor at
q=(4π/3, 0) of the XXZ model on the triangular lattice ver-
sus Jz (J⊥=−1). The error bars for the series data give
an indication of the spread of IDAs. Also shown are QMC
data for 12x12,24x24,and 36x36 site clusters from Wessel and
Troyer.10
Fig. 6 shows the results of the spin-wave dispersion
analysis for various values of Jz (J⊥ = −1). The error
bars give an indication of the spread of IDAs. The lines
around the gapless points show the calculated spin-wave
velocities. One can see the development of minima at Q
with increasing Jz, with the gap completely closing at
Jz ∼ 4.5. Since IDAs are not accurate near the gapless
point (q = 0), the dotted line shows the estimated spin-
wave velocities.
We have been unable to get any consistent estimates
for the critical exponents characterizing the divergence
of the antiferromagnetic structure factor and the van-
ishing of the roton gap as the supersolid phase is ap-
proached. Furthermore, the comparison with the QMC
data of Wessel and Troyer show that the QMC data begin
to show deviations from our series expansion results be-
fore Jz = 4.5. We believe, this implies that the superfluid
to supersolid transition is weakly first order. Wessel and
Troyer estimate the transition to be at Jz ≈ 4.3 ± 0.2
(|t/V | = 0.115 ± 0.005). Note that the spin-wave the-
ory gives the transition point to be at Jz = 2 (Ref. 9), so that
quantum fluctuations play a substantial role here. Addi-
tional QMC studies, should provide further insight into
the nature of the transition.24
The calculations also show that near the midpoint of
the faces of the Brillouin Zone (point B in Fig. 4), the dis-
persion has a minimum in the direction perpendicular to the
zone face QB and is very flat in other directions. This be-
havior is reminiscent of the dispersion in the Heisenberg
antiferromagnet on the triangular lattice where there is
a true minimum at this point.4,5 Note that this behav-
ior is unrelated to any peak in the static structure factor
and thus, as in case of the Heisenberg model, is more
quantum mechanical in nature.
In Fig. 7, we show the density of states for the spectra
for Jz = 2. There are several distinguishing features
in the density of states. First the largest peak in the
density of states occurs close to the highest excitation
energies. This is not unlike what is found in many other
antiferromagnets. However, here, there is a second peak
that corresponds to the flat regions in the spectra near
the point B. Finally, at the roton energy there is a sharp
drop in the density of states. The only contributions to
the density of states below the roton gap comes from the
Goldstone modes near q = 0. Since the latter have very
small density of states, there is a discontinuity in the
density of states at the roton energy.
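The DOS is, in essence, a histogram of ω(q) sampled on a Brillouin-zone grid. The sketch below does this for a toy gapless dispersion of our own choosing (a stand-in, not the series-expansion spectrum); the sharp structure emerges from the histogram exactly as described.

```python
import numpy as np

def toy_dispersion(qx, qy):
    """Stand-in gapless dispersion (NOT the series-expansion spectrum)."""
    return np.sqrt(2.0 - np.cos(qx) - np.cos(qy))

q = np.linspace(-np.pi, np.pi, 201)
QX, QY = np.meshgrid(q, q)
omega = toy_dispersion(QX, QY).ravel()

dos, edges = np.histogram(omega, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# For this toy model the largest DOS peak sits at the van Hove energy
# sqrt(2), coming from the flat (saddle-point) regions of the dispersion.
```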
FIG. 6: (Color online) The spin-wave dispersion of the XXZ
model on the triangular lattice for various values of Jz (J⊥ =
−1). The error bars give an indication of the spread of IDAs.
The lines around the gapless points show the calculated spin-
wave velocities.
FIG. 7: The density of states for the XXZ model on the tri-
angular lattice for Jz = 2 (J⊥ = −1).
V. SUMMARY AND CONCLUSIONS
In this paper, we have studied the excitation spectra of
hard-core Boson models at half-filling on square and tri-
angular lattices. The calculations show the development
of the roton minima at the antiferromagnetic wavevector,
due to nearest-neighbor repulsion. In accord with Feyn-
man’s ideas, the development of the minima is correlated
with the emergence of a sharp peak in the static structure
factor. The case of the triangular lattice is clearly more in-
teresting as one has a phase transition from a superfluid
to a supersolid phase, where the roton gap goes to zero.
Our series results suggest that the roton gap vanishes at
Jz ≈ 4.5. However, there may be a weakly first order
transition slightly before this Jz value. A more careful
finite-size scaling analysis of the QMC data should pro-
vide further insight into this issue.
Our results for the spectra suggest two peaks in the
density of states and a sharp drop in the density of states
at the energy of the roton minima. If such a hard-core
Boson system on a triangular-lattice is realized in cold-
atom experiments, a measurement of the two peaks in
the density of states and the roton minima can be used
to determine independently the hopping parameter t and
the nearest-neighbor repulsion V .
Acknowledgments
This research is supported in part by the National Sci-
ence Foundation Grant Number DMR-0240918. We are
grateful to Stefan Wessel for providing us with the QMC
data for the structure factors and to Marcos Rigol and
Stefan Wessel for discussions.
1 R. P. Feynman, Phys. Rev. 94, 262 (1954).
2 S. M. Girvin, A. H. MacDonald, and P. M. Platzman Phys.
Rev. Lett. 54, 581-583 (1985).
3 P.Chandra, P. Coleman and A.I. Larkin, J. Phys. Cond.
Matter 2, 7933 (1990).
4 W. Zheng, et al, Phys. Rev. Lett. 96, 057201 (2006); Phys.
Rev. B 74, 224420 (2006).
5 O. A. Starykh, A. V. Chubukov and A. G. Abanov, Phys
Rev. B74, 180403 (2006); A. L. Chernyshev and M. E.
Zhitomirsky, Phys. Rev. Lett. 97, 207202 (2006).
6 E. Kim and M. H. W. Chan, Nature 427, 225 (2004).
7 See for example M. Boninsegni et al, Phys. Rev. Lett. 97,
080401 (2006).
8 P. W. Anderson, W. F. Brinkman, D. A. Huse, Science
310, 1164 (2005).
9 G. Murthy, D. Arovas, and A. Auerbach, Phys. Rev. B,
55, 3104 (1997).
10 S. Wessel and M. Troyer, Phys. Rev. Lett. 95, 127205
(2005).
11 D. Heidarian and K. Damle, Phys. Rev. Lett. 95, 127206
(2005).
12 R. G. Melko, A. Paramekanti, A. A. Burkov, A. Vish-
wanath, D.N. Sheng, and L. Balents Phys. Rev. Lett. 95,
127207 (2005).
13 M. Boninsegni and N. Prokof’ev Phys. Rev. Lett. 95,
237204 (2005).
14 E. Zhao and Arun Paramekanti Phys. Rev. Lett. 96,
105303 (2006).
15 Rajiv R. P. Singh, Martin P. Gelfand, Phys. Rev. B, 52,
R15 695 (1995)
16 W. Zheng, J. Oitmaa, and C.J. Hamer, Phys. Rev. B, 71,
184440 (2005).
17 W. Zheng, J. Oitmaa, and C.J. Hamer, Phys. Rev. B, 43,
8321 (1991).
18 J. Oitmaa, C. Hamer, and W. Zheng, Series Expansion
Methods for Strongly Interacting Lattice Models (Cam-
bridge: Cambridge University Press) (2006).
19 A. W. Sandvik and R. R. P. Singh, Phys. Rev. Lett., 86,
528 (2001).
20 W. Zheng, C.J. Hamer, R. R. P. Singh, S. Trebst and H.
Monien, Phys. Rev. B, 63, 144410 (2001).
21 H.-Q. Lin, J. S. Flynn and D. D. Betts, Phys. Rev. B, 64,
214411 (2001).
22 M. P. Gelfand and R. R. P. Singh, Adv. Phys. 49, 93
(2000).
23 T. Bryant, Ph.D. Dissertation, University of California,
Davis, to be submitted.
24 S. Wessel, to be published.
TABLE II: Series coefficients for the magnon dispersion on the square lattice for Jz = 0 (XY model), nonzero coefficients up
to r=9 are listed for compactness (the complete series can be found in Ref. 23)
(r,m,n) cr,m,n (r,m,n) cr,m,n (r,m,n) cr,m,n (r,m,n) cr,m,n
(0,0,0) 2.000000e+00 (3,2,1) -7.291667e-02 (8,3,3) -1.715283e-03 (9,5,2) -1.334928e-03
(2,0,0) -4.166667e-02 (5,2,1) -1.968093e-02 (4,4,0) -2.712674e-03 (8,5,3) -7.225832e-04
(4,0,0) -1.023582e-02 (7,2,1) -8.897152e-03 (6,4,0) -2.980614e-03 (9,5,4) -3.993759e-04
(6,0,0) -5.390283e-03 (9,2,1) -4.845897e-03 (8,4,0) -1.932294e-03 (6,6,0) -1.156309e-04
(8,0,0) -2.781363e-03 (4,2,2) -1.627604e-02 (5,4,1) -5.303277e-03 (8,6,0) -3.216877e-04
(1,1,0) -1.000000e+00 (6,2,2) -5.758412e-03 (7,4,1) -4.614353e-03 (7,6,1) -3.803429e-04
(3,1,0) 4.340278e-02 (8,2,2) -2.558526e-03 (9,4,1) -3.055177e-03 (9,6,1) -6.801940e-04
(5,1,0) 1.811921e-02 (3,3,0) -1.215278e-02 (6,4,2) -3.468926e-03 (8,6,2) -3.612916e-04
(7,1,0) 7.679634e-03 (5,3,0) -7.265535e-03 (8,4,2) -2.946432e-03 (9,6,3) -2.662506e-04
(9,1,0) 4.056254e-03 (7,3,0) -3.368594e-03 (7,4,3) -1.901715e-03 (7,7,0) -2.716735e-05
(2,1,1) -2.500000e-01 (9,3,0) -1.895272e-03 (9,4,3) -1.807997e-03 (9,7,0) -1.043517e-04
(4,1,1) -2.314815e-02 (4,3,1) -2.170139e-02 (8,4,4) -4.516145e-04 (8,7,1) -1.032262e-04
(6,1,1) -5.841368e-03 (6,3,1) -1.033207e-02 (5,5,0) -5.303277e-04 (9,7,2) -1.141074e-04
(8,1,1) -1.566143e-03 (8,3,1) -4.852573e-03 (7,5,0) -1.004891e-03 (8,8,0) -6.451636e-06
(2,2,0) -1.250000e-01 (5,3,2) -1.060655e-02 (9,5,0) -9.122910e-04 (9,8,1) -2.852685e-05
(4,2,0) -3.067130e-02 (7,3,2) -6.456544e-03 (6,5,1) -1.387570e-03 (9,9,0) -1.584825e-06
(6,2,0) -9.598676e-03 (9,3,2) -3.716341e-03 (8,5,1) -1.771298e-03
(8,2,0) -3.989385e-03 (6,3,3) -2.312617e-03 (7,5,2) -1.141029e-03
TABLE III: Series coefficients for the ground state energy per site E0/N and M
n E0/N for Jz=0 M for Jz=0 E0/N for Jz=1 M for Jz=1 E0/N for Jz=2 M for Jz=2
0 -7.500000e-01 -2.500000e-01 -7.500000e-01 -2.500000e-01 -7.500000e-01 -2.500000e-01
1 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
2 -3.750000e-02 7.500000e-03 -1.500000e-01 3.000000e-02 -3.375000e-01 6.750000e-02
3 -7.500000e-03 3.000000e-03 6.750000e-02 -2.700000e-02
4 -3.102679e-03 1.989902e-03 -6.428572e-04 3.271769e-03 -3.081696e-02 3.263208e-02
5 -1.557668e-03 1.356060e-03 4.457109e-02 -4.706087e-02
6 -9.211778e-04 1.018008e-03 -1.686432e-03 2.253907e-03 -5.708928e-02 7.245005e-02
7 -5.949646e-04 7.975125e-04 7.181401e-02 -1.117528e-01
8 -4.102048e-04 6.468307e-04 -7.027097e-04 1.498661e-03 -1.001587e-01 1.839365e-01
9 -2.965850e-04 5.380214e-04 1.440002e-01 -3.025679e-01
10 -2.225228e-04 4.565960e-04 -3.752974e-04 1.029273e-03 -2.164303e-01 5.141882e-01
11 -1.719314e-04 3.937734e-04 3.343757e-01 -8.849178e-01
12 -1.360614e-04 3.441169e-04 -2.322484e-04 7.779323e-04 -5.294397e-01 1.545967e+00
TABLE IV: Series coefficients for the magnon dispersion on the triangular lattice Jz = 0, J⊥ = −1, nonzero coefficients up to
r=9 are listed for compactness (the complete series can be found in Ref. 23)
(r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n}
(0,0,0) 3.000000e+00 (6,5,1) -6.186994e-03 (5,8,0) -2.318836e-03 (9,10,6) -3.135408e-05
(2,0,0) -3.125000e-02 (7,5,1) -4.204807e-03 (6,8,0) -2.496675e-03 (9,10,8) -4.493212e-07
(3,0,0) 6.927083e-03 (8,5,1) -3.031857e-03 (7,8,0) -2.214348e-03 (6,11,1) -9.073618e-05
(4,0,0) -5.786823e-03 (9,5,1) -2.274746e-03 (8,8,0) -1.849982e-03 (7,11,1) -2.637766e-04
(5,0,0) -2.071746e-03 (4,5,3) -3.466797e-03 (9,8,0) -1.528161e-03 (8,11,1) -3.828143e-04
(6,0,0) -2.263644e-03 (5,5,3) -4.350420e-03 (5,8,2) -1.098718e-03 (9,11,1) -4.387479e-04
(7,0,0) -1.418733e-03 (6,5,3) -3.723124e-03 (6,8,2) -1.554950e-03 (7,11,3) -7.642054e-05
(8,0,0) -1.158308e-03 (7,5,3) -2.959033e-03 (7,8,2) -1.579331e-03 (8,11,3) -1.651312e-04
(9,0,0) -8.857566e-04 (8,5,3) -2.305863e-03 (8,8,2) -1.427534e-03 (9,11,3) -2.316277e-04
(1,2,0) -1.500000e+00 (9,5,3) -1.817374e-03 (9,8,2) -1.241335e-03 (8,11,5) -1.824976e-05
(2,2,0) -2.500000e-01 (3,6,0) -7.265625e-03 (6,8,4) -2.268405e-04 (9,11,5) -4.925111e-05
(3,2,0) 2.236979e-02 (4,6,0) -1.113487e-02 (7,8,4) -4.458355e-04 (9,11,7) -1.797285e-06
(4,2,0) -3.470238e-03 (5,6,0) -7.839022e-03 (8,8,4) -5.558869e-04 (6,12,0) -1.512270e-05
(5,2,0) 6.223272e-03 (6,6,0) -5.440344e-03 (9,8,4) -5.861054e-04 (7,12,0) -9.486853e-05
(6,2,0) 2.258837e-03 (7,6,0) -3.834575e-03 (7,8,6) -1.528411e-05 (8,12,0) -1.917140e-04
(7,2,0) 2.729869e-03 (8,6,0) -2.794337e-03 (8,8,6) -6.113630e-05 (9,12,0) -2.589374e-04
(8,2,0) 1.793769e-03 (9,6,0) -2.110041e-03 (9,8,6) -1.121505e-04 (7,12,2) -4.585233e-05
(9,2,0) 1.431100e-03 (4,6,2) -5.200195e-03 (5,9,1) -5.493588e-04 (8,12,2) -1.209317e-04
(2,3,1) -1.875000e-01 (5,6,2) -5.161886e-03 (6,9,1) -1.094328e-03 (9,12,2) -1.836224e-04
(3,3,1) -5.677083e-02 (6,6,2) -4.187961e-03 (7,9,1) -1.233659e-03 (8,12,4) -2.281220e-05
(4,3,1) -2.743217e-02 (7,6,2) -3.218811e-03 (8,9,1) -1.182811e-03 (9,12,4) -5.685542e-05
(5,3,1) -1.275959e-02 (8,6,2) -2.454043e-03 (9,9,1) -1.066119e-03 (9,12,6) -4.193665e-06
(6,3,1) -8.214468e-03 (9,6,2) -1.905072e-03 (6,9,3) -3.024539e-04 (7,13,1) -1.528411e-05
(7,3,1) -4.959005e-03 (5,6,4) -5.493588e-04 (7,9,3) -5.232866e-04 (8,13,1) -6.113630e-05
(8,3,1) -3.456161e-03 (6,6,4) -1.094328e-03 (8,9,3) -6.255503e-04 (9,13,1) -1.121505e-04
(9,3,1) -2.443109e-03 (7,6,4) -1.233659e-03 (9,9,3) -6.426095e-04 (8,13,3) -1.824976e-05
(2,4,0) -9.375000e-02 (8,6,4) -1.182811e-03 (7,9,5) -4.585233e-05 (9,13,3) -4.925111e-05
(3,4,0) -4.916667e-02 (9,6,4) -1.066119e-03 (8,9,5) -1.209317e-04 (9,13,5) -6.290497e-06
(4,4,0) -2.605934e-02 (4,7,1) -3.466797e-03 (9,9,5) -1.836224e-04 (7,14,0) -2.183444e-06
(5,4,0) -1.289340e-02 (5,7,1) -4.350420e-03 (8,9,7) -2.607109e-06 (8,14,0) -1.878307e-05
(6,4,0) -8.269582e-03 (6,7,1) -3.723124e-03 (9,9,7) -1.376709e-05 (9,14,0) -4.937808e-05
(7,4,0) -5.280757e-03 (7,7,1) -2.959033e-03 (5,10,0) -1.098718e-04 (8,14,2) -9.124881e-06
(8,4,0) -3.753955e-03 (8,7,1) -2.305863e-03 (6,10,0) -4.726374e-04 (9,14,2) -3.135408e-05
(9,4,0) -2.735539e-03 (9,7,1) -1.817374e-03 (7,10,0) -7.110879e-04 (9,14,4) -6.290497e-06
(3,4,2) -2.179687e-02 (5,7,3) -1.098718e-03 (8,10,0) -7.825389e-04 (8,15,1) -2.607109e-06
(4,4,2) -1.596136e-02 (6,7,3) -1.554950e-03 (9,10,0) -7.669912e-04 (9,15,1) -1.376709e-05
(5,4,2) -9.470639e-03 (7,7,3) -1.579331e-03 (6,10,2) -2.268405e-04 (9,15,3) -4.193665e-06
(6,4,2) -6.186994e-03 (8,7,3) -1.427534e-03 (7,10,2) -4.458355e-04 (8,16,0) -3.258886e-07
(7,4,2) -4.204807e-03 (9,7,3) -1.241335e-03 (8,10,2) -5.558869e-04 (9,16,0) -3.685726e-06
(8,4,2) -3.031857e-03 (6,7,5) -9.073618e-05 (9,10,2) -5.861054e-04 (9,16,2) -1.797285e-06
(9,4,2) -2.274746e-03 (7,7,5) -2.637766e-04 (7,10,4) -7.642054e-05 (9,17,1) -4.493212e-07
(3,5,1) -2.179687e-02 (8,7,5) -3.828143e-04 (8,10,4) -1.651312e-04 (9,18,0) -4.992458e-08
(4,5,1) -1.596136e-02 (9,7,5) -4.387479e-04 (9,10,4) -2.316277e-04
(5,5,1) -9.470639e-03 (4,8,0) -8.666992e-04 (8,10,6) -9.124881e-06
TABLE V: Series coefficients for the magnon dispersion on the triangular lattice Jz = 1, J⊥ = −1, nonzero coefficients up to
r=9 are listed for compactness (the complete series can be found in Ref. 23)
(r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n}
(0,0,0) 3.000000e+00 (8,4,2) 1.641818e-02 (8,7,3) -3.898852e-03 (6,10,2) -1.174079e-03
(2,0,0) -1.250000e-01 (4,5,1) -7.072545e-02 (6,7,5) -4.696316e-04 (8,10,2) -2.957178e-03
(4,0,0) 8.773202e-02 (6,5,1) -2.266297e-02 (8,7,5) -2.220857e-03 (8,10,4) -1.051498e-03
(6,0,0) -3.747008e-02 (8,5,1) 1.641818e-02 (4,8,0) -3.632812e-03 (8,10,6) -5.518909e-05
(8,0,0) 2.658737e-02 (4,5,3) -1.453125e-02 (6,8,0) -1.296471e-02 (6,11,1) -4.696316e-04
(2,2,0) -1.000000e+00 (6,5,3) -1.709052e-02 (8,8,0) -2.634458e-03 (8,11,1) -2.220857e-03
(4,2,0) 3.067262e-01 (8,5,3) -1.306500e-05 (6,8,2) -8.548979e-03 (8,11,3) -1.051498e-03
(6,2,0) -1.478629e-01 (4,6,0) -4.674479e-02 (8,8,2) -3.898852e-03 (8,11,5) -1.103782e-04
(8,2,0) 1.192248e-01 (6,6,0) -2.107435e-02 (6,8,4) -1.174079e-03 (6,12,0) -7.827194e-05
(2,3,1) -7.500000e-01 (8,6,0) 8.083746e-03 (8,8,4) -2.957178e-03 (8,12,0) -1.178397e-03
(4,3,1) 8.683532e-02 (4,6,2) -2.179687e-02 (8,8,6) -3.787294e-04 (8,12,2) -7.614846e-04
(6,3,1) -7.198726e-02 (6,6,2) -1.775987e-02 (6,9,1) -5.919357e-03 (8,12,4) -1.379727e-04
(8,3,1) 6.383529e-02 (8,6,2) 1.136255e-03 (8,9,1) -4.075885e-03 (8,13,1) -3.787294e-04
(2,4,0) -3.750000e-01 (6,6,4) -5.919357e-03 (6,9,3) -1.565439e-03 (8,13,3) -1.103782e-04
(4,4,0) -2.357440e-02 (8,6,4) -4.075885e-03 (8,9,3) -3.182356e-03 (8,14,0) -1.146580e-04
(6,4,0) -4.667627e-02 (4,7,1) -1.453125e-02 (8,9,5) -7.614846e-04 (8,14,2) -5.518909e-05
(8,4,0) 4.476721e-02 (6,7,1) -1.709052e-02 (8,9,7) -1.576831e-05 (8,15,1) -1.576831e-05
(4,4,2) -7.072545e-02 (8,7,1) -1.306500e-05 (6,10,0) -2.499303e-03 (8,16,0) -1.971039e-06
(6,4,2) -2.266297e-02 (6,7,3) -8.548979e-03 (8,10,0) -3.646508e-03
TABLE VI: Series coefficients for the magnon dispersion on the triangular lattice Jz = 2, J⊥ = −1, nonzero coefficients up to
r=9 are listed for compactness (the complete series can be found in Ref. 23)
(r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n} (r,m,n) c_{r,m,n}
(0,0,0) 3.000000e+00 (6,5,1) -5.113269e-01 (5,8,0) 1.114660e-01 (9,10,6) 3.094665e-02
(2,0,0) -2.812500e-01 (7,5,1) 3.541550e-01 (6,8,0) -2.789923e-01 (9,10,8) 4.332352e-04
(3,0,0) -6.234375e-02 (8,5,1) -5.927875e-01 (7,8,0) 3.889189e-01 (6,11,1) -1.020493e-02
(4,0,0) 3.427127e-01 (9,5,1) 1.818953e+00 (8,8,0) -5.937472e-01 (7,11,1) 5.778696e-02
(5,0,0) 1.248868e-01 (4,5,3) -9.659180e-02 (9,8,0) 1.024914e+00 (8,11,1) -1.791293e-01
(6,0,0) -3.138557e-01 (5,5,3) 2.102338e-01 (5,8,2) 5.215530e-02 (9,11,1) 3.824414e-01
(7,0,0) -2.065582e-01 (6,5,3) -3.793451e-01 (6,8,2) -1.818856e-01 (7,11,3) 1.646341e-02
(8,0,0) 2.679692e-01 (7,5,3) 4.434905e-01 (7,8,2) 3.088576e-01 (8,11,3) -8.048695e-02
(9,0,0) 6.130976e-01 (8,5,3) -6.470161e-01 (8,8,2) -5.178979e-01 (9,11,3) 2.165711e-01
(1,2,0) 1.500000e+00 (9,5,3) 1.205017e+00 (9,8,2) 8.731811e-01 (8,11,5) -8.618625e-03
(2,2,0) -2.250000e+00 (3,6,0) 6.539063e-02 (6,8,4) -2.551233e-02 (9,11,5) 4.893766e-02
(3,2,0) -2.013281e-01 (4,6,0) -3.105654e-01 (7,8,4) 9.867811e-02 (9,11,7) 1.732941e-03
(4,2,0) 1.349036e+00 (5,6,0) 3.819521e-01 (8,8,4) -2.507655e-01 (6,12,0) -1.700822e-03
(5,2,0) 3.245835e-01 (6,6,0) -4.794666e-01 (9,8,4) 4.867085e-01 (7,12,0) 2.057252e-02
(6,2,0) -9.772865e-01 (7,6,0) 4.227344e-01 (7,8,6) 3.292682e-03 (8,12,0) -9.194406e-02
(7,2,0) -1.324972e+00 (8,6,0) -6.511353e-01 (8,8,6) -2.934241e-02 (9,12,0) 2.400559e-01
(8,2,0) 1.736288e+00 (9,6,0) 1.561357e+00 (9,8,6) 1.091019e-01 (7,12,2) 9.878045e-03
(9,2,0) 1.998149e+00 (4,6,2) -1.448877e-01 (5,9,1) 2.607765e-02 (8,12,2) -5.858223e-02
(2,3,1) -1.687500e+00 (5,6,2) 2.496909e-01 (6,9,1) -1.264872e-01 (9,12,2) 1.748291e-01
(3,3,1) 5.109375e-01 (6,6,2) -4.028414e-01 (7,9,1) 2.546730e-01 (8,12,4) -1.077328e-02
(4,3,1) 1.438694e-01 (7,6,2) 4.542205e-01 (8,9,1) -4.585226e-01 (9,12,4) 5.664036e-02
(5,3,1) 4.631412e-01 (8,6,2) -6.635572e-01 (9,9,1) 7.791699e-01 (9,12,6) 4.043528e-03
(6,3,1) -7.686440e-01 (9,6,2) 1.278006e+00 (6,9,3) -3.401644e-02 (7,13,1) 3.292682e-03
(7,3,1) -2.673657e-01 (5,6,4) 2.607765e-02 (7,9,3) 1.163420e-01 (8,13,1) -2.934241e-02
(8,3,1) 2.424913e-01 (6,6,4) -1.264872e-01 (8,9,3) -2.770943e-01 (9,13,1) 1.091019e-01
(9,3,1) 2.292894e+00 (7,6,4) 2.546730e-01 (9,9,3) 5.242482e-01 (8,13,3) -8.618625e-03
(2,4,0) -8.437500e-01 (8,6,4) -4.585226e-01 (7,9,5) 9.878045e-03 (9,13,3) 4.893766e-02
(3,4,0) 4.425000e-01 (9,6,4) 7.791699e-01 (8,9,5) -5.858223e-02 (9,13,5) 6.065292e-03
(4,4,0) -3.406189e-01 (4,7,1) -9.659180e-02 (9,9,5) 1.748291e-01 (7,14,0) 4.703831e-04
(5,4,0) 5.419257e-01 (5,7,1) 2.102338e-01 (8,9,7) -1.231232e-03 (8,14,0) -8.933341e-03
(6,4,0) -6.661367e-01 (6,7,1) -3.793451e-01 (9,9,7) 1.347384e-02 (9,14,0) 4.854225e-02
(7,4,0) 3.675761e-02 (7,7,1) 4.434905e-01 (5,10,0) 5.215530e-03 (8,14,2) -4.309313e-03
(8,4,0) -2.016564e-01 (8,7,1) -6.470161e-01 (6,10,0) -5.380211e-02 (9,14,2) 3.094665e-02
(9,4,0) 2.293150e+00 (9,7,1) 1.205017e+00 (7,10,0) 1.541458e-01 (9,14,4) 6.065292e-03
(3,4,2) 1.961719e-01 (5,7,3) 5.215530e-02 (8,10,0) -3.358010e-01 (8,15,1) -1.231232e-03
(4,4,2) -4.619167e-01 (6,7,3) -1.818856e-01 (9,10,0) 6.055844e-01 (9,15,1) 1.347384e-02
(5,4,2) 4.570889e-01 (7,7,3) 3.088576e-01 (6,10,2) -2.551233e-02 (9,15,3) 4.043528e-03
(6,4,2) -5.113269e-01 (8,7,3) -5.178979e-01 (7,10,2) 9.867811e-02 (8,16,0) -1.539040e-04
(7,4,2) 3.541550e-01 (9,7,3) 8.731811e-01 (8,10,2) -2.507655e-01 (9,16,0) 3.577909e-03
(8,4,2) -5.927875e-01 (6,7,5) -1.020493e-02 (9,10,2) 4.867085e-01 (9,16,2) 1.732941e-03
(9,4,2) 1.818953e+00 (7,7,5) 5.778696e-02 (7,10,4) 1.646341e-02 (9,17,1) 4.332352e-04
(3,5,1) 1.961719e-01 (8,7,5) -1.791293e-01 (8,10,4) -8.048695e-02 (9,18,0) 4.813724e-05
(4,5,1) -4.619167e-01 (9,7,5) 3.824414e-01 (9,10,4) 2.165711e-01
(5,5,1) 4.570889e-01 (4,8,0) -2.414795e-02 (8,10,6) -4.309313e-03
0704.1643 | The LIL for U-statistics in Hilbert spaces
The LIL for U-statistics in Hilbert spaces
Radosław Adamczak∗ Rafał Latała†
November 13, 2018
Abstract
We give necessary and sufficient conditions for the (bounded) law
of the iterated logarithm for U -statistics in Hilbert spaces. As a tool
we also develop moment and tail estimates for canonical Hilbert-space
valued U-statistics of arbitrary order, which are of independent interest.
Keywords: U-statistics, law of the iterated logarithm, tail and moment
estimates.
AMS 2000 Subject Classification: Primary: 60F15, Secondary: 60E15
1 Introduction
In the last two decades we have witnessed a rapid development in the asymp-
totic theory of U -statistics, boosted by the introduction of the so called
'decoupling' techniques (see [5, 6, 7]), which allow one to treat U-statistics
conditionally as sums of independent random variables. This approach yielded
better understanding of U -statistics versions of the classical limit theorems
of probability. Necessary and sufficient conditions were found for the strong
law of large numbers [17], the central limit theorem [19, 10] and the law
of the iterated logarithm [11, 2]. Also some sharp exponential inequalities
for canonical U -statistics have been found [8, 1, 14]. Analysis of the afore-
mentioned results shows an interesting phenomenon. Namely, the natural
counterparts of the necessary and sufficient conditions for sums of i.i.d. ran-
dom variables (U -statistics of degree 1), remain sufficient for U -statistics
∗Institute of Mathematics, Polish Academy of Sciences, Warsaw, Poland. Email:
[email protected]. Research partially supported by MEiN Grant 2 PO3A 019
†Institute of Mathematics, Warsaw University, Warsaw, Poland. Email:
[email protected]. Research partially supported by MEiN Grant 1 PO3A 012 29.
of arbitrary degree but, with the exception of the CLT, they cease to be
necessary. The correct conditions turn out to be much more involved and
are expressed for instance in terms of convergence of some series (LLN) or
as growth conditions for some functions (LIL).
A natural problem is an extension of the above results to the infinite-
dimensional setting. There has been some progress in this direction, and
partial answers have been found, usually under the assumption on the geo-
metrical structure of the space in which the values of a U -statistic are taken.
In general however the picture is far from being complete and the necessary
and sufficient conditions are known only in the case of the CLT for Hilbert
space valued U -statistics (see [5, 10] for the proof of sufficiency in type 2
spaces and necessity in cotype 2 spaces respectively).
In this article we generalize to separable Hilbert spaces the results from
[2] on necessary and sufficient conditions for the LIL for real valued U -
statistics. The conditions are expressed only in terms of the U -statistic
kernel and the distribution of the underlying i.i.d. sequence and can be also
considered a generalization of results from [13], where the LIL for i.i.d. sums
in Hilbert spaces was characterized. We consider only the bounded version
of the LIL and do not give the exact value of the lim sup nor determine the
limiting set. Except for the classical case of sums of i.i.d. random variables,
the problem of finding the lim sup is at the moment open even in the one
dimensional case (see [3, 5, 15] for some partial results) and the problem of
the geometry of the limiting set and the compact LIL is solved only under
suboptimal integrability conditions [3].
The organization of the paper is as follows. First, in Section 3 we prove
sharp exponential inequalities for canonical U -statistics, which generalize
the results of [1, 8] for the real-valued case. Then, after recalling some
basic facts about the LIL we give necessary and sufficient condition for the
LIL for decoupled, canonical U -statistics (Theorem 2). The quite involved
proof is given in the two subsequent sections. Finally we conclude with
our main result (Theorem 4), which gives a characterization of the LIL for
undecoupled U -statistics and follows quite easily from Theorem 2 and the
one dimensional result.
2 Notation
For an integer d, let (X_i)_{i∈N}, (X_i^{(k)})_{i∈N, 1≤k≤d} be independent random variables with values in a Polish space Σ, equipped with the Borel σ-field F. Let also (ε_i)_{i∈N}, (ε_i^{(k)})_{i∈N, 1≤k≤d} be independent Rademacher variables, independent of (X_i)_{i∈N}, (X_i^{(k)})_{i∈N, 1≤k≤d}.
Consider moreover measurable functions h_i : Σ^d → H, where (H, |·|) is a separable Hilbert space (we will denote both the norm in H and the absolute value of a real number by |·|; the context will however prevent ambiguity).
To shorten the notation, we will use the following convention. For i = (i_1, …, i_d) ∈ {1, …, n}^d we will write X_i (resp. X_i^dec) for (X_{i_1}, …, X_{i_d}) (resp. (X_{i_1}^{(1)}, …, X_{i_d}^{(d)})), and ε_i (resp. ε_i^dec) for the product ε_{i_1} · … · ε_{i_d} (resp. ε_{i_1}^{(1)} · … · ε_{i_d}^{(d)}); the notation is thus slightly inconsistent, which however should not lead to a misunderstanding. The U-statistics will therefore be denoted

Σ_{i∈I_n^d} h_i(X_i)   (an undecoupled U-statistic),

Σ_{|i|≤n} h_i(X_i^dec)   (a decoupled U-statistic),

Σ_{i∈I_n^d} ε_i h_i(X_i)   (an undecoupled randomized U-statistic),

Σ_{|i|≤n} ε_i^dec h_i(X_i^dec)   (a decoupled randomized U-statistic),

where

|i| = max_{k=1,…,d} i_k,   I_n^d = {i : |i| ≤ n, i_j ≠ i_k for j ≠ k}.

Since in this notation {1, …, d} = I_d^1, we will write

I_d = {1, 2, …, d}.
Throughout the article we will write L_d and L to denote constants depending only on d and universal constants, respectively. In all those cases the value of the constant may differ at each occurrence.
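To make the index conventions concrete, here is a small numerical sketch (our own illustration, not from the paper; the kernel h(x, y) = xy and the Gaussian data are arbitrary choices) of an undecoupled U-statistic, summed over I_n^d, versus a decoupled one, summed over all |i| ≤ n with independent copies:

```python
import itertools
import random

def undecoupled_U(h, X, d):
    # sum of h(X_{i_1}, ..., X_{i_d}) over i in I_n^d (pairwise distinct indices)
    return sum(h(*(X[i] for i in idx))
               for idx in itertools.permutations(range(len(X)), d))

def decoupled_U(h, X_cols, d):
    # X_cols[k-1] plays the role of the independent copy (X_i^{(k)})_i;
    # the sum runs over all multi-indices with |i| <= n
    n = len(X_cols[0])
    return sum(h(*(X_cols[k][idx[k]] for k in range(d)))
               for idx in itertools.product(range(n), repeat=d))

random.seed(0)
n, d = 5, 2
h = lambda x, y: x * y                                    # hypothetical kernel
X = [random.gauss(0.0, 1.0) for _ in range(n)]            # one i.i.d. sample
Xdec = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(d)]
print(undecoupled_U(h, X, d), decoupled_U(h, Xdec, d))
```

For this product kernel the sums factor: the undecoupled statistic equals (Σ_i X_i)² − Σ_i X_i², while the decoupled one equals (Σ_i X_i^{(1)})(Σ_i X_i^{(2)}).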
For I ⊆ I_d, we will write E_I to denote integration with respect to the variables (X_i^{(j)})_{i∈N, j∈I}. We will consider mainly canonical (or completely degenerate) kernels, i.e. kernels h_i such that for all j ∈ I_d, E_j h_i(X_i^dec) = 0.
3 Moment inequalities for U-statistics in Hilbert space
In this section we will present sharp moment and tail inequalities for Hilbert
space valued U -statistics, which in the sequel will constitute an important
ingredient in the analysis of the LIL. These estimates are a natural general-
ization of inequalities for real valued U-statistics presented in [1].
Let us first introduce some definitions.
Definition 1. For a nonempty, finite set I let P_I be the family consisting of all partitions J = {J_1, …, J_k} of I into nonempty, pairwise disjoint subsets. For J as above, let us define deg(J) = k. Additionally, let P_∅ = {∅} with deg(∅) = 0.
Definition 2. For a nonempty set I ⊆ I_d consider J = {J_1, …, J_k} ∈ P_I. For an array (h_i)_{i∈I_n^d} of H-valued kernels and a fixed value of i_{I^c}, define

‖(h_i)_{i_I}‖_J = sup { | Σ_{i_I} E_I [ h_i(X_i^dec) ∏_{j=1}^{deg(J)} f_{i_{J_j}}^{(j)}(X_{i_{J_j}}^dec) ] | :
f_{i_{J_j}}^{(j)} : Σ^{J_j} → R,  Σ_{i_{J_j}} E |f_{i_{J_j}}^{(j)}(X_{i_{J_j}}^dec)|² ≤ 1 for j = 1, …, deg(J) }.

Let moreover ‖(h_i)_{i_∅}‖_∅ = |h_i|.
Remark. It is worth mentioning that for I = I_d, ‖·‖_J is a deterministic norm, whereas for I ⊊ I_d it is a random variable, depending on X_{i_{I^c}}^dec.
Quantities given by the above definition suffice to obtain precise moment
estimates for real valued U -statistics. However, to bound the moments of
U -statistics with values in general Hilbert spaces, we will need to introduce
one more definition.
Definition 3. For nonempty sets K ⊆ I ⊆ I_d consider J = {J_1, …, J_k} ∈ P_{I\K}. For an array (h_i)_{i∈I_n^d} of H-valued kernels and a fixed value of i_{I^c}, define

‖(h_i)_{i_I}‖_{K,J} = sup { Σ_{i_I} E_I [ ⟨h_i(X_i^dec), g_{i_K}(X_{i_K}^dec)⟩ ∏_{j=1}^{deg(J)} f_{i_{J_j}}^{(j)}(X_{i_{J_j}}^dec) ] :
f_{i_{J_j}}^{(j)} : Σ^{J_j} → R,  g_{i_K} : Σ^K → H,  Σ_{i_K} E |g_{i_K}(X_{i_K}^dec)|² ≤ 1,
Σ_{i_{J_j}} E |f_{i_{J_j}}^{(j)}(X_{i_{J_j}}^dec)|² ≤ 1 for j = 1, …, deg(J) }.
Remark. One can see that the only difference between the above definition and Definition 2 is that the former distinguishes one set of coordinates and allows the functions corresponding to this set to take values in H. Moreover,
since the norm in H satisfies | · | = sup|φ|≤1〈φ, ·〉, we can treat Definition 2
as a counterpart of Definition 3 for K = ∅. We will use this convention to
simplify the statements of the subsequent theorems. Thus, from now on, we
will write
‖ · ‖∅,J := ‖ · ‖J .
Example. For d = 2 and I = {1, 2}, the above definition gives

‖(h_ij(X_i, Y_j))_{i,j}‖_{∅,{{1,2}}} = sup { | Σ_{i,j} E h_ij(X_i, Y_j) f_ij(X_i, Y_j) | : Σ_{i,j} E f_ij(X_i, Y_j)² ≤ 1 }
= sup_{φ∈H, |φ|≤1} ( Σ_{i,j} E ⟨φ, h_ij(X_i, Y_j)⟩² )^{1/2},

‖(h_ij(X_i, Y_j))_{i,j}‖_{∅,{{1}{2}}} = sup { | Σ_{i,j} E h_ij(X_i, Y_j) f_i(X_i) g_j(Y_j) | : Σ_i E f_i(X_i)² ≤ 1, Σ_j E g_j(Y_j)² ≤ 1 },

‖(h_ij(X_i, Y_j))_{i,j}‖_{{1},{{2}}} = sup { Σ_{i,j} E ⟨f_i(X_i), h_ij(X_i, Y_j)⟩ g_j(Y_j) : Σ_i E |f_i(X_i)|² ≤ 1, Σ_j E g_j(Y_j)² ≤ 1 },

‖(h_ij(X_i, Y_j))_{i,j}‖_{{1,2},∅} = sup { Σ_{i,j} E ⟨f_ij(X_i, Y_j), h_ij(X_i, Y_j)⟩ : Σ_{i,j} E |f_ij(X_i, Y_j)|² ≤ 1 }
= ( Σ_{i,j} E |h_ij(X_i, Y_j)|² )^{1/2}.
We can now present the main result of this section.
Theorem 1. For any array of H-valued, completely degenerate kernels (h_i)_i and any p ≥ 2, we have

E | Σ_{|i|≤n} h_i(X_i^dec) |^p ≤ L_d^p Σ_{K⊆I⊆I_d} Σ_{J∈P_{I\K}} p^{p(#I^c + deg J / 2)} E_{I^c} max_{i_{I^c}} ‖(h_i)_{i_I}‖_{K,J}^p .
The proof of the above theorem proceeds along the lines of arguments
presented in [1, 8]. In particular we will need the following moment estimates
for suprema of empirical processes [8].
Lemma 1 ([8, Proposition 3.1], see also [4, Theorem 12]). Let X_1, …, X_n be independent random variables with values in (Σ, F) and let T be a countable class of measurable real functions on Σ, such that for all f ∈ T and i ∈ I_n, Ef(X_i) = 0 and Ef(X_i)² < ∞. Consider the random variable S := sup_{f∈T} | Σ_i f(X_i) |. Then for all p ≥ 1,

E S^p ≤ L^p ( (ES)^p + p^{p/2} σ^p + p^p E max_{i≤n} sup_{f∈T} |f(X_i)|^p ),

where

σ² = sup_{f∈T} Σ_{i≤n} E f(X_i)².
We will also need the following technical lemma.
Lemma 2 (Lemma 5 in [1]). For α > 0, arbitrary nonnegative kernels g_i : Σ^d → R_+ and p > 1 we have

Σ_i E g_i^p ≤ L_{α,d}^p ( p^{αp} E max_i g_i^p + Σ_{I⊊{1,…,d}} p^{#I p} E_I max_{i_I} ( Σ_{i_{I^c}} E_{I^c} g_i )^p ).
Before stating the next lemma, let us introduce some more definitions, concerning J-norms of deterministic matrices.

Definition 4. Let (a_i)_{i∈I_n^d} be a d-indexed array of real numbers. For J = {J_1, …, J_k} ∈ P_{I_d} define

‖(a_i)_i‖_J = sup { Σ_i a_i x_{i_{J_1}}^{(1)} · · · x_{i_{J_k}}^{(k)} : Σ_{i_{J_1}} (x_{i_{J_1}}^{(1)})² ≤ 1, …, Σ_{i_{J_k}} (x_{i_{J_k}}^{(k)})² ≤ 1 }.
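For d = 2 these J-norms are classical: J = {{1,2}} allows a single multiplier array (x_{ij}) with Σ x_{ij}² ≤ 1, so ‖(a_{ij})‖_{{1,2}} is the Hilbert-Schmidt (Frobenius) norm, while J = {{1},{2}} restricts the multiplier to a product x_i y_j, which yields the operator norm of (a_{ij}) on l_2; in particular the finer partition gives the smaller norm. A quick numerical check (our illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))

# J = {{1,2}}: one multiplier array with unit l_2 norm -> Frobenius norm
norm_12 = np.linalg.norm(a, 'fro')
# J = {{1},{2}}: rank-one multipliers x_i * y_j -> largest singular value
norm_1_2 = np.linalg.norm(a, 2)

assert norm_1_2 <= norm_12 + 1e-12   # finer partition, smaller norm
print(norm_1_2, norm_12)
```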
We will also need

Definition 5. For i ∈ N^{d−1} × I_n let a_i : Σ → R be measurable functions and let Z_1, …, Z_n be independent random variables with values in Σ. For a partition J = {J_1, …, J_k} ∈ P_{I_d} (d ∈ J_1), let us define

‖(a_i(Z_{i_d}))_i‖_J = sup { Σ_{i_{I_d\J_1}} Σ_{i_{J_1}} a_i(Z_{i_d}) x_{i_{J_1}}^{(1)} x_{i_{J_2}}^{(2)} · · · x_{i_{J_k}}^{(k)} : Σ_{i_{J_1}} (x_{i_{J_1}}^{(1)})² ≤ 1, …, Σ_{i_{J_k}} (x_{i_{J_k}}^{(k)})² ≤ 1 }.
Remark. All the definitions of norms presented so far seem quite similar, and indeed they can all be interpreted as injective tensor-product norms on proper spaces. We have decided to introduce them separately by explicit formulas, because this is the form that appears in our applications.
The next lemma is crucial for obtaining moment inequalities for canon-
ical real-valued U -statistics of order greater than 2. In the context of U -
statistics in Hilbert spaces we will need it already for d = 2.
Lemma 3 (Theorem 5 in [1]). Let Z_1, …, Z_n be independent random variables with values in (Σ, F). For i ∈ N^{d−1} × I_n let a_i : Σ → R be measurable functions such that E_Z a_i(Z_{i_d}) = 0. Then, for all p ≥ 2 we have

( E ‖ ( Σ_{i_d≤n} a_i(Z_{i_d}) )_{i_{I_{d−1}}} ‖^p )^{1/p} ≤ L_d ( Σ_{J∈P_{I_d}} p^{(1+deg(J)−d)/2} E ‖(a_i(Z_{i_d}))_i‖_J + Σ_{J∈P_{I_{d−1}}} p^{1+(1+deg(J)−d)/2} ( E max_{i_d≤n} ‖(a_i(Z_{i_d}))_{i_{I_{d−1}}}‖_J^p )^{1/p} ),

where ‖·‖ denotes the norm of a (d−1)-indexed matrix, regarded as a (d−1)-linear operator on (l_2)^{d−1} (thus the ‖·‖_{{1}…{d−1}}-norm in our notation).
To prove Theorem 1, we will need to adapt the above lemma to be able
to bound the (K,J )-norms of sums of independent kernels.
Definition 6. We define a partial order ≺ on PI as
I ≺ J
if and only if for all I ∈ I, there exists J ∈ J , such that I ⊆ J .
Lemma 4. Assume that Σ_i E|h_i(X_i^dec)|² < ∞. Then for any K ⊆ I_{d−1}, any J = {J_1, …, J_k} ∈ P_{I_{d−1}\K} and all p ≥ 2,

E_d ‖ ( Σ_{i_d≤n} h_i(X_i^dec) )_{i_{I_{d−1}}} ‖_{K,J}   (1)

≤ L_d [ Σ_{K⊆L⊆I_d, 𝒦∈P_{I_d\L} : J∪{K,{d}} ≺ 𝒦∪{L}} p^{(deg 𝒦 − deg J)/2} ‖(h_i)_{i_{I_d}}‖_{L,𝒦}

+ Σ_{K⊆L⊆I_{d−1}, 𝒦∈P_{I_{d−1}\L} : J∪{K} ≺ 𝒦∪{L}} p^{1+(deg 𝒦 − deg J)/2} ( E_d max_{i_d≤n} ‖(h_i)_{i_{I_{d−1}}}‖_{L,𝒦}² )^{1/2} ].
Remark In the above lemma we slightly abuse the notation, by identifying
for K = ∅ the partition {∅} ∪ J with J .
Given Lemma 3, the proof of Lemma 4 is not complicated; the main idea is just a change of basis. However, due to the complicated notation it is quite difficult to write directly, and we find it more convenient to phrase the proof in terms of tensor products of Hilbert spaces.
Let us begin with a classical fact.
Lemma 5. Let H be a separable Hilbert space and X a Σ-valued random variable. Then H ⊗ L_2(X) ≃ L_2(X, H), where L_2(X, H) is the space of square integrable random variables of the form f(X) with f : Σ → H measurable.
With the above identification, for h ∈ H, f(X) ∈ L2(X), we have h⊗f(X) =
hf(X) ∈ L2(X,H).
Proof of Lemma 4. To avoid problems with notation, which would lengthen
an intuitively easy proof, we will omit some technical details, related to
obvious identifications of some tensor products of Hilbert spaces (in the spirit
of Lemma 5). Similarly, when considering linear functionals on a space,
which can be written as a tensor product in several ways, we will switch to
the most convenient notation, without further explanations.
Let

H_0 = H ⊗ [ ⊗_{l∈K} ( ⊕_{i=1}^n L_2(X_i^{(l)}) ) ] ≃ ⊕_{|i_K|≤n} L_2(X_{i_K}^dec, H)

and, for j = 1, …, k,

H_j = ⊗_{l∈J_j} ( ⊕_{i=1}^n L_2(X_i^{(l)}) ) ≃ ⊕_{|i_{J_j}|≤n} L_2(X_{i_{J_j}}^dec).
In the case K = ∅, we have (using the common convention for empty
products) H0 ≃ H.
For i_d = 1, …, n and a fixed value of X_{i_d}^{(d)}, let A_{i_d} be the linear functional on

H̃ = ⊕_{|i_{I_{d−1}}|≤n} L_2(X_{i_{I_{d−1}}}^dec, H) ≃ ⊗_{j=0}^k H_j,

given by the element (h_i(X_i^dec))_{|i_{I_{d−1}}|≤n} ∈ H̃, with the formula

A_{i_d}( (g_{i_{I_{d−1}}}(X_{i_{I_{d−1}}}^dec))_{i_{I_{d−1}}} ) = ⟨ (g_{i_{I_{d−1}}}(X_{i_{I_{d−1}}}^dec))_{i_{I_{d−1}}}, (h_i(X_i^dec))_{i_{I_{d−1}}} ⟩
= Σ_{|i_{I_{d−1}}|≤n} E_{{1,…,d−1}} ⟨ g_{i_{I_{d−1}}}(X_{i_{I_{d−1}}}^dec), h_i(X_i^dec) ⟩_H .
As functions of X_{i_d}^{(d)}, the A_{i_d} = A_{i_d}(X_{i_d}^{(d)}) are independent random linear functionals. Thus they determine also random (k+1)-linear functionals on ⊕_{j=0}^k H_j, given by

(h_0, h_1, …, h_k) ↦ A_{i_d}(h_0 ⊗ h_1 ⊗ … ⊗ h_k).

If we denote by ‖·‖ the norm of a (k+1)-linear functional, the left-hand side of (1) can be written as

E_d ‖ Σ_{i_d≤n} A_{i_d}(X_{i_d}^{(d)}) ‖ .

Moreover, denoting by ‖A_{i_d}‖_{HS} the norm of A_{i_d} seen as a linear operator on ⊗_{j=0}^k H_j (by analogy with the Hilbert-Schmidt norm of a matrix), we have

Σ_{i_d} E ‖A_{i_d}(X_{i_d}^{(d)})‖_{HS}² = ‖(h_i)_i‖_{I_d,∅}² < ∞,

so the sequence A_{i_d}(X_{i_d}^{(d)}) determines a linear functional A on H̃ ⊗ [ ⊕_{i_d=1}^n L_2(X_{i_d}^{(d)}) ] ≃ ⊕_{|i|≤n} L_2(X_i^dec, H) ≃ ⊕_{i_d=1}^n L_2(X_{i_d}^{(d)}, H̃), given by the formula

A( g_1(X_1^{(d)}), …, g_n(X_n^{(d)}) ) = Σ_{i_d≤n} E [ A_{i_d}(X_{i_d}^{(d)}) ( g_{i_d}(X_{i_d}^{(d)}) ) ].

It is easily seen that if we interpret the domain of this functional as ⊕_{|i|≤n} L_2(X_i^dec, H), then it corresponds to the multimatrix (h_i(X_i^dec))_i.
Let us now introduce the following notation, consistent with the definition of ‖·‖_J. If T is a linear functional on ⊗_{j=0}^m E_j for some Hilbert spaces E_j, and I = {L_1, …, L_r} ∈ P_{I_m∪{0}}, then let ‖T‖_I denote the norm of T as an r-linear functional on ⊕_{i=1}^r [ ⊗_{j∈L_i} E_j ], given by

(e_1, …, e_r) ↦ T(e_1 ⊗ … ⊗ e_r).

Now, denoting H_{k+1} = ⊕_{i_d=1}^n L_2(X_{i_d}^{(d)}), we can apply the above definition to H̃ ⊗ [ ⊕_{i_d=1}^n L_2(X_{i_d}^{(d)}) ] ≃ ⊗_{j=0}^{k+1} H_j and use Lemma 3 to obtain

E_d ‖ Σ_{i_d≤n} A_{i_d}(X_{i_d}^{(d)}) ‖ ≤ L_d [ Σ_{I∈P_{I_{k+1}∪{0}}} p^{(1+deg(I)−(k+2))/2} ‖A‖_I + Σ_{I∈P_{I_k∪{0}}} p^{1+(1+deg(I)−(k+2))/2} ( E_d max_{i_d≤n} ‖A_{i_d}(X_{i_d}^{(d)})‖_I² )^{1/2} ].   (2)
This inequality is just the statement of the lemma, which follows from the "associativity" of the tensor product and its "distributivity" with respect to the direct sum of Hilbert spaces. Indeed, denoting J_{k+1} = {d}, we have for 0 ∉ L_i and U = ⋃_{j∈L_i} J_j

⊗_{j∈L_i} H_j ≃ ⊗_{j∈L_i} ⊗_{l∈J_j} ( ⊕_{s=1}^n L_2(X_s^{(l)}) ) ≃ ⊗_{l∈U} ( ⊕_{s=1}^n L_2(X_s^{(l)}) ) ≃ ⊕_{|i_U|≤n} L_2(X_{i_U}^dec).

Similarly, if 0 ∈ L_i,

⊗_{j∈L_i} H_j ≃ [ ⊕_{|i_K|≤n} L_2(X_{i_K}^dec, H) ] ⊗ [ ⊗_{0≠j∈L_i} ⊗_{l∈J_j} ( ⊕_{s=1}^n L_2(X_s^{(l)}) ) ] ≃ ⊕_{|i_U|≤n} L_2(X_{i_U}^dec, H),

where U = ( ⋃_{0≠j∈L_i} J_j ) ∪ K. Using the fact that for fixed X_{i_d}^{(d)}, A_{i_d} corresponds to the multimatrix (h_i(X_i^dec))_{|i_{I_{d−1}}|≤n}, and A corresponds to (h_i(X_i^dec))_{|i|≤n}, we can see that each summand ‖·‖_I on the right-hand side of (2) is equal to some summand ‖·‖_{L,𝒦} on the right-hand side of (1). Informally speaking, and slightly abusing the notation (in the case K = ∅), we "merge" the elements of the partition {{d}, J_1, …, J_k, K} or {J_1, …, J_k, K} in a way described by the partition I, thus obtaining the partition {L} ∪ 𝒦, where L is the set corresponding in the new partition to the set L_i ∈ I containing 0 (in particular, if K = ∅ and {0} ∈ I, then L = ∅). Let us also notice that deg(I) = deg(𝒦) + 1, hence

1 + deg(I) − (k + 2) = deg(𝒦) − deg(J),

which shows that also the powers of p on the right-hand sides of (1) and (2) are the same, completing the proof.
Proof of Theorem 1. For d = 1, the theorem is an obvious consequence of Lemma 1. Indeed, since |·| = sup_{|φ|≤1} |⟨φ, ·⟩|, and we can restrict the supremum to a countable set of functionals, we have

E | Σ_i h_i(X_i) |^p ≤ L^p [ ( E | Σ_i h_i(X_i) | )^p + p^{p/2} sup_{|φ|≤1} ( Σ_i E⟨φ, h_i(X_i)⟩² )^{p/2} + p^p E max_i |h_i(X_i)|^p ].

But E|Σ_i h_i(X_i)| ≤ ( E|Σ_i h_i(X_i)|² )^{1/2} = ( Σ_i E|h_i(X_i)|² )^{1/2} = ‖(h_i)_i‖_{{1},∅}, and we also have sup_{|φ|≤1} ( Σ_i E⟨φ, h_i(X_i)⟩² )^{1/2} = ‖(h_i)_i‖_{∅,{1}} and max_i |h_i(X_i)| = max_i ‖h_i‖_{∅,∅}.
We will now proceed by induction with respect to d. Assume that the theorem is true for all integers smaller than d ≥ 2 and denote Ĩ^c = I^c \ {d} for I ⊆ I_d. Then, applying the theorem for fixed X_{i_d}^{(d)} to the array of functions ( Σ_{i_d≤n} h_i(x_1, …, x_{d−1}, X_{i_d}^{(d)}) )_{i_{I_{d−1}}}, we get by the Fubini theorem

E | Σ_{|i|≤n} h_i(X_i^dec) |^p ≤ L_{d−1}^p Σ_{K⊆I⊆I_{d−1}} Σ_{J∈P_{I\K}} p^{p(#Ĩ^c + deg J / 2)} Σ_{i_{Ĩ^c}} E_{I^c} ‖ ( Σ_{i_d≤n} h_i )_{i_I} ‖_{K,J}^p ,

where we have replaced the maxima in i_{I^c} by sums (we can afford this apparent loss, since we will be able to fix it with Lemma 2). Now, from
Lemma 1 (applied to E_d) it follows that

E_d ‖ ( Σ_{i_d≤n} h_i )_{i_I} ‖_{K,J}^p ≤ L^p [ ( E_d ‖ ( Σ_{i_d≤n} h_i )_{i_I} ‖_{K,J} )^p + p^{p/2} ‖ (h_i)_{i_{I∪{d}}} ‖_{K,J∪{{d}}}^p + p^p Σ_{i_d≤n} E_d ‖ (h_i)_{i_I} ‖_{K,J}^p ].

Since Ĩ^c = (I ∪ {d})^c, deg(J ∪ {{d}}) = deg J + 1 and #I^c = #Ĩ^c + 1, combining the above inequalities gives
E | Σ_{|i|≤n} h_i(X_i^dec) |^p ≤ L_d^p [ Σ_{K⊆I⊆I_d} Σ_{J∈P_{I\K}} p^{p(#I^c + deg J / 2)} Σ_{i_{I^c}} E_{I^c} ‖ (h_i)_{i_I} ‖_{K,J}^p

+ Σ_{K⊆I⊆I_{d−1}} Σ_{J∈P_{I\K}} p^{p(#Ĩ^c + deg J / 2)} Σ_{i_{Ĩ^c}} E_{Ĩ^c} ( E_d ‖ ( Σ_{i_d≤n} h_i )_{i_I} ‖_{K,J} )^p ].
By applying Lemma 4 to the second sum on the right-hand side, we get

E | Σ_{|i|≤n} h_i(X_i^dec) |^p ≤ L_d^p Σ_{K⊆I⊆I_d} Σ_{J∈P_{I\K}} p^{p(#I^c + deg J / 2)} Σ_{i_{I^c}} E_{I^c} ‖ (h_i)_{i_I} ‖_{K,J}^p .   (3)
We can now finish the proof using Lemma 2. We apply it to E_{I^c} for I ≠ I_d, with #I^c instead of d and p/2 instead of p (for p = 2 the theorem is trivial, so we can assume that p > 2), and with α = 2#I^c + deg J + #I^c. Using the fact that (p/2)^{α#I^c} ≤ L_d^p and E ‖(h_i)_{i_I}‖_{K,J}² ≤ Σ_{i_I} E_I |h_i|², we get

Σ_{i_{I^c}} E_{I^c} ‖ (h_i)_{i_I} ‖_{K,J}^p ≤ p^{−αp/2} L̃_d^p [ p^{αp/2} E_{I^c} max_{i_{I^c}} ‖ (h_i)_{i_I} ‖_{K,J}^p + Σ_{J⊊I^c} p^{#J p/2} E_J max_{i_J} ( Σ_{i_{I^c\J}} E_{I^c\J} ‖ (h_i)_{i_I} ‖_{K,J}² )^{p/2} ]

≤ L̄_d^p [ E_{I^c} max_{i_{I^c}} ‖ (h_i)_{i_I} ‖_{K,J}^p + p^{−(#I^c + deg J / 2) p} max_{J⊊I^c} E_J max_{i_J} ( Σ_{i_{J^c}} E_{J^c} |h_i(X_i^dec)|² )^{p/2} ]

= L̄_d^p [ E_{I^c} max_{i_{I^c}} ‖ (h_i)_{i_I} ‖_{K,J}^p + p^{−(#I^c + deg J / 2) p} max_{J⊊I^c} E_J max_{i_J} ‖ (h_i)_{i_{J^c}} ‖_{J^c,∅}^p ],

which allows us to replace the sums in i_{I^c} on the right-hand side of (3) by the corresponding maxima, proving the inequality in question.
Theorem 1 gives a precise estimate for moments of canonical Hilbert
space valued U -statistics. In the sequel however we will need a weaker
estimate, using the ‖·‖K,J norms only for I = Id and specialized to the case
hi = h. Before we formulate a proper corollary, let us introduce
Definition 7. Let h : Σ^d → H be a canonical kernel and let X_1, X_2, …, X_d be i.i.d. random variables with values in Σ. Denote X = (X_1, …, X_d) and, for J ⊆ I_d, X_J = (X_j)_{j∈J}. For K ⊆ I ⊆ I_d and J = {J_1, …, J_k} ∈ P_{I\K}, we define

‖h‖_{K,J} = sup { E_I [ ⟨h(X), g(X_K)⟩ ∏_{j=1}^k f_j(X_{J_j}) ] : g : Σ^{#K} → H, E|g(X_K)|² ≤ 1, f_j : Σ^{#J_j} → R, E f_j(X_{J_j})² ≤ 1 for j = 1, …, k }.

In other words, ‖h‖_{K,J} is the ‖·‖_{K,J}-norm of the array (h_i)_{|i|=1} with h_{(1,…,1)} = h.
Remark. For I = I_d, ‖h‖_{K,J} is a norm, whereas for I ⊊ I_d it is a random variable, depending on X_{I^c}.

It is also easy to see that if all the variables X_i^{(k)} are i.i.d. and for all |i| ≤ n we have h_i = h, then for any fixed value of i_{I^c},

‖ (h_i)_{|i_I|≤n} ‖_{K,J} = ‖h‖_{K,J} n^{#I/2},

where ‖h‖_{K,J} is defined with respect to any i.i.d. sequence X_1, …, X_d of the form X_j = X_{i_j}^{(j)} for j ∈ I^c.

We also have ‖h‖_{K,J} ≤ ( E_I |h(X)|² )^{1/2}, which together with the above observations allows us to derive the following
Corollary 1. For all p ≥ 2, we have

E | Σ_{|i|≤n} h(X_i^dec) |^p ≤ L_d^p [ Σ_{K⊆I_d} Σ_{J∈P_{I_d\K}} p^{p deg J / 2} n^{dp/2} ‖h‖_{K,J}^p + Σ_{I⊊I_d} p^{p(d+#I^c)/2} n^{#I p/2} E_{I^c} max_{i_{I^c}} ( E_I |h(X_i^dec)|² )^{p/2} ].
The Chebyshev inequality gives the following corollary for bounded kernels.

Corollary 2. If h is bounded, then for all t ≥ 0,

P( | Σ_{|i|≤n} h(X_i^dec) | ≥ L_d ( n^{d/2} (E|h|²)^{1/2} + t ) )

≤ L_d exp [ − (1/L_d) min ( min_{K⊊I_d, J∈P_{I_d\K}} ( t / ( n^{d/2} ‖h‖_{K,J} ) )^{2/deg(J)} , min_{I⊊I_d} ( t / ( n^{#I/2} ‖ (E_I |h|²)^{1/2} ‖_∞ ) )^{2/(d+#I^c)} ) ].
Before we formulate the version of the exponential inequalities that will be useful for the analysis of the LIL, let us recall the classical definition of Hoeffding projections.

Definition 8. For an integrable kernel h : Σ^d → H, define π_d h : Σ^d → H by the formula

π_d h(x_1, …, x_d) = (δ_{x_1} − P) × (δ_{x_2} − P) × … × (δ_{x_d} − P) h,

where P is the law of X_1.

Remark. It is easy to see that π_d h is canonical. Moreover, π_d h = h iff h is canonical.
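For d = 2 the definition unfolds by inclusion-exclusion to π_2 h(x_1, x_2) = h(x_1, x_2) − E h(x_1, X_2) − E h(X_1, x_2) + E h(X_1, X_2). The following sketch (our own illustration; the three-point uniform law and the kernel are arbitrary choices) checks on a finite probability space that π_2 h is indeed canonical:

```python
from itertools import product

# finite state space with the uniform law P (hypothetical example)
states = [0.0, 1.0, 2.0]
p = 1.0 / len(states)
h = lambda x, y: (x + 1.0) * (y * y + 1.0)   # an arbitrary non-canonical kernel

E_h  = sum(p * p * h(x, y) for x, y in product(states, states))
E1_h = lambda y: sum(p * h(x, y) for x in states)   # integrate out x_1
E2_h = lambda x: sum(p * h(x, y) for y in states)   # integrate out x_2

def pi2_h(x, y):
    # (delta_x - P) x (delta_y - P) h, written out by inclusion-exclusion
    return h(x, y) - E2_h(x) - E1_h(y) + E_h

# canonicity: integrating pi2_h in either single coordinate gives 0
for x in states:
    assert abs(sum(p * pi2_h(x, y) for y in states)) < 1e-12
for y in states:
    assert abs(sum(p * pi2_h(x, y) for x in states)) < 1e-12
print("pi_2 h is completely degenerate on this space")
```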
The following Lemma was proven for H = R in [2] (Lemma 1). The
proof given there works for an arbitrary Banach space.
Lemma 6. Consider an arbitrary family of integrable kernels h_i : Σ^d → H, |i| ≤ n. For any p ≥ 1 we have

E | Σ_{|i|≤n} π_d h_i(X_i^dec) |^p ≤ L_d^p E | Σ_{|i|≤n} ε_i^dec h_i(X_i^dec) |^p .
In the sequel we will apply the exponential inequalities to U-statistics generated by π_d h, where h will be a not necessarily canonical kernel of order d. Since the kernel h̃((ε_1, X_1), …, (ε_d, X_d)) = ε_1 · · · ε_d h(X_1, …, X_d), where the ε_i's are i.i.d. Rademacher variables independent of the X_i's, is always canonical, Corollary 1, Lemma 6 and the Chebyshev inequality also give us the following corollary (note that ‖h̃‖_{K,J} = ‖h‖_{K,J}).

Corollary 3. If h is bounded, then for all t ≥ 0,

P( | Σ_{|i|≤n} π_d h(X_i^dec) | ≥ L_d ( n^{d/2} (E|h|²)^{1/2} + t ) )

≤ L_d exp [ − (1/L_d) min ( min_{K⊊I_d, J∈P_{I_d\K}} ( t / ( n^{d/2} ‖h‖_{K,J} ) )^{2/deg(J)} , min_{I⊊I_d} ( t / ( n^{#I/2} ‖ (E_I |h|²)^{1/2} ‖_∞ ) )^{2/(d+#I^c)} ) ].
4 The equivalence of several LIL statements
In this section we will recall general results on the correspondence of various
statements of the LIL. We will state them without proofs, since all of them
have been proven in [9] and [2] in the real case and the proofs can be directly
transferred to the Hilbert space case, with some simple modifications that
we will indicate.
Before we proceed, let us introduce the assumptions and notation com-
mon for the remaining part of the article.
• We assume that (X_i)_{i∈ℕ} and (X_i^{(k)})_{i∈ℕ, 1≤k≤d} are i.i.d. and h : Σ^d → H is a measurable function.
• Recall that (ε_i)_{i∈ℕ} and (ε_i^{(k)})_{i∈ℕ, 1≤k≤d} are independent Rademacher variables, independent of (X_i)_{i∈ℕ} and (X_i^{(k)})_{i∈ℕ, 1≤k≤d}.
• To avoid technical problems with small values of h let us also define LL x = loglog(x ∨ e^e).
• We will also occasionally write X for (X1, . . . ,Xd) and for I ⊆ Id,
XI = (Xi)i∈I . Sometimes we will write simply h instead of h(X).
• We will use the letter K to denote constants depending only on the
function h.
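The notation above can be made concrete in a small hedged Python sketch (the kernel, sample sizes and all names are my own illustrative choices, not from the paper): it computes LL and the decoupled sum Σ_{|i|≤n} h(X_i^dec) for d = 2.

```python
import numpy as np

def LL(x):
    """LL x = loglog(x ∨ e^e), as in the notation above."""
    return np.log(np.log(np.maximum(x, np.e ** np.e)))

def decoupled_U(h, samples):
    """Decoupled U-statistic sum over all multi-indices |i| <= n:
    sum_{i_1,...,i_d <= n} h(X^{(1)}_{i_1}, ..., X^{(d)}_{i_d}),
    where samples[k] holds the k-th independent column X^{(k)}."""
    grids = np.meshgrid(*samples, indexing="ij")
    return h(*grids).sum()

rng = np.random.default_rng(0)
n, d = 200, 2
h = lambda x, y: x * y                 # canonical for a centered law
X = [rng.standard_normal(n) for _ in range(d)]
S = decoupled_U(h, X)
norm = (n * LL(n)) ** (d / 2)          # LIL normalization (n loglog n)^{d/2}
print(S / norm)
```

For this product kernel the decoupled sum factors as (Σ_i X_i^{(1)})(Σ_j X_j^{(2)}), which is a convenient sanity check.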
We will need the following simple fact.

Lemma 7. If E|h|²/(LL|h|)^d = K < ∞, then E(|h|² ∧ u) ≤ L(loglog u)^d with L depending only on K and d.
The next lemma comes from [9]. It is proven there for H = R but the
argument is valid also for general Banach spaces.
Lemma 8. Let h : Σ^d → H be a symmetric function. There exist constants L_d such that if

limsup_{n→∞} (n loglog n)^{−d/2} | Σ_{i∈I_n^d} h(X_i) | < C a.s., (4)

then

Σ_n P( | Σ_{|i|≤2^n} ε_i^dec h(X_i^dec) | ≥ D 2^{nd/2} log^{d/2} n ) < ∞ (5)

for D = L_d C.
Lemma 9. For a symmetric function h : Σ^d → H, the LIL (4) is equivalent to the decoupled LIL

limsup_{n→∞} (n loglog n)^{−d/2} | Σ_{i∈I_n^d} h(X_i^dec) | < D a.s., (6)

meaning that (4) implies (6) with D = L_d C, and conversely (6) implies (4) with C = L_d D.
Proof. This is Lemma 8 in [2]. The proof is the same as there; one only needs to replace ℓ∞ with ℓ∞(H), the space of bounded H-valued sequences.
The next lemma also comes from [2] (Lemma 9). Although stated for real
kernels, its proof relies on an inductive argument with a stronger, Banach-
valued hypothesis.
Lemma 10. There exists a universal constant L < ∞, such that for any kernel h : Σ^d → H we have

P( max_{|j|≤n} | Σ_{i: i_k≤j_k, k=1,…,d} h(X_i^dec) | ≥ t ) ≤ L_d P( | Σ_{|i|≤n} h(X_i^dec) | ≥ t/L_d ).
Corollary 4. Consider a kernel h : Σ^d → H and α > 0. If

Σ_n P( | Σ_{|i|≤2^n} h(X_i^dec) | ≥ C 2^{nα} log^α n ) < ∞,

then

limsup_{n→∞} (n loglog n)^{−α} | Σ_{|i|≤n} h(X_i^dec) | ≤ L_{d,α} C a.s.
Proof. Given Lemma 10, the proof is the same as the one for real kernels,
presented in [2] (Corollary 1 therein).
The next lemma shows that the contribution to a decoupled U-statistic from the "diagonal", i.e. from the sum over multi-indices i ∉ I_n^d, is negligible. The proof given in [2] (Lemma 10) remains valid, since the only part which cannot be directly transferred to the Banach space setting is the estimate of the variance of canonical U-statistics, which is the same in the real and in the general Hilbert space case.
Lemma 11. If h : Σ^d → H is canonical and satisfies

E(|h|² ∧ u) = O((loglog u)^β)

for some β, then

limsup_{n→∞} (n loglog n)^{−d/2} | Σ_{|i|≤n, ∃_{j≠k} i_j=i_k} h(X_i^dec) | = 0 a.s. (7)
Corollary 5. The randomized decoupled LIL

limsup_{n→∞} (n loglog n)^{−d/2} | Σ_{|i|≤n} ε_i^dec h(X_i^dec) | < C a.s. (8)

is equivalent to (5), meaning that if (8) holds then so does (5) with D = L_d C, and (5) implies (8) with C = L_d D.
The proof is the same as for the real-valued case, given in [2] (Corollary 2); one only needs to replace h² by |h|² and use the formula for second moments in Hilbert spaces.
Corollary 6. For a symmetric, canonical kernel h : Σ^d → H, the LIL (4) is equivalent to the decoupled LIL "with diagonal"

limsup_{n→∞} (n loglog n)^{−d/2} | Σ_{|i|≤n} h(X_i^dec) | < D a.s., (9)

again meaning that there are constants L_d such that if (4) holds with some C then (9) holds with D = L_d C, and conversely, (9) implies (4) with C = L_d D.
Proof. The proof is the same as in the real case (see [2], Corollary 3). Al-
though the integrability of the kernel guaranteed by the LIL is worse in the
Hilbert space case, it still allows one to use Lemma 11.
5 The canonical decoupled case
Before we formulate the necessary and sufficient conditions for the bounded LIL in Hilbert spaces, we need one more definition.
Definition 9. For a canonical kernel h : Σ^d → H, K ⊆ I_d, J = {J_1, …, J_k} ∈ P_{I_d\K} and u > 0 we define

‖h‖_{K,J,u} = sup{ E⟨h(X), g(X_K)⟩ ∏_{i=1}^k f_i(X_{J_i}) : g : Σ^K → H, f_i : Σ^{J_i} → ℝ, ‖g‖_2, ‖f_i‖_2 ≤ 1, ‖g‖_∞, ‖f_i‖_∞ ≤ u },
where for K = ∅ by g(XK) we mean an element g ∈ H, and ‖g‖2 denotes
just the norm of g in H (alternatively we may think of g as of a random
variable measurable with respect to σ((Xi)i∈∅), hence constant). Thus the
condition on g becomes in this case just |g| ≤ 1.
Example For d = 2, the above definition reads as

‖h(X_1,X_2)‖_{∅,{{1,2}},u} = sup{ |E h(X_1,X_2) f(X_1,X_2)| : E f(X_1,X_2)² ≤ 1, ‖f‖_∞ ≤ u },
‖h(X_1,X_2)‖_{∅,{{1},{2}},u} = sup{ |E h(X_1,X_2) f(X_1) g(X_2)| : E f(X_1)², E g(X_2)² ≤ 1, ‖f‖_∞, ‖g‖_∞ ≤ u },
‖h(X_1,X_2)‖_{{1},{{2}},u} = sup{ E⟨f(X_1), h(X_1,X_2)⟩ g(X_2) : E|f(X_1)|², E g(X_2)² ≤ 1, ‖f‖_∞, ‖g‖_∞ ≤ u },
‖h(X_1,X_2)‖_{{1,2},∅,u} = sup{ E⟨f(X_1,X_2), h(X_1,X_2)⟩ : E|f(X_1,X_2)|² ≤ 1, ‖f‖_∞ ≤ u }.
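For d = 2 the norm ‖h‖_{∅,{{1},{2}},u} with the ‖·‖_∞ constraints dropped is an operator norm, so it can be estimated empirically by a singular value. The Python sketch below is an assumed illustration (kernel and all names are mine), replacing E by an empirical mean over n sample points:

```python
import numpy as np

# sup{ E h(X1,X2) f(X1) g(X2) : E f^2 <= 1, E g^2 <= 1 } restricted to the
# sample equals the top singular value of M[i,j] = h(x_i, y_j) / n,
# since f and g then correspond to unit vectors in R^n scaled by sqrt(n).
rng = np.random.default_rng(1)
n = 500
x, y = rng.standard_normal(n), rng.standard_normal(n)
h = lambda a, b: a * b               # toy canonical kernel h(x, y) = x * y
M = h(x[:, None], y[None, :]) / n    # empirical version of E h f g
norm_est = np.linalg.svd(M, compute_uv=False)[0]
print(norm_est)
```

For h(x, y) = xy the estimate should be close to (E X_1²)^{1/2}(E X_2²)^{1/2} = 1, and the empirical kernel matrix is rank one.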
Theorem 2. Let h be a canonical H-valued symmetric kernel in d variables. Then the decoupled LIL

limsup_{n→∞} ( n^{d/2}(loglog n)^{d/2} )^{−1} | Σ_{|i|≤n} h(X_i^dec) | < C a.s. (10)

holds if and only if

E |h|² / (LL|h|)^d < ∞ (11)

and for all K ⊆ I_d, J ∈ P_{I_d\K},

limsup_{u→∞} (loglog u)^{−(d−deg J)/2} ‖h‖_{K,J,u} < D. (12)

More precisely, if (10) holds for some C then (12) is satisfied for D = L_d C, and conversely, (11) and (12) imply (10) with C = L_d D.
Remark Using Lemma 7 one can easily check that the condition (12) with D < ∞ for K = I_d is implied by (11).
6 Necessity
The proof is a refinement of ideas from [16], used to study random matrix
approximations of the operator norm of kernel integral operators.
Lemma 12. If a, t > 0 and h is a nonnegative d-dimensional kernel such that N^d E h(X) ≥ ta and ‖E_I h(X)‖_∞ ≤ N^{−#I} a for all ∅ ⊆ I ⊊ {1, …, d}, then

∀_{λ∈(0,1)} P( Σ_{|i|≤N} h(X_i^dec) ≥ λta ) ≥ (1−λ)² t/(t + 2^d − 1) ≥ (1−λ)² 2^{−d} min(1, t).
Proof. We have

E( Σ_{|i|≤N} h(X_i^dec) )² = Σ_{|i|≤N} Σ_{|j|≤N} E h(X_i^dec) h(X_j^dec)
= Σ_{I⊆I_d} Σ_{|i|≤N} Σ_{|j|≤N: {k: i_k=j_k}=I} E h(X_i^dec) h(X_j^dec)
= Σ_{I⊆I_d} Σ_{|i|≤N} Σ_{|j|≤N: {k: i_k=j_k}=I} E[ h(X_i^dec) E_{I^c} h(X_j^dec) ]
≤ N^{2d}(E h(X))² + Σ_{∅≠I⊆I_d} N^{d+#I^c} E h(X) ‖E_{I^c} h(X)‖_∞
≤ N^{2d}(E h(X))² + (2^d − 1) N^d a E h(X)
≤ N^{2d}(E h(X))² + (2^d − 1) t^{−1} N^{2d}(E h(X))²
= ((t + 2^d − 1)/t) ( E Σ_{|i|≤N} h(X_i^dec) )².

The lemma follows now from the Paley–Zygmund inequality (see e.g. [5], Corollary 3.3.2), which says that for an arbitrary nonnegative random variable S,

P( S ≥ λ E S ) ≥ (1 − λ)² (E S)² / E S².
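The Paley–Zygmund inequality invoked above is easy to test numerically; the following sketch is my own toy check (S = |Z| for a standard normal Z is an arbitrary choice) comparing both sides:

```python
import numpy as np

# Numerical check of the Paley-Zygmund inequality
#   P(S >= lam * E S) >= (1 - lam)^2 (E S)^2 / E S^2
# for the toy nonnegative variable S = |Z|, Z standard normal.
rng = np.random.default_rng(2)
S = np.abs(rng.standard_normal(200_000))
lam = 0.5
lhs = np.mean(S >= lam * S.mean())
rhs = (1 - lam) ** 2 * S.mean() ** 2 / np.mean(S ** 2)
print(lhs, rhs)
```

The left-hand side should visibly dominate the right-hand side for any λ ∈ (0, 1).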
Corollary 7. Let A ⊆ Σ^d be a measurable set such that

∀_{∅⊊I⊊{1,…,d}} ∀_{x_{I^c}∈Σ^{I^c}} P_I( (x_{I^c}, X_I) ∈ A ) ≤ N^{−#I}.

Then

P( ∃_{|i|≤N} X_i^dec ∈ A ) ≥ 2^{−d} min( N^d P(X ∈ A), 1 ).

Proof. We apply Lemma 12 with h = 1_A, a = 1, t = N^d P(X ∈ A) and λ → 0+.
Lemma 13. Suppose that Z_j are nonnegative r.v.'s, p > 0 and a_j ∈ ℝ are such that P(Z_j ≥ a_j) ≥ p for all j. Then

P( Σ_j Z_j ≥ p Σ_j a_j / 2 ) ≥ p/2.

Proof. Let α := P( Σ_j Z_j ≥ p Σ_j a_j / 2 ); then

p Σ_j a_j ≤ E( Σ_j min(Z_j, a_j) ) ≤ α Σ_j a_j + p Σ_j a_j / 2,

and hence α ≥ p/2.
Theorem 3. Let Y be a r.v. independent of the X_i^{(j)}. Suppose that for each n, a_n ∈ ℝ and h_n is a (d+1)-dimensional nonnegative kernel such that

Σ_n P( Σ_{|i|≤2^n} h_n(X_i^dec, Y) ≥ a_n ) < ∞.

Let p > 0; then there exists a constant C_d(p), depending only on p and d, such that the sets

A_n := { x ∈ Σ^d : ∀_{n≤m≤2^{d−1}n} P_Y( h_m(x, Y) ≥ C_d(p) 2^{d(n−m)} a_m ) ≥ p }

satisfy

Σ_n 2^{dn} P(X ∈ A_n) < ∞.
Proof. We will show by induction on d that the assertion holds with C_1(p) := 1, C_2(p) := 12/p and

C_d(p) := 12 p^{−1} max_{1≤l≤d−1} C_{d−l}(2^{−l−4} p/3) for d = 3, 4, ….

For d = 1 we have

2^{−1} min( 2^n P(X ∈ A_n), 1 ) ≤ P( ∃_{|i|≤2^n} X_i^dec ∈ A_n )
= P( ∃_{|i|≤2^n} P_Y( h_n(X_i^dec, Y) ≥ a_n ) ≥ p )
≤ P( P_Y( Σ_{|i|≤2^n} h_n(X_i^dec, Y) ≥ a_n ) ≥ p )
≤ p^{−1} P( Σ_{|i|≤2^n} h_n(X_i^dec, Y) ≥ a_n ).
Before investigating the case d > 1 let us define

Ã_n := A_n \ ⋃_{m>n} A_m.

The sets Ã_n are pairwise disjoint and obviously Ã_n ⊆ A_n. Notice that since C_d(p) ≥ 1,

P(X ∈ A_n) ≤ P( P_Y( h_n(X, Y) ≥ a_n ) ≥ p ) ≤ p^{−1} P( Σ_{|i|≤2^n} h_n(X_i^dec, Y) ≥ a_n ).

Hence Σ_n P(X ∈ A_n) < ∞, so P(X ∈ limsup A_n) = 0. But if x ∉ limsup A_n, then

Σ_n 2^{nd} 1_{A_n}(x) ≤ Σ_n 2^{nd+1} 1_{Ã_n}(x).

So it is enough to show that

Σ_n 2^{dn} P(X ∈ Ã_n) < ∞.
Induction step Suppose that the statement holds for all d′ < d; we will show it for d. First we will inductively construct sets

Ã_n = A_n^0 ⊇ A_n^1 ⊇ … ⊇ A_n^{d−1}

such that for 1 ≤ l ≤ d−1,

∀_{∅⊊I⊊{1,…,d}, #I≤l} ∀_{x_{I^c}} P_I( (x_{I^c}, X_I) ∈ A_n^l ) ≤ 2^{−n#I} (13)

and

Σ_n 2^{nd} P( X ∈ A_n^{l−1} \ A_n^l ) < ∞. (14)
Suppose that 1 ≤ l ≤ d−1 and the set A_n^{l−1} has already been defined. Let I ⊂ {1, …, d} be such that #I = l and let j ∈ I. Notice that

P_I( (x_{I^c}, X_I) ∈ A_n^{l−1} ) = E_j P_{I\{j}}( (x_{I^c}, X_j, X_{I\{j}}) ∈ A_n^{l−1} ) ≤ 2^{−n(l−1)}

by the property (13) of the set A_n^{l−1}. Let us define, for n(l−1)+1 ≤ k ≤ nl,

B_{n,k}^I := { x_{I^c} : P_I( (x_{I^c}, X_I) ∈ A_n^{l−1} ) ∈ (2^{−k}, 2^{−k+1}] }

and

B_n^I := ⋃_{k=n(l−1)+1}^{nl} B_{n,k}^I = { x_{I^c} : P_I( (x_{I^c}, X_I) ∈ A_n^{l−1} ) > 2^{−nl} }.
We have

2^{dn} P( X ∈ A_n^{l−1}, X_{I^c} ∈ B_n^I ) ≤ 2 Σ_{k=n(l−1)+1}^{nl} 2^{dn−k} P( X_{I^c} ∈ B_{n,k}^I ),

so that

Σ_n 2^{dn} P( X ∈ A_n^{l−1}, X_{I^c} ∈ B_n^I ) ≤ 2 E k_1^I(X_{I^c}),

where

k_1^I(x_{I^c}) := Σ_n Σ_{k=n(l−1)+1}^{nl} 2^{dn−k} 1_{B_{n,k}^I}(x_{I^c}).
Let m ≥ 1 and

C_m^I := { x_{I^c} : 2^{(m+1)(d−l)} > k_1^I(x_{I^c}) ≥ 2^{m(d−l)} }.

Notice that for n > m and k ≤ nl we have 2^{dn−k} ≥ 2^{(d−l)(m+1)}; moreover

Σ_{n<m/2} Σ_{k=n(l−1)+1}^{nl} 2^{dn−k} ≤ Σ_{n<m/2} 2^{(d−l+1)n} ≤ 4·2^{(d−l+1)(m−1)/2} ≤ (1/2)·2^{(d−l)m}.

Hence

x_{I^c} ∈ C_m^I ⟹ Σ_{m/2≤n≤m} Σ_{k=n(l−1)+1}^{nl} 2^{dn−k} 1_{B_{n,k}^I}(x_{I^c}) ≥ (1/2)·2^{(d−l)m}. (15)
Let m ≤ r ≤ 2^{d−2}m. If m/2 ≤ n ≤ m, then since A_n^{l−1} ⊆ A_n we have for all x ∈ Σ^d,

P_Y( h_r(x, Y) ≥ C_d(p) 2^{d(n−r)} a_r 1_{A_n^{l−1}}(x) ) ≥ p;

therefore, since the sets A_n^{l−1} ⊆ Ã_n are pairwise disjoint,

P_Y( h_r(x, Y) ≥ C_d(p) 2^{−dr} a_r Σ_{m/2≤n≤m} 2^{dn} 1_{A_n^{l−1}}(x) ) ≥ p.

Hence, by Lemma 13,

P( Σ_{|i_I|≤2^r} h_r(x_{I^c}, X_{i_I}^dec, Y) ≥ (p/2) C_d(p) 2^{−dr} a_r Σ_{|i_I|≤2^r} k_{2,x_{I^c}}(X_{i_I}^dec) ) ≥ p/2, (16)

where

k_{2,x_{I^c}}(x_I) := Σ_{m/2≤n≤m} 2^{dn} 1_{A_n^{l−1}}(x_{I^c}, x_I).
We have ‖k_{2,x_{I^c}}‖_∞ ≤ 2^{dm}, and for ∅ ≠ J ⊊ I, by the property (13) of A_n^{l−1},

E_J k_{2,x_{I^c}}(x_{I\J}, X_J) = Σ_{m/2≤n≤m} 2^{dn} P_J( (x_{I^c}, x_{I\J}, X_J) ∈ A_n^{l−1} ) ≤ Σ_{m/2≤n≤m} 2^{(d−#J)n} ≤ 2^{(d−#J)m+1}.

Moreover, for x_{I^c} ∈ C_m^I, by the definition of B_{n,k}^I and (15),

E k_{2,x_{I^c}}(X_I) ≥ Σ_{m/2≤n≤m} Σ_{k=n(l−1)+1}^{nl} 2^{dn} P_I( (x_{I^c}, X_I) ∈ A_n^{l−1} ) 1_{B_{n,k}^I}(x_{I^c})
≥ Σ_{m/2≤n≤m} Σ_{k=n(l−1)+1}^{nl} 2^{dn−k} 1_{B_{n,k}^I}(x_{I^c}) ≥ (1/2)·2^{(d−l)m}.
Therefore by Lemma 12 (applied with l in place of d and with a = 2^{(d−l)m+rl+1}, t = 1/6, N = 2^r, λ = 1/2), for m ≤ r ≤ 2^{d−2}m,

P( Σ_{|i_I|≤2^r} k_{2,x_{I^c}}(X_{i_I}^dec) ≥ (1/12)·2^{(d−l)m+rl+1} ) ≥ 2^{−l−3}.

Combining the above estimate with (16) we get (for x_{I^c} ∈ C_m^I and m ≤ r ≤ 2^{d−2}m)

P( Σ_{|i_I|≤2^r} h_r(x_{I^c}, X_{i_I}^dec, Y) ≥ (p/12) C_d(p) 2^{(d−l)(m−r)} a_r ) ≥ 2^{−l−4} p.
Let us define Ỹ := ((X_i^{(j)})_{j∈I, i∈ℕ}, Y) and h̃_n(x_{I^c}, Ỹ) := Σ_{|i_I|≤2^n} h_n(x_{I^c}, X_{i_I}^dec, Y). Then

Σ_n P( Σ_{|i_{I^c}|≤2^n} h̃_n(X_{i_{I^c}}^dec, Ỹ) ≥ a_n ) = Σ_n P( Σ_{|i|≤2^n} h_n(X_i^dec, Y) ≥ a_n ) < ∞.

Moreover (since C_d(p) ≥ 12 p^{−1} C_{d−l}(2^{−l−4}p/3)),

C_m^I ⊆ { x_{I^c} : ∀_{m≤r≤2^{d−2}m} P_Ỹ( h̃_r(x_{I^c}, Ỹ) ≥ C_{d−l}(2^{−l−4}p/3) 2^{(d−l)(m−r)} a_r ) ≥ 2^{−l−4}p/3 }.

Hence by the induction assumption,

Σ_m 2^{(d−l)m} P( X_{I^c} ∈ C_m^I ) < ∞,

so E k_1^I(X_{I^c}) < ∞ and thus

∀_{#I=l} Σ_n 2^{dn} P( X ∈ A_n^{l−1}, X_{I^c} ∈ B_n^I ) < ∞. (17)
We set

A_n^l := { x ∈ A_n^{l−1} : x_{I^c} ∉ B_n^I for all I ⊂ {1, …, d}, #I = l }.

The set A_n^l satisfies the condition (13) by the definition of B_n^I and the property (13) for A_n^{l−1}. The condition (14) follows by (17).
Notice that the set A_n^{d−1} satisfies the assumptions of Corollary 7 with N = 2^n; therefore, since C_d(p) ≥ 1,

2^{−d} min( 1, 2^{nd} P(X ∈ A_n^{d−1}) ) ≤ P( ∃_{|i|≤2^n} X_i^dec ∈ A_n^{d−1} ) ≤ P( ∃_{|i|≤2^n} X_i^dec ∈ Ã_n )
≤ P( ∃_{|i|≤2^n} P_Y( h_n(X_i^dec, Y) ≥ C_d(p) a_n ) ≥ p )
≤ P( P_Y( Σ_{|i|≤2^n} h_n(X_i^dec, Y) ≥ a_n ) ≥ p )
≤ p^{−1} P( Σ_{|i|≤2^n} h_n(X_i^dec, Y) ≥ a_n ).

Therefore Σ_n 2^{nd} P(X ∈ A_n^{d−1}) < ∞, so by (14) we get

Σ_n 2^{nd} P(X ∈ Ã_n) = Σ_n 2^{nd} ( Σ_{l=1}^{d−1} P( X ∈ A_n^{l−1} \ A_n^l ) + P( X ∈ A_n^{d−1} ) ) < ∞.
Corollary 8. If

Σ_n P( Σ_{|i|≤2^n} h²(X_i^dec) ≥ ε 2^{nd} log^α n ) < ∞

for some ε > 0 and α ∈ ℝ, then E |h|²/(LL|h|)^α < ∞.

Proof. We apply Theorem 3 with h_n = h² and a_n = ε 2^{nd} log^α n, in the degenerate case when Y is deterministic. It is easy to notice that h² ≥ C̃_d(p, ε) 2^{dn} log^α n implies that

∀_{n≤m≤2^{d−1}n} h² ≥ C_d(p) 2^{d(n−m)} a_m.
To prove the necessity part of Theorem 2 we will also need the following lemmas.

Lemma 14 ([2], Lemma 12). Let g : Σ^d → ℝ be a square integrable function. Then

Var( Σ_{|i|≤n} g(X_i^dec) ) ≤ (2^d − 1) n^{2d−1} E g(X)².

Lemma 15 ([2], Lemma 5). If E(|h|² ∧ u) = O((loglog u)^β), then

E |h| 1_{{|h|≥s}} = O( (loglog s)^β / s ).
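Lemma 14's variance bound can be checked by simulation; in the sketch below (a toy setup of my own, not from the paper) the non-canonical kernel g(x, y) = x + y makes the decoupled sum's variance grow like n^{2d−1}, close to the stated bound:

```python
import numpy as np

# Monte Carlo check (toy setup, my own kernel choice) of Lemma 14 for d = 2:
#   Var( sum_{|i|<=n} g(X_i^dec) ) <= (2^d - 1) n^{2d-1} E g(X)^2.
# For g(x, y) = x + y with standard normals the decoupled sum equals
# n*sum(x) + n*sum(y), whose exact variance 2 n^3 is close to the bound 3 n^3.
rng = np.random.default_rng(4)
n, d, trials = 10, 2, 50_000
x = rng.standard_normal((trials, n))
y = rng.standard_normal((trials, n))
S = n * x.sum(axis=1) + n * y.sum(axis=1)      # sum_{i,j} (x_i + y_j)
var_est = S.var()
bound = (2 ** d - 1) * n ** (2 * d - 1) * 2.0  # E g^2 = E (X + Y)^2 = 2
print(var_est, bound)
```

This shows the n^{2d−1} rate in Lemma 14 cannot be improved for general (non-canonical) kernels.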
Lemma 16. Let (a_i)_{i∈I_n^d} be a d-indexed array of vectors from a Hilbert space H. Consider the random variable

S = | Σ_{|i|≤n} ε_i^dec a_i |.

For any set K ⊆ I_d and a partition J = {J_1, …, J_m} ∈ P_{I_d\K} let us define

‖(a_i)‖*_{K,J,p} := sup{ Σ_{|i|≤n} ⟨a_i, α_{i_K}^{(0)}⟩ ∏_{k=1}^m α_{i_{J_k}}^{(k)} : Σ_{i_K} |α_{i_K}^{(0)}|² ≤ 1, Σ_{i_{J_k}} (α_{i_{J_k}}^{(k)})² ≤ p, ∀_{i_{max J_k}∈I_n} Σ_{i_{⋄J_k}} (α_{i_{J_k}}^{(k)})² ≤ 1, k = 1, …, m },

where ⋄J = J\{max J} (here Σ_{i_∅} a_i = a_i). Then, for all p ≥ 1,

‖S‖_p ≥ L_d^{−1} Σ_{K⊆I_d, J∈P_{I_d\K}} ‖(a_i)‖*_{K,J,p}.

In particular, for some constant c_d,

P( S ≥ c_d Σ_{K⊆I_d, J∈P_{I_d\K}} ‖(a_i)‖*_{K,J,p} ) ≥ c_d ∧ e^{−p}.
Remark For K = ∅, we define

‖(a_i)‖*_{∅,J,p} := sup{ | Σ_{|i|≤n} a_i ∏_{k=1}^m α_{i_{J_k}}^{(k)} | : α^{(k)} ∈ ℝ^{I_n^{J_k}}, Σ_{i_{J_k}} (α_{i_{J_k}}^{(k)})² ≤ p, ∀_{i_{max J_k}∈I_n} Σ_{i_{⋄J_k}} (α_{i_{J_k}}^{(k)})² ≤ 1, k = 1, …, m }.

It is also easy to see that for a d-indexed matrix, ‖(a_i)‖*_{I_d,∅,p} = (Σ_i |a_i|²)^{1/2} = ‖S‖_2 and thus does not depend on p. Since this will not be important in the applications, we keep a uniform notation with the subscript p.
Examples For d = 1, we have

‖(a_i)_{i≤n}‖*_{∅,{{1}},p} = sup{ |Σ_i a_i α_i| : Σ_i α_i² ≤ p, |α_i| ≤ 1, i = 1, …, n },
‖(a_i)_{i≤n}‖*_{{1},∅,p} = sup{ Σ_i ⟨a_i, α_i⟩ : Σ_i |α_i|² ≤ 1 } = ( Σ_i |a_i|² )^{1/2},

whereas for d = 2, we get

‖(a_{ij})_{i,j≤n}‖*_{∅,{{1},{2}},p} = sup{ |Σ_{i,j} a_{ij} α_i β_j| : Σ_i α_i² ≤ p, Σ_j β_j² ≤ p, ∀_{i∈I_n} |α_i| ≤ 1, ∀_{j∈I_n} |β_j| ≤ 1 },
‖(a_{ij})_{i,j≤n}‖*_{∅,{I_2},p} = sup{ |Σ_{i,j} a_{ij} α_{ij}| : Σ_{i,j} α_{ij}² ≤ p, ∀_{j∈I_n} Σ_i α_{ij}² ≤ 1 },
‖(a_{ij})_{i,j≤n}‖*_{{1},{{2}},p} = sup{ Σ_{i,j} ⟨a_{ij}, α_i⟩ β_j : Σ_i |α_i|² ≤ 1, Σ_j β_j² ≤ p, ∀_{j∈I_n} |β_j| ≤ 1 },
‖(a_{ij})_{i,j≤n}‖*_{I_2,∅,p} = sup{ Σ_{i,j} ⟨a_{ij}, α_{ij}⟩ : Σ_{i,j} α_{ij}² ≤ 1 } = ( Σ_{i,j} |a_{ij}|² )^{1/2}.
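The identity ‖(a_{ij})‖*_{I_2,∅,p} = (Σ_{i,j}|a_{ij}|²)^{1/2} = ‖S‖_2 can be verified by simulating the decoupled Rademacher chaos; the following sketch is an assumed illustration (random matrix and names are mine) for an 8 × 8 real array:

```python
import numpy as np

# Simulation check (assumed illustration) that for the decoupled Rademacher
# chaos S = |sum_{i,j} eps_i eps'_j a_ij| one has E S^2 = sum_{i,j} a_ij^2,
# i.e. ||S||_2 equals the Hilbert-Schmidt norm ||(a_ij)||_{I_2,emptyset,p}.
rng = np.random.default_rng(3)
n = 8
a = rng.standard_normal((n, n))
hs = np.sqrt((a ** 2).sum())                     # (sum_{i,j} a_ij^2)^{1/2}

trials = 200_000
eps = rng.choice([-1.0, 1.0], size=(trials, n))
eps2 = rng.choice([-1.0, 1.0], size=(trials, n))
S = np.einsum("ti,ij,tj->t", eps, a, eps2)       # decoupled chaos samples
print(np.sqrt((S ** 2).mean()), hs)
```

The identity follows from orthogonality of the products ε_i ε'_j, which the simulation reproduces up to Monte Carlo error.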
Proof of Lemma 16. We will combine the classical hypercontractivity property of Rademacher chaoses (see e.g. [5], p. 110–116) with Lemma 3 in [2], which says that for H = ℝ we have

‖S‖_p ≥ L_d^{−1} Σ_{J∈P_{I_d}} ‖(a_i)‖*_{∅,J,p}. (18)

Since ‖(a_i)‖*_{I_d,∅,p} = (Σ_i |a_i|²)^{1/2} = ‖S‖_2, the inequality ‖S‖_p ≥ L^{−1}‖(a_i)‖*_{I_d,∅,p} is just Jensen's inequality (p ≥ 2) or the aforesaid hypercontractivity of the Rademacher chaos (p ∈ (1, 2)). On the other hand, for K ≠ I_d, J ∈ P_{I_d\K} and any admissible test sequence α^{(0)}, we have

‖S‖_p = ( E_{I_d\K} E_K |S|^p )^{1/p}
≥ L^{−#K} ( E_{I_d\K} ( E_K |S|² )^{p/2} )^{1/p}
≥ L^{−#K} ( E_{I_d\K} | Σ_{i_{I_d\K}} ε_{i_{I_d\K}}^dec Σ_{i_K} ⟨α_{i_K}^{(0)}, a_i⟩ |^p )^{1/p}
≥ (L^{#K} L_{d−#K})^{−1} ‖( Σ_{i_K} ⟨α_{i_K}^{(0)}, a_i⟩ )_{i_{I_d\K}}‖*_{∅,J,p},

where the first inequality follows from hypercontractivity applied conditionally on (ε_i^{(k)})_{k∉K, i∈I_n}, the second is Jensen's inequality combined with the projection (E_K|S|²)^{1/2} ≥ |Σ_{i_K}⟨α_{i_K}^{(0)}, ·⟩| for Σ|α^{(0)}|² ≤ 1, and the third is (18) applied to a chaos of order d−#K. Taking the supremum over α^{(0)} gives ‖S‖_p ≥ (L^{#K}L_{d−#K})^{−1}‖(a_i)‖*_{K,J,p}, and summing over all K and J (at the price of enlarging L_d) proves the moment bound.

The tail estimate follows from the moment estimates by the Paley–Zygmund inequality and the inequality ‖(a_i)‖*_{K,J,tp} ≤ t^{deg J} ‖(a_i)‖*_{K,J,p} for t ≥ 1, just like in [12, 18].
Proof of necessity. First we will prove the integrability condition (11). Let us notice that by classical hypercontractive estimates for Rademacher chaoses and the Paley–Zygmund inequality (or by Lemma 16), we have

P_ε( | Σ_{|i|≤2^n} ε_i^dec h(X_i^dec) | ≥ c_d ( Σ_{|i|≤2^n} |h(X_i^dec)|² )^{1/2} ) ≥ c_d

for some constant c_d > 0. By the Fubini theorem this gives

P( | Σ_{|i|≤2^n} ε_i^dec h(X_i^dec) | ≥ D 2^{nd/2} log^{d/2} n ) ≥ c_d P( Σ_{|i|≤2^n} |h(X_i^dec)|² ≥ D² c_d^{−2} 2^{nd} log^d n ),

which together with Lemma 8 yields

Σ_n P( Σ_{|i|≤2^n} |h(X_i^dec)|² ≥ D² c_d^{−2} 2^{nd} log^d n ) < ∞.

The integrability condition (11) follows now from Corollary 8.
Before we proceed to the proof of (12), let us notice that (11) and Lemma 7 imply that

E(|h|² ∧ u) ≤ K (loglog u)^d (19)

for u large enough. The proof of (12) can now be obtained by adapting the argument for the real-valued case.
Since (log 2^n)/n = log 2, the dyadic normalization 2^{nd/2} log^{d/2} n in (5) matches that of the LIL; moreover, (5) implies that there exists N_0 such that for all N > N_0 there exists N ≤ n ≤ 2N satisfying

P( | Σ_{|i|≤2^n} ε_i^dec h(X_i^dec) | > L_d C 2^{nd/2} log^{d/2} n ) ≤ (L_d N)^{−1}. (20)
Let us thus fix N > N_0 and consider n as above. Let K ⊆ I_d and J = {J_1, …, J_k} ∈ P_{I_d\K}. Let us also fix functions g : Σ^K → H and f_j : Σ^{J_j} → ℝ, j = 1, …, k, such that

‖g(X_K)‖_2 ≤ 1, ‖g(X_K)‖_∞ ≤ 2^{n/(2k+3)}, ‖f_j(X_{J_j})‖_2 ≤ 1, ‖f_j(X_{J_j})‖_∞ ≤ 2^{n/(2k+3)}.
The Chebyshev inequality gives

P( Σ_{|i_{J_j}|≤2^n} f_j(X_{i_{J_j}}^dec)² log n ≤ 10·2^d·2^{#J_j n} log n ) ≥ 1 − 1/(10·2^d). (21)

Similarly, if K ≠ ∅,

P( Σ_{|i_K|≤2^n} |g(X_{i_K}^dec)|² ≤ 10·2^d·2^{#K n} ) ≥ 1 − 1/(10·2^d), (22)

and for K = ∅, |g| ≤ 1 (recall that for K = ∅, the function g is constant). Moreover, for j = 1, …, k and sufficiently large N,

Σ_{|i_{⋄J_j}|≤2^n} ( ‖f_j‖_∞² / 2^{n#J_j} ) log n ≤ 2^{n#⋄J_j} 2^{2n/(2k+3)} log n / 2^{n#J_j} = 2^{2n/(2k+3)−n} log n ≤ 1,

so the test sequences defined below satisfy the constraints of Lemma 16 involving ⋄J_j.
Without loss of generality we may assume that the sequences (X_i^{(j)})_{i,j} and (ε_i^{(j)})_{i,j} are defined as coordinates of a product probability space. If for each j = 1, …, k we denote the set from (21) by A_j, and the set from (22) by A_0, we have P(⋂_{j=0}^k A_j) ≥ 0.9. Recall now Lemma 16. On ⋂_{j=0}^k A_j we can estimate the ‖·‖*_{K,J,log n} norms of the matrix (h(X_i^dec))_{|i|≤2^n} by using the test sequences

α_{i_{J_j}}^{(j)} = (log n)^{1/2} f_j(X_{i_{J_j}}^dec) / ( 10^{1/2} 2^{d/2} 2^{n#J_j/2} ) for j = 1, …, k, and α_{i_K}^{(0)} = g(X_{i_K}^dec) / ( 10^{1/2} 2^{d/2} 2^{n#K/2} ).
Therefore with probability at least 0.9 we have

‖(h(X_i^dec))_{|i|≤2^n}‖*_{K,J,log n}
≥ ( (log n)^{k/2} / ( 2^{d(k+1)/2} 10^{(k+1)/2} 2^{(#K+Σ_j #J_j)n/2} ) ) | Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h(X_i^dec)⟩ ∏_j f_j(X_{i_{J_j}}^dec) |
= ( (log n)^{k/2} / ( 2^{d(k+1)/2} 10^{(k+1)/2} 2^{dn/2} ) ) | Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h(X_i^dec)⟩ ∏_j f_j(X_{i_{J_j}}^dec) |. (23)
Our aim is now to further bound from below the right-hand side of the above inequality, so as to have, via Lemma 16, control from below on the conditional tail probability of Σ_{|i|≤2^n} ε_i^dec h(X_i^dec), given the sample (X_i^{(j)}). From now on let us assume that

| E⟨g(X_K), h(X)⟩ ∏_{j=1}^k f_j(X_{J_j}) | > 1. (24)
The Markov inequality, (19) and Lemma 15 give

P( | Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h(X_i^dec)⟩ 1_{{|h(X_i^dec)|>2^n}} ∏_j f_j(X_{i_{J_j}}^dec) | ≥ (1/4)·2^{nd} |E⟨g, h⟩∏_{j=1}^k f_j| )
≤ 4·2^{nd} ( ‖g‖_∞ ∏_{j=1}^k ‖f_j‖_∞ ) E|h|1_{{|h|>2^n}} / ( 2^{nd} |E⟨g, h⟩∏_{j=1}^k f_j| )
≤ 4·2^{n(k+1)/(2k+3)} E|h|1_{{|h|>2^n}} ≤ 4K (log n)^d / 2^{n(k+2)/(2k+3)}. (25)
Let now h_n = h1_{{|h|≤2^n}}. By the Chebyshev inequality, Lemma 14 and (19),

P( | Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h_n(X_i^dec)⟩ ∏_j f_j(X_{i_{J_j}}^dec) − 2^{nd} E⟨g, h_n⟩∏_j f_j | ≥ (1/4)·2^{nd} |E⟨g, h_n⟩∏_j f_j| )
≤ 2^5 Var( Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h_n(X_i^dec)⟩ ∏_j f_j(X_{i_{J_j}}^dec) ) / ( 2^{2nd} |E⟨g, h_n⟩∏_j f_j|² )
≤ 2^5 (2^d − 1) 2^{n(2d−1)} E|⟨g, h_n⟩∏_j f_j|² / ( 2^{2nd} |E⟨g, h_n⟩∏_j f_j|² )
≤ 2^5 (2^d − 1) 2^{2n(k+1)/(2k+3)} E|h_n|² / ( 2^n |E⟨g, h_n⟩∏_j f_j|² )
≤ 2^5 K (2^d − 1) log^d n / ( 2^{n/(2k+3)} |E⟨g, h_n⟩∏_j f_j|² ). (26)
Let us also notice that for large n, by (19), Lemma 15 and (24),

|E⟨g, h_n⟩∏_j f_j| ≥ |E⟨g, h⟩∏_j f_j| − |E⟨g, h⟩1_{{|h|>2^n}}∏_j f_j|
≥ |E⟨g, h⟩∏_j f_j| − 2^{n(k+1)/(2k+3)} K (log n)^d / 2^n
≥ (2/3) |E⟨g, h⟩∏_j f_j|. (27)
Inequalities (25), (26) and (27) imply that for large n, with probability at least 0.9, we have

| Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h(X_i^dec)⟩ ∏_j f_j(X_{i_{J_j}}^dec) |
≥ | Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h_n(X_i^dec)⟩ ∏_j f_j | − | Σ_{|i|≤2^n} ⟨g(X_{i_K}^dec), h(X_i^dec)⟩ 1_{{|h(X_i^dec)|>2^n}} ∏_j f_j |
≥ 2^{nd} ( (3/4)|E⟨g, h_n⟩∏_j f_j| − (1/4)|E⟨g, h⟩∏_j f_j| )
≥ 2^{nd} ( (1/2)|E⟨g, h⟩∏_j f_j| − (1/4)|E⟨g, h⟩∏_j f_j| )
= (1/4)·2^{nd} |E⟨g, h⟩∏_j f_j|.
Together with (23) this yields that for large n, with probability at least 0.8,

‖(h(X_i^dec))_{|i|≤2^n}‖*_{K,J,log n} ≥ ( 2^{nd/2} log^{k/2} n / ( 4·2^{d(k+1)/2} 10^{(k+1)/2} ) ) |E⟨g, h⟩∏_{j=1}^k f_j|.

Thus, by Lemma 16, for large n,

P( | Σ_{|i|≤2^n} ε_i^dec h(X_i^dec) | ≥ c_d ( 2^{nd/2} log^{k/2} n / ( 4·2^{d(k+1)/2} 10^{(k+1)/2} ) ) |E⟨g, h⟩∏_{j=1}^k f_j| ) ≥ (4/5)( c_d ∧ e^{−log n} ),
which together with (20) gives

|E⟨g, h⟩∏_{j=1}^k f_j| ≤ L_d C ( 4·2^{d(k+1)/2} 10^{(k+1)/2} / c_d ) log^{(d−k)/2} n.

In particular, for sufficiently large N and arbitrary functions g : Σ^K → H, f_j : Σ^{J_j} → ℝ, j = 1, …, k, such that

‖g(X_K)‖_2, ‖f_j(X_{J_j})‖_2 ≤ 1, ‖g(X_K)‖_∞, ‖f_j(X_{J_j})‖_∞ ≤ 2^{N/(2k+3)},

we have

|E⟨g, h⟩∏_{j=1}^k f_j| ≤ L_d C ( 4·2^{d(k+1)/2} 10^{(k+1)/2} / c_d ) log^{(d−k)/2} n ≤ L̃_d C log^{(d−k)/2} N,

which clearly implies (12).
7 Sufficiency
Lemma 17. Let H = H(X_1, …, X_d) be a nonnegative random variable such that EH² < ∞. Then for I ⊆ I_d, I ≠ ∅, I_d,

Σ_n Σ_l 2^{l+#I^c n} P_{I^c}( E_I H² ≥ 2^{2l+#I^c n} ) < ∞.

Proof. We have

Σ_n Σ_l 2^{l+#I^c n} P_{I^c}( E_I H² ≥ 2^{2l+#I^c n} ) = Σ_l 2^l E_{I^c} Σ_n 2^{#I^c n} 1_{{E_I H² ≥ 2^{2l+#I^c n}}} ≤ Σ_l 2^{1−l} E_{I^c} E_I H² ≤ 4 EH² < ∞.
Lemma 18. Let X = (X_1, …, X_d) and X̃(I) = ((X_i)_{i∈I}, (X_i′)_{i∈I^c}), where (X_i′) is an independent copy of (X_i). Denote H = |h|/(LL|h|)^{d/2}. If EH² < ∞ and h_n = h1_{A_n}, where

A_n = { x : |h(x)|² ≤ 2^{nd} log^d n and ∀_{I≠∅,I_d} E_I H² ≤ 2^{#I^c n} },

then for I ⊆ I_d, I ≠ ∅, we have

Σ_n 2^{−n#I} (log n)^{−2d} E[ |h_n(X)|² |h_n(X̃(I))|² ] < ∞.
Proof. a) I = I_d. We have

Σ_n E|h_n|⁴ / ( 2^{nd} log^{2d} n ) ≤ Σ_n E[ |h|⁴ 1_{{|h|²≤2^{nd} log^d n}} ] / ( 2^{nd} log^{2d} n ) ≤ L_d E[ |h|⁴ / ( |h|²(LL|h|)^d ) ] = L_d EH² < ∞.
b) I ≠ I_d, ∅. Let us denote by E_I, E_{I^c}, Ẽ_{I^c}, respectively, the expectation with respect to (X_i)_{i∈I}, (X_i)_{i∈I^c} and (X_i′)_{i∈I^c}. Let also h̃, h̃_n stand for h(X̃(I)), h_n(X̃(I)) respectively. Then

Σ_n E( |h_n|² |h̃_n|² ) / ( 2^{n#I} log^{2d} n ) ≤ 2 Σ_n E( |h_n|² |h̃_n|² 1_{{|h|≤|h̃|}} ) / ( 2^{n#I} log^{2d} n )
≤ 2 Σ_n E[ |h|² |h̃|² 1_{{|h|≤|h̃|}} 1_{{E_{I^c}|h|²1_{{|h|²≤|h̃|²}} ≤ L_d 2^{#I n} log^d n, |h̃|²≤2^{2nd}}} ] / ( 2^{n#I} log^{2d} n )
≤ L̃_d E[ |h|² |h̃|² 1_{{|h|≤|h̃|}} / ( (E_{I^c}|h|²1_{{|h|²≤|h̃|²}}) (LL|h̃|)^d ) ]
= L̃_d E_I Ẽ_{I^c}[ |h̃|² E_{I^c}[ |h|² 1_{{|h|≤|h̃|}} ] / ( (E_{I^c}|h|²1_{{|h|²≤|h̃|²}}) (LL|h̃|)^d ) ]
≤ L̃_d E[ |h̃|² / (LL|h̃|)^d ] = L̃_d EH² < ∞,

where to obtain the second inequality we used the fact that

E_{I^c}|h|² 1_{{|h|²≤2^{2nd}, E_{I^c}H²≤2^{#I n}}} ≤ E_{I^c}[ H² (loglog 2^{nd})^d 1_{{E_{I^c}H²≤2^{#I n}}} ] ≤ L_d E_{I^c}H² 1_{{E_{I^c}H²≤2^{#I n}}} log^d n ≤ L_d 2^{#I n} log^d n.
Lemma 19. Consider a square integrable, nonnegative random variable Y. Let Y_n = Y1_{B_n}, with B_n = ⋃_{k∈K(n)} C_k, where C_0, C_1, C_2, … are pairwise disjoint subsets of Ω and

K(n) = { k ≤ n : E(Y²1_{C_k}) ≤ 2^{k−n} }.

Then

Σ_n (EY_n²)² < ∞.

Proof. Let us first notice that by the Schwarz inequality, we have

EY_n² = Σ_{k∈K(n)} E(Y²1_{C_k}) = Σ_{k∈K(n)} 2^{(n−k)/2} 2^{(k−n)/2} E(Y²1_{C_k})
≤ ( Σ_{k∈K(n)} 2^{k−n} )^{1/2} ( Σ_{k∈K(n)} 2^{n−k} (E(Y²1_{C_k}))² )^{1/2}
≤ 2^{1/2} ( Σ_{k∈K(n)} 2^{n−k} (E(Y²1_{C_k}))² )^{1/2}.

Hence

Σ_n (EY_n²)² ≤ 2 Σ_{k: E(Y²1_{C_k})>0} (E(Y²1_{C_k}))² Σ_{n: k∈K(n)} 2^{n−k}
≤ 2 Σ_{k: E(Y²1_{C_k})>0} (E(Y²1_{C_k}))² · 2 max_{n: k∈K(n)} 2^{n−k}
≤ 4 Σ_k (E(Y²1_{C_k}))² / E(Y²1_{C_k}) = 4 Σ_k E(Y²1_{C_k}) ≤ 4 EY² < ∞.
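The bound Σ_n (EY_n²)² ≤ 4EY² appearing in the proof can be checked exactly in a finite discrete model, where the numbers m_k = E(Y²1_{C_k}) determine everything; the sketch below uses my own toy choice of m_k:

```python
import numpy as np

# Exact check of Lemma 19 in a finite discrete model: the numbers
# m_k = E(Y^2 1_{C_k}) determine K(n) = {k <= n : m_k <= 2^{k-n}} and
# E Y_n^2 = sum over K(n) of m_k; the proof bounds sum_n (E Y_n^2)^2 by 4 E Y^2.
m = np.array([0.9, 0.03, 0.3, 0.001, 0.25])  # my own toy choice of m_k
EY2 = m.sum()

total = 0.0
for n in range(200):                          # the tail vanishes: K(n) empties out
    EYn2 = sum(mk for k, mk in enumerate(m) if k <= n and mk <= 2.0 ** (k - n))
    total += EYn2 ** 2
print(total, 4 * EY2)
```

Because each m_k can belong to K(n) only for n in a finite window, the series is effectively a finite sum here.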
Proof of sufficiency. The proof consists of several truncation arguments. Its first part follows the proofs presented in [11] and [2] for the real-valued case; then some modifications are required, reflecting the weaker integrability condition in the Hilbert space case. At each step we will show that

Σ_n P( | Σ_{|i|≤2^n} π_d h_n(X_i^dec) | ≥ C 2^{nd/2} log^{d/2} n ) < ∞, (28)

with h_n = h1_{A_n} for some sequence of sets A_n. In the whole proof we keep the notation H = |h|/(LL|h|)^{d/2}. Let us also fix η_d ∈ (0, 1) such that the following implication holds:

∀_{n=1,2,…} |h|² ≤ η_d² 2^{nd} log^d n ⟹ H² ≤ 2^{nd}. (29)
Step 1 Inequality (28) holds for any C > 0 if

A_n = { x : |h(x)|² ≥ η_d² 2^{nd} log^d n }.

We have, by the Chebyshev inequality and the inequality E|π_d h_n| ≤ 2^d E|h_n| (which follows directly from the definition of π_d or may be considered a trivial case of Lemma 6),

Σ_n P( | Σ_{|i|≤2^n} π_d h_n(X_i^dec) | ≥ C 2^{nd/2} log^{d/2} n )
≤ Σ_n E| Σ_{|i|≤2^n} π_d h_n(X_i^dec) | / ( C 2^{nd/2} log^{d/2} n )
≤ Σ_n 2^d 2^{nd} E|h|1_{{|h|>η_d 2^{nd/2} log^{d/2} n}} / ( C 2^{nd/2} log^{d/2} n )
= 2^d C^{−1} E[ |h| Σ_n ( 2^{nd/2} / log^{d/2} n ) 1_{{|h|>η_d 2^{nd/2} log^{d/2} n}} ]
≤ L_d C^{−1} E |h|² / (LL|h|)^d < ∞.
Step 2 Inequality (28) holds for any C > 0 if

A_n = { x : |h(x)|² ≤ η_d² 2^{nd} log^d n, ∃_{I≠∅,I_d} E_I H² ≥ 2^{#I^c n} }.

As in the previous step, it is enough to prove that

Σ_n E| Σ_{|i|≤2^n} ε_i^dec h_n(X_i^dec) | / ( 2^{nd/2} log^{d/2} n ) < ∞.

The set A_n can be written as

A_n = ⋃_{I⊆I_d, I≠I_d,∅} A_n(I),

where the sets A_n(I) are pairwise disjoint and

A_n(I) ⊆ { x : |h(x)|² ≤ 2^{2nd}, E_I H² ≥ 2^{#I^c n} }.

Therefore it suffices to prove that

Σ_n E| Σ_{|i|≤2^n} ε_i^dec h(X_i^dec) 1_{A_n(I)}(X_i^dec) | / ( 2^{nd/2} log^{d/2} n ) < ∞. (30)

Let for l ∈ ℕ,

A_{n,l}(I) := { x : |h(x)|² ≤ 2^{2nd}, 2^{2l+2+#I^c n} > E_I H² ≥ 2^{2l+#I^c n} } ∩ A_n(I).

Then h_n 1_{A_n(I)} = Σ_{l=0}^∞ h_{n,l}, where h_{n,l} := h_n 1_{A_{n,l}(I)} (notice that the sum is actually finite at each point x ∈ Σ^d, since for large l, x ∉ A_{n,l}(I)).
We have

E| Σ_{|i|≤2^n} ε_i^dec h_{n,l}(X_i^dec) | ≤ Σ_{|i_{I^c}|≤2^n} E_{I^c} E_I | Σ_{|i_I|≤2^n} ε_{i_I}^dec h_{n,l}(X_i^dec) |
≤ Σ_{|i_{I^c}|≤2^n} E_{I^c} ( E_I | Σ_{|i_I|≤2^n} ε_{i_I}^dec h_{n,l}(X_i^dec) |² )^{1/2}
≤ 2^{(#I^c+#I/2)n} E_{I^c} ( E_I |h_{n,l}|² )^{1/2}
≤ L_d [ 2^{(#I^c+d/2)n+l+1} log^{d/2} n ] P_{I^c}( E_I H² ≥ 2^{2l+#I^c n} ),

where in the last inequality we used the estimate

E_I h²_{n,l} ≤ L_d E_I[ (log n)^d H² 1_{{2^{2l+2+#I^c n} > E_I H² ≥ 2^{2l+#I^c n}}} ] ≤ L_d 2^{2l+2+#I^c n} (log n)^d 1_{{E_I H² ≥ 2^{2l+#I^c n}}}.

Therefore, to get (30) it is enough to show that

Σ_n Σ_l 2^{l+#I^c n} P_{I^c}( E_I H² ≥ 2^{2l+#I^c n} ) < ∞.

But this is just the statement of Lemma 17.
Step 3 Inequality (28) holds for any C > 0 if

A_n = { x : |h(x)|² ≤ η_d² 2^{nd} log^d n, ∀_{I≠∅,I_d} E_I H² ≤ 2^{#I^c n} } ∩ ⋃_{I⊊I_d} B_n^I,

with B_n^I = ⋃_{k∈K(I,n)} C_k^I, where C_0^I = { x : E_I H² ≤ 1 }, C_k^I = { x : 2^{#I^c(k−1)} < E_I H² ≤ 2^{#I^c k} } for k ≥ 1, and K(I, n) = { k ≤ n : E(H²1_{C_k^I}) ≤ 2^{k−n} }.

By Lemma 6 and the Chebyshev inequality, it is enough to show that

Σ_n E| Σ_{|i|≤2^n} ε_i^dec h_n(X_i^dec) |⁴ / ( 2^{2nd} log^{2d} n ) < ∞.
The Khintchine inequality for Rademacher chaoses gives

E| Σ_{|i|≤2^n} ε_i^dec h_n(X_i^dec) |⁴ ≤ L_d E( Σ_{|i|≤2^n} |h_n(X_i^dec)|² )²
= L_d Σ_{I⊆I_d} Σ_{|i|≤2^n} Σ_{|j|≤2^n: {k: i_k=j_k}=I} E[ |h_n(X_i^dec)|² |h_n(X_j^dec)|² ]
≤ L_d Σ_{I⊆I_d} 2^{nd} 2^{n(d−#I)} E[ |h_n(X)|² |h_n(X̃(I))|² ],

where X = (X_1, …, X_d) and X̃(I) = ((X_i)_{i∈I}, (X_i′)_{i∈I^c}). To prove the statement of this step it thus suffices to show that for all I ⊆ I_d,

S(I) := Σ_n 2^{−n#I} (log n)^{−2d} E[ |h_n(X)|² |h_n(X̃(I))|² ] < ∞. (31)
The case of nonempty I follows from Lemma 18. It thus remains to consider the case I = ∅. Set H_I² = E_I H². We have

S(∅) = Σ_n (E|h_n|²)² / log^{2d} n ≤ L_d Σ_n ( E(H²1_{A_n}) )² ≤ L̃_d Σ_{I⊊I_d} Σ_n ( E(H²1_{B_n^I}) )² = L̃_d Σ_{I⊊I_d} Σ_n ( E(H_I²1_{B_n^I}) )² < ∞

by Lemma 19, applied for Y² = E_I H², since EH_I² = EH² < ∞.
Step 4 Inequality (28) holds for some C ≤ L_d D if

A_n = { x : |h(x)|² ≤ η_d² 2^{nd} log^d n, ∀_{I≠∅,I_d} E_I H² ≤ 2^{#I^c n} } ∩ ⋂_{I⊊I_d} (B_n^I)^c,

where B_n^I is defined as in the previous step. Let us first estimate ‖(E_I|h_n|²)^{1/2}‖_∞ for I ⊊ I_d. We have

E_I|h_n|² ≤ E_I[ |h|² 1_{{|h|²≤η_d²2^{nd}log^d n}} Σ_{k≤n, k∉K(I,n)} 1_{C_k^I} ] ≤ L_d log^d n Σ_{k≤n, k∉K(I,n)} 2^{#I^c k} 1_{C_k^I}.

The fact that we can restrict the summation to k ≤ n follows directly from the definition of A_n for I ≠ ∅, and for I = ∅ from (29). The sets C_k^I are pairwise disjoint and thus

‖E_I|h_n|²‖_∞ ≤ (L_d log^d n) max_{k≤n, k∉K(I,n)} 2^{#I^c k} = L_d 2^{#I^c k_I(n)} log^d n, (32)

where

k_I(n) = max{ k ≤ n : k ∉ K(I, n) }.
Therefore, for C > 0,

Σ_n exp[ −( C 2^{nd/2} log^{d/2} n / ( 2^{#In/2} ‖(E_I|h_n|²)^{1/2}‖_∞ ) )^{2/(d+#I^c)} ]
≤ Σ_n Σ_{k≤n, k∉K(I,n)} exp[ −( C 2^{nd/2} log^{d/2} n / ( L_d 2^{#In/2} 2^{#I^c k/2} log^{d/2} n ) )^{2/(d+#I^c)} ]
= Σ_k Σ_{n≥k, k∉K(I,n)} exp[ −( L_d^{−1} C 2^{#I^c(n−k)/2} )^{2/(d+#I^c)} ].

Notice that for each k the inner series is bounded by a geometric series with ratio smaller than some q_{d,C} < 1 (q_{d,C} depending only on d and C). Therefore the right-hand side of the above inequality is bounded by

L_{d,C} Σ_k sup_{n≥k, k∉K(I,n)} exp[ −( L_d^{−1} C 2^{#I^c(n−k)/2} )^{2/(d+#I^c)} ],

with the convention sup ∅ = 0. But k ∉ K(I, n) implies that 2^{#I^c(n−k)/2} ≥ ( E(H²1_{C_k^I}) )^{−#I^c/2}. Therefore the above quantity is further bounded by

L_{d,C} Σ_k exp[ −( L_d^{−1} C )^{2/(d+#I^c)} ( E(H²1_{C_k^I}) )^{−#I^c/(d+#I^c)} ] ≤ L̄_d C^{−2/#I^c} Σ_k E(H²1_{C_k^I}) = L̄_d C^{−2/#I^c} EH² < ∞,

where we used the inequality e^x ≥ c_d x^α for all x ≥ 0 and 0 ≤ α ≤ 2d. We have thus proven that for all I ⊊ I_d and C > 0,

Σ_{n: A_n≠∅} exp[ −( C 2^{nd/2} log^{d/2} n / ( 2^{#In/2} ‖(E_I|h_n|²)^{1/2}‖_∞ ) )^{2/(d+#I^c)} ] < ∞. (33)
Now we will turn to the estimation of ‖h_n‖_{J_0,J}. Let us consider J_0 ⊆ I_d, J = {J_1, …, J_l} ∈ P_{I_d\J_0} and denote, as before, X = (X_1, …, X_d) and X_I = (X_i)_{i∈I}. Recall that

‖h_n‖_{J_0,J} = sup{ E⟨h_n(X), f_0(X_{J_0})⟩ ∏_{i=1}^l f_i(X_{J_i}) : E|f_0(X_{J_0})|² ≤ 1, E f_i²(X_{J_i}) ≤ 1, i ≥ 1 }.

In what follows, to simplify the already quite complicated notation, let us suppress the arguments of all the functions and write just h instead of h(X) and f_i instead of f_i(X_{J_i}). Let us also remark that although f_0 plays a special role in the definition of ‖·‖_{J_0,J}, in what follows the same arguments will apply to all the f_i's, with the obvious use of the Schwarz inequality for the scalar product in H. We will therefore not distinguish the case i = 0; f_i² will denote either the usual power or ⟨f_0, f_0⟩, whereas ‖f_i‖_2 for i = 0 will be the norm in the space of square integrable H-valued functions of X_{J_0}, which may happen to be equal just to H if J_0 = ∅.
Since E|f_i|² ≤ 1, i = 0, …, l, for each j = 0, …, l and J ⊊ J_j, by the Schwarz inequality applied conditionally on X_{J_j\J},

E| ⟨h_n, f_0⟩ ∏_i f_i 1_{{E_J f_j² ≥ a²}} | ≤ E_{J_j\J}[ ( E_{(J_j\J)^c} ∏_{i≠j} f_i² )^{1/2} (E_J f_j²)^{1/2} 1_{{E_J f_j² ≥ a²}} ( E_{(J_j\J)^c} |h_n|² )^{1/2} ]
≤ E_{J_j\J}[ (E_J f_j²)^{1/2} 1_{{E_J f_j² ≥ a²}} ( E_{(J_j\J)^c} |h_n|² )^{1/2} ]
≤ L_d [ 2^{k_{(J_j\J)^c}(n)#(J_j\J)/2} log^{d/2} n ] E_{J_j\J}[ (E_J f_j²)^{1/2} 1_{{E_J f_j² ≥ a²}} ]
≤ L_d [ 2^{k_{(J_j\J)^c}(n)#(J_j\J)/2} log^{d/2} n ] a^{−1},

where the third inequality follows from (32) and the last one from the elementary fact E|X|1_{{|X|≥a}} ≤ a^{−1}E|X|². Taking a = 2^{n#(J_j\J)/2}, this way we obtain

‖h_n‖_{J_0,J} ≤ sup{ E[⟨h_n, f_0⟩ ∏_i f_i] : ‖f_i‖_2 ≤ 1, ∀_{J⊊J_i} ‖(E_J f_i²)^{1/2}‖_∞ ≤ 2^{n#(J_i\J)/2} } + L_d Σ_i Σ_{J⊊J_i} 2^{(k_{(J_i\J)^c}(n)−n)#(J_i\J)/2} log^{d/2} n
≤ sup{ E[⟨h_n, f_0⟩ ∏_i f_i] : ‖f_i‖_2 ≤ 1, ∀_{J⊊J_i} ‖(E_J f_i²)^{1/2}‖_∞ ≤ 2^{n#(J_i\J)/2} } + L_d Σ_{I⊊I_d} 2^{(k_I(n)−n)#I^c/2} log^{d/2} n. (34)
Let us thus consider arbitrary f_i, i = 0, …, l, such that ‖f_i‖_2 ≤ 1 and ‖(E_J f_i²)^{1/2}‖_∞ ≤ 2^{n#(J_i\J)/2} for all J ⊊ J_i (note that the latter condition means in particular that ‖f_i‖_∞ ≤ 2^{n#J_i/2}). We have by assumption (12), for sufficiently large n,

|E[⟨h, f_0⟩ ∏_i f_i]| ≤ ‖h‖_{J_0,J,2^{nd/2}} ≤ L_d D log^{(d−deg J)/2} n.

We have also

E| ⟨h, f_0⟩ 1_{{|h|²≥η_d 2^{nd} log^d n}} ∏_i f_i | ≤ E[ |h| 1_{{|h|²≥η_d 2^{nd} log^d n}} ] ∏_i ‖f_i‖_∞ ≤ 2^{nd/2} E[ |h| 1_{{|h|²≥η_d 2^{nd} log^d n}} ] =: α_n.
Also for I ⊆ I_d, I ≠ ∅, I_d, denoting h̃_n = h1_{{|h|²≤η_d 2^{nd} log^d n}}, we get

E| ⟨h̃_n, f_0⟩ ∏_i f_i 1_{{E_I H² ≥ 2^{n#I^c}}} | ≤ E_{I^c}[ (E_I|h̃_n|²)^{1/2} 1_{{E_I H² ≥ 2^{n#I^c}}} ∏_i (E_{J_i∩I}|f_i|²)^{1/2} ]
≤ [ ∏_i 2^{n#(J_i∩I^c)/2} ] E_{I^c}[ (E_I|h̃_n|²)^{1/2} 1_{{E_I H² ≥ 2^{n#I^c}}} ]
≤ L_d 2^{n#I^c/2} E_{I^c}[ ( E_I H² log^d n )^{1/2} 1_{{E_I H² ≥ 2^{n#I^c}}} ]
≤ L_d [ 2^{n#I^c/2} log^{d/2} n ] E_{I^c}[ (E_I H²)^{1/2} 1_{{E_I H² ≥ 2^{n#I^c}}} ] =: β_n^I.

Let us denote h̄_n = h̃_n ∏_{∅≠I⊊I_d} 1_{{E_I H² ≤ 2^{#I^c n}}} and γ_n^I = E|h̄_n 1_{B_n^I}|².
Combining the three last inequalities we obtain

|E⟨h_n, f_0⟩ ∏_i f_i| ≤ |E⟨h, f_0⟩ ∏_i f_i| + |E⟨h 1_{A_n^c}, f_0⟩ ∏_i f_i|
≤ L_d D log^{(d−deg J)/2} n + E|⟨h 1_{{|h|²≥η_d 2^{nd} log^d n}}, f_0⟩ ∏_i f_i| + Σ_{∅≠I⊊I_d} E|⟨h̃_n 1_{{E_I H² ≥ 2^{n#I^c}}}, f_0⟩ ∏_i f_i| + Σ_{I⊊I_d} E|⟨h̄_n 1_{B_n^I}, f_0⟩ ∏_i f_i|
≤ L_d D log^{(d−deg J)/2} n + α_n + Σ_{∅≠I⊊I_d} β_n^I + Σ_{I⊊I_d} γ_n^I.

Now, combining the above estimate with (34), we obtain

‖h_n‖_{J_0,J} ≤ L_d Σ_{I⊊I_d} 2^{(k_I(n)−n)#I^c/2} log^{d/2} n + L_d D log^{(d−deg J)/2} n + α_n + Σ_{∅≠I⊊I_d} β_n^I + Σ_{I⊊I_d} γ_n^I. (35)
Let us notice that

Σ_n α_n / log^{d/2} n < ∞, ∀_{I≠∅,I_d} Σ_n β_n^I / ( 2^{n#I^c/2} log^{d/2} n ) < ∞, ∀_{I⊊I_d} Σ_n (γ_n^I)² / log^{2d} n < ∞. (36)

The first inequality was proved in Step 1. The proof of the second one is straightforward. Indeed, we have

Σ_n β_n^I / ( 2^{n#I^c/2} log^{d/2} n ) = L_d Σ_n E_{I^c}[ (E_I H²)^{1/2} 1_{{E_I H² ≥ 2^{n#I^c}}} ] ≤ L̃_d E_{I^c} E_I H² = L̃_d EH² < ∞.

The third inequality is implicitly proved in Step 3. Let us however present an explicit argument:

Σ_n (γ_n^I)² / log^{2d} n ≤ Σ_n ( E[ |h|² 1_{{|h|≤η_d 2^{nd/2} log^{d/2} n}} 1_{B_n^I} ] / log^d n )² ≤ L_d Σ_n ( E_{I^c} E_I( H² 1_{B_n^I} ) )² < ∞

by Lemma 19, applied to the random variable √(E_I H²).
We are now in position to finish the proof. Let us notice that we have either E(|h|²1_{{|h|²≤2^{2nd}}}) ≤ 1, or we can use the function

g = h 1_{{|h|²≤2^{2nd}}} / ( E(|h|²1_{{|h|²≤2^{2nd}}}) )^{1/2}

as a test function in the definition of ‖h‖_{I_d,∅,2^{nd}}, obtaining

( E(|h|²1_{{|h|²≤2^{2nd}}}) )^{1/2} = E⟨h, g⟩ ≤ ‖h‖_{I_d,∅,2^{nd}} < D log^{d/2} n

for large n. Combining this estimate with Corollary 3, we can now write

P( | Σ_{|i|≤2^n} π_d h_n(X_i^dec) | ≥ L̃_d(D + C) 2^{nd/2} log^{d/2} n )
≤ L̃_d Σ_{J_0⊊I_d} Σ_{J∈P_{I_d\J_0}} exp[ −( C 2^{nd/2} log^{d/2} n / ( 2^{nd/2} ‖h_n‖_{J_0,J} ) )^{2/deg J} ]
+ L̃_d Σ_{I⊊I_d} exp[ −( C 2^{nd/2} log^{d/2} n / ( 2^{n#I/2} ‖(E_I|h_n|²)^{1/2}‖_∞ ) )^{2/(d+#I^c)} ].
The second series is convergent by (33).
Thus it remains to prove the convergence of the first series. By (35), we have for all J_0, J,

Σ_n exp[ −( C log^{d/2} n / ‖h_n‖_{J_0,J} )^{2/deg J} ]
≤ Σ_n ( Σ_{I⊊I_d} exp[ −( L_d^{−1} C log^{d/2} n / ( 2^{(k_I(n)−n)#I^c/2} log^{d/2} n ) )^{2/deg J} ]
+ exp[ −( L_d^{−1} C log^{d/2} n / ( D log^{(d−deg J)/2} n ) )^{2/deg J} ]
+ exp[ −( L_d^{−1} C log^{d/2} n / α_n )^{2/deg J} ]
+ Σ_{∅≠I⊊I_d} exp[ −( L_d^{−1} C log^{d/2} n / β_n^I )^{2/deg J} ]
+ Σ_{I⊊I_d} exp[ −( L_d^{−1} C log^{d/2} n / γ_n^I )^{2/deg J} ] ) (37)

(under our permanent convention that the values of L_d in different equations need not be the same). The series determined by the three last components on the right-hand side are convergent by (36), since e^{−x} ≤ L_r x^{−r} for every r > 0. The series corresponding to the second component is convergent for C large enough, and we can take C = L_d D. As for the series corresponding to the first term, we have, just as in the proof of (33), for any I ⊊ I_d,

Σ_n exp[ −( L_d^{−1} C log^{d/2} n / ( 2^{(k_I(n)−n)#I^c/2} log^{d/2} n ) )^{2/deg J} ]
≤ Σ_k Σ_{n≥k, k∉K(I,n)} exp[ −( L_d^{−1} C 2^{(n−k)#I^c/2} )^{2/deg J} ]
≤ K̄ Σ_k E( H² 1_{C_k^I} ) = K̄ EH² < ∞.
We have thus proven the convergence of the series on the left-hand side of (37) with C ≤ L_d D, which ends Step 4.
Now, to finish the proof, we just split Σ^d for each n into the four sets described in Steps 1–4 and use the triangle inequality to show that

Σ_n P( | Σ_{|i|≤2^n} h(X_i^dec) | ≥ L_d D 2^{nd/2} log^{d/2} n ) < ∞,

which proves the sufficiency part of the theorem by Corollary 4.
8 The undecoupled case
Theorem 4. For any function h : Σd → H and a sequence X1,X2, . . . of
i.i.d., Σ-valued random variables, the LIL (4) holds if and only if h
(LL|h|)d < ∞,
h is completely degenerate for the law of X1 and the growth conditions (12)
are satisfied.
More precisely, if (4) holds, then (12) is satisfied with D = LdC and
conversely, (12) together with complete degeneration and the integrability
condition imply (4) with C = LdD.
Proof. Sufficiency follows from Corollary 6 and Theorem 2. To prove the necessity, assume that (4) holds and observe that by Lemma 8 and Corollary 5, h satisfies the randomized decoupled LIL (8) and thus, by Theorem 2, (11) holds and the growth conditions (12) on the norms ‖h‖_{K,J,u} are satisfied (note that the ‖·‖_{K,J,u} norms of the kernels h(X_1, …, X_d) and ε_1⋯ε_d h(X_1, …, X_d) are equal). The complete degeneracy of ⟨φ, h⟩ for any φ ∈ H follows from the necessary conditions for real-valued kernels. Since by (11), E_i h is well defined in the Bochner sense, we must have E_i h = 0.
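The complete-degeneracy condition is easy to illustrate: for d = 2, expanding the product in Definition 8 gives π_2h(x, y) = h(x, y) − E h(X, y) − E h(x, Y) + E h(X, Y), and π_2h integrates to 0 in each argument. The Python sketch below (an assumed discrete toy law, not from the paper) checks this exactly:

```python
import numpy as np

# Exact check (assumed discrete toy law) that for d = 2 the Hoeffding projection
#   (pi_2 h)(x, y) = h(x, y) - E h(X, y) - E h(x, Y) + E h(X, Y)
# is canonical: integrating out either argument yields 0.
vals = np.array([0.0, 1.0, 2.0])
p = np.array([0.2, 0.5, 0.3])                  # law P of X_1 on {0, 1, 2}
h = vals[:, None] + vals[None, :] ** 2         # non-canonical kernel h(x, y) = x + y^2

Eh_x = h @ p                                   # E_Y h(x, Y), indexed by x
Eh_y = p @ h                                   # E_X h(X, y), indexed by y
Ehh = p @ h @ p                                # E h(X, Y)
pi2 = h - Eh_x[:, None] - Eh_y[None, :] + Ehh  # pi_2 h on the grid

print(np.abs(pi2 @ p).max(), np.abs(p @ pi2).max())
```

Both printed maxima vanish, confirming that π_2h (and hence ⟨φ, π_2h⟩ for every φ) is completely degenerate for this law.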
References
[1] Adamczak, R. (2005). Moment Inequalities for U -statistics. Ann.
Probab. 34, No. 6, 2288-2314.
[2] Adamczak, R., Latała, R. (2006). LIL for canonical U-statistics. Submitted.
[3] Arcones, M. and Giné, E. (1995). On the law of the iterated logarithm
for canonical U -statistics and processes. Stochastic Process. Appl. 58
217–245. MR1348376.
[4] Bousquet O., Boucheron S., Lugosi G., Massart P.(2005). Mo-
ment inequalities for functions of independent random variables. Ann.
Probab. 33, no. 2, 514-560. MR2123200.
[5] de la Peña, V.H. and Giné, E. (1999). Decoupling. From Dependence
to Independence. Springer, New York. MR1666908.
[6] de la Peña, V.H. and Montgomery-Smith, S. (1994). Bounds for
the tail probabilities of U -statistics and quadratic forms. Bull. Amer.
Math. Soc. 31 223–227. MR1261237.
[7] de la Peña, V. H., Montgomery-Smith, S. J. (1995) Decoupling
inequalities for the tail probabilities of multivariate U -statistics. Ann.
Probab. 23, no. 2, 806–816. MR1334173.
[8] Giné, E., Latała, R. and Zinn, J. (2000). Exponential and moment inequalities for U-statistics. High Dimensional Probability II, 13–38. Progr. Probab., 47, Birkhäuser, Boston. MR1857312.
[9] Giné, E. and Zhang, C.-H. (1996). On integrability in the LIL for
degenerate U -statistics. J. Theoret. Probab. 9 385–412. MR1385404.
[10] Giné, E., Zinn, J. (1994). A remark on convergence in distribution of
U -statistics. Ann. Probab. 22 117–125. MR1258868.
[11] Giné, E., Kwapień, S., Latała, R. and Zinn, J. (2001). The LIL for canonical U-statistics of order 2. Ann. Probab. 29 520–557. MR1825163.
[12] Gluskin, E.D., Kwapień, S. (1995). Tail and moment estimates for
sums of independent random variables with logarithmically concave tails.
Studia Math. 114 , no. 3, 303–309. MR1338834.
[13] Goodman, V., Kuelbs, J., Zinn, J. (1981). Some results on the LIL
in Banach space with applications to weighted empirical processes. Ann.
Probab. 9 , no. 5, 713–752. MR0628870.
[14] Houdré, C. and Reynaud-Bouret, P. (2003). Exponential Inequal-
ities, with Constants, for U -statistics of Order Two. Stochastic inequal-
ities and applications, 55–69. Progr. Probab., 56, Birkhauser, Basel.
MR2073426.
[15] Kwapień, S., Lata la, R., Oleszkiewicz, K. and Zinn, J. (2003).
On the limit set in the law of the iterated logarithm for U -statistics of
order two. High dimensional probability, III (Sandjberg), 111–126. Progr.
Probab., 55, Birkhauser, Basel. MR2033884.
[16] Lata la, R. (1998) On the almost sure boundedness of norms of
some empirical operators. Statist. Probab. Lett. 38 , no. 2, 177–182.
MR1627869.
[17] Lata la, R. and Zinn, J. (2000). Necessary and sufficient conditions
for the strong law of large numbers for U -statistics. Ann. Probab. 28
1908–1924. MR1813848.
[18] Lata la, R. (2006). Estimation of moments and tails of Gaussian
chaoses. Ann. Probab. 34, No. 6. 2061-2440.
[19] Rubin, H. and Vitale, R.A. (1980). Asymptotic distribution of sym-
metric statistics. Ann. Statist. 8 165–170. MR0557561.
|
0704.1644 | Circulating Current States in Bilayer Fermionic and Bosonic Systems | Circulating Current States in Bilayer Fermionic and Bosonic Systems
A. K. Kolezhuk∗
Institut für Theoretische Physik C, RWTH Aachen, 52056 Aachen, Germany and
Institut für Theoretische Physik, Universität Hannover, 30167 Hannover, Germany
It is shown that fermionic polar molecules or atoms in a bilayer optical lattice can undergo the transition to
a state with circulating currents, which spontaneously breaks the time reversal symmetry. Estimates of relevant
temperature scales are given and experimental signatures of the circulating current phase are identified. Related
phenomena in bosonic and spin systems with ring exchange are discussed.
PACS numbers: 05.30.Fk, 05.30.Jp, 42.50.Fx, 75.10.Jm
Introduction.– The technique of ultracold gases loaded
into optical lattices [1, 2] allows a direct experimental study of
paradigmatic models of strongly correlated systems. The pos-
sibility of unprecedented control over the model parameters
has opened wide perspectives for the study of quantum phase
transitions. Detection of the Mott insulator to superfluid tran-
sition in bosonic atomic gases [3, 4, 5], of superfluidity [6, 7]
and Fermi liquid [8] in cold Fermi gases, realization of Fermi
systems with low dimensionality [9, 10] mark some of the re-
cent achievements in this rapidly developing field [11]. While
the atomic interactions can be treated as contact ones for most
purposes, polar molecules [12, 13, 14] could provide further
opportunities of controlling longer-range interactions.
In this Letter, I propose several models on a bilayer op-
tical lattice which exhibit a phase transition into an exotic
circulating current state with spontaneously broken time re-
versal symmetry. Those states are closely related to the “or-
bital antiferromagnetic states” proposed first by Halperin and
Rice nearly 40 years ago [15], rediscovered two decades later
[16, 17, 18] and recently found in numerical studies in ex-
tended t-J model on a ladder [19] and on a two-dimensional
bilayer [20]. Our goal is to show how such states can be real-
ized and detected in a relatively simple optical lattice setup.
Model of fermions on a bilayer optical lattice.– Consider
spin-polarized fermions in a bilayer optical lattice shown in
Fig. 1. The system is described by the Hamiltonian
H = V Σ_r n1,r n2,r + Σ_〈rr′〉 V′σσ′ nσ,r nσ′,r′ − t Σ_r (a†1,r a2,r + h.c.) − t′ Σ_〈rr′〉,σ (a†σ,r aσ,r′ + h.c.), (1)
where r labels the vertical dimers arranged in a two-
dimensional (2d) square lattice, σ = 1, 2 labels two layers,
and 〈rr′〉 denotes a sum over nearest neighbors. Amplitudes t
and t′ describe hopping between the layers and within a layer,
respectively. A strong “on-dimer” nearest-neighbor repulsion
V ≫ t, t′ > 0 is assumed, and there is an interaction between
the nearest-neighbor dimers V ′σσ′ which can be of either sign.
This seemingly exotic setup can be realized by using po-
lar molecules [13, 14], or atoms with a large dipolar magnetic
moment such as 53Cr [12], and adjusting the direction of the
dipoles with respect to the bilayer plane. Let θ, ϕ be the polar
and azimuthal angles of the dipolar moment (the coordinate
axes are along the basis vectors of the lattice, z axis is perpen-
dicular to the bilayer plane). Setting ϕ = ±π/4, ±3π/4 ensures
the dipole-dipole interaction is the same along the x and y di-
rections. The nearest neighbor interaction parameters in (1)
take the following values: V = (d₀²/ℓ⊥³)(1 − 3 cos²θ), and
V′12 = V′21 = (d₀²/R³){1 − 3R⁻²(ℓ‖ cos θ + ℓ⊥ sin θ cos ϕ)²},
V′11 = V′22 = V′12(ℓ‖ = 0), where d₀ is the dipole moment
of the particle, ℓ⊥ and ℓ‖ are the lattice spacings in the direc-
tions perpendicular and parallel to the layers, respectively, and
R² = ℓ‖² + ℓ⊥². The strength and the sign of interactions V,
Ṽ ′ can be controlled by tuning the angles θ, ϕ and the lattice
constants ℓ⊥, ℓ‖. Below we will see that the physics of the
problem depends on the difference
Ṽ′ = V′11 − V′12, (2)
with the most interesting regime corresponding to Ṽ ′ < 0.
Consider the model at half-filling. Since V ≫ t, t′, we may
restrict ourselves to the reduced Hilbert space containing only
states with one fermion per dimer. Two states of each dimer
can be identified with pseudospin-1/2 states |↑〉 and |↓〉. Second-
order perturbation theory in t′ yields the effective Hamiltonian
H_eff = Σ_〈rr′〉 [ J(S^x_r S^x_r′ + S^y_r S^y_r′) + Jz S^z_r S^z_r′ ] − H Σ_r S^x_r ,
J = 4(t′)²/V, Jz ≡ J∆ = J + Ṽ′, H = 2t, (3)
describing a 2d anisotropic Heisenberg antiferromagnet in
a magnetic field perpendicular to the anisotropy axis. The
twofold degenerate ground state has the Néel antiferromag-
netic (AF) order transverse to the field, with spins canted to-
wards the field direction. The AF order is along the y axis for
∆ < 1 (i.e., Ṽ′ < 0), and along the z axis for ∆ > 1 (Ṽ′ > 0).
FIG. 1: Bilayer lattice model described by the Hamiltonian (1). The
arrows denote particle flow in the circulating current phase.
The angle α between the spins and the field is classically given
by cosα = H/(2ZJS), where S is the spin value and Z = 4
is the lattice coordination number. This classical ground state
is exact at the special point H = 2SJ√(2(1 + ∆)) [21]. The
transversal AF order vanishes above a certain critical field Hc;
classically Hc = 2ZJS, and the same result follows from the
spin-wave analysis of (3) (one starts with the fully polarized
spin state at large H and looks when the magnon gap van-
ishes). This expression becomes exact at the isotropic point
∆ = 1 and is a good approximation for ∆ close to 1.
The long-range AF order along the y direction translates
in the original fermionic language into the staggered arrange-
ment of currents flowing from one layer to the other:
N_y = (−)^r 〈S^y_r〉 ↦ (−)^r (1/2i) 〈a†1,r a2,r − a†2,r a1,r〉. (4)
In terms of the original model (1), the condition H < Hc for
the existence of such a staggered current order becomes
t < 8(t′)²/V. (5)
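As a quick numerical sanity check (my own sketch, not part of the Letter): the pseudospin parameters of Eq. (3) together with the classical critical field Hc = 2ZJS, for Z = 4 and S = 1/2, reproduce inequality (5):

```python
# Sketch: verify that H < Hc, with the Eq. (3) parameters,
# is equivalent to Eq. (5), t < 8 t'^2 / V.  Illustrative only.

def cc_possible(t, tp, V, Z=4, S=0.5):
    """True if the effective field H = 2t lies below the classical
    critical field Hc = 2*Z*J*S, with J = 4 tp**2 / V."""
    J = 4 * tp**2 / V
    return 2 * t < 2 * Z * J * S

# For Z = 4, S = 1/2 the condition reduces exactly to t < 8 tp**2 / V:
for t, tp, V in [(0.01, 0.1, 1.0), (0.05, 0.1, 1.0), (0.1, 0.1, 1.0)]:
    assert cc_possible(t, tp, V) == (t < 8 * tp**2 / V)
```

The loop scans a few sample parameter sets; any positive values of t, tp, V would do, since the equivalence is algebraic.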
The continuity equation for the current and the lattice symme-
try dictate the current pattern shown in Fig. 1. This circulating
current (CC) state has a spontaneously broken time reversal
symmetry, and is realized only for attractive inter-dimer inter-
action Ṽ ′ < 0 (i.e., the easy-plane anisotropy ∆ < 1) [22].
If ∆ = 1, the direction of the AF order in the xy plane is
arbitrary, so there is no long-range order at any finite temper-
ature. For ∆ > 1 (i.e., Ṽ ′ > 0) the AF order along the z axis
corresponds to the density wave (DW) phase with in-layer oc-
cupation numbers having a finite staggered component.
The phase diagram in the temperature-anisotropy plane is
sketched in Fig. 2. At the critical temperature T = Tc the dis-
crete Z2 symmetry gets spontaneously broken, so the corre-
sponding thermal phase transition belongs to the 2d Ising uni-
versality class (except the two lines ∆ = 1 and H = 0 where
the symmetry is enlarged to U(1) and the transition becomes
the Kosterlitz-Thouless one). Away from the phase bound-
aries the critical temperature Tc ∼ J , but at the isotropic point
∆ = 1, H = 0 it vanishes due to divergent thermal fluctua-
tions: for 1−∆ ≪ 1 and H ≪ J , it can be estimated as
Tc ∼ J / ln[min(|1−∆|⁻¹, J²/H²)]. (6)
FIG. 2: Schematic phase diagram of the model (1), (3) at fixed H =
2t. The line ∆ = 1 corresponds to the Kosterlitz-Thouless phase,
with the transition temperature TKT ∝ J/ ln(J/H) at small H .
FIG. 3: Noise correlation function G(r, r′) from time-of-flight im-
ages in the circulating current (CC) phase, shown as the function of
the relative distance Q(r) − Q(r′), with Q(r) = mr/(ℏt) expressed
in 1/ℓ‖ units: (a) Q(r) = (0, 0); (b) Q(r) = (1, 1).
Changing the initial point Q(r) leads to the change of relative weight
of the two systems of dips, which is the fingerprint of the CC phase.
The quantum phase transition at T = 0, H = Hc is of the 3d
Ising type (except at the U(1)-symmetric point ∆ = 1 where
the universality class is that of the 2d dilute Bose gas [23]), so
in its vicinity the CC order parameter N_y ∝ (Hc − H)^β with
β ≃ 0.313 [24], and Tc ∝ J N_y² ∝ J(Hc − H)^{2β}. At T > Tc
or H > Hc the only order parameter is 〈S^x〉, corresponding
to the Mott phase with one particle per dimer.
Bilayer lattice design and hierarchy of scales.– The bi-
layer can be realized, e.g., by employing three pairs of mutu-
ally perpendicular counter-propagating laser beams with the
same polarization and adding another pair of beams with
an orthogonal polarization and additional phase shift δ, so
that the resulting field intensity has the form
[E⊥(cos kx + cos ky) + Ez cos kz]² + Ẽz² cos²(kz + δ). Taking δ = (π/4)(1 + ζ)
and Ẽz² > Ez(2E⊥ + ζEz), with ζ = ±1 for blue and red de-
tuning, respectively, one obtains a three-dimensional stack of
bilayers, separated by large potential barriers U3d. Eq. (5) im-
plies V ≫ t′ ≫ t, |Ṽ ′|, which can be achieved by making the
z-direction potential barrier U⊥ inside the bilayer sufficiently
larger than the in-plane barrier U‖, so that the condition t ≪ t′
will be met; e.g., Ẽz/E⊥ ≈ 20, Ez/E⊥ ≈ 15 yields the bar-
rier ratio U3d : U⊥ : U‖ of approximately 16 : 8 : 1, and the
lattice constants ℓ⊥ ≈ 0.45λ, ℓ‖ = λ, where λ = 2π/k is the
laser wave length. The parameter Ṽ ′ has a zero as a function
of the angle θ, so it can be made as small as needed. Taking
λ = 400 nm, one obtains an estimate of Tc = (0.1÷ 0.3) µK
for cyanide molecules ClCN and HCN with the dipolar mo-
ment d0 ≈ 3 Debye, while the Fermi temperature for the same
parameters is Tf ≈ (0.6÷1.3) µK. This estimate corresponds
to the maximum value of Tc ∼ J reached when Ṽ′ ∼ −J
and t ≲ J. The hopping t′ was estimated assuming the in-
plane potential barrier U‖ is roughly equal to the recoil energy
Er = (ℏk)²/2m, where m is the particle mass.
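For orientation, the recoil energy can be evaluated directly from Er = (ℏk)²/2m = h²/(2mλ²). The following is an illustrative estimate only (the mass of ≈ 27 a.m.u. for HCN is my assumption, not a figure from the Letter):

```python
import math

# Recoil energy Er = (hbar*k)^2 / (2m) = h^2 / (2 m lambda^2),
# for lambda = 400 nm and a molecule of ~27 a.m.u. (HCN, assumed).
h   = 6.62607e-34      # Planck constant, J s
kB  = 1.38065e-23      # Boltzmann constant, J/K
amu = 1.66054e-27      # atomic mass unit, kg

lam = 400e-9           # laser wavelength, m
m   = 27 * amu         # assumed molecular mass, kg

Er    = h**2 / (2 * m * lam**2)   # recoil energy, joules
Er_uK = Er / kB * 1e6             # same, as a temperature in microkelvin
print(f"Er/kB ~ {Er_uK:.1f} microkelvin")
```

The result is a few microkelvin, i.e. the same order as the Tc and Tf scales quoted above.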
Experimental signatures. – Signatures of the ordered
phases can be observed [25, 26] in time-of-flight experi-
ments by measuring the density noise correlator G(r, r′) =
〈n(r)n(r′)〉 − 〈n(r)〉〈n(r′)〉. If the imaging axis is perpen-
dicular to the bilayer, n(r) = 〈Σ_σ a†σ,r aσ,r〉 is the local net
density of two layers. For large flight times t it is proportional
to the momentum distribution n_Q(r), where Q(r) = mr/(ℏt).
In the Mott phase the response shows fermionic “Bragg dips”
at reciprocal lattice vectors g = (2πh/ℓ‖, 2πk/ℓ‖),
G_M(r, r′) ∝ f₀(r, r′) = −2〈S^x〉² Σ_g δ(Q(r) − Q(r′) − g).
In the CC and DW phases the noise correlator contains an
additional system of dips shifted by QB = (π/ℓ‖, π/ℓ‖):
G_CC,DW(r, r′) ∝ f₀(r, r′) − 2 { 〈S^z〉² + 〈S^y〉² (1/4)[cos(Q_x(r)ℓ‖) + cos(Q_y(r)ℓ‖)]² } Σ_g δ(Q(r) − Q(r′) − Q_B − g). (7)
In the DW phase 〈S^z〉 ≠ 0, 〈S^y〉 = 0, and so the density correlator
depends only on r − r′. In the CC phase 〈S^z〉 = 0,
〈S^y〉 ≠ 0, and the relative strength of the two systems of dips
varies periodically when one changes the initial point r, see
Fig. 3. This Q-dependent contribution stems from the intra-
layer currents 〈a†σ,r aσ,r′〉 = (−)^σ (−)^r δ_〈rr′〉 i〈S^y〉/4, where
the factor 1/4 comes from the fact that the inter-layer current splits into
four equivalent intra-layer ones (see Fig. 1), δ〈rr′〉 means r
and r′ must be nearest neighbors, and (−)^r ≡ e^{iQ_B·r} de-
notes an oscillating factor. If the correlator is averaged over
the particle positions, the CC and DW phases become indis-
tinguishable. A direct way to observe the CC phase could be
to use the laser-induced fluorescence spectroscopy to detect
the Doppler line splitting proportional to the current.
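The Q-dependence of the extra dips can be made concrete: the factor [cos(Q_x ℓ‖) + cos(Q_y ℓ‖)]² multiplying 〈S^y〉² varies as the reference point Q(r) is shifted, while the 〈S^z〉² (DW) contribution does not. A small illustrative sketch (overall normalization arbitrary):

```python
import math

def cc_dip_weight(Qx, Qy, ell=1.0):
    """Relative weight of the QB-shifted dips in the CC phase,
    up to an overall constant (see the correlator discussion)."""
    return (math.cos(Qx * ell) + math.cos(Qy * ell))**2

# The weight changes with the reference point Q(r) -> CC fingerprint:
w0 = cc_dip_weight(0.0, 0.0)   # Q(r) = (0, 0), as in Fig. 3(a)
w1 = cc_dip_weight(1.0, 1.0)   # Q(r) = (1, 1), as in Fig. 3(b)
assert w0 == 4.0
assert w1 < w0                 # shifted dips are weaker at Q(r) = (1, 1)
```

Averaging this weight over r would wash out the variation, which is why position-averaged correlators cannot distinguish CC from DW.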
Bosonic models.– Consider the bosonic version of the
model (1), with the additional on-site repulsion U . The ef-
fective Hamiltonian has the form (3) with J = −4(t′)²/V
and Jz = Ṽ′ + 4(t′)²(1/V − 1/U). Due to ferromag-
netic (FM) transverse exchange, instead of spontaneous cur-
rent one obtains the usual Mott phase. CC states can be in-
duced by artificial gauge fields [27]: The vector potential
A(x) = (π/2)(x + 1/2) makes hopping along the x axis imaginary,
t′ ↦ it′. The unitary transformation S^{x,y}_r ↦ (−)^r S^{x,y}_r
maps the system onto a set of FM chains along the x axis,
AF-coupled in the y direction and subject to a staggered field
H = 2t along the x axis in the easy (xy) plane. In the ground
state net chain moments are arranged in a staggered way along
the y axis, so a current pattern similar to that of Fig. 1 emerges,
now staggered along only one of the two in-plane directions.
A different type of CC states, with orbital currents localized
at lattice sites, can be achieved with p-band bosons [28].
Yet another way to create a CC state in a bosonic bilayer is
to introduce the ring exchange on vertical plaquettes:
H_ring = K Σ_〈rr′〉 (b†1,r b†2,r′ b2,r b1,r′ + h.c.). (8)
In pseudospin language, the ring interaction modifies the
transverse exchange, J ↦ J + K, so for K > 0 one can
achieve the conditions J > 0, J > |Jz| necessary for the CC
phase to exist. However, engineering a sizeable ring exchange
in bosonic systems is difficult (see [29] for recent proposals).
Spin-1/2 bilayer with four-spin ring exchange.– Consider
the Hubbard model for spinful fermions on a bilayer shown
in Fig. 1, with the on-site repulsion U and inter- and intra-
layer hoppings t and t′, respectively. At half filling (i.e., two
fermions per dimer), one can effectively describe the system
in terms of spin degrees of freedom represented by the opera-
tors S = (1/2) a†_α σ_αβ a_β. The leading term in t/U yields the AF
Heisenberg model with the nearest-neighbor exchange con-
stants J⊥ = 4t²/U (inter-layer) and J‖ = 4(t′)²/U (intra-
layer), while the next term, with the interaction strength J4 ≃
10t²(t′)²/U³, corresponds to the ring exchange [30, 31]:
H₄ = 2J₄ Σ [ (S1 · S2)(S1′ · S2′) + (S1 · S1′)(S2 · S2′) − (S1 · S2′)(S2 · S1′) ], (9)
where the sum is over vertical plaquettes only (the interaction
for intra-layer plaquettes is of the order of (t′)⁴/U³ and is ne-
glected), and the sites (1, 2, 2′, 1′) form a plaquette (traversed
counter-clockwise). In the same order of the perturbation the-
ory, the nearest-neighbor exchange constants get corrections,
J⊥ ↦ J_R = J⊥ + J₄,  J‖ ↦ J_L = J‖ + J₄/2,
and the interaction J_D = J₄/2 along the diagonals of verti-
cal plaquettes is generated. Generalization for any 2d bipar-
tite lattice built of vertically arranged dimers is trivial. Since
J⊥ ≫ J‖, J4, we can treat the system as a set of weakly cou-
pled spin dimers. The dynamics can be described with the
help of the effective field theory [32] which is a continuum
version of the bond boson approach [33] and is based on dimer
coherent states |u,v〉 = (1 − u² − v²)^{1/2}|s〉 + Σ_j (u_j + iv_j)|t_j〉.
Here |s〉 and |tj〉, j = (x, y, z) are the singlet and triplet
states, and u, v are real vectors related to the staggered mag-
netization 〈S1 − S2〉 = 2u(1 − u² − v²)^{1/2} and vector
chirality 〈S1 × S2〉 = v(1 − u² − v²)^{1/2} of the dimer. Using the
ansatz u(r) = (−)^r ϕ(r), v(r) = (−)^r χ(r), passing to the
continuum in the coherent states path integral, and retaining
up to quartic terms in u, v, one obtains the Euclidean action
∫ dτ d²r { ℏ(ϕ · ∂_τχ − χ · ∂_τϕ) + (ϕ² + χ²)(J_R − 3ZJ₄/2) − Z[J‖ϕ² + J₄χ²] + (Z/2)[J‖(∂_kϕ)² + J₄(∂_kχ)²] + Z U₄(ϕ,χ) }, (10)
where the quartic potential U4 has the form
U₄ = (ϕ² + χ²)[J‖ϕ² + J₄χ²] + J₄(ϕ² + χ²)² + (J‖ + J₄)(ϕ × χ)². (11)
Interdimer interactions J‖ and J4 favor two competing types
of order: while J‖ tends to establish the AF order (ϕ ≠ 0,
χ = 0), strong ring exchange J4 favors another solution with
ϕ = 0, χ ≠ 0, describing the state with a staggered vector
chirality. It wins over the AF one for J₄ > J‖ and J₄ > 2J_R/(5Z),
which for the square lattice (Z = 4) translates into
J4 > max(J‖, J⊥/9). (12)
On the line J4 = J‖ the symmetry is enhanced from SU(2)
to SU(2)× U(1), and the AF and chiral orders can coexist: a
rotation (ϕ + iχ) ↦ (ϕ + iχ)e^{iα} leaves the action invariant.
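As a consistency check (my own reading of the quadratic part of the action (10), not a computation from the Letter): the χ mode condenses when its mass coefficient (J_R − 3ZJ₄/2) − ZJ₄ turns negative and lies below the ϕ mass, which for Z = 4 reproduces condition (12):

```python
# Compare the quadratic (mass) coefficients of phi and chi read off
# from the action: chiral order wins when its mass is negative and
# lower than the AF one.  Sketch only, based on Eq. (10).

def chiral_wins(Jperp, Jpar, J4, Z=4):
    JR = Jperp + J4
    m_phi = JR - 1.5 * Z * J4 - Z * Jpar   # coefficient of phi^2
    m_chi = JR - 1.5 * Z * J4 - Z * J4     # coefficient of chi^2
    return m_chi < 0 and m_chi < m_phi

# Parameter scan: agreement with J4 > max(Jpar, Jperp/9), Eq. (12):
for Jperp in (1.0, 2.0):
    for Jpar in (0.05, 0.2, 0.5):
        for J4 in (0.05, 0.15, 0.3, 0.6):
            assert chiral_wins(Jperp, Jpar, J4) == (J4 > max(Jpar, Jperp / 9))
```

The scan confirms that the mass-coefficient criterion and inequality (12) coincide over the sampled grid.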
The chiral state may be viewed as an analog of the cir-
culating current state considered above: in terms of the
original fermions of the Hubbard model, the z-component
of the chirality (S1 × S2)_z = (i/2)[ (a†1↓a2↓)(a†2↑a1↑) − (a†2↓a1↓)(a†1↑a2↑) ]
corresponds to the spin current (particles
with up and down spins moving in opposite directions).
Summary.– I have considered fermionic and bosonic
models on a bilayer optical lattice which exhibit a phase tran-
sition into a circulating current state with spontaneously bro-
ken time reversal symmetry. The simplest of those models
includes just nearest-neighbor interactions and hoppings, and
can possibly be realized with the help of polar molecules.
Acknowledgments.– I sincerely thank U. Schollwöck,
T. Vekua, and S. Wessel for fruitful discussions. Support by
Deutsche Forschungsgemeinschaft (the Heisenberg Program,
KO 2335/1-1) is gratefully acknowledged.
∗ On leave from: Institute of Magnetism, National Academy of
Sciences and Ministry of Education, 03142 Kiev, Ukraine.
[1] A. Kastberg et al., Phys. Rev. Lett. 74, 1542 (1995).
[2] G. Raithel, W. D. Phillips, and S. L. Rolston, Phys. Rev. Lett.
81, 3615 (1998).
[3] M. Greiner et al., Nature (London) 415, 39 (2002).
[4] T. Stöferle et al., Phys. Rev. Lett. 92, 130403 (2004).
[5] D. Jaksch et al., Phys. Rev. Lett. 81, 3108 (1998).
[6] M. Greiner, C. A. Regal, and D. S. Jin, Nature 426, 537 (2003).
[7] T. Bourdel et al., Phys. Rev. Lett. 93, 050401 (2004).
[8] M. Köhl et al., Phys. Rev. Lett. 94, 080403 (2005).
[9] G. Modugno et al., Phys. Rev. A 68, 011601(R) (2003).
[10] H. Moritz et al., Phys. Rev. Lett. 94, 210401 (2005).
[11] I. Bloch, Nature Physics 1, 23 (2005).
[12] L. Santos et al., Phys. Rev. Lett. 85, 1791 (2000).
[13] J. Doyle et al., Eur. Phys. J. D 31, 149 (2004).
[14] H. P. Büchler et al., Phys. Rev. Lett. 98, 060404 (2007).
[15] B. I. Halperin and T. M. Rice, Solid State Physics 21, eds. F.
Seitz, D. Turnbull, and H. Ehrenreich (Academic Press, New
York, 1968), p. 116.
[16] I. Affleck and J. B. Marston, Phys. Rev. B 37, 3774 (1988).
[17] A. Nersesyan, Phys. Lett. A 153, 49 (1991).
[18] H. J. Schulz, Phys. Rev. B 39, 2940 (1989).
[19] U. Schollwöck et al., Phys. Rev. Lett. 90, 186401 (2003).
[20] S. Capponi, C. Wu, and S.-C. Zhang, Phys. Rev. B 70,
220505(R) (2004).
[21] J. Kurmann, H. Thomas, and G. Müller, Physica 112A, 235
(1982).
[22] Note that this regime cannot be reached within the model with
controlled hopping discussed in L.-M. Duan, E. Demler, and M.
D. Lukin, Phys. Rev. Lett. 91, 090402 (2003).
[23] S. Sachdev, Quantum Phase Transitions (Cambridge University
Press, 1999).
[24] J. García and J. A. Gonzalo, Physica 326A, 464 (2003).
[25] E. Altman, E. Demler, M. D. Lukin, Phys. Rev. A 70, 013603
(2004).
[26] S. Fölling et al., Nature 434, 481 (2005).
[27] D. Jaksch and P. Zoller, New J. Phys. 5, 56 (2003); G. Juzeli-
unas and P. Ohberg, Phys. Rev. Lett. 93, 033602 (2004); E. J.
Mueller, Phys. Rev. A 70, 041603(R) (2004); A. S. Sorensen,
E. Demler, M. D. Lukin, Phys. Rev. Lett. 94, 086803 (2005).
[28] W. V. Liu and C. Wu, Phys. Rev. A 74, 013607 (2006); C. Wu
et al., Phys. Rev. Lett. 97, 190406 (2006).
[29] H. P. Büchler et al., Phys. Rev. Lett. 95, 040402 (2005).
[30] M. Takahashi, J. Phys. C 10, 1289 (1977).
[31] A. H. MacDonald, S. M. Girvin, and D. Yoshioka, Phys. Rev.
B 41, 2565 (1990).
[32] A. K. Kolezhuk, Phys. Rev. B 53, 318 (1996).
[33] S. Sachdev and R. N. Bhatt, Phys. Rev. B 41, 9323 (1990).
|
0704.1645 | Non-Relativistic Propagators via Schwinger's Method | Non-Relativistic Propagators via Schwinger’s Method
A. Aragão,∗ H. Boschi-Filho,† and C. Farina‡
Instituto de F́ısica, Universidade Federal do Rio de Janeiro,
Caixa Postal 68.528, 21941-972 Rio de Janeiro, RJ, Brazil.
F. A. Barone§
Universidade Federal de Itajubá, Av. BPS 1303
Caixa Postal 50 - 37500-903, Itajubá, MG, Brazil.
In order to popularize the so called Schwinger’s method we reconsider the Feynman propagator of
two non-relativistic systems: a charged particle in a uniform magnetic field and a charged harmonic
oscillator in a uniform magnetic field. Instead of solving the Heisenberg equations for the position
and the canonical momentum operators, R and P, we apply this method by solving the Heisenberg
equations for the gauge invariant operators R and π = P − eA, the latter being the mechanical
momentum operator. In our procedure we avoid fixing the gauge from the beginning and the result
thus obtained shows explicitly the gauge dependence of the Feynman propagator.
PACS numbers: 42.50.Dv
Keywords: Schwinger’s method, Feynman Propagator, Magnetic Field, Harmonic Oscillator.
I. INTRODUCTION
In a recent paper published in this journal [1], three
methods were used to compute the Feynman propaga-
tors of a one-dimensional harmonic oscillator, with the
purpose of allowing a student to compare the advan-
tadges and disadvantadges of each method. The above
mentioned methods were the following: the so called
Schwinger’s method (SM), the algebraic method and the
path integral one. Though extremely powerful and el-
egant, Schwinger’s method is by far the less popular
among them. The main purpose of the present paper is
to popularize Schwinger’s method providing the reader
with two examples slightly more difficult than the har-
monic oscillator case and whose solutions may serve as a
preparation for attacking relativistic problems. In some
sense, this paper is complementary to reference [1].
The method we shall be concerned with was introduced
by Schwinger in 1951 [2] in a paper about QED enti-
tled “Gauge invariance and vacuum polarization”. After
introducing the proper time representation for comput-
ing effetive actions in QED, Schwinger was faced with a
kind of non-relativistic propagator in one extra dimen-
sion. The way he solved this problem is what we mean
by Schwinger’s method for computing quantum propa-
gators. For relativistic Green functions of charged parti-
cles under external electromagnetic fields, the main steps
of this method are summarized in Itzykson and Zuber’s
textbook [3] (apart, of course, from Schwinger’s work [2]).
Since then, this method has been used mainly in relativis-
tic quantum theory [4–16].
∗Electronic address: [email protected]
†Electronic address: [email protected]
‡Electronic address: [email protected]
§Electronic address: [email protected]
However, as mentioned before, Schwinger’s method is
also well suited for computing non-relativistic propaga-
tors, though it has rarely been used in this context. As
far as we know, this method was used for the first time
in non-relativistic quantum mechanics by Urrutia and
Hernandez [17]. These authors used Schwinger’s action
principle to obtain the Feynman propagator for a damped
harmonic oscillator with a time-dependent frequency un-
der a time-dependent external force. Up to our knowl-
edge, since then only a few papers have been written with
this method, namely: in 1986, Urrutia and Manterola
[18] used it in the problem of an anharmonic charged
oscillator under a magnetic field; in the same year, Hor-
ing, Cui, and Fiorenza [19] applied Schwinger’s method
to obtain the Green function for crossed time-dependent
electric and magnetic fields; the method was later applied
in a rederivation of the Feynman propagator for a har-
monic oscillator with a time-dependent frequency [20];
a connection with the mid-point-rule for path integrals
involving electromagnetic interactions was discussed in
[21]. Finally, pedagogical presentations of this method
can be found in the recent publication [1] as well as in
Schwinger’s original lecture notes recently published [22],
which includes a discussion of the quantum action prin-
ciple and a derivation of the method to calculate propa-
gators with some examples.
It is worth mentioning that this same method was inde-
pendently developed by M. Goldberger and M. Gell-Mann
in the autumn of 1951 in connection with an unpublished
paper about density matrix in statistical mechanics [23].
Our purpose in this paper is to provide the reader with
two other examples of non-relativistic quantum propa-
gators that can be computed in a straightforward way
by Schwinger’s method, namely: the propagator for a
charged particle in a uniform magnetic field and this same
problem with an additional harmonic oscillator potential.
Though these problems have already been treated in the
context of the quantum action principle [18], we decided
to reconsider them for the following reasons: instead of
solving the Heisenberg equations for the position and the
canonical momentum operators, R and P, as is done in
[18], we apply Schwinger’s method by solving the Heisen-
berg equations for the gauge invariant operators R and
π = P − eA, the latter being the mechanical momen-
tum operator. This is precisely the procedure followed
by Schwinger in his seminal paper of gauge invariance
and vacuum polarization [2]. This procedure has some
nice properties. For instance, we are not obliged to
choose a particular gauge at the beginning of calcula-
tions. As a consequence, we end up with an expression
for the propagator written in an arbitrary gauge. As a
bonus, the transformation law for the propagator under
gauge transformations can be readly obtained.
In order to prepare the students to attack more com-
plex problems, we solve the Heisenberg equations in ma-
trix form, which is well suited for generalizations involv-
ing Green functions of relativistic charged particles under
the influence of electromagnetic fields (constant Fµν , a
plane wave field or even combinations of both). For ped-
agogical reasons, at the end of each calculation, we show
how to extract the corresponding energy spectrum from
the Feynman propagator. Although the way Schwniger’s
method must be applied to non-relativistic problems has
already been explained in the literature [1, 18, 22], it is
not of common knowledge so that we start this paper
by summarizing its main steps. The paper is organized
as follows: in the next section we review Schwinger’s
method, in section III we present our examples and sec-
tion IV is left for the final remarks.
II. MAIN STEPS OF SCHWINGER’S METHOD
For simplicity, consider a one-dimensional time-
independent Hamiltonian H and the corresponding non-
relativistic Feynman propagator defined as
K(x, x′; τ) = θ(τ) 〈x| exp[−iHτ/ℏ] |x′〉, (1)
where θ(τ) is the Heaviside step function and |x〉, |x′〉
are the eigenkets of the position operator X (in the
Schrödinger picture) with eigenvalues x and x′, respec-
tively. The extension for 3D systems is straightforward
and will be done in the next section. For τ > 0 we have,
from equation (1), that
iℏ (∂/∂τ) K(x, x′; τ) = 〈x|H exp[−iHτ/ℏ] |x′〉. (2)
Inserting the unity 1l = exp[−(i/ℏ)Hτ ] exp[(i/ℏ)Hτ ] in
the r.h.s. of the above expression and using the well
known relation between operators in the Heisenberg and
Schrödinger pictures, we get the equation for the Feyn-
man propagator in the Heisenberg picture,
iℏ (∂/∂τ) K(x, x′; τ) = 〈x, τ |H(X(0), P (0))|x′, 0〉, (3)
where |x, τ〉 and |x′, 0〉 are the eigenvectors of opera-
tors X(τ) and X(0), respectively, with the correspond-
ing eigenvalues x and x′: X(τ)|x, τ〉 = x|x, τ〉 and
X(0)|x′, 0〉 = x′|x′, 0〉, with K(x, x′; τ) = 〈x, τ |x′, 0〉. Be-
sides, X(τ) and P (τ) satisfy the Heisenberg equations,
iℏ dX(τ)/dτ = [X(τ),H] ;  iℏ dP(τ)/dτ = [P(τ),H]. (4)
Schwinger’s method consists in the following steps:
(i) we solve the Heisenberg equations for X(τ) and P (τ),
and write the solution for P (0) only in terms of the
operators X(τ) and X(0);
(ii) then, we substitute the results obtained in (i) into
the expression for H(X(0), P (0)) in (3) and using
the commutator [X(0), X(τ)] we rewrite each term
of H in a time ordered form with all operators X(τ)
to the left and all operators X(0) to the right;
(iii) with such an ordered hamiltonian, equation (3) can
be readily cast into the form
iℏ (∂/∂τ) K(x, x′; τ) = F(x, x′; τ) K(x, x′; τ), (5)
with F (x, x′; τ) being an ordinary function defined
F(x, x′; τ) = 〈x, τ |H_ord(X(τ), X(0))|x′, 0〉 / 〈x, τ |x′, 0〉. (6)
Integrating in τ, the Feynman propagator takes the form
K(x, x′; τ) = C(x, x′) exp[ −(i/ℏ) ∫^τ F(x, x′; τ′) dτ′ ], (7)
where C(x, x′) is an integration constant indepen-
dent of τ and ∫^τ means an indefinite integral;
(iv) last step is concerned with the evaluation of
C(x, x′). This is done after imposing the follow-
ing conditions
−iℏ (∂/∂x) 〈x, τ |x′, 0〉 = 〈x, τ |P(τ)|x′, 0〉 , (8)
iℏ (∂/∂x′) 〈x, τ |x′, 0〉 = 〈x, τ |P(0)|x′, 0〉 , (9)
as well as the initial condition
lim_{τ→0⁺} K(x, x′; τ) = δ(x − x′) . (10)
Imposing conditions (8) and (9) means to substitute in
their left hand sides the expression for 〈x, τ |x′, 0〉 given
by (7), while in their right hand sides the operators P (τ)
and P (0), respectively, written in terms of the operators
X(τ) and X(0) with the appropriate time ordering.
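To see the machinery at work on the simplest case (an illustration along the lines of Ref. [1], not part of this section): for the free particle, H = P²/2m, step (i) gives P(0) = m(X(τ) − X(0))/τ, step (ii) produces F(x, x′; τ) = m(x − x′)²/(2τ²) − iℏ/(2τ) after using [X(0), X(τ)] = iℏτ/m, and step (iii) integrates (7) into the familiar Gaussian kernel. A numerical check, in units ℏ = m = 1, that the free propagator indeed satisfies Eq. (5) with this F:

```python
import cmath, math

hbar = m = 1.0
x, xp = 0.7, 0.2    # arbitrary sample points

def K_free(tau):
    """One-dimensional free-particle propagator (tau > 0)."""
    return cmath.sqrt(m / (2j * math.pi * hbar * tau)) * \
           cmath.exp(1j * m * (x - xp)**2 / (2 * hbar * tau))

def F_free(tau):
    """Ordered-Hamiltonian function for H = P^2/2m (step (ii))."""
    return m * (x - xp)**2 / (2 * tau**2) - 1j * hbar / (2 * tau)

# Check i*hbar dK/dtau = F * K at tau = 0.3 by central differences:
tau, h = 0.3, 1e-6
dK = (K_free(tau + h) - K_free(tau - h)) / (2 * h)
assert abs(1j * hbar * dK - F_free(tau) * K_free(tau)) < 1e-6
```

Any τ > 0 and any pair (x, x′) would pass the check, since K solves the differential equation identically.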
III. EXAMPLES
A. Charged particle in an uniform magnetic field
As our first example, we consider the propagator of a
non-relativistic particle with electric charge e and mass
m, subjected to a constant and uniform magnetic field B.
Even though this is a genuine three-dimensional problem,
the extension of the results reviewed in the last section
to this case is straightforward. Since there is no electric
field present, the hamiltonian can be written as
H = (P − eA)²/(2m), (11)
where P is the canonical momentum operator, A is the
vector potential and π = P − eA is the gauge invariant
mechanical momentum operator. We choose the axes
such that the magnetic field is given by B = Be3. Hence,
the hamiltonian can be decomposed as
H = (π₁² + π₂²)/(2m) + π₃²/(2m) = H⊥ + π₃²/(2m), (12)
with an obvious definition for H⊥.
Since the motion along the OX₃ direction is free,
the three-dimensional propagator K(x,x′; τ) can be
written as a product of a two-dimensional propagator,
K⊥(r, r
′; τ), related to the magnetic field and a one-
dimensional free propagator, K₃⁽⁰⁾(x₃, x₃′; τ):
K(x, x′; τ) = K⊥(r, r′; τ) K₃⁽⁰⁾(x₃, x₃′; τ), (τ > 0) (13)
where r = x₁e₁ + x₂e₂ and K₃⁽⁰⁾(x₃, x₃′; τ) is the well-known
propagator of the free particle [24],
K₃⁽⁰⁾(x₃, x₃′; τ) = (m/(2πiℏτ))^{1/2} exp[ im(x₃ − x₃′)²/(2ℏτ) ]. (14)
In order to use Schwinger’s method to compute the two-
dimensional propagator K⊥(r, r
′; τ) = 〈r, τ |r′, 0〉, we
start by writing the differential equation
iℏ (∂/∂τ) 〈r, τ |r′, 0〉 = 〈r, τ |H⊥(R⊥(0), π⊥(0))|r′, 0〉 , (15)
where R⊥(τ) = X1(τ)e1 + X2(τ)e2 and π⊥(τ) =
π1(τ)e1+π2(τ)e2. In (15) |r, τ〉 and |r
′, 0〉 are the eigen-
vectors of position operators R(τ) = X1(τ)e1 +X2(τ)e2
and R(0) = X1(0)e1 + X2(0)e2, respectively. More
specifically, operators X₁(0), X₁(τ), X₂(0) and X₂(τ)
have the eigenvalues x₁′, x₁, x₂′ and x₂, respectively.
In order to solve the Heisenberg equations for operators
R⊥(τ) and π⊥(τ), we need the commutators
[Xi(τ), πj²(τ)] = 2iℏ δij πi(τ) ,   [πi(τ), πj²(τ)] = 2iℏ eB ǫij3 πj(τ), (16)
where ǫij3 is the usual Levi-Civita symbol. Introducing
the matrix notation
R(τ) = (X₁(τ), X₂(τ))ᵀ ;  Π(τ) = (π₁(τ), π₂(τ))ᵀ , (17)
and using the previous commutators the Heisenberg
equations of motion can be cast into the form
dR(τ)/dτ = Π(τ)/m , (18)
dΠ(τ)/dτ = 2ωC Π(τ) , (19)
where 2ω = eB/m is the cyclotron frequency and we
defined the anti-diagonal matrix
C = ( 0  1 ; −1  0 ) . (20)
Integrating equation (19) we find
Π(τ) = e^{2ωCτ} Π(0) . (21)
Substituting this solution in equation (18) and integrat-
ing once more, we get
R(τ) − R(0) = (sin(ωτ)/(mω)) e^{ωCτ} Π(0) , (22)
where we used the following properties of C matrix:
2 = −1l; C−1 = −C = CT , eαC = cos (α)1l + sin (α)C
with CT being the transpose of C. Combining equations
(22) and (21) we can write Π(0) in terms of the operators
R(τ) and R(0) as
Π(0) = (mω/sin(ωτ)) e^(−ωCτ) [ R(τ) − R(0) ] .  (23)
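The C-matrix algebra used above is easy to check numerically. The sketch below is our own (not part of the paper), with arbitrary values for α, m, ω and τ; it verifies C² = −1l, the exponential identity e^(αC) = cos(α)1l + sin(α)C by direct power-series summation, and that (23) indeed inverts (22).

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mat_scale(c, A):
    return [[c * x for x in row] for row in A]

def mat_exp(A, terms=40):
    # e^A summed as the power series sum_n A^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(mat_scale(1.0 / n, term), A)
        result = mat_add(result, term)
    return result

C = [[0.0, 1.0], [-1.0, 0.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

# C^2 = -1l
assert mat_mul(C, C) == [[-1.0, 0.0], [0.0, -1.0]]

# e^(alpha C) = cos(alpha) 1l + sin(alpha) C
alpha = 0.7
lhs = mat_exp(mat_scale(alpha, C))
rhs = mat_add(mat_scale(math.cos(alpha), I), mat_scale(math.sin(alpha), C))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Eq. (23) recovers Pi(0) from the R(tau) - R(0) of eq. (22)
m, w, tau = 1.0, 0.9, 0.4
p0 = [[0.3], [-0.5]]                  # column vector Pi(0), arbitrary
dR = mat_mul(mat_scale(math.sin(w * tau) / (m * w),
                       mat_exp(mat_scale(w * tau, C))), p0)
p0_back = mat_mul(mat_scale(m * w / math.sin(w * tau),
                            mat_exp(mat_scale(-w * tau, C))), dR)
assert all(abs(p0_back[i][0] - p0[i][0]) < 1e-10 for i in range(2))
print("C-matrix identities verified")
```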
In order to express H⊥ = (π1² + π2²)/(2m) in terms of R(τ)
and R(0), we use (23). In matrix notation, we have

H⊥ = Πᵀ(0)Π(0)/(2m)
   = (mω²)/(2 sin²(ωτ)) [ Rᵀ(τ)R(τ) + Rᵀ(0)R(0) − Rᵀ(τ)R(0) − Rᵀ(0)R(τ) ] .  (24)
The last term on the r.h.s. of (24) is not ordered appropri-
ately, as required in step (ii). The correct ordering
may be obtained as follows: first, we write

Rᵀ(0)R(τ) = Rᵀ(τ)R(0) + Σi [Xi(0), Xi(τ)] .  (25)
Using equation (22), the usual commutator
[Xi(0), πj(0)] = i~δij1l and the properties of matrix
C it is easy to show that
Σi [Xi(0), Xi(τ)] = (2iħ/(mω)) sin(ωτ) cos(ωτ) ,  (26)
so that the hamiltonian H⊥ with the appropriate time
ordering takes the form

H⊥ = (mω²)/(2 sin²(ωτ)) [ R²(τ) + R²(0) − 2Rᵀ(τ)R(0) ] − iħω cot(ωτ) .  (27)
Substituting this hamiltonian into equation (15) and in-
tegrating in τ , we obtain
〈r, τ |r′, 0〉 = (C(r, r′)/sin(ωτ)) exp[ (imω/(2ħ)) cot(ωτ) (r − r′)² ] ,  (28)
where C(r, r′) is an integration constant to be deter-
mined by conditions (8), (9) and (10), which for the case
at hand read
〈r, τ |πj(τ)|r′, 0〉 = [ −iħ ∂/∂xj − eAj(r) ] 〈r, τ |r′, 0〉 ,  (29)

〈r, τ |πj(0)|r′, 0〉 = [ iħ ∂/∂x′j − eAj(r′) ] 〈r, τ |r′, 0〉 ,  (30)

lim(τ→0⁺) 〈r, τ |r′, 0〉 = δ⁽²⁾(r − r′) .  (31)
In order to compute the matrix element on the l.h.s.
of (29), we need to express Π(τ) in terms of R(τ) and
R(0). From equations (21) and (23), we have
Π(τ) = (mω/sin(ωτ)) e^(ωCτ) [ R(τ) − R(0) ] ,  (32)
which leads to the matrix element

〈r, τ |πj(τ)|r′, 0〉 = mω [ cot(ωτ)(xj − x′j) + ǫjk3 (xk − x′k) ] 〈r, τ |r′, 0〉 ,  (33)

where we used the properties of matrix C, and the Einstein
convention of summing over repeated indices. Analogously,
the l.h.s. of equation (30) can be computed from (23),
〈r, τ |πj(0)|r′, 0〉 = mω [ cot(ωτ)(xj − x′j) − ǫjk3 (xk − x′k) ] 〈r, τ |r′, 0〉 .  (34)
Substituting equations (33) and (34) into (29) and
(30), respectively, and using (28), we have
[ iħ ∂/∂xj + eAj(r) + (e/2) Fjk (xk − x′k) ] C(r, r′) = 0 ,  (35)

[ iħ ∂/∂x′j − eAj(r′) + (e/2) Fjk (xk − x′k) ] C(r, r′) = 0 ,  (36)

where we defined Fjk = ǫjk3 B.
Our strategy to solve the above system of differential
equations is the following: we first integrate equation (35),
treating the variables r′ in this equation as constants. Then, we
impose that the result thus obtained is a solution of equa-
tion (36). With this goal, we multiply both sides of (35)
by dxj and sum over j, to obtain
iħ dC(r, r′)/C(r, r′) = −e [ Aj(r) + (1/2) Fjk (xk − x′k) ] dxj .  (37)
Integration of the previous equation leads to

C(r, r′) = C(r′, r′) exp{ (ie/ħ) ∫Γ [ Aj(ξ) + (1/2) Fjk (ξk − x′k) ] dξj } ,  (38)
where the line integral is assumed to be along a curve Γ, to
be specified in a moment. As we shall see, this line inte-
gral does not depend on the curve Γ joining r′ and r, as
expected, since the l.h.s. of (37) is an exact differential.
In order to determine the differential equation for
C(r′, r′) we must substitute expression (38) into equa-
tion (36). Doing that, and carefully using the funda-
mental theorem of calculus, it is straightforward
to show that

(∂/∂x′j) C(r′, r′) = 0 ,  (39)
which means that C(r ′, r ′) is a constant, C0, indepen-
dent of r ′. Noting that
[B × (ξ − r′)]j = −Fjk (ξk − x′k) ,  (40)
equation (38) can be written as

C(r, r′) = C0 exp{ (ie/ħ) ∫Γ [ A(ξ) − (1/2) B × (ξ − r′) ] · dξ } .  (41)
Observe, now, that the integrand in the previous equa-
tion has a vanishing curl,

∇ξ × [ A(ξ) − (1/2) B × (ξ − r′) ] = B − B = 0 ,
which means that the line integral in (41) is path in-
dependent. Choosing, for convenience, the straight line
from r′ to r, it can be readily shown that

∫Γsl [ B × (ξ − r′) ] · dξ = 0 ,
where Γsl denotes the straight line from r′ to r. With this
simplification, C(r, r′) takes the form
C(r, r′) = C0 exp{ (ie/ħ) ∫Γsl A(ξ) · dξ } .  (42)
Substituting the last equation into (28) and using the initial
condition (10), we readily obtain C0 = mω/(2πiħ). Therefore
the complete Feynman propagator for a charged particle
under the influence of a constant and uniform magnetic
field takes the form
K(x,x′; τ) = (mω)/(2πiħ sin(ωτ)) √( m/(2πiħτ) ) exp{ (ie/ħ) ∫ A(ξ) · dξ }
  × exp{ (imω/(2ħ)) cot(ωτ) (r − r′)² + (im/(2ħτ)) (x3 − x′3)² } ,  (43)
where in the above equation we omitted the symbol Γsl
but, of course, it is implicit that the line integral must be
done along a straight line, and we brought back the free
propagation along the OX3 direction. A few comments
about the above result are in order.
1. Firstly, we should emphasize that the line integral
which appears in the first exponential on the r.h.s.
of (43) must be evaluated along a straight line be-
tween r′ and r. If for some reason we want to choose
another path, then instead of the integral ∫ A(ξ) · dξ we
must evaluate ∫ [ A(ξ) − (1/2) B × (ξ − r′) ] · dξ.
2. Since we solved the Heisenberg equations for the
gauge invariant operators R⊥ and π⊥, our final
result is written for a generic gauge. Note that
the gauge-independent and gauge-dependent parts
of the propagator are clearly separated. The gauge
fixing corresponds to choose a particular expression
for A(ξ). Besides, from (43) we immediately obtain
the transformation law for the propagator under a
gauge transformation A → A + ∇Λ, namely,

K(r, r′; τ) ↦ e^( ieΛ(r)/ħ ) K(r, r′; τ) e^( −ieΛ(r′)/ħ ) .
Although this transformation law was obtained in
a particular case, it can be shown that it is quite
general.
3. It is interesting to show how the energy spectrum
(Landau levels), with the corresponding degener-
acy per unit area, can be extracted from prop-
agator (43). With this purpose, we recall that
the partition function can be obtained from the
Feynman propagator by taking τ = −i~β, with
β = 1/(kBT), and taking the spatial trace,
Z(β) = ∫ dx1 ∫ dx2 K(r, r; −iħβ) .
Substituting (43) into the last expression, we get

Z(β) = ∫ dx1 ∫ dx2 (mω)/(2πħ sinh(ħβω)) ,

where we used the fact that sin(−iθ) = −i sinh(θ).
Observe that the above result is divergent, since
the area of the OX 1X2 plane is infinite. This is a
consequence of the fact that each Landau level is
infinitely degenerate, though the degeneracy per
unit area is finite. In order to proceed, let us as-
sume an area as big as we want, but finite. Adopt-
ing this kind of regularization, we write
∫_{−L/2}^{L/2} dx1 ∫_{−L/2}^{L/2} dx2 K(r, r; −iħβ) ≈ (mω L²)/(2πħ sinh(ħβω))

= (L² eB)/(2πħ) · 1/( e^(ħβω) − e^(−ħβω) )

= (L² eB)/(2πħ) · e^(−ħβωc/2)/( 1 − e^(−ħβωc) )

= (L² eB)/(2πħ) Σ_{n=0}^{∞} e^(−β(n+1/2)ħωc) ,

where we denoted by ωc = eB/m = 2ω the cyclotron
frequency. Comparing this result with that of a
partition function whose energy level En has de-
generacy gn, given by
Z(β) = Σ_n g_n e^(−βEn) ,
we immediately identify the so-called Landau levels
and the corresponding degeneracy per unit area,

En = (n + 1/2) ħωc ;   gn/L² = eB/(2πħ) ,   (n = 0, 1, ...) .
B. Charged harmonic oscillator in a uniform
magnetic field
In this section we consider a particle with mass m and
charge e in the presence of a constant and uniform mag-
netic field B = Be3 and submitted to a 2-dimensional
isotropic harmonic oscillator potential in the OX 1X2
plane, with natural frequency ω0. Using the same no-
tation as before, we can write the hamiltonian of the
system in the form
H = H⊥ + π3²/(2m) ,  (44)

where

H⊥ = (π1² + π2²)/(2m) + (mω0²/2)(X1² + X2²) .  (45)
As before, the Feynman propagator for this problem
takes the form K(x,x′; τ) = K⊥(r, r′; τ) K₃(x3, x′3; τ),
with K₃(x3, x′3; τ) given by equation (14). The propa-
gator in the OX1X2-plane satisfies the differential equa-
tion (15) and will be determined by the same method
used in the previous example.
Using hamiltonian (45) and the usual commutation re-
lations the Heisenberg equations are given by
dR(τ)/dτ = (1/m) Π(τ) ,  (46)

dΠ(τ)/dτ = 2ω C Π(τ) − mω0² R(τ) ,  (47)
where we have used the matrix notation introduced in
(17) and (20). Equation (46) is the same as (18), but
equation (47) contains an extra term when compared to
(19). In order to decouple equations (46) and (47), we
differentiate (46) with respect to τ and then use (47).
This procedure leads to the following uncoupled equation
d²R(τ)/dτ² − 2ωC dR(τ)/dτ + ω0² R(τ) = 0 .  (48)
After solving this equation, R(τ) and Π(τ) are con-
strained to satisfy equations (46) and (47), respectively.
Straightforward algebra yields the solution
R(τ) = M− R(0) + N Π(0) ,  (49)

Π(τ) = M+ Π(0) − m²ω0² N R(0) ,  (50)

where we defined the matrices

N = (sin(Ωτ)/(mΩ)) e^(ωτC) ,  (51)

M± = e^(ωτC) [ cos(Ωτ) 1l ± (ω/Ω) sin(Ωτ) C ] ,  (52)

and the frequency Ω = √(ω² + ω0²). Using (49) and (50), we
write Π(0) and Π(τ) in terms of R(τ) and R(0),
Π(0) = N⁻¹ R(τ) − N⁻¹ M− R(0) ,  (53)

Π(τ) = M+ N⁻¹ R(τ) − [ M+ N⁻¹ M− + m²ω0² N ] R(0) .  (54)
Now, we must order appropriately the hamiltonian
operator H⊥ = Πᵀ(0)Π(0)/(2m) + (mω0²/2) Rᵀ(0)R(0),
which, with the aid of equation (53), can be written as

H⊥ = (1/(2m)) [ Rᵀ(τ)(N⁻¹)ᵀ − Rᵀ(0)(M−)ᵀ(N⁻¹)ᵀ ] [ N⁻¹R(τ) − N⁻¹M−R(0) ]
     + (mω0²/2) Rᵀ(0)R(0)

   = (mΩ²)/(2 sin²(Ωτ)) [ Rᵀ(τ) − Rᵀ(0)(M−)ᵀ ] [ R(τ) − M−R(0) ]
     + (mω0²/2) Rᵀ(0)R(0)

   = (mΩ²)/(2 sin²(Ωτ)) [ Rᵀ(τ)R(τ) − Rᵀ(τ)M−R(0) − Rᵀ(0)(M−)ᵀR(τ)
     + Rᵀ(0)(M−)ᵀM−R(0) ] + (mω0²/2) Rᵀ(0)R(0)

   = (mΩ²)/(2 sin²(Ωτ)) [ R²(τ) − Rᵀ(τ)M−R(0) − Rᵀ(0)(M−)ᵀR(τ) + R²(0) ] ,  (55)
where superscript T means transpose and we have used
the properties of the matrices N and M− given by (51)
and (52). In order to get the right time ordering, observe
first that
Rᵀ(0)(M−)ᵀ R(τ) = Rᵀ(τ) M− R(0) + Σi [ (M−R(0))i , Xi(τ) ] ,

where

Σi [ (M−R(0))i , Xi(τ) ] = iħ Tr[ N(M−)ᵀ ] = (iħ/(mΩ)) sin(2Ωτ) .

Using the last two equations in (55) we rewrite the
hamiltonian in the desired ordered form, namely,

H⊥ = (mΩ²)/(2 sin²(Ωτ)) [ R²(τ) + R²(0) − 2Rᵀ(τ)M−R(0) ]
     − (iħΩ/2) sin(2Ωτ)/sin²(Ωτ) .  (56)
For future convenience, let us define
U(τ) = cos(ωτ) cos(Ωτ) + (ω/Ω) sin(ωτ) sin(Ωτ) ,  (57)

V(τ) = sin(ωτ) cos(Ωτ) − (ω/Ω) cos(ωτ) sin(Ωτ) ,  (58)

and write the matrix M−, defined in (52), in the form

M− = U(τ) 1l + V(τ) C .  (59)
Substituting (59) in (56) we have

H⊥ = (mΩ²)/(2 sin²(Ωτ)) [ R²(τ) + R²(0) − 2U(τ) Rᵀ(τ)R(0) − 2V(τ) Rᵀ(τ)CR(0) ]
     − (iħΩ/2) sin(2Ωτ)/sin²(Ωτ) .  (60)
The next step is to compute the classical function
F (r, r′; τ). Using the following identities
(Ω U(τ))/sin²(Ωτ) = −(∂/∂τ)[ cos(ωτ)/sin(Ωτ) ] ,  (61)

(Ω V(τ))/sin²(Ωτ) = −(∂/∂τ)[ sin(ωτ)/sin(Ωτ) ] ,  (62)
into (60), we write F(r, r′; τ) in the convenient form

F(r, r′; τ) = (mΩ²/2)(r² + r′²) csc²(Ωτ) + mΩ r·r′ (∂/∂τ)[ cos(ωτ)/sin(Ωτ) ]
  + mΩ r·Cr′ (∂/∂τ)[ sin(ωτ)/sin(Ωτ) ] − iħΩ cos(Ωτ)/sin(Ωτ) .  (63)
Inserting this result into the differential equation

iħ (∂/∂τ) 〈r, τ |r′, 0〉 = F(r, r′; τ) 〈r, τ |r′, 0〉 ,

and integrating in τ, we obtain

〈r, τ |r′, 0〉 = (C(r, r′)/sin(Ωτ)) ×
  exp{ (imΩ/(2ħ)) [ (r² + r′²) cot(Ωτ)
  − 2 ( r·r′ cos(ωτ) + r·Cr′ sin(ωτ) )/sin(Ωτ) ] } ,  (64)
where C(r, r′) is an arbitrary integration constant to be
determined by conditions (29), (30) and (31). Using (54)
we can calculate the l.h.s. of condition (29),
〈r, τ |πj(τ)|r′, 0〉 = (mΩ/sin(Ωτ)) { cos(Ωτ) xj − cos(ωτ) x′j
  + ǫjk3 [ (ω/Ω) sin(Ωτ) xk − sin(ωτ) x′k ] } 〈r, τ |r′, 0〉 ,  (65)
and using (53) we get the l.h.s. of condition (30),

〈r, τ |πj(0)|r′, 0〉 = (mΩ/sin(Ωτ)) { cos(ωτ) xj − cos(Ωτ) x′j
  + ǫjk3 [ (ω/Ω) sin(Ωτ) x′k − sin(ωτ) xk ] } 〈r, τ |r′, 0〉 .  (66)
With the help of the simple identities

(∂/∂xj)(r² + r′²) = 2xj ;   (∂/∂x′j)(r² + r′²) = 2x′j ;

(∂/∂xj)(r · r′) = x′j ;   (∂/∂x′j)(r · r′) = xj ;

(∂/∂xj)(r · Cr′) = ǫjk3 x′k ;   (∂/∂x′j)(r · Cr′) = −ǫjk3 xk ,
and also using equation (64), we are able to compute the
right hand sides of conditions (29) and (30), which are
given, respectively, by

[ −iħ (1/C(r, r′)) ∂C(r, r′)/∂xj + mΩ (cos(Ωτ)/sin(Ωτ)) xj
  − mΩ (cos(ωτ)/sin(Ωτ)) x′j − mΩ (sin(ωτ)/sin(Ωτ)) ǫjk3 x′k
  − eAj(r) ] 〈r, τ |r′, 0〉 ,  (67)

[ iħ (1/C(r, r′)) ∂C(r, r′)/∂x′j − mΩ (cos(Ωτ)/sin(Ωτ)) x′j
  + mΩ (cos(ωτ)/sin(Ωτ)) xj − mΩ (sin(ωτ)/sin(Ωτ)) ǫjk3 xk
  − eAj(r′) ] 〈r, τ |r′, 0〉 .  (68)
Equating (65) and (67), and also (66) and (68), we get
the system of differential equations for C(r, r′):

[ iħ ∂/∂xj + e Aj(r) + (e/2) Fjk xk ] C(r, r′) = 0 ,  (69)

[ iħ ∂/∂x′j − e Aj(r′) − (e/2) Fjk x′k ] C(r, r′) = 0 .  (70)
Proceeding as in the previous example, we first integrate
(69). With this goal, we multiply it by dxj , sum over j and
integrate to obtain

C(r, r′) = C(r′, r′) exp{ (ie/ħ) ∫Γ [ Aj(ξ) + (1/2) Fjk ξk ] dξj } ,  (71)
where the path of integration Γ will be specified in a
moment. Inserting expression (71) into the second differ-
ential equation (70), we get

(∂/∂x′j) C(r′, r′) = 0  =⇒  C(r′, r′) = C0 ,  (72)
where C0 is a constant independent of r′, so that equa-
tion (71) can be cast, after some convenient rearrange-
ments, into the form

C(r, r′) = C0 exp{ (ie/ħ) ∫Γ [ A(ξ) − (1/2) B × ξ ] · dξ } .
Note that the integrand has a vanishing curl so that we
can choose the path of integration Γ at our will. Choos-
ing, as before, the straight line between r′ and r, it can
be shown that

∫Γsl [ A(ξ) − (1/2) B × ξ ] · dξ = ∫Γsl A(ξ) · dξ + (B/2) r · Cr′ ,  (73)
where, for simplicity of notation, we omitted the symbol
Γsl indicating that the line integral must be done along
a straight line. From equations (71), (72) and (73), we get

C(r, r′) = C0 exp{ (ie/ħ) ∫ A(ξ) · dξ + (ieB/(2ħ)) r · Cr′ } ,  (74)
which substituted back into equation (64) yields

〈r, τ |r′, 0〉 = (C0/sin(Ωτ)) exp{ (ie/ħ) ∫ A(ξ) · dξ } ×
  exp{ (imΩ/(2ħ sin(Ωτ))) [ (r² + r′²) cos(Ωτ) − 2 r·r′ cos(ωτ)
  − 2 ( sin(ωτ) − (ω/Ω) sin(Ωτ) ) r·Cr′ ] } .  (75)
The initial condition implies C0 = mΩ/(2πi~). Hence,
the desired Feynman propagator is finally given by
K(x,x′; τ) = K⊥(r, r′; τ) K₃(x3, x′3; τ)

= (mΩ)/(2πiħ sin(Ωτ)) √( m/(2πiħτ) ) exp{ (ie/ħ) ∫ A(ξ) · dξ }
  × exp{ (imΩ/(2ħ sin(Ωτ))) [ cos(Ωτ)(r² + r′²) − 2 cos(ωτ) r·r′
  − 2 ( sin(ωτ) − (ω/Ω) sin(Ωτ) ) r·Cr′ ] + (im/(2ħτ)) (x3 − x′3)² } ,  (76)
where we brought back the free part of the propagator
corresponding to the movement along the OX 3 direction.
Of course, for ω0 = 0 we reobtain the propagator found
in our first example and for B = 0 we reobtain the prop-
agator for a bidimensional oscillator in the OX 1X2 plane
multiplied by a free propagator in the OX 3 direction, as
can be easily checked.
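Both limits can indeed be checked numerically. The snippet below is our own sketch (with ħ = m = 1 and made-up coordinates); it compares the gauge-independent transverse factor of (76) against the Landau result of (43) for ω0 → 0, and against the conventional two-dimensional harmonic-oscillator propagator for ω → 0 (i.e. B = 0).

```python
import cmath, math

hbar = m = 1.0
tau = 0.9
r, rp = (0.3, -0.2), (0.1, 0.4)            # arbitrary points in the plane
r2 = r[0]**2 + r[1]**2
rp2 = rp[0]**2 + rp[1]**2
dot = r[0]*rp[0] + r[1]*rp[1]
cross = r[0]*rp[1] - r[1]*rp[0]            # r . C r'
d2 = (r[0]-rp[0])**2 + (r[1]-rp[1])**2

def K_perp(w, w0):
    # gauge-independent transverse factor of eq. (76)
    W = math.sqrt(w*w + w0*w0)
    s = math.sin(W*tau)
    pref = m*W / (2*math.pi*1j*hbar*s)
    expo = (1j*m*W/(2*hbar*s)) * (math.cos(W*tau)*(r2 + rp2)
           - 2*math.cos(w*tau)*dot
           - 2*(math.sin(w*tau) - (w/W)*s)*cross)
    return pref * cmath.exp(expo)

# omega_0 -> 0: Landau propagator, transverse part of eq. (43)
w = 0.6
sl = math.sin(w*tau)
K_landau = m*w/(2*math.pi*1j*hbar*sl) * cmath.exp(
    1j*m*w*math.cos(w*tau)/sl * d2/(2*hbar))
print(abs(K_perp(w, 1e-5) - K_landau))      # -> ~0

# omega -> 0 (B = 0): standard 2D harmonic-oscillator propagator
w0 = 0.6
s0 = math.sin(w0*tau)
K_osc = m*w0/(2*math.pi*1j*hbar*s0) * cmath.exp(
    1j*m*w0/(2*hbar*s0) * (math.cos(w0*tau)*(r2 + rp2) - 2*dot))
print(abs(K_perp(0.0, w0) - K_osc))         # -> ~0
```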
Regarding the gauge dependence of the propagator,
the same comments made before remain valid here,
namely: the above expression is written for a generic
gauge, the transformation law for the propagator under a
gauge transformation is the same as before, etc. We fin-
ish this section, extracting from the previous propagator,
the corresponding energy spectrum. With this purpose,
we first compute the trace of the propagator,
∫ dx1 ∫ dx2 K⊥(x1, x1, x2, x2; τ)
= (mΩ)/(2πiħ sin(Ωτ)) ∫ dx1 ∫ dx2 exp{ (imΩ/(ħ sin(Ωτ))) [ cos(Ωτ) − cos(ωτ) ] (x1² + x2²) }
= 1/( 2[cos(Ωτ) − cos(ωτ)] ) ,  (77)
where we used the well known result for the Fresnel in-
tegral. Using now the identity
cos(Ωτ) − cos(ωτ) = −2 sin[(Ω+ω)τ/2] sin[(Ω−ω)τ/2] ,
we get for the corresponding energy Green function

G(E) = −i ∫₀^∞ dτ e^(iEτ/ħ) ∫ dx1 ∫ dx2 K⊥(x1, x1, x2, x2; τ)

     = −i ∫₀^∞ dτ e^(iEτ/ħ) Σ_{l=0}^{∞} Σ_{n=0}^{∞} e^(−i(l+1/2)(Ω+ω)τ) e^(−i(n+1/2)(Ω−ω)τ) ,

where it is tacitly assumed that E → E − iε, and we also
used that (with the assumption ν → ν − iǫ)

1/( 2i sin(ντ/2) ) = Σ_{n=0}^{∞} e^(−i(n+1/2)ντ) .
Changing the order of integration and summations, and
integrating in τ , we finally obtain
G(E) = Σ_{l,n=0}^{∞} ħ/(E − Enl) ,  (78)
where the poles of G(E), which give the desired energy
levels, are identified as
Enl = (l+n+1)~Ω+(l−n)~ω , (l, n = 0, 1, ...) . (79)
The Landau levels can be reobtained from the previous
result by simply taking the limit ω0 → 0:
Enl −→ (2l + 1)ħω = (l + 1/2) ħωc ,  (80)
with l = 0, 1, ... and ωc = eB/m, in agreement with the
result we had already obtained before.
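This limit is trivial to automate; the two-line check below is ours (arbitrary units, with ω = eB/2m as in the text, so that ωc = 2ω):

```python
import math

hbar, w = 1.0, 0.8          # arbitrary units; omega = eB/(2m)

def E(l, n, w0):
    """Energy levels of eq. (79)."""
    W = math.sqrt(w * w + w0 * w0)
    return (l + n + 1) * hbar * W + (l - n) * hbar * w

for l in range(4):
    for n in range(4):
        # omega_0 -> 0 gives (2l+1) hbar omega, independent of n
        assert abs(E(l, n, 1e-6) - (2 * l + 1) * hbar * w) < 1e-9
print("omega_0 -> 0 reproduces the Landau levels")
```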
IV. FINAL REMARKS
In this paper we reconsidered, in the context of
Schwinger’s method, the Feynman propagators of two
well known problems, namely, a charged particle under
the influence of a constant and uniform magnetic field
(Landau problem) and the same problem in which we
added a bidimensional harmonic oscillator potential. Al-
though these problems had already been treated from
the point of view of Schwinger’s action principle, the
novelty of our work relies on the fact that we solved
the Heisenberg equations for gauge invariant operators.
This procedure has some nice properties, as for instance:
(i) the Feynman propagator is obtained in a generic
gauge; (ii) the gauge-dependent and gauge-independent
parts of the propagator appear clearly separated and
(iii) the transformation law for the propagator under a
gauge transformation can be readily obtained. Besides,
we adopted a matrix notation which can be straightfor-
wardly generalized to cases of relativistic charged parti-
cles in the presence of constant electromagnetic fields and
a plane wave electromagnetic field, treated by Schwinger
[2]. For completeness, we showed explicitly how one can
obtain the energy spectrum directly from the Feynman
propagator. In the Landau problem, we obtained the (in-
finitely degenerate) Landau levels with the correspond-
ing degeneracy per unit area. For the case where we
included the bidimensional harmonic potential, we ob-
tained the energy spectrum after identifying the poles of
the corresponding energy Green function. We hope that
this pedagogical paper may be useful for undergraduate
as well as graduate students and that these two simple
examples may enlarge the (up to now) small list of non-
relativistic problems that have been treated by such a
powerful and elegant method.
Acknowledgments
F.A. Barone, H. Boschi-Filho and C. Farina would like
to thank Professor Marvin Goldberger for a private com-
munication and for kindly sending his lecture notes on
quantum mechanics where this method was explicitly
used. We would like to thank CNPq and Fapesp (Brazil-
ian agencies) for partial financial support.
[1] F.A. Barone, H. Boschi-Filho and C. Farina, Three meth-
ods for calculating the Feynman propagator, Am. J. Phys.
71, 483-491 (2003).
[2] J. Schwinger, On Gauge Invariance And Vacuum Polar-
ization, Phys. Rev. 82, 664 (1951).
[3] Claude Itzykson and Jean-Bernard Zuber, Quantum
Field Theory, (McGraw-Hill Inc., NY, 1980), pg 100.
[4] E.S. Fradkin, D.M. Gitman and S.M. Shvartsman, Quan-
tum Electrodynamics with Unstable Vacuum (Springer,
Berlin, 1991).
[5] V. V. Dodonov, I. A. Malkin, and V. I. Manko, “In-
variants and Green’s functions of a relativistic charged
particle in electromagnetic fields,” Lett. Nuovo Cimento
Soc. Ital. Fis. 14, 241-244 (1975).
[6] V. V. Dodonov, I. A. Malkin, and V. I. Manko, “Green’s
functions for relativistic particles in non-uniform external
fields,” J. Phys. A 9, 1791- 1796 (1976).
[7] J. D. Likken, J. Sonnenschein, and N. Weiss, “The theory
of anyonic superconductivity: A review,” Int. J. Mod.
Phys. A 6, 5155-5214 (1991).
[8] A. Ferrando and V. Vento, “Hadron correlators and the
structure of the quark propagator,” Z. Phys. C63, 485
(1994).
[9] H. Boschi-Filho, C. Farina and A. N. Vaidya,
“Schwinger’s method for the electron propagator in a
plane wave field revisited,” Phys. Lett. A 215, 109-112
(1996).
[10] S. P. Gavrilov, D. M. Gitman and A. E. Goncalves, “QED
in external field with space-time uniform invariants: Ex-
act solutions,” J. Math. Phys. 39, 3547 (1998).
[11] D. G. C. McKeon, I. Sachs and I. A. Shovkovy, “SU(2)
Yang-Mills theory with extended supersymmetry in a
background magnetic field,” Phys. Rev. D59, 105010
(1999).
[12] T. K. Chyi, C. W. Hwang, W. F. Kao, G. L. Lin,
K. W. Ng and J. J. Tseng, “The weak-field expansion for
processes in a homogeneous background magnetic field,”
Phys. Rev. D 62, 105014 (2000).
[13] N. C. Tsamis and R. P. Woodard, “Schwinger’s propaga-
tor is only a Green’s function,” Class. Quant. Grav. 18,
83 (2001).
[14] M. Chaichian, W. F. Chen and R. Gonzalez Felipe,
“Radiatively induced Lorentz and CPT violation in
Schwinger constant field approximation,” Phys. Lett. B
503, 215 (2001).
[15] J. M. Chung and B. K. Chung, “Induced Lorentz-
and CPT-violating Chern-Simons term in QED: Fock-
Schwinger proper time method,” Phys. Rev. D 63,
105015 (2001).
[16] H. Boschi-Filho, C. Farina and A.N. Vaidya, Schwinger’s
method for the electron propagator in a plane wave field
revisited, Phys. Lett. A215, 109-112 (1996).
[17] Luis F. Urrutia and Eduardo Hernandez, Calculation of
the Propagator for a Time-Dependent Damped, Forced
Harmonic Oscillator Using the Schwinger Action Princi-
ple, Int. J. Theor. Phys. 23, 1105-1127 (1984)
[18] L.F. Urrutia and C. Manterola, Propagator for the
Anisotropic Three-Dimensional Charged Harmonic Os-
cillator in a Constant Magnetic Field Using the
Schwinger Action Principle, Int. J. Theor. Phys. 25, 75-
88 (1986).
[19] N. J. M. Horing, H. L. Cui, and G. Fiorenza, Nonrel-
ativistic Schrodinger Green's function for crossed time-
dependent electric and magnetic fields, Phys. Rev. A34,
612-615 (1986).
[20] C. Farina and Antonio Segui-Santonja, Schwinger’s
method for a harmonic oscillator with a time-dependent
frequency, Phys. Lett. A184, 23-28 (1993).
[21] S.J. Rabello and C. Farina, Gauge invariance and the
path integral, Phys. Rev. A51, 2614-2615 (1995).
[22] J. Schwinger, Quantum Mechanics: Symbolism of Atomic
Measurements, edited by B.G. Englert (Springer, 2001).
[23] Private communication with Professor M. Goldberger.
We thank him for kindly sending us a copy of his notes
on quantum mechanics given at Princeton for more than
ten years.
[24] R.P. Feynman and A.R. Hibbs, Quantum Mechanics and
Path Integrals (McGraw-Hill, New York, 1965).
0704.1646 | A linear RFQ ion trap for the Enriched Xenon Observatory | A linear RFQ ion trap for the Enriched Xenon
Observatory
B. Flatt a,∗, M. Green a, J. Wodin a, R. DeVoe a, P. Fierlinger a,
G. Gratta a, F. LePort a, M. Montero Dı́ez a, R. Neilson a,
K. O’Sullivan a, A. Pocar a, S. Waldman a,1, E. Baussan b,
M. Breidenbach c, R. Conley c, W. Fairbank Jr. d, J. Farine e
C. Hall c,2, K. Hall d, D. Hallman e, C. Hargrove f , M. Hauger b,
J. Hodgson c, F. Juget b, D.S. Leonard g, D. Mackay c,
Y. Martin b, B. Mong d, A. Odian c, L. Ounalli b, A. Piepke g,
C.Y. Prescott c, P.C. Rowson c, K. Skarpaas c, D. Schenker b,
D. Sinclair f , V. Strickland f , C. Virtue e, J.-L. Vuilleuimier b,
J.-M. Vuilleuimier b, K. Wamba c, P. Weber b
aPhysics Department, Stanford University, Stanford CA, USA
bInstitut de Physique, Université de Neuchatel, Neuchatel, Switzerland
cStanford Linear Accelerator Center, Menlo Park CA, USA
dPhysics Department, Colorado State University, Fort Collins CO, USA
ePhysics Department, Laurentian University, Sudbury ON, Canada
fPhysics Department, Carleton University, Ottawa ON, Canada
gDept. of Physics and Astronomy, University of Alabama, Tuscaloosa AL, USA
Abstract
The design, construction, and performance of a linear radio-frequency ion trap
(RFQ) intended for use in the Enriched Xenon Observatory (EXO) are described.
EXO aims to detect the neutrinoless double-beta decay of 136Xe to 136Ba. To sup-
press possible backgrounds EXO will complement the measurement of decay energy
and, to some extent, topology of candidate events in a Xe filled detector with the
identification of the daughter nucleus (136Ba). The ion trap described here is capa-
ble of accepting, cooling, and confining individual Ba ions extracted from the site of
the candidate double-beta decay event. A single trapped ion can then be identified,
with a large signal-to-noise ratio, via laser spectroscopy.
Key words: RFQ trap, EXO, fluorescence spectroscopy
PACS: 34.10.+x, 42.62.Fi, 14.60.Pq
Preprint submitted to Elsevier 30 October 2018
1 Introduction
In the last decade, compelling evidence for flavor mixing in the neutrino sec-
tor has clearly shown that neutrinos have finite masses [1]. These experiments
reveal mass differences between single mass eigenstates, but not their absolute
values. The measurement of such masses has become arguably the most im-
portant frontier in neutrino physics, with implications in astrophysics, particle
physics, and cosmology. β-decay endpoint spectroscopy measurements provide
an increasingly sensitive probe of neutrino mass [2]. However, a less direct but
potentially more sensitive technique is the observation and measurement of
the rate of neutrinoless double-beta (0νββ) decay [3]. The discovery of this
exotic nuclear decay mode would provide an absolute scale for neutrino masses
and establish the existence of two-component Majorana particles [4].
Sensitivity to Majorana neutrino masses in the interesting 10 - 100 meV re-
gion is achievable by experiments utilizing a ton-scale 0νββ isotope source [3].
This assumes that backgrounds from natural radioactivity, cosmic rays, and
the standard-model two-neutrino double-beta (2νββ) decay can be sufficiently
reduced and understood. Several proposals exist to perform this daunting
task [3]. The Enriched Xenon Observatory (EXO) is designed to identify the
atomic species (136Ba) produced in the decay process, using high resolution
atomic spectroscopy [5]. This isotope specific “Ba tagging,” working in con-
junction with more conventional measurements of decay energy and crude
event topology, will potentially provide a clean signature of 0νββ decay.
The EXO collaboration is currently pursuing a 0νββ detector R&D program,
focusing on a time projection chamber (TPC) filled with xenon enriched to
80% 136Xe in liquid (LXe) or gaseous (GXe) phase. Many of the detector
parameters and, in particular, the details of the Ba tagging technique would
be different in LXe and GXe. The ion trap described here is designed to accept,
trap, and cool individual Ba ions extracted from a 0νββ detector. While the
technique to efficiently transport ions from their production site is still under
investigation (and is beyond the scope of this article), the ion trap discussed
here is optimized to operate with a LXe detector and a mechanical system
to retrieve and inject the ions. This ion trap is capable of confining ions for
extended periods of time (∼ min) to a small volume (∼ (500 µm)3), essential
for observing single ions via laser spectroscopy with a high signal-to-noise
ratio. These properties are required to drastically suppress candidate decays
∗ Corresponding author. Address: Physics Department, Stanford University, 382
Via Pueblo Mall, Stanford, CA 94305, USA. Tel: +1-650-723-2946; fax: +1-650-
725-6544.
Email address: [email protected] (B. Flatt).
1 Now at Caltech, Pasadena CA, USA
2 Now at University of Maryland, College Park MD, USA
that do not create Ba ions in the TPC, while maintaining a high detection
efficiency for Ba-ion-producing events. In addition, this trap can operate in
the presence of some Xe contamination, which is likely in any Ba tagging
system coupled to a Xe filled detector. The ion trap system described here is
designed to be appropriately flexible as an R&D device. Simplifications and
modifications of this system can be adopted for the actual trap to be used in
EXO.
2 Linear RFQ traps and buffer gas cooling
RF Paul traps confine charged particles in a quadrupole RF field [7]. Spherical
traps have a closed geometry consisting of a ring and two endcap electrodes,
providing trapping in three spatial dimensions. Linear Paul traps generally
consist of four parallel cylindrical electrodes, placed symmetrically about a
central (longitudinal) axis, as shown in fig. 1. An RF field applied across
diagonally opposing electrodes provides transverse (x− y plane in fig. 1) con-
finement of the ion.
Fig. 1. Schematic of the linear RF trap. Ions are loaded in S3 and stored in S14. The
lower part of the figure shows the DC potential distribution.
Appropriately chosen DC voltages, applied to longitudinally (z axis in fig. 1)
segmented electrodes, provide longitudinal confinement. A single group of four
symmetrically placed electrodes is referred to as a “segment”. Electrodes are
constructed and positioned such that their radius, re, is related to the char-
acteristic radial trap size, r0, by
re = 1.148r0 (1)
where r0 is the distance from the axis of the trap to the innermost edge of an
electrode. This configuration creates the closest approximation to a hyperbolic
potential at the trap center for cylindrically shaped electrodes [6]. The ion’s
orbit in the transverse plane is described by the Mathieu equation [9]. Analysis
of the solutions to this equation reveals stability criteria for the ion's motion in
the trap. The dimensionless Mathieu stability parameters, a and q, are defined
as

q = 2eURF/(m r0² ωRF²) ,  (2)

a = 4eUDC/(m r0² ωRF²) ,  (3)

where e and m are the ion's charge and mass, ωRF is the angular RF
frequency, URF is the RF voltage (amplitude), and UDC is the DC voltage.
Values of a and q between 0 and 0.91, falling within a region defined by
the characteristic numbers of the Mathieu equation, correspond to stable ion
orbits [8].
Transverse confinement is attributed to a pseudopotential

VRF(r) = (q URF/8) (r²/r0²) ,  (4)

quadratically dependent on the radial distance, r, from the longitudinal axis of
the trap. The DC voltages applied to the longitudinally segmented electrodes
are chosen to create a trapping potential
VDC(r, z) = (UDC/z0²) ( z² − r²/2 ) ,  (5)

where z and z0 are the longitudinal coordinate and length of the segment in
which the ion is trapped (the “trapping segment”). The radial dependence of
the longitudinal potential well arises from Laplace’s equation applied to the
interior region of the ion trap. This radial defocusing reduces the depth of the
transverse pseudopotential well, created by the RF field. The total potential
well used to trap ions is the sum of eqns. 4 and 5.
The open geometry of this type of trap allows for an unobstructed view of
the trapping region, with large optical angular acceptance. In addition, the
longitudinal electrode segmentation allows for multiple configurations of the
longitudinal potentials as required for the injection, trapping, and ejection of
a single ion.
In order to confine an energetic ion of mass m injected from outside the trap,
a mechanism of energy loss must be provided in order to dissipate the ion’s
kinetic energy to below both eVRF and eVDC. Collisions with a "buffer" gas
of mass mB can provide such an energy-loss mechanism. The phenomenology
of ion-neutral interactions in an RF Paul trap can be divided into three cases.
If mB ≪ m, the ion is cooled via a large number of ion-neutral collisions,
each exchanging a small amount of energy and momentum. In this case, the
cooling process is adiabatic compared to the period of the ion’s motion in
the RF field, and the pseudopotential formulation is valid during the cooling
process. If mB ≫ m, each collision can add or remove substantial momentum
and energy from a trapped ion. This large instantaneous momentum transfer
can alter the ion’s trajectory appreciably, which may result in energy transfer
from the RF-field to the ion (”RF heating”). Under these conditions, the ion
is unstable in the trap, and is rapidly ejected. In the intermediate regime,
mB ≈ m, a form of RF heating also occurs and the amount of time that a
single ion is trapped depends on trap parameters.
3 Simulation of ion cooling and trapping
The DC and RF voltage amplitudes, the longitudinal dimensions of the indi-
vidual segments, and the buffer gas pressure and type are optimized using the
SIMION 7.0 simulation package 3 for single ion stability. Ion-neutral collisions
are implemented using a hard-sphere model with a variable radius, depending
on the buffer gas and trapped ion species. This model, applicable in the case
of a single atomic ion interacting with a noble buffer gas [12], uses a cross sec-
tion dependent on the ion’s velocity, v, and buffer gas polarizability to account
for the dipole moment of the neutral atom induced by the ion. Collisions are
implemented by specifying a mean free path λ, the buffer gas mass mB, and
a buffer gas temperature TB. The probability that a trapped ion collides with
a buffer gas atom in a time interval ∆t is given by
P (∆t) = 1− e−v∆t/λ (6)
Before each time-step of the ion’s trajectory, a random number is chosen.
This number is used to decide if a collision occurs during that time-step.
If a collision occurs, the kinematics of the collision are calculated assuming
that the velocity distribution of the neutral buffer gas atoms follows Maxwell-
Boltzmann statistics with a temperature TB.
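The collision step just described can be sketched in a few lines. The toy Monte Carlo below is our own one-dimensional illustration, not SIMION's model: the mean free path is made up, and a head-on hard-sphere energy exchange replaces the full three-dimensional, velocity-dependent kinematics. It nevertheless shows the essential physics of a ~10 eV Ba ion thermalizing against room-temperature He.

```python
import math, random

random.seed(1)
u = 1.66e-27                       # atomic mass unit [kg]
m, mB = 136 * u, 4 * u             # 136Ba+ ion, He buffer
kB, T = 1.38e-23, 300.0
lam, dt = 1e-3, 1e-8               # mean free path [m] (made up), time step [s]
v = 3.8e3                          # initial ion speed [m/s], roughly 10 eV

sigma = math.sqrt(kB * T / mB)     # 1D Maxwell-Boltzmann spread of the buffer
for _ in range(200000):
    # collision probability per step, eq. (6)
    if random.random() < 1.0 - math.exp(-abs(v) * dt / lam):
        uB = random.gauss(0.0, sigma)                  # buffer-gas partner
        v = ((m - mB) * v + 2 * mB * uB) / (m + mB)    # 1D elastic collision

print("final kinetic energy [eV]:", 0.5 * m * v * v / 1.602e-19)
```

Because mB ≪ m here, each collision removes only a small fraction of the ion's energy, and the final energy settles near the thermal scale kB·T rather than the injection energy.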
The total longitudinal trap length, 604 mm, is chosen as a result of cooling
simulations of an ion at various initial kinetic energies, interacting with a
3 http://www.sisweb.com/simion.htm
range of buffer gases (He, Ar, Kr, and Xe). The trap is split into 16 segments,
to provide sufficient versatility in shaping the longitudinal field for different
phases of the R&D program. The segments are labeled Si, where i runs from
1 to 16 as shown in fig. 1. The segments are chosen to be 40 mm long, except
for a single short 4 mm segment (S14, the “trapping segment”), used to tightly
confine the ion longitudinally, optimizing the single ion fluorescence signal-to-
noise ratio.
The DC potential profile chosen for most operations is UDC = {0V, -0.1V, -
0.2V, ..., -1.2V, -8.0V, -1.2V, +10V}, with the minimum of -8 V at S14. Because
of a limitation in the number of input parameters of SIMION, this profile is
approximated as UDC = {+10V, +10V, +10V, -0.4V, -0.5V, ..., -1.3V, -1.2V,
-8.0V, -1.2V, +10V}, in the simulation. This does not appreciably affect the
ion cooling at the potential minimum.
The values of the trap radius, r0, and electrode diameter, re are chosen to
optimize the external optical access to a trapped ion, as well as the shape of
the RF field. The electrode radius is re = 3 mm, resulting in r0 = 2.61 mm
(see eqn. 1). The RF frequency is ωRF/2π = 1.2 MHz, with an amplitude of
150 V. These parameters correspond to q = 0.52 and a = 0.05 (see eqn. 3),
well within the region of stability of the Mathieu equation.
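As a cross-check, the quoted stability parameter can be reproduced from the textbook linear-quadrupole expression q = 2eV_RF/(m r0² ω_RF²); this closed form is assumed here since eqn. 3 itself is not reproduced above, and the small difference from the quoted q = 0.52 presumably reflects rounding of the inputs:

```python
import math

def mathieu_q(V_rf, r0, f_rf, m_amu):
    """Mathieu q for a linear RF quadrupole: q = 2 e V_rf / (m r0^2 w^2).
    V_rf: RF amplitude (V), r0: field radius (m), f_rf: drive frequency (Hz),
    m_amu: ion mass in atomic mass units."""
    e = 1.602176634e-19      # elementary charge, C
    amu = 1.66053906660e-27  # atomic mass unit, kg
    omega = 2.0 * math.pi * f_rf
    return 2.0 * e * V_rf / (m_amu * amu * r0**2 * omega**2)

# Parameters of the trap described in the text
q = mathieu_q(V_rf=150.0, r0=2.61e-3, f_rf=1.2e6, m_amu=136.0)
# q evaluates to about 0.55, inside the first Mathieu stability region
```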
An example of the simulated cooling process is shown in fig. 2. In this simula-
tion, a single Ba ion is created in S3, with an energy of 10 eV in 1× 10−2 torr
He. The ion’s kinetic energy and z-trajectory are plotted during the initial
cooling (panels a and b), and after the ion is confined in the potential well at
S14 (panels c and d). During the initial cooling, the ion is reflected back and
forth longitudinally in the trap. On average, the ion loses energy with each col-
lision with a He atom. Once the ion is confined to S14, the ion continues to cool
to the minimum, until it comes into thermal equilibrium with the buffer gas.
The same processes are shown in fig. 3 in the case of Ar as a buffer gas. The
ion cools much faster in the presence of Ar; however, the frequency and ampli-
tude of RF heating collisions increase as well. Ar is therefore a more efficient
cooling gas for Ba ions, though the higher rate of RF heating likely decreases
the ion’s stability in the trap. Whereas SIMION is useful for studying these
cooling and heating processes, reliable trajectory simulation is limited to the
timescale of a few seconds. This is due to finite computational resources, as
well as error buildup during trajectory integration. For this reason, single ion
storage times longer than a few seconds, relevant for the study of RF heating
and ion deconfinement, cannot be simulated.
Fig. 2. Simulation of a single 136Ba+ in He (1 · 10−2torr) using SIMION. The ion
in the simulation was started in the center of segment S3 of the trap. Panel (a)
shows the collisional cooling during the first few hundred µs after the start of the
simulation. Panel (b) shows the respective trajectory. Panels (c) and (d) show the
evolution of the same quantities on a longer time scale. The ion is confined to the
trapping segment and it cools further down to the buffer gas temperature.
Fig. 3. The same simulation as in fig.2 of a single 136Ba+ in Ar at a pressure of
3.7 · 10−3torr. Faster cooling and larger momentum transfer collisions are evident.
4 Trap construction
A single trap segment is made of four stainless steel tubes threaded onto a
center stainless steel rod, as shown in an exploded view in fig. 4. Vespel 4
tube spacers insulate each segment from its neighbors, and from the center
4 Vespel is a trademark of DuPont de Nemours
rod. Special care is taken to ensure that all vespel parts are recessed behind
conductors, in order to avoid any insulator charging that could affect the DC
field inside the trap. Details of the RF and DC feed circuitry for two segments
are shown in fig. 5. A DC voltage is applied to all four electrodes in a segment,
using a 16-bit computer controlled DAC. The RF is applied to one diagonal
pair of electrodes in a segment, while the other pair is RF grounded through
a capacitor. The RF signal is supplied by a function generator 5 , which is
amplified by a broadband 50 dB amplifier 6 , internally back-terminated with
50 Ω. The system can deliver the RF voltage required without the use of a
tuned circuit. Each segment has a capacitance of ∼ 18 pF; however, the total
capacitance of the trap is closer to 600 pF due to contributions from cables
and vacuum feedthroughs.
Fig. 4. Exploded view of electrodes 13-16, showing the internal support and electrical
insulation.
The whole trap is housed in a custom-made, electropolished stainless steel
UHV tank pumped by a turbomolecular pump 7 backed by a dry scroll pump 8 (fig. 6).
A septum inside the vacuum tank allows for the installation of an aperture,
to be used in a differentially pumped scheme (not used for the work described
here), to maintain different buffer gas pressures in the injection and trapping
regions of the system. The pressure in the tank is read out in the upper (injec-
tion) and lower (trapping) regions of the vacuum system by vacuum gauges 9 .
A gas handling manifold, connected to the trap by a computer-controlled leak
5 HP 8656B
6 ENI A150
7 Pfeiffer TMU521P
8 BOC Edwards XDS5
9 Pfeiffer PKR251
valve 10 , allows for the introduction of individual buffer gas species and binary
mixtures (fig. 7). The leak valve keeps the buffer gas pressure in the vacuum
chamber constant by regulating gas flow into the chamber, based on the vac-
uum gauge measurement closest to S14. The turbo pump runs continuously, so
that the gas pressure can be either increased or decreased at any time. Using
this method, the gas pressure in the vacuum system can be regulated between
3× 10−9 and 1× 10−2 torr, with a stability of ≤ 1 %. The lower range is the
limit of the vacuum gauges, while the upper limit is the maximum allowable
pressure in front of the turbo-pump running at full speed. The upper pressure
limit can be extended if required, with simple modifications to the vacuum
system. Before any ion trapping operations begin, the entire vacuum system
is baked for two days at 135 ◦C in an oven that completely encloses the tank.
After bakeout, the system reaches a base pressure of < 3× 10−9 torr.
Fig. 5. Electrical schematics of the ion trap. Only two of 16 identical segments are
shown.
Fluorescence from a trapped Ba ion is induced, following the classic “shelving”
scheme [13], by resonant lasers cycling the ion between the 6S1/2 (ground),
6P1/2 (excited), and 5D3/2 (metastable) states. The ion undergoes sponta-
neous emission when in the 6P1/2 state, emitting either a 493 nm or 650 nm
photon. The 493 nm fluorescence photons are the signal collected for this ex-
periment. The 6S1/2 ↔ 6P1/2 transition at 493 nm is excited by a frequency-
10 Pfeiffer EVR116
Fig. 6. Cutaway view of the vacuum system with the linear trap mounted inside.
doubled external-cavity diode laser (ECDL) 11 . The 6P1/2 ↔ 5D3/2 transition
at 650 nm is excited by an ECDL 12 . 20 mW (10 mW) of 493 nm (650 nm) light
is available for spectroscopy, far in excess of that required to observe a single
ion. The blue laser is frequency stabilized to ∼ 20 MHz (relative) using an
11 TOPTICA SHG 100
12 TOPTICA DL 100
Fig. 7. Schematic view of the gas handling system.
Invar Fabry-Perot reference cavity. Long term absolute frequency stabilization
is achieved by locking both lasers to a hollow-cathode Ba lamp 13 . A schematic
of the laser setup is shown in fig. 8.
The laser systems reside on a vibration-isolated optics table in a dust con-
trolled environment, completely separated from the linear ion trap vacuum
system. Both lasers are fed into one single-mode fiber, which is routed over an
arbitrary distance to the ion trap system. The output beams are coupled into
the ion trap via injection optics, consisting of an aspheric focusing lens, an iris
to reduce beam halo, and a beam-steering mirror. The beams are directed into
the vacuum system through an anti-reflection (AR) coated window 14 along
the longitudinal axis of the trap, from the end of the trap closest to S14. An
additional aperture inside the vacuum system aids in beam alignment and
further halo reduction. Due to the chromaticity of the aspheric lens, the beam
waists are separated by 260 mm, which is on the order of the beams' Rayleigh
length. The 493 nm and 650 nm waists at the trapping segment (S14) are
370 µm and 570 µm, respectively. The laser powers at the injection region are
sampled in real-time by photodiodes. These powers are fed-back to acousto-
optic modulators before the fiber input on the laser table, and used to keep the
injected beam powers stable to ∼ 1 % indefinitely. This configuration is found
to suppress background laser light levels to the level required for observing a
single ion.
13 Perkin-Elmer Ba Lumina HCL model N305-0109
14 “Super-V” AR coating (99.98 % transmission at 493 nm) by OptoSigma
Fig. 8. Schematic view of the laser setup.
The fluorescence from a trapped ion is detected by an electron-multiplied
CCD camera (EMCCD), sensitive to single photons 15 . The fluorescence is
imaged onto the EMCCD by a 64 mm working distance microscope 16 , with
a numerical aperture of 0.195. The outer lens of the microscope objective is
placed close to S14, outside a re-entrant vacuum window (see fig. 6). A spherical
mirror inside the vacuum tank, directly behind S14, reflects fluorescence light
back through the trap that would otherwise be lost. This roughly doubles
the fluorescence light collection efficiency. A filter in front of the EMCCD
attenuates the 650 nm light by more than 99 % while transmitting ∼ 85% of
the 493 nm light. The light collection efficiency of the system is estimated to
be 10−2, including the 90 % quantum efficiency of the EMCCD.
15 Andor iXon EM+
16 Infinity InFocus KC with IF4 objective
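The quoted efficiency is consistent with a simple solid-angle estimate; the factorization below (numerical aperture → fractional solid angle (1 − cos θ)/2, a factor ≈ 2 from the spherical mirror, 85 % filter transmission, 90 % quantum efficiency) is our own sketch of the estimate, not the authors' calculation:

```python
import math

def collection_efficiency(na, mirror_gain=2.0, filter_t=0.85, qe=0.90):
    """Rough photon-collection efficiency of the imaging system."""
    theta = math.asin(na)                       # half-angle of the objective
    omega_frac = (1.0 - math.cos(theta)) / 2.0  # fraction of 4*pi collected
    return omega_frac * mirror_gain * filter_t * qe

eff = collection_efficiency(0.195)
# eff comes out near 1.5e-2, the same order as the quoted 1e-2
```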
5 Trap operation and results
After the initial pump-down and bakeout, the trap is loaded with ions by
ionizing neutral Ba in the central region of S3 (see fig. 1). Barium is chemically
produced, after the system has been pumped to good vacuum, by heating a
“barium dispenser” 17 loaded with BaAl4-Ni, and depositing Ba on a Ta foil.
The foil can be resistively heated repeatedly, producing a Ba vapor in S3,
which is ionized by a 500 eV electron beam from an electron gun 18 . A 32-
gauge thermocouple on the Ta foil is used to control the temperature of the
foil, regulating the amount of Ba emitted. Before loading ions into the trap,
a buffer gas is introduced into the system. To load small numbers of ions
(< 10), the oven is operated at 100 ◦C and the e-gun pulsed for 10 s. Ions
are cooled by the buffer gas, and trapped at S14 as discussed earlier. The trap
can also be loaded by turning on one of the cold cathode vacuum gauges.
This effect is explained by the possible emission of electrons and ions from the
gauge, which is presumably coated with Ba from the initial Ba source creation
process. This should not be considered a background for Ba tagging in EXO,
as the trap used in EXO will not be heavily contaminated with Ba, nor will
vacuum gauges be operated during the tagging process.
Fig. 9 shows a grayscale image of the fluorescence from three ions in the trap,
imaged by the EMCCD over 20 s. The DC potentials are set as shown in
fig. 1. The brightness of each pixel is proportional to the number of collected
photons. The cloud of white pixels (encompassed by the black square) is flu-
orescence from three ions trapped in S14. The cloud has two lobes, due to
a slight misalignment of the spherical mirror behind S14. The white vertical
bands on either side of the cloud are due to laser light scattering off the elec-
trodes, which is at the same wavelength as the fluorescence photons. Precise
alignment of the laser beams is required to minimize scattered laser light and
optimize the signal to noise in the region of interest, in order to observe the
fluorescence from a single ion in the trap.
A time-series of the Ba ion fluorescence signal is shown in fig. 10 [10], starting
with four ions loaded into the trap at 4.4 × 10−3 torr He. The x-axis is in
seconds, and the y-axis represents the fluorescence in arbitrary units of EM-
CCD counts. Each data point in this series is the sum of the pixels in the
square box in fig. 9, after 5 s of integration by the EMCCD. The y-axis is zero-
suppressed due to the large EMCCD pedestal. Over time, ions spontaneously
eject from the trap due to RF heating, and possibly ion-ion Coulomb interac-
tions. As individual ions eject, the fluorescence signal decreases in quantized
steps. The difference in the fluorescence signals for the single steps is constant
17 SAES: http://www.saesgetters.com/default.aspx?idpage=460
18 Kimball Physics FRA-2X1-2016
Fig. 9. Greyscale picture of three ions contained in segment 14 of the trap. The
image was taken at 5 · 10−4 torr He. The signal in the region of interest (black box)
was integrated over 20 seconds with the EMCCD.
within 4%, clearly establishing the capability of the system in detecting single
ions.
A high signal-to-noise ratio of the fluorescence from a single trapped ion will
be required to confirm a 0νββ decay. The signal-to-noise ratio for this purpose
is defined as
S/N = (〈RI〉 − 〈RB〉) / √( σI²/(tI/∆t) + σB²/(tB/∆t) )   (7)

where 〈RI〉 and 〈RB〉 are the average single-ion fluorescence and background
rates, σI and σB are the Gaussian widths of the single-ion fluorescence and
background rates, tI and tB are the total single-ion fluorescence and back-
ground rate observation times, and ∆t is the integration time of a single mea-
surement (hence tI/∆t and tB/∆t are the number of measurements for the
signal and the background, respectively). This metric assumes that both the
single ion fluorescence and background rates follow Gaussian statistics. If the
Fig. 10. Time series of the 493 nm ion fluorescence rate in the trap at 4.4·10−3 torr
He. Ions unload, causing clear quantized drops in the fluorescence rate. Each point
represents 5 s of integration with the EMCCD [10].
two integration times are equal (tI = tB = t), eqn. 7 becomes
S/N = [ (〈RI〉 − 〈RB〉) / √(σI² + σB²) ] · √(t/∆t) = k√t   (8)
where the signal and background rates have been absorbed into the constant k,
in units of Hz−1/2. The signal-to-noise ratio of the fluorescence from an individ-
ual ion increases with the square root of the measurement time, or equivalently
number of measurements. For the single ion fluorescence and background rates
in fig. 10, k = 2.75 Hz−1/2, so that S/N = 18 for a 60s measurement time.
Similar values are found in the cases of Ar, and He/Xe mixtures as buffer
gases. Drifts in the laser beam position at S14 may lead to fluctuations in the
background rate, RB, that are not accounted for by the assumptions of Gaus-
sian statistics made here. Such drifts are caused primarily by temperature
variations in the lab, affecting the alignment of the trap injection optics. In
the setup used for the data presented here, no provision is made for the tem-
perature stabilization of such optics that drift by as much as ±2◦C on a daily
cycle. Even under these non-ideal conditions, the system is stable enough to
apply the S/N description of eqn. 7 for periods of minutes, much longer than
required for non-ambiguous single Ba ion identification. Temperature stabi-
lization of the injection optics capable of ±0.1◦C would be straightforward
to implement and would reduce non-Gaussian background fluctuations to a
negligible level.
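The equal-time case of this estimate (eqn. 8) can be written out directly; the rates and widths used below are made-up values chosen only to exhibit the √t scaling:

```python
import math

def snr(R_I, R_B, sigma_I, sigma_B, t, dt):
    """S/N of eqn. 8: the mean rate difference divided by the standard
    error of the mean over n = t/dt repeated measurements."""
    n = t / dt
    return (R_I - R_B) / math.sqrt((sigma_I**2 + sigma_B**2) / n)

s1 = snr(R_I=100.0, R_B=40.0, sigma_I=12.0, sigma_B=9.0, t=60.0, dt=5.0)
s4 = snr(R_I=100.0, R_B=40.0, sigma_I=12.0, sigma_B=9.0, t=240.0, dt=5.0)
# quadrupling the measurement time doubles the significance: s4 = 2 * s1
```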
The lifetimes of single Ba ions in the trap at different He pressures have been
measured and reported elsewhere [10]. In the same paper, the destabilizing
effects of Xe contaminations are also studied and modeled. It is found that
a sufficient partial pressure of He in the trap can counter the effects of Xe,
and provide lifetimes that are sufficient for single-ion detection with very high
significance.
6 Summary
A linear RFQ ion trap, designed and built within the R&D program towards
the EXO experiment, is described. The trap is capable of confining individual
Ba ions for observation by laser spectroscopy, in the presence of light buffer
gases and low Xe concentrations. Single trapped Ba ions are observed, with a
high signal-to-noise ratio. A similar trap will be used to identify single Ba ions
produced in the 0νββ decay of 136Xe in the EXO experiment. The successful
operation of this trap, as described here, is one of the cornerstones of a full
Ba tagging system for EXO, which will lead to a new method of background
suppression in low-background experiments. In parallel, several systems to
capture single Ba ions in LXe, and transfer them into the ion trap are under
development.
Acknowledgements
This work was supported, in part, by DoE grant FG03-90ER40569-A019 and
by private funding from Stanford University. We also gratefully acknowledge
a substantial equipment donation from the IBM corporation, as well as very
informative discussions with Guy Savard.
References
[1] Y. Fukuda, T. Hayakawa, E. Ichihara, K. Inoue,et al., Phys. Rev. Lett. 81 1562
(1998).
M.H. Ahn, E. Aliu, S. Andringa, S. Aoki et al., Phys. Rev. D 74, 072003 (2006).
Q.R. Ahmad, R.C. Allen, T.C. Andersen, J.D Anglin et al., Phys. Rev. Lett 89
011301 (2002).
T. Araki, K. Eguchi, S. Enomoto, K. Furuno, Phys. Rev. Lett. 94 081801 (2005).
B.T. Cleveland,T. Daily, R. Davis, Jr., J.R. Distel et al., Astrophys. J. 496, 505
(1998).
J.N. Abdurashitov, V.N. Gavrin, S.V. Girin, V.V. Gorbachev et al., Phys. Rev.
C 60, 055801 (1999).
W. Hampel, J. Handt, G. Heusser, J. Kiko et al., Phys. Lett. B 447, 127 (1999).
D.G. Michael, P. Adamson, T. Alexopoulos, W.W.M. Allison et al., Phys. Rev.
Lett. 97, 191801 (2006).
[2] A. Osipowicz, H. Blumer, G. Drexlin, K. Eitel et al., arxiv hep-ex/0109033.
A. Minfardini, C. Arnaboldi, C. Brofferio, S. Capelli et al., Nucl. Instr. Meth.
A 559 346 (2006).
[3] S. Elliott, P. Vogel, Ann. Rev. Nucl. Part. Sci. 52, 115 (2002).
[4] E. Majorana, Nuovo Cim. 14 (1937) 171.
[5] M. Danilov, R. DeVoe, A. Dolgolenko, G. Giannini et al., Phys. Lett. B 480
(2000) 12.
M. Breidenbach, M Danilov, J. Detwiler et al., R&D proposal, Feb 2000,
Unpublished.
[6] D. Denison, J. Vac. Sci. Tech. 8, 266 (1971).
[7] W. Paul, Rev. Mod. Phys. 62, 531 (1990).
[8] R. March et al., Quadrupole Ion Trap Mass Spectrometry, Wiley-Interscience,
(2005).
[9] N. McLachlan, Theory and Application of Mathieu Functions (Dover, 1964).
[10] M. Green, J. Wodin, R. deVoe, P. Fierlinger et al., arXiv:physics/0702122
(2007), submitted to PRL.
[11] S. Waldman, PhD Thesis, Stanford University 2005.
J. Wodin, PhD Thesis, Stanford University 2007.
[12] K. Taeman, PhD Thesis, McGill University 1997.
[13] W. Neuhauser, M. Hohenstatt, P. Toscheck, H. Dehmelt, Phys. Rev. Lett. 41
(1978) 233.
|
0704.1647 | How much entropy is produced in strongly coupled Quark-Gluon Plasma
(sQGP) by dissipative effects? | How much entropy is produced in strongly coupled
Quark-Gluon Plasma (sQGP) by dissipative effects?
M. Lublinsky and E. Shuryak
Department of Physics and Astronomy, State University of New York, Stony Brook NY 11794-3800, USA
(Dated: November 4, 2018)
We argue that estimates of dissipative effects based on the first-order hydrodynamics with shear
viscosity are potentially misleading because higher order terms in the gradient expansion of the
dissipative part of the stress tensor tend to reduce them. Using recently obtained sound dispersion
relation in thermal N=4 supersymmetric plasma, we calculate the resummed effect of these high
order terms for Bjorken expansion appropriate to RHIC/LHC collisions. A reduction of entropy
production is found to be substantial, up to an order of magnitude.
A hydrodynamical description of matter created in high
energy collisions was proposed by Landau [1] more
than 50 years ago, motivated by the large coupling at small
distances that follows from the beta functions of QED and
scalar theories known at the time.
course described by QCD, in which the coupling runs in
the opposite way. And yet, recent RHIC experiments
have shown spectacular collective flows, well described
by relativistic hydrodynamics. More specifically, one ob-
served three types of flow: (i) outward expansion in the trans-
verse plane, or radial flow, (ii) azimuthal asymmetry or
“elliptic flow” [2, 3], as well as recently proposed (iii)
“conical flow” from quenched jets [4]. These observations
lead to the conclusion that the QGP at RHIC is a near-perfect
liquid, in a strongly coupled regime [5]. The issue we
discuss below is at what “initial time” τ0 one is able to
start hydrodynamical description of heavy ion collisions,
without phenomenological/theoretical contradictions.
Phenomenologically, it was argued in [2, 3] that elliptic
flow is especially sensitive to τ0. Indeed, ballistic motion
of partons may quickly erase the initial spatial anisotropy
on which this effect is based. In practice, hydrodynamics
at RHIC is usually used starting from time τ0 ∼ 1/2fm,
otherwise the observed ellipticity is not reproduced.
Can one actually use hydrodynamics reliably at such
short time? How large is τ0 compared to the relevant “mi-
croscopic scales” of the sQGP? How much dissipation occurs
in the system at this time? As a measure of that, we will
calculate below the ratio of the amount of entropy pro-
duced at τ > τ0 to its “primordial” value at τ0, ∆S/S0.
To set up the problem, let us start with a very crude
dimensional estimate. If we think that the QCD effective
coupling is large αs ∼ 1 and the only reasonable micro-
scopic length is given by temperature [14], then the rele-
vant micro-to-macro ratio of scales is simply T0τ0. With
T0 ∼ 400MeV at RHIC, one finds this ratio to be close
to one. We are then lead to a pessimistic conclusion: at
such time application of any macroscopic theory, thermo-
or hydro-dynamics, seems to be impossible, since order
one corrections are expected.
Let us then make the first approximation, including the
explicit viscosity term to first order. The zeroth order (in
mean free path) stress tensor used in the ideal hydrody-
namics has the form
T^{(0)}_{µν} = (ε + p) u_µ u_ν + p g_{µν}   (1)
while dissipative corrections are induced by gradients of
the velocity field. The well known first order corrections
are due to shear (η) and bulk (ξ) viscosities
δT^{(1)}_{µν} = η( ∇_µ u_ν + ∇_ν u_µ − (2/3) ∆_{µν} ∇_ρ u^ρ ) + ξ ∆_{µν} ∇_ρ u^ρ   (2)
In this equation the following projection operator onto
the matter rest frame was used:
∇_µ ≡ ∆_{µν} ∂^ν ,   ∆_{µν} ≡ g_{µν} − u_µ u_ν   (3)
The energy-momentum conservation ∂^µ T_{µν} = 0 at this order
corresponds to the Navier-Stokes equation.
Because colliding nuclei are Lorentz-compressed, the
largest gradients at early time are longitudinal, along the
beam direction. The expansion at this time can be ap-
proximated by well known Bjorken rapidity-independent
setup [6], in which hydrodynamical equations depend on
only one coordinate – the proper time τ = √(t² − x²):

∂_τ s / s = −(1/τ) [ 1 − ((4/3)η + ξ) / ((ε + p) τ) ]   (4)

where we have introduced the entropy density s = (ε + p)/T.
Note that for a traceless T_{µν} (conformally invariant
plasma), the bulk viscosity ξ = 0.
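Eqn. (4) is straightforward to integrate numerically in the conformal case ξ = 0. The sketch below uses the conformal equation of state s ∝ T³ with the "QCD" Stefan-Boltzmann coefficient quoted later in the text and η/s = 1/4π; the function names, step size, and initial conditions are our own choices, not the authors' code:

```python
import math

# First-order viscous Bjorken expansion, eqn. (4) with xi = 0:
#   ds/dtau = -(s/tau) * (1 - Gamma_s/tau),  Gamma_s = (4/3)*eta/(s*T)
# Conformal equation of state: s = 4*k_SB*T^3.

def k_sb(nc=3, nf=3):
    """Stefan-Boltzmann coefficient for nc colors and nf flavors."""
    return (math.pi**2 / 90.0) * (2.0 * (nc**2 - 1) + 3.5 * nc * nf)

def evolve_entropy(T0, tau0, tau1, eta_over_s, steps=200_000):
    """Forward-Euler integration of eqn. (4); T0 in GeV, tau in fm.
    Returns the entropy density (GeV^3) at tau1."""
    hbarc = 0.1973  # GeV*fm, converts 1/T from GeV^-1 to fm
    k = k_sb()
    s = 4.0 * k * T0**3
    dtau = (tau1 - tau0) / steps
    tau = tau0
    for _ in range(steps):
        T = (s / (4.0 * k)) ** (1.0 / 3.0)
        gamma_s = (4.0 / 3.0) * eta_over_s / T * hbarc  # attenuation length, fm
        s -= s / tau * (1.0 - gamma_s / tau) * dtau
        tau += dtau
    return s

s_ideal = evolve_entropy(0.3, 0.5, 10.0, eta_over_s=0.0)
s_visc = evolve_entropy(0.3, 0.5, 10.0, eta_over_s=1.0 / (4.0 * math.pi))
# For an ideal fluid s*tau is conserved; viscosity makes s*tau grow.
```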
For reasons which will become clear soon, let us com-
pare this eqn to another problem, in which large longi-
tudinal gradients appear as well, namely sound wave in
the medium. The dispersion relation (the pole position)
for a sound wave with frequency ω and wave vector q is,
at small q
ω = csq −
q2Γs, Γs ≡
Notice that the right hand side of (4) contains precisely
the same combination of viscosity and thermodynamical
parameters as appears in the sound attenuation problem:
the length Γs, which measures directly the magnitude
of the dissipative corrections. At proper times τ ∼ Γs
one has to abandon the hydrodynamics altogether, as
the dissipative corrections cannot be ignored.
For the entropy production (4) the first correction to
the ideal case is (1−Γs/τ). Since the correction to one is
negative, it reduces the rate of the entropy decrease with
time. An equivalent statement is that the overall positive
sign shows that some amount of entropy is generated
by the dissipative term. Danielewicz and Gyulassy [7]
have analyzed eq. (4) in great details considering vari-
ous values of η. Their results indicate that the entropy
production can be substantial.
Our present study is motivated by the following ar-
gument. If the hydrodynamical description is forced to
begin at early time τ0 which is not large compared to
the intrinsic micro scale 1/T , then limiting dissipative
effects to the first gradient only (δT^{(1)}_{µν}) is parametrically
not justified and higher order terms have to be accounted
for. Ideally those effects need to be resummed. As a first
step, however, we may attempt to guess their sign and
estimate the magnitude.
Formally one can think of the dissipative part of the
stress tensor δT_{µν} as expanded in a series containing all
derivatives of the velocity field u, δT^{(1)}_{µν} being the first
term in the expansion. In general 3+1 dimensional case
there are many structures, each entering with a new and
independent viscosity coefficient. We call them “higher
order viscosities” and the expansion is somewhat similar
to twist expansion. For 1+1 Bjorken problem, the ap-
pearance of the extra terms modifies eq. (4), which can
be written as a series in inverse proper time
∂_τ (sτ) / s = c_1/(τ T) + c_2/(τ T)² + ⋯ + c_{2n}/(T τ)^{2n} + ⋯   (6)
We have put T here simply for dimensional reasons:
clearly Tτ is a micro-to-macro scale ratio which deter-
mines convergence of these series and the total amount of
produced entropy. Similarly, the sound wave dispersion
relation becomes nonlinear as we go beyond the lowest
order:
ω = ℜ[ω(q)] + i ℑ[ω(q)] ;   (7)

ℜ[ω(q)] = 2πT ∑_n r_n (q/2πT)^{2n+1} ,   ℑ[ω(q)] = 2πT ∑_n η_n (q/2πT)^{2n} ,   η_1 = −4πη/(3s)
Based on T-parity arguments we keep only odd (even)
powers of q for the real (imaginary) parts of ω. The co-
efficients cn, rn and ηn are related since they originate
from the very same gradient expansion of Tµν . Although
both the entropy production series above and sound ab-
sorption should converge to sign-definite answer, the co-
efficients of the series may well be of alternating sign (as
we will see shortly).
Clearly, keeping these next order terms can be use-
ful only provided there is some microscopic theory which
would make it possible to determine the values of the
high order viscosities. For strongly coupled QCD plasma
this information is at the moment beyond current theo-
retical reach, and we have to rely on models. A particu-
larly useful and widely studied model of QCD plasma is
N = 4 supersymmetric plasma, which is also conformal
(CFT). The AdS/CFT correspondence [8] (see [9] for re-
view) relates the strongly coupled gauge theory descrip-
tion to weakly coupled gravity problem in the background
of AdS5 black hole metric. Remarkably, certain informa-
tion on higher order viscosities in the CFT plasma can be
read of from the literature and we exploit this possibility
below.
The viscosity-to-entropy ratio (η/s = 1/4π) deduced
from AdS [10] turns out to be quite a reasonable ap-
proximation to the values appropriate for the RHIC data
description. Thus one may hope that the information on
the higher viscosities gained from the very same model
can be well trusted as a model for QCD. Admittedly hav-
ing no convincing argument in favor, we simply assume
that the viscosity expansion of the QCD plasma displays
very similar behavior, both qualitative and quantitative,
as its CFT sister.
Our estimates are based on the analysis of the quasi-
normal modes in the AdS black hole background due to
Kovtun and Starinets [11]. The dispersion relation for
the sound mode, calculated in ref. [11], is shown in Fig.1.
The real and imaginary parts of ω correspond to the ex-
pressions given in (7). At q → 0 they agree with the
leading order hydrodynamical dispersion relation (5).
The first important observation is that the next order
coefficient η2 is negative, reducing the effect of the first
one when gradients are large. The second is that |ℑ[ω]|
has maximum at q/2πT ∼ 1, and at large q the imaginary
part starts to decrease. This means that the expansion
(7) has a radius of convergence q/2πT ∼ 1.
In order to estimate the effect of higher viscosities on
the entropy production in the Bjorken setup we first iden-
tify τ in (6) with 2π/q in (7). Second we identify the coef-
ficients cn with ηn. Both sound attenuation and entropy
production in question are one dimensional problems as-
sociated with the same longitudinal gradients and pre-
sumably the same physics. In practice we use the curve
for the imaginary part of ω (Fig. 1) as an input for the
right hand side of (6).
The numerical results are shown in Figs. 2 and 3 in
which we compare our estimates with the “conventional”
shear viscosity results from (4). To be fully consistent
with the model we set η/s = 1/4π. We also set the initial
temperature T0 = 300MeV while the standard equation
of state is s = 4 k_SB T³. For the coefficient k_SB we use the
“QCD” value

k_SB = (π²/90) [ 2(N_c² − 1) + (7/2) N_c n_f ] ;   n_f = 3 , N_c = 3
Fig. 2 presents the results for entropy production as a
FIG. 1: Sound dispersion (real and imaginary parts) obtained
from the analysis of quasinormal modes in the AdS black hole
background. The result and figure are taken from Ref.[11].
function of proper time for two initial times τ0 = 0.2 fm
and τ0 = 0.5 fm. The dashed lines correspond to the first
order result (4) while the solid curves include the higher
order viscosity corrections. Noticeably there is a dra-
matic effect toward reduction of the entropy production
as we start the hydro evolution at earlier times (the ef-
fect is almost invisible on the temperature profile). This
is the central message of the present paper.
Fig. 3 illustrates the relative amount of entropy pro-
duced during the hydro phase as a function of initial time.
If the first order hydrodynamics is launched at very early
times, the hydro phase produces too large an amount of en-
tropy, up to 250%. (Such a large discrepancy is not seen
in the RHIC data.) In sharp contrast, the result from
the resummed viscous hydrodynamics is very stable, and
does not produce more than some 25% of the initial entropy,
even if pushed to start from extremely early times. The
right figure displays the absence of any pathological ex-
plosion at small τ0.
It is worth commenting that we carried out the analysis
using the minimal value for the ratio η/s = 1/4π. We
expect that if this ratio is taken larger, the discrepancy
between the first order dissipative hydro and all orders
will be even stronger.
Before concluding this paper we note that a practi-
cal implementation of relativistic viscous hydrodynamics
had followed Israel-Stewart second order formalism (for
recent publications see [12]) in which one introduces ad-
FIG. 2: Entropy production as a function of proper time for
initial time τ0 = 0.2 fm (left) and τ0 = 0.5 fm (right). The
initial temperature T0 = 300MeV. The dashed (blue) curves
correspond to the first order (shear) viscosity approximation
Eq.(4). The solid curve (red) is the all order dissipative re-
summation Eq.(6).
FIG. 3: Fraction of entropy produced during the hydro phase
as a function of initial proper time. The initial temperature
T0 = 300MeV. The left (blue) points correspond to the first
order (shear) viscosity approximation. The right (red) points
are for the all order resummation.
ditional parameter, the relaxation time for the system.
Then the dissipative part of the stress tensor is found as
a solution of an evolution equation, with the relaxation
time being its parameter. For the Bjorken setup, the dis-
sipative tensor thus obtained has all powers in 1/τ and
might resemble the expansion in (6) and (7). The use of
AdS/CFT may shed light on the interrelation between
the two approaches: the first step in this direction has
been made recently [13], resulting in numerically very
small relaxation time.
Finally, why can it be that macroscopic approaches like
hydrodynamics can be rather accurate at such a short
time scale? Trying to answer this central question one
should keep in mind that 1/T is not the shortest microscopic scale. The inter-parton distance is much smaller, ∼ 1/(T · Ndof^(1/3)), where the number of effective degrees of freedom is Ndof ∼ 40 in QCD, while Ndof ∼ Nc^2 → ∞ in the AdS/CFT approach.
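This scale separation can be checked with one line of arithmetic: the inter-parton distance is shorter than the naive scale 1/T by a factor Ndof^(1/3). A minimal sketch, using only the Ndof estimate quoted above:

```python
# Inter-parton distance ~ 1/(T * Ndof^(1/3)) versus the naive scale 1/T:
# the two differ by a factor Ndof^(1/3).  Ndof ~ 40 is the QCD estimate
# quoted in the text.
Ndof = 40
ratio = Ndof ** (1.0 / 3.0)
print(round(ratio, 2))  # -> 3.42: the inter-parton distance is ~3.4x shorter
```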
In summary, we have argued that the higher order dissipative terms strongly reduce the effect of the usual viscosity. Therefore an “effective” viscosity-to-entropy ratio found from comparing Navier-Stokes results to experiment can even be below the (proposed) lower bound of 1/4π. We conclude that it is not impossible to use a hydrodynamic description of RHIC collisions starting from very early times. In particular, our study suggests that the final entropy observed and its “primordial” value obtained right after the collision should indeed match, with an accuracy of 10-20 percent.
Acknowledgment
We are thankful to Adrian Dumitru whose results (pre-
sented in his talk at Stony Brook) inspired us to think
about the issue of entropy production during the hydro
phase. He emphasized to us the important problem of
matching the final entropy measured after the late hydro stage with the early-time partonic predictions, based on
approaches such as color glass condensate. This work is
supported by the US-DOE grants DE-FG02-88ER40388
and DE-FG03-97ER4014.
[1] L. D. Landau, Izv. Akad Nauk SSSR, ser. fiz. 17 (1953)
51. Reprinted in Collected works by L.D.Landau.
[2] D. Teaney, J. Lauret and E. V. Shuryak, Phys. Rev. Lett.
86, 4783 (2001) [arXiv:nucl-th/0011058]. “A hydrody-
namic description of heavy ion collisions at the SPS and
RHIC,” arXiv:nucl-th/0110037.
[3] P. F. Kolb and U. W. Heinz, “Hydrodynamic
description of ultrarelativistic heavy-ion collisions,”
arXiv:nucl-th/0305084.
[4] J. Casalderrey-Solana, E. V. Shuryak and D. Teaney, J.
Phys. Conf. Ser. 27, 22 (2005) [Nucl. Phys. A 774, 577
(2006)] [arXiv:hep-ph/0411315].
[5] E. V. Shuryak, Prog. Part. Nucl. Phys. 53, 273 (2004) [arXiv:hep-ph/0312227].
[6] J. D. Bjorken, Phys. Rev. D 27, 140 (1983).
[7] P. Danielewicz and M. Gyulassy, Phys. Rev. D 31 (1985)
[8] J. M. Maldacena, Adv. Theor. Math. Phys. 2,
231 (1998) [Int. J. Theor. Phys. 38, 1113 (1999)]
[arXiv:hep-th/9711200].
[9] O. Aharony, S. S. Gubser, J. M. Maldacena,
H. Ooguri and Y. Oz, Phys. Rept. 323, 183 (2000)
[arXiv:hep-th/9905111].
[10] P. Kovtun, D. T. Son and A. O. Starinets, Phys. Rev.
Lett. 94, 111601 (2005) [arXiv:hep-th/0405231].
[11] P. K. Kovtun and A. O. Starinets, Phys. Rev. D 72,
086009 (2005) [arXiv:hep-th/0506184].
[12] U. W. Heinz, arXiv:nucl-th/0512049. R. Baier and
P. Romatschke, arXiv:nucl-th/0610108. R. Baier, P. Ro-
matschke and U. A. Wiedemann, Nucl. Phys. A 782, 313
(2007) [arXiv:nucl-th/0604006].
[13] M. P. Heller and R. A. Janik, arXiv:hep-th/0703243.
[14] Note we have ignored e.g. ΛQCD.
|
0704.1648 | Spectral Analysis of the Chandra Comet Survey | Astronomy & Astrophysics manuscript no. ms © ESO 2021
August 26, 2021
Spectral Analysis of the Chandra Comet Survey
D. Bodewits1, D. J. Christian2, M. Torney3, M. Dryer4, C. M. Lisse5, K. Dennerl6, T. H. Zurbuchen7, S. J. Wolk8,
A. G. G. M. Tielens9, and R. Hoekstra1
1 kvi atomic physics, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen, The Netherlands
e-mail: [email protected], [email protected]
2 Queen’s University Belfast, Department of Physics and Astronomy, Belfast, BT7 1NN, UK
e-mail: [email protected]
3 Atoms Beams and Plasma Group, University of Strathclyde, Glasgow, G4 0NG, UK
e-mail: [email protected]
4 noaa Space Environment Center, 325 Broadway, Boulder, CO 80305, USA
e-mail: [email protected]
5 Planetary Exploration Group, Space Department, Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Rd,
Laurel, MD 20723, USA
e-mail: [email protected]
6 Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, 85748 Garching, Germany
e-mail: [email protected]
7 The University of Michigan, Department of Atmospheric, Oceanic and Space Sciences, Space Research Building, Ann Arbor, MI
48109-2143, USA
e-mail: [email protected]
8 Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
e-mail: [email protected]
9 nasa Ames Research Center, MS 245-3, Moffett Field, CA 9435-1000, USA
e-mail: [email protected]
Received August 26, 2021
ABSTRACT
Aims. We present results of the analysis of cometary X-ray spectra with an extended version of our charge exchange emission
model (Bodewits et al. 2006). We have applied this model to the sample of 8 comets thus far observed with the Chandra X-ray
observatory and acis spectrometer in the 300–1000 eV range. The surveyed comets are C/1999 S4 (linear), C/1999 T1 (McNaught–
Hartley), C/2000 WM1 (linear), 153P/2002 (Ikeya–Zhang), 2P/2003 (Encke), C/2001 Q4 (neat), 9P/2005 (Tempel 1) and 73P/2006-
B (Schwassmann–Wachmann 3) and the observations include a broad variety of comets, solar wind environments and observational
conditions.
Methods. The interaction model is based on state selective, velocity dependent charge exchange cross sections and is used to explore how cometary X-ray emission depends on cometary, observational and solar wind characteristics. It is further demonstrated that cometary X-ray spectra mainly reflect the state of the local solar wind. The current sample of Chandra observations was fit using the constraints of the charge exchange model, and relative solar wind abundances were derived from the X-ray spectra.
Results. Our analysis showed that spectral differences can be ascribed to different solar wind states, identifying comets interacting with (I) fast, cold wind, (II) slow, warm wind and (III) disturbed, fast, hot winds associated with interplanetary coronal mass ejections. We furthermore predict the existence of a fourth spectral class, associated with the cool, fast, high latitude wind.
Key words. Surveys, atomic processes, molecular processes, Sun: solar wind, coronal mass ejections (cmes), X-rays: solar system,
Comets: general Comets: individual: C/1999 S4 (linear), C/1999 T1 (McNaught–Hartley), C/2000 WM1, 153P/2002 (Ikeya–Zhang),
2P/2003 (Encke), C/2001 Q4 (neat), 9P/2005 (Tempel 1) and 73/P-B 2006 (Schwassmann–Wachmann 3B)
1. Introduction
When highly charged ions from the solar wind collide with a neutral gas, the ions are partially neutralized by capturing electrons into excited states. These ions subsequently decay to the
ground state by the emission of one or more photons. This pho-
ton emission is called charge exchange emission (cxe) and it has
been observed from comets, planets and the interstellar medium
in X-rays and the Far-UV Lisse et al. (1996); Krasnopolsky
(1997); Snowden et al. (2004); Dennerl (2002). The spec-
tral shape of the cxe depends on properties of both the neutral
Send offprint requests to: D. Bodewits
gas and the solar wind and the subsequent emission can there-
fore be regarded as a fingerprint of the underlying interactions
Cravens et al. (1997); Kharchenko and Dalgarno (2000, 2001);
Beiersdorfer et al. (2003); Bodewits et al. (2004a, 2006).
Since the first observations of cometary X-ray emission,
more than 20 comets have been observed with various X-ray
and Far-UV observatories Lisse et al. (2004); Krasnopolsky et
al. (2004). This observational sample contains a broad variety of
comets, solar wind environments and observational conditions.
The observations clearly demonstrate the diagnostics available
from cometary charge exchange emission.
First of all, the emission morphology is a tomography of
the distribution of neutral gas around the nucleus Wegmann et
al. (2004). Gaseous structures in the collisionally thin parts
of the coma brighten, such as the jets in 2P/Encke Lisse et al.
(2005), the Deep Impact triggered plume in 9P/Tempel 1 Lisse
et al. (2007) and the unusual morphology of comet 6P/d’Arrest
Mumma et al (1997). In other comets, the X-ray emission
clearly mapped a spherical gas distribution. This resulted in a
characteristic crescent shape for larger and hence collisionally
thick comets observed at phase angles of roughly 90 degrees
(e.g. Hyakutake - Lisse et al. (1996), linear S4 - Lisse et al.
(2001)). Macroscopic features of the plasma interaction such as
the bowshock are observable, too Wegmann & Dennerl (2005).
Secondly, by observing the temporal behavior of the
comets X-ray emission, the activity of the solar wind and
comet can be monitored. This was first shown for comet
C/1996 B2 (Hyakutake) Neugebauer et al. (2000) and re-
cently in great detail by long term observations of comet
9P/2005 (Tempel 1) Willingale et al. (2006); Lisse et al.
(2007) and 73P/2006 (Schwassmann–Wachmann 3C) Brown et
al. (2007), where cometary X-ray flares could be assigned to cometary outbursts and/or solar wind enhancements.
Thirdly, cometary spectra reflect the physical characteristics
of the solar wind; e.g. spectra resulting from either fast, cold
(polar) wind and slow, warm equatorial solar wind should be
clearly different Schwadron and Cravens (2000); Kharchenko
and Dalgarno (2001); Bodewits et al. (2004a). Several attempts
were made to extract ionic abundances from the X-ray spectra.
The first generation spectral models have all made strong
assumptions when modelling the X-ray spectra Haeberli et al
(1997); Wegmann et al. (1998); Kharchenko and Dalgarno
(2000); Schwadron and Cravens (2000); Lisse et al. (2001);
Kharchenko and Dalgarno (2001); Krasnopolsky et al. (2002);
Beiersdorfer et al. (2003); Wegmann et al. (2004); Bodewits et
al. (2004a); Krasnopolsky (2004); Lisse et al. (2005). Here,
we present a more elaborate and sophisticated procedure to an-
alyze cometary X-ray spectra based on atomic physics input,
which for the first time allows for a comparative study of all
existing cometary X-ray spectra. In Section 2, our comet-wind
interaction model is briefly introduced. In Section 3, it is demon-
strated how cometary spectra are affected by the velocity and
target dependencies of charge exchange reactions. In Section 4,
the various existing observations performed with the Chandra
X-ray Observatory, as well as the solar wind data available are
introduced. Based upon our modelling, we construct an analyt-
ical method of which the details and results are presented in
Section 5. In Section 6, we discuss our results in terms of comet
and solar wind characteristics. Lastly, in Section 7 we summa-
rize our findings. Details of the individual Chandra comet ob-
servations are given in Appendix A.
2. Charge Exchange Model
2.1. Atomic structure of He-like ions
Electron capture by highly charged ions populates highly excited
states, which subsequently decay to the ground state. These cas-
cading pathways follow ionic branching ratio statistics. Because
decay schemes work as a funnel, the lowest transitions (n = 2→
1) are the strongest emission lines in cxe spectra. For helium-like
ions, these are the forbidden line (z: 1s2 1S0–1s2s 3S1), the inter-
combination lines (y, x: 1s2 1S0–1s2p 3P1,2), and the resonance
line (w: 1s2 1S0–1s2p 1P1), see Figure 1.
The apparent branching ratio, Beff , for the intercombination
transitions is determined by weighting branching ratios (B j) de-
rived from theoretical transition rates compiled by Porquet et al.
Fig. 1. Part of the decay scheme of a helium–like ion. The 1S0 decays to
the ground state via two-photon processes (not indicated).
Table 1. Apparent effective branching ratios (Beff) for the relaxation of
the 23P-state of He-like carbon, nitrogen, oxygen and neon.
transition C v N vi O vii Ne ix
1s2 (1S0)–1s2p (3P1,2) 0.11 0.22 0.30 0.34
1s2s (3S1)–1s2p (3P0,1,2) 0.89 0.78 0.70 0.66
(2000, 2001), by an assumed statistical population of the triplet
P-term:
Beff = [(2j + 1) / ((2L + 1)(2S + 1))] · Bj (1)
The resulting effective branching ratios are given in Table 1.
These ratios can only be observed at conditions where the
metastable state is not destroyed (e.g. by UV flux or collisions)
before it decays. In contrast to many other astrophysical X-
ray sources, this condition is fulfilled in cometary atmospheres,
making the forbidden lines strong markers of cxe emission.
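The statistical weights entering Eq. (1) can be checked numerically. A minimal sketch for the triplet P-term (L = 1, S = 1); the Bj values below are hypothetical placeholders, not the Porquet et al. (2000, 2001) branching ratios:

```python
# Statistical population weights (2j+1)/((2L+1)(2S+1)) for the triplet
# P-term (L = 1, S = 1) that weight the branching ratios Bj in Eq. (1).
L, S = 1, 1
weights = {j: (2 * j + 1) / ((2 * L + 1) * (2 * S + 1)) for j in (0, 1, 2)}
assert abs(sum(weights.values()) - 1.0) < 1e-12  # weights sum to unity

# Placeholder per-level branching ratios Bj (illustration only):
Bj = {0: 0.0, 1: 0.5, 2: 0.2}
B_eff = sum(weights[j] * Bj[j] for j in Bj)
print(weights, round(B_eff, 3))
```

The weights come out as 1/9, 3/9 and 5/9 for j = 0, 1, 2, so the j = 2 level dominates any statistically populated average.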
2.2. Emission Cross Sections
To obtain line emission cross sections we start with an initial
state population based on state selective electron capture cross
sections and then track the relaxation pathways defined by the
ion’s branching ratios.
Electron capture reactions can be strongly dependent on
target effects. An important difference between reactions with
atomic hydrogen and the other species is the presence of multi-
ple electrons, hence allowing for multiple (mostly double) elec-
tron transfer. It has been demonstrated both experimentally and
theoretically that double electron capture can be an important
reaction channel in multi-electron targets and that after autoion-
ization to an excited state it may contribute to the X-ray emis-
sion Ali et al. (2005); Hoekstra et al. (1989); Beiersdorfer
et al. (2003); Otranto et al (2006); Bodewits et al. (2006).
Unfortunately, experimental data on reactions with species typ-
ical for cometary atmospheres, such as H2O, atomic O and CO
are at best scarcely available. Because the first ionization po-
tentials of these species are all close to that of atomic H, using
state selective one electron capture cross sections for bare ions
charge exchanging with atomic hydrogen from theory is a rea-
sonable assumption, which is also confirmed by experimental
studies Greenwood et al. (2000, 2001); Bodewits et al. (2006).
Here, we will use the working hypothesis that effective one elec-
tron cross sections for multi-electron targets present in cometary
atmospheres are at least roughly comparable to cross sections for
Table 2. Compilation of theoretical, velocity dependent emission cross sections for collisions between bare- and H-like solar wind ions and atomic
hydrogen, in units of 10−16 cm2. See text for details. We estimate uncertainties to be ca. 20%. The ion column contains the resulting ion, not the
original solar wind ion. Line energies compiled from Garcia & Mack (1965); Vainshtein & Safronova (1985); Drake (1988); Savukov et al.
(2003) and the chianti database Dere et al. (1997); Landi et al. (2006).
E (eV) Ion Transition 200 km s−1 400 km s−1 600 km s−1 800 km s−1 1000 km s−1
299.0 C v z 8.7 12 16 18 20
304.4 C v x,y 0.65 1.0 1.5 1.7 1.8
307.9 C v w 1.8 3.0 4.1 4.8 5.2
354.5 C v 1s3p-1s2 0.55 0.71 0.81 1.0 1.3
367.5 C v 1s4p-1s2 0.70 0.66 0.76 0.74 0.72
367.5 C vi 2p-1s 15 26 30 33 34
378.9 C v 1s5p-1s2 0.00 0.02 0.05 0.04 0.04
419.8 N vi z 13 23 28 29 29
426.3 N vi x,y 2.7 4.3 5.3 5.7 6.0
430.7 N vi w 3.8 6.0 7.4 8.1 8.5
435.5 C vi 3p-1s 1.6 4.0 4.7 4.7 4.8
459.4 C vi 4p-1s 2.9 5.9 7.0 6.4 6.0
471.4 C vi 5p-1s 0.55 1.0 1.3 0.85 0.54
497.9 N vi 1s3p-1s2 0.43 0.99 1.3 1.3 1.3
500.3 N vii 2p-1s 40 45 44 42 42
523.0 N vi 1s4p-1s2 0.81 1.6 1.9 1.8 1.7
534.1 N vi 1s5p-1s2 0.14 0.31 0.33 0.21 0.14
561.1 O vii z 37 34 33 32 31
568.6 O vii x,y 10 10 10 9.9 9.7
574.0 O vii w 9.9 11 11 11 10
592.9 N vii 3p-1s 6.3 4.9 4.8 4.5 4.3
625.3 N vii 4p-1s 2.9 2.9 3.7 4.3 4.6
640.4 N vii 5p-1s 11 5.2 3.7 2.7 2.2
650.2 N vii 6p-1s 0.00 0.21 0.13 0.09 0.08
653.5 O viii 2p-1s 27 40 48 51 53
665.6 O vii 1s3p-1s2 1.7 1.3 1.3 1.2 1.2
697.8 O vii 1s4p-1s2 0.81 0.79 1.0 1.2 1.3
712.8 O vii 1s5p-1s2 2.8 1.3 0.92 0.68 0.54
722.7 O vii 1s6p-1s2 0.00 0.06 0.04 0.02 0.02
774.6 O viii 3p-1s 2.6 4.7 5.6 5.3 5.0
817.0 O viii 4p-1s 1.0 1.6 2.0 2.2 2.3
836.5 O viii 5p-1s 2.4 4.0 4.6 4.1 3.7
849.1 O viii 6p-1s 1.6 1.6 1.5 1.1 0.67
one electron capture from H. Based on this hypothesis, we will
use our comet-wind interaction model to evaluate the contribu-
tion of the different species.
For our calculations, we use a compilation of theoretical state
selective, velocity dependent cross sections for collisions with
atomic hydrogen Errea et al. (2004); Fritsch and Lin (1984);
Green et al. (1982); Shipsey et al. (1983). We furthermore
assume that capture by H-like ions leads to a statistical triplet
to singlet ratio of 3:1, based on measurements by Suraud et al.
(1991); Bliek et al. (1998). We will first focus on the strongest
emission features, which are the n = 2 → 1 transitions, i.e.,
the Ly-α transition (H-like ions) or the forbidden, resonance and
intercombination lines (He-like ions).
In Fig. 2, the emission cross sections of the Ly-α or the sum
of the emission cross sections of the forbidden, resonance and
intercombination lines of different ions (C, N, O) are shown as
a function of collision velocity, for one electron capture reac-
tions with atomic hydrogen. This figure sets the stage for solar
wind velocity induced effects in cometary X-ray spectra. Most
important is the effect of the velocity on the two carbon emis-
sion features; their prime emission features increase by a factor
of almost two when going from typical ‘slow’ to typical ‘fast’
solar wind velocities. The O viii Ly-α emission cross section can
be seen to drop steeply below ca. 300 km s−1. The N vi K-α dis-
plays a similar, though somewhat less strong behavior.
Fig. 2. Velocity dependence of Ly-α or the sum of the forbid-
den/resonance/intercombination emission cross sections of different so-
lar wind ions: O viii (dashed, grey line), O vii (solid, black line), N vii
(dotted, black line), N vi (solid, grey line), C vi (dashed, black line) and
C v (dash-dotted, black line).
The relative intensity of the emission lines (per species) is
governed by the state selective electron capture cross sections
Fig. 3. Velocity dependence of the hardness ratio of different solar wind
ions: O viii (solid line), O vii (dashed line) N vii (dashed line) and C vi
(dash-dotted line). Also shown are two experimentally obtained hard-
ness ratios by Beiersdorfer et al. (2001) and Greenwood et al. (2000)
for O8+ colliding on CO2 and H2O, respectively (see text).
of the charge exchange reaction and the branching ratios of the
resulting ion. A measure of these intensities is the hardness ra-
tio (Beiersdorfer et al. 2001), which is defined as the ratio be-
tween the emission cross sections of the higher order terms of
the Lyman-series and Ly-α (or between the higher order K-series
and K-α in case of He-like ions):

[ Σ(n>2) σem(Ly−n) ] / σem(Ly−α)
For electron capture by H-like ions, we will use the ratio be-
tween the sum of the resonance-, intercombination and forbid-
den emission lines and the rest of the K-series as the hardness
ratio. Fig. 3 shows the hardness ratios of cxe from abundant so-
lar wind ions. The figure shows that most hardness ratios are
constant at typical solar wind velocities (above 300 km s−1) but
it also clearly supports the suggestion made by Beiersdorfer et al. (2001) that hardness ratios are good candidates for velocimetry studies deep within the coma, where the solar wind has slowed down by mass loading.
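As a worked example, the hardness ratio defined above can be evaluated directly from the O viii emission cross sections of Table 2 at 400 km s−1 (a sketch; only Table 2 values are used):

```python
# Hardness ratio  sum_{n>2} sigma_em(Ly-n) / sigma_em(Ly-alpha)  for
# O viii at 400 km/s, taken from Table 2 (units of 1e-16 cm^2).
sigma_em = {
    "2p-1s": 40.0,  # Ly-alpha
    "3p-1s": 4.7,
    "4p-1s": 1.6,
    "5p-1s": 4.0,
    "6p-1s": 1.6,
}
hardness = sum(v for k, v in sigma_em.items() if k != "2p-1s") / sigma_em["2p-1s"]
print(round(hardness, 2))  # -> 0.3
```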
2.3. Interaction Model
Cometary high-energy emission depends upon certain properties
of both the comet (gas production rate, composition, distance to
the Sun) and the solar wind (speed, composition). Recently, we
developed a model that takes each of these effects into account
Bodewits et al. (2006), which we will briefly describe here.
The neutral gas model is based on the Haser-equation, which
assumes that a comet has a spherically expanding neutral coma
Haser (1957); Festou (1981). The lifetime of neutrals in the
solar radiation field varies greatly amongst species typical for
cometary atmospheres Huebner et al. (1992). The dissociation
and ionization scale lengths also depend on absolute UV fluxes,
and therefore on the distance to the Sun. The coma interacts
with solar wind ions, penetrating from the sunward side follow-
ing straight line trajectories. The charge exchange processes be-
tween solar wind ions and coma neutrals are explicitly followed
both in the change of the ionization state of the solar wind ions
Fig. 4. Modeled charge state distribution along the comet-Sun line, as-
suming an equatorial 300 km s−1 wind interacting with a comet with
outgassing rate Q=1029 molecules s−1 at 1 AU from the Sun. A compo-
sition typical for the slow, equatorial wind was assumed.
and in the relaxation cascade of the excited ions (as discussed
above).
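The spherically expanding coma assumed by the Haser equation can be sketched as a parent-species density profile; Q, the expansion velocity v and the scale length lam below are illustrative round numbers, not values fitted to any comet in the survey:

```python
import math

# Minimal Haser-type parent density n(r) = Q / (4*pi*v*r^2) * exp(-r/lam)
# for a spherically expanding neutral coma.  Q [molecules/s], v [m/s] and
# the photodissociation scale length lam [m] are placeholders.
def haser_density(r, Q=1e29, v=1.0e3, lam=8.0e7):
    """Parent number density [m^-3] at cometocentric distance r [m]."""
    return Q / (4.0 * math.pi * v * r * r) * math.exp(-r / lam)

# Well inside the dissociation scale lam, the profile falls roughly as 1/r^2:
print(haser_density(1.0e6), haser_density(1.0e7))
```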
Due to its interaction with the cometary atmosphere, the so-
lar wind is both decelerated and heated in the bow shock. This
bow shock does not affect the ionic charge state distribution. The
bow shock lowers the drift velocity of the wind but at the same
time increases its temperature and the net collision velocity of
the ions is ca. 77% of the initial velocity v(∞) throughout the
interaction zone. We use a rule of thumb derived by Wegmann
et al. (2004) to estimate the stand-off distance Rbs of the bow
shock.
Deep within the coma, the solar wind finally cools down as
the hot wind ions, neutralized by charge exchange, are replaced
by cooler cometary ions. For simplicity however, we shall as-
sume that the wind keeps a constant velocity and temperature
after crossing the bow shock.
Initially, the charge state distribution depends on the solar
wind state. For most simulation purposes, we will assume the
‘average’ ionic composition for the slow, equatorial solar wind
as given by Schwadron and Cravens (2000). Using our compi-
lation of charge changing cross sections, we can solve the differ-
ential equations that describe the charge state distribution in the
coma in the 2D-geometry fixed by the comet-Sun axis. Figure 4
shows the charge state distribution for a 300 km s−1 equatorial wind interacting with a comet with an outgassing rate Q = 1029 molecules s−1. From this charge state distribution,
it can be seen that along the comet-Sun axis, the comet becomes
collisionally thick between 3500 km (O8+) to 2000 km (C6+),
depending on the cross section of the ions. A maximum in the
C5+ abundance can be seen around 2,000 km, which is due to the
relatively large initial C6+ population and the small cross section
of C5+ charge exchange.
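The sequential charge-exchange equations described above can be sketched as a simple inward integration along the comet-Sun line; the cross sections and the 1/r^2 coma profile here are hypothetical placeholders, not the compiled cross-section data behind Fig. 4:

```python
import math

# One-dimensional sketch of the sequential charge-exchange cascade along
# the comet-Sun line: each charge state q is depleted with a total cross
# section sigma[q] and fed by one-electron capture from state q+1.

def neutral_density(r_km, Q=1e29, v=1.0e3):
    """Neutral density [cm^-3] at r_km [km] for a 1/r^2 coma (placeholder)."""
    return Q / (4.0 * math.pi * v * (r_km * 1.0e3) ** 2) * 1.0e-6

sigma = {8: 5.0e-15, 7: 3.5e-15, 6: 3.0e-15}  # cm^2, hypothetical values
N = {8: 1.0, 7: 0.0, 6: 0.0}                  # relative oxygen charge states

r, dr = 2.0e5, 10.0                           # start at 2e5 km, 10 km steps
while r > 2.0e3:
    n = neutral_density(r)                    # cm^-3
    dx = dr * 1.0e5                           # step length in cm
    # per-step capture probability (exponential form keeps N bounded):
    p = {q: 1.0 - math.exp(-sigma[q] * n * dx) for q in sigma}
    loss = {q: N[q] * p[q] for q in sigma}
    N[8] -= loss[8]
    N[7] += loss[8] - loss[7]
    N[6] += loss[7] - loss[6]                 # loss[6] leaves to O5+ (untracked)
    r -= dr

print({q: round(x, 3) for q, x in N.items()})
```

Even with these placeholder numbers the bare-ion population is substantially depleted by a few thousand km from the nucleus, reproducing the qualitative behavior of Fig. 4.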
A 3D integration assuming cylindrical symmetry around the
comet-Sun axis finally yields the absolute intensity of the emis-
sion lines. Effects due to the observational geometry (i.e. field of
view and phase angle) are included at this step in the model.
Fig. 5. Relative contribution of target species to the total intensity of
O vii 570 eV emission complex with increasing field of view, for an
active Q= 1029 molecules s−1 comet, interacting with a 300 km s−1
solar wind at 1 AU from the Sun. The shaded area indicates the range
of apertures used to obtain spectra discussed within this survey.
3. Model Results
3.1. Relative Contribution of Target Species
Figure 5 shows the dominant collisions which underlie the X-ray
emission of comets. Shown is the total intensity projected on the
sky, with increasing field of view. Within 104 km around the nu-
cleus, water is the dominant collision partner. Farther outward
(≥ 2 × 105 km), the atomic dissociation products of water take
over, and atomic oxygen becomes the most important collision
partner. When the field of view exceeds 107 km, atomic hydro-
gen becomes the sole collision partner. Note that collisions with
water never account for 100% of the emission, even with very
small apertures, due to the contribution of collisions with atomic
hydrogen, OH and oxygen in the line of sight towards the nu-
cleus.
The comets observed with Chandra are all observed with an
aperture of ca. 7.5′ centered on the nucleus. This corresponds to
a range of 1.6−22×104 km (as indicated in Figure 5). Our model
predicts that the emission from nearby comets will be dominated
by cxe from water, but that for comets observed with a larger
field of view, up to 60% of the emission can come from cxe
interactions with the water dissociation products atomic oxygen
and OH, and 10% from interactions with atomic hydrogen.
3.2. Solar Wind Velocity
To illustrate solar wind velocity induced variations in charge ex-
change spectra, we simulated charge exchange spectra follow-
ing solar wind interactions between an equatorial wind and a
Q = 1029 molecules s−1 comet, and assumed the same solar
wind composition in all cases. In Fig. 6, spectra resulting from
collisional velocities of 300 km s−1 and 700 km s−1 are shown.
In the spectrum from the faster wind, the C vi 367 eV and O vii
570 eV emission features are roughly equally strong, whereas at
300 km s−1, the oxygen feature is clearly stronger. Assuming the
wind’s composition remains the same, within the range of typ-
ical solar wind velocities (300–700 km s−1), the cross sectional
dependence on solar wind velocity does not affect cometary X-
ray spectra by more than a factor 1.5. In practice, the composi-
tional differences between slow and fast wind will induce much
stronger spectral changes.
Fig. 6. Simulated X-ray spectra for a 1029 molecules s−1 comet interact-
ing with an equatorial wind with velocities of 300 km s−1 (solid grey
line) and 700 km s−1 (dashed black line). The spectra are convolved
with Gaussians with a width of σ = 50 eV to simulate the Chandra
spectral resolution. To indicate the different lines, also the 700 km s−1
σ = 1 eV spectrum is indicated (not to scale). A field of view of 105 km
and ‘typical’ slow wind composition were used.
3.3. Collisional Opacity
Many of the 20+ comets that have been observed in X-rays display a typical crescent shape as the solar wind ion content
is depleted via charge exchange. Comets with low outgassing
rates around 1028 molecules s−1, such as 2P/2002 (Encke) and
9P/2005 (Tempel 1), did not display this emission morphology
Lisse et al. (2005, 2007). Whether or not the crescent shape
can be resolved depends mainly on properties of the comet (out-
gassing rate), but, to a minor extent, also on the solar wind
(velocity dependence of cross sections). Other parameters (sec-
ondary, but important), are the spatial resolution of the instru-
ment and the distance of the comet to the observer.
In a collisionally thin environment, the ratio between emis-
sion features is the product of the ion abundance ratios and the
ratio between the relevant emission cross sections:
rthin = [ n(A^q+) / n(B^q+) ] · [ σem^A(v) / σem^B(v) ]
The flux ratio for a collisionally thick system depends on the
charge states considered. In case of a bare ion A and a hydro-
genic ion B, the ratio between the photon fluxes from A and B
is given by the abundance ratio weighted by efficiency factors µ
and η:
rthick = [ n(A^q+) / ( n(B^(r−1)+) + µ(B^r+) n(B^r+) ) ] · [ η(A^q+) / η(B^(r−1)+) ]
The efficiency factor µ is a measure of how much B(r−1)+ is pro-
duced by charge exchange reactions by Bq+:
µ = σr,r−1(v) / σr(v)
where σr is the total charge exchange cross section and σr,r−1
the one electron charge changing cross section. The efficiency
factor η describes the emission yield per reaction and is given
by the ratio between the relevant emission cross section σem and
the total charge changing cross section σr:
η = σem(v) / σr(v)
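The thin- and thick-limit flux ratios and the two efficiency factors defined above can be collected in a short sketch; all numerical inputs are hypothetical placeholders, not measured cross sections or abundances:

```python
# Collisionally thin and thick flux ratios for a bare ion A^q+ and an
# H-like ion B, following the definitions in the text.

def mu(sig_r_rm1, sig_r):
    """Fraction of total charge exchange that is one-electron capture."""
    return sig_r_rm1 / sig_r

def eta(sig_em, sig_r):
    """Emission yield: photons per charge-changing reaction."""
    return sig_em / sig_r

def r_thin(n_A, n_B, sig_em_A, sig_em_B):
    # abundance ratio times emission cross-section ratio
    return (n_A / n_B) * (sig_em_A / sig_em_B)

def r_thick(n_A, n_Brm1, n_Br, mu_Br, eta_A, eta_Brm1):
    # abundance ratio weighted by the efficiency factors mu and eta
    return n_A / (n_Brm1 + mu_Br * n_Br) * (eta_A / eta_Brm1)

m = mu(3.0e-15, 4.0e-15)                    # -> 0.75 with these placeholders
print(r_thin(1.0, 2.0, 4.0e-15, 3.0e-15),
      r_thick(1.0, 0.5, 0.5, m, 0.8, 0.7))
```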
Fig. 7. Collisional opacity effects on flux ratios within the field of view.
The outer bounds of the fields of view within this survey were between
104 − 105 km, as indicated by the shaded area. We considered a 500 km
s−1 equatorial wind interacting with comets with different activities:
Q = 1028 molecules s−1 (dashed lines) and Q = 1029 molecules s−1
(solid lines). All flux ratios are normalized to 1 at infinity.
To explore the effect of collisional opacity on spectra, we
simulated two comets at 1 AU from the Sun, with gas pro-
duction rates of 1028 and 1029 molecules s−1, interacting with
a solar wind with a velocity of 500 km s−1 and an averaged
slow wind composition Schwadron and Cravens (2000). The
results are summarized in Figure 7 where different flux ratios
are shown. The behavior of these ratios as a function of aper-
ture is important because they can be used to derive relative
ionic abundances. All ratios are normalized to 1 at infinite dis-
Fig. 8. Simulated X-ray spectra for a 1029 molecules s−1 comet inter-
acting with an equatorial wind with a velocity of 300 km s−1 for fields
of view decreasing from 105 km (solid line), 104 km (dashed line) and
103 km (dotted line).
tance from the comet’s nucleus. For low activity comets with
Q ≤ 1028 molecules s−1, the collisional opacity does not affect
the comet’s X-ray spectrum. Within typical field of views all line
flux ratios are close to the collisionally thin value. For more ac-
tive comets (Q = 1029 molecules s−1), collisional opacity can
become important within the field of view. Observed flux ratios
involving C v should be treated with care, see e.g. C v/O vii and
C vi/C v, because the flux ratios within the field of view can be
affected by almost 50% and 35%, respectively. The effect is the
strongest in these cases because of the large relative abundance
of C6+, that contributes to the C v emission via sequential elec-
tron capture reactions in the collisionally thick zones. For N vii
and O viii, a small field of view of 104 km could affect the ob-
served ionic ratios by some 20%.
To further illustrate these results, we show the result-
ing X-ray spectra in Fig. 8. There, we consider a Q =
1029 molecules s−1 comet interacting with a 300 km s−1 wind
and show the effect of slowly zooming from the collisionally
thin to the collisionally thick zone around the nucleus. The field
of view decreases from 105 to 103 km. At 105 km, the spectrum
is not affected by collisionally thick emission, whereas the emis-
sion within an aperture of 1000 km is almost purely from the
interactions within the collisionally thick zones of the comet,
which can be most clearly seen by the strong enhancement of
the C v emission around 300 eV.
The results of our model efforts demonstrate that cometary
X-ray spectra reflect characteristics of the comet, the solar wind
and the observational conditions. Firstly, charge exchange cross
sections depend on the velocity of the solar wind, but its effects
are the strongest at velocities below regular solar wind velocities.
Secondly, collisional opacity can affect cometary X-ray spectra
but mainly when an active comet (Q = 1029 molecules s−1) is
observed with a small field of view (≤ 5×104 km). The dominant
factor explaining differences in cometary CXE spectra is therefore the state, and hence the composition, of the solar wind.
This implies that the spectral analysis of cometary X-ray spectra
can be used as a direct, remote quantitative and qualitative probe
of the solar wind.
Fig. 9. Chandra comet observations during the descending phase of so-
lar cycle # 23. Monthly sunspot numbers (grey line) and smoothed
monthly sunspot number (black lines) from the Solar Influences Data
Analysis Center of the Department of Solar Physics, Royal Observatory
of Belgium (http://sidc.oma.be/). Letters refer to the chronological or-
der of observation.
4. Observations
In this section, we will briefly introduce the different comet ob-
servations performed with Chandra. A summary of comet and
solar wind parameters is given in Table 3. More observational
details on the comet and a summary of the state of the solar wind
at the location of the comet during the X-ray observations can be
found in Appendix A.
4.1. Solar Wind Data
Our survey spans the whole period between solar maximum (mid
2000) and solar minimum (mid 2006), see Fig. 9. During solar
minimum, the solar wind can be classified into polar and equatorial streams, where the polar wind is found at latitudes greater than 30◦ and the equatorial wind within 15◦ of the helioequator.
Polar streams are fast (ca. 700 km s−1) and show only small vari-
ations in time, in contrast to the irregular equatorial wind. Cold,
fast wind is also ejected from coronal holes around the equator, and when these streams interact with the slower background wind, corotating interaction regions (cirs) are formed. As was
illustrated by Schwadron and Cravens (2000), different wind
types vary greatly in their composition, with the cooler, fast wind consisting on average of lower-charged ions than the hotter equatorial wind. This clear distinction disappears during solar
maximum, when at all latitudes the equatorial type of wind dom-
inates. In addition, coronal mass ejections are far more common
around solar maximum.
There is strong variability in the heavy ion densities due to variations in the solar source regions and dynamic changes in the solar wind itself (Zurbuchen & Richardson 2006). The variations mainly concern the charge state of the wind, as elemental variations are only on the order of a factor of 2 (von Steiger et al. 2000, and references therein).
We obtained solar wind data from the online data archives
of ace (proton velocities and densities from the swepam instru-
ment, heavy ion fluxes from the swics and swims instruments1)
and soho (proton fluxes from the Proton Monitor Instrument2).
Both ace and soho are located near Earth, at its Lagrangian
1 http://www.srl.caltech.edu/ace/ASC/level2/index.html
2 http://umtof.umd.edu/pm/crn/
point L1. In order to map the solar wind from L1 to the posi-
tion of the comets, we used the time shift procedure described
by Neugebauer et al. (2000). The calculations are based on
the comet ephemeris, the location of L1 and the measured wind
speed. With this procedure, the time delay between an element
of the corotating solar wind arriving at L1 and the comet can be
predicted. A disadvantage of this procedure is that it cannot ac-
count for latitudinal structures in the wind or the magnetohydro-
dynamical behavior of the wind (i.e., the propagation of shocks
and cmes). These shortcomings imply that especially for comets
that have large longitudinal, latitudinal and/or radial separations
from Earth, the solar wind data is at best an estimate of the local
wind conditions. The resulting proton velocities at the comets
near the time of the Chandra observations are shown in Fig. 10.
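The ballistic mapping idea can be sketched in a few lines. This is our own illustrative simplification of the Neugebauer et al. (2000) procedure — radial propagation at the measured wind speed plus a rigid-corotation term — not the actual ephemeris-based code, and the function and its parameters are assumptions for illustration:

```python
import math

OMEGA_SUN = 2.0 * math.pi / (25.38 * 86400.0)  # solar rotation rate, rad/s
AU_KM = 1.495978707e8                          # astronomical unit in km

def time_shift(r_l1_au, r_comet_au, dlon_deg, v_wind_kms):
    """Delay (s) between a corotating solar wind element passing L1
    and arriving at the comet: radial propagation at the measured
    wind speed plus a corotation term for the longitude difference
    dlon_deg (positive if the comet trails L1)."""
    radial = (r_comet_au - r_l1_au) * AU_KM / v_wind_kms
    corotation = math.radians(dlon_deg) / OMEGA_SUN
    return radial + corotation

# Illustrative example: a comet 0.21 AU beyond L1, trailing it by
# 10 degrees, in a 450 km/s wind -- roughly a day and a half of delay.
dt_days = time_shift(0.99, 1.2, 10.0, 450.0) / 86400.0
```

As the text notes, such a sketch shares the real procedure's blind spots: it cannot capture latitudinal structure or the propagation of shocks and cmes.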
Parallel to this helioradial and heliolongitudinal mapping, we
compared our comet survey to a 3D mhd time–dependent so-
lar wind model that was employed during most of Solar Cycle
23 (1997 - 2006) on a continuous basis when significant solar
flares were observed. The model (reported by Fry et al. (2003);
McKenna-Lawlor et al. (2006) and Z.K. Smith, private com-
munication, for, respectively, the ascending, maximum, and de-
scending phases) treats solar flare observations and maps the
progress of interplanetary shocks and cmes. The papers mentioned above provide an rms error for "hits" of ±11 hours (Smith et al. 2000; McKenna-Lawlor et al. 2006). cir fast forward shocks were also taken into account in order to differentiate between the corotating "quiet" and transient structures. It was important, in this differentiating analysis, to examine (as we have
done here) the ecliptic plane plots of both of these structures as
simulated by the deforming interplanetary magnetic field lines
(see, for example, Lisse et al. 2005, 2007, for several of the comets discussed here). Therefore, the various comet locations
(Table 3) were used to estimate the probability of their X-ray
emission during the observations being influenced by either of
these heliospheric situations.
4.2. X-ray Observations
Since its launch in 1999, the Chandra X-ray Observatory has observed 8 comets with its Advanced ccd Imaging Spectrometer (acis). Here, we have mainly considered observations made with the acis-S3 chip, which has the most sensitive low energy response and on which the majority of comets were centered. Chandra's acis-S instrument provides moderate energy resolution (σ ≈ 50 eV) in the 300 to 1500 eV energy range, the primary range for the relatively soft cometary
emission. All comets in our sample were re-mapped into comet-
centered coordinates using the standard Chandra Interactive
Analysis of Observations (ciao v3.4) software ‘sso freeze’ al-
gorithm.
Comet source spectra were extracted from the S3 chip with
a circular aperture with a diameter of 7.5′, centered on the
cometary emission. The exception was comet C/2001 Q4, which filled the chip, so a 50% larger aperture was used. acis’ response matrices were used to model the instrument’s effective
area and energy dependent sensitivity matrices were created for
each comet separately using the standard ciao tools.
Due to the large extent of cometary X-ray emission, and
Chandra’s relatively narrow field of view, it is not trivial to ob-
tain a background uncontaminated by the comet and sufficiently
close in time and viewing direction. We extracted background
spectra using several techniques: spectra from the S3 chip in
an outer region generally > 8′, an available acis S3 blank sky
observation, and backgrounds extracted from the S1 ccd. For
Fig. 10. Solar wind proton velocities estimated from ace and soho data. For all comets, the time of the observations is indicated with a dotted line.
Letters refer to the chronological order of observation.
several comets there are still a significant number of cometary
counts in the outer region of the S3 ccd. Background spec-
tra taken from the S1 chip have the advantage of having been
taken simultaneous with the S3 observation and thus having
the same space environment as the S3 observation. In general
the background spectra were extracted with the same 7.5′ aper-
ture as the source spectra but centered on the S1 chip. For comet Encke, where the S1 chip was off during the observation, the background from the outer region of the S3 chip was
used. Comet C/2000 WM1 (linear) was observed with the Low-
Energy Transmission Grating (letg) and acis-S array. For the
latter, we analyzed the zero-th order spectrum, and used a back-
ground extracted from the outer region of the S3 chip. It is possi-
ble that the proportion of incident X-rays diffracted onto the S3
chip will vary with photon energy. Background-subtracted spec-
tra generally have a signal-to-noise at 561 eV of at least 10, and
over 50 for 153P/2002 C1 (Ikeya–Zhang).
5. Spectroscopy
The observed spectra are shown in Figure 11. The spectra
suggest a classification based upon three competing emission
features, i.e. the combined carbon and nitrogen emission (be-
low 500 eV), O vii emission around 565 eV and O viii emis-
sion at 654 eV. Firstly, the C+N emission (<500 eV) seems to
be anti-correlated with the oxygen emission. This clearly sets
the spectra of 73P/2006 S.–W.3B and 2P/2003 (Encke) apart, as
for those two comets the C+N features are roughly as strong as
the O vii emission. In the spectra of the remaining five comets,
oxygen emission dominates over the carbon and nitrogen emis-
sion below 500 eV. The O viii/O vii ratio can be seen to increase
continuously, culminating in the spectrum of 153P/2002 (Ikeya–
Zhang) where the spectrum is completely dominated by oxygen
emission with almost comparable O viii and O vii emission fea-
tures. From our modelling, we expect that the separate classes
reflect different states of the solar wind, which imply different
ionic abundances. To explore the obtained spectra more quanti-
tatively, we will use a spectral fitting technique based on our cxe
model to extract X-ray line fluxes.
5.1. Spectral Fitting
The charge exchange mechanism implies that cometary X-ray
spectra result from a set of solar wind ions, which produce at
least 35 emission lines in the regime visible with Chandra. As
comets are extended sources, these lines cannot all be resolved.
All spectra were therefore fit using the 6 groups of fixed lines of
our cxe model (see Table 2) and spectral parameters were de-
rived using the least squares fitting procedure with the xspec
package. The relative strengths from all lines were fixed per
ionic species, according to their velocity dependent emission
cross sections. Thus, the free parameters were the relative fluxes
of the C, N and O ions contained in our model.
Two additional Ne lines at 907 eV (Ne ix) and 1024 eV
(Ne x) were also included, giving a total of 8 free parameters.
All line widths were fixed at the acis-S3 instrument resolution.
The spectra were fit in the 300 to 1000 eV range. This pro-
vided 49 spectral bins, and thus 41 degrees of freedom. acis
spectra below 300 eV are discarded because of the rising back-
ground contributions, calibration problems and a decreased ef-
fective area near the instrument’s carbon edge.
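Because the model is linear in the per-ion fluxes once the line energies and relative strengths are fixed, the fitting scheme above can be sketched as a plain linear least-squares solve. The line lists and strengths below are illustrative placeholders, not the 35-line velocity-dependent table of the cxe model:

```python
import numpy as np

# Illustrative line energies (eV) and fixed relative strengths per ion;
# placeholders standing in for the full velocity-dependent table.
LINES = {
    "C V":    ([299.0, 304.4], [1.00, 0.30]),
    "O VII":  ([561.0, 568.6, 574.0], [1.00, 0.25, 0.15]),
    "O VIII": ([653.5, 774.6], [1.00, 0.12]),
}
SIGMA_INSTR = 50.0  # eV, fixed at the ACIS-S3 resolution

def templates(energies):
    """One fixed-shape column per ionic species: Gaussians at the
    tabulated energies with fixed relative strengths."""
    cols = []
    for line_energies, weights in LINES.values():
        t = np.zeros_like(energies)
        for e0, w in zip(line_energies, weights):
            t += w * np.exp(-0.5 * ((energies - e0) / SIGMA_INSTR) ** 2)
        cols.append(t)
    return np.column_stack(cols)

def fit_fluxes(energies, counts):
    """Least-squares solve; the only free parameters are the per-ion
    normalizations, mirroring the xspec fit described in the text."""
    design = templates(energies)
    fluxes, *_ = np.linalg.lstsq(design, counts, rcond=None)
    return dict(zip(LINES, fluxes))
```

The least-squares solution is unique whenever the templates are linearly independent; overlapping features at the instrument resolution show up as ill-conditioned columns, which is the linear-algebra face of the anti-correlations discussed in Section 5.2.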
As a more detailed example of the cxe model and comparison to the data, we show in Fig. 12 the acis-S3 data for C/1999
S4 (linear). The figure shows the background subtracted source
spectrum over-plotted with the background spectrum, the differ-
ence between the model and data, and the model spectrum and
Fig. 11. Observed spectrum and fit of all 8 comets observed with
Chandra, grouped by their spectral shape (see text). The histogram lines
indicate the CXE model fit.
data to indicate the contribution of the different ions. Only the emission lines with >3% of the strength of the strongest line in their species are shown for ease of presentation.
The fluxes obtained by our fitting are converted into relative
ionic abundances by weighting them by their velocity dependent
emission cross sections. For comets observed near the ecliptic
plane (< 15◦), solar wind conditions mapped to the comet were
used (Section 4.1). For comets observed at higher latitudes, these
data are most likely not applicable and a solar wind velocity of
500 km s−1 was assumed.
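The flux-to-abundance weighting can be sketched as follows; the cross-section values here are hypothetical placeholders standing in for the velocity-dependent emission cross sections of the model:

```python
# Hypothetical emission cross sections (cm^2) at the assumed wind
# speed -- placeholders, not the tabulated values of the cxe model.
EMISSION_XSEC = {"C5+": 3.0e-15, "C6+": 4.2e-15,
                 "O7+": 3.4e-15, "O8+": 4.5e-15}

def relative_abundances(fluxes, reference="O7+"):
    """n_i / n_ref = (F_i / sigma_i) / (F_ref / sigma_ref): each
    fitted flux is weighted by its emission cross section, then
    normalized to the reference ion."""
    weighted = {ion: f / EMISSION_XSEC[ion] for ion, f in fluxes.items()}
    ref = weighted[reference]
    return {ion: w / ref for ion, w in weighted.items()}
```

Since the cross sections are velocity dependent, the same fitted fluxes yield different abundances for different assumed wind speeds, which is why the mapped (or assumed 500 km s−1) velocities matter.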
Fig. 12. Details of the cxe fit for the spectrum of comet 1999/S4
(linear). Top panel: Comet (filled triangles) and background (open
squares) spectrum. Middle panel: Residuals of the cxe fit. Bottom panel: cxe model and observed spectrum indicating the different lines and their
strengths. Carbon - red; nitrogen - orange; oxygen - blue; neon - green.
The unfolded model is scaled above the emission lines for the ease of
presentation.
Fig. 13. Parameter sensitivity for the major emission features in the fit
of C/1999 S4 (linear), with respect to the O vii 561 eV feature. All units
are 10⁻⁴ photons cm−2 s−1. The contours indicate a χ²R of 9.2 (or 99% confidence; largest, green contour), a χ²R of 4.6 (90%, red contour) and a χ²R of 2.3 (68%; smallest, blue contour).
5.2. Spectroscopic Results
The fits to all cometary spectra are shown in Fig. 11 and the
results of the fits are given in Table 4. For the majority of the
comets, the model is a good fit to the data within a 95% confidence limit (χ²R ≈ 1.4). Results for comet 153P/2002 (Ikeya–
Zhang) are presented in Table 5 with an additional systematic
error to account for its brightness and any uncertainties in the
response.
The spectra for all comets are well reproduced in the 300 to
1000 eV range. The nitrogen contribution is statistically signifi-
cant for all comets except the fainter ones, 2P/2003 (Encke) and
Table 6. Solar wind abundance relative to O7+, obtained for comet linear S4. References: Bei ’03 – Beiersdorfer et al. (2003), Kra ’04 –
Krasnopolsky (2004), Kra ’06 – Krasnopolsky (2006), Otr ’06 – Otranto et al (2006) and S&C ’00 – Schwadron and Cravens (2000). Dots
indicate that an ion was included in the fitting, but no abundances were derived; dash means that an ion was not included in the fitting. Otranto et
al. (2006) did not fit the observed spectrum, but used a combination of ace data and solar wind averages from Schwadron and Cravens (2000) to compute a synthetic spectrum of the comet. Solar wind averages are given for comparison (Schwadron & Cravens 2000).
Ion this work Bei 03 Kra 04 Kra 06 Otr 06 S&C 00
O8+ 0.32 ± 0.03 0.13 ± 0.03 0.13 ± 0.05 0.15 ± 0.03 0.35 0.35
C6+ 1.4 ± 0.4 0.9 ± 0.3 0.7 ± 0.2 0.7 ± 0.2 1.02 1.59
C5+ 12 ± 4.0 11 ± 9 . . . 1.7 ± 0.7 1.05 1.05
N7+ 0.07 ± 0.06 0.06 ± 0.02 – – 0.03 0.03
N6+ 0.63 ± 0.21 0.5 ± 0.3 – – 0.29 0.29
Ne10+ 0.02 ± 0.01 – – – – –
Ne9+ . . . – (15 ± 6) × 10−3 (20 ± 7) × 10−3 – –
73P/2006 (S.-W.3B). For example, removing the nitrogen components from linear S4’s cxe model and re-fitting increases χ²R to over 7.
χ² contours for C/1999 S4 (linear) are presented in Fig. 13.
The line strengths for each ionic species are generally well con-
strained, except where spectral features overlap. This can be
readily seen when comparing the contours for the N vii 500 eV
and O vii 561 eV features where a strong anti-correlation exists
(Figure 12). Due to the limited resolution of acis an increase in
the N vii feature will decrease the O vii strength. Similar anti-
correlations exist between the nitrogen N vi or N vii and C v
299 eV lines. Since the line strength for the main line in each
ionic species is linked to weaker lines, a range of energies can
contribute and better constrain its strength. However, with O vii as the strongest spectral feature, the nitrogen and carbon components may be artificially lower as a result of the aforementioned
anti-correlations. The lack of effective area due to the carbon
edge in the acis response also may over-estimate the C v line
flux. The neon features were well constrained for the brighter
comets, but this is a region of lower signal; some caution must be taken when treating the neon line strengths, and they are included here largely for completeness.
In the case of 153P/Ikeya–Zhang, χ²R > 1.4. The main discrepancy is that the model does not produce enough flux in the 700 to 850 eV range compared to the observed spectrum. This
may reflect an underestimation of higher O viii transitions or the
presence of species not (yet) included in the model, such as Fe.
This will be discussed further in the last section of this paper and
in a separate paper dedicated to the observations of this comet
(K. Dennerl, private communication).
One of the best studied comets is C/1999 S4 (linear), be-
cause of its good signal-to-noise ratio. To discuss our results, we
will compare our findings with earlier studies of this comet. In
general, the spectra analyzed here have more counts than earlier analyses, because of improvements in the Chandra processing software and because we took special care to use a background that is as comet-free as possible. Previous studies appear
to have removed true comet signal when the background subtrac-
tion was performed. In particular, both the Krasnopolsky (2004)
and Lisse et al. (2001) studies used background regions from the outer part of the S3 chip, which may still have contained true cometary emission. Krasnopolsky (2004) subtracted over 70%
of the total signal as background. We find that using the S1-chip,
the background contributes only 20% of the total counts.
Different attempts to derive relative ionic abundances from
C1999/S4’s X-ray spectrum are compared in Table 6. Our atomic
physics based spectral analysis combines the benefits of ear-
lier analytical approaches by Kharchenko and Dalgarno (2000,
2001); Beiersdorfer et al. (2003). These methods were all ap-
plied to just one or two comets. Beiersdorfer et al. (2003) inter-
pret C1999/S4’s X-ray spectrum by fitting it with 6 experimental
spectra obtained with their ebit setup. The resulting abundances
are very similar to ours. The advantage of their method is that
it includes multiple electron capture, but in order to observe the
forbidden line emission, the spectra were obtained with trapped ions colliding with CO2 at collision energies of 200 to 300 eV or ca. 30 km s−1. As was shown in Fig. 3, the cxe hardness ratio
may change rapidly below 300 km s−1, implying an overesti-
mation of the higher order lines compared to the n = 2 → 1
transition, which for O vii overlap with the O viii emission. We
therefore find higher abundances of O8+.
Krasnopolsky (2004, 2006) obtained fluxes and ionic abun-
dances by fitting the spectrum with 10 lines of which the energies
were semi-free. Their analysis thus does not take the contamina-
tion of unresolved emission into account, and N vi and N vii are
not included in the fit. The line energies were attributed to cxe
lines of mainly solar wind C and O but also to ions of Mg and Ne.
The inclusion of the resulting low energy emission (near 300 eV)
results in lower C5+ fluxes (see also Otranto et al (2006)).
There are several factors that may contribute to the unexpect-
edly low C vi/C v ratios: 1) There may be a small contribution to
the C v line from other ions in the 250-300 eV range (e.g. Si,
Mg, Ne) that are currently not included in the model. Including
these species in the model would lower the C v flux, but proba-
bly only with a small amount. 2) The low acis effective area in
the 250-300 eV region allows the C v flux to be unconstrained,
and this increases the uncertainty in the C v flux. We estimate
that the uncertainty in the effective area, introduced by the car-
bon edge, can account for an uncertainty as large as a factor of
10 in the observed C v/C vi ratios.
We will not compare our results with measured ace/swics
ionic data. As discussed in section 4, the solar wind is highly
variable in time and its composition can change dramatically
over the course of less than a day. Variations in the solar wind’s
ionic composition are often more than 50% during the course of
an observation. Data on N, Ne, and O8+ ions have not been well
documented as the errors of these abundances are dominated by
counting statistics. As discussed above, latitudinal and corota-
tional separations imply large inaccuracies in any solar wind
mapping procedure. These conditions clearly disfavor modelling
based on either average solar wind data or ace/swics data.
6. Comparative Results
As noted in Section 5, spectral differences show up in the be-
havior of the low energy C+N emission (< 500 eV), the O vii
Table 7. Correlation between classification according to spectral shape and comet/solar wind characteristics during the observations. Comet
families from Marsden & Williams (2005). Phase refers to where in the solar cycle the comet was observed, where 1 is the solar maximum and 0
the solar minimum of cycle #23’s descending phase. For other references, see Table 3.
Class # Comet Comet Q Latitude Wind Type
Family (10²⁸ mol. s−1)
cold H 73P/2006 (S.-W.3B) Jupiter 2 0.5 cir
E 2P/2003 (Encke) Jupiter 0.7 11.4 Flare/PS
warm F C/2001 Q4 (neat) unknown 10 -3 Quiet
G 9P/2005 (Tempel 1) Jupiter 0.9 0.8 Quiet
hot C C/2000 WM1 (linear) unknown 3-9 -34 PS
A C/1999 S4 (linear) unknown 3 24 icme
B C/1999 T1 (McNaught–Hartley) unknown 6-20 15 Flare/cir
D C/2002 C1 (Ikeya–Zhang) Oort 20 26 icme
Fig. 14. Flux ratios of all observed comets. The low energy C+N feature is anti-correlated with the oxygen ionic ratio. Letters refer to the chronological order of observation.
Fig. 15. Ion ratios of all observed comets. The C+N ionic abundance is anti-correlated with the oxygen ionic ratio. Letters refer to the chronological order of observation.
emission at 561 eV and the O viii emission at 653 eV. Figure 14
shows a color plot of the fluxes of these three emission features,
and Figure 15 the corresponding ionic abundances. There is a
clear separation between the two comets with a large C+N con-
tribution and the other ‘oxygen-dominated’ comets, which in turn show a gradual increase in the oxygen ionic ratio. This sample of comet observations suggests that we can distinguish
two or three spectral classes.
Table 7 surveys the comet parameters for the different spec-
tral classes. The outgassing rate, heliocentric- or geocentric dis-
tance and comet family do not correlate to the different classes,
in accordance with our model findings. The data does suggest
a correlation between latitude and wind conditions during the
observations. At first sight, the apparent correlation between lat-
itude and oxygen ratio seems paradoxical. According to the bimodal structure of the solar wind, the fast, cold wind dominates at latitudes > 15◦, implying less O viii emission. In Figure 9,
the comet observations are shown with respect to the phase of
the last solar cycle. Interestingly, we note that all comets that
were observed at higher latitudes were observed around solar
maximum. The solar wind is highly chaotic during solar maxi-
mum and the frequency of impulsive events like CMEs is much
higher than during the descending and minimum phase of the
cycle. This explains both why the comets observed in the period
2000–2002 encountered a disturbed solar wind and why our sur-
vey does not contain a sample of the cool fast wind from polar
coronal holes.
The observed classification can therefore be fully ascribed
to solar wind states. The first class is associated with cold, fast
winds with lower average ionization. These winds are found in
cirs and behind flare related shocks. The spectra due to these
winds are dominated by the low energy X-rays, because of the
low abundances of highly charged oxygen. At the relevant tem-
peratures, most of the solar wind oxygen is He-like O6+, which
does not produce any emission visible in the 300–1000 eV
regime accessible with Chandra. Secondly, there is an intermediate class of two comets, both observed during periods of quiet solar wind. These comets interacted with the equatorial,
warm slow wind. The third class then comprises comets that in-
teracted with a fast, hot, disturbed wind associated with icmes
or flares. From the solar wind data, Ikeya–Zhang was proba-
bly the most extreme example of this case. This comet had 10
times more signal than any other comet in our sample and small
discrepancies in the response may be important at this level.
Extending into the 1-2 keV regime, a preliminary analysis indi-
cates the presence of bare and H-like Si, Mg and Fe xv-xx ions, in accordance with ace measurements of icme compositions (Lepri & Zurbuchen 2004).
The variability and complex nature of the solar wind allows for many intermediate states between these three categories (Zurbuchen et al. 2002), which explains the gradual increase of the O viii/O vii ratio that we observed in the cometary spectra. As the solar wind is a collisionless plasma, the charge state
Fig. 16. Spectrum derived ionic oxygen ratios and corresponding
freezing-in temperatures from Mazotta et al. (1998). The shaded area
indicates the typical range of slow wind associated with streamers.
Letters refer to the chronological order of observation.
distribution in the solar wind is linked to the temperature in its
source region. Ionic temperatures are therefore a good indicator
of the state of the wind encountered by a comet. The ratio be-
tween O7+ and O6+ ionic abundances has been demonstrated to
be a good probe of solar wind states. Zurbuchen et al. (2002)
observed that slow, warm wind associated with streamers typically lies within 0.1 < O7+/O6+ < 1.0, corresponding to freezing-in temperatures of 1.3–2.1 MK. The corresponding temperature range is indicated in Figure 16. In the figure, we show
the observed O8+ to O7+ ratios and the corresponding freezing-
in temperatures from the ionizational/recombination equilibrium
model by Mazotta et al. (1998). Most observations are within or
near the streamer-associated range of oxygen freezing-in temperatures. Four comets interacted with a wind significantly hotter than typical streamer winds, and in all four cases we found
evidence in solar wind archives that the comets most likely en-
countered a disturbed wind.
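A minimal sketch of this diagnostic, assuming the streamer range of Zurbuchen et al. (2002) quoted above; the class labels and the boundaries outside that range are our own illustrative shorthand, not thresholds given in the paper:

```python
def classify_wind(o7_o6):
    """Rough wind-state label from the O7+/O6+ freezing-in ratio.
    The streamer range 0.1 < O7+/O6+ < 1.0 follows Zurbuchen et al.
    (2002); the labels are illustrative shorthand for the three
    spectral classes discussed in the text."""
    if o7_o6 < 0.1:
        return "cold/fast (coronal-hole or CIR-associated)"
    if o7_o6 <= 1.0:
        return "warm/slow (streamer-associated)"
    return "hot/disturbed (ICME- or flare-related)"
```

In this picture, a comet's spectrum-derived oxygen ratio maps directly onto the wind state it encountered, which is how the three spectral classes of Table 7 are read off Figure 16.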
7. Conclusions
Cometary X-ray emission arises from collisions between bare-
and H-like ions (such as C, N, O, Ne, . . . ) with mainly water and
its dissociation products OH, O and H. The manifold of depen-
dencies of the cxe mechanism on characteristics of both comet
and wind offers many diagnostic opportunities, which are ex-
plored in the first part of this paper. Charge exchange cross sec-
tions are strongly dependent on the velocity of the solar wind,
and these effects are strongest at velocities below the regular
wind conditions. This dependency might be used as a remote plasma diagnostic in future observations. Ruling out collisional
opacity effects, we used our model to demonstrate that the spec-
tral shape of cometary cxe emission is in the first place deter-
mined by local solar wind conditions. Cometary X-ray spectra
hence reflect the state of the solar wind.
Based on atomic physics modelling of cometary charge exchange emission, we developed an analytical method to study
cometary X-ray spectra. First, the data of 8 comets observed
with Chandra were carefully reprocessed to avoid the subtrac-
tion of cometary signal as background. The spectra were then
fit using an extensive data set of velocity dependent emission
cross sections for eight different solar wind species. Although
the limited observational resolution currently available hampers
the interpretation of cometary X-ray spectra to some degree, our spectral analysis allowed us to unravel the cometary X-ray spectra and to derive relative solar wind abundances from them.
Because the solar wind is a collisionless plasma, local ionic
charge states reflect conditions of its source regions. Comparing
the fluxes of the C+N emission below 500 eV, the O vii emission
and the O viii emission yields a quantitative probe of the state
of the wind. In accordance with our modelling, we found that
spectral differences amongst the comets in our survey could be
very well understood in terms of solar wind conditions. We are
able to distinguish interactions with three different wind types,
being the cold, fast wind (I), the warm, slow wind (II); and the
hot, fast, disturbed wind (III). Based on our findings, we pre-
dict the existence of even cooler cometary X-ray spectra when a
comet interacts with the fast, cool high latitude wind from polar
coronal holes. The upcoming solar minimum offers the perfect
opportunity for such an observation.
Acknowledgements. DB and RH acknowledge support within the framework of
the fom–euratom association agreement and by the Netherlands Organization
for Scientific Research (nwo). MD thanks the noaa Space Environment Center
for its post-retirement hospitality. We are grateful for the cometary ephemerides
of D. K. Yeomans published at the jpl/horizons website. Proton velocities used
here are courtesy of the soho/celias/pm team. soho is a mission of international
cooperation between esa and nasa. Chianti is a collaborative project involving
the nrl (USA), ral (UK), mssl (UK), the Universities of Florence (Italy) and
Cambridge (UK), and George Mason University (USA).
References
Ali, R., Neill, P. A., Beiersdorfer, P., Harris, C. L., Rakovic, M. J., Wang, J. G.,
Schultz, D. R., Stancil, P. C., 2005, ApJ, 629, L125
Beiersdorfer, P., Lisse, C. M., Olson, R. E., Brown, G. V., Chen, H., 2001 ApJ,
549 L147
Beiersdorfer, P. et al., 2003, Science, 300, 1558
Biver, N., et al., 2006, A&A, 449, 1255
Bliek, F. W., Woestenenk, G. R., Hoekstra, R. and Morgenstern, R., 1998,
Phys. Rev. A, 57, 221
Bockelée-Morvan, D., et al., 2001, Science, 292, 1339
Bodewits, D., Juhász, Z., Hoekstra, R., & Tielens, A. G. G. M., 2004a, ApJ, 606,
Bodewits, D., McCullough, R. W., Tielens, A. G. G. M. & Hoekstra, R., 2004b,
Phys. Scr. 70, C17
Bodewits, D., Hoekstra, R., Seredyuk, B., R. W. McCullough, G. H. Jones & A.
G. G. M. Tielens, 2006, ApJ, 642, 593
Brown et al., in prep
Cravens, T., 1997, Geophys. Res. Lett., 24, 105
Dennerl, K., et al., 1997, Science, 277, 1625
Dennerl, K., 2002, A&A, 394, 1119
Dere, K. P., Landi, E., Mason, H. E., Monsignori Fossi, B. C., Young, P. R., 1997,
Astron. Astrop. Suppl., 125, 149
Drake, G. W., 1988, Can. J. Phys.,66, 586
Dello Russo, N., Disanti, M. A., Magee-Sauer, K., Gibb, E. L.; Mumma, M. J.,
Barber, R. J., Tennyson, J., 2004, Icarus, 168, 186
Dryer, M., Fry, C.D., Sun, W., Deehr, C.S., Smith, Z., Akasofu, S.-I. and
Andrews, M.D., 2001, Solar. Phys., 204, 627
Errea, L. F., Illescas, C., Méndez, L., Pons, B., Riera, A., Suárez, J., 2004, J.Phys
B, 37, 4323
Farnham, T. L., et al., 2001, Science, 292, 1348
Festou, M. C., 1981, A&A, 95, 69
Friedel, D. N., Remijan, A. J., Snyder, L. E., A’Hearn, M.F., Blake, G. A.,
de Pater, I., Dickel, H. R., Forster, J. R., Hogerheijde, M. R., Kraybill, C.,
Looney, L. W., Palmer, P., Wright, M. C. H., 2005, ApJ, 630, 623
Fritsch, W., & Lin, C. D., 1984, Phys. Rev. A, 29, 3039
Fry, C.D., Dryer, M., Deehr, C.S., Sun, W., Akasofu, S.-I, Smith, Z., 2003,
J. Geophys. Res., 108 (A2), 1070
Green, T. A., Shipsey, E. J., Browne, J. C., 1982, Phys. Rev. A, 25, 1364
Garcia, J. D., Mack, J. E., 1965, J. Opt. Soc. Am., 55, 654
Greenwood, J. B., Williams, I. D., Smith, S. J., & Chutjian, A., 2000, ApJ, 2533,
Greenwood, J. B., Williams, I. D., Smith, S. J., and Chutjian, A., 2001,
Phys. Rev. A63, 062707
Haeberli, R. M., Gombosi, T. I., DeZeeuw, D. L., Combi, M. R., Powell, K. G.,
1997, Science, 276, 939
Haser, L., 1957, Bull. Acad. Roy. Sci. Liège, 43, 740
Hoekstra, R., Čirič, D., de Heer, F. J. and Morgenstern, R., 1989, Phys. Scr., 28,
Huebner, W. F., & Keady, J. J., & Lyon, S. P. 1992, Ap&SS, 195,1
Kharchenko, V., Dalgarno, A., 2000, J. Geophys. Res., 105, 18351
Kharchenko, V., Dalgarno, A., J., 2001, ApJ, 554, 99
Krasnopolsky, V. A., 1997, Icarus, 128, 368
Krasnopolsky, V. A., et al., 1997, Science, 277,1488
Krasnopolsky, V. A., Christian, D. J., Kharchenko, V., Dalgarno, A., Wolk, S. J.,
Lisse, C. M., Stern, S. A., 2002, Icarus, 160, 437
Krasnopolsky, V. A., 2004, Icarus, 167, 417
Krasnopolsky, V. A., Greenwood, J. B., Stancil, P. C., 2004, Space Sci. Rev.,
113, 271
Krasnopolsky, V. A., 2006, J. Geophys. Res., 111, A12102
Landi et al., 2006, Ap J. Supp., 162, 261
Lepri, S.T, and Zurbuchen, T. H., 2004, J. Geophys. Res., 109, A01112
Lisse, C. M. et al., 1996, Science, 274, 205
Lisse, C. M. et al., 2001, Science, 292, 1343
Lisse, C. M., Cravens, T. E., Dennerl, K., 2004, in Comets II, M. C. Festou,
H. U. Keller, and H. A. Weaver (eds.), University of Arizona Press, Tucson,
p.631-643
Lisse, C. M., Christian, D. J., Dennerl, K., Wolk, S. J., Bodewits, D., Hoekstra,
R., Combi, M. R., Makinen, T., Dryer, M., Fry, C. D., Weaver, H., 2005, ApJ,
635, 1329
Lisse, C. M. et al., 2007, Icarus, in press
Marsden, B.G. & Williams, G.V., 2005, Catalogue of Cometary Orbits 2005,
XX-th edition, (Cambridge: Smithson. Astrophys. Obs.)
Mazzotta, P., Mazzitelli, G., Colafrancesco, S., and Vittorio, N., 1998, A&AS,
133, 403
McKenna-Lawlor et al., 2006, J. Geophys. Res., 111, A11103
Mumma, M. J., Krasnopolsky, V. A., Abbott, M.J., 1997, ApJ, 491, 125
Mumma, M. J., et al., 2001, IAU Circ., 7578
Mumma, M. J. et al., 2005, Science, 310, 2703
Neugebauer, M., Cravens, T. E., Lisse, C. M., Ipavich F. M., Christian, D.,
Von Steiger, R., Bochsler, P., Shah, P. D. and Armstrong, T. P., 2000,
J. Geophys. Res., 105, 20949
Otranto, S., Olson, R. E., Beiersdorfer, P., 2006, Phys. Rev. A73, id. 022723
Porquet, D. & Dubau, J., 2000, Astron. Astroph. Suppl., 143, 495
Porquet, D., Mewe, R., Dubau, J., Raassen, A. J. J., Kaastra, J. S., 2001, Astron.
Astroph., 376, 1113
Schwadron, N. A., Cravens, T. E., 2000, ApJ, 544, 558
Schleicher, D .L., 2001, IAU Circ., 7558
Schleicher, D .L., Woodney, L. M., Birch, P. V., 2002, Earth, Moon and Planets,
90, 401
Schleicher, D .L., Barnes, K. L., Baugh, N. F., 2006, AJ, 131, 1130
Shipsey, E. J., Green, T. A., Browne, J. C., 1982, Phys. Rev. A, 27, 821
Smith, Z.M., Dryer, M., Ort, E. and Murtagh, W., 2000, J. Atm. Solar-Terr. Phys.,
62, 1265
Snowden et al., 2004, ApJ, 610, 1182
Savukov, I. M., Johnson, W. R., Safronova, U. I., 2003, Atomic Data and Nuclear
Data Tables, 85, 83
Suraud, M. G., Bonnet, J. J., Bonnefoy, M., Chassevent, M., Fleury, A., Bliman,
S., Dousson, S., Hitz, D., 1991, J. Phys. B., 24, 2543
Vainshtein, L. A.; Safronova, U. I., 1985, Phys. Scr., 31, 519
von Steiger, R., Schwadron, N. A., Fisk, L. A., Geiss, J., Gloeckler, G.,
Hefti, S., Wilken, B., Wimmer-Schweingruber, R.F., Zurbuchen, T. H., 2000,
J. Geophys. Res., 105, 27217
Weaver, H. A., P. D. Feldman, M. R. Combi, V. A. Krasnopolsky, C. M. Lisse,
and D. E. Shemansky, 2002, ApJ, 576, L95
Wegmann, R., Schmidt, H. U., Lisse, C. M., Dennerl, K., Englhauser, J., 1998,
Planet. Space Sci., 46, 603
Wegmann, R., Dennerl, K., & Lisse, C.M., 2004, A&A, 428, 647
Wegmann, R., Dennerl, K., 2005, A&A, 430, L33
Willingale, R. et al., 2006, ApJ, 649, 541
Zurbuchen, T. H., et al., 2002, Geophys. Res. Lett., 29, 66-1
Zurbuchen, T.H. & Richardson, 2006, Space Sci. Rev., in press
Appendix A: Observations within this Survey
This Appendix presents the observational details of the Chandra
data and the corresponding solar wind state. The prefix 'FF'
(fearless forecast) used in this appendix refers to the real-time
forecasting of coronal mass ejection shock arrivals at Earth; the
numbers label flare/coronal shock events during solar cycle #23.
A.1. C/1999 S4 (linear)
X-rays. The first Chandra cometary observation was of comet
C/1999 S4 (linear) (Lisse et al. 2001), with observations being
made both before and after the breakup of the nucleus. Due to the
low signal-to-noise ratio of the second detection, only the July
14th 2000 pre-breakup observation is discussed here. Summing
the 8 pointings of the satellite gave a total time interval of 9390 s.
In this period, the acis-S3 ccd collected a total of 11 710 photons
in the range 300–1000 eV. Detections outside this
range or on other acis-ccds were not attributed to the comet. As
a result, data from the S1-ccd (which is configured identically to
S3) may be used as an indicator of the local X-ray background.
The morphology can be described by a crescent shape, with
the maximum brightness point 24 000 km from the nucleus
on the Sun-facing side. The brightness dims to 10% of the
maximum level at 110 000 km from the nucleus.
Solar wind. A large velocity jump can be seen around DoY 199,
which was due to the famous "Bastille Day" flare on 14 July
(FF#153, Dryer et al. (2001); Fry et al. (2003)). This flare
reached the comet only after the first observation. On July 12, at
2017 UT, a solar flare started at N17W65 (FF#152), which was
well placed to hit this comet with a very high probability during
the first observations (Fry et al. 2003). As for the second
observation, there was another flare on July 28, S17E24, at 1713
UT (FF#164) and there was a high probability that its shock’s
weaker flank hit the comet.
A.2. C/1999 T1 (McNaught–Hartley)
X-rays. The allocated observing time of comet McNaught–
Hartley was partitioned into 5 one-hour slots between January
8th and January 15th, 2001 (Krasnopolsky et al. 2002). The
strongest observing period was on January 8th, when ∆ = 1.37
AU and rh = 1.26 AU.
There were 15 000 total counts observed by the acis-S3 ccd
between 300 and 1000 eV. The emission region can be described
by a crescent, with the peak brightness at 29 000 km from
the nucleus. The brightness dims to 10% of the maximum at a
cometocentric distance of 260 000 km. Again, the acis-S1 ccd
may be used to indicate the local background signal.
Solar wind. The comet was not within the heliospheric cur-
rent/plasma sheet (HCS/HPS). Two corotating cirs are probably
associated with the first two observations. Two flares (FF#233
and #234) took place; however, another corotating cir more
likely arrived before the flare’s transient shock’s effects did
McKenna-Lawlor et al. (2006).
A.3. C/2000 WM1 (linear)
X-rays. The only attempt to use the high-resolution grat-
ing capability of the acis-S array was made with comet
C/2000 WM1 (linear). Here, the Low-Energy Transmission
Grating (letg) was used. The dimness of the observed X-rays,
and the extended nature of the emitting atmosphere meant that
the grated spectra did not yield significant results. It is still
possible to extract a spectrum based on the pulse-heights gener-
ated by each X-ray detection on the acis-S3 chip, although the
morphology is not recorded. 6300 total counts were recorded for
the pulse-height spectrum of the S3 chip in the 300 to 1000 eV
range.
Solar wind. Comet WM1 was observed at the highest latitude
available within this survey, and at a latitude of 34 degrees, it was
far outside the hcs. During the observations, this comet might
have experienced the southerly flank of the shock of a strong
X3.4 flare at S20E97 and its icme and shock on December 28,
2001 (FF#359) McKenna-Lawlor et al. (2006).
A.4. 153P/2002 (Ikeya–Zhang)
X-rays. The brightest X-ray comet in the Chandra archive is
153P/2002 (Ikeya–Zhang). The heliographic latitude, geocentric
distance and heliocentric distance were comparable to those for
comet C/1999 S4 (linear), with a latitude of 26◦, ∆ = 0.457 AU
and rh = 0.8 AU. Rather than periodically re-point the detector
to track the comet, the pointing direction was fixed and the
comet was monitored as it passed through the field of view, thus
increasing the effective FoV. There were two observing periods
on April 15th 2002, each lasting for approximately 3 hours and
15 minutes. In both periods, a strong cometary signal is detected
on all of the activated acis-ccds. Consequently, a background
signal cannot be taken from the observation. A crescent shape
on the Sun side of the comet is observed over all of the ccd
array. Over 200 000 total counts were observed from the S3
chip in the 300 to 1000 eV range. The time intervals for each
observing period are 11 570 and 11 813 seconds.
Solar wind. Like C/2000 WM1, this comet was observed at a
relatively high heliographic latitude. Solar wind data obtained
in the ecliptic plane can therefore not be used to determine the
wind state at the comet. 153P/2002 (Ikeya–Zhang) was well-
positioned during the first observation on 15 April 2002 for a
flare at N16E05 (FF#388) on 12 April 2002. During the second
observation on 16 April, there was an earlier flare on 14 April at
N14W57, but this flare was probably too far to the west to be ef-
fective McKenna-Lawlor et al. (2006). The comet was observed
at a high latitude, and hence ace solar wind data is most likely
not applicable.
A.5. 2P/2003 (Encke)
X-rays. The Chandra observation of Encke took place on the
24th of November 2003 (Lisse et al. 2005), when the comet
had a heliocentric distance of rh = 0.891 AU, a geocentric
distance of ∆ = 0.275 AU, and a heliographic latitude of 11.4
degrees. The comet was continuously tracked for over 15 hours,
resulting in a useful exposure of 44 000 seconds. The acis-S3 ccd
counted 6140 X-rays in the range 300–1000 eV.
The brightest point was offset from the nucleus by
11 000 km, dimming to 10% of this value at a distance of
60 000 km.
The acis-S1 ccd was not activated in this observation. The
low quantum efficiency of the other activated ccds below 0.5 keV
makes them unsuitable as background references.
Solar wind. The proton velocity decreased during observations
from 600 km s−1 to 500 km s−1. A flare on 20 November
2003, at N01W08 (FF#525), was well-positioned to affect the
observations on 23 November (data from work in progress by
Z.K. Smith et al.). The comet most likely interacted with the
overexpanded, rarefied plasma flow that followed the earlier hot
shocked and compressed flow behind the flare’s shock.
A.6. C/2001 Q4 (neat)
X-rays. A short observation of comet C/2001 Q4 was made on
May 12 2004, when the geocentric and heliocentric distances
were ∆ = 0.362 AU and rh = 0.964 AU respectively. With a
heliographic latitude of 3 degrees, the comet was almost in
the ecliptic plane. From 3 pointings, the useful exposure was
10 328 seconds. The acis-S3 chip detected 6540 X-rays in be-
tween 300 and 1000 eV. The acis-S1 was used as a background
signal.
Solar wind. There was no significant solar activity during the
observations (Z.K. Smith et al., ibid.). From solar wind data, the
comet interacted with a quiet, slow 352 km s−1 wind.
A.7. 9P/2005 (Tempel 1)
X-rays. The observation of comet 9P/2005 (Tempel 1) was de-
signed to coincide with the Deep Impact mission (Lisse et al.
2007). The allocated observation time of 291.6 ks was split
into 7 periods, starting on June 30th, July 4th (encompassing the
Deep Impact collision), July 5th, July 8th, July 10th, July 13th
and July 24th. The brightest observing periods were June 30th
and July 8th. The focus here is on the June 30th observation. On
this date, rh = 1.507 AU and ∆ = 0.872 AU.
The useful exposure was 50 059 seconds, with a total of 7300
counts, 4000 from the June 30th flare alone, detected in the
energy range of 300–1000 eV.
The brightest point for the June 30th observation was
located 11 000 km from the nucleus. The morphology appears
to be more spherical than in other comet observations.
Solar wind. Observations were taken over a long time span cov-
ering different solar wind environments. There was no signifi-
cant solar activity during the 30 June 2005 observations (Z.K.
Smith et al., ibid.; Lisse et al. 2007). From the ace data, it can
be seen that at June 30, the comet most likely interacted with a
quiet, slow solar wind.
A.8. 73P/2006 (Schwassmann–Wachmann 3B)
X-rays. The close approach of comet 73P/2006 (Schwassmann–
Wachmann 3B) in May 2006 (∆ = 0.106 AU, rh = 0.965 AU)
provided an opportunity to examine cometary X-rays in high
spatial resolution. Chandra was one of several X-ray missions to
focus on one of the large fragments of the comet. Between 300
and 1000 eV, 6285 counts were obtained in a useful exposure of
20 600 seconds.
Solar wind. There was a weak flare on 22 May 2006 (FF#655,
Z.K. Smith, priv. comm.). A sequence of three high speed coro-
nal hole streams passed the comet in the period around the ob-
servations and a corotating cir might have reached the comet in
association with the observations on 23 May, which is confirmed
by the mapped solar wind data.
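The exposures and band counts quoted in this appendix imply the following mean acis-S3 count rates (a small Python sketch; the Ikeya–Zhang counts are only a lower bound, and comets for which no single exposure is quoted are omitted):

```python
# Mean ACIS-S3 count rates (300-1000 eV) implied by the numbers in Appendix A.
observations = {
    "C/1999 S4 (LINEAR)":      (11710, 9390),
    "153P/2002 (Ikeya-Zhang)": (200000, 11570 + 11813),  # counts are a lower bound
    "2P/2003 (Encke)":         (6140, 44000),
    "C/2001 Q4 (NEAT)":        (6540, 10328),
    "9P/2005 (Tempel 1)":      (7300, 50059),
    "73P/2006 (SW3-B)":        (6285, 20600),
}

def count_rate(counts, exposure_s):
    """Band-integrated count rate in counts per second."""
    return counts / exposure_s

for name, (counts, exposure) in observations.items():
    print(f"{name:26s} {count_rate(counts, exposure):6.2f} ct/s")
```

The spread, from roughly 0.1 ct/s for Encke to above 8 ct/s for Ikeya–Zhang, illustrates why no single background strategy could be applied to all targets.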
|
0704.1649 | Hamiltonian formalism in a problem of 3-th waves hierarchy | arXiv:0704.1649v1 [hep-lat] 12 Apr 2007
Hamiltonian formalism in a problem of 3-th
waves hierarchy
A. N. Leznov
Abstract
By the method of discrete transformations, the equations of the 3-wave
hierarchy are constructed. We present in explicit form two Poisson structures,
which allow one to construct a Hamiltonian operator whose consequent
application leads to all equations of this hierarchy. The calculations require
results of the previous paper [1], which for the convenience of the reader
we reproduce at the corresponding places in the text. The obtained formulae
are checked by independent calculations.
1 Introduction
All system equations of the 3-wave hierarchy are invariant with respect to two
mutually commutative discrete transformations of this problem [1],[3],[5],[6].
In this introduction we present the solution of the same problem in the case of
the A1 algebra, following the paper [2].
We briefly repeat here the most important points of the general construction
from [2].
A discrete invertible substitution (mapping) is defined as
ũ = T(u, u′, ..., ur) ≡ T(u) (1)
where u is an s-dimensional vector function and ur denotes its derivatives of the
corresponding order with respect to the "space" coordinate.
The property of invertibility means that (1) can be resolved, so that the "old"
function u may be expressed in terms of the new one, ũ, and its derivatives.
The Fréchet derivative T′(u) of (1) is the s × s matrix operator defined as
T′(u) = T_u + T_{u′} D + T_{u′′} D² + ... (2)
where D^m is the operator of m-fold differentiation with respect to the space
coordinate.
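As an illustration of definition (2), the operator form of the Fréchet derivative can be checked against the direct linearization d/dε T(u + εv)|_{ε=0}. The substitution below, T(u) = u′ + u², is a hypothetical example chosen for the sketch, not one taken from the paper; for it T_u = 2u and T_{u′} = 1, so T′(u)v = 2uv + v′.

```python
import math

# Compare the operator form of the Frechet derivative with the linearization
# of the sample substitution T(u) = u' + u**2 (illustrative example only).

def deriv(f, x, h=1e-5):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def T(u):
    # the sample substitution T(u) = u' + u**2
    return lambda x: deriv(u, x) + u(x) ** 2

def frechet_operator(u, v):
    # operator form: (T_u + T_{u'} D) v = 2*u*v + v'
    return lambda x: 2 * u(x) * v(x) + deriv(v, x)

def frechet_linearized(u, v, eps=1e-6):
    # direct linearization: (T(u + eps*v) - T(u - eps*v)) / (2*eps)
    up = lambda x: u(x) + eps * v(x)
    um = lambda x: u(x) - eps * v(x)
    return lambda x: (T(up)(x) - T(um)(x)) / (2 * eps)

x = 0.7
print(frechet_operator(math.sin, math.cos)(x),
      frechet_linearized(math.sin, math.cos)(x))
```

Both evaluations agree to numerical precision, which is just the statement that T′(u) collects the coefficients of v, v′, ... in the linearization.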
∗Universidad Autonoma del Estado de Morelos, CCICAp, Cuernavaca, Mexico
Let us consider the equation
Fn(T(u)) = T′(u)Fn(u) (3)
where Fn(u) is an unknown s-component vector function, each component of which
depends on u and its derivatives of order not more than n. It is not difficult to
see that the evolution-type equation
ut = Fn(u)
is invariant with respect to the substitution (1).
Two other equations and their solutions are important in what follows:
T′(u)J(u)(T′(u))^T = J(T(u)), T′(u)H(u)(T′(u))^{-1} = H(T(u)) (4)
where (T′(u))^T = T_u^T − D T_{u′}^T + D² T_{u′′}^T + ... and J(u), H(u) are unknown s × s
matrix operators whose matrix elements are polynomials of some finite
order in the operator of differentiation (in its positive and negative
degrees).
An antisymmetric J(u), J^T(u) = −J(u), may be connected with a Poisson structure,
and equation (4) expresses its invariance with respect to the discrete transformation T.
The second equation in (4) determines the operator H(u), which upon application
to an arbitrary solution F(u) of (3) leads to a new solution of the same system,
F̃(u) = H(u)F(u),
and thus we obtain a recurrence procedure for constructing solutions of (3) from a few
simple ones.
If it is possible to find two different J1, J2 (Hamiltonian operators, Poisson
structures), then
H(u) = J2 J1^{-1} (5)
satisfies the second equation in (4).
In [2] arguments are presented that it makes sense to seek the Hamiltonian operator
in the form
J(u) = Fn(u) D^{-1} Fn(u)^T + Σ_i Ai D^i (6)
where Fn is some solution of (3) and the Ai are s × s matrices constructed from u
and its derivatives.
The direct generalization of (6), which will be used below, is the following:
J(u) = Σ_k F_k(u) D^{-1} F_k(u)^T + Σ_i Ai D^i (7)
where the first term of (6) is replaced by a sum over some number of different
solutions of (3).
2 Necessary facts from [1]
In [1] the equations of the 3-wave hierarchy of zero, first and second
order were constructed for the six unknown functions f^±_{1.0}, f^±_{0.1}, f^±_{1.1}.
The form of these equations will be used essentially in what follows.
2.1 Equations of the zero order
ḟ^±_{1.0} = ±b f^±_{1.0}, ḟ^±_{0.1} = ±c f^±_{0.1}, ḟ^±_{1.1} = ±a f^±_{1.1} (8)
where (b, c) are arbitrary numerical parameters and a = b + c.
2.2 Equations of the first order
ḟ^+_{1.1} = θ_{1.1}((f^+_{1.1})′ + σ_{1.1} f^+_{1.0} f^+_{0.1})
ḟ^+_{1.0} = ν_{1.0}((f^+_{1.0})′ + σ_{1.0} f^+_{1.1} f^-_{0.1})
ḟ^+_{0.1} = ν_{0.1}((f^+_{0.1})′ + σ_{0.1} f^-_{1.0} f^+_{1.1})
−ḟ^-_{0.1} = ν_{0.1}((f^-_{0.1})′ + σ_{0.1} f^+_{1.0} f^-_{1.1}) (9)
−ḟ^-_{1.0} = ν_{1.0}((f^-_{1.0})′ + σ_{1.0} f^-_{1.1} f^+_{0.1})
−ḟ^-_{1.1} = θ_{1.1}((f^-_{1.1})′ + σ_{1.1} f^-_{1.0} f^-_{0.1})
where ν_{i.j}, σ_{i.j}, θ_{1.1} are numerical parameters connected by the conditions
ν_{0.1} − ν_{1.0} = 2σ_{1.0}, ν_{0.1} + ν_{1.0} = 2θ_{1.1}, −2σ_{1.1} = σ_{1.0} = σ_{0.1} (10)
Thus the solution is defined by two independent parameters, ν_{0.1} and ν_{1.0}, as in the
case of the zero-degree solution of the previous subsection.
2.3 Equations of the second order
ḟ^+_{1.1} = ν_{1.1}(f^+_{1.1})′′ + γ_{1.1} f^+_{1.0}(f^+_{0.1})′ + δ_{1.1} f^+_{0.1}(f^+_{1.0})′ + f^+_{1.1}R_{11},
ḟ^+_{1.0} = ν_{1.0}(f^+_{1.0})′′ + γ_{1.0} f^-_{0.1}(f^+_{1.1})′ + δ_{1.0} f^+_{1.1}(f^-_{0.1})′ + f^+_{1.0}R_{10},
ḟ^+_{0.1} = ν_{0.1}(f^+_{0.1})′′ + γ_{0.1} f^-_{1.0}(f^+_{1.1})′ + δ_{0.1} f^+_{1.1}(f^-_{1.0})′ + f^+_{0.1}R_{01}, (11)
−ḟ^-_{0.1} = ν_{0.1}(f^-_{0.1})′′ + γ_{0.1} f^+_{1.0}(f^-_{1.1})′ + δ_{0.1} f^-_{1.1}(f^+_{1.0})′ + f^-_{0.1}R_{01},
−ḟ^-_{1.0} = ν_{1.0}(f^-_{1.0})′′ + γ_{1.0} f^+_{0.1}(f^-_{1.1})′ + δ_{1.0} f^-_{1.1}(f^+_{0.1})′ + f^-_{1.0}R_{10},
−ḟ^-_{1.1} = ν_{1.1}(f^-_{1.1})′′ + γ_{1.1} f^-_{1.0}(f^-_{0.1})′ + δ_{1.1} f^-_{0.1}(f^-_{1.0})′ + f^-_{1.1}R_{11},
where a ≡ (b + c) and R_{ij} ≡ 2a_{ij} f^+_{1.1}f^-_{1.1} + b_{ij} f^+_{1.0}f^-_{1.0} + c_{ij} f^+_{0.1}f^-_{0.1}.
All numerical parameters in (11) may be expressed in terms of only two of them
and are connected by the relations
ν_{11} = a_{11}, γ_{1.1} + δ_{1.1} = (b_{11} − c_{11}), ν_{10} = 2b_{10},
γ_{1.0} + δ_{1.0} = 2(c_{10} − b_{10}), ν_{01} = 2c_{01}, γ_{0.1} + δ_{0.1} = 2(c_{01} − b_{01}),
a_{11} = −2c_{10}, b_{11} = a_{10}, c_{11} = −3c_{10} − b_{10},
a_{10} = c_{10} + b_{10}, b_{10} = b_{10}, c_{10} = c_{10},
a_{01} = −3c_{10} − b_{10}, b_{01} = c_{10}, c_{01} = −b_{10} − 4c_{10}, (12)
δ_{10} = 4c_{10}, γ_{10} = −2(c_{10} + b_{10}), 2γ_{11} = δ_{10} − γ_{10},
2δ_{11} = −γ_{10}, δ_{01} = −δ_{10}, γ_{01} = γ_{10} − δ_{10}.
2.4 Hamiltonian form of equations
As was shown in [1], the equations of the previous subsections may be considered
as Hamiltonian ones with the following non-zero Poisson brackets:
{f^+_{1.1}, f^-_{1.1}} = 1/2, {f^+_{1.0}, f^-_{1.0}} = 1, {f^+_{0.1}, f^-_{0.1}} = 1 (13)
This fact leads to the existence of the first Poisson structure, the inverse of which is
the following:
J_1^{-1} =
0 0 0 0 0 −2
0 0 0 0 −1 0
0 0 0 −1 0 0
0 0 1 0 0 0
0 1 0 0 0 0
2 0 0 0 0 0 (14)
3 Hamiltonian operator of 3-th waves problem
Following proposition (7) of the Introduction, let us seek the second Poisson
structure in the form J2 = J^n_2 + J^l_2, where J^n_2 contains the terms with D^{-1}
and J^l_2 the terms with non-negative degrees of D:
J^n_2 = −F^1_0 D^{-1}(F^1_0)^T − F^2_0 D^{-1}(F^2_0)^T
where F^1_0, F^2_0 are two different solutions of the equations of zero order (first sub-
section of the previous section).
0 0 0 − 1
0 0 f+1.1 0 D −
0 −f+1.1 0 D 0
1.0 0 D 0 −f
1.1 0
0.1 D 0 f
1.1 0 0
0.1 −
1.0 0 0 0
In the last expression we present the final result. In fact one has to write an
antisymmetric matrix with arbitrary coefficients, which are found after the
calculations described below.
Now let us consider how the recurrence operator H of (5) acts on some solution F of
(3). First, F must be multiplied by J_1^{-1}, with the result
J_1^{-1} F = (−2F^-_{1.1}, −F^-_{1.0}, −F^-_{0.1}, F^+_{0.1}, F^+_{1.0}, 2F^+_{1.1})^T
and this column vector is multiplied by J2 = J^n_2 + J^l_2 from (15). In the two terms
of J^n_2 it is necessary to multiply the row vector (F^i_0)^T by the last column vector,
with the scalar result
(F^i_0)^T J_1^{-1} F = −2a^i(f^+_{1.1}F^-_{1.1} + f^-_{1.1}F^+_{1.1}) − b^i(f^+_{1.0}F^-_{1.0} + f^-_{1.0}F^+_{1.0}) − c^i(f^+_{0.1}F^-_{0.1} + f^-_{0.1}F^+_{0.1})
Thus the input of the first two terms of J^n_2 into the "new solution" will be
F̃^+_{1.1} = −f^+_{1.1} D^{-1}( 2a²(f^+_{1.1}F^-_{1.1}+f^-_{1.1}F^+_{1.1}) + (ab)(f^+_{1.0}F^-_{1.0}+f^-_{1.0}F^+_{1.0}) + (ac)(f^+_{0.1}F^-_{0.1}+f^-_{0.1}F^+_{0.1}) )
F̃^+_{1.0} = −f^+_{1.0} D^{-1}( 2(ba)(f^+_{1.1}F^-_{1.1}+f^-_{1.1}F^+_{1.1}) + b²(f^+_{1.0}F^-_{1.0}+f^-_{1.0}F^+_{1.0}) + (bc)(f^+_{0.1}F^-_{0.1}+f^-_{0.1}F^+_{0.1}) ) (16)
F̃^+_{0.1} = −f^+_{0.1} D^{-1}( 2(ca)(f^+_{1.1}F^-_{1.1}+f^-_{1.1}F^+_{1.1}) + (cb)(f^+_{1.0}F^-_{1.0}+f^-_{1.0}F^+_{1.0}) + c²(f^+_{0.1}F^-_{0.1}+f^-_{0.1}F^+_{0.1}) )
and the same expressions with the opposite sign for the components with negative upper
indices. Here a² = Σ_i a^i a^i, (ab) = Σ_i a^i b^i, and so on.
The result of the multiplication J^l_2 J_1^{-1} F is determined by the usual rules of
multiplication of a matrix by a vector:
1 F =
(F+1.1)
′ + 1
(f+0.1F
1.0 − f
(F+1.0)
′ − f+1.1F
0.1 −
(F+0.1)
′ + f+1.1F
1.0 +
−(F−0.1)
′ − f−1.1F
1.0 −
−(F+1.0)
′ + f−1.1F
0.1 +
(F−1.1)
′ − 1
(f−0.1F
1.0 − f
Now let us take for F the right-hand side of the zero-degree equations (subsection 1 of
the previous section), F^±_{1.1} = ±(ν_{1.0} + ν_{0.1})f^±_{1.1}, F^±_{1.0} = ±ν_{1.0}f^±_{1.0}, F^±_{0.1} = ±ν_{0.1}f^±_{0.1}
(b = ν_{1.0}, c = ν_{0.1}). In this case the input from the J^n_2 terms (16) is equal to zero,
and the input from the J^l_2 terms exactly coincides with the right-hand side of the
equations of the first order (subsection 2 of the previous section). In fact, the
calculations must be done in the reverse direction: in the definition of J^l_2 (15) one
uses an arbitrary skew-symmetric matrix, and comparison of the result of the calculations
above with the first-order equations yields the final form of J^l_2 (15).
Now we repeat the same trick with the equations of the first degree. In this case
1 F =
−(ν1.0 + ν0.1)(f
′ − ν1.0−ν0.1
−ν1.0(f
′ + ν1.0−ν0.1
−ν0.1(f
′ + ν1.0−ν0.1
ν0.1(f
′ − ν1.0−ν0.1
ν1.0(f
′ − ν1.0−ν0.1
(ν1.0 + ν0.1)(f
′ + ν1.0−ν0.1
We present the result of the action of J^l_2 on J_1^{-1} F below:
ν1.0 + ν0.1
(f+1.1)
3ν1.0 − ν0.1
(f+1.0)
0.1 +
ν1.0 − 3ν0.1
1.0)(f
ν1.0 − ν0.1
1.1(f
1.0 − f
0.1) (18)
The terms in the first line exactly coincide with the terms with derivatives in the
equation of the second order for the f^+_{1.1} component. Indeed, (ν_{1.0}+ν_{0.1}) = 2b_{10}+2c_{01} =
−8c_{10} = 4a_{11} = 4ν_{11}, b_{10} = ν_{1.0}/2, c_{10} = −(ν_{1.0}+ν_{0.1})/8, δ_{11} = b_{10} + c_{10} = (3ν_{1.0}−ν_{0.1})/8,
and so on. The same situation takes place with respect to all other components:
the terms with derivatives coincide with those calculated from J^l_2 J_1^{-1} F.
The terms without derivatives arise from the terms of the second line of (18) and
from (16). In the latter, after substituting F from the equations of the first
order (and likewise in all other cases), a sign D arises under the sign D^{-1}, which
cancels to unity, and (16) plus the terms of the second line of (18) look as
1.1(2θ11a
1.1+[(ab)ν10+
ν1.0 − ν0.1
]f+1.0f
1.0+[(ac)ν01−
ν1.0 − ν0.1
]f+0.1f
1.0([2(ba)θ11+
ν1.0 − ν0.1
1.1+b
2ν10f
1.0+[(bc)ν01−
ν1.0 − ν0.1
]f+0.1f
0.1([2(ca)θ11−
ν1.0 − ν0.1
]f+1.1f
1.1+[(cb)ν10+
ν1.0 − ν0.1
]f+1.0f
1.0+c
All the terms above with scalar products arise from (16); all the others from the "second
line" of (18). Now it is necessary to complete these expressions with the terms f^+_{ij}R_{ij}
of the right-hand side of the equations of the second order (subsection 3 of the previous
section). This comparison leads to the following conclusion:
a2 = b2 = c2 =
, (ab) = (ac) = −(bc) =
And now the recurrence operator H is defined uniquely. From this result it is clear
that all attempts to construct H using only one solution in the ansatz for J2
lead to a contradiction, and this was the main difficulty for the author.
The following observation takes place. Let us consider three elements of the
Cartan subalgebra R3 =
h1+h2
, R2 = h2, R1 = h1, and the positive root system
of the A2 algebra, X^+_3 = X^+_{12}, X^+_2, X^+_1. Let us define the 3 × 3 "Cartan matrix"
K^W by the condition
[Ri, X
j ] = K
a2 (ab) (ac)
(ba) b2 (bc)
(ca) (cb) c2
4 Hamiltonian formalism II
In the calculations of the previous section the results of [1] were used in full
measure. But the corresponding calculations were neither simple nor straightforward.
Now, having the explicit expression for J2, we are able to check that the equation
defining it, (4), is satisfied. Let us rewrite it in the notations of the previous
section:
−T′(f)F^1_0 D^{-1}(F^1_0)^T(T′(f))^T − T′(f)F^2_0 D^{-1}(F^2_0)^T(T′(f))^T +
T′(f)J^l_2(T′(f))^T = (J^n_2 + J^l_2)(T(f)) (20)
(we trust that using the same sign T for the substitution and for transposition will not
lead to confusion). All calculations below are done with respect to the T3 transformation
(explicit formulae for it and for F′3(f) can be found in the Appendix). We recall the
rule for multiplying an operator quadratic in derivatives by a scalar function R:
(A + BD + CD²)R = AR + BR′ + CR″ + (BR + 2CR′)D + CRD² (21)
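Rule (21) is just the Leibniz rule for commuting D and D² past multiplication by R. It can be sanity-checked numerically by applying both sides of the operator identity to a test function g; all functions A, B, C, R, g below are arbitrary illustrative choices.

```python
import math

# Apply both sides of rule (21) to a test function g and compare:
#   LHS: (A + B*D + C*D^2)(R*g)
#   RHS: (A*R + B*R' + C*R'')*g + (B*R + 2*C*R')*g' + C*R*g''

def d(f, x, h=1e-4):
    """First derivative by central differences."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Second derivative by central differences."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

A, B, C = math.cos, math.sin, lambda x: x        # operator coefficients
R = math.exp                                     # the scalar function R
g = lambda x: math.sin(2 * x)                    # test function

def lhs(x):
    Rg = lambda y: R(y) * g(y)
    return A(x) * Rg(x) + B(x) * d(Rg, x) + C(x) * d2(Rg, x)

def rhs(x):
    return ((A(x) * R(x) + B(x) * d(R, x) + C(x) * d2(R, x)) * g(x)
            + (B(x) * R(x) + 2 * C(x) * d(R, x)) * d(g, x)
            + C(x) * R(x) * d2(g, x))

print(lhs(0.3), rhs(0.3))
```

The two evaluations agree to finite-difference precision, confirming that the D-dependent terms of the product operator are exactly (BR + 2CR′)D + CRD².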
The matrix elements of the first three lines of F′3(f) do not contain the operator of
differentiation. Its fourth and fifth lines are linear in D, and its sixth one is quadratic
in D. By definition (3), all terms without the operator D lead to F^i_0. All the others
can be simply calculated using (21), with the result
T ′3(f)F
F i0 +
1.0D − a
F i0 +
Γi (22)
where ∆^i = c^i − b^i, a^i = c^i + b^i, and 0_3 is the three-dimensional zero vector. After
substitution of (22) into (20) and cancellation of equivalent terms on both sides, we
come to the following equality, which has to be checked:
Γi D−1
Γi D−1
Γi T )+
T ′3(f)J
3(f))
T = J
2 (T3(f)) (23)
In the matrix of the first sum of the first line, only the elements of its last three
columns differ from zero; in the second sum, only the elements of its last three lines;
and in the last sum, only the elements of the 3 × 3 matrix in its lower left corner.
First let us calculate the first sum. The result is the following (we present
below only the first two lines of this matrix operator):
F i0 D
03 0 0
D + 1
0.1D −
(24)
(where 03 is three dimensional row vector)
(T ′3(f)J
3(f))
T )1l =
(T ′3(f))1m(J
2 )mk((T
3(f))
T )kl =
(T ′3(f))16(J
2 )6k(T
t)lk =
(because only one element of the first line of the Fréchet derivative matrix, (T′3(f))16,
and three elements of the sixth line of the matrix J^l_2 differ from zero).
(f−1.1)
D(T ′3(f))
0.1(T
3(f))
1.0(T
3(f))
where (kD)^T ≡ −Dk^T, so the first line of the matrix T′3(f)J^l_2(T′3(f))^T looks as
0.1 −
D − 1
Now let us calculate the second line:
(T ′3(f)J
3(f))
T )2l =
(T ′3(f))2m(J
2 )mk((T
3(f))
T )kl =
(T ′3(f))26(J
2 )6k(T
t)lk +
(T ′3(f))24(J
2 )4k(T
t)lk =
(because only two elements of the second line of the Fréchet derivative matrix,
(T′3(f))24 and (T′3(f))26, differ from zero)
(f−1.1)
D(T ′3)
0.1(T
1.0(T
1.0(T
l1+D(T
1.1(T
A simple algebraic calculation leads to the explicit form of the second row:
1.1 −
D + 3
0.1D +
1.0 +
[(f−0.1)
After summing (25) and (26) with the corresponding lines of (24), we obtain
exactly the first two lines of the left-hand side of (23).
By a similar calculation it is possible to check that (23) is satisfied. Finally, the
expressions for J^n_2 (27) and (15) solve the problem of the construction of the second
Poisson structure for the equations of the 3-wave hierarchy.
5 Comments on multi-soliton solutions of the
systems of the 3-wave hierarchy
Let us find a solution of the system of equations of the second order (subsection 3 of
Section 2) under the additional condition f^+_{i.j} = 0. The system under consideration
looks as
−ḟ^-_{0.1} = ν_{0.1}(f^-_{0.1})′′, −ḟ^-_{1.0} = ν_{1.0}(f^-_{1.0})′′,
−ḟ^-_{1.1} = ν_{1.1}(f^-_{1.1})′′ + γ_{1.1} f^-_{1.0}(f^-_{0.1})′ + δ_{1.1} f^-_{0.1}(f^-_{1.0})′,
where ν_{11} = (ν_{10}+ν_{01})/4, δ_{1.1} = (3ν_{10}−ν_{01})/8, γ_{1.1} = (ν_{10}−3ν_{01})/8.
The solutions of the first two linear equations are obvious:
f^-_{0.1} = ∫ dµ e^{−ν_{01} µ² t + µx} q(µ), f^-_{1.0} = ∫ dλ e^{−ν_{10} λ² t + λx} p(λ)
Let us find a partial solution of the inhomogeneous third equation in the form
f^-_{1.1} = ∫∫ dµ dλ e^{−(ν_{01} µ² + ν_{10} λ²)t + (µ+λ)x} r(µ, λ)
After substituting this ansatz for f^-_{1.1} and the solutions obtained above for f^-_{1.0},
f^-_{0.1} into the third equation, we come to the equality
[ν_{01} µ² + ν_{10} λ² − (ν_{10} + ν_{01})/4 (µ + λ)²] r(µ, λ) = (γ_{1.1} µ + δ_{1.1} λ) q(µ)p(λ)
The quadratic multiplier on the left side is equal to 2(λ − µ)(γ_{1.1} µ + δ_{1.1} λ), and
finally we obtain
r(µ, λ) = q(µ)p(λ) / (2(λ − µ))
f^-_{1.1} = (1/2) ∫∫ dµ dλ e^{−(ν_{01} µ² + ν_{10} λ²)t + (µ+λ)x} q(µ)p(λ)/(λ − µ)
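The factorization of the quadratic multiplier can be checked numerically. In this sketch the parameter relations ν11 = (ν10 + ν01)/4, δ11 = (3ν10 − ν01)/8, γ11 = (ν10 − 3ν01)/8 are read off from the relations quoted in Section 3; the denominators were reconstructed from the garbled extraction, so treat them as assumptions of the check.

```python
import random

# Check:  nu01*mu**2 + nu10*lam**2 - nu11*(mu + lam)**2
#           == 2*(lam - mu)*(gamma11*mu + delta11*lam)
# under the assumed parameter relations below.

def lhs(nu10, nu01, mu, lam):
    nu11 = (nu10 + nu01) / 4
    return nu01 * mu**2 + nu10 * lam**2 - nu11 * (mu + lam) ** 2

def rhs(nu10, nu01, mu, lam):
    delta11 = (3 * nu10 - nu01) / 8
    gamma11 = (nu10 - 3 * nu01) / 8
    return 2 * (lam - mu) * (gamma11 * mu + delta11 * lam)

random.seed(0)
for _ in range(1000):
    args = [random.uniform(-5, 5) for _ in range(4)]
    assert abs(lhs(*args) - rhs(*args)) < 1e-9
print("factorization identity holds")
```

Note that both sides vanish identically at λ = µ, which is why the kernel r(µ, λ) acquires the simple pole 1/(λ − µ).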
The last expression coincides (up to the nonessential multiplier 1/2) with the solution
in the case of the 3-wave problem. But in resolving the equations of the discrete
transformation, only differentiation with respect to the space coordinate takes place.
Thus all equations of the discrete transformation in the case of the 3-wave problem
and in the case under consideration have the same solutions, except for a
time-dependent multiplier. Hence the form of the multi-soliton solution (up to this
factor) will be the same for all systems of the 3-wave hierarchy.
6 Outlook
The main result of the present paper is the explicit expression for the second Poisson
structure, (15) and (27), which allows one to construct the Hamiltonian recurrence
operator and to obtain all equations of the 3-wave hierarchy in explicit form. From
the physical point of view these systems may be considered as three interacting
fields of the nonlinear Schrödinger hierarchy connected with the A1 algebra. All
equations depend on one arbitrary numerical parameter, which can be connected
with the parameters of the particles of the fields described by the nonlinear
Schrödinger hierarchy.
The discrete transformation for the n-wave problem in the case of an arbitrary
semisimple algebra was presented in [4], and the author has no doubt that the
problem of the equations of the n-wave hierarchy and of their multi-soliton solutions
may be resolved in explicit form.
The greatest riddle to the author remains the question of the nature of the group
of discrete transformations. As follows from its introduction in [4], it has some
connection with the Weyl group of the root space of the semisimple algebra. The Weyl
group is discrete but noncommutative. The discrete transformation in the case
of the present paper is a reduction of the group of discrete transformations
of the four-dimensional self-dual Yang–Mills equations [9]. Thus, understanding
this situation in the case of the n-wave interaction may suggest a solution
of this problem in the Yang–Mills case.
Acknowledgements
The author thanks CONACYT for financial support.
7 Appendix
We present here the nonzero matrix elements of the Fréchet derivative for the T3
discrete transformation, a 6 × 6 matrix operator (see the Introduction). All
calculations are done using its definition (2) and the explicit formulae for the T3
discrete transformation presented below.
7.1 Discrete transformation T3
1.0= −
0.1= −2(f
′ − f−1.1f
1.0 −
(f−0.1)
(f−1.1)
1.0= −2(f
′ + f−1.1f
0.1 +
(f−1.0)
(f−1.1)
1.1= f
1.1+(ln f
(f−0.1)
0.1 − (f
(f+1.0f
1.0+f
0.1)+
7.2 Fréchet derivative
We present below the nonzero matrix elements of the Fréchet operator:
F ′16= −
(f−1.1)
F ′24= −
F ′26=
(f−1.1)
F ′35=
F ′36= −
(f−1.1)
F ′42= −f
F ′44= −2D− Z +
(f−1.1)
F ′45= −
(f−0.1)
2f−1.1
F ′46= −f
1.0 +
2f−1.1
0.1(f
(f−1.1)
53= f
(f−1.0)
2f−1.1
55= −2D+ Z +
(f−1.1)
F ′56= f
0.1 −
2f−1.1
1.0(f
(f−1.1)
F ′61= (f
F ′62=
F ′63=
F ′64=
(f−1.0D−(f
1.0Z+
F ′65= −
(f−0.1D − (f
0.1Z +
F ′66= D
2 − 2
(f−1.1)
(f−1.1)
′(f−1.1)
+ 2f−1.1f
1.1 +
(f−1.0f
1.0 + f
0.1)−
where Z =
References
[1] A. N. Leznov, Equations of 3-th wave hierarchy, arXiv:math-ph/0703063
[2] V. B. Derjagin and A. N. Leznov, Discrete symmetries and multi-Poisson structures
of 1+1 integrable systems, Preprint MPI 96-3
[3] A. N. Leznov and R. Torres-Cordoba, J. Math. Phys. 44(5):2342-2352 (2003);
A. N. Leznov, J. Escobedo-Alatorre and R. Torres-Cordoba, J. Nonlinear
Math. Phys. 10(2):243-251 (2003)
[4] A. N. Leznov, Discrete symmetries of the n-wave problem, Theoretical and
Mathematical Physics, 132(1): 955-969 (2002)
[5] A. N. Leznov, G. R. Toker and R. Torres-Cordoba, Multisoliton solution of 3-th
wave problem, hep-th/060500906
[6] A. N. Leznov, G. R. Toker and R. Torres-Cordoba, Resolving of discrete trans-
formation and multisoliton solution of 3-th wave problem, J. Nonlinear Math.
Phys. 14(2): 238-249 (2007)
[7] A. N. Leznov and M. V. Saveliev, Group Theoretical Methods for Integration of
Nonlinear Dynamical Systems, Birkhäuser, 1992
[8] A. N. Leznov, Theoretical and Mathematical Physics, 122(2): 211-228 (1998)
[9] A. N. Leznov, Discrete and Bäcklund transformations of SDYM system,
math-ph/0504004
|
0704.1650 | Correlations, fluctuations and stability of a finite-size network of
coupled oscillators | Correlations, fluctuations and stability of a finite-size network of coupled oscillators
Michael A. Buice and Carson C. Chow
Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, MD
(Dated: November 19, 2018)
The incoherent state of the Kuramoto model of coupled oscillators exhibits marginal modes in
mean field theory. We demonstrate that corrections due to finite size effects render these modes
stable in the subcritical case, i.e. when the population is not synchronous. This demonstration is
facilitated by the construction of a non-equilibrium statistical field theoretic formulation of a generic
model of coupled oscillators. This theory is consistent with previous results. In the all-to-all case,
the fluctuations in this theory are due completely to finite size corrections, which can be calculated
in an expansion in 1/N , where N is the number of oscillators. The N → ∞ limit of this theory is
what is traditionally called mean field theory for the Kuramoto model.
I. INTRODUCTION
Systems of coupled oscillators have been used to de-
scribe the dynamics of an extraordinary range of phe-
nomena [1], including networks of neurons [2, 3], syn-
chronization of blinking fireflies [4, 5], chorusing of chirp-
ing crickets [6], neutrino flavor oscillations [7], arrays of
lasers [8], and coupled Josephson junctions [9]. A com-
mon model of coupled oscillators is the Kuramoto model
[10], which describes the evolution of N coupled oscilla-
tors. A generalized form is given by
θ̇i = ωi +
f(θj − θi) (1)
where i labels the oscillators, θi is the phase of oscillator
i, f(θ) is the phase dependent coupling, and the intrin-
sic driving frequencies ωi are distributed according to
some distribution g(ω). In the original Kuramoto model,
f(θ) = sin(θ). Here, we consider f to be any smooth
odd function. The system can be characterized by the
complex order parameter
Z(t) =
eiθj(t) ≡ r(t)eiΨ(t) (2)
where the magnitude r gives a measure of synchrony in
the system.
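The evolution (1) with f(θ) = sin(θ) and the order parameter (2) are easy to simulate directly. In the minimal sketch below (all parameter values are illustrative), a subcritical population started fully coherent relaxes toward the O(1/√N) noise floor rather than remaining synchronized:

```python
import cmath, math, random

# Finite-N Kuramoto model, f(theta) = sin(theta), Gaussian g(omega).
# With unit frequency spread, Kc = 2/(pi*g(0)) ~ 1.6, so K = 0.5 is subcritical.
random.seed(1)
N, K, dt, steps = 500, 0.5, 0.05, 400
theta = [0.0] * N                               # fully coherent initial condition
omega = [random.gauss(0.0, 1.0) for _ in range(N)]

def order_parameter(theta):
    """Magnitude r of Z = (1/N) * sum_j exp(i*theta_j)."""
    z = sum(cmath.exp(1j * t) for t in theta) / len(theta)
    return abs(z)

r0 = order_parameter(theta)
for _ in range(steps):
    z = sum(cmath.exp(1j * t) for t in theta) / N
    # (K/N) * sum_j sin(theta_j - theta_i) = K * Im(z * exp(-i*theta_i))
    theta = [t + dt * (w + K * (z * cmath.exp(-1j * t)).imag)
             for t, w in zip(theta, omega)]
r1 = order_parameter(theta)
print(f"r(0) = {r0:.3f}, r(T) = {r1:.3f}")  # r decays toward ~1/sqrt(N)
```

The rewriting of the coupling sum through the complex order parameter z is the standard mean-field trick that makes each step O(N) instead of O(N²).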
In the limit of an infinite oscillator system, Kuramoto
showed that there is a bifurcation or continuous phase
transition as the coupling K is increased beyond some
critical value, Kc [10]. Below the critical point the steady
state solution has r = 0 (the “incoherent” state). Be-
yond the critical point, a new steady state solution with
r > 0 emerges. Strogatz and Mirollo analyzed the lin-
ear stability of the incoherent state of this system us-
ing a Fokker-Planck formalism [11]. In the absence of
external noise, the system displays marginal modes as-
sociated with the driving frequencies of the oscillators.
However, numerical simulations of the Kuramoto model
for a large but finite number of oscillators show that the
oscillators quickly settle into the incoherent state below
the critical point. The paradox of why the marginally
stable incoherent state seemed to be an attractor in sim-
ulations was partially resolved by Strogatz, Mirollo and
Matthews [12] who demonstrated (within the context of
the N → ∞ limit) that there was a dephasing effect akin
to Landau damping in plasma physics which brought r to
zero with a time constant that is inversely proportional to
the width of the frequency distribution. Recently, Stro-
gatz and Mirollo have shown that the fully locked state
r = 1 is stable [13] but the partially locked state is again
marginally stable [14]. Although dephasing can explain
how the order parameter can go to zero, the question of
whether the incoherent state is truly stable for a finite
number of oscillators remains unknown. Even with de-
phasing, in the infinite oscillator limit the system still has
an infinite memory of the initial state so there may be
classes of initial conditions for which the order parameter
or the density exhibits oscillations.
The applicability of the results for the infinite size
Kuramoto model to a finite-size network of oscillators
is largely unknown. The intractability of the finite size
case suggests a statistical approach to understanding the
dynamics. Accordingly, the infinite oscillator theories
should be the limits of some averaging process for a finite
system. While the behavior of a finite system is expected
to converge to the “infinite” oscillator behavior, for a fi-
nite number of oscillators the dynamics of the system will
exhibit fluctuations. For example, Daido [15, 16] developed analytical treatments of the Kuramoto model based on time averages and computed an analytical estimate of the variance. In contrast, we will pursue ensemble averages over oscillator phases and driving
frequencies. As the Kuramoto dynamics are determin-
istic, this is equivalent to an average over initial phases
and driving frequencies. Furthermore, the averaging pro-
cess imparts a distinction between the order parameter
Z and its magnitude r. Namely, do we consider 〈Z〉 or
〈r〉 = 〈|Z|〉 to be the order parameter? This is important
as the two are not equal. In keeping with the density as
the proper degree of freedom for the system (as in the in-
finite oscillator theories mentioned above), we assert that
〈Z〉 is the natural order parameter, as it is obtained via
a linear transformation applied to the density.
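The inequality of 〈Z〉 and 〈|Z|〉 is easy to see numerically in the incoherent state: for N independent uniform phases, Z is approximately a complex Gaussian of variance 1/N, so 〈|Z|〉 ≈ √(π/4N) while 〈Z〉 = 0. This is a standard central-limit estimate, not a result of the paper; the sketch below is our toy check.

```python
import math, cmath, random

def ensemble_order_parameters(N=100, trials=2000, seed=0):
    """Compare |<Z>| with <|Z|> over an ensemble of incoherent states
    (independent phases, uniform on [0, 2*pi))."""
    rng = random.Random(seed)
    Zs = []
    for _ in range(trials):
        Z = sum(cmath.exp(2j * math.pi * rng.random()) for _ in range(N)) / N
        Zs.append(Z)
    abs_mean_Z = abs(sum(Zs) / trials)              # |<Z>|: averages to ~0
    mean_abs_Z = sum(abs(Z) for Z in Zs) / trials   # <|Z|>: O(1/sqrt(N))
    return abs_mean_Z, mean_abs_Z
```

For N = 100 the two averages differ by more than an order of magnitude, which is why the choice of order parameter matters once ensemble averages are taken.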
Recently, Hildebrand et al. [17] produced a kinetic
theory inspired by plasma physics to describe the fluctu-
ations within the system. They produced a Bogoliubov-
Born-Green-Kirkwood-Yvon (BBGKY) moment hierar-
chy and truncation at second order in the hierarchy
yielded analytical results for the two point correlation
function from which the fluctuations in the order param-
eter could be computed. At this order, the system still
manifested marginal modes. Going beyond second or-
der was impractical within the kinetic theory formalism.
Thus, it remained an open question as to whether going
to higher order would show that finite size fluctuations
could stabilize the marginal modes.
Here, we introduce a statistical field theory approach
to calculate the moments of the distribution function gov-
erning the Kuramoto model. The formalism is equivalent
to the Doi-Peliti path integral method used to derive sta-
tistical field theories for Markov processes, even though
our model is fully deterministic [18, 19, 20, 21]. The
field theoretic action we derive produces exactly the same
BBGKY hierarchy of the kinetic theory approach [17].
The advantages of the field theory approach are that 1)
difficult calculations are easily visualized and performed
through Feynman graph analysis, 2) the theory is eas-
ily extendable and generalizable (e.g. to local coupling),
3) the field theoretic formalism permits the use of the
renormalization group (which will be necessary near the
critical point), and 4) in the case of the all-to-all ho-
mogeneous coupling of the Kuramoto model proper, the
formalism results in an expansion in 1/N and verifies
that mean field theory is exact in the N → ∞ limit. We
will demonstrate that this theory predicts that finite size
corrections will stabilize the marginal modes of the infi-
nite oscillator theory. Readers unfamiliar with the tools
of field theory are directed to one of the standard texts
[22]. A review of field theory for non-equilibrium dynam-
ics is [23].
In section II, we present the derivation of the the-
ory and elaborate on this theory’s relationship to the
BBGKY hierarchy. In section III, we describe the com-
putation of correlation functions in this theory and, in
particular, describe the tree level linear response. This
will connect the present work directly with what was
computed using the kinetic theory approach [17]. In sec-
tion IV, after describing two example perturbations, we
calculate the one loop correction to the linear response
and demonstrate that the modes which are marginal at
mean field level are rendered stable by finite size effects.
In addition, we demonstrate how generalized Landau
damping arises quite naturally within our formalism. We
compare these results to simulations in section V.
II. FIELD THEORY FOR THE KURAMOTO
MODEL
The Kuramoto model (1) can be described in terms of
a density of oscillators in θ, ω space
η(θ, ω, t) = (1/N) Σi δ(θ − θi(t)) δ(ω − ωi)    (3)
that obeys the continuity equation
C(η) ≡ ∂η(θ, ω, t)/∂t + ∂/∂θ [ωη(θ, ω, t)]
  + K ∂/∂θ ∫ f(θ′ − θ)η(θ′, ω′, t)η(θ, ω, t)dθ′dω′ = 0    (4)
Equation (4) remains an exact description of the dynam-
ics of the Kuramoto model (1) [17]. Although equa-
tion (4) has the same form as that used by Strogatz
and Mirollo [24], it is fundamentally different because
solutions need not be smooth. Rather, the solutions of
Eq. (4) are treated in the sense of distributions as defined
by Eq. (3). As we will show, imposing smooth solutions
is equivalent to mean field theory (the infinite oscilla-
tor limit). Drawing an analogy to the kinetic theory of
plasmas, Eq. (4) is equivalent to the Klimontovich equa-
tion while the mean field equation used by Strogatz and
Mirollo [24] is equivalent to the Vlasov equation [25, 26].
Our goal is to construct a field theory to calculate the
response functions and moments of the density η. Even-
tually we will construct a theory akin to a Doi-Peliti field
theory [18, 19, 20, 21], a standard approach for reaction-
diffusion systems. We will arrive at this through a con-
struction using a Martin-Siggia-Rose response field [27].
Since the model is deterministic, the time evolution of
η(θ, ω, t) serves to map the initial distribution forward in
time. We can therefore represent the functional proba-
bility measure P [η(θ, ω, t)] for the density η(θ, ω, t) as a
delta functional which enforces the deterministic evolu-
tion from equation (4) along with an expectation taken
over the distribution P0[η0] of the initial configuration
η0(θ, ω) = η(θ, ω, t0). We emphasize that no external
noise is added to our system. Any statistical uncertainty
completely derives from the distribution of the initial
state. Hence we arrive at the following path integral,
P[η(θ, ω, t)] = ∫ Dη0 P0[η0] δ[N{C(η(θ, ω, t)) − δ(t − t0)η0(θ, ω)}]    (5)
The definition of the delta functional contains an ar-
bitrary scaling factor, which we have taken to be N .
We will show later that this choice is necessary for the
field theory to correctly describe the statistics of η. The
probability measure obeys the normalization condition
1 = ∫ Dη P[η].
We first write the generalized Fourier decomposition of
the delta functional.
P[η(θ, ω, t)] = ∫ Dη̃ Dη0 P0[η0]
  × exp( −N ∫ dθdωdt η̃ [C(η) − δ(t − t0)η0(θ, ω)] )    (6)
where η̃(θ, ω, t) is usually called the “response field” after
Martin-Siggia-Rose [27] and the integral is taken along
the imaginary axis. It is more convenient to work with
the generalized probability including the response field:
P̃[η, η̃] = ∫ Dη0 P0[η0] exp( −N ∫ dθdωdt η̃ C(η)
  + N ∫ dθdω η̃(θ, ω, t0) η0(θ, ω) )    (7)
which obeys the normalization

1 = ∫ Dη Dη̃ P̃[η, η̃]    (8)
We now compute the integral over η0 in equation (7),
which is an ensemble average over initial phases and driv-
ing frequencies. We assume that the initial phases and
driving frequencies for each of the N oscillators are inde-
pendent and obey the distribution ρ0(θ, ω). P [η0] repre-
sents the functional probability distribution for the ini-
tial number density of oscillators. Noting that η0(θ, ω) is
given by
η0(θ, ω) = (1/N) Σi δ(θ − θi(0)) δ(ω − ωi)    (9)
and that ∫ Dη0 P0[η0] = ∏i ∫ dθidωi ρ0(θi, ωi), one can show
that the distribution from equation (7) is given by
P̃[η, η̃] = exp( −N ∫ dθdωdt η̃ C(η)
  + N ln[ ∫ dθdω (e^{η̃(θ,ω,t0)} − 1) ρ0(θ, ω) + 1 ] )    (10)
In deriving Eq. (10), we have used the fact that

∫ Dη0 P0[η0] exp( N ∫ dθdω η̃(θ, ω, t0) η0(θ, ω) )
  = ∏i ∫ dθidωi ρ0(θi, ωi) exp( η̃(θi, ωi, t0) )
  = [ ∫ dθdω ρ0(θ, ω) e^{η̃(θ,ω,t0)} ]^N
  = exp( N ln[ ∫ dθdω ρ0(θ, ω) (e^{η̃(θ,ω,t0)} − 1) + 1 ] )    (11)
We see that the fluctuations (i.e. terms non-linear in η̃)
appear only in the initial condition of (10), which is to
be expected since the Kuramoto system is deterministic.
In this form the continuity equation (4) appears as a
Langevin equation sourced by the noise from the initial
state.
Although the noise is contained entirely within the ini-
tial conditions, it is still relatively complicated. We can
simplify the structure of the noise in (10) by performing
the following transformation [21]:
ϕ(θ, ω, t) = η exp(−η̃)
ϕ̃(θ, ω, t) + 1 = exp(η̃) (12)
Under the transformation (12), P̃[η, η̃] becomes P̃[ϕ, ϕ̃],
which is given by
P̃ [ϕ, ϕ̃] = exp (−NS[ϕ, ϕ̃]) (13)
where the action S[ϕ, ϕ̃] is

S[ϕ, ϕ̃] = ∫ dωdθdt [ ϕ̃ ∂ϕ/∂t + ϕ̃ ∂/∂θ (ωϕ)
  + K ∫ dω′dθ′ (ϕ̃′ϕ̃ + ϕ̃) ∂/∂θ {f(θ′ − θ)ϕ′ϕ} ]
  − ln[ 1 + ∫ dθdω ϕ̃(θ, ω, t0)ρ0(θ, ω) ]    (14)
The form (14) is obtained from the transformation (12)
only after several integrations by parts. In most cases,
these integrations do not yield boundary terms because
of the periodic domain (i.e. those in θ). In the case of the
∂t operator, however, we are left with boundary terms of
the form [ln(ϕ̃ + 1) − 1]ϕ̃ϕ. These terms will not affect
computations of the moments because of the causality of
the propagator (see section III A).
We are interested in fluctuations around a smooth so-
lution ρ(θ, ω, t) of the continuity equation (4) with initial
condition ρ(θ, ω, t0) = ρ0(θ, ω). We transform the field
variables via ψ = ϕ−ρ and ψ̃ = ϕ̃ in (14) and obtain the
following action:

S[ψ, ψ̃] = ∫ dωdθdt ψ̃ { ∂ψ/∂t + ∂/∂θ (ωψ)
  + K ∂/∂θ ∫ dω′dθ′ f(θ′ − θ)(ψ′ρ + ρ′ψ + ψ′ψ) }
  + K ∫ dωdθdt ∫ dω′dθ′ ψ̃′ψ̃ ∂/∂θ {f(θ′ − θ)(ψ′ + ρ′)(ψ + ρ)}
  − Σ_{k≥1} ((−1)^{k+1}/k) [ ∫ dθdω ψ̃(θ, ω, t0)ρ0(θ, ω) ]^k    (15)
For fluctuations about the incoherent state: ρ(θ, ω, t) =
ρ0(θ, ω) = g(ω)/2π, where g(ω) is a fixed frequency dis-
tribution. The incoherent state is an exact solution of
the continuity equation (4). Due to the homogeneity in
θ and the derivative couplings, there are no corrections to
it at any order in 1/N . The action (15) with ρ = g(ω)/2π
therefore describes fluctuations about the true mean dis-
tribution of the theory, i.e. 〈η(θ, ω, t)〉 = g(ω)/2π. We
can evaluate the moments of the probability distribution
(13) with (15) using the method of steepest descents,
which treats 1/N as an expansion parameter. This is a
standard method in field theory which produces the loop
expansion [22]. We first separate the action (15) into
“free” and “interacting” terms.
S[ψ, ψ̃] = SF [ψ, ψ̃] + SI [ψ, ψ̃] (16)
where

SF[ψ, ψ̃] = ∫ dωdθdt ψ̃ { ∂ψ/∂t + ∂/∂θ (ωψ)
  + K ∂/∂θ ∫ dω′dθ′ f(θ′ − θ)(ψ′ρ + ρ′ψ) }
  ≡ ∫ dxdtdx′dt′ ψ̃(x′, t′)Γ0(x, t; x′, t′)ψ(x, t)    (17)

SI[ψ, ψ̃] = ∫ dωdθdt ψ̃ { K ∂/∂θ ∫ dω′dθ′ f(θ′ − θ)ψ′ψ }
  + K ∫ dωdθdt ∫ dω′dθ′ ψ̃′ψ̃ ∂/∂θ {f(θ′ − θ)(ψ′ + ρ′)(ψ + ρ)}
  − Σ_{k≥1} ((−1)^{k+1}/k) [ ∫ dθdω ψ̃(θ, ω, t0)ρ0(θ, ω) ]^k    (18)
In deriving the loop expansion, the action is expanded
around a saddle point, resulting in an asymptotic series
whose terms consist of moments of the Gaussian func-
tional defined by the terms in the action (15) which are
bilinear in ψ and ψ̃, i.e. SF [ψ, ψ̃]. Hence, the loop expan-
sion terms consist of various combinations of the inverse
of the operator Γ0, defined by SF , called the bare prop-
agator, with the higher order terms in the action, called
the vertices. Vertices are given by the terms in SI .
The terms in the loop expansion are conveniently rep-
resented by diagrams. The bare propagator is repre-
sented diagrammatically by a line, and should be com-
pared to the variance of a gaussian distribution. Each
term in the action (other than the bilinear term) with n
powers of ψ and m powers of ψ̃ is represented by a vertex
with n incoming lines and m outgoing lines. The initial
state vertices produce only outgoing lines and, like the
non-initial state or “bulk” vertices, are integrated over
θ, ω, and t for each point at which the operators are de-
fined. The bulk vertices are represented by a solid black
dot (or square, see Figure 1) and initial state vertices
by an open circle. The bare propagator and vertices are
shown in Figure 1. Unlike conventional Feynman dia-
grams used in field theory, the vertices in Figure 1 rep-
resent nonlocal operators defined at multiple points. In
particular, the initial state terms involve operators at a
different point for each outgoing line. Although uncon-
ventional, this is the natural way of characterizing the
1/N expansion.
Adopting the shorthand notation of x ≡ {θ, ω}, each
arm of a vertex must be connected to a line (propagator)
at either x or x′ and lines connect outgoing arms in one
vertex to incoming arms in another. The moment with
n powers of ψ and m powers of ψ̃ is calculated by sum-
ming all diagrams with n outgoing lines and m incoming
FIG. 1: Diagrammatic (Feynman) rules for the fluctuations
about the mean. Time moves from right to left, as indicated
by the arrow. The bare propagator P0(x, t|x′, t′) (see Eq. (31))
connects points at x′ to x, where x ≡ {θ, ω}. Each branch of
a vertex is labeled by x and x′ and is connected to a factor of
the propagator at x or x′. Each vertex represents an operator
given to the right of that vertex. The “. . . ” on which the
derivatives act only include the incoming propagators, but
not the outgoing ones. There are integrations over θ, θ′, ω, ω′
and t at each vertex.
lines. This means that each diagram will stand for sev-
eral terms which are equivalent by permutations of the
vertices or the edges in the graph, equivalently permuta-
tions of the factors of ψ̃ and ψ in the terms in the series
expansion. In a typical field theory, this results in combi-
natoric factors. In the present case, diagrams which are
not topologically distinct can produce different contribu-
tions to a given moment. Nonetheless, we will designate
the sum of these terms with a single graph. Generically,
the combinatoric factors we expect are due to the ex-
change of equivalent vertices, which typically cancel the
factorial in the series expansion. Additionally, each line
in a diagram contributes a factor of 1/N and each vertex
contributes a factor of N . Hence each loop in a diagram
carries a factor of 1/N . The terms in the expansion with-
out loops are called “tree level”. The bare propagator
is the tree level expansion of 〈ψ(θ, ω, t)ψ̃(θ′, ω′, t′)〉. The
tree level expansion of each moment beyond the first car-
ries an additional factor of 1/N , i.e. the propagator and
two-point correlators are each O(1/N).
Mean field theory is defined as the N → ∞ limit of this
field theory. In the infinite size limit, all moments higher
than the first are zero (provided the terms in the series
are not at a singular point, i.e. the onset of synchrony).
Hence, the only surviving terms in the action (15) are
those which contribute to the mean of the field at tree
level. These terms sum to give solutions to the continuity
equation (4). If the initial conditions are smooth, then
mean field theory is given by the relevant smooth solution
of (4). In most of the previous work (e.g. [12, 24]),
smooth solutions to (4) were taken as the starting point
and hence automatically assumed mean field theory.
We can now validate our choice of N as the correct
scaling factor in the delta functional of (5) by considering
the equal time two-point correlator. Using the definition
of η from (3) we get
〈η(x, t)η(x′, t)〉 = C(x, x′, t) + ρ(x, t)ρ(x′, t)
  − (1/N) ρ(x, t)ρ(x′, t) + (1/N) δ(x − x′)ρ(x′, t)    (19)
where ρ(x, t) = 〈η(x, t)〉 = 〈ϕ(x, t)〉. Using the fields ϕ
and ϕ̃ (defined in (12)) and taking η at different times
gives
〈η(x, t)η(x′, t′)〉 = 〈[ϕ̃(x, t)ϕ(x, t) + ϕ(x, t)][ϕ̃(x′, t′)ϕ(x′, t′) + ϕ(x′, t′)]〉    (20)
for t > t′. The response field has the property that ex-
pectation values containing ϕ̃(x, t) are zero unless an-
other field insertion of ϕ(x, t) is also present but at a
later time (this is because the propagator is causal; it is
zero for t− t′ ≤ 0). Therefore
〈η(x, t)η(x′, t′)〉 = 〈ϕ(x, t)ϕ(x′, t′)〉
+ 〈ϕ(x, t)ϕ̃(x′, t′)〉〈ϕ(x′, t′)〉 (21)
As we will show later when we discuss the propagator in
more detail, we have
〈ϕ(x, t)ϕ̃(x′, t′)〉 = (1/N) δ(x − x′)    (t → t′⁺)    (22)
Comparing (21) in the limit t → t′ with (19) allows the
immediate identification of
〈ϕ(x, t)ϕ(x′, t)〉 = C(x, x′, t) + ρ(x, t)ρ(x′, t) − (1/N) ρ(x, t)ρ(x′, t)    (23)
C(x, x′, t) is the two oscillator “connected” correlation
or moment function. This is consistent with (15) which
gives
〈ϕ(x, t0)ϕ(x′, t0)〉 = ρ(x, t0)ρ(x′, t0) − (1/N) ρ(x, t0)ρ(x′, t0)    (24)
as the initial condition. Thus, comparing the second mo-
ment of η using the Doi-Peliti fields (20) with the expres-
sion from the direct computation given by (19) shows
that the factor of N in the delta functional of (5) was
necessary to obtain the correct scaling for the moments.
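Equation (19) can be checked directly by Monte Carlo for i.i.d. oscillators: integrating η over two bins A and B turns (19) into binomial moment identities, 〈η_A η_B〉 = (1 − 1/N) p_A p_B for disjoint bins and 〈η_A²〉 = (1 − 1/N) p_A² + p_A/N for the coincident (δ-function) term. The sketch below is our toy check, using points distributed uniformly on [0, 1) instead of (θ, ω).

```python
import random

def binned_density_moments(N=10, trials=20000, seed=2):
    """Monte Carlo moments of the empirical density η = (1/N) Σi δ(x − xi),
    integrated over two bins, for N i.i.d. points uniform on [0, 1)."""
    rng = random.Random(seed)
    A, B = (0.0, 0.1), (0.5, 0.6)         # disjoint bins, masses pA = pB = 0.1
    s_ab = s_aa = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(N)]
        nA = sum(1 for x in xs if A[0] <= x < A[1])
        nB = sum(1 for x in xs if B[0] <= x < B[1])
        s_ab += (nA / N) * (nB / N)       # <η_A η_B>, disjoint bins
        s_aa += (nA / N) ** 2             # <η_A η_A>, includes the δ term
    return s_ab / trials, s_aa / trials
```

The disjoint-bin moment reproduces the (1 − 1/N) factor of Eq. (24), and the same-bin moment picks up the extra p/N from the δ-function term, confirming the scaling fixed by the factor N in the delta functional.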
The Doi-Peliti action (15) can also be derived by con-
sidering an effective Markov process on a circular lattice
representing the angle θ where the probability of an oscil-
lator moving to a new point on the lattice is determined
by its native driving frequency ω and the relative phases
of the other oscillators (see Appendix C). This is the
primary reason we refer to the theory as being of Doi-
Peliti type. The continuum limit of this process yields a
theory described by the action (15). The Markov picture
provides an intuitive description and underscores the fun-
damental idea that we have produced a statistical theory
obeyed by a deterministic process.
Although our formalism is statistical, we emphasize
that no approximations have been introduced. The sta-
tistical uncertainty is inherited from averaging over the
initial phases and driving frequencies. This formalism
could be applied to a wide variety of deterministic dy-
namical systems that can be represented by a distribu-
tional continuity equation like Eq. (4). In general, a solu-
tion for the moment generating functional for our action
(15) is as difficult to obtain as solving the original sys-
tem. The advantage of formulating the system as a field
theory is that a controlled perturbation expansion with
the inverse system size as the small parameter is possible.
A. Relation to Kinetic Theory and Moment
Hierarchies
The theory defined by the action (15) is equivalently
expressed as a Born-Bogoliubov-Green-Kirkwood-Yvon
(BBGKY) moment hierarchy starting with the continuity
equation (4). To construct the moment hierarchy, one
takes expectation values of the continuity equation with
products of the density, η(θ, ω, t). This results in coupled
equations of motion for the various moments of η(θ, ω, t),
where each equation depends upon one higher moment in
the hierarchy. The Dyson-Schwinger equations derivable
from (15) with 〈ϕ̃〉 = 0 are exactly the BBGKY hierarchy
derived in Ref. [17]. Thus the kinetic theory and field
theory approaches are entirely equivalent.
The moments of η can be computed in the BBGKY hi-
erarchy by truncating at some level. In Ref. [17], this was
done at Gaussian order by assuming that the connected
three-point function was zero. The first two equations
of the hierarchy then form a closed system which can be solved. The first two equations of the BBGKY hierar-
chy [17] are
∂ρ(x, t)/∂t + ∂/∂θ { ωρ(x, t) + K ∫ f(θ′ − θ)ρ(x, t)ρ(x′, t)dθ′dω′
  + K ∫ f(θ′ − θ)C(x, x′, t)dθ′dω′ } = 0    (25)
where ρ(x, t) = 〈η(x, t)〉, and the connected (equal-time) correlation function

C(x, x′, t) = 〈η(x, t)η(x′, t)〉 − ρ(x, t)ρ(x′, t)
  + (1/N) ρ(x, t)ρ(x′, t) − (1/N) δ(x − x′)ρ(x′, t)    (26)
obeys

{ ∂/∂t + ω1 ∂/∂θ1 + ω2 ∂/∂θ2 + K ∂/∂θ1 ∫ f(θ3 − θ1)ρ(x3, t)dθ3dω3
  + K ∂/∂θ2 ∫ f(θ3 − θ2)ρ(x3, t)dθ3dω3 } C(x1, x2, t)
  + K ∂/∂θ1 ∫ f(θ3 − θ1)ρ(x1, t)C(x2, x3, t)dθ3dω3
  + K ∂/∂θ2 ∫ f(θ3 − θ2)ρ(x2, t)C(x3, x1, t)dθ3dω3
  = −(K/N) [ ∂/∂θ1 f(θ2 − θ1) + ∂/∂θ2 f(θ1 − θ2) ] ρ(x1, t)ρ(x2, t)    (27)
In the field theoretic approach, instead of truncating
the BBGKY hierarchy, one instead truncates the loop
expansion. Truncating the moment hierarchy at the mth
order is equivalent to truncating the loop expansion for
the lth moment at the (m − l)th order. Thus the solu-
tion to the moment equations (25) and (27) is the one
loop expression for the first moment and the tree level
expression for the second moment. The advantage of us-
ing the action (15) is that the terms in the perturbation
expansion are given automatically by the relevant dia-
grams at any level of the hierarchy. Ref. [17] suggested
that a higher order in the hierarchy would be necessary to
check whether the mean field marginal modes are stable
for finite N . We demonstrate below that the field the-
ory facilitates the calculation of the linearization of equa-
tion (25) to higher order in 1/N and show that marginal
modes are stabilized by finite size fluctuations.
One can compare this approach to the maximum en-
tropy approach of Rangan and Cai [28] for developing
consistent moment closures for such hierarchies. In the
moment hierarchy approach of Ref. [17], moment closure
is obtained via the somewhat ad hoc approach of setting
the nth cumulant to zero. In contrast, Rangan and Cai
maximize the entropy of the distribution subject to cer-
tain normalization constraints. The moment closure is
facilitated by constraining higher moments from the hi-
erarchy. However, one still must solve the resulting equa-
tions. In the loop expansion, moment closure is obtained
implicitly via truncating the loop expansion. The loop
expansion approach offers the advantage of providing a
natural means for determining when the approximation,
thus the implicit closure, breaks down and avoids deal-
ing with the moment hierarchy explicitly. In fact, Ran-
gan and Cai’s procedure has a natural interpretation in
field theory, namely the minimization of a generalized
effective action in terms of various moments. The sim-
plest and most common is the effective action in terms
of the mean field, which is the generating functional of
one particle irreducible (1PI) graphs [22]. The next level
of approximation is a generalized effective action (the
“effective action for composite operators” [29]) in terms
of the mean and the two-point function (or functions),
which is the generating functional of two particle irre-
ducible (2PI) graphs. One can continue in this way. The
equations of motion of these effective actions will produce
a closure of the moment hierarchy implicit in the action
for the theory. At tree level these equations will be equiv-
alent to those produced by Rangan and Cai’s maximum
entropy approach. The loop expansion allows for sys-
tematic corrections to these equations without explicitly
invoking higher equations in the hierarchy.
III. TREE LEVEL LINEAR RESPONSE,
CORRELATIONS AND FLUCTUATIONS
As a first example, we reproduce the calculation of the
variation of the order parameter Z, which was calculated
previously using the BBGKY moment hierarchy [17]. To
do so requires the calculation of the tree level linear re-
sponse or bare propagator and the tree level connected
two-oscillator correlation function.
A. The Propagator
The propagator P (θ, ω, t|θ′, ω′, t′) is given by the ex-
pectation value
P (θ, ω, t|θ′, ω′, t′) = 〈ϕ(θ, ω, t)ϕ̃(θ′, ω′, t′)〉 (28)
It is the linear response of (4). This can be shown by
considering a small perturbation δρ0 to the initial state
ρ in the action (15). Expanding to first order then yields:
δρ(θ, ω, t) = N ∫ dθ′dω′ 〈ϕ(θ, ω, t)ϕ̃(θ′, ω′, t′)〉 δρ0(θ′, ω′)    (29)
The tree level linear response or bare propagator
P0(θ, ω, t|θ
′, ω′, t′) ≡ P0(x, t;x
′, t′) is the functional in-
verse of the operator Γ0 defined by the free part of the
action (17). The bare propagator is therefore given by
Γ0 · P0 ≡ ∫ dx′′dt′′ Γ0(x, t; x′′, t′′)P0(x′′, t′′; x′, t′)
  = (1/N) δ(x − x′)δ(t − t′)    (30)
Using the action (15) with Eq. (30) gives

Γ0 · P0 ≡ { ∂/∂t + ∂/∂θ ω + K ∂/∂θ ∫ f(θ1 − θ)ρ(x1, t)dθ1dω1 } P0(x, x′, t − t′)
  + K ∂/∂θ ∫ f(θ1 − θ)ρ(x, t)P0(x1, x′, t − t′)dθ1dω1
  = (1/N) δ(θ − θ′)δ(ω − ω′)δ(t − t′)    (31)
Due to the rotational invariance in θ of f(θ),
P (x, t;x′, t′) ≡ P (θ − θ′, ω, ω′, t− t′).
In the incoherent state, ρ(θ, ω, t) = g(ω)/2π. Thus, for f(θ) odd, Eq. (31) becomes

{ ∂/∂t + ω ∂/∂θ } P0(x, x′, t − t′)
  + (K/2π) g(ω) ∂/∂θ ∫ f(θ1 − θ)P0(x1, x′, t − t′)dθ1dω1
  = (1/N) δ(θ − θ′)δ(ω − ω′)δ(t − t′)    (32)
We can invert this equation using Fourier and Laplace
transforms.
Taking the Fourier transform of Eq. (32) (with respect to θ and θ′) yields

{ ∂/∂t + inω } P0(n, ω; m, ω′, t − t′) + inKg(ω)f(−n) ∫ P0(n, ω1; m, ω′, t − t′)dω1
  = (1/2πN) δ(t − t′)δ(ω − ω′)δ_{n+m}    (33)
where we use the following convention for the Fourier
transform
f(n) = (1/2π) ∫ f(θ)e^{−inθ}dθ,    f(θ) = Σn f(n)e^{inθ}    (34)
Hereon, we will suppress the index m since the propagator must be diagonal (i.e. P0(n, m) ∝ δ_{m+n}).
We Laplace transform in τ = t − t′ to get

[s + inω] P̃0(n, ω, ω′, s) + inKg(ω)f(−n) ∫ P̃0(n, ω1, ω′, s)dω1
  = (1/2πN) δ(ω − ω′)    (35)
using the convention

f̃(s) = ∫0^∞ f(τ)e^{−sτ}dτ,    f(τ) = (1/2πi) ∫L f̃(s)e^{sτ}ds    (36)
where the contour L is to the right of all poles in f̃(s).
We can solve for P̃0(n, ω, ω′, s) using a self-consistency condition. Integrate (35) over ω after dividing by s + inω to get

∫ dω P̃0(n, ω, ω′, s) = −inKf(−n) ∫ (g(ω)/(s + inω)) dω ∫ dω1 P̃0(n, ω1, ω′, s)
  + (1/2πN) (1/(s + inω′))    (37)
which we can solve to obtain

∫ dω P̃0(n, ω, ω′, s) = (1/2πN) (1/((s + inω′)Λn(s)))    (38)
where

Λn(s) = 1 + inKf(−n) ∫ (g(ω)/(s + inω)) dω    (39)
Λn(s) is defined for Re(s) ≤ 0 via analytic continuation.
In the kinetic theory context of an oscillator density obey-
ing the continuity equation (4), Λn(s) is analogous to a
plasma dielectric function [17]. If we assume that g(ω) is
even and f(θ) is odd, then there is a single real number
sn such that Λn(sn) = 0. (Mirollo and Strogatz proved that (39) has at most one root, which is real and must satisfy Re(s) ≥ 0 [11, 30]. In our case, Λn(s) is
defined for Re(s) < 0 not by (39), but rather via ana-
lytic continuation.) Using (39) in (35) and solving for P̃0(n, ω, ω′, s) gives

P̃0(n, ω, ω′, s) = (1/2πN) { δ(ω − ω′)/(s + inω)
  − inKg(ω)f(−n)/((s + inω)(s + inω′)Λn(s)) }    (40)
Here we identify the spectrum with the zeroes of Γ0 or,
equivalently, the poles of the propagator, as these will de-
termine the time evolution of perturbations. Analogous
to the analysis of Strogatz and Mirollo [24] we define the
operator O by

O[bn(ω, t)] ≡ inω bn(ω, t) + inKf(−n) ∫ bn(ω1, t)g(ω1)dω1    (41)
FIG. 2: Diagrams for the connected two-point function at tree
level a) and to one loop b).
(cf. equation (62) below). The continuous spectrum of
O consists of the frequencies inω whereas the discrete
spectrum (according to Ref. [24]) only exists for K >
Kc. Consistent with that approach (i.e. linear operator
theory), we identify the poles in P due to s+ inω as the
continuous spectrum and those due to the zeroes of Λn(s)
as the discrete spectrum. If Λn(s) is not analytically
continued for Re(s) < 0, it will not have zeroes for that
domain, as in Ref. [24]. However, zeros can exist for
Re(s) < 0 when Λn(s) is analytically continued and this
is why analytic continuation is of such crucial importance
to the conclusions of Ref. [12].
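With the convention (34), f(θ) = sin θ gives f(−1) = i/2, so Λ1(s) = 1 − (K/2)∫g(ω)/(s + iω)dω; for a Lorentzian g of width γ this is Λ1(s) = 1 − (K/2)/(s + γ) in closed form, with the discrete root s0 = (K − Kc)/2 and Kc = 2γ. The root can also be located numerically from the definition (39). The sketch below is ours (simple trapezoidal quadrature plus bisection); it is valid for K > Kc, where the root lies at Re(s) > 0 and no analytic continuation is needed.

```python
import math

def Lambda1(s, K, gamma, W=200.0, n=20001):
    """Numerical Λ1(s) of Eq. (39) for f(θ) = sin θ, i.e.
    Λ1(s) = 1 - (K/2) ∫ g(ω)/(s + iω) dω, with Lorentzian g(ω).
    For real s > 0 and even g only the real part survives:
    Re[1/(s + iω)] = s/(s² + ω²). Trapezoid rule on [-W, W]."""
    h = 2 * W / (n - 1)
    total = 0.0
    for k in range(n):
        w = -W + k * h
        g = gamma / (math.pi * (w * w + gamma * gamma))
        wt = 0.5 if k in (0, n - 1) else 1.0
        total += wt * g * s / (s * s + w * w)
    return 1.0 - 0.5 * K * h * total

def discrete_root(K, gamma, lo=1e-4, hi=20.0):
    """Bisect for the discrete root s0 of Λ1 (requires K > Kc = 2γ,
    so the root sits at Re(s) > 0); Λ1 is increasing in s there."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if Lambda1(mid, K, gamma) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For K = 3 and γ = 1 the numerical root agrees with the exact value s0 = (K − Kc)/2 = 0.5. Below threshold (K < Kc) the continued root s0 < 0 cannot be found this way, which is precisely why the analytic continuation discussed above matters.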
B. Correlation Function
The connected correlation function (cumulant func-
tion) is given by
C(x1, t1;x2, t2) = 〈ψ(x1, t1)ψ(x2, t2)〉 (42)
This is equivalent to (26), which was computed in
Ref. [17] when t1 = t2. If the initial phases are un-
correlated (i.e. C(x1, 0;x2, 0) = 0) then at tree level
C(x1, t1;x2, t2) is given by the diagram shown in Fig-
ure 2 a). It is comprised of vertex III (see Figure 1) com-
bined with a bare propagator on each arm. For general
(i.e. not odd) f(θ), the diagram in Figure 2 a) actually
corresponds to two different terms because the arms of
vertex III can be interchanged, giving two terms in equation (43) below. Unlike conventional field theory, these interchanges are not symmetric. These two terms are
equal when f(θ) is odd. More generally other vertices
do not exhibit the symmetries typical of Feynman dia-
grams even for odd f(θ). Applying the Feynman rules
then gives at tree level

C(x1, t1; x2, t2) = −(KN/(2π)²) ∫ dω′1dω′2 ∫ dθ′1dθ′2 ∫_{t0}^{t} dt′
  × P0(x1, x′1, t1 − t′)P0(x2, x′2, t2 − t′)
  × [ ∂/∂θ′1 f(θ′2 − θ′1) + ∂/∂θ′2 f(θ′1 − θ′2) ] g(ω′1)g(ω′2)    (43)
where t = min(t1, t2). This is essentially identical to the
ansatz used in Ref. [17] for the solution of the second
moment equation in the BBGKY hierarchy (27). For
f(θ) = sin θ and t1 > t2, Fourier transforming Eq. (43) and inserting the bare propagator from Eq. (40) (after an inverse Laplace transformation) gives, for a Lorentz distribution g(ω) = (γ/π)/(ω² + γ²) with Kc = 2γ and Δ ≡ (Kc − K)/2,

C1(ω1, t1, ω2, t2) = (K g(ω1)g(ω2)/(4π²N)) (1/((iω1 + Δ)(−iω2 + Δ)))
  × { (γ + iω1)(γ − iω2) e^{iω1(t1−t2)} (e^{i(ω1−ω2)t2} − 1)/(i(ω1 − ω2))
  − (K/2)(γ + iω1) e^{iω1(t1−t2)} (1 − e^{−(Δ−iω1)t2})/(Δ − iω1)
  − (K/2)(γ − iω2) e^{−Δ(t1−t2)} (1 − e^{−(Δ+iω2)t2})/(Δ + iω2)
  + (K²/(4(K − Kc))) e^{−Δ(t1−t2)} (e^{−(Kc−K)t2} − 1) }    (44)
The equal time correlator is given by (t1 = t2 = τ, with Δ ≡ (Kc − K)/2):

C1(ω1, ω2, τ) = (K g(ω1)g(ω2)/(4π²N)) (1/((iω1 + Δ)(−iω2 + Δ)))
  × { (γ + iω1)(γ − iω2) (e^{i(ω1−ω2)τ} − 1)/(i(ω1 − ω2))
  − (K/2)(γ + iω1) (1 − e^{−(Δ−iω1)τ})/(Δ − iω1)
  − (K/2)(γ − iω2) (1 − e^{−(Δ+iω2)τ})/(Δ + iω2)
  + (K²/(4(K − Kc))) (e^{−(Kc−K)τ} − 1) }    (45)
C−1 = C̄1 and the other modes vanish. Note that the
initial condition C1(ω, ω
′, 0) = 0 is satisfied and that the
time constants and frequencies which appear are every
possible way of pairing those from the tree level propa-
gator.
For illustrative purposes, Figure 2 b) shows the one
loop diagrams which contribute to C; these diagrams are
O(1/N2). We should note here the special role played by
the diagrams with initial state terms, in particular the
vertex proportional to ψ̃(θ, ω, t0)². This diagram evalu-
ates to exactly the same result as (44), with an additional
factor of −1/N . It serves to provide the proper normal-
ization for the two point function, which should go as
(N − 1)/N since the self-interaction (diagonal) terms are
not included. The other diagrams diverge faster as one
approaches criticality (K = Kc). They are of negligible
importance at small coupling but become increasingly
important near the onset of synchrony.
C. Order Parameter Fluctuations
We now compute the fluctuations in the order parame-
ter Z given in Eq. (2). The variance of Z (second moment
〈ZZ̄〉) is given by
〈ZZ̄〉 = 〈r²(t)〉 = ∫ dωdω′dθdθ′ 〈η(ω, θ, t)η(ω′, θ′, t)〉 e^{i(θ−θ′)}    (46)
Using equation (19) in equation (46) gives

〈r²(t)〉 = ∫ dωdω′dθdθ′ C(x, x′, t) e^{i(θ−θ′)} + 1/N    (47)
since in the incoherent state ρ(x, t) = g(ω)/2π is inde-
pendent of θ, so that 〈Z〉 = 0. Hence
〈r²(τ)〉 = 4π² ∫ dωdω′ C−1(ω, ω′, τ) + 1/N    (48)
which evaluates to [17]

〈r²(τ)〉 = (1/N) { 1 − (4/K) Res_{s′=s0}[1/Λ1(s′)]
  × Σs Res[ ((Λ1(s − s0) − 1)/(s Λ1(s − s0))) e^{sτ} ] }    (49)

where s0 is the (real) discrete root of Λ1 and the sum over s runs over the poles of the bracketed expression at s = 0 and s = 2s0.
The time evolution of 〈r2〉 is then determined by the
poles of Λ1(s). As an example, consider f(θ) = sin θ and
g(ω) a Lorentz distribution (see Appendix B); we have
the result

〈r²(τ)〉 = (1/N) { Kc/(Kc − K) − (K/(Kc − K)) e^{−(Kc−K)τ} }    (50)
where Kc = 2γ. Note that this diverges as τ → ∞ for
K = Kc. In the mean field limit N → ∞, 〈r²〉 = 0 as expected. As was shown in Ref. [17], the tree level calcu-
expected. As was shown in Ref [17], the tree level calcu-
lation adequately captures the fluctuations except near
the onset of synchrony (K = Kc). An advantage of the
field theoretic formalism is that it allows us to approach
even higher moments without needing to worry about
the moment hierarchy. In particular, for f(θ) = sin θ it
is straightforward to show that higher cumulants, such
as 〈(ZZ̄)2〉 − 〈ZZ̄〉2, must be zero at tree level because
of rotational invariance (more precisely, any cumulant of
Z higher than quadratic). The cumulants are given by
graphs which are connected. Vertex III produces two
lines with wave numbers n = ±1. Additionally, the IV
and V vertices impose a shift in wave number, whereas
Z and Z̄ project onto ±1. In order to calculate these
higher fluctuations it is necessary to go to the one loop
level. Note that this does not imply that the higher cu-
mulants of η are zero. Figure 2 b) gives the diagrams
for the correlation function at one loop. The one loop
calculation would also give a better estimate for 〈r2〉, es-
pecially nearer to criticality. The non-interacting distri-
bution is Gaussian, with non-Gaussian behavior growing
as one approaches criticality.
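The long-time limit implied by Eq. (50), 〈r²〉 → Kc/(N(Kc − K)), can be checked directly against a simulation. The sketch below is our own construction, not part of the original study: the Euler integrator, time step, and parameter values are arbitrary choices, and the Lorentzian frequency draws are clipped so that the explicit stepper stays accurate.

```python
import numpy as np

def incoherent_r2(N, K, gamma, T=600.0, dt=0.05, burn=200.0, seed=0):
    """Euler-integrate dtheta_j/dt = omega_j + K r sin(psi - theta_j) with
    Lorentzian-distributed omega_j and return the time average of |Z|^2."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)                     # incoherent start
    omega = gamma * np.tan(np.pi * (rng.uniform(size=N) - 0.5))  # Lorentzian (Cauchy) draws
    omega = np.clip(omega, -10.0, 10.0)  # guard rare extreme tail draws the stepper cannot resolve
    samples = []
    for step in range(int(T / dt)):
        Z = np.exp(1j * theta).mean()                            # Z = r e^{i psi}
        theta += dt * (omega + K * np.imag(Z * np.exp(-1j * theta)))
        if step * dt > burn:
            samples.append(np.abs(Z) ** 2)
    return float(np.mean(samples))

gamma, K, N = 0.05, 0.05, 400
Kc = 2.0 * gamma
predicted = Kc / (N * (Kc - K))       # long-time limit read off from Eq. (50)
measured = np.mean([incoherent_r2(N, K, gamma, seed=s) for s in range(3)])
```

With K = 0.5 Kc the tree level estimate should hold to within the statistical error of a few-seed average; closer to Kc the loop corrections discussed above become important and the agreement degrades.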
IV. LINEAR STABILITY AND MARGINAL
MODES
We analyze linear stability by convolving the linear
response or propagator with an initial perturbation. Al-
though we are free in our formalism to consider arbitrary
perturbations, we will consider two specific kinds for il-
lustrative purposes. In the first case we perturb only the
angular distribution. In the second case we consider a
perturbation which fixes one oscillator to be at a given
angle θ and frequency ω at a given time t. We first calcu-
late the results at tree level which reproduces the mean
field theory results of Ref. [24]. In particular, we arrive
at the same spectrum and Landau damping results of
Refs. [24] and [12]. We then define and calculate the op-
erator Γ (an extension of Γ0) to one loop order which
ultimately allows us to calculate the corrections to the
spectrum to order 1/N .
A. Mean field theory
The bare propagator which is the full linear response
for mean field theory is given by Eq. (40). The zeroes
of the operator Γ with respect to s specify the spectrum
of the linear response. There is a set of marginal modes
(continuous spectrum) along the imaginary axis spanning
an interval given by the support of g(ω). There is also a
set of discrete modes given by the zeros of the dielectric
function Λn(s) as was found in Ref. [24], aside from the
issue of the analytic continuation of Λn(s).
However, even though there are marginally stable
modes, the order parameter Z can still decay to zero due
to a generalized Landau damping effect as was shown in
Ref. [12]. Consider a generalization of the order parameter

Zn(t) = (1/N) Σ_{j=1}^{N} e^{inθj},   (51)

which represents Fourier modes of the density integrated over all frequencies:

Zn(t) = (1/N) ∫ dθ dω η(θ,ω,t) e^{inθ}   (52)

and hence

〈Zn(t)〉 = (1/N) ∫ dθ dω ρ(θ,ω,t) e^{inθ} = (2π/N) ∫ dω ρ(−n,ω,t).   (53)
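The generalized order parameters (51) are a one-line computation; a minimal sketch (our own helper function, with hypothetical names):

```python
import numpy as np

def order_parameters(theta, n_max=3):
    """Z_n = (1/N) sum_j exp(i n theta_j), Eq. (51), for n = 1, ..., n_max."""
    theta = np.asarray(theta)
    return np.array([np.exp(1j * n * theta).mean() for n in range(1, n_max + 1)])

# a fully synchronized population has |Z_n| = 1 for every n, while phases
# spread evenly over the circle give Z_n = 0 for n not a multiple of N
Z_sync = order_parameters(np.full(1000, 0.7))
Z_spread = order_parameters(2.0 * np.pi * np.arange(1000) / 1000.0)
```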
In the incoherent state, ρ(θ, ω, t) is independent of θ so
〈Zn〉 is zero for n > 0. The density response δρ(θ, ω, t)
to an initial perturbation δρ(θ,ω,0) is given by

δρ(θ,ω,t) = N ∫ P0(x,x′,t) δρ(θ′,ω′,0) dθ′ dω′.   (54)
Recall from the definition of the action (15), the prop-
agator operates on an initial condition defined by Nρ0.
The perturbed order parameter thus obeys
δ〈Zn(t)〉 = ∫ dθ dω δρ(θ,ω,t) e^{inθ}.   (55)
We will show that for any initial condition involving a
smooth distribution in frequency and angle, δ〈Zn(t)〉 will
decay to zero. However, for non-smooth initial perturba-
tions, δ〈Zn(t)〉 will not decay to zero but will oscillate.
We first consider an initial perturbation of the form
δρ(θ,ω,0) = g(ω) c(θ)   (56)

where

∫ c(θ) dθ = 0.   (57)

Inserting into (54) yields

δρ(θ,ω,t) = N ∫ P0(x,x′,t) c(θ′) g(ω′) dθ′ dω′,   (58)

which is consistent with the perturbation considered in
Ref. [24]. Taking the Laplace transform of (58) gives
δρ̃n(ω,s) = N cn ∫ P̃0(n,ω,ω′,s) g(ω′) dω′.   (59)

Using the tree level propagator (40), we can show that

N ∫ P̃0(n,ω,ω′,s) g(ω′) dω′ = g(ω) / [(s + inω) Λn(s)],   (60)

where Λn(s) is given in (39). Hence

δρ̃(n,ω,s) = cn g(ω) / [(s + inω) Λn(s)].   (61)
From Eq. (61), we see that the continuous spectrum is
given by inω and the discrete spectrum by the zeros of
Λn(s). If we define bn(ω,t) = δρ(n,ω,t)/(N cn g(ω)) then using (58) and (40) we can show

[∂/∂t + inω] bn(ω,t) + inK f(−n) ∫ bn(ω1,t) g(ω1) dω1 = δ(t),   (62)
which is equivalent to the linearized perturbation equa-
tion derived by Strogatz and Mirollo [24], with the ex-
ception that (62) includes the effects of the initial config-
uration through the source term proportional to δ(t).
Inserting (61) into the Laplace transform of (55) yields

δ〈Z̃n(s)〉 = c−n ∫ [g(ω)/(s − inω)] dω · 1/Λ−n(s) = c−n (Λ−n(s) − 1) / [−inK f(n) Λ−n(s)].   (63)

For f(θ) = sin θ, f(±1) = ∓i/2, which leads to

δ〈Z̃1(s)〉 = −(2/K) c−1 (Λ−1(s) − 1)/Λ−1(s).   (64)
We note that δZ1(s) is identical to what was calculated
in Ref. [12] in which it was shown that δZ1(t) → 0 as
t → ∞. Even in the presence of marginal modes, the
order parameter decays to zero through dephasing of the
oscillators. This dephasing effect is similar to Landau
damping in plasma physics.
We can see this explicitly for the case of the Lorentz distribution

g(ω) = γ / [π(γ² + ω²)].   (65)

From (39) we can calculate

Λ±1(s) = 1 − (K/2) · 1/(s + γ)   (66)

and Λn(s) = 1 for n ≠ ±1. The zero of Λ±1(s) is at s±1 = −(γ − K/2), which provides a critical coupling

Kc = 2γ   (67)
above which the system begins to synchronize. The incoherent state is reached when K < Kc, which gives s±1 < 0. Thus

δ〈Z±1〉 = c∓1 e^{−(γ−K/2)τ},   δ〈Zn≠±1〉 = c−n e^{−|n|γτ}.   (68)

Hence angular perturbations decay away in the order parameter.
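This dephasing can be reproduced numerically by integrating the linearized mode equation (62) on a discrete frequency grid. The sketch below is our own check (grid extent, spacing, and the RK4 stepper are arbitrary choices): with f(θ) = sin θ, so that f(−1) = i/2, and a Lorentzian g(ω), the frequency-integrated response B(t) = ∫ b1(ω,t) g(ω) dω should decay as e^{−(γ−K/2)t} even though each individual frequency mode is only marginally stable.

```python
import numpy as np

def landau_damped_response(gamma=1.0, K=1.0, T=4.0, dt=0.002, w_max=200.0, n_w=8001):
    """Integrate db1/dt = -i w b1 + (K/2) int b1(w') g(w') dw'  (Eq. (62) with
    n = 1, f(-1) = i/2, b1(w, 0) = 1) and return B(T) = int b1(w, T) g(w) dw."""
    w = np.linspace(-w_max, w_max, n_w)
    quad = (gamma / np.pi) / (gamma**2 + w**2) * (w[1] - w[0])  # g(w) dw weights
    b = np.ones(n_w, dtype=complex)
    def rhs(b):
        return -1j * w * b + 0.5 * K * np.dot(quad, b)
    for _ in range(int(T / dt)):                                # classical RK4 steps
        k1 = rhs(b); k2 = rhs(b + 0.5 * dt * k1)
        k3 = rhs(b + 0.5 * dt * k2); k4 = rhs(b + dt * k3)
        b = b + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return np.dot(quad, b)

B_T = landau_damped_response()   # Eq. (68) predicts exp(-(gamma - K/2) T) = exp(-2)
```

Each b1(ω,t) only rotates at its own frequency, so no individual mode decays; the decay of B(t) is pure dephasing, the analogue of Landau damping.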
Landau damping due to dephasing is sufficient to de-
scribe the relaxation of Zn(t) to zero for a smooth per-
turbation. However, for non-smooth perturbations, this
may not be true. Consider the linear response to a stim-
ulus consisting of perturbing a single oscillator to have
initial position θ0 and frequency ω0:
ρ0(θ,ω) = [(N−1)/N] g(ω)/(2π) + (1/N) δ(θ−θ0) δ(ω−ω0)   (69)

and so the initial perturbation is

δρ0(θ,ω) = −(1/N) g(ω)/(2π) + (1/N) δ(θ−θ0) δ(ω−ω0).   (70)

Inserting into (54) gives the time evolution of this initial perturbation

δρ(θ,ω,t) = −(1/N) g(ω)/(2π) + P(θ,ω,t; θ0,ω0,0).   (71)
Substituting into (55) and taking the Laplace transform gives

δ〈Z̃n(s)〉 = 2π ∫ dω δρ̃−n(ω,s) = 2π e^{inθ0} ∫ dω P̃(−n,ω,ω0,s) = (e^{inθ0}/N) · 1/(s − inω0) · 1/Λ−n(s).   (72)
There are therefore two modes in δZn(t), one which de-
cays due to dephasing (determined by the zero of Λ−n(s))
and one which oscillates at frequency ω0. Thus, for this
perturbation, the tree level prediction is that δ〈Z〉 is not
zero but oscillates. Inverse Laplace transforming (72) gives the time dependence of the order parameter

δ〈Z1(t)〉 = (e^{iθ0}/N) · 1/(ω0² + (γ−K/2)²) · [(γ(γ−K/2) + ω0² − (K/2)iω0) e^{−iω0t} − (−iω0 + γ − K/2)(K/2) e^{−(γ−K/2)t}].   (73)
The perturbed oscillator has phase θ0 − ω0t. It can always be located; no information is lost as the time evolution progresses. Hence, for a single oscillator perturbation, the tree level calculation predicts that the order
parameter will not decay to zero. In the next section, we show that to the next order in the loop expansion, which accounts for finite size effects, the marginal modes are moved off of the imaginary axis, stabilizing the incoherent state, and the order parameter for a single oscillator perturbation decays to zero.
B. Finite size effects
Let us define a generalization of the operator Γ0:

Γ · P = (1/N) δ(x − x′) δ(t − t′),   (74)
where P without a subscript denotes the full propagator
and the operator Γ is the functional inverse of P . We
can estimate the effect of finite size on the stability of the
incoherent state by calculating the one loop correction to
the operator Γ. We will see that the one loop correction
FIG. 3: The diagrams contributing to the propagator at one
loop order, organized by topology. We consider d) to be of
different topology than c) because it is equivalent to a tree
level diagram with an additional factor of 1/N , due to the
initial state vertex.
FIG. 4: Diagrammatic equation for the propagator. The dou-
ble lines represent the summation of the entire series in 1/N
for the propagator.
produces the effect, among others, of adding a diffusion
operator to Γ, which is enough to stabilize the continuum
of marginal modes because the continuous spectrum is
pushed off the imaginary axis by an amount proportional
to the diffusion coefficient.
We calculate the correction to Γ to one loop order. The
propagator is represented by diagrams with one incom-
ing line and one outgoing line. There are four groups of
diagrams which contribute to the propagator at one loop
order. They are shown in Figure 3 and are labeled by
a), b), c), and d). Using these graphs to calculate the
propagator to order 1/N is not sufficient to demonstrate
the behavior of the spectrum to order 1/N . However,
we can use these graphs to construct an approximation
of Γ to order 1/N and derive the spectrum from this.
If we denote the full propagator (i.e. the entire series
in 1/N for P ) by a double line, we can approximate the
full propagator recursively by the diagrammatic equation
shown in Figure 4. The only terms which are neglected
in this relation are those which are from two loop and
higher graphs and therefore would contribute O(1/N2)
to Γ. Readers familiar with field theory will note that we
are simply calculating the two point proper vertex, which
is the inverse of the full propagator, to one loop order.
If we act on both sides of this equation with Γ0 (the
operator whose inverse is the tree level propagator), we
arrive at an equation of the form:
Γ0 · P = (1/N) δ(x − x′) δ(t − t′) − Γ1 · P   (75)
where we have implicitly defined the one loop correction
to Γ, which we label Γ1. The action of Γ0 converts the
leftmost propagator in each diagram into a delta func-
tion, so that the delta function term in (75) arises from
the tree level propagator line and Γ1 is then comprised of
the loop portions of the remaining diagrams (the "amputated" graphs). We denote the contribution to Γ1 from each group of one loop diagrams by Γ1r(θ,ω;φ,η;t−t″), where r represents a, b, c, or d indicating the group of
diagrams in question. Γ1b = 0 because the derivative
coupling acts on the incoherent state, which is homoge-
neous in θ. The equation of motion for the one loop
propagator P1(x,x′,t) then has the form
Γ0 · P1 + Γ1 · P1 = (1/N) δ(θ − θ′) δ(ω − ω′) δ(t − t′)   (76)
where Γ0 · P1 is given by

Γ0 · P1 = [∂/∂t + (∂/∂θ)(ω + K ∫ f(θ1−θ) ρ(x1,t) dθ1dω1)] P1(x,x′,t−t′) + K (∂/∂θ) ∫ f(θ1−θ) ρ(x,t) P1(x1,x′,t−t′) dθ1dω1   (77)
and

Γ1 · P1 = Σ_{r=a,c,d} ∫ dφ dη dt″ Γ1r(θ,ω;φ,η;t−t″) P1(φ,η,t″;θ′,ω′;t′)   (78)
is the one loop contribution. The kernels Γ1a, Γ1c, and
Γ1d are explicitly computed in Appendix A.
The expressions for the one loop contribution to Γ are
rather complicated but several key features can be ex-
tracted, namely 1) the introduction of a diffusion opera-
tor, 2) a shift in the driving frequency, and 3) the addi-
tion of higher order harmonics to the coupling function
f . The diffusion operator has the effect of shifting the
marginal spectrum from the imaginary axis into the left
hand plane. The effect is that the finite size fluctuations
to order 1/N stabilize the incoherent state.
We can see these effects more easily by considering the
special case of f(θ) = sin θ and g(ω) being a Lorentz dis-
tribution (see Appendix B). The Fourier-Laplace trans-
formed equation of motion for the one loop propagator
has the form

α±1(s;ω) P̃1(n,ω,ω′,s) − (K/2) g(ω)(1 + β±1(s;ω)) ∫ dν P̃1(n,ν,ω′,s) = (1/2πN) δ(ω − ω′),   n = ±1,   (79)

α±2(s;ω) P̃1(n,ω,ω′,s) + ∫ dν β±2(s;ω,ν) P̃1(n,ν,ω′,s) = (1/2πN) δ(ω − ω′),   n = ±2,   (80)

αn(s;ω) P̃1(n,ω,ω′,s) = (1/2πN) δ(ω − ω′),   |n| > 2,   (81)
where

αn(s;ω) = s + inω + (1/N) an(s;ω),   β±1(s;ω) = (1/N) b±1(s;ω),   β±2(s;ω,η) = (1/N) b±2(s;ω,η),   (82)

and the O(1/N) kernels an and b±n are sums over intermediate wave numbers m of terms built from the tree-level Lorentzian poles, involving factors of the form 1/(s + γ − K/2 + i(m+n)ω), the combinatorial weights n(2m+n) and n(m+n), and inverse powers of 2γ − K.
We can solve for P̃1 using the same kind of self-consistency computation that we used for P̃0. This produces an analogously defined dielectric function, Λ1n(s):

Λ1n(s) = 1 − (K/2) ∫ dω g(ω)(1 + βn(s;ω)) / αn(s;ω).   (83)
Stability of the incoherent state is determined by the
spectrum of the operator Γ. Analogous to tree level, the
continuous spectrum is given by the zeros of αn(s;ω) and
the discrete spectrum by the zeros of Λ1n(s) given by (83).
However, at one loop order, the expressions for P̃1 rep-
resent solutions to a coupled system of equations. Thus,
the poles of tree level are shifted, and there are also new
poles reflecting the interaction of the mean density with
the two-point correlation function. These poles will have
residues of O(1/N2) owing to their higher order nature.
For n = ±1,±2, we cannot solve for the poles exactly
but we can approximate the shift in the tree level spec-
trum by evaluating the loop correction at the value of the
tree level pole, s = ∓inω, which is equivalent to using
the “on-shell” condition in field theory. Since the higher
order modes will decay faster than the tree level modes,
this essentially amounts to ignoring short time scales and
is similar to the Bogoliubov approximation [25, 26]. The
remaining effective equation for P̃1 is now first order and,
consequently, we can consider the spectrum of the implic-
itly defined operator analogous to (41).
The continuous spectrum consists of all the zeros of the
function αn. We expect a term of the form inω+O(1/N)
because of the tree level continuous spectrum. This will
govern the behavior at large times. In this case, the “on-
shell” condition is equivalent to Taylor expanding the
loop correction via s = inω + O(1/N) and keeping only
terms which are O(1/N). This yields:

αn(s;ω) = s + in(ω + δω) + n²D,   (84)

where

δω = −(K²/4N) (4γ − K) / [(γ − K/2)(2γ − K)]   (85)

is a frequency shift and

D = (K²/4N) · 1/(γ − K/2)   (86)

is a diffusion coefficient. The frequency shift, which is negative, serves to tighten the distribution around the average frequency. The diffusion operator serves to damp the modes which are marginal at tree level.
The discrete spectrum arises from the zeroes of Λ1n(s). We can again approximate the shift in the tree level zero by using the on-shell condition. This gives

Λ1n(s) = 1 − (K/2) ∫ dω g(ω) / (s + in(ω + δω) + n²D) + δΛ1n(s),   (87)

where δΛ1n(s) collects the remaining O(1/N) pieces of βn, which involve the factors 2γ − K/2, 2γ − K, and γ − K/2 ± inω.
We assume the shift will be small, which allows us to write the zero of Λ1n(s) as

s = s0 − δΛ1n(s0) / (dΛ1n/ds)|_{s=s0} + O(1/N²),   (88)

where s0 = −(γ − K/2) and δΛ1n(s) is the O(1/N) correction to Λ1n(s). This results in
s1 = −(γ − K/2) − (K²/4N) (6γ − K) / [(γ − K/2)(2γ − K)].   (89)

Away from criticality (K ≪ 2γ) or for large N, this correction is small.
We conclude this section by writing down an effective
equation of motion for the density function which incor-
porates the effect of fluctuations. Recall the first equa-
tion of the BBGKY hierarchy is
∂ρ(x,t)/∂t + (∂/∂θ)[ω ρ(x,t)] + K (∂/∂θ) ∫ f(θ′−θ) ρ(x,t) ρ(x′,t) dθ′dω′ = −K (∂/∂θ) ∫ f(θ′−θ) C(x;x′,t) dθ′dω′.   (90)
This equation has a term (the “collision” integral) involv-
ing the 2-point correlation function, C, on the right hand
side. The equation determining the tree level propaga-
tor (31) is the linearization of the first BBGKY equation
with C considered to be zero. The one loop correction
to this equation (78) incorporates the effect of the cor-
relations on the linearization. The diagrams in Figure 3
provide the linearization of the collision term, where C
is considered as a functional of ρ. Using our one loop
calculation, we can propose an effective density equation
at one loop order
∂ρ(θ,Ω,t)/∂t + (∂/∂θ)[Ω ρ(θ,Ω,t)] − D ∂²ρ(θ,Ω,t)/∂θ² + K (∂/∂θ) ∫ sin(θ′−θ) ρ(θ′,Ω′,t) ρ(θ,Ω,t) dθ′dω′
= −K2(ω) (∂/∂θ) ∫ sin(2θ′−2θ) ρ(θ′,Ω′,t) ρ(θ,Ω,t) dθ′dω′,   (91)

where Ω = ω + δω and D is given by Eq. (86). The field ρ(θ,Ω,t) is now defined in terms of the shifted frequency distribution

G(Ω) ≈ g(ω)(1 − dδω/dω).   (92)
The new coupling constant K2(ω) is O(1/N) and is due
solely to the fluctuations. It arises from the term β±2
in the equation for P̃1. In fact, given the structure of
the diagrams, it is clear that for O(1/Nn) there will be a
new coupling, Kn+1, which corresponds to a sin[(n+1)θ]
term. In the language of field theory, all odd couplings
are generated under renormalization.
The generation of higher order couplings is especially interesting in light of the results of Crawford and Davies concerning the scaling of the density η beyond the onset of synchronization, i.e. η − ρ0 ∼ (K − Kc)^β [31, 32].
Although our calculations pertain to the incoherent state,
the fact that the loop corrections generate higher order
couplings is a general feature of the bulk theory defined
by the action of Eq. (14) as well. Thus, we expect a
crossover from β = 1/2 to β = 1 behavior to occur as
N gets smaller. This is consistent with Ref. [31], wherein a crossover appeared as the rate constant became smaller than the externally applied diffusion. In our case, the
magnitude of the diffusion is governed by the distance to
criticality and the number of oscillators.
Our proposed effective equation (91) is not self-
consistent because we use the propagator to infer the
form of the mean field equation. Thus, we neglect non-
linear terms which may arise due to the loop corrections.
In addition, our calculation applies specifically to per-
turbations in the incoherent state. There are likely other
terms we are neglecting for both of these reasons. The
consistent approach would be to calculate the effective ac-
tion to one loop order and derive the equation for ρ from
that. This would involve essentially the same calculation
we have performed here, but for arbitrary ρ(θ, ω, t) (i.e.
we would need to solve (31) for the propagator in the
presence of an arbitrary mean).
V. NUMERICAL SIMULATIONS
We compare our analytical results to simulations of sin-
gle oscillator perturbations, since this provides a direct
measurement of the propagator per equation (71). We
perform simulations of N oscillators with f(θ) = sin θ.
We fix 2% of the oscillators at a specific angle (θ0 = 0)
and driving frequency (unless N = 10, in which case we
fix a single oscillator; the plots with N = 10 have been
rescaled to match the other data). The remaining oscilla-
tors are initially uniformly distributed over angle θ with
driving frequencies drawn from a Lorentz distribution.
We measure the real part of Z1(t). This measurement
allows us to observe the behavior of the modes which are
marginal at tree level.
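A minimal sketch of this protocol (our own reconstruction; the explicit Euler stepper, time step, and parameter choices below are ours, not the original study's):

```python
import numpy as np

def run_kuramoto(theta0, omega, K, T=100.0, dt=0.05):
    """Euler-integrate the all-to-all Kuramoto equations and record Z_1(t)."""
    theta = theta0.copy()
    Z1 = [np.exp(1j * theta).mean()]
    for _ in range(int(round(T / dt))):
        Z = np.exp(1j * theta).mean()
        # dtheta_j/dt = omega_j + (K/N) sum_k sin(theta_k - theta_j)
        #             = omega_j + K Im(Z exp(-i theta_j))
        theta += dt * (omega + K * np.imag(Z * np.exp(-1j * theta)))
        Z1.append(np.exp(1j * theta).mean())
    return np.array(Z1)

# protocol sketch: pin 2% of the oscillators at theta = 0 with frequency omega0,
# spread the rest uniformly with Lorentzian-drawn frequencies, record Re Z_1(t)
rng = np.random.default_rng(1)
N, gamma, K, omega0 = 500, 0.05, 0.03, 0.0
theta0 = np.concatenate([np.zeros(N // 50), rng.uniform(0, 2 * np.pi, N - N // 50)])
omega = np.concatenate([np.full(N // 50, omega0),
                        gamma * np.tan(np.pi * (rng.uniform(size=N - N // 50) - 0.5))])
Z1 = run_kuramoto(theta0, omega, K, T=50.0)
```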
Equation (73) gives the behavior of δZ1(t) with a single oscillator fixed at θ0 and ω0 at time t = 0. Recall that Z1 = 0 in the incoherent state, so that we expect δZ1 ≈ Z1. To tree level (with θ0 = 0),

Z1(t) = (1/N) · 1/(ω0² + (γ−K/2)²) · [(γ(γ−K/2) + ω0² − (K/2)iω0) e^{−iω0t} − (−iω0 + γ − K/2)(K/2) e^{−(γ−K/2)t}].   (93)

In other words, the initially fixed oscillator has phase θ0 − ω0t; no information is lost as the time evolution progresses. Incorporating the one loop computation gives

Z1(t) = (1/N) · 1/(ω0² + (γ−K/2)²) · [(γ(γ−K/2) + ω0² − (K/2)iω0) e^{−i(ω0+δω)t − Dt} − (−iω0 + γ − K/2)(K/2) e^{s1t}],   (94)

where we have ignored a term of amplitude O(1/N²); we are only considering the contributions coming from the poles described in the previous section. With the one loop corrections taken into account, we see that Z1(t) relaxes back to zero as t → ∞. In the simulations, we compute the real part of Z1(t) with θ0 = 0. This gives

Re(Z1(t)) = (1/N) · 1/(ω0² + (γ−K/2)²) · [(γ(γ−K/2) + ω0²) cos((ω0+δω)t) e^{−Dt} − (K/2)ω0 sin((ω0+δω)t) e^{−Dt} − (K/2)(γ−K/2) e^{s1t}].   (95)

At long times the e^{s1t} mode has died away and the decay is governed by the diffusive mode alone,

Re(Z1(t)) ≈ [(γ(γ−K/2) + ω0²) / (N(ω0² + (γ−K/2)²))] cos((ω0+δω)t) e^{−Dt}.   (96)

The special case of ω0 = 0 and θ0 = 0 gives:

Z1(t) = [1/(N(γ−K/2))] (γ e^{−Dt} − (K/2) e^{s1t}).   (97)
The imaginary part vanishes so that Re(Z1(t)) = Z1(t).
We first compare our estimate of the diffusion coef-
ficient D given by (86) with the simulations. We plot
the measured decay constant D of Z1 compared to the
theoretical estimate of (96) for the long time behavior in
Figure 5. These data only include values of K = 0.3Kc
and K = 0.5Kc (Kc = 2γ = 0.1). Higher values of K did
not yield good fits due to the neglected contributions to
Z1. These decay constants are obtained via fitting the
time evolution of Z1 to an exponential for t > 200s. In
both cases they behave as 1/N for large N as predicted.
There is a consistent discrepancy likely due to round-
ing error after simulating for such a long period of time
(≈ 30000 time steps). This error appears as a small de-
gree of noise which further damps the response, hence the
decay constants appear slightly larger in Figure 5. This
effect can be seen in Figures 6 and 7 as well. For large
times, the simulation data consistently fall slightly under
FIG. 5: The large time (> 200 s) decay constants with zero driving frequency for the perturbed oscillator. Lines are the predictions given by Eq. (86). Solid line and circles represent K = 0.03. Dashed line and boxes represent K = 0.05.
the analytic prediction. Similarly, the data is noisier at
large times.
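The exponential fit used for these decay constants can be sketched as follows. We build a synthetic signal with the two-mode shape of the ω0 = 0 response (a fast tree-level mode plus a slow diffusive mode; the numerical value used for D here is an arbitrary illustration, not the prediction of Eq. (86)) and recover the slow decay constant from a least-squares fit of the log of the signal for t > 200 s:

```python
import numpy as np

def fit_decay_constant(t, y):
    """Fit y ~ A exp(-lam t) by least squares on log y; return lam."""
    slope, _intercept = np.polyfit(t, np.log(y), 1)
    return -slope

gamma, K, N = 0.05, 0.03, 100     # K = 0.3 Kc with Kc = 2 gamma = 0.1
s1 = -(gamma - K / 2.0)           # tree-level discrete mode, -0.035 here
D_true = 2.0e-3                   # hypothetical O(1/N) diffusion rate (illustrative only)
t = np.arange(200.0, 1500.0, 1.0)
Z1 = (gamma * np.exp(-D_true * t) - (K / 2.0) * np.exp(s1 * t)) / (N * (gamma - K / 2.0))
D_fit = fit_decay_constant(t, Z1)
```

By t = 200 s the fast mode has decayed by roughly e^{−7}, so the fitted constant is the diffusive rate; in the simulations this is the quantity compared against the 1/N prediction.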
Figures 6 and 7 show the evolution of Z1(t) over time
along with the analytical predictions and the tree level
result for K = 0.3Kc and K = 0.5Kc respectively. For
K = 0.3Kc, the prediction works quite well, with perhaps
the beginning of a systematic deviation appearing at N = 10 and N = 50 (there is a slight initial overshoot followed
by an undershoot at larger times). This same deviation
is more pronounced for K = 0.5Kc, although the data
follow the prediction quite well nonetheless.
Consistent with our expectations from the loop expan-
sion, as we move closer to criticality, i.e. the onset of syn-
chronization, the results for K = 0.7Kc and K = 0.9Kc
do not fare as well. Figure 8 demonstrates a marked devi-
ation from the prediction. We have not shown analytical
results for the lower values of N because the deviation
is so severe. The same holds true for all the results for
K = 0.9Kc, so that we have just plotted the simula-
tion data in Figure 9. The general trend of approaching
the mean field result still holds. The primary feature to
take from these plots is that the fluctuations increase the
decay constant. The closer to criticality, the more impor-
tant the fluctuations and the faster the decay, hence the
systematic undershoot which grows as one nears critical-
ity. The fastest relaxation to the incoherent state appears
at high K for a given N and at low N for a given K, ei-
ther limit results in increased effects from fluctuations.
It would be necessary to carry the loop expansion to two
or more loops in order to obtain good matches with these
data.
In Figures 10 and 11, we plot the time evolution of
Z1(t) given that the favored oscillator has a driving fre-
quency of ω0 = 0.05. Note first that Z1(t) approaches
the tree level calculation as N → ∞. The amplitude
of the oscillation also shows the same deviation as the
ω0 = 0 data, namely that of a slight initial overshoot of
the one loop prediction followed by an undershoot. In
FIG. 6: Z1(t) vs. t for various values of N and K = 0.3Kc.
Each graph shows N = {10, 50, 100, 500, 1000}. Note that as
N → ∞ the curve approaches the tree level value. From top
to bottom: Black line represents tree level. X’s and violet line
represent N = 1000. Triangles and blue line represent N =
500. Diamonds and green line represent N = 100. Boxes and
red line represent N = 50. Circles and purple line represent
N = 10.
FIG. 7: Z1(t) vs. t for various values of N and K = 0.5Kc.
Each graph shows N = {10, 50, 100, 500, 1000}. Note that as
N → ∞ the curve approaches the tree level value. Symbols
as in Figure 6.
addition to this, we can see an increasing frequency shift as N decreases. The data, prediction, and mean field results
eventually become out of phase. For intermediate values
of N , one can see that the one loop correction follows
this shift, while for N = 10, the mean field, data, and
one loop results each have a different phase.
In the case of n > 2 it is easier to write down a complete analytical solution for the time evolution. With ω0 = 0, θ0 = 0 we have

Zn(t) = (1/N) e^{−inδω t − n²Dt}.   (98)
FIG. 8: Z1(t) vs. t for various values of N and K = 0.7Kc.
Each graph shows N = {10, 50, 100, 500, 1000}. Note that as
N → ∞ the curve approaches the tree level value. Symbols
as in Figure 6.
This is compared with a simulation result in Figure 12.
We see the same general trends as the previous graphs.
At large N , the simulation follows the prediction quite
well. For small N , the simulation seems consistently
higher than the one loop prediction with K = 0.3Kc.
For K = 0.5Kc, the prediction is again sufficiently singular that we have not plotted N = 10. The deviation is
already apparent for N = 50.
VI. DISCUSSION
Using techniques from field theory, we have produced
a theory which captures the fluctuations and correlations
of the Kuramoto model of coupled oscillators. Although
we have used the Kuramoto model as an example system,
the methodology is readily extendible to other systems of
coupled oscillators, even those which are not interacting
via all-to-all couplings. Moreover, the methodology can
be readily applied to any system which obeys a conti-
nuity equation. We derive an action that describes the
dynamics of the Kuramoto model. The path integral de-
fined by this action constitutes an ensemble average over
the configurations of the system, i.e. the phases and driv-
ing frequencies of the oscillators. Because the dynamics
FIG. 9: Z1(t) vs. t for various values of N and K = 0.9Kc.
Each graph shows N = {10, 50, 100, 500, 1000}. Note that as
N → ∞ the curve approaches the tree level value. Symbols
as in Figure 6.
of the model are deterministic, this is equivalent to an
ensemble average over initial phases and driving frequen-
cies. Using the loop expansion, we can compute moments
of the oscillator density function perturbatively with the
inverse system size as an expansion parameter. However,
it is important to point out that the loop expansion is
equivalent to an expansion in 1/N only because of the
all-to-all coupling. A local coupling will produce fluctu-
ations which do not vanish in the thermodynamic limit.
Our previous work in this direction developed a mo-
ment hierarchy analogous to the BBGKY hierarchy in
plasma physics. This paper fully encompasses that earlier
work. The equations of motion for the multi-oscillator
density functions derivable from the action are in fact the
equations of that BBGKY hierarchy. The calculation in
Ref. [17] is in the present context the tree level calculation
of the 2-point correlation function, given by the Feyn-
man graph in Figure 2a. With the BBGKY hierarchy,
the calculational approach involves arbitrary truncation
at some order, with no a priori knowledge of how this
approximation is related to the system size, N . Here we
show that this approximation is entirely equivalent to the
loop expansion approximation. Truncating the hierarchy
at the nth moment is equivalent to truncating the loop
expansion at the (n− l)th loop for the lth moment. The
one loop calculation is performed in the BBGKY context
by considering the linear response in the presence of the
2-point correlation function. This would produce a more
roundabout manner of arriving at our one loop linear
response. One should also compare our one loop calcula-
tion with the Direct-Interaction-Approximation of fluid
dynamics; the path integral approach in that context is
the Martin-Siggia-Rose formalism [27]. Another possible
equivalent means of approaching this problem is through the Ito calculus, treating the density as a stochastic variable and developing a stochastic differential equation for η.
An important aspect of our theory is that it is di-
FIG. 10: As the previous figures, but with K = 0.3Kc and
ω0 = 0.05. Solid line represents tree level. Dashed blue line
represents the one loop calculation. Circles represent the sim-
ulation data.
FIG. 11: As the previous figures, but with K = 0.5Kc and
ω0 = 0.05. Symbols as in Figure 10.
FIG. 12: δZ3(t) vs. t for K = 0.3Kc (top) and K = 0.5Kc
(bottom). Symbols as in Figure 6.
rectly related to a Markov process derivable from the
Kuramoto equation. One can employ the standard Doi-
Peliti method for deriving an action from a Markov pro-
cess to arrive at the same theory. Although the Ku-
ramoto model is deterministic, the probability distribu-
tion evolves in a manner indistinguishable from a funda-
mentally random process. The stochasticity of the effec-
tive Markov process is due to the distribution of phases
and driving frequencies. In other words, it is a state-
ment about information available to us about the state
of the system. The incoherent state is a state of high en-
tropy. The single oscillator perturbation is one in which
we have gained a small amount of information about the
system and we ask a question concerning our knowledge
about future states. In the mean field limit for the sin-
gle oscillator perturbation, we always know where to find
the perturbed oscillator given a prescription of its initial
state. In the finite case, our ignorance of the positions
and driving frequencies of the other oscillators makes a
determination of its future location difficult. Eventually,
we lose all ability to locate the perturbed oscillator as it
interacts with the “heat bath” of the population. Fur-
thermore, this result should be time reversal invariant.
Just as we have no way of determining with accuracy
where to find the oscillator in the future, likewise we
have no means of determining where it has been at some
time in the past. To prove this statement in the con-
text of our theory would require an analysis of the “time
reversed” theory, obtained essentially by switching the
roles of ϕ̃ and ϕ. The relevant propagator for this time
reversed theory will be the solution of the linearization
of the adjoint of the mean field equation. Accordingly,
this adjoint theory will have loop corrections which will
damp the time reversed propagator as well.
It is important to point out that our formulation ac-
counts for the local stability of the incoherent state
ρ(θ, ω, t) = g(ω)/2π to linear perturbations along with
demonstrating that the order parameters Zn approach zero. In mean field theory, there is the possibility of quasiperiodic oscillations in which Zn → 0, i.e. the modes de-phase, while the incoherent state remains marginally stable and information about the initial state is retained. Our work shows that in a finite size system this does not happen; the incoherent state is linearly stable.
We have considered exclusively the case of fluctua-
tions about the incoherent state below the critical point.
Above criticality, a fraction of the population synchro-
nizes. In this case, to analyze the fluctuations one may
need to employ a “low temperature” expansion in con-
trast to our “high temperature” treatment. In essence,
one separates the populations into locked and unlocked
oscillators and derives a perturbation expansion from the
locked action. At criticality, each term in the loop expan-
sion diverges. This is an indication that fluctuations at
all scales become relevant near the transition and thus
a renormalization group approach is suggested. Our for-
malism provides a natural basis for this approach.
In summary, we have provided a method for deriving
the statistics of theories defined via a Klimontovich, or
continuity, equation for a number density. This method
produces a consistent means for approximating arbitrary
multi-point functions. In the case of all-to-all coupling,
this approximation becomes a system size expansion. We
have demonstrated further that the system size correc-
tions are sufficient to render the incoherent state of the
Kuramoto model stable to perturbations.
ACKNOWLEDGMENTS
This research was supported by the Intramural Re-
search Program of NIH/NIDDK.
APPENDIX A: ONE LOOP CALCULATION OF THE PROPAGATOR
The loop correction Γ1a applied to P is given by

∫ dφ dν dt″ Γ1a(θ,ω;φ,ν;t−t″) P(φ,ν,t″;θ′,ω′;t′) =   (A1)
NK² ∫ dθ1dω1 dθ′1dω′1 dθ′2dω′2 dt1 [f(θ′2−θ) {P0(θ′2,ω′2,t; θ′1,ω′1,t1) P0(θ,ω,t; θ1,ω1,t1) + P0(θ′2,ω′2,t; θ1,ω1,t1) P0(θ,ω,t; θ′1,ω′1,t1)}]
× [f(θ′1−θ1) {ρ(θ′1,ω′1,t1) P(θ1,ω1,t1; θ′,ω′,t′) + ρ(θ1,ω1,t1) P(θ′1,ω′1,t1; θ′,ω′,t′)}]
This term arises from one vertex with a single incoming line and two outgoing lines and one vertex with two incoming
lines and a single outgoing line, hence the product of two tree level propagators, P0. We can represent Γ1a in
Fourier/Laplace space as
Γ1a(n,ω,ν,t−t′) = 2NK²(2π)² Σ_m ∫ dω′1 dω′2 g(ω′1) [   (A2)
n(m+n) f(−m) P0(−m,ω′2,t; ω′1,t′) P0(m+n,ω,t; ν,t′)
+ (−nm) f(m) P0(−m,ω′2,t; ω′1,t′) P0(m+n,ω,t; ν,t′)
+ (−mn) f(m+n) P0(−m,ω′2,t; ν,t′) P0(m+n,ω,t; ω′1,t′)
+ n(m+n) f(−n−m) P0(−m,ω′2,t; ν,t′) P0(m+n,ω,t; ω′1,t′) ]
If f(θ) is odd then f(m) = −f(−m). Therefore,
Γ1a(n, ω, ν, t − t′) = Σm ∫ dω′1 dω′2/(2π)² n(2m+n) ×   (A3)
[ f(−m) g(ω′1) P⁰(−m, ω′2, t; ω′1, t′) P⁰(m+n, ω, t; ν, t′)
+ f(−n−m) g(ω′1) P⁰(−m, ω′2, t; ν, t′) P⁰(m+n, ω, t; ω′1, t′) ]
In this form it is easy to see the different channels which appear in the correction.
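The four-to-two-term reduction used here is a short algebra check: with f(m) = −f(−m), the pair n(m+n)f(−m) − nm f(m) collapses to n(2m+n)f(−m), and the f(m+n)/f(−n−m) pair collapses the same way. A quick numerical sanity check, using arbitrary complex stand-ins for the propagator products (the placeholder values below are mine, not from the model):

```python
import random

rng = random.Random(0)

def f(k):
    # any odd function of the integer index works for this check
    return 1j * (k + 0.3 * k ** 3)

for _ in range(100):
    n, m = rng.randint(-5, 5), rng.randint(-5, 5)
    X = complex(rng.random(), rng.random())  # stands for P0(-m,..)P0(m+n,..; nu)
    Y = complex(rng.random(), rng.random())  # stands for the other product
    four_terms = (n * (m + n) * f(-m) * X
                  - n * m * f(m) * X
                  - m * n * f(m + n) * Y
                  + n * (m + n) * f(-n - m) * Y)
    two_terms = n * (2 * m + n) * f(-m) * X + n * (2 * m + n) * f(-n - m) * Y
    assert abs(four_terms - two_terms) < 1e-9
ok = True
```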
Evaluating this expression using the tree level propagator (40) gives us
Γ̃1a(n, ω, ν, s) = Σm ∫ dω′1 dω′2/(2π)² n(2m+n) ×   (A4)
{ f(−m) [imKf(m)] Res[1/Λ−m(s1)]|s1=sn N P̃⁰(m+n, ω; ν, s − sn)
+ f(−m−n) Res[1/Λ−m(s1)]|s1=sn · 1/[(sn − imν)(s − sn + i(m+n)ω) Λm+n(s − sn)]
+ f(−m−n) [1/Λ−m(imν)] · 1/[(s − imν + i(m+n)ω) Λm+n(s − imν)] }
The diagram Γ1c is given by
∫ dφ dν dt′ Γ1c(θ, ω; φ, ν; t − t′) P(φ, ν; θ′, ω′; t′) =   (A6)
(1/(2π)²) ∫ dθ′3 dω′3 Πi dθ′i dω′i dθi dωi dti f(θ′3 − θ) f(θ′1 − θ1) g(ω1) g(ω′1)
× {P0(θ′3, ω′3, t; θ2, ω2, t2) P0(θ, ω, t; θ1, ω1, t1) + P0(θ′3, ω′3, t; θ1, ω1, t1) P0(θ, ω, t; θ2, ω2, t2)}
× f(θ′2 − θ2) {P0(θ′2, ω′2, t2; θ′1, ω′1, t1) P(θ2, ω2, t2; θ′, ω′, t′)
+ P(θ′2, ω′2, t2; θ′, ω′, t′) P0(θ2, ω2, t2; θ′1, ω′1, t1)}   (A7)
In Fourier space, we can write:
Γ1c(n, ω, ν, t − t′) = 2N²K³(2π)³ Σm ∫ dω′3 dω1 dω′1 dt1 g(ω1) g(ω′1) ×
{ ∫ dω′2 (imn)(m+n) f(−m−n) f(m) f(−m) P(n+m, ω′3, t; ν, t′) P(−m, ω, t; ω1, t1) P(m, ω′2, t′; ω′1, t1)
+ ∫ dω′2 inm(m+n) f(m) f(m) f(−m) P(−m, ω′3, t; ω1, t1) P(n+m, ω, t; ν, t′) P(m, ω′2, t′; ω′1, t1)
+ ∫ dω2 inm(m+n) f(−n−m) f(m) f(−n) P(n+m, ω′3, t; ω2, t′) P(−m, ω, t; ω1, t1) P(m, ω2, t′; ω′1, t1)
+ ∫ dω2 inm(m+n) f(m) f(m) f(−n) P(−m, ω′3, t; ω1, t1) P(n+m, ω, t; ω2, t′) P(m, ω2, t′; ω′1, t1) }
and taking the Laplace transform:
Γ̃1c(n, ω, ν, s) = 2N²K³(2π)³ Σm ∫ dω′3 dω1 dω′1 ds1 g(ω1) g(ω′1) ×   (A9)
{ ∫ dω′2 (imn)(m+n) f(−m−n) f(m) f(−m) P̃(n+m, ω′3; ν, s − s1) P̃(−m, ω; ω1, s1) P̃(m, ω′2; ω′1, −s1)
+ ∫ dω′2 inm(m+n) f(m) f(m) f(−m) P̃(−m, ω′3; ω1, s1) P̃(n+m, ω; ν, s − s1) P̃(m, ω′2; ω′1, −s1)
+ ∫ dω2 inm(m+n) f(−n−m) f(m) f(−n) P̃(n+m, ω′3; ω2, s − s1) P̃(−m, ω; ω1, s1) P̃(m, ω2; ω′1, −s1)
+ ∫ dω2 inm(m+n) f(m) f(m) f(−n) P̃(−m, ω′3; ω1, s1) P̃(n+m, ω; ω2, s − s1) P̃(m, ω2; ω′1, −s1) }
where the contour for s1 lies between 0 and −Re(sn), with 0 < s < −Re(sn). Performing the integrals, we have:
Γ̃1c(n, ω, ν, s) = Σm (imn)(m+n) ×
{ f(−m−n) f(m) f(−m) [imKf(−m)] / [Λn+m(s − s1)(s − s1 + i(m+n)ν)(s1 − imω) Λ−m(s1) Λm(−s1)]
+ f(m) f(m) f(−m) [imKf(m)] [imKf(−m)] (2πN) P̃(n+m, ω; ν, s − s1) / [Λ−m(s1) Λm(−s1)]
+ ∫ dω2 f(−n−m) f(m) f(−n) g(ω2) / [Λn+m(s − s1)(s − s1 + i(m+n)ω2)(s1 − imω) Λ−m(s1)(−s1 + imω2) Λm(−s1)]
+ ∫ dω2 f(m) f(m) f(−n) [imKf(m)] (2πN) P̃(n+m, ω; ω2, s − s1) g(ω2) / [Λ−m(s1)(−s1 + imω2) Λm(−s1)] }
(A10)
Finally, the diagram Γ1d is given by
∫ dφ dν dt′ Γ1d(θ, ω; φ, ν; t − t′) P(φ, ν; θ′, ω′; t′) =   (A11)
(1/(2π)²) ∫ dθ′3 dω′3 Πi dθ′i dω′i dθi dωi [f(θ′3 − θ) g(ω1) g(ω′1)
× {P0(θ′3, ω′3, t; θ2, ω2, t2) P0(θ, ω, t; θ1, ω1, t0) + P0(θ′3, ω′3, t; θ1, ω1, t0) P0(θ, ω, t; θ2, ω2, t2)}
× f(θ′2 − θ2) {P0(θ′2, ω′2, t2; θ′1, ω′1, t0) P(θ2, ω2, t2; θ′, ω′, t′)
+ P(θ′2, ω′2, t2; θ′, ω′, t′) P0(θ2, ω2, t2; θ′1, ω′1, t0)}]   (A12)
Using the facts that ∫ dω′ dθ′ P(θ, ω, t; θ′, ω′, t′) g(ω′) = g(ω)/N and ∫ dθ f(θ) = 0, we have
∫ dφ dν dt′ Γ1d(θ, ω; φ, ν; t − t′) P(φ, ν; θ′, ω′; t′) =   (A13)
(1/(2π)²) ∫ dθ′3 dω′3 Πi dθ′i dω′i dθi dωi [f(θ′3 − θ) g(ω1) g(ω′1) P0(θ′3, ω′3, t; θ2, ω2, t2)
× f(θ′2 − θ2) P(θ′2, ω′2, t2; θ′, ω′, t′)]   (A14)
In Fourier-Laplace space we have
Γ̃1d(n; ω, ν; s) = inKf(−n) g(ω) / Λn(s)   (A15)
APPENDIX B: f(θ) = sin θ AND g(ω) LORENTZ
In order both to simplify the correction and to provide a concrete example, we specialize to the case where g(ω) is a Lorentz distribution and f(θ) = sin θ. The coupling f(θ) = sin θ is the traditional choice for the Kuramoto model (and has the advantage of being bounded in Fourier space, so that we avoid "ultraviolet" singularities), and using a Lorentz frequency distribution yields analytical results.
The Lorentz frequency distribution is given by
g(ω) = γ / [π(γ² + ω²)].
From this we can calculate
Λ±1(s) = (s + γ − K/2) / (s + γ),
and Λn(s) = 1 for n ≠ ±1. The residue of the function 1/Λ±1(s) is
Res[1/Λ−m(s1)]|s1=sn = K/2,
and the pole is at s±1 = −(γ − K/2). This provides a critical coupling
Kc = 2γ (B4)
above which the system begins to synchronize. The in-
coherent state is reached when K < Kc, which gives
s± < 0. From f(θ) = sin θ we also have f(±1) = ∓i/2.
In this case, diagram a evaluates to
Γ̃1a(n, ω, ν, s) = Σm ∫ dω′1 dω′2/(2π)² n(2m+n) ×   (B5)
{ f(−m) [imKf(m)] (K/2) N P̃⁰(m+n, ω; ν, s + γ − K/2)
+ f(−m−n) (K/2) / [(K/2 − γ − imν)(s + γ − K/2 + i(m+n)ω)] · (s + 2γ − K/2)/(s + 2γ − K)
+ f(−m−n) [1/Λ−m(imν)] / (s − imν + i(m+n)ω) · (s + γ − imν)/(s + γ − K/2 − imν) }
There is no contribution for n = 0 which we expect
from probability conservation. The terms proportional
to n(2m + n)f(−n − m) will always evaluate to 0, be-
cause f(n ≠ ±1) = 0. Since the tree level propagator
contains such a term as well, we have the simplification
Γ̃1a(n, ω, ν, s) = Σm=±1 n(2m+n) (imK²/8) δ(ω − ν) / [s + γ − K/2 + i(m+n)ω]   (B7)
For diagram c, we see that we can immediately ignore
the third term because nmf(−n)f(−m− n) is always 0.
We also see that the first term is only non-zero for n =
±2, and the last term only for n = ±1. After performing
the s1 integration, this leaves us with
Γ̃1c(n, ω, ν, s) =
n(m+n) δ(ω − ν) / [(s + γ − K/2 + i(n+m)ω)(2γ − K)]
+ δn,±1 (2γ − K/2) / [(s + γ − K/2 + 2inω)(γ − K/2 + inω)(2γ − K)]
+ δn,±2 (−g(ω)) [i(ν − ω)(γ − K/2)] / [(iω + γ − K/2)(iω + γ)]
× [ (s + γ − K/2)/(s + 2γ − K) + (s + γ − K/2)/(2γ − K) + (−γ + K/2)/(2γ − K)
+ (s + γ − K/2)(s + γ − K/2)/((s + 2γ − K/2)(s + 2γ − K)) ]   (B8)
Γ1d is given simply by
Γ̃1d(±1; ω, ν; s) = − (K g(ω)/2) (s + γ)/(s + γ − K/2),
Γ̃1d(n ≠ ±1; ω, ν; s) = 0 (B9)
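The n = ±1 value of Γ̃1d follows by pure arithmetic from the Appendix A expression, whose visible pieces read inKf(−n)g(ω)/Λn(s) (my reading of (A15)): the text gives f(±1) = ∓i/2 and Λn = 1 for n ≠ ±1, and with the Lorentzian closure Λ±1(s) = (s + γ − K/2)/(s + γ) (an assumption consistent with the quoted pole s±1 = −(γ − K/2)) the combination reduces to −(K/2) g(ω)(s + γ)/(s + γ − K/2):

```python
def f(k):
    # Fourier coefficients of sin(theta): f(+1) = -i/2, f(-1) = +i/2, else 0
    return {1: -0.5j, -1: 0.5j}.get(k, 0.0)

gamma, K, g_omega = 1.0, 1.3, 0.2   # sample parameter values
s = complex(0.5, 0.3)

def Lam(n, s):
    # assumed Lorentzian closure; Lambda_n = 1 for n != +-1
    return (s + gamma - K / 2) / (s + gamma) if n in (1, -1) else 1.0

def Gamma_1d(n, s):
    return 1j * n * K * f(-n) * g_omega / Lam(n, s)

closed_form = -(K / 2) * g_omega * (s + gamma) / (s + gamma - K / 2)
diff_plus = abs(Gamma_1d(1, s) - closed_form)
diff_minus = abs(Gamma_1d(-1, s) - closed_form)
zero_other = abs(Gamma_1d(3, s))
```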
APPENDIX C: EQUIVALENT MARKOV PROCESS
The action (15) can be derived by applying the Doi-
Peliti method to a Markov process equivalent to the Ku-
ramoto dynamics. Consider a two dimensional lattice L,
periodic in one dimension, with lattice constants aθ in
the periodic direction and aω in the other. (The radius
of the eventual cylinder is R.) The indices i and j will
be used for the frequency and periodic domains, respec-
tively. The oscillators obey an equation of the form:
θ̇i = v(θ⃗, ω⃗)   (C1)
The indices on θ⃗ and ω⃗ run over the lattice points of the periodic and frequency variables. The state of the system
is described by the number of oscillators ni,j at each site.
Given this, the fraction of oscillators found on the lattice
sites is governed by the following Master equation:
dP(n⃗, t)/dt = Σi,j∈L [ −(vi,j/aθ) ni,j P(n⃗, t) + (vi,j−1/aθ)(ni,j−1 + 1) P(n⃗ʲ, t) ]   (C2)
where the indices of the vector n⃗ run over the lattice points L, and n⃗ʲ (note the superscript) is equal to n⃗ except for the jth and (j−1)st components. At those points we have nʲi,j = ni,j − 1 and nʲi,j−1 = ni,j−1 + 1.
The first term on the RHS represents the outward flux
of oscillators from the state with nj oscillators at each
periodic lattice point while the second term is the inward
flux due to oscillators “hopping” from j − 1 to j. There
is no flux in the other direction (ω); this lattice variable
simply serves to label each oscillator by its fundamental
frequency.
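The one-sided hopping picture can be checked with a tiny Monte Carlo: an oscillator with velocity v hops j → j+1 at rate v/aθ, so its mean angular displacement after time t should be v·t, with fluctuations that shrink as the ensemble grows (this is exactly the finite-size noise the loop expansion organizes). A minimal sketch, my own illustration using a constant velocity as a stand-in for the coupled vij:

```python
import random

rng = random.Random(42)
a_theta = 0.1     # lattice constant in the periodic direction
v = 2.0           # constant drift velocity (stand-in for v_ij)
t_final = 5.0
n_osc = 2000

rate = v / a_theta                 # Poisson hopping rate per oscillator
displacements = []
for _ in range(n_osc):
    t, hops = 0.0, 0
    while True:
        # exponential waiting time between hops
        t += rng.expovariate(rate)
        if t > t_final:
            break
        hops += 1
    displacements.append(hops * a_theta)

mean_disp = sum(displacements) / n_osc
expected = v * t_final             # deterministic drift of the master equation
```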
Consider a generalization of the Kuramoto model of
the form:
θ̇i = ωi + (K/N) Σj f(θj − θi)   (C3)
N is the total number of oscillators and we impose f(0) =
0. The velocity in equation (C1) now has the form:
vij = i·aω + (K/N) Σi′,j′ f([j′ − j]aθ) ni′,j′   (C4)
In the limit aω → 0 we have i·aω = ω. Similarly, j·aθ → θ. The factor of ni′,j′ has been added because the sum must cover all oscillators, and this factor describes the number at each site. We also sum over all frequency sites i′. The master equation (C2) now takes the form:
dP(n⃗, t)/dt = Σi,j { −(1/aθ)[ i·aω + (K/N) Σi′,j′ f([j′ − j]aθ) ni′,j′ ] ni,j P(n⃗, t)
+ (1/aθ)[ i·aω + (K/N) Σi′,j′ f([j′ − j + 1]aθ) ni′,j′ ] (ni,j−1 + 1) P(n⃗ʲ, t) }
= −HP(n⃗, t) (C5)
The matrix H is the Hamiltonian. From this point, one
can develop an operator representation as in Doi-Peliti.
Using coherent states and taking the continuum and ther-
modynamic limits results in the action (15), after “shift-
ing” the field ϕ̃.
[1] A. T. Winfree, Journal of Theoretical Biology 16, 15
(1967).
[2] C. Liu, D. Weaver, S. Strogatz, and S. Reppert, Cell 91,
855 (1987).
[3] D. Golomb and D. Hansel, Neural Computation 12, 1095
(2000).
[4] G. B. Ermentrout and J. Rinzel, Am. J. Physiol 246,
R102 (1984).
[5] G. B. Ermentrout, Journal of Mathematical Biology 29,
571 (1991).
[6] T. J. Walker, Science 166, 891 (1969).
[7] J. Pantaleone, Physical Review D 58 (1998).
[8] S. Y. Kourtchatov, V. V. Likhanskii, and A. P. Napartovich, Physical Review A 52 (1995).
[9] K. Wiesenfeld, P. Colet, and S. H. Strogatz, Physical
Review Letters 76 (1996).
[10] Y. Kuramoto, Chemical Oscillations, Waves, and Turbu-
lence (Springer-Verlag, 1984).
[11] S. H. Strogatz, Physica D 143, 1 (2000).
[12] S. H. Strogatz, R. E. Mirollo, and P. C. Matthews, Phys-
ical Review Letters 68, 2730 (1992).
[13] R. E. Mirollo and S. H. Strogatz, Physica D 205, 249
(2005).
[14] R. E. Mirollo and S. H. Strogatz, The spectrum of the
partially locked state for the kuramoto model, eprint
arXiv:nlin/0702043.
[15] H. Daido, Journal of Statistical Physics 60, 753 (1990).
[16] H. Daido, Progress of Theoretical Physics 75, 1460
(1986).
[17] E. J. Hildebrand, M. A. Buice, and C. C. Chow, Physical
Review Letters 98 (2007).
[18] L. Peliti, Journal de Physique 46, 1469 (1985).
[19] M. Doi, Journal of Physics A: Mathematical and General
9, 1465 (1976).
[20] M. Doi, Journal of Physics A: Mathematical and General
9, 1479 (1976).
[21] H.-K. Janssen and U. C. Tauber, Annals of Physics 315,
147 (2005).
[22] J. Zinn-Justin, Quantum Field Theory and Critical Phe-
nomena (Oxford Science Publications, 2002), 4th ed.
[23] U. C. Tauber, Lecture Notes in Physics 716, 295 (2006).
[24] S. H. Strogatz and R. E. Mirollo, Journal of Statistical
Physics 63, 613 (1991).
[25] S. Ichimaru, Basic principles of Plasma Physics: A Sta-
tistical Approach (W.A. Benjamin Advanced Book Pro-
gram, 1973).
[26] D. R. Nicholson, Introduction to Plasma Theory (Krieger
Publishing Company, 1992).
[27] P. C. Martin, E. D. Siggia, and H. A. Rose, Physical
Review A 8, 423 (1973).
[28] A. V. Rangan and D. Cai, Physical Review Letters 96
(2006).
[29] J. M. Cornwall, R. Jackiw, and E. Tomboulis, Physical
Review D 10, 2428 (1974).
[30] R. E. Mirollo and S. H. Strogatz, Journal of Statistical
Physics 60, 245 (1990).
[31] J. D. Crawford, Physical Review Letters 74 (1995).
[32] J. D. Crawford and K. Davies, Physica D 125, 1 (1999).
0704.1651 | Route to Lambda in conformally coupled phantom cosmology

Route to Lambda in conformally coupled phantom cosmology
Orest Hrycyna∗
Department of Theoretical Physics, Faculty of Philosophy,
The John Paul II Catholic University of Lublin, Al. Rac lawickie 14, 20-950 Lublin, Poland and
Astronomical Observatory, Jagiellonian University, Orla 171, 30-244 Kraków, Poland
Marek Szyd lowski†
Astronomical Observatory, Jagiellonian University, Orla 171, 30-244 Kraków, Poland and
Mark Kac Complex Systems Research Centre, Jagiellonian University, Reymonta 4, 30-059 Kraków, Poland
In this letter we investigate acceleration in the flat cosmological model with a conformally coupled
phantom field and we show that acceleration is its generic feature. We reduce the dynamics of the
model to a 3-dimensional dynamical system and analyze it on an invariant 2-dimensional submanifold.
Then the concordance FRW model with the cosmological constant Λ is a global attractor situated
on a 2-dimensional invariant space. We also study the behaviour near this attractor, which can
be approximated by the dynamics of the linearized part of the system. We demonstrate that
trajectories of the conformally coupled phantom scalar field with a simple quadratic potential cross
the cosmological constant barrier infinitely many times in the phase space. The universal behaviour
of the scalar field and its potential is also calculated. We conclude that the phantom scalar field
conformally coupled to gravity gives a natural dynamical mechanism of concentration of the equation
of state coefficient around the magical value weff = −1. We demonstrate the route to Λ through
infinitely many crossings of the weff = −1 phantom divide.
PACS numbers: 98.80.Bp, 98.80.Cq, 11.15.Ex
At present the scalar fields play a crucial role in modern cosmology. In an inflationary scenario they generate an
exponential rate of evolution of the universe as well as density fluctuations due to vacuum energy. The Lagrangian
for a phantom scalar field on the background of the Friedmann-Robertson-Walker (FRW) universe is assumed in the form
Lψ = (1/2)[g^{μν}∂μψ ∂νψ + ξRψ² − 2U(ψ)], (1)
where gμν is the metric of the spacetime manifold, ψ = ψ(t), t is the cosmological time, R = R(g) is the Ricci scalar
for the spacetime metric g, ξ is a coupling constant which is zero for a scalar field minimally coupled to gravity
and 1/6 for a conformally coupled scalar field, and U(ψ) is a potential of the scalar field.
The minimally coupled slowly evolving scalar fields with a potential function U(ψ) are good candidates for a
description of dark energy. In this model, called quintessence [1, 2], the energy density and pressure from the scalar
field are ρψ = −1/2ψ̇2 +U(ψ), pψ = −1/2ψ̇2−U(ψ). From recent studies of observational constraints we obtain that
wψ ≡ pψ/ρψ < −0.55 [3]. This model has been also extended to the case of a complex scalar field [4, 5].
Observations of distant supernovae support the cosmological constant term which corresponds to the case ψ̇ ≃ 0.
Then we obtain that wψ = −1. But there emerge two problems in this context. Namely, the fine tuning and the cosmic
coincidence problems. The first problem comes from quantum field theory, where the vacuum expectation value
is 123 orders of magnitude larger than the observed value of 10⁻⁴⁷ GeV⁴. The lack of a fundamental mechanism
which sets the cosmological constant almost to zero is called the cosmological constant problem. The second problem
called “cosmic conundrum” is a question why the energy densities of both dark energy and dark matter are nearly
equal at the present epoch.
One of the solutions to this problem offers the idea of quintessence, which is a version of the time varying cosmological
constant conception. Quintessence solves the first problem through the decaying Λ term from the beginning of the
Universe to a small value observed at the present epoch. Also the ratio of energy density of this field to the matter
density increases slowly during the expansion of the Universe because the specific feature of this model is the variation
of the coefficient of the equation of state with respect to time. The quintessence models [2, 6] describe the dark energy
with the time varying equation of state for which wX > −1, but recently quintessence models have been extended to
the phantom quintessence models with wX < −1. In this class of models the weak energy condition is violated and
∗Electronic address: [email protected]
†Electronic address: [email protected]
http://arxiv.org/abs/0704.1651v3
mailto:[email protected]
mailto:[email protected]
such a theoretical possibility is realized by a scalar field with a switched sign in the kinetic term ψ̇2 → −ψ̇2 [7, 8, 9].
From theoretical point of view it is necessary to explore different evolutional scenarios for dark energy which provide
a simple and natural transition to wX = −1. The methods of dynamical systems with notion of attractor (a limit
set with an open inset) offers the possibility of description of transition trajectories to the regime with wX = −1.
Moreover they demonstrate whether this mechanism is generic.
Inflation and quintessence with a non-minimal coupling constant are studied in the context of the formulation of necessary
conditions for the acceleration of the universe [10] (see also [11, 12]). We can find two important arguments which
favour the choice of conformal coupling over ξ ≠ 1/6. The first is that the equation for a massless scalar field is conformally
invariant [13, 14]. The second argument is that if the scalar field satisfies the Klein-Gordon equation in curved space
then ψ does not violate the equivalence principle, and ξ is forced to assume the value 1/6 [15].
While recent astronomical observations support an equation of state parameter for dark energy close to
the constant value −1, they do not give a "corridor" around this value. Moreover, Alam et al. [16] pointed out that an evolving
state parameter is favoured over the constant wX = −1. The first step in the description of the dynamics of
dark energy seems to be the investigation of a system with evolving dark energy in a close neighbourhood of the
value wX = −1. For this aim we linearize the dynamical system at this critical point and then describe the system,
to a good approximation (following the Hartman-Grobman theorem [17]), by its linearized part.
Other dark energy models like the Chaplygin gas model [18, 19, 20], [9, and references therein] and the model with
tachyonic matter can also be interpreted in terms of a scalar field with some form of a potential function.
Recent applications of the Bayesian framework [21, 22, 23, 24, 25] of model selection to the broad class of cosmo-
logical models with acceleration indicate that the a posteriori probability for the ΛCDM model is 96%. Therefore the
explanation of why the current universe is so close to the ΛCDM model seems to be a major challenge for modern
theoretical cosmology.
In this letter we present the simplest mechanism of concentration around wX = −1, based on the influence of a
single scalar field conformally coupled to gravity acting in the radiation epoch. Phantom cosmology non-minimally
coupled to the Ricci scalar was explored in the context of superquintessence (wX < −1) by Faraoni [26, 27], where it
was pointed out that the superacceleration regime can be achieved by a conformally coupled scalar field, in contrast
to a minimally coupled scalar field.
Let us consider the flat FRW model which contains a negative-kinetic scalar field conformally coupled to gravity
(ξ = 1/6) (phantom) with the potential function U(ψ). For simplicity of presentation we assume U(ψ) ∝ ψ².
In this model the phantom scalar field is coupled to gravity via the term ξRψ2. We consider massive scalar fields
(for recent discussion of cosmological implications of massive and massless scalar fields see [28]). The dynamics of a
non-minimally coupled scalar field for some self-interacting potential U(ψ) and for an arbitrary ξ is equivalent to the
action of the phantom scalar field (which behaves like a perfect fluid) with energy density ρψ and pressure pψ [29]
ρψ = −(1/2)ψ̇² + U(ψ) − 3ξH²ψ² − 3ξH(ψ²)˙, (2)
pψ = −(1/2)ψ̇² − U(ψ) + ξ[2H(ψ²)˙ + (ψ²)¨] + ξ(2Ḣ + 3H²)ψ², (3)
where the conservation condition ρ̇ψ = −3H(ρψ + pψ) gives rise to the equation of motion for the field
ψ̈ + 3Hψ̇ + ξRψ − U′(ψ) = 0, (4)
where R = 6(Ḣ + 2H²) is the Ricci scalar.
Let us assume that both the homogeneous scalar field ψ(t) and the potential U(ψ) depend on time through the
scale factor, i.e.
ψ(t) = ψ(a(t)), U(ψ) = U(ψ(a)); (5)
then due to this simplified assumption the coefficient of the equation of state wψ is parameterized by the scale factor
wψ = wψ(a), pψ = wψ(a)ρψ(a), (6)
wψ = [ −(1/2)ψ′²H²a² − U(ψ) + ξ(2(ψ²)′H²a + (ψ²)¨) + ξ(2Ḣ + 3H²)ψ² ]
/ [ −(1/2)ψ′²H²a² + U(ψ) − 3ξH²ψ² − 3ξ(ψ²)′H²a ] (7)
where the prime denotes differentiation with respect to the scale factor.
We assume the flat model with the FRW geometry, i.e., the line element has the form
ds2 = −dt2 + a2(t)[dr2 + r2(dθ2 + sin2 θdϕ2)], (8)
where 0 ≤ ϕ ≤ 2π, 0 ≤ θ ≤ π and 0 ≤ r ≤ ∞ are comoving coordinates, t stands for the cosmological time. It is also
assumed that a source of gravity is the phantom scalar field ψ with the conformal coupling to gravity ξ = 1/6. The
dynamics is governed by the action
S = (1/2) ∫ d⁴x √−g [ mp²R + (g^{μν}ψμψν + (1/6)Rψ² − 2U(ψ)) ] (9)
where mp² = (8πG)⁻¹; for simplicity and without loss of generality we assume 4πG/3 = 1, and U(ψ) is the scalar field
potential
U(ψ) = (1/2)m²ψ². (10)
After dropping the full derivatives with respect to time, rescaling the phantom field ψ → φ = ψa and changing the time variable
to the conformal time dt = a dη, we obtain the energy conservation condition
a′² + φ′² − (1/2)m²a²φ² = ρr,0 (11)
where ρr,0 is a constant corresponding to the radiation in the model. The equations of motion are
a″ = (1/2)m²aφ²,   φ″ = (1/2)m²a²φ (12)
where a prime denotes differentiation with respect to the conformal time dt = a dη and m² > 0.
From the energy conservation condition we have
a′² = (ρr,0 + (1/2)m²a²φ²) / (1 + φ̇²) (13)
where φ̇ ≡ dφ/da, and now from the equations of motion (12) we receive
(ρr,0 + (1/2)m²a²φ²) φ̈ + (1/2)m²aφ (1 + φ̇²)(φφ̇ − a) = 0. (14)
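The conservation law (11) is preserved exactly by the equations of motion (12): d/dη[a′² + φ′² − ½m²a²φ²] = 2a′a″ + 2φ′φ″ − m²(aa′φ² + a²φφ′) vanishes identically when a″ = ½m²aφ² and φ″ = ½m²a²φ. The ½ factors are my reading of the garbled source, so the RK4 check below verifies the internal consistency of that reading rather than the original typesetting:

```python
# RK4 integration of a'' = (1/2) m^2 a phi^2,  phi'' = (1/2) m^2 a^2 phi
# (conformal time), monitoring E = a'^2 + phi'^2 - (1/2) m^2 a^2 phi^2.
# The 1/2 coefficients are an assumption (my reading of the source).
m = 1.0

def deriv(state):
    a, ap, phi, phip = state
    return (ap, 0.5 * m**2 * a * phi**2, phip, 0.5 * m**2 * a**2 * phi)

def rk4_step(state, h):
    def add(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2))
    k3 = deriv(add(state, k2, h / 2))
    k4 = deriv(add(state, k3, h))
    return tuple(s + h / 6 * (d1 + 2 * d2 + 2 * d3 + d4)
                 for s, d1, d2, d3, d4 in zip(state, k1, k2, k3, k4))

def energy(state):
    a, ap, phi, phip = state
    return ap**2 + phip**2 - 0.5 * m**2 * a**2 * phi**2

state = (1.0, 1.0, 0.3, 0.0)       # sample initial data
E0 = energy(state)
for _ in range(2000):
    state = rk4_step(state, 0.001)
E_drift = abs(energy(state) - E0)
```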
The effective equation of state parameter is
weff = (pφ + pr) / (ρφ + ρr), (15)
and for our model this parameter reduces to
weff = −(1/3) [φ′² + (1/2)m²a²φ² − ρr,0] / [(1/2)m²a²φ² + ρr,0 − φ′²] (16)
where a prime denotes differentiation with respect to the conformal time; finally, taking into account equation (13), we have
weff = −(1/3) [ (2φ̇² + 1)(1/2)m²a²φ² − ρr,0 ] / [ (1/2)m²a²φ² + ρr,0 ]. (17)
For m²a²φ² ≫ ρr,0 this equation reduces to
weff = −(1/3)(2φ̇² + 1), (18)
and it is clear that for any value of φ̇, weff is always negative.
To analyze equation (14) we reintroduce the original phantom field variable ψ = φ/a and use da/a = d ln a. Now equation
(14) reads
(ψ″ + ψ′) + [m²ψ/(2ρr,0 a⁻⁴ + m²ψ²)] (1 + (ψ′ + ψ)²)(ψ(ψ′ + ψ) − 1) = 0 (19)
where a prime now denotes differentiation with respect to the natural logarithm of the scale factor. Introducing
the new variables y = ψ′ and ρr = ρr,0 a⁻⁴ we can represent this equation as the autonomous dynamical system
ψ′ = y
y′ = −y − [m²ψ/(2ρr + m²ψ²)] (ψ(y + ψ) − 1)(1 + (y + ψ)²) (20)
ρ′r = −4ρr.
There are two critical points in the phase space (ψ, y, ρr), namely ψ = ±1, y = 0, ρr = 0. The linearization
matrix evaluated on y = 0, ρr = 0 reads
| 0 , 1 , 0 |
| (ψ²−1)(1+ψ²)/ψ² − 2(1+ψ²) − 2(ψ²−1) , −1 − (1+ψ²) − 2(ψ²−1) , (2/(m²ψ³))(ψ²−1)(1+ψ²) |
| 0 , 0 , −4 |
which at the critical points (y = 0, ρr = 0, ψ = ±1) becomes
| 0 , 1 , 0 |
| −4 , −3 , 0 |
| 0 , 0 , −4 |. (21)
The eigenvalues for this matrix are λ1,2 = (1/2)(−3 ± i√7) and λ3 = −4.
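These eigenvalues can be verified directly: the upper-left 2×2 block of the matrix in (21) has characteristic polynomial λ² + 3λ + 4, whose roots are (−3 ± i√7)/2, and the third eigenvalue −4 is read off the decoupled ρr row. A short check:

```python
import cmath

# upper-left block of the linearization at the critical points
a11, a12, a21, a22 = 0.0, 1.0, -4.0, -3.0
trace = a11 + a22            # = -3
det = a11 * a22 - a12 * a21  # = 4
disc = cmath.sqrt(trace**2 - 4 * det)
lam1 = (trace + disc) / 2
lam2 = (trace - disc) / 2
expected = complex(-3, 7 ** 0.5) / 2

# both roots satisfy lambda^2 + 3 lambda + 4 = 0
res1 = abs(lam1**2 + 3 * lam1 + 4)
res2 = abs(lam2**2 + 3 * lam2 + 4)
lam3 = -4.0                  # from the decoupled rho_r equation rho' = -4 rho
```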
To find a global phase portrait it is necessary to study the system in the neighbourhood of the critical points, which
correspond, from the physical point of view, to stationary states (or asymptotic solutions). Then the Hartman-Grobman
theorem guarantees that the linearized system at such a point is a good approximation of the nonlinear system. First,
we must note that ρr = 0 is an invariant submanifold of the 3-dimensional nonlinear system. It is also useful to
calculate the eigenvectors for each eigenvalue. We obtain the following eigenvectors
v1,2 = (1, λ1,2, 0)ᵀ,   v3 = (0, 0, 1)ᵀ. (22)
They are helpful in the construction of the exact solution of the linearized system
x⃗(t) = exp(tA) x⃗(0),
which, with ω ≡ √7/2, gives
x(t) = e^{−3t/2} [ (cos ωt + (3/(2ω)) sin ωt) x0 + (1/ω) sin ωt · y0 ],
y(t) = e^{−3t/2} [ −(4/ω) sin ωt · x0 + (cos ωt − (3/(2ω)) sin ωt) y0 ],
z(t) = e^{−4t} z0, (23)
where x = ψ − ψ0, y = ψ′ − ψ′0, z = ρr − ρ0, (x0, y0, z0) are the initial conditions, and we have substituted ln a = t.
If we consider the linearized system on the invariant stable submanifold z = 0, it is easy to find the exact solution. If
we return to the original variables ψ, ψ′, then ψ(ln a) is the solution of the linear equation
(ψ − ψ0)″ + 3(ψ − ψ0)′ + 4(ψ − ψ0) = 0, (24)
i. e.,
(ψ − ψ0) = C1 a^{−3/2} cos((√7/2) ln a) + C2 a^{−3/2} sin((√7/2) ln a), (25)
(φ − φ0) = C1 a^{−1/2} cos((√7/2) ln a) + C2 a^{−1/2} sin((√7/2) ln a). (26)
Because of the lack of alternatives to the mysterious cosmological constant [21, 22] we allow that the energy might vary
in time, following an assumed a priori parameterization of w(z). In the popular parameterizations [30, 31, 32, 33] there appears
a free function, which in most scenarios is a source of difficulties in constraining parameters by observations. However,
most parameterizations of the dark energy equation of state cannot reflect the real dynamics of cosmological models with
dark energy. The assumed form of w(z) can be incompatible with the w(z) obtained from the underlying dynamics
of the cosmological model. For example, some of the parameters can be determined from the dynamics, which can be
crucial in testing and selection of cosmological models [21]. Our point of view is to obtain the form of w(z) specific
to a given class of cosmological models from the dynamics of these models, and to apply it in further analysis, both theoretical
and empirical. In practice we put the cosmological model in the form of a dynamical system and linearize it around
the neighbourhood of the present epoch to find the exact formula for w(z). For the phantom scalar field model this
incompatibility manifests itself through the presence of a focus-type critical point (and therefore damped oscillations) in the phase
space, rather than a stable node (Fig. 1 and its 3D version Fig. 2).
The properties of the minimally coupled phantom field in FRW cosmology have been investigated, using phase portraits, by
Singh et al. [34] (see also [35] for more recent studies). The authors showed the existence of the de Sitter
attractor and damped oscillations (the ratio of the kinetic to the potential energy |T/U| oscillates to zero).
We can also express weff in these new variables:
weff = −(1/3) [ (1/2)m²ψ²(2(ψ + ψ′)² + 1) − ρr ] / [ (1/2)m²ψ² + ρr ], (27)
w′eff = dweff/d ln a
= −(2m²ψ/3) { ψ(ψ + ψ′)(ψ′ + ψ″)((1/2)m²ψ² + ρr) + ρr(1 + (ψ + ψ′)²)(2ψ + ψ′) } / ((1/2)m²ψ² + ρr)². (28)
Recently Caldwell and Linder [36] have discussed the dynamics of quintessence models of dark energy in terms of the w–w′
phase variables, where the prime denotes differentiation with respect to the logarithm of the scale factor. These methods
were extended to the phantom and quintom models of dark energy [37, 38]. Guo et al. [38] examined two-field
quintom models as an illustration of the simplest model of transition across the wX = −1 barrier. An interesting
mechanism of acceleration with a periodic crossing of the w = −1 barrier has recently been discussed in the context
of cubic superstring field theory [39]. In the model under consideration we obtain this effect, but trajectories cross
the barrier infinitely many times. The main advantage of the discovered road to Λ is that it takes place in the simple
flat FRW model with a quadratic potential of the scalar field.
It is easy to check that at the critical points weff = −1 and dweff/d ln a = 0. Since these points are sinks, there are
infinitely many crossings of weff = −1 during the evolution.
The methods of the Lyapunov function are useful in discussing the stability of critical points of a non-linear
system. The stability of any hyperbolic critical point of a dynamical system is determined by the signs of the real parts
of the eigenvalues λi of the Jacobi matrix. A hyperbolic critical point x0 is asymptotically stable iff Re λi < 0 for all i,
i.e. iff x0 is a sink. A hyperbolic critical point is unstable iff it is either a source or a saddle. The method of the Lyapunov
function is especially useful in deciding the stability of non-hyperbolic critical points [17, p. 129]. The construction
of a Lyapunov function was used in [40] to demonstrate that periodic behaviour of a single scalar field is not
possible for the minimally coupled phantom scalar field (see also [41]).
The quantity w′eff in terms of weff and (ln ψ)′ reads
w′eff = −(1 − 3weff)(1 + weff + (2/3)(ln ψ)′). (29)
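Equations (27)–(29) are mutually consistent with the flow (20), which can be verified pointwise: picking an arbitrary state (ψ, y, ρr), computing ψ″ from the flow and w′eff by direct differentiation of (27) reproduces the right-hand side of (29). The coefficients below, notably the m²ψ/(2ρr + m²ψ²) prefactor and the 2/3 in (29), are my reconstruction of the garbled displays, so this is a check of their internal consistency:

```python
# consistency check of w_eff, w'_eff and the flow, with reconstructed
# coefficients (assumptions flagged in the lead-in)
m = 1.3
psi, y, rho = 0.7, 0.3, 0.2          # arbitrary off-attractor state

u = psi + y                           # u = psi + psi'
M = 0.5 * m**2 * psi**2

# flow: psi' = y, rho' = -4 rho, and y' as reconstructed from eq. (20)
yp = -y - (m**2 * psi / (2 * rho + m**2 * psi**2)) * (psi * u - 1) * (1 + u**2)
up = y + yp                           # u' = psi' + psi''

w = -(M * (2 * u**2 + 1) - rho) / (3 * (M + rho))          # eq. (27)
# w' from differentiating (27) along the flow (eq. (28) as reconstructed)
wp = (-(2 * m**2 * psi / 3)
      * (psi * u * up * (M + rho) + rho * (1 + u**2) * (y + 2 * psi))
      / (M + rho) ** 2)
# right-hand side of eq. (29), using (ln psi)' = y / psi
rhs = -(1 - 3 * w) * (1 + w + (2.0 / 3.0) * y / psi)
mismatch = abs(wp - rhs)
```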
It is interesting that equation (29) can be solved in terms of w̄(a), the mean of the equation of state parameter
in the logarithmic scale defined by Rahvar and Movahed [42] as
w̄(a) = ∫ w(a′) d(ln a′) / ∫ d(ln a′), (30)
namely:
w(a) = (1/3) [1 − 4 a^{3(1+w̄(a))} ψ²]. (31)
They argued that this phenomenological parameterization removes the fine tuning of dark energy and that ρX/ρm ∝
a^{−3w̄(a)} approaches unity in the early universe. Note that for w̄(a) = −1,
w(z) + 1 = (4/3)(1 − ψ²), (32)
FIG. 1: The phase portrait (weff, w′eff) of the investigated model on the submanifold ρr = 0. This figure illustrates the evolution
of the dark energy equation of state parameter as a function of redshift for different initial conditions. In all cases trajectories
cross the boundary line weff = −1 infinitely many times, but this state also represents the global attractor.
where
ψ = ψ0 + (1 + z)^{3/2} [ C1 cos((√7/2) ln(1 + z)) − C2 sin((√7/2) ln(1 + z)) ]. (33)
In Fig. 3 we present the relation w(z) for different values of the parameters ψ0 = ±1, C1 and C2.
In this letter we regarded the phantom scalar field conformally coupled to gravity in the context of the problem of
the acceleration of the Universe. We applied the methods of dynamical systems and the Hartman-Grobman theorem to find
the universal behaviour at late times: damped oscillations around weff = −1. We argued that most parameterizations
of dark energy, such as a linear evolution of w(z) in redshift or the scale factor, cannot reflect realistic physical
models because of the presence of a non-hyperbolic critical point of a focus type on the phase plane (w, w′). We
suggested a parameterization of the type
wX(z) = −1 + (1 + z)^{3/2} [ C1 cos((√7/2) ln(1 + z)) + C2 sin((√7/2) ln(1 + z)) ] (34)
which parameterizes damped oscillations around the wX = −1 "phantom divide"; finally, with the help of this
formula one can simply calculate the energy density of dark energy ρX:
ρX = ρX,0 exp{ (1 + z)^{3/2} [ A sin((√7/2) ln(1 + z)) + B cos((√7/2) ln(1 + z)) ] }. (35)
Acknowledgments
The work of M.S. has been supported by the Marie Curie Actions Transfer of Knowledge project COCOS (contract
MTKD-CT-2004-517186).
[1] C. Wetterich, Nucl. Phys. B 302, 668 (1988).
[2] B. Ratra and P. J. E. Peebles, Phys. Rev. D37, 3406 (1988).
[3] K.-H. Chae, A. Biggs, R. Blandford, I. Browne, A. de Bruyn, C. Fassnacht, P. Helbig, N. Jackson, L. King, L. Koopmans,
et al. (The CLASS collaboration), Phys. Rev. Lett. 89, 151301 (2002), arXiv:astro-ph/0209602.
[4] J.-A. Gu and W.-Y. P. Hwang, Phys. Lett. B 517, 1 (2001), arXiv:astro-ph/0105099.
[5] C.-J. Gao and Y.-G. Shen, Phys. Lett. B 541, 1 (2002).
[6] R. R. Caldwell, R. Dave, and P. J. Steinhardt, Phys. Rev. Lett. 80, 1582 (1998), arXiv:astro-ph/9708069.
[7] R. R. Caldwell, Phys. Lett. B 545, 23 (2002), arXiv:astro-ph/9908168.
[8] M. P. Dabrowski, T. Stachowiak, and M. Szydlowski, Phys. Rev. D68, 103519 (2003), arXiv:hep-th/0307128.
[9] E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006), arXiv:hep-th/0603057.
[10] V. Faraoni, Phys. Rev. D62, 023504 (2000), arXiv:gr-qc/0002091.
FIG. 2: The phase portrait of the 3-dimensional dynamical system (20) in terms of the variables (weff, w′eff, ρr) and projections
of trajectories on the submanifold (weff, w′eff, 0). The critical point represents the state where weff = −1, the cosmological
constant. The trajectories approach this point as the scale factor goes to infinity (or the redshift z → −1). Before this stage
the weak energy condition is violated an infinite number of times.
[11] V. Faraoni, Phys. Lett. A269, 209 (2000), arXiv:gr-qc/0004007.
[12] M. Bellini, Gen. Rel. Grav. 34, 1953 (2002), arXiv:hep-ph/0205171.
[13] N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space (Cambridge University Press, Cambridge, 1984).
Interaction of Supernova Ejecta with Nearby Protoplanetary Disks
N. Ouellette
Department of Physics, Arizona State University, PO Box 871504, Tempe, AZ 85287-1504
S. J. Desch and J. J. Hester
School of Earth and Space exploration, Arizona State University, PO Box 871404, Tempe,
AZ 85287-1404
Received ; accepted
http://arxiv.org/abs/0704.1652v1
ABSTRACT
The early Solar System contained short-lived radionuclides such as 60Fe
(t1/2 = 1.5 Myr) whose most likely source was a nearby supernova. Previous
models of Solar System formation considered a supernova shock that triggered
the collapse of the Sun’s nascent molecular cloud. We advocate an alternative hy-
pothesis, that the Solar System’s protoplanetary disk had already formed when a
very close (< 1 pc) supernova injected radioactive material directly into the disk.
We conduct the first numerical simulations designed to answer two questions re-
lated to this hypothesis: will the disk be destroyed by such a close supernova;
and will any of the ejecta be mixed into the disk? Our simulations demonstrate
that the disk does not absorb enough momentum from the shock to escape the
protostar to which it is bound. Only low amounts (< 1%) of mass loss occur, due
to stripping by Kelvin-Helmholtz instabilities across the top of the disk, which
also mix into the disk about 1% of the intercepted ejecta. These low efficiencies
of destruction and injection are due to the fact that the high disk pressures
prevent the ejecta from penetrating far into the disk before stalling. Injection
of gas-phase ejecta is too inefficient to be consistent with the abundances of ra-
dionuclides inferred from meteorites. On the other hand, the radionuclides found
in meteorites would have condensed into dust grains in the supernova ejecta, and
we argue that such grains will be injected directly into the disk with nearly 100%
efficiency. The meteoritic abundances of the short-lived radionuclides such as
60Fe therefore are consistent with injection of grains condensed from the ejecta
of a nearby (< 1 pc) supernova, into an already-formed protoplanetary disk.
Subject headings: methods: numerical—shock waves—solar system: formation—stars:
formation—supernovae: general
1. Introduction
Many aspects of the formation of the Solar System are fundamentally affected by the
Sun’s stellar birth environment, but to this day the type of environment has not been well
constrained. Did the Sun form in a quiescent molecular cloud like the Taurus molecular
cloud in which many T Tauri stars are observed today? Or did the Sun form in the vicinity
of massive O stars that ionized surrounding gas, creating an H ii region before exploding as
core-collapse supernovae? Recent isotopic analyses of meteorites reveal that the early Solar
System held live 60Fe at moderately high abundances, 60Fe/56Fe ∼ 3− 7× 10−7 (Tachibana
& Huss 2003; Huss & Tachibana 2004; Mostefaoui et al. 2004, 2005; Quitte et al. 2005;
Tachibana et al. 2006). Given these high initial abundances, the origin of this short-lived
radionuclide (SLR), with a half-life of 1.5 Myr, is almost certainly a nearby supernova, and
these meteoritic isotopic measurements severely constrain the Sun’s birth environment.
Since its discovery, the high initial abundance of 60Fe in the early Solar System has
been recognized as demanding an origin in a nearby stellar nucleosynthetic source, almost
certainly a supernova (Jacobsen 2005; Goswami et al. 2005; Ouellette et al. 2005; Tachibana
et al. 2006, Looney et al. 2006). Inheritance from the interstellar medium (ISM) can be
ruled out: the average abundance of 60Fe maintained by ongoing Galactic nucleosynthesis in
supernovae and asymptotic-giant-branch (AGB) stars is estimated at 60Fe/56Fe = 3× 10−8
(Wasserburg et al. 1998) to 3 × 10−7 (Harper 1996), lower than the meteoritic ratio.
Moreover, this 60Fe is injected into the hot phase of the ISM (Meyer & Clayton 2000), and
incorporation into molecular clouds and solar systems takes ∼ 107 years or more (Meyer &
Clayton 2000; Jacobsen 2005), by which time the 60Fe has decayed. A late source is argued
for (Jacobsen 2005; see also Harper 1996, Meyer & Clayton 2000). Production within the
Solar System itself by irradiation of rocky material by solar energetic particles has been
proposed for the origin of other SLRs (e.g., Lee et al. 1998; Gounelle et al. 2001), but
neutron-rich 60Fe is produced in very low yields by this process. Predicted abundances are
60Fe/56Fe ∼ 10−11, too low by orders of magnitude to explain the meteoritic abundance
(Lee et al. 1998; Leya et al. 2003; Gounelle et al. 2006). The late source is therefore a stellar
nucleosynthetic source, either a supernova or an AGB star. AGB stars are not associated
with star-forming regions: Kastner & Myers (1994) used astronomical observations to
estimate a firm upper limit of ≈ 3× 10−6 per Myr to the probability that our Solar System
was contaminated by material from an AGB star. The yields of 60Fe from an AGB star also
may not be sufficient to explain the meteoritic ratio (Tachibana et al. 2006). Supernovae,
on the other hand, are commonly associated with star-forming regions, and a core-collapse
supernova is by far the most plausible source of the Solar System’s 60Fe.
Supernovae are naturally associated with star-forming regions because the typical
lifetimes of the stars massive enough to explode as supernovae (>∼ 8M⊙) are <∼ 10^7 yr, too
short a time for them to disperse away from the star-forming region they were born in.
Low-mass (∼ 1M⊙) stars are also born in such regions. In fact, astronomical observations
indicate that the majority of low-mass stars form in association with massive stars. Lada &
Lada (2003) conducted a census of protostars in deeply embedded clusters complete to 2 kpc
and found that 70-90% of stars form in clusters with > 100 stars. Integration of the cluster
initial mass function indicates that of all stars born in clusters of at least 100 members,
about 70% will form in clusters with at least one star massive enough to supernova (Adams
& Laughlin 2001; Hester & Desch 2005). Thus at least 50% of all low-mass stars form in
association with a supernova, and it is reasonable to assume the Sun was one such star.
Astronomical observations are consistent with, and the presence of 60Fe demands, formation
of the Sun in association with at least one massive star that went supernova.
While the case for a supernova is strong, constraining the proximity and the timing
of the supernova is more difficult. The SLRs in meteorites provide some constraints on
the timing. The SLR 60Fe must have made its way from the supernova to the Solar
System in only a few half-lives; models in which multiple SLRs are injected by a single
supernova provide a good match to meteoritic data only if the meteoritic components
containing the SLRs formed <∼ 1 Myr after the supernova (e.g., Meyer 2005, Looney et al.
2007). The significance of this tight timing constraint is that the formation of the Solar
System was somehow associated with the supernova. Cameron & Truran (1977) suggested
that the formation of the Solar System was triggered by the shock wave from the same
supernova that injected the SLRs, and subsequent numerical simulations show this is a
viable mechanism, provided several parsecs of molecular gas lies between the supernova
and the Solar System’s cloud core, or else the supernova shock will shred the molecular
cloud (Vanhala & Boss 2000, 2002). The likelihood of this initial condition has not yet been
established by astronomical observations. Also in 1977, T. Gold proposed that the Solar
System acquired its radionuclides from a nearby supernova, after its protoplanetary disk had
already formed (Clayton 1977). Astronomical observations strongly support this scenario,
especially since protoplanetary disks were directly imaged ∼ 0.2 pc from the massive star
θ1 Ori C in the Orion Nebula (McCaughrean & O’Dell 1996). Further imaging has revealed
protostars with disks near (≤ 1 pc) massive stars in the Carina Nebula (Smith et al. 2003),
NGC 6611 (Oliveira et al. 2005), and M17 and Pismis 24 (de Marco et al. 2006). This
hypothesis, that the Solar System acquired SLRs from a supernova that occurred < 1 pc
away, after the Sun’s protoplanetary disk had formed, is the focus of this paper.
In this paper we address two main questions pertinent to this model. First, are
protoplanetary disks destroyed by the explosion of a supernova a fraction of a parsec away?
Second, can supernova ejecta containing SLRs be mixed into the disk? These questions
were analytically examined in a limited manner by Chevalier (2000). Here we present the
first multidimensional numerical simulations of the interaction of supernova ejecta with
protoplanetary disks. In §2 we describe the numerical code, Perseus, we have written to
study this problem. In §3 we discuss the results of one canonical case in particular, run
at moderate spatial resolution. We examine closely the effects of our limited numerical
resolution in §4, and show that we have achieved sufficient convergence to draw conclusions
about the survivability of protoplanetary disks hit by supernova shocks. We conduct a
parameter study, investigating the effects of supernova energy and distance and disk mass,
as described in §5. Finally, we summarize our results in §6, in which we conclude that
disks are not destroyed by a nearby supernova, that gaseous ejecta is not effectively mixed
into the disks, but that solid grains from the supernova likely are, thereby explaining the
presence of SLRs like 60Fe in the early Solar System.
2. Perseus
We have written a 2-D (cylindrical) hydrodynamics code we call Perseus. Perseus (son
of Zeus) is based heavily on the Zeus algorithms (Stone & Norman 1992). The code evolves
the system while obeying the equations of conservation of mass, momentum and energy:
Dρ/Dt + ρ∇ · ~v = 0 (1)
ρ D~v/Dt = −∇p − ρ∇Φ (2)
De/Dt = −p∇ · ~v, (3)
where ρ is the mass density, ~v is the velocity, p is the pressure, e is the internal energy
density and Φ is the gravitational potential (externally imposed). The Lagrangean, or
comoving derivative D/Dt is defined as
D/Dt = ∂/∂t + ~v · ∇. (4)
The pressure and energy are related by the simple equation of state appropriate for the
ideal gas law, p = e(γ − 1), where γ is the adiabatic index. The term p∇ · ~v represents
mechanical work.
Currently, the only gravitational potential Φ used is a simple point source, representing
a star at the center of a disk. This point mass is constrained to remain at the origin.
Technically this violates conservation of momentum by a minute amount by excluding
the gravitational force of the disk on the central star. As discussed in §4, the star should
acquire a velocity ∼ 102 cm s−1 at the end of our simulations. In future simulations we will
include this effect, but for the problem explored here this is completely negligible.
The variables evolved by Perseus are set on a cylindrical grid. The program is separated
in two steps: the source and the transport step. The source step calculates the changes
in velocity and energy due to sources and sinks. Using finite difference approximations, it
evolves ~v and e according to
ρ ∂~v/∂t = −∇p − ρ∇Φ − ∇ · Q (5)
∂e/∂t = −p∇ · ~v − Q : ∇~v, (6)
where Q is the tensor artificial viscosity. Detailed expressions for the artificial viscosity can
be found in Stone & Norman (1992).
The transport step evolves the variables according to the velocities present on the
grid. For a given variable A, the conservation equation is solved, using finite difference
approximations:
d/dt ∫_V A dV = −∮_S A~v · d~S. (7)
The variables A advected in this way are density ρ, linear and angular momentum ρ~v and
Rρvφ, and energy density e. As in the Zeus code, A on each surface element is found with
an upwind interpolation scheme; we use second-order van Leer interpolation.
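As a sketch of how such a transport step can look, here is a minimal 1-D version of monotonized van Leer slopes plus upwind flux differencing. This is our own illustration, not the Perseus source; it assumes a uniform grid and a spatially constant face velocity, and updates only interior cells.

```python
import numpy as np

def vanleer_slopes(a):
    """Monotonized van Leer slopes for cell-centered data a (zero at extrema)."""
    da = np.zeros_like(a)
    dl = a[1:-1] - a[:-2]                  # left-sided difference
    dr = a[2:] - a[1:-1]                   # right-sided difference
    mask = dl * dr > 0.0                   # nonzero slope only away from extrema
    da[1:-1][mask] = 2.0 * dl[mask] * dr[mask] / (dl[mask] + dr[mask])
    return da

def advect_1d(a, v, dx, dt):
    """One conservative transport step for da/dt + d(v a)/dx = 0 with a
    spatially constant face velocity v; only interior cells are updated."""
    da = vanleer_slopes(a)
    cfl = v * dt / dx
    if v >= 0.0:                           # upwind value on each interior face
        a_face = a[:-1] + 0.5 * (1.0 - cfl) * da[:-1]
    else:
        a_face = a[1:] - 0.5 * (1.0 + cfl) * da[1:]
    flux = v * a_face
    a_new = a.copy()
    a_new[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return a_new
```

The limiter returns zero slope at extrema, which is what keeps the scheme from introducing new maxima or minima when a step profile is advected.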
Perseus is an explicit code and must satisfy the Courant-Friedrichs-Lewy (CFL)
stability criterion. The amount of time advanced per timestep, essentially, must not exceed
the time it could take for information to cross a grid zone in the physical system. In every
grid zone, the thermal time step δtcs = ∆x/cs is computed, where ∆x is the size of the
zone (smallest of the r and z dimensions) and cs is the sound speed. Also computed are
δtr = δr/|vr| and δtz = δz/|vz|, where δr and δz are the sizes of the zone in the r and
z directions respectively. Because of artificial viscosity, a viscous time step must also be
added for stability. For a given grid zone, the inverse viscous time step
δtvisc^−1 = max(|l^2 ∇ · ~v/δr^2|, |l^2 ∇ · ~v/δz^2|) is computed, where l is a length
chosen to be 3 zone widths. The final ∆t is taken to be
∆t = C0 (δtcs^−2 + δtr^−2 + δtz^−2 + δtvisc^−2)^−1/2, (8)
where C0 is the Courant number, a safety factor, taken to be C0 = 0.5. To ensure stability,
∆t is computed over all zones, and the smallest value is kept for the next step of the
simulation.
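A compact way to assemble eq. (8) over a whole grid is sketched below. The sketch is ours and makes simplifying assumptions the paper does not (uniform δr and δz and a γ-law gas); Perseus evaluates the same limits zone by zone on a nonuniform grid.

```python
import numpy as np

def cfl_timestep(rho, p, vr, vz, dr, dz, div_v, gamma=5.0 / 3.0, C0=0.5):
    """Global time step in the spirit of eq. (8): combine the sound-crossing,
    flow-crossing and viscous limits in inverse quadrature, zone by zone,
    and keep the smallest result. All array arguments share one shape."""
    tiny = 1e-30                          # guards against division by zero
    cs = np.sqrt(gamma * p / rho)         # adiabatic sound speed
    dx = min(dr, dz)                      # smallest zone dimension
    inv2 = (cs / dx) ** 2                 # (1/dt_cs)^2
    inv2 = inv2 + (np.abs(vr) / dr) ** 2 + (np.abs(vz) / dz) ** 2
    l = 3.0 * dx                          # viscous length ~ 3 zone widths
    inv_visc = np.maximum(np.abs(l**2 * div_v) / dr**2,
                          np.abs(l**2 * div_v) / dz**2)
    dt = C0 / np.sqrt(inv2 + inv_visc**2 + tiny)
    return float(dt.min())
```

For a static uniform gas only the sound-speed term survives, so the result reduces to C0 ∆x/cs.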
Boundary conditions were implemented using ghost zones as in the Zeus code. To
allow for supernova ejecta to flow past the disk, inflow boundary conditions were used at
the upper boundary (z = zmax), and outflow boundary conditions were used at the lower
boundary (z = zmin) and outer boundary (r = rmax). Reflecting boundary conditions
were used on the inner boundary (r = rmin 6= 0) to best model the symmetry about the
protoplanetary disk’s axis. The density and velocity of gas flowing into the upper boundary
were varied with time to match the ejecta properties (see §3).
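In code, ghost-zone filling for these four boundaries can look like the following sketch. It is our own illustration for a scalar field such as density; at the reflecting boundary the normal velocity component would additionally flip sign, which we omit here.

```python
import numpy as np

NG = 2  # ghost zones on each side, as in Zeus-type codes

def apply_bc(q, inflow_value):
    """Fill ghost zones of a 2D field q[ir, iz]: reflecting at inner r,
    outflow at outer r and lower z, prescribed inflow at upper z."""
    # inner r boundary (axis side): mirror interior values into the ghosts
    q[NG - 1, :] = q[NG, :]
    q[NG - 2, :] = q[NG + 1, :]
    # outer r boundary: zero-gradient outflow
    q[-NG:, :] = q[-NG - 1, :]
    # lower z boundary: zero-gradient outflow
    q[:, :NG] = q[:, NG:NG + 1]
    # upper z boundary: inflow (the time-dependent ejecta state)
    q[:, -NG:] = inflow_value
    return q
```

The inflow value would be updated every step from the ejecta relations of §3 before the ghost zones are filled.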
A more detailed description of the algorithms used in Perseus can be found in Stone &
Norman (1992).
2.1. Additions to Zeus
To consider the particular problem of high-velocity ejecta hitting a protoplanetary
disk, we wrote Perseus with the following additions to the Zeus code. One minor change
is the use of a non-uniform grid. In all of our simulations we used an orthogonal grid
with uniform spacing in r but non-uniform spacing in the z direction. For example, in the
canonical simulation (§3), the computational domain extends from r = 4 to 80 AU, with
spacing ∆r = 1AU, for a total of 76 zones in r. The computational domain extends from
z = −50AU to +90AU, but zone spacings vary with z, from ∆z = 0.2AU at z = 0, to
∆z ≈ 3AU at the upper boundary. Grid spacings increased geometrically by 5% per zone,
for a total of 120 zones in z.
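A stretched grid of this kind can be generated as below. This is our reconstruction from the stated numbers, not the Perseus source; with ∆z = 0.2 AU at z = 0 and 5% growth per zone it yields roughly 119 zones between −50 and +90 AU, close to the 120 quoted.

```python
def stretched_z_grid(dz0=0.2, ratio=1.05, z_top=90.0, z_bot=-50.0):
    """Zone edges (in AU) that grow geometrically by `ratio` per zone
    away from z = 0, mimicking the canonical run's z grid."""
    edges_up, z, dz = [0.0], 0.0, dz0
    while z < z_top:                 # march upward from the midplane
        z += dz
        edges_up.append(z)
        dz *= ratio
    edges_down, z, dz = [], 0.0, dz0
    while z > z_bot:                 # march downward from the midplane
        z -= dz
        edges_down.append(z)
        dz *= ratio
    return sorted(edges_down) + edges_up  # ascending zone edges
```

Consecutive spacings differ by exactly the growth factor, so the coarsest zones sit at the outflow boundaries where little structure needs resolving.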
Another addition was the use of a radiative cooling term. The simulations bear out
the expectation that almost all of the shocked supernova ejecta flow past the disk before
they have time to cool significantly. Cooling is significant only where the ejecta collide with
the dense gas of the disk itself, but there the cooling is sensitive to many unconstrained
physical properties to do with the chemical state of the gas, properties of dust, etc.
To capture the gross effects of cooling (especially compression of gas near the dense disk
gas) in a computationally simple way, we have adopted the following additional term in the
energy equation, implemented in the source step:
∂e/∂t = −ne np Λ, (9)
where ne and np are the number densities of electrons and protons in the gas, and Λ is the cooling
function. The densities ne and np are obtained simply by assuming the hydrogen gas is
fully ionized, so ne = np = ρ/1.4mH. For gas temperatures above 10^4 K, we take Λ of
a solar-metallicity gas from Sutherland and Dopita (1993); Λ typically ranges between
10^−24 erg cm^3 s^−1 (at T = 10^4 K) and Λ = 10^−21 erg cm^3 s^−1 (at T = 10^5 K). Below 10^4 K
we adopted a flat cooling function of Λ = 10^−24 erg cm^3 s^−1. At very low temperatures it
is necessary to include heating processes as well as cooling, or else the gas rapidly cools
to unreasonable temperatures. Rather than handle transfer of radiation from the central
star, we defined a minimum temperature below which the gas is not allowed to cool:
Tmin = 300 (r/1AU)^−3/4 K. Perseus uses a simple first-order, finite-difference equation to
handle cooling. Although this method is not as precise as a predictor-corrector method, in
§2.4 we show that it is sufficiently accurate for our purposes.
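A minimal version of such an update is sketched below. The cooling function here is our own crude stand-in for the Sutherland & Dopita table, and we use a fixed 300 K floor rather than the radius-dependent Tmin, so this illustrates the scheme rather than reproduces the paper's values.

```python
import numpy as np

KB = 1.380649e-16   # Boltzmann constant [erg/K]
MH = 1.6726e-24     # hydrogen mass [g]

def lam(T):
    """Piecewise cooling function [erg cm^3 s^-1]: flat 1e-24 below 1e4 K,
    rising toward ~1e-21 above it (stand-in for Sutherland & Dopita 1993)."""
    return np.where(T < 1e4, 1e-24, np.minimum(1e-21, 1e-24 * (T / 1e4) ** 3))

def cool_step(e, rho, dt, gamma=5.0 / 3.0):
    """First-order explicit cooling, e -> e - n_e n_p Lambda(T) dt (eq. 9),
    with a temperature floor standing in for stellar heating."""
    n = rho / (1.4 * MH)                      # n_e = n_p for fully ionized H
    T = (gamma - 1.0) * e / (n * KB)          # ideal-gas temperature estimate
    de = n * n * lam(T) * dt
    e_floor = n * KB * 300.0 / (gamma - 1.0)  # fixed 300 K floor in this sketch
    return np.maximum(e - de, e_floor)
```

The floor is what prevents a too-long step from driving the energy negative, which is the failure mode the cooling time step of the next paragraph is designed to avoid in the first place.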
Because Perseus is an explicit code, the implementation of a cooling term demands the
introduction of a cooling time step to ensure that the gas doesn’t cool too rapidly during
one time step, resulting in negative temperatures or other instabilities. For a radiating gas,
the cooling timescale can be approximated by tcool ≈ kBT/nΛ, where kB is the Boltzmann
constant, T is the temperature of the gas, n is the number density and Λ is the appropriate
cooling function. This cooling timescale is calculated on all the grid zones where the
temperature exceeds 103K, and the cooling time step δtcool is defined to be 0.025 times
the shortest cooling timescale on the grid. If the smallest cooling time step is shorter that
the previously calculated ∆t as defined by eq. [8], then it becomes the new time step. We
ignore zones where the temperature is below 103K because heating and cooling are not fully
calculated anyway, and because these zones are always associated with very high densities
and cool extremely rapidly, on timescales as short as hours, too rapidly to reasonably follow
anyway.
Finally, to follow the evolution of the ejecta gas with respect to the disk gas, a tracer
“color density” was added. By defining a different density, the color density ρc, it is possible
to follow the mixing of two specific parts of a system, in this case the ejecta and the disk.
By comparing ρc to ρ, it is possible to know how much of the ejecta is present in a given
zone relative to the original material. It is important to note that ρc is a tracer and does
not affect the simulation in any way.
2.2. Sod Shock-Tube
We have benchmarked the Perseus code against a well-known analytic solution, the
Sod shock tube (Sod 1978). Tests were performed to verify the validity of Perseus’s results.
It is a 1-D test, and hence was only done in the z direction, as curvature effects would
render this test invalid in the r direction. Therefore, the gas was initially set spatially
uniform in r, and 120 zones were used in the z direction. The other initial conditions of the Sod
shock-tube are as follows: the simulation domain is split in half and filled with a γ=1.4 gas;
in one half (z < 0.5 cm), the gas has a pressure of 1.0 dyne cm−2 and a density of 1.0 g cm−3,
while in the other half (z > 0.5 cm) the gas has a pressure of 0.1 dyne cm−2 and a density
of 0.125 g cm−3. The results of the simulation and the analytical solution at t = 0.245 s
are shown in Figure 1. The slight discrepancies between the analytic and numerical results
are attributable to numerical diffusion associated with the upwind interpolation (see Stone
& Norman 1992), match the results of Stone & Norman (1992) almost exactly, and are
entirely acceptable.
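The analytic profile the figure compares against comes from the exact Riemann solution, whose star-region pressure is found by Newton iteration on the standard shock/rarefaction functions (see e.g. Toro's textbook). The sketch below is our own, not the solver used for Figure 1.

```python
import math

def sod_pstar(pl=1.0, rl=1.0, pr=0.1, rr=0.125, gamma=1.4, ul=0.0, ur=0.0):
    """Star-region pressure of the exact Riemann problem via Newton iteration."""
    al = math.sqrt(gamma * pl / rl)   # left sound speed
    ar = math.sqrt(gamma * pr / rr)   # right sound speed

    def f(p, pk, rk, ak):
        """Pressure function and its derivative for one side of the tube."""
        if p > pk:  # shock branch
            A = 2.0 / ((gamma + 1) * rk)
            B = (gamma - 1) / (gamma + 1) * pk
            val = (p - pk) * math.sqrt(A / (p + B))
            dval = math.sqrt(A / (p + B)) * (1 - (p - pk) / (2 * (p + B)))
        else:       # rarefaction branch
            val = 2 * ak / (gamma - 1) * ((p / pk) ** ((gamma - 1) / (2 * gamma)) - 1)
            dval = (1.0 / (rk * ak)) * (p / pk) ** (-(gamma + 1) / (2 * gamma))
        return val, dval

    p = 0.5 * (pl + pr)  # simple initial guess
    for _ in range(50):
        fl, dfl = f(p, pl, rl, al)
        fr, dfr = f(p, pr, rr, ar)
        dp = (fl + fr + ur - ul) / (dfl + dfr)
        p -= dp
        if abs(dp) < 1e-12 * p:
            break
    return p
```

For the standard Sod data used here the iteration converges to p* ≈ 0.30313, the value against which the numerical contact and shock states in Figure 1 are judged.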
2.3. Gravitational Collapse
As a test problem involving curvature terms, we also simulated the pressure-free
gravitational collapse of a spherical clump of gas. A uniform density gas (ρ = 10−14 g cm−3)
was imposed everywhere within 30 AU of the star. As stated above, the only source of
gravitational acceleration in our simulations is the central protostar, with mass M = 1M⊙.
The grid on which this simulation takes place has 120 zones in the z direction and 80 in the
r direction. The free-fall timescale under the gravitational potential of a 1 M⊙ star is 29.0
yrs. The results of the simulation can be seen in Figure 2. After 28 years, the 30AU clump
has contracted to the edge of the computational volume. Spherical symmetry is maintained
throughout as the gas is advected despite the presence of the inner boundary condition.
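The 29.0 yr figure follows from treating each pressure-free shell as a degenerate Keplerian orbit of semi-major axis r0/2; a quick check with standard cgs constants (our own arithmetic, not from the paper):

```python
import math

G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33   # solar mass [g]
AU = 1.496e13     # astronomical unit [cm]
YR = 3.156e7      # year [s]

def freefall_time(r0_au, m_star=1.0):
    """Time (yr) for a shell starting at rest at r0 to fall onto a point mass:
    half the period of an orbit with semi-major axis r0/2."""
    a = 0.5 * r0_au * AU
    period = 2.0 * math.pi * math.sqrt(a**3 / (G * m_star * MSUN))
    return 0.5 * period / YR
```

For r0 = 30 AU and 1 M⊙ this gives ≈ 29 yr, matching the collapse time seen in Figure 2.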
2.4. Cooling
To test the accuracy of the cooling algorithm, a simple 2D grid of 64 zones by 64 zones
was set up. The simulation starts with gas at T = 1010K. The temperature of the gas is
followed until it reaches T = 104K. Simulations were run varying the cooling time step
δtcool. As the cooling subroutine does not use a predictor-corrector method, decreasing the
time step increases the precision. A range of cooling time steps, varying from 10 times what
is used in the code to 0.1 times what is used in the code, were tested. Since in the range of
T = 104K − 1010K, the cooling rate varies with temperature (according to Sutherland &
Dopita 1993), the size of the time step should affect the time evolution of the temperature.
This evolution is depicted in Figure 3, from which one can see that δtcool used in the code
is sufficient, as using smaller time steps gives the same result. In addition, we can see that
even the lesser precision runs give comparably good results, as the thermal time step of the
CFL condition prevents a catastrophically rapid cooling. The precision of the cooling is
limited by the accuracy of the cooling factors used, not the algorithm.
2.5. “Relaxed Disk”
Finally, we have modeled the long-term evolution of an isolated protoplanetary disk.
To begin, a minimum-mass solar nebula disk (Hayashi et al. 1985) in Keplerian rotation
is truncated at 30AU. The code then runs for 2000 years, allowing the disk to find its
equilibrium configuration under gravity from the central star (1M⊙), pressure and angular
momentum. We call this the “relaxed disk”, and use it as the initial state for the runs that
follow. To check the long term stability of the system, we allow the relaxed disk to evolve
an extra 2000 years. This test verifies the stability of the simulated disk against numerical
effects. In addition, using a color density, we can assess how much numerical diffusion
occurs in the code.
After the extra 2000 years, the disk maintains its shape, and is deformed only at
its lowest isodensity contour, because of the gravitational infall of the surrounding gas
(Figure 4). Compared to the deformation seen in the canonical run (§3), this
is a negligible effect. Some of the surrounding gas has accreted on the disk due to the
gravitational potential of the central star. The color density allows us to follow the location
of the accreted gas. After 2000 years, roughly 20% of the accreted mass has found its way
to the midplane of the disk due to the effects of numerical diffusion. Hence some numerical
diffusion exists and must be considered in what follows.
3. Canonical Case
In this section, we adopt a particular set of parameters pertinent to the disk and the
supernova, and follow the evolution of the disk and ejecta in some detail. The simulation
begins with our relaxed disk (§2.5), seen in Figure 5. Its mass is about 0.00838 M⊙, and
it extends from 4AU to 40AU, the inner parts of the disk being removed to improve
code performance. The gas density around the disk is taken to be a uniform 10 cm−3,
which is a typical density for an H ii region. This disk has similar characteristics to those
found in the Orion nebula, which have been photoevaporated down to tens of AU by the
radiation of nearby massive O stars (Johnstone, Hollenbach & Bally 1998). In setting up
our disk, we have ignored the effects of the UV flash that accompanies the supernova,
in which approximately 3 × 10^47 erg of high-energy ultraviolet photons are emitted over
several days (Hamuy et al. 1988). The typical UV opacities of protoplanetary disk dust
are κ ∼ 10^2 cm^2 g^−1 (D’Alessio et al. 2006), so this UV energy does not penetrate below
a column density ∼ κ^−1 ∼ 10^−2 g cm^−2. The gas density at the base of this layer is
typically ρ ∼ 10^−15 g cm^−3; if the gas reaches temperatures < 10^5 K, tcool will not exceed
a few hours (§2.1). The upper layer of the disk absorbing the UV is not heated above a
temperature T ∼ (EUV/4πd^2) mH κ/kB ∼ 10^5 K. Because the gas in the disk absorbs and
then reradiates the energy it absorbs from the UV flash, we have ignored it. We have also
neglected low-density gas structures that are likely to have surrounded the disk, including
photoevaporative flows and bow shocks from stellar winds, as these are beyond the scope
of this paper. It is likely that the UV flash would greatly heat this low-density gas and
cause it to rapidly escape the disk anyway. Our “relaxed disk” initial state is a reasonable,
simplified model of the disks seen in H ii regions before they are struck by supernova shocks.
After a stable disk is obtained, supernova ejecta are added to the system. The canonical
simulation assumes Mej = 20M⊙ of material was ejected isotropically by a supernova
d = 0.3 pc away, with an explosion kinetic energy Eej = 10^51 erg (1 f.o.e.). This is typical of
the mass ejected by a 25M⊙ progenitor star, as considered by Woosley & Weaver (1995),
and although more recent models show that progenitor winds are likely to reduce the ejecta
mass to < 10M⊙ (Woosley, Heger & Weaver 2002), we retain the larger ejecta mass as a
worst-case scenario for disk survivability. The ejecta are assumed to explode isotropically,
but with density and velocity decreasing with time. The time dependence is taken from
the scaling solutions of Matzner & McKee (1999); in analogy to their eq. [1], we define the
following quantities:
where R∗ is the radius of the exploding star, taken to be 50R⊙. The travel time from the
supernova to the disk is computed as ttrav = d/v∗, and is typically ∼ 100 years. Finally,
expressions for the time dependence of velocity, density and pressure of the ejecta, are
obtained for any given time t after the shock strikes the disk:
vej(t) = v∗ [ttrav/(t + ttrav)]
ρej(t) = ρ∗ [ttrav/(t + ttrav)]^3
pej(t) = p∗ [ttrav/(t + ttrav)]^3γ
We acknowledge that supernova ejecta are not distributed homogeneously within the
progenitor (Matzner & McKee 1999), nor are they ejected isotropically (Woosley, Heger
& Weaver 2002), but more detailed modeling lies beyond the scope of this paper.
Our assumption of homologous expansion is in any case a worst-case scenario for disk
survivability in that the ejecta are front-loaded in a way that overestimates the ram pressure
(C. Matzner, private communication). As our parameter study (§5) shows, density and
velocity variations have little influence on the results.
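The time-dependence relations above can be sanity-checked numerically. We do not have the paper's Matzner & McKee normalization for v∗, so the sketch below substitutes the mean ejecta speed sqrt(2 Eej/Mej) (our assumption); with the canonical parameters it gives ttrav ≈ 130 yr and an ejecta speed near 1300 km s⁻¹ at t = 100 yr, consistent with the values quoted in §3.

```python
import math

MSUN = 1.989e33   # [g]
PC = 3.086e18     # [cm]
YR = 3.156e7      # [s]

def ejecta_at_disk(t_yr, e_ej=1e51, m_ej=20.0, d_pc=0.3):
    """Ejecta speed [cm/s], homologous density dilution factor, and travel
    time [yr] at the disk, t_yr years after the shock first strikes it.
    v* = sqrt(2E/M) is our stand-in for the Matzner & McKee normalization."""
    v_star = math.sqrt(2.0 * e_ej / (m_ej * MSUN))
    t_trav = d_pc * PC / v_star          # travel time from supernova to disk
    t = t_yr * YR
    v_ej = v_star * t_trav / (t + t_trav)
    rho_fac = (t_trav / (t + t_trav)) ** 3   # homologous dilution
    return v_ej, rho_fac, t_trav / YR
```

The later-arriving ejecta are slower and more dilute, which is why the ram pressure on the disk falls off and the disk can relax back toward its initial state.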
The incoming ejecta and the shock they create while propagating through the
low-density gas of the H ii region can be seen in Figure 6. When the shock reaches the disk,
the lower-density outer edges are swept away, as the ram pressure of the ejecta is much
higher than the gas pressure in those areas. However, the shock stalls at the higher density
areas of the disk, as the gas pressure is higher there. A snapshot of the stalling shock can
be seen in Figure 7. As the ejecta hit the disk, they shock and thermalize, heating the
gas on the upper layers of the disk. This increases the pressure in that area, causing a
reverse shock to propagate into the incoming ejecta. The reverse shock will eventually stall,
forming a bow shock around the disk (Figures 8 and 9). Roughly 4 months have passed
between the initial contact and the formation of the bow shock.
Some stripping of the low density gas at the disk’s edge (> 30 AU) may occur as the
supernova ejecta is deflected around it, due primarily to the ram pressure of the ejecta. As
the stripped gas is removed from the top and the sides of the disk, it either is snowplowed
away from the disk if enough momentum has been imparted to it, or it is pushed behind
the disk, where it can fall back onto it (Figure 10). In addition to stripping the outer
layers of the disk, the pressure of the thermalized shocked gas will compress the disk to a
smaller size; although they do not destroy the disk, the ejecta do temporarily deform the
disk considerably. Figure 11 shows the effect of the pressure on the disk, which has been
reduced in thickness and has shrunk to a radius of 30 AU. The extra external pressure
effectively aids gravity and allows the gas to orbit at a smaller radius with the same angular
momentum. As the ejecta is deflected across the top edge of the disk, some mixing between
the disk gas and the ejecta may occur through Kelvin-Helmholtz instabilities. Figure 12
shows a close up of the disk where a Kelvin-Helmholtz roll is occurring at the boundary
between the disk and the flowing ejecta. In addition, some ejecta mixed in with the stripped
material under the disk might also accrete onto the disk. As time goes by and slower ejecta
hit the disk, the ram pressure affecting the disk diminishes, and the disk slowly returns to
its original state, recovering almost completely after 2000 years (Figure 13).
The exchange of material between the disk and the ejecta is mediated through the
ejecta-disk interface, which in our simulations is only moderately well resolved. As discussed
in §4, the numerical resolution will affect how well we quantify both the destruction of the
disk and the mixing of ejecta into the disk. In the canonical run, at least, disk destruction
and gas mixing are minimal. Although some stripping has occurred while the disk was being
hit by the ejecta, it has lost less than 0.1% of its mass. The final disk mass, computed from
the zones where the density is greater than 100 cm−3, remains roughly at 0.00838 M⊙. Some
of the ejecta have also been mixed into the disk, but only with very low efficiency. A 30AU
disk sitting 0.3 pc from the supernova intercepts roughly one part in 1.7 × 10^7 of the total
ejecta from the supernova, assuming isotropic ejecta distribution. For 20 M⊙ of ejecta, this
corresponds to roughly 1.18 × 10^−6 M⊙ intercepted. At the end of the simulation, we find
only 1.48 × 10^−8 M⊙ of supernova ejecta was injected into the disk, for an injection efficiency
of about 1.3%. Some of the injected material could be attributed to numerical diffusion
between the outer parts of the disk and the inner layers: as seen in §2.5, Perseus is diffusive
over long periods of time. However, the distribution of the colored mass is qualitatively
different from that obtained from a simple numerical diffusion process. Figure 14 compares
the percentage of colored mass within a given isodensity contour for the canonical case and
the relaxed disk simulation of §2.5, at a time 500 years after the beginning of each of these
simulations. From this graph, it is clear that the process that injects the supernova ejecta is
not simply numerical diffusion, as it is much more efficient at injecting material deep within
the disk. The post-shock pressure of the ejecta gas, 100 years after initial contact, when
its forward progression in the disk has stalled, is ∼ 2ρej vej^2/(γ + 1) = 2.8 × 10^−5 dyne cm^−2.
(After 100 years, ρej = 2.2 × 10^−21 g cm^−3 and vej = 1300 km s^−1.) The shock stalls where
the post-shock pressure is comparable to the disk pressure ∼ ρkBT/m̄. Hence at 20 AU,
where the temperature of the disk is T ≈ 30 K, the shock stalls at the isodensity contour
∼ 1.5 × 10^−14 g cm^−3. As about half of the color mass is mixed to just this depth, this is
further evidence that the color field in the disk represents a real physical mixing.
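The interception geometry and the stall-pressure comparison above can be reproduced in a few lines. This is a sketch, not the code of the paper; the cgs constants, γ = 5/3, and a mean molecular weight of 2.3 mH are assumptions not stated explicitly in the text.

```python
# Physical constants (cgs) and scales used in the text.
AU = 1.496e13          # cm
PC = 3.086e18          # cm

# Fraction of isotropic ejecta intercepted by a face-on disk of
# radius R at distance d: pi*R^2 / (4*pi*d^2).
def intercepted_fraction(R_cm, d_cm):
    return R_cm**2 / (4.0 * d_cm**2)

f = intercepted_fraction(30 * AU, 0.3 * PC)
print(f"intercepted fraction ~ 1 / {1.0 / f:.2e}")          # ~ 1 / 1.7e7

m_intercepted = 20.0 * f           # Msun, for 20 Msun of ejecta
print(f"intercepted mass     ~ {m_intercepted:.2e} Msun")   # ~ 1.18e-6

m_injected = 1.48e-8               # Msun found in the disk at the end
print(f"injection efficiency ~ {m_injected / m_intercepted:.1%}")  # ~ 1.3%

# Post-shock pressure of the ejecta after 100 yr, 2*rho*v^2/(gamma+1)
# with gamma = 5/3 (assumed), versus the disk thermal pressure
# rho*kB*T/mbar at the quoted stall contour (T = 30 K, mbar = 2.3 mH).
gamma, kB, mbar = 5.0 / 3.0, 1.381e-16, 2.3 * 1.673e-24
p_shock = 2 * 2.2e-21 * (1300e5)**2 / (gamma + 1)
p_disk = 1.5e-14 * kB * 30.0 / mbar
print(f"post-shock pressure  ~ {p_shock:.1e} dyne/cm^2")    # ~ 2.8e-5
print(f"disk pressure (30 K) ~ {p_disk:.1e} dyne/cm^2")     # comparable
```

The two pressures agree to within a factor of 2, which is what "comparable" requires for the stall argument.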
4. Numerical Resolution
The results of canonical run show many similarities to related problems that have
been studied extensively in the literature. The interaction of a supernova shock with a
protoplanetary disk resembles the interaction of a shock with a molecular cloud, as modeled
by Nittmann et al. (1982), Bedogni & Woodward (1990), Klein, McKee & Colella (1994;
hereafter KMC), Mac Low et al. (1994), Xu & Stone (1995), Orlando et al. (2005) and
Nakamura et al. (2006). The numerical resolutions achieved in these simulations are
state-of-the-art, especially in Nakamura et al. (2006), reaching several ×10³ zones per axis.
In those simulations, as in our canonical run, the evolution is dominated by two physical
effects: the transfer of momentum to the cloud or disk; and the onset of Kelvin-Helmholtz
(KH) instabilities that fragment and strip gas from the cloud or disk. KH instabilities
are the most difficult aspect of either simulation to model, because there is no practical
lower limit to the lengthscales on which KH instabilities operate (they are only suppressed
at scales smaller than the sheared surface). Increasing the numerical resolution generally
reveals increasingly small-scale structure at the interface between the shock and the cloud
or disk (see Figure 1 of Mac Low et al. 1994). The numerical resolution in our canonical
run is about 100 zones per axis; more specifically, there are about 26 zones in one disk
radius (of 30 AU), and about 20 zones across two scale heights of the disk (one scale-height
being about 2 AU at 20 AU). Our highest-resolution run used about 50 zones along the
radius of the disk, and placed about 30 zones across the disk vertically. In the notation of
KMC, then, our simulations employ about 20-30 zones per cloud radius, a factor of 3 lower
than the resolutions of 100 zones per cloud radius argued by Nakamura et al. (2006) to be
necessary to resolve the hydrodynamics of a shock hitting a molecular cloud.
Higher numerical resolutions are difficult to achieve; unlike the case of a supernova
shock with speed ∼ 2000 km s−1 striking a molecular cloud with radius of 1 pc, our
simulations deal with a shock with the same speed striking an object whose intrinsic
lengthscale is ∼ 0.1AU. Satisfying our CFL condition requires us to use timesteps that are
only ∼ 10³ s, four orders of magnitude smaller than the timesteps needed for the case of a
molecular cloud. This and other factors conspire to make simulations of a shock striking
a protoplanetary disk about 100 times more computationally intensive than the case of a
shock striking a molecular cloud. Due to the numerous lengthscales in the problem imposed
by the star’s gravity and the rotation of the disk, it is not possible to run the simulations
at low Mach numbers and then scale the results to higher Mach numbers. We intend to
create a parallelized version of Perseus to run on a computer cluster in the near future, but
until then, our numerical resolution cannot match that of simulations of shocks interacting
with molecular clouds. This raises the question: if our resolution is not as good as has been
achieved by others, is it good enough?
To quantify what numerical resolutions are sufficient, we examine the physics of a
shock interacting with a molecular cloud, and review the convergence studies of the same
undertaken by previous authors. In the most well-known simulations (Nittmann et al.
1982; KMC; Mac Low et al. 1994; Nakamura et al. 2006), it is assumed that a low-density
molecular cloud with no gravity or magnetic fields is exposed to a steady shock. The
shock collides with the cloud, producing a reverse shock that develops into a bow shock;
a shock propagates through the cloud, passing through it in a “cloud-crushing” time tcc.
The cloud is accelerated, but as long as a velocity difference between the high-velocity gas
and the cloud exists, KH instabilities grow that create fragments with significant velocity
dispersions, ∼ 10% of the shock speed (Nakamura et al. 2006). Cloud destruction takes
place before the cloud is fully accelerated, and the cloud is effectively fragmented in a few
× tcc before the velocity difference diminishes. These fragments are not gravitationally
bound to the cloud and easily escape. As long as the shock remains steady for a few × tcc,
it is inevitable that the cloud is destroyed.
As KH instabilities are what fragment the cloud and accelerate the fragments, it is
important to model them carefully, with numerical resolution as high as can be achieved.
KMC stated in their abstract and throughout their paper that 100 zones per cloud
radius were required for “accurate results”; however, all definitions of what was meant
by “accurate”, or what were the physically relevant “results” were deferred to a future
“Paper II”. A companion paper by Mac Low et al. (1994) referred to the same Paper II
and repeated the claim that 100 zones per axis were required. Nakamura et al. (2006),
published this year, appears to be the Paper II that reports the relevant convergence
study and quantifies what is meant by accurate results. Global quantities, including the
morphology of the cloud, its forward mass and momentum, and the velocity dispersions
of cloud fragments, were defined and calculated at various levels of numerical resolution.
These were then compared to the same quantities calculated using the highest achievable
resolutions, about 500 zones per cloud radius (over 1000 zones per axis). The quantities
slowest to converge with higher numerical resolution were the velocity dispersions, probably,
they claim, because these quantities are so sensitive to the hydrodynamics at shocks and
contact discontinuities where the code becomes first-order accurate only. The velocity
dispersions converged to within 10% of the highest-resolution values only when at least
100 zones per cloud radius were used. For this single arbitrary reason, Nakamura et al.
(2006) claimed numerical resolutions of 100 zones per cloud radius were necessary. We note,
however, that the other quantities to do with cloud morphology and momentum were found
to converge much more readily; according to Figure 1 of Nakamura et al. (2006), numerical
resolutions of only 30 zones per cloud radius are sufficient to yield values within 10% of the
values found in the highest-resolution simulations. And although the velocity dispersions
are not so well converged at 30 zones per cloud radius, even then the errors do not exceed
a factor of 2. Assuming that the problem we have investigated is similar enough to that
investigated by Nakamura et al. (2006) so that their convergence study could be applied
to our problem, we would conclude that even our canonical run is sufficiently resolving
relevant physical quantities, the one possible exception being the velocities of fragments
generated by KH instabilities, where the errors could be a factor of 2.
Of course, the problem we have investigated, a supernova shock striking a
protoplanetary disk, is different in four very important ways from the cases considered by
KMC, Mac Low et al. (1994) and Nakamura et al. (2006). The most important fundamental
difference is that the disk is gravitationally bound to the central protostar. Thus, even
if gas is accelerated to supersonic speeds ∼ 10 km s−1, it is not guaranteed to escape the
star. Second, the densities of gas in the disk, ρdisk, are significantly higher than the density
in the gas colliding with the disk, ρej. In the notation of KMC, χ = ρdisk/ρej. Because
the disk density is not uniform, no single value of χ applies, but if χ is understood to
refer to different parcels of disk gas, χ would vary from 10⁴ to over 10⁸. This affects the
magnitudes of certain variables (see, e.g., Figure 17 of KMC regarding mix fractions), but
also qualitatively alters the problem: the densities and pressures in the disk are so high that
the supernova shock cannot cross through the disk, instead stalling at several scale heights
above the disk. Unlike the case of a shock shredding a molecular cloud, the cloud-crushing
timescale tcc is not even a relevant quantity for our calculations. The third difference is
that shocks cannot remain non-radiative when gas is as dense as it is near the disk. Using
ρ = 10−14 g cm−3 and Λ = 10−24 erg cm3 s−1, tcool is only a few hours, and shocks in the disk
are effectively isothermal. Shocks propagating into the disk therefore stall at somewhat
higher locations above the disk than they would have if they were adiabatic. Finally, the
fourth fundamental difference between our simulations and those investigated in KMC,
Mac Low et al. (1994) and Nakamura et al. (2006) is that we do not assume steady shocks.
For supernova shocks striking protoplanetary disks about 0.3 pc away, the most intense
effects are felt only for a time ∼ 10² years, and after only 2000 years the shock has for all
purposes passed. There are limits, therefore, to the energy and momentum that can be
delivered to the disk. Very much unlike the case of a steady, non-radiative shock striking
a low-density, gravitationally unbound molecular cloud, where ultimately destruction of
the cloud is inevitable, many factors contribute to the survivability of protoplanetary disks
struck by supernova shocks.
This conclusion is borne out by a resolution study we have conducted that shows
that the vertical momentum delivered to the disk is certainly too small to destroy it, and
that we are not significantly underresolving the KH instabilities at the top of the disk.
Using the parameters of our canonical case, we have conducted 6 simulations with different
numerical resolutions. The resolutions range from truly awful, with only 8 zones in the
radial direction (∆r = 10AU) and 18 zones in the vertical direction (with ∆z = 1AU at
the midplane, barely sufficient to resolve a scale height), to our canonical run (76 x 120),
to one high-resolution run with 152 radial zones (∆r = 0.5AU) and 240 vertical zones
(∆z = 0.13AU at the midplane). On an Apple G5 desktop with two 2.0-GHz processors,
these simulations took from less than a day to 80 days to run. To test for convergence,
we calculated several global quantities Q, including: the density-weighted cloud radius, a;
the density-weighted cloud thickness, c; the density-weighted vertical velocity, 〈vz〉; the
density-weighted velocity dispersion in r, δvr; the density-weighted velocity dispersion in z,
δvz; as well as the mass of ejecta injected into the disk, Minj. Except for the last quantity,
these are defined exactly as in Nakamura et al. (2006), but using a density threshold
corresponding to 100 cm−3. Each global quantity was measured at a time 500 years into
each simulation. We define each global quantity Q as a function of numerical resolution n,
where n is the geometric mean of the number of zones along each axis, which ranges from
12 to 191. To compare to the resolutions of KMC, one must divide this number by about 3
to get the number of zones per “cloud radius” (two scale heights at 20 AU) in the vertical
direction, and divide by about 2 to get the number of zones per cloud radius in the radial
direction. The convergence is measured by computing |Q(n)−Q(nmax)| /Q(nmax), where
nmax = 191 corresponds to our highest resolution case. In Figure 15 we plot each quantity
Q(n) as a function of resolution n (except 〈vz〉). All of the quantities have converged
to within 10%, the criterion imposed by Nakamura et al. (2006) as signifying adequate
convergence. It is significant that δvr has converged to within 10%, because this is the
quantity relevant to disk destruction by KH instabilities. Material is stripped from the
disk only if supersonic gas streaming radially above the top of the disk can generate KH
instabilities and fragments of gas that can then be accelerated radially to escape velocities.
If we were underresolving this layer significantly, one would expect large differences in δvr
as the resolution was increased, but instead this quantity has converged. Higher-resolution
simulations are likely to reveal smaller-scale KH instabilities and perhaps more stripping of
the top of the disk, but not an order of magnitude more.
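The resolution bookkeeping above can be sketched as follows. The Q values in the convergence check are placeholders for illustration, not simulation output:

```python
import math

# Effective resolution n = geometric mean of the zone counts along the
# two axes, as defined in the text.
def effective_resolution(nr, nz):
    return math.sqrt(nr * nz)

print(effective_resolution(8, 18))     # coarsest run  -> 12
print(effective_resolution(76, 120))   # canonical run -> ~95
print(effective_resolution(152, 240))  # finest run    -> ~191

# Convergence measure |Q(n) - Q(n_max)| / Q(n_max).
def convergence_error(Q_n, Q_nmax):
    return abs(Q_n - Q_nmax) / abs(Q_nmax)

# Example: a quantity within 10% of the highest-resolution value passes
# the Nakamura et al. (2006) criterion (0.95 and 1.0 are made-up values).
assert convergence_error(Q_n=0.95, Q_nmax=1.0) < 0.10
```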
The convergence of 〈vz〉 with resolution is handled differently because unlike the other
quantities, 〈vz〉 can vanish at certain times. The disk absorbs the momentum of the ejecta
and is pushed downward, but unlike the case of an isolated molecular cloud, the disk feels
a restoring force from the gravity of the central star. The disk then undergoes damped
vertical oscillations about the origin as it collides with incoming ejecta at lower and lower
speeds. This behavior is illustrated by the time-dependence of 〈vz〉, shown in Figure 16
for two numerical resolutions, our canonical run (n = 95) and our highest-resolution run
(n = 191). Figure 16 shows that the vertical velocity of the disk oscillates about zero, but
with an amplitude ∼ 0.1 km s−1. The time-average of this amplitude can be quantified by
(〈vz〉²)^1/2, where the bar represents an average over time; the result is 825 cm s−1
for the highest-resolution run and is only 2% smaller for the canonical resolution. The
difference between the two runs is generally much smaller than this; except for a few times
around t = 150 yr, and t = 300 yr, when the discrepancies approach 30%, the agreement
between the two resolutions is within 10%. The time-averaged dispersion of the amplitude
of the difference (defined as above for 〈vz〉 itself) is only 12.0 cm s−1, which is only 1.5% of
the value for 〈vz〉 itself. Taking a time average of |〈vz〉95 − 〈vz〉191| / |〈vz〉191| yields 8.7%.
We therefore claim convergence at about the 10% level for 〈vz〉 as well.
Using these velocities, we also note here that the neglect of the star’s motion is entirely
justified. The amplitude of 〈vz〉 is entirely understandable as reflecting the momentum
delivered to the disk by the supernova ejecta, which is ∼ 20M⊙ (πR²disk/4πd²) Vej ∼
10−3 M⊙ km s−1, and which should yield a disk velocity ∼ 0.1 km s−1. The period of
oscillation is about 150 years, which is consistent with most of this momentum being
delivered to the outer reaches of the disk from 25 to 30 AU where the orbital periods are
125 to 165 years. These velocities are completely unaffected by the neglected velocity of the
central star, whose mass is 120 times greater than the disk’s mass. If the central star, with
mass ∼ 1M⊙, had been allowed to absorb the ejecta’s momentum, it would only move at
∼ 100 cm s−1 and be displaced at most 0.4 AU after 2000 years. This neglected velocity is
much smaller than all other relevant velocities in the problem, including |〈vz〉| ∼ 800 cm s−1
as well as the escape velocities (∼ 10 km s−1), the velocities of gas flowing over the disk
(∼ 10² km s−1), and of course the shock speeds (∼ 10³ km s−1).
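These order-of-magnitude figures can be recovered from the time-integrated ram pressure (the text states that the integral over time of pram is pram(t = 0) × ttrav/4). The sketch below is an estimate under that reading, with assumed cgs constants; it reproduces the ∼10⁻³ M⊙ km s⁻¹ momentum, the ∼0.1 km s⁻¹ disk velocity, and the negligible stellar recoil to within factors of ∼2:

```python
import math

AU, PC, MSUN, yr = 1.496e13, 3.086e18, 1.989e33, 3.156e7  # cgs

# Time-integrated momentum per unit area: p_ram(t=0) * t_trav / 4,
# with t_trav the ejecta travel time to the disk.
rho_ej, V_ej, d = 1.2e-20, 2200e5, 0.3 * PC
p_ram0 = rho_ej * V_ej**2 / 4.0            # peak ram pressure
t_trav = d / V_ej                          # ~130 yr, in seconds
impulse_per_area = p_ram0 * t_trav / 4.0   # dyne s / cm^2

# Total momentum onto a 30 AU disk, and the resulting velocities.
R_disk = 30 * AU
p_tot = impulse_per_area * math.pi * R_disk**2   # g cm/s
M_disk, M_star = 8.38e-3 * MSUN, 1.0 * MSUN

v_disk = p_tot / M_disk                    # cm/s, ~0.1 km/s order
v_star = p_tot / M_star                    # cm/s, ~100 cm/s order recoil
x_star = v_star * 2000 * yr / AU           # AU of drift after 2000 yr

print(f"momentum      ~ {p_tot / (MSUN * 1e5):.1e} Msun km/s")
print(f"disk velocity ~ {v_disk / 1e5:.2f} km/s")
print(f"star recoil   ~ {v_star:.0f} cm/s, drift ~ {x_star:.2f} AU")
```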
Our analysis shows that we have reached adequate convergence with our canonical
numerical resolution (n = 95). We observe KH instabilities in all of our simulations (except
n = 12), and we see the role they play in stripping the disk and mixing ejecta gas into it.
We are therefore confident that we are adequately resolving these hydrodynamic features;
nevertheless, we now consider a worst-case scenario in which KH instabilities can strip
the disk with 100% efficiency where they act, and ask how much mass the disk could
possibly lose under such conditions.
Supernova ejecta that has passed through the bow shock and strikes the disk necessarily
stalls where the gas pressure in the disk exceeds the ram pressure of the ejecta. Below this
level, the momentum of the ejecta is transferred not as a shock but as a pressure (sound)
wave. Gas motions below this level are subsonic. Note that this is drastically different from
the case of an isolated molecular cloud as studied by KMC and others; the high pressure in
the disk is maintained only because of the gravitational pull of the central star.
The location where the incoming ejecta stall is easily found. Assuming the vertical
isothermal minimum-mass solar nebula disk of Hayashi et al. (1985), the gas density varies
as ρ(r, z) = 1.4×10−9 (r/1AU)−21/8 exp(−z²/2H²) g cm−3, where H = cs/Ω, cs is the sound
speed and Ω is the Keplerian orbital frequency. Using the maximum density and velocity
of the incoming ejecta (ρej = 1.2 × 10−20 g cm−3 and Vej = 2200 km s−1), the ram pressure
of the shock striking the disk does not exceed pram = ρejV²ej/4 = 1.5 × 10−4 dyne cm−2 (the
factor of 1/4 arises because the gas must pass through the bow shock before it strikes the
disk). At 10 AU the pressure in the disk, ρc²s, exceeds the ram pressure at z = 2.7H, and at
20 AU the ejecta stall at z = 1.7H ; the gas densities at these locations are ≈ 10−13 g cm−3.
At later times, the ejecta stall even higher above the disk, because pram ∝ t−5 (cf. eq. [11]).
For example, at t = 100 yr, the ram pressure drops below 1×10−5 dyne cm−2, and the ejecta
stall above z = 3.6H (10 AU) and z = 2.9H (20 AU).
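The peak ram pressure and its decay can be checked numerically. The normalization of the t⁻⁵ decay to the travel time d/Vej is an assumption here, since equation (11) itself appears in an earlier section:

```python
# Ram pressure of ejecta striking the disk: p_ram = rho_ej * V_ej^2 / 4
# (the 1/4 from passage through the bow shock), decaying as t^-5.
PC, yr = 3.086e18, 3.156e7
rho_ej, V_ej = 1.2e-20, 2200e5            # g/cm^3, cm/s (peak values)
p_ram0 = rho_ej * V_ej**2 / 4.0
print(f"peak ram pressure ~ {p_ram0:.1e} dyne/cm^2")      # ~1.5e-4

# With t measured from the explosion, the ejecta arrive at
# t_trav = d / V_ej, and p_ram ~ p_ram0 * (t / t_trav)^-5 (assumed form).
d = 0.3 * PC
t_trav = d / V_ej / yr                    # ~130 yr
p_ram_100yr = p_ram0 * ((t_trav + 100.0) / t_trav) ** -5
print(f"ram pressure 100 yr after contact ~ {p_ram_100yr:.1e} dyne/cm^2")
```

With these numbers the ram pressure 100 yr after contact is indeed below 1 × 10⁻⁵ dyne cm⁻², as the text states.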
The column density above a height z in a vertically isothermal disk is easily found to
be Σ(>z) ≈ ρ(z)H²/z = p(z)/(Ω²z). Integrating over radius, the total amount of disk gas
that ever comes into contact with ejecta is (approximating z = 2H):
Mss = ∫₀^Rd 2πr pram/(Ω²z) dr ∼ πpramR²d/(Ω²z). (12)
Using a disk radius Rd = 30AU, the maximum amount of disk gas that is actually
exposed to a shock at any time is only 1.5 × 10−5M⊙, or 0.2% of the disk mass. This
fraction decreases with time as pram ∝ t−5 (eq. [11]); the integral over time of pram is
pram(t = 0) × ttrav/4. The ram pressure drops so quickly, that effectively ejecta interact
with this uppermost 0.2% of the disk mass only for about 30 years. This is equivalent to
one orbital timescale at 10 AU, so the amount of disk gas that is able to mix or otherwise
interact with the ejecta hitting the upper layers of the disk is very small, probably a few
percent at most. As for KH instabilities, they are initiated when the Richardson number
drops below a critical value, when
Ri = (g/ρ)(∂ρ/∂z) / (∂U/∂z)² < 1/4, (13)
where g = −Ω²z is the vertical gravitational acceleration, Ω is the Keplerian orbital
frequency, and (∂U/∂z) is the velocity gradient at the top of the disk. Below the stall
point, all gas motions are subsonic and the velocity gradient would have to be exceptionally
steep, with an unreasonably thin shear layer thickness, <∼ H/10, to initiate KH instabilities.
Mixing of ejecta into the disk is quite effective above where the shock stalls, as illustrated
by Figure 14; it is in these same layers (experiencing supersonic velocities) that we expect
KH instabilities to occur, but again <∼ 1% of the disk mass can be expected to interact
with these layers.
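The column-density estimate Σ(>z) ≈ ρ(z)H²/z used above is the large-z asymptote of the exact Gaussian integral, and its accuracy at the quoted stall heights can be checked directly (a quick numerical sketch):

```python
import math

# For a vertically isothermal (Gaussian) layer, the exact column above
# height z is Sigma(>z) = rho0 * H * sqrt(pi/2) * erfc(z / (sqrt(2) H));
# the text's estimate rho(z) * H^2 / z is its large-z asymptote.
# Both functions below are in units of rho0 * H, with x = z / H.
def sigma_exact(x):
    return math.sqrt(math.pi / 2) * math.erfc(x / math.sqrt(2))

def sigma_approx(x):
    return math.exp(-x**2 / 2) / x

for x in (1.7, 2.0, 2.9):   # stall heights of the kind quoted in the text
    ratio = sigma_approx(x) / sigma_exact(x)
    print(f"z = {x:.1f} H: approx/exact = {ratio:.2f}")
```

At z between roughly 1.7H and 3H the approximation overestimates the exact column by only 10-25%, which is ample accuracy for the ∼0.2% mass estimate.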
To summarize, our numerical simulations are run at a lower resolution (by a factor
of about 3) than has been claimed necessary to study the interaction of steady shocks
with gravitationally unbound molecular clouds, but the drastically different physics of the
problem studied here has allowed us to achieve numerical convergence and to reach
meaningful conclusions. Our global quantities have converged to within 10%, the same
criterion used by Nakamura et al. (2006) to claim convergence. The problem is so different
because the disk is tightly gravitationally bound to the star and the supernova shock is
of finite duration. The high pressure in the disk makes the concept of a cloud-crushing
time meaningless, because the ejecta stall before they drive through even 1% of the disk
gas. Rather than a sharp interface between the ejecta and the disk, the two interact via
sound waves within the disk, which entails smoother gradients. While we do resolve KH
instabilities in this interface, we allow that we may be underresolving this layer; but even
if we are, this will not affect our conclusions regarding the disk survival or the amount of
gas mixed into the disk. This is because we already find that mass is stripped from the
disk and ejecta are mixed into the disk very effectively (see Figure 14) above the layer
where the ejecta stall, and below this layer mixing is much less efficient and all the gas is
subsonic and bound to the star. It is inevitable that mass loss and mixing of ejecta should
be only at the ∼ 1% level. Similar studies using higher numerical resolutions are likely to
reveal more detailed structures at the disk-ejecta interface, but it is doubtful that more
than a few percent of the disk mass can be mixed-in ejecta, and it is even more doubtful
that even 1% of the disk mass can be lost. We therefore have sufficient confidence in our
canonical resolution to use it to test the effects of varying parameters on gas mixing and
disk destruction.
5. Parameter Study
5.1. Distance
Various parameters were changed from the canonical case to study their effect on the
survival of the disk and the injection efficiency of ejecta, including: the distance between the
supernova and the disk, d; the explosion energy of the supernova, Eej; and the mass of gas
in the disk, Mdisk. In all these scenarios, the resolution stayed the same as in the canonical
case. The first parameter studied was the distance between the supernova and the disk.
From the canonical distance of 0.3 pc, the disk was moved to 0.5 pc and 0.1 pc. The main
effect of this change is to vary the density of the ejecta hitting the disk (see eq. [11]). If the
disk is closer, the gaseous ejecta is less diluted as it hits the disk. Hence these simulations
are essentially equivalent to simulating a denser or a more tenuous clump of gas hitting the
disk in a non-homogeneous supernova explosion. The results of these simulations can be
seen in Table 2. The “% injected” column gives the percentage of the ejecta intercepted by
the disk [with an assumed cross-section of π(30 AU)²] that was actually mixed into the disk.
The third column gives the estimated 26Al/27Al ratio that one would expect in the disk
if the SLRs were delivered in the gas phase. This quantity was calculated using a disk
chemical composition taken from Lodders (2003), and the ejecta isotopic composition from
a 25 M⊙ supernova taken from Woosley & Weaver (1995), which ejects M = 1.27 × 10−4 M⊙
of 26Al. Although the injection efficiency increases for denser ejecta, and the geometric
dilution decreases for a closer supernova, gas-phase injection of ejecta into a disk at 0.1 pc
cannot explain the SLR ratios in meteorites. The 26Al/27Al ratio is off by roughly an order
of magnitude from the measured value of 5 × 10−5 (e.g., MacPherson et al. 1995). Stripping
was more important with denser ejecta (d = 0.1 pc), although still negligible compared to
the mass of the disk; only 0.7% of the disk mass was lost.
5.2. Explosion Energy
We next varied the explosion energy, which defines the velocity at which the ejecta
travel. The explosion energy was changed from 1 f.o.e. to 0.25 and 4 f.o.e., effectively
modifying the ejecta velocity from 2200 km/s to 1100 km/s and 4400 km/s, respectively.
The results of the simulations can be seen in Table 3. Slower ejecta thermalizes to a lower
temperature, and does not form such a strong reverse shock. Therefore, slower ejecta
is injected at a slightly higher efficiency into a disk. Primarily, though, the results are
insensitive to the velocity of the incoming supernova ejecta.
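The mapping from explosion energy to ejecta velocity used here follows from kinetic-energy scaling, Vej ∝ √E, anchored to the canonical 2200 km s⁻¹ at 1 f.o.e.:

```python
import math

# Ejecta velocity scales as the square root of the explosion energy:
# V_ej = V_0 * sqrt(E / 1 f.o.e.), with canonical V_0 = 2200 km/s.
def ejecta_velocity(E_foe, V0=2200.0):
    return V0 * math.sqrt(E_foe)

for E in (0.25, 1.0, 4.0):
    print(f"E = {E:4.2f} f.o.e. -> V_ej = {ejecta_velocity(E):.0f} km/s")
# -> 1100, 2200, and 4400 km/s, matching the values simulated.
```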
5.3. Disk Mass
The final parameter varied was the mass of the disk. In these simulations, the mass
of the minimum-mass disk used in the canonical simulation was increased by a factor
of 10, and decreased by a factor of 10. The results of the simulations can be seen in Table
4. Increasing the mass by a factor of 10 slightly increases the injection efficiency, but this
could be due to the fact
that the disk does not get compressed as much as the canonical disk (it has a higher density
and pressure at each radius). Hence the disk has a larger surface to intercept the ejecta
(the calculation for injection efficiency assumes a radius of 30 AU). Reducing the mass by
a factor of 10 increases the efficiency. As the gas density in the disk is less, the pressure
is less, and hence the ejecta is able to get closer to the midplane, increasing the amount
injected.
6. Conclusions
In this paper, we have described a 2-D cylindrical hydrodynamics code we wrote,
Perseus, and the results from the application of this code to the problem of the interaction
of supernova shocks with protoplanetary disks. A main conclusion of this paper is that disks
are not destroyed by a nearby supernova, even one as close as 0.1 pc. The robustness of
the disks is a fundamentally new result that differs from previous 1-D analytical estimates
(Chevalier 2000) and numerical simulations (Ouellette et al. 2005). In those simulations, in
which gas could not be deflected around the disk, the full momentum of the supernova ejecta
was transferred directly to each annulus of gas in the disk. Chevalier (2000) had estimated
that disk annuli would be stripped away from the disk wherever MejVej/4πd² > ΣdVesc,
where Σd is the surface density of the disk [Σd = 1700 (r/1AU)−3/2 g cm−2 for a minimum
mass disk; Hayashi et al. 1985], and Vesc is the escape velocity at the radius of the annulus.
In the geometry considered here, the momentum is applied at right angles to the disk
rotation, so vesc can be replaced with the Keplerian orbital velocity, as the total kinetic
energy would then be sufficient for escape. Also, integrating the momentum transfer over
time (eq. [11]), we find Vej = 3v⋆/4. Therefore, using the criterion of Chevalier (2000), and
considering the parameters of the canonical case but with d = 0.1 pc, the disk should have
been destroyed everywhere outside of 30.2AU, representing a loss of 13% of the mass of a
40 AU radius disk. Comparable conclusions were reached by Ouellette et al. (2005).
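The 30.2 AU stripping radius and the 13% mass loss can be recovered from the Chevalier (2000) criterion as modified above (Keplerian velocity in place of Vesc, time-averaged ejecta velocity 3Vej/4). The sketch below assumes a 1 M⊙ central star and standard cgs constants:

```python
import math

# Chevalier (2000) criterion M_ej V_ej / (4 pi d^2) > Sigma_d * v,
# with v the Keplerian orbital velocity, solved for the stripping radius.
AU, PC, MSUN = 1.496e13, 3.086e18, 1.989e33   # cgs
GM = 6.674e-8 * MSUN                          # 1 Msun central star (assumed)

M_ej = 20.0 * MSUN
V_eff = 0.75 * 2200e5          # cm/s, time-averaged ejecta velocity 3V_ej/4
d = 0.1 * PC

lhs = M_ej * V_eff / (4 * math.pi * d**2)     # momentum per unit area

def rhs(r_AU):
    # Minimum-mass surface density (Hayashi et al. 1985) times v_Kepler.
    sigma = 1700.0 * r_AU**-1.5
    v_kep = math.sqrt(GM / (r_AU * AU))
    return sigma * v_kep

# Bisect for the radius where lhs = rhs (rhs falls off as r^-2).
lo, hi = 1.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rhs(mid) > lhs else (lo, mid)
r_strip = 0.5 * (lo + hi)
print(f"stripping radius ~ {r_strip:.1f} AU")         # ~30 AU

# Fraction of a 40 AU minimum-mass disk outside that radius
# (M(<r) ~ r^1/2 for Sigma ~ r^-3/2).
frac = 1.0 - math.sqrt(r_strip / 40.0)
print(f"mass fraction lost ~ {frac:.0%}")             # ~13%
```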
In contrast, as these 2-D simulations show, the disk becomes surrounded by high-
pressure shocked gas that cushions the disk and deflects ejecta around the disk. This
high-pressure gas has many effects. First, the bow shock deviates the gas, making part
of the ejecta that would have normally hit the disk flow around it. From Figure 11, by
following the velocity vectors, it is possible to estimate that the gas initially on trajectories
with r > 20AU will be deflected by > 14◦ after passing through the bow shock, and will
miss the disk. For a disk 30 AU in size, this represents a reduction in the mass flux hitting
by ≈ 45%; more thorough calculations give a reduction of ≈ 50%. Second, the bow shock
reduces the forward velocity of the gas that does hit the disk. Gas deviated sideways by about
14◦ will have lost more than 10% of its forward velocity upon reaching the disk. These
two effects conspire to reduce the amount of momentum hitting the disk by 55%
overall. By virtue of the smaller escape velocity and the lower disk surface density, gas at
the disk edges is most vulnerable to loss by the momentum of the shock, but it is at the disk
edges that the momentum of the supernova shock is most sharply reduced. Because of
the loss of momentum, the disk in the previous paragraph could survive out to a radius of
about 45AU.
A third, significant effect of the surrounding high-pressure shocked gas, though, is
its ability to shrink the disk to a smaller radius. The pressure in the post-shock gas is
∼ 2ρejv²ej/(γ + 1) = 4.4 × 10−4 dyne cm−2, so the average pressure gradient in the disk
between about 30 and 35 AU is ≈ 1.9 × 10−18 dyne cm−3. This is to be compared to
the gravitational force per volume at 35AU, ρg = 4.8 × 10−19 dyne cm−3 (at 35 AU,
ρ ∼ 1.0 × 10−15 g cm−3 in the canonical disk.) The pressure of the shocked gas enhances the inward
gravitational force by a significant amount, causing gas of a given angular momentum
to orbit at a smaller radius than it would if in pure Keplerian rotation. When this high
pressure is relieved after the supernova shock has passed, the disk is restored to Keplerian
rotation and expands to its original size. While the shock is strongest, the high-pressure gas
forces a protoplanetary disk to orbit at a reduced size, ≈ 30AU, where it is invulnerable to
being stripped by direct transfers of momentum. Because of these combined effects of the
cushion of high-pressure shocked gas surrounding the disk—reduction in ejecta momentum
and squeezing of the disk—protoplanetary disks even 0.1 pc from the supernova lose < 1%
of their mass.
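Two of the numbers in the squeezing argument can be verified directly. This is a sketch that adopts γ = 5/3 and a 1 M⊙ star, and takes ρ ∼ 10⁻¹⁵ g cm⁻³ at 35 AU from the text:

```python
# Post-shock pressure of the shocked ejecta, 2 rho_ej V_ej^2 / (gamma+1),
# and the gravitational force per unit volume, rho * GM / r^2, at 35 AU.
AU, MSUN = 1.496e13, 1.989e33
GM = 6.674e-8 * MSUN              # 1 Msun central star (assumed)

gamma = 5.0 / 3.0
p_post = 2 * 1.2e-20 * (2200e5)**2 / (gamma + 1)
print(f"post-shock pressure ~ {p_post:.1e} dyne/cm^2")   # ~4.4e-4

r = 35 * AU
rho_g = 1.0e-15 * GM / r**2       # rho * g, using the quoted density
print(f"rho * g at 35 AU    ~ {rho_g:.1e} dyne/cm^3")    # ~4.8e-19
```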
Destruction of the disk, if it occurs at all, is due to stripping of the low-density upper
layers of the disk by Kelvin-Helmholtz (KH) instabilities. We observed KH instabilities in
all of our simulations (except n = 12), and we observe their role in stripping gas from the
disk and mixing supernova ejecta into the disk (e.g., Figure 12). Our canonical numerical
resolution (n = 95) and our highest-resolution simulation (n = 191), corresponding to
effectively 20-30 zones per cloud radius in the terminology of KMC, are just adequate to
provide convergence at the 10% level, as described in §4. We are confident we are capturing
the relevant physics in our simulations, but we have shown that even if KH instabilities
are considerably more effective than we are modeling, that no more than about 1% of
the disk mass could ever be affected by KH instabilities. This is because the supernova
shock stalls where the ram pressure is balanced by the pressure in the disk, and for typical
protoplanetary disk conditions, this occurs several scale heights above the midplane. It is
unlikely that higher-resolution simulations would observe loss of more than ∼ 1% of the
disk mass. We observed that the ratio of injected mass to disk mass was typically ∼ 1% as
well, for similar reasons. We stipulate that mixing of ejecta into the disk is more subtle
than stripping of disk mass, but given the limited ability of the supernova ejecta to enter
the disk, we find it doubtful that higher-resolution simulations would increase the amount of
gas-phase radionuclides injected into the disk by more than a factor of a few. Therefore, while disks
like those observed in the Orion Nebula (McCaughrean & O’Dell 1995) should survive the
explosions of the massive stars in their vicinity, and while these disks would then contain
some fraction of the supernova’s gas-phase ejecta, they would not retain more than a small
fraction (∼ 1%) of the gaseous ejecta actually intercepted by the disk. If SLRs like 26Al are
in the gas phase of the supernova (as modeled here), they will not be injected into the disk
in quantities large enough to explain the observed meteoritic ratios, failing by 1-2 orders of
magnitude.
Of course, the SLRs inferred from meteorites, e.g., 60Fe and 26Al, would not be detected
if they were not refractory elements. These elements should condense out of the supernova
ejecta as dust grains before colliding with a disk (Ebel & Grossman 2001). Colgan et al.
(1994) observed the production of dust at the temperature at which FeS condenses, 640
days after SN 1987A, suggesting that the Fe and other refractory elements should condense
out of the cooling supernova ejecta in less than a few years. (The supernova ejecta is
actually quite cool because of the adiabatic expansion of the gas.) As the travel times from
the supernova to the disks in our simulations are typically 20 − 500 years, SLRs can be
expected to be sequestered into dust grains condensed from the supernova before striking a
disk.
Dust grains will be injected into the disk much more effectively than gas-phase ejecta.
When the ejecta gas and dust, moving together at the same speed, encounter the bow
shock, the gas is almost instantaneously deflected around the disk, but the dust grains will
continue forward by virtue of their momentum. The dust grains will be slowed only as fast
as drag forces can act. The drag force F on a highly supersonic particle is F ≈ πa²ρg∆v²,
where a is the dust radius, ρg is the gas density, and ∆v is the velocity difference between
the gas and the dust. Assuming the dust grains are spherical with internal density ρs, the
resultant acceleration is dv/dt = −(3ρg∆v
2)/(4ρsa). Immediately after passage through the
bow shock, the gas velocity has dropped to 1/4 of the ejecta velocity, so ∆v ≈ (3/4)vej.
Integrating the acceleration, we find the time t1/2 for the dust to lose half its initial velocity:
t1/2 = 16ρsa/(9ρgvej) . (14)
Measurements of SiC grains with isotopic ratios indicative of formation in supernova ejecta
reveal typical radii of a ∼ 0.5 µm (Amari et al. 1994; Hoppe et al. 2000). Assuming
similar values for all supernova grains, an internal density ρs = 2.5 g cm⁻³, and using
the maximum typical gas density in the region between the bow shock and the disk,
ρg ≈ 5 × 10⁻²⁰ g cm⁻³, we find a minimum dust stopping time t1/2 ≈ 2 × 10⁷ s. In that
time, the dust will have travelled about 300 AU. As the bow shock lies about 20 AU from
the disk, the dust will encounter the protoplanetary disk well before travelling this distance,
and we conclude that the dust the size of typical supernova SiC grains is not deflected
around the disk. We estimate that nearly all the dust in the ejecta intercepted by the
disk will be injected into the disk. With nearly 100% injection efficiency, the abundances
of 26Al and 60Fe in a disk 0.15 pc from a supernova would be 26Al/27Al = 6.8 × 10−5
and 60Fe/56Fe = 4.8 × 10−7 (using the yields from Woosley & Weaver 1995). These
values compare quite favorably to the meteoritic ratios (26Al/27Al = 5.0 × 10−5 and
60Fe/56Fe = 3−7×10−7; MacPherson et al. 1995, Tachibana & Huss 2003), and we conclude
that injection of SLRs into an already formed protoplanetary disk by a nearby supernova
is a viable mechanism for delivering radionuclides to the early Solar System, provided the
SLRs have condensed into dust. In future work we will present numerical simulations of
this process (Ouellette et al. 2007, in preparation).
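The stopping-time estimate is easy to reproduce. The sketch below is a rough check, not part of the original analysis; it assumes the grain and gas parameters quoted above and an ejecta speed of ~2200 km s⁻¹ (the value given for the canonical simulation), and evaluates equation (14) together with the distance such a grain covers before losing half its speed:

```python
# Rough check of the dust stopping-time estimate of equation (14),
# t_1/2 = 16 rho_s a / (9 rho_g v_ej), using the parameter values quoted
# in the text. The ejecta speed of 2200 km/s is the value quoted for the
# canonical simulation (an assumption for this sketch).
AU = 1.496e13            # cm
a_grain = 0.5e-4         # SiC grain radius [cm] (~0.5 micron)
rho_s = 2.5              # grain internal density [g cm^-3]
rho_g = 5.0e-20          # max gas density between bow shock and disk [g cm^-3]
v_ej = 2.2e8             # ejecta speed [cm s^-1]

t_half = 16.0 * rho_s * a_grain / (9.0 * rho_g * v_ej)  # equation (14)
d_stop = v_ej * t_half / AU                             # distance covered [AU]

print(t_half, d_stop)    # ~2e7 s and ~300 AU, as quoted in the text
```

Since the bow shock stands only ~20 AU from the disk surface, a stopping length of ~300 AU means the grains reach the disk essentially undecelerated.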
We thank an anonymous referee for two very thorough reviews that significantly
improved the manuscript. We also thank Chris Matzner for helpful discussions.
REFERENCES
Adams, F. C. & Laughlin, G., 2001, Icarus, 150, 151
Amari, S., Lewis, R. S. & Anders, E., 1994, GeCoA, 58, 459
Bedogni, R., & Woodward, P. R. 1990, A&A, 231, 481
Cameron, A. G. W. & Truran, J. W., 1977, Icarus, 30, 447
Chevalier, R. A., 2000, ApJ, 538, 151
Clayton, D. D., 1977, Icarus, 32, 255
Colgan, S. W. J., Haas, M. R., Erickson, E. F., Lord, S. D. & Hollenbach, D. J., 1994, ApJ,
427, 874
D’Alessio, P., Calvet, N., Hartmann, L., Franco-Hernández, R., & Servín, H. 2006, ApJ,
638, 314
de Marco, O., O’Dell, C. R., Gelfond, P., Rubin, R. H. & Glover, S. C. O., 2006, AJ,
131, 2580
Ebel, D. S., & Grossman, L. 2001, Geochim. Cosmochim. Acta, 65, 469
Goswami, J. N., Marhas, K. K., Chaussidon, M., Gounelle, M. & Meyer, B. S., 2005, ,
Chondrites and the Protoplanetary Disk, ed. A. N. Krot, E. R. D. Scott and B.
Reipurth (San Francisco: Astronomical Society of the Pacific), 485
Gounelle, M., Shu, F. H., Shang, H., Glassgold, A. E., Rehm, K. E.& Lee, T., 2006, ApJ,
640, 1163
Harper, C. L., Jr., 1996, ApJ, 466, 1026
Hayashi, C., Nakazawa, K. & Nakagawa, Y., 1985, Protostars and Planets II , Tucson, AZ,
University of Arizona Press, 1100
Hawley, J. F., Wilson, J. R. & Smarr, L. L., 1984, ApJ, 277, 296
Hester, J. J. & Desch, S. J., 2005, Chondrites and the Protoplanetary Disk, ed. A. N. Krot,
E. R. D. Scott and B. Reipurth (San Francisco: Astronomical Society of the Pacific),
Hoppe, P., Strebel, R., Eberhardt, P., Amari, S. & Lewis, R. S., 2000, M&PS, 35, 1157
Hamuy, M., Suntzeff, N. B., Gonzalez, R., & Martin, G. 1988, AJ, 95, 63
Huss, G. R. & Tachibana, S., 2004, LPI, 35, 1811
Jacobsen, S. B., 2005, Chondrites and the Protoplanetary Disk, ed. A. N. Krot, E. R. D.
Scott and B. Reipurth. (San Francisco: Astronomical Society of the Pacific), 548
Johnstone, D., Hollenbach, D. & Bally, J.,1998, ApJ, 499, 758
Kastner, J. H. & Myers, P. C., 1994, ApJ, 421, 605
Klein, R. I., McKee, C. F. & Colella, P., 1994, (KMC) ApJ, 420, 213
Lada, C. J. & Lada, E. A., 2003, ARA&A, 41, 57L
Lee, T., Shu, F. H., Shang, H., Glassgold, A. E. & Rehm, K. E., 1998, ApJ, 506, 898
Leya, I., Halliday, A. N. & Wieler, R., 2003, ApJ, 594, 605
Lodders, K., 2003, ApJ, 591, 1220
Looney, L. W., Tobin, J. J., & Fields, B. D. 2006, ApJ, 652, 1755
Mac Low, M.-M., McKee, C. F., Klein, R. I., Stone, J. M., & Norman, M. L. 1994, ApJ,
433, 757
MacPherson, G. J., Davis, A. M. & Zinner, E. K., 1995, Meteoritics, 30, 365
Matzner, C. D. & McKee, C. F., 1999, ApJ, 510, 379
McCaughrean, M. J. & O’Dell, C. R., 1996, AJ, 111, 1977
Meyer, B. S. & Clayton, D. D., 2000, SSRv, 92, 133
Meyer, B. S., 2005, Chondrites and the Protoplanetary Disk, ed. A. N. Krot, E. R. D. Scott
and B. Reipurth. (San Francisco: Astronomical Society of the Pacific), 515
Mostefaoui, S., Lugmair, G. W. & Hoppe, P., 2005, ApJ, 625, 271
Mostefaoui, S., Lugmair, G. W., Hoppe, P. & El Goresy, A., 2004, NewAR, 48, 155
Nakamura, F., McKee, C. F., Klein, R. I., & Fisher, R. T. 2006, ApJS, 164, 477
Nittmann, J., Falle, S. A. E. G., & Gaskell, P. H. 1982, MNRAS, 201, 833
Oliveira, J. M., Jeffries, R. D., van Loon, J. Th., Littlefair, S. P. & Naylor, T., 2005,
MNRAS, 358, 21
Orlando, S., Peres, G., Reale, F., Bocchino, F., Rosner, R., Plewa, T., & Siegel, A. 2005,
A&A, 444, 505
Ouellette, N., Desch, S. J., Hester, J. J. & Leshin, L. A., 2005, Chondrites and the
Protoplanetary Disk, ed. A. N. Krot, E. R. D. Scott and B. Reipurth. (San Francisco:
Astronomical Society of the Pacific), 527
Quitté, G., Latkoczy, C., Halliday, A. N., Schönbächler, M. & Günther, D., 2005, LPI, 36,
Sod, G. A., 1978, J. Comput. Phys., 27, 1
Smith, N., Bally, J. & Morse, J. A., 2003, ApJ, 587, 105
Stone, J. M. & Norman, M. L., 1992 ApJS, 80, 753
Sutherland, R. S. & Dopita, M. A., 1993, ApJS, 88, 253
Tachibana, S. & Huss, G. R., 2003, ApJ, 588, 41
Tachibana, S., Huss, G. R., Kita, N. T., Shimoda, G. & Morishita, Y., 2006, ApJ, 639, 87
Vanhala, H. A. T. & Boss, A. P., 2000, ApJ, 538, 911
Vanhala, H. A. T. & Boss, A. P., 2002, ApJ, 575, 1144
Wasserburg, G. J., Gallino, R. & Busso, M., 1998, ApJ, 500, 189
Woosley, S. E., Heger, A. & Weaver, T. A., 2002, RvMP, 74, 1015
Woosley, S. E. & Weaver, T., 1995, ApJS, 101, 181
Xu, J., & Stone, J. M. 1995, ApJ, 454, 172
This manuscript was prepared with the AAS LATEX macros v5.2.
Table 1 Mass injected
# of zones (r × z) % injected
8×18 0.83
16×30 0.77
30×54 0.96
39×74 1.28
60×88 1.31
76 ×120 1.26
152×240 1.25
Table 2 Effect of Distance
d % injected 26Al/27Al
0.1 pc 4.3 6.4 × 10−6
0.3 pc 1.3 2.2 × 10−7
0.5 pc 1.0 5.6 × 10−8
Table 3 Effect of Explosion Energy
Eej  % injected  26Al/27Al
4.0 f.o.e. 1.0 1.7 × 10−7
1.0 f.o.e. 1.3 2.2 × 10−7
0.25 f.o.e. 1.7 2.8 × 10−7
Table 4 Effect of Disk Mass
Mdisk  % injected  26Al/27Al
0.084 M⊙  1.4  2.3 × 10−7
0.0084 M⊙  1.3  2.2 × 10−7
0.00084 M⊙  2.2  3.6 × 10−7
Fig. 1.— Sod shock-tube problem benchmark. The squares are the results of the simulation
using Perseus, and the solid line is the analytical solution from Hawley et al. (1984).
Fig. 2.— Pressure-free collapse of a 30 AU, uniform-density clump of gas, as simulated by
Perseus. Spherical symmetry is maintained despite the cylindrical geometry and the inner
boundary condition at r = 2 AU.
Fig. 3.— (a) Gas temperature evolution using various cooling time steps (tc is the time step
normally used in the code) (b) Close-up of the time interval 31-33 years. Convergence is
achieved using tc and higher resolution timesteps.
Fig. 4.— (a) Isodensity contours of an equilibrium (“relaxed”) protoplanetary disk. Con-
tours are spaced a factor of 10 apart, with the outermost contour representing a density
10−20 g cm−3. The rotating disk has already evolved for 2000 years and is stable; this is the
configuration used as the initial state for our subsequent runs. (b) The “relaxed” disk, after
an additional 2000 years of evolution. While some slight deformation of the lowest-density
contours is seen, attributable to gravitational infall of surrounding gas, the disk is stable and
non-evolving over the spans of time relevant to supernova shock passage, ≈ 2000 years.
Fig. 5.— Isodensity contours of the relaxed disk, just before impact of the supernova ejecta.
Contours are spaced a factor of 10 apart, with the outermost contour representing a density
10−21 g cm−3.
Fig. 6.— Protoplanetary disk immediately prior to impact by the supernova shock. Isoden-
sity contours are as in Figure 4. Arrows represent gas velocities. The supernova ejecta are
travelling through the H ii region toward the disk at about 2200 km s−1.
Fig. 7.— Protoplanetary disk 0.05 years after being first hit by supernova ejecta. As the
supernova shock sweeps around the disk edge, it snowplows the low-density disk material
with it, but the shock stalls in the high-density gas in the disk proper. Isodensity contours
and velocity vectors as in Figure 5.
Fig. 8.— Protoplanetary disk 0.1 years after first being hit by supernova ejecta. As the
pressure increases on the side of the disk facing the ejecta, a reverse shock forms, visible as
the outermost (dashed) isodensity contour. Isodensity contours and velocity vectors as in
Figure 5.
Fig. 9.— Protoplanetary disk 0.3 years after first being hit by supernova ejecta. The reverse
shock, visible as the outermost (dashed) contour, has stalled and formed a bow shock. The
bow shock deflects incoming gas around the disk, which is effectively protected in a high-
pressure “bubble” of gas. Isodensity contours and velocity vectors as in Figure 5.
Fig. 10.— Protoplanetary disk 4 years after first being hit by supernova ejecta. Gas is being
stripped from the disk (e.g., the clump between R = 35 and 45 AU). Gas stripped from the
top of the disk either is entrained in the flow of ejecta and escapes the simulation domain,
or flows under the disk and falls back onto it. Isodensity contours and velocity vectors as in
Figure 5.
Fig. 11.— Protoplanetary disk 50 years after first being hit by supernova ejecta. The disk is
substantially deformed by the high pressures in the surrounding shocked gas. The pressures
compress the disk in the z direction, and also effectively aid gravity in the r direction,
allowing the gas to orbit at smaller radii with the same angular momentum. Isodensity
contours and velocity vectors as in Figure 5.
Fig. 12.— (a) Protoplanetary disk 400 years after first being hit by supernova ejecta. At
this instant mass is being stripped off the top of the disk by a Kelvin-Helmholtz instability,
seen in detail in (b). Isodensity contours and velocity vectors as in Figure 5.
Fig. 13.— (a) Protoplanetary disk prior to impact by supernova ejecta (same as the relaxed
disk of Figure 4), and (b) 2000 years after first being struck by supernova ejecta. This “before
and after” picture of the disk illustrates how the disk recovers almost completely from the
shock of a nearby supernova. Isodensity contours and velocity vectors as in Figure 5.
Fig. 14.— Ratio of color mass to total mass within a given isodensity contour (abscissa).
The dashed line represents the mass ratio after allowing the relaxed disk to evolve for 2000
years in the absence of a supernova; the solid line represents the mass ratio after 2000 years
of interaction with the supernova shock (our canonical simulation). Supernova ejecta is
injected very effectively up to densities where the shock would stall (∼ 10−14 g cm−3), much
more effectively than can be accounted for by numerical diffusion alone.
Fig. 15.— Convergence properties of selected global variables. The global variables are: the
mass-weighted radius of the cloud, a; the mass-weighted cloud thickness, c; the dispersions
in the mass-weighted radial (δvr) and vertical (δvz) velocities; and the mass of ejecta gas
injected into the disk, Minj. The quantities are calculated at a time t = 500 years, but using
6 different numerical resolutions, n = 12, 22, 40, 54, 98 and 191. The deviation of each global
quantity Q from the highest-resolution value Q191 is plotted against numerical resolution n.
For our canonical simulation (n = 98), all quantities have converged at about the 10% level.
Fig. 16.— Density-weighted velocity along the z axis using the highest-resolution simulation
n = 191 (solid line) and the canonical resolution n = 98 (dotted line). The difference
between them is plotted as the dashed line. After absorbing the initial impulse of downward
momentum from the supernova ejecta, the disk oscillates vertically about the position of the
central protostar with a period ∼ 150 years, characteristic of the most affected gas at about
30 AU.
0704.1653 | Scaling cosmologies, geodesic motion and pseudo-susy | UG-07-01
Scaling cosmologies,
geodesic motion and pseudo-susy
Wissam Chemissany, André Ploegh and Thomas Van Riet
Centre for Theoretical Physics, University of Groningen,
Nijenborgh 4, 9747 AG Groningen, The Netherlands
w.chemissany, a.r.ploegh, [email protected]
Abstract
One-parameter solutions in supergravity carried by scalars and a metric trace out curves
on the scalar manifold. In ungauged supergravity these curves describe a geodesic motion.
It is known that a geodesic motion sometimes occurs in the presence of a scalar
potential and for time-dependent solutions this can happen for scaling cosmologies. This
note contains a further study of such solutions in the context of pseudo-supersymmetry for
multi-field systems whose first-order equations we derive using a Bogomol’nyi-like method.
In particular we show that scaling solutions that are pseudo-BPS must describe geodesic
curves. Furthermore, we clarify how to solve the geodesic equations of motion when the
scalar manifold is a maximally non-compact coset such as occurs in maximal supergravity.
This relies upon a parametrization of the coset in the Borel gauge. We then illustrate this
with the cosmological solutions of higher-dimensional gravity compactified on a n-torus.
http://arxiv.org/abs/0704.1653v3
Contents
1 Preliminaries
2 (Pseudo-) supersymmetry
3 Multi-field scaling cosmologies
3.1 Pure kinetic solutions
3.2 Potential-kinetic scaling solutions
4 Geodesic curves and the Borel gauge
4.1 A solution-generating technique
4.2 An illustration from dimensional reduction
5 Discussion
A Curvatures
B The coset SL(N, IR)/ SO(N)
1 Preliminaries
We consider scalar fields Φi that parametrize a Riemannian manifold with metric Gij
coupled to gravity through the standard action
S = ∫ d^D x √−g ( R − ½ Gij g^{µν} ∂µΦ^i ∂νΦ^j − V (Φ) ) . (1)
We restrict to solutions with the following D-dimensional space-time metric
ds²_D = g(y)² ds²_{D−1} + ǫ f(y)² dy² ,  ds²_{D−1} = (ηǫ)ab dx^a dx^b , (2)
where ǫ = ±1 and ηǫ = diag(−ǫ, 1, . . . , 1). The case ǫ = −1 describes a flat FLRW-space-
time and ǫ = +1 a Minkowski-sliced domain wall (DW) space-time. The scalar fields that
source these space-times can only depend on the y-coordinate Φi = Φi(y). The function f
corresponds to the gauge freedom of reparameterizing the y-coordinate.
Of particular interest in this note are scaling cosmologies, which have received a great
deal of attention in the dark-energy literature, see [1] for a review and references. One
definition (amongst many) of scaling cosmologies is that they are solutions for which all
terms in the Friedmann equation have the same time dependence. For pure scalar cosmologies
this implies that
H² ∼ V ∼ T ∼ τ⁻² , (3)
where τ denotes cosmic time, H the Hubble parameter and T is the kinetic energy
T = ½ Gij Φ̇^iΦ̇^j. These relations imply that the scale factor is power-law a(τ) ∼ τ^p. In the
case of curved FLRW-universes we also demand that H² ∼ k/a², which is only possible for
p = 1. Interestingly, scaling solutions correspond to the FLRW-geometries that possess a
time-like conformal vectorfield ξ coming from the transformation
τ → e^λ τ ,  x^a → e^{(1−p)λ} x^a , (4)
where xa are the space-like cartesian coordinates1. In what follows we reserve the indices
a, b, . . . to denote space-like coordinates when we consider cosmological space-times. Apart
from the intriguing cosmological properties of scaling solutions they are also interesting
for understanding the dynamics of a general cosmological solution since scaling solutions
are often critical points of an autonomous system of differential equations and therefore
correspond to attractors, repellors or saddle points [2]. Scaling cosmologies often appear in
supergravity theories (see for instance [3,4]) but, remarkably, they also appear by spatially
averaging inhomogeneous cosmologies in classical general relativity [5].
We will use two coordinate frames to describe scaling cosmologies
τ − frame : ds² = −dτ² + τ^{2p} ds²_{D−1} , (5)
t − frame : ds² = −e^{2t} dt² + e^{2pt} ds²_{D−1} . (6)
The first is the usual FLRW-coordinate system and the second can be obtained by the
substitution t = ln τ .
2 (Pseudo-) supersymmetry
If the scalar potential V (Φ) can be written in terms of another function W (Φ) as follows
V = ǫ ( ½ G^{ij} ∂iW ∂jW − (D−1)/(4(D−2)) W² ) , (7)
then the action can be written as “a sum of squares” plus a boundary term when reduced
to one dimension:
S = ∫ dy f g^{D−1} { ǫ (D−1)/(4(D−2)) [ W − 2(D−2) ġ/(gf) ]² − (ǫ/2) || Φ̇^i/f + G^{ij}∂jW ||² }
+ ǫ ∫ dy d/dy [ g^{D−1} W − 2(D−1) ġ g^{D−2} f^{−1} ] , (8)
where a dot denotes a derivative w.r.t. y. The term ||Φ̇i/f + Gij∂jW ||2 is a shorthand
notation and the square involves a contraction with the field metric Gij. It is clear that
the action is stationary under variations if the terms within brackets are zero2, leading to
the following first-order equations of motion
W = 2(D−2) ġ/(gf) ,  Φ̇^i/f + G^{ij}∂jW = 0 . (9)
1For curved FLRW-space-times the space-like coordinates are invariant.
2For completeness we should have added the Gibbons-Hawking term [6] in the action which deletes
that part of the above boundary term that contains ġ.
For ǫ = +1 these equations are the standard Bogomol’nyi-Prasad-Sommerfield (BPS)
equations for domain walls that arise from demanding the susy-variation of the fermions to
vanish, which guarantees that the DW preserves a fraction of the total supersymmetry of
the theory. The function W is then the superpotential that appears in the susy-variation
rules and equation (7) with ǫ = +1 is natural for supergravity theories. It is clear that for
every W that obeys (7) we can find a corresponding DW-solution, and if W is not related
to the susy-variations we call the solutions fake supersymmetric [7].
For ǫ = −1 these equations are the generalization to arbitrary space-time dimension D
and field metric Gij of the framework found in references [8–11]. So here we generalized
and derived in a different way (some of) the results of [8–11] by showing that analogously
to DW’s we can write the Lagrangian as a sum of squares. We refer to these first-order
equations as pseudo-BPS equations and W is named the pseudo-superpotential because
of the immediate analogy with BPS domain walls in supergravity [10, 11]. For the case of
cosmologies there is no natural choice for W as cosmologies cannot be found by demanding
vanishing susy-variations of the fermions3.
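Although not part of the original text, the consistency of the first-order equations (9) with the second-order dynamics can be checked symbolically. The sketch below (SymPy) assumes a single scalar in D = 4 with ǫ = −1 and the gauge f = 1, so that the pseudo-BPS equations read φ̇ = −∂φW and H = ġ/g = W/4, and the potential (7) takes the form V = −½(∂φW)² + (3/8)W²; these conventions are our reading of the (garbled) displayed equations. For an arbitrary W(φ) the cosmological field equation φ̈ + 3Hφ̇ + ∂φV = 0 then holds identically:

```python
import sympy as sp

phi = sp.symbols('phi')
W = sp.Function('W')(phi)          # arbitrary pseudo-superpotential

# Pseudo-BPS equations (9) for D = 4, epsilon = -1, gauge f = 1
phidot = -sp.diff(W, phi)          # phidot = -dW/dphi
H = W / 4                          # gdot/g = W / (2(D-2)) = W/4
# Chain rule: phiddot = d(phidot)/dphi * phidot
phiddot = sp.diff(phidot, phi) * phidot

# Potential of equation (7): V = -(1/2) W'^2 + (3/8) W^2
V = -sp.Rational(1, 2) * sp.diff(W, phi)**2 + sp.Rational(3, 8) * W**2

# Second-order scalar equation of motion: phiddot + 3 H phidot + dV/dphi
residual = sp.simplify(phiddot + 3 * H * phidot + sp.diff(V, phi))
print(residual)                    # cancels identically for any W(phi)
```

The cancellation for arbitrary W is exactly the statement that pseudo-BPS flows are automatic solutions of the equations of motion.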
In [11] it is proven that for all single-scalar cosmologies (and domain walls) a pseudo-
superpotential W exists such that the cosmology is pseudo-BPS and that one can give
a fermionic interpretation of the pseudo-BPS flow in terms of so-called pseudo-Killing
spinors. This does not necessarily carry over to multi-scalar solutions as was shown in [14].
Nonetheless, a multi-field solution can locally be seen as a single-field solution [15] because
locally we can redefine the scalar coordinates such that the curve Φ(y) is aligned with a
scalar axis and all other scalars are constant on this solution. A necessary condition for
the single-field pseudo-BPS flow to carry over (locally) to the multi-field system is that the
truncation down to a single scalar is consistent (this means that apart from the solution
one can put the other scalars always to zero) [14].
3 Multi-field scaling cosmologies
Let us turn to scaling solutions in the framework of pseudo-supersymmetry and see how
geodesic motion arises. First we consider the rather trivial case with vanishing scalar
potential V and then in section 3.2 we add a scalar potential V . Pseudo-supersymmetry
is only discussed in the case of non-vanishing V .
3.1 Pure kinetic solutions
If there is no scalar potential the solutions trace out geodesics since after a change of
coordinates y → ỹ(y) via dỹ = f g^{1−D} dy, the scalar field action becomes proportional to
∫ Gij Φ′^iΦ′^j dỹ,
where a prime means a derivative w.r.t. ỹ. This new action describes geodesic curves
with affine parameter ỹ. The affine velocity is constant by definition and positive since the
metric is positive definite
Gij Φ′^iΦ′^j = ||v||² . (10)
3Star supergravity is an exception [12] and that seems related to pseudo-supersymmetry [13].
The Einstein equation is
Ryy = ½ Gij Φ̇^iΦ̇^j = ½ ||v||² g^{2−2D} f² ,  Rab = 0 . (11)
In the gauge f = 1 the solution is given by g = e^{C2}(y + C1)^{1/(D−1)}, with C1 and C2 arbitrary
integration constants, but with a shift of y we can always put C1 = 0 and C2 can always
be put to zero by re-scaling the space-like coordinates. In the case of a four-dimensional
cosmology the geometry is a power-law FLRW-solution with p = 1/3.
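This scale factor can be checked in a few lines. The sketch below (SymPy; it assumes the standard FLRW-type curvature formula R_yy = −(D−1)g̈/g for ǫ = −1, f = 1, which is our assumption rather than a formula stated in the text) recovers the affine velocity that the Einstein equation (11) then enforces:

```python
import sympy as sp

y, D = sp.symbols('y D', positive=True)
g = y**(1/(D - 1))                 # claimed scale factor (gauge f = 1, C1 = C2 = 0)

# FLRW-type curvature for epsilon = -1, f = 1 (assumed convention)
Ryy = -(D - 1) * sp.diff(g, y, 2) / g

# Einstein equation (11) with f = 1: R_yy = (1/2) ||v||^2 g^(2-2D),
# so the affine velocity is fixed to ||v||^2 = 2 R_yy g^(2D-2)
v2 = sp.simplify(2 * Ryy * g**(2*D - 2))
print(v2)                           # equals 2(D-2)/(D-1); for D = 4 this is 4/3
```

For D = 4 this gives ||v||² = 4/3, in agreement with the Friedmann constraint quoted later in equation (33).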
3.2 Potential-kinetic scaling solutions
In a recent paper of Tolley and Wesley an interesting interpretation was given to scaling
solutions [16], which we repeat here. The finite transformation (4) leaves the equations
of motion invariant if the action S scales with a constant factor, which is exactly what
happens for scaling solutions since all terms in the Lagrangian scale like τ−2. Under (4)
the metric scales like e2λgµν and in order for the action to scale as a whole we must have
V → e^{−2λ} V ,  T = −½ g^{ττ} Gij Φ̇^iΦ̇^j → e^{−2λ} T . (12)
Equations (12) imply that Gij Φ̇^iΦ̇^j remains invariant from which one deduces that dΦ/d ln τ
must be a Killing vector. The curve that describes a scaling solution follows an isometry
of the scalar manifold. It depends on the parametrization whether the tangent vector Φ̇
itself is Killing. This happens for the parametrization in terms of t = ln τ since
Φ̇^i = dΦ^i/d ln τ = lim_{λ→0} [ Φ^i(e^λ τ) − Φ^i(τ) ]/λ . (13)
Thus a scaling solution is associated with an invariance of the equations of motion for
a rescaling of cosmic time and is therefore associated with a conformal Killing vector on
space-time and a Killing vector on the scalar manifold.
Pseudo-supersymmetry comes into play when we check the geodesic equation of motion
∇_Φ̇ Φ̇i = Φ̇^j ∇j Φ̇i = Φ̇^j ( ∇(j Φ̇i) + ∇[j Φ̇i] ) , (14)
where we denote Φ̇i = Gik Φ̇^k. Now we have that the symmetric part is zero if we
parametrize the curve with t = ln τ since scaling makes Φ̇ a Killing vector. We also have
that ∇[jΦ̇i] = 0 since the pseudo-BPS condition makes Φ̇ a curl-free flow Φ̇i = −f∂iW . To
check that the curl is indeed zero (when f 6= 1) one has to notice that in the parametriza-
tion of the curve in terms of t = ln τ the gauge is such that ġ/g is constant and that
f ∼ W−1. Since the curl is also zero we notice that the curve is a geodesic with ln τ as
affine parametrization4
∇_Φ̇ Φ̇^i = 0 = Φ̈^i + Γ^i_{jk} Φ̇^jΦ̇^k . (15)
4 One could wonder whether the result works in two ways. Imagine that a scaling solution is a geodesic.
This then implies that ∇[jΦ̇i] = 0 and therefore the flow is locally a gradient flow Φ̇i = ∂i lnW ∼ f∂iW .
The link between scaling and geodesics was discovered by Karthauser and Saffin in [17],
but no conditions on the Lagrangian were given in [17] such that the relation scaling-
geodesic holds. An example of a scaling solution that is not a geodesic was given by
Sonner and Townsend in [18].
A more intuitive understanding of the origin of the geodesic motion for some scaling
cosmologies comes from the on-shell substitution V = (3p− 1) T in the Lagrangian to get
a new Lagrangian describing seemingly massless fields. Although this is rarely a consistent
procedure we believe that this is nonetheless related to the existence of geodesic scaling
solutions.
Single field
For single-field models the potential must be exponential V = Λeαφ in order to have scaling
solutions. The simplest pseudo-superpotential belonging to an exponential potential is
itself exponential
W = ± √( 8Λ/(3 − α²) ) e^{αφ/2} . (16)
If we choose the plus sign the solution to the pseudo-BPS equation is
φ(τ) = −(2/α) ln τ + (1/α) ln[ (6 − 2α²)/(α⁴Λ) ] ,  g(τ) ∼ τ^{1/α²} . (17)
The minus sign corresponds to the time reversed solution.
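The exponential-potential solution can be verified numerically. The sketch below is not from the paper: it assumes the D = 4 conventions used in our reading of (7) and (9), namely pseudo-BPS equations φ̇ = −∂φW and H = W/4 together with the Friedmann constraint 3H² = ¼φ̇² + ½V, and checks that (16)–(17) satisfy all of them for the sample values α = Λ = 1:

```python
import numpy as np

alpha, Lam = 1.0, 1.0                       # sample values (alpha^2 < 3)
tau = np.linspace(1.0, 10.0, 201)

# Equation (17): phi = -(2/alpha) ln tau + (1/alpha) ln[(6 - 2 alpha^2)/(alpha^4 Lam)]
phi = -(2/alpha)*np.log(tau) + (1/alpha)*np.log((6 - 2*alpha**2)/(alpha**4*Lam))
phidot = -(2/alpha)/tau
H = 1/(alpha**2*tau)                        # from g ~ tau^(1/alpha^2)

V = Lam*np.exp(alpha*phi)                   # exponential potential
W = np.sqrt(8*Lam/(3 - alpha**2))*np.exp(alpha*phi/2)   # equation (16), plus sign

assert np.allclose(phidot, -(alpha/2)*W)    # pseudo-BPS: phidot = -dW/dphi
assert np.allclose(H, W/4)                  # pseudo-BPS: H = W/4
assert np.allclose(3*H**2, 0.25*phidot**2 + 0.5*V)   # Friedmann constraint
```

With these conventions the scaling exponent is p = 1/α², consistent with the relation (19) below applied to V = Λe^{αφ}.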
Multiple fields
For a general multi-field model a scaling solution with power-law scale factor τ p obeys
V = (3p− 1)T from which we derive the on-shell relation
G^{ij} ∂iW ∂jW = 2V/(3p − 1)  ⇒  W = ± √( 8 p V/(3p − 1) ) . (18)
In general the above expression for the superpotential W ∼ √V does not hold off-shell,
unless the potential is a function of a specific kind:
p = V² / ( G^{ij} ∂iV ∂jV ) . (19)
Scalar potentials that obey (19) with the extra condition that p ≷ 1/3 ↔ V ≷ 0 allow for
multi-field scaling solutions. For a given scalar potential that obeys (19) there probably
exist many pseudo-superpotentials W compatible with V but if we make the specific choice
W = ±√( 8 p V/(3p − 1) ) then all pseudo-BPS solutions must be scaling and hence geodesic.
As a consistency check we substitute the first-order pseudo-BPS equations into the
right-hand side of the following second-order equations of motion
Φ̈^i + Γ^i_{jk} Φ̇^kΦ̇^j = −f² G^{ij}∂jV − ( 3 (ln g)˙ − (ln f)˙ ) Φ̇^i , (20)
and choose a gauge for which
f = 4p/W , (21)
then we indeed find an affine geodesic motion since the right hand side of (20) vanishes.
For some systems one first needs to perform a truncation in order to find the above
relation (19). A good example is the multi-field potential appearing in Assisted Inflation
V (Φ1, . . . ,Φn) = Σ_{i=1}^{n} Λi e^{αiΦi} ,  Gij = δij . (22)
The scaling solution of this system was proven to be the same as the single-exponential
scaling [20]. The reason is that one can perform an orthogonal transformation in field
space such that the form of the kinetic term is preserved but the scalar potential is given by
V = e^{αϕ} U(Φ1, . . . ,Φn−1) ,  1/α² = Σ_{i=1}^{n} 1/αi² . (23)
The scaling solution is such that Φ1, . . . ,Φn−1 are frozen in a stationary point of U and
therefore the system is truncated to a single-field system that obeys (19). The same was
proven for Generalized Assisted Inflation [21] in reference [22]. The scaling solution in the
original field coordinates reads Φi = Ai ln τ + Bi, which is clearly a straight line and thus
a geodesic.
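This straight-line solution is easy to exhibit explicitly. The sketch below assumes the Assisted Inflation potential V = Σi Λi e^{αiΦi} (the form taken in eq. (22)) and the same D = 4 conventions as before (field equations Φ̈i + 3HΦ̇i + ∂iV = 0 and Friedmann constraint 3H² = ¼ΣΦ̇i² + ½V); with p = Σi 1/αi² it checks that Φi = Ai ln τ + Bi solves everything:

```python
import numpy as np

alphas = np.array([2.0, 3.0])
Lams = np.array([1.0, 1.0])
p = np.sum(1/alphas**2)                 # 13/36 here; p > 1/3 so V > 0

tau = np.linspace(1.0, 10.0, 201)
A = -2/alphas                           # slopes of the straight-line solution
B = np.log((6*p - 2)/(Lams*alphas**2))/alphas

phi = A[:, None]*np.log(tau) + B[:, None]    # Phi_i = A_i ln tau + B_i: a geodesic
phidot = A[:, None]/tau
phiddot = -A[:, None]/tau**2
H = p/tau
V = np.sum(Lams[:, None]*np.exp(alphas[:, None]*phi), axis=0)

# Field equations: phiddot_i + 3 H phidot_i + dV/dphi_i = 0
dV = alphas[:, None]*Lams[:, None]*np.exp(alphas[:, None]*phi)
assert np.allclose(phiddot + 3*H*phidot + dV, 0)
# Friedmann constraint: 3 H^2 = (1/4) sum_i phidot_i^2 + (1/2) V
assert np.allclose(3*H**2, 0.25*np.sum(phidot**2, axis=0) + 0.5*V)
```

The constants Ai and Bi above are our explicit solution of the field equations under these assumptions; the straight line in field space is the geodesic the text refers to.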
The scaling solutions of [14, 18] were constructed for an axion-dilaton system with an
exponential potential for the dilaton
S = ∫ d⁴x √−g ( R − ½ (∂φ)² − ½ e^{µφ}(∂χ)² − Λ e^{αφ} ) . (24)
Clearly this two-field system obeys (19) and (one of) the pseudo-superpotential(s) is given
by (16). The pseudo-BPS scaling solution therefore has constant axion and is effectively
described by the dilaton in an exponential potential. Note that this solution indeed
describes a geodesic on SL(2, IR)/ SO(2) with ln τ as affine parameter. All examples of scaling
solutions in the literature seem to occur for exponential potentials, however by performing
a SL(2, IR)-transformation on the Lagrangian (24) the kinetic term is unchanged and the
potential becomes a more complicated function of the axion and the dilaton. The same
scaling solution then trivially still exists (and (19) still holds) but the axion is not
constant in the new frame and instead the solution follows a more complicated geodesic on
SL(2, IR)/ SO(2).
However another scaling solution is given in [18] that is not geodesic and with varying
axion in the frame of the above action (24). This is an illustration of the above, since the
solution is not geodesic we know that there does not exist any other pseudo-superpotential
for which the varying axion solution is pseudo-BPS, consistent with what is shown in [14]
for that particular solution.
4 Geodesic curves and the Borel gauge
For the last example of the previous section the pseudo-BPS scaling solutions described
geodesics on the symmetric space SL(2, IR)/ SO(2). In this section we consider a general
class of symmetric spaces of which SL(2, IR)/ SO(2) is an example and they are known as
maximally non-compact cosets U/K. It seems that for this class of spaces the geodesic
equations of motion can be solved easily. The symmetry of the geodesic equations is the
symmetry of the scalar coset U/K. In the case of maximal supergravity the symmetry U is
a U-duality and is a maximal non-compact real slice of a complex semisimple group. The
isotropy group K is the maximal compact subgroup of U .
4.1 A solution-generating technique
In the Borel gauge the scalar fields are divided into r dilatons φI and (n − r) axions χα,
with r the rank of U and n the dimension of U/K (see for instance [23]). The dilatons
are related to the generators HI of the Cartan sub-algebra (CSA) and the axions to the
positive root generators Eα through the following expression for the coset representative
L in the Borel gauge
L = Π_α exp[ χ^α E_α ] · Π_I exp[ −½ φ^I H_I ] . (25)
In this language the geodesic equation is
φ̈^I + Γ^I_{JK} φ̇^J φ̇^K + Γ^I_{αJ} χ̇^α φ̇^J + Γ^I_{αβ} χ̇^α χ̇^β = 0 , (26)
χ̈^α + Γ^α_{JK} φ̇^J φ̇^K + Γ^α_{βJ} χ̇^β φ̇^J + Γ^α_{βγ} χ̇^β χ̇^γ = 0 . (27)
Since Γ^I_{JK} = 0 and Γ^α_{JK} = 0 at points for which χ^α = 0 a trivial solution is given by
φ^I = v^I y ,  χ^α = 0 . (28)
How many other solutions are there? A first thing we notice is that every global U -
transformation Φ → Φ̃ brings us from one solution to another solution. Since U generically
mixes dilatons and axions we can construct solutions with non-trivial axions in this way.
We now prove that in this way all geodesics are obtained and this depends on the fact that
U is maximally non-compact with K the maximal compact subgroup of U .
Consider an arbitrary geodesic curve Φ(t) on U/K. The point Φ(0) can be mapped to
the origin L = 1 using a U -transformation, since we can identify Φ(0) with an element of U
and then we multiply the geodesic curve Φ(t) with Φ(0)−1, generating a new geodesic curve
Φ2(t) = Φ(0)⁻¹Φ(t) that goes through the origin. The origin is invariant under K-rotations
but the tangent space at the origin transforms under the adjoint of K. One can prove that
there always exists an element k ∈ K, such that AdjkΦ̇2(0) ∈ CSA [24]. Therefore χ̇α2 = 0
and this solution must be a straight line. So we started out with a general curve Φ(t) and
proved that the curve Φ3(t) = kΦ(0)⁻¹Φ(t) is a straight line.
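The argument can be illustrated numerically on SL(n, IR)/ SO(n). The sketch below (an illustration, not from the paper) builds M = Ω D Ωᵀ with D = diag(e^{v_a t}), Σa v_a = 0, for a random SL(3, IR)-type matrix Ω, and checks that the "velocity" M⁻¹Ṁ is constant along the curve, which is the geodesic property, while det M = 1 is preserved:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Random matrix with |det| = 1 (enters the solution only as O D O^T)
O = rng.standard_normal((n, n))
O /= np.abs(np.linalg.det(O))**(1.0/n)

v = np.array([0.7, -0.2, -0.5])          # traceless: sum(v) = 0

def M(t):
    """Coset matrix M(t) = O diag(exp(v t)) O^T: the general geodesic Ansatz."""
    return O @ np.diag(np.exp(v*t)) @ O.T

def velocity(t, h=1e-5):
    """Numerical M^{-1} dM/dt via central differences."""
    Mdot = (M(t + h) - M(t - h))/(2*h)
    return np.linalg.inv(M(t)) @ Mdot

# M^{-1} Mdot is t-independent, i.e. d/dt (M^{-1} Mdot) = 0: geodesic motion
V0, V1 = velocity(0.3), velocity(1.7)
assert np.allclose(V0, V1, atol=1e-6)
# det M stays 1 along the whole curve
assert abs(np.linalg.det(M(2.0)) - 1.0) < 1e-8
```

Analytically, M⁻¹Ṁ = Ω⁻ᵀ diag(v) Ωᵀ, so conjugating the straight-line solution by a group element indeed sweeps out all geodesics, in line with the proof above.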
4.2 An illustration from dimensional reduction
The metric Ansatz for the dimensional reduction of (4 + n)-dimensional Einstein-gravity
on the n-torus (Tn) is
ds²_{4+n} = e^{2αϕ} ds²_4 + e^{2βϕ} Mab dz^a ⊗ dz^b , (29)
where
α = √( n/(4(n+ 2)) ) ,  β = −(2/n) α . (30)
The matrix M is a positive-definite symmetric n×n matrix with unit determinant, which
depends on the 4-dimensional coordinates, describing the moduli of Tn. The modulus
ϕ controls the overall volume and is named the breathing mode or radion field. Notice
that we already truncated the Kaluza–Klein vectors in the Ansatz. The reduction of the
Einstein–Hilbert term gives
∫ d⁴x √−g { R − ½ (∂ϕ)² + ¼ Tr ∂M ∂M⁻¹ } . (31)
The scalars parametrize IR × SL(n, IR)/ SO(n) where ϕ belongs to the decoupled IR-part
and M is the SL(n, IR)/ SO(n) part.
If we take the four-dimensional part of space-time to be a flat FLRW-space then that
part of the metric will be power-law with p = 1/3 and the scalars follow a geodesic with
ln τ as an affine parameter. According to the solution-generating technique, the Ansatz for
the scalars is
ϕ = v₀ ln τ + c₀ ,  M = Ω D Ωᵀ ,  D = diag( e^{−~βa·~φ} ) , (32)
with ~φ = ~v ln τ and ~βa the weights of SL(n, IR) in the fundamental representation (see
appendix B for some explanations on the SL(n, IR)/ SO(n)-coset in this representation).
The diagonal matrix D represents the straight-line solution and Ω is an arbitrary SL(n, IR)-
matrix in the fundamental representation. Therefore M = ΩDΩT is the most general coset
matrix describing a geodesic curve.
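As a sanity check on this claim, the geodesic property of M = ΩDΩ^T can be tested numerically: along a geodesic of the sigma-model metric (1/4)Tr(∂M∂M^{-1}), the matrix M^{-1}Ṁ is conserved. The Python sketch below (an illustration with arbitrary numbers, not part of the paper's derivation) verifies this for a random Ω ∈ SL(n, IR) and a random traceless exponent vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Traceless exponent vector: D(t) = diag(exp(c_a t)) then has det D = 1.
c = rng.normal(size=n)
c -= c.mean()

# Random Omega in SL(n, IR): fix the sign of the determinant, then rescale.
A = rng.normal(size=(n, n))
if np.linalg.det(A) < 0:
    A[[0, 1]] = A[[1, 0]]          # swapping rows flips the determinant sign
Omega = A / np.linalg.det(A) ** (1.0 / n)

def M(t):
    """Coset matrix M(t) = Omega D(t) Omega^T."""
    return Omega @ np.diag(np.exp(c * t)) @ Omega.T

def Mdot(t, h=1e-6):
    """Central finite difference for dM/dt."""
    return (M(t + h) - M(t - h)) / (2 * h)

# Along a geodesic of the sigma-model metric (1/4) Tr dM dM^{-1},
# the "momentum" M^{-1} dM/dt is conserved; check it at two times.
Q0 = np.linalg.inv(M(0.0)) @ Mdot(0.0)
Q1 = np.linalg.inv(M(1.3)) @ Mdot(1.3)
print(np.allclose(Q0, Q1, atol=1e-4))   # True
```

Conversely, a curve M(t) for which this matrix drifts in t is not a geodesic of the coset metric.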
The Friedmann equation implies that the affine velocity is restricted to be
v_0^2 + ||~v||^2 = 4/3 , (33)
which is the only constraint coming from the 4-dimensional Einstein equation. If we sub-
stitute this solution in (29) and define new coordinates ~y = ~zΩ we find
ds^2_{4+n} = −τ^{2αv_0} dτ^2 + τ^{2/3 + 2αv_0} d~x_3^2 + Σ_a τ^{−~β_a·~v + 2βv_0} dy_a^2 . (34)
This is similar to what is called a Kasner solution in general relativity (see for instance [25]).
Kasner solutions are a general class of time-dependent geometries that look like
ds^2 = −τ^{2p_0} dτ^2 + Σ_a τ^{2p_a} dx_a^2 . (35)
Kasner solutions solve the Einstein equations in vacuum if the following two conditions are
satisfied
p_0 + 1 = Σ_a p_a , (p_0 + 1)^2 = Σ_a p_a^2 . (36)
For the metric (34) these conditions are satisfied if the lower-dimensional Friedmann equa-
tion is satisfied. For this calculation one needs the properties of the weight-vectors ~βa
(given in appendix B) and the relation between α and β (30). We therefore conclude that
the general spatially flat FLRW-solution lifts up to the most general Kasner solution with
SO(3)-symmetry in 4 + n dimensions.
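This conclusion can be checked numerically. The Python sketch below (using the conventions α² = n/(4(n+2)), β = −2α/n of eq. (30), and one convenient explicit realization of the SL(n, IR) weight vectors of eq. (47)) draws random affine velocities obeying the Friedmann constraint (33) and verifies that the exponents read off from the lifted metric (34) satisfy both Kasner conditions (36):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                      # number of torus directions

# Weight vectors beta_a of SL(n, IR) in the fundamental representation:
# rows of B live in the (n-1)-dim space orthogonal to (1, ..., 1) and are
# normalized so that sum_a B[a, I] * B[a, J] = 2 * delta_IJ  (eq. 47).
P = np.eye(n) - np.ones((n, n)) / n        # projector orthogonal to ones
evals, evecs = np.linalg.eigh(P)
B = np.sqrt(2.0) * evecs[:, evals > 0.5]   # shape (n, n-1)

alpha = np.sqrt(n / (4.0 * (n + 2.0)))
beta = -2.0 * alpha / n                    # eq. (30)

# Random affine velocities obeying the Friedmann constraint (33).
v = rng.normal(size=n - 1)
v0 = rng.normal()
norm = np.sqrt((v0**2 + v @ v) / (4.0 / 3.0))
v0, v = v0 / norm, v / norm                # now v0^2 + |v|^2 = 4/3

# Kasner exponents read off from the lifted metric (34).
p0 = alpha * v0
p_flrw = np.full(3, 1.0 / 3.0 + alpha * v0)   # three FLRW directions
p_torus = beta * v0 - (B @ v) / 2.0           # n torus directions
p = np.concatenate([p_flrw, p_torus])

print(np.isclose(p.sum(), p0 + 1.0))              # first condition of (36): True
print(np.isclose((p**2).sum(), (p0 + 1.0) ** 2))  # second condition of (36): True
```

The first condition holds identically (it only uses nβ = −2α and the tracelessness of the weights); the second reduces exactly to the Friedmann constraint (33).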
5 Discussion
In this note we have studied multi-field scaling solutions using a first-order formalism for
scalar cosmologies a.k.a. pseudo-supersymmetry. We derived these first-order equations
via a Bogomol’nyi-like method that was known to work for domain wall solutions as was
first shown in [26, 27]^5 and we showed that it trivially extends to cosmological solutions.
This first-order formalism allows a better understanding of the geodesic motion that comes
with a specific class of scaling solutions. One of the main results of this note is a proof that
shows that all pseudo-BPS cosmologies that are scaling solutions must be geodesic. This
complements the discussion in [14] where the first example of a non-geodesic scaling
cosmology was shown to be non-pseudo-BPS. Moreover we gave constraints on multi-field
Lagrangians for which the pseudo-BPS cosmologies are geodesic scaling solutions.
Having illustrated the importance of geodesic motion in scalar cosmology, we tackled
the problem of solving the geodesic equations in the second part of this note. We showed
that the most general geodesic curve can be written down for maximally non-compact
coset spaces U/K. These coset spaces appear in all maximal and some less-extended
supergravities [29]. We used a solution-generating technique based on the symmetries
of the coset. We were able to prove that the most general solution is given by a U-
transformation on the “straight line” (φ^I(t) = v^I t, χ^α = 0) in the Borel gauge. We
illustrated this technique for the coset SL(n, IR)/ SO(n). Since SL(n, IR)/ SO(n) is also
the moduli space of the n-torus we applied it to find the cosmological solutions of higher-
dimensional gravity compactified on an n-torus. This exercise nicely illustrates why the
straight line is the generating solution since, from a higher-dimensional point of view,
all solutions that correspond to the non-straight line geodesics can be seen as coordinate
transformations of the solutions associated with the straight line. The oxidation of the
straight line solutions corresponds to the most general SO(3)-invariant Kasner solution of
(4 + n)-dimensional vacuum GR.
The same technique was used in [3] to find all geodesic scaling cosmologies of the
CSO-gaugings in maximal supergravity.
The solution-generating technique presented here should be considered complementary
to the “compensator method” developed by Fré et al in [30]. There the straight line
^5 See also [28].
also serves as a generating solution but instead of rigid U -transformations one uses local K
transformations that preserve the solvable gauge to generate new non-trivial solutions. This
technique is a nice illustration of the integrability of the second–order geodesic equations
of motion [31].
Acknowledgments
We are grateful to Dennis Westra for useful discussions and comments on the manuscript
and to Jan Rosseel for many useful discussions. This work is supported in part by the Eu-
ropean Community's Human Potential Programme under contract MRTN-CT-2004-005104
in which the authors are associated to Utrecht University. The work of AP and TVR is
part of the research programme of the Stichting voor Fundamenteel Onderzoek der Materie
(FOM).
A Curvatures
For the metric Ansatz (2) the Ricci tensor is given by
R_{ab} = −ǫ(η_ǫ)_{ab} ( g̈g/f^2 − gġḟ/f^3 + (D − 3) ġ^2/f^2 ) , R_{yy} = (D − 1)( ġḟ/(gf) − g̈/g ) . (37)
B The coset SL(N, IR)/ SO(N)
Consider a general coset U/K. It is not difficult to construct a coset representative using
the Lie algebras U and K of U and K respectively. Since K is a subgroup of U we have the
decomposition U = K ⊕ F, with F the complement of K in U. For a given representation
of the algebra U we define a coset representative via L(y) = exp(yifi) where the fi form a
basis of F in some representation of U.
To derive the metric we define a Lie algebra valued one-form from the coset represen-
tative L(y) via
L−1dL ≡ E + Ω , (38)
where E takes values in F and Ω in K. We notice that L−1dL is invariant under left
multiplication with a y-independent element g ∈ U . Multiplying L from the right with
local elements k ∈ K results in
E → k−1E k , Ω → k−1Ω k + k−1dk . (39)
In supergravity the parameters yi are scalar fields that depend on the space-time coordi-
nates yi = φi(x). The one-form L−1dL can be written out in terms of coset-coordinate
one-forms dφi which themselves can be pulled back to space-time coordinate one-forms
dφ^i = ∂_µφ^i dx^µ. Now we can write

L^{-1}dL = E_µ dx^µ + Ω_µ dx^µ . (40)
Under the φ-dependent K-transformations k(φ(x)) we have that Ωµ → k−1Ωµk + k−1∂µk
and Eµ → k−1Eµk. It is clear that Eµ is covariant under local K-transformations and
Ωµ transforms like a connection. Using this connection Ωµ we can make the following
K-covariant derivative on L and L−1
D_µL = ∂_µL − LΩ_µ , D_µL^{-1} = ∂_µL^{-1} + Ω_µL^{-1} . (41)
To find a kinetic term for the scalars we notice that the object
Tr[D_µL D^µL^{-1}] = −Tr[E_µE^µ] , (42)
has all the right properties as it contains single derivatives on the scalars, it is a space-time
scalar, it is invariant under rigid U transformations and under local K-transformations.
Thus,
e^{-1}L_scalar = −Tr[E_µE^µ] ≡ −(1/2) g(φ)_{ij} ∂_µφ^i ∂^µφ^j . (43)
If SO(N) is the maximal compact subgroup of U and we work in the fundamental
representation, then the Lie algebra of SO(N) is the vector space of antisymmetric matrices,
E = (1/2)( L^{-1}dL + (L^{-1}dL)^T ) , Ω = (1/2)( L^{-1}dL − (L^{-1}dL)^T ) , (44)
and a calculation shows that
e^{-1}L_scalar = −Tr[E^2] = +(1/4) Tr[∂M ∂M^{-1}] , (45)
where M is the SO(N)-invariant matrix M = LLT .
Now we specialize to U = SL(N, IR). In general SL(N, IR) has rank N − 1 and its maximal
compact subgroup is SO(N). There will therefore be N−1 dilaton fields φI and N(N−1)/2
axion fields χα. The Cartan generators are given in terms of the weights ~β of SL(N, IR) in
the fundamental representation
( ~H )_{ij} = ~β_i δ_{ij} . (46)
The weights can be taken to obey the following algebra
Σ_i β_{iI} = 0 , Σ_i β_{iI} β_{iJ} = 2δ_{IJ} , ~β_i · ~β_j = 2δ_{ij} − 2/N . (47)
The first of these identities holds in all bases since it follows from the tracelessness of the
SL generators. The second and third identity can be seen as convenient normalizations of
the generators. The positive step operators Eij are all upper triangular and a handy basis
is that they have only one non-zero entry [Eij ]ij = 1. The negative step operators are the
transpose of the positive. The SO(N) algebra is spanned by the following combinations
( E_β − E_{−β} ) . (48)
The action will generically look complicated but when all axions are set to zero L is diagonal
L = diag[ exp(−(1/2) ~β_i · ~φ) ] and the action becomes

(1/4) Tr ∂M ∂M^{-1} = −(1/4) ( Σ_i β_{iI} β_{iJ} ) ∂φ^I ∂φ^J = −(1/2) δ_{IJ} ∂φ^I ∂φ^J . (49)
This action describes N − 1 dilatons that parametrize the flat scalar manifold IRN−1.
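The identities (47) and the collapse (49) to free dilatons can be verified numerically. The Python sketch below uses one explicit realization of the weight vectors (rows of a matrix B built from the projector orthogonal to (1, ..., 1); any other realization differs from it by an SO(N−1) rotation of the dilaton space):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5

# Rows of B are the N fundamental weights of SL(N, IR), realized in an
# (N-1)-dimensional dilaton space.
P = np.eye(N) - np.ones((N, N)) / N
evals, evecs = np.linalg.eigh(P)
B = np.sqrt(2.0) * evecs[:, evals > 0.5]        # shape (N, N-1)

# The three identities of eq. (47):
print(np.allclose(B.sum(axis=0), 0))                  # sum_i beta_iI = 0
print(np.allclose(B.T @ B, 2 * np.eye(N - 1)))        # sum_i b_iI b_iJ = 2 d_IJ
print(np.allclose(B @ B.T, 2 * np.eye(N) - 2.0 / N))  # b_i . b_j = 2 d_ij - 2/N

# Eq. (49): with the axions off, M = L L^T is diagonal and the sigma-model
# kinetic term collapses to N-1 free dilatons.
phi = rng.normal(size=N - 1)                     # field values
dphi = rng.normal(size=N - 1)                    # stand-in for d(phi)/dx
eps = 1e-6

def Mat(p):
    return np.diag(np.exp(-B @ p))               # M = diag(exp(-beta_i . phi))

dM = (Mat(phi + eps * dphi) - Mat(phi - eps * dphi)) / (2 * eps)
dMinv = (np.linalg.inv(Mat(phi + eps * dphi))
         - np.linalg.inv(Mat(phi - eps * dphi))) / (2 * eps)

lhs = 0.25 * np.trace(dM @ dMinv)
rhs = -0.5 * dphi @ dphi
print(np.isclose(lhs, rhs))                      # eq. (49): True
```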
References
[1] E. J. Copeland, M. Sami and S. Tsujikawa, Dynamics of dark energy, Int. J. Mod.
Phys. D15 (2006) 1753–1936 [hep-th/0603057].
[2] E. J. Copeland, A. R. Liddle and D. Wands, Exponential potentials and cosmological
scaling solutions, Phys. Rev. D57 (1998) 4686–4690 [gr-qc/9711068].
[3] J. Rosseel, T. Van Riet and D. B. Westra, Scaling cosmologies of N = 8 gauged
supergravity, Class. Quant. Grav. 24 (2007) 2139–2152 [hep-th/0610143].
[4] M. de Roo, D. B. Westra and S. Panda, Gauging CSO groups in N = 4 supergravity,
JHEP 09 (2006) 011 [hep-th/0606282].
[5] T. Buchert, J. Larena and J.-M. Alimi, Correspondence between kinematical
backreaction and scalar field cosmologies: The ’morphon field’, Class. Quant. Grav.
23 (2006) 6379–6408 [gr-qc/0606020].
[6] G. W. Gibbons and S. W. Hawking, Action Integrals and Partition Functions in
Quantum Gravity, Phys. Rev. D15 (1977) 2752–2756.
[7] D. Z. Freedman, C. Nunez, M. Schnabl and K. Skenderis, Fake supergravity and
domain wall stability, Phys. Rev. D69 (2004) 104027 [hep-th/0312055].
[8] D. Bazeia, C. B. Gomes, L. Losano and R. Menezes, First-order formalism and dark
energy, Phys. Lett. B633 (2006) 415–419 [astro-ph/0512197].
[9] A. R. Liddle and D. H. Lyth, Cosmological inflation and large-scale structure,
Cambridge, UK: Univ. Pr. (2000) 400 p.
[10] K. Skenderis and P. K. Townsend, Pseudo-supersymmetry and the domain-wall /
cosmology correspondence, hep-th/0610253.
[11] K. Skenderis and P. K. Townsend, Hidden supersymmetry of domain walls and
cosmologies, Phys. Rev. Lett. 96 (2006) 191301 [hep-th/0602260].
[12] C. M. Hull, De Sitter space in supergravity and M theory, JHEP 11 (2001) 012
[hep-th/0109213].
[13] E. A. Bergshoeff, J. Hartong, A. Ploegh, J. Rosseel and D. Van den Bleeken,
Pseudo-supersymmetry and a tale of alternate realities, arXiv:0704.3559 [hep-th].
[14] J. Sonner and P. K. Townsend, Axion-Dilaton Domain Walls and Fake Supergravity,
hep-th/0703276.
[15] A. Celi, A. Ceresole, G. Dall’Agata, A. Van Proeyen and M. Zagermann, On the
fakeness of fake supergravity, Phys. Rev. D71 (2005) 045009 [hep-th/0410126].
[16] A. J. Tolley and D. H. Wesley, Scale-invariance in expanding and contracting
universes from two-field models, hep-th/0703101.
[17] J. L. P. Karthauser and P. M. Saffin, Scaling solutions and geodesics in moduli space,
Class. Quant. Grav. 23 (2006) 4615–4624 [hep-th/0604046].
[18] J. Sonner and P. K. Townsend, Recurrent acceleration in dilaton-axion cosmology,
Phys. Rev. D74 (2006) 103508 [hep-th/0608068].
[19] A. R. Liddle, A. Mazumdar and F. E. Schunck, Assisted inflation, Phys. Rev. D58
(1998) 061301 [astro-ph/9804177].
[20] K. A. Malik and D. Wands, Dynamics of assisted inflation, Phys. Rev. D59 (1999)
123501 [astro-ph/9812204].
[21] E. J. Copeland, A. Mazumdar and N. J. Nunes, Generalized assisted inflation, Phys.
Rev. D60 (1999) 083506 [astro-ph/9904309].
[22] J. Hartong, A. Ploegh, T. Van Riet and D. B. Westra, Dynamics of generalized
assisted inflation, Class. Quant. Grav. 23 (2006) 4593–4614 [gr-qc/0602077].
[23] L. Andrianopoli, R. D’Auria, S. Ferrara, P. Fre and M. Trigiante, R-R scalars,
U-duality and solvable Lie algebras, Nucl. Phys. B496 (1997) 617–629
[hep-th/9611014].
[24] A. W. Knapp, Lie groups beyond an introduction, Birkhäuser, Second Edition (2002).
[25] S. S. Kokarev, A multidimensional generalization of the Kasner solution, Grav.
Cosmol. 2 (1996) 321 [gr-qc/9510059].
[26] I. Bakas and K. Sfetsos, States and curves of five-dimensional gauged supergravity,
Nucl. Phys. B573 (2000) 768–810 [hep-th/9909041].
[27] K. Skenderis and P. K. Townsend, Gravitational stability and renormalization-group
flow, Phys. Lett. B468 (1999) 46–51 [hep-th/9909070].
[28] I. Bakas, A. Brandhuber and K. Sfetsos, Domain walls of gauged supergravity,
M-branes, and algebraic curves, Adv. Theor. Math. Phys. 3 (1999) 1657–1719
[hep-th/9912132].
[29] P. Fre et al., Tits-Satake projections of homogeneous special geometries, Class.
Quant. Grav. 24 (2007) 27–78 [hep-th/0606173].
[30] P. Fre et al., Cosmological backgrounds of superstring theory and solvable algebras:
Oxidation and branes, Nucl. Phys. B685 (2004) 3–64 [hep-th/0309237].
[31] P. Fre and A. Sorin, Integrability of supergravity billiards and the generalized Toda
lattice equation, Nucl. Phys. B733 (2006) 334–355 [hep-th/0510156].
0704.1654: The Peculiar Velocities of Local Type Ia Supernovae and their Impact on Cosmology
Draft version September 15, 2021
THE PECULIAR VELOCITIES OF LOCAL TYPE Ia SUPERNOVAE AND THEIR IMPACT ON COSMOLOGY
James D. Neill
California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125
Michael J. Hudson
University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, CANADA
Alex Conley
University of Toronto, 60 Saint George Street, Toronto, ON M5S 3H8, CANADA
ABSTRACT
We quantify the effect of supernova Type Ia peculiar velocities on the derivation of cosmological
parameters. The published distant and local Ia SNe used for the Supernova Legacy Survey first-year
cosmology report form the sample for this study. While previous work has assumed that the local SNe
are at rest in the CMB frame (the No Flow assumption), we test this assumption by applying peculiar
velocity corrections to the local SNe using three different flow models. The models are based on the
IRAS PSCz galaxy redshift survey, have varying β = Ω_m^{0.6}/b, and reproduce the Local Group motion
in the CMB frame. These datasets are then fit for w, Ωm, and ΩΛ using flatness or ΛCDM and a
BAO prior. The χ2 statistic is used to examine the effect of the velocity corrections on the quality
of the fits. The most favored model is the β = 0.5 model, which produces a fit significantly better
than the No Flow assumption, consistent with previous peculiar velocity studies. By comparing the
No Flow assumption with the favored models we derive the largest potential systematic error in w
caused by ignoring peculiar velocities to be ∆w = +0.04. For ΩΛ, the potential error is ∆ΩΛ = −0.04
and for Ωm, the potential error is ∆Ωm < +0.01. The favored flow model (β = 0.5) produces the
following cosmological parameters: w = −1.08^{+0.09}_{−0.08}, Ω_m = 0.27^{+0.02}_{−0.02} assuming a flat
cosmology, and Ω_Λ = 0.80^{+0.08}_{−0.07} and Ω_m = 0.27^{+0.02}_{−0.02} for a w = −1 (ΛCDM) cosmology.
Subject headings: cosmology: large-scale structure of the universe – galaxies: distances and redshifts
– supernovae: general
1. INTRODUCTION
Dark Energy has challenged our knowledge of funda-
mental physics since the direct evidence for its existence
was discovered using Type Ia supernovae (Riess et al.
1998; Perlmutter et al. 1999). Because there are cur-
rently no compelling theoretical explanations for Dark
Energy, the correct emphasis, as pointed out by the Dark
Energy Task Force (DETF, Albrecht et al. 2006), is on
refining our observations of the accelerated expansion of
the universe. Recommendation V from the DETF Re-
port (Albrecht et al. 2006) calls for an exploration of the
systematic effects that could impair the needed observa-
tional refinements.
A couple of recent studies (Hui & Greene 2006;
Cooray & Caldwell 2006) point out that the redshift
lever arm needed to accurately measure the universal
expansion requires the use of a local sample, but that
coherent large-scale local (z < 0.2) peculiar velocities
add additional uncertainty to the Hubble diagram and
hence to the derived cosmological parameters.
Current analyses (e.g., Astier et al. 2006; Riess et al.
2007; Wood-Vasey et al. 2007) of the cosmological pa-
rameters do not attempt to correct for the effect of local
peculiar velocities. As briefly noted by Hui & Greene
Electronic address: [email protected]
Electronic address: [email protected]
Electronic address: [email protected]
(2006) and Cooray & Caldwell (2006), it is possible
to use local data to measure the local velocity field
and hence limit the impact on the derived cosmo-
logical parameters. Measurements of the local ve-
locity field have improved to the point where there
is consistency among surveys and methods (Hudson
2003; Hudson et al. 2004; Radburn-Smith et al. 2004;
Pike & Hudson 2005; Sarkar et al. 2006). Type Ia
supernova peculiar velocities have been studied re-
cently by Radburn-Smith et al. (2004); Pike & Hudson
(2005); Jha et al. (2006); Haugboelle et al. (2006);
Watkins & Feldman (2007) and others. Their results
demonstrate that the local flows derived from SNe are
in agreement with those derived from other distance in-
dicators, such as the Tully-Fisher relation and the Funda-
mental Plane. Our aim is to use the current knowledge
of the local peculiar motions to correct local SNe and,
together with a homogeneous set of distant SNe, fit for
cosmological parameters and measure the effect of the
corrections on the cosmological fits.
To produce this measurement, we analyze the lo-
cal and distant SN Ia sample used in the first-year
cosmology results from the Supernova Legacy Survey
(SNLS, Astier et al. 2006, hence A06). This sam-
ple is composed of 44 local SNe (A06, Table 8:
Hamuy et al. 1996; Riess et al. 1999; Krisciunas et al.
2001; Jha 2002; Strolger et al. 2002; Altavilla et al.
2004; Krisciunas et al. 2004a,b) and 71 distant SNe
(A06, Table 9). The distant SNe are the largest homo-
geneous set currently in the literature. The local SNe span the redshift range 0.015 < z < 0.125
and were selected to have good lightcurve sampling (A06, § 5.2).
Using three different models encompassing the range of
plausible local large-scale flow, we assign and correct for
the peculiar velocity of each local SN. We then re-fit the
entire sample for w, Ωm, and ΩΛ to assess the system-
atics due to the peculiar velocity field, and to asses the
change in the quality of the resulting fits.
2. PECULIAR VELOCITY MODELS
Peculiar velocities, v, arise due to inhomogeneities in
the mass density and hence in the expansion. Their ef-
fect is to perturb the observed redshifts from their cos-
mological values: cz_CMB = cz + v · r̂, where cz is the
cosmological redshift the SN would have in the absence
of peculiar velocities. With the advent of all-sky galaxy
redshift surveys, it is possible to predict peculiar veloc-
ities from the galaxy distribution provided one knows
β = f(Ω)/b, where b is a linear biasing parameter relating
fluctuations in the galaxy density, δ, to fluctuations in the
mass density. The peculiar velocity in the CMB frame is
then given by linear perturbation theory (Peebles 1980)
applied to the density field (see, e.g. Yahil et al. 1991;
Hudson 1993):
v(r) = (β/4π) ∫_0^{R_max} δ(r′) (r′ − r)/|r′ − r|^3 d^3r′ + V . (1)
In this Letter, we use the density field of IRAS PSCz
galaxies (Branchini et al. 1999), which extends to a
depth R_max = 20000 km s^{-1}. Contributions to the pe-
culiar velocity arising from masses on scales larger than
Rmax are modeled by a simple residual dipole, V. Thus,
given a density field, the parameters β and V describe
the velocity field within Rmax. For galaxies with dis-
tances greater than Rmax, the first term above is set to
zero.
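As an illustration of how a prediction of the form of equation (1) is evaluated in practice, here is a Python sketch with a toy overdensity field sampled on a grid. The field, the grid resolution, and the β/4π normalization are stand-ins for illustration only; this is not the actual PSCz density field or the authors' code:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a smoothed galaxy overdensity field delta(r') sampled
# on a grid of cells. Distances are in km/s, so no factor of H0 appears.
Rmax = 20000.0                        # survey depth, km/s
ngrid = 24
axis = np.linspace(-Rmax, Rmax, ngrid)
dx = axis[1] - axis[0]
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
cells = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
cells = cells[np.linalg.norm(cells, axis=1) <= Rmax]
delta = 0.3 * rng.normal(size=len(cells))   # toy overdensities

def v_pred(r, beta, V):
    """Discretized version of eq. (1): linear-theory velocity at r, km/s."""
    sep = cells - r
    d3 = np.linalg.norm(sep, axis=1) ** 3
    ok = d3 > 0                              # guard against a coincident cell
    integrand = delta[ok, None] * sep[ok] / d3[ok, None]
    return beta / (4 * np.pi) * integrand.sum(axis=0) * dx**3 + V

# B05-like model: beta = 0.5 plus a residual external dipole V.
V = np.array([70.0, -194.0, 0.0])
r_sn = np.array([3000.0, 1000.0, -2000.0])   # position of a local SN, km/s
v = v_pred(r_sn, 0.5, V)
# Only the line-of-sight component perturbs the observed redshift:
v_los = v @ r_sn / np.linalg.norm(r_sn)
```

With the real PSCz field in place of the toy `delta`, `v_los` is the quantity subtracted from each local SN's CMB-frame recession velocity.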
The predicted peculiar velocities from the PSCz den-
sity field are subject to two sources of uncertainty: the
noisiness of the predictions due to the sparsely-sampled
density field, and the inapplicability of linear perturba-
tion theory on small scales. Typically these uncertainties
are accounted for by adding an additional “thermal” dis-
persion, which is assumed to be Gaussian. From a care-
ful analysis of predicted and observed peculiar velocities,
Willick & Strauss (1998) estimated these uncertainties
to be ∼ 100 km s−1, albeit with a dependence on den-
sity. Radburn-Smith et al. (2004) found reasonable χ2
values if 150 km s−1 was assumed in the field, with an
extra contribution to the small-scale dispersion added in
quadrature for SNe in clusters. Here we adopt a thermal
dispersion of 150 km s−1.
For this study, we explore the results of three different
models of large-scale flows and compare them to a case
where no flow model is used. These models have been
chosen to span the range of flow models permitted by
peculiar velocity data, and all of these models reproduce
the observed ∼ 600 km s−1 motion of the Local Group
with respect to the CMB. The first model assumes a
pure bulk flow (model PBF, hence β = 0) with V having
vector components (57,−540, 314) km s−1 in Galactic
Cartesian coordinates. The second model assumes β =
0.5 (model B05), with a dipole vector of (70,−194, 0) km
s−1. The third model adopts β = 0.7 (model B07) which
requires no residual dipole. We compare these models
to the no-correction scenario adopted by A06 and others
with β = 0, V = 0 which we call the “No Flow” or NF
scenario. Note that a recent comparison (Pike & Hudson
2005) of results from IRAS predictions versus peculiar
velocity data yields a mean value fit with β = 0.50±0.02
(stat), so the B05 model is strongly favored over the NF
scenario by independent peculiar velocity analyses.
3. COSMOLOGICAL FITS
Prior to the fitting procedure, the peculiar veloci-
ties for each model are used to correct the local SNe
(using a variation of Hui & Greene 2006, equations 11
and 13). We then fit our corrected SN data in two
ways using a χ^2-gridding cosmology fitter^1 (also used by
Wood-Vasey et al. 2007). The first fit uses a flat cos-
mology (Ω = 1) with the equation of state parameter w
and Ωm as free parameters. The second fit assumes a
ΛCDM (w = −1) cosmology with ΩΛ and Ωm as free
parameters. We used the same intrinsic SN photometric
scatter (σint = 0.13 mag, A06) for every fit. The result-
ing χ2 probability surfaces for both fits are then further
constrained using the BAO result from Eisenstein et al.
(2005). The final derived cosmological parameters are
then used to calculate the χ2 for each fit (see A06, § 5.4).
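A deliberately simplified, low-redshift stand-in for this correction step (not the actual equations 11 and 13 of Hui & Greene 2006) can be sketched as follows; it removes the model line-of-sight velocity from the CMB-frame redshift and estimates the corresponding leading-order shift in distance modulus:

```python
import numpy as np

C = 299792.458                      # speed of light, km/s

def correct_sn(z_cmb, v_los):
    """Remove a model line-of-sight peculiar velocity v_los (km/s) from a
    CMB-frame redshift, using cz_CMB = cz_cos + v . r_hat (Sec. 2).
    The magnitude shift is a schematic low-z approximation, not the full
    Hui & Greene (2006) correction."""
    z_cos = z_cmb - v_los / C
    dmu = (5.0 / np.log(10.0)) * (v_los / C) / z_cos
    return z_cos, dmu

z_cos, dmu = correct_sn(0.020, 300.0)
# A 300 km/s velocity at z ~ 0.02 moves the distance modulus by ~0.1 mag,
# comparable to the intrinsic scatter sigma_int = 0.13 mag.
```

This is why the velocity corrections matter most for the lowest-redshift SNe in the local sample.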
The fitting procedure employed here differs in imple-
mentation from that used in A06. Three additional pa-
rameters, often called nuisance parameters, must be fit
along with the two cosmological parameters. These pa-
rameters are the constant of proportionality for the SN
lightcurve shape, αs, the correction for the SN observed
color, βc, and a SN brightness normalization,M. We dis-
tinguish βc from the β used to describe the flow models
above. A06 used analytic marginalization of the nuisance
parameters αs and βc in their fits. Here these parame-
ters are fully gridded like the cosmological parameters.
This avoids a bias in the nuisance parameters that re-
sults because, in the analytic method, their values must
be held fixed to compute the errors. The result is that
our fits using the NF scenario produce slightly different
cosmological parameters than quoted in A06.
4. RESULTS
The results of the cosmological fits for each model are
listed in Table 1 and plotted in Figure 1 and Figure 2.
They demonstrate two effects of the peculiar velocity
corrections: a change in the values of the cosmological
parameters, and a change in the quality of the fits as
measured by the χ2 statistic.
We expect, if a given model is correct, to improve the
fitting since our corrected data should more closely re-
semble the homogeneous universe described by a few cos-
mological parameters. The χ2 of the fits for each flow
model can be compared to the χ2 for the NF scenario
(shown by the dashed line in the figures) as a test of this
hypothesis. Using ∆χ2 = −2 lnL/LNF , where L is the
likelihood, we find that the pure bulk flow is over 103
times less likely than the NF scenario, while the B05 and
^1 http://qold.astro.utoronto.ca/conley/simple_cosfitter/
TABLE 1
Peculiar Velocity Model Parameters and Results

                                          Ω = 1 + BAO prior                                  w = −1 + BAO prior
Model   β      V (km s^{-1})    w                     Ω_m                  χ²_{w,Ω_m}    Ω_Λ                  Ω_m                  χ²_{Ω_Λ,Ω_m}
A06^a   0.0    ···              −1.023 ± 0.090        0.271 ± 0.021        ···           0.751 ± 0.082        0.271 ± 0.020        ···
NF      0.0    ···              −1.054 +0.086/−0.084  0.270 +0.024/−0.018  115.5         0.770 +0.083/−0.071  0.269 +0.033/−0.017  115.4
PBF     0.0    57, −540, 314    −1.026 +0.085/−0.083  0.273 +0.024/−0.019  129.4         0.741 +0.084/−0.073  0.273 +0.034/−0.017  129.2
B05     0.5^b  70, −194, 0      −1.081 +0.087/−0.085  0.268 +0.024/−0.018  110.3         0.796 +0.081/−0.070  0.267 +0.032/−0.017  110.1
B07     0.7    ···              −1.094 +0.087/−0.085  0.267 +0.024/−0.018  111.2         0.809 +0.082/−0.069  0.265 +0.032/−0.017  111.1

^a results quoted in A06 marginalizing analytically over α_s and β_c (see § 3)
^b best fit value from Pike & Hudson (2005)
Fig. 1.— Parameter values for the w, Ωm fit (Ω = 1 + BAO prior)
for each of the four peculiar velocity models in Table 1. The values
for the NF scenario are indicated by the dashed lines. The largest
systematic error in w compared with the NF fit is +0.040 for the
B07 model, which demonstrates the amplitude of the systematic
error if peculiar velocity is not accounted for. The offsets for Ωm
are all within ±0.003 showing that this parameter is not sensitive
to the peculiar velocity corrections due to the BAO prior. The χ2
of the fits improve when using the two β models (B05, B07), while
the PBF model provides a significantly worse fit.
B07 models are 13.5 and 8.6 times more likely, respec-
tively.
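These relative likelihoods follow directly from the χ² values in Table 1 via ∆χ² = −2 ln(L/L_NF); a quick check:

```python
import math

# chi^2 values from Table 1 (flat-cosmology fit, Omega = 1 + BAO prior).
chi2 = {"NF": 115.5, "PBF": 129.4, "B05": 110.3, "B07": 111.2}

def likelihood_ratio(model):
    """L_model / L_NF = exp(-(chi2_model - chi2_NF) / 2)."""
    return math.exp(-(chi2[model] - chi2["NF"]) / 2.0)

print(round(likelihood_ratio("B05"), 1))   # 13.5 (favored over NF)
print(round(likelihood_ratio("B07"), 1))   # 8.6
print(likelihood_ratio("PBF"))             # ~9.6e-4: over 10^3 times less likely
```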
We also use these data to assess the systematic er-
rors made in the parameters if no peculiar velocities are
accounted for. The largest of these are obtained by com-
paring the B07 model with the NF scenario. This com-
parison yields ∆wB07 = +0.040 and ∆ΩΛ,B07 = −0.039.
The same comparison for the B05 model, which is only
slightly preferred by the χ2 statistic over model B07, pro-
duces ∆wB05 = +0.027 and ∆ΩΛ,B05 = −0.026. The
systematic offsets for Ωm are all 0.004 or less, demon-
strating the insensitivity of this parameter to peculiar
velocities. This is due to the BAO prior which is insensi-
tive to local flow and provides a much stronger constraint
for Ωm than for w or ΩΛ (see A06, Figures 5 and 6).
5. DISCUSSION AND SUMMARY
The systematic effect of different flow models is at the
level of ±0.04 in w. This is smaller than the present level
of random error in w, which is largely due to the small
numbers of high- and low-redshift SNe. However, com-
pared to other systematics discussed in A06, which total
Fig. 2.— Parameter values for the ΩΛ, Ωm fit (w = −1 + BAO
prior) for each of the four peculiar velocity models as in Figure 1.
Again, comparing the NF fits to the B07 model produces the largest
systematic in ΩΛ of −0.039. We also find Ωm insensitive to the
corrections, having all offsets within ±0.004. The χ2 values show
the same pattern as in Figure 1, favoring the β models over no
correction (NF), and over pure bulk flow.
∆w = ±0.054, the systematic effect of large-scale flows
is important. Wood-Vasey et al. (2007, Table 5) list 16
sources of systematic error which total ∆w = ±0.13.
Aside from three method-dependent systematics and the
photometric zero-point error, they are all smaller than
the flow systematic. As the number of SNe continues to
increase, and understanding of other systematics (e.g.
photometric zero-points) improves, it is possible that
large-scale flows will become one of the dominant sources
of systematic uncertainty.
The peculiar velocities of SN host galaxies arise from
large-scale structures over a range of scales. The compo-
nent arising from small-scale, local structure is the least
important: it is essentially a random variable which is
reduced by √N. More problematic is the large-scale co-
herent component. Such a large-scale component can
take several forms: an overdensity or underdensity; a
large-scale dipole, or “bulk” flow.
The existence of a large-scale, but local (< 7400
km s−1) underdensity, or “Hubble Bubble” was first
discussed by Zehavi et al. (1998). Recently Jha et al.
(2006) have re-enforced this claim with a larger SN data
set: they find that the difference in the Hubble constant
inside the Bubble and outside is ∆H/H = 6.5 ± 1.8%.
If correct, this could have a dramatic effect on the de-
rived cosmological parameters (Jha et al. 2006, Fig 17),
especially for those studies that extend their local sample
down below z < 0.015. However, the “Hubble Bubble”
was not confirmed by Giovanelli et al. (1999) who found
∆H/H = 1.0 ± 2.2% using the Tully-Fisher (TF) pe-
culiar velocities, nor by Hudson et al. (2004) who found
∆H/H = 2.3± 1.9% using the Fundamental Plane (FP)
distances.
According to equation 1, a mean underdensity of IRAS
galaxies of order ∼ 40% within 7400 km s−1 would
be needed to generate the “Hubble Bubble” quoted by
Jha et al. (2006). However, we find that the IRAS
PSCz density field of Branchini et al. (1999) is not un-
derdense in this distance range; instead it is mildly over-
dense (by a few percent) within 7400 km s−1 (see also
Branchini et al. 1999, Figure 2). As a further cross-
check, when we refit the Jha et al. (2006) data after hav-
ing subtracted the predictions of the B05 flow model, the
“Bubble” remains in the Jha et al. (2006) data. Thus,
the Jha et al “Bubble” cannot be explained by local
structure, unless that structure is not traced by IRAS
galaxies. Moreover, when we analyze the 99 SNe within
15000 km s−1 from Tonry et al. (2003) in the same way,
we find no evidence of a significant “Hubble Bubble”
(∆H/H = 1.5 ± 2.0%), in agreement with the results
from TF and FP surveys. The Tonry et al. (2003) sam-
ple and that of Jha et al. (2006) have 67 SNe in common.
The high degree of overlap suggests that the difference
lies in the different methods for converting the photom-
etry into SN distance moduli.
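The ∼ 40% figure can be recovered from the linear-theory monopole relation δH/H ≈ −β δ̄/3 for a top-hat perturbation (a back-of-the-envelope sketch following from eq. (1), not the calculation actually performed with the PSCz field):

```python
# Linear-theory monopole: a mean galaxy overdensity delta_bar inside a
# sphere perturbs the interior Hubble rate by dH/H ~ -beta * delta_bar / 3
# (top-hat sketch; the quoted IRAS beta = 0.5 is assumed).
beta = 0.5
delta_bar = -0.40          # a 40% underdensity in IRAS galaxies
dH_over_H = -beta * delta_bar / 3.0
print(f"{dH_over_H:.1%}")  # 6.7%, matching the 6.5 +/- 1.8% "Bubble" of Jha et al.
```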
A local large-scale flow can also introduce systematic
errors if the low-z sample is biased in its sky coverage:
in this case, an uncorrected dipole term can corrupt the
monopole term, which then biases the cosmological pa-
rameters. For the large-scale flow directions considered
here, this does not appear to affect the A06 sample: we
note that the PBF-corrected case has similar cosmolog-
ical parameters to the “No Flow” case. However, if co-
herent flows exist on large scales, this may affect surveys
with unbalanced sky coverage, such as the SN Factory
(Aldering et al. 2002) or the SDSS SN survey^2.
The most promising approach to treating the effect
of large-scale flows is a more sophisticated version of
the analysis presented here: combine low-redshift SNe
with other low-redshift peculiar velocity tracers, such as
Tully-Fisher SFI++ survey (Masters et al. 2006) and the
NOAO Fundamental Plane Survey (Smith et al. 2004),
and use these data to constrain the parameters of the
flow model (β and the residual large-scale flow V) di-
rectly. One can then marginalize over the parameters of
the flow model while fitting the cosmological parameters
to the low- and high-z SNe.
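A toy version of that strategy, with synthetic data and illustrative numbers (this is a sketch, not the pipeline used in this paper): fit a Hubble-like parameter H on a grid while marginalizing over a bulk-flow nuisance amplitude V.

```python
import numpy as np

# Toy sketch of the proposed analysis (synthetic data, illustrative
# numbers -- not this paper's pipeline): fit a Hubble-like parameter H
# while marginalizing over a bulk-flow nuisance amplitude V.
rng = np.random.default_rng(1)
H_true, V_true, sigma = 70.0, 300.0, 200.0      # km/s/Mpc, km/s, km/s
d = rng.uniform(10.0, 100.0, 200)               # distances, Mpc
costh = rng.uniform(-1.0, 1.0, 200)             # angle to the flow direction
cz = H_true * d + V_true * costh + rng.normal(0.0, sigma, 200)

H_grid = np.linspace(65.0, 75.0, 101)
V_grid = np.linspace(-600.0, 600.0, 121)
chi2 = np.array([[np.sum(((cz - H * d - V * costh) / sigma) ** 2)
                  for V in V_grid] for H in H_grid])
like = np.exp(-0.5 * (chi2 - chi2.min()))
post_H = like.sum(axis=1)                       # marginalize over V
H_est = H_grid[np.argmax(post_H)]
print(f"H estimate: {H_est:.1f}")               # close to H_true = 70
```

Marginalizing over V rather than fixing it propagates the flow-model uncertainty into the error budget of the fitted parameters, which is the point of the proposed joint analysis.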
2 http://sdssdp47.fnal.gov/sdsssn/sdsssn.html
REFERENCES
Albrecht, A., Bernstein, G., Cahn, R., Freedman, W. L., Hewitt,
J., Hu, W., Huth, J., Kamionkowski, M., et al. 2006, preprint
(astro-ph/0609591)
Aldering, G., Adam, G., Antilogus, P., Astier, P., Bacon, R.,
Bongard, S., Bonnaud, C., Copin, Y., et al. 2002, in Survey and
Other Telescope Technologies and Discoveries, Proc. SPIE, Vol. 4836,
ed. J. A. Tyson & S. Wolff, 61–72
Altavilla, G., Fiorentino, G., Marconi, M., Musella, I.,
Cappellaro, E., Barbon, R., Benetti, S., Pastorello, A., et al.
2004, MNRAS, 349, 1344
Astier, P., Guy, J., Regnault, N., Pain, R., Aubourg, E., Balam,
D., Basa, S., Carlberg, R. G., et al. 2006, A&A, 447, 31, A06
Branchini, E., Teodoro, L., Frenk, C. S., Schmoldt, I., Efstathiou,
G., White, S. D. M., Saunders, W., Sutherland, W., et al. 1999,
MNRAS, 308, 1
Cooray, A. & Caldwell, R. R. 2006, Phys. Rev. D, 73, 103002
Eisenstein, D. J., Zehavi, I., Hogg, D. W., Scoccimarro, R.,
Blanton, M. R., Nichol, R. C., Scranton, R., Seo, H.-J., et al.
2005, ApJ, 633, 560
Giovanelli, R., Dale, D. A., Haynes, M. P., Hardy, E., &
Campusano, L. E. 1999, ApJ, 525, 25
Hamuy, M., Phillips, M. M., Suntzeff, N. B., Schommer, R. A.,
Maza, J., & Aviles, R. 1996, AJ, 112, 2391
Haugboelle, T., Hannestad, S., Thomsen, B., Fynbo, J.,
Sollerman, J., & Jha, S. 2006, preprint (astro-ph/0612137)
Hudson, M. J. 1993, MNRAS, 265, 43
Hudson, M. J. 2003, in Proceedings of the 15th Rencontres De
Blois: Physical Cosmology: New Results In Cosmology And
The Coherence Of The Standard Model, ed. J. Bartlett, in
press, preprint (astro-ph/0311072)
Hudson, M. J., Smith, R. J., Lucey, J. R., & Branchini, E. 2004,
MNRAS, 352, 61
Hui, L. & Greene, P. B. 2006, Phys. Rev. D, 73, 123526
Jha, S. 2002, PhD thesis, Harvard University
Jha, S., Riess, A. G., & Kirshner, R. P. 2006, preprint
(astro-ph/0612666)
Krisciunas, K., Phillips, M. M., Stubbs, C., Rest, A., Miknaitis,
G., Riess, A. G., Suntzeff, N. B., Roth, M., et al. 2001, AJ, 122,
Krisciunas, K., Phillips, M. M., Suntzeff, N. B., Persson, S. E.,
Hamuy, M., Antezana, R., Candia, P., Clocchiatti, A., et al.
2004a, AJ, 127, 1664
Krisciunas, K., Suntzeff, N. B., Phillips, M. M., Candia, P.,
Prieto, J. L., Antezana, R., Chassagne, R., Chen, H.-W., et al.
2004b, AJ, 128, 3034
Masters, K. L., Springob, C. M., Haynes, M. P., & Giovanelli, R.
2006, ApJ, 653, 861
Peebles, P. J. E. 1980, The Large-Scale Structure of the Universe
(Princeton, N.J.: Princeton University Press)
Perlmutter, S., Aldering, G., Goldhaber, G., Knop, R. A.,
Nugent, P., Castro, P. G., Deustua, S., Fabbro, S., et al. & The
Supernova Cosmology Project. 1999, ApJ, 517, 565
Pike, R. W. & Hudson, M. J. 2005, ApJ, 635, 11
Radburn-Smith, D. J., Lucey, J. R., & Hudson, M. J. 2004,
MNRAS, 355, 1378
Riess, A. G., Filippenko, A. V., Challis, P., Clocchiatti, A.,
Diercks, A., Garnavich, P. M., Gilliland, R. L., Hogan, C. J., et
al. 1998, AJ, 116, 1009
Riess, A. G., Kirshner, R. P., Schmidt, B. P., Jha, S., Challis, P.,
Garnavich, P. M., Esin, A. A., Carpenter, C., et al. 1999, AJ,
117, 707
Riess, A. G., Strolger, L.-G., Casertano, S., Ferguson, H. C.,
Mobasher, B., Gold, B., Challis, P. J., Filippenko, A. V., et al.
2007, ApJ, 659, 98
Sarkar, D., Feldman, H. A., & Watkins, R. 2006, preprint
(astro-ph/0607426)
Smith, R. J., Hudson, M. J., Nelan, J. E., Moore, S. A. W.,
Quinney, S. J., Wegner, G. A., Lucey, J. R., Davies, R. L., et
al. 2004, AJ, 128, 1558
Strolger, L.-G., Smith, R. C., Suntzeff, N. B., Phillips, M. M.,
Aldering, G., Nugent, P., Knop, R., Perlmutter, S., et al. 2002,
AJ, 124, 2905
Tonry, J. L., Schmidt, B. P., Barris, B., Candia, P., Challis, P.,
Clocchiatti, A., Coil, A. L., Filippenko, A. V., et al. 2003, ApJ,
594, 1
Watkins, R., & Feldman, H. A. 2007, preprint (astro-ph/0702751)
Willick, J. A., & Strauss, M. A. 1998, ApJ, 507, 64
Wood-Vasey, W. M., Miknaitis, G., Stubbs, C. W., Jha, S., Riess,
A. G., Garnavich, P. M., Kirshner, R. P., Aguilera, C., et al.
2007, preprint (astro-ph/0701041)
Yahil, A., Strauss, M. A., Davis, M., & Huchra, J. P. 1991, ApJ,
372, 380
Zehavi, I., Riess, A. G., Kirshner, R. P., & Dekel, A. 1998, ApJ,
503, 483
|
0704.1655 | Creation of Quark-gluon Plasma in Celestial Laboratories | Creation of Quark-gluon Plasma in Celestial
Laboratories
R. K. Thakur
*Retired Professor of Physics, School of Studies in Physics
Pt. Ravishankar Shukla University, Raipur, India
21 College Road, Choube Colony, Raipur-492001, India
Abstract
It is shown that a gravitationally collapsing black hole acts as an ultrahigh energy
particle accelerator that can accelerate particles to energies inconceivable in any ter-
restrial particle accelerator, and that when the energy E of the particles comprising
the matter in the black hole is ∼ 10^2 GeV or more, or equivalently the temperature
T is ∼ 10^15 K or more, the entire matter in the black hole will be in the form of
quark-gluon plasma permeated by leptons.
Key words: Quark-gluon plasma, black holes, particle accelerators
PACS: 12.38 Mh, 25.75 Nq, 97.60 Lf, 04.70−s
1 Introduction
Efforts are being made to create quark-gluon plasma (QGP) in terrestrial
laboratories. A report released by CERN, the European Organization for Nuclear
Research, at Geneva, on February 10, 2000 said, “A series of experiments
using CERN’s lead beam have presented compelling evidence for the exis-
tence of a new state of matter 20 times denser than nuclear matter, in which
quarks instead of being bound up into more complex particles such as pro-
tons and neutrons, are liberated to roam freely”. By smashing together lead
ions at CERN’s accelerator at temperatures 100,000 times as hot as sun’s cen-
tre, i.e. at temperatures T ∼ 1.5 × 10^12 K, and energy densities never before
reached in laboratory experiments, a team of 350 scientists from institutes
in 20 countries succeeded in isolating quarks from more complex particles,
e.g. protons and neutrons. However, the evidence of creation QGP at CERN
Email address: [email protected] (R. K. Thakur).
Preprint submitted to Elsevier 10 August 2021
is indirect, involving detection of particles produced when QGP changes back
to hadrons. The production of these particles can be explained alternatively
without having to have QGP. Therefore, the evidence of the creation of QGP
at CERN is neither sufficient nor conclusive. In view of this, CERN will start a
new experiment, ALICE (A Large Ion Collider Experiment), at much higher
energies available at its LHC (Large Hadron Collider). First collisions in the
LHC will occur in November 2007. A two-month run in 2007, with beams
colliding at an energy of 0.9 TeV, will give the accelerator and detector teams
the opportunity to run-in their equipment, ready for a run at the full collision
energy of 14 TeV to start in spring 2008.
In the meantime, the focus of research on QGP has shifted to the Relativistic
Heavy Ion Collider (RHIC), the world’s newest and largest particle accelerator
for nuclear research, at Brookhaven National Laboratory (BNL) in Upton,
New York. RHIC’s goal is to create and study QGP by head-on collisions of
two beams of gold ions at energies 10 times those of CERN’s programme,
which ought to produce QGP with higher temperature and longer life time
thereby allowing much clearer and more direct observation. The programme at RHIC
started in June 2000. Researchers at RHIC generated thousands of head-on
collisions between gold ions at energies of 130 GeV creating fireballs of matter
having density a hundred times greater than that of nuclear matter and
temperature ∼ 2 × 10^12 K (175 MeV in the energy scale). Fireballs were
of size ∼ 5 femtometres, which lasted a few times 10^-24 second. All the four
detector systems, viz., STAR, PHENIX, BRAHMS, PHOBOS, detected “jet
quenching” and suppression of “leading particles”, highly energetic individual
particles that emerge from the nuclear fireballs in gold-gold collisions. Jet
quenching and suppression of leading particles are signs of QGP formation.
Eventually, with plenty of data in hand, all the four detector collaborations -
STAR, PHENIX, BRAHMS, PHOBOS - operating at the BNL have converged
on a consensus opinion that the fireball is a liquid of strongly interacting quarks
and gluons rather than a gas of weakly interacting quarks and gluons. More-
over, this liquid is almost a “perfect” liquid with very low viscosity. The RHIC
findings were reported at the meeting of the American Physical Society (APS)
held during April 16-19, 2005 in Tampa, Florida in a talk delivered by Gary
Westfall. Thus, it is obvious that the existence of QGP theoretically predicted
by Quantum Chromodynamics (QCD) has been experimentally validated at
RHIC.
But the QGP created hitherto in terrestrial laboratories is ephemeral; its life-
time is, as mentioned earlier, a few times 10^-24 second, presumably because
its temperature is not well above the transition temperature for transition
from the hadronic phase to the QGP phase. In addition to this, it is difficult
to maintain it even at that temperature for long enough time. However, as
shown in the sequel, in nature we have celestial laboratories in the form of
gravitationally collapsing black holes wherein QGP is created naturally; this
QGP is at much higher temperature than the transition temperature, and pre-
sumably therefore it is not ephemeral. More so, because the temperature of
the QGP created in black holes continually increases and as such it is always
above the transition temperature.
2 Gravitationally collapsing black hole as a particle accelerator
We consider a gravitationally collapsing black hole (BH). In the simplest treat-
ment (1) a BH is considered to be a spherically symmetric ball of dust with
negligible pressure, uniform density ρ = ρ(t), and at rest at t = 0. These
assumptions lead to the unique solution of the Einstein field equations, and
in the comoving co-ordinate system the metric inside the BH is given by
ds^2 = dt^2 − R^2(t) [ dr^2/(1 − k r^2) + r^2 dθ^2 + r^2 sin^2 θ dφ^2 ]    (1)
in units in which the speed of light in vacuum, c = 1, and where k = 8πGρ(0)/3
is a constant.
On neglecting mutual interactions the energy E of any one of the particles
comprising the matter in the BH is given by E^2 = p^2 + m^2 > p^2, in units
in which again c = 1, and where p is the magnitude of the 3-momentum
of the particle and m its rest mass. But p = h/λ, where λ is the de Broglie
wavelength of the particle and h Planck’s constant of action. Since all lengths
in the collapsing BH scale down in proportion to the scale factor R(t) in
equation (1), it is obvious that λ ∝ R(t). Therefore it follows that p ∝ R^-1(t),
and hence p = aR^-1(t), where a is the constant of proportionality. From this
it follows that E > a/R(t). Consequently, E as well as p increases continually
as R decreases. It is also obvious that E and p → ∞ as R → 0. Thus, in effect,
we have an ultrahigh energy particle accelerator, so far inconceivable in any
terrestrial laboratory, in the form of a gravitationally collapsing BH, which
can, in the absence of any physical process inhibiting the collapse, accelerate
particles to an arbitrarily high energy and momentum without any limit.
What has been concluded above can also be demonstrated alternatively, with-
out resorting to the general theory of relativity, as follows. As an object col-
lapses under self-gravitation, the inter-particle distance s between any pair of
particles in the object decreases. Obviously, the de Broglie wavelength λ of
any particle in the object is less than or equal to s, a simple consequence of
Heisenberg’s uncertainty principle. Therefore, s ≥ h/p. Consequently, p ≥ h/s and
hence E ≥ h/s. Since during the gravitational collapse of an object s decreases
continually, the energy E as well as p, the magnitude of the 3-momentum of
each of the particles in the object, increases continually. Moreover, from E ≥ h/s
and p ≥ h/s it follows that E and p → ∞ as s → 0. Thus, any gravitation-
ally collapsing object in general, and a BH in particular, acts as an ultrahigh
energy particle accelerator.
It is also obvious that ρ, the density of matter in the BH, continually increases
as the BH collapses. In fact, ρ ∝ R^-3, and hence ρ → ∞ as R → 0.
3 Creation of quark-gluon plasma inside gravitationally collapsing
black holes
It has been shown theoretically that when the energy E of the particles in mat-
ter is ∼ 10^2 GeV (s ∼ 10^-16 cm), corresponding to a temperature T ∼ 10^15
K, all interactions are of the Yang-Mills type with SU_c(3) × SU_IW(2) × U_YW(1)
gauge symmetry, where c stands for colour, IW for weak isospin, and YW for
weak hypercharge; and at this stage quark deconfinement occurs as a result
of which the matter now consists of its fundamental constituents: spin 1/2
leptons, namely, the electrons, the muons, the tau leptons, and their
neutrinos, which interact only through the electroweak interaction; and the spin
1/2 quarks, u(up), d(down), s(strange), c(charm), b(bottom), t(top), which
interact electroweakly as well as through the colour force generated by gluons
(2). In this context it may be noted that, as shown in section 2, the energy
E of each of the particles comprising the matter in a gravitationally collaps-
ing BH continually increases, and so does the density ρ of the matter in the
BH. During the continual collapse of a BH a stage will be reached when E
and ρ will be so large and s so small that the quarks confined in the hadrons
will be liberated from the infrared slavery and acquire asymptotic freedom,
i.e., the quark deconfinement will occur. This will happen when E ∼ 10^2 GeV
(s ∼ 10^-16 cm), corresponding to T ∼ 10^15 K. Consequently, during the con-
tinual gravitational collapse of a BH, when E ≥ 10^2 GeV (s ≤ 10^-16 cm),
corresponding to T ≥ 10^15 K, the entire matter in the BH will be in the form
of QGP permeated by leptons.
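As a rough numeric cross-check of these thresholds (a sketch only, using E ≈ ħc/s and T ≈ E/k_B with standard values of the constants):

```python
# Order-of-magnitude cross-check of the deconfinement thresholds quoted
# above, using E ~ hbar*c/s and T ~ E/k_B (standard constants).
hbar_c = 197.327           # hbar*c in MeV fm
k_B = 8.617e-5             # Boltzmann constant in eV/K

s_cm = 1e-16               # inter-particle separation, cm
s_fm = s_cm * 1e13         # 1 fm = 1e-13 cm
E_MeV = hbar_c / s_fm      # energy scale in MeV
E_GeV = E_MeV / 1e3        # ~ 2e2 GeV
T_K = (E_MeV * 1e6) / k_B  # ~ 2e15 K
print(f"E ~ {E_GeV:.0f} GeV, T ~ {T_K:.1e} K")
```

The result, E ~ 2 × 10^2 GeV and T ~ 2 × 10^15 K for s ~ 10^-16 cm, agrees with the order-of-magnitude figures quoted in the text.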
One may understand what happens eventually to the matter in a gravitation-
ally collapsing BH in another way as follows. As a BH collapses continually,
gravitational energy is released continually. Since, inter alia, gravitational en-
ergy so released cannot escape the BH, it will continually heat the matter
comprising the BH. Consequently, the temperature of the matter in the BH
will increase continually. When the temperature reaches the transition tem-
perature for transition from the hadronic phase to the QGP phase, which is
predicted to be ∼ 170 MeV (∼ 10^12 K) by the Lattice Gauge Theory, the
entire matter in the BH will be converted into QGP permeated by leptons.
It may be noted that in a BH the QGP will not be ephemeral like what it
has hitherto been in the case of the QGP created in terrestrial laboratories; it
will not go back to the hadronic phase, because the temperature of the matter
in the BH continually increases and, after crossing the transition temperature
for the transition from the hadronic phase to the QGP phase, it will be more
and more above the transition temperature. Consequently, once the transition
from the hadronic phase to the QGP phase occurs in a BH, there is no going
back; the entire matter in the BH will remain in the form of QGP permeated
by leptons.
4 Conclusion
From the foregoing it is obvious that a BH acts as an ultrahigh energy par-
ticle accelerator that can accelerate particles to energies inconceivable in any
terrestrial particle accelerator, and that the matter in any gravitationally col-
lapsing BH is eventually converted into QGP permeated by leptons. However,
the snag is that it is not possible to probe and study the properties of the
QGP in a BH because nothing can escape outside the event horizon of a BH.
5 Acknowledgment
The author thanks Professor S. K. Pandey, the Co-ordinator of the Refer-
ence Centre at Pt. Ravishankar Shukla University, Raipur of the University
Grants Commission’s Inter-university Centre for Astronomy and Astrophysics
at Pune. He also thanks Mr. Laxmikant Chaware and Miss Leena Madharia
for typing the manuscript.
References
[1] S. Weinberg, Gravitation and Cosmology (John Wiley & Sons, New York,
1972), 342.
[2] P. Ramond, Ann. Rev. Nucl. Part.Sc., 33 (1983) 31.
|
0704.1656 | Temperature-driven transition from the Wigner Crystal to the
Bond-Charge-Density Wave in the Quasi-One-Dimensional Quarter-Filled band | Temperature-driven transition from the Wigner Crystal to the Bond-Charge-Density
Wave in the Quasi-One-Dimensional Quarter-Filled band
R.T. Clay,1 R.P. Hardikar,1 and S. Mazumdar2
1Department of Physics and Astronomy and HPC2 Center for Computational Sciences,
Mississippi State University, Mississippi State MS 39762
2 Department of Physics, University of Arizona Tucson, AZ 85721
(Dated: August 21, 2021)
It is known that within the interacting electron model Hamiltonian for the one-dimensional 1/4-filled
band, the singlet ground state is a Wigner crystal only if the nearest neighbor electron-electron
repulsion is larger than a critical value. We show that this critical nearest neighbor Coulomb
interaction is different for each spin subspace, with the critical value decreasing with increasing
spin. As a consequence, with the lowering of temperature, there can occur a transition from a
Wigner crystal charge-ordered state to a spin-Peierls state that is a Bond-Charge-Density Wave
with charge occupancies different from the Wigner crystal. This transition is possible because
spin excitations from the spin-Peierls state in the 1/4-filled band are necessarily accompanied by
changes in site charge densities. We apply our theory to the 1/4-filled band quasi-one-dimensional
organic charge-transfer solids in general and to 2:1 tetramethyltetrathiafulvalene (TMTTF) and
tetramethyltetraselenafulvalene (TMTSF) cationic salts in particular. We believe that many recent
experiments strongly indicate the Wigner crystal to Bond-Charge-Density Wave transition in several
members of the TMTTF family. We explain the occurrence of two different antiferromagnetic
phases but a single spin-Peierls state in the generic phase diagram for the 2:1 cationic solids. The
antiferromagnetic phases can have either the Wigner crystal or the Bond-Charge-Spin-Density Wave
charge occupancies. The spin-Peierls state is always a Bond-Charge-Density Wave.
PACS numbers: 71.30.+h, 71.45.Lr, 74.70.Kn
I. INTRODUCTION
Spatial broken symmetries in the quasi-one-dimensional (quasi-1D) 1/4-filled
organic charge-transfer solids (CTS) have been of strong
experimental1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17 and
theoretical18,19,20,21,22,23,24,25,26 interest. The bro-
ken symmetry states include charge order (hereafter
CO, this is usually accompanied by intramolecular
distortions), intermolecular lattice distortions (hereafter
bond order wave or BOW), antiferromagnetism (AFM)
and spin-Peierls (SP) order. Multiple orderings may
compete or even coexist simultaneously. Interestingly,
these unconventional insulating states in the CTS are
often proximate to superconductivity27, the mecha-
nism of which has remained perplexing after intensive
investigations over several decades. Unconventional
behavior at or near 1/4-filling has also been observed
in the quasi-two-dimensional organic CTS with higher
superconducting critical temperatures11,28,29,30,31,
sodium cobaltate32,33,34 and oxides of titanium35 and
vanadium36,37.
In spite of extensive research on 1D instabilities in the
CTS, detailed comparisons of theory and experiments re-
main difficult. Strong electron-electron (e-e) Coulomb
interactions in these systems make calculations particu-
larly challenging, and with few exceptions38,39 existing
theoretical discussions of broken symmetries in the
interacting 1/4-filled band have been limited to the ground
state18,19,20,21,22,23,24,25,26. This leaves a number of im-
portant questions unresolved, as we point out below.
FIG. 1: Schematic of the proposed T-vs.-P phase diagram for
(TMTCF)2X, where C=T or S, along with the charge occu-
pancies of the sites in the low T phases as determined in this
work. The P-axis reflects the extent of interchain coupling.
Open (filled) arrows indicate the ambient pressure locations
of TMTTF (TMTSF) salts.
Quasi-1D CTS undergo two distinct phase transitions
as the temperature (hereafter T) is reduced40. The 4kF
transition at higher T involves charge degrees of free-
dom (T4kF ∼ 100 K), with the semiconducting state be-
low T4kF exhibiting either a dimerized CO or a dimer-
ized BOW. Charges alternate as 0.5 + ǫ and 0.5 – ǫ
on the molecules along the stack in the dimerized CO.
The site charge occupancy in the dimerized CO state is
commonly written as · · · 1010· · · (with ‘1’ and ‘0’ denot-
ing charge-rich and charge-poor sites), and this state is
also referred to as the Wigner crystal. The 4kF BOW
has alternating intermolecular bond strengths, but site
charges are uniformly 0.5. It is generally accepted that
the dimer CO (dimer BOW) is obtained for strong (weak)
intermolecular Coulomb interactions (the intramolecular
Coulomb interaction can be large in either case, see Sec-
tion III). At T < T2kF ∼ 10 – 20 K, there occurs a sec-
ond transition in the CTS, involving spin degrees of free-
dom, to either an SP or an AFM state. Importantly,
the SP and the AFM states both can be derived from
the dimer CO or the dimer BOW. Coexistence of the SP
state with the Wigner crystal would require that the SP
state has the structure 1 = 0 = 1 · · · 0 · · · 1, with singlet
“bonds” alternating in strength between the charge-rich
sites of the Wigner crystal. Similarly, coexisting AFM
and Wigner crystal would imply site charge-spin occu-
pancies ↑ 0 ↓ 0. The occurrence of both have been sug-
gested in the literature20,26,38,39. The SP state can also
be obtained from dimerization of the dimer 4kF BOW,
in which case there occurs a spontaneous transition from
the uniform charge-density state to a coexisting bond-
charge-density wave (BCDW)18,22,25 1− 1 = 0 · · · 0. The
unit cells here are dimers with site occupancies ‘1’ and
‘0’ or ‘0’ and ‘1’, with strong and weak interunit 1–1 and
0· · · 0 bonds, respectively. The AFM state with the same
site occupancies · · · 1100· · · is referred to as the bond-
charge-spin density wave (BCSDW)21 and is denoted as
↑↓ 00.
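Why strong intersite repulsion favors the · · · 1010· · · pattern over · · · 1100· · · can be seen with a toy count of the nearest-neighbor Coulomb energy V Σ n_i n_{i+1} on a periodic chain, using the idealized '1'/'0' occupancies (hopping, on-site U, and the real 0.5 ± ε charges are ignored):

```python
# Toy count of the nearest-neighbor repulsion energy V * sum_i n_i n_{i+1}
# for the idealized site-charge patterns, on a periodic chain. Hopping,
# on-site U, and the actual 0.5 +/- eps charges are ignored.
def V_energy(pattern):
    n = [int(c) for c in pattern]
    return sum(n[i] * n[(i + 1) % len(n)] for i in range(len(n)))

wigner = "10101010"  # ...1010... Wigner crystal
bcdw = "11001100"    # ...1100... BCDW / BCSDW occupancy
print(V_energy(wigner), V_energy(bcdw))  # 0 vs 2 (units of V, per 8 sites)
```

Per eight sites the · · · 1010· · · pattern pays no intersite repulsion at all, while · · · 1100· · · pays 2V, which is the classical reason large V selects the Wigner crystal.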
The above characterizations of the different states and
the related bond, charge and spin patterns are largely
based on ground state calculations. The mechanism of
the transition from the 4kF state to the 2kF state lies
outside the scope of such theories. Questions that remain
unresolved as a consequence include: (i) Does the nature
of the 4kF state (charge versus bond dimerized) predeter-
mine the site charge occupancies of the 2kF state? This
is assumed in many of the published works. (ii) What de-
termines the nature of the 2kF state (AFM versus SP)?
(iii) How does one understand the occurrence of two dif-
ferent AFM phases (hereafter AFM1 and AFM2) strad-
dling a single SP phase in the proposed T vs. P (where P
is pressure) phase diagram (see Fig. 1)9 for the cationic 1/4-filled
band quasi-1D CTS? (iv) What is the nature of the
intermediate T states between the 4kF dimerized state
and the 2kF tetramerized state? As we point out in the
next section, where we present a brief review of the ex-
perimental results, answers to these questions are crucial
for a deeper understanding of the underlying physics of
the 1/4-filled band CTS.
In the present paper we report the results of our calcu-
lations of T-dependent behavior within theoretical mod-
els incorporating both e-e and electron-phonon (e-p) in-
teractions to answer precisely the above questions. One
key result of our work is as follows: in between the strong
and weak intermolecular Coulomb interaction parame-
ter regimes, there exists a third parameter regime within
which the intermolecular Coulomb interactions are in-
termediate, and within which there can occur a novel
transition from a Wigner crystal CO state to a BCDW
SP state as the T is lowered. Thus the charge or bond
ordering in the 4kF phase does not necessarily decide
the same in the 2kF state. For realistic intramolecular
Coulomb interactions (Hubbard U), we show that the
width of this intermediate parameter regime is compara-
ble to the strong and weak intersite interaction regimes.
We believe that our results are directly applicable to the
family of the cationic CTS (TMTTF)2X, where a redis-
tribution of charge upon entering the SP state from the
CO state has been observed15,16 in X=AsF6 and PF6. A
natural explanation of this redistribution emerges within
our theory. The SP state within our theory is unique and
has the BCDW charge occupancy, while the two AFM re-
gions in the phase diagram of Fig. 1 have different site
occupancies. Our theory therefore provides a simple di-
agnostic to determine the pattern of CO coexisting with
low-temperature magnetic states in the 1/4-filled CTS.
In addition to the above theoretical results directly
pertaining to the quasi-1D CTS, our work gives new in-
sight to excitations from a SP ground state in a non-
half-filled band. In the case of the usual SP transition
within the 1/2-filled band, the SP state is bond-dimerized
at T = 0 and has uniform bonds at T > T2kF . The site
charges are uniform at all T. This is in contrast to the
1/4-filled band, where the SP state at T = 0 is bond and
charge-tetramerized and the T > T2kF state is dimerized
as opposed to being uniform. Furthermore, we show that
the high T phase here can be either charge- or bond-
dimerized, starting from the same low T state. This
clearly requires two different kinds of spin excitations in
the 1/4-filled band. We demonstrate that spin excitations
from the SP state in the 1/4-filled band can lead to two
different kinds of defects in the background BCDW.
In the next section we present a brief yet detailed sum-
mary of relevant experimental results in the quasi-1D
CTS. The scope of this summary makes the need for hav-
ing T-dependent theory clear. Following this, in Section
III we present our theoretical model along with conjec-
tures based on physical intuitive pictures. In Section IV
we substantiate these conjectures with accurate quantum
Monte Carlo (QMC) and exact diagonalization (ED) nu-
merical calculations. Finally in Section V we compare
our theoretical results and experiments, and present our
conclusions.
II. REVIEW OF EXPERIMENTAL RESULTS
Examples of both CO and BOW broken symmetry at
T < T4kF are found in the 1/4-filled CTS. The 4kF phase
in the anionic 1:2 CTS is commonly bond-dimerized. The
most well known example is MEM(TCNQ)2, which un-
dergoes a metal-insulator transition accompanied with
bond-dimerization at 335 K41. Site charges are uniform
in this 4kF phase. The 2kF phase in the TCNQ-based
systems is universally SP and not AFM. The SP transi-
tion in MEM(TCNQ)2 occurs below T2kF = 19 K, and
low T neutron diffraction measurements of deuterated
samples41 have established that the bond tetramerization
is accompanied by 2kF CO · · · 1100· · · . X-ray42,43 and
neutron diffraction44 experiments have confirmed a sim-
ilar low T phase in TEA(TCNQ)2. We will not discuss
these further in the present paper, as they are well de-
scribed within our previous work18,25. We will, however,
argue that the SP ground state in (DMe-DCNQI)2Ag
(as opposed to AFM) indicates · · · 1100· · · CO in this material.
The cationic (TMTCF)2X, C= S and Se, exhibit more
variety, presumably because the counterions affect pack-
ing as well as site energies in the cation stack. Differ-
ences between systems with centrosymmetric and non-
centrosymmetric anions are also observed. Their overall
behavior is summarized in Fig. 1, where as is custom-
ary pressure P can also imply larger interchain coupling.
We have indicated schematically the possible locations of
different materials on the phase diagram. The most sig-
nificant aspect of the phase diagram is the occurrence of
two distinct antiferromagnetic phases, AFM1 and AFM2,
straddling a single SP phase.
Most TMTTF salts lie near the low P region of the phase
diagram and are insulating already at or near room tem-
perature because of charge localization, which is due to
the intrinsic dimerization along the cationic stacks1,4,11.
CO at intermediate temperatures TCO has been found
in dielectric permittivity4, NMR7, and ESR46 experi-
ments on materials near the low and intermediate P
end. Although the pattern of the CO has not been de-
termined directly, the observation of ferroelectric behav-
ior below TCO is consistent with · · · 1010· · · type CO
in this region5,23. With further lowering of T, most
(TMTTF)2X undergo transitions to the AFM1 or SP
phase (with X = Br a possible exception, see below).
X = SbF6 at low T lies in the AFM1 region9, with a very
high TCO and relatively low Neel temperature TN = 8
K. As the schematic phase diagram indicates, pressure
suppresses both TCO and TN in this region. For P > 0.5
GPa, (TMTTF)2SbF6 undergoes a transition from the
AFM1 to the SP phase9, the details of which are not com-
pletely understood; any charge disproportionation in the
SP phase is small9. (TMTTF)2ReO4 also has a relatively
high TCO = 225 K, but the low T phase here, reached
following an anion-ordering transition, is spin singlet14.
Nakamura et al. have suggested, based on NMR exper-
iments, that the CO involves the Wigner crystal state,
but the low T state is the · · · 1100· · · BCDW14. Further
along the P axis lie X = AsF6 and PF6, where TCO are re-
duced to 100 K and 65 K, respectively7. The low T phase
in both cases is now SP. Neutron scattering experiments
on (TMTTF)2PF6 have found that the lattice distortion
in the SP state is the expected 2kF BOW distortion,
but that the amplitude of the lattice distortion is much
smaller10 than that found in other organic SP materials
such as MEM(TCNQ)2. The exact pattern of the BOW
has not been determined yet. Experimental evidence ex-
ists that some form of CO persists in the magnetic phases.
For example, the splitting in vibronic modes below TCO
in (TMTTF)2PF6 and (TMTTF)2AsF6, a signature of
charge disproportionation, persists into the SP phase17,
indicating coexistence of CO and SP. At the same time,
the high T CO is in competition with the SP ground
state8, as is inferred from the different effects of pressure
on TCO and TSP : while pressure reduces TCO, it in-
creases TSP . This is in clear contrast to the effect of pres-
sure on TN in X = SbF6. Similarly, deuteration of the hy-
drogen atoms of TMTTF increases TCO but decreases
TSP . That higher TCO is accompanied by lower TSP for
centrosymmetric X (TSP= 16.4 K in X = PF6 and 11.1 K
in X = AsF6) has also been noted48. This trend is in ob-
vious agreement with the occurrence of AFM instead of
SP state under ambient pressure in X = SbF6. Most in-
terestingly, Nakamura et al. have very recently observed
redistribution of the charges on the TMTTF molecules
in (TMTTF)2AsF6 and (TMTTF)2PF6 as these systems
enter the SP phase from CO states15,16. Charge dispro-
portionation, if any, in the SP phase is much smaller
than in the CO phase15,16, which is in apparent agree-
ment with the above observations9,14 in X = ReO4 and
SbF6.
The bulk of the (TMTTF)2X therefore lie in the
AFM1 and SP regions of Fig. 1. (TMTSF)2X, in
contrast, occupy the AFM2 region. Coexisting 2kF
CDW and spin-density wave, SDW, with the same
2kF periodicity49,50 here is explained naturally as the
· · · 1100· · · BCSDW21,22,51. In contrast to the TMTTF
salts discussed above, charge and magnetic ordering in
(TMTTF)2Br occur almost simultaneously46,52. X-ray
studies of lattice distortions point to similarities with
(TMTSF)2PF6,49 indicating that (TMTTF)2Br is also
a · · · 1100· · · BCSDW21. We do not discuss the AFM2 region
in the present paper, as this can be found in our
earlier work21,25.
III. THEORETICAL MODEL AND
CONJECTURES
The 1D Hamiltonian we investigate is written as

H = H_SSH + H_Hol + H_ee   (1a)

H_SSH = t Σ_{i,σ} [1 + α(a†_i + a_i)] (c†_{i,σ} c_{i+1,σ} + h.c.) + ℏω_S Σ_i a†_i a_i   (1b)

H_Hol = g Σ_i (b†_i + b_i) n_i + ℏω_H Σ_i b†_i b_i   (1c)

H_ee = U Σ_i n_{i,↑} n_{i,↓} + V Σ_i n_i n_{i+1}   (1d)
In the above, c†_{i,σ} creates an electron with spin σ (↑,↓) on
molecular site i, n_{i,σ} = c†_{i,σ} c_{i,σ} is the number of electrons
with spin σ on site i, and n_i = Σ_σ n_{i,σ}. U and V are the
on-site and intersite Coulomb repulsions, and a†_i and b†_i
create (dispersionless) Su-Schrieffer-Heeger (SSH)53 and
Holstein (Hol)54 phonons on the ith bond and site, respec-
tively, with frequencies ω_S and ω_H. Because the Peierls
instability involves only phonon modes near q = π, keep-
ing single dispersionless phonon modes is sufficient for
the Peierls transitions to occur55,56. Although purely 1D
calculations cannot yield a finite temperature phase tran-
sition, as in all low dimensional theories57,58 we antici-
pate that the 3D ordering in the real system is principally
determined by the dominant 1D instability.
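As an illustration of the purely electronic part of Eq. (1), the sketch below (a toy example, not the code used in this work; the chain length, filling, and parameter values are chosen only for illustration) builds H_ee plus the undistorted hopping term for two electrons on a four-site open chain and diagonalizes it exactly.

```python
import numpy as np
from itertools import combinations

def basis(n_sites, n_elec):
    """All occupation bitmasks of n_elec fermions on n_sites sites."""
    return [sum(1 << i for i in occ)
            for occ in combinations(range(n_sites), n_elec)]

def build_H(n_sites, n_up, n_dn, t, U, V):
    """Electronic part of Eq. (1): hopping + U + V on an open chain.
    Nearest-neighbour hops carry no fermion sign in this mode ordering."""
    states = [(u, d) for u in basis(n_sites, n_up) for d in basis(n_sites, n_dn)]
    idx = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for (u, d), k in idx.items():
        for i in range(n_sites):
            up_i, dn_i = (u >> i) & 1, (d >> i) & 1
            H[k, k] += U * up_i * dn_i                      # on-site U
            if i + 1 < n_sites:
                n_i = up_i + dn_i
                n_j = ((u >> (i + 1)) & 1) + ((d >> (i + 1)) & 1)
                H[k, k] += V * n_i * n_j                    # intersite V
        for i in range(n_sites - 1):                        # hopping term
            for conf, other, up_moves in ((u, d, True), (d, u, False)):
                if ((conf >> i) & 1) != ((conf >> (i + 1)) & 1):
                    new = conf ^ (1 << i) ^ (1 << (i + 1))
                    k2 = idx[(new, other) if up_moves else (other, new)]
                    H[k2, k] += t
    return H

# quarter filling: 2 electrons (one up, one down) on 4 sites
E0 = np.linalg.eigvalsh(build_H(4, 1, 1, t=1.0, U=0.0, V=0.0))[0]
print(E0)   # ≈ -(1 + sqrt(5)), two noninteracting electrons on the open chain
```

Turning on U and V raises the ground-state energy, since both are positive (repulsive) operators; the same construction, scaled up, underlies the exact-diagonalization checks mentioned in the text.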
The above Hamiltonian includes the most important
terms necessary to describe the family of quasi-1D CTS,
but ignores nonessential terms that may be necessary for
understanding the detailed behavior of individual sys-
tems. Such nonessential terms include (i) the intrinsic
dimerization that characterizes many (TMTTF)2X, (ii)
interaction between counterions and the carriers on the
quasi-1D cations stacks, (iii) interchain Coulomb inter-
action, and (iv) interchain hopping. Inclusion of the in-
trinsic dimerization will make the Wigner crystal ground
state even less likely24, and this is the reason for exclud-
ing it. We have verified the conclusions of reference 24
from exact diagonalization calculations. The inclusion of
interactions with counterions may enhance the Wigner
crystal ordering5,23 for some (TMTTF)2X. We will dis-
cuss this point further below, and argue that it is im-
portant in the AFM1 region of the phase diagram. The
effects of intrinsic dimerization and counterion interac-
tions can be reproduced by modifying the V/|t| in our
Hamiltonian, and thus these are not included explicitly.
Rather the V in Eq. (1a) should be considered as the
effective V for the quasi-1D CTS. Interchain hopping, at
least in the TMTTF (though not in the TMTSF), is in-
deed negligible. The interchain Coulomb interaction can
promote the Wigner crystal within a rectangular lattice
framework, but for the realistic nearly triangular lattice
will cause frustration, thereby making the BCDW state
of interest here even more likely. We therefore believe
that Hamiltonian (1a) captures the essential physics of
the quasi-1D CTS.
For applications to the quasi-1D CTS we will be inter-
ested in the parameter regime18,24,25,26 |t| = 0.1 − 0.25
eV, U/|t| = 6 − 8. The exact value of V is less well known,
but since the same cationic molecules of interest, as well
as other related molecules (e.g., HMTTF, HMTSF,
etc.), also form quasi-1D 1/2-filled band Mott-Hubbard
semiconductors with short-range antiferromagnetic spin
correlations59,60, it must be true that V < U/2 (since for
V > U/2 the 1D 1/2-filled band is a CDW61,62). Two other
known theoretical results now fix the range of V that
will be of interest. First, the ground state of the 1/4-filled
band within Hamiltonian (1a) in the limit of zero e-p
coupling is the Wigner crystal · · · 1010· · · only for suffi-
ciently large V > Vc(U). Second, Vc(U → ∞) = 2|t|, and
is larger for finite U19,24,25. With the known U/|t| and
the above restrictions in place, it is now easily concluded
that (i) the Wigner crystal is obtained in the CTS for a
relatively narrow range of realistic parameters, and (ii)
even in such cases the material’s V is barely larger than
Vc(U)24,25.
We now go beyond the above ground state theories
of spatial broken symmetries to make the following cru-
cial observation: each different spin subspace of Hamil-
tonian (1a) must have its own Vc at which the Wigner
crystal is formed. This conclusion of a spin-dependent
Vc = Vc(U, S) follows from the comparison of the fer-
romagnetic subspace with total spin S = Smax and the
S = 0 subspace. The ferromagnetic subspace is equiva-
lent to the 1/2-filled spinless fermion band, and therefore
Vc(U, Smax) is independent of U and is exactly 2|t|. The
increase of Vc(U, S = 0) with decreasing U19,25 is then
clearly related to the occurrence of doubly occupied and
vacant sites at finite U in the S = 0 subspace. Since
the probability of double occupancy (for fixed U and V )
decreases monotonically with increasing S, we can in-
terpolate between the two extreme cases of S = 0 and
S = Smax to conclude Vc(U, S) > Vc(U, S+1). We prove
this explicitly from numerical calculations in the next
section.
Our conjecture regarding spin-dependent Vc(U) in turn
implies that there exist three distinct parameter regimes
for realistic U and V : (i) V ≤ Vc(U, Smax) = 2|t|, in
which case the ground state is the BCDW with 2kF CO
and the high-temperature 4kF state is a BOW18,25; (ii)
V > Vc(U, S = 0), in which case both the 4kF and
the 2kF phases have Wigner crystal CO; (iii) and the
intermediate regime 2|t| ≤ V ≤ Vc(U, S = 0). Pa-
rameter regime (i) has been discussed in our previous
work18,21,25,63. This is the case where the description of
the 1/4-filled band as an effective 1/2-filled band is appropri-
ate: the unit cell in the 4kF state is a dimer of two sites,
and the 2kF transition can be described as the usual SP
dimerization of the dimer lattice. We will not discuss
this parameter region further. Given the U/|t| values for
the CTS, parameter regime (ii) can have rather narrow
width. For U/|t| = 6, Vc(U, S = 0) = 3.3|t|, and there
is no value of realistic V for which the ground state is a
Wigner crystal. For U/|t| = 8, Vc(U, S = 0) = 3.0|t|, and
the widths of parameter regime (iii), 2|t| ≤ V ≤ 3|t| and
of parameter regime (ii), 3|t| ≤ V ≤ 4|t|, are comparable.
We will discuss parameter regime (ii) only briefly, as
the physics of this region has been discussed by other
authors20,26. We investigated thoroughly the intermedi-
ate parameter regime (iii), which has not been studied
previously. Within the intermediate parameter regime
we expect a novel transition from the BCDW in the 2kF
state at low T to a · · · 1010· · · CO in the 4kF phase at
high T that has not been discussed in the theoretical lit-
erature.
Our observation regarding the T-dependent behavior
of these CTS follows from the more general observation
that thermodynamic behavior depends on the free en-
ergy and the partition function. For the strong e-e inter-
actions of interest here, thermodynamics of 1D systems
at temperatures of interest is determined almost entirely
by spin excitations. Since multiplicities of spin states
increase with the total spin S, the partition function is
dominated by high (low) spin states with large (small)
multiplicities at high (low) temperatures. While at T=0
such a system must be a BCDW for V < Vc(U, S = 0), as
the temperature is raised, higher and higher spin states
begin to dominate the free energy, until V for the ma-
terial in question exceeds Vc(U, S), at which point the
charge occupancy reverts to · · · 1010· · · . We demonstrate
this explicitly in the next section. A charge redistribu-
tion is expected in such a system at T2kF , as the devi-
ation from the average charge of 0.5 is much smaller in
the BCDW than in the Wigner crystal25.
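The free-energy argument above can be made concrete with a toy calculation. The sketch below (illustrative only; the E(S) values are hypothetical, chosen merely to respect the Lieb-Mattis ordering E(S) < E(S+1)) weights each total-spin sector of n = 8 spins-1/2 by its multiplicity times a Boltzmann factor, and shows the dominant sector moving from S = 0 to higher S as T grows.

```python
import math

def sector_weights(n, T, E):
    """Total weight of each spin sector S for n spin-1/2 electrons:
    (number of S multiplets) x (2S+1) x exp(-E(S)/T)."""
    weights = {}
    for twoS in range(n % 2, n + 1, 2):
        S = twoS / 2
        k = n // 2 - twoS // 2
        # number of distinct spin-S multiplets for n spins-1/2
        mult = math.comb(n, k) - (math.comb(n, k - 1) if k >= 1 else 0)
        weights[S] = mult * (twoS + 1) * math.exp(-E(S) / T)
    return weights

E = lambda S: 0.3 * S * (S + 1)   # hypothetical, Lieb-Mattis ordered
low = sector_weights(8, 0.05, E)
high = sector_weights(8, 100.0, E)
print(max(low, key=low.get), max(high, key=high.get))   # 0.0 2.0
```

At β → 0 the weights reduce to the bare degeneracies (which sum to 2^8 = 256), so the partition function is dominated by the high-multiplicity intermediate-to-high spin sectors, exactly as argued in the text.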
The above conjecture leads to yet another novel impli-
cation. The ground state for both weak and intermediate
intersite Coulomb interaction parameters is the BCDW,
even as the 4kF phases are different in the two cases:
BOW in the former and CO in the latter. This necessarily
requires the existence of two different kinds of spin exci-
tations from the BCDW. Recall that within the standard
theory of the SP transition in the 1/2-filled band57,64,65,
thermal excitations from the T = 0 ground state generate
spin excitations with bond-alternation domain walls
(solitons), with the phase of bond alternation in between
the solitons being opposite to that outside the solitons.
Progressive increase in T generates more and more soli-
tons with reversed bond alternation phases, until overlaps
between the two phases of bond alternations lead ulti-
mately to the uniform state. A key difference between
the 1/2-filled and 1/4-filled SP states is that site charge oc-
cupancies in spin excitations in the former continue to be
uniform, while they are necessarily nonuniform in the lat-
ter case, as we show in the next section. We demonstrate
that defect centers with two distinct charge occupancies
(that we will term as type I and II), depending upon the
actual value of V , are possible in the 1/4-filled band. Pre-
ponderance of one or another type of defects generates
the distinct 4kF states.
IV. RESULTS
We present in this section the results of QMC investi-
gations of Eq. (1a), for both zero and nonzero e-p cou-
plings, and of ED studies of the adiabatic (semiclassical)
limit of Eq. (1a). Using QMC techniques, we demon-
strate explicitly the spin-dependence of Vc(U), as well as
the transition from the Wigner crystal to the BCDW for
the intermediate parameter regime (iii). The ED studies
demonstrate the exotic nature of spin excitations from
the BCDW ground state. In what follows, all quantities
are expressed in units of |t| (|t|=1).
The QMC method we use is the Stochastic Series Ex-
pansion (SSE) method using the directed loop update
for the electron degrees of freedom66. For 1D fermions
with nearest-neighbor hopping SSE provides statistically
exact results with no systematic errors. While the SSE
directed-loop method is grand canonical (with fluctuat-
ing particle density), we restrict measurements to only
the 1/4-filled density sector to obtain results in the canoni-
cal ensemble. Quantum phonons are treated within SSE
by directly adding the bosonic phonon creation and an-
nihilation operators in Eq. (1a) to the series expansion67.
An upper limit in the phonon spectrum must be imposed,
but can be set arbitrarily large to avoid any systematic
errors67. For the results shown below we used a cutoff of
100 SSH phonons per bond and either 30 (for g = 0.5) or
50 (for g = 0.75) Holstein phonons per site.
The observables we calculate within SSE are the
standard wavevector-dependent charge structure factor
Sρ(q), defined as

Sρ(q) = (1/N) Σ_{j,k} e^{iq(j−k)} 〈O^ρ_j O^ρ_k〉   (2)

and the charge and bond-order susceptibilities χρ(q) and
χB(q), defined as

χ_x(q) = (1/N) Σ_{j,k} e^{iq(j−k)} ∫_0^β dτ 〈O^x_j(τ) O^x_k(0)〉   (3)

In Eqs. (2) and (3), N is the number of lattice sites,
O^ρ_j = n_{j,↑} + n_{j,↓}, O^B_j = Σ_σ (c†_{j+1,σ} c_{j,σ} + h.c.), and β is
the inverse temperature, with T expressed in units of |t|.
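As an aside, an equal-time estimator of the form of Eq. (2) reduces, for the density operator, to the average of |δn(q)|²/N over Monte Carlo snapshots. A minimal snapshot-based sketch (not the SSE code used in this work; the input data are hypothetical):

```python
import numpy as np

def charge_structure_factor(samples):
    """S_rho(q) = (1/N) sum_{j,k} e^{iq(j-k)} <dn_j dn_k>,
    estimated from occupation snapshots (rows = configurations)."""
    samples = np.asarray(samples, dtype=float)
    dn = samples - samples.mean()          # fluctuation about mean density
    nq = np.fft.fft(dn, axis=1)            # dn(q) for each snapshot
    N = samples.shape[1]
    q = 2 * np.pi * np.arange(N) / N
    return q, (np.abs(nq) ** 2).mean(axis=0) / N

# a perfect ...1010... Wigner-crystal pattern peaks at q = 4kF = pi
q, S = charge_structure_factor([[1, 0, 1, 0, 1, 0, 1, 0]])
print(S[len(q) // 2])   # ≈ 2.0 : the S(pi) peak
```

For the quarter-filled band (density 0.5 per site), kF = π/4, so the · · · 1010· · · pattern shows up as a peak at q = 4kF = π, while wavevectors away from π carry no weight for this idealized configuration.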
The presence of CO or BOW can be detected by the
divergence of the 2kF or 4kF charge or bond-order sus-
ceptibility as a function of increasing system size. Strictly
speaking, in a purely 1D model these functions diverge
only at T = 0; as already explained above, we make the
reasonable assumption58 that in the presence of realis-
tic inter-chain couplings transitions involving charge or
bond-order instabilities, as determined by the dominant
susceptibility, occur at finite T.
A. Spin-dependent Vc(U)
We first present computational results within Hamil-
tonian (1a) in the absence of e-p coupling to demon-
strate that the Vc(U) at which the Wigner crystal order
is established in the lowest state of a given spin sub-
space decreases with increasing spin S. Our computa-
tional approach conserves the total z-component of spin
Sz and not the total S. Since the Lieb-Mattis theorem
E(S) < E(S+1), where E(S) is the energy of the lowest
state in the spin subspace S, applies to the 1D Hamilto-
nian (1a), and since in the absence of a magnetic field
all Sz states for a given S are degenerate, our results for
the lowest state within each different Sz must pertain to
S = Sz.
To determine Vc(U, S) we use the fact that the purely
electronic model is a Luttinger liquid (LL) for V < Vc
with correlation functions determined by a single expo-
nent Kρ (see Reference 69 for a review). The Wigner
crystal state is reached when Kρ = 1/4. The exponent
Kρ may be calculated from the long-wavelength limit of
Sρ(q)70. In Fig. 2(a) we have plotted our calculated Kρ
for U = 8 as a function of V for different Sz sectors. The
temperature chosen is small enough (β = 2N) that in all
cases the results correspond to the lowest state within
a given Sz.

FIG. 2: Vc for Eq. (1a) in the limit of zero e-p interactions
(α = g = 0) as a function of Sz. Results are for an N = 32
site periodic ring with U = 8. For N = 32, Sz = 8 corresponds
to the fully polarized (spinless fermion) limit. (a) Luttinger
liquid exponent Kρ as a function of V ; Kρ = 1/4 determines
the boundary for the · · · 1010· · · CO phase. (b) Vc plotted vs. Sz.

In Fig. 2(b) we have plotted our calculated
Vc(U = 8), as obtained from Fig. 2(a), as a function of
Sz. Vc is largest for Sz = 0 and decreases with increas-
ing Sz, in agreement with the conjecture of Section III.
Importantly, the calculated Vc for Sz = 8 is close to the
correct limiting value of 2, indicating the validity of our
approach. We have not performed any finite-size scaling
in Fig. 2, which accounts for the slight deviation from
the exact value of 2.
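For completeness, the long-wavelength estimate of Kρ mentioned above can be coded in a few lines. In one common normalization (an assumption here; conventions differ by numerical factors), Sρ(q) → Kρ·q/π as q → 0, so the smallest nonzero wavevector of a finite ring gives the estimate:

```python
import numpy as np

def luttinger_K(q, S):
    """Estimate K_rho from the long-wavelength limit of the charge
    structure factor, assuming S_rho(q) ~ K_rho * q / pi for q -> 0."""
    return np.pi * S[1] / q[1]   # smallest nonzero q on a finite ring

N = 32
q = 2 * np.pi * np.arange(N) / N
S_synthetic = q / np.pi          # synthetic LL data with K_rho = 1
print(luttinger_K(q, S_synthetic))   # ≈ 1.0
```

In practice one would feed in the QMC-measured Sρ(q) and, as in the text, locate the V at which the estimate crosses the Wigner-crystal boundary value Kρ = 1/4.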
B. T-dependent susceptibilities
We next present the results of QMC calculations within
the full Hamiltonian (1a). To reproduce correct relative
energy scales of intra- and intermolecular phonon modes
in the CTS, we choose ωH > ωS , specifically ωH = 0.5
and ωS = 0.1 in our calculations. Small deviations from
these values do not make any significant difference. In
all cases we have chosen the electron-molecular vibration
coupling g larger than the coupling between electrons and
the SSH phonons α, thereby deliberately enhancing the
FIG. 3: (color online) QMC results for the temperature-
dependent charge susceptibilities for a N=64 site periodic
ring with U = 8, V = 2.75, α = 0.15, ωS = 0.1, g = 0.5,
and ωH = 0.5. (a) and (b) Wavevector-dependent charge
and bond-order susceptibilities. (c) 2kF and 4kF charge sus-
ceptibilities as a function of temperature. (d) 2kF and 4kF
bond-order susceptibilities as a function of temperature. If
error bars are not shown, statistical error bars are smaller
than the symbol sizes. Lines are guides to the eye.
likelihood of the Wigner crystal CO. We report results
only for intermediate and strong intersite Coulomb
interactions; the weak interaction regime V < 2 has been
discussed extensively in our previous work25.
1. 2 ≤ V ≤ Vc(U, S = 0)
Our results are summarized in Figs. 3–5, where we re-
port results for two different V of intermediate strength
and several different e-p couplings. Fig. 3 first shows re-
sults for relatively weak SSH coupling α. The charge sus-
ceptibility is dominated by a peak at 4kF = π (Fig. 3(a)).
Figs. 3(c) and (d) show the T-dependence of the 2kF
as well as 4kF charge and bond susceptibility. For
V < Vc(U, S = 0) the 4kF charge susceptibility does
not diverge with system size, and the purely 1D system
remains a LL with no long range CO at zero tempera-
ture. The dominance of χρ(4kF ) over χρ(2kF ) suggests,
however, that · · · 1010· · · CO will likely occur in the 3D
FIG. 4: (color online) Same as Fig. 3, but with parameters
U = 8, V = 2.25, α = 0.27, ωS = 0.1, and ωH = 0.5. In panels
(a) and (b), data are for g = 0.50 only. In panels (c) and (d),
data for both g = 0.50 and g = 0.75 are shown. Arrows
indicate temperature where χρ(2kF ) = χρ(4kF ) (solid and
broken arrows correspond to g = 0.50 and 0.75, respectively.)
system, especially if this order is further enhanced due to
interactions with the counterions9. We have plotted the
bond susceptibilities χB(2kF ) and χB(4kF ) in Fig. 3(d).
A SP transition requires that χB(2kF ) diverges as T → 0.
That χB(2kF ) is weaker than χB(4kF ) at low T indicates that
the SP order is absent in the present case with weak SSH
e-p coupling. This result is in agreement with the earlier
result55 that the SP order is obtained only above a critical
αc (αc may be smaller for the infinite system than found
in our calculation). The most likely scenario with the
present parameters is the persistence of the · · · 1010· · ·
CO to the lowest T with no SP transition. These pa-
rameters, along with counterion interactions, could then
describe the (TMTTF)2Xmaterials with an AFM ground
state9.
In Figs. 4 and 5 we show our results for larger
SSH e-p coupling α and two different intersite Coulomb
interaction V=2.25 and 2.75. The calculations of Fig. 4
were done for two different Holstein e-p couplings g. For
both the V parameters, χρ(4kF ) dominates at high T
but χρ(2kF ) is stronger at low T in both Fig. 4(c) and
Fig. 5(c). The crossing between the two susceptibilities
FIG. 5: (color online) Same as Fig. 3, but with parameters
U = 8, V = 2.75, α = 0.27, ωS = 0.1, g = 0.5, and ωH = 0.5.
Arrow indicates temperature where χρ(2kF ) = χρ(4kF ).
is clear indication that in the intermediate parameter
regime, as T is lowered the 2kF CDW instability domi-
nates over the 4kF CO.
The rise in χρ(2kF ) at low T is accompanied by a
steep rise in χB(2kF ) in both cases (see Fig. 4(d) and
Fig. 5(d)). Importantly, unlike in Fig. 3(d), χB(2kF ) in
these cases clearly dominates over χB(4kF ) by an order
of magnitude at low temperatures. There is thus a clear
signature of the SP instability for these parameters. The
simultaneous rise in χB(2kF ) and χρ(2kF ) indicates that
the SP state is the · · · 1100· · · BCDW. Comparison of
Figs. 4 and 5 indicates that the effect of larger V is to
decrease the T where the 2kF and 4kF susceptibilities
cross. Since larger V would imply larger TCO, this result
implies that larger TCO is accompanied by lower TSP .
Our calculations are for relatively modest α < g. Larger
α (not shown) further strengthens the divergence of 2kF
susceptibilities.
The motivation for performing the calculations of
Fig. 4 with multiple Holstein couplings was to deter-
mine whether it is possible to have a Wigner crystal at
low T even for V < Vc(U, S = 0), by simply increas-
ing g. The argument for this would be that in strong
coupling, increasing g amounts to an effective increase
in V 71. The Holstein coupling cannot be increased arbi-
FIG. 6: (color online) Same as Fig. 3, but with parameters
U = 8, V = 3.5, α = 0.24, ωS = 0.1, g = 0.5, and ωH = 0.5.
trarily, however, as beyond a certain point, g promotes
formation of on-site bipolarons67. Importantly, the co-
operative interaction between the 2kF BOW and CDW
in the BCDW21,25 implies that in the V < Vc(U, S = 0)
region, larger g not only promotes the 4kF CO but also
enhances the BCDW. In our calculations in Fig. 4(b)
and (c) we find both these effects: a weak increase in
χρ(4kF ) at intermediate T, and an even stronger increase
in the T → 0 values of χρ(2kF ) and χB(2kF ). Actually,
this result is in qualitative agreement with our obser-
vation for the ground state within the adiabatic limit
of Eq. (1a) that in the range 0 < V < Vc(U, S = 0),
V enhances the BCDW25. The temperature at which
χρ(2kF ) and χρ(4kF ) cross does not change significantly
with larger g. Our results for g = 0.75 for V = 2.75 (not
shown) are similar, except that the data are more noisy
now (probably because all parameters in Fig. 5 are much
too large for the larger g). We conclude therefore that in
the intermediate V region, merely increasing g does not
change the BCDW nature of the SP state. For g to have
a qualitatively different effect, V should be much closer
to Vc(U, S = 0) or perhaps even larger.
2. V > Vc(U, S = 0)
In principle, calculations of low temperature instabili-
ties here should be as straightforward as the weak inter-
site Coulomb interaction V < 2t regime. The 4kF CO -
AFM1 state ↑ 0 ↓ 0 would occur naturally for the case
of weak e-p coupling. Obtaining the 4kF CO-SP state
1 = 0 = 1 · · · 0 · · · 1, with realistic V < U/2 is, however,
difficult25. Previous work72, for example, finds this state
for V > U/2. Recent T-dependent mean-field calculations
of Seo et al.39 also fail to find this state for nonzero V .
There are two reasons for this. First, the spin exchange
here involves charge-rich sites that are second neighbors,
and is hence necessarily small. The energy gained upon
alternation of this weak exchange interaction is even
smaller, and thus the tendency to this particular form
of the SP transition is weak to begin with. Second, this
region of the parameter space involves either large U
(e.g., U = 10, for which Vc(U, S = 0) ≃ 2) or relatively
large V (e.g., Vc(U, S = 0) ≃ 3 for U = 8). In either
case such strong Coulomb interactions make the applica-
bility of mean-field theories questionable.
Our QMC calculations do find the tendency to SP in-
stability in this parameter region. In Fig. 6 we show
QMC results for V > Vc(U = 8, S = 0). In contrast to
Figs. 4 and 5, χρ(4kF ) now dominates over χρ(2kF ) at
all T. The weaker peak at q = 2kF at low T is due to
small differences in site charge populations between the
charge-poor sites of the Wigner crystal that arises upon
bond distortion25, and that adds a small period 4 compo-
nent to the charge modulation. The bond susceptibility
χB(q) has a strong peak at 2kF and a weaker peak at 4kF ,
exactly as expected for the 4kF CO-SP state. Previous
work has shown that the difference in charge densities
between the charge-rich and charge-poor sites in the 4kF
CO-SP ground state is considerably larger than in the
BCDW25.
C. Spin excitations from the BCDW
The above susceptibility calculations indicate that in
the intermediate V region (Figs. 4 and 5) the BCDW
ground state with · · · 1100· · · CO can evolve into the
· · · 1010· · · CO as T increases. Our earlier work had
shown that for weak V , the BCDW evolves into the 4kF
BOW at high T18,25. Within the standard theory of the
SP transition57,64,65, applicable to the 1/2-filled band, the
SP state evolves into the undistorted state for T > TSP .
Thus the 4kF distortions, CO and BOW, take the role of
the undistorted state in the 1/4-filled band, and it appears
paradoxical that the same ground state can evolve into
two different high T states. We show here that this is inti-
mately related to the nature of the spin excitations in the
1/4-filled BCDW. Spin excitations from the conventional
SP state leave the site charge occupancies unchanged.
We show below that not only do spin excitations from
the 1/4-filled BCDW ground state lead to changes in the
site occupancies, but also that two different kinds of site
occupancies are possible in the localized defect states that characterize
spin excited states here. We will refer to these as type
I and type II defects, and depending on which kind of
defect dominates at high T (which in turn depends on
the relative magnitudes of V and the e-p couplings), the
4kF state is either the CO or the BOW.
We will demonstrate the occurrence of type I and II
defects in spin excitations numerically. Below we present
an intuitive physical explanation of this highly unusual
behavior, based on a configuration space picture. Very
similar configuration space arguments can be found else-
where in our discussion of charge excitations from the
BCDW in the interacting 1/4-filled band73.
We begin our discussion with the standard 1/2-filled
band, for which the mechanism of the SP transition is
well understood57,64,65. Fig. 7(a) shows in valence bond
(VB) representation the generation of a spin triplet from
the standard 1/2-filled band. Since the two phases of bond
alternation are isoenergetic, the two free spins can sepa-
rate, and the true wavefunction is dominated by VB dia-
grams as in Fig. 7(b), where the phase of the bond alter-
nation in between the two unpaired spins (spin solitons)
is opposite to that in the ground state. With increasing
temperature and increasing number of spin excitations
there occur many such regions with reversed bond al-
ternations, and overlaps between regions with different
phases of bond alternations lead ultimately to the uni-
form state.
The above picture needs to be modified for the dimer-
ized dimer BCDW state in the 1/4-filled band, in which
the single site of the 1/2-filled band system is replaced by
a dimer unit with site populations 1 and 0 (or 0 and 1),
and the stronger interdimer 1–1 bond (weaker interdimer
0· · · 0 bond) corresponds to the strong (weak) bond in the
standard SP case. Fig. 7(c), showing triplet generation
from the BCDW, is the 1/4-filled analog of Fig. 7(a): a
singlet bond between the dimer units has been broken to
generate a localized triplet. The effective repulsion be-
tween the free spins of Fig. 7(a) is due to the absence
of binding between them, and the same is expected in
Fig. 7(c). Because the site occupancies within the dimer
units are nonuniform now, the repulsion between the
spins is reduced from changes in intradimer as well as
interdimer site occupancies: the site populations within
the neighboring units can become uniform (0.5 each), or
the site occupancies can revert to 1001 from 0110. There
is no equivalent of this step in the standard SP case.
The next steps in the separation of the resultant defects
are identical to that in Fig. 7(b), and we have shown
these possible final states in Fig. 7(d) and Fig. 7(e), for
the two different intraunit spin defect populations. For
V < Vc(U, S), defect units with site occupancies 0.5
(type I defects) are expected to dominate, while
for V > Vc(U, S) site populations of 10 and 01 (type II
defects) dominate. From the qualitative discussions it is
difficult to predict whether the defects are free, in which
case they are solitons, or if they are bound, in which case
FIG. 7: (a) and (b) S = 1 VB diagrams in the Heisenberg SP
chain. The SP bond order in between the S = 1/2 solitons in (b)
is opposite to that elsewhere. (c) 1/4-filled band equivalent of
(a); the singlet bonds in the background BCDW are between
units containing a pair of sites but a single electron (see text).
(d) and (e) 1/4-filled band equivalents of (b). Defect units with
two different charge distributions are possible. The charge
distribution will depend on V .
two of them constitute a triplet. What is more relevant
is that type I defects generate bond dimerization locally
(recall that the 4kF BOW has uniform site charge densi-
ties) while type II defects generate local site populations
1010, which will have the tendency to generate the 4kF CO.
The process by which the BCDW is reached from the
4kF BOW or the CO as T is decreased from T2kF is ex-
actly opposite to the discussion of spin excitations from
the BCDW in Fig. 7. Now · · · 1100· · · domain walls
appear in the background bond-dimerized or charge-
dimerized state. In the case of the · · · 1010· · · 4kF state,
the driving force for the creation of such a domain wall
is the energy gained upon singlet formation (for V <
Vc(U,S)).
Fig. 7(e) suggests the appearance of localized
· · · 1010· · · regions in excitations from the · · · 1100· · ·
BCDW SP state for 2|t| ≤ V ≤ Vc(U, S = 0). We have
verified this with exact spin-dependent calculations for N
= 16 and 20 periodic rings with adiabatic e-p couplings,
H^A_SSH = t Σ_{i,σ} [1 − α(u_i − u_{i+1})] (c†_{i,σ} c_{i+1,σ} + h.c.) + (K_S/2) Σ_i (u_i − u_{i+1})^2   (4a)

H^A_Hol = g Σ_i v_i n_i + (K_H/2) Σ_i v_i^2   (4b)
and Hee as in Eq. (1d). In the above ui is the displace-
ment from equilibrium of a molecular unit and vi is a
molecular mode. Note that due to the small sizes of
the systems we are able to solve exactly, the e-p cou-
plings in Eq. (4a) and Eq. (4b) cannot be directly com-
pared to those in Eq. (1a). In the present case, we
take the smallest e-p couplings necessary to generate the
BCDW ground state self-consistently within the adia-
batic Hamiltonian22,23,25. We then determine the bond
distortions, site charge occupancies, and spin-spin correlations
self-consistently with the same parameters for
excited states with Sz > 0.

FIG. 8: (a) Self-consistent site charges (vertical bars) and
spin-spin correlations (points; lines are guides to the eye) in
the N=20 periodic ring with U = 8, V = 2.75, α^2/K_S = 1.5,
g^2/K_H = 0.32, for Sz = 2. (b) Same parameters, except
Sz = 3. Black and white circles at the bottoms of the panels
denote charge-rich and charge-poor sites, respectively, while
gray circles denote sites with population nearly exactly 0.5.
The arrows indicate locations of the defect units with nonzero
spin densities.
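The self-consistency loop for adiabatic lattice degrees of freedom can be illustrated on a much simpler toy problem: a half-filled spinless ring with SSH-type bond modulation (a sketch only, with λ = α²/K_S as the single effective coupling; this is not the interacting N = 16, 20 calculation reported above). Minimizing the total adiabatic energy over the dimerization amplitude reproduces the expected Peierls instability:

```python
import numpy as np

def adiabatic_energy(delta, n_sites=16, lam=1.0):
    """Total adiabatic energy per site of a half-filled spinless ring
    with alternating bond distortion +-delta: electronic energy from
    exact diagonalization plus elastic cost delta^2/(2*lam)."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites):
        j = (i + 1) % n_sites
        H[i, j] = H[j, i] = -(1.0 + delta * (-1) ** i)  # modulated hopping
    eps = np.linalg.eigvalsh(H)
    e_el = eps[: n_sites // 2].sum() / n_sites          # fill lowest N/2 levels
    return e_el + delta ** 2 / (2.0 * lam)

deltas = np.linspace(0.0, 0.8, 81)
energies = [adiabatic_energy(d) for d in deltas]
d_star = deltas[int(np.argmin(energies))]
print(d_star > 0)   # True: the ring gains energy by dimerizing
```

The interacting calculation in the text works the same way in spirit, but with the many-body ground (or lowest-Sz) state replacing the filled Fermi sea, and with independent {u_i, v_i} iterated to self-consistency rather than a single alternating amplitude.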
In Fig. 8(a) we have plotted the deviations of the site
charge populations from the average density of 0.5 for the
lowest Sz=2 state for N = 20, for U = 8, V = 2.75 and e-p
couplings as given in the figure caption. Deviations of the
charge occupancies from the perfect · · · 0110· · · sequence
identify the defect centers, as seen from Fig. 7(c) - (e).
In the following we refer to dimer units composed of sites
i and j as [i,j]. Based on the charge occupancies in the
figure, we identify units [1,2], [7,8], [13,14] and [14,15] as the
defect centers. Furthermore, based on site populations of
nearly exactly 0.5, we identify defects on units [1,2] and
[7,8] as type I; the populations on units [13,14] and [14,15]
identify them as type II. Type I defects appear to be free
and soliton like, while type II defects appear to be bound
into a triplet state, but both could be finite size effects.
Fig. 7(d) and (e) suggest that the spin-spin z-
component correlations, 〈S^z_i S^z_j〉, are large and positive
only between pairs of sites which belong to the defect cen-
ters, as all other spins are singlet-coupled. For our char-
acterization of units [1,2], [7,8], [13,14] and [14,15] to be
correct, therefore, 〈S^z_i S^z_j〉 must be large and positive if
sites i and j both belong to this set of sites, and small
(close to zero) when either i or j does not belong to
this set (as all sites that do not belong to the set are
singlet-bonded). We have superimposed the calculated
z-component spin-spin correlations between site 2 and
all sites j = 1 – 20. The spin-spin correlations are in
complete agreement with our characterization of units
[1,2], [7,8], [13,14] and [14,15] as defect centers with free
spins. The singlet 1–1 bonds between nearest neighbor
charge-rich sites in Fig. 7(c) - (e) require that spin-spin
couplings between such pairs of sites are large and neg-
ative, while spin-spin correlations between one member
of the pair and any other site are small. We have verified such
strong spin-singlet bonds between sites 4 and 5, 10 and
11, and 18 and 19, respectively (not shown). Thus mul-
tiple considerations lead to the identification of the same
sites as defect centers, and to their characterization as
types I and II.
We report similar calculations in Fig. 8(b) for Sz = 3.
Based on charge occupancies, four out of the six defect
units with unpaired spins in the Sz = 3 state are type II;
these occupy units [3,4], [9,10], [13,14] and [19,20]. Type
I defects occur on units [1,2] and [11,12]. As indicated in
the figure, spin-spin correlations are again large between
site 2 and all other sites that belong to this set, while they
are again close to zero when the second site is not a defect
site. As in the previous case, we have verified singlet spin
couplings between nearest neighbor pairs of charge-rich
sites. There are then exact correspondences between
charge densities and spin-spin correlations, just as for
Sz = 2.
The susceptibility calculations in Fig. 4 and Fig. 5 are
consistent with the microscopic calculations of spin de-
fects presented above. As 4kF defects are added to the
2kF background, the 2kF susceptibility peak is expected
to broaden and shift towards higher q. This is exactly
what is seen in the charge susceptibility, Fig. 4(a) and
Fig. 5(a) as T is increased. A similar broadening and
shift is seen in the bond order susceptibility as well.
V. DISCUSSIONS AND CONCLUSIONS
In summary, the SP state in the 1/4-filled band CTS is
unique for a wide range of realistic Coulomb interactions.
Even when the 4kF state is the Wigner crystal, the SP
state can be the · · · 1100· · · BOW. For U = 8, for exam-
ple, the transition found here will occur for 2 < V < 3.
This novel T-dependent transition from the Wigner crys-
tal to the BCDW is a consequence of the spin-dependence
of Vc. Only for V > 3 here can the SP phase be the
tetramerized Wigner crystal 1 = 0 = 1 · · · 0 · · · 1 (note,
however, that V ≤ 4 for U = 8). We have ignored the
intrinsic dimerization along the 1D stacks in our reported
results, but this increases Vc(U, S = 0) even further, and
makes the Wigner crystal that much more unlikely24.
Although even larger U (U = 10, for example) reduces
Vc(U, S = 0), we believe that the Coulomb interactions
in the (TMTTF)2X lie in the intermediate range.
A Wigner crystal to BCDW transition would explain
most of the experimental surprises discussed in Section
II. The discovery of the charge redistribution upon en-
tering the SP phase15,16 in X = AsF6 and PF6 is prob-
ably the most dramatic illustration of this. Had the SP
state maintained the same charge modulation pattern
as the Wigner crystal that exists above T2kF , the dif-
ference in charge densities between the charge-rich and
the charge-poor sites would have changed very slightly25.
The dominant effect of the SP transition leading to
1 = 0 = 1 · · · 0 · · · 1 is only on the charge-poor sites,
which are now inequivalent (note that the charge-rich
sites remain equivalent). The difference in charge den-
sity of the charge-rich sites and the average of the charge
density of the charge-poor sites thus remains the same25.
In contrast, the difference in charge densities between
the charge-rich and the charge-poor sites in the BCDW
is considerably smaller than in the Wigner crystal25,
and we believe that the experimentally observed smaller
charge density difference in the SP phase simply reflects
its BCDW character.
The experiments of references 15,16 should not be
taken in isolation: we ascribe the competition between
the CO and the SP states in X = PF6 and AsF6, as
reflected in the different pressure-dependences of these
states8, to their having different site charge occupan-
cies. The observation that the charge density difference
in X = SbF6 decreases considerably upon entering the SP
phase from the AFM1 phase9 can also be understood if
the AFM1 and the SP phases are assigned to be Wigner
crystal and the BCDW, respectively. The correlation between
larger TCO and smaller TSP48 is expected. Larger
TCO implies larger effective V , which would lower TSP .
This is confirmed from comparing Figs. 4 and 5: the
temperature at which χρ(2kF ) begins to dominate over
χρ(4kF ) is considerably larger in Fig. 4 (smaller V ) than
in Fig. 5 (larger V ). The isotope effect, strong enhance-
ment of the TCO (from 69 K to 90 K) with deuteration
of the methyl groups in X = PF6, and concomitant
decrease in TSP6,47 are explained along the same
line. Deuteration decreases ωH in Eq. (1a), which has
the same effect as increasing V . Thus from several dif-
ferent considerations we come to the conclusion that the
transition from the Wigner crystal to the BCDW that we
have found here theoretically for intermediate V/|t| does
actually occur in (TMTTF)2X that undergo SP transi-
tion. This should not be surprising. Given that the 1:2
anionic CTS lie in the “weak” V/|t| regime, TMTTF with
only slightly smaller |t| (but presumably very similar V ,
since intrastack intermolecular distances are comparable
in the two families) lies in the “intermediate” as opposed
to “strong” V/|t| regime.
Within our theory, the two different antiferromagnetic
regions that straddle the SP phase in Fig. 1, AFM1 and
AFM2, have different charge occupancies. The Wigner
crystal character of the AFM1 region is revealed from the
similar behavior of TN and TCO in (TMTTF)2SbF6 un-
der pressure9, indicating the absence of the competition
of the type that exists between CO and SP, in agree-
ment with our assignment. The occurrence of a Wigner
crystal AFM1 instead of SP does not necessarily imply
a larger V/|t| in the SbF6. A more likely reason is that
the interaction with the counterions is strong here, and
this interaction together with V pins the electrons on al-
ternate sites. (TMTSF)2X, and possibly (TMTTF)2Br,
belong to the AFM2 region. The observation that the
CDW and the spin-density wave in the TMTSF have the
same periodicities1 had led to the conclusion that the
charge occupancy here is · · · 1100· · · 21. This conclusion
remains unchanged. Finally, we comment that the ob-
servation of Wigner crystal CO3 in (DI-DCNQI)2Ag is
not against our theory, as the low T phase here is an-
tiferromagnetic and not SP. We predict the SP system
(DMe-DCNQI)2Ag to have the · · · 1100· · · charge ordering.
In summary, it appears that the key concept of spin-
dependent Vc within Eq. (1a) can resolve most of the
mysteries associated with the temperature dependence
of the broken symmetries in the 1/4-filled band CTS. One
interesting feature of our work involves demonstration
of spin excitations from the BCDW state that necessar-
ily lead to local changes in site charges. Even for weak
Coulomb interactions, the 1/4-filled band has the BCDW
character18. The effects of magnetic field on 1/4-filled band
CDWs is of strong recent interest74,75. We are currently
investigating the consequences of the Zeeman interaction
on the mixed spin-charge excitations of the BCDW.
VI. ACKNOWLEDGMENTS
We acknowledge illuminating discussions with S.E.
Brown. This work was supported by the Department
of Energy grant DE-FG02-06ER46315 and the American
Chemical Society Petroleum Research Fund.
1 J. P. Pouget and S. Ravy, J. Physique I 6, 1501 (1996).
2 B. Dumoulin, C. Bourbonnais, S. Ravy, J. P. Pouget, and
C. Coulon, Phys. Rev. Lett. 76, 1360 (1996).
3 K. Hiraki and K. Kanoda, Phys. Rev. Lett. 80, 4737
(1998); T. Kakiuchi, Y. Wakabayashi, H. Sawa, T. Itou,
and K. Kanoda, Phys. Rev. Lett. 98, 066402 (2007).
4 F. Nad and P. Monceau, J. Phys. Soc. Jpn. 75, 051005
(2006), and references therein.
5 P. Monceau, F. Y. Nad, and S. Brazovskii, Phys. Rev. Lett.
86, 4080 (2001).
6 F. Nad, P. Monceau, T. Nakamura, and K. Furukawa, J.
Phys.: Condens. Matter 17, L399 (2005).
7 D. S. Chow, F. Zamborszky, B. Alavi, D. J. Tantillo,
A. Baur, C. A. Merlic, and S. E. Brown, Phys. Rev. Lett.
85, 1698 (2000).
8 F. Zamborszky, W. Yu, W. Raas, S. E. Brown, B. Alavi,
C. A. Merlic, and A. Baur, Phys. Rev. B 66, 081103(R)
(2002).
9 W. Yu, F. Zhang, F. Zamborszky, B. Alavi, A. Baur, C. A.
Merlic, and S. E. Brown, Phys. Rev. B 70, 121101(R)
(2004).
10 P. Foury-Leylekian, D. LeBolloc’h, B. Hennion, S. Ravy,
A. Moradpour, and J.-P. Pouget, Phys. Rev. B 70,
180405(R) (2004).
11 T. Takahashi, Y. Nogami, and K. Yakushi, J. Phys. Soc.
Jpn. 75, 051008 (2006).
12 T. Nakamura, J. Phys. Soc. Jpn. 72, 213 (2003).
13 S. Fujiyama and T. Nakamura, Phys. Rev. B 70, 045102
(2004).
14 T. Nakamura, K. Furukawa, and T. Hara, J. Phys. Soc.
Jpn. 75, 013707 (2006).
15 S. Fujiyama and T. Nakamura, J. Phys. Soc. Jpn. 75,
014705 (2006).
16 T. Nakamura, K. Furukawa, and T. Hara, J. Phys. Soc.
Jpn. 76, 064715 (2007).
17 M. Dumm, M. Abaker, M. Dressel, and L. K. Montgomery,
J. Physique IV 131, 55 (2005); J. Low Temp. Phys. 142,
609 (2006).
18 K. C. Ung, S. Mazumdar, and D. Toussaint, Phys. Rev.
Lett. 73, 2603 (1994).
19 K. Penc and F. Mila, Phys. Rev. B 49, 9670 (1994).
20 H. Seo and H. Fukuyama, J. Phys. Soc. Jpn. 66, 1249
(1997).
21 S. Mazumdar, S. Ramasesha, R. T. Clay, and D. K. Camp-
bell, Phys. Rev. Lett. 82, 1522 (1999).
22 J. Riera and D. Poilblanc, Phys. Rev. B 62, R16243 (2000).
23 J. Riera and D. Poilblanc, Phys. Rev. B 63, 241102(R)
(2001).
24 Y. Shibata, S. Nishimoto, and Y. Ohta, Phys. Rev. B 64,
235107 (2001).
25 R. T. Clay, S. Mazumdar, and D. K. Campbell, Phys. Rev.
B 67, 115121 (2003).
26 H. Seo, J. Merino, H. Yoshioka, and M. Ogata, J. Phys.
Soc. Jpn. 75, 051009 (2006).
27 T. Ishiguro, K. Yamaji, and G. Saito, Organic Supercon-
ductors (Springer-Verlag, New York, 1998).
28 H. Mori, S. Tanaka, and T. Mori, Phys. Rev. B 57, 12023
(1998).
29 A. Kawamoto, Y. Honma, and K. I. Kumagai, Phys. Rev.
B 70, 060510(R) (2004).
30 K. Miyagawa, A. Kawamoto, and K. Kanoda, Phys. Rev.
B 62, R7679 (2000); Phys. Rev. Lett. 89, 017003 (2002).
31 B. J. Powell and R. H. McKenzie, J. Phys: Condens. Mat-
ter 18, R827 (2006).
32 B. Pedrini et al., Phys. Rev. B 72, 214407 (2005).
33 M. Lee, L. Viciu, L. Li, Y. Wang, M. L. Foo, S. Watauchi,
R. A. Pascal Jr, R. J. Cava, and N. P. Ong, Nature Mat.
5, 537 (2006).
34 T.-P. Choy, D. Galanakis, and P. Phillips, Phys. Rev. B
75, 073103 (2007).
35 S. Lakkis, C. Schlenker, B. K. Chakraverty, R. Buder, and
M. Marezio, Phys. Rev. B 14, 1429 (1976).
36 M. Isobe and Y. Ueda, J. Phys. Soc. Jpn. 65, 1178 (1996).
37 T. Yamauchi, Y. Ueda, and N. Mori, Phys. Rev. Lett. 89,
057002 (2002).
38 H. Yoshioka, M. Tsuchiizu, and H. Seo, J. Phys. Soc. Jpn.
75, 063706 (2006).
39 H. Seo, Y. Motome, and T. Kato, J. Phys. Soc. Jpn. 76,
013707 (2007).
40 We assume equal intermolecular distances along the or-
ganic stack. We ignore the intrinsic dimerization that can
exist in TMTTF stacks and that can lead to charge local-
ization above TCO .
41 R. J. J. Visser, S. Oostra, C. Vettier, and J. Voiron, Phys.
Rev. B 28, 2074 (1983).
42 H. Kobayashi, Y. Ohashi, F. Marumo, and Y. Saito, Acta.
Cryst. B 26, 459 (1970).
43 A. Filhol and M. Thomas, Acta. Cryst. B 40, 44 (1984).
44 A. Filhol et al., Acta. Cryst. B 36, 2719 (1980).
45 Y. Nakazawa, A. Sato, M. Seki, K. Saito, K. Hiraki,
T. Takahashi, K. Kanoda, and M. Sorai, Phys. Rev. B
68, 085112 (2003).
46 C. Coulon, G. Lalet, J. P. Pouget, P. Foury-Leylekian,
A. Moradpour, and J. M. Fabre, Phys. Rev. B 76, 085126
(2007).
47 K. Furukawa, T. Hara, and T. Nakamura, J. Phys. Soc.
Jpn. 74, 3288 (2005).
48 J.-P. Pouget, P. Foury-Leylekian, D. L. Bolloc’h, B. Hen-
nion, S. Ravy, C. Coulon, V. Cardoso, and A. Moradpour,
J. Low Temp. Phys. 142, 147 (2006).
49 J. P. Pouget and S. Ravy, Synth. Metals 85, 1523 (1997).
50 S. Kagoshima, Y. Saso, M. Maesato, and R. Kondo, Solid
St. Comm. 110, 479 (1999).
51 N. Kobayashi and M. Ogata, J. Phys. Soc. Jpn. 66, 3356
(1997).
52 S. Fujiyama and T. Nakamura, J. Phys. Chem. Solids 63,
1259 (2002).
53 W. P. Su, J. R. Schrieffer, and A. J. Heeger, Phys. Rev.
Lett. 42, 1698 (1979).
54 T. Holstein, Ann. Phys.(N.Y.) 8, 325 (1959).
55 P. Sengupta, A. W. Sandvik, and D. K. Campbell, Phys.
Rev. B 67, 245103 (2003).
56 K. Louis and X. Zotos, Phys. Rev. B 72, 214415
(2005).
57 M. C. Cross and D. S. Fisher, Phys. Rev. B 19, 402 (1979).
58 J. E. Hirsch and D. J. Scalapino, Phys. Rev. B 29, 5554
(1984).
59 M. E. Hawley, T. O. Poehler, T. F. Carruthers, A. N.
Bloch, D. O. Cowan, and T. J. Kistenmacher, Bull. Am.
Phys. Soc. 23, 424 (1978).
60 J. B. Torrance, J. J. Mayerle, K. Bechgaard, B. D. Silver-
man, and Y. Tomkiewicz, Phys. Rev. B 22, 4960 (1980).
61 S. N. Dixit and S. Mazumdar, Phys. Rev. B 29, 1824
(1984).
62 J. E. Hirsch, Phys. Rev. Lett. 53, 2327 (1984).
63 S. Mazumdar, R. T. Clay, and D. K. Campbell, Phys. Rev.
B 62, 13400 (2000).
64 E. Sorensen, I. Affleck, D. Augier, and D. Poilblanc, Phys.
Rev. B 58, R14701 (1998).
65 W. Yu and S. Haas, Phys. Rev. B 62, 344 (2000).
66 O. F. Syljuasen and A. W. Sandvik, Phys. Rev. E 66,
046701 (2002).
67 R. P. Hardikar and R. T. Clay, Phys. Rev. B 75, 245103
(2007).
68 E. H. Lieb and D. Mattis, J. Math. Phys. (NY) 3, 749
(1962).
69 J. Voit, Rep. Prog. Phys. 58, 977 (1995).
70 R. T. Clay, A. W. Sandvik, and D. K. Campbell, Phys.
Rev. B 59, 4665 (1999).
71 J. E. Hirsch and E. Fradkin, Phys. Rev. B 27, 4302 (1983).
72 M. Kuwabara, H. Seo, and M. Ogata, J. Phys. Soc. Jpn.
72, 225 (2003).
73 R. T. Clay, S. Mazumdar, and D. K. Campbell, Phys. Rev.
Lett. 86, 4084 (2001).
74 D. Graf, E. S. Choi, J. S. Brooks, M. Matos, R. T. Hen-
riques, and M. Almeida, Phys. Rev. Lett. 93, 076406
(2004).
75 R. D. McDonald, N. Harrison, L. Balicas, K. H. Kim,
J. Singleton, and X. Chi, Phys. Rev. Lett. 93, 076405
(2004).
0704.1657 | Bubbling Surface Operators And S-Duality | arXiv:0704.1657v3 [hep-th] 28 May 2008
Bubbling Surface Operators
And S-Duality
Jaume Gomis1 and Shunji Matsuura2
Perimeter Institute for Theoretical Physics
Waterloo, Ontario N2L 2Y5, Canada1,2
Department of Physics
University of Tokyo, 7-3-1 Hongo, Tokyo2
Abstract
We construct smooth asymptotically AdS5×S5 solutions of Type IIB supergravity
corresponding to all the half-BPS surface operators in N = 4 SYM. All the parameters
labeling a half-BPS surface operator are identified in the corresponding bubbling geome-
try. We use the supergravity description of surface operators to study the action of the
SL(2, Z) duality group of N = 4 SYM on the parameters of the surface operator, and find
that it coincides with the recent proposal by Gukov and Witten in the framework of the
gauge theory approach to the geometrical Langlands with ramification. We also show that
whenever a bubbling geometry becomes singular that the path integral description of the
corresponding surface operator also becomes singular.
04/2007
1. Introduction and Summary
Gauge invariant operators play a central role in the gauge theory holographically
describing quantum gravity with AdS boundary conditions [1][2][3], as correlation functions
of gauge invariant operators are the only observables in the boundary gauge theory. Finding
the bulk description of all gauge invariant operators is necessary in order to be able to
formulate an arbitrary bulk experiment in terms of gauge theory variables.
In this paper we provide the bulk description of a novel class of half-BPS operators
in N = 4 SYM which are supported on a surface Σ [4]. These nonlocal surface operators
OΣ are defined by quantizing N = 4 SYM in the presence of a certain codimension two
singularity for the classical fields of N = 4 SYM. The singularity characterizing such a
surface operator OΣ depends on 4M real parameters, whereM is the number of U(1)’s left
unbroken byOΣ. Surface operators are a higher dimensional generalization of Wilson and ’t
Hooft operators, which are supported on curves and induce a codimension three singularity
for the classical fields appearing in the Lagrangian. In this paper we extend the bulk
description of all half-BPS Wilson loop operators found in [5] (see also1 [8][9][10][11][12])
to all half-BPS surface operators.
We find the asymptotically AdS5×S5 solutions of Type IIB supergravity corresponding
to all half-BPS surface operators OΣ in N = 4 U(N) SYM. The topology and geometry
of the “bubbling” solution is completely determined in terms of some data, very much like
in the case studied by Lin, Lunin and Maldacena (LLM) in the context of half-BPS local
operators [13]2. In fact, we identify the system of equations determining the supergravity
solution corresponding to the half-BPS surface operators inN = 4 SYM with that obtained
by “analytic” continuation of the LLM equations [13][16].
The data determining the topology and geometry of a supergravity solution is charac-
terized by the position of a collection of M point particles in a three dimensional space X ,
whereX is a submanifold of the ten dimensional geometry. Different particle configurations
give rise to different asymptotically AdS5×S5 geometries.
1 The description of Wilson loops in the fundamental representation goes back to [6][7].
2 The bubbling geometry description of half-BPS Wilson loops was found in [10][11] while that
of half-BPS domain wall operators was found in [11][14]. For the bubbling Calabi-Yau geometries
for Wilson loops in Chern-Simons, see [15].
Fig. 1: a) The metric and five-form flux are determined once the position of the
particles in X – labeled by coordinates (~xl, yl) where y ≥ 0 – is given. The l-th
particle is associated with a point Pl ∈ X. b) The configuration corresponding to
the AdS5×S5 vacuum.
Even though the choice of a particle distribution in X completely determines the
geometry and topology of the metric and the corresponding RR five-form field strength,
further choices have to be made to fully characterize a solution of Type IIB supergravity
on this geometry3.
Given a configuration of M particles in X, the corresponding ten dimensional geometry
develops M non-trivial disks which end on the boundary4 of AdS5×S5 on a non-contractible
S1. Since Type IIB supergravity has two two-form gauge fields, one from the NS-NS sector
and one from the RR sector, a solution of the Type IIB supergravity equations of motion
is fully determined only once the holonomy of the two-forms around the various disks is
specified:
$\oint_{D_l} B_{NS}\,, \quad \oint_{D_l} B_{RR}\,, \qquad l = 1, \ldots, M. \qquad (1.1)$
Therefore, an asymptotically AdS5×S5 solution depends on the position of theM particles
in X – given by (~xl, yl) – and on the holonomies of the two-forms (1.1).
A precise dictionary is given between all the 4M parameters that label a half-BPS
surface operator OΣ and all the parameters describing the corresponding supergravity
solution. We show that the supergravity solution describing a half-BPS surface operator is
regular and that whenever the supergravity solution develops a singularity the N = 4 SYM
path integral description of the corresponding surface operator also develops a singularity.
We study the action of the SL(2, Z) symmetry of Type IIB string theory on the
supergravity solutions representing the half-BPS surface operators in N = 4 SYM. By
3 This is on top of the obvious choice of dilaton and axion, which gets identified with the
complexified coupling constant in N = 4 SYM.
4 The conformal boundary in this case is AdS3×S1, where surface operators in N = 4 SYM
can be studied by specifying non-trivial boundary conditions.
using the proposed dictionary between the parameters of a supergravity solution and the
parameters of the corresponding surface operator, we can show that the action of S-duality
induced on the parameters of a surface operator coincides with the recent proposal by
Gukov and Witten [4] in the framework of the gauge theory approach to the geometrical
Langlands [17]5 with ramification.
Whether surface operators can serve as novel order parameters in gauge theory remains
an important open question. It is our hope that the viewpoint on these operators provided
by the supergravity solutions in this paper may help shed light on this crucial question.
The plan of the rest of the paper is as follows. In section 2 we study the gauge theory
singularities corresponding to surface operators in N = 4 SYM, study the symmetries
preserved by a half-BPS surface operator and review the proposal in [4] for the action of
S-duality on the parameters that a half-BPS surface operator depends on. We also compute
the scaling weight of these operators and show that it is invariant under Montonen-Olive
duality. In section 3 we construct the solutions of Type IIB supergravity describing the
half-BPS surface operators. We identify all the parameters that a surface operator depends
on in the supergravity solution and show that the action of S-duality on surface operators
proposed in [4] follows from the action of SL(2, Z) on the classical solutions of supergravity.
The Appendices contain some details omitted in the main text.
2. Surface Operators in Gauge Theories
A surface operator OΣ is labeled by a surface Σ in R1,3 and by a conjugacy class U of
the gauge group G. The data that characterizes a surface operator OΣ, the surface Σ and
the conjugacy class U , can be identified with that of an external string used to probe the
theory. The surface Σ corresponds to the worldsheet of a string while the conjugacy class
U is associated to the Aharonov-Bohm phase acquired by a charged particle encircling the
string.
The singularity6 in the gauge field produced by a surface operator is that of a non-
abelian vortex. This singularity in the gauge field can be characterized by the phase
5 See e.g. [18] for a review of the geometric Langlands program.
6 Previous work involving codimension two singularities in gauge theory include [19][20][21].
acquired by a charged particle circumnavigating around the string. This gives rise to a
group element7 U ∈ U(N)
$U \equiv P \exp i \oint A \in U(N), \qquad (2.1)$
which corresponds to the Aharonov-Bohm phase picked up by the wavefunction of the
charged particle. Since gauge transformations act by conjugation U → gUg−1, a surface
operator is labeled by a conjugacy class of the gauge group.
By performing a gauge transformation, the matrix U can be diagonalized. If we
demand that the gauge field configuration is scale invariant – so that OΣ has a well defined
scaling weight – then the gauge field produced by a surface operator can be written as

$$A = \begin{pmatrix} \alpha_1 \otimes 1_{N_1} & 0 & \dots & 0 \\ 0 & \alpha_2 \otimes 1_{N_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \alpha_M \otimes 1_{N_M} \end{pmatrix} d\theta, \qquad (2.2)$$
where θ is the polar angle in the R2 ⊂ R1,3 plane normal to Σ and 1n is the n-dimensional
unit matrix. We note that the matrix U takes values on the maximal torus TN = RN/ZN
of the U(N) gauge group. Therefore the parameters αi take values on a circle of unit
radius.
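The role of the α parameters can be illustrated with a short numerical sketch (ours, with hypothetical eigenvalues): for a diagonal configuration like (2.2), the path-ordered exponential (2.1) reduces to independent U(1) phases, which makes the unit-circle periodicity of each α manifest.

```python
# Sketch (illustrative, not from the paper): for a diagonal vortex
# configuration A = diag(alpha_1, ..., alpha_N) dtheta as in Eq. (2.2), the
# Aharonov-Bohm holonomy of Eq. (2.1) is U = diag(exp(2*pi*1j*alpha_k)),
# so each alpha_k only matters modulo 1 -- it lives on a unit circle.
import numpy as np

alphas = np.array([0.25, 0.25, 0.7])   # hypothetical block eigenvalues

# Numerically accumulate P exp(i oint A) around theta in [0, 2pi).
# The matrix is diagonal, so path ordering is trivial here.
steps = 10000
dtheta = 2 * np.pi / steps
U = np.eye(len(alphas), dtype=complex)
for _ in range(steps):
    U = U @ np.diag(np.exp(1j * alphas * dtheta))

expected = np.diag(np.exp(2j * np.pi * alphas))
print(np.allclose(U, expected))                 # True
# alpha and alpha + 1 give the same holonomy:
print(np.allclose(np.exp(2j * np.pi * alphas),
                  np.exp(2j * np.pi * (alphas + 1))))  # True
```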
The surface operator corresponding to (2.2) spontaneously breaks the U(N) gauge
symmetry along Σ down to the so called Levi group L, where a group of Levi type is char-
acterized by the subgroup of U(N) that commutes with (2.2). Therefore, $L = \prod_{l=1}^{M} U(N_l)$,
where $N = \sum_{l=1}^{M} N_l$.
Since the gauge group is broken down to the Levi group $L = \prod_{l=1}^{M} U(N_l)$ along Σ,
there is a further choice [4] in the definition of OΣ consistent with the symmetries and
equations of motion. This corresponds to turning on a two dimensional θ-angle for the
unbroken U(1)’s along the string worldsheet Σ. The associated operator insertion into the
N = 4 SYM path integral is given by:
$\exp\Big(i \sum_{l=1}^{M} \eta_l \int_\Sigma \mathrm{Tr}\, F_l\Big). \qquad (2.3)$
7 We now focus on G = U(N) as it is the relevant gauge group for describing string theory
with asymptotically AdS5×S5 boundary conditions.
The parameters ηi take values in the maximal torus of the S-dual or Langlands dual
gauge group LG [4]. Therefore, since LG = U(N) for G = U(N), we have that the matrix
of θ-angles of a surface operator OΣ characterized by the Levi group $L = \prod_{l=1}^{M} U(N_l)$ is
given by the L-invariant matrix:

$$\begin{pmatrix} \eta_1 \otimes 1_{N_1} & 0 & \dots & 0 \\ 0 & \eta_2 \otimes 1_{N_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \eta_M \otimes 1_{N_M} \end{pmatrix}. \qquad (2.4)$$
The parameters ηi, being two dimensional θ-angles, also take values on a circle of unit
radius.
Therefore, a surface operator OΣ in pure gauge theory with Levi group $L = \prod_{l=1}^{M} U(N_l)$
is labeled by 2M L-invariant parameters (αl, ηl) up to the action of SM,
which acts by permuting the different eigenvalues in (2.2) and (2.4). The operator is then
defined by expanding the path integral with the insertion of the operator (2.3) around
the singularity (2.2), and by integrating over connections that are smooth near Σ. In
performing the path integral, we must divide [4] by the gauge transformations that take
values in $L = \prod_{l=1}^{M} U(N_l)$ when restricted to Σ. This means that the operator becomes
singular whenever the unbroken gauge symmetry near Σ gets enhanced, corresponding to
when eigenvalues in (2.2) and (2.4) coincide.
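The symmetry enhancement at coincident eigenvalues can be made concrete with a small sketch (ours): the Levi group is the commutant of the diagonal matrix (2.2), and its dimension jumps whenever eigenvalues collide.

```python
# Sketch (not from the paper): the unbroken (Levi) group along Sigma is the
# commutant of the diagonal matrix (2.2), i.e. L = prod_l U(N_l) with N_l the
# multiplicity of alpha_l. Its real dimension sum_l N_l^2 jumps when
# eigenvalues coincide -- the enhancement that makes the operator singular.
from collections import Counter

def levi_dim(alphas):
    """Real dimension of the commutant of diag(alphas) in U(N)."""
    mult = Counter(alphas)
    return sum(m * m for m in mult.values())

print(levi_dim([0.1, 0.4, 0.7]))   # all distinct: U(1)^3, dimension 3
print(levi_dim([0.1, 0.1, 0.7]))   # two coincide: U(2) x U(1), dimension 5
print(levi_dim([0.1, 0.1, 0.1]))   # all coincide: full U(3), dimension 9
```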
Surface Operators in N = 4 SYM
In a gauge theory with extra classical fields like N = 4 SYM, the surface operator OΣ
may produce a singularity for the extra fields near the location of the surface operator.
The only requirement is that the singular field configuration solves the equations of motion
of the theory away8 from the surface Σ. The global symmetries imposed on the operator
OΣ determine which classical fields in the Lagrangian develop a singularity near Σ together
with the type of singularity.
A complementary viewpoint on surface operators is to add new degrees of freedom on
the surface Σ. Such an approach to surface operators in N = 4 SYM has been considered
8 For pure gauge theory, the field configuration in (2.2) does satisfy the Yang-Mills equation
of motion DmF
mn = 0 away from Σ. Moreover, adding the two dimensional θ-angles (2.3) does
not change the equations of motion.
in [22][23] where the new degrees of freedom arise from localized open strings on a brane
intersection.
The basic effect of OΣ is to generate an Aharonov-Bohm phase corresponding to a
group element U (2.1). If we let z be the complex coordinate in the R2 ⊂ R1,3 plane
normal to Σ, the singularity in the gauge field configuration is then given by
$A = \sum_{I \geq 1} \frac{A_I}{z^I}\, dz + \mathrm{c.c.}, \qquad (2.5)$
where AI are constant matrices. Scale invariance of the singularity – which we are going
to impose – restricts AI = 0 for I ≥ 2.
The operator OΣ can also excite a complex scalar field Φ of N = 4 SYM near Σ
while preserving half of the Poincare supersymmetries of N = 4 SYM. Imposing that the
singularity is scale invariant9 yields
$\Phi = \frac{\Phi_1}{z}, \qquad (2.6)$
where Φ1 is a constant matrix.
A surface operator OΣ is characterized by the choice of an unbroken gauge group
L ⊂ G along Σ. Correspondingly, the singularity of all the fields excited by OΣ must be
invariant under the unbroken gauge group L. For L =
l=1 U(Nl) ⊂ U(N) the singularity
in the gauge field is the non-abelian vortex configuration in (2.2) and the two dimensional
θ-angles are given by (2.4). L-invariance together with scale invariance requires that Φ
develops an L-invariant pole near Σ:
$$\Phi = \frac{1}{z}\begin{pmatrix} (\beta_1 + i\gamma_1) \otimes 1_{N_1} & 0 & \dots & 0 \\ 0 & (\beta_2 + i\gamma_2) \otimes 1_{N_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & (\beta_M + i\gamma_M) \otimes 1_{N_M} \end{pmatrix}. \qquad (2.7)$$
Therefore, a half-BPS surface operator OΣ in N = 4 SYM with Levi group $L = \prod_{l=1}^{M} U(N_l)$
is labeled by 4M L-invariant parameters (αl, βl, γl, ηl) up to the action of SM,
which permutes the different eigenvalues in (2.2)(2.4)(2.7). The operator is defined by the
9 If we relax the restriction of scale invariance, one can then get other supersymmetric singu-
larities with higher order poles in Φ and in A (2.5). The surface operators associated with
these singularities may be relevant [4] for the gauge theory approach to the study of the geometric
Langlands program with wild ramification.
path integral of N = 4 SYM with the insertion of the operator (2.3) expanded around the
L-invariant singularities (2.2)(2.7) and by integrating over smooth fields near Σ. As in the
pure gauge theory case, we must mod out by gauge transformations that take values in
L ⊂ U(N) when restricted to Σ. The surface operator OΣ becomes singular whenever
the parameters that label the surface operator (αl, βl, γl, ηl) for l = 1, . . . ,M are such that
they are invariant under a larger symmetry than L, the group of gauge transformations
we have to mod out when evaluating the path integral.
S-duality of Surface Operators
In N = 4 SYM the coupling constant combines with the four dimensional θ-angle into
a complex parameter taking values in the upper half-plane:
$\tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g^2}. \qquad (2.8)$
The group of duality symmetries ofN = 4 SYM is an infinite discrete subgroup of SL(2, R),
which depends on the gauge group G. For N = 4 SYM with G = U(N) the relevant
symmetry group is SL(2, Z):
$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2, Z). \qquad (2.9)$
Under S-duality τ → −1/τ and G gets mapped10 to the S-dual or Langlands dual gauge
group LG. For G = U(N) the S-dual group is LG = U(N), and SL(2, Z) is a symmetry of
the theory, which acts on the coupling of the theory by fractional linear transformations:
τ → aτ + b
cτ + d
. (2.10)
In [4], Gukov and Witten made a proposal of how S-duality acts on the parameters
(αl, βl, γl, ηl) labeling a half-BPS surface operator. The proposed action is given by [4]:
(βl, γl) → |cτ + d| (βl, γl),
(αl, ηl) → (αl, ηl) M^{-1}. (2.11)
10 For G not a simply-laced group, τ → −1/nτ , where n is the ratio of the length-squared of
the long and short roots of G.
With the aid of this proposal, it was shown in [4] that the gauge theory approach to
the geometric Langlands program pioneered in [17] naturally extends to the geometric
Langlands program with tame ramification.
Symmetries of half-BPS Surface Operators in N = 4 SYM
We now describe the unbroken symmetries of the half-BPS surface operators OΣ.
These symmetries play an important role in determining the gravitational dual description
of these operators, which we provide in the next section.
In the absence of any insertions, N = 4 SYM is invariant under the PSU(2, 2|4)
symmetry group. If we consider the surface Σ = R1,1 ⊂ R1,3, then Σ breaks the SO(2, 4)
conformal group to a subgroup. A surface operator OΣ supported on this surface inserts
into the gauge theory a static probe string. This surface is manifestly invariant under
rotations and translations in Σ and scale transformations. It is also invariant under the
action of inversion I : xµ → xµ/x2 and consequently11 invariant under special conformal
transformations in Σ. Therefore, the symmetries left unbroken by Σ = R1,1 generate
an SO(2, 2)× SO(2)23 subgroup of the SO(2, 4) conformal group, where SO(2)23 rotates
the plane transverse to Σ in R1,3. In Euclidean signature, the surface Σ =S2 preserves
an SO(1, 3) × SO(2)23 subgroup of the Euclidean conformal group. This surface can be
obtained from the surface Σ = R2 ⊂ R4 by the action of a broken special conformal
generator and can also be used to construct a half-BPS surface operator OΣ in N = 4 SYM.
Since the symmetry of a surface operator with Σ = R1,1 is SO(2, 2) × SO(2)23 one
can study such an operator either by considering the gauge theory in R1,3 or in AdS3×S1,
which can be obtained from R1,3 by a conformal transformation. Studying the gauge
theory in AdS3×S1 has the advantage of making the symmetries of the surface operator
manifest, as the conformal symmetries left unbroken by the surface act by isometries on
AdS3×S1. Surface operators in R1,3 are described by a codimension two singularity while
surface operators in AdS3×S1 are described by a boundary condition on the boundary of
AdS3. A surface operator with Σ = R1,1 corresponds to a boundary condition on AdS3
in Poincare coordinates while a surface operator on Σ =S2 corresponds to a boundary
condition on global Euclidean AdS3.
11 We recall that a special conformal transformation Kµ is generated by IPµI, where Pµ is the
translation generator and I is an inversion.
The singularity in the classical fields produced by OΣ in (2.2)(2.7) is also invariant
under SO(2, 2). The N = 4 scalar field Φ carries charge under an SO(2)R subgroup of the
SO(6) R-symmetry and is therefore SO(4) invariant. The surface operator OΣ is therefore
invariant under SO(2, 2)×SO(2)a×SO(4), where SO(2)a is generated by the anti-diagonal
product12 of SO(2)23 × SO(2)R.
N = 4 SYM has sixteen Poincare supersymmetries and sixteen conformal super-
symmetries, generated by ten dimensional Majorana-Weyl spinors ǫ1 and ǫ2 of opposite
chirality. As shown in the Appendix A, the surface operator OΣ for Σ = R1,1 preserves
half of the Poincare and half of the conformal supersymmetries13 and is therefore half-BPS.
With the aid of these symmetries we study in the next section the gravitational de-
scription of half-BPS surface operators in N = 4 SYM.
Scaling Weight of half-BPS Surface Operators in N = 4 SYM
Conformal symmetry constrains the form of the OPE of the energy-momentum tensor
Tmn with the operators in the theory. For a surface operator OΣ supported on Σ = R1,1,
SO(2, 2)× SO(2)23 invariance completely fixes the OPE of Tmn with OΣ:
\frac{\langle T_{\mu\nu}(x)\,O_\Sigma\rangle}{\langle O_\Sigma\rangle} = \frac{h}{r^4}\,\eta_{\mu\nu}\,, \qquad \frac{\langle T_{ij}(x)\,O_\Sigma\rangle}{\langle O_\Sigma\rangle} = \frac{h}{r^4}\,\big[4 n_i n_j - 3\delta_{ij}\big]\,, \qquad \langle T_{\mu i}(x)\,O_\Sigma\rangle = 0. \qquad (2.12)
Here xm = (xµ, xi), where xµ are coordinates along Σ and ni = xi/r, and r is the radial
coordinate in the R2 transverse to R1,1. h is the scaling weight of OΣ, which generalizes
[24] the notion of conformal dimension of local conformal fields to surface operators.
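The tensor structure appearing in (2.12) is fixed, up to the overall constant h, by requiring that the one-point function of T_{mn} be conserved and traceless. The following short verification is ours and not part of the original text; it uses n_i = x_i/r in the two transverse directions:

```latex
% Tracelessness: \eta^{\mu\nu}\eta_{\mu\nu} = 2 along \Sigma, while in the transverse R^2
% \delta^{ij}(4 n_i n_j - 3\delta_{ij}) = 4 - 6 = -2, so the two contributions cancel:
g^{mn}\langle T_{mn}\rangle \;\propto\; \frac{h}{r^4}\,\big[\,2 + (4-6)\,\big] = 0\,.
% Conservation in the transverse directions, using n_i n_j/r^4 = x_i x_j/r^6
% and \partial_i(x_i x_j r^{-6}) = -3 x_j r^{-6},\ \partial_i(\delta_{ij} r^{-4}) = -4 x_j r^{-6}:
\partial^i\Big(\frac{4\,n_i n_j - 3\,\delta_{ij}}{r^4}\Big)
= \big[\,4\,(-3) + 12\,\big]\,\frac{x_j}{r^6} = 0\,.
```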
In order to calculate the scaling weight of a half-BPS surface operator OΣ in N = 4 SYM we insert the classical field configuration (2.2)(2.4)(2.7) characterizing a half-BPS surface operator into the classical energy-momentum tensor of N = 4 SYM:
T_{mn} = \frac{1}{g^2}\,{\rm Tr}\Big[\,D_m\phi\, D_n\phi - \frac{1}{2}\,g_{mn}(D\phi)^2 - \frac{1}{6}\,\big(D_m D_n - g_{mn}D^2\big)\phi^2\,\Big] + \frac{1}{g^2}\,{\rm Tr}\Big[-F_{ml}F_n{}^{l} + \frac{1}{4}\,g_{mn}F_{lp}F^{lp}\Big]. \qquad (2.13)
12 Since SO(2)a leaves Φ · z in (2.7) invariant.
13 For Σ =S2, the operator is also half-BPS, but it preserves a linear combination of Poincare
and special conformal supersymmetries.
A straightforward computation14 leads to:
h = -\frac{2}{g^2}\sum_{l=1}^{M} N_l\,(\beta_l^2 + \gamma_l^2) = -\frac{2}{g^2}\sum_{i=1}^{N}(\beta_i^2 + \gamma_i^2). \qquad (2.14)
The action of an SL(2, Z) transformation (2.10) on the coupling constant of N = 4
SYM implies that:
{\rm Im}\,\tau \to \frac{{\rm Im}\,\tau}{|c\tau + d|^2}\,. \qquad (2.15)
Combining this with the action (2.11) of Montonen-Olive duality on the parameters of the
surface operator, we find that the scaling weight (2.14) of a half-BPS surface operator OΣ
is invariant under S-duality:
h→ h. (2.16)
In this respect half-BPS surface operators behave like the half-BPS local operators of
N = 4 SYM, whose conformal dimension is invariant under SL(2, Z), and unlike the
half-BPS Wilson-’t Hooft operators whose scaling weight is not S-duality invariant [24].
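The invariance (2.16) can be checked with a few lines of arithmetic. Reading (2.14)-(2.15) as saying that h is proportional to Im τ · (β_l² + γ_l²) — an assumption on our part about where the coupling enters — the Jacobian factor from (2.15) cancels against the one from (β_l, γ_l) → |cτ+d|(β_l, γ_l). A minimal numerical sketch, with arbitrary sample values:

```python
# Hedged sketch: assumes h ∝ Im(tau) * (beta^2 + gamma^2); the SL(2,Z) element
# and the sample parameter values are arbitrary illustrations, not from the text.
a, b, c, d = 2, 1, 1, 1                  # an SL(2,Z) element: a*d - b*c = 1
assert a * d - b * c == 1

tau = complex(0.3, 1.7)                  # tau = theta/(2*pi) + 4*pi*1j/g**2
beta, gamma = 0.4, -1.2

jac = abs(c * tau + d)
tau_dual = (a * tau + b) / (c * tau + d)         # so Im tau -> Im tau / |c tau + d|^2
beta_dual, gamma_dual = jac * beta, jac * gamma  # (beta, gamma) -> |c tau + d| (beta, gamma)

h = tau.imag * (beta**2 + gamma**2)
h_dual = tau_dual.imag * (beta_dual**2 + gamma_dual**2)
print(h, h_dual)  # agree up to floating-point rounding
```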
3. Bubbling Surface Operators
In this section we find the dual gravitational description of the half-BPS surface op-
erators OΣ described in the previous section. The bulk description is given in terms of
asymptotically AdS5×S5 and singularity free solutions of the Type IIB supergravity equa-
tions of motion. The data from which the solution is uniquely determined encodes the
corresponding data about the surface operator OΣ.
The strategy to obtain these solutions is to make an ansatz for Type IIB supergravity
which is invariant under all the symmetries preserved by the half-BPS surface operators
OΣ. As discussed in the previous section, the bosonic symmetries preserved by a half-
BPS surface operator OΣ are SO(2, 2) × SO(4) × SO(2)a. Therefore the most general
ten dimensional metric invariant under these symmetries can be constructed by fibering
AdS3×S3×S1 over a three manifold X , where the symmetries act by isometries on the
fiber. The constraints imposed by unbroken supersymmetry on the ansatz are obtained by
demanding that the ansatz for the supergravity background possesses a sixteen component
14 Contact terms depending on α, β, γ and proportional to the derivative of the two-dimensional
δ-function appear when evaluating the on-shell energy-momentum tensor. It would be interesting
to understand the physical content of these contact terms.
Killing spinor, which means that the background solves the Killing spinor equations of Type
IIB supergravity. A solution of the Killing spinor equations and the Bianchi identity for the
five-form field strength guarantee that the full set of equations of Type IIB supergravity
are satisfied and that a half-BPS solution has been obtained.
The problem of solving the Killing spinor equations of Type IIB supergravity with
an SO(2, 2)× SO(4)× SO(2)a symmetry can be obtained by analytic continuation of the
equations studied by LLM [13][16], which found the supergravity solutions describing the
half-BPS local operators of N = 4 SYM, which have an SO(4)×SO(4)×R symmetry. The
equations determining the metric and five-form flux can be read from [13][16], in which the
analytic continuation that we need to construct the gravitational description of half-BPS
surface operators OΣ was considered.
The ten dimensional metric and five-form flux is completely determined in terms of
data that needs to be specified on the three manifold X in the ten dimensional space. An
asymptotically AdS5×S5 metric is uniquely determined in terms of a function z(x1, x2, y),
where (x1, x2, y) ≡ (~x, y) are coordinates in X . The ten dimensional metric in the Einstein
frame is given by15
ds^2 = y\,\sqrt{\frac{2z+1}{2z-1}}\; ds^2_{AdS_3} \;+\; y\,\sqrt{\frac{2z-1}{2z+1}}\; d\Omega_3^2 \;+\; \frac{2y}{\sqrt{4z^2-1}}\,(d\chi + V)^2 \;+\; \frac{\sqrt{4z^2-1}}{2y}\,\big(dy^2 + dx^i dx^i\big), \qquad (3.1)
where ds^2_X = dy^2 + dx^i dx^i with y ≥ 0, and V is a one-form on X satisfying dV = \frac{1}{y}\ast_X dz. AdS3 in Poincare coordinates corresponds to a surface operator on Σ = R^{1,1}, while AdS3 in global Euclidean coordinates corresponds to a surface operator on Σ = S^2. The U(1)a symmetry acts by shifts on χ, while SO(2, 2) and SO(4) act by isometries on the coordinates of AdS3 and S^3 respectively.
A non-trivial solution to the equations of motion is obtained by specifying a config-
uration of M point-like particles in X . The data from which the solution is determined
is the “charge” Ql of the particles together with their positions (~xl, yl) in X (see Figure
1). Given a “charge” distribution, the function z(x1, x2, y) solves the following differential
equation:
\partial_i\partial_i z(x_1,x_2,y) + y\,\partial_y\Big(\frac{\partial_y z(x_1,x_2,y)}{y}\Big) = \sum_{l=1}^{M} Q_l\,\delta(y - y_l)\,\delta^{(2)}(\vec x - \vec x_l). \qquad (3.2)
15 The “analytic” continuation from the bubbling geometries dual to the half-BPS local operators is given by z → z, t → χ, y → −iy, ~x → i~x, dΩ_3^2 → −ds^2_{AdS_3} [13][16].
Introducing a “charge” at the point (~x_l, y_l) in X has the effect of shrinking16 the S^1 with coordinate χ in (3.1) to zero size at that point. In order for this to occur in a smooth fashion the magnitude of the “charge” has to be fixed [13][16] so that Q_l = 2πy_l. Therefore, the independent data characterizing the metric and five-form of the solution is the position of the M “charges”, given by (~x_l, y_l).
Therefore, the independent data characterizing the metric and five-form of the solution is
the position of the M “charges”, given by (~xl, yl).
In summary, a smooth half-BPS SO(2, 2)× SO(4)× SO(2)a invariant asymptotically AdS5×S5 metric (3.1) solving the Type IIB supergravity equations of motion is found by solving (3.2) subject to the boundary condition z(x_1, x_2, 0) = 1/2 [13][16], so that the S^3 in (3.1) shrinks in a smooth way at y = 0. The function z(x_1, x_2, y) is given by
z(x_1, x_2, y) = \sum_{l=1}^{M} z_l(x_1, x_2, y), \qquad (3.3)
where
z_l(x_1, x_2, y) = \frac{1}{2}\,\frac{(\vec x - \vec x_l)^2 + y^2 + y_l^2}{\sqrt{\big((\vec x - \vec x_l)^2 + y^2 + y_l^2\big)^2 - 4 y_l^2 y^2}}\,, \qquad (3.4)
and V can be computed from z(x_1, x_2, y) via dV = \frac{1}{y}\ast_X dz. Both the metric and the five-form field strength are determined by an integer M and by (\vec x_l, y_l) for l = 1, \dots, M.
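Since (3.4) is the analytic continuation of the basic LLM droplet profile, it should solve the source-free version of (3.2) away from (~x_l, y_l) and approach the boundary value 1/2 as y → 0. Below is a finite-difference sketch of both properties; the function name `z_l`, the charge position and the sample points are our illustrative choices:

```python
import math

def z_l(x1, x2, y, xl=(0.0, 0.0), yl=0.5):
    """Single-'charge' profile of eq. (3.4), continued from the LLM droplet."""
    d2 = (x1 - xl[0])**2 + (x2 - xl[1])**2
    s = d2 + y**2 + yl**2
    return 0.5 * s / math.sqrt(s**2 - 4.0 * yl**2 * y**2)

def pde_residual(f, x1, x2, y, h=1e-3):
    """Central-difference residual of  d_i d_i z + y d_y(d_y z / y)  at a point."""
    lap12 = (f(x1 + h, x2, y) - 2 * f(x1, x2, y) + f(x1 - h, x2, y)) / h**2 \
          + (f(x1, x2 + h, y) - 2 * f(x1, x2, y) + f(x1, x2 - h, y)) / h**2
    dyy = (f(x1, x2, y + h) - 2 * f(x1, x2, y) + f(x1, x2, y - h)) / h**2
    dy = (f(x1, x2, y + h) - f(x1, x2, y - h)) / (2 * h)
    return lap12 + dyy - dy / y   # y d_y(d_y z / y) = d_y^2 z - (1/y) d_y z

print(pde_residual(z_l, 0.7, 0.3, 1.3))   # ~ 0 away from the source
print(z_l(0.7, 0.3, 1e-8))                # -> 1/2 at the boundary y = 0
```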
Topology of Bubbling Solutions And Two-form Holonomies
The asymptotically AdS5×S5 solutions constructed from (3.3)(3.4) are topologically quite rich. In particular, a solution with M point “charges” has M topologically non-trivial S^5’s. We can associate to each point P_l ∈ X a corresponding five-sphere S^5_l. S^5_l can be constructed by fibering the S^1×S^3 in the geometry (3.1) over a straight line between the point (\vec x_l, 0) and the point (\vec x_l, y_l) in X. The topology of this manifold is indeed an S^5, as an S^5 can be represented17 by an S^1×S^3 fibration over an interval where the S^1 and S^3 shrink to zero size at opposite ends of the interval, which is what happens in our geometry, where the S^3 shrinks at (\vec x_l, 0) while the S^1 shrinks at the other endpoint (\vec x_l, y_l).
16 Near y = yl the form of the relevant part of the metric is that of the Taub-NUT space. Fixing
the value of the “charge” at y = yl by imposing regularity of the metric coincides with the usual
regularity constraint on the periodicity of the circle in Taub-NUT space.
17 This can be seen explicitly by writing dΩ_5^2 = \cos^2\theta\, d\Omega_3^2 + d\theta^2 + \sin^2\theta\, d\phi^2.
Fig. 2: A topologically non-trivial S5 can be constructed by fibering S1×S3 over
an interval connecting the y = 0 plane and the location of the “charge” at the
point Pl ∈ X with (~xl, yl) coordinates.
Following [13][16] we can now integrate the five-form flux over the topologically non-trivial S^5’s (see Appendix B):
\frac{1}{4\pi^4 l_p^4}\int_{S^5_l} F_5 = \frac{y_l^2}{4\pi l_p^4}\,. \qquad (3.5)
Since flux has to be quantized, the position in the y-axis of the l-th particle in X is also
quantized:
y_l^2 = 4\pi N_l\, l_p^4\,, \qquad N_l \in \mathbb{Z}, \qquad (3.6)
where l_p is the ten dimensional Planck length. For an asymptotically AdS5×S5 geometry with radius of curvature R^4 = 4\pi N l_p^4, which is dual to N = 4 U(N) SYM, we have that the total amount of five-form flux must be N:
N = \sum_{l=1}^{M} N_l\,. \qquad (3.7)
The asymptotically AdS5×S5 solutions constructed from (3.3)(3.4) also contain non-trivial surfaces. In particular, a solution with M point “charges” has M non-trivial disks D_l. Just as in the case of the S^5’s, we can associate to each point P_l ∈ X a disk D_l.
Inspection of the asymptotic form of the metric (3.1) given in (3.3)(3.4) reveals that
the metric is conformal to AdS3×S1. This geometry on the boundary of AdS5×S5, which
is where the dual N = 4 U(N) SYM lives, is the natural background geometry on which to
study conformally invariant surface operators in N = 4 SYM. As explained in section 2, an
SO(2, 2)× SO(2)23 invariant surface operator can be defined by specifying a codimension
two singularity in R1,3 or by specifying appropriate boundary conditions for the classical
fields in the gauge theory at the boundary of AdS3×S1. In the latter formulation, the
worldsheet of the surface operator Σ is the boundary of AdS3.
Therefore, in the boundary of AdS5×S5 we have a non-contractible S1. If we fiber the
S1 parametrized by χ in (3.1) over a straight line connecting a point (~xl, yl) in X – where
the S1 shrinks to zero size – to a point in X corresponding to the boundary of AdS5×S5
– given by ~x, y → ∞ – we obtain a surface Dl. This surface is topologically a disk18 and
there are M of them for a “charge” distribution of M particles in X .
Fig. 3: A disk D can be constructed by fibering S^1 over an interval connecting the “charge” at the point P_l ∈ X with (\vec x_l, y_l) coordinates and the boundary of AdS5×S^5.
Due to the existence of the disks D_l, the supergravity solution given by the metric and five-form flux alone is not unique. Type IIB supergravity has a two-form gauge field
from the NS-NS sector and another one from the RR sector. In order to fully specify a
solution of Type IIB supergravity in the bubbling geometry (3.1) we must complement the
metric and the five-form with the integral of the two-forms around the disks19
\alpha_l = -\frac{1}{2\pi}\int_{D_l} B_{NS}\,, \qquad \eta_l = \frac{1}{2\pi}\int_{D_l} B_{R}\,, \qquad l = 1, \dots, M, \qquad (3.8)
where we have used notation conducive to the later comparison with the parameters char-
acterizing a half-BPS surface operator OΣ. Since both BNS and BR are invariant under
large gauge transformations, the parameters (αl, ηl) take values on a circle of unit radius.
Apart from the M disks Dl, the bubbling geometry constructed from (3.3)(3.4) also
has topologically non-trivial S2’s. One can construct an S2 by fibering the S1 in (3.1) over
18 Such disks also appear in the study of the high temperature regime of N = 4 SYM, where
the bulk geometry [25] is the AdS Schwarzschild black hole, which also has a non-contractible S1
in the boundary which is contractible in the full geometry.
19 The overall signs in the identification are fixed by demanding consistent action of S-duality
of N = 4 SYM with that of Type IIB supergravity.
a straight line connecting the points P_l and P_m in X. Since the S^1 shrinks to zero size in a smooth manner at the endpoints we obtain an S^2. Therefore, to every pair of “charges” in X, characterized by different points P_l and P_m in X, we can construct a corresponding S^2, which we label by S^2_{l,m}. The integrals of B_{NS} and B_{R} over S^2_{l,m} do not give rise to new parameters, since [S^2_{l,m}] = [D_l] − [D_m] in homology, and the periods can be determined from (3.8).
Fig. 4: An S2 can be constructed by fibering S1 over an interval connecting the
“charge” at the point Pl ∈ X with a different “charge” at point Pm ∈ X.
Bubbling Geometries as Surface Operators
As we discussed in section 2, a surface operator OΣ is characterized by an unbroken gauge group L ⊂ U(N) together with 4M L-invariant parameters (α_l, β_l, γ_l, η_l). On
the other hand, the Type IIB supergravity solutions we have described depend on the
positions (~xl, yl) of M “charged” particles in X and the two-form holonomies:
\int_{D_l} B_{NS}\,, \qquad \int_{D_l} B_{R}\,, \qquad l = 1, \dots, M. \qquad (3.9)
We now establish an explicit dictionary between the parameters in gauge theory and the
parameters in supergravity.
For illustration purposes, it is convenient to start by considering the half-BPS surface
operator OΣ with the largest Levi group L, which is L = U(N) for G = U(N). U(N)
invariance requires that the singularity in the fields produced by OΣ take values in the
center of U(N). Therefore, the gauge field and scalar field produced by OΣ are given by
A = \alpha_0\, 1_N\, d\theta\,, \qquad \Phi = \frac{1}{\sqrt{2}}\,\frac{\beta_0 + i\gamma_0}{z}\; 1_N\,, \qquad (3.10)
where 1_N is the identity matrix. We can also turn on a two-dimensional θ-angle (2.3) for the overall U(1), so that:
\eta = \eta_0\, 1_N. \qquad (3.11)
We now identify this operator with the supergravity solution obtained by having a
single point “charge” source in X (see Figure 1b). If we let the position of the “charge”
be (\vec x_0, y_0) then
z(x_1, x_2, y) = \frac{1}{2}\,\frac{(\vec x - \vec x_0)^2 + y^2 + y_0^2}{\sqrt{\big((\vec x - \vec x_0)^2 + y^2 + y_0^2\big)^2 - 4 y_0^2 y^2}}\,, \qquad
V_I = -\,\epsilon_{IJ}\,\frac{(x^J - x^J_0)\,\big((\vec x - \vec x_0)^2 + y^2 - y_0^2\big)}{2\,(\vec x - \vec x_0)^2\,\sqrt{\big((\vec x - \vec x_0)^2 + y^2 + y_0^2\big)^2 - 4 y_0^2 y^2}}\,, \qquad (3.12)
where V = V_I\, dx^I. The metric (3.1) obtained using (3.12) is the metric of AdS5×S5. This
can be seen by the following change of variables [16]
x^1 - x^1_0 + i(x^2 - x^2_0) = r\, e^{i(\psi + \phi)}\,, \qquad r = y_0 \sinh u\,\sin\theta\,, \qquad y = y_0 \cosh u\,\cos\theta\,, \qquad \chi = \tfrac{1}{2}(\psi - \phi)\,, \qquad (3.13)
which yields the AdS5×S5 metric with AdS5 foliated by AdS3×S1 slices:
ds^2 = y_0\Big[\big(\cosh^2 u\, ds^2_{AdS_3} + du^2 + \sinh^2 u\, d\psi^2\big) + \big(\cos^2\theta\, d\Omega_3^2 + d\theta^2 + \sin^2\theta\, d\phi^2\big)\Big]. \qquad (3.14)
We note that the U(1)a symmetry of the metric (3.1) – which acts by shifts on χ – identifies via (3.13) an SO(2)R subgroup of the SO(6) symmetry of the S^5, acting by shifts on φ, with an SO(2)23 subgroup of the SO(2, 4) isometry group of AdS5, acting by opposite shifts on ψ. This is precisely the same combination of generators discussed in section 2 that is preserved by a half-BPS surface operator OΣ in N = 4 SYM.
The radius of curvature of AdS5×S5 in (3.14) is given by R^4 = y_0^2. Therefore, using that R^4 = 4\pi N l_p^4, where N is the rank of the N = 4 YM theory, we have that
N = \frac{y_0^2}{4\pi l_p^4}\,, \qquad (3.15)
and the position of the “charge” in y gets identified with the rank of the unbroken gauge
group and is therefore quantized.
The residue of the pole in Φ (3.10) gets identified with the position of the “charge” in
the ~x-plane. It follows from (3.1) that the coordinates ~x and y have dimensions of length^2.
Therefore, we identify the residue of the pole of Φ with the position of the “charge” in the
~x-plane in X via:
(\beta_0, \gamma_0) = \frac{\vec x_0}{2\pi l_s^2}\,. \qquad (3.16)
Unlike the position in y, the position in ~x is not quantized.
The remaining parameters of the surface operator OΣ with U(N) Levi group – given by (α_0, η_0) – get identified with the holonomy of the two-forms of Type IIB supergravity over D:
\alpha_0 = -\frac{1}{2\pi}\int_{D} B_{NS}\,, \qquad \eta_0 = \frac{1}{2\pi}\int_{D} B_{R}\,, \qquad (3.17)
where D is the disk ending on the AdS5×S5 boundary on the S1. This identification
properly accounts for the correct periodicity of these parameters, which take values on a
circle of unit radius.
The path integral which defines a half-BPS surface operator OΣ when L = U(N)
is never singular as the gauge symmetry cannot be further enhanced by changing the
parameters (α0, β0, γ0, η0) of the surface operator. Correspondingly, the dual supergravity
solution with one “charge” also never acquires a singularity by changing the parameters
of the solution.
Let’s now consider the most general half-BPS surface operator OΣ. First we need to characterize the operator by its Levi group, which for a U(N) gauge group takes the form L = \prod_{l=1}^{M} U(N_l) with N = \sum_{l=1}^{M} N_l. The operator then depends on 4M L-invariant parameters (α_l, β_l, γ_l, η_l) for l = 1, \dots, M up to the action of S_M, which acts by permuting the parameters.
The corresponding supergravity solution associated to such an operator is given by
the metric (3.1). The number of unbroken gauge group factors – given by the integer M –
corresponds to the number of point “charges” in (3.2). For M > 1, the metric that follows
from (3.3)(3.4) is AdS5×S5 only asymptotically and not globally.
The ranks of the various gauge group factors in the Levi group \prod_{l=1}^{M} U(N_l) – given by the integers N_l – correspond to the positions of the “charges” along y ∈ X, given by the coordinates y_l. The precise identification follows from (3.5)(3.6):
N_l = \frac{y_l^2}{4\pi l_p^4}\,, \qquad l = 1, \dots, M. \qquad (3.18)
N_l also corresponds to the amount of five-form flux over S^5_l, the S^5 associated with the l-th point charge:
N_l = \frac{1}{4\pi^4 l_p^4}\int_{S^5_l} F_5\,, \qquad l = 1, \dots, M. \qquad (3.19)
This identification quantizes the y coordinate in X into lp size bits. Thus length is quan-
tized as opposed to area, which is what happens for the geometry dual to the half-BPS
local operators [13][16], where it can be interpreted as the quantization of phase space in
the boundary gauge theory.
A half-BPS surface operator OΣ develops a pole for the scalar field Φ (2.7). The
pole is characterized by its residue, which is given by 2M real parameters (βl, γl). These
parameters are identified with the position of the M “charges” in the ~x-plane in X via:
(\beta_l, \gamma_l) = \frac{\vec x_l}{2\pi l_s^2}\,. \qquad (3.20)
All these parameters take values on the real line.
The remaining parameters characterizing a half-BPS surface operator OΣ are the
periodic variables (αl, ηl), which determine the holonomy produced by OΣ and the corre-
sponding two-dimensional θ-angles. These parameters get identified with the holonomy of
the two-forms of Type IIB supergravity over the M non-trivial disks Dl that the geometry
generates in the presence of M “charges” in X :
\alpha_l = -\frac{1}{2\pi}\int_{D_l} B_{NS}\,, \qquad \eta_l = \frac{1}{2\pi}\int_{D_l} B_{R}\,. \qquad (3.21)
The identification respects the periodicity of (α_l, η_l), which in supergravity arises from the invariance of B_{NS} and B_{R} under large gauge transformations20.
We have given a complete dictionary between all the parameters that a half-BPS
surface operator in N = 4 SYM depends on and all the parameters in the corresponding
bubbling geometry. We note that a surface operator OΣ depends on a set of parameters
up to the action of the permutation group SM on the parameters, which is part of the
U(N) gauge symmetry. The corresponding statement in supergravity is that the solution
dual to a surface operator is invariant under the action of SM , which acts by permuting
the “charges” in X .
20 Since the gauge invariant variables are e^{2\pi i\alpha_l} and e^{2\pi i\eta_l}.
The supergravity solution is regular as long as the “charges” do not collide. A singu-
larity arises whenever two point “charges” in X coincide (see Figure 1):
(\vec x_l, y_l) \to (\vec x_m, y_m) \qquad {\rm for}\ l \neq m. \qquad (3.22)
Whenever this occurs, there is a reduction in the number of independent disks since (see
Figure 3):
D_l \to D_m \qquad {\rm for}\ l \neq m, \qquad (3.23)
and therefore
\int_{D_l} B_{NS} \to \int_{D_m} B_{NS}\,, \qquad \int_{D_l} B_{R} \to \int_{D_m} B_{R} \qquad {\rm for}\ l \neq m. \qquad (3.24)
In this limit of parameter space the non-trivial S2 connecting the points Pl and Pm in X
shrinks to zero size as [S2l,m] = [Dl]− [Dm] → 0, and the geometry becomes singular.
By using the dictionary developed in this paper, such a singular geometry corresponds
to a limit when two of each of the set of parameters (αl, βl, γl, ηl) defining a half-BPS surface
operator OΣ become equal:
αl → αm, βl → βm, γl → γm, ηl → ηm for l 6= m. (3.25)
In this limit the unbroken gauge group preserved by the surface operator OΣ is enhanced
to L′ from the original Levi group \prod_{l=1}^{M} U(N_l), where L ⊂ L′. As explained in section 2
the path integral from which OΣ is defined becomes singular.
In summary, we have found the description of all half-BPS surface operators OΣ in
N = 4 SYM in terms of solutions of Type IIB supergravity. The asymptotically AdS5×S5
solutions are regular and when they develop a singularity then the corresponding operator
also becomes singular.
S-Duality of Surface Operators from Type IIB Supergravity
The group of dualities of N = 4 SYM acts non-trivially [4] on surface operators OΣ
(see discussion in section 2). For G = U(N) the duality group is SL(2, Z) and its proposed
action on the parameters on which OΣ depends is [4]:
(\beta_l, \gamma_l) \to |c\tau + d|\,(\beta_l, \gamma_l)\,, \qquad (\alpha_l, \eta_l) \to (\alpha_l, \eta_l)\,M^{-1}, \qquad (3.26)
where M is an SL(2, Z) matrix
M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \qquad (3.27)
We now reproduce21 this transformation law by studying the action of the SL(2, Z)
subgroup of the SL(2, R) classical symmetry of Type IIB supergravity, which is in fact the
appropriate symmetry group of Type IIB string theory. For that we need to analyze the
action of S-duality on our bubbling geometries.
SL(2, Z) acts on the complex scalar τ = C_0 + i e^{-\phi} of Type IIB supergravity in the familiar fashion
\tau \to \frac{a\tau + b}{c\tau + d}\,, \qquad (3.28)
where as usual τ gets identified with the complexified coupling constant of N = 4 SYM
(2.8). SL(2, Z) also rotates the two-form gauge fields22 of Type IIB supergravity
\begin{pmatrix} B_{NS} \\ B_{R} \end{pmatrix} \to \big(M^{T}\big)^{-1}\begin{pmatrix} B_{NS} \\ B_{R} \end{pmatrix}, \qquad (3.29)
while leaving the metric in the Einstein frame and the five-form flux invariant.
Given that the metric in (3.1) is in the Einstein frame, SL(2, Z) acts trivially on the
coordinates (~x, y). Nevertheless, since
l_s = g_s^{-1/4}\, l_p \qquad {\rm with} \qquad g_s = e^{\phi}, \qquad (3.30)
the string scale transforms under SL(2, Z) as follows:
l_s^2 \to \frac{l_s^2}{|c\tau + d|}\,. \qquad (3.31)
Therefore, under S-duality:
\frac{\vec x_l}{2\pi l_s^2} \to |c\tau + d|\,\frac{\vec x_l}{2\pi l_s^2}\,. \qquad (3.32)
Given our dictionary in (3.20), we find that the surface operator parameters (βl, γl) trans-
form as in (3.26), agreeing with the proposal in [4].
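The chain (3.28)→(3.30)→(3.31)→(3.32) can be checked numerically: for fixed l_p, the relation l_s² = g_s^{−1/2} l_p² picks up exactly one inverse power of |cτ+d|, which is the scaling that feeds into (3.32). A small sketch; the SL(2,Z) element and sample value of τ are our illustrative choices (C_0 is held fixed so that g_s = 1/Im τ):

```python
# Hedged numerical check of the l_s^2 scaling under SL(2,Z); sample values are ours.
a, b, c, d = 3, 2, 1, 1                  # an SL(2,Z) element: a*d - b*c = 1
assert a * d - b * c == 1

tau = complex(0.25, 2.0)                 # tau = C0 + i e^{-phi}
lp = 1.0                                 # Planck length: SL(2,Z) invariant
gs = 1.0 / tau.imag                      # g_s = e^phi
ls2 = gs**(-0.5) * lp**2                 # from l_s = g_s^{-1/4} l_p, eq. (3.30)

jac = abs(c * tau + d)
tau_dual = (a * tau + b) / (c * tau + d)
gs_dual = 1.0 / tau_dual.imag            # g_s -> g_s |c tau + d|^2
ls2_dual = gs_dual**(-0.5) * lp**2

print(ls2_dual, ls2 / jac)               # eq. (3.31): equal up to rounding
```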
21 If we apply the same idea to the LLM geometries dual to half-BPS local operators in [13],
we conclude that the half-BPS local operators are invariant under S-duality.
22 See e.g. [26][27].
The identification of the rest of the parameters is (3.21):
\alpha_l = -\frac{1}{2\pi}\int_{D_l} B_{NS}\,, \qquad \eta_l = \frac{1}{2\pi}\int_{D_l} B_{R}\,. \qquad (3.33)
Using the action of SL(2, Z) on the two-forms (3.29) and the identification (3.33), it
follows from a straightforward manipulation that the surface operator parameters (αl, ηl)
transform as in (3.26), agreeing with the proposal in [4].
Acknowledgements
We would like to thank Xiao Liu for very useful discussions. Research at Perimeter In-
stitute for Theoretical Physics is supported in part by the Government of Canada through
NSERC and by the Province of Ontario through MRI. We also acknowledge further sup-
port from an NSERC Discovery Grant. SM acknowledges support from JSPS Research
Fellowships for Young Scientists.
Appendix A. Supersymmetry of Surface Operator in N=4 SYM
In this Appendix we study the Poincare and conformal supersymmetries preserved
by a surface operator in N=4 SYM supported on R1,1. These symmetries are generated
by ten dimensional Majorana-Weyl spinors ǫ1 and ǫ2 of opposite chirality. We determine
the supersymmetries left unbroken by a surface operator by studying the supersymmetry
variation of the gaugino in the presence of the surface operator singularity in (2.2)(2.7).
The metric is given by:
ds^2 = -(dx^0)^2 + (dx^1)^2 + (dx^2)^2 + (dx^3)^2 = -(dx^0)^2 + (dx^1)^2 + 2\,dz\,d\bar z\,. \qquad (A.1)
where z = \frac{1}{\sqrt{2}}(x^2 + i x^3) = |z|e^{i\theta}, while the singularity in the fields is
\Phi_\omega = \bar\Phi_{\bar\omega} = \frac{1}{\sqrt{2}}\big(\Phi_8 + i\Phi_9\big) = \frac{\beta + i\gamma}{\sqrt{2}\,z}\,, \qquad A = \alpha\, d\theta\,, \qquad F = dA = 2\pi\alpha\,\delta_D\,, \qquad (A.2)
where [α, β] = [α, γ] = [β, γ] = 0 and \delta_D = \frac{1}{2\pi}\,d(d\theta) is a two-form delta function. The relevant
Γ-matrices are
\Gamma^z = \frac{1}{\sqrt{2}}\big(\Gamma^2 - i\Gamma^3\big)\,, \qquad \Gamma^{\bar z} = \frac{1}{\sqrt{2}}\big(\Gamma^2 + i\Gamma^3\big)\,, \qquad \{\Gamma^z, \Gamma^{\bar z}\} = 2. \qquad (A.3)
A Poincare supersymmetry transformation is given by
\delta\lambda = \Big(\tfrac{1}{2}F_{\mu\nu}\Gamma^{\mu\nu} + \nabla_\mu\Phi_i\,\Gamma^{\mu i} + \tfrac{1}{2}[\Phi_i, \Phi_j]\,\Gamma^{ij}\Big)\epsilon_1 \qquad (A.4)
where µ runs from 0 to 3 and i runs from 4 to 9, while a superconformal supersymmetry
transformation is given by
\delta\lambda = \Big[\Big(\tfrac{1}{2}F_{\mu\nu}\Gamma^{\mu\nu} + \nabla_\mu\Phi_i\,\Gamma^{\mu i} + \tfrac{1}{2}[\Phi_i, \Phi_j]\,\Gamma^{ij}\Big)x^\sigma\Gamma_\sigma - 2\Phi_i\Gamma^i\Big]\epsilon_2 \qquad (A.5)
From (A.4), it follows that the unbroken Poincare supersymmetries are given by:
Γω̄zǫ1 = 0 ⇔ Γ2389ǫ1 = −ǫ1. (A.6)
The unbroken superconformal supersymmetries are given by:
\Big[\Big(-\frac{\beta + i\gamma}{\sqrt{2}\,z^2}\,\Gamma^{z\bar\omega} - \frac{\beta - i\gamma}{\sqrt{2}\,\bar z^2}\,\Gamma^{\bar z\omega}\Big)\big(x^0\Gamma_0 + x^1\Gamma_1 + z\Gamma_{\bar z} + \bar z\Gamma_z\big) - 2\Phi_\omega\Gamma^\omega - 2\Phi_{\bar\omega}\Gamma^{\bar\omega}\Big]\epsilon_2 = 0. \qquad (A.7)
From the terms proportional to x0 and x1, we find that the unbroken superconformal
supersymmetries are given by:
Γzω̄ǫ2 = 0 ⇔ Γ2389ǫ2 = −ǫ2. (A.8)
The rest of the conditions
[Γz̄Γω̄Γz]ǫ2 = 0, (A.9)
are automatically satisfied once (A.8) is imposed.
We conclude that the singularity (2.2)(2.7) is half-BPS and that the preserved supersymmetry is generated by ǫ1 and ǫ2 subject to the constraints Γ2389ǫ1 = −ǫ1 and Γ2389ǫ2 = −ǫ2.
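As a consistency check (ours, not in the original text): Γ^{2389} squares to the identity and is traceless, so each condition Γ^{2389}ǫ = −ǫ is a rank-sixteen projection, and exactly half of the thirty-two supercharges survive:

```latex
% Moving the second copy of \Gamma^{2389} through the first costs
% (-1)^{3+2+1} = +1 sign flips, and each spacelike (\Gamma^a)^2 = +1:
\big(\Gamma^{2389}\big)^2 = \mathbb{1}\,, \qquad \operatorname{tr}\Gamma^{2389} = 0\,,
\qquad
P_- = \tfrac{1}{2}\big(\mathbb{1} - \Gamma^{2389}\big)\,, \quad
\operatorname{tr} P_- = \tfrac{1}{2}\cdot 32 = 16\,.
```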
By acting with a broken special conformal transformation on Σ = R2 ⊂ R4 to get a
surface operator supported on Σ =S2, one can show following [28] that such an operator
also preserves half of the thirty-two supersymmetries, but are now generated by a linear
combination of the Poincare and special conformal supersymmetries.
Appendix B. Five-form Flux
In this Appendix, we calculate (3.5) explicitly to evaluate the flux over a non-trivial S^5.
The five-form flux is [13][16]
F_5 = \frac{1}{4}\Big\{d\Big[y^2\,\frac{2z+1}{2z-1}\,(d\chi + V)\Big] - y^3 \ast_3 d\Big(\frac{z + \tfrac{1}{2}}{y^2}\Big)\Big\}\wedge d{\rm Vol}_{AdS_3}
+ \frac{1}{4}\Big\{d\Big[y^2\,\frac{2z-1}{2z+1}\,(d\chi + V)\Big] - y^3 \ast_3 d\Big(\frac{z - \tfrac{1}{2}}{y^2}\Big)\Big\}\wedge d\Omega_3\,. \qquad (B.1)
The five-cycle S^5_l in the bubbling geometry is spanned by the coordinates Ω_3, χ and y. Then the integration is:
\frac{1}{4\pi^4 l_p^4}\int_{S^5_l} F_5 = -\frac{1}{16\pi^4 l_p^4}\oint d\Big[y^2\,\frac{2z-1}{2z+1}\,d\chi\Big]\wedge d\Omega_3 = \frac{y_l^2}{4\pi l_p^4} = N_l. \qquad (B.2)
References
[1] J. M. Maldacena, “The large N limit of superconformal field theories and supergrav-
ity,” Adv. Theor. Math. Phys. 2, 231 (1998) [Int. J. Theor. Phys. 38, 1113 (1999)]
[arXiv:hep-th/9711200].
[2] S. S. Gubser, I. R. Klebanov and A. M. Polyakov, “Gauge theory correlators from
non-critical string theory,” Phys. Lett. B 428, 105 (1998) [arXiv:hep-th/9802109].
[3] E. Witten, “Anti-de Sitter space and holography,” Adv. Theor. Math. Phys. 2, 253
(1998) [arXiv:hep-th/9802150].
[4] S. Gukov and E. Witten, “Gauge theory, ramification, and the geometric langlands
program,” arXiv:hep-th/0612073.
[5] J. Gomis and F. Passerini, “Holographic Wilson loops,” JHEP 0608, 074 (2006)
[arXiv:hep-th/0604007].
[6] S. J. Rey and J. T. Yee, “Macroscopic strings as heavy quarks in large N gauge
theory and anti-de Sitter supergravity,” Eur. Phys. J. C 22, 379 (2001) [arXiv:hep-
th/9803001].
[7] J. M. Maldacena, “Wilson loops in large N field theories,” Phys. Rev. Lett. 80, 4859
(1998) [arXiv:hep-th/9803002].
[8] N. Drukker and B. Fiol, “All-genus calculation of Wilson loops using D-branes,” JHEP 0502, 010 (2005) [arXiv:hep-th/0501109].
[9] S. Yamaguchi, “Bubbling geometries for half BPS Wilson lines,” arXiv:hep-th/0601089.
[10] S. Yamaguchi, “Wilson loops of anti-symmetric representation and D5-branes,” JHEP
0605, 037 (2006) [arXiv:hep-th/0603208].
[11] O. Lunin, “On gravitational description of Wilson lines,” JHEP 0606, 026 (2006)
[arXiv:hep-th/0604133].
[12] J. Gomis and F. Passerini, “Wilson loops as D3-branes,” JHEP 0701, 097 (2007)
[arXiv:hep-th/0612022].
[13] H. Lin, O. Lunin and J. M. Maldacena, “Bubbling AdS space and 1/2 BPS geome-
tries,” JHEP 0410, 025 (2004) [arXiv:hep-th/0409174].
[14] J. Gomis and C. Romelsberger, “Bubbling defect CFT’s,” JHEP 0608, 050 (2006)
[arXiv:hep-th/0604155].
[15] J. Gomis and T. Okuda, “Wilson loops, geometric transitions and bubbling Calabi-
Yau’s,” JHEP 0702, 083 (2007) [arXiv:hep-th/0612190].
[16] H. Lin and J. M. Maldacena, “Fivebranes from gauge theory,” Phys. Rev. D 74,
084014 (2006) [arXiv:hep-th/0509235].
[17] A. Kapustin and E. Witten, “Electric-magnetic duality and the geometric Langlands
program,” arXiv:hep-th/0604151.
[18] E. Frenkel, “Lectures on the Langlands program and conformal field theory,”
arXiv:hep-th/0512172.
[19] R. M. Rohm, “Some Current Problems In Particle Physics Beyond The Standard
Model,”
[20] J. Preskill and L. M. Krauss, “Local Discrete Symmetry And Quantum Mechanical
Hair,” Nucl. Phys. B 341, 50 (1990).
[21] M. G. Alford, K. M. Lee, J. March-Russell and J. Preskill, “Quantum field theory of non-Abelian strings and vortices,” Nucl. Phys. B 384, 251 (1992) [arXiv:hep-th/9112038].
[22] A. Kapustin and S. Sethi, “The Higgs branch of impurity theories,” Adv. Theor. Math.
Phys. 2, 571 (1998) [arXiv:hep-th/9804027].
[23] N. R. Constable, J. Erdmenger, Z. Guralnik and I. Kirsch, “Intersecting D3-branes
and holography,” Phys. Rev. D 68, 106007 (2003) [arXiv:hep-th/0211222].
[24] A. Kapustin, “Wilson-’t Hooft operators in four-dimensional gauge theories and S-duality,” Phys. Rev. D 74, 025005 (2006) [arXiv:hep-th/0501015].
[25] E. Witten, “Anti-de Sitter space, thermal phase transition, and confinement in gauge theories,” Adv. Theor. Math. Phys. 2, 505 (1998) [arXiv:hep-th/9803131].
[26] J. Polchinski, “String Theory”, Cambridge University Press, Chapter 12 in Volume 2.
[27] K. Becker, M. Becker and J. Schwarz, ”String Theory and M-theory”, Cambridge
University Press, Chapter 8.
[28] M. Bianchi, M. B. Green and S. Kovacs, “Instanton corrections to circular Wilson loops in N = 4 supersymmetric Yang-Mills theory,” JHEP 0204, 040 (2002) [arXiv:hep-th/0202003].
0704.1658 | Resolving Cosmic Gamma Ray Anomalies with Dark Matter Decaying Now | UCI-TR-2007-17
Resolving Cosmic Gamma Ray Anomalies with Dark Matter Decaying Now
Jose A. R. Cembranos, Jonathan L. Feng, and Louis E. Strigari
Department of Physics and Astronomy, University of California, Irvine, CA 92697, USA
Dark matter particles need not be completely stable, and in fact they may be decaying now. We
consider this possibility in the frameworks of universal extra dimensions and supersymmetry with
very late decays of WIMPs to Kaluza-Klein gravitons and gravitinos. The diffuse photon background
is a sensitive probe, even for lifetimes far greater than the age of the Universe. Remarkably, both
the energy spectrum and flux of the observed MeV γ-ray excess may be simultaneously explained by
decaying dark matter with MeV mass splittings. Future observations of continuum and line photon
fluxes will test this explanation and may provide novel constraints on cosmological parameters.
PACS numbers: 95.35.+d, 11.10.Kk, 12.60.-i, 98.80.Cq
The abundance of dark matter is now well known from
observations of supernovae, galaxies and galactic clusters,
and the cosmic microwave background (CMB) [1], but
its identity remains elusive. Weakly-interacting massive
particles (WIMPs) with weak-scale masses ∼ 0.1−1 TeV
are attractive dark matter candidates. The number of
WIMPs in the Universe is fixed at freeze-out when they
decouple from the known particles about 1 ns after the
Big Bang. Assuming they are absolutely stable, these
WIMPs survive to the present day, and their number
density is naturally in the right range to be dark matter.
The standard signatures of WIMPs include, for example,
elastic scattering off nucleons in underground laboratories, products from WIMP annihilation in the galaxy, and
missing energy signals at colliders [2].
The stability of WIMPs is, however, not required to
preserve the key virtues of the WIMP scenario. In fact,
in supersymmetry (SUSY) and other widely-studied sce-
narios, it is just as natural for WIMPs to decay after
freeze-out to other stable particles with similar masses,
which automatically inherit the right relic density to be
dark matter [3]. If the resulting dark matter interacts
only gravitationally, the WIMP decay is very late, in
some cases leading to interesting effects in structure for-
mation [4] and other cosmological observables. Of course,
the WIMP lifetime depends on ∆m, the mass splitting
between the WIMP and its decay product. For high de-
generacies, the WIMP lifetime may be of the order of or
greater than the age of the Universe t0 ≃ 4.3 × 1017 s,
leading to the tantalizing possibility that dark matter is
decaying now.
For very long WIMP lifetimes, the diffuse photon back-
ground is a promising probe [3, 5]. Particularly interest-
ing is the (extragalactic) cosmic gamma ray background
(CGB) shown in Fig. 1. Although smooth, the CGB
must be explained by multiple sources. For Eγ ≲ 1 MeV and Eγ ≳ 10 MeV, the CGB is reasonably well-modeled by thermal emission from obscured active galactic nuclei (AGN) [9] and beamed AGN, or blazars [10], respectively. However, in the range 1 MeV ≲ Eγ ≲ 5 MeV, no
astrophysical source can account for the observed CGB.
Blazars are observed to have a spectral cut-off ∼ 10 MeV,
FIG. 1: The CGB measured by HEAO-1 [6] (circles), COMP-
TEL [7] (squares), and EGRET [8] (triangles), along with the
known astrophysical sources: AGN (long-dash), SNIa (dot-
ted), and blazars (short-dash, and dot-dashed extrapolation).
and also only a few objects have been detected below this
energy [11, 12]; a maximal upper limit [13] on the blazar
contribution for Eγ <∼ 10 MeV is shown in Fig. 1. Dif-
fuse γ-rays from Type Ia supernovae (SNIa) contribute
below ∼ 5 MeV, but the most recent astronomical data
show that they also cannot account for the entire spec-
trum [14, 15]; previous calculations suggested that SNIa
are the dominant source of γ-rays at MeV energies [16].
In this paper, we study the contribution to the CGB
from dark matter decaying now. We consider simple
models with extra dimensions or SUSY in which WIMP
decays are highly suppressed by both the weakness of
gravity and small mass splittings and are dependent on
a single parameter, ∆m. We find that the CGB is an
extremely sensitive probe, even for lifetimes τ ≫ t0. In-
triguingly, we also find that both the energy spectrum
and the flux of the gamma ray excess described above are
naturally explained in these scenarios with ∆m ∼ MeV.
As our primary example we consider minimal univer-
sal extra dimensions (mUED) [17], one of the simplest
imaginable models with extra dimensions. In mUED all
particles propagate in one extra dimension compactified
on a circle, and the theory is completely specified by mh,
the Higgs boson mass, and R, the compactification ra-
dius. (In detail, there is also a weak, logarithmic depen-
dence on the cutoff scale Λ [18]. We present results for
ΛR = 20.) Every particle has a Kaluza-Klein (KK) part-
ner at every mass level ∼ m/R, m = 1, 2, . . ., and the
lightest KK particle (LKP) is a dark matter candidate,
with its stability guaranteed by a discrete parity.
Astrophysical and particle physics constraints limit
mUED parameters to regions of (R^−1, mh) parameter
space where the two lightest KK particles are the KK
hypercharge gauge boson B1, and the KK graviton G1,
with mass splitting ∆m <∼ O(GeV) [19]. This extreme
degeneracy, along with the fact that KK gravitons inter-
act only gravitationally, leads to long NLKP lifetimes
τ = 4π MP² / [b cos²θW (∆m)³] ≃ [4.7 × 10^22 s / (b cos²θW)] (MeV/∆m)³ ,   (1)
where MP ≃ 2.4 × 10^18 GeV is the reduced Planck scale,
θW is the weak mixing angle, b = 10/3 for B1 → G1γ,
and b = 2 for G1 → B1γ [20]. Note that τ depends only
on the single parameter ∆m. For 795 GeV <∼ R^−1 <∼ 809 GeV
and 180 GeV <∼ mh <∼ 215 GeV, the model is
not only viable, but the B1 thermal relic abundance is
consistent with that required for dark matter [21] and
∆m <∼ 30 MeV, leading to lifetimes τ(B1 → G1γ) >∼ t0.
We will also consider supersymmetric models, where
small mass splittings are also possible, since the gravitino
mass is a completely free parameter. If the two light-
est supersymmetric particles are a Bino-like neutralino B̃
and the gravitino G̃, the heavier particle’s decay width
is again given by Eq. (1), but with b = 2 for B̃ → G̃γ,
and b = 1 for G̃ → B̃γ. As in mUED, τ depends only on
∆m, and ∆m <∼ 30 MeV yields lifetimes greater than t0.
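Since the lifetime depends only on ∆m once the channel fixes b, the quoted boundaries are easy to check numerically. A minimal sketch (assuming the degenerate-limit scaling τ ≃ 4.7 × 10^22 s (b cos²θW)^−1 (MeV/∆m)³ quoted above; the function and variable names are ours):

```python
T0 = 4.3e17                   # age of the Universe in seconds

def wimp_lifetime(delta_m_mev, b, cos2_theta_w=0.77):
    """Degenerate-limit lifetime: tau = 4.7e22 s / (b cos^2 tW) * (MeV/dm)^3."""
    return 4.7e22 / (b * cos2_theta_w * delta_m_mev**3)

# the two decay channels quoted in the text, at the boundary dm = 30 MeV
tau_mued = wimp_lifetime(30.0, b=10.0 / 3.0)   # B1 -> G1 gamma
tau_susy = wimp_lifetime(30.0, b=2.0)          # Bino -> gravitino gamma
```

Both channels give τ above t0 at ∆m = 30 MeV, consistent with the statement that ∆m <∼ 30 MeV yields lifetimes greater than the age of the Universe.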
The present photon flux from two-body decays is
dΦ/dEγ = (c / 4π V0 τ) ∫ dt N(t) δ(Eγ − a εγ) ,   (2)
where N(t) = N^in e^−t/τ and N^in is the number of WIMPs
at freeze-out, V0 is the present volume of the Universe,
a is the cosmological scale factor with a(t0) ≡ 1, and
εγ = ∆m is the energy of the produced photons. Pho-
tons from two-body decays are observable in the diffuse
photon background only if the decay takes place in the
late Universe, when matter or vacuum energy dominates.
In this case, Eq. (2) may be written as
dΦ/dEγ = (c/4π) [N^in e^−P(Eγ/εγ)/τ / (V0 τ Eγ H(Eγ/εγ))] Θ(εγ − Eγ) ,   (3)
where P(a) = t is the solution to (da/dt)/a = H(a) =
H0 [ΩM a^−3 + ΩDE a^−3(1+w)]^1/2 with P(0) = 0, and ΩM
and ΩDE are the matter and dark energy densities. If
dark energy is a cosmological constant Λ with w = −1,

P(a) ≡ [2/(3 H0 √ΩΛ)] ln[ (√(ΩΛ a³) + √(ΩM + ΩΛ a³)) / √ΩM ] .   (4)
The flux has a maximum at Eγ = εγ [(ΩM/2ΩΛ) U(H0² τ² ΩΛ)]^1/3,
where U(x) ≡ (x + 1 − √(3x + 1))/(x − 1).
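For w = −1 the solution P(a) has the closed form of an arcsinh; as a quick sanity check (a sketch for a flat ΩM + ΩΛ universe with h = 0.7, the parameters used in the text; variable names are ours), evaluating it at a = 1 recovers the age of the Universe:

```python
import math

H0 = 70.0e5 / 3.086e24        # 70 km/s/Mpc in 1/s
OMEGA_M, OMEGA_L = 0.25, 0.75

def cosmic_time(a):
    """t(a) for a flat matter + Lambda universe (w = -1), with t(0) = 0."""
    return (2.0 / (3.0 * H0 * math.sqrt(OMEGA_L))
            * math.asinh(math.sqrt(OMEGA_L / OMEGA_M) * a**1.5))

t0 = cosmic_time(1.0)         # ~4.5e17 s, i.e. ~14 Gyr
```

This gives t0 ≈ 4.5 × 10^17 s, consistent with the t0 ≃ 4.3 × 10^17 s quoted earlier (the small difference reflects the rounded cosmological parameters).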
The energy spectrum is easy to understand for very
long and very short decay times. For τ ≪ t0,
H0² τ² ΩDE ≪ 1, and the flux grows due to the decelerated
expansion of the Universe as dΦ/dEγ ∝ Eγ^1/2 until
it reaches its maximum at Eγ^max ≃ εγ (ΩM H0² τ²/4)^1/3.
Above this energy, the flux is suppressed exponentially
by the decreasing number of decaying particles [3].
On the other hand, if τ ≫ t0, H0² τ² ΩDE ≫ 1, and
the flux grows as dΦ/dEγ ∝ Eγ^1/2 only for photons that
originated in the matter-dominated epoch. For decays
in the vacuum-dominated Universe, the flux decreases
asymptotically as dΦ/dEγ ∝ Eγ^(1+3w)/2 due to the accelerated
expansion. The flux reaches its maximal value at
Eγ^max ≃ εγ [−ΩM/((1 + 3w) ΩDE)]^−1/(3w), where photons
were produced at matter-vacuum equality. Note that this
value and the spectrum shape depend on the properties
of the dark energy. Assuming ΩM = 0.25, ΩDE = 0.75,
w = −1, and h = 0.7, and that these particles make up
all of non-baryonic dark matter, so that
N^in/V0 = 1.0 × 10^−9 cm^−3 (ΩNBDM/0.2) (TeV/mχ) ,   (5)
we find that the maximal flux is
dΦ/dEγ (Eγ^max) = 1.33 × 10^−3 cm^−2 s^−1 sr^−1 MeV^−1 (10^21 s/τ) (MeV/εγ) (ΩNBDM/0.2) (TeV/mχ) .   (6)
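The two limiting peak positions can be checked by locating the maximum of the spectral shape numerically. A sketch (pure Python; the relative shape is taken as e^−t(a)/τ / (Eγ H(a)) with a = Eγ/εγ, for the flat ΩM = 0.25, ΩΛ = 0.75, h = 0.7 cosmology of the text; function names are ours):

```python
import math

H0 = 2.27e-18                 # Hubble rate today in 1/s (h = 0.7)
OM, OL = 0.25, 0.75           # Omega_M, Omega_Lambda

def hubble(a):
    return H0 * math.sqrt(OM * a**-3 + OL)

def cosmic_time(a):
    # closed-form t(a) for a flat matter + Lambda universe, t(0) = 0
    return (2.0 / (3.0 * H0 * math.sqrt(OL))
            * math.asinh(math.sqrt(OL / OM) * a**1.5))

def flux_shape(e_gamma, eps, tau):
    # relative spectral shape: exp(-t(a)/tau) / (E * H(a)), with a = E/eps
    a = e_gamma / eps
    return math.exp(-cosmic_time(a) / tau) / (e_gamma * hubble(a))

def peak_energy(eps, tau, n=2000):
    # brute-force maximization over a grid of 0 < E < eps
    grid = [(i + 1) / (n + 1) * eps for i in range(n)]
    return max(grid, key=lambda e: flux_shape(e, eps, tau))

ratio_long = peak_energy(1.0, 1.0e23)    # tau >> t0: peak near 0.55 * eps
ratio_short = peak_energy(1.0, 1.0e16)   # tau << t0: peak strongly redshifted
```

For τ ≫ t0 the peak sits at Eγ ≈ 0.55 εγ = (ΩM/2ΩΛ)^1/3 εγ, while for τ ≪ t0 it moves far below εγ, as described above.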
Fig. 2 shows example contributions to the CGB from
decaying dark matter in mUED and SUSY. The mass
splittings have been chosen to produce maximal fluxes
at Eγ ∼ MeV. These frameworks are, however, highly
constrained: once ∆m is chosen, τ and the flux are es-
sentially fixed. It is thus remarkable that the predicted
flux is in the observable, but not excluded, range and
may explain the current excess above known sources.
To explore this intriguing fact further, we relax model-
dependent constraints and consider τ and ∆m to be in-
dependent parameters in Fig. 3. The labeled curves give
the points in (τ,∆m) parameter space where, for the
WIMP masses indicated and assuming Eq. (5), the max-
imal flux from decaying dark matter matches the flux of
the observed photon background in the keV to 100 GeV
range [6]. For a given WIMP mass, all points above the
corresponding curve predict peak fluxes above the ob-
served diffuse photon background and so are excluded.
The shaded band in Fig. 3 is the region where the max-
imal flux falls in the unaccounted for range of 1-5 MeV.
FIG. 2: Data for the CGB in the range of the MeV excess,
along with predicted contributions from extragalactic dark
matter decay. The curves are for B1 → G1γ in mUED with
lifetime τ = 10^3 t0 and mB1 = 800 GeV (solid) and B̃ → G̃γ
in SUSY with lifetime τ = 5 × 10^3 t0 and mB̃ = 80 GeV
(dashed). We have assumed ΩNBDM = 0.2 and smeared all
spectra with energy resolution ∆E/E = 10%, characteristic
of experiments such as COMPTEL. The dot-dashed curve is
the upper limit to the blazar spectrum, as in Fig. 1.
For τ >∼ t0, Eγ^max ≃ 0.55 ∆m. However, for τ <∼ t0, Eγ^max
does not track ∆m, as the peak energy is significantly
redshifted. For example, for a WIMP with mass 80 GeV,
τ ∼ 10^12 s and ∆m ∼ MeV, Eγ^max ∼ keV. The over-
lap of this band with the labeled contours is where the
observed excess may be explained through WIMP de-
cays. We see that it requires 10^20 s <∼ τ <∼ 10^22 s and
1 MeV <∼ ∆m <∼ 10 MeV. These two properties may
be simultaneously realized by two-body gravitational de-
cays: the diagonal line shows the relation between τ and
∆m given in Eq. (1) for B1 → G1γ, and we see that this
line passes through the overlap region! Similar conclu-
sions apply for all other decay models discussed above.
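That the mUED line indeed crosses the overlap region can be verified directly: inverting the lifetime scaling gives the mass splittings at which the line enters and leaves the 10^20 − 10^22 s window. A sketch (assuming τ ≃ 4.7 × 10^22 s (b cos²θW)^−1 (MeV/∆m)³ with b = 10/3 for B1 → G1γ; names are ours):

```python
B_COS2 = (10.0 / 3.0) * 0.77      # b * cos^2(theta_W) for B1 -> G1 gamma

def dm_at_tau(tau):
    """Mass splitting (MeV) giving lifetime tau on the mUED line."""
    return (4.7e22 / (B_COS2 * tau)) ** (1.0 / 3.0)

dm_hi = dm_at_tau(1.0e20)         # ~5.7 MeV: line enters the tau window
dm_lo = dm_at_tau(1.0e22)         # ~1.2 MeV: line leaves the tau window
overlap = max(dm_lo, 1.0) < min(dm_hi, 10.0)   # intersects the 1-10 MeV band
```

The intersection of the lifetime window with the 1 − 10 MeV band is non-empty, so the diagonal line does pass through the overlap region.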
These considerations of the diffuse photon background
also have implications for the underlying models. For
mUED, ∆m = 2.7 − 3.2 MeV and τ = (4 − 7) × 10^20 s can
explain the MeV excess in the CGB. This preferred region
is realized for the decay B1 → G1γ for R^−1 ≈ 808 GeV.
(See Fig. 4.) Lower R^−1 predicts larger ∆m and shorter
lifetimes and is excluded. The MeV excess may also be
realized for G1 → B1γ for R^−1 ≈ 810.5 GeV, though in
this case the G1 must be produced non-thermally to have
the required dark matter abundance [20, 22].
So far we have concentrated on the cosmic, or extra-
galactic, photon flux, which is dependent only on cosmo-
logical parameters. The Galactic photon flux depends
on halo parameters and so is less robust, but it has
the potential to be a striking signature, since these pho-
tons are not redshifted and so will appear as lines with
Eγ = ∆m. INTEGRAL has searched for photon lines
within 13◦ from the Galactic center [23]. For lines with
energy E ∼ MeV and width ∆E ∼ 10 keV (comparable to
FIG. 3: Model-independent analysis of decaying dark mat-
ter in the (τ,∆m) plane. In the shaded region, the result-
ing extragalactic photon flux peaks in the MeV excess range
1 MeV ≤ Eγ^max ≤ 5 MeV. On the contours labeled with
WIMP masses, the maximal extragalactic flux matches the
extragalactic flux observed by COMPTEL; points above these
contours are excluded. The diagonal line is the predicted re-
lation between τ and ∆m in mUED. On the dashed line, the
predicted Galactic flux matches INTEGRAL’s sensitivity of
10^−4 cm^−2 s^−1 for monoenergetic photons with Eγ ∼ MeV.
INTEGRAL’s energy resolution at these energies), INTEGRAL’s
sensitivity is Φ ∼ 10^−4 cm^−2 s^−1. The Galactic flux from
decaying dark matter saturates this limit along the ver-
tical line in Fig. 3, assuming mχ = 800 GeV. This flux is
subject to halo uncertainties; we have assumed the halo
density profiles of Ref. [24], which give a conservative up-
per limit on the flux within the field of view. Remarkably,
however, we see that the vertical line also passes through
the overlap region discussed above. If the MeV CGB
anomaly is explained by decaying dark matter, then, the
Galactic flux is also observable, and future searches for
photon lines will stringently test this scenario.
In conclusion, well-motivated frameworks support the
possibility that dark matter may be decaying now. We
have shown that the diffuse photon spectrum is a sensi-
tive probe of this possibility, even for lifetimes τ ≫ t0.
This is the leading probe of these scenarios. Current
bounds from the CMB [25] and reionization [26] do not
exclude this scenario, but they may also provide comple-
mentary probes in the future. We have also shown that
dark matter with mass splittings ∆m ∼ MeV and life-
times τ ∼ 10^3 − 10^4 Gyr can explain the current excess of
observations above astrophysical sources at Eγ ∼ MeV.
Such lifetimes are unusually long, but it is remarkable
that these lifetimes and mass splittings are simultane-
ously realized in simple models with extra dimensional
or supersymmetric WIMPs decaying to KK gravitons
and gravitinos. Future experiments, such as ACT [27],
with large apertures and expected energy resolutions of
∆E/E = 1%, may exclude or confirm this explanation of
[Figure: mUED phase diagram over R^−1 = 807 − 811 GeV; excluded regions labeled “Charged DM” (top) and “Overproduction” (bottom).]
FIG. 4: Phase diagram of mUED. The top and bottom shaded
regions are excluded for the reasons indicated [19]. In the
yellow (light) shaded region, the B1 thermal relic density is
in the 2σ preferred region for non-baryonic dark matter [21].
In the vertical band on the left (right) the decay B1 → G1γ
(G1 → B1γ) can explain the MeV diffuse photon excess.
the MeV excess through both continuum and line signals.
Finally, we note that if dark matter is in fact decaying
now, the diffuse photon signal is also sensitive to the
recent expansion history of the Universe. For example,
as we have seen, the location of the spectrum peak is a
function of ΩM/ΩDE and w. The CGB may therefore, in
principle, provide novel constraints on dark energy prop-
erties and other cosmological parameters.
We thank John Beacom, Matt Kistler, and Hasan Yuk-
sel for Galactic flux insights. The work of JARC and
JLF is supported in part by NSF Grants PHY–0239817
and PHY–0653656, NASA Grant No. NNG05GG44G,
and the Alfred P. Sloan Foundation. The work of JARC
is also supported by the FPA 2005-02327 project (DG-
ICYT, Spain). LES and JARC are supported by the
McCue Postdoctoral Fund, UCI Center for Cosmology.
[1] D. N. Spergel et al. [WMAP Collaboration],
astro-ph/0603449.
[2] G. Bertone, D. Hooper and J. Silk, Phys. Rept. 405, 279
(2005) [hep-ph/0404175].
[3] J. L. Feng, A. Rajaraman and F. Takayama, Phys. Rev.
Lett. 91, 011302 (2003) [hep-ph/0302215]; Phys. Rev. D
68, 063504 (2003) [hep-ph/0306024].
[4] K. Sigurdson and M. Kamionkowski, Phys. Rev. Lett. 92,
171302 (2004) [astro-ph/0311486]; S. Profumo, K. Sig-
urdson, P. Ullio and M. Kamionkowski, Phys. Rev. D
71, 023518 (2005) [astro-ph/0410714]; M. Kaplinghat,
Phys. Rev. D 72, 063510 (2005) [astro-ph/0507300];
J. A. R. Cembranos, J. L. Feng, A. Rajaraman and
F. Takayama, Phys. Rev. Lett. 95, 181301 (2005)
[hep-ph/0507150]; L. E. Strigari, M. Kaplinghat and
J. S. Bullock, Phys. Rev. D 75, 061303 (2007)
[astro-ph/0606281].
[5] See also G. D. Kribs and I. Z. Rothstein, Phys. Rev. D 55,
4435 (1997) [hep-ph/9610468]; K. Ahn and E. Komatsu,
Phys. Rev. D 72, 061301 (2005) [astro-ph/0506520];
S. Kasuya and M. Kawasaki, Phys. Rev. D 73, 063007
(2006) [astro-ph/0602296]; K. Lawson and A. R. Zhitnit-
sky, 0704.3064 [astro-ph].
[6] D. E. Gruber, et al., Astrophys. J. 520, 124 (1999)
[astro-ph/9903492].
[7] G. Weidenspointner et al., American Institute of Physics
Conference Series 510, 467 (2000).
[8] P. Sreekumar et al. [EGRET Collaboration], Astrophys.
J. 494, 523 (1998) [astro-ph/9709257].
[9] Y. Ueda, M. Akiyama, K. Ohta and T. Miyaji, Astro-
phys. J. 598, 886 (2003) [astro-ph/0308140].
[10] V. Pavlidou and B. D. Fields, Astrophys. J. 575, L5
(2002) [astro-ph/0207253].
[11] K. McNaron-Brown et al., Astrophys. J. 451, 575 (1995).
[12] F. Stecker, M. H. Salamon, C. Done, astro-ph/9912106.
[13] A. Comastri, A. di Girolamo and G. Setti, Astron. As-
trophys. S. 120, 627 (1996).
[14] L. E. Strigari, J. F. Beacom, T. P. Walker and P. Zhang,
JCAP 0504, 017 (2005) [astro-ph/0502150].
[15] K. Ahn, E. Komatsu and P. Hoflich, Phys. Rev. D 71,
121301 (2005) [astro-ph/0506126].
[16] K. Watanabe, D. H. Hartmann, M. D. Leising
and L. S. The, Astrophys. J. 516, 285 (1999)
[astro-ph/9809197]; P. Ruiz-Lapuente, M. Casse
and E. Vangioni-Flam, Astrophys. J. 549, 483 (2001)
[astro-ph/0009311].
[17] T. Appelquist, H.-C. Cheng and B. A. Dobrescu, Phys.
Rev. D 64, 035002 (2001) [hep-ph/0012100].
[18] H. C. Cheng, K. T. Matchev and M. Schmaltz, Phys.
Rev. D 66, 036005 (2002) [hep-ph/0204342].
[19] J. A. R. Cembranos, J. L. Feng and L. E. Strigari, Phys.
Rev. D 75, 036004 (2007) [hep-ph/0612157].
[20] J. L. Feng, A. Rajaraman and F. Takayama, Phys. Rev.
D 68, 085018 (2003) [hep-ph/0307375].
[21] M. Kakizaki, S. Matsumoto and M. Senami, Phys. Rev.
D 74, 023504 (2006) [hep-ph/0605280].
[22] N. R. Shah and C. E. M. Wagner, Phys. Rev. D 74,
104008 (2006) [hep-ph/0608140].
[23] B. J. Teegarden and K. Watanabe, Astrophys. J. 646,
965 (2006) [astro-ph/0604277].
[24] A. Klypin, H. Zhao and R. S. Somerville, Astrophys. J.
573, 597 (2002) [astro-ph/0110390].
[25] K. Ichiki, M. Oguri and K. Takahashi, Phys. Rev. Lett.
93, 071302 (2004) [astro-ph/0403164].
[26] X. L. Chen and M. Kamionkowski, Phys. Rev. D 70,
043502 (2004) [astro-ph/0310473]; L. Zhang, X. Chen,
M. Kamionkowski, Z. Si, Z. Zheng, 0704.2444 [astro-ph].
[27] S. E. Boggs et al. [Larger ACT Collaboration],
astro-ph/0608532.
0704.1659 | Neutrino-cooled accretion and GRB variability

Astronomy & Astrophysics manuscript no. neutrino_cooling, January 2, 2019
Neutrino-cooled accretion and GRB variability
Dimitrios Giannios
Max Planck Institute for Astrophysics, Box 1317, D-85741 Garching, Germany
Received / Accepted
Abstract. For accretion rates Ṁ ∼ 0.1 M⊙/s onto a few solar mass black hole, the inner part of the disk is expected to make a
transition from advection dominance to neutrino cooling. This transition is characterized by sharp changes of the disk properties.
I argue here that during this transition, a modest increase of the accretion rate leads to powerful enhancement of the Poynting
luminosity of the GRB flow and decrease of its baryon loading. These changes of the characteristics of the GRB flow translate
into changing gamma-ray spectra from the photosphere of the flow. The photospheric interpretation of the GRB emission
explains the observed narrowing of GRB pulses with increasing photon energy and the luminosity-spectral peak relation within
and among bursts.
Key words. Gamma rays: bursts – Accretion, accretion disks
1. Introduction
The commonly assumed model for the central engine of
gamma-ray bursts (hereafter GRBs) consists of a compact ob-
ject, most likely a black hole, surrounded by a massive ac-
cretion disk. This configuration results naturally from the col-
lapse of the core of a fast rotating, massive star (Woosley 1993;
MacFadyen & Woosley 1999) or the coalescence of a neutron
star-neutron star or a neutron star-black hole binary (for simu-
lations see Ruffert et al. 1997).
The accretion rates needed to power a GRB are in the range
Ṁ ∼ 0.01 − 10 M⊙/s. Recently, much theoretical work has
been done to understand the microphysics and the structure of
the disk at this very high accretion-rate regime (e.g., Chen &
Beloborodov 2007; hereafter CB07). These studies have shown
that while for accretion rates Ṁ ≪ 0.1M⊙/s the disk is advec-
tion dominated, when Ṁ ∼ 0.1M⊙/s it makes a sharp transition
to efficient neutrino cooling. This transition results in a thinner,
much denser and neutron-rich disk.
Here I show that, for reasonable scalings of the magnetic
field strength with the properties of the inner disk, the advection
dominance-neutrino cooling transition results in large changes
in the Poynting flux output in the GRB flow. During this transi-
tion, a moderate increase of the accretion rate is accompanied
by large increase of the Poynting luminosity and decrease of
the baryon loading of the GRB flow. This leads to powerful
and “clean” ejections of material. The photospheric emission
from these ejections explains the observed narrowing of GRB
pulses with increasing photon energy (Fenimore et al. 1995)
and the luminosity-spectral peak relation within and among
bursts (Liang et al. 2004).
Send offprint requests to: [email protected]
2. Disk transition to efficient neutrino cooling
In accretion powered GRB models the outflow responsible for
the GRB is launched in the polar region of the black-hole-disk
system. This can be done by neutrino-antineutrino annihilation
and/or MHD mechanisms of energy extraction. In either case,
the power output in the outflow critically depends on the physi-
cal properties of the inner part of the accretion disk. In this sec-
tion, I focus on the disk properties around the transition from
advection dominance to neutrino cooling. The implications of
this transition on the energy output to the GRB flow are the
topic of the next section.
Recent studies have explored the structure of accretion
disks that surround a black hole of a few solar masses for accre-
tion rates Ṁ ∼ 0.01 − 10 M⊙/s. Most of these studies focus on
1-D “α”-disk models (where α relates the viscous stress to the
pressure in the disk; Shakura & Sunyaev 1973) and put empha-
sis on the treatment of the microphysics of the disks connected
to the neutrino emission and opacity, nuclear composition and
electron degeneracy (Di Matteo et al. 2002; Korhi & Mineshige
2002; Kohri et al. 2005; CB07; Kawanaka & Mineshige 2007;
hereafter KM07) and on general relativistic effects on the hy-
drodynamics (Popham et al. 1999; Pruet at al. 2003; CB07).
These studies have shown that for Ṁ <∼ 0.1 M⊙/s and viscosity
parameter α ∼ 0.1 the disk is advection dominated since the
large densities do not allow for photons to escape. The temperature
at the inner parts of the disk is T >∼ 1 MeV and the
density ρ ∼ 10^8 g/cm³, which results in a disk filled with mildly
degenerate pairs. In this regime of temperatures and densities
the nucleons have dissociated and the disk consists of free pro-
tons and neutrons of roughly equal number. The pressure in the
disk is: P = Pγ,e± + Pb. The first term accounts for the pres-
sure coming from radiation and pairs and the second for that
http://arxiv.org/abs/0704.1659v1
of the baryons. In the advection dominated regime the pressure
is dominated by the “light particle” contribution (i.e. the first
term in the last expression).
For accretion rates Ṁ ∼ 0.1 M⊙/s, a rather sharp transition
takes place in the inner parts of the disk. During this transition,
the mean electron energy is high enough for electron capture
by protons to be possible: e− + p → n + ν. As a result, the
disk becomes neutron rich, enters a phase of efficient neutrino
cooling and becomes thinner. The baryon density of the disk in-
creases dramatically and the total pressure is dominated by the
baryon pressure. After the transition is completed the neutron-
to-proton ratio in the disk is ∼ 10. Hereafter, I refer to this
transition as “neutronization” transition.
The neutronization transition takes place at an approxi-
mately constant disk temperature T ≈ several × 10^10 K and is
completed for a moderate increase of the accretion rate by a
factor of ≈ 2 − 3. During the transition the baryon density in-
creases by ≈ 1.5 orders of magnitude and the disk pressure by
a factor of several (see CB07; KM07).
2.1. Scalings of the disk properties with Ṁ
Although the numbers quoted in the previous section hold quite
generally, the range of accretion rates for which the neutroniza-
tion transition takes place depends on the α viscosity parame-
ter and on the spin of the black hole. For more quantitative
statements to be made, I extract some physical quantities of the
disk before and after transition from Figs. 13-15 of CB07 for
disk viscosity α = 0.1 and spin parameter of the black hole
a = 0.95. I focus at a fixed radius close to the inner edge of
the disk (for convenience, I choose r = 6GM/c2). The quanti-
ties before and after the transition are marked with the super-
scripts “A” and “N” and stand for Advection dominance and
Neutrino cooling respectively. At ṀA = 0.03 M⊙/s, the density
of the disk is ρA ≃ 3 · 109gr/cm3 and has similar number of
protons and neutrons, while at ṀN = 0.07 M⊙/s, the density is
ρN ≃ 9 · 1010gr/cm3 and the neutron-to-proton ratio is ∼ 10.
The temperature remains approximately constant for this range
of accretion rates at T ≃ 5 · 1010 K. A factor of ṀN/ṀA ≃ 2.3
increase in the accretion rate in this specific example leads to
the transition from advection dominance to neutrino cooling.
Around the transition the (mildly degenerate) pairs con-
tribute a factor of ∼ 2 more to the pressure w.r.t. radiation. The
total pressure is: P = Pγ,e± + Pb ≈ arT
4 + ρkBT/mp, where ar
and kB are the radiation and Boltzmann constants respectively
(Beloborodov 2003; CB07). Using the last expression, the disk
pressure before the transition is found: PA ≃ 6 · 1028 erg/cm3;
dominated by the contribution of light particles as expected for
an advection dominated disk. At the higher accretion rate ṀN,
one finds for the pressure of the disk PN ≃ 4 · 1029 erg/cm3.
Now the disk is baryon pressure supported.
From the previous exercise one gets indicative scalings for
the dependence of quantities in the disk as a function of Ṁ
during the neutronization transition: ρ ∝ Ṁ4 and P ∝ Ṁ2.3.
Doubling of the accretion rate during the transition leads to a
factor of ∼ 16 and ∼ 5 increase of the density and pressure of
the disk respectively.
Similar estimates for the dependence of the disk density
and pressure on the accretion rate can be done when the inner
disk is in the advection dominance and neutrino cooling regime
but fairly close to the transition. In these regimes, I estimate
that ρ ∝ P ∝ Ṁ (see, for example Figs. 1-3 in KM07).
Does this sharp change of the disk properties associated
with the neutronization transition affect the rate of energy re-
lease in the polar region of the disk where the GRB flow is
expected to form? The answer depends on the mechanism re-
sponsible for the energy release.
3. Changes in the GRB flow from the
neutronization transition
Gravitational energy released by the accretion of matter to the
black hole can be tapped by neutrino-antineutrino annihilation
or via MHD mechanisms and power the outflow responsible for
the GRB. We consider both of these energy extraction mecha-
nisms in turn.
The neutrino luminosity of the disk just after the neutron-
ization transition is of the order of Lν ∼ 10
52 erg/s and con-
sists of neutrinos and antineutrinos of all flavors. The fraction
of these neutrinos that annihilate and power the GRB flow de-
pends on their spatial emission distribution which, in turn, de-
pends critically on the disk microphysics. For Ṁ ∼ 0.1M⊙/s,
this fraction is of the order of ∼ 10^−3 (Liu et al. 2007), pow-
ering an outflow of Lνν̄ ∼ 10^49 erg/s; most likely too weak to
explain a cosmological GRB. The efficiency of the neutrino-
antineutrino annihilation mechanism can be much higher for
accretion rates Ṁ >∼ 1 M⊙/s (e.g., Liu et al. 2007; Birkl et al.
2007) which are not considered here.
The second possibility is that energy is extracted by strong
magnetic fields that thread the inner part of the disk (Blandford
& Payne 1982) or the rotating black hole (Blandford &
Znajek 1977) launching a Poynting-flux dominated flow. The
Blandford-Znajek power output can be estimated to be (e.g.
Popham et al. 1999)
LBJ ≈ 10^50 a² B15² M3² erg/s,   (1)

where B = 10^15 B15 Gauss and M = 3 M3 M⊙. Taking into
account that magnetic fields of similar strength are expected
to thread the inner parts of the disk, the Poynting luminosity
output from the disk is rather higher than LBJ because of the
larger effective surface of the disk (Livio et al. 1999). In con-
clusion, magnetic field strengths in the inner disk of the order
of B ∼ 10^15 Gauss are likely sufficient to power a GRB via MHD
mechanisms of energy extraction.
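As a sketch of the numbers implied by this scaling (assuming LBJ ∝ a² B² M², normalized to 10^50 erg/s at B = 10^15 Gauss and M = 3 M⊙ as in Eq. (1); the function name is ours):

```python
def l_bz(a, b_gauss, m_solar):
    """Blandford-Znajek power: L_BJ ~ 1e50 a^2 (B/1e15 G)^2 (M/3 Msun)^2 erg/s."""
    return 1.0e50 * a**2 * (b_gauss / 1.0e15)**2 * (m_solar / 3.0)**2

# fiducial inner-disk values used in the text (spin a = 0.95)
l_fiducial = l_bz(a=0.95, b_gauss=1.0e15, m_solar=3.0)
```

For the fiducial field and mass this gives LBJ ≈ 9 × 10^49 erg/s, in the range needed for a cosmological GRB.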
3.1. Luminosity and baryon loading of the GRB flow as
functions of Ṁ
In this section, I estimate the Poynting luminosity of the GRB
flow for different assumptions on the magnetic field-disk cou-
pling. The mass flux in the GRB flow is harder to constrain
since it depends on the disk structure and the magnetic field
geometry on the disk’s surface. During the neutronization tran-
sition, the disk becomes thinner and, hence, more bound grav-
itationally. One can thus expect that a smaller fraction of Ṁ is
injected in the outflow. Here, I make the, rather conservative,
assumption that throughout the transition, the mass flux in the
outflow is a fixed fraction of accretion rate Ṁ.
How is the magnetic field strength related to the properties
of the disk? The magneto-rotational instability (hereafter MRI;
see Balbus & Hawley 1998 for a review) can amplify magnetic
field with energy density up to a fraction ǫ of the pressure in the
disk. This provides an estimate for the magnetic field: B2MRI =
8πǫP. This scaling leads to magnetic field strength of the order
of∼ 1015 Gauss for the fiducial values of the pressure presented
in the previous Sect. and for ǫ ≃ 0.2.
The Poynting luminosity scales as Lp ∝ B²MRI ∝ P ∝ Ṁ^2.3
with the accretion rate during the neutronization transition (see
previous Sect.). This leads to a rather large increase of the lu-
minosity of the GRB flow by a factor of ∼ 7 for a moderate
increase of the accretion rate by a factor of ≃ 2.3. Furthermore,
if we assume that a fixed fraction of the accreting gas is chan-
neled to the outflow, then the baryon loading of the Poynting-
flux dominated flow scales as η ∝ Lp/Ṁ ∝ Ṁ^1.3. This means
that during the transition the outflow becomes “cleaner” de-
creasing its baryon loading by a factor of ∼ 3.
The disk can support large-scale fields more powerful than
those generated by MRI. These fields may have been advected
with the matter during the core collapse of the star (or the bi-
nary coalescence) or are captured by the disk in the form a mag-
netic islands and brought in the inner parts of the disk (Spruit &
Uzdensky 2005). These large scale fields can arguably provide
much more promising conditions to launch a large scale jet.
Stehle & Spruit (2001) have shown that a disk threaded by
a large scale field becomes violently unstable once the radial
tension force of the field contributes substantially against grav-
ity. This instability is suppressed if the radial tension force is a
fraction δ ∼ a few % of the gravitational attraction. Large-scale
magnetic fields with strength B²LS = 8πδ ρ cs vk ∝ (ρP)^1/2 can
be supported over the duration of a GRB for δ ∼ a few %. In
the last expression cs = (P/ρ)^1/2 stands for the sound speed and
vk is the Keplerian velocity at the inner boundary.
The last estimate suggests that large scale field strong
enough to power a GRB can be supported by the disk. The
output Poynting luminosity scales, in this case, as Lp ∝ B²LS ∝
(ρP)^1/2. During the neutronization transition, the Poynting lu-
minosity increases steeply as a function of the accretion rate:
Lp ∝ (ρP)^1/2 ∝ Ṁ^3.2. This translates to a factor of ∼ 15 in-
crease of the luminosity of the jet for a modest increase by
∼ 2.3 of the accretion rate. Assuming that the rate of ejection
of material in the GRB flow is proportional to the mass accre-
tion rate, the baryon loading of the flow is found to decrease by
a factor of ∼ 6 during the transition (since η ∝ Lp/Ṁ ∝ Ṁ^2.2).
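The enhancement factors quoted above are just the Ṁ ratio raised to the derived exponents; a sketch (exponents 2.3/1.3 for the MRI-built field and 3.2/2.2 for the large-scale field, as in the text):

```python
r = 2.3   # increase of the accretion rate across the transition

# MRI-amplified field: L_p ~ Mdot^2.3, eta ~ Mdot^1.3
lp_mri, eta_mri = r**2.3, r**1.3

# large-scale field:   L_p ~ Mdot^3.2, eta ~ Mdot^2.2
lp_ls, eta_ls = r**3.2, r**2.2
```

This reproduces the factors of ~7 (luminosity) and ~3 (baryon loading) for the MRI field, and ~15 and ~6 for the large-scale field.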
Before and after the transition the disk is advection dom-
inated and neutrino cooled respectively. When the disk is in
either of these regimes the disk density and pressure scale
roughly linearly with the accretion rate (at least for accretion
rates fairly close to the neutronization transition; see previous
Sect.), leading to Lp ∝ Ṁ and η ∼ constant. The Poynting lu-
minosity and the baryon loading of the GRB flow around the
neutronization transition are summarized by Fig. 1.
Although the Poynting flux output depends on assumptions
on the scaling of the magnetic field with the disk properties,
[Figure: curves of Poynting luminosity Lp and baryon loading η versus accretion rate (0.1 − 1 M⊙/s), with regions I, II, III marked.]
Fig. 1. Poynting luminosity and baryon loading (both in arbi-
trary units) of the GRB flow around the neutronization transi-
tion of the inner disk. In regions marked with I and III the inner
disk is advection and neutrino cooling dominated respectively.
In region II, the neutronization transition takes place. During
the transition, the Poynting luminosity increases steeply with
the accretion rate while the baryon loading of the flow is re-
duced (i.e. η increases).
the neutronization transition generally leads to steep increase
of the Poynting luminosity as function of the accretion rate and
to a “cleaner” (i.e. less baryon loaded) flow. Observational im-
plications of the transition are discussed in the next section.
4. Connection to observations
The mechanism I discuss here operates for accretion rates
around the neutronization transition of the inner disk and pro-
vides the means by which modest variations in the accretion
rate give magnified variability in the Poynting flux output and
baryon loading of the GRB flow. Since the transition takes
place at Ṁ ∼ 0.1 M⊙/s which is close to the accretion rates ex-
pected for the collapsar model (MacFadyen & Woosley 1999),
it is particularly relevant for that model. To connect the flow
variability to the observed properties of the prompt emission,
one has to assume a model for the prompt emission. Here we
discuss internal shock and photospheric models.
Episodes of rapid increase of the luminosity of the flow can
be viewed as the ejection of distinct shells of material. These
shells can collide with each other further out in the flow lead-
ing to internal shocks that power the prompt GRB emission
(Rees & Mészáros 1994). For the internal shocks to be efficient
in dissipating energy, there must be a substantial variation of
the baryon loading among shells. This may be achieved, in the
context of the model presented here, if the accretion rate, at
which the neutronization transition takes place, changes during
the evolution of the burst. The accretion rate at the transition
decreases, for example, with increasing spin of the black hole
(CB07). Since the black hole is expected to be substantially
spun up because of accretion of matter during the evolution of
the burst (e.g. MacFadyen & Woosley 1999), there is the possibility, though speculative at this level, that this leads to ejection
of shells with varying baryon loading.
4.1. Photospheric emission
Photospheric models for the origin of the prompt emission have
been recently explored for both fireballs (Mészáros & Rees
2000; Ryde 2004; Rees & Mészáros 2005; Pe’er et al. 2006)
and Poynting-flux dominated flows (Giannios 2006; Giannios
& Spruit 2007; hereafter GS07). Here, I focus mainly on the
photosphere of a Poynting-flux dominated flow since it is di-
rectly applicable to this work.
In the photospheric model, the observed variability of the
prompt emission is a direct manifestation of the central engine
activity. Modulations of the luminosity and baryon loading of
the GRB flow result in modulations of the location of the pho-
tosphere of the flow and of the strength and the spectrum of the
photospheric emission (Giannios 2006; GS07). In particular, in
GS07 it is demonstrated that if the increase of the luminosity of
the flow is accompanied by a decrease of the baryon loading such
that¹ η ∝ L^{0.6}, the photospheric model can explain the observed
narrowing of the width of the GRB pulses with increasing pho-
ton energy reported by Fenimore et al. (1995). The same η-L
scaling also leads to the photospheric luminosity scaling with
the peak of the ν · f(ν) spectrum as Lph ∝ E_p^2 during the burst
evolution, in agreement with observations (Liang et al. 2004).
The simple model for the connection of the GRB flow to
the properties of the central engine presented here predicts that
L ∝ Ṁ^{2.3...3.2} and η ∝ Ṁ^{1.3...2.2} during the neutronization tran-
sition. The range in the exponents comes from the different as-
sumptions on the disk-magnetic field connection (see Sect. 3).
This translates to η ∝ L^{0.6...0.7}, which is very close to that assumed
by GS07 to explain the observed spectral and temporal proper-
ties of the GRB light curves.
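The η–L relation follows from eliminating Ṁ between L ∝ Ṁ^a and η ∝ Ṁ^(a−1); a one-line check (the exponent values are those quoted in Sect. 3):

```python
# eta ∝ L**((a-1)/a) when L ∝ Mdot**a and eta ∝ Lp/Mdot ∝ Mdot**(a-1).
def eta_l_exponent(a):
    return (a - 1.0) / a

print(eta_l_exponent(2.3), eta_l_exponent(3.2))  # ≈ 0.57 and ≈ 0.69
```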
Although the launched flow is Poynting-flux dominated, it
is conceivable that it undergoes an initial phase of rapid mag-
netic dissipation resulting in a fireball. The photospheric lu-
minosity and the observed temperature of fireballs scale as
Lph ∝ η^{8/3} L^{1/3} and Tobs ∝ η^{8/3} L^{−5/12} respectively (Mészáros &
Rees 2000). Using the scaling η ∝ L^{0.6...0.7} found in this work
and identifying the peak of the photospheric component with
the peak of the emission Ep, one finds that Lph ∝ L^{1.9...2.2} and
Ep ∝ L^{1.2...1.4}. The last scalings suggest that the photospheric
emission from a fireball can further enhance variations in the
gamma-ray luminosity while Lph and Ep follow the Liang et al.
relation. Still, dissipative processes have to be considered in the
fireball so as to explain the observed non-thermal spectra.
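The exponent ranges above can be verified by propagating η ∝ L^s through the Mészáros & Rees (2000) photospheric scalings; a short sketch (function name ours):

```python
# Lph ∝ eta**(8/3) * L**(1/3) and Tobs ∝ eta**(8/3) * L**(-5/12);
# with eta ∝ L**s both become pure power laws in L.
def photosphere_indices(s):
    lph_index = (8.0 / 3.0) * s + 1.0 / 3.0
    ep_index = (8.0 / 3.0) * s - 5.0 / 12.0  # identifying Ep with Tobs
    return lph_index, ep_index

for s in (0.6, 0.7):
    print(s, photosphere_indices(s))
# i.e. Lph ∝ L^1.9...2.2 and Ep ∝ L^1.2...1.4 for s = 0.6...0.7
```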
5. Conclusions
In this work, a mechanism is proposed by which moderate
changes of the accretion rate at around Ṁ ∼ 0.1 M⊙/s to a
few solar mass black hole can give powerful energy release
episodes to the GRB flow. This mechanism is directly applicable
to the collapsar scenario for GRBs (Woosley 1993; MacFadyen
& Woosley 1999) and can explain how moderate changes in the
accretion rate result in extremely variable GRB light curves.

¹ In GS07, the parameterization of the baryon loading of the flow is
done by the magnetization σ0 that is related to η through η = σ
This mechanism operates when the inner part of the ac-
cretion disk makes the transition from advection dominance to
neutrino cooling. This rather sharp transition is accompanied
by a steep increase of the density and the pressure in the disk
(CB07; KM07). This leads to a substantial increase of the mag-
netic field strength in the vicinity of the black hole and conse-
quently boosts the Poynting luminosity of the GRB flow by a
factor of ∼ 7 − 15. At the same time, assuming that the ejec-
tion rate of material scales linearly with the accretion rate, the
baryon loading of the flow decreases by a factor ∼ 3 − 6. This
results in a luminosity-baryon loading anticorrelation.
The changes of the characteristics of the GRB flow can be
directly observed as modulations of the photospheric emission
giving birth to pulses with spectral and temporal properties
similar to the observed ones (GS07). The photospheric inter-
pretation of the prompt emission is in agreement with the ob-
served narrowing of the pulses with increasing photon energy
(Fenimore et al. 1995) and the luminosity-peak energy corre-
lation during the evolution of GRBs (Liang et al 2004). The
Amati relation (Amati et al. 2002) is possibly a result of the fact
that more luminous bursts are on average less baryon loaded.
Acknowledgements. I wish to thank H. Spruit for illuminating discus-
sions on the disk-magnetic-field coupling.
References
Amati, L., et al. 2002, A&A, 390, 81
Balbus, S. A., & Hawley, J. F. 1998, Rev. Mod. Physics, 70, 1
Beloborodov, A. M. 2003, ApJ, 588, 931
Birkl, R., Aloy, M. A., Janka, H.-T., Müller, E. 2007, A&A, 463, 51
Blandford, R. D., & Payne, D. G. 1982, MNRAS, 199, 883
Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433
Chen, W.-X., & Beloborodov, A. M. 2007, ApJ, 657, 383 (CB07)
Di Matteo, T., Perna, R., & Narayan, R. 2002, ApJ, 579, 706
Fenimore, E. E., in ’t Zand, J. J. M., Norris, J. P., Bonnell, J. T., &
Nemiroff, R. J. 1995, ApJ, 448, L101
Giannios, D. 2006, A&A, 457, 763
Giannios, D., & Spruit, H. C. 2007, A&A, in press,
arXiv:astro-ph/0611385 (GS07)
Kawanaka, N., & Mineshige, S. 2007, ApJ, in press,
arXiv:astro-ph/0702630 (KM07)
Kohri, K., & Mineshige, S. 2002, ApJ, 577, 311
Kohri, K., Narayan, R., & Piran, T. 2005, ApJ, 629, 341
Liang, E. W., Dai, Z. G., & Wu, X. F. 2004, ApJ, 606, L29
Liu, T., Gu, W.-M., Xue, L., & Lu, J.-F. 2007, ApJ, in press,
arXiv:astro-ph/0702186
Livio, M., Ogilvie, G. I., & Pringle, J. E. 1999, ApJ, 512, 100
Mészáros, P., & Rees, M. J. 2000, ApJ, 530, 292
Pe’er, A., Mészáros, P., & Rees, M. J. 2006, ApJ, 642, 995
Popham, R., Woosley, S. E., & Fryer, C. 1999, ApJ, 518, 356
Pruet, J., Woosley, S. E., & Hoffman, R. D. 2003, ApJ, 586, 1254
Rees, M. J., & Mészáros, P. 1994, ApJ, 430, L93
Rees, M. J., & Mészáros, P. 2005, ApJ, 628, 847
Ruffert, M., Janka, H.-T., Takahashi, K., & Schaefer, G. 1997, A&A,
319, 122
Ryde, F. 2004, ApJ, 614, 827
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Spruit, H. C., & Uzdensky, D. A. 2005, ApJ, 629, 960
Stehle, R., & Spruit, H. C. 2001, MNRAS, 323, 587
Woosley, S. E. 1993, ApJ, 405, 273
|
0704.1660 | The VVDS type–1 AGN sample: The faint end of the luminosity function | Astronomy & Astrophysics manuscript no. ABongiorno © ESO 2018
October 27, 2018
The VVDS type–1 AGN sample: The faint end of the luminosity
function
A. Bongiorno1, G. Zamorani2, I. Gavignaud3, B. Marano1, S. Paltani4,5, G. Mathez6, J.P. Picat6, M. Cirasuolo7, F.
Lamareille2,6, D. Bottini8, B. Garilli8, V. Le Brun9, O. Le Fèvre9, D. Maccagni8, R. Scaramella10,11, M. Scodeggio8, L.
Tresse9, G. Vettolani10, A. Zanichelli10, C. Adami9, S. Arnouts9, S. Bardelli2, M. Bolzonella2, A. Cappi2, S.
Charlot12,13, P. Ciliegi2, T. Contini6, S. Foucaud14, P. Franzetti8, L. Guzzo15, O. Ilbert16, A. Iovino15, H.J.
McCracken13,17, C. Marinoni18, A. Mazure9, B. Meneux8,15, R. Merighi2, R. Pellò6, A. Pollo9,19, L. Pozzetti2, M.
Radovich20, E. Zucca2, E. Hatziminaoglou21 , M. Polletta22, M. Bondi10, J. Brinchmann23, O. Cucciati15,24, S. de la
Torre9, L. Gregorini25, Y. Mellier13,17, P. Merluzzi20, S. Temporin15, D. Vergani8, and C.J. Walcher9
(Affiliations can be found after the references)
Received; accepted
Abstract
In a previous paper (Gavignaud et al. 2006), we presented the type–1 Active Galactic Nuclei (AGN) sample obtained from the first epoch data
of the VIMOS-VLT Deep Survey (VVDS). The sample consists of 130 faint, broad-line AGN with redshift up to z = 5 and 17.5 < IAB < 24.0,
selected on the basis of their spectra.
In this paper we present the measurement of the Optical Luminosity Function up to z = 3.6 derived from this sample, and we compare our results
with previous results from brighter samples both at low and at high redshift.
Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint part of the luminosity function up to high
redshift.
By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b), we find that the
model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity dependent density
evolution (LDDE) model, similar to those derived from the major X-surveys. Such a parameterization allows the redshift of the AGN space density
peak to change as a function of luminosity and explains the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find,
for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift
going to lower luminosity objects. This result, already found in a number of X-ray selected samples of AGN, is consistent with a scenario of “AGN
cosmic downsizing”, in which the density of more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of
the Universe, than that of low luminosity ones.
Key words. surveys-galaxies: high-redshift - AGN: luminosity function
1. Introduction
Active Galactic Nuclei (AGN) are relatively rare objects that ex-
hibit some of the most extreme physical conditions and activity
known in the universe.
A useful way to statistically describe the AGN activity along
the cosmic time is through the study of their luminosity func-
tion, whose shape, normalization and evolution can be used to
derive constraints on models of cosmological evolution of black
holes (BH). At z ≲ 2.5, the luminosity function of optically se-
lected type–1 AGN has been well studied for many years
(Boyle et al., 1988; Hewett et al., 1991; Pei, 1995; Boyle et al.,
2000; Croom et al., 2004). It is usually described as a double
power law, characterized by the evolutionary parameters L∗(z)
and Φ∗(z), which allow one to distinguish between simple evolution-
ary models such as Pure Luminosity Evolution (PLE) and Pure
Density Evolution (PDE). Although the PLE and PDE mod-
els should be mainly considered as mathematical descriptions
of the evolution of the luminosity function, two different phys-
ical interpretations can be associated to them: either a small
Send offprint requests to: Angela Bongiorno, e-mail:
[email protected]
fraction of bright galaxies harbor AGN, and the luminosities of
these sources change systematically with time (‘luminosity evo-
lution’), or all bright galaxies harbor AGN, but at any given time
most of them are in ‘inactive’ states. In the latter case, the frac-
tion of galaxies with AGN in an ‘active’ state changes with time
(‘density evolution’). Up to now, the PLE model is the preferred
description for the evolution of optically selected QSOs, at least
at low redshift (z < 2).
Works on high redshift type–1 AGN samples (Warren et al.,
1994; Kennefick et al., 1995; Schmidt et al., 1995; Fan et al.,
2001; Wolf et al., 2003; Hunt et al., 2004) have shown that the
number density of QSOs declines rapidly from z ∼ 3 to z ∼ 5.
Since the size of complete and well studied samples of QSOs
at high redshift is still relatively small, the rate of this decline
and the shape of the high redshift luminosity function is not yet
as well constrained as at low redshift. For example, Fan et al.
(2001), studying a sample of 39 luminous high redshift QSOs at
3.6 < z < 5.0, selected from the commissioning data of the Sloan
Digital Sky Survey (SDSS), found that the slope of the bright end
of the QSO luminosity function evolves with redshift, becoming
flatter at high redshift, and that the QSO evolution from z = 2
to z = 5 cannot be described as a pure luminosity evolution. A
similar result on the flattening at high redshift of the slope of
the luminosity function for luminous QSOs has been recently
obtained by Richards et al. (2006b) from the analysis of a much
larger sample of SDSS QSOs (but see Fontanot et al. (2007) for
different conclusions drawn on the basis of combined analysis of
GOODS and SDSS QSOs).
At the same time, a growing number of observations at differ-
ent redshifts, in radio, optical and soft and hard X-ray bands, are
suggesting that also the faint end slope evolves, becoming flat-
ter at high redshift (Page et al., 1997; Miyaji et al., 2000, 2001;
La Franca et al., 2002; Cowie et al., 2003; Ueda et al., 2003;
Fiore et al., 2003; Hunt et al., 2004; Cirasuolo et al., 2005;
Hasinger et al., 2005). This evolution, now dubbed “AGN cos-
mic downsizing”, is described either as a direct evolution in the
faint end slope or as “luminosity dependent density evolution”
(LDDE), and it has been the subject of many speculations since
it implies that the space density of low luminosity AGNs peaks
at lower redshift than that of bright ones.
It has been observed that, in addition to the well known
local scale relations between the black hole (BH) masses and
the properties of their host galaxies (Kormendy & Richstone,
1995; Magorrian et al., 1998; Ferrarese & Merritt, 2000), also
the galaxy spheroid population follows a similar pattern of “cos-
mic downsizing” (Cimatti et al., 2006). Various models have
been proposed to explain this common evolutionary trend in
AGN and spheroid galaxies. The majority of them propose that
the feedback from the black hole growth plays a key role in
determining the BH-host galaxy relations (Silk & Rees, 1998;
Di Matteo et al., 2005) and the co-evolution of black holes
and their host galaxies. Indeed, AGN feedback can shut down
the growth of the most massive systems steepening the bright
end slope (Scannapieco & Oh, 2004), while the feedback-driven
QSO decay determines the shape of the faint end of the QSO LF
(Hopkins et al., 2006).
This evolutionary trend has not been clearly seen yet with
optically selected type–1 AGN samples. By combining results
from low and high redshifts, it is clear from the studies of op-
tically selected samples that the cosmic QSO evolution shows a
strong increase of the activity from z ∼ 0 out to z ∼ 2, reaches a
maximum around z ≃ 2 − 3 and then declines, but the shape of
the turnover and the redshift evolution of the peak in activity as
a function of luminosity are still unclear.
Most of the optically selected type–1 AGN samples stud-
ied so far are obtained through various color selections of
candidates, followed by spectroscopic confirmation (e.g. 2dF,
Croom et al. 2004 and SDSS, Richards et al. 2002), or grism and
slitless spectroscopic surveys. These samples are expected to be
highly complete, at least for luminous type–1 AGN, at either
z ≤ 2.2 or z ≥ 3.6, where type–1 AGN show conspicuous colors
in broad band color searches, but less complete in the redshift
range 2.2 ≤ z ≤ 3.6 (Richards et al. 2002).
An improvement in the multi-color selection in optical bands
is through the simultaneous use of many broad and medium band
filters as in the COMBO-17 survey (Wolf et al., 2003). This sur-
vey is the only optical survey so far which, in addition to cov-
ering a redshift range large enough to see the peak of AGN ac-
tivity, is also deep enough to sample up to high redshift type–1
AGN with luminosity below the break in the luminosity func-
tion. However, only photometric redshifts are available for this
sample and, because of their selection criteria, it is incomplete
for objects with a small ratio between the nuclear flux and the
total host galaxy flux and for AGN with anomalous colors, such
as, for example, the broad absorption line (BAL) QSOs , which
have on average redder colors and account for ∼ 10 - 15 % of the
overall AGN population (Hewett & Foltz, 2003).
The VIMOS-VLT Deep Survey (Le Fèvre et al., 2005) is a
spectroscopic survey in which the target selection is purely flux
limited (in the I-band), with no additional selection criterion.
This allows the selection of a spectroscopic type–1 AGN sample
free of color and/or morphological biases in the redshift range
z > 1. An obvious advantage of such a selection is the possi-
bility to test the completeness of the most current surveys (see
Gavignaud et al., 2006, Paper I), based on morphological and/or
color pre-selection, and to study the evolution of type–1 AGN
activity in a large redshift range.
In this paper we use the type-1 AGN sample selected from
the VVDS to derive the luminosity function in the redshift range
1 < z < 3.6. The VVDS type–1 AGN sample is more than one
magnitude deeper than any previous optically selected sample
and thus allows us to explore the faint part of the luminosity func-
tion. Moreover, by combining this LF with measurement of the
LF in much larger, but very shallow, surveys, we find an analyt-
ical form to describe, in a large luminosity range, the evolution
of type-1 AGN in the redshift range 0< z <4. The paper is or-
ganized as follows: in Section 2 and 3 we describe the sample
and its color properties. In Section 4 we present the method used
to derive the luminosity function, while in Section 5 we com-
pare it with previous works both at low and high redshifts. The
bolometric LF and the comparison with the results derived from
samples selected in different bands (from X-ray to IR) is then
presented in Section 6. The derived LF fitting models are pre-
sented in Section 7 while the AGN activity as a function of red-
shift is shown in Section 8. Finally in section 9 we summarize
our results. Throughout this paper, unless stated otherwise, we
assume a cosmology with Ωm = 0.3, ΩΛ = 0.7 and H0 = 70 km
s−1 Mpc−1.
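All absolute magnitudes in what follows depend on luminosity distances in this cosmology; a minimal flat-ΛCDM sketch (simple trapezoidal quadrature; k-corrections, which the survey handles via the observed SEDs, are omitted):

```python
import math

def lum_dist_mpc(z, h0=70.0, om=0.3, ol=0.7, n=10000):
    """Luminosity distance in Mpc for a flat universe:
    DL = (1 + z) * (c / H0) * integral of dz' / E(z')."""
    c_km_s = 299792.458
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(om * (1.0 + zi) ** 3 + ol)  # E(z) for flat LCDM
        integral += (0.5 if i in (0, n) else 1.0) * dz / e
    return (1.0 + z) * (c_km_s / h0) * integral

def abs_mag(m_apparent, z):
    """Distance-modulus-only absolute magnitude: M = m - 5 log10(DL / 10 pc)."""
    return m_apparent - 5.0 * math.log10(lum_dist_mpc(z) * 1.0e6 / 10.0)

print(round(lum_dist_mpc(1.0)))  # ≈ 6.6 Gpc at z = 1
```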
2. The sample
Our AGN sample is extracted from the first epoch data of the
VIMOS-VLT Deep Survey, performed in 2002 (Le Fèvre et al.,
2005).
The VVDS is a spectroscopic survey designed to measure
about 150,000 redshifts of galaxies, in the redshift range 0 < z <
5, selected, nearly randomly, from an imaging survey (which
consists of observations in U, B, V, R and I bands and, in a
small area, also K-band) designed for this purpose. Full de-
tails about VIMOS photometry can be found in Le Fèvre et al.
(2004a), McCracken et al. (2003), Radovich et al. (2004) for
the U-band and Iovino et al. (2005) for the K-band. In this
work we will as well use the Galex UV-catalog (Arnouts et al.,
2005; Schiminovich et al., 2005), the u∗,g′,r′,i′,z′ photometry
obtained in the frame of the Canada-France-Hawaii Legacy
Survey (CFHTLS)1, UKIDSS (Lawrence et al., 2006), and the
Spitzer Wide-area InfraRed Extragalactic survey (SWIRE)
(Lonsdale et al., 2003, 2004). The spectroscopic VVDS survey
consists of a deep and a wide survey and it is based on a sim-
ple selection function. The sample is selected only on the basis
of the I band magnitude: 17.5 < IAB < 22.5 for the wide and
17.5 < IAB < 24.0 for the deep sample. For a detailed descrip-
tion of the spectroscopic survey strategy and the first epoch data
see Le Fèvre et al. (2005).
Our sample consists of 130 AGN with 0 < z < 5, selected
in 3 VVDS fields (0226-04, 1003+01 and 2217-00) and in the
Chandra Deep Field South (CDFS, Le Fèvre et al., 2004b). All
of them are selected as AGN only on the basis of their spectra,
1 www.cfht.hawaii.edu/Science/CFHLS
Figure 1. Distribution of absolute magnitudes and redshifts of
the total AGN sample. Open circles are the objects with am-
biguous redshift, shown at all their possible z values. The dotted
and dashed lines represent the magnitude limits of the samples:
IAB < 22.5 for the wide sample and IAB < 24.0 for the deep
sample.
irrespective of their morphological or color properties. In partic-
ular, we selected them on the basis of the presence of at least one
broad emission line. We discovered 74 of them in the deep fields
(62 in the 02h field and 12 in the CDFS) and 56 in the wide fields
(18 in the 10h field and 38 in the 22h field). This represents an
unprecedented complete sample of faint AGN, free of morpho-
logical or color selection bias. The spectroscopic area covered
by the First Epoch Data is 0.62 deg2 in the deep fields (02h field
and CDFS) and 1.1 deg2 in the wide fields (10h and 22h fields).
To each object we have assigned a value for the spectro-
scopic redshift and a spectroscopic quality flag which quantifies
our confidence level in that given redshift. As of today, we have
115 AGN with secure redshift, and 15 AGN with two or more
possible values for the redshift. For these objects, we have two
or more possible redshifts because only one broad emission line,
with no other narrow lines and/or additional features, is detected
in the spectral wavelength range adopted in the VVDS (5500 -
9500 Å) (see Figure 1 in Paper I). For all of them, however, a
best solution is proposed. In the original VVDS AGN sample,
the number of AGN with this redshift degeneracy was 42. To
solve this problem, we have first looked for the objects already
observed in other spectroscopic surveys in the same areas, solv-
ing the redshift for 3 of them. For the remaining objects, we
performed a spectroscopic follow-up with FORS1 on the VLT
Unit Telescope 2 (UT2). With these additional observations we
found a secure redshift for 24 of our AGN with ambiguous red-
shift determination and, moreover, we found that our proposed
best solution was the correct one in ∼ 80% of the cases. On the
basis of this result, we decided to use, in the following analysis,
our best estimate of the redshift for the small remaining fraction
of AGN with ambiguous redshift determination (15 AGN).
In Figure 1 we show the absolute B-magnitude and the
redshift distributions of the sample. As shown in this Figure,
our sample spans a large range of luminosities and consists of
both Seyfert galaxies (MB >-23; ∼59%) and QSOs (MB <-23;
∼41%). A more detailed and exhaustive description of the prop-
Figure 2. Composite spectra derived for our AGN with se-
cure redshift in the 02h field, divided in a “bright” (19 objects
at M1450 <-22.15, dotted curve) and a “faint” (31 objects at
M1450 >-22.15, dashed curve) sample. We consider here only
AGN with z > 1 (i.e. the AGN used to compute the lumi-
nosity function). The SDSS composite spectrum is shown with
a solid line for comparison.
erties of the AGN sample is given in Paper I (Gavignaud et al.,
2006) and the complete list of BLAGN in our wide and deep
samples is available as an electronic Table in Appendix of
Gavignaud et al. (2006).
3. Colors of BLAGNs
As already discussed in Paper I, the VVDS AGN sample shows,
on average, redder colors than those expected by comparing
them, for example, with the color track derived from the SDSS
composite spectrum (Vanden Berk et al., 2001). In Paper I we
proposed three possible explanations: (a) the contamination of
the host galaxy is reddening the observed colors of faint AGN;
(b) BLAGN are intrinsically redder when they are faint; (c) the
reddest colors are due to dust extinction. On the basis of the sta-
tistical properties of the sample, we concluded that hypothesis
(a) was likely to be the more correct, as expected from the faint
absolute magnitudes sampled by our survey, even if hypotheses
(b) and (c) could not be ruled out.
In Figure 2 we show the composite spectra derived from the
sample of AGN with secure redshift in the 02h field, divided
in a “bright” and a “faint” sample at the absolute magnitude
M1450 = −22.15. We consider here only AGN with z > 1, which
correspond to the AGN used in Section 4 to compute the lumi-
nosity function. The choice of the reference wavelength for the
absolute magnitude, λ = 1450 Å, is motivated by our photo-
metric coverage. In fact, for most of the objects it is possible
to interpolate M1450 directly from the observed magnitudes. In
the same plot we show also the SDSS composite spectrum (solid
curve) for comparison. Even though the “bright” VVDS compos-
ite (dotted curve) is itself somewhat redder than the SDSS one, it is
clear from this plot that the main differences occur for the faintest
objects (dashed curve).
A similar result is shown for the same sample in the upper
panel of Figure 3, where we plot the spectral index α as a func-
tion of the AGN luminosity. The spectral index is derived here
by fitting a simple power law f (ν) = ν−α to our photometric data
points. This analysis has been performed only on the 02h deep
Figure 3. Upper Panel: Distribution of the spectral index α as
a function of M1450 for the same sample of AGN as in Figure
2. The spectral index is derived here by fitting a simple power
law f (ν) = ν−α to our photometric data points. Asterisks are
AGN morphologically classified as extended and the grey point
is a BAL AGN. Bottom Panels: Distribution of the spectral in-
dex α for the same sample of AGN. All the AGN in this sample
are shown in the first of the three panels, while the AGN in the
“bright” and “faint” sub–samples are shown in the second and
third panel, respectively. The dotted curve in the second panel
corresponds to the gaussian fit of the bright sub–sample and it is
reported also in the third panel to highlight the differences in the
α distributions of the two sub-samples.
sample, since for the wide sample we do not have enough photo-
metric coverage to reliably derive the spectral index. Most of the
AGN with α > 1 are fainter than M1450 = −22.15, showing that,
indeed, the faintest objects have on average redder colors than
the brightest ones. The outlier (the brightest object with large α,
i.e. very red colors, in the upper right corner of the plot) is a BAL AGN.
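The power-law fit described above reduces to a straight-line fit in log–log space; a minimal version (synthetic data, not the survey photometry):

```python
import numpy as np

def spectral_index(nu, flux):
    """alpha for f(nu) ∝ nu**-alpha: minus the slope of log10(f) vs log10(nu)."""
    slope, _intercept = np.polyfit(np.log10(nu), np.log10(flux), 1)
    return -slope

# Synthetic check with the sample's mean slope, alpha = 0.94.
nu = np.logspace(14.5, 15.3, 6)  # a few optical-band frequencies in Hz
print(round(spectral_index(nu, nu ** -0.94), 2))  # 0.94
```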
The three bottom panels of Figure 3 show the histograms of
the resulting power law slopes for the same AGN sample. The
total sample is plotted in the first panel, while the bright and the
faint sub-samples are plotted in the second and third panels, re-
spectively. A Gaussian curve with < α >= 0.94 and dispersion
σ = 0.38 is a good representation for the distribution of about
80% (40/50) of the objects in the first panel. In addition, there
is a significant tail (∼ 20%) of redder AGN with slopes in the
range from 1.8 up to ∼ 3.0. The average slope of the total sample
(∼ 0.94) is redder than the fit to the SDSS composite (∼ 0.44).
Moreover, the distribution of α is shifted toward much larger val-
ues (redder continua) than the similar distribution in the SDSS
sample (Richards et al., 2003). For example, only 6% of the ob-
jects in the SDSS sample have α > 1.0, while this percentage is
57% in our sample.
The differences with respect to the SDSS sample can be
partly due to the differences in absolute magnitude of the two
samples (Mi <-22.0 for the SDSS sample (Schneider et al.,
2003) and MB <-20.0 for the VVDS sample). In fact, if we con-
sider the VVDS “bright” sub-sample, the average spectral index
< α > becomes ∼ 0.71, which is closer to the SDSS value (even
if it is still somewhat redder), and only two objects (∼8% of the
sample) show values not consistent with a gaussian distribution
with σ ∼0.32. Moreover, only 30% of this sample have α > 1.0.
Most of the bright SDSS AGNs with α > 1 are interpreted by
Richards et al. (2003) to be dust-reddened, although a fraction
of them is likely to be due to intrinsically red AGN (Hall et al.,
At fainter magnitudes one would expect both a larger fraction
of dust-reddened objects (in analogy with indications from
the X-ray data; Brandt et al., 2000; Mushotzky et al., 2000) and
a more significant contamination from the host galaxy.
We have tested these possibilities by examining the global
Spectral Energy Distribution (SED) of each object and fitting
the observed fluxes fobs with a combination of AGN and galaxy
emission, allowing also for the possibility of extinction of the
AGN flux. Thanks to the multi-wavelength coverage in the deep
field in which we have, in addition to VVDS bands, also data
from GALEX, CFHTLS, UKIDSS and SWIRE, we can study
the spectral energy distribution of the single objects. In particu-
lar, we assume that:
fobs = c1 fAGN · 10^{−0.4·Aλ} + c2 fGAL (1)
and, using a library of galaxy and AGN templates, we find the
best parameters c1, c2 and EB−V for each object. We used the
AGN SED derived by Richards et al. (2006a) with an SMC-
like dust-reddening law (Prevot et al., 1984) of the form
Aλ/EB−V = 1.39 λ_µm^{−1.2}, and a library of galaxy templates by
Bruzual & Charlot (2003).
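Since Eq. (1) is linear in c1 and c2 once EB−V is fixed, the decomposition can be sketched as a least-squares solve inside a grid over EB−V (the grid range, template arrays and function name here are illustrative assumptions, not the paper's actual fitting code):

```python
import numpy as np

def fit_sed(lam_um, f_obs, f_agn, f_gal, ebv_grid=np.linspace(0.0, 1.0, 101)):
    """Fit f_obs = c1 * f_agn * 10**(-0.4 * A_lam) + c2 * f_gal (Eq. 1),
    using the SMC-like law A_lam / E(B-V) = 1.39 * lam_um**-1.2."""
    best = None
    for ebv in ebv_grid:
        a_lam = 1.39 * lam_um ** -1.2 * ebv
        # linear least squares for (c1, c2) at fixed E(B-V)
        basis = np.column_stack([f_agn * 10 ** (-0.4 * a_lam), f_gal])
        coef, *_ = np.linalg.lstsq(basis, f_obs, rcond=None)
        chi2 = float(np.sum((basis @ coef - f_obs) ** 2))
        if best is None or chi2 < best[0]:
            best = (chi2, coef[0], coef[1], ebv)
    _, c1, c2, ebv = best
    return c1, c2, ebv
```

A pure-AGN solution corresponds to c2 ≈ 0 and EB−V ≈ 0; in practice negative coefficients would be rejected and the grid matched to the template libraries.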
We found that for ∼37% of the objects, the observed flux
is fitted by a typical AGN power law (pure AGN), while 44%
of the sources require the presence of a contribution from the
host galaxy to reproduce the observed flux. Only 4% of the ob-
jects are fitted by pure AGN + dust, while the remaining 15%
of objects require instead both contributions (host galaxy con-
tamination and presence of dust). As expected, if we restrict the
analysis to the bright sample, the percentage of pure AGN in-
creases to 68%, with the rest of the objects requiring either some
contribution from the host galaxy (∼21%) or the presence of dust
obscuration (∼11%).
In Figure 4 we show 4 examples of the resulting fits: (i) pure
AGN; (ii) dust-extincted AGN; (iii) AGN contaminated by the
host galaxy; (iv) dust-extincted AGN and contaminated by the
host galaxy. The dotted line corresponds to the AGN template
before applying the extinction law, while the solid blue line cor-
responds to the same template, but extincted for the given EB−V ;
the red line corresponds to the galaxy template and, finally, the
black line is the resulting best fit to the SED. The host galaxy
contaminations will be taken into account in the computation of
the AGN absolute magnitude for the luminosity function.
Figure 4. Four examples of different decompositions of the ob-
served SEDs of our objects. Since for λ < 1216 Å, corresponding
to the Lyα line, the observed flux is expected to decrease because
of intervening absorption, all the photometric data at λ < 1216 Å
are excluded from the fitting; the only constraint imposed is
that they lie below the fit. The four fits shown in this Figure
correspond, from top to bottom, to pure AGN, dust-extincted AGN,
AGN and host galaxy, and dust-extincted AGN and host galaxy. The
dotted line corresponds to the AGN template before applying
the extinction law, while the solid blue line corresponds to the
same template, but extincted for the given EB−V . The red line
(third and fourth panel) corresponds to the galaxy template and,
finally, the black line is the resulting best fit to the SED. Arrows
correspond to 5σ upper limits in case of non detection in the IR.
4. Luminosity function
4.1. Definition of the redshift range
For the study of the LF we decided to exclude AGN with z ≤
1.0. This choice is due to the fact that for 0.5 ≤ z ≤ 1.0 the
only visible broad line in the VVDS spectra is Hβ (see Figure
1 of Paper I). This means that all objects with narrow or almost
narrow Hβ and broad Hα (type 1.8, 1.9 AGN; see Osterbrock
1981) would not be included in our sample, because we include
in the AGN sample all the objects with at least one visible broad
line. Since at low luminosities the number of intermediate type
AGN is not negligible, this redshift bin is likely to be under-
populated and the results would not be meaningful.
At z < 0.5 we have in principle fewer problems, because Hα also
falls within the wavelength range of the VVDS spectra. However,
since at this low redshift our sampled volume is relatively small
and QSOs are rare, only 3 objects have secure redshifts in this red-
shift bin in the current sample. For these reasons, our luminosity
function has been computed only for z > 1.0 AGN. As already
mentioned in Section 2, the small fraction of objects with an am-
biguous redshift determination have been included in the compu-
tation of the luminosity function assuming that our best estimate
of their redshift is correct.
The resulting sample used in the computation of the LF consists
thus of 121 objects at 1< z <4.
4.2. Incompleteness function
Our incompleteness function is made up of two terms linked, re-
spectively, to the selection algorithm and to the spectral analysis:
the Target Sampling Rate (TSR) and the Spectroscopic Success
Rate (SSR) defined following Ilbert et al. (2005).
The Target Sampling Rate, namely the ratio between the ob-
served sources and the total number of objects in the photometric
catalog, quantifies the incompleteness due to the adopted spec-
troscopic selection criterion. The TSR is similar in the wide and
deep sample and runs from 20% to 30%.
The Spectroscopic Success Rate is the probability that a spec-
troscopically targeted object is securely identified. It is a complex
function of the BLAGN redshift, apparent magnitude and
intrinsic spectral energy distribution, and it has been estimated
by simulating 20 Vimos pointings, for a total of 2745 spectra.
Full details on TSR and SSR can be found in Paper I
(Gavignaud et al., 2006). We account for them by computing for
each object the associated weights wtsr = 1/TSR and wssr =
1/SSR; the total weighted contribution of each object to the
luminosity function is then the product of the derived weights
(wtsr × wssr).
4.3. Estimate of the absolute magnitude
We derived the absolute magnitude in the reference band from
the apparent magnitude in the observed band as:
M = mobs − 5 log10(dl(z)) − 25 − k    (2)
where M is computed in the band in which we want to compute
the luminosity function, mobs is the apparent magnitude in the
observed band, dl(z) is the luminosity distance expressed in Mpc
and k is the k-correction in the reference band. To ease the
comparison with previous results in the literature, we computed
the luminosity function in the B-band.
To minimize the uncertainties in the adopted k-correction,
mobs for each object should be chosen in the observed band
which is sampling the rest-wavelength closer to the band in
which the luminosity function is computed. For our sample,
which consists only of z > 1 objects, the best bands to use to
compute the B-band absolute magnitudes should be respectively
the I-, J- and K-bands going to higher redshift. However, since
the only observed band available for the entire sample (deep and
wide) is the I-band, we decided to use it for all objects to com-
pute the B-band magnitudes. This means that for z ≳ 2, we
introduce an uncertainty in the absolute magnitudes due to the
k-correction. We computed the absolute magnitude considering
the template derived from the SDSS sample (Vanden Berk et al.,
2001).
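As a sketch, Eq. (2) can be evaluated with the cosmology adopted for Table 1 (Ωm = 0.3, ΩΛ = 0.7, H0 = 70 km s−1 Mpc−1). The luminosity-distance integration below is a minimal stand-in for a proper cosmology library, and the function names are ours:

```python
import numpy as np

# Flat cosmology adopted in the paper's Table 1: Om=0.3, OL=0.7, H0=70.
C_KM_S, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7

def lum_distance_mpc(z, n=2048):
    """Luminosity distance in Mpc (trapezoidal integration of 1/E(z))."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OM * (1.0 + zz) ** 3 + OL)
    # comoving distance d_C = (c/H0) * integral of dz/E(z)
    d_c = (C_KM_S / H0) * np.sum((inv_e[:-1] + inv_e[1:]) * np.diff(zz)) / 2.0
    return (1.0 + z) * d_c

def absolute_mag_b(m_obs, z, k_corr):
    """Eq. (2): M = m_obs - 5 log10(d_L(z)/Mpc) - 25 - k."""
    return m_obs - 5.0 * np.log10(lum_distance_mpc(z)) - 25.0 - k_corr
```

The k-correction term would be taken from the Vanden Berk et al. (2001) composite template as described above.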
As discussed in Section 3, the VVDS AGN sample shows
redder colors than those typical of normal, more luminous AGN
and this can be due to the combination of the host galaxy contri-
bution and the presence of dust. Since, in this redshift range, the
fractional contribution from the host galaxies is expected to be
more significant in the I-band than in bluer bands, the luminos-
ity derived using the I-band observed magnitude could, in some
cases, be somewhat overestimated due to the contribution of the
host galaxy component.
Figure 5. Real (full circles; AGN in the deep sample) and simu-
lated (open triangles; AGN in the wide sample) B-band absolute
magnitude differences as a function of MB(TOT) (upper panel)
and redshift (bottom panel). MB(TOT) is the absolute magnitude
computed considering the total observed flux, while MB(AGN)
is the absolute magnitude computed after subtracting the host-
galaxy contribution.
We estimated the possible impact of this effect on our re-
sults in the following way. From the results of the analysis of the
SED of the single objects in the deep sample (see Section 3) we
computed for each object the difference mI(TOT ) − mI(AGN)
and, consequently, MB(TOT ) − MB(AGN). This could allow us
to derive the LF using directly the derived MB(AGN), resolv-
ing the possible bias introduced by the host galaxy contami-
nation. These differences are shown as full circles in Figure
5 as a function of absolute magnitude (upper panel) and red-
shift (lower panel). For most of the objects the resulting dif-
ferences between the total and the AGN magnitudes are small
(∆M ≤ 0.2). However, for a non-negligible fraction of the faintest
objects (MB ≥ −22.5, z ≤ 2.0) these differences can be signifi-
cant (up to ∼1 mag). For the wide sample, for which the more
restricted photometric coverage does not allow a detailed SED
analysis and decomposition, we used simulated differences to
derive the MB(AGN). These simulated differences have been de-
rived through a Monte Carlo simulation on the basis of the bi-
variate distribution ∆M(M,z) estimated from the objects in the
deep sample. ∆M(M,z) takes into account the probability distri-
bution of ∆M as a function of MB and z, between 0 and the solid
line in Figure 5 derived as the envelope suggested by the black
dots. The resulting simulated differences for the objects in the
wide sample are shown as open triangles in the two panels of
Figure 5.
The AGN magnitudes and the limiting magnitudes of the
samples have been corrected also for galactic extinction on the
basis of the mean extinction values E(B−V) in each field derived
from Schlegel et al. (1998). Only for the 22h field, where the ex-
tinction is highly variable across the field, did we use the extinction
based on the position of individual objects. The resulting
corrections in the I-band magnitude are AI ≃ 0.027 in the 2h and
10h fields and AI = 0.0089 in the CDFS field, while the average
value in the 22h field is AI = 0.065. These corrections have been
applied also to the limiting magnitude of each field.
4.4. The 1/Vmax estimator
We derived the binned representation of the luminosity function
using the usual 1/Vmax estimator (Schmidt, 1968), which gives
the space density contribution of individual objects. The lumi-
nosity function, for each redshift bin (z − ∆z/2 ; z + ∆z/2), is
then computed as:
Φ(M) = (1/∆M) Σᵢ (wtsr,i · wssr,i) / Vmax,i    (3)

where the sum runs over the objects with magnitudes in the range
(M − ∆M/2, M + ∆M/2), Vmax,i is the comoving volume within
which the i-th object would still be included in the sample, and
wtsr,i and wssr,i are respectively the inverse of the TSR and of the
SSR associated to the i-th object. The statistical uncertainty on
Φ(M) is given by
Marshall et al. (1983):
σΦ = (1/∆M) √[ Σᵢ (wtsr,i · wssr,i)² / V²max,i ]    (4)
We combined our samples at different depths using the
method proposed by Avni & Bahcall (1980). In this method it
is assumed that each object, characterized by an observed red-
shift zi and intrinsic luminosity Li, could have been found in any
of the survey areas for which its observed magnitude is brighter
than the corresponding flux limit. This means that, for our total
sample, we consider an area of:
Ωtot(m) = Ωdeep + Ωwide = 1.72 deg² for 17.5 < IAB < 22.5
Ωtot(m) = Ωdeep = 0.62 deg² for 22.5 < IAB < 24.0
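A minimal sketch of the weighted 1/Vmax estimator described above, assuming Vmax,i has already been computed over the combined Avni & Bahcall areas (function and variable names are ours):

```python
import numpy as np

def binned_lf(M, w, vmax, edges):
    """Binned 1/Vmax estimator (Schmidt 1968) with weights w = wtsr * wssr.

    Returns Phi and its Marshall et al. (1983) 1-sigma error per
    magnitude bin defined by `edges`.
    """
    M, w, vmax = map(np.asarray, (M, w, vmax))
    phi, sig = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        dm = hi - lo
        sel = (M >= lo) & (M < hi)
        contrib = w[sel] / vmax[sel]          # each object's space density
        phi.append(np.sum(contrib) / dm)
        sig.append(np.sqrt(np.sum(contrib ** 2)) / dm)
    return np.array(phi), np.array(sig)
```

Here Vmax,i would be evaluated over Ωtot(m), i.e. each object contributes over every survey area whose flux limit it satisfies.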
The resulting luminosity functions in different redshift
ranges are plotted in Figures 6 and 7, where all bins which contain
at least one object are plotted. The LF values, together with their
1σ errors and the numbers of objects in each absolute magnitude
bin are presented in Table 1. The values reported in Table 1 and
plotted in Figures 6 and 7 are not corrected for the host galaxy
contribution. We have in fact a posteriori verified that, even if the
differences between the total absolute magnitudes and the mag-
nitudes corrected for the host galaxy contribution (see Section
4.3) can be significant for a fraction of the faintest objects, the
resulting luminosity functions computed by using these two sets
of absolute magnitudes are not significantly different. For this
reason and for a more direct comparison with previous works,
the results on the luminosity function presented in the next sec-
tion are those obtained using the total magnitudes.
5. Comparison with the results from other optical
surveys
We derived the luminosity function in the redshift range 1.0<
z <3.6 and we compared it with the results from other surveys at
both low and high redshift.
5.1. The low redshift luminosity function
In Figure 6 we present our luminosity function up to z = 2.1.
The Figure shows our LF data points (full circles) derived in two
redshift bins: 1.0 < z < 1.55 and 1.55 < z < 2.1 compared with
the LF fits derived from the 2dF QSO sample by Croom et al.
(2004) and by Boyle et al. (2000), with the COMBO-17 sample
Figure 6. Our rest-frame B-band luminosity function, derived in the redshift bins 1.0 < z < 1.55 and 1.55 < z < 2.1, compared with
the 2dFQRS (Croom et al., 2004; Boyle et al., 2000), COMBO-17 data (Wolf et al., 2003) and with the 2dF-SDSS (2SLAQ) data
(Richards et al., 2005). The curves in the Figure show the PLE fit models derived by these authors. The thick parts of the curves
correspond to the luminosity range covered by the data in each sample, while the thin parts are extrapolations based on the best fit
parameters of the models.
by Wolf et al. (2003), and with the 2dF-SDSS (2SLAQ) LF fit
by Richards et al. (2005). In each panel the curves, computed
for the average z of the redshift range, correspond to a double
power law luminosity function in which the evolution with red-
shift is characterized by a pure luminosity evolution modeled as
M∗b(z) = M∗b(0) − 2.5(k1 z + k2 z²). Moreover, the thick parts of the
curves show the luminosity range covered by the data in each of
the comparison samples, while the thin parts are extrapolations
based on the best fit parameters of the models.
We start by considering the comparison with the 2dF and the
COMBO-17 LF fits. As shown in Figure 6, our bright LF data
points connect rather smoothly to the faint part of the 2dF data.
However, our sample is more than two magnitudes deeper than
the 2dF sample. For this reason, a comparison at low luminosity
is possible only with the extrapolations of the LF fit. At z > 1.55,
while Boyle's model fits our faint LF data points well, Croom's
extrapolation, being very flat, tends to underestimate our low
luminosity data points. At z < 1.55 the comparison is worse: as
in the higher redshift bin, Boyle's model fits our data better than
Croom's but, in this redshift bin, our data points show an excess
at low luminosity also with respect to Boyle's fit. This trend is
similar to what is shown by the comparison with the fit of the
COMBO-17 data which, differently
from the 2dF data, have a low luminosity limit closer to ours: at
z > 1.55 the agreement is very good, but in the first redshift bin
our data show again an excess at low luminosity. This excess is
likely due to the fact that, because of its selection criteria, the
COMBO-17 sample is expected to be significantly incomplete
for objects in which the ratio between the nuclear flux and the
total host galaxy flux is small. Finally, we compare our data with
the 2SLAQ fits derived by Richards et al. (2005). The 2SLAQ
data are derived from a sample of AGN selected from the SDSS,
at 18.0 < g < 21.85 and z < 3, and observed with the 2-degree
field instrument. Similarly to the 2dF sample, also for this sam-
ple the LF is derived only for z < 2.1 and MB < −22.5. The plotted
dot-dashed curve corresponds to a PLE model in which they
fixed most of the parameters of the model at the values found
by Croom et al. (2004), leaving only the faint end slope and
the normalization constant Φ∗ free to vary. In this case, the agreement
with our data points at z < 1.55 is very good also at low lu-
minosity. The faint end slope found in this case is β = −1.45,
which is similar to that found by Boyle et al. (2000) (β = −1.58)
and significantly steeper than that found by Croom et al. (2004)
(β = −1.09). At z > 1.55, the Richards et al. (2005) LF fit tends
to overestimate our data points at the faint end of the LF, which
suggests a flatter slope in this redshift bin.
The first conclusion from this comparison is that, at low red-
shift (i.e. z < 2.1), the data from our sample, which is ∼2 mag
fainter than the previous spectroscopically confirmed samples,
are not well fitted simultaneously in the two analyzed redshift
bins by the PLE models derived from the previous samples.
Qualitatively, the main reason for this appears to be the fact that
our data suggest a change in the faint end slope of the LF, which
appears to flatten with increasing redshift. This trend, already
highlighted by previous X-ray surveys (La Franca et al., 2002;
Ueda et al., 2003; Fiore et al., 2003) suggests that a simple PLE
parameterization may not be a good representation of the evolu-
tion of the AGN luminosity function over a wide range of red-
shift and luminosity. Different model fits will be discussed in
Section 7.
Figure 7. Our luminosity function, at 1450 Å rest-frame, in the
redshift range 2.1<z<3.6, compared with data from other high-
z samples (Hunt et al. (2004) at z = 3; Combo-17 data from
Wolf et al. (2003) at 2.4 < z < 3.6; data from Warren et al.
(1994) at 2.2 < z < 3.5 and the SDSS data from Fan et al.
(2001)). The SDSS data points at 3.6< z <3.9 have been
evolved to z=3 using the luminosity evolution of Pei (1995) as
in Hunt et al. (2004). The curves show some model fits in which
the thick parts of the curves correspond to the luminosity range
covered by the data samples, while the thin parts are model ex-
trapolations. For this plot, an Ωm = 1, ΩΛ = 0, h = 0.5 cosmology
has been assumed for comparison with previous works.
5.2. The high redshift luminosity function
The comparison of our LF data points for 2.1< z <3.6 (full
circles) with the results from other samples in similar redshift
ranges is shown in Figure 7. In this Figure an Ωm = 1, ΩΛ = 0,
h = 0.5 cosmology has been assumed for comparison with pre-
vious works, and the absolute magnitude has been computed at
1450 Å. As before, the thick parts of the curves show the lu-
minosity ranges covered by the various data samples, while the
thin parts are model extrapolations. In terms of number of ob-
jects, depth and covered area, the only sample comparable to
ours is the COMBO-17 sample (Wolf et al., 2003), which, in
this redshift range, consists of 60 AGN candidates over 0.78
square degree. At a similar depth, in terms of absolute mag-
nitude, we show also the data from the sample of Hunt et al.
(2004), which however consists of 11 AGN in the redshift range
⟨z⟩ ± σz = 3.03 ± 0.35 (Steidel et al., 2002). Given the small
number of objects, the corresponding Hunt model fit was de-
rived including also the Warren data points (Warren et al., 1994).
Moreover, they assumed the Pei (1995) luminosity evolution
model, adopting the same values for L∗ and Φ∗ and leaving the
two slopes, both at the faint and at the bright end of the LF, free
to vary. For comparison we show also the original Pei model
fit derived from the empirical luminosity function estimated
by Hartwick & Schade (1990) and Warren et al. (1994). In the
same plot we show also the model fit derived from a sample of
∼100 z ∼ 3 (U-dropout) QSO candidates by Siana et al. (pri-
vate communication; see also Siana et al. 2006). This sample has
been selected by using a simple optical/IR photometric selec-
tion at 19 < r′ < 22, and the model fit has been derived by fixing
the bright end slope at −2.85, as determined by SDSS data
(Richards et al., 2006b).
In general, the comparison of the VVDS data points with
those from the other surveys shown in Figure 7 shows a satis-
factory agreement in the region of overlapping magnitudes. The
model that best reproduces our LF data points at z ∼ 3 is the
Siana model with a faint end slope β = −1.45. It is interesting to
note that, in the faint part of the LF, our data points appear to be
higher with respect to the Hunt et al. (2004) fit and are instead
closer to the extrapolation of the original Pei model fit. This dif-
ference with the Hunt et al. (2004) fit is probably due to the fact
that, having only 11 AGN in their faint sample, their best fit to
the faint-end slope was poorly constrained.
6. The bolometric luminosity function
The comparison between the AGN LFs derived from samples
selected in different bands has been for a long time a critical
point in the studies of the AGN luminosity function. Recently,
Hopkins et al. (2007), combining a large number of LF measure-
ments obtained in different redshift ranges, observed wavelength
bands and luminosity intervals, derived the Bolometric QSO
Luminosity Function in the redshift range z = 0 - 6. For each
observational band, they derived appropriate bolometric correc-
tions, taking into account the variation with luminosity of both
the average absorption properties (e.g. the QSO column density
NH from X-ray data) and the average global spectral energy dis-
tributions. They show that, with these bolometric corrections, it
is possible to find a good agreement between results from all
different sets of data.
We applied to our LF data points the bolometric corrections
given by Eqs. (2) and (4) of Hopkins et al. (2007) for the B-band
and we derived the bolometric LF shown as black dots in Figure
8. The solid line represents the bolometric LF best fit model de-
rived by Hopkins et al. (2007) and the colored data points cor-
respond to different samples: green points are from optical LFs,
blue and red points are from soft-X and hard-X LFs, respec-
tively, and finally the cyan points are from the mid-IR LFs. All
these bolometric LFs data points have been derived following
the same procedure described in Hopkins et al. (2007).
Our data, which sample the faint part of the bolometric lu-
minosity function better than all previous optically selected sam-
ples, are in good agreement with all the other samples, selected
in different bands. Only in the last redshift bin are our data
somewhat higher than those of the samples selected in other
wavelength bands. The agreement remains however good with the
COMBO-17 sample, which is the only optically selected sample
plotted here. This effect can be attributed to the fact that the
conversions used to compute the bolometric LF, having been derived
especially for AGN at low redshift, become less accurate at high redshift.
Our data moreover show good agreement with the
model fit derived by Hopkins et al. (2007). By trying various
analytic fits to the bolometric luminosity function, Hopkins et al.
(2007) concluded that neither pure luminosity nor pure density
evolution represent well all the data. An improved fit can in-
stead be obtained with a luminosity dependent density evolution
model (LDDE) or, even better, with a PLE model in which both
the bright- and the faint-end slopes evolve with redshift. Both
Figure 8. Bolometric luminosity function derived in three
redshift bins from our data (black dots), compared with
Hopkins et al. (2007) best-fit model and the data-sets used in
their work. The central redshift of each bin is indicated in each
panel. Here, we adopted the same color-code as in Hopkins et al.
(2007), but for more clarity we limited the number of samples
presented in the Figure. Red symbols correspond to hard X-ray
surveys (squares: Barger et al. 2005; circles: Ueda et al. 2003).
Blue to soft X-ray surveys (squares: Silverman et al. 2005; cir-
cles: Hasinger et al. 2005). Cyan to infra-red surveys (circles:
Brown et al. 2006; squares: Matute et al. 2006). For the optical
surveys we are showing here, with green circles, the data from
the COMBO-17 survey (Wolf et al., 2003), which is comparable
in depth to our sample.
these models can reproduce the observed flattening with redshift
of the faint end of the luminosity function.
7. Model fitting
In this Section we discuss the results of a number of different fits
to our data as a function of luminosity and redshift. For this pur-
pose, we computed the luminosity function in 5 redshift bins at
1.0 < z < 4.0 where the VVDS AGN sample consists of 121
objects. Since, in this redshift range, our data cover only the
faint part of the luminosity function, these fits have been per-
formed by combining our data with the LF data points from the
SDSS data release 3 (DR3) (Richards et al., 2006b) in the red-
shift range 0 < z < 4. The advantage of using the SDSS sample,
rather than, for example, the 2dF sample, is that the former sam-
ple, because of the way it is selected, probes the luminosity func-
tion to much higher redshifts. The SDSS sample contains more
than 15,000 spectroscopically confirmed AGN selected from an
effective area of 1622 sq.deg. Its limiting magnitude (i < 19.1 for
z < 3.0 and i < 20.2 for z > 3.0) is much brighter than the VVDS
and because of this it does not sample well the AGN in the faint
part of the luminosity function. For this reason, Richards et al.
(2006b) fitted the SDSS data using only a single power law,
which is meant to describe the luminosity function above the
break luminosity. Adding the VVDS data, which instead mainly
sample the faint end of the luminosity function, and analyzing
the two samples together, allows us to cover the entire luminosity
range in the common redshift range (1.0 < z < 4.0), also extend-
ing the analysis at z < 1.0 where only SDSS data are available.
The goodness of fit between the computed LF data points
and the various models is then determined by the χ2 test.
For all the analyzed models we have parameterized the lu-
minosity function as a double power law that, expressed in lumi-
nosity, is given by:
Φ(L, z) = Φ∗L / [ (L/L∗)^−α + (L/L∗)^−β ]    (5)

where Φ∗L is the number of AGN per Mpc³, L∗ is the characteris-
tic luminosity around which the slope of the luminosity function
changes, and α and β are the two power law indices. Equation
5 can be expressed in absolute magnitude² as:

Φ(M, z) = Φ∗M / [ 10^(0.4(α+1)(M−M∗)) + 10^(0.4(β+1)(M−M∗)) ]    (6)
7.1. The PLE and PDE models
The first model that we tested is a Pure Luminosity Evolution
(PLE) with the dependence of the characteristic luminosity de-
scribed by a 2nd-order polynomial in redshift:
M∗(z) = M∗(0) − 2.5(k1 z + k2 z²).    (7)
Following the finding by Richards et al. (2006b) for the SDSS
sample, we have allowed a change (flattening with redshift) of
the bright end slope according to a linear evolution in redshift:
α(z) = α(0) + A z. The resulting best fit parameters are listed
in the first line of Table 2 and the resulting model fit is shown
as a green short dashed line in Figure 9. The bright end slope α
derived by our fit (αVVDS=-3.19 at z=2.45) is consistent with the
one found by Richards et al. (2006b) (αSDSS = -3.1).
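The double power law with PLE evolution described above can be sketched as follows; this is an illustrative implementation and the parameter names are ours:

```python
import numpy as np

def phi_ple(M, z, phi_star, M0, alpha0, beta, k1, k2, A=0.0):
    """Double power law LF with pure luminosity evolution.

    M*(z) = M*(0) - 2.5*(k1*z + k2*z**2)  (Eq. 7), with an optionally
    evolving bright end slope alpha(z) = alpha(0) + A*z.
    Returns Phi(M, z) per Mpc^3 per mag.
    """
    m_star = M0 - 2.5 * (k1 * z + k2 * z ** 2)
    alpha = alpha0 + A * z
    d_m = M - m_star
    return phi_star / (10 ** (0.4 * (alpha + 1) * d_m)
                       + 10 ** (0.4 * (beta + 1) * d_m))
```

At M = M∗(z) the two terms are equal and Φ = Φ∗/2, which provides a quick sanity check of the normalization.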
This model, as shown in Figure 9, while reproducing well the
bright part of the LF in the entire redshift range, does not fit the
faint part of the LF at low redshift (1.0 < z < 1.5). This appears
to be due to the fact that, given the overall best fit normalization,
the derived faint end slope (β =-1.38) is too shallow to reproduce
the VVDS data in this redshift range.
Richards et al. (2005), working on a combined 2dF-SDSS
(2SLAQ) sample of AGN up to z = 2.1, found that, fixing
all of the parameters except β and the normalization to those
of Croom et al. (2004), the resulting faint end slope is β =
−1.45 ± 0.03. This value would describe better our faint LF at
low redshift. This trend suggests a combined luminosity
and density evolution not accounted for by this model.
² Φ∗M = Φ∗L · |ln 10^(−0.4)|
³ In their parameterization, A1 = −0.4(α + 1) = 0.84.
For this reason, we attempted to fit the data also including a term
of density evolution in the form of:
Φ∗M(z) = Φ∗M(0) · 10^(k1D z + k2D z²)    (8)
In this model the evolution of the LF is described by both
a term of luminosity evolution, which affects M∗, and a term of
density evolution, which allows for a change in the global nor-
malization Φ∗. The derived best fit parameters of this model are
listed in the second line of Table 2 and the model fit is shown
as a blue long dashed line in Figure 9. This model gives a better
χ2 with respect to the previous model, describing the entire sam-
ple better than a simple PLE (the reduced χ2 decreases from ∼
1.9 to ∼ 1.35). However, it still does not satisfactorily reproduce
the excess of faint objects in the redshift bin 1.0 < z < 1.5 and,
moreover, it underestimates the faint end of the LF in the last
redshift bin (3.0 < z < 4.0).
7.2. The LDDE model
Recently, a growing number of observations at different red-
shifts, in soft and hard X-ray bands, have found evidence
of a flattening of the faint end slope of the LF towards high
redshift. This trend has been described through a luminosity-
dependent density evolution parameterization. Such a param-
eterization allows the redshift of the AGN density peak to
change as a function of luminosity. This could help in explain-
ing the excess of faint AGN found in the VVDS sample at
1.0 < z < 1.5. Therefore, we considered a luminosity depen-
dent density evolution model (LDDE), as computed in the major
X-surveys (Miyaji et al. 2000; Ueda et al. 2003; Hasinger et al.
2005). In particular, following Hasinger et al. (2005), we as-
sumed an LDDE evolution of the form:
Φ(MB, z) = Φ(MB, 0) · ed(z, MB)    (9)

where:

ed(z, MB) = (1 + z)^p1 for z ≤ zc,
ed(z, MB) = ed(zc) [(1 + z)/(1 + zc)]^p2 for z > zc,    (10)

along with:

zc(MB) = zc,0 · 10^(−0.4 γ (MB − Mc)) for MB ≥ Mc,
zc(MB) = zc,0 for MB < Mc.    (11)
where zc corresponds to the redshift at which the evolution
changes. Note that zc is not constant but it depends on the lu-
minosity. This dependence allows different evolutions at differ-
ent luminosities and can indeed reproduce the differential AGN
evolution as a function of luminosity, thus modifying the shape
of the luminosity function as a function of redshift. We also con-
sidered two different assumptions for p1 and p2: (i) both param-
eters constant and (ii) both linearly depending on luminosity as
follows:
p1(MB) = p1,Mref − 0.4 ǫ1 (MB − Mref)    (12)
p2(MB) = p2,Mref − 0.4 ǫ2 (MB − Mref)    (13)
The corresponding χ2 values for the two above cases are re-
spectively χ2=64.6 and χ2=56.8. Given the relatively small im-
provement of the fit, we considered the addition of the two fur-
ther parameters (ǫ1 and ǫ2) unnecessary. The model with con-
stant p1 and p2 values is shown with a solid black line in Figure
Figure 10. Evolution of the comoving AGN space density with red-
shift, for different luminosity ranges: −22.0 < MB < −20.0; −24.0 <
MB < −22.0; −26.0 < MB < −24.0 and MB < −26.0. Dashed lines
correspond to the redshift range in which the model has been
extrapolated.
9 and the best fit parameters derived for this model are reported
in the last line of Table 2.
This model reproduces well the overall shape of the luminos-
ity function over the entire redshift range, including the excess
of faint AGN at 1.0 < z < 1.5. The χ2 value for the LDDE model
is in fact the best among all the analyzed models: we found
χ2 = 64.6 for 67 degrees of freedom and, since the reduced χ2
is below 1, the fit is acceptable⁴.
The best fit value of the faint end slope, which in this model
corresponds to the slope at z = 0, is β =-2.0. This value is consis-
tent with that derived by Hao et al. (2005) studying the emission
line luminosity function of a sample of Seyfert galaxies at very
low redshift (0 < z < 0.15), extracted from the SDSS. They in
fact derived a slope β ranging from -2.07 to -2.03, depending on
the line (Hα, [O ii] or [O iii]) used to compute the nuclear lumi-
nosity. Moreover, the normalizations are also in good agreement,
supporting our model in a redshift range where data are not
available and giving us good confidence in the extrapolation of
the derived model.
8. The AGN activity as a function of redshift
By integrating the luminosity function corresponding to our best
fit model (i.e the LDDE model; see Table 2), we derived the co-
moving AGN space density as a function of redshift for different
luminosity ranges (Figure 10).
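The space densities in Figure 10 follow from integrating the best-fit LF over each magnitude range. A minimal sketch, where the LF callable (e.g. the best-fit LDDE model evaluated at a given z) is supplied by the user:

```python
import numpy as np

def space_density(phi_of_M, M_bright, M_faint, n=1024):
    """Comoving space density: integrate a LF Phi(M) [Mpc^-3 mag^-1]
    over the magnitude range [M_bright, M_faint] (trapezoidal rule)."""
    M = np.linspace(M_bright, M_faint, n)
    phi = phi_of_M(M)
    return float(np.sum((phi[:-1] + phi[1:]) * np.diff(M)) / 2.0)
```

Evaluating this for each redshift and each of the four magnitude ranges traces out the density curves whose peaks shift with luminosity.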
The existence of a peak at z ∼ 2 in the space density of bright
AGN has been known for a long time, even if it has rarely been
possible to precisely locate this maximum within a single optical
survey. Figure 10 shows that for our best fit model the
peak of the AGN space density shifts significantly towards lower
4 We note that the reduced χ2 of our best fit model, which in-
cludes also VVDS data, is significantly better than that obtained by
Richards et al. (2006b) in fitting only the SDSS DR3 data.
Figure 9. Filled circles correspond to our rest-frame B-band luminosity function data points, derived in the redshift bins 1.0 < z <
1.5, 1.5 < z < 2.0, 2.0 < z < 2.5, 2.5 < z < 3.0 and 3.0 < z < 4.0. Open circles are the data points from the SDSS Data Release 3
(DR3) by Richards et al. (2006b). These data are shown also in two redshift bins below z = 1. The red dot-dashed line corresponds to
the model fit derived by Richards et al. (2006b) only for the SDSS data. The other lines correspond to model fits derived considering
the combination of the VVDS and SDSS samples for different evolutionary models, as listed in Table 2 and described in Section 7.
redshift going to lower luminosity. The position of the maximum
moves from z∼ 2.0 for MB <-26.0 to z∼ 0.65 for -22< MB <-20.
A similar trend has recently been found by the analysis
of several deep X-ray selected samples (Cowie et al., 2003;
Hasinger et al., 2005; La Franca et al., 2005). To compare with
X-ray results, we applied the same bolometric corrections
used in Section 6 and derived the volume densities predicted
by our best fit LDDE model in the same luminosity ranges
as La Franca et al. (2005). We found that the volume density
peaks at z ≃ [0.35; 0.7; 1.1; 1.5] respectively for LogLX(2−10kev)
= [42–43; 43–44; 44–44.5; 44.5–45]. In the same luminosity
intervals, the values for the redshift of the peak obtained by
La Franca et al. (2005) are z ≃ [0.5; 0.8; 1.1; 1.5], in good agree-
ment with our result. This trend has been interpreted as evidence
of AGN (i.e. black hole) “cosmic downsizing”, similar to what
has recently been observed in the galaxy spheroid population
(Cimatti et al., 2006). The downsizing (Cowie et al., 1996) is a
term used to describe the phenomenon whereby luminous
activity (star formation and accretion onto black holes) appears
to be occurring predominantly in progressively lower mass
objects (galaxies or BHs) as the redshift decreases. As such, it
explains why the number of bright sources peaks at higher
redshift than the number of faint sources.

Table 1. Binned luminosity function estimate for Ωm = 0.3, ΩΛ = 0.7 and H0 = 70 km s^-1 Mpc^-1. We list the values of Log Φ and
the corresponding 1σ errors in five redshift ranges, as plotted with full circles in Figure 9 and in ∆MB = 1.0 magnitude bins. We also
list the number of AGN contributing to the luminosity function estimate in each bin.

1.0 < z < 1.5
∆M              Nqso   LogΦ(B)   ∆LogΦ(B)
-19.46 -20.46      3     -5.31    +0.20 -0.38
-20.46 -21.46     11     -4.89    +0.12 -0.16
-21.46 -22.46     17     -5.04    +0.09 -0.12
-22.46 -23.46      9     -5.32    +0.13 -0.18
-23.46 -24.46      3     -5.78    +0.20 -0.38
-25.46 -26.46      1     -6.16    +0.52 -0.76

1.5 < z < 2.0
∆M              Nqso   LogΦ(B)   ∆LogΦ(B)
-20.28 -21.28      4     -5.29    +0.18 -0.30
-21.28 -22.28      7     -5.18    +0.15 -0.22
-22.28 -23.28      7     -5.54    +0.14 -0.20
-23.28 -24.28     10     -5.34    +0.12 -0.17
-24.28 -25.28      2     -5.94    +0.23 -0.53

2.0 < z < 2.5
∆M              Nqso   LogΦ(B)   ∆LogΦ(B)
-20.90 -21.90      1     -5.65    +0.52 -0.76
-21.90 -22.90      3     -5.48    +0.20 -0.38
-22.90 -23.90      4     -5.76    +0.18 -0.30
-23.90 -24.90      4     -5.81    +0.18 -0.30
-24.90 -25.90      2     -5.97    +0.23 -0.53
-25.90 -26.90      2     -6.03    +0.23 -0.55

2.5 < z < 3.0
∆M              Nqso   LogΦ(B)   ∆LogΦ(B)
-21.55 -22.55      3     -5.45    +0.20 -0.38
-22.55 -23.55      4     -5.58    +0.19 -0.34
-23.55 -24.55      3     -5.90    +0.20 -0.38
-24.55 -25.55      2     -6.11    +0.23 -0.53
-25.55 -26.55      1     -6.26    +0.52 -0.76

3.0 < z < 4.0
∆M              Nqso   LogΦ(B)   ∆LogΦ(B)
-21.89 -22.89      4     -5.52    +0.19 -0.34
-22.89 -23.89      3     -5.86    +0.20 -0.40
-23.89 -24.89      7     -5.83    +0.14 -0.21
-24.89 -25.89      3     -6.12    +0.20 -0.38

Table 2. Best fit models derived from the χ² analysis of the combined sample VVDS+SDSS-DR3 in the redshift range 0.0 < z < 4.0,
assuming a flat (Ωm + ΩΛ = 1) universe with Ωm = 0.3.

Sample - Evolution Model    α      β      M∗      k1L    k2L     A     k1D    k2D    Φ∗        χ²      ν
VVDS+SDSS - PLE (α var)   -3.83  -1.38  -22.51   1.23  -0.26   0.26    -      -     9.78E-7   130.36  69
VVDS+SDSS - PLE+PDE       -3.49  -1.40  -23.40   0.68  -0.073   -    -0.97  -0.31   2.15E-7   91.4    68

Sample - Evolution Model    α      β      M∗      p1     p2      γ     zc,0   Mc     Φ∗        χ²      ν
VVDS+SDSS - LDDE          -3.29  -2.0   -24.38   6.54  -1.37   0.21   2.08  -27.36  2.79E-8   64.6    67
As already said, this effect had not been seen so far in the
analysis of optically selected samples. This can be due to the
fact that most of the optical samples, because of their limiting
magnitudes, do not reach luminosities where the difference in the
location of the peak becomes evident. The COMBO-17 sample
(Wolf et al., 2003), for example, although it covers a redshift
range (1.2 < z < 4.8) wide enough to enclose the peak of the AGN
activity, does not probe luminosities faint enough to find a
significant indication of a difference between the space density
peaks of AGN of different luminosities (see, for example, Figure 11
in Wolf et al. (2003), which is analogous to our Figure 10, but in
which only AGN brighter than M ∼ -24 are shown). The VVDS
sample, being about one magnitude fainter than the COMBO-
17 sample and not having any bias in finding faint AGN, allows
us to detect for the first time in an optically selected sample the
shift of the maximum space density towards lower redshift for
low luminosity AGN.
9. Summary and conclusion
In the present paper we have used the new sample of AGN, col-
lected by the VVDS and presented in Gavignaud et al. (2006), to
derive the optical luminosity function of faint type–1 AGN.
The sample consists of 130 broad line AGN (BLAGN) se-
lected on the basis of only their spectral features, with no mor-
phological and/or color selection biases. The absence of these
biases is particularly important for this sample because the typ-
ical non-thermal AGN continuum can be significantly masked
by the emission of the host galaxy at the low intrinsic luminos-
ity of the VVDS AGN. This makes the optical selection of the
faint AGN candidates very difficult using the standard color and
morphological criteria. Only spectroscopic surveys without any
pre-selection can therefore be considered complete in this lumi-
nosity range.
Because of the absence of morphological and color selec-
tion, our sample shows redder colors than those expected, for
example, on the basis of the color track derived from the SDSS
composite spectrum, and the difference is stronger for the
intrinsically faintest objects. Thanks to the extended multi-wavelength
coverage in the deep VVDS fields in which we have, in addition
to the optical VVDS bands, also photometric data from GALEX,
CFHTLS, UKIDSS and SWIRE, we examined the spectral en-
ergy distribution of each object and we fitted it with a combina-
tion of AGN and galaxy emission, allowing also for the possi-
bility of extinction of the AGN flux. We found that both effects
(presence of dust and contamination from the host galaxy) are
likely to be responsible for this reddening, even if it is not pos-
sible to exclude that faint AGN are intrinsically redder than the
brighter ones.
We derived the luminosity function in the B-band for 1 <
z < 3.6, using the usual 1/Vmax estimator (Schmidt, 1968),
which gives the space density contributions of individual objects.
Moreover, using the prescriptions recently derived by
Hopkins et al. (2007), we also computed the bolometric
luminosity function for our sample. This allows us to compare our
results also with other samples selected from different bands.
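A minimal sketch of the 1/Vmax bookkeeping is below. The Vmax values are illustrative placeholders; a real implementation would derive each object's maximum visible comoving volume from the survey flux limits and redshift bin.

```python
# Minimal sketch of the 1/Vmax estimator (Schmidt 1968): each object
# contributes the inverse of the maximum comoving volume within which
# it would still satisfy the survey limits and the redshift bin.
# The (abs_mag, v_max) pairs below are illustrative placeholders.
import math

def lf_binned(objects, dM=1.0):
    """objects: list of (abs_mag, v_max [Mpc^3]); returns {bin_center: phi per mag}."""
    bins = {}
    for M, v_max in objects:
        center = math.floor(M / dM) * dM + dM / 2
        bins.setdefault(center, 0.0)
        bins[center] += 1.0 / v_max   # per-object contribution
    return {c: s / dM for c, s in bins.items()}

sample = [(-22.3, 1.2e7), (-22.6, 1.5e7), (-23.4, 2.0e7)]
lf = lf_binned(sample)   # e.g. bin -22.5 collects the first two objects
```

The Poisson-like 1σ error per bin follows from the quadrature sum of the individual 1/Vmax terms.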
Our data, more than one magnitude fainter than previous op-
tical surveys, allow us to constrain the faint part of the luminosity
function up to high redshift. A comparison of our data with the
2dF sample at low redshift (1 < z < 2.1) shows that the VVDS
data cannot be well fitted with the PLE models derived by
previous samples. Qualitatively, our data suggest the presence of
an excess of faint objects at low redshift (1.0 < z < 1.5) with
respect to these models.
Recently, a growing number of observations at different redshifts,
in soft and hard X-ray bands, have in fact found evidence
of a similar trend, which has been reproduced with a
luminosity-dependent density evolution parameterization. Such
a parameterization allows the redshift of the AGN density peak
to change as a function of luminosity and explains the excess
of faint AGN that we found at 1.0 < z < 1.5. Indeed, by com-
bining our faint VVDS sample with the large sample of bright
AGN extracted from the SDSS DR3 (Richards et al., 2006b),
we found that the evolutionary model which better represents
the combined luminosity functions, over a wide range of red-
shift and luminosity, is an LDDE model, similar to those derived
from the major X-surveys. The derived faint end slope at z=0 is
β = -2.0, consistent with the value derived by Hao et al. (2005)
studying the emission line luminosity function of a sample of
Seyfert galaxies at very low redshift.
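The LDDE behaviour described here can be sketched with the commonly used Ueda-style parameterization; the functional form and the numbers below (loosely echoing Table 2) are illustrative assumptions, not the paper's fitted model.

```python
# Sketch of a luminosity-dependent density evolution (LDDE) factor in
# the style of Ueda et al. (2003): the evolution peak redshift z_c
# declines toward fainter magnitudes, so faint AGN peak later.
# Functional form and all numbers are illustrative.
def z_peak(M, zc0=2.08, Mc=-27.36, gamma=0.21):
    """Peak redshift: zc0 for M brighter than Mc, declining for fainter M."""
    return zc0 if M <= Mc else zc0 * 10**(0.4 * gamma * (Mc - M))

def evolution(z, M, p1=6.5, p2=-1.4):
    """Density evolution e(z, M): rises as (1+z)^p1, then falls past z_peak."""
    zc = z_peak(M)
    if z <= zc:
        return (1 + z)**p1
    return (1 + zc)**p1 * ((1 + z) / (1 + zc))**p2

# fainter AGN (M = -21) peak at lower redshift than bright ones (M = -27)
assert z_peak(-21) < z_peak(-27)
```

With these illustrative numbers the faint-AGN peak lands near z ∼ 0.6, qualitatively reproducing the downsizing trend of Figure 10.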
A feature intrinsic to these LDDE models is that the comov-
ing AGN space density shows a shift of the peak with luminos-
ity, in the sense that more luminous AGN peak earlier in the
history of the Universe (i.e. at higher redshift), while the density
of low luminosity ones reaches its maximum later (i.e. at lower
redshift). In particular, in our best fit LDDE model the peak of
the space density ranges from z ∼ 2 for MB < -26 to z∼ 0.65
for -22 < MB < -20. This effect had not been seen so far in the
analysis of optically selected samples, probably because most of
the optical samples do not completely sample the faintest
luminosities, where the difference in the location of the peak
becomes evident.
Although the results presented here already appear to be robust,
the larger AGN sample we will have at the end of the still
on-going VVDS survey (> 300 AGN) will allow a better
statistical analysis and a better estimate of the parameters of the
evolutionary model.
Acknowledgements. This research has been developed within the framework of
the VVDS consortium.
This work has been partially supported by the CNRS-INSU and its Programme
National de Cosmologie (France), and by Italian Ministry (MIUR) grants
COFIN2000 (MM02037133) and COFIN2003 (num.2003020150).
Based on data obtained with the European Southern Observatory Very Large
Telescope, Paranal, Chile, program 070.A-9007(A), 272.A-5047, 076.A-0808,
and on data obtained at the Canada-France-Hawaii Telescope, operated by the
CNRS of France, CNRC in Canada, and the University of Hawaii. The VLT-
VIMOS observations have been carried out on guaranteed time (GTO) allo-
cated by the European Southern Observatory (ESO) to the VIRMOS consortium,
under a contractual agreement between the Centre National de la Recherche
Scientifique of France, heading a consortium of French and Italian institutes,
and ESO, to design, manufacture and test the VIMOS instrument.
References
Arnouts, S., Schiminovich, D., Ilbert, O., et al. 2005, ApJ, 619, L43
Avni, Y. & Bahcall, J. N. 1980, ApJ, 235, 694A
Barger, A. J., Cowie, L. L., Mushotzky, R. F., et al. 2005, AJ, 129, 578
Boyle, B. J., Shanks, T., Croom, S. M., et al. 2000, MNRAS, 317, 1014
Boyle, B. J., Shanks, T., & Peterson, B. A. 1988, MNRAS, 235, 935
Brandt, W. N., Hornschemeier, A. E., Schneider, D. P., et al. 2000, AJ, 119, 2349
Brown, M. J. I., Brand, K., Dey, A., et al. 2006, ApJ, 638, 88
Bruzual, G. & Charlot, S. 2003, MNRAS, 344, 1000
Cimatti, A., Daddi, E., & Renzini, A. 2006, A&A, 453, L29
Cirasuolo, M., Magliocchetti, M., & Celotti, A. 2005, MNRAS, 357, 1267
Cowie, L. L., Barger, A. J., Bautz, M. W., Brandt, W. N., & Garmire, G. P. 2003,
ApJ, 584, L57
Cowie, L. L., Songaila, A., Hu, E. M., & Cohen, J. G. 1996, AJ, 112, 839
Croom, S. M., Smith, R. J., Boyle, B. J., et al. 2004, MNRAS, 349, 1397
Di Matteo, T., Springel, V., & Hernquist, L. 2005, Nature, 433, 604
Fan, X., Strauss, M. A., Schneider, D. P., et al. 2001, AJ, 121, 54
Ferrarese, L. & Merritt, D. 2000, ApJ, 539, L9
Fiore, F., Brusa, M., Cocchia, F., et al. 2003, A&A, 409, 79
Fontanot, F., Cristiani, S., Monaco, P., et al. 2007, A&A, 461, 39
Gavignaud, I., Bongiorno, A., Paltani, S., et al. 2006, ArXiv Astrophysics e-
prints
Hall, P. B., Gallagher, S. C., Richards, G. T., et al. 2006, AJ, 132, 1977
Hao, L., Strauss, M. A., Fan, X., et al. 2005, AJ, 129, 1795
Hartwick, F. D. A. & Schade, D. 1990, ARA&A, 28, 437
Hasinger, G., Miyaji, T., & Schmidt, M. 2005, A&A, 441, 417
Hewett, P. C. & Foltz, C. B. 2003, AJ, 125, 1784
Hewett, P. C., Foltz, C. B., Chaffee, F. H., et al. 1991, AJ, 101, 1121
Hopkins, P. F., Hernquist, L., Cox, T. J., et al. 2006, ApJ, 639, 700
Hopkins, P. F., Richards, G. T., & Hernquist, L. 2007, ApJ, 654, 731
Hunt, M. P., Steidel, C. C., Adelberger, K. L., & Shapley, A. E. 2004, ApJ, 605,
Ilbert, O., Tresse, L., Zucca, E., et al. 2005, A&A, 439, 863
Iovino, A., McCracken, H. J., Garilli, B., et al. 2005, A&A, 442, 423
Kennefick, J. D., Djorgovski, S. G., & de Carvalho, R. R. 1995, AJ, 110, 2553
Kormendy, J. & Richstone, D. 1995, ARA&A, 33, 581
La Franca, F., Fiore, F., Comastri, A., et al. 2005, ApJ, 635, 864
La Franca, F., Fiore, F., Vignali, C., et al. 2002, ApJ, 570, 100
Lawrence, A., Warren, S. J., Almaini, O., et al. 2006, ArXiv Astrophysics e-
prints
Le Fèvre, O., Mellier, Y., McCracken, H. J., et al. 2004a, A&A, 417, 839
Le Fèvre, O., Vettolani, G., Paltani, S., et al. 2004b, A&A, 428, 1043
Le Fèvre, O., Vettolani, G., Garilli, B., et al. 2005, A&A, 439, 845
Lonsdale, C., Polletta, M. d. C., Surace, J., et al. 2004, ApJS, 154, 54
Lonsdale, C. J., Smith, H. E., Rowan-Robinson, M., et al. 2003, PASP, 115, 897
Magorrian, J., Tremaine, S., Richstone, D., et al. 1998, AJ, 115, 2285
Marshall, H. L., Tananbaum, H., Avni, Y., & Zamorani, G. 1983, ApJ, 269, 35
Matute, I., La Franca, F., Pozzi, F., et al. 2006, A&A, 451, 443
McCracken, H. J., Radovich, M., Bertin, E., et al. 2003, A&A, 410, 17
Miyaji, T., Hasinger, G., & Schmidt, M. 2000, A&A, 353, 25
Miyaji, T., Hasinger, G., & Schmidt, M. 2001, A&A, 369, 49
Mushotzky, R. F., Cowie, L. L., Barger, A. J., & Arnaud, K. A. 2000, Nature,
404, 459
Osterbrock, D. E. 1981, ApJ, 249, 462
Page, M. J., Mason, K. O., McHardy, I. M., Jones, L. R., & Carrera, F. J. 1997,
MNRAS, 291, 324
Pei, Y. C. 1995, ApJ, 438, 623
Prevot, M. L., Lequeux, J., Prevot, L., Maurice, E., & Rocca-Volmerange, B.
1984, A&A, 132, 389
Radovich, M., Arnaboldi, M., Ripepi, V., et al. 2004, A&A, 417, 51
Richards, G. T., Croom, S. M., Anderson, S. F., et al. 2005, MNRAS, 360, 839
Richards, G. T., Fan, X., Newberg, H. J., et al. 2002, AJ, 123, 2945
Richards, G. T., Hall, P. B., Vanden Berk, D. E., et al. 2003, AJ, 126, 1131
Richards, G. T., Lacy, M., Storrie-Lombardi, L. J., et al. 2006a, ApJS, 166, 470
Richards, G. T., Strauss, M. A., Fan, X., et al. 2006b, AJ, 131, 2766
Scannapieco, E. & Oh, S. P. 2004, ApJ, 608, 62
Schiminovich, D., Ilbert, O., Arnouts, S., et al. 2005, ApJ, 619, L47
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Schmidt, M. 1968, ApJ, 151, 393
Schmidt, M., Schneider, D. P., & Gunn, J. E. 1995, AJ, 110, 68
Schneider, D. P., Fan, X., Hall, P. B., et al. 2003, AJ, 126, 2579
Siana, B., Polletta, M., Smith, H. E., et al. 2006, ArXiv Astrophysics e-prints
Silk, J. & Rees, M. J. 1998, A&A, 331, L1
Silverman, J. D., Green, P. J., Barkhouse, W. A., et al. 2005, ApJ, 624, 630
Steidel, C. C., Hunt, M. P., Shapley, A. E., et al. 2002, ApJ, 576, 653
Ueda, Y., Akiyama, M., Ohta, K., & Miyaji, T. 2003, ApJ, 598, 886
Vanden Berk, D. E., Richards, G. T., Bauer, A., et al. 2001, AJ, 122, 549
Warren, S. J., Hewett, P. C., & Osmer, P. S. 1994, ApJ, 421, 412
Wolf, C., Wisotzki, L., Borch, A., et al. 2003, A&A, 408, 499
1 Università di Bologna, Dipartimento di Astronomia - Via Ranzani
1, I-40127, Bologna, Italy
2 INAF-Osservatorio Astronomico di Bologna - Via Ranzani 1, I-
40127, Bologna, Italy
3 Astrophysical Institute Potsdam, An der Sternwarte 16, D-14482
Potsdam, Germany
4 Integral Science Data Centre, ch. d’Écogia 16, CH-1290 Versoix
5 Geneva Observatory, ch. des Maillettes 51, CH-1290 Sauverny,
Switzerland
6 Laboratoire d’Astrophysique de Toulouse/Tabres (UMR5572),
CNRS, Université Paul Sabatier - Toulouse III, Observatoire Midi-
Pyriénées, 14 av. E. Belin, F-31400 Toulouse, France
7 Institute for Astronomy, University of Edinburgh, Royal
Observatory, Edinburgh EH9 3HJ
8 IASF-INAF - via Bassini 15, I-20133, Milano, Italy
9 Laboratoire d’Astrophysique de Marseille, UMR 6110 CNRS-
Université de Provence, BP8, 13376 Marseille Cedex 12, France
10 IRA-INAF - Via Gobetti,101, I-40129, Bologna, Italy
11 INAF-Osservatorio Astronomico di Roma - Via di Frascati 33, I-
00040, Monte Porzio Catone, Italy
12 Max Planck Institut fur Astrophysik, 85741, Garching, Germany
13 Institut d’Astrophysique de Paris, UMR 7095, 98 bis Bvd Arago,
75014 Paris, France
14 School of Physics & Astronomy, University of Nottingham,
University Park, Nottingham, NG72RD, UK
15 INAF-Osservatorio Astronomico di Brera - Via Brera 28, Milan,
Italy
16 Institute for Astronomy, 2680 Woodlawn Dr., University of
Hawaii, Honolulu, Hawaii, 96822
17 Observatoire de Paris, LERMA, 61 Avenue de l’Observatoire,
75014 Paris, France
18 Centre de Physique Théorique, UMR 6207 CNRS-Université de
Provence, F-13288 Marseille France
19 Astronomical Observatory of the Jagiellonian University, ul Orla
171, 30-244 Kraków, Poland
20 INAF-Osservatorio Astronomico di Capodimonte - Via
Moiariello 16, I-80131, Napoli, Italy
21 Institute de Astrofisica de Canarias, C/ Via Lactea s/n, E-38200
La Laguna, Spain
22 Center for Astrophysics & Space Sciences, University of
California, San Diego, La Jolla, CA 92093-0424, USA
23 Centro de Astrofsica da Universidade do Porto, Rua das Estrelas,
4150-762 Porto, Portugal
24 Universitá di Milano-Bicocca, Dipartimento di Fisica - Piazza
delle Scienze 3, I-20126 Milano, Italy
25 Università di Bologna, Dipartimento di Fisica - Via Irnerio 46,
I-40126, Bologna, Italy
0704.1661 | Can a Black Hole Collapse to a Space-time Singularity? |
Can a Black Hole Collapse to a Space-time Singularity?
R. K. Thakur
Abstract A critique of the singularity theorems of Penrose, Hawking, and Geroch is
given. It is pointed out that a gravitationally collapsing black hole acts as an ultrahigh
energy particle accelerator that can accelerate particles to energies inconceivable in any
terrestrial particle accelerator, and that when the energy E of the particles comprising
matter in a black hole is ∼ 102GeV or more, or equivalently, the temperature T is
∼ 1015K or more, the entire matter in the black hole is converted into quark-gluon
plasma permeated by leptons. As quarks and leptons are fermions, it is emphasized that
the collapse of a black-hole to a space-time singularity is inhibited by Pauli’s exclusion
principle. It is also suggested that ultimately a black hole may end up either as a
stable quark star, or as a pulsating quark star which may be a source of gravitational
radiation, or it may simply explode with a mini bang of a sort.
Keywords black hole · gravitational collapse · space-time singularity · quark star
1 Introduction
When all the thermonuclear sources of energy of a star are exhausted, the core of
the star begins to contract gravitationally because, practically, there is no radiation
pressure to arrest the contraction, the pressure of matter being inadequate for this
purpose. If the mass of the core is less than the Chandrasekhar limit (∼ 1.44M⊙), the
contraction stops when the density of matter in the core, ρ > 2 × 10^6 g cm^-3; at this
stage the pressure of the relativistically degenerate electron gas in the core is enough
to withstand the force of gravitation. When this happens, the core becomes a stable
white dwarf. However, when the mass of the core is greater than the Chandrasekhar
limit, the pressure of the relativistically degenerate electron gas is no longer sufficient
to arrest the gravitational contraction, the core continues to contract and becomes
R. K. Thakur
Retired Professor of Physics,
School of Studies in Physics,
Pt. Ravishankar Shukla University, Raipur, India
Tel.: +91-771-2255168
E-mail: [email protected]
http://arxiv.org/abs/0704.1661v2
denser and denser; and when the density reaches the value ρ ∼ 10^7 g cm^-3, the process
of neutronization sets in; electrons and protons in the core begin to combine into
neutrons through the reaction
p + e− → n + νe
The electron neutrinos νe so produced escape from the core of the star. The gravi-
tational contraction continues and eventually, when the density of the core reaches the
value ρ ∼ 10^14 g cm^-3, the core consists almost entirely of neutrons. If the mass of
the core is less than the Oppenheimer-Volkoff limit (∼ 3M⊙), then at this stage the
contraction stops; the pressure of the degenerate neutron gas is enough to withstand
the gravitational force. When this happens, the core becomes a stable neutron star. Of
course, enough electrons and protons must remain in the neutron star so that Pauli’s
exclusion principle prevents neutron beta decay
n → p + e− + ν̄e

where ν̄e is the electron antineutrino (Weinberg 1972a). This requirement sets a
lower limit ∼ 0.2M⊙ on the mass of a stable neutron star.
If, however, after the end of the thermonuclear evolution, the mass of the core of a
star is greater than the Chandrasekhar and Oppenheimer-Volkoff limit, the star may
eject enough matter so that the mass of the core drops below the Chandrasekhar and
Oppenheimer-Volkoff limit as a result of which it may settle as a stable white dwarf
or a stable neutron star. If not, the core will gravitationally collapse and end up as a
black hole.
As is well known, the event horizon of a black hole of mass M is a spherical
surface located at a distance r = rg = 2GM/c^2 from the centre, where G is Newton’s
gravitational constant and c the speed of light in vacuum; rg is called gravitational
radius or Schwarzschild radius. An external observer cannot observe anything that is
happening inside the event horizon, nothing, not even light or any other electromagnetic
signal can escape outside the event horizon from inside. However, anything that enters
the event horizon from outside is swallowed by the black hole; it can never escape
outside the event horizon again.
Attempts have been made, using the general theory of relativity (GTR), to under-
stand what happens inside a black hole. In so doing, various simplifying assumptions
have been made. In the simplest treatment (Oppenheimer and Snyder 1939; Weinberg
1972b) a black hole is considered to be a ball of dust with negligible pressure, uniform
density ρ = ρ(t), and at rest at t = 0. These assumptions lead to the unique solution
of the Einstein field equations, and in the comoving co-ordinates the metric inside the
black hole is given by
dτ^2 = dt^2 − R^2(t) [ dr^2 / (1 − k r^2) + r^2 dθ^2 + r^2 sin^2θ dφ^2 ]    (1)
in units in which the speed of light in vacuum c = 1, and where k is a constant. The
requirement of energy conservation implies that ρ(t)R^3(t) remains constant. On normalizing
the radial co-ordinate r so that
R(0) = 1 (2)
one gets
ρ(t) = ρ(0) R^-3(t)    (3)
The fluid is assumed to be at rest at t = 0, so
Ṙ(0) = 0 (4)
Consequently, the field equations give
k = (8πG/3) ρ(0)    (5)
Finally, the solution of the field equations is given by the parametric equations of
a cycloid:

t = (ψ + sin ψ) / (2√k),   R = (1/2)(1 + cos ψ)    (6)
From equation (6) it is obvious that when ψ = π, i.e., when

t = ts = (π/2) √(3 / (8πGρ(0)))    (7)
a space-time singularity occurs; the scale factor R(t) vanishes. In other words, a black
hole of uniform density having the initial values ρ(0), and zero pressure collapses from
rest to a point in 3 - subspace, i.e., to a 3 - subspace of infinite curvature and zero
proper volume, in a finite time ts; the collapsed state being a state of infinite proper
energy density. The same result is obtained in the Newtonian collapse of a ball of dust
under the same set of assumptions (Narlikar 1978).
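Equation (7) can be evaluated numerically. The sketch below works in CGS units; the chosen initial density (roughly nuclear density) is an illustrative assumption, not a value from the text.

```python
# Sketch: free-fall collapse time t_s = (pi/2) * sqrt(3 / (8*pi*G*rho0))
# from the cycloid solution (eq. 7), in CGS units.
import math

G = 6.674e-8  # cm^3 g^-1 s^-2

def collapse_time(rho0):
    """Comoving time for R -> 0, given the initial rest density rho0 [g/cm^3]."""
    return (math.pi / 2) * math.sqrt(3 / (8 * math.pi * G * rho0))

# at roughly nuclear density (~1e14 g/cm^3, illustrative) the collapse
# takes a fraction of a millisecond of comoving time
t = collapse_time(1e14)
assert 1e-4 < t < 1e-3
```

Denser initial configurations collapse faster, since ts ∝ ρ(0)^(-1/2).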
Although the black hole collapses completely to a point at a finite co-ordinate
time t = ts, any electromagnetic signal coming to an observer on the earth from the
surface of the collapsing star before it crosses its event horizon will be delayed by its
gravitational field, so an observer on the earth will not see the star suddenly vanish.
Actually, the collapse to the Schwarzschild radius rg appears to an outside observer to
take an infinite time, and the collapse to R = 0 is not at all observable from outside
the event horizon.
The internal dynamics of a non-idealized, real black hole is very complex. Even
in the case of a spherically symmetric collapsing black hole with non-zero pressure
the details of the interior dynamics are not well understood, though major advances
in the understanding of the interior dynamics are now being made by means of nu-
merical computations and analytic analyses. But in these computations and analyses
no new features have emerged beyond those that occur in the simple uniform-density,
free-fall collapse considered above (Misner, Thorne, and Wheeler 1973). However, using
topological methods, Penrose (1965, 1969), Hawking (1966a, 1966b, 1967a, 1967b),
Hawking and Penrose (1970), and Geroch (1966, 1967, 1968) have proved a number
of singularity theorems purporting that if an object contracts to dimensions smaller
than rg, and if other reasonable conditions - namely, validity of the GTR, positivity of
energy, ubiquity of matter and causality - are satisfied, its collapse to a singularity is
inevitable.
2 A critique of the singularity theorems
As mentioned above, the singularity theorems are based, inter alia, on the assump-
tion that the GTR is universally valid. But the question is : Has the validity of
the GTR been established experimentally in the case of strong fields ? Actually,
the GTR has been experimentally verified only in the limiting case of week fields,
it has not been experimentally validated in the case of strong fields. Moreover, it
has been demonstrated that when curvatures exceed the critical value Cg = 1/L
where Lg =
h̄ G/c3
= 1.6 × 10−33 cm corresponding to the critical density
ρg = 5 × 1093 g cm−3, the GTR is no longer valid; quantum effects must enter the
picture (Zeldovich and Novikov 1971). Therefore, it is clear that the GTR breaks down
before a gravitationally collapsing object collapses to a singularity. Consequently, the
conclusion based on the GTR that in comoving co-ordinates any gravitationally col-
lapsing object in general, and a black hole in particular, collapses to a point in 3-space
need not be held sacrosanct, as a matter of fact it may not be correct at all.
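The critical scale quoted above is straightforward to check numerically. The sketch below recomputes Lg from CGS constants and, as an assumption, takes the critical density to be the standard Planck combination ρg ∼ c^5/(ħG^2), which reproduces the quoted value.

```python
# Sketch: the critical (Planck) scale quoted above, L_g = sqrt(hbar*G/c^3),
# and the corresponding density, assumed here to be rho_g ~ c^5/(hbar*G^2),
# in CGS units.
import math

hbar = 1.0546e-27  # erg s
G = 6.674e-8       # cm^3 g^-1 s^-2
c = 2.998e10       # cm/s

L_g = math.sqrt(hbar * G / c**3)   # ~1.6e-33 cm, as quoted
rho_g = c**5 / (hbar * G**2)       # ~5e93 g/cm^3, as quoted

assert 1.5e-33 < L_g < 1.7e-33
assert 4e93 < rho_g < 6e93
```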
Furthermore, while arriving at the singularity theorems attention has mostly been
focused on the space-time geometry and geometrodynamics; matter has been tacitly
treated as a classical entity. However, as will be shown later, this is not justified;
quantum mechanical behavior of matter at high energies and high densities must be
taken into account. Even if we regard matter as a classical entity of a sort, it can be
easily seen that the collapse of a black hole to a space-time singularity is inhibited
by Pauli’s exclusion principle. As mentioned earlier, a collapsing black hole consists,
almost entirely, of neutrons apart from traces of protons and electrons; and neutrons
as well as protons and electrons are fermions; they obey Pauli’s exclusion principle. If
a black hole collapses to a point in 3-space, all the neutrons in the black hole would be
squeezed into just two quantum states available at that point, one for spin up and the
other for spin down neutron. This would violate Pauli’s exclusion principle, according to
which not more than one fermion of a given species can occupy any quantum state. So
would be the case with the protons and the electrons in the black hole. Consequently,
a black hole cannot collapse to a space-time singularity in contravention to Pauli’s
exclusion principle.
Besides, another valid question is : What happens to a black hole after t > ts, i.e.,
after it has collapsed to a point in 3-space to a state of infinite proper energy density,
if at all such a collapse occurs? Will it remain frozen forever at that point? If yes, then
uncertainties in the position co-ordinates of each of the particles - namely, neutrons,
protons, and electrons - comprising the black hole would be zero. Consequently, accord-
ing to Heisenberg’s uncertainty principle, uncertainties in the momentum co-ordinates
of each of the particles would be infinite. However, it is physically inconceivable how
particles of infinite momentum and energy would remain frozen forever at a point.
From this consideration also collapse of a black hole to a singularity appears to be
quite unlikely.
Earlier, it was suggested by the author that the very strong ’hard-core’ repulsive
interaction between nucleons, which has a range lc ∼ 0.4 × 10^-13 cm, might set a limit
on the gravitational collapse of a black hole and avert its collapse to a singularity
(Thakur 1983). The existence of this hard-core interaction was pointed out by Jastrow
(1951) after the analysis of the data from high energy nucleon-nucleon scattering ex-
periments. It has been shown that this very strong short range repulsive interaction
arises due to the exchange of isoscalar vector mesons ω and φ between two nucleons (
Scotti and Wong 1965). Phenomenologically, that part of the nucleon-nucleon potential
which corresponds to the repulsive hard core interaction may be taken as
Vc(r) = ∞ for r < lc (8)
where r is the distance between the two interacting nucleons. Taking this into account,
the author concluded that no spherical object of mass M could collapse to a sphere
of radius smaller than Rmin = 1.68 × 10^-6 M^(1/3) cm, or of density greater than
ρmax = 5.0 × 10^16 g cm^-3. It was also pointed out that an object of mass smaller than
Mc ∼ 1.21 × 10^33 g could not cross the event horizon and become a black hole; the
only course left to an object of mass smaller than Mc was to reach equilibrium as
either a white dwarf or a neutron star. However, one may not regard these conclusions
as reliable because they are based on the hard core repulsive interaction (8) between
nucleons which has been arrived at phenomenologically by high energy nuclear physi-
cists while accounting for the high energy nucleon-nucleon scattering data; but it must
be noted that, as mentioned above, the existence of the hard core interaction has been
demonstrated theoretically also by Scotti and Wong in 1965. Moreover, it is interesting
to note that the upper limit Mc ∼ 1.21 × 10^33 g = 0.69 M⊙ on the masses of objects that
cannot gravitationally collapse to form black holes is of the same order of magnitude
as the Chandrasekhar and the Oppenheimer- Volkoff limits.
Even if we disregard the role of the hard core, short range repulsive interaction
in arresting the collapse of a black hole to a space-time singularity in comoving co-
ordinates, it must be noted that, unlike leptons, which appear to be point-like particles
- the experimental upper bound on their radii being 10^-16 cm (Barber et al. 1979) -
nucleons have finite dimensions. It has been experimentally demonstrated that the
radius r0 of the proton is about 10^-13 cm (Hofstadter & McAllister 1955). Therefore,
it is natural to assume that the radius r0 of the neutron is also about 10^-13 cm.
This means the minimum volume occupied by a neutron is vmin = (4/3)πr0^3. Ignoring the
“mass defect” arising from the release of energy during the gravitational contraction
(before crossing the event horizon), the number of neutrons N in a collapsing black
hole of mass M is, obviously, N = M/mn, where mn is the mass of the neutron. Assuming
that neutrons are impregnable particles, the minimum volume that the black hole can
occupy is Vmin = N vmin = vmin M/mn, for neutrons cannot be more closely packed
than this in a black hole. However, Vmin = (4/3)πRmin^3, where Rmin is the radius of
the minimum volume to which the black hole can collapse. Consequently, Rmin =
r0 (M/mn)^(1/3). On substituting 10^-13 cm for r0 and 1.67 × 10^-24 g for mn one finds
that Rmin = 8.40 × 10^-6 M^(1/3) cm. This means a collapsing black hole cannot collapse
to a density greater than ρmax = M/Vmin = mn / ((4/3)πr0^3) = 3.99 × 10^14 g cm^-3.
The critical mass Mc of the object for which the gravitational radius Rg = Rmin is
obtained from the equation

2GMc/c^2 = 8.40 × 10^-6 Mc^(1/3)    (9)

This gives

Mc = 1.35 × 10^34 g = 8.68 M⊙    (10)
Obviously, for M > Mc, Rg > Rmin, and for M < Mc, Rg < Rmin.
Consequently, objects of mass M < Mc cannot cross the event horizon and become
a black hole whereas those of mass M > Mc can. Objects of mass M < Mc will,
depending on their mass, reach equilibrium as either white dwarfs or neutron stars. Of
course, these conclusions are based on the assumption that neutrons are impregnable
particles and have radius r0 = 10^-13 cm each. Also implicit is the assumption that
neutrons are fundamental particles; they are not composite particles made up of other
smaller constituents. But this assumption is not correct; neutrons as well as protons and
other hadrons are not fundamental particles; they are made up of smaller constituents
called quarks as will be explained in section 4. In section 5 it will be shown how, at
ultrahigh energy and ultrahigh density, the entire matter in a collapsing black hole is
eventually converted into quark-gluon plasma permeated by leptons.
3 Gravitationally collapsing black hole as a particle accelerator
We consider a gravitationally collapsing black hole. On neglecting mutual interactions
the energy E of any one of the particles comprising the black hole is given by E^2 =
p^2 + m^2 > p^2, in units in which the speed of light in vacuum c = 1, where p is the
magnitude of the 3-momentum of the particle and m its rest mass. But p = h/λ, where λ
is the de Broglie wavelength of the particle and h Planck’s constant of action. Since all
lengths in the collapsing black hole scale down in proportion to the scale factor R(t)
in equation (1), it is obvious that λ ∝ R(t). Therefore it follows that p ∝ R^-1(t), and
hence p = aR^-1(t), where a is the constant of proportionality. From this it follows
that E > a/R. Consequently, E as well as p increases continually as R decreases. It is
also obvious that E and p → ∞ as R → 0. Thus,
in effect, we have an ultra-high energy particle accelerator, so far inconceivable in any
terrestrial laboratory, in the form of a collapsing black hole, which can, in the absence
of any physical process inhibiting the collapse, accelerate particles to an arbitrarily
high energy and momentum without any limit.
What has been concluded above can also be demonstrated alternatively, without
resorting to GTR, as follows. As an object collapses under its self-gravitation, the
interparticle distance s between any pair of particles in the object decreases. Obviously,
the de Broglie wavelength λ of any particle in the object is less than or equal to s, a
simple consequence of Heisenberg’s uncertainty principle. Therefore, s ≥ h/p, where h
is Planck’s constant and p the magnitude of 3-momentum of the particle. Consequently,
p ≥ h/s and hence E ≥ h/s. Since during the collapse of the object s decreases, the
energy E as well as the momentum p of each of the particles in the object increases.
Moreover, from E ≥ h/s and p ≥ h/s it follows that E and p → ∞ as s → 0. Thus,
any gravitationally collapsing object in general, and a black hole in particular, acts as
an ultrahigh energy particle accelerator.
It is also obvious that ρ, the density of matter in the black hole, increases as it
collapses. In fact, ρ ∝ R^-3, and hence ρ → ∞ as R → 0.
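The scaling argument of this section can be sketched numerically. The snippet below is purely illustrative (the choice of a neutron with rest energy m = 939.6 MeV and an assumed momentum of 1 MeV/c at scale factor R = 1 is hypothetical, not from the paper):

```python
import math

m  = 939.6   # neutron rest energy, MeV (illustrative particle)
p0 = 1.0     # assumed momentum at scale factor R = 1, MeV/c

def energy(R):
    """E^2 = p^2 + m^2 with p = p0 / R, since lambda scales with R."""
    p = p0 / R
    return math.sqrt(p * p + m * m)

# E grows without bound as the scale factor shrinks
for R in (1.0, 1e-3, 1e-6, 1e-9):
    print(f"R = {R:8.0e}  E = {energy(R):.4g} MeV")
```

The output shows E essentially at the rest energy for R = 1 and rising steeply as R decreases, mirroring E > a/R.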
4 Quarks: The building blocks of matter
In order to understand eventually what happens to matter in a collapsing black hole one
has to take into account the microscopic behavior of matter at high energies and high
densities; one has to consider the role played by the electromagnetic, weak, and strong
interactions (apart from the gravitational interaction) between the particles comprising
the matter. For a brief account of this the reader is referred to Thakur (1995), for
greater detail to Huang (1992), or at a more elementary level to Hughes (1991).
As has been mentioned in Section 2, unlike leptons, hadrons are not point-like parti-
cles, but are of finite size; they have structures which have been revealed in experiments
that probe hadronic structures by means of electromagnetic and weak interactions. The
discovery of a very large number of apparently elementary (fundamental) hadrons led
to the search for a pattern amongst them with a view to understanding their nature.
This resulted in attempts to group together hadrons having the same baryon number,
spin, and parity but different strangeness S ( or equivalently hypercharge Y = B + S,
where B is the baryon number) into I-spin (isospin) multiplets. In a plot of Y against
I3 (z-component of isospin I), members of I-spin multiplets are represented by points.
The existence of several such hadron (baryon and meson) multiplets is a manifestation
of underlying internal symmetries.
In 1961 Gell-Mann, and independently Ne’eman, pointed out that each of these
multiplets can be looked upon as the realization of an irreducible representation of an
internal symmetry group SU(3) (Gell-Mann and Ne’eman 1964). This fact together
with the fact that hadrons have finite size and inner structure led Gell-Mann, and
independently Zweig, in 1964 to hypothesize that hadrons are not elementary particles,
rather they are composed of more elementary constituents called quarks (q) by Gell-
Mann (Zweig called them aces). Baryons are composed of three quarks (qqq) and
antibaryons of three antiquarks (q̄q̄q̄) while mesons are composed of a quark and
an antiquark each. In the beginning, to account for the multiplets of baryons and
mesons, quarks of only three flavours, namely, u(up), d (down), and s(strange) were
postulated, and they together formed the basic triplet (u, d, s) of the internal
symmetry group SU(3). All these three quarks u, d, and s have spin 1/2 and baryon number
1/3. The u quark has charge 2/3 e whereas the d and s quarks have charge −1/3 e
where e is the charge of the proton. The strangeness quantum number of the u and d
quarks is zero whereas that of the s quark is −1. The antiquarks (ū, d̄, s̄) have charges
−2/3 e, 1/3 e, 1/3 e and strangeness quantum numbers 0, 0, 1 respectively. They all
have spin 1/2 and baryon number −1/3. Both u and d quarks have the same mass,
namely, one third that of the nucleon, i.e., ≃ 310 MeV/c^2, whereas the mass of the s
quark is ≃ 500 MeV/c^2. The proton is composed of two up and one down quarks (p:
uud) and the neutron of one up and two down quarks (n: udd).
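The quark assignments above can be checked by summing charges and baryon numbers with exact fractions; this small sketch (not from the paper) recovers the proton and neutron quantum numbers:

```python
from fractions import Fraction as F

# Charge Q per quark flavour (only u and d are needed here)
Q = {'u': F(2, 3), 'd': F(-1, 3)}
B = F(1, 3)  # every quark carries baryon number 1/3

def totals(quarks):
    """Sum charge and baryon number of a constituent-quark combination."""
    return sum(Q[q] for q in quarks), B * len(quarks)

proton  = totals('uud')   # expect Q = +1, B = 1
neutron = totals('udd')   # expect Q =  0, B = 1
print("proton :", proton)
print("neutron:", neutron)
```

The exact fractions avoid floating-point noise: uud sums to charge +1 and udd to charge 0, each with baryon number 1.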
Motivated by certain theoretical considerations Glashow, Iliopoulos and Maiani
(1970) proposed that, in addition to u, d, s quarks, there should be another quark
flavour which they named charm (c). Gaillard and Lee (1974) estimated its mass to be
≃ 1.5GeV/c2. In 1974 two teams, one led by S.C.C. Ting at SLAC (Aubert et al. 1974)
and another led by B. Richter at Brookhaven (Augustin et al. 1974) independently
discovered the J/Ψ , a particle remarkable in that its mass (3.1GeV/c2) is more than
three times that of the proton. Since then, four more particles of the same family,
namely, ψ(3684), ψ(3950), ψ(4150), ψ(4400) have been found. It is now established
that these particles are bound states of charmonium (cc̄), J/ψ being the ground state.
On adopting a non-relativistic independent quark model with a linear potential between
c and c̄, and taking the mass of c to be approximately half the mass of J/ψ, i.e.,
1.5 GeV/c^2, one can account for the J/ψ family of particles. The c has spin 1/2, charge
2/3 e, baryon number 1/3, strangeness 0, and a new quantum number charm (c)
equal to 1. The u, d, s quarks have c = 0. It may be pointed out here that charmed
mesons and baryons, i.e., the bound states like (cd̄) and (cdu), have also been found.
Thus the existence of the c quark has been established experimentally beyond any
shade of doubt.
The discovery of the c quark stimulated the search for more new quarks. An ad-
ditional motivation for such a search was provided by the fact that there are three
generations of lepton weak doublets: (νe, e), (νµ, µ), and (ντ, τ), where νe, νµ, and ντ
are electron (e), muon (µ), and tau lepton (τ) neutrinos respectively. Hence, by analogy,
one expects that there should be three generations of quark weak doublets also: (u, d),
(c, s), and (t, b). It may be mentioned here that weak interaction does not distinguish
between the upper and the lower members of each of these doublets. In analogy with
the isospin 1/2 of the strong doublet (p, n), the weak doublets are regarded as possessing
weak isospin IW = 1/2, the third component (IW)3 of this weak isospin being +1/2 for
the upper components of these doublets and −1/2 for the lower components. These
statements apply to the left-handed quarks and leptons, i.e., those with negative helicity
(i.e., with the spin antiparallel to the momentum) only. The right-handed leptons and
quarks, i.e., those with positive helicity (i.e., with the spin parallel to the momentum),
are weak singlets having weak isospin zero.
The discovery, at Fermi Laboratory, of a new family of vector mesons, the upsilon
family, starting at a mass of 9.4 GeV/c^2, gave evidence for a new quark flavour called
bottom or beauty (b) (Herb et al. 1977; Innes et al. 1977). These vector mesons are,
in fact, bound states of bottomonium (bb̄). These states have since been studied in
detail at the Cornell electron accelerator in an electron-positron storage ring of energy
ideally matched to this mass range. Four such states with masses 9.46, 10.02, 10.35, and
10.58 GeV/c2 have been found, the state with mass 9.46GeV/c2 being the ground state
(Andrews et al. 1980). This implies that the mass of the b quark is ≃ 4.73GeV/c2. The
b quark has spin 1/2 and charge −1/3 e. Furthermore, the b flavoured mesons have
been found with exactly the expected properties (Beherend et al. 1983).
After the discovery of the b quark, the confidence in the existence of the sixth quark
flavour called top or truth (t) increased and it became almost certain that, like leptons,
the quarks also occur in three generations of weak isospin doublets, namely, (u, d), (c, s),
and (t, b). In view of this, intensive search was made for the t quark. But the discovery
of the t quark eluded for eighteen years. However, eventually in 1995, two groups, the
CDF (Collider Detector at Fermilab) Collaboration (Abe et al. 1995) and the DØ
Collaboration (Abachi et al. 1995) succeeded in detecting toponium tt̄ in very high
energy pp̄ collisions at Fermi Laboratory’s 1.8 TeV Tevatron collider. The toponium tt̄
is the bound state of t and t̄. The mass of t has been estimated to be 176.0 ± 2.0 GeV/c^2,
and thus it is the most massive elementary particle known so far. The t quark has spin
1/2 and charge 2/3 e.
Moreover, in order to account for the apparent breaking of the spin-statistics
theorem in certain members of the J^P = 3/2^+ decuplet (spin 3/2, parity even), e.g., Δ++
(uuu) and Ω− (sss), Greenberg (1964) postulated that quark of each flavour comes
in three colours, namely, red, green, and blue, and that real particles are always colour
singlets. This implies that real particles must contain quarks of all the three colours
or colour-anticolour combinations such that they are overall white or colourless. White
or colourless means all the three primary colours are equally mixed or there should
be a combination of a quark of a given colour and an antiquark of the corresponding
anticolour. This means each baryon contains quarks of all the three colours(but not
necessarily of the same flavour) whereas a meson contains a quark of a given colour and
an antiquark having the corresponding anticolour so that each combination is overall
white. Leptons have no colour. Of course, in this context the word ‘colour’ has noth-
ing to do with the actual visual colour, it is just a quantum number specifying a new
internal degree of freedom of a quark.
The concept of colour plays a fundamental role in accounting for the interaction
between quarks. The remarkable success of quantum electrodynamics (QED) in ex-
plaining the interaction between electric charges to an extremely high degree of preci-
sion motivated physicists to explore a similar theory for strong interaction. The result
is quantum chromodynamics (QCD), a non-Abelian gauge theory (Yang-Mills theory),
which closely parallels QED. Drawing analogy from electrodynamics, Nambu (1966)
postulated that the three quark colours are the charges (the Yang-Mills charges) re-
sponsible for the force between quarks just as electric charges are responsible for the
electromagnetic force between charged particles. The analogue of the rule that like
charges repel and unlike charges attract each other is the rule that like colours repel,
and colour and anticolour attract each other. Apart from this, there is another rule in
QCD which states that different colours attract if the quantum state is antisymmetric,
and repel if it is symmetric under exchange of quarks. An important consequence of
this is that if we take three possible pairs, red-green, green-blue, and blue-red, then a
third quark is attracted only if its colour is different and if the quantum state of the
resulting combination is antisymmetric under the exchange of a pair of quarks thus
resulting in red-green-blue baryons. Another consequence of this rule is that a fourth
quark is repelled by one quark of the same colour and attracted by two of different
colours in a baryon but only in antisymmetric combinations. This introduces a factor
of 1/2 in the attractive component and as such the overall force is zero, i.e., the fourth
quark is neither attracted nor repelled by a combination of red-green-blue quarks. In
spite of the fact that hadrons are overall colourless, they feel a residual strong force
due to their coloured constituents.
It was soon realized that if the three colours are to serve as the Yang-Mills charges,
each quark flavour must transform as a triplet of SUc(3) that causes transitions between
quarks of the same flavour but of different colours ( the SU(3) mentioned earlier causes
transitions between quarks of different flavours and hence may more appropriately be
denoted by SUf(3)). However, the SUc(3) Yang-Mills theory requires the introduction
of eight new spin 1 gauge bosons called gluons. Moreover, it is reasonable to stipulate
that the gluons couple to left-handed and right-handed quarks in the same manner
since the strong interactions do not violate the law of conservation of parity. Just as
the force between electric charges arise due to the exchange of a photon, a massless
vector (spin 1) boson, the force between coloured quarks arises due to the exchange
of a gluon. Gluons are also massless vector (spin 1) bosons. A quark may change its
colour by emitting a gluon. For example, a red quark qR may change to a blue quark
qB by emitting a gluon which may be thought to have taken away the red (R) colour
from the quark and given it the blue (B) colour, or, equivalently, the gluon may be
thought to have taken away the red (R) and the antiblue (B̄) colours from the quark.
Consequently, the gluon GRB emitted in the process qR → qB may be regarded as
the composite having the colour RB̄ so that the emitted gluon GRB = qR q̄B. In
general, when a quark qi of colour i changes to a quark qj of colour j by emitting a
gluon Gij, then Gij is the composite state of qi and q̄j, i.e., Gij = qi q̄j. Since there are
three colours and three anticolours, there are 3 × 3 = 9 possible combinations (gluons)
of the form Gij = qi q̄j. However, one of the nine combinations is a special combination
corresponding to the white colour, namely, GW = qR q̄R = qG q̄G = qB q̄B. But there is
no interaction between a coloured object and a white (colourless) object. Consequently,
gluon GW may be thought not to exist. This leads to the conclusion that only 9−1 = 8
kinds of gluons exist. This is a heuristic explanation of the fact that SUc(3) Yang-Mills
gauge theory requires the existence of eight gauge bosons, i.e., the gluons. Moreover, as
the gluons themselves carry colour, gluons may also emit gluons. Another important
consequence of gluons possessing colour is that several gluons may come together and
form gluonium or glue balls. Glueballs have integral spin and no colour and as such
they belong to the meson family.
Though the actual existence of quarks has been indirectly confirmed by experiments
that probe hardronic structure by means of electromagnetic and weak interactions, and
by the production of various quarkonia (qq̄) in high energy collisions made possible by
various particle accelerators, no free quark has been detected in experiments at these
accelerators so far. This fact has been attributed to the infrared slavery of quarks, i.e.,
to the nature of the interaction between quarks responsible for their confinement inside
hadrons. Perhaps enormous amount of energy , much more than what is available in the
existing terrestrial accelerators, is required to liberate the quarks from confinement.
This means the force of attraction between quarks increases with increase in their
separation. This is reminiscent of the force between two bodies connected by an elastic
string.
On the contrary, the results of deep inelastic scattering experiments reveal an al-
together different feature of the interaction between quarks. If one examines quarks at
very short distances (< 10^-13 cm) by observing the scattering of a nonhadronic probe,
e.g., an electron or a neutrino, one finds that quarks move almost freely inside baryons
and mesons as though they are not bound at all. This phenomenon is called the asymp-
totic freedom of quarks. In fact Gross and Wilczek (1973 a,b) and Politzer (1973) have
shown that the running coupling constant of interaction between two quarks vanishes
in the limit of infinite momentum (or equivalently in the limit of zero separation).
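Asymptotic freedom is conveniently illustrated by the standard one-loop result αs(Q) = 12π/((33 − 2nf) ln(Q²/Λ²)), which is not derived in this paper; the values Λ = 0.2 GeV and nf = 5 used below are assumptions for illustration only:

```python
import math

LAMBDA = 0.2   # assumed QCD scale, GeV
NF     = 5     # assumed number of active quark flavours

def alpha_s(Q):
    """One-loop running coupling alpha_s(Q); valid only for Q >> LAMBDA."""
    return 12.0 * math.pi / ((33.0 - 2.0 * NF) * math.log(Q**2 / LAMBDA**2))

# The coupling weakens as the probe scale Q grows (i.e., as separation shrinks)
for Q in (2.0, 10.0, 100.0, 1000.0):
    print(f"Q = {Q:7.1f} GeV  alpha_s = {alpha_s(Q):.4f}")
```

The printed values fall monotonically with Q, a direct numerical counterpart of the vanishing coupling in the limit of infinite momentum.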
5 Eventually what happens to matter in a collapsing black hole?
As mentioned in Section 3 the energy E of the particles comprising the matter in a
collapsing black hole continually increases and so does the density ρ of the matter
whereas the separation s between any pair of particles decreases. During the continual
collapse of the black hole a stage will be reached when E and ρ will be so large and
s so small that the quarks confined in the hadrons will be liberated from the infrared
slavery and will enjoy asymptotic freedom, i.e., the quark deconfinement will occur. In
fact, it has been shown that when the energy E of the particle ∼ 10^2 GeV (s ∼ 10^-16
cm), corresponding to a temperature T ∼ 10^15 K, all interactions are of the Yang-Mills
type with SUc(3) × SUIW(2) × UYW(1) gauge symmetry, where c stands for colour, IW
for weak isospin, and YW for weak hypercharge, and at this stage quark deconfinement
occurs as a result of which matter now consists of its fundamental constituents: spin 1/2
leptons, namely, the electrons, the muons, the tau leptons, and their neutrinos, which
interact only through the electroweak interaction (i.e., the unified electromagnetic and
weak interactions); and the spin 1/2 quarks, u, d, s, c, b, t, which interact electroweakly
as well as through the colour force generated by gluons (Ramond, 1983). In other words,
when E ≥ 10^2 GeV (s ≤ 10^-16 cm), corresponding to T ≥ 10^15 K, the entire matter
in the collapsing black hole will be in the form of quark-gluon plasma permeated by
leptons as suggested by the author earlier (Thakur 1993).
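The quoted thresholds are mutually consistent: at a separation s ∼ 10^-16 cm the uncertainty-principle energy ħc/s is of order 10^2 GeV, and dividing by the Boltzmann constant gives a temperature of order 10^15 K. A quick check (the values of ħc and kB are standard constants, assumed here, not taken from the paper):

```python
# Standard constants (assumed values)
hbar_c = 197.327e-3 * 1.0e-13   # hbar*c in GeV*cm (from 197.327 MeV*fm)
k_B    = 8.617e-14              # Boltzmann constant, GeV per kelvin

s = 1.0e-16                     # interparticle separation, cm
E = hbar_c / s                  # characteristic energy, GeV
T = E / k_B                     # corresponding temperature, K

print(f"E = {E:.1f} GeV")       # of order 10^2 GeV
print(f"T = {T:.2e} K")         # of order 10^15 K
```

This gives E ≈ 197 GeV and T ≈ 2 × 10^15 K, matching the orders of magnitude used in the text.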
Incidentally, it may be mentioned that efforts are being made to create quark-gluon
plasma in terrestrial laboratories. A report released by CERN, the European Organi-
zation for Nuclear Research, at Geneva, on February 10, 2000, said that by smashing
together lead ions at CERN’s accelerator at temperatures 100,000 times as hot as the
Sun’s centre, i.e., at T ∼ 1.5 × 1012K, and energy densities never before reached in
laboratory experiments, a team of 350 scientists from institutes in 20 countries suc-
ceeded in isolating tiny components called quarks from more complex particles such
as protons and neutrons. “A series of experiments using CERN’s lead beam have pre-
sented compelling evidence for the existence of a new state of matter 20 times denser
than nuclear matter, in which quarks instead of being bound up into more complex
particles such as protons and neutrons, are liberated to roam freely ” the report said.
However, the evidence of the creation of quark gluon plasma at CERN is indirect,
involving detection of particles produced when the quark-gluon plasma changes back
to hadrons. The production of these particles can be explained alternatively without
having to have quark-gluon plasma. Therefore, Ulrich Heinz at CERN is of the opinion
that the evidence for the creation of quark-gluon plasma at CERN is not conclusive.
In view of this, CERN will start a new experiment, ALICE, soon (around 2007-2008)
at its Large Hadron Collider (LHC) in order to definitively and conclusively
create QGP.
In the meantime the focus of research on quark-gluon plasma has shifted to the
Relativistic Heavy Ion Collider (RHIC), the world’s newest and largest particle accel-
erator for nuclear research, at Brookhaven National Laboratory in Upton, New York.
RHIC’s goal is to create and study quark-gluon plasma. RHIC’s aim is to create quark-
gluon plasma by head-on collisions of two beams of gold ions at energies 10 times those
of CERN’s programme, which ought to produce a quark-gluon plasma with higher
temperature and longer lifetime thereby allowing much clearer and direct observation.
RHIC’s quark-gluon plasma is expected to be well above the transition temperature
for transition between the ordinary hadronic matter phase and the quark-gluon plasma
phase. This will enable scientists to perform numerous advanced experiments in order
to study the properties of the plasma. The programme at RHIC began in the summer
of 2000 and after two years Thomas Kirk, Brookhaven’s Associate Laboratory Director
for High Energy Nuclear Physics, remarked, “It is too early to say that we have
discovered the quark-gluon plasma, but not too early to mark the tantalizing hints of its
existence.” Other definitive evidence of quark-gluon plasma will come from experimen-
tal comparisons of the behavior in hot, dense nuclear matter with that in cold nuclear
matter. In order to accomplish this, the next round of experimental measurements at
RHIC will involve collisions between heavy ions and light ions, namely, between gold
nuclei and deuterons.
Later, on June 18, 2003 a special scientific colloquium was held at Brookhaven
National Laboratory (BNL) to discuss the latest findings at RHIC. At the colloquium,
it was announced that in the detector system known as STAR (Solenoidal Tracker At
RHIC) head-on collisions between two beams of gold nuclei at an energy of 130 GeV
per nucleon pair resulted in the phenomenon called “jet quenching”. STAR as well as
three other experiments at RHIC, viz., PHENIX, BRAHMS, and PHOBOS, detected
suppression of “leading particles”, highly energetic individual particles that emerge from nuclear
fireballs, in gold-gold collisions. Jet quenching and leading particle suppression are
signs of QGP formation. The findings of the STAR experiment were presented at the
BNL colloquium by Berkeley Laboratory’s NSD ( Nuclear Science Division ) physicist
Peter Jacobs.
6 Collapse of a black hole to a space-time singularity is inhibited by
Pauli’s exclusion principle
As quarks and leptons in the quark-gluon plasma permeated by leptons into which
the entire matter in a collapsing black hole is eventually converted are fermions, the
collapse of a black hole to a space-time singularity in a finite time in a comoving co-
ordinate system, as stipulated by the singularity theorems of Penrose, Hawking and
Geroch, is inhibited by Pauli’s exclusion principle. For, if a black hole collapses to
a point in 3-space, all the quarks of a given flavour and colour would be squeezed
into just two quantum states available at that point, one for spin up and the other
for spin down quark of that flavour and colour. This would violate Pauli’s exclusion
principle according to which not more than one fermion of a given species can occupy
any quantum state. So would be the case with quarks of each distinct combination of
colour and flavour as well as with leptons of each species, namely, e, µ, τ, νe, νµ and ντ .
Consequently, a black hole cannot collapse to a space-time singularity in contravention
to Pauli’s exclusion principle. Then the question arises : If a black hole does not collapse
to a space-time singularity, what is its ultimate fate? In section 7 three possibilities
have been suggested.
7 Ultimately how does a black hole end up?
The pressure P inside a black hole is given by

P = Pr + Σij Pij + Σij P̄ij + Σk Pk + Σk P̄k (11)

where Pr is the radiation pressure, Pij the pressure of the relativistically degenerate
quarks of the ith flavour and jth colour, Pk the pressure of the relativistically degenerate
leptons of the kth species, P̄ij the pressure of the relativistically degenerate antiquarks of
the ith flavour and jth colour, P̄k that of the relativistically degenerate antileptons of
the kth species. In equation (11) the summations over i and j extend over all the six
flavours and the three colours of quarks, and that over k extends over all the six species
of leptons. However, calculation of these pressures is prohibitively difficult for several
reasons. For example, the standard methods of statistical mechanics for calculation of
pressure and equation of state are applicable when the system is in thermodynamic
equilibrium and when its volume is very large, so large that for practical purposes
we may treat it as infinite. Obviously, in a gravitationally collapsing black hole, the
photon, quark and lepton gases cannot be in thermodynamic equilibrium nor can their
volume be treated as infinite. Moreover, at ultrahigh energies and densities, because
of the SUIW (2) gauge symmetry, transitions between the upper and lower components
of quark and lepton doublets occur very frequently. In addition to this, because of the
SUf (3) and SUc(3) gauge symmetries transitions between quarks of different flavours
and colours also occur. Furthermore, pair production and pair annihilation of quarks
and leptons create additional complications. Apart from these, various other nuclear
reactions may as well occur. Consequently, it is practically impossible to determine the
number density and hence the contribution to the overall pressure P inside the black
hole by any species of elementary particle in a collapsing black hole when E ≥ 10^2 GeV
(s ≤ 10^-16 cm), or equivalently, T ≥ 10^15 K. However, it may not be unreasonable
to assume that, during the gravitational collapse, the pressure P inside a black hole
increases monotonically with the increase in the density of matter ρ. Actually, it might
be given by the polytrope, P = Kρ^((n+1)/n), where K is a constant and n is the
polytropic index. Consequently, P → ∞ as ρ → ∞, i.e., P → ∞ as the scale factor
R(t) → 0 (or equivalently s → 0). In view of this, there are three possible ways in
which a black hole may end up.
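A minimal sketch of the assumed polytropic relation (the values of K and n below are purely illustrative, not from the paper) shows the pressure growing without bound with the density:

```python
K = 1.0   # illustrative polytropic constant
n = 3.0   # illustrative polytropic index

def pressure(rho):
    """Polytrope P = K * rho^((n+1)/n)."""
    return K * rho ** ((n + 1.0) / n)

# P increases monotonically and diverges as rho -> infinity
for rho in (1e10, 1e14, 1e18):
    print(f"rho = {rho:.0e}  P = {pressure(rho):.3e}")
```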
1. During the gravitational collapse of a black hole, at a certain stage, the pressure
P may be enough to withstand the gravitational force and the object may become
gravitationally stable. Since at this stage the object consists entirely of quark-gluon
plasma permeated by leptons, it means it would end up as a stable quark star. Indeed,
such a possibility seems to exist. Recently, two teams - one led by David Helfand of
Columbia University, New York (Slane, Helfand, and Murray 2002) and another led
by Jeremy Drake of Harvard-Smithsonian Centre for Astrophysics, Cambridge, Mass.
USA (Drake et al. 2002) studied independently two objects, 3C58 in Cassiopeia, and
RXJ1856.5-3754 in Corona Australis respectively by combining data from the NASA’s
Chandra X-ray Observatory and the Hubble Space Telescope, that seemed, at first, to
be neutron stars, but, on closer look, each of these objects showed evidence of being
an even smaller and denser object, possibly a quark star.
2. Since the collapse of a black hole is inhibited by Pauli’s exclusion principle, it can
collapse only upto a certain minimum radius, say, rmin. After this, because of the
tremendous amount of kinetic energy, it would bounce back and expand, but only
upto the event horizon, i.e., upto the gravitational (Schwarzschild ) radius rg since,
according to the GTR, it cannot cross the event horizon. Thereafter it would collapse
again upto the radius rmin and then bounce back upto the radius rg . This process of
collapse upto the radius rmin and bounce upto the radius rg would occur repeatedly. In
other words, the black hole would continually pulsate radially between the radii rmin
and rg and thus become a pulsating quark star. However, this pulsation would cause
periodic variations in the gravitational field outside the event horizon and thus produce
gravitational waves which would propagate radially outwards in all directions from just
outside the event horizon. In this way the pulsating quark star would act as a source
of gravitational waves. The pulsation may take a very long time to damp out since the
energy of the quark star (black hole) cannot escape outside the event horizon except
via the gravitational radiation produced outside the event horizon. However, gluons in
the quark-gluon plasma may also act as a damping agent. In the absence of damping,
which is quite unlikely, the black hole would end up as a perpetually pulsating quark
star.
3. The third possibility is that eventually a black hole may explode; a mini bang of a sort
may occur, and it may, after the explosion, expand beyond the event horizon though it
has been emphasized by Zeldovich and Novikov (1971) that after a collapsing sphere’s
radius decreases to r < rg in a finite proper time, its expansion into the external space
from which the contraction originated is impossible, even if the passage of matter
through infinite density is assumed.
Notwithstanding Zeldovich and Novikov’s contention based on the very concept
of event horizon, a gravitationally collapsing black hole may also explode by the very
same mechanism by which the big bang occurred, if indeed it did occur. This can be
seen as follows. At the present epoch the volume of the universe is ∼ 1.5 × 10^85 cm^3
and the density of the galactic material throughout the universe is ∼ 2 × 10^-31 g cm^-3
(Allen 1973). Hence, a conservative estimate of the mass of the universe is ∼ 1.5 ×
10^85 × 2 × 10^-31 g = 3 × 10^54 g. However, according to the big bang model, before the
big bang, the entire matter in the universe was contained in an ylem which occupied a
very small volume. The gravitational radius of the ylem of mass 3 × 10^54 g was
4.45 × 10^21 km (it must have been larger if the actual mass of the universe were taken
into account, which is greater than 3 × 10^54 g). Obviously, the radius of the ylem was
many orders of magnitude smaller than its gravitational radius, and yet the ylem
exploded with a big bang, and in due course of time crossed the event horizon and
expanded beyond it upto the present Hubble distance c/H0 ∼ 1.5 × 10^23 km where
c is the speed of light in vacuum and H0 the Hubble constant at the present epoch.
Consequently, if the ylem could explode in spite of Zeldovich and Novikov’s contention,
a gravitationally collapsing black hole can also explode, and in due course of time
expand beyond the event horizon. The origin of the big bang, i.e., the mechanism by
which the ylem exploded, is not definitively known. However, the author has, earlier
proposed a viable mechanism (Thakur 1992) based on supersymmetry/supergravity.
But supersymmetry/supergravity have not yet been validated experimentally.
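The arithmetic in the above estimate checks out. The sketch below (standard CGS values of G and c assumed; they are not quoted in the paper) recomputes the mass of the universe and the gravitational radius of the ylem:

```python
# Standard CGS constants (assumed values)
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10        # speed of light in vacuum, cm s^-1

# Mass of the universe from volume * density, as in the text
V   = 1.5e85        # present volume of the universe, cm^3
rho = 2.0e-31       # density of galactic material, g cm^-3
M   = V * rho       # = 3e54 g

# Gravitational (Schwarzschild) radius of the ylem
Rg_cm = 2.0 * G * M / c**2
Rg_km = Rg_cm / 1.0e5

print(f"M  = {M:.1e} g")
print(f"Rg = {Rg_km:.2e} km")   # ~4.45e21 km, as quoted in the text
```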
8 Conclusion
From the foregoing three inferences may be drawn. One, eventually the entire matter
in a collapsing black hole is converted into quark-gluon plasma permeated by leptons.
Two, the collapse of a black hole to a space - time singularity is inhibited by Pauli’s
exclusion principle. Three, ultimately a black hole may end up in one of the three
possible ways suggested in section 7.
Acknowledgements The author thanks Professor S. K. Pandey, Co-ordinator, IUCAA Ref-
erence Centre, School of Studies in Physics, Pt. Ravishankar Shukla University, Raipur, for
making available the facilities of the Centre. He also thanks Sudhanshu Barway and Mousumi Das
for typing the manuscript.
References
1. Abachi S., et al. , 1995, PRL, 74, 2632
2. Abe F., et al. , 1995, PRL, 74, 2626
3. Allen C. W., 1973, Astrophysical Quantities, The Athlone Press, University of London, 293
4. Andrew D., et al. , 1980, PRL, 44, 1108
5. Aubert J. J., et al. , 1974, PRL, 33, 1404
6. Augustin J. E., et al. , 1974, PRL, 33, 1406
7. Barber D.P.,et al. ,1979, PRL, 43, 1915
8. Beherend S., et al. , 1983, PRL, 50, 881
9. Drake J. et al. , 2002, ApJ, 572, 996
10. Gaillard M. K., Lee B. W., 1974, PRD, 10, 897
11. Gell-Mann M., Ne'eman Y., 1964, The Eightfold Way, W. A. Benjamin, New York
12. Geroch R. P., 1966, PRL, 17, 445
13. Geroch R. P., 1967, Singularities in Spacetime of General Relativity: Their Definition,
Existence and Local Characterization, Ph.D. Thesis, Princeton University
14. Geroch, R. P., 1968, Ann. Phys., 48, 526
15. Glashow S. L., Iliopoulos J., Maiani L., 1970, PRD, 2, 1285
16. Greenberg O. W., 1964, PRL, 13, 598
17. Gross D. J., Wilczek F., 1973a, PRL, 30, 1343
18. Gross D. J., Wilczek F., 1973b, PRD, 8, 3633
19. Hawking S. W., 1966a, Proc. Roy. Soc., 294A, 511
20. Hawking S. W., 1966b, Proc. Roy. Soc., 295A, 490
21. Hawking S. W., 1967a, Proc. Roy. Soc., 300A, 187
22. Hawking S. W., 1967b, Proc. Roy. Soc., 308A, 433
23. Hawking S. W., Penrose R., 1970, Proc. Roy. Soc., 314A, 529
24. Herb S. W., et al. , 1977, PRL, 39, 252
25. Hofstadter R., McAllister R. W., 1955, PR, 98, 217
26. Huang K., 1982, Quarks, Leptons and Gauge Fields, World Scientific, Singapore
27. Hughes I. S., 1991, Elementary Particles, Cambridge Univ. Press, Cambridge
28. Innes W. R., et al. , 1977, PRL, 39, 1240
29. Jastrow R., 1951, PR, 81, 165
30. Misner C. W., Thorne K. S., Wheeler J. A., 1973, Gravitation, Freeman, New York, 857
31. Nambu Y., 1966, in A. de Shalit (Ed.), Preludes in Theoretical Physics, North-Holland,
Amsterdam
32. Narlikar J. V., 1978,Lectures on General Relativity and Cosmology, The MacMillan Com-
pany of India Limited, Bombay, 152
33. Oppenheimer,J. R., Snyder H., 1939, PR 56, 455
34. Penrose R., 1965, PRL, 14, 57
35. Penrose R., 1969, Riv. Nuovo Cimento, 1, Numero Speciale, 252
36. Politzer H. D., 1973, PRL, 30, 1346
37. Ramond P., 1983, Ann. Rev. Nucl. Part. Sc., 33, 31
38. Scotti A., Wong D. W., 1965, PR, 138B, 145
39. Slane P. O., Helfand D. J., Murray, S. S., 2002, ApJL, 571, 45
40. Thakur R. K., 1983, Ap&SS, 91, 285
41. Thakur R. K., 1992, Ap&SS, 190, 281
42. Thakur R. K., 1993, Ap&SS, 199, 159
43. Thakur R. K., 1995, Space Science Reviews, 73, 273
44. Weinberg S., 1972a, Gravitation and Cosmology, John Wiley & Sons, New York, 318
45. Weinberg S., 1972b, Gravitation and Cosmology, John Wiley & Sons, New York. 342-349
46. Zeldovich Y. B., Novikov I. D., 1971, Relativistic Astrophysics, Vol. I, University of Chicago
Press, Chicago, 144-148
47. Zweig G., 1964, Unpublished CERN Report
|
0704.1662 | Right-Handed Quark Mixings in Minimal Left-Right Symmetric Model with
General CP Violation | Right-Handed Quark Mixings in Minimal
Left-Right Symmetric Model with General CP Violation
Yue Zhang,1, 2 Haipeng An,2 Xiangdong Ji,2, 1 and R. N. Mohapatra2
1Center for High-Energy Physics and Institute of Theoretical Physics,
Peking University, Beijing 100871, China
2Department of Physics, University of Maryland,
College Park, Maryland 20742, USA
(Dated: October 31, 2018)
Abstract
We present a systematic approach to solve analytically for the right-handed quark mixings in
the minimal left-right symmetric model which generally has both explicit and spontaneous CP
violations. The leading-order result has the same hierarchical structure as the left-handed CKM
mixing, but with additional CP phases originating from a spontaneous CP-violating phase in
the Higgs vev. We explore the phenomenology entailed by the new right-handed mixing matrix,
particularly the bounds on the mass of WR and the CP phase of the Higgs vev.
The physics beyond the standard model (SM) has been the central focus of high-energy
phenomenology for more than three decades. Many proposals, including supersymmetry,
technicolors, little Higgs, and extra dimensions, have been made and studied thoroughly in
the literature; tests are soon to be made at the Large Hadron Collider (LHC). One of the
earliest proposals, the left-right symmetric (LR) model [1], was motivated by the hypothesis
that parity is a perfect symmetry at high-energy, and is broken spontaneously at low-energy
due to an asymmetric vacuum. Asymptotic restoration of parity has a definite aesthetic
appeal [2]. This model, based on the gauge group SU(2)L × SU(2)R × U(1)B−L, has a
number of additional attractive features, including a natural explanation of the weak hypercharge
in terms of baryon and lepton numbers [3], the existence of right-handed neutrinos,
and the possibility of spontaneous CP (charge-conjugation-parity) violation (SCPV) [4].
The model can easily be constrained by low-energy physics and predict clear signatures at
colliders [5]. It so far remains a decent possibility for new physics.
The LR models are best constrained at low energies by flavor-changing mixings and decays,
particularly the CP-violating observables. In making theoretical predictions, the major
uncertainty comes from the unknown right-handed quark mixing matrix, conceptually simi-
lar to the left-handed quark Cabibbo-Kobayashi-Maskawa (CKM) mixing. The new mixing
generally depends on 9 real parameters: 6 CP violation phases and 3 rotational angles. Over
the years, two limiting cases of the model have usually been studied. The first case, “mani-
fest left-right symmetry”, assumes that there is no SCPV, i.e., all Higgs vacuum expectation
values (vev) are real. The quark mass matrices are then hermitian, and the left and right-
handed quark mixings become identical, modulo the sign uncertainty from possible negative
quark masses. The reality of the Higgs vev, however, does not survive radiative corrections
which generate infinite renormalization. The second case, “pseudo-manifest left-right sym-
metry”, assumes that the CP violation comes entirely from spontaneous symmetry breaking
(SSB) and that all Yukawa couplings are real [6]. Here the quark mass matrices are complex
but symmetric, the right-handed quark mixing is related to the complex conjugate of the
CKM matrix multiplied by additional CP phases. There are few studies of the model with
general CP violation in the literature [7], with the exception of an extensive numerical study
in Ref. [8] where solutions were generated through a Monte Carlo method.
In this paper, we report a systematic approach to solve analytically for the right-handed
quark mixings in the minimal LR model with general CP violation. As is well-known,
the model has a Higgs bi-doublet whose vev’s are complex, leading to both explicit and
spontaneous CP violations. The approach is based on the fact that mt ≫ mb and hence the
ratio of the two vev’s of the Higgs bi-doublet, ξ = κ′/κ, is small. In the leading-order in
ξ, we find a linear matrix equation for the right-handed quark mixing which can readily be
solved. We present an analytical solution of this equation valid to O(λ3), where λ = sin θC
is the Cabibbo mixing parameter. The leading-order solution is very close to the left-handed
CKM matrix, apart from additional phases that are fixed by ξ, the spontaneous CP phase
α, and the quark masses. This explicit right-handed quark mixing allows definitive studies
of the neutral meson mixing and CP-violating observables. We use the experimental data
on kaon and B-meson mixings and the neutron electric dipole moment (EDM) to constrain
the mass of WR and the SCPV phase α.
The matter content of the LR model is the same as the standard model (SM), except for a
right-handed neutrino for each family which, together with the right-handed charged lepton,
forms a SU(2)R doublet. The Higgs sector contains a bi-doublet φ, which transforms like
(2,2,0) of the gauge group, and the left and right triplets ∆L,R, which transform as (3, 1, 2)
and (1, 3, 2), respectively. The gauge group is broken spontaneously into the SM group
SU(2)L × U(1)Y at scale vR through the vev of ∆R. The breaking of the SM group is
accomplished through vev’s of φ.
The most general renormalizable Higgs potential can be found in Ref. [9]. Only one of
the parameters, α2, which describes an interaction between the bi-doublet and triplet Higgs,
is complex, and induces an explicit CP violation in the Higgs potential. It is known in the
literature that when this parameter is real, SCPV does not occur if the SM group is to be
recovered in the decoupling limit vR → ∞ [9]. Without SCPV, the Yukawa couplings in the
quark sector are hermitian, and we have the manifest left-right symmetry limit. Here we
are interested in the general case when α2 is complex. A complex α2 allows spontaneous CP
violation as well, generating a finite phase α for the vevs of φ,
〈φ〉 = diag( κ , κ′ e^{iα} ) . (1)
In reference [10], a relation was derived between α and the phase δ2 of α2,

α ∼ sin^{−1}( 2|α2| sin δ2 / α3 ) , (2)

where α3 is another interaction parameter between the Higgs bi-doublet and triplets.
The quark masses in the model are generated from the Yukawa coupling,
LY = q̄(hφ+ h̃φ̃)q + h.c. . (3)
Parity symmetry (φ → φ†, qL → qR) constrains h and h̃ be hermitian matrices. After SSB,
the above lagrangian yields the following quark mass matrices,
Mu = κ h + κ′ e^{−iα} h̃ ,
Md = κ′ e^{iα} h + κ h̃ . (4)
Because of the non-zero α, both Mu and Md are non-hermitian. Therefore, the right-handed
quark mixing can in principle be very different from that of its left-handed counterpart.
Since the top quark mass is much larger than that of the bottom quark, one may assume,
without loss of generality, κ′ ≪ κ, while at the same time h̃ is at most of the same order as h.
We parameterize κ′/κ = rmb/mt, where r is a parameter of order unity. As a consequence,
Mu is nearly hermitian, and one may neglect the second term to leading order in ξ. One
can account for it systematically in the ξ expansion if the precision of a calculation demands it.
Now h can be diagonalized by a unitary matrix Uu,
Mu = Uu M̂u S U†u = κh , (5)

where M̂u is diag(mu, mc, mt), and S is a diagonal sign matrix, diag(su, sc, st), satisfying
S^2 = 1. Replacing the h-matrix in Md by the above expression, one finds
e^{iα} ξ M̂u + κ U†u h̃ Uu S = VL M̂d V†R (6)
where M̂d is diag(md, ms, mb), VL is the CKM matrix and VR is the right-handed mixing
matrix that we are after. Two comments are in order. First, through redefinitions of quark
fields, one can bring VL to the standard CKM form with four parameters (3 rotations and
1 CP violating phase) and the above equation remains the same. Second, all parameters in
the unitary matrix VR are now physical, including 3 rotations and 6 CP-violating phases.
To make further progress, one uses the hermiticity condition for U†u h̃ Uu, which yields the
following equation,

M̂d V̂†R − V̂R M̂d = 2iξ sinα V†L M̂u S VL (7)
where V̂R is the quotient between the left and right mixing VR = SVLV̂R. There are a total
of 9 equations above, which are sufficient to solve 9 parameters in V̂R. It is interesting to
note that if there is no SCPV, α = 0, the solution is simply VR = S VL S̃, where S̃ is another
diagonal sign matrix, diag(sd, ss, sb), satisfying S̃^2 = 1. We recover the manifest left-right
symmetry case.
The above linear equation can be solved using various methods. The simplest is to utilize
the hierarchy among the down-type quark masses. Multiplying out the left-hand side and
assuming V̂Rij and V̂†Rij are of the same order, which can be justified a posteriori, the solution
is (for ρ sinα ≤ 1)

Im V̂R11 = −r sinα (mb λ^2/md) [ (mc/mt) sc + A^2 λ^4 ((1 − ρ)^2 + η^2) st ] (8)

Im V̂R22 = −r sinα (mb/ms) [ (mc/mt) sc + A^2 λ^4 st ] (9)

Im V̂R33 = −r sinα st (10)

V̂R12 = 2ir sinα (mb λ/ms) [ (mc/mt) sc + λ^4 A^2 (1 − ρ + iη) st ] (11)

V̂R13 = −2ir sinα A λ^3 (1 − ρ + iη) st (12)

V̂R23 = 2ir sinα A λ^2 st , (13)

where Im V̂R11, Im V̂R22, Im V̂R33, V̂R12, V̂R23, and V̂R13 are of order λ, λ, 1, λ^2, λ^2, and λ^3,
respectively. The above solution allows us to construct the right-handed mixing entirely
to order O(λ^3),
VR = PU V PD , (14)

where the factors PU = diag(su, sc exp(2iθ2), st exp(2iθ3)), PD = diag(sd exp(iθ1), ss exp(−iθ2), sb exp(−iθ3)), and

V = ( 1 − λ^2/2           λ                Aλ^3(ρ − iη)   )
    ( −λ                  1 − λ^2/2        Aλ^2 e^{−2iθ2} )
    ( Aλ^3(1 − ρ − iη)    −Aλ^2 e^{2iθ2}   1              ) ; (15)

with θi = s̃i sin^{−1}(Im V̂Rii). The pseudo-manifest limit is recovered when η = 0.
A few remarks about the above result are in order. First, the hierarchical structure of
the mixing is similar to that of the CKM, namely 1-2 mixing is of order λ, 1-3 order λ3 and
2-3 order λ2. Second, every element has a significant CP phase. The elements involving the
first two families have CP phases of order λ, and the phases involving the third family are
of order 1. These phases are all related to the single SCPV phase α, and can produce rich
phenomenology for K and B meson systems as well as the neutron EDM. Third, depending
on signs of the quark masses, there are 25 = 32 discrete solutions. Finally, using the right-
handed mixing at leading order in ξ, one can construct h̃ from Eq. (6) and solve Mu with a
better approximation. The iteration yields a systematic expansion in ξ.
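As a numerical sanity check, the right-hand side of Eq. (7) can be evaluated element by element and compared with the leading-order formulas (10), (12), and (13) for the third-family entries. The sketch below is ours, not the paper's code, and uses illustrative Wolfenstein parameters and quark masses (in GeV), all mass signs positive, r = 1, and α = π/4:

```python
import math

# Illustrative inputs (not the paper's fitted values)
lam, A, rho, eta = 0.226, 0.81, 0.14, 0.35
Mu = [0.002, 1.25, 172.0]          # m_u, m_c, m_t in GeV
Md = [0.005, 0.095, 4.2]           # m_d, m_s, m_b in GeV
s = [1, 1, 1]                      # sign matrix S = diag(s_u, s_c, s_t)
r, alpha = 1.0, math.pi / 4
xi = r * Md[2] / Mu[2]             # xi = kappa'/kappa = r m_b/m_t

# CKM matrix V_L in the Wolfenstein parametrization, to O(lambda^3)
VL = [[1 - lam**2 / 2, lam, A * lam**3 * (rho - 1j * eta)],
      [-lam, 1 - lam**2 / 2, A * lam**2],
      [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2, 1]]

def R(i, j):
    """(i, j) element of the RHS of Eq. (7): 2 i xi sin(alpha) V_L^dag M_u S V_L."""
    return 2j * xi * math.sin(alpha) * sum(
        VL[k][i].conjugate() * Mu[k] * s[k] * VL[k][j] for k in range(3))

# Element-wise solution of Eq. (7): the diagonal fixes Im(Vhat_ii); for i < j the
# (i, j) equation gives Vhat_ij ~ -R_ij / m_dj, since m_di << m_dj.
ImV33 = -R(2, 2).imag / (2 * Md[2])
V13 = -R(0, 2) / Md[2]
V23 = -R(1, 2) / Md[2]

# Leading-order analytic formulas (10), (12), (13)
ImV33_f = -r * math.sin(alpha) * s[2]
V13_f = -2j * r * math.sin(alpha) * A * lam**3 * (1 - rho + 1j * eta) * s[2]
V23_f = 2j * r * math.sin(alpha) * A * lam**2 * s[2]

for num, form in [(ImV33, ImV33_f), (V13, V13_f), (V23, V23_f)]:
    assert abs(num / form - 1) < 0.1   # agreement up to higher orders in lambda
```

The residual differences come from the subleading quark-mass and CKM terms dropped in the closed-form solution, so they shrink with higher powers of λ.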
In the remainder of this paper, we consider the kaon and B-meson mixing as well as
the neutron EMD. We will first study the contribution to the KL − KS mass difference
∆MK and derive an improved bound on the mass of right-handed gauge boson WR, using
the updated hadronic matrix elements and strange quark mass. Then we calculate the CP
violation parameter ǫ in KL decay and the neutron EDM, deriving an independent bound on
MWR. Finally, we consider the implications of the model in the B-meson system, deriving
yet another bound on MWR.
The leading non-SM contribution to the K^0 − K̄^0 mixing comes from the WL −WR box
diagram and the tree-level flavor-changing neutral-Higgs (FCNH) diagram [11, 12]. The
latter contribution has the same sign as the former, and is inversely proportional to the square
of the FCNH boson masses. We assume large Higgs boson masses (> 20TeV) from a large
α3 in the Higgs potential so that the contribution to the mixing is negligible. Henceforth we
concentrate on the box diagram only.
Because of the strong hierarchical structure in the left and right quark mixing, the WL−
WR box contribution to the kaon mixing comes mostly from the intermediate charm quark,
H12 = [ GF α / (4π sin^2 θW) ] 2η xc λ^{LR}_c λ^{RL}_c × [4(1 + ln xc) + ln η] [ (d̄s)^2 − (d̄γ5 s)^2 ] + h.c. , (16)

where xc = m^2_c/M^2_{WL}, η = M^2_{WL}/M^2_{WR}, λ^{RL}_c = V*_{Rcd} V_{Lcs}, and λ^{LR}_c = V*_{Lcd} V_{Rcs}. The above
result is very similar to that from the manifest-symmetry limit because the phases in VRcd
and VRcs are O(λ). Therefore, we expect a similar bound on MWR as derived in previous
work [11]. However, the rapid progress in lattice QCD calculations warrants an update.
When the QCD radiative corrections are taken into account explicitly, the above effective
Hamiltonian will be multiplied by an additional factor η4. [We neglect contributions of other
operators with small coefficients.] In the leading-logarithmic approximation, η4 is about 1.4
when the four-quark operators are defined at the scale of 2 GeV in the MS scheme [13].
The hadronic matrix element of the above operator can be calculated in lattice QCD and
expressed in terms of a factorized form
〈K̄^0| d̄(1 − γ5)s d̄(1 + γ5)s |K^0〉 = 2 MK f^2_K B4(µ) [ MK / (ms(µ) + md(µ)) ]^2 . (17)
Using the domain-wall fermion, one finds B4 = 0.81 at µ = 2 GeV in naive dimensional
regularization (NDR) scheme [14]. In the same scheme and scale, the strange quark mass
is ms = 98(6) MeV. Using the standard assumption that the new physics contribution shall
be less than the experimental value, one finds
MWR > 2.5 TeV , (18)
which is now the bound in the model with the general CP violation. This bound is stronger
than similar ones obtained before because of the new chiral-symmetric calculation of B4 and
the updated value of the strange quark mass.
The most interesting predictions of VR are for CP-violating observables. We first study
the CP violating parameter ǫ in KL decay. When the SCPV phase α = 0, the WL−WR box
diagram still makes a significant contribution to ǫ from the phase δCKM of the CKM matrix.
The experimental data then requires WR be at least 20 TeV to suppress this contribution.
When α ≠ 0, it is possible to relax the constraint by cancelations. The most significant
contribution due to α comes from the element VRcd, which is naturally of order λ. In
the presence of α, we have an approximate expression for ǫLR,

ǫLR = 0.77 (1 TeV/MWR)^2 ss sd Im[ g(MR, θ2, θ3) e^{−i(θ1+θ2)} ] , (19)

where the function g(MR, θ2, θ3) = −2.22 + [0.076 + (0.030 + 0.013i) cos 2(θ2 − θ3)] ln( MWR / 80 GeV ).
The required value of r sinα for cancelation depends sensitively on the sign of quark masses.
When ss = sd, there exist small r sinα solutions even when MWR is as low as 1 TeV, as
shown by solid dots in Fig. 1. However, when ss = −sd, only large r sinα solutions are
possible.
FIG. 1: Constraints on the mass of WR and the spontaneous CP-violating parameter r sinα from
ǫ (red dots) and neutron EDM in two different limits: small r ∼ 0.1 (green dots) and large r ∼ 1
(blue triangles).
An intriguing feature appears if one considers the constraint from the neutron EDM as
well. A calculation of EDM is generally complicated because of strongly interacting quarks
inside the neutron. As an estimate, one can work in the quark models by first calculating
the EDM of the constituent quarks. In our model, there is a dominant contribution from
the WL − WR boson mixing [15]. Requiring the theoretical value be below the current
experimental bound, the neutron EDM prefers small r sinα solutions as can be seen in Fig.
1. The combined constraints from ǫ and the neutron EDM (dn < 3 × 10^−26 e cm [16]) impose
an independent bound on MWR
MWR > (2− 6) TeV , (20)
where the lowest bound is obtained for small r ∼ 0.05 and large CP phase α = π/2.
Finally we consider the neutral B-meson mixing and CP-violating decays. In Bd − B̄d
and Bs − B̄s mixing, due to the heavy b-quark mass, there is no chiral enhancement in
the hadronic matrix elements of ∆B = 2 operators from WL −WR box diagram as in the
kaon case. One generally expects the constraint from neutral B-meson mass difference to
be weak. In fact, we find a lower bound on WR mass of 1-2 TeV from B-mixing. On the
other hand, CP asymmetry in decay Bd → J/ψKS, SJ/ψKS = sin 2β in the standard model,
receives a new contribution from both Bd − B̄d and K^0 − K̄^0 mixing in the presence of WR
[7] and is very sensitive to the relative sign of sd and ss. By demanding that the modified sin 2βeff
stay within the experimental error bar, we find another independent bound on MWR ,
MWR > 2.4 TeV , (21)
when sd = ss as required by the neutron EDM bound.
To summarize, we have derived analytically the right-handed quark mixing in the minimal
left-right symmetric model with general CP violation. Using this and the kaon and B-meson
mixing and the neutron EDM bound, we derive new bounds on the mass of the right-handed
gauge boson, consistently above 2.5 TeV. To relax this constraint, one can consider models
with different Higgs structure and/or supersymmetrize the theory. A more detailed account
of the present work, including direct CP observables, will be published elsewhere [17].
This work was partially supported by the U. S. Department of Energy via grant DE-
FG02-93ER-40762. Y. Z. acknowledges the hospitality and support from the TQHN group
at University of Maryland and a partial support from NSFC grants 10421503 and 10625521.
X. J. is supported partially by a grant from NSFC and a ChangJiang Scholarship at Peking
University. R. N. M. is supported by NSF grant No. PHY-0354407.
[1] J. C. Pati and A. Salam, Phys. Rev. D 10, 275 (1974); R. N. Mohapatra and J. C. Pati, Phys.
Rev. D 11, 566 (1975); Phys. Rev. D 11, 2558 (1975); G. Senjanovic and R. N. Mohapatra,
Phys. Rev. D 12, 1502 (1975).
[2] T. D. Lee, talk given at the Center for High-Energy Physics, Peking University, Nov. 2006.
[3] R. E. Marshak and R. N. Mohapatra, Phys. Lett. B 91, 222 (1980).
[4] T. D. Lee, Phys. Rev. D 8, 1226 (1973).
[5] S. N. Gninenko, N. M. Kirsanov, N. V. Krasnikov, and V. A. Matveev, CMS Note 2006/098,
June, 2006.
[6] D. Chang, Nucl. Phys. B 214, 435 (1983); J. M. Frere, J. Galand, A. Le Yaouanc, L. Oliver,
O. Pene and J. C. Raynal, Phys. Rev. D 46, 337 (1992); G. Barenboim, J. Bernabeu and
M. Raidal, Nucl. Phys. B 478, 527 (1996); P. Ball, J. M. Frere and J. Matias, Nucl. Phys. B
572, 3 (2000);
[7] P. Langacker and S. Uma Sankar, Phys. Rev. D 40, 1569 (1989); G. Barenboim, J. Bernabeu,
J. Prades and M. Raidal, Phys. Rev. D 55, 4213 (1997);
[8] K. Kiers, J. Kolb, J. Lee, A. Soni and G. H. Wu, Phys. Rev. D 66, 095002 (2002).
[9] N. G. Deshpande, J. F. Gunion, B. Kayser and F. I. Olness, Phys. Rev. D 44, 837 (1991).
G. Barenboim, M. Gorbahn, U. Nierste and M. Raidal, Phys. Rev. D 65, 095003 (2002);
[10] K. Kiers, M. Assis and A. A. Petrov, Phys. Rev. D 71, 115015 (2005).
[11] G. Beall, M. Bander and A. Soni, Phys. Rev. Lett. 48, 848 (1982); R. N. Mohapatra, G. Sen-
janovic and M. D. Tran, Phys. Rev. D 28, 546 (1983); F. J. Gilman and M. H. Reno, Phys.
Rev. D 29, 937 (1984); S. Sahoo, L. Maharana, A. Roul and S. Acharya, Int. J. Mod. Phys.
A 20, 2625 (2005); M. E. Pospelov, Phys. Rev. D 56, 259 (1997).
[12] D. Chang, J. Basecq, L. F. Li and P. B. Pal, Phys. Rev. D 30, 1601 (1984); W. S. Hou and
A. Soni, Phys. Rev. D 32, 163 (1985); J. Basecq, L. F. Li and P. B. Pal, Phys. Rev. D 32,
175 (1985).
[13] G. Ecker and W. Grimus, Nucl. Phys. B 258, 328 (1985); A. J. Buras, S. Jager and J. Urban,
Nucl. Phys. B 605, 600 (2001); A. J. Buras, arXiv:hep-ph/9806471.
[14] R. Babich, N. Garron, C. Hoelbling, J. Howard, L. Lellouch and C. Rebbi, Phys. Rev. D 74,
073009 (2006); Y. Nakamura et al. [CP-PACS Collaboration], arXiv:hep-lat/0610075.
[15] G. Beall and A. Soni, Phys. Rev. Lett. 47, 552 (1981); J. M. Frere, J. Galand, A. Le Yaouanc,
L. Oliver, O. Pene and J. C. Raynal, Phys. Rev. D 45, 259 (1992).
[16] C. A. Baker et al., Phys. Rev. Lett. 97, 131801 (2006) [arXiv:hep-ex/0602020].
[17] Y. Zhang, H. An, X. Ji and R. N. Mohapatra, to be published.
|
0704.1663 | Dynamics of single polymers under extreme confinement | Dynamics of single polymers under extreme confinement
Armin Rahmani1, Claudio Castelnovo1,2, Jeremy Schmit3 and Claudio Chamon1
1 Department of Physics, Boston University, Boston, MA 02215 USA,
2 Rudolf Peierls Centre for Theoretical Physics, University of Oxford, UK, and
3 Department of Physics, Brandeis University, Waltham MA 02454 USA.
(Dated: October 24, 2018)
Abstract
We study the dynamics of a single chain polymer confined to a two dimensional cell. We introduce
a kinetically constrained lattice gas model that preserves the connectivity of the chain, and we use this
kinetically constrained model to study the dynamics of the polymer at varying densities through Monte
Carlo simulations. Even at densities close to the fully-packed configuration, we find that the monomers
comprising the chain manage to diffuse around the box with a root mean square displacement of the order
of the box dimensions over time scales for which the overall geometry of the polymer is, nevertheless,
largely preserved. To capture this shape persistence, we define the local tangent field and study the two-
time tangent-tangent correlation function, which exhibits a glass-like behavior. In both closed and open
chains, we observe reptational motion and reshaping through local fingering events which entail global
monomer displacement.
I. INTRODUCTION
In this paper we consider the situation of a single chain polymer confined within a space smaller
than its radius of gyration. Such a situation is encountered within the nucleus of a cell where
one or more chromosomes with a radius of gyration on the order of 10 µm are confined by the
nuclear membrane to a space of order 1 µm. Even in the case of organisms where the genome
is composed of many chromosomes, the situation is distinct from that of a polymer melt as each
chromosome is effectively confined to a separate sub-volume of the nucleus [1]. Since many
biological processes such as gene suppression and activation require a rearrangement of the DNA
polymer, understanding the dynamics of confined polymers may yield insight to the dynamics
of these cellular activities. Strongly confined polymers may also be encountered in the “lab on
a chip” applications promised by microfluidic technology [2]. In these applications the reaction
vessels are ∼ 10µm microdroplets.
While the equilibrium properties of confined polymers may be understood based on scaling
arguments [3], the dynamics of confined polymers are less well understood, and there has been
growing interest in the problem. For instance, the transport of polymers in confined geometries
has been studied in a variety of contexts, including translocation through pores [4, 5], diffusion
through networks [6] and tubes [7], and the packing of DNA within viral capsids [8]. For these
highly confined polymers the density profile strongly resembles that of a polymer melt. We might
naively expect that since a given section of the polymer interacts primarily with segments that are
greatly separated along the chain, each segment may be treated as a sub-chain embedded within a
melt. However, this picture is troublesome for dynamical quantities as reptation theory says that
the dynamics is governed by the time it takes for a given chain to vacate the tube defined by its
immediate neighbors. With a system consisting of a single polymer, this would imply that the tube
occupies the entire box. Therefore, we would be forced to conclude that the chain is completely
immobile. We show here that reptation-like motion is, in fact, the dominant mode of deformation
of confined polymers. In contrast to the situation in melts, however, reptational diffusion is not
necessarily initiated by the chain ends, and therefore, cannot be always thought of as diffusion
along a fixed tube.
Polymers confined to thin films have been experimentally shown to have glassy characteris-
tics [9, 10]. While this phenomenon has attracted considerable theoretical attention, it is not well
understood [11, 12, 13]. It is also not known whether glassy behavior occurs in other confined
geometries. Here we point out a connection between lattice polymer models and Kinetically Con-
strained Models (KCM) with the chain connectivity as the analog of the kinetic constraint. Since
many KCMs display glassy behavior at high density it is plausible that polymers do as well.
In this paper we numerically explore the dynamics of confined polymers using a kinetically
constrained lattice gas model. We find that monomer diffusion exhibits power law behavior up
to densities very close to the close-packing limit. However, the overall shape of the chain, as
quantified by a tangent-tangent correlation function, shows a broad plateau at high densities. This
apparent paradox is due to the reptation-like nature of the chain movement. Because the monomer
diffusion is primarily in the direction of the chain backbone, only relatively small rearrangements
of the backbone are required for the monomers to move distances comparable to the system size.
The outline of the paper is as follows. In Section II we define our model and employ Monte
Carlo simulations to show that this model reproduces known results for the dynamic and static
properties of unconfined polymers in two dimensions. In Section III we show that the individual
monomers diffuse with a power law in time behavior up to the close-packing density. In Sec-
tion IV we define the tangent-tangent correlation function and use it to show that the overall shape
of the chain is essentially frozen within the time scale required by a monomer to diffuse across
distances much larger than the inter-particle separation. In Section V we use a tangent-displacement
correlation function to show that the discrepancy between the monomer diffusion and reshaping
time scales is due to reptation-like diffusion of the polymer along the chain backbone. Finally, in
Section VI we summarize our conclusions.
II. A KINETICALLY CONSTRAINED LATTICE GAS MODEL
Inspired by the bond fluctuation model [14] of polymer dynamics and kinetically constrained
models (KCM) [15] such as the Kob-Andersen model [16, 17], we propose a KCM for the dy-
namics of a self-avoiding polymer. The fact that the monomers constitute a polymer requires the
connectivity to be preserved. Namely, connected (unconnected) monomers must remain connected
(unconnected) during the polymer motion. We begin by introducing the model in two dimensions
with monomers living on the sites of a square lattice for simplicity. Let us define the polymer
connectivity in the following way. Consider a square box of linear size 2r whose center lies on a
given monomer. Any other monomer that lies inside or on the boundary of this box is defined as a
box-neighbor of the monomer at the center. Clearly, if monomer A is a box-neighbor of monomer
B, then monomer B is also a box-neighbor of monomer A. We define two monomers as being
connected by a bond if and only if they are box-neighbors. A monomer with no box neighbors is
an isolated polymer of length one. A monomer with only one box-neighbor is the end-point of a
polymer. A monomer with two box neighbors is a point in the middle of a polymer and a monomer
with more than two box neighbors corresponds to a branching point along a polymer. Depending
on the initial monomer positions, multiple open or closed chains can be modeled. Also by using a
d-dimensional hyper-cube instead of a square, the model can be immediately generalized to higher
dimensions.
The dynamics is defined as follows. A monomer can hop to a nearest neighbor unoccupied site
if it has exactly the same box-neighbors before and after the move, as in Fig. 1. If no monomer
enters the box associated with the moving monomer and no monomer falls out of it during the
move, the box will contain the exact same monomers before and after the move. In other words, all
the 2(2r + 1) sites (2(2r + 1)^{d−1} in d dimensions) that enter or exit the box as it is moved to the new
position, must be unoccupied, as shown in Fig. 2. In our model as in many kinetically constrained
lattice gas models [15], we take the energy to be independent of the configuration, resulting in
constant hopping rates. We assume that the allowed moves take place at unit rate. So if n(x, y) is
defined to assume the value 1 for occupied sites and 0 for empty sites, and n̄(x, y) ≡ 1− n(x, y),
the rate of hopping to the right out of site (x, y) is given by
w→(x, y) = n(x, y) n̄(x + 1, y) ∏_{j=−r}^{r} n̄(x + r + 1, y + j) n̄(x − r, y + j), (2.1)
and by similar expressions for the other directions. The dynamics forbids monomers that are
unconnected from getting too close to each other and therefore ensures self-avoidance. Notice that
all the moves are reversible because, as seen in Fig. 2, any particle that was allowed to hop to a
nearest neighbor empty site is allowed to hop back to its original position. We choose the smallest
value of r for which the model behaves like a polymer while allowing for shorter simulation times.
The r = 1 case is too restrictive to model different modes of motion. For example a polymer lying
along a straight line is forbidden in the r = 1 model to undergo one-dimensional translation. We
choose r = 2 as it is found to adequately describe the free polymer dynamics, as shown by our
numerical simulations.
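The hopping rule of Eq. (2.1) is straightforward to implement with the occupancy stored as a set of lattice coordinates. A minimal sketch for a hop to the right (the helper name and data layout are our own, not from the paper):

```python
def allowed_right(occ, x, y, r=2):
    """Kinetic constraint of Eq. (2.1) for a hop (x, y) -> (x+1, y): the
    destination must be empty, and the 2(2r+1) sites that enter or exit the
    box of half-width r (the columns x+r+1 and x-r) must all be empty, so
    the set of box-neighbors is unchanged by the move."""
    if (x, y) not in occ or (x + 1, y) in occ:
        return False
    return all((x + r + 1, y + j) not in occ and (x - r, y + j) not in occ
               for j in range(-r, r + 1))

# An isolated monomer may hop freely:
print(allowed_right({(0, 0)}, 0, 0))             # True
# A box-neighbor on the trailing column would fall out of the box: forbidden.
print(allowed_right({(0, 0), (-2, 0)}, 0, 0))    # False
# A closer neighbor stays inside the moved box, so the hop is allowed:
print(allowed_right({(0, 0), (-1, 0)}, 0, 0))    # True
```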
Monte Carlo (MC) simulations are used to study the model. A particularly time-efficient al-
gorithm is achieved by storing two representations of the system at each MC step. One consists
of the position vector of all the monomers, and the other is the configuration matrix of the lattice,
FIG. 1: An example of a configuration that satisfies the initial conditions discussed in the text to have a
single closed polymer. This is illustrated explicitly for the particle in red: the box of size 2r (r = 2) is
indicated by the dashed purple square, and the two other particles inside the box are colored in blue. The
same condition holds for all the particles in the system. All the (nearest-neighbor) particle moves allowed
by the kinetic constraint are shown for each particle with arrows along the corresponding lattice edge. One
particle in this configuration is temporarily frozen (circled blue particle), and its move is subordinated to
the move of one of the two particles in its box of size 2r. Notice that the initial sequence of particles,
represented by the wiggly green line, is clearly preserved by the allowed moves.
with unoccupied sites having value zero and occupied sites value one. This allows us to choose
a monomer at random from the position vector (rather than a site at random from the whole lat-
tice) and quickly determine if the monomer is allowed to move in a randomly chosen direction by
checking the values of at most eleven elements (the nearest neighbor site plus the sites by which
the old and new box differ) in the configuration matrix. Note that it is possible to define an al-
ternative model by using a circle of radius r = 2 instead of a square box of side 2r = 4, which
would require the same amount of computational effort because the same number of sites, namely
FIG. 2: For a model with an r = 2 box, a monomer can hop in a given direction if the destination site, as
well as the ten sites which the old and new box differ by, are empty.
ten sites, would enter or leave the circle drawn around the moving monomer.
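The update scheme just described can be sketched as follows. This is an illustrative reimplementation under our own naming, not the published code: `allowed` generalizes the constraint of Eq. (2.1) to any of the four axis directions, and `mc_step` uses the two representations (position vector and configuration matrix) exactly as in the text.

```python
import numpy as np

R = 2  # half box size used in the paper

def allowed(lattice, x, y, dx, dy, r=R):
    """Kinetic constraint: the destination plus the 2*(2r+1) sites by which
    the old and new box differ must all be empty (out-of-range sites are
    treated as empty in this sketch)."""
    nx, ny = lattice.shape
    tx, ty = x + dx, y + dy
    if not (0 <= tx < nx and 0 <= ty < ny) or lattice[tx, ty]:
        return False
    for j in range(-r, r + 1):
        # sites entering / leaving the box, written for a general axis move
        enter = (x + (r + 1) * dx + j * dy, y + (r + 1) * dy + j * dx)
        leave = (x - r * dx + j * dy, y - r * dy + j * dx)
        for sx, sy in (enter, leave):
            if 0 <= sx < nx and 0 <= sy < ny and lattice[sx, sy]:
                return False
    return True

def mc_step(positions, lattice, rng):
    """One Monte Carlo attempt: pick a random monomer from the position
    vector and a random direction, test the constraint on the configuration
    matrix (at most eleven sites for r = 2), and move if allowed."""
    i = rng.integers(len(positions))
    dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    x, y = positions[i]
    if allowed(lattice, x, y, dx, dy):
        lattice[x, y] = 0
        lattice[x + dx, y + dy] = 1
        positions[i] = (x + dx, y + dy)
        return True
    return False
```

Keeping both representations in sync makes each attempt O(r) rather than requiring a scan of the whole lattice.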
The r = 2 model with a square box proved to give consistent results with the ones available
in the literature. Namely, the time-averaged radius of gyration squared of the polymer computed
for our model in an infinite box scales as R^2 ∝ N^{1.451±0.084}, which is consistent with Flory's theoretical result of R^2 ∝ N^{3/2} [18]. The dynamics of the polymer in unconfined environments is also
compatible with the Rouse model to a good approximation. As shown in Fig. 3, the mean square
displacement of the center of mass is diffusive with a diffusion constant that scales as N^−0.96, compared to the N^−1 theoretical value. Throughout the paper, time and length are measured in units of
Monte-Carlo steps and lattice spacings respectively. Only four values of N = 128, 256, 512, 1024
were used for these consistency checks but the fits were very close to the theoretical predictions.
The individual monomer mean square displacement is diffusive at very short times followed by an
intermediate-time subdiffusive behavior and a cross-over to a final diffusive behavior at long times
as each monomer begins to move with the center of mass. The subdiffusive MSD can be fit with
an exponent of 0.5968 ± 0.0008 over the two-decade interval of 10^1 < t < 10^3, which is consistent with the theoretical value of 3/5 and the bond fluctuation results [14]. Because of the cross-over to diffusive behavior at long times, the curve fits well to a higher exponent of 0.6516 ± 0.0008 over the longer interval of 10^1 < t < 10^6.
In the present paper we focus on open and closed polymers without any branching. The close-
FIG. 3: Center of mass and individual monomer mean square displacement of a free polymer. The results
are shown for N = 256.
packing density of such polymers clearly has an upper limit. For a single open chain for ex-
ample, each monomer except for the two end-points has exactly two box-neighbors. The maxi-
mally packed configuration is achieved once the distance along the chain between the consecu-
tive monomers alternates between one and two lattice spacings, and the distance between paral-
lel segments of the folded polymer equals 3 lattice spacings, as depicted in Fig. 4. If we have
N monomers on an L × L lattice with (L + 1)^2 sites, the fully-packed configuration attains a thermodynamic-limit density ρ = N/(L + 1)^2 → 2/9 (ρ → 2/3^d in d dimensions). Note that except for fluctuations at the U-turns, the polymer is completely frozen at close-packing.
III. MEAN SQUARE MONOMER DISPLACEMENT
The question of whether or not placing a self-avoiding polymer in a highly confining envi-
ronment can freeze its motion can be addressed by measuring the statistical average of the mean
square displacement as a function of time
c(t) = 〈 [~xi(t + tw) − ~xi(tw)]^2 〉, (3.1)
FIG. 4: A fully packed configuration of the model described in the text. Notice that only the shaded
monomers are allowed to move at any given time.
where ~xi is the position of the i-th monomer and tw is the waiting time between the preparation
of the sample and the measurement. Throughout the paper, we denote the ensemble average by
〈. . .〉. We prepare random samples with different densities by placing the polymer in a large box
and gradually reducing the size of the box. This is achieved by forbidding the monomers to move
to the edges of the box, which corresponds to an infinite repulsive potential at the boundary, and
removing one vertical and one horizontal edge line once they have become completely empty.
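A minimal sketch of this shrinking protocol, assuming (our simplification) that the removed vertical and horizontal edge lines are always the high-index row and column of the occupancy matrix, and that hops onto the current edge lines are rejected elsewhere in the dynamics:

```python
import numpy as np

def try_shrink(lattice):
    """Remove the last row and/or last column of the box, but only once
    that edge line has become completely empty (the infinite repulsive
    wall keeps monomers from re-entering it)."""
    shrunk = False
    if not lattice[-1, :].any():      # last row (horizontal edge) is empty
        lattice = lattice[:-1, :]
        shrunk = True
    if not lattice[:, -1].any():      # last column (vertical edge) is empty
        lattice = lattice[:, :-1]
        shrunk = True
    return lattice, shrunk

def density(lattice):
    """Monomer density rho = N / (number of lattice sites)."""
    return lattice.sum() / lattice.size
```

Between shrinking attempts the system would be relaxed for a fixed number of Monte-Carlo steps, as described in the text.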
After each shrinking process, the system is confined to a smaller square box. Shrinking and
measurements are done in series. Specifically, we start from a very low density of ρ = 0.00010
and after each shrinking step we let the system run for 1000 Monte-Carlo steps before trying to
shrink further. For measurements involving a long waiting time (tw = 10^7 steps), i.e., for densities ρ ≃ 0.0010, 0.050, 0.10, 0.15 and ρ > 0.195, subsequent shrinking steps are performed starting
from the post-measurement configurations. With this method we are able to reach densities of
approximately ρ = 0.21, compared to the limiting theoretical value of ρ = 2/9 ≃ 0.222. At high
densities, as shown in Fig. 5, the overall geometry of the polymer resembles that of a compact
polymer described by a Hamiltonian path, i.e., a path which visits all sites exactly once, exploring
a lattice with lattice spacing three times larger than in the original one. At these high densities
our model resembles a semi-flexible polymer because the chain is able to attain a higher packing
density in the direction parallel to the backbone (average monomer spacing equal to 3/2 lattice spacings) than in the direction perpendicular to the backbone (average monomer spacing equal to 3 lattice spacings) (see Fig. 4).
FIG. 5: Snapshots of a polymer with N = 1024 monomers. As the box shrinks, the polymer gets more and
more confined. At densities close to full-packing, a geometry resembling that of a compact (Hamiltonian
walk) polymer is formed.
Measurements are done with two values of waiting time, tw = 10^7 and tw = 1.1 × 10^8 Monte-Carlo steps, over a period of t = 10^8 steps. (For the highest density we used t = 2 × 10^8 instead.) The mean square displacement shows time translation invariance up to the highest densities
achieved. We study the behavior of c(t) for N = 128, 256, 512, 1024 at the densities listed above.
The root mean square displacement √c(t) is a measure of how much the monomers have moved.
As shown in Fig. 6, c(t) increases with power law behavior and finally saturates with a limiting root
mean square displacement of the order of the box size. For very large box sizes (lowest density)
as well as very small box sizes (very high densities), the saturation plateau is not always reached
within the measurement time. However, the maximum value of √c(t) is still of the same order
of the box dimensions. Although the dynamics slows down at high confinement, each monomer
manages to move an average distance comparable to the box size over our measurement time t.
Visually observing the polymer motion, however, clearly shows a more complex scenario where
at high densities the overall geometry of the polymer is largely preserved (see Fig. 5). Indeed, we
will see that the tangent-tangent correlation function, although time-translation invariant at low
and intermediate densities, exhibits signs of aging at higher densities. In the following sections
we discuss in detail the shape persistence as well as the mechanisms by which the polymer shape
changes as the box size is reduced.
FIG. 6: Mean square displacement for N = 256 and different box sizes (i.e., different densities) as a
function of time. The ρ = 0.001 curve can be fit with a 0.63 exponent which is close to the intermediate-
time regime of the Rouse model. The results for other values of N are qualitatively very similar.
IV. TANGENT FIELD CORRELATION
The large values reached by √c(t) within our simulation times even for very high densities
indicate that confinement does not freeze the motion of the monomers. The overall shape or ge-
ometry of the polymer, however, exhibits global persistence at high densities as observed via direct
visualization of the dynamics. In order to systematically study shape persistence and reshaping, we
introduce the concept of a tangent field, a vector field defined on the entire lattice which captures
the overall shape of the polymer. We define the instantaneous tangent field as
~sinst(~x, t) = ~xi+1(t) − ~xi−1(t) if ~x = ~xi(t), and ~0 otherwise, (4.1)
~xi(t) being the position of monomer i at time t. The tangent field is defined in a symmetric way
so that labeling the monomers in reverse order only changes the direction of the field. In an open
chain, the definition needs to be modified at the end points. Note that the tangent field is indexed by
a position in space and not by a monomer number; this allows us to compare the shapes at different
times, independently of the monomer motion. Since local vibrations of the polymer do not change
the overall geometry, we seek a quantity that is insensitive to these vibrations. Coarse-graining the
field by time averaging over a carefully chosen interval removes the local vibrations and results in
a smeared field ~s~x(t) which captures the overall geometry, as shown in Fig. 7. We have chosen the
time interval to be 75 Monte Carlo steps (or 75×N single monomer attempts) which is sufficient
to allow for several vibrations. Since the success rate of the Monte Carlo attempts at high density
is found to be around 1/10, this interval corresponds to roughly 7 moves per monomer. In terms of
FIG. 7: An example of the coarse-grained tangent field ~s~x(t)
the coarse-grained tangent field defined above, we define the tangent-tangent correlation function
cs(t, tw) = 〈 Σ_{all ~x} ~s~x(t + tw) · ~s~x(tw) 〉 (4.2)
as a measure of the overlap of the tangent field at times t+ tw and tw.
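Eqs. (4.1)-(4.2) translate directly into code. The sketch below is ours, with hypothetical names: the field is stored as an array with two vector components per site, the chain end points are skipped for simplicity, and a list of instantaneous fields is time-averaged to obtain the smeared field.

```python
import numpy as np

def instantaneous_tangent_field(positions, shape):
    """Eq. (4.1): s_inst equals x_{i+1} - x_{i-1} at the site occupied by
    monomer i and the zero vector everywhere else. `positions` is an
    (N, 2) integer array ordered along the chain; end points are skipped."""
    s = np.zeros(shape + (2,))
    for i in range(1, len(positions) - 1):
        x, y = positions[i]
        s[x, y] = positions[i + 1] - positions[i - 1]
    return s

def coarse_grained_field(fields):
    """Time-average over a window (75 MC steps in the paper) to smear out
    local vibrations."""
    return np.mean(fields, axis=0)

def tangent_correlation(s_a, s_b):
    """Eq. (4.2): overlap of two tangent fields summed over all sites."""
    return float(np.sum(s_a * s_b))
```

Because the field is indexed by lattice position rather than monomer number, fields recorded at different times can be compared site by site, as noted above.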
As shown in Fig. 8, cs(t, tw) decays as a power law in t for very low densities. As the density
increases, a second time scale emerges and at the highest densities we clearly see an initial decay
followed by a broad plateau and a secondary decay. (Notice the use of a logarithmic scale on
both axes.) The time-averaging of the tangent field hides the fast mode responsible for the initial
decay and causes the correlation to have a smaller initial value at lower densities. The correlation
function (4.2) does not depend on the value of N at low densities, as seen in Fig. 9, while at higher
densities we observe broader plateaux and longer decorrelation times as the number of monomers
N is increased.
FIG. 8: Tangent-tangent correlation function for 256 monomers and different box sizes. The ρ = 0.001
curve fits well to a power law of exponent 0.42. Note that the correlation functions are not normalized.
To check for time-translation invariance, we ran the simulations two more times after increas-
ing the waiting time tw by an order of magnitude each time. The mean square displacement
exhibits time-translation invariance at all densities. For the tangent-tangent correlation function,
FIG. 9: The tangent-tangent correlation function for four different polymer sizes. Top: At low density
ρ = 0.05, the curves are independent of N. Bottom: At the highest achieved density ρ ≃ 0.20 − 0.21, the width of the emergent plateau increases with N.
however, time-translation invariance is respected at low densities but violated at the highest densi-
ties where a broad plateau emerges. It appears that at high densities the average distance between
monomers slowly evolves with time, so that the initial value of the tangent-tangent correlation
function cs(0, tw) depends on tw. If we normalize the correlation function using its value at the
beginning of the measurement and plot cs(t, tw)/cs(0, tw) as a function of time, the violation of
time-translation invariance suggests the existence of aging effects, a comprehensive study of which
is beyond the scope of the present paper. For tw = 1.01 × 10^9, the system has almost equilibrated
FIG. 10: The normalized tangent-tangent correlation function cs(t, tw)/cs(0, tw) vs. time for different waiting times shown in log-log plots. Top: Up to intermediate densities, before the appearance of a clear plateau, time translation invariance is not broken. Bottom: At higher densities, where a broad plateau has emerged, we observe that the second decay occurs at a longer time scale. With increasing waiting times the correlation function approaches equilibrium, but there is still a systematic shift between the tw ≈ 10^9 and tw ≈ 10^8 curves, indicating slower decay for the older system.
but there is still a systematic shift toward longer times compared with the tw = 1.1 × 10^8 correlation function. Fig. 10 summarizes the above observations. Moreover, Fig. 9 suggests longer
equilibration times for larger systems and in the thermodynamic limit (N → ∞) the aging effects
are expected to survive for arbitrarily large tw.
V. TANGENT-DISPLACEMENT CORRELATION
As seen in the previous sections, confinement slows down the motion of individual monomers
in a polymer chain, but it does not fundamentally change the characteristics of their mean square
displacement. It does, however, have a profound effect on reshaping. Without any reshaping,
the only possible motion can happen via reptation, i.e., when the monomers move back and forth
along a fixed path. With the exception of the unlikely event of the two end-points finding each
other, reptation without reshaping is not possible in open chains. Indeed, by studying closed
loops in detail we do find that longitudinal diffusion is the main mechanism for motion at high
densities. The existence of large root mean square displacements in strongly confined open chains
and in the absence of major reshaping can be explained by noting that local reshaping events with
only a minor contribution to the tangent-tangent decorrelation allow for global monomer motion
through a reptation-like process. Fig. 11 shows one instance of such behavior in a particularly
mobile realization. The mechanisms shown in Fig. 11, namely end-point initiated reptation and
"fingering" events, are observed in other realizations as well. A finger is formed when the chain
makes a 180-degree bend resulting in two adjacent segments of the polymer running antiparallel
to each other. A fingering event occurs when a finger retracts making room for the extension
of another finger. Even in the case of closed loops where pure reptation is possible, reptation is
usually accompanied by local fingering events as shown in Fig. 12. To explicitly quantify the
contribution of reptation to highly confined motion, we define tangent-displacement and normal-
displacement correlation functions as follows:
ct(t) = 〈 [ (~s~xi(tw) / |~s~xi(tw)|) · (~xi(tw + t) − ~xi(tw)) ]^2 〉 (5.1)

cn(t) = 〈 [ (~n~xi(tw) / |~n~xi(tw)|) · (~xi(tw + t) − ~xi(tw)) ]^2 〉, (5.2)
where ~n is the normal field defined as ~n~x(t) = ẑ × ~s~x(t) and the polymer – and therefore ~s~x(t) –
belongs to the xy plane. By comparison with Eq. (3.1), one can show that
c(t) = ct(t) + cn(t). (5.3)
Therefore, the correlation functions (5.2) and (5.1) are the contributions to the mean square dis-
placement due to the transverse and longitudinal motion relative to the initial polymer orientation.
At very high densities and up to time scales smaller than the beginning of the secondary decay
FIG. 11: Snapshots of a particularly mobile realization of a 256-monomer chain at density ρ = 0.209.
Fingering events are observed, as well as end-point reptation accompanied by local rearrangements. An
end-point initiated reptation is highlighted with a dashed ellipse at the moving end-point. The dashed
rectangle shows the tip of a finger which participates in a fingering event during which the tagged monomer
shown with a solid circle, for example, moves through reptation.
in the tangent-tangent correlation function Eq. (4.2), ct is a good measure of the reptational con-
tribution to the confined motion. This is due to the fact that over such time scales the polymer
shape is largely preserved. Additionally, over the same time scales the mean square displacement
FIG. 12: Snapshots of a realization with 256 monomers arranged in a single closed polymer chain, at
density ρ = 0.209. Pure reptation and fingering events are visibly responsible for monomer motion. The
dashed rectangle highlights the tip of a finger taking part in a fingering event. The dashed ellipse shows the
reptational motion of a tagged monomer.
is smaller than the average chain length between major bends, which as seen in Fig. 11 is a large
fraction of the box size. If the polymer undergoes major reshaping or the monomers move through
the bends of the folded polymer, reptation would no longer be equivalent to the motion along the
original orientation.
Let us define f as a measure of the anisotropy of the motion with respect to the longitudinal
and transverse directions,
f(t) = (ct(t) − cn(t)) / c(t). (5.4)
We have ct(t)/c(t) = (1 + f(t))/2 and cn(t)/c(t) = (1 − f(t))/2, where f ranges from −1 to
+1. A large value of f clearly indicates a motion primarily due to reptation. As shown in semi-
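For a single waiting time, the decomposition of Eqs. (5.1)-(5.4) can be computed as below. This is our sketch with hypothetical names: `tangents` would come from the coarse-grained field at each monomer's site at time tw, and the normal is ẑ × t̂ as in the text.

```python
import numpy as np

def anisotropy(tangents, disp):
    """Project displacements x_i(tw + t) - x_i(tw) onto the unit tangent
    and unit normal at each monomer's initial site, giving the
    longitudinal part ct (Eq. 5.1), the transverse part cn (Eq. 5.2),
    and the anisotropy f = (ct - cn)/c of Eq. (5.4).

    tangents: (N, 2) tangent vectors at the initial sites.
    disp:     (N, 2) displacement vectors."""
    t_hat = tangents / np.linalg.norm(tangents, axis=1, keepdims=True)
    n_hat = np.stack([-t_hat[:, 1], t_hat[:, 0]], axis=1)  # z-hat x t-hat
    ct = np.mean(np.sum(t_hat * disp, axis=1) ** 2)
    cn = np.mean(np.sum(n_hat * disp, axis=1) ** 2)
    return ct, cn, (ct - cn) / (ct + cn)
```

Since t̂ and n̂ are orthonormal, ct + cn reproduces c(t) of Eq. (5.3), so f = +1 for purely longitudinal (reptation-like) motion and f = −1 for purely transverse motion.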
logarithmic scale in Fig. 13, f reaches a maximum value of 0.8 at short time scales, demonstrating
FIG. 13: Top: At the lowest density studied ρ ≃ 0.001, the function f is independent of the number of
monomers and approaches zero monotonically (i.e., the motion is isotropic at intermediate and large time
scales). Bottom: At the highest densities achieved ρ ≃ 0.20 − 0.21, the function f reaches very large
positive values before decreasing again towards zero. For short polymers f(t) goes down to negative values
at large times. Here we ignore the causes of this observation because as explained in the text, f(t) is a good
measure of the anisotropy of the motion only up to certain time scales.
that reptation-like motion is the primary contributor to monomer displacement. We also observe
that the maximum of f increases as the number of monomers N is increased, consistent with the
fact that the width of the plateau becomes monotonically larger and larger with N .
VI. CONCLUSIONS
As the size of the confining box around a polymer is reduced, the monomer density makes it
increasingly difficult for the polymer to move. However, the effect on the polymer movement is not
isotropic. The transverse fluctuations are strongly suppressed due to the proximity of monomers
that may be greatly separated along the chain backbone. This is in contrast to motion parallel to
the chain backbone where, due to the connectivity constraint, the monomer density is very similar
for the confined and unconfined chains. While longitudinal motion is sub-dominant in the free
chain, it is the primary mode of monomer diffusion when the density becomes high enough to
suppress the transverse fluctuations.
The emergence of motion parallel to the chain backbone as the dominant mode of diffusion
is similar to what occurs in a polymer solution when the density is increased to form a melt.
However, the longitudinal diffusion observed here differs from the classic reptation picture in that
the motion is not necessarily initiated at the chain ends but it can also be triggered by fingering
events. The prevalence of fingering reptation over end-initiated reptation is due to three factors.
First, the two-dimensional nature of our system imposes topological constraints that severely limit
the mobility of the chain ends. Second, a single confined chain has only two end-points, while the
number of fingers it can have grows with the system size. Third, the compact configuration due to
the confinement forces the creation of more fingering structures relative to the extended polymer
structures found in melts.
The peculiarities of the dynamics of a single chain in extreme confinement (high density limit)
leads to an interesting effect: monomers can diffuse through large distances comparable to the
box size within time scales for which the overall shape of the polymer is, nevertheless, largely
preserved. While monomer displacement exhibits a smooth power law behavior in time at all den-
sities, the tangent-tangent correlation function develops a secondary decay at high densities. This
decay takes place at longer time scales for older systems, suggestive of aging phenomena. We thus
find glass-like behavior in the overall geometry concurrently with non-glassy monomer motion.
Despite significant persistence of geometry, monomer displacement can reach large values relative
to its saturation value over the same time scales because local rearrangements cause monomers to
flow even in parts of the system where no reshaping is taking place.
The two dimensional lattice model presented here is a largely simplified one. However, we
believe that this model yields considerable insight into the generic properties of confined polymers.
Namely, reptational or longitudinal motion is identified as the primary mechanism for motion
at high densities and extreme confinement is found to primarily suppress changes in the overall
geometry of the polymer rather than the monomer motion.
Acknowledgments
The simulations were carried out on Boston University supercomputer facilities (SCV). We
thank B. Chakraborty, J. Kondev, D. Reichman and F. Ritort for useful discussions. This work is
supported in part by the NSF Grant DMR-0403997 (AR, CC, CC, and JS) and by EPSRC Grant
No. GR/R83712/01 (C. Castelnovo).
Appendix: Closed chains
Closed chains can be studied using the same correlation functions. The only subtlety with
closed chains is the existence of a non-trivial background in finite systems. The background has
to do with the topology of closed loops and must be subtracted from the tangent-tangent corre-
lation function. Suppose that the monomers in the chain are initially indexed clockwise or anti-
clockwise. The dynamics cannot change the chirality of the loop in two dimensions. Therefore, all
the outer segments of the polymer running parallel to the walls of the box have correlated tangent
fields. Using an ensemble with random chirality does not remove the problem because each real-
ization, whether clockwise or anti-clockwise, would contribute a positive value to the correlation
function. One can correct for this effect as follows. For an ensemble with a given chirality, let
us call the equilibrium tangent-field background ~save(~x). We can then modify the tangent-tangent
correlation function by subtracting this background field.
cs,loop(t, tw) = 〈 Σ_{all ~x} [~s~x(t + tw) − ~save(~x)] · [~s~x(tw) − ~save(~x)] 〉. (6.1)
The equilibrium background can be obtained at low densities via Monte Carlo simulations, using
realizations with the same chirality and averaging over time and ensemble. This approach becomes
less and less reliable as the density increases and glassy behavior arises, because each realization
is essentially stuck in a small region of configuration space over the measurement time scales.
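Once ~save has been estimated, the subtraction in Eq. (6.1) is straightforward; a minimal sketch under our own naming:

```python
import numpy as np

def loop_correlation(s_t, s_tw, s_ave):
    """Eq. (6.1): tangent-tangent overlap for closed loops with the
    chirality-induced equilibrium background field subtracted.
    All three arguments are tangent fields of the same array shape."""
    return float(np.sum((s_t - s_ave) * (s_tw - s_ave)))
```

Here `s_ave` would be obtained by averaging the coarse-grained field over time and over realizations of the same chirality; with `s_ave = 0` the expression reduces to the open-chain correlation of Eq. (4.2).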
Fig. 14 shows the background tangent field at an intermediate density ρ = 0.1. The modified
tangent-tangent correlation function (6.1) and the mean square displacement were measured for a
FIG. 14: The background tangent field for N = 256 monomers in a box of size L = 49. The relative scale
between the field vectors reflects the actual values of the tangent-tangent field (an overall scale factor has
been introduced to enhance visibility). Notice that the average field at the boundary does not vanish.
closed loop of N = 256 monomers and no qualitative difference was observed in comparison to
open chains. Also f(t) behaves similarly in the two cases, reaching high values at high densities
for closed chains as well as open chains. As shown in Fig. 15, the mean square displacement for
closed chains at high densities reaches its saturation value faster than for open chains, whereas at
low and intermediate densities they are identical.
[1] T. Cremer and C. Cremer, Nature Rev. Gen. 2, 292-301 (2001).
[2] G.M. Whitesides, Nature 442, 368-373 (2006).
[3] P.G. de Gennes, Scaling concepts in polymer physics, Cornell University Press (1979).
[4] M. Muthukumar, Phys. Rev. Lett. 86, 3188 - 3191 (2001).
[5] S.M. Bezrukov, I. Vodyanoy, R.A. Brutan, and J.J. Kasianowicz, Macromolecules 29, 8517 (1996).
[6] D. Nykypanchuk, H.H. Strey, D.A. Hoagland, Science 297, 987 - 990 (2002).
[7] J. Kalb, B. Chakraborty, cond-mat/0702152.
http://arxiv.org/abs/cond-mat/0702152
N=256, ρ=0.197, OP
N=256, ρ=0.197, CP
N=256, ρ=0.152, OP
N=256, ρ=0.152, CP
FIG. 15: The mean square displacements of closed polymers (CP) and open polymers (OP) are identical at
low and intermediate densities. At high densities closed polymers seem to reach the saturation value faster.
[8] A. J. Spakowitz and Z.-G. Wang, Biophys. J., 88, 3912 (2005).
[9] J.A. Forrest and K. Dalnoki-Veress, Adv. Colloid and Interface Sci. 94, 167 (2001).
[10] J.L. Keddie, R.A.L. Jones, and R.A. Cory, Europhys. Lett. 27, 59 (1994).
[11] P.G. de Gennes, Eur. Phys. J. E 2, 201 (2000).
[12] K.L. Ngai, A.K. Rizos, and D.J. Plazek, J. Non-cryst. Sol. 235, 435 (1998).
[13] Q. Jiang, H.X. Shi, and J.C. Li, Thin Solid Films 354, 283 (1999).
[14] K. Kremer and I. Carmesin, Macromolecules 21, 2819 (1988).
[15] F. Ritort and P. Sollich, Advances in Physics 52, 219 (2003).
[16] W. Kob and H.C. Andersen, Phys. Rev. E 48, 4364 (1993).
[17] C. Toninelli, G. Biroli and D. Fisher, Phys. Rev. Lett. 92, 185504 (2004).
[18] P.J. Flory, J. Chem. Phys. 17, 303 (1949).
[19] T. Odijk, Macromolecules 16, 1340-1344 (1983).
0704.1664 | Diffuse Optical Light in Galaxy Clusters II: Correlations with Cluster Properties

Draft version November 9, 2018
Preprint typeset using LATEX style emulateapj v. 11/12/01
DIFFUSE OPTICAL LIGHT IN GALAXY CLUSTERS II:
CORRELATIONS WITH CLUSTER PROPERTIES
J.E. Krick 1,2 and R.A. Bernstein 1
[email protected], [email protected]
Draft version November 9, 2018
ABSTRACT
We have measured the flux, profile, color, and substructure in the diffuse intracluster light (ICL) in
a sample of ten galaxy clusters with a range of mass, morphology, redshift, and density. Deep, wide-
field observations for this project were made in two bands at the one meter Swope and 2.5 meter du
Pont telescope at Las Campanas Observatory. Careful attention in reduction and analysis was paid to
the illumination correction, background subtraction, point spread function determination, and galaxy
subtraction. ICL flux is detected in both bands in all ten clusters, ranging from 7.6 × 10^10 to 7.0 × 10^11 h_70^−1 L_⊙ in r and 1.4 × 10^10 to 1.2 × 10^11 h_70^−1 L_⊙ in the B-band. These fluxes account for 6 to 22% of the total cluster light within one quarter of the virial radius in r and 4 to 21% in the B-band. Average ICL B − r colors range from 1.5 to 2.8 mags when k and evolution corrected to the present epoch. In several clusters we also detect ICL in group environments near the cluster center and up to 1 h_70^−1 Mpc distant from the cluster center. Our sample, having been selected from the Abell sample, is incomplete in that
it does not include high redshift clusters with low density, low flux, or low mass, and it does not include
low redshift clusters with high flux, mass, or density. This bias makes it difficult to interpret correlations
between ICL flux and cluster properties. Despite this selection bias, we do find that the presence of
a cD galaxy corresponds to both centrally concentrated galaxy profiles and centrally concentrated ICL
profiles. This is consistent with ICL either forming from galaxy interactions at the center, or forming at
earlier times in groups and later combining in the center.
Subject headings: galaxies: clusters: individual (A4059, A3880, A2734, A2556, A4010, A3888, A3984,
A0141, AC114, AC118) — galaxies: evolution — galaxies: interactions — galaxies:
photometry — cosmology: observations
1. introduction
A significant stellar component of galaxy clusters is
found outside of the galaxies. The standard theory of cluster evolution is one of hierarchical collapse: as time proceeds, clusters grow in mass through merging with other clusters and groups. These mergers, as well as interactions within groups and within clusters, strip stars out of
their progenitor galaxies. The study of these intracluster
stars can inform hierarchical formation models as well as
tell us something about physical mechanisms involved in
galaxy evolution within clusters.
Paper I of this series (Krick et al. 2006) discusses the
methods of ICL detection and measurement as well as the
results garnered from one cluster in our sample. We re-
fer the reader to that paper and the references therein
for a summary of the history and current status of the
field. This paper presents the remaining nine clusters in
the sample and seeks to answer when and how intracluster
stars are formed by studying the total flux, profile shape,
color, and substructure in the ICL as a function of cluster
mass, redshift, morphology, and density in the sample of
10 clusters. The advantage to having an entire sample of
clusters is to be able to follow evolution in the ICL and
use that as an indicator of cluster evolution.
Strong evolution in the ICL fraction with mass of the
cluster has been predicted in simulations by both Lin
& Mohr (2004) and Murante et al. (2004). If ongoing
stripping processes are dominant, ram pressure stripping
(Abadi et al. 1999) or harassment (Moore et al. 1996), then
high mass clusters should have a higher ICL fraction than
low-mass clusters . If, however, most of the galaxy evolu-
tion happens early on in cluster collapse by galaxy-galaxy
merging, then the ICL should not correlate directly with
current cluster mass.
Because an increase in mass is tied to the age of the
cluster under hierarchical formation, evolution has also
been predicted in the ICL fraction as a function of red-
shift (Willman et al. 2004; Rudick et al. 2006). Again,
if ICL formation is an ongoing process then high redshift
clusters will have a lower ICL fraction than low redshift
clusters. Conversely, if ICL formation happened early on
in cluster formation there will be no correlation of ICL
with redshift.
The stripping of stars (or even the gas to make stars)
to create an intracluster stellar population requires an in-
teraction between their original host galaxy and either an-
other galaxy, the cluster potential, or possibly the hot gas
in the cluster. Because all of these processes require an
interaction, we expect cluster density to be a predictor of
ICL fraction. Cluster density is linked to cluster morphol-
ogy, which implies morphology should also be a predictor
of ICL fraction. Specifically we measure morphology by
the presence or absence of a cD galaxy. cD galaxies are
the results of 2 - 5 times more mergers than the average
cluster galaxy (Dubinski 1998). The added number of in-
teractions that went into forming the cD galaxy will also
mean an increased disruption rate in galaxies; therefore,
morphologically relaxed (dynamically old) clusters should
1 Astronomy Department, University of Michigan, Ann Arbor, MI 48109
2 Spitzer Science Center, Caltech, Pasadena, CA 91125
2 Krick, Bernstein
have a higher ICL flux than dynamically young clusters.
Observations of the color and fractional flux in the ICL
over a sample of clusters with varying redshift and dynam-
ical state will allow us to identify the timescales involved
in ICL formation. If the ICL is the same color as the
cluster galaxies, it is likely to be a remnant from ongoing
interactions in the cluster. If the ICL is redder than the
galaxies it is likely to have been stripped from galaxies at
early times. Stripped stars will passively evolve toward red
colors while the galaxies continue to form stars. If the ICL
is bluer than the galaxies, then some recent star formation
has made its way into the ICL, either from ellipticals with
low metallicity or spirals with younger stellar populations,
or from in situ formation.
While multiple mechanisms are likely to play a role in
the complicated process of formation and evolution of clus-
ters, important constraints can come from ICL measure-
ment in clusters with a wide range of properties. In addi-
tion to directly constraining galaxy evolution mechanisms,
the ICL flux and color is a testable prediction of cosmolog-
ical models. As such it can indirectly be used to examine
the accuracy of the physical inputs to these models.
This paper is structured in the following manner. In §2
we discuss the characteristics of the entire sample. De-
tails of the observations and reduction are presented in §3
and §4 including flat-fielding, sky background subtraction
methods, object detection, and object removal and mask-
ing. In §5 we lists the results for both cluster and ICL
properties including a discussion of each individual clus-
ter. Accuracy limits are discussed in §6. A discussion of
the interesting correlations can be found in §7 followed by
a summary of the conclusions in §8.
Throughout this paper we use H0 = 70 km/s/Mpc, ΩM = 0.3, ΩΛ = 0.7.
2. THE SAMPLE
The general properties of our sample of ten galaxy clus-
ters have been outlined in paper I; for completeness we
summarize them briefly here. Our choice of the 10 clusters
both minimizes the observational hazards of the galactic
and ecliptic plane, and maximizes the amount of informa-
tion in the literature. All clusters were chosen to have
published X–ray luminosities which guarantees the pres-
ence of a cluster and provides an estimate of the clus-
ter’s mass. The ten chosen clusters are representative
of a wide range in cluster characteristics, namely red-
shift (0.05 < z < 0.3), morphology (3 with no clear
central dominant galaxy, and 7 with a central dominant
galaxy as determined from this survey, §5.1.2, and not
from Bautz Morgan morphological classifications), spatial
projected density (richness class 0 - 3), and X–ray lumi-
nosity (1.9 × 10^44 ergs/s < Lx < 22 × 10^44 ergs/s). We
discuss results from the literature and this survey for each
individual cluster in order of ascending redshift in §5.
3. OBSERVATIONS
The sample is divided into a “low” (0.05 < z < 0.1)
and “high” (0.15 < z < 0.3) redshift range which we have
observed with the 1 meter Swope and 2.5 meter du Pont
telescope respectively. The du Pont observations were dis-
cussed in detail in paper I. The Swope observations follow
a similar observational strategy and data reduction pro-
cess which we outline below. Observational parameters
are listed in Table 2.
We used the 2048 × 3150 “Site#5” CCD with a
3e−/count gain and 7e− readnoise on the Swope telescope.
The pixel scale is 0.435′′/pixel (15µ pixels), so that the full
field of view per exposure is 14.8′ × 22.8′. Data was taken
in two filters, Gunn-r (λ0 = 6550 Å) and B (λ0 = 4300
Å). These filters were selected to provide some color con-
straint on the stellar populations in the ICL by spanning
the 4000Å break at the relevant redshifts, while avoid-
ing flat-fielding difficulties at longer wavelengths and pro-
hibitive sky brightness at shorter wavelengths.
Observing runs occurred on October 20-26, 1998,
September 2-11, 1999, and September 19-30, 2000. All
observing runs took place within eight days of new moon.
A majority of the data were taken under photometric con-
ditions. Those images taken under non-photometric con-
ditions were individually tied to the photometric data (see
discussion in §4). Across all three runs, each cluster was
observed for an average of 5 hours in each band. In addi-
tion to the cluster frames, night sky flats were obtained in
nearby, off-cluster, “blank” regions of the sky with total
exposure times roughly equal to one third of the integra-
tion times on cluster targets. Night sky flats were taken in
all moon conditions. Typical B− and r−band sky levels
during the run were 22.7 and 21.0 mag arcsec−2, respec-
tively.
Cluster images were dithered by one third of the field
of view between exposures. The large overlap from the
dithering pattern gives us ample area for linking back-
ground values from the neighboring cluster images. Ob-
serving the cluster in multiple positions on the chip reduces
large-scale flat-fielding fluctuations upon combination. In-
tegration times were typically 900 seconds in r and 1200
seconds in B.
4. REDUCTION
In order to create mosaiced images of the clusters with
a uniform background level and accurate resolved–source
fluxes, the images were bias and dark subtracted, flat–
fielded, flux calibrated, background–subtracted, extinction
corrected, and registered before combining. Methods for
this are discussed in detail in paper I and summarized below.
The bias level is roughly 270 counts, which changed by
approximately 8% throughout the night. This, along with
the large-scale ramping effect in the first 500 columns of ev-
ery row was removed in the standard manner using IRAF
tasks. The mean dark level is 1.6 counts/900s, and there
is some vertical structure in the dark which amounts to
1.4 counts/900s over the whole image. To remove this
large-scale structure from the data images, a combined
dark frame from the whole run was median smoothed over
9× 9 pixels (3.9′′), scaled by the exposure time, and sub-
tracted from the program frames. Small scale variations
were not present in the dark. Pixel–to–pixel sensitivity
variations were corrected in all cluster and night-sky flat
images using nightly, high S/N, median-combined dome
flats with 70,000 – 90,000 total counts. After this step,
a large-scale illumination pattern remains across the chip.
This was removed using night-sky flats of “blank” regions
of the sky, which, when combined using masking and re-
jection, produced an image with no evident residual flux
from sources but with the large-scale illumination pattern
intact. The illumination pattern was stable among images
taken during the same moon phase. Program images were
corrected only with night sky flats taken in conditions of
similar moon.
We find that the Site#3 CCD does have an approx-
imately 7% non-linearity over the full range of counts,
which we fit with a second order polynomial and corrected
for in all the data. The same functional fit was found for
both the 1998 and 1999 data, and also applied to the 2000
data. The uncertainty in the linearity correction is incor-
porated in the total photometric uncertainty.
Photometric calibration was performed in the usual
manner using Landolt standards at a range of airmasses.
Extinction was monitored on stars in repeat cluster images
throughout the night. Photometric nights were analyzed
together; solutions were found in each filter for an extinc-
tion coefficient and common magnitude zero-point with a
r− and B−band RMS of 0.04 & 0.03 magnitudes in Oc-
tober 1998, 0.03 & 0.03 magnitudes in September 1999,
and 0.05 & 0.05 magnitudes in September 2000. These
uncertainties are a small contribution to our final error
budget (§5.3). Those exposures taken in non-photometric
conditions were individually tied to the photometric data
using roughly 10 stars well distributed around each frame
to find the effective extinction for that frame. Among
those non-photometric images we find a standard devia-
tion of 0.03 magnitudes within each frame. Two further
problems with using non-photometric data for low surface
brightness measurements are the scattering of light off of
clouds causing a changing background illumination across
the field and secondly the smoothing out of the PSF. We
find no spatial gradient over the individual frame to the
limit discussed in §5.3. The change in PSF is on small
scales and will have no effect on the ICL measurement
(see §4.2.1).
Due to the temporal variations in the background, it is
necessary to link the off-cluster backgrounds from adjacent
frames to create one single background of zero counts for
the entire cluster mosaic before averaging together frames.
To determine the background on each individual frame we
measure average counts in approximately twenty 20 × 20
pixel regions across the frame. Regions are chosen indi-
vidually by hand to be a representative sample of all areas
of the frame that are more distant than 0.8 h70^-1 Mpc from
the center of the cluster. This is well beyond the radius
at which ICL components have been identified in other
clusters (Krick et al. 2006; Feldmeier et al. 2002; Gonzalez
et al. 2005; Zibetti et al. 2005). The average of these back-
ground regions for each frame is subtracted from the data,
bringing every frame to a zero background. The accuracy
of the background subtraction is discussed in §5.3.
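The per-frame background step described above amounts to averaging a set of hand-picked empty boxes and subtracting that level from the whole frame. A minimal pure-Python sketch (the frame and box coordinates are illustrative stand-ins for the real mosaics and the ~20 selected regions):

```python
# Sketch of the frame-by-frame background subtraction: average the
# counts in a set of "blank" box regions, then subtract that mean
# level so every frame goes to zero background. Inputs are
# illustrative, not the survey data.
def subtract_background(frame, regions):
    """frame: 2-D list of counts; regions: list of (y0, y1, x0, x1) boxes."""
    levels = []
    for y0, y1, x0, x1 in regions:
        box = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        levels.append(sum(box) / len(box))
    sky = sum(levels) / len(levels)  # mean of the per-region averages
    cleaned = [[pix - sky for pix in row] for row in frame]
    return cleaned, sky
```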
The remaining flux in the cluster images after back-
ground subtraction is corrected for atmospheric extinction
by multiplying each individual image by 10^(τχ/2.5), where χ
is the airmass and τ is the fitted extinction in magnitudes
from the photometric solution. This multiplicative correc-
tion is between 1.04 and 2.0 for an airmass range of 1.04
to 1.9.
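The multiplicative correction above is just 10^(τχ/2.5); a minimal sketch (the function name and the sample τ, χ values are ours, not the fitted photometric solutions):

```python
# Atmospheric extinction correction applied to each frame:
# multiply by 10**(tau * airmass / 2.5), where tau is the fitted
# extinction coefficient (mag/airmass) and airmass is chi.
# Illustrative values only.
def extinction_factor(tau, airmass):
    """Multiplicative flux correction for extinction tau at given airmass."""
    return 10.0 ** (tau * airmass / 2.5)
```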
The IRAF tasks geomap and geotran were used to
find and apply x and y shifts and rotations between all
images of a single cluster. The geotran solution is accu-
rate on average to 0.03 pixels (RMS). Details of the final
combined image after pre-processing, background subtrac-
tion, extinction correction, and registration are included in
Table 2.
4.1. Object Detection
Object detection follows the same methods as Paper I.
We use SExtractor to both find all objects in the combined
frames, and to determine their shape parameters. The de-
tection threshold in the V , B, and r images was defined
such that objects have a minimum of 6 contiguous pixels,
each of which are greater than 1.5σ above the background
sky level. We choose these parameters as a compromise be-
tween detecting faint objects in high signal-to-noise regions
and rejecting noise fluctuations in low signal-to-noise re-
gions. This corresponds to minimum surface brightnesses
which range from 25.2 to 25.8 mag arcsec−2 in B, 25.9
to 26.9 mag arcsec−2 in V , and 24.7 to 26.4 mag arcsec−2
in r (see Table 2). This range in surface brightness is
due to varying cumulative exposure time in the combined
frames. Shape parameters are determined in SExtractor
using only those pixels above the detection threshold.
4.2. Object Removal & Masking
To measure the ICL we remove all detected objects from
the frame by either subtraction of an analytical profile or
masking. Details of this process are described below.
4.2.1. Stars
Scattered light in the telescope and atmosphere pro-
duce an extended point spread function (PSF) for all ob-
jects. To correct for this effect, we determine the extended
PSF using the profiles of a collection of stars from super-
saturated 4th mag stars to unsaturated 14th magnitude
stars. The radial profiles of these stars were fit together
to form one PSF such that the extremely saturated star
was used to create the profile at large radii and the unsat-
urated stars were used for the inner portion of the profile.
This allows us to create an accurate PSF to a radius of 7′,
shown in Figure 1.
The inner region of the PSF is well fit by a Moffat func-
tion. The outer region is well fit by r−2.0 in the r−band
and r−1.6 in the B−band. In the r−band there is a small
additional halo of light at roughly 50 - 100′′ (200 - 400 pix)
around stars imaged on the CCD. The newer, higher qual-
ity, anti-reflection coated interference B−band filter does
not show this halo, which implies that the halo is caused by
reflections in the filter. To test the effect of clouds on the
shape of the PSF we create a second deep PSF from stars
in cluster fields taken under non-photometric conditions.
There is a slight shift of flux in the inner 10 arcseconds
of the PSF profile, which will have no impact on our ICL
measurement.
For each individual, non-saturated star, we subtract a
scaled band–specific profile from the frame in addition to
masking the inner 30′′ of the profile (the region which fol-
lows a Moffat profile). For each individual saturated star,
to be as cautious as possible with the PSF wings, we have
subtracted a stellar profile given the USNO magnitude of
that star, and produced a large mask to cover the inner re-
gions and any bleeding. The mask size is chosen to be twice
the radius at which the star goes below 30 mag arcsec−2,
and therefore goes well beyond the surface brightness limit
at which we measure the ICL. We can afford to be liberal
with our saturated star masking since most clusters have
very few saturated stars which are not near the center of
the cluster where we need the unmasked area to measure
any possible ICL.
In the specific case of A3880 there are two saturated
stars (9th and 10th r−band magnitude) within two ar-
cminutes of the core region of the cluster. If we used the
same method of conservatively masking (twice the radius
of the 30 mag arcsec−2 aperture), the entire central region
of the image where we expect to find ICL would be lost.
We therefore consider a less extreme method of removing
the stellar profile by iteratively matching the saturated
stars’ profiles with the known PSF shape. We measure
the saturated star profiles on an image which has had ev-
ery object except for those two saturated stars masked, as
described in §4.2.2. We can then scale our measured PSF
to the star’s profile, at radii where there is expected to be
no contamination from the ICL, and the star’s flux is not
saturated. Since the two stars are within an arcminute of
each other, the scaled profiles of the stars are iteratively
subtracted from the masked cluster image until the process
converges on solutions for the scaling of each star. We still
use a mask for the inner region (∼ 75′′) where saturation
and seeing affect the profile shape.
4.2.2. Galaxies
We want to remove all the flux in our images associated
with galaxies. Although some galaxies might follow de-
Vaucouleurs, Sersic, or exponential profiles, those galaxies
which are near the centers of clusters cannot be fit with
these or other models either because of the overcrowding
in the center or because their profiles really are different
due to their location in a dense environment. A variety
of strategies for modeling galaxies within the centers of
clusters were explored in Paper I and were found to be inadequate
for these purposes. Since we cannot fit and subtract
the galaxies to remove their light, we instead mask
all galaxies in our cluster images.
By masking, we remove from our ICL measurements all
pixels above a surface brightness limit which are centered
on a galaxy as detected by SExtractor. For paper I, we
chose to mask inside of 2 - 2.3 times the radius at which
the galaxy light dropped below 26.4 mag arcsec−2 in r,
akin to 2-2.3 times a Holmberg radius (Holmberg 1958).
Holmberg radii are typically used to denote the outermost
radii of the stellar populations in galaxies.
Galaxy profiles will also have the characteristic underly-
ing shape of the PSF, including the extended halo. How-
ever for a 20th magnitude galaxy, the PSF is below 30 mag
arcsec−2 by a radius of 10′′.
Each of the clusters has a different native surface bright-
ness detection threshold based on the illumination correc-
tion and background subtraction, and they are all at dif-
ferent redshifts. However we want to mask galaxies at all
redshifts to the same physical surface brightness to allow
for a meaningful comparison between clusters at different
redshifts. To do this we make a correction for (1+z)4 sur-
face brightness dimming and a k correction for each cluster
when calculating mask sizes. The mask sizes change by
an average of 10% and at most 22% from what they would
have been given the native detection threshold. Both the
native and corrected surface brightness detection thresh-
olds are listed in Table 2. To test the effect of mask size on
the ICL profile and total flux, we also create masks which
are 30% larger and 30% smaller in area than the calcu-
lated mask size. The flux within the masked areas for
these galaxies is on average 25% more than the flux iden-
tified by SExtractor as the corrected isophotal magnitude
for each object.
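The threshold correction described above can be sketched as follows: the (1+z)^4 cosmological dimming contributes 10 log10(1+z) magnitudes, to which a per-cluster k-correction is added (the function name and input values are illustrative, not the paper's adopted numbers):

```python
import math

# Sketch of the per-cluster masking threshold: a fixed rest-frame
# surface brightness mu_rest (mag/arcsec^2) is dimmed by (1+z)^4,
# i.e. 2.5*log10((1+z)^4) = 10*log10(1+z) mag, plus a k-correction
# k_corr in magnitudes. Illustrative values only.
def observed_threshold(mu_rest, z, k_corr):
    """Observed surface brightness at which to cut the galaxy masks."""
    return mu_rest + 10.0 * math.log10(1.0 + z) + k_corr
```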
5. RESULTS
Here we discuss our methods for measuring both cluster
and ICL properties as well as a discussion of each individ-
ual cluster in our sample.
5.1. Cluster Properties
Cluster redshift, mass, and velocity dispersion are taken
from the literature, where available, as listed in table 1.
Additional properties that can be identified in our data,
particularly those which may correlate with ICL proper-
ties (cluster membership, flux, dynamical state, and global
density), are discussed below and also summarized in Ta-
ble 1.
5.1.1. Cluster Membership & Flux
Cluster membership and galaxy flux are both deter-
mined using a color magnitude diagram (CMD) of either
B− r vs. r (clusters with z < 0.1) or V − r vs. r (clusters
with z > 0.1). We create color magnitude diagrams for
all clusters using corrected isophotal magnitudes as deter-
mined by SExtractor. Membership is then assigned based
on a galaxy’s position in the diagram. If a given galaxy
is within 1σ of the red cluster sequence (RCS) determined
with a biweight fit, then it is considered a member (fits are
shown in Figure 2). All others are considered to be non-
member foreground or background galaxies. This method
selects the red elliptical galaxies as members. The ben-
efits of this method are that membership can easily be
calculated with 2 band photometry. The drawbacks are
that it both does not include some of the bluer true mem-
bers and does include some of the redder non-members.
An alternative method of determining cluster flux with-
out spectroscopy by integrating under a background sub-
tracted luminosity function is discussed in detail in §5.3
of paper I. Due to the large uncertainties involved in both
methods (∼ 30%), the choice of procedure will not greatly
affect the conclusions.
To determine the total flux in galaxies, we sum the flux
of all member galaxies within the same cluster radius. The
image size of our low-redshift clusters restricts that radius
to one quarter of the virial radius of the cluster where virial
radii are taken from the literature or calculated from X–
ray temperatures as described in §A.1-A.10. From tests
with those clusters where we do have some spectroscopic
membership information from the literature (see §A.3 &
§A.6), we expect the uncertainty in flux from using the
CMD for membership to be ∼ 30%.
Fits to the CMDs produce the mean color of the red
ellipticals, the slope of the color versus magnitude relation
(CMR) for each cluster, and the width of that distribution.
Among our 10 clusters, the color of the red sequence is cor-
related with redshift whereas the slopes of the relations are
roughly the same across redshift, consistent with López-
Cruz et al. (2004). The widths of the CMRs vary from
0.1 to 0.4 magnitudes. This is expected if these clusters
are made up of multiple clumps of galaxies all at similar,
but not exactly the same, redshifts. True background and
foreground groups and clusters can also add to the width
of the RCS.
In order to compare fluxes from all clusters, we consider
two correction factors. First, galaxies below the detection
threshold will not be counted in the cluster flux as we have
measured it, and will instead contribute to the ICL flux.
Since each cluster has a different detection threshold based
mainly on the quality of the illumination correction (see
Table 2), we calculate individually for each cluster the flux
contribution from galaxies below the detection threshold.
Without luminosity functions for each cluster, we adopt
the Goto et al. (2002) luminosity function based on 200
Sloan clusters (α′r = −0.85 ± 0.03). The flux from dwarf
galaxies below the detection threshold ( M = −11 in r) is
less than or equal to 0.1% of the flux from sources above
the detection threshold (our assumed value of total flux).
This is an extremely small contribution due to the faint
end slope, and our deep, uniform images with detection
thresholds in all cases more than 7 magnitudes dimmer
than M∗. Our surface brightness detection thresholds are
low enough that we don’t expect to miss galaxies of nor-
mal surface brightness below our detection threshold at
any redshift assuming that all galaxies at all redshifts have
similar central surface brightnesses.
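The size of this faint-end contribution can be checked by numerically integrating a Schechter luminosity function with the quoted slope (α = −0.85). The magnitude limits below are illustrative stand-ins for a detection threshold ∼7 mag below M* and a faint cutoff several magnitudes beyond that; constants common to numerator and denominator cancel in the ratio:

```python
import math

# Rough sketch of the faint-galaxy flux estimate: compare the
# luminosity-weighted Schechter integral below the detection limit
# to the total. Magnitudes are measured relative to m_star, so only
# differences matter. Limits are illustrative.
def schechter_flux_fraction(alpha, m_faint, m_limit, m_star):
    """Fraction of total flux from galaxies fainter than m_limit."""
    def lum(m):  # luminosity in units of L*
        return 10.0 ** (-0.4 * (m - m_star))

    def flux(m_lo, m_hi, steps=20000):
        # integrand L*phi(M) is proportional to L**(alpha+2) * exp(-L)
        # per magnitude; overall constants cancel in the ratio.
        dm = (m_hi - m_lo) / steps
        total = 0.0
        for i in range(steps):
            L = lum(m_lo + (i + 0.5) * dm)
            total += L ** (alpha + 2) * math.exp(-L) * dm
        return total

    below = flux(m_limit, m_faint)
    return below / (flux(m_star - 5.0, m_limit) + below)
```

With a threshold 7 mag below M* and a faint cutoff 10.5 mag below M*, this integral gives a fraction well under a percent, consistent with the ≤ 0.1% figure quoted above.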
Second, we apply k and evolutionary corrections to ac-
count for the shifting of the bandpasses through which we
are observing, and the evolution of the galaxy spectra due
to the range in redshifts we observe. We use Poggianti
(1997) for both of these corrections as calculated for a sim-
ple stellar population of elliptical galaxies in B, V , and r.
5.1.2. Dynamical Age
Dynamical age is an important cluster characteristic for
this work as dynamical age is tied to the number of past
interactions among the galaxies. We discuss four methods
for estimating cluster dynamical age based on optical and
X–ray imaging. The first two methods are based on cluster
morphology using Bautz Morgan type and an indication of
the presence of a cD galaxy. We use morphology as a proxy
for dynamical age since clusters with single large ellipti-
cal galaxies at their centers (cD) have presumably been
through more mergers and interactions than clusters that
have multiple clumps of galaxies where none have settled
to the center of the potential. Those clusters with more
mergers are dynamically older, therefore clusters with cD
galaxies should be dynamically older. Specifically Bautz
Morgan type is a measure of cluster morphology defined
such that type I clusters have cD galaxies, type III clusters
do not have cD galaxies, and type II clusters may show
cD-like galaxies which are not centrally located. Bautz
Morgan type is not reliable as Abell did not have mem-
bership information. To this we add our own binary in-
dicator of cluster morphology; clusters which have single
galaxy peaks in the centers of their ICL distributions (cD
galaxies) versus clusters which have multiple galaxy peaks
in the centers of their ICL distributions (no cD).
We have more information about the dynamical age of
the cluster beyond just the presence or absence of a cD
galaxy, namely the difference in brightness of the bright-
est cluster galaxy (BCG) relative to the next few bright-
est galaxies in the cluster (the luminosity gap statistic
Milosavljević et al. 2006), which is our third estimate of
dynamical age. Clusters with one bright galaxy that is
much brighter than any of the other cluster galaxies im-
ply an old dynamic age because it takes time to form that
bright galaxy through multiple mergers. Conversely, mul-
tiple evenly bright galaxies imply a cluster that is dynam-
ically young. For our sample we measure the magnitude
differences between the first (M1) and second (M2) bright-
est galaxies that are considered members based on color.
We run the additional test of comparing M2-M1 with M3-
M1, where consistency between these values ensures a lack
of foreground or background contamination. Values of M3-
M1 range from 0.24 to 1.1 magnitudes and are listed in
Table 1. This is the most reliable measure of dynamic
age available to us in this dataset. In a sample of 12
galaxy groups from N-body hydrodynamical simulations,
D’Onghia et al. (2005) find a clear, strong correlation be-
tween the luminosity gap statistic and formation time of
the group (spearman rank coefficient of 0.91) such that
δmag increases by 0.69 ± 0.41(1σ) magnitudes for every
one billion years of formation. We assume this simulation
is also an accurate reflection of the evolution of clusters
and therefore that M3-M1 is well correlated with forma-
tion time and therefore dynamical age of the clusters.
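Under that assumption, converting a luminosity gap into a relative formation-time offset is a simple scaling by the quoted slope (0.69 mag per Gyr). A sketch of that scaling, not a calibrated age estimator:

```python
# Illustrative conversion of the luminosity gap statistic into a
# relative formation-time offset, using the D'Onghia et al. (2005)
# slope of 0.69 mag per Gyr quoted above. The +/- 0.41 mag scatter
# on that slope is ignored here.
def gap_to_age_gyr(delta_mag, slope=0.69):
    """Formation-time offset (Gyr) implied by a magnitude gap."""
    return delta_mag / slope
```

For the M3-M1 range of 0.24 to 1.1 mag quoted above, this scaling spans roughly 0.3 to 1.6 Gyr in relative formation time.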
The fourth method for measuring dynamical state is
based on the X–ray observations of the clusters. In a sim-
ulation of 9 cluster mergers with mass ratios ranging from
1:1 to 10:1 with a range of orbital properties, Poole et al.
(2006) show that clusters are virialized at or shortly af-
ter they visually appear relaxed through the absence of
structures (clumps, shocks, cavities) or centroid shifts (X–
ray peak vs. center of the X–ray gas distribution). We
then assume that spherically distributed hot gas as evi-
denced by the X–ray morphologies of the clusters free from
those structures and centroid shifts implies relaxed clusters
which are therefore dynamically older clusters that have
already been through significant mergers. With enough
photons, X–ray spectroscopy can trace the metallicity of
different populations to determine progenitor groups or
clusters. X–ray observations are summarized in §A.1 -
§A.10.
5.1.3. Global Density
Current global cluster density is an important cluster
characteristic for this work as density is correlated with
the past interaction rate among galaxies. We would like a
measure of the number of galaxies in each of the clusters
within some well defined radius which encompasses the po-
tentially dynamically active regions of the cluster. Abell
chose to calculate global density as the number of galaxies
with magnitudes between that of the third ranked member,
M3, and M3+2 within 1.5 Mpc of the cluster, statistically
correcting for foreground and background galaxy contami-
nation with galaxy densities outside of 1.5Mpc (Abell et al.
1989). The cluster galaxy densities are then binned into
richness classes with values of zero to three, where rich-
ness three clusters are higher density than richness zero
clusters. Cluster richnesses are listed in Table 1.
In addition to richness class we use a measure of global
density which has not been binned into coarse values and is
not affected by sample completeness. To do this we count
the number of member galaxies inside of 0.8 h70^-1 Mpc to
the same absolute magnitude limit for all clusters. Mem-
bership is assigned to those galaxies within 1σ of the color
magnitude relation (CMR). The density may be affected
by the width of the CMR if the CMR has been artificially
widened due to foreground and background contamina-
tion. We choose a magnitude limit of Mr = -18.5 which
is deep enough to get many tens of galaxies at all clus-
ters, but is shallow enough that our photometry is still
complete. At the most distant clusters (z=0.31), an Mr =
-18.5 galaxy is a 125σ detection. The numbers of galax-
ies in each cluster that meet these criteria range from 62
- 288, and are in good agreement with the broader Abell
richness determination. These density estimates are listed
in Table 1.
5.2. ICL properties
We detect an ICL component in all ten clusters of our
sample. We describe below our methods for measuring the
surface brightness profile, color, flux, and substructure in
that component.
5.2.1. Surface brightness profile
In eight out of 10 clusters the ICL component is central-
ized enough to fit with a single set of elliptical isophotes.
The exceptions are A0141 and AC118. We use the IRAF
routine ellipse to fit isophotes to the diffuse light which
gives us a surface brightness profile as a function of semi–
major axis. The masked pixels are completely excluded
from the fits. There are 3 free parameters in the isophote
fitting: center, position angle (PA), and ellipticity. We fix
the center and let the PA and ellipticity vary as a func-
tion of radius. Average ICL ellipticities range from 0.3 to
0.7 and vary smoothly if at all within each cluster. The
PA is notably coincident with that of the cD galaxy where
present (discussed in §A.1 - A.10).
We identify the surface brightness profile of the total
cluster light (ie., including resolved galaxies) for compari-
son with the ICL within the same radial extent. To do this,
we make a new “cluster” image by masking non-member
galaxies as determined from the color magnitude relation
(§5.1.1). A surface brightness profile of the cluster light is
then measured from this image using the same elliptical
isophotes as were used in the ICL profile measurement.
Figure 3 shows the surface brightness profiles of all eight
clusters for which we can measure an ICL profile. Individ-
ual ICL profiles in both r− and V− or B−bands are shown
in Figures 4 - 13. Results based on all three versions of
mask size (as discussed in §4.2.2) are shown via shading on
those plots. Note that we are not able to directly measure
the ICL at small radii (≲ 70 kpc) in any of the clusters
the ICL at small radii (<∼ 70kpc) in any of the clusters
because greater than 75% of those pixels are masked. The
uncertainty in the ICL surface brightness is dominated by
the accuracy with which the background level can be iden-
tified, while the error on the mean within each elliptical
isophote is negligible, as discussed in §5.3. Error bars in
Figures 3 and 4 - 13 show the 1σ uncertainty based on
the error budget for each cluster (see representative error
budget in Table 3).
The ICL surface brightness profiles have two interesting
characteristics. First, in all cases they can be fit by both
exponential and deVaucouleurs profiles. Both appear to
perform equally well given the large error bars at low sur-
face brightness. These profiles, in contrast to the galaxy
profiles, are relatively smooth, only occasionally reflect-
ing the clustering of galaxies. Second, the ICL is more
concentrated than the galaxies, which is to say that the
ICL falls off more rapidly with increased radius than the
galaxy light. In all cases the ICL light is decreasing rapidly
enough at large radii such that the additional flux beyond
the radius at which we can reliably measure the surface
brightness is at most 10% of the flux inside of that radius
based on an extrapolation of the exponential fit.
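That 10% bound follows directly from the exponential fit: for I(r) = I0 exp(−r/h), the flux outside radius R is analytic. A sketch under the simplifying assumption of circular symmetry (the scale length h and radius R below are illustrative, not fitted values):

```python
import math

# For an exponential profile I(r) = I0 * exp(-r/h), the flux outside
# radius R is 2*pi*I0*h*(R + h)*exp(-R/h) and the total is
# 2*pi*I0*h**2; I0 cancels in the ratio. Circular symmetry is
# assumed for simplicity.
def outer_flux_fraction(R, h):
    """Flux beyond radius R as a fraction of the flux inside R."""
    total = 2.0 * math.pi * h * h
    outside = 2.0 * math.pi * h * (R + h) * math.exp(-R / h)
    return outside / (total - outside)
```

For R of a few scale lengths the fraction drops quickly; by R ≈ 5h the extrapolated outer flux is under 10% of the flux interior to R, in line with the bound quoted above.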
There are 2 clusters (A0141, Figure 11 & AC118, Fig-
ure 13) for which there is no single centralized ICL profile.
These clusters do not have a cD galaxy, and their giant el-
lipticals are distant enough from each other that the ICL is
not a continuous centralized structure. We therefore have
no surface brightness profile for those clusters although we
are still able to measure an ICL flux, as discussed below.
We attempt to measure the profile of the cD galaxy
where present in our sample. To do this we remove the
mask of that galaxy and allow ellipse to fit isophotes all
the way into the center. In 5 out of 7 clusters with a cD
galaxy, the density of galaxies at the center is so great that
just removing the mask for the cD galaxy is not enough
to reveal the center of the cluster due to the other over-
lapping galaxies. Only for A4059 and A2734 are we able
to connect the ICL profile to the cD profile at small radii.
These are shown in Figures 4 & 6.
In both cases the entire profile of the cD plus ICL is well
fit by a single deVaucouleurs profile, although it can also
be fit by two deVaucouleurs profiles. The profiles cannot
be fit with single exponential functions. We do not see a
break between the cD and ICL profiles as seen by Gonzalez
et al. (2005). While those authors find that breaks in the
extended BCG profile are common in their sample, ∼ 25%
of the BCG’s in that sample did not show a clear pref-
erence for a double deVaucouleurs model over the single
deVaucouleurs model. In both clusters where we measure
a cD profile, the color appears to start out with a blue
color gradient, and then turn around and become increas-
ingly redder at large radii as the ICL component becomes
dominant (see Figures 4 & 6).
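The deVaucouleurs fits above exploit the fact that such a profile, μ(r) = μe + 8.327[(r/re)^(1/4) − 1], is a straight line in r^(1/4). A minimal sketch (with assumed, illustrative values of μe and re, not values from this paper) shows how the effective radius is recovered from a linear fit:

```python
import numpy as np

# Synthetic deVaucouleurs profile (mu_e and r_e are assumed, illustrative):
# mu(r) = mu_e + 8.327*((r/r_e)**0.25 - 1) is a straight line in r**0.25.
mu_e, r_e = 24.0, 100.0                 # mag arcsec^-2 and kpc (assumed)
r = np.linspace(50.0, 400.0, 40)        # radii in kpc
mu = mu_e + 8.327 * ((r / r_e) ** 0.25 - 1.0)

# A linear fit in x = r**0.25 recovers the effective radius from the slope.
slope, intercept = np.polyfit(r ** 0.25, mu, 1)
r_e_fit = (8.327 / slope) ** 4
print(round(r_e_fit, 1))  # 100.0
```

Fitting a double deVaucouleurs model amounts to fitting the sum of two such components, which requires a nonlinear least-squares fit rather than this linearized form.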
5.2.2. ICL Flux
The total amount of light in the ICL and the ratio of
ICL flux to total cluster flux can help constrain the impor-
tance of galaxy disruption in the evolution of clusters. As
some clusters have cD galaxies in the centers of their ICL
distribution, we need a consistent, physically motivated
method of measuring ICL flux in the centers of those clus-
ters as compared to the clusters without a single central-
ized galaxy. The key difference here is that in cD clusters
the ICL stars will blend smoothly into the galaxy occupy-
ing the center of the potential well, whereas with non-cD
clusters the ICL stars in the center are unambiguous. Since
our physical motivation is to understand galaxy interac-
tions, we consider ICL to be all stars which were at some
point stripped from their original host galaxies, regardless
of where they are now.
In the case of clusters with cD galaxies, although we
cannot separate the ICL from the galaxy flux in the cen-
ter of the cluster, we can measure the ICL profile outside
of the cD galaxy. Gonzalez et al. (2005) have shown for
a sample of 24 clusters that a BCG with ICL halo can be
well fit with two deVaucouleurs profiles. The two profiles
imply two populations of stars which follow different or-
bits. We assume stars on the inner profile are cD galaxy
stars and those stars on the outer profile are ICL stars.
Gonzalez et al. (2005) find that the outer profile on aver-
age accounts for 80% of the combined flux and becomes
dominant at 40-100kpc from the center which is at sur-
face brightness levels of 24 - 25 mag arcsec−2 in r. Since
all of our profiles are well beyond this radius and well be-
low this surface brightness level, we conclude that the ICL
profile we identify is not contaminated by cD galaxy stars.
Assuming that the stars on the outer profile have differ-
ent orbits than the stars on the inner profile, we calculate
ICL flux by summing all the light in the outer profile from
a radius of zero to the radius at which the ICL becomes
undetectable. Note that this method identifies ICL stars
regardless of their current state as bound or unbound from
the cD galaxy.
We therefore calculate ICL flux by first finding the
mean surface brightness in each elliptical annulus, where all
masked pixels are not included. This mean flux is then
summed over all pixels within that annulus including the
ones which were masked. This represents a difference from
paper I where we performed an integration on the fit to the
ICL profile; here we sum the profile values themselves. We
are justified in using the area under the galaxy masks for
the ICL sum since the galaxies only account for less than
3% of the volume of the cluster regardless of projected
area.
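The masked-annulus sum described above can be sketched as follows; `image`, `mask`, and `annulus_id` (a per-pixel annulus label) are hypothetical arrays standing in for the authors' actual data structures:

```python
import numpy as np

def icl_flux(image, mask, annulus_id, n_annuli):
    """Sum ICL flux as described above: the mean of unmasked pixels in each
    elliptical annulus, multiplied by the full pixel count of that annulus
    (masked pixels included)."""
    total = 0.0
    for k in range(n_annuli):
        in_ann = annulus_id == k
        good = in_ann & ~mask
        if good.any():
            total += image[good].mean() * in_ann.sum()
    return total

# Toy example: uniform surface brightness of 2.0 with half the pixels masked.
img = np.full((4, 4), 2.0)
msk = np.zeros((4, 4), dtype=bool)
msk[:, :2] = True                  # "galaxies" cover half the pixels
ann = np.zeros((4, 4), dtype=int)  # a single annulus labeled 0
print(icl_flux(img, msk, ann, 1))  # 32.0 = mean 2.0 over all 16 pixels
```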
There are two non-cD clusters (A141 & A118) for which
we could not recover a profile. We calculate ICL flux for
those clusters by measuring the mean flux within three con-
centric, manually placed elliptical annuli (again not uti-
lizing masked pixels), and then summing that mean
flux over all pixels in those annuli. All ICL fluxes are
subject to the same k and evolutionary corrections as dis-
cussed in §5.1.1.
5.2.3. ICL Fraction
In addition to fluxes, we present the ratio of ICL flux
to total cluster flux, where total cluster flux includes ICL
plus galaxy flux. Galaxy flux is taken from the CMDs out
to 0.25rvirial, as discussed in §5.1.1. ICL fractions range
from 6 to 22% in the r−band and 4 to 21% in the B−band
where the smallest fraction comes from A2556 and the
largest from A4059. All fluxes and fractions are listed in
Table 1. As mentioned in §5.1.1, there is no perfect way of
measuring cluster flux without a complete spectroscopic
survey. Based on those clusters where we do have some
spectroscopic information, we estimate the uncertainty in
the cluster flux to be ∼ 30% . This includes both the ab-
sence from the calculation of true member galaxies, and
the false inclusion of non-member galaxies.
All cluster fluxes as measured from the RCS do not in-
clude blue member galaxies so those fluxes are potentially
lower limits to the true cluster flux, implying that the ICL
fractions are potentially biased high. This possible bias
is made more complicated by the known fact that not all
clusters have the same amount of blue member galaxies
(Butcher & Oemler 1984). Less evolved clusters (at higher
redshifts) will have higher fractions of blue galaxies than
more evolved clusters (at lower redshifts). Therefore ICL
fractions in the higher redshift clusters will be systemati-
cally higher than in the lower redshift clusters since their
fluxes will be systematically underestimated. We estimate
the impact of this effect using blue fractions from Couch
et al. (1998) who find maximal blue fractions of 60% of all
cluster galaxies at z = 0.3 as compared to ∼ 20% at the
present epoch. If none of those blue galaxies were included
in our flux measurement for AC114 and AC118 (the two
highest z clusters), this implies a drop in ICL fraction of
∼ 40% as compared to ∼ 10% at the lowest redshifts. This
effect will strengthen the relations discussed below.
Most simulations use a theoretically motivated defini-
tion of ICL which determines its fractional flux within r200
or rvir. It is not straightforward to compare our data to
those simulated values since our images do not extend to
the virial radius nor do they extend to infinitely low surface
brightness which keeps us from measuring both galaxy and
ICL flux at those large radii. The change in fractional flux
from 0.25rvir to rvir will be related to the relative slopes
of the galaxies versus ICL. As the ICL is more centrally
concentrated than the galaxies we expect the fractional
flux to decrease from 0.25rvir to rvir since the galaxies
will contribute an ever larger fraction to the total cluster
flux at large radii. We estimate what the fraction at rvir
would be for 2 clusters in our sample, A4059 and A3984
(steep profile and shallow profile respectively), by extrap-
olating the exponential fits to both the ICL and galaxy
profiles. Using the extrapolated flux values, the fractional
flux decreases by 10% where ICL and galaxy profiles are
steep and up to 90% where profiles are shallower.
5.2.4. Color
For those clusters with an ICL surface brightness pro-
file we measure a color profile as a function of radius by
binning together three to four points from the surface
brightness profile. All colors are k corrected and evolution
corrected assuming a simple stellar population (Poggianti
1997). Color profiles range from flat to increasingly red or
increasingly blue color gradients (see Figure 14). We fit
simple linear functions to the color profiles with their cor-
responding errors. To determine if the color gradients are
statistically significant we look at the ±2σ values on the
slope of the linear fit. If those values do not include zero
slope, then we assume the color gradient is real. Color er-
ror bars are quite large, so in most cases 2σ does include a
flat profile. The significant color gradients (A4010, A3888,
A3984) are discussed in §A.1 - A.10.
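The 2σ slope criterion can be sketched with a weighted linear fit and its covariance matrix; the radii, colors, and errors below are invented purely for illustration:

```python
import numpy as np

def gradient_is_significant(radius, color, color_err):
    """Weighted linear fit color = a*radius + b; the gradient is deemed
    real when the 2-sigma interval on the slope excludes zero."""
    coeffs, cov = np.polyfit(radius, color, 1, w=1.0 / color_err, cov=True)
    slope, slope_err = coeffs[0], np.sqrt(cov[0, 0])
    return abs(slope) > 2.0 * slope_err

# Invented toy profiles (radii in kpc, B-r colors, 0.05 mag errors).
r = np.array([50.0, 100.0, 150.0, 200.0])
err = np.full(4, 0.05)
flat = np.array([2.30, 2.28, 2.32, 2.29])  # scatter only, no gradient
red = np.array([2.10, 2.26, 2.39, 2.55])   # clear red-ward gradient
print(gradient_is_significant(r, flat, err))  # False
print(gradient_is_significant(r, red, err))   # True
```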
For all clusters an average ICL color is used to compare
with cluster properties. In the case where there is a color
gradient, that average color is taken as an average of all
points with error bars less than one magnitude.
5.2.5. ICL Substructure
Using the technique of unsharp masking (subtracting a
smoothed version of the image from itself) we scan each
cluster for low surface brightness (LSB) tidal features as
evidence of ongoing galaxy interactions and thus possible
ongoing contribution to the ICL . All 10 clusters do have
multiple LSB features which are likely from tidal interac-
tions between galaxies, although some are possibly LSB
galaxies seen edge on. For example we see multiple inter-
acting galaxies and warped galaxies, as well as one shell
galaxy. For further discussion see §6.5 of paper I. From the
literature we know that the two highest redshift clusters
in the sample (AC114 and AC118, z=0.31) have a higher
fraction of interacting galaxies than other clusters (∼ 12%
of galaxies, Couch et al. 1998). In two of our clusters,
A3984 and A141, there appears to be plume-like structure
in the diffuse ICL, which is to say that the ICL stretches
from the BCG towards another set of galaxies. Of this
sample, only A3888 has a large, hundred kpc scale, arc
type feature, see Figure 9 and Table 2 of paper I. There
are ∼ 4 examples of these large features in the literature
(Gregg & West 1998; Calcáneo-Roldán et al. 2000; Feld-
meier et al. 2004; Mihos et al. 2005). These structures
are not expected to last longer than a few cluster cross-
ing times, so we do not necessarily expect to find them in our
sample. Furthermore, it is possible that there is signifi-
cant ICL substructure below our surface brightness limits
(Rudick et al. 2006).
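Unsharp masking itself is simple to sketch; the frame below is a toy image (a smooth gradient plus one artificial streak standing in for an LSB tidal feature), not cluster data, and the smoothing scale is arbitrary:

```python
import numpy as np
from scipy.ndimage import median_filter

def unsharp_mask(image, size=21):
    """Subtract a median-smoothed version of the image from itself,
    leaving only structure smaller than the smoothing scale."""
    return image - median_filter(image, size=size)

# Toy frame: a smooth background gradient plus one narrow bright streak.
y, x = np.mgrid[0:100, 0:100]
frame = 0.01 * x
frame[50, 20:80] += 5.0
resid = unsharp_mask(frame, size=21)
print(resid[50, 50] > 4.0)        # the streak survives the subtraction
print(abs(resid[10, 50]) < 0.1)   # the smooth background is removed
```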
5.2.6. Groups
In seven out of 10 clusters the diffuse ICL is determined
by eye to be multi-peaked
(A4059, A2734, A3888, A3984, A141, AC114, AC118). In
some cases those excesses surround the clumps of galax-
ies which appear to all be part of the same cluster, i.e.,
the clumps are within a few hundred kpc from the cen-
ter but have obvious separations, and there is no central
dominant galaxy (e.g., A118). In other cases, the sec-
ondary diffuse components are at least a Mpc from the
cluster center (e.g., A3888). In these cases, the secondary
diffuse light component is likely associated with groups
of galaxies which are falling in toward the center of the
cluster, and may be at various different stages of merg-
ing at the center. This is strong evidence for ICL cre-
ation in group environments, which is consistent with re-
cent measurements of a small amount of ICL in isolated
galaxy groups (Castro-Rodríguez et al. 2003; Durrell et al.
2004; Rocha & de Oliveira 2005). This is also consistent
with current simulations (Willman et al. 2004; Fujita 2004;
Gnedin 2003b; Rudick et al. 2006; Sommer-Larsen 2006,
and references therein). From the theory, we expect ICL
formation to be linked with the number density of galax-
ies. Since group environments can have high densities at
their centers and have lower velocity dispersions, it is not
surprising that groups have ICL flux associated with them.
Sommer-Larsen (2006) finds the intra-group light to have
very similar properties to the ICL: it makes up 12 − 45%
of the group light, has roughly deVaucouleurs profiles,
and varies in flux from group to group, with dynamically
older groups (fossil groups; D'Onghia et al. 2005) having
a larger amount of ICL. Groups in indi-
vidual clusters are discussed in §A.1 - A.10.
5.3. Accuracy Limits
The accuracy of the ICL surface brightness is limited
on small scales (< 10′′) by photon noise. On larger scales
(> 10′′), structure in the background level (be it intrinsic
or instrumental) will dominate the error budget. We de-
termine the stability of the background level in each clus-
ter image on large scales by first median smoothing the
masked image by 20′′. We then measure the mean flux in
thousands of random 1′′ regions more distant than 0.8 Mpc
from the center of the cluster. The standard deviation of
these regions represents the accuracy with which we can
measure the background on 20′′ scales. We tested the accu-
racy of this measure for even larger-scale uncertainties on
two clusters (A3880 from the 40” data and A3888 from the
100” data). We find that the uncertainty remains roughly
constant on scales equal to, or larger than, 20′′. These
accuracies are listed for each cluster in Table 2. Regions
from all around the frame are used to check that this es-
timate of standard deviation is universal across the image
and not affected by location in the frame. This empiri-
cal measurement of the large-scale fluctuations across the
image is dominated by the instrumental flat-fielding ac-
curacy, but includes contributions from the bias and dark
subtraction, physical variations in the sky level, and the
statistical uncertainties mentioned above.
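A toy version of this background-stability estimate (pure Gaussian noise standing in for a real masked frame; the smoothing scale, region size, and region count are arbitrary stand-ins for the 20″ smoothing and 1″ regions) illustrates the procedure:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Toy stand-in for a masked cluster frame: pure Gaussian noise.
img = rng.normal(0.0, 1.0, size=(300, 300))

# Median smooth (stand-in for the 20" smoothing scale).
smooth = median_filter(img, size=21)

# Mean flux in many randomly placed small regions (stand-in for the
# 1" regions more distant than 0.8 Mpc from the cluster center).
means = []
for _ in range(1000):
    i, j = rng.integers(0, 290, size=2)
    means.append(smooth[i:i + 10, j:j + 10].mean())

# The scatter of these means is the background accuracy on the smoothing scale.
background_rms = float(np.std(means))
print(background_rms < 0.2)  # far below the per-pixel noise of 1.0
```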
We examine the effect of including data taken under
non-photometric conditions on the large-scale background
illumination. This noise is fully accounted for in the mea-
surement described above. All B− and V− band data
were taken on photometric nights. Five clusters include
varying fractions of non-photometric r− band data; 47%
of A3880, 12% of A3888, 15% of A3984, 48% of A141, and
14% of A114 are non-photometric. For A3880, the clus-
ter with one of the largest fractions of non-photometric
data, we compare the measured accuracy on the combined
image which includes the non-photometric data with accu-
racy measured from a combined image which includes only
photometric frames. The resulting large-scale accuracy is
0.3 mag arcsec−2 better on the frame which includes only
photometric data. Although this does imply that the non-
photometric frames are noisier, the added signal strength
gained from having 4.5 more hours on source outweighs
the extra noise.
This empirical measurement of the large–scale back-
ground fluctuations is likely to be a conservative estimate
of the accuracy with which we can measure surface bright-
ness on large scales, because it is derived from the outer
regions of the image, where on average a factor of ∼ 2
fewer individual exposures have been combined than in the
central regions for the 100” data, and a factor of 1.5 fewer
for the 40” (which has a larger field of view and requires
less dithering). A larger number of dithered exposures at
a range of airmass, lunar phase, photometric conditions,
time of year, time of night, and distance to the moon has
the effect of smoothing out large-scale fluctuations in the
illumination pattern. We therefore expect greater accu-
racy in the center of the image where the ICL is being
measured.
We include a list of all sources of uncertainty for one cluster
in our sample (A3888) in Table 3 (reproduced here from
Paper I). In addition to the dominant uncertainty due to
the large-scale fluctuations on the background as discussed
above, we quantify the contributions from the photometry,
masking, and the accuracy with which we can measure the
mean in the individual elliptical isophotes. Errors for the
other clusters are similarly dominated by background fluc-
tuations, which are listed in Table 2. The errors on the
total ICL fluxes in all bands range from 17% to 70% with
an average of 39%. The exception is A2556 which reaches a
flux error of 100% in the B−band due to its extremely faint
profile (see §A.4). Assuming a 30% error in the galaxy flux
(see §5.1.1), the errors on the ICL fraction are on average
48%. The errors plotted on the surface brightness profiles
are the 1σ errors.
6. discussion
We measure a diffuse intracluster component in all ten
clusters in our sample. Clues to the physical mechanisms
driving galaxy evolution come from comparing ICL prop-
erties with cluster properties. We have searched for corre-
lations between the entire set of properties. Pairs of prop-
erties not explicitly discussed below showed no correla-
tions. Limited by a small sample and non-parametric data,
we use a Spearman rank test to determine the strength
of any possible correlations where 1.0 or -1.0 indicate a
definite correlation or anti–correlation respectively, and 0
indicates no correlation. Note that this test does not take
into account the errors in the parameters, and instead only
depends on their rank among the sample. Where a corre-
lation is indicated we show the fit as well as ±2σ in both
y-intercept and slope to graphically show the ranges of the
fit, and give some estimate of the strength of the correla-
tion.
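For reference, the test amounts to a rank correlation on the raw values; the numbers below are invented solely to show the mechanics, not measurements from this sample:

```python
import numpy as np
from scipy.stats import spearmanr

# Invented toy values: a cluster property vs. an ICL property (not real data).
density = np.array([30, 45, 60, 75, 90, 120, 150, 180, 210, 240])
icl_flux = np.array([5.2, 4.8, 4.9, 4.1, 3.8, 3.5, 3.1, 2.9, 2.5, 2.2])

rho, pval = spearmanr(density, icl_flux)
print(round(rho, 2))  # -0.99, a strong anti-correlation in rank
```

Because the statistic depends only on ranks, it ignores the measurement errors, as noted above.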
There are selection biases in our data between cluster
parameters due to our use of an Abell selected sample.
The Abell cluster sample is incomplete at high redshifts;
it does not include low-mass, low-luminosity, low-density,
high-redshift clusters because of the difficulty in obtaining
the required sensitivity with increasing redshift. Although
our 5 low-redshift clusters are not affected by this selection
effect, and should be a random sampling, small numbers
prevent those clusters from being fully representative of
the entire range of cluster properties.
Specifically we discuss the possibility that there is a real
trend underlying the selection bias in the cases of lower
luminosity (Figure 15) and lower density clusters (Figure
16) being preferentially found at lower redshift. Clusters
in our sample with less total galaxy flux are preferentially
found at low redshifts, however hierarchical formation pre-
dicts the opposite trend; clusters should be gaining mass
over time and hence light over time. Note that on size
scales much larger than the virial radius mass does not
change with time and therefore those systems can be con-
sidered as closed boxes; but on the size scales of our data,
a quarter of a virial radius, clusters are not closed boxes.
We might expect a slight trend, as was found, such that
lower density clusters are found at lower redshifts. As a
cluster ages, it converts a larger number of galaxies into
a smaller number of galaxies via merging and therefore
has a lower density at lower redshifts despite being more
massive than high redshift clusters. The infall of galaxies
works against this trend. The sum total of merger and
infall rates will control this evolution of density with red-
shift. The observed density redshift relation for this sam-
ple is strong; over the range z=0.3 - 0.05 (elapsed time of
3Gyr assuming standard ΛCDM) the projected number
density of galaxies has to change by a factor of 5.5, imply-
ing that every 5.5 galaxies in the cluster must have merged
into 1 galaxy in the last 3 Gyr. This is well above a re-
alistic merger rate for this timescale and this time period
(Gnedin 2003a). Instead it is likely that we are seeing the
result of a selection effect.
An interesting correlation which may be indirectly due
to the selection bias is that clusters with less total galaxy
flux tend to have lower densities (Figure 17). While we ex-
pect a smaller number of average galaxies to emit a smaller
amount of total light, it is possible that the low density
clusters are actually made up of a few very bright galaxies.
So although the trend might be real, it is also likely that
the redshift selection effect of both density and cluster flux
is causing these two parameters to be correlated.
A correlation which does not appear to be affected by
sample selection is that lower density clusters in our sam-
ple are weakly correlated with the presence of a cD galaxy,
see Figure 18. A possible explanation for this is that as
a cluster ages it will have made a cD galaxy out of many
smaller galaxies, so the density will actually be lower for
dynamically older clusters. Loh & Strauss (2006) find the
same correlation by looking at a sample of environments
around 2000 SDSS luminous red galaxies.
In the remainder of this section we examine the inter-
esting physics that can be gleaned from the combination
of cluster properties and ICL properties given the above
biases. The interpretation of ICL correlations with clus-
ter properties is highly complicated due not only to small
number statistics and the selection bias, but to the direc-
tion of the selection bias. Biases in mass, density, and
total galaxy flux with redshift will destructively combine
to cancel the trends which we expect to find in the ICL (as
described in the introduction). An added level of compli-
cation is due to the fact that we expect the ICL flux to be
evolving with time. We examine below each ICL property
in turn, including how the selection bias will affect any
conclusions drawn from the observed trends.
6.1. ICL flux
We see a range in ICL flux likely caused by the differ-
ing interaction rates and therefore differing production of
tidal tails, streams, plumes, etc. in different clusters. Clus-
ters include a large amount of tidal features at low surface
brightness as evidenced by their discovery at low redshift
where they are not as affected by surface brightness dim-
ming (Mihos et al. 2005). It is therefore not surprising
that we see a variation of flux levels in our own sample.
ICL flux is apparently correlated with three cluster pa-
rameters; M3-M1, density, and total galaxy flux (Figures
19, 17, & 20). There is no direct, significant correlation
between ICL flux and redshift. As discussed above, the se-
lection effects of density and mass with redshift will tend
to cancel any expected trends in either density, mass, or
redshift. We therefore are unable to draw conclusions from
these correlations. Zibetti et al. (2005), who have a sam-
ple of 680 SDSS clusters, are able to split their sample on
both richness and magnitude of the BCG (as a proxy for
mass). They find that both richer clusters and brighter
BCG clusters have brighter ICL than poor or faint clus-
ters.
6.1.1. ICL Flux vs. M3-M1
Figure 19 shows the moderate correlation between ICL
flux and M3-M1 such that clusters with cD galaxies have
less ICL than clusters without cD galaxies (Spearman coef-
ficient of -0.50). Although we choose M3-M1 to be cautious
about interlopers, M2-M1 shows the same trend with a
slightly more significant Spearman coefficient of -0.61. Our
simple binary indicator of the presence of a cD galaxy gives
the same result. Clusters with cD galaxies (7) have an av-
erage flux of 2.3 ± 0.96 × 10^11 (1σ) whereas clusters without
cD galaxies (3) have an average flux of 5.0 ± 0.18 × 10^11 (1σ).
Although density is correlated with M3-M1, and density
is affected by incompleteness, this trend of ICL flux with
M3-M1 is not necessarily caused by that selection effect.
Furthermore, the correlation of M3-M1 with redshift is
much weaker (if there at all) than trends of either density
or cluster flux with redshift. If the observed relation is due
to the selection effect then we are prevented from drawing
conclusions from this relation. Otherwise, if this relation
between ICL flux and the presence of a cD galaxy is not
caused by a selection effect, then we conclude that the
lower levels of measured ICL are a result of the ICL stars
being indistinguishable from the cD galaxy and therefore
the ICL is evolving in a similar way to a cD galaxy.
By which physical mechanism can the ICL stars end
up in the center of the cluster and therefore overlap with
cD stars? cD galaxies indicate multiple major mergers
of galaxies which have lost enough energy or angular mo-
mentum to now reside in the center of the cluster potential
well. ICL stars on their own will not be able to migrate to
the center over any physically reasonable timescales unless
they were stripped at the center, or are formed in groups
and get pulled into the center along with their original
groups (Merritt 1984).
Assuming the ICL is observationally inseparable from
the cD galaxy, we investigate how much ICL light the mea-
sured relation implies is hidden amongst the stars of the cD
galaxy. If 20% of the total cD + ICL light is added to the
value of the ICL flux in the outer profile, then the observed
trend of ICL flux with M3-M1 is weakened (Spearman co-
efficient drops from 0.5 to 0.4). If 30% of the total cD +
ICL light is hidden in the inner profile then the relation
disappears (Spearman coefficient of 0.22). The measured
relation between ICL r−band flux and dynamical age of
the clusters may then imply that 25-40% of the ICL is coin-
cident with the cD galaxy in dynamically relaxed clusters.
6.2. ICL fraction
We focus now on the fraction of total cluster light which
is in the diffuse ICL. If ICL and galaxy flux do scale to-
gether (not just due to the selection effect), then the ICL
fraction is the physically meaningful parameter in compar-
ison to cluster properties.
ICL fraction is apparently correlated with both mass
and redshift (Figure 21 & 22) and not with density or total
galaxy flux. The selection effect will again work against
the predicted trend of ICL fraction to increase with in-
creasing mass (Murante et al. 2004; Lin & Mohr 2004)
and increasing density. Therefore the lack of trends of
ICL fraction with mass and density could be attributable
to the selection bias.
6.2.1. ICL fraction vs. Mass
We find no trend in ICL fraction with mass. Our data
for ICL fraction as a function of mass is inconsistent with
the theoretical predictions of Murante et al. (2004), Mu-
rante et al. (2007) (based on a cosmological hydrodynam-
ical simulation including radiative cooling, star formation,
and supernova feedback), and Lin & Mohr (2004)(based on
a model of cluster mass and the luminosity of the BCG).
However Murante et al. (2007) show a large scatter of ICL
fractions within each mass bin. They also discuss the
dependence of the ICL fraction on a simulation's mass
resolution. These theoretical predictions are over-plotted on
Figure 21. Note that the simulations generally report the
fractional light in the ICL out to much larger radii (rvirial
or r200) than its surface brightness can be measured ob-
servationally. To compare the theoretical predictions at
rvirial to our measurement at 0.25rvirial, the predicted
values should be raised by some significant amount which
depends on the ICL and galaxy light profiles at large radii.
This makes the predictions and the data even more incon-
sistent than it first appears. As an example of the differ-
ences, a cluster with the measured ICL fraction of A3888
would require a factor of greater than 100 lower mass than
the literature values to fall along the predicted trend. Al-
though these clusters are not dynamically relaxed, such
large errors in mass are not expected. As an upper limit
on the ICL flux, if we assumed the entire cD galaxy was
made of intracluster stars, that flux plus the measured ICL
flux would still not be enough to raise the ICL fractions
to the levels predicted by these authors.
There are no evident correlations between velocity dis-
persion and ICL characteristics, although velocity disper-
sion is a mass estimator. Large uncertainties are presum-
ably responsible for the lack of correlation.
6.2.2. ICL fraction vs. Redshift
Figure 22 is a plot of redshift versus ICL fraction for
both the r− and B− or V− bands. We find a marginal anti–
correlation between ICL fraction and redshift with a very
shallow slope, if at all, in the direction that low redshift
clusters have higher ICL fractions (Spearman rank coeffi-
cient of -0.43). This relation is strengthened when assum-
ing fractions of blue galaxies are higher in the higher red-
shift clusters (Spearman rank of -0.6) (see §5.2.3). A trend
of ICL fraction with redshift tells us about the timescales
of the mechanisms involved in stripping stars from galax-
ies. This relation is possibly affected by the same redshift
selection effects as discussed above.
Over the redshift range of our clusters, 0.31 > z > 0.05,
a chi–squared fit to our data gives a range of fractional
flux of 11 to 14%. Willman et al. (2004) find the ICL
fraction grows from 14 to 19% over that same redshift
range. Willman et al. (2004) measure the ICL fraction at
r200 which means these values would need to be increased
in order to directly compare with our values. While their
normalization of the relation is not consistent with our
data, the slopes are roughly consistent, with the caveat of
the selection effect. The discrepancy is likely, at least in
part, caused by different definitions of ICL. Simulations
tag those particles which become unbound from galax-
ies whereas in practice we do not have that information
and instead use surface brightness cutoffs and ICL profile
shapes. Rudick et al. (2006) do use a surface brightness
cutoff in their simulations to tag ICL stars which is very
similar to our measurement. They find on average from
their 3 simulated clusters a change of ICL fraction of ap-
proximately 2% over this redshift range. We are not able
to observationally measure such a small change in fraction.
Rudick et al. (2006) predict that in order to grow the ICL
fraction by 10%, on average, we would need to track clus-
ters as they evolve from a redshift of 2 to the present.
However, both Willman et al. (2004) and Rudick et al.
(2006) find that the ICL fraction makes small changes over
short timescales (as major mergers or collisions occur).
6.3. ICL color
The average color of the ICL is roughly the same as
the color of the red ellipticals in each of the clusters. In
§8.1 of paper I we discuss the implications of this on ICL
formation redshift and metallicity. Zibetti et al. (2005)
have summed g−, r−, and i− band imaging of 680 clus-
ters in a redshift range of 0.2 - 0.3. Similar to our results,
they find that the summed ICL component has roughly
the same g − r color at all radii as the summed cluster
population including the galaxies. Since we have applied
an evolutionary correction to the ICL colors, if there is
only passive color evolution, the ICL will show no trend
with redshift. Indeed we find no correlation between B−r
color and the redshift of the cluster, as shown in Figure 23
(B − r = 2.3 ± 0.2(1σ)). ICL color may have the ability
to broadly constrain the epoch at which these stars were
stripped. In principle, as mentioned in the introduction,
we could learn at which epoch the ICL had been stripped
from the galaxies based on its color relative to the galaxies
assuming passively evolving ICL and ongoing star forma-
tion in galaxies. While this simple theory should be true,
the color difference between passively evolving stars and
low star forming galaxies may not be large enough to de-
tect since clusters are not made up of galaxies which were
all formed at a single epoch and we do not know the star
formation rates of galaxies once they enter a cluster.
ICL color may have the ability to determine the types
of galaxies from which the stars are being stripped. Un-
fortunately the difference in color between stars stripped
from ellipticals, and for example stars stripped from low
surface brightness dwarfs is not large enough to confirm in
our data given the large amount of scatter in the color of
the ICL (see paper I for a more complete discussion).
There is no correlation in our sample between the pres-
ence or direction of ICL color gradients and any cluster
properties. This is very curious since we see both blue-
ward and red-ward color gradients. A larger sample with
more accurate colors and without a selection bias might
be able to determine the origin of the color gradients.
6.4. Profile Shape
Figure 3 shows all eight surface brightness profiles for
clusters that have central ICL components. To facilitate
comparison, we have shifted all surface brightnesses to a
redshift of zero, including a correction for surface bright-
ness dimming, a k–correction, and an evolution correction.
We see a range in ICL profile shape from cluster to cluster.
This is consistent with the range of scale-lengths found in
other surveys (Gonzalez et al. 2005, find a range of scale
lengths from 18 - 480 kpc, fairly evenly distributed be-
tween 30 and 250 kpc).
The profiles are equally well fit by the empirically
motivated de Vaucouleurs profiles and by simple exponential
profiles, which are shown in the individual profile plots in
Figures 4 - 13. The profiles can also be fit with a Hubble–Reynolds
profile, which is a good substitute for the more
complicated surface brightness profile of an NFW density
profile (Lokas & Mamon 2001). An example of this profile
shape is shown in Figure 3 with a 100 kpc scale length, defined
as the radius inside of which the profile contains 25%
of the luminosity. This profile shape is what one would
predict given a simple spherical collapse model. The physically
motivated Hubble–Reynolds profile gives acceptable
fits to the ICL profiles with the exception of A4059, A2734,
& A2556, which have steeper profiles. We explore causes
of the differing profile shapes for these three clusters.
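The Hubble–Reynolds form, I(R) = I0/(1 + R/a)^2, translates into a simple expression in mag arcsec^-2. A sketch, where the central surface brightness mu0 and scale length a_kpc are illustrative inputs rather than fitted values from this paper:

```python
import math

def hubble_reynolds_mu(R_kpc: float, mu0: float, a_kpc: float) -> float:
    """Surface brightness (mag arcsec^-2) of a Hubble-Reynolds profile,
    I(R) = I0 / (1 + R/a)^2, used by Lokas & Mamon (2001) as a proxy
    for the projected NFW profile; mu = mu0 + 5 log10(1 + R/a)."""
    return mu0 + 5.0 * math.log10(1.0 + R_kpc / a_kpc)
```

At R = a the profile has faded by 5 log10(2) ≈ 1.5 mag from its central value; the steeper profiles of A4059, A2734, and A2556 fall off faster than this form allows.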
A steeper profile is correlated with M3-M1, density, total
cluster flux, and redshift. These three clusters have an
average M3-M1 value of 0.93 ± 0.27, as compared to the
average of 0.49 ± 0.20 for the remaining seven clusters. These
three clusters are also three of the four lowest-redshift clusters,
have an average of 93 galaxies, which is 45% smaller
than the value for the remaining sample, and have an average
cluster flux of 12.3 × 10^11 L_⊙, which is 47% smaller
than the value for the remaining sample.
We have the same difficulties here in distinguishing be-
tween the selection effects and the true physical correla-
tions. The key difference is that the three clusters with the
steepest profiles are the most relaxed clusters (which is not
a redshift selection effect). We use “most relaxed” to de-
scribe the three clusters with the most symmetric X–ray
isophotes that have single, central, smooth ICL profiles.
This is consistent with our finding that M3-M1 is a key in-
dicator of ICL flux in §6.1.1 and that ICL can form either
in groups at early times or at later times through galaxy in-
teractions in the dense part of the cluster. If galaxy groups
in which the ICL formed are able to get to the cluster cen-
ter then their ICL will also be found in the cluster center,
and can be hiding in the cD galaxy. If the galaxy groups
in which the ICL formed have not coalesced in the center
then the ICL will be less centrally distributed and there-
fore have a shallower profile. This is consistent with the
recent numerical work by Murante et al. (2007), who find
that the majority of the ICL is formed by the merging
processes which create the BCGs in clusters. This process
leads to the ICL having a steeper profile shape than
the galaxies, with more than half of the ICL
located inside of 250 h_70^-1 kpc, approaching radii where we
do not measure the ICL due to the presence of the BCG.
Their simulations also confirm that different clusters with
different dynamical histories will have differing amounts
and locations of ICL.
7. Conclusion
We have identified an intracluster light component in all
10 clusters which has fluxes ranging from 0.76 × 10^11 to
7.0 × 10^11 h_70^-1 L_⊙ in r and 0.14 × 10^11 to 1.2 × 10^11 h_70^-1 L_⊙ in
the B−band, ICL fractions of 6 to 22% of the total cluster
light within one quarter of the virial radius in r and 4 to
21% in the B−band, and B−r colors ranging from 1.49 to
2.75 magnitudes. This work shows that there is detectable
12 Krick, Bernstein
ICL in clusters and groups out to redshifts of at least 0.3,
and in two bands including the shorter wavelength B− or
V−band.
The interpretation of our results is complicated by small
number statistics, redshift selection effects of Abell clus-
ters, and the fact that the ICL is evolving with time. Of
the cluster properties (M3-M1, density, redshift, and clus-
ter flux), only M3-M1 and redshift are not correlated. As
a result of these selection effects ICL flux is apparently
correlated with density and total galaxy flux but not with
redshift or mass and ICL fraction is apparently correlated
with redshift but not with M3-M1, density, total galaxy
flux, or mass. However, we do draw conclusions from the
ICL color, average values of the ICL fractions, the relation
between ICL flux and M3-M1, and the ICL profile shape.
We find a passively evolving ICL color which is similar
to the color of the RCS at the redshift of each cluster.
The relations between ICL fraction and redshift and between
ICL fraction and mass show a disagreement between our data and
simulations, since our fractional fluxes are lower than those
predictions. These discrepancies do not seem to be caused
by the details of our measurement.
Furthermore, we find evidence that clusters with symmetric
X–ray profiles and cD galaxies have both less ICL
flux and significantly steeper profiles. The lower amount
of flux can be explained if ICL stars have become indistinguishable
from cD stars. As the cluster formed a
cD galaxy, any groups which participated in the merging
brought their ICL stars with them and created more
ICL through interactions. If a cD does not form, then the
ICL already in groups or actively forming is also prevented
from becoming very centralized, as it has no way of losing
energy or angular momentum on its own. While the
galaxies or groups are subject to tidal forces and dynamical
friction, the ICL, once stripped, will not be able to
lose energy or angular momentum to these forces,
and will instead stay on the orbit on which it formed.
Observed density may not be a good predictor of ICL
properties since it does not directly indicate the density
at the time at which the ICL was formed. We do, however,
expect density at any one epoch to be linked to ICL production
at that epoch through the interaction rates.
The picture that is emerging from this work is that ICL
is ubiquitous, not only in cD clusters, but in all clus-
ters, and in group environments. The amount of light
in the ICL is dependent upon cluster morphology. ICL
forms from ongoing processes including galaxy–galaxy in-
teractions and tidal interactions with the cluster potential
(Moore et al. 1996; Gnedin 2003b) as well as in groups
(Rudick et al. 2006). With time, as multiple interactions
and dissipation of angular momentum and energy lead
groups already containing ICL to the center of the cluster,
the ICL moves with the galaxies to the center and be-
comes indistinguishable from the cD’s stellar population.
Any ICL forming from galaxy interactions stays on the
orbit where it was formed.
A large, complete sample of clusters, including a pro-
portionate amount with high redshift and low density, will
be able to break the degeneracies present in this work.
Shifting to a lower redshift range will not be as beneficial,
because a range shorter than the one presented here will not
be large enough to reveal the predicted evolution in the ICL
fraction.
In addition to large numbers of clusters it would be
beneficial to go to extremely low surface brightness levels
(≲ 30 mag arcsec^-2) to reduce significantly the error
bars on the color measurement and thereby learn about the
progenitor galaxies of the ICL and the timescales for strip-
ping. It will not be easy to achieve these surface brightness
limits for a large sample which includes high-redshift low-
density clusters since those clusters will have very dim ICL
due to both an expected lower amount as correlated with
density, and due to surface brightness dimming.
We acknowledge J. Dalcanton and V. Desai for observ-
ing support and R. Dupke, E. De Filippis, and J. Kemp-
ner for help with X–ray data. We thank the anonymous
referee for useful suggestions on the manuscript. Par-
tial support for J.E.K. was provided by the National Sci-
ence Foundation (NSF) through UM’s NSF ADVANCE
program. Partial support for R.A.B. was provided by a
NASA Hubble Fellowship grant HF-01088.01-97A awarded
by Space Telescope Science Institute, which is operated
by the Association of Universities for Research in As-
tronomy, Inc., for NASA under contract NAS 5-2655.
This research has made use of data from the following
sources: USNOFS Image and Catalogue Archive operated
by the United States Naval Observatory, Flagstaff Station
(http://www.nofs.navy.mil/data/fchpix/); NASA/IPAC
Extragalactic Database (NED), which is operated by the
Jet Propulsion Laboratory, California Institute of Tech-
nology, under contract with the National Aeronautics
and Space Administration; the Two Micron All Sky Sur-
vey, which is a joint project of the University of Mas-
sachusetts and the Infrared Processing and Analysis Cen-
ter/California Institute of Technology, funded by the Na-
tional Aeronautics and Space Administration and the Na-
tional Science Foundation; the SIMBAD database, oper-
ated at CDS, Strasbourg, France; and the High Energy
Astrophysics Science Archive Research Center Online Service,
provided by the NASA/Goddard Space Flight Center.
REFERENCES
Abadi, M. G., Moore, B., & Bower, R. G. 1999, MNRAS, 308, 947
Abell, G. O., Corwin, H. G., & Olowin, R. P. 1989, ApJS, 70, 1
Allen, S. W. 1998, MNRAS, 296, 392
Andreon, S., Punzi, G., & Grado, A. 2005, MNRAS, 360, 727
Batuski, D. J., Miller, C. J., Slinglend, K. A., Balkowski, C.,
Maurogordato, S., Cayatte, V., Felenbok, P., & Olowin, R. 1999,
ApJ, 520, 491
Busarello, G., Merluzzi, P., La Barbera, F., Massarotti, M., &
Capaccioli, M. 2002, A&A, 389, 787
Butcher, H. & Oemler, A. 1984, ApJ, 285, 426
Calcáneo-Roldán, C., Moore, B., Bland-Hawthorn, J., Malin, D., &
Sadler, E. M. 2000, MNRAS, 314, 324
Campusano, L. E., Pelló, R., Kneib, J.-P., Le Borgne, J.-F., Fort, B.,
Ellis, R., Mellier, Y., & Smail, I. 2001, A&A, 378, 394
Caretta, C. A., Maia, M. A. G., & Willmer, C. N. A. 2004, AJ, 128,
Castro-Rodríguez, N., Aguerri, J. A. L., Arnaboldi, M., Gerhard, O.,
Freeman, K. C., Napolitano, N. R., & Capaccioli, M. 2003, A&A,
405, 803
Choi, Y.-Y., Reynolds, C. S., Heinz, S., Rosenberg, J. L., Perlman,
E. S., & Yang, J. 2004, ApJ, 606, 185
Colless, M., Dalton, G., Maddox, S., Sutherland, W., Norberg, P.,
Cole, S., Bland-Hawthorn, J., Bridges, T., Cannon, R., Collins,
C., Couch, W., Cross, N., Deeley, K., De Propris, R., Driver, S. P.,
Efstathiou, G., Ellis, R. S., Frenk, C. S., Glazebrook, K., Jackson,
C., Lahav, O., Lewis, I., Lumsden, S., Madgwick, D., Peacock,
J. A., Peterson, B. A., Price, I., Seaborne, M., & Taylor, K. 2001,
MNRAS, 328, 1039
Collins, C. A., Guzzo, L., Nichol, R. C., & Lumsden, S. L. 1995,
MNRAS, 274, 1071
Couch, W. J., Balogh, M. L., Bower, R. G., Smail, I., Glazebrook,
K., & Taylor, M. 2001, ApJ, 549, 820
Couch, W. J., Barger, A. J., Smail, I., Ellis, R. S., & Sharples, R. M.
1998, ApJ, 497, 188
Couch, W. J. & Sharples, R. M. 1987, MNRAS, 229, 423
Cypriano, E. S., Sodré, L. J., Kneib, J.-P., & Campusano, L. E. 2004,
ApJ, 613, 95
Dahle, H., Kaiser, N., Irgens, R. J., Lilje, P. B., & Maddox, S. J.
2002, ApJS, 139, 313
De Filippis, E., Bautz, M. W., Sereno, M., & Garmire, G. P. 2004,
ApJ, 611, 164
D’Onghia, E., Sommer-Larsen, J., Romeo, A. D., Burkert, A.,
Pedersen, K., Portinari, L., & Rasmussen, J. 2005, ApJ, 630, L109
Driver, S. P., Couch, W. J., & Phillipps, S. 1998, MNRAS, 301, 369
Dubinski, J. 1998, ApJ, 502, 141
Durrell, P. R., Decesar, M. E., Ciardullo, R., Hurley-Keller, D., &
Feldmeier, J. J. 2004, in IAU Symposium, ed. P.-A. Duc, J. Braine,
& E. Brinks, 90
Ebeling, H., Voges, W., Bohringer, H., Edge, A. C., Huchra, J. P., &
Briel, U. G. 1996, MNRAS, 281, 799
Feldmeier, J. J., Mihos, J. C., Morrison, H. L., Harding, P., Kaib,
N., & Dubinski, J. 2004, ApJ, 609, 617
Feldmeier, J. J., Mihos, J. C., Morrison, H. L., Rodney, S. A., &
Harding, P. 2002, ApJ, 575, 779
Fujita, Y. 2004, PASJ, 56, 29
Girardi, M., Borgani, S., Giuricin, G., Mardirossian, F., & Mezzetti,
M. 1998a, ApJ, 506, 45
Girardi, M., Giuricin, G., Mardirossian, F., Mezzetti, M., & Boschin,
W. 1998b, ApJ, 505, 74
Girardi, M. & Mezzetti, M. 2001, ApJ, 548, 79
Gnedin, O. Y. 2003a, ApJ, 589, 752
—. 2003b, ApJ, 582, 141
Gonzalez, A. H., Zabludoff, A. I., & Zaritsky, D. 2005, ApJ, 618, 195
Goto, T., Okamura, S., McKay, T. A., Bahcall, N. A., Annis, J.,
Bernard, M., Brinkmann, J., Gómez, P. L., Hansen, S., Kim,
R. S. J., Sekiguchi, M., & Sheth, R. K. 2002, PASJ, 54, 515
Govoni, F., Enßlin, T. A., Feretti, L., & Giovannini, G. 2001, A&A,
369, 441
Gregg, M. D. & West, M. J. 1998, Nature, 396, 549
Holmberg, E. 1958, Meddelanden fran Lunds Astronomiska
Observatorium Serie II, 136, 1
Katgert, P., Mazure, A., den Hartog, R., Adami, C., Biviano, A., &
Perea, J. 1998, A&AS, 129, 399
Krick, J. E., Bernstein, R. A., & Pimbblet, K. A. 2006, AJ, 131, 168
Lin, Y. & Mohr, J. J. 2004, ApJ, 617, 879
Loh, Y.-S. & Strauss, M. A. 2006, MNRAS, 366, 373
Lokas, E. L. & Mamon, G. A. 2001, MNRAS, 321, 155
López-Cruz, O., Barkhouse, W. A., & Yee, H. K. C. 2004, ApJ, 614,
Merritt, D. 1984, ApJ, 276, 26
Mihos, J. C., Harding, P., Feldmeier, J., & Morrison, H. 2005, ApJ,
631, L41
Milosavljević, M., Miller, C. J., Furlanetto, S. R., & Cooray, A. 2006,
ApJ, 637, L9
Moore, B., Katz, N., Lake, G., Dressler, A., & Oemler, A. 1996,
Nature, 379, 613
Murante, G., Arnaboldi, M., Gerhard, O., Borgani, S., Cheng, L. M.,
Diaferio, A., Dolag, K., Moscardini, L., Tormen, G., Tornatore, L.,
& Tozzi, P. 2004, ApJ, 607, L83
Murante, G., Giovalli, M., Gerhard, O., Arnaboldi, M., Borgani, S.,
& Dolag, K. 2007, ArXiv Astrophysics e-prints
Muriel, H., Quintana, H., Infante, L., Lambas, D. G., & Way, M. J.
2002, AJ, 124, 1934
Pimbblet, K. A., Smail, I., Kodama, T., Couch, W. J., Edge, A. C.,
Zabludoff, A. I., & O’Hely, E. 2002, MNRAS, 331, 333
Poggianti, B. M. 1997, A&AS, 122, 399
Poole, G. B., Fardal, M. A., Babul, A., McCarthy, I. G., Quinn, T.,
& Wadsley, J. 2006, MNRAS, 373, 881
Reimers, D., Koehler, T., & Wisotzki, L. 1996, A&AS, 115, 235
Reiprich, T. H. & Böhringer, H. 2002, ApJ, 567, 716
Rocha, C. D. & de Oliveira, C. M. 2005, MNRAS, 364, 1069
Rudick, C. S., Mihos, J. C., & McBride, C. 2006, ApJ, 648, 936
Smith, R. E., Dahle, H., Maddox, S. J., & Lilje, P. B. 2004, ApJ,
617, 811
Sommer-Larsen, J. 2006, MNRAS, 369, 958
Struble, M. F. & Rood, H. J. 1999, ApJS, 125, 35
Teague, P. F., Carter, D., & Gray, P. M. 1990, ApJS, 72, 715
Willman, B., Governato, F., Wadsley, J., & Quinn, T. 2004, MNRAS,
355, 159
Wu, X., Xue, Y., & Fang, L. 1999, ApJ, 524, 22
Zibetti, S., White, S. D. M., Schneider, D. P., & Brinkmann, J. 2005,
MNRAS, 358, 949
Table 3
Error Budget

                                                          Contribution to ICL uncertainty (%)
Source                1σ uncertainty (V / r)              µ(0″-100″)    µ(100″-200″)   total ICL flux
                                                          (V)   (r)     (V)   (r)      (V)   (r)
background level^a    29.5 / 28.8 mag arcsec^-2            14    18      39    45       24    31
photometry            0.02 / 0.03 mag                       2     3       2     3        2     3
masking^b             variation in mask area ±30%           5     5      14    19        9    12
std. dev. in mean^c   32.7 / 32.7 mag arcsec^-2             3     2       2     1        3     1
(total)                                                    15    19      41    50       26    33
cluster flux^d        16% / 16%                            ···   ···     ···   ···      ···   ···

Note. — a: Large-scale fluctuations in background level are measured empirically and include instrumental
calibration uncertainties as well as true variations in background level (see §5.3). b: Object masks were scaled
by ±30% in area to test the impact on the ICL measurement (see §4.2.2). c: The statistical uncertainty in the mean
surface brightness of the ICL in each isophote. d: Errors on the total cluster flux are based on errors in the fit to the
luminosity function (see §5.1.1).
Fig. 1.— The PSF of the 40-inch Swope telescope at Las Campanas Observatory. The y-axis shows surface brightness scaled to correspond
to the total flux of a zero magnitude star. The profile within 5′′ was measured from unsaturated stars and can be affected by seeing. The
outer profile was measured from two stars with super-saturated cores imaged in two different bands. The profile with the bump in it at 100′′
is the r−band profile; that without the bump is the B−band PSF. The bump in the profile at 100″ is due to a reflection off the CCD which
then bounces off of the filter and back down onto the CCD. The outer surface brightness profile decreases as r^-2 in the r−band and r^-1.6
in the B−band, shown by the dashed lines. An r^-3.9 profile is plotted to show the range in slopes.
Fig. 2.— The color magnitude diagrams for all ten clusters in increasing redshift order from left to right, top to bottom: A4059, A3880, A2734,
A2556, A4010, A3888, A3984, A141, AC114, AC118. All galaxies detected in our image are denoted with a gray star. Those galaxies which
have membership information in the literature are over-plotted with open black triangles (members) or squares (non-members) (membership
references are given in §A.1 - A.10). Solid lines indicate a biweight fit to the red sequence with 1σ uncertainties.
Fig. 3.— Surface brightness profiles for the eight clusters with a measurable profile. Profiles are listed on the plot in order of ascending
redshift. To avoid crowding, error bars are only plotted on one of the profiles. Errors on the other profiles are similar at similar surface
brightnesses. All surface brightnesses have been shifted to z = 0 using surface brightness dimming, k, and evolutionary corrections. The
x-axis remains in arcseconds and not in Mpc since the y-axis is in reference to arcseconds. Physical scales are noted on the individual plots
(Figures 4 - 13). In addition, marks have been placed on each profile at the distances corresponding to 200 kpc and 300 kpc. Also included, as the solid
black line near the bottom of the plot, is a Hubble–Reynolds surface brightness profile as a proxy for an NFW density profile with a scale
length of 100 kpc. The ICL does not have a single uniform amount of flux or profile shape. Profile shape does correlate with dynamical age:
those clusters with steeper profiles are dynamically more relaxed (see §6.4).
Fig. 4.— A4059. The plots moving left to right and top to bottom are as follows. The first is our final combined r−band image zoomed
in on the central cluster region. The second plot shows X–ray isophotes where available. Some clusters were observed during the ROSAT
all sky survey, and so have X–ray luminosities, but have not had targeted observations to allow isophote fitting. Isophote levels are derived
from quick-look images taken from HEASARC. X-ray luminosities of these clusters are listed in Table 1 of paper I and are discussed in the
appendix. The third plot shows our background subtracted, fully masked r−band image of the central region of the cluster, smoothed to aid
in visual identification of the surface brightness levels. Masks are shown in their intermediate levels which are listed in column 7 of Table
2. The six gray-scale levels show surface brightness levels of up to 28.5, 27.7, 27.2, 26.7 mag arcsec^-2. The fourth plot shows the surface
brightness profiles of the ICL (surrounded by shading; r−band on top, V− or B−band on the bottom) and cluster galaxies as a function
of semi-major axis. The bottom axis is in arcseconds and the top axis corresponds to physical scale in Mpc. Error bars represent the 1σ
background identification errors as discussed in §5.3. DeVaucouleurs fits to the entire cD plus ICL profile are over-plotted.
Fig. 5.— A3880, same as Figure 4
Fig. 6.— A2734, same as Figure 4
Fig. 7.— A2556, same as Figure 4
Fig. 8.— A4010, same as Figure 4
Fig. 9.— A3888, same as Figure 4, except here we show the elliptical isophotes of the ICL over-plotted on the surface brightness image.
Fig. 10.— A3984, same as Figure 4
Fig. 11.— A141, same as Figure 4, except we are not able to measure a surface brightness profile or consequently a color profile.
Fig. 12.— AC114, same as Figure 4
Fig. 13.— AC118, same as Figure 4, except we are not able to measure a surface brightness profile or consequently a color profile.
Fig. 14.— The color profile of the eight clusters where measurement was possible plotted as a function of semi-major axis in arcseconds on
the bottom and Mpc on the top. The average color of the red cluster sequence is shown for comparison, as well as the best fit linear function
to the data.
Fig. 15.— Redshift versus total galaxy flux within one quarter of a virial radius. The Spearman rank coefficient is printed in the upper
right corner. The best fit linear function as well as the lines representing ±2σ in both slope and y intercept are also plotted. The strong
correlation between redshift and total galaxy flux shows the incompleteness of the Abell sample, which does not include high-redshift, low-flux
clusters.
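The Spearman rank coefficients quoted in Figures 15 - 22 are the Pearson correlation of the ranks; a small self-contained routine (with average ranks assigned to ties) reproduces the statistic:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks of
    x and y.  Tied values receive their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # group equal values and assign them their average rank
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```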
ICL 31
Fig. 16.— Projected number of galaxies versus redshift. Galaxies brighter than Mr = −18.5 within 800 h_70^-1 kpc are included in this
count, which is used as a proxy for density. The Spearman rank coefficient is printed in the upper left corner. There is a strong correlation
between density and redshift. The best fit linear function is included. While we do expect clusters to become less dense over time, this strong
correlation is not expected; instead it is due to an incompleteness at high redshift. See §6 for a discussion of this selection
effect.
Fig. 17.— Projected number of galaxies versus ICL luminosity. ICL luminosity shows 1σ error bars and has been K and evolution corrected.
Galaxies brighter than Mr = −18.5 within 800 h_70^-1 kpc are included in this count, which is used as a proxy for density. The Spearman rank
coefficient is printed in the upper left corner. The best fit linear function as well as the lines representing ±2σ in both slope and y intercept
are also plotted. There is a mild correlation between density and ICL luminosity such that higher density clusters have a larger amount of
ICL flux.
Fig. 18.— The difference in magnitude between the first and third ranked galaxy versus projected number of galaxies brighter than
Mr = −18.5 within 800 h_70^-1 kpc, which is used as a proxy for density. Clusters with cD galaxies will have larger M3-M1 values. This plot
implies that over time galaxies merge in clusters to make a cD galaxy, and by the time the cD galaxy has formed, the global density is lower.
As discussed in §6, we assume this is not a selection bias.
Fig. 19.— The difference in magnitude between the first and third ranked galaxy versus ICL luminosity. ICL luminosity shows 1σ error
bars and has been K and evolution corrected. Clusters which have cD galaxies have larger M3 - M1 values and are dynamically older clusters.
There is a mild correlation between dynamic age and ICL luminosity indicating that the ICL evolves at roughly the same rate as the cluster.
Fig. 20.— The flux in galaxies versus the flux in ICL in units of solar luminosities. Errors on ICL luminosity are 1σ. Errors on galaxy
luminosity are 30% as estimated in §5.1.1. Over-plotted is the best fit linear function as well as two lines which represent 2σ errors in both
y-intercept and slope. The Spearman rank coefficient is printed in the upper right. Here galaxy luminosity is assumed to be a proxy for mass,
so we find a significant correlation between mass and ICL flux such that more massive clusters have a larger amount of ICL flux.
Fig. 21.— Cluster mass versus the ICL fraction measured at one quarter of the virial radius. Stars denote the r−band while squares show
B− and diamonds show V−band. Errors on ICL fraction are 1σ as discussed in §5.3. Mass estimates and errors are taken from the literature
as discussed in §A.1 - §A.10. The predictions of Lin & Mohr (2004) and Murante et al. (2004) at the virial radius are shown for comparison.
These represent extrapolations beyond roughly 1 × 10^15 M_⊙ in both cases (as marked by the crosses). The roughly constant ICL fraction with
mass can be explained using hierarchical formation by the in-fall of groups with a similar ICL fraction as the main cluster, or by increased
interaction rates with the infall of the groups, or both.
Fig. 22.— Cluster redshift versus ICL fraction measured at one quarter of the virial radius. As in Figure 21, starred symbols denote
the r−band, squares show B−band, and diamonds show V−band fractions. The prediction of Willman et al. (2004) for the ICL fraction as
measured at r200 is shown for comparison. This prediction would increase if measured at smaller radii, such as was used in our measurement.
There is mild evidence for a correlation between redshift and ICL fraction such that ICL fraction grows with decreasing redshift. This trend
is consistent with ongoing ICL formation.
Fig. 23.— Cluster redshift versus ICL color in B − r which has been k corrected and had simple passive evolution applied to it. If a color
gradient is detected in a given cluster then the mean color plotted here is that measured near the center of the profile, weighted slightly
toward the center. There is no trend in redshift with ICL color which leads to the conclusion that the ICL is simply passively reddening.
APPENDIX
The Clusters
In order of increasing redshift we discuss interesting characteristics of the clusters and their ICL components. Relevant
papers are listed in Table 1. Relevant figures are 4 - 13.
A4059
A4059 is a richness class 1, Bautz Morgan type I cluster at a redshift of 0.048. There is a clear cD galaxy which is
however offset from the Abell center, likely due to the presence of at least two other bright elliptical galaxies. The cD
galaxy is 0.91 ± 0.05 magnitudes brighter than the second ranked cluster galaxy. The cD galaxy is at the center of the
Chandra and ASCA mass distributions. Those telescopes detect no hot gas around the other bright ellipticals. This
cluster shows interesting features in its X–ray morphology. There appear to be large bubbles, or cavities, in the hot gas,
which is likely evidence of past radio galaxy interactions with the ICM (Choi et al. 2004). As additional evidence of past
activity in this cluster, the cD galaxy contains a large dust lane (Choi et al. 2004). M500 (the mass within the radius
where the mean mass density is equal to 500 times the critical density) is calculated by Reiprich & Böhringer (2002) for
A4059 to be 2.82^{+0.37}_{-0.34} × 10^14 h_70^-1 M_⊙.
The color magnitude diagram shows a very tight red sequence. Membership information is taken from Collins et al.
(1995), Colless et al. (2001), and Smith et al. (2004). Using the CMD as an indication of membership, we estimate the
flux in cluster galaxies to be 1.2 ± 0.35 × 10^12 L_⊙ in r and 4.2 ± 1.3 × 10^11 L_⊙ in B inside of 0.65 h_70^-1 Mpc, which is one quarter
of the virial radius of this cluster. In this particular cluster, since the Abell center is not at the true cluster center, and it
is the nearest cluster in our sample, our image does not uniformly cover the entire one quarter of the virial radius. This
estimate is therefore below the true flux in galaxies because we are missing area on the cluster.
Figure 4 shows the relevant plots for this cluster. There is a strong ICL component ranging from 26 - 29 mag arcsec^-2
in r centered on the cD galaxy. The total flux in the ICL is 3.4 ± 1.7 × 10^11 L_⊙ in r and 1.2 ± 0.24 × 10^11 L_⊙ in B, which
makes for ICL fractions of 22 ± 12% in r and 21 ± 8% in B. The ICL has a flat color profile with B − r ≃ 1.7 ± 0.08, which
is marginally bluer (0.2 magnitudes) than the RCS. One of the two other bright ellipticals at 0.7 h_70^-1 Mpc from the center
has a diffuse component; the other bright elliptical is too close to a saturated star to detect a diffuse component.
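As a quick sanity check on the quoted fractions, and assuming the ICL fraction is defined as ICL flux over total (ICL plus galaxy) flux, the r−band numbers above reproduce the quoted 22%:

```python
def icl_fraction(icl_flux: float, gal_flux: float) -> float:
    """ICL fraction, assuming it is defined as ICL / (ICL + galaxies)."""
    return icl_flux / (icl_flux + gal_flux)

# A4059, r band: 3.4e11 L_sun of ICL against 1.2e12 L_sun in galaxies
frac_r = icl_fraction(3.4e11, 1.2e12)  # ~0.22, matching the quoted 22 +/- 12%
```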
A3880
A3880 is a richness class 0, Bautz Morgan type II cluster at a redshift of 0.058. There is a clear cD galaxy in the center
of this cluster, which is 0.52 ± 0.05 magnitudes brighter than the second ranked galaxy. This cluster is detected in the
ROSAT All Sky Survey; however, that survey is not deep enough to show us the shape of the mass distribution. Girardi
et al. (1998b) find a mass for this cluster based on its velocity dispersion of 8.3^{+2.8}_{-2.1} × 10^14 h_70^-1 M_⊙.
The color magnitude diagram shows a clear red sequence. There is possibly another red sequence at lower redshift
adding to the width of the red sequence. Membership information is provided by Collins et al. (1995), Colless et al.
(2001), and Smith et al. (2004). Using the CMD as an indication of membership, we estimate the flux in cluster galaxies
to be 8.6 ± 2.6 × 10^11 L_⊙ in r and 3.8 ± 1.1 × 10^11 L_⊙ in B inside of 0.62 h_70^-1 Mpc, which is one quarter of the virial radius
of this cluster.
Figure 5 shows the relevant plots for this cluster. Unfortunately this cluster has larger illumination problems than the
other clusters, which can be seen in the greyscale masked image. Nonetheless, there is clearly an r−band ICL component,
although the B−band ICL is extremely faint. The total flux in the ICL is 1.4 ± 2.3 × 10^11 L_⊙ in r and 4.4 ± 1.5 × 10^10 L_⊙ in
B, which makes for ICL fractions of 14 ± 6% in r and 10 ± 6% in B. The ICL has a flat color profile with B − r ≃ 2.4 ± 1.1,
which is 0.8 magnitudes redder than the RCS.
A2734
A2734 is a richness class 1, Bautz Morgan type III cluster at a redshift of 0.062. The BCG, brighter than the second
ranked galaxy by 0.51 ± 0.05 magnitudes, is in the center of this cluster; however, there are 2 other large elliptical
galaxies 0.55 h_70^-1 Mpc and 0.85 h_70^-1 Mpc distant from
the BCG. The X–ray gas does confirm the BCG as being at the center of the mass distribution. Those 2 other elliptical
galaxies are not seen in the 44 ks ASCA GIS observation of this cluster; however, they are confirmed members based on
spectroscopy (Collins et al. 1995; Colless et al. 2001; Smith et al. 2004). M500 is calculated by Reiprich & Böhringer
(2002) for A2734 to be 2.49^{+0.89}_{-0.63} × 10^14 h_70^-1 M_⊙.
The color magnitude diagram shows a clear red sequence, which includes the 3 bright elliptical galaxies. 2dF spectroscopy
gives us roughly 80 galaxies in our field of view, which we can use to estimate the effectiveness of the biweight fit to the
RCS in finding true cluster members. Of those galaxies with confirmed membership, 94% are determined to be members with
this method; however, 86% of the confirmed non-members are also considered members. This is likely due to how galaxies
were selected for spectroscopy in the 2dF catalog. Using the CMD as an indication of membership, we estimate the flux
in cluster galaxies to be 1.2 ± 0.36 × 10^12 L_⊙ in r and 3.4 ± 1.0 × 10^11 L_⊙ in B inside of 0.60 h_70^-1 Mpc, which is one quarter of
the virial radius of this cluster.
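The biweight fit referred to here is based on Tukey's biweight estimator, which downweights outliers such as non-members scattered off the red sequence. A sketch of a one-step biweight location estimate applied to a one-dimensional sample for illustration (the actual fit is to the color-magnitude relation, and the tuning constant c = 6.0 is a conventional choice, not a value taken from this paper):

```python
def biweight_location(x, c: float = 6.0, eps: float = 1e-8) -> float:
    """One-step Tukey biweight (robust) location estimate.
    Starts from the median, scales residuals by c * MAD, and
    excludes points with |u| >= 1 from the weighted correction."""
    x = sorted(x)
    n = len(x)
    med = x[n // 2] if n % 2 else 0.5 * (x[n // 2 - 1] + x[n // 2])
    dev = sorted(abs(xi - med) for xi in x)
    mad = dev[n // 2] if n % 2 else 0.5 * (dev[n // 2 - 1] + dev[n // 2])
    num = den = 0.0
    for xi in x:
        u = (xi - med) / (c * mad + eps)
        if abs(u) < 1.0:
            w = (1.0 - u * u) ** 2
            num += (xi - med) * w
            den += w
    return med + (num / den if den > 0 else 0.0)
```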
Figure 6 shows the relevant plots for this cluster. There is a strong ICL component ranging from 26 - 29 mag arcsec^-2
in r centered on the BCG. The total flux in the ICL is 2.8 ± 0.47 × 10^11 L_⊙ in r and 7.0 ± 4.7 × 10^10 L_⊙ in B, which makes
for ICL fractions of 19 ± 6% in r and 17 ± 13% in B. The ICL has a flat to red-ward color profile with B − r ≃ 2.3 ± 0.03,
which is marginally redder than the RCS (0.3 magnitudes). The cluster has a second diffuse light component around one
of the giant elliptical galaxies, 0.55 h_70^-1 Mpc from the center of the cD galaxy. The third bright elliptical has a saturated
star just 40″ away, so we do not have a diffuse light map of that galaxy.
A2556
A2556 is a richness class 1, Bautz Morgan type II-III cluster at a redshift of 0.087. Despite the Bautz Morgan
classification, this cluster has a clear cD galaxy in the center of the X–ray distribution which is 0.93 ± 0.05 magnitudes
brighter than any other galaxy in the cluster. The Chandra derived X–ray distribution is slightly elongated toward the
NE where a second cluster, A2554, resides, 1.4 h_70^-1 Mpc from the center of A2556. The cD galaxy of A2554 is just on the
edge of our images, so we have no information about its low surface brightness component. A2556 and A2554 are a part
of the Aquarius supercluster (Batuski et al. 1999), so they clearly reside in an overdense region of the universe. Given an
X–ray luminosity from Ebeling et al. (1996) and a velocity dispersion from Reimers et al. (1996), we calculate the virial
mass of A2556 to be 2.5 ± 1.1 × 10^15 h_70^-1 M_⊙.
The red sequence for this cluster is a bit wider than in other clusters. The 1σ width of a biweight fit is 0.38
magnitudes in B − r, which is approximately 30% larger than in the rest of the low-z sample. This extra width is not caused
by only a few galaxies; instead, the entire red sequence appears to be inflated. This is probably caused by the nearby
A2554, which is at z = 0.11 (Struble & Rood 1999). This is close enough in redshift space that we cannot separate out
the 2 red sequences in our CMD. We have roughly 30 redshifts for A2556 from Smith et al. (2004), Caretta et al. (2004),
and Batuski et al. (1999) which are also unable to differentiate between the clusters. Using the CMD as an indication of
membership, we estimate the flux in cluster galaxies to be 1.3 ± .38 × 1012L�in r and 3.3 ± 1.0 × 1011L�in B inside of
0.65h−170 Mpc, which is one quarter of the virial radius of this cluster.
Figure 7 shows the relevant plots for this cluster. There is an r−band ICL component ranging from 27 - 29 mag arcsec^-2
in r centered on the cD galaxy. The B−band ICL is extremely faint, barely above our detection threshold. Although we
were able to fit a profile to the B−band diffuse light, all points on the medium sized mask are below 29 mag arcsec^-2.
The total flux in the ICL is 7.6 ± 6.6 × 10^10 L_⊙ in r and 1.4 ± 1.4 × 10^10 L_⊙ in B, which makes for ICL fractions of 6 ± 5%
in r and 4 ± 4% in B. Although Figure 7 shows a color profile, we do not assume anything about the profile shape due
to the low SB level of the B−band. We take the B − r color from the innermost point to be 2.1 ± 0.4, which is fully
consistent with the color of the RCS.
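Surface brightnesses quoted in mag arcsec^-2 can be put into physical units with the standard relation μ = M_⊙ + 21.572 − 2.5 log10(S / (L_⊙ pc^-2)). A sketch of the conversion (our illustration; the assumed r-band solar absolute magnitude is approximate):

```python
def sb_to_lum_per_pc2(mu, m_sun_abs):
    """Convert surface brightness mu [mag/arcsec^2] to L_sun/pc^2 using
    the standard relation mu = M_sun + 21.572 - 2.5*log10(S)."""
    return 10 ** (0.4 * (m_sun_abs + 21.572 - mu))

M_SUN_R = 4.65  # assumed r-band solar absolute magnitude (approximate)
s27 = sb_to_lum_per_pc2(27.0, M_SUN_R)
s29 = sb_to_lum_per_pc2(29.0, M_SUN_R)
# 27-29 mag/arcsec^2 corresponds to under one solar luminosity per square
# parsec, which illustrates how faint these ICL detections are.
```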
A4010
A4010 is a richness class 1, Bautz Morgan type I-II cluster at a redshift of 0.096. This cluster has a cD galaxy in the
center of the galaxy distribution, which is 0.7 ± 0.05 magnitudes brighter than the second ranked galaxy. There is only
ROSAT All Sky Survey data for this cluster and no other sufficiently deep X–ray observations to show us the shape of
the mass distribution. There are weak lensing maps which put the center of mass of the cluster at the same position as
the cD galaxy, and elongated along the same position angle as the cD galaxy (Cypriano et al. 2004). Muriel et al. (2002)
find a velocity dispersion of 743 ± 140 km s^-1 for this cluster, which is 15% larger than found by Girardi et al. (1998b),
where those authors find a virial mass of 3.8^{+1.6}_{-1.2} × 10^14 h_70^-1 M_⊙.
The color magnitude diagram for A4010 is typical among the sample with a clear red sequence. A few redshifts exist
in the literature which help define the red sequence (Collins et al. 1995; Katgert et al. 1998). Using the CMD as an
indication of membership, we estimate the flux in cluster galaxies to be 1.2 ± 0.4 × 10^12 L_⊙ in r and 3.5 ± 1.0 × 10^11 L_⊙ in B
inside of 0.75 h_70^-1 Mpc, which is one quarter of the virial radius of this cluster.
Figure 8 shows the relevant plots for this cluster. There is an elongated ICL component ranging from 25.5 - 28 mag
arcsec^-2 in r centered on the cD galaxy. The total flux in the ICL is 3.2 ± 0.7 × 10^11 L_⊙ in r and 7.7 ± 2.8 × 10^10 L_⊙ in B,
which makes for ICL fractions of 21 ± 8% in r and 18 ± 8% in B. The ICL has a significant red-ward trend in its color
profile with an average color of B − r ≃ 2.1 ± 0.1, which is marginally redder (0.2 magnitudes) than the RCS.
A3888
A3888 is discussed in great detail in paper I. In review, A3888 is a richness class 2, Bautz Morgan type I-II cluster
at a redshift of 0.151. This cluster has no cD galaxy; instead the core is comprised of 3 distinct sub-clumps of multiple
galaxies each. At least 2 galaxies in each of the subclumps are confirmed members based on velocities (Teague et al.
1990; Pimbblet et al. 2002). The brightest cluster galaxy is only 0.12 ± 0.04 magnitudes brighter than the second ranked
galaxy. XMM contours show an elongated distribution centered roughly in the middle of the three clumps of galaxies.
Reiprich & Böhringer (2002) estimate the mass from the X–ray luminosity to be M_200 = 25.5^{+10.5}_{-7.4} × 10^14 h_70^-1 M_⊙,
where r_200 = 2.8 h_70^-1 Mpc. This is consistent with the mass estimate from the published velocity dispersion of
1102 ± 107 km s^-1 (Girardi & Mezzetti 2001).
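The quoted A3888 numbers can be cross-checked with a simple isothermal-sphere estimator M ≈ 3σ²R/G (our illustration only; the published masses come from full X-ray and dynamical analyses):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
MPC = 3.086e22      # megaparsec, m

def virial_mass_msun(sigma_km_s, radius_mpc):
    """Isothermal-sphere estimate M ~ 3 sigma^2 R / G (illustrative only)."""
    sigma = sigma_km_s * 1.0e3           # km/s -> m/s
    return 3.0 * sigma ** 2 * (radius_mpc * MPC) / G / M_SUN

m = virial_mass_msun(1102.0, 2.8)        # A3888 values quoted above
# ~2.4e15 M_sun, within roughly 10% of the quoted 25.5e14 h_70^-1 M_sun
```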
There is a clear red sequence of galaxies in the CMD of A3888. Using the CMD as an indication of membership, we
estimate the flux in cluster galaxies to be 3.0 ± 0.9 × 10^12 L_⊙ in r and 7.2 ± 2.2 × 10^11 L_⊙ in B inside of 0.92 h_70^-1 Mpc.
We also determine galaxy flux using the Driver et al. (1998) luminosity distribution, which is based on the statistical
background subtraction of non-cluster galaxies, to be 4.3 ± 0.7 × 10^12 L_⊙ in the r−band and 3.4 ± 0.6 × 10^12 L_⊙ in V. The
difference in these two estimates is likely due to uncertainties in our membership identification (of order 30%) and
differences in the detection thresholds of the two surveys.
ICL 41
Figure 9 shows the relevant plots for this cluster. There is a centralized ICL component ranging from 26 - 29 mag
arcsec^-2 in r despite the fact that there is no cD galaxy. The total flux in the ICL is 4.4 ± 2.1 × 10^11 L_⊙ in r and
8.6 ± 2.5 × 10^10 L_⊙ in B, which makes for ICL fractions of 13 ± 5% in r and 11 ± 3% in B. The ICL has a red color
profile with an average color of V − r ≃ 0.5 ± 0.1, which is marginally redder (0.2 magnitudes) than the RCS. There is
also a diffuse light component surrounding a group of galaxies that is 1.4 h_70^-1 Mpc from the cluster center, which totals
1.7 ± 0.5 × 10^10 L_⊙ in V and 2.6 ± 1.2 × 10^10 L_⊙ in r and has a color consistent with the main ICL component.
A3984
A3984 is an interesting richness class 2, Bautz Morgan type II-III cluster at a redshift of 0.181. There appear to be 2
centers of the galaxy distribution: one around the BCG, and one around a semi-circle of ∼5 bright ellipticals which are
1 h_70^-1 Mpc north of the BCG. The BCG and at least one of the other bright ellipticals are at the same redshift (Collins
et al. 1995). To determine if these 2 centers are part of the same redshift structure, we split the image in half perpendicular
to the line bisecting the 2 regions, and plot the cumulative distributions of V − r galaxy colors. A KS test reveals that these
2 regions have an 89% probability of being drawn from the same distribution. Without X–ray observations we do not
know where the mass in this cluster resides. There is a weak lensing map of just the northern region of the cluster which
does show a centralized mass distribution, but does not include the southern clump (Cypriano et al. 2004). The BCG is
0.57 ± 0.04 magnitudes brighter than the second ranked galaxy. We use a velocity dispersion from the lensing measurement
to determine a mass of 31 ± 10 × 10^14 h_70^-1 M_⊙.
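The KS comparison described above can be sketched as follows; the test statistic is the maximum distance between the two empirical cumulative distributions (the colors below are synthetic placeholders, not the actual A3984 photometry):

```python
import random

def ks_statistic(a, b):
    """Two-sample KS statistic: the maximum vertical distance between the
    empirical cumulative distributions of samples a and b."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for x in sa + sb:
        ca = sum(v <= x for v in sa) / len(sa)
        cb = sum(v <= x for v in sb) / len(sb)
        d = max(d, abs(ca - cb))
    return d

# Hypothetical V-r colors for the two image halves (illustrative values only):
random.seed(1)
north = [random.gauss(0.3, 0.1) for _ in range(200)]
south = [random.gauss(0.3, 0.1) for _ in range(150)]
d = ks_statistic(north, south)
# A small statistic (and hence a large p-value) is consistent with both
# halves being drawn from the same parent color distribution.
```

In practice one would convert the statistic to a probability with the KS distribution (e.g. `scipy.stats.ks_2samp` does both steps).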
There is a clear red sequence of galaxies in the CMD of A3984. Using the CMD as an indication of membership, we
estimate the flux in cluster galaxies to be 2.0 ± 0.6 × 10^12 L_⊙ in r and 4.4 ± 1.3 × 10^11 L_⊙ in B inside of 0.87 h_70^-1 Mpc,
which is one quarter of the virial radius of this cluster.
Figure 10 shows the relevant plots for this cluster. There are 2 clear groupings of diffuse light. We can only fit a profile to
the ICL which is centered on the BCG. We stop fitting that profile before it extends into the other ICL group (∼600 kpc)
in an attempt to keep the fluxes separate. The total flux in the ICL is 2.2 ± 1.0 × 10^11 L_⊙ in r and 6.2 ± 2.1 × 10^10 L_⊙ in
B, which makes for ICL fractions of 10 ± 6% in r and 12 ± 6% in B. The ICL becomes distinctly bluer with radius and
is bluer at all radii than the RCS, with an average color of V − r ≃ −0.2 ± 0.4 (0.5 magnitudes bluer than the RCS).
A0141
A0141 is a richness class 3, Bautz Morgan type III cluster at a redshift of 0.23. True to its morphological type, this
cluster has no cD galaxy; instead it has 4 bright elliptical galaxies, each at the center of a clump of galaxies, the brightest
one of which is 0.42 ± 0.04 magnitudes brighter than the second brightest. The center of the cluster, as defined by ASCA
observations and a weak lensing map (Dahle et al. 2002), is near the northernmost clumps of galaxies. The distribution is
clearly elongated north-south; it is therefore possible that the other bright ellipticals are in-falling groups along a filament.
M_200 from the lensing map is 18.9^{+1.1}_{-0.9} × 10^14 h_70^-1 M_⊙.
There is a clear red sequence of galaxies in the CMD of A0141. Using the CMD as an indication of membership, we
estimate the flux in cluster galaxies to be 3.2 ± 1.0 × 10^12 L_⊙ in r and 5.4 ± 1.6 × 10^11 L_⊙ in B inside of 0.94 h_70^-1 Mpc,
which is one quarter of the virial radius of this cluster.
Figure 11 shows the relevant plots for this cluster. There are 3 clear groupings of diffuse light which do not have a
common center, although 1 of these ICL peaks does include 2 clumps of galaxies. We are unable to fit a single centralized
profile to this ICL as the three clumps are too far separated. The total flux in the ICL as measured in manually placed
elliptical annuli is 3.5 ± 0.9 × 10^11 L_⊙ in r and 3.4 ± 1.1 × 10^10 L_⊙ in B, which makes for ICL fractions of 10 ± 4% in r and
6 ± 3% in B. We estimate the color of the ICL to be V − r ≃ 1.0 ± 0.8, which is significantly redder (0.6 magnitudes)
than the RCS. We have no color profile information.
AC114
AC114 (AS1077) is a richness class 2, Bautz Morgan type II-III cluster at a redshift of 0.31. The brightest galaxy is only
0.28 ± 0.04 magnitudes brighter than the second ranked galaxy. The galaxy distribution is elongated southeast to northwest
(Couch et al. 2001), as is the Chandra derived X–ray distribution. The X–ray gas shows a very irregular morphology, with
a soft X–ray tail stretching toward a mass clump in the southeast which is also detected in a lensing map (De Filippis
et al. 2004; Campusano et al. 2001). The X–ray gas is roughly centered on a bright elliptical galaxy; however, the tail is
an indication of a recent interaction. There is a clump of galaxies 1.6 h_70^-1 Mpc northwest of the BCG which looks like a
group or cluster with its own cD-like galaxy, and which is not targeted in either the X–ray or (strong) lensing observations.
Only one of these galaxies has a redshift in the literature, and it is a member of AC114. Without redshifts, we cannot
know definitively if these galaxies are a part of the same structure; however, their location along the probable filament
might be evidence that they are part of the same velocity structure. As this cluster is not in dynamical equilibrium, mass
estimates from the X–ray gas come from β-model fits to the surface brightness distribution. De Filippis et al. (2004) find
a mass within 1 h_70^-1 Mpc of 4.5 ± 1.1 × 10^14 h_70^-1 M_⊙. A composite strong and weak lensing analysis agrees with the
X–ray analysis within 500 h_70^-1 kpc, but does not extend out to larger radii (Campusano et al. 2001). Within the virial
radius, Girardi & Mezzetti (2001) find a mass of 26.3^{+8.2}_{-7.1} × 10^14 h_70^-1 M_⊙.
This cluster, in relation to lower-z clusters, is a prototypical example of the Butcher-Oemler effect. There is a higher
fraction of blue, late-type galaxies at this redshift than in our lower-z clusters, rising to 60% outside of the core region
(Couch et al. 1998). This is not only evidenced in the morphologies, but in the CMD, which nicely shows these blue
member galaxies. We adopt the Andreon et al. (2005) luminosity function for this cluster, based on an extended likelihood
distribution for background galaxies. Integrating the luminosity distribution from very dim dwarf galaxies (M_R = −11.6)
to infinity gives a total luminosity for AC114 of 1.5 ± 0.2 × 10^12 L_⊙ in r and 1.9 ± 1.2 × 10^11 L_⊙ in B inside of
0.9 h_70^-1 Mpc, which is one quarter of the virial radius of this cluster. For the purpose of comparison with other clusters,
we adopt the cluster flux from the CMD, which gives 1.8 ± 0.5 × 10^12 L_⊙ in r and 2.3 ± 0.7 × 10^11 L_⊙ in B inside of one
quarter of the virial radius of this cluster. The differences in these estimates are likely due to uncertainties in membership
identification and differing detection thresholds of the two surveys.
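The integration of the luminosity distribution described above can be sketched with a Schechter luminosity function; the snippet below is an illustration with placeholder parameters (faint-end slope α = −1.1, unit normalization), not the Andreon et al. (2005) fit:

```python
import math

def schechter_total_lum(l_min, alpha=-1.1, l_max=20.0, dx=1e-3):
    """Total luminosity, in units of phi* L*, of a Schechter function
    integrated from l_min (in units of L*) up to l_max by a simple
    Riemann sum: integrand = (L/L*)^(alpha+1) * exp(-L/L*)."""
    total, x = 0.0, l_min
    while x < l_max:
        total += x ** (alpha + 1) * math.exp(-x) * dx
        x += dx
    return total

# Pushing the lower limit from 0.1 L* down toward dwarf luminosities adds
# only a modest amount of light, because for alpha > -2 the luminosity
# integral converges at the faint end.
bright = schechter_total_lum(0.1)
deep = schechter_total_lum(1e-3)
```

This convergence is why extending the integral "from very dim dwarf galaxies to infinity" yields a well-defined total.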
Figure 12 shows the relevant plots for this cluster. There is a centralized ICL component ranging from 27.5 - 29 mag
arcsec^-2 in r, in addition to a diffuse component around the group of galaxies to the northwest of the BCG. The total flux
in the ICL is 2.2 ± 0.4 × 10^11 L_⊙ in r and 3.8 ± 7.9 × 10^10 L_⊙ in B, which includes the flux from the group as measured in
elliptical annuli. The ICL fraction is 11 ± 2% in r and 14 ± 3% in B. The ICL has a flat color profile with V − r ≃ 0.1 ± 0.1,
which is marginally bluer (0.4 magnitudes) than the RCS.
AC118 (A2744)
AC118 (A2744) is a richness class 3, Bautz Morgan type III cluster at a redshift of 0.31. This cluster has 2 main clumps
of galaxies separated by 1 h_70^-1 Mpc, with a third bright elliptical in a small group which is 1.2 h_70^-1 Mpc distant from
the center of the other clumps. The BCG is 0.23 ± 0.04 magnitudes brighter than the second ranked galaxy. The Chandra
X–ray data suggest that there are probably 3 clusters here, at least 2 of which are interacting. The gas distribution,
along with abundance ratios, suggests that the third, smaller group might be the core of one of the interacting clusters
which has moved beyond the scene of the interaction where the hot gas is detected. From velocity measurements, Girardi
& Mezzetti (2001) also find 2 populations of galaxies with distinctly different velocity dispersions. The presence of a large
radio halo and radio relic is yet more evidence for dynamical activity in this cluster (Govoni et al. 2001). Mass estimates
for this cluster range from ∼3 × 10^13 M_⊙ from X–ray data to ∼3 × 10^15 M_⊙ from the velocity dispersion data. This cluster
clearly violates assumptions of sphericity and hydrostatic equilibrium, which leads to the large variations. The two
velocity dispersion peaks have a total mass of 38 ± 37 × 10^14 h_70^-1 M_⊙; we adopt this mass throughout the paper.
AC118, at the same redshift as AC114, also shows a significant fraction of blue galaxies, which leads to a wider red cluster
sequence (1σ = 0.3 magnitudes) than at lower redshifts. We adopt the Busarello et al. (2002) R and V−band luminosity
distributions, based on photometric redshifts and background counts from a nearby, large area survey. Integrating the
luminosity distribution from very dim dwarf galaxies (M_R = −11.6) to infinity gives a total luminosity for AC118 of
4.5 ± 0.2 × 10^11 L_⊙ in V and 4.2 ± 0.4 × 10^12 L_⊙ in the r−band inside of 0.25 r_virial. For the purpose of comparison with
other clusters, we adopt the cluster flux from the CMD, which gives 5.4 ± 1.6 × 10^11 L_⊙ in B and 4.4 ± 0.1 × 10^12 L_⊙ in r
inside of 0.94 h_70^-1 Mpc, which is one quarter of the virial radius of this cluster.
Figure 13 shows the relevant plots for this cluster. There are at least two, if not three, groupings of diffuse light which
do not have a common center. The possible third is mostly obscured behind the mask of a saturated star. We are unable
to fit a centralized profile to this ICL. The total flux in the ICL as measured in manually placed elliptical annuli is
7.0 ± 1.0 × 10^11 L_⊙ in r and 6.7 ± 1.7 × 10^10 L_⊙ in B, which makes for ICL fractions of 14 ± 5% in r and 11 ± 5% in B. We
estimate the color of the ICL to be V − r ≃ 1.0 ± 0.8, which is significantly redder (0.6 magnitudes) than the RCS. We
have no color profile information.
arXiv:0704.1665

Approach to Physical Reality: a note on Poincaré Group
and the philosophy of Nagarjuna
David Vernette, Punam Tandan, and Michele Caponigro
We argue for a possible scenario of physical reality based on the parallelism between the Poincaré
group and the sunyata philosophy of Nagarjuna. The notion of the "relational" is the common
denominator of the two views. We have approached the relational concept from a third-person
perspective (the ontic level). Different physical consequences and interpretations may be deduced
through a first-person perspective approach. This relational interpretation leaves open the questions:
i) must we abandon the idea that complete information can be extracted from a physical system?
ii) must we abandon the idea of inferring a possible structure of physical reality?
POINCARÉ GROUP
There are two universal features of modern day physics
regarding physical systems: all physical phenomena take
place in 1) space-time, and all phenomena are (in
principle) subject to 2) quantum mechanics. Are these
aspects just two facets of the same underlying physical
reality? Research concentrates on this fundamental point.
The notion of space-time is linked to geometry, so an
interesting question is what geometry is appropriate for
quantum physics [3]. Can geometry give us any
knowledge about the nature of the physical space where
the physical laws take place? Can geometry give us a
possible scenario of physical reality? A fundamental
aspect of a geometry is the group of transformations
defined over it. Group theory is the necessary instrument
for expressing the laws of physics (the concept of
symmetry is derived from group theory) [4]. Physics and
the geometry in which it takes place are not independent.
We maintain that there is a close relationship between
space-time structure and physical theory. Space-time
imposes universally valid constraints on physical
theories, and the universality of these laws starts to
become less mysterious (i.e. various paradoxes).
Invariance under a group of transformations is a
fundamental criterion for classifying mathematical
structures. Poincaré introduced the notion of invariance
under continuous transformations. The Poincaré group is
the group of translation, rotation, and boost operators in
4-dimensional space-time. Now, some natural questions
are: does space exist independently of phenomena? Does
it have an intrinsic significance in itself? Could a system
defined in this space through physical law exist by itself?
We call "absolute" the reality of a system that does not
depend on its interaction with other systems. The
problem is that we never have a single, isolated system.
In this brief note, we abandon the idea of absolute reality
and argue in favor of a relational reality, because
relational reality is founded on the premise that an object
is real only in relation to another object that it is
interacting with. In the relational interpretation [2], the
basic elements of objective reality are the measurement
events themselves. This interpretation goes beyond the
Copenhagen interpretation by replacing absolute reality
with relational reality. In the relational interpretation the
wave function is merely a useful mathematical
abstraction. Some authors propose that the laws of
nature are really the result of probabilities constrained by
fundamental symmetries. Relational reality is associated
with the fundamental concept of interactions. This
analysis of the "relational" notion brings us to approach
the same problem using the sunyata philosophy of
Nagarjuna.
CONCEPT OF REALITY IN THE PHILOSOPHY
OF NAGARJUNA
The Middle Way of Madhyamika refers to the teachings
of Nagarjuna; the implications connecting quantum
physics and Madhyamika are very interesting. The basic
concept of reality in the philosophy of Nagarjuna is that
the fundamental reality has no firm core but consists of
systems of interacting objects. According to the middle
way perspective, based on the notion of emptiness,
phenomena exist in a relative way; that is, they are
empty of any kind of inherent and independent existence.
Phenomena are regarded as dependent events existing
relationally rather than as permanent things which have
their own entity. Nagarjuna's middle way perspective
emerges as a relational approach, based on the insight of
emptiness. Sunyata (emptiness) is the foundation of all
things, and it is the basic principle of all phenomena.
Emptiness implies the negation of unchanging, fixed
substance, and thereby the possibility of relational
existence and change. This suggests that both the
ontological constitution of things and our epistemological
schemes are just as relational as everything else. We are
fundamentally relational, internally and externally. In
other words, Nagarjuna does not fix any ontological
nature of things:
• 1) they do not arise.
• 2) they do not exist.
• 3) they are not to be found.
• 4) they are not.
• 5) and they are unreal.
In short, this is an invitation not to decide on either
existence or non-existence (nondualism). According to
the theory of sunyata, phenomena exist in a relative state
only, a kind of "ontological relativity". Phenomena are
regarded as dependent events (existing only in relation to
something else) rather than as things which have their
own inherent nature; thus the extreme of permanence is
avoided.
CONCLUSION
We have seen the link between the relational and
interaction within a space-time governed by its own
geometry. Nagarjuna's philosophy uses the same basic
concept of the "relational" in the interpretation of reality.
We note that our parallelism between the scenario of
physical reality and the relational interpretation of that
same reality is based on a third-person perspective
approach (i.e. the ontic level; the relational view includes
the observer-device). Different considerations could be
made through a first-person perspective approach; in that
case we maintain that it is impossible to establish any
parallelism. Finally, we note that the relational approach
probably stimulates interest in fundamental problems in
physics such as the unification of laws and the
discrete/continuum view [1].
——————
⋄David Vernette, Punam Tandan, Michele Caponigro
Quantum Philosophy Theories www.qpt.org.uk
⋄ [email protected]
[1] Vernette, D., Caponigro, M.: Continuum versus Discrete
Physics, physics/0701164.
[2] Rovelli, C.: Relational quantum mechanics, Int. J. Theor.
Phys. 35, 1637-1678 (1996).
[3] Note 1: It was suggested, for instance, that the universal
symmetry group elements which act on all Hilbert spaces
may be appropriate for constructing a physical geometry
for quantum theory.
[4] Note 2: Some authors maintain that symmetry is the ontic
element, and that physical laws, like space-time, are
secondary.
arXiv:0704.1666

ULTRAVIOLET OBSERVATIONS OF SUPERNOVAE
Nino Panagia
STScI, Baltimore, MD, USA; [email protected]
INAF - Observatory of Catania, Italy
Supernova Ltd., Virgin Gorda, BVI
Abstract. The motivations for ultraviolet (UV) studies of supernovae (SNe) are reviewed and
discussed in the light of the results obtained so far by means of IUE and HST observations. It appears
that UV studies of SNe can, and do, lead to fundamental results not only for our understanding of the
SN phenomenon, such as the kinematics and the metallicity of the ejecta, but also for exciting new
findings in Cosmology, such as the tantalizing evidence for "dark energy" that seems to pervade the
Universe and to dominate its energetics. The need for additional and more detailed UV observations
is also considered and discussed.
Keywords: Supernovae: general, Ultraviolet: stars, Binaries: general, Cosmology: miscellaneous
PACS: 97.60.Bw, 97.80.-d, 98.80.-k
1. INTRODUCTION
Supernovae (SNe) are the explosive deaths of massive stars as well as of moderate mass
stars in binary systems. They enrich the interstellar medium of galaxies with most heavy
elements (only C and N can efficiently be produced and ejected into the ISM by red
giant winds and by planetary nebulae, as well as pre-SN massive star winds): nuclear
detonation supernovae, i.e., Type Ia SNe (SNIa), provide mostly Fe and iron-peak
elements, while core collapse supernovae, i.e., Type II (SNII) and Type Ib/c (SNIb/c),
provide mostly O and alpha-elements (see below for type definitions). Therefore, they are
the primary factors determining the chemical evolution of the Universe. Moreover, SN
ejecta carry approximately 10^51 erg in the form of kinetic energy, which constitutes
a large injection of energy into the ISM of a galaxy (for a Milky Way class galaxy
E_kin^MW ≃ 3 × 10^57 erg). This energy input is very important for the evolution of the entire
galaxy, both dynamically and for star-formation through cloud compression/energetics.
In addition, SNe are bright events that can be detected and studied up to very large
distances. Therefore: (1) SN observations can be used to trace the evolution of the Uni-
verse. (2) SNe can be used as measuring sticks to determine cosmologically interesting
distances, either as "standard candles" (SNIa, which at maximum are about 10 billion
times brighter than the Sun, with a dispersion of the order of 10%) or employing a refined
Baade-Wesselink method (SNII, in which strong lines provide ideal conditions for the ap-
plication of the method, with a distance accuracy of ±20%). (3) Their intense radiation
can be used to study the ISM/IGM properties through measurements of the absorption
lines. Since most of the strong absorption lines are found in the UV, this is best done
by observing SNII at early phases, when the UV continuum is still quite strong. Additional
studies in the optical (mostly CaII and NaI lines) are possible using all bright SNe. How-
ever, only by combining optical and UV observations can one obtain the whole picture and,
therefore, SNII are the preferred targets for these studies. (4) Finally, the strong light
pulse provided by a SN explosion (the typical HPW of a light curve in the optical is
about a month for SNIa and about two-three months for SNII; in the UV the light curve
evolution is much faster) can be used to probe the intervening ISM in a SN parent galaxy
by observing the brightness and the time evolution of associated light echoes.
2. ULTRAVIOLET OBSERVATIONS
The launch of the International Ultraviolet Explorer (IUE) satellite in early 1978 marked
the beginning of a new era for SN studies because of its capability of measuring the ul-
traviolet emission from objects as faint as m_B = 15. Moreover, just around that time, other
powerful astronomical instruments became available, such as the Einstein Observatory
X-ray measurements, the VLA for radio observations, and a number of telescopes either
dedicated to infrared observations (e.g. UKIRT and IRTF at Mauna Kea) or equipped
with new and highly efficient IR instrumentation (e.g. AAT and ESO observatories). As a
result, starting in the late 70's a wealth of new information became available that, thanks
to the coordinated effort of astronomers operating at widely different wavelengths, has
provided us with fresh insights into the properties and the nature of supernovae of all
types. Eventually, the successful launch of the Hubble Space Telescope (HST) opened
new possibilities for the study of supernovae, allowing us to study SNe with an accuracy
unthinkable before and to reach the edge of the Universe.
Even after 18 years of IUE observations and 16 more of HST observations, the number
of SN events that have been monitored with UV spectroscopy is quite small, hardly
including more than two objects per SN type, and hardly any with good quality spectra for
more than three epochs each. As a consequence, we still know very little about the properties
and the evolution of the ultraviolet emission of SNe. On the other hand, it is just the
UV spectrum of a SN, especially at early epochs, that contains a wealth of valuable and
crucial information that cannot be obtained with any other means. Therefore, we truly
want to monitor many more SNe with much more frequent observations.
We have learned at this Conference that SWIFT, in addition to doing UV photometry,
is also able to obtain low resolution spectra of SNe at λ > 2000 Å with a sensitivity
comparable to that of IUE, and that spectroscopic observations of SNe with SWIFT are
being planned (see, e.g., the contribution by F. Bufano). This is an exciting possibility
that promises to provide very valuable results and to fill the gaps in our knowledge about
the UV properties of SNe.
Here, I present a short summary of the UV observations of supernovae. A more
detailed review on this subject can be found in Panagia (2003).
3. TYPE IA SUPERNOVAE
Type Ia supernovae are characterized by a lack of hydrogen in their spectra at all epochs
and by a number of typically broad, deep absorption bands, most notably the Si II 6150Å
FIGURE 1. Ultraviolet spectra of ten Type Ia supernovae observed with IUE around maximum light.
The dashed line is SN 1992A spectrum as measured with HST-FOS.
(actually the blue-shifted absorption of the 6347-6371 Å Si II doublet; see e.g. Filippenko
1997), which dominate their spectral distributions at early epochs. SNIa are found in all
types of galaxies, from giant ellipticals to dwarf irregulars. However, the SNIa explosion
rate, normalized relative to the galaxy H or K band luminosity and, therefore, relative
to the galaxy mass, is much higher in late type galaxies than in early type galaxies, up to
a factor of 16 when comparing the extreme cases of irregulars and ellipticals (Della Valle
& Livio 1994, Panagia 2000, Mannucci et al. 2005). This suggests that, contrary to
common belief, a considerable fraction of SNIa belong to a relatively young (age much
younger than ∼1 Gyr), moderately massive (∼5 M_⊙ < M(SNIa progenitor) < 8 M_⊙) stellar
population (Mannucci, Della Valle & Panagia 2006), and that in present day ellipticals
SNIa are mostly the result of the capture of dwarf galaxies by massive ellipticals (Della
Valle & Panagia 2003, Della Valle et al. 2005).
3.1. Existing Samples of UV Spectra of SNIa
Although 12 type Ia SNe were observed with IUE, only two events, namely SN1990N
and SN1992A, had extensive time coverage, whereas all others were observed only
around maximum light either because of their intrinsic UV faintness or because of
satellite pointing constraints. Even so, one can reach important conclusions of general
validity, which are confirmed by the detailed data obtained for a few SNIa.
The UV spectra of type Ia SNe are found to decline rapidly with frequency, making
it hard to detect any signal at short wavelengths. This aspect is illustrated in Fig. 1,
which displays the UV long wavelength spectra of 10 type Ia SNe observed with IUE. It
appears that the spectra do not have a smooth continuum but rather consist of a number of
FIGURE 2. The spectral evolution of SN1992A [adapted from Kirshner et al. 1993].
"bands” that are observed with somewhat different strengths. The fact that the spectrum
is so similar for most of the SNe supports the idea of an overall homogeneity in the
properties of type Ia SNe.
On the other hand, some clear deviations from “normal” can be recognized for some
SNIa. In particular, one can notice that both SN1983G and SN1986G display excess
flux around 2850 Å, and a deficient flux around 2950 Å. This suggests that the Mg
II resonance line is much weaker, which may indicate a lower abundance of Mg in
these fast-decline, under-luminous SNIa. On the other hand, SN1990N, SN1991T, and,
possibly, SN1989M show excess flux around ∼2750 Å and ∼2950 Å and a clear deficit
around ∼3100 Å, which may be ascribed to enhanced Mg II and Fe II features in these
slow-decline, over-luminous SNIa.
The best studied SNIa event so far is the "normal” type Ia supernova SN1992A in the
S0 galaxy NGC1380 that was observed as a TOO by both IUE and HST (Kirshner et al.
1993). The HST-FOS spectra, from 5 to 45 days past maximum light, are the best UV
spectra available for a SNIa (see Fig. 2) and reveal, with good signal to noise ratio, the
spectral region blueward of ∼2650 Å.
An LTE analysis of the SN1992A spectra shows that the features in the region
shortward of ∼2650Å are P Cygni absorptions due to blends of iron peak element
multiplets and the Mg II resonance multiplet. Newly synthesized Mg, S, and Si probably
extend to velocities at least as high as ∼19,000 km s−1. Newly synthesized Ni and Co
may dominate the iron peak elements out to ∼13,000 km s−1 in the ejecta of SN1992A.
On the other hand, an analysis of the O I λ7773 line in SN1992A and other SNIa
implies that the oxygen rich layer in typical SNIa extends over a velocity range of at
least ∼11,000-19,000 km s−1, but none of the "canonical” models has an O-rich layer
that completely covers this range. Even higher velocities were inferred by Jeffery et al.
(1992) for the overluminous, slow-decline SNIa SN1990N and SN1991T through an
LTE analysis of their photospheric epoch optical and UV spectra. In particular, matter
moving as fast as 40,000 and 20,000 km s−1 was found for SN1990N and SN1991T,
respectively.
It thus appears that type Ia supernovae are consistently weak UV emitters, and even
at maximum light their UV spectra fall well below a blackbody extrapolation of their
optical spectra. Broad features due to P Cygni absorption of Mg II and Fe II are
present in all SNIa spectra, with remarkable constancy of properties for normal SNIa
and systematic deviations for slow-decline, over-luminous SNIa (enhanced Mg II and
Fe II absorptions) and fast-decline, under-luminous SNIa (weaker Mg II lines).
4. CORE COLLAPSE SUPERNOVAE: TYPES II AND IB/C
Massive stars (M* > 8 M⊙) are believed to end their evolution collapsing over their
inner Fe core and producing an explosion by a gigantic bounce that launches a shock
wave that propagates through the star and eventually erupts through the progenitor
photosphere, ejecting several solar masses of material at velocities of several thousand
km s−1. The current view is that single stars (as well as stars in wide binary systems
in which the companion does not affect the evolution of the primary star) explode as
type II supernovae, while supernovae of types Ib and Ic originate from massive stars in
interacting binary systems. Although the explosion mechanism is essentially the same in
both types, the spectral characteristics and light curve evolution are markedly different
among the different types.
4.1. Type Ib/c Supernovae
Type Ib/c supernovae (SNIb/c) are similar to SNIa in not displaying any hydrogen
lines in their spectra and are dominated by broad P Cygni-like metal absorptions, but
they lack the characteristic Si II 6150Å trough of SNIa. The finer distinction into SNIb
and SNIc was introduced by Wheeler and Harkness (1986) and is based on the strength
of He I absorption lines, most importantly He I 5876Å, so that the spectra of SNIb
display strong He I absorptions and those of SNIc do not. SNIb and SNIc are found
only in late type galaxies, often (but not always) associated with spiral arms and/or H II
regions. They are generally believed to be the result of the evolution of massive stars in
close binary systems.
Although the properties of some peculiarly red and under-luminous SNI (SN1962L
and SN1964L) were already noticed by Bertola and collaborators in the mid-1960s
(Bertola 1964, Bertola et al. 1965), the first widely recognized member and prototype of
the SNIb class was SN1983N in NGC5236=M83.
Because of its bright magnitude (B∼11.6 mag at maximum light), SN1983N is one
of the best-studied SNe with IUE (see Panagia 1985). The UV spectrum of SN1983N
closely resembles that of type Ia SNe at comparable epochs and, as such, only a minor
fraction of the SN energy is radiated in the UV. In particular, only ∼13% of the total
luminosity was emitted by SN1983N shortward of 3400Å at the time of the UV maximum. Moreover, there is no indication of any stronger emission in the UV at very early
epochs; this implies that the initial radius of the SN, i.e. the radius the stellar progenitor
had when the shock front reached the photosphere, was probably < 10¹² cm, ruling out
a RSG progenitor. From the bolometric light curve Panagia (1985) estimated that ∼0.15 M⊙ of 56Ni was synthesized in the explosion.

FIGURE 3. The spectrum of SN 1983N near maximum optical light, dereddened with E(B-V)=0.16. Both UV and optical spectra have been boxcar smoothed with a 100Å bandwidth. The triangle is the IUE Fine Error Sensor (FES) photometric point, and the dots represent the J, H, and K data. The dash-dotted curve is a blackbody spectrum at T=8300K [adapted from Panagia 1985].
The best observed SNIc is SN1994I that was discovered on 2 April 1994 in the
grand design spiral galaxy M51 and was promptly observed both with IUE (as early
as 3 April) and with HST- FOS (19 April). The UV spectra were remarkably similar
to those obtained for SN1983N and, although they were taken only at two epochs well
past maximum light (10 days and 35 days), they were of high quality. From synthetic
spectra matching the observed spectra from 4 days before to 26 days after the time of
maximum brightness, the inferred velocity at the photosphere decreased from 17,500
to 7,000 km s−1 (Millard et al. 1999). Simple estimates of the kinetic energy carried
by the ejected mass gave values that were near the canonical supernova energy of 10⁵¹
erg. Such velocities and kinetic energies for SN1994I are "normal" for SNe and are
much lower than those found for the peculiar type Ic SN1997ef and SN1998bw (see,
e.g. Branch 2000) which appear to have been hyper-energetic.
Thus, as type Ia, type Ib/c supernovae are weak UV emitters with their UV spectra
much fainter than a blackbody extrapolation of both optical and NIR spectra, and their
typical luminosity is about a factor of 4 lower than that of SNIa. The mass of 56Ni
synthesized in a typical SNIb/c is, therefore, ∼0.15 M⊙.
4.2. Type II Supernovae
Type II supernovae display prominent hydrogen lines in their spectra (Balmer series in
the optical) and their spectral energy distributions are mostly a continuum with relatively
few broad P Cygni-like lines superimposed, rather than being dominated by discrete
features as is the case of all type I supernovae. SNII are believed to be the result of a
core collapse of massive stars exploding at the end of their RSG phase. SN1987A was both a confirmation and an exception to this model. It was clearly the product of the collapse of a massive star, but it exploded when it was a BSG, not an RSG. Since its properties are amply discussed in many detailed papers presented at this Conference, we do not include SN1987A in this summary of the UV properties of "normal" SNII.

FIGURE 4. UV spectral evolution of SN1998S (SINS project, unpublished). Shown are spectra obtained near maximum light (March 16, 1998), about two weeks past maximum (March 30, 1998), and about two months after maximum (May 13, 1998).
Among the other five SNII that were observed with IUE, only two, SN1979C and
SN1980K, were bright enough to allow a detailed study of their properties in the UV
(Panagia et al. 1980). They were both of the so-called "linear” type (SNIIL), which is
characterized by an almost straight-line decay of the B and V-band light curves, rather
than of the more common "plateau” type (SNIIP) which display a flattening in their light
curves starting a few weeks after maximum light.
The SNII studied best in the UV so far is possibly SN1998S in NGC3877, a type II
with relatively narrow emission lines (SNIIn). SN1998S was discovered several days
before maximum. Its first UV spectrum, obtained on 16 March 1998, near maximum
light, was very blue and displayed lines with extended blue wings, which indicate
expansion velocities up to 18,000 km s−1 (Panagia 2003). The UV spectral evolution
of SN1998S (Fig. 4) showed the spectrum to gradually steepen in the UV, from near
maximum light on 16 March 1998 to about two weeks past maximum on 30 March,
and the blue absorptions to weaken or disappear completely. About two months after
maximum (13 May 1998) the continuum was much weaker, although its UV slope had
not changed appreciably, and it had developed broad emission lines, the most noticeable
being the Mg II doublet at about 2800Å. This type of evolution is quite similar to that of
SN1979C (Panagia 2003) and suggests that the two sub-types are related to each other,
especially in their circumstellar interaction properties.
A detailed analysis of early observations of SN1998S (Lentz et al. 2001) indicated that
early spectra originated primarily in the circumstellar region itself, and later spectra are
due primarily to the supernova ejecta. Intermediate spectra are affected by both regions.
A mass-loss rate of order ∼10⁻⁴ [v/(100 km s−1)] M⊙/yr was inferred from these
calculations, but with a fairly large uncertainty.
Despite the fact that type II plateau (SNIIP) supernovae account for a large fraction
of all SNII, so far SN1999em in NGC1637 is the only SNIIP that has been studied in
some detail in the ultraviolet. Although caught at an early stage, SN1999em was already
past maximum light (see, e.g. Hamuy et al. 2001). An early analysis of the optical and
UV spectra (Baron et al. 2000) indicates that, spectroscopically, this supernova appears
to be a normal type II. Also, the analysis suggests the presence of enhanced N as found
in other SNII.
Another sub-type of the SNII family is the so-called type IIb SNe, dubbed so because
at early phases their spectra display strong Balmer lines, typical of type II SNe, but
at more advanced phases the Balmer lines weaken significantly or disappear altogether
(see, e.g. Filippenko et al. 1997) and their spectra become more similar to those of type
Ib SNe. A prototypical member of this class is SN1993J that was discovered in early
April 1993 in the nearby galaxy M81. An HST-FOS UV spectrum of SN1993J was
obtained on 15 April 1993, about 18 days after explosion, and rather close to maximum
light. The study of this spectrum (Jeffery et al. 1994) shows that the approximately 1650-
2900Å region is smoother than observed for SN1987A and SN1992A and lacks strong
P Cygni lines absorptions caused by iron peak element lines. It is of interest to note
that the UV spectrum of SN1993J is appreciably fainter than observed in most SNII,
thus revealing its "hybrid” nature and some resemblance to a SNIb. Synthetic spectra
calculated using a parameterized LTE procedure and a simple model atmosphere do not
fit the UV observations. Radio observations suggest that SN1993J is embedded in a thick
circumstellar medium envelope (Van Dyk et al. 1994, Weiler et al. 2007). Interaction of
supernova ejecta with circumstellar matter may be the origin of the smooth UV spectrum
so that UV observations of supernovae could provide insight into the circumstellar
environment of the supernova progenitors.
Thus, despite their different characteristics in the detailed optical and UV spectra,
all type II supernovae of the various sub-types appear to provide clear evidence for
the presence of a dense CSM and, in many cases, enhanced nitrogen abundance. Their
UV spectra at early phases are very blue, possibly with strong UV excess relative to a
blackbody extrapolation of their optical spectra.
5. SUPERNOVAE AND COSMOLOGY
SNIa have gained additional prominence because of their cosmological utility, in that
one can use their observed light curve shape and color to standardize their luminosities.
Thus, SNIa are virtually ideal standard candles (e.g. Macchetto and Panagia 1999)
to measure distances of truly distant galaxies, currently up to redshift around 1 and,
considerably more in the foreseeable future. In particular, Hubble Space Telescope
observations of Cepheids in parent galaxies of SNIa (an international project lead by
Allan Sandage) have produced very accurate determinations of their distances and the
absolute magnitudes of normal SNIa at maximum light that, in turn, have lead to the most
modern measure of the Hubble constant (i.e. the expansion rate of the local Universe),
H0 = 62.3 ± 1.3 (random) ± 5.0 (systematic) km s−1 Mpc−1 (Sandage et al. 2006, and references therein). This value is lower than the determination obtained by the H0 Key Project from a combination of various methods (H0 = 72 ± 8 km s−1 Mpc−1; Freedman et al. 2001). The difference is well within the experimental uncertainties, and a weighted average of the two determinations would provide a compromise value of H0 = 65.2 ± 4.3 km s−1 Mpc−1.

FIGURE 5. Top: Intermediate resolution (R=1500) spectra of three SNe near maximum light: SNIa 1992A (Kirshner et al. 1993), SNIIn/L 1998S (Lentz et al. 2001) and SNIIP 1999em (Baron et al. 2000), normalized in the V band. Bottom: Low-resolution rendition of the observed spectra convolved to a R=4 resolution, showing the color differences in the UV.
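As an arithmetic check (a sketch, not from the paper; combining the random and systematic terms of the Sandage et al. measurement in quadrature is our assumption), the compromise value follows from a standard inverse-variance weighted average:

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

# Sandage et al. 2006: 62.3 +/- 1.3 (random) +/- 5.0 (systematic)
sigma_sandage = math.hypot(1.3, 5.0)   # random and systematic in quadrature
# Freedman et al. 2001 (H0 Key Project): 72 +/- 8
h0, dh0 = weighted_mean([62.3, 72.0], [sigma_sandage, 8.0])
print(f"H0 = {h0:.1f} +/- {dh0:.1f} km/s/Mpc")  # H0 = 65.2 +/- 4.3 km/s/Mpc
```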
Observations of high redshift (i.e. z>0.1) SNIa have provided evidence for a recent
(past several billion years) acceleration of the expansion of the Universe, pushed by
some mysterious "dark energy". This is an exciting result that, if confirmed, may shake
the foundations of physics. The results of two competing teams (Perlmutter et al. 1998,
1999, Riess et al. 1998, Knop et al. 2003, Tonry et al. 2003, Riess et al. 2004) appear
to agree in indicating a non-empty inflationary Universe, which can be characterized by
ΩM ≃ 0.3 and ΩΛ ≃ 0.7. Correspondingly, the age of the Universe can be bracketed
within the interval 12.3-15.3 Gyrs to a 99.7% confidence level (Perlmutter et al. 1999).
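For orientation (a sketch, not part of the original analysis), the age of a flat ΩM = 0.3, ΩΛ = 0.7 universe follows from integrating the Friedmann equation; with H0 ≈ 65.2 km s−1 Mpc−1 the result lands inside the quoted bracket:

```python
import math

def age_gyr(h0, omega_m, omega_l, n=100_000):
    """Age of a flat universe: t0 = (1/H0) * Int_0^1 da / sqrt(Om/a + Ol*a^2).

    Rewritten as Int_0^1 sqrt(a)/sqrt(Om + Ol*a^3) da the integrand is bounded,
    so a simple midpoint rule suffices.
    """
    da = 1.0 / n
    integral = sum(
        math.sqrt(a) / math.sqrt(omega_m + omega_l * a ** 3)
        for a in (da * (i + 0.5) for i in range(n))
    ) * da
    hubble_time_gyr = 977.8 / h0   # 1/H0 in Gyr for H0 in km/s/Mpc
    return hubble_time_gyr * integral

t0 = age_gyr(65.2, 0.3, 0.7)
print(f"t0 = {t0:.1f} Gyr")   # inside the 12.3-15.3 Gyr bracket
```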
However, the uncertainties, especially the systematic ones, are still uncomfortably
large and, therefore, the discovery and the accurate measurement of more high-z SNIa
are absolutely needed. This is a challenging proposition, both for technical reasons, in
that searching for SNe at high redshifts one has to make observations in the near IR
(because of redshift) of increasingly faint objects (because of distance) and for more
subtle scientific reasons, i.e. one has to verify that the discovered SNe are indeed SNIa
and that these share the same properties as their local Universe relatives.
One can discern Type I from Type II SNe on the basis of the overall properties of their
UV spectral distributions (Panagia 2003), because Type II SNe are strong UV emitters,
whereas all Type I SNe, irrespective of whether they are Ia or Ib/c, have spectra steeply
declining at high frequencies (see Figure 5). This technique of recognizing SNIa from
their steep UV spectral slope was devised by Panagia (2003), and has been successfully
employed by Riess et al. (2004a,b) to select their best candidates for HST follow-up of
high-z SNIa. However, we have to keep in mind that by using this technique one is merely
separating the SNe with low UV emission (SNe Ia, Ib, Ic and, possibly, IIb) from the
ones with high UV emission (most type II SNe). While it is a convenient approach to
select interesting candidates, it cannot be a substitute for detailed spectroscopy, possibly
at an R>100 resolution, to reliably characterize the SN type.
REFERENCES
E. Baron et al. 2000, ApJ, 545, 444
F. Bertola 1964, Ann.Ap, 27, 319
F. Bertola, A. Mammano, M. Perinotto 1965, Asiago Contr., 174, 51
D. Branch 2000, in "The Largest Explosions since the Big Bang: Supernovae and Gamma Ray Bursts", eds. M. Livio, N. Panagia, K. Sahu (Cambridge University Press, Cambridge) p. 96
M. Della Valle, M. Livio 1994, ApJ, 423, L31
M. Della Valle, N. Panagia 2003, ApJ, 587, L71
M. Della Valle et al. 2005, ApJ, 629, 750
A. Filippenko 1997, ARAAp, 35, 309
W.L. Freedman et al. 2001, ApJ, 553, 47
M. Hamuy et al. 2001, ApJ, 558, 615
R.P. Kirshner et al. 1993, ApJ, 415, 589
R.D. Knop et al. 2003, ApJ, 598, 102
E.J. Lentz et al. 2001, ApJ, 547, 406
D.J. Jeffery et al. 1992, ApJ, 397, 304
D.J. Jeffery et al. 1994, ApJ, 421, L27
F.D. Macchetto, N. Panagia 1999, in "Post-Hipparcos Cosmic Candles", eds. A. Heck, F. Caputo (Kluwer, Holland) p. 225
F. Mannucci et al. 2005, A&A, 433, 807
F. Mannucci, M. Della Valle, N. Panagia 2006, MNRAS, 370, 773
J. Millard et al. 1999, ApJ, 527, 746
N. Panagia 1985, in "Supernovae As Distance Indicators", LNP 224 (Springer, Berlin) p. 226
N. Panagia 2000, in "Experimental Physics of Gravitational Waves", eds. G. Calamai, M. Mazzoni, R. Stanga, F. Vetrano (World Scientific, Singapore) p. 107
N. Panagia 2003, in "Supernovae and Gamma-Ray Bursters", ed. K. W. Weiler (Springer-Verlag, Berlin) p. 113-144
N. Panagia 2005, in "Frontier Objects in Astrophysics and Particle Physics", eds. F. Giovannelli & G. Mannocchi, It. Phys. Soc., in press [astro-ph/0502247]
N. Panagia et al. 1980, MNRAS, 192, 861
S. Perlmutter et al. 1998, Nature, 391, 51
S. Perlmutter et al. 1999, ApJ, 517, 565
A.G. Riess et al. 1998, AJ, 116, 1009
A.G. Riess et al. 2004a, ApJ, 600, L163
A.G. Riess et al. 2004b, ApJ, 607, 665
A. Sandage, G.A. Tammann, A. Saha, B. Reindl, F.D. Macchetto, N. Panagia 2006, ApJ, 653, 843
J.L. Tonry et al. 2003, ApJ, 594, 1
S. Van Dyk et al. 1994, ApJ, 432, L115
K.W. Weiler et al. 2007, ApJ, in press
J.C. Wheeler, R.P. Harkness 1986, in "Galaxy Distances and Deviations from Universal Expansion", eds. B.F. Madore, R.B. Tully (Reidel, Dordrecht) p. 45
0704.1667 | Stochastic fluctuations in metabolic pathways

Stochastic fluctuations in metabolic pathways
Erel Levine and Terence Hwa
Center for Theoretical Biological Physics and Department of Physics, University of
California at San Diego, La Jolla, CA 92093-0374
Abstract. Fluctuations in the abundance of molecules in the living cell may
affect its growth and well being. For regulatory molecules (e.g., signaling proteins
or transcription factors), fluctuations in their expression can affect the levels of
downstream targets in a network. Here, we develop an analytic framework to
investigate the phenomenon of noise correlation in molecular networks. Specifically,
we focus on the metabolic network, which is highly inter-linked, and noise properties
may constrain its structure and function. Motivated by the analogy between the
dynamics of a linear metabolic pathway and that of the exactly solvable linear
queueing network or, alternatively, a mass transfer system, we derive a plethora
of results concerning fluctuations in the abundance of intermediate metabolites in
various common motifs of the metabolic network. For all but one case examined, we
find the steady-state fluctuation in different nodes of the pathways to be effectively
uncorrelated. Consequently, fluctuations in enzyme levels only affect local properties
and do not propagate elsewhere into metabolic networks, and intermediate metabolites
can be freely shared by different reactions. Our approach may be applicable to study
metabolic networks with more complex topologies, or protein signaling networks which
are governed by similar biochemical reactions. Possible implications for bioinformatic
analysis of metabolomic data are discussed.
Due to the limited number of molecules for typical molecular species in microbial
cells, random fluctuations in molecular networks are commonplace and may play
important roles in vital cellular processes. For example, noise in sensory signals
can result in pattern formation and collective dynamics [1], and noise in signaling
pathways can lead to cell-to-cell variability [2]. Also, stochasticity in gene expression
has implications on cellular regulation [3, 4] and may lead to phenotypic diversity [5, 6],
while fluctuations in the levels of (toxic) metabolic intermediates may reduce metabolic
efficiency [7] and impede cell growth.
In the past several years, a great deal of experimental and theoretical effort has
focused on the stochastic expression of individual genes, at both the translational and
transcriptional levels [8, 9, 10]. The effect of stochasticity on networks has been studied
in the context of small, ultra-sensitive genetic circuits, where noise at a circuit node (i.e.,
a gene) was shown to either attenuate or amplify output noise in the steady state [11, 12].
This phenomenon — termed ‘noise propagation’ — makes the steady-state fluctuations
at one node of a gene network depend in a complex manner on fluctuations at other
nodes, making it difficult for the cell to control the noisiness of individual genes of
interest [13]. Several key questions which arise from these studies of genetic noise
include (i) whether stochastic gene expression could further propagate into signaling
and metabolic networks through fluctuations in the levels of key proteins controlling
those circuits, and (ii) whether noise propagation occurs also in those circuits.
Recently, a number of approximate analytical methods have been applied to
analyze small genetic and signaling circuits; these include the independent noise
approximation [14, 15, 16], the linear noise approximation [14, 17], and the self-consistent
field approximation [18]. Due perhaps to the different approximation schemes used,
conflicting conclusions have been obtained regarding the extent of noise propagation
in various networks (see, e.g., [17].) Moreover, it is difficult to extend these studies to
investigate the dependences of noise correlations on network properties, e.g., circuit
topology, nature of feedback, catalytic properties of the nodes, and the parameter
dependences (e.g., the phase diagram). It is of course also difficult to elucidate these
dependences using numerical simulations alone, due to the very large number of degrees of freedom
involved for a network with even a modest number of nodes and links.
In this study, we describe an analytic approach to characterize the probability
distribution for all nodes of a class of molecular networks in the steady state. Specifically,
we apply the method to analyze fluctuations and their correlations in metabolite
concentrations for various core motifs of the metabolic network. The metabolic network
consists of nodes which are the metabolites, linked to each other by enzymatic reactions
that convert one metabolite to another. The predominant motif in the metabolic
network is a linear array of nodes linked in a given direction (the directed pathway),
which are connected to each other via converging pathways and diverging branch points
[19]. The activities of the key enzymes are regulated allosterically by metabolites
from other parts of the network, while the levels of many enzymes are controlled
transcriptionally and are hence subject to deterministic as well as stochastic variations
in their expressions [20]. To understand the control of metabolic network, it is important
to know how changes in one node of the network affect properties elsewhere.
Applying our analysis to directed linear metabolic pathways, we predict that
the distribution of molecule number of the metabolites at intermediate nodes to
be statistically independent in the steady state, i.e., the noise does not propagate.
Moreover, given the properties of the enzymes in the pathway and the input flux, we
provide a recipe which specifies the exact metabolite distribution function at each node.
We then show that the method can be extended to linear pathways with reversible
links, with feedback control, to cyclic and certain converging pathways, and even to
pathways in which flux conservation is violated (e.g., when metabolites leak out of
the cell). We find that in these cases correlations between nodes are negligible or
vanish completely, although nontrivial fluctuations and correlations do dominate for a
special type of converging pathways. Our results suggest that for vast parts of the
metabolic network, different pathways can be coupled to each other without generating
complex correlations, so that properties of one node (e.g., enzyme level) can be changed
over a broad range without affecting behaviors at other nodes. We expect that the
realization of this remarkable property will shape our understanding of the operation
of the metabolic network, its control, as well as its evolution. For example, our results
suggest that correlations between steady-state fluctuations in different metabolites bear
no information on the network structure. In contrast, temporal propagation of the
response to an external perturbation should capture - at least locally - the morphology
of the network. Thus, the topology of the metabolic network should be studied during
transient periods of relaxation towards a steady-state, and not at steady-state.
Our method is motivated by the analogy between the dynamics of biochemical
reactions in metabolic pathways and that of the exactly solvable queueing systems [46]
or, alternatively, as mass transfer systems [22, 47]. Our approach may be applicable
also to analyzing fluctuations in signaling networks, due to the close analogy between
the molecular processes underlying the metabolic and signaling networks. To make our
approach accessible to a broad class of circuit modelers and bioengineers who may not
be familiar with nonequilibrium statistical mechanics, we will present in the main text
only the mathematical results supported by stochastic simulations, and defer derivations
and illustrative calculations to the Supporting Materials. While our analysis is general,
all examples are taken from amino-acid biosynthesis pathways in E. Coli [24].
1. Individual Nodes
1.1. A molecular Michaelis-Menton model
In order to set up the grounds for analyzing a reaction pathway and to introduce our
notation, we start by analyzing fluctuations in a single metabolic reaction catalyzed by
an enzyme.
Recent advances in experimental techniques have made it possible to track the
enzymatic turnover of a substrate to product at the single-molecule level [26, 27], and
to study instantaneous metabolite concentration in the living cell [28]. To describe
this fluctuation mathematically, we model the cell as a reaction vessel of volume V ,
containing m substrate molecules (S) and NE enzymes (E). A single molecule of S can
bind to a single enzyme E with rate k+ per volume, and form a complex, SE. This
complex, in turn, can unbind (at rate k−) or convert S into a product form, P , at rate
k2. This set of reactions is summarized by
S + E \underset{k_-}{\overset{k_+}{\rightleftharpoons}} SE \xrightarrow{k_2} P + E . (1)
Analyzing these reactions within a mass-action framework — keeping the substrate
concentration fixed, and assuming fast equilibration between the substrate and the
enzymes (k± ≫ k2) — leads to the Michaelis-Menten (MM) relation between the
macroscopic flux c and the substrate concentration [S] = m/V :
c = v_{max} [S]/([S] + K_M) , (2)
where KM = k−/k+ is the dissociation constant of the substrate and the enzyme,
and vmax = k2[E] is the maximal flux, with [E] = NE/V being the total enzyme
concentration.
Our main interest is in noise properties, resulting from the discreteness of molecules.
We therefore need to track individual turnover events. These are described by the
turnover rate wm, defined as the inverse of the mean waiting time per volume between
the (uncorrelated‡) synthesis of one product molecule to the next. Assuming again
fast equilibration between the substrate and the enzymes, the probability of having
NSE complexes given m substrate molecules and NE enzymes is simply given by the
Boltzmann distribution,
p(N_{SE}|m,N_E) = \frac{K^{-N_{SE}}}{Z_{m,N_E}} \, \frac{m!\,N_E!}{N_{SE}!\,(m-N_{SE})!\,(N_E-N_{SE})!} , (3)

for NSE ≤ min(m,NE). Here K−1 = V k+/k− is the Boltzmann factor associated
with the formation of an SE complex, and Zm,NE takes care of normalization (i.e.,
chosen such that ΣNSE p(NSE|m,NE) = 1). Under this condition, the turnover rate
wm = (k2/V) ΣNSE NSE · p(NSE|m,NE) is given approximately by

w_m = v_{max} \frac{m}{m + (K + N_E - 1)} + O(K^{-3}) , (4)
with vmax = k2 NE/V ; see Supp. Mat. We note that for a single enzyme (NE = 1), one
has wm = vmax m/(m+K), which was derived and verified experimentally [27, 29].
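The Boltzmann average behind Eq. (4) is easy to check numerically. The sketch below (illustrative parameter values only, with k2 = V = 1; not code from the paper) evaluates 〈NSE〉 exactly from the weights of Eq. (3) and compares the resulting wm with the approximate form; for NE = 1 the two coincide, and for weak binding (large K) they agree closely:

```python
from math import comb, factorial

def turnover_exact(m, NE, K, k2=1.0, V=1.0):
    """w_m = (k2/V) <N_SE>, with <N_SE> from the Boltzmann weights of Eq. (3)."""
    # weight(n) is proportional to K^{-n} m! NE! / (n! (m-n)! (NE-n)!)
    w = [K ** (-n) * comb(m, n) * comb(NE, n) * factorial(n)
         for n in range(min(m, NE) + 1)]
    mean_nse = sum(n * wn for n, wn in enumerate(w)) / sum(w)
    return (k2 / V) * mean_nse

def turnover_approx(m, NE, K, k2=1.0, V=1.0):
    """Approximate rate of Eq. (4): vmax * m / (m + K + NE - 1)."""
    vmax = k2 * NE / V
    return vmax * m / (m + K + NE - 1)

# Single enzyme: the exact rate reduces to vmax * m / (m + K)
print(turnover_exact(8, 1, 10.0), turnover_approx(8, 1, 10.0))
# Several enzymes, weak binding (large K): the two agree closely
print(turnover_exact(10, 5, 50.0), turnover_approx(10, 5, 50.0))
```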
1.2. Probability distribution of a single node
In a metabolic pathway, the number of substrate molecules is not kept fixed; rather,
these molecules are synthesized or imported from the environment, and at the same time
‡ We note in passing that some correlations do exist – but not dominate – in the presence of “dynamical
disorder” [27], or if turnover is a multi-step process [29, 30].
turned over into products. We consider the influx of substrate molecules to be a Poisson
process with rate c. These molecules are turned into product molecules with rate wm
given by Eq. (4). The number of substrate molecules is now fluctuating, and one can ask
what is the probability π(m) of finding m substrate molecules at the steady-state. This
probability can be found by solving the steady-state Master equation for this process
(see Supp. Mat.), yielding
\pi(m) = \binom{m+K+N_E-1}{m} (1-z)^{K+N_E} z^m , (5)

where z = c/vmax [31]. The form of this distribution is plotted in supporting figure 1
(solid black line). As expected, a steady state exists only when c ≤ vmax. Denoting the
steady-state average by angular brackets, i.e., 〈xm〉 ≡ Σm xm π(m), the condition that
the incoming flux equals the outgoing flux is written as

c = \langle w_m \rangle = v_{max} \frac{s}{s + (K + N_E)} , (6)
where s ≡ 〈m〉.
Comparing this microscopically-derived flux-density relation with the MM relation
(2) using the obvious correspondence [S] = s/V , we see that the two are equivalent
with KM = (K +NE)/V . Note that this microscopically-derived form of MM constant
is different by the amount [E] from the commonly used (but approximate) form
KM = K/V , derived from mass-action. However, for typical metabolic reactions,
KM ∼ 10 − 1000µM [24] while [E] is not more than 1000 molecules in a bacterium
cell (∼ 1µM); so the numerical values of the two expressions may not be very different.
We will characterize the variation of substrate concentration in the steady-state by
the noise index

\eta_s^2 \equiv \frac{\sigma_s^2}{s^2} = \frac{v_{max}}{c \, (K+N_E)} , (7)

where σ2s is the variance of the distribution π(m). Since z = c/vmax ≤ 1 and increases
with s towards 1 (see Eq. 6), ηs decreases with the average occupancy s, as expected. It
is bounded from below by 1/√(K+NE), which can easily be several percent. Generally,
large noise is obtained when the reaction is catalyzed by a small number of high-affinity
enzymes (i.e., for low K and NE).
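Eqs. (5)-(7) can be verified with a few lines of numerics (a sketch with arbitrary illustrative parameters, not taken from the paper). The distribution is built from the birth-death balance c π(m) = w_{m+1} π(m+1), which reproduces the negative-binomial form of Eq. (5):

```python
K, NE, vmax, c = 20.0, 5.0, 10.0, 5.0   # arbitrary values with c < vmax
z, r = c / vmax, K + NE

# pi(m+1)/pi(m) = c / w_{m+1} = z * (m + r) / (m + 1): negative binomial, Eq. (5)
pi = [1.0]
for m in range(4000):
    pi.append(pi[-1] * z * (m + r) / (m + 1))
norm = sum(pi)
pi = [p / norm for p in pi]

s = sum(m * p for m, p in enumerate(pi))                # mean occupancy <m>
var = sum((m - s) ** 2 * p for m, p in enumerate(pi))   # variance sigma_s^2
flux_out = sum(vmax * m / (m + r - 1) * p for m, p in enumerate(pi))  # <w_m>

print(s, flux_out, var / s ** 2)
# Eq. (6): flux balance gives flux_out = c, i.e. c = vmax * s / (s + K + NE)
# Eq. (7): var / s^2 = vmax / (c * (K + NE))
```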
2. Linear pathways
2.1. Directed pathways
We now turn to a directed metabolic pathway, where an incoming flux of substrate
molecules is converted, through a series of enzymatic reactions, into a product flux [19].
Typically, such a pathway involves on the order of 10 reactions, each of which takes as precursor the
product of the preceding reaction, and frequently involves an additional side-reactant
(such as a water molecule or ATP) that is abundant in the cell (and whose fluctuations
can be neglected). As a concrete example, we show in figure 1(a) the tryptophan
Figure 1. Linear biosynthesis pathway. (a) Tryptophan biosynthesis pathway in E.
Coli. (b) Model for a directed pathway. Dashed lines depict end-product inhibition.
Abbreviations: CPAD5P, 1-O-Carboxyphenylamino 1-deoxyribulose-5-phosphate;
NPRAN, N-5-phosphoribosyl-anthranilate; IGP, Indole glycerol phosphate; PPI,
Pyrophosphate; PRPP, 5-Phosphoribosyl-1-pyrophosphate; T3P1, Glyceraldehyde 3-
phosphate.
biosynthesis pathway of E. Coli [24], where an incoming flux of chorismate is converted
through 6 directed reactions into an outgoing flux of tryptophan, making use of several
side-reactants. Our description of a linear pathway includes an incoming flux c of
substrates of type S1 along with a set of reactions that convert substrate type Si to
Si+1 by enzyme Ei (see figure 1(b)) with rate w
mi = vimi/(mi +Ki − 1) according to
Eq. (4). We denote the number of molecules of intermediate Si by mi, with m1 for the
substrate and mL for the end-product. The superscript (i) indicates explicitly that the
parameters vi = k^(i)_E/V and Ki = (K^(i) + N^(i)_E) describing the enzymatic reaction
Si → Si+1 are expected to be different for different reactions.
The steady-state of the pathway is fully described by the joint probability
distribution π(m1, m2, . . . , mL) of having mi molecules of intermediate substrate type
Si. Surprisingly, this steady-state distribution is given exactly by a product measure,
π(m1, m2, . . . , mL) = ∏_{i=1}^{L} πi(mi) , (8)
where πi(m) is as given in Eq. (5) (with K +NE replaced by Ki and z by zi = c/vi),
as we show in Supp. Mat. This result indicates that in the steady state, the number of
molecules of one intermediate is statistically independent of the number of molecules of
any other substrate§. The result has been derived previously in the context of queueing
networks [46], and of mass-transport systems [47]. Either may serve as a useful analogy
for a metabolic pathway.
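The decoupling result (8) can be probed directly with a small Gillespie simulation. The sketch below, under illustrative parameter values, simulates a two-step directed pathway and compares the time-averaged pools and their cross-correlation with the product-measure prediction si = Ki zi/(1 − zi):

```python
# Gillespie simulation of a two-step directed pathway (L = 2): Poisson influx
# of S1 at rate c, conversions with rates w_i = v_i m_i/(m_i + K_i - 1) as in
# Eq. (4). Time-averaged means and the S1-S2 correlation are compared with
# the product-measure prediction s_i = K_i z_i/(1 - z_i). Parameters are
# illustrative.
import random

random.seed(1)
c = 1.0
v = [2.0, 2.5]
K = [20.0, 30.0]
m = [0, 0]

t, burn, t_end = 0.0, 1.0e4, 1.0e5
T = 0.0
s1 = s2 = s11 = s22 = s12 = 0.0          # time-weighted moments
while t < t_end:
    w = [v[i] * m[i] / (m[i] + K[i] - 1) for i in range(2)]
    total = c + w[0] + w[1]
    dt = random.expovariate(total)
    if t > burn:                          # accumulate after burn-in
        T += dt
        s1 += m[0] * dt; s2 += m[1] * dt
        s11 += m[0] ** 2 * dt; s22 += m[1] ** 2 * dt
        s12 += m[0] * m[1] * dt
    t += dt
    r = random.uniform(0.0, total)
    if r < c:
        m[0] += 1                         # influx event
    elif r < c + w[0]:
        m[0] -= 1; m[1] += 1              # S1 -> S2
    elif m[1] > 0:
        m[1] -= 1                         # S2 -> product (guarded float edge)

mean = [s1 / T, s2 / T]
cov = s12 / T - mean[0] * mean[1]
corr = cov / (((s11 / T - mean[0] ** 2) * (s22 / T - mean[1] ** 2)) ** 0.5)
pred = [K[i] * (c / v[i]) / (1.0 - c / v[i]) for i in range(2)]
print(mean, pred, corr)                   # means match; corr is close to 0
```

The near-zero cross-correlation of the two pools is the signature of the product measure.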
Since the different metabolites in a pathway are statistically decoupled in the steady
state, the mean si = 〈mi〉 and the noise index η²_{si} = c^{−1} vi/Ki can be determined by
Eq. (7) individually for each node of the pathway. It is an interesting consequence
of the decoupling property of this model that both the mean concentration of each
substrate and the fluctuations depend only on the properties of the enzyme immediately
downstream. While the steady-state flux c is a constant throughout the pathway, the
§ We note, however, that short-time correlations between metabolites can still exist, and may be probed
for example by measuring two-time cross-correlations; see discussion at the end of the text.
Figure 2. Noise in metabolite molecular number (ηs = σs/s) for different pathways.
Monte-Carlo simulations (bars) are compared with the analytic prediction (symbols)
obtained by assuming decorrelation for different nodes of the pathways. The structure
of each pathway is depicted under each panel. Parameter values were chosen randomly
such that 10³ < Ki < 10⁴ and c < vi < 10c. Similar decorrelation was obtained for 100 different random choices of parameters, and for 100 different sets with Ki 10-fold
smaller (data not shown). The effect on the different metabolites of a change in the
velocity of the first reaction, v1 = 1.1c (dark gray)→ 5c (light gray), is demonstrated.
Similar results are obtained for changes in K1 (data not shown.) (a) Directed pathway.
Here the decorrelation property is exact. (b) Directed pathway with two reversible
reactions. For these reactions, v^+_{3,4} = 8.4c, 6.9c; v^−_{3,4} = 1.6c, 3.7c; K^+_{3,4} = 2500, 8000 and K^−_{3,4} = 7700, 3700. (c) Linear dilution of metabolites. Here β/c = 1/100. (d)
End-product inhibition, where the influx rate is given by α = c0/[1 + (mL/KI)^h], with KI = 1000. (e) Diverging pathways. Here metabolite 4 is being processed by two
enzymes (with different affinities, KI = 810,KII = 370) into metabolites 5 and 7,
resp. (f) Converging pathways. Here two independent 3-reaction pathways, with
fluxes c and c′ = c/2, produce the same product, S4.
parameters vi and Ki can be set separately for each reaction by the copy-number and
kinetic properties of the enzymes (provided that c < vi). Hence, for example, in a case
where a specific intermediate may be toxic, tuning the enzyme properties may serve to
decrease fluctuations in its concentration, at the price of a larger mean. To illustrate
the decorrelation between different metabolites, we examine the response of steady-state
fluctuations to a 5-fold increase in the enzyme level [E1]. The typical time scale for changes in enzyme level much exceeds that of the enzymatic reactions. Hence, the enzyme level
changes may be considered as quasi-steady state. In figure 2(a) we plot the noise indices
of the different metabolites. While noise in the first node is significantly reduced upon
a 5-fold increase in [E1], fluctuations at the other nodes are not affected at all.
2.2. Reversible reactions
The simple form of the steady-state distribution (8) for the directed pathways may
serve as a starting point to obtain additional results for metabolic networks with more
elaborate features. We demonstrate such applications of the method by some examples
below. In many pathways, some of the reactions are in fact reversible. Thus, a
metabolite Si may be converted to metabolite Si+1 with rate v^+_max mi/(mi + K^+_i) or to substrate Si−1 with rate v^−_max mi/(mi + K^−_i). One can show — in a way similar to Ref. [47] — that the decoupling property (8) holds exactly only if the ratio of the two rates is a constant independent of mi, i.e. when K^+_i = K^−_i. In this case the steady state probability is still given by (5), with the local currents obeying
v^+_i zi − v^−_{i+1} z_{i+1} = c . (9)
This is nothing but the simple fact that the overall flux is the difference between the
local current in the direction of the pathway and that in the opposite direction.
In general, of course, K^+_i ≠ K^−_i. However, we expect the distribution to be given approximately by the product measure in the following situations: (a) K^+_i ≃ K^−_i; (b) the two reactions are in the zeroth-order regime, s ≫ K^±_i; (c) the two reactions are in the linear regime, s ≪ K^±_i. In the latter case Eq. (9) is replaced by
(v^+_k/K^+_k) zk − (v^−_{k+1}/K^−_{k+1}) z_{k+1} = c .
Taken together, it is only for a narrow region (i.e., si ∼ Ki) where the product measure
may not be applicable. This prediction is tested numerically, again by comparing two
pathways (now containing reversible reactions) with 5-fold difference in the level of
the first enzyme. From figure 2(b), we see again that the difference in noise indices exists only in the first node, and the computed value of the noise index at each node
is in excellent agreement with predictions based on the product measure (symbols).
Similar decorrelation was obtained for 100 different random choices of parameters, and
for 100 different sets with Ki 10-fold smaller (data not shown).
2.3. Dilution of intermediates
In the description so far, we have ignored possible catabolism of intermediates or dilution
due to growth. This makes the flux a conserved quantity throughout the pathway, and
is the basis of the flux-balance analysis [32]. One can generalize our framework for the
case where flux is not conserved, by allowing particles to be degraded with rate um.
Suppose, for example, that on top of the enzymatic reaction a substrate is subjected
to an effective linear degradation, um = βm. This includes the effect of dilution due
to growth, in which case β = ln(2)/(mean cell division time), and the effect of leakage
out of the cell. As before, we first consider the dynamics at a single node, where the
metabolite is randomly produced (or transported) at a rate c0. It is straightforward to
generalize the Master equation for the microscopic process to include um, and solve it
in the same way. With wm as before, the steady state distribution of the substrate pool
size is then found to be
π(m) = (1/Z) binom(m+K−1, m) (c0/β)^m / (v/β + K)_m , (10)
where (a)m ≡ a(a+ 1) · · · (a+m− 1). This form of π(m) allows one to easily calculate
moments of the molecule number from the partition function Z as in equilibrium
statistical mechanics, e.g. s = 〈m〉 = c0dZ/dc0, and thence the outgoing flux, c = c0−βs.
Using the fact that Z can be written explicitly in terms of hypergeometric functions,
we find that the noise index grows with β as η²_s ≃ v/(Kc0) + β/c0. The distribution
function is given in supporting figure 1 for several values of β.
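A minimal numerical sketch of this single-node steady state: for a birth-death process with birth rate c0 and total death rate wm + βm, detailed balance gives π(m+1)/π(m) = c0/(w_{m+1} + β(m+1)), from which the mean, the noise index and the outgoing flux c = c0 − βs follow. All parameter values are illustrative.

```python
# Single node with Michaelis-Menten consumption and linear dilution, Eq. (10).
# Detailed balance for the birth-death chain gives
#   pi(m+1)/pi(m) = c0 / (w_{m+1} + beta*(m+1)),  with w_m = v m/(m + K - 1).
# Parameter values are illustrative.
c0, v, K, beta = 1.0, 2.0, 100.0, 0.001

M = 20000
pi = [1.0]
for m in range(M):
    w = v * (m + 1) / (m + K)            # w_{m+1} = v(m+1)/((m+1) + K - 1)
    pi.append(pi[-1] * c0 / (w + beta * (m + 1)))
Z = sum(pi)
pi = [p / Z for p in pi]                 # normalize by the partition sum Z

s = sum(m * p for m, p in enumerate(pi))          # mean pool
var = sum((m - s) ** 2 * p for m, p in enumerate(pi))
eta2 = var / s ** 2
c_out = c0 - beta * s                             # enzymatic (outgoing) flux
print(s, eta2, c_out)    # eta2 is close to v/(K c0) + beta/c0
```

For these parameters the dilution term removes a few percent of the influx, and the noise index agrees with the approximate formula above.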
Generalizing the above to a directed pathway, we allow for β, as well as for vmax
and K, to be i-dependent. The decoupling property (8) does not generally hold in the
non-conserving case [33]. However, in this case the stationary distribution still seems
to be well approximated by a product of the single-metabolite functions πi(m) of the
form (10), with c0/β → ci−1/βi. This is supported again by the excellent agreement
between noise indices obtained by numerical simulations and analytic calculations using
the product measure Ansatz, for linear pathways with dilution of intermediates; see
figure 2(c). In this case, a change in the level of the first enzyme does “propagate” to
the downstream nodes. But this is not a “noise propagation” effect, as the mean fluxes
〈ci〉 at the different nodes are already affected. (To illustrate the effect of leakage, the
simulation used parameters that corresponded to a huge leakage current which is 20%
of the flux. This is substantially larger than typical leakage encountered, say due to
growth-mediated dilution, and we do not expect propagation effects due to leakage to
be significant in practice.)
3. Interacting pathways
The metabolic network in a cell is composed of pathways of different topologies. While
linear pathways are abundant, one can also find circular pathways (such as the TCA
cycle), converging pathways and diverging ones. Many of these can be thought of as a
composition of interacting linear pathways. Another layer of interaction is imposed
on the system due to the allosteric regulation of enzyme activity by intermediate
metabolites or end products. To what extent can our results for a linear pathway
be applied to these more complex networks? Below we address this question for a few of
the frequently encountered cases. To simplify the analysis, we will consider only directed
pathways and suppress the dilution/leakage effect.
3.1. Cyclic pathways
We first address the cyclic pathway, in which the metabolite SL is converted into S1
by the enzyme EL. Borrowing a celebrated result for queueing networks [34] and mass
transfer models [35], we note that the decoupling property (8) described above for the
linear directed pathway also holds exactly even for the cyclic pathways‖. This result is
surprising mainly because the Poissonian nature of the “incoming” flux assumed in the
analysis so far is lost, replaced in this case by a complex expression, e.g., w
mL · πL(mL).
In an isolated cycle the total concentration of the metabolites, stot – and not the
‖ In fact, the decoupling property holds for a general network of directed single-substrate reactions,
even if the network contains cycles.
flux – is predetermined. In this case, the flux c is given by the solution to the equation
stot = Σ_i si(c) = Σ_i Ki c/(vi − c) . (11)
Note that this equation can always be satisfied by some positive c that is smaller than
all vi’s. In a cycle that is coupled to other branches of the network, flux may be governed
by metabolites going into the cycle or taken from it. In this case, flux balance analysis
will enable determination of the variables zi which specify the probability distribution.
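Equation (11) is monotone in c on (0, min_i vi) and its right-hand side diverges as c approaches min_i vi, so the cycle flux can be found by bisection; a short sketch with illustrative parameters:

```python
# Bisection solve of Eq. (11) for the flux around an isolated cycle:
#   s_tot = sum_i K_i c/(v_i - c).
# The right-hand side increases monotonically from 0 and diverges as
# c -> min_i v_i, so a root always exists. Parameters are illustrative.
K = [100.0, 200.0, 150.0]
v = [2.0, 3.0, 2.5]
s_tot = 400.0

def rhs(c):
    return sum(Ki * c / (vi - c) for Ki, vi in zip(K, v))

lo, hi = 0.0, min(v) * (1.0 - 1e-12)
for _ in range(200):                 # bisection on the monotone rhs(c) - s_tot
    mid = 0.5 * (lo + hi)
    if rhs(mid) < s_tot:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(c)                             # cycle flux; note c < min(v)
```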
3.2. End-product inhibition
Many biosynthesis pathways couple between supply and demand by a negative feedback
[24, 19], where the end-product inhibits the first reaction in the pathway or the transport
of its precursor; see, e.g., the dashed lines in figure 1. In this way, flux is reduced when
the end-product builds up. In branched pathways this may be done by regulating
an enzyme immediately downstream from the branch-point, directing some of the flux
towards another pathway.
To study the effect of end-product inhibition, we consider inhibition of the inflow
into the pathway. Specifically, we model the probability at which substrate molecules
arrive at the pathway by a stochastic process with exponentially-distributed waiting
time, characterized by the rate α(mL) = c0/[1 + (mL/KI)^h], where c0 is the maximal
influx (determined by availability of the substrate either in the medium or in the
cytoplasm), mL is the number of molecules of the end-product (SL), KI is the
dissociation constant of the interaction between the first enzyme E0 and SL, and h is a
Hill coefficient describing the cooperativity of interaction between E0 and SL. Because
mL is a stochastic variable itself, the incoming flux is described by a nontrivial stochastic
process which is manifestly non-Poissonian.
The steady-state flux is now
c = 〈α(mL)〉 = c0 · 〈 1/[1 + (mL/KI)^h] 〉 . (12)
This is an implicit equation for the flux c, which also appears in the right-hand side of
the equation through the distribution π(m1, ..., mL).
By drawing an analogy between feedback-regulated pathway and a cyclic pathway,
we conjecture that metabolites in the former should be effectively uncorrelated. The
quality of this approximation is expected to become better in cases where the ratio between the influx rate α(mL) and the outflux rate w_{mL} is typically mL-independent.
Under this assumption, we approximate the distribution function by the product
measure (8), with the form of the single node distributions given by (5). Note that the
conserved flux then depends on the properties of the enzyme processing the last reaction,
and in general should be influenced by the fluctuations in the controlling metabolite. In
this sense, these fluctuations propagate throughout the pathway at the level of the mean
flux. This should be expected from any node characterized by a high control coefficient.
Using this approximate form, Eq. (12) can be solved self-consistently to yield c(c0),
as is shown explicitly in Supp. Mat. for h = 1. The solution obtained is found to be
in excellent agreement with numerical simulation (Supporting figure 2a). The quality
of the product measure approximation is further scrutinized by comparing the noise
index of each node upon increasing the enzyme level of the first node 5-fold. Figure 2(d)
shows clearly that the effect of changing enzyme level does not propagate to other nodes.
While being able to accurately predict the flux and mean metabolite level at each node,
the predictions based on the product measure are found to underestimate the noise index by up to 10% (compare bars and symbols). We conclude that in this case correlations between metabolites do exist, but do not dominate. Thus analytic expressions derived from the decorrelation assumption can be useful even in this case (see supporting
figure 2b).
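Under the product-measure ansatz, Eq. (12) becomes a one-dimensional fixed-point problem that can be solved by bisection. The sketch below assumes the negative-binomial single-node form of Eq. (5) for πL(m) with z = c/vL; all parameter values are illustrative.

```python
# Self-consistent flux under end-product inhibition, Eq. (12), with the
# product-measure ansatz: pi_L(m) is the negative-binomial form of Eq. (5)
# with z = c/vL, and c solves c = c0 <1/(1 + (mL/KI)^h)>. The average
# decreases with c, so g(c) = c0 <...> - c is decreasing and bisection
# applies. Parameter values are illustrative.
c0, vL, KL, KI, h = 1.0, 2.4, 300.0, 100.0, 1.0

def inhibition_average(c, M=20000):
    """<1/(1+(m/KI)^h)> over pi_L(m) with z = c/vL (unnormalized recursion)."""
    z = c / vL
    pi, acc, norm = 1.0, 0.0, 0.0
    for m in range(M):
        norm += pi
        acc += pi / (1.0 + (m / KI) ** h)
        pi *= z * (m + KL) / (m + 1)     # ratio pi(m+1)/pi(m)
    return acc / norm

lo, hi = 0.0, min(c0, vL)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if c0 * inhibition_average(mid) > mid:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(c)                                  # steady-state flux, below c0
```

The same routine, swept over c0, reproduces the kind of c(c0) curve discussed in the supporting material.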
3.3. Diverging pathways
Many metabolites serve as substrates for several different pathways. In such cases,
different enzymes can bind to the substrate, each catalyzing the first reaction in a different pathway. Within our scheme, this can be modeled by allowing for a metabolite Si to be converted to metabolite S^I_1 with rate w^I_{mi} = v^I mi/(mi + K^I − 1) or to metabolite S^II_1 with rate w^II_{mi} = v^II mi/(mi + K^II − 1). The parameters v^{I,II} and K^{I,II} characterize the two different enzymes.
Similar to the case of reversible reactions, the steady-state distribution is given
exactly by a product measure only if w^I_{mi}/w^II_{mi} is a constant, independent of mi (namely when K^I = K^II). Otherwise, we expect it to hold in a range of alternative scenarios,
as described for reversible pathways.
Considering a directed pathway with a single branch point, the distribution (5)
describes exactly all nodes upstream of that point. At the branch point, one replaces wm by wm = w^I_m + w^II_m, to obtain the distribution function
π(m) = (1/Z) (K^I)_m (K^II)_m z^m / [ m! ((K^I v^II + K^II v^I)/(v^I + v^II))_m ] , (13)
with z = c/(v^I + v^II).
From this distribution one can obtain the fluxes going down each of the two branching pathways, c^{I,II} = Σ_m w^{I,II}_m π(m). Both fluxes depend on the properties of
both enzymes, as can be seen from (13), and thus at the branch-point the two pathways
influence each other [36]. Moreover, fluctuations at the branch point propagate into the branching pathways already at the level of the mean flux. This is consistent with the fact that the branch node is expected to be characterized by a high control coefficient.
While the different metabolites upstream of and including the branch point are uncorrelated, this is not exactly true for metabolites of the two branches. Nevertheless,
since these pathways are still directed, we further conjecture that metabolites in the two
Figure 3. Converging pathways. (a) Glycine is synthesized in two independent
pathways. (b) Citrulline is synthesized from products of two pathways. Abbreviations:
2A3O, 2-Amino-3-oxobutanoate.
branching pathways can still be described, independently, by the probability distribution
(5), with c given by the flux in the relevant branch, as calculated from (13). Indeed, the
numerical results of figure 2(e) strongly support this conjecture. We find that changes in the noise properties of a metabolite in the upstream pathway do not propagate to those of the branching pathways.
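The branch-point distribution and the two outgoing fluxes can be evaluated numerically from the birth-death recursion with the combined rate wm = w^I_m + w^II_m; a useful consistency check is flux conservation, c^I + c^II = c. The K values below echo those of figure 2(e); the velocities and the influx are illustrative assumptions.

```python
# Branch-point node: combined consumption w_m = w^I_m + w^II_m, steady state
# from pi(m) = pi(m-1) * c/w_m (equivalent to the closed form (13)), and
# branch fluxes c^{I,II} = sum_m w^{I,II}_m pi(m). The K values echo
# figure 2(e); all other numbers are illustrative.
c = 1.0
vI, KI = 2.0, 810.0
vII, KII = 1.5, 370.0

def wI(m):  return vI * m / (m + KI - 1)
def wII(m): return vII * m / (m + KII - 1)

M = 20000
pi = [1.0]
for m in range(1, M):
    pi.append(pi[-1] * c / (wI(m) + wII(m)))
Z = sum(pi)
pi = [p / Z for p in pi]

cI = sum(wI(m) * p for m, p in enumerate(pi))
cII = sum(wII(m) * p for m, p in enumerate(pi))
print(cI, cII)            # the two branch fluxes; they sum to c
```

Both fluxes shift if either enzyme is changed, which is the branch-point coupling discussed in the text.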
3.4. Converging pathways – combined fluxes
We next examine the case where two independent pathways result in synthesis of the
same product, P . For example, the amino acid glycine is the product of two (very short)
pathways, one using threonine and the other serine as precursors (figure 3(a)) [24]. With
only directed reactions, the different metabolites in the combined pathway – namely,
the two pathways producing P and a pathway catabolizing P – remain decoupled.
The simplest way to see this is to note that the process describing the synthesis of P ,
being the sum of two Poisson processes, is still a Poisson process. The pathway which
catabolizes P is therefore statistically identical to an isolated pathway, with an incoming
flux that is the sum of the fluxes of the two upstream pathways. More generally, the
Poissonian nature of this process allows for different pathways to dump or take from
common metabolite pools, without generating complex correlations among them.
3.5. Converging pathways – reaction with two fluctuating substrates
As mentioned above, some reactions in a biosynthesis pathway involve side-reactants,
which are assumed to be abundant (and hence at a constant level). Let us now discuss
briefly a case where this approach fails. Suppose that the two products of two linear
pathways serve as precursors for one reaction. This, for example, is the case in the
arginine biosynthesis pathway, where L-ornithine is combined with carbamoyl-phosphate
by ornithine-carbamoyltransferase to create citrulline (figure 3(b)) [24]. Within a flux
balance model, the net fluxes of both substrates must be equal to achieve steady state,
in which case the macroscopic Michaelis-Menten flux takes the form
c = vmax [S1][S2] / [(KM1 + [S1])(KM2 + [S2])] .
Figure 4. Time course of a two-substrate enzymatic reaction, as obtained by a
Gillespie simulation [44]. Here c = 3t^{−1}, k+ = 5t^{−1} and k− = 2t^{−1} for both substrates, t being an arbitrary time unit.
Here [S1,2] are the steady-state concentrations of the two substrates, and KM1,2 the
corresponding MM-constants. However, flux balance provides only one constraint to a
system with two degrees of freedom.
In fact, this reaction exhibits no steady state. To see why, consider a typical time
evolution of the two substrate pools (figure 4). Suppose that at a certain time one of
the two substrates, say S1, is of high molecule-number compared with its equilibrium
constant, m1 ≫ K1. In this case, the product synthesis rate is unaffected by the precise
value of m1, and is given approximately by vmaxm2/(m2+K2). Thus, the number m2 of
S2 molecules can be described by the single-substrate reaction analyzed above, while m1
performs a random walk (under the influence of a weak logarithmic potential), which is
bound to return, after some time τ , to values comparable with K1. Then, after a short
transient, one of the two substrates will become unlimiting again, and the system will
be back in the scenario described above, perhaps with the two substrates changing roles
(depending on the ratio between K1 and K2).
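The wandering described above can be illustrated with a minimal simulation in the spirit of figure 4: both pools receive Poisson influx c and are consumed jointly at a Michaelis-Menten-like rate. The rate form and all parameters below are illustrative assumptions, not the exact kinetic scheme of the text.

```python
# Illustration of a two-substrate reaction: both pools receive Poisson influx
# at rate c and are consumed jointly at rate
#   w = vmax*m1*m2/((m1 + K1)(m2 + K2))   (an illustrative assumption).
# The difference m1 - m2 is changed only by the two influx processes, so it
# performs an unbiased random walk -- the mechanism behind the absence of a
# steady state discussed in the text.
import random

random.seed(2)
c, vmax, K1, K2 = 1.0, 3.0, 20.0, 20.0
m1 = m2 = 0
t, t_end = 0.0, 5.0e3
trace = []
while t < t_end:
    w = vmax * m1 * m2 / ((m1 + K1) * (m2 + K2))
    total = 2.0 * c + w
    t += random.expovariate(total)
    r = random.uniform(0.0, total)
    if r < c:
        m1 += 1
    elif r < 2.0 * c:
        m2 += 1
    elif m1 > 0 and m2 > 0:
        m1 -= 1; m2 -= 1        # one reaction consumes one molecule of each
    trace.append((t, m1, m2))
print(trace[-1])                # one pool typically wanders to large values
```

Because Var(m1 − m2) grows like 2c·t, longer runs show ever larger excursions of one of the two pools.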
Importantly, the probability for the time τ during which one of the substrates is
at saturating concentration scales as τ^{−3/2} for large τ. During this time the substrate pool may increase to the order of √τ. The fact that τ has no finite mean implies that this
reaction has no steady state. Since accumulation of any substrate is most likely toxic,
the cell must provide some other mechanism to limit these fluctuations. This may be one
interpretation for the fact that within the arginine biosynthesis pathway, L-ornithine is
an enhancer of carbamoyl-phosphate synthesis (dashed line in figure 3(b)).
In contrast, a steady-state always exists if the two metabolites experience linear
degradation, as this process prevents indefinite accumulation. However, in general one
expects enzymatic reactions to dominate over futile degradation. In this case, equal
in-fluxes of the two substrates result in large fluctuations, similar to the ones described
above [31].
4. Discussion
In this work we have characterized stochastic fluctuations of metabolites for dominant
simple motifs of the metabolic network in the steady state. Motivated by the analogy
between the directed biochemical pathway and the mass transfer model or, equivalently,
as the queueing network, we show that the intermediate metabolites in a linear pathway – the key motif of the biochemical network – are statistically independent. We then
extend this result to a wide range of pathway structures. Some of the results (e.g.,
the directed linear, diverging and cyclic pathways) have been proven previously in
other contexts. In other cases (e.g., for reversible reactions, diverging pathways or with leakage/dilution), the product measure is not exact. Nevertheless, based on insights from the exactly solvable models, we conjecture that it still describes faithfully the
statistics of the pathway. Using the product measure as an Ansatz, we obtained
quantitative predictions which turned out to be in excellent agreement with the numerics
(figure 2). These results suggest that the product measure may be an effective starting
point for a quantitative, non-perturbative analysis of the stochastic properties of these circuits/networks. We hope this study will stimulate further analytical studies of
the large variety of pathway topologies in metabolic networks, as well as in-depth
mathematical analysis of the conjectured results. Moreover, it will be interesting to
explore the applicability of the present approach to other cellular networks, in particular,
stochasticity in protein signaling networks [2], whose basic mathematical structure is also
a set of interlinked Michaelis-Menten reactions.
Our main conclusion, that the steady-state fluctuations in each metabolite depend only on the properties of the reactions consuming that metabolite and not on fluctuations
in other upstream metabolites, is qualitatively different from conclusions obtained for
gene networks in recent studies, e.g., the “noise addition rule” [14, 15] derived from the
Independent Noise Approximation, and its extension to cases where the signals and
the processing units interact [17]. The detailed analysis of [17], based on the Linear
Noise Approximation, found certain anti-correlation effects which reduced the extent of
noise propagation from those expected by “noise addition” alone [14, 15]. While the
specific biological systems studied in [17] were taken from protein signaling systems,
rather than metabolic networks, a number of systems studied there are identical in
mathematical structure to those considered in this work. It is reassuring to find that
reduction of noise propagation becomes complete (i.e., no noise propagation) according
to the analysis of [17], also, for Poissonian input noise where direct comparisons can
be made to our work (ten Wolde, private communication). The cases in which residual noise propagation remained in [17] corresponded to certain “bursty” noises which are non-Poissonian. While bursty noise is not expected for metabolic and signaling reactions, it
is nevertheless important to address the extent to which the main finding of this work
is robust to the nature of stochasticity in the input and the individual reactions. The
exact result on the cyclic pathways and the numerical result on the directed pathway
with feedback inhibition suggest that our main conclusion on statistical independence
of the different nodes extends significantly beyond strict Poisson processes. Indeed,
generalizations that preserve this property include classes of transport rules and extended
topologies [37, 38].
The absence of noise propagation for a large part of the metabolic network allows
intermediate metabolites to be shared freely by multiple reactions in multiple pathways,
without the need of installing elaborate control mechanisms. In these systems, dynamic
fluctuations (e.g., stochasticity in enzyme expression which occurs at a much longer
time scale) stay local to the node, and are shielded from triggering system-level failures
(e.g., grid-locks). Conversely, this property allows convenient implementation of controls
on specific nodes of pathways, e.g., to limit the pool of a specific toxic intermediate,
without the concern of elevating fluctuations in other nodes. We expect this to make
the evolution of the metabolic network less constrained, so that the system can modify its
local properties nearly freely in order to adapt to environmental or cellular changes. The
optimized pathways can then be meshed smoothly into the overall metabolic network,
except at junctions between pathways, where complex fluctuations arise that are not constrained by flux conservation.
In recent years, metabolomics, i.e., global metabolite profiling, has been suggested
as a tool to decipher the structure of the metabolic network [39, 40]. Our results suggest
that in many cases, steady-state fluctuations do not bear information about the pathway
structure. Rather, correlations between metabolite fluctuations may be, for example, the
result of fluctuation of a common enzyme or coenzyme, or reflect dynamical disorder [27].
Indeed, a bioinformatic study found no straightforward connection between observed
correlation and the underlying reaction network [41]. Instead, the response to external
perturbation [28, 39, 42] may be much more effective in shedding light on the underlying
structure of the network, and may be used to study the morphing of the network under
different conditions. It is important to note that all results described here are applicable
only to systems in the steady state; transient responses such as the establishment of
the steady state and the response to external perturbations will likely exhibit complex
temporal as well as spatial correlations. Nevertheless, it is possible that some aspects of
the response function may be attainable from the steady-state fluctuations through non-
trivial fluctuation-dissipation relations as was shown for other related nonequilibrium
systems [22, 43].
Acknowledgments
We are grateful to Peter Lenz and Pieter Rein ten Wolde for discussions. This work
was supported by NSF through the PFC-sponsored Center for Theoretical Biological
Physics (Grants No. PHY-0216576 and PHY-0225630). TH acknowledges additional
support by NSF Grant No. DMR-0211308.
Supporting Material
Appendix A. Microscopic model
Under the assumption of fast equilibration between the substrate and the enzyme, the
probability of having NSE complexes given m substrate molecules and NE enzymes is
given by equation (3) of the main text. To write the partition function explicitly, we
define u(x) = U(x, 1 −m − NE;−K), where U denotes the Confluent Hypergeometric
function [45]. One can then write the partition sum as Z_{m,NE} = (−K)^{−NE} u(−m). The turnover rate is then given by wm = [−m u(1 − m)]/[u(−m)], which can be approximated by Equation (4).
Appendix B. Influx of metabolites
Ametabolic reaction in vivo can be described as turnover of an incoming flux of substrate
molecules, characterized by a Poisson process with rate c, into an outgoing flux. To find
the probability of having m substrate molecules we write down the Master equation,
∂t π(m) = [c(a − 1) + (â − 1)wm] π(m) = c[π(m−1) − π(m)] + [wm+1 π(m+1) − wm π(m)] , (B.1)
where we took the opportunity to define the lowering and raising operators a and â,
which – for any function h(n) – satisfy ah(n) = h(n−1), ah(0) = 0, and âh(n) = h(n+1).
The first term in this equation is the influx, and the second is the biochemical reaction.
The solution of this steady-state equation is of the form π(m) ∼ c^m/∏_{k=1}^{m} wk (up to a normalization constant), as can be verified by plugging it into the equation,
c π(m−1)/π(m) − c + wm+1 π(m+1)/π(m) − wm = wm − c + c − wm = 0 . (B.2)
Using the approximate form of wm, as given in (4), the probability π(m) takes the
form,
π(m) = binom(m + K + NE − 1, m) (1 − z)^{K+NE} z^m , (B.3)
as given in equation (5) of the main text.
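The recursion π(m) = π(m−1) c/wm can be checked numerically against the closed form (B.3); for integer K + NE the two coincide to machine precision. Parameter values are illustrative.

```python
# Check of the steady state of (B.1): the recursion pi(m) = pi(m-1) * c/w_m,
# with w_m = v m/(m + K + NE - 1), reproduces the closed form (B.3).
# K + NE is taken integer so the binomial coefficient is exact; parameter
# values are illustrative.
from math import comb

c, v, K_NE = 1.0, 2.0, 50       # K_NE stands for K + NE
z = c / v

M = 2000
pi = [1.0]
for m in range(1, M):
    w = v * m / (m + K_NE - 1)
    pi.append(pi[-1] * c / w)
Z = sum(pi)
pi = [p / Z for p in pi]

closed = [comb(m + K_NE - 1, m) * (1.0 - z) ** K_NE * z ** m for m in range(M)]
err = max(abs(a - b) for a, b in zip(pi, closed))
print(err)                       # agreement at the level of rounding error
```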
Appendix C. Directed linear pathway
We now derive our key result, equation (8) (the result has been derived previously in the context of queueing networks [46] and of mass-transport systems [47]). To this end
we write the Master equation for the joint probability function π ≡ π(m1, m2, · · · , mL),
∂t π = [ c(a1 − 1) + Σ_{i=1}^{L−1} (âi ai+1 − 1) w^(i)_{mi} + (âL − 1) w^(L)_{mL} ] π , (C.1)
which generalizes (B.1). As above, ai and âi are lowering and raising operators, acting
on the number of Si molecules. The first term in this equation is the incoming flux c
of the substrate, and the last term is the flux of end product. Let us try to solve the
steady-state equation by plugging in a solution of the form π(m1, m2, · · · , mL) = ∏_{i=1}^{L} gi(mi), yielding
c[ g1(m1−1)/g1(m1) − 1 ] + Σ_{i=1}^{L−1} [ w^(i)_{mi+1} gi(mi+1) g_{i+1}(m_{i+1}−1) / (gi(mi) g_{i+1}(m_{i+1})) − w^(i)_{mi} ] + [ w^(L)_{mL+1} gL(mL+1)/gL(mL) − w^(L)_{mL} ] = 0 . (C.2)
Motivated by the solution to (B.1), we try to satisfy this equation by choosing
gi(m) = c^m/∏_{k=1}^{m} w^(i)_k. With this choice we have g(m + 1)/g(m) = c/wm+1 and
g(m − 1)/g(m) = wm/c. It is now straightforward to verify that indeed
[w^(1)_{m1} − c] + Σ_{i=1}^{L−1} [w^(i+1)_{m_{i+1}} − w^(i)_{mi}] + [c − w^(L)_{mL}] = 0 . (C.3)
Finally, in our choice of gi(m) we replace w^(i)_m by the MM rate vimi/(mi + Ki), and find
that in fact gi(m) = πi(m), namely
π(m1, m2, . . . , mL) = ∏_{i=1}^{L} πi(mi) , (C.4)
as stated in (8).
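The telescoping cancellation that takes (C.2) to (C.3) can be verified numerically: with gi(m) = c^m/∏_k w^(i)_k, each bracket reduces via the ratio identities to a difference of rates, and the sum vanishes for every state. A sketch with illustrative parameters:

```python
# Numerical check of the telescoping step (C.2) -> (C.3). With
# g_i(m) = c^m / prod_{k<=m} w^(i)_k one has g(m+1)/g(m) = c/w_{m+1} and
# g(m-1)/g(m) = w_m/c, so the brackets of (C.2) reduce to
# [w^(1)_{m1} - c], [w^(i+1)_{m_{i+1}} - w^(i)_{m_i}], [c - w^(L)_{mL}],
# which cancel in the sum for every state. Parameters are illustrative.
c = 1.0
v = [2.0, 2.5, 3.0]
K = [20.0, 30.0, 25.0]
L = 3

def w(i, m):
    return v[i] * m / (m + K[i] - 1)

def residual(state):
    res = w(0, state[0]) - c                        # influx bracket
    for i in range(L - 1):                           # interior brackets
        res += w(i + 1, state[i + 1]) - w(i, state[i])
    res += c - w(L - 1, state[L - 1])                # outflux bracket
    return res

for state in [(3, 7, 1), (12, 0, 5), (40, 33, 27)]:
    print(state, residual(state))                    # zero for every state
```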
Appendix D. End-product inhibition
Equation (12) of the main text is a self-consistent equation for the steady-state flux c
through a pathway regulated via end-product inhibition. Using considerations analogous
to what led to the exact result on the product measure distribution for the cyclic
pathways, we conjecture that even for the present case of end-product inhibition, the
distribution function can still be approximated by the product measure (C.4) with the
form of the single node distributions given by (B.3). The flux c enters the calculation of
the average on the right-hand side through the probability function π(m). Solving this
equation for c yields the steady state current, and consequently determines the mean
occupancy and standard deviation of all intermediates.
To verify the validity of this conjecture, and to demonstrate its application, we
consider the case h = 1. In this case one can carry out the sum, and find
c = c0 Σ_{mL} [1/(1 + (mL/KI))] πL(mL) = c0 (1 − z)^{KL} 2F1(KI, KL; KI + 1; z) , (D.1)
with z = c/vL and 2F1 the hypergeometric function [45].
numerically, and plotted in supporting figure D2(a) for some values of KI and KL. Note
that predictions based on the product measure (lines) are in excellent agreement with
the results of numerical simulation (circles) for the different sets of parameters tried.
Figure D1. The steady-state distribution π(m) of a metabolite, that experiences
enzymatic reaction (with rate wm = vm/(m +K − 1)) and linear degradation (with
rate βm), as given by equation (10) of the main text. Here K = 100 and v = 2c0.
Results obtained from equation (D.1) can be used, for example, to compare the
flux that flows through the noisy pathway with the mean-field flux $c_{MF}$, obtained when
one ignores fluctuations in $m_L$, i.e.,
$$c_{MF} = \frac{c_0}{1 + (s_L/K_I)^h} . \qquad \text{(D.2)}$$
The fractional difference δc = (c− cMF)/cMF is plotted in supporting figure D2(b). The
results show that number fluctuations in the end-product always increase the flux in the
pathway since δc > 0 always. Quantitatively, this increase can easily be several percent.
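The sign of δc has a simple origin: $x \mapsto 1/(1+x)$ is convex, so Jensen's inequality gives $\langle 1/(1+m_L/K_I)\rangle \ge 1/(1+\langle m_L\rangle/K_I)$, and fluctuations can only increase the self-consistent flux. A quick numerical check (Python; $z$, $K_L$ and $K_I$ are illustrative values, and the occupancy law is the negative-binomial single-node distribution used above):

```python
import math

def occupancy_dist(z, KL, m_max=3000):
    """Single-node occupancy law pi(m) ∝ z^m (KL)_m / m!, i.e. the
    distribution for the MM rate w(m) = v m/(m + KL - 1) at z = c/v."""
    logw, lw = [], 0.0
    for m in range(m_max + 1):
        if m > 0:
            lw += math.log(z * (m + KL - 1.0) / m)   # pi(m)/pi(m-1)
        logw.append(lw)
    mx = max(logw)
    w = [math.exp(x - mx) for x in logw]
    Z = sum(w)
    return [x / Z for x in w]

z, KL, KI = 0.5, 100.0, 50.0                 # illustrative values
pmf = occupancy_dist(z, KL)
mean_m = sum(m * p for m, p in enumerate(pmf))
fluct = sum(p / (1.0 + m / KI) for m, p in enumerate(pmf))  # <1/(1+m/KI)>
mf = 1.0 / (1.0 + mean_m / KI)                              # 1/(1+<m>/KI)
excess = fluct / mf - 1.0    # fractional flux enhancement, > 0 by convexity
```

For these values the enhancement is of order one percent, consistent in magnitude with the "several percent" quoted above.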
For large $c_0$, a simplified expression can be derived by using an asymptotic expansion
of the hypergeometric function [45]. For example, when $K_I < K_L$,
$$(1-z)^{K_L}\, {}_2F_1(K_I, K_L; K_I+1; z) \sim \frac{\cdots}{1 + K_L - K_I} , \qquad \text{(D.3)}$$
which yields
$$\frac{c - c_{MF}}{c_{MF}} \sim \cdots . \qquad \text{(D.4)}$$
Thus the effect of end-product fluctuations on the current is enhanced by stronger
binding of the inhibitor (smaller KI), as one would expect. We note that obtaining
these predictions from Monte-Carlo simulation is rather difficult, given the fact that
one is interested here in sub-leading quantities.
Figure D2. Pathway with end-product inhibition. The influx rate is taken to be
c0/(1+mL/KI), and thus the steady-state flux is given by equation (12) of the main
text, with h = 1. (a) Assuming that different metabolites in the pathway remain
decoupled even in the presence of feedback regulation, (12) can be approximated
by (D.1). Numerical solutions of equation (D.1) (lines) are compared with Monte-
Carlo simulations (symbols). Values of parameters are chosen randomly such that
100 < Ki < 1000 and c < vi < 10c. For the data presented here, vL = 2.4c. We
find that (D.1) yields excellent prediction for the steady-state flux. (b) Neglecting
fluctuations altogether yields a mean-field approximation for the flux, cMF, given in (D.2).
For the same data of (a), we plot the fractional difference δc = (c − cMF)/c. We
find that steady-state flux is increased by fluctuations, and thus taking fluctuations
into account (even in an approximate manner) better predicts the steady-state flux.
References
[1] Zhou, T., Chen, L. & Aihara, K. (2005) Phys. Rev. Lett. 95, 178103.
[2] Colman-Lerner A, Gordon A, Serra E, Chin T, Resnekov O, Endy D, Pesce CG, Brent R. (2005)
Nature 437: 699-706.
[3] Raser, J.M. & O’shea, E.K. (2005) Science 309, 2010.
[4] Kaern, M., Elston, T.C., Blake, W.J. & Collins, J.J. (2005) Nat. rev. Gen. 6, 451.
[5] Kussel, E. & Leibler, S. (2005) Science 309, 2075.
[6] Suel, G.M., Garcia-Ojalvo, J., Liberman, L.M., & Elowitz M.B. (2006) Nature 440, 545.
[7] Fell, D. (1997) Understanding the Control of Metabolism (Portland Press, London, England).
[8] Swain, P.S., Elowitz, M.B. & Siggia E.D. (2002) Proc Natl Acad Sci U S A 99, 12795.
[9] Pedraza, J.M. & van Oudenaarden, A. (2005) Science 307, 1965.
[10] Golding, I., Paulsson, J., Zawilski, S.M., & Cox E.C. (2005) Cell 123, 1025.
[11] Thattai, M. & van Oudenaarden, A. (2002) Biophys J 82, 2943-50.
[12] Hooshangi, S., Thiberge, S. & Weiss, R. (2005) Proc Natl Acad Sci U S A 102, 3581-6.
[13] Hooshangi, S. & Weiss, R. (2006) Chaos 16, 026108.
[14] Paulsson, J. (2004) Nature 427, 415.
[15] Shibata, T. & Fujimoto, K., Proc Natl Acad Sci U S A 102, 331.
[16] Austin D.W., Allen M.S., McCollum J.M., Dar R.D., Wilgus J.R., Sayler G.S., Samatova N.F.,
Cox C.D., & Simpson M.L (2006) Nature 439, 608-11
[17] Tanase-Nicola, S., Warren, P.B., & ten Wolde P.R. (2006) Phys Rev Lett 97 068102.
[18] Sasai M. & Wolynes P.G. (2003) Proc Natl Acad Sci U S A 100, 2374-9.
[19] Michal, G. (1999) Biochemical Pathways (Wiley & Sons, New York).
[20] Berg, J.M., Tymoczko, J.L., & Stryer, L. (2006) Biochemistry, 6th edition (W. H. Freeman & Company,
New York)
[21] Taylor, H.M. & Karlin, S. (1998) An Introduction to Stochastic Modeling, 3rd edition (Academic
Press); Ross, S.M. (1983) Stochastic Processes (John Wiley & Sons).
[22] Liggett, T.M. (1985) Interacting Particle Systems (Springer-Verlag, New York).
[23] Levine, E. , Mukamel, D. & Schütz, G.M. (2005) J. Stat. Phys. 120, 759.
[24] Neidhardt, F.C. et al, eds. (1996) Escherichia coli and Salmonella: Cellular and Molecular Biology,
2nd ed. (Am. Soc. Microbiol., Washington, DC).
[25] McAdams, H.H. & Arkin, A. (1997) Proc. Natl. Acad. Sci. USA 94, 814.
[26] Xie, X.S. & Lu, H.P. (1999) J Biol. Chem. 274, 15967.
[27] English, B.P., Min, W., van Oijen, A.M., Lee, K.T., Luo, G., Sun, H., Cherayil, B.J., Kou, S.C.
& Xie., X.S. (2005) Nat. Chem. Bio. 2, 87.
[28] Arkin, A., Shen, P. & Ross, J. (1997) Science 277, 1275.
[29] Kou, S.C., Cherayil, B.J., Min, W., English, B.P. & Xie, X.S. (2005) J. Phys. Chem. B 109, 19068.
[30] Qiana, H. & Elson, E.L. (2002) Biophys. Chem. 101-102, 565.
[31] Elf, J., Paulsson, J., Berg, O.G. & Ehrenberg, M. (2003) Biophys. J 84, 154.
[32] Edwards, J.S., Covert, M. & Palsson B.O. (2002) Environ Microbiol. 4, 133.
[33] Evans, M. R. & Hanney, T. (2005) J. Phys. A 38, R195.
[34] Jackson, J.R. (1957) Operations Research 5, 58.
[35] Spitzer, F. (1970) Adv. Math. 5, 246.
[36] LaPorte, D.C., Walsh, K., & Koshland, D.E., Jr (1984) J. Biol. Chem. 259, 14068.
[37] Evans, M.R., Majumdar, S.N., & Zia, R.K.P. (2004) J. Phys. A: Math. Gen. 37, L275.
[38] Greenblatt, R.L., & Lebowitz, J.L. (2006) J. Phys. A: Math. Gen. 39, 1565–1573.
[39] Arkin, A. & Ross, J. (1995) J. Phys. Chem. 99, 970.
[40] Weckwerth, W. & Fiehn, O. (2002) Curr. Opin. Biotech. 13, 156.
[41] Steuer, R., Kurths, J., Fiehn, O. & Weckwerth, W.(2003) Bioinformatics 19, 1019.
[42] Vance, W., Arkin, A. & Ross, J. (2002) Proc. Natl. Acad. Sci. USA 99, 5816.
[43] Forster, D., Nelson, D., & Stephens, M. (1977) Phys. Rev. A 16, 732–749.
[44] Gillespie, D.T. (1977). J. Phys. Chem 81, 2340.
[45] M. Abramowitz, Handbook of Mathematical Functions (Dover, New York, 1972).
[46] Taylor, H.M. & Karlin, S. (1998) An Introduction to Stochastic Modeling, 3rd edition (Academic
Press); Ross, S.M. (1983) Stochastic Processes (John Wiley & Sons).
[47] Levine, E. , Mukamel, D. & Schütz, G.M. (2005) J. Stat. Phys. 120, 759.
0704.1668 | A new search for planet transits in NGC 6791 | arXiv:0704.1668v1 [astro-ph] 12 Apr 2007
Astronomy & Astrophysics manuscript no. n6791 © ESO 2021
August 31, 2021
A new search for planet transits in NGC 6791. ⋆
M. Montalto1, G. Piotto1, S. Desidera2, F. De Marchi1, H. Bruntt3,4, P.B. Stetson5
A. Arellano Ferro6, Y. Momany1,2 R.G. Gratton2, E. Poretti7,
A. Aparicio8, M. Barbieri2,9, R.U. Claudi2, F. Grundahl3, A. Rosenberg8.
1 Dipartimento di Astronomia, Università di Padova, Vicolo dell’Osservatorio 2, I-35122, Padova, Italy
2 INAF – Osservatorio Astronomico di Padova, Vicolo dell’ Osservatorio 5, I-35122, Padova, Italy
3 Department of Physics and Astronomy, University of Aarhus, Denmark
4 University of Sydney, School of Physics, 2006 NSW, Australia
5 Herzberg Institute of Astrophysics, Victoria, Canada
6 Instituto de Astronomı́a, Universidad Nacional Autónoma de México
7 INAF – Osservatorio Astronomico di Brera, Via E. Bianchi 46, 23807 Merate (LC), Italy
8 Instituto de Astrofisica de Canarias, 38200 La Laguna, Tenerife, Canary Islands, Spain
9 Dipartimento di Fisica, Università di Padova, Italy
ABSTRACT
Context. Searching for planets in open clusters allows us to study the effects of the dynamical environment on planet formation and evolution.
Aims. Considering the strong dependence of planet frequency on stellar metallicity, we studied the metal rich old open cluster NGC 6791 and
searched for close-in planets using the transit technique.
Methods. A ten-night observational campaign was performed using the Canada-France-Hawaii Telescope (3.6m), the San Pedro Mártir telescope (2.1m), and the Loiano telescope (1.5m). To increase the transit detection probability we also made use of the eight-night observational campaign of Bruntt et al. (2003). Adequate photometric precision for the detection of planetary transits was achieved.
Results. Should the frequency and properties of close-in planets in NGC 6791 be similar to those of planets orbiting field stars of similar metallicity, detailed simulations predict the presence of 2-3 transiting planets. Instead, we do not confirm the transit candidates proposed by Bruntt et al. (2003). The probability that the null detection is simply due to chance is estimated to be 3%-10%, depending on the metallicity assumed for the cluster.
Conclusions. Possible explanations of the null-detection of transits include: (i) a lower frequency of close-in planets in star clusters; (ii) a
smaller planetary radius for planets orbiting super metal rich stars; or (iii) limitations in the basic assumptions. More extensive photometry with
3–4m class telescopes is required to allow conclusive inferences about the frequency of planets in NGC 6791.
Key words. open cluster: NGC 6791 – planetary systems – Techniques: photometric
1. Introduction
During the last decade more than 200 extra-solar planets have
been discovered. However, our knowledge of the formation
and evolution of planetary systems remains largely incomplete.
One crucial consideration is the role played by the environment
in which planetary systems form and evolve.
More than 10% of the extra-solar planets discovered so far orbit stars that are members of multiple systems
Send offprint requests to: M. Montalto,
e-mail: [email protected]
⋆ Based on observation obtained at the Canada-France-Hawaii
Telescope (CFHT) which is operated by the National Research
Council of Canada, the Institut National des Sciences de l’Univers
of the Centre National de la Recherche Scientifique of France, and the
University of Hawaii, and on observations obtained at the San Pedro Mártir
2.1 m telescope (Mexico), and Loiano 1.5 m telescope (Italy).
(Desidera & Barbieri 2007). Most of these are binaries with
fairly large separations (a few hundred AU). However, in a few
cases the binary separation reaches about 10 AU (Hatzes et
al. 2003; Konacki 2005), indicating that planets can exist even
in the presence of fairly strong dynamical interactions.
Another very interesting dynamical environment is repre-
sented by star clusters, where the presence of nearby stars
or proto-stars may affect the processes of planet formation
and evolution in several ways. Indeed, close stellar encoun-
ters may disperse the proto-planetary disks during the fairly
short (about 10 Myr, e.g., Armitage et al. 2003) epoch of giant
planet formation or disrupt the planetary system after its forma-
tion (Bonnell et al. 2001; Davies & Sigurdsson 2001; Woolfson
2004; Fregeau et al. 2006). Another possible disruptive effect
is the strong UV flux from massive stars, which causes photo-
evaporation of dust grains and thus prevents planet formation
(Armitage 2000; Adams et al. 2004). These effects are expected
http://arxiv.org/abs/0704.1668v1
to depend on star density, being much stronger for globular
clusters (typical current stellar density ∼ 103 stars pc−3) than
for the much sparser open clusters (≤ 102 stars pc−3).
The recent discovery of a planet in the tight triple system
HD188753 (Konacki 2005) adds further interest to the search
for planets in star clusters. In fact, the small separation between
the planet host HD188753A and the pair HD188753BC (about
6 AU at periastron) makes it very challenging to understand
how the planet may have been formed (Hatzes & Wüchterl
2005). Portegies Zwart & McMillan (2005) propose that
the planet formed in a wide triple within an open cluster and
that dynamical evolution successively modified the configura-
tion of the system. Without observational confirmation of the
presence of planets in star clusters, such a scenario is purely
speculative.
On the observational side, the search for planets in star
clusters is a quite challenging task. Only the closest open clus-
ters are within reach of high-precision radial velocity surveys
(the most successful planet search technique). However, the
activity-induced radial velocity jitter limits significantly the
detectability of planets in clusters as young as the Hyades
(Paulson et al. 2004). Hyades red giants have a smaller activity
level, and the first planet in an open cluster has been recently
announced by Sato et al. (2007), around ǫ Tau.
The search for photometric transits appears a more suitable
technique: indeed it is possible to monitor simultaneously a
large number of cluster stars. Moreover, the target stars may
be much fainter. However, the transit technique is mostly sen-
sitive to close-in planets (orbital periods ≤ 5 days).
Space and ground-based wide-field facilities were also used
to search for planets in the globular clusters 47 Tucanae and
ω Centauri. These studies (Gilliland et al. 2000; Weldrake et
al. 2005; Weldrake et al. 2006) reported no planet detections.
This seems to indicate that planetary systems are at
least one order of magnitude less common in globular clusters
than in the solar vicinity. The lack of planets in 47 Tuc and ω Cen
may be due either to the low metallicity of the clusters (since
planet frequency around solar type stars appears to be a rather
strong function of the metallicity of the parent star: Fischer &
Valenti 2005; Santos et al. 2004), or to environmental effects
caused by the high stellar density (or both).
One planet has been identified in the globular cluster M4
(Sigurdsson et al. 2003), but this is a rather peculiar case, as
the planet is in a circumbinary orbit around a system including
a pulsar and it may have formed in a different way from the
planets orbiting solar type stars (Beer et al. 2004).
Open clusters are not as dense as globular clusters. The dy-
namical and photo-evaporation effects should therefore be less
extreme than in globular clusters. Furthermore, their metallic-
ity (typically solar) should, in principle, be accompanied by a
higher planet frequency.
In the past few years, some transit searches were specif-
ically dedicated to open clusters: see e.g. von Braun et
al. (2005), Bramich et al. (2005), Street et al. (2003), Burke
et al. (2006), Aigrain et al. (2006) and references therein.
However, in a typical open cluster of Solar metallicity with
∼ 1000 cluster members, less than one star is expected to show
a planetary transit. This estimate rests on the assumption that the
planet frequency in open clusters is similar to that observed for
nearby field stars 1. Considering the unavoidable transit-detection
losses due to the observing window and photometric errors,
it turns out that the probability of success of such efforts
is fairly low unless several clusters are monitored 2.
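The expectation can be made concrete with the numbers from the footnote: a close-in planet frequency of 0.75% and a ∼10% geometric transit probability give, for an illustrative cluster of 1000 monitored members, less than one transiting planet:

```python
# Back-of-the-envelope expectation for a single open cluster, using the
# figures quoted in the text: ~0.75% of stars host a planet with P < 7 d
# (Butler et al. 2000), with a ~10% geometric transit probability.
n_members = 1000        # cluster members monitored (illustrative)
f_planet = 0.0075       # close-in planet frequency
p_geom = 0.10           # geometric probability of a transiting orientation
expected = n_members * f_planet * p_geom   # ~0.75 transiting planets
```

This ignores detection losses from the observing window and photometric errors, which only lower the number further.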
On the other hand, the planet frequency might be higher for
open clusters with super-solar metallicities. Indeed, for [Fe/H]
between +0.2 and +0.4 the planet frequency around field stars
is 2-6 times larger than at solar metallicity. However, only a
few clusters have been reported to have metallicities above
[Fe/H]= +0.2. The most famous is NGC 6791, a quite massive
cluster that is at least 8 Gyr old (Stetson et al. 2003; King et
al. 2005, and Carraro et al. 2006). As estimated by different au-
thors, its metallicity is likely above [Fe/H]=+0.2 (Taylor 2001)
and possibly as high as [Fe/H]=+0.4 (Peterson et al. 1998). The
most recent high dispersion spectroscopy studies confirmed the
very high metallicity of the cluster ([Fe/H]=+0.39, Carraro et
al. 2006; [Fe/H]=+0.47, Gratton et al. 2006). Its old age im-
plies the absence of significant photometric variability induced
by stellar activity. Furthermore, NGC 6791 is a fairly rich clus-
ter. All these facts make it an almost ideal target.
NGC 6791 has been the target of two photometric cam-
paigns aimed at detecting planets transits. Mochejska et
al. (2002, 2005, hereafter M05) observed the cluster in the R
band with the 1.2 m Fred Lawrence Whipple Telescope dur-
ing 84 nights, over three observing seasons (2001-2003). They
found no planet candidates, while the expected number of de-
tections considering their photometric precision and observing
window was ∼ 1.3. Bruntt et al. (2003, hereafter B03) observed
the cluster for 8 nights using ALFOSC at NOT. They found 10
transit candidates, most of which (7) were likely due to instrumental effects.
Nearly continuous, multi-site monitoring lasting several
days could strongly enhance the transit detectability. This idea
inspired our campaigns for multi-site transit planet searches in
the super metal rich open clusters NGC 6791 and NGC 6253.
This paper presents the results of the observations of the cen-
tral field of NGC 6791, observed at CFHT, San Pedro Mártir
(SPM), and Loiano. We also made use of the B03 data-set [obtained
at the Nordic Optical Telescope (NOT) in 2001] and reduced
it in the same way as our three data-sets. The analysis of the external
fields, containing mostly field stars, and of the NGC 6253
campaign, will be presented elsewhere.
The outline of the paper is the following: Sect. 2 presents
the instrumental setup and the observations. We then describe
the reduction procedure in Sect. 3; the resulting photometric
precision at the four sites is presented in Sect. 4. The selec-
tion of cluster members is discussed in Sect. 5. Then, in Sect. 6,
we describe the adopted transit detection algorithm. In Sect. 7
we present the simulations performed to establish the transit
detection efficiency (TDE) and the false alarm rate (FAR) of
1 0.75% of stars with planets with period less than 7 days (Butler et
al. 2000), and a 10% geometric probability to observe a transit.
2 A planet candidate was recently reported by Mochejska et
al. (2006) in NGC 2158, but the radius of the transiting object is larger
than any planet known up to now (∼ 1.7 RJ). The companion is then
most likely a very low mass star.
the algorithm for our data-sets. Sect. 8 illustrates the different
approaches that we followed in the analysis of the data. Sect. 9
gives details about the transit candidates.
In Sect. 10, we estimate the expected planet frequency
around main sequence stars of the cluster, and the expected
number of detectable transiting planets in our data-sets. In
Sect. 11, we compare the results of the observations with those
of the simulations, and discussed their significance. In Sect. 12
we discuss the different implications of our results, and, in
Sect. 13, we make a comparison with other transit searches to-
ward NGC 6791. In Sect. 14 we critically analyze all the ob-
servations dedicated to the search for planets in NGC 6791 so
far, and propose future observations of the cluster, and finally,
in Sect. 15, we summarize our work.
2. Instrumental setup and observations
The observations were acquired during a ten-consecutive-day
observing campaign, from July 4 to July 13, 2002. Ideally, one
should monitor the cluster nearly continuously. For this reason,
we used three telescopes, one in Hawaii, one in Mexico, and
the third one in Italy. In Table 1 we show a brief summary of
our observations.
In Hawaii, we used the CFHT with the CFH12K detector 3,
a mosaic of 12 CCDs of 2048×4096 pixels, for a total field of
view of 42×28 arcmin, and a pixel scale of 0.206 arcsec/pixel.
We acquired 278 images of the cluster in the V filter. The seeing
conditions ranged from 0.′′6 to 1.′′9, with a median of
1.′′0. Exposure times were between 200 and 900 sec, with a
median value of 600 sec. The observers were H. Bruntt and
P.B. Stetson.
In San Pedro Mártir, we used the 2.1m telescope, equipped
with the Thomson 2k detector. However the data section of the
CCD corresponded to a 1k × 1k pixel array. The pixel scale was
0.35 arcsec/pixel, and therefore the field of view (∼ 6 arcmin2)
contained just the center of the cluster, and was smaller than
the field covered with the other detectors. We made use of 189
images taken between July 6, 2002 and July 13, 2002. During
the first two nights the images were acquired using the focal re-
ducer, which increased crowding and reduced our photometric
accuracy. All the images were taken in the V filter with ex-
posure times of 480 − 1200 sec (median 660 sec), and seeing
between 1.′′1 and 2.′′1 (median 1.′′4). Observations were taken
by A. Arellano Ferro.
In Italy, we used the Loiano 1.5m telescope4 equipped with
BFOSC + the EEV 1300×1348B detector. The pixel scale was
0.52 arcsec/pixel, for a total field coverage of 11.5 arcmin2. We
observed the target during four nights (2002 July 6−9). We ac-
quired and reduced 63 images of the cluster in the V and Gunn
i filters (61 in V , 2 in i). The seeing values were between 1.′′1
3 www.cfht.hawaii.edu/Instruments/Imaging/CFH12K/
4 The observations were originally planned at the Asiago
Observatory using the 1.82 m telescope + AFOSC. However, a ma-
jor failure of instrument electronics made it impossible to perform
these observations. We obtained four nights of observations at the
Loiano Observatory, thanks to the courtesy of the scheduled observer
M. Bellazzini and of the Director of Bologna Observatory F. Fusi
Pecci.
and 4.′′3 arcsec, with a median value of 1.′′4 arcsec. Exposure
times ranged between 120 and 1500 sec (median 1080 sec).
The observer was S. Desidera.
We also make use of the images taken by B03 in 2001.
We obtained these images from the Nordic Optical Telescope
(NOT) archive. As explained in B03, these data covered
eight nights between July 9 and 17, 2001. The detector was
ALFOSC5, a 2k×2k thinned Loral CCD with a pixel scale of
0.188 arcsec/pixel, yielding a total field of view of 6.5 × 6.5 arcmin.
Images were taken in the I and V filters with a median seeing
of 1 arcsec. We used only the images of the central part of the
cluster (which were the majority) excluding those in the exter-
nal regions. In total we reduced 227 images in the V filter and
389 in the I filter.
It should be noted that ALFOSC and BFOSC are focal reducers,
hence light concentration introduces a variable background.
These effects can be important when combined with
flat-fielding errors. In order to reduce these undesired effects, the
images were acquired while trying to maintain the stars in the
same positions on the CCDs. Nevertheless, the precision of
this pointing procedure was different for the four telescopes:
the median values of the telescope shifts are of 0.′′5, 0.′′5, 3.′′4
and 2.′′1, respectively for Hawaii, NOT, SPM, and Loiano. This
means that while for the CFHT and the NOT the median shift
was at a sub-seeing level (half of the median seeing), for the
other two telescopes it was respectively of the order of 2.4
and 1.5 times the median seeing. Hence, it is possible that flat-
fielding errors and possible background variation have affected
the NOT, SPM, and Loiano photometry, but the effects on the
Hawaii photometry are expected to be smaller.
Bad weather conditions and the limited time allocated at
Loiano Observatory caused incomplete coverage of the sched-
uled time interval. Moreover we did not use the images coming
from the last night (eighth night) of observation in La Palma
with the NOT because of bad weather conditions. Our observ-
ing window, defined as the interval of time during which obser-
vations were carried out, is shown in Tab. 2 for Hawaii, SPM
and Loiano observations and in Tab. 3 for La Palma observa-
tions.
3. The reduction process
3.1. The pre-reduction procedure
For the San Pedro, Loiano and La Palma images, the pre-
reduction was done in a standard way, using IRAF routines6.
The images from Hawaii came already reduced via the
ELIXIR software7.
5 ALFOSC is owned by the Instituto de Astrofisica de Andalucia
(IAA) and operated at the Nordic Optical Telescope under agreement
between IAA and the NBIfAFG of the Astronomical Observatory of
Copenhagen.
6 IRAF is distributed by the National Optical Astronomy
Observatory, which is operated by the Association of Universities for
Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
7 http://www.cfht.hawaii.edu/Instruments/Elixir
Table 1. Summary of the observations taken during 4-13 July, 2002 in Hawaii, San Pedro Màrtir, Loiano and from 9-17 July, 2001 in La Palma
(by B03).
Hawaii San Pedro Mártir Loiano NOT
N. of Images 278 189 63 227(V), 389(I)
Nights 8 8 4 8
Scale (arcsec/pix) 0.21 0.35 0.52 0.188
FOV (arcmin) 42 x 28 6 x 6 11.5 x 11.5 6.5 x 6.5
Table 2. The observing window relative to the July 2002 observations.
The number of observing hours is given for each night. The last line
shows the total number of observing hours for each site.
Night Hawaii SPM Loiano
1st 3.58 − −
2nd 3.88 − −
3rd 2.68 6.31 3.58
4th 7.56 6.61 5.21
5th 5.23 6.58 6.08
6th 8.30 6.86 5.45
7th − 3.20 −
8th − 7.03 −
9th 8.34 7.27 −
10th 8.37 4.15 −
Total 47.94 48.01 20.32
Table 3. The observing window relative to the July 2001 observations.
Night La Palma
1st 5.45
2nd 7.06
3rd 8.04
4th 7.61
5th 7.57
6th 7.82
7th 7.72
8th 2.41
Total 53.68
3.2. Reduction strategies
The data-sets described in Sec. 2 were reduced with three dif-
ferent techniques: aperture photometry, PSF fitting photometry
and image subtraction. An accurate description of these tech-
niques is given in the next Sections. Our goal was to compare
their performances to see if one of them performed better than
the others. For aperture and PSF fitting photometry
we used the DAOPHOT (Stetson 1987) package. In
particular, the aperture photometry routine was slightly different
from the one commonly used in DAOPHOT and was provided
by P. B. Stetson: it performed the photometry after subtracting
all the neighboring stars of each target star. Image subtraction was
performed by means of the ISIS2.2 package (Alard & Lupton
1998), except that the final photometry on the subtracted
images was performed with the DAOPHOT aperture routine,
for the reasons described in Sect. 3.4.
3.3. DAOPHOT/ALLFRAME reduction: aperture and
PSF fitting photometry
The DAOPHOT/ALLFRAME reduction package has been ex-
tensively used by the astronomical community and is a very
well tested reduction technique. The idea behind this stellar
photometry package is to model the PSF of each image
following a semi-analytical approach, and to fit the
derived model to all the stars in the image by means of a
least-squares method. After some tests, we chose to calculate a vari-
able PSF across the field (quadratic variability). We selected the
first 200 brightest, unsaturated stars in each frame, and calcu-
lated a first approximate PSF from them. We then rejected the
stars to which DAOPHOT assigned anomalously high fitting χ
values. After having cleaned the PSF star list, we re-calculated
the PSF. This procedure was iterated three times in order to
obtain the final PSF for each image.
We then used DAOMATCH and DAOMASTER (Stetson
1992) in order to calculate the coordinate transformations
among the frames, and with MONTAGE2 we created a refer-
ence image by adding up the 50 best seeing images. We used
this high S/N image to create a master list of stars, and applied
ALLFRAME (Stetson 1994) to refine the estimated star posi-
tions and magnitudes in all the frames. We applied a first selec-
tion on the photometric quality of our stars by rejecting all stars
with SHARP and CHI parameters (Stetson 1987) deviating by
more than 1.5 times the RMS of the distribution of these pa-
rameters from their mean values, both calculated in bins of 0.1
magnitudes. About 25% of the stars were eliminated by this se-
lection. This was the PSF fitting photometry we used in further
analysis. The aperture photometry with neighbor subtraction
was obtained with a new version of the PHOT routine (devel-
oped by P. B. Stetson). We used as centroids the same values
used for ALLFRAME. The adopted apertures were equal to the
FWHM of the specific image, and after some tests we set the
annular region for the calculation of the sky level at a distance
< r < 2.5
(both for the ALLFRAME and for the aperture
photometry).
Finally, we used again DAOMASTER for the cross-
correlation of the final star lists, and to prepare the light curves.
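The quality cut described above — rejecting stars whose SHARP and CHI deviate by more than 1.5 times the RMS from the bin mean, in 0.1 mag bins — can be sketched as follows (Python; the data layout is hypothetical, only the cut logic follows the text):

```python
from statistics import mean, stdev

def quality_cut(stars, bin_width=0.1, n_rms=1.5):
    """Keep stars whose SHARP and CHI values lie within n_rms times the
    RMS of the bin mean, computed in magnitude bins of width bin_width.
    `stars` is a list of dicts with keys 'mag', 'sharp' and 'chi'
    (a hypothetical layout for illustration)."""
    bins = {}
    for s in stars:
        bins.setdefault(int(s['mag'] / bin_width), []).append(s)
    kept = []
    for members in bins.values():
        keep = set(range(len(members)))
        for key in ('sharp', 'chi'):
            vals = [s[key] for s in members]
            if len(vals) < 2:
                continue
            mu, rms = mean(vals), stdev(vals)
            for i, v in enumerate(vals):
                if rms > 0 and abs(v - mu) > n_rms * rms:
                    keep.discard(i)
        kept.extend(members[i] for i in sorted(keep))
    return kept
```

In practice the bin statistics would be computed robustly (e.g. iteratively), but the principle is the same.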
3.4. Image Subtraction photometry
In the last years the Image Subtraction technique has been
largely used in photometric reductions. This method, first implemented
in the software ISIS, does not assume any specific
functional shape for the PSF of each image. Instead it models
the kernel that convolves the PSF of the reference image
to match the PSF of a target image. The reference image is
convolved by the computed kernel and then subtracted from
the image. The photometry is then done on the resulting differ-
ence image. Isolated stars are not required in order to model
the kernel. This technique has rapidly gained considerable
popularity in the astronomical community. Since its advent,
it has appeared particularly well suited to the search for variable
stars, and it has proved to be very effective in extremely
crowded fields like in the case of globular clusters (e.g. Olech
et al. 1999, Kaluzny et al. 2001, Clementini et al. 2004, Corwin
et al. 2006). This approach has also been used extensively
in long photometric surveys devoted to the search for
extrasolar planet transits (e.g. Mochejska et al. 2002, 2004).
We used the standard reduction routines in the ISIS2.2
package. First, the images were interpolated onto the reference
system of the best-seeing image. Then we created a reference
image from the 50 images with the best seeing. We performed
different tests in order to set the best parameters for the subtraction,
checking the subtracted images to find the combination
with the lowest residuals. In the end, we decided to sub-divide the
images into four sub-regions, and to apply a kernel and a sky
background variable at the second order.
Using ALLSTAR, we built up a master list of stars from
the reference image. As in Bruntt et al. (2003), we were not
able to obtain reliable photometry using the standard photometry
routine of ISIS. We used the DAOPHOT aperture pho-
tometry routine and slightly modified it in order to accept the
subtracted images. Aperture, and sky annular region were set
as for the aperture and PSF photometry. Then the magnitude of
the stars in the subtracted images was obtained by means of the
following formula:
$$m_i = m_{ref} - 2.5 \log \left( \frac{N_{ref} - N_i}{N_{ref}} \right) ,$$
where $m_i$ is the magnitude of a generic star in the $i$-th subtracted image,
$m_{ref}$ is the magnitude of the corresponding star in the reference image,
$N_{ref}$ are the counts obtained in the reference image, and $N_i$ are the
counts of the star in the $i$-th subtracted image.
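Assuming, as in the formula above, that the difference image stores reference-minus-frame counts, the conversion to magnitudes can be sketched as (Python; names and numbers are illustrative):

```python
import math

def diff_image_mag(m_ref, n_ref, n_diff):
    """Magnitude of a star in frame i, given its reference-frame magnitude
    m_ref, its reference-frame counts n_ref, and its counts n_diff on the
    subtracted image (assumed to store reference-minus-frame counts)."""
    flux = n_ref - n_diff            # the star's flux in frame i
    if flux <= 0:
        raise ValueError("non-positive flux in frame")
    return m_ref - 2.5 * math.log10(flux / n_ref)

# A star fading by 1% leaves n_diff = 0.01 * n_ref on the difference image
# and comes out ~0.011 mag fainter than in the reference frame:
m_i = diff_image_mag(m_ref=18.0, n_ref=50000.0, n_diff=500.0)
```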
3.5. Zero point correction
For the PSF fitting and aperture photometry, we corrected the
light curves taking into account the magnitude zero point (the
mean difference in stellar magnitude) between a generic target
image and the best-seeing image. This was done by means of
DAOMASTER, and can be considered a first, crude correction
of the light curves. Image subtraction handles these first-order
corrections automatically, and thus the resulting light curves
were already free of large zero-point offsets.
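A minimal sketch of this first-order correction (Python; the stars-by-frames layout and the use of the mean offset over common stars are assumptions for illustration):

```python
from statistics import mean

def zero_point_correct(light_curves, ref_frame=0):
    """Remove per-frame magnitude zero points: for each frame, subtract
    the mean magnitude difference (over all stars) with respect to a
    reference frame, e.g. the best-seeing image.
    `light_curves` is a stars x frames matrix of magnitudes."""
    n_frames = len(light_curves[0])
    zps = [mean(lc[j] - lc[ref_frame] for lc in light_curves)
           for j in range(n_frames)]
    return [[lc[j] - zps[j] for j in range(n_frames)]
            for lc in light_curves]
```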
Nevertheless, important residual correlations persisted in
the light curves, and it was necessary to apply specific, and
more refined post-reduction corrections, as explained in the
next Section.
3.6. The post reduction procedure
In general, and for long time series photometric studies, it has
been commonly recognized that regardless of the adopted re-
duction technique, important correlations between the derived
magnitudes and various parameters like seeing, airmass, expo-
sure time, etc. persist in the final photometric sequences. As put
into evidence by Pont et al. (2006), the presence of correlated
noise in real light curves, (red noise), can significantly reduce
the photometric precision that can be obtained, and hence re-
duce transit detectability. For example ground-based photomet-
ric measurements are affected by color-dependent atmospheric
extinction. This is a problem since, in general, photometric sur-
veys employ only one filter and no explicit colour information
is available. To take into account these effects, we used the
method developed by Tamuz et al. (2005). The software was
provided by the same authors, and an accurate description of it
can be found in the referred paper.
One of the critical points in this algorithm regarded the
choice of systematic effects to be removed from the data. For
each systematic effect, the algorithm performed an iteration,
after which it removed that effect and passed to the next.
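The alternating step at the heart of the Tamuz et al. (2005) method (often called SysRem) fits a per-star coefficient c_i and a per-image term a_j to the magnitude residuals r_ij by least squares. The sketch below is our own unweighted simplification with hypothetical names; the published algorithm weights each point by its photometric error:

```python
def remove_one_systematic(residuals, n_iter=10):
    """Fit and subtract one systematic effect r_ij ~ c_i * a_j from a
    (stars x images) table of magnitude residuals.  Unweighted sketch
    of the Tamuz et al. (2005) alternating least-squares iteration."""
    n_img = len(residuals[0])
    a = [float(j + 1) for j in range(n_img)]  # non-degenerate starting guess
    for _ in range(n_iter):
        # best per-star coefficient c_i for fixed per-image terms a_j
        c = [sum(r_j * a_j for r_j, a_j in zip(row, a)) /
             sum(a_j * a_j for a_j in a) for row in residuals]
        # best per-image term a_j for fixed per-star coefficients c_i
        c2 = sum(ci * ci for ci in c)
        a = [sum(c[i] * residuals[i][j] for i in range(len(residuals))) / c2
             for j in range(n_img)]
    # residuals with this systematic effect removed
    return [[residuals[i][j] - c[i] * a[j] for j in range(n_img)]
            for i in range(len(residuals))]
```

Removing several effects amounts to running this on the cleaned residuals repeatedly, once per eigenvector.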
To verify the efficiency of this procedure (and establish the
number of systematic effects to be removed) we performed the
following simulations. We started from a set of 4000 artificial stars with constant magnitudes. These artificial light curves were created with a realistic modeling of the noise, accounting not only for the intrinsic noise of the sources, of the sky background, and of the detector electronics, but also for the photometric reduction algorithm itself, as described in Sec. 7.1. Thus, they are fully representative of the systematics present in our data-set. At this point we also added 10 light curves including transits that were randomly distributed inside the observing window. These spanned a range of (i) depths, from a few milli-magnitudes to around 5 percent, (ii) durations and (iii) periods, respectively of a few hours to days, according to our assumptions on the distributions of these parameters for planetary transits, as accounted for in Sec. 7.3.
In a second step, we applied the Tamuz et al. (2005) algo-
rithm to the entire light curves data-set. For a deeper under-
standing of the total noise effects and the transit detection ef-
ficiency, we progressively increased the number of systematic
effects to be removed (eigenvectors in the analysis described
by Tamuz et al. 2005). Typically, we started with 5 eigenvectors and
increased it to 30.
Repeated experiments showed no significant RMS improvement after ten iterations. The final RMS was 15%-20% lower than the original RMS. Thus, the number of eigenvectors was set to 10. In no case were the added transits removed from the light curves, and the transit depths remained unaltered.
We conclude that this procedure, while reducing the RMS
and providing a very effective correction for systematic effects,
did not influence the uncorrelated magnitude variations associ-
ated with transiting planets.
6 M. Montalto et al.: A new search for planet transits in NGC 6791.
4. Definition of the photometric precision
To compare the performances of the different photometric al-
gorithms we calculated the Root Mean Square, (RMS ), of the
photometric measurements obtained for each star, which is de-
fined as:
RMS = sqrt[ Σ_i (I_i − <I>)² / (N − 1) ]

where I_i is the brightness measured in the generic i-th image for a particular target star, <I> is the mean star brightness in the entire data-set, and N is the number of images in which the
star has been measured. For constant stars, the relative variation
of brightness is mainly due to the photometric measurement
noise. Thus, the RMS (as defined above) is equal to the mean
N/S of the source. In order to allow the detection of transiting
jovian-planets eclipses, whose relative brightness variations are
of the order of 1%, the RMS of the source must be lower than
this level.
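This estimator is the sample standard deviation of a light curve, and transcribes directly:

```python
import math

def light_curve_rms(mags):
    """RMS of a star's light curve about its mean:
    sqrt(sum((I_i - <I>)^2) / (N - 1))."""
    n = len(mags)
    mean = sum(mags) / n
    return math.sqrt(sum((m - mean) ** 2 for m in mags) / (n - 1))
```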
4.1. PSF photometry against aperture photometry
The first comparison we performed was that between aper-
ture photometry (having neighboring stars subtracted) and
PSF fitting photometry. We started with aperture photometry.
Figure 1 shows a comparison of the RMS dispersion of the light curves obtained with the new PHOT software with respect to the RMS of the PSF fitting photometry for the different sites. We also show the theoretical noise, which was estimated considering the contributions to the noise coming from the photon noise of the source and of the sky background, as well as the instrumental noise (see Kjeldsen & Frandsen 1992, formula 31). In Figs. 1-3, we also separated
the contribution of the source’s Poisson noise from that of the
sky (short dashed line) and of the detector noise (long dashed
line). The total theoretical noise is represented as a solid line.
It is clear that the data-sets from the different telescopes gave
different results. In the case of the CFHT and SPM data-sets,
aperture photometry does not reach the same level of preci-
sion as PSF fitting photometry (for both bright and faint
sources). Moreover, it appears that the RMS of aperture pho-
tometry reaches a constant value below V ∼ 18.5 for CFHT
data and around V ∼ 17.5 for SPM data, while for PSF fitting
photometry the RMS continues to decrease. For Loiano data,
and with respect to PSF photometry, aperture photometry pro-
vides a smaller RMS in the light curve, in particular for bright
sources. The NOT observations on the other hand show that
the two techniques are almost equivalent. Leaving aside these
differences, it is clear that the CFHT provides the best photo-
metric precision, and this is due to the larger telescope diam-
eter, the smaller telescope pixel scale (0.206 arcsec/pixel, see
Table 1), and the better detector performances at the CFHT. For
this data-set, the photometric error remains smaller than 0.01
mag, from the turn-off of the cluster (∼ 17.5) to around mag-
nitude V = 21, allowing the search for transiting planets over a
magnitude range of about 3.5 magnitudes, (in fact, it is possible
to go one magnitude deeper because of the expected increase
in the transit depth towards the fainter cluster stars, see Sec. 5
for more details). Loiano photometry barely reaches 0.01 mag
photometric precision even for the brightest stars, a photomet-
ric quality too poor for the purposes of our investigation. The
search for planetary transits is limited to almost 2 magnitudes
below the turn-off for SPM data (in particular with the PSF fit-
ting technique) and to 1.5 magnitude below the turn-off for the
NOT data. In any case, the photometric precision for the SPM
and NOT data-sets reaches the 0.004 mag level for the brightest
stars, while, for the CFHT, it reaches the 0.002 mag level.
It is clear that both the PSF fitting photometry and the aper-
ture photometry tend to have larger errors with respect to the
expected error level. This effect is much clearer for Loiano,
SPM and NOT rather than for Hawaii photometry. As explained
by Kjeldsen & Frandsen (1992), and more recently by Hartman
et al. (2005), the PSF fitting approach in general results in
poorer photometry for the brightest sources with respect to the
aperture photometry. But, for our data-sets, this was true only
for the case of Loiano photometry, as demonstrated above. The
aperture photometry routine in DAOPHOT returns for each
star, along with other information, the modal sky value asso-
ciated with that star, and the rms dispersion (σsky) of the sky
values inside the sky annular region. So, we chose to calcu-
late the error associated with the random noise inside the star’s
aperture with the formula:
σ_Aperture = sqrt( σ²_sky × Area )   (3)
where Area is the area (in pixels2) of the region inside which
we measure the star’s brightness. This error automatically takes
into account the sky Poissonian noise, instrumental effects like
the Read Out Noise, (RON), or detector non-linearities, and
possible contributions of neighbor stars. To calculate this error,
we chose a representative mean-seeing image, and subdivided
the magnitude range 17.5 < V < 22.0 into
nine bins of 0.5 mag. Inside each of these bins, we took the
minimum value of the stars’ sky variance as representative of
the sky variance of the best stars in that magnitude bin. We
over-plot this contribution in Fig. 4 which is relative to the San
Pedro photometry. This error completely accounts for the ob-
served photometric precision. So, the variance inside the star’s
aperture is much larger than what is expected from simple pho-
ton noise calculations. This can be the effect of neighbor stars
or of instrumental problems. For CFHT photometry, as we have
good seeing conditions and an optimal telescope scale, crowd-
ing plays a less important role. Concerning the other sites, we
noted that for Loiano the crowding is larger than for SPM and
NOT, and this could explain the lower photometric precision of
Loiano observations, along with the smaller telescope diameter.
For NOT photometry, instead, the crowding should be larger
than for San Pedro (since the scale of the telescope is larger)
while the median seeing conditions are comparable for the two
data-sets, as shown in Sect 2. Therefore this effect should be
more evident for NOT rather than for SPM, but this is not the
case, at least for what concerns aperture photometry. For PSF
photometry, as it is seen in Fig. 2, the NOT photometric preci-
sion appears more scattered than the SPM photometry.
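The aperture-noise estimate of Eq. (3) is simple to reproduce; the circular-aperture area below is our own assumption, since Eq. (3) only requires the Area in pixels²:

```python
import math

def aperture_noise(sigma_sky, radius_px):
    """Random-noise estimate inside a circular aperture, Eq. (3):
    sigma_Aperture = sqrt(sigma_sky^2 * Area), with sigma_sky the rms
    of the sky values and Area the aperture area in pixels^2."""
    area = math.pi * radius_px ** 2
    return math.sqrt(sigma_sky ** 2 * area)
```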
Fig. 1. Comparison of the photometric precision for aperture photometry with neighbor subtraction (Ap) and for PSF fitting photometry (PSF)
as a function of the apparent visual magnitude for CFHT, SPM, Loiano and NOT images. The short dashed line indicates the N/S considering
only the star photon noise, the long dashed line is the N/S due to the sky photon noise and the detector noise. The continuous line is the total theoretical noise.
We are forced to conclude that poor flat fielding, optical
distortions, non-linearity of the detectors and/or presence of
saturated pixels in the brightest stars must have played a sig-
nificant role in setting the aperture and PSF fitting photometric
precisions.
4.2. PSF photometry against image subtraction
photometry
Applying the image subtraction technique we were able to
improve the photometric precision with respect to that ob-
tained by means of the aperture photometry and the PSF fitting
techniques. This appears evident in Fig. 2, in which the im-
age subtraction technique is compared to the PSF fitting tech-
nique. Again, the best overall photometry was obtained for the
CFHT , for the reasons explained in the previous subsection.
For the image subtraction reduction, the photometric precision
surpassed the 0.001 mag level for the brightest stars in the
CFHT data-set, and for the other sites it was around 0.002 mag
(for the NOT ) or better (for S PM and Loiano). This clearly al-
lowed the search for planets in all these different data-sets. In
this case, it was possible to include also the Loiano observa-
tions (up to 2 magnitudes below the turn-off), and, for the other
sites, to extend by about 0.5-1 mag, the range of magnitudes
over which the search for transits was possible, (see previous
Section).
The reason for which image subtraction gave better results
could be that it is more suitable for crowded regions (such as the center of the cluster), because it does not need isolated stars in order to calculate the convolution kernel, while the subtraction of stars by means of PSF fitting can give rise to higher residuals,
Fig. 2. Comparison of the photometric precision for image subtraction (ISIS) and for psf fitting photometry (PSF) as function of the apparent
visual magnitude for CFHT, SPM, Loiano, and NOT images.
because it is much more difficult to obtain a reliable PSF from
crowded stars.
4.3. The best photometric precision
Given the results of the previous comparisons, we decided to
adopt the photometric data set obtained with the image sub-
traction technique. Figure 3 shows the photometric precision
that we obtained for the four different sites. The photometric
precision is very close to the theoretical noise for all the data-
sets. The NOT data-set has a lower photometric precision with
respect to SPM and even to Loiano, in particular for the bright-
est stars. We observed that the mean S/N for the NOT images
is lower than for the other sites because of the larger number of
images taken (and consequently of their lower exposure times
and S/N), see Tab. 1.
5. Selection of cluster members
To detect planetary transits in NGC 6791 we selected the proba-
ble main sequence cluster members as follows. Calibrated mag-
nitudes and colors were obtained by cross-correlating our pho-
tometry with the photometry by Stetson et al. (2003), based on
the same NOT data-set used in the present paper. Then, as done
in M05, we considered 24 bins of 0.2 magnitudes in the interval
17.3 < V < 22.1. For each bin, we calculated a robust mean
of all (B-V) star colors, discarding outliers with colors differing
by more than ∼ 0.06 mag from the mean. Our selected main
sequence members are shown in Fig. 5. Overall, we selected
3311 main-sequence candidates in NGC 6791. These are the
stars present in at least one of the four data-sets (see Sec. 8),
and represent the candidates for our planetary transits search.
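The binned, iterated color selection described above can be sketched as follows. The iterate-until-stable loop is our reading of the robust mean with outlier rejection, and the names are ours:

```python
def select_members(stars, width=0.06, vmin=17.3, vmax=22.1, nbins=24):
    """Sketch of the main-sequence selection: in each 0.2-mag bin of V,
    iterate a mean of (B-V) while rejecting stars farther than `width`
    from it; survivors are kept as candidate members.
    `stars` is a list of (V, B_minus_V) pairs."""
    step = (vmax - vmin) / nbins          # 0.2 mag with the defaults
    members = []
    for b in range(nbins):
        lo, hi = vmin + b * step, vmin + (b + 1) * step
        sel = [s for s in stars if lo <= s[0] < hi]
        while sel:
            mean = sum(c for _, c in sel) / len(sel)
            kept = [s for s in sel if abs(s[1] - mean) <= width]
            if len(kept) == len(sel):     # stable: no more outliers
                break
            sel = kept
        members.extend(sel)
    return members
```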
Note that our selection criteria excludes stars in the bi-
nary sequence of the cluster. These are blended objects, for
which any transit signature should be diluted by the light
of the unresolved companion(s) and then likely undetectable.
Fig. 3. The expected RMS noise for the observations taken at the different sites as a function of the visual apparent magnitude, is compared
with the RMS of the observed light curves obtained with the image subtraction technique.
Furthermore, a narrow selection range helps in reducing the
field-star contamination.
6. Description of the transit detection technique
6.1. The box fitting technique
To detect transits in our light curves we adopted the BLS algo-
rithm by Kovács et al. (2002). This technique is based on the
fitting of a box shaped transit model to the data. It assumes that
the value of the magnitude outside the transit region is constant.
It is applied to the phase folded light curve of each star span-
ning a range of possible orbital periods for the transiting object,
(see Table 4). Chi-squared minimization is used to obtain the
best model solution. The quantity to be maximized in order to
get the best solution is:
T = ( Σ_in m_n )² [ 1 / (N_in N_out) ]   (4)
where mn = Mn − < M >. Mn is the n-th measurement of the
stellar magnitude in the light curve, < M > is the mean mag-
nitude of the star and thus mn is the n-th residual of the stellar
magnitude. The sum in the numerator includes all photometric
measurements that fall inside the transit region. Finally Nin and
Nout are respectively the number of photometric measurements
inside and outside the transit region.
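With the in-transit/out-of-transit partition fixed, the T statistic above can be evaluated directly from the residuals m_n (our reading of the garbled expression; the full BLS algorithm scans over all partitions, whereas here the partition is an input):

```python
def t_index(residuals, in_transit):
    """T = (sum of in-transit residuals)^2 / (N_in * N_out), where
    `in_transit` is a boolean flag per photometric point."""
    s_in = sum(m for m, flag in zip(residuals, in_transit) if flag)
    n_in = sum(1 for flag in in_transit if flag)
    n_out = len(residuals) - n_in
    return s_in ** 2 / (n_in * n_out)
```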
The algorithm, at first, folds the light curve assuming a particular period. Then, it sub-divides the folded light curve in nb bins and, starting from each one of these bins, calculates the T index shown above, spanning a range of transit lengths between qmi and qma fractions of the assumed period. Then, it provides
the period, the depth of the brightness variation, δ, the transit
length, and the initial and final bins in the folded light curve
at which the maximum value of the index T occurs. We used a
routine called ’eebls’, available on the web8. We also applied the
so-called directional correction (Tingley 2003a, 2003b), which
8 http://www.konkoly.hu/staff/kovacs/index.html
Fig. 4. RMS noise for the San Pedro Mártir observations with the aper-
ture error (triangles) as estimated by Equation 3.
Fig. 5. The NGC 6791 CMD highlighting the selection region of the
main sequence stars (blue circles).
consists in taking into account the sign of the numerator in the
above formula in order to retain only the brightness variations
which imply a positive increment in apparent magnitude.
6.2. Algorithm parameters
The parameters to be set before running the BLS algorithm
are the following: 1) nf, number of frequency points for which
the spectrum is computed; 2) fmin, minimum frequency; 3) df,
frequency step; 4) nb, number of bins in the folded time se-
ries at any test frequency; 5) qmi, minimum fractional transit
length to be tested; 6) qma, maximum fractional transit length
to be tested. qmi and qma are given as the product of the transit length and the test frequency. Table 4 displays our adopted parameters.
Table 4. Adopted parameters for the BLS algorithm: nf is the number of frequency steps adopted, fmin is the minimum frequency considered, df is the frequency step, nb is the number of bins in the folded time series at any test frequency, qmi and qma are the minimum and maximum fractional transit lengths to be tested, as explained in the text.
nf fmin(days−1) df(days−1) nb qmi qma
3000 0.1 0.0005 1000 0.01 0.1
6.3. Algorithm transit detection criteria
To characterize the statistical significance of a transit-like
event detected by the BLS algorithm we followed the meth-
ods by Kovács & Bakos (2005): deriving the Dip Significance
Parameter (hereafter DSP) and the significance of the main pe-
riod signal in the Out of Transit Variation (hereafter OOTV,
given by the folded time series with the exclusion of the tran-
sit).
The Dip Significance Parameter is defined as
DSP = δ ( σ²/N_tr + A²_OOTV )^(−1/2)   (5)
where δ is the depth of the transit given by the BLS at the point
at which the index T is maximum,σ is the standard deviation of
the Ntr in-transit data points, AOOTV is the peak amplitude in the
Fourier spectrum of the Out of Transit Variation. The threshold for the DSP adopted by Kovács & Bakos (2005) is 6.0, and it was derived from artificial constant light curves with Gaussian noise. In real light curves the noise is not Gaussian, as explained in Sec. 3.6,
and, in general, the value of the DSP threshold should be set
case by case. In Sec. 8, we present the adopted thresholds,
based on our simulations on artificial light curves, described in
Sec. 7.
The significance of the main periodic signal in the OOTV
is defined as:
SNR_OOTV = ( A_OOTV − <A> ) / σ_A   (6)
where < A > and σA are the average and the standard deviation
of the Fourier spectrum. This parameter accounts for the Out
Of Transit Variation, and we required it to be lower than 7.0, as
in Kovacs & Bakos (2005).
For our search we imposed a maximum transit duration of
six hours; we also required that at least ten data points must be
included in the transit region.
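Putting the two criteria together, a schematic detection test might look like the following. This is our reading of the definitions of DSP and SNR_OOTV in Eqs. (5) and (6); the spectrum handling is simplified, and the thresholds are arguments:

```python
import math

def passes_detection(delta, sigma, n_tr, spectrum, dsp_min=6.0, snr_max=7.0):
    """Check DSP = delta / sqrt(sigma^2/N_tr + A_OOTV^2) >= dsp_min and
    SNR_OOTV = (A_OOTV - <A>) / sigma_A <= snr_max, where `spectrum`
    is the Fourier spectrum of the out-of-transit variation."""
    a_ootv = max(spectrum)                       # peak amplitude
    mean_a = sum(spectrum) / len(spectrum)
    sigma_a = math.sqrt(sum((a - mean_a) ** 2 for a in spectrum) / len(spectrum))
    dsp = delta / math.sqrt(sigma ** 2 / n_tr + a_ootv ** 2)
    snr_ootv = (a_ootv - mean_a) / sigma_a
    return dsp >= dsp_min and snr_ootv <= snr_max
```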
7. Simulations
The Transit Detection Efficiency (TDE) of the adopted algo-
rithm and its False Alarm Rate (FAR) were determined by
means of detailed simulations. The TDE is a measure of the
probability that an algorithm correctly identifies a transit in a
light curve. The FAR is a measure of the probability that an
algorithm identifies a feature in a light curve that does not rep-
resent a transit, but rather a spurious photometric effect.
In the following discussion, we address the details of the
simulations we performed, considering the case of the CFHT
observations of NGC 6791. Because the CFHT data provided
the best of our photometric sequences, the results on the algorithm performance shown below should be considered as an upper limit for the other cases.
7.1. Simulations with constant light curves
Artificial stars with constant magnitude were added to each im-
age, according to an equally-spaced grid of 2*PSFRADIUS+1,
(where the PSFRADIUS was the region over which the Point
Spread Function of the stars was calculated, and was around 15
pixels for the CFHT images), as described in Piotto & Zoccali
(1999). We took into account the photometric zero-point dif-
ferences among the images, and the coordinate transformations
from one image to another. 7722 stars were added on the CFHT
images. In order to assure the homogeneity of these simula-
tions, the artificial stars were added exactly in the same posi-
tions, (relative to the real stars in the field), for the other sites.
Because of the different field of views of the detectors, (see
Tab. 1), the number of resulting added stars was 3660 for the
NOT, 5544 for Loiano, and 3938 for SPM. The entire set of
images was then reduced again with the procedure described
in Sec. 3. In this way we obtained a set of constant light curves which
is completely representative of many of the spurious artifacts
that could have been introduced by the photometry procedure.
This is certainly a more realistic test than simply considering
Poisson noise on the light curves, as it is usually done. We then
applied the algorithm, with the parameters described in Sec.6,
to the constant light curves. The result is shown in Figure 6,
where the DSP parameter is plotted against the mean magni-
tude of the light curve. For the CFHT data, fixing the DSP
threshold at 4.3 yielded a FAR of 0.1%. This was the FAR
we adopted also when considering the other sites, which corre-
sponded to different levels of the DSP parameter, as explained
in Sec. 8.
Repeating the whole procedure 4 times and slightly shift-
ing the positions of the artificial stars, allowed us to better es-
timate the FAR and its error, FAR=(0.10 ± 0.04)%. Therefore,
running the transit search procedure on the 3311 selected main
sequence stars, we expect (3.3 ± 1.3) false candidates.
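This bookkeeping amounts to simple Poisson counting; a sketch with our own function names:

```python
import math

def false_alarm_rate(n_false, n_curves):
    """FAR and its Poisson uncertainty from counting n_false spurious
    detections among n_curves artificial constant light curves."""
    return n_false / n_curves, math.sqrt(n_false) / n_curves

def expected_false_candidates(far, far_err, n_stars):
    """Scale the FAR to the searched sample, propagating the error
    linearly (far given as a fraction, not a percentage)."""
    return n_stars * far, n_stars * far_err
```

For instance, a FAR of 0.1% over the 3311 selected stars yields about 3.3 expected spurious candidates.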
7.2. Masking bad regions and temporal intervals
We verified that, when stars were located near detector defects (like bad columns or saturated stars), or in correspondence with some instants of a particular night (associated with sudden climatic variations or telescope shifts), it was possible to have an over-production of spurious transit candidates. To avoid these
effects, we chose to mask those regions of the detectors and
the epochs which caused sudden changes in the photometric
quality. This was done also for the simulations with the added constant stars, which were neither inserted in defective detector regions nor in the images excluded for generating bad photometry.
Fig. 6. False Alarm Probability (FAR), in %, against the DSP parameter given by the algorithm. The points indicate the results of our simulations on constant light curves; the solid line is our assumed best fit.
In particular, we observed these spurious effects for
the NOT and SPM images. We further observed, when dis-
cussing the candidates coming from the analysis of the whole
data-set (as described in Sec.10.3) that the photometric varia-
tions were concentrated on the first night of the NOT. This fact,
which appeared from the simulations with the constant stars
too, meant that this night was probably subject to bad weather
conditions. had not we applied the Because we didn’t recog-
nize it at the beginning, we retained that night, as long as those
candidates, which were all recognized of spurious nature. Had
not we applied any masking the number of false alarms would
have almost quadruplicated. This fact probably can explain at
least some of the candidates found by B03 (see Sec. 13) that
were identified on the NOT observations. Even if some kind of
masking procedure was applied by B03, many candidates ap-
peared concentrated on the same dates, and were considered
rather suspicious by the same authors.
7.3. Artificially added transits
The transit detection efficiency (TDE) was determined by ana-
lyzing light curves modified with the inclusion of random tran-
sits. To properly measure the TDE and to estimate the number
of transits we expect to detect it is mandatory to consider real-
istic planetary transits. We proceeded as follows:
7.3.1. Stellar parameters
The basic cluster parameters were determined by fitting the-
oretical isochrones from Girardi et al. (2002) to the observed
color-magnitude diagram (Stetson et al. 2003). Our best fit pa-
rameters are (see Fig. 7): age = 10.0 Gyr, (m − M) = 13.30,
E(B−V) = 0.12 for Z = 0.030 (corresponding to [Fe/H]=
Fig. 7. CMD diagram of NGC6791 with the best fit Z=0.030 isochrone
(dashed line), and the best fit Z=0.046 isochrone (from Carraro et
al. 2006, solid line). Photometry: Stetson et al. (2003)
Fig. 8. Left: Mi/M⊙ vs visual apparent magnitude Right: Ri/R⊙ vs vi-
sual apparent magnitude, from our best fit isochrone (dashed line)
and from the Z=0.046 isochrone (solid line) applied to the stars
of NGC 6791.
+0.18), and age = 8.9 Gyr, (m−M) = 13.35 and E(B−V) = 0.09
for Z=0.046 (corresponding to [Fe/H]= +0.39).
From the best-fit isochrones we then obtained the values of
stellar mass and radius as a function of the visual magnitude
(Fig. 8).
7.3.2. Planetary parameters
The actual distribution of planetary radii has a very strong im-
pact on the transit depth and therefore on the number of plan-
etary transits we expect to be able to detect. The radius of
the fourteen transiting planets discovered to date ranges from
R=1.35 ± 0.07 RJ (HD209458b; Wittenmyer et al. 2005) to
R=0.725 ± 0.03 RJ (HD149026b; Sato et al. 2005), where
J refers to the value for Jupiter. The observed distribution is
likely biased towards larger radii. Gaudi (2005a) suggests for
the close-in giant planets a mean radius Rp = 1.03 RJ. To eval-
uate the efficiency of the algorithm we have considered three
cases:
Fig. 9. Continuous line: adopted distribution for planet peri-
ods. Histogram: RV surveys data (from the Extrasolar Planets
Encyclopaedia).
– Rp = (0.7 ± 0.1) RJ
– Rp = (1.0 ± 0.2) RJ
– Rp = (1.4 ± 0.1) RJ
assuming a Gaussian distribution for Rp. We fixed the planetary
mass at Mp = 1 MJ , because the effect of planet mass on transit
depth or duration is negligible.
The period distribution was taken from the data for plan-
ets discovered by radial velocity surveys, from the Extra-solar
Planets Encyclopaedia9. We selected the planets discovered by
radial velocity surveys with mass 0.3MJ ≤ Mpl sin i ≤ 10MJ
(the upper limit was fixed to exclude brown dwarfs; the lower
limit to ensure reasonable completeness of RV surveys and to
exclude Hot Neptunes that might have radii much smaller than
giant planets, Baraffe et al. 2005) and periods 1 ≤ P ≤ 9 days.
We assumed that the period distribution of RV planets is un-
biased in this period range. We then fitted the observed period
distribution with a positive power law for the Very Hot Jupiters
(VHJ, 1 ≤ P ≤ 3) and a negative power law for the Hot Jupiters
(HJ, 3 < P ≤ 9, see Gaudi et al. 2005b for details) as shown in
Fig. 9.
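Sampling from such a broken power law is convenient by inverse-CDF sampling. The exponents below are illustrative placeholders, since the fitted values are not quoted here:

```python
import random

def sample_period(rng, p_break=3.0, p_min=1.0, p_max=9.0,
                  alpha_vhj=1.0, alpha_hj=-1.5):
    """Draw one orbital period (days) from a broken power law
    dN/dP ~ P^alpha: rising for VHJ (p_min..p_break), falling for HJ
    (p_break..p_max).  Exponents are illustrative, not the paper's fit."""
    def integral(a, lo, hi):
        # unnormalized integral of P^a over [lo, hi], for a != -1
        return (hi ** (a + 1) - lo ** (a + 1)) / (a + 1)
    w_vhj = integral(alpha_vhj, p_min, p_break)
    w_hj = integral(alpha_hj, p_break, p_max)
    u = rng.random() * (w_vhj + w_hj)
    if u < w_vhj:                       # invert the CDF on the VHJ branch
        a, lo = alpha_vhj, p_min
        return (lo ** (a + 1) + u * (a + 1)) ** (1.0 / (a + 1))
    a, lo = alpha_hj, p_break           # invert on the HJ branch
    return (lo ** (a + 1) + (u - w_vhj) * (a + 1)) ** (1.0 / (a + 1))
```

With a rising VHJ branch, most draws fall below the 3-day break, as in Fig. 9.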
7.3.3. Limb-darkening
To obtain realistic transit curves it is important to include the
limb darkening effect. We adopted a non-linear law for the spe-
cific intensity of a star:
I(µ)/I(1) = 1 − Σ_{k=1..4} a_k (1 − µ^(k/2))   (7)
from Claret (2000).
9 http://exoplanet.eu/index.php
In this relation µ = cos γ is the cosine of the angle between the normal to the stellar surface and the line of sight of the observer, and ak are numerical coefficients that depend upon vturb (micro-turbulent velocity), [M/H], Teff, and the spectral band. The coefficients are available from the ATLAS calculations (available at CDS).
We adopted the metallicity of the cluster for [M/H] and vturb = 2 km s−1 for all the stars. For each star we adopted the appropriate V-band ak coefficients as a function of the values of log g and Teff derived from the best fit isochrone.
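Eq. (7) is direct to evaluate once the four a_k coefficients are chosen:

```python
def claret_intensity(mu, a):
    """Claret (2000) non-linear limb-darkening law, Eq. (7):
    I(mu)/I(1) = 1 - sum_{k=1..4} a_k (1 - mu^(k/2)),
    with mu = cos(gamma) and a = (a1, a2, a3, a4)."""
    return 1.0 - sum(a[k - 1] * (1.0 - mu ** (k / 2.0)) for k in range(1, 5))
```

At the disk center (µ = 1) the intensity is 1 by construction, for any coefficients.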
7.3.4. Modified light curves
In order to establish the TDE of the algorithm, we considered
the whole sample of constant stars with 17.3 ≤ V ≤ 22.1, and
each star was assigned a planet with mass, radius and period
randomly selected from the distributions described above. The
orbital semi-major axis a was derived from Kepler's third law, assuming circular orbits.
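For reference, the semi-major axis follows from a³ = G M⋆ P²/(4π²), with the planet mass neglected (a sketch with rounded physical constants):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m
DAY = 86400.0          # seconds per day

def semi_major_axis(period_days, m_star_msun):
    """Semi-major axis (in AU) from Kepler's third law,
    a^3 = G M P^2 / (4 pi^2), for a circular orbit."""
    p = period_days * DAY
    a3 = G * m_star_msun * M_SUN * p ** 2 / (4.0 * math.pi ** 2)
    return a3 ** (1.0 / 3.0) / AU
```

A 3-day period around a solar-mass star gives a ≈ 0.04 AU, typical of the hot Jupiters considered here.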
To each planet, we also assigned an orbit with a random
inclination angle i, with 0 < cos i < 0.1, with a uniform dis-
tribution in cos i. We infer that ∼ 85% of the planets result in
potentially detectable transits. We also assigned a phase φ0
randomly chosen from 0 to 2π rad and a random direction of
revolution s = ±1 (clockwise or counter-clockwise).
Having fixed the planet’s parameters (P, i, φ0, Mp, Rp, a),
the star’s parameters (M⋆, R⋆) and a constant light curve (ti ,
Vi) it is now possible to derive the position of the planet with
respect to the star at every instant from the relation:
φ = φ0 + 2π s (t_i / P)
where φ is the angle between the star-planet radius and the
line of sight. The positions were calculated at all times ti
corresponding to the Vi values of the light curve of the star.
When the planet was transiting the star, the light curve was
modified, calculating the brightness variation ∆V(ti) and
adding this value to the Vi (see Fig. 10).
7.4. Calculating the TDE
We then selected only the light curves for which there was
at least a half transit inside an observing night and applied
our transit detection algorithm. We considered not only central
transits but also grazing ones. We considered the number of
light curves that exceeded the thresholds, and also determined
for how many of these the transit instants were correctly iden-
tified on the unfolded light curves.
We isolated three different outputs:
1. Missed candidates: the light curves for which the algorithm did not find parameter values exceeding the thresholds (DSP, OOTV, transit duration and number of in-transit points, see Sec. 6.3), or, if it did, the epochs of the transits were not correctly recovered;
2. Partially recovered transit candidates: the parameters ex-
ceeded the thresholds and at least one of the transits that
fell in the observing window was correctly identified;
Fig. 10. Top: constant light curve Bottom: the same light curve after
inserting the simulated transit with limb-darkening (black points). The
solid line shows the theoretical light curve of the transit.
3. Totally recovered transit candidates: the parameters ex-
ceeded the thresholds and all the transits that were present
were correctly recovered.
The TDE was calculated as the sum of the totally and partially
recovered transit candidates relative to the whole number of
stars with transiting planets. We derive the TDE as a function
of magnitude in Fig. 11. The TDE decreases with increasing
magnitude because the lower photometric precision at fainter
magnitudes is not fully compensated by the larger transit depth.
The TDE depends strongly also on the assumptions concerning
the planetary radii, and on the inclusion of the limb darkening
effect. Fig. 11 is relative to a threshold equal to 4.3 for the DSP
(cf. Fig. 6).
The resulting TDE is about 11.5% around V = 18 and 1%
around V = 21 for the case with R = (1.0 ± 0.2)RJ.
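The classification into the three outputs and the resulting TDE can be summarized as follows (our own schematic bookkeeping, with transit epochs represented as sets of mid-times):

```python
def classify_and_tde(results):
    """Classify each simulated curve as 'missed', 'partial' or 'total'
    and return the TDE.  `results` is a list of (passed, true_epochs,
    found_epochs) tuples, with epochs as sets of transit mid-times."""
    counts = {"missed": 0, "partial": 0, "total": 0}
    for passed, true_ep, found_ep in results:
        hits = true_ep & found_ep           # correctly identified transits
        if not passed or not hits:
            counts["missed"] += 1
        elif hits == true_ep:
            counts["total"] += 1
        else:
            counts["partial"] += 1
    # TDE: totally plus partially recovered, over all transiting curves
    tde = (counts["partial"] + counts["total"]) / len(results)
    return counts, tde
```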
Figure 12–14 show the histograms relative to the input tran-
sit parameters and the recovered values of the BLS algorithm
normalized to the whole number of transiting planets. For com-
parison we also show in the upper left panel of each figure
the recovered values of the BLS for the constant simulated
light curves (normalized to the total number of constant light
curves). We found that on average the BLS algorithm has un-
derestimated the depth and duration of the transit by about 15-
20%. This is likely due to the deviation of the transit curves
from the box shape assumed by the algorithm. For the periods (Fig. 13), the recovered transit period distribution, shown in the upper right panel, had two clear peaks at 1.5 and 3 days, with the first one much more evident, meaning that the algorithm tends to estimate half of the input transit period, as shown in the lower panels of the same Figure. The constant
light curve period distribution of the upper left panel, instead,
showed that the vast majority of constant stars were recovered
with periods between 0.5 and 1 day, but residual peaks at 2.5
and 5 days were present.
14 M. Montalto et al.: A new search for planet transits in NGC 6791.
Fig. 11. TDE as a function of the stellar magnitude for various as-
sumptions on planetary radii distribution. From the top to the bottom:
1) Dashed line, R = (1.4 ± 0.1) RJ ; 2) Solid line, R = (1.0 ± 0.2) RJ ;
3) Dotted line, R = (0.7 ± 0.1) RJ . The adopted threshold for the DSP
in this figure is 4.3. The normalization is with respect to the whole number of transiting planets.
Fig. 12. (Upper left) Distributions of transit depths measured by the BLS algorithm on the artificial constant light curves (lc); (upper right) transit depths measured on artificial light curves with transits added; (lower left) input transit depths used to generate artificial light curves with transits; (lower right) relative difference between the transit depth recovered by BLS and its input value. Empty histograms refer to distributions relative to all light curves, filled ones to light curves with totally and partially recovered transits. Histograms are normalized to all light curves with transiting planets or, for the upper left panel, to all constant light curves. This Figure is relative to CFHT data, and the assumed planetary radii distribution is R = (1.0 ± 0.2) RJ.
Fig. 13. The same as Fig. 12 for the transit periods.
Fig. 14. The same as Fig. 12 for the transit durations.
8. Different approaches in the transit search
The data we have acquired on NGC 6791 came from four different sites and involved telescopes with different diameters and instrumentation. Moreover, the observing window of each site clearly differed from the others, as did observing conditions such as seeing, exposure times, etc.
The first approach we tried consisted in putting together
the observations coming from all the different telescopes. The
most important complication we had to face regarded the different fields of view of the detectors. This had the consequence that some stars were measured only in a subset of the sites, and therefore these stars had in general different observing windows.
Table 5. The different cases into which the data-set analysis was split. The notation in the first column is explained in the text, the second column shows the number of stars in each case, and the third column reports the DSP threshold adopted, corresponding to a FAR = 0.1%.
Case N.stars DSP threshold
11111 1093 7.5
10000 771 4.3
10100 870 5.5
11011 162 7.1
10001 112 7.1
11001 108 7.5
10111 99 7.2
10011 96 6.5
Considering only the stars in common would reduce the
number of candidates from 3311 to 1093 which means a re-
duction of about 60% of the targets. We decided to distinguish
eight different cases, which are shown in Tab. 5. In the first col-
umn a simple binary notation identifies the different sites: each
digit represents one site in the following order: CFHT, SPM,
Loiano, NOT(V) and NOT(I). If the digit corresponding to a given site is 1, it indicates that the stars contained in that case were observed there; otherwise the value is set to 0. For example, the notation 11111 was used for the stars in common to all 4 sites. The notation 10000 indicates the stars which were present only in the CFHT field, and so on. Each one of
these cases was treated as independent, and the resulting FAR
and expected number of transiting planets were added together
in order to obtain the final values.
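The case bookkeeping of Tab. 5 can be sketched in a few lines; this is a hypothetical illustration (the catalogue contents and helper names are ours):

```python
# Group stars into the binary cases of Tab. 5: one digit per data-set
# (CFHT, SPM, Loiano, NOT(V), NOT(I)), '1' if the star was observed there.
from collections import Counter

SITES = ("CFHT", "SPM", "Loiano", "NOT(V)", "NOT(I)")

def case_code(observed_sites):
    return "".join("1" if s in observed_sites else "0" for s in SITES)

# Toy catalogue: two stars seen by all data-sets, one seen only by the CFHT
stars = [{"CFHT", "SPM", "Loiano", "NOT(V)", "NOT(I)"},
         {"CFHT", "SPM", "Loiano", "NOT(V)", "NOT(I)"},
         {"CFHT"}]
print(Counter(case_code(s) for s in stars))  # Counter({'11111': 2, '10000': 1})
```

Each resulting case is then treated independently, as described in the text.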
The second approach we followed was to consider only the
CFHT data. As demonstrated in Section 3, overall we obtained
the best photometric precision for this data-set. We considered
the 3311 candidates which were recovered in the CFHT data-set.
For the CFHT data-set, as shown in Tab. 5, the DSP value
correspondent to a FAR= 0.1% is equal to 4.3, lower than the
other cases reported in that Table. Thus, despite the reduced
observing window of the CFHT data, it is possible to take ad-
vantage of its increased photometric precision in the search for
planets.
In Sections 9 and 10 we present the candidates and the different expected numbers of transiting planets for these two approaches.
9. Presentation of the candidates
Table 6 shows the characteristics of the candidates found by the
algorithm, distinguishing those coming from the entire data-set
analysis from those coming from the CFHT analysis.
Fig. 15. Composite light curve of candidate 6598. In ordinate is re-
ported the calibrated V magnitude and in abscissa the observing
epoch, (in days), where 0 corresponds to JD = 52099. Filled circles
indicate CFHT data, crosses SPM data, open triangles Loiano data,
open circles NOT data in the V filter and open squares NOT data in
the I filter. Light blue symbols highlight regions which were flagged
by the BLS.
9.1. Candidates from the whole data-sets
Applying the algorithm with the DSP thresholds shown
in Tab. 5 on the real light curves we obtained four can-
didates. Hereafter we adopt the S03 notation reported in
Tab. 6. For what concerns candidates 6598, 4304, and 4699
(Fig. 15, 16, 17) we noted (see also Sec. 7.2) that the points contributing to the detected signal came from the first observing night at the NOT, a night during which bad weather conditions deeply affected the photometry. In particular, candidate 6598 was also found in the B03 transit search survey (see Sec. 13) and flagged as a probable spurious candidate. In none of the other observing nights were we able to confirm the photometric variations which are visible in the first night at the NOT.
of spurious nature.
The fourth candidate corresponds to star 1239, which is located in the external regions of the cluster. For this reason we present in Fig. 18 only the data coming from the CFHT. In
this case, the data points appear irregularly scattered, suggesting either an underlying pattern of variability or simply a spurious photometric effect.
9.2. Candidates from the CFHT data-set
Considering only the data coming from the CFHT observing
run we obtained three candidates. The star 1239 is in common
with the list of candidates coming from the whole data-sets be-
cause, as explained above, it is located in the external regions
for which we had only the CFHT data. For candidate 4300,
the algorithm identified two slight (∼ 0.004 mag) magnitude
variations with duration of around one hour during the sixth
and the tenth night, with a period of around 4.1 days. A jovian
Table 6. The candidates found in the two cases discussed in Sec. 8. The case of the whole data-sets put together is indicated with ALL (1st
column), that for the CFHT data-set only is indicated with CFHT (2nd column). A cross (x) indicates that the candidate was found in that case, a dash (-) that it is absent. The 3rd column gives the ID of the stars taken from S03. Then follow the calibrated V magnitude, the (B − V) color, the right ascension (α), and the declination (δ) of the stars.
ALL CFHT ID(S tetson) V (B − V) α(2000) δ(2000)
x - 6598 18.176 0.921 19h 20m 48s.65 +37◦ 47
x - 4304 17.795 0.874 19h 20m 41s.39 +37◦ 43
x - 4699 17.955 0.846 19h 20m 42s.67 +37◦ 43
x x 1239 19.241 1.058 19h 20m 25s.42 +37◦ 47
- x 4300 18.665 0.697 19h 20m 41s.38 +37◦ 45
- x 7591 18.553 0.959 19h 20m 51s.51 +37◦ 48
Fig. 16. Composite light curve of candidate 4304.
Fig. 17. Composite light curve of candidate 4699.
planet around a main sequence star of magnitude V=18.665 (with R = 0.9 R⊙, see Fig. 8) should produce a transit with a maximum depth of around 1.2% and a maximum duration of 2.6 hours. Although the events are compatible with a grazing transit, we observed that the two suspected eclipses are not identical and, in
Fig. 18. CFHT light curve for candidate 1239.
Fig. 19. CFHT light curve for candidate 4300.
Fig. 20. CFHT light curve for candidate 7591.
any case, outside these regions, the photometry appears quite
scattered. Star 7591, instead, does not show any significant fea-
ture.
From the analysis of these candidates we concluded that no transit features are detected in either the entire data-set or the CFHT data. Moreover, we recovered the expected number of false alarm candidates, (3.3 ± 1.3), as explained in Sec. 10.3.
10. Expected number of transiting planets
10.1. Expected frequency of close-in planets in
NGC 6791
The frequency of short-period planets in NGC 6791 was es-
timated considering the enhanced occurrence of giant planets
around metal rich stars and the fraction of hot Jupiters among
known extrasolar planets.
Fischer & Valenti (2005) derived the probability P of formation of giant planets with orbital period shorter than 4 yr and radial velocity semi-amplitude K > 30 m s−1 as a function of [Fe/H]:
P = 0.03 · 10^(2.0 [Fe/H]), for −0.5 < [Fe/H] < 0.5 (8)
The number of stars with a giant planet with P < 9 d was estimated considering the ratio between the number of planets with P < 9 days and the total number of planets from Table 3 of Fischer & Valenti (2005) (850 stars with uniform planet detectability). The result is 0.22 (+0.12/−0.09).
Assuming for NGC 6791 [Fe/H] = +0.17 dex, a conservative lower limit to the cluster metallicity, from Equation 8 we determined that the probability that a cluster star has a giant planet with P < 9 d is 1.4%. Assuming [Fe/H] = +0.47, the
metallicity resulting from the spectral analysis by Gratton et
al. (2006), the probability rises to 5.7%.
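The two probabilities quoted above can be reproduced numerically from Eq. (8) combined with the hot-Jupiter fraction 0.22; a minimal check (the function name is ours):

```python
# Probability that a star hosts a giant planet with P < 9 d:
# Eq. (8) of Fischer & Valenti (2005) times the P < 9 d fraction 0.22.
def close_in_planet_probability(feh, hot_fraction=0.22):
    # Eq. (8) is valid for -0.5 < [Fe/H] < 0.5
    return 0.03 * 10 ** (2.0 * feh) * hot_fraction

for feh in (0.17, 0.47):
    print(f"[Fe/H] = +{feh}: {close_in_planet_probability(feh):.1%}")
# [Fe/H] = +0.17: 1.4%
# [Fe/H] = +0.47: 5.7%
```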
Our estimate assumes that the planet period and the metal-
licity of the parent star are independent, as found by Fischer
& Valenti (2005). If the hosts of hot Jupiters are even more
metal rich than the hosts of planets with longer periods, as proposed by Sozzetti (2004), then the expected frequency of close-
in planets at the metallicity of NGC 6791 should be slightly
higher than our estimate.
10.2. Expected number of transiting planets
In order to evaluate the expected number of transiting planets
in our survey we followed this procedure:
– From the constant stars of our simulations (see Sec. 7), tak-
ing into account the luminosity function of main sequence
stars of the cluster, we randomly selected a sample cor-
responding to the probability that a star has a planet with
P ≤ 9 d.
– From the V magnitude of the star we calculated the mass
and radius.
– To each star in this sample we assigned a planet with mass,
radius, period randomly chosen from the distributions de-
scribed in Sec. 7.3.2, and cos i randomly chosen inside
the range 0 - 1. The range spanned for the periods was
1 < P < 9 days, with a step size of 0.006 days. For plane-
tary radii we considered the three distributions described in
Sec. 7.3.2, sampled with a step size of 0.001 RJ, and inclinations were sampled in steps of 0.005 degrees.
– We selected only the stars with planets that can transit, given their inclination angle, according to the relation:
cos i ≤ (Rpl + R⋆) / a
where a is the orbital semi-major axis.
– Finally, as described above, we assigned to each planet the
initial phase φ0 and the revolution orbital direction s and
modified the constant light curves inserting the transits. The
initial phase was chosen randomly inside the range 0-360
degrees, with a step size of 0.3 degrees.
– We applied the BLS algorithm to the modified light curves
with the adopted thresholds.
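The geometric selection in the procedure above can be sketched as follows, assuming a circular orbit and deriving the semi-major axis a from Kepler's third law (the function name and default stellar parameters are ours; the paper's simulation draws these quantities from the distributions of Sec. 7.3.2):

```python
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8   # solar mass (kg) and radius (m)
R_JUP = 7.149e7                    # Jupiter radius (m)
DAY = 86400.0

def transits(P_days, M_star=1.0, R_star=0.9, R_pl=1.0, cos_i=0.0):
    """Geometric transit condition cos i <= (R_pl + R_star)/a, with a
    from Kepler's third law for a circular orbit (M_star in M_sun,
    R_star in R_sun, R_pl in R_Jup)."""
    a = (G * M_star * M_SUN * (P_days * DAY) ** 2
         / (4 * math.pi ** 2)) ** (1 / 3)
    return cos_i <= (R_pl * R_JUP + R_star * R_SUN) / a

# An edge-on 4.1 d orbit transits; seen at cos i = 0.2 it does not
print(transits(4.1, cos_i=0.0))   # True
print(transits(4.1, cos_i=0.2))   # False
```

For a hot Jupiter at P ≈ 4 d around a solar-mass star, a ≈ 0.05 AU, so the condition admits cos i up to roughly 0.09.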
We performed 7000 different simulations and we calculated
the mean values of these quantities:
– The number of MS stars with a planet: Npl
– The number of planets that make transits (thanks to their inclination angles): Ngeom
– The number of planets that make one or more transits in the observing window: N1+
– The number of planets that make one single transit in the observing window: N1
– The number of transiting planets detected by the algorithm for the three different planetary radii distributions adopted (as described in Sect. 7): R1 = (0.7 ± 0.1) RJ, R2 = (1.0 ± 0.2) RJ, and R3 = (1.4 ± 0.1) RJ.
10.3. FAR and expected number of detectable
transiting planets for the whole data-sets
We followed the procedure reported in Sec. 7 to perform simulations with the artificial stars. Note that the artificial stars were added in exactly the same positions in the fields of the different detectors, which assured the homogeneity of the artificial star tests. We decided to accept
a FAR equal to 0.1%, which meant that we expected to obtain
(3.3 ± 1.3) false alarms from the total number of 3311 clus-
ter candidates. The DSP thresholds corresponding to this FAR value are different for each case, and are reported in Table 5.
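The false-alarm budget above follows from a simple product; note that the quoted ±1.3 uncertainty comes from the spread over the simulations, not from this arithmetic:

```python
# Expected number of false alarms at the accepted false-alarm rate.
n_candidates = 3311
far = 0.001                # accepted FAR of 0.1%
expected_false_alarms = n_candidates * far
print(f"{expected_false_alarms:.1f}")  # 3.3
```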
Table 7 displays the results for the simulations performed
in order to obtain the expected numbers of detectable transit-
ing candidates for three values of [Fe/H] (the values found by
Carraro et al. 2006 and Gratton et al. 2006 and a conservative
lower limit to the cluster metallicity).
The columns listed as Ngeom, N1+ and N1 indicate respec-
tively the number of planets which have a favorable geometric
inclination for the transit, the number of expected planets that
transit at least one time within the observing window and the
number of expected planets that transit exactly one time in the
observing window.
The numbers of expected transiting planets in our observ-
ing window detectable by the algorithm were calculated for
the three different planetary radii distributions (see Sec. 7, and
previous paragraph). On the basis of the current knowledge
on giant planets the most likely case corresponds to R2 =
(1.0 ± 0.2) RJ.
Table 7 shows that, assuming the most likely planetary radii
distribution R = (1.0 ± 0.2) RJ and the high metallicity result-
ing from recent high dispersion studies (Carraro et al. 2006;
Gratton et al. 2006), we expected to be able to detect 2 − 3
planets that exhibit at least one detectable transit in our observ-
ing window.
10.4. FAR and expected number of detectable
transiting planets for the CFHT data-set
Table 8 shows the expected number of detectable planets in
our observing window for the case of the CFHT data. A
Table 7. The Table shows the results of our simulations on the expected number of detectable transiting planets for the whole data-set (all
the cases of Tab. 5) as explained in Sect. 10.3. Ngeom indicates planets with favorable inclination for transits, N1+, and N1, planets that transit
respectively at least one time and only one time inside the observing window. R1, R2, R3, indicate the expected number of detectable transiting
planets inside our observing window, for the three assumed planetary radii distributions, (see Sec. 7.3.2).
[Fe/H] Ngeom N1+ N1 R1 R2 R3
+0.17 5.39 3.08 1.68 0.0 ± 0.0 0.0 ± 0.0 1.8 ± 0.9
+0.39 15.13 8.32 4.60 0.1 ± 0.1 1.9 ± 0.8 3.6 ± 1.8
+0.47 21.92 11.95 6.62 0.2 ± 0.3 3.2 ± 1.9 5.4 ± 1.8
comparison with Table 7 revealed that, in general, except for the largest planetary radii distribution, R3, the number of expected detections does not increase when all the sites are considered together instead of the CFHT alone. Moreover, for the cases of [Fe/H] = (+0.39, +0.47) dex and the R = (0.7 ± 0.1) RJ radii distribution, we obtained significantly better results considering only the CFHT data than putting together all the data-sets. We interpreted this result as evidence that the transit signal is, in general, lower than the total scatter in the composite light curves; this did not allow the algorithm to take advantage of the increased observing window and gave, for the cases of major interest, R = (1 ± 0.2) RJ and [Fe/H] = (+0.39, +0.47) dex, comparable results.
11. Significance of the results
As explained in Sec. 9, on real data we obtained 4 candidates,
considering the data coming from the entire data-sets, (all the
cases of Tab. 5), and 3 candidates considering only the best
photometry coming from the CFHT. None of these candidates
shows clear transit features, and their number agrees with the
expected number of false candidates coming from the simula-
tions (3.3 ± 1.3) as explained in Sec. 7.1.
Considering the case relative to the metallicity of Carraro
et al. 2006 ([Fe/H]= +0.39) and the one relative to the metallic-
ity of Gratton et al. 2006, ([Fe/H]= +0.47), and given the most
probable planetary radii distribution with R = (1.0 ± 0.2)RJ,
from Table 7 and Table 8 we expected between 2 and 3 planets
with at least one detectable transit inside our observing window.
Therefore, this study reveals a lack of transit detections.
What is the probability that our survey resulted in no tran-
siting planets just by chance? To answer this question we went
back to the simulations described in Sect. 10.2 and calculated
the ratio of the number of simulations for which we were not
able to detect any planet relative to the total number of simula-
tions performed. The resulting probabilities to obtain no tran-
siting planets were respectively around 10% and 3% for the
metallicities of Carraro et al. 2006 and Gratton et al. 2006 con-
sidered above.
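A hedged cross-check of these numbers (not the paper's Monte Carlo method): if the number of detections were Poisson-distributed with mean equal to the expected yields of Tables 7-8 (roughly 2.3 and 3.4 for the two metallicities), the null-detection probabilities come out close to the quoted values:

```python
import math

# If detections are Poisson with mean lambda, P(no detection) = exp(-lambda).
# The means below are illustrative, taken to match the expected yields.
for label, expected in (("[Fe/H]=+0.39", 2.3), ("[Fe/H]=+0.47", 3.4)):
    print(f"{label}: P(0 detections) = {math.exp(-expected):.0%}")
# [Fe/H]=+0.39: P(0 detections) = 10%
# [Fe/H]=+0.47: P(0 detections) = 3%
```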
12. Implication of the results
Beside the rather small, but not negligible, probability of a chance result (3-10%, see Sec. 11), different hypotheses can be invoked to explain the lack of observed transits. We discuss them here.
12.1. Lower frequency of close-in planets in cluster
environments
The lack of observed transits might be due to a lower frequency
of close-in planets in clusters compared to the field stars of sim-
ilar metallicity. In general, two possible factors could prevent
planet formation especially in clustered environments:
– in the first million years of the cluster life, UV-flux can
evaporate fragile embryonic dust disks from which planets
are expected to form. Circumstellar disks associated with
solar-type stars can be readily evaporated in sufficiently
large clusters, whereas disks around smaller (M-type) stars
can be evaporated in more common, smaller groups. In ad-
dition, even though giant planets could still form in the disk
region r = 5-15 AU, little disk mass (outside that region)
would be available to drive planet migration;
– on the other hand, gravitational forces could strip nascent planets from their parent stars or, bearing in mind that transit planet searches are biased toward 'hot Jupiter' planets, tidal effects could prevent the planetary migration processes which are essential for the formation of this kind of planets.
These factors depend critically on the cluster size. Adams et al. (2006) show that for clusters with 100-1000 members modest effects are expected on forming planetary systems. The
interaction rates are low, so that the typical solar system ex-
periences a single encounter with closest approach distance of
1000 AU. The radiation exposure is also low, so that photo-
evaporation of circumstellar disks is only important beyond 30
AU. For more massive clusters like NGC6791, these factors
are expected to be increasingly important and could drastically
affect planetary formation (Adams et al. 2004).
12.2. Smaller planetary radii for planets around very
metal rich host stars
Guillot et al. (2006) suggested that the masses of heavy elements in planets were proportional to the metallicities of their parent stars. This correlation remains to be confirmed, being still consistent with a no-correlation hypothesis at the 1/3 level in the least favorable case. A consequence of this would be
a smaller radius for close-in planets orbiting super-metal rich
stars. Since the transit depth scales with the square of the ra-
dius, this would have important implications for ground-based transit detectability (see Tables 7 and 8).
Table 8. The same as Table 7, but for the case of the CFHT data only, as explained in Sect. 10.2.
[Fe/H] Ngeom N1+ N1 R1 R2 R3
+0.17 5.39 2.49 1.98 0.2 ± 0.5 0.4 ± 0.7 0.6 ± 0.8
+0.39 15.13 7.01 5.39 1.6 ± 1.3 2.3 ± 1.6 2.6 ± 1.7
+0.47 21.92 10.12 7.94 2.5 ± 1.7 3.4 ± 2.0 4.0 ± 2.1
12.3. Limitations on the assumed hypothesis
While we exploited the best available results to estimate the ex-
pected number of transiting planets, it is possible that some of
our assumptions are not completely realistic, or applicable to
our sample. One possibility is that the planetary frequency no
longer increases above a given metallicity. The small number of
stars in the high metallicity range in the Fischer & Valenti sam-
ple makes the estimate of the expected planetary frequency for
the most metallic stars quite uncertain. Furthermore, the con-
sistency of the metallicity scales of Fischer & Valenti (2005),
Carraro et al. (2006) and Gratton et al. (2006) should be
checked.
Another possibility concerns systematic differences be-
tween the stellar sample studied by Fischer & Valenti, and the
present one. One relevant point is represented by binary sys-
tems. The sample of Fischer & Valenti has some biases against
binaries, in particular close binaries. As the frequency of plan-
ets in close binaries appears to be lower than that of planets
orbiting single stars and wide binaries (Bonavita & Desidera
2007, A&A, submitted), the frequency of planets in the Fischer
& Valenti sample should be larger than that resulting in an un-
biased sample. On the other hand, our selection of cluster stars
excludes the stars in the binary sequence, partially compensat-
ing this effect.
Another possible effect is that of stellar mass. As shown in Fig. 8, the cluster's stars searched for transits have masses between 1.1 and 0.5 M⊙. On the other hand, the stars in the Fischer & Valenti sample have masses between 1.6 and 0.8 M⊙. If the frequency of giant planets depends on stellar mass, the results by Fischer & Valenti (2005) might not be directly applicable to our sample.
Furthermore, some non-member contamination is certainly
present. As discussed in Section 5, the selection of cluster
members was done photometrically around a fiducial main se-
quence line.
12.4. Possibility of a null result being due to chance
As shown in Sec. 11, the probability that our null result was
simply due to chance was comprised between 3% and 10%,
depending on the metallicity assumed for the cluster. This is
a rather small, but not negligible probability, and other efforts
must be undertaken to reach a firmer conclusion.
13. Comparison of the transit search surveys on
NGC 6791
It is important to compare our results on the presence of planets
with those of other photometric campaigns performed in past
years. We consider in this comparison B03 and M05.
13.1. The Nordic Optical Telescope (NOT) transit
search
As already described in this paper (e.g. see Sect. 2), in July 2001 B03 undertook a transit search on NGC 6791 at the NOT that lasted eight nights. Only seven of these nights were good
enough to search for planetary transits. Their time coverage
was thus comparable to the CFHT data presented here. The
expected number of transits was obtained considering as can-
didates all the stars with photometric precision lower than 2%,
(they did not isolate cluster main sequence stars, as we did,
but they then multiplied their resulting expected numbers by a factor of 85% in order to account for binarity), and as-
suming that the probability that a local G or F-type field star
harbors a close-in giant planet is around 0.7%. With these and
other obvious assumptions B03 expected 0.8 transits from their
survey. However, following Laughlin (2000), they also made the hypothesis that for metal-rich stars the fraction of stars harboring planets is ∼ 10 times greater than for general field stars.
In this way, they would have expected to find “at least a few
candidates with single transits”. In Section 3 we showed how the photometric precision for the NOT was in general of lower quality for the brightest stars with respect to that of SPM and Loiano. This fact can also be recognized in Table 5, where the DSP threshold was always larger than 6.5 when the NOT observations were included. This demonstrates the higher noise level of this data-set. We did not per-
form a detailed analysis of the expected number of transiting planets considering only the NOT data, but, on the basis of our assumptions and of the photometric precision of the NOT data, the numbers shown in Table 8 for the CFHT should be considered as an upper limit for the expected transits from the NOT survey.
B03 reported ten transit events, two of which (identified in B03 as T6 and T10) showed double transit features, while the others were single transits. Except for candidate T2, which was also recovered in our analysis (see Sec. 9.1), our algorithm did not identify any other of the candidates reported by B03.
B03 recognized that most of the candidates were likely spu-
rious, while three cases, referred as T5, T7 and T8, were con-
sidered the most promising ones. We noted that T8 lies off the cluster main sequence. Therefore, it cannot be considered as a planet candidate for NGC 6791. Furthermore, from our CFHT images we noted that this candidate is likely a blended star. The other two candidates were on the main sequence of NGC 6791. Visual inspection of the light curves in Fig. 21 and Fig. 22 also shows no sign of eclipse.
Finally, candidate T9, (Fig. 23) lies off the cluster main
sequence and it was recognized by B03 to be a long-period
low-amplitude variable (V80). In our photometry it shows clear signs of variability: a ∼ 0.05 mag eclipse during the second night of the CFHT campaign at t = 361.8, and probably a partial eclipse at the end of the seventh night of the NOT data-set, at t = 6.4. This rules out the possibility of a planetary transit, because the depth of the eclipse is much larger than what is expected for a planetary transit.
Fig. 21. Composite light curve for candidate 3671, corresponding to T5 of B03. Different symbols have the same meaning as in Fig. 15.
Fig. 22. Composite light curve for candidate 3723, corresponding to T7 of B03.
It is not surprising that almost all of the candidates reported by B03 were not confirmed in our work, even by the NOT photometry itself. Even though the photometric reduction algorithm was the same (image subtraction, see Sec. 3), all the subsequent steps and the selection criteria of the candidates were in general different. This, in turn, reinforces the idea that those candidates are due to spurious photometric effects.
Fig. 23. Composite light curve for candidate 12390, corresponding to T9 of B03.
13.2. The PISCES group extra-Solar planets search
The PISCES group collected 84 nights of observations on
NGC 6791, for a total of ∼ 300 hours of data collection from
July 2001 to July 2003, at the 1.2m Fred Lawrence Whipple
Observatory (M05). Starting from their 3178 cluster mem-
bers (selected considering all the main sequence stars with
RMS≤ 5%), assuming a distribution of planetary radii between
0.95 RJ and 1.5 RJ, and a planet frequency of 4.2%, M05 ex-
pected to detect 1.34 transiting planets in the cluster. They
did not identify any transiting candidate. Their planet frequency is within the range that we assumed (1.4%–5.7%). Our number of candidate main-sequence stars is slightly in excess relative to that of M05, even if their field of view is larger than our own (∼ 23 arcmin² against ∼ 19 arcmin² of the S03 catalog), since we were able to reach ∼ 2 mag deeper with the same photomet-
ric precision level. Their number of expected transiting plan-
ets is of the same order of magnitude as our own because of
their huge temporal coverage. In any case, looking at figure
7 of M05, one should recognize that their detection efficiency
greatly favors planetary radii larger than 1 RJ. A more realistic
planetary radius distribution, for example (1.0± 0.2) RJ, should
significantly decrease their expectations, as recognized by the
same authors.
14. Future investigations
NGC 6791 has been recognized as one of the most promising
targets for studying the planet formation mechanism in clus-
tered environments, and for investigating the planet frequency
as a function of the host star metallicity. Our estimate of the expected number of transiting planets (about 15–20, assuming the metallicity recently derived by means of high-dispersion spectroscopy by Carraro et al. 2006 and Gratton et al. 2006, and the planet frequency derived by Fischer & Valenti 2005) confirms that this is the best open cluster for a planet search.
However, in spite of fairly ambitious observational efforts
by different groups, no firm conclusions about the presence or
lack of planets in the cluster can be reached.
With the goal of understanding the implications of this re-
sult and to try to optimize future observational efforts, we show,
in Table 9, that the number of hours collected on this cluster
with > 3 m telescopes is much lower than the time dedicated
with 1 − 2 m class telescopes. Despite the fact that we were able to obtain adequate photometric precision even with 1 − 2 m class telescopes (see Sec. 3), smaller aperture telescopes are in general located on sites with poorer observing conditions, which limits the temporal sampling, and their photometry is characterized by larger systematic effects. As a result,
the number of cluster stars with adequate photometric precision
for planet transit detections is quite limited. Our study suggests
that more extensive photometry with wide field imagers at 3 to
4-m class telescopes (e.g. CFHT) is required to reach conclu-
sive results on the frequency of planets in NGC 6791.
We calculated that, by extending the observing window to two transit campaigns of ten days each, and provided that the same photometric precision we had at the CFHT could be reached, we could reduce the probability of a null detection to 0.5%.
15. Conclusions
The main purpose of this work was to investigate the problem
of planet formation in stellar open clusters. We focused our at-
tention on the very metal rich open cluster NGC 6791. The idea
that inspired this work was that looking at more metal rich stars one should expect a higher frequency of planets, as has been observed in the solar neighborhood (Santos et al. 2004; Fischer & Valenti 2005). Clustered environments can be regarded as
astrophysical laboratories in which to explore planetary fre-
quency and formation processes starting from a well defined
and homogeneous sample of stars with the advantage that clus-
ter stars have common age, distance, and metallicity. As shown
in Section 2, a huge observational effort has been dedicated
to the study of our target cluster using four different ground
based telescopes (CFHT, SPM, Loiano, and NOT), and trying to take advantage of multi-site simultaneous observations.
In Section 3, we showed how we were able to obtain adequate
photometric precisions for the transit search for all the differ-
ent data-sets (though in different magnitude intervals). From
the detailed simulations described in Section 10, it was demon-
strated that, with our best photometric sequence, and with the
most realistic assumption that the planetary radii distribution is
R = (1.0 ± 0.2)RJ, the expected number of detectable transiting
planets with at least one transit inside our observing window
was around 2, assuming as cluster metallicity [Fe/H]=+0.39,
and around 3 for [Fe/H]= +0.47. Despite the number of ex-
pected positive detections, no significant transiting planetary
candidates were found in our investigation. There was a rather
small, though not negligible, probability that our null result
could simply be due to chance, as explained in Sect. 11: we
estimated that this probability is 10% for [Fe/H]= +0.39, and 3%
for [Fe/H]= +0.47. Possible interpretations for the lack of ob-
served transits (Sect. 12) are a lower frequency of close-in plan-
ets around solar-type stars in cluster environments with respect
to field stars, smaller planetary radii for planets around super
metal rich stars, or some limitations in the assumptions adopted
in our simulations. Future investigations with 3-4m class tele-
scopes are required (Sect 14) to further constrain the planetary
frequency in NGC 6791. Another twenty nights with this kind
of instrumentation are necessary to reach a firm conclusion on
this problem. The uniqueness of NGC 6791, which is the only
galactic open cluster for which we expect more than 10 giant
planets transiting main sequence stars if the planet frequency
is the same as for field stars of similar metallicity, makes such
an effort crucial for exploring the effects of cluster environment
on planet formation.
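The quoted null-result probabilities can be cross-checked with simple Poisson statistics: if N transiting planets are expected, the probability of detecting none is e^(−N). The sketch below is only an illustration of this consistency check; the expected yields used (2.3, 3.5, 5.3) are values chosen to reproduce the quoted percentages, not outputs of the detailed simulations of Sect. 10.

```python
import math

def null_probability(expected_detections: float) -> float:
    """Poisson probability of detecting zero planets given an expected yield."""
    return math.exp(-expected_detections)

# Yields of ~2 and ~3 planets (Sect. 10) correspond to the null-result
# probabilities quoted in Sect. 11 (~10% and ~3%):
p_low = null_probability(2.3)    # ~0.10, cf. [Fe/H] = +0.39
p_high = null_probability(3.5)   # ~0.03, cf. [Fe/H] = +0.47

# Doubling the observing window roughly doubles the expected yield,
# pushing the null-result probability down to the ~0.5% level:
p_extended = null_probability(5.3)
```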
22 M. Montalto et al.: A new search for planet transits in NGC 6791.
Table 9. Number of nights and hours which have been devoted to the study of NGC 6791 as a function of the diameter of the telescope used
for the survey. We adopted a mean of 5 hours of observations per night.
Telescope Diameter(m) Nnights Hours Ref.
FLWO 1.2 84 ∼300 M05
Loiano 1.5 4 20 This paper
SPM 2.2 8 48 This paper
NOT 2.54 7 35 B03 and this paper
CFHT 3.6 8 48 This paper
MMT 6.5 3 15 Hartman et al. (2005)
Acknowledgements. We warmly thank M. Bellazzini and F. Fusi Pecci
for having made possible the run at Loiano Observatory.
This work was partially funded by COFIN 2004 “From stars to plan-
ets: accretion, disk evolution and planet formation” by the Ministero
dell'Università e della Ricerca Scientifica, Italy.
We thank the referee, Dr. Mochejska, for useful comments and sug-
gestions that helped improve the paper.
References
Adams, F.C., Hollenbach, D., Laughlin, G. Gorti, U. 2004, ApJ, 611,
Adams, F.C., Proszkow, E.M., Fatuzzo, M., Myers, P.C. 2006, ApJ,
641, 504
Aigrain, S., Hodgkin, S., Irwin, J., et al. 2006, MNRAS, in press
(astro-ph/0611431)
Alard & Lupton 1998, ApJ, 503, 325
Armitage, P.J. 2000, A&A, 362, 968
Armitage, P.J., Clarke, C.J., Palla, F. 2003, MNRAS, 342, 1139
Baraffe, I., Chabrier, G., Barman, T.S. 2005, A&A 436, L47
Beer, M.E., King, A.R., Pringle, J.E. 2004, MNRAS 355,1244
Bonavita, M. & Desidera, S. 2007, A&A, submitted
Bonnell I.A., Smith, K.W., Davies, M.B., Horne, K. 2001, MNRAS
322, 859
Bramich et al. 2005, MNRAS, 359, 1096-1116
Bruntt, H., Grundahl, F., Tingley, B., et al. 2003, A&A, 410, 323 (B03)
Butler, R.P., Marcy, G.W., Fischer, D.A., et al. 2000, in Planetary
Systems in the Universe: Observation, Formation and Evolution,
ASP Conf Series, Penny A.J. et al. eds
Burke, C.J. et al. 2006, AJ, 132, 210
Carraro, G., Villanova, S., Demarque, P., et al. 2006, ApJ, 643, 1151
Claret, A. 2000, A&A, 363, 1081
Clementini, G., Corwin, T.M., Carney, B.W., Sumerel, A.N. 2004, AJ,
127, 938
Corwin, T.M., Sumerel, A.N., Pritzl, B.J., Barton, J., Smith, H.A.,
Catelan, M. Sweigart, A.V., Stetson, P.M. 2006, AJ, 132, 1014
Davies, M.B. & Sigurdsson, S. 2001 MNRAS, 324, 612
Desidera, S. & Barbieri, M. 2007 A&A, 462, 345
Fischer, D.A. & Valenti, J. 2005, ApJ, 622, 1102
Fregeau, J.M., Chatterjee, S., Rasio, F.A. 2006, ApJ, 640, 1086
Gaudi, B. S., 2005a, ApJ, 628, 73
Gaudi, B. S., Seager, S., & Mallen-Ornelas, G. 2005b, ApJ, 623, 472
Gilliland, R.L., Brown, T.M., Guhathakurta, P., et al. 2000, ApJ, 545,
Girardi, L., Bertelli, G., Bressan, A., et al. 2002, A&A, 391, 195
Gratton, R., Bragaglia, A., Carretta, E., Tosi, M., 2006, ApJ, 642, 462
Guillot, T., Santos, N.C., Pont, F. et al. 2006, A&A, 453, L21
Hartman, J.D., Stanek, K.Z., Gaudi, B.S. 2005, AJ, 130, 2241
Hatzes, A.P., Cochran, W.D., Endl, M., et al. 2003, ApJ, 599, 1383
Hatzes, A.P. & Wüchterl, G. 2005, Nat, 436, 182
Kaluzny, J., Olech, A., Stanek, K.Z. 2001, AJ, 121, 1533
King, I.R., Bedin L. R., Piotto, G. et al. 2005, AJ, 130, 626
Kjeldsen & Frandsen 1992, PASP, 104, 413
Konacki, M. 2005, Nat, 436, 230
Kovács, G., Zucker, S., Mazeh, T. 2002, A&A, 391, 369
Kovács, G. & Bakos, G. 2005, astro-ph/0508081
Laughlin, G. 2000, ApJ 545, 1064
Mochejska, B.J., Stanek, K.Z., Sasselov, D.D., Szentgyorgyi, A.H.
2002, AJ, 123, 3460
Mochejska, B.J., Stanek, K.Z., Sasselov, D.D., Szentgyorgyi, A. H.,
Westover, M., Winn, J.N. 2004, AJ, 128, 312
Mochejska, B.J., Stanek, K.Z., Sasselov D.D. et al. 2005 AJ, 129,
2856 (M05)
Mochejska, B.J., Stanek, K.Z., Sasselov D.D. et al. 2006, AJ, 131,
Olech, A., Woźniak, P.R., Alard, C. Kaluzny, J., Thompson, I.B. 1999,
MNRAS, 310, 759
Paulson, D., Cochran, W.D. & Hatzes, A.P. 2004, AJ, 127, 3579
Peterson, R.C. & Green, E.M. 1998, ApJ, 502, L39
Piotto, G. & Zoccali, M., 1999, A&A, 345, 485
Pont, F., Zucker, S., Queloz, D., 2006, MNRAS, 373, 231
Portegies Zwart, S.F.P., McMillan, S.L.W. 2005, ApJ, 633, L141
Santos, N.C., Israelian, G., Mayor, M. 2004, A&A, 415, 1153
Sato, B., Fischer, D., Henry, G., et al. 2005, ApJ, 633, 465
Sato, B. et al. 2007, ApJ preprint doi:10.1086/513503
Sigurdsson, S., Richer, H.B., Hansen, B.M., Stairs, I.H., Thorsett, S.E.
2003, Science, 301, 193
Sozzetti, A. 2004, MNRAS, 354, 1194
Stetson, P. B. 1987, PASP, 99, 191
Stetson, P. B. 1992, in ASP Conference Series vol.25, Astronomical
Data Analysis Software and Systems I. ed. D.M. Worrall, C.
Biemesderfer, & J. Barnes (Astronomical Society of Pacific: San
Francisco), p.291
Stetson, P. B. 1994, PASP, 106, 250
Stetson, P. B., Bruntt, H., Grundahl, F., 2003 PASP, 115, 413
Street, R.A., Horne, K., Lister, T.A., et al. 2003, MNRAS, 340, 1287
Tamuz, O., Mazeh, T., Zucker, S., 2005, MNRAS, 356, 1166
Taylor, B.J. 2001, A&A, 377, 473
Tingley, B. 2003a, A&A, 403, 329
Tingley, B. 2003b, A&A, 408, L5
von Braun, K., Lee, B.L., Seager, S, et al. 2005, PASP, 117, 141
Weldrake, D.T.F., Sackett, P.D., Bridges T.J., Freeman, K.C., 2005,
ApJ, 620, 1043
Weldrake, D.T.F., Sackett, P.D., Bridges, T.J. 2006, astro-ph/0612215
Wittenmyrer, R.A., Welsh W.F., Orosz, J.A. et al. 2005, ApJ 632, 1157
Woolfson, M.M. 2004, MNRAS, 348, 1150
|
0704.1669 | Possible polarisation and spin dependent aspects of quantum gravity | Possible polarisation and spin dependent aspects
of quantum gravity
D. V. Ahluwalia-Khalilova, N. G. Gresnigt, Alex B. Nielsen,
D. Schritt, T. F. Watson
Department of Physics and Astronomy, Rutherford Building
University of Canterbury
Private Bag 4800, Christchurch 8020, New Zealand
E-mail: [email protected]
We argue that quantum gravity theories that carry a Lie algebraic modification
of the Poincaré and Heisenberg algebras inevitably provide inhomogeneities that
may serve as seeds for cosmological structure formation. Furthermore, in this class
of theories one must expect a strong polarisation and spin dependence of various
quantum-gravity effects.
1. Introduction— Quantum gravity proposals often come with a modification of the
Heisenberg, and Poincaré, algebras. Confining ourselves to Lie algebraic modifications,
we argue that the underlying physical space of all such theories must be inhomogeneous.
In order to establish this result, we first review how, within a quantum framework, the
homogeneity and continuity of physical space lead inevitably to the Heisenberg algebra.
We then review general arguments that hint towards algebraic modifications encountered
in quantum gravity proposals. Next, we argue that a natural extension of physical laws
to the Planck scale can be obtained by a Lie algebraic modification of the Poincaré and
Heisenberg algebras in such a way that the resulting algebra is immune to infinitesimal
perturbations in its structure constants. With the context so chosen, we establish the
main thesis: that quantum gravity theories of the aforementioned class inevitably provide
inhomogeneities that may serve as seeds for structure formation; and that quantum
gravity induced effects may carry a strong polarisation and spin dependence.
The established results are not restricted to the chosen algebra but may easily be
extended to all Lie algebraic modifications that alter the Heisenberg algebra.1
2. Homogeneity and continuity of physical space, and its imprint in the Heisenberg
algebra— In order to understand the fundamental origin of primordial inhomogeneities
we will first review the fundamental connection between the homogeneity and continuity
of physical space and the Heisenberg algebra. It is in this spirit that we remind our
reader of an argument that is presented, for example, by Isham in [1, Section 7.2.2].
There it is shown that, in the general quantum mechanical framework, and under the
following two assumptions,
— physical space is homogeneous,
— any spatial distance r can be divided into two equal parts, r = r/2 + r/2,
it necessarily follows that the operator x associated with position measurements along
the x-axis, and the generator of displacements dx along the x-direction, satisfy [x, dx] = i.
If one now requires consistency with the elementary wave mechanics of Heisenberg,
one must identify dx with px/h̄ (px is the operator associated with momentum mea-
surements along the x-direction). This gives, [x, px] = ih̄. Without any additional
assumptions, the argument easily generalises to yield the entire Heisenberg algebra
[xj, pk] = ih̄δjk, [pj, pk] = 0, [xj, xk] = 0, where xj, j = 1, 2, 3, are the position
operators associated with the three coordinate axes, where the observer is assumed to
be located at the origin of the coordinate system.
Thus it is evident that a quantum description of physical reality, with spatial homo-
geneity and continuity, inevitably leads to the Heisenberg algebra.
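As a concrete check of the commutator this argument leads to, one can represent the momentum operator as the displacement generator p = −ih̄ d/dx acting on a wave function. The following sketch (a numerical illustration of ours, not part of the original argument) verifies that ([x, p]f)(x) = ih̄ f(x) for a smooth test function, approximating the derivative by central finite differences.

```python
import math

hbar = 1.0  # work in units where hbar = 1

def f(x):
    # an arbitrary smooth, rapidly decaying test function
    return math.exp(-x * x) * math.cos(x)

def deriv(g, x, h=1e-6):
    # central finite difference for g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

def commutator_xp(g, x):
    """Evaluate ([x, p] g)(x) with p = -i*hbar*d/dx."""
    p_g = -1j * hbar * deriv(g, x)     # (p g)(x)
    xg = lambda t: t * g(t)            # (x g)(t)
    p_xg = -1j * hbar * deriv(xg, x)   # (p x g)(x)
    return x * p_g - p_xg

# [x, p] f = i*hbar*f, up to finite-difference error:
residual = abs(commutator_xp(f, 0.7) - 1j * hbar * f(0.7))
```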
3. On the need to go beyond the Heisenberg and Poincaré algebraic-based description
of physical reality— From an algebraic point of view much of the success of modern
physics can be traced back to the Poincaré and Heisenberg algebras. Had the latter
algebra been discovered before the former, the conceptual formulation and evolution of
theoretical physics would have been significantly different. For instance, it is a direct
implication of Heisenberg’s fundamental commutator [xi, pj] = ih̄δij (with i, j = 1, 2, 3),
that events should be characterised not only by their spatiotemporal location xµ, but also
by the associated energy momentum pµ; and that should be done in a manner consistent
with the fundamental measurement uncertainties inherent in the formalism. The reader
may wish to come back to these remarks in the context of Eq. (16) where one shall
find that in a specific sense the physical space that underlies the conformal algebra does
indeed combine the notions of spacetime and energy momentum. Furthermore, as will
be seen from Eq. (18) and the subsequent remarks, this interplay becomes increasingly
important as we consider the early universe above ≈ 100 GeV.

1A slightly weaker argument can be constructed for non-Lie algebraic proposals when we confine
ourselves to probed distances significantly larger than the length scale associated with the loss of spatial
continuity.
In the mentioned description the interplay of the general relativistic and quantum
mechanical frameworks becomes inseparably bound. To see this, consider the well-known
thought experiment to probe spacetime at spatial resolutions around the Planck length
ℓP ≈ √(h̄G/c³). If one does that, one ends up creating a Planck mass mP ≈ √(h̄c/G)
black hole. This fleeting structure carries a temperature T ≈ 10³⁰ K and evaporates
in a thermal explosion in ≈ 10⁻⁴⁰ seconds. This, incidentally, is a long time – about
ten thousand fold the Planck time τP ≈ √(h̄G/c⁵). The formation and evaporation of
the black hole places a fundamental limit on the spatiotemporal resolution with which
spacetime can be probed.
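The orders of magnitude in this thought experiment follow from standard formulas. The sketch below is our own cross-check; the Hawking temperature T = h̄c³/(8πGMk) and the evaporation time t ≈ 5120πG²M³/(h̄c⁴) are standard results assumed here, not derived in the text.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s
c = 2.998e8       # speed of light, m/s
k_B = 1.381e-23   # Boltzmann constant, J/K

l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
m_P = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg
t_P = math.sqrt(hbar * G / c**5)   # Planck time,   ~5.4e-44 s

# Hawking temperature and evaporation time of a Planck-mass black hole:
T_H = hbar * c**3 / (8 * math.pi * G * m_P * k_B)        # ~1e30 K
t_evap = 5120 * math.pi * G**2 * m_P**3 / (hbar * c**4)  # ~1e-39 s

# t_evap / t_P = 5120*pi, i.e. roughly ten thousand Planck times:
ratio = t_evap / t_P
```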
The authors of [2, 3] have argued that once gravitational effects associated with
the quantum measurement process are accounted for, the Heisenberg algebra, and in
particular the commutator [xj, pk], must be modified. The role of gravity in the quantum
measurement process was also emphasised by Penrose [4].
From the above discussion, we take it as suggestive that an operationally-
defined view of physical space (or, its generalisation) shall inevitably ask for
the length scale ℓP to play an important role.
In the context of the continuity of physical space we will take it as a working hypothesis
that, just as a lack of commutativity of the x and px operators does not render the
associated eigenvalues discrete, similarly the existence of a non-vanishing ℓP does not
necessarily make the underlying space lose its continuum nature. This is a highly non-
trivial issue requiring a detailed discussion from which we here refrain; yet, an element
of justification shall become apparent below.
From a dynamical point of view, as early as late 1800’s, the symmetries of Maxwell’s
equations were already suggesting a merger of space and time into one physical entity,
spacetime [5]. Algebraically, these symmetries are encoded in the Poincaré algebra. The
emergent unification of space and time called for a new fundamental invariant, c, the
speed of light (already contained in Maxwell’s equations). From an empirical point
of view, the Michelson-Morley experiment established the constancy of the speed of
light for all inertial observers, and thus re-confirmed, in the Einsteinian framework, the
implications of the Poincaré spacetime symmetries.
Concurrently, we note that while in classical statistical mechanics it is the volume that
determines the number of accessible states and hence the entropy, the situation is dramat-
ically different in a gravito-quantum mechanical setting. One example of this assertion
may be found in the well-known Bekenstein-Hawking entropy result for a Schwarzschild
black hole, SBH = (k/4)(A/ℓP²), where k is the Boltzmann constant, and A is the surface
area of the sphere contained within the event horizon of the black hole. Thus quan-
tum mechanical and gravitational realms conspire to suggest the holographic conjecture
[6, 7, 8]. The underlying physics is perhaps two fold: (a) contributions from higher
momenta in quantum fields to the number of accessible states is dramatically reduced
because these are screened by the associated event horizons; and (b) the accessible states
for a quantum system are severely influenced by the behaviour of the wave function at
the boundary.
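For concreteness, the area scaling of the Bekenstein-Hawking entropy can be evaluated for a solar-mass Schwarzschild black hole. This sketch of ours assumes the standard horizon geometry rs = 2GM/c² and A = 4πrs², which are not stated in the text.

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
hbar = 1.055e-34  # J s
c = 2.998e8       # m/s
k_B = 1.381e-23   # J/K
M_sun = 1.989e30  # kg

l_P_sq = hbar * G / c**3             # squared Planck length
r_s = 2 * G * M_sun / c**2           # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2             # horizon area

# S_BH = (k/4) (A / l_P^2): entropy grows with area, not enclosed volume.
S_BH = (k_B / 4) * (A / l_P_sq)      # ~1.4e54 J/K for one solar mass
```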
From this discussion, we take it as suggestive that in quantum cosmology/gravity
the new operationally-defined view of physical space shall inevitably ask for a
cosmological length scale, ℓC.
These observations prepare us to reach the next trail in our essay.
In the immediate aftermath of cosmic creation with the big bang, the physical reality
knew of no inertial frames of Einstein. This is due to the fact that massive particles had
yet to appear on the scene. The spacetime symmetries at cosmic creation are encoded in
the conformal algebra. So, whatever new operational view of spacetime emerges, it must
somehow also incorporate a process by which one evolves from the “conformal phase” of
the universe at cosmic creation to the present (see Fig. 1).
Algebraically, we take it to suggest that there must be a mechanism that
describes how the present day Poincaré-algebraic description relates to the
conformal-algebraic description of the universe at its birth.
We parenthetically note that in the conformal phase, where leptons and quarks were
yet to acquire mass (through the Higgs mechanism, or something of that nature), the
operationally-accessible symmetries are not Poincaré but conformal. This is so because
to define rest frames, so essential for operationally establishing the Poincaré algebra, one
needs massive particles. In the transition when massive particles come to exist, the local
algebraic symmetries of general relativity suffer an operational change. Consequently,
for the cosmic epoch before ≈ 100 GeV the general relativistic description of physical
reality might require modification.
4. A new algebra for quantum gravity and the emergent inhomogeneity of physical
space— Mathematically, a Lie algebra incorporating the three italicised items in Sec. 3
already exists. It was inspired by Faddeev’s mathematical analysis of the quantum and
relativistic revolutions of the last century [9] and was followed up by Vilela Mendes in
his 1994 paper [10]. The uniqueness of the said algebra was then explored through a
Lie-algebraic investigation of its stability by Chryssomalakos and Okon, in 2004 [11].
Some of the physical implications were subsequently explored in Refs. [12, 13], and its
Clifford-algebraic representation was provided by Gresnigt et al. [14]. Its importance
was further noted in CERN Courier [15].
However, its candidacy for the algebra underlying quantum cosmology/gravity has
been difficult to assert. This is essentially due to a perplexing observation made in
Ref. [11] regarding the interpretation of the operators associated with the spacetime
events. In this essay we overcome this interpretational hurdle and argue that it contains
all the desired features for such an algebra.
To this end we first write down what has come to be known as the Stabilised Poincaré-
Heisenberg Algebra (SPHA) and then proceed with the interpretational issues. The
SPHA contains the Lorentz sector (we follow the widespread physics convention which
takes the Jµν as dimensionless and Pν as dimensionful)
[Jµν ,Jρσ] = i (ηνρJµσ + ηµσJνρ − ηµρJνσ − ηνσJµρ) (1)
This remains unchanged (as is strongly suggested by the analysis presented in [16]), as
does the commutator
[Jµν ,Pλ] = i (ηνλPµ − ηµλPν) (2)
These are supplemented by the following modified sector
[Jµν ,Xλ] = i (ηνλXµ − ηµλXν) (3)
[Pµ,Pν ] = iqα1Jµν (4)
[Xµ,Xν ] = iqα2Jµν (5)
[Pµ,Xν ] = iqηµνI + iqα3 Jµν (6)
[Pµ, I] = iα1Xµ − iα3Pµ (7)
[Xµ, I] = iα3Xµ − iα2Pµ (8)
[Jµν , I] = 0 (9)
The metric ηµν is taken to have the signature (1,−1,−1,−1). The SPHA is stable,
except for the instability surface defined by α3² = α1α2 (see Fig. 2). Away from the
instability surface the SPHA is immune to infinitesimal perturbations in its structure
constants. This distinguishes SPHA from many of the competing algebraic structures
because a physical theory based on such an algebra is likely to be free from “fine tuning”
problems. This is essentially self evident because if an algebraic structure does not carry
this immunity, one can hardly expect the physical theory based upon such an algebra to
enjoy the opposite.
The SPHA involves three parameters α1, α2, α3. The c and h̄ arise in the process of
the Lie algebraic stabilisation that takes us from the Galilean relativity to Einsteinian
relativity, and from classical mechanics to quantum mechanics. Their specific values are
fixed by experiment. Similarly, α1, α2, α3 owe their origin to a similar stabilisation of the
combined Poincaré and Heisenberg algebra.
Except for the fact that α1 must be a measure of the size of the observable universe
(here assumed to be operationally determined from the Hubble parameter), the Lie
algebraic procedure for obtaining SPHA does not determine α1, α2, α3. Dimensional and
phenomenological considerations, along with the requirement that we obtain physically
viable limits, suggest the following identifications:2

α1 := h̄/ℓC² , α2 := ℓP²/h̄ , q := h̄ (10)

where ℓC is of the order of the Hubble radius, and therefore it depends on the cosmic
epoch. The introductory remarks, and existing data suggest that [11]
In the limit ℓP → 0, ℓC → ∞, β → 0, I → I, the identity operator, the SPHA splits
into Heisenberg and Poincaré algebras. In that limit, the symbols Xµ → xµ, Pµ → pµ,
Jµν → Jµν, and I → I. Thus xµ, pµ, Jµν, I acquire their traditional meaning, while
Xµ, Pµ, Jµν, I are to be considered their generalisations. In particular, xµ should then be
interpreted as the generator of energy-momentum translation. The latter parallels the
canonical interpretation of pµ as the generator of spacetime translation. This interpre-
tation, we believe, removes the problematic interpretational aspects associated with Xµ
in the analysis of Ref. [11].

2In making the identifications it is understood that these may be true up to a multiplicative factor
of the order of unity.
The identification of q with h̄ is dictated by the demand that we recover the Heisen-
berg algebra. It also suggests that at the present cosmic epoch α3 should not allow the
second term in the right hand side of equation (6) to have a significant contribution. It
will become apparent below that α3 is intricately connected to the conformal algebraic
limit of SPHA. With these identifications, and with α3 renamed as the dimensionless
parameter β, the SPHA takes the form
[Jµν ,Jρσ] = i (ηνρJµσ + ηµσJνρ − ηµρJνσ − ηνσJµρ) (12)
[Jµν ,Pλ] = i (ηνλPµ − ηµλPν) , [Jµν ,Xλ] = i (ηνλXµ − ηµλXν) (13)
[Pµ,Pν ] = i
h̄2/`2C
Jµν , [Xµ,Xν ] = i`2PJµν , [Pµ,Xν ] = ih̄ηµνI + ih̄β Jµν (14)
[Pµ, I] = i
h̄/`2C
Xµ − iβPµ, [Xµ, I] = iβXµ − i
`2P/h̄
Pµ, [Jµν , I] = 0 (15)
Since cosmic creation began with massless particles, it should be encouraging if in
some limit SPHA reduced to the conformal algebra. This is indeed the case. It follows
from a somewhat lengthy, though simple, exercise. Towards examining this question we
introduce two new operators
P̃µ = aPµ + bXµ, X̃µ = a′Xµ + b′Pµ (16)
and find that if the introduced parameters a, b, a′, b′ satisfy the following conditions

b = (h̄(1 − β)/ℓP²) a , a′ = ℓP²/(a ℓC²(1 − β)) , b′ = ℓP²/(a h̄) , (17)

with β² restricted to the value 1 + (ℓP²/ℓC²), then SPHA written in terms of P̃µ and X̃µ
satisfies the conformal algebra [17, Sec. 4.1].
Using these results, we can re-express P̃µ and X̃µ in a fashion that supports the view
taken in the opening paragraph of this section
P̃µ = a (Pµ + (h̄(1 − β)/ℓP²) Xµ) , X̃µ = a′ (Xµ + (ℓC²(1 − β)/h̄) Pµ) , (18)

β² = 1 + ℓP²/ℓC² . (19)
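Demanding [P̃µ, P̃ν] = 0 fixes the mixing ratio b/a. From the brackets in Eq. (14), the coefficient of Jµν in [aPµ + bXµ, aPν + bXν] is a²h̄²/ℓC² + b²ℓP² + 2abh̄β. The sketch below (our own numerical check, with illustrative order-unity values) verifies that b/a = h̄(1 − β)/ℓP², together with β² = 1 + ℓP²/ℓC², cancels this coefficient, and reproduces the limiting values of β discussed in the text.

```python
import math

hbar, l_P, l_C = 1.0, 0.7, 1.3  # illustrative values; hbar set to unity
beta = math.sqrt(1.0 + l_P**2 / l_C**2)   # the constraint beta^2 = 1 + l_P^2/l_C^2

a = 1.0
b = a * hbar * (1.0 - beta) / l_P**2      # mixing ratio chosen to close the algebra

# J_{mu nu} coefficient of [P~_mu, P~_nu], assembled from Eq. (14):
closure = a**2 * hbar**2 / l_C**2 + b**2 * l_P**2 + 2 * a * b * hbar * beta

# Limits discussed in the text: beta -> sqrt(2) when l_C ~ l_P (big bang),
# beta -> 1 when l_C >> l_P (present epoch).
beta_bigbang = math.sqrt(1.0 + 1.0)
beta_today = math.sqrt(1.0 + (1e-35 / 1e26)**2)
```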
Near the big bang, ℓC ≈ ℓP and thus β → ±√2 (see Fig. 1). This results in a significant
mixing of the Xµ and Pµ in the conformal algebraic description in terms of X̃µ and P̃µ.
In contrast, hypothetically, had we been on the conformal surface at present then
taking ℓC ≫ ℓP makes β → ±1. Consequently, for β → +1, P̃µ becomes identical to
Pµ up to a multiplicative scale factor a. Similarly, X̃µ becomes identical to Xµ up to a
multiplicative scale factor a′. As is evident from Eq. (17), the multiplicative scale factors
a and a′ are constrained by the relation aa′ = ℓP²/(ℓC²(1 − β)). We expect that similar
modifications to spacetime symmetries would occur if we were to explore it at Planckian
energies in the present epoch. For β → −1 (ℓC ≫ ℓP), one again obtains significant
mixing of the Xµ and Pµ.
By containing ℓP and ℓC, the SPHA unifies the extreme microscopic with the extreme
macroscopic, i.e., the cosmological. In the early universe it allows for the existence of
conformal symmetry. The significant departure from the Heisenberg algebra at big bang,
yields primordial inhomogeneities in the underlying physical space and the quantum fields
that it supports. The latter is an unavoidable consequence of the discussion presented
in Sec. 2.3
5. Polarisation and spin dependence of the cosmic inhomogeneities and other quantum
gravity effects— A careful examination of SPHA presented in equations (12-15) reveals
a strong Jµν dependence of the modifications to the Heisenberg algebra. Physically, this
translates to the following representative implications
— The induced primordial cosmic inhomogeneities are dependent on spin and polari-
sation of the fields for which these are calculated.
— The operationally-inferred commutativity/non-commutativity of the physical space
depends on the spin and polarisation of the probing particle.
— The just enumerated observation implies that a violation of equivalence principle
is inherent in the SPHA based quantum gravity.
3Any one of the other suggestions in quantum gravity that modify the Heisenberg algebra (see, e.g.,
references [18]-[28]) carry similar implications for homogeneity and isotropy of the physical space.
— Since the Heisenberg algebra uniquely determines the nature of the wave-particle
duality [27, 28] (including the de Broglie result “λ = h/p”), it would undergo spin
and polarisation dependent changes in quantum gravity based on SPHA.
All these results carry over to any theory of quantum gravity that modifies the Heisenberg
algebra with a Jµν dependence.
6. Conclusion— In this essay we have motivated a new candidate for the algebra which
may underlie a physically viable and consistent theory of quantum cosmology/gravity.
Besides yielding an algebraic unification of the extreme microscopic and cosmological
scales, it generalises the notion of conformal symmetry. The modifications to the Heisen-
berg algebra at the present cosmic epoch are negligibly small; but when ℓC and ℓP are of
the same order (i.e., at, and near, the big bang), the induced inhomogeneities are intrinsic
to the nature of physical space. These can then be amplified by the cosmic evolution
and result in important back reaction effects [29, 30, 31, 32]. An important aspect of
the SPHA-based quantum gravity is that it inevitably provides inhomogeneities that
may serve as an important ingredient for structure formation [33]. Furthermore, in this
class of theories one must expect a strong polarisation and spin dependence of various
quantum-gravity effects.
Acknowledgements— We wish to thank Daniel Grumiller and Peter West for their in-
sightful questions and suggestions.
References
[1] C. J. Isham, Lectures on quantum theory: Mathematical and structural foundations
(Imperial College Press, Singapore, 1995).
[2] D. V. Ahluwalia, “Quantum measurements, gravitation, and locality,” Phys. Lett.
B 339 (1994) 301 [arXiv:gr-qc/9308007].
[3] S. Doplicher, K. Fredenhagen and J. E. Roberts, “Space-time quantization induced
by classical gravity,” Phys. Lett. B 331 (1994) 39.
[4] R. Penrose, “On gravity’s role in quantum state reduction,” Gen. Rel. Grav. 28
(1996) 581.
[5] H. R. Brown, Physical relativity: Space-time structure from a dynamical perspec-
tive (Oxford University Press, Oxford, 2005).
[6] G. ’t Hooft, “Dimensional reduction in quantum gravity,” arXiv:gr-qc/9310026.
[7] L. Susskind, “The world as a hologram,” J. Math. Phys. 36 (1995) 6377 [arXiv:hep-
th/9409089].
[8] S. de Haro, Quantum gravity and the holographic principle (Universal press -
Science publishers, Veenendaal, 2001)
[9] L. D. Faddeev, Mathematician’s view on the development of physics, Frontiers in
Physics: High technology and mathematics ed H. A. Cerdeira and S. Lundqvist
(Singapore: Word Scientific, 1989) pp. 238-46
[10] R. Vilela Mendes, “Deformations, stable theories and fundamental constants,” J.
Phys. A 27 (1994) 8091.
[11] C. Chryssomalakos and E. Okon, Int. J. Mod. Phys. D 13 (2004) 2003 [arXiv:hep-
th/0410212].
[12] D. V. Ahluwalia-Khalilova, “A freely falling frame at the interface of gravitational
and quantum realms,” Class. Quant. Grav. 22 (2005) 1433 [arXiv:hep-th/0503141].
[13] D. V. Ahluwalia-Khalilova, “Minimal spatio-temporal extent of events, neutrinos,
and the cosmological constant problem,” Int. J. Mod. Phys. D 14 (2005) 2151
[arXiv:hep-th/0505124].
[14] N. G. Gresnigt, P. F. Renaud and P. H. Butler, “The stabilized Poincare-Heisenberg
algebra: A Clifford algebra viewpoint,” arXiv:hep-th/0611034.
[15] S. Reucroft and J. Swain, “Special relativity becomes more general”, CERN
Courier, July/August 2005 p.9.
[16] J. Collins, A. Perez, D. Sudarsky, L. Urrutia and H. Vucetich, “Lorentz invariance:
An additional fine-tuning problem,” Phys. Rev. Lett. 93 (2004) 191301 [arXiv:gr-
qc/0403053].
[17] P. Di Francesco, et al., Conformal field theory (Springer, New York, 1997).
[18] L. J. Garay, “Quantum gravity and minimum length,” Int. J. Mod. Phys. A 10
(1995) 145 [arXiv:gr-qc/9403008].
[19] G. Veneziano, “A stringy nature needs just two constants,” Europhys. Lett. 2
(1986) 199.
[20] R. J. Adler and D. I. Santiago, Mod. Phys. Lett. A 14 (1999) 1371 [arXiv:gr-
qc/9904026].
[21] S. de Haro Olle, “Noncommutative black hole algebra and string theory from grav-
ity,” Class. Quant. Grav. 15 (1998) 519 [arXiv:gr-qc/9707042].
[22] G. Amelino-Camelia, “Classicality, matter-antimatter asymmetry, and quantum
gravity deformed uncertainty relations,” Mod. Phys. Lett. A 12 (1997) 1387
[arXiv:gr-qc/9706007].
[23] M. Maggiore, “A Generalized uncertainty principle in quantum gravity,” Phys.
Lett. B 304 (1993) 65 [arXiv:hep-th/9301067].
[24] N. Sasakura, “An uncertainty relation of space-time,” Prog. Theor. Phys. 102
(1999) 169 [arXiv:hep-th/9903146].
[25] F. Scardigli, “Generalized uncertainty principle in quantum gravity from micro-
black hole gedanken experiment,” Phys. Lett. B 452 (1999) 39 [arXiv:hep-
th/9904025].
[26] S. Capozziello, G. Lambiase and G. Scarpetta, “Generalized uncertainty principle
from quantum geometry,” Int. J. Theor. Phys. 39 (2000) 15 [arXiv:gr-qc/9910017].
[27] A. Kempf, G. Mangano and R. B. Mann, Phys. Rev. D 52 (1995) 1108 [arXiv:hep-
th/9412167].
[28] D. V. Ahluwalia, “ Wave particle duality at the Planck scale: Freezing of neutrino
oscillations,” Phys. Lett. A 275 (2000) 31 [arXiv:gr-qc/0002005].
[29] T. Buchert, “A cosmic equation of state for the inhomogeneous universe: Can a
global far-from-equilibrium state explain dark energy?,” Class. Quant. Grav. 22
(2005) L113 [arXiv:gr-qc/0507028].
[30] D. L. Wiltshire, “Cosmic clocks, cosmic variance and cosmic averages,” arXiv:gr-
qc/0702082.
[31] S. Rasanen, “Accelerated expansion from structure formation,” JCAP 0611 (2006)
003 [arXiv:astro-ph/0607626].
[32] A. Ishibashi and R. M. Wald, “Can the acceleration of our universe be explained
by the effects of inhomogeneities?,” Class. Quant. Grav. 23 (2006) 235 [arXiv:gr-
qc/0509108].
[33] A. Perez, H. Sahlmann and D. Sudarsky, “On the quantum origin of the seeds of
cosmic structure,” Class. Quant. Grav. 23 (2006) 2317 [arXiv:gr-qc/0508100].
Figure 1: This figure is a cut, at ℓP = 1 (with h̄ set to unity), of Fig. 2, and it schematically shows the cosmic evolution along two possible scenarios. For this purpose, only β ≥ 0 values have been taken; the β < 0 sector can easily be inferred from symmetry considerations. In one of the scenarios the conformal symmetry of the early universe is lost without crossing the instability surface, while in the other it crosses that surface. In the latter case the algebra changes [11] from so(2, 4) to so(1, 5). This crossover, we speculate, may be related to the mass-generating process of spontaneous symmetry breaking (SSB) of the standard model of high energy physics. The big bang is here identified with ℓC ≈ ℓP.
Figure 2: The unmarked arrow is the ℓP² (= h̄α2) axis. The Poincaré-Heisenberg algebra corresponds to the origin of the parameter space, which coincides with the apex of the instability cone. In reference to Eq. (10), note that ℓC² = h̄/α1. Here, β is a dimensionless parameter that corresponds to a generalisation of the conformal algebra. The SPHA lives in the entire (ℓC, ℓP, β) space except for the surface of instability. The SPHA becomes conformal for all values of (ℓC, ℓP, β) that lie on the “conformal surface”.
|
0704.1670 | On the support genus of a contact structure | ON THE SUPPORT GENUS OF A CONTACT STRUCTURE
MEHMET FIRAT ARIKAN
Abstract. The algorithm given by Akbulut and Ozbagci constructs an explicit open
book decomposition on a contact three-manifold described by a contact surgery on a link
in the three-sphere. In this article, we will improve this algorithm by using Giroux’s con-
tact cell decomposition process. In particular, our algorithm gives a better upper bound
for the recently defined “minimal supporting genus invariant” of contact structures.
1. Introduction
Let (M, ξ) be a closed oriented contact 3-manifold, and let (Σ, h) be an open book (de-
composition) of M which is compatible with the contact structure ξ (sometimes we also
say that (Σ, h) supports ξ). Based on the correspondence theorem (see Theorem 2.3)
between contact structures and their supporting open books, the topological invariant
sg(ξ) was defined in [EO]. More precisely, we have
sg(ξ) = min{ g(Σ) | (Σ, h) an open book decomposition supporting ξ}
called supporting genus of ξ. There are some partial results for this invariant. For instance,
we have:
Theorem 1.1 ([Et1]). If (M, ξ) is overtwisted, then sg(ξ) = 0.
Unlike the overtwisted case, there is not much known yet for sg(ξ) when ξ is tight. On
the other hand, if we, furthermore, require that ξ is Stein fillable, then an algorithm to
find an open book supporting ξ was given in [AO]. Although their construction is explicit,
the pages of the resulting open books arise as Seifert surfaces of torus knots or links, and
so this algorithm is far from even approximating the numbers sg(ξ). In [St], the same
algorithm was generalized to the case where ξ need not be Stein fillable (or even tight),
but the pages are still of large genera.
This article is organized as follows: After the preliminaries (Section 2), in Section 3
we will present an explicit construction of a supporting open book (with considerably
less genus) for a given contact surgery diagram of any contact structure ξ. Of course,
because of Theorem 1.1, our algorithm makes more sense for the tight structures than
the overtwisted ones. Moreover, it depends on a choice of the contact surgery diagram
describing ξ. Nevertheless, it gives better and more reasonable upper bound for sg(ξ)
(when ξ is tight) as we will see from our examples in Section 4.
Let L be any Legendrian link given in (R3, ξ0 = ker(α0 = dz + xdy)) ⊂ (S3, ξst). L can
be represented by a special diagram D called a square bridge diagram of L (see [Ly]). We
will consider D as an abstract diagram such that
(1) D consists of horizontal line segments h1, ..., hp, and vertical line segments v1, ..., vq
for some integers p ≥ 2, q ≥ 2,
The author was partially supported by NSF Grant DMS0244622.
http://arxiv.org/abs/0704.1670v4
2 MEHMET FIRAT ARIKAN
(2) there is no collinearity in {h1, . . . , hp}, and in {v1, . . . , vq}.
(3) each hi (resp., each vj) intersects two vertical (resp., horizontal) line segments of
D at its two endpoints (called corners of D), and
(4) any interior intersection (called junction of D) is understood to be a virtual cross-
ing of D where the horizontal line segment is passing over the vertical one.
We depict Legendrian right trefoil and the corresponding D in Figure 1.
Figure 1. The square bridge diagram D (with p = q = 5) for the Legendrian right trefoil
Clearly, for any front projection of a Legendrian link, we can associate a square bridge
diagram D. Using such a diagram D, the following two facts were first proved in [AO],
and later made more explicit in [Pl]. Below versions are from the latter:
Lemma 1.2. Given a Legendrian link L in (R3, ξ0), there exists a torus link Tp,q (with p
and q as above) transverse to ξ0 such that its Seifert surface Fp,q contains L, dα0 is an
area form on Fp,q, and L does not separate Fp,q.
Proposition 1.3. Given L and Fp,q as above, there exists an open book decomposition of
S3 with page Fp,q such that:
(1) the induced contact structure ξ is isotopic to ξ0;
(2) the link L is contained in one of the pages Fp,q, and does not separate it;
(3) L is Legendrian with respect to ξ;
(4) there exists an isotopy which fixes L and takes ξ to ξ0, so the Legendrian type of
the link is the same with respect to ξ and ξ0;
(5) the framing of L given by the page Fp,q of the open book is the same as the contact
framing.
Being a Seifert surface of a torus link, Fp,q is of large genus. In Section 3, we will construct
another open book OB supporting (S3, ξst) such that its page F arises as a subsurface of
Fp,q (with considerably smaller genus), and the given Legendrian link L sits on F just as it sits on the page Fp,q in the construction used in [AO] and [Pl]. The page F of the open book
OB will arise as the ribbon of the 1-skeleton of an appropriate contact cell decomposition
for (S3, ξst). As in [Pl], our construction will keep the given link L Legendrian with respect
to the standard contact structure ξst. Our main theorem is:
Theorem 1.4. Given L and Fp,q as above, there exists a contact cell decomposition ∆ of
(S3, ξst) such that
(1) L is contained in the Legendrian 1-skeleton G of ∆,
ON THE SUPPORT GENUS OF A CONTACT STRUCTURE 3
(2) The ribbon F of the 1-skeleton G is a subsurface of Fp,q (p and q as above),
(3) The framing of L coming from F is equal to its contact framing tb(L), and
(4) If p > 3 and q > 3, then the genus g(F ) of F is strictly less than the genus g(Fp,q)
of Fp,q.
As an immediate consequence (see Corollary 3.1), we get an explicit description of an open
book supporting (S3, ξ) whose page F contains L with the correct framing. Therefore, if
(M±, ξ±) is given by contact (±1)-surgery on L (such a surgery diagram exists for any
closed contact 3-manifold by Theorem 2.1), we get an open book supporting ξ± with page
F by Theorem 2.5. Hence, g(F ) improves the upper bound for sg(ξ) as g(F ) < g(Fp,q)
(for p > 3, q > 3). It will be clear from our examples in Section 4 that this is indeed a
good improvement.
Acknowledgments. The author would like to thank Selman Akbulut, Selahi Durusoy,
Cagri Karakurt, and Burak Ozbagci for their helpful conversations and comments on the
draft of this paper.
2. Preliminaries
2.1. Contact structures and Open book decompositions. A 1-form α ∈ Ω1(M) on
a 3-dimensional oriented manifold M is called a contact form if it satisfies α ∧ dα ≠ 0.
An oriented contact structure on M is then a hyperplane field ξ which can be globally
written as the kernel of a contact 1-form α. We will always assume that ξ is a positive
contact structure, that is, α ∧ dα > 0. Note that this is equivalent to asking that dα
be positive definite on the plane field ξ, i.e., dα|ξ > 0. Two contact structures ξ0, ξ1 on
a 3-manifold are said to be isotopic if there exists a 1-parameter family ξt (0 ≤ t ≤ 1)
of contact structures joining them. We say that two contact 3-manifolds (M1, ξ1) and
(M2, ξ2) are contactomorphic if there exists a diffeomorphism f : M1 −→ M2 such that
f∗(ξ1) = ξ2. Note that isotopic contact structures give contactomorphic contact manifolds
by Gray’s Theorem. Any contact 3-manifold is locally contactomorphic to (R3, ξ0), where the standard contact structure ξ0 on R3 with coordinates (x, y, z) is given as the kernel of α0 = dz + xdy. The standard contact structure ξst on the 3-sphere S3 = {(r1, r2, θ1, θ2) : r21 + r22 = 1} ⊂ C2 is given as the kernel of αst = r21dθ1 + r22dθ2. One basic fact is that (R3, ξ0) is contactomorphic to (S3 \ {pt}, ξst). For more details on contact geometry, we refer the reader to [Ge] and [Et3].
An open book decomposition of a closed 3-manifold M is a pair (L, f) where L is an
oriented link in M , called the binding, and f : M \L → S1 is a fibration such that f−1(t)
is the interior of a compact oriented surface Σt ⊂ M and ∂Σt = L for all t ∈ S1. The
surface Σ = Σt, for any t, is called the page of the open book. The monodromy of an open
book (L, f) is given by the return map of a flow transverse to the pages (all diffeomorphic
to Σ) and meridional near the binding, which is an element h ∈ Aut(Σ, ∂Σ), the group
of (isotopy classes of) diffeomorphisms of Σ which restrict to the identity on ∂Σ . The
group Aut(Σ, ∂Σ) is also said to be the mapping class group of Σ, and denoted by Γ(Σ).
An open book can also be described as follows. First consider the mapping torus
Σ(h) = [0, 1]× Σ/(1, x) ∼ (0, h(x))
where Σ is a compact oriented surface with n = |∂Σ| boundary components and h is an
element of Aut(Σ, ∂Σ) as above. Since h is the identity map on ∂Σ, the boundary ∂Σ(h)
of the mapping torus Σ(h) can be canonically identified with n copies of T 2 = S1 × S1,
where the first S1 factor is identified with [0, 1]/(0 ∼ 1) and the second one comes from
a component of ∂Σ. Now we glue in n copies of D2 × S1 to cap off Σ(h) so that ∂D2
is identified with S1 = [0, 1]/(0 ∼ 1) and the S1 factor in D2 × S1 is identified with a
boundary component of ∂Σ. Thus we get a closed 3-manifold
M = M(Σ,h) := Σ(h) ∪n D2 × S1
equipped with an open book decomposition (Σ, h) whose binding is the union of the core
circles in the D2 × S1’s that we glue to Σ(h) to obtain M . To summarize, an element
h ∈ Aut(Σ, ∂Σ) determines a 3-manifold M = M(Σ,h) together with an “abstract” open
book decomposition (Σ, h) on it. For further details on these subjects, see [Gd] and [Et2].
2.2. Legendrian Knots and Contact Surgery. A Legendrian knot K in a contact
3-manifold (M, ξ) is a knot that is everywhere tangent to ξ. Any Legendrian knot comes
with a canonical contact framing (or Thurston-Bennequin framing), which is defined by
a vector field along K that is transverse to ξ. If K is null-homologous, then this framing
can be given by an integer tb(K), called Thurston-Bennequin number. For any Legendrian
knot K in (R3, ξ0), the number tb(K) can be computed as
tb(K) = bb(K)−#left cusps of K
where bb(K) is the blackboard framing of K.
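As a quick arithmetic illustration of the formula (our own, not from the text): the standard front of the Legendrian right trefoil has writhe 3 and two left cusps, giving tb = 3 − 2 = 1.

```python
# tb(K) = bb(K) − (# left cusps); the blackboard framing bb(K) is the writhe
# of the front projection, i.e. the signed count of its crossings.
def thurston_bennequin(crossing_signs, left_cusps):
    return sum(crossing_signs) - left_cusps

# standard front of the Legendrian right trefoil: three positive crossings, two left cusps
assert thurston_bennequin([+1, +1, +1], left_cusps=2) == 1
# the crossing-free Legendrian unknot front has one left cusp, so tb = -1
assert thurston_bennequin([], left_cusps=1) == -1
```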
We call (M, ξ) (or just ξ) overtwisted if it contains an embedded disc D ≈ D2 ⊂ M
with boundary ∂D ≈ S1 a Legendrian knot whose contact framing equals the framing it
receives from the disc D. If no such disc exists, the contact structure ξ is called tight.
For any p, q ∈ Z, a contact (r)-surgery (r = p/q) along a Legendrian knot K in a contact
manifold (M, ξ) was first described in [DG1]. It is defined to be a special kind of a
topological surgery, where the surgery coefficient r ∈ Q ∪ {∞} is measured relative to the contact framing of K. For r ≠ 0, a contact structure on the surgered manifold (M − νK) ∪ (S1 × D2) (νK denotes a tubular neighborhood of K) is defined by requiring this contact structure to coincide with ξ on M − νK and its extension over S1 × D2 to be tight on the (glued-in) solid torus S1 × D2. Such an extension exists uniquely (up to isotopy) for r = 1/k with
k ∈ Z (see [Ho]). In particular, a contact (±1)-surgery along a Legendrian knot K on a
contact manifold (M, ξ) determines a unique (up to contactomorphism) surgered contact
manifold which will be denoted by (M, ξ)(K,±1).
The most general result along these lines is:
Theorem 2.1 ([DG1]). Every (closed, orientable) contact 3-manifold (M, ξ) can be ob-
tained via contact (±1)-surgery on a Legendrian link in (S3, ξst).
Any closed contact 3-manifold (M, ξ) can be described by a contact surgery diagram. Such
a diagram consists of a front projection (onto the yz-plane) of a Legendrian link drawn in
(R3, ξ0) ⊂ (S3, ξst) with a contact surgery coefficient on each link component. Theorem 2.1
implies that there is a contact surgery diagram for (M, ξ) such that the contact surgery
coefficient of any Legendrian knot in the diagram is ±1. For more details see [Gm] and
[OS].
2.3. Compatibility and Stabilization. A contact structure ξ on a 3-manifold M is
said to be supported by an open book (L, f) if ξ is isotopic to a contact structure given by
a 1-form α such that
(1) dα is a positive area form on each page Σ ≈ f−1(pt) of the open book and
(2) α > 0 on L (Recall that L and the pages are oriented.)
When this holds, we also say that the open book (L, f) is compatible with the contact
structure ξ on M . Geometrically, compatibility means that ξ can be isotoped to be
arbitrarily close (as oriented plane fields), on compact subsets of the pages, to the tangent
planes to the pages of the open book in such a way that after some point in the isotopy
the contact planes are transverse to L and transverse to the pages of the open book in a
fixed neighborhood of L.
Definition 2.2. A positive (resp., negative) stabilization S+K(Σ, h) (resp., S−K(Σ, h)) of an abstract open book (Σ, h) is the open book
(1) with page Σ′ = Σ ∪ 1-handle and
(2) monodromy h′ = h ◦ DK (resp., h′ = h ◦ DK⁻¹), where DK is a right-handed Dehn twist along a curve K in Σ′ that intersects the co-core of the 1-handle exactly once.
Based on the result of Thurston and Winkelnkemper [TW], Giroux proved the following
theorem which strengthened the link between open books and contact structures.
Theorem 2.3 ([Gi]). Let M be a closed oriented 3-manifold. Then there is a one-to-
one correspondence between oriented contact structures on M up to isotopy and open book
decompositions of M up to positive stabilizations: Two contact structures supported by the
same open book are isotopic, and two open books supporting the same contact structure
have a common positive stabilization.
For a given fixed open book (Σ, h) of a 3-manifold M , there exists a unique compatible
contact structure up to isotopy on M = M(Σ,h) by Theorem 2.3. We will denote this
contact structure by ξ(Σ,h). Therefore, an open book (Σ, h) determines a unique contact
manifold (M(Σ,h), ξ(Σ,h)) up to contactomorphism.
Taking a positive stabilization of an open book (Σ, h) is actually taking a special Murasugi
sum of (Σ, h) with (H+, Dc), where H+ is the positive Hopf band and c is the core circle
in H+. Taking a Murasugi sum of two open books corresponds to taking the connect sum
of 3-manifolds associated to the open books. For the precise statements of these facts,
and a proof of the following theorem, we refer the reader to [Gd], [Et2].
Theorem 2.4. (MS+(Σ,h), ξS+(Σ,h)) ∼= (M(Σ,h), ξ(Σ,h)) # (S3, ξst) ∼= (M(Σ,h), ξ(Σ,h)).
2.4. Monodromy and Surgery Diagrams. Given a contact surgery diagram for a
closed contact 3-manifold (M, ξ), we want to construct an open book compatible with
ξ. One implication of Theorem 2.1 is that one can obtain such a compatible open book
by starting with a compatible open book of (S3, ξst), and then interpreting the effects of
surgeries (yielding (M, ξ) ) in terms of open books. However, we first have to realize each
surgery curve (in the given surgery diagram of (M, ξ) ) as a Legendrian curve sitting on
a page of some open book supporting (S3, ξst). We refer the reader to Section 5 in [Et2]
for a proof of the following theorem.
Theorem 2.5. Let (Σ, h) be an open book supporting the contact manifold (M, ξ). If K
is a Legendrian knot on the page Σ of the open book, then
(M, ξ)(K,±1) = (M(Σ, h◦D∓K), ξ(Σ, h◦D∓K)).
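Definition 2.2 and Theorem 2.5 amount to bookkeeping on the monodromy word: a positive stabilization appends DK, and a contact (±1)-surgery along a page curve K appends DK∓1. A toy sketch of that bookkeeping (the curve labels are hypothetical, and the word just records twists without simplification):

```python
# Monodromy as a word of (curve, exponent) Dehn twists, composed left to right.
def stabilize_positive(word, curve):
    # positive stabilization: page gains a 1-handle, monodromy becomes h ∘ D_curve
    return word + [(curve, +1)]

def contact_surgery(word, curve, coefficient):
    # contact (±1)-surgery along a Legendrian curve on the page: h ∘ D_curve^(∓1)
    assert coefficient in (+1, -1)
    return word + [(curve, -coefficient)]

w = [("a", +1)]                      # some starting monodromy h = D_a
w = stabilize_positive(w, "c")       # h ∘ D_c
w = contact_surgery(w, "K", -1)      # (−1)-surgery appends a right-handed twist D_K
assert w == [("a", +1), ("c", +1), ("K", +1)]
w = contact_surgery(w, "K2", +1)     # (+1)-surgery appends a left-handed twist
assert w[-1] == ("K2", -1)
```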
2.5. Contact Cell Decompositions and Convex Surfaces. The exploration of con-
tact cell decompositions in the study of open books was originally initiated by Gabai
[Ga], and then developed by Giroux [Gi]. We want to give several definitions and facts
carefully.
Let (M, ξ) be any contact 3-manifold, and K ⊂ M be a Legendrian knot. The twisting
number tw(K,Fr) of K with respect to a given framing Fr is defined to be the number
of counterclockwise 2π twists of ξ along K, relative to Fr. In particular, if K sits on a
surface Σ ⊂ M , and FrΣ is the surface framing of K given by Σ, then we write tw(K,Σ)
for tw(K,FrΣ). If K = ∂Σ, then we have tw(K,Σ) = tb(K) (by the definition of tb).
Definition 2.6. A contact cell decomposition of a contact 3-manifold (M, ξ) is a finite
CW-decomposition of M such that
(1) the 1-skeleton is a Legendrian graph,
(2) each 2-cell D satisfies tw(∂D,D) = −1, and
(3) ξ is tight when restricted to each 3-cell.
Definition 2.7. Given any Legendrian graph G in (M, ξ), the ribbon of G is a compact
surface R = RG satisfying
(1) R retracts onto G,
(2) TpR = ξp for all p ∈ G,
(3) TpR ≠ ξp for all p ∈ R \ G.
For a proof of the following lemma we refer the reader to [Gd] and [Et2].
Lemma 2.8. Given a closed contact 3-manifold (M, ξ), the ribbon of the 1-skeleton of
any contact cell decomposition is a page of an open book supporting ξ.
The following lemma will be used in the next section.
Lemma 2.9. Let ∆ be a contact cell decomposition of a closed contact 3-manifold (M, ξ)
with the 1-skeleton G. Let U be a 3-cell in ∆. Consider two Legendrian arcs I ⊂ ∂U
and J ⊂ U such that
(1) I ⊂ G,
(2) J ∩ ∂U = ∂J = ∂I,
(3) C = I ∪∂ J is a Legendrian unknot with tb(C) = −1.
Set G′ = G ∪ J. Then there exists another contact cell decomposition ∆′ of (M, ξ) such that G′ is the 1-skeleton of ∆′.
Proof. The interior of the 3-cell U is contactomorphic to (R3, ξ0). Therefore, there exists
an embedded disk D in U such that ∂D = C and int(D) ⊂ int(U) as depicted in Figure
2(a). We have tw(∂D,D) = −1 since tb(C) = −1. As we are working in (R3, ξ0), there
exist two C∞-small perturbations of D fixing ∂D = C such that perturbed disks intersect
each other only along their common boundary C. In other words, we can find two isotopies
H1, H2 : [0, 1]×D −→ U such that for each i = 1, 2 we have
(1) Hi(t, .) fixes ∂D = C pointwise for all t ∈ [0, 1],
(2) Hi(0, D) = IdD where IdD is the identity map on D,
(3) Hi(1, D) = Di where each Di is an embedded disk in U with int(Di) ⊂ int(U),
(4) D ∩D1 ∩D2 = C (see Figure 2(b)).
Figure 2. Constructing a new contact cell decomposition
Note that tw(∂Di, Di) = tw(C,Di) = −1 for i = 1, 2. This holds because each Di is a
small perturbation of D, so the number of counterclockwise twists of ξ (along K) relative
to FrDi is equal to the one relative to FrD.
Next, we introduce G′ = G ∪ J as the 1-skeleton of the new contact cell decomposition
∆′. In M − int(U), we define the 2- and 3- skeletons of ∆′ to be those of ∆ . However,
we change the cell structure of int(U) as follows: We add 2-cells D1, D2 to the 2-skeleton
of ∆′ (note that they both satisfy the twisting condition in Definition 2.6). Consider the
2-sphere S = D1 ∪ D2, where the union is taken along the common boundary C. Let U′ be the 3-ball with ∂U′ = S. Note that ξ|U′ is tight as U′ ⊂ U and ξ|U is tight. We add U′ and U − U′ to the 3-skeleton of ∆′ (note that U − U′ can be considered as a 3-cell because int(U − U′) is homeomorphic to the interior of a 3-ball as in Figure 2(b)). Hence, we have established another contact cell decomposition of (M, ξ) whose 1-skeleton is G′ = G ∪ J. (Equivalently, by Theorem 2.4, we are taking the connected sum of (M, ξ) with (S3, ξst) along U′.) □
3. The Algorithm
3.1. Proof of Theorem 1.4.
Proof. By translating L in (R3, ξ0) if necessary (without changing its contact type), we
can assume that the front projection of L onto the yz-plane lies in the second quadrant
{ (y, z) | y < 0, z > 0}. After an appropriate Legendrian isotopy, we can assume that L
consists of the line segments contained in the lines
ki = {x = 1, z = −y + ai}, i = 1, . . . , p,
lj = {x = −1, z = y + bj}, j = 1, . . . , q
for some a1 < a2 < · · · < ap, 0 < b1 < b2 < · · · < bq, and also the line segments (parallel
to the x-axis) joining certain ki’s to certain lj ’s. In this representation, L seems to have
corners. However, any corner of L can be made smooth by a Legendrian isotopy changing
only a very small neighborhood of that corner.
Let π : R3 −→ R2 be the projection onto the yz-plane. Then we obtain the square bridge
diagram D = π(L) of L such that D consists of the line segments
hi ⊂ π(ki) = {x = 0, z = −y + ai}, i = 1, . . . , p,
vj ⊂ π(lj) = {x = 0, z = y + bj}, j = 1, . . . , q.
Notice that D bounds a polygonal region P in the second quadrant of the yz-plane, and
divides it into finitely many polygonal subregions P1, . . . , Pm ( see Figure 3-(a) ).
Throughout the proof, we will assume that the link L is not split (that is, the region P
has only one connected component). Such a restriction on L will not affect the generality
of our construction (see Remark 3.2).
Figure 3. The region P for right trefoil knot and its division into rectangles
Now we decompose P into finite number of ordered rectangular subregions as follows:
The collection {π(lj) | j = 1, . . . , q} cuts each Pk into finitely many rectangular regions
R1k, . . . , Rmkk. Consider the set P of all such rectangles in P. That is, we define
P = { Rlk | k = 1, . . . , m, l = 1, . . . , mk }.
Clearly P decomposes P into rectangular regions ( see Figure 3-(b) ). The boundary of
an arbitrary element Rlk in P consists of four edges: Two of them are the subsets of the
lines π(lj(k,l)), π(lj(k,l)+1), and the other two are the subsets of the line segments hi1(k,l),
hi2(k,l) where 1 ≤ i1(k, l) < i2(k, l) ≤ p and 1 ≤ j(k, l) < j(k, l) + 1 ≤ q (see Figure 4).
Since the region P has one connected component, the following holds for the set P:
Figure 4. Arbitrary element Rlk in P
(⋆) Any element of P has at least one common vertex with some other element of P.
By (⋆), we can rename the elements of P by putting some order on them so that any
element of P has at least one vertex in common with the union of all rectangles coming
before itself with respect to the chosen order. More precisely, we can write
P = { Rk | k = 1, . . . , N}
(N is the total number of rectangles in P) such that each Rk has at least one vertex in
common with the union R1 ∪ · · · ∪ Rk−1.
Equivalently, we can construct the polygonal region P by introducing the building rect-
angles (Rk’s) one by one in the order given by the index set {1, 2, . . . , N}. In particular,
this eliminates one of the indices, i.e., we can use Rk's instead of Rlk's. In Figure 5, how
we build P is depicted for the right trefoil knot (compare it with the previous picture
given for P in Figure 3-(b)).
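The reordering guaranteed by (⋆) is a simple greedy graph search: repeatedly pick any remaining rectangle sharing a corner with the union built so far; connectedness of the region P ensures this never gets stuck. A sketch (our own, not from the paper) with rectangles encoded as hypothetical corner lists:

```python
# Rectangles as corner lists; property (⋆) plus connectedness of P guarantee the
# greedy choice below always succeeds. The corner coordinates are hypothetical.
def order_rectangles(rects):
    remaining = list(range(1, len(rects)))
    order = [0]
    seen = set(rects[0])
    while remaining:
        for i in remaining:
            if seen & set(rects[i]):      # shares a vertex with R_1 ∪ ... ∪ R_{k-1}
                order.append(i)
                remaining.remove(i)
                seen |= set(rects[i])
                break
        else:
            raise ValueError("no rectangle touches the union: (⋆) fails")
    return order

# three unit squares: #1 touches #0 only through #2, so a valid order is 0, 2, 1
rects = [[(0, 0), (1, 0), (0, 1), (1, 1)],
         [(2, 1), (3, 1), (2, 2), (3, 2)],
         [(1, 1), (2, 1), (1, 2), (2, 2)]]
assert order_rectangles(rects) == [0, 2, 1]
```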
Figure 5. The region P for right trefoil knot
Using the representation P = R1 ∪ R2 ∪ · · · ∪ RN , we will construct the contact cell
decomposition (CCD) ∆. Consider the following infinite strips which are parallel to the
x-axis (they can be considered as the unions of “small” contact planes along ki’s and lj ’s):
S+i = {1 − ǫ ≤ x ≤ 1 + ǫ, z = −y + ai}, i = 1, . . . , p,
S−j = {−1 − ǫ ≤ x ≤ −1 + ǫ, z = y + bj}, j = 1, . . . , q.
Note that π(S+i) = π(ki) and π(S−j) = π(lj). Let Rk ⊂ P be given. Then we can write ∂Rk = C1k ∪ C2k ∪ C3k ∪ C4k, where C1k ⊂ π(ki1), C2k ⊂ π(lj), C3k ⊂ π(ki2), C4k ⊂ π(lj+1) for some 1 ≤ i1 < i2 ≤ p and 1 ≤ j ≤ q. Lift C1k, C2k, C3k, C4k (along the x-axis) so that the resulting lifts (which will be denoted by the same letters) are disjoint Legendrian arcs contained in ki1, lj, ki2, lj+1 and sitting on the corresponding strips S+i1, S−j, S+i2, S−j+1. For
l = 1, 2, 3, 4, consider Legendrian linear arcs Ilk (parallel to the x-axis) running between the endpoints of the Clk's as in Figure 6-(a)&(b). Along each Ilk the contact planes make a 90◦ left-twist. Let Blk be the narrow band obtained by following the contact planes along Ilk. Then define Fk to be the surface constructed by taking the union of the compact subsets of the above strips (containing the corresponding Clk's) with the bands Blk (see Figure 6-(b)).
The Clk's and Ilk's together build a Legendrian unknot γk in (R3, ξ0), i.e., we set
γk = C1k ∪ I1k ∪ C2k ∪ I2k ∪ C3k ∪ I3k ∪ C4k ∪ I4k.
Note that π(γk) = ∂Rk, γk sits on the surface Fk, and Fk deformation retracts onto γk.
Indeed, by taking all strips and bands in the construction small enough, we may assume
that contact planes are tangent to the surface Fk only along the core circle γk. Thus, Fk
is the ribbon of γk. Observe that, topologically, Fk is a positive (left-handed) Hopf band.
Figure 6. (a) The Legendrian unknot γk, (b) The ribbon Fk, (c) The disk Dk ≈ fk(Rk) (shaded bands in (b) are the bands Blk)
Let fk : Rk −→ R3 be a function modelled by (a, b) ↦ c = a2 − b2 (for an appropriate
choice of coordinates). The image fk(Rk) is, topologically, a disk, and a compact subset
of a saddle surface. Deform fk(Rk) to another “saddle” disk Dk such that ∂Dk = γk (see
Figure 6-(c)). We observe here that tw(γk, Dk) = −1 because along γk, contact planes
rotate 90◦ in the counter-clockwise direction exactly four times which makes one full left-
twist (enough to count the twists of the ribbon Fk since Fk rotates with the contact planes
along γk !).
We repeat the above process for each rectangle Rk in P and get the set
D = { Dk | Dk ≈ fk(Rk), k = 1, . . . , N}
consisting of the saddle disks. Note that by the construction of D, we have the property:
(∗) If any two elements of D intersect each other, then they must intersect along a
contractible subset (a contractible union of linear arcs) of their boundaries.
For instance, if the corresponding two rectangles (for two intersecting disks in D) have
only one common vertex, then those disks intersect each other along the (contractible)
line segment parallel to the x-axis which is projected (by the map π) onto that vertex.
For each k, let D′k be a disk constructed by perturbing Dk slightly by an isotopy fixing
only the boundary of Dk. Therefore, we have
(∗∗) ∂Dk = γk = ∂D′k, int(Dk) ∩ int(D′k) = ∅, and tw(γk, D′k) = −1 = tw(γk, Dk).
In the following, we will define a sequence { ∆k | k = 1, . . . , N } of CCD's for (S3, ξst). ∆1k, ∆2k, and ∆3k will denote the 1-skeleton, 2-skeleton, and 3-skeleton of ∆k, respectively. First, take ∆11 = γ1 and ∆21 = D1 ∪γ1 D′1. By (∗∗), ∆1 satisfies conditions (1) and (2) of Definition 2.6. By the construction, any pair of disks Dk, D′k (together) bounds a Darboux ball (tight 3-cell) Uk in the tight manifold (R3, ξ0). Therefore, if we take ∆31 = U1 ∪∂ (S3 − U1), we also achieve condition (3) in Definition 2.6 (the boundary union “∪∂” is taken along ∂U1 = S2 = ∂(S3 − U1)). Thus, ∆1 is a CCD for (S3, ξst).
Inductively, we define ∆k from ∆k−1 by setting
∆1k = ∆1k−1 ∪ γk = γ1 ∪ · · · ∪ γk−1 ∪ γk,
∆2k = ∆2k−1 ∪ Dk ∪γk D′k = D1 ∪γ1 D′1 ∪ · · · ∪ Dk−1 ∪γk−1 D′k−1 ∪ Dk ∪γk D′k,
∆3k = U1 ∪ · · · ∪ Uk−1 ∪ Uk ∪∂ (S3 − (U1 ∪ · · · ∪ Uk−1 ∪ Uk)).
Actually, at each step of the induction, we are applying Lemma 2.9 to ∆k−1 to get ∆k.
We should make several remarks: First, by the construction of γk’s, the set
(γ1 ∪ · · · ∪ γk−1) ∩ γk
is a contractible union of finitely many arcs. Therefore, the union ∆1k−1 ∪ γk should be
understood to be a set-theoretical union (not a topological gluing!), which means that we are attaching only the (connected) part γk \ ∆1k−1 of γk to construct the new 1-skeleton ∆1k. In terms of the language of Lemma 2.9, we are setting I = ∆1k−1 ∩ γk and J = γk \ ∆1k−1. Secondly, we have to show that ∆2k = ∆2k−1 ∪ Dk ∪γk D′k can be realized
as the 2-skeleton of a CCD: Inductively, we can achieve the twisting condition on 2-cells
by using (∗∗). The fact that any two intersecting 2-cells in ∆2k intersect each other along
some subset of the 1-skeleton ∆1k is guaranteed by the property (∗) if they have different
index numbers, and guaranteed by (∗∗) if they are of the same index. Thirdly, we have
to guarantee that 3-cells meet correctly: It is clear that U1, . . . , Uk meet with each other
along subsets of the 1-skeleton ∆1k (⊂ ∆2k). Observe that ∂(U1 ∪ · · · ∪ Uk) = S2 for any k = 1, . . . , N by (∗) and (∗∗). Therefore, we can always consider the complementary Darboux ball S3 − (U1 ∪ · · · ∪ Uk), and glue it to U1 ∪ · · · ∪ Uk along their common boundary 2-sphere. Hence, we have seen that ∆k is a CCD for (S3, ξst) with Legendrian
1-skeleton ∆1k = γ1 ∪ · · · ∪ γk.
To understand the ribbon, say Σk, of ∆1k, observe that when we glue the part γk \ ∆1k−1 of γk to ∆1k−1, we are actually attaching a 1-handle (whose core interval is (γk \ ∆1k−1) \ Σk−1)
to the old ribbon Σk−1 (indeed, this corresponds to a positive stabilization). We choose
the 1-handle in such a way that it also rotates with the contact planes. This is equivalent
to extending Σk−1 to a new surface by attaching the missing part (the part which retracts
onto (γk \ ∆1k−1) \ Σk−1) of Fk given in Figure 6-(c). The new surface is the ribbon Σk of
the new 1-skeleton ∆1k.
By taking k = N, we get a CCD ∆N of (S3, ξst). By the construction, the γk's are only piecewise smooth. We need a smooth embedding of L into the 1-skeleton ∆1N (the union of all γk's). Away from some small neighborhood of the common corners of ∆1N and L (recall that L had corners before the Legendrian isotopies), L is smoothly embedded in ∆1N. Around any common corner, we slightly perturb ∆1N using the isotopy used for smoothing that corner of L. This guarantees the smooth Legendrian embedding of L into the Legendrian graph ∆1N = ∪Nk=1 γk. Similarly, any other corner in ∆1N (which is not in L) can be made smooth using an appropriate Legendrian isotopy.
As L is contained in the 1-skeleton ∆1N , L sits (as a smooth Legendrian link) on the ribbon
ΣN . Note that during the process we do not change the contact type of L, so the contact
(Thurston-Bennequin) framing of L is still the same as what it was at the beginning. On
the other hand, consider a tubular neighborhood N(L) of L in ΣN. Being a subsurface
of the ribbon ΣN , N(L) is the ribbon of L. By definition, the contact framing of any
component of L is the one coming from the ribbon of that component. Therefore, the
contact framing and the N(L)-framing of L are the same. Since N(L) ⊂ ΣN , the framing
which L gets from the ribbon ΣN is the same as the contact framing of L. Finally, we
observe that ΣN is a subsurface of the Seifert surface Fp,q of the torus link (or knot) Tp,q.
To see this, note that P is contained in the rectangular region, say Pp,q, enclosed by the
lines π(k1), π(kp), π(l1), π(lq). Divide Pp,q into the rectangular subregions using the lines
π(ki), π(lj), i = 1, . . . , p, j = 1, . . . , q. Note that there are exactly pq rectangles in the
division. If we repeat the above process using this division of Pp,q, we get another CCD
for (S3, ξst) with the ribbon Fp,q. Clearly, Fp,q contains our ribbon ΣN as a subsurface
(indeed, there are extra bands and parts of strips in Fp,q which are not in ΣN ).
Thus, (1), (2) and (3) of the theorem are proved once we set ∆ = ∆N (and so G = ∆1N, F = ΣN). To prove (4), recall that we are assuming p > 3, q > 3. Then consider

κ = total number of intersection points of all π(lj)'s with all hi's.

That is, we define κ = |{π(lj) | j = 1, . . . , q} ∩ {hi | i = 1, . . . , p}|. Notice that κ is the number of bands used in the construction of the ribbon F, and also that if D (so P) is not a single rectangle (equivalently p > 2, q > 2), then κ < pq. Since there are p + q disks
in F, we compute the Euler characteristic and genus of F as

χ(F) = p + q − κ = 2 − 2g(F) − |∂F| ⟹ g(F) = (2 − p − q + κ − |∂F|) / 2.

Similarly, there are p + q disks and pq bands in Fp,q, so we get

χ(Fp,q) = p + q − pq = 2 − 2g(Fp,q) − |∂Fp,q| ⟹ g(Fp,q) = (2 − p − q + pq − |∂Fp,q|) / 2.
ON THE SUPPORT GENUS OF A CONTACT STRUCTURE 13
Observe that |∂Fp,q| divides the greatest common divisor gcd(p, q) of p and q, so

|∂Fp,q| ≤ gcd(p, q) ≤ p ⟹ g(Fp,q) ≥ (2 − p − q + pq − p) / 2.
Therefore, to conclude g(F ) < g(Fp,q), it suffices to show that pq−κ > p−|∂F |. To show
the latter, we will show pq − κ − p ≥ 0 (this will be enough since |∂F| ≠ 0).
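The genus bookkeeping in this proof reduces to Euler-characteristic arithmetic, which is easy to check numerically. The sketch below is our own illustration (not part of the paper); it encodes χ = (#disks) − (#bands) = 2 − 2g − |∂|:

```python
from math import gcd

def genus(num_disks, num_bands, num_boundary):
    """Genus of a surface built from disks and 1-handles (bands):
    chi = num_disks - num_bands = 2 - 2g - num_boundary."""
    chi = num_disks - num_bands
    two_g = 2 - num_boundary - chi
    assert two_g % 2 == 0
    return two_g // 2

# Seifert surface F_{p,q} of the (p,q)-torus link: p+q disks, pq bands,
# and gcd(p,q) boundary components.
p, q = 5, 5
g_torus = genus(p + q, p * q, gcd(p, q))  # -> 6, the bound from [AO], [St]
```

With p = q = 6 the same count gives genus 10, matching the bound quoted for the figure-eight example below; omitting bands (κ < pq) strictly lowers the genus, which is the point of part (4).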
Observe that pq−κ is the number of bands (along x-axis) in Fp,q which we omit to get the
ribbon F . Therefore, we need to see that at least p bands are omitted in the construction
of F : The set of all bands (along x-axis) in Fp,q corresponds to the set
{π(lj) | j = 1, . . . , q} ∩ {π(ki) | i = 1, . . . , p}.
Notice that while constructing F we omit at least 2 bands corresponding to the intersec-
tions of the lines π(k1), π(kp) with the family {π(lj) | j = 1, . . . , q} (in some cases, one of
these bands might correspond to the intersection of the lines π(k2) or π(kp−1) with π(l1)
or π(lq), but the following argument still works because in such a case we can omit at
least 2 bands corresponding to two points on π(k2) or π(kp−1)). For the remaining p− 2
line segments h2, . . . , hp−1, there are two cases: Either each hi, for i = 2, . . . , p− 1 has at
least one endpoint contained on a line other than π(l1) or π(lq), or there exists a unique
hi, 1 < i < p, such that its endpoints are on π(l1) and π(lq) (such an hi must be unique since no two vj's are collinear!). If the first holds, then that endpoint corresponds to the intersection of hi with π(lj) for some j ≠ 1, q. Then the band corresponding to either
π(ki)∩π(lj−1) or π(ki)∩π(lj+1) is omitted in the construction of F (recall how we divide
P into rectangular regions). If the second holds, then there is at least one line segment
hi′ , which belongs to the same component of L containing hi, such that we omit at least 2
points on π(ki′) (this is true again since no two vj ’s are collinear). Hence, in any case, we
omit at least p bands from Fp,q to get F. This completes the proof of Theorem 1.4. □
Corollary 3.1. Given L and Fp,q as in Theorem 1.4, there exists an open book decompo-
sition OB of (S3, ξst) such that
(1) L lies (as a Legendrian link) on a page F of OB,
(2) The page F is a subsurface of Fp,q
(3) The page framing of L coming from F is equal to its contact framing tb(L),
(4) If p > 3 and q > 3, then g(F ) is strictly less than g(Fp,q),
(5) The monodromy h of OB is given by h = tγ1 ◦ · · · ◦ tγN where γk is the Legendrian
unknot constructed in the proof of Theorem 1.4, and tγk denotes the positive (right-
handed) Dehn twist along γk.
Proof. The proofs of (1), (2), (3), and (4) immediately follow from Theorem 1.4 and
Lemma 2.8. To prove (5), observe that by adding the missing part of each γk to the
previous 1-skeleton, and by extending the previous ribbon by attaching the ribbon of the
missing part of γk (which is topologically a 1-handle), we actually positively stabilize the
old ribbon with the positive Hopf band (H+, tγk). Therefore, (5) follows. □
With a little more care, sometimes we can decrease the number of 2-cells in the final
2-skeleton. Also the algorithm can be modified for split links:
Remark 3.2. Under the notation used in the proof of Theorem 1.4, we have the following:
(1) Suppose that the link L is split (so P has at least two connected components).
Then we can modify the above algorithm so that Theorem 1.4 still holds.
(2) Let Tj denote the row (or set) of rectangles (or elements) in P (or in P) with
bottom edges lying on the fixed line π(lj). Consider two consecutive rows Tj , Tj+1
lying between the lines π(lj), π(lj+1), and π(lj+2). Let R ∈ Tj and R′ ∈ Tj+1 be two rectangles in P with boundaries given as

∂R = C1 ∪ C2 ∪ C3 ∪ C4,   ∂R′ = C′1 ∪ C′2 ∪ C′3 ∪ C′4.

Suppose that R and R′ have one common boundary component lying on π(lj+1), and two of the other components lie on the same lines π(ki1), π(ki2) as in Figure 7. Let γ, γ′ ⊂ ∆1N and D, D′ ⊂ ∆N be the corresponding Legendrian unknots and 2-cells of the CCD ∆N coming from R, R′. That is,

∂D = γ, ∂D′ = γ′, and π(D) = R, π(D′) = R′.
Suppose also that L∩ γ ∩ γ′ = ∅. Then in the construction of ∆N , we can replace
R,R′ ⊂ P with a single rectangle R′′ = R∪R′. Equivalently, we can take out γ∩γ′
from ∆1N, and replace D, D′ by a single saddle disk D′′ with ∂D′′ = (γ ∪ γ′) \ (γ ∩ γ′).
Figure 7. Replacing R, R′ with their union R′′ (the rectangles lie between the lines π(lj), π(lj+1), π(lj+2) and π(ki1), π(ki2))
Proof. To prove each statement, we need to show that CCD structure and all the conclu-
sions in Theorem 1.4 are preserved after changing ∆N the way described in the statement.
To prove (1), let P (1), . . . , P (m) be the separate components of P . After putting the
corresponding separate components of L into appropriate positions (without changing
their contact type) in (R3, ξ0), we may assume that the projection
P = P(1) ∪ · · · ∪ P(m)

of L onto the second quadrant of the yz-plane is similar to the one illustrated in Figure 8.
In such a projection, we require two important properties:
(1) P (1), . . . , P (m) are located from left to right in the given order in the region bounded
by the lines π(k1), π(l1), and π(lq).
(2) Each of P (1), . . . , P (m) has at least one edge on the line π(l1).
Figure 8. Modifying the algorithm for the case when L is split
If the components P (1) . . . P (m) remain separate, then our construction in Theorem 1.4
cannot work (the complement of the union of 3-cells corresponding to the rectangles in
P would not be a Darboux ball; it would be a genus-m handlebody). So we have to make sure that each component P(l) is connected to some other component via a bridge consisting of rectangles. We choose only one rectangle for each bridge as follows: Let
Al be the rectangle in T1 (the row between π(l1) and π(l2)) connecting P(l) to P(l+1) for l = 1, . . . , m − 1 (see Figure 8). Now, by adding 2- and 3-cells (corresponding to A1, . . . , Am−1), we can extend the CCD ∆N to get another CCD for (S3, ξst). Therefore, we have modified our construction when L is split.
To prove (2), if we replace D′′ in the way described above, then by the construction of ∆3N, we also replace two 3-cells with a single 3-cell whose boundary is the union of D′′ and its isotopic copy. This alteration of ∆3N does not change the fact that the boundary of the union of all 3-cells coming from all pairs of saddle disks is still homeomorphic to a 2-sphere S2. Therefore, we can still complete this union to S3 by gluing a complementary Darboux ball. Thus, we still have a CCD. Note that γ ∩ γ′ is taken away from the 1-skeleton. However, since L ∩ γ ∩ γ′ = ∅, the new 1-skeleton still contains L. Observe also that this process does not change the ribbon N(L) of L. Hence, the same conclusions in Theorem 1.4 are satisfied by the new CCD. □
4. Examples
Example I. As the first example, let us finish the one which we have already started
in the previous section. Consider the Legendrian right trefoil knot L (Figure 1) and the
corresponding region P given in Figure 5. Then we construct the 1-skeleton, the saddle
disks, and the ribbon of the CCD ∆ as in Figure 9.
Figure 9. (a) The page F for the right trefoil knot (all twists are left-handed), (b) Construction of ∆
In Figure 9-(a), we show how to construct the 1-skeleton G = ∆1 of ∆ starting from a
single Legendrian arc (labelled by the number “ 0 ”). We add Legendrian arcs labelled
by the pairs of numbers “1, 1”, . . . ,“8, 8” to the picture one by one (in this order). Each
pair determines the endpoints of the corresponding arc. These arcs represent the cores
of the 1-handles building the page F (the ribbon of G) of the corresponding open book
OB. Note that by attaching each 1-handle, we (positively) stabilize the previous ribbon
by the positive Hopf band (H+, tγk) where γk is the boundary of the saddle disk Dk as before. Therefore, the monodromy h of OB supporting (S3, ξst) is given by
h = tγ1 ◦ · · · ◦ tγ8
where tγk ∈ Aut(F, ∂F ) denotes the positive (right-handed) Dehn twist along γk. To
compute the genus gF of F , observe that F is constructed by attaching eight 1-handles
(bands) to a disk, and |∂F | = 3 where |∂F | is the number of boundary components of F .
Therefore,
χ(F ) = 1− 8 = 2− 2gF − |∂F | =⇒ gF = 3.
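This Euler-characteristic count (and the analogous one in Example II below) can be verified mechanically; the helper below is our own illustration, not part of the paper:

```python
def page_genus(num_bands, num_boundary):
    # The page is one disk with num_bands 1-handles attached, so
    # chi = 1 - num_bands, and chi = 2 - 2g - num_boundary gives g.
    chi = 1 - num_bands
    return (2 - num_boundary - chi) // 2

print(page_genus(8, 3))   # Example I (trefoil page): 3
print(page_genus(10, 5))  # Example II (figure-eight page): 3
```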
Now suppose that (M±1, ξ±1) is obtained by performing contact (±1)-surgery on L. Clearly, the trefoil knot L sits as a Legendrian curve on F by our construction, so by Theorem 2.5, we get the open book (F, h1) supporting ξ±1 with monodromy

h1 = tγ1 ◦ · · · ◦ tγ8 ◦ t∓1L ∈ Aut(F, ∂F).

Hence, we get an upper bound for the support genus invariant of ξ±1, namely,

sg(ξ±1) ≤ 3 = gF.
We note that the upper bound, which we can get for this particular case, from [AO] and
[St] is 6 where the page of the open book is the Seifert surface F5,5 of the (5, 5)-torus link
(see Figure 10).
Figure 10. Legendrian right trefoil knot sitting on F5,5 (all twists are left-handed)
Example II. Consider the Legendrian figure-eight knot L, and its square bridge position
given in Figure 11-(a) and (b). We get the corresponding region P in Figure 11-(c). Using
Remark 3.2 we replace R5 and R8 with a single saddle disk. So this changes the set P.
Reindexing the rectangles in P, we get the decomposition in Figure 12 which will be used
to construct the CCD ∆.
Figure 11. (a),(b) Legendrian figure-eight knot, (c) The region P
Figure 12. Modifying the region P
In Figure 13-(a), similar to Example I, we construct the 1-skeleton G = ∆1 of ∆ again by
attaching Legendrian arcs (labelled by the pairs of numbers “1, 1”, . . . , “10, 10”) to the
initial arc (labelled by the number “0”) in the given order. Again each pair determines
the endpoints of the corresponding arc, and the cores of the 1-handles building the page F
(of the corresponding open book OB). Once again attaching each 1-handle is equivalent
to (positively) stabilizing the previous ribbon by the positive Hopf band (H+, tγk) for k = 1, . . . , 10. Therefore, the monodromy h of OB supporting (S3, ξst) is given by

h = tγ1 ◦ · · · ◦ tγ10.
To compute the genus gF of F , observe that F is constructed by attaching ten 1-handles
(bands) to a disk, and |∂F | = 5. Therefore,
χ(F ) = 1− 10 = 2− 2gF − |∂F | =⇒ gF = 3.
Figure 13. (a) The page F, (b) Construction of ∆, (c) The figure-eight knot on F6,6 (all twists are left-handed)
Let (M±2, ξ±2) be a contact manifold obtained by performing contact (±1)-surgery on the figure-eight knot L. Since L sits as a Legendrian curve on F by our construction, Theorem 2.5 gives an open book (F, h2) supporting ξ±2 with monodromy

h2 = tγ1 ◦ · · · ◦ tγ10 ◦ t∓1L ∈ Aut(F, ∂F).
Therefore, we get the upper bound sg(ξ±2) ≤ 3 = gF. Once again we note that the smallest
possible upper bound, which we can get for this particular case, using the method of [AO]
and [St] is 10 where the page of the open book is the Seifert surface F6,6 of the (6, 6)-torus
link (see Figure 13-(c)).
References
[AO] S. Akbulut, B. Ozbagci, Lefschetz fibrations on compact Stein surfaces, Geom. Topol. 5 (2001),
319–334 (electronic).
[DG1] F. Ding and H. Geiges, A Legendrian surgery presentation of contact 3-manifolds, Math. Proc.
Cambridge Philos. Soc. 136 (2004), no. 3, 583–598.
[Et1] J. B. Etnyre, Planar open book decompositions and contact structures, IMRN 79 (2004), 4255–
4267.
[Et2] J. B. Etnyre, Lectures on open book decompositions and contact structures, Floer homology,
gauge theory, and low-dimensional topology, 103–141, Clay Math. Proc., 5, Amer. Math. Soc.,
Providence, RI, 2006.
[Et3] J. Etnyre, Introductory Lectures on Contact Geometry, Topology and geometry of manifolds
(Athens, GA, 2001), 81–107, Proc. Sympos. Pure Math., 71, Amer. Math. Soc., Providence, RI,
2003.
[EO] J. Etnyre, and B. Ozbagci, Invariants of Contact Structures from Open Books,
arXiv:math.GT/0605441, preprint 2006.
[Ga] D. Gabai, Detecting fibred links in S3, Comment. Math. Helv., 61(4):519-555, 1986.
[Ge] H. Geiges, Contact geometry, Handbook of differential geometry. Vol. II, 315–382, Elsevier/North-
Holland, Amsterdam, 2006.
[Gd] N. Goodman, Contact Structures and Open Books, PhD thesis, University of Texas at Austin (2003).
[Gi] E. Giroux, Géométrie de contact: de la dimension trois vers les dimensions supérieures, Pro-
ceedings of the ICM, Beijing 2002, vol. 2, 405–414.
[Gm] R. E. Gompf, Handlebody construction of Stein surfaces, Ann. of Math. 148 (1998), 619–693.
[GS] R. E. Gompf, A. I. Stipsicz, 4-manifolds and Kirby calculus, Graduate Studies in Math. 20, Amer.
Math. Soc., Providence, RI, 1999.
[Ho] K. Honda, On the classification of tight contact structures -I, Geom. Topol. 4 (2000), 309–368
(electronic).
[LP] A. Loi, R. Piergallini, Compact Stein surfaces with boundary as branched covers of B4, Invent.
Math. 143 (2001), 325–348.
[Ly] H. Lyon, Torus knots in the complements of links and surfaces, Michigan Math. J. 27 (1980),
39-46.
[OS] B. Ozbagci, A. I. Stipsicz, Surgery on contact 3-manifolds and Stein surfaces, Bolyai Society
Mathematical Studies, 13 (2004), Springer-Verlag, Berlin.
[Pl] O. Plamenevskaya, Contact structures with distinct Heegaard Floer invariants, Math. Res. Lett.,
11 (2004), 547-561.
[St] A. I. Stipsicz, Surgery diagrams and open book decomposition of contact 3-manifolds, Acta Math.
Hungar, 108 (1-2) (2005), 71-86.
[TW] W. P. Thurston, H. E. Winkelnkemper, On the existence of contact forms, Proc. Amer. Math.
Soc. 52 (1975), 345–347.
Department of Mathematics, MSU, East Lansing MI 48824, USA
E-mail address : [email protected]
0704.1671 | Very Massive Stars in High-Redshift Galaxies | Mon. Not. R. Astron. Soc. 000, 1–11 (2006) Printed 11 November 2021 (MN LaTEX style file v2.2)
Very Massive Stars in High-Redshift Galaxies
Mark Dijkstra1⋆ and J. Stuart B. Wyithe1†
1School of Physics, University of Melbourne, Parkville, Victoria, 3010, Australia
11 November 2021
ABSTRACT
A significant fraction of Lyα emitting galaxies (LAEs) at z > 5.7 have rest-frame
equivalent widths (EW) greater than ∼100 Å. However, only a small fraction of the Lyα
flux produced by a galaxy is transmitted through the IGM, which implies intrinsic Lyα
EWs that are in excess of the maximum allowed for a population-II stellar population
having a Salpeter mass function. In this paper we study characteristics of the sources
powering Lyα emission in high redshift galaxies. We propose a simple model for Lyα
emitters in which galaxies undergo a burst of very massive star formation that results
in a large intrinsic EW, followed by a phase of population-II star formation with a
lower EW. We confront this model with a range of high redshift observations and
find that the model is able to simultaneously describe the following eight properties
of the high redshift galaxy population with plausible values for parameters like the
efficiency and duration of star formation: i-iv) the UV and Lyα luminosity functions
of LAEs at z=5.7 and 6.5, v-vi) the mean and variance of the EW distribution of Lyα
selected galaxies at z=5.7, vii) the EW distribution of i-drop galaxies at z∼6, and
viii) the observed correlation of stellar age with EW. Our modeling suggests that the
observed anomalously large intrinsic equivalent widths require a burst of very massive
star formation lasting no more than a few to ten per cent of the galaxy's star-forming lifetime. This very massive star formation may indicate the presence of population-III star formation in a few per cent of i-drop galaxies, and in about half of the Lyα
selected galaxies.
Key words: cosmology–theory–galaxies–high redshift
1 INTRODUCTION
Narrow band searches for redshifted Lyα lines have
discovered a large number of Lyα emitting galaxies
with redshifts between z = 4.5 and z = 7.0 (e.g.
Hu & McMahon 1996; Hu et al. 2002; Malhotra & Rhoads
2002; Kodaira et al. 2003; Dawson et al. 2004; Hu et al.
2004; Stanway et al. 2004; Taniguchi et al. 2005;
Westra et al. 2006; Kashikawa et al. 2006; Shimasaku et al.
2006; Iye et al. 2006; Stanway et al. 2007; Tapken et al.
2007). The Lyα line emitted by these galaxies is very
prominent, often being the only observed feature. The
prominence of the Lyα line is quantified by its equivalent
width (EW), defined as the total flux of the Lyα line, FLyα
divided by the flux density of the continuum at 1216 Å:
EW ≡ FLyα/f1216. Throughout this paper we refer to the rest-frame EW of the Lyα line (which is a factor of (1 + z) lower than the EW in the observer's frame).
Approximately 50% of Lyα emitters (hereafter LAEs)
at z = 4.5 and z = 5.7 have lines with EW∼ 100 − 500 Å
⋆ E-mail:[email protected]
† E-mail:[email protected]
(Dawson et al. 2004; Hu et al. 2004; Shimasaku et al. 2006).
For comparison, theoretical studies conclude that the maxi-
mum EW which can be produced by a conventional popula-
tion of stars is 200-300 Å. Moreover, this maximum EW can
only be produced during the first few million years of a star-
burst, while at later times the luminous phase of Lyα EW
gradually fades (Charlot & Fall 1993; Malhotra & Rhoads
2002). Therefore, observed EWs lie near the upper envelope
of values allowed by a normal stellar population.
The quoted value for the upper envelope of EW∼ 200−
300 Å corresponds to the emitted Lyα flux. However not
all Lyα photons are transmitted through the IGM, and we
expect some attenuation. Within the framework of a Cold
Dark Matter cosmology, gas surrounding galaxies is signifi-
cantly overdense, and possesses an infall velocity relative to
the mean IGM (Barkana 2004). As a net result, the IGM
surrounding high redshift galaxies is significantly opaque to
Lyα photons. Indeed it can be shown that for reasonable
model assumptions, only ∼ 10− 30% of all Lyα photons are
transmitted through the IGM (Dijkstra et al. 2007). As a
result, the intrinsic Lyα EW emitted by high redshift LAEs
is systematically larger than observed. Indeed, this observa-
tion suggests that a significant fraction of LAEs at z > 4.5
© 2006 RAS
http://arxiv.org/abs/0704.1671v2
2 Mark Dijkstra & J. Stuart B. Wyithe
have intrinsic EWs that are much larger than can possibly
be produced by a conventional population of young stars.
One possible origin for this large EW population is pro-
vided by active galactic nuclei (AGN), which can have much
larger EWs due to their harder spectra (e.g. Charlot & Fall
1993). However, large EW LAEs are not AGN for several
reasons: (1) the Lyα lines are too narrow (Dawson et al. 2004); (2) these objects typically lack high-ionisation state UV emission lines, which are symptomatic of AGN activity (Dawson et al. 2004); and (3) deep X-ray observations of 101 Lyα emitters by Wang et al. (2004; also see Malhotra et al. 2003; Lai et al. 2007) revealed no X-ray emission, neither from any individual source nor from their stacked X-ray images.
Several recent papers have investigated the stel-
lar content of high-redshift LAEs by comparing stellar
synthesis models with the observed broad band colors
(Finkelstein et al. 2007). These comparisons are often aided
by deep IRAC observations on Spitzer (Lai et al. 2007;
Pirzkal et al. 2007). In this paper we take a different ap-
proach. Instead of focusing on individual galaxies, our goal
is to provide a simple model that describes the population
of Lyα emitting galaxies as a whole. This population is de-
scribed by the rest-frame ultraviolet (UV) and Lyα lumi-
nosity functions (LFs) at z = 5.7 and z = 6.5 , and the
Lyα EW distribution at z = 5.7 (Shimasaku et al. 2006;
Kashikawa et al. 2006). The sample of high-redshift LAEs
is becoming large enough that meaningful constraints can
now be placed on simple models of galaxy formation.
The outline of this paper is as follows: In §2–§5 we describe our models. In §6 we discuss our results, and compare with results from stellar synthesis models, before presenting our conclusions in §7. The parameters for
the background cosmology used throughout this paper are
Ωm = 0.24, ΩΛ = 0.76, Ωb = 0.044, h = 0.73 and σ8 = 0.74
(Spergel et al. 2007).
2 THE MODEL
Dijkstra et al. (2007b) found that the observed Lyα LFs at
z = 5.7 and z = 6.5 are well described by a model in which
the Lyα luminosity of a galaxy increases in proportion to
the mass of its host dark matter halo, Mtot. One can constrain
quantities related to the star formation efficiency from such
a model (also see Mao et al. 2007; Stark et al. 2007).
However, it is also possible to obtain constraints from
the rest-frame UV-LFs. In contrast to the Lyα LF, the UV-
LF is not affected by attenuation by the IGM, which allows
for more reliable constraints on quantities related to the star
formation efficiency. In the first part of this paper (§ 3-§ 4)
we present limited modeling to illustrate parameter depen-
dences, using the UV-LF to constrain model parameters re-
lated to star formation efficiency and lifetime. These model
parameters may then be kept fixed, and the Lyα LFs and
EW distributions used to constrain properties of high red-
shift LAEs such as their intrinsic Lyα EW and the fraction
of Lyα that is transmitted through the IGM. Later, in § 5,
we present our most general model, and fit to both the UV
and Lyα LFs, as well as the EW distribution, simultane-
ously, treating all model parameters as free.
3 MODELING THE UV AND LYα
LUMINOSITY FUNCTIONS.
3.1 Constraints from the UV-LF.
We begin by presenting a simple model for the UV-LF
(Wyithe & Loeb 2007; Stark et al. 2007). In Figure 1 we
show the rest-frame UV-LFs of LAEs at z = 5.7 and
z = 6.5 (Shimasaku et al. 2006; Kashikawa et al. 2006). We
use the following simple prescription to relate the ultravi-
olet flux density emitted by a galaxy, f1350, to the mass
of its host dark matter halo, Mtot. The total mass of baryons
within a galaxy is (Ωb/Ωm)Mtot, of which a fraction f∗
is assumed to be converted into stars over a time scale of
tsys = ǫDCthub. Here, ǫDC is the duty cycle and thub(z), the
Hubble time at redshift z. This prescription yields a star
formation rate of Ṁ∗ = f∗(Ωb/Ωm)M/tsys. The star forma-
tion rate can then be converted into f1350 using the relation
f1350 = 7 × 1027(Ṁ∗/[M⊙/yr]) erg s−1 Hz−1 (Kennicutt
1998). The precise relation is uncertain but differs by a fac-
tor of less than 2 between a normal and a metal-free stel-
lar population (see e.g. Loeb et al. 2005). Uncertainty in
this conversion factor does not affect our main conclusions.
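As a numerical sketch of this prescription (our own illustration: the f∗ and ǫDC values are the best-fit ones quoted below, and the Hubble time at z ∼ 5.7 is our rough estimate):

```python
# Halo mass -> star formation rate -> UV flux density, as described in the text.
Omega_b, Omega_m = 0.044, 0.24
f_star, eps_DC = 0.06, 0.03
t_hub_yr = 1.0e9          # approximate Hubble time at z ~ 5.7, in years (our estimate)

def uv_flux_density(M_halo_msun):
    t_sys = eps_DC * t_hub_yr                                   # ~ 0.03 Gyr
    sfr = f_star * (Omega_b / Omega_m) * M_halo_msun / t_sys    # Msun / yr
    return 7e27 * sfr                                           # erg / s / Hz (Kennicutt 1998)
```

The linear dependence on halo mass is the defining assumption of the model (see footnote 1).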
The presence of dust would lower the ratio of f1350 and Ṁ∗,
which could be compensated for by increasing f∗. However,
Bouwens et al. (2006) found that dust in z = 6.0 Lyman
Break Galaxies (LBGs) attenuates the UV-flux by an aver-
age factor of only 1.4 (and dust obscuration may be even
less important in LAEs, see § 6.3). Since this is within the
uncertainty of the constraint we obtain on f∗, we ignore ex-
tinction by dust. The number density of LAEs with UV-flux
densities exceeding f1350 is then given by
N(> f1350) = ǫDC ∫_{MUV}^{∞} dM (dn/dM),   (1)
where MUV is the mass that corresponds to the flux den-
sity, f1350 (through the relations given above). The func-
tion dn/dM is the Press-Schechter (1974) mass function
(with the modification of Sheth et al. 2001), which gives the
number density of halos of mass M (in units of comoving
Mpc−3)1. The free parameters in our model are the duty
cycle, ǫDC, of the galaxy, and the fraction of baryons that
are converted into stars, f∗. We calculated the UV-LF for
a grid of models in the (ǫDC, f∗)-plane, and generated likelihoods L[P] = exp[−0.5χ²], where χ² = Σ_{i=1}^{Ndata} (modeli − datai)²/σi², in which datai and σi are the ith UV-LF data point and its error, and modeli is the model evaluated at the ith luminosity bin. The sum is over Ndata = 8 data points.
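The fitting procedure is a plain grid search over the likelihood surface; a schematic version with a toy one-parameter model (the data arrays below are placeholders, not the survey points):

```python
import numpy as np

def chi2(model, data, sigma):
    return float(np.sum((model - data) ** 2 / sigma ** 2))

# Placeholder "observed" LF points and a toy one-parameter model family.
data  = np.array([1.0, 0.5, 0.2, 0.05])
sigma = np.array([0.2, 0.1, 0.05, 0.02])

grid = np.linspace(0.5, 2.0, 31)   # e.g. an amplitude parameter
likelihood = np.array([np.exp(-0.5 * chi2(a * data, data, sigma)) for a in grid])
likelihood /= likelihood.max()     # normalise to a peak of unity
best = grid[np.argmax(likelihood)] # recovers a ~ 1 by construction
```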
The inset in Figure 1 shows the resulting likelihood contours
1 Our model effectively states that the star formation rate in a
galaxy increases linearly with halo mass. This is probably not cor-
rect. To account for a different mass dependence we could write
the star formation rate as Ṁ∗ ∝ Mβ, where β is left as a free parameter. However, the range of observed luminosities spans only one order of magnitude, and we will show that the choice β = 1 provides a model that describes the observations well. Furthermore,
the duty cycle ǫDC may be viewed as the fraction of dark mat-
ter halos that are currently forming stars. The remaining fraction
(1− ǫDC) of halos either have not formed stars yet, or are evolv-
ing passively. In either case, the contribution of these halos to the
UV-LF is set to be negligible.
Very Massive Stars in High-z Galaxies 3
Figure 1. Constraints on the star formation efficiency from the
observed rest-frame UV-luminosity functions of LAEs at z = 5.7
(red squares) and z = 6.5 (blue circles) (Shimasaku et al. 2006;
Kashikawa et al. 2006). In our best-fit model, a fraction f∗ ∼
0.06 of all baryons is converted into stars over a time-scale of
ǫDCthub ∼ 0.03 Gyr (see text). The inset shows likelihood con-
tours in the (ǫDC, f∗)-plane at 64%, 26% and 10% of the peak
likelihood. Also shown on the upper horizontal axis is the mass
corresponding to MAB,1350 in the best-fit model.
in the (ǫDC, f∗)-plane at 64%, 26% and 10% of the peak like-
lihood. The best fit model has (ǫDC, f∗) = (0.03, 0.06) and is
plotted as the solid line. In the following sections we assume
this combination of f∗ and ǫDC.
3.2 Constraints from the Lyα LF.
We next model the Lyα LF, beginning with the best-fit
model of the previous section. The number density of LAEs
at redshift z with Lyα luminosities exceeding Tα × Lα is
given by (Dijkstra et al. 2007b)
N(> Tα × Lα, z) = ǫDC ∫_{Mα}^{∞} dM (dn/dM)(z),   (2)
where the Lyα luminosity and host halo mass, Mα are re-
lated by
Tα × Lα = Lα × f∗ (Ωb/Ωm) [Mα(M⊙) / tsys(yr)] × Tα.   (3)
In this relation, Tα is the IGM transmission multiplied by
the escape fraction of Lyα photons from the galaxy, and
Lα = 2.0 × 1042 erg s−1/(M⊙ yr−1), is the Lyα luminosity emitted per unit of star formation rate (in M⊙ yr−1). Throughout, Lα,42 denotes Lα in units of 1042 erg s−1/(M⊙
yr−1). We have taken Lα,42 = 2.0, which is appropriate for
a metallicity of Z = 0.05Z⊙ and a Salpeter IMF (Dijkstra
et al, 2007). Note that when comparing to observed lumi-
nosities (Shimasaku et al. 2006; Kashikawa et al. 2006), we
have replaced Lα with Tα×Lα. This is because the observed
luminosities have been derived from the observed fluxes by
assuming that all Lyα emerging from the galaxy was trans-
mitted by the IGM, whereas there is substantial absorption
(e.g. Dijkstra et al. 2007). The product Tα×Lα may be writ-
ten as Tα×Lα = 4πd2L(z)Sα, where Sα is the total Lyα flux
Figure 2. Joint constraints on the IGM transmission from the
observed Lyα and rest-frame UV LFs of LAEs at z = 5.7
(Shimasaku et al. 2006) and z = 6.5 (Kashikawa et al. 2006). The
red squares and blue circles represent the data at z = 5.7 and
z = 6.5. Using the model that best describes the UV-LFs with
(fstar, ǫDC) = (0.06, 0.03) we fit to the observed Lyα LF. The
only free parameter of our model was the fraction of Lyα that
was transmitted through the IGM at z = 5.7, Tα,57 (see text).
Shown in the inset is the likelihood for Tα,57, normalised to a
peak of unity. The figure shows that in order to simultaneously
fit the Lyα and UV-LFs, only ∼ 30% of Lyα photons are trans-
mitted through the IGM. The best fit models are overplotted as
the solid lines.
detected on earth and dL(z) is the luminosity distance to
redshift z. The product Tα × Lα may therefore be viewed
as an effective luminosity inferred at earth. Furthermore,
the selection criteria used by Shimasaku et al. (2006) and
Kashikawa et al. (2006) limit these surveys to be sensitive to LAEs with EW ≳ 10 Å. In the remainder of this paper, the
EW of model LAEs is always larger than this EWmin, and
we need not worry about selection effects when comparing
our model to the data.
In this section, we set the transmission at z = 6.5 (de-
noted by Tα,65) to be a factor of ∼ 1.2 lower2 than at z = 5.7
(denoted by Tα,57). This ratio is the median of the range
found by Dijkstra et al. (2007). We then calculated the Lyα
LF for a range of Tα,57, and generated likelihoods L[P] = exp[−0.5χ²], where χ² = Σ_{i=1}^{Ndata} (modeli − datai)²/σi², for each model. Here, datai and σi are the ith data point and its error, and modeli is the model evaluated at the ith luminosity bin. The sum is over Ndata = 6 points at each redshift.
In Figure 2 we show the Lyα luminosity functions at z = 5.7
and z = 6.5. The red squares and blue circles represent data
from Shimasaku et al. (2006, z = 5.7) and Kashikawa et al.
(2006, z = 6.5), respectively.
The likelihood for Tα,57 (normalised to a peak of unity)
is shown in the inset. The best fit model is overplotted
as the solid lines, for which the value of the transmission
is Tα,57 = 0.30. The modeling presented in this and the
2 For our primary results in § 5 we allow this ratio to be a free
parameter. The results presented in this section is not sensitive
to the precise choice of the ratio of IGM transmission at z = 5.7
and z = 6.5.
previous section therefore suggests that, in order to simul-
taneously fit the Lyα and UV luminosity functions, only
∼ 30% of the Lyα can be transmitted through the IGM. This
transmission is in good agreement with the results obtained
by Dijkstra et al. (2007), who modeled the transmission di-
rectly and found that for reasonable model parameters the
transmission must lie in the range 0.1 ≲ Tα ≲ 0.3.
3.3 The Predicted Equivalent Width
While in agreement with the observed LFs, the model de-
scribed in § 3.2 does not reproduce the very large ob-
served equivalent widths. The Lyα luminosity can be rewrit-
ten in terms of the star formation rate (Ṁ∗) and EW as
Lα = 2.0 × 1042 erg s−1Ṁ∗(M⊙/yr)(EW/160 Å). Here we
have used the relation Lα =EW×[ναf(να)/λα], where f(να)
is the flux density in erg s−1 Hz−1 at να, and where we
denoted the Lyα frequency and wavelength by να and λα
respectively. Furthermore, we assumed the spectrum to be
constant between 1216 Å and 1350 Å. For Lα,42 = 2.0, we
find a best fit model that predicts LAEs to have an observed
EW of Tα × 160 Å ∼ 50 Å. This value compares unfavor-
ably with the observed sample that includes EWs exceed-
ing ∼ 100 Å in ∼ 50% of cases for Lyα selected galaxies
(Shimasaku et al. 2006).
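The arithmetic in the relation above is easy to check; a small sketch using the quoted conversion constant (Lα,42 = 2.0):

```python
def lya_luminosity(sfr, ew):
    # L_alpha = 2.0e42 erg/s * (SFR / Msun yr^-1) * (EW / 160 A)
    return 2.0e42 * sfr * (ew / 160.0)

T_alpha = 0.30                 # best-fit transmission at z = 5.7
observed_ew = T_alpha * 160.0  # -> 48 A, the "~50 A" quoted in the text
print(lya_luminosity(1.0, 160.0), observed_ew)
```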
Thus although our simple model can successfully repro-
duce the observed Lyα and UV luminosity functions, the
model fails to reproduce the observed large EW LAEs. This
discrepancy cannot be remedied by changing the intrinsic
Lyα emissivity of a given galaxy, Lα: increasing Lα would
simply be compensated for by a lower Tα, and vice versa.
In the next section we discuss a simple modification of this
model that aims to alleviate this discrepancy.
4 THE FLUCTUATING IGM MODEL
The model described in § 3 assumed that the Lyα flux of
galaxies was subject to uniform attenuation by the IGM.
In this section we relax this assumption and investigate the
predicted EWs in a more realistic IGM where transmission
fluctuates between galaxies. We refer to this model as the
’fluctuating IGM’ model. In this model, a larger transmis-
sion translates to a larger observed equivalent width. As a
result, galaxies with large Tα are more easily detected, and
the existence of these galaxies may therefore affect the ob-
served EW-distribution for Lyα selected galaxies, even in
cases where they comprise only a small fraction of the in-
trinsic population. In this section we investigate whether
this bias could explain the anomalously large observed EW.
We assume a log-normal distribution for Tα,57,

P(u) du = [1/(√(2π) σu)] exp[−(u − 〈u〉)²/(2σu²)] du,   (4)

where 〈u〉 = log〈Tα,57〉 is the log (base 10) of the mean transmission and σu is the standard deviation in log-space. Throughout this section we drop the subscript '57'. Eq (4) may be rewritten in the form

f(>u) = (1/2) erfc[(u − 〈u〉)/(√2 σu)],   (5)
Figure 3. Same as Figure 2. However, instead of assuming a
single value of IGM transmission Tα,57, we assumed a log-normal
distribution of IGM transmission with mean log Tα,57 and standard deviation (in the log) σu (see text). This reflects the
possibility that the IGM transmission fluctuates between galaxies.
The inset shows likelihood contours for (log Tα,57, σu). Increasing
σu flattens the luminosity function (and moves it upward), which
is illustrated by the model LFs shown as solid lines, for which
we used (Tα,57, σu) = (0.27, 0.2) (shown as the thick black dot
in the inset). The best-fit model to the data has σu ∼ 0 (which
corresponds to the model shown in Figure 2).
which gives the fraction of LAEs with log Tα > u. The number density of LAEs is then given by

N(>Tα × Lα) = ǫDC ∫₀^∞ dM (dn/dM) f(>u(Tα × Lα, M)),   (6)

where

u(Tα × Lα, M) = log[(Tα × Lα)/(Lα Ṁ∗)].   (7)
Eq (6) differs from Eq (2) in two ways: (1) there is no lower
integration limit, and (2) there is an additional term f(>
u). These two differences reflect the facts that all masses
contribute to the number density of LAEs brighter than Tα×
Lα, and that lower mass systems require larger transmissions
(Eq 7) which are less common (Eq 5). In the limit σu → 0,
the function f(> u) ’jumps’ from 0 to 1 at Mmin (Eq 3),
which corresponds to the original Eq (2).
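Assuming the Gaussian-in-log form of Eq (4), the fraction in Eq (5) is half a complementary error function, available in the standard library; a sketch, with the σu → 0 limit handled explicitly:

```python
import math

def frac_above(u, u_mean, sigma_u):
    # f(>u): fraction of LAEs with log-transmission greater than u (Eq 5).
    if sigma_u == 0.0:
        # sigma_u -> 0 limit: f(>u) 'jumps' between 1 and 0 at u = <u>,
        # recovering the sharp threshold Mmin of the original Eq (2).
        return 1.0 if u < u_mean else 0.0
    return 0.5 * math.erfc((u - u_mean) / (math.sqrt(2.0) * sigma_u))

u_mean = math.log10(0.27)                  # <u> for <T_alpha> = 0.27
print(frac_above(u_mean, u_mean, 0.2))     # -> 0.5: half lie above the mean
```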
In this formalism we may also write the number density of LAEs with transmission in the range u ± du/2 ≡ log Tα ± d log Tα/2, which is given by

N(u) du = P(u) du × ǫDC ∫_{Mmin(u)}^∞ dM (dn/dM).   (8)
Here Mmin(u) is the minimum mass of galaxies that can
be detected with a transmission in the range u± du/2 (for
Mtot < Mmin the total flux falls below the detection thresh-
old). The number density of LAEs with transmission in the
range u ± du/2 may be used to find the number density of
LAEs with equivalent widths in the range log EW± d log EW
via the relation EW= 160Tα Å (for the choice Lα,42 = 2.0,
see § 3.2). Eq (8) shows that the observed equivalent width
distribution takes the shape of the original transmission dis-
tribution, modulo a boost which increases towards larger EW.
As in § 3.2 we assume the best-fit model parameters
Very Massive Stars in High-z Galaxies 5
for ǫDC and f∗ derived from the UV-LFs determined in
§ 3.1. We calculate model Lyα LFs on a grid of models in
the (σu, 〈Tα〉)-plane, and generate likelihoods following the
procedure outlined in § 3.2. The results of this calculation
are shown in Figure 3 where we plot the Lyα LFs together
with likelihood contours in the (σu, 〈Tα〉)-plane (inset). The
best-fit models favor no scatter in Tα (σu ∼ 0). The reason
for this is that for any given model, a scatter in Tα serves
to flatten the model LF. However, the observed Lyα LF is
quite steep, and as a result the data prefers a model with no
scatter. Furthermore, a scatter in Tα results in a model LF
that lies above the original (constant transmission) LF at
all Tα × Lα. This also explains the shape of the contours in
the (σu, 〈Tα〉)-plane; increasing σu must be compensated for
by lowering 〈Tα〉. To illustrate the impact of a fluctuating
Tα, the model LFs shown in Figure 3 are not those of the
best-fit model, but of a model with (〈Tα〉, σu) = (0.27, 0.2).
Figure 4 shows the observed EW-distribution (solid
line) at z = 5.7 associated with the Lyα LFs shown in Fig-
ure 3. This model may be compared to the observed distri-
bution (shown as the histogram; data from Shimasaku et al.
2006). The upper horizontal axis shows the transmission cor-
responding to each equivalent width. The dotted line shows
the distribution of transmission (given by Eq 4). Note that
the range of transmissions shown in Figure 4 extends to
Tα > 1, which of course is not physical. The model EW-
distribution has a tail towards larger EWs. However, the
model distribution peaks at EW∼ 50 Å. As was seen in
the constant transmission model, this is clearly inconsistent
with the observations, which favor values of EW∼ 100 Å.
Shimasaku et al. (2006) also show a probability distribution
of EW, which is probably closer to the actual distribution.
Although this distribution does peak closer to EW∼ 50 Å,
it is still significantly broader than that of the model (see
§ 6 for more arguments against the fluctuating IGM model).
We point out that the model distribution shown in Figure 4
is independent of the assumed value of Lα,42.
5 LYα EMITTERS POWERED BY VERY
MASSIVE STARS.
In § 3 we demonstrated that a simple model where Lα and
LUV were linearly related to halo mass can reproduce the
UV and Lyα LFs, but not the observed EW distribution.
In § 4 we showed that this situation is not remedied by a
variable IGM transmission, and that favored models have a
constant transmission. In this section we discuss an alternate
model, which leads to consistency with both the observed
Lyα LFs, UV-LFs, and the EW-distribution. In this model,
galaxies are assumed to have a bright Lyα phase (hereafter
the ’population III’-phase) which lasts a fraction fIII of the
galaxies’ life-time. After this the galaxy’s Lyα luminosity
drops to the ’normal’ value for population II star formation.
This model may be viewed as an extension of the idea
originally described by Malhotra & Rhoads (2002), that
large EW LAEs are young galaxies in the early stages of
their lives. In this picture, the sudden drop in Lyα luminos-
ity could represent i) a sudden drop in the ionising lumi-
nosity when the first O-stars died, or ii) an enhanced dust-
opacity after enrichment by the first type-II supernovae. Al-
ternatively, our parametrisation could represent a scenario
Figure 4. Comparison of the observed equivalent width distribu-
tion (EW, histogram), with the model prediction for a model in
which we assumed a log-normal distribution of IGM transmission
with (Tα,57, σu) = (0.27, 0.2) (see Fig 3). The EW is related to Tα
via EW= 160Tα Å. The dotted line shows the fraction, f(> Tα)
(shown on the right vertical axis), of galaxies with a transmission
greater than Tα (Eq 5). Galaxies with large Tα are more easily
detected, hence the large Tα (EW) end is boosted considerably,
resulting in closer agreement (but not close enough) to the data.
in which the population III phase ended after the first pop-
ulation III stars enriched the surrounding interstellar gas
from which subsequent generations of stars formed. Hence,
we refer to this model as the ’population III’ model. We will
show that to be consistent with the large values of the ob-
served EW, a very massive population of stars is required
during the early stages of star formation.
To minimise the number of free parameters we modeled
the time dependence of the Lyα EW as a step-function. The
number density of LAEs is then given by

N(>Tα × Lα, z) = fIII × ǫDC ∫_{Mα,III}^∞ dM (dn/dM)(z) + (1 − fIII) × ǫDC ∫_{Mα,II}^∞ dM (dn/dM)(z).   (9)

Here, Mα,II is the mass related to Tα × Lα through Tα × Lα = Lα × Tα × Ṁ∗, while Mα,III is the population III mass, which is calculated with Lα replaced by Lα = (EWIII/160 Å) × 2 × 10⁴² erg s⁻¹.
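The structure of Eq (9) can be sketched with a toy power-law mass function (the paper integrates a proper halo mass function; dn/dM ∝ M⁻² and all normalisations here are illustrative only):

```python
def n_above(m_min, alpha=2.0, n0=1.0, m_max=1e15):
    # Integral of a toy mass function dn/dM = n0 * M^-alpha from m_min
    # to m_max (closed form for alpha != 1); stands in for the halo
    # mass function integrals in Eq (9).
    return n0 / (alpha - 1.0) * (m_min ** (1.0 - alpha) - m_max ** (1.0 - alpha))

def n_laes(f_III, m_III, m_II, eps_DC=0.2):
    # Eq (9): a fraction f_III of duty-cycle-active galaxies is caught in
    # the bright phase (detectable above the lower mass m_III), the rest
    # in the faint phase (detectable only above m_II > m_III).
    return eps_DC * (f_III * n_above(m_III) + (1.0 - f_III) * n_above(m_II))

# Adding a bright phase raises the number density at fixed observed flux:
print(n_laes(0.08, 1e10, 4e10) > n_laes(0.0, 1e10, 4e10))  # -> True
```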
Whereas in previous sections we chose fiducial or best
fit parameters for illustration, for the model described in this
section we take the most general approach. We fit the model
simultaneously to the UV-LF and Lyα LFs, as well as to
the observed EW-distribution of Lyα selected galaxies. This
model predicts two observed equivalent widths (Tα×EWIII
and Tα×EWII) in various abundances. The associated mean
and variance from the model are compared to the observed
EW-distribution, which has a mean of 〈EW〉 = 120± 25 Å,
and a standard deviation of σEW = 50± 10 Å.
Our model has 6 free parameters
(ǫDC, f∗, Tα,57, Tα,65, fIII,EWIII). We produce likelihoods
for each parameter by marginalising over the others in
this space. The lower set of panels in Figure 5 show
likelihood contours for our model parameters at 64%,
26% and 10% of the peak likelihood. The best-fit models
have EWIII ∼ 600 − 800 Å and fIII = 0.04 − 0.1 which
Figure 5. Marginalised constraints on the 6 population-III model parameters, (ǫDC, f∗,Tα,57,Tα,65, fIII,EWIII), are shown in the lower
three panels. In the best-fit population-III model, each galaxy goes through a luminous Lyα phase during which the equivalent width
is EWIII = 600 − 800 Å for a fraction fIII = 0.04 − 0.1 of the galaxies' lifetime. Although the bright phase only lasts a few to ten per
cent of their life-time, galaxies in the bright phase are more easily detectable, and the number of galaxies detected in Lyα surveys in the
bright phase is equal to that detected in the faint phase. This is also demonstrated by the thick solid line (with label ’1.0’) in the lower
left panel, which shows the combination of fIII and EWIII that produces equal numbers of galaxies in the population III and II phase
(the dashed lines are defined similarly, also see Fig 6). The best-fit intrinsic equivalent widths are in excess of the maximum allowed for
a population-II stellar population having a Salpeter mass function. Therefore, this model requires a burst of very massive star formation
lasting no more than a few to ten percent of the galaxies' star-forming lifetime, and may indicate the presence of population-III star
formation in a large number of high-redshift LAEs.
corresponds to a physical timescale for the population-III
phase of fIII × ǫDC × thub ∼ 4 − 50 Myr (for 0.1 ≲ ǫDC ≲ 0.5).
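The quoted physical timescale follows directly; a sketch assuming a Hubble time of roughly 0.95 Gyr at z ≈ 6 (our assumed value, not stated in the text):

```python
t_hub = 950.0  # Myr; rough Hubble time at z ~ 6 (assumption)

# corners of the allowed ranges f_III = 0.04-0.1 and eps_DC = 0.1-0.5
t_min = 0.04 * 0.1 * t_hub   # -> 3.8 Myr
t_max = 0.10 * 0.5 * t_hub   # -> 47.5 Myr, i.e. the quoted ~4-50 Myr
print(t_min, t_max)
```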
The model Lyα luminosity functions at z=5.7 and
z=6.5 described by (ǫDC, f∗, Tα,57, Tα,65, fIII,EWIII) =
(0.2, 0.14, 0.22, 0.19, 0.08, 650 Å) are shown as solid lines
and provide good fits to the data. The model produces
two observed EWs, namely Tα×EWII = 35 Å and Tα×
EWIII = 143 Å. It is worth emphasising that the emitted
EW of the bright phase depends on the choice Lα,42 via
EWIII = 650(Lα,42/2.0) Å. Hence, a lower/higher value
of Lα,42 would decrease/increase the intrinsic brightness
of the ’population III’ phase. Note that Lα,42 = 1.0 when
LAEs formed out of gas of solar metallicity, which is
unreasonable given the universe was only ∼ 1 Gyr old at
z = 6. Furthermore, Lα,42 = 1 would have yielded a best-fit
Tα,57 = 0.5, which is well outside the range calculated by
Dijkstra et al. (2007). We therefore conclude that Lα,42 is in
excess of unity.
In performing fits we have fixed the value of EWII to
correspond to a standard stellar population, and then ex-
plored the possibility that there might be a second phase of
SF producing a larger EW. Our modeling finds strong sta-
tistical evidence for this early phase and rules out the null-
hypothesis that properties can be described by population-II
stars alone at high confidence (grey region in the lower left
panel inset of Fig 5). Despite the fact that the best fit model
has a bright phase which lasts only a few per cent of the total
star formation lifetime, the two populations of LAEs are sim-
ilarly abundant in model realisations of the observed sample
in Lyα selected galaxies (see § 5.1 for a more detailed com-
parison to the observed EW distributions). This is shown
in the lower left panel in Figure 5 in which the solid line
(with label ’1.0’) shows the combination of fIII and EWIII
for which the observed number of galaxies in the popula-
tion III phase (NIII) equals that in the population II phase
(NII). The dashed lines show the cases NIII/NII = 0.3 and
NIII/NII = 3.0. The duration of the bright Lyα phase meets
theoretical expectations for a burst of star formation, while
the large EW requires a very massive stellar population (e.g
Schaerer 2003; Tumlinson et al. 2003). In summary, in order
to reproduce both the UV and Lyα LFs, and the observed
population of large EW galaxies, we require a burst of very
massive star formation lasting ≲ 10 per cent of the galaxies' lifetime.
5.1 EW Distribution of UV and Lyα Selected
Galaxies
Stanway et al. (2007) show that 11 out of 14 LAE candidates
among i-drop galaxies in the Hubble Ultra Deep Field have
EW< 100 Å. If galaxies are included for which only upper or
lower limits on the EW are available, then this fraction be-
comes 21 out of 26. Thus the distribution of EWs for i-drop
selected galaxies differs strongly from the EW-distribution
observed by Shimasaku et al. (2006). We next describe why
this strong dependence of the observed Lyα EW distribu-
tion on the precise galaxy selection criteria arises naturally
in our population III model.
We first assume that the Lyα selected and UV-selected
galaxies were drawn from the same population (this assump-
tion is discussed further in § 6.3). In our model a galaxy that
is selected based on it’s rest-frame UV-continuum emission
has a probability fIII of being observed in the Lyα bright
phase, while the probability of finding a galaxy in the
Lyα faint phase is 1 − fIII. In § 5 we found fIII ∼ 0.1,
hence an i-drop galaxy is ∼ 10 times more likely to have
a low than a high observed EW. If we denote the number
of galaxies with EW> 100 Å by NIII, and the number of
galaxies with EW< 100 Å by NII, then the model predicts
NIII/NII =fIII/(1 − fIII) ∼ 0.1, while the observed fraction
including the galaxies for which the EW is known as upper
or lower limit is NIII/NII = 0.19± 0.05. Therefore the quali-
tative difference in observed Lyα EW distribution among i-
drop galaxies in the HUDF and among Lyα selected galaxies
follows naturally from our two-phase star formation model.
Note that our model predicts population III star formation
to be observed in fIII/(1 − fIII) ∼ 10% of the z = 6.0 LBG
population.
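The predicted fraction above is simple odds arithmetic; a sketch comparing it to the quoted measurement:

```python
f_III = 0.1
ratio = f_III / (1.0 - f_III)     # N_III / N_II ~ 0.11 for UV-selected galaxies

obs, err = 0.19, 0.05             # observed fraction, limits included
print(ratio, (obs - ratio) / err) # model sits ~1.6 sigma below the data
```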
The dependence of the observed EW distribution on the
selection criteria used to construct the sample of galaxies is
illustrated in Figure 6. To construct this figure, we have
taken the best-fit population III model of § 5. For the pur-
pose of presentation, we let the IGM fluctuate according to
the prescription of § 4 with σu = 0.1, so that the model pre-
dicts a finite range of EWs in each phase. The left and right
panels show the predicted EW distribution for Lyα selected
(left panel) and UV-selected (right panel) galaxies as the
solid lines, respectively. For a UV-selected galaxy the prob-
ability of being in the bright phase and having an observed
EW in the range EWIII×(Tα±dTα/2) is fIIIP (Tα)dTα. Here
P (Tα)dTα is the probability that the IGM transmission is in
the range Tα±dTα/2, which is derived from Eq 4. The units
on the vertical axis are arbitrary, and chosen to illustrate
the different predicted and observed Lyα EW distributions
for the two samples at large EWs. The observed distribu-
tions for Lyα and UV selected galaxies, shown as histograms,
are taken from Shimasaku et al. (2006) and Stanway et al.
(2007), respectively. Figure 6 clearly shows that both the
predicted and observed Lyα selected samples contain signif-
icantly more large EW LAEs than the UV-selected sample.
Our model naturally explains the qualitative shape of these
distributions and their differences.
Before proceeding we mention a caveat to the distribu-
tions shown in Figure 6. In our model all galaxies have an
EW of Tα×EWII ∼ 25−35 Å during the population II phase,
while in contrast Stanway et al. (2007) do not detect 10 out
of 26 LBGs, which implies that ∼ 40% of LBGs have an
Figure 6. Comparison of the predicted EW distribution for UV
and Lyα selected galaxies. The best-fit population-III model (see
§ 5) was used. In order to get a finite range of observed EWs (instead of only two values at Tα×EWII and Tα×EWIII), we assumed
the IGM transmission to fluctuate. The units on the vertical axis
are arbitrary. The figure shows that in our population III model,
the Lyα selected sample contains a larger relative fraction of large
EW LAEs than the UV-selected (i-drop) sample, which is quali-
tatively in good agreement with the observations (shown by the
histograms).
EW ≲ 6 Å. Thus there is a discrepancy between our model
and the observations with respect to the value of EW in the
population II phase. The resolution of this discrepancy lies
in the fact that the very low EW emitters are drawn from
the UV (i-drop) sample and not the Lyα selected sample
our model was set up to describe. This issue is discussed in
more detail in § 6.3.
An EW distribution of dropout sources was also pre-
sented by Dow-Hygelund et al. (2007). These authors per-
formed an analysis similar to Stanway et al. (2007) and
found 1 LAE with EW= 150 Å among 22 candidate
z=6.0 LBGs. When interpreted in reference to our model,
this translates to NIII/NII ∼ 5%, which is consistent
with the model predictions. Therefore, when interpreted
in light of a two-phase star formation history and differ-
ent selection methods, the EW distribution observed by
Dow-Hygelund et al. (2007) is consistent with that found
by Shimasaku et al. (2006).
If population III star formation does provide the ex-
planation for the very large EW Lyα emitters, then we
would expect the large EW emitters to become less common
with time as the mean metallicity of the Universe increased.
To test this idea, we can compare the EW-distribution at
z = 5.7 with the results at lower redshift from Shapley et al.
(2003), who found that ≲ 0.5% of z = 3 LBGs have Lyα EW > 150 Å, and that ≲ 2% of z = 3 LBGs have Lyα
EW> 100 Å. Dow-Hygelund et al. (2007) argue that the
fraction of large EW Lyα lines at z = 6 is consistent with
that observed at z = 3 (Shapley et al. 2003). However, if
the EW distribution did not evolve with redshift, then the
probability that a sample of 22 LBGs will contain at least
1 LAE with EW ≳ 150 Å is ≲ 10%. Thus the hypothesis
that the observed EW distribution remains constant is ruled
out at the ∼ 90% level. On the other hand, in a similar
analysis Stanway et al. (2007) found 5 out of 26 LBGs to
have an EW ≳ 100 Å. If the EW distribution did not evolve
with redshift, then the probability of finding 5 with EW ≳ 100 Å in this sample is only ∼ 10⁻⁴. Furthermore, Nagao et al.
(2007) recently found at least 5 LAEs with EW> 100 Å at
6.0 ≲ z ≲ 6.5, and conclude that 8% of i'-drop galaxies in
the Subaru Deep Field have EW> 100 Å, which is signifi-
cantly larger than the fraction of large EW LBGs at z = 3.
Therefore, the observed EW distribution of LBGs at z = 6 is
skewed more toward large EWs than at z = 3. The strength
of this result is increased by the fact that the IGM is more
opaque to Lyα photons at z = 6 than at z = 3. Thus we con-
clude that the intrinsic EW distribution must have evolved
with redshift.
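The two no-evolution probabilities quoted above are binomial tail probabilities; a sketch reproducing them:

```python
from math import comb

def tail_prob(n, p, k_min):
    # P(X >= k_min) for X ~ Binomial(n, p): the chance of at least
    # k_min large-EW objects in a sample of n, each with probability p.
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))

# >= 1 LAE with EW > 150 A among 22 LBGs, if the z = 3 fraction (0.5%) held:
p1 = tail_prob(22, 0.005, 1)    # ~0.10, the "<~10%" quoted
# >= 5 LBGs with EW > 100 A among 26, if the z = 3 fraction (2%) held:
p5 = tail_prob(26, 0.02, 5)     # ~1.4e-4, the "~10^-4" quoted
print(p1, p5)
```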
6 DISCUSSION
6.1 Comparison with Population Synthesis
Models
Population synthesis models have suggested that the broad
band colors of observed LAEs are best described with young
stellar populations (Gawiser et al. 2006; Pirzkal et al. 2007;
Finkelstein et al. 2007). Lai et al. (2007) found the stellar
populations in three LAEs to be 5−100 Myr old, and possi-
bly as old as 700 Myr (where the precise age upper limit de-
pends on the assumed star formation history of the galaxies).
However as was argued by Pirzkal et al. (2007), since these
galaxies were selected based on their detection in IRAC,
a selection bias towards older stellar populations may ex-
ist (also see Lai et al. 2007). Furthermore, Finkelstein et al.
(2007) found that LAEs with EW > 110 Å have ages ≲ 4
Myr, while LAEs with EW< 40 Å have ages between 20-
400 Myr. This latter result in particular agrees well with
our population III model. On the other hand, in a fluctu-
ating IGM model for example, the EW of LAEs should be
uncorrelated with age.
In models presented in this paper, on average f∗ ∼ 0.15
of all baryons are converted into stars within halos of mass
Mtot ∼ 1010 − 1011M⊙, yielding stellar masses in the range
M∗ = 10⁸ − 10⁹ M⊙ (Dijkstra et al. 2007b; Stark et al. 2007).
This compares unfavorably with the typical stellar masses
found observationally in LAEs which can be as low as
M∗ = 10⁶ − 10⁷ M⊙ (Finkelstein et al. 2007; Pirzkal et al.
2007). However, the lowest stellar masses are found (natu-
rally) for the younger galaxies. Indeed, the LAEs with the
oldest stellar populations can have stellar masses as large
as 1010M⊙. Thus, we do not find the derived stellar masses
in LAEs to be at odds with the results of this paper. If
significant very massive (or population III) star formation
indeed occurred in high redshift LAEs, then one may ex-
pect these stars to reveal themselves in unusual broad-band
colors (e.g. Stanway et al. 2005). However, Tumlinson et al.
(2003) have shown that the most distinctive feature in the
spectrum of population III stars is the number of H and He
ionising photons (also see Bromm et al. 2001). Since these
are (mostly) absorbed in the IGM, the broad band spec-
trum of population III stars is in practice difficult to dis-
tinguish from a normal stellar population (Tumlinson et al.
2003), especially when nebular continuum emission is taken
into account (Schaerer & Pelló 2005, see their Fig 1). Hence,
population III stars would not necessarily be accompanied
by unusually blue broad band colors.
6.2 Alternative Explanations for Large EW LAEs
We have shown that a simple model in which high-redshift
galaxies go through a population-III phase lasting ≲ 15
Myr can simultaneously explain the observed Lyα LFs at
z = 5.7 and z = 6.5 (Kashikawa et al. 2006), and the ob-
served EW-distribution of Lyα selected galaxies at z = 5.7
(Shimasaku et al. 2006). In addition, this model predicts the
much lower EWs found in the population of UV selected
galaxies (Stanway et al, 2007, see § 5.1). Moreover the con-
straints on the population-III model parameters such as the
duration and the equivalent width of the bright phase are
physically plausible, and consistent with existing population
synthesis work (see § 6.1).
Are there other interpretations of the large observed
EWs? One possibility was discussed in § 4, where we showed
that the simple model in which the IGM transmission fluc-
tuates between galaxies reproduces the LFs, but fails to
simultaneously reproduce the Lyα LFs and the observed
EW-distribution. In addition, this model fails to reproduce
other observations. Dijkstra et al. (2007) calculated the im-
pact of the high-redshift reionised IGM on Lyα emission
lines and found the range of plausible transmissions to lie
in the range 0.1 < Tα < 0.3. This work showed that it is
possible to boost the transmission to (much) larger values
but not without increasing the observed width of the Lyα
line. Absorption in the IGM typically erases all flux blue-
ward of the Lyα resonance, and when infall is accounted
for, part of the Lyα redward of the Lyα resonance as well.
This implies that Lyα lines that are affected by absorption
in the IGM are systematically narrower than they would
have been if no absorption in the IGM had taken place. It
follows that in the fluctuating IGM model, Lyα EW should
be strongly correlated with the observed Lyα line width (or
FWHM). This correlation is not observed. In fact, observa-
tions suggest that an anti-correlation exists between EW
and FWHM (Shimasaku et al. 2006; Tapken et al. 2007).
This anti-correlation provides strong evidence against the
anomalously large EWs being produced by a fluctuating
IGM transmission.
A second possibility is the presence of galaxies with
strong superwinds. The models of Dijkstra et al. (2007) did
not study the impact of superwinds on the Lyα line pro-
file. The presence of superwinds can cause the Lyα line to
emerge with a systematic redshift relative to the Lyα reso-
nance through back scattering of Lyα photons off the far side
of the shell that surrounds the galaxy (Ahn et al. 2003; Ahn
2004; Hansen & Oh 2006; Verhamme et al. 2006). However,
superwinds tend not only to redshift the Lyα line, they also
make the Lyα line appear broader than when this scatter-
ing does not occur. As in the case of the fluctuating IGM
model, this results in a predicted correlation between EW
and FWHM, which is not observed. Furthermore in wind-
models, the overall redshift of the Lyα line, and thus Tα,
increases with wind velocity, vw. This predicts that EW in-
creases with wind velocity. However, observations of z = 3
LBGs by Shapley et al. (2003) show that EW correlates with vw⁻¹ (Ferrara & Ricotti 2006). We therefore conclude that
the large EW in LAEs cannot be produced by superwind
galaxies.
A third possibility might be that within the Lyα emit-
ting galaxy, cold, dusty clouds lie embedded in a hot inter-
Figure 7. Comparison of the best-fit population III model
(shown in Figure 5) with the UV-LF constructed by
Bouwens et al. (2006) using ∼ 300 LBGs in the Hubble Deep
Fields. This good agreement is likely a coincidence, since in our model all LBGs have an observed EW ≳ 30 Å, while the observations show that ∼ 40% of all LBGs are not detected in Lyα (EW ≲ 6 Å). This overlap could possibly be (partly) due to a lower dust content of LAEs relative to their Lyα quiet counterparts (see text).
cloud medium of negligible Lyα opacity. Under such condi-
tions, the continuum photons can suffer more attenuation
than Lyα photons which bounce from cloud to cloud and
mainly propagate through the hot, transparent inter-cloud
medium (Neufeld 1991; Hansen & Oh 2006). This attenua-
tion of continuum leads to a large EW. We point out that in
this scenario, large EW LAEs are not intrinsically brighter
in Lyα. At fixed Lyα flux, one is therefore equally likely
to detect a low EW LAE. In other words, to produce the
observed EW distribution one requires preferential destruc-
tion of continuum flux by dust in ∼ 50% of the galaxies.
Currently there is no evidence that this mechanism is at
work even in one galaxy. Furthermore, the rest-frame UV
colors of galaxies in the Hubble Ultra Deep Field imply that
dust in high-redshift galaxies suppresses the continuum flux
by only a factor of ∼ 1.4 (Bouwens et al. 2006). The maxi-
mum boost of the EW in a multi-phase ISM is therefore 1.4,
which is not nearly enough to produce intrinsic equivalent
widths of EW∼ 600 − 800 Å. In summary, the only model
able to simultaneously explain all observations calls for a
short burst of very massive star formation.
6.3 Comparison with the LBG Population
In § 5.1 we have shown that the observed EW distributions
of Lyα selected and i-drop galaxies and their differences can
be reproduced qualitatively with our population III model.
However, in our model all high-redshift galaxies have an ob-
served EW of at least Tα×EWII ∼ 30 Å, whereas many
i-drop galaxies are not detected in Lyα. Kashikawa et al.
(2006) show that the UV-LFs of LAEs at z = 6.5 and
z = 5.7 overlap with that constructed by Bouwens et al.
(2006) from a sample of ∼ 300 z = 6 LBGs discovered in
the Hubble Deep Fields. Naively, this overlap implies that
LBGs and LAEs are the same population and therefore that
all LBGs should be detected by Lyα surveys. Since Lyα sur-
veys only detect galaxies with EW ≳ 20 Å, this suggests all LBGs should have a Lyα EW ≳ 20 Å, contrary to observa-
tion. To illustrate this point further, we have taken the best-
fit population III model shown in Figure 5 and compared the
model predictions for the rest-frame UV-LF with that of
Bouwens et al. (2006) in Figure 7. Clearly, our best-fit pop-
ulation III model fits the data well. However, Stanway et al.
(2007) found ∼ 40% of i-drop galaxies in the HUDF to have
an observed EW ≲ 6 Å, and a similar result was presented
by Dow-Hygelund et al. (2007).
Two effects may help reconcile these two apparently
conflicting sets of observations: (i) Dow-Hygelund et al.
(2007) found Lyα emitting LBGs to be systematically
smaller. That is, for a fixed angular size, the z850-band flux
of LAEs is systematically higher with ∆z850 ∼ −1. If we
assume that the angular scale of a galaxy is determined by
the mass of its host halo, then this implies that for a fixed
mass the z850-band flux of LAEs is systematically higher,
and (ii) only a fraction of LBGs are LAEs. The drop-out
technique used to select high redshift galaxies is known to
introduce a bias against strong LAEs, as a strong Lyα line
can affect the broad band colors of high-redshift galaxies.
This may cause ∼ 10−46% of large EW LAEs to be missed
using the i-drop technique (Dow-Hygelund et al. 2007).
If only a fraction fα of all LBGs are detected in Lyα,
then effect (i) would explain why the UV-LF of LAEs lies
less than a factor of 1/fα below the observed UV-LF of
the general population of LBGs. This is because a more
abundant lower mass halo is required to produce the same
UV-flux in LAEs, which would shift the LF upwards. In
addition, effect (ii) may reduce this difference even further.
It follows that these two effects combined may cause the LFs
to overlap. Thus the overlap of the UV and Lyα selected UV-
LFs appears to be a coincidence, and not evidence of their
being the same population of galaxies. This implies that our
model is valid for Lyα selected galaxies, but not the high-
redshift population as a whole and explains the lack of very
low EWs in UV selected samples discussed in § 5.1.
The reason why LAEs may be brighter in the UV for
a fixed halo mass is unclear. It is possibly related to dust
content. Bouwens et al. (2006) found the average amount
of UV exctinction to be 0.4 mag in the total sample of
z = 6 LBGs. This value is close to the average excess z850-
band flux detected from LAEs for a given angular scale
(Dow-Hygelund et al. 2007). If LAEs contain less (or no)
dust, then this would explain why they are brighter in the
UV and thus why they appear more compact. The possi-
bility that 'Lyα quiet' LBGs contain more dust than their Lyα emitting counterparts is not very surprising, as even a low dust abundance has the potential to eliminate the Lyα line.
Thus LAEs could be high redshift galaxies with a lower dust
content.
Shimasaku et al. (2006) and Ando et al. (2006) found
that luminous LBGs, MUV ≲ −21.0, typically do not con-
tain large EW Lyα emission lines. This deficiency of large
EW LAEs among UV-bright sources is not expected in
our model, and may reflect that UV-bright sources are
more massive, mature, galaxies that cannot go through a
population-III phase anymore. It should be pointed out
though that the absence of large EW LAEs among galaxies
with MUV <∼ − 21.0 in the survey of Shimasaku et al.
(2006) is consistent with our model: The observed num-
ber density of sources with MUV <∼ − 21.0 is ∼ 5× 10−5
cMpc−3 (see Fig 1). In our best-fit pop-III model (shown in
Fig 5), a fraction fIII ∼ 0.08 of these galaxies would be in the
bright phase. This translates to a number density of large
EW LAEs of ∼ 4× 10−6 cMpc−3. Given the survey volume
of ∼ 2×105 cMpc3, the expected number of large EW LAEs
with MUV <∼ − 21.0 is ∼ 0.8, and the absence of large EW
LAE among UV bright sources is thus not surprising.
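This back-of-envelope estimate can be reproduced directly from the numbers quoted above (a sketch, not part of the original analysis):

```python
# Expected number of large EW LAEs among UV-bright sources,
# using the densities quoted in the text
n_bright = 5e-5   # number density of M_UV <~ -21.0 sources, cMpc^-3
f_III = 0.08      # best-fit fraction of these galaxies in the bright phase
volume = 2e5      # survey volume of Shimasaku et al. (2006), cMpc^3

n_large_ew = f_III * n_bright     # ~4e-6 cMpc^-3
expected = n_large_ew * volume    # ~0.8 galaxies
print(n_large_ew, expected)
```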
6.4 Clustering Properties of the LAEs
In our model large EW LAEs are less massive by a factor
of EWIII/EWII ∼ 4 at fixed Lyα luminosity. Since clus-
tering of dark matter halos increases with mass, it follows
that our model predicts large EW LAEs to be clustered less
than their low EW counterparts (at a fixed Lyα luminos-
ity). The clustering of LAEs is typically quantified by their
angular correlation function (ACF), w(θ), which gives the
excess (over random) probability of finding a pair of LAEs
separated by an angle θ on the sky. The ACF depends on
the square of the bias parameter (w(θ) ∝ b2(m)), which for
galaxies in the population II phase is ∼ 1.24 − 1.4 times
larger than for galaxies in the population III phase, for the
mass range of interest. This implies that the clustering of low
EW LAEs at fixed Lyα luminosity is enhanced by a factor
of ∼ 1.5− 2.0. Existing determinations of the ACF of LAEs
by Shimasaku et al. (2006) and Kashikawa et al. (2006) are
still too uncertain to test this prediction.
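Since w(θ) ∝ b²(m), the quoted clustering enhancement follows directly from squaring the bias ratio (a quick numerical check of the arithmetic above):

```python
# Ratio of bias parameters between galaxies in the population II
# and population III phases, as quoted in the text
b_ratio_lo, b_ratio_hi = 1.24, 1.4

# w(theta) scales as b^2, so the clustering enhancement is the squared ratio
print(b_ratio_lo ** 2, b_ratio_hi ** 2)  # ≈ 1.54 and 1.96, i.e. the quoted ~1.5-2.0
```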
7 CONCLUSIONS
Observations of high redshift Lyα emitting galaxies (LAEs)
have shown the typical equivalent width (EW) of the Lyα
line to increase dramatically with redshift, with a signifi-
cant fraction of the galaxies lying at z > 5.7 having an
EW >∼100 Å. Recent calculations by Dijkstra et al. (2007)
show that the IGM at z > 4.5 transmits only 10 − 30%
of the Lyα photons emitted by galaxies. In this paper we
have investigated the transmission using a model that re-
produces the observed Lyα and UV LFs. This model re-
sults in an empirically determined transmission of Tα ∼
0.30(Lα,42/2.0)−1, where Lα,42 denotes the Lyα luminos-
ity per unit star formation rate (in M⊙ yr
−1) Lα in units
of 1042 erg s−1 (§ 3). This value is in good agreement with
earlier theoretical results.
If only ∼ 30% of all Lyα that was emitted by high
redshift galaxies reaches the observer, then this implies that
the intrinsic EWs are systematically (much) larger than ob-
served in many cases. To investigate the origin of these very
high EWs, we have developed semi-analytic models for the
Lyα and UV luminosity functions and the distribution of
equivalent widths. In this model Lyα emitters undergo a
burst of very massive star formation that results in a large
intrinsic EW, followed by a phase of population-II star for-
mation that produces a lower EW3. This model is referred
to as the ’population III model’ and is an extension of the
idea originally described by Malhotra & Rhoads (2002), who
proposed large EW Lyα emitters to be young galaxies.
3 Technically, the model discussed in § 5 only specifies that galaxies
go through a ’population-III phase’ for a fraction fIII ∼ 0.1 of
The population III model in which the Lyα equivalent
width is EWIII ∼ 650(Lα,42/2.0) Å for <∼50 Myr, is able
to simultaneously describe the following eight properties of
the high redshift galaxy population: i-iv) the UV and Lyα
luminosity functions of LAEs at z=5.7 and 6.5, v-vi) the
mean and variance of the EW distribution of Lyα selected
galaxies at z=5.7, vii) the EW distribution of UV- selected
galaxies at z∼6 (§ 5), and viii) the observed correlation of
stellar age and mass with EW (§ 6.1). Our modeling sug-
gests that the anomalously large intrinsic equivalent widths
observed in about half of the high redshift Lyα emitters
require a burst of very massive star formation lasting no
more than a few to ten percent of the galaxies’ star forming
lifetime. This very massive star formation may indicate the
presence of population-III star formation in a large number
of high-redshift LAEs. The model parameters for the best-fit
model are physically plausible where not previously known
(e.g. those related to the efficiency and duration of star for-
mation), and agree with estimates where those have been
calculated directly (e.g. the IGM transmission, EWIII, and
fIII).
In addition, we argued that the observed overlap of the
UV-LFs of LAEs with that of z∼ 6 LBGs appears to be at
odds with the observed Lyα detection rate in high-redshift LBGs,
suggesting that LAEs and LBGs are not the same popula-
tion. A lower dust content of LAEs relative to their ‘Lyα
quiet’ counterparts would partly remedy this discrepancy,
and could also explain why LAEs appear to be typically
more compact (§ 6.3).
Semi-analytic modeling of the coupled reionisation and
star formation histories of the universe suggests that popu-
lation III star formation could still occur after the bulk of
reionisation had been completed (Scannapieco et al. 2003;
Schneider et al. 2006; Wyithe & Cen 2007). The observa-
tion of anomalously large EWs in Lyα emitting galaxies at
high redshift may therefore provide observational evidence
for such a scenario. In the future, the He II 1640 Å line may be
used as a complementary probe (e.g. Tumlinson et al. 2001,
2003). The EW of this line is smaller by a factor of >∼20 for
population III (Schaerer 2003). However, the He II 1640 Å line will
not be subject to a small transmission of ∼ 10− 30%, mak-
ing it accessible to the next generation of space telescopes.
On the other hand, it may also be possible to observe the
He II 1640 Å line in a composite spectrum of z = 6 LBGs. Indeed,
the He II 1640 Å line has already been observed in the com-
posite spectrum of z= 3 LBGs (Shapley et al. 2003), which
led Jimenez & Haiman (2006) to argue for population III
star formation at redshifts as low as z = 3−4. If population
III star formation was more widespread at higher redshifts,
as predicted by our model, then the composite spectrum
of LBGs at higher redshifts should exhibit an increasingly
prominent He II 1640 Å line. In particular, this line should be
most prominent in the subset of LBGs that have large EW
Lyα emission lines.
3 (cont.) their lifetimes. Our model does not specify when this
population-III phase occurs. Hypothetically, the population III phase
could occur at an arbitrary moment in the galaxies’ lifetime when it is
triggered by a merger of a regular star forming galaxy and a dark matter
halo containing gas of primordial composition. Note however that such a
model would probably have difficulties explaining the apparent observed
correlation between Lyα EW and the age of a stellar population (§ 6.1).
Acknowledgments JSBW and MD thank the Aus-
tralian Research Council for support. We thank Avi Loeb
for useful discussions, and an anonymous referee for a helpful
report that improved the content of this paper.
REFERENCES
Ahn, S.-H., Lee, H.-W., & Lee, H. M. 2003, MNRAS, 340,
Ahn, S.-H. 2004, ApJL, 601, L25
Ando, M., Ohta, K., Iwata, I., Akiyama, M., Aoki, K., &
Tamura, N. 2006, ApJL, 645, L9
Barkana, R. 2004, MNRAS, 347, 59
Bouwens, R. J., Illingworth, G. D., Blakeslee, J. P., &
Franx, M. 2006, ApJ, 653, 53
Bromm, V., Kudritzki, R. P., & Loeb, A. 2001, ApJ, 552,
Charlot, S., & Fall, S. M. 1993, ApJ, 415, 580
Dawson, S., et al. 2004, ApJ, 617, 707
Dijkstra, M., Lidz, A., & Wyithe, J. S. B. 2007, MNRAS,
377, 1175
Dijkstra, M., Wyithe, J. S. B., & Haiman, Z. 2007b, MNRAS,
in press, astro-ph/0611195
Dow-Hygelund, C. C., et al. 2007, ApJ, 660, 47
Ferrara, A., & Ricotti, M. 2006, MNRAS, 373, 571
Finkelstein, S. L., Rhoads, J. E., Malhotra, S., Pirzkal, N.,
& Wang, J. 2007, ApJ, 660, 1023
Gawiser, E., et al. 2006, ApJL, 642, L13
Hansen, M., & Oh, S. P. 2006, MNRAS, 367, 979
Hu, E. M., & McMahon, R. G. 1996, Nature, 382, 231
Hu, E. M., Cowie, L. L., McMahon, R. G., Capak, P., Iwa-
muro, F., Kneib, J.-P., Maihara, T., & Motohara, K. 2002,
ApJL, 568, L75
Hu, E. M., Cowie, L. L., Capak, P., McMahon, R. G.,
Hayashino, T., & Komiyama, Y. 2004, AJ, 127, 563
Iye, M., et al. 2006, Nature, 443, 186
Jimenez, R., & Haiman, Z. 2006, Nature, 440, 501
Kashikawa, N., et al. 2006, ApJ, 648, 7
Kennicutt, R. C., Jr. 1998, ARA&A, 36, 189
Kodaira, K., et al. 2003, PASJ, 55, L17
Kunth, D., Mas-Hesse, J. M., Terlevich, E., Terlevich, R.,
Lequeux, J., & Fall, S. M. 1998, A&A, 334, 11
Lai, K., Huang, J.-S., Fazio, G., Cowie, L. L., Hu, E. M.,
& Kakazu, Y. 2007, ApJ, 655, 704
Loeb, A., Barkana, R., & Hernquist, L. 2005, ApJ, 620, 553
Malhotra, S., & Rhoads, J. E. 2002, ApJL, 565, L71
Malhotra, S., Wang, J. X., Rhoads, J. E., Heckman, T. M.,
& Norman, C. A. 2003, ApJL, 585, L25
Mao, J., Lapi, A., Granato, G. L., de Zotti, G., & Danese,
L. 2007, Submitted to ApJ, astro-ph/0611799
Nagao, T., et al. 2007, ArXiv Astrophysics e-prints,
arXiv:astro-ph/0702377
Neufeld, D. A. 1991, ApJL, 370, L85
Partridge, R. B., & Peebles, P. J. E. 1967, ApJ, 147, 868
Pirzkal, N., Malhotra, S., Rhoads, J. E., & Xu, C. 2006,
ArXiv Astrophysics e-prints, arXiv:astro-ph/0612513
Press, W. H., & Schechter, P. 1974, ApJ, 187, 425
Rhoads, J. E., Malhotra, S., Dey, A., Stern, D., Spinrad,
H., & Jannuzi, B. T. 2000, ApJL, 545, L85
Scannapieco, E., Schneider, R., & Ferrara, A. 2003, ApJ,
589, 35
Schaerer, D. 2003, A&A, 397, 527
Schaerer, D., & Pelló, R. 2005, MNRAS, 362, 1054
Schneider, R., Salvaterra, R., Ferrara, A., & Ciardi, B.
2006, MNRAS, 369, 825
Shapley, A. E., Steidel, C. C., Pettini, M., & Adelberger,
K. L. 2003, ApJ, 588, 65
Sheth, R. K., Mo, H. J., & Tormen, G. 2001, MNRAS, 323,
Shimasaku, K., et al. 2006, PASJ, 58, 313
Spergel, D. N., et al. 2007, ApJS, 170, 377
Stanway, E. R., et al. 2004, ApJL, 604, L13
Stanway, E. R., McMahon, R. G., & Bunker, A. J. 2005,
MNRAS, 359, 1184
Stanway, E. R., et al. 2007, MNRAS, 376, 727
Stark, D. P., Loeb, A., & Ellis, R. S. 2007, ArXiv Astro-
physics e-prints, arXiv:astro-ph/0701882
Taniguchi, Y., et al. 2005, PASJ, 57, 165
Tapken, C., Appenzeller, I., Noll, S., Richling, S., Heidt,
J., Meinköhn, E., & Mehlert, D. 2007, A&A, 467, 63
Tumlinson, J., Giroux, M. L., & Shull, J. M. 2001, ApJL,
550, L1
Tumlinson, J., Shull, J. M., & Venkatesan, A. 2003, ApJ,
584, 608
Verhamme, A., Schaerer, D., & Maselli, A. 2006, A&A, 460,
Wang, J. X., et al. 2004, ApJL, 608, L21
Westra, E., et al. 2006, A&A, 455, 61
Wyithe, J. S. B., & Cen, R. 2007, ApJ, 659, 890
Wyithe, J. S. B., & Loeb, A. 2007, MNRAS, 375, 1034
c© 2006 RAS, MNRAS 000, 1–11
|
0704.1672 | Two center multipole expansion method: application to macromolecular
systems | Two center multipole expansion method: application to
macromolecular systems
Ilia A. Solov’yov∗, Alexander V. Yakubovich*, Andrey V. Solov’yov*, and Walter Greiner
Frankfurt Institute for Advanced Studies,
Max von Laue Str. 1, 60438 Frankfurt am Main, Germany
We propose a new theoretical method for the calculation of the interaction energy
between macromolecular systems at large distances. The method provides a linear
scaling of the computing time with the system size and is considered as an alternative
to the well known fast multipole method. Its efficiency, accuracy and applicability
to macromolecular systems are analyzed and discussed in detail.
I. INTRODUCTION
In recent years, there has been much progress in simulating the structure and dynamics
of large molecules at the atomic level, which may include up to thousands and millions of
atoms [1, 2, 3, 4]. For example, amorphous polymers may have segments each with 10000
atoms [4] which associate to form partially crystalline lamellae, random coil regions, and
interfaces between these regions, each of which may contribute with special mechanical and
chemical properties to the system.
With increasing computer power it has now become possible to study molecular systems
of enormous sizes which were not imaginable just several years ago. For example, in [1] a
molecular dynamics simulation of the complete satellite tobacco mosaic virus was performed,
which includes up to 1 million atoms. In that paper the stability of the whole virion and
of the RNA core alone were demonstrated, and a pronounced instability was shown for the
capsid without the RNA.
The study of structure and dynamics of macromolecules often implies the calculation of
the potential energy surface for the system. The potential energy surface of a macromolecule
carries a lot of useful information about the system. For example from the potential energy
landscape it is possible to estimate the characteristic times for the conformational changes [5,
∗ On leave from the A.F. Ioffe Institute, St. Petersburg, Russia. E-mail: [email protected]
6, 7] and for fragmentation [8]. The potential energy surface of a macromolecular system can
be used for studying the thermodynamical processes in the system such as phase transitions
[9]. In proteins, the potential energy surface is related to one of the most intriguing problems
of protein physics: protein folding [9, 10, 11, 12, 13]. The rate constants for complex
biochemical reactions can also be established from the analysis of the potential energy surface
[14, 15].
The calculation of the potential energy surface and molecular dynamics simulations often
imply the evaluation of pairwise interactions. The cost of the direct method for evaluating
these potentials is proportional to ∼ N2, where N is the number of particles in the system.
This places a severe restraint on the treatable size of the system. During the last two
decades many different methods have been suggested which provide a linear dependence of
the computational cost with respect to N [16, 17, 18, 19, 20, 21]. The most widely used
algorithm of this kind is the fast multipole method (FMM) [17, 18, 19, 20, 21, 22, 23, 24].
The critical size of the system at which this method becomes computationally faster than the
exact method is accuracy dependent and is very sensitive to the slope in the N dependence
of the computational cost. In refs. [18, 20, 25] critical sizes ranging from N ≈ 300 to
N ≈ 30000 have been reported. Many discrepancies of the estimates in the critical size arise
from differences in the effort of optimizing the algorithm and the underlying code. However,
it is also important to optimize the methods themselves with respect to the required accuracy.
The FMM is based on the systematic organization of multipole representations of a local
charge distribution, so that each particle interacts with local expansions of the potential.
Originally FMM was introduced in [21] by Greengard and Rokhlin. Later, Greengard’s
method has been implemented in various forms. Schmidt and Lee [20] have produced a
version based upon the spherical multipoles for both periodic and nonperiodic systems.
Zhou and Johnson have implemented the FMM for use on parallel computers [26], while
Board et al have reported both serial and parallel versions of the FMM [25].
Ding et al introduced a version of the FMM that relies upon Cartesian rather than spher-
ical multipoles [18], which they applied to very large scale molecular dynamics calculations.
Additionally they modified Greengard’s definition of the nearest neighbors to increase the
proportion of interactions evaluated via local expansions. Shimada et al also developed a
Cartesian based FMM program [27], primarily to treat periodic systems described by molec-
ular mechanics potentials. In both cases only low order multipoles were employed, since
high accuracy was not sought.
In the present paper we suggest a new method for calculating the interaction energy
between macromolecules. Our method also provides a linear scaling of the computational
costs with the size of the system and is based on the multipole expansion of the potential.
However, the underlying ideas are quite different from the FMM.
Assuming that atoms from different macromolecules interact via a pairwise Coulomb
potential, we expand the potential around the centers of the molecules and build a two
center multipole expansion using bipolar harmonics algebra. Finally, we obtain a general
expression which can be used for calculating the energy and forces between the fragments.
This approach is different from the one used in the FMM, where the so-called translational
operators were used to expand the potential around a shifted center. Note that the final
expression, which we suggest in our theory, was not discussed before within the FMM. Similar
expressions have been discussed since the early 1950s (see e.g. [28, 29, 30, 31]). In these papers
the two center multipole expansion was considered as a new form of Coulomb potential
expansion, but the expansion was never applied to the study of macromolecular systems.
We consider the interaction of macromolecules via Coulomb potential since this is the
only long-range interaction in macromolecules, which is important for the description of
the potential energy surface at large distances. Other interaction terms in macromolecular
systems are of the short-range type and become important when macromolecules get close
to each other [8]. At large distances these terms can be neglected.
In the present paper we show that the method based on the two center multipole ex-
pansion can be used for computing the interaction energy between complex macromolecular
systems. In section II we present the formalism which lies behind the two center multipole
expansion method. In subsection IIIA we analyze the behavior of the computation cost of
this method and establish the critical sizes of the system, when the two center multipole
expansion method demands less computer time than the exact energy calculation approach.
In subsection IIIB we compare the results of our calculation with the results obtained within
the framework of the FMM. In section IV we discuss the accuracy of the two center multipole
expansion method.
II. TWO CENTER MULTIPOLE EXPANSION METHOD
In this section we present the formalism that underlies the two center multipole expan-
sion method, which will be further referred to as the TCM method.
Let us consider two multi atomic systems, which we will denote as A and B. The pairwise
Coulomb interaction energy of those systems can be written as follows:
U = \sum_{i=1}^{N_A} \sum_{j=1}^{N_B} \frac{q_i q_j}{\left| \mathbf{R}_0 + \mathbf{r}_j^{B} - \mathbf{r}_i^{A} \right|} , \qquad (1)
where NA and NB are the total number of atoms in systems A and B respectively, qi and
qj are the charges of atoms i and j from systems A and B respectively, R0 is the vector
interconnecting the center of system A with the center of system B, and rAi and rBj are the
vectors describing the positions of charges i and j with respect to the centers of systems
A and B respectively. The centers of both systems can be any suitable points of each of the
molecules. It is natural to define them as the centers of mass of the corresponding systems,
but in some cases another choice might be more convenient (see for example [8], where we
have applied the TCM method for studying fragmentation of alanine dipeptide).
Expression (1) can be expanded into a series of spherical harmonics. The expansion
depends on the vectors R0, rAi and rBj . In the present paper we consider the case when
|R0| > |rBj − rAi| (2)
holds for all i and j. This particular case is important, because it describes well separated
charge distributions, and can be used for modeling the interaction between complex objects
at large distances. In this case the expansion of (1) reads as [32]:
\frac{q_i q_j}{\left| \mathbf{R}_0 + \mathbf{r}_j^{B} - \mathbf{r}_i^{A} \right|} = q_i q_j \sum_{L=0}^{\infty} \sum_{M=-L}^{L} \frac{4\pi}{2L+1}\, \frac{\left| \mathbf{r}_j^{B} - \mathbf{r}_i^{A} \right|^{L}}{R_0^{L+1}}\, Y^{*}_{LM}\!\left(\Theta_{\mathbf{r}_j^{B}-\mathbf{r}_i^{A}}, \Phi_{\mathbf{r}_j^{B}-\mathbf{r}_i^{A}}\right) Y_{LM}\!\left(\Theta_{R_0}, \Phi_{R_0}\right) \qquad (3)
According to [32] the function \left| \mathbf{r}_j^{B} - \mathbf{r}_i^{A} \right|^{L} Y^{*}_{LM}\!\left(\Theta_{\mathbf{r}_j^{B}-\mathbf{r}_i^{A}}, \Phi_{\mathbf{r}_j^{B}-\mathbf{r}_i^{A}}\right)
can be expanded into a series of bipolar harmonics:
\left| \mathbf{r}_j^{B} - \mathbf{r}_i^{A} \right|^{L} Y^{*}_{LM}\!\left(\Theta_{\mathbf{r}_j^{B}-\mathbf{r}_i^{A}}, \Phi_{\mathbf{r}_j^{B}-\mathbf{r}_i^{A}}\right) = \sqrt{4\pi(2L+1)!} \sum_{\substack{l_1,l_2=0 \\ l_1+l_2=L}}^{L} \frac{(-1)^{l_2}\,(r_i^{A})^{l_1} (r_j^{B})^{l_2}}{\sqrt{(2l_1+1)!\,(2l_2+1)!}} \left( Y_{l_1}(\Theta_{\mathbf{r}_i^{A}},\Phi_{\mathbf{r}_i^{A}}) \otimes Y_{l_2}(\Theta_{\mathbf{r}_j^{B}},\Phi_{\mathbf{r}_j^{B}}) \right)_{LM} , \qquad (4)
where a bipolar harmonic is defined as follows:
\left( Y_{l_1}(\Theta_{r_1},\Phi_{r_1}) \otimes Y_{l_2}(\Theta_{r_2},\Phi_{r_2}) \right)_{LM} = \sum_{m_1,m_2} C^{LM}_{l_1 m_1 l_2 m_2}\, Y_{l_1 m_1}(\Theta_{r_1},\Phi_{r_1})\, Y_{l_2 m_2}(\Theta_{r_2},\Phi_{r_2}) . \qquad (5)
Here C^{LM}_{l_1 m_1 l_2 m_2} are the Clebsch–Gordan coefficients, which can be transformed to the 3j-
symbol notation as follows:
C^{LM}_{l_1 m_1 l_2 m_2} = (-1)^{l_1 - l_2 + M} \sqrt{2L+1} \begin{pmatrix} l_1 & l_2 & L \\ m_1 & m_2 & -M \end{pmatrix} \qquad (6)
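The phase relation above can be spot-checked numerically. The two tabulated values below are taken from standard angular-momentum tables and are illustrative only (they are not from the paper): C^{L=1,M=1}_{l1=1,m1=1; l2=1,m2=0} = 1/√2 and the 3j symbol (1 1 1; 1 0 −1) = −1/√6.

```python
import math

# Tabulated values (standard angular-momentum tables):
#   C^{L=1,M=1}_{l1=1,m1=1; l2=1,m2=0} = 1/sqrt(2)
#   3j symbol (1 1 1; 1 0 -1)          = -1/sqrt(6)
cg = 1.0 / math.sqrt(2.0)
threej = -1.0 / math.sqrt(6.0)

# Relation (6): C = (-1)^(l1 - l2 + M) * sqrt(2L + 1) * (3j)
l1, l2, L, M = 1, 1, 1, 1
rhs = (-1.0) ** (l1 - l2 + M) * math.sqrt(2 * L + 1) * threej
print(cg, rhs)  # both ≈ 0.7071
```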
Using equations (4), (5) and (6) we can rewrite expansion (3) as follows:
\frac{q_i q_j}{\left| \mathbf{R}_0 + \mathbf{r}_j^{B} - \mathbf{r}_i^{A} \right|} = q_i q_j \sum_{L=0}^{\infty} \sum_{\substack{l_1,l_2=0 \\ l_1+l_2=L}}^{L} \sum_{m_1=-l_1}^{l_1} \sum_{m_2=-l_2}^{l_2} (-1)^{l_1+M} \sqrt{\frac{(4\pi)^3 (2L)!}{(2l_1+1)!\,(2l_2+1)!}} \begin{pmatrix} l_1 & l_2 & L \\ m_1 & m_2 & -M \end{pmatrix} \frac{(r_i^{A})^{l_1} (r_j^{B})^{l_2}}{R_0^{L+1}}\, Y_{l_1 m_1}(\Theta_{\mathbf{r}_i^{A}},\Phi_{\mathbf{r}_i^{A}})\, Y_{l_2 m_2}(\Theta_{\mathbf{r}_j^{B}},\Phi_{\mathbf{r}_j^{B}})\, Y^{*}_{LM}(\Theta_{R_0},\Phi_{R_0}) , \qquad M = m_1 + m_2 . \qquad (7)
The multipole moments of systems A and B are defined as follows:
Q^{A}_{l_1 m_1} = \sqrt{\frac{4\pi}{2l_1+1}} \sum_{i=1}^{N_A} q_i\, (r_i^{A})^{l_1}\, Y_{l_1 m_1}(\Theta_{\mathbf{r}_i^{A}},\Phi_{\mathbf{r}_i^{A}}) \qquad (8)
Q^{B}_{l_2 m_2} = \sqrt{\frac{4\pi}{2l_2+1}} \sum_{j=1}^{N_B} q_j\, (r_j^{B})^{l_2}\, Y_{l_2 m_2}(\Theta_{\mathbf{r}_j^{B}},\Phi_{\mathbf{r}_j^{B}})
Summing equation (7) over i and j, and accounting only for the first Lmax multipoles in
both systems, we obtain:
U_{mult} = \sum_{l_1,l_2=0}^{L_{max}} \sum_{m_1=-l_1}^{l_1} \sum_{m_2=-l_2}^{l_2} \frac{(-1)^{l_1+M}}{R_0^{L+1}} \sqrt{\frac{4\pi (2L)!}{(2l_1)!\,(2l_2)!}} \begin{pmatrix} l_1 & l_2 & L \\ m_1 & m_2 & -M \end{pmatrix} Q^{A}_{l_1 m_1} Q^{B}_{l_2 m_2}\, Y^{*}_{LM}(\Theta_{R_0},\Phi_{R_0}) , \qquad L = l_1 + l_2,\; M = m_1 + m_2 . \qquad (9)
This expression describes the electrostatic energy of the system in terms of a two center
multipole expansion. Note that this expansion is only valid when the condition |R0| > |rBj − rAi|
holds for all i and j, otherwise more sophisticated expansions have to be considered, which
is beyond the scope of the present paper.
Summation in equation (9) is performed over l1, l2 ∈ [0...Lmax]; m1 ∈ [−l1...l1]; m2 ∈
[−l2...l2], and the condition M = m1 +m2 holds. Lmax is the principal multipole number,
which determines the number of multipoles in the expansion.
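To illustrate the idea behind eqs. (1) and (9), the sketch below compares the exact O(NA·NB) pairwise sum with the lowest orders of a two-center expansion. For brevity it uses Cartesian monopole and dipole moments rather than the spherical-harmonic moments of eq. (8), so it is an illustration of the principle, not an implementation of the TCM method; the charge distributions are made up for the example.

```python
import math

def direct_energy(qa, ra, qb, rb, R0):
    # Exact O(NA*NB) pairwise Coulomb sum between fragments A and B, eq. (1)
    U = 0.0
    for qi, ri in zip(qa, ra):
        for qj, rj in zip(qb, rb):
            d = [R0[k] + rj[k] - ri[k] for k in range(3)]
            U += qi * qj / math.sqrt(sum(x * x for x in d))
    return U

def two_center_energy(qa, ra, qb, rb, R0):
    # Monopole + dipole truncation of a two-center expansion,
    # written with Cartesian moments (illustrative only)
    R = math.sqrt(sum(x * x for x in R0))
    rhat = [x / R for x in R0]
    QA, QB = sum(qa), sum(qb)
    pA = [sum(q * r[k] for q, r in zip(qa, ra)) for k in range(3)]
    pB = [sum(q * r[k] for q, r in zip(qb, rb)) for k in range(3)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    # 1/|R0 + d| ≈ 1/R - (rhat·d)/R^2, with d = r_j^B - r_i^A
    return QA * QB / R - (QA * dot(rhat, pB) - QB * dot(rhat, pA)) / R ** 2

# Two small, well separated charge distributions (|r| <= 1, |R0| = 20)
qa = [1.0, 0.8, 1.2]
ra = [(0.3, -0.2, 0.1), (-0.5, 0.4, 0.0), (0.1, 0.1, -0.6)]
qb = [0.9, 1.1, 1.0]
rb = [(0.2, 0.2, 0.2), (-0.4, 0.1, 0.3), (0.0, -0.5, 0.1)]
R0 = (0.0, 0.0, 20.0)

u_exact = direct_energy(qa, ra, qb, rb, R0)
u_mult = two_center_energy(qa, ra, qb, rb, R0)
print(abs(u_exact - u_mult) / abs(u_exact))  # small, since |r| << |R0|
```

Because condition (2) holds comfortably here, the truncated expansion agrees with the direct sum to well below one per cent; the residual is of quadrupole order, ∼(|r|/|R0|)².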
III. COMPUTATIONAL EFFICIENCY
A. Comparison with direct Coulomb interaction method
In this section we discuss the computational efficiency of the TCM method. For this
purpose we have analyzed the time required for computing the Coulomb interaction energy
between two systems of charges and the time required for the energy calculation within the
framework of the TCM method for different system sizes, and for different values of the
principal multipole number.
For the study of the computational efficiency of the TCM method we have considered
the interaction between two systems (we denote them as A and B) of randomly distributed
charges, for which the condition eq. (2) holds. The charges in both systems were randomly
distributed within the spheres of radii RA = 1.0 ·N1/3A and RB = 1.0 ·N1/3B respectively, and the
distance between the centers of mass of the two systems was chosen as R0 = 3/2(RA +RB).
The computational time needed for the energy calculations is proportional to the number
of operations required. Thus, the time needed for the Coulomb energy calculation (CE
calculation) can be estimated as:
τCoul = αCoulNANB ∼ N2 (10)
where αCoul is a constant depending on the computer processor power and on the efficiency
of the code, NA ∼ NB ∼ N . From equation (10) it follows that the computational cost of
the CE calculation grows proportionally to the second power of the system size.
For large systems the TCM method becomes more efficient because it provides a linear
scaling with the system size. The time needed for the energy calculation reads as follows:
\tau_{mult}(L_{max}) = \beta N + \sum_{l_1,l_2=0}^{L_{max}} \sum_{m_1=-l_1}^{l_1} \sum_{m_2=-l_2}^{l_2} \left( N_A\, \tau_{l_1,m_1} + N_B\, \tau_{l_2,m_2} \right) \approx \alpha_{mult}\, L_{max} (1+L_{max})^{3} N , \qquad (11)
where the first term, βN , corresponds to the computer time needed for allocating arrays in
memory and tabulating the computationally expensive functions like cos(Φ) and exp(imΦ).
τl,m is the time needed for evaluation of the spherical harmonic at given l and m, and αmult
is a numerical coefficient, which depends on the processor power and on the efficiency of the
code. In general it is different from αCoul.
In Fig. 1 we present the dependencies of the computer time needed for the CE calculation
(squares) and for the computation of energy within the TCM method for different values of
the principal multipole number as a function of system size. This data was obtained on a
1.8 GHz 64-bit AMD Opteron-244 computer.
From Fig. 1 it is clear that the time needed for the CE calculation has a prominent
parabolic trend that is consistent with the analytical expression (10). The fitting expression
which describes this dependance is given in the first row of Tab. I. At large N the N2 term
becomes dominant and the other two terms can be neglected. Thus, αCoul ≈ 4.46 · 10−8
(sec).
The fitting expressions which describe the time needed for the energy computations within
the TCM method at different values of the principal multipole number are given in Tab. I,
rows 2-10. These expressions were obtained by fitting the data shown in Fig. 1. Note the
linear dependence on N . The numerical coefficient in all expressions corresponds to the factor
αmultLmax(1 + Lmax)3 in equation (11). The fitting expressions in Tab. I were obtained by
fitting of data obtained for systems with large number of particles (see Fig. 1). Therefore
these expressions are applicable when N ≫ 1.
From equations presented in Tab. I it is possible to determine the critical system sizes at
which the TCM method becomes less computer time demanding than the CE calculation.
FIG. 1: Time needed for energy calculation as a function of the system size.
The critical system sizes calculated for different principal multipole numbers are shown in
the third column of Tab. I. These sizes correspond to the intersection points of the parabola
describing the time needed for the CE calculation with the straight lines describing the
computational time needed for the TCM method. In Fig. 1 one can see six intersection
points for Lmax = 2− 7.
From equation (11), it follows that computation time of the energy within the framework
of the TCM method grows as the power of 4 with increasing Lmax. To stress this fact,
in Fig. 2 we present the dependencies of the computation time obtained within the TCM
method at different system sizes as a function of principal multipole number. All curves
shown in Fig. 2 can be perfectly fitted by the analytical expression (11). In the inset to
Fig. 2, we plot the dependence of the fitting coefficient αmult as a function of the system
size. From this plot it is seen that αmult varies only slightly for all system sizes considered,
being equal to (1.982± 0.015) · 10−7 (sec).
TABLE I: Fitting expressions for the computational time needed for the CE calculation and for the
energy computation within the TCM method at different values of the principal multipole number,
Lmax (second column). System sizes for which the Coulomb energy calculation becomes more
computer time demanding than the TCM method at a given value of Lmax are shown in the third
column.
Lmax τ(N) (sec.) Nmax
Coulomb 0.11736 − 0.0002N + 4.6768 · 10−8N2 -
2 −0.01986 + 3.0 · 10−5N 4223
3 −0.03159 + 5.0 · 10−5N 4662
4 −0.04714 + 1.0 · 10−4N 5809
5 −0.16054 + 2.1 · 10−4N 8026
6 −0.14710 + 3.7 · 10−4N 11704
7 −0.59675 + 7.4 · 10−4N 19308
8 −0.35383 + 10.9 · 10−4N 27212
9 −1.15856 + 1.9 · 10−3N 44286
10 −0.83688 + 2.71 · 10−3N 61892
Thus, the expression for the time needed for the energy calculation within the framework
of the TCM method reads as:
τmult(Lmax) ≈ 1.98 · 10−7Lmax(1 + Lmax)3N. (12)
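The critical sizes in the third column of Tab. I follow from intersecting the quadratic Coulomb fit with the corresponding linear TCM fit. A sketch for Lmax = 2, using the fitted coefficients exactly as given in the table:

```python
import math

# Fitted timings from Tab. I (seconds)
t_coulomb = lambda n: 0.11736 - 0.0002 * n + 4.6768e-8 * n ** 2  # CE calculation
t_tcm2 = lambda n: -0.01986 + 3.0e-5 * n                         # TCM, Lmax = 2

# Intersection: solve a N^2 + b N + c = 0 and take the larger root
a = 4.6768e-8
b = -0.0002 - 3.0e-5
c = 0.11736 + 0.01986
n_crit = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(n_crit))  # ≈ 4223, matching the Lmax = 2 entry of the table
```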
Note that αmult = 1.98 · 10−7 (sec) is larger than αCoul ≈ 4.46 · 10−8 (sec), since one turn
of the TCM method requires more algebraic operations than one turn of the CE
calculation.
From the analysis performed in this section it is clear that the TCM method can give a
significant gain in the computation time. However, at larger principal multipole numbers
(Lmax = 8, 9, 10) this method can compete with the CE calculation only at system sizes
greater than 27000-61000 atoms. The accounting for higher multipoles is necessary if the
distance between two interacting systems becomes comparable to the size of the systems. In
the next section we discuss in detail the accuracy of the TCM method and identify situations
in which higher multipoles should be accounted for.
FIG. 2: Time needed for the calculation of energy of the systems of different sizes computed within
the framework of the TCM method as a function of the principal multipole number Lmax. In the
inset we plot αmult as a function of the system size.
B. Comparison with the fast multipole method
The fast multipole method (FMM) [21, 22, 23] is a well known method for calculating
the electrostatic energy in a multiparticle system, which provides a linear scaling of the
computing time with the system size. In order to stress the computational efficiency of the
TCM method in this section we compare the time required for the energy calculation within
the framework of the FMM and using the TCM method.
To perform such a comparison we used an adaptive FMM library, which has been im-
plemented for the Coulomb potential in three dimensions [24, 33]. We have generated two
random charge distributions of different size and calculated the interaction energies between
them as well as the required computation time using the FMM and the TCM methods. As
in the previous section the charges in both systems were randomly distributed within the
spheres of radii RA = 1.0 ·N1/3A and RB = 1.0 ·N1/3B respectively, and the distance between
the centers of mass of the two systems was chosen as R0 = 3/2(RA +RB).
FIG. 3: Time needed for the calculation of the interaction energy between two systems as a function
of the total number of particles, calculated within the framework of the TCM method (triangles)
and within the framework of the FMM (squares). In the upper and lower insets we plot the relative
error of the FMM and of the TCM methods as a function of the system size, respectively.
In Fig. 3 we present the comparison of the computer time needed for the FMM calculation
(squares) and for the computation of energy within the TCMmethod (triangles) as a function
of system size. These data were obtained on an Intel(R) Xeon(TM) CPU 2.40GHz computer.
In the upper and lower insets of Fig. 3 we show the relative error of the FMM and of the
TCM methods as a function of the system size respectively, which is defined as follows:
\eta_{method} = \frac{\left| U_{coul} - U_{method} \right|}{\left| U_{coul} \right|} \cdot 100\% . \qquad (13)
Here method indicates the FMM or the TCM methods. For comparing the efficiency of the
two methods we have considered different charge distributions within the size range of 100
to 10000 particles. Each point in Fig. 3 corresponds to a particular charge distribution.
For each system size ten different charge distributions were used. The time of the FMM
calculation depends on the charge distribution, as is clearly seen in Fig. 3. Note that for a
given system size the calculation time of the FMM can change by more than a factor of 5,
depending on the charge distribution (see points for N = 10000 in Fig. 3).
For all system sizes FMM requires some minimal computer time for calculating the energy
of the system, which increases with the growth of system size (see Fig. 3). The comparison
of the minimal FMM computation time with the computation time required for the TCM
method shows that the TCM method appears to be significantly faster than the FMM. For
N = 10000 FMM requires at least 2.15 seconds to compute the energy, while TCM method
requires 0.53 seconds, being approximately 4 times faster.
The results of the TCM method calculation shown in Fig. 3, were obtained for Lmax = 2.
The analysis of relative errors presented in the inset to Fig. 3 shows that with this principal
multipole number it is possible to calculate the energy between two systems with an error
of less than 1 % for almost arbitrary charge distribution. Note that for the same charge
distributions the error of the FMM is much larger, being about 5 % in almost all of the
considered systems. This allows us to conclude that the TCM method is more efficient and
more accurate than the classical FMM.
It is important to mention that in the traditional implementation the FMM calculates the
total electrostatic energy of the system, while the TCM method was developed for studying
the interaction energy between system fragments. It is possible to modify the FMM to study only
interaction energies between different parts of the system. However, the computational cost of
the modified FMM is expected to be higher than that of the TCM method. This happens because,
within the framework of the modified FMM method, the field created by one fragment of
the system should be expanded in the multipole series and the interactions of the resulting
multipole moments with the charges from another fragment should be calculated. Thus the
computation cost of this method will be proportional to NA ·NB, where NA and NB are the
number of particles in two fragments, while the TCM method is proportional to NA +NB.
The computation cost of the modified version of the FMM depends quadratically on the
size of the system, because in this method the interacting fragments should be considered as
two independent cells, while traditional FMM uses a hierarchical subdivision of the whole
system into cells to achieve linear scaling.
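To make the cost argument concrete, the sketch below contrasts the O(N_A · N_B) direct double sum with a two-center expansion truncated at the dipole term, whose fragment moments are accumulated in O(N_A + N_B) operations. The point charges here are made up for illustration and the truncation is far below the Lmax values discussed in the paper; this is a toy version of the idea, not the TCM code itself:

```python
import numpy as np

def direct_coulomb(qa, ra, qb, rb):
    """Direct double sum over all pairs: O(N_A * N_B) operations."""
    d = np.linalg.norm(ra[:, None, :] - rb[None, :, :], axis=-1)
    return np.sum(qa[:, None] * qb[None, :] / d)

def two_center_dipole(qa, ra, qb, rb):
    """Two-center expansion through the dipole term: O(N_A + N_B) moments."""
    ca, cb = ra.mean(axis=0), rb.mean(axis=0)    # expansion centers
    Qa, Qb = qa.sum(), qb.sum()                  # monopole moments
    mua = qa @ (ra - ca)                         # dipole moment of fragment A
    mub = qb @ (rb - cb)                         # dipole moment of fragment B
    Rvec = cb - ca
    R = np.linalg.norm(Rvec)
    Rhat = Rvec / R
    # 1/|R + d| ~ 1/R - (Rhat . d)/R^2, with d = s_j - s_i summed over pairs
    return Qa * Qb / R - (Qa * np.dot(Rhat, mub) - Qb * np.dot(Rhat, mua)) / R**2

rng = np.random.default_rng(0)
qa = rng.uniform(0.1, 1.0, 20)                   # net-charged fragment A
qb = rng.uniform(0.1, 1.0, 30)                   # net-charged fragment B
ra = rng.uniform(-1.0, 1.0, (20, 3))             # fragment A near the origin
rb = rng.uniform(-1.0, 1.0, (30, 3)) + np.array([50.0, 0.0, 0.0])  # B far away

exact = direct_coulomb(qa, ra, qb, rb)
approx = two_center_dipole(qa, ra, qb, rb)
print(abs(exact - approx) / abs(exact) * 100)    # should be well below 1 % here
```

At this separation (cluster radius about 1.7, center distance 50) the neglected quadrupole-order terms are of relative size (d/R)^2, which is why even the dipole truncation is already accurate.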
So far we have considered only the interaction between two multi particle systems in
vacuo, and demonstrated the efficiency of the TCM method in this case, although the TCM
method can also be applied to a larger number of interacting systems. The study of
structure and dynamics of biomolecular systems consisting of several components (i.e an
ensemble of proteins, DNA, macromolecules in solution) is a separate topic, which is beyond
the scope of this paper and deserves a separate investigation.
IV. ACCURACY OF THE TCM METHOD. POTENTIAL ENERGY SURFACE
FOR PORCINE PANCREATIC TRYPSIN/SOYBEAN TRYPSIN INHIBITOR
COMPLEX.
We have calculated the interaction energy between two proteins within the framework of
the TCM method and compared it with the exact Coulomb energy value. On the basis of
this comparison we have assessed the accuracy of the TCM method.
In the present paper we have studied the interaction energy between the porcine pan-
creatic trypsin and the soybean trypsin inhibitor proteins (Protein Data Bank (PDB) [36]
entry 1AVW [37]). Trypsins are digestive enzymes produced in the pancreas in the form
of inactive trypsinogens. They are then secreted into the small intestine, where they are
activated by another enzyme into trypsins. The resulting trypsins themselves activate more
trypsinogens (autocatalysis). Members of the trypsin family cleave proteins at the carboxyl
side (or ”C-terminus”) of the amino acids lysine and arginine. Porcine pancreatic trypsin is
an archetypal example. Its natural non-covalent inhibitor (porcine pancreatic trypsin inhibitor)
inhibits the enzyme’s activity in the pancreas, protecting it from self-digestion.
Trypsin is also inhibited non-covalently by the soybean trypsin inhibitor from the soya
bean plant, although this inhibitor is unrelated to the porcine pancreatic trypsin inhibitor
family of inhibitors. Although the biological function of the soybean trypsin inhibitor is
mostly unknown it is assumed to help defend the plant from insect attack by interfering
with the insect digestive system.
The structure of both proteins is shown in Fig. 4. The coordinate frame used for our
computations is marked in the figure. This coordinate frame is consistent with the standard
coordinate frame used in the PDB.
FIG. 4: Structure of the porcine pancreatic trypsin and soybean trypsin inhibitor with the coor-
dinate frame used for the energy computation. Figure has been rendered with help of the VMD
visualization package [38]
We use this particular example as a model system in order to demonstrate the possible
use of the TCM method. Therefore environmental effects are omitted and we consider only
the protein-protein interaction in vacuo. The porcine pancreatic trypsin and the soybean
trypsin inhibitor include 223 and 177 amino acids, respectively. In total, the two proteins
comprise 5847 atoms. Thus for such a system size the TCM method is faster than the CE calculation if
Lmax ≤ 4 (see Tab. I).
We have calculated the interaction energy between the porcine pancreatic trypsin and
soybean trypsin inhibitor as a function of distance between the centers of masses of the
fragments, ~R0, and the angle Θ, which is determined as the angle between the x-axis and
the vector ~R0 (see Fig. 4). We have assumed that the porcine pancreatic trypsin is fixed
in space at the center of the coordinate frame and have restricted ~R0 to the (xy)-plane. Of
course, the two degrees of freedom considered are not sufficient for a complete description
of the mutual interaction between the two systems. For this purpose at least six degrees
of freedom are needed. However for our example of the energy calculation of the porcine
pancreatic trypsin/soybean trypsin inhibitor complex within the framework of the TCM
method the two coordinates ~R0 and Θ are sufficient.
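A generic (R_0, Θ) scan of the kind described above can be sketched as follows; the two "fragments" are stand-in point-charge clusters and the energy routine is a plain pairwise Coulomb sum, since neither the protein coordinates nor the TCM routine are reproduced here:

```python
import numpy as np

def coulomb(qa, ra, qb, rb):
    """Pairwise Coulomb interaction energy between two disjoint charge sets."""
    d = np.linalg.norm(ra[:, None, :] - rb[None, :, :], axis=-1)
    return np.sum(qa[:, None] * qb[None, :] / d)

def scan_surface(qa, ra, qb, rb_local, r0_values, theta_deg_values):
    """Energy on an (R0, Theta) grid: fragment A fixed at the origin, fragment B's
    center placed at distance R0 and angle Theta in the xy-plane."""
    surface = np.empty((len(r0_values), len(theta_deg_values)))
    for i, r0 in enumerate(r0_values):
        for j, th in enumerate(np.radians(theta_deg_values)):
            shift = np.array([r0 * np.cos(th), r0 * np.sin(th), 0.0])
            surface[i, j] = coulomb(qa, ra, qb, rb_local + shift)
    return surface

rng = np.random.default_rng(1)
qa, qb = rng.uniform(-1, 1, 15), rng.uniform(-1, 1, 15)
ra = rng.uniform(-1, 1, (15, 3))                 # fragment A near the origin
rb_local = rng.uniform(-1, 1, (15, 3))           # B relative to its own center
rb_local -= rb_local.mean(axis=0)

r0s = np.linspace(58.0, 100.0, 5)
thetas = np.linspace(0.0, 360.0, 13)             # includes both 0 and 360 degrees
E = scan_surface(qa, ra, qb, rb_local, r0s, thetas)
print(E.shape)                                   # (5, 13)
```

By construction the surface is periodic in Θ, so the columns for 0° and 360° coincide, which is a convenient consistency check for such a scan.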
The interaction energy of the porcine pancreatic trypsin with the soybean trypsin in-
hibitor as the function of R0 and Θ calculated within the framework of the TCM method is
shown in Fig. 5. The Coulomb interaction energy between the two proteins is shown in the
top-left plot. In [8] it has been shown that the interaction energy between two well sepa-
rated biological fragments arises mainly due to the Coulomb forces. In the present paper we
consider R0 ∈ [58, 100] Å and Θ ∈ [0, 360]◦, at which condition (2) holds and both proteins
can be considered as separated. This means that the potential energy surface shown in the
top-left plot of Fig. 5 describes the interaction energy between the porcine pancreatic trypsin
and the soybean trypsin inhibitor on the level of accuracy of 90 % at least.
The top-left plot of Fig. 5 shows that one can select several characteristic regions on
the potential energy surface marked with numbers 1-4. The corresponding configurations
(states) of the system are shown in Fig. 6. The potential energy surface is determined by the
Coulomb interactions between atoms; thus at large distances it rises and asymptotically
approaches zero. State 1 has the maximum energy within the considered part of the
potential energy surface because this state corresponds to the largest contact separation
distance between porcine pancreatic trypsin and the soybean trypsin inhibitor being equal
to 54.8 Å.
At smaller distances the potential energy decreases due to the attractive forces acting
between the two proteins. State 2 corresponds to the minimum on the potential energy sur-
face. It arises because a positively charged polar arginine (R125) from the porcine pancreatic
trypsin approaches the negatively charged site of the soybean trypsin inhibitor, which in-
cludes negatively charged polar amino acids glutamic acid (E549) and aspartic acid (D551)
(see state 2 in Fig. 6). The strong attraction between the amino acids leads to the formation
of a potential well on the potential energy surface. This observation is essential for dynamics
of the attachment process of two proteins, because it establishes the most probable angle at
which the proteins stick in the (xy)-plane of the considered coordinate frame (Θ = 192◦).
States 3 and 4 correspond to the saddle points on the potential energy surface and have
energies higher than state 2. They are formed because at these configurations two positively
FIG. 5: The interaction energies of the porcine pancreatic trypsin with the soybean trypsin inhibitor
calculated as the function of R0 and Θ (see Fig. 4) within the framework of the TCM method at
different values of the principal multipole number, Lmax. The principal multipole number is given
above the corresponding image. The result of the CE calculation is shown in the top left plot.
charged polar amino acids from the two proteins become closer in space providing a source
of a local repulsive force. In state 3 these amino acids are lysines (K145 and K665) (see
state 3 in Fig. 6), and in state 4 these are arginines (R62 and R563)(see state 4 in Fig. 6).
FIG. 6: Relative orientations of the porcine pancreatic trypsin and the soybean trypsin inhibitor,
corresponding to the selected points on the potential energy surface presented in Fig. 5. Below
each image we give the corresponding values of R0 and Θ. Some important amino acids are marked
according to their PDB id. Figure prepared with help of the VMD visualization package [38]
In the top-right plot of Fig. 5 we present the potential energy surface obtained within
the framework of the TCM method with Lmax = 2, i.e. accounting for terms up to the
quadrupole-quadrupole interaction in the multipole expansion (9). From the figure it
is seen that the TCM method describes correctly the major features of the potential energy
landscape (i.e. the position of the minimum and maximum as well as their relative energies).
However, the minor details of the landscape, such as the saddle points 3 and 4 (see top-left
plot of Fig. 5) are missed.
The relative error of the TCM method can be defined as follows:
\eta^{(L_{max})}(R_0,\Theta) = \frac{|U_{coul}(R_0,\Theta) - U_{mult}^{L_{max}}(R_0,\Theta)|}{|U_{coul}(R_0,\Theta)|} \cdot 100\%,    (14)
where U_{coul}(R_0,\Theta) and U_{mult}^{L_{max}}(R_0,\Theta) are the Coulomb energy and the energy calculated at
given values of R_0 and \Theta within the TCM method, respectively. In the top-left plot of Fig. 7
we present the relative error calculated according to (14) for Lmax = 2. From this plot it
is clear that significant deviations from the exact result arise at Θ ∼ 50 − 60◦, 140 − 150◦,
245◦, 300− 310◦ and 350◦. The discrepancy at Θ ∼ 50− 60◦, Θ ∼ 300− 310◦ and Θ ∼ 350◦
arises because the saddle points 3 and 4, can not be described within the framework of TCM
method with Lmax = 2. The discrepancy at Θ ∼ 140 − 150◦ and Θ ∼ 245◦ is due to the
error in the calculation of the slopes of minimum 2 at R0 = 58 Å and Θ = 198◦.
FIG. 7: Relative error of the interaction energies of the porcine pancreatic trypsin with the soybean
trypsin inhibitor calculated as the function of R0 and Θ within the framework of the TCM method
at different values of the principal multipole number, Lmax. The principal multipole number is
given above the corresponding image.
It is worth noting that the relative error of the TCM method with Lmax = 2 is less than 10
%. With increasing distance between the proteins, the relative error decreases, and becomes
less than 5 % at R0 ≥ 72 Å and less than 3 % at R0 ≥ 86 Å. This means that already at
Lmax = 2 the TCM method reproduces with a reasonable accuracy the essential features of
the potential energy landscape. This observation is very important, because TCM method
with Lmax = 2 requires less computer time than the CE calculation already at N = 4223
(see Tab. I). Thus, the TCM method can be used for the identification of major minima
and maxima on the potential energy surface of macromolecules and modeling dynamics of
complex molecular systems.
Accounting for higher multipoles in the multipole expansion (9) leads to a more accurate
calculation of the potential energy surface. In the second row of Fig. 5 we present the
potential energy surfaces obtained within the framework of the TCM method with Lmax = 4
and 6. From these plots it is seen that all minor details of the Coulomb potential energy
surface, such as the saddle points 3 and 4 are reproduced correctly. Figure 7 shows that the
TCM method with Lmax = 4 gives the maximal relative error of about 5 % at R0 = 58 Å
and Θ = 75◦, in the vicinity of the saddle point 3. The relative errors in the vicinity of the
saddle point 4 and minimum 2 are equal to 4 % and 1 % respectively. The error becomes
less than 1 % for all values of angle Θ at R0 ≥ 70 Å. For Lmax = 6, the largest relative error
is equal to 1.5 % at R0 = 58 Å and Θ = 340◦ (saddle point 4), becoming less than 1 % at
R0 ≥ 61 Å.
By accounting for the higher multipoles in the multipole expansion (9) one can increase
the accuracy of the method. Thus, with Lmax = 8 and 10 it is possible to calculate the
potential energy surface with an error of less than 1 % (see bottom row in Fig. 5 and Fig. 7).
Although the time needed for computing the potential energy surfaces with Lmax = 8 and
10 is larger than the time needed for computing the Coulomb energy directly (see Tab. I),
we present these surfaces in order to stress the convergence of the TCM method.
V. CONCLUSION
In the present paper we have proposed a new method for the calculation of the Coulomb
interaction energy between pairs of macromolecular objects. The suggested method provides
a linear scaling of the computational costs with the size of the system and is based on the
two center multipole expansion of the potential. Analyzing the dependence of the required
computer time on the system size, we have established the critical sizes at which our method
becomes more efficient than the direct calculation of the Coulomb energy.
The comparison of efficiency of the TCM method with the efficiency of FMM allows us to
conclude that the TCM method has proved to be faster and more accurate than the classical FMM.
The method based on the two center multipole expansion can be used for the efficient
computation of the interaction energy between complex macromolecular systems. To de-
termine that we have considered the interaction between two proteins: porcine pancreatic
trypsin and the soybean trypsin inhibitor. The accuracy of the method has been discussed
in detail. It has been shown that accounting for only four multipoles in both proteins gives
an error in the interaction energy of less than 5 %.
The TCM method is especially useful for studying the dynamics of rigid molecules, but it can
also be adapted for studying the dynamics of flexible molecules. In this work we have developed
a method for the efficient calculation of the interaction energy between pairs of large multi
particle systems, e.g. macromolecules, being in vacuo. The investigation of biomolecular
systems consisting of several components (i.e complexes of proteins, DNA, macromolecules
in solution) and the extension of the TCM method for these cases deserves a separate
investigation. If a system of interest consists of several interacting molecules being placed in
a solution, one can use the TCM method to describe the interaction between the molecules
and then to take account of the solution as implicit solvent. This can be achieved using for
example the Poisson–Boltzmann formalism [34, 35], similar to how it was implemented
for the description of the antigen-antibody binding/unbinding process [14, 15]. The other
possibility is to split the whole system into boxes and account for the solvent explicitly by
calculating the interactions between the boxes and the molecules of interest. This can be
achieved by using the TCM method or a combination of the FMM and the TCM methods.
In this case the FMM can be used for the calculation of the resulting effective multipole
moment of the solvent, while the TCM method is better suited for the description of the
energetics and dynamics of the macromolecules. Note that all of the suggested methodologies
provide linear scaling of the computation time with the system size.
The results of this work can be utilized for the description of complex molecular systems
such as viruses, DNA, protein complexes, etc and their dynamics. Many dynamical features
and phenomena of these systems are caused by the electrostatic interaction between their
various fragments and thus the use of the two center multipole expansion method should
give a significant gain in their computation costs.
VI. ACKNOWLEDGEMENTS
This work is partially supported by the European Commission within the Network of
Excellence project EXCELL and by INTAS under the grant 03-51-6170. We are grateful to
Dr. Paul Gibbon for providing us with the FMM code. We thank Dr. Axel Arnold for his
help with compiling the programs as well as for many insightful discussions. We also thank
Dr. Elsa Henriques and Dr. Andrey Korol for many useful discussions. We are grateful
to Ms. Stephanie Lo for her help in proofreading of this manuscript. The possibility to
perform complex computer simulations at the Frankfurt Center for Scientific Computing is
also gratefully acknowledged.
[1] P.L. Freddolino, A.S. Arkhipov, S.B. Larson, A. McPherson, and K. Schulten, Structure 14,
437 (2006).
[2] A.Y. Shih, A. Arkhipov, P.L. Freddolino, and K. Schulten., Journ. Phys. Chem. B 110, 3674
(2006).
[3] D. Lu, A. Aksimentiev, A.Y. Shih, E. Cruz-Chu, P.L. Freddolino, A. Arkhipov, and K. Schul-
ten, Phys. Biol. 3, S40 (2006).
[4] H. Meyer and J. Baschnagel, Eur. Phys. Journ. E 12, 147 (2003).
[5] A.V. Yakubovich, I.A. Solov’yov, A.V. Solov’yov and W. Greiner, Eur. Phys. Journ. D 39, 23
(2006).
[6] I.A. Solov’yov, A.V. Yakubovich, A.V. Solov’yov and W. Greiner, Phys. Rev. E 73 021916
(2006)
[7] I.A. Solov’yov, A.V. Yakubovich, A.V. Solov’yov and W. Greiner, Journ. of Exp. and Theor.
Phys. 102, 314 (2006). Original Russian Text, published in Zh. Eksp. Teor. Fiz. 129, 356
(2006).
[8] I.A. Solov’yov, A.V. Yakubovich, A.V. Solov’yov and W. Greiner, Journ. of Exp. and Theor.
Phys. 103, 463 (2006). Original Russian Text, published in Zh. Eksp. Teor. Fiz. 130, 534
(2006).
[9] A.V. Yakubovich, I.A. Solov’yov, A.V. Solov’yov and W. Greiner, Eur. Phys. Journ. D, 40
363 (2006), (Highlight paper).
[10] C. Chen, Y. Xiao, L. Zhang, Biophysical Journal 88, 3276 (2005).
[11] Y. Duan, P. A. Kollman, Science 282 740 (1998).
[12] A. Liwo, M. Khalili and H. A. Scheraga, PNAS 102, 2362 (2005).
[13] F. Ding, N. V. Dokholyan, S. V. Buldyrev, H. E. Stanley and E. I. Scakhnovich, Biophysical
Journal 83, 3525 (2002).
[14] E.S. Henriques and A.V. Solov’yov, Abstract at the WE-Heraeus- Seminar ”Biomolecular Sim-
ulation: From Physical Principles to Biological Function”. Manuscript in preparation (2006).
[15] E.S. Henriques and A.V. Solov’yov, submitted to Europhys. J. D (2007).
[16] W.F. van Gunsteren and H.J.C. Berendsen, Angew. Chem. Int. Ed. Engl. 29, 992 (1990).
[17] H.G. Petersen, D. Soelvason and J.W. Perram, J. Chem. Phys. 101, 8870 (1994).
[18] H.-Q. Ding, N. Karasawa, and W.A. Goddard III, J. Chem. Phys. 97, 4309 (1992).
[19] C.A. White, B.G. Johnson, P.M.W. Gill, M. Head-Gordon, Chem. Phys. Lett. 230, 8 (1994).
[20] K.E. Schmidt and M.A. Lee, Journ. of Stat. Phys. 63, 1223 (1991).
[21] L. Grengard and V. Rokhlin, J. Comput. Phys. 60, 187 (1985).
[22] J. Board and K. Schulten, Comp. Sci. Eng., 56 (2000)
[23] M.R. Pincus and H.A. Scheraga, J. Phys. Chem. 81, 1579 (1977).
[24] P. Gibbon and G. Sutmann, in Quantum Simulations of Complex Many-Body Systems: From
Theory to Algorithms, Lecture Notes, J. Grotendorst, D. Marx, A. Muramatsu (Eds.), John
von Neumann Institute for Computing, Jülich, NIC Series 10 467 (2002).
[25] J.A. Board, J.W. Causey, J.F. Leathrum, Jr., A. Windemuth, and K. Schulten, Chem. Phys.
Lett. 198, 89 (1992);
[26] F. Zhao and S.L. Johnson, Siam. J. Sci. Stat. Comput. 12, 1420 (1991).
[27] J. Shimada, H. Kaneko, and T. Takada, J. Comp. Chem. 15, 29 (1994).
[28] R.J. Buehler and J.O. Hirschfelder, Phys. Rev. 83, 628 (1951).
[29] C.G. Joslin and C.G. Gray, J.Phys.A: Mat. Gen 17, 1313 (1984).
[30] G.P. Gupta and K.C. Mathur, Phys. Rev. A 23, 2347 (1981).
[31] W.P Wolf and R.J. Birgeneau, Phys. Rev. 166, 376 (1968).
[32] D.A. Varshalovich, A.N. Moskalev, and V.K. Khersonskii, Quantum Theory of Angular Momentum (World Scientific, Singapore, 1988).
[33] The FMM library for long range Coulomb interactions was provided by the
complex atomistic modeling and simulations group, Forschungszentrum Jülich,
http://www.fz-juelich.de/zam/fcs/
[34] F. Fogolari, P. Zuccato, G. Esposito, and P. Viglino, Biophys. J. 76, 1 (1999).
[35] N.A. Baker, D. Sept, S. Joseph, M.J. Holst and J.A. MacCammon, Proc. Natl. Acad. Sci 98,
10037 (2001).
[36] H. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. Bhat, H. Weissig, I. Shindyalov, P. Bourne,
Nucleic Acids Research 28, 235 (2000).
[37] H.K. Song and S.W. Suh, Journ. of Mol. Biol. 275, 347 (1998).
[38] W. Humphrey, A. Dalke and K. Schulten, Journ. Molec. Graphics 14.1, 33 (1996).
HOLOGRAPHIC FORMULA FOR Q-CURVATURE
0704.1673 | Holographic formula for Q-curvature | HOLOGRAPHIC FORMULA FOR Q-CURVATURE
C. ROBIN GRAHAM AND ANDREAS JUHL
Introduction
In this paper we give a formula for Q-curvature in even-dimensional conformal
geometry. The Q-curvature was introduced by Tom Branson in [B] and has been
the subject of much research. There are now a number of characterizations of Q-
curvature; see for example [GZ], [FG1], [GP], [FH]. However, it has remained an open
problem to find an expression for Q-curvature which, for example, makes explicit the
relation to the Pfaffian in the conformally flat case.
Theorem 1. The Q-curvature of a metric g in even dimension n is given by

(0.1)    2n\, c_{n/2}\, Q = n\, v^{(n)} + \sum_{k=1}^{n/2-1} (n-2k)\, p_{2k}^*\, v^{(n-2k)},

where c_{n/2} = (-1)^{n/2}\, [2^n (n/2)! (n/2-1)!]^{-1}.
Here the v^{(2j)} are the coefficients appearing in the asymptotic expansion of the
volume form of a Poincaré metric for g, the differential operators p_{2k} are those which
appear in the expansion of a harmonic function for a Poincaré metric, and p_{2k}^* denotes
the formal adjoint of p_{2k}. These constructions are recalled in §1 below. We refer to
the papers cited above and the references therein for background about Q-curvature.
Each of the operators p_{2k}^* for 1 \le k \le n/2-1 can be factored as p_{2k}^* = \delta q_k, where
\delta denotes the divergence operator with respect to g and q_k is a natural operator from
δ denotes the divergence operator with respect to g and qk is a natural operator from
functions to 1-forms. So the second term on the right hand side is the divergence of
a natural 1-form. In particular, integrating (0.1) over a compact manifold recovers
the result of [GZ] that
(0.2)    2\, c_{n/2} \int_M Q\, dv_g = \int_M v^{(n)}\, dv_g.
This quantity is a global conformal invariant; the right hand side occurs as the coeffi-
cient of the log term in the renormalized volume expansion of a Poincaré metric (see
[G]).
The work of the first author was partially supported by NSF grant DMS-0505701. The work of
the second author was supported by SFB 647 “Raum-Zeit-Materie” of DFG.
http://arxiv.org/abs/0704.1673v1
As we also discuss in §1, if g is conformally flat then
v^{(n)} = (-2)^{-n/2}\, ((n/2)!)^{-1}\, \mathrm{Pff},
where Pff denotes the Pfaffian of g. So in the conformally flat case, Theorem 1 gives a
decomposition of the Q-curvature as a multiple of the Pfaffian and the divergence of a
natural 1-form. A general result in invariant theory ([BGP]) establishes the existence
of such a decomposition, but does not produce a specific realization.
We refer to (0.1) as a holographic formula because its ingredients come from the
Poincaré metric, involving geometry in n + 1 dimensions. Our proof is via the char-
acterization of Q-curvature presented in [FG1] in terms of Poincaré metrics; in some
sense Theorem 1 is the result of making explicit the characterization in [FG1]. How-
ever, passing from the construction in [FG1] to (0.1) involves a non-obvious appli-
cation of Green’s identity. The transformation law of Q-curvature under conformal
change, probably its most fundamental property, is not transparent from (0.1), but it
is from the characterization in [FG1]. In §2, we derive another identity involving the
p_{2k}^* v^{(n-2k)} which is used in §3 and we discuss relations to the paper [CQY]. In §3, we
describe the relation between holographic formulae for Q-curvature and the theory of
conformally covariant families of differential operators of [J], and in particular explain
how this theory leads to the conjecture of a holographic formula for Q.
We are grateful to the organizing committee of the 2007 Winter School ’Geome-
try and Physics’ at Srńı, particularly to Vladimir Souc̆ek, for the invitation to this
gathering, which made possible the interaction leading to this paper.
We dedicate this paper to the memory of Tom Branson. His insights have led to
beautiful new mathematics and have greatly influenced our own respective work.
1. Derivation
Let g be a metric of signature (p, q) on a manifold M of even dimension n. In this
paper, by a Poincaré metric for (M, g) we will mean a metric g+ on M × (0, a) for
some a > 0 of the form
(1.1)    g_+ = x^{-2}(dx^2 + g_x),
where g_x is a smooth 1-parameter family of metrics on M satisfying g_0 = g, such
that g_+ is asymptotically Einstein in the sense that \mathrm{Ric}(g_+) + n g_+ = O(x^{n-2}) and
\mathrm{tr}_{g_+}(\mathrm{Ric}(g_+) + n g_+) = O(x^{n+2}). Such a Poincaré metric always exists and g_x is unique
up to addition of a term of the form x^n h_x, where h_x is a smooth 1-parameter family of
symmetric 2-tensors on M satisfying \mathrm{tr}_g(h_0) = 0 on M. The Taylor expansion of g_x
is even through order n and the derivatives (\partial_x)^{2k} g_x|_{x=0} for 1 \le k \le n/2-1 and the
trace \mathrm{tr}_g((\partial_x)^n g_x|_{x=0}) are determined inductively from the Einstein condition and are
given by polynomial formulae in terms of g, its inverse, and its curvature tensor and
covariant derivatives thereof. See [GH] for details.
The first ingredient in our formula for Q-curvature consists of the coefficients in
the expansion of the volume form
(1.2)    dv_{g_+} = x^{-n-1}\, dv_{g_x}\, dx.
Because the expansion of g_x has only even terms through order n, it follows that

(1.3)    dv_{g_x} = \left(\frac{\det g_x}{\det g}\right)^{1/2} dv_g = (1 + v^{(2)} x^2 + \cdots + v^{(n)} x^n + \cdots)\, dv_g,

where each of the v^{(2k)} for 1 \le k \le n/2 is a smooth function on M expressible in
terms of the curvature tensor of g and its covariant derivatives. Set v^{(0)} = 1.
The second ingredient in our formula is the family of differential operators which
appears in the expansion of a harmonic function for the metric g_+. Given f \in C^\infty(M),
one can solve formally the equation \Delta_{g_+} u = O(x^n) for a smooth function u such that
u|_{x=0} = f, and such a u is uniquely determined modulo O(x^n). The Taylor expansion
of u is even through order n-2 and these Taylor coefficients are given by natural
differential operators in the metric g applied to f which are obtained inductively by
solving the equation \Delta_{g_+} u = O(x^n) order by order. See [GZ] for details. We write
the expansion of u in the form

(1.4)    u = f + p_2 f\, x^2 + \cdots + p_{n-2} f\, x^{n-2} + O(x^n);

then p_{2k} has order 2k and its principal part is (-1)^k \frac{\Gamma(n/2-k)}{2^{2k}\, k!\, \Gamma(n/2)} \Delta^k. (Our convention
is \Delta = -\nabla^i \nabla_i.) Set p_0 f = f.
We remark that the volume coefficients v(2k) and the differential operators p2k also
arise in the context of an ambient metric associated to (M, [g]). If an ambient metric
is written in normal form relative to g, then the same v(2k) are coefficients in the
expansion of its volume form, and the same operators p2k appear in the expansion of
a harmonic function homogeneous of degree 0 with respect to the ambient metric.
Let g_+ be a Poincaré metric for (M, g). In [FG1] it is shown that there is a unique
solution U mod O(x^n) to

(1.5)    \Delta_{g_+} U = n + O(x^{n+1} \log x)

of the form

(1.6)    U = \log x + A + B x^n \log x + O(x^n), \qquad A, B \in C^\infty(M \times [0, a)), \quad A|_{x=0} = 0.

Also, A mod O(x^n) is even in x and is formally determined by g, and

(1.7)    B|_{x=0} = -2\, c_{n/2}\, Q.
The proof of (1.7) presented in [FG1] used results from [GZ] about the scattering
matrix, so is restricted to positive definite signature. However, a purely formal proof
was also indicated in [FG1]. Thus (1.7) holds in general signature.
Proof of Theorem 1. Let g_+ be a Poincaré metric for g and let U be a solution of
(1.5) as described above. Let f \in C^\infty(M) have compact support. Let u be a solution
of \Delta_{g_+} u = O(x^n) with u|_{x=0} = f; for definiteness we take u to be given by (1.4) with
the O(x^n) term set equal to 0. Let 0 < \epsilon < x_0 with \epsilon, x_0 small.
Consider Green's identity

(1.8)    \int_{\epsilon < x < x_0} (U \Delta_{g_+} u - u \Delta_{g_+} U)\, dv_{g_+} = \left( \int_{x=\epsilon} + \int_{x=x_0} \right) (U \partial_\nu u - u \partial_\nu U)\, d\sigma,

where \nu denotes the inward normal and d\sigma the induced volume element on the boundary, relative to g_+. Both sides have asymptotic expansions as \epsilon \to 0; we calculate the
coefficient of \log \epsilon in these expansions.
Using the form of the expansion of U and the fact that \Delta_{g_+} u = O(x^n), one sees
that the expansion of U \Delta_{g_+} u\, dv_{g_+} has no x^{-1} term, so \int_{\epsilon < x < x_0} U \Delta_{g_+} u\, dv_{g_+} has no
\log \epsilon term. Using (1.2), (1.3), (1.4), and (1.5), one finds that the \log \epsilon coefficient of
\int_{\epsilon < x < x_0} u \Delta_{g_+} U\, dv_{g_+} is

(1.9)    n \int_M \sum_{k=0}^{n/2-1} v^{(n-2k)}\, p_{2k} f \; dv_g.
On the right hand side of (1.8), \int_{x=x_0} is independent of \epsilon, and

\int_{x=\epsilon} (U \partial_\nu u - u \partial_\nu U)\, d\sigma = \epsilon^{1-n} \int_M (U \partial_x u - u \partial_x U)\big|_{x=\epsilon}\, dv_{g_\epsilon}.

A \log \epsilon term in the expansion of this quantity can arise only from the \log x or x^n \log x
terms in the expansion of U. Substituting the expansions, one finds without difficulty
that the \log \epsilon coefficient is

\int_M \Big( \sum_{k=0}^{n/2-1} 2k\, v^{(n-2k)}\, p_{2k} f - n B f \Big)\, dv_g.

Equating this to (1.9), using (1.7), and moving all derivatives off f gives the desired
identity. □
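The equating step at the end of the proof can be written out explicitly (this is only the algebra implicit in the last sentence, using (1.7)):

```latex
% equating the two \log\epsilon coefficients:
n\int_M \sum_{k=0}^{n/2-1} v^{(n-2k)}\, p_{2k}f \; dv_g
  \;=\; \int_M \Big(\sum_{k=0}^{n/2-1} 2k\, v^{(n-2k)}\, p_{2k}f \;-\; nBf\Big)\, dv_g ,
% hence, using (1.7), i.e. B|_{x=0} = -2c_{n/2}Q:
\int_M \sum_{k=0}^{n/2-1} (n-2k)\, v^{(n-2k)}\, p_{2k}f \; dv_g
  \;=\; -n\int_M Bf\, dv_g \;=\; 2n\,c_{n/2}\int_M Qf\, dv_g .
% since f is arbitrary, taking adjoints ("moving all derivatives off f") gives
\sum_{k=0}^{n/2-1} (n-2k)\, p_{2k}^{*}\, v^{(n-2k)} \;=\; 2n\,c_{n/2}\, Q ,
% whose k = 0 term is n\,v^{(n)}; this is (0.1).
```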
Since \Delta_{g_+} 1 = 0, it follows that p_{2k} 1 = 0 for 1 \le k \le n/2-1. Thus these p_{2k} have
no constant term, so p_{2k}^* = \delta q_k for some natural operator q_k from functions to 1-forms,
where \delta denotes the divergence with respect to the metric g. So in (0.1), the second
term on the right hand side is the divergence of a natural 1-form. As mentioned in
the introduction, integration gives (0.2). The proof of Theorem 1 presented above in
the special case u = 1 is precisely the proof of (0.2) presented in [FG1].
Theorem 1 provides an efficient way to calculate the Q-curvature. Solving for the
beginning coefficients in the expansion of the Poincaré metric and then expanding its
volume form shows that the first few of the v^{(2k)} are given by:

v^{(2)} = -\tfrac{1}{2} J,

v^{(4)} = \tfrac{1}{8}\,(J^2 - |P|^2),

v^{(6)} = \tfrac{1}{48}\,\big( -P^{ij} B_{ij} + 3 J |P|^2 - J^3 - 2 P^{ij} P_i{}^k P_{kj} \big),

where

P_{ij} = \frac{1}{n-2}\Big( R_{ij} - \frac{R}{2(n-1)}\, g_{ij} \Big), \qquad J = P^i{}_i = \frac{R}{2(n-1)},

B_{ij} = P_{ij,k}{}^k - P_{ik,j}{}^k - P^{kl} W_{kijl},

and W_{ijkl} denotes the Weyl tensor. Similarly, one finds that the operators p_2 and p_4
are given by:

(1.10)    -2(n-2)\, p_2 = \Delta, \qquad 8(n-2)(n-4)\, p_4 = \Delta^2 + 2 J \Delta + 2(n-2) P^{ij} \nabla_i \nabla_j + (n-2) J^{,i} \nabla_i.
For n = 2, Theorem 1 states Q = -2 v^{(2)} = \tfrac{1}{2} R. For n = 4, substituting the above
into Theorem 1 gives:

Q = 2(J^2 - |P|^2) + \Delta J,

and for n = 6:

Q = 8 P^{ij} B_{ij} + 16 P^{ij} P_i{}^k P_{kj} - 24 J |P|^2 + 8 J^3
    + \Delta^2 J + 4 \Delta(J^2) + 8\, (P^{ij} J_{,i})_{,j} - 4 \Delta(|P|^2).

In the formula for n = 6, the first line is (12 c_3)^{-1}\, 6 v^{(6)} and the second line is
(12 c_3)^{-1} \big( 4 p_2^* v^{(4)} + 2 p_4^* v^{(2)} \big). Details of these calculations will appear in [J].
The expansion of the Poincaré metric g_+ was identified explicitly in the case that
g is conformally flat in [SS]. (Since we are only interested in local considerations,
by conformally flat we mean locally conformally flat.) The two dimensional case is
somewhat anomalous in this regard, but the identification of Q-curvature is trivial
when n = 2, so we assume n > 2 for this discussion. The conclusion of [SS] is that
if g is conformally flat and n > 2 (even or odd), then the expansion of the Poincaré
metric terminates at second order and

(1.11)    (g_x)_{ij} = g_{ij} - P_{ij}\, x^2 + \tfrac{1}{4}\, P_{ik} P^k{}_j\, x^4.

(The details of the computation are not given in [SS]. Details will appear in [FG2]
and [J].) This easily yields
Proposition 1. If g is conformally flat and n > 2, then

v^{(2k)} = \begin{cases} (-2)^{-k}\, \sigma_k(P) & 0 \le k \le n \\ 0 & n < k, \end{cases}

where \sigma_k(P) denotes the k-th elementary symmetric function of the eigenvalues of the
endomorphism P_i{}^j.

Proof. Write g^{-1}P for P_i{}^j. Then the \sigma_k(P) are given by

\det(I + g^{-1}P\, t) = \sum_{k=0}^{n} \sigma_k(P)\, t^k.

Equation (1.11) can be rewritten as g^{-1} g_x = (I - \tfrac{1}{2} g^{-1}P\, x^2)^2. Taking the determinant
and comparing with (1.3) gives the result. □
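Proposition 1 is easy to sanity-check numerically: with g the identity (so P_i{}^j is just a symmetric matrix P), the volume ratio computed from (1.11) and (1.3) must equal the polynomial \sum_k (-2)^{-k} \sigma_k(P) x^{2k}. The snippet below is such a check, not part of the paper:

```python
import numpy as np
from itertools import combinations

n = 5
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
P = 0.1 * (A + A.T)          # small symmetric stand-in for P_i^j; g = identity

x = 0.3
I = np.eye(n)
gx = I - P * x**2 + 0.25 * (P @ P) * x**4     # equation (1.11)
lhs = np.sqrt(np.linalg.det(gx))              # dv_{g_x}/dv_g as in (1.3)

# sigma_k(P): elementary symmetric functions of the eigenvalues of P
eig = np.linalg.eigvalsh(P)
sigma = [sum(np.prod(c) for c in combinations(eig, k)) for k in range(n + 1)]
rhs = sum((-2.0) ** (-k) * sigma[k] * x ** (2 * k) for k in range(n + 1))

print(abs(lhs - rhs))        # agreement to rounding error
```

The check works because g_x in (1.11) is exactly the square (I - \tfrac12 P x^2)^2, so the square root of its determinant is the characteristic-polynomial expression in the proof.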
We remark that for g conformally flat, g_x given by (1.11) is uniquely determined to
all orders by the requirement that g_+ be hyperbolic. So in this case the v^{(2k)} are
invariantly determined and given by Proposition 1 for all k \ge 0 in all dimensions
n > 2.
Returning to the even-dimensional case, we define the Pfaffian of the metric g by

(1.12)    2^n (n/2)!\; \mathrm{Pff} = (-1)^q\, \mu^{i_1 \ldots i_n} \mu^{j_1 \ldots j_n} R_{i_1 i_2 j_1 j_2} \cdots R_{i_{n-1} i_n j_{n-1} j_n},

where \mu_{i_1 \ldots i_n} = \sqrt{|\det(g)|}\; \epsilon_{i_1 \ldots i_n} is the volume form and \epsilon_{i_1 \ldots i_n} denotes the sign of the
permutation. For a conformally flat metric, one has R_{ijkl} = 2(P_{i[k} g_{l]j} - P_{j[k} g_{l]i}). Using
this in (1.12) and simplifying gives

\mathrm{Pff} = (n/2)!\; \sigma_{n/2}(P)

(see Proposition 8 of [V] for details). Combining with Proposition 1, we obtain for
conformally flat g:

v^{(n)} = (-2)^{-n/2}\, ((n/2)!)^{-1}\, \mathrm{Pff}.
Hence in the conformally flat case, (0.1) specializes to

2Q = 2^{n/2} (n/2 − 1)! Pff + (n c_{n/2})^{−1} Σ_{k=1}^{n/2−1} (n − 2k) p*_{2k} v^{(n−2k)},

and again the second term on the right hand side is a formal divergence.
HOLOGRAPHIC FORMULA FOR Q-CURVATURE 7
2. A Related Identity
In this section we derive another identity involving the p*_{2k} v^{(n−2k)}. It is in general
impossible to choose the O(x^n) term in (1.4) to make ∆_{g+}u = O(x^n); in fact
x^{−n} ∆_{g+}u|_{x=0} is independent of the O(x^n) term in (1.4) and is a conformally invariant
operator of order n applied to f, namely a multiple of the critical GJMS operator
P_n. Following [GZ], we consider the limiting behavior of the corresponding term in
the expansion of an eigenfunction for ∆_{g+} as the eigenvalue tends to 0.
Let g+ be a Poincaré metric as above. If 0 ≠ λ ∈ C is near 0, then for f ∈ C^∞(M),
one can solve formally the equation (∆_{g+} − λ(n − λ))u_λ = O(x^{n+λ+1}) for u_λ of the form

(2.1)   u_λ = x^λ ( f + p_{2,λ}f x^2 + · · · + p_{n,λ}f x^n + O(x^{n+1}) ),
where p_{2k,λ} is a natural differential operator in the metric g of order 2k with principal
part

(−1)^k [ Γ(n/2 − k − λ) / ( 2^{2k} k! Γ(n/2 − λ) ) ] ∆^k

such that [ Γ(n/2 − λ) / Γ(n/2 − k − λ) ] p_{2k,λ} is polynomial in λ. Set
p_{0,λ}f = f. The operators p_{2k,λ} for k < n/2 extend analytically across λ = 0 and
p_{2k,0} = p_{2k} for such k, where p_{2k} are the operators appearing in (1.4). But p_{n,λ} has a
simple pole at λ = 0 with residue a multiple of the critical GJMS operator P_n. Now
P_n is self-adjoint, so it follows that p_{n,λ} − p*_{n,λ} is regular at λ = 0. We denote its
value at λ = 0 by p_n − p*_n, a natural operator of order at most n − 2. Our identity
below involves the constant term (p_n − p*_n)1. Note that since P_n 1 = 0, both p_{n,λ}1 and
p*_{n,λ}1 are regular at λ = 0. We denote their values at λ = 0 by p_n 1 and p*_n 1; then
(p_n − p*_n)1 = p_n 1 − p*_n 1. Moreover, (4.7), (4.13), (4.14) of [GZ] show that

(2.2)   p_n 1 = −c_{n/2} Q.
It is evident that ∫_M p_n 1 dv_g = ∫_M p*_n 1 dv_g. The next proposition expresses the
difference p_n 1 − p*_n 1 as a divergence.
Proposition 2.

(2.3)   n (p_n − p*_n) 1 = Σ_{k=1}^{n/2−1} 2k p*_{2k} v^{(n−2k)}.
Proof. Take f ∈ C^∞(M) to have compact support, let 0 ≠ λ be near 0, and define
u_λ as in (2.1) with the O(x^{n+1}) term taken to be 0. Define w_λ by the corresponding
expansion with f = 1:

w_λ = x^λ ( 1 + p_{2,λ}1 x^2 + · · · + p_{n,λ}1 x^n ).

As in the proof of Theorem 1, consider Green's identity

(2.4)   ∫_{ǫ<x<x_0} (u_λ ∆_{g+} w_λ − w_λ ∆_{g+} u_λ) dv_{g+} = ǫ^{1−n} ∫_{x=ǫ} (u_λ ∂_x w_λ − w_λ ∂_x u_λ) dv_{g_ǫ} + c_{x_0},
where cx0 is the constant (in ǫ) arising from the boundary integral over x = x0.
Consider the coefficient of ǫ^{2λ} in the asymptotic expansion of both sides. The left
hand side equals

∫_{ǫ<x<x_0} [ u_λ (∆_{g+} − λ(n − λ)) w_λ − w_λ (∆_{g+} − λ(n − λ)) u_λ ] dv_{g+}.

Now u_λ (∆_{g+} − λ(n − λ)) w_λ dv_{g+} and w_λ (∆_{g+} − λ(n − λ)) u_λ dv_{g+} are of the form
x^{2λ} ψ dx dv_g where ψ is smooth up to x = 0. It follows that the asymptotic expansion
of the left hand side of (2.4) has no ǫ^{2λ} term. Consequently the coefficient of ǫ^{n+2λ}
must vanish in the asymptotic expansion of

∫_{x=ǫ} (u_λ x∂_x w_λ − w_λ x∂_x u_λ) dv_{g_ǫ}.

This is the same as the coefficient of ǫ^n in the expansion of

∫_M [ ( Σ_k p_{2k,λ}f ǫ^{2k} )( Σ_l (2l + λ) p_{2l,λ}1 ǫ^{2l} ) − ( Σ_l p_{2l,λ}1 ǫ^{2l} )( Σ_k (2k + λ) p_{2k,λ}f ǫ^{2k} ) ] ( Σ_m v^{(2m)} ǫ^{2m} ) dv_g.
Evaluation of the ǫ^n coefficient gives

∫_M Σ_{0≤k,l,m≤n/2, k+l+m=n/2} (2l − 2k)(p_{2k,λ}f)(p_{2l,λ}1) v^{(2m)} dv_g = 0,

and then moving the derivatives off f results in the pointwise identity

(2.5)   Σ_{0≤k,l,m≤n/2, k+l+m=n/2} (2l − 2k) p*_{2k,λ} [ (p_{2l,λ}1) v^{(2m)} ] = 0.
The limit as λ → 0 exists of all p_{2l,λ}1 with 0 ≤ l ≤ n/2 and all p*_{2k,λ} with 0 ≤ k ≤
n/2 − 1. Since k = n/2 forces l = m = 0, the operator p*_{n,λ} occurs only applied to 1.
Thus we may let λ → 0 in (2.5). Using p_{2l}1 = 0 for 1 ≤ l ≤ n/2 − 1 results in

n p_n 1 − Σ_{0≤k,m≤n/2, k+m=n/2} 2k p*_{2k} v^{(2m)} = 0.

Separating the k = n/2 term in the sum gives (2.3). □
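For orientation, here is the lowest nontrivial case of (2.3) written out (our own illustration, not from the paper), using the standard expansion coefficient v^{(2)} = −J/2, which is consistent with the statement Q = −2v^{(2)} = ½R for n = 2 above:

```latex
% n = 4 instance of (2.3): the sum on the right contains only the k = 1 term
4\,(p_4 - p_4^{*})\,1 \;=\; 2\,p_2^{*}\,v^{(2)} \;=\; -\,p_2^{*} J .
```

So in dimension four the divergence identity involves only the adjoint of the second-order operator p_2 applied to J.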
Proposition 2 may be combined with (0.1) and (2.2) to give other expressions for
Q-curvature. However, (0.1) seems the preferred form, as the other expressions all
involve some nontrivial linear combination of p_n 1 and p*_n 1.
We remark that the generalization of (2.5) obtained by replacing p2l,λ1 by p2l,λf
remains true for arbitrary f ∈ C∞(M). This follows by the same argument, taking
wλ to be given by the asymptotic expansion of the same form but with arbitrary
leading coefficient.
We conclude this section with some observations concerning relations to the paper
[CQY]:
(1) Recall that Theorem 1 was proven by consideration of the log ǫ term in (1.8),
generalizing the proof of (0.2) in [FG1] where u = 1. In [CQY], it was shown
that for a global conformally compact Einstein metric g+, consideration of the
constant term in

∫ ∆_{g+} U dv_{g+} = ∫ ∂_ν U dσ

for U a global solution of ∆_{g+} U = n gives a formula for the renormalized
volume V (g+, g) of g+ relative to a metric g in the conformal infinity of g+.
In our notation this formula reads

(2.6)   V (g+, g) = − ∫_M (S(s)1) dv_g + Σ_{k=1}^{n/2} ∫_M 2k ṗ*_{2k} v^{(n−2k)} dv_g,

where ṗ_{2k} = (d/dλ)|_{λ=0} p_{2k,λ} (which exists for k = n/2 when applied to 1) and
S(s) denotes the scattering operator relative to g. The operators ṗ_{2k} arise in
this context because the coefficient of x^{2k} in the expansion of U is ṗ_{2k}1 for
1 ≤ k ≤ n/2 − 1, and the coefficient of x^n involves ṗ_n 1. Likewise, consideration
of the constant term in

∫ u ∆_{g+} U dv_{g+} = ∫ (u ∂_ν U − U ∂_ν u) dσ

for harmonic u gives an analogous formula for the finite part of ∫ u dv_{g+} in
terms of boundary data.
(2) There is an analogue of Proposition 2 involving the ṗ*_{2k} v^{(n−2k)}. Differentiating
(2.5) with respect to λ at λ = 0 and rearranging gives the identity

Σ_k [ ṗ*_{2k} v^{(n−2k)} − (ṗ_{2k}1) v^{(n−2k)} ] = Σ_{k,l} (4l − 2k) p*_{2k−2l} [ (ṗ_{2l}1) v^{(n−2k)} ],

which expresses the left hand side as a divergence.
(3) In [CQY] it was also shown that under an infinitesimal conformal change, the
scattering term

S(g+, g) ≡ ∫_M (S(s)1) dv_g

satisfies

(d/dα)|_{α=0} S(g+, e^{2αΥ} g) = −2 c_{n/2} ∫_M Υ Q dv_g.

Comparing with

(d/dα)|_{α=0} V (g+, e^{2αΥ} g) = ∫_M Υ v^{(n)} dv_g
(see [G]) and using (2.6) and Theorem 1, one deduces the curious conclusion
that the infinitesimal conformal variation of

Σ_k ∫_M 2k ṗ*_{2k} v^{(n−2k)} dv_g

is

∫_M Υ Σ_{k=1}^{n/2−1} (n − 2k) p*_{2k} v^{(n−2k)} dv_g.

This statement involves the conformal variation only of local expressions. For
n = 2 this is the statement of conformal invariance of ∫_M R dv_g, while for
n = 4 it is the assertion that the infinitesimal conformal variation of ∫_M J^2 dv_g
is ∫_M Υ ∆J dv_g.
3. Q-curvature and families of conformally covariant differential
operators
In [J] one of the authors initiated a theory of one-parameter families of natural
conformally covariant local operators
(3.1)   D_N(X, M; h; λ) : C^∞(X) → C^∞(M),   N ≥ 0
of order N associated to a Riemannian manifold (X, h) and a hypersurface i : M → X,
depending rationally on the parameter λ ∈ C. For such a family the conformal weights
which describe the covariance of the family are coupled to the family parameter in
the sense that
(3.2)   e^{−(λ−N)ω} D_N(X, M; ĥ; λ) e^{λω} = D_N(X, M; h; λ),   ĥ = e^{2ω}h,

for all ω ∈ C^∞(X) (near M).
Two families are defined in [J]: one via a residue construction which has its origin
in an extension problem for automorphic functions of Kleinian groups through their
limit set ([J2], chapter 8), and the other via a tractor construction. Whereas the
tractor family depends on the choice of a metric h on X , the residue family depends
on the choice of an asymptotically hyperbolic metric h+ and a defining function x, to
which is associated the metric h = x2h+.
Fix an asymptotically hyperbolic metric h+ on one side X+ of M in X and choose
a defining function x for M with x > 0 in X+. Set h = x^2 h+. To an eigenfunction u
on X+ satisfying

∆_{h+} u = µ(n − µ)u,   Re µ = n/2,   µ ≠ n/2

is associated the family

⟨T_u(ζ, x), ϕ⟩ ≡ ∫ x^ζ u ϕ dv_h,   ϕ ∈ C_c^∞(X)
of distributions on X. The integral converges for Re ζ > −n/2 − 1 and the existence
of a formal asymptotic expansion

u ∼ Σ_{j≥0} x^{µ+j} a_j(µ) + Σ_{j≥0} x^{n−µ+j} b_j(µ),   x → 0,

with a_j(µ), b_j(µ) ∈ C^∞(M) implies the existence of a meromorphic continuation of
T_u(ζ, x) to C with simple poles in the ladders

−µ − 1 − N_0,   −(n − µ) − 1 − N_0.

For N ∈ N_0, its residue at ζ = −µ − 1 − N has the form

∫_M a_0 δ_N(h; µ + N − n)(ϕ) dv_{i*h},
where

δ_N(h; λ) : C^∞(X) → C^∞(M)

is a family of differential operators of order N depending rationally on λ ∈ C. If
x̂ = e^ω x with ω ∈ C^∞(X), then ĥ = e^{2ω}h and it is easily checked that δ_N(h; λ)
satisfies (3.2). (The family δ_N(h; λ) should more correctly be regarded as determined
by x and h+, but we use this notation nonetheless.)
If g is a metric on M, then we can take h+ = g+ to be a Poincaré metric for g on
X+ = M × (0, a) and x to be the coordinate in the second factor, so that h = dx^2 + g_x.
Then (assuming N ≤ n if n is even), the family δN(h;λ) depends only on the initial
metric g. The residue can be evaluated explicitly and for even orders N = 2L one
obtains
(3.3)   δ_{2L}(h; µ + 2L − n) = Σ_{0≤l≤k≤L} (1/(2L − 2k)!) p*_{2l,µ} ∘ v^{(2k−2l)} ∘ i* ∂_x^{2L−2k},
where the p_{2l,µ} are the operators appearing in (2.1) and the coefficients v^{(2j)} are used
as multiplication operators. The corresponding residue family is defined by
(3.4)   D^{res}_{2L}(g; λ) = 2^{2L} [ Γ(−n/2 − λ + 2L) / Γ(−n/2 − λ + L) ] δ_{2L}(h; λ);
the normalizing factor makes D^{res}_{2L}(g; λ) polynomial in λ. We are interested in the
critical case 2L = n for n even. Using

Res_0(p_{n,λ}) = −c_{n/2} P_n

from [GZ], we see that

(3.5)   D^{res}_n(g; 0) = (−1)^{n/2} P_n(g) i*.
Direct evaluation from (3.3), (3.4) gives

Ḋ^{res}_n(g; 0)1 = −(−1)^{n/2} c_{n/2}^{−1} ( p*_n 1 + Σ_{k=0}^{n/2−1} p*_{2k} v^{(n−2k)} ),

where the dot refers to the derivative in λ.
Suppose now that g is transformed conformally: ĝ = e^{2Υ}g with Υ ∈ C^∞(M). By
the construction of the normal form in §5 of [GL], the Poincaré metrics g+ and ĝ+ are
related by Φ*ĝ+ = g+ for a diffeomorphism Φ which restricts to the identity on M and
for which the function Φ*(x)/x restricts to e^Υ. Using this the residue construction
easily implies

(3.6)   e^{−(λ−n)Υ} D^{res}_n(ĝ; λ) = D^{res}_n(g; λ) (Φ*(x)/x)^λ.

Applying (3.6) to the function 1, differentiating at λ = 0, and using (3.5) and P_n 1 = 0
gives

e^{nΥ} Ḋ^{res}_n(ĝ; 0)1 = Ḋ^{res}_n(g; 0)1 − (−1)^{n/2} P_n Υ.
This proves that the curvature quantity

−(−1)^{n/2} Ḋ^{res}_n(g; 0)1 = c_{n/2}^{−1} ( p*_n 1 + Σ_{k=0}^{n/2−1} p*_{2k} v^{(n−2k)} )

satisfies the same transformation law as the Q-curvature. It is natural to conjecture
that it equals the Q-curvature. Indeed, this follows from (0.1), (2.2), and (2.3):
p*_n 1 + Σ_{k=0}^{n/2−1} p*_{2k} v^{(n−2k)}
= p_n 1 + ( p*_n 1 − p_n 1 + (1/n) Σ_{k=1}^{n/2−1} 2k p*_{2k} v^{(n−2k)} )
+ ( Σ_{k=0}^{n/2−1} p*_{2k} v^{(n−2k)} − (1/n) Σ_{k=1}^{n/2−1} 2k p*_{2k} v^{(n−2k)} ).
The first term is −c_{n/2}Q by (2.2), the second term is 0 by (2.3), and the last term is
2c_{n/2}Q by (0.1).
The relation Ḋ^{res}_n(g; 0)1 = (−1)^{n/2+1} Q and (3.5) show that both the critical GJMS
operator P_n and the Q-curvature are contained in the one object D^{res}_n(g; λ). In that
respect, D^{res}_n(g; λ) resembles the scattering operator in [GZ]. However, the family
D^{res}_n(g; λ) is local and all operators in the family have order n.
References
[B] T. Branson, Sharp inequalities, the functional determinant, and the complementary series,
Trans. AMS 347 (1995), 3671-3742.
[BGP] T. Branson, P. Gilkey, and J. Pohjanpelto, Invariants of locally conformally flat manifolds,
Trans. AMS 347 (1995), 939–953.
[CQY] S.-Y. A. Chang, J. Qing, and P. Yang, On the renormalized volumes for conformally com-
pact Einstein manifolds, math.DG/0512376.
[FG1] C. Fefferman and C.R. Graham, Q-curvature and Poincaré metrics, Math. Res. Lett. 9
(2002), 139–151.
[FG2] C. Fefferman and C.R. Graham, The ambient metric, in preparation.
[FH] C. Fefferman and K. Hirachi, Ambient metric construction of Q-curvature in conformal
and CR geometries, Math. Res. Lett. 10 (2003), 819–832.
[GP] A.R. Gover and L.J. Peterson, Conformally invariant powers of the Laplacian, Q-curvature
and tractor calculus, Comm. Math. Phys. 235 (2003), 339–378.
[G] C.R. Graham, Volume and area renormalizations for conformally compact Einstein metrics,
Rend. Circ. Mat. Palermo, Ser. II, Suppl. 63 (2000), 31–42.
[GH] C.R. Graham and K. Hirachi, The ambient obstruction tensor and Q-curvature, in
AdS/CFT Correspondence: Einstein Metrics and their Conformal Boundaries, IRMA Lec-
tures in Mathematics and Theoretical Physics 8 (2005), 59–71, math.DG/0405068.
[GJMS] C.R. Graham, R. Jenne, L.J. Mason, and G.A.J. Sparling, Conformally invariant powers
of the Laplacian. I. Existence, J. London Math. Soc. (2) 46 (1992), 557–565.
[GL] C.R. Graham and J. M. Lee, Einstein metrics with prescribed conformal infinity on the
ball, Adv. Math. 87 (1991), 186–225.
[GZ] C.R. Graham and M. Zworski, Scattering matrix in conformal geometry, Invent. Math.
152 (2003), 89–118.
[J] A. Juhl, Families of conformally covariant differential operators, Q-curvature and holog-
raphy, book in preparation.
[J2] A. Juhl, Cohomological theory of dynamical zeta functions, Prog. Math. 194, Birkhäuser,
2001.
[SS] K. Skenderis and S. Solodukhin, Quantum effective action from the AdS/CFT correspon-
dence, Phys. Lett. B472 (2000), 316–322, hep-th/9910023.
[V] J. Viaclovsky, Conformal Geometry, Contact Geometry and the Calculus of Variations,
Duke Math. J. 101 (2000), 283–316.
Department of Mathematics, University of Washington, Box 354350, Seattle, WA
98195 USA
E-mail address : [email protected]
Humboldt-Universität, Institut für Mathematik, Unter den Linden, 10099 Berlin
E-mail address : [email protected]
The azimuth structure of nuclear collisions – I
Thomas A. Trainor and David T. Kettler
CENPA 354290, University of Washington, Seattle, WA 98195
(Dated: November 5, 2018)
We describe azimuth structure commonly associated with elliptic and directed flow in the context
of 2D angular autocorrelations for the purpose of precise separation of so-called nonflow (mainly
minijets) from flow. We extend the Fourier-transform description of azimuth structure to include
power spectra and autocorrelations related by the Wiener-Khintchine theorem. We analyze several
examples of conventional flow analysis in that context and question the relevance of reaction plane
estimation to flow analysis. We introduce the 2D angular autocorrelation with examples from
data analysis and describe a simulation exercise which demonstrates precise separation of flow and
nonflow using the 2D autocorrelation method. We show that an alternative correlation measure
based on Pearson’s normalized covariance provides a more intuitive measure of azimuth structure.
PACS numbers: 13.66.Bc, 13.87.-a, 13.87.Fh, 12.38.Qk, 25.40.Ep, 25.75.-q, 25.75.Gz
I. INTRODUCTION
A major goal of the RHIC is production of color-
deconfined or QCD matter in heavy ion (HI) collisions, a
bulk QCD medium extending over a nontrivial space-
time volume which is in some sense thermalized and
whose dynamics are dominated in some sense by quarks
and gluons as the dominant degrees of freedom [1]. “Mat-
ter” in this context means an aggregate of constituents
in an equilibrium state, at least locally in space-time,
such that thermodynamic state variables provide a nearly
complete description of the system. Demonstration of
thermalization is seen by many as a necessary part of the
observation of QCD matter.
A. Global variables
One method proposed to demonstrate the existence of
QCD matter is to measure trends of global event vari-
ables, statistical measures formulated by analogy with
macroscopic thermodynamic quantities and based on in-
tegrals of particle yields over kinematically accessible mo-
mentum space. E.g., temperature analogs include spec-
trum inverse slope parameter T and ensemble-mean pt
p̂t. Chemical analogs include particle yields and their
ratios, such as the ensemble-mean K/π ratio [2]. Corre-
sponding fluctuation measures have been formulated for
the event-wise mean pt 〈pt〉 (“temperature” fluctuations)
and K/π ratio (chemical or flavor fluctuations) [3, 4, 5].
Arguments by analogy are less appropriate when deal-
ing with small systems (‘small’ in particle number, space
and/or time) where large deviations from macroscopic
thermodynamics may be encountered.
B. Flow analysis
One such global feature is the large-scale angular struc-
ture of the event-wise particle distribution. The compo-
nents of angular structure described by low-order spheri-
cal or cylindrical harmonics are conventionally described
as “flows.” The basic assumption is that such structure
represents collective motion of a thermalized medium,
and hydrodynamics is therefore an appropriate descrip-
tion. Observation of larger flow amplitudes is therefore
interpreted by many to provide direct evidence for event-
wise thermalization in heavy ion collisions [6]. Given
those assumptions each collision event is treated sepa-
rately. Event-wise angular distributions are fitted with
model functions associated with collective dynamics. The
model parameters are interpreted physically in a thermo-
dynamic (i.e., collective, thermalized) context.
However, collective flow in many-body physics is a
complex topic with longstanding open issues. There is
conflict in the description of nuclear collisions between
continuum hydrodynamics and discrete multiparticle sys-
tems which echoes the state of physics prior to the study
of Brownian motion by Einstein and Perrin [7, 8]. Be-
yond the classical dichotomy between discrete and con-
tinuous dynamics there is the still-uncertain contribu-
tion of quantum mechanics to the early stages of nuclear
collisions. Quantum transitions may play a major role
in phenomena perceived to be “collective.” Premature
imposition of hydrodynamic (hydro) models on collision
data may hinder full understanding.
C. Nonflow and multiplicity distortions
A major concern for conventional flow analysis is the
presence of “nonflow,” non-sinusoidal contributions to
azimuth structure often comparable in amplitude to sinu-
soid amplitudes (flows). Nonflow is treated as a system-
atic error in flow analysis, reduced to varying degrees by
analysis strategies. Another significant systematic issue
is “multiplicity distortions” associated with small event
multiplicities, also minimized to some degree by analysis
strategies. Despite corrections nonflow and small multi-
plicities remain a major limitation to conventional flow
measurements. For those reasons flow measurements in
peripheral heavy ion collisions are typically omitted. The
opportunity is then lost to connect the pQCD physics of
elementary collisions to nonperturbative, possibly collec-
tive dynamics in heavy ion collisions.
D. Minijets
A series of recent experiments has demonstrated that
the nonsinusoidal components of angular correlations at
full RHIC energy are dominated by fragments from low-
Q2 partons or minijets [9, 10, 11, 12]. Minijets in RHIC
p-p and A-A collisions have been studied extensively via
fluctuations [9, 10] and two-particle correlations [11, 12].
Minijets may dominate the production mechanism for the
QCD medium [13], and may also provide the best probe
of medium properties, including the extent of thermal-
ization. Comparison of minijets in elementary and heavy
ion collisions may relate medium properties and collec-
tive motion to a theoretical QCD context.
Ironically, demonstrating the existence and properties
of collective flows and of jets (collective hadron motion
from parton collisions and fragmentation) is formally
equivalent. Identifying a partonic “reaction plane” and
determining a nucleus-nucleus reaction plane require sim-
ilar techniques. For example, sphericity has been used to
obtain evidence for deformation of particle/pt/Et angu-
lar distributions due to parton collisions [14] and collec-
tive nucleon flow [15]. At RHIC we should ask whether
final-state angular correlations depend on the geometry
of parton collisions (minijets), N-N collisions or nucleus-
nucleus collisions (flows), or all three. The analogy is
important because to sustain a flow interpretation one
has to prove that there is a difference: e.g., to what ex-
tent do parton collisions contribute to flow correlations
or mimic them? The phenomena coexist on a continuum.
To resolve such ambiguities we require analysis meth-
ods which treat flow and minijets on an equal footing
and facilitate their comparison, methods which do not
impose the hypothesis to be tested on the measurement
scheme. To that end we should: 1) develop a consistent
set of neutral symbols; 2) manipulate random variables
with minimal approximations; 3) introduce proper statis-
tical references so that nonstatistical correlations of any
origin can be isolated unambiguously; 4) treat azimuth
structure ab initio in a model-independent manner using
standard mathematical methods (e.g., standard Fourier
analysis); and 5) include what is known about minijets
(“nonflow”) and “flow” in a more general analysis based
on two-particle correlations.
E. Structure of this paper
An underlying theme of this paper is the formal re-
lation between event-wise azimuth structure in nuclear
collisions and Brownian motion, and how that relation
can inform our study of heavy ion collisions. We begin
with a review of Fourier transform theory and the rela-
tion between power spectra and autocorrelations. That
material forms a basis for analysis of sinusoidal compo-
nents of angular correlations in nuclear collisions which
is well-established in standard mathematics.
We then review the conventional methods of flow anal-
ysis from Bevalac to RHIC. Five papers are discussed in
the context of Fourier transforms, power spectra and au-
tocorrelations. To facilitate a more general description of
angular asymmetries we set aside flow terminology (ex-
cept as required to make connections with the existing lit-
erature) and move to a model-independent description in
terms of spherical and cylindrical multipole moments. We
emphasize the relation of “flows” to multipole moments
as model-independent correlation measures. Physical in-
terpretation of multipole moments is an open question.
We then consider whether event-wise estimation of the
reaction plane is necessary for “flow” studies. The con-
ventional method of flow analysis is based on such esti-
mation, assuming that event-wise statistics are required
to demonstrate collectivity, and hence thermalization, in
heavy ion collisions.
We define the 2D joint angular autocorrelation and de-
scribe its properties. The autocorrelation is fundamental
to time-series analysis, the Brownian motion problem and
its generalizations and astrophysics, among many other
fields. It is shown to be a powerful tool for separating
“flow” from “nonflow.” The autocorrelation eliminates
biases in conventional flow analysis stemming from finite
multiplicities, and makes possible bias-free study of cen-
trality variations in A-A collisions down to N-N collisions.
“Nonflow” is dominated by minijets (minimum-bias
parton fragments, mainly from low-Q2 partons) which
can be regarded as Brownian probe particles for the QCD
medium, offering the possibility to explore small-scale
medium properties. Minijet systematics provide strong
constraints on “nonflow” in the conventional flow con-
text. Interaction of minijets with the medium, and par-
ticularly its collective motion, is the subject of paper II
of this series.
Finally, we consider examples from RHIC data of au-
tocorrelation structure. We show the relation between
“flow” and minijets, how conventional flow analysis is
biased by the presence of minijets, and how the autocor-
relation method eliminates that bias and insures accurate
separation of different collision dynamics.
We include several appendices. In App. A we review
Brownian motion and its formal connection to azimuth
correlations in nuclear collisions. In App. B we review
the algebra of random variables in relation to conven-
tional flow analysis techniques. We make no approxima-
tions in statistical analysis and invoke proper correlation
references to obtain a minimally-biased, self-consistent
analysis system in which flow and nonflow are precisely
distinguished. In App. C we review the mathematics
of spherical and cylindrical multipoles and sphericity. In
App. D we review subevents, scalar products and event-
plane resolution. In App. E we summarize some A-A
centrality issues related to azimuth multipoles and mini-
jets.
II. FOURIER ANALYSIS
The azimuth structure of nuclear collisions is part of
a larger problem: angular correlations of number, pt and
Et on angular subspace (η1, η2, φ1, φ2). There is a for-
mal similarity between event-wise particle distributions
on angle and the time series of displacements of a parti-
cle in Brownian motion. In either case the distribution
is discrete, combining a large random component with
the possibility of a smaller deterministic component. The
mathematical description of Brownian motion includes as
a key element the autocorrelation density, related to the
Fourier power spectrum through the Wiener-Khintchine
theorem (cf. App. A).
The Fourier series describes arbitrary distributions on
bounded angular interval 2π or distributions periodic on
an unbounded interval. The azimuth particle distribu-
tion from a nuclear collision is drawn from (samples)
a combination of sinusoids nearly invariant on rapidity
near midrapidity, conventionally described as “flows,”
and other azimuth structure localized on rapidity and
conventionally described as “nonflow.” The two contri-
butions are typically comparable in amplitude.
We first consider the mathematics of the Fourier trans-
form and power spectrum and their role in conventional
flow analysis [16]. We assume for simplicity that the
only angular structure in the data is represented by a few
lowest-order Fourier terms. In conventional flow analysis
the azimuth distribution is described solely by a Fourier
series, and corrections are applied in an attempt to com-
pensate for “nonflow” as a systematic error. We later re-
turn to the more general angular correlation problem and
consider non-sinusoidal (nonflow) structure described by
non-Fourier model functions in the larger context of 2D
(joint) angular autocorrelations. Precise description of
the composite structure requires a hybrid mathematical
model.
Event-wise random variables are denoted by a tilde.
Variables without tildes are ensemble averages, indicated
in some cases explicitly by overlines. Event-wise averages
are indicated by angle brackets. The algebra of random
variables is discussed in App. B. Where possible we em-
ploy notation consistent with conventional flow analysis.
A. Azimuth densities
The event-wise azimuth density (particle, pt or Et) is
a set of n samples from a parent density integrated over
some (pt, η) acceptance, a sum over Dirac delta functions
(particle positions)
ρ̃(φ) = Σ_{i=1}^{n} r_i δ(φ − φ_i).   (1)
The ri are weights (1, pt or Et) appropriate to a given
physical context. We assume integration over one unit of
pseudorapidity, so multiplicity n estimates dn/dη (simi-
larly for pt and Et). The continuum parent density, not
directly observable, is the object of analysis. Fixed parts
of the parent density are estimated by a histogram aver-
aged over an event ensemble. The discrete nature of the
sample distribution and its statistical character present
analysis challenges which are one theme of this paper.
The correlation structure of the single-particle azimuth
density is manifested in ensemble-averaged multiparticle
(two-particle, etc.) densities. Accessing that structure
by projection of multiparticle spaces to 2D or 1D with
minimal distortion is the object of correlation analysis.
In each event the two-particle density is the Cartesian
product ρ̃(φ1, φ2) = ρ̃(φ1) ρ̃(φ2)
ρ̃(φ1, φ2) = Σ_{i=1}^{n} r_i^2 δ(φ1 − φ_i) δ(φ2 − φ_i) + Σ_{i≠j}^{n,n−1} r_i r_j δ(φ1 − φ_i) δ(φ2 − φ_j),   (2)
where the first term represents self pairs. Ensemble-
averaged two-particle distribution ρ(φ1, φ2) with correla-
tions is not generally factorizable. By comparing the av-
eraged two-particle distribution to a factorized (or mixed-
pair) statistical reference two-particle correlations are re-
vealed. In a later section we compare multipole moments
from two-particle correlation analysis on azimuth to re-
sults from conventional flow analysis methods.
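The comparison of an ensemble-averaged same-event pair distribution to a factorized (mixed-pair) reference can be sketched numerically. The simulation below is our own illustration, not the paper's analysis: events carry a common m = 2 modulation with a random event-wise phase, so same-event pairs retain a cos(2φ_Δ) correlation that the mixed-pair reference lacks.

```python
# Same-event pairs vs. mixed-pair (factorized) reference on azimuth.
# Each event samples n angles from (1 + 2 v2 cos(2[phi - psi]))/2pi with a
# random event-wise phase psi; the pair average <cos(2 phi_delta)> is ~v2^2
# for same-event pairs and ~0 for pairs mixed across events.
import numpy as np

rng = np.random.default_rng(1)
n_events, n, v2 = 4000, 20, 0.1

def sample_event():
    psi = rng.uniform(0, 2*np.pi)
    phi = np.empty(0)
    while phi.size < n:                       # accept-reject sampling
        x = rng.uniform(0, 2*np.pi, 4*n)
        keep = rng.uniform(0, 1 + 2*v2, 4*n) < 1 + 2*v2*np.cos(2*(x - psi))
        phi = np.concatenate([phi, x[keep]])[:n]
    return phi

events = [sample_event() for _ in range(n_events)]
mask = ~np.eye(n, dtype=bool)                 # exclude self pairs

same_avg = np.mean([np.cos(2*(e[:, None] - e[None, :]))[mask].mean()
                    for e in events])
mixed_avg = np.mean([np.cos(2*(a[:, None] - b[None, :])).mean()
                     for a, b in zip(events[:-1], events[1:])])

assert abs(same_avg - v2**2) < 0.005          # correlated: ~ v2^2 = 0.01
assert abs(mixed_avg) < 0.005                 # factorized reference: no signal
```

The excess of the same-event pair average over the mixed reference is the two-particle correlation that survives the projection described in the text.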
B. Fourier transforms on azimuth
A Fourier series is an efficient representation if azimuth
structure approximates a constant plus a few sinusoids
whose wavelengths are integral fractions of 2π. A Fourier
representation of a peaked distribution (e.g., jet cone)
is not an efficient representation. We assume a simple
combination of the lowest few Fourier terms.
The Fourier forward transform (FT) is
ρ̃(φ) = (1/2π) Σ_{m=−∞}^{∞} Q̃_m exp(imφ)   (3)
      = (1/2π) [ Q̃_0 + 2 Σ_{m=1} Q̃_m cos(m[φ − Ψ_m]) ],
where boldface Q̃m is an event-wise complex amplitude,
Q̃m is its magnitude and Ψm its phase angle. The second
line arises because ρ̃(φ) is a real function, and the prac-
tical upper limit on m is particle number n (wavelength
∼ mean interparticle spacing). Q̃m/2π = ρ̃m is the am-
plitude of the density variation associated with the mth
sinusoid. With ri → 1 Q̃m is the corresponding num-
ber of particles in 2π if that density were uniform. Ψm
is event-wise by definition and does not require a tilde.
The reverse transform (RT) is
Q̃_m = ∫_{2π} dφ ρ̃(φ) exp(−imφ)   (4)
     = Σ_{i=1}^{n} r_i exp(−imφ_i)
     = Q̃_m exp(imΨ_m).
For the discrete transform φ ∈ [−π, π] is partitioned
into M equal bins with bin contents r̃l and bin centers
φl. We multiply Eq. (3) by bin width δφ = 2π/M , and
the Fourier transform pair becomes
r̃_l = (1/M) [ Q̃_0 + 2 Σ_{m=1}^{M/2} Q̃_m cos(m[φ_l − Ψ_m]) ]   (5)

Q̃_m = Σ_{l=−M/2}^{M/2−1} r̃_l exp(−imφ_l),
where the r̃l are also random variables. The upper limit
M/2 in the first line is a manifestation of the Nyquist
sampling theorem [17]. With ri → 1 r̃l → ñl and Qm/M
is the maximum number of particles in a bin associated
with the mth sinusoid.
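The discrete transform pair can be checked directly. The sketch below is our own illustration (unit weights, and bins indexed 0 to M − 1 rather than symmetrically about zero): bin contents determine the amplitudes Q̃_m for m = 0 .. M/2, and the truncated cosine series reproduces the bin contents exactly, with the Nyquist term m = M/2 counted once as the sampling theorem requires.

```python
# Discrete Fourier transform pair on M azimuth bins (cf. Eq. (5)).
# Bin contents r_l -> amplitudes Q_m (m = 0..M/2) -> reconstructed r_l.
import numpy as np

rng = np.random.default_rng(2)
M = 16
centers = -np.pi + (np.arange(M) + 0.5) * 2*np.pi / M   # bin centers phi_l
r = rng.poisson(10.0, M).astype(float)                  # bin contents (counts)

Q = np.array([np.sum(r * np.exp(-1j*m*centers)) for m in range(M//2 + 1)])

# inverse series: DC term + paired cosine terms + single Nyquist term
rec = (Q[0].real
       + 2*sum(np.real(Q[m]*np.exp(1j*m*centers)) for m in range(1, M//2))
       + np.real(Q[M//2]*np.exp(1j*(M//2)*centers))) / M
assert np.allclose(rec, r)
```

The reconstruction is exact because M real bin contents carry exactly the information of the M/2 + 1 complex amplitudes with the reality constraint, which is the content of the Nyquist remark above.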
C. Autocorrelations and power spectra
The azimuth autocorrelation density ρA(φ∆) is a pro-
jection by averaging of the pair density on two-particle
azimuth space (φ1, φ2) onto difference axis φ∆ = φ1−φ2.
The autocorrelation concept is not restricted to peri-
odic or bounded distributions or discrete Fourier trans-
forms [16]. In what follows ρ̃A includes self pairs. The
autocorrelation density is defined as [18]
ρ̃_A(φ_∆) ≡ (1/2π) ∫_{2π} dφ ρ̃(φ) ρ̃(φ + φ_∆)   (6)
 = (1/2π) Σ_{i,j=1}^{n} r_i r_j ∫_{2π} dφ δ(φ − φ_i) δ(φ − φ_j + φ_∆)
 = (1/2π) Σ_{i,j=1}^{n} r_i r_j δ(φ_i − φ_j + φ_∆).
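In practice the autocorrelation density is estimated by histogramming all pair differences. The following sketch is our own illustration (unit weights), mirroring the last line of Eq. (6):

```python
# The event-wise autocorrelation of Eq. (6) is, up to the 1/2pi normalization,
# a histogram of all n^2 pair differences phi_i - phi_j (self pairs included),
# wrapped to [-pi, pi).
import numpy as np

rng = np.random.default_rng(3)
n, M = 200, 32
phi = rng.uniform(-np.pi, np.pi, n)
r = np.ones(n)                                    # unit (number) weights

diffs = (phi[:, None] - phi[None, :]).ravel()     # all pair differences
diffs = (diffs + np.pi) % (2*np.pi) - np.pi       # wrap to [-pi, pi)
w = np.outer(r, r).ravel()                        # pair weights r_i r_j
hist, _ = np.histogram(diffs, bins=M, range=(-np.pi, np.pi), weights=w)

# pair count is conserved by the projection onto the difference axis
assert np.isclose(hist.sum(), n*n)
```

Dividing the histogram by the bin width and by 2π gives the density estimate corresponding to ρ̃_A(φ_∆).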
Using Eq. (3) we obtain the FT as
ρ̃_A(φ_∆) = (1/2π) ∫_{2π} dφ   (7)
   × (1/2π) Σ_{m=−∞}^{∞} Q̃_m exp(imφ) × (1/2π) Σ_{m′=−∞}^{∞} Q̃*_{m′} exp(−im′[φ + φ_∆])
 = (1/[2π]^2) Σ_{m=−∞}^{∞} Q̃_m^2 exp(−imφ_∆)
 = (1/[2π]^2) Q̃_0^2 + (2/[2π]^2) Σ_{m=1} Q̃_m^2 cos(mφ_∆),
and the RT as
Q̃_m^2 = n^2 ⟨r cos(m[φ − Ψ_m])⟩^2   (8)
 = 2π ∫_{2π} dφ_∆ ρ̃_A(φ_∆) cos(mφ_∆)
 = Σ_{i,j=1}^{n} r_i r_j cos(m[φ_i − φ_j])
 = n⟨r^2⟩ + n(n − 1)⟨r^2 cos(mφ_∆)⟩
 = n⟨r^2⟩ + n(n − 1)⟨r^2 cos^2(m[φ − Ψ_r])⟩.
Phase angle Ψm has been eliminated, and Ψr will be
identified with the reaction plane angle.
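The pair-sum form of Eq. (8) is an exact algebraic identity for each event and is easy to verify. The sketch below is our own illustration, with pt-like weights drawn at random:

```python
# For a single event: |Q_m|^2 = |sum_i r_i exp(-i m phi_i)|^2 equals the
# all-pairs sum of r_i r_j cos(m[phi_i - phi_j]); removing the self pairs
# leaves the true two-particle part V_m^2 above the pedestal n<r^2>.
import numpy as np

rng = np.random.default_rng(4)
n, m = 50, 2
phi = rng.uniform(0, 2*np.pi, n)
r = rng.uniform(0.5, 1.5, n)                      # e.g. pt weights

Q2 = abs(np.sum(r * np.exp(-1j*m*phi)))**2        # from the RT, Eq. (4)
pairs = np.outer(r, r) * np.cos(m*(phi[:, None] - phi[None, :]))
assert np.isclose(Q2, pairs.sum())                # Eq. (8), self pairs included

V2 = pairs[~np.eye(n, dtype=bool)].sum()          # self pairs excluded
assert np.isclose(Q2, np.sum(r**2) + V2)          # pedestal + signal decomposition
```

The imaginary parts cancel pairwise, which is why the cosine form is exact.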
We can write the same relations for ensemble-averaged
quantities because the terms are positive-definite
ρ_A(φ_∆) ≡ (1/[2π]^2) Q_0^2 + (2/[2π]^2) Σ_{m=1} Q_m^2 cos(mφ_∆),   (9)

with RT

Q_m^2 = 2π ∫_{2π} dφ_∆ ρ_A(φ_∆) cos(mφ_∆)   (10)
 = Σ_{i,j=1}^{n} r_i r_j cos(m[φ_i − φ_j])
 = n⟨r^2⟩ + n(n − 1)⟨r^2 cos(mφ_∆)⟩.
We have adopted the convention that Q_m^2 denotes the ensemble average of Q̃_m^2, to lighten
the notation. Coefficients Q_m^2 are power-spectrum ele-
ments on wave-number index m. That FT transform pair
expresses the Wiener-Khintchine theorem which relates
power-spectrum elements Q2m to autocorrelation density
ρA(φ∆). The autocorrelation provides precise access to
two-particle correlations given enough collision events, no
matter how small the event-wise multiplicities.
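That last point can be demonstrated with a small simulation. The exercise below is our own illustration (unit weights), not the simulation exercise described in the abstract: with only n = 10 particles per event, ensemble averaging of the power-spectrum element recovers a weak m = 2 sinusoid amplitude from the statistical noise, since ⟨Q̃_2^2⟩ = n + n(n − 1) v_2^2.

```python
# Ensemble-averaged power spectrum at low multiplicity: subtracting the
# white-noise pedestal n<r^2> from <|Q_2|^2> isolates the sinusoid amplitude,
# via the pair average <cos(2 phi_delta)> = v2^2.
import numpy as np

rng = np.random.default_rng(5)
n_events, n, v2 = 50_000, 10, 0.08

q2 = []
for _ in range(n_events):
    psi = rng.uniform(0, 2*np.pi)                 # event-wise phase angle
    phi = np.empty(0)
    while phi.size < n:                           # sample from the density
        x = rng.uniform(0, 2*np.pi, 4*n)          # (1 + 2 v2 cos(2[phi-psi]))/2pi
        keep = rng.uniform(0, 1 + 2*v2, 4*n) < 1 + 2*v2*np.cos(2*(x - psi))
        phi = np.concatenate([phi, x[keep]])[:n]
    q2.append(abs(np.sum(np.exp(-2j*phi)))**2)

V2 = np.mean(q2) - n                              # remove white-noise pedestal
v2_est = np.sqrt(V2 / (n*(n - 1)))
assert abs(v2_est - v2) < 0.02
```

No event-wise phase estimation is needed: the pair-based quantity is averaged directly over the ensemble.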
D. Autocorrelation structure
The autocorrelation concept was developed in response
to the Brownian motion problem and the Langevin equa-
tion, a differential equation describing Brownian motion
which contains a stochastic term. The concept is al-
ready apparent in Einstein’s first paper on the subject
(cf. App. A). The large-scale, possibly-deterministic mo-
tion of the Brownian probe particle must be separated
from its small-scale random motion due to thermal col-
lisions with molecules. Similarly, we want to extract az-
imuth correlation structure persisting in some sense over
an event ensemble from event-wise random variations.
The autocorrelation technique is designed for that pur-
pose.
The statistical reference for a power spectrum is the
white-noise background representing an uncorrelated sys-
tem. The reference is typically uniform up to large
wave number or frequency (hence white noise), an in-
evitable part of any power spectrum from a discrete pro-
cess (point distribution). The “signal” is typically limited
to a bounded region (signal bandwidth) of the spectrum
at smaller frequencies or wave numbers.
From Eq. (8) the power-spectrum elements are
\begin{align}
\tilde Q_m^2 &= \sum_{i=1}^{n} r_i^2 + \sum_{i\neq j}^{n,n-1} r_i r_j \cos(m[\phi_i-\phi_j]) \tag{11}\\
&= n\langle r^2\rangle + n(n-1)\langle r^2\cos(m\phi_\Delta)\rangle. \nonumber
\end{align}
The first term in Eq. (11) is Q̃2ref , the white-noise back-
ground component of the power spectrum common to
all spectrum elements. The second term, which we de-
note Ṽ 2m, represents true two-particle azimuth correla-
tions. Note that Q̃₀² = n²⟨r²⟩, whereas Ṽ₀² = n(n−1)⟨r²⟩.
In terms of complex (or vector) amplitudes we can write
Q̃m = Q̃ref + Ṽm, (12)
where Q̃ref represents a random walker. There is no
cross term in Eq. (11) because Qref and Vm are uncor-
related.
Inserting the power-spectrum elements into Eq. (9) we
obtain the ensemble-averaged autocorrelation density
\begin{align}
\rho_A(\phi_\Delta) = \frac{\bar n\langle r^2\rangle}{2\pi}\,\delta(\phi_\Delta) + \frac{\overline{n(n-1)}\langle r^2\rangle}{[2\pi]^2} + \frac{2}{[2\pi]^2}\sum_{m=1}^{\infty} V_m^2\cos(m\phi_\Delta). \tag{13}
\end{align}
The first term is the self-pair or statistical noise term,
which can be excluded from ρA by definition simply
by excluding self pairs. The second term, with V 20 =
n(n− 1)〈r2〉, is a uniform component, and the third term
is the sinusoidal correlation structure. The self-pair term
is referred to in conventional flow analysis as the “auto-
correlation,” in the sense of a bias or systematic error,
but that is a notional misuse of standard mathematical
terminology. The true autocorrelation density is the en-
tirety of Eq. (13), including (in this simplified case) the
self-pair term, the uniform component and the sinusoidal
two-particle correlations.
In general, the single-particle ensemble-averaged dis-
tribution ρ0 may be structured on (η, φ). We want to
subtract the corresponding reference structure from the
two-particle distribution to isolate the true correlations.
In what follows we assume ri = 1 for simplicity, therefore
describing number correlations. We subtract factorized
reference autocorrelation ρA,ref (φ1, φ2) = ρ0(φ1) ρ0(φ2)
representing a system with no correlations, with ρ0 =
n̄/2π ≃ d2n/dηdφ in this simple example, to obtain the
difference autocorrelation
\begin{align}
\Delta\rho_A(\phi_\Delta) = \rho_A - \rho_{A,ref} = \frac{\sigma_n^2-\bar n}{[2\pi]^2} + \frac{2}{[2\pi]^2}\sum_{m=1}^{\infty} V_m^2\cos(m\phi_\Delta). \tag{14}
\end{align}
The first term measures excess (non-Poisson) multiplicity
fluctuations in the full (pt, η, φ) acceptance. The second
term is a sum over cylindrical multipoles. We now divide
the autocorrelation difference by
√ρA,ref = ρ0 = n̄/2π
to form the density ratio
\begin{align}
\frac{\Delta\rho_A}{\sqrt{\rho_{A,ref}}} &= \frac{\sigma_n^2-\bar n}{2\pi\,\bar n} + \frac{1}{\pi\,\bar n}\sum_{m=1}^{\infty} V_m^2\cos(m\phi_\Delta) \tag{15}\\
&\equiv \frac{\Delta\rho_A[0]}{\sqrt{\rho_{A,ref}}} + 2\sum_{m=1}^{\infty}\frac{\Delta\rho_A[m]}{\sqrt{\rho_{A,ref}}}\cos(m\phi_\Delta).
\end{align}
The first term ∆ρ_A[0]/√ρ_A,ref is the density ratio averaged over acceptance (∆η,∆φ). Its integral at full
acceptance is normalized variance difference ∆σ²_{n/n̄} ≡ (σ_n² − n̄)/n̄, which we divide by acceptance integral 1×2π
to obtain the mean of the 2D autocorrelation density ∆ρ_A[0]/√ρ_A,ref = ∆σ²_{n/n̄}(∆η,∆φ)/∆η∆φ. The sinusoid
amplitudes are ∆ρ_A[m]/√ρ_A,ref = n(n−1) ṽ_m²/(2πn̄) ≡ n̄ v_m²/2π, defining unbiased v_m. The event-wise ṽ_m² ≡
⟨cos(mφ_∆)⟩ are related to conventional flow measures v_m, but may be numerically quite different for small
multiplicities due to bias in the latter. Different mean-
value definitions result in different measured quantities
(cf. App. B). Whereas the Q2m are power-spectrum ele-
ments which include the white-noise reference, the V 2m (∝
squares of cylindrical multipole moments) represent the
true azimuth correlation signal. This important result
combines several measures of fluctuations and correla-
tions within a comprehensive system.
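The role of the σ_n² − n̄ term can be seen in a small numerical sketch (assumed multiplicity models, not from the source): a Poisson multiplicity distribution gives σ_n² − n̄ ≈ 0, while an over-dispersed (negative binomial) distribution leaves a positive non-Poisson excess.

```python
import numpy as np

rng = np.random.default_rng(2)
n_events = 200000  # hypothetical ensemble size

# Poisson multiplicities: variance equals mean, so sigma_n^2 - nbar ~ 0.
n_poisson = rng.poisson(15.0, n_events)
excess_poisson = n_poisson.var() - n_poisson.mean()

# Over-dispersed (negative binomial) multiplicities: mean 15, variance 37.5.
n_nbd = rng.negative_binomial(10, 0.4, n_events)
excess_nbd = n_nbd.var() - n_nbd.mean()

assert abs(excess_poisson) < 0.3  # consistent with zero (Poisson limit)
assert excess_nbd > 10.0          # clear non-Poisson excess
```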
E. Azimuth vectors
Power-spectrum elements Q2m are derived from com-
plex Fourier amplitudes Qm. In an alternate representa-
tion the complex Qm can be replaced by azimuth vectors
~Qm. The ~Qm are conventionally referred to as “flow vec-
tors” [22], but they include a statistical reference as well
as a “flow” sinusoid. The ~Vm defined in this paper are
more properly termed “flow vectors,” to the extent that
such terminology is appropriate. We refer to the ~Qm by
the model-neutral term azimuth vector and define them
by the following argument.
The cosine of an angle difference—cos(m[φ1 − φ2]) =
cos(mφ1) cos(mφ2) + sin(mφ1) sin(mφ2)—can be repre-
sented in two ways, with complex unit vectors u(mφ) ≡
exp(imφ) or with real unit vectors ~u(mφ) ≡ cos(mφ) î + sin(mφ) ĵ [complex plane (ℜz, ℑz) vs real plane (x, y)].
Thus,
cos(m[φ1 − φ2]) = ℜ{u(mφ1)u∗(mφ2)} (16)
= ~u(mφ1) · ~u(mφ2).
If an analysis is reducible to terms in cos(m[φ1 − φ2])
the same results are obtained with either representation.
Thus, we can rewrite the first line of Eq. (3) as
\begin{align}
\tilde\rho(\phi) &= \frac{1}{2\pi}\left[\tilde Q_0 + 2\sum_{m=1}^{\infty} \vec{\tilde Q}_m \cdot \vec u(m\phi)\right] \tag{17}\\
&= \frac{1}{2\pi}\left[\tilde Q_0 + 2\sum_{m=1}^{\infty} \tilde Q_m \cos(m[\phi-\Psi_m])\right],
\end{align}
in which case
\begin{align}
\vec{\tilde Q}_m &= \int_0^{2\pi} d\phi\; \tilde\rho(\phi)\,\vec u(m\phi) = \sum_{i=1}^{n} r_i\,\vec u(m\phi_i) \tag{18}\\
&= n\langle r[\cos(m\phi),\sin(m\phi)]\rangle \nonumber\\
&= \tilde Q_m\,\vec u(m\Psi_m), \nonumber
\end{align}
an event-wise random real vector.
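The equivalence of the complex and real-vector representations is immediate to verify numerically (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 2
phi1, phi2 = rng.uniform(0.0, 2.0 * np.pi, 2)

# Complex representation: u(m phi) = exp(i m phi)
complex_form = np.real(np.exp(1j * m * phi1) * np.conj(np.exp(1j * m * phi2)))

# Real-vector representation: u(m phi) = (cos m phi, sin m phi)
u1 = np.array([np.cos(m * phi1), np.sin(m * phi1)])
u2 = np.array([np.cos(m * phi2), np.sin(m * phi2)])
vector_form = float(u1 @ u2)

# Both reduce to cos(m[phi1 - phi2]), as in Eq. (16).
assert np.isclose(complex_form, np.cos(m * (phi1 - phi2)))
assert np.isclose(vector_form, np.cos(m * (phi1 - phi2)))
```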
III. CONVENTIONAL METHODS
We now use the formalism in Sec. II to review con-
ventional flow analysis methods in a common framework.
We consider five significant papers in chronological order.
The measurement of angular correlations to detect col-
lective dynamics (e.g., parton fragmentation and/or hy-
drodynamic flow) proceeds from directivity (1983) at the
Bevalac (1 - 2 GeV/u fixed target) to transverse spheric-
ity predictions (1992) for the SPS/RHIC (√s_NN = 17 -
200 GeV), then to Fourier analysis of azimuth distribu-
tions (1994, 1998) and v2 centrality trends (1999). We
pay special attention to the manipulation of random vari-
ables (RVs). RVs do not follow the algebra of ordinary
variables, and the differences are especially important for
small sample numbers (multiplicities, cf. App. B).
A. Directivity at the Bevalac
An important goal of the Bevalac HI program was ob-
servation of the collective response of projectile nucleons
to compression during nucleus-nucleus collisions, called
directed flow, which might indicate system memory of
the initial impact parameter as opposed to isotropic ther-
mal emission. Because of finite (small) multiplicities and
large fluctuations relative to the measured quantity the
geometry of a given collision may be poorly defined, but
the final state may still contain nontrivial collective infor-
mation. The analysis goal becomes separating possible
collective signals from statistical noise.
An initial search for collective effects was based on the
3D sphericity tensor S̃ = Σ_i ~p_i ~p_i [14, 15] described in
App. C 3. Alternatively, the directivity vector was defined
in the transverse plane [19]. In the notation of Sec. II,
including weights ri → wipti, directivity is
\begin{align}
\vec{\tilde Q}_1 \equiv \sum_{i=1}^{n} w_i\,\vec p_{ti} = \sum_{i=1}^{n} w_i p_{ti}\,\vec u(\phi_i) = \tilde Q_1\,\vec u(\Psi_1), \tag{19}
\end{align}
azimuth vector ~̃Qm with m = 1 (corresponding to di-
rected flow). Event-plane (EP) angle Ψ1 estimates true
reaction-plane (RP) angle Ψr. To maintain correspon-
dence with SPS and RHIC analysis we simplify the de-
scription in [19] to the case that n single nucleons are
detected and no multi-nucleon clusters; thus a → 1 and
A→ n.
The terms in ~̃Q1 are weighted by wi = w(yi) = ±1 cor-
responding to the forward or backward hemisphere rela-
tive to the CM rapidity, with a region [−δ, δ] about mid-
rapidity excluded from the sum (wi = 0). Q̃1 then ap-
proximates quadrupole moment q̃21 derived from spher-
ical harmonic ℜY₂¹ ∝ sin(2θ) cos(φ), as illustrated in
Fig. 1 (left panel, dashed lines). In effect, a rotated
quadrupole is modeled by two opposed dipoles, point-
symmetric about the CM in the reaction plane. It is ini-
tially assumed that EP angle Ψ1 of vector ~Q1 estimates
RP angle Ψr, and magnitude Q1 measures directed flow.
The first part of the analysis was based on subevents—
nominally equivalent but independent parts of each
event. The dot product ~Q1A · ~Q1B for subevents A
and B of each event was used to establish the exis-
tence of a significant flow phenomenon and the angu-
lar resolution of the RP estimation via the distribu-
tion on Ψ1A − Ψ1B. The EP resolution was defined by
⟨cos(Ψ₁ − Ψ_r)⟩ = √(2⟨cos(Ψ₁A − Ψ₁B)⟩). The magnitude of
~̃Q1 was then related to an estimate of the mean trans-
verse momentum in the RP. Integrating over rapidity
with weights wi we obtain event-wise quantities
\begin{align}
\tilde Q_1^2 &= \sum_{i=1}^{n} w_i^2 p_{ti}^2 + \sum_{i\neq j}^{n,n-1} w_i w_j\,\vec p_{ti}\cdot\vec p_{tj} \tag{20}\\
&\simeq n\langle p_t^2\rangle + n(n-1)\langle p_t^2\cos(\phi_\Delta)\rangle \nonumber\\
&\equiv \tilde Q^2_{ref} + \tilde V_1^2. \nonumber
\end{align}
The last line makes the correspondence with the notation
of this paper.
The initial analysis in [19] used Q̃21 − Q̃2ref = Ṽ 21 =
n(n−1)〈p2x〉, assuming that x̂ is contained in the RP and
there are no non-flow correlations. Note that n(n−1) ≡ Σ_{i≠j}^{n,n−1} |w_i w_j| contains weights w_i w_j implicitly. Since
wi ∼ sin[2 θ(yi)] (cf. Fig. 1 – left panel), what is actu-
ally calculated in [19] for the single-particle (no multi-
nucleon clusters) case is the pt-weighted r.m.s. mean
of the spherical harmonic ℜY₂¹(θ, φ) ∝ quadrupole mo-
ment q̃21, thereby connecting rank-1 tensor ~Q1 (with
weights on rapidity) to rank-two sphericity tensor S (cf.
App. C 3). The mean-square quantity calculated is
\begin{align}
\overline{p_x^2} \equiv \frac{\tilde V_1^2}{n(n-1)} \simeq \frac{\overline{\sin^2(2\theta)\,p_t^2\cos^2(\phi-\Psi_r)}}{\overline{\sin^2(2\theta)}}, \tag{21}
\end{align}
a minimally-biased statistical measure as discussed in
App. B, from which we obtain p_x = √(p_x²), estimating the
transverse momentum per particle in the reaction plane.
The second part of the analysis sought to obtain px(y),
the weighted-mean transverse momentum in the RP as a
function of rapidity. It was decided to determine pxi =
pti cos(φi−Ψr) for the ith particle relative to the RP, but
with Ψr estimated by EP angle Ψ1. The initial attempt
was based on
\begin{align}
p'_{xi} \equiv w_i\,\vec p_{ti}\cdot\vec u(\Psi_1) = w_i\,\vec p_{ti}\cdot \frac{\sum_{j} w_j\,\vec p_{tj}}{\left|\sum_{k} w_k\,\vec p_{tk}\right|}. \tag{22}
\end{align}
Summing wi ~pti over all particles in a y bin gives
\begin{align}
\langle\tilde p'_x\rangle &= \frac{\sum_{i,j=1}^{n} w_i w_j\,\vec p_{ti}\cdot\vec p_{tj}}{\sum_l |w_l|\,\left|\sum_k w_k\,\vec p_{tk}\right|} = \frac{n\langle p_t^2\rangle + \tilde V_1^2}{n\,\tilde Q_1} \tag{23}\\
&= \tilde Q_1/n = \sqrt{\frac{1}{n}\langle p_t^2\rangle + \frac{n-1}{n}\langle p_x^2\rangle}, \nonumber
\end{align}
from which we obtain ensemble mean p′_x = \overline{⟨p̃′_x⟩}. That
result can be compared directly with the linear speed in-
ferred from a random walker trajectory, which is ‘infinite’
in the limit of zero time interval (cf. App. A). The first
term in the radicand is said to produce “multiplicity dis-
tortions” in conventional flow terminology. The second
term contains the unbiased quantity.
In contrast to the first part of the analysis the sec-
ond method retains the statistical reference within Q̃1
as part of the result, so that Q̃₁/n ∼ √(⟨p_t²⟩/n) for small
multiplicities and/or flow magnitudes, a false signal com-
parable in magnitude to the true flow signal. The unbi-
ased directed flow px was said to be “distorted” by the
presence of the statistical reference (called unwanted self-
correlations or “autocorrelations”) to the strongly-biased
value p′x dominated by the statistical reference.
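The false signal described above is easy to reproduce (a sketch with unit p_t, no rapidity weights, hypothetical multiplicities): for events with no correlations at all, Q̃₁/n converges to a nonzero value of order √(⟨p_t²⟩/n), while the pair-based quantity Ṽ₁²/n(n−1) is consistent with zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_events = 10, 50000  # small multiplicity, zero true flow, unit p_t

q1_over_n, v1sq = 0.0, 0.0
for _ in range(n_events):
    phi = rng.uniform(0.0, 2.0 * np.pi, n)  # isotropic event
    q1 = np.abs(np.sum(np.exp(1j * phi)))
    q1_over_n += q1 / n                      # biased estimator ~ Q_1/n
    v1sq += (q1**2 - n) / (n * (n - 1))      # unbiased V_1^2 / n(n-1)

q1_over_n /= n_events
v1sq /= n_events

# Mean |Q_1| of a 2D random walk of n unit steps is sqrt(pi*n)/2,
# so Q_1/n stays of order 1/sqrt(n): a large false "flow" signal
# for small n, while the pair estimator vanishes on average.
assert q1_over_n > 0.2
assert abs(v1sq) < 0.002
```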
An attempt was made to remove statistical distortions
arising from self pairs by redefining ~̃Q1 → ~̃Q1i, a vector
complementary to each particle i with that particle omitted from the sum. The estimator of Ψr for particle i is
then Ψ1i in $\vec{\tilde Q}_{1i} \equiv \sum_{j\neq i} w_j\,\vec p_{tj} = \tilde Q_{1i}\,\vec u(\Psi_{1i})$ and
\begin{align}
p''_{xi} = w_i\,\vec p_{ti}\cdot\vec u(\Psi_{1i}) = w_i\,\vec p_{ti}\cdot \frac{\sum_{j\neq i} w_j\,\vec p_{tj}}{\left|\sum_{k\neq i} w_k\,\vec p_{tk}\right|}. \tag{24}
\end{align}
FIG. 1: Left panel: Comparison of directed flow data from
Fig. 3(a) of the event-plane analysis and the ℜY₂¹ ∝
sin(2θ[y_lab]) spherical harmonic, with amplitude 95 MeV/c
obtained from V₁². Weights in the form w(y_lab) are denoted
by dashed lines. The correspondence with sin(2θ[y_lab]) (solid
curve) is apparent. Right panel: The EP resolution obtained
from [22] (dashed curve) and from ratio √((n−1)V₁²/nQ′₁²)
defined in this paper (solid and dotted curves for several n
values).
Summing over i within a rapidity bin one has
\begin{align}
\langle\tilde p''_x\rangle &= \frac{1}{\sum_l |w_l|}\sum_{i=1}^{n} \frac{\sum_{j\neq i} w_i w_j\,\vec p_{ti}\cdot\vec p_{tj}}{\left|\sum_{k\neq i} w_k\,\vec p_{tk}\right|} \tag{25}\\
&\simeq \frac{(n-1)\langle p_x^2\rangle}{\tilde Q'_1} = \frac{\sqrt{n-1}\,\langle p_x^2\rangle}{\sqrt{\langle p_t^2\rangle + (n-2)\langle p_x^2\rangle}} \nonumber\\
&\simeq \sqrt{\langle p_x^2\rangle}\;\langle\cos(\Psi'_m-\Psi_r)\rangle, \nonumber
\end{align}
with $\tilde Q'_1 \equiv \sqrt{(n-1)\langle p_t^2\rangle + (n-1)(n-2)\langle p_t^2\cos(\phi_\Delta)\rangle}$.
Since V 21 = n(n− 1)〈p2x〉 = n(n− 1)〈p2t cos(φ∆)〉 one
sees that the division by Q̃′1 is incorrect, even though
it seems to follow the chain of argument based on RP
estimation and EP resolution with correction. The
correct (minimally-biased) quantity is p_x ≡ √(p_x²) =
√(Ṽ₁²/n(n−1)). The new EP definition removes the reference term from the numerator, but Q̃′₁ in the denominator retains the statistical reference in p″_x. There is
the additional issue that √(x̄²) ≠ x̄. Two different mean
values are represented by ⟨p_x⟩ and p_x = √(⟨p_x²⟩). The difference can be large for small event multiplicities.
The remaining bias was attributed to the EP resolu-
tion. The resolution correction factor derived from the
initial subevent analysis (cf. App. D) was applied to
〈p′′x〉 to further reduce bias. In Fig. 1 (right panel) we
compare the EP resolution correction from [22] (dashed
curve) with factor √((n−1)Ṽ₁²/nQ̃′₁²) required to convert
⟨p″_x⟩ from Eq. (25) to p_x from Eq. (21). The agreement
is very good. Eq. (21) is the least biased and most direct
way to obtain p_x ≡ √(p_x²), both globally over the detec-
tor acceptance and locally in rapidity bins, without EP
determination or resolution corrections.
In Fig. 1 (left panel) we show the data (points) for
px(ylab) from the EP-corrected analysis and the solid
curve p_x sin[2θ(y_lab)] ∝ q₂₁ Y₂¹(θ(y), 0), where p_x =
√(V₁²/n(n−1)) = 95 MeV/c. The agreement is good,
and the similarity of sin[2θ(ylab)] (solid curve) to weights
w(ylab) (dashed lines) noted above is apparent. Loca-
tion of the sin[2θ(ylab)] extrema near the kinematic limits
(vertical lines) is an accident of the collision energy and
nucleon mass. These results are for E_cm = 1.32 GeV ∼
√s_NN/2. In the notation of this paper V₁² = 4.7 (GeV/c)²,
√(V₁²/n(n−1)) = 0.095 GeV/c = w p_x/a, and
Q_x ≡ n̄√(V₁²/n(n−1)) = 2.17 GeV/c ≃ √V₁² (not Q₁).
By direct and indirect means (directivity and RP esti-
mation) quadrupole moment q21 ∝ V1 was measured.
B. Transverse sphericity at higher energies
The arguments and techniques in [20] suggest a smooth
transition from relativistic Bevalac and AGS energies
(collective nucleon and resonance flow) to intermediate
SPS and ultra-relativistic RHIC energies (possible trans-
verse flow, possibly QCD matter, collectivity manifested
by correlations of produced hadrons, mainly pions). For
all heavy ion collisions thermalization is a key issue.
Clear evidence of thermalization is sought, and collective
flow is expected to provide that evidence.
Two limiting cases are presented in [20] for SPS flow
measurements: 1) linear N-N superposition with no col-
lective behavior (no flow); 2) thermal equilibrium – col-
lective pressure in the reaction plane – fluid dynamics
leading to “elliptic” flow. In a hydro scenario the initial
space eccentricity transitions to momentum eccentricity
through thermalization and early pressure. The paper
considers flow measurement techniques appropriate for
the SPS and RHIC, and in particular newly defines trans-
verse sphericity St.
According to [20] the 3D sphericity tensor introduced
at lower energies [15] can be simplified in ultra-relativistic
heavy ion collisions to a 2D transverse sphericity tensor.
Sphericity is transformed to 2D by ~p → ~pt, omitting the
momentum ẑ component near mid-rapidity. Transverse
sphericity (in dyadic notation) is
\begin{align}
2\tilde S_t &\equiv 2\sum_{i=1}^{n} \vec p_{ti}\,\vec p_{ti} = \sum_{i=1}^{n} p_{ti}^2\,\{I + C(\phi_i)\} \tag{26}\\
&\equiv n\langle p_t^2\rangle\,\{I + \tilde\alpha_1\,C(\Psi_2)\}, \nonumber
\end{align}
defining α̃1 and Ψ2 in the tensor context, with
\begin{align}
C(\phi) \equiv \begin{pmatrix} \cos(2\phi) & \sin(2\phi)\\ \sin(2\phi) & -\cos(2\phi) \end{pmatrix}. \tag{27}
\end{align}
This α̃ definition corresponds to Eq. (3.1) of [20].
We next form the contraction of S̃t with itself
\begin{align}
2\tilde S_t : \tilde S_t &= 2\sum_{i,j=1}^{n} (\vec p_{ti}\cdot\vec p_{tj})^2 \tag{28}\\
&= 2\sum_{i=1}^{n} p_{ti}^4 + 2\sum_{i\neq j}^{n,n-1} p_{ti}^2 p_{tj}^2 \cos^2(\phi_i-\phi_j) \nonumber\\
&= 2n\langle p_t^4\rangle + 2n(n-1)\langle p_t^4\cos^2(\phi_\Delta)\rangle \nonumber\\
&= n(n+1)\langle p_t^4\rangle + n(n-1)\langle p_t^4\cos(2\phi_\Delta)\rangle, \nonumber
\end{align}
using the dyadic contraction notation A : B ≡ AabBab,
with the usual summation convention. That self-
contraction of a rank-2 tensor can be compared to the
more familiar self-contraction of a rank-1 tensor ~̃Q2 · ~̃Q2 =
Q̃22 = n〈p2t 〉 + n(n − 1)〈p2t cos(2φ∆)〉. The quantity
2[S̃t : S̃t]ref = n(n + 1)〈p4t 〉 is the (uncorrelated) refer-
ence for the rank-2 contraction, whereas Q̃²_{2,ref} = n⟨p_t²⟩
is the reference for the rank-1 contraction. Subtracting
the rank-2 reference contraction gives
\begin{align}
2\tilde S_t : \tilde S_t - 2[\tilde S_t : \tilde S_t]_{ref} &= n(n-1)\langle p_t^4\cos(2\phi_\Delta)\rangle \tag{29}\\
&\simeq \langle p_t^2\rangle\,\tilde V_2^2,
\end{align}
which relates transverse sphericity to two-particle corre-
lations in the form Ṽ₂² = Q̃₂² − Q̃²_{2,ref}, thus establishing
the exact correspondence between S̃t and ~̃Q2.
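The contraction identity above can be checked directly for one simulated event (an illustrative numerical sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
phi = rng.uniform(0.0, 2.0 * np.pi, n)
pt = rng.uniform(0.2, 2.0, n)
p = np.stack([pt * np.cos(phi), pt * np.sin(phi)], axis=1)  # transverse momenta

st = p.T @ p                              # sphericity tensor S_t = sum_i p_i p_i
contraction = 2.0 * np.sum(st * st)       # 2 S_t : S_t
pair_form = 2.0 * np.sum((p @ p.T) ** 2)  # 2 sum_{i,j} (p_i . p_j)^2

# Closed form: sum_{i,j} pt_i^2 pt_j^2 [1 + cos(2[phi_i - phi_j])]
dphi = phi[:, None] - phi[None, :]
pt4 = np.outer(pt**2, pt**2)
closed = np.sum(pt4) + np.sum(pt4 * np.cos(2.0 * dphi))

assert np.isclose(contraction, pair_form)
assert np.isclose(contraction, closed)
```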
From the definition of α̃1 in Eq. (26) above and Eq. (3.1)
of [20] we also have
\begin{align}
2\,\tilde S_t : \tilde S_t = n^2\langle p_t^2\rangle^2 (1 + \tilde\alpha_1^2), \tag{30}
\end{align}
which implies
\begin{align}
n^2\langle p_t^2\rangle^2\tilde\alpha_1^2 &= n^2\tilde\sigma^2_{p_t^2} + n\langle p_t^4\rangle \tag{31}\\
&\quad + n(n-1)\langle p_t^4\cos(2\phi_\Delta)\rangle \nonumber\\
&\simeq n^2\tilde\sigma^2_{p_t^2} + \langle p_t^2\rangle\,\tilde Q_2^2. \tag{32}
\end{align}
The first relation is exact, given the definition of α̃1,
but produces a complex statistical object containing the
event-wise variance of p2t in its numerator and random
variable n2 in its denominator. For 1/n → 0 it is true
that α̃1 → 〈cos(2[φ−Ψr])〉 (since Ψ2 → Ψr also), but α̃1
is a strongly biased statistic for finite n.
The definition
\begin{align}
\tilde\alpha_2 = \frac{\langle p_x^2 - p_y^2\rangle}{\langle p_x^2 + p_y^2\rangle} \tag{33}
\end{align}
from Eq. (2.5) of [20] seems to imply α̃2 = 〈p2t cos(2[φ−
Ψr])〉/〈p2t 〉 → 〈cos(2[φ−Ψr])〉, assuming that x̂ lies in the
RP. However, the latter relation fails for finite multiplic-
ity [the effect of the statistical reference or self-pair term
n in Eq. (31)] because each of event-wise 〈p2x〉 and 〈p2y〉 is
a random variable, and their independent random varia-
tions do not cancel in the numerator. The exact relation
is the first line of
\begin{align}
n^2\langle p_t^2\rangle^2\tilde\alpha_2^2 &= n^2\langle p_t^2\cos(2[\phi-\Psi_2])\rangle^2 \tag{34}\\
&\simeq n^2\langle p_t^2\rangle\langle p_t\cos(2[\phi-\Psi_2])\rangle^2 \nonumber\\
&= \langle p_t^2\rangle\,\tilde Q_2^2, \nonumber
\end{align}
i.e., α̃2 ≃ Q̃2/Q̃0 ≠ Ṽ2/Q̃0.
The second line is an approximation which indicates that
α̃2 is more directly related to Q̃2 than is α̃1. But Q̃₂² is a
poor substitute for V₂², which represents true two-particle
azimuth correlations in a minimally-biased way by incor-
porating a proper statistical reference. The effect of the
reference contribution is termed a ‘distortion’ in [20].
C. Fourier series I
Application of Fourier series to azimuth particle dis-
tributions was introduced in [21]. Fourier analysis is de-
scribed as model independent, providing variables which
are “easy to work with and have clear physical interpreta-
tions.” Sinusoids or harmonics are associated with trans-
verse collective flow, the model-dependent language fol-
lowing [20] closely. To facilitate comparisons we convert
notation in [21] to that used in this paper: r(φ) → ρ(φ),
(xm, ym) → ~Qm, ψm → Ψm, vm → Qm and ṽm → Vm.
According to the proposed method density ρ̃(φ) repre-
sents, within some (pt, η) acceptance, an event-wise par-
ticle distribution on azimuth φ including weights ri =
1, pti or Eti. The FT in terms of azimuth vectors is
\begin{align}
\tilde\rho(\phi) &= \sum_{i=1}^{n} r_i\,\delta(\phi-\phi_i) \tag{35}\\
&= \frac{1}{2\pi}\left[\tilde Q_0 + 2\sum_{m=1}^{\infty}\vec{\tilde Q}_m\cdot\vec u(m\phi)\right] \nonumber\\
&= \frac{1}{2\pi}\left[\tilde Q_0 + 2\sum_{m=1}^{\infty}\tilde Q_m\cos(m[\phi-\Psi_m])\right], \nonumber
\end{align}
and the RT is
\begin{align}
\vec{\tilde Q}_m = \sum_{i=1}^{n} r_i\,\vec u(m\phi_i) \equiv \tilde Q_m\,\vec u(m\Psi_m), \tag{36}
\end{align}
forming a conventional Fourier transform pair [cf.
Eqs. (3) and (4)]. Scalar amplitude Q̃_m = Σ_i r_i ~u(mφ_i) ·
~u(mΨ_m) = n⟨r cos(m[φ − Ψ_m])⟩ is the proposed flow-
analysis quantity. Q̃m is said to measure the flow mag-
nitude, and Ψm estimates reaction-plane angle Ψr. It
is proposed that Q̃m(η) evaluated within bins on η may
characterize complex “event shapes” [densities on (η, φ)].
As with directivity and sphericity, multiplicity fluctu-
ations are seen as a major obstacle to flow analysis with
Fourier series. Finite multiplicity is described as a source
of ‘bias’ which must be suppressed. It is stated that
(Ṽm,Ψr) are the “parameter[s] relevant to the magnitude
of flow,” whereas the observed (Q̃m,Ψm) are biased flow
estimators. In the limit 1/n → 0 the two cases would
be identical. A requirement is therefore placed on min-
imum event-wise multiplicity in a “rapidity slice.” To
solve the finite-number problem the paper proposes to
use the event frequency distribution on Q̃2m from event-
wise Fourier analysis to measure flow.
If correlations are zero then
\begin{align}
\tilde Q_m^2 \rightarrow \tilde Q_{ref}^2 = \sum_{i=1}^{n} r_i^2 = n\langle r^2\rangle \equiv \tilde\sigma^2, \tag{37}
\end{align}
with ⟨r²⟩ ≡ σ₀². By fitting the frequency distribution on
Q̃2m with a model function it is proposed to obtain Ṽm as
the unbiased flow estimator. The fitting procedure is said
to require sufficiently large event multiplicities to obtain
Ṽm unambiguously.
The distribution on Q̃2m is derived as follows (cf.
Fig. 2 – left panel). The magnitude of statistical ref-
erence ~̃Qref (a random walker) has probability distri-
bution ∝ exp(−Q̃2ref/2Q2ref), with Q2ref = n〈r2〉. But
~̃Qref = ~̃Qm − ~̃Vm, therefore (cf. Fig. 2 – left panel)
\begin{align}
\tilde Q_{ref}^2 = \tilde Q_m^2 + \tilde V_m^2 - 2\tilde Q_m\tilde V_m\cos(m[\Psi_m-\Psi_r]), \tag{38}
\end{align}
\begin{align}
\exp(-\tilde Q_{ref}^2/2Q_{ref}^2) \rightarrow \rho(\tilde Q_m,\Psi_m;\tilde V_m,\Psi_r). \tag{39}
\end{align}
When integrated over cos(m[Ψm −Ψr]) there results the
required probability distribution on Q̃2m, with fit param-
eter Vm. The distribution on Q̃_m² is said to show a ‘non-
statistical’ shape change from which Vm can be inferred
by a model fit “free from uncertainties in event-wise de-
termination of the reaction plane.” It is also proposed
to use ρ(Q̃m,Ψm; Ṽm,Ψr) to determine the EP resolu-
tion cos(Ψm − Ψr) by integrating over Q̃2m and using the
resulting projection on cos(Ψm − Ψr) to determine the
ensemble mean (Fig. 3 of [21]).
While one could extract ensemble-average V 2m at some
level of accuracy by fitting the frequency distribution on
Q̃2m with a model function, we ask why go to that trou-
ble when V 2m is easily obtained as a variance difference?
Instead of Eq. (38) we simply write
\begin{align}
\tilde Q_m^2 = \tilde Q_{ref}^2 + \tilde V_m^2, \tag{40}
\end{align}
where the cross term is zero on average and Q̃2ref = n〈r2〉
represents the power-spectrum white noise, which is the
same for all m (i.e., ‘white’). If the vector mean values
are zero that is a relation among variances. Ensemble
mean V_m² = Q_m² − Q_ref² is therefore simply determined.
For the EP resolution we factor Ṽ_m²:
\begin{align}
\tilde V_m^2 &= n(n-1)\langle r^2\cos^2(m[\phi-\Psi_r])\rangle \tag{41}\\
&= n(n-1)\langle r^2\cos^2(m[\phi-\Psi_m])\rangle\,\overline{\cos^2(m[\Psi_m-\Psi_r])}.
\end{align}
We use Q̃m = n〈r cos(m[φ − Ψm])〉 and the assumption
that random variable Ψm − Ψr is uncorrelated with φ−
Ψm to obtain
\begin{align}
\overline{\cos^2(m[\Psi_m-\Psi_r])} = \frac{n\,\tilde V_m^2}{(n-1)\,\tilde Q_m^2}, \tag{42}
\end{align}
which defines the EP resolution of the full n-particle
event in terms of power-spectrum elements (cf. App. D).
The square root of that expression is plotted in Fig. 2
(right panel) as the solid curves for several values of n̄.
The solid and dotted curves are nearly identical to those
in Fig. 1.
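Equation (42) can be exercised in a toy simulation (hypothetical parameters, unit weights, acceptance-rejection sampling of the flow modulation): the EP resolution measured directly from a known reaction-plane angle agrees well with the square root of Eq. (42).

```python
import numpy as np

rng = np.random.default_rng(5)
v2_true, n, n_events, m = 0.25, 20, 20000, 2  # hypothetical parameters

cos_sum, q2_sum = 0.0, 0.0
for _ in range(n_events):
    psi_r = rng.uniform(0.0, 2.0 * np.pi)  # known reaction-plane angle
    # acceptance-rejection sampling of p(phi) ~ 1 + 2 v2 cos(2[phi - psi_r])
    phi = np.empty(0)
    while phi.size < n:
        cand = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
        accept = rng.uniform(0.0, 1.0 + 2.0 * v2_true, cand.size) \
            < 1.0 + 2.0 * v2_true * np.cos(m * (cand - psi_r))
        phi = np.concatenate([phi, cand[accept]])
    phi = phi[:n]
    qvec = np.sum(np.exp(1j * m * phi))
    psi_m = np.angle(qvec) / m              # event-plane angle estimate
    cos_sum += np.cos(m * (psi_m - psi_r))  # direct EP resolution
    q2_sum += np.abs(qvec) ** 2

res_direct = cos_sum / n_events
q2_mean = q2_sum / n_events
v2_m = q2_mean - n                          # V_m^2 = Q_m^2 - n (unit weights)
res_formula = np.sqrt(n * v2_m / ((n - 1) * q2_mean))  # sqrt of Eq. (42)

assert 0.5 < res_direct < 0.95 and 0.5 < res_formula < 0.95
assert abs(res_direct - res_formula) < 0.05
```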
FIG. 2: Left panel: Distribution of event-wise ~Q̃_m
determined by the gaussian-distributed random walker
~Q̃_ref and possible correlation component ~Ṽ_m. Right
panel: Reaction-plane resolution estimator ⟨cos(mδΨ_mr)⟩,
with δΨ_mr = Ψ_m − Ψ_r, determined from fits to a distribution
on Q̃_m² as in the left panel (dashed curve), and from Eq. (42)
for several values of n (solid and dotted curves).
As in other flow papers there is much emphasis on
insuring adequate multiplicities to reduce bias to a manageable level, because an easily-determined statistical ref-
erence is not properly subtracted to reveal the contribu-
tion from true two-particle correlations in isolation. For
√(2n) v_m > 1 in Fig. 2 (right panel) the EP is meaningful;
bias of event-wise quantities relative to the EP is manageable. For √(2n) v_m < 1/2 the EP is poorly defined,
and ensemble-averaged two-particle correlations are the
only reliable measure of azimuth correlations. In either
case EP estimation is only justified when a non-flow phe-
nomenon is to be studied relative to the reaction plane.
D. Fourier series II
A more elaborate review of flow analysis methods
based on Fourier series is presented in [22]. The ap-
proach is said to be general. The event plane is ob-
tained for each event. The event-wise Fourier ampli-
tude(s) q̃m ≡ Q̃m/Q̃0 relative to the EP are corrected for
the EP resolution as obtained from subevents. We mod-
ify the notation of the paper to vobsm → q̃m and wi → ri to
maintain consistency within this paper. We distinguish
between the unbiased ṽm ≡ Ṽm/Ṽ0 and the biased q̃m.
According to [22], in the 1/n→ 0 limit (Q̃m → Vm, no
tildes, no random variables) the dependence of the single-
particle density on azimuth angle φ integrated over some
(pt, y) acceptance can be expressed as a Fourier series of
the form
\begin{align}
\rho(\phi) = \frac{V_0}{2\pi}\left\{1 + 2\sum_{m=1}^{\infty} v_m\cos[m(\phi-\Psi_r)]\right\}, \tag{43}
\end{align}
with reaction-plane angle Ψr. As we have seen, the factor
2 comes from the symmetry on index m for a real-number
density, not an arbitrary choice as suggested in [22]. In
this definition V0/2π has been factored from the Fourier
series in [21]. The Fourier “coefficients” vm in this form
(actually coefficient ratios) are not easily related to the
power spectrum. In the 1/n → 0 limit the coefficients
are vm = 〈cos(m[φ−Ψr])〉.
In the analysis of finite-multiplicity events, reaction-
plane angle Ψr defined by the collision (beam) axis and
the collision impact parameter is estimated by event
plane (EP) angle Ψm, with Ψm derived from event-wise
azimuth vector ~̃Qm (conventional flow vector)
\begin{align}
\vec{\tilde Q}_m = \sum_{i=1}^{n} r_i\,\vec u(m\phi_i) \equiv \tilde Q_m\,\vec u(m\Psi_m). \tag{44}
\end{align}
The finite-multiplicity event-wise FT is
\begin{align}
\tilde\rho(\phi) = \frac{\tilde Q_0}{2\pi}\left\{1 + 2\sum_{m=1}^{\infty} \tilde q_m\cos[m(\phi-\Psi_m)]\right\}, \tag{45}
\end{align}
with q̃m = 〈cos(m[φ−Ψm])〉; e.g., q̃2 ≃ α̃2 (cf. Eq. (34)).
According to the conventional description the EP an-
gle is biased by the presence of self pairs, unfortunately
termed the “autocorrelation effect” or simply “autocor-
relation” in conventional flow analysis [19, 22], whereas
autocorrelations and cross-correlations are distributions
on difference variables used for decades in statistical anal-
ysis to measure correlation structure on time and space.
As in [19], to eliminate “autocorrelations” EP angle Ψmi
is estimated for each particle i from complementary flow
vector ~Qmi =
j 6=i rj~u(2φj) = Qmi~u(mΨmi), a form of
subevent analysis with one particle vs n− 1 particles (cf.
App. D).
In the conventional description the event-plane res-
olution results from fluctuations δΨr ≡ Ψm − Ψr of
the event-plane angle Ψm (or Ψmi) relative to the true
reaction-plane angle Ψr (e.g., due to finite particle num-
ber). The EP resolution is said to reduce the observed
q̃m relative to the true value ṽm:
\begin{align}
\tilde q_m &= \sqrt{\frac{n}{n-1}}\,\frac{\tilde V_m}{\tilde Q_m}\;\tilde v_m \tag{46}\\
&= \langle\cos(m[\Psi_r-\Psi_m])\rangle\cdot\tilde v_m.
\end{align}
The EP resolution (first factor, second line) is obtained in
a conventional flow analysis in two ways: the frequency
distribution on Ψm − Ψr discussed in Sec. III C and the
subevent method discussed in App. D. A parameteri-
zation from the frequency-distribution method reported
in [22] is plotted as the dashed curve in Fig. 2 (right
panel). There is good agreement with the simple expression √(n/(n−1)) V_m/Q_m obtained in Eq. (42), which also
follows from Eq. (46).
Two methods are described for obtaining vm without
an event-plane estimate, with the proviso that large event
multiplicities in the acceptance are required. The first, in
terms of the conventional flow vector, is expressed (with
ri → 1) as (Eq. (26) of [22])
\begin{align}
Q_m^2 = \bar n + \bar n^2\,\bar v_m^2. \tag{47}
\end{align}
That expression is contrasted with the exact event-wise
treatment, where for each event we can write
\begin{align}
\tilde Q_m^2 &= n + \sum_{i\neq j}^{n,n-1}\cos[m(\phi_i-\phi_j)] \tag{48}\\
&= n + n(n-1)\langle\cos(m\phi_\Delta)\rangle \nonumber\\
&\equiv n + n(n-1)\tilde v_m^2 \nonumber\\
\rightarrow\; Q_m^2 &= \bar n + \overline{n(n-1)\tilde v_m^2} = \bar n + V_m^2. \nonumber
\end{align}
Note the similarity with fluctuations measured by number variance σ_n² = n̄ + ∆σ_n², where the second term on the
RHS is an integral over two-particle number correlations
and the first is the uncorrelated Poisson reference (again,
the self-pair term in the autocorrelation).
There are substantial differences between the two Q2m
formulations above, especially for smaller multiplicities.
Eq. (48) is unbiased for all n and provides a simple way
to obtain V 2m = n(n− 1)ṽ2m = Q2m− n̄. The conventional
method uses a complex fit to the frequency distribution
on Q̃2m to estimate Vm as in [21]. Why do that when such
a simple alternative is available?
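The variance-difference prescription of Eq. (48) can be demonstrated in the same spirit (a sketch with hypothetical parameters, unit weights): with a known input v₂ and small event multiplicity, Q²_m − n̄ recovers n(n−1)ṽ²_m with no event-plane estimate and no model fit.

```python
import numpy as np

rng = np.random.default_rng(6)
v2_true, n, n_events, m = 0.1, 8, 30000, 2  # hypothetical parameters

q2_sum = 0.0
for _ in range(n_events):
    psi_r = rng.uniform(0.0, 2.0 * np.pi)
    # acceptance-rejection sampling of p(phi) ~ 1 + 2 v2 cos(2[phi - psi_r])
    phi = np.empty(0)
    while phi.size < n:
        cand = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
        accept = rng.uniform(0.0, 1.0 + 2.0 * v2_true, cand.size) \
            < 1.0 + 2.0 * v2_true * np.cos(m * (cand - psi_r))
        phi = np.concatenate([phi, cand[accept]])
    q2_sum += np.abs(np.sum(np.exp(1j * m * phi[:n]))) ** 2

q2_mean = q2_sum / n_events
v_m2 = (q2_mean - n) / (n * (n - 1))  # tilde-v_m^2 via Eq. (48)

# Recovers the input v2^2 even at multiplicity n = 8.
assert abs(v_m2 - v2_true**2) < 0.004
```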
E. v2 centrality dependence
In [23] the expected trend of v2 with A-A centrality for
different collision systems is discussed in a hydro context.
It is stated that the v2 centrality trend should reveal the
degree of equilibration in A-A collisions. The centrality
dependence of v2/ǫ should be sensitive to the “physics of
the collision”—the nature of the constituents (hadrons
or partons) and their degree of thermalization or collec-
tivity. “It is understood that such a state requires (at
least local) thermalization of the system brought about
by many rescatterings per particle during the system
evolution...v2 is an indicator of the degree of equilibra-
tion.”
Thermalization is related to the number of rescatter-
ings, which also strongly affects elliptic flow according to
this hydro interpretation. In the full hydro limit, corre-
sponding to full thermalization where the mean free path
λ is much smaller than the flowing system, relation v2 ∝ ǫ
is predicted, with ǫ the space eccentricity of the initial
A-A overlap region [20]. Conversely, in the low-density
limit (LDL), where λ is comparable to or larger than
the system size, a different model predicts the relation
v2 ∝ ǫA1/3/λ, where A1/3/λ estimates the mean num-
ber of collisions per “particle” during system evolution
to kinetic decoupling [24]. In the LDL case v₂ ∝ ε/S,
where S = πR_xR_y is the (weighted) cross-section area of
the collision and ε = (R_y² − R_x²)/(R_y² + R_x²) is the spatial eccentricity.
Those trends are further discussed in App. E.
According to the combined scenario, comparison of
the centrality dependence of v2 at energies from AGS
to RHIC may reveal a transition from hadronic to ther-
malized partonic matter. The key expectation is that at
some combination(s) of energy and centrality v2/ǫ tran-
sitions from an LDL trend (monotonic increase) to hydro
(saturation), indicating (partonic) equilibration.
However, it is important to note two things: 1) That
overall description is contingent on the strict hydro sce-
nario. If the quadrupole component of azimuth correla-
tions arises from some other mechanism then the descrip-
tions in [20] and [24] are invalid, and v2 does not reveal
the degree of thermalization. 2) v2 is a model-dependent
and statistically-biased quantity motivated by the hydro
scenario itself. The model-independent measure of az-
imuth quadrupole structure is V 22 /n̄ ≡ n̄ v22 (defining an
unbiased v2). It is important then to reconsider the az-
imuth quadrupole centrality and energy trends revealed
by that measure to determine whether a hydro interpre-
tation is a) required by or even b) permitted by data.
IV. IS THE EVENT PLANE NECESSARY?
A key element of conventional flow analysis is estima-
tion of the reaction plane and the resolution of the esti-
mate. We stated above that determination of the event
plane is irrelevant if averaged quantities are extracted
from an ensemble of event-wise estimates. The reaction-
plane angle is relevant only for study of nonflow (minijet)
structure relative to the reaction plane on φΣ, the sum
(pair mean azimuth) axis of (φ1, φ2). In contrast to con-
cerns about low multiplicities in conventional flow analy-
sis, proper autocorrelation techniques accurately reveal
“flow” correlations (sinusoids) and any other azimuth
correlations, even in p-p collisions and even within small
kinematic bins. In this section we examine the necessity
of the event plane in more detail.
The reaction plane (RP), nominally defined by the
beam axis and the impact parameter between centers of
colliding nuclei, is determined statistically in each event
by the distribution of participants. The RP is estimated
by the event plane (EP), defined statistically in each
event by the azimuth distribution of final-state particles
in some acceptance. We now consider how to extract
vm relative to a reaction plane estimated by an event
plane in each event. Several different flow measures are
implicitly defined in conventional flow analysis:
\begin{align}
v_m &\equiv \cos(m[\phi-\Psi_r]) &&\text{ideal case, } 1/n\to 0 \tag{49}\\
\tilde v_m^2 &\equiv \langle\cos^2(m[\phi-\Psi_r])\rangle &&\text{unbiased estimate}\\
\tilde q_m &\equiv \langle\cos(m[\phi-\Psi_m])\rangle &&\text{self-pair bias}\\
\tilde v'_m &\equiv \langle\cos(m[\phi_i-\Psi_{mi}])\rangle &&\text{reduced bias}
\end{align}
ṽ′m is the event-wise result of a conventional flow analysis.
The ensemble average ṽ′m must be corrected for the “EP
resolution” which we now determine.
The basic event-wise quantities, starting with an inte-
gral over two-particle azimuth space, are
\begin{align}
\tilde V_m^2 &\equiv \sum_{i\neq j}^{n,n-1}\vec u(m\phi_i)\cdot\vec u(m\phi_j) \tag{50}\\
&= n(n-1)\langle\cos(m\phi_\Delta)\rangle \nonumber\\
&= n(n-1)\langle\cos^2(m[\phi-\Psi_r])\rangle \nonumber\\
&\equiv n(n-1)\tilde v_m^2. \nonumber
\end{align}
For the limiting case of subevents A and B with A a single
particle the subevent azimuth vector complementary to
particle i is
~̃Qmi ≡ Σ_{j≠i} ~u(mφj) = Q̃mi ~u(mΨmi).   (51)
We make the following rearrangement

Ṽ²m = Σi ~u(mφi) · Σ_{j≠i} ~u(mφj)   (52)
    = Σi Q̃mi cos(m[φi − Ψmi])
    ≡ n Q̃′m 〈cos(m[φ − Ψ′m])〉,

so that

n(n−1) ṽ²m = n Q̃′m ṽ′m,  i.e.,  ṽm = ṽ′m Q̃′m/[(n−1) ṽm],   (53)

and, since Q̃′m ≃ √(n−1 + (n−1)(n−2)〈cos(mφ∆)〉), we identify the EP resolution as

〈cos(m[Ψ′m − Ψr])〉 = ṽ′m/ṽm = (n−1) ṽm/Q̃′m ≃ √((n−1) ṽ²m / [1 + (n−2) ṽ²m]),   (54)
where the primes refer to a subevent with n−1 particles.
That expression, for full events with multiplicity n, is
plotted in Fig. 2 (right panel) for several choices of n. An
n-independent universal curve on V²m/n̄ is multiplied by n-dependent factor √(n/(n−1)), where n is the number
of samples in the event or subevent. The dashed curve is
the parameterization from [22].
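The step from Eq. (50) to Eq. (52) rests on an exact event-wise identity: Ṽ²m equals the sum over particles of Q̃mi cos(m[φi − Ψmi]), with particle i removed from its own complementary subevent. A short numerical check (ours; only the definitions above are assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 2, 40
phis = rng.uniform(0.0, 2.0 * np.pi, n)
Q = np.sum(np.exp(1j * m * phis))

# left side: pair sum V~^2_m = |Q|^2 - n (self pairs removed)
V2 = abs(Q) ** 2 - n

# right side: sum over i of Q~_mi * cos(m(phi_i - Psi_mi)), where the
# complementary subevent vector excludes particle i, as in Eq. (51)
rhs = 0.0
for i in range(n):
    Qi = Q - np.exp(1j * m * phis[i])        # remove particle i
    Qmag, Psi = abs(Qi), np.angle(Qi) / m    # Q~_mi and its EP angle
    rhs += Qmag * np.cos(m * (phis[i] - Psi))

assert np.isclose(V2, rhs)   # Eq. (52) holds exactly, event by event
```

The identity holds for any angles, flow or no flow; only the ensemble averaging that follows introduces the statistical interpretation.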
The right panel indicates that for large n ṽ²m single-particle reaction-plane estimates can provide a “flow” measurement with manageable bias. For small n ṽ²m the EP resolution averaged over many events is itself a “flow” measurement, even though the reaction plane is inaccessible in any one event. Ṽ²m is determined from the same
underlying two-particle correlations by other means—the
only difference is how pairs are grouped across events. In
App. D the EP resolution is determined for the case of
equal subevents A and B.
From this exercise we conclude that event-plane deter-
mination is irrelevant for the measurement of cylindrical
multipole moments (“flows”). Following a sequence of
analysis steps in the conventional approach based on de-
termination of and correction for the EP estimate, the
event plane cancels out of the flow measurement. What
results from the conventional method are approximations to signal components of power-spectrum elements which can be determined directly in the form V²m/n̄ = n̄ v²m, obtained with the autocorrelation method, which defines
an unbiased version of vm. Event-plane determination
can be useful for study of other event-wise phenomena in
relation to azimuth multipoles.
V. 2D (JOINT) AUTOCORRELATIONS
We now return to the more general problem of angular
correlations on (η1, η2, φ1, φ2). We consider the analysis
of azimuth correlations based on autocorrelations, power
spectra and cylindrical multipoles without respect to an
event plane in the context of the general Fourier trans-
form algebra presented in Sec. II. We seek a comprehen-
sive method which treats η and φ equivalently.
In conventional flow analysis there are two concerns
beyond the measurement of flow in a fixed angular accep-
tance: a) study flow phenomena in multiple narrow ra-
pidity bins to characterize the overall “three-dimensional
event shape,” analogous to the sphericity ellipsoid but
admitting of more complex shapes over some rapidity in-
terval, and b) remove nonflow contributions to flow mea-
surements as a systematic error. Maintaining adequate
bin multiplicities to avoid bias is strongly emphasized in
connection with a). The contrast between such individ-
ual Fourier decompositions on single-particle azimuth in
single-particle rapidity bins and a comprehensive analy-
sis in terms of two-particle joint angular autocorrelations
is the subject of this section.
A. Stationarity condition
In Fig. 3 we show two-particle pair-density ratios r̂ ≡
ρ/ρref on (η1, η2) (left panel) and (φ1, φ2) (right panel)
for mid-central 130 GeV Au-Au collisions [12]. The hat
on r̂ indicates that the number of mixed pairs in ρref
has been normalized to the number of sibling pairs in
ρ. In each case we observe approximate invariance along
sum axes ηΣ = η1 + η2 and φΣ = φ1 + φ2. In time-series
analysis the equivalent invariance of correlation structure
on the mean time is referred to as stationarity, implying
that averaging pair densities along the sum axes loses no
information. The resulting averages are autocorrelation
distributions on difference axes η∆ and φ∆.
FIG. 3: Normalized like-sign pair-number ratios r̂ = ρ/ρref
from central Au-Au collisions at 130 GeV for (η1, η2) (left
panel) and (φ1, φ2) (right panel) showing stationarity—
approximate invariance along sum diagonal x1 + x2.
In Fig. 3 (right panel) one can clearly see the cos(2φ∆)
structure conventionally associated with elliptic flow
(quadrupole component). However, there are other con-
tributions to the angular correlations which should be
distinguished from multipole components, accomplished
accurately by combining the two angular correlations into
one joint angular autocorrelation.
B. Joint autocorrelation definition
With the autocorrelation technique the dimensional-
ity of pair density ρ(η1, η2, φ1, φ2) can be reduced from
4D to 2D without information loss provided the dis-
tribution exhibits stationarity. Expressing pair density ρ(η1, η2, φ1, φ2) → ρ(ηΣ, η∆, φΣ, φ∆) in differential form d⁴n/dx⁴ we define the joint autocorrelation on (η∆, φ∆)

ρA(η∆, φ∆) ≡ (d²/dη∆dφ∆) 〈 d²n(ηΣ, η∆, φΣ, φ∆)/dηΣdφΣ 〉_{ηΣ, φΣ}
by averaging the 4D density over ηΣ and φΣ within a
detector acceptance. The autocorrelation averaging on
ηΣ is equivalent to 〈dn/dη〉η ≈ n(∆η)/∆η at η = 0. The
magnitude is still the 4D density d⁴n/dη1dη2dφ1dφ2, but
it varies only on the two difference axes (η∆, φ∆).
In Fig. 4 we illustrate two averaging schemes [18]. In
the left panel we show the averaging procedure applied
to histograms on (x1, x2) as in Fig. 3. Index k denotes
the position of an averaging diagonal on the difference
axis. In the right panel we show a definition involving
pair cuts applied on the difference axes. Pairs are his-
togrammed directly onto (η∆, φ∆). Periodicity on φ im-
plies that the averaging interval on φΣ/2 is 2π indepen-
dent of φ∆. However, η is not periodic and the averaging
interval on ηΣ/2 is ∆η − |η∆|, where ∆η is the single-
particle η acceptance [18]. The autocorrelation value for
FIG. 4: Autocorrelation averaging schemes on xΣ = x1 + x2
for a prebinned single-particle space (left panel) and for pairs
accumulated directly into bins on the two-particle difference
axis (right panel).
a given x∆ is the bin sum along that diagonal divided by
the averaging interval.
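The diagonal-averaging scheme of Fig. 4 (left panel) can be sketched directly. In this toy version (ours) a prebinned stationary pair histogram is averaged along its sum diagonals; the bin count nbins − |k| on each diagonal plays the role of the non-periodic averaging interval ∆η − |η∆|:

```python
import numpy as np

def autocorr_from_hist(H):
    """Average a prebinned 2D histogram on (x1, x2) along its sum diagonals,
    one value per diagonal offset k; the bin count nbins - |k| on a diagonal
    is the non-periodic averaging interval (analogue of Delta_eta - |eta_Delta|)."""
    nbins = H.shape[0]
    return np.array([np.trace(H, offset=k) / (nbins - abs(k))
                     for k in range(-(nbins - 1), nbins)])

# a "stationary" pair histogram: content depends only on the difference k1 - k2
nbins = 25
k1, k2 = np.meshgrid(np.arange(nbins), np.arange(nbins), indexing="ij")
g = lambda kd: 1.0 + 0.1 * np.exp(-(kd / 5.0) ** 2)  # symmetric same-side peak
H = g(k1 - k2)

rho_A = autocorr_from_hist(H)
k_delta = np.arange(-(nbins - 1), nbins)
# stationarity: averaging along the sum axis loses no information
assert np.allclose(rho_A, g(k_delta))
```

For the periodic azimuth axis the averaging interval would instead be the full 2π for every φ∆ bin, as noted in the text.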
C. Autocorrelation examples
In Fig. 5 we show pt joint angular autocorrelations for
Hijing central Au-Au collisions at 200 GeV [25]. The left
panel shows quench-on collisions. The right panel shows
quench-off collisions. Aside from an amplitude change
Hijing shows little change in correlation structure from
N-N to central Au-Au collisions. Those results can be
contrasted with examples from an analysis of real RHIC
data [25] shown in Fig. 6.
FIG. 5: 2D pt angular autocorrelations from Hijing central
Au-Au collisions at 200 GeV for quench on (left panel) and
quench off (right panel) simulations.
In N-N or p-p collisions there is no obvious quadrupole
component. However, that possibility is not quanti-
tatively excluded by the data and requires careful fit-
ting techniques. For correlation structure which is not
sinusoidal there is no point to invoking the Wiener-
Khintchine theorem on φ∆ and transforming to a power
spectrum. Instead, model functions specifically suited to
such structure (e.g., 1D and 2D gaussians) are more ap-
propriate. The power-spectrum model is appropriate for
some parts of ρA, depending on the collision system. We
consider hybrid decompositions in Sec. VII.
D. Comparison with conventional methods
We can compare the power of the autocorrelation tech-
nique with the conventional approach based on EP es-
timation in single-particle η bins. There are concerns
in the single-particle approach about bias or distortion
from low bin multiplicities. In contrast, the autocorrela-
tion method is applicable to any collision system with any
multiplicity, as long as some pairs appear in some events.
The method is minimally biased, and the statistical error
in ∆ρA/√ρref is determined only by the number of bins
and the total number of events in an ensemble.
There is interest within the conventional context in
“correlating” flow results in different η bins. “A three-
dimensional event shape can be obtained by correlat-
ing and combining the Fourier coefficients in different
longitudinal windows” [21]. That goal is automatically
achieved with the joint autocorrelation technique but has
not been implemented with the conventional method.
The content of an autocorrelation bin at some η∆ rep-
resents a covariance averaged over all bin pairs at ηa and
ηb satisfying η∆ = ηa − ηb. If a flow component is iso-
lated (e.g., V²2/n̄), a given autocorrelation element represents normalized covariance na nb ṽ²mab/√(n̄a n̄b), where ṽ²mab would be the event-wise result of a ‘subevent’ or
‘scalar-product’ analysis between bins a and b. Averaged
over many events one obtains good resolution on rapid-
ity and azimuth for arbitrary structures, without model
dependence or bias.
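The bin-pair covariance can be illustrated with a small Monte Carlo (ours; the v2 value, bin multiplicities and event count are invented): two disjoint “subevent” bins share a common quadrupole, and the ensemble-averaged scalar product of their flow vectors recovers v2² per pair with no self-pair bias:

```python
import numpy as np

rng = np.random.default_rng(7)
v2, n_a, n_b, n_events = 0.08, 30, 30, 4000  # invented values

def sample_phis(n):
    """Draw n azimuths from dn/dphi ~ 1 + 2*v2*cos(2*phi) by rejection."""
    out = np.empty(0)
    while out.size < n:
        prop = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
        accept = rng.uniform(0.0, 1.0, prop.size) < \
            (1.0 + 2.0 * v2 * np.cos(2.0 * prop)) / (1.0 + 2.0 * v2)
        out = np.concatenate([out, prop[accept]])
    return out[:n]

cov = 0.0
for _ in range(n_events):
    Qa = np.sum(np.exp(2j * sample_phis(n_a)))  # flow vector, bin a
    Qb = np.sum(np.exp(2j * sample_phis(n_b)))  # flow vector, bin b
    cov += (Qa * Qb.conjugate()).real           # Q_a . Q_b: no self pairs
cov /= n_events * n_a * n_b                     # per-pair covariance

assert abs(cov - v2 ** 2) < 0.002
```

Because the bins are disjoint there is no self-pair term to subtract, which is the point of the ‘subevent’ or ‘scalar-product’ construction.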
VI. FLOW AND MINIJETS (NONFLOW)
The relation between azimuth multipoles and minijets
is a critically important issue in heavy ion physics which
deserves precise study. We should carefully compare
multipole structures conventionally attributed to hydro-
dynamic flows and parton fragmentation dominated by
minijets in the same analysis context. The best arena for
that comparison is the 2D (joint) angular autocorrelation
and corresponding power-spectrum elements. Before pro-
ceeding to autocorrelation structure we consider nonflow
in the conventional flow context.
A. Nonflow and conventional flow analysis
In conventional flow analysis azimuth correlation struc-
ture is simply divided into ‘flow’ and ‘nonflow,’ where
the latter is conceived of as non-sinusoidal structure of
indeterminate origin. The premise is that all sinusoidal
structure represents flows of hydrodynamic origin. It is
speculated that nonflow is due to resonances, HBT and
jets, including minijets. Various properties are assigned
to nonflow which are said to distinguish it from flow [26].
Nonflow is by definition non-sinusoidal and is not corre-
lated with the RP, thus it can appear perpendicular to
the RP.
The multiplicity dependence of nonflow is said to be
quite different from flow, where “multiplicity depen-
dence” can sometimes be read as centrality dependence.
For instance, ~̃Qma · ~̃Qmb = Ṽma Ṽmb cos(Ψma−Ψmb) if A,
B are disjoint, since there are no self pairs. The ensemble
average then measures covariance V²mab. For m = 2 it is claimed that the nonflow component of V²2ab is ∝ n̄ c [22]. Therefore, V²2/n̄ ∝ c, a constant for nonflow with no centrality dependence. But V²m/n̄ ∝ ∆ρA/√ρref which, for
the minijets dominating nonflow, is very strongly depen-
dent on centrality [9, 12]; the conventional assumption
is incorrect. The above notation is inadequate because
minijets (nonflow) should not be included in the Fourier
power spectrum. They should be modeled by different
functional forms which we consider in the next section.
B. Cumulants
Another strategy for isolating flow from nonflow is to
use higher cumulants [27]. The basic assumption (a phys-
ical correlation model) is that flow sinusoids are a collective phenomenon characteristic of almost all particles,
whereas nonflow is a property only of pairs, termed “clus-
ters.” That scenario is said to imply that vm should be
the same no matter what the multiplicity, whereas non-
flow should fall off as some inverse power of n.
For instance, by subtracting v2[4] (four-particle cumu-
lant) from v2[2] (two-particle cumulant) one should ob-
tain “nonflow” as the difference (cf. Eq. (10) of [26]). In
Fig. 31 of [26] we find a plot of g2 = Npart (v²2[2] − v²2[4]) ∝ Npart/nch × ∆ρ/√ρref. Multiplying g2 by n²ch/n²part we
obtain a measure of minijet correlations per participant
pair. That ‘nonflow’ component increases rapidly with
centrality (and therefore n), consistent with actual mea-
surements of minijet centrality trends. The incorrect fac-
tor in the definition of g2 removes a factor 2x increase
from peripheral to central in the minijet centrality trend,
thus suppressing the centrality dependence of ‘nonflow.’
C. Counterarguments
The conventional flow analysis method requires a com-
plex strategy to distinguish flow from nonflow in projec-
tions onto 1D azimuth difference φ∆. An intricate and
fragile system results, with multiple constraints and as-
sumptions. The assumptions are not a priori justified,
and must be tested. ‘Flow’ isolated with those assump-
tions can and does contain substantial systematic errors.
Claims about the multiplicity (centrality) dependence
of ‘nonflow’ (independent of centrality or slowly varying)
are unsupported speculations without basis in experiment.
In fact, detailed measurements of minijet centrality de-
pendence [9, 12] are quite inconsistent with typical as-
sumptions about nonflow. Multiplicity (centrality) de-
pendence of flow measurements is further compromised
by biases resulting from improper statistical methods,
especially true for small multiplicities or peripheral collisions. Such biases can masquerade as physical phenomena.
Finally, it is assumed that nonflow has no correlation
with the RP, thus implying the ability of and need for
the EP to distinguish flow from nonflow. But nonflow
(minijets) should be strongly correlated with the EP (jet
quenching), and such correlations should be measured.
That is the main subject of paper II in this two-part se-
ries. Precise decomposition of angular correlations into
‘flow’ sinusoids and minijet structure is realized with 2D
joint angular autocorrelations combined with proper sta-
tistical techniques.
Non-flow is a suite of physical phenomena, each wor-
thy of detailed study. In the conventional approach this
physics is seen in limited ways by various projections and
poorly-designed measures and described mainly by speculation. With more powerful analysis methods it is possible to separate flow from the various sources of ‘nonflow’
reliably and identify those sources as interesting physical
phenomena.
VII. STRUCTURE OF THE JOINT ANGULAR
AUTOCORRELATION IN A-A COLLISIONS
We now return to the 2D angular autocorrelation. By
separating its structure into a few well-defined compo-
nents we obtain an accurate separation of multipoles,
minijets and other phenomena. Minijets and “flows” can
be compared quantitatively within the same analysis con-
text.
Each bin of an autocorrelation is a comparison of two
“subevents.” The notional term “subevent” represents a
partition element in conventional math terminology (e.g.,
topology, cf. Borel measure theory). An “event” is a dis-
tribution in a bounded region of momentum space (de-
tector acceptance), and a subevent is a partition element
thereof. A distribution can be partitioned in many ways:
by random selection, by binning the momentum space,
by particle type, etc. A uniform partition is a binning,
and the set of bin entries is a histogram.
A bin in an angular autocorrelation represents an av-
erage over all bin pairs in single-particle space separated
by certain angular differences (η∆, φ∆). The bin contents
represent normalized covariances averaged over all such
pairs of bins. The notional “scalar product method,” re-
lating two subevents in conventional flow analysis, is al-
ready incorporated in conventional mathematical meth-
ods developed over the past century as covariances in
bins of an angular autocorrelation. Using 2D angular au-
tocorrelations we easily and accurately separate nonflow
from flow. “Nonflow” so isolated has revealed the physics
of minijets—hadron fragments from the low-momentum
partons which dominate RHIC collisions.
A. Minijet angular correlations
Minijet correlations are equal partners with multi-
pole correlations on difference-variable space (η∆, φ∆).
Minimum-bias jet angular correlations (dominated by
minijets) have been studied extensively for p-p and Au-
Au collisions at 130 and 200 GeV [9, 10, 11, 12]. Those
structures dominate the “nonflow” of conventional flow
analysis. In p-p collisions minijet structure, a same-side peak (jet cone) and an away-side ridge uniform on η∆, is evident for hadron pairs down to 0.35 GeV/c for
each hadron. Parton fragmentation down to such low
hadron momenta is fully consistent with fragmentation
studies over a broad range of parton energies (e.g., LEP,
HERA) [28].
FIG. 6: 2D pt angular autocorrelations from Au-Au collisions
at 200 GeV for 80-90% central collisions (left panels) and 45-
55% central collisions (right panels). In the lower panels si-
nusoids cos(φ∆) and cos(2φ∆) have been subtracted to reveal
“nonflow” structure.
In Fig. 6 we show autocorrelations obtained by inver-
sion of pt fluctuation scale (bin-size) dependence [9]. The
upper-left panel is 80-90% central and the upper-right
panel is 45-55% central Au-Au collisions. Correlation
structure is dominated by a same-side peak and mul-
tipole structures (sinusoids). Subtracting the sinusoids
reveals the minijet structure in the bottom panels and
illustrates the precision with which flow and nonflow can
be distinguished. The negative structure surrounding the
same-side peak at lower right is an interesting and unan-
ticipated new feature [9].
B. Decomposing 2D angular autocorrelations: a
controlled comparison
Based on extensive analysis [9, 10, 11, 25] we find
three main contributions to angular correlations in RHIC
nuclear collisions: 1) transverse fragmentation (mainly
minijets), 2) longitudinal fragmentation (modeled as
“string” fragmentation), 3) azimuth multipoles (flows).
Longitudinal fragmentation plays a reduced role in heavy
ion collisions. In this study we focus on the interplay be-
tween 1) and 3), transverse parton fragmentation and
azimuth multipoles, as the critical analysis issue for az-
imuth correlations in A-A collisions.
The 2D joint autocorrelation ρA(η∆, φ∆) is the basis
for decomposition. The criteria for distinguishing az-
imuth multipoles from minijet structure are η∆ depen-
dence and sinusoidal φ∆ dependence. Structure with si-
nusoidal φ∆ dependence and η∆ invariance is assigned to
azimuth multipoles. Other structure, varying generally
on (η∆, φ∆), is assigned in this exercise to minijets. We
adopt the decomposition
ρA(η∆, φ∆) = ρj(η∆, φ∆) + ρm(φ∆), (55)
where j represents (mini)jets and m represents multi-
poles. That decomposition is reasonable within a limited
pseudorapidity acceptance, e.g., the STAR TPC accep-
tance [29]. Over a larger acceptance other separation
criteria must be added.
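A toy numerical version of this separation (ours; the peak widths, amplitudes and grid are invented stand-ins, and linear least squares stands in for the χ² minimization of the exercise described next) shows how a 2D fit isolates the quadrupole while a 1D projection onto φ∆ does not:

```python
import numpy as np

# grid on (eta_Delta, phi_Delta)
eta_d = np.linspace(-2.0, 2.0, 25)
phi_d = np.linspace(-np.pi / 2, 3 * np.pi / 2, 25, endpoint=False)
E, P = np.meshgrid(eta_d, phi_d, indexing="ij")

# the two terms of Eq. (55): minijet same-side peak plus multipoles
peak = np.exp(-0.5 * ((E / 1.0) ** 2 + (P / 0.7) ** 2))  # 2D gaussian
dipole, quad = np.cos(P), np.cos(2 * P)

A_true, V1_true, V2_true = 0.10, -0.02, 0.013  # invented amplitudes
rng = np.random.default_rng(1)
rho_A = (A_true * peak + V1_true * dipole + V2_true * quad
         + 1e-4 * rng.standard_normal(E.shape))

# 2D fit: the model is linear in the amplitudes, so least squares suffices
M = np.column_stack([peak.ravel(), dipole.ravel(), quad.ravel(),
                     np.ones(E.size)])
A_fit, V1_fit, V2_fit, offset = np.linalg.lstsq(M, rho_A.ravel(), rcond=None)[0]

# 1D fit: project onto phi_Delta first (a flat eta average stands in for the
# acceptance-weighted projection), then extract the cos(2 phi) amplitude
proj = rho_A.mean(axis=0)
V2_1d = 2.0 * np.mean(proj * np.cos(2 * phi_d))

assert abs(V2_fit - V2_true) < 0.002  # 2D fit separates the components
assert V2_1d > V2_fit                 # projection inflates the quadrupole
```

The 1D amplitude exceeds the 2D value because the projected same-side peak feeds into the cos(2φ∆) term, the crosstalk discussed throughout this section.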
To illustrate the separation process we construct artifi-
cial autocorrelations combining flow sinusoids and mini-
jet structure with centrality dependence taken from mea-
surements. We add statistical noise appropriate to a typ-
ical event ensemble of a few million Au-Au collisions. We
then fit the autocorrelations with model functions and χ²
FIG. 7: Simulated three-component 2D angular autocorrela-
tion for 80-90% central Au-Au collisions at 200 GeV (upper
left), model of data distribution from fitting (upper-right),
autocorrelation with eta-acceptance triangle imposed (lower
left) and sinusoid fit to 1D projection of lower-left panel
(lower-right).
minimization. We compare the resulting fit parameters
with the input parameters. We then project the 2D au-
tocorrelations onto φ∆ and fit the results with a sinusoid.
The result of the 1D fit represents the product of a con-
ventional flow analysis if the EP resolution is perfectly
corrected and there is no statistical bias in the method.
We then compare the resulting sinusoid amplitudes.
In Fig. 7 we show an analysis for 80-90% central Au-Au
collisions, which are nearly N-N collisions. The upper-
right panel is an accurate model of data from p-p col-
lisions [11]. The upper-left panel is the simpler repre-
sentation for this exercise with added statistical noise.
The difference in constant offsets is not relevant to the
exercise. At lower left is the distribution “seen” by a
conventional flow analysis, which simply integrates over
(projects) the η dependence. The projection includes a
triangular acceptance factor on η∆ imposed on the joint
autocorrelation, resulting in distortions that affect the si-
nusoid fit. The lower-right panel is the projection onto
φ∆ with χ² fit. The 1D fit gives ∆ρ[2]/√ρref = 0.0052, compared to 0.0013 from the 2D fit.
In Fig. 8 we show the same analysis applied to 5-10%
Au-Au collisions. The model distribution, derived from
observed data trends, is dominated by a dipole term
∝ cos(φ∆) and an elongated same-side jet peak, although
the combination closely mimics a quadrupole ∝ cos(2φ∆)
(elliptic flow). The lower-left panel shows the effect of the
η-acceptance triangle on η∆ applied to the upper-right
panel which is implicit in any projection onto φ∆ (obvious
in this 2D plot). The resulting projection on φ∆ is shown
FIG. 8: Same as the previous figure but for 5-10% central Au-
Au collisions at 200 GeV. Note the pronounced effect of the
η-acceptance triangle in the lower-left panel (relative to the
upper-right panel) resulting from projection of space (η1, η2)
onto its difference axis.
at lower right. The 1D fit gives ∆ρ[2]/√ρref = 0.103,
compared to 0.060 from the 2D fit. The differences be-
tween 1D and 2D fits are much larger than the differences
between input and fitted parameters in the previous ex-
ercise. They reveal the limitations of conventional flow
analysis.
In Fig. 9 we give a summary of results for twelve cen-
trality classes, nine 10% bins with mean values 95% · · ·
15%, plus two 5% bins with mean values 7.5% and 2.5%.
The twelfth class is b = 0 (0%), constructed by extrapo-
lating the parameterizations from data. The solid curves
and points represent the parameters inferred from 2D fits
to the joint angular autocorrelations. The dashed curves
represent the model parameters used to construct the
simulated distributions. There is good agreement at the
percent level.
FIG. 9: A parameter summary for the previous two figures.
The solid curves are input model parameters for m = 1 and
m = 2 sinusoids cos(mφ∆) and a same-side 2D gaussian with
two widths and peak amplitude which approximate 200 GeV
Au-Au collisions. The dashed curves are the results of fits to
the model 2D autocorrelations exhibiting excellent accuracy.
The dash-dot curve represents 1D fits to projections on φ∆
(previous lower-right panels) corresponding to conventional
flow analysis.
The fits to 1D projections on φ∆ (dash-dot curve) how-
ever differ markedly from the 2D fit results and the in-
put parameters. The differences are very similar to the
changes of conventional flow measurements with different
strategies to eliminate “nonflow.” This exercise demon-
strates that with the 2D autocorrelation there is no guess-
work. We can distinguish the multipole contributions
from the minijet contributions. The 2D angular autocor-
relation provides precise control of the separation.
C. v2 in various contexts
In Fig. 10 we contrast the results of 1D conven-
tional flow analysis (dashed curves) and extraction of the
quadrupole amplitude from the 2D angular autocorrela-
tion (solid curves). We make the correspondence
∆ρA[2]/√ρref ≡ V²2/(2π n̄) = n̄ v²2/2π

among variables, with n̄/2π modeling ρ0 = d²n/dηdφ.
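Read as a conversion rule, the correspondence is one line of arithmetic. A minimal sketch (ours), assuming the form ∆ρA[2]/√ρref = n̄ v2²/2π with an illustrative multiplicity n̄; the helper names are hypothetical:

```python
import math

def v2_from_density_ratio(delta_rho_over_sqrt_rhoref, nbar):
    """Minimally-biased v2 from the per-particle density ratio,
    assuming Delta rho_A[2]/sqrt(rho_ref) = nbar * v2**2 / (2*pi)."""
    return math.sqrt(2.0 * math.pi * delta_rho_over_sqrt_rhoref / nbar)

def density_ratio_from_v2(v2, nbar):
    return nbar * v2 ** 2 / (2.0 * math.pi)

# round trip with an illustrative multiplicity
r = density_ratio_from_v2(0.06, nbar=600.0)
assert math.isclose(v2_from_density_ratio(r, 600.0), 0.06)
```

The square root is why v2, a per-pair measure, compresses centrality trends that the per-particle density ratio displays directly.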
That expression defines a minimally-biased v2. We make
the comparison in four plotting formats. The upper-right
panel is most familiar from conventional flow analysis,
where v2 is plotted vs participant nucleon number. The
trends can be compared with Fig. 13 of [30] (open circles
vs solid stars). Also included in that panel is the trend
v2 ∼ 0.22 ǫ predicted by hydro for thermalized A-A col-
lisions at 200 GeV (dotted curve, [31]).
FIG. 10: The quadrupole component in various plotting con-
texts derived from the previous model exercise. Conventional
flow measure v2 is shown in the upper panels. ∆ρ[2]/√ρref based on Pearson’s normalized covariance is shown in the
lower panels. The upper-right panel shows v2 vs npart in the
conventional plotting format. The lower-left panel repeats
the left panel of the previous figure, with conventional 1D fit
results shown as the dashed curve. Dashed and solid curves
correspond in all panels. ν estimates the mean N-N encoun-
ters per participant pair. The dotted curve at lower right is
the error function used to generate the model quadrupole amplitudes. Its counterpart for v2 is at upper left. The dotted
curve at upper right is eccentricity ǫ from a parameterization.
The remaining panels are plotted on parameter ν =
2nbin/npart, the ratio of N-N binary encounters to par-
ticipant pairs which estimates the mean participant path
length in number of encountered nucleons, a geometri-
cal measure. Comparing the upper panels we see that ν
treats peripheral and central collisions equitably, whereas
npart or ncharge compresses the important peripheral re-
gion into a small interval.
In the lower-left panel we plot per-particle density ra-
tio ∆ρA[2]/√ρref vs ν. That quantity, when extracted
from 2D fits, rises from near zero for peripheral collisions
to a maximum for mid-central collisions, falling toward
zero again for b = 0. In contrast to v2, which is the
square root of a per-pair correlation measure, the per-
particle density ratio reflects the trend for “flow” in the
sense of a current density. “Flow” is small for periph-
eral collisions and grows rapidly with increasing nucleon
path length. The trend with centrality is intuitive. The
values obtained from the 1D projection per conventional
flow analysis (dashed curve) are consistently high, espe-
cially for central collisions, exhibiting a strong systematic
bias. The 1D fit procedure (identical to the “standard”
and two-particle flow methods) confuses minijet structure
with quadrupole structure.
In the lower-right panel we show the density ratio di-
vided by initial spatial eccentricity ǫ defined by a pa-
rameterization derived from a Glauber simulation and
plotted as the dotted curve in the upper-right panel [31].
The trend from 2D fits (solid curve) is closely approx-
imated by a simple error function (dotted curve) with
half-maximum point at the center of the ν range. In
fact, the dotted curve is the basis for generating the input
quadrupole amplitudes for our model, and the small devi-
ation of the solid curve from the dotted curves in upper-
left and lower-right panels reveals the systematic error or
bias in the 2D fitting procedure (∼ 20% at ν ∼ 1, < 5%
at ν = 5.8). The dashed curves from the conventional 1D
fits show large relative deviations from the input trend,
especially for peripheral collisions where the emergence of
collectivity is of interest and for central collisions where
the issue of thermalization is most important.
Comparing the solid curve to existing v2 data in the
format of the upper-right panel shown in [30] (Fig. 13)
indicates that our simple formulation at lower right
(dotted curve) is roughly consistent with analysis of
flow data based on four-particle cumulants. The re-
sult from data generated with the same model and ana-
lyzed with the conventional flow analysis method (dashed
curve in upper-right panel) also agrees with the conven-
tional method applied to real RHIC data. Our model
may therefore indicate an underlying simplicity to the
quadrupole mechanism which is not hydrodynamic in ori-
gin. The model centrality trend for v2 is certainly incon-
sistent with the hydro expectation v2 ∝ ǫ [20, 23], as
demonstrated in the upper-right panel (cf. App. E).
VIII. DISCUSSION
A. Conventional flow analysis
The overarching premise of conventional flow analy-
sis is that in the azimuth distribution of each collision
event lies evidence of collective phenomena which must
be discovered to establish event-wise thermalization. The
1/n → 0 hydro limit shapes the analysis strategy, and
flow manifestations are the principal goal. Finite event
multiplicities are seen as a major source of systematic
error, as are correlation structures other than flow. Mul-
tiple strategies are constructed to deal with non-flow and
finite multiplicities. A stated advantage of the conven-
tional method is that the Fourier coefficients can be cor-
rected. The great disadvantage is that they must be corrected.
From the perspective of two-particle correlation anal-
ysis, especially in the context of autocorrelations and
power spectra, the conventional program leaves much to
be desired. The conventional analysis is essentially an
attempt to measure two-particle correlations with single-
particle methods combined with RP estimation, similar
to the use of trigger particles in high-pt jet analysis. But,
by analysis of the algebraic structure of conventional flow
analysis we have demonstrated that RP estimation does
not matter to the end result. Without a proper statistical
reference conventional analysis results contain extrane-
ous contributions from the statistical reference which are
partially ‘corrected’ in a number of ways. The improper
treatment of random variables incorporates sources of
multiplicity-dependent bias in measurements, and the fi-
nal results are questionable.
Flow measure vm is nominally the square root of per-pair correlation measure V²m/n(n−1). vm centrality
trends are thus nonintuitive and misleading (e.g., “el-
liptic flow” decreases with increasing A-A centrality).
The situation is similar to per-pair fluctuation measure
Σpt , which provides a dramatically misleading picture of
pt fluctuation dependence on collision energy [10]. In
contrast, per-particle correlation measures provide intu-
itively clear results and often make dynamical correla-
tion mechanisms immediately obvious. In particular, the
mechanisms behind “nonflow” in the form of minijets are
clearly apparent when correlations are measured by per-particle normalized covariance density ∆ρ/√ρref in a 2D
autocorrelation.
B. Autocorrelations and nonflow
In conventional flow analysis it is proposed to measure
flow in narrow rapidity bins (“strips”) so as to develop
a three-dimensional picture of event structure. However,
there has been little implementation of that proposal. In
contrast, the joint angular autocorrelation by construc-
tion contains all possible covariances among pseudora-
pidity bins within a detector acceptance. The ideal of
full event characterization is thereby realized.
The angular autocorrelation is the optimum solution to
a geometry problem—how to reduce the six-dimensional
two-particle momentum space to two-dimensional sub-
spaces with minimum distortion or information loss. The
autocorrelation is the unique solution to that problem,
involving no model assumptions. The ensemble-averaged
angular autocorrelation contains all the correlation infor-
mation obtainable from a conventional flow analysis, but
with negligible bias and no sensitivity to individual event
multiplicities.
Because it is a two-dimensional representation the
angular autocorrelation is far superior for separating
“flow” (multipoles) from “nonflow” (minijets), as we have
demonstrated. A simple exercise demonstrates that sep-
aration is complete at the percent level, whereas the con-
ventional method admits crosstalk at the tens of percent
level. Precise separation leads to new physics insights
from the multipoles and minijets so revealed.
C. Collision centrality dependence
Collision centrality dependence is of critical impor-
tance in the comparison of flow and minijets. Parton col-
lisions and hydrodynamic response to early pressure have
very different dependence on impact parameter and col-
lision geometry, especially for peripheral collisions. Pe-
ripheral A-A collisions should approach p-p (N-N) col-
lisions, and correlation structure may change rapidly in
mid-peripheral collisions as collective phenomena develop
there. The possible onset of collective behavior in mid-
peripheral collisions and reduction in more central col-
lisions are of major importance for understanding the
relation of minijets to flow. The conventional flow analy-
sis method is severely limited for peripheral collisions.
In contrast, correlation measure ∆ρ/√ρref, centrality
measure ν and associated centrality techniques described
in [32] are uniquely adapted to cover all centrality regions
down to N-N with excellent accuracy.
D. Physical interpretations
Because similar flow measurement techniques have
been applied at Bevalac and RHIC energies with sim-
ilar motivations it is commonly assumed that azimuth
multipoles have a common source over a broad collision
energy range—hydrodynamic flows, collective response
to early pressure. The hydro mechanism was proposed
as the common element in [20] and persists as the lone
interpretation of azimuth multipoles in HI collisions to
date.
At Bevalac and AGS energies it is indeed likely that
azimuth multipoles result from ‘flow’ of initial-state nu-
cleons in response to early pressure, with consequent
final-state correlations of those nucleons—a true hydro
phenomenon. However, at SPS and RHIC energies the
source of azimuth multipoles inferred from final-state
produced hadrons (mainly pions) may not be hydrody-
namic, in contrast to arguments by analogy with lower
energies. Other sources of multipole structure should be
considered [34, 35]. Multipoles at higher energies could
arise at the partonic or hadronic level, early or late in the
collision, with collective motion or not, and if collective
then implying thermalization or not.
The chain of argument most often associated with
elliptic flow asserts that observation of flow as a col-
lective phenomenon demonstrates that a thermalized
medium (QGP) has been formed which responds hy-
drodynamically to early pressure and converts an ini-
tial configuration-space eccentricity to a corresponding
quadrupole moment in momentum space.
However, nonflow in the form of minijets provides con-
tradictory evidence. Minijet centrality trends indicate
that thermalization is incomplete, and substantial mani-
festations of initial-state parton scattering remain at ki-
netic decoupling [9, 10, 12]. Precision studies of mini-
jet centrality dependence (ν dependence) indicate that a
large fraction of the minijet structure expected from lin-
ear superposition of N-N collisions (no thermalization)
persists in central Au-Au collisions. That contradic-
tion requires more complete experimental characteriza-
tion and careful theoretical study [36].
Arguments based on interpreting the quadrupole com-
ponent as hydrodynamic flow exclude alternative phys-
ical mechanisms. Aside from minijet systematics there
are other hints that a different mechanism might be re-
sponsible for azimuth multipoles. In Fig. 10 we showed
that flow measurements based on four-particle cumulants
(with bias sources and nonflow thereby reduced) are best
described by a trend (solid curves) that is inconsistent
with the hydro expectation v2 ∝ ǫ. The trend is instead simply described in terms of per-particle measure ∆ρ[2]/√ρref and two shape parameters relative to ǫ.
We question the theoretical assumption that ǫ should
be simply related to v2 as opposed to some other mea-
sure of the azimuth quadrupole component. We expect
a priori and find experimentally that variance measures,
integrals over two-particle momentum space, more typ-
ically scale linearly with geometry parameters. Thus,
∆ρ[2]/√ρref ∝ n̄v₂² may be more closely related to ǫ,
and the relation may or may not be characteristic of a
hydro scenario.
IX. SUMMARY
In conclusion, we have reviewed Fourier transform the-
ory, especially the relation of autocorrelations to power
spectra, essential for analysis of angular correlations in
nuclear collisions. In that context we have reviewed
five papers representative of conventional flow analysis
and have related the methods and results to autocorre-
lation structure and spherical and cylindrical multipole
moments.
We have examined the need for event-plane evaluation
in correlation measurements and find that it is extraneous
to measurement of azimuth multipole moments. The EP
estimate drops out of the final ensemble average.
We have introduced the definition of the 2D (joint) an-
gular autocorrelation and considered the distinction be-
tween flows (cylindrical multipoles) and nonflow (domi-
nated by minijet structure) in conventional flow analysis
and criticized the basic assumptions used to distinguish
the two in that context.
Based on measured minijet and flow centrality trends
we have constructed a simulation exercise in which model
autocorrelations of known composition are combined
with statistical noise from a typical event ensemble and
fit with a model function consisting of a few simple com-
ponents, first as a 2D autocorrelation and second as a 1D
projection on azimuth difference axis φ∆. We show that
the 2D fit returns input parameters accurately at the
percent level, whereas the 1D fit, representing conven-
tional flow analysis, deviates systematically and strongly
from the input. Comparisons with published flow data
indicate that the observed bias in the simulation is ex-
actly the difference attributed to “nonflow” in conven-
tional measurements.
By comparing our simple algebraic model of
quadrupole centrality dependence to data we observe
that the trend v2 ∝ ǫ is not met for any collision sys-
tem, nor is there asymptotic approach to such a trend.
That observation raises questions about the relevance of
hydrodynamics to phenomena currently attributed to el-
liptic flow at the SPS and RHIC.
This work was supported in part by the Office of Sci-
ence of the U.S. DoE under grant DE-FG03-97ER41020.
APPENDIX A: BROWNIAN MOTION
There is a close analogy between Brownian motion and
the azimuth structure of nuclear collisions. The long his-
tory of Brownian motion and its mathematical descrip-
tion can thus provide critical guidance for the analysis
of particle distributions. Brownian motion (more gen-
erally, random motion of particles suspended in a fluid)
was modeled by Einstein as a diffusion process (random
walk) [7]. He sought to test the “kinetic-molecular” the-
ory of thermodynamics and provide direct observation of
molecules. Paul Langevin developed a differential equa-
tion to describe such motion, which included a stochastic
term representing random impulses delivered to the sus-
pended particle by molecular collisions. Jean Perrin and
collaborators performed extensive measurements which
confirmed Einstein’s predictions and provided definitive
evidence for the reality of molecules [8].
1. The quasi-random walker
We model a 2D quasi-random walker (including
nonzero correlations) as follows. The walker position
is recorded in equal time intervals δt. After n steps,
with step-wise displacements r sampled randomly from a
bounded distribution, the walker position relative to an
arbitrary starting point is, in the notation of this paper, $\vec R = \sum_{i=1}^{n} r_i\,\vec u(\phi_i)$, where $r_i$ is the $i$th displacement. The squared total displacement is then
\[
R^2 = n\langle r^2\rangle + n(n-1)\langle r^2 \cos(\phi_\Delta)\rangle. \tag{A1}
\]
The first term, linear in n (or t), was described by Ein-
stein. The second term could represent “drift” of the
walker due to deterministic response to an external in-
fluence. The composite is then termed “Brownian motion
with drift,” a popular model for stock markets and other
quasi-random processes. Measuring multipole moments
on azimuth in nuclear collisions is formally equivalent
to measuring “drift” terms on time in the quasi-random
walk of a charged particle suspended in a molecular fluid
within a superposition of oscillating electric fields. There
are many other applications for Eq. (A1).
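Eq. (A1) is easy to verify in a toy ensemble. The sketch below is our illustration (step lengths uniform on [0,1) and a wrapped-normal angle distribution are arbitrary choices, not taken from any analysis here): with uncorrelated angles only the Einstein term survives, while clustering the angles generates the second, "drift" term.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_walks = 100, 20000

def mean_R2(kappa):
    """Ensemble-averaged squared displacement after n_steps.

    kappa = 0: uncorrelated angles (pure random walk).
    kappa > 0: angles clustered about phi = 0 (a 'drift' correlation),
    sampled here from a wrapped normal with sigma = 1/sqrt(kappa).
    """
    r = rng.uniform(0, 1, (n_walks, n_steps))          # bounded step lengths
    if kappa == 0:
        phi = rng.uniform(0, 2*np.pi, (n_walks, n_steps))
    else:
        phi = rng.normal(0, 1/np.sqrt(kappa), (n_walks, n_steps))
    x = (r*np.cos(phi)).sum(axis=1)
    y = (r*np.sin(phi)).sum(axis=1)
    return np.mean(x**2 + y**2)

# Eq. (A1): R^2 = n<r^2> + n(n-1)<r_i r_j cos(phi_Delta)>.
# With <r^2> = 1/3, <r> = 1/2 and <cos(phi_i - phi_j)> = exp(-sigma^2):
pure = mean_R2(0)    # ~ n<r^2>: the Einstein (diffusion) term alone
drift = mean_R2(4)   # adds n(n-1)<r>^2 exp(-0.25): the 'drift' term
```

With correlated angles the second term dominates the first by nearly two orders of magnitude, which is exactly the "Brownian motion with drift" composite described above.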
For a true random walk consisting of uncorrelated steps Einstein expressed ⟨r²⟩/δt ≡ d·2D (random walk in d dimensions) in terms of diffusion coefficient D. The second term ⟨r² cos(φ∆)⟩ ≡ (δt)² $v_x^2$ represents a possible deterministic component (correlations), with x̂ the direction of
an applied “force.” In that case successive angles φi are
correlated, and the result is a macroscopic nonstochastic
drift of the walker trajectory.
The fractal dimension of a random walk [first term
in Eq. (A1)] is df = 2. The trajectory is therefore
a “space-filling” curve in 2D configuration space. The
appropriate measure of trajectory size is area, and the
rate of size increase is the diffusion coefficient (rate of
area increase). In contrast, the second term in Eq. (A1)
represents a deterministic trajectory whose nominal di-
mension is 1 (modulo the extent of curvature, which in-
creases the dimension above 1). Therefore, the appropri-
ate measure of trajectory size is length, and speed is the
correct rate measure. For Brownian motion with drift
the trajectory dimension is not well-defined, depending
on the relative magnitudes of the drift and stochastic
terms, and the concept of speed is therefore ambiguous.
Attempts to measure the linear speed of Brownian mo-
tion in the nineteenth century failed because of the frac-
tal structure of random walks. From the structure of
Eq. (A1) the average squared speed over interval ∆t = nδt is R²/(∆t)² ∼ ⟨r²/(δt)²⟩/n, and the limiting case for ∆t = nδt → 0 is the so-called “infinite speed of diffu-
sion.” That topological oddity is formally equivalent to
the “multiplicity bias” of conventional flow analysis.
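The "infinite speed of diffusion" can be exhibited directly. In this sketch (our illustration; D = 1 and Gaussian increments are arbitrary choices) the apparent speed |R|/∆t of a simulated 2D Brownian path scales as (∆t)^(−1/2), doubling when the observation window shrinks by a factor of four:

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt = 1.0, 1e-4                      # diffusion coefficient, base time step
steps = rng.normal(0.0, np.sqrt(2*D*dt), size=(2, 200000))
pos = np.concatenate([np.zeros((2, 1)), np.cumsum(steps, axis=1)], axis=1)

def mean_speed(n):
    """Apparent speed |R|/(n*dt) measured over windows of n base steps."""
    p = pos[:, ::n]                    # positions sampled every n steps
    d = np.diff(p, axis=1)             # net displacement per window
    return np.mean(np.hypot(d[0], d[1])) / (n*dt)

# <|R|>/Delta-t ~ sqrt(pi*D/Delta-t): shrinking the window inflates 'speed'
ratio = mean_speed(10) / mean_speed(40)    # expect ~ sqrt(40/10) = 2
```

The divergence as the window shrinks is the topological oddity referred to in the text, and it is the same pathology that multiplicity bias introduces into event-wise flow estimators.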
2. Brownian motion and nuclear collisions
We now consider the close analogy between Einstein’s
theory of Brownian motion and the measurement of p2x
in a nuclear collision, using directivity as an example.
Just as ~R is the vector total displacement of a quasi-
randomwalker in 2D configuration space, ~Q1 is the vector
total displacement of a quasi-random walker (event-wise
particle ensemble) in 2D momentum space. After n steps
the squared displacements are
\[
R^2 = n^2\,\delta t^2\, v_x'^2 = n\,\delta t\,4D + n(n-1)\,\delta t^2\, v_x^2, \tag{A2}
\]
\[
Q_1^2 = n^2\, p_x'^2 = n\,\langle p_t^2\rangle + n(n-1)\, p_x^2.
\]
4Dδt is the increase in area per step of a random walker in 2D configuration space. ⟨p_t²⟩ is the increase in area per
step (per particle) of a random walker in 2D momentum
space, playing the same role as the diffusion coefficient.
The RHS first term in the first line is the subject of Ein-
stein’s 1905 Brownian motion paper. Its measurement by
Perrin confirmed the reality of molecules and the validity
of Boltzmann’s kinetic theory.
As noted, attempts to measure mean speed v′x of a par-
ticle in a fluid failed because speed is the wrong rate mea-
sure for trajectory size increase. Speed measurements
decreased with increasing sample number or observation
time. It was not until Einstein’s formulation and later
mathematical developments that the topology of the ran-
dom walk and its consequences became apparent. Initial
attempts at the Bevalac to measure px in the form p′x using directivity failed for the same reason. Corrections
were developed to approximate the unbiased quantity px,
and the failure was attributed to multiplicity bias or ‘au-
tocorrelations.’ Ironically, the autocorrelation distribu-
tion is the ideal method to access the unbiased quantity
in either case.
3. Einstein and autocorrelations
To provide a statistical description of Brownian motion
Einstein introduced the autocorrelation concept with the
following language [7].
Another important consideration can be re-
lated to this method of development. We
have assumed that the single particles are all
referred to the same co-ordinate system. But
this is unnecessary, since the movements of
the single particles are mutually independent.
We will now refer the motion of each parti-
cle to a co-ordinate system whose origin co-
incides at the [arbitrary] time t = 0 with the
[arbitrary] position of the center of gravity of
the particle in question; with this difference,
that [probability distribution] f(x, t)dx now
gives the number of the particles whose x co-
ordinate has increased between the time t = 0
and the time t = t, by a quantity which lies
between x and x+ dx.
Einstein’s function f(ξ, τ) is a 2D autocorrelation
which satisfies the diffusion equation. The solution is
a gaussian on x relative to an arbitrary starting point
(thus defining difference variables ξ = x − xstart and
τ = t − tstart), with 1D variance σξ² = 2Dτ. The au-
tocorrelation is sometimes called a two-point correlation
function or two-point autocorrelation. The angular auto-
correlation is a wide-spread and important analysis tool,
e.g., in astrophysics, nuclear collisions and many other
fields.
4. Wiener, Khintchine, Lévy and Kolmogorov
The names Wiener, Lévy, Kolmogorov and Khintchine
figure prominently in the copious mathematics derived
from the Brownian motion problem. Norbert Wiener led
efforts to provide a mathematical description of Brown-
ian motion, abstracted to aWiener process, a special case
of a Lévy process (generalization of a discrete random
walk to a continuous random process) [37]. The Wiener-
Khintchine theorem provides a power-spectrum represen-
tation for stationary stochastic processes such as random
walks, for which a Fourier transform does not exist. We
have acknowledged the theorem with our Eq. (10).
The analysis of azimuth structure in nuclear collisions
in terms of angular autocorrelations is based on power-
ful mathematics developed throughout the past century.
Autocorrelations make it possible to study azimuth struc-
ture for any event multiplicity down to p-p collisions with
as little as two detected particles per event. The effects of
“non-flow” can be eliminated from “flow” measurements
(and vice versa) without model dependence or guesswork.
The Brownian motion problem and Einstein’s fertile so-
lution inform two central issues for studies of the correla-
tion structure of nuclear collisions: analysis methodology
and physics interpretation.
APPENDIX B: RANDOM VARIABLES
A random variable represents a set of samples from
a parent distribution. The outcome of any one sample
is unpredictable (i.e., random), but through statistical
analysis an ensemble of samples can be used to infer prop-
erties (statistics – results of algorithms applied to a set of
samples) of the parent distribution. Sums over particles
and particle pairs of kinematic quantities are the primary
random variables in analysis of nuclear collision data.
1. The algebra of random variables
Products and ratios of random variables behave non-
intuitively because random variables don’t obey the alge-
bra of ordinary variables. E.g., factorization of random
variables results in the spawning of covariances. The
approximation $\overline{xy} \simeq \bar x\,\bar y$ common in conventional flow analysis is a source of systematic error (bias) because $\overline{xy} = \bar x\,\bar y + (\overline{xy} - \bar x\,\bar y)$. The omitted term is a covariance. Such covariances play a role in statistics similar to QM commutators, with 1/n ↔ ℏ. Conventional flow analysis
assumes the 1/n → 0 limit for some random variables,
and the results are undependable for small multiplicities.
Similarly, improper treatment of ratios of random vari-
ables results in infinite series of covariances. E.g.,
\[
\overline{x/n} = \frac{\bar x}{\bar n}\left(1 - \frac{\overline{\delta x\,\delta n}}{\bar x\,\bar n} + \frac{\overline{x\,(\delta n)^2}}{\bar x\,\bar n^2} + \cdots\right), \tag{B1}
\]
with $\overline{(\delta n)^2}/\bar n \equiv \sigma_n^2/\bar n \sim 1\!-\!2$. Thus, the common approximation $\overline{x/n} \simeq \bar x/\bar n$ can result in significant n- and physics-dependent (x-n covariances) bias for small n.
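The bias from factorizing ratios can be seen in a toy ensemble (our illustration; the choice x = n² simply builds in an x-n covariance):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1 + rng.poisson(4, 100000)       # event multiplicity, never zero
x = n.astype(float)**2               # event-wise quantity correlated with n

event_wise = np.mean(x/n)            # true ensemble mean of x/n: <n> = 5
factorized = np.mean(x)/np.mean(n)   # naive x-bar/n-bar: <n^2>/<n> = 5.8
```

The naive factorization overestimates the event-wise mean by 16% here; the discrepancy is carried entirely by the covariance terms spawned in the expansion above, and it grows as n̄ decreases.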
In this paper we distinguish between event-wise and
ensemble-averaged quantities and do not employ en-
semble averages of ratios of random variables. We in-
clude event-wise factorizations and ratios only to sug-
gest qualitative connections with conventional flow anal-
ysis. E.g., we consider Ṽ 2m ≡ n(n − 1)〈cos(mφ∆)〉 with
〈cos(mφ∆)〉 = 〈cos2(m[φ − Ψr])〉 ≡ ṽ2m. But, ṽ2m 6=
V 2m/n(n− 1) 6= v̄2m. vm as typically invoked in conven-
tional flow analysis is not a well-defined statistic.
2. Statistical references
The concept of a statistical reference is largely absent
from conventional flow analysis. By ‘statistical reference’
we mean a quantity or distribution which represents an
uncorrelated system, a system consistent with indepen-
dent samples from a fixed parent distribution (central
limit conditions [5]). Concerns about ‘bias’ from low mul-
tiplicities [19, 21, 22] typically relate to the presence of an
unsubtracted and unacknowledged statistical reference
in the final result. Finite multiplicity fluctuations are
then said to produce systematic errors, false azimuthal
anisotropies, a problem masking true collective effects.
In the limit 1/n → 0 the statistical reference may in-
deed become negligible compared to the true correlation
structure. However, its presence for nonzero 1/n is a po-
tential source of systematic error which may block access
to important small-multiplicity systems (peripheral col-
lisions and/or small kinematic bins). In general, if the
statistical reference is not correctly subtracted the result
is increasingly biased with smaller multiplicities. Identi-
fication and subtraction of the proper reference is one of
the most important tasks in statistical analysis.
Use of the term ‘statistical’ to mean ‘uncorrelated’ is
misleading (e.g., ‘statistical’ vs ‘dynamical’). All ran-
dom variables and their fluctuations about the mean are
‘statistical.’ Some random variables and their statistics
are reference quantities, representing systems that are by
construction uncorrelated (independent sampling from a
fixed parent). We therefore label statistical reference
quantities ‘ref,’ not ‘stat.’
3. Random variables and Fourier analysis
In the context of Fourier analysis the basic finite-
number (Poisson) statistical reference is manifested as
the delta-function component in the autocorrelation den-
sity Eq. (13) and the white-noise constant term n〈r2〉 in
the event-wise power spectrum. Other reference compo-
nents may arise from two-particle correlations which are
not of interest to the analysis (e.g., detector effects) and
which may be revealed in mixed-pair distributions. A
clear distinction should always be maintained between
the reference and the sought-after correlation signal.
Careful attention to random-variable algebra is es-
pecially important in a Fourier analysis. The power-
spectrum elements and autocorrelation density must sat-
isfy the transform equations both for each event and after
ensemble averaging. In conventional flow analysis that
condition is often not satisfied. For instance, $\tilde V_m^2$ and $\overline{V_m^2}$ satisfy the FT transforms and Wiener-Khintchine theorem before and after ensemble averaging respectively, whereas the $v_m$ do not.
4. Minimally-biased random variables
It is frequently stated in the conventional flow liter-
ature that flow analysis must insure sufficiently large
multiplicities. The operating assumption in the design
of conventional flow methods is the continuum limit
1/n → 0, with inevitable bias for smaller multiplicities.
However, careful reference design and algebraic manipu-
lation of random variables makes possible precise treat-
ment of event-wise multiplicities down to n = 1. Some
statistical measures perform consistently no matter what
the sample number. The full multiplicity range is essen-
tial to measure azimuth multipole evolution with central-
ity down to N-N and p-p collisions, so that A-A “flow”
phenomena may be connected to phenomena observed in
elementary collisions and understood in a QCD context.
Since multiplicity necessarily varies strongly with cen-
trality, multiplicity-dependent bias in flow measurements
is unacceptable, and every means should be used to in-
sure minimally-biased statistics. To achieve that end
analysis methods must carefully transition from safe
event-wise factorizations (as featured in this paper) to
ensemble averages minimally biased for all n. Linear
combinations of powers of random variables, e.g., vari-
ances and covariances, satisfy a linear algebra. Such in-
tegrals of two-particle momentum space are nominally
free of bias.
APPENDIX C: MULTIPOLES AND SPHERICITY
The 1D Fourier transform on azimuth is part of a larger
representation of angular structure. The encompassing
context is a 2D multipole decomposition on (θ, φ) repre-
sented by the sphericity tensor, with the spherical har-
monics Y m2 as elements. In limiting cases submatrices
of the sphericity tensor reduce to “cylindrical harmon-
ics” cos(mφ), part of the 1D Fourier representation on
azimuth.
The central premise of a multipole representation
is that the final-state particle angular distribution on
[θ(yz), φ] is efficiently represented by a few low-order
spherical harmonics (SH) Y ml (θ, φ). At the Bevalac,
sphericity tensor S containing spherical harmonics Y m2 as
elements was introduced. Directivity ~Q1, simply related
to Y 12 , was employed to represent a rotated quadrupole
as a dipole pair antisymmetric about the collision mid-
point. At lower energies (Bevalac, AGS) the quadrupole
principal axis may be rotated to a large angle with re-
spect to the collision axis and Y 12 dominates. At higher
energies and near midrapidity (θ ∼ π/2) the dominant
SH is Y 22 .
1. Spherical harmonics
The spherical harmonics are defined as
\[
Y_{lm}(\Omega) = \sqrt{\frac{2l+1}{4\pi}\,\frac{(l-m)!}{(l+m)!}}\; P_l^m(\cos\theta)\, e^{im\phi}, \tag{C1}
\]
where $P_l^m(\cos\theta)$ is an associated Legendre function [38]. An event-wise density on the unit sphere can be expanded as
\[
\tilde\rho(\Omega) = \sum_{l,m} \tilde Q_{lm}\, Y_{lm}(\Omega), \tag{C2}
\]
\[
\tilde Q_{lm} = \int d\Omega\, Y^*_{lm}(\Omega)\,\tilde\rho(\Omega) = \sum_{i=1}^{n} Y^*_{lm}(\Omega_i) = n\langle Y^*_{lm}(\Omega)\rangle,
\]
where Ω → (θ, φ) and dΩ ≡ d cos(θ)dφ. The FTs on φ
form a special case of those relations when ρ̃(Ω) is peaked
near θ ∼ π/2. The Ylm are orthonormal and complete:
\[
\int d\Omega\, Y^*_{lm}(\theta,\phi)\, Y_{l'm'}(\theta,\phi) = \delta_{ll'}\delta_{mm'}, \tag{C3}
\]
\[
\sum_{l,m} Y_{lm}(\theta,\phi)\, Y^*_{lm}(\theta',\phi') = \delta(\Omega - \Omega').
\]
2. Multipoles
The spherical harmonics are model functions for single-
particle densities on (θ, φ). The coefficients of the mul-
tipole expansion of a distribution are complex spherical
multipole moments describing 2l poles and defined as en-
semble averages of the spherical harmonics over the unit
sphere weighted by an angular density.
The following relation is defined by analogy with the
expansion of an electric potential in spherical harmonics,
in this case on momentum space ~p rather than configu-
ration space $\vec r$ [38]:
\[
\int d^3p'\, \frac{\tilde\rho(p',\Omega')}{|\vec p - \vec p\,'|} = \sum_{l,m} \frac{4\pi}{2l+1}\, \tilde q_{lm}\, \frac{Y_{lm}(\Omega)}{p^{\,l+1}}. \tag{C4}
\]
The coefficients are the event-wise spherical multipole
moments
\[
\tilde q_{lm} \equiv \int p^2 dp\, d\Omega\; p^l\, \tilde\rho(p,\Omega)\, Y^*_{lm}(\Omega) = \sum_{i=1}^{n} p_i^l\, Y^*_{lm}(\Omega_i) = n\langle p^l\, Y^*_{lm}(\Omega)\rangle. \tag{C5}
\]
Eq. (C2) is the special case for p restricted to unity (i.e.,
distribution on the unit sphere).
In general, $\Re Y_m^m \propto \sin^m(\theta)\cos(m\phi)$, and the cos(mφ)
are by analogy “cylindrical harmonics” [42]. The ensem-
ble average of a cylindrical harmonic over the unit circle
weighted by 1D density ρ(φ) results in complex cylindri-
cal multipole moments Qm. The Fourier coefficients Qm
obtained from analysis of SPS and RHIC data are there-
fore cylindrical multipole moments describing 2m poles.
E.g., m = 2 denotes a quadrupole moment and m = 4
denotes an octupole moment.
If nonflow contributions (i.e., structure rapidly vary-
ing on η or y) are present, a multipole decomposition of
ρ(θ, φ) is no longer efficient, and the inferred multipole
moments are difficult to interpret physically (e.g., flow in-
ferences per se are biased). In Sec. V we describe a more
differential method for representing angular structure us-
ing two-particle joint angular autocorrelations on differ-
ence axes (η∆, φ∆). Given a decomposition of ρ(θ, φ)
based on variations on η∆ we can distinguish cylindri-
cal multipoles accurately from “nonflow” structure (cf.
Sec. VII).
3. Sphericity
The sphericity tensor has been employed in both jet
physics and flow studies. A normalized 3D sphericity
tensor was defined in [14] to search for initial evidence
of jets in e+-e− collisions. A decade later sphericity was
introduced to the search for collective nucleon flow in
heavy ion collisions [15]. The close connection between
flow and jets continues at RHIC, where we seek the rela-
tion between minijets and “elliptic flow.”
Event-wise sphericity S̃ is a measure of structure in
single-particle density ρ(θ, φ) on the unit sphere. We
use dyadic notation to reduce index complexity, analogous to vector notation $\vec{\tilde Q}_m = \sum_i r_i\,\vec u(m\phi_i)$. S̃ (with $r_i \to p_i$) describes a 3D quadrupole with arbitrary orientation. Given $\vec p = p\,[\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta] \equiv p\,\vec u(\theta,\phi)$ we have
\[
2\tilde S \equiv 2\sum_{i=1}^{n} \vec p_i\,\vec p_i = 2\sum_{i=1}^{n} p_i^2\, \vec u(\theta_i,\phi_i)\,\vec u(\theta_i,\phi_i) \tag{C6}
\]
\[
= \sum_{i=1}^{n} p_i^2\, \tilde U(\theta_i,\phi_i) = n\langle p^2\, \tilde U(\theta,\phi)\rangle,
\]
the last being an event-wise average, where
\[
U(\theta,\phi) = \sin^2(\theta)\, I + Y(\theta,\phi), \tag{C7}
\]
\[
Y(\theta,\phi) =
\begin{pmatrix}
\sin^2\theta\cos(2\phi) & \sin^2\theta\sin(2\phi) & \sin(2\theta)\cos\phi \\
\sin^2\theta\sin(2\phi) & -\sin^2\theta\cos(2\phi) & \sin(2\theta)\sin\phi \\
\sin(2\theta)\cos\phi & \sin(2\theta)\sin\phi & 3\cos^2\theta - 1
\end{pmatrix}. \tag{C8}
\]
In terms of event-wise quadrupole moments q̃2m derived
from the Y2m
\[
2\tilde S = n\langle p^2\sin^2\theta\rangle\, I
+ \sqrt{\tfrac{32\pi}{15}}
\begin{pmatrix}
\Re\tilde q_{22} & \Im\tilde q_{22} & -\Re\tilde q_{21} \\
\Im\tilde q_{22} & -\Re\tilde q_{22} & -\Im\tilde q_{21} \\
-\Re\tilde q_{21} & -\Im\tilde q_{21} & \sqrt{3/2}\,\tilde q_{20}
\end{pmatrix}, \tag{C9}
\]
an event-wise estimator of angular structure on the unit sphere, its reference defined by $2\tilde S_{\rm ref} = n\langle p^2\sin^2\theta\rangle\, I$, and $p^2\sin^2\theta = p_t^2$. The sphericity tensor of [14] was normalized to $\hat S = S/n\langle p^2\rangle$.
Note that
\[
\tilde Q = 3\tilde S - n\langle p^2\rangle\, I \tag{C10}
\]
is the traceless Cartesian quadrupole tensor appearing in
the Taylor expansion of the (~p equivalent of the) electro-
static potential [38]. We have defined instead
\[
\tilde Q' = 3\tilde S - \tfrac{3}{2}\, n\langle p_t^2\rangle\, I, \tag{C11}
\]
an alternative quadrupole tensor wherein each element
is a single spherical quadrupole moment. The difference
lies in the diagonal elements: linear combinations of the
ℜq̃2m in the diagonal elements of Q̃ are simplified to
single moments in Q̃′. The ensemble mean of both ten-
sors for an uncorrelated (spherically symmetric) system
or system with event-wise quadrupole orientations ran-
domly varying is the null tensor (all elements zero).
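As a numerical illustration of how the sphericity machinery extracts azimuth structure, the sketch below (our construction: unit momenta fixed at θ = π/2, azimuths drawn from dN/dφ ∝ 1 + 2v₂ cos(2φ)) recovers v₂ from the difference of diagonal sphericity elements:

```python
import numpy as np

rng = np.random.default_rng(4)
v2_true, n = 0.1, 500000

# sample azimuths from dN/dphi ∝ 1 + 2 v2 cos(2 phi) by rejection
phi = np.empty(0)
while phi.size < n:
    cand = rng.uniform(0, 2*np.pi, n)
    keep = rng.uniform(0, 1, n) < (1 + 2*v2_true*np.cos(2*cand))/(1 + 2*v2_true)
    phi = np.concatenate([phi, cand[keep]])
phi = phi[:n]

# unit pt at theta = pi/2: the (x, y) submatrix of the sphericity tensor
Sxx = np.mean(np.cos(phi)**2)
Syy = np.mean(np.sin(phi)**2)
Sxy = np.mean(np.cos(phi)*np.sin(phi))

v2_est = Sxx - Syy    # equals <cos 2phi> = v2 for this azimuth density
```

For an isotropic sample the same difference averages to zero, which is the null-tensor reference of the preceding paragraph; the off-diagonal element Sxy vanishes here because the quadrupole axis was aligned with x̂.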
APPENDIX D: SUBEVENTS
The “subevent” is a notional re-invention of partition-
ing/binning, the latter having a history of more than a
century in mathematics. In conventional flow analysis
subevents are groups of particles in an event segregated
on the basis of random selection, charge, strangeness,
PID or a kinematic variable such as pt, y or η. The scalar-
product method [30] is based on a covariance between two
single-particle bins (nominally equal halves of an event).
The subevent method is thus a restricted reinvention of
a common concept in multiparticle correlation analysis:
determining covariances among all pairs of single-particle
bins at some arbitrary binning scale – a two-particle cor-
relation function. Diagonal averages of such distributions
are the elements of autocorrelations.
In the language of conventional flow analysis one way
to eliminate statistical reference $Q^2_{\rm ref}$ from $\tilde Q_m^2$ is to partition events into a pair of disjoint (non-overlapping) subevents A, B [19]. In that case $\vec{\tilde Q}_{ma}\cdot\vec{\tilde Q}_{mb} = \vec{\tilde V}_{ma}\cdot\vec{\tilde V}_{mb} = n_a n_b\, \tilde v^2_{mab}$, a covariance. The partition may be asym-
metric (unequal particle numbers) and may be as small
as a pair of particles. In addition to eliminating the
self-pair statistical reference such partitioning is said to
reduce nonflow correlation sources, depending on their
physical origins and the partition definition [30]. We as-
sume for simplicity that there is no nonflow contribution.
Subevent pairs can be used to determine the event-plane
resolution for subevents A, B and full events.
First, we consider the symmetric case, defining equiv-
alent subevents A and B with multiplicities nA = nB =
n/2 from an event with n particles. E.g., subevent A
has azimuth vector $\vec Q_{mA} = \sum_{i\in A} \vec u(m\phi_i)$. The scalar product is a covariance
\[
\vec{\tilde Q}_{ma}\cdot\vec{\tilde Q}_{mb} \equiv \tilde Q_a\,\tilde Q_b\,\langle\cos(m[\Psi_a - \Psi_b])\rangle = \sum_{i\in A}^{n_a}\sum_{j\in B}^{n_b} \cos(m[\phi_i - \phi_j]) \equiv n_a n_b\, \tilde v^2_{mab} = \tilde V^2_{mab}, \tag{D1}
\]
with e.g. $\tilde Q_a^2 = n_a + n_a(n_a - 1)\tilde v^2_{ma} = n_a + \tilde V^2_{ma}$. Then
\[
\cos(m[\Psi_{ma} - \Psi_{mb}]) = \frac{\tilde V^2_{mab}}{\sqrt{(n_a + \tilde V^2_{ma})(n_b + \tilde V^2_{mb})}}. \tag{D2}
\]
If subevents A and B are physically equivalent (e.g., a
random partition of the total of n particles), then
\[
\overline{\cos(m[\Psi_{ma} - \Psi_{mb}])} = r_{ab}\, \frac{\overline{V^2_{ma}}}{\bar n_a + \overline{V^2_{ma}}} = \overline{\cos(m[\Psi_{ma} - \Psi_r])}\;\; \overline{\cos(m[\Psi_{mb} - \Psi_r])}, \tag{D3}
\]
where $r_{ab} = \overline{V^2_{mab}}\big/\sqrt{\overline{V^2_{ma}}\;\overline{V^2_{mb}}}$ is Pearson's normalized covariance between subevents A and B for the $m$th power-spectrum elements. If A and B are perfectly correlated
(rab = 1) then
\[
\overline{\cos(m[\Psi_{ma} - \Psi_{mb}])} = \overline{\cos^2(m[\Psi_{ma} - \Psi_r])}. \tag{D4}
\]
In general, $\overline{V^2_m}/\bar n = (1 + r_{ab})\,\overline{V^2_{ma}}/\bar n_a$, which provides the exact relation between the EP resolution for subevents and for composite events A + B. It is not generally correct that $\overline{\cos(m[\Psi_m - \Psi_r])} = \sqrt{2}\cdot\overline{\cos(m[\Psi_{ma} - \Psi_r])}$. In this case
\[
\overline{\cos^2(m[\Psi_{ma} - \Psi_r])} \simeq \frac{\bar n_a\,\overline{V^2_{ma}}}{(\bar n_a - 1)\,(\bar n_a + \overline{V^2_{ma}})}, \tag{D5}
\]
and $\overline{V^2_{ma}} = \overline{V^2_m}/4$ for perfectly correlated subevents.
Second, we consider the most asymmetric case A =
one particle and B = n− 1 particles.
\[
\langle\cos(m[\phi_i - \Psi_r])\,\cos(m[\Psi_{mi} - \Psi_r])\rangle = \frac{(n-1)\,\tilde v^2_{mi}}{\sqrt{n - 1 + (n-1)(n-2)\,\tilde v^2_{mi}}} \approx \tilde v_m\cdot\frac{\tilde V'_m}{\tilde Q'_m}, \tag{D6}
\]
where $\tilde Q'^2_m = n - 1 + \tilde V'^2_m$ describes a subevent with $n-1$
particles. In general, the EP resolution for a full event of
n particles is given by
\[
\overline{\cos^2(m[\Psi_m - \Psi_r])} \simeq \frac{n\,\tilde V^2_m}{(n-1)\,\tilde Q^2_m}. \tag{D7}
\]
Measurement of the EP resolution is simply a measure-
ment of the corresponding power-spectrum element, since
\[
\overline{V^2_m}/\bar n \simeq \frac{\overline{\cos^2(m[\Psi_m - \Psi_r])}}{1 - \overline{\cos^2(m[\Psi_m - \Psi_r])}}. \tag{D8}
\]
In [22] the approximation
\[
\langle\cos(m[\Psi_m - \Psi_r])\rangle^2 \approx \frac{\pi}{4}\,\overline{V^2_m}/\bar n \tag{D9}
\]
is given for $\overline{V^2_m}/\bar n \ll 1$ or $Q^2_m \sim \bar n$.
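The subevent relations can be exercised in a toy Monte Carlo (our construction; n = 64 particles per event and v₂ = 0.15 are arbitrary choices). For physically equivalent random subevents the measured ⟨cos(m[Ψa − Ψb])⟩ factorizes into the square of the true subevent resolution, as in the second equality of Eq. (D3):

```python
import numpy as np

rng = np.random.default_rng(5)
m, v2, n, events = 2, 0.15, 64, 20000

def sample_dphi(size):
    """Azimuths relative to the reaction plane, dN/dphi ∝ 1 + 2 v2 cos(m phi)."""
    out = np.empty(0)
    while out.size < size:
        cand = rng.uniform(0, 2*np.pi, size)
        keep = rng.uniform(0, 1, size) < (1 + 2*v2*np.cos(m*cand))/(1 + 2*v2)
        out = np.concatenate([out, cand[keep]])
    return out[:size]

psi_r = rng.uniform(0, 2*np.pi, events)              # true reaction planes
phi = psi_r[:, None] + sample_dphi(events*n).reshape(events, n)

def ep_angle(ph):
    """Event-plane angle of order m from the event-wise Q-vector."""
    return np.arctan2(np.sin(m*ph).sum(axis=1), np.cos(m*ph).sum(axis=1)) / m

psi_a = ep_angle(phi[:, :n//2])                      # random equal subevents
psi_b = ep_angle(phi[:, n//2:])
c_ab = np.mean(np.cos(m*(psi_a - psi_b)))            # measurable covariance
c_ar = np.mean(np.cos(m*(psi_a - psi_r)))            # true subevent resolution
```

Because the subevents are independent given Ψr and the EP distributions are symmetric about Ψr, the sin-sin cross term vanishes and c_ab equals c_ar² within statistics; in data only c_ab is observable, and the factorization is what permits the resolution correction.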
Equal subevents, as the largest possible event parti-
tion, imply an expectation that only global (large-scale)
variables are relevant to collision dynamics (e.g., to de-
scribe thermalized events). The possibility of finer struc-
ture in momentum space is overlooked, whereas autocor-
relation studies with finer binnings and the covariances
among those bins discover detailed event structure highly
relevant to collision dynamics.
APPENDIX E: CENTRALITY ISSUES
Accurate A-A centrality determination and the cen-
trality dependence of azimuth multipoles and related pa-
rameters is critical to understanding heavy ion collisions.
We must locate b = 0 accurately in terms of measured
quantities to test theory expectations relative to hydro-
dynamics and thermalization. And we must obtain accu-
rate measurements for peripheral A-A collisions to pro-
vide a solid connection to elementary collisions.
1. Centrality measures
In [32] is described the power-law method of centrality
determination. Because the minimum-bias distribution
on participant-pair number $n_{part}/2$ goes almost exactly as $(n_{part}/2)^{-3/4}$, the distribution on $(n_{part}/2)^{1/4}$ is almost exactly uniform, as is the experimental distribution on $n_{ch}^{1/4}$, dominated by participant scaling. Those sim-
ple forms can greatly improve the accuracy of central-
ity determination, especially for peripheral and central
collisions. The cited paper gives simple expressions for
npart/2, nbin and ν relative to fraction of total cross sec-
tion.
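The power-law observation can be illustrated with a toy sample (the endpoint of 191 participant pairs is a nominal Au-Au value, our assumption): rejection-sampling the (npart/2)^(−3/4) minimum-bias density and histogramming the fourth root yields a flat distribution.

```python
import numpy as np

rng = np.random.default_rng(6)
N, x_max = 200000, 191.0               # participant-pair range [1, 191] (nominal)

# rejection-sample dsigma/dx ∝ x^(-3/4) on [1, x_max]
samples = np.empty(0)
while samples.size < N:
    cand = rng.uniform(1.0, x_max, N)
    keep = rng.uniform(0, 1, N) < cand**-0.75   # envelope: x^(-3/4) <= 1 here
    samples = np.concatenate([samples, cand[keep]])
x = samples[:N]

# change of variables: if dN/dx ∝ x^(-3/4), then y = x^(1/4) is uniform,
# since dN/dy = dN/dx * dx/dy ∝ y^(-3) * 4y^3 = const
counts, _ = np.histogram(x**0.25, bins=20, range=(1.0, x_max**0.25))
```

The near-uniform bin populations are the content of the power-law method: a uniform running variable gives equal statistical weight per centrality bin and stable behavior at both the peripheral and central ends of the distribution.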
In conventional centrality determination the minimum-
bias distribution on nch is divided into several bins rep-
resenting estimated fractions of the total cross section.
The main source of systematic error is uncertainty in
the fraction of total cross section which passes triggering
and event reconstruction. The total efficiency is typi-
cally 95%, the loss being mainly in the peripheral region,
and the most peripheral 10 or 20% bins therefore have
large systematic errors resulting in abandonment. Flow
measurements with EP estimation are also excluded from
peripheral collisions due to low event multiplicities.
In contrast, with the power-law method running inte-
grals of the Glauber parameters and nch can be brought
into asymptotic coincidence for peripheral collisions re-
gardless of the uncertainty in the total cross section. Pa-
rameter ν measures the centrality and greatly reduces
the cross-section error contribution. Centrality accuracy
< 2% on ν is thereby achievable down to N-N collisions.
That capability is essential to determine the correspon-
dence of A-A quadrupole structure in elementary colli-
sions, to test the LDL hypothesis for instance: is there
“collective” behavior in N-N collisions?
For central collisions the upper half-maximum point
on the power-law minimum-bias distribution provides a
precise determination of b = 0 on nch and therefore ν.
The b = 0 point is critical for evaluation of correlation
measures relative to Glauber parameter ǫ in the context
of hydro expectations for v2/ǫ.
2. Geometry parameters and azimuth structure
We consider the several A-A geometry parameters rel-
evant to azimuth structure. In Fig. 11 (left panel) we
plot npart/2 vs ν using the parameterization in [32].
The relation is very nonlinear. The dashed curve is
npart/2 ≃ 2ν^2.57. The most peripheral quarter of the
centrality range is compressed into a small interval on
npart/2. Mean path-length ν is the natural geometry
measure for sensitive tests of departure from linear N-
N superposition, whereas important minijet correlations
(nonflow) ∝ ν are severely distorted on npart.
FIG. 11: Left panel: Participant pair number vs mean path-
length ν for 200 GeV Au-Au collisions. Because of the nonlin-
ear relation the peripheral third of collisions is compressed to
a small interval on npart/2. Right panel: Impact parameter b
vs ν. To good approximation the relation is linear over most
of the centrality range.
In Fig. 11 (right panel) we plot impact parameter b vs
ν, again using the parameterization in [32] with fractional
cross section σ/σ0 = (b/b0)^2. We note the interesting
fact that over most of the centrality range b/b0 ≃ (R −
ν)/(R − 1), with b0 ≡ 2R = 14.7 fm for Au-Au. Thus,
any anticipated trends on b are also accessible on ν with
minimal distortion.
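For orientation, the two approximate relations above, the dashed-curve fit npart/2 ≃ 2ν^2.57 and the linear b(ν) relation with b0 ≡ 2R = 14.7 fm, can be combined into a small numerical sketch (our own illustration; both are approximations quoted in the text, not exact Glauber calculations):

```python
def npart_half(nu):
    """Participant pair number from mean path-length nu,
    using the dashed-curve fit npart/2 ~ 2 nu^2.57."""
    return 2.0 * nu**2.57

def impact_parameter(nu, R=7.35):
    """Impact parameter b in fm from nu via b/b0 ~ (R - nu)/(R - 1),
    with b0 = 2R = 14.7 fm for Au-Au."""
    b0 = 2.0 * R
    return b0 * (R - nu) / (R - 1.0)

# nu = 1 corresponds to isolated N-N collisions (b ~ b0);
# nu ~ 6 gives npart/2 near 200, roughly the central Au-Au scale.
for nu in (1.0, 3.0, 6.0):
    print(nu, round(npart_half(nu), 1), round(impact_parameter(nu), 2))
```

The strong nonlinearity of npart/2(ν) versus the near-linearity of b(ν) is visible directly in the printed values.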
In Fig. 12 (left panel) we show the LDL parameter
1/S dnch/dη [24] vs ν for three collision energies. The
energy dependence derives from the multiplicity factor,
which we parameterize in terms of a two-component
model [39]. Weighted cross-section area S(b/b0) (fm^2)
is an optical Glauber parameterization from [31]. Both ν
and 1/S dnch/dη are pathlength measures. They can be
compared with the inverse Knudsen number K_n^{−1} introduced
in [36] as a measure of collision number. The LDL
measure is based on energy-dependent physical particle
collisions, whereas ν is based on A-A geometry alone.
The relation is monotonic and almost linear. Thus, struc-
ture on one parameter should appear only slightly dis-
torted on the other.
FIG. 12: Left panel: Correspondence between LDL parame-
ter 1/S dnch/dη and centrality measure ν for three energies.
Right panel: Theory expectations for two limiting cases at
200 GeV. The solid curve is derived from the solid curve in
Fig. 10 (upper-right panel) using the relation in the left panel.
The hatched region is typically not measured in a con-
ventional flow analysis, due to a combination of large sys-
tematic uncertainty in the centrality determination and
large biases in flow measurements due to small multi-
plicities. However, peripheral collisions provide critical
tests of flow models: e.g., how does collective behavior
(if present) emerge with increasing centrality? In this
paper we describe analysis methods which, when com-
bined with the centrality methods of [32], make all A-A
collisions accessible for accurate measurements down to N-N collisions.
In Fig. 12 (right panel) we show v2/ǫ vs 1/S dnch/dη
for theory expectations (hatched bands) and the simula-
tion in Sec. VII C. The latter is based on a simple error
function on ν and is roughly consistent with four-particle
cumulant results at 200 GeV [26]. We observe that the
solid curve is not consistent with either the LDL trend
for peripheral collisions (the LDL slope is arbitrary) or
the hydro trend for central collisions. That provocative
result suggests that accurate analysis of azimuth corre-
lations over a broad range of energies and centralities
with the methods introduced in this paper and [32] may
produce interesting and unanticipated results.
3. Correlation measures
If the centrality dependence of azimuth structure is
to be accurately determined the correlation measure em-
ployed must have little or no multiplicity bias, includ-
ing statistical biases and irrelevant multiplicity factors
which lead to incorrect physical inferences. The quantity
∆ρ/√ρref is the unique solution to a measurement prob-
lem subject to multiple constraints. It is the only portable
measure (density ratio) of two-particle correlations ap-
plicable to collision systems with arbitrary multiplicity.
∆ρ/√ρref is invariant under linear superposition. If, ac-
cording to that measure, central Au-Au is different from
N-N the difference certainly indicates a unique physical
aspect of Au-Au collisions relative to N-N, exactly what
we require in a correlation measure. Conventional flow
measures do not satisfy that basic requirement.
Drawing a parallel with measures of 〈pt〉 fluctuations
we compare v2 ↔ Σpt [40]. Both are square roots of
per-pair correlation measures which tend to yield mis-
leading systematic trends (on centrality and energy) [10].
In contrast V_m^2 ↔ ∆Σ_{pt:n}^2 [5] (the total variance difference
for pt fluctuations) are integrals of two-particle
correlations (azimuth number correlations vs pt correla-
tions). The first is a measure of total azimuth corre-
lations, the second a measure of total pt variance dif-
ference [41], the integral of a two-particle distribution
relative to its reference. In a minimally-biased con-
text we then have v_m = √(V_m^2 / n(n−1)), analogous to
Σpt(CERES) = √(∆Σ_{pt:n}^2 / n(n−1)) / p̂_t [10] (the similar-
ity of notations is coincidental) as the square roots of
per-pair measures. Both are physically misleading.
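The per-pair reduction described above can be made concrete with a toy calculation (our own illustration; V_m^2 and the event multiplicity n are assumed given):

```python
import math

def v_m(V2m, n):
    """Per-pair flow coefficient from total-variance measure V_m^2:
    v_m = sqrt(V_m^2 / (n (n - 1)))."""
    return math.sqrt(V2m / (n * (n - 1)))

# The same total correlation V_m^2 yields a smaller v_m at higher
# multiplicity, illustrating why per-pair measures can produce
# misleading centrality and energy trends.
print(v_m(4.0, 10))   # ≈ 0.211
print(v_m(4.0, 100))  # ≈ 0.020
```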
[1] R. C. Hwa, Phys. Rev. D 32, 637 (1985); U. W. Heinz
and P. F. Kolb, Nucl. Phys. A 702, 269 (2002).
[2] F. Becattini, M. Gazdzicki and J. Sollfrank, Eur. Phys.
J. C 5, 143 (1998); P. Braun-Munzinger, I. Heppe and
J. Stachel, Phys. Lett. B 465, 15 (1999).
[3] R. Stock, “Event by event analysis of ultrarelativistic
nuclear collisions: A new method to search for criti-
cal fluctuations,” Prepared for NATO Advanced Study
Workshop on Hot Hadronic Matter: Theory and Exper-
iment, Divonne-les-Bains, France, 27 Jun - 1 Jul 1994;
R. Stock, Nucl. Phys. A661, 282c (1999); H. Heiselberg,
Phys. Rep. 351, 161 (2001).
[4] M. Gaździcki, St. Mrówczyński, Z. Phys. C 54, 127
(1992).
[5] T. A. Trainor, hep-ph/0001148.
[6] N. Borghini, P. M. Dinh and J. Y. Ollitrault, Talk given
at 31st International Symposium on Multiparticle Dy-
namics (ISMD 2001), Datong, China, 1-7 Sep 2001. Pub-
lished in “Datong 2001, Multiparticle dynamics” 192-197,
hep-ph/0111402.
[7] A. Einstein, Ann. Phys. 17, 549, 1905; Investigations on
the Theory of Brownian Movement, ed. R. Fürth, trans-
lated by A. D. Cowper (1926, Dover reprint 1956); Ein-
stein, Collected Papers, 2, 170-82, 206-22.
[8] J. Perrin, Ann. Phys., 18, 5 - 114 (1909), translated
by Frederick Soddy (London: Taylor and Francis, 1910)
reprinted in David M. Knight, ed., Classical scientific pa-
pers: chemistry (New York: American Elsevier, 1968).
[9] J. Adams et al. (STAR Collaboration), J. Phys. G 32,
L37 (2006).
[10] J. Adams et al. (STAR Collaboration), J. Phys. G 33,
451 (2007).
[11] R. J. Porter and T. A. Trainor (STAR Collaboration), J.
Phys. Conf. Ser. 27, 98 (2005).
[12] J. Adams et al. (STAR Collaboration), Phys. Rev. C 73,
064907 (2006).
[13] A. H. Mueller, Nucl. Phys. B 572, 227 (2000); R. Baier,
A. H. Mueller, D. Schiff and D. T. Son, Phys. Lett. B
502, 51 (2001).
[14] J. D. Bjorken and S. J. Brodsky, Phys. Rev. D 1, 1416
(1970).
[15] P. Danielewicz and M. Gyulassy, Phys. Lett. B 129
(1983) 283.
[16] G. Arfken, Mathematical Methods for Physicists, 3rd ed.
(Orlando, FL) Academic Press, 1985; Blackman, R. B.
and Tukey, J. W. The Measurement of Power Spectra,
From the Point of View of Communications Engineering.
New York, Dover, 1959.
[17] H. Nyquist, Trans. AIEE 47, 617 (1928).
[18] T. A. Trainor, R. J. Porter and D. J. Prindle, J. Phys. G
31, 809 (2005).
[19] P. Danielewicz and G. Odyniec, Phys. Lett. B 157, 146
(1985).
[20] J. Y. Ollitrault, Phys. Rev. D 46, 229 (1992).
[21] S. Voloshin and Y. Zhang, Z. Phys. C 70, 665 (1996).
[22] A. M. Poskanzer and S. A. Voloshin, Phys. Rev. C 58,
1671 (1998).
[23] S. A. Voloshin and A. M. Poskanzer, Phys. Lett. B 474,
27 (2000).
[24] H. Heiselberg and A.-M. Levy, Phys. Rev. C 59, 2716
(1999).
[25] Q. J. Liu, D. J. Prindle and T.A. Trainor, Phys. Lett. B
632, 197 (2006).
[26] J. Adams et al. (STAR Collaboration), Phys. Rev. C 72,
014904 (2005).
[27] N. Borghini, P. M. Dinh and J. Y. Ollitrault, Phys. Rev.
C 64, 054901 (2001).
[28] T. A. Trainor and D. T. Kettler, Phys. Rev. D 74, 034012
(2006).
[29] K. H. Ackermann et al., Nucl. Instrum. Meth. A 499,
624 (2003).
[30] C. Adler et al. (STAR Collaboration), Phys. Rev. C 66,
034904 (2002).
[31] P. Jacobs and G. Cooper, Remarks on the geometry of
high energy nuclear collisions, STAR note SN0402 (1999).
[32] T. A. Trainor and D. J. Prindle, hep-ph/0411217.
[33] N. Borghini, P. M. Dinh, J. Y. Ollitrault, A. M. Poskanzer
and S. A. Voloshin, Phys. Rev. C 66, 014901 (2002).
[34] A. Leonidov and D. Ostrovsky, Eur. Phys. J. C 16, 683
(2000).
[35] A. Leonidov and D. Ostrovsky, Phys. Rev. C 63, 037901
(2001).
[36] N. Borghini, Eur. Phys. J. A 29, 27 (2006).
[37] H. Stark and J. W. Woods, Probability and Random Pro-
cesses with Applications to Signal Processing, 3rd edition,
Prentice Hall (New Jersey, 2002); J. Bertoin, Lévy Pro-
cesses, Cambridge Univ. Press (New York, 1996).
[38] J. D. Jackson, Classical Electrodynamics, sixth edition,
John Wiley & Sons, Inc. (New York), 1967.
[39] D. Kharzeev and M. Nardi, Phys. Lett. B 507, 121
(2001).
[40] D. Adamová et al. (CERES Collaboration), Nucl. Phys.
A727, 97 (2003).
[41] J. Adams et al. (STAR Collaboration), Phys. Rev. C 71
064906 (2005).
[42] The term “cylindrical harmonic” is conventionally ap-
plied to Bessel functions of the first kind, assuming
cylindrical symmetry. However, the term “spherical har-
monic” does not assume spherical symmetry, and we use
the term “cylindrical harmonic” in that sense by analogy
to denote sinusoids on azimuth.
arXiv:0704.1675
Anon Plangprasopchok and Kristina Lerman
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292, USA
{plangpra,lerman}@isi.edu
Abstract
Information integration applications, such as mediators or
mashups, that require access to information resources cur-
rently rely on users manually discovering and integrating
them in the application. Manual resource discovery is a slow
process, requiring the user to sift through results obtained
via keyword-based search. Although search methods have
advanced to include evidence from document contents, its
metadata and the contents and link structure of the referring
pages, they still do not adequately cover information sources
— often called “the hidden Web”— that dynamically gen-
erate documents in response to a query. The recently popu-
lar social bookmarking sites, which allow users to annotate
and share metadata about various information sources, pro-
vide rich evidence for resource discovery. In this paper, we
describe a probabilistic model of the user annotation process
in a social bookmarking system del.icio.us. We then use the
model to automatically find resources relevant to a partic-
ular information domain. Our experimental results on data
obtained from del.icio.us show this approach as a promising
method for helping automate the resource discovery task.
Introduction
As the Web matures, an increasing number of dynamic
information sources and services come online. Unlike
static Web pages, these resources generate their contents
dynamically in response to a query. They can be HTML-
based, searching the site via an HTML form, or be a Web
service. Proliferation of such resources has led to a number
of novel applications, including Web-based mashups, such
as Google maps and Yahoo pipes, information integra-
tion applications (Thakkar, Ambite, & Knoblock 2005)
and intelligent office assistants
(Lerman, Plangprasopchok, & Knoblock 2007) that com-
pose information resources within the tasks they perform. In
all these applications, however, the user must discover and
model the relevant resources. Manual resource discovery
is a very time consuming and laborious process. The
user usually queries a Web search engine with appropriate
keywords and additional parameters (e.g., asking for .kml or
.wsdl files), and then must examine every resource returned
by the search engine to evaluate whether it has the desired
Copyright © 2018, American Association for Artificial Intelligence
(www.aaai.org). All rights reserved.
functionality. Often, it is desirable to have not one but
several resources with an equivalent functionality to ensure
robustness of information integration applications in the
face of resource failure. Identifying several equivalent
resources makes manual resource discovery even more time
consuming.
The majority of the research in this area of in-
formation integration has focused on automating
modeling resources — i.e., understanding seman-
tics of data they use (Heß & Kushmerick 2003;
Lerman, Plangprasopchok, & Knoblock 2006) and the
functionality they provide (Carman & Knoblock 2007). In
comparison, the resource discovery problem has received
much less attention. Note that traditional search engines,
which index resources by their contents — the words or
terms they contain — are not likely to be useful in this
domain, since the contents is dynamically generated. At
best, they rely on the metadata supplied by the resource
authors or the anchor text in the pages that link to the
resource. Woogle (Dong et al. 2004) is one of the few
search engines to index Web services based on the syntactic
metadata provided in the WSDL files. It allows a user to
search for services with a similar functionality or that accept
the same inputs as another services.
Recently, a new generation of Web sites has rapidly
gained popularity. Dubbed “social media,” these sites al-
low users to share documents, including bookmarks, photos,
or videos, and to tag the content with free-form keywords.
While the initial purpose of tagging was to help users or-
ganize and manage their own documents, it has since been
proposed that collective tagging of common documents can
be used to organize information via an informal classifica-
tion system dubbed a “folksonomy” (Mathes 2004). Con-
sider, for example, http://geocoder.us, a geocoding service
that takes an input as address and returns its latitude and
longitude. On the social bookmarking site del.icio.us1, this
resource has been tagged by more than 1, 000 people. The
most common tags associated by users with this resource are
“map,” “geocoding,” “gps,” “address,” “latitude,” and “lon-
gitude.” This example suggests that although there is gener-
ally no controlled vocabulary in a social annotation system,
tags can be used to categorize resources by their functionality.
1http://del.icio.us
We claim that social tagging can be used for information
resource discovery. We explore three probabilistic gener-
ative models that can be used to describe the tagging pro-
cess on del.icio.us. The first model is the probabilistic
Latent Semantic model (Hofmann 1999) which ignores in-
dividual user by integrating bookmarking behaviors from
all users. The second model, the three-way aspect model,
was proposed (Wu, Zhang, & Yu 2006) to model del.icio.us
users’ annotations. The model assumes that there exists a
global conceptual space that generates the observed values
for users, resources and tags independently. We propose
an alternative third model, motivated by the Author-Topic
model (Rosen-Zvi et al. 2004), which maintains that latent
topics that are of interest to the author generate the words in
the documents. Since a single resource on del.icio.us could
be tagged differently by different users, we separate “top-
ics”, as defined in Author-Topic model, into “(user) inter-
ests” and “(resource) topics”. Together user interests and
resource topics generate tags for one resource. In order to
use the models for resource discovery, we describe each re-
source by a topic distribution and then compare this distri-
bution with that of all other resources in order to identify
relevant resources.
The paper is organized as follows. In the next section,
we describe how tagging data is used in resource discovery.
Subsequently we present the probabilistic model we have
developed to aid in the resource discovery task. The section
also describes two earlier related models. We then compares
the performance of the three models on the datasets obtained
from del.icio.us. We review prior work and finally present
conclusions and future research directions.
Problem Definition
Suppose a user needs to find resources that provide some
functionality: e.g., a service that returns current weather
conditions, or latitude and longitude of a given address. In
order to improve robustness and data coverage of an appli-
cation, we often want more than one resource with the nec-
essary functionality. In this paper, for simplicity, we assume
that the user provides an example resource, that we call a
seed, and wants to find more resources with the same func-
tionality. By “same” we mean a resource that will accept the
same input data types as the seed, and will return the same
data types as the seed after applying the same operation to
them. Note that we could have a more stringent requirement
that the resource return the same data as the seed for the
same input, but we don’t want to exclude resources that may
have different coverage.
We claim that users in a social bookmarking system such
as del.icio.us annotate resources according to their function-
ality or topic (category). Although del.icio.us and similar
systems provide different means for users to annotate doc-
ument, such as notes and tags, in this paper we focus on
utilizing the tags only. Thus, the variables in our model are
resources R, users U and tags T . A bookmark i of resource
r by user u can be formalized as a tuple 〈r, u, {t1, t2, . . .}〉i,
which can be further broken down into a co-occurrence of a
triple of a resource, a user and a tag: 〈r, u, t〉.
Figure 1: Graphical representations of the probabilistic La-
tent Semantic Model (left) and Multi-way Aspect Model
(right). R, U , T and Z denote the variables “Resource”, “User”,
“Tag” and “Topic” respectively. Nt represents the number of
tag occurrences for a particular resource; D represents the
number of resources. Meanwhile, Nb represents the number of
all resource-user-tag co-occurrences in the social annotation
system. Note that filled circles represent observed variables.
We collect these triples by crawling del.icio.us. The sys-
tem provides three types of pages: a tag page — listing all
resources that are tagged with a particular keyword; a user
page — listing all resources that have been bookmarked by
a particular user; and a resource page — listing all the tags
the users have associated with that resource. del.icio.us also
provides a method for navigating back and forth between
these pages, allowing us to crawl the site. Given the seed,
we get what del.icio.us shows as the most popular tags as-
signed by the users to it. Next we collect other resources
annotated with these tags. For each of these we collect the
resource-user-tag triples. We use these data to discover re-
sources with the same functionality as the seed, as described
below.
Approach
We use probabilistic models in order to find a compressed
description of the collected resources in terms of topic de-
scriptions. This description is a vector of probabilities of
how a particular resource is likely to be described by dif-
ferent topics. The topic distribution of the resource is sub-
sequently used to compute similarity between resources us-
ing Jensen-Shannon divergence (Lin 1991). For the rest of
this section, we describe the probabilistic models. We first
briefly describe two existing models: the probabilistic La-
tent Semantic Analysis (pLSA) model and the Three-Way
Aspect model (MWA). We then introduce a new model that
explicitly takes into account users’ interests and resources’
topics. We compare performance of these models on the
three del.icio.us datasets.
Probabilistic Latent Semantic Model (pLSA)
Hoffman (Hofmann 1999) proposed a probabilistic la-
tent semantic model for associating word-document co-
occurrences. The model hypothesized that a particular docu-
ment is composed of a set of conceptual themes or topics Z .
Words in a document were generated by these topics with
some probability. We adapted the model to the context of
social annotation by claiming that all users have common
agreement on annotating a particular resource. All book-
marks from all users associated with a given resource were
aggregated into a single corpus. Figure 1 shows the graphi-
cal representation of this model. Co-occurrences of a partic-
ular resource-tag pair were computed by summing resource-
user-tag triples 〈r, u, t〉 over all users. The joint distribution
over resource and tag is
p(r, t) = ∑_z p(t|z) p(z|r) p(r)   (1)
In order to estimate parameters p(t|z), p(z|r), p(r) we
define log likelihood L, which measures how the estimated
parameters fit the observed data
L = ∑_{r,t} n(r, t) log p(r, t)   (2)
where n(r, t) is a number of resource-tag co-occurrences.
The EM algorithm (Dempster, Laird, & Rubin 1977) was
applied to estimate those parameters that maximize L.
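A minimal EM sketch for this pLSA model follows (illustrative only: the function name, initialization, and the toy count matrix in the usage below are our own; a practical implementation would add a convergence test on L, smoothing, and multiple restarts, and it assumes every resource has at least one tagged bookmark):

```python
import random

def plsa_em(n_rt, n_topics, n_iters=50, seed=0):
    """EM for pLSA on a resource-by-tag count matrix n_rt (list of lists).
    Returns p(z|r) and p(t|z)."""
    rng = random.Random(seed)
    R, T = len(n_rt), len(n_rt[0])
    normalize = lambda v: [x / sum(v) for x in v]
    # Random normalized initialization.
    p_z_r = [normalize([rng.random() for _ in range(n_topics)]) for _ in range(R)]
    p_t_z = [normalize([rng.random() for _ in range(T)]) for _ in range(n_topics)]
    for _ in range(n_iters):
        new_zr = [[0.0] * n_topics for _ in range(R)]
        new_tz = [[0.0] * T for _ in range(n_topics)]
        for r in range(R):
            for t in range(T):
                if n_rt[r][t] == 0:
                    continue
                # E step: posterior p(z | r, t) over topics.
                post = normalize([p_t_z[z][t] * p_z_r[r][z] for z in range(n_topics)])
                # M step: accumulate expected counts.
                for z in range(n_topics):
                    new_zr[r][z] += n_rt[r][t] * post[z]
                    new_tz[z][t] += n_rt[r][t] * post[z]
        p_z_r = [normalize(row) for row in new_zr]
        p_t_z = [normalize(row) for row in new_tz]
    return p_z_r, p_t_z
```

On a toy matrix such as `[[5, 5, 0, 0], [4, 6, 0, 0], [0, 0, 5, 5]]` (three resources, four tags), resources sharing tags end up with similar topic distributions p(z|r).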
Three-way Aspect Model (MWA)
The three-way aspect model (or multi-way aspect model,
MWA) was originally applied to document recommenda-
tion systems (Popescul et al. 2001), involving 3 entities:
user, document and word. The model takes into account
both user interest (pure collaborative filtering) and docu-
ment content (content-based). Recently, the three-way as-
pect model was applied on social annotation data in or-
der to demonstrate emergent semantics in a social annota-
tion system and to use these semantics for information re-
trieval (Wu, Zhang, & Yu 2006). In this model, conceptual
space was introduced as a latent variable, Z , which indepen-
dently generated occurrences of resources, users and tags for
a particular triple 〈r, u, t〉 (see Figure 1). The joint distribu-
tion over resource, user, and tag was defined as follows
p(r, u, t) = ∑_z p(r|z) p(u|z) p(t|z) p(z)   (3)
Similar to pLSA, the parameters p(r|z), p(u|z), p(t|z)
and p(z) were estimated by maximizing the log likelihood
objective function, L = ∑_{r,u,t} n(r, u, t) log p(r, u, t). The EM
algorithm was again applied to estimate those parameters.
Interest-Topic Model (ITM)
The motivation for the model proposed in this paper
comes from the observation that users in a social anno-
tation system have very broad interests. A set of tags in a
particular bookmark could reflect both users’ interests and
resources’ topics. As in the three-way aspect model, using a
single latent variable to represent both “interests” and “top-
ics” may not be appropriate, as intermixing between these
two may skew the final similarity scores computed from the
topic distribution over resources.
Figure 2: Graphical representation on the proposed model.
R, U , T , I and Z denote variables “Resource”, “User”,
“Tag”, “Interest” and “Topic” respectively. Nt represents the
number of tag occurrences for one bookmark (by a partic-
ular user to a particular resource); D represents a number of
all bookmarks in social annotation system.
Instead, we propose to explicitly separate the latent vari-
ables into two: one representing user interests, I; another
representing resource topics, Z . According to the proposed
model, the process of resource-user-tag co-occurrence could
be described as a stochastic process:
• User u finds a resource r interesting and she would like to
bookmark it
• User u has her own interest profile i; meanwhile the re-
source has a set of topics z.
• Tag t is then chosen based on the user’s interest and the
resource’s topic
The process is depicted in a graphical form in Figure 2.
From the process described above, the joint probability of
resource, user and tag is written as
P(r, u, t) = ∑_{i,z} p(t|i, z) p(i|u) p(z|r) p(u) p(r)   (4)
Log likelihood L is used as the objective function to es-
timate all parameters. Note that p(u) and p(r) could be
obtained directly from observed data – the estimation thus
involves three parameters p(t|i, z), p(i|u) and p(z|r). L is
defined as

L = ∑_{r,u,t} n(r, u, t) log p(r, u, t)   (5)
EM algorithm is applied to estimate these parameters. In
the expectation step, the joint probability of hidden variables
I and Z given all observations is computed as
p(i, z|u, r, t) = p(t|i, z) p(i|u) p(z|r) / ∑_{i′,z′} p(t|i′, z′) p(i′|u) p(z′|r)
Subsequently, each parameter is re-estimated using the
p(i, z|u, r, t) just computed in the E step:
p(t|i, z) = ∑_{r,u} n(r, u, t) p(i, z|u, r, t) / ∑_{r,u,t′} n(r, u, t′) p(i, z|u, r, t′)

p(i|u) = ∑_{r,t} n(r, u, t) ∑_z p(i, z|u, r, t) / ∑_{r,t} n(r, u, t)

p(z|r) = ∑_{u,t} n(r, u, t) ∑_i p(i, z|u, r, t) / ∑_{u,t} n(r, u, t)
The algorithm iterates between E and M step until the log
likelihood or all parameter values converges.
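The alternating E and M steps can be sketched as follows (our own illustrative implementation; the triple format, the small smoothing floor, and the fixed iteration count are assumptions, and real del.icio.us data would require sparse representations):

```python
import random

def itm_em(triples, n_interests, n_topics, n_iters=30, seed=0):
    """EM for the Interest-Topic model on (resource, user, tag, count)
    tuples. Returns p(z|r), p(i|u) and p(t|i,z)."""
    rng = random.Random(seed)
    resources = sorted({r for r, u, t, c in triples})
    users = sorted({u for r, u, t, c in triples})
    tags = sorted({t for r, u, t, c in triples})
    tag_idx = {t: k for k, t in enumerate(tags)}
    norm = lambda v: [x / sum(v) for x in v]
    # Random normalized initialization of the three parameter families.
    p_z_r = {r: norm([rng.random() for _ in range(n_topics)]) for r in resources}
    p_i_u = {u: norm([rng.random() for _ in range(n_interests)]) for u in users}
    p_t_iz = {(i, z): norm([rng.random() for _ in tags])
              for i in range(n_interests) for z in range(n_topics)}
    for _ in range(n_iters):
        # Expected-count accumulators (tiny floor avoids division by zero).
        acc_zr = {r: [1e-12] * n_topics for r in resources}
        acc_iu = {u: [1e-12] * n_interests for u in users}
        acc_tiz = {key: [1e-12] * len(tags) for key in p_t_iz}
        for r, u, t, c in triples:
            k = tag_idx[t]
            # E step: joint posterior p(i, z | u, r, t).
            post = [[p_t_iz[(i, z)][k] * p_i_u[u][i] * p_z_r[r][z]
                     for z in range(n_topics)] for i in range(n_interests)]
            s = sum(sum(row) for row in post)
            for i in range(n_interests):
                for z in range(n_topics):
                    w = c * post[i][z] / s
                    acc_zr[r][z] += w
                    acc_iu[u][i] += w
                    acc_tiz[(i, z)][k] += w
        # M step: re-normalize expected counts.
        p_z_r = {r: norm(v) for r, v in acc_zr.items()}
        p_i_u = {u: norm(v) for u, v in acc_iu.items()}
        p_t_iz = {key: norm(v) for key, v in acc_tiz.items()}
    return p_z_r, p_i_u, p_t_iz
```

The returned p(z|r) vectors are exactly what the similarity step below consumes.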
Once all the models are learned, we use the distribution
of topics of a resource p(z|r) to compute similarity between
resources and the seed using Jensen-Shannon divergence.
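For completeness, Jensen-Shannon divergence between two topic distributions p(z|r) can be computed as follows (a minimal sketch using the symmetric, base-2 form, which is bounded in [0, 1]):

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    given as equal-length lists of probabilities."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence; terms with a_i = 0 contribute 0.
        return sum(ai * math.log(ai / bi, 2) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions give 0; disjoint ones give 1 (base-2 logs).
print(jensen_shannon([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(jensen_shannon([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Ranking resources by this divergence from the seed's topic distribution is then a single sort.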
Empirical Validation
To evaluate our approach, we collected data for three seed
resources: flytecomm², geocoder³ and wunderground⁴. The
first resource allows users to track flights given the airline
and flight number or departure and arrival airports; the sec-
ond resource returns coordinates of a given address; while,
the third resource supplies weather information for a partic-
ular location (given by zipcode, city and state, or airport).
Our goal is to find other resources that provide flight track-
ing, geocoding and weather information. Our approach is
to crawl del.icio.us to gather resources possibly related to
the seed; apply the probabilistic models to find the topic dis-
tribution of the resources; then rank all gathered resources
based on how similar their topic distribution is to the seed’s
topic distribution. The crawling strategy is defined as fol-
lows: for each seed
• Retrieve the 20 most popular tags that users have applied
to that resource
• For each of the tags, retrieve other resources that have
been annotated with that tag
• For each resource, collect all bookmarks that have been
created for it (i.e., resource-user-tag triples)
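The bookmarks collected this way can be aggregated into the resource-user-tag counts n(r, u, t) that the models consume; a minimal sketch (the bookmark tuples shown are hypothetical):

```python
from collections import Counter

def to_triples(bookmarks):
    """Aggregate bookmarks <resource, user, {tags}> into counts n(r, u, t)."""
    n = Counter()
    for resource, user, tags in bookmarks:
        for tag in tags:
            n[(resource, user, tag)] += 1
    return n

bookmarks = [
    ("http://geocoder.us", "alice", {"map", "geocoding", "address"}),
    ("http://geocoder.us", "bob", {"map", "gps"}),
]
n = to_triples(bookmarks)
print(n[("http://geocoder.us", "alice", "map")])  # 1
print(len(n))  # 5
```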
We wrote special-purpose Web page scrapers to extract this
information from del.icio.us. In principle, we could continue
to expand the collection of resources by gathering tags and
retrieving more resources that have been tagged with those
tags, but in practice, even after the small traversal we do, we
obtain more than 10 million triples for the wunderground
seed.
We obtained the datasets for the seeds flytecomm and
geocoder in May 2006 and for the seed wunderground in
January 2007. We reduced the dataset by omitting low
(fewer than ten) and high (more than ten thousand) fre-
quency tags and all the triples associated with those tags.
After this reduction, we were left with (a) 2,284,308 triples
with 3,562 unique resources; 14,297 unique tags; 34,594
unique users for the flytecomm seed; (b) 3,775,832 triples
with 5,572 unique resources; 16,887 unique tags and 46,764
2http://www.flytecomm.com/cgi-bin/trackflight/
3http://geocoder.us
4http://www.wunderground.com/
unique users for the geocoder seed; (c) 6,327,211 triples
with 7,176 unique resources; 77,056 unique tags and 45,852
unique users for the wunderground seed.
Next, we trained all three models on the data: pLSA,
MWA and ITM. We then used the learned topic distributions
to compute the similarity of the resources in each dataset to
the seed, and ranked the resources by similarity. We evalu-
ated the performance of each model by manually checking
the top 100 resources produced by the model according to
the criteria below:
• same: the resource has the same functionality if it pro-
vides an input form that takes the same type of data as the
seed and returns the same type of output data: e.g., a flight
tracker takes a flight number and returns flight status
• link-to: the resource contains a link to a page with the
same functionality as the seed (see criteria above). We
can easily automate the step that check the links for the
right functionality.
Although evaluation is performed manually now,
we plan to automate this process in the future by
using the form’s metadata to predict semantic types
of inputs (Heß & Kushmerick 2003), automatically
query the source, extract data from it and classify it
using the tools described in (Gazen & Minton 2005;
Lerman, Plangprasopchok, & Knoblock 2006). We will
then be able to validate that the resource has functionality
similar to the seed by comparing its input and output data
with that of the seed (Carman & Knoblock 2007). Note that
since each step in the automatic query and data extraction
process has some probability of failure, we will need to
identify many more relevant resources than required in
order to guarantee that we will be able to automatically
verify some of them.
Figure 3 shows the performance of different models
trained with either 40 or 100 topics (and interests) on the
three datasets. The figure shows the number of resources
within the top 100 that had the same functionality as the
seed or contained a link to a resource with the same func-
tionality. The Interest-Topic model performed slightly bet-
ter than pLSA, while both ITM and pLSA significantly out-
performed the MWA model. Increasing the dimensionality
of the latent variable Z from 40 to 100 generally improved
the results, although sometimes only slightly. Google’s find
“Similar pages” functionality returned 28, 29 and 15 re-
sources respectively for the three seeds flytecomm, geocoder
and wunderground, out of which 5, 6, and 13 had the same
functionality as the seed and 3, 0, 0 had a link to a resource
with the same functionality. The ITM model, in comparison,
returned three to five times as many relevant results.
Table 1 provides another view of performance of differ-
ent resource discovery methods. It shows how many of the
method’s predictions have to be examined before ten re-
sources with correct functionality are identified. Since the
ITM model ranks the relevant resources highest, fewer Web
sites have to be examined and verified (either manually or
automatically); thus, ITM is the most efficient model.
One possible reason why ITM performs slightly better
than pLSA might be because in the datasets we collected,
Figure 3: Performance of different models on the three datasets. Each model was trained with 40 or 100 topics. For ITM, we
fix interest to 20 interests across all different datasets. The bars show the number of resources within the top 100 returned by
each model that had the same functionality as the seed or contained a link to a resource with the same functionality as the seed.
there is low variance of user interest. The resources were
gathered starting from a seed and following related tag links;
therefore, we did not obtain any resources that were anno-
tated with different tags than the seed, even if they are tagged
by the same user who bookmarks the seed. Hence user-
resource co-occurrences are incomplete: they are limited by
a certain tag set. pLSA and ITM would perform similarly if
all users had the same interests. We believe that ITM would
perform significantly better than pLSA when variation of
user interest is high. We plan to gather more complete data
to examine ITM behavior in more detail.
Although the performance of pLSA and ITM is only slightly
different, pLSA is much better than ITM in terms of effi-
ciency since the former ignores user information and thus
reduces iterations required in its training process. However,
for some applications, such as personalized resource discov-
ery, it may be important to retain user information. For such
applications the ITM model, which retains this information,
may be preferred over pLSA.
Previous Research
Popular methods for finding documents relevant to a user
query rely on analysis of word occurrences (including meta-
data) in the document and across the document collection.
Information sources that generate their contents dynamically
in response to a query cannot be adequately indexed by con-
ventional search engines. Since they have sparse metadata,
the user has to find the correct search terms in order to get
results.

Table 1: The number of top predictions that have to be examined before the system finds ten resources with the desired functionality (the same or link-to). Each model was trained with 100 topics. For ITM, we fixed the number of interests at 20. *Note that Google returns only 8 and 6 positive resources out of 28 and 29 retrieved resources for the flytecomm and geocoder datasets, respectively.

               PLSA   MWA   ITM   GOOGLE*
  flytecomm      23    65    15      > 28
  geocoder       14    44    16      > 29
  wunderground   10    14    10        10
Recent research (Dong et al. 2004) proposed utilizing
metadata in Web services' WSDL and UDDI files in order
to find Web services offering similar operations in an
unsupervised fashion. The work is based on the heuristic
that similar operations tend to be described by similar
terms in the service description, operation name, and input
and output names. It clusters Web service operations using
cohesion and correlation scores (distances) computed from
the co-occurrence of observed terms. In this approach, a
given operation can only belong to a single cluster. Our
approach, by contrast, is grounded in a probabilistic topic
model, allowing a particular resource to be generated by
several topics, which is more intuitive and robust. In
addition, it yields a method to determine how a resource is
similar to others in certain respects.
Although our objective is similar, instead of words or
metadata created by the authors of online resources, our
approach utilizes the much denser descriptive metadata
generated in a social bookmarking system by the readers or
users of these resources. One issue to consider is that this
metadata cannot be used directly for categorizing resources,
since it reflects different user views, interests and writing
styles. Algorithms are needed to detect patterns in these
data and find the hidden topics which, once known, help to
correctly group similar resources together. We apply and
extend the probabilistic topic model, in particular pLSA
(Hofmann 1999), to address this issue.
Our model is conceptually motivated by the Author-Topic
model (Rosen-Zvi et al. 2004): a user who annotates a
resource can be viewed as an author who composes a document.
The aim of that approach is to learn the topic distribution
for a particular author, while our goal is to learn the topic
distribution for a particular resource. Gibbs sampling was
used for parameter estimation in that model; we instead use
the generic EM algorithm to estimate parameters, since it is
analytically straightforward and easy to implement.
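As a concrete illustration, the generic EM updates for a pLSA-style model over a resource-tag count matrix can be sketched as follows. This is a toy NumPy implementation with illustrative data, not the exact code or data used in our experiments:

```python
import numpy as np

def plsa_em(counts, n_topics, n_iter=50, seed=0):
    """Fit pLSA by EM: P(w|d) = sum_z P(z|d) P(w|z).

    counts[d, w] is the number of times tag w was applied to resource d.
    Returns (P(z|d), P(w|z)), each row-normalized.
    """
    rng = np.random.default_rng(seed)
    n_d, n_w = counts.shape
    p_z_d = rng.random((n_d, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_w))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (n_d, n_topics, n_w).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        weighted = counts[:, None, :] * joint        # n(d,w) * P(z|d,w)
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts.
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# Toy data: 4 resources x 4 tags with two obvious topics.
counts = np.array([[5., 3., 0., 0.],
                   [4., 2., 0., 0.],
                   [0., 0., 6., 2.],
                   [0., 1., 5., 3.]])
p_z_d, p_w_z = plsa_em(counts, n_topics=2, n_iter=30)
```

Extending this to ITM would add a per-user interest variable to the latent structure, but the alternating E- and M-steps have the same shape.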
The most relevant work (Wu, Zhang, & Yu 2006) applies a
multi-way aspect model to social annotation data in
del.icio.us. That model does not explicitly separate user
interests from resource topics as our model does. Moreover,
that work focuses on the emergence of semantics and on
personalized resource search, and is evaluated by
demonstrating that it can alleviate the problems of tag
sparseness and synonymy in the task of searching for
resources by a tag. In our work, on the other hand, the
model is applied to search for resources similar to a given
resource.
There is another line of research on resource discovery
that exploits the social network information of the web
graph. Google (Brin & Page 1998) uses a visitation rate
derived from resources' connectivity to measure their
popularity. HITS (Kleinberg 1999) also uses the web graph
to rate relevant resources, measuring their authority and
hub values. ARC (Chakrabarti et al. 1998) extends HITS by
including the content of resource hyperlinks to improve
performance. Although the objective is somewhat similar,
our work instead exploits resource metadata generated by
the community to compute resources' relevance scores.
Conclusion
We have presented a probabilistic model of the social
annotation process and described an approach that uses the
model in the resource discovery task. Although we cannot
compare its performance to a state-of-the-art search engine
directly, the experimental results show the method to be
promising.
There remain many issues to pursue. First, we would like
to study the output of the models, in particular, what the user
interests tell us. We would also like to automate the source
modeling process by identifying the resource’s HTML form
and extracting its metadata. We will then use techniques de-
scribed in (Heß & Kushmerick 2003) to predict the seman-
tic types of the resource’s input parameters. This will enable
us to automatically query the resource and classify the re-
turned data using tools described in (Gazen & Minton 2005;
Lerman, Plangprasopchok, & Knoblock 2006). We will
then be able to validate that the resource has the same func-
tionality as the seed by comparing its input and output data
with that of the seed (Carman & Knoblock 2007). This will
allow agents to fully exploit our system for integrating in-
formation across different resources without human inter-
vention.
Our next goal is to generalize the resource discovery
process so that instead of starting with a seed, a user can
start with a query or some description of the information
need. We will investigate different methods for translating
the query into tags that can be used to harvest data from
del.icio.us. In addition, there is other evidence potentially
useful for resource categorization such as user comments,
content and input fields in the resource. We plan to extend
the present work to unify evidence both from annotation and
resources’ content to improve the accuracy of resource dis-
covery.
Acknowledgements This research is based upon work sup-
ported in part by the NSF under Award No. CNS-0615412
and in part by DARPA under Contract No. NBCHD030010.
References
[Brin & Page 1998] Brin, S., and Page, L. 1998. The
anatomy of a large-scale hypertextual web search engine.
Computer Networks and ISDN Systems 30(1–7):107–117.
[Carman & Knoblock 2007] Carman, M. J., and Knoblock,
C. A. 2007. Learning semantic descriptions of web infor-
mation sources. In Proc. of IJCAI.
[Chakrabarti et al. 1998] Chakrabarti, S.; Dom, B.; Gibson,
D.; Kleinberg, J.; Raghavan, P.; and Rajagopalan, S. 1998.
Automatic resource list compilation by analyzing hyper-
link structure and associated text. In Proceedings of the
7th International World Wide Web Conference.
[Dempster, Laird, & Rubin 1977] Dempster, A. P.; Laird,
N. M.; and Rubin, D. B. 1977. Maximum likelihood from
incomplete data via the EM algorithm. Journal of the Royal
Statistical Society, Series B (Methodological) 39(1):1–38.
[Dong et al. 2004] Dong, X.; Halevy, A. Y.; Madhavan, J.;
Nemes, E.; and Zhang, J. 2004. Similarity search for web
services. In Proc. of VLDB, 372–383.
[Gazen & Minton 2005] Gazen, B. C., and Minton, S. N.
2005. Autofeed: an unsupervised learning system for gen-
erating webfeeds. In Proc. of K-CAP 2005, 3–10.
[Heß & Kushmerick 2003] Heß, A., and Kushmerick, N.
2003. Learning to attach semantic metadata to web ser-
vices. In International Semantic Web Conference, 258–
[Hofmann 1999] Hofmann, T. 1999. Probabilistic latent
semantic analysis. In Proc. of UAI, 289–296.
[Kleinberg 1999] Kleinberg, J. M. 1999. Authoritative
sources in a hyperlinked environment. Journal of the ACM
46(5):604–632.
[Lerman, Plangprasopchok, & Knoblock 2006] Lerman,
K.; Plangprasopchok, A.; and Knoblock, C. A. 2006.
Automatically labeling the inputs and outputs of web
services. In Proc. of AAAI.
[Lerman, Plangprasopchok, & Knoblock 2007] Lerman,
K.; Plangprasopchok, A.; and Knoblock, C. A. 2007.
Semantic labeling of online information sources. Interna-
tional Journal on Semantic Web and Information Systems,
Special Issue on Ontology Matching.
[Lin 1991] Lin, J. 1991. Divergence measures based on
the Shannon entropy. IEEE Transactions on Information
Theory 37(1):145–151.
[Mathes 2004] Mathes, A. 2004. Folksonomies: coop-
erative classification and communication through shared
metadata.
[Popescul et al. 2001] Popescul, A.; Ungar, L.; Pennock,
D.; and Lawrence, S. 2001. Probabilistic models for uni-
fied collaborative and content-based recommendation in
sparse-data environments. In 17th Conference on Uncer-
tainty in Artificial Intelligence, 437–444.
[Rosen-Zvi et al. 2004] Rosen-Zvi, M.; Griffiths, T.;
Steyvers, M.; and Smyth, P. 2004. The author-topic model
for authors and documents. In AUAI ’04: Proceedings
of the 20th conference on Uncertainty in artificial intelli-
gence, 487–494. Arlington, Virginia, United States: AUAI
Press.
[Thakkar, Ambite, & Knoblock 2005] Thakkar, S.; Am-
bite, J. L.; and Knoblock, C. A. 2005. Composing, op-
timizing, and executing plans for bioinformatics web ser-
vices. VLDB Journal 14(3):330–353.
[Wu, Zhang, & Yu 2006] Wu, X.; Zhang, L.; and Yu, Y.
2006. Exploring social annotations for the semantic web.
In WWW ’06: Proceedings of the 15th international confer-
ence on World Wide Web, 417–426. New York, NY, USA:
ACM Press.
Personalizing Image Search Results on Flickr
Kristina Lerman, Anon Plangprasopchok and Chio Wong
University of Southern California
Information Sciences Institute
4676 Admiralty Way
Marina del Rey, California 90292
{lerman,plangpra,chiowong}@isi.edu
Abstract
The social media site Flickr allows users to upload their pho-
tos, annotate them with tags, submit them to groups, and also
to form social networks by adding other users as contacts.
Flickr offers multiple ways of browsing or searching it. One
option is tag search, which returns all images tagged with a
specific keyword. If the keyword is ambiguous, e.g., “beetle”
could mean an insect or a car, tag search results will include
many images that are not relevant to the sense the user had in
mind when executing the query. We claim that users express
their photography interests through the metadata they add in
the form of contacts and image annotations. We show how
to exploit this metadata to personalize search results for the
user, thereby improving search performance. First, we show
that we can significantly improve search precision by filtering
tag search results by user’s contacts or a larger social network
that includes those contact’s contacts. Secondly, we describe
a probabilistic model that takes advantage of tag information
to discover latent topics contained in the search results. The
users’ interests can similarly be described by the tags they
used for annotating their images. The latent topics found by
the model are then used to personalize search results by find-
ing images on topics that are of interest to the user.
Introduction
The photosharing site Flickr is one of the earliest and more
popular examples of the new generation of Web sites, la-
beled social media, whose content is primarily user-driven.
Other examples of social media include: blogs (personal
online journals that allow users to share thoughts and re-
ceive feedback on them), Wikipedia (a collectively writ-
ten and edited online encyclopedia), and Del.icio.us and
Digg (Web sites that allow users to share, discuss, and rank
Web pages, and news stories respectively). The rise of so-
cial media underscores a transformation of the Web as fun-
damental as its birth. Rather than simply searching for,
and passively consuming, information, users are collabora-
tively creating, evaluating, and distributing information. In
the near future, new information-processing applications en-
abled by social media will include tools for personalized in-
formation discovery, applications that exploit the “wisdom
of crowds” (e.g., emergent semantics and collaborative in-
Copyright © 2018, American Association for Artificial Intelli-
gence (www.aaai.org). All rights reserved.
formation evaluation), deeper analysis of community struc-
ture to identify trends and experts, and many others still dif-
ficult to imagine.
Social media sites share four characteristics: (1) Users
create or contribute content in a variety of media types;
(2) Users annotate content with tags; (3) Users evaluate con-
tent, either actively by voting or passively by using content;
and (4) Users create social networks by designating other
users with similar interests as contacts or friends. In the pro-
cess of using these sites, users are adding rich metadata in
the form of social networks, annotations and ratings. Avail-
ability of large quantities of this metadata will lead to the
development of new algorithms to solve a variety of infor-
mation processing problems, from new recommendation to
improved information discovery algorithms.
In this paper we show how user-added metadata on Flickr
can be used to improve image search results. We claim that
users express their photography interests on Flickr, among
other ways, by adding photographers whose work they ad-
mire to their social network and through the tags they use
to annotate their own images. We show how to exploit this
information to personalize search results to the individual
user.
The rest of the paper is organized as follows. First, we
describe tagging and why it can be viewed as a useful ex-
pression of user’s interests, as well as some of the challenges
that arise when working with tags. In Section “Anatomy of
Flickr” we describe Flickr and its functionality in greater de-
tails, including its tag search capability. In Section “Data
collections” we describe the data sets we have collected
from Flickr, including image search results and user infor-
mation. In Sections “Personalizing by contacts” and “Per-
sonalizing by tags” we present the two approaches to per-
sonalize search results for an individual user by filtering by
contacts and filtering by tags respectively. We evaluate the
performance of each method on our Flickr data sets. We
conclude by discussing results and future work.
Tagging for organizing images
Tags are keyword-based metadata associated with some con-
tent. Tagging was introduced as a means for users to orga-
nize their own content in order to facilitate searching and
browsing for relevant information. It was popularized by
the social bookmarking site Delicious1, which allowed users
to add descriptive tags to their favorite Web sites. In re-
cent years, tagging has been adopted by many other so-
cial media sites to enable users to tag blogs (Technorati),
images (Flickr), music (Last.fm), scientific papers (CiteU-
Like), videos (YouTube), etc.
The distinguishing feature of tagging systems is that they
use an uncontrolled vocabulary. This is in marked contrast
to previous attempts to organize information via formal tax-
onomies and classification systems. A formal classification
system, e.g., Linnaean classification of living things, puts an
object in a unique place within a hierarchy. Thus, a tiger
(Panthera tigris) is a carnivorous mammal that belongs to
the genus Panthera, which also includes large cats, such as
lions and leopards. Tiger is also part of the felidae family,
which includes small cats, such as the familiar house cat of
the genus Felis.
Tagging is a non-hierarchical and non-exclusive cat-
egorization, meaning that a user can choose to high-
light any one of the tagged object’s facets or proper-
ties. Adapting the example from Golder and Huber-
man (Golder and Huberman 2005), suppose a user takes an
image of a Siberian tiger. Most likely, the user is not famil-
iar with the formal name of the species (P. tigris altaica) and
will tag it with the keyword “tiger.” Depending on his needs
or mood, the user may even tag it with more general or spe-
cific terms, such as “animal,” “mammal” or “Siberian.” The
user may also note that the image was taken at the “zoo”
and that he used his “telephoto” lens to get the shot. Rather
than forcing the image into a hierarchy or multiple hierar-
chies based on the equipment used to take the photo, the
place where the image was taken, type of animal depicted,
or even the animal’s provenance, tagging system allows the
user to locate the image by any of its properties by filtering
the entire image set on any of the tags. Thus, searching on
the tag “tiger” will return all the images of tigers the user has
taken, including Siberian and Bengal tigers, while searching
on “Siberian” will return the images of Siberian animals,
people or artifacts the user has photographed. Filtering on
both “Siberian” and “tiger” tags will return the intersection
of the images tagged with those keywords, in other words,
the images of Siberian tigers.
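The filtering behavior just described amounts to set intersection over a tag-to-images index. A minimal sketch with hypothetical image IDs, only to illustrate the mechanics:

```python
# Map each tag to the set of images annotated with it (toy data).
tag_index = {
    "tiger":    {"img1", "img2", "img3"},
    "siberian": {"img2", "img4"},
    "zoo":      {"img3"},
}

def filter_by_tags(tag_index, tags):
    """Images carrying ALL the given tags: intersect the per-tag sets."""
    sets = [tag_index.get(t, set()) for t in tags]
    return set.intersection(*sets) if sets else set()

all_tigers = filter_by_tags(tag_index, ["tiger"])               # every tiger image
siberian_tigers = filter_by_tags(tag_index, ["siberian", "tiger"])  # the intersection
```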
As Golder and Huberman point out, tagging systems are
vulnerable to problems that arise when users try to attach
semantics to objects through keywords. These problems are
exacerbated in social media where users may use different
tagging conventions, but still want to take advantage of the
others’ tagging activities. The first problem is of homonymy,
where the same tag may have different meanings. For exam-
ple, the “tiger” tag could be applied to the mammal or to
Apple computer’s operating system. Searching on the tag
“tiger” will return many images unrelated the carnivorous
mammals, requiring the user to sift through possibly a large
amount of irrelevant content. Another problem related to
homonymy is that of polysemy, which arises when a word
has multiple related meanings, such as “apple” to mean the
company or any of its products. Another problem is that
1http://del.icio.us
of synonymy, or multiple words having the same or related
meaning, for example, “baby” and “infant.” The problem
here is that if the user wants all images of young children
in their first year of life, searching on the tag “baby” may
not return all relevant images, since other users may have
tagged similar photographs with “infant.” Of course, plurals
(“tigers” vs ”tiger”) and many other tagging idiosyncrasies
(”myson” vs “son”) may also confound a tagging system.
Golder and Huberman identify yet another problem that
arises when using tags for categorization — that of the “ba-
sic level.” A given item can be described by terms along
a spectrum of specificity, ranging from specific to general.
A Siberian tiger can be described as a “tiger,” but also as
a “mammal” and “animal.” The basic level is the category
people choose for an object when communicating to others
about it. Thus, for most people, the basic level for canines
is “dog,” not the more general “animal” or the more specific
“beagle.” However, what constitutes the basic level varies
between individuals, and to a large extent depends on the
degree of expertise. To a dog expert, the basic level may be
the more specific “beagle” or “poodle,” rather than “dog.”
The basic level problem arises when different users choose
to describe the item at different levels of specificity. For ex-
ample, a dog expert tags an image of a beagle as “beagle,”
whereas the average user may tag a similar image as “dog.”
Unless the user is aware of the basic level variation and sup-
plies more specific (and more general) keywords during tag
search, he may miss a large number of relevant images.
Despite these problems, tagging is a lightweight, flexi-
ble categorization system. The growing number of tagged
images provides evidence that users are adopting tag-
ging on Flickr (Marlow et al. 2006). There is specula-
tion (Mika 2005) that collective tagging will lead to a com-
mon informal classification system, dubbed a “folksonomy,”
that will be used to organize all information from all users.
Developing value-added systems on top of tags, e.g., which
allow users to better browse or search for relevant items, will
only accelerate wider acceptance of tagging.
Anatomy of Flickr
Flickr consists of a collection of interlinked user, photo, tag
and group pages. A typical Flickr photo page is shown in
Figure 1. It provides a variety of information about the im-
age: who uploaded it and when, what groups it has been sub-
mitted to, its tags, who commented on the image and when,
how many times the image was viewed or bookmarked as a
“favorite.” Clicking on a user’s name brings up that user’s
photo stream, which shows the latest photos she has up-
loaded, the images she marked as “favorite,” and her profile,
which gives information about the user, including a list of
her contacts and the groups she belongs to. Clicking on the tag
shows user’s images that have been tagged with this key-
word, or all public images that have been similarly tagged.
Finally, the group link brings up the group’s page, which
shows the photo group, group membership, popular tags,
discussions and other information about the group.
Figure 1: A typical photo page on Flickr
Groups Flickr allows users to create special interest
groups on any imaginable topic. There are groups for
showcasing exceptional images, groups for images of circles
within a square, groups for closeups of flowers, groups for the
color red (and every other color and shade), groups for rating
submitted images, and groups used solely to generate comments.
Some groups are even set up as games, such as The Infinite
Flickr, where the rule is that a user posts an image of her-
self looking at the screen showing the last image (of a user
looking at the screen showing the next-to-last image, and so on).
There is redundancy and duplication in groups. For exam-
ple, groups for child photography include Children’s Por-
traits, Kidpix, Flickr’s Cutest Kids, Kids in Action, Tod-
dlers, etc. A user chooses one, or usually several, groups to
which to submit an image. We believe that group names can
be viewed as a kind of publicly agreed upon tags.
Contacts Flickr allows users to designate others as friends
or contacts and makes it easy to track their activities. A
single click on the “Contacts” hyperlink shows the user the
latest images from his or her contacts. Tracking activities of
friends is a common feature of many social media sites and
is one of their major draws.
Interestingness Flickr uses the “interestingness” criterion
to evaluate the quality of the image. Although the algorithm
that is used to compute this is kept secret to prevent gaming
the system, certain metrics are taken into account: “where
the clickthroughs are coming from; who comments on it and
when; who marks it as a favorite; its tags and many more
things which are constantly changing.”2
Browsing and searching
Flickr offers the user a number of browsing and searching
methods. One can browse by popular tags, through the
groups directory, through the Explore page and the calen-
dar interface, which provides access to the 500 most “inter-
esting” images on any given day. A user can also browse
geotagged images through the recently introduced map in-
terface. Finally, Flickr allows for social browsing through
the “Contacts” interface that shows in one place the recent
images uploaded by the user’s designated contacts.
Flickr allows searching for photos using full text or tag
search. A user can restrict the search to all public photos,
his or her own photos, photos she marked as her favorite, or
photos from a specific contact. The advanced search inter-
face currently allows further filtering by content type, date
and camera.
Search results are by default displayed in reverse chrono-
logical order of being uploaded, with the most recent images
on top. Another available option is to display images by their
“interestingness” value, with the most “interesting” images
on top.
2http://flickr.com/explore/interesting/
Personalizing search results
Suppose a user is interested in wildlife photography and
wants to see images of tigers on Flickr. The user can search
for all public images tagged with the keyword “tiger.” As
of March 2007, such a search returns over 55,500 results.
When images are arranged by their “interestingness,” the
first page of results contains many images of tigers, but
also of a tiger shark, cats, butterfly and a fish. Subsequent
pages of search results show, in addition to tigers, children in
striped suits, flowers (tiger lily), more cats, Mac OS X (tiger)
screenshots, golfing pictures (Tiger Woods), etc. In other
words, results include many false positives, images that are
irrelevant to what the user had in mind when executing the
search.
We assume that when the search term is ambiguous, the
sense that the user has in mind is related to her interests. For
example, when a child photographer is searching for pictures
of a “newborn,” she is most likely interested in photographs
of human babies, not kittens, puppies, or ducklings. Simi-
larly, a nature photographer specializing in macro photogra-
phy is likely to be interested in insects when searching on
the keyword “beetle,” not a Volkswagen car. Users express
their photography preferences and interests in a number of
ways on Flickr. They express them through their contacts
(photographers they choose to watch), through the images
they upload to Flickr, through the tags they add to these im-
ages, through the groups they join, and through the images
of other photographers they mark as their favorite. In this pa-
per we show that we can personalize results of tag search by
exploiting information about user’s preferences. In the sec-
tions below, we describe two search personalization meth-
ods: one that relies on user-created tags and one that exploits
user’s contacts. We show that both methods improve search
performance by reducing the number of false positives, or
irrelevant results, returned to the user.
Data collections
To show how user-created metadata can be used to personal-
ize results of tag search, we retrieved a variety of data from
Flickr using their public API.
Data sets
We collected images by performing a single keyword tag
search of all public images on Flickr. We specified that the
returned images are ordered by their “interestingness” value,
with most interesting images first. We retrieved the links to
the top 4500 images for each of the following search terms:
tiger possible senses include (a) big cat ( e.g., Asian tiger),
(b) shark (Tiger shark), (c) flower (Tiger Lily), (d) golfing
(Tiger Woods), etc.
newborn possible senses include (a) a human baby, (b) kit-
ten, (c) puppy, (d) duckling, (e) foal, etc.
beetle possible senses include (a) a type of insect and (b)
Volkswagen car model
For each image in the set, we used Flickr’s API to retrieve
the name of the user who posted the image (image owner),
and all the image’s tags and groups.
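The retrieval step above can be sketched as composing requests against Flickr's public REST API; the endpoint and parameter names below follow Flickr's documented flickr.photos.search method as we understand it, the API key is a placeholder, and no network call is made here:

```python
from urllib.parse import urlencode

FLICKR_REST = "https://api.flickr.com/services/rest/"

def tag_search_url(api_key, tag, page=1, per_page=500):
    """Build a flickr.photos.search request URL ordered by interestingness."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,            # placeholder; a real key is required
        "tags": tag,
        "sort": "interestingness-desc",
        "per_page": per_page,          # Flickr serves at most 500 photos per page
        "page": page,
        "format": "json",
        "nojsoncallback": 1,
    }
    return FLICKR_REST + "?" + urlencode(params)

# Top 4500 images = 9 pages of 500.
urls = [tag_search_url("API_KEY", "tiger", page=p) for p in range(1, 10)]
```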
Table 1: Relevance results for the top 500 images retrieved
by tag search.

  query     relevant   not relevant   precision
  newborn        412             83        0.82
  tiger          337            156        0.67
  beetle         232            268        0.46
Users
Our objective is to personalize tag search results; therefore,
to evaluate our approach, we need to have users to whose
interests the search results are being tailored. We identi-
fied four users who are interested in the first sense of each
search term. For the newborn data set, those users were one
of the authors of the paper and three other contacts within
that user’s social network who were known to be interested
in child photography. For the other datasets, the users were
chosen from among the photographers whose images were
returned by the tag search. We studied each user’s profile
to confirm that the user was interested in that sense of the
search term. We specifically looked at group membership
and user’s tags. Thus, for the tiger data set, groups that
pointed to the user’s interest in P. tigris were Big Cats, Zoo,
The Wildlife Photography, etc. In addition to group mem-
bership, tags that pointed to user’s interest in a topic, e.g., for
the beetle data set, we assumed that users who used tags na-
ture and macro were interested in insects rather than cars.
Likewise, for the newborn data set, users who had uploaded
images they tagged with baby and child were probably in-
terested in human newborns.
For each of the twelve users, we collected the names of
their contacts, or Level 1 contacts. For each of these con-
tacts, we also retrieved the list of their contacts. These are
called Level 2 contacts. In addition to contacts, we also re-
trieved the list of all the tags, and their frequencies, that the
users had used to annotate their images. In addition to all
tags, we also extracted a list of related tags for each user.
These are the tags that appear together with the tag used as
the search term in the user’s photos. In other words, suppose
a user, who is a child photographer, had used tags such as
“baby”, “child”, “newborn”, and “portrait” in her own im-
ages. Tags related to newborn are all the tags that co-occur
with the “newborn” tag in the user’s own images. This in-
formation was also extracted via Flickr’s API.
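The contact and related-tag collection steps reduce to a two-level traversal over per-user metadata. A sketch over toy in-memory mappings (in practice these mappings would be fetched through Flickr's API; all names here are illustrative):

```python
def contact_levels(contacts_of, user):
    """Level 1 = user's contacts; Level 2 = their contacts (minus Level 1 and the user)."""
    level1 = set(contacts_of.get(user, []))
    level2 = set()
    for c in level1:
        level2 |= set(contacts_of.get(c, []))
    level2 -= level1 | {user}
    return level1, level2

def related_tags(user_photos, term):
    """Tags that co-occur with `term` in the user's own photos."""
    related = set()
    for tags in user_photos:          # each photo is represented by its tag set
        if term in tags:
            related |= tags - {term}
    return related

contacts_of = {"alice": ["bob", "carol"], "bob": ["dave", "alice"], "carol": ["bob"]}
level1, level2 = contact_levels(contacts_of, "alice")

photos = [{"newborn", "baby", "portrait"}, {"dog", "park"}]
rel = related_tags(photos, "newborn")
```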
Search results
We manually evaluated the top 500 images in each data set
and marked each as relevant if it was related to the first sense
of the search term listed above, not relevant otherwise, or
undecided if the evaluator could not understand the image
well enough to judge its relevance.
In Table 1, we report the precision of the search within the
500 labeled images, as judged from the point of view of the
searching users. Precision is defined as the ratio of relevant
images within the result set over the 500 retrieved images.
Precision of tag search on these sample queries is not very
high due to the presence of false positives — images not rel-
evant to the sense of the search term the user had in mind.
In the sections below we show how to improve search per-
formance by taking into consideration supplementary infor-
mation about user’s interests provided by her contacts and
tags.
Personalizing by contacts
Flickr encourages users to designate others as contacts by
making it easy to view the latest images they submit
through the “Contacts” interface. Users add contacts for a
variety of reasons, including keeping in touch with friends
and family, as well as to track photographers whose work
is of interest to them. We claim that the latter is the
dominant reason; therefore, we view a user's contacts as an
expression of the user's interests. In this section we show
that we can improve tag search results by filtering through
the user's contacts. To personalize search results for a
particular user, we simply restrict the images returned by
the tag search to those created by the user's contacts.
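This personalization step, and the precision/recall figures used to evaluate it, can be sketched as follows; the image and user identifiers are hypothetical, and relevance labels stand in for the manual judgments described earlier:

```python
def filter_by_contacts(results, owner_of, contacts):
    """Keep only the search results whose owner is in the user's contact set."""
    return [img for img in results if owner_of[img] in contacts]

def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved set against the relevant set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

results = ["a", "b", "c", "d"]
owner_of = {"a": "u1", "b": "u2", "c": "u1", "d": "u3"}
filtered = filter_by_contacts(results, owner_of, {"u1"})
p, r = precision_recall(filtered, relevant={"a", "b", "c"})
```

Expanding the contact set to Level 2 corresponds to passing the union of both contact levels as `contacts`, trading some precision for recall.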
Table 2 shows how many of the 500 images in each data
set came from a user's contacts. The column labeled “# L1”
gives the number of the user's Level 1 contacts. The follow-
ing columns show how many of the images were marked as
relevant or not relevant by the filtering method, as well as
precision and recall relative to the 500 images in each data
set. Recall measures the fraction of relevant retrieved im-
ages relative to all relevant images within the data set. The
last column “improv” shows percent improvement in preci-
sion over the plain (unfiltered) tag search.
As Table 2 shows, filtering by contacts improves the pre-
cision of tag search for most users anywhere from 22% to
over 100% when compared to plain search results in Ta-
ble 1. The best performance is attained for users within the
newborn set, with a large number of relevant images cor-
rectly identified as being relevant, and no irrelevant images
admitted into the result set. The tiger set shows an average
precision gain of 42% over four users, while the beetle set
shows an 85% gain.
Increase in precision is achieved by reducing the number
of false positives, or irrelevant images that are marked as rel-
evant by the search method. Unfortunately, this gain comes
at the expense of recall: many relevant images are missed
by this filtering method. In order to increase recall, we en-
large the contacts set by considering two levels of contacts:
user’s contacts (Level 1) and her contacts’ contacts (Level
2). The motivation for this is that if the contact relation-
ship expresses common interests among users, user’s inter-
ests will also be similar to those of her contacts’ contacts.
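A minimal sketch of this two-level expansion, assuming the contact lists are available as an adjacency map from user id to contact ids:

```python
def expand_contacts(contacts_of, user):
    """Union of a user's Level 1 contacts and her contacts' contacts
    (Level 2).  `contacts_of` maps a user id to an iterable of contact
    ids; this adjacency-map layout is an assumption for illustration."""
    level1 = set(contacts_of.get(user, ()))
    level2 = set()
    for contact in level1:
        level2.update(contacts_of.get(contact, ()))
    # Exclude the user herself, who may appear as a reciprocal contact.
    return (level1 | level2) - {user}
```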
The second half of Table 2 shows the performance of
filtering the search results by the combined set of user’s
Level 1 and Level 2 contacts. This method identifies many
more relevant images, although it also admits more irrele-
vant images, thereby decreasing precision. This method still
shows precision improvement over plain search, with pre-
cision gain of 9%, 16% and 11% respectively for the three
data sets.
Personalizing by tags
In addition to creating lists of contacts, users express their
photography interests through the images they post on
Flickr. We cannot yet automatically understand the content
of images. Instead, we turn to the metadata added by the
user to the image to provide a description of the image. The
metadata comes in a variety of forms: image title, descrip-
tion, comments left by other users, tags the image owner
added to it, as well as the groups to which she submitted the
image. As we described in the paper, tags are useful im-
age descriptors, since they are used to categorize the image.
Similarly, group names can be viewed as public tags that a
community of users have agreed on. Submitting an image to
a group is, therefore, equivalent to tagging it with a public tag.
In the section below we describe a probabilistic model
that takes advantage of the images’ tag and group informa-
tion to discover latent topics in each search set. The users’
interests can similarly be described by collections of tags
they had used to annotate their own images. The latent top-
ics found by the model can be used to personalize search
results by finding images on topics that are of interest to a
particular user.
Model definition
We need to consider four types of entities in the model: a
set of users U = {u1, ..., un}, a set of images or photos
I = {i1, ..., im}, a set of tags T = {t1, ..., to}, and a set
of groups G = {g1, ..., gp}. A photo ix posted by owner
ux is described by a set of tags {tx1, tx2, ...} and submitted
to several groups {gx1, gx2, ...}. The post could be viewed
as a tuple < ix, ux, {tx1, tx2, ...}, {gx1, gx2, ...} >. We as-
sume that there are n users, m posted photos and p groups
in Flickr. Meanwhile, the vocabulary size of tags is o. In
order to filter images retrieved by Flickr in response to tag
search and personalize them for a user u, we compute the
conditional probability p(i|u), that describes the probability
that the photo i is relevant to u based on her interests. Im-
ages with high enough p(i|u) are then presented to the user
as relevant images.
As mentioned earlier, users choose tags from an uncon-
trolled vocabulary according to their styles and interests.
Images of the same subject could be tagged with different
keywords although they have similar meaning. Meanwhile,
the same keyword could be used to tag images of different
subjects. In addition, a particular tag frequently used by one
user may have a different meaning to another user. Proba-
bilistic models offer a mechanism for addressing the issues
of synonymy, polysemy and tag sparseness that arise in tag-
ging systems.
We use a probabilistic topic
model (Rosen-Zvi et al. 2004) to model user’s image
posting behavior. As in a typical probabilistic topic model,
topics are hidden variables, representing knowledge cate-
gories. In our case, topics are equivalent to image owner’s
interests. The process of photo posting by a particular user
could be described as a stochastic process:
• User u decides to post a photo i.
user # L1 rel. not rel. Pr Re improv # L1+L2 rel. not rel. Pr Re improv
newborn
user1 719 232 0 1.00 0.56 22% 49,539 349 62 0.85 0.85 4%
user2 154 169 0 1.00 0.41 22% 10,970 317 37 0.9 0.77 10%
user3 174 147 0 1.00 0.36 22% 13,153 327 39 0.89 0.79 9%
user4 128 132 0 1.00 0.32 22% 8,439 310 29 0.91 0.75 11%
tiger
user5 63 11 1 0.92 0.03 37% 13,142 255 71 0.78 0.76 16%
user6 103 78 3 0.96 0.23 44% 14,425 266 83 0.76 0.79 13%
user7 62 65 1 0.98 0.19 47% 7,270 226 60 0.79 0.67 18%
user8 56 30 0 0.97 0.09 44% 7,073 240 63 0.79 0.71 18%
beetle
user9 445 18 1 0.95 0.08 106% 53,480 215 221 0.49 0.93 7%
user10 364 35 8 0.81 0.15 77% 41,568 208 217 0.49 0.90 7%
user11 783 78 25 0.75 0.34 65% 62,610 218 227 0.49 0.94 7%
user12 102 7 1 0.88 0.03 90% 14,324 163 152 0.52 0.70 13%
Table 2: Results of filtering tag search by user’s contacts. “# L1” denotes the number of Level 1 contacts and “# L1+L2” shows
the number of Level 1 and Level 2 contacts, with the succeeding columns displaying filtering results of that method: the number
of images marked relevant or not relevant, as well as precision and recall of the filtering method relative to the top 500 images.
The columns marked “improv” show improvement in precision over plain tag search results.
Figure 2: Graphical representation for model-based infor-
mation filtering. U , T , G and Z denote variables “User”,
“Tag”, “Group”, and “Topic” respectively. Nt represents the
number of tag occurrences for one photo (by the photo
owner); D represents the number of all photos on Flickr.
Meanwhile, Ng denotes the number of groups for a particu-
lar photo.
• Based on user u’s interests and the subject of the photo, a
set of topics z are chosen.
• Tag t is then selected based on the set of topics chosen in
the previous state.
• In case that u decides to expose her photo to some groups,
a group g is then selected according to the chosen topics.
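The stochastic process above can be sketched as follows; the dict-of-dicts layout for the distributions is an assumption made for illustration:

```python
import random

def sample_post(user, p_z_u, p_t_z, p_g_z, n_tags=3, n_groups=1, rng=random):
    """Sample the tags and groups of one photo posted by `user`, following
    the generative story above: for each tag (group) slot, first draw a
    topic z from p(z|u), then draw the tag (group) from p(t|z) (p(g|z))."""
    def draw(dist):
        # Inverse-CDF draw from a dict {outcome: probability}.
        r, acc, outcome = rng.random(), 0.0, None
        for outcome, p in dist.items():
            acc += p
            if r <= acc:
                break
        return outcome
    tags = [draw(p_t_z[draw(p_z_u[user])]) for _ in range(n_tags)]
    groups = [draw(p_g_z[draw(p_z_u[user])]) for _ in range(n_groups)]
    return tags, groups
```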
The process is depicted in a graphical form in Figure 2.
We do not treat the image i as a variable in the model but
view it as a co-occurrence of a user, a set of tags and a set of
groups. From the process described above, we can represent
the joint probability of user, tag and group for a particular
photo as
p(i) = p(ui, Ti, Gi)
     = p(ui) · ∏_t ( Σ_z p(z|ui) · p(t|z) )^ni(t) · ∏_g ( Σ_z p(z|ui) · p(g|z) )^ni(g)
Note that it is straightforward to exclude photo’s group
information from the above equation simply by omitting the
terms relevant to g. Here nt and ng are the numbers of all possible tags
and groups, respectively, in the data set. Meanwhile, ni(t)
and ni(g) act as indicator functions: ni(t) = 1 if an image i
is tagged with tag t; otherwise, it is 0. Similarly, ni(g) = 1
if an image i is submitted to group g; otherwise, it is 0. k is
the predefined number of topics.
The joint probability of photos in the data set I is defined as

p(I) = ∏_m p(im).
In order to estimate parameters p(z|ui), p(ti|z), and p(gi|z),
we define a log likelihood L, which measures how the esti-
mated parameters fit the observed data. According to the
EM algorithm (Dempster et al. 1977), L will be used as an
objective function to estimate all parameters. L is defined as
L(I) = log(p(I)).
In the expectation step (E-step), the joint probability of
the hidden variable Z given all observations is computed
from the following equations:
p(z|t, u) ∝ p(z|u) · p(t|z) (1)
p(z|g, u) ∝ p(z|u) · p(g|z). (2)
L cannot be maximized easily, since the summation over
the hidden variable Z appears inside the logarithm. We in-
stead maximize the expected complete data log-likelihood
over the hidden variable, E[Lc], which is defined as
E[Lc] = Σ_i log(p(ui))
      + Σ_i Σ_t ni(t) · Σ_z p(z|ui, t) · log( p(z|ui) · p(t|z) )
      + Σ_i Σ_g ni(g) · Σ_z p(z|ui, g) · log( p(z|ui) · p(g|z) )
Since the term Σ_i log(p(ui)) is not related to the parameters
and can be computed directly from the observed data,
we discard this term from the expected complete data log-
likelihood. With normalization constraints on all parame-
ters, Lagrange multipliers τ , ρ, ψ are added to the expected
log likelihood, yielding the following equation
H = E[Lc] + Σ_z τz ( 1 − Σ_t p(t|z) ) + Σ_z ρz ( 1 − Σ_g p(g|z) ) + Σ_u ψu ( 1 − Σ_z p(z|u) )
We maximize H with respect to p(t|zk), p(g|zk), and
p(zk|u), and then eliminate the Lagrange multipliers to ob-
tain the following equations for the maximization step:
p(t|z) ∝ Σ_i ni(t) · p(z|t, ui) (3)
p(g|z) ∝ Σ_i ni(g) · p(z|g, ui) (4)
p(zk|um) ∝ Σ_t nm(t) · p(zk|um, t) + Σ_g nm(g) · p(zk|um, g) (5)
The algorithm iterates between E and M step until the log
likelihood for all parameter values converge.
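The E/M iteration of Equations 1, 3 and 5 can be sketched for the tag-only case (group terms dropped, which the text notes is straightforward); the data layout below is an illustrative assumption, not the authors' implementation:

```python
import random
from collections import defaultdict

def em_train(counts, topics, n_iter=50, seed=0):
    """EM training for the tag-only version of the model above.
    `counts[(u, t)]` is how many times user u used tag t.
    Returns p(z|u) and p(t|z) as nested dicts."""
    rng = random.Random(seed)
    users = sorted({u for u, _ in counts})
    tags = sorted({t for _, t in counts})

    def rand_dist(keys):
        # Random strictly positive, normalized initialization.
        w = {k: rng.random() + 1e-3 for k in keys}
        s = sum(w.values())
        return {k: v / s for k, v in w.items()}

    p_z_u = {u: rand_dist(topics) for u in users}   # p(z|u)
    p_t_z = {z: rand_dist(tags) for z in topics}    # p(t|z)

    for _ in range(n_iter):
        # E-step: responsibilities p(z|t, u) ∝ p(z|u) · p(t|z)   (Eq. 1)
        resp = {}
        for (u, t) in counts:
            w = {z: p_z_u[u][z] * p_t_z[z][t] for z in topics}
            s = sum(w.values()) or 1.0
            resp[(u, t)] = {z: v / s for z, v in w.items()}
        # M-step: re-estimate p(t|z) and p(z|u)                  (Eqs. 3, 5)
        t_acc = {z: defaultdict(float) for z in topics}
        u_acc = {u: defaultdict(float) for u in users}
        for (u, t), n in counts.items():
            for z in topics:
                t_acc[z][t] += n * resp[(u, t)][z]
                u_acc[u][z] += n * resp[(u, t)][z]
        p_t_z = {z: {t: t_acc[z][t] / (sum(t_acc[z].values()) or 1.0)
                     for t in tags} for z in topics}
        p_z_u = {u: {z: u_acc[u][z] / (sum(u_acc[u].values()) or 1.0)
                     for z in topics} for u in users}
    return p_z_u, p_t_z
```

Monitoring the log likelihood L between iterations (it is non-decreasing under EM) is the usual convergence check.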
Model-based personalization
We can use the model developed in the previous section to
find the images i most relevant to the interests of a partic-
ular user u′. We do so by learning the parameters of the
model from the data and using these parameters to compute
the conditional probability p(i|u′). This probability can be
factorized as follows:
p(i|u′) = Σ_z p(ui, Ti, Gi|z) · p(z|u′) , (6)
where ui is the owner of image i in the data set, and Ti and
Gi are, respectively, the set of all the tags and groups for the
image i.
The former term in Equation 6 can be factorized further as

p(ui, Ti, Gi|z) ∝ p(Ti|z) · p(Gi|z) · p(z|ui) · p(ui)
               = ∏_{t∈Ti} p(t|z) · ∏_{g∈Gi} p(g|z) · p(z|ui) · p(ui) .
We can use the learned parameters to compute this term di-
rectly.
We represent the interests of user u′ as an aggregate of the
tags that u′ had used in the past for tagging her own images.
This information is used to approximate p(z|u′):

p(z|u′) ∝ Σ_t n(t′ = t) · p(z|t) ,

where n(t′ = t) is the frequency (or weight) of tag t′ used
by u′. Here we view n(t′ = t) as proportional to p(t′|u′).
Note that we can use either all the tags u′ had applied to the
images in her photostream, or a subset of these tags, e.g.,
only those that co-occur with some tag in user’s images.
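A sketch of this scoring step for the tag-only case; recovering p(z|t) from the learned p(t|z) via Bayes' rule under a uniform topic prior is a simplifying assumption made here for brevity:

```python
from math import prod

def topic_interest(tag_counts, p_t_z):
    """Approximate p(z|u') from the tags u' used on her own photos:
    p(z|u') ∝ Σ_t n(t) · p(z|t)."""
    topics = list(p_t_z)
    score = dict.fromkeys(topics, 0.0)
    for t, n in tag_counts.items():
        norm = sum(p_t_z[z].get(t, 0.0) for z in topics)
        if norm == 0.0:
            continue  # tag unseen during training
        for z in topics:
            score[z] += n * p_t_z[z].get(t, 0.0) / norm  # n(t) · p(z|t)
    total = sum(score.values()) or 1.0
    return {z: s / total for z, s in score.items()}

def rank_images(image_tags, p_z_interest, p_t_z):
    """Rank image ids by Σ_z p(T_i|z) · p(z|u'), a tag-only proxy for
    Equation 6; `image_tags` maps an image id to its list of tags."""
    def relevance(tags):
        return sum(p_z_interest[z] * prod(p_t_z[z].get(t, 1e-9) for t in tags)
                   for z in p_z_interest)
    return sorted(image_tags, key=lambda i: relevance(image_tags[i]),
                  reverse=True)
```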
Evaluation
We trained the model separately on each data set of 4500
images. We fixed the number of topics at ten. We then eval-
uated our model-based personalization framework by using
the learned parameters and the information about the in-
terests of the selected users to compute p(i|u′) for the top
500 (manually labeled) images in the set. Information about
user’s interests was captured either by (1) all tags (and their
frequencies) that are used in all the images of the user’s pho-
tostream or (2) related tags that occurred in images that were
tagged with the search keyword (e.g., “newborn”) by the
user.
Computation of p(t|z) is central to the parameter estima-
tion process, and it tells us something about how strongly a
tag t contributes to a topic z. Table 3 shows the most prob-
able 25 tags for each topic for the tiger data set trained on
ten topics. Although the tag “tiger” dominates most topics,
we can discern different themes from the other tags that ap-
pear in each topic. Thus, topic z5 is obviously about domes-
tic cats, while topic z8 is about Apple computer products.
Meanwhile, topic z2 is about flowers and colors (“flower,”
“lily,” “yellow,” “pink,” “red”); topic z6 is about
places (“losangeles,” “sandiego,” “lasvegas,” “stuttgart”),
presumably because these places have zoos. Topic z7 con-
tains several variations of tiger’s scientific name, “panthera
tigris.” This method appears to identify related words well.
Topic z5, for example, gives synonyms “cat,” “kitty,” as well
as the more general term “pet” and the more specific terms
“kitten” and “tabby.” It even contains the Italian version of
the word: “gatto.” In future work we plan to explore using
this method to categorize photos in a more abstract way. We
also note that related terms can be used to increase search
recall by providing additional keywords for queries.
Table 4 presents results of model-based personalization
for the case that uses information from all of user’s tags.
The model was trained with ten topics. Results are pre-
sented for different thresholds. The first two columns, for
example, report precision and recall for a high threshold that
z1 z2 z3 z4 z5
tiger tiger tiger tiger tiger
zoo specanimal cat thailand cat
animal animalkingdomelite kitty bengal animal
nature abigfave cute animals animals
animals flower kitten tigers zoo
wild butterfly cats canon bigcat
tijger macro orange d50 cats
wildlife yellow eyes tigertemple tigre
ilovenature swallowtail pet 20d animalplanet
cub lily tabby white tigers
siberiantiger green stripes nikon bigcats
blijdorp canon whiskers kanchanaburi whitetiger
london insect white detroit mammal
australia nature art life wildlife
portfolio pink feline michigan colorado
white red fur detroitzoo stripes
dierentuin flowers animal eos denver
toronto orange gatto temple sumatrantiger
stripes eastern pets park white
amurtiger usa black asia feline
nikonstunninggallery impressedbeauty paws ball mammals
s5600 tag2 furry marineworld sumatran
eyes specnature nose baseball exoticcats
sydney black teeth detroittigers exoticcat
cat streetart beautiful wild big
z6 z7 z8 z9 z10
tiger nationalzoo tiger tiger tiger
tigers tiger apple india lion
dczoo sumatrantiger mac canon dog
tigercub zoo osx wildlife shark
california nikon macintosh impressedbeauty nyc
lion washingtondc screenshot endangered cat
cat smithsonian macosx safari man
cc100 washington desktop wildanimals people
florida animals imac wild arizona
girl cat stevejobs tag1 rock
wilhelma bigcat dashboard tag3 beach
self tigris macbook park sand
lasvegas panthera powerbook taggedout sleeping
stuttgart bigcats os katze tree
me d70s 104 nature forest
baby pantheratigrissumatrae canon bravo puppy
tattoo dc x nikon bird
endangered sumatrae ipod asia portrait
illustration animal computer canonrebelxt marwell
?? 2005 ibook bandhavgarh boy
losangeles pantheratigris intel vienna fish
portrait nikond70 keyboard schnbrunn panther
sandiego d70 widget zebra teeth
lazoo 2006 wallpaper pantheratigris brooklyn
giraffe topv111 laptop d2x bahamas
Table 3: Top tags ordered by p(t|z) for the ten-topic model of the “tiger” data set.
Pr Re Pr Re Pr Re Pr Re Pr Re
newborn
n=50 n=100 n=200 n=300 n=412*
user1 1.00 0.12 1.00 0.24 1.00 0.49 0.94 0.68 0.89 0.89
user2 1.00 0.12 1.00 0.24 1.00 0.49 0.92 0.67 0.87 0.87
user3 1.00 0.12 0.88 0.21 0.84 0.41 0.85 0.62 0.89 0.89
user4 1.00 0.12 0.99 0.24 1.00 0.48 0.94 0.69 0.89 0.89
tiger
n=50 n=100 n=200 n=300 n=337*
user5 0.94 0.14 0.90 0.27 0.82 0.48 0.80 0.71 0.79 0.79
user6 0.76 0.11 0.80 0.24 0.79 0.47 0.77 0.69 0.77 0.77
user7 0.94 0.14 0.90 0.27 0.82 0.48 0.80 0.71 0.79 0.79
user8 0.90 0.13 0.88 0.26 0.82 0.49 0.79 0.71 0.79 0.79
beetle
n=50 n=100 n=200 n=232* n=300
user9 1.00 0.22 0.99 0.43 0.77 0.66 0.70 0.70 0.66 0.85
user10 0.98 0.21 0.99 0.43 0.77 0.66 0.70 0.70 0.66 0.85
user11 0.98 0.21 0.93 0.40 0.50 0.43 0.51 0.51 0.50 0.65
user12 1.00 0.22 0.99 0.43 0.77 0.66 0.70 0.70 0.66 0.85
Table 4: Filtering results where a number of learned topics is 10, excluding group information, and user’s personal information
obtained from all tags she used for her photos. Asterisk denotes R-precision of the method, or precision of the first n results,
where n is the number of relevant results in the data set.
marks only the 50 most probable images as relevant. The re-
maining 450 images are marked as not relevant to the user.
Recall is low, because many relevant images are excluded
from the results for such a high threshold. As the thresh-
old is decreased (n = 100, n = 200, . . .), recall relative to
the 500 labeled images increases. Precision remains high in
all cases, and higher than precision of the plain tag search
reported in Table 1. In fact, most of the images in the top
100 results presented to the user are relevant to her query.
The column marked with the asterisk gives the R-precision
of the method, or precision of the first R results, where R is
the number of relevant results. The average R-precision of
this filtering method is 8%, 17% and 42% better than plain
search precision on our three data sets.
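The two evaluation measures used above can be computed straightforwardly; this is a generic sketch, not the evaluation script used for the tables:

```python
def precision_at_n(ranked_ids, relevant_ids, n):
    """Precision of the top-n ranked image ids against the relevant set."""
    top = ranked_ids[:n]
    return sum(1 for i in top if i in relevant_ids) / n

def r_precision(ranked_ids, relevant_ids):
    """Precision of the first R results, where R is the number of
    relevant results in the data set."""
    return precision_at_n(ranked_ids, relevant_ids, len(relevant_ids))
```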
Performance results of the approach that uses related tags
instead of all tags are given in Table 5. We explored this di-
rection, because we believed it could help discriminate be-
tween different topics that interest a user. Suppose, a child
photographer is interested in nature photography as well as
child portraiture. The subset of tags he used for tagging his
“newborn” portraits will be different from the tags used for
tagging nature images. These tags could be used to differen-
tiate between newborn baby and newborn colt images. How-
ever, on the set of users selected for our study, using related
tags did not appear to improve results. This could be be-
cause the tags a particular user used together with, for ex-
ample, “beetle” do not overlap significantly with the rest of
the data set.
Including group information did not significantly improve
results (not presented in this manuscript). In fact, group in-
formation sometimes hurts the estimation rather than helps.
We believe that this is because our data sets (sorted by Flickr
according to image interestingness) are biased by the pres-
ence of general topic groups (e.g., Search the Best, Spec-
tacular Nature, Let’s Play Tag, etc.). We postulate that
group information would help estimate p(i|z) in cases where
the photo has few or no tags. Group information would help
fill in the missing data by using the group name as another
tag. We also trained the model on the data with 15 topics,
but found no significant difference in results.
Previous research
Recommendation or personalization systems can be cate-
gorized into two main categories. One is collaborative fil-
tering (Breese et al. 1998) which exploits item ratings from
many users to recommend items to other like-minded users.
The other is content-based recommendation, which relies
on the contents of an item and user’s query, or other user
information, for prediction (Mooney and Roy 2000). Our
first approach, filtering by contacts, can be viewed as im-
plicit collaborative filtering, where the user–contact rela-
tionship is viewed as a preference indicator: it assumes
that the user likes all photos produced by her contacts. In
our previous work, we showed that users do indeed agree
with the recommendations made by contacts (Lerman 2007;
Lerman and Jones 2007). This is similar to the ideas imple-
mented by MovieTrust (Golbeck 2006), but unlike that sys-
tem, social media sites do not require users to rate their trust
in the contact.
Meanwhile, our second approach, filtering by tags (and
groups), shares some characteristics with both methods. It
is similar to collaborative filtering, since we use tags to rep-
resent agreement between users. It is also similar to content-
based recommendation, because we represent image content
by the tags and group names that have been assigned to it by
the user.
Our model-based filtering system is technically similar to,
but conceptually different from, probabilistic models pro-
Pr Re Pr Re Pr Re Pr Re Pr Re
newborn
n=50 n=100 n=200 n=300 n=412*
user1 0.8 0.10 0.78 0.19 0.79 0.38 0.77 0.56 0.79 0.79
user2 0.8 0.10 0.82 0.20 0.80 0.39 0.77 0.56 0.83 0.83
user3 0.98 0.12 0.88 0.21 0.84 0.41 0.80 0.58 0.85 0.85
user4 0.98 0.12 0.88 0.21 0.84 0.41 0.85 0.62 0.88 0.88
tiger
n=50 n=100 n=200 n=300 n=337*
user5 0.84 0.12 0.86 0.26 0.78 0.46 0.78 0.69 0.77 0.77
user6 0.72 0.11 0.79 0.23 0.78 0.46 0.76 0.68 0.76 0.76
user7 0.72 0.11 0.78 0.23 0.78 0.46 0.76 0.68 0.76 0.76
user8 0.9 0.13 0.82 0.24 0.80 0.47 0.78 0.69 0.78 0.78
beetle
n=50 n=100 n=200 n=232* n=300
user9 0.78 0.17 0.62 0.27 0.58 0.50 0.54 0.54 0.53 0.68
user10 0.98 0.21 0.88 0.38 0.77 0.66 0.72 0.72 0.65 0.84
user11 0.96 0.21 0.74 0.32 0.62 0.53 0.59 0.59 0.56 0.72
user12 0.98 0.21 0.99 0.43 0.77 0.66 0.70 0.70 0.66 0.85
Table 5: Filtering results where a number of learned topics is 10, excluding group information, and user’s personal information
obtained from the tags she used for her photos that are tagged with the search term.
posed by (Popescul et al. 2001). Both models are proba-
bilistic generative models that describe co-occurrences of
users and items of interest. In particular, the model assumes
a user generates her topics of interest; then the topics gen-
erate documents and words in those documents if the user
prefers those documents. In our model, we metaphorically
assume the photo owner generates her topics of interest. The
topics, in turn, generate tags that the owner used to annotate
her photo. However, unlike the previous work, we do not
treat photos as variables, as they do for documents. This is
because images are tagged only by their owners; meanwhile,
in their model, all users who are interested in a document
generate topics for that document.
Our model-based approach is almost identical to the
author-topic model (Rosen-Zvi et al. 2004). However, we
extend their framework to address (1) how to exploit photo’s
group information for personalized information filtering; (2)
how to approximate user’s topics of interest from partially
observed personal information (the tags the user used to de-
scribe her own images). For simplicity, we use the classi-
cal EM algorithm to train the model; meanwhile they use a
stochastic approximation approach due to the difficulty involved in performing exact inference for their generative
model.
Conclusions and future work
We presented two methods for personalizing results of im-
age search on Flickr. Both methods rely on the meta-
data users create through their everyday activities on Flickr,
namely user’s contacts and the tags they used for annotating
their images. We claim that this information captures user’s
tastes and preferences in photography and can be used to
personalize search results to the individual user. We showed
that both methods dramatically increase search precision.
We believe that increasing precision is an important goal for
personalization, because dealing with the information over-
load is the main issue facing users, and we can help users
by reducing the number of irrelevant results the user has to
examine (false positives). Having said that, our tag-based
approach can also be used to expand the search by suggest-
ing relevant related keywords (e.g., “pantheratigris,” “big-
cat” and ”cub” for the query tiger).
In addition to tags and contacts, there exists other meta-
data, favorites and comments, that can be used to aid infor-
mation personalization and discovery. In our future work
we plan to address the challenge of combining these heteroge-
neous sources of evidence within a single approach. We will
begin by combining contacts information with tags.
The probabilistic model needs to be explored further.
Right now, there is no principled way to pick the number
of latent topics that are contained in a data set. We also plan
to have a better mechanism for dealing with uninformative
tags and groups. We would like to automatically identify
general interest groups, such as the Let’s Play Tag group,
that do not help to discriminate between topics.
The approaches described here can be applied to other so-
cial media sites, such as Del.icio.us. We imagine that in
near future, all of Web will be rich with metadata, of the sort
described here, that will be used to personalize information
search and discovery to the individual user.
Acknowledgements
This research is based on work supported in part by the Na-
tional Science Foundation under Award Nos. IIS-0535182
and in part by DARPA under Contract No. NBCHD030010.
The U.S. Government is authorized to reproduce and dis-
tribute reports for Governmental purposes notwithstanding
any copyright annotation thereon. The views and conclu-
sions contained herein are those of the authors and should
not be interpreted as necessarily representing the official
policies or endorsements, either expressed or implied, of
any of the above organizations or any person connected with
them.
References
[Breese et al. 1998] John Breese, David Heckerman, and
Carl Kadie. Empirical analysis of predictive algorithms
for collaborative filtering. In Proceedings of the 14th An-
nual Conference on Uncertainty in Artificial Intelligence
(UAI-98), pages 43–52, San Francisco, CA, 1998. Morgan
Kaufmann.
[Dempster et al. 1977] A. P. Dempster, N. M. Laird, and
D. B. Rubin. Maximum likelihood from incomplete data
via the EM algorithm. Journal of the Royal Statistical Soci-
ety. Series B (Methodological), 39(1):1–38, 1977.
[Golbeck 2006] J. Golbeck. Generating predictive movie
recommendations from trust in social networks. In Pro-
ceedings of the Fourth International Conference on Trust
Management, Pisa, Italy, May 2006.
[Golder and Huberman 2005] S. A. Golder and B. A.
Huberman. The structure of collaborative tag-
ging systems. Technical report, HP Labs, 2005.
http://www.hpl.hp.com/research/idl/papers/tags/.
[Lerman and Jones 2007] K. Lerman and Laurie Jones. So-
cial browsing on Flickr. In Proc. of International Confer-
ence on Weblogs and Social Media (ICWSM-07), 2007.
[Lerman 2007] K. Lerman. Social networks and social information filtering on Digg. In Proc. of International Con-
ference on Weblogs and Social Media (ICWSM-07), 2007.
[Marlow et al. 2006] C. Marlow, M. Naaman, d. boyd, and
M. Davis. HT06, tagging paper, taxonomy, Flickr, academic
article, to read. In Proceedings of Hypertext 2006, New
York, 2006. ACM, New York: ACM Press.
[Mika 2005] P. Mika. Ontologies are us: A unified model
of social networks and semantics. In International Semantic
Web Conference (ISWC-05), 2005.
[Mooney and Roy 2000] Raymond J. Mooney and Loriene
Roy. Content-based book recommending using learning
for text categorization. In Proceedings of 5th ACM Con-
ference on Digital Libraries, pages 195–204, San Antonio,
US, 2000. ACM Press, New York, US.
[Popescul et al. 2001] Alexandrin Popescul, Lyle Ungar,
David Pennock, and Steve Lawrence. Probabilistic mod-
els for unified collaborative and content-based recommen-
dation in sparse-data environments. In 17th Conference on
Uncertainty in Artificial Intelligence, pages 437–444, Seattle, Washington, August 2001.
[Rosen-Zvi et al. 2004] Michal Rosen-Zvi, Thomas Grif-
fiths, Mark Steyvers, and Padhraic Smyth. The author-
topic model for authors and documents. In AUAI ’04: Pro-
ceedings of the 20th conference on Uncertainty in artificial
intelligence, pages 487–494, Arlington, Virginia, United
States, 2004. AUAI Press.
0704.1677 | Resummed Cross Section for Jet Production at Hadron Colliders | BNL-NT-07/17
hep-ph/yymmnnn
Resummed Cross Section for Jet Production
at Hadron Colliders
Daniel de Florian
Departamento de Física, FCEYN, Universidad de Buenos Aires,
(1428) Pabellón 1 Ciudad Universitaria, Capital Federal, Argentina
Werner Vogelsang
BNL Nuclear Theory, Brookhaven National Laboratory, Upton, NY 11973, USA
Abstract
We study the resummation of large logarithmic perturbative corrections to the single-
inclusive jet cross section at hadron colliders. The corrections we address arise near the
threshold for the partonic reaction, when the incoming partons have just enough energy
to produce the high-transverse-momentum final state. The structure of the resulting
logarithmic corrections is known to depend crucially on the treatment of the invariant
mass of the produced jet at threshold. We allow the jet to have a non-vanishing mass
at threshold, which most closely corresponds to the situation in experiment. Matching
our results to available semi-analytical next-to-leading-order calculations, we derive
resummed results valid to next-to-leading logarithmic accuracy. We present numerical
results for the resummation effects at Tevatron and RHIC energies.
November 2, 2018
http://arxiv.org/abs/0704.1677v1
1 Introduction
High-transverse-momentum jet production in hadronic collisions, H1H2 → jetX , plays a fun-
damental role in High-Energy Physics. It offers possibilities to explore QCD, for example the
structure of the interacting hadrons or the emergence of hadronic final states, but is also inti-
mately involved in many signals (and their backgrounds) for New Physics. At the heart of all
these applications of jet production is our ability to perform reliable and precise perturbative cal-
culations of the partonic short-distance interactions that generate the high-transverse-momentum
final states. Up to corrections suppressed by inverse powers of the jet’s transverse momentum pT ,
the hadronic jet cross section factorizes into parton distribution functions that contain primar-
ily long-distance information, and these short-distance cross sections. In the present paper, we
address large logarithmic perturbative corrections to the latter.
At partonic threshold, when the initial partons have just enough energy to produce the high-
pT jet and an unobserved recoiling partonic final state, the phase space available for gluon
bremsstrahlung vanishes, so that only soft and collinear emission is allowed, resulting in large
logarithmic corrections to the partonic cross section. To be more specific, if we consider the cross
section as a function of the jet transverse momentum, integrated over all jet rapidities, the
partonic threshold is reached when √s = 2pT, where √s is the partonic center-of-mass (c.m.) energy.
Defining x̂T ≡ 2pT/√s, the leading large contributions near threshold arise as αS^k(pT) ln^{2m}(1 − x̂T^2)
at the kth order in perturbation theory, where m ≤ k (the logarithms with m = k are leading)
and αS is the strong coupling. Even if pT is large so that αS(pT) is small, sufficiently close to
threshold the logarithmic terms will spoil the perturbative expansion at any fixed order. Threshold
resummation [1, 2, 3, 4, 5, 6, 7], however, allows one to reinstate a useful perturbative series by
systematically taking into account the terms αS^k ln^{2m}(1 − x̂T^2) to all orders in αS. This is achieved
after taking a Mellin transform of the hadronic cross section in xT = 2pT/√S, with √S the
hadronic c.m. energy. The threshold logarithms exponentiate in transform space.
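Schematically, suppressing parton indices, color structure and matching coefficients (so this is an illustrative sketch rather than the paper's actual resummed formula), the Mellin moments and the exponentiated structure through NLL take the form

```latex
\sigma(N) \;\equiv\; \int_0^1 dx_T^2 \,\big(x_T^2\big)^{N-1}\,
          \frac{d\sigma}{dx_T^2}\,, \qquad
\ln \sigma^{\mathrm{resum}}(N) \;\sim\; \ln N \, g^{(1)}(\lambda)
          \;+\; g^{(2)}(\lambda)\,, \qquad \lambda = b_0\,\alpha_S \ln N\,,
```

where $g^{(1)}$ collects the leading-logarithmic towers $\alpha_S^k \ln^{k+1} N$ and $g^{(2)}$ the next-to-leading ones.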
Regarding phenomenology, the larger xT , the more dominant will the threshold logarithms be,
and hence the more important will threshold resummation effects be. In addition, because of the
convoluted form of the partonic cross sections and the parton distribution functions (PDFs), the
steep fall-off of the PDFs with momentum fraction x automatically enhances the contributions
from the threshold regime to the cross section, because it makes it relatively unlikely that the
initial partons have very high c.m. energy. This explains why partonic threshold effects often
dominate the hadronic cross section even at not so high xT . Studies of cross sections for [8]
pp → hX (with h a high-pT hadron) and [9, 10, 11, 12] pp → γX in the fixed-target regime,
where typically 0.2 ≲ xT ≲ 0.7, indeed demonstrate that threshold-resummation effects dominate
there and can be very large and important for phenomenology. They enhance the theoretical cross
section with respect to fixed-order calculations.
These observations suggest to study the resummation also for jet production at hadron col-
liders, in particular when xT is rather large. An application of particular interest is the jet cross
section at very high transverse momenta (pT ∼ several hundreds GeV) at the Tevatron [13, 14],
for which initially an excess of the experimental data over next-to-leading order (NLO) theory
was reported, which was later mostly attributed to an insufficient knowledge of the gluon
distribution [15]. Similarly large values of xT are now probed in pp collisions at RHIC, where
currently √S = 200 GeV and jet cross section measurements by the STAR collaboration are already
extending to pT ≳ 40 GeV [16]. In both these cases, one does expect threshold resummation effects
to be smaller than in the case of related processes at similar xT in the fixed-target regime, just
because (among other things) the strong coupling constant is smaller at these higher pT . On the
other hand, as we shall see, the effects are still often non-negligible.
Apart from addressing these interesting phenomenological applications, we believe we also
improve in this paper the theoretical framework for threshold resummation in jet production.
There has been earlier work in the literature on this topic [4, 17, 18]. In Ref. [4] the threshold
resummation formalism for the closely related dijet production at large invariant mass of the
jet pair was developed to next-to-leading logarithmic (NLL) order. In [17], these results were
applied to the single-inclusive jet cross section at large transverse momentum, making use of
earlier work [6] on the high-pT prompt-photon cross section, which is kinematically similar. As
was emphasized in [4], there is an important subtlety for the resummed jet cross section related to
the treatment of the invariant mass of the jet. The structure of the large logarithmic corrections
that are addressed by resummation depends on whether or not the jet is assumed to be massless
at partonic threshold, even at the leading-logarithmic (LL) level. This is perhaps surprising at
first sight, because one might expect the jet mass to be generally inessential since it is typically
much smaller than the jet’s transverse momentum pT and in fact vanishes for lowest-order partonic
scattering. However, the situation can be qualitatively understood as follows [4]: let us assume
that we are defining the jet cross section from the total four-momentum deposited in a cone of
aperture R †. Considering for simplicity the next-to-leading order, we can have contributions by
virtual 2 → 2 diagrams, or by 2 → 3 real-emission diagrams. For the former, a single particle
produces the (massless) jet, in case of the latter, there are configurations where two particles in
the final state jointly form the jet.
Then, for a jet forced to be massless at partonic threshold, the contributions with two partons
in the cone must either have one parton arbitrarily soft, or the two partons exactly collinear.
The singularities associated with these configurations cancel against analogous ones in the virtual
diagrams, but because the constraint on the real-emission diagrams is so restrictive, large double-
and single-logarithmic contributions remain after the cancellation. This will happen regardless
of the size R of the cone aperture, implying that the coefficients of the large logarithms will be
independent of R. These final-state threshold logarithms arising from the observed jet suppress
the cross section near threshold. Their structure is identical to that of the threshold logarithms
generated by the recoiling “jet”, because the latter is not observed and is indeed massless at
partonic threshold. The combined final-state logarithms then act against the threshold logarithms
associated with initial-state radiation which are positive and enhance the cross section [1].
If, on the other hand, the jet invariant mass is not constrained to vanish near threshold, far
more final states contribute; in fact, there will be an integration over the jet mass to an upper
limit proportional to the aperture of the jet cone. As the 2 → 3 contributions are therefore much
less restricted, the cancellations of infrared and collinear divergences between real and virtual
diagrams leave behind only single logarithms [4], associated with soft, but not with collinear,
emission. Compared to the previously discussed case, there is therefore no double-logarithmic
suppression of the cross section by the observed jet, and one expects the calculated cross section
to be larger. Also, the single-logarithmic terms will now depend on the jet cone size R.
†Details of the jet definition do not matter for the present argument.
The resummation for both these cases, with the jet massless or massive at threshold, has
been worked out in [4]. The study [17] of the resummed single-inclusive high-pT jet cross section
assumed massless jets at threshold. From a phenomenological point of view, however, we see no
reason for demanding the jet to become massless at the partonic threshold. The experimental jet
cross sections will, at any given pT , contain jet events with a large variety of jet invariant masses.
NLO calculations of single-inclusive jet cross sections indeed reflect this: they have the property
that jets produced at partonic threshold are integrated over a range of jet masses. This becomes
evident in the available semi-analytical NLO calculations [19, 20, 21, 22]. For these, the jet cross
section is obtained by assuming that the jet cone is relatively narrow, in which case it is possible
to treat the jet definition analytically, so that collinear and infrared final-state divergences may be
canceled by hand. This approximation is referred to as the “small-cone approximation (SCA)”.
Section II.E in the recent calculation in [21] explicitly demonstrates for the SCA that the threshold
double-logarithms associated with the observed final-state jet cancel, as described above.
In light of this, we will study in this work the resummation in the more realistic case of jets
that are massive at threshold. We will in fact make use of the NLO calculation in the SCA
approximation of [21] to “match” our resummed cross sections to finite (next-to-leading) order.
Knowledge of analytical NLO expressions allows one to extract certain hard-scattering coefficients
that are finite at threshold and part of the full resummation formula. These coefficients will be
presented and used in our paper for the first time.
We emphasize that the use of the SCA in our work is not to be regarded as a limitation to
the usefulness of our results. First, the SCA is known to be very accurate numerically even at
relatively large jet cone sizes of R ∼ 0.7 [21, 20, 23]. In addition, one may use our results to obtain
ratios of the resummed over the NLO cross sections. Such “K-factors” are then expected to be
extremely good approximations for the effects of higher orders even when one goes away from the
SCA and uses, for example, a full NLO Monte-Carlo integration code that allows one to compute the
jet cross section for larger cone aperture and for other jet definitions (see, for example, Ref. [24]).
We will therefore in particular present K-factors for the resummed jet cross section in this paper.
The paper is organized as follows: in Sec. 2 we provide the basic formulas for the single-
inclusive-jet cross section at fixed order in perturbation theory, and discuss the SCA and the role
of the threshold region. Section 3 presents details of the threshold resummation for the inclusive-
jet cross section and describes the matching to the analytical expressions for the NLO cross section
in the SCA. In Sec. 4 we give phenomenological results for the Tevatron and for RHIC. Finally,
we summarize our results in Sec. 5. The Appendix collects the formulas for the hard-scattering
coefficients in the threshold-resummed cross section mentioned above.
2 Next-to-leading order single-inclusive jet cross section
Jets produced in high-energy hadronic scattering, H1(P1)H2(P2) → jet(PJ)X , are typically defined
in terms of a deposition of transverse energy or four-momentum in a cone of aperture R in pseudo-
rapidity and azimuthal-angle space, with detailed algorithms specifying the jet kinematic variables
in terms of those of the observed hadron momenta [13, 25, 26, 27, 28, 29]. QCD factorization
theorems allow one to write the cross section for single-inclusive jet production in hadronic collisions in
terms of convolutions of parton distribution functions with partonic hard-scattering functions [30]:
dσ = Σ_{a,b} ∫ dx1 dx2 f_{a/H1}(x1, µ_F²) f_{b/H2}(x2, µ_F²) dσ̂_{ab}(x1P1, x2P2, PJ, µ_F, µ_R) ,   (1)
where the sum runs over all initial partons, quarks, anti-quarks, and gluons, and where µF and µR
denote the factorization and renormalization scales, respectively. It is possible to use perturbation
theory to describe the formation of a high-pT jet, as long as the definition of the jet is infrared-
safe. The jet is then constructed from a subset of the final-state partons in the short-distance
reaction ab → partons, and a “measurement function” in the dσ̂ab specifies the momentum PJ of
the jet in terms of the momenta of the final-state partons, in accordance with the (experimental)
jet definition.
The computation of jet cross sections beyond the lowest order in perturbative QCD is rather
complicated, due to the need for incorporating a jet definition and the ensuing complexity of the
phase space, and due to the large number of infrared singularities of soft and collinear origin at
intermediate stages of the calculation. Different methods have been introduced that allow the
calculation to be performed largely numerically by Monte-Carlo “parton generators”, with only
the divergent terms treated in part analytically (see, for example, Ref. [24]).
A major simplification occurs if one assumes that the jet cone is rather narrow, a limit known as
the “small-cone approximation (SCA)” [19, 20, 21, 22]. In this case, a semi-analytical computation
of the NLO single-inclusive jet cross section can be performed, meaning that fully analytical
expressions for the partonic hard-scattering functions dσ̂ab can be derived which only need to be
integrated numerically against the parton distribution functions as shown in Eq. (1). The SCA
may be viewed as an expansion of the partonic cross section for small δ ≡ R/ cosh η, where η is
the jet’s pseudo-rapidity. Technically, the parameter δ is the half-aperture of a geometrical cone
around the jet axis, when the four-momentum of the jet is defined as simply the sum of the four-
momenta of all the partons inside the cone [19, 21]. At small δ, the behavior of the jet cross section
is of the form A log(δ)+B+O(δ2), with both A and B known from Refs. [19, 21]. Jet codes based
on the SCA have the virtue that they produce numerically stable results on much shorter time
scales than Monte-Carlo codes. Moreover, as we shall see below, the relatively simple and explicit
results for the NLO single-inclusive jet cross section obtained in the SCA are a great convenience
for the implementation of threshold resummation, particularly for the matching needed to achieve
full NLL accuracy.
It turns out that the SCA is a very good approximation even for relatively large cone sizes of up
to R ≃ 0.7 [21, 20, 23], the value used by both Tevatron collaborations. Figure 1 shows comparisons
between the NLO cross sections for single-inclusive jet production obtained using a full Monte-
Carlo code [24] and the SCA code of [21], for pp̄ collisions at c.m. energy √S = 1800 GeV and very
high pT . Throughout this paper we use the CTEQ6M [31] NLO parton distribution functions. We
have chosen two different jet definitions in the Monte-Carlo calculation. One uses a conventional
cone algorithm [25], the other the CDF jet definition [13]. One can see that the differences with
respect to the SCA are of the order of only a few per cent. We note that similar comparisons
in the RHIC kinematic regime have been shown in [21]. In their recent paper [16], the STAR
collaboration used R = 0.4, for which the SCA is even more accurate.
Encouraged by this good agreement, we will directly use the SCA analytical results when
performing the threshold resummation. As stated in the Introduction, this is anyway not a
Figure 1: Ratio between NLO jet cross sections at the Tevatron at √S = 1800 GeV, computed with
a full Monte-Carlo code [24] and in the SCA. The solid line corresponds to the jet definition
implemented by CDF [13] (with parameter Rsep = 1.3), and the dashed one to the standard cone
definition [25]. In both cases the size of the jet cone is set to R = 0.7, and the CTEQ6M NLO
[31] parton distributions, evaluated at the factorization scale µF = PT , were used.
limitation, because we will also always provide the ratio of resummed over NLO cross sections
(K-factors), which may then be used along with full NLO Monte-Carlo calculations to obtain
resummed cross sections for any desired cone size or jet algorithm.
A further simplification that we will make is to consider the cross section integrated over all
pseudo-rapidities of the jet. As was discussed in [8], this considerably reduces the complexity
of the resummed expressions. By simply rescaling the resummed prediction by an appropriate
ratio of NLO cross sections one can nonetheless obtain a very good approximation also for the
resummation effects on the non-integrated cross section, at central rapidities [11]. To perform the
NLL threshold resummation for the full rapidity-dependence of the jet cross section remains an
outstanding task for future work.
From Eq. (1), we find for the single-inclusive jet cross section integrated over all jet pseudo-
rapidity η, in the SCA:
p_T³ dσ^{SCA}(x_T)/dp_T = Σ_{a,b} ∫ dx1 f_{a/H1}(x1, µ_F²) ∫ dx2 f_{b/H2}(x2, µ_F²) ∫ dx̂_T δ(x̂_T − x_T/√(x1x2))
    × ∫_{η̂−}^{η̂+} dη̂ (x̂_T⁴ s/2) dσ̂_{ab}(x̂_T², η̂, R)/(dx̂_T² dη̂) ,   (2)
where as before x_T ≡ 2p_T/√S is the customary scaling variable, and x̂_T ≡ 2p_T/√s with s = x1x2S
is its partonic counterpart. η̂ is the partonic pseudo-rapidity, η̂ = η − ½ ln(x1/x2), which has the
limits η̂+ = −η̂− = ln[(1 + √(1 − x̂_T²))/x̂_T]. The dependence of the partonic cross sections on µF
and µR has been suppressed for simplicity. The perturbative expansion of the dσ̂ab in the coupling
constant αS(µR) reads
dσ̂_{ab}(x̂_T², η̂, R) = α_S²(µ_R) [ dσ̂^{(0)}_{ab}(x̂_T², η̂) + α_S(µ_R) dσ̂^{(1)}_{ab}(x̂_T², η̂, R) + O(α_S²) ] .   (3)
As indicated, the leading-order (LO) term dσ̂^{(0)}_{ab} has no dependence on the cone size R, because
for this term a single parton produces the jet. The analytical expressions for the NLO terms dσ̂^{(1)}_{ab}
have been obtained in [19, 21]. It is customary to express them in terms of a different set of
variables, v and w, that are related to x̂T and η̂ by
x̂_T² = 4vw(1 − v) ,   e^{2η̂} = vw/(1 − v) .   (4)
Schematically, the NLO corrections to the partonic cross section for each scattering channel then
take the form
s dσ̂^{(1)}_{ab}(w, v, R)/(dw dv) = A_{ab}(v, R) δ(1 − w) + B_{ab}(v, R) [ln(1 − w)/(1 − w)]_+
    + C_{ab}(v, R) [1/(1 − w)]_+ + F_{ab}(w, v, R) ,   (5)
where the “plus”-distributions are defined as usual by

∫₀¹ dw f(w) [g(w)]_+ ≡ ∫₀¹ dw (f(w) − f(1)) g(w) ,   (6)
and where the F_{ab}(w, v, R) collect all terms without distributions in w. Partonic threshold
corresponds to the limit w → 1. The “plus”-distribution terms in Eq. (5) generate the large logarithmic
corrections that are addressed by threshold resummation. At order k of perturbation theory, the
leading contributions are proportional to α_S^k [ln^{2k−1}(1 − w)/(1 − w)]_+ . Performing the integration of these
terms over η̂, they turn into contributions ∝ α_S^k ln^{2k}(1 − x̂_T²), as we anticipated in the Introduction,
and as we shall show below. Subleading terms are down by one or more powers of the
logarithm. We will now turn to the NLL resummation of the threshold logarithms.
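As a numerical aside (our own illustration, not part of the paper's calculation), the “plus”-distribution prescription of Eq. (6) is easy to evaluate with a simple quadrature; the hypothetical helper below integrates a test function f(w) = w against [1/(1 − w)]_+ and [ln(1 − w)/(1 − w)]_+:

```python
import math

def plus_convolution(f, g, n=200_000):
    # Eq. (6): ∫_0^1 dw f(w) [g(w)]_+ = ∫_0^1 dw (f(w) - f(1)) g(w), midpoint rule
    dw = 1.0 / n
    return sum((f((i + 0.5) * dw) - f(1.0)) * g((i + 0.5) * dw) * dw for i in range(n))

# f(w) = w against [1/(1-w)]_+ :        ∫_0^1 (w-1)/(1-w) dw = -1
print(plus_convolution(lambda w: w, lambda w: 1.0 / (1.0 - w)))
# f(w) = w against [ln(1-w)/(1-w)]_+ :  ∫_0^1 -ln(1-w) dw = +1
print(plus_convolution(lambda w: w, lambda w: math.log(1.0 - w) / (1.0 - w)))
```

The subtraction of f(1) is what renders the w → 1 endpoint integrable; this is exactly why the distributions in Eq. (5) yield finite results even though the unsubtracted integrands diverge at partonic threshold.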
3 Resummed cross section
The resummation of the soft-gluon contributions is carried out in Mellin-N moment space, where
they exponentiate [1, 2, 3, 4, 5, 6, 7]. At the same time, in moment space the convolutions between
the parton distributions and the partonic subprocess cross sections turn into ordinary products.
For our present calculation, the appropriate Mellin moments are in the scaling variable xT
σ(N) ≡ ∫₀¹ dx_T² (x_T²)^{N−1} p_T³ dσ(x_T)/dp_T .   (7)
‡We drop the superscript “SCA” from now on.
In Mellin-N space the QCD factorization formula in Eq. (2) becomes
σ(N) = Σ_{a,b} f^N_{a/H1}(µ_F²) f^N_{b/H2}(µ_F²) σ̂_{ab}(N) ,   (8)
where the f^N_{a/H} are the moments of the parton distribution functions,

f^N_{a/H}(µ_F²) ≡ ∫₀¹ dx x^{N−1} f_{a/H}(x, µ_F²) ,   (9)
and where
σ̂_{ab}(N) ≡ ∫₀¹ dw ∫₀¹ dv [4v(1 − v)w]^{N+1} s dσ̂_{ab}(w, v)/(dw dv) .   (10)
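The statement that convolutions become ordinary products in moment space can be illustrated with a toy example (ours, with hypothetical function names): taking f = g = 1, the convolution ∫_x¹ (dz/z) f(z) g(x/z) equals ln(1/x), and its N-th moment indeed factorizes into the product of the individual moments, 1/N²:

```python
import math

def mellin(f, N, n=200_000):
    # midpoint rule for ∫_0^1 x^{N-1} f(x) dx, cf. Eqs. (7)-(9)
    dx = 1.0 / n
    return sum(((i + 0.5) * dx) ** (N - 1) * f((i + 0.5) * dx) * dx for i in range(n))

f = lambda x: 1.0              # toy "parton distribution"
g = lambda x: 1.0              # toy "partonic cross section"
conv = lambda x: -math.log(x)  # their convolution: ∫_x^1 dz/z f(z) g(x/z) = ln(1/x)

N = 3.0
print(mellin(conv, N), mellin(f, N) * mellin(g, N))  # both ≈ 1/N² = 1/9
```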
The threshold limit w → 1 corresponds to N → ∞, and the LL soft-gluon corrections contribute
as α_S^m ln^{2m}N. The large-N behavior of the NLO partonic cross sections can be easily obtained by
using

∫₀¹ dv [4v(1 − v)]^{N+1} f(v) = f(1/2) ∫₀¹ dv [4v(1 − v)]^{N+1} [1 + O(1/N)] .   (11)
Up to corrections suppressed by 1/N it is therefore possible to perform the v-integration of the
partonic cross sections by simply evaluating them at v = 1/2. According to Eq. (5), when v = 1/2
is combined with the threshold limit w = 1, one has x̂T = 1. It is worth mentioning that in
the same limit one has η̂ = 0, and therefore the coefficients for the soft-gluon resummation for
the rapidity-integrated cross section agree with those for the cross section at vanishing partonic
rapidity. This explains why generally the resummation for the rapidity-integrated hadronic cross
section yields a good approximation to the resummation of the cross section integrated over only
a finite rapidity interval, as long as a region around η = 0 is contained in that interval [11].
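A one-line numerical check of this statement, using the relations of Eq. (4) (our own sketch; the function names are ours):

```python
import math

def xhatT2(v, w):        # Eq. (4): x̂_T² = 4vw(1-v)
    return 4.0 * v * w * (1.0 - v)

def etahat(v, w):        # Eq. (4): e^{2η̂} = vw/(1-v)  →  η̂ = ½ ln[vw/(1-v)]
    return 0.5 * math.log(v * w / (1.0 - v))

# v = 1/2 combined with the threshold limit w = 1 gives x̂_T = 1 and η̂ = 0:
print(xhatT2(0.5, 1.0), etahat(0.5, 1.0))  # → 1.0 0.0
```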
The resummation of the large logarithms in the partonic cross sections is achieved by showing
that they exponentiate in Sudakov form factors. The resummed cross section for each partonic
subprocess is given by a formally rather simple expression in Mellin-N space:
σ̂^{(res)}_{ab}(N) = Σ_{c,d} C_{ab} Δ^a_N Δ^b_N J^d_N J′^c_N [ Σ_I G^I_{ab→cd} Δ^{(int)ab→cd}_{I N} ] σ̂^{(Born)}_{ab→cd}(N) ,   (12)
where the first sum runs over all possible final state partons c and d, and the second over all possible
color configurations I of the hard scattering. Except for the Born cross sections σ̂^{(Born)}_{ab→cd} (which we
have presented in earlier work [8]), each of the N-dependent factors in Eq. (12) is an exponential
containing logarithms in N . The coefficients Cab collect all N -independent contributions, which
partly arise from hard virtual corrections and can be extracted from comparison to the analytical
expressions for the full NLO corrections in the SCA. Finally, the G^I_{ab→cd} are color weights obeying
Σ_I G^I_{ab→cd} = 1. The color interferences expressed by the sum over I appear whenever the number
of partons involved in the process at Born level is larger than three, as is the case here§. Figure 2
gives a simple graphical mnemonic of the structure of the resummation formula, and of the origin
of its various factors, whose expressions we now present.
§In a more general case without rapidity integration, the terms G^I_{ab→cd} × σ̂^{(Born)}_{ab→cd}(N) should be replaced by the
color-correlated Born cross sections σ̂^{(Born I)}_{ab→cd}(N) [4, 7, 17].
Figure 2: Pictorial representation of the resummation formula in Eq. (12).
Effects of soft-gluon radiation collinear to the initial-state partons are exponentiated in the
functions Δ^{a,b}_N, which read:

ln Δ^a_N = ∫₀¹ dz (z^{N−1} − 1)/(1 − z) ∫_{µ_F²}^{(1−z)²Q²} (dq²/q²) A_a(α_S(q²)) ,   (13)
(and likewise for b), where Q² = 2p_T². J^d_N is the exponent associated with collinear, both soft and
hard, radiation in the unobserved recoiling “jet”,

ln J^d_N = ∫₀¹ dz (z^{N−1} − 1)/(1 − z) [ ∫_{(1−z)²Q²}^{(1−z)Q²} (dq²/q²) A_d(α_S(q²)) + ½ B_d(α_S((1 − z)Q²)) ] .   (14)
The function J′^c_N describes radiation in the observed jet. As we reviewed in the Introduction,
this function is sensitive to the assumption made about the jet’s invariant mass at threshold [4]
even at LL level. Our choice is to allow the jet to be massive at partonic threshold, which is
consistent with the experimental definitions of jet cross sections and with the available NLO
calculations. In this case, J′^c_N is given as

ln J′^c_N = ∫₀¹ dz (z^{N−1} − 1)/(1 − z) C′_c(α_S((1 − z)²Q²))   (jet massive at threshold) .   (15)
A similar exponent was derived for this case in [4]. The expression given in Eq. (15) agrees with
the one of [4] to the required NLL accuracy. Notice that J′^c_N contains only single logarithms, which
arise from soft emission, whereas logarithms of collinear origin are absent. This is explicitly seen
in the NLO calculations in the SCA [20, 21] in which there is an integration over the jet mass up
to a maximum value of O(δ ∼ R), even when the threshold limit is strictly reached. Collinear
contributions that would usually generate large logarithms are actually “regularized” by the cone
size δ and give instead rise to log(δ) terms in the perturbative cross sections.
If, however, the jet is forced to be massless at partonic threshold, the jet function is identical
to the function for an “unobserved” jet given in Eq. (14) [4, 17]:

ln J′^c_N = ln J^c_N   (jet massless at threshold) ,   (16)
which produces a (negative) double-logarithm per emitted gluon, because it also receives collinear
contributions due to the stronger restriction on the gluon phase space. There is then no dependence
on log(δ) in this case.
Finally, large-angle soft-gluon emission is accounted for by the factor Δ^{(int)ab→cd}_{I N}, which reads

ln Δ^{(int)ab→cd}_{I N} = ∫₀¹ dz (z^{N−1} − 1)/(1 − z) D_{I ab→cd}(α_S((1 − z)²Q²)) ,   (17)
and depends on the color configuration I of the participating partons.
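Each of Eqs. (13)-(17) weights the coupling with the same soft factor (z^{N−1} − 1)/(1 − z); its z-integral is what generates the lnN terms, via the standard identity ∫₀¹ dz (z^{N−1} − 1)/(1 − z) = −H_{N−1} ≈ −(lnN + γ_E) for integer N. A quick numerical check (our illustration, not from the paper):

```python
import math

def soft_weight_integral(N, n=500_000):
    # midpoint rule for ∫_0^1 dz (z^{N-1} - 1)/(1 - z); the integrand is finite at z → 1
    dz = 1.0 / n
    return sum(((((i + 0.5) * dz) ** (N - 1)) - 1.0) / (1.0 - (i + 0.5) * dz) * dz
               for i in range(n))

N = 100
harmonic = sum(1.0 / k for k in range(1, N))  # H_{N-1}
print(soft_weight_integral(N), -harmonic, -(math.log(N) + 0.5772156649))
```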
The coefficients A_a, B_a, C′_a, D_{I ab→cd} in Eqs. (13), (14), (15), (17) are free of large logarithmic
contributions and are given as perturbative series in the coupling constant αS:

F(αS) = (αS/π) F^{(1)} + (αS/π)² F^{(2)} + . . .   (18)

for each of them. For the resummation to NLL accuracy we need the coefficients A^{(1)}_a, A^{(2)}_a,
B^{(1)}_a, C′^{(1)}_a, and D^{(1)}_{I ab→cd}. The last of these depends on the specifics of the partonic process under
consideration; all the others are universal in the sense that they only distinguish whether the
parton they are associated with is a quark or a gluon. The LL and NLL coefficients A^{(1)}_a, A^{(2)}_a
and B^{(1)}_a are well known [32]:
A^{(1)}_a = C_a ,   A^{(2)}_a = ½ C_a K ,   B^{(1)}_a = γ_a ,   (19)

K = C_A (67/18 − π²/6) − (5/9) N_f ,   (20)

where C_g = C_A = N_c = 3, C_q = C_F = (N_c² − 1)/(2N_c) = 4/3, γ_q = −(3/2) C_F = −2, γ_g = −2πb_0, and
N_f is the number of flavors. The coefficients C′^{(1)}_a, needed in the case of jets that are massive at
threshold, may be obtained by comparing the first-order expansion of the resummed formula to
the analytic NLO results in the SCA. They are also universal and read

C′^{(1)}_a = −C_a log( · · · ) .   (21)
This coefficient contains the anticipated dependence on log(δ) that regularizes the final-state
collinear configurations. As expected from a term of collinear origin, the exponent (15) hence
provides one power of log(δ) for each perturbative order.
The coefficients D^{(1)}_{I ab→cd} governing the exponentiation of large-angle soft-gluon emission to NLL
accuracy, and the corresponding “color weights” G^I_{ab→cd}, depend both on the partonic process
and on its “color configuration”. They can be obtained from [4, 5, 17, 33, 34] where the soft
anomalous dimension matrices were computed for all partonic processes. The results are given
for the general case of arbitrary partonic rapidity η̂. As discussed above, the coefficients for the
rapidity-integrated cross section may be obtained by setting η̂ = 0. We have presented the full
set of the D^{(1)}_{I ab→cd} and G^I_{ab→cd} in the Appendix of our previous paper [8].
Before we continue, we mention that one expects that a jet cross section defined by a cone
algorithm will also have so-called “non-global” threshold logarithms [35, 36]. These logarithms
arise when the observable is sensitive to radiation in only a limited part of phase space, as is the
case in presence of a jet cone. For instance, a soft gluon radiated at an angle outside the jet cone
may itself emit a secondary gluon at large angle that happens to fall inside the jet cone, thereby
becoming part of the jet [35, 36]. Such configurations appear first at the next-to-next-to-leading
order, but may produce threshold logarithms at the NLL level. One may therefore wonder if our
NLL resummation formulas given above are complete. Fortunately, as an explicit example in [35]
shows, it turns out that these effects are suppressed as R log(R) in the SCA. They may therefore
be neglected at the level of approximation we are making here. Given their only mild suppression
as R → 0, a study of non-global logarithms in hadronic jet cross sections is an interesting topic
for future work.
Returning to our resummed formulas, it is instructive to consider the structure of the leading
logarithms. The LL expressions for the radiative factors are
Δ^a_N = exp[ (α_S/π) C_a ln²N ] ,   J^d_N = exp[ −(α_S/2π) C_d ln²N ] .   (22)
As discussed above, J′^c_N does not contribute at the double-logarithmic level. Therefore, for a given
partonic channel, the leading logarithms are

σ̂^{(res)}_{ab→cd}(N) ∝ exp[ (α_S/π) (C_a + C_b − C_d/2) ln²N ] .   (23)
The exponent is positive for each partonic channel, implying that the soft-gluon effects will increase
the cross section. This enhancement arises from the initial-state radiation represented by the ∆a,b
and is related to the fact that finite partonic cross sections are obtained after collinear (mass)
factorization [1, 2]. In the MS scheme such an enhancing contribution is (for a given parton
species) twice as large as the suppressing one associated with final-state radiation in Jd, for which
no mass factorization is needed. For quark or anti-quark initiated processes, the color factor
combination appearing in Eq. (23) ranges from 2CF − CF/2 = 2 for the qq → qq channel to
2CF −CA/2 = 7/6 for qq̄ → gg, while for those involving a quark-gluon initial state one has larger
factors, CF + CA − CF/2 = 11/3 (for qg → qg) or CF + CA − CA/2 = 17/6 (for qg → gq). Yet
larger factors are obtained for gluon-gluon scattering, with 2CA − CA/2 = 9/2 for gg → gg and
2CA−CF/2 = 16/3 for gg → qq̄. Initial states with more gluons therefore are expected to receive
the larger resummation effects. We mention that if the observed jet is assumed strictly massless
at threshold, an extra suppression term proportional to Jc arises (see Eq. (16)).
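The color-factor combinations quoted in this paragraph follow from the exponent of Eq. (23); the small check below (our own, using exact rational arithmetic) reproduces them:

```python
from fractions import Fraction

CA = Fraction(3)          # C_A = N_c
CF = Fraction(4, 3)       # C_F = (N_c² - 1)/(2 N_c)

def ll_exponent_color(Ca, Cb, Cd):
    # coefficient of (α_S/π) ln²N in Eq. (23): C_a + C_b - C_d/2
    return Ca + Cb - Cd / 2

channels = {
    "qq -> qq":    (CF, CF, CF),
    "qqbar -> gg": (CF, CF, CA),
    "qg -> qg":    (CF, CA, CF),
    "qg -> gq":    (CF, CA, CA),
    "gg -> gg":    (CA, CA, CA),
    "gg -> qqbar": (CA, CA, CF),
}
for name, (a, b, d) in channels.items():
    print(name, ll_exponent_color(a, b, d))
# → 2, 7/6, 11/3, 17/6, 9/2, 16/3, as quoted in the text
```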
It is customary to give the NLL expansions of the Sudakov exponents in the following way [2]:

ln Δ^a_N(α_S(µ_R²), Q²/µ_R²; Q²/µ_F²) = lnN h^{(1)}_a(λ) + h^{(2)}_a(λ, Q²/µ_R²; Q²/µ_F²) + O(α_S(α_S lnN)^k) ,   (24)

ln J^a_N(α_S(µ_R²), Q²/µ_R²) = lnN f^{(1)}_a(λ) + f^{(2)}_a(λ, Q²/µ_R²) + O(α_S(α_S lnN)^k) ,   (25)

ln J′^a_N(α_S(µ_R²)) = (C′^{(1)}_a/(2πb_0)) ln(1 − 2λ) + O(α_S(α_S lnN)^k) ,   (26)

ln Δ^{(int)ab→cd}_{I N}(α_S(µ_R²)) = (D^{(1)}_{I ab→cd}/(2πb_0)) ln(1 − 2λ) + O(α_S(α_S lnN)^k) ,   (27)
with λ = b_0 α_S(µ_R²) lnN. The LL and NLL auxiliary functions h^{(1,2)}_a and f^{(1,2)}_a are
h^{(1)}_a(λ) = + (A^{(1)}_a/(2πb_0λ)) [2λ + (1 − 2λ) ln(1 − 2λ)] ,   (28)
h^{(2)}_a(λ, Q²/µ_R²; Q²/µ_F²) = − (A^{(2)}_a/(2π²b_0²)) [2λ + ln(1 − 2λ)] + (A^{(1)}_a b_1/(2πb_0³)) [2λ + ln(1 − 2λ) + ½ ln²(1 − 2λ)]
    + (A^{(1)}_a/(2πb_0)) [2λ + ln(1 − 2λ)] ln(Q²/µ_R²) − (A^{(1)}_a/(πb_0)) λ ln(Q²/µ_F²) ,   (29)
f^{(1)}_a(λ) = − (A^{(1)}_a/(2πb_0λ)) [(1 − 2λ) ln(1 − 2λ) − 2(1 − λ) ln(1 − λ)] ,   (30)
f^{(2)}_a(λ, Q²/µ_R²) = − (A^{(1)}_a b_1/(2πb_0³)) [ln(1 − 2λ) − 2 ln(1 − λ) + ½ ln²(1 − 2λ) − ln²(1 − λ)]
    + (B^{(1)}_a/(2πb_0)) ln(1 − λ) − (A^{(2)}_a/(2π²b_0²)) [2 ln(1 − λ) − ln(1 − 2λ)]
    + (A^{(1)}_a/(2πb_0)) [2 ln(1 − λ) − ln(1 − 2λ)] ln(Q²/µ_R²) ,   (31)
where b_0, b_1 are the first two coefficients of the QCD β-function:

b_0 = (11C_A − 2N_f)/(12π) ,   b_1 = (17C_A² − 5C_A N_f − 3C_F N_f)/(24π²) .   (32)
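For orientation (our own consistency check, not from the paper): Eq. (32) gives b_0 ≈ 0.61 and b_1 ≈ 0.245 for N_f = 5, and for small λ the NLL building block lnN h^{(1)}_a(λ) of Eq. (28) collapses onto the LL exponent (α_S/π) C_a ln²N of Eq. (22):

```python
import math

Nf = 5
CA, CF = 3.0, 4.0 / 3.0
b0 = (11.0 * CA - 2.0 * Nf) / (12.0 * math.pi)                                  # Eq. (32)
b1 = (17.0 * CA**2 - 5.0 * CA * Nf - 3.0 * CF * Nf) / (24.0 * math.pi**2)       # Eq. (32)

def h1(lam, A1):
    # Eq. (28): h^(1)_a(λ) = A^(1)_a/(2π b_0 λ) [2λ + (1-2λ) ln(1-2λ)]
    return A1 / (2.0 * math.pi * b0 * lam) * (2.0 * lam + (1.0 - 2.0 * lam) * math.log(1.0 - 2.0 * lam))

alpha_s, logN = 0.01, 5.0          # small coupling, so that λ = b_0 α_S lnN « 1
lam = b0 * alpha_s * logN
print(b0, b1)
print(logN * h1(lam, CA), alpha_s / math.pi * CA * logN**2)  # nearly equal for small λ
```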
The N-independent coefficients C_{ab} in Eq. (12), which include the hard virtual corrections,
have the perturbative expansion

C_{ab} = 1 + (αS/π) C^{(1)}_{ab} + O(α_S²) .   (33)
The C^{(1)}_{ab} we need to NLL are obtained by comparing the O(αS)-expansion (not counting the
overall factor α_S² of the Born cross sections) of the resummed expression with the fixed-order
NLO result for the process, as given in [19, 21]. The full analytic expressions for the C^{(1)}_{ab} are
rather lengthy and will not be given here. For convenience, we present them in numerical form
in the Appendix. We note that apart from being useful for extracting the coefficients C′^{(1)}_a and
C^{(1)}_{ab}, the comparison of the O(αS)-expanded resummed result with the full NLO cross section also
provides an excellent check of the resummation formula, since one can verify that all leading and
next-to-leading logarithms are properly accounted for by Eq. (12).
The improved resummed hadronic cross section is finally obtained by performing an inverse
Mellin transformation, and by properly matching to the NLO cross section p_T³ dσ^{(NLO)}(x_T)/dp_T as
follows:

p_T³ dσ^{(match)}(x_T)/dp_T = Σ_{a,b,c,d} ∫_{C_MP−i∞}^{C_MP+i∞} (dN/2πi) (x_T²)^{−N+1} f^N_{a/H1}(µ_F²) f^N_{b/H2}(µ_F²)
    × [ σ̂^{(res)}_{ab→cd}(N) − (σ̂^{(res)}_{ab→cd}(N))^{(NLO)} ] + p_T³ dσ^{(NLO)}(x_T)/dp_T ,   (34)
where σ̂^{(res)}_{ab→cd} is given in Eq. (12) and (σ̂^{(res)}_{ab→cd})^{(NLO)} represents its perturbative truncation at NLO.
Thus, as a result of this matching procedure, in the final cross section in Eq. (34) the NLO cross
section is exactly taken into account, and NLL soft-gluon effects are resummed beyond those
already contained in the NLO cross section.
The functions h^{(1,2)}_a(λ) and f^{(1,2)}_a(λ) in Eqs. (28)-(31) are singular at the points λ = 1/2 and/or
λ = 1. These singularities are related to the divergent behavior of the perturbative running
coupling αS near the Landau pole, and we deal with them by using the Minimal Prescription
introduced in Ref. [2]. In the evaluation of the inverse Mellin transformation in Eq. (34), the
λ = 1. These singularities are related to the divergent behavior of the perturbative running
coupling αS near the Landau pole, and we deal with them by using the Minimal Prescription
introduced in Ref. [2]. In the evaluation of the inverse Mellin transformation in Eq. (34), the
constant C_MP is chosen in such a way that all singularities in the integrand are to the left of the
integration contour, except for the Landau singularities, which are taken to lie to its far right. We
note that, as an alternative to such a definition, one could choose to expand the resummed formula
to a finite order, say, next-to-next-to-leading order (NNLO), and neglect all terms of yet higher
order. This approach was adopted in Ref. [17]. We prefer to keep the full resummed formula in
our phenomenological applications since, depending on kinematics, high orders in perturbation
theory may still be very relevant [8]. It was actually shown in [2] that the results obtained within
the Minimal Prescription converge asymptotically to the perturbative series.
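The contour integral in Eq. (34) is straightforward to set up numerically. The toy sketch below (ours; in the real calculation the integrand would be the resummed moments of Eq. (12)) inverts the moments F(N) = 1/(N(N + 1)) of f(x) = 1 − x along a vertical contour whose abscissa c plays the role of C_MP:

```python
import math

def F(N):
    # Mellin transform of f(x) = 1 - x on [0,1]: ∫_0^1 x^{N-1}(1-x) dx = 1/(N(N+1))
    return 1.0 / (N * (N + 1.0))

def inverse_mellin(F, x, c=2.0, t_max=2000.0, dt=0.05):
    # f(x) = (1/2πi) ∫_{c-i∞}^{c+i∞} x^{-N} F(N) dN
    #      = (1/π) ∫_0^∞ Re[ x^{-(c+it)} F(c+it) ] dt   (by conjugate symmetry)
    total, t = 0.0, 0.5 * dt
    while t < t_max:
        N = complex(c, t)
        total += (x ** (-N) * F(N)).real * dt
        t += dt
    return total / math.pi

print(inverse_mellin(F, 0.5))  # ≈ 0.5 = 1 - x
```

For these toy moments there is no Landau singularity, so any c to the right of the poles at N = 0, −1 works; in the resummed case the Minimal Prescription fixes the contour between the rightmost physical singularity and the Landau pole.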
This completes the presentation of all ingredients to the NLL threshold resummation of the
hadronic single-inclusive jet cross section. We will now turn to some phenomenological applica-
tions.
4 Phenomenological Results
We will study the effects of threshold resummation on the single-inclusive jet cross section in pp̄
collisions at √S = 1.8 TeV and √S = 630 GeV c.m. energies, and in pp collisions at √S =
200 GeV. These choices are relevant for comparisons to Tevatron and RHIC data, respectively.
Unless otherwise stated, we always set the factorization and renormalization scales to µF = µR =
pT and use the NLO CTEQ6M [31] set of parton distributions, along with the two-loop expression
for the strong coupling constant αS.
We will first analyze the relevance of the different subprocesses contributing to single-jet pro-
duction. The left part of Fig. 3 shows the relative contributions by “qq” (qq, qq′, qq̄ and qq̄′
combined), qg and gg initial states at Born level (dashed lines) and for the NLL resummed case
(without matching, solid lines). Here we have chosen the case of pp̄ collisions at √S = 1.8 TeV.
As can be seen, the overall change in the curves when going from Born level to the resummed case
is moderate. The main noticeable effect is an increase of the relative importance of processes with
gluon initial states toward higher pT , compensated by a similar decrease in that of the qq channels.
In the right part of Fig. 3 we show the enhancements from threshold resummation for each initial
partonic state individually, and also for their sum. At the higher pT , where threshold resummation
is expected to be best applicable, the enhancements are biggest for the gg channel, followed by
the qg one. All patterns observed in Fig. 3 are straightforwardly understood from Eq. (23), which
demonstrates that resummation yields bigger enhancements when the number of gluons in the
initial state is larger. We note that results very similar to those shown in the figure hold also at
√S = 630 GeV, if the same value of xT = 2pT/√S is considered. This remains qualitatively true
even when we go to pp collisions at √S = 200 GeV, except for the larger enhancement in the
Figure 3: Left: relative contributions of the various partonic initial states to the single-inclusive jet
cross section in pp̄ collisions at √S = 1.8 TeV, at Born level (dashed) and for the NLL resummed
case (solid). We have chosen the jet cone size R = 0.7. Right: ratios between resummed and Born
contributions for the various channels, and for the full jet cross section.
Figure 4: Same as Fig. 3, but for pp collisions at √S = 200 GeV and R = 0.4.
quark contribution, due to the dominance of the qq channel (instead of the qq̄ as in pp̄ collisions)
with a larger color factor combination in the Sudakov exponent (see the discussion after Eq. (22)).
Figure 4 repeats the studies made for Fig. 3 for this case. As one can see, if the same xT
as in Fig. 3 is considered, the qq scattering contributions are overall slightly less important. At
the same time, resummation effects are overall somewhat larger because the pT values are now
much smaller than in Fig. 3, so that the strong coupling constant that appears in the resummed
exponents is larger.
Figure 5: Ratio between the expansion to NLO of the (unmatched) resummed cross section and
the full NLO one (in the SCA), for pp̄ collisions at √S = 1.8 TeV (solid) and √S = 630 GeV
(dots), and for pp collisions at √S = 200 GeV (dashed).
Before presenting the results for the matched NLL resummed jet cross section and K-factors,
we would like to identify the kinematical region where the logarithmic terms constitute the bulk
of the perturbative corrections and subleading contributions are unimportant. Only in this region
is the resummation expected to provide an accurate picture of the higher-order terms. We can
determine this region by comparing the resummed formula, expanded to NLO, to the full fixed-
order (NLO) perturbative result, that is, by comparing the last two terms in Eq. (34). Figure 5
shows this comparison for both Tevatron energies and for the RHIC case, as function of the
“scaling” variable xT . As can be observed, the expansion correctly reproduces the NLO result
within at most a few per cent over a region corresponding to pT ≳ 200 GeV for the higher Tevatron
energy, and to pT ≳ 30 GeV at RHIC. This demonstrates that, at this order, the perturbative
corrections are strongly dominated by the terms of soft and/or collinear origin that are addressed
by resummation. The accuracy of the expansion improves toward the larger values of the jet
transverse momentum, where one approaches the threshold limit more closely.
Having established the importance of the threshold corrections in a kinematic regime of interest
for phenomenology, we show in Fig. 6 the impact of the resummation on the predicted single-jet
cross section at √S = 1.8 TeV. NLO and NLL resummed results are presented, computed at three
different values of the factorization and renormalization scales, defined by µF = µR = ζpT (with
ζ = 1, 2, 1/2). The most noticeable effect is a remarkable reduction in the scale dependence of
the cross section. This observation was also made in the previous study [17]. If, as customary,
one defines a theoretical scale “uncertainty” by ∆ ≡ (σ(ζ = 0.5) − σ(ζ = 2))/σ(ζ = 1), the
improvement is considerable. While ∆ lies between 20 and 25% at NLO, it never exceeds 8% for
the matched NLL result. The inset plot shows the NLL K-factor, defined as

K^(res) = ( dσ^(match)/dpT ) / ( dσ^(NLO)/dpT ) ,   (35)
at each of the scales. The corrections from resummation on top of NLO are typically very moderate,
of the order of a few per cent, depending on the set of scales chosen. The higher-order
corrections increase for larger values of the jet transverse momentum. These findings are again
consistent with those of [17], even though more detailed comparisons reveal some quantitative
differences that must be related to either the different choice of the resummed final-state jet func-
tion in [17] (see discussion in Sec. 3), or to the fact that [17] uses only a NNLO expansion of the
resummed cross section. The main features of our results remain unchanged when we go to the
Tevatron-run II energy of √S = 1.96 TeV, at which measured jet cross sections are now available [37]. Quantitatively very similar results are also found for the lower Tevatron center-of-mass
energy, as seen in Fig. 7. In the case of pp collisions at √S = 200 GeV, presented in Fig. 8, a
similar pattern emerges, even though the resummation effects tend to be overall somewhat more
substantial here.
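As a toy illustration of the two quantities used above, the scale-uncertainty measure ∆ and the K-factor of Eq. (35) are simple ratios of cross-section values. The numbers below are invented for illustration only; they are not read off Fig. 6, they merely reproduce the qualitative NLO-versus-NLL pattern described in the text:

```python
# Toy illustration of Delta = (sigma(zeta=0.5) - sigma(zeta=2)) / sigma(zeta=1)
# and of the K-factor of Eq. (35). The cross-section values are made up.
def scale_uncertainty(sigma):
    """Delta, from dsigma/dp_T evaluated at scales zeta*p_T, zeta = 0.5, 1, 2."""
    return (sigma[0.5] - sigma[2.0]) / sigma[1.0]

def k_factor(dsigma_matched, dsigma_nlo):
    """K(res) of Eq. (35): matched NLL resummed over NLO, at a given p_T."""
    return dsigma_matched / dsigma_nlo

# hypothetical dsigma/dp_T values (arbitrary units) at the three scale choices
sigma_nlo = {0.5: 1.22, 1.0: 1.00, 2.0: 0.97}   # ~25% spread at NLO
sigma_nll = {0.5: 1.08, 1.0: 1.04, 2.0: 1.01}   # <8% spread after matching

print(scale_uncertainty(sigma_nlo))   # about 0.25
print(scale_uncertainty(sigma_nll))   # about 0.07
print(k_factor(sigma_nll[1.0], sigma_nlo[1.0]))
```

The reduction of ∆ from roughly 25% to below 8% mirrors the behavior reported for Fig. 6.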
Figure 6: NLO and NLL results for the single-inclusive jet cross section in pp̄ collisions at √S = 1.8 TeV, for different values of the renormalization and factorization scales. We have chosen R = 0.7.
The inset plot shows the corresponding K-factors as defined in Eq. (35).
In Fig. 9 we analyze how the resummation effects build up order by order in perturbation
theory. We expand the matched resummed formula beyond NLO and define the “partial” soft-gluon
K-factors as

K_n = ( dσ^(match)/dpT truncated at O(α_S^(2+n)) ) / ( dσ^(NLO)/dpT ) ,   (36)

which for n = 2, 3, . . . give the additional enhancement over full NLO due to the O(α_S^(2+n)) terms
in the resummed formula¶. Formally, K_1 = 1 and K_∞ = K^(res) of Eq. (35). The results for
K_{2,3,4,∞} are given in the figure, for the case of pp̄ collisions at √S = 1.8 TeV. One can see that
¶We recall that the Born cross sections are of O(α_S^2), hence the additional power of two in this definition.
Figure 7: Same as Fig. 6, but for √S = 630 GeV.
contributions beyond N3LO (n = 3) are very small, and that the O(α_S^6) result can hardly be
distinguished from the full NLL one.
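The truncation pattern behind Eq. (36) can be mimicked with a toy perturbative series. The coefficients below are invented for illustration (the real terms come from expanding the resummed exponent); the point is only that when the series converges rapidly, the partial K_n saturate after a few orders, as observed in Fig. 9:

```python
# Toy model of the "partial" K-factors of Eq. (36): truncate a correction
# series in alpha_s at successive orders n and watch K_n saturate.
alpha_s = 0.1
# invented coefficients of the O(alpha_s^(2+n)) terms beyond NLO, n >= 2;
# here we abuse the key k as the power of alpha_s of each term
coeffs = {2: 4.0, 3: 9.0, 4: 16.0, 5: 25.0, 6: 36.0}

def K_partial(n):
    """K_n: 1 plus all toy correction terms up to and including alpha_s^n."""
    return 1.0 + sum(c * alpha_s**k for k, c in coeffs.items() if k <= n)

K = {n: K_partial(n) for n in (1, 2, 3, 4, 6)}
# K_1 = 1 by construction; successive K_n rapidly approach the full sum
print(K)
```

With these numbers the step from K_3 to K_6 is already below the per-mille level, the qualitative behavior reported for the O(α_S^6) result versus full NLL.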
It is interesting to contrast the rather modest enhancement of the jet cross section by resum-
mation to the dramatic resummation effects that we observed in [8] in the case of single-inclusive
pion production, H1H2 → πX , in fixed-target scattering at typical c.m. energies of √S ∼ 30 GeV.
Even though in both cases the same partonic processes are involved at Born level, there are several
important differences. First of all, the values of pT are much smaller in fixed-target scattering
(even though roughly similar values of xT = 2pT/√S are probed), so that the strong coupling
constant αS(pT ) is larger and resummation effects are bound to be more significant. Furthermore,
for the process H1H2 → πX one needs to introduce fragmentation functions into the theoretical
calculation that describe the formation of the observed hadron from a final-state parton. As the
hadron takes only a certain fraction z ≳ 0.5 of the parent parton’s momentum, the partonic hard-scattering necessarily has to be at the higher transverse momentum pT/z in order to produce a
hadron with pT . Thus one is closer to partonic threshold than in case of a jet produced with
transverse momentum pT which takes all of a final-state parton’s momentum. In addition, it
turns out [8, 38] that due to the factorization of final-state collinear singularities associated with
the fragmentation functions, the “jet” function J′_{c,N} in the resummation formula Eq. (12) is to be
replaced by a factor ∆_{c,N}, which has enhancing double logarithms.
Finally, as one illustrative example, we compare our resummed jet cross section to data from
CDF [13] at √S = 1800 GeV. While so far we have always considered the cross section integrated
over all jet rapidities, we here need to account for the fact that the data cover only a finite
region in rapidity, 0.1 ≤ |η| ≤ 0.7. Also, we would like to properly match the jet algorithm
chosen in experiment, rather than using the SCA. We have mentioned before that both these
issues can be accurately addressed by “rescaling” the resummed cross section by an appropriate
Figure 8: Same as Fig. 6, but for pp collisions at √S = 200 GeV and R = 0.4.
ratio of NLO cross sections. We simply multiply our K-factors defined in Eq. (35) and shown
in Fig. 6 by dσ(MC)(0.1 ≤ |η| ≤ 0.7)/dpT , the NLO cross section obtained with a full Monte-
Carlo code [24], in the experimentally accessed rapidity regime. The comparison between the
data and the NLO and NLL-resummed cross sections is shown in Fig. 10 in terms of the ratios
“(data−theory)/theory”. As expected from Fig. 6, the impact of resummation is moderate and
in fact smaller than the current uncertainties of the parton distributions [37]. Nonetheless, it does
lead to a slight improvement of the comparison and, in particular, the plot again demonstrates
the reduction of scale dependence by resummation.
5 Conclusions
We have studied in this paper the resummation of large logarithmic threshold corrections to
the partonic cross sections contributing to single-inclusive jet production at hadron colliders. Our
study differs from previous work [17] mostly in that we allow the jet to have a finite invariant mass
at partonic threshold, which is consistent with the experimental definitions of jet cross sections
and with the available NLO calculations. Moreover, using semi-analytical expressions for the
NLO partonic cross sections derived in the SCA [19, 21], we have extracted the N−independent
coefficients that appear in the resummation formula, and properly matched our resummed cross
section to the NLO one. We hope that with these improvements yet more realistic estimates of
the higher-order corrections to jet cross sections in the threshold regime emerge.
It is well known that the NLO description of jet production at hadron colliders is overall very
successful, within the uncertainties of the theoretical framework and the experimental data. From
that perspective, it is gratifying to see that the effects of NLL resummation are relatively moderate.
Figure 9: Soft-gluon K_n factors as defined in Eq. (36), for pp̄ collisions at √S = 1.8 TeV.
On the other hand, resummation leads to a significant decrease of the scale dependence, and we
expect that knowledge of the resummation effects should be useful in comparisons with future,
yet more precise, data, and for extracting parton distribution functions. Given the general success
of the NLO description, we have mostly focused on K-factors for the resummed cross section
over the NLO one, and only given one example of a more detailed comparison with experimental
data. We believe that these K-factors may be readily used in conjunction with other, more flexible
NLO Monte-Carlo programs for jet production, to estimate threshold-resummation effects on cross
sections for other jet algorithms and possibly for larger cone sizes.
Acknowledgments
We are grateful to Stefano Catani, Barbara Jäger, Nikolaos Kidonakis, Douglas Ross, George
Sterman, and Marco Stratmann for helpful discussions. DdF is supported in part by UBACYT
and CONICET. WV is supported by the U.S. Department of Energy under contract number
DE-AC02-98CH10886.
Appendix: First-order coefficients C^(1)_{ab} in the SCA

In this appendix we collect the process-dependent coefficients C^(1)_{ab} for the various partonic channels
in jet hadroproduction in the SCA. The C^(1)_{ab} are constant, that is, they do not depend on the
Mellin-moment variable N . They may be extracted by expanding the resummed cross section in
Eq. (12) to first order and comparing it to the full NLO cross section in the SCA. For the sake
of simplicity, we provide the C^(1)_{ab} only in numerical form, as the full analytic coefficients have
relatively lengthy expressions. We find:

C^(1)_{qq′} = 17.9012 + 1.38763 log R ,
C^(1)_{qq̄′} = 19.0395 + 1.38763 log R ,
C^(1)_{qq̄} = 13.4171 + 1.6989 log R ,
C^(1)_{qq} = 17.1973 + 1.38763 log R ,
C^(1)_{qg} = 14.4483 + 2.58824 log R ,
C^(1)_{gg} = 14.5629 + 3.67884 log R ,   (37)

where R is the jet cone size.

Figure 10: Ratios “(data−theory)/theory” for data from CDF at 1.8 TeV [13] and for NLO and
NLL resummed theoretical calculations. We have chosen the theory result at scale pT as “default”;
results for other scales are also displayed in terms of their relative shifts with respect to the default
theory.
References
[1] G. Sterman, Nucl. Phys. B 281, 310 (1987); S. Catani and L. Trentadue, Nucl. Phys. B 327,
323 (1989); Nucl. Phys. B 353, 183 (1991).
[2] S. Catani, M. L. Mangano, P. Nason and L. Trentadue, Nucl. Phys. B 478, 273 (1996)
[arXiv:hep-ph/9604351].
[3] N. Kidonakis and G. Sterman, Nucl. Phys. B 505, 321 (1997) [arXiv:hep-ph/9705234].
[4] N. Kidonakis, G. Oderda and G. Sterman, Nucl. Phys. B 525, 299 (1998)
[arXiv:hep-ph/9801268].
[5] N. Kidonakis, G. Oderda and G. Sterman, Nucl. Phys. B 531, 365 (1998)
[arXiv:hep-ph/9803241].
[6] E. Laenen, G. Oderda and G. Sterman, Phys. Lett. B 438, 173 (1998) [arXiv:hep-ph/9806467].
[7] R. Bonciani, S. Catani, M. L. Mangano and P. Nason, Phys. Lett. B 575, 268 (2003)
[arXiv:hep-ph/0307035].
[8] D. de Florian and W. Vogelsang, Phys. Rev. D 71, 114004 (2005) [arXiv:hep-ph/0501258].
[9] S. Catani, M. L. Mangano and P. Nason, JHEP 9807, 024 (1998) [arXiv:hep-ph/9806484];
S. Catani, M. L. Mangano, P. Nason, C. Oleari and W. Vogelsang, JHEP 9903, 025 (1999)
[arXiv:hep-ph/9903436].
[10] N. Kidonakis and J. F. Owens, Phys. Rev. D 61, 094004 (2000) [arXiv:hep-ph/9912388].
[11] G. Sterman and W. Vogelsang, JHEP 0102, 016 (2001) [arXiv:hep-ph/0011289].
[12] D. de Florian and W. Vogelsang, Phys. Rev. D 72, 014014 (2005) [arXiv:hep-ph/0506150].
[13] A. A. Affolder et al. [CDF Collaboration], Phys. Rev. D 64, 032001 (2001) [Erratum-ibid. D
65, 039903 (2002)] [arXiv:hep-ph/0102074].
[14] B. Abbott et al. [D0 Collaboration], Phys. Rev. Lett. 82, 2451 (1999) [arXiv:hep-ex/9807018];
Phys. Rev. D 64, 032003 (2001) [arXiv:hep-ex/0012046]; V. M. Abazov et al. [D0 Collabora-
tion], Phys. Lett. B 525, 211 (2002) [arXiv:hep-ex/0109041].
[15] S. Kuhlmann, H. L. Lai and W. K. Tung, Phys. Lett. B 409, 271 (1997)
[arXiv:hep-ph/9704338]; J. Huston et al., Phys. Rev. Lett. 77, 444 (1996)
[arXiv:hep-ph/9511386]; H. L. Lai et al., Phys. Rev. D 55, 1280 (1997)
[arXiv:hep-ph/9606399].
[16] B. I. Abelev et al. [STAR Collaboration], Phys. Rev. Lett. 97, 252001 (2006)
[arXiv:hep-ex/0608030].
[17] N. Kidonakis and J. F. Owens, Phys. Rev. D 63, 054019 (2001) [arXiv:hep-ph/0007268].
[18] for initial work on jet production in the “Soft-collinear effective theory”, see: C. W. Bauer
and M. D. Schwartz, Phys. Rev. Lett. 97, 142001 (2006) [arXiv:hep-ph/0604065].
[19] M. Furman, Nucl. Phys. B 197, 413 (1982); F. Aversa, P. Chiappetta, M. Greco and J. P. Guil-
let, Nucl. Phys. B 327, 105 (1989); Z. Phys. C 46, 253 (1990).
[20] J. P. Guillet, Z. Phys. C 51, 587 (1991).
[21] B. Jäger, M. Stratmann and W. Vogelsang, Phys. Rev. D 70, 034010 (2004)
[arXiv:hep-ph/0404057].
[22] see also: S.G. Salesch, Ph.D. thesis, Hamburg University, 1993, DESY-93-196 (unpublished).
[23] F. Aversa, P. Chiappetta, M. Greco, and J.-Ph. Guillet, Phys. Rev. Lett. 65, 401 (1990); F.
Aversa, P. Chiappetta, L. Gonzales, M. Greco, and J.-Ph. Guillet, Z. Phys. C 49, 459 (1991).
[24] S. Frixione, Nucl. Phys. B 507, 295 (1997) [arXiv:hep-ph/9706545]; D. de Florian, S. Frixione,
A. Signer and W. Vogelsang, Nucl. Phys. B 539, 455 (1999) [arXiv:hep-ph/9808262].
[25] J. E. Huth et al., FERMILAB-CONF-90-249-E Presented at “Summer Study on High Energy
Physics, Research Directions for the Decade”, Snowmass, CO, Jun 25 - Jul 13, 1990.
[26] S. D. Ellis, Z. Kunszt and D. E. Soper, Phys. Rev. Lett. 62, 726 (1989); Phys. Rev.
D 40, 2188 (1989); Phys. Rev. Lett. 64, 2121 (1990); Phys. Rev. Lett. 69, 3615 (1992)
[arXiv:hep-ph/9208249].
[27] J. Alitti et al. [UA2 Collaboration], Phys. Lett. B 257, 232 (1991); F. Abe et al. [CDF
Collaboration], Phys. Rev. D 45, 1448 (1992); S. Abachi et al. [D0 Collaboration], Phys.
Rev. D 53, 6000 (1996).
[28] S. Catani, Yu.L. Dokshitzer, M.H. Seymour and B.R. Webber, Nucl. Phys. B 406, 187 (1993);
S.D. Ellis and D.E. Soper, Phys. Rev. D 48, 3160 (1993).
[29] see also the discussions about jet definitions and algorithms in, for example: W.B. Kilgore
and W.T. Giele, Phys. Rev. D 55, 7183 (1997); M.H. Seymour, Nucl. Phys. B 513 (1998)
269; proceedings of the “8th International Workshop on Deep Inelastic Scattering and QCD
(DIS 2000)”, Liverpool, England, 2000, eds. J.A. Gracey and T. Greenshaw, World Scientific,
2001, p. 27.
[30] see: J. C. Collins, D. E. Soper and G. Sterman, Adv. Ser. Direct. High Energy Phys. 5, 1
(1988) [arXiv:hep-ph/0409313], and references therein.
[31] J. Pumplin et al., JHEP 0207, 012 (2002) [arXiv:hep-ph/0201195].
[32] J. Kodaira and L. Trentadue, Phys. Lett. B 112, 66 (1982); Phys. Lett. B 123, 335 (1983);
S. Catani, E. D’Emilio and L. Trentadue, Phys. Lett. B 211, 335 (1988).
[33] N. Kidonakis, Int. J. Mod. Phys. A 15, 1245 (2000) [arXiv:hep-ph/9902484].
[34] the two-loop corrections to these anomalous dimension matrices were recently calculated
in: S. Mert Aybat, L. J. Dixon and G. Sterman, Phys. Rev. Lett. 97, 072001 (2006)
[arXiv:hep-ph/0606254]; S. Mert Aybat, L. J. Dixon and G. Sterman, Phys. Rev. D 74,
074004 (2006) [arXiv:hep-ph/0607309].
[35] M. Dasgupta and G. P. Salam, Phys. Lett. B 512, 323 (2001) [arXiv:hep-ph/0104277]; JHEP
0203, 017 (2002) [arXiv:hep-ph/0203009].
[36] C. F. Berger, T. Kucs and G. Sterman, Phys. Rev. D 65, 094031 (2002)
[arXiv:hep-ph/0110004].
[37] A. Abulencia [CDF - Run II Collaboration], arXiv:hep-ex/0701051; M. Voutilainen [D0 Col-
laboration], arXiv:hep-ex/0609026.
[38] M. Cacciari and S. Catani, Nucl. Phys. B 617, 253 (2001) [arXiv:hep-ph/0107138].
|
0704.1679 | Equation of State in Relativistic Magnetohydrodynamics: variable versus
constant adiabatic index | Mon. Not. R. Astron. Soc. 000, 1–14 (2007) Printed 11 February 2013 (MN LATEX style file v2.2)
Equation of State in Relativistic Magnetohydrodynamics:
variable versus constant adiabatic index
A. Mignone1,2 ⋆ and Jonathan C. McKinney3⋆
1INAF Osservatorio Astronomico di Torino, 10025 Pino Torinese, Italy
2Dipartimento di Fisica Generale dell’Università, Via Pietro Giuria 1, I-10125 Torino, Italy
3Institute for Theory and Computation, Center for Astrophysics, Harvard University, 60 Garden St., Cambridge, MA, 02138
Accepted 2007 April 12. Received 2007 April 12; in original form 2007 January 25
ABSTRACT
The role of the equation of state for a perfectly conducting, relativistic magnetized
fluid is the main subject of this work. The ideal constant Γ-law equation of state,
commonly adopted in a wide range of astrophysical applications, is compared with a
more realistic equation of state that better approximates the single-species relativistic
gas. The paper focuses on three different topics. First, the influence of a more realistic
equation of state on the propagation of fast magneto-sonic shocks is investigated.
This calls into question the validity of the constant Γ-law equation of state in problems
where the temperature of the gas substantially changes across hydromagnetic waves.
Second, we present a new inversion scheme to recover primitive variables (such as
rest-mass density and pressure) from conservative ones that allows for a general equa-
tion of state and avoids catastrophic numerical cancellations in the non-relativistic
and ultrarelativistic limits. Finally, selected numerical tests of astrophysical relevance
(including magnetized accretion flows around Kerr black holes) are compared using
different equations of state. Our main conclusion is that the choice of a realistic equa-
tion of state can considerably bear upon the solution when transitions from cold to
hot gas (or vice versa) are present. Under these circumstances, a polytropic equation
of state can significantly endanger the solution.
Key words: equation of state - relativity - hydrodynamics - shock waves - methods:
numerical - MHD
1 INTRODUCTION
Recent developments in numerical hydrodynamics have
made a breach in the understanding of astrophysical phe-
nomena commonly associated with relativistic magnetized
plasmas. Existence of such flows has nowadays been largely
witnessed by observations indicating superluminal motion in
radio loud active galactic nuclei and galactic binary systems,
as well as highly energetic events occurring in proximity of
X-ray binaries and super-massive black holes. Strong evi-
dence suggests that the two scenarios may be closely related
and that the production of relativistic collimated jets results
from magneto-centrifugal mechanisms taking place in the in-
ner regions of rapidly spinning accretion disks (Meier et al.
2001).
Due to the high degree of nonlinearity present in the
equations of relativistic magnetohydrodynamics (RMHD
henceforth), analytical models are often of limited appli-
cability, relying on simplified assumptions of time inde-
⋆ E-mail:[email protected](AM);[email protected](JCM)
pendence and/or spatial symmetries. For this reason, they
are frequently superseded by numerical models that appeal
to a consolidated theory based on finite difference meth-
ods and Godunov-type schemes. The propagation of rel-
ativistic supersonic jets without magnetic field has been
studied, for instance, in the pioneering work of van Putten
(1993); Duncan & Hughes (1994) and, subsequently, by
Martí et al. (1997); Hardee et al. (1998); Aloy et al. (1999);
Mizuta et al. (2004) and references therein. Similar inves-
tigations in presence of poloidal and toroidal magnetic
fields have been carried on by Nishikawa et al. (1997);
Koide (1997); Komissarov (1999) and more recently by
Leismann et al. (2005); Mignone et al. (2005).
The majority of analytical and numerical models, in-
cluding the aforementioned studies, makes extensive use of
the polytropic equation of state (EoS henceforth), for which
the specific heat ratio is constant and equal to 5/3 (for a
cold gas) or to 4/3 (for a hot gas). However, the theory of
relativistic perfect gases (Synge 1957) teaches that, in the
limit of negligible free path, the ratio of specific heats can-
not be held constant if consistency with the kinetic theory
is to be required. This was shown in an even earlier work by
Taub (1948), where a fundamental inequality relating spe-
cific enthalpy and temperature was proved to hold.
Although these results have been known for many
decades, only few investigators seem to have faced this im-
portant aspect. Duncan et al. (1996) suggested, in the con-
text of extragalactic jets, the importance of self-consistently
computing a variable adiabatic index rather than using a
constant one. This may be advisable, for example, when
the dynamics is regulated by multiple interactions of shock
waves, leading to the formation of shock-heated regions in
an initially cold gas. Lately, Scheck et al. (2002) addressed
similar issues by investigating the long term evolution of
jets with an arbitrary mixture of electrons, protons and
electron-positron pairs. Similarly, Meliani et al. (2004) con-
sidered thermally accelerated outflows in proximity of com-
pact objects by adopting a variable effective polytropic index
to account for transitions from non-relativistic to relativis-
tic temperatures. Similar considerations pertain to models
of Gamma Ray Burst (GRB) engines including accretion
discs, which have an EoS that must account for a combi-
nation of protons, neutrons, electrons, positrons, and neu-
trinos, etc. and must include the effects of electron degen-
eracy, neutronization, photodisintegration, optical depth of
neutrinos, etc. (Popham et al. 1999; Di Matteo et al. 2002;
Kohri & Mineshige 2002; Kohri et al. 2005). However, for
the disk that is mostly photodisintegrated and optically
thin to neutrinos, a decent approximation of such EoS is
a variable Γ-law with Γ = 5/3 when the temperature is below m_e c²/k_B and Γ = 4/3
when above m_e c²/k_B due to the
production of positrons at high temperatures that gives a
relativistic plasma (Broderick, McKinney, Kohri in prep.).
Thus, the variable EoS considered here may be a reasonable
approximation of GRB disks once photodisintegration has
generated mostly free nuclei.
The additional complexity introduced by more elabo-
rate EoS comes at the price of extra computational cost since
the EoS is frequently used in the process of obtaining numer-
ical solutions, see for example, Falle & Komissarov (1996).
Indeed, for the Synge gas, the correct EoS does not have a
simple analytical expression and the thermodynamics of the
fluid becomes entirely formulated in terms of the modified
Bessel functions.
Recently Mignone et al. (2005a, MPB henceforth) in-
troduced, in the context of relativistic non-magnetized flows,
an approximate EoS that differs only by a few percent from
the theoretical one. The advantage of this approximate EoS,
earlier adopted by Mathews (1971), is its simple analytical
representation. A slightly better approximation, based on an
analytical expression, was presented by Ryu et al. (2006).
In the present work we wish to discuss the role of the
EoS in RMHD, with a particular emphasis to the one pro-
posed by MPB, properly generalized to the context of rel-
ativistic magnetized flows. Of course, it is still a matter of
debate the extent to which equilibrium thermodynamic prin-
ciples can be correctly prescribed when significant deviations
from the single-fluid ideal approximation may hold (e.g.,
non-thermal particle distributions, gas composition, cosmic
ray acceleration and losses, anisotropy, and so forth). Nev-
ertheless, as the next step in a logical course of action, we
will restrict our attention to a single aspect - namely the use
of a constant polytropic versus a variable one - and we will
ignore the influence of such non-ideal effects (albeit poten-
tially important) on the EoS.
In §2, we present the relevant equations and discuss the
properties of the new EoS versus the more restrictive con-
stant Γ-law EoS. In §3, we consider the propagation of fast
magneto-sonic shock waves and solve the jump conditions
across the front using different EoS. As we shall see, this
calls into question the validity of the constant Γ-law EoS
in problems where the temperature of the gas substantially
changes across hydromagnetic waves. In §4, we present nu-
merical simulations of astrophysical relevance such as blast
waves, axisymmetric jets, and magnetized accretion disks
around Kerr black holes. A short survey of some existing
models is conducted using different EoS’s in order to deter-
mine if significant interesting deviations arise. These results
should be treated as a guide to some possible avenues of
research rather than as the definitive result on any individ-
ual topic. Results are summarized in §5. In the Appendix,
we present a description of the primitive variable inversion
scheme.
2 RELATIVISTIC MHD EQUATIONS
In this section we present the equations of motion for rel-
ativistic MHD, discuss the validity of the ideal gas EoS as
applied to a perfect gas, and review an alternative EoS that
properly models perfect gases in both the hot (relativistic)
and cold (non-relativistic) regimes.
2.1 Equations of Motion
Our starting point are the relativistic MHD equations in
conservative form:
+ ∇ ·
vv − bb + Ipt
vB − Bv
= 0 , (1)
together with the divergence-free constraint ∇·B = 0, where
v is the velocity, γ is the Lorentz factor, w_t ≡ (ρh + b²) is
the relativistic total (gas+magnetic) enthalpy, p_t = p + b²/2
is the total (gas+magnetic) fluid pressure, B is the lab-frame
field, and the field in the fluid frame is given by

b = γ { v·B , B/γ² + v (v·B) } ,   (2)

with an energy density of

b² ≡ |b|² = |B|²/γ² + (v·B)² .   (3)
Units are chosen such that the speed of light is equal to one.
Notice that the fluxes entering in the induction equation
are the components of the electric field that, in the infinite
conductivity approximation, become
Ω = −v × B . (4)
The non-magnetic case is recovered by letting B → 0 in the
previous expressions.
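As a quick numerical cross-check of Eqs. (2)–(3) (a sketch, assuming NumPy; not taken from the paper), one can build the fluid-frame field four-vector from a random lab-frame v and B and verify that its Minkowski norm with signature (−,+,+,+) reproduces |b|² = |B|²/γ² + (v·B)²:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(-0.5, 0.5, 3)          # |v| < 1 in units with c = 1
B = rng.uniform(-2.0, 2.0, 3)          # lab-frame magnetic field
gamma = 1.0 / np.sqrt(1.0 - v @ v)

# fluid-frame field, Eq. (2): b^0 = gamma (v.B), b^i = gamma (B^i/gamma^2 + v^i (v.B))
b0 = gamma * (v @ B)
bi = gamma * (B / gamma**2 + v * (v @ B))

# Minkowski norm b_mu b^mu with signature (-,+,+,+)
b2 = bi @ bi - b0**2

# Eq. (3): |b|^2 = |B|^2/gamma^2 + (v.B)^2
b2_eq3 = (B @ B) / gamma**2 + (v @ B) ** 2
print(abs(b2 - b2_eq3))   # round-off level: the two expressions agree
```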
Figure 1. Equivalent Γ (top left), specific enthalpy (top right),
sound speed (bottom left) and specific internal energy (bottom
right) as functions of temperature Θ = p/ρ. Different lines correspond to the various EoS mentioned in the text: the ideal Γ = 5/3-
law (dotted line), ideal Γ = 4/3-law (dashed line), TM EoS (solid
line). For clarity the Synge-gas (dashed-dotted line) has been
plotted only in the top left panel, where the “unphysical region”
marks the area where Taub’s inequality is not fulfilled.
The conservative variables are, respectively, the labora-
tory density D, the three components of momentum mk and
magnetic field Bk and the total energy density E:
D = ργ ,   (5)

m_k = (Dhγ + |B|²) v_k − (v·B) B_k ,   (6)

E = Dhγ − p + |B|²/2 + ( |v|²|B|² − (v·B)² )/2 ,   (7)
The specific enthalpy h and internal energy ǫ of the gas
are related by

h = 1 + ǫ + p/ρ ,   (8)
and an additional equation of state relating two thermody-
namical variables (e.g. ρ and ǫ) must be specified for proper
closure. This is the subject of the next section.
Equations (5)–(7) are routinely used in numerical codes
to recover conservative variables from primitive ones (e.g.,
ρ, v, p and B). The inverse relations cannot be cast in
closed form and require the solution of one or more non-
linear equations. Noble et al. (2006) review several methods
of inversion for the constant Γ-law, for which ρǫ = p/(Γ−1).
We present, in Appendix A, the details of a new inversion
procedure suitable for a more general EoS.
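As a minimal sketch of what such an inversion involves, here is the non-magnetic, constant Γ-law special case in one dimension (this is not the general scheme of Appendix A; function names and the root bracket are illustrative assumptions). Given D, m_x and E, one solves a single nonlinear equation for the pressure: a trial p fixes v_x = m_x/(E + p), hence γ, ρ and h, and the root is where the rebuilt ρhγ² matches E + p:

```python
import numpy as np
from scipy.optimize import brentq

GAMMA = 5.0 / 3.0   # constant Gamma-law, for this sketch only

def to_conserved(rho, vx, p):
    """Primitive -> conservative, Eqs. (5)-(7) with B = 0, 1D."""
    g = 1.0 / np.sqrt(1.0 - vx**2)
    h = 1.0 + GAMMA / (GAMMA - 1.0) * p / rho
    W = rho * h * g**2                  # rho h gamma^2
    return rho * g, W * vx, W - p       # D, m_x, E

def to_primitive(D, mx, E):
    """Conservative -> primitive via a 1D root-find on the pressure."""
    def f(p):
        vx = mx / (E + p)
        g = 1.0 / np.sqrt(1.0 - vx**2)
        rho = D / g
        # residual: rho h gamma^2 - (E + p) must vanish
        return (rho + GAMMA / (GAMMA - 1.0) * p) * g**2 - (E + p)
    p = brentq(f, 1e-12, 10.0 * E)      # assumed bracket for the root
    vx = mx / (E + p)
    return D * np.sqrt(1.0 - vx**2), vx, p

D, mx, E = to_conserved(1.0, 0.5, 0.1)
print(to_primitive(D, mx, E))   # recovers (rho, vx, p) = (1.0, 0.5, 0.1)
```

For a general EoS such as Eq. (13), only the enthalpy closure inside the residual changes, which is the essence of the scheme described in Appendix A.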
2.2 Equation of State
Proper closure to the conservation law (1) is required in
order to solve the equations. This is achieved by specifying
an EoS relating thermodynamic quantities. The theory of
relativistic perfect gases shows that the specific enthalpy is
a function of the temperature Θ = p/ρ alone and it takes
the form (Synge 1957)

h = K3(1/Θ) / K2(1/Θ) ,   (9)
where K2 and K3 are, respectively, the order 2 and 3 modi-
fied Bessel functions of the second kind. Equation (9) holds
for a gas composed of material particles with the same mass
and in the limit of small free path when compared to the
sound wavelength.
Direct use of Eq. (9) in numerical codes, however,
results in time-consuming algorithms and alternative ap-
proaches are usually sought. The most widely used and pop-
ular one relies on the choice of the constant Γ-law EoS
h = 1 + Γ Θ/(Γ − 1) ,   (10)
where Γ is the constant specific heat ratio. However, Taub
(1948) showed that consistency with the relativistic kinetic
theory requires the specific enthalpy h to satisfy
(h − Θ)(h − 4Θ) ≥ 1 ,   (11)
known as Taub’s fundamental inequality. Clearly the con-
stant Γ-law EoS does not fulfill (11) for an arbitrary choice
of Γ, while (9) certainly does. This is better understood in
terms of an equivalent Γeq, conveniently defined as
Γ_eq = (h − 1)/(h − 1 − Θ) ,   (12)
and plotted in the top left panel of Fig. 1 for different EoS.
In the limit of low and high temperatures, the physically
admissible region is delimited, respectively, by Γ_eq ≤ 5/3
(for Θ → 0) and Γ_eq ≤ 4/3 (for Θ → ∞). Indeed, Taub’s
inequality is always fulfilled when Γ ≤ 4/3 while it cannot
be satisfied for Γ ≥ 5/3 for any positive value of the temperature.
In a recent paper, Mignone et al. (2005a) showed that
if the equal sign is taken in Eq. (11), an equation with the
correct limiting values may be derived. The resulting EoS
(TM henceforth), previously introduced by Mathews (1971),
can be solved for the enthalpy, yielding

h = (5/2) Θ + √( (9/4) Θ² + 1 ) ,   (13)
or, using ρh = ρ + ρǫ + p in (11) with the equal sign,

p = ρǫ (ρǫ + 2ρ) / [3 (ρǫ + ρ)] = (ρǫ/3) (ǫ + 2)/(ǫ + 1) .   (14)
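A quick check (a sketch, assuming NumPy) that Eq. (13) indeed saturates Taub's inequality: with h − Θ = (3/2)Θ + √((9/4)Θ² + 1) and h − 4Θ = √((9/4)Θ² + 1) − (3/2)Θ, the product collapses to exactly 1 at every temperature, which is Eq. (11) with the equal sign:

```python
import numpy as np

def h_tm(theta):
    """TM EoS enthalpy, Eq. (13): h = (5/2)Theta + sqrt((9/4)Theta^2 + 1)."""
    return 2.5 * theta + np.sqrt(2.25 * theta**2 + 1.0)

theta = np.logspace(-3, 2, 61)           # from cold to hot regime
h = h_tm(theta)
taub = (h - theta) * (h - 4.0 * theta)   # Eq. (11) with the equal sign
print(np.max(np.abs(taub - 1.0)))        # tiny (round-off level)
```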
Direct evaluation of Γeq using (13) shows that the TM EoS
differs by less than 4% from the theoretical value given by
the relativistic perfect gas EoS (9). The proposed EoS behaves closely to the Γ = 4/3 law in the limit of high temperatures, whereas it reduces to the Γ = 5/3 law in the cold
gas limit. For intermediate temperatures, thermodynamical
quantities (such as specific internal energy, enthalpy and
sound speed) smoothly vary between the two limiting cases,
as illustrated in Fig. 1. In this respect, Eq. (13) greatly im-
proves over the constant Γ-law EoS and, at the same time,
offers ease of implementation over Eq. (9). Since thermody-
namics is frequently invoked during the numerical solution
of (1), it is expected that direct implementation of Eq. (13)
4 A. Mignone and J.C. McKinney
in numerical codes will result in faster and more efficient
algorithms.
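The relations above are simple enough to verify directly. The sketch below (Python; the function names are ours and purely illustrative, not taken from any code mentioned in the text) implements the enthalpies of Eqs. (10) and (13), the equivalent index (12), and Taub's inequality (11):

```python
import math

def h_ideal(theta, gamma):
    # constant Gamma-law specific enthalpy, Eq. (10)
    return 1.0 + gamma / (gamma - 1.0) * theta

def h_tm(theta):
    # TM EoS specific enthalpy, Eq. (13): h = 5*Theta/2 + sqrt(9*Theta^2/4 + 1)
    return 2.5 * theta + math.sqrt(2.25 * theta ** 2 + 1.0)

def gamma_eq(h, theta):
    # equivalent specific heat ratio, Eq. (12)
    return (h - 1.0) / (h - 1.0 - theta)

def taub_holds(h, theta):
    # Taub's fundamental inequality, Eq. (11)
    return (h - theta) * (h - 4.0 * theta) >= 1.0
```

By construction the TM enthalpy saturates (11) with the equal sign, while the Γ = 5/3 law violates it at any finite temperature and the Γ = 4/3 law always satisfies it.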
Thermodynamical quantities such as sound speed and
entropy are computed from the 2nd law of thermodynamics,
dS = dh/Θ − d log p , (15)
where S is the entropy. From the definition of the sound
speed,
cs² = (∂p/∂e)S , (16)
and using de = hdρ (at constant S), one finds the useful
expression
cs² = (Θ/h) ḣ/(ḣ − 1) , Γ-law EoS ,
cs² = (Θ/h) (5h − 8Θ)/(3(h − Θ)) , TM EoS , (17)
where we set ḣ = dh/dΘ. In a similar way, direct integration
of (15) yields S = k log σ with
σ = p/ρ^Γ , Γ-law EoS ,
σ = (p/ρ^(5/3)) (h − Θ) , TM EoS , (18)
with h given by (13).
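As a numerical cross-check of the two branches of Eq. (17), the following sketch (helper names are ours) evaluates the sound speed of both equations of state:

```python
import math

def cs2_ideal(theta, gamma):
    # Gamma-law branch of Eq. (17); since hdot/(hdot - 1) = Gamma, cs^2 = Gamma*Theta/h
    h = 1.0 + gamma / (gamma - 1.0) * theta
    return gamma * theta / h

def cs2_tm(theta):
    # TM branch of Eq. (17): cs^2 = (Theta/h)*(5h - 8*Theta)/(3*(h - Theta))
    h = 2.5 * theta + math.sqrt(2.25 * theta ** 2 + 1.0)
    return (theta / h) * (5.0 * h - 8.0 * theta) / (3.0 * (h - theta))
```

In the cold limit the TM sound speed approaches the Γ = 5/3 value, for Θ → ∞ it tends to the ultrarelativistic limit cs² = 1/3, and at intermediate temperatures it stays below the Γ = 5/3 value, consistent with Fig. 1.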
3 PROPAGATION OF FAST
MAGNETO-SONIC SHOCKS
Motivated by the previous results, we now investigate the
role of the EoS on the propagation of magneto-sonic shock
waves. To this end, we proceed by constructing a one-
parameter family of shock waves with different velocities,
traveling in the positive x direction. States ahead and be-
hind the front are labeled with U 0 and U 1, respectively, and
are related by the jump conditions
vs [U ] = [F (U )] , (19)
where vs is the shock speed and [q] = q1 − q0 is the jump
across the wave for any quantity q. The set of jump condi-
tions (19) may be reduced (Lichnerowicz 1976) to the fol-
lowing five positive-definite scalar invariants
[J ] = 0 , (20)
[hη] = 0 , (21)
[H] = 0 , (22)
J² + [p + b²/2] / [h/ρ] = 0 , (23)
[h²] − (h0/ρ0 + h1/ρ1) [p] + 2H [p] + 2 (…) = 0 , (24)
where
J = ργγs(vs − vx) , (25)
is the mass flux across the shock, and
η = −J γ (v · B)/ρ + γs Bx . (26)
Figure 2. Compression ratio (top panels), internal energy (mid-
dle panels) and downstream Mach number (bottom panels) as
functions of the shock four-velocity γsvs. The profiles give the so-
lution to the shock equation for the non magnetic case. Plots on
the left have zero tangential velocity ahead of the front, whereas
plots on right are initialized with vy0 = 0.99. Axis spacing is
logarithmic. Solid, dashed and dotted lines correspond to the so-
lutions obtained with the TM EoS and the Γ = 4/3 and Γ = 5/3
laws, respectively.
Here γs denotes the Lorentz factor of the shock. Fast or
slow magneto-sonic shocks may be discriminated through
the condition α0 > α1 > 0 (for the former) or α1 < α0 < 0
(for the latter), where α = h/ρ − H.
We consider a pre-shock state characterized by a cold
(p0 = 10⁻⁴) gas with density ρ = 1. Without loss of generality,
we choose a frame of reference where the pre-shock
velocity normal to the front vanishes, i.e., vx0 = 0. Notice
that, for a given shock speed, J² can be computed from the
pre-shock state and thus one has to solve only Eqns. (21)–
(24).
Equation of state in RMHD 5

3.1 Purely Hydrodynamical Shocks
In the limit of vanishing magnetic field, only Eqns. (23) and
(24) need to be solved. Since J² is given, the problem simplifies
to the 2 × 2 nonlinear system of equations
J² + [p] / [h/ρ] = 0 , (27)
[h²] − (h0/ρ0 + h1/ρ1) [p] = 0 . (28)
We solve the previous equations starting from vs = 0.2,
for which we were able to provide a sufficiently close guess to
the downstream state. Once the p1 and ρ1 have been found,
we repeat the process by slowly increasing the shock velocity
vs and using the previously converged solution as the initial
guess for the new value of vs.
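The marching strategy just described can be sketched as follows. This is not the authors' code: it solves the purely hydrodynamical jump conditions for a Γ = 5/3 gas using the standard relativistic relations J² = −[p]/[h/ρ] and the Taub adiabat [h²] = (h0/ρ0 + h1/ρ1)[p], recycling each converged downstream state as the initial guess for the next shock speed:

```python
import math

GAMMA = 5.0 / 3.0

def h_ideal(p, rho):
    # Gamma-law specific enthalpy, Eq. (10), with Theta = p/rho
    return 1.0 + GAMMA / (GAMMA - 1.0) * p / rho

def residuals(x, J2, p0, rho0):
    # hydrodynamical jump conditions:
    # mass-momentum relation J^2 = -[p]/[h/rho] and the Taub adiabat
    p1, rho1 = x
    h0, h1 = h_ideal(p0, rho0), h_ideal(p1, rho1)
    r_mom = J2 * (h1 / rho1 - h0 / rho0) + (p1 - p0)
    r_taub = h1 ** 2 - h0 ** 2 - (h0 / rho0 + h1 / rho1) * (p1 - p0)
    return [r_mom, r_taub]

def newton2(f, x, args, tol=1e-12, itmax=60):
    # 2x2 Newton iteration with a finite-difference Jacobian
    for _ in range(itmax):
        f0 = f(x, *args)
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            step = 1e-7 * max(abs(x[j]), 1e-6)
            xp = list(x)
            xp[j] += step
            fp = f(xp, *args)
            J[0][j] = (fp[0] - f0[0]) / step
            J[1][j] = (fp[1] - f0[1]) / step
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        d0 = (f0[0] * J[1][1] - f0[1] * J[0][1]) / det
        d1 = (J[0][0] * f0[1] - J[1][0] * f0[0]) / det
        x = [x[0] - d0, x[1] - d1]
        if abs(d0) + abs(d1) < tol * (1.0 + abs(x[0]) + abs(x[1])):
            break
    return x

def shock_branch(vs_values, p0=1e-4, rho0=1.0):
    # start from the classical strong-shock guess p1 = 2/(Gamma+1) rho0 vs^2,
    # rho1 = 4 rho0, then march in vs reusing each converged state
    guess = [2.0 / (GAMMA + 1.0) * rho0 * vs_values[0] ** 2, 4.0 * rho0]
    states = []
    for vs in vs_values:
        gs = 1.0 / math.sqrt(1.0 - vs ** 2)
        J2 = (rho0 * gs * vs) ** 2  # vx0 = 0 ahead of the front
        guess = newton2(residuals, guess, (J2, p0, rho0))
        states.append((vs, guess[0], guess[1]))
    return states
```

For a cold upstream gas at vs = 0.2 this recovers a compression ratio close to the classical strong-shock value of 4, with both downstream pressure and density growing monotonically as vs increases.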
Fig. 2 shows the compression ratio, post-shock inter-
nal energy ǫ1 and Mach number v1/cs1 as functions of the
shock four-velocity vsγs. For weakly relativistic shock speeds
and vanishing tangential velocities (left panels), density and
pressure jumps approach the classical (i.e. non relativistic)
strong shock limit at γsvs ≈ 0.1, with the density ratio be-
ing 4 or 7 depending on the value of Γ (5/3 or 4/3, respec-
tively). The post-shock temperature keeps non-relativistic
values (Θ ≪ 1) and the TM EoS behaves closely to the
Γ = 5/3 case, as expected.
With increasing shock velocity, the compression ratio
does not saturate to a limiting value (as in the classical
case) but keeps growing at approximately the same rate
for the constant Γ-law EoS cases, and more rapidly for the
TM EoS. This can be better understood by solving the
jump conditions in a frame of reference moving with the
shocked material and then transforming back to our original
system. Since thermodynamic quantities are invariant
one finds that, in the limit h1 ≫ h0 ≈ 1, the internal energy
becomes ǫ1 = γ1 − 1 and the compression ratio takes the
asymptotic value
ρ1/ρ0 = γ1 + (γ1 + 1)/(Γ − 1) , (29)
when the ideal EoS is adopted. Since γ1 can take arbitrarily
large values, the downstream density keeps growing indefi-
nitely. At the same time, internal energy behind the shock
rises faster than the rest-mass energy, eventually leading to
a thermodynamically relativistic configuration. In absence
of tangential velocities (left panels in Fig. 2), this transition
starts at moderately high shock velocities (γsvs ≳ 1)
and culminates when the shocked gas heats up to relativistic
temperatures (Θ ∼ 1 ÷ 10) for γsvs ≳ 10. In this regime
the TM EoS departs from the Γ = 5/3 case and merges on
the Γ = 4/3 curve. For very large shock speeds, the Mach
number tends to the asymptotic value (Γ − 1)^(−1/2), regardless
of the frame of reference.
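The asymptotic compression ratio (29) can be checked directly against the limits quoted above (a minimal sketch; for γ1 → 1 it reduces to the classical strong-shock value (Γ + 1)/(Γ − 1)):

```python
def asymptotic_compression(gamma1, gamma_law):
    # Eq. (29): rho1/rho0 = gamma1 + (gamma1 + 1)/(Gamma - 1),
    # valid in the limit h1 >> h0 ~ 1 for a constant-Gamma EoS
    return gamma1 + (gamma1 + 1.0) / (gamma_law - 1.0)
```

For γ1 → 1 this gives 4 (Γ = 5/3) and 7 (Γ = 4/3), and it grows without bound as γ1 increases, which is why the compression ratio does not saturate in the relativistic regime.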
Inclusion of tangential velocities (right panels in Fig. 2)
leads to an increased mass flux (J² ∝ γ0²) and, consequently,
to higher post-shock pressure and density values. Still, since
pressure grows faster than density, temperature in the post-shock
flow attains relativistic values even for slower shock
velocities and the TM EoS tends to the Γ = 4/3 case at even
smaller shock velocities (γsvs ≳ 2).
Generally speaking, at a given shock velocity, density
and pressure in the shocked gas attain higher values for lower
Γeq. Downstream temperature, on the other hand, follows
the opposite trend being higher as Γeq → 5/3 and lower
when Γeq → 4/3.
Figure 3. Compression ratio (top), downstream plasma β (mid-
dle) and magnetic field strength (bottom) as function of the shock
four-velocity γsvs with vanishing tangential component of the
velocity. The magnetic field makes an angle π/6 (left) and π/2
(right) with the shock normal. The meaning of the different lines
is the same as in Fig. 2.
3.2 Magnetized Shocks
In presence of magnetic fields, we solve the 3 × 3 nonlin-
ear system given by Eqns. (22), (23) and (24), and di-
rectly replace η1 = η0h0/h1 with the aid of Eq. (21).
The magnetic field introduces three additional parameters,
namely, the thermal to magnetic pressure ratio (β ≡ 2p/b²)
and the orientation of the magnetic field with respect to
the shock front and to the tangential velocity. This is ex-
pressed by the angles αx and αy such that Bx = |B| cos αx,
By = |B| sin αx cos αy , Bz = |B| sin αx sin αy . We restrict
our attention to the case of a strongly magnetized pre-shock
flow with β0 ≡ 2p0/b0² = 10⁻².
Fig. 3 shows the density, plasma β and magnetic pres-
sure ratios versus shock velocity for αx = π/6 (left panels)
and αx = π/2 (perpendicular shock, right panels). Since
there is no tangential velocity, the solution depends on one
angle only (αx) and the choice of αy is irrelevant. For small
shock velocities (γsvs ≲ 0.4), the front is magnetically driven
with density and pressure jumps attaining lower values than
the non-magnetized counterpart. A similar behavior is found
in classical MHD (Jeffrey & Taniuti 1964). Density and
Figure 4. Density ratio (top), downstream plasma β (middle)
and magnetic field strength (bottom) as function of γsvs when
the tangential component of the upstream velocity is vt = 0.99.
The magnetic field and the shock normal form an angle π/6. The
tangential components of magnetic field and velocity are aligned
(left) and orthogonal (right). Different lines have the same meaning
as in Fig. 2.
magnetic compression ratios across the shock reach the clas-
sical values around γsvs ≈ 1 (rather than γsvs ≈ 0.1 as in
the non-magnetic case) and increase afterwards. The mag-
netic pressure ratio grows faster for the perpendicular shock,
whereas internal energy and density show little dependence
on the orientation angle αx. As expected, the TM EoS mim-
ics the constant Γ = 5/3 case at small shock velocities. At
γsvs ≳ 0.46, the plasma β exceeds unity and the shock starts
to be pressure-dominated. In other words, thermal pressure
eventually overwhelms the Lorentz force and the shock be-
comes pressure-driven for velocities of the order of vs ≈ 0.42.
When γsvs ≳ 1, the internal energy begins to become comparable
to the rest mass energy (c²) and the behavior of the
TM EoS detaches from the Γ = 5/3 curve and slowly joins
the Γ = 4/3 case. The full transition happens in the limit of
strongly relativistic shock speeds, γsvs ≳ 10.
Inclusion of transverse velocities in the right state af-
fects the solution in a way similar to the non-magnetic case.
Relativistic effects play a role already at small velocities
because of the increased inertia of the pre-shock state in-
troduced by the upstream Lorentz factor. For αx = π/6
Figure 5. Density contrast (top), plasma β (middle) and mag-
netic field strength (bottom) for vt = 0.99. The magnetic field is
purely transverse and aligned with the tangential component of
velocity on the left, while it is orthogonal on the right. Different
lines have the same meaning as in Fig. 2.
(Fig. 4), the compression ratio does not drop to small values
and keeps growing becoming even larger (≲ 400) than
the previous case when vt = 0. The same behavior is re-
flected on the growth of magnetic pressure that, in addi-
tion, shows more dependence on the relative orientation of
the velocity and magnetic field projections in the plane of
the front. When αy = π/2, indeed, magnetic pressure attains
very large values (b²/b0² ≲ 10⁴, bottom right panel in
Fig. 4). Consequently, this is reflected in a decreased post-
shock plasma β. For the TM EoS, the post-shock properties
of the flow begin to resemble the Γ = 4/3 behavior at lower
shock velocities than before, γsvs ≈ 2 ÷ 3. Similar consid-
erations may be done for the case of a perpendicular shock
(αx = π/2, see Fig. 5), although the plasma β saturates
to larger values thus indicating larger post-shock pressures.
Again, the maximum increase in magnetic pressure occurs
when the velocity and magnetic field are perpendicular.
4 NUMERICAL SIMULATIONS
With the exception of very simple flow configurations, the
solution of the RMHD fluid equations must be carried out
Figure 6. Solution to the mildly relativistic blast wave (problem
1) at t = 0.4. From left to right, the different profiles give den-
sity, thermal pressure, total pressure (top panels), the three com-
ponents of velocity (middle panel) and magnetic fields (bottom
panels). Computations with the TM EoS and constant Γ = 5/3
EoS are shown using solid and dotted lines, respectively.
numerically. This allows an investigation of highly nonlinear
regimes and complex interactions between multiple waves.
We present some examples of astrophysical relevance, such
as the propagation of one dimensional blast waves, the prop-
agation of axisymmetric jets, and the evolution of magne-
tized accretion disks around Kerr black holes. Our goal is to
outline the qualitative effects of varying the EoS for some in-
teresting astrophysical problems rather than giving detailed
results on any individual topic.
Direct numerical integration of Eq. (1) has been
achieved using the PLUTO code (Mignone et al. 2007) in
§4.1, §4.2 and HARM (Gammie et al. 2003) in §4.3. The new
primitive variable inversion scheme presented in Appendix
A has been implemented in both codes and the results pre-
sented in §4.1 were used for code validation. The novel in-
version scheme offers the advantage of being suitable for a
more general EoS and avoiding catastrophic cancellation in
the non-relativistic and ultrarelativistic limits.
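For orientation, here is a textbook-style inversion for the purely hydrodynamical, constant-Γ case (a simplified stand-in, not the Appendix A scheme, which supports a general EoS, magnetic fields, and cancellation-safe limits). Given the conserved set D = ργ, m = ρhγ²v and E = ρhγ² − p, it bisects the residual of the Γ-law EoS in the pressure:

```python
import math

GAMMA = 5.0 / 3.0

def primitives_from_conserved(D, m, E, tol=1e-13):
    # recover (rho, v, p) from (D, m, E) for a Gamma-law gas by bisecting
    # f(p) = (Gamma - 1)*rho*eps - p, which is positive for p too small
    # and negative for p too large
    def f(p):
        v = m / (E + p)
        g = 1.0 / math.sqrt(1.0 - v * v)
        rho = D / g
        h = (E + p) / (rho * g * g)
        eps = h - 1.0 - p / rho
        return (GAMMA - 1.0) * rho * eps - p
    lo, hi = 1e-16, E
    for _ in range(200):
        p = 0.5 * (lo + hi)
        if f(p) > 0.0:
            lo = p
        else:
            hi = p
        if hi - lo < tol * max(1.0, p):
            break
    p = 0.5 * (lo + hi)
    v = m / (E + p)
    g = 1.0 / math.sqrt(1.0 - v * v)
    return D / g, v, p
```

A production scheme would use a safeguarded Newton iteration instead of plain bisection, but the round trip from primitives to conserved variables and back illustrates the structure of the problem.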
4.1 Relativistic Blast Waves
A shock tube consists of a sharp discontinuity separat-
ing two constant states. In what follows we will be con-
sidering the one dimensional interval [0, 1] with a discon-
tinuity placed at x = 0.5. For the first test problem,
states to the left and to the right of the discontinuity are
given by (ρ, p, By, Bz)L = (1, 30, 6, 6) for the left state and
(ρ, p, By, Bz)R = (1, 1, 0.7, 0.7) for the right state. This results
in a mildly relativistic configuration yielding a maximum
Lorentz factor of 1.3 ≤ γ ≤ 1.4. The second test
Figure 7. Solution to the strong relativistic blast wave (problem
2) at t = 0.4. From left to right, the different profiles give den-
sity, thermal pressure, total pressure (top panels), the three com-
ponents of velocity (middle panel) and magnetic fields (bottom
panels). Computations with the TM EoS and constant Γ = 5/3
EoS are shown using solid and dotted lines, respectively.
consists of a left state given by (ρ, p, By, Bz)L = (1, 10³, 7, 7)
and a right state (ρ, p, By, Bz)R = (1, 0.1, 0.7, 0.7). This con-
figuration involves the propagation of a stronger blast wave
yielding a more relativistic configuration (3 ≤ γ ≤ 3.5). For
both states, we use a base grid with 800 zones and 6 levels
of refinement (equiv. resolution = 800 · 2⁶) and evolve the
solution up to t = 0.4.
Computations carried with the ideal EoS with Γ = 5/3
and the TM EoS are shown in Fig. 6 and Fig. 7 for the
first and second shock tube, respectively. From left to right,
the wave pattern is comprised of fast and slow rarefactions,
a contact discontinuity, and slow and fast shocks.
No rotational discontinuity is observed. Compared to the
Γ = 5/3 case, one can see that the results obtained with
the TM EoS show considerable differences. Indeed, waves
propagate at rather smaller velocities and this is evident at
the head and the tail points of the left-going magneto-sonic
rarefaction waves. From a simple analogy with the hydrody-
namic counterpart, in fact, we know that these points prop-
agate increasingly faster with higher sound speed. Since the
sound speed ratio of the TM and Γ = 5/3 is always less
than one (see, for instance, the bottom left panel in Fig. 1),
one may reasonably predict slower propagation speed for the
Riemann fans when the TM EoS is used. Furthermore, this is
confirmed by computations carried out with Γ = 4/3, which show
even slower velocities. Similar conclusions can be drawn for
the shock velocities. The reason is that the opening of the
Riemann fan of the TM equation of state is smaller than the
Γ = 5/3 case, because the latter always over-estimates the
sound speed. The higher density peak behind the slow shock
Figure 8. Jet velocity as a function of the Mach number for
different values of the initial density contrast η. The beam Lorentz
factor is the same for all plots, γb = 10. Solid, dashed and dotted
lines correspond to the solutions obtained with the TM EoS and
the Γ = 4/3 and Γ = 5/3 laws, respectively.
follows from the previous considerations and the conserva-
tion of mass across the front.
4.2 Propagation of Relativistic Jets
Relativistic, pressure-matched jets are usually set up by
injecting a supersonic cylindrical beam with radius rb
into a uniform static ambient medium (see, for instance,
Mart́ı et al. 1997). The dynamical and morphological prop-
erties of the jet and its interaction with the surrounding are
most commonly investigated by adopting a three parameter
set: the beam Lorentz factor γb, Mach number Mb = vb/cs
and the beam to ambient density ratio η = ρb/ρm. The
presence of a constant poloidal magnetic field introduces a
fourth parameter βb = 2pb/b², which specifies the thermal
to magnetic pressure ratio.
4.2.1 One Dimensional Models
The propagation of the jet itself takes place at the velocity
Vj , defined as the speed of the working surface that sepa-
rates shocked ambient fluid from the beam material. A one-
dimensional estimate of Vj (for vanishing magnetic fields)
can be derived from momentum flux balance in the frame of
the working surface (Mart́ı et al. 1997). This yields
Vj = γb √(η hb/hm) / (1 + γb √(η hb/hm)) vb , (30)
where hb and hm are the specific enthalpies of the beam and
the ambient medium, respectively. For given γb and den-
sity contrast η, Eq. (30) may be regarded as a function of
Figure 9. Computed results for the non magnetized jet at t = 90
for the ideal EoS (Γ = 5/3 and Γ = 4/3, top and middle panels)
and the TM EoS (bottom panel), respectively. The lower and
upper half of each panels shows the gray-scale map of density
and internal energy in logarithmic scale.
Figure 10. Position of the working surface as a function of time
for Γ = 5/3 (circles), Γ = 4/3 (stars) and the TM EoS (diamonds).
Solid, dotted and dashed lines give the one-dimensional
expectation.
Figure 11. Density and magnetic field for the magnetized jet at
t = 80 (first and second panels from top) and at t = 126 (third
and fourth panels). Computations were carried with 40 zones per
beam radius with the TM EoS.
the Mach number alone that uniquely specifies the pressure
pb through the definition of the sound speed, Eq. (17).
For the constant Γ-law EoS the inversion is straightforward,
whereas for the TM EoS one finds, using the substitution
Θ = 2/3 sinh x,
pb = η (2tm/3) / √(1 − tm²) , (31)
where tm satisfies the negative branch of the quadratic equation
(15 − 6M²) tm² + (24 − 10M²) tm + 9 = 0 , (32)
with t = tanh x. In Fig. 8 we show the jet velocity for in-
creasing Mach numbers (or equivalently, decreasing sound
speeds) and different density ratios η = 10⁻⁵, 10⁻³, 10⁻¹, 10.
The Lorentz beam factor is γb = 10. Prominent discrepan-
cies between the selected EoS arise at low Mach numbers,
where the relative variations of the jet speed between the
constant Γ and the TM EoS’s can be more than 50%. This
regime corresponds to the case of a hot jet (Θ ≈ 10 in the
η = 10⁻³ case) propagating into a cold (Θ ≈ 10⁻³) medium,
for which neither the Γ = 4/3 nor the Γ = 5/3 approxima-
tion can properly characterize both fluids.
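The dependence shown in Fig. 8 can be reproduced qualitatively from Eq. (30). The sketch below (helper names are ours; a cold ambient medium with hm ≈ 1 and a Γ-law beam are assumed) inverts the sound-speed definition for Θ and evaluates the one-dimensional estimate:

```python
import math

def theta_from_mach(vb, mach, gamma_law):
    # invert cs^2 = Gamma*Theta/h with h = 1 + Gamma/(Gamma-1)*Theta for Theta,
    # where cs = vb/mach fixes the sound speed
    cs2 = (vb / mach) ** 2
    a = gamma_law / (gamma_law - 1.0)
    return cs2 / (gamma_law - a * cs2)

def jet_velocity(gamma_b, mach_b, eta, gamma_law=4.0 / 3.0, h_m=1.0):
    # one-dimensional estimate, Eq. (30), with specific enthalpy hb
    # of the beam from the Gamma-law EoS and a cold ambient medium
    vb = math.sqrt(1.0 - 1.0 / gamma_b ** 2)
    theta = theta_from_mach(vb, mach_b, gamma_law)
    hb = 1.0 + gamma_law / (gamma_law - 1.0) * theta
    x = gamma_b * math.sqrt(eta * hb / h_m)
    return x / (1.0 + x) * vb
```

Lower Mach numbers correspond to hotter beams (larger hb) and therefore faster jet propagation, while denser beams (larger η) also push the working surface forward more rapidly, in line with the trends of Fig. 8.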
4.2.2 Two Dimensional Models
Of course, Eq. (30) is strictly valid for one-dimensional flows
and the question remains as to whether similar conclusions
can be drawn in more than one dimension. To this end we
investigate, through numerical simulations, the propagation
of relativistic jets in cylindrical axisymmetric coordinates
(r, z). We consider two models corresponding to different
sets of parameters and adopt the same computational do-
main [0, 12] × [0, 50] (in units of jet radius) with the beam
being injected at the inlet region (r 6 1, z = 0). Jets are in
pressure equilibrium with the environment.
In the first model, the density ratio, beam Lorentz factor
and Mach number are given, respectively, by η = 10⁻³,
γb = 10 and Mb = 1.77. Magnetic fields are absent. Integrations
are carried out at the resolution of 20 zones per beam
radius using the relativistic Godunov scheme described in
MPB. Computed results showing density and internal en-
ergy maps at t = 90 are given in Fig. 9 for Γ = 5/3, Γ = 4/3
and the TM EoS. The three different cases differ in several
morphological aspects, the most prominent one being the
position of the leading bow shock, z ≈ 18 when Γ = 5/3,
z ≈ 48 for Γ = 4/3 and z ≈ 33 for the TM EoS. Smaller
values of Γ lead to larger beam internal energies and there-
fore to an increased momentum flux, in agreement with the
one dimensional estimate (30). This favors higher propaga-
tion velocities and it is better quantified in Fig. 10 where
the position of the working surface is plotted as a function
of time and compared with the one dimensional estimate.
For the cold jet (Γ = 5/3), the Mach shock exhibits a larger
cross section and is located farther behind the bow shock
when compared to the other two models. As a result, the
jet velocity further decreases promoting the formation of a
thicker cocoon. On the contrary, the hot jet (Γ = 4/3) prop-
agates at the highest velocity and the cocoon has a more
elongated shape. The beam propagates almost undisturbed
and cross-shocks are weak. Close to its termination point,
the beam widens and the jet slows down with hot shocked
gas being pushed into the surrounding cocoon at a higher
rate. Integration with the TM EoS reveals morphological
and dynamical properties more similar to the Γ = 4/3 case,
although the jet is ≈ 40% slower. At t = 90 the beam does
not seem to decelerate and its speed remains closer to the
one-dimensional expectation. The cocoon develops a thin-
ner structure with a more elongated conical shape and cross
shocks form in the beam closer to the Mach disk.
In the second case, we compare models C2-pol-1 and
B1-pol-1 of Leismann et al. (2005) (corresponding to an
ideal gas with Γ = 5/3 and Γ = 4/3, respectively) with
the TM EoS adopting the same numerical scheme. For this
model, η = 10⁻², vb = 0.99, Mb = 6 and the ambient
medium is threaded by a constant vertical magnetic field,
Bz = √(2pb). Fig. 11 shows the results at t = 80 and
t = 126, corresponding to the final integration times shown
in Leismann et al. (2005) for the selected values of Γ. For the
sake of conciseness, only integrations pertaining to the TM EoS
are shown and the reader is referred to the original
work by Leismann et al. (2005) for a comprehensive descrip-
tion. Compared to ideal EoS cases, the jet shown here pos-
sesses morphological and dynamical properties intermediate
between the hot (Γ = 4/3) and the cold (Γ = 5/3) cases. As
expected, the jet propagates slower than in model B1-pol-1
(hot jet), but faster than the cold one (C2-pol-1). The head
of the jet tends to form a hammer-like structure (although
less prominent than the cold case) towards the end of the
integration, i.e., for t ≳ 100, but the cone remains more confined
at previous times. Consistently with model C2-pol-1,
the beam develops a series of weak cross shocks and outgo-
ing waves triggered by the interaction of the flow with bent
magnetic field lines. Although the magnetic field inhibits the
formation of eddies, turbulent behavior is still observed in
the cocoon, where interior cavities with low magnetic fields are
Figure 12. Magnetized accretion flow around a Kerr black hole
for the ideal Γ-law EoS with Γ = 4/3. The logarithm of the
rest-mass density is shown in colour from high (red) to low (blue) values.
The magnetic field has been overlaid. This model demonstrates
more vigorous turbulence and a thicker corona that leads to a
more confined magnetized jet near the poles.
Figure 13. As in figure 12 but for Γ = 5/3. Compared to the
Γ = 4/3 model, there is less vigorous turbulence and the corona
is more sharply defined.
formed. In this respect, the jet seems to share more features
with the cold case.
Figure 14. As in figure 12 but for the TM EoS. This EoS leads
to turbulence that is less vigorous than in the Γ = 4/3 model but
more vigorous than in the Γ = 5/3 model. Qualitatively the TM
EoS leads to an accretion disk that behaves somewhere between
the behavior of the Γ = 4/3 and Γ = 5/3 models.
4.3 Magnetized Accretion near Kerr Black Holes
In this section we study time-dependent GRMHD numerical
models of black hole accretion in order to determine the ef-
fect of the EoS on the behavior of the accretion disk, corona,
and jet. We study three models similar to the models stud-
ied by McKinney & Gammie (2004) for a Kerr black hole
with a/M ≈ 0.94 and a disk with a scale height (H) to ra-
dius (R) ratio of H/R ∼ 0.3. The constant Γ-law EoS with
Γ = {4/3, 5/3} and the TM EoS are used. The initial torus
solution is in hydrostatic equilibrium for the Γ-law EoS, but
we use the Γ = 5/3 EoS as an initial condition for the TM
EoS. Using the Γ = 4/3 EoS as an initial condition for the
TM EoS did not affect the final quasi-stationary behavior
of the flow. The simplest question to ask is which value of Γ
will result in a solution most similar to the TM EoS model’s
solution.
More advanced questions involve how the structure
of the accretion flow depends on the EoS. The previ-
ous results of this paper indicate that the corona above
the disk seen in the simulations (De Villiers et al. 2003;
McKinney & Gammie 2004) will be most sensitive to the
EoS since this region can involve both non-relativistic and
relativistic temperatures. The corona is directly involved
is the production of a turbulent, magnetized, thermal disk
wind (McKinney & Narayan 2006a,b), so the disk wind is
also expected to depend on the EoS. The disk inflow near
the black hole has a magnetic pressure comparable to the gas
pressure (McKinney & Gammie 2004), so the EoS may play
a role here and affect the flux of mass, energy, and angular
momentum into the black hole. The magnetized jet asso-
ciated with the Blandford & Znajek solution seen in simu-
lations (McKinney & Gammie 2004; McKinney 2006) is not
expected to depend directly on the EoS, but may depend in-
directly through the confining action of the corona. Finally,
the type of field geometries observed in simulations that
thread the disk and corona (Hirose et al. 2004; McKinney
2005) might depend on the EoS through the effect of the
stiffness (larger Γ leads to harder EoSs) of the EoS on the
turbulent diffusion of magnetic fields.
Figs. 12, 13 and 14 show a snapshot of the accretion
disk, corona, and jet at t ∼ 1000 GM/c³. Overall the results
are quite comparable, as could be predicted since the
Γ = {4/3, 5/3} models studied in McKinney & Gammie
(2004) were quite similar. For all models, the field geometries
allowed are similar to that found in McKinney (2005). The
accretion rate of mass, specific energy, and specific angular
momentum are similar for all models, so the EoS appears to
have only a small effect on the flow through the disk near
the black hole.
The most pronounced effect is that the soft EoS (Γ =
4/3) model develops more vigorous turbulence due to the
non-linear behavior of the magneto-rotational instability
(MRI) than either the Γ = 5/3 or TM EoSs. This causes
the coronae in the Γ = 4/3 model to be slightly thicker and
to slightly more strongly confine the magnetized jet resulting
in a slight decrease in the opening angle of the magnetized
jet at large radii. Also, the Γ = 4/3 model develops a fast
magnetized jet at slightly smaller radii than the other mod-
els. An important consequence is that the jet opening angle
at large radii might depend sensitively on the EoS of the ma-
terial in the accretion disc corona. This should be studied in
future work.
5 CONCLUSIONS
The role of the EoS in relativistic magnetohydrodynamics
has been investigated both analytically and numerically. The
equation of state previously introduced by Mignone et al.
(2005a) (for non magnetized flows) has been extended to
the case where magnetic fields are present. The proposed
equation of state closely approximates the single-species
perfect relativistic gas, but it offers a much simpler analytical
representation. In the limit of very large or very small
temperatures, for instance, the equivalent specific heat ratio
reduces, respectively, to the 4/3 or 5/3 limits.
The propagation of fast magneto-sonic shock waves has
been investigated by comparing the constant Γ laws to the
new equation of state. Although for small shock veloci-
ties the shock dynamics is well described by the cold gas
limit, dynamical and thermodynamical quantities (such as
the compression ratio, internal energy, magnetization and so
forth) substantially change across the wave front at moder-
ately or highly relativistic speeds. Eventually, for increasing
shock velocities, flow quantities in the downstream region
smoothly vary from the cold (Γ = 5/3) to the hot (Γ = 4/3)
regimes.
We numerically studied the effect of the EoS on shocks,
blast waves, the propagation of relativistic jets, and magne-
tized accretion flows around Kerr black holes. Our results
should serve as a useful guide for future more specific stud-
ies of each topic. For these numerical studies, we formu-
lated the inversion from conservative quantities to primitive
quantities that allows a general EoS and avoids catastrophic
numerical cancellation in the non-relativistic and ultrarela-
tivistic limits. The analytical and numerical models confirm
the general result that large temperature gradients cannot
be properly described by a polytropic EoS with constant
specific heat ratio. Indeed, when compared to a more re-
alistic EoS, for which the polytropic index is a function of
the temperature, considerable dynamical differences arise.
This has been repeatedly shown in presence of strong discontinuities,
such as shocks, across which the internal energy
can change by several orders of magnitude.
We also showed that the turbulent behavior of magne-
tized accretion flows around Kerr black holes depends on the
EoS. The Γ = 4/3 EoS leads to more vigorous turbulence
than the Γ = 5/3 or TM EoSs. This affects the thickness of
the corona that confines the magnetized jet. Any study of
turbulence within the accretion disk, the subsequent genera-
tion of heat in the coronae, and the opening and acceleration
of the jet (especially at large radii where the cumulative dif-
ferences due to the EoS in the disc are largest) should use
an accurate EoS. The effect of the EoS on the jet opening
angle and Lorentz factor at large radii is a topic of future
study.
The proposed equation of state holds in the limit where
effects due to radiation pressure, electron degeneracies and
neutrino physics can be neglected. It also omits potentially
crucial physical aspects related to kinetic processes (such
as suprathermal particle distributions, cosmic rays), plasma
composition, turbulence effects at the sub-grid levels, etc.
These are very likely to alter the equation of state by ef-
fectively changing the adiabatic index computed on merely
thermodynamic arguments. Future efforts should properly
address additional physical issues and consider more gen-
eral equations of state.
ACKNOWLEDGMENTS
We are grateful to our referee, P. Hughes, for his worthy
considerations and comments that led to the final form of
this paper. JCM was supported by a Harvard CfA Institute
for Theory and Computation fellowship. AM would like to
thank S. Massaglia and G. Bodo for useful discussions on
the jet propagation and morphology.
REFERENCES
Aloy, M. A., Ibáñez, J. M. , Mart́ı, J. M. , Gómez, J.-L.,
Müller, E. 1999, ApJL, 523, L125
Aloy, M. A., Ibáñez, J. M., Mart́ı, J. M., Müller, E. 1999,
ApJS, 122, 151
Anile, M., & Pennisi, S. 1987, Ann. Inst. Henri Poincaré,
46, 127
Anile, A. M. 1989, Relativistic Fluids and Magneto-fluids
(Cambridge: Cambridge University Press), 55
Begelman, M. C., Blandford, R. D., & Rees, M. J. 1984,
Reviews of Modern Physics, 56, 255
Bernstein, J.P., & Hughes, P.A. 2006, astro-ph/0606012
Blandford R. D., Znajek R. L., 1977, MNRAS, 179, 433
Einfeldt, B., Munz, C.D., Roe, P.L., and Sjögreen, B. 1991,
J. Comput. Phys., 92, 273
Del Zanna, L., Bucciantini, N., & Londrillo, P. 2003, As-
tronomy & Astrophysics, 400, 397 (dZBL)
De Villiers J.-P., Hawley J. F., Krolik J. H., 2003, ApJ,
599, 1238
Di Matteo, T., Perna, R., & Narayan, R. 2002, ApJ, 579,
Duncan, G. C., & Hughes, P. A. 1994, ApJL, 436, L119
Duncan, C., Hughes, P., & Opperman, J. 1996, ASP
Conf. Ser. 100: Energy Transport in Radio Galaxies and
Quasars, 100, 143
Falle, S. A. E. G., & Komissarov, S. S. 1996, MNRAS, 278,
Gammie, C. F., McKinney, J. C., & Tóth, G. 2003, ApJ,
589, 444
Giacomazzo, B., & Rezzolla, L. 2005, J. Fluid Mech., xxx
Hardee, P. E., Rosen, A., Hughes, P. A., & Duncan, G. C.
1998, ApJ, 500, 599
Harten, A., Lax, P.D., and van Leer, B. 1983, SIAM Review, 25(1), 35–61
Hirose S., Krolik J. H., De Villiers J.-P., Hawley J. F., 2004,
ApJ, 606, 1083
Jeffrey A., Taniuti T., 1964, Non-linear wave propagation.
Academic Press, New York
Kohri, K., & Mineshige, S. 2002, ApJ, 577, 311
Kohri, K., Narayan, R., & Piran, T. 2005, ApJ, 629, 341
Koide, S. 1997, ApJ, 478, 66
Komissarov, S. S. 1997, Phys. Lett. A, 232, 435
Komissarov, S. S. 1999, MNRAS, 308, 1069
Leismann, T., Antón, L., Aloy, M. A., Müller, E., Mart́ı,
J. M., Miralles, J. A., & Ibáñez, J. M. 2005, Astronomy
& Astrophysics, 436, 503
Lichnerowicz, A. 1976, Journal of Mathematical Physics,
17, 2135
Martí, J. M., & Müller, E. 2003, Living Reviews in Relativ-
ity, 6, 7
Martí, J. M., Müller, E., Font, J. A., Ibáñez, J. M.,
& Marquina, A. 1997, ApJ, 479, 151
Mathews, W. G. 1971, ApJ, 165, 147
Meier, D. L., Koide, S., & Uchida, Y. 2001, Science, 291,
Meliani, Z., Sauty, C., Tsinganos, K., & Vlahakis, N. 2004,
Astronomy & Astrophysics, 425, 773
McKinney, J. C., & Gammie, C. F. 2004, ApJ, 611, 977
McKinney, J. C. 2005, ApJL, 630, L5
McKinney, J. C. 2006, MNRAS, 368, 1561
McKinney J. C., Narayan R., 2006a, MNRAS, in press
(astro-ph/0607575)
McKinney J. C., Narayan R., 2006b, MNRAS, in press
(astro-ph/0607576)
Mignone, A., Plewa, T., & Bodo, G. 2005, ApJS, 160,
Mignone, A., Massaglia, S., & Bodo, G. 2005, Space Science
Reviews, 121, 21
Mignone, A., & Bodo, G. 2006, MNRAS, 368, 1040
Mignone, A., Bodo, G., Massaglia, S., Matsakos, T.,
Tesileanu, O., Zanni, C., & Ferrari, A. 2006, accepted
for publication in ApJ.
Misner, C. W., Thorne, K. S., & Wheeler, J. A. 1973, Grav-
itation, San Francisco: W.H. Freeman and Co., 1973
Mizuta, A., Yamada, S., & Takabe, H. 2004, ApJ, 606, 804
Nishikawa, K.-I., Koide, S., Sakai, J.-I., Christodoulou,
D. M., Sol, H., & Mutel, R. L. 1997, ApJL, 483, L45
Noble, S. C., Gammie, C. F., McKinney, J. C., & Del
Zanna, L. 2006, ApJ, 641, 626
Popham, R., Woosley, S. E., & Fryer, C. 1999, ApJ, 518, 356
Ryu, D., Chattopadhyay, I., & Choi, E. 2006, ApJS, 166,
Scheck, L., Aloy, M. A., Martí, J. M., Gómez, J. L., Müller,
E. 2002, MNRAS, 331, 615
Synge, J. L. 1957, The Relativistic Gas, North-Holland Pub-
lishing Company
Taub, A. H. 1948, Physical Review, 74, 328
Tchekhovskoy, A., McKinney, J. C., & Narayan, R. 2006,
MNRAS, submitted
Toro, E. F. 1997, Riemann Solvers and Numerical Methods
for Fluid Dynamics, Springer-Verlag, Berlin
van Putten, M. H. P. M. 1993, ApJL, 408, L21
APPENDIX A: PRIMITIVE VARIABLE
INVERSION SCHEME
We outline a new primitive variable inversion scheme that
is used to convert the evolved conserved quantities into so-
called primitive quantities that are necessary to obtain the
fluxes used for the evolution. This scheme allows a gen-
eral EoS by requiring only the specification of thermodynamical
quantities, and it also avoids catastrophic cancellation in the
non-relativistic and ultrarelativistic limits. Large Lorentz
factors (up to 10^6) may not be uncommon in some astro-
physical contexts (e.g. gamma-ray bursts), and ordinary in-
version methods can lead to severe numerical problems such
as effectively dividing by zero and subtractive cancellation,
see, for instance, Bernstein & Hughes (2006).
First, we note that the general relativistic conserva-
tive quantities can be written more like special relativis-
tic quantities by choosing a special frame in which to mea-
sure all quantities. A useful frame is the zero angular mo-
mentum (ZAMO) observer in an axisymmetric space-time.
See Noble et al. (2006) for details. From their expressions,
it is useful to note that catastrophic cancellations for non-
relativistic velocities can be avoided by replacing γ − 1 in
any expression with (u_α u^α)/(γ + 1), where u^α is the
relative 4-velocity in the ZAMO frame. From here on the
expressions are in the ZAMO frame and appear similar to
the same expressions in special relativity.
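The γ − 1 rewriting can be checked numerically. The following Python sketch (function name is ours) compares the naive subtraction with the cancellation-free form:

```python
import math

def gamma_minus_one(u2):
    # gamma - 1 rewritten as (u_a u^a)/(gamma + 1), with gamma = sqrt(1 + u2);
    # algebraically identical, but free of the subtractive cancellation that
    # destroys sqrt(1 + u2) - 1 when u2 << 1
    return u2 / (math.sqrt(1.0 + u2) + 1.0)
```

For u2 = 1e-20 the naive `math.sqrt(1 + u2) - 1` evaluates to exactly 0.0 in double precision, while the rewritten form returns 5e-21.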
A1 Inversion Procedure
Numerical integration of the conservation law (1) proceeds
by evolving the conservative state vector U = (D, m, B, E)
in time. Computation of the fluxes, however, requires veloc-
ity and pressure to be recovered from U by inverting Eqns.
(5)–(7), a rather time consuming and challenging task. For
the constant-Γ law, a recent work by Noble et al. (2006)
examines several methods of inversion. In this section we
discuss how to modify the equations of motion, interme-
diate calculations, and the inversion from conservative to
primitive quantities so that the RMHD method 1) permits
a general EoS; and 2) avoids catastrophic cancellations in
the non-relativistic and ultrarelativistic limits.
Our starting relations are the total energy density (7),
E = W − p + \frac{1 + |v|^2}{2}\,|B|^2 − \frac{S^2}{2W^2} ,  (A1)
and the square modulus of Eq. (6),
Equation of state in RMHD 13
|m|^2 = (W + |B|^2)^2\,|v|^2 − \frac{S^2}{W^2}\,(2W + |B|^2) ,  (A2)
where S ≡ m · B and W = Dhγ. Note that in order for
this expression to be accurate in the non-relativistic limit,
one should analytically cancel any appearance of E in it.
Eq. (A2) can be inverted to express the square
of the velocity in terms of the only unknown W :
|v|^2 = \frac{S^2(2W + |B|^2) + |m|^2 W^2}{(W + |B|^2)^2\,W^2} .  (A3)
After inserting (A3) into (A1) one has:
E = W − p + \frac{|B|^2}{2} + \frac{|B|^2|m|^2 − S^2}{2(|B|^2 + W)^2} .  (A4)
In order to avoid numerical errors in the non-relativistic
limit one must modify the equations of motion and sev-
eral intermediate calculations. One solves the conservation
equations with the mass density subtracted from the en-
ergy by defining a new conserved quantity (E′ = E − D)
and similarly for the energy flux. In addition, operations
based upon γ can lead to catastrophic cancellations since
the residual γ − 1 is often requested and is dominant in the
non-relativistic limit. A more natural quantity to consider
is |v|2 or γ2|v|2. Also, in the ultrarelativistic limit calcula-
tions based upon γ(|v|2) have catastrophic cancellation er-
rors when |v| → 1. This can be avoided by 1) using instead
|u|2 ≡ γ2|v|2 and 2) introducing the quantities E′ = E −D
and W ′ = W − D, with W ′ properly rewritten as
D|u|2
1 + γ
to avoid machine accuracy problems in the nonrelativistic
limit, where χ ≡ ρǫ+p. Thus our relevant equations become:
E′ = W′ − p + \frac{|B|^2}{2} + \frac{|B|^2|m|^2 − S^2}{2(|B|^2 + W′ + D)^2} ,  (A6)
|m|^2 = (W + |B|^2)^2\,\frac{|u|^2}{1 + |u|^2} − \frac{S^2}{W^2}\,(2W + |B|^2) ,  (A7)
where W = W ′ + D.
Equations (A6) and (A7) may be inverted to find W ′,
p and |u|2. A one dimensional inversion scheme is derived
by regarding Eq. (A6) as a single nonlinear equation in the
only unknown W ′ and using Eq. (A7) to express |u|2 as a
function of W ′. Using Newton’s iterative scheme as our root
finder, one needs to compute the derivative
\frac{dE′}{dW′} = 1 − \frac{dp}{dW′} − \frac{|B|^2|m|^2 − S^2}{(|B|^2 + W′ + D)^3} .  (A8)
The explicit form of dp/dW ′ depends on the particular EoS
being used. While prior methods in principle allow for a
general EoS, one has to re-derive many quantities that in-
volve kinematical expressions. This can be avoided by split-
ting the kinematical and thermodynamical quantities. This
also allows one to write the expressions so that there is no
catastrophic cancellations in the non-relativistic or ultrarel-
ativistic limits. Assuming that p = p(χ, ρ), we achieve this
by applying the chain rule to the pressure derivative:
\frac{dp}{dW′} = \left.\frac{∂p}{∂χ}\right|_ρ \frac{dχ}{dW′} + \left.\frac{∂p}{∂ρ}\right|_χ \frac{dρ}{dW′} .  (A9)
Partial derivatives involving purely thermodynamical quan-
tities must now be supplied by the EoS routines. Derivatives
with respect to W ′, on the other hand, involve purely kine-
matical terms and do not depend on the choice of the EoS.
Relevant expressions needed in our computations are given
in the Appendix.
Once W ′ has been determined to some accuracy, the
inversion process is completed by computing the velocities
from an inversion of equation (6) to obtain
v = \frac{1}{W + |B|^2}\left(m + \frac{S}{W}\,B\right) .  (A10)
One then computes χ from an inversion of equation (A5) to
obtain
χ = \frac{W′}{γ^2} − \frac{D|u|^2}{(1 + γ)γ^2} ,  (A11)
from which p or ρǫ can be obtained for any given EoS. The
rest mass density is obtained from
ρ = \frac{D}{γ} ,  (A12)
and the magnetic field is trivially inverted.
In summary, we have formulated an inversion scheme
that 1) allows a general EoS without re-deriving kinematical
expressions; and 2) avoids catastrophic cancellation in the
non-relativistic and ultrarelativistic limits. This inversion in-
volves solving a single non-linear equation using, e.g., a one-
dimensional Newton’s method. A similar two-dimensional
method can be easily written with the same properties, and
such a method may be more robust in some cases since the
one-dimensional version described here involves more com-
plicated non-linear expressions.
One can show analytically that the inversion is accurate
in the ultrarelativistic limit as long as γ ≲ ǫ_machine^{−1/2}
for γ and p/(ργ^2) ≳ ǫ_machine for pressure, where ǫ_machine ≈ 2.2 × 10^{−16}
for double precision. The method used by Noble et al. (2006)
requires γ ≲ ǫ_machine^{−1/2}/10 due to the repeated use of the
expression γ = 1/\sqrt{1 − v^2} in the inversion. Note that we
use γ = \sqrt{1 + |u|^2}, which has no catastrophic cancellation.
The fundamental limit on accuracy is due to evolving en-
ergy and momentum separately such that the expression
E − |m| appears in the inversion. Only a method that
evolves this quantity directly (e.g. for one-dimensional prob-
lems one can evolve the energy with momentum subtracted)
can reach higher Lorentz factors. An example test problem
is the ultrarelativistic Noh test in Aloy et al. (1999) with
p = 7.633 × 10^{−6}, Γ = 4/3, 1 − v = 10^{−11} (i.e. γ = 223607).
This test has p/(ργ^2) ≈ 1.6 × 10^{−16}, which is just below
double precision and so the pressure is barely resolved in
the pre-shock region. The post-shock region is insensitive to
the pre-shock pressure and so is evolved accurately up to
γ ≈ 6 × 10^7. These facts have also been confirmed nu-
merically using this inversion within HARM. Using the same
error measures as in Aloy et al. (1999) we can evolve their
test problem with an even higher Lorentz factor of γ = 10^7
and obtain similar errors of ≲ 0.1%.
A2 Kinematical and Thermodynamical
Expressions
The kinematical terms required in equation (A9) may be
easily found from the definition of W ′,
W′ ≡ Dhγ − D = D(γ − 1) + χγ^2 ,  (A13)
by straightforward differentiation. This yields
\frac{dχ}{dW′} = \frac{1}{γ^2} − \frac{γ}{2}\,(D + 2γχ)\,\frac{d|v|^2}{dW′} ,  (A14)

\frac{dρ}{dW′} = D\,\frac{d(1/γ)}{dW′} = −\frac{Dγ}{2}\,\frac{d|v|^2}{dW′} ,  (A15)
where
\frac{d|v|^2}{dW} = −2\,\frac{S^2\left[3W(W + |B|^2) + |B|^4\right] + |m|^2 W^3}{(W + |B|^2)^3\,W^3} ,  (A16)
is computed by differentiating (A3) with respect to W (note
that d/dW ′ ≡ d/dW ). Equation (A14) does not depend on
the knowledge of the EoS.
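As a quick numerical cross-check (not from the paper), (A16) agrees with a central finite difference of (A3). A short Python sketch with arbitrarily chosen values of W, |m|^2, |B|^2 and S:

```python
def v2_of_W(W, m2, B2, S):
    # |v|^2 as a function of W, Eq. (A3)
    return (S * S * (2.0 * W + B2) + m2 * W * W) / ((W + B2) ** 2 * W * W)

def dv2_dW(W, m2, B2, S):
    # analytic derivative d|v|^2/dW, Eq. (A16)
    return (-2.0 * (S * S * (3.0 * W * (W + B2) + B2 * B2) + m2 * W ** 3)
            / ((W + B2) ** 3 * W ** 3))
```

A central difference of `v2_of_W` reproduces `dv2_dW` to roughly the square of the step size.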
Thermodynamical quantities such as ∂p/∂χ, on the
other hand, do require the explicit form of the EoS. For
the ideal gas EoS one simply has
p(χ, ρ) = \frac{Γ − 1}{Γ}\,χ ,  (A17)
where χ = ρǫ+ p. By taking the partial derivatives of (A17)
with respect to χ (keeping ρ constant) and ρ (keeping χ
constant) one has
\left.\frac{∂p}{∂χ}\right|_ρ = \frac{Γ − 1}{Γ} , \qquad \left.\frac{∂p}{∂ρ}\right|_χ = 0 .  (A18)
For the TM EoS, one can more conveniently rewrite
(14) as
3p(ρ + χ − p) = (χ − p)(χ + 2ρ − p) , (A19)
which, upon differentiation with respect to χ (keeping ρ con-
stant) yields
\left.\frac{∂p}{∂χ}\right|_ρ = \frac{2χ + 2ρ − 5p}{5ρ + 5χ − 8p} .  (A20)
Similarly, by taking the derivative with respect to ρ at con-
stant χ gives
\left.\frac{∂p}{∂ρ}\right|_χ = \frac{2χ − 5p}{5ρ + 5χ − 8p} .  (A21)
In order to use the above expressions and avoid catas-
trophic cancellation in the non-relativistic limit, one must
solve for the gas pressure as functions of only ρ and χ and
then write the pressure that explicitly avoids catastrophic
cancellation as {χ, p} → 0. One obtains:
p(χ, ρ) = \frac{2χ(χ + 2ρ)}{5(χ + ρ) + \sqrt{9χ^2 + 18ρχ + 25ρ^2}} .  (A22)
Also, for setting the initial conditions it is useful to be able
to convert from a given pressure to the internal energy by
using
ρǫ(ρ, p) = \frac{3p}{2}\,\frac{3p + 2ρ + \sqrt{9p^2 + 4ρ^2}}{2ρ + \sqrt{9p^2 + 4ρ^2}} ,  (A23)
which also avoids catastrophic cancellation in the non-
relativistic limit.
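A sketch of the two cancellation-free TM-EoS helpers in Python; `rho_eps_tm` uses an algebraically equivalent arrangement of (A23), and the function names are ours:

```python
import math

def p_tm(chi, rho):
    # Eq. (A22): TM-EoS pressure; recovers p -> (2/5)*chi (Gamma = 5/3
    # behaviour) smoothly as chi -> 0, with no subtractive cancellation
    return (2.0 * chi * (chi + 2.0 * rho)
            / (5.0 * (chi + rho)
               + math.sqrt(9.0 * chi * chi + 18.0 * rho * chi + 25.0 * rho * rho)))

def rho_eps_tm(rho, p):
    # internal energy rho*eps from (rho, p); equivalent to inverting the
    # cubic relation (A19), arranged to avoid cancellation as p/rho -> 0
    q = math.sqrt(9.0 * p * p + 4.0 * rho * rho)
    return 1.5 * p * (3.0 * p + 2.0 * rho + q) / (2.0 * rho + q)
```

Round-tripping p → ρǫ → χ → p reproduces the input pressure, and the limits reduce to Γ = 5/3 (ρǫ → 3p/2, non-relativistic) and Γ = 4/3 (ρǫ → 3p, ultrarelativistic).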
A3 Newton-Raphson Scheme
Equation (A6) may be solved using a Newton-Raphson it-
erative scheme, where the (k + 1)-th approximation to the
W ′ is computed as
W′^{(k+1)} = W′^{(k)} − \left.\frac{f(W′)}{df(W′)/dW′}\right|_{W′=W′^{(k)}} ,  (A24)
where
f(W′) = W′ − E′ − p + \frac{|B|^2}{2} + \frac{|B|^2|m|^2 − S^2}{2(|B|^2 + W′ + D)^2} ,  (A25)
and df(W ′)/dW ′ ≡ dE′/dW ′ is given by Eq. (A8).
The iteration process terminates when the residual
|W′^{(k+1)}/W′^{(k)} − 1| falls below some specified tolerance.
We remind the reader that, in order to start the iter-
ation process given by (A24), a suitable initial guess must
be provided. We address this problem by initializing, at the
beginning of the cycle, W ′(0) = W̃+ − D, where W̃+ is the
positive root of
P(W, 1) = 0 , (A26)
and P(W, |v|) is the quadratic function
P(W, |v|) = |m|^2 − |v|^2 W^2 + (2W + |B|^2)(2W + |B|^2 − 2E) .  (A27)
This choice guarantees positivity of pressure, as it can be
proven using the relation
p = \frac{P(W, |v|)}{2(2W + |B|^2)} ,  (A28)
which follows upon eliminating the (S/W )2 term in Eq. (A2)
with the aid of Eq. (A1). Seeing that P(W, |v|) is a con-
vex quadratic function, the condition p > 0 is equivalent to
the requirement that the solution W must lie outside the
interval [W−, W+], where P(W±, |v|) = 0. However, since
P(W, |v|) > P(W, 1), it must follow that W̃+ > W+ and
thus W̃+ lies outside the specified interval. We tacitly as-
sume that the roots are always real, a condition that is al-
ways met in practice.
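The whole scheme fits in a short Python sketch. It specializes to the constant-Γ law, uses a numerical derivative in place of the analytic chain (A8)-(A9), and takes the positive root of P(W, 1) = 0 as the initial guess; variable names and the finite-difference step are illustrative choices, not from the paper:

```python
import math

def conserved(rho, p, v, B, G):
    """Build D, m, E from primitives for a constant-Gamma law gas."""
    v2 = sum(c * c for c in v)
    B2 = sum(c * c for c in B)
    vB = sum(a * b for a, b in zip(v, B))
    lor = 1.0 / math.sqrt(1.0 - v2)
    D = rho * lor
    W = rho * (1.0 + G / (G - 1.0) * p / rho) * lor * lor    # W = D h gamma
    m = [(W + B2) * vi - vB * Bi for vi, Bi in zip(v, B)]
    S = sum(a * b for a, b in zip(m, B))                     # S = m.B
    E = W - p + 0.5 * (1.0 + v2) * B2 - S * S / (2.0 * W * W)  # Eq. (A1)
    return D, m, E

def invert(D, m, B, E, G, tol=1e-12, itmax=60):
    """Recover (rho, p, v) via a 1D Newton iteration on W' (Eqns A24-A25)."""
    m2 = sum(c * c for c in m)
    B2 = sum(c * c for c in B)
    S = sum(a * b for a, b in zip(m, B))
    Ep = E - D

    def f(Wp):
        W = Wp + D
        v2 = (m2 + (S * S / (W * W)) * (2.0 * W + B2)) / (W + B2) ** 2  # from (A7)
        lor = 1.0 / math.sqrt(1.0 - v2)
        u2 = v2 * lor * lor
        chi = Wp / lor ** 2 - D * u2 / ((1.0 + lor) * lor ** 2)  # Eq. (A11)
        p = (G - 1.0) / G * chi                                  # Eq. (A17)
        res = (Wp - Ep - p + 0.5 * B2
               + (B2 * m2 - S * S) / (2.0 * (B2 + W) ** 2))      # Eq. (A25)
        return res, p, lor

    # initial guess: W'(0) = Wtil - D, with Wtil the positive root of
    # P(W, 1) = 0 (Eqns A26-A27), which guarantees positive pressure
    b, c = 4.0 * (B2 - E), m2 + B2 * B2 - 2.0 * B2 * E
    Wp = (-b + math.sqrt(b * b - 12.0 * c)) / 6.0 - D
    for _ in range(itmax):
        res, p, lor = f(Wp)
        dres = (f(Wp * (1.0 + 1e-7))[0] - res) / (Wp * 1e-7)  # numerical dE'/dW'
        step = res / dres
        Wp -= step
        if abs(step) <= tol * abs(Wp):
            break
    _, p, lor = f(Wp)
    W = Wp + D
    v = [(mi + (S / W) * Bi) / (W + B2) for mi, Bi in zip(m, B)]  # Eq. (A10)
    return D / lor, p, v                                          # rho = D/gamma, (A12)
```

Feeding the conserved variables of a known primitive state back through `invert` recovers that state to roundoff; a production version would use the analytic derivative (A8)-(A9) instead of the finite difference.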
|
0704.1680 | Carbon Nanotube Thin Film Field Emitting Diode: Understanding the System
Response Based on Multiphysics Modeling | Carbon Nanotube Thin Film Field Emitting
Diode: Understanding the System Response
Based on Multiphysics Modeling
N. Sinhaa, D. Roy Mahapatrab, J.T.W. Yeowa1, R.V.N. Melnikb and D.A. Jaffrayc
aDepartment of Systems Design Engineering, University of Waterloo, ON,
N2L3G1, Canada
bMathematical Modeling and Computational Sciences, Wilfrid Laurier University,
Waterloo, ON, N2L3C5, Canada
cDepartment of Radiation Physics, Princess Margaret Hospital, Toronto, ON,
M5G2M9, Canada
Abstract
In this paper, we model the evolution and self-assembly of randomly oriented
carbon nanotubes (CNTs), grown on a metallic substrate in the form of a thin film
for field emission under diode configuration. Despite high output, the current in
such a thin film device often decays drastically. The present paper is focused on
understanding this problem. A systematic, multiphysics based modelling approach
is proposed. First, a nucleation coupled model for degradation of the CNT thin film
is derived, where the CNTs are assumed to decay by fragmentation and formation of
clusters. The random orientation of the CNTs and the electromechanical interaction
are then modeled to explain the self-assembly. The degraded state of the CNTs and
the electromechanical force are employed to update the orientation of the CNTs.
Field emission current at the device scale is finally obtained by using the Fowler-
Nordheim equation and integration over the computational cell surfaces on the anode
side. The simulated results are in close agreement with the experimental results.
Based on the developed model, numerical simulations aimed at understanding the
1Corresponding author: JTWY e-mail: [email protected];
Tel: 1 (519) 8884567 x 2152; Fax: 1 (519) 7464791
effects of various geometric parameters and their statistical features on the device
current history are reported.
Keywords: Field emission, carbon nanotube, degradation, electrodynamics, self-
assembly.
1 Introduction
The conventional mechanism used for electron emission is thermionic in nature where
electrons are emitted from hot cathodes (usually heated filaments). The advantage
of these hot cathodes is that they work even in environments that contain a large
number of gaseous molecules. However, thermionic cathodes in general have slow
response time and they consume high power. These cathodes have limited lifetime
due to mechanical wear. In addition, the thermionic electrons have random spatial
distribution. As a result, fine focusing of electron beam is very difficult. This
adversely affects the performance of the devices such as X-ray tubes. An alternative
mechanism to extract electrons is field emission, in which electrons near the Fermi
level tunnel through the energy barrier and escape to the vacuum under the influence
of a sufficiently high external electric field. The field emission cathodes have faster
response time, consume less power and have longer life compared to thermionic
cathodes. However, field emission cathodes require ultra-high vacuum as they are
highly reactive to gaseous molecules during the field emission.
The key to the high performance of a field emission device is the behavior of
its cathode. In the past, the performance of cathode materials such as spindt-type
emitters and nanostructured diamonds for field emission was studied by Spindt et
al.1 , Gotoh et al.2 , and Zhu3 . However, the spindt type emitters suffer from
high manufacturing cost and limited lifetime. Their failure is often caused by ion
bombardment from the residual gas species that blunt the emitter cones2 . On
the other hand, nanostructured diamonds are unstable at high current densities3 .
Carbon nanotubes (CNTs), an allotrope of carbon, have the potential to be used
as cathode materials in field emission devices. Since their discovery by Iijima in 19914
, extensive research on CNTs has been conducted. Field emission from CNTs was
first reported in 1995 by Rinzler et al.5 , de Heer et al.6 , and Chernozatonskii et
al.7 . Field emission from CNTs has been studied extensively since then. Currently,
with significant improvement in processing technique, CNTs are among the best field
emitters. Their applications in field emission devices, such as field emission displays,
gas discharge tubes, nanolithography systems, electron microscopes, lamps, and X-
ray tube sources have been successfully demonstrated8−9 . The need for highly
controlled application of CNTs in X-ray devices is one of the main reasons for the
present study. The remarkable field emission properties of CNTs are attributed
to their geometry, high thermal conductivity, and chemical stability. Studies have
reported that CNT sources have a high reduced brightness and their energy spread
values are comparable to conventional field emitters and thermionic emitters10 .
The physics of field emission from metallic surfaces is well understood. The
current density (J) due to field emission from a metallic surface is usually obtained
by using the Fowler-Nordheim (FN) equation11
J = \frac{B E^2}{Φ}\,\exp\left(−\frac{C\,Φ^{3/2}}{E}\right) ,  (1)
where E is the electric field, Φ is the work function of the cathode material, and B
and C are constants. The device under consideration in this paper is an X-ray source
where a thin film of CNTs acts as the electron emitting surface (cathode). Under the
influence of sufficiently high voltage at ultra high vacuum, the electrons are extracted
from the CNTs and hit the heavy metal target (anode) to produce X-rays. However,
in the case of a CNT thin film acting as cathode, the surface of the cathode is not
smooth (like the metal emitters). In this case, the cathode consists of hollow tubes
grown on a substrate. Also, some amount of carbon clusters may be present within
the CNT-based film. An added complexity is that there is realignment of individual
CNTs due to electrodynamic interaction between the neighbouring CNTs during
field emission. At present, there are no adequate mathematical models to address
these issues. Therefore, the development of an appropriate mathematical modeling
approach is necessary to understand the behavior of CNT thin film field emitters.
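The Fowler-Nordheim scaling is straightforward to evaluate numerically. In the Python sketch below, the default prefactor and exponent constants are representative textbook values for Φ in eV and E in V/m — they are assumptions for illustration, not the B and C of this device:

```python
import math

def fn_current_density(E, phi, B=1.54e-6, C=6.83e9):
    """Fowler-Nordheim current density J = (B*E^2/phi) * exp(-C*phi^{3/2}/E).

    E   : local electric field [V/m]
    phi : work function [eV]
    B, C: assumed representative constants (not fitted to any cathode)
    """
    return B * E * E / phi * math.exp(-C * phi ** 1.5 / E)
```

The exponential factor makes J rise extremely steeply with the local field, which is why tip geometry and field enhancement dominate the emission behaviour.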
1.1 Role of various physical processes in the degradation of
CNT field emitter
Several studies have reported experimental observations in favour of considerable
degradation and failure of CNT cathodes. These studies can be divided into two
categories: (i) studies related to degradation of single nanotube emitters12−17 and
(ii) studies related to degradation of CNT thin films18−23 . Dean et al.13 found
gradual decrease of field emission of single walled carbon nanotubes (SWNTs) due
to “evaporation” when large field emitted current (300nA to 2µA) was extracted. It
was observed by Lim et al.23 that CNTs are susceptible to damage by exposure to
gases such as oxygen and nitrogen during field emission. Wei et al.14 observed that
after field emission over 30 minutes at field emission current between 50 and 120nA,
the length of CNTs reduced by 10%. Wang et al.15 observed two types of structural
damage as the voltage was increased: a piece-by-piece and segment-by-segment
splitting of the nanotubes, and a layer-by-layer stripping process. Occasional spikes
in the current-voltage curves were observed by Chung et al.16 when the voltage
was increased. Avouris et al.17 found that the CNTs break down when subjected
to high bias over a long period of time. Usually, the breakdown process involves
stepwise increases in the resistance. In the experiments performed by the present
authors, peeling of the film from the substrate was observed at high bias. Some of
the physics pertinent to these effects is known but the overall phenomenon governing
such a complex system is difficult to explain and quantify and it requires further
investigation.
There are several causes of CNT failures:
(i) In case of multi-walled carbon nanotubes (MWNTs), the CNTs undergo layer-
by-layer stripping during field emission15 . The complete removal of the shells
are most likely the reason for the variation in the current voltage curves16 ;
(ii) At high emitted currents, CNTs are resistively heated. Thermal effect can
sublime a CNT causing cathode-initiated vacuum breakdown24 . Also, in case
of thin films grown using chemical vapor deposition (CVD), a few catalytic
metals such as nickel, cobalt, and iron are observed as impurities in CNT thin
films. These metal particles are melted and evaporated by high emission currents,
causing the emission current to surge abruptly. This results in vacuum breakdown
followed by the failure of the CNT film23 ;
(iii) Gas exposure induces chemisorption and physisorption of gas molecules on the
surface of CNTs. In the low-voltage regime, the gas adsorbates remain on the
surface of the emitters. On the other hand, in the high-voltage regime, large
emission currents resistively anneal the tips, and the strong electric field on the
locally heated tips promotes the desorption of gas adsorbates from the tip sur-
face. Adsorption of materials with high electronegativity hinders the electron
emission by intensifying the local potential barriers. Surface morphology can
be changed by an erosion of the cap of the CNT as the gases desorb reactively
from the surface of the CNTs25 ;
(iv) CVD-grown CNTs tend to show more defects in the wall as their radius in-
creases. Possibly, there are rearrangements of atomic structures (for example,
vacancy migration) resulting in the reduction of length of CNTs14 . In addition,
the presence of defects may act as a centre for nucleation for voltage-induced
oxidation, resulting in electrical breakdown16 ;
(v) As the CNTs grow perpendicular to the substrate, the contact area of CNTs
with the substrate is very small. This is a weak point in CNT films grown on
planar substrates, and CNTs may fail due to tension under the applied fields20
. Small nanotube diameters and lengths are an advantage from the stability
point of view.
Although the degradation and failure of single nanotube emitters can be either
abrupt or gradual, the degradation and failure of a thin film emitter with CNT clus-
ter is mostly gradual. The gradual degradation occurs either during initial current-
voltage measurement21 (at a fast time scale) or during measurements at constant
applied voltage over a long period of time22 (at a slow time scale). Nevertheless, it
can be concluded that the gradual degradation of thin films occurs due to the failure
of individual emitters.
To date, several studies have reported experimental observations on CNT thin
films26 . However, from mathematical, computational and design view points, the
models and characterization methods are available only for vertically aligned CNTs
grown on the patterned surface27−28 . In a CNT film, the array of CNTs may ideally
be aligned vertically. However, in this case it is desired that the individual CNTs
be evenly separated in such a way that their spacing is greater than their height to
minimize the screening effect29 . If the screening effect is minimized following the
above argument, then the emission properties as well as the lifetime of the cathodes
are adversely affected due to the significant reduction in density of CNTs. For the
cathodes with randomly oriented CNTs, the field emission current is produced by
two types of sources: (i) small fraction of CNTs that point toward the anode and
(ii) oriented and curved CNTs subjected to electromechanical forces causing reori-
entation. As often inferred (see e.g., ref.29 ), the advantage of the cathodes with
randomly oriented CNTs is that always a large number of CNTs take part in the field
emission, which is unlikely in the case of cathodes with uniformly aligned CNTs.
Such a thin film of randomly oriented CNTs will be considered in the present study.
From the modeling point of view, its analysis becomes much more challenging. Al-
though some preliminary works have been reported (see e.g., refs.30−31 ), neither a
detailed model nor a subsequent characterization method is available that would
allow one to describe the array of CNTs that may undergo complex dynamics during
the process of charge transport. In the detailed model, the effects of degradation
and fragmentation of CNTs during field emission need to be considered. However,
in the majority of analytical and design studies, the usual practice is to employ
the classical Fowler-Nordheim equation11 to determine the field emission from the
metallic surface, with correction factors to deal with the CNT tip geometry. Ideally,
one has to tune such an empirical approach to specific materials and methods used
(e.g. CNT geometry, method of preparation, CNT density, diode configuration,
range of applied voltage, etc.). Also, in order to account for the oriented CNTs and
interaction between themselves, it is necessary to consider the space charge and the
electromechanical forces. By taking into account the evolution of the CNTs, a mod-
eling approach is developed in this paper. In order to determine phenomenologically
the concentration of carbon clusters due to degradation of CNTs, we introduce a
homogeneous nucleation rate. This rate is coupled to a moment model for the evo-
lution. The moment model is incorporated in a spatially discrete sense, that is by
introducing volume elements or cells to physically represent the CNT thin film. Elec-
tromechanical forces acting on the CNTs are estimated in a time-incremental manner.
The oriented state of the CNTs is updated using a mechanics-based model. Finally,
the current density is calculated by using the details regarding the CNT orientation
angle and the effective electric field in the Fowler-Nordheim equation.
The remainder of this paper is organized as follows: in Sec. 2, a model is pro-
posed, which combines the nucleation coupled model for CNT degradation with the
electromechanical forcing model. Section 3 illustrates the computational scheme.
Numerical simulations and the comparison of the simulated current-voltage charac-
teristics with experimental results are presented in Sec. 4.
2 Model formulation
The CNT thin film is idealized in our mathematical model by using the following
simplifications.
(i) CNTs are grown on a substrate to form a thin film. They are treated as
aggregate while deriving the nucleation coupled model for degradation phe-
nomenologically;
(ii) The film is discretized into a number of representative volume elements (cells),
in each of which a number of CNTs can be present in oriented forms along with an estimated
amount of carbon clusters. This is schematically shown in Fig. 1. The car-
bon clusters are assumed to be in the form of carbon chains and networks
(monomers and polymers);
(iii) Each of the CNTs with hexagonal arrangement of carbon atoms (shown in
Fig. 2(a)) are treated as effectively one-dimensional (1D) elastic members and
discretized by nodes and segments along its axis as shown in Fig. 2(b). Defor-
mation of this 1D representation in the slow time scale defines the orientations
of the segments within the cell. A deformation in the fast time scale (due to
electron flow) defines the fluctuation of the sheet of carbon atoms in the CNTs
and hence the resulting state of atomic arrangements. The latter aspect is ex-
cluded from the present modeling and numerical simulations, however they
will be discussed within a quantum-hydrodynamic framework in a forthcom-
ing article.
2.1 Nucleation coupled model for degradation of CNTs
Let NT be the total number of carbon atoms (in CNTs and in cluster form) in a
cell (see Fig. 1). The volume of a cell is given by Vcell = ∆Ad, where ∆A is the cell
surface interfacing the anode and d is distance between the inner surfaces of cathode
substrate and the anode. Let N be the number of CNTs in the cell, and NCNT be
the total number of carbon atoms present in the CNTs. We assume that during
field emission some CNTs are decomposed and form clusters. Such degradation and
fragmentation of CNTs can be treated as the reverse process of CVD or a similar
growth process used for producing the CNTs on a substrate. Hence,
NT = N NCNT + Ncluster , (2)
where Ncluster is the total number of carbon atoms in the clusters in a cell at time t
and is given by
Ncluster = Vcell n1(t) , (3)
where n1 is the concentration of carbon cluster in the cell. By combining Eqs. (2)
and (3), one has
N NCNT = NT − Vcell n1(t) . (4)
The number of carbon atom in a CNT is proportional to its length. Let the length
of a CNT be a function of time, denoted as L(t). Therefore, one can write
NCNT = NringL(t) , (5)
where Nring is the number of carbon atoms per unit length of a CNT and can be
determined from the geometry of the hexagonal arrangement of carbon atoms in the
CNT. By combining Eqs. (4) and (5), one can write
N = (NT − Vcell n1(t)) / (Nring L(t)) . (6)
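The bookkeeping of Eqs. (2)-(6) amounts to a one-liner. A Python sketch (it assumes the reading N_cluster = V_cell·n_1(t) for Eq. (3); names are ours):

```python
def number_of_cnts(N_T, V_cell, n1, N_ring, L):
    # Eq. (6): CNTs remaining in a cell once V_cell*n1 carbon atoms sit in
    # clusters and each CNT of length L carries N_ring*L atoms
    return (N_T - V_cell * n1) / (N_ring * L)
```

For example, with 10^6 atoms per unit cell, 2×10^5 of them in clusters, 100 atoms per unit CNT length and L = 4, the cell holds 2000 CNTs.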
In order to determine n1(t) phenomenologically, we need to know the nature of
evolution of the aggregate in the cell. From the physical point of view, one may
expect the rate of formation of the carbon clusters from CNTs to be a function
of thermodynamic quantities, such as temperature (T ), the relative distances (rij)
between the carbon atoms in the CNTs, the relative distances between the clusters
and a set of parameters (p∗) describing the critical cluster geometry. The relative
distance rij between carbon atoms in CNTs is a function of the electromechanical
forces. Modeling of this effect is discussed in Sec. 2.2. On the other hand, the relative
distances between the clusters influence the homogenization of the thermodynamic energy,
that is, the decreasing distances between the clusters (hence increasing densities of
clusters) slow down the rate of degradation and fragmentation of CNTs and lead to
a saturation in the concentration of clusters in a cell. Thus, one can write
dn1/dt = f(T, rij, p∗) . (7)
To proceed further, we introduce a nucleation coupled model32−33 , which was origi-
nally proposed to simulate aerosol formation. Here we modify this model according
to the present problem which is opposite to the process of growth of CNTs from
the gaseous phase. With this model the relative distance function is replaced by a
collision frequency function (βij) describing the frequency of collision between the
i-mers and j-mers, with
βij = \left(\frac{3v_1}{4π}\right)^{1/6}\sqrt{\frac{6kT}{ρ_p}}\,\sqrt{\frac{1}{i} + \frac{1}{j}}\,\left(i^{1/3} + j^{1/3}\right)^2 ,  (8)
and the set of parameters describing the critical cluster geometry by
p∗ = {vj, sj, g∗, d∗p} , (9)
where vj is the j-mer volume, sj is the surface area of j-mer, g
∗ is the normalized
critical cluster size, d∗p is the critical cluster diameter, k is the Boltzmann constant, T
is the temperature and ρp is the particle mass density. The detailed form of Eq. (7)
is given by four nonlinear ordinary differential equations:
dNkin
= Jkin , (10)
JkinSg
− (S − 1)
, (11)
= Jkind
p + (S − 1)B1Nkin , (12)
JkinSg
∗2/3s1
2πB1S(S − 1)M1
, (13)
where Nkin is the kinetic normalization constant, Jkin is the kinetic nucleation rate,
S is the saturation ratio, An is the total surface area of the carbon cluster and M1
is the moment of cluster size distribution. The quantities involved are expressed as
S = n1/ns , M1 = ∫0^{dpmax} n(dp, t) dp d(dp) , (14)
Nkin = ns exp(Θ) , Jkin = β11 n1² √(Θ/(2π)) exp(Θ − 4Θ³/(27(lnS)²)) , (15)
g* = (2Θ/(3 lnS))³ , d*p = 4σv1/(kT lnS) , B1 = 2nsv1 √(kT/(2πm1)) , (16)
where ns is the equilibrium saturation concentration of the carbon clusters, dp^max is the
maximum diameter of the clusters, n(dp, t) is the cluster size distribution function,
dp is the cluster diameter, mj is the mass of the j-mer, and Θ is the dimensionless surface
tension given by
Θ = σs1/(kT) , (17)
where σ is the surface tension. In this paper, we have considered i = 1 and j = 1 for
numerical simulations, that is, only monomer-type clusters are considered. In Eqs. (10)-
(13), the variables are n1(t), S(t), M1(t) and An(t), and all other quantities are
assumed constant over time. In the expression for moment M1(t) in Eq. (14), the
cluster size distribution in the cell is assumed to be Gaussian; however, a random
distribution can be incorporated. We solve Eqs. (10)-(13) using a finite difference
scheme as discussed in Sec. 3. Finally, the number of CNTs in the cell at a given time
is obtained with the help of Eq. (6), where the reduced length L(t) is determined
using geometric properties of the individual CNTs as formulated next.
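The collision frequency that drives the cluster model can be evaluated directly. The sketch below assumes the standard free-molecular kernel form of Eq. (8); the monomer volume v1 and cluster mass density ρp used in the example call are illustrative placeholders, not parameters from this work.

```python
import math

def collision_frequency(i, j, v1, rho_p, T, k=1.380649e-23):
    """Free-molecular collision kernel between i-mers and j-mers, Eq. (8):
    beta_ij = (3 v1 / 4 pi)^(1/6) sqrt(6 k T / rho_p)
              * sqrt(1/i + 1/j) * (i^(1/3) + j^(1/3))^2
    """
    prefactor = (3.0 * v1 / (4.0 * math.pi)) ** (1.0 / 6.0)
    thermal = math.sqrt(6.0 * k * T / rho_p)        # temperature dependence
    size = math.sqrt(1.0 / i + 1.0 / j) * (i ** (1.0 / 3.0) + j ** (1.0 / 3.0)) ** 2
    return prefactor * thermal * size

# Monomer-monomer collisions (i = j = 1), the only case used in this paper
beta11 = collision_frequency(1, 1, 8.8e-30, 2000.0, 303.0)
```

Only β11 is needed once monomer-type clusters are assumed (i = j = 1), and the kernel grows as √T, so higher operating temperatures accelerate cluster formation.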
2.2 Effect of CNT geometry and orientation
It has been discussed in Sec. 1.1 that the geometry and orientation of the tip of the
CNTs are important factors in the overall field emission performance of the film and
must be considered in the model.
As an initial condition, let L(0) = h at t = 0, and let h0 be the average height of
the CNT region as shown in Fig. 1. This average height h0 is approximately equal
to the height of the CNTs that are aligned vertically. If ∆h is the decrease in the
length of a CNT (aligned vertically or oriented as a segment) over a time interval
∆t due to degradation and fragmentation, and if dt is the diameter of the CNT,
then the surface area of the CNT decreased is πdt∆h. By using the geometry of the
CNT, the decreased surface area can be expressed as
πdt∆h = Vcell n1(t) √(s(s − a1)(s − a2)(s − a3)) , (18)
where Vcell is the volume of the cell as introduced in Sec. 2.1, a1, a2, a3 are the lattice
constants, and s = (a1 + a2 + a3)/2 (see Fig. 2(a)). The chiral vector for the CNT is
expressed as
C⃗h = n a⃗1 + m a⃗2 , (19)
where n and m are integers (n ≥ |m| ≥ 0) and the pair (n,m) defines the chirality
of the CNT. The following properties hold: a⃗1·a⃗1 = a1², a⃗2·a⃗2 = a2², and
2a⃗1·a⃗2 = a1² + a2² − a3². With the help of these properties the circumference and the diameter
of the CNT can be expressed as, respectively34 ,
|C⃗h| = √(n²a1² + m²a2² + nm(a1² + a2² − a3²)) , dt = |C⃗h|/π . (20)
Let us now introduce the rate of degradation of the CNT, or simply the burning rate,
as vburn = lim∆t→0 ∆h/∆t. By dividing both sides of Eq. (18) by ∆t and applying the
limit, one has
πdt vburn = Vcell (dn1(t)/dt) √(s(s − a1)(s − a2)(s − a3)) . (21)
By combining Eqs. (20) and (21), the burning rate is finally obtained as
vburn = Vcell (dn1(t)/dt) √[ s(s − a1)(s − a2)(s − a3) / (n²a1² + m²a2² + nm(a1² + a2² − a3²)) ] . (22)
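The geometric relations of Eqs. (20) and (22) translate directly into code. The sketch below follows those formulas as written; the lattice constants (in nm) in the test values are illustrative, with the equilateral choice a1 = a2 = a3 = 0.246 nm reproducing the familiar armchair (5,5) diameter of about 0.68 nm.

```python
import math

def cnt_diameter(n, m, a1, a2, a3):
    """Chiral-vector length and tube diameter, Eq. (20):
    |C_h| = sqrt(n^2 a1^2 + m^2 a2^2 + n m (a1^2 + a2^2 - a3^2)),  d_t = |C_h| / pi
    """
    ch = math.sqrt(n**2 * a1**2 + m**2 * a2**2 + n * m * (a1**2 + a2**2 - a3**2))
    return ch / math.pi

def burn_rate(V_cell, dn1_dt, n, m, a1, a2, a3):
    """Burning (degradation) rate of Eq. (22), with Heron's formula for the
    triangular area term and the chirality-dependent circumference."""
    s = 0.5 * (a1 + a2 + a3)
    heron = s * (s - a1) * (s - a2) * (s - a3)
    denom = n**2 * a1**2 + m**2 * a2**2 + n * m * (a1**2 + a2**2 - a3**2)
    return V_cell * dn1_dt * math.sqrt(heron / denom)
```

The burning rate is thus proportional to the cluster formation rate dn1/dt, which couples the geometry update to the nucleation model of Sec. 2.1.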
In Fig. 3 we show a schematic drawing of the CNTs almost vertically aligned,
that is, along the direction of the electric field E(x, y). This electric field E(x, y) is
assumed to be due to the applied bias voltage. However, there will be an additional
but small amount of electric field due to several localized phenomena (e.g., electron
flow in curved CNTs, field emission from the CNT tip etc.). Effectively, we assume
that the distribution of the field parallel to z-axis is of periodic nature (as shown
in Fig. 3) when the CNT tips are vertically oriented. Only a cross-sectional view in
the xz plane is shown in Fig. 3 because only an array of CNTs across x-direction
will be considered in the model for simplicity. Thus, in this paper, we shall restrict
our attention to a two-dimensional problem, and out-of-plane motion of the CNTs
will not be incorporated in the model.
To determine the effective electric field at the tip of a CNT oriented at an angle
θ as shown in Fig. 3, we need to know the tip coordinate with respect to the cell
coordinate system. If it is assumed that a CNT tip was almost vertically aligned at
t = 0 (as it is the desired configuration for the ideal field emission cathode), then
its present height is L(t) = h0 − vburnt and the present distance between the tip
and the anode is dg = d − L(t) = d − h0 + vburnt. We assume that the tip electric
field has a z-dependence of the form E0L(t)/dg, where E0 = V/d and V is the
applied bias voltage. Also, let (x, y) be the deflection of the tip with respect to its
original location, and let 2R be the spacing between two neighboring CNTs at the cathode
substrate. Then the electric field at the deflected tip can be approximated as
Ez′ = (1 − √(x² + y²)/R) ((h0 − vburn t)/(d − h0 + vburn t)) E0 , θ(t) ≤ θc , (23)
where θc is a critical angle to be set during numerical calculations along with the
condition: Ez′ = 0 when θ(t) > θc. This is consistent with the fact that those CNTs
which are low lying on the substrate do not contribute to the field emission. The
electric field at the individual CNT tip derived here is defined in the local coordinate
system (X ′, Z ′) as shown in Fig. 3. The components of the electric field in the cell
coordinate system (X, Y, Z) are given by the following transformation:
[Ex ; Ey ; Ez] = [ nz , lz , mz ; √(1 − nz²) , −lznz/√(1 − nz²) , mznz/√(1 − nz²) ; 0 , −lznz/√(1 − nz²) , −1 ] [Ex′ ; Ey′ ; Ez′] , (24)
where nz, lz, mz are the direction cosines. According to the cell coordinate system
in Figs. 1 and 3, nz = cos θ(t), lz = sin θ(t), and mz = 0. Therefore, Eq. (24) can
be rewritten as
[Ex ; Ey ; Ez] = [ cos θ(t) , sin θ(t) , 0 ; √(1 − cos²θ(t)) , −cos θ(t) , 0 ; 0 , −cos θ(t) , −1 ] [Ex′ ; Ey′ ; Ez′] . (25)
By simplifying Eq. (25), we get
Ez = Ez′ cos θ(t) , Ex = Ez′ sin θ(t) . (26)
Note that the identical steps of this transformation also apply to a generally oriented
(θ ≠ 0) segment of CNT as idealized in Fig. 2(b). The electric field components Ez
and Ex are later used for calculation of the electromechanical force acting on the
CNTs. Since in this study we aim at estimating the current density at the anode due
to the field emission from the CNT tips, we also use Ez from Eq. (26) to compute
the output current based on the Fowler-Nordheim equation (1).
2.3 Electromechanical forces
For each CNT, the angle of orientation θ(t) is dependent on the electromechanical
forces. Such dependence is geometrically nonlinear, and it is not practical to solve
the problem exactly, especially in the present situation where a large number of
CNTs must be dealt with. However, it is possible to solve the problem in a time-dependent
manner with an incremental update scheme. In this section we derive
the components of the electromechanical forces acting on a generally oriented CNT
segment. The numerical solution scheme based on an incremental update scheme
will be discussed in Sec. 3.
From the studies reported in published literature and based on the discussions
made in Sec. 1.1, it is reasonable to expect that the major contribution is due to (i)
the Lorentz force under electron gas flow in CNTs (a hydrodynamic formalism), (ii)
the electrostatic force (background charge in the cell), (iii) the van der Waals force
against bending and shearing of MWNT and (iv) the ponderomotive force acting on
the CNTs.
2.3.1 Lorentz force
It is known that the electrical conduction and related properties of CNTs depend on
the mechanical deformation and the geometry of the CNT. In this paper we model
the field emission behaviour of the CNT thin film by considering the time-dependent
electromechanical effects, whereas the electronic properties and related effects are
incorporated through the Fowler-Nordheim equation empirically. Electronic band-
structure calculations are computationally prohibitive at this stage and at the same
spatio-temporal scales considered for this study. However, a quantum-hydrodynamic
formalism seems practical and such details will be dealt with in a forthcoming article.
Within the quantum-hydrodynamic formalism, one generally assumes the flow of
electron gas along the cylindrical sheet of CNTs. The associated electron density
distribution is related to the energy states along the length of the CNTs including the
tip region. What is important for the present modeling is that the CNTs experience
Lorentz force under the influence of the bias electric field as the electrons flow from
the cathode substrate to the tip of a CNT. The Lorentz force is expressed as
~fl = e(n̂0 + n̂1) ~E ≈ en̂0 ~E , (27)
where e is the electronic charge, n̂0 is the surface electron density corresponding to
the Fermi level energy, n̂1 is the electron density due to the deformation in the slow
time scale, and phonon and electromagnetic wave coupling at the fast time scale,
and ~E is the electric field. The surface electron density corresponding to the Fermi
level energy is expressed as35
n̂0 = … , (28)
where b is the interatomic distance and ∆ is the overlap integral (≈ 2eV for carbon).
The quantity b can be related to the mechanical deformation of the 1D segments (See
Fig. 2) and formulations reported by Xiao et al.36 can be employed. For simplicity,
the electron density fluctuation n̂1 is neglected in this paper. Now, with the electric
field components derived in Eq. (26), the components of the Lorentz force acting along
z and x directions can now be written as, respectively,
flz = πdten̂0Ez , flx = πdten̂0Ex ≈ 0 . (29)
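Eq. (29) reduces to a one-line force density per unit CNT length. The constants and surface density value in the test are illustrative; the surface electron density n̂0 itself comes from Eq. (28), which is not reproduced here.

```python
import math

def lorentz_force_per_length(d_t, n0_surface, Ez, e=1.602176634e-19):
    """Z-component of the Lorentz force per unit CNT length, Eq. (29):
    f_lz = pi * d_t * e * n_hat_0 * Ez  (the X-component is ~0)."""
    return math.pi * d_t * e * n0_surface * Ez
```

The force is linear in the tip field Ez, so the Lorentz contribution scales directly with the bias voltage through Eq. (23).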
2.3.2 Electrostatic force
In order to calculate the electrostatic force, the interaction among two neighboring
CNTs is considered. For such calculation, let us consider a segment ds1 on a CNT
(denoted 1) and another segment ds2 on its neighboring CNT (denoted 2). These
are parts of the representative 1D member idealized as shown in Fig. 2(b). The
charges associated with these two segments can be expressed as
q1 = en̂0πdt^(1) ds1 , q2 = en̂0πdt^(2) ds2 , (30)
where dt^(1) and dt^(2) are the diameters of the two neighbouring CNTs (1) and (2). The
electrostatic force on the segment ds1 due to the segment ds2 is
dfc = q1q2/(4πεε0 r12²) ,
where ε is the effective permittivity of the aggregate of CNTs and carbon clusters,
ε0 is the permittivity of free space, and r12 is the effective distance between the
centroids of ds1 and ds2. The electrostatic force on the segment ds1 due to charge
in the entire segment (s2) of the neighboring CNT (see Fig. 4) can be written as
fc = ∫s2 (en̂0πdt^(1) ds1)(en̂0πdt^(2)) / (4πεε0 r12²) ds2 .
The electrostatic force per unit length on s1 due to s2 is then
fc = ∫s2 (πen̂0)² dt^(1)dt^(2) / (4πεε0 r12²) ds2 . (31)
The differential of the force dfc acts along the line joining the centroids of the
segments ds1 and ds2 as shown in Fig. 4. Therefore, the components of the total
electrostatic force per unit length of CNT (1) in X and Z directions can be written
as, respectively,
fcx = ∫ dfc cos φ = ∫s2 (πen̂0)² dt^(1)dt^(2) / (4πεε0 r12²) cos φ ds2
    ≈ Σ_{k=1}^{h0/∆s2} (πen̂0)² dt^(1)dt^(2) / (4πεε0 r12²) cos φ ∆s2 , (32)
fcz = ∫ dfc sin φ = ∫s2 (πen̂0)² dt^(1)dt^(2) / (4πεε0 r12²) sin φ ds2
    ≈ Σ_{k=1}^{h0/∆s2} (πen̂0)² dt^(1)dt^(2) / (4πεε0 r12²) sin φ ∆s2 , (33)
where φ is the angle the force vector dfc makes with the X-axis. For numerical
computation of the above integrals, we compute the angle φ = φ(s1^k, s2^k) and
r12 = r12(s1^k, s2^k) at each of the centroids of the segments between the nodes k + 1 and k,
where the lengths of the segments are assumed to be uniform and denoted ∆s1
for CNT (1) and ∆s2 for CNT (2). As shown in Fig. 4, the distance r12 between the
centroids of the segments ds1 and ds2 is obtained as
r12 = √((d1 − lx2 + lx1)² + (lz1 − lz2)²) , (34)
where d1 is the spacing between the CNTs at the cathode substrate, lx1 and lx2 are
the deflections along X-axis, and lz1 and lz2 are the deflections along Z-axis. The
angle of projection φ is expressed as
φ = tan⁻¹( (lz1 − lz2) / (d1 − lx2 + lx1) ) . (35)
The deflections lx1 , lz1 , lx2 , and lz2 are defined as, respectively,
lx1 = ∫ ds1 sin θ1 ≡ Σk ∆s1 sin θ1^k , (36)
lz1 = ∫ ds1 cos θ1 ≡ Σk ∆s1 cos θ1^k , (37)
lx2 = ∫ ds2 sin θ2 ≡ Σk ∆s2 sin θ2^k , (38)
lz2 = ∫ ds2 cos θ2 ≡ Σk ∆s2 cos θ2^k . (39)
Note that the total electrostatic force on a particular CNT is obtained by
summing up all the binary contributions within the cell, that is, by summing
Eqs. (32) and (33) over its N − 1 neighbours, where N
is the number of CNTs in the cell as discussed in Sec. 2.1.
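The discretized sums of Eqs. (32)-(39) can be sketched as follows. The constant C0 lumps the prefactor (πen̂0)² dt^(1)dt^(2)/(4πεε0), and uniform segment lengths are assumed, as in the text; segment orientations are supplied as lists of angles.

```python
import math

def deflections(thetas, ds):
    """Cumulative deflections (lx, lz) along a CNT, Eqs. (36)-(39)."""
    lx = lz = 0.0
    out = []
    for th in thetas:
        lx += ds * math.sin(th)
        lz += ds * math.cos(th)
        out.append((lx, lz))
    return out

def electrostatic_force(thetas1, thetas2, ds, d1, C0):
    """Pairwise-summed force components on CNT (1) due to CNT (2),
    following Eqs. (32)-(35); C0 lumps (pi e n0)^2 d1 d2 / (4 pi eps eps0)."""
    fcx = fcz = 0.0
    for (lx1, lz1) in deflections(thetas1, ds):
        for (lx2, lz2) in deflections(thetas2, ds):
            dx = d1 - lx2 + lx1              # horizontal separation, Eq. (34)
            dz = lz1 - lz2
            r12 = math.hypot(dx, dz)         # Eq. (34)
            phi = math.atan2(dz, dx)         # Eq. (35)
            fcx += C0 / r12**2 * math.cos(phi) * ds   # Eq. (32)
            fcz += C0 / r12**2 * math.sin(phi) * ds   # Eq. (33)
    return fcx, fcz
```

For two identical vertical CNTs the Z-contributions cancel by symmetry, leaving only a repulsive X-component, as one would expect for parallel like-charged tubes.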
2.3.3 The van der Waals force
Next, we consider the van der Waals effect. The van der Waals force plays an important
role not only in the interaction of the CNTs with the substrate, but also in the
interaction between the walls of MWNTs and CNT bundles. Due to the overall
effect of forces and the flexibility of the CNTs (here assumed to be elastic 1D members),
the cylindrical symmetry of CNTs is destroyed, leading to their axial and radial
deformations. The change in cylindrical symmetry may significantly affect the
properties of CNTs37−38 . Here we estimate the van der Waals forces due to the
interaction between two concentric walls of the MWCNTs.
Let the lateral and the longitudinal displacements of a CNT be
ux′ and uz′ , respectively. We use an updated Lagrangian approach with a local coordinate
system for this description (similar to the (X ′, Z ′) system shown in Fig. 3), where
the longitudinal axis coincides with Z ′ and the lateral axis coincides with X ′. Such
a description is consistent with the incremental procedure to update the CNT orien-
tations in the cells as adopted in the computational scheme. Also, due to the large
length-to-diameter ratio (L(t)/dt), let the kinematics of the CNTs, which are ideal-
ized in this work as 1D elastic members, be governed by that of an Euler-Bernoulli
beam. Therefore, the kinematics can be written as
uz′^(m) = uz′0 − r^(m) ∂ux′/∂z′ , (40)
where the superscript (m) indicates the mth wall of the MWNT with r(m) as its
radius and uz′0 is the longitudinal displacement of the center of the cylindrical cross-
section. Under tension, bending moment and lateral shear force, the elongation of
one wall relative to its neighboring wall is
∆uz′^(m) = uz′^(m+1) − uz′^(m) = r^(m+1) ∂ux′^(m+1)/∂z′ − r^(m) ∂ux′^(m)/∂z′ ≈ (r^(m+1) − r^(m)) ∂∆x′/∂z′ , (41)
where we assume ux′^(m) = ux′^(m+1) = ∆x′ as the lateral displacement, some function
of the tensile force, compression buckling, or pressure in the thin film device. The
lateral shear stress (τvs^(m)) due to the van der Waals effect can now be written as
τvs^(m) = Cvs ∆uz′^(m) , (42)
where Cvs is the van der Waals coefficient. Hence, the shear force per unit length
can be obtained by integrating Eq. (42) over the individual wall circumferences and
then by summing up for all the neighboring pair interactions, that is,
fvs = Σm ∮ τvs^(m) reff dψ = Σm 2πCvs (r^(m+1) − r^(m)) (∂∆x′/∂z′) (r^(m+1) + r^(m))/2
⇒ fvs = Σm πCvs[(r^(m+1))² − (r^(m))²] ∂∆x′/∂z′ . (43)
The components of the van der Waals force in the cell coordinate system (X, Z) are then
obtained as
fvsz = fvs sin θ(t) , fvsx = fvs cos θ(t) . (44)
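The wall-pair sum of Eq. (43) and the projection of Eq. (44) can be sketched directly. The wall radii and the slope ∂∆x′/∂z′ in the test are illustrative values, not MWNT data from this work.

```python
import math

def vdw_shear_force(radii, dslope, C_vs):
    """Inter-wall van der Waals shear force per unit length, Eq. (43):
    f_vs = sum_m pi C_vs ((r^(m+1))^2 - (r^(m))^2) * d(delta_x')/dz'
    radii: MWNT wall radii, innermost first; dslope: d(delta_x')/dz'."""
    f = 0.0
    for r_in, r_out in zip(radii, radii[1:]):   # neighbouring wall pairs
        f += math.pi * C_vs * (r_out**2 - r_in**2) * dslope
    return f

def vdw_components(f_vs, theta):
    """Projection to the cell frame, Eq. (44): (f_vsz, f_vsx)."""
    return f_vs * math.sin(theta), f_vs * math.cos(theta)
```

Because the pair contributions telescope, the total depends only on the outermost and innermost wall radii, which is why MWNTs with more walls resist bending more strongly.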
2.3.4 Ponderomotive force
Ponderomotive force, which acts on free charges on the surface of CNTs, tends to
straighten the bent CNTs under the influence of electric field in the Z-direction.
Furthermore, the ponderomotive forces induced by the applied electric field stretch
every CNT39 . We add this effect by assuming that the free charge at the tip region
is subjected to Ponderomotive force, which is computed as40
fpz = (1/2) ε0 Ez′² ∆A cos θ(t) , fpx = 0 , (45)
where ∆A is the surface area of the cell on the anode side, fpz is the Z component
of the Ponderomotive force and the X component fpx is assumed to be negligible.
2.4 Modelling the reorientation of CNTs
The net force components acting on the CNTs along Z and X directions can be
expressed as, respectively,
fz = ∫ (flz + fvsz) ds + fcz + fpz , (46)
fx = ∫ (flx + fvsx) ds + fcx + fpx . (47)
For numerical computation, at each time step the force components obtained using
Eqs. (46) and (47) are employed to update the curved shape S′(x′ + ux′ , z′ + uz′),
where the displacements are approximated using a simple beam mechanics solution:
uz′ ≈ (1/(E′A0)) Σj (fz^(j+1) − fz^j)(z′^(j+1) − z′^j) , (48)
ux′ ≈ (1/(3E′A2)) Σj (fx^(j+1) − fx^j)(x′^(j+1) − x′^j)³ , (49)
where A0 is the effective cross-sectional area, A2 is the area moment, and E′ is the
modulus of elasticity of the CNT under consideration. The angle of orientation,
θ(t), of the corresponding segment of the CNT, that is between the node j + 1 and
node j, is given by
θ(t) = θ(t)^j = tan⁻¹[ ((x^(j+1) + ux^(j+1)) − (x^j + ux^j)) / ((z^(j+1) + uz^(j+1)) − (z^j + uz^j)) ] , (50)
{ux^j ; uz^j} = [ Γ(θ(t − ∆t)^j) ] {ux′^j ; uz′^j} , (51)
where Γ is the usual coordinate transformation matrix which maps the displace-
ments (ux′ , uz′) defined in the local (X
′, Z ′) coordinate system into the displace-
ments (ux, uz) defined in the cell coordinate system (X,Z). For this transformation,
we employ the angle θ(t−∆t) obtained in the previous time step and for each node
j = 1, 2, . . ..
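The angle update of Eq. (50) is a two-argument arctangent over the displaced nodal coordinates. A minimal sketch, using atan2 for quadrant safety (which Eq. (50)'s tan⁻¹ leaves implicit):

```python
import math

def segment_angle(x1, z1, x2, z2, ux1, uz1, ux2, uz2):
    """Updated orientation angle of the CNT segment between nodes j and j+1,
    Eq. (50): theta = atan(dx_displaced / dz_displaced)."""
    return math.atan2((x2 + ux2) - (x1 + ux1),
                      (z2 + uz2) - (z1 + uz1))
```

A vertical, undeformed segment returns θ = 0, and a lateral tip displacement produces a positive tilt, matching the sign convention of Fig. 3.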
3 Computational scheme
As already highlighted in the previous section, we model the CNTs as generally
oriented 1D elastic members. These 1D members are represented by nodes and
segments. With given initial distribution of the CNTs in the cell, we discretize
the time into uniform steps ti+1 − ti = ∆t. The computational scheme involves
three parts: (i) discretization of the nucleation coupled model for degradation of
CNTs derived in Sec. 2.1, (ii) incremental update of the CNT geometry using the
estimated electromechanical force and (iii) computation of the field emission current
in the device.
3.1 Discretization of the nucleation coupled model for degra-
dation
With the help of Eqs. (14)-(16) and by eliminating the kinetic nucleation rate Nkin,
we first rewrite the simplified form of Eqs. (10)-(13), which are given by, respectively,
dn1/dt = ns dS/dt + β11 √(Θ/(2π)) exp(Θ − 4Θ³/(27(lnS)²)) n1² , (52)
dS/dt = −(2β11 ns Θ S)/(√(2π)(lnS)³) exp(Θ − 4Θ³/(27(lnS)²)) − (2/ns)√(kT/(2πm1)) n1(S − 1)An , (53)
dM1/dt = β11 d*p √(Θ/(2π)) exp(Θ − 4Θ³/(27(lnS)²)) n1² + 2ns²v1 √(kT/(2πm1)) exp(Θ)(S − 1) , (54)
dAn/dt = (4β11 s1 Θ^(5/2) S)/(9√(2π)(lnS)²) exp(Θ − 4Θ³/(27(lnS)²)) n1² + 4πnsv1 √(kT/(2πm1)) S(S − 1)M1 . (55)
By eliminating dS/dt from Eq. (52) with the help of Eq. (53) and by applying a
finite difference formula in time, we get
(n1i − n1i−1)/(ti − ti−1) = [ β11 √(Θ/(2π)) exp(Θ − 4Θ³/(27(lnSi−1)²)) − (2β11 ns Θ Si)/(√(2π)(lnSi−1)³) exp(Θ − 4Θ³/(27(lnSi−1)²)) − (2/ns)√(kT/(2πm1)) (Si − 1)An(i) ] n1i² . (56)
Similarly, Eqs. (53)-(55) are discretized as, respectively,
(Si − Si−1)/(ti − ti−1) = −(2β11 ns Θ Si)/(√(2π)(lnSi−1)³) exp(Θ − 4Θ³/(27(lnSi−1)²)) − (2/ns)√(kT/(2πm1)) n1i(Si − 1)Ani , (57)
(M1i − M1i−1)/(ti − ti−1) = β11 d*p √(Θ/(2π)) exp(Θ − 4Θ³/(27(lnSi−1)²)) n1i² + 2v1 (n1i²/Si²) √(kT/(2πm1)) exp(Θ)(Si − 1) , (58)
(Ani − Ani−1)/(ti − ti−1) = (4β11 s1 Θ^(5/2) Si)/(9√(2π)(lnSi−1)²) exp(Θ − 4Θ³/(27(lnSi−1)²)) n1i² + 4πv1 n1i √(kT/(2πm1)) (Si − 1)M1i . (59)
By simplifying Eq. (56) with the help of Eqs. (57)-(59), we get a quadratic polyno-
mial of the form
(b1 − b2 − b3)n1i² − n1i + n1i−1 = 0 , (60)
where
b1 = ∆t β11 √(Θ/(2π)) exp(Θ − 4Θ³/(27(lnSi−1)²)) , (61)
b2 = ∆t (2β11 ns Θ)/√(2π) exp(Θ − 4Θ³/(27(lnSi−1)²)) Si/(lnSi−1)³ , (62)
b3 = ∆t (2/ns)√(kT/(2πm1)) (Si − 1)An(i) . (63)
Solution of Eq. (60) yields two roots (denoted by superscripts (1, 2)):
n1i^(1,2) = [ 1 ± √(1 − 4n1i−1(b1 − b2 − b3)) ] / [ 2(b1 − b2 − b3) ] . (64)
For the first time step, the values of b1, b2 and b3 are obtained by applying the
initial conditions: S(0) = S0, n10 = n0, and An0 = An0. Since n1i must be real
and finite, the following two conditions are imposed: 1 − 4n1i−1(b1 − b2 − b3) ≥ 0 and
(b1 − b2 − b3) ≠ 0. Also, it has been assumed that the degradation of CNTs is an
irreversible process, that is, the reformation of CNTs from the carbon cluster does
not take place. Therefore, an additional condition of positivity, that is, n1i > n1i−1
is introduced while performing the time stepping. Along with the above constraints,
the n1 history in a cell is calculated as follows:
• If n1i^(1) > n1i−1 and n1i^(1) is real and finite, then n1i = n1i^(1) .
• Else if n1i^(2) > n1i−1 , then n1i = n1i^(2) .
• Otherwise the value of n1 remains the same as in the previous time step, that
is, n1i = n1i−1 .
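The root-selection rules above amount to a small update function. A sketch, with the real-and-finite requirement folded into the discriminant and divisor checks:

```python
import math

def step_n1(n1_prev, b1, b2, b3):
    """One time step of Eq. (60): (b1 - b2 - b3) n1^2 - n1 + n1_prev = 0,
    applying the root-selection rules (degradation is irreversible, so n1
    may only grow or stay the same)."""
    b = b1 - b2 - b3
    disc = 1.0 - 4.0 * n1_prev * b
    if b == 0.0 or disc < 0.0:
        return n1_prev                   # constraints violated: hold value
    root_plus = (1.0 + math.sqrt(disc)) / (2.0 * b)
    root_minus = (1.0 - math.sqrt(disc)) / (2.0 * b)
    for root in (root_plus, root_minus):
        if root > n1_prev:               # positivity (irreversibility) check
            return root
    return n1_prev
```

When neither root exceeds the previous value, the cluster concentration is simply held, which is what produces the stepwise, saturating n1(t) histories reported in Sec. 4.1.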
Simplification of Eq. (57) results in the following equation:
Si² + (c1 + c2 − Si−1)Si − c1 = 0 , (65)
where
c1 = ∆t (2/ns)√(kT/(2πm1)) n1i Ani , (66)
c2 = ∆t (2β11 ns Θ)/√(2π) exp(Θ − 4Θ³/(27(lnSi−1)²)) / (lnSi−1)³ . (67)
Solution of Eq. (65) yields the following two roots:
Si = −(c1 + c2 − Si−1)/2 ± (1/2)√( (c1 + c2 − Si−1)² + 4c1 ) . (68)
For the first time step, c1 and c2 are calculated with the following conditions: n11
from the above calculation, S(0) = S0, and An0 = An0. Realistically, the saturation
ratio S cannot be negative or equal to one; requiring Si > 0 yields the condition c1 > 0.
While solving for An, the Eq. (59) is solved with the values of n1 and S from the
above calculations and the initial conditions An0 = An0, M10 = M0. The value of
M10 was calculated by assuming n(dp, t) as a standard normal distribution function.
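The saturation-ratio update of Eqs. (65)-(68) can be sketched the same way; taking the larger root enforces Si > 0 when c1 > 0, since the product of the two roots is −c1 < 0:

```python
import math

def step_saturation(S_prev, c1, c2):
    """One time step of Eq. (65): S^2 + (c1 + c2 - S_prev) S - c1 = 0,
    solved via Eq. (68); the positive (larger) root is retained."""
    p = c1 + c2 - S_prev
    disc = p * p + 4.0 * c1              # discriminant of Eq. (68)
    roots = ((-p + math.sqrt(disc)) / 2.0,
             (-p - math.sqrt(disc)) / 2.0)
    return max(roots)
```

With c1 = c2 = 0 the scheme simply carries S forward unchanged, which is a convenient sanity check on the discretization.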
3.2 Incremental update of the CNT geometry
At each time step t = ti, once n1i is solved, we are in a position to compute
the net electromechanical force (see Sec. 2.3) as
fi = fi(E0, n1i−1 , θ(ti−1)) . (69)
Subsequently, the orientation angle for each segment of each CNT is then obtained
as (see Sec. 2.4)
θ(ti)^j = θ(fi)^j , (70)
and it is stored for future calculations. A critical angle θc is employed, with
θc ≈ π/4 to π/2.5 in the present numerical simulations. For θ ≤ θc, the
meaning of fz is the “longitudinal force” and the meaning of fx is the “lateral force”
in the context of Eqs. (48) and (49). When θ > θc, the meanings of fz and fx are
interchanged.
3.3 Computation of field emission current
Once the updated tip angles and the electric field at the tip are obtained at a
particular time step, we employ Eq. (1) to compute the current density contribution
from each CNT tip, which can be rewritten as
Ji = B Ezi² exp(−CΦ^(3/2)/Ezi) , (71)
with B = (1.4 × 10−6) × exp(9.8929 × Φ−1/2) and C = 6.5 × 107 taken from ref.41
. The device current (Ii) from each computational cell with surface area ∆A at the
anode at the present time step ti is obtained by summing up the current density
over the number of CNTs in the cell, that is,
Ii = ∆A Σ Ji . (72)
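The per-tip current of Eqs. (71)-(72) is then a short computation. The placement of Φ and Ezi in the exponential follows the standard semi-empirical Fowler-Nordheim form with B and C taken as the constants quoted above; tips with Ez = 0 (those past θc) are skipped.

```python
import math

def fn_current_density(Ez, phi):
    """Fowler-Nordheim current density, Eq. (71):
    J = B Ez^2 exp(-C phi^(3/2) / Ez),
    with B = 1.4e-6 * exp(9.8929 / sqrt(phi)) and C = 6.5e7 (ref. 41)."""
    B = 1.4e-6 * math.exp(9.8929 / math.sqrt(phi))
    C = 6.5e7
    return B * Ez**2 * math.exp(-C * phi**1.5 / Ez)

def device_current(dA, tip_fields, phi=2.2):
    """Cell current, Eq. (72): area-weighted sum over emitting CNT tips."""
    return dA * sum(fn_current_density(Ez, phi)
                    for Ez in tip_fields if Ez > 0.0)
```

The exponential dependence on 1/Ez is what produces the order-of-magnitude jumps in current between the 440 V, 550 V and 660 V bias cases discussed in Sec. 4.3.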
Fig. 5 shows the flow chart of the computational scheme discussed above.
At t = 0, in our model, the CNTs can be randomly oriented. This random
distribution is parameterized in terms of the upper bound of the CNT tip deflection,
which is given by ∆xmax = h/q, where h is the CNT length and q is a real number.
In the numerical simulations which will be discussed next, the initial tip deflections
can vary widely. The following values of the upper bound of the tip deflection have
been considered: ∆xmax = h0/(5 + 10p), (p = 0, 1, 2, ..., 9). The tip deflection ∆x is
randomized between zero and these upper bounds. Simulation for each initial input
with a randomized distribution of tip deflections was run for a number of times and
the maximum, minimum, and average values of the output current were obtained.
In the first set, the simulations were run for a uniform height, radius and spacing
of CNTs in the film. Subsequently, the height, the radius and the spacing were
varied randomly within certain bounds, and their effects on the output current were
analyzed.
4 Results and discussions
The CNT film under study in this work consists of randomly oriented multi-walled
nanotubes (MWNTs). The film samples were grown on a stainless steel substrate.
The film has a surface area of 1cm2 and thickness of 10−14µm. The anode consists
of a 1.59mm thick copper plate with an area of 49.93mm2. The current-voltage
history is measured over a range of DC bias voltages for a controlled gap between
the cathode and the anode. In the experimental set-up, the device is placed within a
vacuum chamber of a multi-stage pump. The gap (d) between the cathode substrate
and the anode is controlled from outside by a micrometer.
4.1 Degradation of the CNT thin films
We assume that at t = 0, the film contains a negligible amount of carbon clusters.
To understand the phenomena of degradation and fragmentation of the CNTs, the
following three sets of input are considered: n1(0) = 100, 150, 500. The other initial
conditions are set as S(0) = 100, M1(0) = 2.12× 10−16, An(0) = 0, and T = 303K.
Fig. 6 shows the three n1(t) histories over a small time duration (160s) for the three
cases of n1(0), respectively. For n1(0) = 100 and 150, the time histories indicate that
the rate of decay is very slow, which in turn implies longer lifetime of the device.
For n1(0) = 500, the time history indicates that the CNTs decay comparatively
faster, but the decay is still insignificant for the first 34s, after which the cluster concentration
becomes constant. It can be concluded from the above three cases that the rate of
decay of CNTs is generally slow under operating conditions, which implies stable
performance and longer lifetime of the device if this aspect is considered alone.
Next, the effect of variation in the initial saturation ratio S(0) on n1(t) history
is studied. The value of n1(0) is set as 100, while other parameters are assumed to
have identical value as considered previously. The following three initial conditions
in S(0) are considered: S(0) = 50, 100, 150. Fig. 7 shows the n1(t) histories. It
can be seen in this figure that for S(0) = 100 (moderate value), the carbon cluster
concentration first increases and then tends to a steady state. This was also observed
in Fig. (6). For higher values of S(0), n1 increases exponentially over time. For
S(0) = 50, a smaller value, the decay is not observed at all. This implies that a
small value of S(0) is favorable for longer lifetime of the cathode. However, a more
detailed investigation on the physical mechanism of cluster formation and CNT
fragmentation may be necessary, which is an open area of research.
At t = 0, we assign random orientation angles (θ(0)j) to the CNT segments.
For a cell containing 100 CNTs, Fig. 8 shows the terminal distribution of the CNT
tip angles (at t = 160s corresponding to the n1(0) = 100 case discussed previously)
compared to the initial distribution (at t = 0). The large fluctuations in the tip
angles for many of the CNTs can be attributed to the significant electromechanical
interactions.
4.2 Current-voltage characteristics
In the present study, the quantum-mechanical treatment has not been explicitly
carried out, and instead, the Fowler-Nordheim equation has been used to calculate
the current density. In such a semi-empirical calculation, the work function Φ42
for the CNTs must be known accurately under a range of conditions for which the
device-level simulations are being carried out. For CNTs, the field emission electrons
originate from several excited energy states (non metallic electronic states)43−44 .
Therefore, the work function for CNTs is usually not well identified and is more
complicated to compute than for metals. Several methodologies for calculating the
work function for CNTs have been proposed in literature. On the experimental
side, Ultraviolet Photoelectron Spectroscopy (UPS) was used by Suzuki et al.45
to calculate the work function for SWNTs. They reported a work function value
of 4.8 eV for SWNTs. By using UPS, Ago et al.46 measured the work function
for MWNTs as 4.3 eV. Fransen et al.47 used the field emission electronic energy
distribution (FEED) to investigate the work function for an individual MWNT that
was mounted on a tungsten tip. Form their experiments, the work function was
found to be 7.3±0.5 eV. Photoelectron emission (PEE) was used by Shiraishi et al.48
to measure the work function for SWNTs and MWNTs. They measured the work
function for SWNTs to be 5.05 eV and for MWNTs to be 4.95 eV. Experimental
estimates of work function for CNTs were carried out also by Sinitsyn et al.49 .
Two types were investigated by them: (i) 0.8-1.1 nm diameter SWNTs twisted into
ropes of 10 nm diameter, and (ii) 10 nm diameter MWNTs twisted into 30-100 nm
diameter ropes. The work functions for SWNTs and MWNTs were estimated to
be 1.1 eV and 1.4 eV, respectively. Obraztsov et al.50 reported the work function
for MWNTs grown by CVD to be in the range 0.2-1.0 eV. These work function
values are much smaller than the work function values of metals (≈ 3.6 − 5.4eV ),
silicon (≈ 3.30 − 4.30 eV), and graphite (≈ 4.6 − 5.4 eV). The calculated values of
the work function of CNTs by different techniques are summarized in Table 1. The
wide range of work functions in different studies indicates that there are possibly
other important effects (such as electromechanical interactions and strain) which also
depend on the method of sample preparation and different experimental techniques
used in those studies. In the present study, we have chosen Φ = 2.2eV .
The simulated current-voltage (I-V) characteristics of a film sample for a gap
d = 34.7µm is compared with the experimental measurement in Fig. 9. The average
height, the average radius and the average spacing between neighboring CNTs in the
film sample are taken as h0 = 12µm, r = 2.75nm, and d1 = 2µm. The simulated I-V
curve in Fig. 9 corresponds to the average of the computed current for the ten runs.
This is a first, preliminary simulation of its kind based on a multiphysics
modeling approach, and the present model predicts I-V characteristics
in close agreement with the experimental measurement. However, the above
comparison indicates some deviations near the threshold voltage of
≈ 500 − 600V , which need to be addressed by improving the model as well as
the experimental materials and methods.
4.3 Field emission current history
Next, we simulate the field emission current histories for a sample configuration
similar to that used previously, but with three different parametric variations: height,
radius, and spacing. Current histories are shown for constant bias voltages of 440V ,
550V and 660V .
4.3.1 Effects of uniform height, uniform radius and uniform spacing
In this case, the values of height, radius, and the spacing between the neighboring
CNTs are kept identical to the previous current-voltage calculation in Sec. 4.2.
Fig. 10(a), (b) and (c) show the current histories for three different bias voltages
of 440V , 550V and 660V . In the subfigures, we plot the minimum, the maximum
and the average currents over time as post-processed from a number of runs with
randomized input distributions. At a bias voltage of 440V , the average current
decreases from 1.36× 10−8A to 1.25× 10−8A in steps. The maximum current varies
between 1.86×10−8A to 1.68×10−8A, whereas the minimum current varies between
2.78 × 10−9A to 2.52 × 10−9A. Comparisons among the scales in the sub-figures
indicate that there is an increase in the order of magnitude of current when the bias
voltage is increased. The average current decreases from 1.25×10−5A to 1.06×10−5A
in steps when the bias voltage is increased from 440V to 550V . At the bias voltage of
660V , the average value of the current decreases from 1.26×10−3A to 1.02×10−3A.
The increase in the order of magnitude in the current at higher bias voltage is due
to the fact that the electrons are extracted with a larger force. However, at a higher
bias voltage, the current is found to decay faster (see Fig. 10(c)).
4.3.2 Effects of non-uniform radius
In this case, the uniform height and the uniform spacing between the neighboring
CNTs are taken as h0 = 12µm and d1 = 2µm, respectively. Random distribution of
radius is given with bounds 1.5−4nm. The simulated results are shown in Fig. 11. At
the bias voltage of 440V , the average current decreases from 1.37× 10−8A at t = 1s
to 1.23× 10−8A at t = 138s in steps and then the current stabilizes. The maximum
current varies between 1.87× 10−8A to 1.72× 10−8A, whereas the minimum current
varies between 2.53 × 10−9A to 2.52 × 10−9A. The average current decreases from
1.26× 10−5A to 1.08× 10−5A in steps when the bias voltage is increased from 440V
to 550V . At a bias voltage of 660V , the average current decreases from 1.26×10−3A
to 1.02 × 10−3A. As expected, more fluctuation between the maximum and the
minimum current is observed here when compared to the case of uniform
radius.
4.3.3 Effects of non-uniform height
In this case, the uniform radius and the uniform spacing between neighboring CNTs
are taken as r = 2.75nm and d1 = 2µm, respectively. Random initial distribution
of the height is given with bounds 10 − 14µm. The simulated results are shown in
Fig. 12. At the bias voltage of 440V , the average current decreases from 1.79×10−6A
to 1.53×10−6A. The maximum current varies between 6.33×10−6A to 5.89×10−6A,
whereas the minimum current varies between 2.69× 10−10A to 4.18× 10−10A. The
average current decreases from 0.495 × 10−3A to 0.415 × 10−3A in steps when the
bias voltage is increased from 440V to 550V . At the bias voltage of 660V , the
average current decreases from 0.0231A to 0.0178A. The device response is found
to be highly sensitive to the height distribution.
4.3.4 Effects of non-uniform spacing between neighboring CNTs
In this case, the uniform height and the uniform radius of the CNTs are taken as h0 =
12µm and r = 2.75nm, respectively. The spacing d1 between neighboring CNTs is
randomly distributed within the bounds 1.5 − 2.5µm. The simulated results are
shown in Fig. 13. At the bias voltage of 440V , the average current decreases from
1.37× 10−8A to 1.26× 10−8A. The maximum current varies between 1.89× 10−8A
to 1.76 × 10−8A, whereas the minimum current varies between 2.86 × 10−9A to
2.61× 10−9A. The average current decreases from 1.24× 10−5A to 1.08× 10−5A in
steps when the bias voltage is increased from 440V to 550V . At the bias voltage of
660V , the average current decreases from 1.266 × 10−3A to 1.040 × 10−3A. There
is a slight increase in the order of magnitude of the current for non-uniform spacing. It
can be attributed to the reduced screening effect at some emitting sites in the film
where the spacing is large.
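The parametric trends above can be mimicked with a small Monte Carlo sketch. Everything below is illustrative rather than the paper's full multiphysics model: only the Fowler-Nordheim dependence is kept, with a crude field-enhancement factor β ≈ h/r, the work function Φ = 2.2 eV and constants of the form quoted in the authors' related work, and no screening, degradation or CNT reorientation. All names and the enhancement model are assumptions.

```python
import math
import random

def fn_current_density(E, phi=2.2, B0=1.4e-6, C=6.5e7):
    """Fowler-Nordheim current density (standard form, illustrative).
    E: local field [V/m], phi: work function [eV]."""
    B = B0 * math.exp(9.8929 / math.sqrt(phi))
    return B * E ** 2 / phi * math.exp(-C * phi ** 1.5 / E)

def sample_currents(n_cnt=100, V_bias=440.0, gap=34.7e-6, seed=1):
    """Per-CNT currents for radii drawn from the 1.5-4 nm bounds.
    beta ~ h/r and the emitting tip area are crude illustrative choices."""
    rng = random.Random(seed)
    h = 12e-6                      # uniform height, as in the text
    E0 = V_bias / gap              # macroscopic field
    currents = []
    for _ in range(n_cnt):
        r = rng.uniform(1.5e-9, 4.0e-9)
        beta = h / r               # crude field-enhancement factor
        tip_area = math.pi * r * r
        currents.append(fn_current_density(beta * E0) * tip_area)
    return currents

currents = sample_currents()
```

Sampling height or spacing instead of radius follows the same loop; only the geometric attribute drawn from `rng.uniform` changes.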
5 Conclusions
In this paper, we have developed a multiphysics based modelling approach to analyze
the evolution of the CNT thin film. The developed approach has been applied to
the simulation of the current-voltage characteristics at the device scale. First, a
phenomenological model of degradation and fragmentation of the CNTs has been
derived. From this model we obtain the degraded state of the CNTs in the film. This
information, along with electromechanical force, is then employed to update the
initially prescribed distribution of CNT geometries in a time incremental manner.
Finally, the device current is computed at each time step by using the semi-empirical
Fowler-Nordheim equation and integration over the computational cell surfaces on
the anode side. The model thus handles several important effects at the device
scale, such as fragmentation of the CNTs, formation of the carbon clusters, and self-
assembly of the system of CNTs during field emission. The consequence of these
effects on the I-V characteristics is found to be important as clearly seen from the
simulated results which are in close agreement with experiments. Parametric studies
reported in the concluding part of this paper indicate that the effects of the height
of the CNTs and of the spacing between them on the current history are significant
at the fast time scale.
There are several other physical factors, such as the thermoelectric heating,
interaction between the cathode substrate and the CNTs, time-dependent electronic
properties of the CNTs and the clusters, ballistic transport etc., which may be
important to consider while improving upon the model developed in the present
paper. Effects of some of these factors have been discussed in the literature before
in the context of isolated CNTs, but little is known at the system level. We note also
that in the present model, the evolution mechanism is not fully coupled with the
electromechanical forcing mechanism. The incorporation of the above factors and
the full systematic coupling into the modelling framework developed here presents
an appealing scope for future work.
Acknowledgment The authors would like to thank Natural Sciences and Engi-
neering Research Council (NSERC), Canada, for financial support.
References
[1] C. A. Spindt, I. Brodie, L. Humphrey and E. R. Westerberg, J. Appl. Phys. 47,
5248 (1976).
[2] Y. Gotoh, M. Nagao, D. Nozaki, K. Utsumi, K. Inoue, T. Nakatani, T.
Sakashita, K. Betsui, H. Tsuji and J. Ishikawa, J. Appl. Phys. 95, 1537 (2004).
[3] W. Zhu (Ed.), Vacuum microelectronics, Wiley, NY (2001).
[4] S. Iijima, Nature 354, 56 (1991).
[5] A. G. Rinzler, J. H. Hafner, P. Nikolaev, L. Lou, S. G. Kim, D. Tomanek, D.
Colbert and R. E. Smalley, Science 269, 1550 (1995).
[6] W. A. de Heer, A. Chatelain and D. Ugrate, Science 270, 1179 (1995).
[7] L. A. Chernozatonskii, Y. V. Gulyaev, Z. Y. Kosakovskaya, N. I. Sinitsyn, G.
V. Torgashov, Y. F. Zakharchenko, E. A. Fedorov and V. P. Valchuk, Chem.
Phys. Lett. 233, 63 (1995).
[8] J. M. Bonard, J. P. Salvetat, T. Stockli, L. Forro and A. Chatelain, Appl. Phys.
A 69, 245 (1999).
[9] H. Sugie, M. Tanemure, V. Filip, K. Iwata, K. Takahashi and F. Okuyama,
Appl. Phys. Lett. 78, 2578 (2001).
[10] O. Groening, O. M. Kuettel, C. Emmenegger, P. Groening, and L. Schlapbach,
J. Vac. Sci. Tech. B18, 665 (2000).
[11] R. H. Fowler, and L. Nordheim, Proc. Royal Soc. London A 119, 173 (1928).
[12] J. M. Bonard, F. Maier, T. Stockli, A. Chatelain, W. A. de Heer, J. P. Salvetat
and L. Forro, Ultramicroscopy 73, 7 (1998).
[13] K. A. Dean, T. P. Burgin and B. R. Chalamala, Appl. Phys. Lett. 79, 1873
(2001).
[14] Y. Wei, C. Xie, K. A. Dean and B. F. Coll, Appl. Phys. Lett. 79, 4527 (2001).
[15] Z. L. Wang, R. P. Gao, W. A. de Heer and P. Poncharal, Appl. Phys. Lett. 80,
856 (2002).
[16] J. Chung, K. H. Lee, J. Lee, D. Troya, and G. C. Schatz, Nanotechnology 15,
1596 (2004).
[17] P. Avouris, R. Martel, H. Ikeda, M. Hersam, H. R. Shea and A. Rochefort,
in Fundamental Mater. Res. Series, M. F. Thorpe (Ed.), Kluwer Aca-
demic/Plenum Publishers (2000) pp.223-237.
[18] L. Nilsson, O. Groening, P. Groening and L. Schlapbach, Appl. Phys. Lett. 79,
1036 (2001).
[19] L. Nilsson, O. Groening, P. Groening and L. Schlapbach, J. Appl. Phys. 90,
768 (2001).
[20] J. M. Bonard, C. Klinke, K. A. Dean and B. F. Coll, Phys. Rev. B 67, 115406
(2003).
[21] J. M. Bonard, N. Weiss, H. Kind, T. Stockli, L. Forro, K. Kern and A. Chate-
lain, Adv. Matt. 13, 184 (2001).
[22] J. M. Bonard, J. P. Salvetat, T. Stockli, W. A. de Heer, L. Forro and A.
Chatelain, Appl. Phys. Lett. 73, 918 (1998).
[23] S. C. Lim, H. J. Jeong, Y. S. Park, D. S. Bae, Y. C. Choi, Y. M. Shin, W. S.
Kim, K. H. An and Y. H. Lee, J. Vac. Sci. Technol. A 19, 1786 (2001).
[24] N. Y. Huang, J. C. She, J. Chen, S. Z. Deng, N. S. Xu, H. Bishop, S. E. Huq, L.
Wang, D. Y. Zhong, E. G. Wang and D. M. Chen, Phys. Rev. Lett. 93, 075501
(2004).
[25] X. Y. Zhu, S. M. Lee, Y. H. Lee and T. Frauenheim, Phys. Rev. Lett. 85, 2757
(2000).
[26] P. G. Collins and A. Zettl, Appl. Phys. Lett. 69, 1969 (1996).
[27] D. Nicolaescu, L. D. Filip, S. Kanemaru and J. Itoh, Jpn. J. Appl. Phys. 43,
485 (2004).
[28] D. Nicolaescu, V. Filip, S. Kanemaru and J. Itoh, J. Vac. Sci. Technol. 21, 366
(2003).
[29] Y. Cheng and O. Zhou, Electron field emission from carbon nanotubes, C. R.
Physique 4, 1021 (2003).
[30] N. Sinha, D. Roy Mahapatra, J. T. W. Yeow, R. V. N. Melnik and D. A. Jaffray,
Proc. 6th IEEE Conf. Nanotech., Cincinnati, USA, July 16-20 (2006).
[31] N. Sinha, D. Roy Mahapatra, J. T. W. Yeow, R. V. N. Melnik and D. A. Jaffray,
Proc. 7th World Cong. Comp. Mech., Los Angeles, USA, July 16-22 (2006).
[32] S. K. Friedlander, Ann. N.Y. Acad. Sci. 404, 354 (1983).
[33] S. L. Grishick, C. P. Chiu and P. H. McMurry, Aerosol Sci. Technol. 13, 465
(1990).
[34] H. Jiang, P. Zhang, B. Liu, Y. Huang, P. H. Geubelle, H. Gao and K. C. Hwang,
Comp. Mat. Sci. 28, 429 (2003).
[35] G. Y. Slepyan, S. A. Maksimenko, A. Lakhtakia, O. Yevtushenko and A. V.
Gusakov, Phys. Rev. B 60, 17136 (1999).
[36] J. R. Xiao B. A. Gama and J. W. Gillespie Jr., Int. J. Solids Struct. 42, 3075
(2005).
[37] R. S. Ruoff, J. Tersoff, D. C. Lorents, S. Subramoney and B. Chan, Nature 364,
514 (1993).
[38] T. Hertel, R. E. Walkup and P. Avouris, Phys. Rev. B 58, 13870 (1998).
[39] O. E. Glukhova, A. I. Zhbanov, I. G. Torgashov, N. I. Sinistyn and G. V.
Torgashov, Appl. Surf. Sci. 215, 149 (2003).
[40] A. L. Musatov, N. A. Kiselev, D. N. Zakharov, E. F. Kukovitskii, A. I. Zhbanov,
K. R. Izrael’yants and E. G. Chirkova, Appl. Surf. Sci. 183, 111 (2001).
[41] Z. P. Huang, Y. Tu, D. L. Carnahan and Z. F. Ren, in Encycl. Nanosci. Nan-
otechnol. 3, Edited by H. S. Nalwa, American Scientific Publishers, Los Angeles
(2004), pp.401-416.
[42] J. W. Gadzuk and E. W. Plummer, Rev. Mod. Phys. 45, 487 (1973).
[43] K. A. Dean, O. Groening, O. M. Kuttel and L. Schlapbach, Appl. Phys. Lett.
75, 2773 (1999).
[44] A. Takakura, K. Hata, Y. Saito, K. Matsuda, T. Kona and C. Oshima, Ultra-
microscopy 95, 139 (2003).
[45] S. Suzuki, C. Bower, Y. Watanabe and O. Zhou, Appl. Phys. Lett. 76, 4007
(2000).
[46] H. Ago, T. Kugler, F. Cacialli, W. R. Salaneck, M. S. P. Shaffer, A. H. Windle
and R. H. Friend, J. Phys. Chem. B 103, 8116 (1999).
[47] M. J. Fransen, T. L. van Rooy and P. Kruit, Appl. Surf. Sci. 146, 312 (1999).
[48] M. Shiraishi and M. Ata, Carbon 39, 1913 (2001).
[49] N. I. Sinitsyn, Y. V. Gulyaev, G. V. Torgashov, L. A. Chernozatonskii, Z.
Y. Kosakovskaya, Y. F. Zakharchenko, N. A. Kiselev, A. L. Musatov, A. I.
Zhbanov, S. T. Mevlyut and O. E. Glukhova, Appl. Surf. Sci. 111, 145 (1997).
[50] A. N. Obraztsov, A. P. Volkov and I. Pavlovsky, Diam. Rel. Mater. 9, 1190
(2000).
Table 1: Summary of work function values for CNTs.

Type of CNT    Φ (eV)      Method
SWNT           4.8         Ultraviolet photoelectron spectroscopy [45]
MWNT           4.3         Ultraviolet photoelectron spectroscopy [46]
MWNT           7.3±0.5     Field emission electronic energy distribution [47]
SWNT           5.05        Photoelectron emission [48]
MWNT           4.95        Photoelectron emission [48]
SWNT           1.1         Experiments [49]
MWNT           1.4         Experiments [49]
MWNT           0.2-1.0     Numerical approximation [50]
Figure 1: Schematic drawing of the CNT thin film for model idealization.
(a) (b)
Figure 2: Schematic drawing showing (a) hexagonal arrangement of carbon atoms
in CNT and (b) idealization of CNT as a one-dimensional elastic member.
Figure 3: CNT array configuration.
Figure 4: Schematic description of neighboring CNT pair interaction for calculation
of electrostatic force.
Figure 5: Computational flow chart for calculating the device current.
Figure 6: Variation of carbon cluster concentration over time. Initial condition:
S(0) = 100, T = 303K, M1(0) = 2.12× 10−16, An(0) = 0.
Figure 7: Variation of carbon cluster concentration over time. Initial condition:
n1(0) = 100 m−3, T = 303K, M1(0) = 2.12× 10−16, An(0) = 0.
Figure 8: Distribution of tip angles over the number of CNTs.
Figure 9: Comparison of simulated current-voltage characteristics with experiments.
(a) (b) (c)
Figure 10: Simulated current histories for uniform radius, uniform height and uni-
form spacing of CNTs at a bias voltage of (a) 440 V, (b) 550 V, and (c) 660 V.
(a) (b) (c)
Figure 11: Simulated current histories for non-uniform radius of CNTs at a bias
voltage of (a) 440 V, (b) 550 V, and (c) 660 V.
(a) (b) (c)
Figure 12: Simulated current histories for non-uniform height of CNTs at a bias
voltage of (a) 440 V, (b) 550 V, and (c) 660 V.
(a) (b) (c)
Figure 13: Simulated current histories for non-uniform spacing between neighboring
CNTs at a bias voltage of (a) 440 V, (b) 550 V, and (c) 660 V.
Introduction
Role of various physical processes in the degradation of CNT field emitter
Model formulation
Nucleation coupled model for degradation of CNTs
Effect of CNT geometry and orientation
Electromechanical forces
Lorentz force
Electrostatic force
The van der Waals force
Ponderomotive force
Modelling the reorientation of CNTs
Computational scheme
Discretization of the nucleation coupled model for degradation
Incremental update of the CNT geometry
Computation of field emission current
Results and discussions
Degradation of the CNT thin films
Current-voltage characteristics
Field emission current history
Effects of uniform height, uniform radius and uniform spacing
Effects of non-uniform radius
Effects of non-uniform height
Effects of non-uniform spacing between neighboring CNTs
Conclusions
|
0704.1682 | Modeling the Field Emission Current Fluctuation in Carbon Nanotube Thin
Films | arXiv:0704.1682v1 [cond-mat.mtrl-sci] 13 Apr 2007
Modeling the Field Emission Current Fluctuation in Carbon Nanotube
Thin Films
N. Sinha*, D. Roy Mahapatra**, J.T.W. Yeow* and R. Melnik***
* Department of Systems Design Engineering, University of Waterloo,Waterloo, ON, Canada
** Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India
*** Mathematical Modeling and Computational Sciences,Wilfrid Laurier University, Waterloo, ON, Canada
[email protected]
ABSTRACT
Owing to their distinct properties, carbon nanotubes
(CNTs) have emerged as promising candidate for field
emission devices. It has been found experimentally that
the results related to the field emission performance
show variability. The design of an efficient field emit-
ting device requires the analysis of the variabilities with
a systematic and multiphysics based modeling approach.
In this paper, we develop a model of randomly oriented
CNTs in a thin film by coupling the field emission phe-
nomena, the electron-phonon transport and the mechan-
ics of a single isolated CNT. A computational scheme is
developed by which the states of CNTs are updated in
time incremental manner. The device current is calcu-
lated by using the Fowler-Nordheim equation for field emis-
sion to study the performance at the device scale.
Keywords: carbon nanotube, field emission, electro-
dynamics, current density.
1 INTRODUCTION
Field emission from carbon nanotubes (CNTs) was
first reported in 1995 [1],[2]. With advancement in syn-
thesis techniques, application of CNTs in field emis-
sion devices, such as field emission displays, gas dis-
charge tubes and X-ray tube sources has been success-
fully demonstrated [3], [4]. Field emission performance
of a single isolated CNT is found to be remarkable due to
its structural integrity, geometry, chemical stability and
high thermal conductivity. One can use a single CNT
to produce an electron beam in a single electron beam
device. However, in many applications (such as X-ray
imaging systems), a continuous or patterned film is re-
quired to produce several independent electron beams.
However, the situation in these cases becomes complex
due to coupling among (i) the ballistic electron-phonon
transport in the moderate to high temperature range, (ii)
field emission from each of the CNT tips and (iii) electro-
dynamic forces causing mechanical strain and deforming
the CNTs (and thus changing the electron density and
dynamic conductivity). In such cases, the individual CNTs
are not always inclined normal to the substrate surface
(as shown in Fig. 1, where CNT tips are oriented in a
random manner). This is the most common situation,
which can evolve from an initially ordered state of uni-
formly distributed and vertically oriented CNTs. Such
an evolution process must be analyzed accurately from
the viewpoint of the long-term performance of the device.
The authors' interest in such analysis and design studies
stems from the problem of precision biomedical X-ray
generation.
In this paper, we focus on a diode configuration,
where the cathode contains a CNT thin film grown on a
metallic substrate and the anode is a copper plate acting
as emission current collector. Here, the most important
requirement is to have a stable field emission current
without compromising the lifetime of the device. As
the CNTs in the film deplete with time, which is due
to burning and fragmentation that result in a decreas-
ing number of emitting sites, one observes fluctuation
in the output current. Small spikes in the current have
also been observed experimentally [5], which can gener-
ally be attributed to the change in the gap between the
elongated CNT tip and the anode, and also possibly a
dynamic contact of pulled up tip with the anode under
high voltage. As evident from the reported studies [5],
it is important to include various coupled phenomena in
a numerical model, which can then be employed to un-
derstand the effects of various material parameters and
also geometric parameters (e.g. CNT geometry as well
as thin film patterns) on the collective field emission
performance of the thin film device. Another aspect of
interest in this paper is the effect of the angle of orien-
tation of the CNT tips on the collective performance.
A physics based modeling approach has been developed
here to analyze the device-level performance of a CNT
thin film.
2 MODEL FORMULATION
The physics of field emission from metallic surfaces
is fairly well understood. The current density (J) due to
field emission from a metallic surface is usually obtained
by using the Fowler-Nordheim (FN) equation [6]
$$J = \frac{B E^{2}}{\Phi}\,\exp\!\left(-\frac{C\,\Phi^{3/2}}{E}\right), \qquad (1)$$
where E is the electric field, Φ is the work function of
the cathode material, and B and C are constants. How-
Figure 1: SEM image showing randomly oriented tips
of CNTs in a thin film.
E(x,y)
Cathode
Anode
Figure 2: CNT array configuration.
ever, in the case of a CNT thin film acting as cathode,
the surface of the cathode is not smooth (like the metal
emitters) and consists of hollow tubes in curved shapes
and with certain spacings. An added complexity is the
realignment of individual CNTs due to electrodynamic
interaction between the neighbouring CNTs during field
emission. Analysis of these processes requires the de-
termination of the current density by considering the
individual geometry of the CNTs, their dynamic ori-
entations and the variation in the electric field during
electronic transport.
Based on our previously developed model [7], which
describes the degradation of CNTs and the CNT geom-
etry and orientation, the rate of degradation of CNTs is
defined as
$$v_{\mathrm{burn}} = V_{\mathrm{cell}}\,\frac{dn_1(t)}{dt}\,
\frac{\sqrt{s(s-a_1)(s-a_2)(s-a_3)}}{\sqrt{n^2a_1^2 + m^2a_2^2 + nm(a_1^2 + a_2^2 - a_3^2)}}\,, \qquad (2)$$

where Vcell is the representative volume element, n1 is the concentration of carbon
atoms in the cluster form in the cell, a1, a2, a3 are lattice constants,
s = (a1 + a2 + a3), and n and m are integers (n ≥ |m| ≥ 0). The pair
(n,m) defines the chirality of the CNT. Therefore, at
a given time, the length of a CNT can be expressed as
h(t) = h0 − vburn t, where h0 is the initial average height
of the CNTs and d is the distance between the cathode
substrate and the anode (see Fig. 2).
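The linear burn-off law above is straightforward to encode. The sketch below (with a placeholder burn rate, since in the model vburn follows from the cluster kinetics of Eq. (2)) simply clamps the degraded length at zero:

```python
def cnt_height(t, h0=12e-6, v_burn=1.0e-9):
    """Degraded CNT length h(t) = h0 - v_burn * t, clamped at zero.
    v_burn (m/s) is a placeholder value here; in the model it follows
    from the cluster-formation rate dn1/dt in Eq. (2)."""
    return max(0.0, h0 - v_burn * t)
```

With the placeholder rate, the emitter shortens monotonically and is fully consumed after h0 / v_burn seconds.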
The effective electric field component for field emis-
sion calculation in Eq. (1) is expressed as
$$E_z = -e^{-1}\,\frac{d\mathcal{V}(z)}{dz}\,, \qquad (3)$$
where e is the positive electronic charge and V is the
electrostatic potential energy. The total electrostatic
potential energy can be expressed as
$$\mathcal{V}(x,z) = -eV_s - e(V_d - V_s)\,\frac{z}{d} + \sum_{j} G(i,j)\,(\hat n_j - n)\,, \qquad (4)$$
where Vs is the constant source potential (on the sub-
strate side), Vd is the drain potential (on the anode side),
G(i, j) is the Green’s function [8] with i being the ring
position, n̂j denotes the electron density at node po-
sition j on the ring, and (n,m) denotes the chirality
parameter of the CNT. The field emission current (Icell)
from the anode surface associated with the elemental
volume Vcell of the film is obtained as
$$I_{\mathrm{cell}} = A_{\mathrm{cell}} \sum_{j=1}^{N} J_j\,, \qquad (5)$$
where Acell is the anode surface area and N is the num-
ber of CNTs in the volume element. The total current is
obtained by summing the cell-wise current (Icell). This
formulation takes into account the effect of CNT tip ori-
entations, and one can perform statistical analysis of the
device current for randomly distributed and randomly
oriented CNTs. However, because the electrodynamic
forces deform the CNTs, the evolution process requires
a much more detailed treatment from the mechanics
point of view.
Based on the studies reported in published literature,
it is reasonable to expect that a major contribution is by
the Lorentz force due to the flow of electron gas along
the CNT and the ponderomotive force due to electrons
in the oscillatory electric field. The oscillatory electric
field could be due to hopping of the electrons along the
CNT surfaces and the changing relative distances be-
tween two CNT surfaces. In addition, the electrostatic
force and the van der Waals force are also important.
The net force components acting on the CNTs parallel
to the Z and the X directions are calculated as [9]
$$F_z = \int_0^{h} (f_{lz} + f_{vs\,z})\,ds + f_{cz} + f_{pz}\,, \qquad (6)$$
$$F_x = \int_0^{h} (f_{lx} + f_{vs\,x})\,ds + f_{cx} + f_{px}\,. \qquad (7)$$
where fl, fvs, fc and fp are the Lorentz, van der Waals,
Coulomb and ponderomotive forces, respectively, and ds
is the length of a small segment of CNTs. Next, we em-
ploy these force components in the expression of work
done on the ensemble of CNTs and formulate an en-
ergy conservation law. Due to their large aspect ratio,
the CNTs have been idealized as one-dimensional elastic
members (as in Euler-Bernoulli beam). By introducing
the strain energy density, the kinetic energy density and
the work density, and applying the Hamilton principle,
we obtain the governing equations in (ux′ , uz′) for each
CNT, which can be expressed as
$$E'A_2\frac{\partial^4 u_{x'}}{\partial z'^4} + \rho A_0\ddot u_{x'} - \rho A_2\frac{\partial^2 \ddot u_{x'}}{\partial z'^2}
- \pi C_{vs}\frac{\left[(r^{(m+1)})^2 - (r^{(m)})^2\right]}{\Delta x'}\,\theta(z')
- f_{lx'} - f_{cx'} = 0\,, \qquad (8)$$

$$-E'A_0\frac{\partial^2 u_{z'}}{\partial z'^2} + E'A_0\alpha\frac{\partial \Delta T(z')}{\partial z'} + \rho A_0\ddot u_{z'}
- \pi C_{vs}\frac{\left[(r^{(m+1)})^2 - (r^{(m)})^2\right]}{\Delta x'}\,\theta(z')
- f_{lz'} - f_{cz'} = 0\,, \qquad (9)$$
where ux′ and uz′ are the lateral and longitudinal displacements
of the oriented CNTs, E′ is the effective modulus
of elasticity of CNTs, A0 is the effective cross-sectional
area, A2 is the second moment of cross-sectional area
about Z-axis, ∆T (z′) = T (z′)− T0 is the difference be-
tween the absolute temperature (T ) during field emis-
sion and a reference temperature (T0), α is the effective
coefficient of thermal expansion (longitudinal), Cvs is
the van der Waals coefficient, superscript (m) indicates
the mth wall of the MWNT with r(m) as its radius, ∆x′
is the lateral displacement due to pressure and ρ is the
mass per unit length of CNT. We assume fixed bound-
ary conditions (u = 0) at the substrate-CNT interface
(z = 0) and forced boundary conditions at the CNT tip
(z = h(t)).
The governing equation in temperature is obtained
by the thermodynamics of electron-phonon interaction.
By considering the Fourier heat conduction and ther-
mal radiation from the surface of CNT, the energy rate
balance equation in T can be expressed as
$$dQ - dq_F - \pi d_t\,\sigma_{SB}\left(T^4 - T_0^4\right)dz' = 0\,, \qquad (10)$$
where dQ is the heat flux due to Joule heating over a
segment of a CNT, qF is the Fourier heat conduction,
dt is the diameter of the CNT and σSB is the Stefan-
Boltzmann constant. Here, we assume the emissivity
to be unity. At the substrate-CNT interface (z′ = 0),
the boundary condition T = T0 is applied and at the tip
we assign a reported estimate of the power dissipated by
phonons exiting the CNT tip [10] to the conductive flux.
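The radiation term of Eq. (10) is easy to evaluate in isolation. The sketch below uses the tube diameter quoted later in the paper (dt = 3.92 nm) and unit emissivity as stated in the text; the function name and default values are illustrative:

```python
import math

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiative_loss_per_length(T, T0=303.0, d_t=3.92e-9):
    """Thermal radiation term pi * d_t * sigma_SB * (T^4 - T0^4) of Eq. (10),
    per unit CNT length, with emissivity taken as unity as in the text."""
    return math.pi * d_t * SIGMA_SB * (T ** 4 - T0 ** 4)
```

At the nanometre diameters involved, this loss is tiny in absolute terms, which is consistent with the modest (~350 K) tip temperatures reported in the results.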
We first compute the electric field at the nodes and then
solve all the governing equations simultaneously at each
time step, and the curved shape s(x′ + ux′, z′ + uz′) of
each of the CNTs is updated. The angle of orientation θ
between the nodes j+1 and j at the two ends of segment
∆sj is expressed as
$$\theta(t) = \tan^{-1}\!\left[\frac{(x_{j+1} + u_x^{j+1}) - (x_j + u_x^{j})}{(z_{j+1} + u_z^{j+1}) - (z_j + u_z^{j})}\right], \qquad (11)$$

$$\begin{pmatrix} u_x \\ u_z \end{pmatrix} = \left[\Gamma(\theta(t-\Delta t)_j)\right]\begin{pmatrix} u_{x'} \\ u_{z'} \end{pmatrix}, \qquad (12)$$
where Γ is the usual coordinate transformation matrix
which maps the displacements (ux′ , uz′) defined in the
local (X ′, Z ′) coordinate system into the displacements
(ux, uz) defined in the cell coordinate system (X,Z).
For this transformation, we employ the angle θ(t −∆t)
obtained at the previous time step and for each node
j = 1, 2, 3, . . ..
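Equations (11)-(12) amount to computing a node-to-node inclination and rotating local displacements into cell coordinates. A minimal sketch follows; the sign convention of Γ is an assumption, since the text only specifies it as the usual coordinate transformation matrix:

```python
import math

def tip_angle(node_j, node_j1, u_j, u_j1):
    """Orientation angle between deformed nodes j and j+1, as in Eq. (11).
    node_* = (x, z) reference coordinates, u_* = (ux, uz) displacements."""
    dx = (node_j1[0] + u_j1[0]) - (node_j[0] + u_j[0])
    dz = (node_j1[1] + u_j1[1]) - (node_j[1] + u_j[1])
    return math.atan2(dx, dz)          # theta measured from the Z axis

def rotate_to_cell(theta, ux_local, uz_local):
    """Map local (X', Z') displacements to cell (X, Z) coordinates via a
    plane rotation Gamma(theta); the sign convention is an assumption."""
    ux = math.cos(theta) * ux_local + math.sin(theta) * uz_local
    uz = -math.sin(theta) * ux_local + math.cos(theta) * uz_local
    return ux, uz
```

An undeflected vertical segment gives θ = 0, and the rotation preserves displacement magnitudes, as one expects of an orthogonal transformation.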
3 RESULTS AND DISCUSSIONS
The CNT film considered in this study consists of
randomly oriented multiwalled CNTs. The film was
grown on a stainless steel substrate. The film surface
area (projected on anode) is 49.93 mm2 and the aver-
age thickness of the film (based on randomly distributed
CNTs) is 10-14 µm. In the simulation and analysis,
the constants B and C in Eq. (1) were taken as B =
(1.4× 10−6)× exp((9.8929)×Φ−1/2) and C = 6.5× 107,
respectively [11]. It has been reported in the literature
(e.g., [11]) that the work function Φ for CNTs is smaller
than the work functions for metal, silicon, and graphite.
However, there are significant variations in the experimental
values of Φ depending on the type of CNT
(i.e., SWNT/MWNT) and its geometric parameters. The type
of substrate material also has a significant influence on
the electronic band-edge potential. The results reported
in this paper are based on computation with Φ = 2.2eV .
The following sample configuration has been used in this
study: average height of CNTs h0 = 12µm, uniform
diameter dt = 3.92nm and uniform spacing between
neighboring CNTs at the substrate contact region in the
film d1 = 2µm. The initial height distribution h and the
orientation angle θ are randomly distributed. The elec-
trode gap (d) is maintained at 34.7µm. The orientation
of CNTs is parametrized in terms of the upper bound of
the CNT tip deflection (denoted by h0/m′, m′ ≫ 1).
Several computational runs are performed and the out-
put data are averaged out at each sampling time step.
For a constant bias voltage (650V in this case), as the
initial state of deflection of the CNTs increases (from
h0/50 to h0/25), the average current reduces until the
initial state of deflection becomes large enough that the
electrodynamic interaction among CNTs produces sud-
den pull in the deflected tips towards the anode result-
ing in current spikes (see Fig. 3). As mentioned earlier,
0 20 40 60 80 100
Time (s)
Figure 3: Field emission current histories for various
initial average tip deflections and under bias voltage of
650V. The current I is in Ampere unit.
0 20 40 60 80 100
CNT number
t=100
Figure 4: Comparison of tip orientation angles at t=0
and t=100s.
0 20 40 60 80 100
CNT number
Figure 5: Maximum temperature of CNT tips during
100s of field emission.
spikes in the current have also been observed experimen-
tally. Fig. 4 reveals that after experiencing the elec-
trodynamic pull and Coulombic repulsion, some CNTs
reorient themselves. In Fig. 5, maximum tip tempera-
ture distribution over an array of 100 CNTs during field
emission over 100 s duration is plotted. The maximum
temperature rises up to approximately 350 K.
4 CONCLUSION
In this paper, a model has been developed from the
device design point of view, which sheds light on the
coupling issues related to the mechanics, the thermo-
dynamics, and the process of collective field emission
from CNTs in a thin film, rather than a single isolated
CNT. The proposed modeling approach handles several
complexities at the device scale. While the previous
works by the authors mainly dealt with decay, kinemat-
ics and dynamics of CNTs during field emission, this
work includes some more aspects that were assumed
constant earlier. These include: (i) non-local nature of
the electric field, (ii) non-linear relationship between the
electronic transport and the electric field, and (iii) non-
linear relationship between the electronic transport and
the heat conduction. The trend in the simulated results
matches qualitatively well with the results of published
experimental studies.
REFERENCES
[1] A.G. Rinzler, J.H. Hafner, P. Nikolaev, L. Lou, S.G.
Kim, D. Tomanek, D. Colbert and R.E. Smalley,
Science 269, 1550, 1995.
[2] W.A. de Heer, A. Chatelain, and D. Ugrate, Science
270, 1179, 1995.
[3] J.M. Bonard, J.P. Salvetat, T. Stockli, L. Forro and
A. Chatelain, Appl. Phys. A 69, 245, 1999.
[4] Y. Saito and S. Uemura, Carbon 38, 169, 2000.
[5] J.M. Bonard, J.P. Salvetat, T. Stockli, L. Forro and
A. Chatelain, Phys. Rev. B 67, 115406, 2003.
[6] R.H. Fowler and L. Nordheim, Proc. Royal Soc.
London A 119, 173, 1928.
[7] N. Sinha, D. Roy Mahapatra, J.T.W. Yeow, R. Mel-
nik and D. A. Jaffray, Proc. IEEE Nano 2006.
[8] A. Svizhenko, M.P. Anantram and T.R. Govindan,
IEEE Trans. Nanotech. 4, 557, 2005.
[9] N. Sinha, D. Roy Mahapatra, J.T.W. Yeow, R. Mel-
nik and D. A. Jaffray, J. comp. Theor. Nanosci.
(Accepted).
[10] H.-Y. Chiu, V.V. Deshpande, H.W.Ch. Postma,
C.N. Lau, C. Mikó, L. Forró and M. Bockrath,
Phys. Rev. Lett. 95, 226101, 2005.
[11] Z.P. Huang, Y. Tu, D.L. Carnahan and Z.F. Ren,
“Field emission of carbon nanotubes,” Encyclope-
dia of Nanoscience and Nanotechnology (Ed. H.S.
Nalwa) 3, 401-416, 2004.
|
0704.1683 | Spectral averaging for trace compatible operators | SPECTRAL AVERAGING FOR TRACE
COMPATIBLE OPERATORS
N.A.AZAMOV AND F.A. SUKOCHEV
Abstract. In this note the notions of trace compatible operators and infin-
itesimal spectral flow are introduced. We define the spectral shift function
as the integral of infinitesimal spectral flow. It is proved that the spectral
shift function thus defined is absolutely continuous and Krein’s formula is es-
tablished. Some examples of trace compatible affine spaces of operators are
given.
1. Introduction
Let H0 be a self-adjoint operator, and let V be a trace class operator on a
Hilbert space H. Then M.G.Krĕın’s famous result [13] says that there is a unique
L1 -function ξH0+V,H0(λ), known as the Krein spectral shift function, such that for
any C∞c (R) function f
$$\mathrm{Tr}\bigl(f(H_0 + V) - f(H_0)\bigr) = \int_{\mathbb{R}} f'(\lambda)\,\xi_{H_0+V,H_0}(\lambda)\,d\lambda. \qquad (1)$$
The notion of the spectral shift function was discovered by the physicist I.M. Lifshits
[15]. An excellent survey on the spectral shift function can be found in [6].
In 1975, Birman and Solomyak [7] proved the following remarkable formula for
the spectral shift function
$$\xi(\lambda) = \frac{d}{d\lambda}\int_0^1 \mathrm{Tr}\bigl(V E^{H_r}_{(-\infty,\lambda]}\bigr)\,dr\,, \qquad (2)$$
where Hr = H0 + rV, r ∈ R, and EHr(−∞,λ] is the spectral projection. Birman-
Solomyak’s proof relies on double operator integrals. An elementary derivation of
(2) was obtained in [10] (without using double operator integrals).
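Both (1) and (2) can be sanity-checked in a toy finite-dimensional model (a sketch, not part of the paper's infinite-dimensional setting): for commuting diagonal matrices the spectral shift function is an explicit difference of eigenvalue counting functions, and Krein's formula reduces to the fundamental theorem of calculus. Below, f(λ) = λ² and the right side of (1) is computed by midpoint-rule quadrature:

```python
# Toy check of Krein's formula (1): H0 = diag(lam), V = diag(v) with v_i > 0,
# so H0 + V = diag(lam_i + v_i).  The spectral shift function is then the
# difference of eigenvalue counting functions.
lam = [0.0, 1.0, 2.5]
v = [0.4, 0.3, 0.2]

def xi(t):
    """xi_{H0+V,H0}(t) = #{i: lam_i <= t} - #{i: lam_i + v_i <= t}."""
    return (sum(1 for a in lam if a <= t)
            - sum(1 for a, b in zip(lam, v) if a + b <= t))

# Left side of (1) for f(t) = t^2:
lhs = sum((a + b) ** 2 - a ** 2 for a, b in zip(lam, v))

# Right side: midpoint-rule integral of f'(t) * xi(t) = 2t * xi(t)
# over a window containing all eigenvalues.
n, lo, hi = 200000, -1.0, 4.0
dt = (hi - lo) / n
rhs = sum(2.0 * (lo + (k + 0.5) * dt) * xi(lo + (k + 0.5) * dt) * dt
          for k in range(n))
```

The averaging formula (2) can be checked in the same model: Tr(V E^{H_r}) is piecewise constant in r, and its r-average differentiates in λ to exactly the counting-function difference above.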
Actually, this spectral averaging formula was discovered for the first time by
Javrjan [12] in 1971, in case of a Sturm-Liouville operator on a half-line, perturba-
tion being a perturbation of the boundary condition, so that in this case V was
one-dimensional. An important contribution to spectral averaging was made by
A.B.Alexandrov [1]. In 1998, B. Simon [18, Theorem 1] gave a simple short proof
of the Birman-Solomyak formula. He also noticed, that this formula holds for the
wide class of Schrödinger operators on Rn [18, Theorems 3,4]. The connection of
1991 Mathematics Subject Classification. Primary 47A11; Secondary 47A55 .
Key words and phrases. Spectral shift function, spectral averaging, infinitesimal spectral flow,
trace compatible operators, semifinite von Neumann algebra .
this formula with the integral formula for spectral flow from non-commutative ge-
ometry is outlined in [3]. An interesting approach to spectral averaging via Herglotz
functions can be found in [11].
In this note we present an alternative viewpoint to the spectral shift function,
and generalize the result of Simon so that it becomes applicable to a class of Dirac
operators as well.
The new point of view, which the Birman-Solomyak formula suggests, is that
there is a more fundamental notion than that of the spectral shift function. We
call this notion the speed of spectral flow or infinitesimal spectral flow of a self-
adjoint operator H under perturbation by a bounded self-adjoint operator V. It
was introduced in [3] in the case of operators with compact resolvent. It is defined
by formula
$$\Phi_H(V)(\varphi) = \mathrm{Tr}(V\varphi(H)), \quad \varphi \in C_c^\infty(\mathbb{R}), \qquad (3)$$
whenever this definition makes sense. This naturally leads to the notion of trace
compatibility of two operators. We say that operators H and H + V are trace
compatible, if for all ϕ ∈ C∞c (R) the operator V ϕ(H) is trace class. The spectral
shift function between two trace compatible operators is then considered as the
integral of infinitesimal spectral flow. It turns out that the spectral shift function
does not depend on the path connecting the initial and final operators, a fact
which follows from the aforementioned result of B. Simon in the case of trace class
perturbations.
The results of this note are summarized in Theorem 2.9. This theorem extends
formulae (1) and (2) to the setting of trace compatible pairs (H,H + V ) and also
strengthens [18, Theorems 3,4] in the sense that it does not require H to be a
positive operator and maximally weakens conditions on the path H+rV, r ∈ [0, 1].
Our results also hold for a more general setting, when H = H∗ is affiliated with a
semifinite von Neumann algebra N and V = V ∗ ∈ N .
Our investigation here also strengthens the link between the theory of the Krein
spectral shift function and that of spectral flow firstly discovered in [2]. For exposi-
tion of the latter theory we refer to [5] and a detailed discussion of the connection
between the two theories in the situation where the resolvent of H is τ -compact
(here, τ is an arbitrary faithful normal semifinite trace on N ) is contained in [3].
It should be pointed out here that the idea of viewing the spectral shift function
as the integral of infinitesimal spectral flow is akin to I.M. Singer’s ICM-1974 pro-
posal to define the η invariant (and hence spectral flow) as the integral of a one
form. Very general formulae of that type have been produced in the framework
of noncommutative geometry (see [5] and references therein). We believe that our
present approach will have applications to noncommutative geometry, in particular,
it may be useful in avoiding "summability constraints" on H customarily used in
that theory.
In semifinite von Neumann algebras N Krein’s formula (1) was proved for the
first time in [9] in case of a bounded self-adjoint operator H ∈ N and a trace class
perturbation V = V ∗ ∈ L1(N , τ) and in [4] for self-adjoint operators H affiliated
with N .
SPECTRAL AVERAGING 3
An additional reason to call ΦH(V ) the speed of spectral flow is the follow-
ing observation. Let H be the operator of multiplication by λ on L2(R, dρ(λ))
with some measure ρ and let the perturbation V be an integral operator with a
sufficiently regular (for example C1 ) kernel k(λ′, λ). Then for any test function
ϕ ∈ C∞c (R)
ΦH(V)(ϕ) = Tr(V ϕ(H)) = ∫ k(λ, λ) ϕ(λ) dρ(λ).
Hence, the infinitesimal spectral flow of H under perturbation by V is the measure
on the spectrum of H with density k(λ, λ) dρ(λ). We note that this agrees with
the classical formula [14, (38.6)]
E_n^{(1)} = V_{nn}
from formal perturbation theory. Here E_n^{(0)} is the n-th eigenvalue of the unperturbed
operator H0, E_n^{(j)}, j = 1, 2, . . . , is the j-th correction term for the n-th
eigenvalue En of the perturbed operator H = H0 + V in the formal perturbation
series En = E_n^{(0)} + E_n^{(1)} + E_n^{(2)} + . . . , and V_{mn} = ⟨ψ_m^{(0)}|V|ψ_n^{(0)}⟩ is the matrix
element of the perturbation operator V with respect to the eigenfunctions ψ_m^{(0)}
and ψ_n^{(0)} of the unperturbed operator H0 [14].
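The agreement with first-order perturbation theory noted above is easy to reproduce numerically in finite dimensions (our own sketch; the particular matrices are arbitrary): for a non-degenerate unperturbed spectrum, the leading shift of the n-th eigenvalue of H0 + εV is ε V_nn.

```python
import numpy as np

rng = np.random.default_rng(3)
H0 = np.diag([0.0, 1.0, 3.0, 6.0])            # non-degenerate unperturbed spectrum
B = rng.standard_normal((4, 4))
V = (B + B.T) / 2                              # self-adjoint perturbation

eps = 1e-6
E_pert = np.linalg.eigvalsh(H0 + eps * V)      # sorted; matches the order of diag(H0)
first_order = (E_pert - np.diag(H0)) / eps     # (E_n - E_n^{(0)}) / eps  ->  V_nn
print(np.allclose(first_order, np.diag(V), atol=1e-4))  # True
```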
Acknowledgement. We thank Alan Carey for useful comments and criticism.
2. Results
Let N be a von Neumann algebra on a Hilbert space H with faithful normal
semifinite trace τ. Let A = H0 + A0 be an affine space of self-adjoint operators
affiliated with N , where H0 is a self-adjoint operator affiliated with N and A0
is a vector subspace of the real Banach space of all self-adjoint operators from N .
We say that A is trace compatible, if for all ϕ ∈ C∞c (R), V ∈ A0 and H ∈ A
(4)    V ϕ(H) ∈ L1(N , τ),
where L1(N , τ) is the ideal of trace class operators from N , and if A0 is endowed
with a locally convex topology which coincides with or is stronger than the uniform
topology, such that the map (V1, V2) ∈ A0 × A0 ↦ V1ϕ(H0 + V2) is L1 continuous for
all H0 ∈ A and ϕ ∈ C∞c (R). In particular, A is a locally convex affine space.
The ideal property of L1(N , τ) and [16, Theorem VIII.20(a)] imply that, in the
definition of trace compatibility, the condition ϕ ∈ C∞c (R) may be replaced by
ϕ ∈ Cc(R). It follows from the definition of the topology on A0 that H ∈ A ↦ eitH
is norm continuous.
If A = H0+A0 is a trace compatible affine space then we define a (generalized)
one-form (on A ) of infinitesimal spectral flow or speed of spectral flow by the
formula
(5)    ΦH(V) = τ(V δ(H)),   H ∈ A, V ∈ A0,
where δ is Dirac’s delta function. The last formula is to be understood in a
generalized function sense, i.e. ΦH(V )(ϕ) = τ(V ϕ(H)), ϕ ∈ C∞c (R). Φ is a
generalized function, since if ϕn → 0 in C∞c (R) such that supp(ϕn) ⊆ ∆ then
|τ(V ϕn(H))| ≤ ‖V E^H_∆‖_1 ‖ϕn(H)‖ → 0. Here ‖A‖_1 = τ(|A|).
4 N.A.AZAMOV AND F.A. SUKOCHEV
Since ϕ can be taken from Cc(R), for each V ∈ A0 the infinitesimal spectral
flow ΦH(V ) is actually a measure on the spectrum of H.
By a smooth path {Hr}r∈R in A, we mean a differentiable path, such that its
derivative dHr
∈ A0 is continuous.
Let Π = {(s0, s1) ∈ R² : s0s1 > 0, |s1| ≤ |s0|}, and let
dνf(s0, s1) = sgn(s0) f̂(s0) ds0 ds1.
If f ∈ C2c (R) then (Π, νf ) is a finite measure space [2]. For any H0, H1 ∈ A, any
X ∈ A0 and any non-negative f ∈ C∞c (R) set by definition
(6)    T^{H1,H0}_{f[1]}(X) = ∫_Π ( e^{i(s0−s1)H1} √f(H1) X e^{is1H0}
           + e^{i(s0−s1)H1} X √f(H0) e^{is1H0} ) dν_{√f}(s0, s1),
where the integral is taken in the so∗ -topology. For justification of this notation
and details see [3].
Lemma 2.1. If {Hr} ⊂ A is a path, continuous (smooth) in the topology of A0,
and if f ∈ C2c (R) then
r 7→ f(Hr)− f(H0)
takes values in L1(N , τ) and it is L1(N , τ) continuous (smooth).
Proof. We can assume that f is non-negative and that √f ∈ C2c(R). It is proved
in [3] that
(7)    f(Hr) − f(H0) = T^{Hr,H0}_{f[1]}(Hr − H0).
Since e^{i(s0−s1)x}√f(x), e^{is1x}√f(x) ∈ C2c(R), trace compatibility implies that the
integrand of the right hand side of (6) takes values in L1 and is L1-continuous
(smooth), so the dominated convergence theorem completes the proof. □
If Γ = {Hr}r∈[0,1] is a smooth path in A, then we define the spectral shift
function ξ along this path as the integral of infinitesimal spectral flow ξ = ∫_Γ Φ, i.e.
(8)    ξ(ϕ) = ∫_0^1 τ( Ḣr ϕ(Hr) ) dr,   ϕ ∈ C∞c(R).
Now we prove that the spectral shift function is well-defined in the sense that it
does not depend on the path of integration.
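A finite-dimensional sanity check of this path independence (our own illustration, not from the paper): for matrices, Tr(Ḣr ϕ(Hr)) = (d/dr) Tr F(Hr) for a primitive F of ϕ, so the integral depends only on the endpoints. Numerically, two paths with the same endpoints give the same value:

```python
import numpy as np

def apply_fn(H, f):
    """f(H) for a symmetric matrix H, via the spectral theorem."""
    lam, U = np.linalg.eigh(H)
    return U @ np.diag(f(lam)) @ U.conj().T

def xi(path, dpath, phi, n=2000):
    """xi(phi) = int_0^1 Tr(H'_r phi(H_r)) dr along a smooth path (midpoint rule)."""
    rs = (np.arange(n) + 0.5) / n
    return sum(np.trace(dpath(r) @ apply_fn(path(r), phi)).real for r in rs) / n

rng = np.random.default_rng(1)
sym = lambda M: (M + M.T) / 2
H0, V, W = (sym(rng.standard_normal((4, 4))) for _ in range(3))
phi = lambda x: np.exp(-x ** 2)

# Path 1: straight segment from H0 to H0 + V.  Path 2: same endpoints, bent by W.
xi1 = xi(lambda r: H0 + r * V,                   lambda r: V,                   phi)
xi2 = xi(lambda r: H0 + r * V + r*(1-r)*W,       lambda r: V + (1 - 2*r)*W,     phi)
print(abs(xi1 - xi2) < 1e-4)  # True
```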
A one-form αH(V ) on an affine space A is called exact if there exists a zero-
form θH on A such that dθ = α, i.e.
αH(V) = (d/dr) θ_{H+rV} |_{r=0}.
We say that the generalized one-form Φ is exact if Φ(ϕ) is an exact form for any
ϕ ∈ C∞c (R).
The proof of the following proposition follows the lines of the proof of [3, Propo-
sition 3.5].
Proposition 2.2. The infinitesimal spectral flow Φ is exact.
Proof. Let V ∈ A0, Hr = H0 + rV, r ∈ [0, 1], and let f ∈ C∞c (R). By (7)
(9)    f(Hr) − f(H0) = T^{Hr,H0}_{f[1]}(rV)
    = ∫_Π ( e^{i(s0−s1)Hr} √f(Hr) rV e^{is1H0} + e^{i(s0−s1)Hr} rV √f(H0) e^{is1H0} ) dν_{√f}(s0, s1)
    = ∫_Π ( e^{i(s0−s1)H0} √f(H0) rV e^{is1H0} + e^{i(s0−s1)H0} rV √f(H0) e^{is1H0} ) dν_{√f}(s0, s1)
      + ∫_Π ( (e^{i(s0−s1)Hr} √f(Hr) − e^{i(s0−s1)H0} √f(H0)) rV e^{is1H0}
      + (e^{i(s0−s1)Hr} − e^{i(s0−s1)H0}) rV √f(H0) e^{is1H0} ) dν_{√f}(s0, s1)
    = T^{H0,H0}_{f[1]}(rV) + R1 + R2.
All three summands here are trace class by the trace compatibility assumption. So,
for any S ∈ N
τ(S(f(Hr) − f(H0))) = rτ( S T^{H0,H0}_{f[1]}(V) ) + τ(SR1) + τ(SR2).
Now, Duhamel’s formula and (7) show that τ(SR1) = o(r) and τ(SR2) = o(r).
Hence,
(10)    (d/dr) τ(S(f(Hr) − f(H0))) = τ( S T^{Hr,Hr}_{f[1]}(V) ).
This implies that for any S ∈ N
τ(S(f(H1) − f(H0))) = ∫_0^1 τ( S T^{Hr,Hr}_{f[1]}(V) ) dr.
Now let H0 ∈ A be a fixed operator and for any f ∈ C∞c (R) let
θH = ∫_0^1 τ(V f(Hr)) dr,
where Hr = H0 + rV, H = H1. We are going to show that dθH(X) = ΦH(X)(f)
for any X ∈ A0.
Following the proof of [3, Proposition 3.5] we have
(A) := dθH(X) = lim_{s→0} ∫_0^1 τ( X f(Hr + srX) ) dr
    + lim_{s→0} (1/s) ∫_0^1 τ( V (f(Hr + srX) − f(Hr)) ) dr.
By definition of the A0 topology the integrand of the first summand is continuous with
respect to r and s. So, the first summand is equal to
∫_0^1 τ( X f(Hr) ) dr.
6 N.A.AZAMOV AND F.A. SUKOCHEV
By [2, Theorem 5.3] the second summand is equal to
lim_{s→0} (1/s) ∫_0^1 τ( V T^{Hr+srX,Hr}_{f[1]}(srX) ) dr
    = lim_{s→0} ∫_0^1 τ( V T^{Hr+srX,Hr}_{f[1]}(X) ) r dr
    = ∫_0^1 τ( V T^{Hr,Hr}_{f[1]}(X) ) r dr,
where the second equality follows from the definition of the A0-topology and the last
equality follows from [3, Lemma 3.2]. Now, using (10) and integrating by parts we get
(11)    (A) = τ( X f(H1) − X f(H0) ) + ∫_0^1 ( τ(X f(Hr)) − τ(X [f(Hr) − f(H0)]) ) dr
    = τ( X f(H1) ). □
The argument before [8, Proposition 1.5] now implies
Corollary 2.3. The spectral shift function given by (8) is well-defined.
Proposition 2.4. If r ∈ R ↦ Hr ∈ A is smooth then the equality
(12)    τ( (d/dr) f(Hr) · ϕ(Hr) ) = τ( Ḣr f′(Hr) ϕ(Hr) )
holds for any f ∈ C2c(R) and any bounded measurable function ϕ.
Proof. Without loss of generality, we can assume that f ≥ 0 and √f ∈ C2c(R). We
prove the above equality at r = 0. The formula (7) and the dominated convergence
theorem imply that
τ( (d/dr) f(Hr)|_{r=0} ϕ(H0) ) = lim_{r→0} τ( ϕ(H0) (f(Hr) − f(H0))/r )
    = lim_{r→0} τ( ϕ(H0) ∫_Π ( e^{i(s0−s1)Hr} √f(Hr) ((Hr − H0)/r) e^{is1H0}
      + e^{i(s0−s1)Hr} ((Hr − H0)/r) √f(H0) e^{is1H0} ) dν_{√f}(s0, s1) ),
where the limit is taken in L1(N , τ). By the A0-smoothness of {Hr}, we have
τ( (d/dr) f(Hr)|_{r=0} ϕ(H0) ) = τ( ϕ(H0) ∫_Π ( e^{i(s0−s1)H0} √f(H0) Ḣ0 e^{is1H0}
      + e^{i(s0−s1)H0} Ḣ0 √f(H0) e^{is1H0} ) dν_{√f}(s0, s1) ),
so that by [2, Lemmas 3.7, 3.10] and letting A = Ḣ0,
τ( (d/dr) f(Hr)|_{r=0} ϕ(H0) ) = ∫_Π τ( ϕ(H0) e^{is0H0} √f(H0) A + A ϕ(H0) √f(H0) e^{is0H0} ) dν_{√f}(s0, s1)
    = 2 τ( A ϕ(H0) √f(H0) (√f)′(H0) ) = τ( A ϕ(H0) f′(H0) ). □
Proposition 2.5. The spectral shift function given by (8) satisfies Krein’s formula,
i.e. for any f ∈ C∞c , H0, H1 ∈ A
τ (f(H1)− f(H0)) = ξ(f ′).
Proof. Taking the integral of (12) with ϕ = 1 we have
∫_0^1 τ( (d/dr)(f(Hr) − f(H0)) ) dr = ∫_0^1 τ( Ḣr f′(Hr) ) dr.
The right hand side is ξ(f ′) by definition. It follows from Lemma 2.1 that one can
interchange the trace and the derivative in the left hand side. □
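For matrices, Krein's formula reduces to the fundamental theorem of calculus for r ↦ Tr f(H0 + rV), which the following sketch (our own, with arbitrary test data) confirms numerically:

```python
import numpy as np

def apply_fn(H, f):
    """f(H) for a symmetric matrix H, via the spectral theorem."""
    lam, U = np.linalg.eigh(H)
    return U @ np.diag(f(lam)) @ U.conj().T

rng = np.random.default_rng(2)
sym = lambda M: (M + M.T) / 2
H0 = sym(rng.standard_normal((4, 4)))
V = sym(rng.standard_normal((4, 4)))
f, df = np.sin, np.cos

# xi(f') along the straight path H_r = H0 + rV, by the midpoint rule:
n = 4000
rs = (np.arange(n) + 0.5) / n
xi_df = sum(np.trace(V @ apply_fn(H0 + r * V, df)).real for r in rs) / n

krein = np.trace(apply_fn(H0 + V, f) - apply_fn(H0, f)).real  # tau(f(H1) - f(H0))
print(abs(krein - xi_df) < 1e-4)  # True
```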
Corollary 2.6. In case of trace class perturbations, the spectral shift function ξ
defined by (8) coincides with classical definition, given by [4, Theorem 3.1].
Proof. This follows from Theorem 6.3 and Corollary 6.4 of [2]. □
For trace class perturbations the absolute continuity of the spectral shift function
is established in [13] (see also [11]). For the general semifinite case we refer to [4, 2].
Lemma 2.7. Let A be a trace compatible affine space and let f ∈ C∞c (R). Let
H0, H1 ∈ A, let ξ and ξf be the spectral shift distributions of the pairs (H0, H1)
and f(H0), f(H1) respectively. Then for any ϕ ∈ C∞c (R)
ξf (ϕ) = ξ(ϕ ◦ f · f ′).(13)
Proof. By Proposition 2.4 for any f, ϕ ∈ C∞c (R)
τ( (d/dr) f(Hr) · ϕ(f(Hr)) ) = τ( Ḣr f′(Hr) ϕ(f(Hr)) ) = τ( Ḣr (F ◦ f)′(Hr) ),
where F′ = ϕ. Hence, for any smooth path Γ = {Hr}r∈[0,1] ⊆ A
(14)    ∫_0^1 τ( (d/dr) f(Hr) · ϕ(f(Hr)) ) dr = ∫_0^1 τ( Ḣr (F ◦ f)′(Hr) ) dr,
which is (13). □
Proposition 2.8. Let A = H0 + A0 be a trace compatible affine space and let
A0 be such that for any V ∈ A0 there exist positive V1, V2 ∈ A0 such that V =
V1 − V2. Then the spectral shift function ξ of any pair H, H + V ∈ A is absolutely
continuous.
Proof. Since the map (V1, V2) ↦ τ(V1ϕ(H + V2)) is L1(N , τ)-continuous (by
definition), it follows that the infinitesimal spectral flow is a uniformly locally finite
measure with respect to the path parameter. Hence, the spectral shift function is
also a locally finite measure being the integral of locally finite measures, which are
uniformly bounded on every segment.
If, for H0, H1, H2 ∈ A, the spectral shift functions from H0 to H1 and from H1
to H2 are absolutely continuous, then evidently the spectral shift function from H0
to H2 is also absolutely continuous. Hence, if V = V1 − V2 with 0 ≤ V1, V2 ∈ A0,
then representing the spectral shift function from H to H + V as the sum of
spectral shift function from H to H + V1 and from H + V1 to H + V, we see
that we can assume that the perturbation V is positive.
By Lemma 2.1 and [4, Theorem 3.1] the spectral shift function ξf of the pair
(f(H), f(H + V )) is absolutely continuous. Let us suppose that the spectral shift
function ξ of the pair (H,H + V ) has non-absolutely continuous part µ.
Without loss of generality, we can assume that there exists a set of Lebesgue
measure zero E ⊂ (ε, 1 − ε) such that µ(E) > 0. For any a, b ∈ R with b − a > 2
let us consider a "cap" function fa,b, i.e. a smooth function which is zero on
(−∞, a) and (b, ∞), equal to 1 on (a + 1, b − 1), and whose derivative on (a + ε, a + 1 − ε)
and (b − 1 + ε, b − ε) is 1 and −1 respectively.
Let U be an open set of Lebesgue measure < δ such that E ⊂ U and let ϕ
be a smoothed indicator of U. Then (13), applied to functions ϕ and f0,b and to
functions ϕ and fa,b (with big enough b ) implies that µ(E) = µ(a+ E), i.e. µ
is translation invariant. Since it is also locally finite, it is a multiple of Lebesgue
measure. This yields a contradiction. □
We summarize the results in the following theorem.
Theorem 2.9. Let A be a trace compatible affine space of operators in a semifinite
von Neumann algebra N with a normal semifinite faithful trace τ. Let H and H+
V be two operators from A. Let the spectral shift (generalized) function ξH,H+V
be defined as the integral of infinitesimal spectral flow by the formula
ξH,H+V(ϕ) = ∫_Γ Φ(ϕ) = ∫_0^1 ΦHr(Ḣr)(ϕ) dr,   ϕ ∈ C∞c,
where Γ = {Hr}r∈[0,1] is any piecewise smooth path in A connecting H and
H+V. Then the spectral shift function is well-defined in the sense that the integral
does not depend on the choice of the piecewise smooth path Γ connecting H and
H + V, and it satisfies Krein’s formula
τ(f(H + V )− f(H)) = ξ(f ′), f ∈ C∞c .
Moreover, if for any V ∈ A0 there exist V1, V2 ∈ A0 such that V = V1 −V2, then
ξH,H+V is an absolutely continuous measure.
Two extreme examples of trace compatible affine spaces are H0 + L1sa(N , τ),
H0 = H0^* η N , with the topology induced by L1(N , τ) [2, 4], and D0 + Nsa,
where (D0 − i)−1 is τ -compact, with the topology induced by operator norm
[3]. In particular, the space −∆+C(M), where (M, g) is a compact Riemannian
manifold, and ∆ is the Laplacian, is trace compatible.
As an example of an intermediate trace compatible affine space one can consider
Schrödinger operators −∆+ Cc(Rn) with the inductive topology of uniform con-
vergence. It is proved in [19, Section B9] that for this example the condition (4)
holds. It also follows from [19, Section B9] that ‖gf(H)‖1 ≤ C ‖g‖2 , where C
depends only on f, on the support of g and on ‖V−‖∞ , where V− is the negative
part of V, H = −∆+V. So, the condition on the topology of A0 is fulfilled by (7).
Another example is given by Dirac operators of the form D + A0, where
D = Σ_{j=1}^n αj ∂/∂xj, α1, . . . , αn are m×m-matrices such that αjαk + αkαj = −2δjk, and
A0 = { a = a∗ ∈ Cc(R^n, Mm(R)) : ∃ ϕ = ϕ∗ ∈ C1(R^n), iDϕ = a },
with the inductive topology of uniform convergence. A proof that the space D+A0
is trace compatible can be reduced to [17, Theorem 4.5] via the gauge transformation
ψ ↦ e^{−iϕ(x)}ψ. We have
(D + a)(e^{−iϕ(x)}u) = Σ_{j=1}^n ( −i e^{−iϕ(x)} (∂ϕ(x)/∂xj) αj u + e^{−iϕ(x)} αj (∂u/∂xj) ) + a e^{−iϕ(x)}u,
where u is an m-column of C∞c-functions. So, if iDϕ = a then
e^{iϕ(x)}(D + a)(e^{−iϕ(x)}u) = Du.
Hence, (D + a)² = e^{−iϕ(x)} D² e^{iϕ(x)}. This shows that g f((D + a)²), g, a ∈ A0,
f ∈ C∞c(R), is trace class iff g f(e^{−iϕ(x)} D² e^{iϕ(x)}) = e^{−iϕ(x)} g f(D²) e^{iϕ(x)} is
trace class. But the last operator is unitarily equivalent to g f(D²), which is trace
class by [17, Theorem 4.5]. So, if f ≥ 0 then by the same theorem
(15)    ‖g f(D + a)‖1 ≤ C ‖g‖∞ ‖f‖∞,
where C depends on the supports of g and f. Hence, for g, g1, a, a1 ∈ A0, we have
‖g f(D + a) − g1 f(D + a1)‖1 ≤ ‖(g − g1) f(D + a)‖1 + ‖g1 (f(D + a) − f(D + a1))‖1.
So, the condition on the topology of A0 is fulfilled by (7) and (15).
In case n = m = 1, we have D + A0 = (1/i) d/dx + Cc(R).
References
[1] A.B.Alexandrov, The multiplicity of the boundary values of inner functions, Sov. J.
Comtemp. Math. Anal. 22 5 (1987), 74–87.
[2] N.A.Azamov, A. L.Carey, P.G.Dodds, F.A. Sukochev, Operator integrals, spectral shift and
spectral flow, to appear in Canad. J. Math, arXiv:math/0703442.
[3] N.A.Azamov, A. L.Carey, F.A. Sukochev, The spectral shift function and spectral flow, to
appear in Comm. Math. Phys., arXiv: 0705.1392.
[4] N.A.Azamov, P.G.Dodds, F. A. Sukochev, The Krein spectral shift function in semifinite
von Neumann algebras, Integral Equations Operator Theory 55 (2006), 347–362.
[5] M-T.Benameur, A. L.Carey, J. Phillips, A.Rennie, F. A. Sukochev, K.P.Wojciechowski, An
analytic approach to spectral flow in von Neumann algebras, Analysis, geometry and topology
of elliptic operators, 297–352, World Sci. Publ., Hackensack, NJ, 2006.
[6] M. Sh. Birman, A.B. Pushnitski, Spectral shift function, amazing and multifaceted. Dedicated
to the memory of Mark Grigorievich Krein (1907–1989), Integral Equations Operator Theory
30 (1998), 191–199.
[7] M. Sh. Birman, M.Z. Solomyak, Remarks on the spectral shift function, J. Soviet math. 3
(1975), 408–419.
[8] A. L. Carey, J. Phillips, Unbounded Fredholm modules and spectral flow, Canad. J. Math. 50
(1998), 673–718.
[9] R.W.Carey, J. D.Pincus, Mosaics, principal functions, and mean motion in von Neumann
algebras, Acta Math. 138 (1977), 153–218.
[10] F.Gesztesy, K.A.Makarov, A.K.Motovilov, Monotonicity and concavity properties of the
spectral shift function, Stochastic processes, physics and geometry: new interplays, II (Leipzig,
1999), CMS Conf. Proc., 29, Amer. Math. Soc., Providence, RI, 2000, 207–222.
[11] F.Gesztesy, K.A.Makarov, SL2(R), exponential Herglotz representations, and spectral aver-
aging, Algebra i Analiz 15 (2003), 393–418.
[12] V.A. Javrjan, A certain inverse problem for Sturm-Liouville operators, Izv. Akad. Nauk
Armjan. SSR Ser. Mat. 6 (1971), 246–251.
[13] M.G.Krĕın, On the trace formula in perturbation theory, Mat. Sb., 33 75 (1953), 597–626.
[14] L.D. Landau, E.M.Lifshitz, Quantum mechanics, 3rd edition, Pergamon Press.
[15] I.M. Lifshits, On a problem in perturbation theory, Uspekhi Mat. Nauk 7 (1952), 171-180
(Russian).
[16] M.Reed, B. Simon, Methods of modern mathematical physics: 1. Functional analysis, Aca-
demic Press, New York, 1972.
[17] B. Simon, Trace ideals and their applications, London Math. Society Lecture Note Series, 35,
Cambridge University Press, Cambridge, London, 1979.
[18] B. Simon, Spectral averaging and the Krein spectral shift, Proc. Amer. Math. Soc. 126 (1998),
1409–1413.
[19] B. Simon, Schrödinger semigroups, Bull. Amer. Math. Soc. 7 (1982), 447–526.
School of Informatics and Engineering, Flinders University of South Australia,
Bedford Park, 5042, SA Australia.
E-mail address: [email protected], [email protected]
|
0704.1684 | The molecular environment of massive star forming cores associated with Class II methanol maser emission |
Astrophysical masers and their environments
Proceedings IAU Symposium No. 242, 2007
A.C. Editor, B.D. Editor & C.E. Editor, eds.
© 2007 International Astronomical Union
DOI: 00.0000/X000000000000000X
The molecular environment of massive star
forming cores associated with Class II
methanol maser emission
S. N. Longmore1,2†, M. G. Burton1, P. J. Barnes3, T. Wong1,2,5,
C. R. Purcell1,4, J. Ott2,6
1School of Physics, University of New South Wales, Kensington, NSW 2052, Sydney, Australia
2Australia Telescope National Facility, CSIRO, PO Box 76, Epping, NSW 1710, Australia
3School of Physics A28, University of Sydney, NSW 2006, Australia
4University of Manchester, Jodrell Bank Observatory, Macclesfield, Cheshire SK11 9DL, UK
5Department of Astronomy, University of Illinois, Urbana IL 61801, USA
6National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA
Abstract. Methanol maser emission has proven to be an excellent signpost of regions under-
going massive star formation (MSF). To investigate their role as an evolutionary tracer, we
have recently completed a large observing program with the ATCA to derive the dynamical
and physical properties of molecular/ionised gas towards a sample of MSF regions traced by
6.7GHz methanol maser emission. We find that the molecular gas in many of these regions
breaks up into multiple sub-clumps which we separate into groups based on their association
with/without methanol maser and cm continuum emission. The temperature and dynamic state
of the molecular gas is markedly different between the groups. Based on these differences, we
attempt to assess the evolutionary state of the cores in the groups and thus investigate the role
of class II methanol masers as a tracer of MSF.
Keywords. stars: formation, ISM: evolution, ISM: molecules, line: profiles, masers, molecular
data, stars: early-type, radio continuum: stars
1. Introduction
In terms of luminosity, energetics and chemical enrichment, massive stars exert a dis-
proportionate influence compared to their number on Galactic evolution. However, the
collective effects of their rarity, short formation timescales and heavy obscuration due to
dust, make it difficult to find large samples of massive young sources at well constrained
evolutionary stages needed to develop an understanding of their formation mechanism.
The 5_1 → 6_0 A^+ class II methanol (CH3OH) maser transition at 6.7GHz is one of the
most readily observable signposts of MSF [Menten 1991].
The specific conditions required for the masers to exist makes them a powerful probe
of the region’s evolutionary stage. While masers probe spatial scales much smaller than
their natal cores, the numerous feedback processes from newly formed stars [e.g. turbu-
lence injection from jets/outflows, ionisation (stars M>8M⊙) and heating from radiation
etc.] must significantly alter the physical conditions of the surrounding region. We have
completed an observing program with the Australia Telescope Compact Array (ATCA)
to derive properties of molecular and ionised gas towards MSF regions traced by 6.7GHz
methanol maser emission [Longmore et al. (2007)]. In this contribution, we use these re-
† E-mail:[email protected]
120 S. N. Longmore et al.
sults to investigate the use of class II methanol masers as a diagnostic of the evolutionary
stage of MSF.
2. Observations
Observations of NH3(1,1), (2,2), (4,4) & (5,5) and 24GHz continuum were carried out
using the ATCA towards 21 MSF regions traced by 6.7GHz methanol maser emission
[selected from a similar sample to Hill et al. (this volume)]. The H168 [NH3(1,1)/(2,2)]
and H214 [NH3(4,4)/(5,5)] antenna configurations with both East-West and North-South
baselines, were used to allow for snapshot imaging. Primary and characteristic synthesised
beam sizes were ∼2.2′ and ∼8 − 11′′ respectively. Each source was observed for 4×15
minute cuts in each transition separated over 8 hours to ensure the best possible sampling
of the uv-plane. The data were reduced using the MIRIAD (see Sault et al. 1995) package.
Characteristic spectra were extracted at every transition for each core at the position of
the peak NH3(1,1) emission, baseline subtracted and fit using the gauss and nh3(1,1)
methods in CLASS (see http://www.iram.fr/IRAMFR/GILDAS/). Continuum source
fluxes and angular sizes were calculated in both the image domain and directly from the
uv data.
3. Core separation
NH3 detections within each region were separated into individual cores if offset by
more than a synthesised beam width spatially, or more than the FWHM in velocity if
at the same sky position. The same criteria were used to determine whether the NH3,
continuum and methanol maser emission in each of the regions were associated. In all
but three cases, these criteria were sufficient to both unambiguously separate cores and
determine their association with continuum and maser emission. We find 41 NH3(1,1)
cores (of which 3 are in absorption and 2 are separated in velocity) and 14 24GHz
continuum cores. Observationally the cores fall in to 4 groups: NH3 only (Group 1);
NH3 + methanol maser (Group 2); NH3 + methanol maser + 24GHz continuum (Group
3); NH3 + 24GHz continuum (Group 4). The cores were distributed with 16, 16, 6 and 2
cores in Groups 1 to 4, respectively. Based on this grouping, most of the NH3(1,1) cores
are coincident with methanol maser emission (Groups 2 & 3), but there are a substantial
fraction of NH3 cores with neither 24GHz continuum nor maser emission (Group 1).
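The separation and grouping rules of this section can be summarised as two small decision functions (an illustrative sketch of our own, not the survey pipeline):

```python
def classify_core(has_maser, has_continuum):
    """Assign a core to Groups 1-4 by which tracers accompany its NH3 emission."""
    if has_maser and has_continuum:
        return 3   # NH3 + methanol maser + 24 GHz continuum
    if has_maser:
        return 2   # NH3 + methanol maser
    if has_continuum:
        return 4   # NH3 + 24 GHz continuum
    return 1       # NH3 only

def same_core(offset, beam, dv, fwhm):
    """Two detections belong to one core if within a synthesised beam width on the
    sky and, at the same position, within the line FWHM in velocity."""
    return offset <= beam and abs(dv) <= fwhm

print(classify_core(has_maser=True, has_continuum=False))  # 2
```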
Having separated the cores into these groups, we then considered observational biases
and selection effects which may affect their distribution. The biggest potential hindrance
was the difference in linear resolution and sensitivity caused by the factor of ∼5 varia-
tion in distance to the regions. Despite this, the NH3, continuum and methanol maser
observations have the same sensitivity limit towards all the regions: therefore, the rela-
tive flux densities of these tracers in a given region are directly comparable. In addition,
no correlation was found between a region’s distance and the number of cores toward
the region or their association with the different tracers. From this we conclude the dis-
tance variation does not affect the distribution of cores into separate groups. However,
it should be remembered that any conclusions drawn about the cores are limited by the
observational parameters used to define the groups.
4. Deriving physical properties
Properties of the molecular gas in each of the cores were derived from the NH3 obser-
vations. The core size was calculated from the extent of the integrated NH3(1,1) emission
The molecular environment associated with Class II methanol masers 121
after deconvolving the synthesised beam response. The dynamical state of the molecular
gas was derived from the line profiles of the high spectral resolution (0.197 km s−1)
NH3(1,1) observations after deconvolving the instrumental response. Preliminary gas
kinetic temperatures were calculated by fitting the measured column densities of the
multiple NH3 transitions to the LVG models described in Ott et al. (2005). Finally,
properties of the ionised gas were derived from the 24GHz continuum emission follow-
ing Mezger & Henderson (1967), assuming it was spherically symmetric and optically
thin at an electron temperature of 10^4 K.
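For Gaussian profiles, "deconvolving" the beam or the instrumental response amounts to subtraction in quadrature. A minimal sketch (our own; the numbers are only representative of the beam and channel widths quoted above):

```python
import math

def deconvolve_gaussian(measured, response):
    """Gaussian deconvolution in quadrature: true^2 = measured^2 - response^2.
    Returns None when the source is unresolved (measured <= response)."""
    if measured <= response:
        return None
    return math.sqrt(measured ** 2 - response ** 2)

# An 11.0" emission extent observed with an 8.0" synthesised beam:
print(round(deconvolve_gaussian(11.0, 8.0), 2))    # 7.55
# A 1.43 km/s linewidth is barely affected by the 0.197 km/s spectral resolution:
print(round(deconvolve_gaussian(1.43, 0.197), 3))  # 1.416
```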
5. Results
In general the core properties are comparable to those derived from similar surveys
towards young MSF regions. Below we outline differences, in particular between the cores
in the different groups described in §3.
5.1. Molecular Gas
The measured NH3(1,1) linewidth varies significantly between the groups, increasing
through 1.43, 2.43 and 3.00 km s−1 for Groups 1 to 3 respectively. This shows the NH3-only cores are
more quiescent than those with methanol maser emission.
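For comparison, a purely thermal NH3 line (the dashed locus in Figure 1) is much narrower than any of these group averages, so the measured widths are dominated by non-thermal motions. A back-of-envelope check (our own sketch):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
AMU = 1.66053907e-27   # atomic mass unit, kg
M_NH3 = 17.0 * AMU     # NH3 molecular mass

def thermal_fwhm_kms(T, mass=M_NH3):
    """FWHM of a purely thermal (Maxwellian) line, sqrt(8 ln2 kT/m), in km/s."""
    return math.sqrt(8.0 * math.log(2.0) * K_B * T / mass) / 1e3

# Even at 20 K the thermal NH3 linewidth is only ~0.23 km/s:
print(round(thermal_fwhm_kms(20.0), 2))  # 0.23
```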
The NH3(1,1) spectra of some cores deviate significantly from the predicted sym-
metric inner and outer satellite brightness temperature ratios. These line profile asym-
metries are often seen toward star forming cores and are understood to be caused
by selective radiative trapping due to multiple NH3(1,1) sub-clumps within the beam
[see Stutzki & Winnewisser (1985) and references therein]. The NH3-only cores (Group
1) have by far the strongest asymmetries.
NH3(4,4) emission is detected toward the peak of 13 NH3(1,1) cores and 11 of these
also have coincident NH3(5,5) emission. The higher spatial resolution of the NH3(4,4)
and (5,5) images compared to the NH3(1,1) observations (8′′ vs 11′′) provides a stronger
constraint to the criteria outlined in §3 as to whether this emission is associated with
either methanol or continuum emission. In every case, the NH3(4,4) and (5,5) emission
is unresolved, within a synthesised beam width of the methanol maser emission spatially
and within the FWHM in velocity. This shows the methanol masers form at the warmest
parts of the core. Significantly, no NH3(4,4) or (5,5) emission is detected toward NH3-only
sources.
As shown in Figure 1, cores with NH3 and methanol maser emission (Groups 2 and
3) are generally significantly warmer than those with only NH3 emission (Group 1).
However, there are also a small number of cores with methanol maser emission that
have very cool temperatures and quiescent gas, similar to the NH3-only cores. Modelling
shows pumping of 6.7GHz methanol masers requires local temperatures sufficient to
evaporate methanol from the dust grains (T ≳ 90 K) and a highly luminous source of IR
photons [Cragg et al. (2005)] i.e. an internal powering source. It is therefore plausible
that the cold, quiescent sources with methanol maser emission are cores in which the
feedback from the powering source has not had time to significantly alter the larger
scale properties of the gas in the cores.
5.2. Ionised Gas
Of the 14 continuum cores detected at 24GHz, 10 are within two synthesised beams of the
6.7GHz methanol maser emission. This is contrary to the results of Walsh et al. (1998),
who found the masers generally unassociated with 8GHz continuum emission. However,
six of the 24GHz continuum sources found at the site of the methanol maser emission
Figure 1. NH3(1,1) linewidth vs gas kinetic temperature. Cores with NH3 only (Group 1) are
shown as crosses while those with NH3 and methanol maser emission (Groups 2 and 3) are shown
as triangles. The dashed line shows the expected linewidth due to purely thermal motions.
have no 8GHz counterparts. A possible explanation for this may be that the continuum
emission is optically thick rather than optically thin between 8 and 24GHz and hence
has a flux density proportional to ν2 rather than ν−0.1. The seemingly low emission mea-
sures derived for the 24GHz continuum are unreliable due to the large beam size of the
observations. Alternatively, the 24GHz continuum sources may have been too extended
and resolved-out by the larger array configuration used at 8GHz by Walsh et al. (1998).
Further high spatial resolution observations at ν > 24GHz are required to derive reliable
emission measures and spectral indexes to unambiguously differentiate between the two
explanations.
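The two spectral indices make very different predictions for the 24 GHz/8 GHz flux-density ratio, which is why an optically thick source can be detected at 24 GHz yet missed at 8 GHz. A quick illustration (our own numbers):

```python
def flux_ratio_24_over_8(alpha):
    """S(24 GHz)/S(8 GHz) for a power-law spectrum S proportional to nu^alpha."""
    return (24.0 / 8.0) ** alpha

print(round(flux_ratio_24_over_8(2.0), 1))   # 9.0   (optically thick)
print(round(flux_ratio_24_over_8(-0.1), 2))  # 0.9   (optically thin)
```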
6. Towards an evolutionary sequence
From the previous analysis, the core properties are seen to vary depending on their
association with methanol maser and continuum emission. Making the reasonable as-
sumption that cores heat up and becomes less quiescent with age, we now investigate
what these physical conditions tell us about their evolutionary state.
As the core sizes are similar, the measured linewidths can be reliably used to indi-
cate how quiescent the gas is, without worrying about its dependence on the core size
[Larson (1981)]. It then becomes obvious that the isolated NH3 cores (Group 1) con-
tain the most quiescent gas. However, from the linewidths alone it is not clear if these
cores will eventually form stars or if they are a transitory phenomenon. The fact that a
large number of these Group 1 cores contain many dense sub-clumps (as evidenced by the
NH3(1,1) asymmetries) suggests the former is likely for at least these cores. The linewidth
of sources with methanol maser emission (Groups 2 and 3) are significantly larger, and
hence contain less quiescent gas, than those of Group 1. The larger linewidths combined
with generally higher temperatures, suggests cores in Groups 2 and 3 are more evolved
than in Group 1.
The detection of continuum emission suggests a massive star is already ionising the
gas. With the current observations, the properties of the continuum sources are not well
enough constrained to further separate their evolutionary stages. However, as all (with
one possible exception) of the continuum sources only detected at 24GHz are associated
with dense molecular gas and masers, this would suggest they are younger than those
detected at both 8 + 24GHz, despite their seemingly small emission measures. In this
scenario, the cores with only 8 + 24GHz continuum and no NH3 emission, may be
sufficiently advanced for the UCHII region to have destroyed its natal molecular material.
From this evidence, the cores in the different groups do appear to be at different
evolutionary stages, going from most quiescent to most evolved according to the group
number.
7. Conclusions
What then, can we conclude about the role of methanol masers as a signpost of MSF?
The observations show that 6.7GHz methanol masers:
• are found at the warmest location within each core.
• generally highlight significantly warmer regions with less quiescent gas (i.e. more
evolved sources) than those with only NH3 emission.
• may also highlight regions in which the internal pumping source is sufficiently young
that it has not yet detectably altered the large scale core properties.
Methanol masers therefore trace regions at stages shortly after a suitable powering
source has formed right through to relatively evolved UCHII regions. While remaining
a good general tracer of young MSF regions, the presence of a methanol maser does not
single out any particular intermediate evolutionary stage.
Finally, these data confirm and strengthen the results of Hill et al. (this volume), that
the youngest MSF regions appear to be molecular cores with no detectable methanol
maser emission.
8. Acknowledgements
SNL is supported by a scholarship from the School of Physics at UNSW. We thank
Andrew Walsh for comments on the manuscript. We also thank the Australian Research
Council for funding support. The Australia Telescope is funded by the Commonwealth
of Australia for operation as a National Facility managed by CSIRO.
References
Cragg D. M., Sobolev A. M., Godfrey P. D. 2005, MNRAS, 360, 533
Larson R. B., 1981, MNRAS, 194, 809
Longmore S. N., Burton M. G., Barnes P. J., Wong T., Purcell C. R., Ott J., 2007, MNRAS, in
press.
Menten K.M. 1991, ApJ, 380, 75
Mezger, P. G., Henderson, A. P. 1967, ApJ 147, 471
Ott J., Weiss A., Henkel C., Walter F., 2005, ApJ, 629, 767
Sault, R. J., Teuben, P. J., Wright, M. C. H. 1995, in: R. A. Shaw, H. E. Payne & J. J. E. Hayes
(eds.), Astronomical Data Analysis Software and Systems IV, ASP Conf. Ser. 77, p. 433
Stutzki J., Winnewisser G. 1985, A&A, 144, 13
Walsh A. J., Burton M. G., Hyland A. R., Robinson G. 1998, MNRAS, 301, 640
Gravitating Global k-monopole
Xing-hua Jin, Xin-zhou Li†and Dao-jun Liu
Shanghai United Center for Astrophysics(SUCA), Shanghai Normal University,
Shanghai 200234, China
School of Science, East China University of Science and Technology, 130 Meilong
Road, Shanghai, 200237, China
E-mail: †[email protected]
Abstract.
A gravitating global k-monopole produces a tiny gravitational field outside the
core in addition to a solid angular deficit in the k-field theory. As a new feature, the
gravitational field can be attractive or repulsive depending on the non-canonical kinetic
term.
PACS numbers: 11.27.+d, 11.10. Lm
1. Introduction
The phase transition in the early universe could produce different kinds of topological
defects which have some important implications in cosmology[1]. Domain walls are two-
dimensional defects, and strings are one-dimensional defects. Point-like defects also arise
in some theories that undergo spontaneous symmetry breaking, where they appear
as monopoles. The global monopole, which has divergent mass in flat spacetime, is one
of the most interesting defects. The idea that monopoles ought to exist has been proved
to be remarkably durable. Barriola and Vilenkin [2] first studied the characteristics
of global monopole in curved spacetime, or equivalently, its gravitational effects. When
one considers the gravity, the linearly divergent mass of global monopole has an effect
analogous to that of a deficit solid angle plus that of a tiny mass at the origin. Harari
and Loustò [3], and Shi and Li [4] have shown that this small gravitational potential
is actually repulsive. Furthermore, Li et al [5, 6, 7] have proposed a new class of cold
stars which are called D-stars (defect stars). One of the most important features of
such stars, comparing to Q-stars, is that the theory has monopole solutions when the
matter field is absent, which makes the D-stars behave very differently from the Q-
stars. The topological defects are also investigated in the Friedmann-Robertson-Walker
spacetime [8]. It is shown that the properties of global monopoles in asymptotically
dS/AdS spacetime [9] and the Brans-Dicke theory [10] are very different from those of
ordinary ones. The similar issue for the gravitational mass of composite monopole, i.e.,
global and local monopole has also been discussed [22].
The huge attractive force between a global monopole M and an antimonopole M̄
suggests that the monopole over-production problem does not exist, because pair
annihilation is very efficient. Barriola and Vilenkin have shown that the radiative
lifetime of the pair is very short, as they lose energy by Goldstone boson radiation [2]. No
serious attempt has been made to develop an analytical model of the cosmological evolution of
a global monopole, so we are limited to the numerical simulations of evolution by Bennett
and Rhie [11]. In the σ-model approximation, the average number of monopoles per
horizon is NH ∼ 4. The gravitational field of global monopoles can lead to clustering
in matter, and later evolve into galaxies and clusters. The scale-invariant spectrum of
fluctuations has been given [11]. Furthermore, one can numerically obtain the microwave
background anisotropy (δT/T )rms patterns [12]. Comparing theoretical value to the
observed rms fluctuation, one can find the constraint of parameters in global monopole.
On the other hand, non-canonical kinetic terms are rather ordinary for effective field
theories. The k-field theory, in which the non-canonical kinetic terms are introduced in
the Lagrangian, have been recently investigated to serve as the inflaton in the inflation
scenario, which is so-called k-inflation [13], and to explain the current acceleration
of the universe and the cosmic coincidence problem, k-essence [14]. Armendariz-
Picon et al [15, 16] have discussed gravitationally bound static and spherically
symmetric configurations of k-essence fields. Another interesting application of k-
fields is topological defects, dubbed by k-defects [17]. Monopole [18] and vortex [19]
of tachyon field, which as an example of k-field comes from string/M theory, have also
been investigated. The mass of global k-monopole diverges in flat spacetime, just as that
of standard global monopole, therefore, it is of more physical significance to consider
the gravitational effects of global k-monopole.
In this paper, we study the gravitational field of global k-monopole and derive the
solutions numerically and asymptotically. We find that the topological condition of
vacuum manifold for the formation of a k-monopole is identical to that of an ordinary
monopole, but their physical properties are disparate. Especially, we show that the
mass of k-monopole can be positive in some form of the non-canonical kinetic terms.
In other words, the gravitational field can be attractive or repulsive depending on the
non-canonical kinetic term.
2. Equations of Motion
We shall work within a particular model in units c = 1, where a global O(3) symmetry
is broken down to U(1) in the k-field theory. Its action is given by

S = ∫ d⁴x √(−g̃) [ R̃/(2κ) + M⁴ K(X̃/M⁴) − (λ²/4)(φ̃ᵃφ̃ᵃ − σ̃₀²)² ],   (1)

where κ = 8πG and λ is a dimensionless constant. In action (1), X̃ = (1/2) ∂̃_μφ̃ᵃ ∂̃^μφ̃ᵃ,
where φ̃ᵃ is the SO(3) triplet of Goldstone fields and σ̃₀ is the symmetry-breaking
scale with the dimension of mass. After setting the dimensionless quantities x = M x̃,
φᵃ = φ̃ᵃ/M and σ₀ = σ̃₀/M, the matter part of action (1) becomes

S = ∫ d⁴x √(−g) [ K(X) − V(φ) ],   (2)

where V(φ) = (λ²/4)(φᵃφᵃ − σ₀²)². The hedgehog configuration describing a global k-
monopole is

φᵃ = σ₀ f(ρ) xᵃ/ρ,   (3)

where xᵃxᵃ = ρ² and a = 1, 2, 3, so that we shall actually have a global k-monopole
solution if f → 1 at spatial infinity and f → 0 near the origin.
The static spherically symmetric metric can be written as
ds² = B(ρ) dt² − A(ρ) dρ² − ρ² (dθ² + sin²θ dϕ²)   (4)

with the usual relation between the spherical coordinates ρ, θ, ϕ and the "Cartesian"
coordinates xᵃ. Introducing a dimensionless parameter r = σ₀ρ, from (2) and (3) we
obtain the equation of motion for f,

K̇ [ f'' + (2/r) f' − (2/r²) f ] + K̈ X' f' − λ² f (f² − 1) = 0,   (5)

where the prime denotes the derivative with respect to r, the dot denotes the derivative
with respect to X, and X = −f²/r² − f'²/2. Since we only consider static solutions, X is
negative, and the behaviors of K at positive and negative X are independent of each other;
in this paper we only require K(X) to be well defined for negative X.
The Einstein equation for k-monopole is
Gµν = κTµν (6)
where Tµν is the energy-momentum tensor for the action (2). The tt and rr components
of the Einstein equations can now be written as

G⁰₀ = ε² T⁰₀,   (7)

G¹₁ = ε² T¹₁,   (8)

where

T⁰₀ = −K + (λ²/4)(f² − 1)²,   (9)

T¹₁ = −K + (λ²/4)(f² − 1)² − K̇ f'²/A,   (10)

and ε² = κσ₀² = 8πGσ₀² is a dimensionless parameter.
3. k-monopole
Although the existence of the global k-monopole, like that of the standard one, is guaranteed
by the symmetry-breaking potential, the non-canonical kinetic term leads to the appearance
of a new scale in the action besides the mass parameter in the potential term. The
non-canonical kinetic term must be chosen with care. At small gradients it can be chosen
to approach the canonical behavior asymptotically, so that small perturbations behave in
the standard manner, while at large gradients we allow it to differ from the standard form.
In the small-|X| regime, we assume that the kinetic term has asymptotically canonical
behavior, which avoids the "zero-kinetic" problem: if K(X) ∼ X^α for |X| ≪ 1, then for
α < 1 there is a singularity at X = 0, while for α > 1 the system becomes non-dynamical
at X = 0. For the monopole solution, one easily finds that K(X) ∼ X at r ≫ 1. On the
other hand, we assume a modified kinetic term K(X) ∼ X^α with α ≠ 1 at |X| ≫ 1;
assuming |X| ≫ 1 holds inside the core of the global monopole, one easily obtains the
equation of motion there. The equations of motion are highly non-linear and cannot be
solved analytically. Next, we investigate the asymptotic behavior of a global monopole
with a kinetic term non-linear in X. To be specific, we consider the following type of
kinetic term:

K(X) = X − βX²,   (11)
will reduce to be the standard one when β = 0. It is easy to check whether the kinetic
term (11) satisfy the condition for the hyperbolicity [16, 20, 17]
2XK̈ + K̇
> 0, (12)
Gravitating Global k-monopole 5
which leads to a positive definite speed of sound for the small perturbations of the field.
The stability of solutions shows that for the case β < 0, the range 1
> X > 1
be excluded. However, this will not destroy the results which are carried out from the
case β > 0. We here only consider the cases for β > 0.
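For the kinetic term (11), condition (12) reduces to 1 − 6βX > 0, which is easy to check directly. The following sketch (our own illustration, not part of the original analysis) confirms that the condition holds for all static configurations (X < 0) when β > 0 and fails on the excluded range when β < 0:

```python
import numpy as np

def hyperbolicity(X, beta):
    # K(X) = X - beta*X**2  =>  Kdot = 1 - 2*beta*X, Kddot = -2*beta,
    # so condition (12) reads 2*X*Kddot + Kdot = 1 - 6*beta*X > 0
    return 1.0 - 6.0 * beta * X

X_static = -np.linspace(1e-3, 100.0, 1000)   # static solutions have X < 0

# beta > 0: condition (12) holds everywhere on X < 0
assert np.all(hyperbolicity(X_static, beta=5.0) > 0)

# beta < 0: condition (12) fails on the band 1/(6*beta) > X > 1/(2*beta)
beta = -1.0
excluded = np.linspace(1.0 / (2.0 * beta) + 1e-6,
                       1.0 / (6.0 * beta) - 1e-6, 100)
assert np.all(hyperbolicity(excluded, beta) < 0)
```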
Using Eqs. (5)-(10), we get the asymptotic expressions for A(r), B(r) and f(r) valid
near r = 0:

f(r) = f₀ r − [λ² f₀ / (10(1 + 5βf₀²))] r³ + O(ε²r³) + O(r⁴),   (13)

A(r) = 1 + (ε²/12)(λ² + 6f₀² + 9βf₀⁴) r² + O(r³),   (14)

B(r) = 1 − (ε²/12)(λ² − 9βf₀⁴) r² + O(r³),   (15)

where the undetermined coefficient f₀ characterizes the mass of the k-monopole and is
determined in the numerical calculation.
In the region r ≫ 1, we can similarly expand f(r), A(r) and B(r) as

f(r) = 1 − 1/(λ²r²) − (3 − 2ε² + 4βλ²)/(2λ⁴r⁴) + O(r⁻⁵),   (16)

A(r) = 1/(1 − ε²) + 2Gσ₀M∞/[(1 − ε²)² r] + O(r⁻²),   (17)

B(r) = (1 − ε²) − 2Gσ₀M∞/r + ε²(1 − βλ²)/(λ²r²) + O(r⁻³),   (18)
where the constant M∞ will be discussed in the following.
Using a shooting method for boundary value problems, we obtain numerical results for
the function f(r), which describes the configuration of the global k-monopole. In Fig. 1
we show f(r) for β = 0, β = 1, β = 5 and β = 10, for given values of λ and ε.
Evidently, the configuration of the field f is not sensitive to the choice of the
parameter β.
From Eqs. (13) and (16), it is easy to construct a global k-monopole with the
same asymptotic conditions as the standard global monopole, i.e., f approaches zero
when r ≪ 1 and approaches unity when r ≫ 1.
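To illustrate the shooting procedure, here is a minimal sketch (ours, not the authors' code) in the simplest limit β = 0, ε → 0 and λ = 1, where Eq. (5) reduces to the canonical flat-space profile equation; we bisect on the central slope f₀ until the profile interpolates between f(0) = 0 and f(∞) = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

LAM = 1.0  # lambda

def rhs(r, y):
    # flat-space profile equation in the canonical limit (beta = 0):
    # f'' + (2/r) f' - (2/r^2) f - lam^2 f (f^2 - 1) = 0
    f, fp = y
    return [fp, -2.0 * fp / r + 2.0 * f / r**2 + LAM**2 * f * (f**2 - 1.0)]

def blow_up(r, y):
    # stop integrating once the solution clearly diverges
    return abs(y[0]) - 2.0
blow_up.terminal = True

def shoot(f0, r_max=12.0, r0=1e-4):
    # start just off the origin using the series behavior f ~ f0 * r
    return solve_ivp(rhs, (r0, r_max), [f0 * r0, f0], events=blow_up,
                     rtol=1e-10, atol=1e-12, dense_output=True)

def find_f0(lo=0.01, hi=2.0, iters=50):
    # bisection: too steep a start overshoots f = 1, too shallow falls back
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.any(shoot(mid).y[0] > 1.0):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

f0 = find_f0()
profile = shoot(f0)
```

At large r the resulting profile should follow Eq. (16), e.g. f(8) ≈ 1 − 1/(λ²·64) in this limit.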
Actually, there is a general solution to the Einstein equation with an energy-momentum
tensor Tµν of the form (9) and (10) for the spherically symmetric metric (4):

A(r)⁻¹ = 1 − (ε²/r) ∫₀ʳ [ −K + (λ²/4)(f² − 1)² ] r'² dr',   (19)

B(r) = A(r)⁻¹ exp[ −ε² ∫ᵣ^∞ K̇ f'² r' dr' ].   (20)
Figure 1. The plot of f(r) as a function of r. Here we choose λ = 1, G = 1 and
ǫ = 0.001. The four curved lines are plotted when β = 0, β = 1, β = 5 and β = 10
respectively.
In terms of the dimensionless quantity ε, the metric coefficients A(r) and B(r) can be
formally integrated and read

A(r)⁻¹ = 1 − ε² − 2Gσ₀M_A(r)/r,   (21)

B(r) = 1 − ε² − 2Gσ₀M_B(r)/r.   (22)
The small dimensionless parameter ε arises naturally from the Einstein equations, and
ε² clearly describes a solid angular deficit of the spacetime.
A global k-monopole solution f should approach unity when r ≫ 1. If this
convergence is fast enough, then M_A(r) and M_B(r) will also rapidly converge to finite
values. Therefore, from Eqs. (16)-(18) we have the asymptotic expansions

M_A(r) = M∞ + 4πσ₀(1 − βλ²)/(λ²r) + 8πσ₀(−1 + ε² + 2βλ²)/(3λ⁴r³) + O(r⁻⁵),   (23)

M_B(r) = M∞ + 4πσ₀(1 − βλ²)/(λ²r) + 4πσ₀(−1 + ε² − 4βλ²)/(3λ⁴r³) + O(r⁻⁵),   (24)
where M∞ ≡ lim_{r→∞} M_A(r). One can easily find that the dependence on ε of the
asymptotic expansion for f(r) is very weak; in other words, the asymptotic behavior is
quite independent of the scale of symmetry breakdown σ₀, up to values as large as the
Planck scale. On the contrary, M_A(r) depends explicitly on σ₀.
Figure 2. The plot of MA(r)/σ0 as a function of r. Here we choose λ = 1, G = 1 and
ǫ = 0.001. The four curved lines are plotted when β = 0, β = 1, β = 5 and β = 10
respectively.
The numerical results for M_A(r)/σ₀ are shown in Fig. 2, obtained by the shooting
method for boundary value problems with λ = 1, G = 1 and ε = 0.001. From the
figure we find that the mass of the global k-monopole decreases to a negative asymptotic
value as r approaches infinity in the cases β = 0 and β = 1, while the mass is
positive for β = 5 or β = 10. The asymptotic masses for these cases are −19.15σ₀,
−13.83σ₀, 3.62σ₀ and 22.14σ₀, respectively. Clearly the presence of the parameter β,
which measures the degree of deviation of the kinetic term from the canonical one, affects
the effective mass of the global k-monopole significantly. This property is not difficult to
understand. From Eqs. (11), (19) and (21), the mass function M_A(r) can be expressed
explicitly as
M_A(r)/(4πσ₀) = −r + ∫₀ʳ [ βX² − X + (λ²/4)(f² − 1)² ] r'² dr'.   (25)
Obviously, the β-term in the integrand makes a positive contribution to the mass
function. From Fig. 1, f (and hence X) is not sensitive to the value of β, so the greater
the parameter β, the larger the value M_A(r) takes at a given r; if β exceeds some
value, M_A(r) becomes positive at large r, as Fig. 2 shows. However, inside
the core X varies slowly but f varies rapidly with r, so the λ²-term in the
integrand becomes dominant as r decreases. This leads to two characteristics of the
mass curves, also visible in Fig. 2: (i) the mass curves with different β converge
gradually in the region near r = 0; (ii) when β is large enough, M_A(r) has a
minimum.
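The positive β-contribution can be seen directly from Eq. (25). The sketch below (our own construction, using the large-r profile f ≈ 1 − 1/(λ²r²) of Eq. (16) outside the core, with λ = 1) integrates the bracket in Eq. (25) over the exterior region only:

```python
import numpy as np
from scipy.integrate import quad

LAM = 1.0

def exterior_integrand(r, beta):
    # r^2 [beta X^2 - X + (lam^2/4)(f^2 - 1)^2] - 1; the "-1" is the
    # solid-deficit piece removed by the "-r" term in Eq. (25)
    f = 1.0 - 1.0 / (LAM**2 * r**2)     # Eq. (16) to leading order
    fp = 2.0 / (LAM**2 * r**3)
    X = -f**2 / r**2 - fp**2 / 2.0
    T00 = beta * X**2 - X + (LAM**2 / 4.0) * (f**2 - 1.0)**2
    return r**2 * T00 - 1.0

def exterior_mass(beta, r_min=2.0, r_max=500.0):
    # exterior contribution to M_A(r)/(4 pi sigma_0)
    val, _ = quad(exterior_integrand, r_min, r_max, args=(beta,), limit=200)
    return val
```

With these illustrative inputs the exterior contribution grows with β and changes sign once β is large enough, mirroring the trend of Fig. 2.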
To show the effect of the solid deficit angle, we then investigate the motion of test
particles around a global k-monopole. It is a good approximation to take M_A(r) as a
constant in the region far from the core of the global k-monopole, since the mass M_A(r)
approaches its asymptotic value very quickly. Therefore, we can consider the geodesic
equation in the metric (4) with

A(r)⁻¹ = B(r) = 1 − ε² − 2GM/r,   (26)

where M = σ₀M∞. Solving the geodesic equation and introducing the dimensionless
quantity u = GM/r, one gets a second-order differential equation for u with respect
to ϕ [9, 21]:

d²u/dϕ² + (1 − ε²)u = (GM)²/L² + 3u²,   (27)
where L is the angular momentum per unit mass. When (GM/L)² ≪ 1, one has the
approximate solution

u ≈ [(GM)²/((1 − ε²)L²)] { 1 + e cos[ √(1 − ε²) (1 − 3(GM)²/(√((1 − ε²)³) L²)) ϕ ] },   (28)

where e denotes the eccentricity. When a test particle completes one revolution around
the global k-monopole, its precession is

Δϕ = 6π(GM)²/[ √((1 − ε²)³) L² ] + πε².   (29)
The last term in Eq. (29) is the modification relative to the precession around an
ordinary star.
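The two contributions in Eq. (29) can be checked by integrating the orbit equation (27) directly. A sketch (our construction; the values GM/L = 0.01 and ε² = 10⁻³ are illustrative, chosen so the two correction terms are comparable):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def precession(GM_over_L, eps2, e=0.1):
    # integrate u'' = (GM/L)^2 + 3 u^2 - (1 - eps2) u   [Eq. (27)]
    c = GM_over_L**2
    u_c = c / (1.0 - eps2)                 # near-circular reference value
    rhs = lambda phi, y: [y[1], c + 3.0 * y[0]**2 - (1.0 - eps2) * y[0]]
    # start at perihelion: u maximal, u' = 0
    sol = solve_ivp(rhs, (0.0, 2.5 * np.pi), [u_c * (1.0 + e), 0.0],
                    rtol=1e-11, atol=1e-18, dense_output=True)
    # next perihelion = next zero of u' after aphelion
    phi_next = brentq(lambda p: sol.sol(p)[1], 1.5 * np.pi, 2.5 * np.pi)
    return phi_next - 2.0 * np.pi          # precession per orbit

shift = precession(0.01, 1e-3)
predicted = 6.0 * np.pi * 0.01**2 + np.pi * 1e-3   # Eq. (29) to leading order
```

For these small parameters the numerically measured perihelion shift agrees with the leading-order prediction at the sub-percent level.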
4. Conclusion
In summary, k-monopole could arise during the phase transition in the early universe.
We calculate the asymptotic solutions of global k-monopole in static spherically
symmetric spacetime, and find that the behavior of a k-monopole is similar to that
of a standard one. Although the choice of the parameter β, which measures the degree
of deviation of the kinetic term from the canonical one, has little influence on the
configuration of the k-field φ, the effective mass of the global k-monopole is affected
significantly. The mass may be negative or positive depending on the choice of β. This
shows that the gravitational field of the global k-monopole can be attractive or repulsive,
depending on the non-canonical kinetic term.
The configuration of a global k-monopole is more complicated than that of a
standard one. As for its cosmological evolution, an analytical model is out of reach, and
numerical simulation is required. However, the energy of the global k-monopole is
dominated by the region outside the core, so we can roughly estimate that global
k-monopoles will cause clustering in matter and evolve into galaxies and clusters in a
way similar to standard monopoles.
Acknowledgement
This work is supported in part by National Natural Science Foundation of China under
Grant No. 10473007 and No. 10503002 and Shanghai Commission of Science and
Technology under Grant No. 06QA14039.
References
[1] Vilenkin A and Shellard E P S, 1994 Cosmic Strings and Other Topological Defects (Cambridge
University Press, Cambridge, England)
[2] Barriola M and Vilenkin A, 1989 Phys. Rev. Lett. 63, 341
[3] Harari D and Loustò C, 1990 Phys. Rev.D42, 2626
[4] Shi X and Li X Z, 1991 Class. Quantum Grav. 8, 761
[5] Li X Z and Zhai X H, 1995 Phys. Lett. B364, 212
[6] Li J M and Li X Z, 1998 Chin. Phys. Lett.15, 3; Li X Z, Liu D J and Hao J G, 2002 Science in
China A45, 520
[7] Li X Z, Zhai X H and Chen G, 2000 Astropart. Phys. 13, 245
[8] Basu R, Guth A H and Vilenkin A, 1991 Phys. Rev. D44 340; Basu R and Vilenkin A, 1994 Phys.
Rev. D50 7150; Chen C, Cheng H, Li X Z and Zhai X H, 1996 Class. Quantum Grav. 13, 701
[9] Li X Z and Hao J G, 2002 Phys. Rev. D66, 107701; Hao J G and Li X Z, 2003 Class. Quantum
Grav. 20, 1703
[10] Li X Z and Lu J Z, 2000 Phys. Rev. D62, 107501
[11] Bennett D P and Rhie S H, 1990 Phys. Rev. Lett. 65, 1709
[12] Bennett D P and Rhie S H, 1993 Astrophys. J. 406, L7
[13] Armendariz-Picon C, Damour T, Mukhanov V, 1999 Phys.Lett. B458, 209
[14] Armendariz-Picon C, Mukhanov V, Steinhardt P J, 2000 Phys.Rev.Lett.85, 4438; Armendariz-
Picon C, Mukhanov V, Steinhardt P J, 2001 Phys. Rev. D63: 103510
[15] Armendariz-Picon C and Lim E A, 2005 JCAP 0508, 007
[16] Bilic N, Tupper G B and Viollier R D, 2006 JCAP 0602 013; Diez-Tejedor A and Feinstein A,
2006 Phys. Rev. D74 023530; Nucamendi U, Salgado M and Sudarsky D, 2000 Phys. Rev. Lett.
84 3037; Nucamendi U, Salgado M and Sudarsky D, 2001 Phys. Rev. D63 125016.
[17] Babichev E, 2006 Phys. Rev. D74 085004
[18] Li X Z and Liu D J, 2005 Int. J. Mod. Phys. A20, 5491
[19] Liu D J and Li X Z, 2003 Chin. Phys. Lett. 20, 1678
[20] Rendall A D, 2006 Class.Quant.Grav. 23, 1557.
[21] Wald R M, 1984 General Relativity (The University of Chicago Press, Chicago)
[22] Spinelly J, de Freitas U and Bezerra de Mello E R, 2002 Phys.Rev. D66, 024018.
Effect of atomic beam alignment on photon correlation measurements in cavity QED
L. Horvath and H. J. Carmichael
Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand
(Dated: November 3, 2018)
Quantum trajectory simulations of a cavity QED system comprising an atomic beam traversing a
standing-wave cavity are carried out. The delayed photon coincident rate for forwards scattering is
computed and compared with the measurements of Rempe et al. [Phys. Rev. Lett. 67, 1727 (1991)]
and Foster et al. [Phys. Rev. A 61, 053821 (2000)]. It is shown that a moderate atomic beam
misalignment can account for the degradation of the predicted correlation. Fits to the experimental
data are made in the weak-field limit with a single adjustable parameter—the atomic beam tilt from
perpendicular to the cavity axis. Departures of the measurement conditions from the weak-field limit
are discussed.
PACS numbers: 42.50.Pq, 42.50.Lc, 02.70.Uu
I. INTRODUCTION
Cavity quantum electrodynamics [1, 2, 3, 4, 5, 6] has
as its central objective the realization of strong dipole
coupling between a discrete transition in matter (e.g., an
atom or quantum dot) and a mode of an electromag-
netic cavity. Most often strong coupling is demonstrated
through the realization of vacuum Rabi splitting [7, 8].
First realized for Rydberg atoms in superconducting mi-
crowave cavities [9, 10] and for transitions at optical
wavelengths in high-finesse Fabry Perots [11, 12, 13, 14],
vacuum Rabi splitting was recently observed in mono-
lithic structures where the discrete transition is provided
by a semiconductor quantum dot [15, 16, 17], and in a
coupled system of qubit and resonant circuit engineered
from superconducting electronics [18].
More generally, vacuum Rabi spectra can be observed
for any pair of coupled harmonic oscillators [19] without
the need for strong coupling of the one-atom kind. Prior
to observations for single atoms and quantum dots, sim-
ilar spectra were observed in many-atom [20, 21, 22] and
-exciton [23, 24] systems where the radiative coupling is
collectively enhanced.
The definitive signature of single-atom strong coupling
is the large effect a single photon in the cavity has on
the reflection, side-scattering, or transmission of another
photon. Strong coupling has a dramatic effect, for exam-
ple, on the delayed photon coincidence rate in forwards
scattering when a cavity QED system is coherently driven
on axis [25, 26, 27, 28]. Photon antibunching is seen at
a level proportional to the parameter 2C₁ = 2g²/γκ [27],
where g is the atomic dipole coupling constant, γ is the
atomic spontaneous emission rate, and 2κ is the pho-
ton loss rate from the cavity; the collective parameter
2C = N·2C₁, with N the number of atoms, does not enter
into the magnitude of the effect when N ≫ 1. In the
one-atom case, and for 2κ ≫ γ, the size of the effect is
raised to (2C₁)² [25, 26] [see Eq. (30)].
The first demonstration of photon antibunching was
made [29] for moderately strong coupling (2C1 ≈ 4.6)
and N = 18, 45, and 110 (effective) atoms. The mea-
surement has subsequently been repeated for somewhat
higher values of 2C1 and slightly fewer atoms [30, 31], and
a measurement for one trapped atom [32], in a slightly al-
tered configuration, has demonstrated the so-called pho-
ton blockade effect [33, 34, 35, 36, 37, 38]—i.e., the anti-
bunching of forwards-scattered photons for coherent driv-
ing of a vacuum-Rabi resonance, in which case a two-state
approximation may be made [39], assuming the coupling
is sufficiently strong.
The early experiments of Rempe et al. [29] and those
of Mielke et al. [30] and Foster et al. [31] employ systems
designed around a Fabry-Perot cavity mode traversed
by a thermal atomic beam. Their theoretical modeling
therefore presents a significant challenge, since for the
numbers of effective atoms used, the atomic beam car-
ries hundreds of atoms—typically an order of magnitude
larger than the effective number [40]—into the interac-
tion volume. The Hilbert space required for exact calcu-
lations is enormous (2¹⁰⁰ ∼ 10³⁰); it grows and shrinks
with the number of atoms, which inevitably fluctuates
over time; and the atoms move through a spatially vary-
ing cavity mode, so their coupling strengths are chang-
ing in time. Ideally, all of these features should be taken
into account, although certain approximations might be
made.
For weak excitation, as in the experiments, the lowest
permissible truncation of the Hilbert space—when cal-
culating two-photon correlations—is at the two-quanta
level. Within a two-quanta truncation, relatively simple
formulas can be derived so long as the atomic motion
is overlooked [27, 28]. It is even possible to account for
the unequal coupling strengths of different atoms, and,
through a Monte-Carlo average, fluctuations in their spa-
tial distribution [29]. A significant discrepancy between
theory and experiment nevertheless remains: Rempe et
al. [29] describe how the amplitude of the Rabi oscillation
(magnitude of the antibunching effect) was scaled down
by a factor of 4 and a slight shift of the theoretical curve
was made in order to bring their data into agreement with
this model; the discrepancy persists in the experiments
of Foster et al. [31], except that the required adjustment
is by a scale factor closer to 2 than to 4.
Attempts to account for these discrepancies have been
made but are unconvincing. Martini and Schenzle [41]
report good agreement with one of the data sets from
Ref. [29]; they numerically solve a many-atom master
equation, but under the unreasonable assumption of sta-
tionary atoms and equal coupling strengths. The unlikely
agreement results from using parameters that are very
far from those of the experiment—most importantly, the
dipole coupling constant is smaller by a factor of approx-
imately 3.
Foster et al. [31] report a rather good theoretical fit
to one of their data sets. It is obtained by using the
mentioned approximations and adding a detuning in the
calculation to account for the Doppler broadening of a
misaligned atomic beam. They state that “Imperfect
alignment . . . can lead to a tilt from perpendicular of
as much as 1◦”. They suggest that the mean Doppler
shift is offset in the experiment by adjusting the driving
laser frequency and account for the distribution about
the mean in the model. There does appear to be a dif-
ficulty with this procedure, however, since while such an
offset should work for a ring cavity, it is unlikely to do
so in the presence of the counter-propagating fields of a
Fabry-Perot. Indeed, we are able to successfully simulate
the procedure only for the ring-cavity case (Sec. IVC).
The likely candidates to explain the disagreement be-
tween theory and experiment have always been evident.
For example, Rempe et al. [29] state:
“Apparently the transient nature of the atomic mo-
tion through the cavity mode (which is not included
here or in Ref. [7]) has a profound effect in decorre-
lating the otherwise coherent response of the sam-
ple to the escape of a photon.”
and also:
“Empirically, we also know that |g(2)(0)− 1| is re-
duced somewhat because the weak-field limit is not
strictly satisfied in our measurements.”
To these two observations we should add—picking up on
the comment in [31]—that in a standing-wave cavity an
atomic beam misalignment would make the decorrelation
from atomic motion a great deal worse.
Thus, the required improvements in the modeling are:
(i) a serious accounting for atomic motion in a thermal
atomic beam, allowing for up to a few hundred inter-
acting atoms and a velocity component along the cavity
axis, and (ii) extension of the Hilbert space to include 3,
4, etc. quanta of excitation, thus extending the model
beyond the weak-field limit. The first requirement is
entirely achievable in a quantum trajectory simulation
[42, 43, 44, 45, 46], while the second, even with recent
improvements in computing power, remains a formidable
challenge.
In this paper we offer an explanation of the discrepan-
cies between theory and experiment in the measurements
Parameter                             Set 1          Set 2
cavity halfwidth κ/2π                 0.9 MHz        7.9 MHz
dipole coupling constant g_max/κ      3.56           1.47
atomic linewidth γ/κ                  5.56           0.77
mode waist w₀                         50 µm          21.5 µm
wavelength λ                          852 nm (Cs)    780 nm (Rb)
effective atom number N̄_eff           18             13
oven temperature T                    473 K          430 K
mean speed in oven v̄_oven             274.5 m/s      326.4 m/s
mean speed in beam v̄_beam             323.4 m/s      384.5 m/s

TABLE I: Parameters used in the simulations. Set 1 is taken
from Ref. [29] and Set 2 from Ref. [31].
of Refs. [29] and [31]. We perform ab initio quantum tra-
jectory simulations in parallel with a Monte-Carlo sim-
ulation of a tilted atomic beam. The parameters used
are listed in Table I: Set 1 corresponds to the data dis-
played in Fig. 4(a) of Ref. [29], and Set 2 to the data dis-
played in Fig. 4 of Ref. [31]. All parameters are measured
quantities— or are inferred from measured quantities—
and the atomic beam tilt alone is varied to optimize the
data fit. Excellent agreement is demonstrated for atomic
beam misalignments of approximately 10mrad (a little
over 1/2◦). These simulations are performed using a two-
quanta truncation of the Hilbert space.
Simulations based upon a three-quanta truncation are
also carried out, which, although not adequate for the
experimental conditions, can begin to address physics be-
yond the weak-field limit. From these, an inconsistency
with the intracavity photon number reported by Foster
et al. [31] is found.
Our model is described in Sec. II, where we formu-
late the stochastic master equation used to describe the
atomic beam, its quantum trajectory unraveling, and the
two-quanta truncation of the Hilbert space. The previous
modeling on the basis of a stationary-atom approxima-
tion is reviewed in Sect. III and compared with the data
of Rempe et al. [29] and Foster et al. [31]. The effects
of atomic beam misalignment are discussed in Sec. IV;
here the results of simulations with a two-quanta trunca-
tion are presented. Results obtained with a three-quanta
truncation are presented in Sec. V, where the issue of
intracavity photon number is discussed. Our conclusions
are stated in Sec. VI.
II. CAVITY QED WITH ATOMIC BEAMS
A. Stochastic Master Equation: Atomic Beam
Simulation
Thermal atomic beams have been used extensively for
experiments in cavity QED [9, 10, 11, 12, 20, 21, 22, 29,
30, 31]. The experimental setups under consideration
are described in detail in Refs. [47] and [48]. As typi-
cally, the beam is formed from an atomic vapor created
inside an oven, from which atoms escape through a colli-
mated opening. We work from the standard theory of an
effusive source from a thin-walled oriface [49], for which
for an effective number N̄eff of intracavity atoms [11, 40]
and cavity mode waist ω0 (N̄eff is the average number
of atoms within a cylinder of radius w0/2), the average
escape rate is
R = 64 N̄_eff v̄_beam/(3π² w₀),   (1)
with mean speed in the beam
v̄_beam = √(9πk_B T/8M),   (2)
where kB is Boltzmann’s constant, T is the oven tem-
perature, and M is the mass of an atom; the beam has
atomic density
ϱ = 4N̄_eff/(πw₀² l),   (3)
where l is the beam width, and distribution of atomic
speeds
P(v) dv = 2u³(v) e^{−u²(v)} du(v),   (4)

u(v) ≡ 2v/(√π v̄_oven), where

v̄_oven = √(8k_B T/πM) = (8/3π) v̄_beam   (5)
is the mean speed of an atom inside the oven, as cal-
culated from the Maxwell-Boltzmann distribution. Note
that v̄beam is larger than v̄oven because those atoms that
move faster inside the oven have a higher probability of
escape.
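These relations are easy to verify numerically. A sketch (our own; the constants correspond to Cs at the Set 1 oven temperature) integrates the distribution (4) and checks Eqs. (2) and (5) against the Table I values:

```python
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23                      # Boltzmann constant, J/K
M_CS = 132.905 * 1.66053906660e-27     # mass of a Cs atom, kg

def v_oven(T, M):
    return np.sqrt(8.0 * kB * T / (np.pi * M))        # Eq. (5)

def v_beam(T, M):
    return np.sqrt(9.0 * np.pi * kB * T / (8.0 * M))  # Eq. (2)

def P(v, T, M):
    # beam speed distribution, Eq. (4), written as a density in v
    s = 2.0 / (np.sqrt(np.pi) * v_oven(T, M))
    u = s * v
    return 2.0 * u**3 * np.exp(-u**2) * s

T = 473.0                              # K, Set 1
hi = 10.0 * v_beam(T, M_CS)            # effectively infinite upper limit
norm = quad(P, 0.0, hi, args=(T, M_CS))[0]
mean = quad(lambda v: v * P(v, T, M_CS), 0.0, hi)[0]
```

The distribution is normalized, and its mean reproduces v̄_beam = (3π/8) v̄_oven ≈ 323 m/s, the Set 1 value of Table I.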
In an open-sided cavity, neither the interaction volume
nor the number of interacting atoms is well-defined; the
cavity mode function and atomic density are the well-
defined quantities. Clearly, though, as the atomic dipole
coupling strength decreases with the distance of the atom
from the cavity axis, those atoms located far away from
the axis may be neglected, introducing, in effect, a finite
interaction volume. How far from the cavity axis, how-
ever, is far enough? One possible criterion is to require
that the interaction volume taken be large enough to give
an accurate result for the collective coupling strength, or,
considering its dependence on atomic locations (at fixed
average density), the probability distribution over collec-
tive coupling strengths. According to this criterion, the
actual number of interacting atoms is typically an order
of magnitude larger than N̄eff [40]. If, for example, one
introduces a cut-off parameter F < 1, and defines the
interaction volume by [40, 50, 51]
V_F ≡ {(x, y, z) : g(x, y, z) ≥ F g_max},   (6)

with

g(x, y, z) = g_max cos(kz) exp[−(x² + y²)/w₀²]   (7)

the spatially varying coupling constant for a standing-
wave TEM₀₀ cavity mode [52]—wavelength λ = 2π/k—
the computed collective coupling constant is [40]

√N̄_eff g_max → √(N̄^F_eff) g_max,

N̄^F_eff = (2N̄_eff/π) [ (1 − 2F²) cos⁻¹F + F √(1 − F²) ].   (8)
For the choice F = 0.1, one obtains N̄Feff = 0.98N̄eff, a
reduction of the collective coupling strength by 1%, and
the interaction volume—radius r ≈ 3(w0/2)—contains
approximately 9N̄eff atoms on average. This is the choice
made for the simulations with a three-quanta trunca-
tion reported in Sec. V. When adopting a two-quanta
truncation, with its smaller Hilbert space for a given
number of atoms, we choose F = 0.01, which yields
N̄Feff = 0.9998N̄eff and r ≈ 4.3(w0/2), and approximately
18N̄eff atoms in the interaction volume on average.
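The reduction factor of Eq. (8) is easy to evaluate numerically; the following sketch (ours, written for this note rather than taken from the paper) reproduces the two quoted values:

```python
import math

def coupling_reduction(F):
    """Fraction N_eff^F / N_eff of Eq. (8) for cut-off parameter F."""
    return (2.0 / math.pi) * ((1.0 - 2.0 * F**2) * math.acos(F)
                              + F * math.sqrt(1.0 - F**2))

print(round(coupling_reduction(0.1), 2))   # 0.98, as quoted for F = 0.1
print(round(coupling_reduction(0.01), 4))  # 0.9998, as quoted for F = 0.01
```

Since the collective coupling scales as the square root of N̄^F_eff, the F = 0.1 value corresponds to the 1% reduction of the coupling strength mentioned above.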
In fact, the volume used in practice is a little larger than V_F. In the course of a Monte-Carlo simulation of the atomic beam, atoms are created randomly at rate R on the plane x = −w₀√|ln F|. At the time, t₀^j, of its creation, each atom is assigned a random position and velocity (j labels a particular atom),

r_j(t₀^j) = (−w₀√|ln F|, y_j(t₀^j), z_j(t₀^j)),   v_j = v_j(x̂ cos θ + ẑ sin θ),  (9)

where y_j(t₀^j) and z_j(t₀^j) are random variables, uniformly distributed on the intervals |y_j(t₀^j)| ≤ w₀√|ln F| and |z_j(t₀^j)| ≤ λ/4, respectively, and v_j is sampled from the
distribution of atomic speeds [Eq. (4)]; θ is the tilt of
the atomic beam away from perpendicular to the cavity
axis. The atom moves freely across the cavity after its
creation, passing out of the interaction volume on the
plane x = w₀√|ln F|. Thus the interaction volume has a square rather than circular cross section and measures 2w₀√|ln F| on a side. It is larger than V_F by approximately 30%.
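The creation step just described can be sketched as follows (Python; `create_atom` is a hypothetical helper, and the speed draw is left as a callback because the beam distribution of Eq. (4) is not reproduced here):

```python
import math
import random

def create_atom(w0, lam, theta, v_draw, F=0.01):
    """Create one atom on the entry plane x = -w0*sqrt(|ln F|), Eq. (9).

    v_draw() should sample the atomic speed distribution of Eq. (4);
    theta is the tilt of the beam away from perpendicular to the
    cavity (z) axis.
    """
    half_side = w0 * math.sqrt(abs(math.log(F)))
    r = (-half_side,
         random.uniform(-half_side, half_side),   # |y| <= w0*sqrt(|ln F|)
         random.uniform(-lam / 4.0, lam / 4.0))   # |z| <= lambda/4
    v = v_draw()
    return r, (v * math.cos(theta), 0.0, v * math.sin(theta))
```

After creation the atom simply moves ballistically until it crosses the exit plane.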
Atoms are created in the ground state and returned
to the ground state when they leave the interaction vol-
ume. On leaving, an atom is disentangled from the system by comparing its probability of excitation with a uniformly distributed random number r, 0 ≤ r ≤ 1, and
deciding whether or not it will—anytime in the future—
spontaneously emit; thus, the system state is projected
onto the excited state of the leaving atom (the atom will
emit) or its ground state (it will not emit) and propa-
gated forwards in time.
Note that the effects of light forces and radiative heat-
ing are neglected. At the thermal velocities considered,
typically the ratio of kinetic energy to recoil energy is of
order 108, while the maximum light shift h̄gmax (assum-
ing one photon in the cavity) is smaller than the kinetic
energy by a factor of 107; even if the axial component
of velocity only is considered, these ratios are as high as
104 and 103 with θ ∼ 10mrad, as in Figs. 10 and 11. In
fact, the mean intracavity photon number is considerably
less than one (Sec. V); thus, for example, the majority of
atoms traverse the cavity without making a single spon-
taneous emission.
Under the atomic beam simulation, the atom number,
N(t), and locations rj(t), j = 1, . . . , N(t), are chang-
ing in time; therefore, the atomic state basis is dynamic,
growing and shrinking with N(t). We assume all atoms
couple resonantly to the cavity mode, which is coher-
ently driven on resonance with driving field amplitude
E . Then, including spontaneous emission and cavity loss,
the system is described by the stochastic master equation
in the interaction picture
ρ̇ = E[↠− â, ρ] + Σ_{j=1}^{N(t)} g(r_j(t))[â†σ̂_{j−} − âσ̂_{j+}, ρ]

+ (γ/2) Σ_{j=1}^{N(t)} (2σ̂_{j−}ρσ̂_{j+} − σ̂_{j+}σ̂_{j−}ρ − ρσ̂_{j+}σ̂_{j−})

+ κ(2âρ↠− â†âρ − ρâ†â),  (10)
with dipole coupling constants
g(r_j(t)) = g_max cos(kz_j(t)) exp{−[x_j²(t) + y_j²(t)]/w₀²},  (11)
where ↠and â are creation and annihilation operators
for the cavity mode, and σ̂j+ and σ̂j−, j = 1 . . .N(t), are
raising and lowering operators for two-state atoms.
B. Quantum Trajectory Unraveling
In principle, the stochastic master equation might be
simulated directly, but it is impossible to do so in prac-
tice. Table I lists effective numbers of atoms N̄eff = 18
and N̄eff = 13. For cut-off parameter F = 0.01 and an
interaction volume of approximately 1.3×VF [see the dis-
cussion below Eq. (8)], an estimate of the number of in-
teracting atoms gives N(t) ∼ 1.3×18N̄eff ≈ 420 and 300,
respectively, which means that even in a two-quanta trun-
cation the size of the atomic state basis (∼ 105 states)
is far too large to work with density matrix elements.
We therefore make a quantum trajectory unraveling of
Eq. (10) [42, 43, 44, 45, 46], where, given our interest
in delayed photon coincidence measurements, condition-
ing of the evolution upon direct photoelectron counting
records is appropriate: the (unnormalized) conditional
state satisfies the nonunitary Schrödinger equation
d|ψ̄_REC⟩/dt = (1/iħ) Ĥ_B(t)|ψ̄_REC⟩,  (12)
with non-Hermitian Hamiltonian
Ĥ_B(t)/iħ = E(↠− â) + Σ_{j=1}^{N(t)} g(r_j(t))(â†σ̂_{j−} − âσ̂_{j+}) − κâ†â − (γ/2) Σ_{j=1}^{N(t)} σ̂_{j+}σ̂_{j−},  (13)
and this continuous evolution is interrupted by quantum
jumps that account for photon scattering. There are
N(t)+1 scattering channels and correspondinglyN(t)+1
possible jumps:
|ψ̄REC〉 → â|ψ̄REC〉, (14a)
for forwards scattering—i.e., the transmission of a photon
by the cavity—and
|ψ̄REC〉 → σ̂j−|ψ̄REC〉, j = 1, . . . , N(t), (14b)
for scattering to the side (spontaneous emission). These
jumps occur, in time step ∆t, with probabilities
P_forwards = 2κ⟨â†â⟩_REC Δt,  (15a)

P^j_side = γ⟨σ̂_{j+}σ̂_{j−}⟩_REC Δt,  j = 1, …, N(t);  (15b)

otherwise, with probability

1 − P_forwards − Σ_{j=1}^{N(t)} P^j_side,
the evolution under Eq. (12) continues.
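The jump/no-jump decision of Eqs. (14) and (15) amounts to partitioning the unit interval among the N(t) + 1 scattering channels; a minimal sketch (a hypothetical helper, not the authors' code):

```python
import random

def select_jump(p_forwards, p_side):
    """Choose a quantum jump for one time step, Eqs. (14)-(15).

    p_forwards is the probability of Eq. (15a); p_side is the list of
    per-atom probabilities of Eq. (15b). Returns 'forwards', ('side', j),
    or None (continue the evolution of Eq. (12))."""
    r = random.random()
    if r < p_forwards:
        return "forwards"            # |psi> -> a|psi>, Eq. (14a)
    r -= p_forwards
    for j, p in enumerate(p_side):
        if r < p:
            return ("side", j)       # |psi> -> sigma_j^- |psi>, Eq. (14b)
        r -= p
    return None                      # no jump this step
```

In the weak-field limit the no-jump branch dominates overwhelmingly.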
For simplicity, and without loss of generality, we as-
sume a negligible loss rate at the cavity input mirror
compared with that at the output mirror. Under this
assumption, backwards scattering quantum jumps need
not be considered. Note that non-Hermitian Hamiltonian
(13) is explicitly time dependent and stochastic, due to
the Monte-Carlo simulation of the atomic beam, and the
normalized conditional state is
|ψ_REC⟩ = |ψ̄_REC⟩ / √⟨ψ̄_REC|ψ̄_REC⟩.  (16)
C. Two-Quanta Truncation
Even as a quantum trajectory simulation, a full im-
plementation of our model faces difficulties. The Hilbert
space is enormous if we are to consider a few hundred
two-state atoms, and a smaller collective-state basis is
inappropriate, due to spontaneous emission and the cou-
pling of atoms to the cavity mode at unequal strengths.
If, on the other hand, the coherent excitation is suffi-
ciently weak, the Hilbert space may be truncated at the
two-quanta level. The conditional state is expanded as
|ψ_REC(t)⟩ = |00⟩ + α(t)|10⟩ + Σ_{j=1}^{N(t)} β_j(t)|0_j⟩ + η(t)|20⟩ + Σ_{j=1}^{N(t)} ζ_j(t)|1_j⟩ + Σ_{j>k=1}^{N(t)} ϑ_{jk}(t)|0_{jk}⟩,  (17)
where the state |n0〉 has n = 0, 1, 2 photons inside the
cavity and no atoms excited, |0j〉 has no photon inside
the cavity and the j th atom excited, |1j〉 has one photon
inside the cavity and the j th atom excited, and |0jk〉 is
the two-quanta state with no photons inside the cavity
and the j th and kth atoms excited.
The truncation is carried out at the minimum level per-
mitted in a treatment of two-photon correlations. Since
each expansion coefficient need be calculated to domi-
nant order in E/κ only, the non-Hermitian Hamiltonian
(13) may be simplified as
Ĥ_B(t)/iħ = E↠+ Σ_{j=1}^{N(t)} g(r_j(t))(â†σ̂_{j−} − âσ̂_{j+}) − κâ†â − (γ/2) Σ_{j=1}^{N(t)} σ̂_{j+}σ̂_{j−},  (18)
dropping the term −E â from the right-hand side. While
this self-consistent approximation is helpful in the ana-
lytical calculations reviewed in Sec. III, we do not bother
with it in the numerical simulations.
Truncation at the two-quanta level may be justified by
expanding the density operator, along with the master
equation, in powers of E/κ [25, 26, 53]. One finds that,
to dominant order, the density operator factorizes as a
pure state, thus motivating the simplification used in all
previous treatments of photon correlations in many-atom
cavity QED [27, 28]. The quantum trajectory formula-
tion provides a clear statement of the physical conditions
under which this approximation holds.
Consider first that there is a fixed number of atoms N
and their locations are also fixed. Under weak excitation,
the jump probabilities (15a) and (15b) are very small,
and quantum jumps are extremely rare. Then, in a time
of order 2(κ+γ/2)−1, the continuous evolution (12) takes
the conditional state to a stationary state, satisfying
ĤB |ψss〉 = 0, (19)
without being interrupted by quantum jumps. In view of
the overall rarity of these jumps, to a good approximation
the density operator is
ρss = |ψss〉〈ψss|, (20)
or, if we recognize now the role of the atomic beam,
the continuous evolution reaches a quasi-stationary state,
with density operator
ρ_ss = \overline{|ψ_qs(t)⟩⟨ψ_qs(t)|},  (21)
where |ψqs(t)〉 satisfies Eq. (12) (uninterrupted by quan-
tum jumps) and the overbar indicates an average over
the fluctuations of the atomic beam.
This picture of a quasi-stationary pure-state evolution
requires the time between quantum jumps to be much
larger than 2(κ+ γ/2)−1, the time to recover the quasi-
stationary state after a quantum jump has occurred. In
terms of photon scattering rates, we require
R_forwards + R_side ≪ (1/2)(κ + γ/2),  (22)
where
Rforwards = 2κ〈â†â〉REC, (23a)
Rside = γ
〈σ̂j+σ̂j−〉REC. (23b)
When considering delayed photon coincidences, after a
first forwards-scattered photon is detected, let us say at
time tk, the two-quanta truncation [Eq. (17)] is tem-
porarily reduced by the associated quantum jump to a
one-quanta truncation:
|ψ_REC(t_k)⟩ → |ψ_REC(t_k⁺)⟩,

where

|ψ_REC(t_k⁺)⟩ = |00⟩ + α(t_k⁺)|10⟩ + Σ_{j=1}^{N(t_k)} β_j(t_k⁺)|0_j⟩,  (24)

with

α(t_k⁺) = √2 η(t_k)/|α(t_k)|,   β_j(t_k⁺) = ζ_j(t_k)/|α(t_k)|.  (25)
Then the probability for a subsequent photon detection
at tk + τ is
P_forwards = 2κ|α(t_k + τ)|² Δt.  (26)
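In terms of the expansion coefficients, the collapse of Eqs. (24) and (25) is just a rescaling of the surviving amplitudes; a sketch (our helper, with amplitude names following Eq. (17)):

```python
import math

def forwards_collapse(alpha, eta, zeta):
    """Post-jump amplitudes of Eq. (25): alpha and eta are the one- and
    two-photon amplitudes, zeta the list of |1j> amplitudes at t_k."""
    a = abs(alpha)
    return math.sqrt(2.0) * eta / a, [z / a for z in zeta]
```

The two-quanta amplitudes η and ϑ are discarded by the jump, leaving a one-quanta state until the quasi-stationary state is rebuilt.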
Clearly, if this probability is to be computed accurately
(to dominant order) no more quantum jumps of any kind
should occur before the full two-quanta truncation has
been recovered in its quasi-stationary form; in the ex-
periment a forwards-scattered “start” photon should be
followed by a “stop” photon without any other scatter-
ing events in between. We discuss how well this condi-
tion is met by Rempe et al. [29] and Foster et al. [31]
in Sec. V. Its presumed validity is the basis for com-
paring their measurements with formulas derived for the
weak-field limit.
III. DELAYED PHOTON COINCIDENCES FOR
STATIONARY ATOMS
Before we move on to full quantum trajectory simula-
tions, including the Monte-Carlo simulation of the atomic
beam, we review previous calculations of the delayed
photon coincidence rate for forwards scattering with the
atomic motion neglected. Beginning with the original
calculation of Carmichael et al. [27], which assumes a
fixed number of atoms, denoted here by N̄eff , all cou-
pled to the cavity mode at strength gmax, we then relax
the requirement for equal coupling strengths [29]; finally
a Monte-Carlo average over the spatial configuration of
atoms, at fixed density ̺, is taken. The inadequacy of
modeling at this level is shown by comparing the com-
puted correlation functions with the reported data sets.
A. Ideal Collective Coupling
For an ensemble of N̄eff atoms located on the cavity
axis and at antinodes of the standing wave, the non-
Hermitian Hamiltonian (18) is taken over in the form
Ĥ_B/iħ = E↠+ g_max(â†Ĵ₋ − âĴ₊) − κâ†â − (γ/4)(Ĵ_z + N̄_eff),  (27)

where

Ĵ± ≡ Σ_{j=1}^{N̄_eff} σ̂_{j±},   Ĵ_z ≡ Σ_{j=1}^{N̄_eff} σ̂_{jz}  (28)
are collective atomic operators, and we have written
2σ̂j+σ̂j− = σ̂jz + 1. The conditional state in the two-
quanta truncation is now written more simply as
|ψREC(t)〉 = |00〉+ α(t)|10〉+ β(t)|01〉+ η(t)|20〉+ ζ(t)|11〉+ ϑ(t)|02〉, (29)
where |nm〉 is the state with n photons in the cavity and
m atoms excited, the m-atom state being a collective
state. Note that, in principle, side-scattering denies the
possibility of using a collective atomic state basis. While
spontaneous emission from a particular atom results in
the transition |n1〉 → σ̂j−|n1〉 → |n0〉, which remains
within the collective atomic basis, the state σ̂j−|n2〉 lies
outside it; thus, side-scattering works to degrade the
atomic coherence induced by the interaction with the cav-
ity mode. Nevertheless, its rate is assumed negligible in
the weak-field limit [Eq. (22)], and therefore a calculation
carried out entirely within the collective atomic basis is
permitted.
The delayed photon coincidence rate obtained from
|ψREC(tk)〉 = |ψss〉 and Eqs. (24) and (26) yields the
second-order correlation function [27, 28, 54]
g⁽²⁾(τ) = {1 − [2C₁(2Cξ/(1+ξ)) / (1 + 2C − 2C₁ξ/(1+ξ))] e^(−(κ+γ/2)τ/2) [cos(Ωτ) + ((κ+γ/2)/2Ω) sin(Ωτ)]}²,  (30)
with vacuum Rabi frequency
Ω = √(N̄_eff g²_max − (1/4)(κ − γ/2)²),  (31)
where
ξ ≡ 2κ/γ, (32)
C ≡ N̄_eff C₁,   C₁ ≡ g²_max/κγ.  (33)
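Equations (30)-(33) translate directly into code; the following sketch (ours, not the authors') evaluates the correlation function for rates given in common units:

```python
import math

def g2_ideal(tau, g_max, kappa, gamma, N_eff):
    """Second-order correlation function of Eq. (30), using the
    definitions of Eqs. (31)-(33)."""
    xi = 2.0 * kappa / gamma
    C1 = g_max**2 / (kappa * gamma)
    C = N_eff * C1
    s = 2.0 * C1 * xi / (1.0 + xi)           # 2*C1*xi/(1+xi)
    amplitude = 2.0 * C * s / (1.0 + 2.0 * C - s)
    Omega = math.sqrt(N_eff * g_max**2 - 0.25 * (kappa - gamma / 2.0)**2)
    envelope = math.exp(-0.5 * (kappa + gamma / 2.0) * tau)
    osc = (math.cos(Omega * tau)
           + (kappa + gamma / 2.0) / (2.0 * Omega) * math.sin(Omega * tau))
    return (1.0 - amplitude * envelope * osc)**2
```

For N_eff ≫ 1 the oscillation amplitude approaches 2C₁ξ/(1+ξ), and the function decays to unity at long delays.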
FIG. 1: Second-order correlation function for ideal coupling [Eq. (30)]: (a) Parameter Set 1, (b) Parameter Set 2.

For N̄_eff ≫ 1, as in Parameter Sets 1 and 2 (Table I), the deviation from second-order coherence—i.e., g⁽²⁾(τ) = 1—is set by 2C₁ξ/(1 + ξ) and provides a measure of the
single-atom coupling strength. For small time delays the
deviation is in the negative direction, signifying a photon
antibunching effect. It should be emphasized that while
second-order coherence serves as an unambiguous indica-
tor of strong coupling in the single-atom sense, vacuum
Rabi splitting—the frequency Ω—depends on the collec-
tive coupling strength alone.
Both experiments of interest are firmly within the
strong coupling regime, with 2C1ξ/(1+ ξ) = 1.2 for that
of Rempe et al. [29] (2C1 = 4.6), and 2C1ξ/(1+ ξ) = 4.0
for that of Foster et al. [31] (2C1 = 5.6). Figure 1 plots
the correlation function obtained from Eq. (30) for Pa-
rameter Sets 1 and 2. Note that since the expression is
a perfect square, the apparent photon bunching of curve
(b) is, in fact, an extrapolation of the antibunching ef-
fect of curve (a); the continued nonclassicality of the
correlation function is expressed through the first two
side peaks, which, being taller than the central peak, are
classically disallowed [26, 30]. A measurement of the in-
tracavity electric field perturbation following a photon
detection [the square root of Eq. (30)] presents a more
unified picture of the development of the quantum fluctu-
ations with increasing 2C1ξ/(1+ξ). Such a measurement
may be accomplished through conditional homodyne de-
tection [55, 56, 57].
In Fig. 1 the magnitude of the antibunching effect—
the amplitude of the vacuum Rabi oscillation— is larger
than observed in the experiments by approximately an
order of magnitude (see Fig. 3). Significant improvement
is obtained by taking into account the unequal coupling
strengths of atoms randomly distributed throughout the
cavity mode.
B. Fixed Atomic Configuration
Rempe et al. [29] extended the above treatment to the
case of unequal coupling strengths, adopting the non-
Hermitian Hamiltonian (18) while keeping the number
of atoms and the atom locations fixed. For N atoms in
a spatial configuration {rj}, the second-order correlation
function takes the same form as in Eq. (30)—still a per-
fect square—but with a modified amplitude of oscillation
[29, 58]:
g⁽²⁾_{rj}(τ) = {1 − [([1 + ξ(1 + C{rj})]S{rj} − 2C{rj}) / (1 + (1 + ξ/2)S{rj})] e^(−(κ+γ/2)τ/2) [cos(Ωτ) + ((κ+γ/2)/2Ω) sin(Ωτ)]}²,  (34)
C{rj} ≡ Σ_{j=1}^N C₁j,   C₁j ≡ g²(r_j)/κγ,  (35)

S{rj} ≡ Σ_{j=1}^N 2C₁j / [1 + ξ(1 + C{rj}) − 2ξC₁j],  (36)
where the vacuum Rabi frequency is given by Eq. (31)
with effective number of interacting atoms
N̄_eff → N^{rj}_eff ≡ Σ_{j=1}^N g²(r_j)/g²_max.  (37)
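To make the configuration-dependent quantities concrete, they can be computed as below (a sketch based on Eqs. (35)-(37), with the mode function of Eq. (7); `config_amplitude` is our helper, not the authors' code). For atoms placed at antinodes on the cavity axis it reproduces the equal-coupling oscillation amplitude of Eq. (30):

```python
import math

def config_amplitude(positions, g_max, w0, k, kappa, gamma):
    """C_{rj}, S_{rj}, N_eff^{rj} of Eqs. (35)-(37) and the oscillation
    amplitude appearing in Eq. (34), for atoms at (x, y, z)."""
    xi = 2.0 * kappa / gamma
    g = [g_max * math.cos(k * z) * math.exp(-(x**2 + y**2) / w0**2)
         for (x, y, z) in positions]
    C1 = [gj**2 / (kappa * gamma) for gj in g]
    C = sum(C1)
    S = sum(2.0 * c / (1.0 + xi * (1.0 + C) - 2.0 * xi * c) for c in C1)
    N_eff = sum(gj**2 for gj in g) / g_max**2
    amp = ((1.0 + xi * (1.0 + C)) * S - 2.0 * C) / (1.0 + (1.0 + xi / 2.0) * S)
    return C, S, N_eff, amp
```

With all atoms equally coupled the amplitude reduces to 4CC₁ξ/[(1+ξ)(1+2C) − 2C₁ξ], which is the amplitude of Eq. (30).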
C. Monte-Carlo Average and Comparison with
Experimental Results
In reality the number of atoms and their configuration
both fluctuate in time. These fluctuations are readily
taken into account if the typical atomic motion is suf-
ficiently slow; one takes a stationary-atom Monte-Carlo
average over configurations, adopting a finite interaction
volume VF and combining a Poisson average over the
number of atoms N with an average over their uniformly
distributed positions rj , j = 1, . . . , N . In particular, the
effective number of interacting atoms becomes
N̄_eff = \overline{N^{rj}_eff},  (38)
where the overbar denotes the Monte-Carlo average.
Although it is not justified by the velocities listed in
Table I, a stationary-atom approximation was adopted
when modeling the experimental results in Refs. [29]
and [31]. The correlation function was computed as the
Monte-Carlo average

g⁽²⁾(τ) = \overline{g⁽²⁾_{rj}(τ)},  (39)

with g⁽²⁾_{rj}(τ) given by Eq. (34). In fact, taking a Monte-Carlo average over normalized correlation functions in
this way is not, strictly, correct. In practice, first the
delayed photon coincidence rate is measured, as a sepa-
rate average, then subsequently normalized by the aver-
age photon counting rate. The more appropriate averag-
ing procedure is therefore
g⁽²⁾(τ) = \overline{⟨â†(0)â†(τ)â(τ)â(0)⟩_{rj}} / (\overline{⟨â†â⟩_{rj}})²,  (40)
or, in a form revealing more directly the relationship to
Eq. (34), the average is to be weighted by the square of
the photon number:
g⁽²⁾(τ) = \overline{⟨â†â⟩²_{rj} g⁽²⁾_{rj}(τ)} / (\overline{⟨â†â⟩_{rj}})²,  (41)
where

⟨â†â⟩_{rj} = (E/κ)² / (1 + 2C{rj})²  (42)

is the intracavity photon number expectation—in stationary state |ψss⟩ [Eq. (19)]—for the configuration of atoms {rj}.
Note that the statistical independence of forwards-
scattering events that are widely separated in time yields
the limit
g⁽²⁾_{rj}(τ) → 1,  (43)
which clearly holds for the average (39) as well. Equa-
tion (41), on the other hand, yields
g⁽²⁾(τ) → \overline{⟨â†â⟩²_{rj}} / (\overline{⟨â†â⟩_{rj}})² ≥ 1.  (44)
A value greater than unity arises because while there are
fluctuations in N and {rj}, their correlation time is in-
finite under the stationary-atom approximation; the ex-
pected decay of the correlation function to unity is there-
fore not observed.
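The difference between the two long-delay limits, Eqs. (43) and (44), can be seen with a toy average over configurations (a sketch; `long_delay_limits` is our illustrative helper):

```python
def long_delay_limits(photon_numbers):
    """Large-tau limits of the two averaging schemes. Normalizing each
    configuration first, Eq. (39), gives 1; averaging the coincidence
    rate before normalizing, Eq. (41), gives mean(n^2)/mean(n)^2,
    which is >= 1, cf. Eq. (44)."""
    m = len(photon_numbers)
    mean = sum(photon_numbers) / m
    mean_sq = sum(n * n for n in photon_numbers) / m
    return 1.0, mean_sq / mean**2

print(long_delay_limits([1.0, 3.0]))  # (1.0, 1.25)
```

Any spread in the configuration-dependent photon number makes the second limit exceed unity.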
The two averaging schemes are compared in the plots
of Fig. 2, which suggest that atomic beam fluctuations
should have at least a small effect in the experiments;
although, just how important they turn out to be is not
captured at all by the figure. The actual disagreement
between the model and the data is displayed in Fig. 3.
FIG. 2: Second-order correlation function with Monte-Carlo average over number of atoms N and configuration {rj}. The average is taken according to Eq. (39) (thin line) and Eq. (41) (thick line) for (a) Parameter Set 1, (b) Parameter Set 2.

The measured photon antibunching effect is significantly smaller than predicted in both experiments: smaller by a factor of 4 in Fig. 3(a), as the authors of Ref. [29] explicitly state, and by a factor of a little more than 2 in Fig. 3(b).
The rest of the paper is devoted to a resolution of this
disagreement. It certainly arises from a breakdown of the
stationary-atom approximation as suggested by Rempe
et al. [29]. Physics beyond the addition of a finite corre-
lation time for fluctuations of N(t) and {rj(t)} is needed,
however. We aim to show that the single most important
factor is the alignment of the atomic beam.
FIG. 3: Second-order correlation function with Monte-Carlo
average, Eq. (41), over number of atoms N and configuration
{rj} compared with the experimental data from (a) Fig. 4(a)
of Ref. [29] (Parameter Set 1) and (b) Fig. 4 of Ref. [31]
(Parameter Set 2).
IV. DELAYED PHOTON COINCIDENCES FOR
AN ATOMIC BEAM
We return now to the full atomic beam simulation out-
lined in Sec. II. With the beam perpendicular to the
cavity axis, the rate of change of the dipole coupling con-
stants might be characterized by the cavity-mode transit
time, determined from the mean atomic speed and the
cavity-mode waist. Taking the values of these quanti-
ties from Table I, the experiment of Rempe et al. has
w0/v̄source = 182nsec, which should be compared with
a vacuum-Rabi-oscillation decay time 2(κ + γ/2)−1 =
94nsec, while Foster et al. have w0/v̄source = 66nsec and
a decay time 2(κ+ γ/2)−1 = 29nsec. In both cases, the
ratio between the transit time and decay time is ∼ 2;
thus, we might expect the internal state dynamics to fol-
low the atomic beam fluctuations adiabatically, to a good
approximation at least, thus providing a justifying for the
stationary-atom approximation. Figure 3 suggests that
this is not so. Our first task, then, is to see how well in
practice the adiabatic following assertion holds.
A. Monte-Carlo Simulation of the Atomic Beam:
Effect of Beam Misalignment
Atomic beam fluctuations induce fluctuations of the
intracavity photon number expectation, as illustrated
by the examples in Figs. 4 and 5. Consider the two
curves (a) in these figures first, where the atomic beam
is aligned perpendicular to the cavity axis. The ring-
ing at regular intervals along these curves is the tran-
sient response to enforced cavity-mode quantum jumps—
jumps enforced to sample the quantum fluctuations effi-
ciently (see Sec. IVB). Ignoring these perturbations for
the present, we see that with the atomic beam aligned
perpendicular to the cavity axis the fluctuations evolve
more slowly than the vacuum Rabi oscillation—at a simi-
lar rate, in fact, to the vacuum Rabi oscillation decay. As
anticipated, an approximate adiabatic following is plau-
sible.
Consider now the two curves (b); these introduce a
9.6mrad misalignment of the atomic beam, following up
on the comment of Foster et al. [31] that misalignments
as large as 1◦ (17.45mrad) might occur. The changes in
the fluctuations are dramatic. First, their size increases,
though by less on average than it might appear. The
altered distributions of intracavity photon numbers are
shown in Fig. 6. The means are not so greatly changed,
but the variances (measured relative to the square of the
mean) increase by a factor of 2.25 in Fig. 4 and 1.45
in Fig. 5. Notably, the distribution is asymmetric, so
the most probable photon number lies below the mean.
The asymmetry is accentuated by the tilt, especially for
Parameter Set 1 [Fig. 6(a)].
More important than the change in amplitude of the
fluctuations, though, is the increase in their frequency.
Again, the most significant effect occurs for Parameter
Set 1 (Fig. 4), where the frequency with a 9.6mrad tilt
approaches that of the vacuum Rabi oscillation itself;
clearly, there can be no adiabatic following under these
conditions. Indeed, the net result of the changes from
Fig. 4(a) to Fig. 4(b) is that the quantum fluctuations,
initiated in the simulation by quantum jumps, are com-
pletely lost in a background of classical noise generated
by the atomic beam. It is clear that an atomic beam
misalignment of sufficient size will drastically reduce the
photon antibunching effect observed.
For a more quantitative characterization of its effect,
we carried out quantum trajectory simulations in a one-
quantum truncation (without quantum jumps) and com-
puted the semiclassical photon number correlation function

g⁽²⁾_sc(τ) = \overline{⟨(â†â)(t)⟩_REC ⟨(â†â)(t + τ)⟩_REC} / (\overline{⟨(â†â)(t)⟩_REC})²,  (45)
where the overbar denotes a time average (in practice
an average over an ensemble of sampling times tk). The
photon number expectation was calculated in two ways:
FIG. 4: Typical trajectory of the intracavity photon number
expectation for Parameter Set 1: (a) atomic beam aligned
perpendicular to the cavity axis, (b) with a 9.6mrad tilt of the
atomic beam. The driving field strength is E/κ = 2.5× 10−2.
FIG. 5: As in Fig. 4 but for Parameter Set 2.
FIG. 6: Distribution of intracavity photon number expecta-
tion with the atom beam perpendicular to the cavity axis
(thin line) and a 9.6mrad tilt of the atomic beam (thick line):
(a) Parameter Set 1, (b) Parameter Set 2.
first, by assuming that the conditional state adiabatically
follows the fluctuations of the atomic beam, in which
case, from Eq. (42), we may write
⟨(â†â)(t)⟩_REC = (E/κ)² / [1 + 2C{rj(t)}]²,  (46)
and second, without the adiabatic assumption, in which
case the photon number expectation was calculated from
the state vector in the normal way.
Correlation functions computed for different atomic
beam tilts according to this scheme are plotted in Figs. 7
and 8. In each case the curves shown in the left column
assume adiabatic following while those in the right col-
umn do not. The upper-most curves [frames (a) and (e)]
hold for a beam aligned perpendicular to the cavity axis
and those below [frames (b)–(d) and (f)–(h)] show the
effects of increasing misalignment of the atomic beam.
A number of comments are in order. Consider first the
aligned atomic beam. Correlation times read from the
figures are in approximate agreement with the cavity-
mode transit times computed above: the numbers are
191nsec and 167nsec from frames (a) and (e), respec-
tively, of Fig. 7, compared with w0/v̄oven = 182nsec; and
68nsec and 53nsec from frames (a) and (e) of Fig. 8, re-
spectively, compared with w0/v̄oven = 66nsec. The num-
bers show a small decrease in the correlation time when
the adiabatic following assumption is lifted (by 10-20%)
but no dramatic change; and there is a corresponding
small increase in the fluctuation amplitude.
Consider now the effect of an atomic beam tilt. Here
the changes are significant. They are most evident in
frames (d) and (h) of each figure, but clear already in
frames (c) and (g) of Fig. 7, and frames (b) and (f) of
Fig. 8, where the tilts are close to the tilt used to generate
Figs. 4(b) and 5(b) (also to those used for the data fits in
Sec. IVB). There is first an increase in the magnitude of
the fluctuations—the factors 2.25 and 1.45 noted above—
but, more significant, a separation of the decay into two
FIG. 7: Semiclassical correlation function for Parameter Set 1,
with adiabatic following of the photon number (left column)
and without adiabatic following (right column); for atomic
beam tilts of (a,e) 0mrad, (b,f) 4mrad, (c,g) 9mrad, (d,h)
13mrad.
pieces: a central component, with short correlation time,
and a much broader component with correlation time
larger than w0/v̄oven. Thus, for a misaligned atomic
beam, the dynamics become notably nonadiabatic.
Our explanation of the nonadiabaticity begins with the
observation that any tilt introduces a velocity compo-
nent along the standing wave, with transit times through
a quarter wavelength of λ/(4v̄_oven sin θ) = 86 nsec in the Rempe et al. [29] experiment and λ/(4v̄_oven sin θ) = 60 nsec
the transit time w0/v̄oven, these numbers have moved
closer to the decay times of the vacuum Rabi oscillation—
94nsec and 29nsec, respectively. Note that the distances
traveled through the standing wave during the cavity-
mode transit, in time w0/v̄oven, are w0 sin θ = 0.53λ (Pa-
rameter Set 1) and w0 sin θ = 0.28λ (Parameter Set 2).
It is difficult to explain the detailed shape of the correla-
tion function under these conditions. Speaking broadly,
though, fast atoms produce the central component, the
short correlation time associated with nonadiabatic dy-
namics, while slow atoms produce the background com-
ponent with its long correlation time, which follows from
an adiabatic response. Increased tilt brings greater sep-
aration between the responses to fast and slow atoms.
Simple functional fits to the curves in frame (g) of
Fig. 7 and frame (f) of Fig. 8 yield short correlation times
FIG. 8: As in Fig. 7 but for Parameter Set 2 and atomic
beam tilts of (a,e) 0mrad, (b,f) 10mrad, (c,g) 17mrad, (d,h)
34mrad.
of 40-50nsec and 20nsec, respectively. Consistent num-
bers are recovered by adding the decay rate of the vac-
uum Rabi oscillation to the inverse travel time through a
quarter wavelength; thus, (1/94+1/86)−1nsec = 45nsec
and (1/29+ 1/60)−1nsec = 20nsec, respectively, in good
agreement with the correlation times deduced from the
figures.
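The rate-addition estimate just quoted is simple to check (times in nsec, taken from the text):

```python
# Add the vacuum-Rabi decay rate and the inverse quarter-wavelength
# transit time as rates, then invert to get the correlation time.
rempe = 1.0 / (1.0 / 94.0 + 1.0 / 86.0)    # Parameter Set 1
foster = 1.0 / (1.0 / 29.0 + 1.0 / 60.0)   # Parameter Set 2
print(round(rempe), round(foster))  # 45 20
```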
The last and possibly most important thing to note
is the oscillation in frames (g) and (h) of Fig. 7 and
frame (h) of Fig. 8. Its frequency is the vacuum Rabi
frequency, which shows unambiguously that the oscilla-
tion is caused by a nonadiabatic response of the intra-
cavity photon number to the fluctuations of the atomic
beam. For the tilt used in frame (g) of Fig. 7, the tran-
sit time through a quarter wavelength is approximately
equal to the vacuum-Rabi-oscillation decay time, while it
is twice that in frame (f) of Fig. 8. As the tilts used are
close to those giving the best data fits in Sec. IVB, this
would suggest that atomic beam misalignment places the
experiment of Rempe et al. [29] further into the nonadi-
abatic regime than that of Foster et al. [31], though the
tilt is similar in the two cases. The observation is consis-
tent with the greater contamination by classical noise in
Fig. 4(b) than in Fig. 5(b) and with the larger departure
of the Rempe et al. data from the stationary-atom model
in Fig. 3.
B. Simulation Results and Data Fits
The correlation functions in the right-hand column of
Figs. 7 and 8 account for atomic-beam-induced classi-
cal fluctuations of the intracavity photon number. While
some exhibit a vacuum Rabi oscillation, the signals are, of
course, photon bunched; a correlation function like that
of Fig. 7(g) provides evidence of collective strong cou-
pling, but not of strong coupling of the one-atom kind,
for which a photon antibunching effect is needed. We
now carry out full quantum trajectory simulations in a
two-quanta truncation to recover the photon antibunch-
ing effect—i.e., we bring back the quantum jumps.
In the weak-field limit the normalized photon corre-
lation function is independent of the amplitude of the
driving field E [Eqs. (30) and (34)]. The forwards photon
scattering rate itself is proportional to (E/κ)2 [Eq. (42)],
and must be set in the simulations to a value very much
smaller than the inverse vacuum-Rabi-oscillation decay
time [Eq. (22)]. Typical values of the intracavity pho-
ton number were ∼ 10−7 − 10−6. It is impractical, un-
der these conditions, to wait for the natural occurrence
of forwards-scattering quantum jumps. Instead, cavity-
mode quantum jumps are enforced at regular sample
times tk [see Figs. 4(a) and 5(a)]. Denoting the record
with enforced cavity-mode jumps by REC, the second-
order correlation function is then computed as the ratio
of ensemble averages
g⁽²⁾(τ) = \overline{⟨(â†â)(t_k)⟩_REC ⟨(â†â)(t_k + τ)⟩_REC} / (\overline{⟨(â†â)(t_l)⟩_REC})²,  (47)
where the sample times in the denominator, tl, are cho-
sen to avoid the intervals—of duration a few correlation
times—immediately after the jump times tk; this ensures
that both ensemble averages are taken in the steady state.
With the cut-off parameter [Eq. (6)] set to F = 0.01, the
number of atoms within the interaction volume typically
fluctuates around N(t) ∼ 400-450 atoms for Parameter
Set 1 and N(t) ∼ 280-320 atoms for Parameter Set 2;
in a two-quanta truncation, the corresponding numbers
of state amplitudes are ∼ 90, 000 (Parameter Set 1) and
∼ 45, 000 (Parameter Set 2).
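The state counts quoted above follow from the expansion of Eq. (17): for N atoms there are amplitudes for |00⟩, |10⟩, |20⟩, N states |0j⟩, N states |1j⟩, and N(N−1)/2 states |0jk⟩ (a quick check, our code):

```python
def two_quanta_basis_size(N):
    """Number of amplitudes in the two-quanta expansion, Eq. (17)."""
    return 3 + 2 * N + N * (N - 1) // 2

print(two_quanta_basis_size(420))  # 88833, i.e. ~90,000 (Parameter Set 1)
print(two_quanta_basis_size(300))  # 45453, i.e. ~45,000 (Parameter Set 2)
```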
Figure 9 shows the computed correlation functions for
various atomic beam tilts. We select from a series of such
results the one that fits the measured correlation function
most closely. Optimum tilts are found to be 9.7mrad
for the Rempe et al. [29] experiment and 9.55mrad for
the experiment of Foster et al. [31]. The best fits are
displayed in Fig. 10. In the case of the Foster et al. data
the fit is extremely good. The only obvious disagreement
is that the fitted frequency of the vacuum Rabi oscillation
is possibly a little low. This could be corrected by a small
increase in atomic beam density—the parameter N̄eff—
which is only known approximately from the experiment,
in fact by fitting the formula (31) to the data.
The fit to the data of Rempe et al. [29] is not quite
so good, but still convincing with some qualifications.
Note, in particular, that the tilt used for the fit might be
judged a little too large, since the three central minima
in Fig. 10(a) are almost flat, while the data suggest they
should more closely follow the curve of a damped oscil-
lation. As the thin line in the figure shows, increasing
the tilt raises the central minimum relative to the two
on the side; thus, although a better fit around κτ = 0
is obtained, the overall fit becomes worse. This trend
results from the sharp maximum in the semiclassical cor-
relation function of Fig. 7(g), which becomes more and
more prominent as the atomic beam tilt is increased.
The fit of Fig. 10(b) is extremely good, and, although it
is not perfect, the thick line in Fig. 10(a), with a 9.7mrad
tilt, agrees moderately well with the data once the un-
certainty set by shot noise is included, i.e., adding error
bars of a few percent (see Fig. 13). Thus, leaving aside
possible adjustments due to omitted noise sources, such
as spontaneous emission—to which we return in Sec. V—
and atomic and cavity detunings, the results of this and
the last section provide strong support for the proposal
that the disagreement between theory and experiment
presented in Fig. 3 arises from an atomic beam misalign-
ment of approximately 0.5◦.
One final observation should be made regarding the
fit to the Rempe et al. [29] data. Figure 11 replots
the comparison made in Fig. 10(a) for a larger range
of time delays. Frame (a) plots the result of our sim-
ulation for a perfectly aligned atomic beam, and frames
(b) and (c) show the results, plotted in Fig. 10(a), cor-
responding to atomic beam tilts of θ = 9.7mrad and
10mrad, respectively. The latter two plots are overlaid
by the experimental data. Aside from the reduced am-
plitude of the vacuum Rabi oscillation, in the presence
of the tilt the correlation function exhibits a broad back-
ground arising from atomic beam fluctuations. Notably,
the background is entirely absent when the atomic beam
FIG. 9: Second-order correlation function from full quantum
trajectory simulations with a two-quanta truncation: (a) Pa-
rameter Set 1 and θ = 0mrad (thick line), 7mrad (medium
line), 12mrad (thin line); (b) Parameter Set 2 and θ = 0mrad
(thick line), 10mrad (medium line), 17mrad (thin line).
FIG. 10: Best fits to experimental results: (a) data from
Fig. 4(a) of Ref. [29] are fitted with Parameter Set 1 and
θ = 9.7mrad (thick line) and 10mrad (thin line); (b) data
from Fig. 4 of Ref. [31] are fitted with Parameter Set 2 and θ =
9.55mrad. Averages of (a) 200,000 and (b) 50,000 samples
were taken with a cavity-mode cut-off F = 0.01.
is aligned. The experimental data exhibit just such a
background (Fig. 3(a) of Ref. [29]); moreover, an esti-
mate, from Fig. 11, of the background correlation time
yields approximately 400nsec, consistent with the exper-
imental measurement. It is significant that this number
is more than twice the transit time, w0/v̄oven = 182nsec,
and therefore not explained by a perpendicular transit
across the cavity mode. In fact the background mimics
the feature noted for larger tilts in Figs. 7 and 8; as men-
tioned there, it appears to find its origin in the separa-
tion of an adiabatic (slowest atoms) from a nonadiabatic
(fastest atoms) response to the density fluctuations of the
atomic beam.
Note, however, that a correlation time of 400nsec ap-
pears to be consistent with a perpendicular transit across
the cavity when the cavity-mode transit time is defined
as 2w0/v̄oven = 364nsec, or, using the peak rather than
average velocity, as 4w0/(√π v̄oven) = 411nsec; the latter
definition was used to arrive at the 400nsec quoted in
Ref. [29]. There is, of course, some ambiguity in how a
transit time should be defined. We are assuming that the
time to replace an ensemble of interacting atoms with a
statistically independent one—which ultimately is what
determines the correlation time—is closer to w0/v̄oven
than 2w0/v̄oven. In support of the assumption we recall
that the number obtained in this way agrees with the
semiclassical correlation function for an aligned atomic
beam [Figs. 7 and 8, frame (a)].
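The competing transit-time definitions reduce to simple arithmetic; the sketch below uses only the quoted ratio w0/v̄oven = 182 nsec, so no other experimental parameters are assumed.

```python
# Compare the cavity-transit-time definitions discussed above, using only
# the quoted ratio w0 / v_oven = 182 nsec.
import math

t0 = 182.0  # nsec, w0 / v_oven

print(round(2 * t0))                       # 2 w0 / v_oven -> 364 nsec
print(round(4 * t0 / math.sqrt(math.pi)))  # 4 w0 / (sqrt(pi) v_oven) -> 411 nsec
```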
C. Mean-Doppler-Shift Compensation
Foster et al. [31], in an attempt to account for the
disagreement of their measurements and the stationary-
atom model, extended the results of Sec. III B to include
FIG. 11: Second-order correlation function from full quantum
trajectory simulations with a two-quanta basis for Parameter
Set 1 and (a) θ = 0mrad, (b) θ = 9.7mrad, (c) θ = 10mrad.
Averages of (a) 15,000, and (b) and (c) 200,000 samples were
taken with a cavity-mode cut-off F = 0.01.
an atomic detuning. They then fitted the data using
the following procedure: (i) the component of atomic
velocity along the cavity axis is viewed as a Doppler shift
from the stationary-atom resonance, (ii) the mean shift is
assumed to be offset by an adjustment of the driving field
frequency (tuning to moving atoms) at the time the data
are taken, and (iii) an average over residual detunings—
deviations from the mean—is taken in the model, i.e.,
the detuning-dependent generalization of Eq. (34). The
approach yields a reasonable fit to the data (Fig. 6 of
Ref. [31]).
The principal difficulty with this approach is that a
standing-wave cavity presents an atom with two Doppler
shifts, not one. It seems unlikely, then, that adjusting
the driving field frequency to offset one shift and not the
other could compensate for even the average effect of the
atomic beam tilt. This difficulty is absent in a ring cavity,
though, so we first assess the performance of the outlined
prescription in the ring-cavity case.
In a ring cavity, the spatial dependence of the coupling
constant [Eq. (11)] is replaced by

g(rj(t)) = (gmax/√2) exp[ikzj(t)] exp{−[x²j(t) + y²j(t)]/w²0},

where the factor √2 ensures that the collective coupling
strength and vacuum Rabi frequency remain the same.
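The normalization can be checked directly: replacing the standing-wave cos(kz) by a running wave divided by √2 leaves the axially averaged squared coupling unchanged, which is what preserves the collective coupling strength. A quick numerical check:

```python
# Check that replacing the standing-wave cos(k z) by a running wave with
# a 1/sqrt(2) normalization preserves the axially averaged |g|^2, hence
# the collective coupling strength and the vacuum Rabi frequency.
import numpy as np

k = 2 * np.pi
z = np.linspace(0.0, 1.0, 100001)        # one wavelength

standing = np.cos(k * z) ** 2            # |g|^2 / g_max^2, standing wave
ring = np.abs(np.exp(1j * k * z) / np.sqrt(2)) ** 2

print(standing.mean())                   # approaches 1/2
print(ring.mean())                       # exactly 1/2
```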
Figure 12(a) shows the result of a numerical implemen-
tation of the proposed mean-Doppler-shift compensation
for an atomic beam tilt of 17.3mrad, as used in Fig. 6 of
Ref. [31]. It works rather well. The compensated curve
(thick line) almost recovers the full photon antibunching
FIG. 12: Doppler-shift compensation for a misaligned atomic
beam in (a) ring and (b) standing-wave cavities (Parameter
Set 2). The second-order correlation function is computed
with the atomic beam perpendicular to the cavity axis (thin
line), a 17.3mrad tilt of the atomic beam (medium line), and
a 17.3mrad tilt plus compensating detuning of the cavity and
stationary atom resonances ∆ω/κ = kv̄oven sin θ/κ = 0.916
(thick line).
effect that would be seen with an aligned atomic beam
(thin line). The degradation that remains is due to the
uncompensated dispersion of velocities (Doppler shifts)
in the atomic beam.
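The compensating detuning quoted in the caption of Fig. 12 scales with the tilt as ∆ω/κ = (kv̄oven/κ) sin θ. In the sketch below the prefactor is inferred from the quoted point (∆ω/κ = 0.916 at θ = 17.3mrad) rather than taken from Table I, which lies outside this excerpt.

```python
# Compensating detuning Delta_omega / kappa = (k * v_oven / kappa) * sin(theta).
# The prefactor is inferred from the quoted value 0.916 at 17.3 mrad,
# not taken from Table I.
import math

kv_over_kappa = 0.916 / math.sin(17.3e-3)  # about 53

def detuning(theta_mrad):
    """Compensating detuning Delta_omega/kappa for a tilt given in mrad."""
    return kv_over_kappa * math.sin(theta_mrad * 1e-3)

print(round(kv_over_kappa, 1))
print(round(detuning(9.55), 3))  # tilt used for the Foster et al. best fit
```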
For the case of a standing-wave cavity, on the other
hand, the outcome is entirely different. This is shown by
Fig. 12(b). There, offsetting one of the two Doppler shifts
only makes the degradation of the photon antibunching
effect worse. In fact, we find that any significant detuning
of the driving field from the stationary atom resonance
is highly detrimental to the photon antibunching effect
and inconsistent with the Foster et al. data.
V. INTRACAVITY PHOTON NUMBER
The best fits displayed in Fig. 10 were obtained from
simulations with a two-quanta truncation and premised
upon the measurements being made in the weak-field
limit. The strict requirement of the limit sets a severe
constraint on the intracavity photon number. We con-
sider now whether the requirement is met in the experi-
ments.
Working from Eqs. (23a) and (23b), and the solu-
tion to Eq. (19), a fixed configuration {rj} of N atoms
(Sec. III B) yields photon scattering rates [27, 28, 53]
Rforwards = 2κ〈â†â〉REC = 2κ (Ē/κ)²/(1 + 2C{rj})², (49a)

Rside = γ Σk 〈σ̂k+σ̂k−〉 = γ Σk [2g(rk)/γ]² (Ē/κ)²/(1 + 2C{rj})²
      = 2C{rj} 2κ〈â†â〉REC, (49b)

with ratio

Rside/Rforwards = 2C{rj} = (2/κγ) Σj g²(rj) ∼ 2N̄eff g²max/κγ. (50)
The weak-field limit [Eq. (22)] requires that the greater
of the two rates be much smaller than ½(κ + γ/2); it is
not necessarily sufficient that the forwards scattering rate
not necessarily sufficient that the forwards scattering rate
be low. The side scattering (spontaneous emission) rate
is larger than the forwards scattering rate in both of the
experiments being considered—larger by a factor of
70–80. Thus, from Eqs. (49a) and (50), the constraint on
intracavity photon number may be written as

〈â†â〉 ≪ (1 + γ/2κ)/(8N̄eff g²max/κγ), (51)
where, from Table I, the right-hand side evaluates as
1.2×10−2 for Parameter Set 1 and 4.7×10−3 for Param-
eter Set 2, while the intracavity photon numbers inferred
from the experimental count rates are 3.8×10−2 [29] and
7.6 × 10−3 [31]. It seems that neither experiment satis-
fies condition (51). As an important final step we should
therefore relax the weak-driving-field assumption (pho-
ton number ∼ 10−7–10−6 in the simulations) and assess
what effect this has on the data fits; can the simulations
fit the inferred intracavity photon numbers as well?
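The right-hand side of the bound (51) is a one-line function of the cavity-QED rates. In the sketch below the rates passed in the example call are illustrative placeholders, since Table I lies outside this excerpt.

```python
# Right-hand side of the weak-field bound, Eq. (51):
#   <a^dag a>  <<  (1 + gamma/2kappa) / (8 N_eff g_max^2 / kappa gamma).
# The rates in the example call are illustrative placeholders, not the
# Table I values.

def photon_number_bound(n_eff, g_max, kappa, gamma):
    two_c = 2.0 * n_eff * g_max**2 / (kappa * gamma)  # 2C of Eq. (50)
    return (1.0 + gamma / (2.0 * kappa)) / (4.0 * two_c)

print(photon_number_bound(n_eff=1.0, g_max=1.0, kappa=1.0, gamma=1.0))  # 0.1875
```

Note that the bound scales inversely with N̄eff, so the many-atom experiments considered here are constrained far more tightly than a one-atom realization would be.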
To address this question we extended our simulations
to a three-quanta truncation of the Hilbert space with
cavity-mode cut-off changed from F = 0.01 to F = 0.1.
With the changed cut-off the typical number of atoms
in the interaction volume is halved: N(t) ∼ 180–220
atoms for Parameter Set 1 and N(t) ∼ 150–170 atoms
for Parameter Set 2, from which the numbers of state
amplitudes (including three-quanta states) increase to
1,300,000 and 700,000, respectively. The new cut-off
introduces a small error in N̄eff , hence in the vacuum
Rabi frequency, but the error is no larger than one or
two percent.
At this point an additional approximation must be
made. At the excitation levels of the experiments, even a
three-quanta truncation is not entirely adequate. Clumps
of three or more side-scattering quantum jumps can oc-
cur, and these are inaccurately described in a three-
quanta basis. In an attempt to minimize the error, we ar-
tificially restrict (through a veto) the number of quantum
jumps permitted within some prescribed interval of time.
The accepted number was set at two and the time inter-
val to 1κ−1 for Parameter Set 1 and 3κ−1 for Parameter
FIG. 13: Second-order correlation function from full quan-
tum trajectory simulations with a three-quanta truncation
and atomic beam tilts as in Fig. 10: (a) Parameter Set 1, mean
intracavity photon number 〈a†a〉 = 6.7×10−3; (b) Parameter
Set 2, mean intracavity photon numbers 〈a†a〉 = 2.2 × 10−4,
5.7×10−4, 1.1×10−3, and 1.7×10−3 (thickest curve to thinnest
curve). Averages of 20,000 samples were taken with a cavity-
mode cut-off F = 0.1. Shot noise error bars are added to the
data taken from Ref. [29].
Set 2 (the correlation time measured in cavity lifetimes is
longer for Parameter Set 2). With these settings approx-
imately 10% of the side-scattering jumps were neglected
at the highest excitation levels considered.
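The jump veto described above amounts to simple bookkeeping over accepted jump times. The routine below is a schematic reconstruction, not the simulation code, and the jump times and window are made up for illustration (in units of the cavity lifetime).

```python
# Sketch of the quantum-jump veto: a side-scattering jump is accepted only
# if fewer than `max_jumps` jumps were already accepted within the
# preceding window. This is bookkeeping only, not the trajectory
# simulation itself; times are illustrative.

def apply_veto(jump_times, window, max_jumps=2):
    accepted = []
    for t in sorted(jump_times):
        recent = [s for s in accepted if t - s < window]
        if len(recent) < max_jumps:
            accepted.append(t)
    return accepted

times = [0.0, 0.1, 0.2, 1.5, 1.6, 1.65, 1.7]
print(apply_veto(times, window=1.0))   # [0.0, 0.1, 1.5, 1.6]
```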
The results of our three-quanta simulations appear in
Fig. 13; they use the optimal atomic beam tilts of Fig. 10.
Figure 13(a) compares the simulation with the data of
Rempe et al. [29] at an intracavity photon number that
is approximately six times smaller than what we estimate
for the experiment (a more realistic simulation requires a
higher level of truncation and is impossible for us to han-
dle numerically). The overall fit in Fig. 13 is as good as
that in Fig. 10, with a slight improvement in the relative
depths of the three central minima. A small systematic
disagreement does remain, however. We suspect that the
atomic beam tilt used is actually a little large, while the
contribution to the decoherence of the vacuum Rabi os-
cillation from spontaneous emission should be somewhat
more. We are satisfied, nevertheless, that the data of
Rempe et al. [29] are adequately explained by our model.
Results for the experiment of Foster et al. [31] lead
in a rather different direction. They are displayed in
Fig. 13(b), where four different intracavity photon num-
bers are considered. The lowest, 〈â†â〉 = 2.2 × 10−4,
reproduces the weak-field result of Fig. 10(b). As the
photon number is increased, the fit becomes progressively
worse. Even at the very low value of 5.7× 10−4 intracav-
ity photons, spontaneous emission raises the correlation
function for zero delay by a noticeable amount. Then we
obtain g(2)(0) > 1 at the largest photon number consid-
ered. Somewhat surprisingly, even this photon number,
〈â†â〉 = 1.7×10−3, is smaller than that estimated for the
experiment—smaller by a factor of five. Our simulations
therefore disagree significantly with the measurements,
despite the near perfect fit of Fig. 10(b). The simplest
resolution would be for the estimated photon number to
be too high. A reduction by more than an order of mag-
nitude is needed, however, implying an unlikely error,
considering the relatively straightforward method of in-
ference from photon counting rates. This anomaly, for
the present, remains unresolved.
VI. CONCLUSIONS
Spatial variation of the dipole coupling strength has
for many years been a particular difficulty for cavity
QED at optical frequencies. The small spatial scale
set by the optical wavelength makes any approach to a
resolution a formidable challenge. There has neverthe-
less been progress made with cooled and trapped atoms
[13, 14, 32, 59, 60, 61], and in semiconductor systems
[15, 16, 17] where the participating ‘atoms’ are fixed.
The earliest demonstrations of strong coupling at opti-
cal frequencies employed standing-wave cavities and ther-
mal atomic beams, where control over spatial degrees of
freedom is limited to the alignment of the atomic beam.
Of particular note are the measurements of photon anti-
bunching in forwards scattering [29, 30, 31]. They pro-
vide a definitive demonstration of strong coupling at the
one-atom level; although many atoms might couple to
the cavity mode at any time, a significant photon anti-
bunching effect occurs only when individual atoms are
strongly coupled.
Spatial effects pose difficulties of a theoretical na-
ture as well. Models that ignore them can point the
direction for experiments, but fail, ultimately, to ac-
count for experimental results. In this paper we have
addressed a long-standing disagreement of this kind—
disagreement between the theory of photon antibunch-
ing in forwards scattering for stationary atoms in a cav-
ity [25, 26, 27, 28, 29] and the aforementioned experi-
ments [29, 30, 31]. Ab initio quantum trajectory sim-
ulations of the experiments have been carried out, in-
cluding a Monte-Carlo simulation of the atomic beam.
Importantly, we allow for a misalignment of the atomic
beam, since this was recognized as a critical issue in
Ref. [31]. We conclude that atomic beam misalignment
is, indeed, the most likely reason for the degradation
of the measured photon antibunching effect from pre-
dicted results. Working first with a two-quanta trunca-
tion, suitable for the weak-field limit, data sets measured
by Rempe et al. [29] and Foster et al. [31] were fitted
best by atomic beam tilts from perpendicular to the cav-
ity axis of 9.7mrad and 9.55mrad, respectively.
Atomic motion is recognized as a source of decorrela-
tion omitted from the model used to fit the measurements
in Ref. [29]. We found that the mechanism is more com-
plex than suggested there, however. An atomic beam
tilt of sufficient size results in a nonadiabatic response of
the intracavity photon number to the inevitable density
fluctuations of the beam. Thus classical noise is writ-
ten onto the forwards-scattered photon flux, obscuring
the antibunched quantum fluctuations. The parameters
of Ref. [29] are particularly unfortunate in this regard,
since the nonadiabatic response excites a bunched vac-
uum Rabi oscillation, which all but cancels out the anti-
bunched oscillation one aims to measure.
Although both of the experiments modeled operate at
relatively low forwards scattering rates, neither is strictly
in the weak-field limit. We have therefore extended our
simulations—subject to some numerical constraints—to
assess the effects of spontaneous emission. The fit to
the Rempe et al. data [29] was slightly improved. We
noted that the optimum fit might plausibly be obtained
by adopting a marginally smaller atomic beam tilt and
allowing for greater decorrelation from spontaneous emis-
sion, though a more efficient numerical method would be
required to verify this possibility. The fit to the Fos-
ter et al. data [31] was highly sensitive to spontaneous
emission. Even for an intracavity photon number five
times smaller than the estimate for the experiment, a
large disagreement with the measurement appeared. No
explanation of the anomaly has been found.
We have shown that cavity QED experiments can call
for elaborate and numerically intensive modeling before
a full understanding, at the quantitative level, is reached.
Using quantum trajectory methods, we have significantly
increased the scope for realistic modeling of cavity QED
with atomic beams. While we have shown that atomic
beam misalignment has significantly degraded the mea-
surements in an important set of experiments in the
field, this observation leads equally to a positive con-
clusion: potentially, nonclassical photon correlations in
cavity QED can be observed at a level at least ten times
higher than so far achieved.
Acknowledgements
This work was supported by the NSF under Grant No.
PHY-0099576 and by the Marsden Fund of the RSNZ.
[1] P. R. Berman, ed., Cavity Quantum Electrodynamics,
Advances in Atomic, Molecular, and Optical Physics,
Supplement 2 (Academic Press, San Diego, 1994).
[2] J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod.
Phys. 73, 565 (2001).
[3] H. Mabuchi and A. Doherty, Science 298, 1372 (2002).
[4] K. J. Vahala, Nature 424, 839 (2003).
[5] G. Khitrova, H. M. Gibbs, M. Kira, S. W. Koch, and
A. Scherer, Nature Physics 2, 81 (2006).
[6] H. J. Carmichael, Statistical Methods in Quantum Optics
2: Nonclassical Fields (Springer, Berlin, 2007), pp. 561–
[7] J. J. Sanchez-Mondragon, N. B. Narozhny, and J. H.
Eberly, Phys. Rev. Lett. 51, 550 (1983).
[8] G. S. Agarwal, Phys. Rev. Lett. 53, 1732 (1984).
[9] M. Brune, F. Schmidt-Kaler, A. Maali, J. Dreyer, E. Ha-
gley, J. M. Raimond, and S. Haroche, Phys. Rev. Lett.
76, 1800 (1996).
[10] F. Bernardot, P. Nussenzveig, M. Brune, J. M. Raimond,
and S. Haroche, Europhys. Lett. 17, 33 (1992).
[11] R. J. Thompson, G. Rempe, and H. J. Kimble, Phys.
Rev. Lett. 68, 1132 (1992).
[12] J. J. Childs, K. An, M. S. Otteson, R. R. Dasari, and
M. S. Feld, Phys. Rev. Lett. 77, 2901 (1996).
[13] A. Boca, R. Miller, K. M. Birnbaum, A. D. Boozer,
J. McKeever, and H. J. Kimble, Phys. Rev. Lett. 93,
233603 (2004).
[14] P. Maunz, T. Puppe, I. Schuster, N. Syassen, P. W. H.
Pinkse, and G. Rempe, Phys. Rev. Lett. 94, 033002
(2005).
[15] T. Yoshie, A. Scherer, J. Hendrickson, G. Khitrova, H. M.
Gibbs, G. Rupper, C. Ell, O. B. Shchekin, and D. G.
Deppe, Nature 432, 200 (2004).
[16] J. P. Reithmaier, G. Sek, A. Löffler, C. Hofmann,
S. Kuhn, S. Reitzenstein, L. V. Keldysh, V. D. Ku-
lakovskii, T. L. Reinecke, and A. Forchel, Nature 432,
197 (2004).
[17] E. Peter, P. Senellart, D. Martrou, A. Lemâıtre, J. Hours,
J. M. Gérard, and J. Bloch, Phys. Rev. Lett. 95, 067401
(2005).
[18] A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S.
Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J.
Schoelkopf, Nature 431, 162 (2004).
[19] See, for example, H. J. Carmichael, L. Tian, W. Ren, and
P. Alsing, “Nonperturbative interactions of photons and
atoms in a cavity,” in Ref. [1], Sect. IIA.
[20] M. G. Raizen, R. J. Thompson, R. J. Brecha, H. J.
Kimble, and H. J. Carmichael, Phys. Rev. Lett. 63, 240
(1989).
[21] Y. Z. Zhu, D. J. Gauthier, S. E. Morin, Q. Wu, H. J.
Carmichael, and H. J. Kimble, Phys. Rev. Lett. 64, 2499
(1990).
[22] J. Gripp, S. L. Mielke, L. A. Orozco, and H. J.
Carmichael, Phys. Rev. A 54, R3746 (1996).
[23] C. Weisbuch, M. Nishioka, A. Ishikawa, and Y. Arakawa,
Phys. Rev. Lett. 69, 3314 (1992).
[24] G. Khitrova, H. M. Gibbs, F. Jhanke, M. Kira, and S. W.
Koch, Rev. Mod. Phys. 71, 1591 (1999).
[25] H. J. Carmichael, Phys. Rev. Lett. 55, 2790 (1985).
[26] P. R. Rice and H. J. Carmichael, IEEE J. Quantum Elec-
tron. 24, 1351 (1988).
[27] H. J. Carmichael, R. Brecha, and P. R. Rice, Opt. Comm.
82, 73 (1991).
[28] R. J. Brecha, P. R. Rice, and M. Xiao, Phys. Rev. A 59,
2392 (1999).
[29] G. Rempe, R. J. Thompson, R. J. Brecha, W. D. Lee,
and H. J. Kimble, Phys. Rev. Lett. 67, 1727 (1991).
[30] S. L. Mielke, G. T. Foster, and L. A. Orozco, Phys. Rev.
Lett. 80, 3948 (1998).
[31] G. T. Foster, S. L. Mielke, and L. A. Orozco, Phys. Rev.
A 61, 053821 (2000).
[32] K. M. Birnbaum, A. Boca, R. Miller, A. D. Boozer, T. E.
Northup, and H. J. Kimble, Nature 436, 87 (2005).
[33] A. Imamoḡlu, H. Schmidt, G. Woods, and M. Deutsch,
Phys. Rev. Lett. 79, 1467 (1997).
[34] M. J. Werner and A. Imamoḡlu, Phys. Rev. A 61, 011801
(1999).
[35] S. Rebić, S. M. Tan, A. S. Parkins, and D. F. Walls, J.
Opt. B 1, 1464 (1999).
[36] S. Rebić, A. S. Parkins, and S. M. Tan, Phys. Rev. A 65,
063804 (2002).
[37] J. Kim, O. Benson, H. Kan, and Y. Yamamoto, Nature
397, 500 (1999).
[38] I. I. Smolyaninov, A. V. Zayats, A. Gungor, and C. C.
Davis, Phys. Rev. Lett. 88, 187402 (2002).
[39] L. Tian and H. J. Carmichael, Phys. Rev. A 46, R6801
(1992).
[40] H. J. Carmichael and B. C. Sanders, Phys. Rev. A 60,
2497 (1999).
[41] U. Martini and A. Schenzle, in Directions in Quantum
Optics, edited by H. J. Carmichael, R. J. Glauber, and
M. O. Scully (Springer, Berlin, 2001), Lecture Notes in
Physics, pp. 238–249.
[42] H. J. Carmichael, An Open Systems Approach to Quan-
tum Optics (Springer, Berlin, 1993), vol. m18 of Lecture
Notes in Physics, pp. 113–179.
[43] J. Dalibard, Y. Castin, and K. Mølmer, Phys. Rev. Lett.
68, 580 (1992).
[44] R. Dum, P. Zoller, and H. Ritsch, Phys. Rev. A 45,
4879 (1992).
[45] C. W. Gardiner and P. Zoller, Quantum Noise (Springer,
Berlin, 2004), pp. 341–396, 3rd ed.
[46] Ref. [6], pp. 767-879.
[47] R. J. Brecha, Phd thesis, University of Texas at Austin
(1990).
[48] G. T. Foster, Phd thesis, State University of New York
at Stony Brook (1999).
[49] N. F. Ramsey, Molecular Beams (Oxford University
Press, Oxford, 1956), pp. 11–25.
[50] H. J. Carmichael, P. Kochan, and B. C. Sanders, Phys.
Rev. Lett. 77, 631 (1996).
[51] B. C. Sanders, H. J. Carmichael, and B. F. Wielinga,
Phys. Rev. A 55, 1358 (1997).
[52] For both experiments the cavity mode function is almost
plane; the Rayleigh length exceeds the cavity length by
a factor of nine in the Rempe et al. experiment [29] and
by approximately half that amount in the experiment of
Foster et al. [31].
[53] Ref. [6], pp. 702-710.
[54] Ref. [6], pp. 711-715.
[55] H. J. Carmichael, H. M. Castro-Beltran, G. T. Foster,
and L. A. Orozco, Phys. Rev. Lett. 85, 1855 (2000).
[56] G. T. Foster, L. A. Orozco, H. M. Castro-Beltran, and
H. J. Carmichael, Phys. Rev. Lett. 85, 3149 (2000).
[57] G. T. Foster, W. P. Smith, J. E. Reiner, and L. A. Orozco,
Phys. Rev. A 66, 033807 (2002).
[58] Ref. [6], pp. 726-733.
[59] C. J. Hood, T. W. Lynn, A. C. Doherty, A. S. Parkins,
and H. J. Kimble, Science 287, 1447 (2000).
[60] P. W. H. Pinkse, T. Fischer, P. Maunz, and G. Rempe,
Nature 404, 365 (2000).
[61] M. Hennrich, A. Kuhn, and G. Rempe, Phys. Rev. Lett.
94, 053604 (2005).
0704.1687 | Studies on optimizing potential energy functions for maximal intrinsic
hyperpolarizability
Juefei Zhou, Urszula B. Szafruga, David S. Watkins* and Mark G. Kuzyk
Department of Physics and Astronomy, Washington State University,
Pullman, Washington 99164-2814; and *Department of Mathematics
We use numerical optimization to study the properties of (1) the class of one-dimensional po-
tential energy functions and (2) systems of point charges in two-dimensions that yield the largest
hyperpolarizabilities, which we find to be within 30% of the fundamental limit. We investigate the
character of the potential energy functions and resulting wavefunctions and find that a broad range
of potentials yield the same intrinsic hyperpolarizability ceiling of 0.709.
I. INTRODUCTION
Materials with large nonlinear-optical suscep-
tibilities are central for optical applications such
as telecommunications,[1] three-dimensional nano-
photolithography,[2, 3] and making new materials[4] for
novel cancer therapies.[5] The fact that quantum calcu-
lations show that there is a limit to the nonlinear-optical
response[6, 7, 8, 9, 10, 11] is both interesting from the
basic science perspective; and, provides a target for
making optimized materials. In this work, we focus
on the second-order susceptibility and the underlying
molecular hyperpolarizability, which is the basis of
electro-optic switches and frequency doublers.
The fundamental limit of the off-resonance hyperpo-
larizability is given by,[8]
βMAX = 3^{1/4} (eℏ/√m)³ N^{3/2}/E10^{7/2}, (1)
where N is the number of electrons and E10 the energy
difference between the first excited state and the ground
state, E10 = E1 − E0. Using Equation 1, we can de-
fine the off-resonant intrinsic hyperpolarizability, βint, as
the ratio of the actual hyperpolarizability (measured or
calculated), β, to the fundamental limit,
βint = β/βMAX . (2)
We note that since the dispersion of the fundamental
limit of β is also known, [12] it is possible to calculate
the intrinsic hyperpolarizability at any wavelength. In
the present work, we treat only the zero-frequency limit.
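In atomic units (e = ℏ = m = 1) the fundamental limit of Eq. (1) reduces to βMAX = 3^{1/4} N^{3/2}/E10^{7/2}, so βint is a one-line computation. The sample numbers below are illustrative only.

```python
# Fundamental limit and intrinsic hyperpolarizability in atomic units
# (e = hbar = m = 1), where Eq. (1) reduces to
#   beta_max = 3**0.25 * N**1.5 / E10**3.5.
# The sample numbers are illustrative.

def beta_max(n_electrons, e10):
    return 3 ** 0.25 * n_electrons ** 1.5 / e10 ** 3.5

def beta_intrinsic(beta, n_electrons, e10):
    return beta / beta_max(n_electrons, e10)

print(beta_max(1, 1.0))              # ~1.316 for one electron, unit gap
print(beta_intrinsic(0.93, 1, 1.0))  # ~0.707
```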
Until recently, the largest nonlinear susceptibilities of
the best molecules fell short of the fundamental limit by a
factor of 103/2, [10, 13, 14] so the very best molecules had
a value of βint = 0.03. Since a Sum-Over-States (SOS)
calculation of the hyperpolarizability[15] using the ana-
lytical wavefunctions of the clipped harmonic oscillator
yields a value βint = 0.57,[14] the factor-of-thirty gap is
not of a fundamental nature. Indeed, recently, it was re-
ported that a new molecule with asymmetric conjugation
modulation has a measured value of βint = 0.048.[16]
To investigate how one might make molecules with
a larger intrinsic hyperpolarizability, Zhou and cowork-
ers used a numerical optimization process where a trial
potential energy function is entered as an input, and
the code iteratively deforms the potential energy func-
tion until the intrinsic hyperpolarizability, calculated
from the resulting wavefunctions, converges to a local
maximum.[17] In this work, a hyperbolic tangent func-
tion was used as the starting potential due to the fact that
it is both asymmetric yet relatively flat away from the ori-
gin. This calculation was one-dimensional and included
only one electron, so electron correlation effects were
ignored. Furthermore, the intrinsic hyperpolarizability
was calculated using the new dipole-free sum-over-states
expression[18] and only 15 excited states were included.
The resulting optimized potential energy function showed
strong oscillations, which served to separate the spatial
overlap between the energy eigenfunctions. This led Zhou
and coworkers to propose that modulated conjugation
in the bridge between donor and acceptor ends of such
molecules may be a new paradigm for making molecules
with higher intrinsic hyperpolarizability.[17]
Based on this paradigm, Pérez Moreno reported mea-
surements of a class of chromophores with varying de-
gree of modulated conjugation.[16] The best measured
intrinsic hyperpolarizability was βint = 0.048, about 50%
larger than the best previously-reported. Given the mod-
est degree of conjugation modulation for this molecule,
this new paradigm shows promise for further improve-
ments.
In the present work, we extend Zhou’s calculations to
a larger set of starting potentials. To circumvent trun-
cation problems associated with sum-over-states calcula-
tions, we instead determine the hyperpolarizability using
a finite difference technique. The optimization procedure
is then applied to this non-perturbative hyperpolarizability.
To study the effects of geometry on the hyperpo-
larizability, Kuzyk and Watkins calculated the hyper-
polarizability of various arrangements of point charges,
representing nuclei, in two-dimensions using a two-
dimensional Coulomb potential.[19] In the present con-
tribution, we apply our numerical optimization technique
to determine the arrangement and charges of the nuclei
in a planar molecule that maximize the intrinsic hyper-
polarizability.
II. THEORY
In our previous work, we used a finite-state SOS model
of the hyperpolarizability that derives from perturbation
theory (we used both the standard Orr and Ward SOS ex-
pression, βSOS ,[15] and the newer dipole free expression,
βDF [18]). The use of a finite number of states in lieu of
the full infinite sums can result in inaccuracies, so, in the
present work, we use the non-perturbative approach, as
follows. We begin by solving the 1-d Schrödinger equa-
tion on the interval a < x < b for the ground state wave-
function ψ(x,E) of an electron in a potential well defined
by V (x) and in the presence of a static electric field, E,
that adds to the potential δV = −exE. From this, the
off-resonant hyperpolarizability is calculated with numer-
ical differentiation, i.e. using finite differences, yielding
βNP = ∂²/∂E² [∫_a^b |ψ(x,E)|² ex dx]_{E=0}. (3)
Equation 3 is evaluated using the standard second-order
approximation to the second derivative:
f ′′(z) ≈ [f(z + h)− 2f(z) + f(z − h)]/h²,
with several h values h0, h0/5, h0/25, . . . . We then refine
these values by Richardson extrapolation [20] and obtain
our estimate from the two closest extrapolated values.
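This finite-field procedure can be sketched end-to-end in a few lines: diagonalize the field-dependent Hamiltonian on a grid, read off the ground-state dipole, and Richardson-extrapolate the central difference. The sketch below works in atomic units with a hypothetical harmonic test well and a single extrapolation step; the actual calculation uses quadratic finite elements, a cubic-spline potential, and several h values.

```python
# Finite-field evaluation of beta_NP following Eq. (3): solve the 1-d
# Schrodinger equation on a grid (atomic units, e = hbar = m = 1) for
# several field strengths, form the ground-state dipole
#   p(E) = integral |psi(x,E)|^2 x dx,
# and take the second derivative by central differences refined with one
# Richardson step. The harmonic test well stands in for the paper's
# spline-parameterized potentials.
import numpy as np

def ground_state_dipole(V, x, field):
    dx = x[1] - x[0]
    off = np.full(len(x) - 1, -0.5 / dx**2)       # -(1/2) d^2/dx^2 stencil
    H = (np.diag(1.0 / dx**2 + V - field * x)     # delta V = -e x E, e = 1
         + np.diag(off, 1) + np.diag(off, -1))
    _, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0] / np.sqrt(dx)                # normalized ground state
    return np.sum(psi**2 * x) * dx

def beta_np(V, x, h=1e-2):
    def d2(step):
        return (ground_state_dipole(V, x, step)
                - 2 * ground_state_dipole(V, x, 0.0)
                + ground_state_dipole(V, x, -step)) / step**2
    return (25 * d2(h / 5) - d2(h)) / 24          # one Richardson step

x = np.linspace(-6.0, 6.0, 301)
print(beta_np(0.5 * x**2, x))   # symmetric well: beta vanishes by parity
```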
Our computational mesh consists of 200 quadratic fi-
nite elements with a total of 399 degrees of freedom. The
potential energy function is a cubic spline with 40 degrees
of freedom. Thus the numerical calculations in regions
where the potential function is represented by 3 points
in the spline are covered by 15 elements with a total of
about 30 degrees of freedom.
Calculating βint from Equations 3, 2 and 1 for a specific
potential, we use an optimization algorithm that contin-
uously varies the potential in a way that maximizes βint.
We also compute the matrix[17, 26]
τ^(N)_mp = δm,p − [1/E10(x^max_10)²] Σ_{n=0}^{N} [En − (Em + Ep)/2] xmn xnp, (4)

where x^max_10 is the magnitude of the fundamental limit of
the position matrix element x10 for a one electron system,
and is given by,

x^max_10 = ℏ/√(2mE10). (5)
Each matrix element of τ (N), indexed by m and p, is
a measure of how well the (m, p) sum rule is obeyed
when truncated to N states. If the sum rules are ex-
actly obeyed, τ^(N)_mp = 0 for all m and p. We note that if
the sum rules are truncated to an N-state model, the sum
rules indexed by a large value of m or p (i.e. m, p ∼ N)
are disobeyed even when the position matrix elements
and energies are exact. We have found that the values
τ^(N)_mp are small for exact wavefunctions when m < N/2
and p < N/2. So, when evaluating the τ matrix to
test our calculations, we consider only the components
with m ≤ N/2 and p ≤ N/2.
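The sum-rule diagnostic can be reproduced with any numerical eigensolver. The sketch below uses a plain finite-difference Hamiltonian with a harmonic test well in atomic units (one electron, so E10(x^max_10)² = ℏ²/2m = 1/2), not the authors' finite-element MATLAB code, and checks that the τ components with m, p well below N are small.

```python
# Reproduce the sum-rule matrix tau^(N) numerically: diagonalize a
# finite-difference Hamiltonian (harmonic test well, atomic units, one
# electron, so E10 * (x10_max)^2 = 1/2), build the position matrix
# elements, and check tau_mp ~ 0 for m, p << N.
import numpy as np

xs = np.linspace(-8.0, 8.0, 400)
dx = xs[1] - xs[0]
V = 0.5 * xs**2
off = np.full(len(xs) - 1, -0.5 / dx**2)
H = np.diag(1.0 / dx**2 + V) + np.diag(off, 1) + np.diag(off, -1)
energies, states = np.linalg.eigh(H)
states /= np.sqrt(dx)                  # grid-normalized eigenfunctions

N = 40                                 # states kept in the truncated sums
X = np.einsum('im,i,ip->mp', states[:, :N], xs, states[:, :N]) * dx

tau = np.empty((N, N))
for m in range(N):
    for p in range(N):
        avg = 0.5 * (energies[m] + energies[p])
        tau[m, p] = float(m == p) - 2.0 * np.sum(
            (energies[:N] - avg) * X[m] * X[p])

print(abs(tau[0, 0]))                  # ground-state sum rule: close to zero
print(np.abs(tau[:10, :10]).max())     # small for m, p << N
```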
We observe that when using more than about 40 states
in SOS calculations of the hyperpolarizability only a
marginal increase of accuracy results when the poten-
tial energy function is parameterized with 400 degrees of
freedom. So, to ensure overkill, we use 80 states when
calculating the τ matrix or the hyperpolarizability with
an SOS expression so that truncation errors are kept to
a minimum. Since the hyperpolarizability depends crit-
ically on the transition dipole moment from the ground
state to the excited states, we use the value of τ^(N)_00 as
one important test of the accuracy of the calculated wave-
functions. Additionally, we use the standard deviation of
τ (N),
∆τ^(N) = [⟨(τ^(N)_mp)²⟩_{m,p≤N/2}]^{1/2}, (6)
which quantifies, on average, how well the sum rules are
obeyed in aggregate, making ∆τ (N) a broader test of the
accuracy of a large set of wavefunctions.
Our code is written in MATLAB. For each trial po-
tential we use a quadratic finite element method [21] to
approximate the Schrödinger eigenvalue problem and the
implicitly restarted Arnoldi method [22] to compute the
wave functions and energy levels. To optimize β we use
the Nelder-Mead simplex algorithm [23].
As described in our previous work,[17] we perform op-
timization, but this time using the exact intrinsic hyper-
polarizability β = βNP /βMAX , where βMAX is the fun-
damental limit of the hyperpolarizability, which is pro-
portional to E10^{−7/2}. To determine E10 ≡ E1 − E0, we also
calculate the first excited state energy E1.
III. RESULTS AND DISCUSSIONS
Figure 1 shows an example of the optimized poten-
tial energy function after 7,000 iterations when starting
with the potential V (x) = 0 and optimizing the non-
perturbative intrinsic hyperpolarizability βNP /βMAX as
calculated with Equation 3. Also shown are the eigen-
functions of the first 15 states computed from the opti-
mized potential. First, we note that the potential energy
function shows the same kinds of wiggles as in our original
paper,[17] though not of sufficient amplitude to localize
the wavefunctions.
For the starting potentials we have investigated, our
results fall into two broad classes. In the first, three
common features are: (1) The best intrinsic hyperpolar-
izabilities are near βint = 0.71; (2) the best potentials
have a series of wiggles; and (3) the systems behave as a
FIG. 1: Optimized potential energy function and first 15
wavefunctions after 7,000 iterations. Starting potential is
V (x) = 0, using the non-perturbative hyperpolarizability for
optimization.
limited-state model. In the second class of starting po-
tentials, (2) the wiggles are much less pronounced and (3)
more states contribute evenly. Figure 1 is an example of
a Class II potential. However, in both classes, the max-
imum calculated intrinsic hyperpolarizability appears to
be bounded by βint = 0.71. Using the set of potentials
from both classes that lead to optimized βNP /βMAX , we
calculate the lowest 80 eigenfunctions and eigenvalues,
from which we calculate βDF and βSOS . In most cases,
we find that the three different formulas for β converge
to the same value when only the first 20 excited states
are used (using 80 states, the three are often the same to
within at least 4 decimal places) and τ_00 ≈ 10^−4, showing
that the ground state sum rules are well obeyed. Further-
more, the rms deviation of the τ matrix when including
80 states leads to ∆τ^(80) < 0.001.
Figure 2 shows an example of the optimized potential
energy function when starting with the potential V (x) =
tanhx and optimizing the exact (non-perturbative) in-
trinsic hyperpolarizability. Also shown are the eigenfunc-
tions of the first 15 states computed with the optimized
potential. First, we note that the potential energy func-
tion shows the same kinds of wiggles as in our original
paper;[17] and only 2 excited state wavefunctions and the
ground state are localized in the first deep well - placing
this system in Class I.
The observation that such potentials lead to hyper-
polarizabilities that are near the fundamental limit mo-
tivated Zhou and coworkers to suggest that molecules
with modulated conjugation may have enhanced intrin-
sic hyperpolarizabilities.[17] A molecule with a modu-
lated conjugations bridge between the donor and accep-
tor end was later shown to have record-high intrinsic
hyperpolarizability.[16] As such, this result warrants a
more careful analysis.
It is worthwhile to compare our present results charac-
FIG. 2: Optimized potential energy function and first 15
wavefunctions after 8,000 iterations. Starting potential is
V (x) = tanh(x), using the non-perturbative hyperpolarizabil-
ity for optimization.
terized by Figure 2 with our past work,[17] particularly
for the purpose of examining the impact of the approx-
imations used in the previous work.[17] Figure 3 shows
the optimized potential and wavefunctions obtained by
Zhou and coworkers using a 15-state model and opti-
mizing the dipole-free intrinsic hyperpolarizability. Since
only 15 states were used, the SOS expression for β did not
fully converge, making the result inaccurate, as suggested
by the fact that βSOS and βDF did not agree. How-
ever, since the code focused on optimizing the dipole-free
form of β, and τ00 was small when βint was optimized,
the dipole-free expression may have converged to a rea-
sonably accurate value while the commonly-used SOS
expression was inaccurate. Indeed, it was found that
βDF ≈ 0.72 - in contrast to our more precise present
calculations using the non-perturbative approach, which
yields βNP < 0.71. So, the fact that our more precise
calculations, which do not rely on a sum-over states ex-
pression, agree so well with the 15-state model suggests
that in both cases, the limit for a one-dimensional single
electron molecule is just over β ≈ 0.7. This brute force
calculation serves as a numerical illustration of the obser-
vation that the limiting value of β is the same for an exact
non-perturbation calculation and for a calculation that
truncates the SOS expression, which presumably should
lead to large inaccuracies.[24, 25] At minimum, this result
supports the existence of fundamental limits of nonlinear
susceptibilities that are in line with past calculations.
To state Zhou’s approach more precisely,[17] the cal-
culations optimized the very special case of the intrin-
sic hyperpolarizability for a 15 state model for a poten-
tial energy function that is parameterized with 20 spline
points. As such, the potential energy function can at
most develop about 20 wiggles. As a consequence, there
are enough degrees of freedom in the potential energy
function to force the 15 states to be spatially well sepa-
FIG. 3: Optimized potential energy function using βDF and
first 15 wavefunctions after 7,000 iterations. Starting po-
tential is the tanh(x) potential. The final potential (shown
above) we refer to as the Zhou potential.
TABLE I: Evolution of Zhou’s Potential. βs is the hyperpo-
larizability of the starting potential using 80 states while the
other ones are after optimization of βNP .
Iterations | βS     | βSOS   | βDF    | βNP    | τ_00 (×10^−5) | ∆τ (×10^−4)
0          | 0.5612 | 0.5612 | 0.5607 | 0.5612 | 11.2          | 15
1000       | 0.5612 | 0.7087 | 0.6682 | 0.7083 | 1810          | 40
rated. Interestingly, after optimization, only two excited
states overlap with the ground state, allowing only these two
states to have nonzero transition dipole moments with
each other and the ground state – forcing the system
into a three-level SOS model for βDF . This behavior is
interesting in light of the three-level ansatz, which asserts
that only three states determine the nonlinear response
of a system when it is near the fundamental limits.
It is interesting to compare the exact non-perturbation
calculation, which does not depend on the excited state
wavefunctions (Figure 2) and Zhou’s contrived system of
15 states (Figure 3). Both cases have wiggles and the
wavefunctions appear to be mostly non-overlapping. So,
for the first 15 states, the wavefunctions appear simi-
larly localized. The situation becomes more interesting
when 80 states are included in calculating the hyperpo-
larizability for Zhou’s potential or when the exact non-
perturbative approach is used. The first line in Table
I summarizes the results with Zhou’s potential and 80
states.
First, let’s focus on the sum-over-states results.
Clearly, when 80 states are used in the calculation, it
is impossible for the excited state wavefunctions to not
overlap with each other, so the three-level approximation
to β breaks down. According to the three-level ansatz,
we would expect the hyperpolarizability to get smaller.
Indeed, the additional excited states result in a smaller
FIG. 4: Optimized potential energy function and first 15
wavefunctions after 1,000 iterations. Starting potential is
Zhou’s potential, using the non-perturbative hyperpolariz-
ability for optimization.
hyperpolarizability (≈ 0.56). Note that the exact and
SOS expressions agree with each other and that τ_00
and ∆τ^(80) are small.
Figure 4 shows the result after 1000 iterations, us-
ing Zhou’s potential as the starting potential and us-
ing the non-perturbative hyperpolarizability for opti-
mization. First, the non-perturbative hyperpolarizabil-
ity reaches just under 0.71, but, the SOS and dipole-
free expressions do not agree with each other. Further-
more, both convergence metrics (τ_00 and ∆τ^(80)) are
larger than before optimization. It would appear that
for Zhou’s potential, even 80 states are not sufficient
to characterize the nonlinear susceptibility when a sum-
over-states expression is used (either dipole free or tradi-
tional SOS expression - though the SOS expression agrees
better with the non-perturbative approach).
Interestingly, the optimized potential energy function
still retains the wiggles and the wave functions are still
well separated. This result is consistent with the sug-
gestion of Zhou and coworkers that modulation of conju-
gation may be a good design strategy for making large-
hyperpolarizability molecules. We note that wiggles in
the potential energy function are not required to get
a large nonlinear-optical response, but they appear to be
one way that Mother Nature optimizes the hyperpo-
larizability. Since this idea has been used to identify
molecules with experimentally measured record intrin-
sic hyperpolarizability,[16] the concept of modulation of
conjugation warrants further experimental studies.
A case in point showing that non-wiggly potentials can lead
to a large nonlinear susceptibility is the clipped harmonic
oscillator, which we calculated to have an intrinsic hyper-
polarizability of about 0.57.[14] Figure 5 shows the opti-
mized non-perturbative hyperpolarizability when using a
clipped harmonic oscillator as the starting potential. The
properties of all of the optimized potentials are summa-
TABLE II: Summary of calculations with different starting
potentials. βs is the hyperpolarizability of the starting po-
tential while the other ones are after optimization.
V(x)          | βS   | βSOS   | βDF    | βNP    | τ_00 (×10^−5) | ∆τ (×10^−4)
0             | 0    | 0.7089 | 0.7089 | 0.7089 | 37.8          | 5.33
tanh(x)       | 0.67 | 0.7084 | 0.6918 | 0.7083 | 779           | 11.8
x             | 0.66 | 0.7088 | 0.7072 | 0.7088 | 78.7          | 8.79
x^2           | 0.57 | 0.7089 | 0.7085 | 0.7088 | 18.6          | 703
x^(1/2)       | 0.68 | 0.7087 | 0.7049 | 0.7087 | 190           | 9.76
x + sin(x)    | 0.67 | 0.7088 | 0.7073 | 0.7088 | 75.0          | 8.46
x + 10 sin(x) | 0.04 | 0.7085 | 0.7085 | 0.7085 | 1.65          | 7.78
FIG. 5: Optimized potential energy function and first 15
wavefunctions after 8,000 iterations. Starting potential is
V (x) = x2, using the non-perturbative hyperpolarizability
for optimization.
rized in Table II. The clipped square root function also
has a large hyperpolarizability (0.69). The optimized po-
tential is shown in Figure 6. In these cases, the amplitude
of the wiggles is small and all the wavefunctions overlap.
So, these fall into Class II. Note that the lack of wiggles
shows that they are not an inevitable consequence of our
numerical calculations.
We may question whether small wiggles in the poten-
tial energy function lead to large amplitude wiggles as an
artifact of our numerical optimization technique. To test
this hypothesis, we used the trial potential energy func-
tion x + sin(x), where the wiggle amplitude is not large
enough to cause the wavefunctions to localize at the min-
ima. The optimized potential energy function retains an
approximately linear form with only small fluctuations.
In fact, the results are very similar to what we found for
the linear starting potential and the wiggles do not affect
the final result. The similarity between these cases can
be seen in Table II.
Next, we test a starting potential with large wiggles
as shown in the upper portion of Figure 7. The lower
FIG. 6: Optimized potential energy function and first 15
wavefunctions after 8,000 iterations. Starting potential is
V (x) = √x, using the non-perturbative hyperpolarizability
for optimization.
energy eigenfunctions are found to be localized mostly in
the first two wells. In fact, the lowest four energy eigen-
functions are well approximated by harmonic oscillator
wavefunctions, which are centrosymmetric. As a result,
the first excited state holds most of the oscillator strength
and the value of the intrinsic hyperpolarizability is only
0.04.
After 3000 iterations, this Class I potential energy
function has high amplitude wiggles at a wavelength that
is significantly shorter than the wavelength of the ini-
tial sine function (bottom portion of Figure 7). In com-
mon with the optimized tanh(x) function, the wiggles
are of large but almost chaotically varying amplitude.
This leads to wavefunctions that are spatially separated.
While the wavefunctions are not as well separated as
we find for the tanh(x) starting potential, the optimized
potential yields only two dominant transitions from the
ground state; so, this system is well approximated by
a three-level model. As is apparent from Table II, the
ground state sum rule (characterized by τ_00) is better
obeyed in this optimized potential than in any others.
So, the wavefunctions are accurate and all of the values
of β have converged to the same value, suggesting that
this calculation may be the most accurate of the set.
Our results bring up several interesting questions.
First, all of our extensive numerical calculations, inde-
pendent of the starting potential, yield an optimized in-
trinsic hyperpolarizability with an upper bound of 0.71,
which is about thirty percent lower than what the sum
rules allow. Since numerical optimization can settle in to
a local maximum, it is possible that all of the starting
potentials are far from the global maximum of βint = 1.
Indeed, since most potentials lead to systems that require
more than three dominant states to express the hyper-
polarizability, this may in itself be an indicator that we
are not at the fundamental limit precisely because these
FIG. 7: Potential energy function and first 15 wavefunctions
before (top) and after (bottom) 3,000 iterations. Starting
potential is of the form V (x) = x+ 10 sin(x), using the non-
perturbative hyperpolarizability for optimization.
systems have more than three states. Indeed, the orig-
inal results of Zhou and coworkers frame the problem
in a way (i.e. a 15-level model in a potential limited to
about 20 wiggles) that allows a solution to the optimiza-
tion problem to lead to three dominant states. So, while
it may be argued that this system is contrived and un-
physical, we have found value in trying such toy models
when testing various hypotheses. This toy model
• leads to a three-level system as the three-level
ansatz proposes
• has the same qualitative properties as more precise
methods
• has given insights into making new molecules with
record-breaking intrinsic hyperpolarizability
Given the complexity of calculating nonlinear-
susceptibilities, our semi-quantitative method may
be a good way of generating new ideas.
The three-level ansatz proposes that at the fundamen-
tal limit, all transitions are negligible except between
three dominant states. There appears to be no proof of
the ansatz aside from the fact that it leads to an accurate
prediction of the upper bound of nonlinear susceptibili-
ties, both calculated and measured. To understand the
motivation behind the ansatz, it is useful to understand
how the two-level model optimizes the polarizability, α,
without the need to rely on any assumptions. This is
trivial to show by using the fact that the polarizability
depends only on the positive-definite transition moments,
〈0|x |n〉 〈n|x |0〉, the same parameters that are found in
the ground state sum rules.[26]
For nonlinear susceptibilities, the situation is much
more complicated because the SOS expression depends
on quantities such as 〈0|x |n〉 〈n|x |m〉 〈m|x |0〉, where
these terms can be both positive and negative. Fur-
thermore, the sum rules that relate excited states mo-
ments to each other allow for these moments to be much
larger than transition moments to the ground state. So,
it would seem plausible that one could design a system
with many excited states in a way that all of the tran-
sition moments between excited states would add con-
structively to yield a larger hyperpolarizability than what
we calculate with the three-level ansatz. None of our
numerical calculations, independent of the potential en-
ergy function, yield a value greater than 0.71. Since our
potential energy functions are general 1-dimensional po-
tentials (i.e. the potentials are not limited to Coulomb
potentials, nor are the wavefunctions approximated as is
common in standard quantum chemical computations),
our calculations most likely span a broader range of pos-
sible wavefunctions leading to a larger variety of states
that contribute to the hyperpolarizability.
However, there appear to be local maxima associated
with systems that behave as a three-level system and oth-
ers with many states, and, the maximum values both are
0.71. It is interesting that so many different sets of transi-
tion moments and energies can yield the exact same local
maximum. To gain a deeper appreciation of the under-
lying physics, let’s consider the transition moments and
energies in the sum-over-states expression for the hyper-
polarizability as adjustable parameters. For a system
with N states, there are N − 1 energy parameters of the
form En − E0. The moment matrix xij has N^2 com-
ponents. If the matrix is real, there are (N^2 − N)/2
unique off-diagonal terms and N diagonal dipole mo-
ments. Since all dipole moments appear as differences
of the form xnn − x00, there are only N − 1 dipole mo-
ment parameters. Therefore, the dipole matrix is char-
acterized by (N2 − N)/2 + N − 1 = (N + 2)(N − 1)/2
parameters. Combining the energy and dipole matrix pa-
rameters, there are a total of (N + 2)(N − 1)/2 +N − 1
parameters.
The N-state sum rules are of the form:
∑_n [ E_n − (E_m + E_p)/2 ] 〈m|x |n〉 〈n|x |p〉 = (ħ²N/2m) δ_{m,p},  (7)
so the sum rules comprise a total of N^2 equations (i.e. an
equation for each (m, p)). If the sum rules are truncated
to N states, the sum rule indexed by (m = N, p = N)
is nonsensical because it contradicts the other sum rules.
Furthermore, if the transition moments are real, then
xmp = xpm, so only (N^2 − N)/2 of the equations are
independent. As such, there are a total of (N^2 − N)/2 +
N − 1 = (N + 2)(N − 1)/2 independent equations.
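These truncated sum rules are easy to probe numerically. A sketch (our illustration, not from the paper) uses the harmonic oscillator, whose exact moments x_{n,n+1} = √((n+1)/2) are known in units ħ = m = ω = 1; for one electron the right-hand side of Eq. 7 is δ_{m,p}/2, and the truncated (N, N) equation indeed fails:

```python
import numpy as np

def ho_levels(nstates):
    """Exact harmonic-oscillator data (hbar = m = omega = 1):
    energies E_n = n + 1/2 and position matrix x_{n,n+1} = sqrt((n+1)/2)."""
    E = np.arange(nstates) + 0.5
    x = np.zeros((nstates, nstates))
    for n in range(nstates - 1):
        x[n, n + 1] = x[n + 1, n] = np.sqrt((n + 1) / 2.0)
    return E, x

def sum_rule(E, x, m, p):
    """Left-hand side of the (m, p) sum rule truncated to len(E) states;
    for one electron it should equal delta_{mp}/2."""
    return float(np.sum((E - 0.5 * (E[m] + E[p])) * x[m, :] * x[:, p]))
```

With 10 states the (0,0) and (0,2) rules hold exactly, while the truncated (9,9) rule gives −4.5 instead of 1/2, because the x_{9,10} moment has been cut away.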
Since the SOS expression for the nonlinear-
susceptibility has (N + 2)(N − 1)/2 +N − 1 parameters
and the sum rules provide (N + 2)(N − 1)/2 equations,
the SOS expression can be reduced to a form with N − 1
parameters. For example, the three-level model for the
hyperpolarizability, which is expressed in terms of 7
parameters, can be reduced to two parameters using 5
sum rule equations. In practice, however, even fewer
sum rule equations are usually available because some
of them lead to physically unreasonable consequences.
While the (N,N) sum rule is clearly unphysical due
to truncation, sum rule equations that are near equa-
tion (N,N) may also be unphysical. In the case of
the three-level model, it is found that the equation
(2, 1) allows for an infinite hyperpolarizability, so that
equation is ignored on the grounds that it violates the
principle of physical soundness.[12, 25, 26] This leads to
a hyperpolarizability in terms of 3 variables, which are
chosen to be E10, E = E10/E20, and X = x10/x10^MAX.
The expression is then maximized with respect to the
two parameters E and X , leaving the final result a
function of E10.
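The bookkeeping above is mechanical and easy to verify (a small sketch of ours, not from the paper):

```python
def sos_parameters(N):
    """Free parameters of a real N-state SOS model: N-1 energy
    differences, (N^2 - N)/2 off-diagonal moments, and N-1 dipole
    differences x_nn - x_00."""
    return (N - 1) + (N * N - N) // 2 + (N - 1)

def independent_sum_rules(N):
    """Independent truncated sum rules: (N^2 - N)/2 off-diagonal
    equations plus N - 1 usable diagonal ones (the (N, N) equation
    being discarded as unphysical)."""
    return (N * N - N) // 2 + (N - 1)
```

For N = 3 this gives 7 parameters and 5 equations, leaving the two free parameters E and X of the three-level model; in general the difference is N − 1, as stated.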
We conclude that the SOS expression for the hyperpo-
larizability can be expressed in terms of at least N − 1
parameters; so, it would appear that as more levels are
included in the SOS expression, there are more free pa-
rameters that can be varied without violating the sum
rules. As N → ∞, there are an infinite number of ad-
justable parameters. So, it is indeed puzzling that the
three-level ansatz yields a fundamental limit that is con-
sistent with all of our calculations for a wide range of
potentials, many of which have many excited states. It
may be that we are only considering a small subset of
potential energy functions; or, perhaps the expression
for the hyperpolarizability depends on the parameters in
such a way that large matrix elements contribute to the
hyperpolarizability with alternating signs so that the big
terms cancel. This is a puzzle that needs to be solved if
we are to understand what makes β large.
To investigate whether the limiting behavior is
due to our use of 1-dimensional potentials, we have
also optimized the intrinsic hyperpolarizability in two-
dimensions. In this case, we focus on the largest ten-
sor component, βxxx and describe the potential as a
superposition of point charges. As described in the
literature,[19] we solve the two-dimensional Schrödinger
eigenvalue problem,
∇2Ψ+ VΨ = EΨ, (8)
for the lowest ten to 25 energy eigenstates, depending on
the degree of convergence of the resulting intrinsic hyper-
polarizability. We use the two-dimensional logarithmic
Coulomb potential, which for k nuclei with charges q1e,
. . . , qke located at points s
(1), . . . , s(k), is given by
V (s) = ∑_{j=1}^{k} q_j log(‖s − s^(j)‖ / L),  (9)
where L is a characteristic length. With L = 2Å, the
force due to a charge at distance 2Å is the same as it
would be for a 3D Coulomb potential.
We discretize the eigenvalue problem given by Equa-
tion 8 using a quadratic finite element method [21, 27]
and solve the resulting matrix eigenvalue problem for the
ten to 25 smallest energy eigenvalues and corresponding
eigenvectors by the implicitly-restarted Arnoldi method
[22] as implemented in ARPACK [28]. Each eigenvector
yields a wave function Ψn corresponding to energy level
En. The moments
x_mn = ∫∫ s1 Ψm(s1, s2) Ψn(s1, s2) ds1 ds2
are computed, and these and the energy levels En are
used to compute β.
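A minimal sketch of Eq. 9 and the moment quadrature (ours, in Python; the paper's computation uses quadratic finite elements and ARPACK, and we assume the characteristic length L sits inside the logarithm):

```python
import numpy as np

def log_coulomb(s, charges, sites, L=2.0):
    """Two-dimensional logarithmic potential of Eq. 9 for point charges
    q_j (in units of e) at sites s^(j); L is the characteristic length.
    s: array of shape (..., 2) of evaluation points."""
    s = np.asarray(s, dtype=float)
    v = np.zeros(s.shape[:-1])
    for q, sj in zip(charges, sites):
        v += q * np.log(np.linalg.norm(s - np.asarray(sj), axis=-1) / L)
    return v

def moment_xmn(psi_m, psi_n, s1, ds1, ds2):
    """x_mn = integral of s1 * Psi_m * Psi_n over the plane, by
    rectangle-rule quadrature on a uniform grid psi[i, j] = Psi(s1_i, s2_j)."""
    return float(np.sum(s1[:, None] * psi_m * psi_n) * ds1 * ds2)
```

As a check, a unit charge gives zero potential at distance L, and the diagonal moment of a normalized Gaussian centered at s1 = 1 returns its center, 1.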
Figure 8 shows the intrinsic hyperpolarizability of a
two-nucleus molecule plotted as a function of the distance
between the two nuclei and nuclear charge q1. The total
nuclear charge is q1 + q2 = +e, and is expressed in units
of the proton charge, e. Three extrema are observed.
The positive peak parameters are βint = 0.649 for q1 =
0.58 and d = 4.36Å. The negative one yields βint =
−0.649 for q1 = 0.42 and d = 4.36Å. The local negative
peak that extends past the graph on the right reaches its
maximum magnitude of βint = −0.405 at q1 = 2.959 and
d = 2.0Å.
Applying numerical optimization to the intrinsic hy-
perpolarizability using the charges and separation be-
tween the nuclei as parameters, we get βint = 0.654 at
d = 4.539Å, and q1 = 0.430 when the starting param-
eters are near the positive peak; and βint = −0.651,
d = 4.443Å, and q1 = 0.572 when optimization gives
the negative peak. The peak parameters are the same
within roundoff errors when optimization or plotting is
used, confirming that the optimization procedure yields
the correct local extrema.
Figure 9 shows the intrinsic hyperpolarizability of an
octupolar-like molecule made of three evenly-spaced nu-
clei on a circle plotted as a function of the circle’s diam-
eter and charge fraction ǫ (q = ǫe). The charge on each
of the other nuclei is e(1 − ǫ)/2. The positive peak at
ǫ = 0.333 and diameter D = 6.9Å has a hyperpolariz-
ability βint = 0.326, while βint = −0.605 for a charge
fraction ǫ = 0.44 and a diameter D = 6.8Å.
FIG. 8: The intrinsic hyperpolarizability of two nuclei as a
function of the distance between them and the charge of one
nucleus, q1 where q1 + q2 = +e.
FIG. 9: The intrinsic hyperpolarizability of three evenly-
spaced nuclei on a circle as a function of the circle’s diameter
and the charge, ǫ (in units of e), on one of the nuclei. The
charge on each of the other nuclei is e(1− ǫ)/2.
When the positions and magnitudes of the three
charges are allowed to move freely, the best intrinsic
hyperpolarizability obtained using numerical optimiza-
tion is βint = 0.685 for charges located at ~r1 = (0, 0),
~r2 = (−4.87Å, 0.33Å), and ~r3 = (−9.57Å,−0.16Å); with
charges q1 = 0.43e, q2 = 0.217e, and q3 = 0.351e. There
are only small differences in the optimized values of βint
depending on the starting positions and charges; and the
best results are for a “molecule” that is nearly linear
along the x-direction. This is not surprising given that
the xxx-component of βint is the optimized quantity.
The two-dimensional analysis illustrates that numer-
ical optimization correctly identifies the local maxima
(peaks and valleys) and that the magnitude of maximum
intrinsic hyperpolarizability (0.65 vs 0.68) is close to the
maximum we get for the one-dimensional optimization
of the potential energy function (0.71). All computa-
tions we have tried, including varying the potential en-
ergy function in one dimension or moving around point
charges in a plane, all yield an intrinsic hyperpolarizabil-
ity that is less than 0.71.
An open question is the origin of the factor-of-thirty
gap between the best molecules and the fundamental
limit, which had remained firm for decades through the
year 2006. Several of the common proposed explana-
tions, such as vibronic dilution, have been eliminated.[14]
Perhaps it is not possible to make large-enough varia-
tions of the potential energy function without making
the molecule unstable. Or, perhaps there are subtle is-
sues with electron correlation, which prevents electrons
from responding to light with their full potential. The
fact that the idea of modulation of conjugation has led
to a 50% increase over the long-standing ceiling - reduc-
ing the gap to a factor of twenty - makes it a promising
approach for potential further improvements. Continued
theoretical scrutiny, coupled with experiment, will be re-
quired to confirm the validity of our approach.
IV. CONCLUSIONS
There appear to be many potential energy functions
that lead to an intrinsic hyperpolarizability that is near
the fundamental limit. These separate into two broad
classes: one in which wiggles in the potential energy
function forces the eigenfunctions to be spatially sepa-
rated and a second class of monotonically varying wave-
functions with small or no wiggles that allow for many
strongly overlapping wavefunctions. Interestingly, all
these one-dimensional “molecules” have the same max-
imal intrinsic hyperpolarizability of 0.71. It is puzzling
that the three-level ansatz correctly predicts the funda-
mental limit even when the ansatz does not apply. A
second open question pertains to the origin of the long-
standing factor of 30 gap between the fundamental limit
and the best molecules. The idea of conjugation modu-
lation may be one promising approach for making wiggly
potential energy profiles that lead to molecules that fall
into the gap. Given that there are so many choices of
potential energy functions that lead to maximal intrinsic
hyperpolarizability, it may be possible to engineer many
new classes of exotic molecules with record intrinsic hy-
perpolarizability.
Acknowledgements: MGK thanks the National
Science Foundation (ECS-0354736) and Wright-Patterson
Air Force Base for generously supporting this work.
[1] Q. Y. Chen, L. Kuang, Z. Y. Wang, and E. H. Sargent,
Nano. Lett. 4, 1673 (2004).
[2] B. H. Cumpston, S. P. Ananthavel, S. Barlow, D. L. Dyer,
J. E. Ehrlich, L. L. Erskine, A. A. Heikal, S. M. Kuebler,
I.-Y. S. Lee, D. McCord-Maughon, et al., Nature 398, 51
(1999).
[3] S. Kawata, H.-B. Sun, T. Tanaka, and K. Takada, Nature
412, 697 (2001).
[4] A. Karotki, M. Drobizhev, Y. Dzenis, P. N. Taylor, H. L.
Anderson, and A. Rebane, Phys. Chem. Chem. Phys. 6,
7 (2004).
[5] I. Roy, T. Y. Ohulchanskyy, H. E. Pudavar, E. J. Bergey, A. R.
Oseroff, J. Morgan, T. J. Dougherty, and P. N. Prasad,
J. Am. Chem. Soc. 125, 7860 (2003).
[6] M. G. Kuzyk, Opt. Lett. 25, 1183 (2000).
[7] M. G. Kuzyk, IEEE Journal on Selected Topics in Quan-
tum Electronics 7, 774 (2001).
[8] M. G. Kuzyk, Phys. Rev. Lett. 85, 1218 (2000).
[9] M. G. Kuzyk, Opt. Lett. 28, 135 (2003).
[10] M. G. Kuzyk, Phys. Rev. Lett. 90, 039902 (2003).
[11] M. G. Kuzyk, J. Nonl. Opt. Phys. & Mat. 13, 461 (2004).
[12] M. G. Kuzyk, J. Chem Phys. 125, 154108 (2006).
[13] M. G. Kuzyk, Optics & Photonics News 14, 26 (2003).
[14] K. Tripathi, P. Moreno, M. G. Kuzyk, B. J. Coe,
K. Clays, and A. M. Kelley, J. Chem. Phys. 121, 7932
(2004).
[15] B. J. Orr and J. F. Ward, Molecular Physics 20, 513
(1971).
[16] J. Pérez Moreno, Y. Zhao, K. Clays, and M. G. Kuzyk,
Opt. Lett. 32, 59 (2007).
[17] J. Zhou, M. Kuzyk, and D. S. Watkins, Opt. Lett. 31,
2891 (2006).
[18] M. G. Kuzyk, Phys. Rev. A 72, 053819 (2005).
[19] M. G. Kuzyk and D. S. Watkins, J. Chem Phys. 124,
244104 (2006).
[20] D. Kincaid and E. W. Cheney, Numerical Analy-
sis: Mathematics of Scientific Computing (Brooks-Cole,
2002), 3rd ed.
[21] O. C. Zienkiewicz, R. L. Taylor, and J. Z. Zhu, The
Finite Element Method: Its Basis and Fundamentals
(Butterworth-Heinemanm, 2005), 6th ed.
[22] D. C. Sorensen, SIAM J. Matrix Anal. Appl. 13, 357
(1992).
[23] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. Wright,
SIAM J. Optim. 9, 112 (1998).
[24] B. Champagne and B. Kirtman, Phys. Rev. Lett. 95,
109401 (2005).
[25] M. G. Kuzyk, Phys. Rev. Lett. 95, 109402 (2005).
[26] M. G. Kuzyk, J. Nonl. Opt. Phys. & Mat. 15, 77 (2006).
[27] K. Atkinson and W. Han, Theoretical Numerical Anal-
ysis, a Functional Analysis Framework (Springer, New
York, 2001).
[28] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK
Users’ Guide: Solution of Large-Scale Eigenvalue Prob-
lems with Implicitly Restarted Arnoldi Methods (SIAM,
Philadelphia, 1998).
0704.1688 | Critical test for Altshuler-Aronov theory: Evolution of the density of
states singularity in double perovskite Sr$_2$FeMoO$_6$ with controlled
disorder | Critical test for Altshuler-Aronov theory: Evolution of the density of states
singularity in double perovskite Sr2FeMoO6 with controlled disorder
M. Kobayashi,1 K. Tanaka,2 A. Fujimori,1 Sugata Ray,3, 4 and D. D. Sarma4, 5
Department of Physics and Department of Complexity Science and Engineering,
University of Tokyo, Kashiwa, Chiba 277-8561, Japan
Department of Applied Physics and Stanford Synchrotron Radiation Laboratory,
Stanford University, Stanford, California 94305, USA
Materials and Structures Laboratory, Tokyo Institute of Technology, Midori, Yokohama 226-8503, Japan
Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore 560012, India
Centre for Advanced Materials, Indian Association for the Cultivation of Science, Kolkata 700032, India
(Dated: October 31, 2018)
With high-resolution photoemission spectroscopy measurements, the density of states (DOS) near
the Fermi level (EF) of double perovskite Sr2FeMoO6 having different degrees of Fe/Mo antisite
disorder has been investigated with varying temperature. The DOS near EF showed a systematic
depletion with increasing degree of disorder, and recovered with increasing temperature. Altshuler-
Aronov (AA) theory of disordered metals well explains the dependences of the experimental results.
Scaling analysis of the spectra provides experimental indication for the functional form of the AA
DOS singularity.
PACS numbers: 71.20.-b, 71.23.-k, 71.27.+a, 79.60.-i
Disordered electronic systems, which have random po-
tentials deviating from an ideal crystal, have been inves-
tigated from both fundamental and application points of
view [1]. Ever since the finding of filling-control metal-
insulator transitions (MIT) in transition-metal oxides
known as strongly correlated system, disorder has at-
tracted even more attention because not only electron-
electron interaction but also disorder are supposed to
play fundamentally important roles in the MIT. Altshuler
and Aronov [2] studied the effect of electron-electron in-
teraction in a disordered metallic medium, and predicted
that the density of states (DOS) near the Fermi level
(EF) shows a singularity of |E − EF|^{1/2} and the DOS at
EF increases with increasing temperature in proportion
to T^{1/2}. The theory has been applied to the low tempera-
ture conductivity of disordered metals such as disordered
Au and Ag films [3], amorphous alloy Ge1−xAux [4], and
transition metal chalcogenide Ni(S,Se)2 [5].
In a previous work, Sarma et al. [6] have reported
photoemission (PES) measurements on B-site disordered
perovskites LaNi1−xMxO3 (M=Mn and Fe), which show
MIT as a function of x, and shown that the disor-
der affects the DOS near EF in such a way that had
been theoretically predicted by Altshuler and Aronov
[2]. In a similar B-site substituted transition-metal ox-
ide SrRu1−xTixO3, which demonstrates MIT at x ∼ 0.3
(SrRuO3 is metallic), the depletion of the DOS near EF
has shown an unusual |E − EF|^{1.2} dependence in both
metallic and insulating phases [7]. In addition, although
it is believed that a disorder-induced insulator shows
a soft Coulomb gap characterized by a (E − EF)^2 de-
pendence of the DOS near EF [8, 9], the unexpected
|E − EF|^{3/2} dependence of the DOS near EF related
to charge density wave has been observed in insulating
BaIrO3 [10]. It is considered that the fine structure of the
DOS in the vicinity of EF is sensitive to both the degree
of disorder and the electron correlation; therefore, experi-
mental confirmation of a basic theory for disordered elec-
tronic systems, such as the Altshuler-Aronov (AA) theory,
is necessary for an understanding of the DOS singularity.
While AA theory makes specific predictions about both
the E and T dependences, photoelectron spectroscopy has
so far been used only to probe the E dependence, with
no reference to the T dependence. Therefore, detailed
high-resolution temperature-dependent PES measurements
are also highly desirable to verify the AA theory.
The present paper reports on high-resolution PES
experiments on the B-site ordered double perovskite
Sr2FeMoO6 (SFMO), where we have controlled the de-
gree of Fe/Mo antisite disorder (AD) in the sample prepa-
ration procedure. Through a detailed analysis of the tem-
perature and disorder dependences of the PES spectra
near EF, the results provide an experimental confir-
mation of the AA theory of disordered metals. SFMO
has been investigated intensively due to the theoreti-
cal prediction of half-metallic nature and the observa-
tion of large magnetoresistance under low magnetic fields
at room temperature [11]. In this system, there are
characteristic defects known as Fe/Mo AD at the B-
site, which remarkably affects the physical properties of
SFMO [12, 13, 14, 15, 16]. By controlling the degree of
AD, one can investigate disorder effects in the metal with-
out changing the chemical composition and other condi-
tions.
Polycrystalline SFMO samples having different degrees
of Fe/Mo AD were prepared as follows. First, the samples with
the highest degree of AD, 45%, were prepared. Then they
were annealed at 1173 K, 1673 K, and 1523 K for a pe-
riod of 5 hours under 2% H2/Ar to obtain degrees of AD
of 40%, 25%, and 10%, respectively (SFMO having AD
50% is the same as the ordinary perovskite SrFe0.5Mo0.5O3).
Details of the sample preparation are given in Ref. [12]
http://arxiv.org/abs/0704.1688v2
FIG. 1: Valence-band photoemission spectra of Sr2FeMoO6
taken at 10 K with He-I radiation. (a) Comparison between
the spectra taken from the fractured and scraped surfaces.
The spectra have been normalized to the area from 0 eV to
0.8 eV (Fe t2g↓-Mo t2g↓ states). (b) Comparison between
different degrees of antisite disorder (AD) 10% and 40%. The
spectra have been normalized to the area from 0.8 eV to 2
eV (Fe eg↑ states). Top: Background subtraction. The insets
show enlarged plots near EF.
and [13]. Using x-ray diffraction, the degree of disorder
was quantified from the intensity of a supercell-reflection
peak. Scanning electron microscopy in conjunction with
energy dispersive x-ray analysis revealed no change in
composition during the annealing. Transport measure-
ments were performed on the AD 10% and 40% sam-
ples by a standard four-probe technique using a Physi-
cal Property Measurement System (Quantum Design Co.
Ltd). PES spectra were recorded using a spectrometer
equipped with a monochromatized He resonance lamp
(hν = 21.2 eV), where photoelectrons were collected with
a Gammadata Scienta SES-100 hemispherical analyzer in
the angle integrated mode. The total resolution of the
spectrometer was ∼10 meV, and the base pressure was
1.0×10−8 Pa. Clean surfaces were obtained by repeated
scraping in situ with a diamond file. The position of EF
was determined by measuring PES spectra of evaporated
gold which was electrically in contact with the sample.
The line shapes of the PES spectra obtained were al-
most the same as those in a previous report on single
crystalline SFMO taken with synchrotron radiation [17].
In order to examine the influence of surface treatment,
we measured valence-band spectra taken from fractured
and scraped surfaces. In the binding-energy (EB) range
from 2 eV to 10 eV, there were appreciable differences
such as the sharpness of structures in the O 2p and the
Fe t2g↑ bands (not shown), the background intensity, and
to some extent the Fe eg↑ band. On the other hand,
within ∼ 1 eV of EF, i.e., in the Fe t2g↓ +Mo t2g↓ conduc-
tion bands, the line shapes of the fractured and scraped
samples were similar to each other as shown in Fig. 1(a).
FIG. 2: Photoemission spectra of Sr2FeMoO6 near the Fermi
level. The spectra have been normalized to the area from
EB = 0.3 eV to 0.6 eV. (a) Degree of Fe/Mo AD dependence
at 10 K. (b) Temperature dependence of the AD 10% sample.
As a reference, Au spectra are also shown. The insets show
an enlarged plot in the vicinity of EF.
Since previous PES studies have reported that the
LDA+U calculation explains well the valence-band spec-
tra taken from the fractured surface [17, 18], we consider
that the different surface treatments did not affect
the spectra near EF, which reflect the bulk properties.
In contrast, the spectra are strongly influenced by AD
and temperature, as we shall see below.
Figure 1(b) shows valence-band spectra for different
degrees of disorder, i.e., AD 40% and 10%. Although
the peak due to the localized Fe eg↑ states was nearly
identical between AD 10% and AD 40%, there was a
clear difference in the Fe t2g↓-Mo t2g↓ conduction band
between the two spectra. This result suggests that the
disorder influences the DOS near EF. Figure 2 shows the
temperature and degree of disorder dependences of the
spectra near EF normalized to the area in the region from
EB = 0.3 eV to 0.6 eV, in which the spectra were iden-
tical and independent of the degree of AD as shown in
Fig. 1(b). For a fixed temperature, the intensity of the
spectra near EF was depleted with increasing degree of disorder.
This behavior is consistent with the previous report on
a disordered metal system LaNi1−xMnxO3 [6]. Indeed,
temperature dependent resistivity measurements on the
AD 40% sample showed a minimum around 40 K, i.e., the
resistivity increased with decreasing temperature below
40 K, while for the AD 10% sample there was no minimum
down to the lowest measured temperature (20 K). The obser-
vations are consistent with previous reports of transport
measurements on SFMO having various degrees of dis-
order [16, 19] and on SFMO with high Fe/Mo ordering
[13, 20]. For a fixed degree of disorder, the intensity at
EF increased with temperature as shown in Fig. 2(b).
FIG. 3: Scaling analysis for the spectral depletion. Scaled
spectra as a function of √X (X = |ǫ|/kBT ) are plotted on
a bilogarithmic scale. The analytical form ϕ = [X^2 +
(1.07)^4]^{1/4} is also plotted.
This behavior indicates that SFMO differs from normal
metals such as Au, in which PES spectra at various tem-
peratures have a temperature-independent intensity at EF
and intersect at EF irrespective of temperature, as shown
in Fig. 2(b), representing the simple Fermi-Dirac distri-
bution function.
Now, we analyze the temperature and degree of disor-
der dependences of the spectra based on the theory for
disordered metals suggested by Altshuler and Aronov [2].
The theory predicted that electron-electron interaction
accompanied by impurity scattering leads to an anomaly
in the DOS around EF and the resulting singular part of
the DOS δD is given by
δD(ǫ)/D0 ∝ [√(kBT) / (H D0 (ℏD)^{3/2})] ϕ(|ǫ|/kBT),   (1)

ϕ(X) = { √X,             X ≫ 1
       { 1.07 + O(X^2),  X ≪ 1
where D0 is the original DOS at EF, ǫ is the energy mea-
sured from EF, H is the inverse Debye radius, and D is
the diffusion coefficient due to impurity scattering.
In order to see whether experimental spectra satisfy
Eq. (1), we have made the following scaling analysis.
According to Eq. (1), the PES spectra I(ǫ, T ) near EF
should be proportional to D′0+ δD(ǫ), where D′0 is a con-
stant and D′0 < D0. Therefore, I(ǫ, T ) can be parame-
terized as
I(ǫ, T ) = A + G √(kBT) ϕ(|ǫ|/kBT),   (2)
FIG. 4: Results of fitting of the photoemission spectra of
Sr2FeMoO6 to the analytical curves of Altshuler-Aronov the-
ory. (a) Fitted spectra for the AD 45% sample at various
temperatures. (b) Degree of disorder dependence of the val-
ues of γ/α. G/A is also plotted. The dotted line is a guide
to the eye. (c) Temperature dependence of the DOS at EF.
Solid curves demonstrate fitted curves using Eq. (4) with pa-
rameters deduced from the spectral line-shape fitting of panel (a).
where A and G depend only on the degree of dis-
order. (I − A)/(G√(kBT)), plotted against X = |ǫ|/kBT,
should fall onto the same curve if Eq. (1) is valid, and
can be used to evaluate the functional form of ϕ if A and
G are chosen to satisfy the condition that ϕ → 1.07
as ǫ/kBT → 0. In Fig. 3, the results are plotted on a log-
arithmic scale, where the PES spectra have been divided
by the Fermi-Dirac function convoluted with the experi-
mental resolution estimated from the Au spectra. Notice
that all the low energy part of the spectra fell onto the
same curve, which we attribute to the scaling function
ϕ(X). Deviation from the scaling function ϕ(X) occurs
at high energies where the original DOS starts to deviate
from the constant one. We find that ϕ(X) approaches
√X for X > 1, in accordance with AA theory. The results
confirm the validity of the analysis based on AA theory for
the depletion of the PES spectra near EF.
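This collapse procedure can be sketched numerically. In the toy example below, A, G, and the smooth interpolation between the two AA limits are illustrative choices (not values from the experiment); spectra generated from Eq. (2) at different temperatures fall onto a single curve when replotted as (I − A)/(G√(kBT)) versus X:

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant in eV/K

def phi(X):
    # Smooth interpolation between the Altshuler-Aronov limits:
    # phi -> sqrt(X) for X >> 1 and phi -> 1.07 for X << 1.
    return (X**2 + 1.07**4) ** 0.25

# Hypothetical spectra following Eq. (2): I = A + G*sqrt(kB*T)*phi(|eps|/kB*T)
A, G = 1.0, 0.8                      # illustrative disorder-dependent constants
eps = np.linspace(0.001, 0.1, 200)   # binding energy in eV
for T in (10.0, 100.0, 300.0):
    I = A + G * np.sqrt(kB * T) * phi(eps / (kB * T))
    X = eps / (kB * T)
    scaled = (I - A) / (G * np.sqrt(kB * T))
    # All temperatures collapse onto the same curve phi(X)
    assert np.allclose(scaled, phi(X))
```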
In order to analyze the spectra using Eq. (1), we pro-
pose an analytical form ϕ(X) = [X^2 + (1.07)^4]^{1/4} inter-
polating both the limits of large and small X in Eq. (1).
This form accurately reproduces the experimentally de-
duced scaling function ϕ(X), as shown in Fig. 3. There-
fore, we employ a model function

{α + γ [ǫ^2 + (1.07)^4 (kBT)^2]^{1/4}} f(ǫ),   (3)
where f(ǫ) is the Fermi-Dirac function and α and γ are fit-
ting parameters. γ/α depends only on the degree of disorder
and is independent of the intensity normalization.
Figure 4(a) shows fitted results for the spectra of the
AD 45% sample at various temperatures. The fitting
function given by Eq. (3) well reproduced the spectra
near EF, where the fitted ranges were chosen within the valid
range of the scaling function ϕ(X), as shown in Fig. 3 [21].
Figure 4(b) shows the values of the coefficients which rep-
resent the strength of the DOS singularity as a function
of disorder. The γ/α value is independent of temperature
and increases approximately linearly with the degree of
disorder as shown in Fig. 4(b), indicating that the DOS
singularity near EF is enhanced with increasing degree of
disorder, as predicted by AA theory. The constant G is
expected to be related to the value of γ; indeed, the G/A
value shows the same dependence as γ/α [Fig. 4(b)].
Equation (3) as a function of X without f, i.e., α +
γ[X^2 + (1.07)^4]^{1/4}, also reproduces the line shape of
the depletion well, as shown in Fig. 3, where the parameters
were chosen as α = 0 and γ = 1 to correspond to the
analytical ϕ(X) [22]. The result demonstrates the validity
of our assumption for the functional form of ϕ given by Eq. (3).
Equation (1) indicates that the singular contribution
to the DOS δD/D0 at EF is proportional to √T. In or-
der to study the temperature dependence of the DOS at
EF, comparison was made between the experimental and
theoretical δD/D0 at EF. The PES intensity (or DOS)
at EF is plotted as a function of temperature in Fig. 4(c).
For a fixed temperature, the DOS at EF increased with
decreasing degree of disorder. For a fixed degree of dis-
order, the DOS at EF increased with increasing temper-
ature. According to Eq. (1), temperature dependence of
the DOS at EF is expressed as
α + 1.07 γ √(kBT),   (4)

corresponding to Eq. (3) at ǫ = 0. Using the values of γ/α
obtained by fitting to the PES spectra, Eq. (4) well re-
produces the temperature dependence of the DOS at EF
as shown in Fig. 4(c). The result is consistent with the theo-
retically predicted temperature dependence of δD/D0 at
EF. It follows from the arguments described above that
AA theory applies to not only the disorder dependent
depletion near EF but also the temperature dependent
DOS at EF.
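At ǫ = 0 the bracketed factor of Eq. (3) reduces to Eq. (4), since [0 + (1.07)^4(kBT)^2]^{1/4} = 1.07√(kBT); a short numerical check (α and γ below are illustrative values, not fit results):

```python
import numpy as np

kB = 8.617e-5            # Boltzmann constant in eV/K
alpha, gamma = 1.0, 0.5  # illustrative fitting parameters

for T in (10.0, 100.0, 300.0):
    # Bracketed factor of Eq. (3) evaluated at eps = 0 ...
    model_at_EF = alpha + gamma * (0.0**2 + 1.07**4 * (kB * T)**2) ** 0.25
    # ... equals Eq. (4)
    eq4 = alpha + 1.07 * gamma * np.sqrt(kB * T)
    assert np.isclose(model_at_EF, eq4)
```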
As mentioned above, AA theory treats the scattering
processes on general grounds and does not depend on the
functional form of the potential of the scattering center.
The theory can be applied when the mean free path
of the itinerant electrons is larger than or comparable
to their wavelength.
In conclusion, we have performed high-resolution pho-
toemission experiments on polycrystalline Sr2FeMoO6
samples having different degrees of Fe/Mo antisite dis-
order. The photoemission spectra near the Fermi level
depended on degree of the Fe/Mo antisite disorder as
well as on temperature. The Altshuler-Aronov theory on
disordered metal well explained both the dependences.
Scaling analysis of the spectral depletion clarified the
functional form of the density of states singularity near
the Fermi level. We believe that these findings provide
an indicator of the degree of disorder and electron-electron
correlation, and will promote spectral analyses of the
density of states depletion near the Fermi level.
The present results point to a need for taking into ac-
count both electron-electron interaction and disorder ef-
fects for an understanding of the electronic structure of
metallic correlated electron systems.
The authors thank H. Yagi and M. Hashimoto for help
in experiments. This work was supported by a Grant-
in-Aid for Scientific Research in Priority Area “Inven-
tion of Anomalous Quantum Materials” (16076208) from
MEXT, Japan. D.D.S. thanks DST and BRNS for fund-
ing this research. S.R. thanks JSPS postdoctoral fellow-
ship for foreign researchers. M.K. acknowledges support
from the Japan Society for the Promotion of Science for
Young Scientists.
[1] P. A. Lee and T. V. Ramakrishnan, Rev. Mod. Phys. 57,
287 (1985).
[2] B. L. Altshuler and A. G. Aronov, Solid State Commun.
30, 115 (1979).
[3] S. Schmitz and S. Ewert, Solid State Commun. 74, 1067
(1990).
[4] W. L. McMillan and J. Mochel, Phys. Rev. Lett. 46, 556
(1981).
[5] A. Husmann, D. S. Jin, Y. V. Zastavker, T. F. Rosen-
baum, X. Yao, and J. M. Honig, Science 274, 1874
(1996).
[6] D. D. Sarma, A. Chainani, S. R. Krishnakumar, E.
Vescovo, C. Carbone, W. Eberhardt, O. Rader, C. Jung,
C. Hellwing, W. Gudat, H. Srikanth, and A. K. Ray-
chaudhuri, Phys. Rev. Lett. 80, 4004 (1998).
[7] J. Kim, J.-Y. Kim, B.-G. Park, and S.-J. Oh, Phys. Rev.
B 73, 235109 (2006).
[8] A. F. Efros and B. I. Shklovskii, J. Phys. C 8, L49 (1975).
[9] J. G. Massey and M. Lee, Phys. Rev. Lett. 75, 4266
(1995).
[10] K. Maiti, R. S. Singh, V. R. R. Medicherla, S. Rayaprol,
and E. V. Sampathkumaran, Phys. Rev. Lett. 95, 016404
(2005).
[11] K.-I. Kobayashi, T. Kimura, H. Sawada, K. Terakura,
and Y. Tokura, Nature 395, 677 (1998).
[12] D. D. Sarma, E. V. Sampathkumaran, S. Ray, R. Natara-
jan, S. Majumdar, A. Kimar, G. Nalini, and T. N. G.
Row, Solid State Commun. 114, 465 (2000).
[13] D. D. Sarma, Sugata Ray, K. Tanaka, and A. Fujimori,
PRL in press.
[14] J. Navarro, J. Nogués, J. S. Muñoz, and J. Fontcuberta,
Phys. Rev. B 67, 174416 (2003).
[15] B. J. Park, H. Han, J. Kim, Y. J. Kim, C. S. Kim, and
B. W. Lee, J. Magn. Magn. Mater. 272-276, 1851 (2004).
[16] Y. H. Huang, M. Karppinen, H. Yamauchi, and J. B.
Goodenough, Phys. Rev. B 73, 104408 (2006).
[17] T. Saitoh, M. Nakatake, A. Kakizaki, H. Nakajima, O.
Morimoto, S. Xu, Y. Moritomo, N. Hamada, and Y.
Aiura, Phys. Rev. B 66, 035112 (2002).
[18] J.-S. Kang, J. H. Kim, A. Sekiyama, S. Kasai, S. Suga,
S. W. Han, K. H. Kim, T. Muro, Y. Saitoh, C. Hwang,
C. G. Olson, B. J. Park, B. W. Lee, J. H. Shin, J. H.
Park, and B. I. Min, Phys. Rev. B 66, 113105 (2002).
[19] Y.-H. Huang, H. Yamauchi, and M. Karppinen, Phys.
Rev. B 74, 174418 (2006).
[20] Y. Tomioka, T. Okuda, Y. Okimoto, R. Kumai, K.-I.
Kobayashi, and Y. Tokura, Phys. Rev. B 61, 422 (2000).
[21] The starting points are about 76 meV, 109 meV, 129
meV, and 145 meV for 10 K, 100 K, 200 K, and 300 K,
respectively.
[22] The analytical form of ϕ(X) can be expressed as the more
general formula ϕn(X) = [X^{2n} + (1.07)^{4n}]^{1/4n}, which
also satisfies both limits of X. Using this function, the
scaling function is fitted well with n = 1.09 ± 0.14.
This result supports the validity of the analytical
ϕ(X) because the exponent n is nearly 1.
|
0704.1689 | Some Properties of and Open Problems on Hessian Nilpotent Polynomials | SOME PROPERTIES OF AND OPEN PROBLEMS ON
HESSIAN NILPOTENT POLYNOMIALS
WENHUA ZHAO
Abstract. In the recent work [BE1], [M], [Z1] and [Z2], the well-known
Jacobian conjecture ([BCW], [E]) has been reduced to a problem on HN
(Hessian nilpotent) polynomials (the polynomials whose Hessian matrix
are nilpotent) and their (deformed) inversion pairs. In this paper, we
prove several results on HN polynomials, their (deformed) inversion pairs
as well as on the associated symmetric polynomial or formal maps. We
also propose some open problems for further study of these objects.
1. Introduction
In the recent work [BE1], [M], [Z1] and [Z2], the well-known Jacobian
conjecture (see [BCW] and [E]) has been reduced to a problem on HN (Hes-
sian nilpotent) polynomials, i.e. the polynomials whose Hessian matrix are
nilpotent, and their (deformed) inversion pairs. In this paper, we prove some
properties of HN polynomials, the (deformed) inversion pairs of (HN) poly-
nomial, the associated symmetric polynomial or formal maps, the graphs
assigned to homogeneous harmonic polynomials, etc. Another purpose of
this paper is to draw the reader’s attention to some open problems which we
believe will be interesting and important for further study of these objects.
In this section we first discuss some backgrounds and motivations in Sub-
section 1.1 for the study of HN polynomials and their (deformed) inversion
pairs. We also fix some terminology and notation in this subsection that
will be used throughout this paper. Then in Subsection 1.2 we give an
arrangement description of this paper.
Date: November 17, 2021.
2000 Mathematics Subject Classification. 14R15, 32H02, 32A50.
Key words and phrases. Hessian nilpotent polynomials, inversion pairs, harmonic
polynomials, the Jacobian conjecture.
The author has been partially supported by NSA Grant R1-07-0053.
http://arxiv.org/abs/0704.1689v2
2 WENHUA ZHAO
1.1. Background and Motivation. Let z = (z1, z2, . . . , zn) be n free com-
mutative variables. We denote by C[z] (resp.C[[z]]) the algebra of poly-
nomials (resp. formal power series) of z over C. A polynomial or formal
power series P (z) is said to be HN (Hessian nilpotent) if its Hessian matrix
HesP := (∂^2P/∂zi∂zj) is nilpotent. The study of HN polynomials is mainly mo-
tivated by the recent progress achieved in [BE1], [M], [Z1] and [Z2] on the
well-known JC (Jacobian conjecture), which we will briefly explain below.
Recall that the JC first proposed by Keller [Ke] in 1939 claims: for any
polynomial map F of Cn with the Jacobian j(F ) = 1, its formal inverse
map G must also be a polynomial map. Despite intense study for more than
half a century, the conjecture is still open even for the case n = 2. For
more history and known results before 2000 on the Jacobian conjecture, see
[BCW], [E] and references there. In 2003, M. de Bondt, A. van den Essen
([BE1]) and G. Meng ([M]) independently made the following breakthrough
on the JC.
Let Di := ∂/∂zi (1 ≤ i ≤ n) and D = (D1, . . . , Dn). For any P (z) ∈ C[[z]],
denote by ∇P (z) the gradient of P (z), i.e. ∇P (z) := (D1P (z), . . . , DnP (z)).
We say a formal map F (z) = z − H(z) is symmetric if H(z) = ∇P (z) for
some P (z) ∈ C[[z]]. Then, the symmetric reduction of the JC achieved in
[BE1] and [M] is that, to prove or disprove the JC, it will be enough to
consider only symmetric polynomial maps. Combining with the classical
homogeneous reduction achieved in [BCW] and [Y], one may further assume
that the symmetric polynomial maps have the form F (z) = z−∇P (z) with
P (z) homogeneous (of degree 4). Note that, in this case the Jacobian condi-
tion j(F ) = 1 is equivalent to the condition that P (z) is HN. For some other
recent results on symmetric polynomial or formal maps, see [BE1]–[BE5],
[EW], [M], [Wr1], [Wr2], [Z1], [Z2] and [EZ].
Based on the homogeneous reduction and the symmetric reduction of
the JC discussed above, the author further showed in [Z2] that the JC is
actually equivalent to the following so-called vanishing conjecture of HN
polynomials.
Conjecture 1.1. (Vanishing Conjecture) Let ∆ := ∑_{i=1}^n D_i^2 be the
Laplace operator of C[z]. Then, for any HN polynomial P (z) (homogeneous
of degree d = 4), ∆^m P^{m+1}(z) = 0 when m >> 0.
Furthermore, the following criterion of Hessian nilpotency for formal
power series was also proved in [Z2].
Proposition 1.2. For any P (z) ∈ C[[z]] with o(P (z)) ≥ 2, the following
statements are equivalent.
(1) P (z) is HN.
(2) ∆^m P^m = 0 for any m ≥ 1.
(3) ∆^m P^m = 0 for any 1 ≤ m ≤ n.
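As a concrete illustration of this criterion, the sketch below verifies Hessian nilpotency and the vanishing of ∆^m P^m for the example P(z) = (z1 + iz2)^4 (our own choice of example, not from the text):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
P = (z1 + sp.I * z2) ** 4   # an example of an HN polynomial in two variables

# Hessian matrix of P
H = sp.Matrix(2, 2, lambda i, j: sp.diff(P, [z1, z2][i], [z1, z2][j]))
# Nilpotency: here H^2 = 0 already, so H is nilpotent
assert sp.simplify(H * H) == sp.zeros(2, 2)

# Criterion of Proposition 1.2: Delta^m P^m = 0 for 1 <= m <= n (n = 2)
lap = lambda g: sp.diff(g, z1, 2) + sp.diff(g, z2, 2)
for m in (1, 2):
    g = P ** m
    for _ in range(m):
        g = lap(g)
    assert sp.expand(g) == 0
```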
One crucial idea of the proofs in [Z2] for the results above is to study
a special formal deformation of symmetric formal maps. More precisely,
let t be a central formal parameter. For any P (z) ∈ C[[z]], we call F (z) =
z−∇P (z) the associated symmetric maps of P (z). Let Ft(z) = z−t∇P (z).
When the order o(P (z)) of P (z) with respect to z is greater than or equal
to 2, Ft(z) is a formal map of C[[t]][[z]] with Ft=1(z) = F (z). Therefore, we
may view Ft(z) as a formal deformation of the formal map F (z). In this
case, one can also show (see [M] or Lemma 3.14 in [Z1]) that the formal
inverse map Gt(z) := F_t^{−1}(z) of Ft(z) does exist and is also symmetric,
i.e. there exists a unique Qt(z) ∈ C[[t]][[z]] with o(Qt(z)) ≥ 2 such that
Gt(z) = z + t∇Qt(z). We call Qt(z) the deformed inversion pair of P (z).
Note that, whenever Qt=1(z) makes sense, the formal inverse map G(z) of
F (z) is given by G(z) = Gt=1(z) = z + ∇Qt=1(z), so in this case we call
Q(z) := Qt=1(z) the inversion pair of P (z).
Note that, under the condition o(P (z)) ≥ 2, the deformed inversion pair
Qt(z) of P (z) might not be in C[t][[z]], so Qt=1(z) may not make sense.
But, if we assume further that J(Ft)(0) = 1, or equivalently, (HesP )(0) is
nilpotent, then Ft(z) is an automorphism of C[t][[z]], hence so is its inverse
map Gt(z). Therefore, in this case Qt(z) lies in C[t][[z]] and Qt=1(z) makes
sense. Throughout this paper, whenever the inversion pair Q(z) of a poly-
nomial or formal power series P (z) ∈ C[[z]] (not necessarily HN) is under
concern, our assumption on P (z) will always be o(P (z)) ≥ 2 and (HesP )(0)
is nilpotent. Note that, for any HN P (z) ∈ C[[z]] with o(P (z)) ≥ 2, the
condition that (HesP )(0) is nilpotent holds automatically.
For later purpose, let us recall the following formula derived in [Z2] for
the deformed inversion pairs of HN formal power series.
Theorem 1.3. Suppose P (z) ∈ C[[z]] with o(P (z)) ≥ 2 is HN. Then, we
have

Qt(z) = ∑_{m≥0} [t^m / (2^m m! (m+ 1)!)] ∆^m P^{m+1}(z).   (1.1)
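For the example P(z) = (z1 + iz2)^4 (again our own choice), every term of (1.1) with m ≥ 1 vanishes, so Qt(z) = P(z); the sketch below confirms symbolically that Gt(z) = z + t∇P(z) then inverts Ft(z) exactly:

```python
import sympy as sp

z1, z2, t = sp.symbols('z1 z2 t')
P = (z1 + sp.I * z2) ** 4   # HN example; here Delta^m P^(m+1) = 0 for m >= 1

grad = lambda g: [sp.diff(g, z1), sp.diff(g, z2)]
lap = lambda g: sp.diff(g, z1, 2) + sp.diff(g, z2, 2)

# In Eq. (1.1) only the m = 0 term survives for this P, so Q_t = P
assert sp.expand(lap(P**2)) == 0
Qt = P

Ft = [z1 - t * grad(P)[0], z2 - t * grad(P)[1]]
Gt = [z1 + t * grad(Qt)[0], z2 + t * grad(Qt)[1]]
# Compose F_t after G_t and check we get the identity map back
comp = [sp.expand(f.subs({z1: Gt[0], z2: Gt[1]}, simultaneous=True)) for f in Ft]
assert comp == [z1, z2]
```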
From the equivalence of the JC and the VC discussed above, we see
that the study on the HN polynomials and their (deformed) inversion pairs
becomes important and necessary, at least when the JC is concerned. Note
that, due to the identity TrHesP = ∆P , HN polynomials are just a special
family of harmonic polynomials which are among the most classical objects
in mathematics. Even though harmonic polynomials had been very well
studied since the late of the eighteenth century, it seems that not much has
been known on HN polynomials. We believe that these mysterious (HN)
polynomials deserve much more attentions from mathematicians.
1.2. Arrangement. Considering the length of this paper, we here give a
more detailed arrangement description of the paper.
In Section 2, we consider the following two questions. Let P, S, T ∈ C[[z]]
with P = S + T and Q,U, V their inversion pairs, respectively.
Q1: Under what conditions, P is HN iff both S and T are HN?
Q2: Under what conditions, we have Q = U + V ?
We give some sufficient conditions in Theorems 2.1 and 2.7 for the two
questions above. In Section 3, we employ a recursion formula of inversion
pairs derived in [Z1] and Eq. (1.1) above to derive some estimates for the
radius of convergence of inversion pairs of homogeneous (HN) polynomials
(see Propositions 3.1 and 3.3).
For any P (z) ∈ C[[z]], we say it is self-inverting if its inversion pair Q(z)
is P (z) itself. In Section 4, by using a general result on quasi-translations
proved in [B], we derive some properties of HN self-inverting formal power
series P (z). Another purpose of this section is to draw the reader’s attention
to Open Problem 4.8 on classification of HN self-inverting polynomials or
formal power series.
In Section 5, we show in Proposition 5.1, when the base field has char-
acteristic p > 0, the VC, unlike the JC, actually holds for any polynomials
P (z) even without the HN condition on P (z). It also holds in this case for
any HN formal power series. One interesting question (see Open Problem
5.2) is to see if the VC like the JC fails over C when P (z) is allowed to be
any HN formal power series.
In Section 6, we prove a criterion of Hessian nilpotency for homogeneous
polynomials over C (see Theorem 6.1). Considering the criterion in Propo-
sition 1.2, this criterion is somewhat surprising but its proof turns out to
be very simple.
Section 7 is mainly motivated by the following question raised by M.
Kumar ([K]) and D. Wright ([Wr3]). Namely, for a symmetric formal map
F (z) = z −∇P (z), how to write f(z) := (1/2)σ2 − P (z) (where σ2 := ∑_{i=1}^n z_i^2)
and P (z) itself as formal power series in F (z)? In this section, we derive
some explicit formulas to answer the questions above and also for the same
question for σ2 (see Proposition 7.2). From these formulas, we also show in
Theorem 7.4 that, the VC holds for a HN polynomial P (z) iff one (hence,
all) of σ2, P (z) and f(z) can be written as a polynomial in F , where F (z) =
z −∇P (z) is the associated polynomial maps of P (z).
Finally, in Section 8, we discuss a graph G(P ) assigned to each homo-
geneous harmonic polynomials P (z). The graph G(P ) was first proposed
by the author and later was further studied by Roel Willems in his master
thesis [Wi] under the direction of Professor Arno van den Essen. In Subsection
8.1 we give the definition of the graph G(P ) for any homogeneous harmonic
polynomial P (z) and discuss the connectedness reduction (see Corollary 8.5)
which says, to study the VC for homogeneous HN polynomials P (z), it will
be enough to consider the case when the graph G(P ) is connected. In Sub-
section 8.2 we consider a connection of G(P ) with the tree expansion formula
derived in [M] and [Wr2] for the inversion pair Q(z) of P (z) (see also Propo-
sition 8.9). As an application of the connection, we use it to give another
proof for the connectedness reduction discussed in Corollary 8.5.
One final remark on the paper is as follows. Even though we could have
focused only on (HN) polynomials, at least when only the JC is concerned,
we will formulate and prove our results in the more general setting of (HN)
formal power series whenever it is possible.
Acknowledgement: The author is very grateful to Professors Arno van
den Essen, Mohan Kumar and David Wright for inspiring communications
and constant encouragement. Section 7 was mainly motivated by some
questions raised by Professors Mohan Kumar and David Wright. The author
also would like to thank Roel Willems for sending the author his master
thesis in which he has obtained some very interesting results on the graphs
G(P ) of homogeneous harmonic polynomials. Last but not least, the
author thanks the referee and the editor for many valuable suggestions.
2. Disjoint Formal Power Series and Their Deformed Inversion
Pairs
Let P, S, T ∈ C[[z]] with P = S + T , and Q, U and V their inversion
pairs, respectively. In this section, we consider the following two questions:
Q1: Under what conditions, P is HN if and only if both S and T are HN?
Q2: Under what conditions, we have Q = U + V ?
We give some answers to the questions Q1 and Q2 in Theorems 2.1 and
2.7, respectively. The results proved here will also be needed in Section 8
when we consider a graph associated to homogeneous harmonic polynomials.
To question Q1 above, we have the following result.
Theorem 2.1. Let S, T ∈ C[[z]] such that 〈∇(DiS),∇(DjT )〉 = 0 for any
1 ≤ i, j ≤ n, where 〈·, ·〉 denotes the standard C-bilinear form of Cn. Let
P = S + T . Then, we have
(a) Hes (S)Hes (T ) = Hes (T )Hes (S) = 0.
(b) P is HN iff both S and T are HN.
Note that statement (b) in the theorem above was first proved by R.
Willems ([Wi]) in a special setting as in Lemma 2.6 below for homogeneous
harmonic polynomials.
Proof: (a) For any 1 ≤ i, j ≤ n, consider the (i, j)th entry of the product
Hes (S)Hes (T ):
∑_{k=1}^n (∂^2S/∂zi∂zk)(∂^2T/∂zk∂zj) = 〈∇(DiS),∇(DjT )〉 = 0.   (2.1)
Hence Hes (S) Hes (T ) = 0. Similarly, we have Hes (T ) Hes (S) = 0.
(b) follows directly from (a) and the lemma below. ✷
Lemma 2.2. Let A, B and C be n × n matrices with entries in any com-
mutative ring. Suppose that A = B + C and BC = CB = 0. Then, A is
nilpotent iff both B and C are nilpotent.
Proof: The (⇐) part is trivial because B and C in particular commute
with each other.
To show (⇒), note that BC = CB = 0. So for any m ≥ 1, we have
A^m B = (B + C)^m B = (B^m + C^m)B = B^{m+1}.
Similarly, we have C^{m+1} = A^m C. Therefore, if A^N = 0 for some N ≥ 1, we
have B^{N+1} = C^{N+1} = 0. ✷
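A minimal numerical illustration of the lemma, with B and C nilpotent blocks supported on disjoint diagonal positions (our own example):

```python
import numpy as np

N = np.array([[0, 1], [0, 0]])          # nilpotent 2x2 block
Z = np.zeros((2, 2), dtype=int)

# B and C sit on disjoint diagonal blocks, so BC = CB = 0
B = np.block([[N, Z], [Z, Z]])
C = np.block([[Z, Z], [Z, N]])
A = B + C

assert (B @ C == 0).all() and (C @ B == 0).all()
# (<=) direction: A = B + C is nilpotent (here A^2 = 0)
assert (np.linalg.matrix_power(A, 2) == 0).all()
# (=>) direction uses A^m B = B^(m+1); indeed B^2 = 0 here as well
assert (np.linalg.matrix_power(B, 2) == 0).all()
```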
Note that, for the (⇐) part of (b) in Theorem 2.1, we need only a weaker
condition. Namely, for any 1 ≤ i, j ≤ n,
〈∇(DiS),∇(DjT )〉 = 〈∇(DjS),∇(DiT )〉,
which will ensure that Hes (S) and Hes (T ) commute.
To consider the second question Q2, let us first fix the following notation.
For any P ∈ C[[z]], let A(P ) denote the subalgebra of C[[z]] generated
by all partial derivatives of P (of any order). We also define a sequence
{Q[m](z) | m ≥ 1} by writing the deformed inversion pair Qt(z) of P (z) as

Qt(z) = ∑_{m≥1} t^{m−1} Q[m](z).   (2.2)
Lemma 2.3. For any P ∈ C[[z]], we have
(a) A(P ) is closed under the action of any differential operator of C[z]
with constant coefficients.
(b) For any m ≥ 1, we have Q[m](z) ∈ A(P ).
Proof: (a) Note that, by the definition of A(P ), a formal power series
g(z) ∈ C[[z]] lies in A(P ) iff it can be written (not necessarily uniquely)
as a polynomial in partial derivatives of P (z). Then, by the Leibniz Rule,
it is easy to see that, for any g(z) ∈ A(P ), Dig(z) ∈ A(P ) (1 ≤ i ≤ n).
Repeating this argument, we see that any partial derivative of g(z) is in
A(P ). Hence (a) follows.
(b) Recall that, by Proposition 3.7 in [Z1], we have the following recurrent
formula for Q[m](z) (m ≥ 1) in general:
Q[1](z) = P (z),   (2.3)

Q[m](z) = [1/(2(m− 1))] ∑_{k,l≥1, k+l=m} 〈∇Q[k](z),∇Q[l](z)〉   (2.4)
for any m ≥ 2.
By using (a), the recurrent formulas above and induction on m ≥ 1, it is
easy to check that (b) holds too. ✷
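The recursion (2.3)-(2.4) can be run symbolically. The sketch below uses the arbitrary test polynomial P = z1^2 z2 (our own choice; o(P) ≥ 2 and Hes P(0) = 0) and checks that the truncated Qt inverts Ft to the expected order in t:

```python
import sympy as sp

z1, z2, t = sp.symbols('z1 z2 t')
zs = (z1, z2)
P = z1**2 * z2   # test polynomial with o(P) >= 2 and nilpotent Hes P(0)

grad = lambda g: [sp.diff(g, v) for v in zs]
bil = lambda u, v: sum(a * b for a, b in zip(u, v))  # standard C-bilinear form

M = 4
Q = {1: P}                            # Eq. (2.3)
for m in range(2, M + 1):             # Eq. (2.4)
    Q[m] = sp.expand(sp.Rational(1, 2 * (m - 1)) *
                     sum(bil(grad(Q[k]), grad(Q[m - k])) for k in range(1, m)))

Qt = sum(t**(m - 1) * Q[m] for m in range(1, M + 1))
Ft = [v - t * g for v, g in zip(zs, grad(P))]
Gt = [v + t * g for v, g in zip(zs, grad(Qt))]

# F_t(G_t(z)) = z + O(t^(M+1)) when Q_t is truncated after Q_[M]
comp = [f.subs(dict(zip(zs, Gt)), simultaneous=True) for f in Ft]
for c, v in zip(comp, zs):
    assert sp.expand(sp.series(c - v, t, 0, M + 1).removeO()) == 0
```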
Definition 2.4. For any S, T ∈ C[[z]], we say S and T are disjoint to each
other if, for any g1 ∈ A(S) and g2 ∈ A(T ), we have 〈∇g1,∇g2〉 = 0.
This terminology will be justified in Section 8 when we consider a graph
G(P ) associated to homogeneous harmonic polynomials P .
Lemma 2.5. Let S, T ∈ C[[z]]. Then S and T are disjoint to each other
iff, for any α, β ∈ Nn, we have
〈∇(DαS),∇(DβT )〉 = 0.(2.5)
Proof: The (⇒) part of the lemma is trivial. Conversely, for any g1 ∈
A(S) and g2 ∈ A(T ) (i = 1, 2), we need show
〈∇g1,∇g2〉 = 0.
But this can be easily checked by, first, reducing to the case that g1 and
g2 are monomials of partial derivatives of S and T , respectively, and then
applying the Leibniz rule and Eq. (2.5) above. ✷
A family of examples of disjoint polynomials or formal power series is
given in the following lemma, which will also be needed later in Section 8.

Lemma 2.6. Let I1 and I2 be two finite subsets of C^n such that, for any
αi ∈ Ii (i = 1, 2), we have 〈α1, α2〉 = 0. Denote by Ai (i = 1, 2) the
completion of the subalgebra of C[[z]] generated by hα(z) := 〈α, z〉 (α ∈ Ii),
i.e. Ai is the set of all formal power series in hα(z) (α ∈ Ii) over C. Then,
for any Pi ∈ Ai (i = 1, 2), P1 and P2 are disjoint.
Proof: First, by a similar argument as in the proof of Lemma 2.3, (a), it
is easy to check that Ai (i = 1, 2) are closed under the action of any
differential operator with constant coefficients. Secondly, since Ai (i = 1, 2) are
subalgebras of C[[z]], we have A(Pi) ⊂ Ai (i = 1, 2).
Therefore, to show P1 and P2 are disjoint to each other, it will be enough
to show that, for any gi ∈ Ai (i = 1, 2), we have 〈∇g1,∇g2〉 = 0. But this
can be easily checked by first reducing to the case when gi (i = 1, 2) are
monomials of hα(z) (α ∈ Ii), and then applying the Leibniz rule and the
following identity: for any α, β ∈ C^n,
〈∇hα(z),∇hβ(z)〉 = 〈α, β〉. ✷
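Lemma 2.6 can be illustrated numerically. Below is a small sympy check with hypothetical orthogonal vectors α = (1, i, 0, 0) and β = (0, 0, 1, i) in C⁴; the polynomials P1, P2 are arbitrary sample elements of A1 and A2:

```python
# Symbolic check of Lemma 2.6: polynomials in h_alpha and h_beta have
# orthogonal gradients when <alpha, beta> = 0 (hypothetical example vectors).
import sympy as sp

z = sp.symbols('z1:5')           # z1..z4
alpha = (1, sp.I, 0, 0)
beta  = (0, 0, 1, sp.I)
assert sum(a * b for a, b in zip(alpha, beta)) == 0   # <alpha, beta> = 0

h_a = sum(a * v for a, v in zip(alpha, z))
h_b = sum(b * v for b, v in zip(beta, z))

def pairing(f, g):  # <grad f, grad g> w.r.t. the C-bilinear form
    return sp.expand(sum(sp.diff(f, v) * sp.diff(g, v) for v in z))

# P1 in A1, P2 in A2: sample polynomials in h_alpha resp. h_beta
P1 = h_a**3 + 2 * h_a**2
P2 = 5 * h_b**4 - h_b
assert pairing(P1, P2) == 0
# disjointness in the sense of Definition 2.4 also covers partial derivatives
assert pairing(sp.diff(P1, z[0]), sp.diff(P2, z[2])) == 0
```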
Now, for the second question Q2 on page 6, we have the following result.
SOME PROPERTIES OF AND OPEN PROBLEMS ON HNPS 9
Theorem 2.7. Let P, S, T ∈ C[[z]] with order greater than or equal to
2, and Qt, Ut, Vt their deformed inversion pairs, respectively. Assume that
P = S + T and S, T are disjoint to each other. Then
(a) Ut and Vt are also disjoint to each other, i.e. for any α, β ∈ N^n, we have
〈∇(D^α Ut(z)), ∇(D^β Vt(z))〉 = 0.
(b) We further have
Qt = Ut + Vt.    (2.6)
Proof: (a) follows directly from Lemma 2.3, (b) and Lemma 2.5.
(b) Let Q[m], U[m] and V[m] (m ≥ 1) be defined as in Eq. (2.2). Hence it
will be enough to show
Q[m] = U[m] + V[m](2.7)
for any m ≥ 1.
We use induction on m ≥ 1. When m = 1, Eq. (2.7) follows from the
condition P = S + T and Eq. (2.3) . For any m ≥ 2, by Eq. (2.4) and the
induction assumption, we have
Q[m] = (1/(2(m−1))) Σ_{k,l≥1, k+l=m} 〈∇Q[k], ∇Q[l]〉
     = (1/(2(m−1))) Σ_{k,l≥1, k+l=m} 〈∇U[k] + ∇V[k], ∇U[l] + ∇V[l]〉.
Noting that, by Lemma 2.3, U[j] ∈ A(S) and V[j] ∈ A(T ) (1 ≤ j ≤ m), so that the cross terms vanish by disjointness:
     = (1/(2(m−1))) Σ_{k,l≥1, k+l=m} 〈∇U[k], ∇U[l]〉 + (1/(2(m−1))) Σ_{k,l≥1, k+l=m} 〈∇V[k], ∇V[l]〉.
Applying the recursion formula Eq. (2.4) to both U[m] and V[m]:
     = U[m] + V[m]. ✷
As will be pointed out later in Remark 8.11, one can also prove this
theorem by using the tree expansion formula of inversion pairs derived in
[M] and [Wr2], in the setting of Lemma 2.6.
From Theorems 2.1, 2.7 and Eqs. (1.1), (2.2), it is easy to see that we
have the following corollary.

Corollary 2.8. Let Pi ∈ C[[z]] (1 ≤ i ≤ k) which are disjoint to each other.
Set P = Σ_{i=1}^k Pi. Then, we have
(a) P is HN iff each Pi is HN.
(b) Suppose that P is HN. Then, for any m ≥ 0, we have
Δ^m P^{m+1} = Σ_{i=1}^k Δ^m Pi^{m+1}.    (2.8)
Consequently, if the VC holds for each Pi, then it also holds for P .
3. Local Convergence of Deformed Inversion Pairs of
Homogeneous (HN) Polynomials
Let P (z) be a formal power series which is convergent near 0 ∈ C^n. Then
the associated symmetric map F (z) = z − ∇P is a well-defined analytic
map from an open neighborhood of 0 ∈ C^n to C^n. If we further assume that
JF (0) = I_{n×n}, the formal inverse G(z) = z + ∇Q(z) of F (z) is also a locally
well-defined analytic map, so the inversion pair Q(z) of P (z) is also locally
convergent near 0 ∈ C^n. In this section, we use the formulas Eqs. (2.4),
(1.1) and the Cauchy estimates to derive some estimates for the radius of
convergence of the inversion pairs Q(z) of homogeneous (HN) polynomials P (z)
(see Propositions 3.1 and 3.3).
First let us fix the following notation.
For any a ∈ C^n and r > 0, we denote by B(a, r) (resp. S(a, r)) the open
ball (resp. the sphere) centered at a ∈ C^n with radius r > 0. The unit
sphere S(0, 1) will also be denoted by S^{2n−1}. Furthermore, we let Ω(a, r)
be the polydisk centered at a ∈ C^n with radius r > 0, i.e. Ω(a, r) := {z ∈
C^n | |zi − ai| < r, 1 ≤ i ≤ n}. For any subset A ⊂ C^n, we will use Ā to
denote the closure of A in C^n.
For any polynomial P (z) ∈ C[z] and a compact subset D ⊂ C^n, we set
|P|_D to be the maximum value of |P (z)| over D. In particular, when D is
the unit sphere S^{2n−1}, we also write |P| = |P|_D, i.e.
|P| := max{ |P (z)| | z ∈ S^{2n−1} }.    (3.1)
SOME PROPERTIES OF AND OPEN PROBLEMS ON HNPS 11
Note that, for any r ≥ 0 and a ∈ B(0, r), we have Ω(a, r) ⊂ B(a, r) ⊂
B(0, 2r). Combining this with the well-known Maximum Principle of holomorphic
functions, we get
|P|_{Ω(a,r)} ≤ |P|_{B(a,r)} ≤ |P|_{B(0,2r)} = |P|_{S(0,2r)}.    (3.2)
For the inversion pairs Q of homogeneous polynomials P without HN
condition, we have the following estimate for the radius of convergence at
0 ∈ Cn.
Proposition 3.1. Let P (z) be a non-zero homogeneous polynomial (not
necessarily HN) of degree d ≥ 3 and r0 = (n 2^{d−1} |P|)^{1/(2−d)}. Then the inversion
pair Q(z) converges over the open ball B(0, r0).
To prove the proposition, we need the following lemma.
Lemma 3.2. Let P (z) be any polynomial and r > 0. Then, for any a ∈
B(0, r) and m ≥ 1, we have
|Q[m](a)| ≤ n^{m−1} |P|^m_{S(0,2r)} / (2^{m−1} r^{2m−2}).    (3.3)
Proof: We use induction on m ≥ 1. First, when m = 1, by Eq. (2.3) we
have Q[1] = P . Then Eq. (3.3) follows from the fact B(a, r) ⊂ B(0, 2r) and
the maximum principle of holomorphic functions.
Assume Eq. (3.3) holds for any 1 ≤ k ≤ m − 1. Then, by the Cauchy
estimates of holomorphic functions (e.g. see Theorem 1.6 in [R]), we have
|(D_i Q[k])(a)| ≤ (1/r) |Q[k]|_{Ω(a,r)} ≤ n^{k−1} |P|^k_{B(0,2r)} / (2^{k−1} r^{2k−1}).    (3.4)
By Eqs. (2.4) and (3.4), we have
|Q[m](a)| ≤ (1/(2(m−1))) Σ_{k,l≥1, k+l=m} |〈∇Q[k], ∇Q[l]〉|
          ≤ (n/(2(m−1))) Σ_{k,l≥1, k+l=m} (n^{k−1} |P|^k_{S(0,2r)} / (2^{k−1} r^{2k−1})) · (n^{l−1} |P|^l_{S(0,2r)} / (2^{l−1} r^{2l−1}))
          ≤ n^{m−1} |P|^m_{S(0,2r)} / (2^{m−1} r^{2m−2}),
since the number of pairs (k, l) with k, l ≥ 1 and k + l = m is m − 1. ✷
Proof of Proposition 3.1: By Eq. (2.2), we know that
Q(z) = Σ_{m≥1} Q[m](z).    (3.5)
To show the proposition, it will be enough to show the infinite series
above converges absolutely over B(0, r) for any r < r0.
First, for any m ≥ 1, let Am be the RHS of the inequality Eq. (3.3). Note
that, since P is homogeneous of degree d ≥ 3, we further have
|P|^m_{S(0,2r)} = ((2r)^d |P|_{S^{2n−1}})^m = (2r)^{dm} |P|^m.    (3.6)
Therefore, for any m ≥ 1, we have
Am = 2^{(d−1)m+1} n^{m−1} r^{(d−2)m+2} |P|^m,    (3.7)
and by Lemma 3.2,
|Q[m](a)| ≤ Am    (3.8)
for any a ∈ B(0, r).
Since 0 < r < r0 = (n 2^{d−1} |P|)^{1/(2−d)}, it is easy to see that
A_{m+1}/A_m = n 2^{d−1} r^{d−2} |P| < 1.
Therefore, by the comparison test, the infinite series in Eq. (3.5) converges
absolutely and uniformly over the open ball B(0, r). ✷
Note that the estimate given in Proposition 3.1 depends on the number n
of variables. Next we show that, with the HN condition on P , an estimate
independent of n can be obtained as follows.
Proposition 3.3. Let P (z) be a homogeneous HN polynomial of degree
d ≥ 4 and set r0 := (2^{d+1} |P|)^{1/(2−d)}. Then, the inversion pair Q(z) of P (z)
converges over the open ball B(0, r0).
Note that, when d = 2 or 3, by Wang’s Theorem ([Wa]), the JC holds
in general. Hence it also holds for the associated symmetric map F (z) =
z −∇P when P (z) is HN. Therefore Q(z) in this case is also a polynomial
of z and converges over the whole space Cn.
To prove the proposition above, we first need the following two lemmas.
Lemma 3.4. Let P (z) be a homogeneous polynomial of degree d ≥ 1 and
r > 0. For any a ∈ B(0, r), m ≥ 0 and α ∈ N^n, we have
|(D^α P^{m+1})(a)| ≤ (α!/r^{|α|}) (2r)^{d(m+1)} |P|^{m+1}.    (3.9)
Proof: First, by the Cauchy estimates and Eq. (3.2), we have
|(D^α P^{m+1})(a)| ≤ (α!/r^{|α|}) |P^{m+1}|_{Ω(a,r)} ≤ (α!/r^{|α|}) |P^{m+1}|_{B(0,2r)}.    (3.10)
On the other hand, by the maximum principle and the condition that P
is homogeneous of degree d, we have
|P^{m+1}|_{B(0,2r)} = |P|^{m+1}_{B(0,2r)} = |P|^{m+1}_{S(0,2r)} = ((2r)^d |P|)^{m+1} = (2r)^{d(m+1)} |P|^{m+1}.    (3.11)
Then, combining Eqs. (3.10) and (3.11), we get Eq. (3.9). ✷
Lemma 3.5. For any m ≥ 1, we have
Σ_{|α|=m} α! ≤ m! C(m+n−1, n−1) = (m+n−1)!/(n−1)!.    (3.12)
Proof: First, for any α ∈ N^n with |α| = m, we have α! ≤ m!, since the
multinomial coefficient m!/α! is always a positive integer. Therefore, we have
Σ_{|α|=m} α! ≤ m! Σ_{|α|=m} 1.
Secondly, note that Σ_{|α|=m} 1 is just the number of distinct α ∈ N^n with
|α| = m, which is the same as the number of distinct monomials in n free
commutative variables of degree m. Since the latter is well-known to be the
binomial coefficient C(m+n−1, n−1), we have
Σ_{|α|=m} α! ≤ m! C(m+n−1, n−1) = (m+n−1)!/(n−1)!. ✷
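The bound (3.12) is easy to verify by brute force for small n and m; the following Python sketch (our own check, not part of the proof) does so:

```python
# Brute-force check of Lemma 3.5: sum of alpha! over |alpha| = m in N^n
# against m! * C(m+n-1, n-1) = (m+n-1)!/(n-1)!.
from itertools import product
from math import factorial, comb

def multi_factorials_sum(n, m):
    # sum of alpha! over alpha in N^n with |alpha| = m
    total = 0
    for alpha in product(range(m + 1), repeat=n):
        if sum(alpha) == m:
            prod = 1
            for a in alpha:
                prod *= factorial(a)
            total += prod
    return total

checks = []
for n in range(1, 5):
    for m in range(1, 7):
        lhs = multi_factorials_sum(n, m)
        rhs = factorial(m) * comb(m + n - 1, n - 1)   # = (m+n-1)!/(n-1)!
        checks.append(lhs <= rhs)
assert all(checks)
```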
Proof of Proposition 3.3: By Eq. (1.1), we know that
Q(z) = Σ_{m≥0} Δ^m P^{m+1} / (2^m m! (m+1)!).    (3.13)
To show the proposition, it will be enough to show the infinite series
above converges absolutely over B(0, r) for any r < r0.
We first give an upper bound for the general terms in the series Eq. (3.13)
over B(0, r).
Consider
Δ^m P^{m+1} = (Σ_{i=1}^n D_i^2)^m P^{m+1} = Σ_{|α|=m} (m!/α!) D^{2α} P^{m+1}.    (3.14)
Therefore, we have
|Δ^m P^{m+1}(a)| ≤ Σ_{|α|=m} (m!/α!) |D^{2α} P^{m+1}(a)|.
Applying Lemma 3.4 with α replaced by 2α:
 ≤ Σ_{|α|=m} (m!/α!) ((2α)!/r^{2m}) (2r)^{d(m+1)} |P|^{m+1}.
Noting that (2α)! ≤ [(2α)!!]^2 = 2^{2m} (α!)^2:
 ≤ Σ_{|α|=m} (m!/α!) (2^{2m} (α!)^2 / r^{2m}) (2r)^{d(m+1)} |P|^{m+1}
 = m! 2^{2m+d(m+1)} r^{d(m+1)−2m} |P|^{m+1} Σ_{|α|=m} α!.
Applying Lemma 3.5:
 ≤ m! (m+n−1)! 2^{2m+d(m+1)} r^{d(m+1)−2m} |P|^{m+1} / (n−1)!.
Therefore, for any m ≥ 1, we have
|Δ^m P^{m+1} / (2^m m! (m+1)!)| ≤ 2^{m+d(m+1)} r^{d(m+1)−2m} |P|^{m+1} (m+n−1)! / ((m+1)!(n−1)!).    (3.15)
For any m ≥ 1, let Am be the right hand side of Eq. (3.15) above. Then,
by a straightforward calculation, we see that the ratio A_{m+1}/A_m satisfies
lim_{m→∞} A_{m+1}/A_m = 2^{d+1} r^{d−2} |P|.    (3.16)
Since r < r0 = (2^{d+1} |P|)^{1/(2−d)}, it is easy to see that
2^{d+1} r^{d−2} |P| < 1.
Therefore, by the comparison test, the infinite series in Eq. (3.13) con-
verges absolutely and uniformly over the open ball B(0, r). ✷
4. Self-Inverting Formal Power Series
Note that, by the definition of inversion pairs (see page 3), Q ∈ C[[z]] is
the inversion pair of P ∈ C[[z]] iff P is the inversion pair of Q. In other
words, the relation of P and Q being inversion pairs of each other is, in some
sense, a duality relation. Naturally, one may ask: for which P (z) is P
self-dual, i.e. self-inverting? In this section, we discuss this special family of
polynomials and formal power series.
Another purpose of this section is to draw the reader’s attention to the
problem of classification of (HN) self-inverting polynomials (see Open Prob-
lem 4.8). Even though the classification of HN polynomials seems to be out
of reach at the current time, we believe that the classification of (HN) self-
inverting polynomials is much more approachable.
Definition 4.1. A formal power series P (z) ∈ C[[z]] with o(P (z)) ≥ 2
and (HesP )(0) nilpotent is said to be self-inverting if its inversion pair
Q(z) = P (z).
Following the terminology introduced in [B], we say a formal map F (z) =
z − H(z) with H(z) ∈ C[[z]]×n and o(H(z)) ≥ 1 is a quasi-translation if
j(F )(0) 6= 0 and its formal inverse map is given by G(z) = z +H(z).
Therefore, for any P (z) ∈ C[[z]] with o(P (z)) ≥ 2 and (HesP )(0) nilpo-
tent, it is self-inverting iff the associated symmetric formal map F (z) =
z −∇P (z) is a quasi-translation.
For quasi-translations, the following general result has been proved in
Proposition 1.1 of [B] for polynomial quasi-translations.
Proposition 4.2. A formal map F (z) = z − H(z) with o(H) ≥ 1 and
JH(0) nilpotent is a quasi-translation if and only if JH ·H = 0.
Even though the proposition above was proved in [B] only in the setting
of polynomial maps, the proof given there works equally well for formal
quasi-translations under the condition that JH(0) is nilpotent. Since it has
also been shown in Proposition 1.1 of [B] that, for any polynomial quasi-
translation F (z) = z − H(z), JH(z) is always nilpotent, the condition
that JH(0) is nilpotent in the proposition above puts no extra
restriction in the case of polynomial quasi-translations.
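Proposition 4.2 can be illustrated with the simple quasi-translation H = (z2², 0), an example of our own choosing (not from [B]); the sympy sketch below checks JH·H = 0, the nilpotency of JH, and that G = z + H inverts F = z − H:

```python
# Illustration of Proposition 4.2 with a hypothetical quasi-translation:
# H = (z2^2, 0) gives JH*H = 0 with JH nilpotent, and F = z - H has
# exact inverse G = z + H.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
H = sp.Matrix([z2**2, 0])
JH = H.jacobian([z1, z2])

assert JH * H == sp.zeros(2, 1)        # JH . H = 0
assert JH**2 == sp.zeros(2, 2)         # JH is nilpotent

F = sp.Matrix([z1, z2]) - H            # F(z) = z - H(z)
G = sp.Matrix([z1, z2]) + H            # candidate inverse G(z) = z + H(z)
FoG = F.subs({z1: G[0], z2: G[1]}, simultaneous=True)
assert (FoG - sp.Matrix([z1, z2])).applyfunc(sp.expand) == sp.zeros(2, 1)
```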
From Proposition 4.2 above, we immediately have the following criterion
for self-inverting formal power series.
Proposition 4.3. For any P (z) ∈ C[[z]] with o(P ) ≥ 2 and (HesP )(0)
nilpotent, it is self-inverting if and only if 〈∇P,∇P 〉 = 0.
Proof: Since o(P ) ≥ 2 and (HesP )(0) is nilpotent, by Proposition 4.2,
we see that, P (z) ∈ C[[z]] is self-inverting iff J(∇P )·∇P = (HesP )·∇P = 0.
But, on the other hand, it is easy to check that, for any P (z) ∈ C[[z]], we
have the following identity:
(HesP ) · ∇P = (1/2) ∇〈∇P,∇P 〉.
Therefore, (HesP ) · ∇P = 0 iff ∇〈∇P,∇P 〉 = 0, and iff 〈∇P,∇P 〉 = 0
because o(〈∇P,∇P 〉) ≥ 2. ✷
Corollary 4.4. For any P (z) ∈ C[[z]] with o(P ) ≥ 2 and (HesP )(0) nilpo-
tent, if it is self-inverting, then so is Pm(z) for any m ≥ 1.
Proof: Note that, for any m ≥ 2, we have o(Pm(z)) ≥ 2m > 2 and
(Hes Pm)(0) = 0. Then, the corollary follows immediately from Proposition
4.3 and the following general identity:
〈∇Pm,∇Pm〉 = m^2 P^{2m−2} 〈∇P,∇P 〉.    (4.1)  ✷
Corollary 4.5. For any harmonic formal power series P (z) ∈ C[[z]] with
o(P ) ≥ 2 and (HesP )(0) nilpotent, it is self-inverting iff ∆P 2 = 0.
Proof: This follows immediately from Proposition 4.3 and the following
general identity:
∆P 2 = 2(∆P )P + 2〈∇P,∇P 〉.(4.2)
Proposition 4.6. Let P (z) be a harmonic self-inverting formal power se-
ries. Then, for any m ≥ 1, Pm is HN.
Proof: First, we use the mathematical induction on m ≥ 1 to show that
∆Pm = 0 for any m ≥ 1.
The case of m = 1 is given. For any m ≥ 2, consider
∆Pm = ∆(P · Pm−1)
= (∆P )Pm−1 + P (∆Pm−1) + 2〈∇P,∇Pm−1〉
= (∆P )Pm−1 + P (∆Pm−1) + 2(m− 1)Pm−2〈∇P,∇P 〉.
Then, by the mathematical induction assumption and Proposition 4.3, we
get ∆Pm = 0.
Secondly, for any fixed m ≥ 1 and d ≥ 1, we have
∆d[(Pm)d] = ∆d−1(∆P dm) = 0.
Then, by the criterion in Proposition 1.2, Pm is HN. ✷
Example 4.7. Note that, in Section 5.2 of [Z2], a family of self-inverting
HN formal power series has been constructed as follows.
Let Ξ be any non-empty subset of Cn such that, for any α, β ∈ Ξ, 〈α, β〉 =
0. Let A be the completion of the subalgebra of C[[z]] generated by hα(z) :=
〈α, z〉 (α ∈ Ξ), i.e. A is the set of all formal power series in hα(z) (α ∈ Ξ)
over C. Then it is straightforward to check (or see Section 5.2 of [Z2] for
details) that any element P (z) ∈ A is HN and self-inverting.
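The construction can be checked directly with sympy for the isotropic vector α = (1, i), a minimal instance of the family above: P = (z1 + i z2)⁴ is harmonic, satisfies 〈∇P, ∇P〉 = 0, and the recursion (2.3)–(2.4) produces Q[m] = 0 for all m ≥ 2, so Qt = P:

```python
# Check of Example 4.7 / Proposition 4.3 for the isotropic direction (1, i).
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
zs = (z1, z2)
P = (z1 + sp.I * z2)**4        # h_alpha^4 with <alpha, alpha> = 1 + i^2 = 0

def pairing(f, g):
    return sp.expand(sum(sp.diff(f, v) * sp.diff(g, v) for v in zs))

assert pairing(P, P) == 0                                       # Proposition 4.3
assert sp.expand(sp.diff(P, z1, 2) + sp.diff(P, z2, 2)) == 0    # P is harmonic

# Q[m] via the recursion (2.3)-(2.4): all higher terms vanish, so Qt = P
Q = {1: sp.expand(P)}
for m in range(2, 5):
    Q[m] = sp.expand(sp.Rational(1, 2 * (m - 1)) *
                     sum(pairing(Q[k], Q[m - k]) for k in range(1, m)))
assert all(Q[m] == 0 for m in range(2, 5))
```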
It is unknown whether all HN self-inverting polynomials or formal power series
can be obtained by the construction above. More generally, we believe the
following open problem is worth investigating.
Open Problem 4.8. (a) Decide whether or not all self-inverting polyno-
mials or formal power series are HN.
(b) Classify all (HN) self-inverting polynomials and formal power series.
Finally, let us point out that, for any self-inverting P (z) ∈ C[[z]], the
deformed inversion pair Qt(z) (not just Q(z) = Qt=1(z)) is also the same as
P (z).
Proposition 4.9. Let P (z) ∈ C[[z]] with o(P ) ≥ 2 and (HesP )(0) nilpo-
tent. Then P (z) is self-inverting if and only if Qt(z) = P (z).
18 WENHUA ZHAO
Proof: First, let us point out the following observations.
Let t be a formal central parameter and Ft(z) = z − t∇P (z) as before.
Since o(P ) ≥ 2 and (HesP )(0) is nilpotent, we have j(Ft)(0) = 1. Therefore,
Ft(z) is an automorphism of the algebra C[t][[z]] of formal power series of z
over C[t]. Since the inverse map of Ft(z) is given by Gt(z) = z + t∇Qt(z),
we see that Qt(z) ∈ C[t][[z]]. Therefore, for any t0 ∈ C, Qt=t0(z) makes
sense and lies in C[[z]]. Furthermore, by the uniqueness of inverse maps, it
is easy to see that the formal inverse map of Ft0(z) = z − t0∇P is given by
Gt0(z) = z + t0∇Qt=t0. Therefore the inversion pair of t0P (z) is given by
t0Qt=t0(z).
With the notation and observations above, by choosing t0 = 1, we have
Qt=1(z) = Q(z) and the (⇐) part of the proposition follows immediately.
Conversely, for any t0 ∈ C, we have 〈∇(t0P ),∇(t0P )〉 = t0^2 〈∇P,∇P 〉 = 0. Then,
by Proposition 4.3, t0P (z) is self-inverting and its inversion pair t0Qt=t0(z)
is the same as t0P (z), i.e. t0Qt=t0(z) = t0P (z). Therefore, we have Qt=t0(z) =
P (z) for any t0 ∈ C^×. But, on the other hand, we have Qt(z) ∈ C[t][[z]]
as pointed out above, i.e. the coefficients of all monomials of z in Qt(z) are
polynomials in t; hence we must have Qt(z) = P (z), which is the (⇒) part
of the proposition. ✷
5. The Vanishing Conjecture over Fields of Positive
Characteristic
It is well-known that the JC may fail when F (z) is not a polynomial map
(e.g. F1(z1, z2) = e^{−z1}; F2(z1, z2) = z2 e^{z1}). It also fails badly over fields of
positive characteristic, even in the one-variable case (e.g. F (x) = x − x^p over
a field of characteristic p > 0). However, the situation for the VC over
fields of positive characteristic is dramatically different from the JC, even
though these two conjectures are equivalent to each other over fields of
characteristic zero. Actually, as we will show in the proposition below, the
VC over fields of positive characteristic holds for all polynomials (not even
necessarily HN) and also for all HN formal power series.
Proposition 5.1. Let k be a field of characteristic p > 0. Then
(a) For any polynomial P (z) ∈ k[z] (not necessarily homogeneous nor
HN) of degree d ≥ 1, we have Δ^m P^{m+1} = 0 for any m ≥ d(p−1)/2.
(b) For any HN formal power series P (z) ∈ k[[z]], i.e. Δ^m P^m = 0 for
any m ≥ 1, we have Δ^m P^{m+1} = 0 for any m ≥ p − 1.
In other words, over fields of positive characteristic, the VC holds
even for HN formal power series P (z) ∈ k[[z]], while for polynomials it
holds even without the HN condition or any other conditions.
Proof: The main reason the proposition above holds is the following
simple fact, due to the Leibniz rule and the positivity of the characteristic
of the base field k: for any m ≥ 1, u(z), v(z) ∈ k[[z]] and
any differential operator Λ of k[z], we have
Λ(u^{mp} v) = u^{mp} Λv.    (5.1)
Now let P (z) be any polynomial or formal power series as in the proposition.
For any m ≥ 1, write m + 1 = qm p + rm with qm, rm ∈ Z and 0 ≤ rm ≤ p − 1.
Then, by Eq. (5.1), we have
Δ^m P^{m+1} = Δ^m(P^{qm p} P^{rm}) = P^{qm p} Δ^m P^{rm}.    (5.2)
If P (z) is a polynomial of degree d ≥ 1, we have Δ^m P^{rm} = 0 when
m ≥ d(p−1)/2, since in this case 2m > deg(P^{rm}). If P (z) is an HN formal power
series, we have Δ^m P^{rm} = 0 when m ≥ p − 1 ≥ rm. Therefore, (a) and (b)
in the proposition follow from Eq. (5.2) and the observations above. ✷
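Proposition 5.1 (a) can be tested computationally by computing Δ^m P^{m+1} over Z and reducing the coefficients mod p. The sympy sketch below does this for p = 3 and a sample quadratic P of our own choosing (so the threshold is m ≥ d(p−1)/2 = 2):

```python
# Check of Proposition 5.1(a): all coefficients of Delta^m P^{m+1} vanish
# mod p once m >= d(p-1)/2, for a sample P over Z reduced mod p = 3.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
p = 3
P = z1**2 + z1 * z2 + 2 * z2**2     # sample polynomial of degree d = 2
d = 2

def laplacian(f):
    return sp.expand(sp.diff(f, z1, 2) + sp.diff(f, z2, 2))

def iterated_laplacian(f, m):
    for _ in range(m):
        f = laplacian(f)
    return f

threshold = -(-d * (p - 1) // 2)    # ceil(d(p-1)/2) = 2 here
results = []
for m in range(threshold, threshold + 4):
    expr = iterated_laplacian(sp.expand(P**(m + 1)), m)
    coeffs = sp.Poly(expr, z1, z2).coeffs()
    results.append(all(c % p == 0 for c in coeffs))
assert all(results)
```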
One interesting question is whether or not the VC fails (as the JC does)
for some HN formal power series P (z) ∈ C[[z]] with P (z) ∉ C[z]. To the best
of our knowledge, no such counterexample is known yet. We here put it as
an open problem.
Open Problem 5.2. Find a HN formal power series P (z) ∈ C[[z]] with
P (z) ∉ C[z], if there is any, such that the VC fails for P (z).
One final remark about Proposition 5.1 is as follows. Note that the crucial
fact used in the proof is that any differential operator Λ of k[z] commutes
with the multiplication operator by the pth power of any element of k[[z]].
Then, by a parallel argument as in the proof of Proposition 5.1, it is easy
to see that the following more general result also holds.
Proposition 5.3. Let k be a field of characteristic p > 0 and Λ a
differential operator of k[z]. Let f ∈ k[[z]]. Assume that, for any 1 ≤ m ≤ p − 1,
there exists Nm > 0 such that Λ^{Nm} f^m = 0. Then, we have Λ^m f^{m+1} = 0
when m >> 0.
In particular, if Λ strictly decreases the degree of polynomials, then, for
any polynomial f ∈ k[z], we have Λ^m f^{m+1} = 0 when m >> 0.
6. A Criterion of Hessian Nilpotency for Homogeneous
Polynomials
Recall that 〈·, ·〉 denotes the standard C-bilinear form of C^n. For any
β ∈ C^n, we set hβ(z) := 〈β, z〉 and βD := 〈β,D〉.
The main result of this section is the following criterion of Hessian nilpo-
tency for homogeneous polynomials. Considering the criterion given in
Proposition 1.2, it is somewhat surprising but the proof turns out to be
very simple.
Theorem 6.1. For any β ∈ C^n and homogeneous polynomial P (z) of degree
d ≥ 2, set Pβ(z) := βD^{d−2} P (z). Then, we have
Hes Pβ = (d− 2)! (HesP )(β).    (6.1)
In particular, P (z) is HN iff, for any β ∈ C^n, Pβ(z) is HN.
To prove the theorem, we need first the following lemma.
Lemma 6.2. Let β ∈ C^n and P (z) ∈ C[z] be homogeneous of degree N ≥ 1. Then
βD^N P (z) = N! P (β).    (6.2)
Proof: Since both sides of Eq. (6.2) are linear in P (z), we may assume
P (z) is a monomial, say P (z) = z^a for some a ∈ N^n with |a| = N.
Consider
βD^N P (z) = (Σ_{i=1}^n βi Di)^N z^a = Σ_{|k|=N} (N!/k!) β^k D^k z^a = (N!/a!) β^a D^a z^a = N! β^a = N! P (β). ✷
Proof of Theorem 6.1: We consider
Hes Pβ(z) = ( ∂^2(βD^{d−2} P )/∂zi∂zj ) = ( βD^{d−2} (∂^2 P/∂zi∂zj) ).
Applying Lemma 6.2 to ∂^2 P/∂zi∂zj, which is homogeneous of degree d − 2:
 = ( (d−2)! (∂^2 P/∂zi∂zj)(β) )
 = (d−2)! (HesP )(β). ✷
Let {ei | 1 ≤ i ≤ n} be the standard basis of C^n. Applying the theorem
above to β = ei (1 ≤ i ≤ n), we have the following corollary, which was first
proved by M. Kumar [K].
Corollary 6.3. For any homogeneous HN polynomial P (z) ∈ C[z] of degree
d ≥ 2, the polynomials Di^{d−2} P (z) (1 ≤ i ≤ n) are also HN.
The reason we find the criteria given in Theorem 6.1 and Corollary
6.3 interesting is that Pβ(z) = βD^{d−2} P (z) is homogeneous of degree 2, and it
is much easier to decide whether a homogeneous polynomial of degree 2 is
HN or not. More precisely, for any homogeneous polynomial U(z) of degree
2, there exists a unique symmetric n × n matrix A such that U(z) = z^τ A z.
Then it is easy to check that Hes U(z) = 2A. Therefore, U(z) is HN iff the
symmetric matrix A is nilpotent.
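Theorem 6.1 and the degree-2 remark can be verified with sympy for the HN example P = (z1 + i z2)⁴ (so d = 4 and Pβ = βD²P), with a symbolic β = (b1, b2):

```python
# Check of Theorem 6.1: Hes(P_beta) = (d-2)! (Hes P)(beta), and the
# resulting constant symmetric matrix is nilpotent for this HN example.
import sympy as sp

z1, z2, b1, b2 = sp.symbols('z1 z2 b1 b2')
zs = (z1, z2)
d = 4
P = (z1 + sp.I * z2)**d

def hessian(f):
    return sp.Matrix(2, 2, lambda i, j: sp.diff(f, zs[i], zs[j]))

# P_beta = (b1*D1 + b2*D2)^{d-2} P, here d-2 = 2
Pbeta = sp.expand(
    b1**2 * sp.diff(P, z1, 2)
    + 2 * b1 * b2 * sp.diff(P, z1, z2)
    + b2**2 * sp.diff(P, z2, 2))

HesPbeta = hessian(Pbeta).applyfunc(sp.expand)          # constant matrix
HesP_at_beta = hessian(P).subs({z1: b1, z2: b2}, simultaneous=True)
diff_mat = (HesPbeta - sp.factorial(d - 2) * HesP_at_beta).applyfunc(sp.expand)
assert diff_mat == sp.zeros(2, 2)                       # Eq. (6.1)
# Hes P_beta is a nilpotent symmetric constant matrix, as in the remark:
assert (HesPbeta * HesPbeta).applyfunc(sp.expand) == sp.zeros(2, 2)
```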
Finally we end this section with the following open question on the cri-
terion given in Proposition 1.2.
Recall that Proposition 1.2 was proved in [Z2]. We now sketch the argument.
For any m ≥ 1, we set
u_m(P ) = Tr Hes^m(P ),    (6.3)
v_m(P ) = Δ^m P^m.    (6.4)
For any k ≥ 1, we define U_k(P ) (resp. V_k(P )) to be the ideal in C[[z]]
generated by {u_m(P ) | 1 ≤ m ≤ k} (resp. {v_m(P ) | 1 ≤ m ≤ k}) and all their
partial derivatives of any order. Then it has been shown (in a more general
setting) in Section 4 in [Z2] that U_k(P ) = V_k(P ) for any k ≥ 1.
It is well-known in linear algebra that, if u_m(P (z)) = 0 when m >> 0,
then HesP is nilpotent and u_m(P ) = 0 for any m ≥ 1. One natural question
is whether or not this is also the case for the sequence {v_m(P ) | m ≥ 1}.
More precisely, we believe the following conjecture which was proposed in
[Z2] is worth investigating.
Conjecture 6.4. Let P (z) ∈ C[[z]] with o(P (z)) ≥ 2. If ∆mPm(z) = 0 for
m >> 0, then P (z) is HN.
7. Some Results on Symmetric Polynomial Maps
Let P (z) be any formal power series with o(P (z)) ≥ 2 and (HesP )(0)
nilpotent, and F (z) and G(z) as before. Set
σ2 := Σ_{i=1}^n zi^2,    (7.1)
f(z) := (1/2) σ2 − P (z).    (7.2)
Professors Mohan Kumar [K] and David Wright [Wr3] once asked how to
write P (z) and f(z) in terms of F (z). More precisely: find U(z), V (z) ∈
C[[z]] such that
U(F (z)) = P (z),    (7.3)
V (F (z)) = f(z).    (7.4)
In this section, we first derive in Proposition 7.2 some explicit formulas
for U(z) and V (z), and also for W (z) ∈ C[[z]] such that
W (F (z)) = σ2(z).(7.5)
We then show in Theorem 7.4 that, when P (z) is a HN polynomial, the
VC holds for P , or equivalently the JC holds for the associated symmetric
polynomial map F (z) = z −∇P , iff one of U , V and W is a polynomial.
Let t be a central parameter and Ft(z) = z − t∇P . Let Gt(z) = z + t∇Qt
be the formal inverse of Ft(z) as before. We set
ft(z) := (1/2) σ2 − t P (z),    (7.6)
Ut(z) := P (Gt(z)),    (7.7)
Vt(z) := ft(Gt(z)),    (7.8)
Wt(z) := σ2(Gt(z)).    (7.9)
Note first that, under the conditions that o(P (z)) ≥ 2 and (HesP )(0) is
nilpotent, we have Gt(z) ∈ C[t][[z]]^{×n}, as mentioned in the proof of
Proposition 4.9. Therefore, we have Ut(z), Vt(z), Wt(z) ∈ C[t][[z]], and Ut=1(z),
Vt=1(z) and Wt=1(z) all make sense. Secondly, from the definitions above,
we have
Wt(z) = 2Vt(z) + 2tUt(z),(7.10)
Ft(z) = ∇ft(z),(7.11)
ft=1(z) = f(z).(7.12)
Lemma 7.1. With the notations above, we have
P (z) = Ut=1(F (z)),(7.13)
f(z) = Vt=1(F (z)),(7.14)
σ2(z) = Wt=1(F (z)).(7.15)
In particular, f(z), P (z) and σ2(z) lie in C[F ] iff Ut=1(z), Vt=1(z) and
Wt=1(z) lie in C[z].
In other words, by setting t = 1, Ut, Vt and Wt will give us U , V and W
in Eqs. (7.3)–(7.5), respectively.
Proof: From the definitions of Ut(z), Vt(z) and Wt(z) (see Eqs. (7.7)–
(7.9), we have
P (z) = Ut(Ft(z)),
ft(z) = Vt(Ft(z)),
σ2(z) = Wt(Ft(z)).
By setting t = 1 in the equations above and noticing that Ft=1(z) = F (z),
we get Eqs. (7.13)–(7.15). ✷
For Ut(z), Vt(z) and Wt(z), we have the following explicit formulas in
terms of the deformed inversion pair Qt of P .
Proposition 7.2. For any formal power series P (z) ∈ C[[z]] (not necessarily
HN) with o(P (z)) ≥ 2 and (HesP )(0) nilpotent, we have
Ut(z) = Qt + t ∂Qt/∂t,    (7.16)
Vt(z) = (1/2) σ2 + t(〈z,∇Qt〉 − Qt),    (7.17)
Wt(z) = σ2 + 2t〈z,∇Qt〉 + 2t^2 ∂Qt/∂t.    (7.18)
Proof: Note first that, Eq. (7.18) follows directly from Eqs. (7.16), (7.17)
and (7.10).
To show Eq. (7.16), by Eqs. (3.4) and (3.6) in [Z1], we have
Ut(z) = P (Gt) = Qt + (t/2)〈∇Qt,∇Qt〉 = Qt + t ∂Qt/∂t.    (7.19)
To show Eq. (7.17), we consider
Vt(z) = ft(Gt) = (1/2)〈z + t∇Qt(z), z + t∇Qt(z)〉 − tP (Gt)
      = (1/2) σ2 + t〈z,∇Qt(z)〉 + (t^2/2)〈∇Qt,∇Qt〉 − tP (Gt).
By Eq. (7.19), substituting Qt + (t/2)〈∇Qt,∇Qt〉 for P (Gt):
      = (1/2) σ2 + t〈z,∇Qt(z)〉 − tQt(z)
      = (1/2) σ2 + t(〈z,∇Qt〉 − Qt). ✷
When P (z) is homogeneous and HN, we have the following more explicit
formulas which in particular give solutions to the questions raised by Pro-
fessors Mohan Kumar and David Wright.
Corollary 7.3. For any homogeneous HN polynomial P (z) of degree d ≥ 2,
we have
Ut(z) = Σ_{m≥0} (t^m / (2^m (m!)^2)) Δ^m P^{m+1}(z),    (7.20)
Vt(z) = (1/2) σ2 + Σ_{m≥0} ((d_m − 1) t^{m+1} / (2^m m! (m+1)!)) Δ^m P^{m+1}(z),    (7.21)
Wt(z) = σ2 + Σ_{m≥0} ((d_m + m) t^{m+1} / (2^{m−1} m! (m+1)!)) Δ^m P^{m+1}(z),    (7.22)
where d_m = deg(Δ^m P^{m+1}) = d(m+1) − 2m (m ≥ 0).
Proof: We give a proof for Eq. (7.20); Eq. (7.21) can be proved similarly,
and Eq. (7.22) follows directly from Eqs. (7.20), (7.21) and (7.10).
By combining Eqs. (7.16) and (1.1), we have
Ut(z) = Σ_{m≥0} t^m Δ^m P^{m+1}(z) / (2^m m! (m+1)!) + Σ_{m≥0} m t^m Δ^m P^{m+1}(z) / (2^m m! (m+1)!)
      = P (z) + Σ_{m≥1} t^m Δ^m P^{m+1}(z) / (2^m (m!)^2)
      = Σ_{m≥0} t^m Δ^m P^{m+1}(z) / (2^m (m!)^2).
Hence, we get Eq. (7.20). ✷
One consequence of the proposition above is the following result on sym-
metric polynomials maps.
Theorem 7.4. For any HN polynomial P (z) (not necessarily homogeneous)
with o(P ) ≥ 2, the following statements are equivalent:
(1) The VC holds for P (z).
(2) P (z) ∈ C[F ].
(3) f(z) ∈ C[F ].
(4) σ2(z) ∈ C[F ].
Note that, the equivalence of the statements (1) and (3) was first proved
by Mohan Kumar ([K]) by a different method.
Proof: Note first that, by Lemma 7.1, it will be enough to show that
Δ^m P^{m+1} = 0 when m >> 0 iff one of Ut(z), Vt(z) and Wt(z) is a polynomial
in t with coefficients in C[z]. Secondly, when P (z) is homogeneous, the
statement above follows directly from Eqs. (7.20)–(7.22).
To show the general case, for any m ≥ 0 and Mt(z) ∈ C[t][[z]], we denote
by [t^m](Mt(z)) the coefficient of t^m when we write Mt(z) as a formal power
series in t with coefficients in C[[z]]. Then, from Eqs. (7.16)–(7.18) and
Eq. (1.1), it is straightforward to check that the coefficients of t^m (m ≥ 1)
in Ut(z), Vt(z) and Wt(z) are given as follows:
[t^m](Ut(z)) = Δ^m P^{m+1} / (2^m (m!)^2),    (7.23)
[t^m](Vt(z)) = [ (z ∂/∂z)(Δ^{m−1} P^m) − Δ^{m−1} P^m ] / (2^{m−1} (m−1)! m!),    (7.24)
[t^m](Wt(z)) = [ (z ∂/∂z)(Δ^{m−1} P^m) + (m−1) Δ^{m−1} P^m ] / (2^{m−2} (m−1)! m!),    (7.25)
where z ∂/∂z denotes the Euler operator Σ_{i=1}^n zi ∂/∂zi.
From Eq. (7.23), we immediately have (1) ⇔ (2). To show the equivalences
(1) ⇔ (3) and (1) ⇔ (4), note first that o(P ) ≥ 2, so o(Δ^{m−1} P^m) ≥ 2
for any m ≥ 1. While, on the other hand, for any polynomial h(z) ∈
C[z] with o(h(z)) ≥ 2, we have h(z) = 0 iff (z ∂/∂z − 1)h(z) = 0, and iff
(z ∂/∂z + (m − 1))h(z) = 0 for some m ≥ 1. This is simply because,
for any monomial z^α (α ∈ N^n), we have (z ∂/∂z − 1)z^α = (|α| − 1)z^α and
(z ∂/∂z + (m−1))z^α = (|α| + (m−1))z^α. From this general fact, we see that
(1) ⇔ (3) follows from Eq. (7.24) and (1) ⇔ (4) from Eq. (7.25). ✷
8. A Graph Associated with Homogeneous HN Polynomials
In this section, we would like to draw the reader's attention to a graph
G(P ) assigned to each homogeneous harmonic polynomial P (z). The graph
G(P ) was first proposed by the author and was later further studied by R.
Willems in his master's thesis [Wi] under the direction of Professor A. van den
Essen. The introduction of the graph G(P ) is mainly motivated by a criterion
of Hessian nilpotency given in [Z2] (see also Theorem 8.2 below), via
which one hopes that more necessary or sufficient conditions for a homogeneous
harmonic polynomial P (z) to be HN can be obtained or described in terms
of the graph structure of G(P ).
We first give in Subsection 8.1 the definition of the graph G(P ) for any
homogeneous harmonic polynomial P (z) and discuss the connectedness
reduction (see Corollary 8.5), i.e. a reduction of the VC to the homogeneous
HN polynomials P such that G(P ) is connected. We then consider in
Subsection 8.2 a connection of G(P ) with the tree expansion formula derived in
[M] and [Wr2] for the inversion pair Q(z) of P (z) (see Proposition 8.9). As
an application of the connection, we give another proof of the connectedness
reduction given in Corollary 8.5.
8.1. Definition and the Connectedness Reduction. For any β ∈ C^n,
set hβ(z) := 〈β, z〉 and βD := 〈β,D〉, where 〈·, ·〉 is the standard C-bilinear
form of C^n. Let X(C^n) denote the set of all isotropic elements of C^n, i.e. the
set of all elements α ∈ C^n such that 〈α, α〉 = 0.
Recall that we have the following fundamental theorem on homogeneous
harmonic polynomials.
Theorem 8.1. For any homogeneous harmonic polynomial P (z) of degree
d ≥ 2, we have
P (z) = Σ_{i=1}^k ci h_{αi}^d(z)    (8.1)
for some ci ∈ C^× and αi ∈ X(C^n) (1 ≤ i ≤ k).

Note that, replacing αi in Eq. (8.1) by ci^{1/d} αi, we may also write P (z) as
P (z) = Σ_{i=1}^k h_{αi}^d(z)    (8.2)
with αi ∈ X(C^n) (1 ≤ i ≤ k).
For the proof of Theorem 8.1, see, for example, [I] and [Wi].
We fix a homogeneous harmonic polynomial P (z) ∈ C[z] of degree d ≥ 2,
and assume that P (z) is given by Eq. (8.2) for some αi ∈ X(C^n) (1 ≤ i ≤ k).
We may and will always assume {h_{αi}^d(z) | 1 ≤ i ≤ k} are linearly independent
in C[z].
Recall that the following matrices were introduced in [Z2]:
A_P = (〈αi, αj〉)_{k×k},    (8.3)
Ψ_P = (〈αi, αj〉 h_{αj}^{d−2}(z))_{k×k}.    (8.4)
Then we have the following criterion of Hessian nilpotency for homoge-
neous harmonic polynomials. For its proof, see Theorem 4.3 in [Z2].
Theorem 8.2. Let P (z) be as above. Then, for any m ≥ 1, we have
Tr Hes^m(P ) = (d(d−1))^m Tr Ψ_P^m.    (8.5)
In particular, P (z) is HN if and only if the matrix ΨP is nilpotent.
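Theorem 8.2 can be spot-checked with sympy. The example below is our own: α1 = (1, i, 0) and α2 = (0, 1, i) are isotropic but not orthogonal, d = 3, and P = h_{α1}³ + h_{α2}³; we verify (8.5) for m = 1, 2 (in fact Ψ_P here is not nilpotent, so this P is not HN):

```python
# Check of Theorem 8.2 for a small hypothetical example:
# Tr Hes^m(P) = (d(d-1))^m Tr Psi_P^m for m = 1, 2.
import sympy as sp

z = sp.symbols('z1:4')
alphas = [(1, sp.I, 0), (0, 1, sp.I)]
d = 3

def bil(u, v):  # standard C-bilinear form
    return sum(a * b for a, b in zip(u, v))

assert all(bil(a, a) == 0 for a in alphas)      # both vectors are isotropic

h = [sum(a * v for a, v in zip(alpha, z)) for alpha in alphas]
P = sum(hi**d for hi in h)

Hes = sp.Matrix(3, 3, lambda i, j: sp.diff(P, z[i], z[j]))
Psi = sp.Matrix(2, 2, lambda i, j: bil(alphas[i], alphas[j]) * h[j]**(d - 2))

for m in (1, 2):
    lhs = sp.expand((Hes**m).trace())
    rhs = sp.expand((d * (d - 1))**m * (Psi**m).trace())
    assert sp.expand(lhs - rhs) == 0
```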
One simple remark on the criterion above is as follows.
Let B be the k × k diagonal matrix with the ith (1 ≤ i ≤ k) diagonal
entry being h_{αi}(z). For any 1 ≤ j ≤ d − 2, set
Ψ_{P;j} := B^j Ψ_P B^{−j} = ( h_{αi}^j 〈αi, αj〉 h_{αj}^{d−2−j} ).    (8.6)
Then, by repeatedly applying the fact that, for any two k× k matrices C
and D, CD is nilpotent iff so is DC, it is easy to see that Theorem 8.2 can
also be re-stated as follows.
Corollary 8.3. Let P (z) be given by Eq. (8.2) with d ≥ 2. Then, for any
1 ≤ j ≤ d − 2 and m ≥ 1, we have
Tr Hes^m(P ) = (d(d−1))^m Tr Ψ_{P;j}^m.    (8.7)
In particular, P (z) is HN if and only if the matrix Ψ_{P;j} is nilpotent.
Note that, when d is even, we may choose j = (d − 2)/2. So P is HN iff
the symmetric matrix
Ψ_{P;(d−2)/2}(z) = ( h_{αi}^{(d−2)/2}(z) 〈αi, αj〉 h_{αj}^{(d−2)/2}(z) )    (8.8)
is nilpotent.
Motivated by the criterion above, we assign a graph G(P ) to any homo-
geneous harmonic polynomial P (z) as follows.
We fix an expression as in Eq. (8.2) for P (z). The set of vertices of G(P )
will be the set of positive integers [k] := {1, 2, . . . , k}. The vertices i and
j of G(P ) are connected by an edge iff 〈αi, αj〉 ≠ 0. In this way, we get a
finite graph.
Furthermore, we may also label the edges of G(P ) by assigning either
〈αi, αj〉 or, when d is even, h_{αi}^{(d−2)/2} 〈αi, αj〉 h_{αj}^{(d−2)/2} to the edge connecting vertices
i, j ∈ [k]. We then get a labeled graph whose adjacency matrix is exactly
A_P or Ψ_{P;(d−2)/2} (depending on the labels we choose for the edges of G(P )).
Naturally, one may also ask the following (open) questions.
Open Problem 8.4. (a) Find some necessary or sufficient conditions on
the (labeled) graph G(P ) such that the homogeneous harmonic polynomial
P (z) is HN.
(b) Find some necessary or sufficient conditions on the (labeled) graph
G(P ) such that the VC holds for the homogeneous HN polynomial P (z).
First, let us point out that, to approach the open problems above, it will
be enough to focus on homogeneous harmonic polynomials P such that the
graph G(P ) is connected.
Suppose that the graph G(P) is disconnected, with r ≥ 2 connected
components. Let [k] = ⊔_{i=1}^r I_i be the corresponding partition of the
set [k] of vertices of G(P). For each 1 ≤ i ≤ r, we set

P_i(z) := Σ_{j∈I_i} h_{α_j}^d(z).

Note that, by Lemma 2.6, the P_i (1 ≤ i ≤ r) are pairwise disjoint, so
Corollary 2.8 applies to the sum P = Σ_{i=1}^r P_i. In particular, we have:
(a) P is HN iff each Pi is HN.
(b) if the VC holds for each Pi, then it also holds for P .
SOME PROPERTIES OF AND OPEN PROBLEMS ON HNPS 29
Therefore, we have the following connectedness reduction.
Corollary 8.5. To study homogeneous HN polynomials P or the VC for
homogeneous HN polynomials P , it will be enough to consider the case when
G(P ) is connected.
Note that property (a) above was first proved by R. Willems ([Wi])
using the criterion in Theorem 8.2. Property (b) was first proved by the
author by a different argument and, with the author's permission, has also
been included in [Wi].
Finally, let us point out that R. Willems ([Wi]) has proved the following
very interesting results on Open Problem 8.4.
Theorem 8.6. ([Wi]) Let P be a homogeneous HN polynomial as in Eq.(8.2)
with d ≥ 4. Let l(P ) be the dimension of the vector subspace of Cn spanned
by {αi | 1 ≤ i ≤ k}. Then
(1) If l(P ) = 1, 2, k−1 or k, the graph G(P ) is totally disconnected (i.e.
G(P ) is the graph with no edges).
(2) If l(P) = k − 2 and G(P) is connected, then G(P) is the complete
bipartite graph K(4, k − 4).
(3) In the cases (1) and (2) above, the VC holds.
Furthermore, it has also been shown in [Wi] that, for any homogeneous
HN polynomial P, the graph G(P) cannot be a path or a cycle of any
positive length. For more details, see [Wi].
8.2. Connection with the Tree Expansion Formula of Inversion
Pairs. First let us recall the tree expansion formula derived in [M], [Wr2]
for the inversion pair Q(z).
Let T denote the set of all trees, i.e. the set of all connected and simply
connected finite simple graphs. For each tree T ∈ T, denote by V (T ) and
E(T ) the sets of all vertices and edges of T , respectively. Then we have the
following tree expansion formula for inversion pairs.
Theorem 8.7. ([M], [Wr2]) Let P ∈ C[[z]] with o(P) ≥ 2 and Q its inversion
pair. For any T ∈ T, set

Q_{T,P} = Σ_{ℓ:E(T)→[n]} Π_{v∈V(T)} D_{adj(v),ℓ} P,   (8.9)

where adj(v) is the set {e_1, e_2, . . . , e_s} of edges of T adjacent to v, and
D_{adj(v),ℓ} = D_{ℓ(e_1)} D_{ℓ(e_2)} · · · D_{ℓ(e_s)}.
Then the inversion pair Q of P is given by

Q = Σ_{T∈T} (1/|Aut(T)|) Q_{T,P}.   (8.10)
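For small trees, |Aut(T)| can be computed by brute force; a quick illustrative sketch (not from the paper):

```python
from itertools import permutations

def automorphisms(vertices, edges):
    # Brute-force count of graph automorphisms: permutations of the
    # vertex set that map edges to edges.
    E = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(vertices):
        p = dict(zip(vertices, perm))
        if all(frozenset((p[u], p[v])) in E for u, v in edges):
            count += 1
    return count

# Path on 3 vertices: only the identity and the end-swap survive.
assert automorphisms([0, 1, 2], [(0, 1), (1, 2)]) == 2
# Star on 4 vertices: the 3 leaves can be permuted freely.
assert automorphisms([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]) == 6
```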
Now we assume that P(z) is a homogeneous harmonic polynomial of degree
d ≥ 2 with the expression in Eq. (8.2). Under this assumption, it is easy to
see that Q_{T,P} (T ∈ T) becomes

Q_{T,P} = Σ_{f:V(T)→[k]} Σ_{ℓ:E(T)→[n]} Π_{v∈V(T)} D_{adj(v),ℓ} h_{α_{f(v)}}^d(z).   (8.11)
The role played by the graph G(P) of P is to restrict the maps f : V(T) →
V(G(P)) (= [k]) in Eq. (8.11) to a special family of maps. To be more precise,
let Ω(T,G(P)) be the set of maps f : V(T) → [k] such that, for any distinct
adjacent vertices u, v ∈ V(T), f(u) and f(v) are distinct and adjacent in G(P).
Then we have the following lemma.
Lemma 8.8. For any f : V(T) → [k] with f ∉ Ω(T,G(P)), we have

Σ_{ℓ:E(T)→[n]} Π_{v∈V(T)} D_{adj(v),ℓ} h_{α_{f(v)}}^d(z) = 0.   (8.12)
Proof: Let f : V(T) → [k] be as in the lemma. Since f ∉ Ω(T,G(P)),
there exist distinct adjacent v_1, v_2 ∈ V(T) such that either f(v_1) = f(v_2) or
f(v_1) and f(v_2) are not adjacent in the graph G(P). In either case, we have
⟨α_{f(v_1)}, α_{f(v_2)}⟩ = 0.
Next we consider the contributions to the RHS of Eq. (8.11) from the vertices
v_1 and v_2. Denote by e the edge of T connecting v_1 and v_2, and by {e_1, . . . , e_r}
(resp. {ẽ_1, . . . , ẽ_s}) the set of edges attached to v_1 (resp. v_2) besides the
edge e. Then, for any ℓ : E(T) → [n], the factor in the RHS of Eq. (8.11)
coming from the vertices v_1 and v_2 is the product

( D_{ℓ(e)} D_{ℓ(e_1)} · · · D_{ℓ(e_r)} h_{α_{f(v_1)}}^d ) ( D_{ℓ(e)} D_{ℓ(ẽ_1)} · · · D_{ℓ(ẽ_s)} h_{α_{f(v_2)}}^d ).   (8.13)
Define an equivalence relation on the maps ℓ : E(T) → [n] by setting ℓ_1 ∼ ℓ_2
iff ℓ_1 and ℓ_2 agree on every edge of T except possibly e. Then, summing the
terms in Eq. (8.13) over each equivalence class, we get the factor

⟨ ∇ D_{ℓ(e_1)} · · · D_{ℓ(e_r)} h_{α_{f(v_1)}}^d(z), ∇ D_{ℓ(ẽ_1)} · · · D_{ℓ(ẽ_s)} h_{α_{f(v_2)}}^d(z) ⟩.   (8.14)
Note that D_{ℓ(e_1)} · · · D_{ℓ(e_r)} h_{α_{f(v_1)}}^d(z) and D_{ℓ(ẽ_1)} · · · D_{ℓ(ẽ_s)} h_{α_{f(v_2)}}^d(z) are
constant multiples of certain integral powers of h_{α_{f(v_1)}}(z) and h_{α_{f(v_2)}}(z),
respectively. Therefore, ⟨α_{f(v_1)}, α_{f(v_2)}⟩ (= 0) appears as a multiplicative constant
factor in the term in Eq. (8.14), which makes the term zero. Hence the
lemma follows. ✷
One immediate consequence of the lemma above is the following proposition.

Proposition 8.9. With the setting and notation as above, we have

Q_{T,P} = Σ_{f∈Ω(T,G(P))} Σ_{ℓ:E(T)→[n]} Π_{v∈V(T)} D_{adj(v),ℓ} h_{α_{f(v)}}^d(z).   (8.15)
Remark 8.10. (a) For any f ∈ Ω(T,G(P)), {f^{−1}(j) | j ∈ Im(f)} gives a
partition of V(T), since no two distinct vertices in f^{−1}(j) (j ∈ Im(f)) can
be adjacent. In other words, f is nothing but a proper coloring of the tree T,
subject to further conditions coming from the graph structure of G(P). It is
interesting to see that the coloring problem of graphs also plays a role in the
inversion problem of symmetric formal maps.
(b) It will be interesting to see if more results can be derived from the
graph G(P ) via the formulas in Eqs. (8.10) and (8.15).
Remark 8.11. By similar arguments as those in proofs of Lemma 8.8, one
may get another proof for Theorem 2.7 in the setting as in Lemma 2.6.
Finally, as an application of Proposition 8.9 above, we give another proof
for the connectedness reduction given in Corollary 8.5.
Let P be as given in Eq. (8.2), with inversion pair Q. Suppose that
there exists a partition [k] = I_1 ⊔ I_2 with I_i ≠ ∅. Let P_i = Σ_{j∈I_i} h_{α_j}^d(z)
(i = 1, 2) and Q_i the inversion pair of P_i. Then we have P = P_1 + P_2
and G(P_1) ⊔ G(P_2) = G(P). Therefore, to show the connectedness reduction
discussed in the previous subsection, it is enough to show Q = Q_1 + Q_2.
But this will follow immediately from Eqs. (8.10), (8.15) and the following
lemma.
Lemma 8.12. Let P, P_1 and P_2 be as above. Then, for any tree T ∈ T, we have

Ω(T,G(P)) = Ω(T,G(P_1)) ⊔ Ω(T,G(P_2)).
Proof: For any f ∈ Ω(T,G(P)), f preserves the adjacency of vertices
of G(P). Since T as a graph is connected, Im(f) ⊂ V(G(P)), viewed as a (full)
subgraph of G(P), must also be connected. Therefore, Im(f) ⊂ V(G(P_1))
or Im(f) ⊂ V(G(P_2)). Hence Ω(T,G(P)) ⊂ Ω(T,G(P_1)) ⊔ Ω(T,G(P_2)). The
other containment is obvious. ✷
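The sets Ω(T, G(P)) are finite and can be enumerated directly for small data; here is a toy sketch of Lemma 8.12 (the graphs and the tree are our own illustrative choices):

```python
from itertools import product

def omega(tree_verts, tree_edges, g_adj):
    # All maps f : V(T) -> vertices(G) sending adjacent tree vertices to
    # distinct adjacent vertices of G, i.e. the set Omega(T, G).
    return {
        f for f in product(range(len(g_adj)), repeat=len(tree_verts))
        if all(f[u] != f[v] and g_adj[f[u]][f[v]] for u, v in tree_edges)
    }

# G(P) with two components: a single edge {0, 1} plus an isolated vertex 2.
gP  = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
gP1 = [[0, 1], [1, 0]]                            # the component {0, 1}
T_verts, T_edges = [0, 1, 2], [(0, 1), (1, 2)]    # path on 3 vertices

big = omega(T_verts, T_edges, gP)
assert big == {(0, 1, 0), (1, 0, 1)}
# Images stay inside one component; the isolated vertex contributes nothing:
assert omega(T_verts, T_edges, gP1) == big
```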
References
[BCW] H. Bass, E. Connell, D. Wright, The Jacobian Conjecture, Reduction of Degree
and Formal Expansion of the Inverse. Bull. Amer. Math. Soc. 7, (1982), 287–330.
[MR 83k:14028]. [Zbl.539.13012].
[B] M. de Bondt, Quasi-translations and Counterexamples to the Homogeneous De-
pendence Problem. Proc. Amer. Math. Soc. 134 (2006), no. 10, 2849–2856 (elec-
tronic). [MR2231607].
[BE1] M. de Bondt and A. van den Essen, A reduction of the Jacobian Conjecture
to the Symmetric Case. Proc. Amer. Math. Soc. 133 (2005), no. 8, 2201–2205
(electronic). [MR2138860].
[BE2] M. de Bondt and A. van den Essen, Nilpotent Symmetric Jacobian Matrices
and the Jacobian Conjecture, J. Pure Appl. Algebra 193 (2004), no. 1-3, 61–70.
[MR2076378].
[BE3] M. de Bondt and A. van den Essen, Singular Hessians, J. Algebra 282 (2004),
no. 1, 195–204. [MR2095579].
[BE4] M. de Bondt and A. van den Essen, Nilpotent Symmetric Jacobian Matrices and
the Jacobian Conjecture II, J. Pure Appl. Algebra 196 (2005), no. 2-3, 135–148.
[MR2110519].
[BE5] M. de Bondt and A. van den Essen, Hesse and the Jacobian Conjecture, Affine
algebraic geometry, 63–76, Contemp. Math., 369, Amer. Math. Soc., Providence,
RI, 2005. [MR2126654].
[E] A. van den Essen, Polynomial Automorphisms and the Jacobian Conjecture.
Progress in Mathematics, 190. Birkhäuser Verlag, Basel, 2000. [MR1790619].
[EW] A. van den Essen and S. Washburn, The Jacobian Conjecture for Symmet-
ric Jacobian Matrices, J. Pure Appl. Algebra, 189 (2004), no. 1-3, 123–133.
[MR2038568]
[EZ] A. van den Essen and W. Zhao, Two Results on Hessian Nilpotent Polynomials.
To appear in J. Pure Appl. Algebra. See also arXiv:0704.1690v1 [math.AG].
[I] H. Iwaniec, Topics in Classical Automorphic Forms, Graduate Studies in Mathe-
matics, 17. American Mathematical Society, Providence, RI, 1997. [MR1474964]
[Ke] O. H. Keller, Ganze Gremona-Transformation, Monats. Math. Physik 47 (1939),
no. 1, 299-306. [MR1550818].
[K] M. Kumar, Personal communications.
[M] G. Meng, Legendre Transform, Hessian Conjecture and Tree Formula, Appl.
Math. Lett. 19 (2006), no. 6, 503–510. [MR2221506]. See also math-ph/0308035.
[R] R. M. Range, Holomorphic Functions and Integral Representations in Several
Complex Variables, Graduate Texts in Mathematics, 108. Springer-Verlag New
York Inc., 1986. [MR0847923].
[Wa] S. Wang, A Jacobian Criterion for Separability, J. Algebra 65 (1980), 453-494.
[MR83e:14010].
[Wi] R. Willems, Graphs and the Jacobian Conjecture, Master Thesis, July 2005.
Radboud University Nijmegen, The Netherlands.
[Wr1] D. Wright, The Jacobian Conjecture: Ideal Membership Questions and Recent
advances, Affine algebraic geometry, 261–276, Contemp. Math., 369, Amer.
Math. Soc., Providence, RI, 2005. [MR2126667].
[Wr2] D. Wright, The Jacobian Conjecture as a Combinatorial Problem. Affine
algebraic geometry, 483–503, Osaka Univ. Press, Osaka, 2007. See also
math.CO/0511214. [MR2330486].
[Wr3] D. Wright, Personal communications.
[Y] A. V. Jagžev, On a problem of O.-H. Keller. (Russian) Sibirsk. Mat. Zh. 21
(1980), no. 5, 141–150, 191. [MR0592226].
[Z1] W. Zhao, Inversion Problem, Legendre Transform and Inviscid Burgers’ Equa-
tion, J. Pure Appl. Algebra, 199 (2005), no. 1-3, 299–317. [MR2134306]. See
also math. CV/0403020.
[Z2] W. Zhao, Hessian Nilpotent Polynomials and the Jacobian Conjecture, Trans.
Amer. Math. Soc. 359 (2007), no. 1, 249–274 (electronic). [MR2247890]. See also
math.CV/0409534.
Department of Mathematics, Illinois State University, Normal, IL
61790-4520.
E-mail: [email protected].
1. Introduction
1.1. Background and Motivation
1.2. Arrangement
2. Disjoint Formal Power Series and Their Deformed Inversion Pairs
3. Local Convergence of Deformed Inversion Pairs of Homogeneous (HN) Polynomials
4. Self-Inverting Formal Power Series
5. The Vanishing Conjecture over Fields of Positive Characteristic
6. A Criterion of Hessian Nilpotency for Homogeneous Polynomials
7. Some Results on Symmetric Polynomial Maps
8. A Graph Associated with Homogeneous HN Polynomials
8.1. Definition and the Connectedness Reduction
8.2. Connection with the Tree Expansion Formula of Inversion Pairs
References
|
0704.1690 | Two Results on Homogeneous Hessian Nilpotent Polynomials | TWO RESULTS ON HOMOGENEOUS HESSIAN NILPOTENT POLYNOMIALS
ARNO VAN DEN ESSEN∗ AND WENHUA ZHAO∗∗
Abstract. Let z = (z_1, · · · , z_n) and ∆ = Σ_{i=1}^n ∂²/∂z_i² the Laplace
operator. A formal power series P(z) is said to be Hessian Nilpotent (HN)
if its Hessian matrix Hes P(z) = (∂²P/∂z_i∂z_j) is nilpotent. In
recent developments in [BE1], [M] and [Z], the Jacobian conjecture
has been reduced to the following so-called vanishing conjecture (VC)
of HN polynomials: for any homogeneous HN polynomial
P(z) (of degree d = 4), we have ∆^m P^{m+1}(z) = 0 for any m >> 0.
In this paper, we first show that the VC holds for any homogeneous
HN polynomial P(z) provided that the projective subvarieties Z_P
and Z_{σ_2} of CP^{n−1} determined by the principal ideals generated by
P(z) and σ_2(z) := Σ_{i=1}^n z_i^2, respectively, intersect only at regular
points of Z_P. Consequently, the Jacobian conjecture holds for the
symmetric polynomial maps F = z − ∇P with P(z) HN if F has no
non-zero fixed point w ∈ C^n with Σ_{i=1}^n w_i^2 = 0. Secondly, we show
that the VC holds for a HN formal power series P(z) if and only
if, for any polynomial f(z), ∆^m(f(z)P(z)^m) = 0 when m >> 0.
1. Introduction and Main Results
Let z = (z_1, z_2, · · · , z_n) be commutative free variables. Recall that
the well-known Jacobian conjecture claims that any polynomial map
F(z) : C^n → C^n with Jacobian j(F)(z) ≡ 1 is an automorphism of
C^n and its inverse map must also be a polynomial map. Despite intense
study by mathematicians for more than sixty years, the conjecture is
still open even for the case n = 2. In 1998, S. Smale [S] included the
Jacobian conjecture in his list of 18 important mathematical problems
for the 21st century. For more history and known results on the Jacobian
conjecture, see [BCW], [E] and the references therein.
Recently, M. de Bondt and the first author [BE1] and G. Meng [M]
independently made the following remarkable breakthrough on the Ja-
cobian conjecture. Namely, they reduced the Jacobian conjecture to
2000 Mathematics Subject Classification. 14R15, 31B05.
Key words and phrases. Hessian nilpotent polynomials, the vanishing conjecture,
symmetric polynomial maps, the Jacobian conjecture.
the so-called symmetric polynomial maps, i.e. the polynomial maps of
the form F = z − ∇P, where ∇P := (∂P/∂z_1, · · · , ∂P/∂z_n), i.e. ∇P(z) is
the gradient of P(z) ∈ C[z].
For more recent developments on the Jacobian conjecture for sym-
metric polynomial maps, see [BE1]–[BE4].
Based on the symmetric reduction above and also the classical homo-
geneous reduction in [BCW] and [Y], the second author in [Z] further
reduced the Jacobian conjecture to the following so-called vanishing
conjecture.
Let ∆ := Σ_{i=1}^n ∂²/∂z_i² be the Laplace operator, and call a formal power
series P(z) Hessian nilpotent (HN) if its Hessian matrix Hes P(z) := (∂²P/∂z_i∂z_j)
is nilpotent.
equivalent to
Conjecture 1.1. (Vanishing Conjecture of HN Polynomials)
For any homogeneous HN polynomial P(z) (of degree d = 4), we have
∆^m P^{m+1} = 0 when m >> 0.
Note that it has also been shown in [Z] that P(z) is HN if and only
if ∆^m P^m = 0 for all m ≥ 1.
In this paper, we will prove the following two results on HN polyno-
mials.
Let P(z) be a homogeneous HN polynomial of degree d ≥ 3 and
σ_2(z) := Σ_{i=1}^n z_i^2. We denote by Z_P and Z_{σ_2} the projective subvarieties
of CP^{n−1} determined by the principal ideals generated by P(z) and
σ_2(z), respectively. The first main result of this paper is the following
theorem.
Theorem 1.2. Let P (z) be a homogeneous HN polynomial of degree
d ≥ 4. Assume that Z_P intersects Z_{σ_2} only at regular points of
Z_P. Then the vanishing conjecture holds for P(z). In particular, the
vanishing conjecture holds if the projective variety Z_P is regular.
Remark 1.3. Note that, when degP (z) = d = 2 or 3, the Jacobian
conjecture holds for the symmetric polynomial map F = z −∇P . This
is because, when d = 2, F is a linear map with j(F ) ≡ 1. Hence F
is an automorphism of Cn; while when d = 3, we have degF = 2. By
Wang’s theorem [W], the Jacobian conjecture holds for F again. Then,
by the equivalence of the vanishing conjecture for the homogeneous HN
polynomial P (z) and the Jacobian conjecture for the symmetric map
F = z −∇P established in [Z], we see that, when deg P (z) = d = 2 or
3, Theorem 1.2 actually also holds even without the condition on the
projective variety ZP .
For any non-zero z ∈ C^n, denote by [z] its image in the projective
space CP^{n−1}. Set

Z̃_{σ_2} := {z ∈ C^n | z ≠ 0; [z] ∈ Z_{σ_2}}.   (1.1)

In other words, Z̃_{σ_2} is the set of non-zero z ∈ C^n such that Σ_{i=1}^n z_i^2 = 0.
Note that, for any homogeneous polynomial P(z) of degree d, it
follows from Euler's formula dP = Σ_{i=1}^n z_i ∂P/∂z_i that a non-zero
w ∈ C^n gives a singular point [w] ∈ CP^{n−1} of Z_P if and only if w is a
fixed point of the symmetric map F = z − ∇P. Furthermore, it is also
well-known that j(F) ≡ 1 if and only if P(z) is HN.
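Euler's formula itself is easy to verify symbolically; here is a minimal SymPy check for a sample homogeneous polynomial of degree 4 (our own example, not from the paper):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
d = 4
P = (z1 + sp.I * z2)**d       # a sample homogeneous polynomial of degree d
euler = z1 * sp.diff(P, z1) + z2 * sp.diff(P, z2)
assert sp.expand(euler - d * P) == 0     # d*P = sum_i z_i dP/dz_i
```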
By the observations above and Theorem 1.2, it is easy to see that
we have the following corollary on symmetric polynomial maps.
Corollary 1.4. Let F = z − ∇P with P homogeneous and j(F ) ≡ 1
(or equivalently, P is HN). Assume that F does not fix any w ∈ Z̃σ2 .
Then the Jacobian conjecture holds for F(z). In particular, if F has no non-zero
fixed point, the Jacobian conjecture holds for F .
Our second main result is the following theorem, which says that the
vanishing conjecture is actually equivalent to a formally much stronger
statement.
Theorem 1.5. For any HN polynomial P(z), the vanishing conjecture
holds for P(z) if and only if, for any polynomial f(z) ∈ C[z],
∆^m(f(z)P(z)^m) = 0 when m >> 0.
2. Proof of the Main Results
Let us first fix the following notation. Let z = (z_1, z_2, · · · , z_n) be
free complex variables and C[z] (resp. C[[z]]) the algebra of polynomials
(resp. formal power series) in z. For any d ≥ 0, we denote by V_d the
vector space of homogeneous polynomials in z of degree d.
For any 1 ≤ i ≤ n, we set D_i = ∂/∂z_i and D = (D_1, D_2, · · · , D_n). We
define a C-bilinear map {·, ·} : C[z]× C[z] → C[z] by setting
{f, g} := f(D)g(z)
for any f(z), g(z) ∈ C[z].
Note that, for any m ≥ 0, the restriction of {·, ·} on Vm × Vm gives
a C-bilinear form of the vector subspace Vm, which we will denote by
Bm(·, ·). It is easy to check that, for any m ≥ 1, Bm(·, ·) is symmetric
and non-singular.
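The pairing {f, g} := f(D)g and the induced form B_m can be made concrete in SymPy; a small illustrative sketch in two variables (the helper names are ours, not from the paper):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def pair(f, g):
    # {f, g} := f(D) g : replace each z1^a z2^b in f by D1^a D2^b, apply to g.
    out = 0
    for (a, b), c in sp.Poly(f, z1, z2).terms():
        t = g
        for _ in range(a):
            t = sp.diff(t, z1)
        for _ in range(b):
            t = sp.diff(t, z2)
        out += c * t
    return sp.expand(out)

# Gram matrix of B_2 on V_2 in the monomial basis (z1^2, z1*z2, z2^2):
basis = [z1**2, z1 * z2, z2**2]
G = sp.Matrix(3, 3, lambda i, j: pair(basis[i], basis[j]))
assert G == G.T and G.det() != 0     # B_2 is symmetric and non-singular
```

Here the Gram matrix comes out diagonal, diag(2, 1, 2), which makes the non-singularity visible at a glance.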
The following lemma will play a crucial role in our proof of the first
main result.
Lemma 2.1. For any homogeneous polynomials g_i(z) (1 ≤ i ≤ k) of
degree d_i ≥ 1, let S be the vector space of polynomial solutions of the
following system of PDEs:

g_1(D) u(z) = 0,
g_2(D) u(z) = 0,
· · ·
g_k(D) u(z) = 0.   (2.2)

Then dim S < +∞ if and only if the g_i(z) (1 ≤ i ≤ k) have no non-zero
common zeroes.
Proof: Let I be the homogeneous ideal of C[z] generated by {g_i(z) | 1 ≤
i ≤ k}. Since all the g_i(z) are homogeneous, S is a homogeneous vector
subspace of C[z]. Write

S = ⊕_{m≥0} S_m,   (2.3)
I = ⊕_{m≥0} I_m,   (2.4)

where I_m := I ∩ V_m and S_m := S ∩ V_m for any m ≥ 0.
Claim: For any m ≥ 1 and u(z) ∈ V_m, u(z) ∈ S_m if and only if
{u, I_m} = 0; in other words, S_m = I_m^⊥ with respect to the C-bilinear
form B_m(·, ·) of V_m.
Proof of the Claim: First, by the definitions of I and S, we have
{I_m, S_m} = 0 for any m ≥ 1, hence S_m ⊆ I_m^⊥. Therefore, we need only
show that, for any u(z) ∈ I_m^⊥ ⊂ V_m, g_i(D)u(z) = 0 for any 1 ≤ i ≤ k.
We first fix any 1 ≤ i ≤ k. If m < d_i, there is nothing to prove. If
m = d_i, then g_i(z) ∈ I_m, hence {g_i, u} = g_i(D)u = 0. Now suppose
m > d_i. Note that, for any v(z) ∈ V_{m−d_i}, v(z)g_i(z) ∈ I_m. Hence we have

0 = {v(z)g_i(z), u(z)} = v(D)g_i(D)u(z) = v(D)(g_i(D)u)(z) = {v(z), (g_i(D)u)(z)}.
Therefore, we have

B_{m−d_i}((g_i(D)u)(z), V_{m−d_i}) = 0.

Since B_{m−d_i}(·, ·) is a non-singular C-bilinear form of V_{m−d_i}, we have
g_i(D)u = 0. Hence the Claim holds. ✷
By a well-known fact in Algebraic Geometry (see, for example, Exercise 2.2
in [H]), the homogeneous polynomials g_i(z) (1 ≤ i ≤ k) have no non-zero
common zeroes if and only if I_m = V_m when m >> 0. By the Claim above,
I_m = V_m when m >> 0 if and only if S_m = 0 when m >> 0, and hence if and
only if the solution space S of the system (2.2) is finite dimensional. Hence
the lemma follows. ✷
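As an illustration of the dichotomy in Lemma 2.1 (our own toy computation, not from the paper): the single polynomial g = σ_2 = z_1^2 + z_2^2 in two variables has non-zero common zeros such as (1, i), and accordingly its solution space, the harmonic polynomials, is infinite dimensional, with each V_m (m ≥ 1) contributing a 2-dimensional piece:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def harm_dim(m):
    # dim of {u in V_m : Delta u = 0}, by linear algebra on coefficients.
    mons = [z1**i * z2**(m - i) for i in range(m + 1)]
    c = sp.symbols(f'c0:{m + 1}')
    u = sum(ci * mi for ci, mi in zip(c, mons))
    lap = sp.expand(sp.diff(u, z1, 2) + sp.diff(u, z2, 2))
    eqs = [lap.coeff(z1, i).coeff(z2, m - 2 - i) for i in range(m - 1)]
    A = sp.Matrix([[sp.diff(e, ci) for ci in c] for e in eqs])
    return m + 1 - A.rank()

# sigma_2 = z1^2 + z2^2 vanishes at the non-zero point (1, i), and indeed
# every V_m (m >= 1) contributes solutions, so S is infinite dimensional.
assert [harm_dim(m) for m in range(1, 6)] == [2, 2, 2, 2, 2]
```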
Now we are ready to prove our first main result, Theorem 1.2.
Proof of Theorem 1.2: Let P (z) be a homogeneous HN polynomial
of degree d ≥ 4 and S the vector space of polynomial solutions of the
following system of PDEs:
(∂P/∂z_1)(D) u(z) = 0,
(∂P/∂z_2)(D) u(z) = 0,
· · ·
(∂P/∂z_n)(D) u(z) = 0,
∆ u(z) = 0.   (2.5)
First, note that the projective subvariety Z_P intersects Z_{σ_2} only
at regular points of Z_P if and only if the polynomials ∂P/∂z_i(z) (1 ≤ i ≤ n)
and σ_2 = Σ_{i=1}^n z_i^2 have no non-zero common zeros (again use Euler's formula).
Then, by Lemma 2.1, we have dim S < +∞.
On the other hand, by Theorem 6.3 in [Z], we know that ∆^m P^{m+1} ∈
S for any m ≥ 0. Note that deg ∆^m P^{m+1} = (d−2)m + d for any m ≥ 0,
so deg ∆^m P^{m+1} > deg ∆^k P^{k+1} whenever m > k; in particular, the
non-zero terms among the ∆^m P^{m+1} are linearly independent. Since
dim S < +∞ (from above), we have ∆^m P^{m+1} = 0 when m >> 0, i.e. the
vanishing conjecture holds for P(z). ✷
Next, we give a proof for our second main result, Theorem 1.5.
Proof of Theorem 1.5: The (⇐) part follows directly by choosing
f(z) to be P(z) itself.
To show the (⇒) part, let d = deg f(z). If d = 0, then f is a constant and
∆^m(f(z)P(z)^m) = f(z)∆^m P^m = 0 for any m ≥ 1.
So we assume d ≥ 1. By Theorem 6.2 in [Z], we know that, if
the vanishing conjecture holds for P(z), then, for any fixed a ≥ 1,
∆^m P^{m+a} = 0 when m >> 0. Therefore there exists N > 0 such that,
for any 0 ≤ b ≤ d and any m > N, we have ∆^m P^{m+b} = 0.
By Lemma 6.5 in [Z], for any m ≥ 1, we have

∆^m(f(z)P(z)^m) = Σ_{k_1+k_2+k_3=m; k_1,k_2,k_3≥0} (m; k_1, k_2, k_3) 2^{k_2} Σ_{|s|=k_2} (k_2; s) (∂^s ∆^{k_1} f(z)) (∂^s ∆^{k_3} P^m(z)),   (2.6)

where (m; k_1, k_2, k_3) and (k_2; s) denote the usual multinomial coefficients,
and s runs over the multi-indices with |s| = k_2.
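The m = 1 case of this expansion is the familiar product rule ∆(fg) = g∆f + 2⟨∇f, ∇g⟩ + f∆g, which can be spot-checked symbolically (an illustration with arbitrary test polynomials of our own choosing):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
f = z1**3 * z2 + z2**2          # arbitrary test polynomials
g = z1 * z2**2 + z1**2

lap  = lambda u: sp.diff(u, z1, 2) + sp.diff(u, z2, 2)
grad = lambda u: (sp.diff(u, z1), sp.diff(u, z2))

lhs = lap(f * g)
rhs = g * lap(f) + 2 * sum(a * b for a, b in zip(grad(f), grad(g))) + f * lap(g)
assert sp.expand(lhs - rhs) == 0
```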
Note first that the general term in the sum above is non-zero only
if 2k_1 + k_2 ≤ d, since the factor ∂^s ∆^{k_1} f(z) has degree d − 2k_1 − k_2
when it is non-zero. On the other hand, since

0 ≤ k_1 + k_2 ≤ 2k_1 + k_2 ≤ d,   (2.7)

by the choice of N ≥ 1, the factor ∆^{k_3}P^m(z) = ∆^{k_3}P^{k_3+(k_1+k_2)}(z) is
non-zero only if

k_3 ≤ N.   (2.8)

From the observations above and Eqs. (2.6), (2.7), (2.8), it is easy to
see that ∆^m(f(z)P(z)^m) ≠ 0 only if m = k_1 + k_2 + k_3 ≤ d + N. In
other words, ∆^m(f(z)P(z)^m) = 0 for any m > d + N. Hence Theorem
1.5 holds. ✷
Note that all the results used in the proof above of the (⇒) part of the
theorem also hold for HN formal power series. Therefore we have
the following corollary.
Corollary 2.2. Let P(z) be a HN formal power series such that the
vanishing conjecture holds for P(z). Then, for any polynomial f(z),
we have ∆^m(f(z)P(z)^m) = 0 when m >> 0.
References
[BCW] H. Bass, E. Connell, D. Wright, The Jacobian conjecture, reduction of
degree and formal expansion of the inverse. Bull. Amer. Math. Soc. 7,
(1982), 287–330. [MR83k:14028], [Zbl.539.13012].
[BE1] M. de Bondt and A. van den Essen, A Reduction of the Jacobian Conjecture
to the Symmetric Case, Proc. Amer. Math. Soc. 133 (2005), no. 8, 2201–
2205. [MR2138860].
[BE2] M. de Bondt and A. van den Essen, Nilpotent Symmetric Jacobian Matrices
and the Jacobian Conjecture, J. Pure Appl. Algebra 193 (2004), no. 1-3,
61–70. [MR2076378].
[BE3] M. de Bondt and A. van den Essen, Nilpotent symmetric Jacobian matrices
and the Jacobian conjecture II, J. Pure Appl. Algebra 196 (2005), no. 2-3,
135–148. [MR2110519].
[BE4] M. de Bondt and A. van den Essen, Singular Hessians, J. Algebra 282
(2004), no. 1, 195–204. [MR2095579].
[E] A. van den Essen, Polynomial automorphisms and the Jacobian conjecture.
Progress in Mathematics, 190. Birkhäuser Verlag, Basel, 2000. [MR1790619].
[H] R. Hartshorne, Algebraic Geometry, Springer-Verlag, New York-
Heidelberg-Berlin, 1977.
[Ke] O. H. Keller, Ganze Gremona-Transformation, Monats. Math. Physik 47
(1939), 299-306.
[M] G. Meng, Legendre Transform, Hessian Conjecture and Tree Formula,
Appl. Math. Lett. 19 (2006), no. 6, 503–510. [MR2170971]. See also
math-ph/0308035.
[S] S. Smale, Mathematical Problems for the Next Century, Math. Intelligencer
20, No. 2, 7-15, 1998. [MR1631413 (99h:01033)].
[W] S. Wang, A Jacobian criterion for Separability, J. Algebra 65 (1980), 453-
494. [MR 83e:14010].
[Y] A. V. Jagžev, On a problem of O.-H. Keller. (Russian) Sibirsk. Mat. Zh.
21 (1980), no. 5, 141–150, 191. [MR0592226].
[Z] W. Zhao, Hessian Nilpotent Polynomials and the Jacobian Conjecture.
Trans. Amer. Math. Soc. 359 (2007), 249-274. [MR2247890]. See also
math.CV/0409534.
∗ Department of Mathematics, Radboud University Nijmegen,
Postbus 9010, 6500 GL Nijmegen, The Netherlands.
E-mail: [email protected]
∗∗ Department of Mathematics, Illinois State University, Nor-
mal, IL 61790-4520.
E-mail: [email protected].
1. Introduction and Main Results
2. Proof of the Main Results
References
|
0704.1691 | A Vanishing Conjecture on Differential Operators with Constant Coefficients | A VANISHING CONJECTURE ON DIFFERENTIAL OPERATORS WITH CONSTANT COEFFICIENTS
WENHUA ZHAO
Abstract. In the recent progress [BE1], [Me] and [Z2], the well-
known JC (Jacobian conjecture) ([BCW], [E]) has been reduced
to a VC (vanishing conjecture) on the Laplace operators and HN
(Hessian nilpotent) polynomials (the polynomials whose Hessian
matrices are nilpotent). In this paper, we first show that the vanishing
conjecture above, hence also the JC, is equivalent to a vanishing
conjecture for all 2nd order homogeneous differential operators Λ
and Λ-nilpotent polynomials P (the polynomials P(z) satisfying
Λ^m P^m = 0 for all m ≥ 1). We then transform some results in the
literature on the JC, HN polynomials and the VC of the Laplace
operators to certain results on Λ-nilpotent polynomials and the
associated VC for 2nd order homogeneous differential operators
Λ. This part of the paper can also be read as a short survey
on HN polynomials and the associated VC in the more general
setting. Finally, we discuss a still-to-be-understood connection of
Λ-nilpotent polynomials in general with the classical orthogonal
polynomials in one or more variables. This connection provides
a conceptual understanding for the isotropic properties of homo-
geneous Λ-nilpotent polynomials for 2nd order homogeneous full
rank differential operators Λ with constant coefficients.
1. Introduction
Let z = (z_1, z_2, . . . , z_n, . . . ) be a sequence of free commutative variables
and D = (D_1, D_2, . . . , D_n, . . . ) with D_i := ∂/∂z_i (i ≥ 1). For any
n ≥ 1, denote by A_n (resp. Ā_n) the algebra of polynomials (resp. formal
power series) in z_i (1 ≤ i ≤ n). Furthermore, we denote by 𝒟[A_n] or
𝒟[n] (resp. D[A_n] or D[n]) the algebra of all differential operators of the
polynomial algebra A_n (resp. of those with constant coefficients). Note that, for
any k ≥ n, elements of D[n] are also differential operators of A_k and
Date: November 7, 2018.
2000 Mathematics Subject Classification. 14R15, 33C45, 32W99.
Key words and phrases. Differential operators with constant coefficients, Λ-
nilpotent polynomials, Hessian nilpotent polynomials, classical orthogonal poly-
nomials, the Jacobian conjecture.
Ā_k. For any d ≥ 0, denote by D_d[n] the set of homogeneous differential
operators of order d with constant coefficients. We let A (resp. Ā)
be the union of the A_n (resp. Ā_n) (n ≥ 1), 𝒟 (resp. D) the union of the 𝒟[n]
(resp. D[n]) (n ≥ 1), and, for any d ≥ 1, D_d the union of the D_d[n] (n ≥ 1).
Recall that the JC (Jacobian conjecture), first proposed by Keller [Ke]
in 1939, claims that, for any polynomial map F of C^n with Jacobian
j(F) = 1, its formal inverse map G must also be a polynomial map.
Despite intense study by mathematicians for more than sixty years, the
conjecture is still open even for the case n = 2. For more history and
results on the JC known before 2000, see [BCW], [E] and the references
therein.
Based on the remarkable symmetric reduction achieved in [BE1],
[Me] and the celebrated classical homogeneous reduction [BCW] and
[Y] on the JC, the author in [Z2] reduced the JC further to the following
vanishing conjecture on the Laplace operators ∆_n := Σ_{i=1}^n D_i^2 of the
polynomial algebras A_n and HN (Hessian nilpotent) polynomials P(z) ∈
A_n, where we say a polynomial or formal power series P(z) ∈ Ā_n is
HN if its Hessian matrix Hes(P) := (∂²P/∂z_i∂z_j)_{n×n} is nilpotent.
Conjecture 1.1. For any HN (homogeneous) polynomial P(z) ∈ A_n
(of degree d = 4), we have ∆_n^m P^{m+1}(z) = 0 when m >> 0.
Note that the following criteria of Hessian nilpotency were also
proved in Theorem 4.3 of [Z2].
Theorem 1.2. For any P(z) ∈ Ā_n with o(P(z)) ≥ 2, the following
statements are equivalent:
(1) P(z) is HN.
(2) ∆^m P^m = 0 for any m ≥ 1.
(3) ∆^m P^m = 0 for any 1 ≤ m ≤ n.
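Criteria (2) and (3) are easy to spot-check symbolically. Here is a small SymPy sketch for the hypothetical HN polynomial P = (z_1 + i z_2)^4 in n = 2 variables (an example of our own, not from the paper):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
P = (z1 + sp.I * z2)**4      # HN: its Hessian is h^2 times a rank-1
                             # matrix built from the isotropic vector (1, i)
lap = lambda u: sp.expand(sp.diff(u, z1, 2) + sp.diff(u, z2, 2))

def lap_pow(u, m):
    # apply the Laplace operator m times
    for _ in range(m):
        u = lap(u)
    return u

# criterion (3): Delta^m P^m = 0 for 1 <= m <= n (here n = 2) ...
assert all(lap_pow(P**m, m) == 0 for m in (1, 2))
# ... and the quantity Delta^m P^{m+1} of Conjecture 1.1 also vanishes:
assert lap_pow(P**3, 2) == 0
```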
Through the criteria in the theorem above, Conjecture 1.1 can
be generalized to other differential operators as follows (see Conjecture
1.4 below).
First let us fix the following notion that will be used throughout the
paper.
Definition 1.3. Let Λ ∈ 𝒟[A_n] and P(z) ∈ Ā_n. We say P(z) is
Λ-nilpotent if Λ^m P^m = 0 for any m ≥ 1.
Note that, when Λ is the Laplace operator ∆_n, by Theorem 1.2, a
polynomial or formal power series P(z) ∈ Ā_n is Λ-nilpotent iff it is HN.
With the notion above, Conjecture 1.1 has the following natural
generalization to differential operators with constant coefficients.
Conjecture 1.4. For any n ≥ 1 and Λ ∈ D[n], if P(z) ∈ A_n is Λ-nilpotent,
then Λ^m P^{m+1} = 0 when m >> 0.
We call the conjecture above the vanishing conjecture for differential
operators with constant coefficients and denote it by VC. The special
case of the VC with P(z) homogeneous is called the homogeneous vanishing
conjecture and denoted by HVC. When the number n of variables
is fixed, VC (resp. HVC) is called the (resp. homogeneous) vanishing
conjecture in n variables and denoted by VC[n] (resp. HVC[n]).
Two remarks on the VC are as follows. First, due to a counterexample
given by M. de Bondt (see Example 2.4), the VC does not hold in general
for differential operators with non-constant coefficients. Secondly, one
may also allow P(z) in the VC to be any Λ-nilpotent formal power series;
no counterexample to this more general VC is known yet.
In this paper, we first apply certain linear automorphisms and Lef-
schetz’s principle to show Conjecture 1.1, hence also JC, is equivalent
to VC or HVC for all 2nd order homogeneous differential operators
Λ ∈ D2 (see Theorem 2.9). We then in Section 3 transform some results
on JC, HN polynomials and Conjecture 1.1 obtained in [Wa], [BE2],
[BE3], [Z2], [Z3] and [EZ] to certain results on Λ-nilpotent (Λ ∈ D2)
polynomials and VC for Λ. Another purpose of this section is to give
a survey on recent study on Conjecture 1.1 and HN polynomials in the
more general setting of Λ ∈ D2 and Λ-nilpotent polynomials. This is
also why some results in the general setting, even though their proofs
are straightforward, are also included here.
Even though, due to M. de Bondt’s counter-example (see Example
2.4), VC does not hold for all differential operators with non-constant
coefficients, it is still interesting to consider whether or not VC holds
for higher order differential operators with constant coefficients; and
if it also holds even for certain families of differential operators with
non-constant coefficients. For example, when Λ = Da with a ∈ Nn and
|a| ≥ 2, VC[n] for Λ is equivalent to a conjecture on Laurent polynomi-
als (see Conjecture 3.21). This conjecture is very similar to a non-trivial
theorem (see Theorem 3.20) on Laurent polynomials, which was first
conjectured by O. Mathieu [Ma] and later proved by J. Duistermaat
and W. van der Kallen [DK].
In general, to consider the questions above, one certainly needs a
better understanding of the Λ-nilpotency condition, i.e. Λ^m P^m = 0
for any m ≥ 1. One natural way to look at this condition is to consider
the sequences of the form {Λ^m P^m | m ≥ 1} for general differential
operators Λ and polynomials P(z) ∈ A. What special properties do these
sequences have that make the VC expect them all to vanish? Do they play
any important roles in other areas of mathematics?
The answer to the first question above is still not clear. The answer
to the latter seems to be “No”: sequences of the form {Λ^m P^m | m ≥ 1}
do not appear very often in mathematics. But the answer turns out to
be “Yes” if one considers the question in the setting of some localizations
B of A_n. Actually, as we will discuss in some detail in subsection 4.1,
all classical orthogonal polynomials in one variable have the form
{Λ^m P^m | m ≥ 1}, except that there one often chooses P(z) from some
localization B of A_n and Λ a differential operator of B. Some classical
polynomials in several variables can also be obtained from sequences of
the form {Λ^m P^m | m ≥ 1} by a slightly modified procedure.
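For instance, the Rodrigues formula for the Legendre polynomials is exactly a sequence of this shape, with Λ = d/dx and P = x^2 − 1; this standard identity can be checked in SymPy:

```python
import sympy as sp

x = sp.symbols('x')
# Rodrigues formula: legendre(m, x) = 1/(2^m m!) * d^m/dx^m (x^2 - 1)^m,
# i.e. the sequence {Lambda^m P^m} with Lambda = d/dx and P = x^2 - 1.
for m in range(1, 6):
    rodrigues = sp.diff((x**2 - 1)**m, x, m) / (2**m * sp.factorial(m))
    assert sp.expand(rodrigues - sp.legendre(m, x)) == 0
```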
Note that, due to their applications in many different areas of mathematics,
especially in ODEs, PDEs, eigenfunction problems and representation
theory, orthogonal polynomials have been under intense study by
mathematicians for the last two centuries. For example, [SHW], published
in 1940, already includes about 2000 articles, mostly on one-variable
orthogonal polynomials. The classical reference for one-variable
orthogonal polynomials is [Sz] (see also [AS], [C], [Si]). For multi-variable
orthogonal polynomials, see [DX], [Ko] and the references therein.
It is hard to believe that the connection discussed above between
Λ-nilpotent polynomials or formal power series and classical orthogonal
polynomials is just a coincidence, but a precise understanding of this
connection remains mysterious. What is clear is that Λ-nilpotent
polynomials or formal power series, on the one hand, and the polynomials or
formal power series P(z) ∈ Ā_n for which the sequence {Λ^m P^m | m ≥ 1}
(for some differential operator Λ) yields a sequence of orthogonal
polynomials, on the other, lie at two opposite extremes: from the same
sequence {Λ^m P^m | m ≥ 1}, the former produce nothing but zero, while
the latter produce an orthogonal basis of A_n.
Therefore, one naturally expects that Λ-nilpotent polynomials P(z) ∈
A_n should be isotropic with respect to a certain C-bilinear form on A_n.
It turns out that, as we will show in Theorem 4.10 and Corollary 4.11,
this is indeed the case when P(z) is homogeneous and Λ ∈ D2[n] is of
full rank. Actually, in this case the Λ^m P^{m+1} (m ≥ 0) are all isotropic
with respect to the same properly defined C-bilinear form. Note that Theorem
4.10 and Corollary 4.11 are just transformations of the isotropic properties
of HN polynomials, which were first proved in [Z2].
But the proof in [Z2] is very technical and lacks any convincing
interpretation. In light of the "formal" connection of Λ-nilpotent polynomials
A VANISHING CONJECTURE ON DIFFERENTIAL OPERATORS 5
and orthogonal polynomials discussed above, the isotropic properties
of homogeneous Λ-nilpotent polynomials with Λ ∈ D2[n] of full rank
become much more natural.
The arrangement of the paper is as follows. In Section 2, we mainly
show that Conjecture 1.1, hence also JC, is equivalent to VC or HVC
for all Λ ∈ D2 (see Theorem 2.9). One consequence of this equivalence
is that, to prove or disprove VC or JC, the Laplace operators are not
the only choices, even though they are the best in many situations.
Instead, one can choose any sequence Λ_{n_k} ∈ D2 with strictly increasing
ranks (see Proposition 2.10). For example, one can choose the “Laplace
operators” with respect to the Minkowski metric or symplectic metric,
or simply choose Λ to be the complex ∂̄-Laplace operator ∆∂̄,k (k ≥ 1)
in Eq. (2.11).
In Section 3, we transform some results on JC, HN polynomials
and Conjecture 1.1 in the literature to certain results on Λ-nilpotent
(Λ ∈ D2) polynomials P (z) and VC for Λ. In subsection 3.1, we
discuss some results on the polynomial maps and PDEs associated
with Λ-nilpotent polynomials for Λ ∈ D2[n] of full rank (see Theorems
3.1–3.3). The results in this subsection are transformations of those
in [Z1] and [Z2] on HN polynomials and their associated symmetric
polynomial maps.
In subsection 3.2, we give four criteria of Λ-nilpotency (Λ ∈ D2) (see
Propositions 3.4, 3.6, 3.7 and 3.10). The criteria in this subsection
are transformations of the criteria of Hessian nilpotency derived in
[Z2] and [Z3]. In subsection 3.3, we transform some results in [BCW],
[Wa] and [Y] on JC; [BE2] and [BE3] on symmetric polynomial maps;
[Z2], [Z3] and [EZ] on HN polynomials to certain results on VC for
Λ ∈ D2. Finally, we recall a result in [Z3] which says that VC over fields
k of characteristic p > 0, even under some conditions weaker than Λ-
nilpotency, actually holds for any differential operator Λ of k[z] (see
Proposition 3.22 and Corollary 3.23).
In subsection 3.4, we consider VC for high order differential opera-
tors with constant coefficients. In particular, we show in Proposition
3.18 that VC holds for Λ = δ^k (k ≥ 1), where δ is a derivation of A. In
particular, VC holds for any Λ ∈ D1 (see Corollary 3.19). We also
show that, when Λ = D^a with a ∈ N^n and |a| ≥ 2, VC is equivalent
to a conjecture, Conjecture 3.21, on Laurent polynomials. This con-
jecture is very similar to a non-trivial theorem (see Theorem 3.20) first
conjectured by O. Mathieu [Ma] and later proved by J. Duistermaat
and W. van der Kallen [DK].
In subsection 4.1, by using Rodrigues' formulas, Eq. (4.1), we show
that all classical orthogonal polynomials in one variable have the form
{Λ^m P^m | m ≥ 1} for some P(z) in certain localizations B of A_n and
some differential operator Λ of B. We also show that some classical
polynomials in several variables can also be obtained from sequences of
the form {Λ^m P^m | m ≥ 1} with a slight modification.
orthogonal polynomials in one or more variables are briefly discussed
in Examples 4.2–4.5, 4.8 and 4.9. In subsection 4.2, we transform the
isotropic properties of homogeneous HN polynomials derived
in [Z2] to homogeneous Λ-nilpotent polynomials for Λ ∈ D2[n] of
full rank (see Theorem 4.10 and Corollary 4.11).
Acknowledgment: The author is very grateful to Michiel de Bondt
for sharing his counterexample (see Example 2.4) with the author, and
to Arno van den Essen for inspiring personal communications. The
author would also like to thank the referee very much for many valuable
suggestions to improve the first version of the paper.
2. The Vanishing Conjecture for the 2nd Order
Homogeneous Differential Operators with Constant
Coefficients
In this section, we apply certain linear automorphisms and Lefschetz's
principle to show that Conjecture 1.1, hence also JC, is equivalent
to VC or HVC for all Λ ∈ D2 (see Theorem 2.9). In subsection 2.1, we
fix some notation and recall some lemmas that will be needed throughout
this paper. In subsection 2.2, we prove the main results of this
section, Theorem 2.9 and Proposition 2.10.
2.1. Notation and Preliminaries. Throughout this paper, unless
stated otherwise, we will keep using the notation and terminology
introduced in the previous section and also the conventions fixed below.
(1) For any P(z) ∈ A_n, we denote by ∇P the gradient of P(z), i.e.
we set
∇P(z) := (D_1 P, D_2 P, . . . , D_n P).    (2.1)
(2) For any n ≥ 1, we let SM(n,C) (resp. SGL(n,C)) denote the set of
symmetric (resp. symmetric invertible) complex n × n matrices.
(3) For any A = (a_{ij}) ∈ SM(n,C), we set
∆_A := Σ_{i,j=1}^n a_{ij} D_i D_j ∈ D2[n].    (2.2)
Note that, for any Λ ∈ D2[n], there exists a unique A ∈
SM(n,C) such that Λ = ∆A. We define the rank of Λ = ∆A
simply to be the rank of the matrix A.
(4) For any n ≥ 1, Λ ∈ D2[n] is said to be of full rank if Λ has rank n.
The set of full rank elements of D2[n] will be denoted by D◦2[n].
(5) For any r ≥ 1, we set
∆_r := Σ_{i=1}^r D_i².    (2.3)
Note that ∆_r is a full rank element in D2[r] but not in D2[n] for
any n > r.
For U ∈ GL(n,C), we define
Φ_U : Ā_n → Ā_n,  P(z) ↦ P(U^{-1}z),    (2.4)
Ψ_U : D[n] → D[n],  Λ ↦ Φ_U ∘ Λ ∘ Φ_U^{-1}.    (2.5)
It is easy to see that Φ_U (resp. Ψ_U) is an algebra automorphism of
A_n (resp. D[n]). Moreover, the following standard facts are also easy
to check directly.
Lemma 2.1. (a) For any U = (u_{ij}) ∈ GL(n,C), P(z) ∈ Ā_n and
Λ ∈ D[n], we have
Φ_U(ΛP) = Ψ_U(Λ) Φ_U(P).    (2.6)
(b) For any 1 ≤ i ≤ n and f(z) ∈ A_n we have
Ψ_U(D_i) = Σ_{j=1}^n u_{ji} D_j,
Ψ_U(f(D)) = f(U^τ D).
In particular, for any A ∈ SM(n,C), we have
Ψ_U(∆_A) = ∆_{U A U^τ}.    (2.7)
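As a sanity check, the identity Φ_U(∆_A P) = ∆_{UAU^τ}(Φ_U P), which follows from Eqs. (2.6) and (2.7), can be verified mechanically for concrete data. The Python sketch below (an illustration, not from the paper) does this in two variables for U = [[1,1],[0,1]] and A = [[0,1],[1,0]], so that UAU^τ = [[2,1],[1,0]]; polynomials are dictionaries mapping exponent pairs to coefficients.

```python
from itertools import product

def add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c != 0}

def mul(p, q):
    r = {}
    for (e, c), (f, d) in product(p.items(), q.items()):
        key = (e[0] + f[0], e[1] + f[1])
        r[key] = r.get(key, 0) + c * d
    return {e: c for e, c in r.items() if c != 0}

def D(p, axis):
    """Partial derivative D_i (axis 0 or 1)."""
    r = {}
    for e, c in p.items():
        if e[axis] > 0:
            f = list(e)
            f[axis] -= 1
            r[tuple(f)] = r.get(tuple(f), 0) + c * e[axis]
    return r

def delta(A, p):
    """Delta_A p = sum_{i,j} a_ij D_i D_j p."""
    r = {}
    for i in range(2):
        for j in range(2):
            if A[i][j]:
                r = add(r, {e: A[i][j] * c for e, c in D(D(p, i), j).items()})
    return r

def phi_U(p, Uinv):
    """Phi_U P(z) = P(U^{-1} z) for an integer 2x2 matrix Uinv."""
    x, y = {(1, 0): 1}, {(0, 1): 1}
    zs = [add({e: Uinv[i][0] * c for e, c in x.items()},
              {e: Uinv[i][1] * c for e, c in y.items()}) for i in range(2)]
    r = {}
    for (a, b), c in p.items():
        term = {(0, 0): c}
        for _ in range(a):
            term = mul(term, zs[0])
        for _ in range(b):
            term = mul(term, zs[1])
        r = add(r, term)
    return r

U, Uinv = [[1, 1], [0, 1]], [[1, -1], [0, 1]]
A, UAUt = [[0, 1], [1, 0]], [[2, 1], [1, 0]]   # UAUt = U A U^tau
P = {(3, 1): 1, (1, 2): 5}                     # P = z1^3 z2 + 5 z1 z2^2

lhs = phi_U(delta(A, P), Uinv)                 # Phi_U(Delta_A P)
rhs = delta(UAUt, phi_U(P, Uinv))              # Delta_{UAU^tau}(Phi_U P)
assert lhs == rhs
```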
The following lemma will play a crucial role in our later arguments.
Actually the lemma can be stated in a stronger form (see [Hu], for
example) which we do not need here.
Lemma 2.2. For any A ∈ SM(n,C) of rank r > 0, there exists U ∈
GL(n,C) such that
A = U ( I_{r×r}  0 ; 0  0 ) U^τ.    (2.8)
Combining Lemmas 2.1 and 2.2, it is easy to see that we have the following
corollary.
Corollary 2.3. For any n ≥ 1 and Λ,Ξ ∈ D2[n] of same rank, there
exists U ∈ GL(n,C) such that ΨU(Λ) = Ξ.
2.2. The Vanishing Conjecture for the 2nd Order Homogeneous
Differential Operators with Constant Coefficients. In
this subsection, we show that Conjecture 1.1, hence also JC, is actually
equivalent to VC or HVC for all 2nd order homogeneous differential
operators Λ ∈ D2 (see Theorem 2.9). We also show that the
Laplace operators are not the only choice in the study of VC or JC
(see Proposition 2.10 and Example 2.11).
First, let us point out that VC fails badly for differential operators
with non-constant coefficients. The following counter-example was
given by M. de Bondt [B].
Example 2.4. Let x be a free variable and Λ = x d²/dx². Let P(x) = x.
Then one can check inductively that P(x) is Λ-nilpotent, but Λ^m P^{m+1} ≠
0 for any m ≥ 1.
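The inductive check in this example is easy to automate. The short Python script below (an illustration, not part of the paper) verifies for small m that Λ^m P^m = 0 while Λ^m P^{m+1} ≠ 0, taking Λ = x d²/dx² and P(x) = x and representing a univariate polynomial by its list of coefficients.

```python
def second_deriv(p):
    """p is a coefficient list [c0, c1, ...]; return p''."""
    return [i * (i - 1) * c for i, c in enumerate(p)][2:]

def Lam(p):
    """Lambda p = x * p''  (multiplying by x shifts coefficients up by one)."""
    return [0] + second_deriv(p)

def Lam_power(p, m):
    for _ in range(m):
        p = Lam(p)
    return p

def x_power(k):
    return [0] * k + [1]

for m in range(1, 8):
    assert all(c == 0 for c in Lam_power(x_power(m), m))       # Lambda^m P^m = 0
    assert any(c != 0 for c in Lam_power(x_power(m + 1), m))   # Lambda^m P^{m+1} != 0
```

Each application of Λ sends x^k to k(k−1) x^{k−1}, so after m steps the factor (m−m) = 0 kills x^m, while x^{m+1} survives with coefficient (m+1)! m!.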
Lemma 2.5. For any Λ ∈ D[n], U ∈ GL(n,C), A ∈ SM(n,C) and
P(z) ∈ Ā_n, we have
(a) P(z) is Λ-nilpotent iff Φ_U(P) is Ψ_U(Λ)-nilpotent. In particular,
P(z) is ∆_A-nilpotent iff Φ_U(P) = P(U^{-1}z) is ∆_{UAU^τ}-nilpotent.
(b) VC[n] (resp. HVC[n]) holds for Λ iff it holds for Ψ_U(Λ). In
particular, VC[n] (resp. HVC[n]) holds for ∆_A iff it holds for
∆_{UAU^τ}.
Proof: Note first that, for any m, k ≥ 1, we have
Φ_U(Λ^m P^k) = (Φ_U Λ^m Φ_U^{-1}) Φ_U(P^k)
            = (Φ_U Λ Φ_U^{-1})^m (Φ_U P)^k
            = [Ψ_U(Λ)]^m (Φ_U P)^k.
When Λ = ∆_A, by Eq. (2.7), we further have
Φ_U(∆_A^m P^k) = ∆_{UAU^τ}^m (Φ_U P)^k.
Since ΦU (resp.ΨU ) is an automorphism of Ān (resp.D[n]), it is easy
to check directly that both (a) and (b) follow from the equations above.
Combining the lemma above with Corollary 2.3, we immediately
have the following corollary.
Corollary 2.6. Suppose HVC[n] (resp.VC[n]) holds for a differential
operator Λ ∈ D2[n] of rank r ≥ 1. Then HVC[n] (resp.VC[n]) holds
for all differential operators Ξ ∈ D2[n] of rank r.
Actually we can derive much more (as follows) from the conditions
in the corollary above.
Proposition 2.7. (a) Suppose HVC[n] holds for a full rank Λ ∈ D◦2[n].
Then, for any k ≤ n, HVC[k] holds for all full rank Ξ ∈ D◦2[k].
(b) Suppose VC[n] holds for a full rank Λ ∈ D◦2[n]. Then, for any
m ≥ n, VC[m] holds for all Ξ ∈ D2[m] of rank n.
Proof: Note first that, the cases k = n in (a) and m = n in (b)
follow directly from Corollary 2.6. So we may assume k < n in (a) and
m > n in (b). Secondly, by Corollary 2.6, it will be enough to show
HVC[k] (k < n) holds for ∆k for (a) and VC[m] (m > n) holds for
∆n for (b).
(a) Let P ∈ A_k be a homogeneous ∆_k-nilpotent polynomial. We view
∆_k and P as elements of D2[n] and A_n, respectively. Since P does not
depend on z_i (k + 1 ≤ i ≤ n), for any m, ℓ ≥ 0, we have
∆_k^m P^ℓ = ∆_n^m P^ℓ.
Hence, P is also ∆_n-nilpotent. Since HVC[n] holds for ∆_n (as pointed
out at the beginning of the proof), we have ∆_k^m P^{m+1} = ∆_n^m P^{m+1} = 0
when m >> 0. Therefore, HVC[k] holds for ∆_k.
(b) Let K be the rational function field C(z_{n+1}, . . . , z_m). We view A_m
as a subalgebra of the polynomial algebra K[z_1, . . . , z_n] in the standard
way. Note that the differential operator ∆_n = Σ_{i=1}^n D_i² of A_m extends
canonically to a differential operator of K[z_1, . . . , z_n] with constant
coefficients.
Since VC[n] holds for ∆_n over the complex field (as pointed out
at the beginning of the proof), by Lefschetz's principle, we know that
VC[n] also holds for ∆_n over the field K. Therefore, for any ∆_n-
nilpotent P(z) ∈ A_m, by viewing ∆_n as an element of D2(K[z_1, . . . , z_n])
and P(z) as an element of K[z_1, . . . , z_n] (which is still ∆_n-nilpotent in the
new setting), we have ∆_n^k P^{k+1} = 0 when k >> 0. Hence VC[m] holds
for P(z) ∈ A_m and ∆_n ∈ D2[m]. ✷
Proposition 2.8. Suppose HVC[n] holds for a differential operator
Λ ∈ D2[n] with rank r < n. Then, for any k ≥ r, VC[k] holds for all
Ξ ∈ D2[k] of rank r.
Proof: First, by Corollary 2.6, we know HVC[n] holds for ∆r. To
show Proposition 2.8, by Proposition 2.7, (b), it will be enough to show
that VC[r] holds for ∆r.
Let P ∈ A_r ⊂ A_n be a ∆_r-nilpotent polynomial. If P is homogeneous,
there is nothing to prove since, as pointed out above, HVC[n]
holds for ∆_r. Otherwise, we homogenize P(z) to P̃ ∈ A_{r+1} ⊆ A_n.
Since ∆_r is a homogeneous differential operator, it is easy to see that,
for any m, k ≥ 1, ∆_r^m P^k = 0 iff ∆_r^m P̃^k = 0. Therefore, P̃ ∈ A_n is
also ∆_r-nilpotent when we view ∆_r as a differential operator of A_n.
Since HVC[n] holds for ∆_r, we have ∆_r^m P̃^{m+1} = 0 when m >> 0.
Then, by the observation above again, we also have ∆_r^m P^{m+1} = 0 when
m >> 0. Therefore, VC[r] holds for ∆_r. ✷
Now we are ready to prove our main result of this section.
Theorem 2.9. The following statements are equivalent to each other.
(1) JC holds.
(2) HVC[n] (n ≥ 1) hold for the Laplace operator ∆n.
(3) VC[n] (n ≥ 1) hold for the Laplace operator ∆n.
(4) HVC[n] (n ≥ 1) hold for all Λ ∈ D2[n].
(5) VC[n] (n ≥ 1) hold for all Λ ∈ D2[n].
Proof: First, the equivalences of (1), (2) and (3) have been established
in Theorem 7.2 in [Z2], while (4) ⇒ (2), (5) ⇒ (3) and (5) ⇒ (4)
are trivial. Therefore, it will be enough to show (3) ⇒ (5).
To show (3) ⇒ (5), we fix any n ≥ 1. By Corollary 2.6, it will
be enough to show VC[n] holds for ∆r (1 ≤ r ≤ n). But under the
assumption of (3) (with n = r), we know that VC[r] holds for ∆r.
Then, by Proposition 2.7, (b), we know VC[n] also holds for ∆r. ✷
Next, we show that, to study HVC, equivalently VC or JC, the
Laplace operators are not the only choices, even though they are the
best in many situations.
Proposition 2.10. Let {n_k | k ≥ 1} be a strictly increasing sequence of
positive integers and {Λ_{n_k} | k ≥ 1} a sequence of differential operators
in D2 with rank(Λ_{n_k}) = n_k (k ≥ 1). Suppose that, for any k ≥ 1,
HVC[N_k] holds for Λ_{n_k} for some N_k ≥ n_k. Then the equivalent
statements in Theorem 2.9 hold.
Proof: We show that, under the assumption in the proposition, statement
(2) in Theorem 2.9 holds, i.e. for any n ≥ 1, HVC[n]
holds for the Laplace operator ∆_n ∈ D2[n].
For any fixed n ≥ 1, let k ≥ 1 be such that n_k ≥ n. If N_k = n_k,
then, by Proposition 2.7, (a), HVC[n] holds for the
Laplace operator ∆_n ∈ D2[n]. If N_k > n_k, then, by Proposition 2.8, we
know VC[n_k] (hence also HVC[n_k]) holds for ∆_{n_k}. Since n_k ≥ n, by
Proposition 2.7, (a) again, we know HVC[n] does hold for the Laplace
operator ∆_n. ✷
Example 2.11. Besides the Laplace operators, by Proposition 2.10, the
following sequences of differential operators are also among the most
natural choices.
(1) Let n_k = k (k ≥ 2) (or any other strictly increasing sequence of
positive integers). Let Λ_k be the "Laplace operator" with respect
to the standard Minkowski metric of R^k. Namely, choose
Λ_k = D_1² − Σ_{i=2}^k D_i².    (2.9)
(2) Choose n_k = 2k (k ≥ 1) (or any other strictly increasing
sequence of positive even numbers). Let Λ_{2k} be the "Laplace
operator" with respect to the standard symplectic metric on R^{2k},
i.e. choose
Λ_{2k} = Σ_{i=1}^k D_i D_{i+k}.    (2.10)
(3) We may also choose the complex Laplace operators ∆_∂̄ instead of
the real Laplace operator ∆. More precisely, we choose n_k = 2k
for any k ≥ 1 and view the polynomial algebra in w_i (1 ≤ i ≤
2k) over C as the polynomial algebra C[z_i, z̄_i | 1 ≤ i ≤ k] by
setting z_i = w_i + √−1 w_{i+k} for any 1 ≤ i ≤ k. Then, for any
k ≥ 1, we set
Λ_k = ∆_{∂̄,k} := Σ_{i=1}^k ∂²/(∂z_i ∂z̄_i).    (2.11)
(4) More generally, we may also choose Λ_k = ∆_{A_{n_k}}, where n_k ∈ N
and A_{n_k} ∈ SM(n_k,C) (not necessarily invertible) (k ≥ 1) with
strictly increasing ranks.
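As a concrete instance of choice (1), for the two-variable "Minkowski Laplacian" Λ = D_1² − D_2² the polynomial P = (z_1 + z_2)^d is Λ-nilpotent, since each application of Λ to a power of z_1 + z_2 produces the factor 1 − 1 = 0; in particular every power of P is killed by a single Λ. A quick machine check (illustrative Python, not from the paper; polynomials as dictionaries from exponent pairs to coefficients):

```python
from math import comb

def D2(p, axis):
    """Second partial derivative along the given axis; p maps (i, j) -> coeff."""
    r = {}
    for (i, j), c in p.items():
        e = i if axis == 0 else j
        if e >= 2:
            key = (i - 2, j) if axis == 0 else (i, j - 2)
            r[key] = r.get(key, 0) + c * e * (e - 1)
    return r

def minkowski(p):
    """Lambda p = D1^2 p - D2^2 p."""
    r = dict(D2(p, 0))
    for e, c in D2(p, 1).items():
        r[e] = r.get(e, 0) - c
    return {e: c for e, c in r.items() if c != 0}

def binom_power(d):
    """(z1 + z2)^d expanded."""
    return {(k, d - k): comb(d, k) for k in range(d + 1)}

# P = (z1 + z2)^3: every power P^m = (z1 + z2)^{3m} is killed by one Lambda,
# so Lambda^m P^m = 0 and Lambda^m P^{m+1} = 0 as well.
for m in range(1, 6):
    assert minkowski(binom_power(3 * m)) == {}
```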
3. Some Properties of ∆A-Nilpotent Polynomials
As pointed out earlier in Section 1 (see page 2), for the Laplace operators
∆_n (n ≥ 1), the notion of ∆_n-nilpotency coincides with the notion of
Hessian nilpotency. HN (Hessian nilpotent) polynomials or formal power
series, their associated symmetric polynomial maps and Conjecture 1.1
have been studied in [BE2], [BE3], [Z1]–[Z3] and [EZ]. In this section,
we apply Corollary 2.3, Lemma 2.5 and also Lefschetz’s principle to
transform some results obtained in the references above into certain
results on Λ-nilpotent (Λ ∈ D2) polynomials or formal power series, VC
for Λ and also associated polynomial maps. Another purpose of this
section is to give a short survey on some results on HN polynomials and
Conjecture 1.1 in the more general setting of Λ-nilpotent polynomials
and VC for differential operators Λ ∈ D2.
In subsection 3.1, we transform some results in [Z1] and [Z2] to the
setting of Λ-nilpotent polynomials for Λ ∈ D2[n] of full rank (see
Theorems 3.1–3.3). In subsection 3.2, we derive four criteria for Λ-nilpotency
(Λ ∈ D2) (see Propositions 3.4, 3.6, 3.7 and 3.10). The criteria in this
subsection are transformations of the criteria of Hessian nilpotency
derived in [Z2] and [Z3].
In subsection 3.3, we transform some results in [BCW], [Wa] and
[Y] on JC; [BE2] and [BE3] on symmetric polynomial maps; [Z2], [Z3]
and [EZ] on HN polynomials to certain results on VC for Λ ∈ D2.
In subsection 3.4, we consider VC for higher order differential operators
with constant coefficients. We mainly focus on the differential operators
Λ = D^a (a ∈ N^n). Surprisingly, VC for these operators is equivalent
to a conjecture (see Conjecture 3.21) on Laurent polynomials, which
is similar to a non-trivial theorem (see Theorem 3.20) first conjectured
by O. Mathieu [Ma] and later proved by J. Duistermaat and W. van
der Kallen [DK].
3.1. Associated Polynomial Maps and PDEs. Once and for all in
this section, we fix an n ≥ 1 and A ∈ SM(n,C) of rank 1 ≤ r ≤ n. Unlike
before, we use z and D to denote the n-tuples (z_1, z_2, . . . , z_n)
and (D_1, D_2, . . . , D_n), respectively. We define a C-bilinear form 〈·, ·〉_A
by setting 〈u, v〉_A := u^τ A v for any u, v ∈ C^n. Note that, when A =
I_{n×n}, the bilinear form defined above is just the standard C-bilinear
form of C^n, which we also denote by 〈·, ·〉.
By Lemma 2.2, we may write A as in Eq. (2.8). For any P(z) ∈ Ā_n,
we set
P̃(z) = Φ_U^{-1} P(z) = P(Uz).    (3.1)
Note that, by Lemma 2.1, (b), we have Ψ_U^{-1}(∆_A) = ∆_r. By Lemma
2.5, (a), P(z) is ∆_A-nilpotent iff P̃(z) is ∆_r-nilpotent.
Theorem 3.1. Let t be a central parameter. For any P(z) ∈ A_n with
o(P(z)) ≥ 2 and A ∈ SGL(n,C), set F_{A,t}(z) := z − tA∇P(z). Then
(a) there exists a unique Q_{A,t}(z) ∈ C[t][[z]] such that the formal
inverse map G_{A,t}(z) of F_{A,t}(z) is given by
G_{A,t}(z) = z + tA∇Q_{A,t}(z).    (3.2)
(b) The Q_{A,t}(z) ∈ C[t][[z]] in (a) is the unique formal power series
solution of the following Cauchy problem:
∂Q_{A,t}/∂t (z) = (1/2) 〈∇Q_{A,t}, ∇Q_{A,t}〉_A,
Q_{A,t=0}(z) = P(z).    (3.3)
Proof: Let P̃ be as given in Eq. (3.1) and set
F̃_{A,t}(z) = z − t∇P̃(z).    (3.4)
By Theorem 3.6 in [Z1], we know the formal inverse map G̃_{A,t}(z) of
F̃_{A,t}(z) is given by
G̃_{A,t}(z) = z + t∇Q̃_{A,t}(z),    (3.5)
where Q̃_{A,t}(z) ∈ C[t][[z]] is the unique formal power series solution of
the following Cauchy problem:
∂Q̃_{A,t}/∂t (z) = (1/2) 〈∇Q̃_{A,t}, ∇Q̃_{A,t}〉,
Q̃_{A,t=0}(z) = P̃(z).    (3.6)
From the fact that ∇P̃(z) = (U^τ∇P)(Uz), it is easy to check that
(Φ_U ∘ F̃_{A,t} ∘ Φ_U^{-1})(z) = z − tA∇P(z) = F_{A,t}(z),    (3.7)
which is the formal inverse map of
(Φ_U ∘ G̃_{A,t} ∘ Φ_U^{-1})(z) = z + t(U∇Q̃_{A,t})(U^{-1}z).    (3.8)
Set
Q_{A,t}(z) := Q̃_{A,t}(U^{-1}z).    (3.9)
Then we have
∇Q_{A,t}(z) = (U^τ)^{-1}(∇Q̃_{A,t})(U^{-1}z), i.e.
U^τ∇Q_{A,t}(z) = (∇Q̃_{A,t})(U^{-1}z).    (3.10)
Multiplying both sides of the equation above by U and noticing
that A = UU^τ (by Eq. (2.8), since A is of full rank), we get
A∇Q_{A,t}(z) = (U∇Q̃_{A,t})(U^{-1}z).    (3.11)
Then, combining Eq. (3.8) and the equation above, we see that the formal
inverse G_{A,t}(z) of F_{A,t}(z) is given by
G_{A,t}(z) = (Φ_U ∘ G̃_{A,t} ∘ Φ_U^{-1})(z) = z + tA∇Q_{A,t}(z).    (3.12)
Applying Φ_U to Eq. (3.6) and using Eqs. (3.9) and (3.10), we see that
Q_{A,t}(z) is the unique formal power series solution of the Cauchy problem
Eq. (3.3).
By applying the linear automorphism Φ_U of C[[z]] and employing an
argument similar to that in the proof of Theorem 3.1 above, we can
generalize Theorems 3.1 and 3.4 in [Z2] to the following theorem on ∆_A-
nilpotent (A ∈ SGL(n,C)) formal power series.
Theorem 3.2. Let A, P(z) and Q_{A,t}(z) be as in Theorem 3.1. We further
assume P(z) is ∆_A-nilpotent. Then,
(a) Q_{A,t}(z) is the unique formal power series solution of the following
Cauchy problem:
∂Q_{A,t}/∂t (z) = (1/4) ∆_A Q_{A,t}²(z),
Q_{A,t=0}(z) = P(z).    (3.13)
(b) For any k ≥ 1, we have
Q_{A,t}^k(z) = Σ_{m=0}^∞ (k! t^m)/(2^m m! (m+k)!) ∆_A^m P^{m+k}(z).    (3.14)
Applying the same strategy to Theorem 3.2 in [Z2], we get the following
theorem.
Theorem 3.3. Let A, P(z) and Q_{A,t}(z) be as in Theorem 3.2. For any
non-zero s ∈ C, set
V_{t,s}(z) := exp(sQ_{A,t}(z)) = Σ_{k=0}^∞ s^k Q_{A,t}^k(z)/k!.
Then, V_{t,s}(z) is the unique formal power series solution of the following
Cauchy problem of the heat-like equation:
∂V_{t,s}/∂t (z) = (1/(2s)) ∆_A V_{t,s}(z),
V_{t=0,s}(z) = exp(sP(z)).    (3.15)
3.2. Some Criteria of ∆_A-Nilpotency. In this subsection, with the
notation and remarks of the previous subsection in mind, we
apply the linear automorphism Φ_U to transform some criteria of Hessian
nilpotency derived in [Z2] and [Z3] into criteria of ∆_A-nilpotency (A ∈
SM(n,C)) (see Propositions 3.4, 3.6, 3.7 and 3.10 below).
Proposition 3.4. Let A be given as in Eq. (2.8). Then, for any P(z) ∈
A_n, it is ∆_A-nilpotent iff the submatrix of U^τ(Hes P)U consisting of
the first r rows and r columns is nilpotent.
In particular, when r = n, i.e. ∆_A is of full rank, P(z) ∈ A_n is
∆_A-nilpotent iff U^τ(Hes P)U is nilpotent.
Proof: Let P̃(z) be as in Eq. (3.1). Then, as pointed out earlier, P(z)
is ∆_A-nilpotent iff P̃(z) is ∆_r-nilpotent.
If r = n, then by Theorem 1.2, P̃(z) is ∆_r-nilpotent iff Hes P̃(z) is
nilpotent. But note that in general we have
Hes P̃(z) = Hes(P(Uz)) = U^τ[(Hes P)(Uz)]U.    (3.16)
Therefore, Hes P̃(z) is nilpotent iff U^τ[(Hes P)(Uz)]U is nilpotent iff,
with z replaced by U^{-1}z, U^τ[(Hes P)(z)]U is nilpotent. Hence the
proposition follows in this case.
Assume r < n. We view A_r as a subalgebra of the polynomial
algebra K[z_1, . . . , z_r], where K is the rational function field
C(z_{r+1}, . . . , z_n). By Theorem 1.2 and Lefschetz's principle, we know
that P̃ is ∆_r-nilpotent iff the matrix (∂²P̃/∂z_i∂z_j)_{1≤i,j≤r} is
nilpotent.
Note that the matrix (∂²P̃/∂z_i∂z_j)_{1≤i,j≤r} is the submatrix of
Hes P̃(z) consisting of the first r rows and r columns. By Eq. (3.16), it is
also the submatrix of U^τ[Hes P(Uz)]U consisting of the first r rows and
r columns. Replacing z by U^{-1}z in the submatrix above, we see that
(∂²P̃/∂z_i∂z_j)_{1≤i,j≤r} is nilpotent iff the submatrix of U^τ[Hes P(z)]U
consisting of the first r rows and r columns is nilpotent. Hence the
proposition follows. ✷
Note that, for any homogeneous quadratic polynomial P(z) = z^τ B z
with B ∈ SM(n,C), we have Hes P(z) = 2B. Then, by Proposition
3.4, we immediately have the following corollary.
Corollary 3.5. For any homogeneous quadratic polynomial P(z) =
z^τ B z with B ∈ SM(n,C), P(z) is ∆_A-nilpotent iff the submatrix of
U^τ B U consisting of the first r rows and r columns is nilpotent.
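Corollary 3.5 (taken here with A = U = I_{n×n}, so r = n) is easy to test numerically. In the illustrative Python check below (not from the paper), B = [[1, i],[i, −1]] satisfies B² = 0, so P = z^τ B z = z_1² + 2i z_1 z_2 − z_2² is HN and ∆^m P^m = 0 for all m; by contrast, P = z_1² − z_2² is harmonic (∆P = 0) but its Hessian diag(2, −2) is not nilpotent, and indeed ∆² P² ≠ 0.

```python
def mul(p, q):
    r = {}
    for (a, b), c in p.items():
        for (e, f), d in q.items():
            key = (a + e, b + f)
            r[key] = r.get(key, 0) + c * d
    return {k: v for k, v in r.items() if v != 0}

def power(p, m):
    r = {(0, 0): 1}
    for _ in range(m):
        r = mul(r, p)
    return r

def lap(p):
    """Laplacian: D1^2 p + D2^2 p, for p mapping (i, j) -> coefficient."""
    r = {}
    for (i, j), c in p.items():
        if i >= 2:
            r[(i - 2, j)] = r.get((i - 2, j), 0) + c * i * (i - 1)
        if j >= 2:
            r[(i, j - 2)] = r.get((i, j - 2), 0) + c * j * (j - 1)
    return {k: v for k, v in r.items() if v != 0}

def lap_power(p, m):
    for _ in range(m):
        p = lap(p)
    return p

# P1 = z1^2 + 2i z1 z2 - z2^2 = z^T B z with B = [[1, i], [i, -1]], B^2 = 0.
P1 = {(2, 0): 1, (1, 1): 2j, (0, 2): -1}
for m in range(1, 6):
    assert lap_power(power(P1, m), m) == {}   # Hessian nilpotent => Delta^m P1^m = 0

# P2 = z1^2 - z2^2 is harmonic but Hes P2 = diag(2, -2) is NOT nilpotent:
P2 = {(2, 0): 1, (0, 2): -1}
assert lap(P2) == {}                          # Delta P2 = 0 (harmonic)
assert lap_power(power(P2, 2), 2) != {}       # but Delta^2 P2^2 != 0
```

This also illustrates that HN is strictly stronger than harmonicity, even though every HN polynomial is harmonic (a nilpotent Hessian has zero trace).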
Proposition 3.6. Let A be given as in Eq. (2.8). Then, for any P(z) ∈
Ā_n with o(P(z)) ≥ 2, P(z) is ∆_A-nilpotent iff ∆_A^m P^m = 0 for any
1 ≤ m ≤ r.
Proof: Again, we let P̃(z) be as in Eq. (3.1) and note that P(z) is
∆_A-nilpotent iff P̃(z) is ∆_r-nilpotent.
Since r ≤ n, we view A_r as a subalgebra of the polynomial algebra
K[z_1, . . . , z_r], where K is the rational function field C(z_{r+1}, . . . , z_n).
By Theorem 1.2 and Lefschetz's principle (if r < n), we have that P̃(z) is
∆_r-nilpotent iff ∆_r^m P̃^m = 0 for any 1 ≤ m ≤ r. On the other hand, by
Eqs. (2.6) and (2.7), we have Φ_U(∆_r^m P̃^m) = ∆_A^m P^m for any m ≥ 1.
Since Φ_U is an automorphism of A_n, we have that ∆_r^m P̃^m = 0 for any
1 ≤ m ≤ r iff ∆_A^m P^m = 0 for any 1 ≤ m ≤ r. Therefore, P(z) is
∆_A-nilpotent iff ∆_A^m P^m = 0 for any 1 ≤ m ≤ r. Hence the
proposition follows. ✷
Proposition 3.7. For any A ∈ SGL(n,C) and any homogeneous
P(z) ∈ A_n of degree d ≥ 2, P(z) is ∆_A-nilpotent iff, for
any β ∈ C^n, β_D^{d−2} P(z) is ∆_A-nilpotent, where β_D := 〈β, D〉.
Proof: Let A be given as in Eq. (2.8) and P̃(z) as in Eq. (3.1). Note
that Ψ_U^{-1}(∆_A) = ∆_n (for ∆_A is of full rank), and P(z) is ∆_A-nilpotent
iff P̃(z) is ∆_n-nilpotent.
Since P̃ is also homogeneous of degree d ≥ 2, by Theorem 1.2 in
[Z3], we know that P̃(z) is ∆_n-nilpotent iff, for any β ∈ C^n, β_D^{d−2} P̃ is
∆_n-nilpotent. Note that, from Lemma 2.1, (b), we have
Ψ_U(β_D) = 〈β, U^τ D〉 = 〈Uβ, D〉 = (Uβ)_D,
Φ_U(β_D^{d−2} P̃) = Ψ_U(β_D)^{d−2} Φ_U(P̃) = (Uβ)_D^{d−2} P.    (3.17)
Therefore, by Lemma 2.5, (a), β_D^{d−2} P̃ is ∆_n-nilpotent iff (Uβ)_D^{d−2} P is
∆_A-nilpotent, since Ψ_U(∆_n) = ∆_A. Combining all the equivalences above,
we see that P(z) is ∆_A-nilpotent iff, for any β ∈ C^n, (Uβ)_D^{d−2} P is ∆_A-
nilpotent. Since U is invertible, when β runs over C^n so does Uβ.
Therefore the proposition follows. ✷
Let {e_i | 1 ≤ i ≤ n} be the standard basis of C^n. Applying the
proposition above to β = e_i (1 ≤ i ≤ n), we have the following corollary.
Corollary 3.8. For any homogeneous ∆_A-nilpotent polynomial P(z) ∈
A_n of degree d ≥ 2, the polynomials D_i^{d−2} P(z) (1 ≤ i ≤ n) are also
∆_A-nilpotent.
We think that Proposition 3.7 and Corollary 3.8 are interesting because,
due to Corollary 3.5, it is much easier to decide whether or not a
quadratic form is ∆_A-nilpotent.
To state the next criterion, we need to fix the following notation.
For any A ∈ SGL(n,C), we let X_A(C^n) be the set of isotropic vectors
u ∈ C^n with respect to the C-bilinear form 〈·, ·〉_A, i.e. vectors with
〈u, u〉_A = 0. When A = I_{n×n}, we also denote X_A(C^n) simply by
X(C^n).
For any α ∈ C^n, we set h_α(z) := 〈α, z〉. Then, by applying Φ_U to
a well-known theorem on classical harmonic polynomials, which is the
following theorem for A = I_{n×n} (see, for example, [He] and [T]), we
have the following result on homogeneous polynomials P(z) with ∆_A P = 0.
Theorem 3.9. Let P be any homogeneous polynomial of degree d ≥ 2
such that ∆_A P = 0. Then we have
P(z) = Σ_{i=1}^k h_{α_i}^d(z)    (3.18)
for some k ≥ 1 and α_i ∈ X_A(C^n) (1 ≤ i ≤ k).
Next, for any homogeneous polynomial P(z) of degree d ≥ 2 written
as in Eq. (3.18), we introduce the following matrices:
Ξ_P := (〈α_i, α_j〉_A)_{k×k},    (3.19)
Ω_P := (〈α_i, α_j〉_A h_{α_j}^{d−2}(z))_{k×k}.    (3.20)
Then, by applying ΦU to Proposition 5.3 in [Z2] (the details will
be omitted here), we have the following criterion of ∆A-nilpotency for
homogeneous polynomials.
Proposition 3.10. Let P (z) be as given in Eq. (3.18). Then P (z) is
∆A-nilpotent iff the matrix ΩP is nilpotent.
One simple remark on the criterion above is as follows.
Let B be the k × k diagonal matrix with h_{α_i}(z) (1 ≤ i ≤ k) as the
i-th diagonal entry. For any 1 ≤ j ≤ d − 2, set
Ω_{P;j} := B^j Ξ_P B^{d−2−j} = (h_{α_i}^j 〈α_i, α_j〉_A h_{α_j}^{d−2−j}).    (3.21)
Then, by repeatedly applying the fact that, for any C, D ∈ M(k,C),
CD is nilpotent iff DC is, it is easy to see that Proposition 3.10 can
also be re-stated as follows.
Corollary 3.11. Let P(z) be given by Eq. (3.18) with d ≥ 2. Then,
for any 1 ≤ j ≤ d − 2, P(z) is ∆_A-nilpotent iff the matrix
Ω_{P;j} is nilpotent.
Note that, when d is even, we may choose j = (d − 2)/2, so P is
∆_A-nilpotent iff the symmetric matrix
Ω_{P;(d−2)/2} = (h_{α_i}^{(d−2)/2} 〈α_i, α_j〉_A h_{α_j}^{(d−2)/2})    (3.22)
is nilpotent.
3.3. Some Results on the Vanishing Conjecture for the 2nd
Order Homogeneous Differential Operators with Constant
Coefficients. In this subsection, we transform some known results on
VC for the Laplace operators ∆_n (n ≥ 1) to certain results on VC for
∆_A (A ∈ SGL(n,C)).
First, by Wang's theorem [Wa], we know that JC holds for any
polynomial map F(z) with deg F ≤ 2. Hence, JC also holds for
symmetric polynomial maps F(z) = z − ∇P(z) with P(z) ∈ C[z] of degree
d ≤ 3. By the equivalence of JC and VC for the Laplace operators
established in [Z2], we know VC holds when Λ = ∆_n and P(z) is a HN
polynomial of degree d ≤ 3. Then, applying the linear automorphism
Φ_U, we have the following result.
Theorem 3.12. For any A ∈ SGL(n,C) and any ∆_A-nilpotent P(z) ∈ A_n
(not necessarily homogeneous) of degree d ≤ 3, we have ∆_A^m P^{m+1} = 0
when m >> 0, i.e. VC[n] holds for ∆_A and P(z).
Applying the classical homogeneous reduction of JC (see [BCW],
[Y]) to the associated symmetric maps, we know that, to show VC for ∆_n
(n ≥ 1), it is enough to consider only homogeneous HN polynomials
of degree 4. Therefore, by applying the linear automorphism Φ_U
of A_n, we have the same reduction for HVC too.
Theorem 3.13. To study HVC in general, it is enough to consider
only homogeneous P(z) ∈ A of degree 4.
In [BE2] and [BE3], it has been shown that JC holds for symmetric
maps F(z) = z − ∇P(z) (P(z) ∈ A_n) if the number of variables n is less
than or equal to 4, or if n = 5 and P(z) is homogeneous. By the equivalence
of JC for symmetric polynomial maps and VC for the Laplace operators
established in [Z2], together with Proposition 2.8 and Corollary 2.6, we
have the following results on VC and HVC.
Theorem 3.14. (a) For any n ≥ 1, VC[n] holds for any Λ ∈ D2 of
rank 1 ≤ r ≤ 4.
(b) HVC[5] holds for any Λ ∈ D2[5].
Note that the following vanishing properties of HN formal power
series have been proved in Theorem 6.2 in [Z2] for the Laplace operators
∆_n (n ≥ 1). By applying the linear automorphism Φ_U, one can show
that they also hold for any Λ-nilpotent (Λ ∈ D2) formal power series.
Theorem 3.15. Let Λ ∈ D2[n] and let P(z) ∈ Ā_n be Λ-nilpotent with
o(P) ≥ 2. The following statements are equivalent:
(1) Λ^m P^{m+1} = 0 when m >> 0.
(2) There exists k_0 ≥ 1 such that Λ^m P^{m+k_0} = 0 when m >> 0.
(3) For any fixed k ≥ 1, Λ^m P^{m+k} = 0 when m >> 0.
By applying the linear automorphism Φ_U, one can transform Theorem
1.5 in [EZ] on VC for the Laplace operators into the following result
on VC for Λ ∈ D2.
Theorem 3.16. Let Λ ∈ D2[n] and let P(z) ∈ Ā_n be any Λ-nilpotent
polynomial with o(P) ≥ 2. Then VC holds for Λ and P(z) iff, for any
g(z) ∈ A_n, we have Λ^m(g(z) P^m) = 0 when m >> 0.
In [EZ], the following theorem has also been proved for Λ = ∆_n.
Next we show that it is also true in general.
Theorem 3.17. Let A ∈ SGL(n,C) and let P(z) ∈ A_n be a homogeneous
∆_A-nilpotent polynomial with deg P ≥ 2. Assume that σ_{A^{-1}}(z) :=
z^τ A^{-1} z and the partial derivatives ∂P/∂z_i (1 ≤ i ≤ n) have no non-zero
common zeros. Then HVC[n] holds for ∆_A and P(z).
In particular, if the projective subvariety determined by the ideal
〈P(z)〉 of A_n is regular, HVC[n] holds for ∆_A and P(z).
Proof: Let P̃ be as given in Eq. (3.1). By Theorem 1.2 in [EZ], we know
that, when σ_2(z) := Σ_{i=1}^n z_i² and the partial derivatives ∂P̃/∂z_i
(1 ≤ i ≤ n) have no non-zero common zeros, HVC[n] holds for ∆_n and P̃.
Then, by Lemma 2.5, (b), HVC[n] also holds for ∆_A and P.
But, on the other hand, since U is invertible and, for any 1 ≤ i ≤ n,
∂P̃/∂z_i(z) = Σ_{j=1}^n u_{ji} (∂P/∂z_j)(Uz),
σ_2(z) and ∂P̃/∂z_i(z) (1 ≤ i ≤ n) have no non-zero common zeros iff
σ_2(z) and (∂P/∂z_i)(Uz) (1 ≤ i ≤ n) have no non-zero common zeros,
and iff, with z replaced by U^{-1}z, σ_2(U^{-1}z) = σ_{A^{-1}}(z) and
(∂P/∂z_i)(z) (1 ≤ i ≤ n) have no non-zero common zeros. Therefore, the
theorem holds. ✷
3.4. The Vanishing Conjecture for Higher Order Differential
Operators with Constant Coefficients. Even though the most
interesting case of VC is for Λ ∈ D2, at least as far as JC is concerned,
the case of VC for higher order differential operators with constant
coefficients is also interesting and non-trivial. In this subsection, we
mainly discuss VC for the differential operators D^a (a ∈ N^n). At the
end of this subsection, we also recall a result proved in [Z3] which says
that, when the base field has characteristic p > 0, VC, even under a
weaker condition, actually holds for any differential operator Λ (not
necessarily with constant coefficients).
Let β_j ∈ C^n (1 ≤ j ≤ ℓ) be linearly independent and set δ_j := 〈β_j, D〉.
Let Λ = Π_{j=1}^ℓ δ_j^{a_j} with a_j ≥ 1 (1 ≤ j ≤ ℓ).
When ℓ = 1, VC for Λ can be proved easily as follows.
Proposition 3.18. Let δ ∈ D1 and Λ = δ^k for some k ≥ 1. Then
(a) A polynomial P(z) is Λ-nilpotent if (and only if) ΛP = 0.
(b) VC holds for Λ.
Proof: Applying a change of variables, if necessary, we may assume
δ = D_1 and Λ = D_1^k.
Let P(z) ∈ C[z] be such that ΛP(z) = D_1^k P(z) = 0, and let d be the
degree of P(z) in z_1. From the equation above, we have k > d. Therefore,
for any m ≥ 1, we have km > dm, which implies Λ^m P^m(z) =
D_1^{km} P^m(z) = 0. Hence, we have (a).
To show (b), let P(z) be a Λ-nilpotent polynomial. By the same
notation and argument as above, we have k > d. Choose a positive integer
N > d/(k − d). Then, for any m ≥ N, we have m > d/(k − d), which is
equivalent to (m + 1)d < km. Hence we have Λ^m P^{m+1}(z) =
D_1^{km} P^{m+1}(z) = 0. ✷
In particular, when k = 1 in the proposition above, we have the
following corollary.
Corollary 3.19. VC holds for any differential operator Λ ∈ D1.
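The bound in the proof of Proposition 3.18 is sharp and easy to observe numerically. The following illustrative Python snippet (not from the paper) takes Λ = D^3 and P(x) = x² + x + 1, so that ΛP = 0 with d = 2 and k = 3: Λ^m P^{m+1} is still nonzero at m = 2 but vanishes for every m > d/(k − d) = 2.

```python
def deriv(p):
    """Derivative of a polynomial given as a coefficient list [c0, c1, ...]."""
    return [i * c for i, c in enumerate(p)][1:]

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def power(p, m):
    r = [1]
    for _ in range(m):
        r = mul(r, p)
    return r

def Lam_power(p, m, k):
    """Apply Lambda^m = D^{k m} to p."""
    for _ in range(k * m):
        p = deriv(p)
    return p

k, P = 3, [1, 1, 1]                  # Lambda = D^3,  P = 1 + x + x^2
assert Lam_power(P, 1, k) == []      # Lambda P = 0, so P is Lambda-nilpotent
assert any(Lam_power(power(P, 3), 2, k))             # Lambda^2 P^3 != 0 ...
for m in range(3, 8):
    assert not any(Lam_power(power(P, m + 1), m, k))  # ... but = 0 for m >= 3
```

So the conclusion of VC genuinely holds only "when m >> 0" here, not for all m.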
Next we consider the case ℓ ≥ 2. Note that, when ℓ = 2 and a_1 = a_2 = 1, Λ ∈ D2 and has rank 2. Then, by Theorem 3.14, we know VC holds for Λ.
Besides the case above, VC for Λ = Σ_{j=1}^ℓ δ_j^{a_j} with ℓ ≥ 2 seems to be far from trivial. Actually, as we will show below, it is equivalent to a conjecture (see Conjecture 3.21) on Laurent polynomials.
A VANISHING CONJECTURE ON DIFFERENTIAL OPERATORS 21

First, by applying a change of variables, if necessary, we may (and will) assume Λ = D^a with a ∈ (N_+)^ℓ. Secondly, note that, for any b ∈ N^n and h(z) ∈ C[z], D^b h(z) = 0 iff the holomorphic part of the Laurent polynomial z^{−b} h(z) is zero.
Now we fix a P(z) ∈ C[z] and set f(z) := z^{−a} P(z). With the observation above, it is easy to see that P(z) is D^a-nilpotent iff the holomorphic parts of the Laurent polynomials f^m(z) (m ≥ 1) are all zero, and VC holds for Λ and P(z) iff the holomorphic part of P(z) f^m(z) is zero when m ≫ 0. Therefore, VC for D^a can be restated as follows:
Re-Stated VC for Λ = D^a: Let P(z) ∈ A_n and f(z) be as above. Suppose that, for any m ≥ 1, the holomorphic part of the Laurent polynomial f^m(z) is zero. Then the holomorphic part of P(z) f^m(z) equals zero when m ≫ 0.
Note that the re-stated VC above is very similar to the following
non-trivial theorem which was first conjectured by O. Mathieu [Ma]
and later proved by J. Duistermaat and W. van der Kallen [DK].
Theorem 3.20. Let f and g be Laurent polynomials in z. Assume that, for any m ≥ 1, the constant term of f^m is zero. Then the constant term of g f^m equals zero when m ≫ 0.
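To see the phenomenon in Theorem 3.20 concretely, here is a small Python sketch with a hypothetical pair f, g; since f has only negative exponents, every power f^m has zero constant term, while the constant term of g f^m survives for a few small m and then dies out:

```python
from collections import defaultdict

def mul(f, g):
    """Multiply Laurent polynomials stored as {exponent: coefficient} dicts."""
    h = defaultdict(int)
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] += c1 * c2
    return {e: c for e, c in h.items() if c}

f = {-1: 1, -2: 1}        # f(z) = z^{-1} + z^{-2}: only negative exponents
g = {5: 1, 0: 1}          # g(z) = z^5 + 1

fm, cts = dict(f), {}
for m in range(1, 13):
    assert fm.get(0, 0) == 0          # constant term of f^m is zero for all m
    cts[m] = mul(g, fm).get(0, 0)     # constant term of g f^m
    fm = mul(fm, f)

assert cts[3] != 0                    # for small m the constant term can survive
assert all(cts[m] == 0 for m in range(6, 13))   # but it vanishes for m >> 0
```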
Note that Mathieu's conjecture [Ma] is a conjecture on all real compact Lie groups G, which is also mainly motivated by JC. The theorem above is the special case of Mathieu's conjecture when G is the n-dimensional real torus. For other compact real Lie groups, Mathieu's conjecture seems to be still wide open.
Motivated by Theorem 3.20, the above re-stated VC for Λ = Da
and also the result on VC in Theorem 3.16, we would like to propose
the following conjecture on Laurent polynomials.
Conjecture 3.21. Let f and g be Laurent polynomials in z. Assume that, for any m ≥ 1, the holomorphic part of f^m is zero. Then the holomorphic part of g f^m equals zero when m ≫ 0.
Note that a positive answer to the conjecture above will imply VC for Λ = D^a (a ∈ N^n) by simply choosing g(z) to be P(z).
Finally, let us point out that it is well known that JC does not hold over fields of finite characteristic (see [BCW], for example); but, by Proposition 5.3 in [Z3], the situation for VC over fields of finite characteristic is dramatically different, even though VC is equivalent to JC over the complex field C.
Proposition 3.22. Let k be a field of char. p > 0 and Λ any differential operator of k[z]. Let f ∈ k[[z]]. Assume that, for any 1 ≤ m ≤ p − 1, there exists N_m > 0 such that Λ^{N_m} f^m = 0. Then, Λ^m f^{m+1} = 0 when m ≫ 0.
From the proposition above, we immediately have the following corollary.
Corollary 3.23. Let k be a field of char. p > 0. Then
(a) VC holds for any differential operator Λ of k[z].
(b) If Λ strictly decreases the degree of polynomials, then, for any polynomial f ∈ k[z] (not necessarily Λ-nilpotent), we have Λ^m f^{m+1} = 0 when m ≫ 0.
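Part (b) of the corollary can be illustrated with the simplest degree-decreasing operator Λ = d/dz and f = z, which is not Λ-nilpotent in characteristic zero (Λ^m z^m = m! ≠ 0) but still satisfies Λ^m f^{m+1} = 0 for m ≫ 0 over F_p. A sketch:

```python
p = 5   # any prime works

def nth_deriv_coeff(m):
    """Coefficient of z in Lambda^m f^{m+1} = (d/dz)^m z^{m+1}, i.e. (m+1)!, mod p."""
    c = 1
    for j in range(2, m + 2):
        c = (c * j) % p
    return c

assert nth_deriv_coeff(p - 2) != 0                           # (p-1)! = -1 mod p
assert all(nth_deriv_coeff(m) == 0 for m in range(p - 1, 40))  # vanishes for m >> 0
```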
4. A Remark on Λ-Nilpotent Polynomials and Classical
Orthogonal Polynomials
In this section, we first in subsection 4.1 consider the “formal” connection between Λ-nilpotent polynomials or formal power series and classical orthogonal polynomials, which has been discussed in Section 1 (see page 4). We then in subsection 4.2 transform the isotropic properties of homogeneous HN polynomials proved in [Z2] to isotropic properties of homogeneous ∆_A-nilpotent (A ∈ SGL(n,C)) polynomials (see Theorem 4.10 and Corollary 4.11). Note that, as pointed out in Section 1, the isotropic results in subsection 4.2 can be understood as natural consequences of the connection of Λ-nilpotent polynomials with classical orthogonal polynomials discussed in subsection 4.1.
4.1. Some Classical Orthogonal Polynomials. First, let us recall
the definition of classical orthogonal polynomials. Note that, to be
consistent with the tradition for orthogonal polynomials, we will in
this subsection use x = (x1, x2, . . . , xn) instead of z = (z1, z2, . . . , zn)
to denote free commutative variables.
Definition 4.1. Let B be an open set of R^n and w(x) a real valued function defined over B such that w(x) ≥ 0 for any x ∈ B and 0 < ∫_B w(x) dx < ∞. A sequence of polynomials {f_m(x) | m ∈ N^n} is said to be orthogonal over B if
(1) deg f_m = |m| for any m ∈ N^n.
(2) ∫_B f_m(x) f_k(x) w(x) dx = 0 for any m ≠ k ∈ N^n.
The function w(x) is called the weight function. When the open set B ⊂ R^n and w(x) are clear in the context, we simply call the polynomials f_m(x) (m ∈ N^n) in the definition above orthogonal polynomials. If the orthogonal polynomials f_m(x) (m ∈ N^n) also satisfy ∫_B f_m^2(x) w(x) dx = 1 for any m ∈ N^n, we call f_m(x) (m ∈ N^n) orthonormal polynomials. Note that, in the one dimensional case, w(x) determines orthogonal polynomials over B up to multiplicative constants, i.e. if f_m(x) (m ≥ 0) are orthogonal polynomials as in Definition 4.1, then, for any a_m ∈ R^× (m ≥ 0), a_m f_m (m ≥ 0) are also orthogonal over B with respect to the weight function w(x).
The most natural way to construct orthogonal or orthonormal sequences is first to list all monomials in an order such that the degrees of the monomials are non-decreasing, and then to apply the Gram-Schmidt procedure to orthogonalize or orthonormalize the sequence of monomials. But, surprisingly, most classical orthogonal polynomials can also be obtained by so-called Rodrigues' formulas.
We first consider orthogonal polynomials in one variable.
Rodrigues' formula: Let f_m(x) (m ≥ 0) be the orthogonal polynomials as in Definition 4.1. Then, there exist a function g(x) defined over B and non-zero constants c_m ∈ R (m ≥ 0) such that

f_m(x) = c_m w(x)^{−1} (d/dx)^m ( w(x) g^m(x) ).   (4.1)
Let P(x) := g(x) and Λ := w(x)^{−1} (d/dx) w(x), where, throughout this paper, any polynomial or function appearing in a (differential) operator always means the multiplication operator by the polynomial or function itself. Then, by Rodrigues' formula above, we see that the orthogonal polynomials {f_m(x) | m ≥ 0} have the form

f_m(x) = c_m Λ^m P^m(x),   (4.2)

for any m ≥ 0.

In other words, all orthogonal polynomials in one variable, up to multiplicative constants, have the form {Λ^m P^m | m ≥ 0} for a single differential operator Λ and a single function P(x).
Next we consider some of the most well-known classical orthonor-
mal polynomials in one variable. For more details on these orthogonal
polynomials, see [Sz], [AS], [DX].
Example 4.2. (Hermite Polynomials)
(a) B = R and the weight function w(x) = e^{−x²}.
(b) Rodrigues' formula:

H_m(x) = (−1)^m e^{x²} (d/dx)^m e^{−x²}.

(c) Differential operator Λ and polynomial P(x):

Λ = e^{x²} (d/dx) e^{−x²} = d/dx − 2x,
P(x) = 1.

(d) Hermite polynomials in terms of Λ and P(x):

H_m(x) = (−1)^m Λ^m P^m(x).
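The identity in (d) is easy to verify with a computer algebra system; the following sympy sketch checks it for the first few m:

```python
import sympy as sp

x = sp.symbols('x')

def Lam(f):
    """Lambda = e^{x^2} (d/dx) e^{-x^2} = d/dx - 2x."""
    return sp.expand(sp.diff(f, x) - 2*x*f)

g = sp.Integer(1)                      # P(x) = 1, so P^m = 1 for every m
for m in range(8):
    assert sp.expand((-1)**m * g - sp.hermite(m, x)) == 0   # H_m = (-1)^m Lambda^m 1
    g = Lam(g)
```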
Example 4.3. (Laguerre Polynomials)
(a) B = R_+ and w(x) = x^α e^{−x} (α > −1).
(b) Rodrigues' formula:

L_m^α(x) = (1/m!) x^{−α} e^x (d/dx)^m ( x^{m+α} e^{−x} ).

(c) Differential operator Λ and polynomial P(x):

Λ_α = x^{−α} e^x (d/dx) ( e^{−x} x^α ) = d/dx + (α x^{−1} − 1),
P(x) = x.

(d) Laguerre polynomials in terms of Λ and P(x):

L_m^α(x) = (1/m!) Λ_α^m P^m(x).
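A similar sympy check for the Laguerre case, with symbolic α (a sketch, not part of the original text):

```python
import sympy as sp

x, a = sp.symbols('x alpha')

def Lam(f):
    """Lambda_alpha = x^{-alpha} e^x (d/dx) (e^{-x} x^alpha f) = f' + (alpha/x - 1) f."""
    return sp.cancel(sp.diff(f, x) + (a/x - 1)*f)

for m in range(6):
    g = x**m                           # P(x) = x, so P^m = x^m
    for _ in range(m):
        g = Lam(g)
    assert sp.simplify(g/sp.factorial(m) - sp.assoc_laguerre(m, a, x)) == 0
```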
Example 4.4. (Jacobi Polynomials)
(a) B = (−1, 1) and w(x) = (1 − x)^α (1 + x)^β, where α, β > −1.
(b) Rodrigues' formula:

P_m^{α,β}(x) = ((−1)^m / (2^m m!)) (1 − x)^{−α} (1 + x)^{−β} (d/dx)^m ( (1 − x)^{α+m} (1 + x)^{β+m} ).

(c) Differential operator Λ and polynomial P(x):

Λ = (1 − x)^{−α} (1 + x)^{−β} (d/dx) (1 − x)^α (1 + x)^β = d/dx − α(1 − x)^{−1} + β(1 + x)^{−1},
P(x) = 1 − x².

(d) Jacobi polynomials in terms of Λ and P(x):

P_m^{α,β}(x) = ((−1)^m / (2^m m!)) Λ^m P^m(x).
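And the analogous check for the Jacobi case, with symbolic α and β:

```python
import sympy as sp

x, a, b = sp.symbols('x alpha beta')

def Lam(f):
    """Lambda = (1-x)^{-alpha} (1+x)^{-beta} (d/dx) ((1-x)^alpha (1+x)^beta f)."""
    return sp.cancel(sp.diff(f, x) + (-a/(1 - x) + b/(1 + x))*f)

for m in range(4):
    g = (1 - x**2)**m                  # P^m with P = 1 - x^2
    for _ in range(m):
        g = Lam(g)
    Pm = sp.cancel((-1)**m * g / (2**m * sp.factorial(m)))
    assert sp.simplify(Pm - sp.jacobi(m, a, b, x)) == 0
```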
A very important special family of Jacobi polynomials are the Gegen-
bauer polynomials which are obtained by setting α = β = λ− 1/2 for
some λ > −1/2. Gegenbauer polynomials are also called ultraspherical
polynomials in the literature.
Example 4.5. (Gegenbauer Polynomials)
(a) B = (−1, 1) and w(x) = (1 − x²)^{λ−1/2}, where λ > −1/2.
(b) Rodrigues' formula:

P_m^λ(x) = ((−1)^m / (2^m (λ + 1/2)_m)) (1 − x²)^{1/2−λ} (d/dx)^m (1 − x²)^{m+λ−1/2},

where, for any c ∈ R and k ∈ N, (c)_k = c(c + 1) · · · (c + k − 1).
(c) Differential operator Λ and polynomial P(x):

Λ = (1 − x²)^{1/2−λ} (d/dx) (1 − x²)^{λ−1/2} = d/dx − (2λ − 1) x / (1 − x²),   (4.3)
P(x) = 1 − x².

(d) Gegenbauer polynomials in terms of Λ and P(x):

P_m^λ(x) = ((−1)^m / (2^m (λ + 1/2)_m)) Λ^m P^m(x).
Note that, for the special cases with λ = 0, 1, 1/2, the Gegenbauer polynomials P_m^λ(x) are called the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind, and the Legendre polynomials, respectively. Hence all these classical orthogonal polynomials also have the form Λ^m P^m (m ≥ 0), up to multiplicative constants c_m, with P(x) = 1 − x² and the corresponding special forms of the differential operator Λ in Eq. (4.3).
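For λ = 1/2 the x-term in Eq. (4.3) drops out, Λ reduces to d/dx, and (λ + 1/2)_m = (1)_m = m!, so the scheme specializes to the classical Rodrigues formula for the Legendre polynomials; a quick sympy check:

```python
import sympy as sp

x = sp.symbols('x')

# lambda = 1/2: Lambda = d/dx, and (-1)^m / (2^m (1)_m) = (-1)^m / (2^m m!)
for m in range(8):
    Pm = (-1)**m * sp.diff((1 - x**2)**m, x, m) / (2**m * sp.factorial(m))
    assert sp.expand(Pm - sp.legendre(m, x)) == 0
```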
Remark 4.6. Actually, the Gegenbauer polynomials are more closely
and directly related with VC in some different ways. See [Z4] for more
discussions on connections of the Gegenbauer polynomials with VC.
Next, we consider some classical orthogonal polynomials in several
variables. We will see that they can also be obtained from certain
sequences of the form {ΛmPm |m ≥ 0} in a slightly modified way. One
remark is that, unlike the one-variable case, orthogonal polynomials
in several variables up to multiplicative constants are not uniquely
determined by weight functions.
The first family of classical orthogonal polynomials in several vari-
ables can be constructed by taking Cartesian products of orthogonal
polynomials in one variable as follows.
Suppose {f_m | m ≥ 0} is a sequence of orthogonal polynomials in one variable, say as given in Definition 4.1. We fix any n ≥ 2 and set

W(x) := ∏_{i=1}^n w(x_i),   (4.4)
f_m(x) := ∏_{i=1}^n f_{m_i}(x_i),   (4.5)

for any x ∈ B^{×n} and m ∈ N^n. Then it is easy to see that the sequence {f_m(x) | m ∈ N^n} are orthogonal polynomials over B^{×n} with respect to the weight function W(x) defined above.
Note that, by applying the construction above to the classical one-variable orthogonal polynomials discussed in the previous examples, one gets the classical multiple Hermite polynomials, multiple Laguerre polynomials, multiple Jacobi polynomials and multiple Gegenbauer polynomials, respectively.
To see that the multi-variable orthogonal polynomials constructed above can be obtained from a sequence of the form {Λ^m P^m(x) | m ≥ 0}, we suppose f_m (m ≥ 0) have Rodrigues' formula Eq. (4.1). Let s = (s_1, . . . , s_n) be n central formal parameters and set

Λ_s := W(x)^{−1} 〈s, D〉 W(x),   (4.6)
P(x) := ∏_{i=1}^n g(x_i).   (4.7)

Let V_m(x) (m ∈ N^n) be the coefficient of s^m in Λ_s^{|m|} P^{|m|}(x). Then, from Eqs. (4.1), (4.4)–(4.7), it is easy to check that, for any m ∈ N^n, we have

f_m(x) = c_m (m!/|m|!) V_m(x),   (4.8)

where c_m = ∏_{i=1}^n c_{m_i}.

Therefore, we see that any multi-variable orthogonal polynomials constructed as above from Cartesian products of one-variable orthogonal polynomials can also be obtained from a single differential operator Λ_s and a single function P(x) via the sequence {Λ_s^m P^m | m ≥ 0}.
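As a consistency check of this construction in the simplest case (multiple Hermite polynomials with n = 2), the following sympy sketch assumes the reading Λ_s = W(x)^{−1} 〈s, D〉 W(x) with 〈s, D〉 = Σ_i s_i ∂/∂x_i, together with a multinomial factor m!/|m|! relating f_m to V_m; this reading is our assumption, recovered from comparing the constants in Examples 4.8 and 4.9:

```python
import sympy as sp

x, y, s1, s2 = sp.symbols('x y s1 s2')

def Lam_s(f):
    """Lambda_s = W^{-1} <s,D> W for W = e^{-x^2-y^2}: s1 (d/dx - 2x) + s2 (d/dy - 2y)."""
    return sp.expand(s1*(sp.diff(f, x) - 2*x*f) + s2*(sp.diff(f, y) - 2*y*f))

m1, m2 = 2, 3
M = m1 + m2
g = sp.Integer(1)                      # P(x, y) = g(x) g(y) = 1 in the Hermite case
for _ in range(M):
    g = Lam_s(g)                       # builds Lambda_s^{|m|} P^{|m|}

Vm = g.coeff(s1, m1).coeff(s2, m2)     # V_m = coefficient of s1^{m1} s2^{m2}
cm = (-1)**M                           # c_m = prod_i c_{m_i} = (-1)^{|m|} for Hermite
fm = cm * sp.factorial(m1) * sp.factorial(m2) / sp.factorial(M) * Vm
assert sp.expand(fm - sp.hermite(m1, x)*sp.hermite(m2, y)) == 0
```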
Remark 4.7. Note that, one can also take Cartesian products of different kinds of one-variable orthogonal polynomials to create more orthogonal polynomials in several variables. By a similar argument as above, we see that all these multi-variable orthogonal polynomials can also be obtained similarly from a single sequence {Λ_s^m P^m | m ≥ 0}.
Next, we consider the following two examples of classical multi-
variable orthogonal polynomials which are not Cartesian products of
one-variable orthogonal polynomials.
Example 4.8. (Classical Orthogonal Polynomials over Unit Balls)
(a) Choose B to be the open unit ball B^n of R^n and the weight function

W_μ(x) = (1 − ||x||²)^{μ−1/2},

where ||x|| = (Σ_{i=1}^n x_i²)^{1/2} and μ > 1/2.
(b) Rodrigues' formula: For any m ∈ N^n, set

U_m(x) := ((−1)^{|m|} (2μ)_{|m|} / (2^{|m|} m! (μ + 1/2)_{|m|})) (1 − ||x||²)^{1/2−μ} ∂^{|m|}/(∂x_1^{m_1} · · · ∂x_n^{m_n}) (1 − ||x||²)^{|m|+μ−1/2}.

Then, by Proposition 2.2.5 in [DX], {U_m(x) | m ∈ N^n} are orthonormal over B^n with respect to the weight function W_μ(x).
(c) Differential operator Λ_s and polynomial P(x): Let s = (s_1, . . . , s_n) be n central formal parameters and set

Λ_s := W_μ(x)^{−1} 〈s, D〉 W_μ(x),
P(x) := 1 − ||x||².

Let V_m(x) (m ∈ N^n) be the coefficient of s^m in Λ_s^{|m|} P^{|m|}(x). Then, from the Rodrigues type formula above, we have, for any m ∈ N^n,

U_m(x) = ((−1)^{|m|} (2μ)_{|m|} / (2^{|m|} |m|! (μ + 1/2)_{|m|})) V_m(x).

Therefore, the classical orthonormal polynomials {U_m(x) | m ∈ N^n} over B^n can be obtained from a single differential operator Λ_s and P(x) via the sequence {Λ_s^m P^m | m ≥ 0}.
Example 4.9. (Classical Orthogonal Polynomials over Simplices)
(a) Choose B to be the simplex

T^n = { x ∈ R^n | Σ_{i=1}^n x_i < 1; x_1, . . . , x_n > 0 }

in R^n and the weight function

W_κ(x) = x_1^{κ_1} · · · x_n^{κ_n} (1 − |x|_1)^{κ_{n+1}−1/2},   (4.9)

where κ_i > −1/2 (1 ≤ i ≤ n + 1) and |x|_1 = Σ_{i=1}^n x_i.
(b) Rodrigues' formula: For any m ∈ N^n, set

U_m(x) := W_κ(x)^{−1} ∂^{|m|}/(∂x_1^{m_1} · · · ∂x_n^{m_n}) ( W_κ(x) (1 − |x|_1)^{|m|} ).

Then {U_m(x) | m ∈ N^n} are orthonormal over T^n with respect to the weight function W_κ(x). See Section 2.3.3 of [DX] for a proof of this claim.
(c) Differential operator Λ_s and polynomial P(x): Let s = (s_1, . . . , s_n) be n central formal parameters and set

Λ_s := W_κ(x)^{−1} 〈s, D〉 W_κ(x),
P(x) := 1 − |x|_1.

Let V_m(x) (m ∈ N^n) be the coefficient of s^m in Λ_s^{|m|} P^{|m|}(x). Then, from the Rodrigues type formula in (b), we have, for any m ∈ N^n,

U_m(x) = (m!/|m|!) V_m(x).

Therefore, the classical orthonormal polynomials {U_m(x) | m ∈ N^n} over T^n can be obtained from a single differential operator Λ_s and a function P(x) via the sequence {Λ_s^m P^m | m ≥ 0}.
4.2. The Isotropic Property of ∆_A-Nilpotent Polynomials. As discussed in Section 1, the “formal” connection of Λ-nilpotent polynomials with classical orthogonal polynomials predicts that Λ-nilpotent polynomials should be isotropic with respect to a certain C-bilinear form of A_n. In this subsection, we show that, for differential operators Λ = ∆_A (A ∈ SGL(n,C)), this is indeed the case for any homogeneous Λ-nilpotent polynomial (see Theorem 4.10, Corollary 4.11 and Proposition 4.12).
We fix any n ≥ 1 and let z and D denote the n-tuples (z_1, . . . , z_n) and (D_1, D_2, . . . , D_n), respectively. Let A ∈ SGL(n,C) and define the C-bilinear map

{·, ·}_A : A_n × A_n → A_n,   (f, g) ↦ f(AD) g(z).   (4.10)

Furthermore, we also define a C-bilinear form

(·, ·)_A : A_n × A_n → C,   (f, g) ↦ {f, g}_A |_{z=0}.   (4.11)
It is straightforward to check that the C-bilinear form defined above
is symmetric and its restriction on the subspace of homogeneous poly-
nomials of any fixed degree is non-singular. Note also that, for any
homogeneous polynomials f, g ∈ An of the same degree, we have
{f, g}A = (f, g)A.
The main result of this subsection is the following theorem.
Theorem 4.10. Let A ∈ SGL(n,C) and P(z) ∈ A_n a homogeneous ∆_A-nilpotent polynomial of degree d ≥ 3. Let I(P) be the ideal of A_n generated by σ_{A^{−1}}(z) := z^τ A^{−1} z and ∂P/∂z_i (1 ≤ i ≤ n). Then, for any f(z) ∈ I(P) and m ≥ 0, we have

{f, ∆_A^m P^{m+1}}_A = f(AD) ∆_A^m P^{m+1} = 0.   (4.12)
Note that, by Theorem 6.3 in [Z2], we know that the theorem does
hold when A = In and ∆A = ∆n.
Proof: Note first that the elements of A_n satisfying Eq. (4.12) form an ideal. Therefore, it will be enough to show that σ_{A^{−1}}(z) and ∂P/∂z_i (1 ≤ i ≤ n) satisfy Eq. (4.12). But Eq. (4.12) for σ_{A^{−1}}(z) simply follows from the facts that σ_{A^{−1}}(Az) = z^τ A z and σ_{A^{−1}}(AD) = ∆_A.

Secondly, by Lemma 2.2, we can write A = UU^τ for some U = (u_{ij}) ∈ GL(n,C). Then, by Eq. (2.7), we have Ψ_U(∆_n) = ∆_A or Ψ_U^{−1}(∆_A) = ∆_n. Let P̃(z) := Φ_U^{−1}(P) = P(Uz). Then, by Lemma 2.5, (a), P̃ is a homogeneous ∆_n-nilpotent polynomial, and by Eq. (2.6), we also have

Φ_U^{−1}(∆_A^m P^{m+1}) = ∆_n^m P̃^{m+1}.   (4.13)

By Theorem 6.3 in [Z2], for any 1 ≤ i ≤ n and m ≥ 0, we have

(∂P̃/∂z_i)(D) ∆_n^m P̃^{m+1} = 0.

Since

(∂P̃/∂z_i)(z) = Σ_{j=1}^n u_{ji} (∂P/∂z_j)(Uz),

we further have

Σ_{j=1}^n u_{ji} (∂P/∂z_j)(UD) ∆_n^m P̃^{m+1} = 0.

Since U is invertible, for any 1 ≤ i ≤ n, we have

(∂P/∂z_i)(UD) ∆_n^m P̃^{m+1} = 0.   (4.14)
Combining the equation above with Eq. (4.13), we get

(∂P/∂z_i)(UD) Φ_U^{−1}(∆_A^m P^{m+1}) = Φ_U^{−1} ( (Φ_U (∂P/∂z_i)(UD) Φ_U^{−1}) ∆_A^m P^{m+1} ) = 0.   (4.15)

By Lemma 2.1, (b), Eq. (4.15) and the fact that A = UU^τ, we get

(∂P/∂z_i)(UU^τ D) ∆_A^m P^{m+1} = (∂P/∂z_i)(AD) ∆_A^m P^{m+1} = 0,

which is Eq. (4.12) for ∂P/∂z_i (1 ≤ i ≤ n). ✷
Corollary 4.11. Let A be as in Theorem 4.10 and P(z) be a homogeneous ∆_A-nilpotent polynomial of degree d ≥ 3. Then, for any m ≥ 1, ∆_A^m P^{m+1} is isotropic with respect to the C-bilinear form (·, ·)_A, i.e.

(∆_A^m P^{m+1}, ∆_A^m P^{m+1})_A = 0.   (4.16)

In particular, we have (P, P)_A = 0.
Proof: By the definition Eq. (4.11) of the C-bilinear form (·, ·)_A and Theorem 4.10, it will be enough to show that P and ∆_A^m P^{m+1} (m ≥ 1) belong to the ideal generated by the polynomials ∂P/∂z_i (1 ≤ i ≤ n) (here we do not need to consider the polynomial σ_{A^{−1}}(z)). But this statement has been proved in the proof of Corollary 6.7 in [Z2]. So we refer the reader to [Z2] for a proof of the statement above. ✷
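For A = I_n (so ∆_A = ∆_n), powers of z_1 + i z_2 give a convenient family of homogeneous HN polynomials, since they and all their powers are harmonic. The sketch below (the choice of P is an illustrative assumption) verifies the isotropy (P, P)_A = 0 and an instance of Eq. (4.12):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def apply_poly_of_D(f, g):
    """Compute f(D) g, substituting z_i -> d/dz_i in the polynomial f (here A = I)."""
    out = sp.Integer(0)
    for (e1, e2), c in sp.Poly(f, z1, z2).terms():
        h = g
        if e1:
            h = sp.diff(h, z1, e1)
        if e2:
            h = sp.diff(h, z2, e2)
        out += c*h
    return sp.expand(out)

def form(f, g):
    """(f, g)_A for A = I_2: f(D) g evaluated at z = 0."""
    return apply_poly_of_D(f, g).subs({z1: 0, z2: 0})

P = (z1 + sp.I*z2)**3                            # homogeneous of degree 3
Delta = lambda h: sp.diff(h, z1, 2) + sp.diff(h, z2, 2)

assert sp.expand(Delta(P)) == 0                  # P (and all its powers) are harmonic
assert sp.expand(Delta(Delta(P**2))) == 0        # Delta^2 P^2 = 0: Delta_n-nilpotency
assert form(P, P) == 0                           # isotropy (P, P)_A = 0, Corollary 4.11
assert form(sp.diff(P, z1), P) == 0              # Eq. (4.12) with f = dP/dz1, m = 0
```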
Theorem 4.10 and Corollary 4.11 do not hold for homogeneous HN
polynomials P (z) of degree d = 2. But, by applying similar arguments
as in the proof of Theorem 4.10 above to Proposition 6.8 in [Z2], one
can show that the following proposition holds.
Proposition 4.12. Let A be as in Theorem 4.10 and P(z) a homogeneous ∆_A-nilpotent polynomial of degree d = 2. Let J(P) be the ideal of C[z] generated by P(z) and σ_{A^{−1}}(z). Then, for any f(z) ∈ J(P) and m ≥ 0, we have

{f, ∆_A^m P^{m+1}}_A = f(AD) ∆_A^m P^{m+1} = 0.   (4.17)

In particular, we still have (P, P)_A = 0.
References
[AS] Handbook of mathematical functions with formulas, graphs, and mathemat-
ical tables. Edited by Milton Abramowitz and Irene A. Stegun. Reprint of
the 1972 edition. Dover Publications, Inc., New York, 1992. [MR0757537].
[BCW] H. Bass, E. Connell, D. Wright, The Jacobian conjecture, reduction of
degree and formal expansion of the inverse. Bull. Amer. Math. Soc. 7,
(1982), 287–330. [MR 83k:14028]. Zbl.539.13012.
[B] M. de Bondt, Personal Communications.
[BE1] M. de Bondt and A. van den Essen, A Reduction of the Jacobian Conjecture
to the Symmetric Case, Proc. Amer. Math. Soc. 133 (2005), no. 8, 2201–
2205 (electronic). [MR2138860].
[BE2] M. de Bondt and A. van den Essen, Nilpotent Symmetric Jacobian Matrices
and the Jacobian Conjecture, J. Pure Appl. Algebra 193 (2004), no. 1-3,
61–70. [MR2076378].
[BE3] M. de Bondt and A. van den Essen, Nilpotent Symmetric Jacobian Matrices
and the Jacobian Conjecture II, J. Pure Appl. Algebra 196 (2005), no. 2-3,
135–148. [MR2110519].
[C] T. S. Chihara, An introduction to orthogonal polynomials. Mathematics
and its Applications, Vol. 13. Gordon and Breach Science Publishers, New
York-London-Paris, 1978. [MR0481884].
[DK] J. J. Duistermaat and W. van der Kallen, Constant terms in powers
of a Laurent polynomial. Indag. Math. (N.S.) 9 (1998), no. 2, 221–231.
[MR1691479].
[DX] C. Dunkl and Y. Xu, Orthogonal polynomials of several variables. Ency-
clopedia of Mathematics and its Applications, 81. Cambridge University
Press, Cambridge, 2001. [MR1827871].
[E] A. van den Essen, Polynomial automorphisms and the Jacobian con-
jecture. Progress in Mathematics, 190. Birkhäuser Verlag, Basel, 2000.
[MR1790619].
[EZ] A. van den Essen and W. Zhao, Two results on Hessian nilpotent polyno-
mials. Preprint, arXiv:0704.1690v1 [math.AG].
[He] H. Iwaniec, Topics in classical automorphic forms, Graduate Studies in
Mathematics, 17. American Mathematical Society, Providence, RI, 1997.
[MR1474964]
[Hu] L. K. Hua, On the theory of automorphic functions of a matrix level. I.
Geometrical basis. Amer. J. Math. 66, (1944). 470–488. [MR0011133].
[Ke] O. H. Keller, Ganze Gremona-Transformation, Monats. Math. Physik 47
(1939), no. 1, 299-306. [MR1550818].
[Ko] T. H. Koornwinder, Two-variable analogues of the classical orthogonal
polynomials. Theory and application of special functions (Proc. Advanced
Sem., Math. Res. Center, Univ. Wisconsin, Madison, Wis., 1975), pp. 435–
495. Math. Res. Center, Univ. Wisconsin, Publ. No. 35, Academic Press,
New York, 1975. [MR0402146].
[Ma] O. Mathieu, Some conjectures about invariant theory and their applica-
tions. Algèbre non commutative, groupes quantiques et invariants (Reims,
1995), 263–279, Sémin. Congr., 2, Soc. Math. France, Paris, 1997.
[MR1601155].
[Me] G. Meng, Legendre transform, Hessian conjecture and tree formula.
Appl. Math. Lett. 19 (2006), no. 6, 503–510, [MR2221506]. See also
math-ph/0308035.
[SHW] J. A. Shohat; E. Hille and J. L. Walsh, A Bibliography on Orthogonal
Polynomials. Bull. Nat. Research Council, no. 103. National Research
Council of the National Academy of Sciences, Washington, D. C., 1940.
[MR0003302].
[Si] B. Simon, Orthogonal polynomials on the unit circle. Part 1. Classical
theory. American Mathematical Society Colloquium Publications, 54, Part
1. American Mathematical Society, Providence, RI, 2005. [MR2105088].
[Sz] G. Szegö, Orthogonal Polynomials. 4th edition. American Mathematical
Society, Colloquium Publications, Vol. XXIII. American Mathematical So-
ciety, Providence, R.I., 1975. [MR0372517].
[T] M. Takeuchi, Modern spherical functions, Translations of Mathematical
Monographs, 135. American Mathematical Society, Providence, RI, 1994.
[MR 1280269].
[Wa] S. Wang, A Jacobian Criterion for Separability, J. Algebra 65 (1980), 453-
494. [MR 83e:14010].
[Y] A. V. Jagžev, On a problem of O.-H. Keller. (Russian) Sibirsk. Mat. Zh.
21 (1980), no. 5, 141–150, 191. [MR0592226].
[Z1] W. Zhao, Inversion Problem, Legendre Transform and Inviscid Burg-
ers’ Equation, J. Pure Appl. Algebra, 199 (2005), no. 1-3, 299–317.
[MR2134306]. See also math.CV/0403020.
[Z2] W. Zhao, Hessian Nilpotent Polynomials and the Jacobian Conjec-
ture, Trans. Amer. Math. Soc. 359 (2007), no. 1, 249–274 (electronic).
[MR2247890]. See also math.CV/0409534.
[Z3] W. Zhao, Some Properties and Open Problems of Hessian Nilpotent Poly-
nomials. Preprint, arXiv:0704.1689v1 [math.CV].
[Z4] W. Zhao, A Conjecture on the Laplace-Beltrami Eigenfunctions. In prepa-
ration.
Department of Mathematics, Illinois State University, Normal, IL 61790-4520. E-mail: [email protected].
arXiv:0704.1693

NpNn Scheme Based on New Empirical Formula for Excitation Energy
Jin-Hee Yoon, Eunja Ha, and Dongwoo Cha∗
Department of Physics, Inha University, Incheon 402-751, Korea
(Dated: July 25, 2007)
Abstract
We examine the NpNn scheme based on a recently proposed simple empirical formula which
is highly valid for the excitation energy of the first excited natural parity even multipole states
in even-even nuclei. We demonstrate explicitly that the NpNn scheme for the excitation energy
emerges from the separate exponential dependence of the excitation energy on the valence nucleon
numbers Np and Nn together with the fact that only a limited set of numbers is allowed for the
Np and Nn of the existing nuclei.
PACS numbers: 21.10.Re, 23.20.Lv
∗Electronic address: [email protected]; Fax: +82-32-866-2452
The valence nucleon numbers Np and Nn have been frequently adopted in parameterizing various nuclear properties phenomenologically over more than the past four decades.
Hamamoto was the first to point out that the square roots of the ratios of the measured
and the single particle B(E2) values were proportional to the product NpNn [1]. It was
subsequently shown that a very simple pattern emerged whenever the nuclear data concerning the lowest collective states was plotted against NpNn [2]. This phenomenon has been
called the NpNn scheme in the literature [3]. For example, when the measured excitation
energies Ex(2+1) of the first excited 2+ states in even-even nuclei were plotted against the
mass number A (A-plot), we got data points scattered irregularly over the Ex-A plane as
seen in Fig. 1(a). However, we suddenly had a very neat rearrangement of the data points by
just plotting them against the product NpNn (NpNn-plot) as shown in Fig. 1(b). A similar
simplification was observed not only from Ex(2+1) but also from the ratio Ex(4+1)/Ex(2+1) [5, 6, 7], the transition probability B(E2; 2+1 → 0+) [8], and the quadrupole deformation parameter e2 [9].
The chief attraction of the NpNn scheme is twofold. One is the fact that the simplification
in the graph occurs marvelously every time the NpNn plot is drawn. The other attraction
FIG. 1: A typical example demonstrating the NpNn scheme. The excitation energies of the first 2+ states in even-even nuclei are plotted (a) against the mass number A and (b) against the product
NpNn. The dashed curve in part (a) represents the bottom contour line which is drawn by the first
term αA−γ of Eq. (1). The excitation energies are quoted from Ref. 4.
FIG. 2: Excitation energies of the first excited natural parity even multipole states. Part (a) shows
the measured excitation energies while part (b) shows those calculated by the empirical formula
given by Eq. (1). The measured excitation energies are quoted from the compilation in Raman et al. for 2+1 states [4] and extracted from the Table of Isotopes, 8th edition, by Firestone et al. for other multipole states [20].
is the universality of the pattern, namely, exactly the same sort of graph appears even at
different mass regions [2]. Since the performance of the NpNn scheme has been so impressive,
many expected that the residual valence proton-neutron (p-n) interaction must have been
the dominant controlling factor in the development of collectivity in nuclei and that the
product NpNn may represent an empirical measure of the integrated valence p-n interaction
strength [3]. Also, the importance of the p-n interaction in determining the structure of
nuclei has long been pointed out by many authors [10, 11, 12, 13, 14, 15, 16].
In the meantime, we have recently proposed a simple empirical formula which describes the essential trends of the excitation energies Ex(2+1) in even-even nuclei throughout the periodic table [17]. This formula, which depends on the valence nucleon numbers, Np and Nn, and the mass number A, can be expressed as

Ex = αA^−γ + β [exp(−λNp) + exp(−λNn)]   (1)
where the parameters α, β, γ, and λ are fitted from the data. We have also shown that the
source, which governs the 2+1 excitation energy dependence given by Eq. (1) on the valence
nucleon numbers, is the effective particle number participating in the residual interaction
from the Fermi level [18]. Furthermore, the same empirical formula can be applied quite
successfully to the excitation energies of the lowest natural parity even multipole states such as 4+1, 6+1, 8+1, and 10+1 [19]. This can be confirmed by Fig. 2, where the measured excitation energies in part (a) are compared with those in part (b), which are calculated by Eq. (1).
The values of the parameters adopted for Fig. 2(b) are listed in Table I.
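Eq. (1) with the Table I parameters is straightforward to evaluate; the sketch below implements it (the Np, Nn values in the demo calls are hypothetical inputs, not values taken from the paper):

```python
from math import exp

# (alpha, beta, gamma, lambda) per multipole state, as listed in Table I
PARAMS = {
    '2+':  (34.9,   1.00, 1.19, 0.36),
    '4+':  (94.9,   1.49, 1.15, 0.30),
    '6+':  (441.4,  1.51, 1.31, 0.25),
    '8+':  (1511.5, 1.41, 1.46, 0.19),
    '10+': (2489.0, 1.50, 1.49, 0.17),
}

def ex(A, Np, Nn, state='2+'):
    """Empirical excitation energy (MeV) of Eq. (1)."""
    alpha, beta, gamma, lam = PARAMS[state]
    return alpha*A**(-gamma) + beta*(exp(-lam*Np) + exp(-lam*Nn))

# The first term alone traces the bottom contour line of the A-plot;
# nuclei with few valence nucleons are pushed upward by the second term.
print(ex(166, 18, 16))    # hypothetical mid-shell valence numbers: small Ex
print(ex(166, 2, 2))      # near a shell closure the predicted Ex is much larger
```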
FIG. 3: The NpNn-plot for the excitation energies of the first 2+ states using both the data (open triangles) and the empirical formula (solid circles). The plot is divided into six panels, each of which contains plotted points that come from each one of the proton major shells.
TABLE I: Values adopted for the four parameters in Eq. (1) for the excitation energies of the following multipole states: 2+1, 4+1, 6+1, 8+1, and 10+1.

Multipole   α (MeV)   β (MeV)   γ     λ
2+1           34.9      1.00    1.19  0.36
4+1           94.9      1.49    1.15  0.30
6+1          441.4      1.51    1.31  0.25
8+1         1511.5      1.41    1.46  0.19
10+1        2489.0      1.50    1.49  0.17
In this study, we want to further elucidate our examination of the NpNn scheme based on the empirical formula, Eq. (1), for Ex(2+1). Our goal is to clarify why Ex(2+1) complies with the NpNn scheme although the empirical formula, which reproduces the data quite well, does not depend explicitly on the product NpNn.
First, we check how well the empirical formula does meet the requirements of the NpNn
scheme. In Fig. 3, we display the NpNn-plot for the excitation energies of the first 2+ states using both the data (empty triangles) and the empirical formula (solid circles). We
show them with six panels. Each panel contains plotted points from nuclei which make up
the following six different proton major shells: (I) 2 ≤ Z ≤ 8, (II) 10 ≤ Z ≤ 20, (III)
TABLE II: The maximum value of NpNn and the minimum value of Ex for each major shell in
Fig. 3 are indicated here. The numbers in parentheses represent Ex calculated by the empirical formula given by Eq. (1).
Major Shell Z Max. NpNn Min. Ex (MeV)
I 2 ∼ 8 8 1.59 (1.85)
II 10 ∼ 20 36 0.67 (0.82)
III 22 ∼ 28 16 0.75 (0.77)
IV 30 ∼ 50 140 0.13 (0.18)
V 52 ∼ 82 308 0.07 (0.08)
VI 84 ∼ 126 540 0.04 (0.05)
FIG. 4: Extract from Fig. 3 for some typical nuclei which belong to the rare earth elements.
Different symbols are used to denote excitation energies of individual nuclei.
22 ≤ Z ≤ 28, (IV) 30 ≤ Z ≤ 50, (V) 52 ≤ Z ≤ 82, and (VI) 84 ≤ Z ≤ 126. From this
figure, we can see an intrinsic feature of the NpNn-plot, namely, the plotted points have their
own typical location in the Ex-NpNn plane according to which major shell they belong. For
example, the plotted points of the first three major shells I, II, and III occupy the far left side
part of the Ex-NpNn plane in Fig. 3 since their value of the product NpNn does not exceed
several tens. On the contrary, the plotted points of the last major shell VI extend to the far
right part of the Ex-NpNn plane along the lowest portion in Fig. 3. This is true since their
value of the excitation energy Ex is very small and also their value of NpNn reaches more than five hundred. We present specific information, such as the maximum value of NpNn and the minimum value of Ex, in Table II for the plotted points which belong to each major shell
in Fig. 3. There are two numbers for each major shell in the last column of Table II where
one number is determined from the data and the other number in parenthesis is calculated
by the empirical formula. We can find that those two numbers agree reasonably well. We
also find in Fig. 3 that the results, calculated by the empirical formula (solid circles), meet
the requirement of the NpNn scheme very well and agree with the data (empty triangles)
satisfactorily for each and every panel.
In order to make a more detailed comparison between the measured and calculated excitation energies, we expand the largest two major shells V and VI of Fig. 3 and redraw them
in Fig. 4 for some typical nuclei which belong to the rare earth elements. The upper part of
Fig. 4 shows the data and the lower part of the same figure exhibits the corresponding cal-
culated excitation energies. We can confirm that the agreement between them is reasonable
even though the calculated excitation energies somewhat overestimate the data and also the
empirical formula cannot separate enough to distinguish the excitation energies of two isotopes with the same value of the product NpNn for some nuclei.
According to the empirical formula given by Eq. (1), the excitation energy Ex is determined by two components: one is the first term αA^−γ, which depends only on the mass number A, and the other is the second term β[exp(−λNp) + exp(−λNn)], which depends only on the valence nucleon numbers, Np and Nn. Let us first draw the NpNn-plot of Ex(2+1) by
using only the first term αA−γ. The results are shown in Fig. 5(a) where we can find that
the plotted points fill the lower left corner of the Ex-NpNn plane leaving almost no empty
spots. These results simply reflect the fact that a large number of nuclei with different mass
numbers, values of A, can have the same value of NpNn. Now we draw the same NpNn-plot
by using both of the two terms in Eq. (1). We display the plot of the calculated excitation
energies in Fig. 5(b) which is just the same sort of graph of the measured excitation energies
shown in Fig. 1(b) except that the type of scale for Ex is changed from linear to log. By
comparing Fig. 5 (a) and (b), we find that the second term of Eq. (1), which depends on the
valence nucleon numbers, Np and Nn, pushes the plotted points up in the direction of higher
excitation energies and arranges them to comply with the NpNn scheme.
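For concreteness, the two-component structure of Eq. (1) can be sketched in a few lines of Python. The value α = 34.9 MeV for the 2+1 state is quoted from Table I as cited above; the values of β, γ, and λ below are illustrative placeholders only, since the fitted parameters are listed in Table I, which is not reproduced here.

```python
import math

def excitation_energy(A, Np, Nn, alpha=34.9, beta=1.0, gamma=1.0, lam=0.1):
    """Empirical formula of Eq. (1), in MeV:
    Ex = alpha * A**(-gamma) + beta * (exp(-lam*Np) + exp(-lam*Nn)).
    beta, gamma, and lam here are placeholder values, not the fitted ones."""
    return alpha * A ** (-gamma) + beta * (math.exp(-lam * Np) + math.exp(-lam * Nn))
```

The second term shrinks as the valence nucleon numbers grow, which is exactly the mechanism discussed above that pushes nuclei with large NpNn toward low Ex in the NpNn-plot.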
It is worthwhile to note the difference between the A-plot and the NpNn-plot. The graph
drawn by using only the first term of Eq. (1) becomes a single curve in the A-plot as shown
in Fig. 1(a) with the dashed curve. It becomes scattered plotted points in the NpNn-plot as
can be seen from Fig. 5(a). Now, by adding the second term of Eq. (1) in the A-plot, the
plotted points are dispersed as shown in the top graph of Fig. 2(b) which corresponds to the
measured data points in Fig. 1(a); while by adding the same second term in NpNn-plot, we
find a very neat rearrangement of the plotted points as shown in Fig. 5(b). Thus, the same
second term plays the role of spreading plotted points in the A-plot while it plays the role
of collecting them in the NpNn-plot.
However, this mechanism of the second term alone is not sufficient to explain why the empirical formula given by Eq. (1), which obviously does not depend on NpNn at all, can
FIG. 5: The NpNn-plot of the calculated first excitation energy Ex of 2+ states. The excitation energies Ex are calculated by (a) using only the first term and (b) using both terms of Eq. (1).
show the characteristic feature of the NpNn scheme. In order to shed light on this question,
we calculate the excitation energy Ex(2+1) under the following three different conditions on the
exponents, Np and Nn, of the second term in Eq. (1). First, let Np and Nn have any even
numbers as long as they satisfy Np +Nn ≤ A. The resulting excitation energy Ex is plotted
against NpNn in Fig. 6(a). Next, let Np and Nn have any numbers that are allowed for the
valence nucleon numbers. For example, suppose the three numbers of a plotted point are
A = 90, Np = 40, and Nn = 50 in the previous case. For the fourth major shell IV in Table II,
the valence proton number for the nucleus with the atomic number Z = 40 is 10 and the
valence neutron number for the nucleus with the neutron number N = 50 is 0. Therefore,
we assign Np = 10 and Nn = 0 instead of 40 and 50, respectively. The excitation energy
Ex, calculated under such a condition, is plotted against NpNn in Fig. 6(b). Last, we take
only those excitation energies which are actually measured among the excitation energies
shown in Fig. 6(b). The results are shown in Fig. 6(c), which is, of course, exactly the same
as shown in Fig. 5(b). From Fig. 6(d) where all the three previous plots (a), (b), and (c) are
placed together, we can observe how the NpNn scheme emerges from the empirical formula
given by Eq. (1) even though this equation does not depend on the product NpNn at all.
On one hand, the two exponential terms which depend on Np and Nn separately push the
excitation energy Ex upward as discussed with respect to Fig. 5. On the other hand, the
restriction on the values of the valence nucleon numbers Np and Nn of the actually existing
nuclei determines the upper bound of the excitation energy Ex as discussed regarding Fig. 6.
Finally, we show the NpNn-plots of the first excitation energies for (a) 4+1, (b) 6+1, (c) 8+1, and (d) 10+1 states in Fig. 7. The measured excitation energies are represented by the
empty triangles and the calculated ones from the empirical formula, Eq. (1), are denoted by
solid circles. These graphs are just the NpNn-plot versions of the A-plot shown in Fig. 2
with exactly the same set of plotted points. We can learn from Fig. 7 that the same kind
of NpNn scheme observed in the excitation energy of 2+1 states is also functioning in the
excitation energies of other natural parity even multipole states. We can also find from
Fig. 7 that the calculated results, using the empirical formula, agree with the measured data
quite well. Moreover, it is interesting to find from Fig. 7 that the width in the central part
of the NpNn-plot is enlarged as the multipole of the state is increased. The origin of this
enlargement in the empirical formula can be traced to the parameter α of the first term in
Eq. (1). The value of α increases monotonically from 34.9 MeV for Ex(2+1) to 2489.0 MeV for Ex(10+1), as can be seen in Table I.
FIG. 6: The NpNn-plot of the first excitation energy of the 2+ states calculated by the empirical formula given by Eq. (1) under the following three different conditions on the exponents Np and Nn: (a) Np and Nn can have any even numbers as long as they satisfy Np + Nn ≤ A. (b) Np and Nn can have any numbers that are allowed for the valence nucleon numbers. (c) Np and Nn can have numbers which are allowed for the actually existing nuclei. (d) All of the previous three cases are shown together.
FIG. 7: The NpNn-plot for the first excitation energies of the natural parity even multipole states (a) 4+, (b) 6+, (c) 8+, and (d) 10+, using both the measured data (open triangles) and the empirical formula (solid circles). These graphs are just the NpNn-plot versions of the A-plot shown in Fig. 2 with exactly the same set of data points.
In summary, we have examined how the recently proposed empirical formula, Eq. (1),
for the excitation energy Ex(2+1) of the first 2+1 state meets the requirement of the NpNn
scheme even though it does not depend on the product NpNn at all. We have demonstrated
explicitly that the structure of the empirical formula itself together with the restriction on
the values of the valence nucleon numbers Np and Nn of the actually existing nuclei make
the characteristic feature of the NpNn scheme appear. Furthermore, our result shows that
the composition of the empirical formula, Eq. (1), is in fact ideal for revealing the NpNn
scheme. Therefore it is better to regard the NpNn scheme as a strong signature suggesting
that this empirical formula is indeed the right one. As a matter of fact, this study about
the NpNn scheme has incidentally exposed the significance of the empirical formula given by
Eq. (1) as a universal expression for the lowest collective excitation energy. A more detailed
account of the empirical formula for the first excitation energy of the natural parity even
multipole states in even-even nuclei will be published elsewhere [19]. However, it has been
well established that the NpNn scheme holds not only for the lowest excitation energies Ex(2+1) but also for the transition strength B(E2) [8]. Unfortunately, our empirical study was intended to express only the excitation energies in terms of the valence nucleon numbers.
The extension of our study to include the B(E2) values in our parametrization is in progress.
Acknowledgments
This work was supported by an Inha University research grant.
[1] I. Hamamoto, Nucl. Phys. 73, 225 (1965).
[2] R. F. Casten, Nucl. Phys. A443, 1 (1985).
[3] For a review of the NpNn scheme, see R. F. Casten and N. V. Zamfir, J. Phys. G22, 1521
(1996).
[4] S. Raman, C. W. Nestor, Jr., and P. Tikkanen, At. Data Nucl. Data Tables 78, 1 (2001).
[5] R. F. Casten, Phys. Rev. Lett. 54, 1991 (1985).
[6] R. F. Casten, D. S. Brenner, and P. E. Haustein, Phys. Rev. Lett. 58, 658 (1987).
[7] R. B. Cakirli and R. F. Casten, Phys. Rev. Lett. 96, 132501 (2006).
[8] R. F. Casten and N. V. Zamfir, Phys. Rev. Lett. 70, 402 (1993).
[9] Y. M. Zhao, R. F. Casten, and A. Arima, Phys. Rev. Lett. 85, 720 (2000).
[10] A. de-Shalit and M. Goldhaber, Phys. Rev. 92, 1211 (1953).
[11] I. Talmi, Rev. Mod. Phys. 34, 704 (1962).
[12] K. Heyde, P. Vanisacker, R. F. Casten, and J. L. Wood, Phys. Lett. 155B, 303 (1985).
[13] R. F. Casten, K. Heyde, and A. Wolf, Phys. Lett. 208B, 33 (1988).
[14] J.-Y. Zhang, R. F. Casten, and D. S. Brenner, Phys. Lett. 227B, 1 (1989).
[15] P. Federmann and S. Pittel, Phys. Lett. 69B, 385 (1977).
[16] J. Dobaczewski, W. Nazarewicz, J. Skalski, and T. Werner, Phys. Rev. Lett. 60, 2254 (1988).
[17] E. Ha and D. Cha, J. Korean Phys. Soc. 50, 1172 (2007).
[18] E. Ha and D. Cha, Phys. Rev. C 75, 057304 (2007).
[19] D. Kim, E. Ha, and D. Cha, arXiv:0705.4620[nucl-th].
[20] R. B. Firestone, V. S. Shirley, C. M. Baglin, S. Y. Frank Chu, and J. Zipkin, Table of Isotopes
(Wiley, New York, 1999).
|
0704.1694 | Locally Decodable Codes From Nice Subsets of Finite Fields and Prime
Factors of Mersenne Numbers | arXiv:0704.1694v1 [cs.CC] 13 Apr 2007
Locally Decodable Codes From Nice Subsets of Finite Fields
and Prime Factors of Mersenne Numbers
Kiran S. Kedlaya
[email protected]
Sergey Yekhanin
[email protected]
Abstract
A k-query Locally Decodable Code (LDC) encodes an n-bit message x as an N -bit codeword C(x), such that
one can probabilistically recover any bit xi of the message by querying only k bits of the codeword C(x), even
after some constant fraction of codeword bits has been corrupted. The major goal of LDC related research is to
establish the optimal trade-off between length and query complexity of such codes.
Recently [34] introduced a novel technique for constructing locally decodable codes and vastly improved the
upper bounds for code length. The technique is based on Mersenne primes. In this paper we extend the work
of [34] and argue that further progress via these methods is tied to progress on an old number theory question
regarding the size of the largest prime factors of Mersenne numbers.
Specifically, we show that every Mersenne number m = 2^t − 1 that has a prime factor p > m^γ yields a family of k(γ)-query locally decodable codes of length exp(n^{1/t}). Conversely, if for some fixed k and all ǫ > 0 one can
use the technique of [34] to obtain a family of k-query LDCs of length exp (nǫ) ; then infinitely many Mersenne
numbers have prime factors larger than known currently.
1 Introduction
Classical error-correcting codes allow one to encode an n-bit string x into an N-bit codeword C(x), in such
a way that x can still be recovered even if C(x) gets corrupted in a number of coordinates. It is well-known
that codewords C(x) of length N = O(n) already suffice to correct errors in up to δN locations of C(x) for
any constant δ < 1/4. The disadvantage of classical error-correction is that one needs to consider all or most
of the (corrupted) codeword to recover anything about x. Now suppose that one is only interested in recovering
one or a few bits of x. In such case more efficient schemes are possible. Such schemes are known as locally
decodable codes (LDCs). Locally decodable codes allow reconstruction of an arbitrary bit xi, from looking only
at k randomly chosen coordinates of C(x), where k can be as small as 2. Locally decodable codes have numerous
applications in complexity theory [15, 29], cryptography [6, 11] and the theory of fault tolerant computation [24].
Below is a slightly informal definition of LDCs:
A (k, δ, ǫ)-locally decodable code encodes n-bit strings to N -bit codewords C(x), such that for every i ∈ [n],
the bit xi can be recovered with probability 1− ǫ, by a randomized decoding procedure that makes only k queries,
even if the codeword C(x) is corrupted in up to δN locations.
One should think of δ > 0 and ǫ < 1/2 as constants. The main parameters of interest in LDCs are the length
N and the query complexity k. Ideally we would like to have both of them as small as possible. The concept
of locally decodable codes was explicitly discussed in various papers in the early 1990s [2, 28, 21]. Katz and
http://arxiv.org/abs/0704.1694v1
Trevisan [15] were the first to provide a formal definition of LDCs. Further work on locally decodable codes
includes [3, 8, 20, 4, 16, 30, 34, 33, 14, 23].
Below is a brief summary of what was known regarding the length of LDCs prior to [34]. The length of optimal
2-query LDCs was settled by Kerenidis and de Wolf in [16] and is exp(n).¹ The best upper bound for the length of 3-query LDCs was exp(n^{1/2}) due to Beimel et al. [3], and the best lower bound is Ω̃(n²) [33]. For general (constant) k the best upper bound was exp(n^{O(log log k/(k log k))}) due to Beimel et al. [4] and the best lower bound is Ω̃(n^{1+1/(⌈k/2⌉−1)}) [33].
The recent work [34] improved the upper bounds to the extent that it changed the common perception of what
may be achievable [12, 11]. [34] introduced a novel technique to construct codes from so-called nice subsets
of finite fields and showed that every Mersenne prime p = 2^t − 1 yields a family of 3-query LDCs of length exp(n^{1/t}). Based on the largest known Mersenne prime [9], this translates to a length of less than exp(n^{10^{-7}}). Combined with the recursive construction from [4], this result yields vast improvements for all values of k > 2. It has often been conjectured that the number of Mersenne primes is infinite. If indeed this conjecture holds, [34] gets three query locally decodable codes of length N = exp(n^{O(1/ log log n)}) for infinitely many n. Finally, assuming that the conjecture of Lenstra, Pomerance and Wagstaff [31, 22, 32] regarding the density of Mersenne primes holds, [34] gets three query locally decodable codes of length N = exp(n^{O(1/ log^{1−ǫ} log n)}) for all n, for every ǫ > 0.
1.1 Our results
In this paper we address two natural questions left open by [34]:
1. Are Mersenne primes necessary for the constructions of [34]?
2. Has the technique of [34] been pushed to its limits, or can one construct better codes through a more clever
choice of nice subsets of finite fields?
We extend the work of [34] and answer both of the questions above. In what follows let P (m) denote the
largest prime factor of m. We show that one does not necessarily need to use Mersenne primes. It suffices to have
Mersenne numbers with polynomially large prime factors. Specifically, every Mersenne number m = 2^t − 1 such that P(m) ≥ m^γ yields a family of k(γ)-query locally decodable codes of length exp(n^{1/t}). A partial converse also holds. Namely, if for some fixed k ≥ 3 and all ǫ > 0 one can use the technique of [34] to (unconditionally) obtain a family of k-query LDCs of length exp(n^ǫ); then for infinitely many t we have

P(2^t − 1) ≥ (t/2)^{1+1/(k−2)}. (1)
The bound (1) may seem quite weak in light of the widely accepted conjecture saying that the number of
Mersenne primes is infinite. However (for any k ≥ 3) this bound is substantially stronger than what is currently
known unconditionally. Lower bounds for P (2t − 1) have received a considerable amount of attention in the
number theory literature [25, 26, 10, 27, 19, 18]. The strongest result to date is due to Stewart [27]. It says that
for all integers t ignoring a set of asymptotic density zero, and for all functions ǫ(t) > 0 where ǫ(t) tends to zero
monotonically and arbitrarily slowly:
P(2^t − 1) > ǫ(t) t (log t)^2 / log log t. (2)
¹Throughout the paper we use the standard notation exp(x) = e^{O(x)}.
There are no better bounds known to hold for infinitely many values of t, unless one is willing to accept some
number theoretic conjectures [19, 18]. We hope that our work will further stimulate the interest in proving lower
bounds for P (2t − 1) in the number theory community.
In summary, we show that one may be able to improve the unconditional bounds of [34] (say, by discovering a
new Mersenne number with a very large prime factor) using the same technique. However any attempts to reach
the exp(n^ǫ) length for some fixed query complexity and all ǫ > 0 require either progress on an old number theory
problem or some radically new ideas.
In this paper we deal only with binary codes for the sake of clarity of presentation. We remark however that
our results as well as the results of [34] can be easily generalized to larger alphabets. Such generalization will be
discussed in detail in [35].
1.2 Outline
In section 3 we introduce the key concepts of [34], namely that of combinatorial and algebraic niceness of
subsets of finite fields. We also briefly review the construction of locally decodable codes from nice subsets. In
section 4 we show how Mersenne numbers with large prime factors yield nice subsets of prime fields. In section 5
we prove a partial converse. Namely, we show that every finite field F_q containing a sufficiently nice subset is an extension of a prime field F_p, where p is a large prime factor of a large Mersenne number. Our main results are
summarized in sections 4.3 and 5.4.
2 Notation
We use the following standard mathematical notation:
• [s] = {1, . . . , s};
• Zn denotes integers modulo n;
• Fq is a finite field of q elements;
• dH(x, y) denotes the Hamming distance between binary vectors x and y;
• (u, v) stands for the dot product of vectors u and v;
• For a linear space L ⊆ F_2^m, L^⊥ denotes the dual space. That is, L^⊥ = {u ∈ F_2^m | ∀v ∈ L, (u, v) = 0};
• For an odd prime p, ord_2(p) denotes the smallest integer t such that p | 2^t − 1.
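The order ord_2(p) is easy to compute directly; the following small Python helper (ours, not from the paper) simply multiplies by 2 modulo p until the product returns to 1.

```python
def ord2(p):
    """Multiplicative order of 2 modulo an odd prime p:
    the smallest t >= 1 such that p divides 2**t - 1."""
    assert p > 1 and p % 2 == 1
    t, power = 1, 2 % p
    while power != 1:
        power = (2 * power) % p
        t += 1
    return t
```

For example, ord2(23) = ord2(89) = 11, matching the factorization 2^11 − 1 = 2047 = 23 · 89; the value ord2(13264529) = 47 is used in section 4.1.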
3 Nice subsets of finite fields and locally decodable codes
In this section we introduce the key technical concepts of [34], namely that of combinatorial and algebraic
niceness of subsets of finite fields. We briefly review the construction of locally decodable codes from nice
subsets. Our review is concise although self-contained. We refer the reader interested in a more detailed and
intuitive treatment of the construction to the original paper [34]. We start by formally defining locally decodable
codes.
Definition 1 A binary code C : {0, 1}^n → {0, 1}^N is said to be (k, δ, ǫ)-locally decodable if there exists a randomized decoding algorithm A such that
1. For all x ∈ {0, 1}^n, i ∈ [n] and y ∈ {0, 1}^N such that dH(C(x), y) ≤ δN : Pr[A^y(i) = x_i] ≥ 1 − ǫ, where
the probability is taken over the random coin tosses of the algorithm A.
2. A makes at most k queries to y.
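As a classical warm-up example of Definition 1 (the Hadamard code, standard material rather than this paper's construction), one can encode x ∈ {0, 1}^n by all 2^n parities (u, x); since C(x)_u ⊕ C(x)_{u⊕e_i} = x_i for every u, the bit x_i can be recovered from two uniformly distributed queries:

```python
import random

def hadamard_encode(x):
    """Codeword of length 2**n, indexed by u in {0,1}**n packed as
    integers: C(x)_u = <u, x> mod 2."""
    n = len(x)
    xi = sum(bit << j for j, bit in enumerate(x))
    return [bin(u & xi).count("1") % 2 for u in range(1 << n)]

def hadamard_decode(y, i, n, rng=random):
    """2-query local decoder for x_i: query y at u and at u XOR e_i
    for a uniformly random u."""
    u = rng.randrange(1 << n)
    return y[u] ^ y[u ^ (1 << i)]
```

If y disagrees with C(x) on at most a δ fraction of positions, each of the two (individually uniform) queries hits a corrupted position with probability at most δ, so the decoder errs with probability at most 2δ; this is the (k, δ, ǫ) = (2, δ, 2δ) pattern of the definition.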
We now introduce the concepts of combinatorial and algebraic niceness of subsets of finite fields. Our defini-
tions are syntactically slightly different from the original definitions in [34]. We prefer these formulations since
they are more appropriate for the purposes of the current paper. In what follows let F∗q denote the multiplicative
group of Fq.
Definition 2 A set S ⊆ F_q^* is called t combinatorially nice if for some constant c > 0 and every positive integer m there exist two n = ⌊cm^t⌋-sized collections of vectors {u_1, . . . , u_n} and {v_1, . . . , v_n} in F_q^m, such that
• For all i ∈ [n], (u_i, v_i) = 0;
• For all i, j ∈ [n] such that i ≠ j, (u_j, v_i) ∈ S.
Definition 3 A set S ⊆ F_q^* is called k algebraically nice if k is odd and there exists an odd k′ ≤ k and two sets S_0, S_1 ⊆ F_q such that
• S_0 is not empty;
• |S_1| = k′;
• For all α ∈ F_q and β ∈ S : |S_0 ∩ (α + βS_1)| ≡ 0 (mod 2).
The following lemma shows that for an algebraically nice set S, the set S0 can always be chosen to be large. It
is a straightforward generalization of [34, lemma 15].
Lemma 4 Let S ⊆ F∗q be a k algebraically nice set. Let S0, S1 ⊆ Fq be sets from the definition of algebraic
niceness of S. One can always redefine the set S0 to satisfy |S0| ≥ ⌈q/2⌉.
Proof: Let L be the linear subspace of F_2^q spanned by the incidence vectors of the sets α + βS_1, for α ∈ F_q and β ∈ S. Observe that L is invariant under the actions of a 1-transitive permutation group (permuting the coordinates in accordance with addition in F_q). This implies that the space L^⊥ is also invariant under the actions of the same group. Note that L^⊥ has positive dimension since it contains the incidence vector of the set S_0. The last two observations imply that L^⊥ has full support, i.e., for every i ∈ [q] there exists a vector v ∈ L^⊥ such that v_i ≠ 0. It is easy to verify that any linear subspace of F_2^q that has full support contains a vector of Hamming weight at least ⌈q/2⌉. Let v ∈ L^⊥ be such a vector. Redefining the set S_0 to be the set of nonzero coordinates of v we conclude the proof.
We now proceed to the core proposition of [34] that shows how sets exhibiting both combinatorial and algebraic
niceness yield locally decodable codes.
Proposition 5 Suppose S ⊆ F_q^* is t combinatorially nice and k algebraically nice; then for every positive integer n there exists a code of length exp(n^{1/t}) that is (k, δ, 2kδ) locally decodable for all δ > 0.
Proof: Our proof comes in three steps. We specify encoding and local decoding procedures for our codes and
then argue the lower bound for the probability of correct decoding. We use the notation from definitions 2 and 3.
Encoding: We assume that our message has length n = ⌊cm^t⌋ for some value of m. (Otherwise we pad the message with zeros. It is easy to see that such padding does not affect the asymptotic length of the code.) Our code will be linear. Therefore it suffices to specify the encoding of unit vectors e_1, . . . , e_n, where e_j has length n and a unique non-zero coordinate j. We define the encoding of e_j to be a q^m-long vector, whose coordinates are labelled by elements of F_q^m. For all w ∈ F_q^m we set:

Enc(e_j)_w = { 1, if (u_j, w) ∈ S_0; 0, otherwise. (3)

It is straightforward to verify that we defined a code encoding n bits to exp(n^{1/t}) bits.
Local decoding: Given a (possibly corrupted) codeword y and an index i ∈ [n], the decoding algorithm A picks w ∈ F_q^m such that (u_i, w) ∈ S_0 uniformly at random, reads k′ ≤ k coordinates of y, and outputs the sum:

∑_{λ∈S_1} y_{w+λv_i}. (4)
Probability of correct decoding: First we argue that decoding is always correct if A picks w ∈ F_q^m such that all bits of y in locations {w + λv_i}_{λ∈S_1} are not corrupted. We need to show that for all i ∈ [n], x ∈ {0, 1}^n and w ∈ F_q^m, such that (u_i, w) ∈ S_0:

∑_{λ∈S_1} ∑_{j∈[n]} x_j Enc(e_j)_{w+λv_i} = x_i. (5)

Note that

∑_{λ∈S_1} ∑_{j∈[n]} x_j Enc(e_j)_{w+λv_i} = ∑_{j∈[n]} x_j ∑_{λ∈S_1} Enc(e_j)_{w+λv_i} = ∑_{j∈[n]} x_j ∑_{λ∈S_1} I[(u_j, w + λv_i) ∈ S_0], (6)

where I[γ ∈ S_0] = 1 if γ ∈ S_0 and zero otherwise. Now note that

∑_{λ∈S_1} I[(u_j, w + λv_i) ∈ S_0] = ∑_{λ∈S_1} I[(u_j, w) + λ(u_j, v_i) ∈ S_0] = { 1, if i = j; 0, otherwise. (7)

The last identity in (7) for i = j follows from: (u_i, v_i) = 0, (u_i, w) ∈ S_0 and k′ = |S_1| is odd. The last identity for i ≠ j follows from (u_j, v_i) ∈ S and the algebraic niceness of S. Combining identities (6) and (7) we get (5).
Now assume that up to a δ fraction of the bits of y are corrupted. Let T_i denote the set of coordinates whose labels belong to {w ∈ F_q^m | (u_i, w) ∈ S_0}. Recall that by lemma 4, |T_i| ≥ q^m/2. Thus at most a 2δ fraction of the coordinates in T_i contain corrupted bits. Let Q_i = { {w + λv_i}_{λ∈S_1} | w : (u_i, w) ∈ S_0 } be the family of k′-tuples of coordinates that may be queried by A. (u_i, v_i) = 0 implies that the elements of Q_i uniformly cover the set T_i. Combining the last two observations we conclude that with probability at least 1 − 2kδ, A picks an uncorrupted k′-tuple and outputs the correct value of x_i.
All locally decodable codes constructed in this paper are obtained by applying proposition 5 to certain nice
sets. Thus all our codes have the same dependence of ǫ (the probability of the decoding error) on δ (the fraction
of corrupted bits). In what follows we often ignore these parameters and consider only the length and query
complexity of codes.
4 Mersenne numbers with large prime factors yield nice subsets of prime fields
In what follows let 〈2〉 ⊆ F∗p denote the multiplicative subgroup of F∗p generated by 2. In [34] it is shown
that for every Mersenne prime p = 2t − 1 the set 〈2〉 ⊆ F∗p is simultaneously 3 algebraically nice and ord2(p)
combinatorially nice. In this section we prove the same conclusion for a substantially broader class of primes.
Lemma 6 Suppose p is an odd prime; then 〈2〉 ⊆ F∗p is ord2(p) combinatorially nice.
Proof: Let t = ord_2(p). Clearly, t divides p − 1. We need to specify a constant c > 0 such that for every positive integer m there exist two n = ⌊cm^t⌋-sized collections of m-long vectors over F_p satisfying:
• For all i ∈ [n], (ui, vi) = 0;
• For all i, j ∈ [n] such that i 6= j, (uj , vi) ∈ 〈2〉.
First assume that m has the shape m = C(m′ − 1 + (p − 1)/t, (p − 1)/t), a binomial coefficient, for some integer m′ ≥ p − 1. In this case [34, lemma 13] gives us a pair of collections of some n vectors each with the right properties. Observe that n ≥ cm^t for a constant c that depends only on p and t. Now assume m does not have the right shape, and let m_1 be the largest integer smaller than m that does have it. In order to get vectors of length m we use vectors of length m_1 coming from [34, lemma 13] padded with zeros. It is not hard to verify that such a construction still gives us n ≥ cm^t large families of vectors for a suitably chosen constant c.
We use the standard notation F̄ to denote the algebraic closure of the field F. Also let C_p ⊆ F̄_2 denote the multiplicative subgroup of p-th roots of unity in F̄_2. The next lemma generalizes [34, lemma 14].
Lemma 7 Let p be a prime and k be odd. Suppose there exist ζ1, . . . , ζk ∈ Cp such that
ζ1 + . . . + ζk = 0; (8)
then 〈2〉 ⊆ F∗p is k algebraically nice.
Proof: In what follows we define the set S_1 ⊆ F_p and prove the existence of a set S_0 such that together S_0 and S_1 yield the k algebraic niceness of 〈2〉. Identity (8) implies that there exists an odd integer k′ ≤ k and k′ distinct p-th roots of unity ζ′_1, . . . , ζ′_{k′} ∈ C_p such that

ζ′_1 + . . . + ζ′_{k′} = 0. (9)
Let t = ord_2(p). Observe that C_p ⊆ F_{2^t}. Let g be a generator of C_p. Identity (9) yields g^{γ_1} + . . . + g^{γ_{k′}} = 0, for some distinct values of {γ_i}_{i∈[k′]}. Set S_1 = {γ_1, . . . , γ_{k′}}.
Consider a natural one to one correspondence between subsets S′ of F_p and polynomials φ_{S′}(x) in the ring F_2[x]/(x^p − 1) : φ_{S′}(x) = ∑_{s∈S′} x^s. It is easy to see that for all sets S′ ⊆ F_p and all α, β ∈ F_p such that β ≠ 0 :

φ_{α+βS′}(x) = x^α φ_{S′}(x^β).

Let α be a variable ranging over F_p and β be a variable ranging over 〈2〉. We are going to argue the existence of a set S_0 that has even intersections with all sets of the form α + βS_1, by showing that all polynomials φ_{α+βS_1} belong to a certain linear space L ⊆ F_2[x]/(x^p − 1) of dimension less than p. In this case any nonempty set T ⊆ F_p such that φ_T ∈ L^⊥ can be used as the set S_0. Let τ(x) = gcd(x^p − 1, φ_{S_1}(x)). Note that τ(x) ≠ 1 since g is a common root of x^p − 1 and φ_{S_1}(x). Let L be the space of polynomials in F_2[x]/(x^p − 1) that are multiples of τ(x). Clearly, dim L = p − deg τ. Fix some α ∈ F_p and β ∈ 〈2〉. Let us prove that φ_{α+βS_1}(x) is in L :

φ_{α+βS_1}(x) = x^α φ_{S_1}(x^β) = x^α (φ_{S_1}(x))^β.

The last identity above follows from the fact that for any f ∈ F_2[x] and any integer i : f(x^{2^i}) = (f(x))^{2^i}.
In what follows we present sufficient conditions for the existence of k-tuples of p-th roots of unity in F̄_2 that sum to zero. We treat the k = 3 case separately since in that case we can use a specialized argument to derive a more explicit conclusion.
4.1 A sufficient condition for the existence of three p-th roots of unity summing to zero
Lemma 8 Let p be an odd prime. Suppose ord_2(p) < (4/3) log_2 p; then there exist three p-th roots of unity in F̄_2 that sum to zero.
Proof: We start with a brief review of some basic concepts of projective algebraic geometry. Let F be a field, and f ∈ F[x, y, z] be a homogeneous polynomial. A triple (x_0, y_0, z_0) ∈ F^3 is called a zero of f if f(x_0, y_0, z_0) = 0. A zero is called nontrivial if it is different from the origin. An equation f = 0 defines a projective plane curve χ_f. Nontrivial zeros of f considered up to multiplication by a scalar are called F-rational points of χ_f. If F is a finite field it makes sense to talk about the number of F-rational points on a curve.
Let t = ord_2(p). Note that C_p ⊆ F_{2^t}. Consider a projective plane Fermat curve χ defined by

x^{(2^t−1)/p} + y^{(2^t−1)/p} + z^{(2^t−1)/p} = 0. (10)

Let us call a point a on χ trivial if one of the coordinates of a is zero. Cyclicity of F_{2^t}^* implies that χ contains exactly 3(2^t − 1)/p trivial F_{2^t}-rational points. Note that every nontrivial point of χ yields a triple of elements of C_p that sum to zero. The classical Weil bound [17, p. 330] provides an estimate

|N_q − (q + 1)| ≤ (d − 1)(d − 2)√q (11)

for the number N_q of F_q-rational points on an arbitrary smooth projective plane curve of degree d. (11) implies that in case

2^t + 1 > ((2^t − 1)/p − 1)((2^t − 1)/p − 2) 2^{t/2} + 3(2^t − 1)/p (12)

there exists a nontrivial point on the curve (10). A short computation shows that (12) follows from the two inequalities

2^t > 2^{2t+t/2}/p^2 and 2^{t/2+1} > 3. (13)

Now note that the first inequality above follows from t < (4/3) log_2 p and the second follows from t > 1.
Note that the constant 4/3 in lemma 8 cannot be improved to 2: there are no three elements of C_{13264529} that sum to zero, even though ord_2(13264529) = 47 < 2 log_2 13264529 ≈ 47.3.
4.2 A sufficient condition for the existence of k p-th roots of unity summing to zero
Our argument in this section comes in three steps. First we briefly review the notion of (additive) Fourier
coefficients of subsets of F2t . Next, we invoke a folklore argument to show that subsets of F2t with appropriately
small nontrivial Fourier coefficients contain k-tuples of elements that sum to zero. Finally, we use a recent result of
Bourgain and Chang [5] (generalizing the classical estimate for Gauss sums) to argue that (under certain constraints
on p) all nontrivial Fourier coefficients of Cp are small.
For x ∈ F_{2^t} let Tr(x) = x + x^2 + . . . + x^{2^{t−1}} denote the trace of x. It is not hard to verify that for all x, Tr(x) ∈ F_2. Characters of F_{2^t} are homomorphisms from the additive group of F_{2^t} into the multiplicative group {±1}. There exist 2^t characters. We denote characters by χ_a, where a ranges in F_{2^t}, and set χ_a(x) = (−1)^{Tr(ax)}.
Let C(x) denote the incidence function of a set C ⊆ F_{2^t}. For arbitrary a ∈ F_{2^t} the Fourier coefficient χ_a(C) is defined by χ_a(C) = ∑_x χ_a(x)C(x), where the sum is over all x ∈ F_{2^t}. The Fourier coefficient χ_0(C) = |C| is called trivial, and the other Fourier coefficients are called nontrivial. In what follows ∑_χ stands for summation over all 2^t characters of F_{2^t}. We need the following two standard properties of characters and Fourier coefficients:

∑_χ χ(x) = { 2^t, if x = 0; 0, otherwise. (14)

∑_χ χ^2(C) = 2^t|C|. (15)
The following lemma is folklore.
Lemma 9 Let C ⊆ F_{2^t} and k ≥ 3 be a positive integer. Let F be the largest absolute value of a nontrivial Fourier coefficient of C. Suppose

F < (|C|^{k−1}/2^t)^{1/(k−2)}; (16)

then there exist k elements of C that sum to zero.
Proof: Let M(C) = #{(ζ_1, . . . , ζ_k) ∈ C^k | ζ_1 + . . . + ζ_k = 0}. (14) yields

M(C) = (1/2^t) ∑_{x_1,...,x_k∈F_{2^t}} C(x_1) . . . C(x_k) ∑_χ χ(x_1 + . . . + x_k). (17)

Note that χ(x_1 + . . . + x_k) = χ(x_1) . . . χ(x_k). Changing the order of summation in (17) we get

M(C) = (1/2^t) ∑_χ ∑_{x_1,...,x_k∈F_{2^t}} C(x_1)χ(x_1) . . . C(x_k)χ(x_k) = (1/2^t) ∑_χ χ^k(C). (18)

Note that

∑_χ χ^k(C) = |C|^k + ∑_{χ≠χ_0} χ^k(C) ≥ |C|^k − F^{k−2} ∑_χ χ^2(C) = |C|^k − F^{k−2} 2^t|C|, (19)

where the last identity follows from (15). Combining (18) and (19) we conclude that (16) implies M(C) > 0.
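The identity (18) is easy to verify numerically for the additive group F_2^t, whose characters are χ_a(x) = (−1)^{popcount(a & x)} with a, x ranging over t-bit strings. The following brute-force check (a sanity check of the folklore argument, not code from the paper) compares M(C) with the character-sum expression:

```python
from functools import reduce
from itertools import product

def chi(a, x):
    """Additive character of F_2^t: (-1)**popcount(a AND x)."""
    return -1 if bin(a & x).count("1") % 2 else 1

def brute_count(C, k=3):
    """M(C): ordered k-tuples of elements of C summing (XOR) to zero."""
    return sum(1 for tup in product(C, repeat=k)
               if reduce(lambda u, v: u ^ v, tup) == 0)

def fourier_count(C, t, k=3):
    """Right-hand side of (18): (1/2^t) * sum over all characters of chi(C)**k."""
    total = sum(sum(chi(a, x) for x in C) ** k for a in range(1 << t))
    assert total % (1 << t) == 0  # M(C) is an integer
    return total // (1 << t)
```

For C = {1, 2, 3, 4, 5} ⊆ F_2^4 and k = 3 both counts equal 12: the ordered triples obtained by permuting {1, 2, 3} and {1, 4, 5}.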
The following lemma is a special case of [5, theorem 1].
Lemma 10 Assume that n | 2^t − 1 and satisfies the condition

gcd(n, (2^t − 1)/(2^{t′} − 1)) < 2^{t(1−ǫ)−t′}, for all 1 ≤ t′ < t, t′ | t,

where ǫ > 0 is arbitrary and fixed. Then for all a ∈ F_{2^t}^*

|∑_{x∈F_{2^t}} (−1)^{Tr(ax^n)}| < c_1 2^{t(1−δ)}, (20)

where δ = δ(ǫ) > 0 and c_1 = c_1(ǫ) are absolute constants.
Below is the main result of this section. Recall that C_p denotes the set of p-th roots of unity in F̄_2.
Lemma 11 For every c > 0 there exists an odd integer k = k(c) such that the following implication holds. If p is
an odd prime and ord2(p) < c log2 p then some k elements of Cp sum to zero.
Proof: Note that if there exist k′ elements of a set C in the algebraic closure of F_2 that sum to zero, where k′ is odd, then there exist k elements of C that sum to zero for every odd k ≥ k′. Also note that the sum of all p-th roots of unity is zero. Therefore given c it suffices to prove the existence of an odd k = k(c) that works for all sufficiently large p. Let t = ord2(p). Observe that p > 2^{t/c}. Assume p is sufficiently large so that t > 2c. Next we show that the precondition of lemma 10 holds for n = (2^t − 1)/p and ǫ = 1/(2c). Let t′ | t and 1 ≤ t′ < t. Clearly gcd(2^{t′} − 1, p) = 1. Therefore

gcd( (2^t − 1)/p, (2^t − 1)/(2^{t′} − 1) ) ≤ (2^t − 1) / (p(2^{t′} − 1)) < 2^{t(1−1/c)} / (2^{t′} − 1), (21)

where the inequality follows from p > 2^{t/c}. Clearly, t > 2c yields 2^{t/(2c)}/2 > 1. Multiplying the right hand side of (21) by 2^{t/(2c)}/2 and using 2(2^{t′} − 1) > 2^{t′} we get

gcd( (2^t − 1)/p, (2^t − 1)/(2^{t′} − 1) ) < 2^{t(1−1/(2c))−t′}. (22)

Combining (22) with lemma 10 we conclude that there exist δ > 0 and c_1 such that for all a ∈ F*_{2^t}

| ∑_{x ∈ F_{2^t}} (−1)^{Tr(a x^{(2^t−1)/p})} | < c_1 2^{t(1−δ)}. (23)

Observe that x^{(2^t−1)/p} takes every value in C_p exactly (2^t − 1)/p times when x ranges over F*_{2^t}. Thus (23) implies

(2^t − 1)(F/p) < c_1 2^{t(1−δ)}, (24)

where F denotes the largest nontrivial Fourier coefficient of C_p. (24) yields F/p < (2c_1) 2^{−δt}. Pick k ≥ 3 to be the smallest odd integer such that (1 − 1/c)/(k − 2) < δ. We now have

F < p · 2^{−(1−1/c)t/(k−2)} (25)

for all sufficiently large values of p. Combining p > 2^{t/c} with (25) we get

p > F (2^t / p)^{1/(k−2)},

and the application of lemma 9 concludes the proof.
4.3 Summary
In this section we summarize our positive results and show that one does not necessarily need to use Mersenne
primes to construct locally decodable codes via the methods of [34]. It suffices to have Mersenne numbers with
polynomially large prime factors. Recall that P(m) denotes the largest prime factor of an integer m. Our first theorem gets 3-query LDCs from Mersenne numbers m with prime factors larger than m^{3/4}.
Theorem 12 Suppose P(2^t − 1) > 2^{0.75t}; then for every message length n there exists a three query locally decodable code of length exp(n^{1/t}).
Proof: Let P(2^t − 1) = p. Observe that p | 2^t − 1 and p > 2^{0.75t} yield ord2(p) < (4/3) log2 p. Combining lemmas 8, 7 and 6 with proposition 5 we obtain the statement of the theorem.
As an example application of theorem 12 one can observe that P(2^23 − 1) = 178481 > 2^{0.75·23} ≈ 155872 yields a family of three query locally decodable codes of length exp(n^{1/23}). Theorem 12 immediately yields:
Theorem 13 Suppose for infinitely many t we have P(2^t − 1) > 2^{0.75t}; then for every ǫ > 0 there exists a family of three query locally decodable codes of length exp(n^ǫ).
The next theorem gets constant query LDCs from Mersenne numbers m with prime factors larger than m^γ for every value of γ.
Theorem 14 For every γ > 0 there exists an odd integer k = k(γ) such that the following implication holds. Suppose P(2^t − 1) > 2^{γt}; then for every message length n there exists a k query locally decodable code of length exp(n^{1/t}).
Proof: Let P(2^t − 1) = p. Observe that p | 2^t − 1 and p > 2^{γt} yield ord2(p) < (1/γ) log2 p. Combining lemmas 11, 7 and 6 with proposition 5 we obtain the statement of the theorem.
As an immediate corollary we get:
Theorem 15 Suppose for some γ > 0 and infinitely many t we have P(2^t − 1) > 2^{γt}; then there is a fixed k such that for every ǫ > 0 there exists a family of k query locally decodable codes of length exp(n^ǫ).
5 Nice subsets of finite fields yield Mersenne numbers with large prime factors
Definition 16 We say that a sequence {S_i ⊆ F*_{q_i}}_{i≥1} of subsets of finite fields is k-nice if every S_i is k algebraically nice and t(i) combinatorially nice, for some integer valued monotonically increasing function t.
The core proposition 5 asserts that a subset S ⊆ F*_q that is k algebraically nice and t combinatorially nice yields a family of k-query locally decodable codes of length exp(n^{1/t}). Clearly, to get k-query LDCs of length exp(n^ǫ) for some fixed k and every ǫ > 0 via this proposition, one needs to exhibit a k-nice sequence. In this section we show how the existence of a k-nice sequence implies that infinitely many Mersenne numbers have large prime factors. Our argument proceeds in two steps. First we show that a k-nice sequence yields an infinite sequence of primes {p_i}_{i≥1}, where every C_{p_i} contains a k-tuple of elements summing to zero. Next we show that C_p contains a short additive dependence only if p is a large factor of a Mersenne number.
5.1 A nice sequence yields infinitely many primes p with short dependencies between p-th roots of unity
We start with some notation. Consider a finite field F_q = F_{p^l}, where p is prime. Fix a basis e_1, ..., e_l of F_q over F_p. In what follows we often write (α_1, ..., α_l) ∈ F_p^l to denote α = ∑_{i=1}^l α_i e_i ∈ F_q. Let R denote the ring F_2[x_1, ..., x_l]/(x_1^p − 1, ..., x_l^p − 1). Consider a natural one to one correspondence between subsets S_1 of F_q and polynomials φ_{S_1}(x_1, ..., x_l) ∈ R:

φ_{S_1}(x_1, ..., x_l) = ∑_{(α_1,...,α_l) ∈ S_1} x_1^{α_1} ... x_l^{α_l}.

It is easy to see that for all sets S_1 ⊆ F_q and all α, β ∈ F_q:

φ_{(α_1,...,α_l)+βS_1}(x_1, ..., x_l) = x_1^{α_1} ... x_l^{α_l} φ_{βS_1}(x_1, ..., x_l). (26)
Let Γ be a family of subsets of F_q. It is straightforward to verify that a set S_0 ⊆ F_q has even intersections with every element of Γ if and only if φ_{S_0} belongs to L^⊥, where L is the linear subspace of R spanned by {φ_{S_1}}_{S_1∈Γ}. Combining the last observation with formula (26) we conclude that a set S ⊆ F*_q is k algebraically nice if and only if there exists a set S_1 ⊆ F_q of odd size k′ ≤ k such that the ideal generated by the polynomials {φ_{βS_1}}_{β∈S} is a proper ideal of R. Note that polynomials f_1, ..., f_h ∈ R generate a proper ideal if and only if the polynomials {f_1, ..., f_h, x_1^p − 1, ..., x_l^p − 1} generate a proper ideal in F_2[x_1, ..., x_l]. Also note that a family of polynomials generates a proper ideal in F_2[x_1, ..., x_l] if and only if it generates a proper ideal in F̄_2[x_1, ..., x_l], where F̄_2 denotes the algebraic closure of F_2. Now an application of Hilbert's Nullstellensatz [7, p. 168] implies that a set S ⊆ F*_q is k algebraically nice if and only if there is a set S_1 ⊆ F_q of odd size k′ ≤ k such that the polynomials {φ_{βS_1}}_{β∈S} and {x_i^p − 1}_{1≤i≤l} have a common root over F̄_2.
Lemma 17 Let F_q = F_{p^l}, where p is prime. Suppose F_q contains a nonempty k algebraically nice subset; then there exist ζ_1, ..., ζ_k ∈ C_p such that ζ_1 + ... + ζ_k = 0.

Proof: Assume S ⊆ F*_q is nonempty and k algebraically nice. The discussion above implies that there exists S_1 ⊆ F_q of odd size k′ ≤ k such that all polynomials {φ_{βS_1}}_{β∈S} vanish at some (ζ_1, ..., ζ_l) ∈ C_p^l. Fix an arbitrary β_0 ∈ S, and note that C_p is closed under multiplication. Thus,

φ_{β_0 S_1}(ζ_1, ..., ζ_l) = 0 (27)

yields k′ p-th roots of unity that add up to zero. It is readily seen that one can extend (27) (by adding an appropriate number of pairs of identical roots) to obtain k p-th roots of unity that add up to zero for any odd k ≥ k′.
Note that lemma 17 does not suffice to prove that a k-nice sequence {S_i ⊆ F*_{q_i}}_{i≥1} yields infinitely many primes p with short (nontrivial) additive dependencies in C_p. We need to argue that the set {char F_{q_i}}_{i≥1} cannot be finite. To proceed, we need some more notation. Recall that q = p^l and p is prime. For x ∈ F_q let Tr(x) = x + x^p + ... + x^{p^{l−1}} ∈ F_p denote the (absolute) trace of x. For γ ∈ F_q, c ∈ F*_p we call the set π_{γ,c} = {x ∈ F_q | Tr(γx) = c} a proper affine hyperplane of F_q.
Lemma 18 Let F_q = F_{p^l}, where p is prime. Suppose S ⊆ F*_q is k algebraically nice; then there exist h ≤ p^k proper affine hyperplanes {π_{γ_i,c_i}}_{1≤i≤h} of F_q such that S ⊆ ⋃_{i=1}^h π_{γ_i,c_i}.
Proof: The discussion preceding lemma 17 implies that there exists a set S_1 = {σ_1, ..., σ_{k′}} ⊆ F_q of odd size k′ ≤ k such that all polynomials {φ_{βS_1}}_{β∈S} vanish at some (ζ_1, ..., ζ_l) ∈ C_p^l. Let ζ be a generator of C_p. For every 1 ≤ i ≤ l pick ω_i ∈ Z_p such that ζ_i = ζ^{ω_i}. For every β ∈ S, φ_{βS_1}(ζ_1, ..., ζ_l) = 0 yields

∑_{μ=(μ_1,...,μ_l) ∈ βS_1} ζ^{∑_{i=1}^l μ_i ω_i} = 0. (28)

Observe that for fixed values {ω_i}_{1≤i≤l} ∈ Z_p the map D(μ) = ∑_{i=1}^l μ_i ω_i is a linear map from F_q to F_p. It is not hard to prove that every such map can be expressed as D(μ) = Tr(δμ) for an appropriate choice of δ ∈ F_q. Therefore we can rewrite (28) as

∑_{μ ∈ βS_1} ζ^{Tr(δμ)} = ∑_{σ ∈ S_1} ζ^{Tr(δβσ)} = 0. (29)

Let

W = { (w_1, ..., w_{k′}) ∈ Z_p^{k′} | ζ^{w_1} + ... + ζ^{w_{k′}} = 0 }

denote the set of exponents of k′-dependencies between powers of ζ. Clearly, |W| ≤ p^k. Identity (29) implies that every β ∈ S satisfies

Tr((δσ_1)β) = w_1, ..., Tr((δσ_{k′})β) = w_{k′}; (30)

for an appropriate choice of (w_1, ..., w_{k′}) ∈ W. Note that the all-zeros vector does not lie in W since k′ is odd. Therefore at least one of the identities in (30) has a non-zero right-hand side, and defines a proper affine hyperplane of F_q. Collecting one such hyperplane for every element of W we get a family of |W| proper affine hyperplanes containing every element of S.
Lemma 18 gives us some insight into the structure of algebraically nice subsets of F_q. Our next goal is to develop an insight into the structure of combinatorially nice subsets. We start by reviewing some relations between tensor and dot products of vectors. For vectors u ∈ F_q^m and v ∈ F_q^n let u ⊗ v ∈ F_q^{mn} denote the tensor product of u and v. Coordinates of u ⊗ v are labelled by all possible elements of [m] × [n] and (u ⊗ v)_{i,j} = u_i v_j. Also, let u^{⊗l} denote the l-th tensor power of u and u ◦ v denote the concatenation of u and v. The following identity is standard. For any u, x ∈ F_q^m and v, y ∈ F_q^n:

(u ⊗ v, x ⊗ y) = ∑_{i∈[m], j∈[n]} u_i v_j x_i y_j = ( ∑_{i∈[m]} u_i x_i )( ∑_{j∈[n]} v_j y_j ) = (u, x)(v, y). (31)
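Identity (31) can be confirmed with a few lines of code. The snippet below is our own sketch: it works with coordinate lists over Z_q and checks (u ⊗ v, x ⊗ y) = (u, x)(v, y) for one sample choice of vectors; the modulus q = 7 and the vectors are arbitrary choices of ours.

```python
def dot(a, b, q):
    # dot product of two coordinate lists over Z_q
    return sum(ai * bi for ai, bi in zip(a, b)) % q

def tensor(a, b, q):
    # tensor product; coordinates indexed by [m] x [n] in row-major order
    return [ai * bj % q for ai in a for bj in b]

q = 7
u, x = [1, 2, 3], [4, 5, 6]
v, y = [2, 0, 5], [3, 1, 1]
lhs = dot(tensor(u, v, q), tensor(x, y, q), q)
rhs = dot(u, x, q) * dot(v, y, q) % q
```

Here (u, x) = 32 and (v, y) = 11, so both sides equal 352 mod 7 = 2.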
In what follows we need a generalization of identity (31). Let f(x_1, ..., x_h) = ∑_i c_i x_1^{α_{i1}} ... x_h^{α_{ih}} be a polynomial in F_q[x_1, ..., x_h]. Given f we define f̄ ∈ F_q[x_1, ..., x_h] by f̄ = ∑_i x_1^{α_{i1}} ... x_h^{α_{ih}}, i.e., we simply set all nonzero coefficients of f to 1. For vectors u_1, ..., u_h in F_q^m define

f(u_1, ..., u_h) = ◦_i c_i u_1^{⊗α_{i1}} ⊗ ... ⊗ u_h^{⊗α_{ih}}. (32)

Note that to obtain f(u_1, ..., u_h) we replaced products in f by tensor products and addition by concatenation. Clearly, f(u_1, ..., u_h) is a vector whose length may be larger than m.
Claim 19 For every f ∈ F_q[x_1, ..., x_h] and u_1, ..., u_h, v_1, ..., v_h ∈ F_q^m:

( f(u_1, ..., u_h), f̄(v_1, ..., v_h) ) = f( (u_1, v_1), ..., (u_h, v_h) ). (33)

Proof: Let u = (u_1, ..., u_h) and v = (v_1, ..., v_h). Observe that if (33) holds for polynomials f_1 and f_2 defined over disjoint sets of monomials then it also holds for f = f_1 + f_2:

( f(u), f̄(v) ) = ( (f_1 + f_2)(u), (f̄_1 + f̄_2)(v) ) = ( f_1(u) ◦ f_2(u), f̄_1(v) ◦ f̄_2(v) )
= f_1( (u_1, v_1), ..., (u_h, v_h) ) + f_2( (u_1, v_1), ..., (u_h, v_h) ) = f( (u_1, v_1), ..., (u_h, v_h) ).

Therefore it suffices to prove (33) for monomials f = c x_1^{α_1} ... x_h^{α_h}. It remains to notice that identity (33) for monomials f = c x_1^{α_1} ... x_h^{α_h} follows immediately from formula (31) using induction on ∑_{i=1}^h α_i.
The next lemma bounds combinatorial niceness of certain subsets of F*_q.

Lemma 20 Let F_q = F_{p^l}, where p is prime. Let S ⊆ F*_q. Suppose there exist h proper affine hyperplanes {π_{γ_r,c_r}}_{1≤r≤h} of F_q such that S ⊆ ⋃_{r=1}^h π_{γ_r,c_r}; then S is at most h(p − 1) combinatorially nice.

Proof: Assume S is t combinatorially nice. This implies that for some c > 0 and every m there exist two n = ⌊cm^t⌋-sized collections of vectors {u_i}_{i∈[n]} and {v_i}_{i∈[n]} in F_q^m, such that:
• For all i ∈ [n], (u_i, v_i) = 0;
• For all i, j ∈ [n] such that i ≠ j, (u_j, v_i) ∈ S.

For a vector u ∈ F_q^m and integer e let u^e denote a vector resulting from raising every coordinate of u to the power e. For every i ∈ [n] and r ∈ [h] define vectors u_i^{(r)} and v_i^{(r)} in F_q^{ml}:

u_i^{(r)} = (γ_r u_i) ◦ (γ_r u_i)^p ◦ ... ◦ (γ_r u_i)^{p^{l−1}} and v_i^{(r)} = v_i ◦ v_i^p ◦ ... ◦ v_i^{p^{l−1}}. (34)

Note that for every r_1, r_2 ∈ [h], v_i^{(r_1)} = v_i^{(r_2)}. It is straightforward to verify that for every i, j ∈ [n] and r ∈ [h]:

( u_j^{(r)}, v_i^{(r)} ) = Tr(γ_r (u_j, v_i)). (35)

Combining (35) with the fact that S is covered by proper affine hyperplanes π_{γ_i,c_i} we conclude that
• For all i ∈ [n] and r ∈ [h], ( u_i^{(r)}, v_i^{(r)} ) = 0;
• For all i, j ∈ [n] such that i ≠ j, there exists r ∈ [h] such that ( u_j^{(r)}, v_i^{(r)} ) ∈ F*_p.

Pick g(x_1, ..., x_h) ∈ F_p[x_1, ..., x_h] to be a homogeneous degree h polynomial such that for a = (a_1, ..., a_h) ∈ F_p^h: g(a) = 0 if and only if a is the all-zeros vector. The existence of such a polynomial g follows from [17, Example 6.7]. Set f = g^{p−1}. Note that for a ∈ F_p^h: f(a) = 0 if a is the all-zeros vector, and f(a) = 1 otherwise. For all i ∈ [n] define

u′_i = f( u_i^{(1)}, ..., u_i^{(h)} ) ◦ (1) and v′_i = f̄( v_i^{(1)}, ..., v_i^{(h)} ) ◦ (−1). (36)

Note that f and f̄ are homogeneous degree (p − 1)h polynomials in h variables. Therefore (32) implies that for all i vectors u′_i and v′_i have length m′ ≤ h^{(p−1)h}(ml)^{(p−1)h}. Combining identities (36) and (33) and using the properties of dot products between vectors {u_i^{(r)}} and {v_i^{(r)}} discussed above we conclude that for every m there exist two n = ⌊cm^t⌋-sized collections of vectors {u′_i}_{i∈[n]} and {v′_i}_{i∈[n]} in F_q^{m′}, such that:
• For all i ∈ [n], (u′_i, v′_i) = −1;
• For all i, j ∈ [n] such that i ≠ j, (u′_j, v′_i) = 0.

It remains to notice that a family of vectors with such properties exists only if n ≤ m′, i.e., ⌊cm^t⌋ ≤ h^{(p−1)h}(ml)^{(p−1)h}. Given that we can pick m to be arbitrarily large, this implies that t ≤ (p − 1)h.
The next lemma presents the main result of this section.

Lemma 21 Let k be an odd integer. Suppose there exists a k-nice sequence; then for infinitely many primes p some k elements of C_p add up to zero.

Proof: Assume {S_i ⊆ F*_{q_i}}_{i≥1} is k-nice. Let p be a fixed prime. Combining lemmas 18 and 20 we conclude that every k algebraically nice subset S ⊆ F*_{p^l} is at most (p − 1)p^k combinatorially nice. Note that our bound on combinatorial niceness is independent of l. Therefore there are only finitely many extensions of the field F_p in the sequence {F_{q_i}}_{i≥1}, and the set P = {char F_{q_i}}_{i≥1} is infinite. It remains to notice that according to lemma 17 for every p ∈ P there exist k elements of C_p that add up to zero.
In what follows we present necessary conditions for the existence of k-tuples of p-th roots of unity in the algebraic closure of F_2 that sum to zero. We treat the k = 3 case separately since in that case we can use a specialized argument to derive a slightly stronger conclusion.
5.2 A necessary condition for the existence of k p-th roots of unity summing to zero
Lemma 22 Let k ≥ 3 be odd and p be a prime. Suppose there exist ζ_1, ..., ζ_k ∈ C_p such that ∑_{i=1}^k ζ_i = 0; then

ord2(p) ≤ 2p^{1−1/(k−1)}. (37)

Proof: Let t = ord2(p). Note that C_p ⊆ F_{2^t}. Note also that all elements of C_p other than the multiplicative identity are proper elements of F_{2^t}. Therefore for every ζ ∈ C_p where ζ ≠ 1 and every f(x) ∈ F_2[x] such that deg f ≤ t − 1 we have: f(ζ) ≠ 0.

By multiplying ∑_{i=1}^k ζ_i = 0 through by ζ_k^{−1}, we may reduce to the case ζ_k = 1. Let ζ be a generator of C_p. For every i ∈ [k − 1] pick w_i ∈ Z_p such that ζ_i = ζ^{w_i}. We now have ∑_{i=1}^{k−1} ζ^{w_i} + 1 = 0. Set h = ⌊(t − 1)/2⌋. Consider the (k − 1)-tuples:

(mw_1 + i_1, ..., mw_{k−1} + i_{k−1}) ∈ Z_p^{k−1}, for m ∈ Z_p and i_1, ..., i_{k−1} ∈ [0, h]. (38)

Suppose two of these coincide, say

(mw_1 + i_1, ..., mw_{k−1} + i_{k−1}) = (m′w_1 + i′_1, ..., m′w_{k−1} + i′_{k−1}),

with (m, i_1, ..., i_{k−1}) ≠ (m′, i′_1, ..., i′_{k−1}). Set n = m − m′ and j_l = i′_l − i_l for l ∈ [k − 1]. We now have

(nw_1, ..., nw_{k−1}) = (j_1, ..., j_{k−1})

with −h ≤ j_1, ..., j_{k−1} ≤ h. Observe that n ≠ 0, and thus it has a multiplicative inverse g ∈ Z_p. Consider the polynomial

P(z) = z^{j_1+h} + ... + z^{j_{k−1}+h} + z^h ∈ F_2[z].

Note that deg P ≤ 2h ≤ t − 1. Note also that P(1) = 1 and P(ζ^g) = 0. The latter identity contradicts the fact that ζ^g is a proper element of F_{2^t}. This contradiction implies that all (k − 1)-tuples in (38) are distinct. This yields

p^{k−1} ≥ p(h + 1)^{k−1} ≥ p(t/2)^{k−1},

which is equivalent to (37).
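Condition (37) is necessary but not sufficient, which a tiny numeric check makes visible (our own illustration, not from the paper): for the Mersenne prime p = 7, where a zero triple in C_7 certainly exists, the bound holds with t = ord2(7) = 3; it also holds for p = 5 with t = ord2(5) = 4, even though no three 5-th roots of unity in characteristic 2 sum to zero.

```python
def satisfies_37(p, t, k):
    # the necessary condition ord2(p) <= 2 p^(1 - 1/(k-1)) of lemma 22
    return t <= 2 * p ** (1 - 1 / (k - 1))

assert satisfies_37(7, 3, 3)        # p = 7: a zero triple exists, condition holds
assert satisfies_37(5, 4, 3)        # p = 5: condition holds, yet no zero triple exists
assert not satisfies_37(11, 10, 3)  # p = 11, t = ord2(11) = 10, is ruled out
```

So lemma 22 rules primes out (here p = 11) but never rules them in.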
5.3 A necessary condition for the existence of three p-th roots of unity summing to zero
In this section we slightly strengthen lemma 22 in the special case when k = 3. Our argument is loosely inspired
by the Agrawal-Kayal-Saxena deterministic primality test [1].
Lemma 23 Let p be a prime. Suppose there exist ζ_1, ζ_2, ζ_3 ∈ C_p that sum up to zero; then

ord2(p) ≤ ((4/3)p)^{1/2}. (39)
Proof: Let t = ord2(p). Note that C_p ⊆ F_{2^t}. Note also that all elements of C_p other than the multiplicative identity are proper elements of F_{2^t}. Therefore for every ζ ∈ C_p where ζ ≠ 1 and every f(x) ∈ F_2[x] such that deg f ≤ t − 1 we have: f(ζ) ≠ 0.

Observe that ζ_1 + ζ_2 + ζ_3 = 0 implies ζ_1ζ_2^{−1} + 1 = ζ_3ζ_2^{−1}. This yields (ζ_1ζ_2^{−1} + 1)^p = 1. Put ζ = ζ_1ζ_2^{−1}. Note that ζ ≠ 1 and ζ, 1 + ζ ∈ C_p. Consider the products π_{i,j} = ζ^i(1 + ζ)^j ∈ C_p for 0 ≤ i, j ≤ t − 1. Note that π_{i,j}, π_{k,l} with (i, j) ≠ (k, l) cannot be the same if i ≥ k and l ≥ j, as then

ζ^{i−k} − (1 + ζ)^{l−j} = 0,

but the left side has degree less than t. In other words, if π_{i,j} = π_{k,l} and (i, j) ≠ (k, l), then the pairs (i, j) and (k, l) are comparable under termwise comparison. In particular, either (k, l) = (i + a, j + b) or (i, j) = (k + a, l + b) for some pair (a, b) with π_{a,b} = 1.
We next check that there cannot be two distinct nonzero pairs (a, b), (a′, b′) with π_{a,b} = π_{a′,b′} = 1. As above, these pairs must be comparable; we may assume without loss of generality that a ≤ a′, b ≤ b′. The equations π_{a,b} = 1 and π_{a′−a,b′−b} = 1 force a + b ≥ t and (a′ − a) + (b′ − b) ≥ t, so a′ + b′ ≥ 2t. But a′, b′ ≤ t − 1, contradiction.

If there is no nonzero pair (a, b) with 0 ≤ a, b ≤ t − 1 and π_{a,b} = 1, then all π_{i,j} are distinct, so p ≥ t^2. Otherwise, as above, the pair (a, b) is unique, and the pairs (i, j) with 0 ≤ i, j ≤ t − 1 and (i, j) ≱ (a, b) are pairwise distinct. The number of pairs excluded by the condition (i, j) ≱ (a, b) is (t − a)(t − b); since a + b ≥ t, (t − a)(t − b) ≤ t^2/4. Hence p ≥ t^2 − t^2/4 = 3t^2/4 as desired.
While the necessary condition given by lemma 23 is quite far away from the sufficient condition given by lemma 8, it nonetheless suffices for checking that for most primes p there do not exist three p-th roots of unity summing to zero. For instance, among the 664578 odd primes p ≤ 10^7, all but 550 are ruled out by lemma 23. (There is an easy argument that t must be odd if p > 3; this cuts the list down to 273 primes.) Each remaining p can be tested by computing gcd(x^p + 1, (x + 1)^p + 1); the only examples we found that did not satisfy the condition of lemma 8 were (p, t) = (73, 9), (262657, 27), (599479, 33), (121369, 39).
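The gcd computation mentioned above is cheap to reproduce. In the sketch below (ours, not the authors' code), polynomials over F_2 are encoded as Python integers with bit i holding the coefficient of x^i; gcd(x^p + 1, (x + 1)^p + 1) is nontrivial exactly when some ζ ∉ {0, 1} has both ζ and ζ + 1 in C_p, i.e. when three p-th roots of unity sum to zero.

```python
def pdeg(a):
    # degree of a GF(2)[x] polynomial encoded as an int
    return a.bit_length() - 1

def pmod(a, b):
    # remainder of a modulo b in GF(2)[x] (coefficient arithmetic is XOR)
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def pmul(a, b):
    # carry-less product of two GF(2)[x] polynomials
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def zero_triple_gcd(p):
    # gcd(x^p + 1, (x + 1)^p + 1) over GF(2)
    f = (1 << p) | 1          # x^p + 1
    g, base, e = 1, 0b11, p   # compute (x + 1)^p by square-and-multiply
    while e:
        if e & 1:
            g = pmul(g, base)
        base = pmul(base, base)
        e >>= 1
    return pgcd(f, g ^ 1)     # g ^ 1 adds the constant term: (x + 1)^p + 1
```

For the Mersenne prime p = 7 the gcd is (x^7 + 1)/(x + 1), of degree 6, while for p = 5 the gcd is 1, matching the fact that no three 5-th roots of unity sum to zero.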
5.4 Summary
At the beginning of section 5 we argued that in order to use the method of [34] (i.e., proposition 5) to obtain
k-query locally decodable codes of length exp(nǫ) for some fixed k and all ǫ > 0, one needs to exhibit a k-nice
sequence of subsets of finite fields. In what follows we use technical results of the previous subsections to show
that the existence of a k-nice sequence implies that infinitely many Mersenne numbers have large prime factors.
Theorem 24 Let k be odd. Suppose there exists a k-nice sequence of subsets of finite fields; then for infinitely many values of t we have

P(2^t − 1) ≥ (t/2)^{1+1/(k−2)}. (40)

Proof: Using lemmas 21 and 22 we conclude that a k-nice sequence yields infinitely many primes p such that ord2(p) ≤ 2p^{1−1/(k−1)}. Let p be such a prime and t = ord2(p). Then p divides 2^t − 1, and solving t ≤ 2p^{1−1/(k−1)} for p gives P(2^t − 1) ≥ p ≥ (t/2)^{1+1/(k−2)}.
A combination of lemmas 21 and 23 yields a slightly stronger bound for the special case of 3-nice sequences.
Theorem 25 Suppose there exists a 3-nice sequence of subsets; then for infinitely many values of t we have
P(2^t − 1) ≥ (3/4)t^2. (41)
We would like to remind the reader that although the lower bounds for P(2^t − 1) given by (40) and (41) are extremely weak in light of the widely accepted conjecture that the number of Mersenne primes is infinite, they are substantially stronger than what is currently known unconditionally (2).
6 Conclusion

Recently [34] came up with a novel technique for constructing locally decodable codes and obtained vast improvements upon earlier work. The construction proceeds in two steps. First [34] shows that if there exist subsets of finite fields with certain “nice” properties then there exist good codes. Next [34] constructs nice subsets of prime fields F_p for Mersenne primes p.
In this paper we have undertaken an in-depth study of nice subsets of general finite fields. We have shown that constructing nice subsets is closely related to proving lower bounds on the size of the largest prime factors of Mersenne numbers. Specifically, we extended the constructions of [34] to obtain nice subsets of prime fields F_p for primes p that are large factors of Mersenne numbers. This implies that strong lower bounds for the size of the largest prime factors of Mersenne numbers yield better locally decodable codes. Conversely, we argued that if one can obtain codes of subexponential length and constant query complexity through nice subsets of finite fields then infinitely many Mersenne numbers have prime factors larger than currently known.
Acknowledgements
Kiran Kedlaya’s research is supported by NSF CAREER grant DMS-0545904 and by the Sloan Research Fel-
lowship. Sergey Yekhanin would like to thank Swastik Kopparty for providing the reference [5] and outlining
the proof of lemma 9. He would also like to thank Henryk Iwaniec, Carl Pomerance and Peter Sarnak for their
feedback regarding the number theory problems discussed in this paper.
References
[1] M. Agrawal, N. Kayal, N. Saxena, “PRIMES is in P,” Annals of Mathematics, vol. 160, pp. 781-793, 2004.
[2] L. Babai, L. Fortnow, L. Levin, and M. Szegedy, “Checking computations in polylogarithmic time,”. In Proc.
of the 23th ACM Symposium on Theory of Computing (STOC), pp. 21-31, 1991.
[3] A. Beimel, Y. Ishai and E. Kushilevitz,“General constructions for information-theoretic private information
retrieval,” Journal of Computer and System Sciences, vol. 71, pp. 213-247, 2005. Preliminary versions in
STOC 1999 and ICALP 2001.
[4] A. Beimel, Y. Ishai, E. Kushilevitz, and J. F. Raymond, “Breaking the O(n^{1/(2k−1)}) barrier for information-theoretic private information retrieval,” In Proc. of the 43rd IEEE Symposium on Foundations of Computer Science (FOCS), pp. 261-270, 2002.
[5] J. Bourgain, M. Chang, “A Gauss sum estimate in arbitrary finite fields,” Comptes Rendus Mathematique,
vol. 342, pp. 643-646, 2006.
[6] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan. “Private information retrieval,” In Proc. of the 36rd
IEEE Symposium on Foundations of Computer Science (FOCS), pp. 41-50, 1995. Also, in Journal of the
ACM, vol. 45, 1998.
[7] D. Cox, J. Little, D. O’Shea, Ideals, varieties, and algorithms: an introduction to computational algebraic
geometry and commutative algebra. Springer, 1996.
[8] A. Deshpande, R. Jain, T. Kavitha, S. Lokam and J. Radhakrishnan, “Better lower bounds for locally decod-
able codes,” In Proc. of the 20th IEEE Computational Complexity Conference (CCC), pp. 184-193, 2002.
[9] Curtis Cooper, Steven Boone, http://www.mersenne.org/32582657.htm
[10] P. Erdős and T. Shorey, “On the greatest prime factor of 2^p − 1 for a prime p and other expressions,” Acta Arith. vol. 30, pp. 257-265, 1976.
[11] W. Gasarch, “A survey on private information retrieval,” The Bulletin of the EATCS, vol. 82, pp. 72-107,
2004.
[12] O. Goldreich, “Short locally testable codes and proofs,” Technical Report TR05-014, Electronic Colloquim
on Computational Complexity (ECCC), 2005.
[13] O. Goldreich, H. Karloff, L. Schulman, L. Trevisan “Lower bounds for locally decodable codes and private
information retrieval,” In Proc. of the 17th IEEE Computational Complexity Conference (CCC), pp. 175-183,
2002.
[14] B. Hemenway and R. Ostrovsky, “Public key encryption which is simultaneously a locally-decodable error-
correcting code,” In Cryptology ePrint Archive, Report 2007/083.
[15] J. Katz and L. Trevisan, “On the efficiency of local decoding procedures for error-correcting codes,” In Proc.
of the 32th ACM Symposium on Theory of Computing (STOC), pp. 80-86, 2000.
[16] I. Kerenidis, R. de Wolf, “Exponential lower bound for 2-query locally decodable codes via a quantum
argument,” Journal of Computer and System Sciences, 69(3), pp. 395-420. Earlier version in STOC’03.
quant-ph/0208062.
[17] R. Lidl and H. Niederreiter, Finite Fields. Cambridge: Cambridge University Press, 1983.
[18] L. Murata, C. Pomerance, “On the largest prime factor of a Mersenne number,” Number theory, CRM Proc.
Lecture Notes of American Mathematical Society vol. 36, pp. 209-218, 2004.
[19] M. Murty and S. Wong, “The ABC conjecture and prime divisors of the Lucas and Lehmer sequences,” In
Proc. of Milennial Conference on Number Theory III, (Urbana, IL, 2000) (A. K. Peters, Natick, MA, 2002)
pp. 43-54.
[20] K. Obata, “Optimal lower bounds for 2-query locally decodable linear codes,” In Proc. of the 6th RANDOM,
vol. 2483 of Lecture Notes in Computer Science, pp. 39-50, 2002.
[21] A. Polishchuk and D. Spielman, ”Nearly-linear size holographic proofs,” In Proc. of the 26th ACM Sympo-
sium on Theory of Computing (STOC), pp. 194-203, 1994.
[22] C. Pomerance, “Recent developments in primality testing,” Math. Intelligencer, 3:3, pp. 97-105, (1980/81).
[23] P. Raghavendra, “A Note on Yekhanin’s locally decodable codes,” In Electronic Colloquium on Computa-
tional Complexity Report TR07-016, 2007.
[24] A. Romashchenko, “Reliable computations based on locally decodable codes,” In Proc. of the 23rd Inter-
national Symposium on Theoretical Aspects of Computer Science (STACS), vol. 3884 of Lecture Notes in
Computer Science, pp. 537-548, 2006.
[25] A. Schinzel, “On primitive factors of a^n − b^n,” In Proc. of Cambridge Philos. Soc. vol. 58, pp. 555-562, 1962.
[26] C. Stewart, “The greatest prime factor of a^n − b^n,” Acta Arith. vol. 26, pp. 427-433, 1974/75.
[27] C. Stewart, “On divisors of Fermat, Fibonacci, Lucas, and Lehmer numbers,” In Proc. of London Math. Soc.
vol. 35 (3), pp. 425-447, 1977.
[28] M. Sudan, Efficient checking of polynomials and proofs and the hardness of approximation problems. PhD
thesis, University of California at Berkeley, 1992.
[29] L. Trevisan, “Some applications of coding theory in computational complexity,” Quaderni di Matematica,
vol. 13, pp. 347-424, 2004.
[30] S. Wehner and R. de Wolf, “Improved lower bounds for locally decodable codes and private information re-
trieval,” In Proc. of 32nd International Colloquium on Automata, Languages and Programming (ICALP’05),
LNCS 3580, pp. 1424-1436.
[31] Lenstra-Pomerance-Wagstaff conjecture. (2006, May 22). In Wikipedia, The Free Encyclopedia. Re-
trieved 00:18, October 3, 2006, from http://en.wikipedia.org/w/index.php?title=Lenstra-Pomerance-
Wagstaff conjecture&oldid=54506577
[32] S. Wagstaff, “Divisors of Mersenne numbers,” Math. Comp., 40:161, pp. 385-397, 1983.
[33] D. Woodruff, “New lower bounds for general locally decodable codes,” Electronic Colloquium on Computa-
tional Complexity, TR07-006, 2007.
[34] S. Yekhanin, “Towards 3-query locally decodable codes of subexponential length,” In Proc. of the 39th ACM
Symposium on Theory of Computing (STOC), 2007.
[35] S. Yekhanin, Locally decodable codes and private information retrieval schemes. PhD thesis, MIT, to appear.
|
0704.1695 | Pair production with neutrinos in an intense background magnetic field | Pair production with neutrinos in an intense background magnetic field
Duane A. Dicus,1 Wayne W. Repko,2 and Todd M. Tinsley3
Department of Physics, University of Texas at Austin, Austin, Texas 78712
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824
Physics and Astronomy Department, Rice University, Houston, Texas 77005
(Dated: July 7, 2021)
We present a detailed calculation of the electron-positron production rate using neutrinos in an
intense background magnetic field. The computation is done for the process ν → νeē (where ν
can be νe, νµ, or ντ ) within the framework of the Standard Model. Results are given for various
combinations of Landau levels over a range of possible incoming neutrino energies and magnetic
field strengths.
PACS numbers: 13.15.+g, 12.15.-y, 13.40.Ks
I. INTRODUCTION
Neutrino interactions are of great importance in astrophysics because of their capacity to serve as mediators for
the transport and loss of energy. Their low mass and weak couplings make neutrinos ideal candidates for this role.
Therefore, the rates of neutrino interactions are integral in the evolution of all stars, particularly the collapse and
subsequent explosion of supernovae, where the overwhelming majority of gravitational energy lost is radiated away in
the form of neutrinos.
Neutrinos have held a prominent place in models of stellar collapse ever since Gamow and Schoenberg suggested
their role in 1941 [1]. While supernova models have progressed a great deal in the last 65 years, the precise mechanism
for explosion is still uncertain. A common feature, however, among all models is the sensitivity to neutrino transport.
Neutrino processes once thought to be negligible now become relevant, and this has inspired many authors to calculate
rates for neutrino interactions beyond that of the fundamental “Urca” processes
p e → n νe
n → p e ν̄e .
Recent examples include neutrino-electron scattering, neutrino-nucleus inelastic scattering, and electron-positron pair
annihilation [2, 3]. Furthermore, the large magnetic field strengths associated with supernovae (10^12–10^17 G) are likely to cause significant changes in the behavior of neutrino transport.
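For orientation, these field strengths can be compared with the QED critical (Schwinger) field B_c = m_e^2 c^3/(eℏ) ≈ 4.414 × 10^13 G, above which the Landau-level spacing exceeds the electron rest energy. The small conversion helper below is our own sketch (the constant is the standard value, not a number taken from this paper):

```python
B_CRIT_GAUSS = 4.414e13  # Schwinger critical field m_e^2 c^3 / (e hbar) in gauss

def eB_over_me2(b_gauss):
    # dimensionless field strength eB / m_e^2 used in magnetized-QED rates
    return b_gauss / B_CRIT_GAUSS
```

So the quoted range 10^12–10^17 G spans eB/m_e^2 from roughly 0.02 to a few times 10^3, i.e. from sub-critical to strongly super-critical fields.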
While the electromagnetic field does not couple to the Standard Model neutrino, it does affect neutrino physics
by altering the behavior of any charged particles, real or virtual, with which the neutrino may interact. A number
of authors have considered such effects on Urca-type processes [4, 5, 6, 7] and on neutrino absorption by nucleons
(and its reversed processes) [8, 9, 10]. Furthermore, Bhattacharya and Pal have prepared a very nice review of other
processes involving neutrinos that are affected by the presence of a magnetic field [11].
The problem of interest in this work is the production of electron-positron pairs with neutrinos in an intense
magnetic field
ν → ν e ē . (1)
Normally this process is kinematically forbidden, but the presence of the magnetic field changes the energy balance
of the process, thereby permitting the interaction.
Stimulation of this process with high-intensity laser fields has been shown to have an unacceptably low rate of
production [12], but such an interaction could have important consequences in astrophysical phenomena where large
magnetic field strengths exist. The process would most likely serve to transfer energy in core-collapse supernovae [13].
However, Gvozdev et al. have proposed that its role in magnetars could even help to explain observed gamma-ray
∗Electronic address: [email protected]
†Electronic address: [email protected]
‡Electronic address: [email protected]
FIG. 1: Possible diagrams considered for the process ν → ν e ē: (a) the neutral current reaction and (b) the charged current reaction. Both diagrams contribute for electron-type neutrinos, but only the neutral current reaction (FIG. 1(a)) contributes for νµ and ντ.
bursts [14]. The interest in this reaction has led to a previous treatment in the literature [15], but those authors present results for two special limiting cases: (1) when the generalized magnetic field strength eB is greater than the square of the initial neutrino energy E^2, and (2) when the square of the initial neutrino energy E^2 is much greater than the generalized magnetic field strength eB. In both cases the incoming neutrino energy E is much greater than the electron's rest energy m_e. In this paper we present a more complete calculation of the production rate as mediated by the neutral and the charged-current processes (FIG. 1). We present the results of the calculation for varying Landau levels, neutrino energies, and magnetic field strengths. A comparison with the approximate method is also discussed.
II. FIELD OPERATOR SOLUTIONS
As we have pointed out in section I, the standard model neutrino can only be affected by the electromagnetic field
through its interactions with charged particles. This means that for the process ν → νeē the Dirac field solution for
the final state electron and positron must change relative to their free-field solutions. The magnetic field will also
change the form of theW -boson’s field solution which can mediate the process when electron neutrinos are considered.
However, in our analysis we take the limit that the momentum transfer for this reaction is much less than the mass
of the W -boson (Q2 ≪ m2W ) and ignore any effects the magnetic field may have on this charged boson. Thus, in
this section we review the results of our derivation of the Dirac field operator solutions for the electron and positron.
We closely follow the conventions used by Bhattacharya and Pal and refer the reader to their work [9] for a detailed
derivation. The reader who is familiar with these solutions may wish to begin with section III where we calculate the
production rate.
We choose our magnetic field to lie along the positive z-axis,

    \vec{B} = B_0 \hat{k} ,    (2)

which allows us some freedom in the choice of vector potential $A^\mu(x)$. We make the choice

    A^\mu(x) = (0, -yB, 0, 0)    (3)

both for its simplicity and its agreement with the choice found in reference [9]. This choice of vector potential leads us
to assume that all of the y space-time coordinate dependence is within the spinors. The absence of any y dependence
in, for instance, the phase leads us to define a notation such that

    \tilde{y}^\mu = (t, x, 0, z)    (4)
    \vec{V}_{\tilde{y}} = (V_x, 0, V_z) ,    (5)

where $\vec{V}$ is any 3-vector.
A. Electron field operator
Solving the Dirac equation for our choice of vector potential (Eq. (3)) results in the following electron field operator:

    \psi_e(x) = \sum_{n=0}^{\infty} \int \frac{d^2\tilde{p}_y}{(2\pi)^2}\, \sqrt{\frac{E_n+m_e}{2E_n}}\, \sum_{s=+,-} \left[ u^s(\tilde{p}_y,n,y)\, e^{-ip\cdot\tilde{y}}\, \hat{a}^s_{e\,\tilde{p}_y,n} + v^s(\tilde{p}_y,n,y)\, e^{+ip\cdot\tilde{y}}\, \hat{b}^{s\,\dagger}_{e\,\tilde{p}_y,n} \right] ,    (6)
where the creation and annihilation operators obey the following anticommutation relations

    \{\hat{a}^s_{e\,\tilde{p}_y,n},\, \hat{a}^{s'\,\dagger}_{e\,\tilde{p}'_y,n'}\} = \{\hat{b}^s_{e\,\tilde{p}_y,n},\, \hat{b}^{s'\,\dagger}_{e\,\tilde{p}'_y,n'}\} = (2\pi)^2\, \delta^{ss'}\, \delta_{nn'}\, \delta^2(\tilde{p}_y - \tilde{p}'_y) .    (7)
In Eq. (6) we sum over all possible spins s and all Landau levels n, where E_n is the energy of a fermion occupying the
nth Landau level,

    E_n = \sqrt{p_z^2 + m_e^2 + 2neB} , \qquad n \ge 0 .    (8)
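As a minimal numerical sketch of the dispersion relation in Eq. (8), the function below evaluates the Landau-level energy; the choice of units with m_e = 1 and eB measured in units of m_e² (i.e., B/B_c) is an assumption made here for convenience, not something fixed by the paper.

```python
import math

def landau_energy(p_z, n, eB, m_e=1.0):
    """Energy E_n = sqrt(p_z^2 + m_e^2 + 2 n eB) of a fermion
    in the n-th Landau level, Eq. (8). Units: m_e = 1, eB in units of m_e^2."""
    return math.sqrt(p_z ** 2 + m_e ** 2 + 2.0 * n * eB)
```

At p_z = 0 this reduces to the effective mass of Eq. (59), which is what drives the summation cutoffs discussed in section IV.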
The Dirac bi-spinors are

    u^+(\tilde{p}_y,n,y) = \begin{pmatrix} I_{n-1}(\xi_-) \\ 0 \\ \frac{p_z}{E_n+m_e}\, I_{n-1}(\xi_-) \\ -\frac{\sqrt{2neB}}{E_n+m_e}\, I_n(\xi_-) \end{pmatrix} , \qquad u^-(\tilde{p}_y,n,y) = \begin{pmatrix} 0 \\ I_n(\xi_-) \\ -\frac{\sqrt{2neB}}{E_n+m_e}\, I_{n-1}(\xi_-) \\ -\frac{p_z}{E_n+m_e}\, I_n(\xi_-) \end{pmatrix} ,    (9a)

    v^+(\tilde{p}_y,n,y) = \begin{pmatrix} \frac{p_z}{E_n+m_e}\, I_{n-1}(\xi_+) \\ -\frac{\sqrt{2neB}}{E_n+m_e}\, I_n(\xi_+) \\ I_{n-1}(\xi_+) \\ 0 \end{pmatrix} , \qquad v^-(\tilde{p}_y,n,y) = \begin{pmatrix} -\frac{\sqrt{2neB}}{E_n+m_e}\, I_{n-1}(\xi_+) \\ -\frac{p_z}{E_n+m_e}\, I_n(\xi_+) \\ 0 \\ I_n(\xi_+) \end{pmatrix} .    (9b)
The I_m(ξ) are functions of the Hermite polynomials,

    I_m(\xi) = \left(2^m\, m!\, \sqrt{\pi}\right)^{-1/2} e^{-\xi^2/2}\, H_m(\xi) ,    (10)

where the dimensionless parameter ξ is defined by

    \xi_\pm = \sqrt{eB}\; y \pm \frac{p_x}{\sqrt{eB}} .    (11)
Recall that the Hermite polynomials H_m(ξ) are only defined for nonnegative values of m. Therefore, we must define
I_{-1}(ξ) = 0. This means that an electron in the lowest Landau level (n = 0) cannot exist in the spin-up state, and
a positron in the lowest Landau level cannot exist in the spin-down state.
The normalization in Eq. (10) has been chosen such that the functions I_m(ξ) obey the following delta-function
representation [16, p. 86]:

    \delta(y-y') = \frac{\delta(\xi-\xi')}{|\partial y/\partial\xi|} = \sqrt{eB}\,\delta(\xi-\xi') = \sqrt{eB}\sum_{n=0}^{\infty} \frac{e^{-\xi^2/2}H_n(\xi)\; e^{-\xi'^2/2}H_n(\xi')}{2^n\, n!\, \sqrt{\pi}} ,

so that

    \delta(y-y') = \sqrt{eB} \sum_{n=0}^{\infty} I_n(\xi)\, I_n(\xi') .    (12)
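The normalization in Eqs. (10)–(12) also makes the I_m an orthonormal set in ξ. The following small sketch (not from the paper; pure-stdlib Hermite recurrence and trapezoidal integration) verifies this numerically.

```python
import math

def I(m, xi):
    """I_m(xi) = (2^m m! sqrt(pi))^(-1/2) exp(-xi^2/2) H_m(xi), Eq. (10)."""
    # Hermite polynomials via the recurrence H_{k+1} = 2 xi H_k - 2 k H_{k-1}
    h_prev, h = 1.0, 2.0 * xi
    if m == 0:
        h = h_prev
    else:
        for k in range(1, m):
            h_prev, h = h, 2.0 * xi * h - 2.0 * k * h_prev
    norm = math.sqrt(2.0 ** m * math.factorial(m) * math.sqrt(math.pi))
    return math.exp(-xi * xi / 2.0) * h / norm

def overlap(n, m, a=-12.0, b=12.0, steps=4000):
    """Trapezoid-rule estimate of the overlap integral of I_n and I_m over xi."""
    dx = (b - a) / steps
    return sum(I(n, a + i * dx) * I(m, a + i * dx) for i in range(steps + 1)) * dx
```

Because the integrand decays like a Gaussian, the trapezoid rule converges extremely quickly here; overlap(n, m) reproduces the Kronecker delta to high accuracy.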
For convenience we choose to normalize our 1-particle states in a "box" with dimensions L_xL_yL_z = V such that
the states are defined as

    |e\rangle = |\tilde{p}_{1y}, n_1, s_1\rangle = \frac{1}{\sqrt{L_xL_z}}\, \hat{a}^{s_1\,\dagger}_{e\,\tilde{p}_{1y},n_1} |0\rangle    (13a)
    |\bar{e}\rangle = |\tilde{p}_{2y}, n_2, s_2\rangle = \frac{1}{\sqrt{L_xL_z}}\, \hat{b}^{s_2\,\dagger}_{e\,\tilde{p}_{2y},n_2} |0\rangle ,    (13b)

and the completeness relation for the states is

    1 = \sum_{n,s} \int \frac{d^2\tilde{p}_y}{(2\pi)^2}\, L_xL_z\, |\tilde{p}_y,n,s\rangle \langle \tilde{p}_y,n,s| .    (14)
B. Spin sums
In order to evaluate the production rate for our process, we must derive the completeness relations for summations
over the spin of the fermions. For a detailed calculation of the rules see reference [10]. The results of the calculation
are as follows
    \sum_{s=+,-} u^s(\tilde{p}_y,n,y')\,\bar{u}^s(\tilde{p}_y,n,y) = \frac{1}{2(E_n+m_e)} \Big\{ \big[ m_e(1-\sigma^3) + \slashed{p}_\parallel + \slashed{q}_\parallel\gamma_5 \big] I_n(\xi'_-)I_n(\xi_-)
        + \big[ m_e(1+\sigma^3) + \slashed{p}_\parallel - \slashed{q}_\parallel\gamma_5 \big] I_{n-1}(\xi'_-)I_{n-1}(\xi_-)
        + \sqrt{2neB}\,\big[ (\gamma^1+i\gamma^2)\, I_{n-1}(\xi'_-)I_n(\xi_-) + (\gamma^1-i\gamma^2)\, I_n(\xi'_-)I_{n-1}(\xi_-) \big] \Big\}    (15a)

    \sum_{s=+,-} v^s(\tilde{p}_y,n,y)\,\bar{v}^s(\tilde{p}_y,n,y') = \frac{1}{2(E_n+m_e)} \Big\{ \big[ -m_e(1-\sigma^3) + \slashed{p}_\parallel + \slashed{q}_\parallel\gamma_5 \big] I_n(\xi_+)I_n(\xi'_+)
        + \big[ -m_e(1+\sigma^3) + \slashed{p}_\parallel - \slashed{q}_\parallel\gamma_5 \big] I_{n-1}(\xi_+)I_{n-1}(\xi'_+)
        + \sqrt{2neB}\,\big[ (\gamma^1+i\gamma^2)\, I_{n-1}(\xi_+)I_n(\xi'_+) + (\gamma^1-i\gamma^2)\, I_n(\xi_+)I_{n-1}(\xi'_+) \big] \Big\} ,    (15b)

where

    p_\parallel^\mu = (E_n, 0, 0, p_z)    (16)
    q_\parallel^\mu = (p_z, 0, 0, E_n) .    (17)
The above results have been derived using the standard “Bjorken and Drell” representation for the γ-matrices [17]
    \gamma^0 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} , \qquad \gamma^i = \begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix} .    (18)
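The representation (18) can be checked against the defining Clifford algebra {γ^µ, γ^ν} = 2g^{µν}. The sketch below (an illustrative check, not from the paper; a NumPy environment is assumed) builds the matrices block-wise and verifies all sixteen anticommutators.

```python
import numpy as np

# Pauli matrices and the Bjorken--Drell gamma matrices of Eq. (18)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def anticommutator(a, b):
    return a @ b + b @ a

# Clifford algebra check: {gamma^mu, gamma^nu} = 2 g^{mu nu} * 1
ok = all(
    np.allclose(anticommutator(gammas[m], gammas[n]), 2.0 * metric[m, n] * np.eye(4))
    for m in range(4) for n in range(4)
)
```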
C. Neutrino field operator
Having no charge, the neutrino’s field operator solution ψν(x) is not modified due to the magnetic field. We present
it here for easy reference
    \psi_\nu(x) = \int \frac{d^3\vec{p}}{(2\pi)^3}\, \frac{1}{\sqrt{2E}}\, \sum_s \left[ u^s(p)\, e^{-ip\cdot x}\, \hat{a}^s_{\nu\,\vec{p}} + v^s(p)\, e^{+ip\cdot x}\, \hat{b}^{s\,\dagger}_{\nu\,\vec{p}} \right] ,    (19)

where the creation and annihilation operators obey the conventional anticommutation relations

    \{\hat{a}^s_{\nu\,\vec{p}},\, \hat{a}^{s'\,\dagger}_{\nu\,\vec{p}\,'}\} = \{\hat{b}^s_{\nu\,\vec{p}},\, \hat{b}^{s'\,\dagger}_{\nu\,\vec{p}\,'}\} = (2\pi)^3\, \delta^{ss'}\, \delta^3(\vec{p}-\vec{p}\,') .    (20)
The neutrino bi-spinors follow the standard spin sum rules,

    \sum_s u^s(p)\bar{u}^s(p) = \sum_s v^s(p)\bar{v}^s(p) = \slashed{p} ,    (21)
where we take the Standard Model neutrino mass to be zero.
With "box" normalization the 1-particle states for the neutrino are

    |\nu\rangle = |\vec{p}, s\rangle = \frac{1}{\sqrt{V}}\, \hat{a}^{s\,\dagger}_{\nu\,\vec{p}} |0\rangle ,    (22)

satisfying the completeness relation

    1 = \sum_s \int \frac{d^3\vec{p}}{(2\pi)^3}\, V\, |\vec{p},s\rangle \langle \vec{p},s| .    (23)
III. THE PRODUCTION RATE
The quantity of interest for the process ν → νeē in a background magnetic field is the rate Γ at which the electron-
positron pairs are produced. The production rate is defined as the probability per unit time for creation of pairs,

    \Gamma = \lim_{T\to\infty} \frac{P}{T} ,    (24)

where T is the timescale on which the process is normalized. We begin by finding the probability P of our reaction,

    P = \sum_{n_1,n_2=0}^{\infty} \int \frac{d^3\vec{p}\,'}{(2\pi)^3}\,V \int \frac{d^2\tilde{p}_{1y}}{(2\pi)^2}\,L_xL_z \int \frac{d^2\tilde{p}_{2y}}{(2\pi)^2}\,L_xL_z \sum_{s,s',s_1,s_2} \left| \left\langle \vec{p}\,',s';\, \tilde{p}_{1y},n_1,s_1;\, \tilde{p}_{2y},n_2,s_2 \right| \hat{S} \left| \vec{p},s \right\rangle \right|^2 .    (25)
In Eq. 25 quantities with the index 1 correspond to the electron, those with index 2 to the positron, the primed
quantities to the final neutrino, and the unprimed quantities correspond to the initial neutrino.
A. The scattering matrix
The scattering matrix

    S = \left\langle \vec{p}\,',s';\, \tilde{p}_{1y},n_1,s_1;\, \tilde{p}_{2y},n_2,s_2 \right| \hat{S} \left| \vec{p},s \right\rangle    (26)

naturally depends on the flavor of the neutrino. While the process involving the electron neutrino can advance through
either the charged (W) or neutral (Z) current, the muon (or tau) neutrino can only proceed through the latter. For
this reason we will break the scattering matrix into a neutral component

    S_Z = \left\langle \vec{p}\,',s';\, \tilde{p}_{1y},n_1,s_1;\, \tilde{p}_{2y},n_2,s_2 \right| \hat{S}_Z \left| \vec{p},s \right\rangle    (27a)

and a charged component

    S_W = \left\langle \vec{p}\,',s';\, \tilde{p}_{1y},n_1,s_1;\, \tilde{p}_{2y},n_2,s_2 \right| \hat{S}_W \left| \vec{p},s \right\rangle ,    (27b)
where the scattering operators are defined by the Standard Model Lagrangian as

    \hat{S}_Z = -\frac{e^2}{2^3\cos^2\theta_W\sin^2\theta_W} \int d^4x\; \bar{\psi}_e(x)\gamma^\mu\left(g_V^e - g_A^e\gamma_5\right)\psi_e(x)\, Z_\mu(x) \int d^4x'\; \bar{\psi}_{\nu_l}(x')\gamma^\sigma\left(1-\gamma_5\right)\psi_{\nu_l}(x')\, Z_\sigma(x')    (28a)

    \hat{S}_W = -\frac{e^2}{2^3\sin^2\theta_W} \int d^4x\; \bar{\psi}_e(x)\gamma^\mu\left(1-\gamma_5\right)\psi_{\nu_e}(x)\, W^-_\mu(x) \int d^4x'\; \bar{\psi}_{\nu_e}(x')\gamma^\sigma\left(1-\gamma_5\right)\psi_e(x')\, W^+_\sigma(x') ,    (28b)

and θ_W is the weak-mixing angle, ν_l indicates a neutrino of any flavor, ν_e refers to an electron neutrino, and the vector
and axial-vector couplings for the electron are

    g_V^e = -\frac{1}{2} + 2\sin^2\theta_W    (29a)
    g_A^e = -\frac{1}{2} .    (29b)
In our analysis we will be using incoming neutrino energies that are well below the rest energies of the Z and W
bosons. Therefore, we can safely make the 4-fermion effective coupling approximation to the Z and W propagators,

    \langle 0 | T\left(Z_\mu(x)Z_\sigma(x')\right) | 0 \rangle \to \delta^4(x-x')\,\frac{g_{\mu\sigma}}{m_Z^2}    (30a)

    \langle 0 | T\left(W^-_\mu(x)W^+_\sigma(x')\right) | 0 \rangle \to \delta^4(x-x')\,\frac{g_{\mu\sigma}}{m_W^2} .    (30b)
After making this approximation our expressions for the scattering operators simplify to

    \hat{S}_Z = -\frac{G_F}{\sqrt{2}} \int d^4x\; \bar{\psi}_e(x)\gamma^\mu\left(g_V^e - g_A^e\gamma_5\right)\psi_e(x)\; \bar{\psi}_{\nu_l}(x)\gamma_\mu\left(1-\gamma_5\right)\psi_{\nu_l}(x)    (31a)

    \hat{S}_W = -\frac{G_F}{\sqrt{2}} \int d^4x\; \bar{\psi}_e(x)\gamma^\mu\left(1-\gamma_5\right)\psi_{\nu_e}(x)\; \bar{\psi}_{\nu_e}(x)\gamma_\mu\left(1-\gamma_5\right)\psi_e(x) ,    (31b)

where G_F/\sqrt{2} = e^2/(8\sin^2\theta_W\, m_W^2), and we have made use of the fact that \cos^2\theta_W = m_W^2/m_Z^2.
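As a quick numerical sanity check of the tree-level relation G_F/√2 = e²/(8 sin²θ_W m_W²) quoted above, one can evaluate it with standard inputs; the values α ≈ 1/137.036, sin²θ_W ≈ 0.231, and m_W ≈ 80.4 GeV used below are assumed reference numbers, not taken from the paper.

```python
import math

alpha = 1.0 / 137.036      # fine-structure constant (assumed value)
sin2_thetaW = 0.231        # weak mixing angle sin^2(theta_W) (assumed value)
m_W = 80.4                 # W-boson mass in GeV (assumed value)

e2 = 4.0 * math.pi * alpha                               # e^2 in natural units
G_F = math.sqrt(2.0) * e2 / (8.0 * sin2_thetaW * m_W**2) # GeV^-2
# Tree-level estimate, close to the measured G_F ~ 1.166e-5 GeV^-2.
```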
After substituting the scattering operators (Eqs. (31)) into the expressions for the components of the scattering
matrix (Eqs. (27)), we can use our results from sections II A and II C to write the components in the form of

    S_{Z/W} = \frac{i(2\pi)^3\,\delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right)}{L_xL_zV}\,\mathcal{M}_{Z/W} ,    (32)

where

    \mathcal{M}_Z = \frac{G_F}{\sqrt{2}}\sqrt{\frac{(E_{n_1}+m_e)(E_{n_2}+m_e)}{2^4\,EE'E_{n_1}E_{n_2}}}\; \bar{u}^{s'}(p')\gamma^\mu\left(1-\gamma_5\right)u^s(p) \int dy\; e^{i(p_y-p'_y)y}\; \bar{u}^{s_1}(\tilde{p}_{1y},n_1,y)\,\gamma_\mu\left(g_V^e-g_A^e\gamma_5\right)v^{s_2}(\tilde{p}_{2y},n_2,y)    (33a)

    \mathcal{M}_W = -\frac{G_F}{\sqrt{2}}\sqrt{\frac{(E_{n_1}+m_e)(E_{n_2}+m_e)}{2^4\,EE'E_{n_1}E_{n_2}}} \int dy\; e^{i(p_y-p'_y)y}\; \bar{u}^{s'}(p')\gamma^\mu\left(1-\gamma_5\right)v^{s_2}(\tilde{p}_{2y},n_2,y)\; \bar{u}^{s_1}(\tilde{p}_{1y},n_1,y)\,\gamma_\mu\left(1-\gamma_5\right)u^s(p) .    (33b)
The reversal of sign on Eq. (33b) relative to Eq. (33a) is from the anticommutation of the field operators. The
scattering amplitude for the charged component MW can be transformed into the form of the neutral component
MZ by making use of a Fierz rearrangement formula
    \left[\bar{u}_1\gamma_\mu\left(1-\gamma_5\right)u_2\right]\left[\bar{u}_3\gamma^\mu\left(1-\gamma_5\right)u_4\right] = -\left[\bar{u}_1\gamma_\mu\left(1-\gamma_5\right)u_4\right]\left[\bar{u}_3\gamma^\mu\left(1-\gamma_5\right)u_2\right] ,    (34)
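For c-number spinors the rearrangement (34), including its overall minus sign, can be verified numerically. The sketch below (illustrative only; a NumPy environment, random complex spinors, and the hypothetical helper name `vA` are assumptions of this example) checks the identity in the Bjorken–Drell representation.

```python
import numpy as np

rng = np.random.default_rng(1)

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

g = [np.block([[I2, Z2], [Z2, -I2]])] + [
    np.block([[Z2, si], [-si, Z2]]) for si in (s1, s2, s3)
]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
one_minus_g5 = np.eye(4) - g5
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def vA(u, v):
    """4-vector of bilinears  ubar gamma^mu (1 - gamma_5) v  for c-number spinors."""
    ubar = u.conj() @ g[0]
    return np.array([ubar @ g[mu] @ one_minus_g5 @ v for mu in range(4)])

u1, u2, u3, u4 = (rng.normal(size=4) + 1j * rng.normal(size=4) for _ in range(4))

lhs = vA(u1, u2) @ metric @ vA(u3, u4)
rhs = -(vA(u1, u4) @ metric @ vA(u3, u2))  # Eq. (34): swapped spinors, minus sign
```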
such that

    \mathcal{M}_W = \frac{G_F}{\sqrt{2}}\sqrt{\frac{(E_{n_1}+m_e)(E_{n_2}+m_e)}{2^4\,EE'E_{n_1}E_{n_2}}}\; \bar{u}^{s'}(p')\gamma^\mu\left(1-\gamma_5\right)u^s(p) \int dy\; e^{i(p_y-p'_y)y}\; \bar{u}^{s_1}(\tilde{p}_{1y},n_1,y)\,\gamma_\mu\left(1-\gamma_5\right)v^{s_2}(\tilde{p}_{2y},n_2,y) .    (35)
With the rearrangement of M_W in Eq. (35), we can now express the scattering amplitude in terms of the type of
incoming neutrino. The muon neutrino can only proceed through exchange of a Z boson, so its scattering amplitude
is just that of M_Z:

    \mathcal{M}_{\nu_\mu} = \mathcal{M}_Z = -\frac{G_F}{2\sqrt{2}}\sqrt{\frac{(E_{n_1}+m_e)(E_{n_2}+m_e)}{2^4\,EE'E_{n_1}E_{n_2}}}\; \bar{u}^{s'}(p')\gamma^\mu\left(1-\gamma_5\right)u^s(p) \int dy\; e^{i(p_y-p'_y)y}\; \bar{u}^{s_1}(\tilde{p}_{1y},n_1,y)\,\gamma_\mu\left(G_V^- - \gamma_5\right)v^{s_2}(\tilde{p}_{2y},n_2,y) .    (36)

The scattering matrix for a tau neutrino, and the subsequent decay rate, is exactly the same as for the muon neutrino.
We will keep the notation as ν_µ for simplicity.
The electron neutrino has both a Z-boson exchange component and a W-boson exchange component. Therefore
we must add the amplitudes to find its scattering amplitude:

    \mathcal{M}_{\nu_e} = \mathcal{M}_Z + \mathcal{M}_W = \frac{G_F}{2\sqrt{2}}\sqrt{\frac{(E_{n_1}+m_e)(E_{n_2}+m_e)}{2^4\,EE'E_{n_1}E_{n_2}}}\; \bar{u}^{s'}(p')\gamma^\mu\left(1-\gamma_5\right)u^s(p) \int dy\; e^{i(p_y-p'_y)y}\; \bar{u}^{s_1}(\tilde{p}_{1y},n_1,y)\,\gamma_\mu\left(G_V^+ - \gamma_5\right)v^{s_2}(\tilde{p}_{2y},n_2,y) .    (37)

Note that the scattering amplitudes for electron (Eq. (37)) and non-electron neutrinos (Eq. (36)) depend on a generalized
vector coupling G_V^\pm defined by

    G_V^\pm = 1 \pm 4\sin^2\theta_W .    (38)
We see that the scattering amplitudes for an incoming electron neutrino and an incoming muon neutrino differ
only in the value of the generalized vector coupling and in an overall sign, and the overall sign is rendered
meaningless once the amplitude is squared. Therefore, we choose to make no distinction between the two processes,
other than keeping the generalized vector coupling as G_V^\pm, until we discuss the results in section IV.
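A quick numerical sketch of the couplings in Eqs. (29) and (38); the value sin²θ_W ≈ 0.231 below is an assumed reference number, not from the paper.

```python
sin2_thetaW = 0.231              # weak mixing angle (assumed value)

g_V = -0.5 + 2.0 * sin2_thetaW   # electron vector coupling, Eq. (29a)
g_A = -0.5                       # electron axial-vector coupling, Eq. (29b)

G_plus = 1.0 + 4.0 * sin2_thetaW   # electron-neutrino coupling, Eq. (38)
G_minus = 1.0 - 4.0 * sin2_thetaW  # muon/tau-neutrino coupling, Eq. (38)
```

Since the trace results of appendix B scale with combinations like (G_V^± + 1)² and (G_V^± − 1)², the large disparity G⁺ ≈ 1.92 versus G⁻ ≈ 0.076 is what makes the electron-neutrino rates in FIGs. 2–7 systematically larger than the muon-neutrino ones.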
B. The form of the production rate
Having determined the scattering matrix S and scattering amplitude M in section III A, we can now make a series
of substitutions of those results to find the expression for the production rate Γ. We begin by substituting the form
of the scattering matrix (Eq. (32)) into the expression for the production rate (Eq. (24)):

    \Gamma = \lim_{T\to\infty}\frac{P}{T} = \lim_{T,V\to\infty} \frac{1}{T} \sum_{n_1,n_2=0}^{\infty} \int \frac{d^3\vec{p}\,'}{(2\pi)^3}V \int \frac{d^2\tilde{p}_{1y}}{(2\pi)^2}L_xL_z \int \frac{d^2\tilde{p}_{2y}}{(2\pi)^2}L_xL_z \sum_{s,s',s_1,s_2} \left| \left\langle \vec{p}\,',s';\, \tilde{p}_{1y},n_1,s_1;\, \tilde{p}_{2y},n_2,s_2 \right| \hat{S} \left| \vec{p},s \right\rangle \right|^2

          = \lim_{T,V\to\infty} \frac{1}{T} \sum_{n_1,n_2=0}^{\infty} \int \frac{d^3\vec{p}\,'}{(2\pi)^3}V \int \frac{d^2\tilde{p}_{1y}}{(2\pi)^2}L_xL_z \int \frac{d^2\tilde{p}_{2y}}{(2\pi)^2}L_xL_z \sum_{s,s',s_1,s_2} \left| \frac{i(2\pi)^3\,\delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right)}{L_xL_zV}\,\mathcal{M} \right|^2 ,

so that

    \Gamma = \lim_{T,V\to\infty} (2\pi TV)^{-1} \sum_{n_1,n_2=0}^{\infty} \int d^3\vec{p}\,' \int d^2\tilde{p}_{1y} \int d^2\tilde{p}_{2y}\; \left[\delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right)\right]^2 \overline{|\mathcal{M}|^2} ,    (39)

where \overline{|\mathcal{M}|^2} is the square of the scattering amplitude after summing over spins,

    \overline{|\mathcal{M}|^2} = \sum_{s,s',s_1,s_2} |\mathcal{M}|^2 .    (40)
We can simplify the square of the 3-dimensional delta function by expressing one of the delta functions as an
integral over the space-time coordinates:

    \left[\delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right)\right]^2 = \delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right) \frac{1}{(2\pi)^3}\int d^3\tilde{y}\; e^{i(p-p'-p_1-p_2)\cdot\tilde{y}} .    (41)

By using the remaining delta function to reduce the exponential to unity, we can write the integral in terms
of the dimensions of our normalization "box":

    \left[\delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right)\right]^2 = \delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right) \frac{1}{(2\pi)^3}\int d^3\tilde{y} = \delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right) \frac{TL_xL_z}{(2\pi)^3} .    (42)

With the above result for the square of the delta function, the production rate in Eq. (39) simplifies to

    \Gamma = \lim_{L_y\to\infty} \sum_{n_1,n_2=0}^{\infty} \int d^3\vec{p}\,' \int d^2\tilde{p}_{1y} \int d^2\tilde{p}_{2y}\; \delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right) \frac{\overline{|\mathcal{M}|^2}}{(2\pi)^4 L_y} .    (43)
The square of the scattering amplitude goes as the product of two traces. Using our results for the summations
over spin from Eqs. (15) and (21),

    \overline{|\mathcal{M}|^2} = \frac{G_F^2}{2^9\,EE'E_{n_1}E_{n_2}} \int dy\; e^{i(p_y-p'_y)y} \int dy'\; e^{-i(p_y-p'_y)y'}\; \mathrm{Tr}\!\left[\slashed{p}\,\gamma^\sigma\left(1-\gamma_5\right)\slashed{p}'\gamma^\mu\left(1-\gamma_5\right)\right]

    \times \mathrm{Tr}\Big[ \gamma_\sigma\left(G_V^\pm-\gamma_5\right) \Big\{ \big[m_e(1-\sigma^3)+\slashed{p}_{1\parallel}+\slashed{q}_{1\parallel}\gamma_5\big]\, I_{n_1}(\xi'_{-,1})I_{n_1}(\xi_{-,1}) + \big[m_e(1+\sigma^3)+\slashed{p}_{1\parallel}-\slashed{q}_{1\parallel}\gamma_5\big]\, I_{n_1-1}(\xi'_{-,1})I_{n_1-1}(\xi_{-,1})
        + \sqrt{2n_1eB}\,\big[(\gamma^1+i\gamma^2)\, I_{n_1-1}(\xi'_{-,1})I_{n_1}(\xi_{-,1}) + (\gamma^1-i\gamma^2)\, I_{n_1}(\xi'_{-,1})I_{n_1-1}(\xi_{-,1})\big] \Big\}

    \times \gamma_\mu\left(G_V^\pm-\gamma_5\right) \Big\{ \big[-m_e(1-\sigma^3)+\slashed{p}_{2\parallel}+\slashed{q}_{2\parallel}\gamma_5\big]\, I_{n_2}(\xi_{+,2})I_{n_2}(\xi'_{+,2}) + \big[-m_e(1+\sigma^3)+\slashed{p}_{2\parallel}-\slashed{q}_{2\parallel}\gamma_5\big]\, I_{n_2-1}(\xi_{+,2})I_{n_2-1}(\xi'_{+,2})
        + \sqrt{2n_2eB}\,\big[(\gamma^1+i\gamma^2)\, I_{n_2-1}(\xi_{+,2})I_{n_2}(\xi'_{+,2}) + (\gamma^1-i\gamma^2)\, I_{n_2}(\xi_{+,2})I_{n_2-1}(\xi'_{+,2})\big] \Big\} \Big] ,    (44)

where the two braces are the spin sums of Eqs. (15a) and (15b) evaluated at (\tilde{p}_{1y}, n_1) and (\tilde{p}_{2y}, n_2), respectively.
The space-time dependence of Eq. (44) can be factored into terms like

    I_{n,m} = \int dy\; e^{i(p_y-p'_y)y}\, I_n(\xi_{-,1})\, I_m(\xi_{+,2})    (45)
    I^*_{n,m} = \int dy'\; e^{-i(p_y-p'_y)y'}\, I_n(\xi'_{-,1})\, I_m(\xi'_{+,2}) ,    (46)

where the I_{n,m} are functions of the momenta in the problem.
We have included a detailed calculation for the general form of I_{n,m} in appendix A, but we present the result here:

    I_{n,m} = \begin{cases} \sqrt{\dfrac{n!}{m!}}\; e^{-\eta^2/2}\, e^{i\phi_0}\, (\eta_x+i\eta_y)^{m-n}\, L_n^{m-n}(\eta^2) , & m \ge n \ge 0 \\[6pt] \sqrt{\dfrac{m!}{n!}}\; e^{-\eta^2/2}\, e^{i\phi_0}\, (-\eta_x+i\eta_y)^{n-m}\, L_m^{n-m}(\eta^2) , & n \ge m \ge 0 \end{cases}    (47)

where

    \eta_x = \frac{p_{1x}+p_{2x}}{\sqrt{2eB}}    (48)
    \eta_y = \frac{p_y-p'_y}{\sqrt{2eB}}    (49)
    \phi_0 = \frac{(p_y-p'_y)(p_{1x}-p_{2x})}{2eB}    (50)
    \eta^2 = \eta_x^2 + \eta_y^2 ,    (51)

and L_n^{m-n}(\eta^2) are the associated Laguerre polynomials.
The full results of the traces and their subsequent contraction are nontrivial but have been included in appendix B. It
is important to note, however, that the only dependence on the x-components of the electron and positron momenta
is that which appears in Eq. (47) for I_{n,m}. Furthermore, we notice that all terms in the averaged square of the scattering
amplitude have factors that go as a product of I_{n,m} and I^*_{n',m'}. Therefore, the phase e^{i\phi_0} in Eq. (47) will vanish
when this product is taken. The only remaining x-dependence of these two momenta appears as their sum in the
parameter \eta_x = (p_{1x}+p_{2x})/\sqrt{2eB}. This helps to simplify the phase-space integral for our production rate (Eq. (43)),
which is proportional to

    \Gamma \propto \lim_{L_y\to\infty} \frac{1}{L_y} \int dp_{1x} \int dp_{2x} .    (52)

If we make a change of variable from the x-component of the positron momentum p_{2x} to the parameter \eta_x, the
relationship in Eq. (52) is rewritten as

    \Gamma \propto \lim_{L_y\to\infty} \frac{\sqrt{2eB}}{L_y} \int dp_{1x} \int d\eta_x .    (53)

Because there is no longer any explicit dependence on the x-component of the electron's momentum p_{1x} in the
averaged square of our scattering amplitude, we can simply evaluate the integral over p_{1x}.
To evaluate this integral we must determine its limits. As discussed previously, we have elected to use "box" normal-
ization on our states. This means that our particle is confined to a large box with dimensions L_x, L_y, and L_z. The
careful reader will note that we have already taken the limit that these dimensions go
to infinity in some places, particularly in Eq. (A5), but it is imperative that we be cautious here, as we could naively
evaluate the integral over p_{1x} to be infinite.
Physically, the charged particles in our final state act as harmonic oscillators circling about the magnetic field lines.
While they are free to slide along the field lines in the z-direction, the particles are confined to circular orbits in the x- and
y-directions no larger than the dimensions of the box. For a charged particle undergoing circular motion in a constant
magnetic field, the x-component of momentum is related to the y-position vector by

    p_x = -eQBy ,    (54)

where Q is the charge of the particle in units of the proton charge e = |e|. Therefore, the limits on p_{1x} are proportional
to the limits on the size of our box in the y-direction. The integral over the electron's momentum in the x-direction is

    \int_{-eBL_y/2}^{eBL_y/2} dp_{1x} = eBL_y ,    (55)

and the result helps to cancel the factor of L_y that already appears in the form of the production rate. We can now
safely take the limit that our box has infinite size, and the production rate now has the form

    \Gamma = \sum_{n_1,n_2=0}^{\infty} \int d^3\vec{p}\,' \int dp_{1z} \int dp_{2z} \int d\eta_x\; \sqrt{2eB}\; eB\; \delta^3\!\left(\tilde{p}_y-\tilde{p}'_y-\tilde{p}_{y,1}-\tilde{p}_{y,2}\right) \frac{\overline{|\mathcal{M}|^2}}{(2\pi)^4} .    (56)
IV. RESULTS
In our expression for the total production rate (Eq. (56)), one will notice that there is a sum over all possible
values of the Landau levels. As a consequence of energy conservation, an upper limit exists for the summation over
the electron's Landau level n_1,

    E = E' + E_{n_1} + E_{n_2} \;\Rightarrow\; E \ge E_{n_1} + m_e \;\Rightarrow\; E - m_e \ge \sqrt{m_e^2 + 2n_1eB} \;\Rightarrow\; n_1 \le \frac{E(E-2m_e)}{2eB} ,    (57)

and a similar one for the positron's Landau level,

    E = E' + E_{n_1} + E_{n_2} \;\Rightarrow\; E \ge \sqrt{m_e^2+2n_1eB} + \sqrt{m_e^2+2n_2eB} \;\Rightarrow\; n_2 \le \frac{\left(E-\sqrt{m_e^2+2n_1eB}\right)^2 - m_e^2}{2eB} .    (58)

These relationships help to constrain the extent of the summations. Physically, these constraints can be thought of
as limits on the size of the electron's (or positron's) effective mass, where the electron (or positron) occupying the nth
Landau level has an effective mass

    m^* = \sqrt{m_e^2 + 2neB}    (59)

and energy

    E_n = \sqrt{p_z^2 + m^{*\,2}} .    (60)

For low incoming neutrino energies and large magnetic field strengths (eB > m_e^2), the constraints put very tight
bounds on the limits of the summations. However, higher incoming energies and low magnetic field strengths impose
limits that still require a great deal of computation time. For instance, at threshold (E = 2m_e) there can exist only
one possible configuration of Landau levels (n_1 = n_2 = 0), while at an energy ten times that of threshold and a
magnetic field equal to the critical field (B = B_c = m_e^2/e = 4.414 × 10^13 G) there are nearly 7000 possible states. At
the same magnetic field but an energy that is 100 times that of threshold, there are almost 70 million states. However,
for incoming neutrino energies less than a certain value,

    E < m_e + \sqrt{m_e^2 + 2eB} ,    (61)

only the lowest Landau level is occupied, n_1 = n_2 = 0. Even at energies above, yet near, this value we expect that
production of electrons and positrons in the n_1 = n_2 = 0 level is still the dominant mode of production because it has
more phase space available.
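The state-count estimates quoted above (one configuration at threshold, nearly 7000 at ten times threshold with B = B_c) follow directly from the cutoffs of Eqs. (57) and (58). The function below is an illustrative sketch of that counting, not the paper's code; units with m_e = 1 and eB = B/B_c are assumed.

```python
import math

def allowed_states(E, eB, m_e=1.0):
    """Count Landau-level pairs (n1, n2) allowed by Eqs. (57) and (58).
    E is the incoming neutrino energy; units m_e = 1, eB = B/B_c."""
    count = 0
    n1_max = int(E * (E - 2.0 * m_e) / (2.0 * eB))        # Eq. (57)
    for n1 in range(n1_max + 1):
        rest = E - math.sqrt(m_e**2 + 2.0 * n1 * eB)      # energy left for the positron
        if rest < m_e:
            continue
        n2_max = int((rest**2 - m_e**2) / (2.0 * eB))     # Eq. (58)
        count += n2_max + 1
    return count
```

With eB = 1 (B = B_c), allowed_states(2.0, 1.0) returns 1, allowed_states(20.0, 1.0) falls in the several-thousand range, and allowed_states(200.0, 1.0) in the tens of millions, matching the growth described in the text.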
Production rates at the 0, 0 Landau level are presented in FIG. 2 for both the electron and muon neutrinos. (All
of the results for muon-type neutrinos are valid for tau-type neutrinos.) One interesting feature of these results is
the flattening out of the rates at higher energies. The energy region at which this flattening begins increases with
increasing magnetic field strength, and it appears to be in the neighborhood of energies just above the limit set in
Eq. (61). At energies in this regime we expect that modes of production into other Landau levels are stimulated,
which helps to explain why the behavior of the 0, 0 production rates change above this area.
We should note that the results given in this work are all for an incoming neutrino traveling transversely to the
magnetic field. The rates are maximized in this case, as can be seen in the example found in FIG. 3 for an initial
electron neutrino with energy Eνe = 20 m_e in a magnetic field equal to the critical field B = B_c = m_e²/e.
For comparison purposes, the production rates for other combinations of Landau levels have been calculated. These
include the 1, 0 and 0, 1 cases (FIG. 4), the 20, 0 and 0, 20 cases (FIG. 5), and the 10, 10 case (FIG. 6). The first
noteworthy feature of these results is that the production rates are decreasing at higher Landau levels. Because the
energy required to create the pair goes as

    E_{\mathrm{pair}} = E_{n_1} + E_{n_2} = \sqrt{p_{1z}^2 + 2n_1eB + m_e^2} + \sqrt{p_{2z}^2 + 2n_2eB + m_e^2} \;\ge\; \sqrt{2n_1eB + m_e^2} + \sqrt{2n_2eB + m_e^2} ,
[FIG. 2 panels (log-log plots of Γ versus Eν for B/B_c = 0.1, 1, 10, 100, 1000) not reproducible in text.]
(a) Incoming electron neutrino (νe → νe eē)
(b) Incoming muon (tau) neutrino (νµ → νµ eē)
FIG. 2: Production rates for the n_1 = n_2 = 0 Landau levels, where Γ is the rate of production, Eν is the energy of the incoming
neutrino, and the magnetic field is measured relative to the critical field B_c = 4.414 × 10^13 G. All plots are for a neutrino that
is perpendicularly incident to the magnetic field.
the available phase space for the process should decrease in the order 0, 0 → 0, 1 → 0, 20 → 10, 10. And as can
be seen in FIGS. 2, 4, 5, and 6, the production rates fall off accordingly.
Another interesting feature of these results is the apparent preference for the creation of electrons in the highest of
the two Landau levels. That is, the rate of production is larger for the state n1 = i , n2 = 0 than for n1 = 0 , n2 = i
(FIGS. 4 and 5). This behavior is especially significant over the range of incoming neutrino energies near its threshold
value for creating pairs in the given states. Though the i, 0 production rate is larger and increases more quickly
in this “near-threshold” range than its 0, i counterpart, both curves plateau at higher energies, and their difference
approaches zero. This difference is presumably caused by the positron having to share the W ’s energy with the final
electron-type neutrino. This also explains why such an effect is not seen for muon and tau-type neutrinos that only
proceed through the neutral current reaction.
It was mentioned in section I that previous authors have considered this process under two limiting cases [15]. One
[FIG. 3 plot ("Directional Dependence": Γ versus sin²θ for νe → νe eē and νµ → νµ eē) not reproducible in text.]
FIG. 3: The production rate's dependence on the direction of the incoming neutrino. The production rate is for the 0, 0 Landau
level with a neutrino of energy Eν = 20 m_e traveling at an angle θ relative to a magnetic field of strength equal to the critical
field B = B_c. Data are included for both an incoming electron-type neutrino (solid line) and a muon-type neutrino (dashed line).
If we average over θ, then the average production rate is 1.38 × 10⁻¹⁶ cm⁻¹ for electron-type neutrinos or 2.94 × 10⁻¹⁷ cm⁻¹
for muon- or tau-type.
is when the square of the energy of the initial-state neutrino and the magnetic field strength satisfy the conditions
E²ν ≫ eB ≫ m²e. Under these conditions many possible Landau levels could be stimulated, offering a multitude of
production modes. Therefore, it would be inappropriate to compare their expression to our results for a specific set
of Landau levels. However, the second limiting case is for eB > E²ν ≫ m²e. This condition is slightly more restrictive
than our condition for the energies below which only the lowest-energy Landau levels are occupied (Eq. (61)). In this
regime our results for the 0, 0 state are the total production rates, and we can compare our results to the expression
derived by the previous authors [15]:
derived by the previous authors [15]
eB E3ν
E2/eB
, (62)
where we have taken the direction of the incoming neutrino to be perpendicular to the magnetic field’s direction.
Results of this comparison are shown in FIG. 7.
The results in FIG. 7 demonstrate the drawbacks of using the approximation in Eq. (62). While the expression is
very simple, it gives only reasonable agreement with the production rate at a magnetic field equal to 100 times that
of the critical field (B = 100 B_c). Here it overestimates, at the very least, by a factor of two, and the inclusion of
higher-order corrections makes no significant improvement. One reason for the disagreement at this field strength is
that there is only a very small range of energies that satisfy the condition eB > E²ν ≫ m²e. Therefore at higher field
strengths we should get better agreement, and we do. Closer inspection of FIG. 7 reveals that the differences are
less than a factor of three for neutrino energies in the range 2 MeV < Eν < 20 MeV, and the expression successfully
provides a good order-of-magnitude estimate. Though the estimate will improve at higher magnetic field strengths,
it begins to lose relevance, as there are only a handful of known objects (namely magnetars) that can conceivably
possess fields as high as 10^15 G. Even for these objects, fields stronger than 10^15 G cause instability in the star and
the field begins to diminish [13].
Probing the limiting case Eν ≫ √(eB) is imperative because our present work has already demonstrated nontrivial
deviation from approximate methods for realistic astrophysical magnetic field strengths and neutrino energies near
and below the value √(eB). But, as was mentioned previously, the number of Landau-level states which contribute to
the total production rate grows very rapidly in this higher-energy regime, and we need to sum over these states. Future
work will attempt to do these sums by using an approximation routine that can interpolate between rates for known
sets of Landau levels. This will provide a flexible way to balance accuracy with computation time while determining
when the production rate deviates from its limiting behavior. The significance of these deviations will only be known
when a more complete understanding of the role that neutrino processes play in events such as supernova core-collapse
[FIG. 4 panels (log-log plots of Γ versus Eν for B/B_c = 0.1, 1, 10, 100, 1000) not reproducible in text.]
(a) Incoming electron neutrino (νe → νe eē)
(b) Incoming muon neutrino (νµ → νµ eē)
FIG. 4: Production rates for the n_1 = 0, n_2 = 1 (solid) and n_1 = 1, n_2 = 0 (dashed) Landau levels, where Γ is the rate
of production, Eν is the energy of the incoming neutrino, and the magnetic field is measured relative to the critical field
B_c = 4.414 × 10^13 G.
and in the formation of the resulting neutron star. This work aims to improve that understanding.
Acknowledgments
It is our pleasure to thank Craig Wheeler for several discussions about supernovae and Palash Pal for helping us
to understand Ref. [11]. This work was supported in part by the U.S. Department of Energy under Grant No. DE-
[FIG. 5 panels (log-log plots of Γ versus Eν for B/B_c = 0.1, 1, 10, 100, 1000) not reproducible in text.]
(a) Incoming electron neutrino (νe → νe eē)
(b) Incoming muon neutrino (νµ → νµ eē)
FIG. 5: Production rates for the n_1 = 0, n_2 = 20 (solid) and n_1 = 20, n_2 = 0 (dashed) Landau levels, where Γ is the rate
of production, Eν is the energy of the incoming neutrino, and the magnetic field is measured relative to the critical field
B_c = 4.414 × 10^13 G.
F603-93ER40757 and by the National Science Foundation under Grant PHY-0244789 and PHY-0555544.
[1] G. Gamow and M. Schoenberg, Physical Review 59, 539–547 (1941).
[2] S. W. Bruenn and W. C. Haxton, Astrophysical Journal 376, 678 (1991).
[3] A. Mezzacappa and S. W. Bruenn, Astrophysical Journal 410, 740 (1993).
[4] O. F. Dorofeev, V. N. Rodionov, and I. M. Ternov, JETP Lett. 40, 917 (1984).
[5] D. A. Baiko and D. G. Yakovlev, Astron. Astrophys. 342, 192 (1999), astro-ph/9812071.
[6] A. A. Gvozdev and I. S. Ognev, JETP Lett. 69, 365 (1999), astro-ph/9909154.
[7] P. Arras and D. Lai, Phys. Rev. D60, 043001 (1999), astro-ph/9811371.
[FIG. 6 panels (log-log plots of Γ versus Eν for B/B_c = 0.1, 1, 10, 100, 1000) not reproducible in text.]
(a) Incoming electron neutrino (νe → νe eē)
(b) Incoming muon neutrino (νµ → νµ eē)
FIG. 6: Production rates for the n_1 = n_2 = 10 Landau levels, where Γ is the rate of production, Eν is the energy of the incoming
neutrino, and the magnetic field is measured relative to the critical field B_c = 4.414 × 10^13 G.
[8] H. Duan and Y.-Z. Qian, Phys. Rev. D72, 023005 (2005), astro-ph/0506033.
[9] K. Bhattacharya and P. B. Pal, Pramana 62, 1041 (2004), hep-ph/0209053.
[10] K. Bhattacharya, Ph.D. thesis, Jadavpur University (2004), hep-ph/0407099.
[11] K. Bhattacharya and P. B. Pal, Proc. Ind. Natl. Sci. Acad. 70, 145 (2004), hep-ph/0212118.
[12] T. M. Tinsley, Phys. Rev. D71, 073010 (2005), hep-ph/0412014.
[13] C. Thompson and R. C. Duncan, Astrophysical Journal 408, 194 (1993).
[14] A. A. Gvozdev, A. V. Kuznetsov, N. V. Mikheev, and L. A. Vassilevskaya, Phys. Atom. Nucl. 61, 1031 (1998), hep-ph/9710219.
[15] A. V. Kuznetsov and N. V. Mikheev, Phys. Lett. B394, 123 (1997), hep-ph/9612312.
[16] G. B. Arfken and H. J. Weber, Mathematical Methods for Physicists (Academic Press, San Diego, CA, 1995), 4th ed.
[17] J. D. Bjorken and S. D. Drell, Relativistic Quantum Mechanics, International Series in Pure and Applied Physics (McGraw-
Hill, Inc., New York, 1964).
[FIG. 7 panels (total rate and approximation versus Eν for B/B_c = 100 and 1000) not reproducible in text.]
(a) Incoming electron neutrino (νe → νe eē)
(b) Incoming muon neutrino (νµ → νµ eē)
FIG. 7: The total production rate (solid lines) and its approximation (dashed lines) [15] for energies and magnetic field satisfying
the condition E²ν < eB. Eν is the energy of the incoming neutrino, m_e is the mass of the electron, and the magnetic field is
measured relative to the critical field B_c = 4.414 × 10^13 G.
APPENDIX A: CALCULATION OF In,m
In section III B we discuss the fact that the squared scattering amplitude has coefficients that are integrals over the
space-time coordinate y,

    I_{n,m} = \int dy\; e^{i(p_y-p'_y)y}\, I_n(\xi_{-,1})\, I_m(\xi_{+,2}) .    (A1)
In this appendix we will derive the result after integrating over y.
By defining new parameters

    \zeta = \sqrt{eB}\; y    (A2)
    \zeta_i = p_{ix}/\sqrt{eB}    (A3)
    \zeta_0 = (p_y - p'_y)/\sqrt{eB}    (A4)

and using the definition of ξ (Eq. (11)), we can make a change of variable from y to ζ and rewrite I_{n,m} as

    I_{n,m} = \int d\zeta\; e^{i\zeta_0\zeta}\, I_n(\zeta-\zeta_1)\, I_m(\zeta+\zeta_2) ,    (A5)
where the limits of integration are ±∞ because we have taken the limit of L_y as it approaches ∞. The I_n(ξ) in
Eq. (10) depend on the Hermite polynomials H_n(ξ), which can be represented as a contour integral in the following
way [16, Eq. (13.8)]:

    H_n(\xi) = \frac{n!}{2\pi i}\oint dt\; t^{-n-1}\, e^{-t^2+2t\xi} .    (A6)
Substituting this representation of the Hermite polynomial into Eq. (10) allows us to write I_{n,m} as

    I_{n,m} = \left(2^{n+m}\, n!\, m!\, \pi\right)^{-1/2} \int d\zeta\; e^{i\zeta_0\zeta}\, e^{-(\zeta-\zeta_1)^2/2}\, e^{-(\zeta+\zeta_2)^2/2}\; \frac{n!}{2\pi i}\oint dt\; t^{-n-1}\, e^{-t^2+2t(\zeta-\zeta_1)}\; \frac{m!}{2\pi i}\oint ds\; s^{-m-1}\, e^{-s^2+2s(\zeta+\zeta_2)} .    (A7)
Next, we isolate all of the ζ dependence, interchange the order of the integrals, and perform the integration over ζ:

    \mathrm{Int}_1 = \int d\zeta\; \exp\!\left[-\zeta^2 + \zeta\left(\zeta_1-\zeta_2+i\zeta_0+2t+2s\right)\right] = \sqrt{\pi}\, \exp\!\left[\left(\zeta_1-\zeta_2+i\zeta_0+2t+2s\right)^2/4\right] .    (A8)
Substitution of this result back into Eq. (A7) gives

    I_{n,m} = \left(2^{n+m}\, n!\, m!\right)^{-1/2} e^{-\left((\zeta_1+\zeta_2)^2+\zeta_0^2\right)/4}\, e^{i\zeta_0(\zeta_1-\zeta_2)/2}\; \frac{n!}{2\pi i}\oint dt\; t^{-n-1}\, e^{t(-\zeta_1-\zeta_2+i\zeta_0)}\; \frac{m!}{2\pi i}\oint ds\; s^{-m-1}\, e^{s(\zeta_1+\zeta_2+i\zeta_0)}\, e^{2st} .    (A9)
If m ≥ n, then we can perform the integration over s first,

    \mathrm{Int}_2 = \frac{m!}{2\pi i}\oint ds\; s^{-m-1}\, e^{s(\zeta_1+\zeta_2+i\zeta_0+2t)} = \left(\zeta_1+\zeta_2+i\zeta_0+2t\right)^m ,    (A10)

such that

    I_{n,m} = \left(2^{n+m}\, n!\, m!\right)^{-1/2} \exp\!\left[-\frac{(\zeta_1+\zeta_2)^2+\zeta_0^2}{4}\right] \exp\!\left[\frac{i\zeta_0(\zeta_1-\zeta_2)}{2}\right] \frac{n!}{2\pi i}\oint dt\; t^{-n-1}\left(\zeta_1+\zeta_2+i\zeta_0+2t\right)^m \exp\!\left[t(-\zeta_1-\zeta_2+i\zeta_0)\right] .    (A11)
The integration over t is made easier by making the following changes of variable:

    \eta_x = \frac{\zeta_1+\zeta_2}{\sqrt{2}} = \frac{p_{1x}+p_{2x}}{\sqrt{2eB}}    (A12)
    \eta_y = \frac{\zeta_0}{\sqrt{2}} = \frac{p_y-p'_y}{\sqrt{2eB}}    (A13)
    \phi_0 = \frac{\zeta_0(\zeta_1-\zeta_2)}{2} = \frac{(p_y-p'_y)(p_{1x}-p_{2x})}{2eB}    (A14)
    \eta_\pm = \eta_x \pm i\eta_y    (A15)
    \eta^2 = \eta_+\eta_- = \eta_x^2 + \eta_y^2    (A16)
    t = \frac{u-\eta_+}{\sqrt{2}} .    (A17)

The integration over the variable t can now be written as

    \mathrm{Int}_3 = \frac{n!}{2\pi i}\oint \frac{du}{\sqrt{2}}\; \left(\sqrt{2}\,u\right)^m \left(\frac{u-\eta_+}{\sqrt{2}}\right)^{-n-1} e^{-(u-\eta_+)\eta_-}
        = 2^{(n+m)/2}\, e^{\eta^2}\; \frac{n!}{2\pi i}\oint du\; \frac{u^m\, e^{-u\eta_-}}{(u-\eta_+)^{n+1}}
        = 2^{(n+m)/2}\, e^{\eta^2}\, \left.\frac{d^n}{du^n}\!\left(u^m e^{-u\eta_-}\right)\right|_{u=\eta_+}
        = n!\; 2^{(n+m)/2}\, (\eta_+)^{m-n}\, L_n^{m-n}(\eta^2) ,
where we have used the Rodrigues representation for the associated Laguerre polynomials [16, Eq. (13.47)],

    L_n^k(x) = \frac{1}{n!}\, e^x\, x^{-k}\, \frac{d^n}{dx^n}\!\left(x^{n+k}\, e^{-x}\right) , \qquad n, k \ge 0 .    (A18)

With this result we can now express the I_{n,m} as

    I_{n,m} = \sqrt{\frac{n!}{m!}}\; e^{-\eta^2/2}\, e^{i\phi_0}\, (\eta_x + i\eta_y)^{m-n}\, L_n^{m-n}(\eta^2) , \qquad m \ge n \ge 0 .    (A19)

For the case when n > m we first integrate over t in Eq. (A9) and follow a similar procedure to find

    I_{n,m} = \sqrt{\frac{m!}{n!}}\; e^{-\eta^2/2}\, e^{i\phi_0}\, (-\eta_x + i\eta_y)^{n-m}\, L_m^{n-m}(\eta^2) , \qquad n \ge m \ge 0 .    (A20)
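The closed forms (A19)–(A20) can be spot-checked numerically against the defining integral (A5). The sketch below (illustrative, not from the paper; a NumPy environment is assumed) compares the two for n = 1, m = 2 with arbitrary sample values of ζ₁, ζ₂, ζ₀, using L₁¹(x) = 2 − x.

```python
import math
import numpy as np

def I(m, xi):
    """I_m(xi) of Eq. (10); Hermite polynomials via the standard recurrence."""
    h_prev = np.ones_like(xi)
    h = 2.0 * xi
    if m == 0:
        h = h_prev
    else:
        for k in range(1, m):
            h_prev, h = h, 2.0 * xi * h - 2.0 * k * h_prev
    norm = math.sqrt(2.0 ** m * math.factorial(m) * math.sqrt(math.pi))
    return np.exp(-xi ** 2 / 2.0) * h / norm

n, m = 1, 2
z1, z2, z0 = 0.3, 0.5, 0.7    # sample values of zeta_1, zeta_2, zeta_0 (assumed)

# Direct numerical evaluation of Eq. (A5)
zeta = np.linspace(-20.0, 20.0, 80001)
f = np.exp(1j * z0 * zeta) * I(n, zeta - z1) * I(m, zeta + z2)
numeric = np.sum(f) * (zeta[1] - zeta[0])

# Closed form, Eq. (A19), with L_1^1(x) = 2 - x
eta_x = (z1 + z2) / math.sqrt(2.0)
eta_y = z0 / math.sqrt(2.0)
phi0 = z0 * (z1 - z2) / 2.0
eta2 = eta_x ** 2 + eta_y ** 2
closed = (math.sqrt(math.factorial(n) / math.factorial(m))
          * math.exp(-eta2 / 2.0) * np.exp(1j * phi0)
          * (eta_x + 1j * eta_y) ** (m - n) * (2.0 - eta2))
```

Because the integrand decays like a Gaussian, the simple Riemann sum already matches the closed form to many digits.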
APPENDIX B: RESULT OF TRACE
We can express the trace result for the average of the squared scattering amplitude from Eq. (44) as a sum of
terms,

    \overline{|\mathcal{M}|^2} = \frac{G_F^2}{2^9\, EE'E_{n_1}E_{n_2}} \sum_{i=1}^{16} A_i\, T_i ,    (B1)

where the coefficients A_i depend on the products of I_{n,m} and I^*_{n',m'} defined in Eq. (47) and presented in appendix A,
and the T_i are the parts that depend on the contraction of the traces in Eq. (44): each T_i below denotes the displayed
electron-positron trace contracted, through the indices σ and µ, with the neutrino trace
\mathrm{Tr}[\slashed{p}\gamma^\sigma(1-\gamma_5)\slashed{p}'\gamma^\mu(1-\gamma_5)]. The results are as follows:

    A_1 = I_{n_1,n_2}\,I^*_{n_1,n_2}    (B2)
    T_1 = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\big(m_e(1-\sigma^3)+\slashed{p}_{1\parallel}+\slashed{q}_{1\parallel}\gamma_5\big)\gamma_\mu(G_V^\pm-\gamma_5)\big(-m_e(1-\sigma^3)+\slashed{p}_{2\parallel}+\slashed{q}_{2\parallel}\gamma_5\big)\big]
        = -2^7\big((G_V^\pm)^2-1\big)m_e^2\,(p_xp'_x+p_yp'_y) + 2^6(G_V^\pm+1)^2(E-p_z)(E'-p'_z)(E_{n_1}+p_{1z})(E_{n_2}+p_{2z}) + 2^6(G_V^\pm-1)^2(E+p_z)(E'+p'_z)(E_{n_1}-p_{1z})(E_{n_2}-p_{2z})    (B3)

    A_2 = I_{n_1,n_2-1}\,I^*_{n_1,n_2}    (B4)
    T_2 = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\big(m_e(1-\sigma^3)+\slashed{p}_{1\parallel}+\slashed{q}_{1\parallel}\gamma_5\big)\gamma_\mu(G_V^\pm-\gamma_5)\sqrt{2n_2eB}\,(\gamma^1+i\gamma^2)\big]
        = -2^6(G_V^\pm+1)^2\sqrt{2n_2eB}\,(p_x+ip_y)(E'-p'_z)(E_{n_1}+p_{1z}) - 2^6(G_V^\pm-1)^2\sqrt{2n_2eB}\,(p'_x+ip'_y)(E+p_z)(E_{n_1}-p_{1z})    (B5)

    A_3 = I_{n_1,n_2}\,I^*_{n_1,n_2-1} = A_2^* ,\qquad T_3 = T_2^*    (B6), (B7)

    A_4 = I_{n_1,n_2-1}\,I^*_{n_1,n_2-1}    (B8)
    T_4 = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\big(m_e(1-\sigma^3)+\slashed{p}_{1\parallel}+\slashed{q}_{1\parallel}\gamma_5\big)\gamma_\mu(G_V^\pm-\gamma_5)\big(-m_e(1+\sigma^3)+\slashed{p}_{2\parallel}-\slashed{q}_{2\parallel}\gamma_5\big)\big]
        = 2^7\big((G_V^\pm)^2-1\big)m_e^2(E+p_z)(E'-p'_z) + 2^6(G_V^\pm+1)^2(E+p_z)(E'-p'_z)(E_{n_1}+p_{1z})(E_{n_2}-p_{2z}) + 2^6(G_V^\pm-1)^2(E+p_z)(E'-p'_z)(E_{n_1}-p_{1z})(E_{n_2}+p_{2z})    (B9)

    A_5 = I_{n_1,n_2}\,I^*_{n_1-1,n_2}    (B10)
    T_5 = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\sqrt{2n_1eB}\,(\gamma^1+i\gamma^2)\gamma_\mu(G_V^\pm-\gamma_5)\big(-m_e(1-\sigma^3)+\slashed{p}_{2\parallel}+\slashed{q}_{2\parallel}\gamma_5\big)\big]
        = 2^6(G_V^\pm+1)^2\sqrt{2n_1eB}\,(p'_x+ip'_y)(E-p_z)(E_{n_2}+p_{2z}) + 2^6(G_V^\pm-1)^2\sqrt{2n_1eB}\,(p_x+ip_y)(E'+p'_z)(E_{n_2}-p_{2z})    (B11)

    A_6 = I_{n_1,n_2-1}\,I^*_{n_1-1,n_2}    (B12)
    T_6 = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\sqrt{2n_1eB}\,(\gamma^1+i\gamma^2)\gamma_\mu(G_V^\pm-\gamma_5)\sqrt{2n_2eB}\,(\gamma^1+i\gamma^2)\big]
        = -2^6\big[(G_V^\pm+1)^2+(G_V^\pm-1)^2\big]\sqrt{2n_1eB}\sqrt{2n_2eB}\,(p_x+ip_y)(p'_x+ip'_y)    (B13)

    A_7 = I_{n_1,n_2}\,I^*_{n_1-1,n_2-1}    (B14)
    T_7 = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\sqrt{2n_1eB}\,(\gamma^1+i\gamma^2)\gamma_\mu(G_V^\pm-\gamma_5)\sqrt{2n_2eB}\,(\gamma^1-i\gamma^2)\big]
        = -2^6(G_V^\pm+1)^2\sqrt{2n_1eB}\sqrt{2n_2eB}\,(p_x-ip_y)(p'_x+ip'_y) - 2^6(G_V^\pm-1)^2\sqrt{2n_1eB}\sqrt{2n_2eB}\,(p_x+ip_y)(p'_x-ip'_y)    (B15)

    A_8 = I_{n_1,n_2-1}\,I^*_{n_1-1,n_2-1}    (B16)
    T_8 = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\sqrt{2n_1eB}\,(\gamma^1+i\gamma^2)\gamma_\mu(G_V^\pm-\gamma_5)\big(-m_e(1+\sigma^3)+\slashed{p}_{2\parallel}-\slashed{q}_{2\parallel}\gamma_5\big)\big]
        = 2^6(G_V^\pm+1)^2\sqrt{2n_1eB}\,(p'_x+ip'_y)(E+p_z)(E_{n_2}-p_{2z}) + 2^6(G_V^\pm-1)^2\sqrt{2n_1eB}\,(p_x+ip_y)(E'+p'_z)(E_{n_2}+p_{2z})    (B17)

    A_9 = I_{n_1-1,n_2}\,I^*_{n_1,n_2} = A_5^* ,\qquad T_9 = T_5^*    (B18), (B19)
    A_{10} = I_{n_1-1,n_2-1}\,I^*_{n_1,n_2} = A_7^* ,\qquad T_{10} = T_7^*    (B20), (B21)
    A_{11} = I_{n_1-1,n_2}\,I^*_{n_1,n_2-1} = A_6^* ,\qquad T_{11} = T_6^*    (B22), (B23)
    A_{12} = I_{n_1-1,n_2-1}\,I^*_{n_1,n_2-1} = A_8^* ,\qquad T_{12} = T_8^*    (B24), (B25)

    A_{13} = I_{n_1-1,n_2}\,I^*_{n_1-1,n_2}    (B26)
    T_{13} = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\big(m_e(1+\sigma^3)+\slashed{p}_{1\parallel}-\slashed{q}_{1\parallel}\gamma_5\big)\gamma_\mu(G_V^\pm-\gamma_5)\big(-m_e(1-\sigma^3)+\slashed{p}_{2\parallel}+\slashed{q}_{2\parallel}\gamma_5\big)\big]
        = 2^7\big((G_V^\pm)^2-1\big)m_e^2(E-p_z)(E'+p'_z) + 2^6(G_V^\pm+1)^2(E-p_z)(E'+p'_z)(E_{n_1}-p_{1z})(E_{n_2}+p_{2z}) + 2^6(G_V^\pm-1)^2(E-p_z)(E'+p'_z)(E_{n_1}+p_{1z})(E_{n_2}-p_{2z})    (B27)

    A_{14} = I_{n_1-1,n_2-1}\,I^*_{n_1-1,n_2}    (B28)
    T_{14} = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\big(m_e(1+\sigma^3)+\slashed{p}_{1\parallel}-\slashed{q}_{1\parallel}\gamma_5\big)\gamma_\mu(G_V^\pm-\gamma_5)\sqrt{2n_2eB}\,(\gamma^1+i\gamma^2)\big]
        = -2^6(G_V^\pm+1)^2\sqrt{2n_2eB}\,(p_x+ip_y)(E'+p'_z)(E_{n_1}-p_{1z}) - 2^6(G_V^\pm-1)^2\sqrt{2n_2eB}\,(p'_x+ip'_y)(E-p_z)(E_{n_1}+p_{1z})    (B29)

    A_{15} = I_{n_1-1,n_2}\,I^*_{n_1-1,n_2-1} = A_{14}^* ,\qquad T_{15} = T_{14}^*    (B30), (B31)

    A_{16} = I_{n_1-1,n_2-1}\,I^*_{n_1-1,n_2-1}    (B32)
    T_{16} = \mathrm{Tr}\big[\gamma_\sigma(G_V^\pm-\gamma_5)\big(m_e(1+\sigma^3)+\slashed{p}_{1\parallel}-\slashed{q}_{1\parallel}\gamma_5\big)\gamma_\mu(G_V^\pm-\gamma_5)\big(-m_e(1+\sigma^3)+\slashed{p}_{2\parallel}-\slashed{q}_{2\parallel}\gamma_5\big)\big]
        = -2^7\big((G_V^\pm)^2-1\big)m_e^2\,(p_xp'_x+p_yp'_y) + 2^6(G_V^\pm+1)^2(E+p_z)(E'+p'_z)(E_{n_1}-p_{1z})(E_{n_2}-p_{2z}) + 2^6(G_V^\pm-1)^2(E-p_z)(E'-p'_z)(E_{n_1}+p_{1z})(E_{n_2}+p_{2z}) .    (B33)
Theoretical Aspects of the SOM Algorithm
M.Cottrell†, J.C.Fort‡, G.Pagès∗
† SAMOS/Université Paris 1
90, rue de Tolbiac, F-75634 Paris Cedex 13, France
Tel/Fax : 33-1-40-77-19-22, E-mail: [email protected]
‡ Institut Elie Cartan/Université Nancy 1 et SAMOS
F-54506 Vandœuvre-Lès-Nancy Cedex, France
E-mail: [email protected]
∗ Université Paris 12 et Laboratoire de Probabilités /Paris 6
F-75252 Paris Cedex 05, France
E-mail:[email protected]
Abstract
The SOM algorithm is quite astonishing. On the one hand, it is very simple to write down and to simulate, and its practical properties are clear and easy to observe. On the other hand, its theoretical properties still remain without proof in the general case, despite the great efforts of several authors. In this paper, we review the latest results and provide some conjectures for future work.
Keywords: Self-organization, Kohonen algorithm, Convergence of stochas-
tic processes, Vectorial quantization.
1 Introduction
The now very popular SOM algorithm was originally devised by Teuvo Kohonen
in 1982 [35] and [36]. It was presented as a model of the self-organization of neural connections. What immediately raised the interest of the scientific community
(neurophysiologists, computer scientists, mathematicians, physicists) was the abil-
ity of such a simple algorithm to produce organization, starting from possibly total
disorder. That is called the self-organization property.
As a matter of fact, the algorithm can be considered a generalization of Competitive Learning, which is a vector quantization algorithm [42] without any notion of neighborhood between the units.
http://arxiv.org/abs/0704.1696v1
In the SOM algorithm, a neighborhood structure is defined for the units and is
respected throughout the learning process, which imposes the conservation of the
neighborhood relations. So the weights are progressively updated according to the
presentation of the inputs, in such a way that neighboring inputs are little by little
mapped onto the same unit or neighboring units.
There are two phases. In practical applications as well as in theoretical studies, one first observes self-organization (with a large neighborhood and a large adaptation parameter), and later the convergence of the weights, which quantize the input space. In this second phase, the adaptation parameter decreases to 0 and the neighborhood is small, or even reduced to a single unit (the organization is assumed not to be destroyed by the process in this phase, which is indeed true in the 0-neighbor setting).
Even if the properties of the SOM algorithm can be easily reproduced by simu-
lations, and despite all the efforts, the Kohonen algorithm is surprisingly resistant
to a complete mathematical study. As far as we know, the only case where a complete analysis has been achieved is the one-dimensional case (the input space has dimension 1) for a linear network (the units are arranged along a one-dimensional array).
A sketch of the proof was provided in Kohonen's original papers [35], [36]
in 1982 and in his books [37], [40] in 1984 and 1995. The first complete proof
of both self-organization and convergence properties was established (for uniform
distribution of the inputs and a simple step-neighborhood function) by Cottrell and
Fort in 1987, [9].
Then, these results were generalized to a wide class of input distributions by
Bouton and Pagès in 1993 and 1994, [6], [7] and to a more general neighborhood by
Erwin et al. (1992) who have sketched the extension of the proof of self-organization
[21] and studied the role of the neighborhood function [20]. Recently, Sadeghi [59],
[60] has studied the self-organization for a general type of stimuli distribution and
neighborhood function.
At last, Fort and Pagès in 1993, [26], 1995 [27], 1997 [3], [4] (with Benaim) have
achieved the rigorous proof of the almost sure convergence towards a unique state,
after self-organization, for a very general class of neighborhood functions.
Before that, Ritter et al. in 1986 and 1988 [52], [53] had shed some light on the stationary state in any dimension, but they studied only the final phase after self-organization and did not prove the existence of this stationary state.
In multidimensional settings, it is not possible to define what a well-ordered configuration set could be that would be stable for the algorithm and could serve as an absorbing class. For example, the grid configurations that Lo et al. proposed in 1991 and 1993 [45], [46] are not stable, as proved in [10]. Fort and Pagès (1996) [28] show that there is no organized absorbing set, at least when the stimuli space is continuous. On the other hand, Erwin et al. (1992) [21] proved that it is impossible to associate a globally decreasing potential function with the algorithm, as long as the probability distribution of the inputs is continuous. Recently, Fort and Pagès in 1994 [26] and 1996 [27], [28], and Flanagan in 1994 and 1996 [22], [23], gave some results in higher dimensions, but these remain incomplete.
In this paper, we try to present the state of the art. As a continuation of a previous paper [13], we gather the more recent results, which have been published in various journals not easily accessible to the neural networks community.
We do not discuss the variants of the algorithm that have been defined and studied by many authors in order to improve performance or to facilitate the mathematical analysis; see for example [5], [47], [58], [61]. Nor do we address the numerous applications of the SOM algorithm; see for example Kohonen's book [40] for an idea of the profusion of these applications. We will only mention, as a conclusion, some original data analysis methods based on the SOM algorithm.
The paper is organized as follows: in section 2, we define the notation. Section 3 is devoted to the one-dimensional case. Section 4 deals with the multidimensional 0-neighbor case, that is, simple competitive learning, and sheds some light on the quantization performances. In section 5, some partial results about the multidimensional setting are provided. Section 6 treats the discrete finite case, and we present some data analysis methods derived from the SOM algorithm. The conclusion gives some hints about future research.
2 Notations and definitions
The network includes n units located in an ordered lattice (generally in a one- or
two-dimensional array). If I = {1, 2, . . . , n} is the set of the indices, the neighbor-
hood structure is provided by a neighborhood function Λ defined on I × I. It is
symmetrical, non increasing, and depends only on the distance between i and j in
the set of units I, (e.g. | i − j | if I = {1, 2, . . . , n} is one-dimensional). Λ(i, j)
decreases with increasing distance between i and j, and Λ(i, i) is usually equal to 1.
The input space Ω is a bounded convex subset of Rd, endowed with the Eu-
clidean distance. The inputs x(t), t ≥ 1 are Ω-valued, independent with common
distribution µ.
The network state at time t is given by
m(t) = (m1(t), m2(t), . . . , mn(t)).
where mi(t) is the d-dimensional weight vector of the unit i.
For a given state m and input x, the winning unit ic(x,m) is the unit whose
weight mic(x,m) is the closest to the input x. Thus the network defines a map
Φm : x ↦ ic(x,m), from Ω to I, and the goal of the learning algorithm is to converge to a network state such that the map Φm is "topology preserving" in some sense.
For a given state m, let us denote by Ci(m) the set of inputs for which i is the winning unit, that is, Ci(m) = Φm^(−1)(i). The classes Ci(m) form the Euclidean Voronoï tessellation of the space Ω associated with m.
The SOM algorithm is recursively defined by :
ic(x(t+ 1), m(t)) = argmin {‖x(t + 1)−mi(t)‖, i ∈ I}
mi(t + 1) = mi(t)− εtΛ(i0, i)(mi(t)− x(t + 1)), ∀i ∈ I
The essential parameters are
• the dimension d of the input space
• the topology of the network
• the adaptation gain parameter εt, which is ]0, 1[-valued, constant or decreasing
with time,
• the neighborhood function Λ, which can be constant or time dependent,
• the probability distribution µ.
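As a concrete illustration (ours, not the paper's), one on-line step of the recursion above can be sketched as follows, with a step neighborhood function of radius 1 on a linear array of units:

```python
import numpy as np

def som_step(m, x, eps, radius=1):
    """One SOM update: find the winner, then move it and its neighbors toward x.

    m      : (n, d) array of weight vectors m_i(t)
    x      : (d,) input vector x(t+1)
    eps    : adaptation gain eps_t in ]0, 1[
    radius : step neighborhood, Lambda(i0, i) = 1 if |i0 - i| <= radius, else 0
    """
    i0 = np.argmin(np.linalg.norm(m - x, axis=1))          # winning unit ic(x, m)
    neighbors = np.abs(np.arange(len(m)) - i0) <= radius   # step neighborhood mask
    m[neighbors] -= eps * (m[neighbors] - x)               # m_i(t+1) update
    return m

rng = np.random.default_rng(0)
m = rng.random((10, 2))              # 10 units, 2-dimensional inputs in [0, 1]^2
for _ in range(1000):
    m = som_step(m, rng.random(2), eps=0.1)
```

Since each update is a convex combination of the old weight and the input, the weights stay inside the convex hull of the data, here [0, 1]^2.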
Available mathematical techniques
As mentioned before, when dealing with the SOM algorithm, one has to separate
two kinds of results: those related to self-organization, and those related to conver-
gence after organization. In any case, all the results have been obtained for a fixed
time-invariant neighborhood function.
First, the network state at time t is a random Ωn-valued vector m(t) satisfying the recursion
m(t + 1) = m(t) − εt H(x(t + 1), m(t)) (2)
(where H is defined in an obvious way according to the updating equation); it is thus a stochastic process. If εt and Λ are time-invariant, it is a homogeneous Markov chain and can be studied with the usual tools, when possible (and fruitful).
the algorithm converges in distribution, this limit distribution has to be an invariant
measure for the Markov chain. If the algorithm has some fixed point, this point
has to be an absorbing state of the chain. If it is possible to prove some strong
organization [28], it has to be associated to an absorbing class.
Another way to investigate self-organization and convergence is to study the asso-
ciated ODE (Ordinary Differential Equation) [41] that describes the mean behaviour
of the algorithm :
dm/dt = −h(m) (3)
where
h(m) = E(H(x, m)) = ∫Ω H(x, m) dµ(x) (4)
is the expectation of H(., m) with respect to the probability measure µ.
Then it is clear that all the possible limit states m⋆ are solutions of the functional
equation
h(m) = 0
and any knowledge about the possible attracting equilibrium points of the ODE
can give some light about the self-organizing property and the convergence. But
actually the complete asymptotic study of the ODE in the multidimensional setting
seems to be untractable. One has to verify some global assumptions on the function
h (and on its gradient) and the explicit calculations are quite difficult, and perhaps
impossible.
In the convergence phase, the techniques depend on the desired mode of convergence. For almost sure convergence, the parameter εt needs to decrease to 0, and the form of equation (2) suggests considering the SOM algorithm as a Robbins-Monro [57] algorithm.
The usual hypothesis on the adaptation parameter to get almost sure results is
then:
∑t εt = +∞ and ∑t εt² < +∞. (5)
The less restrictive conditions ∑t εt = +∞ and εt ↓ 0 generally do not ensure almost sure convergence, but only some weaker convergence, for instance convergence in probability.
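For instance, the classical schedule εt = 1/t satisfies (5), while εt = 1/√t does not; a quick symbolic check (assuming sympy is available) confirms this:

```python
import sympy as sp

t = sp.symbols('t', positive=True, integer=True)
eps = 1 / t   # the classical Robbins-Monro schedule eps_t = 1/t

# Condition (5): sum eps_t diverges, while sum eps_t^2 converges
assert sp.Sum(eps, (t, 1, sp.oo)).is_convergent() == False
assert sp.Sum(eps**2, (t, 1, sp.oo)).is_convergent() == True

# eps_t = 1/sqrt(t) fails (5): its squared series also diverges
assert sp.Sum((1 / sp.sqrt(t))**2, (t, 1, sp.oo)).is_convergent() == False
```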
Let us first examine the results in dimension 1.
3 The dimension 1
3.1 The self-organization
The input space is [0, 1], the dimension d is 1 and the units are arranged on a linear
array. The neighborhood function Λ is supposed to be non increasing as a function
of the distance between units, the classical step neighborhood function satisfies this
condition. The input distribution µ is continuous on [0, 1]: this means that it does
not weight any point. This is satisfied for example by any distribution having a
density.
Let us define
F+n = {m ∈ Rn : 0 < m1 < m2 < · · · < mn < 1},
F−n = {m ∈ Rn : 0 < mn < mn−1 < · · · < m1 < 1}.
In [9], [6], the following results are proved using Markovian methods :
Theorem 1 (i) The two sets F+n and F−n are absorbing sets.
(ii) If ε is constant, and if Λ is decreasing as a function of the distance (e.g. if there are only two neighbors), the entering time τ, that is, the hitting time of F+n ∪ F−n, is almost surely finite, and there exists λ > 0 such that sup_{m∈[0,1]^n} Em(exp(λτ)) is finite, where Em denotes the expectation given m(0) = m.
Theorem 1 ensures that the algorithm almost surely orders the weights. These results can be found, for the more particular case (µ uniform and two neighbors), in Cottrell and Fort [9], 1987; successive generalizations appear in Erwin et al. [21], 1992, Bouton and Pagès [6], 1993, Fort and Pagès [27], 1995, and Flanagan [23], 1996. The techniques used are Markov chain tools.
Actually, following [6], it is possible to prove that whenever εt ↓ 0 and ∑t εt = +∞, then for all m ∈ [0, 1]n, Probm(τ < +∞) > 0 (that is, the probability of self-organization is positive regardless of the initial values, but not a priori equal to 1). In [60], Sadeghi uses a generalized definition of the winning unit and shows that the probability of self-organization is uniformly positive, without assuming a lower bound for εt.
No result of almost sure reordering with a vanishing εt is known so far. In [10], Cottrell and Fort propose a conjecture that remains unproven: it seems that re-organization occurs when the parameter εt decreases at an appropriate rate.
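The ordering phenomenon of Theorem 1 is easy to observe numerically. The following sketch (with parameter choices of our own) runs the two-neighbor, one-dimensional algorithm with constant ε on uniform inputs and checks that the weights end up monotone:

```python
import numpy as np

rng = np.random.default_rng(42)
n, eps, steps = 8, 0.3, 20000
m = rng.random(n)                      # random initial weights in [0, 1]

for _ in range(steps):
    x = rng.random()                   # uniform input on [0, 1]
    i0 = np.argmin(np.abs(m - x))      # winning unit
    lo, hi = max(i0 - 1, 0), min(i0 + 1, n - 1)
    m[lo:hi + 1] -= eps * (m[lo:hi + 1] - x)   # update winner and its 2 neighbors

d = np.diff(m)
ordered = bool(np.all(d > 0) or np.all(d < 0))
print("ordered:", ordered)
```

Since the ordered sets F+n and F−n are absorbing (Theorem 1(i)), once the weights become monotone they stay so for the rest of the run.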
3.2 The convergence for dimension 1
After having proved that the process enters an ordered state set (increasing or
decreasing), with probability 1, it is possible to study the convergence of the process.
So we assume that m(0) ∈ F+n . It would be the same if m(0) ∈ F
3.2.1 Decreasing adaptation parameter
In [9] (for the uniform distribution), in [7], [27] and more recently in [3], [4], 1997,
the almost sure convergence is proved in a very general setting. The results are
gathered in the theorem below :
Theorem 2 Assume that
1) (εt) ⊂ ]0, 1[ satisfies condition (5),
2) the neighborhood function satisfies condition HΛ: there exists k0 such that Λ(k0 + 1) < Λ(k0),
3) the input distribution µ satisfies condition Hµ: it has a density f such that f > 0 on ]0, 1[ and ln(f) is strictly concave (or only concave, with lim0+ f + lim1− f positive).
Then:
(i) The mean function h has a unique zero m⋆ in F+n.
(ii) The dynamical system dm/dt = −h(m) is cooperative on F+n, i.e. the non-diagonal elements of ∇h(m) are non-positive.
(iii) m⋆ is attracting.
Consequently, if m(0) ∈ F+n, then m(t) → m⋆ almost surely.
In this part, the authors use the ODE method, a result by M. Hirsch on cooperative dynamical systems [34], and the Kushner and Clark theorem [41], [3]. A. Sadeghi pointed out that the non-positivity of the non-diagonal terms of ∇h is exactly the basic definition of a cooperative dynamical system, and he obtained partial results in [59] and more general ones in [60].
We can see that the assumptions are very general. Most of the usual probability
distributions (truncated on [0, 1]) have a density f such that ln(f) is strictly concave.
On the other hand, the uniform distribution is not strictly ln-concave, nor is the truncated exponential distribution, but both satisfy the condition that lim0+ f + lim1− f is positive.
Condition (5) is essential: if only εt ↓ 0 and ∑t εt = +∞ hold, then a priori there is only convergence in probability.
In fact, by studying the associated ODE, Flanagan [22] shows that metastable equilibria can appear before ordering.
In the uniform case, it is possible to calculate the limit m⋆. Its coordinates are solutions of an (n × n) linear system, which can be found in [37] or [9]. An explicit expression, up to the solution of a 3 × 3 linear system, is proposed in [6]. Some further investigations are made in [31].
3.2.2 Constant adaptation parameter
Another point of view is to study the convergence of m(t) when εt = ε is constant. Some results are available when the neighborhood function corresponds to the two-neighbor setting; see [9], 1987 (for the uniform distribution), and [7], 1994, for the more general case. Part of the results also holds for a more general neighborhood function; see [3], [4].
Theorem 3 Assume that m(0) ∈ F+n ,
Part A: Assume that the hypotheses Hµ and HΛ hold as in Theorem 2, then
For each ε ∈]0, 1[, there exists some invariant probability νε on F+n .
Part B: Assume only that Λ(i, j) = 1 if and only if |i − j| = 0 or 1 (classical
2-neighbors setting),
(i) If the input distribution µ has an absolutely continuous part (e.g. has a density),
then for each ε ∈]0, 1[, there exists a unique probability distribution νε such that the
distribution of mt weakly converges to νε when t −→ ∞. The rate of convergence is
geometric. Actually the Markov chain is Doeblin recurrent.
(ii) Furthermore, if µ has a positive density, then for every ε, νε is equivalent to the Lebesgue measure on F+n if and only if n is congruent to 0 or 1 modulo 3. If n is congruent to 2 modulo 3, the Lebesgue measure is absolutely continuous with respect to νε, but the converse is not true; that is, νε has a singular part.
Part C: Under the general hypotheses of Part A (which include those of Part B), if m⋆ is the unique globally attractive equilibrium of the ODE (see Theorem 2), then νε converges to the Dirac distribution at m⋆ as ε ↓ 0.
So when ε is very small, the values remain very close to m⋆. Moreover, from this result we may conjecture that for a suitable choice of εt, certainly of the form εt = A/t where A is a constant, both self-organization and convergence towards the unique m⋆ can be achieved. This could be proved by techniques very similar to simulated annealing methods.
4 The 0-neighbor case in a multidimensional setting
In this case, we take any dimension d, the input space is Ω ⊂ Rd, and Λ(i, j) = 1 if
i = j, and 0 elsewhere. There is no longer any topology on I, and reordering no longer makes sense. In this case the algorithm is essentially a stochastic version of the Linde, Buzo and Gray [44] algorithm (LBG). It belongs to the family of vector quantization algorithms and is equivalent to Competitive Learning. The mathematical results here are more or less within reach. Even if this algorithm is deeply different from the usual Kohonen algorithm, it is nevertheless interesting to study, because it can be viewed as a limit situation when the neighborhood size decreases to 0.
The first result, which is classical for Competitive Learning and can be found in [54], [50], [39], is:
Theorem 4 (i) The 0-neighbor algorithm derives from the potential
Vn(m) = ∫Ω min_{1≤i≤n} ‖mi − x‖² dµ(x). (6)
(ii) If the probability distribution µ is continuous (for example if µ has a density f),
Vn(m) = ∑_{1≤i≤n} ∫_{Ci(m)} ‖mi − x‖² f(x) dx = ∫Ω min_{1≤i≤n} ‖mi − x‖² f(x) dx, (7)
where Ci(m) is the Voronoï set associated with unit i for the current state m.
The potential function Vn(m) is nothing else than the intra-class variance used by statisticians to characterize the quality of a clustering. In the vector quantization setting, Vn(m) is called the distortion. It measures the loss of information incurred when replacing each input by the closest weight vector (or code vector). The potential Vn(m) has been extensively studied for fifty years, as can be seen in the Special Issue of IEEE Transactions on Information Theory (1982), [42].
The expression (7) holds as soon as mi ≠ mj for all i ≠ j and the borders of the Voronoï classes have probability 0 (µ(∪_{i=1}^n ∂Ci(m)) = 0). This last condition is always verified when the distribution µ has a density f. Under these two conditions, Vn(m) is differentiable at m and its gradient reads
∇Vn(m) = 2 ( ∫_{Ci(m)} (mi − x) f(x) dx )_{1≤i≤n} .
So it becomes clear ([50], [40]) that the Kohonen algorithm with 0 neighbors is the stochastic gradient descent relative to the function Vn(m) and can be written
m(t + 1) = m(t) − εt+1 1_{Ci(m(t))}(x(t + 1)) (m(t) − x(t + 1)),
where 1_{Ci(m(t))}(x(t + 1)) equals 1 if x(t + 1) ∈ Ci(m(t)), and 0 otherwise.
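In code, the 0-neighbor (competitive learning) step and a Monte Carlo estimate of the distortion Vn look like this. This is an illustrative sketch of ours; the decreasing gain εt = 1/(t + 2) is one admissible choice under condition (5):

```python
import numpy as np

rng = np.random.default_rng(7)

def distortion(m, samples):
    """Monte Carlo estimate of V_n(m) = E[min_i ||m_i - x||^2]."""
    d2 = ((samples[:, None, :] - m[None, :, :]) ** 2).sum(-1)   # (N, n) squared distances
    return d2.min(axis=1).mean()

m = rng.random((6, 2))                      # 6 code vectors in [0, 1]^2
test_x = rng.random((20000, 2))             # held-out uniform samples
v0 = distortion(m, test_x)

for t in range(20000):                      # 0-neighbor steps: update only the winner
    x = rng.random(2)
    i0 = np.argmin(((m - x) ** 2).sum(-1))
    m[i0] -= (1.0 / (t + 2)) * (m[i0] - x)

v1 = distortion(m, test_x)
print(v0, "->", v1)
```

Since the update is a stochastic gradient step on Vn, the distortion of the code book decreases (here from a random initial configuration toward a local minimum).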
The available results are more or less classical, and can be found in [44] and [8],
for a general dimension d and a distribution µ satisfying the previous conditions.
Concerning convergence, we have the following results when the dimension d = 1; see Pagès ([50], [51]), the IEEE Special Issue [42], and also [43] for (ii). The parameter εt has to satisfy conditions (5).
Theorem 5 (Quantization in dimension 1)
(i) If ∇Vn has finitely many zeros in F+n, then m(t) converges almost surely to one of these local minima.
(ii) If hypothesis Hµ holds (see Theorem 2), then ∇Vn has only one zero m⋆n in F+n. This point m⋆n is a minimum. Furthermore, if m(0) ∈ F+n, then m(t) → m⋆n almost surely.
(iii) If the stimuli are uniformly distributed on [0, 1], then
m⋆n = ((2i − 1)/2n)_{1≤i≤n}.
Part (ii) shows that the global minimum of Vn(m) is attainable in the one-dimensional case, and part (iii) confirms that the algorithm provides an optimal discretization of continuous distributions.
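Part (iii) can also be checked with a deterministic batch counterpart, Lloyd's algorithm (an illustration of ours, not the stochastic algorithm of the paper): for the uniform density on [0, 1], alternately recomputing Voronoï borders and cell centroids converges to the grid (2i − 1)/2n.

```python
import numpy as np

def lloyd_uniform(m, iters=2000):
    """Batch Lloyd iteration for the uniform density on [0, 1] (1D)."""
    m = np.sort(m)
    for _ in range(iters):
        b = np.concatenate(([0.0], (m[:-1] + m[1:]) / 2, [1.0]))  # Voronoi borders
        m = (b[:-1] + b[1:]) / 2    # centroid of each cell under the uniform density
    return m

n = 5
m0 = np.sort(np.random.default_rng(1).random(n))
m_star = lloyd_uniform(m0)
target = (2 * np.arange(1, n + 1) - 1) / (2 * n)   # (2i - 1)/2n, i = 1..n
print(np.max(np.abs(m_star - target)))
```

The iteration is an affine contraction in 1D, so the printed gap to the theoretical optimum is numerically negligible.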
A weaker result holds in the d-dimensional case, because one only obtains convergence to a local minimum of Vn(m).
Theorem 6 (Quantization in dimension d)
If ∇Vn has finitely many zeros, and if these zeros all have pairwise distinct components, then m(t) converges almost surely to one of these local minima.
In the d-dimensional case, we are not able to compute the limit, even in the uniform case. Following [48] and many experimental results, it seems that the minimum distortion could be reached for a hexagonal tessellation, as mentioned in [31] or [40].
In both cases, we can state the properties of the global minima of Vn(m) in the general d-dimensional setting. Note first that Vn(m) is invariant under any permutation of the integers 1, 2, . . . , n, so we may consider one particular global minimum, say the lexicographically ordered one.
Theorem 7 (Quantization property)
(i) The function Vn(m) is continuous on (Rd)n and reaches its (global) minima inside Ωn.
(ii) For fixed n, a point m⋆n at which the function Vn is minimal has pairwise distinct components.
(iii) Let n vary, and let m⋆n = (m⋆n,1, m⋆n,2, . . . , m⋆n,n) be the ordered minimum of Vn(m). The sequence min_{(Rd)n} Vn(m) = Vn(m⋆n) converges to 0 as n goes to +∞. More precisely, there exist a speed β = 2/d and a constant A(f) such that
n^β Vn(m⋆n) → A(f)
as n goes to +∞. Following Zador [64], the constant A(f) can be computed: A(f) = ad ‖f‖ρ, where ad does not depend on f, ρ = d/(d + 2) and ‖f‖ρ = [∫ f^ρ(x) dx]^{1/ρ}.
(iv) Then the weighted empirical discrete probability measure
µn = ∑_{1≤i≤n} µ(Ci(m⋆n)) δ_{m⋆n,i}
converges in distribution to the probability measure µ as n → ∞.
(v) If Fn (resp. F) denotes the distribution function of µn (resp. µ), one has
min_{(Rd)n} Vn(m) = min_{(Rd)n} ∫ (Fn(x) − F(x))² dx,
so when n → ∞, Fn converges to F in quadratic norm.
The convergence in (iv) properly defines the quantization property and explains how to reconstruct the input distribution from the n code vectors after convergence. But in fact this convergence holds for any sequence y⋆n = (y1,n, y2,n, . . . , yn,n) which "fills" the space as n goes to +∞: for example, it suffices that for any n there exists an integer n′ > n such that any interval (yi,n, yi+1,n) (in Rd) contains some points of y⋆n′. However, for an arbitrary sequence of quantizers satisfying this condition, even if there is convergence in distribution and even if the speed of convergence is the same, the constant A(f) will differ, since it will not realize the minimum of the distortion.
For each integer n, the solution m⋆n which minimizes the quadratic distortion Vn(m) and the quadratic norm ‖Fn − F‖² is called an optimal n-quantizer. It also ensures that the discrete distribution function associated with the minimum m⋆n, suitably weighted by the probabilities of the Voronoï classes, converges to the initial distribution function F. So the 0-neighbor algorithm provides a skeleton of the input distribution, and since the distortion tends to 0, as does the quadratic distance between Fn and F, it provides an optimal quantizer. The weighting of the Dirac masses by the volumes of the Voronoï classes implies that the distribution µn is usually quite different from the empirical one, in which each term would have the same weight 1/n.
This result has been used by Pagès in [50] and [51] to compute integrals numerically. He shows that, for smooth enough functions, the resulting approximation converges faster than the Monte Carlo method while d ≤ 4.
The difficulty remains that the optimal quantizer m⋆n is not easily reachable, since
the stochastic process m(t) converges only to a local minimum of the distortion,
when the dimension is greater than 1.
Magnification factor
There is some confusion [37], [52] between the asymptotic distribution of an optimal quantizer m⋆n as n → ∞ and that of the best random quantizer, as defined by Zador [64] in 1982.
Zador's result, extended to the multidimensional case, is as follows. Let f be the density of the input measure µ, and let (Y1, Y2, . . . , Yn) be a random quantizer, where the code vectors Yi are independent with common density g. Then, under some weak assumptions on f and g, the distortion tends to 0 as n → ∞ at the speed β = 2/d, and it is possible to define the quantity
A(f, g) = lim_{n→∞} n^β Eg[ ∫ min_{1≤i≤n} ‖Yi − x‖² f(x) dx ].
Then, for any given input density f, the density g (under some weak conditions) which minimizes A(f, g) is
g⋆ = C f^{d/(d+2)}.
The inverse of the exponent d/(d + 2) is referred to as the magnification factor. Note that in any case, when the data dimension is large, this exponent is near 1 (its value is 1/3 when d = 1). Note also that this power has no effect when the density f is uniform. But in fact the optimal quantizer is another matter, with another definition. Namely, the optimal quantizer m⋆n (formed with the code vectors m⋆1,n, m⋆2,n, . . . , m⋆n,n) minimizes the distortion Vn(m) and is obtained after convergence of the 0-neighbor algorithm (if we could ensure convergence to a global minimum, which is true only in the one-dimensional case). So if we set
An(f, m⋆n) = n^β Vn(m⋆n) = n^β ∫ min_{1≤i≤n} ‖m⋆i,n − x‖² f(x) dx,
we actually have
A(f) = lim_{n→∞} An(f, m⋆n) < A(f, g⋆),
and the limit of the discrete distribution of m⋆n is not equal to g⋆. So there is no magnification factor for the 0-neighbor algorithm, as claimed in many papers. It can be an approximation, but no more.
The problem comes from the confusion between two distinct notions: random quantizer and optimal quantizer. In fact, the relevant property is the convergence of the weighted distribution function (7).
As for the SOM algorithm in the one-dimensional case, with a neighborhood function not reduced to the 0-neighbor case, one can find in [55] or [19] some results about a possible limit of the discrete distribution as the number of units goes to ∞. But actually the authors use Zador's result, which is not appropriate, as we have just seen.
5 The multidimensional continuous setting
In this section, we consider a general neighborhood function and the SOM algorithm
is defined as in Section 2.
5.1 Self-organization
When the dimension d is greater than 1, little is known about the classical Kohonen algorithm. The main reason seems to be that it is difficult to define what an organized state could be, and that no absorbing sets have been found. The configurations whose coordinates are monotone are not stable, contrary to intuition. For each configuration set that has been claimed to be left stable by the Kohonen algorithm, it has later been proved that it is possible to leave it with positive probability; see for example [10]. Most people think that the Kohonen algorithm in dimension greater than 1 corresponds to an irreducible Markov chain, that is, a chain for which there always exists a path with positive probability from anywhere to everywhere. That property implies that there is no absorbing set at all. Actually, as soon as d ≥ 2, for a constant parameter ε, the 0-neighbor algorithm is a Doeblin recurrent irreducible chain (see [7]), which cannot have any absorbing class.
Recently, two apparently contradictory results were established; they can be collected together as follows.
Theorem 8 (d = 2 and ε constant) Consider an n × n square network of units and the set F++ of states whose two coordinates are separately increasing as functions of their indices, i.e.
F++ = { m : ∀ i1 ≤ n, m²_{i1,1} < m²_{i1,2} < · · · < m²_{i1,n}, and ∀ i2 ≤ n, m¹_{1,i2} < m¹_{2,i2} < · · · < m¹_{n,i2} },
where m^k_{i1,i2} denotes the k-th coordinate of the weight of unit (i1, i2).
(i) If µ has a density on Ω, and if the neighborhood function Λ is everywhere positive and decreases with the distance, the hitting time of F++ is finite with positive probability (i.e. > 0, but possibly less than 1). See Flanagan ([22], [23]).
(ii) In the 8-neighbor setting, the exit time from F++ is finite with positive probability. See Fort and Pagès ([28]).
This means that (with a constant, even very small, parameter ε) organization is temporarily reached, and that even if we guess it is almost stable, disorganization may occur with positive probability.
More generally, the question is how to define an organized state. Many authors have proposed definitions and measures of self-organization [65], [18], [62], [32], [63], [33], but no such "organized" set has a chance of being absorbing.
In [28], the authors propose to consider that a map is organized if and only if the Voronoï classes of the closest neighboring units are in contact. They also precisely define the nature of the organization (strong or weak), proposing the following definitions:
Definition 1 Strong organization
There is strong organization if there exists a set of organized states S such that
(i) S is an absorbing class of the Markov chain m(t),
(ii) The entering time in S is almost surely finite, starting from any random weight
vectors (see [6]).
Definition 2 Weak organization
There is weak organization if there exists a set of organized states S such that all
the possible attracting equilibrium points of the ODE defined in (3) belong to the set S.
The authors prove that there is no strong organization in at least two seminal cases: the input space is [0, 1]², and the network is one-dimensional with two neighbors or two-dimensional with eight neighbors. The existence of weak organization should be investigated as well, but until now no exact result is available, even though simulations show a stable organized limit behavior of the SOM algorithm.
5.2 Convergence
In [27] (see also [26]), the gradient of h is computed in the d-dimensional setting (when it exists). In [53], the convergence and the nature of the limit state are studied under the assumption that organization has occurred, although there is no mathematical proof of the convergence.
Another interesting result received a mathematical proof thanks to the computation of the gradient of h: the dimension selection effect discovered by Ritter and Schulten (see [53]). The mathematical result is (see [27]):
Theorem 9 Assume that m⋆1 is a stable equilibrium point of a general d1-dimensional
Kohonen algorithm, with n1 units, stimuli distribution µ1 and some neighborhood
function Λ. Let µ2 be a d2-dimensional distribution with mean m⋆2 and covariance
matrix Σ2. Consider the (d1 + d2)-dimensional Kohonen algorithm with the same units
and the same neighborhood function. The stimuli distribution is now µ1 ⊗ µ2.
Then there exists some η > 0 such that, if ‖Σ2‖ < η, the state m⋆1 in the subspace
m2 = m⋆2 is still a stable equilibrium point for the (d1 + d2)-dimensional algorithm.
It means that if the stimuli distribution is close to a d1-dimensional distribution in
the (d1 + d2)-dimensional space, the algorithm can find a d1-dimensional stable
equilibrium point. That is the dimension selection effect.
From the computation of the gradient ∇h, some partial results on the stability of
grid equilibria can also be proved.
Let us consider I = I1 × I2 × · · · × Id a d-dimensional array, with Il = {1, 2, . . . , nl}
for 1 ≤ l ≤ d. Let us assume that the neighborhood function is a product function
(for example 8 neighbors for d = 2) and that the input distributions in each
coordinate are independent, that is µ = µ1 ⊗ · · · ⊗ µd. Finally, suppose that the
support of each µl is [0, 1].
Let us call grid states the states m⋆ = (m⋆il,l, 1 ≤ il ≤ nl, 1 ≤ l ≤ d) such that,
for every 1 ≤ l ≤ d, (m⋆il,l, 1 ≤ il ≤ nl) is an equilibrium for the one-dimensional
algorithm. Then the following results hold [27]:
Theorem 10 (i) The grid states are equilibrium points of the ODE (3) in the d-
dimensional case.
(ii) For d = 2, if µ1 and µ2 have strictly positive densities f1 and f2 on [0, 1], and if
the neighborhood functions are strictly decreasing, the grid equilibrium points are not
stable as soon as n1 is large enough and the ratio n1/n2 is large (or small) enough
(i.e. when n1 −→ +∞ and n1/n2 −→ +∞ or 0, see [27], Section 4.3).
(iii) For d = 2, if µ1 and µ2 have strictly positive densities f1 and f2 on [0, 1], and if
the neighborhood functions are degenerate (0-neighbor case), m⋆ is stable if n1 and
n2 are less than or equal to 2, and is not stable in any other case (except maybe when
n1 = n2 = 3).
Part (ii) establishes a negative property of non-square grids, which can be related
to the following fact: the product of one-dimensional quantizers is not the correct
vectorial quantization. But notice also that we have no result about the simplest case:
the square grid equilibrium in the uniformly distributed case. Everybody can observe by
simulation that this square grid is stable (and probably the unique stable “organized”
state). Nevertheless, even if we can numerically verify that it is stable using the
gradient formula, its stability is not mathematically proved, even with two neighbors
in each dimension!
Moreover, if the distributions µ1 and µ2 are not uniform, the square grids are
generally not stable, as can be seen experimentally.
6 The discrete case
In this case, there is a finite number N of inputs and Ω = {x1, x2, . . . , xN}. The
input distribution is uniform on Ω, that is µ(dx) = (1/N) Σ_{l=1}^{N} δxl(dx). This is the
setting of many practical applications, like classification or data analysis.
6.1 The results
The main result ([39], [56]) is that, for a general neighborhood function that does not
depend on time, the algorithm locally derives from the potential

Vn(m) = (1/2N) Σ_{i,j=1}^{n} Λ(i − j) Σ_{xl∈Ci(m)} ‖mj − xl‖²

      = (1/2) Σ_{i,j=1}^{n} Λ(i − j) ∫_{Ci(m)} ‖mj − x‖² µ(dx).
When Λ(i − j) = 1 if i and j are neighbors, and if V(j) denotes the neighborhood
of unit j in I, Vn(m) also reads

Vn(m) = (1/2) Σ_{j=1}^{n} ∫_{∪i∈V(j) Ci(m)} ‖mj − x‖² µ(dx).
Vn(m) is an intra-class variance extended to the neighboring classes, which
generalizes the distortion defined in Section 4 for the 0-neighbor setting. But this
potential has many singularities and its complete analysis has not been achieved,
even though the discrete algorithm can be viewed as a stochastic gradient descent
procedure. In fact, there is a problem with the borders of the Voronoï classes. The set
of all these borders along the process m(t) trajectories has measure 0, but one cannot
simply assume that the given points xl never belong to this set.
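For a small discrete example, the potential can be evaluated directly from its definition. The sketch below uses the uniform distribution on the N given inputs and the 1/(2N) normalization that this choice of µ induces, with a one-dimensional network for brevity:

```python
import numpy as np

def extended_distortion(m, X, Lam):
    """V_n(m) = (1/2N) * sum_{i,j} Lambda(i-j) * sum_{x in C_i(m)} |m_j - x|^2:
    the intra-class variance extended to the neighboring classes."""
    m, X = np.asarray(m, float), np.asarray(X, float)
    n, N = len(m), len(X)
    # Voronoi class of each input: index of the closest unit
    cls = np.array([int(np.argmin(np.abs(m - x))) for x in X])
    V = 0.0
    for i in range(n):
        for j in range(n):
            V += Lam(i - j) * np.sum((m[j] - X[cls == i]) ** 2)
    return V / (2.0 * N)

# 0-neighbor case: Lambda = indicator{i == j}; V is then half the mean distortion
X = np.array([0.1, 0.2, 0.8, 0.9])
m = np.array([0.15, 0.85])
V0 = extended_distortion(m, X, lambda k: 1.0 if k == 0 else 0.0)
print(V0)   # ~0.00125: each input lies 0.05 from its class center
```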
Actually, the potential is the true measure of self-organization: it measures
both clustering quality and proximity between classes. Its study should shed
some light on the Kohonen algorithm even in the continuous case.
When the stimuli distribution is continuous, we know that the algorithm is not a
gradient descent [21]. However, the algorithm can then be seen as an approximation
of the stochastic gradient algorithm derived from the function Vn(m). Namely,
the gradient of Vn(m) has a non-singular part, which corresponds to the Kohonen
algorithm, and a singular one, which prevents the algorithm from being a gradient descent.
This remark is the basis of many applications of the SOM algorithm in
combinatorial optimization, data analysis, classification, and the analysis of the
relations between qualitative classifying variables.
6.2 The applications
For example, in [24], Fort uses the SOM algorithm with a closed one-dimensional
string in a two-dimensional space in which M cities are located. He obtains very
quickly a very good sub-optimal solution. See also the paper [1].
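A compact sketch of this route-finding idea: a closed string of units is attracted by randomly drawn cities, and the ring order then defines a sub-optimal tour. The parameters below are illustrative and are not those of [24]:

```python
import numpy as np

rng = np.random.default_rng(1)

# A closed one-dimensional string of units in the plane is attracted to
# randomly drawn cities; after learning, the ring order gives a tour.
cities = rng.random((8, 2))                    # M = 8 cities in [0, 1]^2
n = 24                                         # units on the closed string
ring = np.tile(cities.mean(axis=0), (n, 1)) + 0.01 * rng.standard_normal((n, 2))
for _ in range(4000):
    x = cities[rng.integers(len(cities))]      # pick a city at random
    i0 = int(np.argmin(((ring - x) ** 2).sum(axis=1)))
    for dj in (-1, 0, 1):                      # winner and its 2 ring neighbors
        j = (i0 + dj) % n                      # closed string: wrap around
        ring[j] += 0.2 * (x - ring[j])
tour_len = sum(np.hypot(*(ring[(k + 1) % n] - ring[k])) for k in range(n))
print(ring.shape, tour_len > 0)
```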
The applications in data analysis and classification are more classical. The
principle is very simple: after convergence, the SOM algorithm provides a two- (or
one-) dimensional organized classification which permits a low-dimensional
representation of the data. See in [40] an impressive list of examples.
In [15] and [17], an application to forecasting is presented from a previous classi-
fication by a SOM algorithm.
6.3 Analysis of qualitative variables
Let us define here two original algorithms to analyse the relations between qualitative
variables. The first one is defined only for two qualitative variables. It is called
KORRESP and is analogous to the simple classical Correspondence Analysis. The
second one is devoted to the analysis of any finite number of qualitative variables.
It is called KACM and is similar to the Multiple Correspondence Analysis. See [11],
[14], [16] for some applications.
For both algorithms, we consider a sample of individuals and a number K of
questions. Each question k, k = 1, 2, . . . , K, has mk possible answers (or modalities).
Each individual answers each question by choosing one and only one modality. If
M = Σ_{1≤k≤K} mk is the total number of modalities, each individual is represented by
a row M-vector with values in {0, 1}. There is only one 1 between the 1st component
and the m1-th one, only one 1 between the (m1+1)-th component and the (m1+m2)-th
one, and so on.
In the general case where K > 2, the data are summarized into a Burt Table, which
is a cross-tabulation table. It is an M × M symmetric matrix composed of
K × K blocks, such that the (k, l)-block Bkl (for k ≠ l) is the (mk × ml) contingency
table which crosses question k and question l. The block Bkk is a diagonal
matrix whose diagonal entries are the numbers of individuals who have respectively
chosen the modalities 1, 2, . . . , mk for question k. In the following, the Burt Table
is denoted by B.
In the case K = 2, we only need the contingency table T which crosses the two
variables. In that case, we write p (resp. q) for m1 (resp. m2).
The KORRESP algorithm
In the contingency table T, the first qualitative variable has p levels and
corresponds to the rows. The second one has q levels and corresponds to the columns.
The entry nij is the number of individuals categorized by row i and column j.
From the contingency table, the matrix of relative frequencies (fij = nij / Σij nij)
is computed.
Then the rows and the columns are normalized in order to have a sum equal to
1. The row profile r(i), 1 ≤ i ≤ p is the discrete probability distribution of the
second variable given that the first variable has modality i and the column profile
c(j), 1 ≤ j ≤ q is the discrete probability distribution of the first variable given
that the second variable has modality j. The classical Correspondence Analysis is a
simultaneous weighted Principal Component Analysis on the row profiles and on the
column profiles. The distance is chosen to be the χ2 distance. In the simultaneous
representation, related modalities are projected onto neighboring points.
To define the KORRESP algorithm, we build a new data matrix D: to each row
profile r(i), we associate the column profile c(j(i)) which maximizes the probability
of j given i, and conversely, we associate to each column profile c(j) the row profile
r(i(j)) that is most probable given j. The data matrix D is the ((p + q) × (q + p))
matrix whose first p rows are the vectors (r(i), c(j(i))) and whose last q rows are the
vectors (r(i(j)), c(j)). The SOM algorithm is processed on the rows of this data matrix D.
Note that we use the χ2 distance to look for the winning unit and that we
alternately pick the inputs at random among the p first rows and the q last ones. After
convergence, each modality of both variables is classified into a Voronoï class.
Related modalities are classified into the same class or into neighboring classes. This
method gives a very quick and efficient way to analyse the relations between two
qualitative variables. See [11] and [12] for real-world applications.
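The construction of the data matrix D can be sketched as follows. The SOM iteration itself is left out, and the helper chi2_dist with its weight vector w is only an assumed illustration of the χ2 distance, not the exact weighting of the method:

```python
import numpy as np

def korresp_data(T):
    """Build the KORRESP data matrix D from a p x q contingency table T:
    each row profile r(i) is concatenated with the most probable column
    profile c(j(i)), and conversely (a sketch of the construction above)."""
    T = np.asarray(T, float)
    F = T / T.sum()                            # relative frequencies f_ij
    r = F / F.sum(axis=1, keepdims=True)       # row profiles (sum to 1)
    c = (F / F.sum(axis=0, keepdims=True)).T   # column profiles, as rows
    p, q = T.shape
    top = [np.concatenate([r[i], c[np.argmax(r[i])]]) for i in range(p)]
    bot = [np.concatenate([r[np.argmax(c[j])], c[j]]) for j in range(q)]
    return np.vstack(top + bot)                # shape (p + q, q + p)

def chi2_dist(u, v, w):
    """Assumed helper: chi-squared distance with a weight vector w."""
    return float(np.sum((u - v) ** 2 / w))

T = [[30, 5], [4, 41]]                         # toy contingency table, p = q = 2
D = korresp_data(T)
print(D.shape)                                 # (4, 4)
print(np.allclose(D.sum(axis=1), 2.0))         # True: two unit-sum profiles per row
```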
The KACM Algorithm
When there are more than two qualitative variables, the above method no longer
works. In that case, the data matrix is just the Burt Table B. The rows are
normalized in order to have a sum equal to 1. At each step, we pick a normalized
row at random, according to the frequency of the corresponding modality. We define
the winning unit according to the χ2 distance and update the weight vectors as
usual. After convergence, we get an organized classification of all the modalities,
where related modalities belong to the same class or to neighboring classes. In this
case also, the KACM method provides a very interesting alternative to classical
Multiple Correspondence Analysis.
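The input-sampling step of KACM can be sketched as follows, assuming a toy Burt Table for K = 2 questions with two modalities each (the drawing frequency of a modality is read off the diagonal of B):

```python
import numpy as np

rng = np.random.default_rng(0)

def kacm_inputs(B, n_draws):
    """Sample normalized Burt-Table rows: a modality is drawn with probability
    proportional to its frequency (the diagonal of B), and its normalized row
    is presented to the SOM step (sketch only; the SOM update is omitted)."""
    B = np.asarray(B, float)
    rows = B / B.sum(axis=1, keepdims=True)    # rows normalized to sum 1
    freq = np.diag(B) / np.diag(B).sum()       # modality frequencies
    picks = rng.choice(len(B), size=n_draws, p=freq)
    return rows[picks]

# toy symmetric Burt Table: K = 2 questions, 2 modalities each (M = 4)
B = np.array([[10, 0, 6, 4],
              [0, 8, 3, 5],
              [6, 3, 9, 0],
              [4, 5, 0, 9]])
X = kacm_inputs(B, 5)
print(X.shape)                                 # (5, 4)
print(np.allclose(X.sum(axis=1), 1.0))         # True
```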
The main advantages of both the KORRESP and KACM methods are their rapidity
and their small computing time. While the classical methods have to use several
representations, each carrying less and less information, ours provide only one map,
which is rough but unique and permits a rapid and complete interpretation. See [14]
and [16] for the details and financial applications.
7 Conclusion
So far, the theoretical study of the one-dimensional case is nearly complete. It
remains to find the convenient decreasing rate to ensure the ordering. In the
multidimensional setting, the problem is difficult. It seems that the Markov chain
is irreducible and that further results could come from a careful study of the
Ordinary Differential Equation (ODE) and from the powerful existing results about
cooperative dynamical systems.
On the other hand, the applications are more and more numerous, especially
in data analysis, where the representation capability of the organized data is very
valuable. The related methods make up a large and useful set of tools which
can be substituted for the classical ones. To increase their use in the statistical
community, it would be necessary to continue the theoretical study, in order to
provide quality criteria and performance indices with the same rigour as for the
classical methods.
Acknowledgements
We would like to thank the anonymous reviewers for their helpful comments.
References
[1] B.Angéniol, G.de la Croix Vaubois, J.Y. Le Texier, Self-Organizing Feature Maps
and the Travelling Salesman Problem, Neural Networks, Vol.1, 289-293, 1988.
[2] M.Benaïm, Dynamical System Approach to Stochastic Approximation, SIAM J.
of Optimization, 34, 2, 437-472, 1996.
[3] M.Benaïm, J.C.Fort, G.Pagès, Almost sure convergence of the one-dimensional
Kohonen algorithm, Proc. ESANN’97, M.Verleysen Ed., Editions D Facto, Brux-
elles, 193-198, 1997.
[4] M.Benaïm, J.C.Fort, G.Pagès, Convergence of the one-dimensional Kohonen al-
gorithm, submitted.
[5] C.M.Bishop, M.Svensén, C.K.I. Williams, GTM: the generative topographic map-
ping, to appear in Neural Computation, 1997.
[6] C.Bouton, G.Pagès, Self-organization of the one-dimensional Kohonen algorithm
with non-uniformly distributed stimuli, Stochastic Processes and their Applica-
tions, 47, 249-274, 1993.
[7] C.Bouton, G.Pagès, Convergence in distribution of the one-dimensional Kohonen
algorithm when the stimuli are not uniform, Advanced in Applied Probability, 26,
1, 80-103, 1994.
[8] C.Bouton, G.Pagès, About the multi-dimensional competitive learning vector
quantization algorithm with a constant gain, Annals of Applied Probability, 7, 3,
670-710, 1997.
[9] M.Cottrell, J.C.Fort, Etude d’un algorithme d’auto-organisation, Ann. Inst.
Henri Poincaré, 23, 1, 1-20, 1987.
[10] M.Cottrell, J.C.Fort, G.Pagès, Comments about Analysis of the Convergence
Properties of Topology Preserving Neural Networks, IEEE Transactions on Neu-
ral Networks, Vol. 6, 3, 797-799, 1995.
[11] M.Cottrell, P.Letremy, E.Roy, Analysing a contingency table with Kohonen
maps : a Factorial Correspondence Analysis, Proc. IWANN’93, J.Cabestany,
J.Mary, A.Prieto Eds., Lecture Notes in Computer Science, Springer, 305-311,
1993.
[12] M.Cottrell, P.Letremy, Classification et analyse des correspondances au moyen
de l’algorithme de Kohonen : application à l’étude de données socio-économiques,
Proc. Neuro-Nîmes, 74-83, 1994.
[13] M.Cottrell, J.C.Fort, G.Pagès, Two or Three Things that we know about
the Kohonen Algorithm, Proc. ESANN’94, M.Verleysen Ed., Editions D Facto,
Bruxelles, 235-244, 1994.
[14] M.Cottrell, S.Ibbou, Multiple correspondence analysis of a crosstabulation ma-
trix using the Kohonen algorithm, Proc. ESANN’95, M.Verleysen Ed., Editions
D Facto, Bruxelles, 27-32, 1995.
[15] M.Cottrell, B.Girard, Y.Girard, C.Muller, P.Rousset, Daily Electrical Power
Curves : Classification and Forecasting Using a Kohonen Map, From Natural
to Artificial Neural Computation, Proc. IWANN’95, J.Mira, F.Sandoval eds.,
Lecture Notes in Computer Science, Vol.930, Springer, 1107-1113, 1995.
[16] M.Cottrell, E. de Bodt, E.F.Henrion, Understanding the Leasing Decision with
the Help of a Kohonen Map. An Empirical Study of the Belgian Market, Proc.
ICNN’96 International Conference, Vol.4, 2027-2032, 1996.
[17] M.Cottrell, B.Girard, P.Rousset, Forecasting of curves using a Kohonen Clas-
sification, to appear in Journal of Forecasting, 1998.
[18] P.Demartines, Organization measures and representations of Kohonen maps,
In : J.Hérault (ed), First IFIP Working Group 10.6 Workshop, 1992.
[19] D.Dersch, P.Tavan, Asymptotic Level Density in Topological Feature Maps,
IEEE Tr. on Neural Networks, Vol.6, 1, 230-236, 1995.
[20] E.Erwin, K.Obermayer and K.Schulten, Self-organizing maps : stationary states,
metastability and convergence rate, Biol. Cyb., 67, 35-45, 1992.
[21] E.Erwin, K.Obermayer and K.Schulten, Self-organizing maps : ordering, con-
vergence properties and energy functions, Biol. Cyb., 67, 47-55, 1992.
[22] J.A.Flanagan, Self-Organizing Neural Networks, Phd. Thesis, Ecole Polytech-
nique Fédérale de Lausanne, 1994.
[23] J.A.Flanagan, Self-organisation in Kohonen’s SOM, Neural Networks, Vol. 6,
No.7, 1185-1197, 1996.
[24] J.C.Fort, Solving a combinatorial problem via self-organizing process : an ap-
plication of the Kohonen algorithm to the travelling salesman problem, Biol.
Cyb., 59, 33-40, 1988.
[25] J.C.Fort and G.Pagès, A non linear Kohonen algorithm, Proc. ESANN’94,
M.Verleysen Ed., Editions D Facto, Bruxelles, 221-228, 1994.
[26] J.C.Fort and G.Pagès, About the convergence of the generalized Kohonen al-
gorithm, Proc. ICANN’94, M.Marinero, P.G.Morasso Eds., Springer, 318-321,
1994.
[27] J.C.Fort and G.Pagès, On the a.s. convergence of the Kohonen algorithm with
a general neighborhood function, Annals of Applied Probability, Vol.5, 4, 1177-
1216, 1995.
[28] J.C.Fort and G.Pagès, About the Kohonen algorithm : strong or weak self-
organisation, Neural Networks, Vol.9, 5, 773-785, 1995.
[29] J.C.Fort and G.Pagès, Convergence of Stochastic Algorithms : from the Kush-
ner & Clark theorem to the Lyapunov functional, Advances in Applied Probabil-
ity, 28, 4, 1072-1094, 1996.
[30] J.C.Fort and G.Pagès, Asymptotics of the invariant distributions of a constant
step stochastic algorithm, to appear in SIAM Journal of Control and Optimiza-
tion, 1996.
[31] J.C.Fort and G.Pagès, Quantization vs Organization in the Kohonen SOM,
Proc. ESANN’96, M.Verleysen Ed., Editions D Facto, Bruges, 85-89, 1996.
[32] G.J.Goodhill, T.Sejnowski, Quantifying neighbourhood preservation in topo-
graphic mappings, Proc. 3rd Joint Symposium on Neural Computation, 61-82,
1996.
[33] M.Herrmann, H.-U. Bauer, T.Vilmann, Measuring Topology Preservation in
Maps of Real-World Data, Proc. ESANN’97, M.Verleysen Ed., Editions D Facto,
Bruxelles, 205-210, 1997.
[34] M.Hirsch, Systems of differential equations which are competitive or cooperative
II : convergence almost everywhere, SIAM J. Math. Anal., 16, 423-439, 1985.
[35] T.Kohonen, Self-organized formation of topologically correct feature maps,
Biol. Cyb., 43, 59-69, 1982.
[36] T.Kohonen, Analysis of a simple self-organizing process, Biol. Cyb., 44, 135-140,
1982.
[37] T.Kohonen, Self-Organization and Associative Memory, Springer, New York
Berlin Heidelberg, 1984 (3rd edition 1989).
[38] T.Kohonen, Speech recognition based on topology preserving neural maps, in :
I.Aleksander (ed) Neural Computation Kogan Page, London, 1989.
[39] T.Kohonen, Self-organizing maps : optimization approaches, in : T.Kohonen et
al. (eds) Artificial neural networks, vol. II, North Holland, Amsterdam, 981-990,
1991 .
[40] T.Kohonen, Self-Organizing Maps, Vol. 30, Springer, New York Berlin Heidel-
berg, 1995.
[41] H.J.Kushner, D.S.Clark, Stochastic Approximation for Constrained and Uncon-
strained Systems, Volume 26, in Applied Math. Science Series, Springer, 1978.
[42] S.P.Lloyd et al., Special Issue on Quantization, IEEE Tr. on Information The-
ory, Vol.IT-28, No.2, 129-137, 1982.
[43] D.Lamberton, G.Pagès, On the critical points of the 1- dimensional Competitive
Learning Vector Quantization Algorithm, Proc. ESANN’96, M.Verleysen Ed.,
Editions D Facto, Bruges, 1996.
[44] Y.Linde, A.Buzo, R.Gray, An Algorithm for Vector Quantizer Design, IEEE
Tr. on Communications, Vol. 28, No. 1, 84-95, 1980.
[45] Z.P.Lo, B.Bavarian, On the rate of convergence in topology preserving neural
networks, Biol. Cyb, 65, 55-63, 1991.
[46] Z.P.Lo, Y.Yu and B.Bavarian, Analysis of the convergence properties of topol-
ogy preserving neural networks, IEEE trans. on Neural Networks, 4, 2, 207-220,
1993.
[47] S.Luttrell, Derivation of a class of training algorithms, IEEE Transactions on
Neural Networks, 1 (2), 229-232, 1990.
[48] D.J.Newman, The Hexagon Theorem, Special Issue on Quantization, IEEE Tr.
on Information Theory, Vol.IT-28, No.2, 137-139, 1982.
[49] E.Oja, Self-organizing maps and computer vision, in : H.Wechsler (ed), Neural
networks for Perception, vol.1, Academic Press, Boston, 1992.
[50] G.Pagès, Voronoï tessellation, space quantization algorithms and numerical in-
tegration, in Proc. of the ESANN93 Conference, Bruxelles, Quorum Ed., (ISBN-
2-9600049-0-6), 221-228, 1993.
[51] G.Pagès, Numerical Integration by Space Quantization, Technical Report, 1996.
[52] H.Ritter and K. Schulten, On the stationary state of Kohonen’s self-organizing
sensory mapping, Biol. Cybern., 54, 99-106, 1986.
[53] H.Ritter and K. Schulten, Convergence properties of Kohonen’s topology con-
serving maps: fluctuations, stability and dimension selection, Biol. Cybern., 60,
59-71, 1988.
[54] H.Ritter T.Martinetz and K. Schulten, Topology conserving maps for mo-
tor control, Neural Networks, from Models to Applications, (L.Personnaz and
G.Dreyfus eds.), IDSET, Paris, 1989.
[55] H.Ritter, Asymptotic Level Density for a Class of Vector Quantization Pro-
cesses, IEEE Tr. on Neural Networks, Vol.2, 1, 173-175, 1991.
[56] H.Ritter T.Martinetz and K. Schulten, Neural computation and Self-Organizing
Maps, an Introduction, Addison-Wesley, Reading, 1992.
[57] H.Robbins and S. Monro, A stochastic approximation method, Ann. Math.
Stat., vol. 22, 400-407, 1951.
[58] P.Růžička, On convergence of learning algorithm for topological maps, Neural
Network World, 4, 413-424, 1993.
[59] A.Sadeghi, Asymptotic Behaviour of Self-Organizing Maps with Non-Uniform
Stimuli Distribution, Annals of Applied Probability, 8, 1, 281-289, 1997.
[60] A.Sadeghi, Self-organization property of Kohonen's map with general type of
stimuli distribution, submitted to Neural Networks, 1997.
[61] P.Thiran, M.Hasler, Self-organization of a one-dimensional Kohonen network
with quantized weights and inputs, Neural Networks, 7(9), 1427-1439, 1994.
[62] T.Villmann, R.Der, T.Martinetz, A novel approach to measure the topology
preservation of feature maps, Proc. ICANN’94, M.Marinero, P.G.Morasso Eds.,
Springer, 298-301, 1994.
[63] T.Villmann, R.Der, T.Martinetz, Topology Preservation in Self-Organizing Fea-
ture Maps: Exact Definition and Measurement, IEEE Tr. on Neural Networks,
Vol.8, 2, 256-266, 1997.
[64] P.L.Zador, Asymptotic Quantization Error of Continuous Signals and the
Quantization Dimension, Special Issue on Quantization, IEEE Tr. on Informa-
tion Theory, Vol.IT-28, No.2, 139-149, 1982.
[65] S.Zrehen, F.Blayo, A geometric organization measure for Kohonen’s map, in:
Proc. of Neuro-Nîmes, 603-610, 1992.
0704.1697 | Effect of the Spatial Dispersion on the Shape of a Light Pulse in a Quantum Well
L. I. Korovin, I. G. Lang
A. F. Ioffe Physical-Technical Institute, Russian Academy of Sciences, 194021 St. Petersburg, Russia
S. T. Pavlov†‡
†Facultad de Fisica de la UAZ, Apartado Postal C-580, 98060 Zacatecas, Zac., Mexico; and
‡P.N. Lebedev Physical Institute, Russian Academy of Sciences, 119991 Moscow, Russia; [email protected]
Reflectance, transmittance and absorbance of a symmetric light pulse, the carrying frequency of
which is close to the frequency of interband transitions in a quantum well, are calculated. Energy
levels of the quantum well are assumed discrete, and two closely located excited levels are taken into
account. A wide quantum well (the width of which is comparable to the length of the light wave,
corresponding to the pulse carrying frequency) is considered, and the dependence of the interband
matrix element of the momentum operator on the light wave vector is taken into account. The refractive
indices of the barriers and the quantum well are assumed to be equal to each other. The problem is solved for an
arbitrary ratio of radiative and nonradiative lifetimes of electronic excitations. It is shown that
the spatial dispersion essentially affects the shapes of reflected and transmitted pulses. The largest
changes occur when the radiative broadening is close to the difference of frequencies of interband
transitions taken into account.
PACS numbers: 78.47. + p, 78.66.-w
Irradiation of low-dimensional semiconductor systems
by light pulses and analysis of the reflected and transmitted
pulses allow one to obtain information regarding the
structure of energy levels as well as relaxation processes.
The radiative mechanism of relaxation of excited energy
levels in quantum wells arises due to a violation of the
translational symmetry perpendicular to the quantum
well plane1,2. At low temperatures, low impurity
doping and perfect boundaries of quantum wells, the
contributions of the radiative and nonradiative relax-
ation can be comparable. In such situation, one can-
not be limited by the linear approximation on the elec-
tron - light interaction. All the orders of the interac-
tion have to be taken into account3,4,5,6,7,8,9. Alterations
of asymmetrical10,11,12,13 and symmetrical13,14,15 light
pulses are valid for narrow quantum wells under condi-
tions k d ≪ 1 (d is the quantum well width, k is the mag-
nitude of the light wave vector corresponding to the car-
rying frequency of the light pulse) and an independence
of optical characteristics of a quantum well on d. How-
ever, a situation is possible when the size quantization
is preserved and for wide quantum wells if k d ≥ 1 (see
corresponding estimates in16). In such a case, we have to
take into account the spatial dispersion of a monochro-
matic wave 9,19 and waves composing the light pulse16.
Our investigation is devoted to the influence of the spa-
tial dispersion on the optical characteristics (reflectance,
transmittance and absorbance) of a quantum well irradi-
ated by the symmetric light pulse. A system, consisting
of a deep quantum well of type I, situated inside of the
space interval 0 ≤ z ≤ d, and two semi-infinite barriers,
is considered. A constant quantizing magnetic field is
directed perpendicular to the quantum well plane, which
provides the discrete energy levels of the electron system.
A stimulating light pulse propagates along the z
axis from the side of negative values of z. The barriers are
transparent for the light pulse which is absorbed in the
quantum well to initiate the direct interband transitions.
An intrinsic semiconductor and zero temperature are
assumed.
The final results are obtained for two closely spaced energy
levels of the electronic system in a quantum well. The effect
of other levels on the optical characteristics may be
neglected if the carrying frequency ωℓ of the light pulse
is close to the frequencies ω1 and ω2 of the doublet levels,
and the other energy levels are fairly distant. It is assumed
that the doublet is situated near the minimum of the
conduction band, the energy levels may be considered in
the effective mass approximation, and the barriers are
infinitely high.
In the case h̄ K⊥ = 0 (h̄ K⊥ is the vector of the quasi-
momentum of electron-hole pair in the quantum well
plane) in a quantum well, the discrete energy levels are
the excitonic energy levels in a zero magnetic field or
energy levels in a quantizing magnetic field directed per-
pendicularly to the quantum well plane. As an example,
the energy level of the electron-hole pair in a quantizing
magnetic field directed along the z axis (without taking
into account the Coulomb interaction between the elec-
tron and hole which is a weak perturbation for the strong
magnetic fields and not too wide quantum wells) is con-
sidered.
I. THE ELECTRIC FIELD
Let us consider a situation when a symmetric excit-
ing light pulse propagates through a single quantum well
along the z axis from the side of negative values of z.
Analogously to13,14,15, the electric field is chosen as
E0(z, t) = E0 eℓ e^{−iωℓp} { Θ(p) e^{−γℓp/2} + [1 − Θ(p)] e^{γℓp/2} } + c.c., (1)
where E0 is the real amplitude, p = t − νz/c,

eℓ = (ex ± i ey)/√2

are the unit vectors of the circular polarization, ex, ey
are the real unit vectors, Θ(p) is the Heaviside function,
1/γℓ determines the pulse width, c is the light velocity in
vacuum, and ν is the refractive index, which is assumed to be
the same for the quantum well and the barriers (the
approximation of a homogeneous medium). The Fourier
transform of (1) is as follows:
E0(z, ω) = e^{ikz} [ eℓ E0(ω) + e*ℓ E0(−ω) ],

E0(ω) = E0 γℓ / [ (ω − ωℓ)² + (γℓ/2)² ],   k = νω/c. (2)
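As a consistency check of Eq. (2), one can integrate the pulse of Eq. (1) numerically and compare with the Lorentzian; units and the values of ωℓ, γℓ below are arbitrary illustrations:

```python
import numpy as np

wl, gam = 5.0, 0.8                         # illustrative omega_l and gamma_l
t = np.linspace(-80.0, 80.0, 400001)
dt = t[1] - t[0]
# Eq. (1) at z = 0: the two Theta(p) branches combine into exp(-gam*|t|/2)
pulse = np.exp(-1j * wl * t) * np.exp(-gam * np.abs(t) / 2)
errs = []
for w in (4.0, 5.0, 6.5):
    num = np.sum(pulse * np.exp(1j * w * t)) * dt     # int dt e^{iwt} E(t)/E_0
    ana = gam / ((w - wl) ** 2 + (gam / 2) ** 2)      # Lorentzian of Eq. (2)
    errs.append(abs(num - ana))
print(max(errs) < 1e-4)                    # True
```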
The electric field in the region z ≤ 0 consists of the
sum of the exciting and reflected pulses. Its Fourier
transform may be written as

Eℓ(z, ω) = E0(z, ω) + ∆Eℓ(z, ω),
where ∆E ℓ(z, ω) is the electric field of the reflected pulse
∆Eℓ(z, ω) = eℓ ∆Eℓ(z, ω) + e*ℓ ∆Eℓ(z, −ω). (3)
In the region z ≥ d, there is only the transmitted pulse,
and its electric field is
Er(z, ω) = eℓ Er(z, ω) + e*ℓ Er(z, −ω). (4)
It is assumed below that the pulse, being absorbed in
the quantum well, stimulates the interband transitions
and, consequently, the appearance of a current. In the
barriers, absorption is absent. Therefore, for the complex
amplitudes ∆Eℓ(z, ω) and Er(z, ω) in the barriers at z ≤ 0
and z ≥ d, we obtain the equation

d²E/dz² + k²E = 0. (5)
The equation for the electric field inside the quantum
well (0 ≤ z ≤ d) has the form

d²E/dz² + k²E = −(4πiω/c²) J(z, ω), (6)
where J(z, ω) is the Fourier transform of the current density
averaged over the ground state of the system. The current
is induced by the monochromatic wave of frequency ω. In
the case of two excited energy levels, J(z, ω) is expressed
as follows:

J(z, ω) = (iνc/4π) Σ_{j=1,2} (γrj/ω̃j) Φj(z) ∫_0^d dz′ Φj(z′) E(z′, ω), (7)

ω̃j = ω − ωj + iγj/2, (8)
where γj is the nonradiative damping of the doublet levels,
and γrj is the radiative damping of the doublet levels
in the case of narrow quantum wells, when the spatial
dispersion of electromagnetic waves may be neglected.
In particular, the doublet system may be represented
by a magnetopolaron state18. In such a case,

γr,j = γr Qj ,   γr = (2e²/h̄cν) (p²cv/(h̄ωg m0)) (eH/(m0c)),

where m0 is the free electron mass, H is the magnetic
field, e is the electron charge, and pcv is the matrix element
of the momentum corresponding to the circular
polarization, p²cv = |pcv,x|² + |pcv,y|². The factor
Qj = (1/2) [ 1 ± h̄(Ωc − ωLO) / √( h̄²(Ωc − ωLO)² + (∆Epol)² ) ]

determines the change of the radiative lifetime at a
deviation of the magnetic field from the resonant value,
at which the resonance condition Ωc = ωLO is fulfilled.
∆Epol is the polaron splitting; Ωc and ωLO are the cyclotron
frequency and the optical phonon frequency, respectively.
At resonance, Qj = 1/2 and γr1 = γr2.
When calculating J(z, ω), it was assumed that the
Lorentz force, determined by the external magnetic field,
is large in comparison with the Coulomb and exchange
forces in the electron-hole pair. In that case, the vari-
ables z (along magnetic field) and r⊥ (in the quantum
well plane) in the wave function of the electron-hole pair
may be separated. This condition is fulfilled for a
quantum well based on GaAs at the magnetic fields
corresponding to magnetopolaron formation9. Besides,
if the energy of the size quantization exceeds the
Coulomb and exchange energies, the electron-hole pair
may be considered as a free particle. Then, in the
effective-mass approximation with infinitely high barriers,
the wave function describing the dependence on z
takes the simple form
Φj(z) = (2/d) sin(πmcz/d) sin(πmvz/d), 0 ≤ z ≤ d, (9)
and Φj(z) = 0 in the barriers, where mc (mv) are the
quantum numbers of the size quantization of the electron
(hole).
In real systems, the approximation (9) is not always
fulfilled. However, taking the Coulomb and exchange
interactions into account would result only in some changes
of the function Φj(z), which does not change the optical
characteristics qualitatively, as was shown for the case of
monochromatic irradiation20.
The indices j = 1 and j = 2 in Φj(z) correspond to the
pairs of quantum numbers of the size quantization in a
direct interband transition: m(1)c(v) corresponds to the
index j = 1, and m(2)c(v) corresponds to the index j = 2.
In interband transitions, the Landau quantum numbers
are conserved. The total electric field E enters the RHS
of (7), which is connected with the refusal of perturbation
theory in the coupling constant e²/h̄c.
In further calculations, the equality of the quantum
numbers m(1)c(v) = m(2)c(v) is assumed. Then

Φ1(z) = Φ2(z) = Φ(z),

and the current density in the RHS of (7) takes the form
J(z, ω) = (iνc/4π) [ γr1/(ω − ω1 + iγ1/2) + γr2/(ω − ω2 + iγ2/2) ]
× Φ(z) ∫_0^d dz′ Φ(z′) E(z′, ω). (10)
With the help of the indicated simplifications, as was
shown in9,18,20, the field amplitudes in the Fourier
representation, ∆Eℓ(z, ω) and Er(z, ω), take the form

∆Eℓ(z, ω) = −iE0(ω) (−1)^{mc+mv} e^{−ik(z−d)} N,

Er(z, ω) = E0(ω) e^{ikz} (1 − iN), (11)
where E0(ω) is given in (3). Here, the frequency depen-
dence is determined by the function
N = ε (γ r1 ω̃2 + γ r2 ω̃1)/2
ω̃1 ω̃2 + iε(γ r1 ω̃2 + γ r2 ω̃1)/2
. (12)
The function N includes the quantity

ε = ε′ + iε″, (13)

which determines the influence of the spatial dispersion on the radiative broadening (ε′γ_r) and the shift (ε″γ_r) of the doublet levels. ε′ and ε″ are equal to9,20:

ε′ = Re ε = 2B^2 [ 1 − (−1)^(m_c+m_v) cos kd ], (14)

ε″ = Im ε = 2B { [ (1 + δ_(m_c,m_v))(m_c + m_v)^2 + (m_c − m_v)^2 ] / (8 m_c m_v) − (−1)^(m_c+m_v) B sin kd − (2 + δ_(m_c,m_v))(kd)^2 / (8π^2 m_c m_v) }, (15)

B = 4π^2 m_c m_v kd / { [π^2 (m_c + m_v)^2 − (kd)^2] [(kd)^2 − π^2 (m_c − m_v)^2] }

(if kd → 0, then ε → 1 for m_c = m_v, an allowed transition, and ε → 0 for m_c ≠ m_v, a forbidden transition).
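The limits quoted in parentheses can be checked numerically against the reconstructed expressions for B and ε′ above. The sketch below (Python with NumPy; the function names are ours, and kd is the dimensionless product of the light wave vector and the well width) evaluates ε′ near kd = 0 for an allowed and a forbidden transition:

```python
import numpy as np

# Coefficient B and real part eps' of the spatial-dispersion factor,
# Eqs. (14)-(15) as written above. All inputs are dimensionless.
def B_coef(mc, mv, kd):
    return (4*np.pi**2*mc*mv*kd /
            ((np.pi**2*(mc + mv)**2 - kd**2) * (kd**2 - np.pi**2*(mc - mv)**2)))

def eps_prime(mc, mv, kd):
    B = B_coef(mc, mv, kd)
    return 2*B**2*(1 - (-1)**(mc + mv)*np.cos(kd))

# Limits quoted in the text as kd -> 0:
# eps -> 1 for an allowed transition (mc = mv), eps -> 0 for a forbidden one.
assert abs(eps_prime(1, 1, 1e-3) - 1.0) < 1e-5    # allowed, mc = mv = 1
assert abs(eps_prime(2, 2, 1e-3) - 1.0) < 1e-4    # allowed, mc = mv = 2
assert abs(eps_prime(1, 2, 1e-3)) < 1e-5          # forbidden, mc != mv
```

For kd → 0 with m_c = m_v, B behaves as 1/kd, so the factor 2B^2(1 − cos kd) tends to 1, reproducing the allowed-transition limit.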
II. THE TIME DEPENDENCE OF THE ELECTRIC FIELD OF REFLECTED AND TRANSMITTED LIGHT PULSES
With the help of the standard formulas, let us pass to the time representation:

ΔE_ℓ(z, t) ≡ ΔE_ℓ(s) = (1/2π) ∫ dω e^(−iωs) ΔE_ℓ(z, ω), s = t + νz/c, (16)

E_r(z, t) ≡ E_r(p) = (1/2π) ∫ dω e^(−iωp) E_r(z, ω), p = t − νz/c. (17)

The vectors ΔE_ℓ(s) and E_r(p) have the form

ΔE_ℓ(s) = e_ℓ ΔE_ℓ(s) + c.c., E_r(p) = e_ℓ E_r(p) + c.c. (18)
It is seen from expressions (11) and (12) that the denominator in the integrands of (16) and (17) is the same. It may be conveniently transformed to the form

ω̃_1 ω̃_2 + i(ε/2)(γ_r1 ω̃_2 + γ_r2 ω̃_1) = (ω − Ω_1)(ω − Ω_2), (19)
where Ω_1 and Ω_2 determine the poles of the integrand in the complex plane ω. They are equal to

Ω_1,2 = (1/2) { ω_1 + ω_2 − (i/2)(γ_1 + γ_2) − (iε/2)(γ_r1 + γ_r2) ± √( [ω_1 − ω_2 − (i/2)(γ_1 − γ_2) − (iε/2)(γ_r1 − γ_r2)]^2 − ε^2 γ_r1 γ_r2 ) }. (20)

Thus, in the integrands of (16) and (17), there are 4 poles: ω = ω_ℓ ± iγ_ℓ/2 and ω = Ω_1,2. The pole ω = ω_ℓ + iγ_ℓ/2 is situated in the upper half-plane; the others are situated in the lower half-plane.
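The pole structure can be verified numerically. The sketch below (Python with NumPy; all parameter values, in eV, are illustrative assumptions of the right order of magnitude, not the ones used in the figures) builds Ω_1,2 from Eq. (20) and checks that they are the roots of the quadratic form (19) and that both lie in the lower half-plane:

```python
import numpy as np

# Poles Omega_{1,2} of Eq. (20). Parameter values (eV) are illustrative.
w1, w2 = 1.000, 0.935            # doublet frequencies omega_1, omega_2
g1, g2 = 5e-4, 5e-4              # nonradiative broadenings gamma_1, gamma_2
gr1, gr2 = 5e-5, 5e-5            # radiative broadenings gamma_r1, gamma_r2
eps = 0.8 + 0.1j                 # epsilon = eps' + i*eps'' at some kd

a = w1 + w2 - 0.5j*(g1 + g2) - 0.5j*eps*(gr1 + gr2)
d = w1 - w2 - 0.5j*(g1 - g2) - 0.5j*eps*(gr1 - gr2)
root = np.sqrt(d**2 - eps**2*gr1*gr2 + 0j)
Omega1, Omega2 = 0.5*(a + root), 0.5*(a - root)

# Check Eq. (19): (w - Omega1)(w - Omega2) must equal
# wt1*wt2 + i(eps/2)(gr1*wt2 + gr2*wt1), with wt_j = w - w_j + i*g_j/2.
w = 0.97 + 0.01j                 # arbitrary test frequency
wt1, wt2 = w - w1 + 0.5j*g1, w - w2 + 0.5j*g2
lhs = (w - Omega1)*(w - Omega2)
rhs = wt1*wt2 + 0.5j*eps*(gr1*wt2 + gr2*wt1)
assert abs(lhs - rhs) < 1e-12
# Both poles lie in the lower half-plane, as stated in the text.
assert Omega1.imag < 0 and Omega2.imag < 0
```

For the well-separated doublet assumed here the principal branch of the square root gives Ω_1 ≈ ω_1 and Ω_2 ≈ ω_2, with small complex shifts from the broadenings.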
Integrating in the complex plane ω, we obtain that the function ΔE_ℓ(z, t), determining, according to (18), the electric field vector of the reflected pulse ΔE_ℓ(z, t), has the form

ΔE_ℓ(z, t) = −iE_0 (−1)^(m_c+m_v) e^(ikd) × { R_1 [1 − Θ(s)] + (R_2 + R_3 + R_4) Θ(s) }, (21)

where

R_1 = exp(−iω_ℓ s + γ_ℓ s/2) [ (γ̄_r1/2)/(ω_ℓ − Ω_1 + iγ_ℓ/2) + (γ̄_r2/2)/(ω_ℓ − Ω_2 + iγ_ℓ/2) ],

R_2 = exp(−iω_ℓ s − γ_ℓ s/2) [ (γ̄_r1/2)/(ω_ℓ − Ω_1 − iγ_ℓ/2) + (γ̄_r2/2)/(ω_ℓ − Ω_2 − iγ_ℓ/2) ],

R_3 = −iγ_ℓ exp(−iΩ_1 s) (γ̄_r1/2) / [ (ω_ℓ − Ω_1 − iγ_ℓ/2)(ω_ℓ − Ω_1 + iγ_ℓ/2) ],

R_4 = −iγ_ℓ exp(−iΩ_2 s) (γ̄_r2/2) / [ (ω_ℓ − Ω_2 − iγ_ℓ/2)(ω_ℓ − Ω_2 + iγ_ℓ/2) ], (22)

where

γ̄_r1 = ε′γ_r1 + Δγ, γ̄_r2 = ε′γ_r2 − Δγ,

Δγ = [ ε′γ_r1 (Ω_2 − ω_2 + iγ_2/2) + ε′γ_r2 (Ω_1 − ω_1 + iγ_1/2) ] / (Ω_1 − Ω_2). (23)
The function E_r(z, t), corresponding to the transmitted light pulse, is represented in the form

E_r(z, t) = E_0 { T_1 [1 − Θ(p)] + (T_2 + T_3 + T_4) Θ(p) }, (24)

where

T_1 = exp(−iω_ℓ p + γ_ℓ p/2) M(ω_ℓ + iγ_ℓ/2) / [ (ω_ℓ − Ω_1 + iγ_ℓ/2)(ω_ℓ − Ω_2 + iγ_ℓ/2) ],

T_2 = exp(−iω_ℓ p − γ_ℓ p/2) M(ω_ℓ − iγ_ℓ/2) / [ (ω_ℓ − Ω_1 − iγ_ℓ/2)(ω_ℓ − Ω_2 − iγ_ℓ/2) ],

T_3 = −iγ_ℓ exp(−iΩ_1 p) M(Ω_1) / [ (Ω_1 − Ω_2)(ω_ℓ − Ω_1 − iγ_ℓ/2)(ω_ℓ − Ω_1 + iγ_ℓ/2) ],

T_4 = iγ_ℓ exp(−iΩ_2 p) M(Ω_2) / [ (Ω_1 − Ω_2)(ω_ℓ − Ω_2 − iγ_ℓ/2)(ω_ℓ − Ω_2 + iγ_ℓ/2) ]. (25)

The function M has the structure

M(ω) = (ω − ω_1 + iγ_1/2)(ω − ω_2 + iγ_2/2) − (ε″/2) [ γ_r1 (ω − ω_2 + iγ_2/2) + γ_r2 (ω − ω_1 + iγ_1/2) ].
When the electric field of the stimulating light pulse E_0(z, t) (determined in (2)) is extracted from E_r(z, t), i.e., when it is assumed that

E_r(z, t) = E_0(z, t) + ΔE_r(z, t), (26)

then ΔE_r(z, t) differs from ΔE_ℓ(z, t) only by the substitution of the variable s = t + νz/c by p = t − νz/c and by the absence of the factor (−1)^(m_c+m_v) exp(ikd).

Thus, when taken into account, the spatial dispersion provides a renormalization of the radiative damping γ_ri. In the numerators of formulas (21), the renormalization leads to multiplication of γ_ri by the real factor ε′, i.e., it decreases the value of γ_ri (plots of the functions ε′ and ε″ are presented in9). In the denominators, γ_ri is multiplied by the complex function ε, which means the appearance, together with the change of the radiative broadening, of a shift of the resonant frequencies. In the limit kd = 0, expressions (21)-(23) coincide with those obtained in14.
FIG. 1: The reflectance R, transmittance T, absorbance A, and stimulating pulse P as time-dependent functions for three magnitudes of the parameter kd in the case of a long stimulating pulse (γ_ℓ ≪ Δω), γ_r ≪ γ, γ_ℓ. Δω = 6.65·10^−2 eV, ω_ℓ = Re Ω_1 = Ω_res.
FIG. 2: Same as in Fig. 1 for an exciting pulse of middle duration (γ_ℓ ≃ Δω), γ_r ≪ γ ≪ γ_ℓ.
III. THE REFLECTANCE, TRANSMITTANCE AND ABSORBANCE OF THE STIMULATING LIGHT PULSE
The energy flux S(p) corresponding to the electric field of the stimulating light pulse is equal to

S(p) = e_z (c/2πν) (E_0(z, t))^2 = e_z S_0 P(p), (27)

where S_0 = cE_0^2/2πν and e_z is the unit vector along the direction of the light pulse. The dimensionless function

P(p) = (E_0(z, t))^2 / E_0^2 = Θ(p) e^(−γ_ℓ p) + [1 − Θ(p)] e^(γ_ℓ p) (28)

determines the spatial and time dependence of the energy flux of the stimulating pulse. The flux transmitted through the quantum well has the form

S_r = e_z (c/2πν) (E_r(z, t))^2 = e_z S_0 T(p), (29)

and the reflected energy flux has the form

S_ℓ = −e_z (c/2πν) (E_ℓ(z, t))^2 = −e_z S_0 R(s). (30)
The dimensionless functions T(p) and R(s) correspond to the transmitted and reflected parts of the energy flux of the stimulating pulse. The dimensionless absorbance is defined as

A(p) = P(p) − R(p) − T(p) (31)

(since for reflection z ≤ 0, the variable in R is s = t − ν|z|/c).
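The pulse profile of Eq. (28) and its basic properties can be sketched as follows (Python with NumPy; the value of γ_ℓ is an arbitrary illustration, not one of the parameter sets used in the figures):

```python
import numpy as np

# Dimensionless pulse profile P(p) of Eq. (28):
# P(p) = Theta(p) exp(-gamma_l p) + [1 - Theta(p)] exp(+gamma_l p).
gamma_l = 0.2                       # illustrative pulse-width parameter

def P(p):
    return np.where(p >= 0, np.exp(-gamma_l*p), np.exp(gamma_l*p))

# The profile is continuous, peaks at p = 0 with P(0) = 1, and its
# total area is 2/gamma_l (two symmetric exponential tails).
p = np.linspace(-200.0, 200.0, 400001)
f = P(p)
area = np.sum(0.5*(f[1:] + f[:-1])*np.diff(p))   # trapezoidal rule
assert abs(P(0.0) - 1.0) < 1e-12
assert abs(area - 2.0/gamma_l) < 1e-3
```

A long pulse in the sense used below corresponds to small γ_ℓ (a spectrally narrow pulse), a short pulse to large γ_ℓ.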
The dependencies of the reflectance R, transmittance T, absorbance A, and stimulating pulse P on the variable p (or s for R) for the case m_c = m_v = 1 are represented in the figures. It was also assumed that

γ_r1 = γ_r2 = γ_r, γ_1 = γ_2 = γ. (32)
It follows from (21) and (24) that the resonant frequencies are ω_ℓ = Re Ω_1 and ω_ℓ = Re Ω_2. The calculations were performed for

ω_ℓ = Re Ω_1 = Ω_res. (33)

Let us pass from the frequency ω_ℓ to

Ω = ω_ℓ − ω_1; (34)

then the resonant frequency is

Ω_res = (1/2) [ −Δω + ε″γ_r + Re √( (Δω)^2 − ε^2 γ_r^2 ) ]. (35)

It depends on three parameters, Δω = ω_1 − ω_2, γ_r and kd, since the complex function ε depends on kd (see (15)). The functions R, T, A and P are homogeneous functions of the inverse lifetimes and of the frequencies ω_1, ω_2, ω_ℓ. Therefore, the choice of the measurement units is arbitrary. For the sake of definiteness, all these values are expressed in eV.
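As a consistency check between Eqs. (20) and (35), the sketch below (Python with NumPy; the parameter values in eV are illustrative assumptions) verifies Ω_res = Re Ω_1 − ω_1 for γ_1 = γ_2 = γ and γ_r1 = γ_r2 = γ_r:

```python
import numpy as np

# Omega_1 from Eq. (20) with equal gammas, and Omega_res from Eq. (35).
# Parameter values (eV) are illustrative assumptions only.
w1, w2 = 1.500, 1.435            # Delta omega = omega_1 - omega_2 = 0.065 eV
gamma, gamma_r = 5e-4, 5e-5
eps = 0.85 + 0.15j               # epsilon = eps' + i*eps'' at some kd
dw = w1 - w2

root = np.sqrt(dw**2 - eps**2*gamma_r**2 + 0j)
Omega1 = 0.5*(w1 + w2 - 1j*gamma - 1j*eps*gamma_r + root)   # Eq. (20)
Omega_res = 0.5*(-dw + eps.imag*gamma_r + root.real)        # Eq. (35)

# The resonance condition omega_l = Re Omega_1, rewritten in the
# variable Omega = omega_l - omega_1, reproduces Omega_res.
assert abs((Omega1.real - w1) - Omega_res) < 1e-12
```

The check works because, for equal broadenings, the discriminant of Eq. (20) reduces to (Δω)^2 − ε^2 γ_r^2 and Re(−iεγ_r) = ε″γ_r.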
The time dependence of the optical characteristics of a quantum well is represented in the figures for different magnitudes of kd. The curves corresponding to kd = 0 were obtained in14. It was assumed in the calculations that Δω = 0.065 eV, which corresponds to the magnetopolaron state in a GaAs-based quantum well with the width d = 300 Å18,19,21.
FIG. 3: Same as in Fig. 1 for a long stimulating pulse (γ_ℓ ≪ Δω) and γ = 0.
FIG. 4: Same as in Fig. 1 for an exciting pulse of middle duration (γ_ℓ ≃ Δω) and γ = 0.
IV. THE DISCUSSION OF RESULTS
Fig. 1 corresponds to a long stimulating pulse (spectrally narrow in comparison with Δω) and a small radiative broadening (γ_r ≪ γ, γ_ℓ). In this case, the transmittance T dominates. The shape of the curve differs weakly from P and depends weakly on kd. The influence of the spatial dispersion is seen in the curves R and A. For example, the reflectance R at kd = 3 is two times smaller than at kd = 0. However, the magnitude of R is only fractions of a percent.
FIG. 5: Same as in Fig. 1 for four magnitudes of the parameter kd in the case when Δω is close to γ_r; γ_ℓ ≫ γ_r ≫ γ.

Fig. 2 corresponds to a stimulating light pulse of middle duration, when γ_ℓ ≃ Δω and γ_r ≪ γ ≪ γ_ℓ. New features appear: light generation (negative absorption) after the light pulse has passed, and oscillations of R, A and T. The generation is a consequence of the fact that the electronic system has no time to radiate the energy during the propagation of such a pulse. The oscillations are a consequence of beatings with the frequency (under the condition ω_ℓ = Re Ω_1)

Re (Ω_1 − Ω_2) = Re √( (ω_1 − ω_2)^2 − (ε′ + iε″)^2 γ_r^2 ). (36)
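A quick numerical check of the statement below that, for γ_r ≪ Δω, the beating frequency (36) is almost ω_1 − ω_2 and hence nearly independent of kd (Python with NumPy; the values of ε and γ_r are illustrative assumptions):

```python
import numpy as np

# Beating frequency of Eq. (36) for gamma_r1 = gamma_r2 = gamma_r.
# Parameter values (eV) are illustrative assumptions only.
dw = 0.065                       # Delta omega = omega_1 - omega_2
gamma_r = 5e-4                   # radiative broadening, gamma_r << dw
eps = 0.9 + 0.2j                 # epsilon = eps' + i*eps'' at some kd

beat = np.real(np.sqrt(dw**2 - eps**2*gamma_r**2 + 0j))
# The correction from eps is of order (gamma_r/dw)^2, so moderate
# changes of eps'(kd) and eps''(kd) barely shift the beat frequency.
assert abs(beat - dw)/dw < 1e-4
```

Only when Δω approaches γ_r (the situation of Fig. 5) does the ε-dependent term under the square root, and thus kd, matter appreciably.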
A noticeable effect of the spatial dispersion takes place in the reflectance R during the transmission of the pulse, as well as after it. The spatial dispersion affects the transmittance T and the absorbance A after the pulse has passed through the quantum well, when these values are small.
In Figs. 3 and 4, the optical characteristics are represented at γ = 0 for a long stimulating pulse (γ_ℓ ≪ Δω, Fig. 3) and for a pulse of middle duration, when γ_ℓ ≃ Δω (Fig. 4). Since real absorption is absent in that case, one has to interpret the function A defined in (31) as the part of the energy stored temporarily by the quantum well due to the interband transitions (if A > 0), or as the part of the energy generated by the quantum well during and after the propagation of the pulse (A < 0). The same concerns Fig. 2; there, however, the part of the stored energy which disappears as γ → 0 corresponds to real absorption. The oscillation period in Figs. 2 and 4 does not depend on the parameter kd since, at the chosen magnitudes of the parameters Δω and γ_r, the beating frequency (36) is almost equal to ω_1 − ω_2, and the comparatively small changes of the functions ε′ and ε″ have practically no effect on the beating frequency.
In Fig. 5, where Δω is close to γ_r (6.65·10^−3 eV and 6.66·10^−3 eV, respectively), the stimulating light pulse is 5 times shorter than in Figs. 3 and 4, and γ_ℓ ≫ γ_r ≫ γ. In that case, the spatial dispersion strongly affects the optical characteristics. In the interval 0 ≤ kd ≤ 3, the reflectance increases approximately 8 times, and the transmittance decreases 6 times. Such a sharp change is due to the dependence of γ̄_r1 and γ̄_r2 on kd. For example, at kd = 0, γ̄_r1 = −17303.9 and γ̄_r2 = 193066.6, while at kd = 3, Re γ̄_r1 = 1960.21 and Re γ̄_r2 = 442.718. At the same time, R, T ≤ 1, since they result from the subtraction of large magnitudes, and it is these differences that are sensitive to changes of kd.
In14, it was shown that at kd = 0 there are singular points on the time axis where T = A = 0 and R = P, or R = A = 0 and T = P (total reflection or total transmission). It is seen from the figures that the singular points are preserved also in the case kd ≠ 0; they are only slightly shifted. In Fig. 5, the point of total transmission appears at kd = 0. At kd = 0.5 this point disappears, and at kd = 1.5 and kd = 3.0 the point of total reflection appears. If kd = 1.5, then R = P and A + T = 0 (A < 0). If kd = 3.0, then as before R = P, but A = T = 0. Thus, growth of the parameter kd changes the type of the singular point.
Thus, the spatial dispersion of the electromagnetic waves forming the light pulse noticeably affects the optical characteristics of a quantum well. This influence is especially strong when γ_r ≃ Δω.

Let us note in conclusion that the results obtained above are valid for equal refraction indices of the barriers and the quantum well. Otherwise, one has to take into account reflection at the boundaries of the quantum well. However, this problem is outside the scope of the present article.
1 L. C. Andreani, F. Tassone, F. Bassani. Sol. State Com-
mun. 77, 9, 641 (1991).
2 L. C. Andreani. In: Confined electrons and phonons. Eds
E. Burstein, C. Weisbuch, Plenum Press, N. Y. (1995), p.
3 E. L. Ivchenko. Fiz. Tverd. Tela, 1991, 33, N 8, 2388
(Physics of the Solid State (St. Petersburg), 33, 2182
(1991)).
4 F. Tassone, F. Bassani, L. C. Andreani. Phys. Rev. B 45,
11, 6023 (1992).
5 T.Stroucken, A. Knorr, C. Anthony, P. Thomas, S. W.
Koch, M. Koch, S. T. Gundiff, J. Feldman, E. O. Göbel.
Phys. Rev. Lett. 74, 9, 2391 (1996).
6 T.Stroucken, A. Knorr, P. Thomas, S. W. Koch. Phys.
Rev. B 53, 4, 2026 (1996).
7 L. C. Andreani, G. Panzarini, A. V. Kavokin, M. R.
Vladimirova. Phys. Rev. B 57, 8, 4670 (1998).
8 M. Hübner, T. Kuhl, S. Haas, T.Stroucken, S. W. Koch,
R. Hey, K. Ploog. Sol. State Commun., 105, 2, 105 (1998).
9 L. I. Korovin, I. G. Lang, D. A. Contreras-Solorio, S.
T. Pavlov. Fiz. Tverd. Tela, 43, 2091 (2001) (Physics
of the Solid State (St. Petersburg), 43, 2182 (2001));
cond-mat/0104262.
10 I. G Lang, V. I. Belitsky, M. Cardona. Phys. Stat. Sol. (a)
164, 1, 307 (1997).
11 I. G Lang, V. I. Belitsky. Solid. State Commun. 107, 10,
577 (1998).
12 I. G Lang, V. I. Belitsky. Phys. Lett. A 245, 329 (1998).
13 I. G. Lang, L. I. Korovin, A. Contreras-Solorio, S.
T. Pavlov. Fiz. Tverd. Tela, 43, 1117 (2001) (Physics
of the Solid State (St. Petersburg), 43, 1159 (2001));
cond-mat/ 0004178.
14 D. A. Contreras-Solorio, S. T. Pavlov, L. I. Korovin, I. G.
Lang. Phys. Rev. B 62, 24, 16815 (2000); cond-mat/0002229.
15 I. G. Lang, L. I. Korovin, A. Contreras-Solorio, S. T.
Pavlov. Fiz. Tverd. Tela, 42, 2230 (2000) ( Physics of
the Solid State (St. Petersburg) , 42, N 12, 2300 (2000));
cond-mat/0006364.
16 L. I. Korovin, I. G. Lang, D. A. Contreras-Solorio, S.
T. Pavlov. Fiz. Tverd. Tela, 44, 1681 (2002) (Physics
of the Solid State (St. Petersburg), 44, 1759 (2002));
cond-mat/0203390.
17 I.V.Lerner, J.E.Lozovik. Zh. Eksp. Teor. Fiz., 78, N 3,
1167 (1980) (JETP, 51,588 (1980)).
18 I. G. Lang, L. I. Korovin, A. Contreras-Solorio, S.
T. Pavlov, Fiz. Tverd.Tela, 44, 2084 (2002) (Physics
of the Solid State (St. Petersburg), 44, 2181 (2002));
cond-mat/ 0001248.
19 I. G. Lang, L. I. Korovin, A. Contreras-Solorio, S.
T. Pavlov. Fiz. Tverd.Tela, 48, 1795 (2006) (Physics
of the Solid State (St. Petersburg), 48, 1693 (2006));
cond-mat/ 0403302.
20 L. I. Korovin, I. G. Lang, S. T. Pavlov. Fiz. Tverd.Tela, 48,
2208 (2006) (Physics of the Solid State (St. Petersburg),
48, 2337 (2006)); cond-mat/ 0001248.
21 L. I. Korovin, I. G. Lang, S. T. Pavlov. Zh. Eksp. Teor.
Fiz., 115, 187 (1999) (JETP, 88, 105 (1999)).
|
0704.1698 | The origin of the anomalously strong influence of out-of-plane disorder
on high-Tc superconductivity | The origin of the anomalously strong influence of out-of-plane
disorder on high-T
superconductivity
Y. Okada,1 T. Takeuchi,2 T. Baba,3 S. Shin,3 and H. Ikuta1
1Department of Crystalline Materials Science,
Nagoya University, Nagoya 464-8603, Japan
2EcoTopia Science Institute, Nagoya University, Nagoya 464-8603, Japan
3Institute for Solid State Physics (ISSP),
University of Tokyo, Kashiwa 277-8581, Japan
(Dated: )
Abstract
The electronic structure of Bi2Sr2−xRxCuOy (R=La, Eu) near the (π,0) point of the first Bril-
louin zone was studied by means of angle-resolved photoemission spectroscopy (ARPES). The
temperature T ∗ above which the pseudogap structure in the ARPES spectrum disappears was
found to have an R dependence that is opposite to that of the superconducting transition tempera-
ture Tc. This indicates that the pseudogap state is competing with high-Tc superconductivity, and
the large Tc suppression caused by out-of-plane disorder is due to the stabilization of the pseudogap
state.
High temperature superconductivity occurs with doping carriers to a Mott insulator.
Carriers are usually doped either by varying the oxygen content or by an element substitu-
tion. Unavoidably, these procedures introduce disorder that influences the superconducting
transition temperature Tc even though only sites outside the CuO2 plane are chemically
modified. For instance, Tc of the La2CuO4 family depends on the size of the cation that
substitutes for La,1 and Tc of Bi2Sr1.6R0.4CuOy depends on the R element.
2,3 Recently, some
of the present authors have studied extensively the Bi2Sr2−xRxCuOy system using single
crystals and varied the R content x over a wide range for R=La, Sm, and Eu.4 The results
clearly show that Tc at the optimal doping, Tc^max, depends strongly on the R element and
decreases with the decrease in the ionic radius of R, in other words, with increasing disorder.
By plotting Tc as a function of the thermopower at 290 K S(290), it was found that the range
of S(290) values for samples with a non-zero Tc becomes narrower with increasing disorder
(see Fig. 1). Because S(290) correlates well with hole doping in many high-Tc cuprates,
this suggests that the doping range where superconductivity occurs decreases with increas-
ing out-of-plane disorder, in contrast to the naive expectation that the plot of Tc/Tc^max vs.
doping would merge into a universal curve for all high-Tc cuprates.
Despite the strong influence on Tc and on the doping range where superconductivity
can be observed, out-of-plane disorder affects only weakly the conduction along the CuO2
plane. According to Fujita et al.,6 out-of-plane disorder suppresses Tc more than Zn when
samples with a similar residual resistivity are compared. This means that out-of-plane
disorder influences Tc without being a strong scatterer, and that this type of disorder has
an unexplained effect on Tc. To elucidate the reason of this puzzling behavior and why the
carrier range of high-Tc superconductivity is affected by out-of-plane disorder, we studied
the electronic structure of R=La and Eu crystals by means of angle-resolved photoemission
spectroscopy (ARPES) measurements. We particularly focused on the so-called antinodal
position, the point where the Fermi surface crosses the (π,0)-(π,π) zone boundary (M̄-Y cut),
due to the following reasons. It is generally accepted that in-plane resistivity is sensitive to
the electronic structure near the nodal point of the Fermi surface.7,8,9 The small influence
of out-of-plane disorder on residual resistivity hence suggests that the electronic structure
of this region is not much affected, as Fujita et al. mentioned.6 Therefore, if out-of-plane
disorder causes any influence on the electronic structure, it would be more likely to occur at
the antinodal point of the Fermi surface.
The single crystals used in this study were grown by the floating zone method as re-
ported previously.4 As mentioned in that work and commonly observed for Bi-based high-Tc
cuprates, the composition of the grown crystal is not the same as the starting one and
depends on the position within the boule. Accordingly, the hole doping level can not be
determined from the starting composition of the crystal. On the other hand, it has been
shown for many cuprates that S(290) correlates well with hole doping. Although S(290)
is not directly related to the amount of carriers and should depend on the detail of the
electronic structure, this empirical connection provides a reasonable indicator for the hole
doping level. We note that we have confirmed in a separate experiment that the Fermi
surface of a R=La and a R=Eu crystal with similar S(290) values coincided quite well,10
implying that their hole doping was similar. Therefore, we use S(290) as a measure of doping
in the following.11
All crystals were annealed at 750◦C for 72 hours in air. The ARPES spectra were accu-
mulated using a Scienta SES2002 hemispherical analyzer with the Gammadata VUV5010
photon source (He Iα) at the Institute of Solid State Physics (ISSP), the University of Tokyo,
and at beam-line BL5U of UVSOR at the Institute for Molecular Science, Okazaki with an
incident photon energy of 18.3 eV. The energy resolution was 10-20 meV for all measure-
ments, which was determined by the intensity reduction from 90% to 10% at the Fermi edge
of a reference gold spectrum. Thermopower was measured by a four-point method using a
home-built equipment. S(290) was determined using crystals that were cleaved from those
used for ARPES measurements except the R=La sample that had the largest doping in Fig.
4(a). For that particular sample, the S(290) value was estimated from the c-axis length
deduced from x-ray diffraction based on the data shown in the inset to Fig. 1.
Figures 2(a) and (c) show the ARPES intensity plots along the (π,0)-(π,π) direction at
100 K for R=La and Eu crystals that have a similar hole concentration. The samples were
cleaved in situ at 250 K in a vacuum of better than 5×10−11 Torr. The S(290) values were
4.7 µV/K and 4.8 µV/K for the R=La and Eu samples, respectively, indicating that they are
slightly underdoped (see Fig. 1). Figures 2(b) and (d) show momentum distribution curves
(MDCs) of the R=La and Eu samples, respectively. We fitted the MDC curves to a Lorentz
function to determine the peak position. The thus extracted dispersion is superimposed by
white small circles on Figs. 2(a) and (c). The momentum where the dispersion curve crosses
the Fermi energy EF corresponds to the Fermi wave vector kF on the (π,0)-(π,π) cut, and
FIG. 1: (color online) The critical temperature Tc as a function of S(290), the thermopower at
290 K. Tc was determined from the temperature dependence of resistivity, which was measured
simultaneously with thermopower. Data are based on our previous work,4 and some new data
points are included. Inset: Lattice constant c plotted as a function of S(290).
Fig. 2(e) shows the energy distribution curves (EDCs) of the two samples at kF . Obviously,
the R=La sample has a larger spectral weight at EF , although the doping level of the two
samples is very similar.
Figure 3 shows the EDCs of the two samples of Fig. 2 at various temperatures. To
remove the effects of the Fermi function on the spectra, we applied the symmetrization
method Isym(ω) = I(ω) + I(−ω), where ω denotes the energy relative to EF .
12 As shown in
Figs. 3(a) and (b), the symmetrized spectra of both samples show clearly a gap structure at
the lowest measured temperature, 100 K. Because we are probing the antinodal direction at
a temperature that is higher than Tc, we attribute this gap structure to the pseudogap. With
increasing temperature, the gap structure fills up without an obvious change in the gap
size. At 250 K, only a small suppression of the spectral weight was observed for the R=La
FIG. 2: (color online) Intensity plots in the energy-momentum plane of the ARPES spectra at
100 K of slightly underdoped Bi2Sr2−xRxCuOy samples that have a similar doping level with (a)
R=La and (c) R=Eu along the momentum line indicated by the arrow in (f). (b), (d) Momentum
distribution curves (MDCs) of the two samples. (e) The energy distribution curves (EDCs) of the
two samples at kF . (f) Schematic drawing of the underlying Fermi surface.
sample. On the other hand, a clear pseudogap structure can be observed for the R=Eu
sample even at 250 K. This means that the temperature T ∗ up to which the pseudogap
structure can be observed is certainly different despite the closeness of the doping level.
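The symmetrization used above can be illustrated with a toy model. In the sketch below (Python with NumPy; the model spectral function and temperature are illustrative assumptions, not fit to the data), a particle-hole-symmetric A(ω) multiplied by the Fermi function mimics the measured intensity, and symmetrizing recovers A(ω) because f(ω) + f(−ω) = 1:

```python
import numpy as np

# Sketch of the symmetrization I_sym(w) = I(w) + I(-w) used to remove
# the Fermi function from ARPES spectra. The toy A(w) below is assumed
# particle-hole symmetric, a simplifying assumption of this example.
kT = 0.0086                              # ~100 K in eV
w = np.linspace(-0.4, 0.4, 801)          # energy relative to E_F (eV)

A = 1.0 - 0.5*np.exp(-(w/0.05)**2)       # toy pseudogap-like A(w) = A(-w)
f = 1.0/(np.exp(w/kT) + 1.0)             # Fermi function
I = A*f                                  # "measured" intensity

I_sym = I + I[::-1]                      # I(w) + I(-w) on a symmetric grid
# Since f(w) + f(-w) = 1, symmetrization recovers A(w) exactly here.
assert np.allclose(I_sym, A, atol=1e-12)
```

In real data the cancellation of the Fermi function is only approximate, since A(ω) need not be particle-hole symmetric, which is one caveat of the method.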
The thin solid lines Ifit(ω) of Figs. 3(a) and (b) are the results of fitting a Lorentz function
to the symmetrized spectrum in the energy range of EF ± 150 meV. The dashed lines are,
on the other hand, the background spectra Ibkg(ω), which were determined by making a
similar fit in the energy range of 150 meV≤ |ω| ≤400 meV. It can be seen that the difference
between the two fitted curves at EF increases with decreasing the temperature, reflecting
the growth of the pseudogap. To quantify how much the spectral weight at EF is depressed,
we define IPG as the difference between unity and the spectral weight of the fitted spectrum
at EF divided by that of the background curve (1 − Ifit(0)/Ibkg(0)). Figure 3(c) shows the
temperature dependence of IPG of the two samples. Obviously, the depression of the spectral
weight is larger for the R=Eu sample at all measured temperatures, and T ∗ is higher. IPG is
roughly linear temperature dependent for both samples with a very similar slope. Therefore,
we extrapolated the data with the same slope as shown by the dashed lines, and estimated
T ∗ to be 282 K and 341 K for the R=La and Eu samples, respectively.
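The linear extrapolation of IPG(T) to zero can be sketched as follows (Python with NumPy; the IPG values below are synthetic numbers chosen only to illustrate the fit, not the measured data):

```python
import numpy as np

# Sketch of the T* estimate: I_PG = 1 - I_fit(0)/I_bkg(0) decreases
# roughly linearly with T, and T* is where the extrapolation hits zero.
T = np.array([100.0, 150.0, 200.0, 250.0])     # measurement temperatures (K)
I_PG = 0.0015*(282.0 - T)                      # synthetic linear data

slope, intercept = np.polyfit(T, I_PG, 1)      # straight-line fit
T_star = -intercept/slope                      # zero crossing = T*
assert abs(T_star - 282.0) < 1e-6
```

With real data the slope is fixed to a common value for both samples, as described above, before the zero crossing is read off.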
We also measured ARPES spectra of four other samples at 150 K. Assuming that the
temperature dependence of IPG is the same as that of Fig. 3(c), we can estimate T∗ from the IPG value at 150 K. The thus estimated T∗ values are included in Fig. 4(a), which shows
T ∗ of all samples studied in this work as a function of S(290). It can be seen that T ∗ is
higher for R=Eu than R=La when compared at the same S(290) value. Because Tc at the
same hole doping decreases with changing the R element to one with a smaller ionic radius
(Fig. 1), it is clear that Tc and T
∗ have an opposite R dependence. This important finding
is summarized in the schematic phase diagram shown in Fig. 4(b). As shown, both Tc^max and the carrier range where superconductivity takes place on the phase diagram decrease with decreasing ionic radius of R, while T∗ at the same hole concentration increases.11
One of the most important issues concerning the pseudogap has been whether it is a state competing with, or a precursor of, high-Tc superconductivity.14 Figure 4(b) clearly shows that, whatever the microscopic origin of the pseudogap is, it is competing with high-Tc superconductivity. In contrast, some other studies have concluded that the pseudogap is closely related
to the superconducting state because the momentum dependence of the gap is the same
above and below Tc and the evolution of the gap structure through Tc is smooth.
12,15,16 We
point out, however, that several recent experiments revealed the existence of two energy gaps
at a temperature well below Tc for underdoped cuprate superconductors
17,18 as well as for
optimally doped and overdoped (Bi,Pb)2(Sr,La)2CuO6+δ.
19,20 The energy gap observed in
the antinodal region was attributed to the pseudogap while the one near the nodal direction
FIG. 3: (color online) Temperature dependence of the symmetrized ARPES spectrum at the antin-
odal point for the (a) R=La and (b) R=Eu crystals of Fig. 2. (c) Temperature dependence of the
amount of spectral weight suppression IPG.
to the superconducting gap. We think that the conflict encountered in the pseudogap issue
arose because distinguishing these two gaps is not easy when their magnitudes are
similar.
FIG. 4: (color online) (a) Pseudogap temperature T∗ plotted as a function of S(290). (b) A schematic phase diagram of Bi2Sr2−xRxCuOy based on the results of Figs. 1 and 4(a).

We next discuss why Tc decreases when T∗ increases. We think that the ungapped portion of the Fermi surface above Tc is smaller for R=Eu when compared at the same
doping and at the same temperature because the pseudogap opens at a temperature that is higher than for R=La. Indeed, the results of our ARPES experiment on optimally doped
Bi2Sr2−xRxCuOy confirmed this assumption by demonstrating that the momentum region
where a quasiparticle or a coherence peak was observed was narrower for the R=Eu sample
than the R=La sample.10 In other words, the Fermi arc shrinks with changing the R element
from La to Eu, which mimics the behavior observed when doping is decreased.8,13 Because
the superfluid density ns decreases with underdoping,
21 it is reasonable to assume that
only the states on the Fermi arc can participate in superconductivity. If we can think in analogy to the carrier-underdoping case, therefore, ns would be smaller for R=Eu than for R=La, and the decrease of Tc is quite naturally explained by the Uemura relation.
21 The faster
disappearance of superconductivity with carrier underdoping for R=Eu (see Fig. 1) is also
a straightforward consequence of this model. Furthermore, the opening of the pseudogap at
the antinodal direction would not increase much the residual in-plane resistivity because the
in-plane conduction is mainly governed by the nodal carriers.7,8,9 Hence, the observation that
out-of-plane disorder largely suppresses Tc with only a slight increase in residual resistivity
can also be immediately understood.
Finally, we discuss our results in conjunction with the reported data of scanning tunneling
microscopy/spectroscopy (STM/STS) experiments, which unveiled a strong inhomogeneity
in the local electronic structure for Bi2Sr2CaCu2Oy.
22,23,24 It was demonstrated that the
volume fraction of the pseudogapped region increases with underdoping. The ARPES ex-
periments, on the other hand, show that the spectral weight at the chemical potential of the
antinodal region decreases with carrier underdoping.14 Hence, the antinodal spectral weight
at EF is likely to correlate with the volume fraction of the pseudogap region. In the present
work, we increased the degree of out-of-plane disorder while the hole doping was unaltered,
and observed that the spectral weight at the chemical potential is lower for the R=Eu sample
when compared at the same temperature, as shown in Fig. 3(c). We thus expect that the
fraction of superconducting region in real space is smaller for R=Eu. Indeed, quite recent
STM/STS experiments on optimally doped Bi2Sr2−xRxCuOy report that the averaged gap
size is larger when the ionic radius of R is smaller,25 which is attributable to an increase of
the pseudogapped region. Moreover, our results complement the STM/STS data and indi-
cate that not only the area where a pseudogap is observed at low temperature but also T ∗
increases with disorder which means that the pseudogap state is stabilized and persists up to
higher temperatures. Further, while the STM/STS studies on Bi2Sr2−xRxCuOy investigated
only optimally doped samples,25,26 we have varied hole doping, which further corroborates
the conclusion that the pseudogap state competes with high-Tc superconductivity.
In summary, we have studied the reason why Tc^max and the carrier range where
high-Tc superconductivity occurs strongly depend on the R element in the Bi2Sr2−xRxCuOy
system by investigating the electronic structure at the antinodal direction of the Fermi sur-
face of R=La and Eu samples. We observed a pseudogap structure in the ARPES spectrum
up to a higher temperature for R=Eu samples when samples with a similar hole doping are
compared, which clearly indicates that the pseudogap state is competing with high-Tc su-
perconductivity. This result suggests that out-of-plane disorder increases the pseudogapped
region and reduces the superconducting fluid density, which explains its strong influence
on high-Tc superconductivity. We stress that the present results are relevant to all high-Tc
superconductors because they all suffer, more or less, from out-of-plane disorder.
We would like to thank T. Ito of UVSOR and T. Kitao and H. Kaga of Nagoya University
for experimental assistance.
1 J. P. Attfield, A. L. Kharlanov, and J. A. McAllister, Nature 394, 157 (1998).
2 H. Nameki, M. Kikuchi, and Y. Syono, Physica C 234, 255 (1994).
3 H. Eisaki, N. Kaneko, D. L. Feng, A. Damascelli, P. K. Mang, K. M. Shen, Z.-X. Shen, and M.
Greven, Phys. Rev. B 69, 064512 (2004).
4 Y. Okada and H. Ikuta, Physica C 445-448, 84 (2006).
5 S. D. Obertelli, J. R. Cooper, and J. L. Tallon, Phys. Rev. B 46, 14928 (1992).
6 K. Fujita, T. Noda, K. M. Kojima, H. Eisaki, and S. Uchida, Phys. Rev. Lett. 95, 097006
(2005).
7 L. B. Ioffe and A. J. Millis, Phys. Rev. B 58, 11631 (1998).
8 T. Yoshida, X. J. Zhou, T. Sasagawa, W. L. Yang, P. V. Bogdanov, A. Lanzara, Z. Hussain, T.
Mizokawa, A. Fujimori, H. Eisaki, Z.-X. Shen, T. Kakeshita, and S. Uchida, Phys. Rev. Lett.
91, 027001 (2003).
9 T. Yoshida, X. J. Zhou, D. H. Lu, S. Komiya, Y. Ando, H. Eisaki, T. Kakeshita, S. Uchida, Z.
Hussain, Z. X. Shen, and A. Fujimori, J. Phys.: Condens. Matter 19, 125209 (2007).
10 Y. Okada, T. Takeuchi, A. Shimoyamada, S. Shin, and H. Ikuta, J. Phys. Chem. Solids (in
press) (cond-mat/0709.0220).
11 There remains a chance that the hole doping of La- and Eu-doped samples with the same S(290)
value is slightly different especially when the R content increases with underdoping because it
was reported that disorder increased thermopower of La2CuO4-based superconductors. (J. A.
McAllister and J. P. Attfield, Phys. Rev. Lett. 83, 3289 (1999).) However, this will not affect
our conclusion, because if this is the case, the doping of a disordered sample would be larger
than we are assuming and the data of the Eu-doped samples of Fig. 4(b) shift slightly to the
more carrier-doped side relative to the La-doped samples. This is in favor of our conclusion.
12 M. R. Norman, H. Ding, M. Randeria, J. C. Campuzano, T. Yokoya, T. Takeuchi, T. Takahashi,
T. Mochiku, K. Kadowaki, P. Guptasarma, and D. G. Hinks, Nature 392, 157 (1998).
13 K. M. Shen, F. Ronning, D. H. Lu, F. Baumberger, N. J. C. Ingle, W. S. Lee, W. Meevasana,
Y. Kohsaka, M. Azuma, M. Takano, H. Takagi, and Z.-X. Shen, Science 307, 901 (2005).
14 M. R. Norman, D. Pines, and C. Kallin, Adv. Phys. 54, 715 (2005).
15 H. Ding, T. Yokoya, J. C. Campuzano, T. Takahashi, M. Randeria, M. R. Norman, T. Mochiku,
K. Kadowaki, and J. Giapintzakis, Nature 382, 51 (1996).
16 Ch. Renner, B. Revaz, J. Y. Genoud, K. Kadowaki, and Ø. Fischer, Phys. Rev. Lett. 80, 149
(1998).
17 M. Le Tacon, A. Sacuto, A. Georges, G. Kotliar, Y. Gallais, D. Colson, and A. Forget, Nature
Physics 2, 537 (2006).
18 K. Tanaka, W. S. Lee, D. H. Lu, A. Fujimori, T. Fujii, Risdiana, I. Terasaki, D. J. Scalapino,
T. P. Devereaux, Z. Hussain, and Z.-X. Shen, Science 314, 1910 (2006).
19 T. Kondo, T. Takeuchi, A. Kaminski, S. Tsuda, and S. Shin, Phys. Rev. Lett. 98, 267004 (2007).
20 M. C. Boyer, W. D. Wise, K. Chatterjee, M. Yi, T. Kondo, T. Takeuchi, H. Ikuta, and E. W.
Hudson, Nature Phys. 3, 802 (2007).
21 Y. J. Uemura, G. M. Luke, B. J. Sternlieb, J. H. Brewer, J. F. Carolan, W. N. Hardy, R. Kadono,
J. R. Kempton, R. F. Kief, S. R. Kreitzman, P. Mulhern, T. M. Riseman, D. L. Williams, B.
X. Yang, S. Uchida, H. Takagi, J. Gopalakrishnan, A. W. Sleight, M. A. Subramanian, C. L.
Chien, M. Z. Cieplak, Gang Xiao, V. Y. Lee, B. W. Statt, C. E. Stronach, W. J. Kossler, and
X. H. Yu, Phys. Rev. Lett. 62, 2317 (1989).
22 C. Howald, P. Fournier, and A. Kapitulnik, Phys. Rev. B 64, 100504(R) (2001).
23 S. H. Pan, J. P. O’Neal, R. L. Badzey, C. Chamon, H. Ding, J. R. Engelbrecht, Z. Wang, H.
Eisaki, S. Uchida, A. K. Guptak, K. W. Ng, E. W. Hudson, K. M. Lang, and J. C. Davis,
Nature 413, 282 (2001).
24 K. M. Lang, V. Madhavan, J. E. Hoffman, E. W. Hudson, H. Eisaki, S. Uchida, and J. C. Davis,
Nature 415, 412 (2002).
25 A. Sugimoto, S. Kashiwaya, H. Eisaki, H. Kashiwaya, H. Tsuchiura, Y. Tanaka, K. Fujita, and
S. Uchida, Phys. Rev. B 74, 094503 (2006).
26 T. Machida, Y. Kamijo, K. Harada, T. Noguchi, R. Saito, T. Kato, and H. Sakata, J. Phys.
Soc. Jpn. 75, 083708 (2006).
|
0704.1699 | Relativistic Hydrodynamics at RHIC and LHC | Relativistic Hydrodynamics at RHIC and LHC
Tetsufumi Hirano1,∗)
1 Department of Physics, The University of Tokyo, Tokyo 113–0033, Japan
Recent development of a hydrodynamic model is discussed by putting an emphasis on
realistic treatment of the early and late stages in relativistic heavy ion collisions. The model,
which incorporates a hydrodynamic description of the quark-gluon plasma with a kinetic
approach of hadron cascades, is applied to analysis of elliptic flow data at the Relativistic
Heavy Ion Collider energy. It is predicted that the elliptic flow parameter based on the
hybrid model increases with the collision energy up to the Large Hadron Collider energy.
§1. Introduction
One of the important discoveries made at the Relativistic Heavy Ion Collider
(RHIC) in Brookhaven National Laboratory is that the elliptic flow parameter,1)
namely, the second Fourier coefficient v2 = 〈cos(2φ)〉 of the azimuthal momentum
distribution dN/dφ,2) is quite large in non-central Au+Au collisions.3) Over the
past years, many studies have been devoted to understanding the elliptic flow from
dynamical models: (1) The observed v2 values near midrapidity at low transverse
momentum (pT ) in central and semi-central collisions are consistent with predictions
from ideal hydrodynamics.4) (2) The v2 data cannot be interpreted by hadronic cas-
cade models.5), 6) (3) A partonic cascade model7) can reproduce these data only with
significantly larger cross sections than the ones obtained from the perturbative cal-
culation of quantum chromodynamics. The produced dynamical system is beyond
the description of naive kinetic theories. Thus, a paradigm of the strongly cou-
pled/interacting/correlated matter is being established in the physics of relativistic
heavy ion collisions.8) The agreement between hydrodynamic predictions and the
data suggests that the heavy ion collision experiment indeed provides excellent op-
portunities for studying matter in local equilibrium at high temperature and for
drawing information of the bulk and transport properties of the quark-gluon plasma
(QGP). These kinds of phenomenological studies closely connected with experimen-
tal results, so to say, the “observational QGP physics”, will be one of the main
trends in modern nuclear physics in the eras of the RHIC and the upcoming Large
Hadron Collider (LHC). Then it is indispensable to sophisticate hydrodynamic mod-
eling of heavy ion collisions for making quantitative statements on properties of the
produced matter with estimation of uncertainties. In fact, the ideal fluid dynamical
description gradually breaks down as one studies peripheral collisions4) or moves
away from midrapidity.9), 10) This requires a more realistic treatment of the early
and late stages11)–14) in dynamical modeling of relativistic heavy ion collisions.
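As an aside (our illustration, not part of the paper), the definition v2 = ⟨cos(2φ)⟩ can be made concrete with a short numerical sketch: sample azimuthal angles from dN/dφ ∝ 1 + 2 v2 cos(2φ) and recover v2 as the sample average. The input value v2 = 0.05 and the rejection-sampling scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_phi(v2, n):
    """Draw azimuthal angles from dN/dphi ∝ 1 + 2*v2*cos(2*phi) by rejection sampling."""
    out = np.empty(0)
    while out.size < n:
        cand = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
        # the density is bounded above by 1 + 2*v2, which serves as the envelope
        accept = rng.uniform(0.0, 1.0 + 2.0 * v2, cand.size) < 1.0 + 2.0 * v2 * np.cos(2.0 * cand)
        out = np.concatenate([out, cand[accept]])
    return out[:n]

phi = sample_phi(v2=0.05, n=200_000)
v2_measured = np.cos(2.0 * phi).mean()  # v2 = <cos(2*phi)>, the second Fourier coefficient
```

With 2×10^5 particles the statistical uncertainty on v2 is about 0.002, so the input value is recovered well.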
In this paper, recent studies of the state-of-the-art hydrodynamic simulations are
highlighted with emphases on the importance of the final decoupling stage (Sec. 3)
∗) e-mail address: [email protected]
and of much better understanding of initial conditions (Sec. 4). A prediction of the
v2 parameter at the LHC energy will be made in Sec. 5. See also other reviews15) to
complement other topics of hydrodynamics in heavy ion collisions at RHIC.
§2. A QGP fluid + hadronic cascade model
We have formulated a dynamical and unified model,13) based on fully three-
dimensional (3D) ideal hydrodynamics,9), 10) toward understanding the bulk and
transport properties of the QGP. During the fluid dynamical evolution one assumes
local thermal equilibrium. However, this assumption can be expected to hold only
during the intermediate stage of the collision. In order to extract properties of the
QGP from experimental data one must therefore supplement the hydrodynamic de-
scription by appropriate models for the beginning and end of the collision. In Sec. 3,
we employ the Glauber model for initial conditions in hydrodynamic simulations.
Initial entropy density is parametrized as a superposition of terms scaling with the
densities of participant nucleons and binary collisions, suitably generalized to ac-
count for the longitudinal structure of the initial fireball.13) Instead, in Secs. 4 and
5, we employ the Color Glass Condensate (CGC) picture17) for colliding nuclei and
calculate the produced gluon distributions18) as input for the initial conditions in
the hydrodynamical calculation.19) During the late stage, local thermal equilibrium
is no longer maintained due to expansion and dilution of the matter. We treat this
gradual transition from a locally thermalized system to free-streaming hadrons via
a dilute interacting hadronic gas by employing a hadronic cascade model20) below a
switching temperature of T sw=169 MeV. A massless ideal parton gas equation of
state (EOS) is employed in the QGP phase (T >Tc = 170 MeV) while a hadronic
resonance gas model is used at T <Tc. When we use the hydrodynamic code all the
way to final decoupling, we take into account10) chemical freezeout of the hadron
abundances at T ch = 170 MeV, separated from thermal freezeout of the momen-
tum spectra at a lower decoupling temperature T th, as required to reproduce the
experimentally measured yields.21)
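The participant/binary-collision superposition mentioned above is commonly written in the following two-component form (a standard parametrization quoted here for orientation; the exact coefficients and the longitudinal generalization used in Ref. 13) may differ):

```latex
\[
  s_0(\mathbf{x}_\perp) \;\propto\; \frac{1-\alpha}{2}\, n_{\mathrm{part}}(\mathbf{x}_\perp)
  \;+\; \alpha\, n_{\mathrm{coll}}(\mathbf{x}_\perp), \qquad 0 \le \alpha \le 1,
\]
```

where n_part and n_coll are the transverse densities of participant nucleons and of binary nucleon-nucleon collisions, and α controls the fraction of "hard" binary-collision scaling.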
§3. Success of a hybrid approach
Initial conditions in 3D hydrodynamic simulations are chosen so that centrality
and pseudorapidity dependences of charged particle yields are reproduced. A linear
combination of terms scaling with the number of participants and that of binary
collisions enables us to describe centrality dependence of particle yields at midra-
pidity. This agreement with the data still holds when the ideal fluid description is
replaced by a more realistic hadronic cascade below T sw. See also Fig. 3. When ideal
hydrodynamics is utilized all the way to kinetic freezeout, T th = 100 MeV is needed
to generate enough radial flow for reproduction of proton pT spectrum at midra-
pidity. One major advantage of the hybrid model over the ideal hydrodynamics is
that the hybrid model automatically describes freezeout processes without any free
parameters. The hybrid model works remarkably well in reproduction of pT spectra
for identified hadrons below pT ∼ 1.5 GeV/c.
Fig. 1. (Left) Centrality dependence of v2. The solid (dashed) line results from a full ideal fluid
dynamic approach (a hybrid model). For reference, a result from a hadronic cascade model6)
is also shown (dash-dotted line). (Right) Pseudorapidity dependence of v2. The solid (dashed)
line is the result from a full ideal hydrodynamic approach with T th = 100 MeV (T th = 169
MeV). Filled circles are the result from the hybrid model. All data are from the PHOBOS
Collaboration.22)
The centrality dependences of v2 at midrapidity (| η |< 1) from (1) a hadronic
cascade model6) (dash-dotted), (2) a QGP fluid with hadronic rescatterings taken
through a hadronic cascade model (dashed), and (3) a QGP+hadron fluid with
T th = 100 MeV (solid) are compared with the PHOBOS data.22) A hadronic cascade
model cannot generate enough elliptic flow to reproduce the data. This is observed
also in other hadronic cascade calculations.5) Thus it is almost impossible to interpret
the v2 data from a hadronic picture only. Models based on a QGP fluid generate
large elliptic flow and give v2 values which are comparable with the data. When
a hadronic matter is also treated by ideal hydrodynamics, v2 is overpredicted in
peripheral collisions. This is improved by dissipative effects in the hadronic matter.
Note that there could exist effects of eccentricity fluctuation,23) which is not taken
into account in the current approach. Deviation between the data and the QGP-
fluid-based results above Npart ∼ 200 could be attributed to these effects.
From the integrated elliptic flow data at midrapidity, the initial push from QGP
pressure turns out to be important at midrapidity. In Fig. 1 (right), the pseudorapidity
dependence of the v2 data in 25-50% centrality observed by PHOBOS22) is
compared with QGP fluid models. Ideal hydrodynamics with T th = 169 MeV, which
is just below the transition temperature Tc = 170 MeV, underpredicts the data in
the whole pseudorapidity region. Hadronic rescatterings after QGP fluid evolution
generate the right amount of elliptic flow and, consequently, the triangle pattern of
the data is reproduced well. If the hadronic matter is also assumed to be described
by ideal hydrodynamics until T th = 100 MeV, v2 overshoots in forward/backward
rapidity regions (| η |∼ 4). This is simply due to the fact that, in ideal hydrody-
namics, v2 is approximately proportional to the initial eccentricity which is almost
independent of space-time rapidity. So the hadronic dissipation is quite important in
forward/backward rapidity regions as well as at midrapidity in peripheral collisions
(Npart < 100). From these studies, the perfect fluidity of the QGP is needed to
obtain enough amount of the integrated v2, while the dissipation (or finite values
of the mean free path among hadrons) in the hadronic matter is also important to
obtain less elliptic flow coefficients when the multiplicity is small at midrapidity in
peripheral collisions and/or in forward/backward rapidity regions. This is exactly
the novel picture of dynamics in relativistic heavy ion collisions, namely, the nearly
perfect fluid QGP core and the highly dissipative hadronic corona, addressed in
Ref. 16).
Fig. 2. Transverse momentum dependence of v2 for pions, kaons and protons. Filled plots are the
results from the hybrid model. The impact parameter in the model simulation is 7.2 fm which
corresponds to 20-30% centrality. Data (open plots) for pions, kaons and protons are obtained
by the STAR Collaboration.24)
As a cross-check on the picture, we also study pT dependence of v2 for identi-
fied hadrons in semi-central collisions to see whether the hybrid model works. We
correctly reproduce mass ordering behavior of differential elliptic flow below pT ∼ 1
GeV/c as shown in Fig. 2. Here experimental data are from STAR.24) Although
we also reproduce the data in 10-20% and 30-40% well (not shown), it is hard to
reproduce data in very central collisions (0-5%) due to a lack of initial eccentricity
fluctuation in this model. It is worth mentioning that, recently, the hybrid model
succeeds in describing differential elliptic flow data for identified hadrons at forward
rapidity observed by BRAHMS.25)
§4. Challenge for a hydrodynamic approach
So far, an ideal hydrodynamic description of the QGP fluid with the Glauber
type initial conditions followed by a kinetic description of the hadron gas describes
the space-time evolution of bulk matter remarkably well. The CGC,17) for which evidence
has been growing recently both in deep inelastic scattering and in d+Au collisions, is one
of the relevant pictures to describe initial colliding nuclei in high energy collisions. In
this section, novel hydrodynamic initial conditions19) based on the CGC are employed
for an analysis of elliptic flow.
We first calculate the centrality dependence of the multiplicity to see that the
CGC indeed correctly describes the initial entropy production and gives proper initial
conditions for the fluid dynamical calculations. Both CGC and Glauber model initial
Fig. 3. Centrality dependence of charged particle multiplicity per number of participant nucleons.
The solid (dashed) line results from Glauber-type (CGC) initial conditions. The dash-dotted
line results from our hybrid model. Experimental data are from PHOBOS.27)
conditions, propagated with ideal fluid dynamics, reproduce the observed centrality
dependence of the multiplicity,27) see Fig. 3. In the hydrodynamic simulations, the
numbers of stable hadrons below T ch are designed to be fixed by introducing chemical
potential for each hadron.10) On the other hand, the number of charged hadrons is
approximately conserved during hadronic cascades. So the centrality dependence of
charged particle yields is also reasonably reproduced by the hybrid approach.
In the left panel of Fig. 4 we show the impact parameter dependence of the
eccentricity of the initial energy density distributions at τ0 = 0.6 fm/c. We neglect
event-by-event eccentricity fluctuations although these might be important for very
central and peripheral events.23) Even though both models correctly describe the
centrality dependence of the multiplicity as shown in Fig. 3, they exhibit a signifi-
cant difference: The eccentricity from the CGC is 20-30% larger than that from the
Glauber model.13), 28) The situation does not change even when we employ the “uni-
versal” saturation scale29) in calculation of gluon production. The initial eccentricity
is thus quite sensitive to model assumptions about the initial energy deposition which
can be discriminated by the observation of elliptic flow. The centrality dependence
of v2 from the CGC initial conditions followed by the QGP fluid plus the hadron gas
is shown in Fig. 4 (right). With Glauber model initial conditions,4) the predicted
v2 from ideal fluid dynamics overshoots the peripheral collision data.
22) Hadronic
dissipative effects within hadron cascade model reduce v2 and, in the Glauber model
case, are seen to be sufficient to explain the data (Fig. 1 (left)).13) Initial conditions
based on the CGC model, however, lead to larger elliptic flows which overshoot the
data even after hadronic dissipation is accounted for,13) unless one additionally as-
sumes significant shear viscosity also during the early QGP stage. Therefore precise
understanding of the bulk and transport properties of QGP from the elliptic flow
data requires a better understanding of the initial stages in heavy ion collisions.
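Because the comparison hinges on the initial eccentricity, a brief sketch of how ε is evaluated from an energy-density profile may be useful (our illustration; the Gaussian profile, widths, and grid are arbitrary assumptions, and we use the common definition ε = ⟨y² − x²⟩/⟨y² + x²⟩ with x along the impact parameter):

```python
import numpy as np

def eccentricity(x, y, e):
    """Spatial eccentricity eps = <y^2 - x^2> / <y^2 + x^2>, weighted by the
    energy density e, with x along the impact-parameter (reaction-plane) axis."""
    w = e / e.sum()
    dx = x - (w * x).sum()          # center the distribution first
    dy = y - (w * y).sum()
    return (w * (dy**2 - dx**2)).sum() / (w * (dy**2 + dx**2)).sum()

# Toy almond-shaped profile: Gaussian narrower along x (sigma_x = 1) than along y
# (sigma_y = 2); analytically eps = (sigma_y^2 - sigma_x^2)/(sigma_y^2 + sigma_x^2) = 0.6.
x, y = np.meshgrid(np.linspace(-12.0, 12.0, 401), np.linspace(-12.0, 12.0, 401))
e = np.exp(-x**2 / 2.0 - y**2 / 8.0)
eps = eccentricity(x, y, e)
```

A 20-30% difference in this single number, as between the CGC and Glauber profiles above, propagates almost linearly into v2 in ideal hydrodynamics.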
Fig. 4. (Left) Impact parameter dependence of the eccentricity of the initial energy density dis-
tributions. The solid (dashed) line results from Glauber-type (CGC) initial conditions. The
dotted line assumes a box profile for the initial energy density distribution. (Right) Centrality
dependence of v2. The solid (dashed) line results from CGC (Glauber model) initial conditions
followed by ideal fluid QGP dynamics and a dissipative hadronic cascade. The data are from
PHOBOS.22)
§5. Elliptic flow at LHC
The elliptic flow parameter plays a very important role in understanding global
aspects of dynamics in heavy ion collisions at RHIC. It will also be important to
measure the elliptic flow parameter at the LHC energy toward a comprehensive
understanding of the degree and mechanism of thermalization and the bulk and transport
properties of the QGP.
Figure 5 shows the excitation function of the charged particle elliptic flow v2,
scaled by the initial eccentricity ε, for Au+Au collisions at b = 6.3 fm impact pa-
rameter, using three different models: (i) a pure 3D ideal fluid approach with a
typical kinetic freezeout temperature T th = 100 MeV where both QGP and hadron
gas are treated as ideal fluids (dash-dotted line); (ii) 3D ideal fluid evolution for the
QGP, with kinetic freezeout at T th = 169 MeV and no hadronic rescattering (dashed
line); and (iii) 3D ideal fluid QGP evolution followed by hadronic rescattering below
T sw = 169MeV (solid line). Although applicability of the CGC model for SPS ener-
gies might be questioned, we use it here as a systematic tool for obtaining the energy
dependence of the hydrodynamic initial conditions. By dividing out the initial eccen-
tricity ε, we obtain an excitation function for the scaled elliptic flow v2/ε whose shape
should be insensitive to the facts that CGC initial conditions produce larger eccen-
tricities and the resulting integrated v2 overshoots the data at RHIC and also that
experiments with different collision system (Pb+Pb) will be performed at the LHC.
Figure 5 shows the well-known bump in v2/ε at SPS energies (√sNN ∼ 10 GeV)
predicted by the purely hydrodynamic approach, as a consequence of the softening
of the equation of state (EOS) near the quark-hadron phase transition region,30) and
that this structure is completely washed out by hadronic dissipation,12) consistent
with the experimental data.31), 32) Even at RHIC energies, hadronic dissipation still
reduces v2 by ∼ 20%. The hybrid model predicts a monotonically increasing excitation
function for v2/ε which keeps growing from RHIC to LHC energies,12) contrary
to the ideal fluid approach whose excitation function almost saturates above RHIC
energies.30)
Fig. 5. Excitation function of v2/ε in Au+Au collisions at b = 6.3 fm. The solid line results from
CGC initial conditions followed by an ideal QGP fluid and a dissipative hadronic cascade. The
dashed (dash-dotted) line results from purely ideal fluid dynamics with thermal freezeout at
T th = 169MeV (100MeV).
§6. Conclusions
We have studied the recent elliptic flow data at RHIC by using a hybrid model
in which an ideal hydrodynamic treatment of the QGP is combined with a hadronic
cascade model. With the Glauber-type initial conditions, the space-time evolution of
the bulk matter created at RHIC is well described by the hybrid model. The agree-
ment between the model results and the data includes v2(Npart), v2(η), pT spectra
for identified hadron below pT ∼ 1.5 GeV/c and v2(pT ) for identified hadrons at
midrapidity and in the forward rapidity region. If the Glauber type initial condi-
tions are realized, we can establish a picture of the nearly perfect fluid of the QGP
core and the highly dissipative hadronic corona. However, in the case of the CGC
initial conditions, the energy density profile in the transverse plane is more “eccen-
tric” than that from the conventional Glauber model. This in turn generates large
elliptic flow, which is not consistent with the experimental data. Without viscous
effects even in the QGP phase, we cannot interpret the integrated elliptic flow at
RHIC. If one wants to extract information on the properties of the QGP, a better
understanding of the initial stages is required. We have also calculated an excitation
function of elliptic flow scaled by the initial eccentricity and found that the function
continuously increases with collision energy up to the LHC energy when hadronic
dissipation is taken into account.
Acknowledgments
The author would like to thank M. Gyulassy, U. Heinz, D. Kharzeev, R. Lacey
and Y. Nara for collaboration and fruitful discussions. He is also much indebted to
T. Hatsuda and T. Matsui for continuous encouragement to the present work. He
also thanks M. Isse for providing him with a result from a hadronic cascade model
shown in Fig. 1. This work was partially supported by JSPS grant No.18-10104.
References
1) J. Y. Ollitrault, Phys. Rev. D 46 (1992), 229
2) A. M. Poskanzer and S. A. Voloshin, Phys. Rev. C 58 (1998), 1671
3) B.B. Back et al. [PHOBOS Collaboration], Nucl. Phys. A 757 (2005), 28; J. Adams et
al. [STAR Collaboration], Nucl. Phys. A 757 (2005), 102; K. Adcox et al. [PHENIX
Collaboration], Nucl. Phys. A 757 (2005), 184
4) P. F. Kolb, P. Huovinen, U. W. Heinz and H. Heiselberg, Phys. Lett. B 500 (2001), 232;
P. Huovinen et al., Phys. Lett. B 503 (2001), 58
5) M. Bleicher and H. Stöcker, Phys. Lett. B 526 (2002), 309; E. L. Bratkovskaya, W. Cass-
ing and H. Stöcker, Phys. Rev. C 67 (2003), 054905; W. Cassing, K. Gallmeister and
C. Greiner, Nucl. Phys. A 735 (2004), 277
6) P. K. Sahu et al., Pramana 67 (2006), 257; M. Isse, private communication
7) D. Molnar and M. Gyulassy, Nucl. Phys. A 697 (2002), 495 [Erratum Nucl. Phys. A 703
(2002), 893]
8) T.D. Lee, Nucl. Phys. A 750 (2005), 1; M. Gyulassy and L. McLerran, Nucl. Phys. A 750
(2005), 30; E. Shuryak, Nucl. Phys. A 750 (2005), 64
9) T. Hirano, Phys. Rev. C 65 (2001), 011901
10) T. Hirano and K. Tsuda, Phys. Rev. C 66 (2002), 054905
11) A. Dumitru et al., Phys. Lett. B 460 (1999), 411; S.A. Bass et al., Phys. Rev. C 60 (1999),
021902; S.A. Bass and A. Dumitru, Phys. Rev. C 61 (2000), 064909
12) D. Teaney, J. Lauret and E. V. Shuryak, Phys. Rev. Lett. 86 (2001), 4783; nucl-th/0110037;
D. Teaney, Phys. Rev. C 68 (2003), 034913
13) T. Hirano et al., Phys. Lett. B 636 (2006), 299
14) C. Nonaka and S. A. Bass, Phys. Rev. C 75 (2007), 014902
15) P. F. Kolb and U. W. Heinz, in Quark Gluon Plasma 3, edited by R.C. Hwa and X.N.
Wang (World Scientific, Singapore, 2004), p634, nucl-th/0305084; T. Hirano, Acta Phys.
Polon. B 36 (2005), 187; Y. Hama, T. Kodama and O. Socolowski Jr., Braz. J. Phys. 35
(2005), 24; F. Grassi, Braz. J. Phys. 35 (2005), 52; P. Huovinen and P. V. Ruuskanen,
nucl-th/0605008; C. Nonaka, nucl-th/0702082
16) T. Hirano and M. Gyulassy, Nucl. Phys. A 769 (2006), 71
17) E. Iancu and R. Venugopalan, in Quark Gluon Plasma 3, edited by R.C. Hwa and X.N.
Wang (World Scientific, Singapore, 2004), p249, hep-ph/0303204
18) D. Kharzeev and M. Nardi, Phys. Lett. B 507 (2001), 121; D. Kharzeev and E. Levin,
Phys. Lett. B 523 (2001), 79; D. Kharzeev, E. Levin and M. Nardi, Phys. Rev. C 71
(2005), 054903; D. Kharzeev, E. Levin and M. Nardi, Nucl. Phys. A 730 (2004), 448
19) T. Hirano and Y. Nara, Nucl. Phys. A 743 (2004), 305
20) Y. Nara et al., Phys. Rev. C 61 (2000), 024901
21) P. Braun-Munzinger, D. Magestro, K. Redlich and J. Stachel, Phys. Lett. B 518 (2001),
22) B. B. Back et al. [PHOBOS Collaboration], Phys. Rev. C 72 (2005), 051901
23) M. Miller and R. Snellings, nucl-ex/0312008; X.I. Zhu, M. Bleicher and B. Stoecker,Phys.
Rev. C 72 (2005), 064911; R. Andrade et al., Phys. Rev. Lett. 97 (2006), 202302;
H.J. Drescher and Y. Nara, Phys. Rev. C 75 (2007), 034905
24) J. Adams et al. [STAR Collaboration], Phys. Rev. C 72 (2005), 014904
25) S. J. Sanders, nucl-ex/0701076; nucl-ex/0701078
26) B. B. Back et al. [PHOBOS Collaboration], Phys. Rev. C 65 (2002), 061901
27) B.B. Back et. al. [PHOBOS Collaboration], Phys. Rev. C 65 (2002), 061901
28) A. Kuhlman, U. W. Heinz and Y. V. Kovchegov, Phys. Lett. B 638 (2006), 171; A. Adil
et al., Phys. Rev. C 74 (2006), 044905
29) T. Lappi and R. Venugopalan, Phys. Rev. C 74 (2006) 054905
30) P.F. Kolb, J. Sollfrank and U. Heinz, Phys. Rev. C 62 (2000), 054909
31) C. Alt et al. [NA49 Collaboration], Phys. Rev. C 68 (2003), 034903
32) C. Adler et al. [STAR Collaboration], Phys. Rev. C 66 (2002), 034904
|
0704.1700 | Retract rationality and Noether's problem | arXiv:0704.1700v1 [math.AC] 13 Apr 2007
RETRACT RATIONALITY AND NOETHER’S PROBLEM
Ming-chang Kang
Department of Mathematics
National Taiwan University
Taipei, Taiwan, Rep. of China
E-mail: [email protected]
Abstract. Let K be any field and G be a finite group. Let G act on the rational
function fields K(xg : g ∈ G) by K-automorphisms defined by g · xh = xgh for any
g, h ∈ G. Denote by K(G) the fixed field K(xg : g ∈ G)^G. Noether’s problem asks
whether K(G) is rational (=purely transcendental) over K. We will prove that, if K
is any field, p an odd prime number, and G is a non-abelian group of exponent p with
|G| = p3 or p4 satisfying [K(ζp) : K] ≤ 2, then K(G) is rational over K. A notion of
retract rationality is introduced by Saltman in case K(G) is not rational. We will also
show that K(G) is retract rational if G belongs to a much larger class of p-groups. In
particular, generic G-polynomials of G-Galois extensions exist for these groups.
Mathematics Subject Classification 2000: Primary 12F12, 13A50, 11R32, 14E08.
Keywords and Phrases: Noether’s problem, rationality problem, retract rational, generic
polynomial, p-groups of exponent p, flabby class maps, projective modules.
§1. Introduction
Let K be any field and G be a finite group. Let G act on the rational function field
K(xg : g ∈ G) by K-automorphisms defined by g · xh = xgh for any g, h ∈ G. Denote
K(G) := K(xg : g ∈ G)^G the fixed subfield under the action of G. Noether’s problem
asks whether K(G) is rational (= purely transcendental) over K.
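As a minimal illustration of these definitions (our addition, not part of the original text), take the smallest nontrivial case G = Z/2Z = {e, g}:

```latex
\[
  g \cdot x_e = x_g, \qquad g \cdot x_g = x_e,
  \qquad\text{and}\qquad
  K(x_e, x_g)^{G} \;=\; K\bigl(x_e + x_g,\; x_e x_g\bigr),
\]
```

since the fixed field of the swap is the field of symmetric rational functions, generated by the two elementary symmetric polynomials. Hence K(Z/2Z) is purely transcendental of transcendence degree 2 over K, i.e. rational.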
Noether’s problem for abelian groups was studied by Fischer, Furtwängler, K. Ma-
suda, Swan, Voskresenskii, S. Endo and T. Miyata, Lenstra, etc. It was known that
K(Zp) is rational if Zp is a cyclic group of order p with p = 3, 5, 7 or 11. The first
counter-example was found by Swan: Q(Z47) is not rational over Q [Sw1]. However
Saltman showed that Q(Zp) is retract rational over Q for any prime number p, which
is enough to ensure the existence of a generic Galois G-extension and will fulfill the
original purpose of Emmy Noether [Sa1]. For the convenience of the reader, we recall
the definition of retract rationality.
Definition 1.1. ([Sa3]) Let K ⊂ L be a field extension. We say that L is retract
rational over K if there is a K-algebra R contained in L such that (i) L is the
quotient field of R, and (ii) the identity map 1R : R → R factors through a localized
polynomial K-algebra, i.e. there is an element f ∈ K[x1, . . . , xn], the polynomial
ring over K, and there are K-algebra homomorphisms ϕ : R → K[x1, . . . , xn][1/f ] and
ψ : K[x1, . . . , xn][1/f ] → R satisfying ψ ◦ ϕ = 1R.
It is not difficult to see that “rational” ⇒ “stably rational” ⇒ “retract rational”.
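For the first implication, a one-line sketch (our gloss on Definition 1.1): if L = K(x1, . . . , xn) is rational over K, one may take

```latex
\[
  R := K[x_1, \dots, x_n], \qquad f := 1, \qquad
  \varphi = \psi = 1_R : R \;\longrightarrow\; K[x_1, \dots, x_n][1/f] = R,
\]
```

so that ψ ◦ ϕ = 1R and L is the quotient field of R, verifying both conditions of Definition 1.1.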
One of the motivations to study Noether’s problem arises from the inverse Galois
problem. If K is an infinite field, it is known that K(G) is retract rational over K if
and only if there exists a generic Galois G-extension over K [Sa1, Theorem 5.3; Sa3,
Theorem 3.12], which guarantees the existence of a Galois G-extension of K, provided
that K is a Hilbertian field. On the other hand, the existence of a generic Galois
G-extension over K is equivalent to the existence of a generic polynomial for G. For
the relationship among these notions, see [DM]. For a survey of Noether’s problem the
reader is referred to articles of Swan and Kersten [Sw2; Ke].
Although Noether’s problem for abelian groups was investigated extensively, our
knowledge of the non-abelian Noether’s problem is surprisingly scarce (see, for example,
[Ka2]). We will list below some previous results on the non-abelian Noether’s problem,
which are relevant to the theme of this article.
Theorem 1.2. (Saltman [Sa2]) For any prime number p and for any field K
with char K 6= p (in particular, K may be an algebraically closed field), there is a
meta-abelian p-group G of order p9 such that K(G) is not retract rational over K. In
particular, it is not rational.
Theorem 1.3. (Hajja [Ha]) Let G be a finite group containing an abelian normal
subgroup N such that G/N is a cyclic group of order < 23. Then C(G) is rational over C.
Theorem 1.4. (Chu and Kang [CK]) Let p be a prime number, G be a p-group
of order ≤ p4 with exponent pe. Let K be any field such that either char K = p or char
K 6= p and K contains a primitive pe-th root of unity. Then K(G) is rational over K.
Theorem 1.5. (Kang [Ka4]) Let G be a metacyclic p-group with exponent pe,
and let K be any field such that (i) char K = p, or (ii) char K 6= p and K contains a
primitive pe-th root of unity. Then K(G) is rational over K.
Note that, in Theorems 1.3—1.5, it is assumed that the ground field contains enough
roots of unity. We may wonder whether Q(G) is rational if G is a non-abelian p-group
of small order. The answer is rather optimistic when G is a group of order 8 or 16.
Theorem 1.6. ([CHK; Ka3]) Let K be any field and G be any non-abelian group
of order 8 or 16 other than the generalized quaternion group of order 16. Then K(G)
is always rational over K.
However Serre was able to show that Q(G) is not rational when G is the generalized
quaternion group [Se, p.441–442; GMS, Theorem 33.26 and Example 33.27, p.89–90].
On the other hand, if p is an odd prime number, Saltman proves the following
theorem.
Theorem 1.7. (Saltman [Sa4]) Let p be an odd prime number and G be a non-
abelian group of order p3. If K is a field containing a primitive p-th root of unity, then
K(G) is stably rational.
The above theorem may be generalized to the case of p-groups containing a maximal
cyclic subgroup, namely,
Theorem 1.8. (Hu and Kang [HuK]) Let p be a prime number and G be a non-
abelian p-group of order pn containing a cyclic subgroup of index p. If K is any field
containing a primitive pn−2-th root of unity, then K(G) is rational over K.
In this article we will prove the following theorem.
Theorem 1.9. Let p be an odd prime number, G be the non-abelian group of ex-
ponent p and of order p3 or p4. If K is a field with [K(ζp) : K] ≤ 2, then K(G) is
rational over K.
The rationality problem of K(G) seems rather intricate if the ground field K does not
contain enough roots of unity. We don't know the answer to the rationality of K(G) when the
assumption that [K(ζp) : K] ≤ 2 is waived in the above theorem. On the other hand,
as to the retract rationality of K(G), a lot of information may be obtained. Before
stating our results, we recall a theorem of Saltman first.
Theorem 1.10. (Saltman [Sa1, Theorem 3.5]) Let K be a field, G = A ⋊ G0
be a semi-direct product group where A is an abelian normal subgroup of G. Assume
that gcd{|A|, |G0|} = 1 and both K(A) and K(G0) are retract rational over K. Then
K(G) is retract rational over K.
Thus the main problem is to investigate the retract rationality for p-groups. We
will prove K(G) is retract rational for many p-groups G of exponent p.
Theorem 1.11. Let p be a prime number, K be any field, and G = A ⋊ G0 be a
semi-direct product group where A is a normal elementary p-group of G and G0 is a
cyclic group of order pm. If p = 2 and char K ≠ 2, assume furthermore that K(ζ2m) is
a cyclic extension of K. Then K(G) is retract rational over K.
If p is an odd prime number, a p-group of exponent p containing an abelian normal
subgroup of index p certainly satisfies the assumption in Theorem 1.11. In particular,
a p-group of exponent p and of order p3 or p4 belongs to this class of p-groups (see
[CK]). There are six p-groups of exponent p and of order p5; only four of them contain
abelian normal subgroups of index p. Previously, the retract rationality of K(G) for
non-abelian p-groups, i.e. the existence of generic polynomials for such groups G, was
known only when G is of order p3 and of exponent p.
Similarly if G = A⋊G0 is a semi-direct product of p-groups such that A is a normal
subgroup of order p, and G0 is a direct product of an elementary p-group with a cyclic
group of order pm, then G also satisfies the assumption in Theorem 1.11 provided that
the assumption that K(ζ2m) is a cyclic extension of K remains in force.
The above Theorem 1.11 is deduced from the following theorem.
Theorem 1.12. Let K be any field, and G = A⋊G0 be a semi-direct product group
where A is a normal abelian subgroup of exponent e and G0 is a cyclic group of order
m. Assume that
(i) either charK = 0 or charK > 0 with charK ∤ em, and
(ii) both K(ζe) and K(ζm) are cyclic extensions of K such that gcd{m, [K(ζe) :
K]} = 1.
Then K(G) is retract rational over K.
The idea of the proof of Theorem 1.12 is to add a primitive e-th root of unity to the
ground field and the question is reduced to a question of multiplicative group actions.
It was Voskresenskii who realized that multiplicative group actions are related to the
birational classification of algebraic tori [Vo]. However, the fixed field of the multiplicative
group action arising in the present situation is not the function field of an algebraic torus;
it is a new type of multiplicative group action. Thus we need a new criterion for retract
rationality. It is the following theorem.
Theorem 1.13. Let π1 and π2 be finite abelian groups, π = π1 × π2, and L be a
Galois extension of the field K with π1 = Gal(L/K). Regard L as a π-field through the
projection π → π1. Assume that
(i) gcd{|π1|, |π2|} = 1,
(ii) char K = 0 or charK > 0 with charK ∤ |π2|, and
(iii) K(ζm) is a cyclic extension of K where m is the exponent of π2.
If M is a π-lattice such that ρπ(M) is an invertible π-lattice, then L(M)^π is retract
rational over K.
The reader will find that the above theorem is an adaptation of Saltman’s criterion
for retract rational algebraic tori [Sa3, Theorem 3.14] (see Theorem 2.5). We also
formulate another criterion for retract rationality of L(M)π when π is a semi-direct
product group (see Theorem 4.3). An amusing consequence of this criterion (when
compared with Theorem 1.3) is that, if G = A ⋊ H is a semi-direct product of an
abelian normal subgroup A and a cyclic subgroup H , then C(G) is always retract
rational (see Proposition 5.2).
We will organize this paper as follows. We recall some basic facts of multiplicative
group actions in Section 2. In particular, the flabby class map which was mentioned in
Theorem 1.13 will be defined. We will give additional tools for proving Theorem 1.9
and Theorems 1.11–1.13 in Section 3. In Section 4 Theorem 1.13 and its variants will
be proved. The proof of Theorem 1.11 and Theorem 1.12 will be given in Section 5.
Section 6 contains the proof of Theorem 1.9.
Acknowledgements. I am indebted to Prof. R. G. Swan for providing a simplified
proof in Step 7 of Case 1 of Theorem 1.9 (see Section 6). The proof in a previous version
of this paper was lengthy and complicated. I thank Swan’s generosity for allowing me
to include his proof in this article.
Notations and terminology. A field extension L over K is rational if L is purely
transcendental over K; L is stably rational over K if there exist y1, . . . , yN such that
y1, . . . , yN are algebraically independent over L and L(y1, . . . , yN) is rational over K.
More generally, two fields L1 and L2 are called stably isomorphic if L1(x1, . . . , xm) is
isomorphic to L2(y1, . . . , yn) where x1, . . . , xm and y1, . . . , yn are algebraically indepen-
dent over L1 and L2 respectively.
Recall the definition of K(G) at the beginning of this section: K(G) = K(xg : g ∈
G)^G. If L is a field with a finite group G acting on it, we will call it a G-field. Two
G-fields L1 and L2 are G-isomorphic if there is an isomorphism ϕ : L1 → L2 satisfying
ϕ(σ · u) = σ · ϕ(u) for any σ ∈ G, any u ∈ L1.
We will denote by ζn a primitive n-th root of unity in some extension field of K
when char K = 0 or char K = p > 0 with p ∤ n. All the groups in this article are
finite groups. Zn will be the cyclic group of order n or the ring of integers modulo n,
depending on the context. Z[π] is the group ring of a finite group
π over Z. Z(G) is the center of the group G. The exponent of a group G is the
least common multiple of the orders of elements in G. The representation space of the
regular representation of G over K is denoted by W = ⊕_{g∈G} K · x(g), where G acts on
W by g · x(h) = x(gh) for any g, h ∈ G.
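As a quick illustration of this notation (a standard example, not part of the original argument, with the two regular-representation variables relabelled x_1, x_2): for the cyclic group of order two,

```latex
% Noether's problem for G = Z/2Z: the nontrivial element swaps x_1 and x_2.
K(\mathbb{Z}/2\mathbb{Z}) \;=\; K(x_1, x_2)^{\mathbb{Z}/2\mathbb{Z}}
\;=\; K(x_1 + x_2,\; x_1 x_2),
```

which is rational over K, since the elementary symmetric functions x_1 + x_2 and x_1 x_2 are algebraically independent and generate the fixed field.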
§2. Multiplicative group actions
Let π be a finite group. A π-lattice M is a finitely generated Z[π]-module such that
M is a free abelian group when it is regarded as an abelian group.
For any field K and a π-lattice M , K[M ] will denote the Laurent polynomial ring
and K(M) is the quotient field of K[M]. Explicitly, if M = ⊕_{1≤i≤m} Z · xi as a free
abelian group, then K[M] = K[x1^{±1}, . . . , xm^{±1}] and K(M) = K(x1, . . . , xm). Since π
acts on M, it will act on K[M] and K(M) by K-automorphisms, i.e. if σ ∈ π and
σ · xj = Σ_{1≤i≤m} aij xi ∈ M (written additively), then we define the action of σ in K[M]
and K(M) by σ · xj = Π_{1≤i≤m} xi^{aij}.
The multiplicative action of π on K(M) is called a purely monomial action in [HK1].
If π is a group acting on the rational function field K(x1, . . . , xm) by K-automorphisms
such that σ · xj = cj(σ) · Π_{1≤i≤m} xi^{aij}, where σ ∈ π, aij ∈ Z and cj(σ) ∈ K\{0}, such a
multiplicative group action is called a monomial action. Monomial actions arise when
studying Noether’s problem for non-split extension groups [Ha; Sa5].
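A rank-one example may help fix the terminology (an illustration only, with notation chosen here): let M = Z · x and let σ of order two act on M by σ · x = −x, so that on K(M) = K(x)

```latex
\sigma \cdot x = x^{-1}, \qquad
K(x)^{\langle\sigma\rangle} = K\!\left(x + x^{-1}\right),
```

a purely monomial action with rational fixed field. By contrast, σ · x = c · x^{-1} with a coefficient c ∈ K\{0}, c ≠ 1, is a monomial action which is no longer purely monomial.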
We will introduce another kind of multiplicative actions. Let K ⊂ L be fields and
π be a finite group. Suppose that π acts on L by K-automorphisms (but it is not
assumed that π acts faithfully on L). Given a π-lattice M , the action of π on L can
be extended to an action of π on L(M) (= L(x1, . . . , xm) if M =
1≤i≤m Z · xi) by
K-automorphisms defined as follows: If σ ∈ π and σ · xj =
1≤i≤m aijxi ∈ M , then
the multiplication action in L(M) is defined by σ · xj =
1≤i≤m x
i for 1 ≤ j ≤ m.
When L is a Galois extension of K and π = Gal(L/K) (and therefore π acts
faithfully on L), the fixed subfield L(M)^π is the function field of the algebraic torus
defined over K, split by L and with character group M (see [Vo]).
We recall some basic facts of the theory of flabby (flasque) π-lattices developed by
Endo and Miyata, Voskresenskii, Colliot-Thélène and Sansuc, etc. [Vo; CTS]. We refer
the reader to [Sw2; Sw3; Lo] for a quick review of the theory.
In the sequel, π denotes a finite group unless otherwise specified.
Definition 2.1. A π-lattice M is called a permutation lattice if M has a Z-basis
permuted by π. M is called an invertible (or permutation projective) lattice, if it is
a direct summand of some permutation lattice. A π-lattice M is called a flabby (or
flasque) lattice if H^{−1}(π′,M) = 0 for any subgroup π′ of π. (Note that H^{−1}(π′,M)
denotes the Tate cohomology group.) Similarly, M is called coflabby if H^{1}(π′,M) = 0
for any subgroup π′ of π.
It is known that an invertible π-lattice is necessarily a flabby lattice [Sw2, Lemma
8.4; Lo, Lemma 2.5.1].
Theorem 2.2. (Endo and Miyata [Sw3, Theorem 3.4; Lo, 2.10.1]) Let π be
a finite group. Then any flabby π-lattice is invertible if and only if all Sylow subgroups
of π are cyclic.
Denote by Lπ the class of all π-lattices, and by Fπ the class of all flabby π-lattices.
Definition 2.3. We define an equivalence relation ∼ on Fπ: Two π-lattices E1
and E2 are similar, denoted by E1 ∼ E2, if E1 ⊕ P is isomorphic to E2 ⊕ Q for some
permutation lattices P and Q. The similarity class containing E will be denoted by
[E]. Define Fπ = Fπ/ ∼, the set of all similarity classes of flabby π-lattices.
Fπ becomes a commutative monoid if we define [E1]+[E2] = [E1⊕E2]. The monoid
Fπ is called the flabby class monoid of π.
Definition 2.4. We define a map ρ : Lπ → Fπ as follows. For any π-lattice M ,
there exists a flabby resolution, i.e. a short exact sequence of π-lattices 0 → M → P →
E → 0 where P is a permutation lattice and E is a flabby lattice [Sw2, Lemma 8.5].
We define ρπ(M) = [E] ∈ Fπ. The map ρπ : Lπ → Fπ is well-defined [Sw2, Lemma
8.7]; it is called the flabby class map. We will simply write ρ instead of ρπ, if the group
π is obvious from the context.
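A minimal example of a flabby resolution may clarify this definition (a standard computation, included only as an illustration): take π = 〈σ〉 ≅ Z/2Z and let Z^− be the rank-one π-lattice on which σ acts by −1. Then

```latex
0 \longrightarrow \mathbb{Z}^{-}
\xrightarrow{\;1\,\mapsto\,1-\sigma\;} \mathbb{Z}[\pi]
\xrightarrow{\;\text{augmentation}\;} \mathbb{Z} \longrightarrow 0
```

is a flabby resolution: Z[π] is a permutation lattice, and the trivial lattice Z is itself a permutation lattice, hence flabby. Consequently ρπ(Z^−) = [Z] = 0 in Fπ; in particular ρπ(Z^−) is invertible.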
Theorem 2.5. (Saltman [Sa3, Theorem 3.14]) Let L be a Galois extension of
K with π = Gal(L/K) and M be a π-lattice. Then ρπ(M) is invertible if and only if
L(M)^π is retract rational over K.
§3. Generalities
We recall several results which will be used later.
Theorem 3.1. ([HK2, Theorem 1]) Let L be a field and G be a finite group
acting on L(x1, . . . , xm), the rational function field of m variables over L. Suppose that
(i) for any σ ∈ G, σ(L) ⊂ L;
(ii) The restriction of the action of G to L is faithful;
(iii) for any σ ∈ G,

    (σ(x1), . . . , σ(xm))^t = A(σ) · (x1, . . . , xm)^t + B(σ)

where A(σ) ∈ GLm(L) and B(σ) is an m × 1 matrix over L.
Then L(x1, . . . , xm) = L(z1, . . . , zm) where σ(zi) = zi for all σ ∈ G and for any
1 ≤ i ≤ m. In fact, z1, . . . , zm can be defined by

    (z1, . . . , zm)^t = A · (x1, . . . , xm)^t + B

for some A ∈ GLm(L) and for some B which is an m × 1 matrix over L. Moreover, if
B(σ) = 0 for all σ ∈ G, we may choose B = 0 in defining z1, . . . , zm.
Theorem 3.2. (Kuniyoshi [CHK, Theorem 2.5]) Let K be a field with char K =
p > 0 and G be a p-group. Then K(G) is always rational over K.
Proposition 3.3. Let π be a finite group and L be a π-field. Suppose that 0 →
M1 → M2 → N → 0 is a short exact sequence of π-lattices satisfying (i) π acts
faithfully on L(M1), and (ii) N is an invertible π-lattice. Then the π-fields L(M2) and
L(M1 ⊕N) are π-isomorphic.
Proof. We follow the proof of [Le, Proposition 1.5]. Denote L(M1)^× = L(M1)\{0}.
Consider the exact sequence of π-modules:

    0 → L(M1)^× → L(M1)^× · M2 → N → 0.

By Hilbert's Theorem 90, we find that H^1(π′, L(M1)^×) = 0 for any subgroup π′ ⊂ π.
Applying [Le, Proposition 1.2] we find that the above exact sequence splits. The
resulting π-morphism N → L(M1)^× · M2 provides the required π-isomorphism from
L(M2) to L(M1 ⊕ N). □
Lemma 3.4. Let the assumptions be the same as in Proposition 3.3. Assume
furthermore that N is a permutation π-lattice. Then L(M2)^π is rational over L(M1)^π.
Proof. By Proposition 3.3, L(M2) = L(M1)(N). Since π acts faithfully on L(M1),
we may apply Theorem 3.1 and find u1, . . . , un ∈ L(M2) such that L(M2) =
L(M1)(u1, . . . , un) with σ(ui) = ui for any σ ∈ π and any 1 ≤ i ≤ n, where n = rank(N).
Hence L(M2)^π = L(M1)^π(u1, . . . , un). □
Lemma 3.5. Let π be a finite abelian group of exponent e and K be a field such
that char K = 0 or char K > 0 with char K ∤ e. If P is a permutation π-lattice, then
K(P)^π = K(ζe)(M)^{π0} where π0 = Gal(K(ζe)/K) and M is some π0-lattice.
Proof. We follow the standard approach to solving Noether’s problem for abelian
groups [Sw1; Sw2; Le].
Note that K(P)^π = {K(ζe)(P)^{〈π0〉}}^{〈π〉} = K(ζe)(P)^{〈π,π0〉} where the action of π is
extended to K(ζe)(P) by defining g(ζe) = ζe for any g ∈ π, and the action of π0 is
extended to K(ζe)(P) by requiring that π0 acts trivially on P.
Since π is abelian of exponent e, we may diagonalize its action on P, i.e. we may
find x1, . . . , xn ∈ K(ζe)(P) such that n = rank(P), g(xi)/xi ∈ 〈ζe〉 for any g ∈ π, and
K(ζe)(P) = K(ζe)(x1, . . . , xn).
Thus K(ζe)(P)^{〈π〉} = K(ζe)(y1, . . . , yn) where y1, . . . , yn are monomials in x1, . . . , xn.
Let M be the multiplicative subgroup generated by y1, . . . , yn in K(ζe)(y1, . . . , yn)\{0}.
Then M is a π0-lattice and K(ζe)(y1, . . . , yn)^{π0} = K(ζe)(M)^{π0}. □
Proposition 3.6. Let π and K be the same as in Lemma 3.5. If K(ζe) is a cyclic
extension of K, then K(π) is retract rational over K.
Proof. The regular representation of π is afforded by a permutation π-lattice. Thus,
by Lemma 3.5, K(π) = K(ζe)(M)^{π0} where π0 = Gal(K(ζe)/K). Since π0 is cyclic by
assumption, we may apply Theorem 2.2 and Theorem 2.5. □
§4. Proof of Theorem 1.13
Lemma 4.1. Let π be a finite group, M be a π-lattice. Suppose that π0 is a normal
subgroup of π and π0 acts trivially on M . Thus we may regard M as a lattice over
π/π0.
(1) M is a permutation π-lattice ⇔ So is it as a π/π0-lattice.
(2) M is an invertible π-lattice ⇔ So is it as a π/π0-lattice.
(3) M is a flabby π-lattice ⇔ So is it as a π/π0-lattice.
(4) If 0 → M → P → E → 0 is a flabby resolution of M as a π/π0-lattice, this
short exact sequence is also a flabby resolution of M as a π-lattice.
(5) ρπ(M) is an invertible π-lattice ⇔ ρπ/π0(M) is an invertible π/π0-lattice.
Proof. The properties (1)–(4) can be found in [CTS, Lemma 2, p.179–180]. As
to (5), the direction “⇐” is obvious by applying (4). For the other direction, assume
ρπ(M) is an invertible π-lattice. Let 0 → M → P → E → 0 be a flabby resolution
of M as a π-lattice. Then 0 → M^{π0} → P^{π0} → E^{π0} → 0 is a flabby resolution of
M = M^{π0} in the category of π/π0-lattices by [CTS, Lemma (xi), p.180]. It remains to
show that E^{π0} is invertible. Since [E] = ρπ(M) is invertible, we can find a π-lattice
N such that E ⊕ N = N′ is a permutation π-lattice. Note that N′^{π0} is a permutation
π/π0-lattice by [CTS, Lemma 2(i), p.180]. We find that E^{π0} is invertible because
E^{π0} ⊕ N^{π0} = (E ⊕ N)^{π0} = N′^{π0}. □
Lemma 4.2. Let the assumptions be the same as in Theorem 1.13. If P is a
permutation π-lattice, then L(P)^π is retract rational over K.
Proof. Since π1 and π2 are abelian groups with gcd{|π1|, |π2|} = 1, every subgroup
π′ of π can be written as π′ = ρ × λ where ρ is a subgroup of π1 and λ is a subgroup
of π2.
As a permutation π-lattice, we may write P = ⊕_i Z[π/π^{(i)}] where π^{(i)} is a subgroup
of π. Write π^{(i)} = ρi × λi where ρi ⊂ π1, λi ⊂ π2. Hence Z[π/π^{(i)}] = Z[π/(ρi × λi)] =
Z[(π1/ρi) × (π2/λi)].
It is not difficult to see that Z[π/π^{(i)}] = ⊕_{k,l} Z · u^{(i)}_{kl} where 1 ≤ k ≤ t = |π1/ρi|,
1 ≤ l ≤ r = |π2/λi|. Moreover, if g ∈ π1 and g′ ∈ π2, then g · u^{(i)}_{kl} = u^{(i)}_{g(k),l},
g′ · u^{(i)}_{kl} = u^{(i)}_{k,g′(l)}, and the homomorphisms π1 → St and π2 → Sr are induced from the
permutation representations associated to π/π^{(i)} = (π1/ρi) × (π2/λi), where St and Sr
are the symmetric groups of degree t and r respectively.
Since π1 acts faithfully on L, we may apply Theorem 3.1. Explicitly, for any 1 ≤ l ≤ r,
we may find A^{(i)} ∈ GLt(L) and define v^{(i)}_{kl} by

    (v^{(i)}_{1l}, . . . , v^{(i)}_{tl})^t = A^{(i)} · (u^{(i)}_{1l}, . . . , u^{(i)}_{tl})^t,        (1)

such that g · v^{(i)}_{kl} = v^{(i)}_{kl} for any g ∈ π1, and L(Z[π/π^{(i)}]) = L(u^{(i)}_{kl} : 1 ≤ k ≤ t, 1 ≤ l ≤ r)
= L(v^{(i)}_{kl} : 1 ≤ k ≤ t, 1 ≤ l ≤ r).
If g′ ∈ π2, from the relation g′ · u^{(i)}_{kl} = u^{(i)}_{k,g′(l)} and Formula (1), we find that
g′ · v^{(i)}_{kl} = v^{(i)}_{k,g′(l)}.
Since L(P)^{π1} = L(u^{(i)}_{kl})^{π1} = L(v^{(i)}_{kl})^{π1} = K(v^{(i)}_{kl}), where i, k, l run over index sets
which are understood, it follows that L(P)^π = K(v^{(i)}_{kl})^{π2}. Note that π2 acts on {v^{(i)}_{kl}}
by permutations.
By Lemma 3.5, K(v^{(i)}_{kl})^{π2} = K(ζm)(M)^{π0} where m is the exponent of π2, π0 =
Gal(K(ζm)/K) and M is some π0-lattice.
By our assumption, π0 is a cyclic group. Hence ρπ0(M) is invertible by Theorem
2.2. Apply Theorem 2.5. We find that K(ζm)(M)^{π0} is retract rational over K. □
Proof of Theorem 1.13.
Step 1. Suppose that M is a π-lattice such that ρπ(M) is invertible.
Define π0 = {g ∈ π2 : g acts trivially on L(M)}. Then π/π0 acts faithfully on
L(M). Moreover, ρπ/π0(M) is invertible by Lemma 4.1.
In other words, without loss of generality we may assume that π acts faithfully on
L(M). Thus we will keep this assumption in force in the sequel.
Step 2. Since ρπ(M) is invertible, by [Sa3, Theorem 2.3, p.176] we may find π-
lattices M′, P, Q such that P and Q are permutation lattices, the sequence

    0 → M → M′ → Q → 0        (2)

is exact, and the inclusion map M → M′ factors through P, i.e. M → M′ equals the
composite M → P → M′.
The remaining proof proceeds quite similar to that of [Sa3, Theorem 3.14, p.189].
Step 3. From the diagram in (2) we get a commutative diagram of K-algebra
morphisms: the map L[M]^π → L[M′]^π induced by M → M′ factors as

    L[M]^π → L[P]^π → L[M′]^π.
Step 4. The quotient field of L[P]^π is L(P)^π, which is retract rational over K by
Lemma 4.2. Thus the identity map 1 : L[P]^π → L[P]^π factors rationally by [Sa3,
Lemma 3.5], i.e. there is a localized polynomial ring K[x1, . . . , xn][1/f] and K-algebra
maps ϕ : L[P]^π → K[x1, . . . , xn][1/f], ψ : K[x1, . . . , xn][1/f] → L[P]^π such that
ψ ◦ ϕ = 1.
It follows that the composite map g : L[M]^π → L[P]^π → L[M′]^π also factors
rationally, i.e. there are K-algebra maps ϕ′ : L[M]^π → K[x1, . . . , xn][1/f] and
ψ′ : K[x1, . . . , xn][1/f] → L[M′]^π such that g = ψ′ ◦ ϕ′.
Step 5. By Lemma 3.4, L(M′)^π is rational over L(M)^π. (This is the only step
where we use the assumption that π acts faithfully on L(M).)
Now we may apply [Sa3, Proposition 3.6(b), p.183] where, in the notation of [Sa3],
we take S = T = L[M]^π and ϕ to be the identity map on L(M)^π. We conclude that
1 : L[M]^π → L[M]^π factors rationally, i.e. L(M)^π is retract rational over K. □
Here is a variant of Theorem 1.13.
Theorem 4.3. Let π be a finite group, let 0 → π1 → π → π2 → 1 be a group extension,
and L be a Galois extension of the field K with π2 = Gal(L/K). Let π act on L through
the projection π → π2. Assume that
(i) π1 is an abelian group of exponent e with ζe ∈ L,
(ii) the extension 0 → π1 → π → π2 → 1 splits, and
(iii) every Sylow subgroup of π2 is cyclic.
If M is a π-lattice such that ρπ(M) is an invertible lattice, then L(M)^π is retract
rational over K.
Proof. The proof is very similar to the proof of Theorem 1.13.
We claim that L(P)^π is retract rational for any permutation π-lattice P.
For the proof, we will use [Sa5, Theorem 2.1, p.546]. We will show that (c) of
[Sa5, Theorem 2.1] is valid, which will guarantee that L(P)^π is retract rational. By
assumption (iii), part (d) of [Sa5, Theorem 2.1] is valid by Theorem 2.2. It remains to
check that the embedding problem of L/K and the extension 0 → π1 → π → π2 → 1
is solvable. But this is the well-known split embedding problem [ILF, Theorem 1.9,
p.12].
Now define π0 = {g ∈ π : g acts trivially on L(M)}. Note that π0 ⊂ π1. The
remaining proof is the same as in the proof of Theorem 1.13 and is omitted. �
Corollary 4.4. Let π be an abelian group of exponent e and K be a field with ζe ∈ K.
Suppose that M is a π-lattice and π acts on K(M) by K-automorphisms. If ρπ(M) is
an invertible module, then K(M)^π is retract rational over K.
§5. Proof of Theorems 1.11 and 1.12
Proof of Theorem 1.12.
Step 1. Write G = A ⋊ G0 where G0 = 〈σ〉 is a cyclic group of order m, and
A = A1 × A2 × · · · × Ar with each Ai = 〈ρi〉 a cyclic group of order ei, where e = e1
and er | er−1, . . . , e2 | e1.
Define Bi = 〈ρ1, . . . , ρ̂i, . . . , ρr〉 to be the subgroup of A obtained by deleting the
i-th direct factor.
Let Gal(K(ζe)/K) = 〈τ〉 be a cyclic group of order n. Write ζ = ζe and τ · ζ = ζ^a
for some integer a.
For 1 ≤ i ≤ r, choose ζi ∈ 〈ζ〉 such that ζi is a primitive ei-th root of unity. Note
that τ · ζi = ζi^a.
Let ni = [K(ζi) : K] for 1 ≤ i ≤ r. Note that n = n1.
Step 2. Let W = ⊕_{g∈G} K · x(g) be the representation space of the regular
representation of G. Then

    K(G) = K(W)^G = {K(ζ)(W)^{〈τ〉}}^G = K(ζ)(W)^{〈G,τ〉}

where the actions of G and τ are extended to K(ζ)(W) by requiring that G and τ act
trivially on K(ζ) and W respectively.
Step 3. For 1 ≤ i ≤ r, define

    ui = Σ_{l∈Zei, g∈Bi} ζi^{−l} · x(ρi^l · g).

Note that ρi · ui = ζi ui and, if j ≠ i, then ρj · ui = ui.
For 1 ≤ i ≤ r, write χi for the character χi : A → K(ζ)^× such that g · ui =
χi(g) · ui. Since χ1, . . . , χr are distinct characters of A, it follows that u1, . . . , ur are
linearly independent vectors in K(ζ) ⊗K W.
Moreover, the subgroup A acts faithfully on ⊕_{1≤i≤r} K(ζ)ui.
Step 4. For 1 ≤ i ≤ r, j ∈ Zni, define

    vij = Σ_{l∈Zei, g∈Bi} ζi^{−l a^j} · x(ρi^l · g).

Thus we have elements v1,1, . . . , v1,n1, v2,1, . . . , v2,n2, . . . , vr,nr; in total we have n1 +
n2 + · · · + nr such elements.
As in Step 3, these elements vij are linearly independent. Note that τ · vij = vi,j+1.
Step 5. For 1 ≤ i ≤ r, j ∈ Zni, k ∈ Zm, define

    xi,j,k = Σ_{l∈Zei, g∈Bi} ζi^{−l a^j} · x(σ^k · ρi^l · g).

We have in total m(n1 + n2 + · · · + nr) such elements xi,j,k.
Note that xi,j,0 = vij, and the vector xi,j,k is just a "translation" of the vector xi,j,0
in the space ⊕_{g∈G} K(ζ)x(g) (with basis x(g), g ∈ G). Thus these vectors xi,j,k are
linearly independent. Note that σ · xi,j,k = xi,j,k+1. Moreover, the group 〈G, τ〉 acts
faithfully on ⊕ K(ζ) · xi,j,k.
Apply Theorem 3.1. We find K(ζ)(W) = K(ζ)(xijk : 1 ≤ i ≤ r, j ∈ Zni, k ∈
Zm)(w1, . . . , ws) where s = |G| − m(n1 + n2 + · · · + nr) and λ(wi) = wi for any i and
any λ ∈ 〈G, τ〉; hence K(G) = K(ζ)(W)^{〈G,τ〉} = K(ζ)(xijk)^{〈G,τ〉}(w1, . . . , ws).
Step 6. We will consider the fixed field K(ζ)(xijk : 1 ≤ i ≤ r, j ∈ Zni, k ∈ Zm)^{〈G,τ〉}.
Let π = 〈σ, τ〉, Λ = Z[π]. Let N = 〈xijk : 1 ≤ i ≤ r, j ∈ Zni, k ∈ Zm〉 be the
multiplicative subgroup generated by these xijk in K(ζ)(xijk : 1 ≤ i ≤ r, j ∈ Zni, k ∈
Zm) \ {0}. Note that N is a Λ-module. In fact, N is a permutation π-lattice.
Define

    Φ : N → Ze1 × Ze2 × · · · × Zer

by Φ(x) = (l̄1, l̄2, . . . , l̄r) ∈ Ze1 × Ze2 × · · · × Zer if x = Π xijk^{bijk} (with bijk ∈ Z) and
ρ1(x) = ζ1^{l̄1} x, ρ2(x) = ζ2^{l̄2} x, . . . , ρr(x) = ζr^{l̄r} x.
Define M = Ker(Φ). We find that K(ζ)(xijk : 1 ≤ i ≤ r, j ∈ Zni, k ∈ Zm)^A =
K(ζ)(M). Thus it remains to find K(ζ)(M)^π. Note that M is a π-lattice.
Step 7. Since gcd{m, n} = 1, it follows that π is a cyclic group. Hence ρπ(M) is
invertible by Theorem 2.2.
Apply Theorem 1.13. We find that K(ζ)(M)^π is retract rational over K. Since
K(G) is rational over K(ζ)(M)^π, it follows that K(G) is also retract rational over K
by [Sa3, Proposition 3.6(a), p.183]. □
Proof of Theorem 1.11.
If char K = p, apply Theorem 3.2. Thus K(G) is rational; in particular, it is retract
rational.
From now on we will assume that char K ≠ p. It is not difficult to verify that all
the assumptions of Theorem 1.12 are valid in the present situation. Hence the result.
Corollary 5.1. Let K be a field, p be any prime number, G = A⋊ G0 be a semi-
direct product of p-groups where A is a normal abelian subgroup of exponent pe and G0
is a cyclic group of order pm. If char K ≠ p, assume that (i) both K(ζpe) and K(ζpm)
are cyclic extensions of K and (ii) p ∤ [K(ζpe) : K]. Then K(G) is retract rational over
K.
Proposition 5.2. Let K be a field and G = A ⋊ G0 be a semi-direct product.
Assume that
(i) A is an abelian normal subgroup of exponent e in G and G0 is a cyclic group of
order m.
(ii) either charK = 0 or charK > 0 with charK ∤ em, and
(iii) ζe ∈ K and K(ζm) is a cyclic extension of K.
If G −→ GL(V ) is a finite-dimensional linear representation of G over K, then
K(V )G is retract rational over K.
Proof. Decompose V into V = ⊕_χ Vχ where χ : A → K \ {0} runs over the linear
characters of A and Vχ = {v ∈ V : g · v = χ(g) · v for all g ∈ A}.
It is easy to see that σ(Vχ) = Vχ′ for some χ′. Suppose Vχ1, Vχ2, · · · , Vχs is an orbit
of σ, i.e. σ(Vχj) = Vχj+1 and σ(Vχs) = Vχ1. Choose a basis v1, · · · , vt of Vχ1. It follows
that {σ^j(vi) : 1 ≤ i ≤ t, 0 ≤ j ≤ s − 1} is a basis of ⊕_{1≤j≤s} Vχj.
In this way, we may find a basis w1, · · · , wn of V such that (i) for any g ∈ A, any
1 ≤ i ≤ n, g · wi = αwi for some α ∈ K; and (ii) σ · wi = wj for some j.
It follows that K(V)^A = K(w1, · · · , wn)^A = K(u1, · · · , un) where u1, · · · , un are
monomials in w1, · · · , wn. Thus K(u1, · · · , un) = K(M) for some G0-lattice M. It
follows that K(V)^G = {K(V)^A}^{G0} = K(M)^{〈σ〉}.
By Theorem 2.2, ρG0(M) is invertible. Apply Theorem 1.12. □
Remark. In the above theorem it is essential that G is a split extension group.
For non-split extension groups, monomial actions (instead of merely purely monomial
actions) may arise; see the proof of Theorem 1.3 [Ha] and also [Sa5].
§6. Proof of Theorem 1.9
We will prove Theorem 1.9 in this section.
If charK = p, apply Theorem 3.2. We find that K(G) is rational. In particular, it
is retract rational.
Thus we will assume that char K ≠ p from now on till the end of this section.
If ζp ∈ K, we apply Theorem 1.4 to find that K(G) is rational. Hence we will
consider only the situation [K(ζp) : K] = 2 in the sequel.
Since p is an odd prime number, there is only one non-abelian p-group of order
p3 with exponent p, and there are precisely two non-isomorphic non-abelian p-groups
of order p4 with exponent p (see [CK, Section 2 and Section 3]). We will solve the
rationality problem of these three groups separately.
Let ζ = ζp, Gal(K(ζ)/K) = 〈τ〉 with τ · ζ = ζ^{−1}.
Case 1. G = 〈σ1, σ2, σ3 : σi^p = 1, σ1 ∈ Z(G), σ2σ3 = σ3σ1σ2〉.
Step 1. Let W = ⊕_{g∈G} K · x(g) be the representation space of the regular
representation of G. Note that

K(G) = K(x(g) : g ∈ G)^G = {K(ζ)(x(g) : g ∈ G)^{〈τ〉}}^G = K(ζ)(x(g) : g ∈ G)^{〈G,τ〉}.
Step 2. For i ∈ Zp, define x0,i, x1,i ∈ ⊕_{g∈G} K(ζ) · x(g) by

    x0,i = Σ_{j,k∈Zp} ζ^{−j−k} x(σ3^i σ1^j σ2^k),
    x1,i = Σ_{j,k∈Zp} ζ^{j+k} · x(σ3^i σ1^j σ2^k).

We find that

σ1 : x0,i ↦ ζ x0,i, x1,i ↦ ζ^{−1} x1,i,
σ2 : x0,i ↦ ζ^{i+1} x0,i, x1,i ↦ ζ^{−i−1} x1,i,
σ3 : x0,i ↦ x0,i+1, x1,i ↦ x1,i+1,
τ : x0,i ↔ x1,i.

The restriction of the action of 〈σ1, σ2〉 to K(ζ) · x0,i and K(ζ) · x1,i is given by distinct
characters of 〈σ1, σ2〉 to K(ζ) \ {0}. Hence x0,0, x0,1, x0,2, . . . , x0,p−1, x1,0, . . . , x1,p−1 are
linearly independent vectors. Moreover, the action of 〈G, τ〉 on K(ζ)(x0,i, x1,i : i ∈ Zp)
is faithful.
By Theorem 3.1, K(ζ)(x(g) : g ∈ G) = K(ζ)(x0,i, x1,i : i ∈ Zp)(X1, . . . , Xl) where
l = p^3 − 2p and ρ(Xj) = Xj for any 1 ≤ j ≤ l, any ρ ∈ 〈G, τ〉.
Step 3. Define x0 = x0,0^p, y0 = x0,0 x1,0, and xi = x0,i · x0,i−1^{−1}, yi = x1,i · x1,i−1^{−1} for
1 ≤ i ≤ p − 1. It follows that K(ζ)(x0,i, x1,i : i ∈ Zp)^{〈σ1〉} = K(ζ)(xi, yi : i ∈ Zp).
Moreover, the actions of σ2, σ3, τ are given as

σ2 : x0 ↦ x0, y0 ↦ y0, xi ↦ ζ xi, yi ↦ ζ^{−1} yi for 1 ≤ i ≤ p − 1,
σ3 : x0 ↦ x0 x1^p, x1 ↦ x2 ↦ · · · ↦ xp−1 ↦ (x1 x2 · · · xp−1)^{−1},
     y0 ↦ y0 y1 x1, y1 ↦ y2 ↦ · · · ↦ yp−1 ↦ (y1 y2 · · · yp−1)^{−1},
τ : x0 ↦ y0^p x0^{−1}, y0 ↦ y0, xi ↔ yi for 1 ≤ i ≤ p − 1.
Step 4. Define X = x0 y0^{−(p−1)/2}, Y = x0^{−1} y0^{(p+1)/2}.
Then K(ζ)(xi, yi : i ∈ Zp) = K(ζ)(X, Y, xi, yi : 1 ≤ i ≤ p − 1) and σ2(X) = X,
σ2(Y) = Y, σ3(X) = αX, σ3(Y) = βY, τ : X ↔ Y where α, β ∈ K(ζ)(xi, yi : 1 ≤ i ≤
p − 1)\{0}.
Apply Theorem 3.1. We may find X̃, Ỹ so that K(ζ)(xi, yi : i ∈ Zp) = K(ζ)(xi, yi :
1 ≤ i ≤ p − 1)(X̃, Ỹ) with ρ(X̃) = X̃, ρ(Ỹ) = Ỹ for any ρ ∈ 〈σ2, σ3, τ〉. Thus it
remains to consider K(ζ)(xi, yi : 1 ≤ i ≤ p − 1)^{〈σ2,σ3,τ〉}.
Step 5. Define u0 = x1^p, v0 = x1 y1, and ui = xi+1 xi^{−1}, vi = yi+1 yi^{−1} for 1 ≤ i ≤ p − 2.
It follows that K(ζ)(xi, yi : 1 ≤ i ≤ p − 1)^{〈σ2〉} = K(ζ)(ui, vi : 0 ≤ i ≤ p − 2). The
actions of σ3 and τ are given by

σ3 : u0 ↦ u0 u1^p, v0 ↦ v0 v1 u1, u1 ↦ u2 ↦ · · · ↦ up−2 ↦ (u0 u1^{p−1} u2^{p−2} · · · up−2^2)^{−1},
     v1 ↦ v2 ↦ · · · ↦ vp−2 ↦ u0 (v0^p v1^{p−1} · · · vp−2^2)^{−1},
τ : u0 ↦ v0^p u0^{−1}, v0 ↦ v0, ui ↔ vi for 1 ≤ i ≤ p − 2.
Step 6. Define up−1 = (u0 u1^{p−1} u2^{p−2} · · · up−2^2)^{−1}, wi = v0 v1 · · · vi u1 u2 · · · ui for
1 ≤ i ≤ p − 2, and wp−1 = (v0^{p−1} v1^{p−2} · · · vp−2 u1^{p−2} u2^{p−3} · · · up−2)^{−1}. We find that
K(ui, vi : 0 ≤ i ≤ p − 2) = K(ui, wi : 1 ≤ i ≤ p − 1) and

σ3 : u1 ↦ u2 ↦ · · · ↦ up−1 ↦ (u1 u2 · · · up−1)^{−1},
     w1 ↦ w2 ↦ · · · ↦ wp−1 ↦ (w1 w2 · · · wp−1)^{−1},
τ : ui ↦ wi (ui wi−1)^{−1}, wi ↦ wi for 1 ≤ i ≤ p − 1,

where we write w0 = v0 for convenience. (Granting that w0 is defined as above, we
have the relation w0 w1 · · · wp−1 = 1. But we don't have the relation u0 u1 · · · up−1 = 1,
because we defined u0, u1, . . . , up−2 first and up−1 is defined in another way.)
We will study whether K(ζ)(ui, wi : 1 ≤ i ≤ p − 1)^{〈σ3,τ〉} is rational or not.
Step 7. The multiplicative action in Step 6 can be formulated in terms of π-lattices,
where π = 〈τ, σ3〉, as follows.
Let M = (⊕_{1≤i≤p−1} Z · ui) ⊕ (⊕_{1≤i≤p−1} Z · wi) and define w0 = −w1 − w2 − · · · − wp−1.
Define a Z[π]-module structure on M by

σ3 : u1 ↦ u2 ↦ · · · ↦ up−1 ↦ −u1 − u2 − · · · − up−1,
     w1 ↦ w2 ↦ · · · ↦ wp−1 ↦ −w1 − w2 − · · · − wp−1,
τ : ui ↦ −ui + wi − wi−1, wi ↦ wi for 1 ≤ i ≤ p − 1.

We claim that M ≃ Z[π1] ⊗_Z Z[π2]/Φp(σ3) with π1 = 〈τ〉, π2 = 〈σ3〉.
Throughout this step, we will write σ = σ3 and π = 〈σ, τ〉. Define ρ = στ. Then
ρ is a generator of π of order 2p, where p is an odd prime number.
Let Φp(T) ∈ Z[T] be the p-th cyclotomic polynomial. Note that Φp(σ)(ui) =
Φp(σ)(wi) = 0. It follows that Φp(σ) · M = 0.
Since Φp(σ^2) = Σ_{0≤i≤p−1} σ^{2i} = Σ_{0≤i≤p−1} σ^i = Φp(σ) on M (σ has order p on
M and p is odd), we know that Φp(σ^2) · M = 0. From ρ^2 = σ^2, we find that
Φp(ρ^2) · M = 0. It is well known that Φp(T^2) = Φp(T) Φ2p(T) (see [Ka1, Theorem 1.1]
for example). We conclude that Φp(ρ) Φ2p(ρ) · M = 0. In other words, we may regard
M as a module over Λ = Z[π]/〈Φp(ρ) Φ2p(ρ)〉.
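The cyclotomic identity invoked above is easy to check mechanically; the following snippet (an illustration, independent of the proof) verifies Φp(T^2) = Φp(T)Φ2p(T) for the first few odd primes:

```python
from sympy import symbols, expand, cyclotomic_poly

T = symbols('T')
for p in (3, 5, 7, 11):
    lhs = cyclotomic_poly(p, T).subs(T, T**2)          # Phi_p(T^2)
    rhs = cyclotomic_poly(p, T) * cyclotomic_poly(2 * p, T)
    assert expand(lhs - rhs) == 0                      # identity holds
```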
Clearly Λ is isomorphic to Z[π1] ⊗_Z Z[π2]/Φp(σ) where π1 = 〈τ〉, π2 = 〈σ〉.
It remains to show that M is isomorphic to Λ as a Λ-module.
It is not difficult to verify that Φ2p(ρ)(u1) = f(ρ)(w1) where f(T) = Φ2p(T) − T^{p−1}.
Define v = u1 − w1. We find that Φ2p(ρ)(v) = −w0.
Since M is a Λ-module generated by u1 and w0, it follows that, as a Λ-module,
M = 〈u1, w0〉 = 〈u1 − σ(w0), w0〉 = 〈u1 − w1, w0〉 = 〈v, w0〉 = 〈v, w0 + Φ2p(ρ)(v)〉 =
〈v, w0 − w0〉 = 〈v〉, i.e. M is a cyclic Λ-module generated by v. Thus we get an
epimorphism Λ → M, which is an isomorphism by counting the Z-ranks of both sides.
Hence the result.
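The two identities used in this step can be confirmed by a direct matrix computation. The sketch below (illustrative code, not part of the proof) writes σ3, τ, and ρ = σ3τ as integer matrices in the basis u1, . . . , up−1, w1, . . . , wp−1 described above, then checks for small odd p that Φ2p(ρ)(u1) = f(ρ)(w1) with f(T) = Φ2p(T) − T^{p−1}, and that Φ2p(ρ)(v) = −w0:

```python
from sympy import Matrix, eye, zeros, symbols, Poly, cyclotomic_poly

T = symbols('T')

def check(p):
    n = p - 1                            # basis: u_1..u_{p-1}, w_1..w_{p-1}
    C = zeros(n, n)                      # companion matrix of Phi_p
    for i in range(n - 1):
        C[i + 1, i] = 1                  # u_i -> u_{i+1}
    for i in range(n):
        C[i, n - 1] = -1                 # u_{p-1} -> -u_1 - ... - u_{p-1}
    S = zeros(2 * n, 2 * n)              # sigma_3 acts block-diagonally
    S[:n, :n] = C
    S[n:, n:] = C
    Tau = eye(2 * n)                     # tau: w_i -> w_i; on the u_i:
    for i in range(n):
        Tau[i, i] = -1                   # u_i -> -u_i ...
        Tau[n + i, i] += 1               #        ... + w_i ...
        if i == 0:
            for j in range(n):           # ... - w_0, with w_0 = -(w_1+...+w_{p-1})
                Tau[n + j, 0] += 1
        else:
            Tau[n + i - 1, i] -= 1       # ... - w_{i-1}
    R = S * Tau                          # rho = sigma_3 tau (the matrices commute)
    def ev(f, A):                        # evaluate the polynomial f(T) at matrix A
        res = zeros(2 * n, 2 * n)
        for c in Poly(f, T).all_coeffs():
            res = res * A + c * eye(2 * n)
        return res
    Phi = ev(cyclotomic_poly(2 * p, T), R)
    u1 = Matrix([1] + [0] * (2 * n - 1))
    w1 = Matrix([0] * n + [1] + [0] * (n - 1))
    minus_w0 = Matrix([0] * n + [1] * n)             # -w_0 = w_1 + ... + w_{p-1}
    assert Phi * u1 == (Phi - R**(p - 1)) * w1       # Phi_2p(rho)(u1) = f(rho)(w1)
    assert Phi * (u1 - w1) == minus_w0               # Phi_2p(rho)(v) = -w_0

for p in (3, 5):
    check(p)
```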
Step 8. By Step 7, M ≃ Z[π1] ⊗_Z Z[π2]/Φp(σ3) where π1 = 〈τ〉, π2 = 〈σ3〉.
Thus we may choose a Z-basis Y1, Y2, . . . , Yp−1, Z1, . . . , Zp−1 of M such that

σ3 : Y1 ↦ Y2 ↦ · · · ↦ Yp−1 ↦ −Y1 − · · · − Yp−1,
     Z1 ↦ Z2 ↦ · · · ↦ Zp−1 ↦ −Z1 − · · · − Zp−1,
τ : Yi ↔ Zi for 1 ≤ i ≤ p − 1.

Hence K(ζ)(M) = K(ζ)(Yi, Zi : 1 ≤ i ≤ p − 1). We emphasize that σ3 and τ act on
the Yi, Zi multiplicatively on the field K(ζ)(M), and that σ3 · ζ = ζ, τ · ζ = ζ^{−1}.
Step 9. In the field K(ζ)(M), define
s0 = 1 + Y1 + Y1Y2 + Y1Y2Y3 + · · ·+ Y1Y2 · · ·Yp−1,
t0 = 1 + Z1 + Z1Z2 + · · ·+ Z1Z2 · · ·Zp−1,
s1 = (1/s0) − (1/p), s2 = (Y1/s0) − (1/p), . . . , sp−1 = (Y1Y2 · · · Yp−2/s0) − (1/p),
t1 = (1/t0) − (1/p), t2 = (Z1/t0) − (1/p), . . . , tp−1 = (Z1Z2 · · · Zp−2/t0) − (1/p).
It is easy to verify that K(ζ)(M) = K(ζ)(si, ti : 1 ≤ i ≤ p− 1) and
σ3 : s1 ↦ s2 ↦ · · · ↦ sp−1 ↦ −s1 − s2 − · · · − sp−1,
     t1 ↦ t2 ↦ · · · ↦ tp−1 ↦ −t1 − t2 − · · · − tp−1,
τ : si ↔ ti.
Step 10. Define ri = si + ti for 1 ≤ i ≤ p − 1. Then K(ζ)(si, ti : 1 ≤ i ≤ p− 1) =
K(ζ)(si, ri : 1 ≤ i ≤ p− 1). Note that
σ3 : r1 ↦ r2 ↦ · · · ↦ rp−1 ↦ −r1 − r2 − · · · − rp−1,
τ : ri ↦ ri, si ↦ −si + ri.
Apply Theorem 3.1. We find that K(ζ)(si, ri : 1 ≤ i ≤ p − 1) = K(ζ)(ri : 1 ≤ i ≤
p − 1)(A1, . . . , Ap−1) for some A1, . . . , Ap−1 with σ3(Ai) = τ(Ai) = Ai for 1 ≤ i ≤ p − 1.
Thus K(ζ)(ri, si : 1 ≤ i ≤ p − 1)^{〈τ〉} = K(ζ)(ri : 1 ≤ i ≤ p − 1)^{〈τ〉}(A1, . . . , Ap−1) =
K(r1, . . . , rp−1, A1, . . . , Ap−1).
It remains to find K(r1, . . . , rp−1)^{〈σ3〉}.
Step 11. Write π2 = 〈σ3〉. The π2-fields K(r1, . . . , rp−1, A1) and K(B0, B1, . . . , Bp−1) are π2-isomorphic where σ3 : B0 7→ B1 7→ · · · 7→ Bp−1 7→ B0. For, we may define B = B0 +B1 + · · ·+Bp−1 and Ci = Bi − (B/p) for 0 ≤ i ≤ p− 1.
In other words, K(r1, . . . , rp−1, A1, . . . , Ap−1)^〈σ3〉 = K(B0, B1, . . . , Bp−1)^〈σ3〉(A2, . . . , Ap−1) = K(Zp)(A2, . . . , Ap−1).
By Lemma 3.5, K(Zp) = K(ζ)(N)^{π1} where ζ = ζp, π1 = Gal(K(ζ)/K) = 〈τ〉 with τ · ζ = ζ^{−1}, and N is some π1-lattice.
By Reiner’s Theorem [Re], the π1-lattice N is a direct sum of lattices of three types:
(1) τ : z 7→ −z, (2) τ : z 7→ z, (3) τ : z1 ↔ z2. Thus we find K(ζ)(N)^〈τ〉 = K(ζ)(z1, . . . , za, z′1, . . . , z′b, z″1, w″1, . . . , z″c, w″c)^〈τ〉 where τ : z1 7→ 1/z1, . . ., za 7→ 1/za, z′1 7→ z′1, . . ., z′b 7→ z′b, z″1 ↔ w″1, . . ., z″c ↔ w″c.
By Theorem 3.1 we may “neglect” the roles of z′1, . . . , z′b, z″1, w″1, . . . , z″c, w″c. We may
linearize the action on z1, . . . , za by defining wi = 1/(1 + zi) when 1 ≤ i ≤ a. Then
τ : wi 7→ −wi + 1. Thus we may “neglect” the roles of wi by Theorem 3.1 again. In
conclusion, K(ζ)(z1, . . . , za, z′1, . . . , z′b, z″1, w″1, . . . , z″c, w″c)^〈τ〉 is rational over K. □
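The linearization used at the end of the proof (w = 1/(1 + z) for the multiplicative action τ : z 7→ 1/z) can be confirmed by a one-line symbolic computation; a minimal sketch, independent of the proof itself:

```python
import sympy as sp

z = sp.symbols('z')
w = 1/(1 + z)

# tau acts multiplicatively by z -> 1/z; on w the induced action is affine
tau_w = sp.simplify(w.subs(z, 1/z))
assert sp.simplify(tau_w - (1 - w)) == 0   # tau : w -> -w + 1
```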
Case 2. G = 〈σ1, σ2, σ3, σ4 : σ^p_i = 1, σ1, σ2 ∈ Z(G), σ^{−1}_4σ3σ4 = σ1σ3〉.
The proof is very similar to Case 1.
Step 1. For i ∈ Zp, define x0,i, x1,i, y0,i, y1,i ∈ ⊕_{g∈G} K(ζ) · x(g) by
x0,i = Σ_{j,k,l∈Zp} ζ^{−j−k−l} x(σ^i_4σ^j_1σ^k_2σ^l_3),
x1,i = Σ_{j,k,l∈Zp} ζ^{j+k+l} x(σ^i_4σ^j_1σ^k_2σ^l_3),
y0,i = Σ_{j,k,l∈Zp} ζ^{−j−k+l} x(σ^i_4σ^j_1σ^k_2σ^l_3),
y1,i = Σ_{j,k,l∈Zp} ζ^{j+k−l} x(σ^i_4σ^j_1σ^k_2σ^l_3).
The action of 〈G, τ〉 is given by
σ1 : x0,i 7→ ζx0,i, x1,i 7→ ζ^{−1}x1,i, y0,i 7→ ζy0,i, y1,i 7→ ζ^{−1}y1,i,
σ2 : x0,i 7→ ζx0,i, x1,i 7→ ζ^{−1}x1,i, y0,i 7→ ζ^{−1}y0,i, y1,i 7→ ζy1,i,
σ3 : x0,i 7→ ζ^{i+1}x0,i, x1,i 7→ ζ^{−i−1}x1,i, y0,i 7→ ζ^{i+1}y0,i, y1,i 7→ ζ^{−i−1}y1,i,
σ4 : x0,i 7→ x0,i+1, x1,i 7→ x1,i+1, y0,i 7→ y0,i+1, y1,i 7→ y1,i+1,
τ : x0,i ↔ x1,i, y0,i ↔ y1,i.
Note that x0,i, x1,i, y0,i, y1,i where i ∈ Zp are linearly independent vectors in ⊕_{g∈G} K(ζ) · x(g). Apply Theorem 3.1. It suffices to consider the rationality problem of K(ζ)(x0,i, x1,i, y0,i, y1,i : i ∈ Zp)^〈G,τ〉.
Step 2. Define x0 = x^p_{0,0}, y0 = x0,0x1,0, X0 = y0,0(x0,0)^{−1}, Y0 = y1,0(x1,0)^{−1}; for
1 ≤ i ≤ p− 1, define xi = x0,i(x0,i−1)^{−1}, yi = x1,i(x1,i−1)^{−1}, Xi = y0,i(y0,i−1)^{−1}, Yi = y1,i(y1,i−1)^{−1}. Then K(ζ)(x0,i, x1,i, y0,i, y1,i : i ∈ Zp)^〈σ1〉 = K(ζ)(xi, yi, Xi, Yi : i ∈ Zp)
and the actions are given by
σ2 : x0 7→ x0, y0 7→ y0, X0 7→ ζ^{−2}X0, Y0 7→ ζ^{2}Y0.
All the other generators are fixed by σ2;
σ3 : x0 7→ x0, y0 7→ y0, X0 7→ X0, Y0 7→ Y0,
xi 7→ ζxi, yi 7→ ζ^{−1}yi, Xi 7→ ζXi, Yi 7→ ζ^{−1}Yi,
σ4 : x0 7→ x0x^p_1, y0 7→ y0y1x1, X0 7→ X0X1x^{−1}_1, Y0 7→ Y0Y1y^{−1}_1,
x1 7→ x2 7→ x3 7→ · · · 7→ xp−1 7→ (x1x2 · · ·xp−1)^{−1}.
Similarly for y1, y2, . . . , yp−1, X1, . . . , Xp−1, Y1, . . . , Yp−1;
τ : x0 7→ y^p_0x^{−1}_0, y0 7→ y0, X0 7→ Y0 7→ X0,
xi ↔ yi, Xi ↔ Yi for 1 ≤ i ≤ p− 1.
Step 3. Define x̃ = x0y^{−(p−1)/2}_0, ỹ = x^{−1}_0y^{(p+1)/2}_0.
Since x̃, ỹ are fixed by both σ2 and σ3, while σ4 : x̃ 7→ αx̃, ỹ 7→ βỹ, τ : x̃ ↔ ỹ
where α, β ∈ K(x1, y1, X1, Y1)\{0}, we may apply Theorem 3.1. Thus the roles of x̃, ỹ
may be “neglected”. It suffices to consider whether K(ζ)(X0, Y0, xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1)^〈σ2,σ3,σ4,τ〉 is rational over K.
Step 4. Define X̃ = X^p_0, Ỹ = X0Y0. Then K(ζ)(X0, Y0, xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1)^〈σ2〉 = K(ζ)(X̃, Ỹ , xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1). Moreover, the actions on X̃ and Ỹ are given by
σ3 : X̃ 7→ X̃, Ỹ 7→ Ỹ ,
σ4 : X̃ 7→ X̃X^p_1x^{−p}_1, Ỹ 7→ Ỹ X1Y1x^{−1}_1y^{−1}_1,
τ : X̃ 7→ X̃^{−1}Ỹ^p, Ỹ 7→ Ỹ .
Define X′ = X̃Ỹ^{−(p−1)/2}, Y′ = X̃^{−1}Ỹ^{(p+1)/2}. As in Step 3, we may apply Theorem 3.1 and “neglect” the roles of X′ and Y′. It remains to make sure whether K(ζ)(xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1)^〈σ3,σ4,τ〉 is rational.
Define u0 = x^p_1, v0 = x1y1; for 1 ≤ i ≤ p−2, define ui = xi+1x^{−1}_i, vi = yi+1y^{−1}_i; and for 1 ≤ i ≤ p−1, define Ui = Xix^{−1}_i, Vi = Yiy^{−1}_i. It follows that K(ζ)(xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1)^〈σ3〉 = K(ζ)(u0, v0, Up−1, Vp−1, ui, vi, Ui, Vi : 1 ≤ i ≤ p− 2). Note that the actions of σ4 and τ are given by
σ4 : u0 7→ u0u^p_1, v0 7→ v0v1u1, u1 7→ u2 7→ · · · 7→ up−2 7→ (u0u^{p−1}_1u^{p−2}_2 · · ·u^2_{p−2})^{−1},
v1 7→ v2 7→ · · · 7→ vp−2 7→ u0(v^p_0v^{p−1}_1 · · · v^2_{p−2})^{−1},
U1 7→ U2 7→ · · · 7→ Up−1 7→ (U1U2 · · ·Up−1)^{−1},
V1 7→ V2 7→ · · · 7→ Vp−1 7→ (V1V2 · · ·Vp−1)^{−1},
τ : u0 7→ v^p_0u^{−1}_0, v0 7→ v0, ui ↔ vi for 1 ≤ i ≤ p− 2,
Ui ↔ Vi for 1 ≤ i ≤ p− 1.
Note that the actions of σ4 and τ on U1, U2, . . . , Up−1, V1, . . . , Vp−1 may be linearized
by the same method as in Step 9 of Case 1. Hence we may apply Theorem 3.1 and
“neglect” the roles of Ui, Vi for 1 ≤ i ≤ p− 1.
In conclusion, all we need is to prove that K(ζ)(ui, vi : 0 ≤ i ≤ p−2)
〈σ4,τ〉 is rational
over K.
Compare the present situation with that of Step 5 of Case 1. We have the same
generators and the same actions (and even the same notation). Thus we finish the
proof. □
Case 3. G = 〈σ1, σ2, σ3, σ4 : σ^p_i = 1, σ1 ∈ Z(G), σ2σ3 = σ3σ2, σ^{−1}_4σ2σ4 = σ1σ2, σ^{−1}_4σ3σ4 = σ2σ3〉.
Step 1. For i ∈ Zp, define x0,i, x1,i, y0,i, y1,i ∈ ⊕_{g∈G} K(ζ) · x(g) by
x0,i = Σ_{j,k,l∈Zp} ζ^{−j−k−l} x(σ^i_4σ^j_1σ^k_2σ^l_3),
x1,i = Σ_{j,k,l∈Zp} ζ^{j+k+l} x(σ^i_4σ^j_1σ^k_2σ^l_3),
y0,i = Σ_{j,k,l∈Zp} ζ^{−j−k+l} x(σ^i_4σ^j_1σ^k_2σ^l_3),
y1,i = Σ_{j,k,l∈Zp} ζ^{j+k−l} x(σ^i_4σ^j_1σ^k_2σ^l_3).
The action of 〈G, τ〉 is given by
σ1 : x0,i 7→ ζx0,i, x1,i 7→ ζ^{−1}x1,i, y0,i 7→ ζ^{−1}y0,i, y1,i 7→ ζy1,i,
σ2 : x0,i 7→ ζ^{i+1}x0,i, x1,i 7→ ζ^{−i−1}x1,i, y0,i 7→ ζ^{−i+1}y0,i, y1,i 7→ ζ^{i−1}y1,i,
σ3 : x0,i 7→ ζ^{i+1}x0,i, x1,i 7→ ζ^{−i−1}x1,i, y0,i 7→ ζ^{i+1}y0,i, y1,i 7→ ζ^{−i−1}y1,i,
σ4 : x0,i 7→ x0,i+1, x1,i 7→ x1,i+1, y0,i 7→ y0,i+1, y1,i 7→ y1,i+1,
τ : x0,i ↔ x1,i, y0,i ↔ y1,i.
Checking the restriction of 〈σ1, σ2, σ3〉 as in Step 1 of Case 2, we find that these
vectors x0,i, x1,i, y0,i, y1,i are linearly independent except possibly for the case x0,p−2
and y1,0, and the case x1,p−2 and y0,0. But it is easy to see that these vectors are linearly
independent, because their “supports” are disjoint. Apply Theorem 3.1. It suffices to
consider K(ζ)(x0,i, x1,i, y0,i, y1,i : i ∈ Zp)^〈G,τ〉.
Step 2. Define x0 = x^p_{0,0}, y0 = x0,0x1,0, X0 = x0,0 · y0,0, Y0 = x1,0y1,0; and for
1 ≤ i ≤ p− 1, define xi = x0,i(x0,i−1)^{−1}, yi = x1,i(x1,i−1)^{−1}, Xi = y0,i(y0,i−1)^{−1}, Yi = y1,i(y1,i−1)^{−1}. Then K(ζ)(x0,i, x1,i, y0,i, y1,i : i ∈ Zp)^〈σ1〉 = K(ζ)(xi, yi, Xi, Yi : i ∈ Zp)
and the actions are given by
σ2 : x0 7→ x0, y0 7→ y0, X0 7→ ζ^{2}X0, Y0 7→ ζ^{−2}Y0,
xi 7→ ζxi, yi 7→ ζ^{−1}yi, Xi 7→ ζ^{−1}Xi, Yi 7→ ζYi,
σ3 : x0 7→ x0, y0 7→ y0, X0 7→ ζ^{2}X0, Y0 7→ ζ^{−2}Y0,
xi 7→ ζxi, yi 7→ ζ^{−1}yi, Xi 7→ ζXi, Yi 7→ ζ^{−1}Yi,
σ4 : x0 7→ x0x^p_1, y0 7→ y0y1x1, X0 7→ X0X1x1, Y0 7→ Y0Y1y1,
x1 7→ x2 7→ x3 7→ · · · 7→ xp−1 7→ (x1x2 · · ·xp−1)^{−1}.
Similarly for y1, y2, . . . , yp−1, X1, . . . , Xp−1, Y1, . . . , Yp−1.
τ : x0 7→ y^p_0x^{−1}_0, y0 7→ y0, X0 7→ Y0 7→ X0,
xi ↔ yi, Xi ↔ Yi for 1 ≤ i ≤ p− 1.
Step 3. Define x̃ = x0y^{−(p−1)/2}_0, ỹ = x^{−1}_0y^{(p+1)/2}_0. Apply Theorem 3.1 again. It
suffices to consider K(ζ)(X0, Y0, xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1)^〈σ2,σ3,σ4,τ〉.
We may apply Theorem 3.1 again to “neglect” X0 and Y0. Thus it suffices to
consider K(ζ)(xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1)^〈σ2,σ3,σ4,τ〉.
Step 4. Define u0 = x^p_1, v0 = x1y1, U0 = x1X1, V0 = y1Y1; and for 1 ≤ i ≤ p−2, define ui = x^{−1}_ixi+1, vi = y^{−1}_iyi+1, Ui = X^{−1}_iXi+1, Vi = Y^{−1}_iYi+1. Then K(ζ)(xi, yi, Xi, Yi : 1 ≤ i ≤ p− 1)^〈σ2〉 = K(ζ)(ui, vi, Ui, Vi : 0 ≤ i ≤ p− 2). Note that the actions of σ3, σ4,
τ are given by
σ3 : u0 7→ u0, v0 7→ v0, U0 7→ ζ^{2}U0, V0 7→ ζ^{−2}V0.
All the other generators are fixed by σ3.
σ4 : u0 7→ u0u^p_1, v0 7→ v0v1u1, U0 7→ U0U1u1, V0 7→ V0V1v1,
u1 7→ u2 7→ · · · 7→ up−2 7→ (u0u^{p−1}_1u^{p−2}_2 · · ·u^2_{p−2})^{−1},
v1 7→ v2 7→ · · · 7→ vp−2 7→ u0(v^p_0v^{p−1}_1 · · · v^2_{p−2})^{−1},
U1 7→ U2 7→ · · · 7→ Up−2 7→ u0(U^p_0U^{p−1}_1 · · ·U^2_{p−2})^{−1},
V1 7→ V2 7→ · · · 7→ Vp−2 7→ u^{−1}_0v^p_0(V^p_0V^{p−1}_1 · · ·V^2_{p−2})^{−1},
τ : u0 7→ v^p_0u^{−1}_0, v0 7→ v0, U0 7→ V0 7→ U0,
ui ↔ vi, Ui ↔ Vi for 1 ≤ i ≤ p− 2.
Step 5. Define R0 = U^p_0, S0 = U0V0; and for 1 ≤ i ≤ p− 2, define Ri = Ui, Si = Vi.
Then K(ζ)(ui, vi, Ui, Vi : 0 ≤ i ≤ p− 2)^〈σ3〉 = K(ζ)(ui, vi, Ri, Si : 0 ≤ i ≤ p− 2). We
will write the actions of σ4 and τ on Ri, Si as follows.
σ4 : R0 7→ R0R^p_1u^p_1, S0 7→ S0S1R1u1v1,
R1 7→ R2 7→ · · · 7→ Rp−2 7→ u0(R0R^{p−1}_1R^{p−2}_2 · · ·R^2_{p−2})^{−1},
S1 7→ S2 7→ · · · 7→ Sp−2 7→ u^{−1}_0v^p_0R0(S^p_0S^{p−1}_1 · · ·S^2_{p−2})^{−1},
τ : R0 7→ S^p_0R^{−1}_0, S0 7→ S0, Ri ↔ Si for 1 ≤ i ≤ p− 2.
Step 6. Imitate the change of variables in Step 6 of Case 1. We define up−1 = (u0u^{p−1}_1u^{p−2}_2 · · ·u^2_{p−2})^{−1}, Rp−1 = u0(R0R^{p−1}_1R^{p−2}_2 · · ·R^2_{p−2})^{−1}; and for 1 ≤ i ≤ p− 2,
define wi = v0v1 · · · viu1u2 · · ·ui, Ti = S0S1 · · ·SiR1R2 · · ·Ri; define wp−1 = (v^{p−1}_0v^{p−2}_1 · · · vp−2u^{p−2}_1u^{p−3}_2 · · ·up−2)^{−1}, Tp−1 = u1v1v^p_0(S^{p−1}_0S^{p−2}_1 · · ·Sp−2R^{p−2}_1R^{p−3}_2 · · ·Rp−2)^{−1}.
We find that K(ui, vi, Ri, Si : 0 ≤ i ≤ p− 2) = K(ui, wi, Ri, Ti : 1 ≤ i ≤ p− 1) and
σ4 : u1 7→ u2 7→ · · · 7→ up−1 7→ (u1u2 · · ·up−1)^{−1},
w1 7→ w2 7→ · · · 7→ wp−1 7→ (w1w2 · · ·wp−1)^{−1},
R1 7→ R2 7→ · · · 7→ Rp−1 7→ (R1R2 · · ·Rp−1)^{−1},
T1 7→ T2 7→ · · · 7→ Tp−1 7→ (T1T2 · · ·Tp−1)^{−1},
τ : wi 7→ wi, Ti 7→ Ti, ui 7→ wi(uiwi−1)^{−1}, Ri 7→ Ti(RiTi−1)^{−1},
where we write w0 = v0, T0 = S0 for convenience.
Step 7. The multiplicative action in Step 6 can be formulated as follows.
Let π = 〈τ, σ4〉 and define a π-lattice as in Step 7 of Case 1 (but σ3 should be
replaced by σ4 in the present situation). Then K(ζ)(ui, wi, Ri, Ti : 1 ≤ i ≤ p− 1)^π = K(ζ)(M ⊕M)^π, where M is the same lattice as in Step 7 of Case 1.
The structure ofM has been determined in Step 7 of Case 1. Thus M⊕M ≃ Λ⊕Λ
where Λ ≃ Z[π1] ⊗Z Z[π2]/Φp(σ4) where π1 = 〈τ〉 and π2 = 〈σ4〉. It follows that we
can find elements Y1, . . . , Yp−1, Z1, . . . , Zp−1, W1, . . . ,Wp−1, Q1, . . . , Qp−1 in the field
K(ζ)(ui, wi, Ri, Ti : 1 ≤ i ≤ p − 1) so that K(ζ)(ui, wi, Ri, Ti : 1 ≤ i ≤ p − 1) =
K(ζ)(Y1, . . . , Yp−1, Z1, . . . , Zp−1,W1, . . . ,Wp−1, Q1, . . . , Qp−1) and the actions of σ4 and
τ are given by
σ4 : Y1 7→ · · · 7→ Yp−1 7→ (Y1Y2 · · ·Yp−1)^{−1}.
Similarly for Z1, . . . , Zp−1, W1, . . . ,Wp−1, Q1, . . . , Qp−1;
τ : Yi ↔ Zi, Wi ↔ Qi.
Step 8. We can linearize the actions of σ4 and τ by the same method in Step 9 of
Case 1. Apply Theorem 3.1. We may “neglect” the roles of Wi, Qi. Thus the present
situation is the same as in Step 8 of Case 1. □
References
[CHK] H. Chu, S. J. Hu and M. Kang, Noether’s problem for dihedral 2-groups, Com-
ment. Math. Helv. 79 (2004) 147–159.
[CK] H. Chu and M. Kang, Rationality of p-group actions, J. Algebra 237 (2001)
673–690.
[CTS] J. L. Colliot-Thélène and J. J. Sansuc, La R-equivalence sur les tores, Ann. Sci.
École Norm. Sup. 101 (1977) 175–230.
[DM] F. DeMeyer and T. McKenzie, On generic polynomials, J. Algebra 261 (2003)
327–333.
[GMS] S. Garibaldi, A. Merkurjev and J.-P. Serre, Cohomological invariants in Galois cohomology, AMS University Lecture Series vol. 28, Amer. Math. Soc.,
Providence 2003.
[Ha] M. Hajja, Rational invariants of meta-abelian groups of linear automorphisms,
J. Algebra 80 (1983) 295–305.
[HK1] M. Hajja and M. Kang, Three-dimensional purely monomial group actions, J.
Algebra 170 (1994) 805–860.
[HK2] M. Hajja and M. Kang, Some actions of symmetric groups, J. Algebra 177
(1995) 511–535.
[HuK] S. J. Hu and M. Kang, Noether’s problem for some p-groups, preprint.
[ILF] V. V. Ishkhanov, B. B. Luré and D. K. Faddeev, The embedding problem in Ga-
lois theory, Transl. Math. Monographs vol. 165, Amer. Math. Soc., Providence,
1997.
[Ka1] M. Kang, A note on cyclotomic polynomials, Rocky Mountain J. Math. 29
(1999) 893–907.
[Ka2] M. Kang, Introduction to Noether’s problem for dihedral groups, Algebra Colloq.
11 (2004) 71–78.
[Ka3] M. Kang, Noether’s problem for dihedral 2-groups II, to appear in “Pacific J.
Math.”.
[Ka4] M. Kang, Noether’s problem for metacyclic p-groups, to appear in “Advances
in Math.”.
[Ke] I. Kersten, Noether’s problem and normalization, Jber. Deutsch. Math.-Verein.
100 (1998) 3–22.
[Le] H. W. Lenstra, Rational functions invariant under a finite abelian group, Invent.
Math. 25 (1974) 299–325.
[Lo] M. Lorenz, Multiplicative invariant theory, Encyclo. Math. Sciences vol. 135,
Springer-Verlag, Berlin, 2005.
[Re] I. Reiner, Integral representations of cyclic groups of prime order, Proc. Amer.
Math. Soc. 8 (1957) 142–146.
[Sa1] D. J. Saltman, Generic Galois extensions and problems in field theory, Adv.
Math. 43 (1982) 250–283.
[Sa2] D. J. Saltman, Noether’s problem over an algebraically closed field, Invent.
Math. 77 (1984) 71–84.
[Sa3] D. J. Saltman, Retract rational fields and cyclic Galois extensions, Israel J.
Math. 47 (1984) 165–215.
[Sa4] D. J. Saltman, Galois groups of order p3, Comm. Algebra 15 (1987) 1365–1373.
[Sa5] D. J. Saltman, Twisted multiplicative field invariants, Noether’s problem, and
Galois extensions, J. Algebra 131 (1990) 555–558.
[Se] J. -P. Serre, Collected papers, vol. IV 1985–1998, Springer-Verlag, New York
2000.
[Sw1] R. G. Swan, Invariant rational functions and a problem of Steenrod, Invent.
Math. 7 (1969) 148–158.
[Sw2] R. G. Swan, Noether’s problem in Galois theory, in “Emmy Noether in Bryn
Mawr”, edited by B. Srinivasan and J. Sally, Springer-Verlag, 1983, New York.
[Sw3] R. G. Swan, The flabby class group of a finite cyclic group,
http://www.math.uchicago.edu/swan
[Vo] V. E. Voskresenskii, Algebraic groups and their birational invariants, Transl.
Math. Monographs vol. 179, Amer. Math. Soc., Providence, 1998.
0704.1801 | A calculation of the shear viscosity in SU(3) gluodynamics | MIT-CTP 3830
A calculation of the shear viscosity in SU(3) gluodynamics
Harvey B. Meyer∗
Center for Theoretical Physics
Massachusetts Institute of Technology
Cambridge, MA 02139, U.S.A.
(Dated: October 22, 2018)
We perform a lattice Monte-Carlo calculation of the two-point functions of the energy-momentum
tensor at finite temperature in the SU(3) gauge theory. Unprecedented precision is obtained thanks
to a multi-level algorithm. The lattice operators are renormalized non-perturbatively and the clas-
sical discretization errors affecting the correlators are corrected for. A robust upper bound for the
shear viscosity to entropy density ratio is derived, η/s < 1.0, and our best estimate is η/s = 0.134(33)
at T = 1.65Tc under the assumption of smoothness of the spectral function in the low-frequency
region.
PACS numbers: 12.38.Gc, 12.38.Mh, 25.75.-q
Introduction.— Models treating the system produced in
heavy ion collisions at RHIC as an ideal fluid have had
significant success in describing the observed flow phe-
nomena [1, 2]. Subsequently the leading corrections due
to a finite shear viscosity were computed [3], in parti-
cular the flattening of the elliptic flow coefficient v2(pT)
above 1GeV. It is therefore important to compute the
QCD shear and bulk viscosities from first principles to
establish this description more firmly. Small transport
coefficients are a signature of strong interactions, which
lead to efficient transmission of momentum in the system.
Strong interactions in turn require non-perturbative com-
putational techniques. Several attempts have been made
to compute these observables on the lattice in the SU(3)
gauge theory [4, 5]. The underlying basis of these calcu-
lations are the Kubo formulas, which relate each trans-
port coefficient to a spectral function ρ(ω) at vanishing
frequency. Even on current computers, these calcula-
tions are highly non-trivial, due to the fall-off of the relevant correlators in Euclidean time (as x^{−5}_0 at short distances), implying a poor signal-to-noise ratio in a stan-
dard Monte-Carlo calculation. The second difficulty is
to solve the ill-posed inverse problem for ρ(ω) given the
Euclidean correlator at a finite set of points. Mathemati-
cally speaking, the uncertainty on a transport coefficient
χ is infinite for any finite statistical accuracy, because
adding ǫωδ(ω) to ρ(ω) merely corresponds to adding a
constant to the Euclidean correlator of order ǫ, while ren-
dering χ infinite. Therefore smoothness assumptions on
ρ(ω) have to be made, which are reasonable far from the
one-particle energy eigenstates, and can be proved in the
hard-thermal-loop framework [6].
In this Letter we present a new calculation which dra-
matically improves on the statistical accuracy of the Eu-
clidean correlator relevant to the shear viscosity through
the use of a two-level algorithm [16]. This allows us to
derive a robust upper bound on the viscosity and a use-
∗Electronic address: [email protected]
ful estimate of the ratio η/s, which has acquired a special
significance since its value 1/4π in a class of strongly cou-
pled supersymmetric gauge theories [7] was conjectured
to be an absolute lower bound for all substances [8].
Methodology.— In the continuum, the energy-momentum tensor Tµν(x) = F^a_{µα}F^a_{να} − ¼δµνF^a_{ρσ}F^a_{ρσ}, be-
ing a set of Noether currents associated with translations
in space and time, does not renormalize. With L0 = 1/T
the inverse temperature, we consider the Euclidean two-
point function (0 < x0 < L0)
C(x0) = L^5_0 ∫ d³x 〈T12(0)T12(x0,x)〉. (1)
The tree-level expression is Ct.l.(x0) ∝ dA[f(τ) − π⁴/30],
with τ = 1 − 2x0/L0, dA = 8 the number of gluons, and
f(z) = ∫^∞_0 ds s⁴ cosh²(zs)/ sinh²s. The correlator C(x0)
is thus dimensionless and, in a conformal field theory,
would be a function of Tx0 only.
The spectral function is defined by
C(x0) = L^5_0 ∫^∞_0 ρ(ω) cosh ω(½L0 − x0)/ sinh(½ωL0) dω. (2)
The shear viscosity is given by [4, 13]
η(T ) = π lim_{ω→0} ρ(ω)/ω. (3)
Important properties of ρ are its positivity, ρ(ω)/ω ≥ 0
and parity, ρ(−ω) = −ρ(ω). The spectral function that
reproduces Ct.l.(x0) is
ρt.l.(ω) = At.l. ω⁴/ tanh(¼ωL0) + B L^{−4}_0 ω δ(ω), (4)
with At.l. ∝ dA/(4π)² and B ∝ dA. (5)
While the ω4 term is expected to survive in the inter-
acting theory with only logarithmic corrections, the δ-
function at the origin corresponds to the fact that gluons
are asymptotic states in the free theory and implies an
infinite viscosity.
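The ill-posedness described above is easy to reproduce numerically. The sketch below is ours, not the paper's code; with L0 = 1 and an arbitrary trial spectral function, it evaluates the integral of Eq. 2 for two spectral functions that differ strongly at low frequency (and hence in the viscosity they would imply), yet yield nearly identical Euclidean correlators:

```python
import numpy as np

L0 = 1.0                                    # inverse temperature (arbitrary units)
x0 = np.linspace(0.1, 0.5, 5) * L0          # Euclidean times
omega = np.linspace(1e-4, 80.0, 40000)
dw = omega[1] - omega[0]

def correlator(rho):
    # C(x0) = L0^5 * int dw rho(w) cosh(w(L0/2 - x0)) / sinh(w L0/2)   (Eq. 2)
    K = np.cosh(omega[None, :] * (L0/2 - x0[:, None])) / np.sinh(omega[None, :] * L0/2)
    return L0**5 * (rho[None, :] * K).sum(axis=1) * dw

rho_a = omega**4 / np.tanh(omega * L0 / 4)               # tree-level-like shape
rho_b = rho_a + 0.01 * omega * np.exp(-(omega * L0)**2)  # extra low-frequency structure

# rho_b/w tends to 0.01 at w -> 0 while rho_a/w tends to 0, yet the two
# correlators agree to much better than a percent at every x0.
rel = np.max(np.abs(correlator(rho_a) - correlator(rho_b)) / correlator(rho_a))
print(rel)
```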
On the lattice, translations only form a discrete group,
so that a finite renormalization is necessary, Tµν(g0) =
Z(g0)T^{(bare)}_{µν}. We employ the Wilson action [9], Sg = (1/g0²) Σ_{x,µ≠ν} Tr {1 − Pµν(x)}, on an L0 · L³ hypertoroidal
lattice, and the following discretized expression of the
Euclidean energy:
T^{(bare)}_{00}(x) ≡ (1/(a⁴g0²)) [ Σ_{1≤k<l≤3} ReTrPkl(x) − Σ_k ReTrP0k(x) ].
One of the lattice sum rules [10] can be interpreted as a
non-perturbative renormalization condition for this par-
ticular discretization, from which we read off Z(g0) =
g20(cσ − cτ ). The definition of the anisotropy coeffi-
cients cσ,τ can be found in [12], where they are computed
non-perturbatively. With a precision of about 1%, a Padé
fit constrained by the one-loop result [11] yields
Z(g0) = (1− 1.0225g0² + 0.1305g0⁴)/(1− 0.8557g0²), (6/g0² ≥ 5.7). (6)
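For orientation, Eq. 6 is easy to evaluate at the two bare couplings used below (β = 6/g0² = 6.2 and 6.408); a quick sketch:

```python
def Z(g0_sq):
    # Pade parametrization of the renormalization factor, Eq. (6), valid for 6/g0^2 >= 5.7
    return (1 - 1.0225 * g0_sq + 0.1305 * g0_sq**2) / (1 - 0.8557 * g0_sq)

for beta in (6.2, 6.408):
    print(beta, Z(6.0 / beta))
```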
Numerical results.— We report results obtained on a
β ≡ 6/g20 = 6.2, 8 · 203 lattice and on a β = 6.408, 8 · 283
lattice. The first is thus at a temperature of 1.24Tc, the
second at T = 1.65Tc. We use the results for aTc obtained
in [14] and the non-perturbative lattice β-function of [15]
to determine this. We employ the two-level algorithm
described in [16]. The computing time invested into the
1.65Tc simulation is about 860 PC days. Following [4], we
discretize ½〈(T11 − T22)(T11 − T22)〉 instead of 〈T12T12〉
(the two are equal in the continuum) to write C(x0) =
〈Oη(0)Oη(x0)〉 + O(a²), where
Oη(x0) ≡ ½a³ Σ_x {T11 − T22}(g0, x) = (2Z(g0)/(a g0²)) Σ_x ReTr {P10 + P13 − P20 − P23}(x).
The three electric-electric, magnetic-magnetic and
electric-magnetic contributions to C(x0) are computed
separately and shown on Fig. 1. We apply the follow-
ing technique to remove the tree-level discretization er-
rors [17] separately to CBB, CEE and CEB . Firstly, x̄0
is defined such that Ct.l.cont(x̄0) = C
lat (x0). The improved
correlator is defined at a discrete set of points through
C(x̄0) = C(x0), and then augmented to a continuous
function via C(x̄
0 ) = α + βC
cont(x̄
0 ), i = 1, 2, where
0 and x̄
0 correspond to two adjacent measurements.
The resulting improved correlator, normalized by the
continuum tree-level result, is shown on Fig. 2. One ob-
serves that the deviations from the tree-level result are
surprisingly small, while deviations from conformality are
visible. The latter is not unexpected at these tempera-
tures, where p/T 4 is still strongly rising [18]. Finite-
volume effects on the T = 1.65Tc lattice are smaller than
FIG. 1: The correlators that contribute to C(x0) = ¼(CBB +
CEE + 2CEB). Filled symbols correspond to T = 1.65Tc,
open symbols to 1.24Tc. Error bars are smaller than the data
symbols.
FIG. 2: The tree-level improved correlator C(x0) normalized
to the tree-level continuum infinite-volume prediction. The
four points in each sequence are strongly correlated, but their
covariance matrix is non-singular.
one part in 103 at tree-level. Non-perturbatively, at the
same temperature with resolution L0/a = 6, increasing
L/a from 20 to 30 reduces C(L0/2) by a factor 0.922(73).
While not statistically compelling as it stands, the effect
deserves further investigation.
The entropy density is obtained from the relation
s = (ǫ + p)/T and the standard method to compute
ǫ + p ([12], Eq. 1.14). We find s/T³ = 4.72(3)(5) and
5.70(2)(6) respectively at T/Tc = 1.24 and 1.65 (the first
error is statistical and the second is the uncertainty on
Z(g0)). The Stefan-Boltzmann value is 32π²/45 in the
continuum and 1.0867 times that value [12] at L0/a = 8.
Unsatisfactory attempts to extract the viscosity.— In
order to compare with previous studies [4, 5], we fit C(x0)
with a Breit-Wigner ansatz
ρ(ω)/ω = F [ 1/(1 + b²(ω − ω0)²) + 1/(1 + b²(ω + ω0)²) ], (7)
although it clearly ignores asymptotic freedom, which im-
plies that ρ(ω) ∼ ω4 at ω ≫ T [6]. The result of a cor-
related fit at T = 1.65Tc using the points at Tx0 = 0.5,
0.35 and 0.275 is a³F = 0.78(4), (b/a)² = 240(30) and
aω0 = 2.36(4), and hence η/s|T=1.65Tc = 0.33(3). A
comparison of this to the results of Ref. [5] illustrates the
progress made in statistical accuracy.
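A fit of this kind can be sketched as follows (our illustration with synthetic data, not the analysis of the paper): the Breit-Wigner spectral function of Eq. 7 is pushed through the integral of Eq. 2, and the parameters are adjusted by least squares to reproduce a few correlator points.

```python
import numpy as np
from scipy.optimize import least_squares

L0 = 1.0
x0_pts = np.array([0.275, 0.35, 0.5]) * L0      # the three fitted points, in units of L0
omega = np.linspace(1e-4, 60.0, 12000)
dw = omega[1] - omega[0]
K = np.cosh(omega[None, :] * (L0/2 - x0_pts[:, None])) / np.sinh(omega[None, :] * L0/2)

def corr_from_bw(params):
    # Euclidean correlator generated by the Breit-Wigner ansatz of Eq. 7 via Eq. 2
    F, b, w0 = params
    rho = F * omega * (1/(1 + b**2*(omega - w0)**2) + 1/(1 + b**2*(omega + w0)**2))
    return L0**5 * (rho[None, :] * K).sum(axis=1) * dw

# synthetic "data": a correlator generated from known Breit-Wigner parameters
true_params = (0.8, 0.6, 2.4)
data = corr_from_bw(true_params)

fit = least_squares(lambda p: corr_from_bw(p) - data, x0=(1.0, 1.0, 2.0))
F, b, w0 = fit.x
# the Kubo formula then gives eta = pi * rho(w)/w at w = 0, i.e. 2*pi*F/(1 + b**2*w0**2)
```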
An ansatz motivated by the hard-thermal-loop frame-
work is [6]
ρ(ω)/ω = F/(1 + b²ω²) + θ(ω − ω1) Aω³/ tanh(ω/4T). (8)
It is capable of reproducing the tree-level prediction,
Eq. 4, and it allows for a thermal broadening of the delta
function at the origin. Fitting the T = 1.65Tc points
shown on Fig. 2, the χ² is minimized for b = 0 (effectively eliminating a free parameter), A/At.l. = 0.996(8),
ω1/T = 7.5(2) and η/s = 0.25(3), with χ²_min = 4.0. Thus
while the ansatz is hardly compatible with the data, it
shows that the data tightly constrains the coefficient A
to assume its tree-level value.
A bound on the viscosity.— The positivity property of
ρ(ω) allows us to derive an upper bound on the viscosity,
based on the following assumptions:
1. the contribution to the correlator from ω > Λ is
correctly predicted by the tree-level formula
2. the width of any potential peak in the region ω < T
is no less than O(T ).
The standard QCD sum rule practice is to use perturba-
tion theory from the energy lying midway between the
lightest state and the first excitation. With this in mind
we choose Λ = max(½[M2 +M2∗] ≈ 2.6GeV, 5T ), where
M2(∗) are the masses of the two lightest tensor glueballs.
Perturbation theory predicts a Breit-Wigner centered at
the origin of width Γ = 2γ [6], where γ ≈ αsNT is the
gluon damping rate. To derive the upper bound we conservatively assume that for ω < 2T , ρ(ω)/ω is a Breit-Wigner of width Γ = T centered at the origin. From
C(½L0) ≥ L^5_0 ∫ dω [ρBW(ω) + θ(ω − Λ)ρt.l.(ω)]/ sinh(½ωL0)
we obtain (with 90% statistical confidence level)
η/s < 0.96 (T = 1.65Tc), η/s < 1.08 (T = 1.24Tc). (9)
The spectral function.— As illustrated above, it is rather
difficult to find a functional form for ρ(ω) that is both
physically motivated and fits the data. In a more model-
independent approach, ρ(ω) is expanded in an orthogo-
nal set of functions, which grows as the lattice resolution
on the correlator increases, and becomes complete in the
FIG. 3: The result for ρ(ω). The meaning of the error bands
and the curves is described in the text. The area under them
equals C(L0/2) = 8.05(31) and 9.35(42) for 1.24Tc and 1.65Tc
respectively.
limit of L0/a → ∞. We proceed to determine the function ρ̄(ω) ≡ ρ(ω)/ tanh(½ωL0) by making the ansatz
ρ̄(ω) = m(ω) [1 + a(ω)], (10)
where m(ω) > 0 has the high-frequency behavior of
Eq. 4, and correspondingly define K̄(x0, ω) = cosh ω(x0 − ½L0)/ cosh(½ωL0). Suppose that m(ω) already is a
smooth approximate solution to ρ̄(ω); inserting (10) into
(Eq. 2), one requires that a(ω) = Σ_ℓ c_ℓ a_ℓ(ω), with
{aℓ} a basis of functions which is as sensitive as possible to the discrepancy between the lattice correlator and the correlator generated by m(ω). These are
the eigenfunctions of largest eigenvalue of the symmetric kernel G(ω, ω′) ≡ Σ_{x0} M(x0, ω)M(x0, ω′), where
M(x0, ω) ≡ K̄(x0, ω)m(ω). These functions satisfy
∫ dω uℓ(ω)uℓ′(ω) = δℓℓ′ and have an increasing number
of nodes as their eigenvalue decreases. Thus the more
data points available, the larger the basis and the finer
details of the spectral function one is able to determine.
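The basis construction can be made concrete on a discrete ω grid. In the sketch below (ours; the grid sizes and the model m(ω) are illustrative), the rows of M(x0, ω) = K̄(x0, ω)m(ω) are decomposed by SVD: the leading right singular vectors are exactly the orthonormal eigenfunctions of G(ω, ω′) with largest eigenvalues.

```python
import numpy as np

L0 = 1.0
x0_pts = np.array([0.275, 0.35, 0.425, 0.5]) * L0   # N = 4 correlator points
omega = np.linspace(1e-3, 40.0, 4000)               # the discretized N_omega grid

Kbar = np.cosh(omega[None, :] * (x0_pts[:, None] - L0/2)) / np.cosh(omega[None, :] * L0/2)
m = omega**4 / (np.tanh(omega * L0 / 4) * np.tanh(omega * L0 / 2))  # tree-level-like model
M = Kbar * m[None, :]                               # M(x0, omega): N x N_omega matrix

# G(w, w') = sum_x0 M(x0, w) M(x0, w'); its top eigenfunctions are the
# right singular vectors of M belonging to the largest singular values.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
basis = Vt                                          # rows: orthonormal functions a_l(omega)

assert np.allclose(basis @ basis.T, np.eye(len(x0_pts)), atol=1e-10)
assert np.all(s[:-1] >= s[1:])                      # singular values in decreasing order
```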
To determine the spectral function from N points of
the correlator, we proceed by first discretizing the ω variable into an Nω-vector. The final spectral function is
given by the last member ρ(N) of a sequence whose first
member is ρ(0) = m and whose general member ρ(n) reproduces n points (or linear combinations) of the lattice
correlator. For n ≥ 1, ρ(n) = ρ(n−1)[1 + Σ^n_{ℓ=1} c^{(n)}_ℓ a^{(n)}_ℓ(ω)],
and the functions a^{(n)}_ℓ(ω) are found by the SVD decomposition [19] of the Nω × n matrix M^{(n)t}, where M^{(n)}_{ij} ≡
K̄(x^{(i)}_0, ωj) ρ̄^{(n−1)}(ωj). The ‘model’ m(ω) is thus updated and agrees with ρ(ω) at the end of the procedure.
We first performed this procedure on coarser lattices
with L0/a = 6 at the same temperatures, starting from
m(ω) = At.l.ω⁴/(tanh(¼ωL0) tanh(½ωL0) tanh²(cωL0))
with ¼ ≤ c ≤ ½, and then recycled the output as seed for
the L0/a = 8 lattices. On the latter we used the N = 4
points shown on Fig. 2.
The next question to address is the uncertainty on
ρ(ω). It is important to realize that even in the absence
of statistical errors, a systematic uncertainty subsists due
to the finite number of basis functions we can afford to
describe ρ(ω) with. A reasonable measure of this uncer-
tainty is by how much ρ(ω) varies if one doubles the re-
solution on C(x0). This can be estimated by ‘generating’
new points by using the computed ρ(N)(ω). On the other
hand we perform a two-point interpolation in x0-space
(we chose the form (α + β(x0 − ½L0)²)/ sin⁵(πx0/L0)),
and take the difference between these and the generated
ones as their systematic uncertainty. In practice this dif-
ference is added in quadrature with the statistical uncer-
tainty. Next we repeat the procedure to find ρ described
above with N → 2N : if we use as seed ρ(N), then by
construction it is left invariant by the iterative proce-
dure, but the derivatives of ρ(2N) with respect to the
2N points of the correlator can be evaluated. The er-
ror on ρ(ω) is then obtained from a formula of the type
(δρ)² = Σ_i (∂ρ/∂Ci)²(δCi)², which however keeps track of
correlations in x0 and Monte-Carlo time. This is the er-
ror band shown on Fig. 3 and the corresponding shear
viscosity values are
η/s = 0.134(33) (T = 1.65Tc), 0.102(56) (T = 1.24Tc). (11)
It is also interesting to check for the stability of the so-
lution under the use of a larger basis of functions. If in-
stead of starting from ρ(N)(ω) we restart from ρ(0) (the
output of the L0/a = 6 lattice) and fit the 2N (depen-
dent) points using 2N basis functions {aℓ}, we obtain the
curves drawn on Fig. 3. As one would hope, the oscilla-
tions of ρ(2N)(ω) are covered by the error band.
Conclusion.— Using state-of-the-art lattice techniques,
we have computed the correlation functions of the energy-
momentum tensor to high accuracy in the SU(3) pure
gauge theory. We have calculated the leading high-
temperature cutoff effects and removed them from the
correlator relevant to the shear viscosity, and we nor-
malized it non-perturbatively, exploiting existing results.
We obtained the entropy density with an accuracy of 1%.
The most robust result obtained on the shear viscosity
is the upper bound Eq. (9), which comes from lumping
the area under the curve on Fig. 3 in the interval [0, 6T ]
into a peak of width Γ = T centered at the origin. Sec-
ondly, our best estimate of the shear viscosity is given by
Eq. (11), using a new method of extraction of the spectral
function. The errors contain an estimate of the systema-
tic uncertainty associated with the limited resolution in
Euclidean time. We are extending the calculation to finer
lattice spacings and larger volumes to further consolidate
our findings.
The values (11) are intriguingly close to saturating the
KSS bound [8] η/s ≥ 1/4π. We note that in perturba-
tion theory the ratio η/s does not depend strongly on the
number of quark flavors [20]. Our results thus corrobo-
rate the picture of a near-perfect fluid that has emerged
from the RHIC experiments, with the magnitude of the
anisotropic flow incompatible with η/s & 0.2 [3].
Acknowledgments.— I thank Krishna Rajagopal and
Philippe de Forcrand for their encouragement and many
useful discussions. This work was supported in part by
funds provided by the U.S. Department of Energy under
cooperative research agreement DE-FC02-94ER40818.
[1] P. F. Kolb, P. Huovinen, U. W. Heinz and H. Heiselberg,
Phys. Lett. B 500, 232 (2001); P. Huovinen, P. F. Kolb,
U. W. Heinz, P. V. Ruuskanen and S. A. Voloshin, Phys.
Lett. B 503, 58 (2001).
[2] D. Teaney, J. Lauret and E. V. Shuryak, Phys. Rev. Lett.
86, 4783 (2001).
[3] D. Teaney, Phys. Rev. C 68, 034913 (2003).
[4] F. Karsch and H.W.Wyld, Phys. Rev. D 35, 2518 (1987).
[5] A. Nakamura and S. Sakai, Phys. Rev. Lett. 94, 072305
(2005).
[6] G. Aarts and J.M. Martinez Resco, JHEP 0204, 053
(2002).
[7] G. Policastro, D.T. Son and A.O. Starinets, Phys. Rev.
Lett. 87, 081601 (2001).
[8] P. Kovtun, D.T. Son and A.O. Starinets, Phys. Rev. Lett.
94, 111601 (2005).
[9] K.G. Wilson, Phys. Rev. D 10, 2445 (1974).
[10] C. Michael, Phys. Rev. D 53 (1996) 4102.
[11] F. Karsch, Nucl. Phys. B 205, 285 (1982).
[12] J. Engels, F. Karsch and T. Scheideler, Nucl. Phys. B
564 (2000) 303 [arXiv:hep-lat/9905002].
[13] A. Hosoya, M.A. Sakagami and M. Takao, Annals Phys.
154, 229 (1984).
[14] B. Lucini, M. Teper and U. Wenger, JHEP 0401, 061
(2004).
[15] S. Necco and R. Sommer, Nucl. Phys. B 622 (2002) 328.
[16] H.B. Meyer, JHEP 0401, 030 (2004).
[17] R. Sommer, Nucl. Phys. B 411 (1994) 839.
[18] G. Boyd, J. Engels, F. Karsch, E. Laermann, C. Leg-
eland, M. Lutgemeier and B. Petersson, Nucl. Phys. B
469, 419 (1996).
[19] R.K. Bryan, Eur. Biophys. J., 18 (1990) 165.
[20] P. Arnold, G.D. Moore and L.G. Yaffe, JHEP 0305, 051
(2003).
0704.1803 | Regularity properties in the classification program for separable
amenable C*-algebras | REGULARITY PROPERTIES IN THE CLASSIFICATION
PROGRAM FOR SEPARABLE AMENABLE C∗-ALGEBRAS
GEORGE A. ELLIOTT AND ANDREW S. TOMS
Abstract. We report on recent progress in the program to classify separable
amenable C∗-algebras. Our emphasis is on the newly apparent role of regular-
ity properties such as finite decomposition rank, strict comparison of positive
elements, and Z-stability, and on the importance of the Cuntz semigroup. We
include a brief history of the program’s successes since 1989, a more detailed
look at the Villadsen-type algebras which have so dramatically changed the
landscape, and a collection of announcements on the structure and properties
of the Cuntz semigroup.
1. Introduction
Rings of bounded operators on Hilbert space were first studied by Murray and
von Neumann in the 1930s. These rings, later called von Neumann algebras, came
to be viewed as a subcategory of a more general category, namely, C∗-algebras.
(The C∗-algebra of compact operators appeared for perhaps the first time when
von Neumann proved the uniqueness of the canonical commutation relations.) A
C∗-algebra is a Banach algebra A with involution x 7→ x∗ satisfying the C∗-algebra
identity:
||xx∗|| = ||x||2, ∀x ∈ A.
Every C∗-algebra is isometrically ∗-isomorphic to a norm-closed sub-∗-algebra of
the ∗-algebra of bounded linear operators on some Hilbert space, and so may still
be viewed as a ring of operators on a Hilbert space.
In 1990, the first named author initiated a program to classify amenable norm-
separable C∗-algebras via K-theoretic invariants. The graded and (pre-)ordered
group K0 ⊕ K1 was suggested as a first approximation to the correct invariant,
as it had already proved to be complete for both approximately finite-dimensional
(AF) algebras and approximately circle (AT) algebras of real rank zero ([15], [17]).
It was quickly realised, however, that more sensitive invariants would be required
if the algebras considered were not sufficiently rich in projections. The program
was refined, and became concentrated on proving that Banach algebra K-theory
and positive traces formed a complete invariant for simple separable amenable C∗-
algebras. Formulated as such, it enjoyed tremendous success throughout the 1990s
and early 2000s.
Date: November 1, 2021.
2000 Mathematics Subject Classification. Primary 46L35, Secondary 46L80.
Key words and phrases. C∗-algebras, classification.
This work was partially supported by NSERC.
http://arxiv.org/abs/0704.1803v3
Recent examples based on the pioneering work of Villadsen have shown that
the classification program must be further revised. Two things are now appar-
ent: the presence of a dichotomy among separable amenable C∗-algebras dividing
those algebras which are classifiable via K-theory and traces from those which will
require finer invariants; and the possibility—the reality, in some cases—that this di-
chotomy is characterised by one of three potentially equivalent regularity properties
for amenable C∗-algebras. (Happily, the vast majority of our stock-in-trade simple
separable amenable C∗-algebras have one or more of these properties, including, for
instance, those arising from directed graphs or minimal C∗-dynamical systems.)
Our plan in this article is to give a brief account of the activity in the classifica-
tion program over the past decade, with particular emphasis on the now apparent
role of regularity properties. After reviewing the successes of the program so far,
we will cover the work of Villadsen on rapid dimension growth AH algebras, the
examples of Rørdam and the second named author which have necessitated the
present re-evaluation of the classification program, and some recent and sweeping
classification results of Winter obtained in the presence of the aforementioned regu-
larity properties. We will also discuss the possible consequences to the classification
program of including the Cuntz semigroup as part of the invariant (as a refinement
of the K0 and tracial invariants).
2. Preliminaries
Throughout the sequel K will denote the C∗-algebra of compact operators on a
separable infinite-dimensional Hilbert space H. For a C∗-algebra A, we let Mn(A)
denote the algebra of n × n matrices with entries from A. The cone of positive
elements of A will be denoted by A+.
2.1. The Elliott invariant and the original conjecture. The Elliott invariant
of a C∗-algebra A is the 4-tuple
(1) Ell(A) := ((K0A, K0A+, ΣA), K1A, T+A, ρA)
where the K-groups are the topological ones, K0A+ is the image of the Murray-
von Neumann semigroup V(A) under the Grothendieck map, ΣA is the subset
of K0A corresponding to projections in A, T+A is the space of positive tracial
linear functionals on A, and ρA is the natural pairing of T+A and K0A given by
evaluating a trace at a K0-class. The reader is referred to Rørdam’s monograph
[45] for a detailed treatment of this invariant. In the case of a unital C∗-algebra
the invariant becomes
((K0A, K0A+, [1A]), K1A, TA, ρA)
where [1A] is the K0-class of the unit, and TA is the (compact convex) space of
tracial states. We will concentrate on unital C∗-algebras in the sequel in order to
limit technicalities.
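As a toy illustration (ours, not computed in the text), the Elliott invariant of a matrix algebra can be written out completely; the computation below is standard K-theory.

```latex
% Elliott invariant of M_n (standard computation, included for orientation):
% K_0(M_n) \cong \mathbb{Z} via the rank, with positive cone \mathbb{Z}^+,
% class of the unit equal to n, trivial K_1, and a unique tracial state tr.
\mathrm{Ell}(M_n) = \bigl( (\mathbb{Z}, \mathbb{Z}^+, [1_{M_n}] = n),\; 0,\; \{\mathrm{tr}\},\; \rho \bigr),
\qquad \rho(\mathrm{tr}, k) = k/n,
% since a projection of K_0-class k has normalised trace k/n.
```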
The original statement of the classification conjecture for simple unital separable
amenable C∗-algebras read as follows:
Conjecture 2.1. Let A and B be simple unital separable amenable C∗-algebras, and suppose
that there exists an isomorphism
φ : Ell(A) → Ell(B).
It follows that there is a ∗-isomorphism Φ : A→ B which induces φ.
It will be convenient to have an abbreviation for the statement above. Let us call
it (EC).
2.2. Amenability. We will take the following deep theorem, which combines re-
sults of Choi and Effros ([7]), Connes ([9]), Haagerup ([27]), and Kirchberg ([32]),
to be our definition of amenability.
Theorem 2.2. A C∗-algebra A is amenable if and only if it has the following
property: for each finite subset G of A and ǫ > 0 there are a finite-dimensional
C∗-algebra F and completely positive contractions φ : A → F and ψ : F → A such
that the diagram

A −−idA−→ A
  ↘ φ    ↗ ψ
      F

commutes up to ǫ on G; that is, ‖ψ(φ(a)) − a‖ < ǫ for all a ∈ G.
The property characterising amenability in Theorem 2.2 is known as the completely
positive approximation property.
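For a commutative algebra the completely positive approximation property can be verified directly; the following sketch (our notation, using a partition of unity) is the standard argument and is not spelled out in the text.

```latex
% Sketch: A = C(X), X compact metric. Given a finite set G and \epsilon > 0,
% choose a finite open cover (U_1,\dots,U_s) on which every f \in G varies by
% less than \epsilon, points x_i \in U_i, and a subordinate partition of unity (h_i).
F = \mathbb{C}^s, \qquad
\psi(f) = \bigl(f(x_1),\dots,f(x_s)\bigr), \qquad
\varphi(\lambda_1,\dots,\lambda_s) = \sum_{i=1}^{s} \lambda_i h_i .
% Both maps are unital completely positive, and for each f \in G:
\bigl\| \varphi\psi(f) - f \bigr\|
= \Bigl\| \sum_i \bigl(f(x_i) - f\bigr) h_i \Bigr\| \le \epsilon .
```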
Why do we only consider separable and amenable C∗-algebras in the classification
program? It stands to reason that if one has no good classification of the weak
closures of the GNS representations for a class of C∗-algebras, then one can hardly
expect to classify the C∗-algebras themselves. These weak closures have separable
predual if the C∗-algebra is separable. Connes and Haagerup gave a classification
of injective von Neumann algebras with separable predual (see [10] and [28]), while
Choi and Effros established that a C∗-algebra is amenable if and only if the weak
closure in each GNS representation is injective ([8]). Separability and amenability
are thus natural conditions which guarantee the existence of a good classification
theory for the weak closures of all GNS representations of a given C∗-algebra. The
assumption of amenability has been shown to be necessary by Dădărlat ([14]).
The reader new to the classification program who desires a fuller introduction is
referred to Rørdam’s excellent monograph [45].
2.3. The Cuntz semigroup. One of the three regularity properties alluded to
in the introduction is defined in terms of the Cuntz semigroup, an analogue for
positive elements of the Murray-von Neumann semigroup V(A). It is known that
this semigroup will be a vital part of any complete invariant for separable amenable
C∗-algebras ([51]). Given its importance, we present both its original definition,
and a modern version which makes the connection with classical K-theory more
transparent.
Definition 2.3 (Cuntz-Rørdam—see [12] and [49]). Let M∞(A) denote the algebraic
limit of the direct system (Mn(A), φn), where φn : Mn(A) → Mn+1(A) is
given by a 7→ diag(a, 0) = a⊕ 0.
Let M∞(A)+ (resp. Mn(A)+) denote the positive elements in M∞(A) (resp. Mn(A)).
Given a, b ∈ M∞(A)+, we say that a is Cuntz subequivalent to b (written a - b) if
there is a sequence (vn)n≥1 of elements in some Mk(A) such that
||vn b vn∗ − a|| −→ 0 as n −→ ∞.
We say that a and b are Cuntz equivalent (written a ∼ b) if a - b and b - a. This
relation is an equivalence relation, and we write 〈a〉 for the equivalence class of a.
The set
W(A) := M∞(A)+/ ∼
becomes a positively ordered Abelian semigroup when equipped with the operation
〈a〉+ 〈b〉 = 〈a⊕ b〉
and the partial order
〈a〉 ≤ 〈b〉 ⇔ a - b.
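Two standard examples, added here for orientation (they are not computed in the text): for A = C the Cuntz class of a positive matrix is just its rank, and in the commutative case subequivalence is containment of cozero sets.

```latex
% Example 1: W(\mathbb{C}) \cong \mathbb{Z}^+ via \langle a \rangle \mapsto \operatorname{rank}(a),
% since a positive matrix is Cuntz equivalent to its support projection.
% Example 2 (a standard fact): for A = C(X), X compact Hausdorff,
a \precsim b \iff \{x : a(x) > 0\} \subseteq \{x : b(x) > 0\};
% e.g. (a-\epsilon)_+ = v b v^* with v = \bigl((a-\epsilon)_+\bigr)^{1/2} b^{-1/2},
% interpreted on the compact set \{a \ge \epsilon\}, where b is bounded below.
```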
Definition 2.3 is slightly unnatural, as it fails to consider positive elements in
A ⊗ K. This defect is the result of mimicking the construction of the Murray-
von Neumann semigroup in letter rather than in spirit. Each projection in A ⊗ K
is equivalent to a projection in some Mn(A), whence M∞(A) is large enough to
encompass all possible equivalence classes of projections. The same is not true,
however, of positive elements and Cuntz equivalence. The definition below amounts
essentially to replacing M∞(A) with A⊗K in the definition above (this is a theorem),
and also gives a new and very useful characterisation of Cuntz subequivalence. We
refer the reader to [35] and [39] for background material on Hilbert C∗-modules.
Consider A as a (right) Hilbert C∗-module over itself, and let HA denote the
countably infinite direct sum of copies of this module. There is a one-to-one corre-
spondence between closed countably generated submodules of HA and hereditary
subalgebras of A ⊗ K: the hereditary subalgebra B corresponds to the closure of
the span of BHA. Since A is separable, B is singly hereditarily generated, and it is
fairly routine to prove that any two generators are Cuntz equivalent in the sense of
Definition 2.3. Thus, passing from positive elements to Cuntz equivalence classes
factors through the passage from positive elements to the hereditary subalgebras
they generate.
Let X and Y be closed countably generated submodules of HA. Recall that
the compact operators on HA form a C∗-algebra isomorphic to A⊗ K. Let us say
that X is compactly contained in Y if X is contained in Y and there is a compact
self-adjoint endomorphism of Y which fixes X pointwise. Such an endomorphism
extends naturally to a compact self-adjoint endomorphism of HA, and so may be
viewed as a self-adjoint element of A⊗K. We write X - Y if each closed countably
generated compactly contained submodule of X is isomorphic to such a submodule
of Y .
Theorem 2.4 (Coward-Elliott-Ivanescu, [11]). The relation - on Hilbert C∗-modules
defined above, when viewed as a relation on positive elements in M∞(A), is
precisely the relation - of Definition 2.3.
Let [X ] denote the Cuntz equivalence class of the module X . One may con-
struct a positive partially ordered Abelian semigroup Cu(A) by endowing the set
of countably generated Hilbert C∗-modules over A with the operation
[X ] + [Y ] := [X ⊕ Y ]
and the partial order
[X ] ≤ [Y ] ⇔ X - Y.
The semigroup Cu(A) coincides with W(A) whenever A is stable, i.e., A⊗K ∼= A,
and has some advantages over W(A) in general. First, suprema of increasing
sequences always exist in Cu(A). This leads to the definition of a category including
this structure in which Cu(A) sits as an object, and as a functor into which it
is continuous with respect to inductive limits. (Definition 2.3 casts W(A) as a
functor into just the category of partially ordered Abelian semigroups with zero.
This functor fails to be continuous with respect to inductive limits.) Second, it
allows one to prove that if A has stable rank one, then Cuntz equivalence of positive
elements simply amounts to isomorphism of the corresponding Hilbert C∗-modules.
This has led, via recent work of Brown, Perera, and the second named author, to
the complete classification of all countably generated Hilbert C∗-modules over A
via K0 and traces, and to the classification of unitary orbits of positive operators
in A⊗K through recent work of Ciuperca and the first named author ([4], [5], [6]).
Essentially, W(A) and Cu(A) contain the same information, but we have chosen
to maintain separate notation both to avoid confusion and because many results in
the literature are stated only for W(A).
Cuntz equivalence is often described roughly as the Murray-von Neumann equiv-
alence of the support projections of positive elements. This heuristic is, modulo
accounting for projections, precise in C∗-algebras for which the Elliott invariant is
known to be complete ([42]). In the stably finite case, one recovers both K0, the
tracial simplex, and the pairing ρ (see (1)) from the Cuntz semigroup, whence the
invariant
(Cu(A),K1A)
is finer than Ell(A) in general. Remarkably, these two invariants determine each
other in a natural way for the largest class of simple separable amenable C∗-algebras
in which (EC) can be expected to hold ([4], [5]).
3. Three regularity properties
Let us now describe three agreeable properties which a C*-algebra may enjoy.
We will see later how virtually all classification theorems for separable amenable
C∗-algebras via the Elliott invariant assume, either explicitly or implicitly, one of
these properties.
3.1. Strict comparison. Our first regularity property—strict comparison—is one
that guarantees, in simple C∗-algebras, that the heuristic view of Cuntz equivalence
described at the end of Section 2 is in fact accurate for positive elements which
are not Cuntz equivalent to projections (see [42]). The property is K-theoretic in
character.
Let A be a unital C∗-algebra, and denote by QT(A) the space of normalised 2-
quasitraces on A (see [2, Definition II.1.1]). Let S(W(A)) denote the set of additive
and order preserving maps d from W(A) to R+ having the property that d(〈1A〉) =
1. Such maps are called states. Given τ ∈ QT(A), one may define a map
dτ : M∞(A)+ → R+ by
(2) dτ(a) = lim_{n→∞} τ(a^{1/n}).
This map is lower semicontinuous, and depends only on the Cuntz equivalence class
of a. It moreover has the following properties:
(i) if a - b, then dτ (a) ≤ dτ (b);
(ii) if a and b are orthogonal, then dτ (a+ b) = dτ (a) + dτ (b).
Thus, dτ defines a state on W(A). Such states are called lower semicontinuous
dimension functions, and the set of them is denoted by LDF(A). If A has the
property that a - b whenever d(a) < d(b) for every d ∈ LDF(A), then let us say
that A has strict comparison of positive elements or simply strict comparison.
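A concrete computation (our example, not in the text) shows how dτ forgets everything about a positive element except the measure of its support:

```latex
% A = C[0,1], \tau(f) = \int_0^1 f(t)\,dt (Lebesgue measure). For 0 \le f \le 1,
% f^{1/n} increases pointwise to the indicator function of \{f > 0\}, so
d_\tau(f) = \lim_{n\to\infty} \int_0^1 f(t)^{1/n}\, dt
          = \lambda\bigl(\{\, t : f(t) > 0 \,\}\bigr).
% E.g. f(t) = t gives \tau(f) = 1/2 but d_\tau(f) = 1;
% for a projection p (in any A), d_\tau(p) = \tau(p).
```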
A theorem of Haagerup asserts that every element of QT(A) is in fact a trace
if A is exact ([29]). All amenable C∗-algebras are exact, so we dispense with the
consideration of quasi-traces in the sequel.
3.2. Finite decomposition rank. Our second regularity property, introduced by
Kirchberg and Winter, is topological in flavour. It is based on a noncommutative
version of covering dimension called decomposition rank.
Definition 3.1 ([34], Definitions 2.2 and 3.1). Let A be a separable C∗-algebra.
(i) A completely positive map ϕ : M_{r_1} ⊕ · · · ⊕ M_{r_s} → A is n-decomposable if there is
a decomposition {1, . . . , s} = I_0 ∪ · · · ∪ I_n such that the restriction of ϕ to ⊕_{i∈I_j} M_{r_i}
preserves orthogonality for each j ∈ {0, . . . , n}.
(ii) A has decomposition rank n, drA = n, if n is the least integer such that
the following holds: Given {b1, . . . , bm} ⊂ A and ǫ > 0, there is a completely
positive approximation (F, ψ, ϕ) for b1, . . . , bm within ǫ (i.e., ψ : A → F and ϕ :
F → A are completely positive contractions and ‖ϕψ(bi) − bi‖ < ǫ) such that ϕ is
n-decomposable. If no such n exists, we write drA = ∞.
Decomposition rank has good permanence properties. It behaves well with re-
spect to quotients, inductive limits, hereditary subalgebras, unitization and sta-
bilization. Its topological flavour comes from the fact that it generalises covering
dimension in the commutative case: if X is a locally compact second countable
space, then drC0(X) = dimX . We refer the reader to [34] for details.
The regularity property that we are interested in is finite decomposition rank,
expressed by the inequality dr < ∞. This can only occur in a stably finite C∗-
algebra.
3.3. Z-stability. The Jiang-Su algebra Z is a simple separable amenable and
infinite-dimensional C∗-algebra with the same Elliott invariant as C ([30]). We
say that a second algebra A is Z-stable if A ⊗ Z ∼= A. Z-stability is our third
regularity property. It is very robust with respect to common constructions (see
[56]).
The next theorem shows Z-stability to be highly relevant to the classification
program. Recall that a pre-ordered Abelian group (G,G+) is said to be weakly
unperforated if nx ∈ G+\{0} implies x ∈ G+ for any x ∈ G and n ∈ N.
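A minimal example of perforation (ours, for illustration): take G = Z with the positive cone consisting of 0 and all integers at least 2.

```latex
% (G, G^+) = (\mathbb{Z}, \{0, 2, 3, 4, \dots\}) is a pre-ordered Abelian group:
% G^+ + G^+ \subseteq G^+ and G^+ \cap (-G^+) = \{0\}.
% It is NOT weakly unperforated:
2 \cdot 1 = 2 \in G^+ \setminus \{0\} \quad\text{but}\quad 1 \notin G^+ .
% Villadsen-type constructions realise such perforation in the K_0-group
% of a simple separable amenable C*-algebra.
```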
Theorem 3.2 (Gong-Jiang-Su, [26]). Let A be a simple unital C∗-algebra with
weakly unperforated K0-group. Then,
Ell(A) ∼= Ell(A⊗Z).
Thus, modulo a mild restriction on K0, the completeness of Ell(•) in the simple
unital case of the classification program would imply Z-stability. Remarkably, there
exist algebras satisfying the hypotheses of the above theorem which are not Z-stable
([46], [51], [52]).
3.4. Relationships. In general, no two of the regularity properties above are
equivalent. The most important general result connecting them is the following
theorem of M. Rørdam ([47]):
Theorem 3.3. Let A be a simple, unital, exact, finite, and Z-stable C∗-algebra.
Then, A has strict comparison of positive elements.
We shall see in the sequel that for a substantial class of simple, separable, amenable,
and stably finite C∗-algebras, all three of our regularity properties are equivalent.
Moreover, the algebras in this class which do satisfy these three properties also
satisfy (EC). There is good reason to believe that the equivalence of these three
properties will hold in much greater generality, at least in the stably finite case; in
the general case, strict comparison and Z-stability may well prove to be equiva-
lent characterisations of those simple, unital, separable, and amenable C∗-algebras
which satisfy (EC).
4. A brief history
We will now take a short tour of the classification program’s biggest successes,
and also the fascinating algebras of Villadsen. We have two goals in mind: to edify
the reader unfamiliar with the classification program, and to demonstrate that the
regularity properties of Section 3 pervade the known confirmations of (EC). This
is a new point of view, for when these results were originally proved, there was no
reason to think that anything more than simplicity, separability, and amenability
would be required to complete the classification program.
We have divided our review of known classification results into three broad cat-
egories according to the types of algebras covered: purely infinite algebras, and
two formally different types of stably finite algebras. It is beyond the scope of this
article to provide an exhaustive list of known classification results, much less
demonstrate their connections to our regularity properties. We will thus choose,
from each of the three categories above, the classification theorem with the broadest
scope, and indicate how the algebras it covers satisfy at least one of our regularity
properties.
4.1. Purely infinite simple algebras. We first consider a case where the theory
is summarised with one beautiful result. Recall that a simple separable amenable
C∗-algebra is purely infinite if every hereditary subalgebra contains an infinite pro-
jection (a projection is infinite if it is equivalent, in the sense of Murray and von
Neumann, to a proper subprojection of itself—otherwise the projection is finite).
Theorem 4.1 (Kirchberg-Phillips, 1995, [31] and [43]). Let A and B be separable
amenable purely infinite simple C∗-algebras which satisfy the Universal Coefficient
Theorem. If there is an isomorphism
φ : Ell(A) → Ell(B),
then there is a ∗-isomorphism Φ : A→ B.
In the theorem above, the Elliott invariant is somewhat simplified. The hypothe-
ses on A and B guarantee that they are traceless, and that the order structure on
K0 is irrelevant. Thus, the invariant is simply the graded group K0⊕K1, along with
the K0-class of the unit if it exists. The assumption of the Universal Coefficient
Theorem (UCT) is required in order to deduce the theorem from a result which is
formally more general: A and B as in the theorem are ∗-isomorphic if and only
if they are KK-equivalent. The question of whether every amenable C∗-algebra
satisfies the UCT is open.
Which of our three regularity properties are present here? As noted earlier,
finite decomposition rank is out of the question. The algebras we are considering
are traceless, and so the definition of strict comparison reduces to the following
statement: for any two non-zero positive elements a, b ∈ A, we have a - b. This,
in turn, is often taken as the very definition of pure infiniteness, and can be shown
to be equivalent to the definition preceding Theorem 4.1 without much difficulty.
Strict comparison is thus satisfied in a slightly vacuous way. As it turns out, A and
B are also Z-stable, although this is less obvious. One first proves that A and B
are approximately divisible (again, this does not require Theorem 4.1), and then
uses the fact, due to Winter and the second named author, that any separable and
approximately divisible C∗-algebra is Z-stable ([57]).
4.2. The stably finite case, I: inductive limits. We now move on to the case
of stably finite C∗-algebras, i.e., those algebras A such that every projection in
the (unitization of) each matrix algebra Mn(A) is finite. (The question of whether
a simple amenable C∗-algebra must always be purely infinite or stably finite was
recently settled negatively by Rørdam. We will address his example again later.)
Many of the classification results in this setting apply to classes of C∗-algebras
which can be realised as inductive limits of certain building block algebras. The
original classification result for stably finite algebras is due to Glimm. Recall that
a C∗-algebra A is uniformly hyperfinite (UHF) if it is the limit of an inductive
sequence
Mn1 −φ1−→ Mn2 −φ2−→ Mn3 −φ3−→ · · · ,
where each φi is a unital ∗-homomorphism. We will state his result here as a con-
firmation of the Elliott conjecture, but note that it predates both the classification
program and the realisation that K-theory is the essential invariant.
Theorem 4.2 (Glimm, 1960, [24]). Let A and B be UHF algebras, and suppose
that there is an isomorphism
φ : Ell(A) → Ell(B).
It follows that there is a ∗-isomorphism Φ : A→ B which induces φ.
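Concretely (a standard computation, not spelled out above), the invariant of a UHF algebra is a dense subgroup of the rationals:

```latex
% For the CAR algebra M_{2^\infty} = \varinjlim (M_{2^n},\, a \mapsto a \oplus a):
K_0(M_{2^\infty}) \cong \mathbb{Z}[1/2]
\quad\text{(dyadic rationals, ordered as a subgroup of } \mathbb{R}\text{)},
\qquad [1] = 1 .
% Two UHF algebras are isomorphic iff these ordered groups with order unit
% are isomorphic, i.e., iff the algebras have the same supernatural number.
```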
Again, the invariant is dramatically simplified here. Only the ordered K0-group
is non-trivial. The strategy of Glimm’s proof (which did not use K-theory explicitly)
was to “intertwine” two inductive sequences (Mni , φi) and (Mmi , ψi), i.e., to find
sequences of ∗-homomorphisms ηi and γi making the diagram
Mn1 −φ1−→ Mn2 −φ2−→ Mn3 −φ3−→ · · ·
    ↘ η1    ↗ γ1    ↘ η2    ↗ γ2
Mm1 −ψ1−→ Mm2 −ψ2−→ Mm3 −ψ3−→ · · ·
commute. One then gets an isomorphism between the limit algebras by extending
the obvious morphism between the inductive sequences by continuity.
The intertwining argument above can be pushed surprisingly far. One replaces
the inductive sequences above with more general inductive sequences (Ai, φi) and
(Bi, ψi), where the Ai and Bi are drawn from a specified class (matrix algebras over
circles, for instance), and seeks maps ηi and γi as before. Usually, it is not possible
to find ηi and γi making the diagram commute, but approximate commutativity
on ever larger finite sets can be arranged for, and this suffices for the existence of
an isomorphism between the limit algebras. This generalised intertwining is known
as the Elliott Intertwining Argument.
The most important classification theorem for inductive limits covers the so-
called approximately homogeneous (AH) algebras. An AH algebra A is the limit of
an inductive sequence (Ai, φi), where each Ai is semi-homogeneous:
Ai = p_{i,1}(C(X_{i,1})⊗K)p_{i,1} ⊕ · · · ⊕ p_{i,ni}(C(X_{i,ni})⊗K)p_{i,ni},
for some natural number ni, compact metric spaces Xi,j , and projections pi,j ∈
C(Xi,j) ⊗ K. We refer to the sequence (Ai, φi) as a decomposition for A; such
decompositions are not unique. All AH algebras are separable and amenable.
Let A be a simple unital AH algebra. Let us say that A has slow dimension
growth if it has a decomposition (Ai, φi) satisfying

lim sup_{i→∞} max{ dim(Xi,1)/rank(pi,1), . . . , dim(Xi,ni)/rank(pi,ni) } = 0.

Let us say that A has very slow dimension growth if it has a decomposition satisfying
the (formally) stronger condition that

lim sup_{i→∞} max{ dim(Xi,1)^3/rank(pi,1), . . . , dim(Xi,ni)^3/rank(pi,ni) } = 0.
Finally, let us say that A has bounded dimension if there is a constant M > 0 and
a decomposition of A satisfying
sup_{i,l} dim(Xi,l) ≤ M.
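A numerical illustration (ours; it assumes the standard forms of the two conditions, with the ratio dim/rank for slow growth and dim^3/rank for very slow growth):

```latex
% Suppose a decomposition has n_i = 1, \dim(X_i) = i, \operatorname{rank}(p_i) = i^2.
% Then \dim(X_i)/\operatorname{rank}(p_i) = 1/i \to 0 (slow dimension growth), but
% \dim(X_i)^3/\operatorname{rank}(p_i) = i \to \infty, so this decomposition is
% not of very slow dimension growth; rank growth i^4 instead would give
\dim(X_i)^3/\operatorname{rank}(p_i) = i^3/i^4 = 1/i \longrightarrow 0 .
% Bounded dimension (e.g. X_i = S^2 for all i) trivially implies both conditions.
```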
Theorem 4.3 (Elliott-Gong and Dădărlat, [20] and [13]). (EC) holds among simple
unital AH algebras with slow dimension growth and real rank zero.
Theorem 4.4 (Elliott-Gong-Li and Gong, [22] and [25]). (EC) holds among simple
unital AH algebras with very slow dimension growth.
All three of our regularity properties hold for the algebras of Theorems 4.3 and
4.4, but some are easier to establish than others. Let us first point out that an
algebra from either class has stable rank one and weakly unperforated K0-group
(cf.[1]), and that these facts predate Theorems 4.3 and 4.4. A simple unital C∗-
algebra of real rank zero and stable rank one has strict comparison if and only if its
K0-group is weakly unperforated (cf.[41]), whence strict comparison holds for the
algebras covered by Theorem 4.3. A recent result of the second named author shows
that strict comparison holds for any simple unital AH algebra with slow dimension
growth ([55]), and this result is independent of the classification theorems above.
Thus, strict comparison holds for the algebras of Theorems 4.3 and 4.4, and the
proof of this fact, while not easy, is at least much less complicated than the proofs
of the classification theorems themselves. Establishing finite decomposition rank
requires the full force of the classification theorems: a consequence of both theorems
is that the algebras they cover are all in fact simple unital AH algebras of bounded
dimension, and such algebras have finite decomposition rank by [34, Corollary 3.12
and 3.3 (ii)]. Proving Z-stability is also an application of Theorems 4.3 and 4.4: one
may use the said theorems to prove that the algebras in question are approximately
divisible ([21]), and this entails Z-stability for separable C∗-algebras ([57]).
Why all the interest in inductive limits? Initially at least, it was surprising to
find that any classification of C∗-algebras by K-theory was possible, and the earliest
theorems to this effect covered inductive limits (see, for instance, the first named
author’s classification of AF algebras and AT-algebras of real rank zero []). But it
was the realisation by Evans and the first named author that a very natural class
of C∗-algebras arising from dynamical systems—the irrational rotation algebras—
were in fact inductive limits of elementary building blocks that began the drive to
classify inductive limits of all stripes ([19]). This theorem of Elliott and Evans has
recently been generalised in sweeping fashion by Lin and Phillips, who prove that
virtually every C∗-dynamical system giving rise to a simple algebra is an inductive
limit of fairly tractable building blocks. This result continues to provide strong
motivation for the study of inductive limit algebras.
4.3. The stably finite case, II: tracial approximation. Natural examples of
separable amenable C∗-algebras are rarely equipped with obvious and useful induc-
tive limit decompositions. Even the aforementioned theorem of Lin and Phillips,
which gives an inductive limit decomposition for each minimal C∗-dynamical sys-
tem, does not produce inductive sequences covered by existing classification theo-
rems. It is thus desirable to have theorems confirming the Elliott conjecture under
hypotheses that are (reasonably) straightforward to verify for algebras not given as
inductive limits.
Lin in [36] introduced the concept of tracial topological rank for C∗-algebras.
His definition, in spirit if not in letter, is this: a unital simple tracial C∗-algebra A
has tracial topological rank at most n ∈ N if for any finite set F ⊆ A, tolerance
ǫ > 0, and positive element a ∈ A there exist unital subalgebras B and C of A such that
(i) 1A = 1B ⊕ 1C ,
(ii) F is almost (to within ǫ) contained in B ⊕ C,
(iii) C is isomorphic to F⊗C(X), where dim(X) ≤ n and F is finite-dimensional,
(iv) 1B is dominated, in the sense of Cuntz subequivalence, by a.
One denotes by TR(A) the least integer n for which A satisfies the definition above;
this is the tracial topological rank, or simply the tracial rank, of A.
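As a sanity check (our remark, combining the definition above with standard AF facts): AF algebras have tracial rank zero.

```latex
% Let A be unital simple AF, F \subseteq A finite, \epsilon > 0, a \in A_+ nonzero.
% Choose a finite-dimensional subalgebra C \subseteq A with 1_C = 1_A that
% contains F up to \epsilon, and take B = 0. Then:
% (i)   1_A = 0 \oplus 1_C,
% (ii)  F \subseteq_\epsilon B \oplus C,
% (iii) C \cong F' \otimes C(X) with X a point and F' finite-dimensional,
% (iv)  1_B = 0 \precsim a trivially.
% Hence TR(A) = 0.
```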
The most important value of the tracial rank is zero. Lin proved that simple
unital separable amenable C∗-algebras of tracial rank zero satisfy the Elliott con-
jecture, modulo the ever present UCT assumption ([37]). The great advantage of
this result is that its hypotheses can be verified for a wide variety of C∗-dynamical
systems and all simple non-commutative tori, without ever having to prove that the
latter have tractable inductive limit decompositions (see [44], for instance). Indeed,
the existence of such decompositions is a consequence of Lin’s theorem! One can
also verify the hypotheses of Lin’s classification theorem for many real rank zero
C∗-algebras with unique trace ([3]), always with the assumption, indirectly, of strict
comparison.
Simple unital C∗-algebras of tracial rank zero can be shown to have stable rank
one and weakly unperforated K0-groups, whence they have strict comparison of
positive elements by a theorem of Perera ([41]). (There is a classification theorem
for algebras of tracial rank one ([38]), but this has been somewhat less useful—it
is difficult to verify tracial rank one in natural examples. Also, Niu has recently
proved a classification theorem for some C∗-algebras which are approximated in
trace by certain subalgebras of Mn ⊗ C[0, 1] ([40]).)
And what of our regularity properties? Lin proved in [36] that every unital simple
C∗-algebra of tracial rank zero has stable rank one and weakly unperforated K0-
group. These facts, by the results reviewed at the end of the preceding subsection,
entail strict comparison, and are not nearly so difficult to prove as the tracial rank
zero classification theorem. In a further analogy with the case of AH algebras,
finite decomposition rank and Z-stability can only be verified by applying Lin’s
classification theorem—a consequence of this theorem is that the algebras it covers
are in fact AH algebras of bounded dimension!
4.4. Villadsen’s algebras. Until the mid 1990s we had no examples of simple
separable amenable C∗-algebras where one of our regularity properties failed. To
be fair, two of our regularity properties had not yet even been defined, and strict
comparison was seen as a technical version of the more attractive Second Fun-
damental Comparability Question for projections (this last condition, abbreviated
FCQ2, asks for strict comparison for projections only). This all changed when Vil-
ladsen produced a simple separable amenable and stably finite C∗-algebra which
did not have FCQ2, answering a long-standing question of Blackadar ([59]). The
techniques introduced by Villadsen were subsequently used by him and others to
answer many open questions in the theory of nuclear C∗-algebras including the
following:
(i) Does there exist a simple separable amenable C∗-algebra containing a finite
and an infinite projection? (Solved affirmatively by Rørdam in [46].)
(ii) Does there exist a simple and stably finite C∗-algebra with non-minimal
stable rank? (Solved affirmatively by Villadsen in [60].)
(iii) Is stability a stable property for simple C∗-algebras? (Solved negatively by
Rørdam in [48].)
(iv) Does a simple and stably finite C∗-algebra with cancellation of projections
necessarily have stable rank one? (Solved negatively by the second named
author in [54].)
Of the results above, (i) was (and is) the most significant. In addition to showing
that simple separable amenable C∗-algebras do not have a factor-like type classifi-
cation, Rørdam’s example demonstrated that the Elliott invariant as it stood could
not be complete in the simple case. This and other examples due to the second
named author have necessitated a revision of the classification program. It is to
the nature of this revision that we now turn.
5. The way(s) forward
5.1. New assumptions. (EC) does not hold in general, and this justifies new as-
sumptions in efforts to confirm it. In particular, one may assume any combination
of our three regularity properties. We will comment on the aptness of these new
assumptions in the next subsection. For now we observe that, from a certain point
of view, we have been making these assumptions all along. Existing classification
theorems for C∗-algebras of real rank zero are accompanied by the crucial assump-
tions of stable rank one and weakly unperforated K-theory; as has already been
pointed out, unperforated K-theory can be replaced with strict comparison in this
setting.
How much further can one get by assuming the (formally) stronger condition
of Z-stability? What role does finite decomposition rank play? As it turns out,
these two properties both alone and together produce interesting results. Let RR0
denote the class of simple unital separable amenable C∗-algebras of real rank zero.
The following subclasses of RR0 satisfy (EC):
(i) algebras which satisfy the UCT, have finite decomposition rank, and have
tracial simplex with compact and zero-dimensional extreme boundary;
(ii) Z-stable algebras which satisfy the UCT and are approximated locally by
subalgebras of finite decomposition rank.
These results, due to Winter ([61], [62]), showcase the power of our regularity
properties: included in the algebras covered by (ii) are all simple separable unital
Z-stable ASH (approximately subhomogeneous) algebras of real rank zero.
Another advantage to the assumptions of Z-stability and strict comparison is
that they allow one to recover extremely fine isomorphism invariants for C∗-algebras
from the Elliott invariant alone. (This recovery is not possible in general.) We will
be able to give precise meaning to this comment below, but first require a further
discussion of the Cuntz semigroup.
5.2. New invariants. A natural reaction to an incomplete invariant is to enlarge
it: include whatever information was used to prove incompleteness. This is not
always a good idea. It is possible that one’s distinguishing information is ad hoc,
and unlikely to yield a complete invariant. Worse, one may throw so much new
information into the invariant that the impact of its potential completeness is se-
verely diminished. The revision of an invariant is a delicate business. In this light,
not all counterexamples are equal.
Rørdam’s finite-and-infinite-projection example is distinguished from a simple
and purely infinite algebra with the same K-theory by the obvious fact that the
latter contains no finite projections. The natural invariant which captures this dif-
ference is the semigroup of Murray-von Neumann equivalence classes of projections
in matrices over an algebra A, denoted by V(A). After the appearance of Rørdam’s
example, the second named author produced a pair of simple, separable, amenable,
and stably finite C∗-algebras which agreed on the Elliott invariant, but were not
isomorphic. In this case the distinguishing invariant was Rieffel’s stable rank. It
was later discovered that these algebras could not be distinguished by their Murray-
von Neumann semigroups, but it was not yet clear which data were missing from
the Elliott invariant. More dramatic examples were needed, ones which agreed on
most candidates for enlarging the invariant, and pointed the way to the “missing
information”.
In [52], the second named author constructed a pair of simple unital AH algebras
which, while non-isomorphic, agreed on a wide swath of invariants including the
Elliott invariant, all continuous (with respect to inductive sequences) and homotopy
invariant functors from the category of C∗-algebras (a class which includes the
Murray-von Neumann semigroup), the real and stable ranks, and, as was shown
REGULARITY PROPERTIES FOR AMENABLE C∗-ALGEBRAS 13
later in [], stable isomorphism invariants (those invariants which are insensitive to
tensoring with a matrix algebra or passing to a hereditary subalgebra). It was thus
reasonable to expect that the distinguishing invariant in this example—the Cuntz
semigroup—might be a good candidate for enlarging the invariant. At least, it
was an object which after years of being used sparingly as a means to other ends,
merited study for its own sake.
Let us collect some evidence supporting the addition of the Cuntz semigroup to
the usual Elliott invariant. First, in the biggest class of algebras where (EC) can
be expected to hold—Z-stable algebras, as shown by Theorem 3.2—it is not an
addition at all! Recent work of Brown, Perera, and the second named author shows
that for a simple unital separable amenable C∗-algebra which absorbs Z tensorially,
there is a functor which recovers the Cuntz semigroup from the Elliott invariant
([4], [42]). This functorial recovery also holds for simple unital AH algebras of slow
dimension growth, a class for which Z-stability is not known and yet confirmation of
(EC) is expected. (It should be noted that the computation of the Cuntz semigroup
for a simple approximately interval (AI) algebra was essentially carried out by
Ivanescu and the first named author in [23], although one does require [11, Corollary
4] to see that the computation is complete.) Second, the Cuntz semigroup unifies
the counterexamples of Rørdam and the second named author. One can show that
the examples of [45], [51], and [52] all consist of pairs of algebras with different Cuntz
semigroups; there are no counterexamples to the conjecture that simple separable
amenable C∗-algebras will be classified up to ∗-isomorphism by the Elliott invariant
and the Cuntz semigroup. Third, the Cuntz semigroup provides a bridge to the
classification of non-simple algebras. Ciuperca and the first named author have
recently proved that AI algebras—limits of inductive sequences of algebras of the
form M_{m_i}(C[0, 1])—are classified up to isomorphism by their Cuntz semigroups. This is accomplished
by proving that the approximate unitary equivalence classes of positive operators
in the unitization of a stable C∗-algebra of stable rank one are determined by the
Cuntz semigroup of the algebra, and then appealing to a theorem of Thomsen
([50]). (These approximate unitary equivalence classes of positive operators can
be endowed with the structure of a topological partially ordered semigroup with
functional calculus. This invariant, known as Thomsen’s semigroup, is recovered
functorially from the Cuntz semigroup for separable algebras of stable rank one,
and so from the Elliott invariant in algebras which are moreover simple, unital,
exact, finite, and Z-stable by the results of [4]. This new semigroup is the fine
invariant alluded to at the end of subsection 5.1.)
There is one last reason to suspect a deep connection between the classification
program and the Cuntz semigroup. Let us first recall a theorem of Kirchberg, which
is germane to the classification of purely infinite C∗-algebras (cf. Theorem 4.1).
Theorem 5.1 (Kirchberg, c. 1994; see [33]). Let A be a separable amenable C∗-
algebra. The following two properties are equivalent:
(i) A is purely infinite;
(ii) A⊗O∞ ∼= A.
A consequence of Kirchberg’s theorem is that among simple separable amenable
C∗-algebras which merely contain an infinite projection, there is a two-fold char-
acterisation of the (proper) subclass which satisfies the original form of the Elliott
conjecture (modulo UCT). If one assumes a priori that A is simple and unital
with no tracial state, then a theorem of Rørdam (see [47]) shows that property (ii)
above — known as O∞-stability—is equivalent to Z-stability. Under these same
hypotheses, property (i) is equivalent to the statement that A has strict compar-
ison. Kirchberg’s theorem can thus be rephrased as follows in the simple unital
case:
Theorem 5.2. Let A be a simple separable unital amenable C∗-algebra without a
tracial state. The following two properties are equivalent:
(i) A has strict comparison;
(ii) A⊗Z ∼= A.
The properties (i) and (ii) in the theorem above make perfect sense in the presence
of a trace. We moreover have that (ii) implies (i) even in the presence of traces (this
is due to Rørdam—see [47]). It therefore makes sense to ask whether the theorem
might be true without the tracelessness hypothesis. Remarkably, this appears to be
the case. Winter and the second named author have proved that for a substantial
class of stably finite C∗-algebras, strict comparison and Z-stability are equivalent,
and that these properties moreover characterise the (proper) subclass which satisfies
(EC) ([58]). In other words, Kirchberg’s theorem is quite possibly a special case
of a more general result, one which will give a unified two-fold characterisation of
those simple separable amenable C∗-algebras which satisfy the original form of the
Elliott conjecture.
It is too soon to know whether the Cuntz semigroup together with Elliott in-
variant will suffice for the classification of simple separable amenable C∗-algebras,
or indeed, whether such a broad classification can be hoped for at all. But there
is already cause for optimism. Zhuang Niu has recently obtained some results on
lifting maps at the level of the Cuntz semigroup to ∗-homomorphisms. This type of
lifting result is a key ingredient in proving classification theorems of all stripes. His
results suggest the algebras of [52] as the appropriate starting point for any effort
to establish the Cuntz semigroup as a complete isomorphism invariant, at least in
the absence of K1.
We close our survey with a few questions for the future, both near and far.
(i) When do natural examples of simple separable amenable C∗-algebras satisfy
one or more of the regularity properties of Section 3? In particular, do
simple unital inductive limits of recursive subhomogeneous algebras have
strict comparison whenever they have strict slow dimension growth?
(ii) Can the classification of positive operators up to approximate unitary equiv-
alence via the Cuntz semigroup in algebras of stable rank one be extended
to normal elements, provided that one accounts for K1?
(iii) Let A be a simple, unital, separable, and amenable C∗-algebra with strict
comparison of positive elements. Is A Z-stable? Less ambitiously, does A
have stable rank one whenever it is stably finite?
(iv) Can one use Thomsen’s semigroup to prove new classification theorems?
(The attraction here is that Thomsen’s semigroup is already implicit in the
Elliott invariant for many classes of C∗-algebras.)
References
[1] Blackadar, B., Dădărlat, M., and Rørdam, M.: The real rank of inductive limit C∗-algebras,
Math. Scand. 69 (1991), 211-216
[2] Blackadar, B., and Handelman, D.:Dimension Functions and Traces on C∗-algebras, J.
Funct. Anal. 45 (1982), 297-340
[3] Brown, N.: Invariant means and finite representation theory of C∗-algebras, Mem. Amer.
Math. Soc. 184, no. 865, viii + 105 pp.
[4] Brown, N. P., Perera, F., and Toms, A. S.: The Cuntz semigroup, the Elliott conjecture,
and dimension functions on C∗-algebras, arXiv preprint math.OA/0609182 (2006)
[5] Brown, N. P., and Toms, A. S.: Three applications of the Cuntz semigroup, preprint
[6] Ciuperca, A., and Elliott, G. A.: A remark on invariants for C∗-algebras of stable rank one,
preprint
[7] Choi, M.-D., and Effros, E. G.: Nuclear C∗-algebras and the approximation property, Amer.
J. Math. 100 (1978), 61-79
[8] Choi, M.-D., and Effros, E. G.: Nuclear C∗-algebras and injectivity: the general case,
Indiana Univ. Math. J. 26 (1977), 443-446
[9] Connes, A.: On the cohomology of operator algebras, J. Funct. Anal. 28 (1978), 248-253
[10] Connes, A.: Classification of injective factors. Cases II1, II∞, IIIλ, λ ≠ 1, Ann. of Math.
(2) 104 (1976), 73-115
[11] Coward, K. T., Elliott, G. A., and Ivanescu, C.: The Cuntz semigroup as an invariant for
C∗-algebras, preprint
[12] Cuntz, J.: Dimension Functions on Simple C∗-algebras, Math. Ann. 233 (1978), 145-153
[13] Dădărlat, M.: Reduction to dimension three of local spectra of real rank zero C∗-algebras,
J. Reine Angew. Math. 460 (1995), 189-212
[14] Dădărlat, M.: Nonnuclear subalgebras of AF algebras, Amer. J. Math. 122 (2000), 581-597
[15] Elliott, G. A.: On the classification of inductive limits of sequences of semi-simple finite-
dimensional algebras, J. Algebra 38 (1976), 29-44
[16] Elliott, G. A.: The classification problem for amenable C∗-algebras, Proceedings of ICM
’94, Zurich, Switzerland, Birkhauser Verlag, Basel, Switzerland, 922-932
[17] Elliott, G. A.: On the classification of C∗-algebras of real rank zero, J. Reine Angew. Math.
443 (1993), 179-219
[18] Elliott, G. A.: Towards a theory of classification, preprint
[19] Elliott, G. A., and Evans, D. E.: The structure of irrational rotation C∗-algebras, Ann. of
Math. (2) 138 (1993), 477-501
[20] Elliott, G. A., and Gong, G.: On the classification of C∗-algebras of real rank zero. II. Ann.
of Math. (2) 144 (1996), 497-610
[21] Elliott, G. A., Gong, G., and Li, L.: Approximate divisibility of simple inductive limit
C∗-algebras, Contemp. Math. 228 (1998), 87-97
[22] Elliott, G. A., Gong, G., and Li, L.: On the classification of simple inductive limit C∗-
algebras, II: The isomorphism theorem, to appear in Invent. Math.
[23] Elliott. G. A.: The classification of separable simple C∗-algebras which are inductive limits
of continuous-trace C∗-algebras with spectrum homeomorphic to [0, 1], preprint
[24] Glimm, J.: On a certain class of operator algebras, Trans. Amer. Math. Soc. 95 (1960),
318-340
[25] Gong, G.: On the classification of simple inductive limit C∗-algebras I. The reduction
theorem, Doc. Math. 7 (2002), 255-461
[26] Gong, G., Jiang, X., and Su, H.: Obstructions to Z-stability for unital simple C∗-algebras,
Canad. Math. Bull. 43 (2000), 418-426
[27] Haagerup, U.: All nuclear C∗-algebras are amenable, Invent. Math. 74 (1983), 305-319
[28] Haagerup, U.: Connes’ bicentralizer problem and uniqueness of the injective factor of type
III1, Acta. Math. 158 (1987), 95-148
[29] Haagerup, U.: Quasi-traces on exact C∗-algebras are traces, preprint (1991)
[30] Jiang, X. and Su, H.: On a simple unital projectionless C∗-algebra, Amer. J. Math. 121
(1999), 359-413
[31] Kirchberg, E.: The classification of Purely Infinite C∗-algebras using Kasparov’s Theory,
to appear in Fields Inst. Comm.
[32] Kirchberg, E.: C∗-nuclearity implies CPAP, Math. Nachr. 76 (1977), 203-212
[33] Kirchberg, E., and Phillips, N. C.: Embedding of exact C∗-algebras in the Cuntz algebra
O2, J. Reine Angew. Math. 525 (2000), 17-53
[34] Kirchberg, E., and Winter, W.: Covering dimension and quasidiagonality, Intern. J. Math.
15 (2004), 63-85
[35] Lance, E. C.: Hilbert C∗-modules, London Mathematical Society Lecture Note Series 210,
Cambridge University Press, 1995
[36] Lin, H.: The tracial topological rank of C∗-algebras, Proc. London Math. Soc. (3) 83 (2001),
199-234
[37] Lin, H.: Classification of simple C∗-algebras of tracial topological rank zero, Duke Math. J.
125 (2004), 91-119
[38] Lin, H.: Simple nuclear C∗-algebras of tracial topological rank one, preprint
[39] Manuilov, V. M., and Troitsky, E. V.: Hilbert C∗-modules, Translations of Mathematical
Monographs 226, American Mathematical Society, 2001
[40] Niu, Z.: A classification of certain tracially approximately subhomogeneous C∗-algebras,
Ph.D. thesis, University of Toronto, 2005
[41] Perera, F.: The structure of positive elements for C∗-algebras of real rank zero, Int. J. Math.
8 (1997), 383-405
[42] Perera, F., and Toms, A. S.: Recasting the Elliott conjecture, to appear in Math. Ann.,
arXiv preprint math.OA/0601478 (2006)
[43] Phillips, N. C.: A classification theorem for nuclear purely infinite simple C∗-algebras, Doc.
Math. 5 (2000), 49-114
[44] Phillips, N. C.: Every simple higher dimensional noncommutative torus is an AT-algebra,
arXiv preprint math.OA/0609783 (2006)
[45] Rørdam, M.: Classification of Nuclear C∗-Algebras, Encyclopaedia of Mathematical Sci-
ences 126, Springer-Verlag, Berlin, Heidelberg 2002
[46] Rørdam, M.: A simple C∗-algebra with a finite and an infinite projection, Acta Math. 191
(2003), 109-142
[47] Rørdam, M.: The stable and the real rank of Z-absorbing C∗-algebras, Int. J. Math. 15
(2004), 1065-1084
[48] Rørdam, M.: Stability of C∗-algebras is not a stable property, Doc. Math. 2 (1997), 375-386
[49] Rørdam, M.: On the structure of simple C∗-algebras tensored with a UHF-algebra, II, J.
Funct. Anal. 107 (1992), 255-269
[50] Thomsen, K.: Inductive limits of interval algebras: unitary orbits of positive elements,
Math. Ann. 293 (1992), 47-63
[51] Toms, A. S.: On the independence of K-theory and stable rank for simple C∗-algebras, J.
Reine Angew. Math. 578 (2005), 185-199
[52] Toms, A. S.: On the classification problem for nuclear C∗-algebras, to appear in Ann. of
Math. (2), arXiv preprint math.OA/0509103 (2005)
[53] Toms, A. S.: An infinite family of non-isomorphic C∗-algebras with identical K-theory,
arXiv preprint math.OA/0609214 (2006)
[54] Toms, A. S.: Cancellation does not imply stable rank one, Bull. London Math. Soc. 38
(2006), 1005-1008
[55] Toms, A. S.: Stability in the Cuntz semigroup of a commutative C∗-algebra, to appear in
Proc. London Math. Soc., arXiv preprint math.OA/0607099 (2006)
[56] Toms, A. S. and Winter, W.: Strongly self-absorbing C∗-algebras, to appear in Trans. Amer.
Math. Soc., arXiv preprint math.OA/0502211 (2005)
[57] Toms, A. S. and Winter, W.: Z-stable ASH algebras, to appear in Canad. J. Math., arXiv
preprint math.OA/0508218 (2005)
[58] Toms. A. S. and Winter, W.: The Elliott conjecture for Villadsen algebras of the first type,
arXiv preprint math.OA/0611059 (2006)
[59] Villadsen, J.: Simple C∗-algebras with perforation, J. Funct. Anal. 154 (1998), 110-116
[60] Villadsen, J.: On the stable rank of simple C∗-algebras, J. Amer. Math. Soc. 12 (1999),
1091-1102
[61] Winter, W.: On topologically finite-dimensional simple C∗-algebras, Math. Ann. 332 (2005),
843-878
[62] Winter, W.: Simple C∗-algebras with locally finite decomposition rank, arXiv preprint
math.OA/0602617 (2006)
Department of Mathematics, University of Toronto, Toronto, Ontario, Canada,
M5S 2E4
E-mail address: [email protected]
Department of Mathematics and Statistics, York University, 4700 Keele St., Toronto,
Ontario, Canada, M3J 1P3
E-mail address: [email protected]
0704.1804 | Equation of state of atomic systems beyond s-wave determined by the
lowest order constrained variational method: Large scattering length limit | Equation of state of atomic systems beyond s-wave determined by the lowest order
constrained variational method: Large scattering length limit
Ryan M. Kalas(1) and D. Blume(1,2)
(1)Department of Physics and Astronomy, Washington State University, Pullman, Washington 99164-2814
(2)INFM-BEC, Dipartimento di Fisica, Università di Trento, I-38050 Povo, Italy
Dilute Fermi systems with large s-wave scattering length as exhibit universal properties if the
interparticle spacing ro greatly exceeds the range of the underlying two-body interaction potential.
In this regime, ro is the only relevant length scale and observables such as the energy per particle
depend only on ro (or, equivalently, the energy EFG of the free Fermi gas). This paper investigates
Bose and Fermi systems with non-vanishing angular momentum l using the lowest order constrained
variational method. We focus on the regime where the generalized scattering length becomes large
and determine the relevant length scales. For Bose gases with large generalized scattering lengths,
we obtain simple expressions for the energy per particle in terms of a l-dependent length scale ξl,
which depends on the range of the underlying two-body potential and the average interparticle
spacing. We discuss possible implications for dilute two-component Fermi systems with finite l.
Furthermore, we determine the equation of state of liquid and gaseous bosonic helium.
I. INTRODUCTION
The experimental realization of dilute degenerate Bose
and Fermi gases has led to an explosion of activities in the
field of cold atom gases. A particularly intriguing feature
of atomic Bose and Fermi gases is that their interaction
strengths can be tuned experimentally through the ap-
plication of an external magnetic field in the vicinity of
a Feshbach resonance [1, 2]. This external knob allows
dilute systems with essentially any interaction strength,
including infinitely strongly attractive and repulsive in-
teractions, to be realized. Feshbach resonances have been
experimentally observed for s-, p- and d-wave interacting
gases [3, 4, 5, 6, 7] and have been predicted to exist also
for higher partial waves.
A Feshbach resonance arises due to the coupling of two
Born-Oppenheimer potential curves through a
hyperfine Hamiltonian, and requires, in general, a multi-
channel description. For s-wave interacting systems, Fes-
hbach resonances can be classified as broad or narrow [8].
Whether a resonance is broad or narrow depends on
whether the energy width of the resonance is large or
small compared to the characteristic energy scale, such
as the Fermi energy or the harmonic oscillator energy,
of the system. In contrast to s-wave resonances, higher
partial wave resonances are necessarily narrow due to the
presence of the angular momentum barrier [9]. This pa-
per uses an effective single channel description to investi-
gate the behaviors of strongly-interacting Bose and Fermi
systems with different orbital angular momenta.
In dilute homogeneous Bose and Fermi gases with large
s-wave scattering length as, a regime has been identi-
fied in which the energy per particle takes on a universal
value which is set by a single length scale, the average
interparticle spacing ro [10, 11, 12]. In this so-called uni-
tary regime, the length scales of the s-wave interacting
system separate according to |as| ≫ ro ≫ R, where R
denotes the range of the two-body potential. The en-
ergy per particle EB,0/N (the subscripts “B” and “0”
stand respectively for “boson” and “s-wave interacting”)
for a homogeneous one-component gas of bosons with
mass m in the unitary regime has been calculated to be
EB,0/N ≈ 13.3 ℏ²nB^{2/3}/m using the lowest order constrained variational (LOCV) method [12]. The energy
EB,0/N at unitarity is thus independent of as and R,
and depends on the single length scale ro through the
boson number density nB, ro = (4πnB/3)^{−1/3}. However, Bose gases in the large scattering length limit are
expected to be unstable due to three-body recombina-
tion [13, 14, 15, 16].
On the other hand, the Fermi pressure prevents the
collapse of two-component Fermi gases with equal masses
and equal number of “spin-up” and “spin-down” fermions
with large interspecies s-wave scattering length [10, 11,
17, 18]. At unitarity, the energy per particle is given by
EF,0/N ≈ 0.42 EFG, where EFG = (3/10)(ℏ²kF²/m) denotes the energy per particle of the non-interacting Fermi
gas [19, 20, 21, 22]. The Fermi wave vector kF is related
to the number density of the Fermi gas by nF = kF³/(3π²),
which implies that EF,0/N depends on ro but is indepen-
dent of as and R. We note that the inequality |as| ≫ ro
is equivalent to 1/(kF |as|) ≪ 1.
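As a small numerical illustration of these free-gas relations (a sketch, not code from the paper; units ℏ = m = 1 and an arbitrary density are assumed):

```python
import math

def fermi_gas_quantities(n_F):
    """For a two-component Fermi gas of total number density n_F
    (units hbar = m = 1), return the Fermi wave vector k_F, the free-gas
    energy per particle E_FG = (3/10) k_F^2, and the unitary estimate
    0.42 * E_FG quoted in the text."""
    k_F = (3.0 * math.pi**2 * n_F) ** (1.0 / 3.0)  # inverts n_F = k_F^3 / (3 pi^2)
    E_FG = 0.3 * k_F**2
    return k_F, E_FG, 0.42 * E_FG

# illustrative density (arbitrary choice); for a given a_s one would then
# check the unitarity condition 1/(k_F |a_s|) << 1
k_F, E_FG, E_unitary = fermi_gas_quantities(1.0)
```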
This paper investigates Bose and Fermi systems with
large generalized scattering lengths using the LOCV
method. For p- and d-wave interacting Bose systems,
we define the unitary regime [23] through the inequali-
ties |al(Erel)| ≫ ξl ≫ R, where ξl denotes an l-dependent length scale given by the geometric combination of ro and R, i.e., ξl = ro^{1−l/4}R^{l/4}, and Erel the relative scattering energy. The generalized energy-dependent scattering
length al(Erel) [24, 25, 26] characterizes the scattering
strength (see below). We find that the energy of p-wave
interacting two-component Bose gases and d-wave inter-
acting one- and two-component Bose gases at unitarity is
determined by the combined length ξl. While Bose gases
with higher angular momentum in the unitary regime are
of theoretical interest, they are, like their s-wave cousin,
expected to be unstable. We comment that the energet-
ics of two-component Fermi gases with large generalized
scattering length may depend on the same length scales.
Furthermore, we consider s-wave interacting Bose sys-
tems over a wide range of densities. Motivated by two re-
cent studies by Gao [27, 28], we determine the energy per
particle EB,0/N of the Bose system characterized by two
atomic physics parameters, the s-wave scattering length
as and the van der Waals coefficient C6. Our results lead
to a phase diagram of liquid helium in the low-density
regime that differs from that proposed in Ref. [28].
Section II describes the systems under study and in-
troduces the LOCV method. Section III describes our
results for dilute s-wave interacting Bose and Fermi sys-
tems and for liquid helium. Section IV considers Bose
and Fermi systems interacting through l-wave (l > 0)
scattering. Finally, Section V concludes.
II. LOCV METHOD FOR BOSONS AND
FERMIONS
This section introduces the three-dimensional Bose
and Fermi systems under study and reviews the LOCV
method [29, 30, 31, 32]. The idea of the LOCV method
is to explicitly treat two-body correlations, but to ne-
glect three- and higher-body correlations. This allows
the many-body problem to be reduced to solving an
effective two-body equation with properly chosen con-
straints. Imposing these constraints makes the method
non-variational, i.e., the resulting energy does not place
an upper bound on the many-body energy. The LOCV
method is expected to capture some of the key physics of
dilute Bose and Fermi systems.
The Hamiltonian HB for a homogeneous system con-
sisting of identical mass m bosons is given by
HB = −(ℏ²/2m) Σ_i ∇_i² + Σ_{i<j} v(rij), (1)
where the spherically symmetric interaction potential v
depends on the relative distance rij , rij = |ri−rj|. Here,
ri denotes the position vector of the ith boson. The
Hamiltonian HF for a two-component Fermi system with
equal masses and identical spin population is given by
HF = −(ℏ²/2m) Σ_i ∇_i² − (ℏ²/2m) Σ_{i′} ∇_{i′}² + Σ_{i,i′} v(rii′), (2)
where the unprimed subscripts label spin-up and the
primed subscripts spin-down fermions. Throughout, we
take like fermions to be non-interacting. Our primary
interest in this paper is in the description of systems
for which many-body observables are insensitive to the
short-range behavior of the atom-atom potential v(r).
This motivates us to consider two simple model poten-
tials: an attractive square well potential vsw with depth
Vo (Vo ≥ 0),
vsw(r) = { −Vo for r < R, 0 for r > R }; (3)
and an attractive van der Waals potential vvdw with hard-
core rc,
vvdw(r) = { ∞ for r < rc, −C6/r⁶ for r > rc }. (4)
In all applications, we choose the hardcore rc so that the
inequality rc ≪ β6, where β6 = (mC6/ℏ²)^{1/4}, is satisfied.
The natural length scale of the square well potential is
given by the range R and that of the van der Waals po-
tential by the van der Waals length β6. The solutions to
the two-body Schrödinger equation for vsw are given in
terms of spherical Bessel and Neumann functions (impos-
ing the proper continuity conditions of the wave function
and its derivative at those r values where the poten-
tial exhibits a discontinuity), and those for vvdw in terms
of convergent infinite series of spherical Bessel and Neu-
mann functions [33].
The interaction strength of the short-range square well
potential can be characterized by the generalized energy-
dependent scattering lengths al(k),
al(k) = sgn[− tan δl(k)] ( |tan δl(k)| / k^{2l+1} )^{1/(2l+1)}, (5)
where δl(k) denotes the phase shift of the lth partial
wave calculated at the relative scattering energy Erel, k² = mErel/ℏ². This definition ensures that al(k) ap-
proaches a constant as k → 0 [34, 35]. For the van der
Waals potential vvdw, the threshold behavior changes for
higher partial waves and the definition of al(k) has to be
modified accordingly [34, 35]. In general, for a poten-
tial that falls off as −r−n at large interparticle distances,
al(k) is defined by Eq. (5) if 2l < n− 3 and by
al(k) = sgn[− tan δl(k)] ( |tan δl(k)| / k^{n−2} )^{1/(n−2)} (6)
if 2l > n− 3. For our van der Waals potential, n is equal
to 6 and al(k) is given by Eq. (5) for l ≤ 1 and by Eq. (6)
for l ≥ 2. The zero-energy generalized scattering lengths
al can now be defined readily through
al = lim_{k→0} al(k). (7)
We note that a new two-body l-wave bound state ap-
pears at threshold when |al| → ∞. The unitary regime
for higher partial waves discussed in Sec. IV is thus, as
in the s-wave case, closely related to the physics of ex-
tremely weakly-bound atom-pairs. To uncover the key
behaviors at unitarity, we assume in the following that
the many-body system under study is interacting through
a single partial wave l. While this may not be exactly
realized in an experiment, this situation may be approx-
imated by utilizing Feshbach resonances.
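To make Eqs. (5) and (7) concrete, the sketch below (an illustration, not code from the paper) evaluates the l = 0 case for the attractive square well of Eq. (3), in units ℏ = m = 1, where the standard analytic phase shift is δ0(k) = −kR + arctan[(k/κ) tan(κR)] with κ = sqrt(k² + V0):

```python
import math

def delta0(k, V0, R=1.0):
    """s-wave phase shift of an attractive square well (depth V0, range R);
    units hbar = m = 1, so the relative equation is -u'' + v u = k^2 u."""
    kappa = math.sqrt(k * k + V0)
    return -k * R + math.atan((k / kappa) * math.tan(kappa * R))

def a0(k, V0, R=1.0):
    """Energy-dependent s-wave scattering length, Eq. (5) with l = 0:
    a0(k) = sgn[-tan d0(k)] * |tan d0(k)| / k."""
    t = math.tan(delta0(k, V0, R))
    return math.copysign(abs(t) / k, -t)

V0 = 1.0  # illustrative depth; sqrt(V0) < pi/2, so no two-body bound state
a_zero = 1.0 - math.tan(math.sqrt(V0)) / math.sqrt(V0)  # analytic k -> 0 limit
```

For V0 = 1 the k → 0 limit reproduces the textbook zero-energy value a0 = R − tan(κ0R)/κ0 ≈ −0.557 (κ0 = sqrt(V0)), while a deeper well such as V0 = 3 supports one bound state and gives a0 > 0, consistent with the remark that |al| → ∞ at threshold.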
We now outline how the energy per particle EB,l/N
of a one-component Bose system with l-wave interac-
tions [36] is calculated by the LOCV method [29, 30,
31, 32]. The boson wave function ΨB is taken to be a
product of pair functions fl,
ΨB(r1, . . . , rN) = Π_{i<j} fl(rij), (8)
and the energy expectation value of HB, Eq. (1), is calcu-
lated using ΨB. If terms depending on the coordinates of
three or more different particles are neglected, the result-
ing energy is given by the two-body term in the cluster
expansion,
EB,l/N = (nB/2) ∫ fl(r) [ −(ℏ²/m)∇² + v(r) ] fl(r) d³r. (9)
The idea of the LOCV method is now to introduce a heal-
ing distance d beyond which the pair correlation function
fl is constant,
fl(r > d) = 1. (10)
To ensure that the derivative of fl is continuous at r = d,
an additional constraint is introduced,
f ′l (r = d) = 0. (11)
Introducing a constant average field λl and varying with
respect to fl while using that fl is constant for r > d,
gives the Schrödinger-like two-body equation for r < d,
[ −(ℏ²/m) d²/dr² + ℏ²l(l+1)/(mr²) + v(r) ] (r fl(r)) = λl r fl(r). (12)
Finally, the condition
nB ∫_{r<d} fl²(r) d³r = 1 (13)
enforces that the average number of particles within d
equals 1. Using Eqs. (9), (10) and (12), the energy per
particle becomes,
EB,l/N = λl/2 + (nB/2) ∫_{r>d} v(r) d³r. (14)
The second term on the right hand side of Eq. (14) is
identically zero for the square well potential vsw but con-
tributes a so-called tail or mean-field energy for the van
der Waals potential vvdw [27, 28]. We determine the
three unknown nB, λl and d by simultaneously solving
Eqs. (12) and (13) subject to the boundary condition
given by Eq. (11). Note that nB and d depend, just as
fl and λl, on the angular momentum; the subscript has
been dropped, however, for notational convenience.
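The procedure above amounts to: fix a healing distance d, solve the two-body equation (12) for the eigenvalue λl subject to the boundary conditions (10) and (11), and read the density off the normalization (13). A minimal pure-Python sketch for the l = 0 square well follows; the parameter choices (ℏ = m = 1, R = V0 = 1, d = 5) are illustrative and not taken from the paper — in practice one would scan d to trace out EB,0 as a function of nB:

```python
import math

R, V0 = 1.0, 1.0   # square-well range and depth (no two-body bound state)
D = 5.0            # healing distance d (illustrative); n_B follows from Eq. (13)

def v(r):
    return -V0 if r < R else 0.0

def integrate(lam, n_steps=2000):
    """RK4-integrate u'' = (v(r) - lam) u for u = r*f_0 on (0, D],
    starting from the regular solution u ~ r; returns u(D), u'(D)
    and the trapezoidal value of int_0^D u^2 dr."""
    h = D / n_steps
    def acc(rr, uu):
        return (v(rr) - lam) * uu
    r, u, up, I = 1e-9, 1e-9, 1.0, 0.0
    for _ in range(n_steps):
        k1u, k1p = up, acc(r, u)
        k2u, k2p = up + 0.5*h*k1p, acc(r + 0.5*h, u + 0.5*h*k1u)
        k3u, k3p = up + 0.5*h*k2p, acc(r + 0.5*h, u + 0.5*h*k2u)
        k4u, k4p = up + h*k3p, acc(r + h, u + h*k3u)
        u_new = u + (h/6.0)*(k1u + 2*k2u + 2*k3u + k4u)
        up_new = up + (h/6.0)*(k1p + 2*k2p + 2*k3p + k4p)
        I += 0.5*h*(u*u + u_new*u_new)
        r, u, up = r + h, u_new, up_new
    return u, up, I

def mismatch(lam):
    """Eq. (11), f_0'(D) = 0, is equivalent to u'(D)*D - u(D) = 0."""
    uD, upD, _ = integrate(lam)
    return upD*D - uD

# bracket the lowest root of the boundary condition by scanning, then bisect
lo = hi = None
g_prev = mismatch(-2.0)
for i in range(1, 101):
    lam = -2.0 + 0.02*i
    g = mismatch(lam)
    if g_prev * g < 0:
        lo, hi = lam - 0.02, lam
        break
    g_prev = g
g_lo = mismatch(lo)
for _ in range(50):
    mid = 0.5*(lo + hi)
    g_mid = mismatch(mid)
    if g_lo * g_mid <= 0:
        hi = mid
    else:
        lo, g_lo = mid, g_mid
lam_star = 0.5*(lo + hi)

# Eq. (13) with f normalized so f(D) = 1:  n_B * 4*pi*int u^2 dr / f(D)^2 = 1
uD, _, I = integrate(lam_star)
fD = uD / D
n_B = fD**2 / (4.0*math.pi*I)
energy_per_particle = lam_star / 2.0  # Eq. (14); tail term vanishes for v_sw
```

With these (arbitrary) parameters the square well has negative scattering length, so one expects a small negative λl and hence EB,0/N < 0 at the low density that Eq. (13) assigns to d = 5.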
In addition to one-component Bose systems, Sec. IV
considers two-component Bose systems, characterized by
l-wave interspecies and vanishing intraspecies interac-
tions. The Hamiltonian for the two-component Bose sys-
tem is given by Eq. (2), with the sum of the two-body
interactions restricted to unlike bosons. Correspondingly,
the product wave function is written as a product of
pair functions, including only correlations between unlike
bosons. The LOCV equations are then given by Eqs. (10)
through (13) with nB in Eq. (13) replaced by nB/2.
Next, we discuss how to determine the energy EF,l/N
per particle for a two-component Fermi system within
the LOCV method [29, 30, 31, 32]. The wavefunction is
taken to be
ΨF(r1, . . . , r1′, . . .) = ΦFG Π_{i,j′} fl(rij′), (15)
where ΦFG denotes the ground state wavefunction of the
non-interacting Fermi gas. The product of pair functions
fl accounts for the correlations between unlike fermions.
In accord with our assumption that like fermions are non-
interacting, Eq. (15) treats like fermion pairs as uncor-
related. Neglecting exchange effects, the derivation of
the LOCV equations parallels that outlined above for the
bosons. The boundary conditions, given by Eqs. (10) and
(11), and the Schrödinger-like differential equation for λl,
Eq. (12), are unchanged. The “normalization condition,”
however, becomes
$(n_F/2) \int_0^d f_l^2(r)\, d^3r = 1$, (16)
where the left-hand side is the number of fermion pairs
within d. The fermion energy per particle is then the sum
of the one-particle contribution from the non-interacting
Fermi gas and the pair correlation energy λl [20],
$E_{F,l}/N = E_{FG} + \lambda_l/2$. (17)
This equation excludes the contribution from the tail of
the potential, i.e., the term analogous to the second term
on the right hand side of Eq. (14), since this term is neg-
ligible for the fermion densities considered in this paper.
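Eq. (17) adds the pair-correlation energy λl/2 to the energy per particle EFG of the non-interacting Fermi gas. As a quick numerical cross-check of the free-Fermi-gas coefficient used later in Eq. (22), the following sketch (in units ℏ²/m = 1, an illustrative choice, not from the paper) verifies that EFG = (3/5)ℏ²kF²/(2m) with kF = (3π²nF)^{1/3} reduces to (3/10)(3π²)^{2/3} ℏ²nF^{2/3}/m:

```python
import math

# Free-Fermi-gas energy per particle, in units where hbar^2/m = 1
# (illustrative choice, not taken from the paper).
def e_fg_from_kf(n_f):
    k_f = (3 * math.pi**2 * n_f) ** (1 / 3)   # Fermi wave vector
    return (3 / 5) * k_f**2 / 2               # (3/5) * hbar^2 k_F^2 / (2m)

def e_fg_from_density(n_f):
    # Equivalent closed form: (3/10) (3 pi^2)^(2/3) n_F^(2/3)
    return 0.3 * (3 * math.pi**2) ** (2 / 3) * n_f ** (2 / 3)

n_f = 1e-3
assert math.isclose(e_fg_from_kf(n_f), e_fg_from_density(n_f), rel_tol=1e-12)
```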
The LOCV solutions for fl, λl and d for the ho-
mogeneous one-component Bose system and the two-
component Fermi system are formally identical if the bo-
son density is chosen to equal half the fermion density,
i.e., if nB = nF /2. This relation can be understood by
realizing that any given fermion (e.g., a spin-up particle)
interacts with only half of the total number of fermions
(e.g., all the spin-down fermions). Consequently, the two-
component Fermi system appears twice as dense as the
one-component Bose system. The fact that the LOCV
solutions for bosons can be converted to LOCV solutions
for fermions suggests that some physics of the bosonic
system can be understood in terms of the fermionic sys-
tem and vice versa. In fact, it has been shown previ-
ously [20] that the LOCV energy for the first excited
gas-like state of s-wave interacting fermions at unitarity
can be derived from the LOCV energy of the energet-
ically lowest-lying gas-like branch of s-wave interacting
bosons [12]. Here, we extend this analysis and show that
the ground state energy of the Fermi gas at unitarity can
FIG. 1: Energy per particle EB,0/N as a function of the
density nB , both plotted as dimensionless quantities, for a
one-component Bose system interacting through the van der
Waals potential with s-wave scattering length as = 16.9β6.
The dotted line shows the gas branch and the dashed line the
liquid branch. The minimum of the liquid branch is discussed
in reference to liquid 4He in the text.
be derived from the energetically highest-lying liquid-like
branch of the Bose system. Furthermore, we extend this
analysis to higher angular momentum scattering.
III. s-WAVE INTERACTING BOSE AND
FERMI SYSTEMS
Figure 1 shows the energy per particle EB,0/N ,
Eq. (14), obtained by solving the LOCV equations for
a one-component Bose system interacting through the
van der Waals potential with s-wave scattering length
as = 16.9β6. The dotted line in Fig. 1 has positive energy
and increases with increasing density; it describes the
energetically lowest-lying “gas branch” for the Bose sys-
tem with as = 16.9β6 and corresponds to the metastable
gaseous condensate studied experimentally. The dashed
line in Fig. 1 has negative energy at small densities, de-
creases with increasing density, and then exhibits a mini-
mum; this dashed line describes the energetically highest-
lying “liquid branch” for a Bose system with as = 16.9β6.
Within the LOCV framework, these two branches arise
because the Schrödinger-like equation, Eq. (12), permits
for a given interaction potential solutions f0 with differ-
ing number of nodes, which in turn give rise to a host of
liquid and gas branches [27, 28]. Throughout this work
we only consider the energetically highest-lying liquid
branch with n nodes and the energetically lowest-lying
gas branch with n + 1 nodes. To obtain Fig. 1, we con-
sider a class of two-body potentials with fixed as/β6, and
decrease the value of the ratio rc/β6 till EB,0/N , Eq. (14),
no longer changes over the density range of interest, i.e.,
the number of nodes n of the energetically highest-lying
liquid branch is increased till convergence is reached.
FIG. 2: Energy per particle EB,0/N as a function of the den-
sity nB for a one-component Bose system interacting through
the van der Waals potential with s-wave scattering lengths
as = 16.9β6 (open circles) and as = 169β6 (filled circles).
To guide the eye, dashed and dotted lines connect the data
points of the liquid and gas branches, respectively. The liq-
uid branches go to Edimer/2 as the density goes to zero. The
solid lines show EB,0/N at unitarity; see text for discussion.
Compared to Fig. 1, the energy and density scales are greatly
enlarged.
In Fig. 1, the two-body van der Waals potential is cho-
sen so that the scattering length of as = 16.9β6 coin-
cides with that of the 4He pair potential [37]. The liquid
branch in Fig. 1 can hence be applied to liquid 4He, and
has previously been considered in Refs. [27, 28]. The minimum
of the liquid branch at a density of nB = 2.83 β6^{-3},
or 1.82 × 10^{22} cm^{-3}, agrees quite well with the experimental
value of 2.18 × 10^{22} cm^{-3} [38]. The corresponding
energy per particle of −6.56 K deviates by 8.5% from
the experimental value of −7.17 K [38]. This shows that
the LOCV framework provides a fair description of the
strongly interacting liquid 4He system, which is charac-
terized by interparticle spacings comparable to the range
of the potential. This is somewhat remarkable consid-
ering that the LOCV method includes only pair corre-
lations and that the van der Waals potential used here
contains only two parameters.
Open circles connected by a dashed line in Fig. 2 show
the liquid branch for as = 16.9β6 in the small density
region. As the density goes to zero, the energy per par-
ticle EB,0/N does not terminate at zero but, instead,
goes to Edimer/2, where Edimer denotes the energy of
the most weakly-bound s-wave molecule of vvdw. In this
small density limit, the liquid branch describes a gas of
weakly-bound molecules, in which the interparticle spac-
ing between the molecules greatly exceeds the size of the
molecules, and Edimer is to a very good approximation
given by −~2/(ma2s). As seen in Fig. 2, we find solu-
tions in the whole density range considered. In contrast
to our findings, Ref. [28] reports that the LOCV solu-
tions of the liquid branch disappear at densities smaller
than a scattering length dependent critical density, i.e.,
FIG. 3: Scaled interparticle spacing ro/β6 as a function of the
scaled density nB β6^3 for the gas branch of a one-component
Bose system interacting through the van der Waals potential
with as = 169β6. The horizontal lines show the scaled
s-wave scattering length as = 169β6 and the range of the van
der Waals potential, which is one in scaled units (almost indistinguishable
from the x-axis). This graph shows that the
unitary inequalities as ≫ ro ≫ β6 hold for nB larger than
about 10^{-5} β6^{-3}.
at a critical density of 8.68 × 10^{-7} β6^{-3} for as = 16.9β6.
Thus we are not able to reproduce the liquid-gas phase
diagram proposed in Fig. 2 of Ref. [28], which depends
on this termination of the liquid branch. We note that
the liquid branch is, as indicated by its imaginary speed
of sound, dynamically unstable at sufficiently small den-
sities. The liquid of weakly-bound bosonic molecules
discussed here can, as we show below, be related to
weakly-bound molecules on the BEC side of the BEC-
BCS crossover curve for two-component Fermi gases.
We now discuss the gas branch in more detail. Open
and filled circles connected by dotted lines in Fig. 2 show
the energy per particle for as = 16.9β6 and 169β6, respec-
tively. These curves can be applied, e.g., to 85Rb, whose
scattering length can be tuned by means of a Feshbach
resonance and which has a β6 value of 164abohr, where
abohr denotes the Bohr radius. For this system, a scat-
tering length of as = 16.9β6 corresponds to 2770abohr,
a comparatively large value that can be realized experi-
mentally in 85Rb gases. As a point of reference, a density
of 10^{-5} β6^{-3} corresponds to a density of 1.53 × 10^{13} cm^{-3}
for 85Rb.
The solid curve with positive energy in Fig. 2 shows
the energy per particle EB,0/N at unitarity, EB,0/N ≈
13.3 ℏ²nB^{2/3}/m [12]. As seen in Fig. 2, this unitary limit
is approached by the energy per particle for the Bose
gas with as = 169β6 (filled circles connected by a dotted
line). To illustrate this point, Fig. 3 shows the scaled
average interparticle spacing ro/β6 as a function of the
scaled density nB β6^3 for as = 169β6. This plot indicates
that the unitary requirement, as ≫ ro ≫ R, is met for
values of nB β6^3 larger than about 10^{-5}. Similarly, we find
FIG. 4: Scaled energy per particle (EF,0/N)/EFG as a func-
tion of 1/(kF as) for a two-component s-wave Fermi gas inter-
acting through the square well potential for nF = 10^{-6} R^{-3}.
The combined dashed and dash-dotted curve corresponds to
the BEC-BCS crossover curve and the dotted curve corre-
sponds to the first excited state of the Fermi gas. The dashed
and dotted linestyles are chosen to emphasize the connection
to the gas and liquid branches of the Bose system in Figs. 2
and 3 (see text for more details).
that the family of liquid curves converges to EB,0/N ≈
−2.46 ℏ²nB^{2/3}/m (see Sec. IV for details), plotted as a solid
line in Fig. 2, when the inequalities as ≫ ro ≫ β6 are
fullfilled. We note that the unitarity curve with negative
energy is also approached, from above, for systems with
large negative scattering lengths (not shown in Fig. 2).
Aside from the proportionality constant, the power law
relation for the liquid and gas branches at unitarity is
the same.
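The gas-branch coefficient 13.3 quoted above can be reproduced analytically. Assuming the zero-range unitarity boundary condition of Ref. [12], the s-wave pair function is f0(r) ∝ cos(kr)/r; the LOCV matching conditions f0(d) = 1, f0'(d) = 0 reduce to tan x = −1/x with x = kd, and the normalization nB ∫ f0² d³r = 1 over the healing sphere fixes d. The sketch below is a rederivation (not code from the paper) and recovers EB,0/N ≈ 13.3 ℏ²nB^{2/3}/m:

```python
import math

# Solve tan(x) = -1/x on (pi/2, pi) by bisection; x = k*d.
def g(x):
    return math.tan(x) + 1.0 / x

lo, hi = 1.6, 3.1            # g(lo) < 0, g(hi) > 0, tan continuous here
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)

# Normalization: with f(r) = cos(kr) * d / (r cos(x)),
# n_B * 4*pi * Int_0^d f^2 r^2 dr = 1  =>  n_B d^3 = 1 / (4*pi*I)
I = (0.5 + math.sin(2 * x) / (4 * x)) / math.cos(x) ** 2

# E/N = lambda/2 = hbar^2 k^2 / (2m), expressed in units hbar^2 n_B^(2/3)/m
coef = (x**2 / 2) * (4 * math.pi * I) ** (2 / 3)
print(round(coef, 1))        # prints 13.3
```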
In addition to a Bose system interacting through the
van der Waals potential, we consider a Bose system in-
teracting through the square well potential with range
R. For a given scattering length as and density nB, the
energy per particle EB,0/N for these two two-body po-
tentials is essentially identical for the densities shown in
Fig. 2. This agreement emphasizes that the details of
the two-body potential become negligible at low density,
and in particular, that the behavior of the Bose gas in
the unitary limit is governed by a single length scale, the
average interparticle spacing ro.
As discussed in Sec. II, the formal parallels between the
LOCV method applied to bosons and fermions allows the
energy per particle EF,0/N for a two-component Fermi
gas, Eq. (17), to be obtained straightforwardly from the
energy per particle EB,0/N of the Bose system. Fig-
ure 4 shows the dimensionless energy (EF,0/N)/EFG as
a function of the dimensionless quantity 1/(kFas) for the
square well potential for nF = 10^{-6} R^{-3}. We find essentially
identical results for the van der Waals potential.
The crossover curve shown in Fig. 4 describes any dilute
Fermi gas for which the range R of the two-body poten-
tial is very small compared to the average interparticle
spacing ro. In converting the energies for the Bose sys-
tem to those for the Fermi system, the gas branches of the
Bose system (dotted lines in Figs. 2 and 3) “turn into”
the excited state of the Fermi gas (dotted line in Fig. 4);
the liquid branches of the Bose system with positive as
(dashed lines in Figs. 2 and 3) “turn into” the part of
the BEC-BCS crossover curve with positive as (dashed
line in Fig. 4); and the liquid branches of the Bose system
with negative as (not shown in Figs. 2 and 3) “turn into”
the part of the BEC-BCS crossover curve with negative
as (dash-dotted line in Fig. 4).
To emphasize the connection between the Bose and
Fermi systems further, let us consider the BEC side of
the crossover curve. If 1/(kF as) ≳ 1, the fermion energy
per particle EF,0/N is approximately given by Edimer/2,
which indicates that the Fermi gas forms a molecular
Bose gas. Similarly, the liquid branch of the Bose sys-
tem with positive scattering length is made up of bosonic
molecules as the density goes to zero. The formal analogy
between the Bose and Fermi LOCV solutions also allows
the energy per particle EF,0/N at unitarity, i.e., in the
1/(kF |as|) → 0 limit, to be calculated from the energies
for large as of the gas and liquid branches of the Bose
system (solid lines in Fig. 2). For the excited state of the
Fermi gas we find EF,0/N ≈ 3.92EFG, and for the low-
est gas state we find EF,0/N ≈ 0.46EFG. These results
agree with the LOCV calculations of Ref. [20], which use
an attractive cosh-potential and a δ-function potential.
The value of 0.46EFG is in good agreement with the en-
ergy of 0.42EFG obtained by fixed-node diffusion Monte
Carlo calculations [21, 22].
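The boson-to-fermion conversion used above can be checked directly. With EB,0/N = C ℏ²nB^{2/3}/m and nB = nF/2, Eq. (17) gives EF,0/(N EFG) = 1 + C/(2^{2/3} A), where A = (3/10)(3π²)^{2/3}. The sketch below (a cross-check, with the constants 13.3 and −2.46 taken from the text) reproduces the quoted 3.92 EFG and 0.46 EFG:

```python
import math

A = 0.3 * (3 * math.pi**2) ** (2 / 3)   # E_FG = A * hbar^2 n_F^(2/3) / m

def fermi_ratio(c_bose):
    """E_F0/(N E_FG) obtained from a Bose coefficient C via n_B = n_F/2."""
    return 1.0 + c_bose / (2 ** (2 / 3) * A)

assert abs(fermi_ratio(13.3) - 3.92) < 0.01    # first excited gas state
assert abs(fermi_ratio(-2.46) - 0.46) < 0.01   # BEC-BCS ground state
```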
IV. BOSE AND FERMI SYSTEMS BEYOND
s-WAVE AT UNITARITY
This section investigates the unitary regime of Bose
and Fermi systems interacting through higher angular
momentum resonances. These higher angular momen-
tum resonances are necessarily narrow [9], and we hence
expect the energy-dependence of the generalized scatter-
ing length al(k) to be particularly important in under-
standing the many-body physics of dilute atomic systems
beyond s-wave. In the following we focus on the strongly-
interacting limit. Figure 5 shows 1/al(k) as a function of
the relative scattering energy Erel for the square-well po-
tential with infinite zero-energy scattering length al for
three different angular momenta, l = 0 (solid line), l = 1
(dashed line), and l = 2 (dotted line). Figure 5 shows
that the energy-dependence of al(k) increases with in-
creasing l.
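The energy dependence of the scattering length can be illustrated for l = 0 with the textbook square-well phase shift; the l > 0 cases of Fig. 5 require spherical Bessel functions and are omitted here. The units ℏ²/m = 1, R = 1 and the well depth are illustrative assumptions: with the depth tuned so that the zero-energy scattering length diverges, 1/a0(Erel) = −k cot δ0(k) departs from zero roughly as −k²R/2 (effective-range behavior):

```python
import math

R = 1.0                        # well range (scaled units)
V0 = (math.pi / (2 * R)) ** 2  # depth tuned so 1/a_0 = 0 at zero energy
                               # (interior wave number k'_0 R = pi/2)

def inv_a0(e_rel):
    """1/a_0(k) = -k cot(delta_0) for the square well; e_rel = hbar^2 k^2 / m."""
    k = math.sqrt(e_rel)
    kp = math.sqrt(k * k + V0)                            # interior wave number
    delta0 = math.atan((k / kp) * math.tan(kp * R)) - k * R
    return -k / math.tan(delta0)

# 1/a_0 vanishes as E -> 0 and grows in magnitude with energy,
# approximately as -k^2 R / 2.
assert abs(inv_a0(1e-8)) < 1e-6
assert abs(inv_a0(1e-4) - (-0.5e-4)) < 1e-5
assert abs(inv_a0(4e-4)) > abs(inv_a0(1e-4))
```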
Our goal is to determine the energy per particle
EB,l/N for Bose systems with finite angular momentum
l in the strongly-interacting regime. For s-wave interac-
tions, the only relevant length scale at unitarity is the av-
erage interparticle spacing ro (see Sec. III). In this case,
the energy per particle at unitarity can be estimated an-
alytically by evaluating the LOCV equations subject to
FIG. 5: R/al(Erel) as a function of the scaled relative scattering
energy Erel/(ℏ²/mR²) for the square well potential vsw
with infinite zero-energy scattering length al, i.e., 1/al = 0,
for three different partial waves [l = 0 (solid line), l = 1
(dashed line), and l = 2 (dotted line)].
the boundary condition implied by the zero-range s-wave
pseudo-potential [12]. Unfortunately, a similarly simple
analysis that uses the boundary condition implied by the
two-body zero-range pseudo-potential for higher partial
waves fails. This combined with the following arguments
suggests that EB,l/N depends additionally on the range
of the underlying two-body potential for finite l: i) The
probability distribution of the two-body l-wave bound
state, l > 0, remains finite as al approaches infinity and
depends on the interaction potential [39, 40]. ii) A de-
scription of l-wave resonances (l > 0) that uses a coupled
channel square well model depends on the range of the
square well potential [41]. iii) The calculation of struc-
tural expectation values of two-body systems with finite
l within a zero-range pseudo-potential treatment requires
a new length scale to be introduced [26].
Motivated by these two-body arguments (see also
Refs. [42, 43, 44] for a treatment of p-wave interacting
Fermi gases) we propose the following functional form for
the energy per particle EB,l/N of a l-wave Bose system
at unitarity interacting through the square-well potential
vsw with range R,
$E_{B,l}/N = C_l \frac{\hbar^2}{m R^{x_l/2}}\, n_B^{2/3 - x_l/6}$. (18)
Here, Cl denotes a dimensionless l-dependent proportion-
ality constant. The dimensionless parameter xl deter-
mines the powers of the range R and the density nB,
and ensures the correct units of the right hand side of
Eq. (18). To test the validity of Eq. (18), we solve the
LOCV equations, Eqs. (11) through (14), for l = 0 to 2
for the one-component Bose system. Note that the one-
component p-wave system is unphysical since it does not
obey Bose symmetry; we nevertheless consider it here
since its LOCV energy determines the energy of two-
component p-wave Bose and Fermi systems (see below).
Figure 6 shows the energy per particle EB,l/N for
FIG. 6: Scaled energy per particle (EB,l/N)/(ℏ²/mR²) for a
one-component Bose system for the energetically lowest-lying
gas branch as a function of the scaled density nB R³ obtained
by solving the LOCV equations [Eqs. (11) through (14)] for
vsw for three different angular momenta l [l = 0 (crosses),
l = 1 (asterisks) and l = 2 (pluses)]. The depth V0 of vsw is
adjusted so that 1/al(k) = 0. Solid, dotted and dashed lines
show fits of the LOCV energies at low densities to Eq. (18)
for l = 0, l = 1 and l = 2 (see text for details). Note that
the system with l = 1 is of theoretical interest but does not
describe a physical system.
l    x_l    C_l^L    C_l^G
0    0.00   −2.46    13.3
1    1.00   −3.24    9.22
2    2.00   −3.30    6.98

TABLE I: Dimensionless parameters x_l, C_l^L and C_l^G for l = 0
to 2 for a one-component Bose system obtained by fitting the
LOCV energies EB,l/N for small densities to the functional
form given in Eq. (18) (see text for details).
a one-component Bose system, obtained by solving the
LOCV equations for the energetically lowest-lying gas
branch, as a function of the density nB for l = 0 (crosses),
l = 1 (asterisks), and l = 2 (pluses) for the square well
potential, whose depth V0 is adjusted for each l so that
the energy-dependent generalized scattering length al(k)
diverges, i.e., 1/al(k) = 0. Setting al(k) to infinity en-
sures that the l-wave interacting Bose system is infinitely
strongly interacting over the entire density regime shown
in Fig. 6. Had we instead set the zero-energy scattering
length al to infinity, the system would, due to the strong
energy-dependence of al(k) [see Fig. 5], “effectively” in-
teract through a finite scattering length.
Table I summarizes the values for x_l and C_l^G, which we
obtain by performing a fit of the LOCV energies EB,l/N
for the one-component Bose system for small densities to
the functional form given in Eq. (18). In particular, we
find x_l = l, which implies that EB,l/N varies as nB^{2/3},
nB^{1/2} and nB^{1/3} for l = 0, 1 and 2, respectively. Table I
uses the superscript “G” to indicate that the proportion-
ality constant is obtained for the energetically lowest-
lying gas branch. The density ranges used in the fit are
chosen so that Eq. (18) describes the low-density or uni-
versal regime accurately. Solid, dotted and dashed lines
in Fig. 6 show the results of these fits for l = 0, 1 and
2, respectively; in the low density regime, the lines agree
well with the symbols thereby validating the functional
form proposed in Eq. (18).
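The fit behind Table I can be sketched as a log-log slope extraction: for noiseless data of the form of Eq. (18), the slope s = 2/3 − x_l/6 gives x_l = 4 − 6s, and the prefactor follows from a single data point. The constants below are the gas-branch values from Table I; ℏ²/m = R = 1 is an illustrative choice:

```python
import math

def e_locv(n, c, x, r=1.0):
    """Low-density functional form of Eq. (18), units hbar^2/m = 1."""
    return c / r ** (x / 2) * n ** (2 / 3 - x / 6)

n1, n2 = 1e-6, 1e-5
for x_true, c_true in [(0.0, 13.3), (1.0, 9.22), (2.0, 6.98)]:
    e1, e2 = e_locv(n1, c_true, x_true), e_locv(n2, c_true, x_true)
    slope = (math.log(e2) - math.log(e1)) / (math.log(n2) - math.log(n1))
    x_fit = 4.0 - 6.0 * slope                 # invert slope = 2/3 - x/6
    c_fit = e1 / n1 ** (2 / 3 - x_fit / 6)    # prefactor from one point
    assert abs(x_fit - x_true) < 1e-9
    assert abs(c_fit - c_true) < 1e-9 * c_true
```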
We repeat the LOCV calculations for the energeti-
cally highest-lying liquid branch of the one-component
Bose system. By fitting the LOCV energies of the liquid
branches for small densities to Eq. (18), we determine xl
and Cl [45]. We find the same xl as for the gas branch but
different Cl than for the gas branch (the proportionality
constants obtained for the liquid branch are denoted by
C_l^L; see Table I). Our values for x_0, C_0^L and C_0^G agree
with those reported in the literature [12, 20, 46].
Equation (18) can be rewritten in terms of the combined
length ξl,
$E_{B,l}/N = C_l \left(\frac{4\pi}{3}\right)^{l/6 - 2/3} \frac{\hbar^2}{m \xi_l^2}$, (19)
where
$\xi_l = r_o^{(1 - l/4)} R^{l/4}$. (20)
For the s-wave case, ξl reduces to ro and the conver-
gence to the unitary regime can be seen by plotting
(EB,0/N)/(ℏ²/m ro²) as a function of as/ro [12]. To
investigate the convergence to the unitary regime for
higher partial waves, Fig. 7 shows the energy per parti-
cle EB,l/N for the energetically lowest-lying gas branch
as a function of al(Erel)/ξl for fixed energy-dependent
scattering lengths al(k), i.e., for as(k) = 10^{10} R, ap(k) =
10^{10} R, and ad(k) = 10^{6} R [the different values of al(k)
are chosen for numerical reasons]. Figure 7 shows that
the inequality
|al(Erel)| ≫ ξl ≫ R (21)
is fulfilled when (EB,l/N)/(ℏ²/mξl²) is constant. Note
that this inequality is written in terms of the energy-
dependent scattering length (see above). We find sim-
ilar results for the liquid branches for l = 0 to 2. For
higher partial waves, we hence use the inequality given
by Eq. (21) to define the unitary regime. In the uni-
tary regime, the energy per particle EB,l/N of the Bose
system depends only on the combined length scale ξl.
For s-wave interacting systems, we have ξs = ro and
as(k) ≈ as, and Eq. (21) reduces to the well known s-
wave unitary condition, i.e., to |as| ≫ ro ≫ R.
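The equivalence of Eqs. (18) and (19) can be verified numerically. Assuming r_o is defined through (4π/3) r_o³ nB = 1 (a common convention, not stated explicitly in this excerpt) and the (4π/3)^{l/6−2/3} prefactor of Eq. (19), the two expressions agree identically for l = 0, 1, 2 with x_l = l:

```python
import math

r_o, R = 50.0, 1.0                       # r_o >> R, scaled units
n_b = 3.0 / (4.0 * math.pi * r_o**3)     # assumed: (4*pi/3) r_o^3 n_B = 1

for l in (0, 1, 2):
    lhs = n_b ** (2 / 3 - l / 6) / R ** (l / 2)          # Eq. (18) / (C_l hbar^2/m)
    xi = r_o ** (1 - l / 4) * R ** (l / 4)               # Eq. (20)
    rhs = (4 * math.pi / 3) ** (l / 6 - 2 / 3) / xi**2   # Eq. (19) / (C_l hbar^2/m)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```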
We now discuss those regions of Fig. 7, where the
energy per particle (EB,l/N)/(ℏ²/mξl²) for the one-
component Bose system deviates from a constant. For
sufficiently large densities, the characteristic length ξl be-
comes of the order of the range R of the square well
potential. In this “high density” regime the system
exhibits non-universal behaviors. In Fig. 7, e.g., the
energy-dependent scattering length ad(k) equals 10^{6} R;
FIG. 7: Scaled energy per particle (EB,l/N)/(ℏ²/mξl²) for
the energetically lowest-lying gas branch of l-wave interacting
one-component Bose systems obtained by solving the LOCV
equations [Eqs. (11) through (14)] for vsw as a function of
al(Erel)/ξl. The depth V0 of vsw is adjusted so that al(Erel) =
10^{10} R for l = 0 and l = 1, and al(Erel) = 10^{6} R for l = 2.
Note that the system with l = 1 is of theoretical interest but
does not describe a physical system. In the regime where the
inequality R ≪ ξl ≪ al(Erel) is fulfilled, the scaled energy
per particle is constant; this defines the unitary regime.
correspondingly, ξd equals R when ad(k)/ξd = 10^{6}. As
ad(k)/ξd approaches 10^{6} from below, the system becomes
non-universal, as indicated in Fig. 7 by the non-constant
dependence of the scaled energy per particle on ad(k)/ξd.
On the left side of Fig. 7, where al(Erel)/ξl becomes of
order 1, the “low density” end of the unitary regime is
reached. When ap(Erel)/ξp equals 10, e.g., the interparticle
spacing ro equals 10^{2} ap(Erel), i.e., the system
exhibits universal behavior even when the interparticle
spacing is 100 times larger than the scattering length
ap(Erel). This is in contrast to the s-wave case, where
the universal regime requires |as| ≫ ro. The different
behavior of the higher partial wave systems compared to
the s-wave system can be understood by realizing that ξl
is a combined length, which contains both the range R
of the two-body potential and the average interparticle
spacing ro. For a given al(Erel)/R, the first inequality
in Eq. (21) is thus satisfied for larger average interparti-
cle spacings ro/R, or smaller scaled densities nB R³, as l
increases from 0 to 2.
In addition to investigating l-wave Bose gases interact-
ing through the square well potential vsw , we consider
the van der Waals potential vvdw. For the energetically
lowest-lying gas branch of the one-component “p-wave
Bose” system we find the same results as for the square
well potential if we replace R in Eqs. (18) to (20) by
β6. We believe that the same replacement needs to be
done for the liquid branch with l = 1 and for the liquid
and gas branches of d-wave interacting bosons, and that
the scaling at unitarity derived above for the square well
potential holds for a wide class of two-body potentials.
Within the LOCV framework, the results obtained for
l    x_l    C_l^L    C_l^G
0    0.00   −1.55    8.40
1    1.00   −2.29    6.52
2    2.00   −2.62    5.54

TABLE II: Dimensionless parameters x_l, C_l^L and C_l^G for l = 0
to 2 for a two-component Bose system (see text for details).
x_l, C_l^L and C_l^G for the one-component Bose systems can
be applied readily to the corresponding two-component
system by scaling the Bose density appropriately (see
Sec. II). The resulting parameters x_l, C_l^L and C_l^G for
the two-component Bose systems are summarized in Ta-
ble II.
The energy per particle EF,l/N for l-wave interact-
ing two-component Fermi systems can be obtained from
Eq. (17) using the LOCV solutions for the liquid and
gas branches discussed above for l-wave interacting one-
component Bose systems. In the unitary limit, we find
$E_{F,l}/N = A\,\frac{\hbar^2}{m}\, n_F^{2/3} + B_l\, \frac{\hbar^2}{m R^{l/2}}\, n_F^{2/3 - l/6}$, (22)
where A = (3/10)(3π²)^{2/3} ≈ 2.87 and B_l =
C_l/2^{2/3−l/6} (the C_l are given in Table I). The first
l are given in Table I). The first
term on the right hand side of Eq. (22) equals EFG, and
the second term, which is obtained from the LOCV solu-
tions, equals λl/2. The energy per particle EF,l/N at unitarity
is positive for all densities for Bl = C_l^G/2^{2/3−l/6}.
For Bl = C_l^L/2^{2/3−l/6}, however, the energy per particle
EF,l/N at unitarity is negative for l > 0 for small den-
sities, and goes through a minimum for larger densities.
This implies that this branch is always mechanically un-
stable in the dilute limit for l > 0.
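The competition between the two terms of Eq. (22) can be made explicit: setting dE/dn_F = 0 gives the stationary density n* = [−B_l(4−l)/(4A)]^{6/l} for l > 0 when B_l < 0 (a short calculation, not spelled out in the text). The sketch below uses the liquid p-wave constant C_1^L = −3.24 from Table I, with ℏ²/m = R = 1 as an illustrative choice:

```python
import math

A = 0.3 * (3 * math.pi**2) ** (2 / 3)   # free-Fermi-gas coefficient, ~2.87
l = 1
B = -3.24 / 2 ** (2 / 3 - l / 6)        # B_l = C_l^L / 2^(2/3 - l/6) < 0

def e_fermi(n):
    """Eq. (22) per particle, in units hbar^2/m = 1, R = 1."""
    return A * n ** (2 / 3) + B * n ** (2 / 3 - l / 6)

assert e_fermi(1e-6) < 0                         # negative in the dilute limit
n_star = (-B * (4 - l) / (4 * A)) ** (6 / l)     # stationary point of e_fermi
assert e_fermi(n_star) < e_fermi(0.5 * n_star)   # a genuine minimum
assert e_fermi(n_star) < e_fermi(2.0 * n_star)
```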
The LOCV treatment for fermions relies heavily on the
product representation of the many-body wave function,
Eq. (15), which in turn gives rise to the two terms on
the right hand side of Eq. (22). It is the competition of
these two energy terms that leads to the energy minimum
discussed in the previous paragraph. Future work needs
to investigate whether the dependence of EF,l/N on two
length scales as implied by Eq. (22) is correct. In contrast
to the LOCV method, mean-field treatments predict that
the energy at unitarity is proportional to EFG|kF re,l|,
where re,l denotes a range parameter that characterizes
the underlying two-body potential [42, 43, 44].
V. CONCLUSION
This paper investigates Bose and Fermi systems us-
ing the LOCV method, which assumes that three- and
higher-order correlations can be neglected and that the
behaviors of the many-body system are governed by two-
body correlations. This assumption allows the many-
body problem to be reduced to an effective two-body
problem. Besides the reduced numerical effort, this for-
malism allows certain aspects of the many-body physics
to be interpreted from a two-body point of view. Further-
more, it allows parallels between Bose and Fermi systems
to be drawn.
In agreement with previous studies, we find that the
energy per particle “corrected” by the dimer binding en-
ergy, i.e., EF,0/N − Edimer/2, of dilute two-component
s-wave Fermi gases in the whole crossover regime de-
pends only on the s-wave scattering length and not on
the details of the underlying two-body potential. Fur-
thermore, at unitarity the energy per particle is given by
EF,0/N = 0.46EFG. This LOCV result is in good agree-
ment with the energy per particle obtained from fixed-
node diffusion Monte Carlo calculations, which predict
EF,0/N = 0.42EFG [19, 20, 21]. This agreement may
be partially due to the cancellation of higher-order cor-
relations, and thus somewhat fortuitous. In contrast to
Ref. [28], we find that the liquid branch of bosonic he-
lium does not terminate at low densities but exists down
to zero density.
For higher angular momentum interactions, we deter-
mine the energy per particle of one- and two-component
Bose systems with infinitely large scattering lengths. For
these systems, we expect the LOCV formalism to pre-
dict the dimensionless exponent xl, which determines
the functional dependence of EB,l/N on the range R of
the two-body potential and on the average interparticle
spacing ro, correctly. The values of the proportionality
constants C_l^G and C_l^L, in contrast, may be less accurate.
We use the LOCV energies to generalize the known uni-
tary condition for s-wave interacting systems to systems
with finite angular momentum. Since higher angular mo-
mentum resonances are necessarily narrow, leading to a
strong energy-dependence of the scattering strength, we
define the universal regime using the energy-dependent
scattering length al(k). In the unitary regime, the en-
ergy per particle can be written in terms of the length
ξl, which is given by a geometric combination of ro and
R. The LOCV framework also allows a prediction for
the energy per particle of two-component Fermi gases
beyond s-wave to be made [see Eq. (22)]. Although the
functional form of the many-body wave function for two-
component Fermi systems used in this work may not be
the best choice, we speculate that the energy scales de-
rived for strongly interacting Bose systems are also rele-
vant to Fermi systems.
This work was supported by the NSF through grant
PHY-0555316. RMK gratefully acknowledges hospitality
of the BEC Center, Trento.
[1] W. C. Stwalley, Phys. Rev. Lett. 37, 1628 (1976).
[2] E. Tiesinga, B. J. Verhaar, and H. T. C. Stoof, Phys.
Rev. A 47, 4114 (1993).
[3] S. Inouye, M. R. Andrews, J. Stenger, H. J. Miesner,
D. M. Stamper-Kurn, and W. Ketterle, Nature 392, 151
(1998).
[4] P. Courteille, R. S. Freeland, D. J. Heinzen, F. A. van
Abeelen, and B. J. Verhaar, Phys. Rev. Lett. 81, 69
(1998).
[5] C. A. Regal, C. Ticknor, J. L. Bohn, and D. S. Jin, Phys.
Rev. Lett. 90, 053201 (2003).
[6] J. Zhang, E. G. M. van Kempen, T. Bourdel, L.
Khaykovich, J. Cubizolles, F. Chevy, M. Teichmann, L.
Tarruell, S. J. J. M. F. Kokkelmans, and C. Salomon,
Phys. Rev. A 70, 030702(R) (2004).
[7] C. Chin, V. Vuletić, A. J. Kerman, and S. Chu, Phys.
Rev. Lett. 85, 2717 (2000).
[8] T. Köhler, K. Góral, and P. S. Julienne, Rev. Mod. Phys.
78, 1311 (2006).
[9] L. D. Landau and E. M. Lifshitz, Quantum Mechanics
(Non-relativistic theory), Vol. 3, 3rd Edition (Butter-
worth Heinemann, Oxford, 1977).
[10] G. A. Baker, Jr., Phys. Rev. C 60, 054311 (1999).
[11] H. Heiselberg, Phys. Rev. A 63, 043606 (2001).
[12] S. Cowell, H. Heiselberg, I. E. Mazets, J. Morales, V. R.
Pandharipande, and C. J. Pethick, Phys. Rev. Lett. 88,
210403 (2002).
[13] P. O. Fedichev, M. W. Reynolds, and G. V. Shlyapnikov,
Phys. Rev. Lett. 77, 2921 (1996).
[14] B. D. Esry, C. H. Greene, and J. P. Burke Jr., Phys. Rev.
Lett. 83, 1751 (1999).
[15] E. Nielsen and J. H. Macek, Phys. Rev. Lett. 83, 1566
(1999).
[16] P. F. Bedaque, E. Braaten, and H.-W. Hammer, Phys.
Rev. Lett. 85, 908 (2000).
[17] K. M. O’Hara, S. L. Hemmer, M. E. Gehm, S. R.
Granade, and J. E. Thomas, Science 298, 2179 (2002).
[18] T. Bourdel, J. Cubizolles, L. Khaykovich, K. M. F. Ma-
galhaes, S. J. J. M. F. Kokkelmans, G. V. Shlyapnikov,
and C. Salomon, Phys. Rev. Lett. 91, 020402 (2003).
[19] J. Carlson, S. Y. Chang, V. R. Pandharipande, and K. E.
Schmidt, Phys. Rev. Lett. 91, 050401 (2003).
[20] S. Y. Chang, V. R. Pandharipande, J. Carlson, and K. E.
Schmidt, Phys. Rev. A 70, 043602 (2004).
[21] G. E. Astrakharchik, J. Boronat, J. D. Casulleras, and
S. Giorgini, Phys. Rev. Lett. 93, 200404 (2004).
[22] J. Carlson and S. Reddy, Phys. Rev. Lett. 95, 060401
(2005).
[23] For finite l, the term “unitary regime” refers to the regime
in which the observables of the many-body system de-
On Charge Conservation and The Equivalence Principle in the Noncommutative Spacetime
Youngone Lee
Center for Quantum Spacetime, Sogang University,
Seoul 121-742, Korea.
Abstract
We investigate one of the consequences of the twisted Poincaré symmetry. We derive the
charge conservation law and show that the equivalence principle is satisfied in the canonical
noncommutative spacetime. We apply the twisted Poincaré symmetry to Weinberg's analysis
[11]. To this end, we generalize our earlier construction of the twisted S matrix [10], which
applies the noncommutativity to the Fourier modes, to massless fields of integer spin, and we
obtain the transformation formula for the twisted S matrix for such fields. For massless fields
of spin 1 we obtain the conservation of charge, and for massless fields of spin 2 the universality
of the coupling constant, which can be interpreted as the equality of gravitational
mass and inertial mass, i.e., the equivalence principle.
PACS numbers: 11.10.Nx, 11.30.-j, 11.30.Cp, 11.55.-m
[email protected]
http://arxiv.org/abs/0704.1805v3
1 Introduction
In an effort to construct an effective theory of quantum gravity at the Planck scale, noncommutativity of
the spacetime has been considered. The canonical noncommutative spacetime has the commutation
relations between the coordinates [1],
[xµ, xν ] = iθµν , (1)
where θµν (µ, ν = 0, . . . , 3) is a constant antisymmetric matrix.
Field theories in the canonical noncommutative spacetime can be replaced by field theories in the
commutative spacetime with the Moyal product (the Weyl-Moyal correspondence [2]). One of the
significant problems of those theories is that they violate Lorentz symmetry: one finds that the
symmetry group is SO(1, 1)×SO(2) instead of the Lorentz group SO(1, 3). Since that symmetry group
has no spinor or vector representations, most of the earlier studies, performed using the
spinor and vector representations of the Lorentz group, cannot be justified. Moreover, the factors 1 (−1)
multiplying a boson (fermion) loop are used without knowing the spin-statistics relation.
To get around this, Chaichian et al.∗ have deformed the Poincaré symmetry as well as the module
space on which the symmetry acts [3]. The twisted symmetry group has the same representations as
the original Poincaré group, and at the same time it successfully retains the physical information of
the canonical noncommutativity. The main idea was that one can change a classical symmetry group
to a quantum group, ISOθ(1, 3) in this case, and twist-deform the module algebra consistently to
reproduce the noncommutativity. In this approach, the noncommutative parameter θµν transforms as
an invariant tensor. This reminds us of the situation in which Einstein had to change the symmetry group of
the spacetime and its module space (to the Minkowski spacetime) when the speed of light was required
to be constant for any observer in an inertial frame. Similarly, Chaichian et al. have required the
change of the Hopf algebra together with its module algebra, so that any observer in an inertial frame feels
the noncommutativity in the same way. For the κ-deformed noncommutativity, Majid and Ruegg
found the κ-deformed spacetime [6] as a module space of the κ-deformed symmetry after Lukierski
et al. discovered the symmetry [7]. The real benefit of the twist is the use of the same irreducible
representations as in the original theories, unlike general deformed theories such as the κ-deformed
theory.
Recently, several groups have constructed quantum field theory in the noncommutative
spacetime by twisting the quantum space as a module space [8], [9], [10]. In particular, Bu et al. have
proposed a twisted S-matrix as well as a twisted Fock space for consistency [10]. There we
obtained the twisted algebra of the creation and annihilation operators and the spin-statistics relation
by applying the twisted Poincaré symmetry consistently on the quantum space. The analysis of this
paper is mainly based on this work.
These works justify the use of the irreducible representations of the Poincaré group and the sign
factors used in the earlier studies. Are these works merely a change of viewpoint? Mathematically
they look equivalent and seem to carry equal amounts of information. But when the physics
∗ Oeckl [4], Wess [5] have proposed the same deformed Poincaré algebra.
is concerned, the action of the symmetry becomes more subtle than it seems, because it constrains the possible
configurations of physical systems. In this article, we present an example showing the role of the
twisted symmetry in solving physics problems in the canonical noncommutative space-
time. As an example, we derive the conservation law of charge and show that the equivalence
principle is satisfied even in the noncommutative spacetime. In this derivation we consider spin-1 and
spin-2 massless fields for the photon and the graviton, respectively. For this purpose, we extend our ear-
lier study of the scalar field theory to more general field theories and investigate a noncommutative
version of Weinberg's analysis [11, 12, 13].
There are many studies of the relation between noncommutativity and gauge theory [14],
[15], [16], [17], between noncommutativity and gravity [18], [19], [20], and between the two
[21]. There have also been many arguments about whether the equivalence principle is satisfied at the quantum
level. Some authors argue that the equivalence principle is violated in the quantum regime [22], [23],
while other studies show non-violation of the equivalence principle [24]. Whether the
principle of equivalence is violated or not is an important issue for quantum gravity, because the
principle is the core of general relativity.
The paper is organized as follows. In section 2, after a brief review of the construction and properties of the
S⋆ matrix, we extend our previous construction of the S⋆ matrix to massless fields of integer spin and give
an exact transformation formula for the S⋆ matrix elements. In section 3, we give
the consequences of requiring twist invariance of the S⋆ matrix elements for the scattering process;
these results lead to the charge conservation law for the spin-1 field theory and the universality of
the coupling constant for the spin-2 field in noncommutative spacetime. Finally, in section 4, we discuss
the implication of the twisted symmetry and its applicability to other issues. The Appendices contain
related calculations of the polarization vector, the noncommutative definition of the invariant
M function, and a twisted transformation formula for the S⋆ matrix.
2 Properties of the general S⋆ matrix
2.1 A short introduction of useful properties of the twist-deformation
An algebra with a product · and a coalgebra with a coproduct ∆ constitute a Hopf algebra if there is an
invertible element S, called the antipode, satisfying certain compatibility relations. For a Lie algebra g, there
is a unique universal enveloping algebra U(g) which preserves the Lie algebra properties in terms of a
unital associative algebra. The Hopf algebra of a Lie algebra g is denoted as H ≡ {U(g), ·, ∆, ǫ, S},
where U(g) is the universal enveloping algebra of the corresponding algebra g and ǫ denotes the
counit. The Sweedler notation, ∆Y = Σ Y(1) ⊗ Y(2), is widely used as a shorthand for the coproduct [25].
The action of a Hopf algebra H on a module algebra A is defined as

Y ⊲ (a · b) = Σ (Y(1) ⊲ a) · (Y(2) ⊲ b),   (2)

where a, b ∈ A, the symbol · is the multiplication in the module algebra A, and the symbol ⊲ denotes
the action of the Lie generators Y ∈ U(g) on the module algebra A. The product · in H and the
multiplication · in A should be distinguished.
If there is an invertible 'twist element' F = Σ F(1) ⊗ F(2) ∈ H ⊗ H, which satisfies

(F ⊗ 1) · (∆⊗ id)F = (1⊗F) · (id⊗∆)F,   (3)
(ǫ⊗ id)F = 1 = (id⊗ ǫ)F,   (4)

one can obtain a new Hopf algebra HF ≡ {UF(g), ·, ∆F, ǫF, SF} from the original one. The relations
between them are

∆F Y = F · ∆Y · F⁻¹,   ǫF(Y) = ǫ(Y),
SF(Y) = u · S(Y) · u⁻¹,   u = Σ F(1) · S(F(2)),   (5)
with the same product in the algebra sector. The 'covariant' multiplication of the module algebra AF
for the twisted Hopf algebra HF, which maintains the form of Eq. (2), is given as

(a ⋆ b) = ·[F⁻¹ (a⊗ b)].   (6)
From the above relations, one can derive an important property of the twist: it does not
change the representations of the algebra:

DF(Y)(a ⋆ b) = ⋆ [∆F Y (a⊗ b)]
             = · [F⁻¹ · F ∆0Y F⁻¹ (a⊗ b)]
             = · [∆0Y F⁻¹ (a⊗ b)]
             = D0(Y)(a ⋆ b),   (7)
where representations of the coproduct and the twist element are implied, i.e.,

D[∆Y] = Σ D(Y(1)) ⊗ D(Y(2)),   D[F] = Σ D(F(1)) ⊗ D(F(2)).   (8)
The above considerations lead us to the golden rule: the irreducible representations are not changed
by a twist, and one can regard the covariant action of a twisted Hopf algebra on a twisted module
algebra as the action of the original algebra on the twisted module algebra.
2.2 The S⋆ matrix and its twist invariance
Recently, a quantum field theory has been constructed in such a way as to preserve the twisted Poincaré
symmetry [10]. There we confined the construction to space-space noncommutativity; it is hard
to know whether one can construct a consistent twist-Poincaré-invariant field theory satisfying
causality in the case of space-time noncommutativity. We applied the twisted symmetry
consistently to the quantum spaces, especially to the algebra of the creation and annihilation operators (a†p
and ap). As a result, we obtained the twisted algebra of quantum operators. If we use the shorthand
notation p ∧ q = pµ θµν qν, the twisted algebra of a†p and ap can be written as:

c_p ⋆ c_q = e^{\frac{i}{2}\,\tilde p\wedge\tilde q}\; c_p \cdot c_q,   (9)

where cp can be ap or a†p, with p̃ ≡ −p for cp = ap and p̃ ≡ p for cp = a†p, and · denotes the ordinary
multiplication of operators in the commutative theories.
This twisted algebra naturally leads to the twisted form of the Fock space, the S-matrix and quantities
related to the creation and annihilation operators. Thus, we obtain the twisted basis of the Fock space and
the S⋆-matrix:

|q1, · · · , qn〉 → |q1, · · · , qn〉⋆ = E(q1, · · · , qn)|q1, · · · , qn〉,   (10)

where E(q1, · · · , qn) = \exp\Big(\frac{i}{2}\sum_{i<j} q_i \wedge q_j\Big) is a phase factor with the interesting
properties given in [10], and

S → S⋆ = \sum_k \frac{(-i)^k}{k!} \int d^4x_1 \cdots d^4x_k\; T\{H_{\star I}(x_1) \star \cdots \star H_{\star I}(x_k)\},   (11)

where T denotes the time ordering and H⋆I(x) is the interaction Hamiltonian density in the Dyson
formalism.
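The combinatorics of the phase factor E can be made concrete. A small sketch (θ and the momenta are illustrative; the i/2 normalization follows the definition of E above):

```python
import numpy as np

theta = np.zeros((4, 4))
theta[1, 2], theta[2, 1] = 1.0, -1.0  # illustrative space-space theta

def wedge(p, q):
    return p @ theta @ q

def E(*qs):
    """Phase factor E(q1,...,qn) = exp[(i/2) * sum_{i<j} q_i ^ q_j]."""
    s = sum(wedge(qs[i], qs[j])
            for i in range(len(qs)) for j in range(i + 1, len(qs)))
    return np.exp(0.5j * s)

q1 = np.array([1.0, 0.3, -0.2, 0.7])
q2 = np.array([0.8, -0.5, 0.4, 0.1])
q3 = np.array([1.2, 0.6, 0.2, -0.3])

assert np.isclose(E(q1), 1.0)               # a single momentum carries no phase
assert np.isclose(abs(E(q1, q2, q3)), 1.0)  # E is always a pure phase
# Exchanging two adjacent momenta costs exactly the factor e^{-i q_i ^ q_j}:
assert np.isclose(E(q2, q1, q3), E(q1, q2, q3) * np.exp(-1j * wedge(q1, q2)))
```

The exchange phase in the last assertion is the structure that the twisted Fock-space basis of Eq. (10) encodes.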
The explicit form of the S⋆ matrix elements for the scalar φⁿ theory in momentum space is:

⋆〈β|S⋆|α〉⋆ = E(−β, α) \sum_k (-ig)^k \int_{Q_1 \cdots Q_k} E(\tilde Q_1)\cdots E(\tilde Q_k)\; 〈β|S_k(\tilde Q_1, \cdots, \tilde Q_k)|α〉,   (12)

where Q̃ is the shorthand notation for (q̃1, . . . , q̃n) [10].
In the above, 〈β|Sk(Q̃1, · · · , Q̃k)|α〉 is the gᵏ-order term of the S-matrix element of the commutative
theory, where g is the coupling constant of the theory. From momentum conservation, i.e., the delta
functions in 〈β|Sk(Q̃1, · · · , Q̃k)|α〉, one can show that the S⋆ matrix element ⋆〈β|S⋆|α〉⋆ can be
represented by Feynman diagrams with an extra phase factor E(Q̃) for each vertex. The phase factors
drastically change the predictions of the theory. This result agrees with Filk's result [26], but we have
overall factors E(−β, α) corresponding to the external lines in the Feynman diagram, which originate
from the twisted Fock space. From the above considerations, the new modified Feynman diagrams
can be obtained from the untwisted ones by changing the phase factor from 1 to E(Q̃i) at each vertex.
The twist invariance of this prescription of the S⋆ matrix is not manifest, because the non-locality of
the interactions may in general violate the twist invariance of the S⋆ matrix, i.e.,

[H⋆I(x), H⋆I(y)]⋆ ≠ 0 for spacelike (x − y).   (13)

However, we see from the form of the S⋆ matrix in Eq. (12) that the proposed S⋆ matrix is twist
invariant, since it is constructed from phase factors, which are twist invariant, and from the Feynman
propagators. Twisted products of field operators satisfy

〈0|ψ(x) ⋆ ψ(y)|0〉 = 〈0|ψ(y) ⋆ ψ(x)|0〉 for spacelike (x − y).   (14)

Hence the Feynman propagator, which coincides with the twisted Feynman propagator 〈0|T[ψ(x) ⋆
ψ(y)]|0〉, is twist invariant. From this, the invariance of the S⋆ matrix elements follows immediately.
2.3 Generalization to arbitrary fields
We need the S⋆ matrix for massless field theories of integer spin for the analysis of this paper.
In the previous work [10], we constructed the S⋆ matrix for the scalar field theory and
expected that the same formulation would be possible for general field theories. In this section, we
generalize the argument used in that paper to obtain the form of the S⋆ matrix elements for massless
field theories with spin 1 and 2. Throughout the analysis of this paper, we use the (s/2, s/2) representation for
massless integer-spin fields. The reason for this choice, and considerations for the other representations,
are given in section 4.
binations of the creation and annihilation operators, and the fields of spin s transform (twist) as†:

[D^s_\theta(\Lambda^{-1})]^A{}_B\, \hat\Psi^B_\theta(\Lambda x + a) = U_\theta(\Lambda, a)\, \hat\Psi^A_\theta(x)\, U^{-1}_\theta(\Lambda, a),   (15)
where Ds denotes the irreducible representation for spin s. Since translations act homogeneously on
the fields, the twisted tensor fields can be written as

\hat\Psi^A_\theta(x) = \sum_\sigma \int d^3p\, \big[\, a_\sigma(p)\, \epsilon^A_\theta(p,\sigma)\, e^{ip\cdot x} + b^\dagger_\sigma(p)\, \chi^A_\theta(p,\sigma)\, e^{-ip\cdot x}\, \big],   (16)
where A denotes the tensor index, A ≡ (a1 · · · as) for massless spin s fields and σ for the helicity
indices. Thus, the transformation relation of the fields, Eq. (15), reduces to

[D^s_\theta(\Lambda^{-1})]^A{}_B\, \hat\Psi^B_\theta(\Lambda x) = U_\theta(\Lambda)\, \hat\Psi^A_\theta(x)\, U^{-1}_\theta(\Lambda).   (17)
In order that the field operators transform correctly, the 'polarization' tensors ǫᴬθ(p, σ) must transform as

\epsilon^A_\theta(p,\sigma) = \frac{1}{\sqrt{(2\pi)^3}}\, [D^s_\theta(L(p))]^A{}_B\, \epsilon^B_\theta(k,\sigma),
\chi^A_\theta(p,\sigma) = \frac{1}{\sqrt{(2\pi)^3}}\, [D^s_\theta(L(p))]^A{}_B\, \sum_{\sigma'} \chi^B_\theta(k,\sigma')\, C^{-1}_{\sigma'\sigma},   (18)

where L(p) is the Lorentz transformation‡ which reflects the little group and C is a matrix with the
properties [12],

C^* C = (-1)^{2s},   C^\dagger C = 1.   (19)
† Recently, Joung and Mourad [27] have argued that a covariant field linear in creation and annihilation operators does
not exist. But the definition of the creation and annihilation operators in that paper is different from ours. We follow the
view that we use the same fundamental quantities as in the untwisted theories, changing only the algebras. Field
operators that are linear combinations of the creation and annihilation operators can be justified on this basis, because the
free fields are the same as in the commutative case.
‡ L(p) is a Lorentz transformation that takes the standard momentum kµ to pµ ≡ (|p|,p) for massless fields. For
the massless case L(p) satisfies L(p) = R(p̂)B(|p|) where R(p̂) is a rotation which takes the direction of the standard
momentum k to the direction of p and B(|p|) is the boost along the p direction [11].
Since the most important property of the twist is that the representations are the same as in the
original group, the Poincaré group in this case, the above D^s_\theta(\Lambda^{-1}) and D^s_\theta(L) can be replaced by
D^s(\Lambda^{-1}) and D^s(L), respectively. Thus the θ-dependence remains only in the polarization tensors.
Furthermore, one can see that there is no θ-dependence in ǫᴬθ(σ) and χᴬθ(σ′). We show this for the case of
ǫᴬθ(σ). From the group property of the twisted Lorentz transformations, the
transformation of the polarization tensor ǫᴬθ(σ) can be written as:

[D^s_\theta(\Lambda^{-1})]^A{}_B\, [D^s_\theta(L)]^B{}_C\, \epsilon^C_\theta(\sigma) = [D^s_\theta(\Lambda^{-1}\cdot L)]^A{}_C\, \epsilon^C_\theta(\sigma).   (20)
In order for this transformation to be a twist transformation, it must satisfy

[D^s_\theta(\Lambda^{-1}\cdot L)]^A{}_B\, \epsilon^B_\theta(\sigma) = [D^s(\Lambda^{-1}\cdot L)]^A{}_B\, \epsilon^B_\theta(\sigma),   (21)

i.e., the twisted transformation has the same representation as the untwisted one. From the primary
relation between the twist and the module algebra, one can see that the form of ǫᴬθ(σ), the twisted version
of the commutative polarization tensor, should be

\epsilon^{a_1\cdots a_n}_\theta(\sigma) = \cdot\, [\mathcal F^{-1}_n \triangleright \epsilon^{a_1}(\sigma) \otimes \cdots \otimes \epsilon^{a_n}(\sigma)],   (22)

where \mathcal F^{-1}_n can be obtained from§ \mathcal F_\theta, and \mathcal F_\theta is the very twist element corresponding to the canonical
noncommutativity:

\mathcal F_\theta = \exp\Big(\frac{i}{2}\, \theta^{\alpha\beta} P_\alpha \otimes P_\beta\Big).   (23)
Since P_\alpha \triangleright \epsilon^a(\sigma) = 0, we obtain

\epsilon^{a_1 a_2\cdots a_s}_\theta(\sigma) \equiv \epsilon^{a_1 a_2\cdots a_s}(\sigma) = \epsilon^{(a_1}_\sigma\, \epsilon^{a_2}_\sigma \cdots \epsilon^{a_s)}_\sigma,   (24)

where \epsilon^{(a_1 a_2\cdots a_s)}(\sigma) is the polarization tensor in the corresponding commutative field theory. This
relation leads to the relation

\hat\Psi^A_\theta(x) \equiv \hat\Psi^A(x).   (25)
This relation is expected, because the twist does not change the representations and we write the field
operators using the irreducible representations of the symmetry group. An explicit calculation for the
massless (s/2, s/2) tensor representation showing the non-dependence on θ is given in
Appendix A.
Therefore, one can safely use the same representations for the field operators in the twisted theory
as those of the untwisted one. What really gets twisted are only the multiplications of the creation and
annihilation operators. Since the multiplications of creation and annihilation operators of
different species of particles act like composite mappings on the Hilbert space, one can construct a
§ The associativity of the twisted product ⋆ guarantees that it can be obtained by successive applications of Eq. (6).
module algebra from them. That is, when the cp's and dp's are creation and annihilation operators
corresponding to different species, their twisted multiplication is

c_p \star d_q = e^{\frac{i}{2}\, \tilde p\wedge\tilde q}\; c_p \cdot d_q,   (26)

where cp (dp) can be ap (bp) or a†p (b†p), and p̃ and q̃ are defined as in section 2.2.
Consequently, the scheme for twisting the S-matrix used in [10] applies to general field theories.
Twist invariance of the S⋆ matrix for general field theories follows immediately. As in the scalar field
theory, the amplitudes can be obtained by multiplying by the phase factor E(q1, · · · , qn) for each vertex
in the Feynman diagram of the untwisted theories.
2.4 Exact transformation formula for the S⋆ matrix element for massless fields
The transformation formula for the S⋆ matrix element corresponding to a process in which a massless
particle is emitted with momentum q and helicity ±s can be inferred as (Appendix C)

S^{\pm s}_\star(q, p) = \exp[\pm i s\, \Theta(q,\Lambda)]\; S^{\pm s}_\star(\Lambda q, \Lambda p).   (27)

The S⋆ matrix can be written as the scalar product of a polarization tensor and the M⋆ function (Appendix C):

S^{\pm s}_\star(q, p) = \epsilon^{\mu_1 *}_\pm(\hat q) \cdots \epsilon^{\mu_s *}_\pm(\hat q)\, (M^\pm_\star)_{\mu_1\cdots\mu_s}(q, p),   (28)

where the M⋆ function twist-transforms covariantly as

M^{\pm\mu_1\cdots\mu_s}_\star(q, p) = \Lambda^{\mu_1}{}_{\nu_1} \cdots \Lambda^{\mu_s}{}_{\nu_s}\; M^{\pm\nu_1\cdots\nu_s}_\star(\Lambda q, \Lambda p).   (29)
The form of the S⋆ matrix element in Eq. (28) appears to break the twisted Poincaré symmetry,
because the polarization vectors do not satisfy Lorentz covariance; rather, they satisfy

(\Lambda^\mu{}_\nu - q^\mu\, \Lambda^0{}_\nu / |q|)\, \epsilon^\nu_\pm(\Lambda q) = \exp\{\pm i\, \Theta(q,\Lambda)\}\; \epsilon^\mu_\pm(\hat q).   (30)

Hence, requiring the twist invariance of the S⋆ matrix leads to a constraint relating the
momentum, the polarization vectors and the M⋆ function. From Eq. (27) and Eq. (30), the S⋆ matrix
element in Eq. (28) can be written as

S^{\pm s}_\star(q, p) = \exp[\pm i s\, \Theta(q,\Lambda)]\; [\epsilon^{\mu_1}_\pm(\Lambda q) - (\Lambda q)^{\mu_1} \Lambda^0{}_\nu\, \epsilon^\nu_\pm(\Lambda q)/|q|]^*
\cdots [\epsilon^{\mu_s}_\pm(\Lambda q) - (\Lambda q)^{\mu_s} \Lambda^0{}_\nu\, \epsilon^\nu_\pm(\Lambda q)/|q|]^*\, (M_\star)^\pm_{\mu_1\cdots\mu_s}(\Lambda q, \Lambda p).   (31)

Requiring the twist invariance of the S⋆ matrix element then results in:

q^{\mu_1}\, \epsilon^{\mu_2 *}_\pm(\hat q) \cdots \epsilon^{\mu_s *}_\pm(\hat q)\; M^\pm_{\star\, \mu_1\cdots\mu_s} = 0.   (32)

This leads us to the desired identities:

q_\rho\, M^{\pm\rho\mu_2\cdots\mu_s}_\star(q, p) = 0.   (33)
3 Charge conservation and the equivalence principle
Since the analysis of the conservation law is just a noncommutative generalization of Weinberg's
work [11], the derivation in this section is fairly straightforward. As we saw in section 2, the
difference between the noncommutative field theory and the commutative one lies in the phase factors
at each vertex of the Feynman diagrams.
3.1 Dynamical definition of charge and gravitational mass
We define the charge and the gravitational mass dynamically, as the couplings in the vertex amplitudes
for a soft (very low energy) photon and graviton, respectively. Consider the vertex amplitude for the
process in which a soft¶ massless particle of momentum q and spin s is emitted by a particle of momentum
p and spin J. Since the only tensor which can be used to form the invariant M function is known to be
p^{\mu_1}\cdots p^{\mu_s}, the noncommutative M function, the M⋆ function, is given by the commutative M function
with the phase factor E(\tilde q, \tilde p, \tilde p') multiplied at each vertex. That is, the vertex amplitude can be written as

\frac{E(\tilde q, \tilde p, \tilde p')}{2E(p)}\; p^{\mu_1} \cdots p^{\mu_s}\, \epsilon^*_{\pm\mu_1}(\hat q) \cdots \epsilon^*_{\pm\mu_s}(\hat q).   (34)

When the emitting particle has spin J, we have to multiply the vertex amplitude by \delta_{\sigma\sigma'} [11]. Thus,
the explicit form of the vertex amplitude is:

\frac{2i(2\pi)^4\, e_s\, \delta_{\sigma\sigma'}\, [p_\mu\, \epsilon^{\mu *}_\pm(\hat q)]^s}{(2\pi)^{9/2}\, [2E(p)]}\; \cdot\; E(\tilde q, \tilde p, \tilde p'),   (35)

where e_s is the coupling constant for emitting a soft massless particle of spin s (e.g., photon and graviton).
These coupling constants for emitting a soft particle can be interpreted as follows: e_1 ≡ e is the electric
charge, and e_2 ≡ \sqrt{8\pi G}\, g, with g the ratio of the gravitational mass to the inertial mass.
Let us consider near-forward scattering of two particles A and B with couplings e^s_A and
e^s_B, respectively. From the properties of the phase factors [10], the phase factor for this scattering
reduces to:

E(\tilde q, \tilde p_A, \tilde p_{A'}) \cdot E(-\tilde q, \tilde p_B, \tilde p_{B'}) = E(\tilde p_A, \tilde p_{A'}, \tilde q) \cdot E(-\tilde q, \tilde p_B, \tilde p_{B'})
= E(\tilde p_A, \tilde p_{A'}) \cdot E(\tilde p_B, \tilde p_{B'}).   (36)

However, in the forward-scattering limit the direction of the particles does not change, i.e. p_A ∥ p'_A.
For space-space noncommutativity p_A ∧ p'_A then goes to zero, i.e., the phase factor goes to 1.
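The vanishing of the phase in the forward limit is immediate to verify for a space-space θ. A sketch with illustrative momenta (the rescaled outgoing momentum is not kept on-shell; only its direction matters for the phase):

```python
import numpy as np

theta = np.zeros((4, 4))
theta[1, 2], theta[2, 1] = 1.0, -1.0  # space-space noncommutativity only

def wedge(p, q):
    return p @ theta @ q

pA  = np.array([5.0, 3.0, 0.0, 4.0])    # incoming momentum (t, x, y, z)
pAp = 0.9 * pA                          # forward: outgoing direction unchanged
assert np.isclose(wedge(pA, pAp), 0.0)  # phase exp[(i/2) pA ^ pA'] -> 1

# A deflected outgoing momentum leaves a nontrivial phase:
pB = np.array([5.0, 0.0, 3.0, 4.0])
assert not np.isclose(wedge(pA, pB), 0.0)
```

With θ⁰ⁱ = 0 the wedge only sees the spatial parts, so parallel spatial momenta always give a vanishing phase.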
Thus, when the invariant momentum transfer t = −(p′A − pA)² goes to zero, using the properties
of the polarization vectors in Appendix B, the S⋆ matrix element can be shown to approach the
same form as the corresponding commutative quantity, which is easily calculated in a well-chosen‖
¶ This is to define the charge and gravitational mass as monopole, not multipole, moments.
‖ The coordinate system in which q · pA = q · pB = 0 [11].
coordinate system,

\frac{\delta_{\sigma_A\sigma_{A'}}\, \delta_{\sigma_B\sigma_{B'}}}{4\pi^2 E_A E_B\, t} \Big[\, e_A e_B\, (p_A \cdot p_B) + 8\pi G\, g_A g_B \Big( (p_A \cdot p_B)^2 - \frac{1}{2}\, m_A^2 m_B^2 \Big) \Big].   (37)
This coincidence is quite special in the sense that the S⋆ matrix elements are quite different from the
S matrix elements of the commutative theory when there is a momentum transfer. If particle B is at
rest, this gives

\delta_{\sigma_A\sigma_{A'}}\, \delta_{\sigma_B\sigma_{B'}} \Big[ -e_A e_B + G\, g_A \Big( 2E_A - \frac{m_A^2}{E_A} \Big)\, g_B m_B \Big].   (38)
Hence, one can interpret the coupling constant eA as the usual charge of particle A. Moreover, one
can identify the effective gravitational mass of A as

(m_g)_A = g_A \Big( 2E_A - \frac{m_A^2}{E_A} \Big).   (39)

In the nonrelativistic limit, g_A can be interpreted as the ratio of the gravitational mass to the inertial
mass, i.e.,

(m_g)_A = g_A\, m_A,   E_A \simeq m_A.   (40)
Consequently, if gA does not depend on the species of A, the equivalence principle holds.
3.2 Conservation law
Consider an S matrix element S(α → β) for some reaction α → β, where the states α and β consist of
various species of particles. The same reaction can occur with the emission of a soft massless particle of
spin s (photon or graviton for s = 1 or 2), momentum q and helicity ±s. We denote the corresponding
S matrix element as S±s(q, α → β). Each amplitude of this process breaks the Lorentz symmetry,
because massless fields in the (s/2, s/2) representation break the symmetry [28]. However, a real
physical reaction, to which an S-matrix element (the sum of the amplitudes) corresponds, should be
Lorentz invariant. By requiring this condition, Weinberg obtained the conservation relations. In
this section, we investigate the analogous relations by requiring the twist invariance of the S⋆ matrix
elements.
Suppose that the soft particle is emitted by the i-th external particle (i = 1, · · · , n). Then, by the polology of
the conventional field theory, the S matrix elements have poles at |q| = 0 when an extra soft
massless particle is emitted by one of the external lines:

\frac{1}{(p_i + \eta_i\, q)^2 + m_i^2} = \frac{1}{2\eta_i\, p_i \cdot q},   (41)

where η_i = 1 (−1) for the emission of the soft particle from an out (in) particle, respectively. By utilizing
the above relation, one can obtain the S matrix elements for the soft massless particles in the |q| → 0
limit [11]:

S^{\pm s}(q, \alpha\to\beta) = \frac{1}{(2\pi)^{3/2}} \sum_i \eta_i\, e^s_i\, \frac{[p_i \cdot \epsilon^*_\pm(\hat q)]^s}{(p_i \cdot q)}\; S(\alpha\to\beta).   (42)
In the noncommutative case, from the results of section 2 and by using the above relation,
one can deduce the S⋆ matrix elements for the same process in the noncommutative spacetime:

S^{\pm s}_\star(q, \alpha\to\beta) = \sum_i \mathcal E_i(\tilde q, \alpha\to\beta) \cdot S^\pm_i(q, \alpha\to\beta)
= \sum_i E(\tilde q, \tilde p_i, \tilde p_i + \tilde q) \cdot E_I(\tilde p_1, \cdots, \tilde p_i + \eta_i\tilde q, \cdots, \tilde p_n) \cdot S^\pm_i(q, \alpha\to\beta)
= \sum_i E(\tilde q, \tilde p_i)\, \frac{\eta_i\, e^s_i}{(2\pi)^{3/2}}\, \frac{[p_i \cdot \epsilon^*_\pm(\hat q)]^s}{(p_i \cdot q)} \times E_I(\tilde p_1, \cdots, \tilde p_i + \eta_i\tilde q, \cdots, \tilde p_n) \cdot S^\pm_i(\alpha\to\beta),   (43)

where \mathcal E^i_I = E_I(\tilde p_1, \cdots, \tilde p_i + \tilde q, \cdots, \tilde p_n) are the phase factors of the internal process∗∗. Since all the
\mathcal E^i_I coincide when q → 0 (\mathcal E^i_I = E_I), one obtains in that limit,

S^{\pm s}_\star(q, \alpha\to\beta) = \frac{1}{(2\pi)^{3/2}} \sum_i \eta_i\, e^s_i\, \frac{[p_i \cdot \epsilon^*_\pm(\hat q)]^s}{(p_i \cdot q)}\; S_\star(\alpha\to\beta),   (44)
where we used the relation S⋆(α → β) = E_I · S(α → β). From the transformation properties of the
S⋆ matrix elements, Eq. (28), we obtain

S^{\pm s}_\star(q, \alpha\to\beta) \to \epsilon^{\mu_1 *}_\pm(\hat q) \cdots \epsilon^{\mu_s *}_\pm(\hat q)\, (M^\pm_\star)_{\mu_1\cdots\mu_s}(q, \alpha\to\beta).   (45)

Identifying the S⋆ matrix elements in Eq. (44) and Eq. (45) gives the invariant M⋆ functions for spin 1
and 2:

M^\mu_\star(q, \alpha\to\beta) = \frac{1}{(2\pi)^{3/2}} \sum_i e_i\, \eta_i\, \frac{p^\mu_i}{(p_i \cdot q)}\; S_\star(\alpha\to\beta),
M^{\mu\nu}_\star(q, \alpha\to\beta) = \frac{1}{(2\pi)^{3/2}} \sum_i g_i\, \eta_i\, \frac{p^\mu_i p^\nu_i}{(p_i \cdot q)}\; S_\star(\alpha\to\beta).   (46)
Requiring the twist invariance of the S⋆ matrix elements, Eq. (33), gives:

0 = q_\mu M^\mu_\star(q, \alpha\to\beta) \;\to\; \sum_i \eta_i\, e_i = 0,
0 = q_\mu M^{\mu\nu}_\star(q, \alpha\to\beta) \;\to\; \sum_i g_i\, \eta_i\, p^\nu_i = 0,   (47)

∗∗ The integrals over the internal loop momenta have been suppressed, because they do not affect the final result:
the change in the phase factors occurs only on the external lines.
in general. Hence, one obtains the charge conservation law for spin-1 fields. For s = 2, in order
to satisfy the two relations \sum_i \eta_i p_i = 0 (4-momentum conservation) and \sum_i g_i\, \eta_i\, p^\nu_i = 0, the g_i must
be constant (i.e. independent of the particle species). The universality of this coupling constant, as
the ratio of gravitational mass to inertial mass, shows that the equivalence principle is satisfied even in
the noncommutative spacetime.
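The step from \sum_i g_i\,\eta_i\,p_i^\nu = 0 to universality deserves one explicit line. Writing g_i = g + \delta g_i and using 4-momentum conservation:

```latex
\sum_i g_i\,\eta_i\,p_i^{\nu}
  \;=\; g\underbrace{\sum_i \eta_i\,p_i^{\nu}}_{=\,0}
        \;+\;\sum_i \delta g_i\,\eta_i\,p_i^{\nu}
  \;=\;\sum_i \delta g_i\,\eta_i\,p_i^{\nu}.
```

Since the external momenta of a generic process obey no linear relation other than overall 4-momentum conservation, the remaining sum can vanish for all processes only if every \delta g_i = 0, i.e. g_i = g for all species.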
4 Discussion
We have found that the conservation of charge and the equivalence principle hold even in the
canonical noncommutative spacetime. The derivation was fairly straightforward once we had constructed
the S⋆ matrix for general field theories. The assumption was that the quantum of gravitation
is a massless spin-2 field, and the photon a massless spin-1 field.
We extended the construction of the S⋆ matrix to general fields, especially to massless fields
of integer spin. The twisted Feynman diagrams can be constructed from the same irreducible representations
as those of the untwisted theories, with the same rules except for the extra phase factors at
each vertex. Hence, the same reasoning applies to massive fields.
We use the (s/2, s/2) representation for the field operators mainly because it allows one to obtain the
condition, Eq. (33), by requiring twisted Lorentz invariance. In this representation, since the polarization
vectors are not Lorentz four-vectors, each amplitude for emitting a real soft massless particle violates
the symmetry; we have used this property to derive the conservation laws in this paper. Another
representation for the fields, the (s, 0) ⊕ (0, s) representation, which has the parity symmetry, can be
made Lorentz covariant. Because of this, one cannot derive the conservation
law from that representation by the method used in this paper.
Charge conservation in the noncommutative spacetime is expected from the gauge symmetry and
the noncommutative Noether theorem. However, it is not obvious whether the equivalence principle
is satisfied in the noncommutative spacetime: since the noncommutative spacetime is not locally
Minkowski, and the local symmetry group is not SO(1, 3), one cannot guarantee that the principle is
satisfied. But in our twisted-symmetry context, one can expect the equivalence principle to hold,
because the algebra structure is the same as in the conventional theory even though the coalgebra structure
is different. The applicability of the S-matrix-theoretic proof of the equivalence principle given in
this paper is restricted, because the analysis is perturbative in nature. We expect the conclusion
of this paper to be a stepping stone towards a further understanding of the nature of the
principle in quantum gravity.
This paper gives an example of the usefulness of the twisted symmetry for deriving physically
important relations in the noncommutative spacetime. We think that the twist analysis is well suited to
schematic approaches to noncommutative physics, while the other approaches are more suitable for
explicit calculations, though they are equivalent. We expect further applications of the twisted symmetry
to other issues along this line of approach.
Acknowledgement
I express my deep gratitude to Prof. J. H. Yee for his support. I would like to thank Dr. Jake Lee
for helpful discussions and Dr. A. Tureanu for his comments on the draft. This work was supported
in part by Korea Science and Engineering Foundation Grant No. R01-2004-000-10526-0, and by
the Science Research Center Program of the Korea Science and Engineering Foundation through the
Center for Quantum Spacetime(CQUeST) of Sogang University with grant number R11 - 2005 - 021.
A Exact calculation of the polarization tensor for massless fields
The creation and annihilation operators of the massless field transform under the Lorentz transforma-
tion as [28]
a†(Λp, σ) = e−iσΘ[W (Λ,p)] U(Λ) a†(p, σ) U−1(Λ),
a(Λp, σ) = e+iσΘ[W (Λ,p)] U(Λ) a(p, σ) U−1(Λ), (48)
up to phase factors, where W(Λ, p) = L⁻¹(p)Λ⁻¹L(Λp) denotes the Wigner rotation defined as in
[28] (Appendix C), and Θ[W(Λ, p)] is the angle corresponding to the little-group element.
Hereafter, we abbreviate Θ[W(Λ, p)] as Θ(Λ, p). Note that we use U(Λ) instead of Uθ(Λ)
here; this follows immediately from the properties of the twist in section 2.1.
The field Ψ̂ᴬθ transforms as

[D^s(\Lambda^{-1})]^A{}_B\, \hat\Psi^B_\theta(\Lambda x) = U(\Lambda)\, \hat\Psi^A_\theta(x)\, U^{-1}(\Lambda).   (49)
In order to satisfy the two relations (48) and (49), the polarization tensor should transform as

\epsilon^A_\theta(p, \sigma) = e^{-i\sigma\Theta(\Lambda,p)}\, [D^s(\Lambda^{-1})]^A{}_B\, \epsilon^B_\theta(\Lambda p, \sigma).   (50)

When Λ = W and p = k, the above relation becomes

[D^s(W)]^A{}_B\, \epsilon^B_\theta(k, \sigma) = e^{-i\sigma\Theta(W,k)}\, \epsilon^A_\theta(k, \sigma),   (51)
where k denotes the standard momentum. The Wigner rotation can be written as W (φ, α, β) =
R(φ)T (α, β) because the little group is isomorphic to ISO(2) for massless fields [28].
Suppose that the field transforms as (m,n) representation of spin s (m + n = s). When the φ, α
and β are infinitesimal, Ds(W ) can be written as
Ds(W ) ≃ 1− iφ(M3 +N3) + (α + iβ)(M1 − iM2) + (α− iβ)(N1 + iN2). (52)
For M_- \equiv M_1 - iM_2, N_+ \equiv N_1 + iN_2, and Θ → φ we have:

(M_3 + N_3)\, \epsilon_\theta(k, \sigma) = \sigma\, \epsilon_\theta(k, \sigma),
M_-\, \epsilon_\theta(k, \sigma) = 0,
N_+\, \epsilon_\theta(k, \sigma) = 0,   (53)

which gives

M_3\, \epsilon_\theta(k, \sigma) = -m\, \epsilon_\theta(k, \sigma),
N_3\, \epsilon_\theta(k, \sigma) = +n\, \epsilon_\theta(k, \sigma).   (54)
The ǫθ(k, σ) satisfies the same equations as ǫ(k, σ). Since the highest or lowest weight corresponds to
a unique state, one obtains the same polarization tensor as the solution: ǫθ(k, σ) = ǫ(k, σ).
B Properties of the polarization vectors
Here we summarize the properties of the polarization vector; the properties of the polarization
tensors of other ranks, Eq. (24), follow from it.
Solving Eq. (53) for σ = ±1 gives the explicit form of the polarization vector for the standard
momentum as
ǫ^μ_±(k) ≡ (1/√2)(1, ±i, 0, 0), (55)
where we made the conventional choice of phase. The polarization vector of momentum p is
defined as
ǫ^μ_±(p) = [L(p)]^μ_ν ǫ^ν_±(k), (56)
where L(p) is the Lorentz transformation which takes k to p, i.e., p^μ = [L(p)]^μ_ν k^ν. Then the well-known
properties of the polarization vectors can be deduced [11]:
p_μ ǫ^μ_±(p̂) = 0, (57)
ǫ^*_{±μ}(p̂) ǫ^μ_±(p̂) = 1,  ǫ_{±μ}(p̂) ǫ^μ_±(p̂) = 0, (58)
ǫ^{μ*}_±(p̂) = ǫ^μ_∓(p̂),  ǫ^0_±(p̂) = 0, (59)
∑_± ǫ^μ_±(p̂) ǫ^{ν*}_±(p̂) = η^{μν} + (p̃^μ p^ν + p̃^ν p^μ)/2|p|² ≡ Π^{μν},  p̃ ≡ (|p|, −p), (60)
∑_± ǫ^{μ_1}_±(p̂) ǫ^{μ_2}_±(p̂) ǫ^{ν_1*}_±(p̂) ǫ^{ν_2*}_±(p̂) = (1/2){Π^{μ_1ν_1}(p̂)Π^{μ_2ν_2}(p̂) + Π^{μ_1ν_2}(p̂)Π^{μ_2ν_1}(p̂) − Π^{μ_1μ_2}(p̂)Π^{ν_1ν_2}(p̂)}. (61)
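These identities are straightforward to verify numerically. The sketch below is our own illustration, not part of the paper: it assumes the conventional components ǫ^μ_± = (0, 1, ±i, 0)/√2 for p̂ along the 3-axis and metric η = diag(−1, 1, 1, 1), consistent with Eqs. (57)–(60) as written, though the paper's exact component convention may differ.

```python
import numpy as np

# Assumed conventions (ours): p along the 3-axis, eta = diag(-1, 1, 1, 1),
# helicity polarization vectors eps_± = (0, 1, ±i, 0)/sqrt(2).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
E = 2.0
p = np.array([E, 0.0, 0.0, E])        # massless momentum along the 3-axis
p_til = np.array([E, 0.0, 0.0, -E])   # \tilde p = (|p|, -p)
eps = {s: np.array([0.0, 1.0, s * 1j, 0.0]) / np.sqrt(2.0) for s in (+1, -1)}

for e in eps.values():
    assert abs(p @ eta @ e) < 1e-12                 # Eq. (57): transversality
    assert abs(e.conj() @ eta @ e - 1.0) < 1e-12    # Eq. (58): normalization
    assert abs(e @ eta @ e) < 1e-12                 # Eq. (58): eps.eps = 0

assert np.allclose(eps[+1].conj(), eps[-1])         # Eq. (59): conjugation

# Eq. (60): the helicity sum reproduces Pi^{mu nu}
Pi = eta + (np.outer(p_til, p) + np.outer(p, p_til)) / (2.0 * E**2)
S = sum(np.outer(e, e.conj()) for e in eps.values())
assert np.allclose(S, Pi)
print("polarization identities verified")
```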
The polarization "vectors" are not Lorentz four-vectors††; rather, they transform as
ǫ^μ_±(p) = e^{∓iΘ(Λ,p)} [D^s_ǫ(Λ, p)]^μ_ν ǫ^ν_±(Λp),
[D^s_ǫ(Λ, p)]^μ_ν = (Λ^{-1})^μ_ν − (p^μ/|p|)(Λ^{-1})^0_ν. (62)
For general spin s, the polarization tensor can be written as
ǫ^A_σ(p) = e^{∓isΘ(Λ,p)} [D^s_ǫ(Λ, p)]^A_B ǫ^B_σ(Λp), (63)
up to a phase factor.
††This comes from the fact that the little group is not semisimple for massless fields. Translations in the little
group, which is isomorphic to ISO(2), generate the gradient term in the transformations.
C Invariant M⋆ function for massless field
Let P denote a shorthand notation for the external lines, P ≡ (p_1, . . . , p_n), and K the corresponding
standard momenta‡‡, K ≡ (k_1, . . . , k_n). There exists a unique Lorentz transformation satisfying P ≡ L_P K.
Then the relation between the Lorentz group and the little group can be described symbolically by the
commuting square

K ──W(Λ,P)──▶ K
│L_{ΛP}        │L_P
▼              ▼
ΛP ──Λ^{-1}──▶ P

where W(Λ, P) is the Wigner transformation to which a Lorentz transformation Λ and the momenta
P correspond. Since the twist does not change the group properties, the above relation also holds for the
twisted symmetry group. The S⋆ matrix elements transform as
S⋆[P] = D^s_θ[W(Λ, P)] S⋆[ΛP]
      = D^s_θ[L_P^{-1} · Λ^{-1} · L_{ΛP}] S⋆[ΛP], (64)
where the indices for the external lines are suppressed. From the golden rule and (48), the explicit
transformation formula for S⋆[P] can be obtained:
S⋆[P] = (N(ΛP)/N(P)) e^{±isΘ(Λ,P)} S⋆[ΛP], (65)
where N denotes the corresponding normalization factor.
If one defines M⋆[P] through S⋆[P] = D^s_θ[L_P^{-1}] M⋆[P], one can show that M⋆[P] transforms as
M⋆[P] = D^s_θ[Λ^{-1}] M⋆[ΛP], (66)
i.e., it is twist invariant. We will call it an invariant M⋆ function, as in [29, 30].
Let us find the invariant M⋆ function for massless fields. First, we define the quantities for the
standard momenta K. The choice for M⋆[K] is
M^A_⋆[K] = N(K) ǫ^A[K] S⋆[K]. (67)
If we define M⋆[P] as
M^A_⋆[P] = [D^s_θ(L_P)]^A_B M^B_⋆[K], (68)
then one can easily show that M^A_⋆[P] is twist invariant.
By using the explicit form of D^s_ǫ in (62) and k_{b_j} M^{b_1···b_s}_⋆[K] = 0, one obtains the relation
ǫ^C[P]^* (M⋆)_C[P] = e^{±isΘ(L_P^{-1},P)} [D_ǫθ(L_P^{-1}, P)]^C_A ǫ^A[K]^* [D_θ(L_P)]^B_C (M⋆)_B[K]
= e^{±isΘ(L_P^{-1},P)} ǫ^A[K]^* [D_θ(L_P) · D_ǫθ(L_P^{-1}, P)]^B_A (M⋆)_B[K]
= e^{±isΘ(L_P^{-1},P)} ǫ^A[K]^* (M⋆)_A[K]
= e^{±isΘ(L_P^{-1},P)} N(K) S⋆[K], (69)
‡‡ see [11],[29].
where we used the relation
[D_θ(L_P) · D_ǫθ(L_P^{-1}, P)]^B_A (M⋆)_B[K] ≡ [D(L_P^{-1}) · D_ǫ(L_P^{-1}, P)]^{b_1···b_s}_{a_1···a_s} (M⋆)_{b_1···b_s}[K]
= (δ^{b_1}_{a_1} − (k^{b_1}/|k|)(L_P)^0_{a_1}) · · · (δ^{b_s}_{a_s} − (k^{b_s}/|k|)(L_P)^0_{a_s}) (M⋆)_{b_1···b_s}[K]
= (M⋆)_{a_1···a_s}[K]. (70)
By setting Λ = L_P^{-1} in (65), one can see that (69) equals N(P) S⋆[P]. Thus, the desired form of S⋆[P] is
S⋆[P] = N(P)^{-1} ǫ^A[P]^* M^A_⋆[P]. (71)
References
[1] S. Doplicher, K. Fredenhagen, and J. E. Roberts, Comm. Math. Phys. 172, 187 (1995).
[2] H. J. Groenewold, Physica 12, 405 (1946); J. E. Moyal, Proc. Cambridge Phil. Soc. 45, 99
(1949); H. Weyl, Quantum mechanics and group theory, Z. Phys. 46, 1 (1927).
[3] M. Chaichian, P. P. Kulish, K. Nishijima, and A. Tureanu, Phys. Lett. B 604, 98(2004).
[4] R. Oeckl, Nucl. Phys. B 581, 559 (2000).
[5] J. Wess, hep-th/0408080.
[6] S. Majid, H. Ruegg, Phys. Lett. B 334, 348 (1994).
[7] J. Lukierski, A. Nowicki, H. Ruegg and V. N. Tolstory, Phys. Lett. B 264, 331 (1991).
[8] M. Chaichian, P. Presnajder, and A. Tureanu, Phys. Rev. Lett. 94, 151602 (2005).
[9] A. P. Balachandran, G. Mangano, A. Pinzul and S. Vaidya, Int. J. Mod. Phys. A 21, 3111 (2006).
[10] J. G. Bu, H. C. Kim, Y. Lee, C. H. Vac, and J. H. Yee, Phys. Rev. D 73, 125001 (2006).
[11] S. Weinberg, Phys. Rev. 135, B1049 (1964).
[12] S. Weinberg, Phys. Rev. 133, B1318 (1964).
[13] S. Weinberg, Phys. Rev. 134, B882 (1964).
[14] L. Bonora, M. Schnabl, M. M. Sheikh-Jabbari, A. Tomasiello, Nucl. Phys. B 589, 461 (2000).
[15] J. Madore, S. Schraml, P. Schupp, J. Wess, Eur. Phys. J. C16, 161 (2000).
[16] M. Chaichian, P. Presnajder, M. M. Sheikh-Jabbari, A. Tureanu, Phys. Lett. B 526, 132 (2002).
[17] A. H. Fatollahi, H. Mohammadzadeh, Eur. Phys. J. C 36 113 (2004).
[18] V. O. Rivelles, Phys. Lett. B 558, 191 (2003).
[19] M. Banados, O. Chandia, N. Grandi, F. A. Schaposnik, G. A. Silva, Phys. Rev. D64, 084012
(2001).
[20] P. Aschieri, C. Blohmann, M. Dimitrijevic, F. Meyer, P. Schupp, J. Wess, Class. Quant. Grav.
22, 3511 (2005).
[21] R. Banerjee, H. S. Yang, Nucl. Phys. B708, 434 (2005).
[22] J. Ellis, N. E. Mavromatos, D. V. Nanopoulos, A. S. Sakharov, Int. J. Mod. Phys. A19, 4413
(2004).
[23] G. Z. Adunas, E. Rodriguez-Milla, D. V. Ahluwalia, Gen. Rel. Grav. 33, 183 (2001).
[24] Y. N. Obukhov, Phys. Rev. Lett. 86, 192 (2001).
[25] S. Majid, Foundations of Quantum Group Theory, Cambridge University Press, (1995).
[26] T. Filk, Phys. Lett. B 376, 53 (1996).
[27] E. Joung, J. Mourad, hep-th/0703245.
[28] S. Weinberg, The Quantum Theory of Fields, Vol.I, Cambridge University Press, (1996).
[29] H. Stapp, Phys. Rev. 125, 2139 (1962).
[30] A. O. Barut, I. Muzinich, D. N. Williams, Phys. Rev. 130, 442 (1963).
0704.1806 | Critical Scaling of Shear Viscosity at the Jamming Transition
Peter Olsson¹ and S. Teitel²
¹Department of Physics, Umeå University, 901 87 Umeå, Sweden
²Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627
(Dated: July 30, 2021)
We carry out numerical simulations to study transport behavior about the jamming transition of
a model granular material in two dimensions at zero temperature. Shear viscosity η is computed as
a function of particle volume density ρ and applied shear stress σ, for diffusively moving particles
with a soft core interaction. We find an excellent scaling collapse of our data as a function of the
scaling variable σ/|ρc − ρ|^∆, where ρc is the critical density at σ = 0 (“point J”), and ∆ is the
crossover scaling critical exponent. We define a correlation length ξ from velocity correlations in
the driven steady state, and show that it diverges at point J. Our results support the assertion that
jamming is a true second order critical phenomenon.
PACS numbers: 45.70.-n, 64.60.-i, 83.80.Fg
Keywords:
In granular materials, or other spatially disordered
systems such as colloidal glasses, gels, and foams, in
which thermal fluctuations are believed to be negligible,
a jamming transition has been proposed: upon increas-
ing the volume density (or “packing fraction”) of parti-
cles ρ above a critical ρc, the sudden appearance of a
finite shear stiffness signals a transition between flowing
liquid and rigid (but disordered) solid states [1]. It has
further been proposed by Liu and Nagel and co-workers
[2, 3] that this jamming transition is a special second or-
der critical point (“point J”) in a wider phase diagram
whose axes are volume density ρ, temperature T , and ap-
plied shear stress σ (the latter parameter taking one out
of equilibrium to non-equilibrium driven steady states).
A surface in this three dimensional parameter space then
separates jammed from flowing states, and the intersec-
tion of this surface with the equilibrium ρ − T plane at
σ = 0 is related to the structural glass transition.
Several numerical [3, 4, 5, 6, 7, 8, 9, 10], theoretical
[11, 12, 13, 14] and experimental [5, 15, 16, 17, 18] works
have investigated the jamming transition, mostly by con-
sidering behavior as the transition is approached from the
jammed side. In this work we consider the flowing state,
computing the shear viscosity η under applied uniform
shear stress. Previous works have simulated the flow-
ing response to applied shear in glassy systems at finite
temperature [19, 20, 21], and in foams [4] and granular
systems [10] at T = 0, ρ > ρc. Here we consider the
ρ − σ plane at T = 0, showing for the first time that,
near point J, η^{-1}(ρ, σ) collapses to a universal scaling function of the variable σ/|ρc − ρ|^∆ for both ρ < ρc and
ρ > ρc. We further define a correlation length ξ from
steady state velocity correlations, and show that it di-
verges at point J. Our results support that jamming is a
true second order critical phenomenon.
Following O’Hern et al. [3], we simulate frictionless soft
disks in two dimensions (2D) using a bidisperse mixture
with equal numbers of disks of two different radii. The
radii ratio is 1.4 and the interaction between the particles is
V(rij) = ǫ(1 − rij/dij)²/2 for rij < dij,
V(rij) = 0 for rij ≥ dij, (1)
where rij is the distance between the centers of two par-
ticles i and j, and dij is the sum of their radii. Particles
are non-interacting when they do not touch, and inter-
act with a harmonic repulsion when they overlap. We
measure length in units such that the smaller diameter
is unity, and energy in units such that ǫ = 1. A system
of N disks in an area Lx ×Ly thus has a volume density
ρ = Nπ(0.5² + 0.7²)/(2LxLy). (2)
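As a concrete illustration of Eqs. (1)–(2), the pair potential and the box-size bookkeeping can be sketched as follows (our own minimal sketch, not the authors' simulation code; the function names are ours):

```python
import math

def pair_potential(r, d, eps=1.0):
    """Harmonic soft-core repulsion of Eq. (1): zero unless the disks
    overlap, i.e. unless their center distance r is below the sum of
    their radii d."""
    return 0.5 * eps * (1.0 - r / d) ** 2 if r < d else 0.0

def box_length(N, rho):
    """Side L of the square box Lx = Ly = L that realizes volume density
    rho = N*pi*(0.5**2 + 0.7**2)/(2*L*L), Eq. (2), for the half-and-half
    bidisperse mixture of disk radii 0.5 and 0.7."""
    return math.sqrt(N * math.pi * (0.5**2 + 0.7**2) / (2.0 * rho))

print(pair_potential(1.3, 1.2))   # 0.0: non-touching disks do not interact
print(box_length(1024, 0.8415))
```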
To model an applied uniform shear stress, σ, we first
use Lees-Edwards boundary conditions [22] to introduce
a uniform shear strain, γ. Defining particle i’s position as
ri = (xi+γyi, yi), we apply periodic boundary conditions
on the coordinates xi and yi in an Lx × Ly system. In
this way, each particle upon mapping back to itself under
the periodic boundary condition in the ŷ direction, has
displaced a distance ∆x = γLy in the x̂ direction, re-
sulting in a shear strain ∆x/Ly = γ. When particles do
not touch, and hence all mutual forces vanish, xi and yi
are constant and a time dependent strain γ(t) produces a
uniform shear flow, dri/dt = yi(dγ/dt)x̂. When particles
touch, we assume a diffusive response to the inter-particle
forces, as would be appropriate if the particles were im-
mersed in a highly viscous liquid or resting upon a rough
surface with high friction. This results in the following
equation of motion, which was first proposed as a model
for sheared foams [4],
dri/dt = −D ∑_{j≠i} ∂V(rij)/∂ri + yi (dγ/dt) x̂. (3)
The strain γ is then treated as a dynamical variable,
obeying the equation of motion,
dγ/dt = Dγ [ LxLy σ − ∑_{i<j} ∂V(rij)/∂γ ], (4)
where the applied stress σ acts like an external force on
γ and the interaction terms V (rij) depend on γ via the
particle separations, rij = ([xi−xj]Lx +γ[yi−yj]Ly , [yi−
yj ]Ly), where by [. . .]Lµ we mean that the difference is
to be taken, invoking periodic boundary conditions, so
that the result lies in the interval (−Lµ/2, Lµ/2]. The
constants D and Dγ are set by the dissipation of the
medium in which the particles are embedded; we take
units of time such that D = Dγ ≡ 1.
In a flowing state at finite σ > 0, the sum of the inter-
action terms is of order O(N) so that the right hand side
of Eq. (4) is O(1). The strain γ(t) increases linearly in
time on average, leading to a sheared flow of the particles
with average velocity gradient dvx/dy = 〈dγ/dt〉, where
vx(y) is the average velocity in the x̂ direction of the par-
ticles at height y. We then measure the shear viscosity, defined by,
η ≡ σ/(dvx/dy) = σ/〈dγ/dt〉. (5)
We expect η−1 to vanish in a jammed state.
We integrate the equations of motion, Eqs. (3)-(4),
starting from an initial random configuration, using Heun's method. The time step ∆t is varied according to
system size to ensure our results are independent of ∆t.
We consider a fixed number of particles N , in a square
system L ≡ Lx = Ly, and vary the volume density ρ by
adjusting the length L according to Eq. (2). We simulate
for times ttot such that the total relative displacement per
unit length transverse to the direction of motion is typ-
ically γ(ttot) ∼ 10, with γ(ttot) ranging between 1 and
200 depending on the particular system parameters.
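Heun's method is the explicit trapezoidal predictor–corrector scheme; a generic single-step sketch (our own illustration, independent of the authors' code) is:

```python
def heun_step(f, y, t, dt):
    """One step of Heun's method (explicit trapezoidal rule) for
    dy/dt = f(t, y): an Euler predictor followed by an averaging
    corrector, second-order accurate in dt."""
    k1 = f(t, y)
    y_pred = [yi + dt * ki for yi, ki in zip(y, k1)]   # Euler predictor
    k2 = f(t + dt, y_pred)
    return [yi + 0.5 * dt * (a + b) for yi, a, b in zip(y, k1, k2)]

# Sanity check on dy/dt = -y, whose exact solution is e^{-t}:
y, t, dt = [1.0], 0.0, 0.01
for _ in range(100):
    y = heun_step(lambda t, y: [-yi for yi in y], y, t, dt)
    t += dt
print(y[0])  # close to exp(-1) ≈ 0.3679
```

In the simulation, f would collect the interparticle forces of Eqs. (3)–(4) acting on all particle coordinates and on the strain γ.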
In Fig. 1 we show our results for η−1 using a fixed
small shear stress, σ = 10−5, representative of the σ → 0
limit. Our raw results are shown in Fig. 1a for several
different numbers of particles N from 64 to 1024. Com-
paring the curves for different N as ρ increases, we see
that they overlap for some range of ρ, before each drops
discontinuously into a jammed state. As N increases, the
onset value of ρ for jamming increases to a limiting value
ρc ≃ 0.84 (consistent with the value for random close
packing [3]) and η−1 vanishes continuously. For finite
N , systems jam below ρc because there is always a fi-
nite probability to find a configuration with a force chain
spanning the width of the system, thus causing it to jam;
and at T = 0, once a system jams, it remains jammed
for all further time. As the system evolves dynamically
with increasing simulation time, it explores an increasing
region of configuration space, and ultimately finds a con-
figuration that causes it to jam. The statistical weight
FIG. 1: (color online) a) Plot of inverse shear viscosity η−1
vs volume density ρ for several different numbers of particles
N , at constant small applied shear stress σ = 10−5. As N
increases, one sees jamming at a limiting value of the density ρc ∼ 0.84. b) Log-log replot of the data of (a) as
η^{-1} vs ρc − ρ, with ρc = 0.8415. The dashed line has slope β = 1.65
indicating the continuous algebraic vanishing of η−1 at ρc with
a critical exponent β.
of such jamming configurations decreases, and hence the
average time required to jam increases, as one either de-
creases ρ, or increases N [3]. In the limit N → ∞, we
expect jamming will occur in finite time only for ρ ≥ ρc.
In Fig. 1b we show a log-log plot of η−1 vs ρc − ρ, us-
ing a value ρc = 0.8415. We see that the data in the
unjammed state is well approximated by a straight line
of slope β = 1.65, giving η^{-1} ∼ |ρ − ρc|^β in agreement
with the expectation that point J is a second order phase
transition.
If point J is indeed a true critical point, one expects
that its influence will be felt also at finite values of the
stress σ, with η−1 obeying a typical scaling law,
η^{-1}(ρ, σ) = |ρ − ρc|^β f_±(σ/|ρ − ρc|^∆). (6)
Here z ≡ σ/|ρ − ρc|^∆ is the crossover scaling variable, ∆ is
the crossover scaling critical exponent, and f_−(z), f_+(z)
are the two branches of the crossover scaling function for
ρ < ρc and ρ > ρc respectively.
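In practice, testing Eq. (6) amounts to rescaling each measured point by powers of its distance to ρc and checking that all densities fall onto two branches; a minimal sketch (our own, with the paper's fitted values ρc = 0.8415, β = 1.65, ∆ = 1.2 as defaults) is:

```python
def rescale_for_collapse(rho, sigma, inv_eta, rho_c=0.8415, beta=1.65, delta=1.2):
    """Map one measured point (rho, sigma, eta^-1) onto the scaling axes of
    Eq. (6): z = sigma/|rho - rho_c|^delta on the x-axis and
    eta^-1/|rho - rho_c|^beta on the y-axis.  If scaling holds, points from
    all densities collapse onto the two branches f_-(z) and f_+(z)."""
    gap = abs(rho - rho_c)
    z = sigma / gap**delta
    y = inv_eta / gap**beta
    branch = '-' if rho < rho_c else '+'
    return z, y, branch
```

Exactly at ρ = ρc the rescaling is singular; there the large-z limit f_±(z) ∼ z^{β/∆} gives η^{-1} ∼ σ^{β/∆} directly.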
In Fig. 2 we show a log-log plot of inverse shear vis-
cosity η−1 vs applied shear stress σ, for several different
values of volume density ρ. Our results are for systems
large enough that we believe finite size effects are negli-
gible. We use N = 1024 for ρ < 0.844 and N = 2048 for
ρ ≥ 0.844. Again we see that ρc ≃ 0.8415 separates two
limits of behavior. For ρ < ρc, log η^{-1} is convex in log σ,
decreasing to a finite value as σ → 0. For ρ > ρc, log η^{-1}
is concave in log σ, decreasing towards zero as σ → 0.
The dashed straight line, separating the two regions of
behavior, indicates the power law dependence that is ex-
pected exactly at ρ = ρc (see below). Similar power
law behavior at ρc was recently found in simulations of a
three dimensional granular material [23].
FIG. 2: (color online) Plot of inverse shear viscosity η−1 vs
applied shear stress σ for several different values of the vol-
ume density ρ. The dashed line represents the power law
dependence expected exactly at ρ = ρc and has a slope
β/∆ = 1.375. Solid lines are guides to the eye. Points la-
beled σ = 0.0012 correspond to densities ρ = 0.870, 0.872,
0.874, 0.876, and 0.878.
In Fig. 3 we replot the data of Fig. 2 in the scaled
variables η^{-1}/|ρ − ρc|^β vs σ/|ρ − ρc|^∆. Using ρc = 0.8415,
β = 1.65 (the same values used in Fig. 1b) and ∆ = 1.2,
we find an excellent scaling collapse in agreement with
the prediction of Eq. (6). As the scaling variable z → 0,
f_−(z) → constant; this gives the vanishing of η^{-1} ∼ |ρ −
ρc|^β at σ = 0. As z → ∞, both branches of the scaling
function approach a common curve, f_±(z) ∼ z^{β/∆}, so
that precisely at ρ = ρc, η^{-1} ∼ σ^{β/∆} as σ → 0 [24].
This is shown as the dashed line in both Figs. 3 and 2. A
similar scaling collapse of η has been found in simulations
[20] of a sheared Lennard-Jones glass, as a function of
temperature and applied shear strain rate γ̇, but only
above the glass transition, T > Tc. By comparing the
goodness of the scaling collapse as parameters are varied,
we estimate the accuracy of the critical exponents to be
roughly, β = 1.7± 0.2 and ∆ = 1.2± 0.2.
That the crossover scaling exponent ∆ > 0, implies
that σ is a relevant variable in the renormalization group
sense, and that critical behavior at finite σ should be
in a different universality class from the jamming tran-
sition at point J (i.e. σ = 0). The nature of jamming
at finite σ > 0 will be determined by the behavior of
the branch of the crossover scaling function f+(z), that
describes behavior for ρ > ρc. From Fig. 3 we see that
f+(z) is a decreasing function of z. If f+(z) vanishes only
when z → 0, then Eq. (6) implies that η−1 vanishes for
ρ > ρc only when σ = 0, and so there will be no jamming
at finite σ > 0. If, however, f+(z) vanishes at some fi-
nite z0, then η^{-1} will vanish whenever σ/(ρ − ρc)^∆ = z0;
there will then be a line of jamming transitions emanating
from point J in the ρ − σ plane given by the curve
ρ*(σ) = ρc + (σ/z0)^{1/∆}. If f_+(z) vanishes continuously
FIG. 3: (color online) Plot of scaled inverse viscosity η^{-1}/|ρ −
ρc|^β vs scaled shear stress z ≡ σ/|ρ − ρc|^∆ for the data of
Fig. 2. We find an excellent collapse to the scaling form of
Eq. (6) using values ρc = 0.8415, β = 1.65 and ∆ = 1.2.
The dashed line represents the large z asymptotic dependence,
∼ z^{β/∆}. Data point symbols correspond to those used in
Fig. 2.
at z0, jamming at finite σ will be like a second order
transition; if f+(z) jumps discontinuously to zero at z0,
it will be like a first order transition. Such a first order
like transition has been reported in simulations [20, 21]
of sheared glasses at finite temperature below the glass
transition, T < Tc. However, recent simulations [10] of
a granular system at T = 0, ρ > ρc, showed that a sim-
ilar first order like behavior was a finite size effect that
vanished in the thermodynamic limit. With these obser-
vations, we leave the question of criticality at finite σ to
future work.
The critical scaling found in Fig. 3 strongly suggests
that point J is indeed a true second order phase transi-
tion, and thus implies that there ought to be a diverg-
ing correlation length ξ at this point. Measurements
of dynamic (time dependent) susceptibilities have been
used to argue for a divergent length scale in both the
thermally driven glass transition [25], and the density
driven jamming transition [17]. Here we consider the
equal time transverse velocity correlation function in the
shear driven steady state,
g(x) = 〈vy(xi, yi)vy(xi + x, yi)〉 , (7)
where vy(xi, yi) is the instantaneous velocity in the ŷ
direction, transverse to the direction of the average shear
flow, for a particle at position (xi, yi). The average is over
particle positions and time. In the inset to Fig. 4 we plot
g(x)/g(0) vs x for three different values of ρ at fixed σ =
10−4 and number of particlesN = 1024. We see that g(x)
decreases to negative values at a well defined minimum,
before decaying to zero as x increases. We define ξ to be
the position of this minimum. That g(ξ) < 0, indicates
FIG. 4: (color online) Inset: Normalized transverse velocity
correlation function g(x)/g(0) vs longitudinal position x for
N = 1024 particles, applied shear stress σ = 10−4, and vol-
ume densities ρ = 0.830, 0.834 and 0.838. The position of
the minimum determines the correlation length ξ. Main fig-
ure: Plot of scaled inverse correlation length ξ^{-1}/|ρ − ρc|^ν vs
scaled shear stress z ≡ σ/|ρ − ρc|^∆ for the data of Fig. 2. We
find a good scaling collapse using values ρc = 0.8415, ∆ = 1.2
(the same as in Fig. 3) and ν = 0.6. Data point symbols
correspond to those used in Fig. 2.
that regions separated by a distance ξ are anti-correlated.
We can thus interpret the sheared flow in the unjammed
state as due to the rotation of correlated regions of length
ξ. Similar behavior, leading to a similar definition of
ξ, has previously been found [26] in correlations of the
nonaffine displacements of particles in a Lennard-Jones
glass, in response to small elastic distortions.
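The construction of ξ from Eq. (7) reduces to binning velocity products by longitudinal separation and locating the minimum of the resulting curve; a minimal sketch (our own illustration, with hypothetical array inputs) is:

```python
import numpy as np

def transverse_velocity_correlation(x, vy, box_L, n_bins=50):
    """Estimate g(x) of Eq. (7): the equal-time correlation of transverse
    velocities vy between particles separated longitudinally by x, using
    minimum-image separations in a periodic box of length box_L."""
    dx = x[:, None] - x[None, :]
    dx -= box_L * np.round(dx / box_L)            # minimum-image convention
    prod = (vy[:, None] * vy[None, :]).ravel()
    sep = np.abs(dx).ravel()
    edges = np.linspace(0.0, box_L / 2, n_bins + 1)
    idx = np.digitize(sep, edges) - 1
    g = np.array([prod[idx == i].mean() for i in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, g

def correlation_length(centers, g):
    """xi is defined in the text as the position of the minimum of g(x)."""
    return centers[np.argmin(g)]

# Synthetic check: an anti-correlated velocity pattern of wavelength 5
# has its g(x) minimum at half a wavelength.
x = np.linspace(0.0, 10.0, 200, endpoint=False)
vy = np.cos(2 * np.pi * x / 5.0)
centers, g = transverse_velocity_correlation(x, vy, 10.0)
print(correlation_length(centers, g))  # near 2.5
```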
As with viscosity, we expect the correlation length
ξ(ρ, σ) to obey a scaling equation similar to Eq. (6). We
consider here the inverse correlation length ξ−1, which
like η−1 should vanish at the jamming transition, obey-
ing the scaling equation,
ξ^{-1}(ρ, σ) = |ρ − ρc|^ν h_±(σ/|ρ − ρc|^∆). (8)
The correlation length critical exponent is ν, but the
crossover exponent ∆ remains the same as for the vis-
cosity.
In Fig. 4 we plot the scaled inverse correlation length,
ξ^{-1}/|ρ − ρc|^ν vs the scaled stress, σ/|ρ − ρc|^∆. Using ρc =
0.8415 and ∆ = 1.2, as was found for the scaling of η^{-1},
we now find a good scaling collapse for ξ−1 by taking the
value ν = 0.6. By comparing the goodness of the collapse
as ν is varied, we estimate ν = 0.6±0.1. From the scaling
equation Eq. (8) we expect both branches of the scaling
function to approach the power law h_±(z) ∼ z^{ν/∆} as
z → ∞, so that ξ^{-1} ∼ σ^{ν/∆} as σ → 0 at ρ = ρc [24].
This is shown as the dashed line in Fig. 4. Our result
is consistent with the conclusion “ν is between 0.6 and
0.7” of Drocco et al. [7] for the flowing phase, ρ < ρc.
It also agrees with ν = 0.71 ± 0.08 found by O’Hern et
al. [3] from a finite size scaling argument. Wyart et al.
[14] have proposed a diverging length scale with exponent
ν = 0.5 by considering the vibrational spectrum of soft
modes approaching point J from the jammed side, ρ > ρc.
While our results cannot rule out ν = 0.5, our scaling
collapse in Fig. 4 does seem somewhat better when using
the larger value 0.6.
This work was supported by Department of Energy
grant DE-FG02-06ER46298 and by the resources of the
Swedish High Performance Computing Center North
(HPC2N). We thank J. P. Sethna, L. Berthier, M. Wyart,
J. M. Schwarz, N. Xu, D. J. Durian, A. J. Liu and
S. R. Nagel for helpful discussion.
[1] Jamming and Rheology, edited by A. J. Liu and
S. R. Nagel (Taylor & Francis, New York, 2001).
[2] A. J. Liu and S. R. Nagel, Nature 396, 21 (1998).
[3] C. S. O’Hern et al. Phys. Rev. E 68, 011306 (2003).
[4] D. J. Durian, Phys. Rev. Lett. 75, 4780 (1995) and Phys.
Rev. E 55, 1739 (1997).
[5] H. A. Makse, D. L. Johnson and L. M. Schwartz, Phys.
Rev. Lett. 84, 4160 (2000).
[6] C. S. O’Hern et al., Phys. Rev. Lett. 86, 000111 (2001)
and 88, 075507 (2002).
[7] J. A. Drocco et al., Phys. Rev. Lett. 95, 088001 (2005).
[8] L. E. Silbert, A. J. Liu and S. R. Nagel, Phys. Rev. Lett.
95, 098301 (2005) and Phys. Rev. E 73, 041304 (2006).
[9] W. G. Ellenbroek et al., Phys. Rev. Lett. 97, 258001
(2006).
[10] N. Xu and C. S. O’Hern, Phys. Rev. E 73, 061303 (2006).
[11] J. M. Schwarz, A. J. Liu and L. Q. Chayes, Europhys.
Lett. 73, 560 (2006).
[12] C. Toninelli, G. Biroli and D. S. Fisher, Phys. Rev. Lett.
96, 035702 (2006).
[13] S. Henkes and B. Chakraborty, Phys. Rev. Lett. 95,
198002 (2005).
[14] M. Wyart, S. R. Nagel, T. A. Witten, Europhys. Lett.
72, 486-492 (2005); M. Wyart et al., Phys. Rev. E 72,
051306 (2005); C. Brito and M. Wyart, Europhys. Lett.
76, 149 (2006).
[15] V. Trappe et al., Nature (London) 411, 772 (2001).
[16] T. S. Majmudar et al., Phys. Rev. Lett. 98, 058001
(2007).
[17] A. .S. Keys et al., Nature physics 3, 260 (2007).
[18] M. Schröter et al., Europhys Lett. 78, 44004 (2007).
[19] R. Yamamoto and A. Onuki, Phys. Rev. E 58, 3515
(1998).
[20] L. Berthier and J.-L. Barrat, J. Chem. Phys. 116, 6228
(2002).
[21] F. Varnik, L. Bocquet and J.-L. Barrat, J. Chem. Phys.
120, 2788, (2004).
[22] D. J. Evans and G. P. Morriss, Statistical Mechanics of
Non-equilibrium Liquids (Academic, London, 1990).
[23] T. Hatano, M. Otsuki and S. Sasa, cond-mat/0607511.
[24] In general, one should consider nonlinear scaling variables.
In our case, the most important correction would
be to replace ρ − ρc in Eq. (6) by gρ(ρ, σ) ≡ ρ − ρc + cσ;
this could lead to noticeable corrections to our scaling
equation near ρ = ρc. However, since we find ∆ > 0.5,
our conclusion that η^{-1} ∼ σ^{β/∆} at ρ = ρc remains valid.
See, A. Aharony and M. E. Fisher, Phys. Rev. B 27, 4394
(1983).
[25] L. Berthier et al., Science 310, 1797 (2005).
[26] A. Tanguy et al., Phys. Rev. B 66, 174205 (2002).
0704.1807 | Polar actions on compact Euclidean hypersurfaces.
Ion Moutinho & Ruy Tojeiro
Abstract: Given an isometric immersion f : Mn → Rn+1 of a compact Riemannian manifold
of dimension n ≥ 3 into Euclidean space of dimension n+ 1, we prove that the identity com-
ponent Iso0(Mn) of the isometry group Iso(Mn) of Mn admits an orthogonal representation
Φ: Iso0(Mn) → SO(n + 1) such that f ◦ g = Φ(g) ◦ f for every g ∈ Iso0(Mn). If G is a
closed connected subgroup of Iso(Mn) acting locally polarly on Mn, we prove that Φ(G) acts
polarly on Rn+1, and we obtain that f(Mn) is given as Φ(G)(L), where L is a hypersurface of
a section which is invariant under the Weyl group of the Φ(G)-action. We also find several suf-
ficient conditions for such an f to be a rotation hypersurface. Finally, we show that compact
Euclidean rotation hypersurfaces of dimension n ≥ 3 are characterized by their underlying
warped product structure.
MSC 2000: 53 A07, 53 C40, 53 C42.
Key words: polar actions, rotation hypersurfaces, isoparametric submanifolds, rigidity of
hypersurfaces, warped products.
1 Introduction
Let G be a connected subgroup of the isometry group Iso(Mn) of a compact Riemannian
manifold Mn of dimension n ≥ 3, which we always assume to be connected. Given an
isometric immersion f : Mn → RN into Euclidean space of dimension N , in general one
can not expect G to be realizable as a group of rigid motions of RN that leave f(Mn)
invariant. Nevertheless, a fundamental fact for us is that in codimension N −n = 1 this
is indeed the case.
Theorem 1 Let f : Mn → Rn+1, n ≥ 3, be a compact hypersurface. Then the identity
component Iso0(Mn) of the isometry group of Mn admits an orthogonal representation
Φ: Iso0(Mn) → SO(n+ 1) such that f ◦ g = Φ(g) ◦ f for all g ∈ Iso0(Mn).
Theorem 1 may be regarded as a generalization of a classical result of Kobayashi
[Ko], who proved that a compact homogeneous hypersurface of Euclidean space must
be a round sphere. In fact, the crucial step in the proof of Kobayashi’s theorem is to
show that the isometry group of the hypersurface can be realized as a closed subgroup
of O(n+ 1). The idea of the proof of Theorem 1 actually appears already in [MPST],
where Euclidean G-hypersurfaces of cohomogeneity one, i.e., with principal orbits of
codimension one, are considered.
We apply Theorem 1 to study compact Euclidean hypersurfaces f : Mn → Rn+1,
n ≥ 3, on which a connected closed subgroup G of Iso(Mn) acts locally polarly. Recall
that an isometric action of a compact Lie group G on a Riemannian manifoldMn is said
to be locally polar if the distribution of normal spaces to principal orbits on the regular
part of Mn is integrable. If Mn is complete, this implies the existence of a connected
complete immersed submanifold Σ of Mn that intersects orthogonally all G-orbits (cf.
[HLO]). Such a submanifold is called a section, and it is always a totally geodesic
submanifold of Mn. In particular, any isometric action of cohomogeneity one is locally
polar. The action is said to be polar if there exists a closed and embedded section.
Clearly, for orthogonal representations there is no distinction between polar and locally
polar actions, for in this case sections are just affine subspaces.
It was shown in [BCO], Proposition 3.2.9 that if a closed subgroup of SO(N) acts
polarly on RN and leaves invariant a submanifold f : Mn → RN , then its restricted action
on Mn is locally polar. Our next result states that any locally polar isometric action
of a compact connected Lie group on a compact Euclidean hypersurface of dimension
n ≥ 3 arises in this way.
Theorem 2 Let f : Mn → Rn+1, n ≥ 3, be a compact hypersurface and let G be a closed
connected subgroup of Iso(Mn) acting locally polarly onMn with cohomogeneity k. Then
there exists an orthogonal representation Ψ: G→ SO(n+1) such that Ψ(G) acts polarly
on Rn+1 with cohomogeneity k + 1 and f ◦ g = Ψ(g) ◦ f for every g ∈ G.
A natural problem that emerges is how to explicitly construct all compact hy-
persurfaces f : Mn → Rn+1 that are invariant under a polar action of a closed sub-
group G ⊂ SO(n + 1). This is accomplished by the following result. Recall that
the Weyl group of the G-action is defined as W = N(Σ)/Z(Σ), where Σ is a section,
N(Σ) = {g ∈ G | gΣ = Σ} and Z(Σ) = {g ∈ G | gp = p, ∀ p ∈ Σ} is the intersection of
the isotropy subgroups Gp, p ∈ Σ.
Theorem 3 Let G ⊂ SO(n + 1) be a closed subgroup that acts polarly on Rn+1, let
Σ be a section and let L be a compact immersed hypersurface of Σ which is invariant
under the Weyl group of the G-action. Then G(L) is a compact G-invariant immersed
hypersurface of Rn+1. Conversely, any compact hypersurface f : Mn → Rn+1 that is
invariant under a polar action of a closed subgroup of SO(n+ 1) can be constructed in
this way.
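The simplest instance of this construction is a surface of revolution in R3: G = SO(2) acting on the last two coordinates, section Σ = R2, and Weyl group Z2 acting by reflection in the axis. The sketch below is our own illustration (the torus profile is just an example, not taken from the paper); it generates points of G(L) from a profile circle whose union with its Weyl reflection is the invariant curve L:

```python
import numpy as np

def profile(t):
    """Profile curve in the section: a circle of radius 1 centered at
    (0, 3) in (u, rho) coordinates, with u along the axis R^1 and rho > 0
    the distance to the axis.  Together with its mirror image under the
    Weyl reflection (u, rho) -> (u, -rho) it forms a Weyl-invariant L."""
    return np.sin(t), 3.0 + np.cos(t)

def orbit_point(t, theta):
    """Apply the rotation g_theta in G = SO(2) to the profile point,
    sweeping out the G-invariant surface G(L): here a torus in R^3."""
    u, rho = profile(t)
    return np.array([u, rho * np.cos(theta), rho * np.sin(theta)])

# Every swept point satisfies the torus equation (sqrt(y^2+z^2) - 3)^2 + x^2 = 1:
pt = orbit_point(0.7, 1.3)
r = np.hypot(pt[1], pt[2])
print((r - 3.0) ** 2 + pt[0] ** 2)  # 1.0 up to rounding
```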
The simplest examples of hypersurfaces f : Mn → Rn+1 that are invariant under a
polar action on Rn+1 of a closed subgroup of SO(n+ 1) are the rotation hypersurfaces.
These are invariant under a polar representation which is the sum of a trivial represen-
tation on a subspace Rk ⊂ Rn+1 and one which acts transitively on the unit sphere of
its orthogonal complement Rn−k+1. In this case any subspace Rk+1 containing Rk is a
section, and the Weyl group of the action is just the group Z2 generated by the reflection
with respect to the “axis” Rk. Thus, the condition that a hypersurface Lk ⊂ Rk+1 be
invariant under the Weyl group reduces in this case to Lk being symmetric with respect
to Rk. We now give several sufficient conditions for a compact Euclidean hypersurface
as in Theorem 2 to be a rotation hypersurface.
Corollary 4 Under the assumptions of Theorem 2, any of the following additional con-
ditions implies that f is a rotation hypersurface:
(i) there exists a totally geodesic (in Mn) G-principal orbit;
(ii) k = n− 1;
(iii) the G-principal orbits are umbilical in Mn;
(iv) there exists a G-principal orbit with nonzero constant sectional curvatures;
(v) there exists a G-principal orbit with positive sectional curvatures.
Moreover, G is isomorphic to one of the closed subgroups of SO(n − k + 1) that act
transitively on Sn−k.
For a list of all closed subgroups of SO(n) that act transitively on the sphere, see,
e.g., [EH], p. 392. Corollary 4 generalizes similar results in [PS], [AMN] and [MPST]
for compact Euclidean G-hypersurfaces of cohomogeneity one. In part (v), weakening the
assumption to nonnegativity of the sectional curvatures of some principal G-orbit yields
that f is a multi-rotational hypersurface in the sense of [DN]:
Corollary 5 Under the assumptions of Theorem 2, suppose further that there exists
a G-principal orbit Gp with nonnegative sectional curvatures. Then there exist an
orthogonal decomposition Rn+1 = Rn0 ⊕ Rn1 ⊕ · · · ⊕ Rnk into G̃-invariant subspaces,
where G̃ = Ψ(G), and connected Lie subgroups G1, . . . , Gk of G̃ such that Gi acts on Rni,
the action being transitive on Sni−1 ⊂ Rni, and the action of Ḡ = G1 × · · · × Gk on Rn+1
given by

(g1 . . . gk)(v0, v1, . . . , vk) = (v0, g1v1, . . . , gkvk)

is orbit equivalent to the action of G̃. In particular, if Gp is flat then ni = 2 and Gi is
isomorphic to SO(2) for i = 1, . . . , k.
Finally, we apply some of the previous results to a problem that at first sight has no
relation to isometric actions whatsoever.
Let f : Mn → Rn+1 be a rotation hypersurface as described in the paragraph following
Theorem 3. Then the open and dense subset of Mn that is mapped by f onto the
complement of the axis Rk is isometric to the warped product Lk ×ρ Nn−k, where Nn−k
is the orbit of some fixed point f(p) ∈ Lk under the action of G̃, and the warping
function ρ : Lk → R+ is a constant multiple of the distance to Rk. Recall that a warped
product N1 ×ρ N2 of Riemannian manifolds (N1, 〈 , 〉N1) and (N2, 〈 , 〉N2) with warping
function ρ : N1 → R+ is the product manifold N1 × N2 endowed with the metric

〈 , 〉 = π∗1〈 , 〉N1 + (ρ ◦ π1)^2 π∗2〈 , 〉N2,

where πi : N1 × N2 → Ni, 1 ≤ i ≤ 2, denote the canonical projections.
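With this metric, the sectional curvature of a mixed plane is given by O'Neill's formula, which is the form in which the warping function enters the proof of Theorem 6: for a unit horizontal vector X and a unit vertical vector Z at (x, y),

```latex
K(X \wedge Z) \;=\; -\,\frac{\operatorname{Hess}\rho\,(x)\big((\pi_1)_*X,\,(\pi_1)_*X\big)}{\rho(x)},
```

so the mixed sectional curvatures are controlled entirely by the Hessian of the warping function.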
We prove that, conversely, compact Euclidean rotation hypersurfaces of dimension
n ≥ 3 are characterized by their warped product structure.
Theorem 6 Let f : Mn → Rn+1, n ≥ 3, be a compact hypersurface. If there exists
an isometry from a warped product Lk ×ρ Nn−k, with Nn−k connected and complete,
onto an open and dense subset U ⊂ Mn (in particular, if Mn is isometric to a warped
product Lk ×ρ Nn−k), then f is a rotation hypersurface.
Theorem 6 can be seen as a global version, in the hypersurface case, of the local
classification in [DT] of isometric immersions in codimension ℓ ≤ 2 of warped products
Lk ×ρ Nn−k, n − k ≥ 2, into Euclidean space.
Acknowledgment. We are grateful to C. Gorodski for helpful discussions.
2 Proof of Theorem 1
The proof of Theorem 1 relies on a result of Sacksteder [Sa] (see also the proof in [Da],
Theorem 6.14, which is based on an unpublished manuscript by D. Ferus) according to
which a compact hypersurface f : Mn → Rn+1, n ≥ 3, is rigid whenever the subset of
totally geodesic points of f does not disconnect Mn. Recall that f is rigid if any other
isometric immersion f̃ : Mn → Rn+1 differs from it by a rigid motion of Rn+1. The proof
of Sacksteder's theorem actually shows more than the preceding statement. Namely, let
f, f̃ : Mn → Rn+1 be isometric immersions into Rn+1 of a compact Riemannian manifold
Mn, n ≥ 3, and let φ : T⊥Mnf → T⊥Mnf̃ be the vector bundle isometry between the
normal bundles of f and f̃ defined as follows. Given a unit vector ξx ∈ T⊥x Mnf,
x ∈ Mn, let φ(ξx) be the unique unit vector in T⊥x Mnf̃ such that {f∗X1, . . . , f∗Xn, ξx} and
{f̃∗X1, . . . , f̃∗Xn, φ(ξx)} determine the same orientation in Rn+1, where {X1, . . . , Xn} is
any ordered basis of TxMn. Then it is shown that

αf̃(x) = ±φ(x) ◦ αf(x) (1)

at each point x ∈ Mn, where αf and αf̃ denote the second fundamental forms of f and f̃,
respectively, with values in the normal bundle. The proof is based on a careful analysis of
the distributions on Mn determined by the relative nullity subspaces ∆(x) = ker αf(x)
and ∆̃(x) = ker αf̃(x) of f and f̃, respectively. Rigidity of f under the assumption
that the subset of totally geodesic points of f does not disconnect Mn then follows
immediately from the Fundamental Theorem of Hypersurfaces (cf. [Da], Theorem 1.1).
Proof of Theorem 1. Given g ∈ Iso0(Mn), let αf◦g denote the second fundamental form
of f ◦ g. We claim that
αf◦g(x) = φg(x) ◦ αf(x) (2)
for every g ∈ Iso0(Mn) and x ∈ Mn, where φg denotes the vector bundle isometry
between T⊥Mnf and T⊥Mnf◦g defined as in the preceding paragraph. On one hand,

αf◦g(x)(X, Y ) = αf(gx)(g∗X, g∗Y ) (3)

for every g ∈ Iso0(Mn), x ∈ Mn and X, Y ∈ TxMn. In particular, this implies that for
any fixed x ∈ Mn the map Θx : Iso0(Mn) → Sym(TxMn × TxMn; T⊥x Mnf) into the
vector space of symmetric bilinear maps of TxMn × TxMn into T⊥x Mnf, given by

Θx(g)(X, Y ) = φg(x)^{−1}(αf◦g(x)(X, Y )) = φg(x)^{−1}(αf(gx)(g∗X, g∗Y ))

for any X, Y ∈ TxMn, is continuous. On the other hand, by the preceding remarks on
Sacksteder's theorem, either αf◦g(x) = φg(x) ◦ αf(x) or αf◦g(x) = −φg(x) ◦ αf(x). Thus
Θx is a continuous map taking values in {αf(x), −αf(x)}, hence it must be constant
because Iso0(Mn) is connected. Since Θx(id) = αf(x), our claim follows.
We conclude that for each g ∈ Iso0(Mn) there exists a rigid motion g̃ ∈ Iso(Rn+1)
such that f ◦g = g̃◦f . It now follows from standard arguments that g 7→ g̃ defines a Lie-
group homomorphism Φ: Iso0(Mn) → Iso(Rn+1), whose image must lie in SO(n + 1)
because it is compact and connected.
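The homomorphism property of the map g 7→ g̃, which is part of the standard argument alluded to above, reduces to the uniqueness of g̃: a rigid motion agreeing with another on f(Mn) must equal it, since a compact hypersurface cannot be contained in a hyperplane and hence f(Mn) is full in Rn+1. Indeed,

```latex
\widetilde{g_1 g_2}\circ f \;=\; f\circ (g_1 g_2) \;=\; (f\circ g_1)\circ g_2
\;=\; \widetilde{g_1}\circ (f\circ g_2) \;=\; \big(\widetilde{g_1}\,\widetilde{g_2}\big)\circ f,
```

whence the rigid motion associated to g1g2 is g̃1g̃2.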
Remarks 7 (i) Theorem 1 is also true for compact hypersurfaces of dimension n ≥ 3
of hyperbolic space, as well as for complete hypersurfaces of dimension n ≥ 4 of the
sphere. It also holds for complete hypersurfaces of dimension n ≥ 3 of both Euclidean
and hyperbolic spaces, under the additional assumption that they do not carry a com-
plete leaf of dimension (n− 1) or (n− 2) of their relative nullity distributions. In fact,
the proof of Theorem 1 carries over in exactly the same way for these cases, because so
does equation (1) (cf. [Da], p. 96-100).
(ii) Clearly, Theorem 1 does not hold for isometric immersions f : Mn → Rn+ℓ of codi-
mension ℓ ≥ 2. Namely, counterexamples can be easily constructed, for instance,
by means of compositions f = h ◦ g of isometric immersions g: Mn → Rn+1 and
h: V → Rn+ℓ, with V ⊂ Rn+1 an open subset containing g(Mn).
The next result gives a sufficient condition for rigidity of a compact hypersurface
f : Mn → Rn+1 as in Theorem 1 in terms of G̃ = Φ(Iso0(Mn)).
Proposition 8 Under the assumptions of Theorem 1, suppose that G̃ = Φ(Iso0(Mn))
does not have a fixed vector. Then f is free of totally geodesic points. In particular, f
is rigid.
Proof: Suppose that the subset B of totally geodesic points of f is nonempty. Since
αf◦g(x) = φg(x) ◦ αf(x) for every g ∈ G = Iso0(Mn) and x ∈ Mn by (2), B coincides
with the set of totally geodesic points of f ◦ g for every g ∈ G. In view of (3), this
amounts to saying that B is G-invariant. Thus, if Gp is the orbit of a point p ∈ B
then Gp ⊂ B. Since Gp is connected, it follows from [DG], Lemma 3.14 that f(Gp) is
contained in a hyperplane H that is tangent to f along Gp. Therefore, a unit vector
v orthogonal to H spans T⊥gpMnf for every gp ∈ Gp. Since g̃ T⊥pMnf = T⊥gpMnf for every
g̃ = Φ(g) ∈ G̃, because f is equivariant with respect to Φ, the connectedness of G̃
implies that it must fix v.
Proposition 8 implies, for instance, that if a closed connected subgroup of Iso(Mn)
acts on Mn with cohomogeneity one then either f is a rotation hypersurface over a plane
curve or it is free of totally geodesic points, and in particular it is rigid (see [MPST],
Theorem 1). As another consequence we have:
Corollary 9 Let f : M3 → R4 be a compact hypersurface. If f has a totally geodesic
point (in particular, if it is not rigid) then either M3 has finite isometry group or f is
a rotation hypersurface.
Proof: Let Φ: Iso0(M3) → SO(4) be the orthogonal representation given by Theorem 1.
By Proposition 8, if f has a totally geodesic point then G̃ = Φ(Iso0(M3)) has a fixed
vector v, hence it can be regarded as a subgroup of SO(3). Therefore, either the re-
stricted action of G̃ on {v}⊥ has also a fixed vector or it is transitive on the sphere.
In the first case, either Iso0(M3) is trivial, that is, Iso(M3) is finite, or G̃ fixes a
two-dimensional subspace R2 of R4, in which case f is a rotation hypersurface over a surface
in a half-space R3+ with R2 as boundary. In the latter case, f is a rotation hypersurface
over a plane curve in a half-space R2+ having span{v} as boundary.
3 Proof of Theorem 2
For the proof of Theorem 2, we recall from Theorem 1 that there exists an orthogonal
representation Φ: Iso0(Mn) → SO(n + 1) such that f ◦ g = Φ(g) ◦ f for every g ∈
Iso0(Mn). Since G is connected we have G ⊂ Iso0(Mn), hence it suffices to prove that
G̃ = Φ(G) acts polarly on Rn+1 with cohomogeneity k + 1 and then set Ψ = Φ|G.
We claim that there exists a principal orbit Gp such that the position vector f is
nowhere tangent to f(Mn) along Gp, that is, f(g(p)) ∉ f∗(g(p))Tg(p)Mn for any g ∈ G.
In order to prove our claim we need the following observation.
Lemma 10 Let f : Mn → Rn+1 be a hypersurface. Assume that the position vector
is tangent to f(Mn) on an open subset U ⊂ Mn. Then the index of relative nullity
νf(x) = dim ∆f (x) of f is positive at any point x ∈ U .
Proof: Let Z be a vector field on U such that f∗(p)Z(p) = f(p) for any p ∈ U and let η
be a local unit normal vector field to f . Differentiating 〈η, f〉 = 0 yields 〈AX,Z〉 = 0 for
any tangent vector field X on U , where A denotes the shape operator of f with respect
to η. Thus AZ = 0.
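The differentiation in the proof can be spelled out, using the Weingarten formula ∇̃Xη = −f∗(AX) and the identity ∇̃Xf = f∗X for the position vector:

```latex
0 \;=\; X\langle \eta, f\rangle
  \;=\; \langle \tilde\nabla_X \eta,\; f\rangle + \langle \eta,\; f_*X\rangle
  \;=\; \langle -f_*(AX),\; f_*Z\rangle + 0
  \;=\; -\langle AX,\, Z\rangle
```

for every tangent field X, whence AZ = 0 and the index of relative nullity of f is positive on U.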
Going back to the proof of the claim, since Mn is a compact Riemannian manifold
isometrically immersed in Euclidean space as a hypersurface, there exists an open subset
V ⊂ Mn where the sectional curvatures of Mn are strictly positive. In particular, the
index of relative nullity of f vanishes at every x ∈ V . If the position vector were tangent
to f(Mn) at every regular point of V , it would be tangent to f(Mn) everywhere on V ,
because the set of regular points is dense in Mn. This is in contradiction with Lemma 10
and shows that there must exist a regular point p ∈ V such that f(p) ∉ f∗(p)TpMn.
Since f is equivariant, we must have in fact that f(g(p)) ∉ f∗(g(p))Tg(p)Mn for any
g ∈ G, and the claim is proved.
Now let Gp be a principal orbit such that the position vector f is nowhere tangent
to f(Mn) along Gp. Then the normal bundle of f |Gp splits (possibly non-orthogonally)
as span{f} ⊕ f∗T⊥Gp, where T⊥Gp denotes the normal bundle of Gp in Mn. Let ξ be
an equivariant normal vector field to Gp in Mn and let
η = f∗ξ. Then, denoting Φ(g) by g̃ and identifying g̃ with its derivative at any point,
because it is linear, we have
g̃η(p) = (f ◦ g)∗(p)ξ(p) = f∗(gp)g∗(p)ξ(p) = f∗(gp)ξ(gp) = η(gp) (4)
for any g ∈ G. In particular, 〈η(gp), f(gp)〉 = 〈g̃η(p), g̃f(p)〉 = 〈η(p), f(p)〉 for every
g ∈ G, that is, 〈η, f〉 is constant on Gp. It follows that
X〈η, f〉 = 0, for any X ∈ TGp, (5)
and hence
〈∇̃Xη, f〉 = X〈η, f〉 − 〈ξ,X〉 = 0, (6)
where ∇̃ denotes the derivative in Rn+1. On the other hand, since G acts locally polarly
on Mn, we have that ξ is parallel in the normal connection of Gp in M . Therefore
(∇̃Xη)f∗T⊥Gp = f∗(∇Xξ)T⊥Gp = 0, (7)
where ∇ is the Levi-Civita connection of Mn; here, writing a vector subbundle as a
subscript of a vector field indicates taking its orthogonal projection onto that subbundle.
It follows from (6) and (7) that η is parallel in the normal connection of f |Gp. On the
other hand, the position vector f is clearly also an equivariant normal vector field of
f |Gp which is parallel in the normal connection.
Thus, we have shown that there exist equivariant normal vector fields to G̃(f(p)) =
f(Gp) that form a basis of the normal spaces at each point and which are parallel in
the normal connection. The statement now follows from the next known result, a proof
of which is included for completeness.
Lemma 11 Let G̃ ⊂ SO(n+ 1) have an orbit G̃(q) along which there exist equivariant
normal vector fields that form a basis of the normal spaces at each point and which are
parallel in the normal connection. Then G̃ acts polarly.
Proof: Since there exist equivariant normal vector fields to G̃(q) that form a basis of
the normal spaces at each point, the isotropy group acts trivially on each normal space,
hence G̃(q) is a principal orbit. We now show that the normal space T⊥q G̃(q) is a section,
for which it suffices to show that any Killing vector field X induced by G̃ is everywhere
orthogonal to T⊥q G̃(q). Given ξq ∈ T⊥q G̃(q), let ξ be an equivariant normal vector field
to G̃(q) extending ξq, which is also parallel in the normal connection. Then, denoting
by φt the flow of X and setting c(t) = φt(q), we have

(D/dt|t=0 φt(ξq))T⊥q G̃(q) = (D/dt|t=0 ξ(c(t)))T⊥q G̃(q) = ∇⊥X(q)ξ = 0,

where D/dt denotes the covariant derivative in Euclidean space along c(t).
Remark 12 A closed subgroup G ⊂ SO(n + ℓ), ℓ ≥ 2, that acts non-polarly on Rn+ℓ
may leave invariant a compact submanifold f : Mn → Rn+ℓ and induce a locally polar
action on Mn. For instance, consider a compact submanifold f : Mn → Rn+2 that is
invariant by the action of a closed subgroup G ⊂ SO(n+2) that acts non-polarly on Rn+2
with cohomogeneity three. Then the induced action of G on Mn has cohomogeneity one,
hence is locally polar. Moreover, taking f as a compact hypersurface of Sn+1 shows
also that Theorem 2 is no longer true if Rn+1 is replaced by Sn+1.
Theorem 2 yields the following obstruction for the existence of an isometric immer-
sion in codimension one into Euclidean space of a compact Riemannian manifold acted
on locally polarly by a closed connected Lie subgroup of its isometry group.
Corollary 13 Let Mn be a compact Riemannian manifold of dimension n ≥ 3 acted
on locally polarly by a closed connected subgroup G of its isometry group. If G has an
exceptional orbit then Mn cannot be isometrically immersed in Euclidean space as a
hypersurface.
Proof: Let f : Mn → Rn+1 be an isometric immersion of a compact Riemannian manifold
acted on locally polarly by a closed connected subgroup G of its isometry group. We
will prove that G cannot have any exceptional orbit. By Theorem 2 there exists an
orthogonal representation Ψ : G → SO(n + 1) such that G̃ = Ψ(G) acts polarly on
Rn+1 with cohomogeneity k + 1 and f ◦ g = Ψ(g) ◦ f for every g ∈ G. Let Gp be
a nonsingular orbit. Then Gp has maximal dimension among all G-orbits, and hence
G̃(f(p)) = f(Gp) has maximal dimension among all G̃-orbits. Since polar representations
are known to admit no exceptional orbits (cf. [BCO], Corollary 5.4.3), it follows that
G̃(f(p)) is a principal orbit. Then, for any g in the isotropy subgroup Gp we have that
g̃ = Ψ(g) lies in the isotropy subgroup of G̃ at f(p), thus for any ξp ∈ T⊥p G(p) we obtain

f∗(p)ξp = g̃∗f∗(p)ξp = (g̃ ◦ f)∗(p)ξp = (f ◦ g)∗(p)ξp = f∗(gp)g∗ξp = f∗(p)g∗ξp.

Since f∗(p) is injective, g∗ξp = ξp. This shows that the slice representation, that
is, the action of the isotropy group Gp on the normal space T⊥p G(p) to the orbit G(p)
at p, is trivial. Thus Gp is a principal orbit.
4 Isoparametric submanifolds
We now recall some results on isoparametric submanifolds and derive a few additional
facts on them that will be needed for the proofs of Theorem 3 and Corollaries 4 and 5.
Given an isometric immersion f : Mn → RN with flat normal bundle, it is well-known
(cf. [St]) that for each point x ∈ Mn there exist an integer s = s(x) ∈ {1, . . . , n}
and a uniquely determined subset Hx = {η1, . . . , ηs} of T⊥x Mnf such that TxMn is the
orthogonal sum of the nontrivial subspaces

Eηi(x) = {X ∈ TxMn : α(X, Y ) = 〈X, Y 〉 ηi for all Y ∈ TxMn}, 1 ≤ i ≤ s.
Therefore, the second fundamental form of f has the simple representation

α(X, Y ) = ∑_{i=1}^{s} 〈Xi, Yi〉 ηi, (8)

or equivalently,

AξX = ∑_{i=1}^{s} 〈ξ, ηi〉Xi, (9)

where X 7→ Xi denotes the orthogonal projection onto Eηi. Each ηi ∈ Hx is called a
principal normal of f at x. The Gauss equation takes the form

R(X, Y ) = ∑_{i,j=1}^{s} 〈ηi, ηj〉 Xi ∧ Yj, (10)

where (Xi ∧ Yj)Z = 〈Z, Yj〉Xi − 〈Z, Xi〉Yj.
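In particular, for unit vectors Xi ∈ Eηi(x) and Yj ∈ Eηj(x) with i ≠ j, equation (10) gives the sectional curvature of the plane they span:

```latex
K(X_i \wedge Y_j) \;=\; \langle R(X_i, Y_j)\,Y_j,\; X_i\rangle \;=\; \langle \eta_i, \eta_j\rangle,
```

while a plane contained in a single subbundle Eηi of rank at least two has curvature |ηi|^2; both facts are used in Lemma 14 and in the proof of Proposition 17.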
Lemma 14 Let f : Mn → RN be an isometric immersion with flat normal bundle of a
Riemannian manifold with constant sectional curvature c. Let η1, . . . , ηs be the distinct
principal normals of f at x ∈ Mn. Then:

(i) There exists at most one ηi such that |ηi|^2 = c.

(ii) For all i, j, k ∈ {1, . . . , s} with i ≠ j ≠ k ≠ i, the vectors ηi − ηk and ηj − ηk are
linearly independent.

Proof: It follows from (10) that 〈ηi, ηj〉 = c for all i, j ∈ {1, . . . , s} with i ≠ j. If
|ηi|^2 = |ηj|^2 = c, this gives |ηi − ηj|^2 = 0, contradicting that ηi and ηj are distinct,
which proves (i). As for (ii), assume that there exist λ ≠ 0 and i, j, k ∈ {1, . . . , s} with
i ≠ j ≠ k ≠ i such that ηi − ηk = λ(ηj − ηk). Then

|ηi|^2 − c = 〈ηi − ηk, ηi〉 = λ〈ηj − ηk, ηi〉 = 0,

and similarly |ηj|^2 = c, in contradiction with (i).
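The computation behind part (i) is a direct expansion, using 〈ηi, ηj〉 = c for i ≠ j:

```latex
|\eta_i - \eta_j|^2 \;=\; |\eta_i|^2 - 2\langle \eta_i, \eta_j\rangle + |\eta_j|^2
\;=\; c - 2c + c \;=\; 0,
```

contradicting that the principal normals at x are pairwise distinct.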
Let f : Mn → RN be an isometric immersion with flat normal bundle and let Hx =
{η1, . . . , ηs} be the set of principal normals of f at x ∈ Mn. If the map Mn → {1, . . . , n}
given by x 7→ #Hx has a constant value s on an open subset U ⊂ Mn, there exist smooth
normal vector fields η1, . . . , ηs on U such that Hx = {η1(x), . . . , ηs(x)} for any x ∈ U.
Furthermore, each Eηi = (Eηi(x))x∈U is a C∞-subbundle of TU for 1 ≤ i ≤ s. The
following result is contained in [DN], Lemma 2.3.
Lemma 15 Let f : Mn → Rn+p be an isometric immersion with flat normal bundle and
a constant number s of principal normals η1, . . . , ηs everywhere. Assume that for a fixed
i ∈ {1, . . . , s} all principal normals ηj, j ≠ i, are parallel in the normal connection along
Eηi and that the vectors ηi − ηj and ηi − ηℓ are everywhere pairwise linearly independent
for any pair of indices 1 ≤ j ≠ ℓ ≤ s with j, ℓ ≠ i. Then E⊥ηi is totally geodesic.

Proof: The Codazzi equation yields

〈∇XℓXj, Xi〉(ηi − ηj) = 〈∇XjXℓ, Xi〉(ηi − ηℓ), i ≠ j ≠ ℓ ≠ i, (11)

∇⊥Xiηj = 〈∇XjXj, Xi〉(ηj − ηi), i ≠ j, (12)

for all unit vectors Xi ∈ Eηi, Xj ∈ Eηj and Xℓ ∈ Eηℓ. By the linear independence
assumption, (11) gives 〈∇XℓXj, Xi〉 = 0 for j ≠ ℓ with j, ℓ ≠ i, while (12) and the
parallelism of ηj along Eηi give 〈∇XjXj, Xi〉 = 0 for j ≠ i, since ηj − ηi ≠ 0. Hence
E⊥ηi is totally geodesic.
An isometric immersion f : Mn → RN is called isoparametric if it has flat normal
bundle and the principal curvatures of f with respect to every parallel normal vector
field along any curve in Mn are constant (with constant multiplicities). The following
facts on isoparametric submanifolds are due to Strübing [St].
Theorem 16 Let f : Mn → RN be an isoparametric isometric immersion. Then
(i) The number of principal normals is constant on Mn.
(ii) The first normal spaces, i.e., the subspaces of the normal spaces spanned by the im-
age of the second fundamental form, determine a parallel subbundle of the normal
bundle.
(iii) The subbundles Eηi, 1 ≤ i ≤ s, are totally geodesic and the principal normals
η1, . . . , ηs are parallel in the normal connection.
(iv) The subbundles Eηi, 1 ≤ i ≤ s, are parallel if and only if f has parallel second
fundamental form.
(v) If the principal normals η1, . . . , ηs satisfy 〈ηi, ηj〉 ≥ 0 everywhere then f has parallel
second fundamental form and 〈ηi, ηj〉 = 0 everywhere.
The next result will be used in the proofs of Corollaries 4 and 5.
Proposition 17 Let f : Mn → RN , n ≥ 2, be a compact isoparametric submanifold.
(i) If Mn has constant sectional curvature c, then either c > 0 and f(Mn) is a round
sphere or c = 0 and f(Mn) is an extrinsic product of circles.
(ii) If Mn has nonnegative sectional curvatures, then f(Mn) is an extrinsic product
of round spheres or circles. In particular, if Mn has positive sectional curvatures
then f(Mn) is a round sphere.
Proof: By Theorem 16, the number of distinct principal normals η1, . . . , ηs of f is con-
stant on Mn and all of them are parallel in the normal connection. Moreover, the
subbundles Eηi, 1 ≤ i ≤ s, are totally geodesic. If Mn has constant sectional curvature
c, then it follows from Lemmas 14 and 15 that also E⊥ηi is totally geodesic for
1 ≤ i ≤ s, and hence the sectional curvatures along planes spanned by vectors in dif-
ferent subbundles vanish. Therefore c = 0, unless s = 1 and f is umbilic, in which case
c > 0 and f(Mn) is a round sphere. Furthermore, if c = 0 and Eηi has rank at least 2
for some 1 ≤ i ≤ s, then the sectional curvature along a plane tangent to Eηi is |ηi|^2 = 0,
in contradiction with the compactness of Mn. Hence Eηi has rank 1 for 1 ≤ i ≤ s. We
conclude that the universal covering of Mn is isometric to Rn, and that f ◦ π splits as
a product of circles by Moore's Lemma [Mo], where π : Rn → Mn is the covering map.

Assume now that Mn has nonnegative sectional curvatures. It follows from (10)
that 〈ηi, ηj〉 ≥ 0 for 1 ≤ i ≠ j ≤ s, whence f has parallel second fundamental form
and all subbundles Eηi, 1 ≤ i ≤ s, are parallel by parts (iv) and (v) of Theorem 16.
We obtain from the de Rham decomposition theorem that the universal covering of Mn
splits isometrically as M1^{n1} × · · · × Ms^{ns}, where each factor Mi^{ni} is either R
if ni = 1 or a sphere Si^{ni} of curvature |ηi|^2 if ni ≥ 2. Moreover, if
π : M1^{n1} × · · · × Ms^{ns} → Mn denotes the covering map, then Moore's Lemma
implies that f ◦ π splits as f ◦ π = f1 × · · · × fs, where fi(Mi^{ni}) is a round sphere
or circle for 1 ≤ i ≤ s.
To every compact isoparametric submanifold f : Mn → RN one can associate a finite
group, its Weyl group, as follows. Let η1, . . . , ηg denote the principal normal vector fields
of f. For p ∈ Mn, let Hj(p), 1 ≤ j ≤ g, be the focal hyperplane of T⊥p Mn given by the
equation 〈ηj(p), · 〉 = 1. Then one can show that the reflection of the affine normal space
p + T⊥p Mn with respect to each affine focal hyperplane p + Hj(p) leaves ⋃_{j=1}^{g}(p + Hj(p))
invariant, and thus the set of all such reflections generates a finite group, the Weyl group
of f at p. Moreover, the Weyl groups of f at different points are conjugate by the parallel
transport with respect to the normal connection, hence a well-defined Weyl group W
can be associated to f. We refer to [PT2] for details. In the proof of Theorem 3 we will
need the following property of the Weyl group of an isoparametric submanifold.
need the following property of the Weyl group of an isoparametric submanifold.
Proposition 18 Let f : Mn → RN be a compact isoparametric submanifold and let
W(p) be its Weyl group at p ∈ Mn. Assume that W(p) leaves invariant an affine
hyperplane H orthogonal to ξ ∈ T⊥p Mn. Then f(Mn) is contained in the affine hyperplane
of RN through p orthogonal to ξ.

Proof: It follows from the assumption that H is orthogonal to every focal hyperplane
p + Hj(p), 1 ≤ j ≤ g, of f at p. For q ∈ H, let Q be the barycenter of the orbit of q,
i.e., Q = (1/#W(p)) ∑_{g∈W(p)} gq ∈ H. Then Q is a fixed point of W(p), hence it lies
in the intersection ⋂_{j=1}^{g}(p + Hj(p)) of all affine focal hyperplanes of f at p. We
obtain that the line through Q orthogonal to H lies in ⋂_{j=1}^{g}(p + Hj(p)). Therefore
〈ηj, Q + λξ〉 = 1 for every λ ∈ R, 1 ≤ j ≤ g, which implies that 〈ηj, ξ〉 = 0 for every
1 ≤ j ≤ g. Now extend ξ to a parallel vector field along Mn with respect to the normal
connection. Since the principal normal vector fields η1, . . . , ηg of f are parallel with
respect to the normal connection by Theorem 16-(iii), it follows that 〈ηj, ξ〉 = 0
everywhere, and hence the shape operator Aξ of f with respect to ξ is identically zero
by (9). Then ξ is constant in RN and the conclusion follows.
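The step from the affine condition to 〈ηj, ξ〉 = 0 in the proof above is obtained by differencing in λ:

```latex
\lambda\,\langle \eta_j, \xi\rangle \;=\; \langle \eta_j,\; Q+\lambda\xi\rangle \;-\; \langle \eta_j,\; Q\rangle
\;=\; 1-1 \;=\; 0 \qquad\text{for all }\lambda\in\mathbb{R},
```

hence 〈ηj, ξ〉 = 0 for 1 ≤ j ≤ g.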
A rich source of isoparametric submanifolds is provided by the following result of
Palais and Terng (see [PT1], Theorem 6.5).
Proposition 19 If a closed subgroup G ⊂ SO(N) acts polarly on RN then any of its
principal orbits is an isoparametric submanifold of RN .
To conclude this section, we point out that if G ⊂ SO(N) acts polarly on RN and Σ
is a section, then the Weyl group W = N(Σ)/Z(Σ) of the G-action coincides with the
Weyl group W(p), as defined above, of any principal orbit Gp, p ∈ Σ, regarded as an
isoparametric submanifold of RN (cf. [PT2]).
5 Proof of Theorem 3
We first prove the converse. Let G ⊂ SO(n+1) act polarly on Rn+1, let Σ be a section
of the G-action and let Mn ⊂ Rn+1 be a G-invariant immersed hypersurface. It suffices
to prove that Σ is transversal to Mn, for then L = Σ ∩Mn is a compact hypersurface
of Σ that is invariant under the Weyl group W of the action and Mn = G(L).
Assume, on the contrary, that transversality does not hold. Then there exists p ∈
Σ ∩ Mn with TpΣ ⊂ TpMn. Fix v ∈ TpΣ in a principal orbit of the slice representation
at p and let γ : (−ε, ε) → Mn be a smooth curve with γ(0) = p and γ′(0) = v. Since Mn
is G-invariant, it contains {gγ(t) | g ∈ Gp, t ∈ (−ε, ε)}. Therefore, TpMn contains Gpv,
and hence Rv ⊕ TvGpv. Recall that TpΣ is a section for the slice representation at p (see
[PT1], Theorem 4.6). Therefore TvGpv is a subspace of T⊥p G(p) orthogonal to TpΣ with

dim TvGpv = dim T⊥p G(p) − dim Σ. (13)

Moreover, again by the G-invariance of Mn, we have that G(p) ⊂ Mn, and hence
TpG(p) ⊂ TpMn. Using (13), we conclude that

dim TpMn ≥ dim G(p) + dim Σ + dim TvGpv = dim G(p) + dim T⊥p G(p) = n + 1,

a contradiction.
In order to prove the direct statement, it suffices to show that at each point p ∈ L
which is a singular point of the G-action the subset

H = {γ′(0) | γ : (−ε, ε) → Rn+1 is a curve with γ(−ε, ε) ⊂ Mn and γ(0) = p}

is an n-dimensional subspace of Rn+1, the tangent space of Mn at p. Clearly, we have

H = TpG(p) + ⋃_{g∈Gp} gTpL.

We use again that TpΣ is a section of the slice representation at p and that, in addition,
the Weyl group of the slice representation is W(Σ)p = W ∩ Gp ([PT1], Theorem 4.6).
Since L is invariant under W by assumption, it follows that TpL is invariant under
W(Σ)p. Let ξ ∈ Σ be a unit vector normal to L at p. Then, for any principal vector
v ∈ TpL ⊂ TpΣ of the slice representation at p, it follows from Proposition 18 that Gpv
lies in the affine hyperplane π of Rn+1 through p orthogonal to ξ. Therefore gTpL ⊂ π
for every g ∈ Gp. Since TpG(p) is orthogonal to Σ, we conclude that H ⊂ π, and hence
H = π by dimension reasons.
Remark 20 Theorems 1 and 3 yield as a special case the main theorem in [MPST]:
any compact hypersurface of Rn+1 with cohomogeneity one under the action of a closed
connected subgroup of its isometry group is given as G(γ), where G ⊂ SO(n + 1) acts
on Rn+1 with cohomogeneity two (hence polarly) and γ is a smooth curve in a (two-
dimensional) section Σ which is invariant under the Weyl group W of the G-action. We
take the opportunity to point out that starting with a smooth curve β in a Weyl chamber
σ of Σ (which is identified with the orbit space of the G-action) which is orthogonal to
the boundary ∂σ of σ is not enough to ensure smoothness of γ = W (β), or equivalently,
of G(β), as claimed in [MPST]. One should require, in addition, that after expressing
β locally as a graph z = f(x) over its tangent line at a point of intersection with ∂σ,
the function f be even, that is, that all of its derivatives of odd order vanish at that
point, not only the first one.
6 Proofs of Corollaries 4 and 5.
For the proof of Corollary 4 we need another known property of polar representations,
a simple proof of which is included for the sake of completeness.
Lemma 21 Let G̃ ⊂ SO(n+ 1) act polarly and have a principal orbit G̃(q) that is not
full, i.e., the linear span V of G̃(q) is a proper subspace of Rn+1. Then G̃ acts trivially
on V ⊥.
Proof: Let v ∈ V ⊥. Then v belongs to the normal spaces of G̃(q) at any point, hence
admits a unique extension to an equivariant normal vector field ξ along G̃(q). Moreover,
the shape operator Aξ of G̃(q) with respect to ξ is identically zero, for Av is clearly zero
and ξ is equivariant. Now, since G̃ acts polarly, the vector field ξ is parallel in the
normal connection. It follows that ξ is a constant vector in Rn+1, which means that G̃
fixes v.
Proof of Corollary 4. We know from Theorem 2 that there exists an orthogonal represen-
tation Ψ: G→ SO(n+1) such that G̃ = Ψ(G) acts polarly on Rn+1 and f ◦g = Ψ(g)◦f
for every g ∈ G. We claim that any of the conditions in the statement implies that f
immerses some G-principal orbit Gp in Rn+1 as a round sphere (circle, if k = n − 1).
Assuming the claim, it follows from Lemma 21 that G̃ fixes the orthogonal complement
V⊥ of the linear span V of f(Gp), hence f is a rotation hypersurface with V⊥ as axis.
Moreover, if k ≠ n − 1 then f |Gp must be an embedding by a standard covering map
argument. From this and f ◦ g = Ψ(g) ◦ f for every g ∈ G it follows that any g in the
kernel of Ψ must fix every point of Gp. Since Gp is a principal orbit, this easily implies
that g = id. Therefore Ψ is an isomorphism of G onto G̃.
We now prove the claim. By Proposition 19 we have that f immerses Gp as an
isoparametric submanifold. If condition (i) holds then the first normal spaces of f |Gp
in Rn+1 are one-dimensional. By Theorem 16-(ii) and a well-known result on reduction
of codimension of isometric immersions (cf. [Da], Proposition 4.1), we obtain that
f(Gp) = G̃(f(p)) is contained as a hypersurface in some affine hyperplane H ⊂ Rn+1,
and hence f(Gp) must be a round hypersphere of H (a circle if k = n − 1). Moreover,
condition (i) is automatic if k = n − 1, for in this case the principal orbit of maximal
length must be a geodesic. Now, if (iii) holds, let Gp be a principal orbit such that the
position vector of f is nowhere tangent to f(Mn) along Gp. Then any normal vector ξ̃
to f |Gp at gp ∈ Gp can be written as ξ̃ = af(gp) + f∗(gp)ξgp, with a ∈ R and ξgp normal
to Gp in Mn. Therefore the shape operator of f |Gp with respect to any such ξ̃ is a
multiple of the identity tensor, hence f |Gp is umbilical. Since, by (ii), the dimension of
Gp can be assumed to be at least two, the claim is proved also in this case. As for
conditions (iv) and (v), the claim follows from Proposition 17-(i) and the last assertion
in Proposition 17-(ii), respectively.
Proof of Corollary 5. By Proposition 19 and part (ii) of Proposition 17 we have that the
orbit G̃(f(p)) = f(Gp) of G̃ is an extrinsic product S1^{n1−1} × · · · × Sk^{nk−1} of round
spheres or circles. In particular, this implies that the orthogonal decomposition
Rn+1 = Rn0 ⊕ Rn1 ⊕ · · · ⊕ Rnk, where Rni is the linear span of Si^{ni−1} for 1 ≤ i ≤ k,
is G̃-stable, that is, Rni is G̃-invariant for 0 ≤ i ≤ k (cf. [GOT], Lemma 6.2). By [D],
Theorem 4, there exist connected Lie subgroups G1, . . . , Gk of G̃ such that Gi acts on
Rni and the action of Ḡ = G1 × · · · × Gk on Rn+1 given by

(g1 . . . gk)(v0, v1, . . . , vk) = (v0, g1v1, . . . , gkvk)

is orbit equivalent to the action of G̃. Moreover, writing q = f(p) = (q0, . . . , qk), we have
G̃(q) = {q0} × G1(q1) × · · · × Gk(qk), and hence Gi(qi) is a hypersphere of Rni for
1 ≤ i ≤ k. The last assertion is clear.
7 Proof of Theorem 6.
Let Lk ×ρ Nn−k be a warped product with Nn−k connected and complete, and let
ψ : Lk ×ρ Nn−k → U be an isometry onto an open dense subset U ⊂ Mn. Since Mn is a
compact Riemannian manifold isometrically immersed in Euclidean space as a hypersurface,
there exists an open subset W ⊂ Mn with strictly positive sectional curvatures. The
subset U being open and dense in Mn, W ∩ U is a nonempty open set. Let L1 × N1 be
a connected open subset of Lk × Nn−k that is mapped into W ∩ U by ψ. Then the
sectional curvatures of Lk ×ρ Nn−k are strictly positive on L1 × N1.
For a fixed x ∈ L1, choose a unit vector Xx ∈ TxL1. For each y ∈ Nn−k, let X(x,y)
be the unique unit horizontal vector in T(x,y)(L1 × Nn−k) that projects onto Xx by
(π1)∗(x, y). Then the sectional curvature of Lk ×ρ Nn−k along a plane σ spanned by
X(x,y) and any unit vertical vector Z(x,y) ∈ T(x,y)(L1 × Nn−k) is given by

K(σ) = −Hess ρ(x)(Xx, Xx)/ρ(x).

Observe that K(σ) depends neither on y nor on the vector Z(x,y). Since K(σ) > 0 if
y ∈ N1, the same holds for any y ∈ Nn−k. In particular, L1 ×ρ Nn−k is free of flat points.
If n − k ≥ 2, it follows from [DT], Theorem 16 that f ◦ ψ immerses L1 ×ρ Nn−k
either as a rotation hypersurface or as the extrinsic product of a Euclidean factor Rk−1
with a cone over a hypersurface of Sn−k+1. The latter possibility is ruled out by the fact
that the sectional curvatures of Lk ×ρ Nn−k are strictly positive on L1 × N1. Thus the
first possibility holds, and in particular f ◦ ψ immerses each leaf {x} × Nn−k, x ∈ L1,
isometrically onto a round (n − k)-dimensional sphere. It follows that Nn−k is isometric
to a round sphere.
In any case, Iso0(Nn−k) acts transitively on Nn−k and each g ∈ Iso0(Nn−k) induces
an isometry ḡ of Lk ×ρ Nn−k by defining

ḡ(x, y) = (x, g(y)), for all (x, y) ∈ Lk × Nn−k.

The map g ∈ Iso0(Nn−k) 7→ ḡ ∈ Iso(Lk ×ρ Nn−k) being clearly continuous, its image
Ḡ is a closed connected subgroup of Iso(Lk ×ρ Nn−k). For each ḡ ∈ Ḡ, the induced
isometry ψ ◦ ḡ ◦ ψ−1 on U extends uniquely to an isometry of Mn. The orbits of the
induced action of Ḡ on U are the images by ψ of the leaves {x} × Nn−k, x ∈ Lk, hence
are umbilical in Mn. Moreover, the normal spaces to the (principal) orbits of Ḡ on U
are the images by ψ∗ of the horizontal subspaces of Lk ×ρ Nn−k. Therefore, they define
an integrable distribution on U, whence on the whole regular part of Mn. Thus, the
action of Ḡ on Mn is locally polar with umbilical principal orbits. We conclude from
Corollary 4-(iii) that f is a rotation hypersurface.
References
[AMN] ASPERTI, A.C., MERCURI, F., NORONHA, M.H.: Cohomogeneity one man-
ifolds and hypersurfaces of revolution. Bolletino U.M.I. 11-B (1997), 199-215.
[BCO] BERNDT, J., CONSOLE, S., OLMOS, C.: Submanifolds and Holonomy,
CRC/Chapman and Hall Research Notes Series in Mathematics 434 (2003), Boca
Raton.
[D] DADOK, J.: Polar coordinates induced by actions of compact Lie groups. Trans.
Amer. Math. Soc. 288 (1985), 125-137.
[Da] DAJCZER, M. et al.: Submanifolds and isometric immersions. Mathematics Lec-
ture Series 13, Publish or Perish Inc., Houston-Texas, 1990.
[DG] DAJCZER, M., GROMOLL, D.: Rigidity of complete Euclidean hypersurfaces.
J.Diff. Geom. 31 (1990), 401-416.
[DT] DAJCZER, M., TOJEIRO, R.: Isometric immersions in codimension two of
warped products into space forms. Illinois J. Math. 48 (3) (2004), 711-746.
[DN] DILLEN, F., NÖLKER, S.: Semi-parallel submanifolds, multi-rotation surfaces
and the helix-property. J. reine angew. Math. 435 (1993), 33-63.
[EH] ESCHENBURG, J.-H., HEINTZE, E.: On the classification of polar representa-
tions. Math. Z. 232 (1999), 391-398.
[HLO] HEINTZE, E., LIU, X., OLMOS, C.: Isoparametric submanifolds and a
Chevalley-type restriction theorem. Integrable systems, geometry and topology,
151-190, AMS/IP Stud. Adv. Math. 36, Amer. Math. Soc., Providence, RI, 2006.
[GOT] GORODSKI, C., OLMOS, C., TOJEIRO, R.: Copolarity of isometric actions.
Trans. Amer. Math. Soc. 356 (2004), 1585-1608.
[Ko] KOBAYASHI, S.: Compact homogeneous hypersurfaces. Trans. Amer. Math.
Soc. 88 (1958), 137-143.
[MPST] MERCURI, F., PODESTÀ, F., SEIXAS, J. A., TOJEIRO, R.: Cohomogene-
ity one hypersurfaces of Euclidean spaces. Comment. Math. Helv. 81 (2) (2006),
471-479.
[Mo] MOORE, J. D.: Isometric immersions of Riemannian products, J. Diff. Geom.
5 (1971), 159–168.
[PT1] PALAIS, R., TERNG, C.-L.: A general theory of canonical forms. Trans. Amer.
Math. Soc. 300 (1987), 771-789.
[PT2] PALAIS, R., TERNG, C.-L.: Critical Point Theory and Submanifold Geometry,
Lecture Notes in Mathematics 1353 (1988), Springer-Verlag.
[PS] PODESTÀ, F., SPIRO, A.: Cohomogeneity one manifolds and hypersurfaces of
Euclidean space. Ann. Global Anal. Geom. 13 (1995), 169-184.
[Sa] SACKSTEDER, R.: The rigidity of hypersurfaces, J. Math. Mech. 11 (1962),
929-939.
[St] STRÜBING, W.: Isoparametric submanifolds. Geom. Dedicata 20 (1986), 367-
Universidade Federal Fluminense Universidade Federal de São Carlos
24020-140 – Niteroi – Brazil 13565-905 – São Carlos – Brazil
E-mail: [email protected] E-mail: [email protected]
|
0704.1808 | Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy | Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy
Neil J. Cornish and Tyson B. Littenberg
Department of Physics, Montana State University, Bozeman, MT 59717
The analysis of gravitational wave data involves many model selection problems. The most
important example is the detection problem of selecting between the data being consistent with
instrument noise alone, or instrument noise and a gravitational wave signal. The analysis of data
from ground based gravitational wave detectors is mostly conducted using classical statistics, and
methods such as the Neyman-Pearson criteria are used for model selection. Future space based
detectors, such as the Laser Interferometer Space Antenna (LISA), are expected to produce rich
data streams containing the signals from many millions of sources. Determining the number of
sources that are resolvable, and the most appropriate description of each source poses a challenging
model selection problem that may best be addressed in a Bayesian framework. An important class of
LISA sources are the millions of low-mass binary systems within our own galaxy, tens of thousands
of which will be detectable. Not only are the number of sources unknown, but so are the number
of parameters required to model the waveforms. For example, a significant subset of the resolvable
galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable
eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor
between competing models. Here we explore various methods for computing Bayes factors in the
context of determining which galactic binaries have measurable frequency evolution. The methods
explored include a Reverse Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickie
density ratios, the Schwarz-Bayes Information Criterion (BIC), and the Laplace approximation to
the model evidence. We find good agreement between all of the approaches.
I. BACKGROUND
Bayesian statistical techniques are becoming increas-
ingly popular in gravitational wave data analysis, and
have shown great promise in tackling the various difficul-
ties of gravitational wave (GW) source extraction from
modeled data for the Laser Interferometer Space Antenna
(LISA). A powerful tool in the suite of Bayesian methods
is that of quantitative model selection [1, 2]. To under-
stand why this is a valuable feature consider a scenario
where one is attempting to fit data with two competing
models of differing dimension. In general, a higher di-
mensional model will produce a better fit to a given set
of data. This can be taken to the limit where there are as
many model parameters as there are data points allowing
one to perfectly match the data. The problem then is to
decide how many parameters are physically meaningful
and to select the model containing only those parameters.
In the context of GW detection these extra parameters
could be additional physical parameters used to model
the source or additional sources in the data. If a model
is over-parameterized it will over-fit the data and produce
spurious results.
Many of the model selection problems associated with
LISA astronomy involve nested models, where the sim-
pler model forms a subset of the more complicated model.
The problem of determining the number of resolvable
galactic binaries, and the problem of determining the
number of measurable source parameters, are both ex-
amples of nested model selection. One could argue that
the latter is better described as “approximation selection”
since we are selecting between different parameterizations
of the full 17 dimensional physical model that describes
the signals from binary systems of point masses in general
relativity. However, many similar modeling problems in
astrophysics and cosmology [2], as well as in other fields
such as geophysics [3], are considered to be examples of
model selection, and we will adopt that viewpoint here.
The LISA observatory [4] is designed to explore the
low frequency portion of the gravitational wave spectrum
between ∼ 0.1 → 100 mHz. This frequency region will be
heavily populated by signals from galactic binary systems
composed of stellar mass compact objects (e.g. white
dwarfs and neutron stars), of which millions are theorized
to exist. Tens of thousands of these GW sources will
be resolvable by LISA and the remaining sources will
contribute to a confusion-limited background [5]. This
is expected to be the dominant source of low frequency
noise for LISA.
Detection and subsequent regression of the galactic
foreground is of vital importance in order to then pur-
sue dimmer sources that would otherwise be buried by
the foreground. Because of the great number of galac-
tic sources, and the ensuing overlap between individ-
ual sources, a one-by-one detection/regression is inaccu-
rate [6]. Therefore a global fit to all of the galactic sources
is required. Because of the uncertainty in the number of
resolvable sources one can not fix the model dimension a
priori which presents a crucial model selection problem.
Over-fitting the data will result in an inaccurate regres-
sion which would then remove power from other sources
in the data-stream, negatively impacting their detection
and characterization. The Reverse Jump Markov Chain
Monte Carlo approach to Bayesian model selection has
been used to determine the number of resolvable sources
in the context of a toy problem [7, 8] which shares some
of the features of the LISA foreground removal prob-
lem. Meanwhile the Laplace approximation to Bayesian
model selection has been employed to estimate the num-
ber of resolvable sources as part of a MCMC based al-
gorithm to extracting signals from simulated LISA data
streams [6, 9].
Another important problem for GW astronomy is the
determination of which parameters need to be included in
the description of the waveforms. For example, the GW
signal from a binary inspiral, as detected by LISA, may
involve as many as 17 parameters. However, for massive
black hole binaries of comparable mass we expect the ec-
centricity to be negligible, reducing the model dimension
to 15, while for extreme mass ratio systems we expect
the spin of the smaller body to have little impact on
the waveforms, reducing the model dimension to 14. In
many cases the inspiral signals may be described by even
fewer parameters. For low mass galactic binaries spin
effects will be negligible (removing six parameters), and
various astrophysical processes will have circularized the
orbits of the majority of systems (removing two param-
eters). Of the remaining nine parameters, two describe
the frequency evolution - e.g. the first and second time
derivatives of the GW frequency, which may or may not
be detectable [27].
Here we investigate the application of Bayesian model
selection to LISA data analysis in the context of deter-
mining the conditions under which the first time deriva-
tive of the GW frequency, ḟ , can be inferred from the
data. We parameterize the signals using the eight pa-
rameters
~λ→ (A, f, θ, φ, ψ, ι, ϕ0, ḟ) (1)
where A is the amplitude, f is the initial gravitational
wave frequency, θ and φ are the ecliptic co-latitude and
longitude, ψ is the polarization angle, ι is the orbital
inclination of the binary and ϕ0 is the GW phase. The
parameters f , ḟ and ϕ0 are evaluated at some fiducial
time (e.g. at the time the first data is taken). For our
analysis only a single source is injected into the simulated
data streams. In the frequency domain the output s(f)
in channel α can be written as
s̃α(f) = h̃α(~λ) + ñα(f) (2)
where h̃α(~λ) is the response of the detector to the incident
GW and ñα(f) is the instrument noise. For our work we
will assume stationary Gaussian instrument noise with no
contribution from a confusion background. In our analysis
we use the noise-orthogonal A, E, T data streams,
which can be constructed from linear combinations of
the Michelson-type X, Y, Z signals:

A = (2X − Y − Z)/3 ,
E = (Z − Y )/√3 ,
T = (X + Y + Z)/3 . (3)
This set of A, E, T variables differs slightly from those
constructed from the Sagnac signals [10]. We do not use
the T channel in our analysis as it is insensitive to GWs
at the frequencies we are considering. The noise spectral
density in the A,E channels has the form
Sn(f) = (1 − cos(2f/f∗)) [ (2 + cos(f/f∗)) Ss
+ 2(3 + 2 cos(f/f∗) + cos(2f/f∗)) Sa ] Hz^−1 (4)

where f∗ = 1/(2πL), and the acceleration noise Sa and
shot noise Ss are simulated at the levels

Ss = 10^−22/L^2 Hz^−1 , Sa = 9× 10^−30/((2πf)^4 L^2) Hz^−1 .

Here L is the LISA arm length (≈ 5× 10^9 m).
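As a concrete illustration, the noise model of Eq. (4) can be coded up directly. The overall prefactor and the Ss, Sa normalizations below are assumptions reconstructed from the text, so treat this as a sketch of the shape of the curve rather than the paper's exact calibration.

```python
import numpy as np

def Sn_AE(f, L=5.0e9, c=2.99792458e8):
    """Sketch of the A/E-channel instrument noise PSD, Eq. (4).

    The normalizations of Ss and Sa are assumptions, not the paper's
    exact values.
    """
    fstar = c / (2.0 * np.pi * L)                   # transfer frequency f* = 1/(2 pi L)
    x = f / fstar
    Ss = 1.0e-22 / L**2                             # shot noise level (assumed)
    Sa = 9.0e-30 / ((2.0 * np.pi * f)**4 * L**2)    # acceleration noise level (assumed)
    return ((1.0 - np.cos(2.0 * x)) *
            ((2.0 + np.cos(x)) * Ss
             + 2.0 * (3.0 + 2.0 * np.cos(x) + np.cos(2.0 * x)) * Sa))
```

With these assumed levels the acceleration term dominates at low frequency and the shot term at high frequency, as expected for LISA.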
Of central importance to Bayesian analysis is the pos-
terior distribution function (PDF) of the model parame-
ters. The PDF p(~λ|s) describes the probability that the
source is described by parameters ~λ given the signal s.
According to Bayes’ Theorem,
p(~λ|s) = p(~λ) p(s|~λ) / ∫ d~λ p(~λ) p(s|~λ) (6)
where p(~λ) is the a priori, or prior, distribution of the
parameters ~λ and p(s|~λ) is the likelihood that we measure
the signal s if the source is described by the parameters ~λ.
We define the likelihood using the noise weighted inner
product
(A|B) ≡ 2 Σα ∫ df [ Ã∗α(f) B̃α(f) + Ãα(f) B̃∗α(f) ] / S^α_n(f) , (7)

so that

p(s|~λ) = C exp[ −( s− h(~λ) | s− h(~λ) )/2 ] , (8)
where the normalization constant C depends on the
noise, but not the GW signal. One goal of the data anal-
ysis method is to find the parameters ~λ which maximizes
the posterior. Markov Chain Monte Carlo (MCMC)
methods are ideal for this type of application [11]. The
MCMC algorithm will simultaneously find the param-
eters which maximize the posterior and accurately map
out the PDF of the parameters. This is achieved through
the use of a Metropolis-Hastings [12, 13] exploration of
the parameter space. A brief description of this process
is as follows: The chain begins at some random position
~x in the parameter space and subsequent steps are made
by randomly proposing a new position in the parame-
ter space ~y. This new position is determined by drawing
from some proposal distribution q(~x|~y). The choice of
whether or not adopt the new position ~y is made by cal-
culating the Hastings ratio (transition probability)
α = min[ 1, p(~y)p(s|~y)q(~y|~x) / ( p(~x)p(s|~x)q(~x|~y) ) ] (9)
and comparing α to a random number β taken from a
uniform draw in the interval [0:1]. If α exceeds β then
the chain adopts ~y as the new position. This process
is repeated until some convergence criterion is satisfied.
The MCMC differs from a Metropolis extremization by
forbidding proposal distributions that depend on the past
history of the chain. This ensures that the progress of the
chain is Markovian and therefore statistically unbiased.
Once the chain has stabilized in the neighborhood of the
best fit parameters all previous steps of the chain are
excluded from the statistical analysis (these early steps
are referred to as the “burn in” phase of the chain) and
henceforth the number of iterations the chain spends at
different parameter values can be used to infer the PDF.
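The Metropolis-Hastings loop just described fits in a few lines. The sketch below runs it on a toy one-dimensional Gaussian "posterior" (an assumption standing in for the LISA likelihood times prior) with a symmetric Gaussian proposal, so the proposal densities cancel in the Hastings ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # toy log-posterior: a standard Gaussian stands in for p(lambda)p(s|lambda)
    return -0.5 * x**2

def metropolis_hastings(log_post, x0, n_steps, step=1.0):
    """Minimal Metropolis-Hastings sampler with a symmetric Gaussian proposal."""
    chain = [x0]
    x = x0
    for _ in range(n_steps):
        y = x + step * rng.normal()              # propose y from q(y|x)
        log_alpha = log_post(y) - log_post(x)    # symmetric q cancels in the ratio
        if np.log(rng.uniform()) < log_alpha:    # accept with probability min(1, alpha)
            x = y
        chain.append(x)
    return np.array(chain)

chain = metropolis_hastings(log_post, x0=5.0, n_steps=20000)
burned = chain[2000:]   # discard the "burn in" phase
```

Starting the chain at x0 = 5 (far from the peak) shows both behaviors described above: a burn-in drift toward the maximum, then a stationary exploration whose histogram approximates the PDF.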
The power of the MCMC is two-fold: Because the al-
gorithm has a finite probability of moving away from a
favorable location in the parameter space it can avoid
getting trapped by local features of the likelihood sur-
face. Meanwhile, the absence of any “memory” within
the chain of past parameter values allows the algorithm
to blindly, statistically, explore the region in the neigh-
borhood of the global maximum. It is then rigorously
proven that an MCMC will (eventually) perfectly map
out the PDF, completely removing the need for user in-
put to determine parameter uncertainties or thresholds.
The parameter vector that maximizes the posterior dis-
tribution is stored as the maximum a posteriori (MAP)
value and is considered to be the best estimate of the
source parameters. Note that because of the prior weight-
ing in the definition of the PDF this is not necessarily
the ~λ that yields the greatest likelihood. Upon obtaining
the MAP value for a particular model X the PDF, now
written as p( ~λ,X |s), can be employed to solve the model
selection problem.
II. BAYES FACTOR ESTIMATES
The Bayes Factor BXY [14] is a comparison of the ev-
idence for two competing models, X and Y , where
pX(s) =
d~λ p(~λ,X |s) (10)
is the marginal likelihood, or evidence, for model X. The
Bayes Factor can then be calculated by
BXY (s) =
pX(s)
pY (s)
. (11)
The Bayes Factor has been described as the Holy Grail
of model selection: It is a powerful entity that is very
difficult to find. The quantity BXY can be thought of as
the odds ratio for a preference of model X over model Y .
Apart from carefully concocted toy problems, direct cal-
culation of the evidence, and therefore BXY , is imprac-
tical. To determine BXY the integral required to com-
pute pX can not generally be solved analytically and for
high dimension models Monte-Carlo integration proves
BXY 2 logBXY Evidence for model X
< 1 < 0 Negative (supports model Y )
1 to 3 0 to 2 Not worth more than a bare mention
3 to 12 2 to 5 Positive
12 to 150 5 to 10 Strong
> 150 > 10 Very Strong
TABLE I: BXY ‘confidence’ levels taken from [1]
to be inefficient. To employ this powerful statistical tool
various estimates for the Bayes Factor have been devel-
oped that have different levels of accuracy and computa-
tional cost [1, 2]. We have chosen to focus on four such
methods: Reverse Jump Markov Chain Monte Carlo and
Savage-Dickie density ratios, which directly estimate the
Bayes factor, and the Schwarz-Bayes Information Cri-
terion (BIC) and Laplace approximations of the model
evidence.
A. RJMCMC
Reverse JumpMarkov Chain Monte Carlo (RJMCMC)
algorithms are a class of MCMC algorithms which admit
“trans-dimensional” moves between models of different
dimension [3, 15, 16]. For the trans-dimensional imple-
mentation applicable to the LISA data analysis problem
the choice of model parameters becomes one of the search
parameters. The algorithm proposes parameter ‘birth’ or
‘death’ moves (proposing to include or discard the ‘extra’
parameter(s)) while holding all other parameters fixed.
The priors in the RJMCMC Hastings ratio

α = min[ 1, |J| p(~λY )p(s|~λY )g(~uY ) / ( p(~λX)p(s|~λX)g(~uX) ) ] (12)

automatically penalize the posterior density of the
higher dimensional model, which compensates for its
generically higher likelihood, serving as a built in ‘Occam
Factor.’ The g(~u) which appears in (12) is the distribu-
tion from which the random numbers ~u are drawn and
|J| is the Jacobian

|J| ≡ | ∂(~λY , ~uY ) / ∂(~λX , ~uX) | . (13)
The chain will tend to spend more iterations using the
model most appropriately describing the data, making
the decision of which model to favor a trivial one. To
quantitatively determine the Bayes Factor one simply
computes the ratio of the iterations spent within each
model.
BXY ≃ (# of iterations in model X) / (# of iterations in model Y) (14)
Like the simpler MCMC methods, the RJMCMC is guar-
anteed to converge on the correct value of BXY making
FIG. 1: 50 000 iteration segment of an RJMCMC chain mov-
ing between models with and without frequency evolution.
This particular run was for a source with q = 1 and SNR=10
and yielded BXY ∼ 1.
it the ‘gold standard’ of Bayes Factor estimation. And,
like regular MCMCs, the convergence can be very slow, so
that in practice the Bayes Factor estimates can be inaccu-
rate. This is especially true when the trans-dimensional
moves involve many parameters, such as the 7 or 8 di-
mensional jumps that are required to transition between
models with differing numbers of galactic binaries.
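Once an RJMCMC chain carries a model-indicator column, the iteration-count ratio above is a one-liner. This sketch assumes the indicator records the model dimension (7 or 8) at each iteration; the illustrative chain below is fabricated for the example.

```python
import numpy as np

def bayes_factor_from_chain(model_index, X=8, Y=7):
    """Estimate B_XY as the ratio of iterations the chain spends in each model."""
    model_index = np.asarray(model_index)
    n_X = np.count_nonzero(model_index == X)
    n_Y = np.count_nonzero(model_index == Y)
    return n_X / n_Y

# e.g. a chain spending 30000 iterations in the 8-D model and 20000 in the 7-D one
print(bayes_factor_from_chain([8] * 30000 + [7] * 20000))   # -> 1.5
```

In practice the estimate is only as good as the chain's mixing between models, which is exactly the slow-convergence caveat noted above.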
Figure 1 shows the output of a RJMCMC search of a
simulated LISA data stream containing the signal from a
galactic binary with q = ḟ Tobs^2 = 1 and an observation
time of Tobs = 2 years. The chain moved freely between
the 7-dimensional model with no frequency evolution and
the 8-dimensional model which included the frequency
evolution.
B. Laplace Approximation
A common approach to model selection is to approx-
imate the model evidence directly. Working under the
assumption that the PDF is Gaussian, the integral in
equation (10) can be estimated by use of the Laplace
approximation. This is accomplished by comparing the
volume of the model's parameter space V to that of the
parameter uncertainty ellipsoid ∆V :

pX(s) ≃ p(~λMAP, X |s) ∆V/V . (15)
The uncertainty ellipsoid can be determined by calculat-
ing the determinant of the Hessian H of partial deriva-
tives of the posterior with respect to the model parameters,
evaluated at the MAP value for the parameters:

pX(s) ≃ p(~λMAP, X |s) (2π)^D/2 / ( V √(det H) ) . (16)
The Fisher Information Matrix (FIM) Γ with compo-
nents
Γij ≡ (h,i |h,j ) where h,i ≡ ∂h/∂λ^i , (17)

can be used as a quadratic approximation to H, yielding

pX(s) ≃ p(~λMAP, X |s) (2π)^D/2 / ( V √(det Γ) ) . (18)
We will refer to this estimate of the evidence as the
Laplace-Fisher (LF) approximation. The LF approxima-
tion breaks down if the priors have large gradients in the
vicinity of the MAP parameter estimates. The FIM esti-
mates can also be poor if some of the source parameters
are highly correlated, or if the quadratic approximation
fails. In addition, the FIM approximation gets progres-
sively worse as the SNR of the source decreases.
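In code, the Laplace-Fisher estimate above needs only the log-posterior at the MAP point, the Fisher matrix, and the prior volume. The uniform-prior assumption (so that V is a product of prior ranges) is ours, made for the sketch.

```python
import numpy as np

def log_evidence_LF(log_post_map, Gamma, prior_volume):
    """Laplace-Fisher log-evidence sketch.

    log_post_map : ln p(lambda_MAP, X|s) at the MAP point
    Gamma        : Fisher information matrix (stand-in for the Hessian H)
    prior_volume : product of the uniform prior ranges (assumed)
    """
    D = Gamma.shape[0]
    sign, logdet = np.linalg.slogdet(Gamma)      # stable log-determinant
    return (log_post_map + 0.5 * D * np.log(2.0 * np.pi)
            - 0.5 * logdet - np.log(prior_volume))

# 1-D sanity check: a Gaussian posterior with sigma = 2 on a prior range of
# length 10 gives ln(sqrt(2*pi) * 2 / 10)
val = log_evidence_LF(0.0, np.array([[0.25]]), 10.0)
```

The one-dimensional check makes the Occam penalty explicit: the evidence is the peak posterior times the fraction of the prior volume the uncertainty ellipsoid occupies.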
A more accurate (though more costly) method for esti-
mating the evidence is the Laplace-Metropolis (LM) ap-
proximation which employs the PDF as mapped out by
the MCMC exploration of the likelihood surface to es-
timate H [16]. This can be accomplished by fitting a
minimum volume ellipsoid (MVE) to the D-dimensional
posterior distribution function. The principle axes of the
MVE lie in eigen-directions of the distribution which gen-
erally do not lie along the parameter directions. Here we
employ the MVE.jar package which utilizes a genetic al-
gorithm to determine the MVE of the distribution and
returns the covariance matrix of the PDF [17]. The de-
terminant of the covariance matrix can then be used to
determine the evidence via
pX(s) ≃ p(~λMAP, X |s) (2π)^D/2 √(det C) . (19)
In the MCMC literature the LM approximation is gen-
erally considered to be second only to the RJMCMC
method for estimating Bayes Factors.
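The Laplace-Metropolis estimate of Eq. (19) can be sketched with the chain itself. As a simplification we use the ordinary sample covariance of the posterior samples in place of the minimum-volume-ellipsoid fit the paper performs with MVE.jar, so this is an approximation to the LM approximation.

```python
import numpy as np

def log_evidence_LM(log_post_map, samples):
    """Laplace-Metropolis log-evidence, Eq. (19), from posterior samples.

    Uses the plain sample covariance as a stand-in for the MVE fit.
    """
    samples = np.asarray(samples, dtype=float)
    if samples.ndim == 1:
        samples = samples[:, None]               # treat a 1-D chain as (N, 1)
    D = samples.shape[1]
    C = np.atleast_2d(np.cov(samples, rowvar=False))
    sign, logdet = np.linalg.slogdet(C)
    return log_post_map + 0.5 * D * np.log(2.0 * np.pi) + 0.5 * logdet
```

For a well-mixed chain on a near-Gaussian posterior the two fits agree; the MVE is more robust when the posterior has heavy tails or strong correlations.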
C. Savage Dickie Density Ratio
Both RJMCMC and LM require exploration of the pos-
terior for each model under consideration. The Savage-
Dickie (SD) approximation estimates the Bayes Factor
directly while only requiring exploration of the highest di-
mensional space [2, 18]. This approximation requires that
two conditions are met: Model X must be nested within
Model Y (adding and subtracting parameters clearly sat-
isfies this condition) and the priors for any given model
must be separable (i.e. p(~λ) = p(λ1)×p(λ2)×. . .×p(λD))
which is, to a good approximation, satisfied in our exam-
ple. The Bayes Factor BXY is then calculated by com-
paring the weight of the marginalized posterior to the
weight of the prior distribution for the ‘extra’ parameter
at the default, lower-dimensional, value for the parameter
in question.
BXY (s) ≃ p(λ0|s) / p(λ0) . (20)
It is interesting to note that if the above conditions are
precisely satisfied it can then be shown that this is an
exact calculation of BXY (assuming sufficient sampling
of the PDF), as opposed to an approximation.
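The density ratio above needs only the chain for the larger model. This sketch estimates the marginal posterior density at the nested value q0 = 0 with a single histogram bin; the prior density 1/6 corresponds to the uniform prior on [−3, 3] used in the text, and the bin width is a tuning choice of ours.

```python
import numpy as np

def savage_dickie(q_samples, q0=0.0, prior_density=1.0 / 6.0, width=0.1):
    """Crude Savage-Dickie estimate of B_XY = p(q0|s) / p(q0).

    prior_density = 1/6 matches the uniform prior on [-3, 3] from the text;
    the histogram bin width is a tuning choice of this sketch.
    """
    q_samples = np.asarray(q_samples)
    in_bin = np.abs(q_samples - q0) < 0.5 * width
    post_density = in_bin.mean() / width    # marginal posterior density at q0
    return post_density / prior_density
```

A kernel density estimate at q0 would be a smoother alternative to the single bin, at slightly higher cost.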
D. Schwarz-Bayes Information Criterion
All of the approximations discussed so far depend on
the supplied priors p(~λ). The Schwarz-Bayes Informa-
tion Criterion (BIC) method is an approximation to the
model evidence which makes its own assumptions about
the priors - namely that they take the form of a multi-
variate Gaussian with covariance matrix derived from the
Hessian H [16, 19]. The BIC estimate for the evidence is
ln pX(s) ≃ ln p(~λMAP, X |s) − (DX/2) lnNeff (21)
where DX is the dimension of model X and Neff is the
effective number of samples in the data. For our tests we
defined Neff to be the number of data points required to
return a (power) signal-to-noise ratio of SNR^2 − D, where
SNR is the signal-to-noise one gets by summing over the
entire LISA band. This choice was motivated by the
fact that the variance in SNR^2 is equal to D^2, so Neff
accounts for the data points that carry significant weight
in the model fit. The BIC estimate has the advantage of
being very easy to calculate, but is generally considered
less reliable than the other techniques we are using.
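The BIC of Eq. (21) is the cheapest of the estimates; the only subtlety is choosing Neff as defined above. A minimal sketch:

```python
import numpy as np

def log_evidence_BIC(log_post_map, D, N_eff):
    """BIC log-evidence, Eq. (21): MAP log-posterior minus (D/2) ln N_eff."""
    return log_post_map - 0.5 * D * np.log(N_eff)

# at equal MAP posterior and N_eff, the 8-D model pays an Occam penalty of
# 0.5 * ln(N_eff) in log-evidence relative to the 7-D model
delta = log_evidence_BIC(0.0, 8, 100) - log_evidence_BIC(0.0, 7, 100)
```

The example makes the dimensional penalty explicit: unless the extra parameter raises the MAP log-posterior by more than 0.5 ln Neff, the BIC favors the simpler model.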
III. CASE STUDY
To compare the various approximations to the Bayes
Factor we simulated a ‘typical’ galactic binary. The in-
jected parameters for our test source can be found in ta-
ble II. Since ḟ ∼ f^11/3, higher frequency sources are more
likely to have a measurable ḟ . On the other hand, the
number of binaries per frequency bin falls off as ∼ f^−11/3,
so high frequency systems are fairly rare. As a compro-
mise, we selected a system with a GW frequency of 5
mHz. To describe the frequency evolution we introduced
the dimensionless parameter
q ≡ ḟ Tobs^2 , (22)
which measures the change in the Barycenter GW fre-
quency in units of 1/Tobs frequency bins. For q ≫ 1 it
is reasonable to believe that a search algorithm will have
no difficulty detecting the frequency shift. Likewise, for
q ≪ 1 it is unlikely that the frequency evolution can be
detected (at least for sources with modest SNR). There-
fore we have selected q ∼ 1 to test the model selection
techniques. Achieving q = 1 for typical galactic binaries
at 5 mHz requires observation times of approximately
two years. A range of SNRs were explored by varying
the distance to the source.
TABLE II: Source parameters
f (mHz) cos θ φ (deg) ψ (deg) cos ι ϕ0 (deg) q Tobs (yr)
5.0 1.0 266.0 51.25 0.17 204.94 1 2
We can rapidly calculate accurate waveform templates
using the fast-slow decomposition described in the Ap-
pendix. Our waveform algorithm has been used in the
second round of Mock LISA Data Challenges [20] to sim-
ulate the response to a galaxy containing some 26 million
sources. The simulation takes just a few hours to run on
a single 2 GHz processor.
We simulated a 1024 frequency bin snippet of LISA
data around 5 mHz that included the injected signal
and stationary Gaussian instrument noise. The Markov
chains were initialized at the injected source parameters
as the focus of this study is the statistical character of
the detection, and not the initial detection (a highly ef-
ficient solution to the detection problem is described in
Ref. [9]). We used uniform priors for all of the parame-
ters, with the angular parameters taking their standard
ranges. We took the frequency to lie somewhere within
the frequency snippet, and lnA to be uniform across
the range (1/2) ln(Sn/(2T )) and (1/2) ln(1000Sn/(2T )), which
roughly corresponds to signal SNRs in the range 1 to
1000. We took the frequency evolution parameter q to
be uniformly distributed between -3 and 3 and adopted
q = 0 as the default value when operating under the
7-dimensional model. In reality, astrophysical considera-
tions yield a very strong prior for q (see Section V) that
will significantly impact model selection. We decided to
use a simple uniform prior to compare the various ap-
proximations to the Bayes Factor, before moving on to
consider the effects of the astrophysical prior in Section V.
The choice of proposal distribution q(~x|~y) from which
to draw new parameter values has no effect on the asymp-
totic form of the recovered PDFs, but the choice is cru-
cially important in determining the rate of convergence to
the stationary distribution. We took q(~x|~y) to be a mul-
tivariate Gaussian with covariance matrix given by the
inverse of the FIM. In addition to the source parameters
we included two additional parameters, kA and kE , that
describe the noise levels in the A and E data channels:
S^A_n(f) = kA Sn(f) , S^E_n(f) = kE Sn(f) . (23)
In a given realization of the instrument noise kA and
kE will differ from unity by some random amount δ. The
quantity δ will have a Gaussian distribution with variance
σ2 = 1/N , whereN is the number of frequency bins being
analyzed. The likelihood p(s|~λ) can then be written as
p(s|~λ) = C′ exp[ −( s− h(~λ) | s− h(~λ) )/2 − N ln(kAkE) ] (24)
where the constant C′ is independent of the signal param-
eters ~λ and the noise parameters kA and kE . We explored
the noise level estimation capabilities of our MCMC al-
gorithm by starting kA and kE far from unity. As can be
seen in Figure 2 the chain quickly identified the correct
noise level.
FIG. 2: Demonstration of the MCMC algorithm’s rapid de-
termination of the injected noise level. The parameters kA
and kE were initialized at 10 and 0.1 respectively.
IV. COMPARISON OF TECHNIQUES
We compared the Bayes Factor estimates obtained us-
ing the various methods in two ways. First, we fixed the
frequency derivative of the source at q = 1 and varied
the SNR between 5 and 20 in unit increments. Second,
we fixed the signal to noise ratio at SNR = 12 and varied
the frequency derivative of the source.
FIG. 3: Plot of the Bayes Factor estimates as a function of
SNR for each of the approximation schemes described in the
text.
The results of the first test are shown in Figure 3. We
see that all five methods agree very well for SNR > 7.
As expected, the Laplace-Metropolis and Savage-Dickie
methods provide the best approximation to the “Gold
Standard” RJMCMC estimate, showing good agreement
all the way down to SNR = 5. Most importantly, all five
methods agree on when the 8-dimensional source model is
favored over the 7-dimensional model, placing the tran-
sition point at SNR ≃ 12.2. To get a rough estimate
for the numerical error in the various Bayes Factor es-
timates we repeated the SNR= 15 case 10 times using
different random number seeds. We found that the nu-
merical error was enough to account for any quantitative
differences between the estimates returned by the various
approaches.
It is interesting to compare the Bayesian model se-
lection results to the frequentist “3-σ” rule for positive
detection:
|q̄| > 3σq, (25)
where q̄ is the MAP estimate for the frequency change
and σq is the standard deviation in q as determined by
the FIM. For the source under consideration we found
the “3-σ” rule to require SNR ≃ 13 for a detection, in
good agreement with the Bayesian analysis. This lends
support to Seto’s [21] earlier FIM based study of the de-
tectability of the frequency evolution of galactic bina-
ries, but we should caution that the literature is replete
with examples where the “3-σ” rule yields results in dis-
agreement with Bayesian model selection and common
sense [22].
FIG. 4: Plot of the Bayes Factor estimates as a function of q
for each of the approximation schemes described in the text.
The signal to noise ratio was held fixed at SNR = 12.
The results of the second test are displayed in Figure 4.
In this case all five methods produced results that agree
to within numerical error.
While the results shown here are for a particular choice
of source parameters, we found similar results for other
sets of source parameters. In general all five methods
for estimating the Bayes Factor gave consistent results
for signals with SNR > 7. One exception to this general
trend were sources with inclinations close to zero, as then
the PDFs tend to be highly non-Gaussian. The Laplace-
Metropolis and Laplace-Fisher approximations suffered
the most in those cases.
V. ASTROPHYSICAL PRIORS
Astrophysical considerations lead to very strong pri-
ors for the frequency evolution of galactic binaries. The
detached systems, which are expected to account for
the majority of LISA sources, will evolve under gravita-
tional radiation reaction in accord with the leading order
quadrupole formula:

ḟ = ( 3(8π)^8/3 / 40 ) f^11/3 M^5/3 , (26)
where M is the chirp mass. Contact binaries undergoing
stable mass transfer from the lighter to the heavier com-
ponent are driven to longer orbital periods by angular
momentum conservation. The competition between the
effects of mass transfer and gravitational wave emission
lead to a formula for ḟ with the same frequency and mass
scaling as (26), but with the opposite sign and a slightly
lower magnitude [23].
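To see why q ∼ 1 at f = 5 mHz takes roughly two years of data, one can evaluate the quadrupole formula numerically. The prefactor below is written in the equivalent form (96/5)π^(8/3), and the chirp mass of 0.3 solar masses is an assumed "typical" value for illustration, not a figure from the text.

```python
import numpy as np

MSUN = 4.9255e-6   # solar mass in seconds (geometrical units, G = c = 1)

def fdot_quadrupole(f, chirp_mass_sun):
    """GW frequency derivative of a detached binary, Eq. (26),
    written as (96/5) pi^(8/3) Mc^(5/3) f^(11/3)."""
    Mc = chirp_mass_sun * MSUN
    return (96.0 / 5.0) * np.pi ** (8.0 / 3.0) * Mc ** (5.0 / 3.0) * f ** (11.0 / 3.0)

Tobs = 2.0 * 3.15581e7                          # two years, in seconds
q = fdot_quadrupole(5.0e-3, 0.3) * Tobs**2      # q = fdot * Tobs^2, Eq. (22)
# q comes out of order unity for these assumed values
```

This back-of-the-envelope check reproduces the statement in Section III that q ≈ 1 for typical 5 mHz binaries over a two-year observation.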
Population synthesis models, calibrated against obser-
vational data, yield predictions for the distribution of
chirp masses M as a function of orbital frequency. These
distributions can be converted into priors on q. In con-
structing such priors one should also fold in observational
selection effects, which will favor systems with larger
chirp mass (the GW amplitude scales as M^5/6). To get
some sense of how such priors will affect the model selec-
tion we took the chirp mass distribution for detached sys-
tems at f ∼ 5 mHz from the population synthesis model
described in Ref. [24], (kindly provided to us by Gijs Nele-
mans), and used (26) to construct the prior on q shown
in Figures 5 and 6 (observation selection effects were ig-
nored). The prior has been modified slightly to give a
small but non-vanishing weight to sources with q = 0. The
astrophysically motivated prior has a very sharp peak at
q = 0.64, and we use this value when fixing the frequency
derivative for the 7-dimensional model.
To explore the impact on model selection when such a
strong prior has been adopted we simulated a source with
q = 1 and varied the SNR. The RJMCMC algorithm was
applied using chains of length 107 in conjunction with
a fixed 8-dimensional MCMC (also allowed to run for
107 iterations) in order to compare the RJMCMC results
with the Savage-Dickie density ratio.
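For nested models the Savage-Dickie ratio reduces to the marginal posterior density at the nested value divided by the prior density there, so it can be read off a fixed-dimension chain. A minimal sketch, using a simple Gaussian kernel density estimate and a toy Gaussian posterior (both illustrative assumptions, not this paper's implementation):

```python
import numpy as np

def savage_dickey(samples, prior_pdf, q0, bw=None):
    """Estimate B = p(q0 | d) / pi(q0) from posterior samples of q,
    using a Gaussian kernel density estimate at the nested value q0."""
    samples = np.asarray(samples, float)
    if bw is None:  # Silverman's rule-of-thumb bandwidth
        bw = 1.06 * samples.std() * len(samples) ** (-1 / 5)
    post_at_q0 = np.exp(-0.5 * ((q0 - samples) / bw) ** 2).sum() / (
        len(samples) * bw * np.sqrt(2 * np.pi))
    return post_at_q0 / prior_pdf(q0)

# Toy check: posterior N(1, 0.5), uniform prior on [-5, 5], nested point q0 = 0
rng = np.random.default_rng(0)
qs = rng.normal(1.0, 0.5, 200_000)
B = savage_dickey(qs, lambda q: 1 / 10.0, 0.0)
print(B)
```

In the toy case the analytic answer is N(0; 1, 0.5)/0.1 ≈ 1.08, which the KDE estimate reproduces up to smoothing bias.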
The results of this first exploration are shown in Fig-
ure 5. We found that for SNR < 15 the marginalized
PDF very closely resembled the prior distribution. This
demonstrates that the information content of the data
is insufficient to change our prior belief about the value
of the frequency derivative. As the SNR increased, how-
ever, the PDF began to move away from the prior until
we reached SNR=30 when the astrophysical prior had a
FIG. 5: Comparison between astrophysically motivated prior
distribution of q for f = 5 mHz and Tobs = 2 years (dashed,
blue) to marginalized PDF (solid, red) for sources injected
with q = 1 and SNRs varying from 5 to 30.
SNR BXY (SD) BXY (RJMCMC)
5 0.926 1.015
10 0.977 0.996
15 0.749 0.742
20 0.427 0.427
25 0.176 0.177
30 0.060 0.056
TABLE III: Savage-Dickie density ratio estimates of BXY for
sources with q = 1 and SNRs varying from 5 to 30. Compar-
isons with RJMCMC explorations of the same data set show
excellent agreement between the two methods.
negligible effect on the shape of the posterior, signaling
confidence in the quoted measurement of q. This qualita-
tive assessment of model preference is strongly supported
by the Bayes factor estimation made by the RJMCMC
algorithm as can be seen in Table III. Note also the
excellent agreement between the RJMCMC and S-D
estimates for the Bayes factor BXY . Both
methods indicate that for the chosen value of q = 1,
the signal-to-noise needs to exceed SNR ∼ 25 for the
8-dimensional model to be favored. This is in contrast
to the case discussed earlier where a uniform prior was
adopted for the frequency derivative, and the model se-
lection methods began showing a preference for the 8-
dimensional model around SNR=12.
Figure 6 shows the impact of the astrophysically moti-
vated prior when the SNR was held at 15 and four differ-
ent injected values for q were adopted, corresponding to
the full width at half maximum (FWHM) and full width
at quarter maximum (FWQM) of the prior distribution.
The Bayes factors listed in Table IV indicate that for
modestly loud sources with SNR=15 the model selection
techniques do not favor updating our estimate of the fre-
FIG. 6: Marginalized PDF (solid, red) for fixed SNR=15 in-
jected sources with q corresponding to FWHM and FWQM
of the astrophysical prior (dashed, blue).
q BXY (SD) BXY (RJMCMC)
0.35 1.412 1.414
0.48 1.381 1.388
0.83 1.059 1.052
1.15 0.432 0.428
TABLE IV: Savage-Dickie and RJMCMC density ratio esti-
mates of BXY for sources with SNR=15 and q at FWHM and
FWQM of astrophysical prior
quency derivative until q exceeds 1.2.
VI. DISCUSSION
We have found that the several common methods for
estimating Bayes Factors give good agreement when ap-
plied to the model selection problem of deciding when
the data from the LISA observatory can be used to detect
the orbital evolution of a galactic binary. The methods
studied require varying degrees of effort to implement and
calculate, and although found to be accurate in this test
case, it is clear that some of these methods would be in-
appropriate approximations for more physically relevant
examples.
If a RJMCMC algorithm is used as the sole model
selection technique, the resistance of the algorithm
to change dimension, especially when making multi-
dimensional jumps, can result in invalid model selection
unless the chains are run for a very large numbers of
steps. In the examples we studied the transdimensional
jumps only had to span one dimension, and our basic
RJMCMC algorithm performed well. However, a more
sophisticated implementation, using e.g. rejection sam-
pling or coupled chains, will be required to select the
number of sources, as this requires jumps that span seven
or more dimensions.
The Laplace-Metropolis method for approximating the
model evidence is more robust than the commonly used
Fisher Information Matrix approximation of the Hessian
of the PDF. Implementing an LM evidence estimation is
somewhat costly because of the need to fit the posterior
to a minimum volume ellipsoid.
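A minimal sketch of the Laplace approximation to the evidence, using the plain sample covariance in place of the minimum volume ellipsoid fit (an illustrative simplification of the LM method):

```python
import numpy as np

def laplace_metropolis(samples, log_post):
    """Laplace-Metropolis log-evidence estimate from MCMC samples.
    log_post(theta) must return log[ L(theta) * prior(theta) ].
    The sample covariance stands in for a minimum-volume-ellipsoid fit."""
    X = np.atleast_2d(np.asarray(samples, float))
    d = X.shape[1]
    cov = np.cov(X, rowvar=False).reshape(d, d)
    i_map = np.argmax([log_post(x) for x in X])   # highest-posterior sample
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * d * np.log(2 * np.pi) + 0.5 * logdet + log_post(X[i_map])

# Toy check: if L * prior is a unit 2D Gaussian density, the evidence is 1
rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 2))
logpost = lambda x: -0.5 * x @ x - np.log(2 * np.pi)
logZ = laplace_metropolis(X, logpost)
print(logZ)
```

For the Gaussian toy problem the approximation is exact up to sampling error, so logZ comes out near zero.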
The Savage-Dickie approximation is more economical
than the RJMCMC or LM methods, but is limited by the
requirement that the competing models must be nested.
The Bayes Information Criterion approximation to the
evidence is by far the cheapest to implement, and is able
to produce reliable results when the SNR is high. It has
therefore shown the most promise as an ‘on the fly’ model
determination scheme. More thorough (and therefore
more costly) methods such as RJMCMC and LM could
then be used to refine the conclusions initially made by
the BIC.
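The BIC-based Bayes factor estimate can be sketched in a few lines; the likelihood improvement and data count below are hypothetical numbers chosen for illustration:

```python
import math

def bic(log_lmax, k, n):
    """Bayes Information Criterion: k ln N - 2 ln L_max."""
    return k * math.log(n) - 2.0 * log_lmax

def bayes_factor_bic(log_lmax_x, k_x, log_lmax_y, k_y, n):
    """B_XY ~ exp(-(BIC_X - BIC_Y)/2), a large-N approximation."""
    return math.exp(-0.5 * (bic(log_lmax_x, k_x, n) - bic(log_lmax_y, k_y, n)))

# Hypothetical example: the extra parameter of the 8-dimensional model
# improves the fit by Delta ln L = 4 with N = 1000 data points
B = bayes_factor_bic(log_lmax_x=4.0, k_x=8, log_lmax_y=0.0, k_y=7, n=1000)
print(B)
```

In this approximation one added parameter must buy an improvement Δ ln L > (ln N)/2 before the larger model is favored.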
Our investigation using a strong astrophysical prior
indicated that the gravitational wave signals will need
to have high signal-to-noise (SNR > 25), or moderate
signal-to-noise (SNR > 15) and frequency derivatives far
from the peak of the astrophysical distribution, in order
to update our prior belief in the value of the frequency
derivative. In other words, the frequency derivative will
only be needed as a search parameter for a small num-
ber of bright high frequency sources.
Acknowledgments
This work was supported by NASA Grant
NNG05GI69G. We are most grateful to Gijs Nele-
mans for providing us with data from his population
synthesis studies.
Appendix A
To leading order in the eccentricity, e, the Cartesian
coordinates of the ith LISA spacecraft are given by [25]
xi(t) = R cos(α) + (eR/2) [cos(2α − βi) − 3 cos(βi)]

yi(t) = R sin(α) + (eR/2) [sin(2α − βi) − 3 sin(βi)]

zi(t) = −√3 eR cos(α − βi) . (27)
In the above, R = 1 AU is the radial distance of the
guiding center, α = 2πfmt+ κ is the orbital phase of the
guiding center, and βi = 2π(i − 1)/3 + λ (i = 1, 2, 3) is
the relative phase of the spacecraft within the constel-
lation. The parameters κ and λ give the initial ecliptic
longitude and orientation of the constellation. The dis-
tance between the spacecraft is L = 2√3 eR. Setting
e = 0.00985 yields L = 5 × 10^9 m.
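The orbits of (27) are easy to check numerically; the sketch below (with κ = λ = 0 assumed for simplicity) confirms that the spacecraft separations come out near 2√3 eR ≈ 5 × 10^9 m:

```python
import numpy as np

AU = 1.49597871e11      # m
R, ecc = AU, 0.00985
fm = 1.0 / 3.15581e7    # modulation frequency, 1/yr
kappa = lam = 0.0       # initial ecliptic longitude / orientation (assumed zero)

def spacecraft(t, i):
    """Leading-order-in-e coordinates of spacecraft i = 1, 2, 3, Eq. (27)."""
    a = 2 * np.pi * fm * t + kappa
    b = 2 * np.pi * (i - 1) / 3 + lam
    x = R * np.cos(a) + 0.5 * ecc * R * (np.cos(2 * a - b) - 3 * np.cos(b))
    y = R * np.sin(a) + 0.5 * ecc * R * (np.sin(2 * a - b) - 3 * np.sin(b))
    z = -np.sqrt(3) * ecc * R * np.cos(a - b)
    return np.array([x, y, z])

# arm length should stay near L = 2 sqrt(3) e R ~ 5e9 m as the array orbits
for t in (0.0, 0.3 / fm):
    r12 = spacecraft(t, 1) - spacecraft(t, 2)
    print(np.linalg.norm(r12) / 1e9)
```

At this order the triangle is rigid; the arm-length "breathing" only enters at O(e²).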
An arbitrary gravitational wave traveling in the k̂ di-
rection can be written as the linear sum of two indepen-
dent polarization states
h(ξ) = h+(ξ)ε^+ + h×(ξ)ε^× (28)
where the wave variable ξ = t − k̂ · x gives the surfaces
of constant phase. The polarization tensors can be ex-
panded in terms of the basis tensors e+ and e× as
ε+ = cos(2ψ)e+ − sin(2ψ)e×
ε× = sin(2ψ)e+ + cos(2ψ)e× , (29)
where ψ is the polarization angle and
e^+ = û⊗ û − v̂ ⊗ v̂

e^× = û⊗ v̂ + v̂ ⊗ û . (30)
The vectors (û, v̂, k̂) form an orthonormal triad which
may be expressed as a function of the source location on
the celestial sphere
û = cos θ cosφ x̂+ cos θ sinφ ŷ − sin θ ẑ
v̂ = sinφ x̂− cosφ ŷ
k̂ = − sin θ cosφ x̂− sin θ sinφ ŷ − cos θ ẑ . (31)
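A quick numerical check of (29)-(31): the triad is orthonormal and the resulting polarization tensors are traceless and transverse. The specific angles below are arbitrary test values:

```python
import numpy as np

def wave_basis(theta, phi):
    """Orthonormal triad (u, v, k) of Eq. (31)."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    u = np.array([ct * cp, ct * sp, -st])
    v = np.array([sp, -cp, 0.0])
    k = np.array([-st * cp, -st * sp, -ct])
    return u, v, k

def pol_tensors(theta, phi, psi):
    """Polarization tensors eps_plus, eps_cross of Eqs. (29)-(30)."""
    u, v, k = wave_basis(theta, phi)
    ep = np.outer(u, u) - np.outer(v, v)   # e^+
    ec = np.outer(u, v) + np.outer(v, u)   # e^x
    c2, s2 = np.cos(2 * psi), np.sin(2 * psi)
    return c2 * ep - s2 * ec, s2 * ep + c2 * ec

u, v, k = wave_basis(0.7, 1.2)
Ep, Ec = pol_tensors(0.7, 1.2, 0.3)
print(np.trace(Ep), np.trace(Ec))
```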
For mildly chirping binary sources we have
h(ξ) = ℜ[ (A+ ε^+ + e^{i3π/2} A× ε^×) e^{iΨ(ξ)} ] , (32)

where

A+ = (2M^{5/3}(πf)^{2/3}/DL)(1 + cos² ι)

A× = −(4M^{5/3}(πf)^{2/3}/DL) cos ι . (33)
Here M is the chirp mass, DL is the luminosity dis-
tance and ι is the inclination of the binary to the line
of sight. Higher post-Newtonian corrections, eccentric-
ity of the orbit, and spin effects will introduce additional
harmonics. For chirping sources the adiabatic approxi-
mation requires that the frequency evolution ḟ occurs on
a timescale long compared to the light travel time in the
interferometer: f /ḟ ≫ L. The gravitational wave phase
can be approximated as
Ψ(ξ) = 2πf0ξ + πḟ0ξ² + ϕ0 , (34)
where ϕ0 is the initial phase. The instantaneous fre-
quency is given by 2πf = ∂tΨ:
f = (f0 + ḟ0ξ)(1 − k̂ · v) . (35)
The general expression for the path length variation
caused by a gravitational wave involves an integral in
ξ from ξi to ξf . Writing ξ = ξi + δξ we have
Ψ(ξ) ≃ 2π(f0 + ḟ0ξi)δξ + const . (36)
Thus, we can treat the wave as having fixed frequency
f0 + ḟ0ξi for the purposes of the integration, and then
increment the frequency forward in time in the final ex-
pression [26]. The path length variation is then given
by [25, 26]
δℓij(ξ) = L ℜ[ d(f, t, k̂) : h(ξ) ] , (37)
where a : b = aijbij . The one-arm detector tensor is
given by
d(f, t, k̂) = (1/2) [ r̂ij(t)⊗ r̂ij(t) ] T (f, t, k̂) , (38)
and the transfer function is
T (f, t, k̂) = sinc[ (f/2f∗)(1 − k̂ · r̂ij(t)) ]
× exp[ i (f/2f∗)(1 − k̂ · r̂ij(t)) ] , (39)
where f∗ = 1/(2πL) is the transfer frequency and f =
f0 + ḟ0ξ. The expression can be attacked in pieces. It is
useful to define the quantities
d+ij(t) ≡ (r̂ij(t)⊗ r̂ij(t)) : e^+ (40)

d×ij(t) ≡ (r̂ij(t)⊗ r̂ij(t)) : e^× . (41)
and yij(t) = δℓij(t)/(2L). Then
yij(t) = ℜ[ yslowij(t) e^{2πif0t} ] , (42)
where

yslowij(t) = (T (f, t, k̂)/4) [ d+ij(t)(A+(t) cos(2ψ) + e^{3πi/2}A×(t) sin(2ψ))
+ d×ij(t)(e^{3πi/2}A×(t) cos(2ψ) − A+(t) sin(2ψ)) ] e^{πiḟ0ξ² + iϕ0 − 2πif0k̂·x} . (43)
It is a simple exercise to derive explicit expressions for the
antenna functions and the transfer function appearing in
yslowij (t) using (27) and (31).
In the Fourier domain the response can be written as
yij(t) = ℜ[ Σn an e^{2πint/Tobs} e^{2πif0t} ] , (44)
where the coefficients an can be found by a numerical
FFT of the slow terms yslowij (t). Note that the sum over
n should extend over both negative and positive values.
The number of time samples needed in the FFT will de-
pend on f0, ḟ0 and Tobs, but is less than 2^9 = 512
for any galactic sources we are likely to encounter when
Tobs ≤ 2yr. The bandwidth of a source can be estimated
B = 2 (4 + 2πf0R sin(θ)) fm + ḟ0Tobs . (45)
The number of samples should exceed 2BTobs. The
Fourier transform of the fast term can be done analyti-
cally:
e^{2πif0t} = (1/Tobs) Σm bm e^{2πimt/Tobs} , (46)

where

bm = Tobs sinc(xm) e^{ixm} (47)

xm = f0Tobs − m . (48)
The cardinal sine function in (46) ensures that the
Fourier components bm away from resonance, xm ≈ 0, are
quite small. It is only necessary to keep ∼ 100-1000
terms on either side of p = [f0Tobs], depending on how bright
the source is, and how far f0Tobs is from an integer. We
now have
yij(t) = ℜ[ Σj cj e^{2πijt/Tobs} ] , (49)

where

cj = (1/Tobs) Σn an bj−n . (50)
The final step is to ensure that our Fourier transform
yields a real yij(t). This is done by setting the final an-
swer for the Fourier coefficients equal to dj = (cj + c∗−j)/2.
But since xm never hits resonance for positive j (we
are not interested in the negative frequency components
j < 0), we can neglect the second term and simply write
dj = cj/2.
Basically what we are doing is heterodyning the signal
to the base frequency f0, then Fourier transforming the
slowly evolving heterodyned signal numerically. We then
convolve these Fourier coefficients with the analytically
derived Fourier coefficients of the carrier wave.
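The analytic carrier coefficients can be checked against a numerical FFT. One caveat of this sketch: with the normalized sinc convention, sinc(x) = sin(πx)/(πx), the phase factor comes out as e^{iπxm}, so the phase written in (47) depends on which sinc definition is used:

```python
import numpy as np

Tobs, N = 1.0, 4096
f0 = 12.3                       # carrier frequency in units of 1/Tobs, off-bin
t = np.arange(N) / N * Tobs
c = np.fft.fft(np.exp(2j * np.pi * f0 * t)) / N   # DFT of the carrier

# Analytic coefficients in the spirit of Eqs. (46)-(48);
# np.sinc(x) = sin(pi x)/(pi x), hence the e^{i pi x_m} phase here
m = np.arange(N)
x = f0 * Tobs - m
b = np.sinc(x) * np.exp(1j * np.pi * x)

err = np.max(np.abs(c[:50] - b[:50]))
print(err, np.abs(b[500]))
```

The coefficients fall off like 1/|xm| away from resonance, which is why only a modest number of terms around p = [f0Tobs] need to be kept.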
The Michelson type TDI variables are given by
X(t) = y12(t− 3L)− y13(t− 3L) + y21(t− 2L)
−y31(t− 2L) + y13(t− L)− y12(t− L)
+y31(t)− y21(t), (51)
Y (t) = y23(t− 3L)− y21(t− 3L) + y32(t− 2L)
−y12(t− 2L) + y21(t− L)− y23(t− L)
+y12(t)− y32(t), (52)
Z(t) = y31(t− 3L)− y32(t− 3L) + y13(t− 2L)
−y23(t− 2L) + y32(t− L)− y31(t− L)
+y23(t)− y13(t). (53)
Note that in the Fourier domain

X̃(f) = (ỹ12(f) − ỹ13(f)) e^{−3if/f∗} + (ỹ21(f) − ỹ31(f)) e^{−2if/f∗}
+ (ỹ13(f) − ỹ12(f)) e^{−if/f∗} + ỹ31(f) − ỹ21(f) . (54)
This saves us from having to interpolate in the time do-
main. We just combine phase shifted versions of our orig-
inal Fourier transforms.
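Assembling X̃(f) from phase-shifted one-arm responses can be sketched as follows; the dictionary layout for the ỹij arrays is an assumption of this sketch, and the equal-arm cancellation serves as a sanity check:

```python
import numpy as np

def tdi_X(ytil, f, fstar):
    """Frequency-domain Michelson X combination, Eq. (54).
    ytil[(i, j)] holds the Fourier-domain one-arm responses at frequencies f."""
    d = lambda n: np.exp(-1j * n * f / fstar)   # phase shift for a delay of n L
    return ((ytil[(1, 2)] - ytil[(1, 3)]) * d(3)
            + (ytil[(2, 1)] - ytil[(3, 1)]) * d(2)
            + (ytil[(1, 3)] - ytil[(1, 2)]) * d(1)
            + ytil[(3, 1)] - ytil[(2, 1)])

f = np.linspace(1e-4, 1e-2, 8)
fstar = 1.0 / (2 * np.pi * (5e9 / 3e8))   # transfer frequency for L = 5e9 m

# sanity check: identical arm responses cancel pairwise, so X vanishes
ytil = {(i, j): np.ones(f.size, dtype=complex)
        for i in (1, 2, 3) for j in (1, 2, 3) if i != j}
X = tdi_X(ytil, f, fstar)
```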
[1] Raftery, A.E., Practical Markov Chain Monte Carlo,
(Chapman and Hall, London, 1996).
[2] Trotta, R., astro-ph/0504022 (2005).
[3] Sambridge, M., Gallagher, K., Jackson, A. & Rickwood,
P., Geophys. J. Int. 167 528-542 (2006).
[4] Bender, P. et al., LISA Pre-Phase A Report, (1998).
[5] S. Timpano, L. J. Rubbo & N. J. Cornish, Phys. Rev.
D73 122001 (2006).
[6] Cornish, N.J. & Crowder, J., Phys. Rev. D 72, 043005
(2005).
[7] C. Andrieu & A. Doucet, IEEE Trans. Signal Process.
47 2667 (1999).
[8] R. Umstatter, N. Christensen, M. Hendry, R. Meyer, V.
Simha, J. Veitch, S. Viegland & G. Woan, gr-qc/0503121
(2005).
[9] J. Crowder & N. J. Cornish, Phys. Rev. D75 043008
(2007).
[10] T. A. Prince, M. Tinto, S. L. Larson & J. W. Armstrong,
Phys. Rev. D66, 122002 (2002).
[11] Gamerman, D., Markov Chain Monte Carlo: Stochas-
tic Simulation of Bayesian Inference, (Chapman & Hall,
London, 1997).
[12] Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N.,
Teller, A.H.,& Teller E., J. Chem. Phys. 21, 1087 (1953).
[13] Hastings, W.K., Biometrika 57, 97 (1970).
[14] Jeffreys, H. Theory of Probability, Third Edition. (Oxford
University Press 1961).
[15] Green, P.J., Biometrika 82 711-32 (1995).
[16] Lopes, H.F., & West M., Statistica Sinica 14, 41-67
(2004).
[17] Kim van der Linde (2004) MVE: Minimum Vol-
ume Ellipsoid estimation for robust outlier detec-
tion in multivariate space, Java version. Website:
http://www.kimvdlinde.com/professional/mve.html.
[18] Dickey, J.M., Ann. Math. Stat., 42, 204 (1971).
[19] Schwarz, G., Ann. Stats. 5, 461 (1978).
[20] K. Arnaud et al., preprint gr-qc/0701170 (2007).
[21] Seto, N., Mon. Not. Roy. Astron. Soc. 333, 469-474
(2002).
[22] D. Lindley, Biometrika 44 187 (1957).
[23] G. Nelemans, L. R. Yungelson & S. F. Portegies Zwart,
Mon. Not. Roy. Astron. Soc. 349 181, (2004).
[24] G. Nelemans, L. R. Yungelson & S. F. Portegies Zwart,
A&A 375, 890 (2001).
[25] Cornish, N.J. & Rubbo, L.J., Phys. Rev. D 67, 022001
(2003).
[26] Cornish, N.J., Rubbo, L.J. & Poujade, O., Phys. Rev. D
69, 082003 (2004).
[27] While this count is only strictly correct for point-like
masses, frequency evolution due to tides and mass trans-
fer can also be described by the same two parameters for
the majority of sources in the LISA band.
Turbulence with Mass Loading Feedback
R. C. Hogan
Bay Area Environmental Research Institute; MS 245-3 Moffett Field, CA 94035-1000
J. N. Cuzzi
NASA Ames Research Center; MS 245-3 Moffett Field, CA 94035-1000
(Dated: November 4, 2018)
A cascade model is described based on multiplier distributions determined from 3D direct numer-
ical simulations (DNS) of turbulent particle laden flows, which include two-way coupling between
the phases at global mass loadings equal to unity. The governing Eulerian equations are solved
using pseudo-spectral methods on up to 512³ computational grid points. DNS results for particle
concentration and enstrophy at Taylor microscale Reynolds numbers in the range 34 - 170 were
used to directly determine multiplier distributions on spatial scales 3 times the Kolmogorov length
scale. The multiplier probability distribution functions (PDFs) are well characterized by the β dis-
tribution function. The width of the PDFs, which is a measure of intermittency, decreases with
increasing mass loading within the local region where the multipliers are measured. The functional
form of this dependence is not sensitive to Reynolds numbers in the range considered. A partition
correlation probability is included in the cascade model to account for the observed spatial anti-
correlation between particle concentration and enstrophy. Joint probability distribution functions
of concentration and enstrophy generated using the cascade model are shown to be in excellent
agreement with those derived directly from our 3D simulations. Probabilities predicted by the cas-
cade model are presented at Reynolds numbers well beyond what is achievable by direct simulation.
These results clearly indicate that particle mass loading significantly reduces the probabilities of
high particle concentration and enstrophy relative to those resulting from unloaded runs. Particle
mass density appears to reach a limit at around 100 times the gas density. This approach has
promise for significant computational savings in certain applications.
PACS numbers: 47.61.Jd, 47.27.E-, 47.27.eb
Keywords: Turbulence, Multiphase Flows, Statistical Distributions
I. INTRODUCTION
The study of turbulent flows incorporating heavy par-
ticles in suspension (particles with finite stopping times)
is an important endeavor that has both fundamental and
practical relevance to many scientific and engineering
problems. Such flows have been investigated mainly in
numerical simulations where detailed statistical analysis
of the flow fields is possible [1, 2, 3, 4]. These simulations,
limited to relatively low Taylor microscale Reynolds num-
bers Reλ (∼ 40), demonstrated that particles whose fluid
response times are comparable to the lifetime of the
smallest turbulent eddies produce a highly nonuniform
field with intense regions of concentration. Preliminary
indications were that the feedback from such concentra-
tions of particles could locally damp turbulence - how-
ever, the role of this “mass loading” effect in determining
the statistical distributions of particle density and vari-
ous fluid scalars has not been thoroughly studied. Ex-
perimental investigations of turbulence modification by
particles have demonstrated that the degree of turbu-
lence damping increases with particle mass loading and
concentration [4].
The phenomenon known as intermittency can be de-
scribed as intense fluctuations, on small spatial and tem-
poral scales in the turbulent field, that contribute to
the exponential tails of probability distribution functions
(PDFs) of scalars such as velocity increments and gradi-
ents [5, 6, 7], dissipation [8], pressure [9, 10], enstrophy
[11, 12] and velocity circulation [13]. Intermittency in
the density field of preferentially concentrated particles
has also been observed and studied [14, 15].
Although intermittency in turbulence still lacks a com-
plete theoretical understanding, progress has been made
with phenomenological models that capture intermit-
tency in a cascade process. Richardson [16] and later Kol-
mogorov [17] suggested that such models might be used
to explain the process of eddy fragmentation initiated
by unstable large scale structures in a turbulent fluid.
Intermittency in the context of fragmentation though a
cascading process has been studied for large-scale gravi-
tating masses [18] and velocity increments in turbulence
[19]. Simple cascade models were explored by Meneveau
and Sreenivasan [20] and were reviewed by Sreenivasan
and Stolovitzky [21]. The scale similarity of random fields
was explored by Novikov [22, 23], with a focus on the
energy dissipation cascade. In Novikov’s work, the ratio
of dissipation averaged over two spheres, one embedded
within the other, served as a measure of enstrophy par-
titioning between larger and smaller scales. The prob-
ability distribution of these ratios, known as multipliers
or breakdown coefficients, was shown to relate to multi-
fractal and statistical measures (moments) of the velocity
and dissipation fields. A recent review of intermittency
in multiplicative cascades stresses that this theory is a
kinematic description and its connection with the real
dynamics remains unclear [24].
Our previous numerical study of particle concentration
in turbulent flows showed that the particle density field
is a multifractal on scales comparable to the Kolmogorov
length scale [14]. This result suggests that a deeper de-
scription of the statistical properties of the particle con-
centration field, based on multiplier PDFs, may also be
possible. Analytical efforts have suggested that dissipa-
tion and vorticity in the fluid phase should be locally
linked with particle concentration [25]. Numerical work
in this regard has demonstrated that preferential concen-
tration is statistically anticorrelated with low vorticity:
particles tend to concentrate in regions where enstrophy
is relatively weak [26, 27].
In this paper we present a cascade model in the spirit
of Novikov [22, 23] that follows the partitioning of pos-
itive definite scalars associated with both the fluid and
the particles. Multipliers controlling the partitioning of
enstrophy and particle density at each step in the cascade
are drawn from probability distribution functions (PDFs)
which are determined empirically from direct numerical
simulations (DNS). Moreover, the multiplier PDFs are
dependent on, or conditioned by, the particle mass den-
sity or mass loading. The cascade model then generates
joint PDFs for particle concentration and enstrophy at
arbitrary cascade levels. A partitioning correlation prob-
ability is also applied at each cascade level to account for
the observed spatial anticorrelation between enstrophy
and particle concentration [26, 28].
In Section II we describe the cascade model and its
parameters, which are empirically determined from DNS
calculations. Details of the DNS equations, and our nu-
merical methods, are discussed in the Appendix. Results
are shown in section III, including comparisons of joint
PDFs of enstrophy and particle concentration as pre-
dicted by the cascade model with those obtained directly
from the DNS results. Cascade model PDF predictions
at Reynolds numbers well beyond the DNS values are
also presented. In section IV, we summarize our results
and discuss their implications.
II. CASCADE MODEL
A turbulent cascade can be envisioned as an hierar-
chical breakdown of larger eddies into smaller ones that
halts when the fluid viscosity alone can dissipate eddy ki-
netic energy. Eddies or similar turbulent structures such
as vortex tubes are bundles of energy containing vorticity
and dissipation. These structures start with a size com-
parable to the integral scale Λ of the flow, and break down
in steps to a size comparable to the Kolmogorov scale
η before being dissipated away by viscosity. The fluid
vorticity and dissipation exhibit spatial fluctuations that
increase in intensity as the spatial scale decreases. This
phenomenon is known as intermittency and has been ob-
served in a variety of processes with strong nonlinear in-
teractions.
In previous numerical and experimental studies, locally
averaged intermittent dissipation fields with scale at or
near η were used to quantify the statistical properties
of multiplier distributions [21]. Multipliers are random
variables that govern the partitioning of a positive defi-
nite scalar as turbulent structures break down along the
cascade. In these studies the statistical distribution of
multipliers (their PDF) were shown to be invariant over
spatial scales that fall within the turbulent inertial range.
Multifractal properties of the cascading field are deriv-
able from such multiplier distributions [23], and cascade
models based on the iterative application of multipliers
to a cascading variable have been shown to mimic inter-
mittency.
While invariant with level in the inertial range of a
cascade, multiplier PDFs might depend on local proper-
ties of the environment. For instance, Sreenivasan and
Stolovitzky [21] showed that the degree of intermittency
in dissipation increases with the degree of local strain
rate, and constructed multiplier distributions for local
energy dissipation conditioned on the local strain rate.
The physical mechanism behind this effect is believed to
be related to vortex stretching dynamics creating intense
bursts of dissipation.
All the multiplier PDFs measured by Sreenivasan and
Stolovitzky [21], whether conditioned or unconditioned
by local properties, are well characterized by the β dis-
tribution function,
p(m) = [Γ(2β)/Γ(β)²] m^{β−1}(1 − m)^{β−1} (1)
where m is the multiplier variable and β is a shape con-
trolling parameter. A large β produces a narrow, delta-
function-like curve centered at m = 0.5, whereas β = 1
produces a flat distribution between m = 0 and 1. These
limits for β correspond to uniform and highly intermit-
tent processes respectively. In conditioned multipliers,
the value of β varies with some local property of the
fluid.
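Sampling multipliers from (1) is direct, since it is the symmetric Beta(β, β) distribution; the short sketch below illustrates how its width, 1/(2√(2β + 1)), shrinks as β grows, i.e. how large β corresponds to nearly uniform partitioning and small β to strong intermittency:

```python
import numpy as np

rng = np.random.default_rng(0)

# Multipliers m ~ Beta(beta, beta), Eq. (1): symmetric about m = 0.5
# with standard deviation 1/(2*sqrt(2*beta + 1))
for beta in (1.0, 3.0, 30.0):
    m = rng.beta(beta, beta, 100_000)
    print(beta, round(m.mean(), 3), round(m.std(), 3))
# beta = 1  gives a flat PDF (std ~ 0.289): highly intermittent cascade
# beta = 30 gives a narrow spike at 0.5 (std ~ 0.064): nearly even splits
```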
Concentration of particles in turbulence is a result of
the active dynamics of eddies on all scales. The process
depends on the scale of the eddies and the corresponding
particle response to those eddies. Intense particle den-
sity fluctuations, akin to intermittency, were observed
in a previous numerical study where it was also shown
that nonuniform particle concentrations have multifrac-
tal scaling properties [14]. These results strongly suggest
that a phenomenological cascade model based on multi-
pliers may adequately describe the particle density field.
Simulations that have included particle feedback on the
fluid through the mass loading effect show that damp-
ing of local turbulence occurs [2, 29]. The latter have
shown that vorticity dynamics is affected locally by par-
ticle feedback. This interplay between the phases could
attenuate vortex stretching and, thereby, diminish local
turbulent intermittency. Multiplier distributions condi-
tioned on local mass loading should therefore be an inte-
gral part of a realistic fluid-particle cascade model.
A. Two-Phase Cascade model
Below we describe a two-phase cascade model that in-
corporates simultaneous multiplier processes for parti-
cle concentration C and fluid enstrophy S, in addition
to a process that models their spatial anticorrelation.
The multiplier distributions are conditioned by the local
particle concentration, as determined empirically from
DNS fields equilibrated to Reλ = 34, 60, 107, and 170.
The spatial anticorrelation was also quantified from these
fields. Local measures of particle concentration (C) and
enstrophy (S) used are defined in the Appendix.
A schematic illustration of our two-phase partition-
ing process is shown in FIG. 1. The cascading vector
(S,C) has components representing enstrophy and parti-
cle concentration. Initially the components are assigned
the value unity and are associated with a common cell
having a volume of unity. Each component is partitioned
into two parts; (mSS, (1−mS)S) and (mCC, (1−mC)C),
respectively, where mS ,mC are multipliers for S and C
whose values are between zero and one inclusive and are
random members of the corresponding multiplier distri-
butions. The parts are associated with two daughter cells
each containing half the volume of the starting cell. In
the example shown in FIG. 1, mS and mC are assumed
to be greater than 0.5. The largest parts of S and C are
placed in the same daughter cell with probability Γ (and
in different cells with probability 1− Γ). This partition-
ing process is repeated for each daughter cell down the
cascade until the ratio of the daughter cell size to the
initial cell size equals a specified cutoff. When this cutoff
is set to the ratio of the turbulent lengthscales Λ and η,
the cascade corresponds to turbulence characterized by
Reλ ∼ (Λ/η)^{2/3} [30].
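The partitioning process described above can be sketched as a short simulation. The fixed β values and the flip-based anticorrelation scheme below are illustrative simplifications (the actual model conditions β on the local mass loading, as described in the next subsection):

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.3   # probability that the larger S and C parts share a daughter cell

def cascade(levels, beta_S=9.0, beta_C=2.7):
    """Toy two-phase cascade with fixed (unconditioned) beta parameters.
    Returns enstrophy S and concentration C in the 2**levels cells,
    normalized so the volume mean of each field is 1."""
    S = np.array([1.0])
    C = np.array([1.0])
    for _ in range(levels):
        mS = rng.beta(beta_S, beta_S, S.size)
        mC = rng.beta(beta_C, beta_C, C.size)
        # put the larger parts together with probability GAMMA, apart otherwise;
        # flipping mC -> 1 - mC preserves its symmetric distribution
        together = rng.random(S.size) < GAMMA
        match = (mS > 0.5) == (mC > 0.5)
        mC = np.where(together == match, mC, 1.0 - mC)
        S = np.concatenate([mS * S, (1.0 - mS) * S])
        C = np.concatenate([mC * C, (1.0 - mC) * C])
    return S * S.size, C * C.size

S, C = cascade(16)   # 2^16 cells; each step halves the cell volume
```

Intermittency shows up as heavy tails (C spans orders of magnitude while its mean stays 1), and with Γ < 0.5 the two fields come out spatially anticorrelated. Since each step halves the cell volume, descending from Λ to η takes roughly 3 log2(Λ/η) steps.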
B. Conditioned Multipliers
The parameters of the cascade model are empirically
derived from the particle density and enstrophy fields C
and S as calculated by DNS (see Appendix). The simu-
lation parameters for four DNS runs representing Reλ =
36, 60, 104, and 170 are shown in Table I. The turbu-
lence kinetic energy q, the volume averaged dissipation
ǫ, and Λ are calculated from the 3-D turbulent energy
spectrum E(k) and kinematic viscosity ν,
q = ∫ E(k) dk (2)
FIG. 1: Figure depicting the breakdown of a parcel of en-
strophy (S) and particle concentration (C) into two parcels
each with half the volume of the parent. The corresponding
multipliers mS and mC are assumed to be greater than 0.5 in
this figure. These measures are broken down and distributed
between the two parcels in one of two ways - the larger por-
tions are partitioned together with probability Γ= 0.3 (upper
figure), or in opposite directions with probability 1− Γ= 0.7
(lower figure).
ǫ = 2ν ∫ E(k) k² dk (3)

Λ = (3π/4q) ∫ (E(k)/k) dk (4)
where k is the wavenumber; kmax, a fixed fraction of
the number of computational nodes per side, is the
maximum effective wavenumber. Thus kmaxη > 1 indicates an adequate
resolution of the Kolmogorov scale.
Parameter Case I Case II Case III Case IV
Nodes/side 64 128 256 512
ν .01 .003 .0007 .0002
Reλ 34 60. 104 170
q 1.5 .65 .28 .14
23. 22.8 22.4 23
kmaxη 1.4 1.5 1.45 1.56
14.1 23.3 45.8 86.2
Γ .31 .29 .27 .32
D .0001 .00003 .000007 .000002
νp .001 .0003 .00007 .00002
TABLE I: Case Parameters for DNS runs. The quantities D
and νp are defined in the Appendix. Other quantities above
are defined in Section II.
The 3-D DNS computational box is uniformly subdi-
vided into spatial cells 3η on a side, and the average value
of C and S is determined for each cell (see Appendix).
The cells are divided into groups associated with disjoint
ranges of C. Each cell is then divided into two parts of
equal volume and averages for C and S are determined
for each part. The C and S multipliers for each cell are
evaluated as the ratio of these averages to the averages
in the parent cell. A conditional multiplier distribution
p(m) is then determined for each binned value of C from
the corresponding set of cell multipliers. Plots of p(m) for
three values of C are shown in FIG. 2. The points repre-
sent distributions derived from all DNS runs and the solid
lines are least squares fits to the β distribution function
(Eq. 1). For the lower values of C, Reλ-independence
is apparent; only the Reλ = 170 case provided data for
the largest C range. The plots clearly indicate that the
intermittency in C is reduced (multiplier PDFs narrow)
as C is increased. Derived values of βC(C) and βS(C)
are shown as a function of C in FIG. 3. Least squares
fits to the functional form p1 exp(p2 C^{p3}) are drawn as
solid lines and the best fit parameter values for this func-
tion are tabulated in Table II. Bounding curves (dashed
lines) are defined by setting p2 and p3 to their 2σ lim-
its, to establish a plausible range of uncertainty in the
predictions.
Scalar p1 p2 p3
C 2.7 .045 1.02
S 9. .03 1.06
TABLE II: β model parameters
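With the Table II parameters, the conditioned shape parameters can be evaluated directly; the sketch below simply tabulates β(C) = p1 exp(p2 C^{p3}) for both scalars:

```python
import math

# Least-squares fit parameters from Table II: beta(C) = p1 * exp(p2 * C**p3)
P = {"C": (2.7, 0.045, 1.02), "S": (9.0, 0.03, 1.06)}

def beta_of_C(scalar, C):
    p1, p2, p3 = P[scalar]
    return p1 * math.exp(p2 * C ** p3)

for C in (1, 20, 50):
    print(C, beta_of_C("C", C), beta_of_C("S", C))
# beta grows with the mass loading C, so the multiplier PDFs narrow:
# mass loading damps the intermittency of both fields
```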
It is certainly of interest that such large solid/gas mass
loadings as C = 100 appear in the DNS runs at all, given
published reports that particle mass loading significantly
dampens turbulent intensity even for mass loadings on
the order of unity [1, 4]. These diverse results might be
reconciled since the particles we study herein are all far
smaller than the Kolmogorov scale and also have only a
very small lag velocity relative to the gas. Recall that
we force the turbulence, as might be the case if it were
FIG. 2: Empirically determined conditional multiplier distri-
butions p(m|C) for particle concentration at three different
mass loading values, C = 1, 20 and 50. The distributions are
obtained from bifurcations of cells with a spatial scale equal
to 3η. Results at Reλ = 34 ( square ), 60 (triangle), 107 (cir-
cle) and 170 ( cross ) are overlain. Only the simulation with
Reλ = 170 provided results for C = 50. At each mass loading
the p(m) at all Reynolds numbers are very well approximated
with the β distribution function ( solid line ). The distribu-
tion widths narrow as the mass loading increases, indicating
a decrease in the intermittency.
being constantly forced by energetic sources operating on
larger scales than our computational volume. However,
FIG. 3 strongly suggests an upper limit for C ( ∼ 100 )
for both βS and βC .
The cascade anticorrelation parameter Γ was deter-
mined by counting the number of parent cells within
which the larger partitions of C and S were found to
share the same daughter cell. This number divided by
the total number of parent cells defines Γ. The derived
Γ value is approximately constant across the DNS cases,
as indicated in Table I. Operationally, the Γ used in
the cascade model was determined by taking a simple
average of the Γ values in Table I.
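The counting procedure for Γ translates directly into code. This sketch assumes we already have, for each parent cell, the fraction of C and of S carried by (say) the left daughter; Γ is then the fraction of parents in which the larger C partition and the larger S partition land in the same daughter. The synthetic inputs below are illustrative, not DNS data.

```python
import numpy as np

def gamma_statistic(m_c, m_s):
    """Fraction of parent cells in which the larger C partition and the
    larger S partition fall in the same daughter cell."""
    same = (m_c > 0.5) == (m_s > 0.5)
    return float(np.mean(same))

rng = np.random.default_rng(1)
m_c = rng.uniform(size=10000)
m_s_anti = 1.0 - m_c                 # perfectly anticorrelated daughters
m_s_ind = rng.uniform(size=10000)    # independent daughters

gamma_anti = gamma_statistic(m_c, m_s_anti)  # near 0 for anticorrelation
gamma_ind = gamma_statistic(m_c, m_s_ind)    # near 0.5 for independence
```

A Γ well below 0.5 thus quantifies the spatial anticorrelation between enstrophy and particle concentration at each bifurcation.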
Overall, the invariance of Γ and the βC(C) and βS(C)
functions across our range of Reλ justifies their treat-
ment as level independent parameters in the two-phase
cascade model. One caveat remains, which would be of
interest to address in future work. While it has been
shown that multiplier distributions leading to βC and βS
are level-invariant over a range of scales within an inertial
range [21], our simulations were numerically restricted
to values of Re in which the inertial range has not yet
become fully developed. Our reliance on the smallest
available scales of 3η to 1.5η (those providing the largest
available intermittency) might lead to some concern that
FIG. 3: The β parameters as functions of local mass loading
C for enstrophy and particle concentration at 3η. Results for
all DNS cases are indicated as described in FIG. 2. A least
squares fit of an exponential function to the points over the
entire mass loading range is shown ( solid line ). Dashed lines
correspond to the upper and lower limits of the function, and
are derived using the 2σ errors of p2 and p3.
they were already sampling the dissipation range of our
calculations, and thus may not be appropriate for a cas-
cade code. We tested this possibility by calculating mul-
tipliers for the next largest level bifurcation (6η to 3η)
for the Reλ = 170 case. The β values for those multi-
plier distributions are slightly larger in value, but consis-
tent with the C-dependence shown in FIG. 2 (6η scales
don’t provide good distribution functions beyond C ∼
15). Thus we believe that for the purpose of demonstrat-
ing this technique, and for the purpose of estimating the
occurrence statistics of C under particle mass loading,
our results are satisfactory. For applications requiring
quantitatively detailed and/or more accurate P (S,C), it
would certainly be of interest to extend the DNS calcu-
lations to larger Re, at which a true inertial range might
be found.
III. MODEL RESULTS
The 2D joint probability distribution function or PDF
of concentration and enstrophy, a fractional volume mea-
sure, was generated from the cascade model and com-
pared with results derived directly from numerical DNS
simulations. The basic probability density P (S,C) gives
the fractional volume occupied by cells having enstrophy
S and concentration C, per unit S and C; thus the frac-
tional volume having C and S in some range ∆S,∆C
is P (S,C)∆S∆C. For quantities varying over orders
of magnitude, it is convenient to adopt ∆S = S and
∆C = C, and we will present the results in the form
P (S,C)SC.
We started by binning results at spatial scale 3η, ob-
tained from the semi-final level of a cascade model run,
into a uniform logarithmic grid of S,C bins each having
width ∆(logS) = ∆(logC) = δ, with corresponding val-
ues of ∆S and ∆C. The number of 3η cells accumulated
in each bin was normalized by the total number of such
cells in the sample to convert it to a fractional volume
∆V (S,C) = P (S,C)∆S∆C. Then
∆V (S,C) / (∆(logS) ∆(logC)) = P (S,C) ∆S ∆C / (∆(logS) ∆(logC)) → P (S,C)SC as δ → 0.
In practice of course, the binning ranges δ are not van-
ishingly small.
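The log-binned accumulation just described can be sketched as follows. The samples stand in for per-cell values of S and C at scale 3η; the lognormal draws are illustrative only, and constant factors of ln(10) from the change of variables are absorbed into the normalization.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in samples for cell values of S and C (lognormal, for illustration).
S = np.exp(rng.normal(size=100000))
C = np.exp(rng.normal(size=100000))

delta = 0.1  # bin width in log10(S) and log10(C)
s_edges = np.linspace(-3.0, 3.0, 61)
c_edges = np.linspace(-3.0, 3.0, 61)

counts, _, _ = np.histogram2d(np.log10(S), np.log10(C), bins=[s_edges, c_edges])
frac = counts / counts.sum()      # fractional volume Delta V(S, C) per bin
# With logarithmic bins, Delta V / (Delta logS * Delta logC) approximates
# P(S, C) * S * C up to constant ln(10)**2 factors.
pdf_sc = frac / (delta * delta)
```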
The plots in FIGs. 4, 5, and 7 then show the PDF
as the volume fraction P (S,C)SC. Cascade levels 9, 12,
15, and 18 correspond approximately to the Reλ of the
four simulation cases shown in Table I. These levels
were determined from the ratio of Λ and η for each case:
level = 3 log2(Λ/η). The factor 3 accounts for cascade
bifurcations of 3D cells, because it takes three partition-
ings, along three orthogonal planes, to generate eight
subvolumes of linear dimension one-half that of the par-
ent volume. That is, 2^level is equal to the number of η cells within a 3D volume having linear dimension Λ, and (2^(level/3))^(2/3) is the corresponding Reλ. The number
of cascade realizations is, in turn, equal to the product
of the number of Λ-size volumes in the computational
box and the number of simulation snapshots processed.
In general it is difficult to generate DNS results with a
ratio of Λ and η that is an exact power of two. In or-
der to correctly compare DNS simulations with the cas-
cade model it was necessary to interpolate between two
cascade generated P (S,C)SC computed at scale ratios
(levels) that bracketed the ratios that were actually sim-
ulated. In FIG. 4 we compare iso-probability contours of
P (S,C)SC predicted by cascade models representing the
four DNS cases with the same contours derived directly
from the simulated S and C fields. The agreement is very
good.
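The level bookkeeping and the bracketing interpolation described above can be made concrete. This sketch assumes only the relation stated in the text, 2^level = (Λ/η)^3, i.e. level = 3 log2(Λ/η); the linear interpolation weight is one simple choice for combining the two bracketing cascade levels.

```python
import math

def cascade_level(scale_ratio):
    """Cascade level for a given Lambda/eta: level = 3*log2(Lambda/eta),
    since three bifurcations (one per orthogonal plane) halve the linear
    dimension of a 3D cell."""
    return 3.0 * math.log2(scale_ratio)

def bracketing_levels(scale_ratio):
    """DNS scale ratios are rarely exact powers of two; return the integer
    cascade levels that bracket the target, plus the weight of the upper
    level in a linear interpolation."""
    x = cascade_level(scale_ratio)
    lo, hi = math.floor(x), math.ceil(x)
    return lo, hi, x - lo
```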
A. Predictions at higher Reynolds number
The cascade model was used to generate PDFs at
deeper levels in order to assess the effect of mass loading
on the probabilities of high C and S. We generated 256
realizations of a level 24 cascade, 20 realizations of a level
30 cascade, and one realization of a level 36 cascade.
FIG. 5(a) shows the average of 256 realizations of a 24
level cascade, taken to lower probability values. The pro-
nounced crowding of the contours at the top of the figure
indicates the effect of particle mass loading on reducing
the intermittency of C at high values of C. For compar-
FIG. 4: Comparisons of cascade model predictions of
P (S,C)SC with DNS results at Reλ = 34 (a), 60 (b), 107
(c) , and 170 (d). Contours indicate probabilities .001, .01, .1
and .3. Dashed contours are cascade model predictions and
solid ones are DNS results.
ison, FIG. 5(b) shows a control run of a 24 level cascade
with all conditioning turned off. In this control case, the
exponential tails characterizing intermittent fluctuations
are seen at both low and high C.
In order to evaluate the effect of the uncertainties in the
extrapolations of the β curves for C and S on the PDF,
two cascade runs to level 24 were generated using the pa-
rameters for the upper and lower dotted curves in FIG. 3.
In FIG. 6 we show cross-sections of the PDFs produced
by these runs along the C axis through the distribution
modes to compare with the same cross-section for a run
using the nominal parameters in Table II. Both models
diverge from the mean model beyond C > 40, with the
upper (lower) curve corresponding to the outside (inside)
βC(C) and βS(C) bounds in FIG. 3. Figure 6 indicates
that the sensitivity of the PDF to the β model parame-
ters at the 2σ level is only apparent at large C, and all
models show a sharp dropoff in the probability for C > ∼ 100.
A crowding effect similar to the one seen in FIG. 5(a)
is shown in FIG. 7 for iso-probability contours equal to
5 × 10^−4, for cascade levels 6, 12, 18, 24, 30 and 36.
Figures 8(a) and 8(b) compare 1D cuts through the
modes of the PDFs for cascades of 18 - 36 levels, indi-
cating that going to deeper levels (higher Reλ) results in
larger intermittency at the low-C end (as expected), re-
taining the exponential tail characteristic of intermittent
processes, but the highest particle concentration end of
the distribution is extended more slowly. Certainly at
the order of magnitude level, a particle mass loading ra-
tio of 100 times the gas density appears to be as high as
preferential concentration can produce. This result could
be inferred directly from inspection of the conditioned β
distributions of FIG. 3.
FIG. 5: (a) Cascade model predictions for a 24 level case,
taken to lower probability levels, using 256 realizations of the
cascade. Contours are labeled by log(P (S,C)SC). Note the
crowding of contours at high C values, indicating the high-C
limit of the process under conditions of mass loading.(b) A
control cascade to level 24, as in FIG. 5(a), with conditioning
turned off. The difference between (a) and (b) clearly shows
the “choking” effects of particle mass loading on intermittency
in C.
IV. SUMMARY
A two-phase cascade model for enstrophy and parti-
cle concentration in 3-D, isotropic, fully developed tur-
bulence with particle loading feedback has been devel-
oped and tested. Multiplier distributions for enstrophy
and particle concentration were empirically determined
from direct numerical simulation fields at Taylor scale
Reynolds numbers between 34 and 170. These simula-
tions included ‘two-way’ coupling between the phases at
global particle/gas mass loadings equal to unity. The
shape of all multiplier distributions is well characterized
by the β distribution function, with a value of β that
depends systematically on the local degree of mass load-
ing. The values of β increase monotonically with mass
loading and begin to rapidly increase at mass loadings
FIG. 6: 1D cuts through the mode of the PDF of FIG. 5(a)
parallel to the C axis, showing the effects of uncertainty in
the conditioning curve βC(C). The solid curve is the nominal
model and the dashed curves are obtained by allowing the
parameters p2 and p3 to take their 2σ extreme values.
FIG. 7: Cascade model predictions for P (S,C)SC = 5 × 10^−4
for levels 6, 12, 18, 24, 30, and 36. Contour labels indicate
the cascade levels.
greater than 100.
The C-dependent multiplier distributions were used as
input to a cascade model that simulates the breakdown,
or cascade, of enstrophy S and particle concentration C
from large to small spatial scales. The spatial anticorre-
lation between enstrophy and particle concentration was
empirically determined from 3D DNS models and shown
to be constant with Reλ. This constant was used as
a correlation probability governing the relative spatial
distribution of S and C at each bifurcation step in the
cascade model.
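A single bifurcation step of this two-phase cascade can be sketched as follows. The sketch assumes the β model p(m) = Beta(β, β) with the mass-loading-dependent fits β(C) = p1 exp(p2 C^p3) of Table II, and it uses gamma = 0.3 purely as a stand-in for the DNS-measured Γ of Table I (whose value is not reproduced here). With equal-volume daughters, a fraction m of the parent's total in half the volume gives a daughter average of 2m times the parent's.

```python
import numpy as np

rng = np.random.default_rng(3)

def beta_param(c, p1, p2, p3):
    """Mass-loading-dependent beta parameter, beta(C) = p1*exp(p2*C**p3),
    with the Table II fits for C (2.7, .045, 1.02) and S (9., .03, 1.06)."""
    return p1 * np.exp(p2 * c ** p3)

def bifurcate(c_parent, s_parent, gamma=0.3):
    """One cascade bifurcation of a parent cell into two equal-volume
    daughters. gamma is the probability that the larger C and larger S
    fractions land in the same daughter (stand-in value)."""
    b_c = beta_param(c_parent, 2.7, 0.045, 1.02)
    b_s = beta_param(s_parent, 9.0, 0.03, 1.06)
    m_c = rng.beta(b_c, b_c)
    m_s = rng.beta(b_s, b_s)
    # Enforce the (anti)correlation: with probability 1 - gamma the larger
    # fractions go to opposite daughters.
    same = rng.random() < gamma
    if ((m_c > 0.5) == (m_s > 0.5)) != same:
        m_s = 1.0 - m_s
    daughters_c = (2 * m_c * c_parent, 2 * (1 - m_c) * c_parent)
    daughters_s = (2 * m_s * s_parent, 2 * (1 - m_s) * s_parent)
    return daughters_c, daughters_s

dc, ds = bifurcate(1.0, 1.0)
```

Iterating this step to the desired level and histogramming the leaf cells reproduces the P(S, C) construction described in Section III.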
The cascade model we have developed clearly repro-
duces the statistical distributions and spatial correlations
observed in our DNS calculations. The cascade parame-
ter values we have derived appear to be universal within
FIG. 8: (a) 1D global cuts through the cascade model PDFs
P (S,C)SC for runs with 18, 24, 30, and 36 levels. (b) closeup
of 1-D cuts through high-C regime.
the range of Reλ of our simulations. We thus specu-
late that they can be used to predict approximate joint
probabilities of enstrophy and particle concentration at
higher Reynolds numbers, at great savings in computer
time. For example, a typical DNS run to Reλ = 170
takes about 170 cpu hours on an Origins 3000 machine,
while a cascade model to an equivalent level takes 0.1 cpu
hours.
We have presented joint probabilites of S and C de-
rived from cascade runs up to level 36. The contours
shown in FIG. 5(a) and FIG. 6 clearly show the effects
of particle mass loading on the probability distribution
functions of C in the regimes where C is large. It appears
that particle mass loadings greater than 100 are rare in
turbulent flows.
The properties of the cascade rest on the physics of our
DNS simulations, and we speculate that two separate ef-
fects are involved. First, particle mass loading dampens
fluid motions of all types, decreasing vorticity stretching
and all other forms of ongoing eddy bifurcation which are
needed to produce intermittency. Second, as a byproduct
of this, particle mass loading may alter the Kolmogorov
timescale locally and shift the most effectively concen-
trated particle Stokes number St to a larger value than
that characterizing particles already lying in the local
volume, reducing the probability of preferentially con-
centrating the local particles any further.
Caveats and Future Work:
As described in section II, our multiplier distributions
were taken from the most numerous cells, with the largest
intermittency, which are at the smallest scales possible
(furthest from the forcing scale). At Reynolds numbers
accessible to DNS, a true inertial range is only beginning
to appear, and while, sampling at the smallest spatial
scales possible, we are as closely approaching the asymp-
totic values within the true inertial range as possible,
where level-independence has been demonstrated in the
past [21], it is possible that our values are subject to
inaccuracy by virtue of being sampled too close to the
dissipation scale. Any such inaccuracy will affect our
cascade results quantitatively but not qualitatively. As
computer power increases, it would be a sensible thing
to continue experiments like these at higher Reλ.
A more general model that treats enstrophy and strain
as independent cascading scalars might allow for a higher-
fidelity particle concentration cascade, since C is known
to be linked to the difference between these two scalars
[25] (the so-called second invariant tensor II). However,
such an effort would introduce further complexity of its
own, as II is no longer positive definite. We consider the
development of such a model a suitable task for future
work.
APPENDIX
We used an Eulerian scheme developed by Dr. Alan
Wray to solve the coupled set of fluid/particle equations
used in this study. This was done to maximize the
computational efficiency of the calculations and, more
importantly, to accurately evaluate multipliers over the
wide range of particle concentrations and enstrophies ex-
pected. In this study the effects of particle collisions and
external forces on the particles (e.g., gravity) are not con-
sidered. The turbulence is spectrally forced at k =
such that moments of the Fourier coefficients of the force
satisfy isotropy up to the fourth order. The instanta-
neous Navier-Stokes equations describing the conserva-
tion of mass and momentum for an incompressible fluid are

∇ · U = 0    (A.1)

∂U/∂t + (U · ∇)U = −(1/ρf)∇P + ν∇²U − α(ρp/ρf)(U − V)    (A.2)
where U is fluid velocity, V is particle velocity, ρf and
ρp are the fluid and particle mass densities, ν is fluid
viscosity, P is pressure, and α is the inverse of the particle
gas drag stopping time τp.
The compressible equations for the particles are
∂ρp/∂t + ∇ · (ρpV) = D∇²ρp    (A.3)

∂(ρpV)/∂t + ∇ · (ρpVV) = νp∇²(ρpV) + αρp(U − V)    (A.4)
where νp is a “particle viscosity”, and D is a “particle
diffusivity”. The particle diffusivity and viscosity terms
numerically smooth out particle mass and momentum,
alleviating the formation of steep gradients of ρp that
can lead to numerical instabilities eg. [31].
The right hand sides of Eqs. A.2 and A.4 contain phase
coupling terms which are linearly dependent on (U−V).
The linear form of the coupling follows from the assump-
tions that the particle size is much less than η, and that
the material density of the particles is much greater than
ρf [2]. Additional contributions to the particle-gas cou-
plings involving pressure, viscous and Basset forces [29]
have not been added since they are expected to be weak
in our size regime of interest. The particle field is intro-
duced with a constant mass density and an initial veloc-
ity given by the local gas velocity in a field of statisti-
cally stationary turbulence. All runs are continued until
the particle statistics (RMS of concentration distribution)
have equilibrated.
The particle Stokes number St is defined relative to
the Kolmogorov time scale τη as St = τp/τη, and Φ =
Mp/Mf is the global mass loading, whereMp and Mf are
the total mass of particles and fluid respectively. In this
study ρf , St, and Φ are set to unity, D/ν = 0.01, and
νp/ν = 0.1. Explicitly setting St = 1 guarantees that
the particles are preferentially concentrated. When Φ is
unity, ρp is a surrogate for the local mass loading or local
concentration factor C. The values of νp and D minimize
the diluting effects of numerical particle diffusion while
preventing numerical blowups; their values were deter-
mined from a set of DNS runs in which their values were
decreased systematically until numerical instabilities set in.
Eqs. A.1 - A.4 are solved using pseudo-spectral methods commonly used to solve Navier-Stokes equations for
a turbulent fluid. The Fast Fourier Transform (FFT) al-
gorithm is used to efficiently evaluate the dynamical vari-
ables U, V and ρp on a 3D uniform grid of computational
nodes with periodic boundary conditions. The computa-
tional algorithm is parallelized using MPI and is written
in Fortran 90. All runs for this study were executed on
SGI Origins supercomputers with up to 1024 processors.
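The solver itself is not reproduced here, but the core pseudo-spectral idea — evaluating spatial derivatives by multiplication in Fourier space on a periodic grid — can be illustrated in one dimension. This is a generic sketch of the technique, not the code used in the paper.

```python
import numpy as np

# Pseudo-spectral Laplacian on a periodic grid (1D for brevity).
n = 64
L = 2 * np.pi
x = np.arange(n) * (L / n)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi  # angular wavenumbers

u = np.sin(x)
u_hat = np.fft.fft(u)
# Differentiation is multiplication by (ik); the Laplacian by -k**2.
lap_u = np.real(np.fft.ifft(-(k ** 2) * u_hat))  # spectrally exact d2u/dx2
```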
Enstrophy is defined as
S = (1/2) Σi,j (∂iUj − ∂jUi)²    (A.5)
where i, j are summed over the three coordinate dimen-
sions of U.
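A pointwise evaluation of this quantity from a gridded velocity field can be sketched as below; the 1/2 prefactor follows our reading of Eq. A.5 (conventions for enstrophy differ by constant factors), and finite differences stand in for the spectral derivatives of the actual solver.

```python
import numpy as np

def enstrophy(U, dx=1.0):
    """Pointwise enstrophy from a velocity field U of shape (3, nx, ny, nz),
    using S = (1/2) * sum_ij (d_i U_j - d_j U_i)**2. Finite differences via
    np.gradient approximate the derivatives."""
    grad = [[np.gradient(U[j], dx, axis=i) for j in range(3)] for i in range(3)]
    s = np.zeros_like(U[0])
    for i in range(3):
        for j in range(3):
            s += (grad[i][j] - grad[j][i]) ** 2
    return 0.5 * s

# Uniform shear U = (0, x, 0): vorticity (0, 0, 1) everywhere, so the
# formula above gives S = 1 at every node (gradients of a linear field
# are exact under np.gradient).
n = 8
x = np.arange(n, dtype=float)
U = np.zeros((3, n, n, n))
U[1] = x[:, None, None]  # U_y = x
S = enstrophy(U)
```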
The local spatial average of a scalar over a sample vol-
ume is estimated as,
F̄ = (1 / (n dv)) Σi Fi dv    (A.6)
where Fi is the scalar’s value on computational node i
centered within a cube of volume dv and the sum is over
all n nodes covering the sample volume. We normalized
this average by the global average value to get a quan-
tity that measures the scalar’s local value relative to its
mean. In this paper C and S will denote normalized spa-
tial averages of particle concentration and enstrophy over
cubes 3η on a side.
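The block averaging and mean normalization just described amount to the following sketch (a cubic grid and a block size that divides the grid are assumed for simplicity).

```python
import numpy as np

def normalized_block_average(field, b):
    """Average a 3D scalar field over non-overlapping cubes of b**3 nodes
    (Eq. A.6) and normalize by the global mean, so the result measures
    the scalar's local value relative to its mean."""
    n = field.shape[0]
    m = n - n % b  # trim so the block size divides the grid
    blocks = field[:m, :m, :m].reshape(m // b, b, m // b, b, m // b, b)
    local = blocks.mean(axis=(1, 3, 5))
    return local / field.mean()

# A constant field has normalized local averages of exactly 1 everywhere.
field = np.ones((12, 12, 12))
C = normalized_block_average(field, 3)
```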
ACKNOWLEDGMENTS
We are very grateful to Dr. Alan Wray for providing
the 3-D code and for useful comments on its use. We
thank Robert Last for parallelizing the cascade code on
the SGI Origins 3000. We also would like to thank the
consultants and support staff at the NAS facility for pro-
viding invaluable assistance, and the Science Mission Di-
rectorate of NASA for generous grants of computer time.
We thank Prof. K. Sreenivasan for several helpful con-
versations in the preliminary stages of this project and
the internal reviewers Drs. Alan Wray and Denis Richard
for their suggestions for improving the manuscript. This
research has been made possible by a grant from NASA’s
Planetary Geology and Geophysics program.
[1] K. D. Squires and J. K. Eaton, Phys. Fluids A 2, 1191
(1990).
[2] K. D. Squires and J. K. Eaton, Tech. Rep. MD-55, Stan-
ford University (1990).
[3] K. D. Squires and J. K. Eaton, Phys. Fluids. A 3, 1159
(1990).
[4] J. D. Kulick, J. R. Fessler, and J. K. Eaton, J. Fluid
Mech. 227, 109 (1994).
[5] B. Castaing, Y. Gagne, and E. J. Hopfinger, Physica D
46, 177 (1990).
[6] S. P. G. Dinavahi, K. S. Breuer, and L. Sirovich, Phys.
Fluids 7, 1122 (1995).
[7] P. Kailasnath, K. R. Sreenivasan, and G. Stolovitzky,
Phys. Rev. Lett. 68, 2766 (1992).
[8] A. Vincent and M. Meneguzzi, J. Fluid Mech. 225, 1
(1991).
[9] A. Pumir, Phys. Fluids 6, 2071 (1994).
[10] E. Lamballais, M. Lesieur, and O. Métais, Phys. Rev. E
56, 6761 (1997).
[11] J. Jiménez, A. A. Wray, P. G. Saffman, and R. S. Rogallo,
J. Fluid Mech. 255, 65 (1993).
[12] G. He, S. Chen, R. H. Kraichnan, R. Zhang, and Y. Zhou,
Phys. Rev. Lett. 81, 4636 (1998).
[13] N. Cao, S. Chen, and K. R. Sreenivasan, Phys. Rev. Lett.
76, 616 (1996).
[14] R. C. Hogan, J. N. Cuzzi, and A. R. Dobrovolskis, Phys.
Rev. E 60, 1674 (1999).
[15] E. Balkovsky, G. Falkovich, and A. Fouxon, Phys. Rev.
Lett. 86, 2790 (2001).
[16] L. F. Richardson, Weather Prediction by Numerical Pro-
cess. (Cambridge University Press, Cambridge U.K.,
1922).
[17] A. N. Kolmogorov, J. Fluid Mech. 13, 82 (1962).
[18] T. Chiueh, Chin. J. Phys. 32, 319 (1994).
[19] M. Gorokhovski, Tech. Rep., Center for Turbulence Re-
search, Annual Research Briefs (2003).
[20] C. Meneveau and K. R. Sreenivasan, Phys. Rev. Lett.
59, 1424 (1987).
[21] K. R. Sreenivasan and G. Stolovitzky, J. Fluid Mech.
379, 105 (1995).
[22] E. A. Novikov, Phys. Fluids A 2, 814 (1990).
[23] E. A. Novikov, Phys. Rev. E 50, R3303 (1994).
[24] J. Jiménez, J. Fluid Mech. 409, 99 (2000).
[25] M. R. Maxey, Phys. Fluids 30, 1915 (1987).
[26] K. D. Squires and J. K. Eaton, J. Fluid Mech. 226, 1
(1991).
[27] A. M. Ahmed and S. Elghobashi, Phys. Fluids 13, 3346
(2001).
[28] J. K. Eaton and J. R. Fessler, Int. J. Multiphase Flow
20, Suppl., 169 (1994).
[29] S. Elghobashi and G. C. Truesdell, Phys. Fluids A 5,
1790 (1993).
[30] U. Frisch, Turbulence (Cambridge University Press,
Cambridge, U.K., 1995), chap. 8.
[31] A. Johansen, A. C. Anderson, and A. Brandenburg, As-
tron. Astrophys. (2004).
0704.1811 | Unifying Evolutionary and Network Dynamics
Unifying Evolutionary and Network Dynamics
Samarth Swarup∗
Department of Computer Science,
University of Illinois at Urbana-Champaign
Les Gasser
Graduate School of Library and Information Science, and
Department of Computer Science,
University of Illinois at Urbana-Champaign.
(Dated: November 17, 2018)
Many important real-world networks manifest “small-world” properties such as scale-free degree
distributions, small diameters, and clustering. The most common model of growth for these networks
is “preferential attachment”, where nodes acquire new links with probability proportional to the
number of links they already have. We show that preferential attachment is a special case of the
process of molecular evolution. We present a new single-parameter model of network growth that
unifies varieties of preferential attachment with the quasispecies equation (which models molecular
evolution), and also with the Erdõs-Rényi random graph model. We suggest some properties of
evolutionary models that might be applied to the study of networks. We also derive the form of the
degree distribution resulting from our algorithm, and we show through simulations that the process
also models aspects of network growth. The unification allows mathematical machinery developed
for evolutionary dynamics to be applied in the study of network dynamics, and vice versa.
PACS numbers: 89.75.Hc, 89.75.Da, 87.23.Kg
Keywords: Evolutionary dynamics, Small-world networks, Scale-free networks, Preferential attachment,
Quasi-species, Urn models.
I. INTRODUCTION
The study of networks has become a very active area
of research since the discovery of “small-world” networks
[1, 2]. Small-world networks are characterized by scale-
free degree distributions, small diameters, and high clus-
tering coefficients. Many real networks, such as neuronal
networks [2], power grids [3], the world wide web [4] and
human language [5], have been shown to be small-world.
Small-worldness has important consequences. For exam-
ple, such networks are found to be resistant to random
attacks, but susceptible to targeted attacks, because of
the power-law nature of the degree distribution.
The process most commonly invoked for the genera-
tion of such networks is called “preferential attachment”
[6, 7]. Briefly, new links attach preferentially to nodes
with more existing links. Simon analyzed this stochas-
tic process, and derived the resulting distribution [8].
This simple process has been shown to generate networks
with many of the characteristics of small-world networks,
and has largely replaced the Erdõs-Rényi random graph
model [9] in modeling and simulation work.
Another major area of research in recent years has been
the consolidation of evolutionary dynamics [10], and its
application to alternate areas of research, such as lan-
guage [11]. This work rests on the foundation of quasi-
species theory [12, 13], which forms the basis of much
subsequent mathematical modeling in theoretical biology.
∗Electronic address: [email protected]
In this paper we bring together network generation
models and evolutionary dynamics models (and partic-
ularly quasi-species theory) by showing that they have
a common underlying probabilistic model. This unified
model relates both processes through a single parameter,
called a transfer matrix. The unification allows mathe-
matical machinery developed for evolutionary dynamics
to be applied in the study of network dynamics, and vice
versa. The rest of this paper is organized as follows: first
we describe the preferential attachment algorithm and
the quasispecies model of evolutionary dynamics. Then
we show that we can describe both of these with a single
probabilistic model. This is followed by a brief analy-
sis, and some simulations, which show that power-law
degree distributions can be generated by the model, and
that the process can also be used to model some aspects
of network growth, such as densification power laws and
shrinking diameters.
II. PREFERENTIAL ATTACHMENT
The Preferential Attachment algorithm specifies a pro-
cess of network growth in which the addition of new (in-
)links to nodes is random, but biased according to the
number of (in-)links the node already has. We identify
each node by a unique type i, and let xi indicate the
proportion of the total number of links in the graph that
is already assigned to node i. Then equation 1 gives the
probablity P (i) of adding a new link to node i [6].
P (i) = α xi^γ. (1)
where α is a normalizing term, and γ is a constant. As γ
approaches 0 the preference bias disappears; γ > 1 causes
exponentially greater bias from the existing in-degree of
the node.
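Equation 1 can be sketched directly; x_i is taken here as each node's share of existing links, and γ controls the strength of the bias. The degree sequence below is illustrative only.

```python
import numpy as np

def attach_probabilities(in_degrees, gamma):
    """Probability of a new link attaching to each node, as in Eq. (1):
    P(i) proportional to x_i**gamma, with x_i node i's share of links."""
    x = np.asarray(in_degrees, dtype=float)
    x = x / x.sum()
    w = x ** gamma
    return w / w.sum()

p_linear = attach_probabilities([1, 2, 3, 4], gamma=1.0)   # classic PA
p_uniform = attach_probabilities([1, 2, 3, 4], gamma=0.0)  # bias removed
```

As the text notes, γ = 0 removes the preference entirely (uniform attachment), while γ > 1 biases attachment ever more strongly toward high-degree nodes.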
III. EVOLUTIONARY DYNAMICS AND
QUASISPECIES
Evolutionary dynamics describes a population of types
(species, for example) undergoing change through repli-
cation, mutation, and selection[28]. Suppose there are N
possible types, and let si,t denote the number of individ-
uals of type i in the population at time t. Each type has
a fitness, fi which determines its probability of repro-
duction. At each time step, we select, with probability
proportional to fitness, one individual for reproduction.
Reproduction is noisy, however, and there is a probability
qij that an individual of type j will generate an individ-
ual of type i. The expected value of the change in the
number of individuals of type i at time t is given by,
∆si,t = ( Σj fj sj qij ) / ( Σj fj sj ). (2)
This is known as the quasispecies equation [13]. The fit-
ness, fi, is a constant for each i. Fitness can also be
frequency-dependent, i.e. it can depend on which other
types are present in the population. In this case the
above equation is known as the replicator-mutator equa-
tion (RME) [10],[14].
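The expected update of Eq. (2) is a one-liner in matrix form. In this sketch Q's columns sum to 1 (each birth produces exactly one offspring of some type), so the expected increments always sum to 1 per birth event; the abundances and fitnesses are illustrative.

```python
import numpy as np

def expected_births(s, f, Q):
    """Expected per-birth increment for each type, Eq. (2):
    Delta s_i = (sum_j f_j s_j q_ij) / (sum_j f_j s_j)."""
    w = np.asarray(f, float) * np.asarray(s, float)
    return np.asarray(Q, float) @ w / w.sum()

s = [10.0, 30.0]                       # abundances of two types
f = [2.0, 1.0]                         # fitnesses
ds_exact = expected_births(s, f, np.eye(2))   # error-free replication
Q = np.array([[0.9, 0.2],
              [0.1, 0.8]])             # columns sum to 1 (mutation)
ds_noisy = expected_births(s, f, Q)
```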
IV. A GENERALIZED POLYA’S URN MODEL
THAT DESCRIBES BOTH PROCESSES
Urn models have been used to describe both prefer-
ential attachment [15], and evolutionary processes [16].
Here we describe an urn process derived from the quasis-
pecies equation that also gives a model of network gener-
ation. In addition, this model of network generation will
be seen to unify the Erdõs-Rényi random graph model
[9] with the preferential attachment model.
Our urn process is as follows:
• We have a set of n urns, which are all initially
empty except for one, which has one ball in it.
• We add balls one by one, and a ball goes into urn i
with probability proportional to fimi, where fi is
the “fitness” of urn i, and mi is the number of balls
already in urn i.
• If the ball is put into urn j, then a ball is taken out
of urn j, and moved to urn k with probability qkj .
The matrix Q = [qij ], which we call the transfer matrix,
is the same as the mutation matrix in the quasispecies
equation.
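The three bullet points above translate directly into a simulation. This sketch takes equal urn fitnesses (the γ = 1 case, where the placement probability is simply proportional to m_i) and the near-diagonal transfer matrix discussed below; each step adds exactly one ball, so the total grows by one per step.

```python
import numpy as np

def npa_step(m, Q, rng):
    """One step of Noisy Preferential Attachment with equal urn fitnesses:
    add a ball to urn j with probability proportional to m_j, then move a
    ball from urn j to urn k with probability Q[k, j]."""
    n = len(m)
    j = rng.choice(n, p=m / m.sum())
    m[j] += 1
    k = rng.choice(n, p=Q[:, j])   # the transfer (mutation/rewiring) step
    m[j] -= 1
    m[k] += 1
    return m

rng = np.random.default_rng(4)
n = 5
p_stay = 0.9
Q = np.full((n, n), (1 - p_stay) / (n - 1))
np.fill_diagonal(Q, p_stay)

m = np.zeros(n)
m[0] = 1.0  # all urns empty except one, as in the process description
for _ in range(200):
    m = npa_step(m, Q, rng)
```

Reading each ball as a half-edge and each urn as a node gives the network-growth interpretation described in the text.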
This process describes the preferential attachment
model if we set the fitness, fi, to be proportional to mi^(γ−1), so that fi mi ∝ mi^γ,
where γ is a constant (as in equation 1). Now we get a
network generation algorithm in much the same way as
Chung et al. did [15], where each ball corresponds to a
half-edge, and each urn corresponds to a node. Placing
a ball in an urn corresponds to linking to a node, and
moving a ball from one urn to another corresponds to
rewiring. We call this algorithm Noisy Preferential At-
tachment (NPA). If the transfer matrix is set to be the
identity matrix, Noisy Preferential Attachment reduces
to pure preferential attachment.
In the NPA algorithm, just like in the preferential at-
tachment algorithm, the probability of linking to a node
depends only on the number of in-links to that node. The
“from” node for a new edge is chosen uniformly randomly.
In keeping with standard practice, the graphs in the next
section show only the in-degree distribution. However,
since the “from” nodes are chosen uniformly randomly,
the total degree distribution has the same form. Consider
the case where the transfer matrix is almost diagonal, i.e.
qii is close to 1, and the same ∀i, and all the qij are small
and equal, ∀i ≠ j. Let qii = p and

qij = (1 − p)/(n − 1) = q, ∀i ≠ j. (3)
Then, the probability of the new ball being placed in bin
P (i) = α mi^γ p + (1 − α mi^γ) q, (4)
where α is a normalizing constant. That is, the ball could
be placed in bin i with probability α mi^γ and then replaced in bin i with probability p, or it could be placed in some other bin with probability (1 − α mi^γ), and then trans-
ferred to bin i with probability q. Rearranging, we get,
P (i) = α mi^γ (p − q) + q. (5)
In this case, NPA reduces to preferential attachment with
initial attractiveness [17], where the initial attractiveness
(q, here) is the same for each node. We can get differ-
ent values of initial attractiveness by setting the transfer
matrix to be non-uniform. We can get the Erdõs-Rényi
model by setting the transfer matrix to be entirely uni-
form, i.e. qij = 1/n, ∀i, j. Thus the Erdõs-Rényi model
and the preferential attachment model are seen as two ex-
tremes of the same process, which differ with the transfer
matrix, Q.
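The three regimes just described differ only in the transfer matrix, which makes the unification easy to state in code. This sketch builds the Eq. (3) family of matrices; p = 1 gives the identity (pure preferential attachment), intermediate p gives initial attractiveness, and p = 1/n gives the fully uniform matrix of the Erdõs-Rényi limit.

```python
import numpy as np

def transfer_matrix(n, p):
    """Transfer matrix with q_ii = p and q_ij = (1 - p)/(n - 1) off the
    diagonal, as in Eq. (3)."""
    Q = np.full((n, n), (1.0 - p) / (n - 1))
    np.fill_diagonal(Q, p)
    return Q

Q_pa = transfer_matrix(4, 1.0)        # identity: pure preferential attachment
Q_mid = transfer_matrix(4, 0.7)       # PA with uniform initial attractiveness
Q_er = transfer_matrix(4, 1.0 / 4)    # uniform: Erdõs-Rényi limit
```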
This process also obviously describes the evolutionary
process when γ = 1. In this case, we can assume that at
each step we first select a ball from among all the balls in
all the urns with probability proportional to the fitness
of the ball (assuming that the fitness of a ball is the same
as the fitness of the urn in which it is). The probability
that we will choose a ball from urn i is proportional to
fimi. We then replace this ball and add another ball
to the same urn. This is the replication step. This is
followed by a mutation step as before, where we choose
a ball from the urn and either replace it in the urn with
with probability p or move it to any one of the remaining
urns. If we assume that all urns (i.e. all types or species)
have the same intrinsic fitness, then this process reduces
to the preferential attachment process.
Having developed the unified NPA model, we can now
point towards several concepts in quasi-species theory
that are missing from the study of networks, that NPA
makes it possible to investigate:
• Quasi-species theory assumes a genome, a bit string
for example. This allows the use of a distance mea-
sure on the space of types.
• Mutations are often assumed to be point mutations,
i.e. they can flip one bit. This means that a mu-
tation cannot result in just any type being intro-
duced into the population, only a neighbor of the
type that gets mutated.
• This leads to the notion of a quasi-species, which
is a cloud of mutants that are close to the most-fit
type in genome space.
• Quasi-species theory also assumes a fitness land-
scape. This may in fact be flat, leading to neutral
evolution [18]. Another (toy) fitness landscape is
the Sharply Peaked Landscape (SPL), which has
only one peak and therefore does not suffer from
problems of local optima. In general, though, fit-
ness landscapes have many peaks, and the rugged-
ness of the landscape (and how to evaluate it) is
an important concept in evolutionary theory. The
notion of (node) fitness is largely missing from net-
work theory (with a couple of exceptions: [19],
[20]), though the study of networks might benefit
greatly from it.
• The event of a new type entering the population
and “taking over” is known as fixation. This means
that the entire population eventually consists of
this new type. Typically we speak of gene fixa-
tion, i.e. the probability that a single new gene
gets incorporated into all genomes present in the
population. Fixation can occur due to drift (neu-
tral evolution) as well as due to selection.
V. ANALYSIS AND SIMULATIONS
We next derive the degree distribution of the network.
Since there is no “link death” in the NPA algorithm and
the number of nodes is finite, the limiting behavior in our
model is not the same as that of the preferential attach-
ment model (which allows introduction of new nodes).
This means that we cannot re-use Simon’s result [8] di-
rectly to derive the degree distribution of the network
that results from NPA.
A. Derivation of the degree distribution
Suppose there are N urns and n balls at time t. Let
xi,t denote the fraction of urns with i balls at time t.
We choose a ball uniformly at random and “replicate” it,
i.e. we add a new ball (and replace the chosen ball) into
the same urn. Uniformly random choice corresponds to a
model where all the urns have equal intrinsic fitness. We
follow this up by drawing another ball from this urn and
moving it to a uniformly randomly chosen urn (from the
N − 1 other urns) with probability q = (1− p)/(N − 1),
where p is the probability of putting it back in the same
urn. Let P1(i) be the probability that the ball to be
replicated is chosen from an urn with i balls. Let P2(i)
be the probability that the new ball is placed in an urn
with i balls. The net probability that the new ball ends
up in an urn with i balls is then

P(i) = [P_1(i) \text{ and } P_2(i)] \text{ or } [\bar{P}_1(i) \text{ and } P_2(i)].  (6)
The probability of selecting a ball from an urn with i
balls is

P_1(i) = \frac{N x_{i,t}\, i}{n_0 + t},
where n0 is the number of balls in the urns initially. P2(i)
depends on the outcome of the first step.
P_2(i) = \begin{cases} p + (N x_{i,t} - 1)\,q & \text{when step 1 is ``successful'',} \\ N x_{i,t}\, q & \text{when step 1 is a ``failure''.} \end{cases}
Putting these together, we get

P(i) = \frac{N x_{i,t}\, i}{n_0 + t}\left(p + (N x_{i,t} - 1)q\right) + \left(1 - \frac{N x_{i,t}\, i}{n_0 + t}\right) N x_{i,t}\, q
     = \frac{N x_{i,t}\, i}{n_0 + t}(p - q) + N x_{i,t}\, q.
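As a quick sanity check (ours, not from the paper), the collapse of the two cases into P_1(i)(p − q) + N x_{i,t} q is an algebraic identity, which can be confirmed numerically for random admissible values:

```python
import random

def check_net_probability(trials=1000, seed=1):
    """Verify P1*(p + (Nx - 1)*q) + (1 - P1)*Nx*q == P1*(p - q) + Nx*q
    for randomly drawn parameter values. P1 stands in for
    N*x_{i,t}*i/(n0 + t) and Nx for N*x_{i,t}."""
    rng = random.Random(seed)
    for _ in range(trials):
        N = rng.randint(2, 1000)
        p = rng.random()
        q = (1 - p) / (N - 1)
        P1 = rng.random()
        Nx = rng.uniform(0.0, N)
        lhs = P1 * (p + (Nx - 1) * q) + (1 - P1) * Nx * q
        rhs = P1 * (p - q) + Nx * q
        assert abs(lhs - rhs) < 1e-9
    return True
```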
Now we calculate the expected value of xi,t+1. xi,t will
increase if the ball goes into an urn with i−1 balls. Sim-
ilarly it will decrease if the ball ends up in an urn with
i balls. Otherwise it will remain unchanged. Remember-
ing that xi,t is the fraction of urns with i balls at time t,
we write,
N x_{i,t+1} = \begin{cases}
N x_{i,t} + 1 & \text{w.p. } \dfrac{N x_{i-1,t}(i-1)}{n_0+t}(p-q) + N x_{i-1,t}\, q, \\
N x_{i,t} - 1 & \text{w.p. } \dfrac{N x_{i,t}\, i}{n_0+t}(p-q) + N x_{i,t}\, q, \\
N x_{i,t} & \text{otherwise.}
\end{cases}
From this, the expected value of x_{i,t+1} works out to be

x_{i,t+1} = \left[1 - \frac{i(p-q)}{n_0+t} - q\right] x_{i,t} + \left[\frac{(i-1)(p-q)}{n_0+t} + q\right] x_{i-1,t}.  (7)
We can show the approximate solution for x_{i,t} to be

x_{i,t} = \frac{q\, x_{0,0}\, r^{i-1}\,\Gamma(i)}{\prod_{k=1}^{i}(kr+1)}\,(t+1)(1-q)^{t-1},  (8)
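The recurrence (7) can also be iterated directly. The sketch below is our own illustration (with the empty-urn fraction evolved via x_{0,t+1} = (1 − q)x_{0,t}, as derived in Appendix A); it conserves the total urn fraction up to truncation at a maximum occupancy:

```python
def iterate_recurrence(N, p, T, imax=80):
    """Iterate the mean-field recurrence for the urn-occupancy fractions
    x_{i,t}, starting from one ball in one urn (n0 = 1, x_{1,0} = 1/N).
    Returns the list of states history[t][i]."""
    q = (1 - p) / (N - 1)
    x = [0.0] * (imax + 1)
    x[0] = (N - 1) / N           # initially all urns but one are empty
    x[1] = 1.0 / N
    history = [list(x)]
    for t in range(T):
        new = list(x)
        new[0] = (1 - q) * x[0]  # empty urns only lose mass
        for i in range(1, imax + 1):
            loss = (i * (p - q) / (1 + t) + q) * x[i]
            gain = ((i - 1) * (p - q) / (1 + t) + q) * x[i - 1]
            new[i] = x[i] - loss + gain
        history.append(new)
        x = new
    return history
```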
FIG. 1: Example x_{i,t} curves, of the form t(0.99)^t and 0.5\,t(0.99)^t.
FIG. 2: The form of the degree distribution (shown for r = 0.33).
where r = (p − q)/(1 − q). This approximation is valid
while t << N. See Appendix A for details. For any
particular i, the shape of this curve is given by t(1-q)^t.
An example curve is shown in fig 1. This matches our
intuition. Initially, xi,t = 0 for i > 1. As t increases, xi,t
increases through mutations. However, since N is finite
and we keep adding balls, eventually the number of bins
with i balls must go to zero for any particular i. Thus
xi,t must eventually start decreasing, which is what we
see in figure 1. The middle term can be simplified further:

\prod_{k=1}^{i}(kr+1) = r^{i}\prod_{k=1+1/r}^{i+1/r} k = r^{i}\,\frac{\Gamma(i+1+1/r)}{\Gamma(1+1/r)},

so that

\frac{r^{i-1}\,\Gamma(i)}{\prod_{k=1}^{i}(kr+1)} = \frac{\Gamma(i)\,\Gamma(1/r)}{r^{2}\,\Gamma(i+1+1/r)}.
Therefore, in terms of i, equation 8 can be written as
(for fixed t)

x_i = C\,\frac{\Gamma(i)}{\Gamma(i+1+1/r)},  (9)

where C is a constant. This is the form of the degree
distribution. It is a power law, because as i → ∞,
equation 9 tends to i^{-(1+1/r)} (see discussion of eq. 1.4
in [8, pg 426]). This is also demonstrated in the sample
plots in figure 2.
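The asymptotic claim can be checked with log-gamma arithmetic; the helper below is our own check (with a standing in for 1/r), showing the local exponent of Γ(i)/Γ(i + 1 + a) approaching −(1 + a):

```python
from math import lgamma, log

def gamma_ratio_exponent(i, a):
    """Return log[Gamma(i)/Gamma(i+1+a)] / log(i), which tends to
    -(1 + a) as i grows, i.e. eq. (9) behaves as i^-(1+1/r)."""
    return (lgamma(i) - lgamma(i + 1 + a)) / log(i)
```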
These results are confirmed through simulation. We
did an experiment where the number of possible nodes
was set to 100000, and 10000 links were added. The
experiment was repeated for values of p ranging from
0.01 to 0.99, in steps of 0.01. Figure 3 shows a plot of
FIG. 3: N = 100000, number of edges = 10000; the x-axis is p (= 1 - mutation probability), the y-axis coherence φ.
FIG. 4: Indegree distribution; p = 0.8, N = 100000, number of edges = 10000.
coherence, φ, which is defined as

\phi = \sum_i x_i^2.  (10)
Coherence is a measure of the non-uniformity of the de-
gree distribution. It is 1 when a single node has all the
links. When all nodes have one link each, coherence has
its lowest value, 1/N . We see that as p increases (i.e.
mutation rate decreases), coherence also increases. This
is borne out by the degree distribution plots (figures 4
through 6). The degree distribution is steeper for lower
values of p.
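Coherence as defined in (10) is straightforward to compute; a minimal sketch (ours), with x_i read as node i's share of all links:

```python
def coherence(link_shares):
    """phi = sum_i x_i^2 (eq. 10): 1 when one node holds every link,
    1/N when N nodes hold equal shares."""
    return sum(x * x for x in link_shares)
```

For example, coherence([1.0, 0.0, 0.0, 0.0]) is 1, while four equal shares of 0.25 give 0.25.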
B. Stability
We can rewrite equation 2 as

\Delta s_i = \frac{1}{\sum_j f_j s_j}\left(f_i s_i q_{ii} + \sum_{j\neq i} f_j s_j q_{ij}\right) - s_i.  (11)
The first term in the parentheses represents the change in
si due to selection. Some of the copies of type i are lost
due to mutation. The fraction that is retained is given
FIG. 5: Indegree distribution; p = 0.6, N = 100000, number of edges = 10000.
FIG. 6: Indegree distribution; p = 0.4, N = 100000, number of edges = 10000.
by the product f_i q_{ii}. If this product is greater than 1, the
proportion of type i will increase due to selection, oth-
erwise it will decrease. The second term represents the
contribution to type i due to mutation from all the other
types in the population. Thus, if si decreases towards
zero due to a selective disadvantage, it will be maintained
in the population at “noise” level due to mutations.
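One discrete update of equation (11) can be written in its normalized replicator-mutator form rather than as a change Δs_i. This is an illustrative sketch (ours), with Q[j][i] the probability that a copy of type j mutates into type i:

```python
def quasispecies_step(s, f, Q):
    """s_i' = sum_j f_j s_j Q[j][i] / sum_j f_j s_j.
    Selection weights each type by its fitness f_j; mutation (rows of Q
    summing to 1) redistributes the offspring; dividing by the mean
    fitness keeps the frequencies normalized."""
    n = len(s)
    phi = sum(f[j] * s[j] for j in range(n))   # mean fitness
    return [sum(f[j] * s[j] * Q[j][i] for j in range(n)) / phi
            for i in range(n)]
```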
This leads to the notion of an error threshold. Sup-
pose that the fitness landscape has only one peak. This
is known as the Sharply Peaked Landscape, or SPL. Sup-
pose further that mutations only alter one position on the
genome at a time. Then it can be shown that if the mu-
tation rate is small enough the population will be closely
clustered about the fittest type. The fittest type keeps
getting regenerated due to selection, and mutations gen-
erate a cloud of individuals with genomes very close to
the genome of the fittest type. This cloud is known as a
quasi-species [21].
If, on the other hand, the mutation rate is above a
certain threshold (essentially 1/f_i, where i is the fittest
type) then all types will persist in the population in equal
proportions. This threshold is known as the error threshold.
VI. FITNESS LANDSCAPES AND NEUTRAL
EVOLUTION
We have seen above that noisy preferential attachment
is equivalent to molecular evolution where all intrinsic
fitnesses are equal. If node fitnesses are allowed to be
different, we get standard quasi-species behavior. If the
mutation rate is low enough, the fittest node dominates
the network and acquires nearly all the links. If the mu-
tation rate is high enough to be over the error threshold,
no single node dominates.
Figures 7 and 8 show simulations where nodes are as-
signed intrinsic fitness values uniformly randomly in the
range (0, 1), for different values of p. We see that when p
is high (0.9), i.e. mutation rate is low, the degree distri-
bution stretches out along the bottom, and one or a few
nodes acquire nearly all the links. When p = 0.4, though,
we don’t get this behavior, because the mutation rate is
over the error threshold.
Since we generally don’t see a single node dominating
in real-world networks, we are led to one of two conclu-
sions: either mutation rates in real-world networks are
FIG. 7: Indegree distribution; p = 0.4, N = 100000, number of edges = 10000, node fitnesses are uniformly randomly distributed between 0 and 1.
FIG. 8: Indegree distribution; p = 0.9, N = 100000, number of edges = 10000, node fitnesses are uniformly randomly distributed between 0 and 1; the in-figure label marks the dominant node.
rather high, or the intrinsic fitnesses of the nodes are
all equal. The former seems somewhat untenable. The
latter suggests that most networks undergo neutral evo-
lution [18].
Fitness landscapes can also be dynamic. Golder and
Huberman give examples of short term dynamics in col-
laborative tagging systems (in particular Del.icio.us) [22].
Figures 9 and 10, which are taken from their paper, show
two instances of the rate at which two different web sites
acquired bookmarks. The first one shows a peak right
after it appears, before the rate of bookmarking drops to
a baseline level. The second instance shows a web site
existing for a while before it suddenly shows a peak in
the rate of bookmarking. Both are examples of dynamic,
i.e. changing, fitness. Wilke et al. have shown that in
the case of molecular evolution a rapidly changing fit-
FIG. 9: This is figure 6a from [22]. It shows number of book-
marks received against time (day number). This particular
site acquires a lot of bookmarks almost immediately after it
appears, but thereafter receives few bookmarks.
FIG. 10: This is figure 6b from [22]. It shows number of
bookmarks received against time (day number). This par-
ticular site suddenly acquires a lot of bookmarks in a short
period of time, though it has existed for a long time.
ness landscape is equivalent to the time-averaged fitness
landscape [23]. Thus while short term dynamics show
peaks in link (or bookmark) acquisition, the long-term
dynamics could still be neutral or nearly neutral.
VII. DYNAMICAL PROPERTIES OF
REAL-WORLD NETWORKS
Leskovec et al. point out that though models like pref-
erential attachment are good at generating networks that
match static “snapshots” of real-world networks, they do
not appropriately model how real-world networks change
over time [24]. They point out two main properties which
are observed for several real-world networks over time:
densification power laws, and shrinking diameters. The
term densification power law refers to the fact that the
number of edges grows super-linearly with respect to the
number of nodes in the network. In particular, it grows as
a power law. This means that these networks are getting
more densely connected over time. The second surprising
property of the dynamics of growing real-world networks
is that the diameter (or 90th percentile distance, which
is called the effective diameter) decreases over time. In
most existing models of scale-free network generation, it
has been shown that the diameter increases very slowly
over time [25]. Leskovec et al. stress the importance
of modeling these dynamical aspects of network growth,
and they present an alternate algorithm that displays
both the above properties.
Noisy preferential attachment can also show these
properties if we slowly decrease the mutation rate over
time. Figures 11 and 12 show the effective diameter of the
network and the rate of change of the number of nodes
with respect to the number of edges for a simulation in
which the mutation rate was changed from 0.3 to 0.01
over the course of the simulation run.
FIG. 11: The effective diameter of the network, plotted against the number of edges (x100), when the mutation rate decreases over time from 0.3 to 0.01. It increases quickly at first and then decreases slowly over time.
FIG. 12: The number of nodes grows as a power law with respect to the number of edges (or time, since one edge is added at each time step). The slope of the line is approximately 0.86.
VIII. CONCLUSIONS
We have shown that, when modeled appropriately, the
preferential attachment model of network generation can
be seen as a special case of the process of molecular evo-
lution because they share a common underlying proba-
bilistic model. We have presented a new, more general,
model of network generation, based on this underlying
probabilistic model. Further, this new model of network
generation, which we call Noisy Preferential Attachment,
unifies the Erdős-Rényi random graph model with the
preferential attachment model.
The preferential attachment algorithm assumes that
the fitness of a node depends only on the number of links
it has. This is not true of most real networks. On the
world wide web, for instance, the likelihood of linking to
an existing webpage depends also on the content of that
webpage. Some websites also experience sudden spurts
of popularity, after which they may cease to acquire new
links. Thus the probability of acquiring new links de-
pends on more than the existing degree. This kind of
behavior can be modeled by the Noisy Preferential At-
tachment algorithm by including intrinsic fitness values
for nodes.
The Noisy Preferential Attachment algorithm can also
be used to model some dynamical aspects of network
growth such as densification power laws and shrinking di-
ameters by gradually decreasing mutation rate over time.
If true, this brings up the intriguing question of why mu-
tation rate would decrease over time in real-world net-
works. On the world wide web, for example, this may
have to do with better quality information being avail-
able through the emergence of improved search engines
etc. However, the fact that many different kinds of net-
works exhibit densification and shrinking diameters sug-
gests that there may be some deeper explanation to be
found.
From a design point of view, intentional modulation
of the mutation rate can provide a useful means of trad-
ing off between exploration and exploitation of network
structure. We have been exploring this in the context of
convergence in a population of artificial language learners
[26].
The larger contribution of this work, however, is to
bring together the fields of study of networks and evo-
lutionary dynamics, and we believe that many further
connections can be made.
IX. ACKNOWLEDGEMENTS
We appreciate the helpful comments of Roberto Al-
dunate and Jun Wang. Work supported under NSF
Grant IIS-0340996.
APPENDIX A
Here we solve the difference equation

x_{i,t+1} = \left[1 - \frac{i(p-q)}{n_0+t} - q\right] x_{i,t} + \left[\frac{(i-1)(p-q)}{n_0+t} + q\right] x_{i-1,t}.  (A1)
x_{0,t} is a special case:

N x_{0,t+1} = \begin{cases} N x_{0,t} - 1 & \text{w.p. } N x_{0,t}\, q, \\ N x_{0,t} & \text{otherwise.} \end{cases}

Expanding and simplifying as above, we get

x_{0,t+1} = (1-q)\, x_{0,t}.

The solution to this difference equation is simply

x_{0,t} = (1-q)^{t}\, x_{0,0},  (A2)
where x_{0,0} = (N-1)/N is the initial fraction of empty
urns. Note that here, and henceforth, we are
assuming that initially all the urns are empty except for
one, which has one ball in it. Therefore x1,0 = 1, and
xi,0 = 0 ∀i > 1. This also means that n0 = 1. These
conditions together specify the entire initial state of the
system.
Equation A1 is difficult to solve directly, so we shall
take the approach of finding the solution to x1,t and x2,t
and then simply guessing the solution to xi,t.
Substituting i = 1 in equation 7 gives us

x_{1,t+1} = \left[1 - \frac{p-q}{n_0+t} - q\right] x_{1,t} + q\, x_{0,t}.
Substituting the solution for x_{0,t} from equation A2 gives

x_{1,t+1} = \left[1 - \frac{p-q}{n_0+t} - q\right] x_{1,t} + q(1-q)^{t} x_{0,0}.  (A3)
The complete solution for x_{1,t} is (see Appendix B)

x_{1,t} = (1-q)^{t}\left[A(t+1) + \frac{B}{t^{\underline{r}}}\right],  (A4)

where A = \frac{q\,x_{0,0}}{1+p-2q} and B = \frac{2(p-q)}{(1+p-2q)\,N\,\Gamma(1-r)} are con-
stants, and t^{\underline{r}} denotes the falling power (see Appendix B). Let us now use this result to derive the solution
for x_{2,t}. Substituting i = 2 in equation A1, we get

x_{2,t+1} = \left[1 - \frac{2(p-q)}{n_0+t} - q\right] x_{2,t} + \left[\frac{p-q}{n_0+t} + q\right] x_{1,t}.
Substituting the solution for x_{1,t} from equation A4 and
replacing n_0 by 1 for convenience gives us

x_{2,t+1} = \left[1 - \frac{2(p-q)}{1+t} - q\right] x_{2,t} + (1-q)^{t}\left[A(t+1) + \frac{B}{t^{\underline{r}}}\right]\left[\frac{p-q}{1+t} + q\right].  (A5)
The solution to this (after some work) turns out to be
(see Appendix B)

x_{2,t} = (1-q)^{t}\left[\frac{A r(t+1)}{2r+1} + \frac{B}{t^{\underline{r}}} + \frac{D}{t^{\underline{2r}}}\right] + \frac{q(1-q)^{t}}{1+p-2q}\left[\frac{A(t+1)(2rt+t+2r)}{2(2r+1)} + \frac{B(t+2)}{(1+r)\,t^{\underline{r}}}\right].  (A6)
In the above expression, compared to the first term, the
remaining terms are negligible. To see this, consider that
B/t^{\underline{r}} can be at most B (as r → 0), and at least B/t (as
r → 1). B itself is less than 1/N. Therefore the
contribution of the second term is upper-bounded by 1/N.
A similar observation will hold for D/t^{\underline{2r}}. This is far
less than the contribution due to the first term, since A
(which is also close to 1/N) is multiplied by (t+1). The
remaining terms are approximately of the form t^2/N^2
(and higher i will contain higher powers). We can ignore
these as long as t << N. Thus, we can write the solution
for x_{2,t} approximately as

x_{2,t} = \frac{A r}{2r+1}\,(t+1)(1-q)^{t}
        = \frac{q}{1+p-2q}\cdot\frac{N-1}{N}\cdot\frac{r}{2r+1}\,(t+1)(1-q)^{t}
        = \frac{q\,x_{0,0}\,r}{(r+1)(2r+1)}\,(t+1)(1-q)^{t-1}.
We can continue on with x_{3,t}:

x_{3,t+1} = \left[1 - \frac{3(p-q)}{1+t} - q\right] x_{3,t} + \left[\frac{2(p-q)}{1+t} + q\right] x_{2,t}.
If we follow through with this as for x_{2,t}, we will see the
2 from the constant in the second term (the coefficient \frac{2(p-q)}{1+t} + q of x_{2,t}) appear as
a factor in the first term of the solution for x_{3,t}. In the
general expression for the solution, this appears as Γ(i).
Therefore, we can guess the approximate expression for
x_{i,t} to be
x_{i,t} = \frac{q\,x_{0,0}\,r^{i-1}\,\Gamma(i)}{\prod_{k=1}^{i}(kr+1)}\,(t+1)(1-q)^{t-1},  (A7)

which is the same as equation 8.
APPENDIX B
Equation A3 is

x_{1,t+1} = \left[1 - \frac{p-q}{n_0+t} - q\right] x_{1,t} + q(1-q)^{t} x_{0,0}.
This equation is of the form y(t+1) = p(t)\,y(t) + r(t).
The general form of the solution is

y(t) = u(t)\sum \frac{r(t)}{E u(t)},  (B1)
where u(t) is the solution of the homogeneous part of
the above equation, i.e. u(t + 1) = p(t)u(t), and E is
the time-shift operator, i.e. Eu(t) = u(t+ 1). Now, the
homogeneous part of equation A3 is,
u(t+1) = \left[1 - q - \frac{p-q}{n_0+t}\right] u(t)
       = \left[\frac{(1-q)t + (1-q)n_0 - (p-q)}{n_0+t}\right] u(t)
       = (1-q)\left[\frac{t+n_0-r}{t+n_0}\right] u(t).
The solution to this difference equation is

u(t) = C(1-q)^{t}\,\frac{\Gamma(t+n_0-r)}{\Gamma(t+n_0)},  (B2)
where r = (p − q)/(1 − q), C is a constant, and Γ(·) is
the gamma-function, which is a “generalization” of the
factorial to the complex plane and satisfies the recursion
Γ(n+1) = nΓ(n). The derivation of equation B2 is
given in Appendix C. From equations A3, B1, and B2,
we get

x_{1,t} = C(1-q)^{t}\,\frac{\Gamma(t+n_0-r)}{\Gamma(t+n_0)} \left[\sum \frac{q\,x_{0,0}(1-q)^{t}\,\Gamma(t+1+n_0)}{C(1-q)^{t+1}\,\Gamma(t+1+n_0-r)}\right]
       = \frac{C(1-q)^{t}}{(t+n_0-1)^{\underline{r}}}\left[\frac{q\,x_{0,0}}{C(1-q)}\sum (t+n_0)^{\underline{r}} + D_1\right]

(t^{\underline{r}} is read as ``t to the r falling'')

       = \frac{q(1-q)^{t-1}x_{0,0}}{(t+n_0-1)^{\underline{r}}}\cdot\frac{(t+n_0)^{\underline{r+1}}}{r+1} + \frac{D(1-q)^{t}}{(t+n_0-1)^{\underline{r}}}

(where D = C D_1 is another constant)

       = \frac{q(1-q)^{t}x_{0,0}}{1+p-2q}\cdot\frac{\Gamma(t+n_0-r)}{\Gamma(t+n_0)}\cdot\frac{\Gamma(t+n_0+1)}{\Gamma(t+n_0-r)} + \frac{D(1-q)^{t}}{(t+n_0-1)^{\underline{r}}}
       = \frac{q(1-q)^{t}x_{0,0}(t+n_0)}{1+p-2q} + \frac{D(1-q)^{t}}{(t+n_0-1)^{\underline{r}}}.
Let us evaluate the constant by applying the initial con-
ditions t = 0, x_{0,0} = (N-1)/N, x_{1,0} = 1/N, and n_0 = 1.
We get

\frac{1}{N} = \frac{q\,x_{0,0}}{1+p-2q} + D\,\Gamma(1-r),

i.e.

1 = \frac{q(N-1)}{1+p-2q} + N D\,\Gamma(1-r).

Therefore,

D = \frac{2(p-q)}{(1+p-2q)\,N\,\Gamma(1-r)}.  (B3)
This gives us the complete solution for x_{1,t} as

x_{1,t} = (1-q)^{t}\left[A(t+1) + \frac{B}{t^{\underline{r}}}\right],

where A = \frac{q\,x_{0,0}}{1+p-2q} and B = D = \frac{2(p-q)}{(1+p-2q)\,N\,\Gamma(1-r)} are
constants. This is the same as equation A4.
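Because both pieces of the solution satisfy the recurrence exactly (the homogeneous part via B2 and the particular part by direct substitution), the closed form can be verified to machine precision. A verification sketch (ours), using lgamma to evaluate the falling power t^{\underline{r}} = Γ(t+1)/Γ(t+1−r):

```python
from math import lgamma, exp

def falling(t, r):
    """Falling power t^{r falling} = Gamma(t+1) / Gamma(t+1-r)."""
    return exp(lgamma(t + 1) - lgamma(t + 1 - r))

def x1_closed(t, N, p):
    """Closed-form x_{1,t} of equation A4 with n0 = 1 and the constants
    A and B (= D of equation B3) as derived above."""
    q = (1 - p) / (N - 1)
    r = (p - q) / (1 - q)
    x00 = (N - 1) / N
    A = q * x00 / (1 + p - 2 * q)
    B = 2 * (p - q) / ((1 + p - 2 * q) * N * exp(lgamma(1 - r)))
    return (1 - q) ** t * (A * (t + 1) + B / falling(t, r))
```

Plugging x1_closed into the recurrence (A3) reproduces the next value, and x1_closed(0, N, p) returns the initial condition 1/N.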
1. Solution to equation A5
Equation A5 is

x_{2,t+1} = \left[1 - \frac{2(p-q)}{1+t} - q\right] x_{2,t} + (1-q)^{t}\left[A(t+1) + \frac{B}{t^{\underline{r}}}\right]\left[\frac{p-q}{1+t} + q\right].
Again, this equation is of the form of equation B1. The
solution to the homogeneous part in this case is

u(t) = C(1-q)^{t}\,\frac{\Gamma\!\left(t+1-\frac{2(p-q)}{1-q}\right)}{\Gamma(t+1)} = C(1-q)^{t}\,\frac{\Gamma(t+1-2r)}{\Gamma(t+1)}.  (B4)
This is found in exactly the same way as equation B2
(see Appendix C). Now, from equations B1, A5, and B4,
we get
x_{2,t} = \frac{C(1-q)^{t}}{t^{\underline{2r}}}\left\{\frac{1}{C(1-q)}\left[A(p-q)\sum(t+1)^{\underline{2r}} + Aq\sum(t+1)(t+1)^{\underline{2r}} + B(p-q)\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}(t+1)} + Bq\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}}\right] + D_1\right\}.

Solving the summations (see Appendix C), we get
x_{2,t} = \frac{C(1-q)^{t}}{t^{\underline{2r}}}\left\{\frac{1}{C(1-q)}\left[\frac{A(p-q)(t+1)^{\underline{2r+1}}}{2r+1} + Aq\left(\frac{t\,(t+1)^{\underline{2r+1}}}{2r+1} - \frac{(t+1)^{\underline{2r+2}}}{(2r+1)(2r+2)}\right) + \frac{B(p-q)\,t^{\underline{2r}}}{r\,t^{\underline{r}}} + \frac{Bq(t+2)\,t^{\underline{2r}}}{(1+r)\,t^{\underline{r}}}\right] + D_1\right\}.  (B5)
x2,t = (1− q)
[Ar(t + 1)
2r + 1
Aq(t+ 1)(2rt+ t+ 2r)
(1 − q)(2r + 1)(2r + 2)
Bq(t+ 2)
(1− q)(1 + r)tr
D(1 − q)t
= (1− q)t
A(t+ 1)
2r + 1
q(1− q)t
1 + p− 2q
A(t+ 1)
2rt+ t+ 2r
2(2r + 1)
(t+ 2)
This is the same as equation B5.
APPENDIX C
1. Derivation of equation B2
Equation B2 is the solution to the following difference
equation:
u(t+1) = (1-q)\left[\frac{t+n_0-r}{t+n_0}\right] u(t).
Note that all the factors in this equation are positive.
Taking log, we get

\log u(t+1) = \log\left[(1-q)\,\frac{t+n_0-r}{t+n_0}\right] + \log u(t),
i.e.

\Delta \log u(t) = \log(1-q) + \log(t+n_0-r) - \log(t+n_0).

Taking the indefinite sum,

\log u(t) = \sum\left[\log(1-q) + \log(t+n_0-r) - \log(t+n_0)\right].
Remembering that \sum a = ta and \sum \log(t+a) = \log\Gamma(t+a), we get

\log u(t) = t\log(1-q) + \log\Gamma(t+n_0-r) - \log\Gamma(t+n_0) + D.

Therefore,

u(t) = C(1-q)^{t}\,\frac{\Gamma(t+n_0-r)}{\Gamma(t+n_0)}.

This is the same as equation B2.
2. Derivation of equation B5
Equation B5 is the solution to the following difference
equation:
x_{2,t} = \frac{C(1-q)^{t}}{t^{\underline{2r}}}\left\{\frac{1}{C(1-q)}\left[A(p-q)\sum(t+1)^{\underline{2r}} + Aq\sum(t+1)(t+1)^{\underline{2r}} + B(p-q)\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}(t+1)} + Bq\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}}\right] + D_1\right\}.
We shall solve each of the summations individually. At
several points, we will use the summation by parts for-
mula,

\sum E y(t)\,\Delta z(t) = y(t)\,z(t) - \sum z(t)\,\Delta y(t).  (C1)
The first summation term can be obtained directly:

\sum (t+1)^{\underline{2r}} = \frac{(t+1)^{\underline{2r+1}}}{2r+1} + C_1.  (C2)
The second summation term can be obtained using the
summation by parts formula. Let Ey(t) = t+1. Then
y(t) = t, and \Delta y(t) = 1. Let \Delta z(t) = (t+1)^{\underline{2r}}. Then
z(t) = \frac{(t+1)^{\underline{2r+1}}}{2r+1}. We get

\sum(t+1)(t+1)^{\underline{2r}} = \frac{t\,(t+1)^{\underline{2r+1}}}{2r+1} - \sum\frac{(t+1)^{\underline{2r+1}}}{2r+1}
                              = \frac{t\,(t+1)^{\underline{2r+1}}}{2r+1} - \frac{(t+1)^{\underline{2r+2}}}{(2r+1)(2r+2)}.  (C3)
Before proceeding, we pause to calculate \sum(1/t^{\underline{r}}). Note
that

\Delta\left[\frac{t}{t^{\underline{r}}}\right] = \frac{t+1}{(t+1)^{\underline{r}}} - \frac{t}{t^{\underline{r}}} = \frac{t+1-r}{t^{\underline{r}}} - \frac{t}{t^{\underline{r}}} = \frac{1-r}{t^{\underline{r}}}.

Taking summation, we get

\sum\frac{1}{t^{\underline{r}}} = \frac{t}{(1-r)\,t^{\underline{r}}}.  (C4)
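Identity (C4) can be spot-checked numerically as a definite (telescoping) sum; the sketch below is ours:

```python
from math import lgamma, exp

def falling(t, r):
    """Falling power t^{r falling} = Gamma(t+1) / Gamma(t+1-r)."""
    return exp(lgamma(t + 1) - lgamma(t + 1 - r))

def c4_error(r, a=1, b=200):
    """Compare sum_{t=a}^{b-1} 1/t^{r falling} with F(b) - F(a), where
    F(t) = t / ((1-r) t^{r falling}) is the indefinite sum from (C4)."""
    total = sum(1.0 / falling(t, r) for t in range(a, b))
    F = lambda t: t / ((1 - r) * falling(t, r))
    return abs(total - (F(b) - F(a)))
```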
We now proceed to the third summation term in the dif-
ference equation for x_{2,t}:

\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}(t+1)} = \sum\frac{t^{\underline{2r-1}}}{t^{\underline{r}}}.

We shall again use the summation by parts formula. Let
Ey(t) = t^{\underline{2r-1}}. Therefore y(t) = (t-1)^{\underline{2r-1}}, and \Delta y(t) =
(2r-1)(t-1)^{\underline{2r-2}}. Let \Delta z(t) = 1/t^{\underline{r}}. Therefore z(t) =
t/((1-r)\,t^{\underline{r}}) (from equation C4). We get

\sum\frac{t^{\underline{2r-1}}}{t^{\underline{r}}} = \frac{t\,(t-1)^{\underline{2r-1}}}{(1-r)\,t^{\underline{r}}} - \frac{2r-1}{1-r}\sum\frac{t\,(t-1)^{\underline{2r-2}}}{t^{\underline{r}}}
                                              = \frac{t^{\underline{2r}}}{(1-r)\,t^{\underline{r}}} - \frac{2r-1}{1-r}\sum\frac{t^{\underline{2r-1}}}{t^{\underline{r}}},

using t(t-1)^{\underline{2r-1}} = t^{\underline{2r}} and t(t-1)^{\underline{2r-2}} = t^{\underline{2r-1}}. Collecting the
two sums,

\left(1 + \frac{2r-1}{1-r}\right)\sum\frac{t^{\underline{2r-1}}}{t^{\underline{r}}} = \frac{t^{\underline{2r}}}{(1-r)\,t^{\underline{r}}}.

Therefore,

\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}(t+1)} = \frac{t^{\underline{2r}}}{r\,t^{\underline{r}}}.  (C5)
The fourth summation term in the difference equation
for x_{2,t} is similar to the third one:

\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}} = \sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}(t+1)}\,(t+1).

Let Ey(t) = (t+1). Then y(t) = t, and \Delta y(t) = 1. Let
\Delta z(t) = \frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}(t+1)}. Then z(t) = \frac{t^{\underline{2r}}}{r\,t^{\underline{r}}} (from equation C5).
Therefore, using the summation by parts rule, we get

\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}} = \frac{t\,t^{\underline{2r}}}{r\,t^{\underline{r}}} - \frac{1}{r}\sum\frac{t^{\underline{2r}}}{t^{\underline{r}}}.  (C6)

Now,

\sum\frac{t^{\underline{2r}}}{t^{\underline{r}}} = \sum(t-r)^{\underline{r}} = \frac{(t-r)^{\underline{r+1}}}{1+r} = \frac{(t-2r)\,t^{\underline{2r}}}{(1+r)\,t^{\underline{r}}}.

Substituting back in equation C6, we get

\sum\frac{(t+1)^{\underline{2r}}}{t^{\underline{r}}} = \frac{t^{\underline{2r}}}{r\,t^{\underline{r}}}\left(t - \frac{t-2r}{1+r}\right) = \frac{(t+2)\,t^{\underline{2r}}}{(1+r)\,t^{\underline{r}}}.  (C7)
Combining equations C2, C3, C5, and C7, we get the
solution for x2,t, i.e. equation B5.
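Both (C5) and (C7) can be spot-checked the same way, using their right-hand sides as indefinite sums (our own numerical check, not from the paper):

```python
from math import lgamma, exp

def falling(t, r):
    """Falling power t^{r falling} = Gamma(t+1) / Gamma(t+1-r)."""
    return exp(lgamma(t + 1) - lgamma(t + 1 - r))

def c5_c7_error(r, a=2, b=300):
    """Largest deviation of the definite sums of the (C5) and (C7)
    summands from the telescoped right-hand sides."""
    F5 = lambda t: falling(t, 2 * r) / (r * falling(t, r))
    F7 = lambda t: (t + 2) * falling(t, 2 * r) / ((1 + r) * falling(t, r))
    s5 = sum(falling(t + 1, 2 * r) / (falling(t, r) * (t + 1)) for t in range(a, b))
    s7 = sum(falling(t + 1, 2 * r) / falling(t, r) for t in range(a, b))
    return max(abs(s5 - (F5(b) - F5(a))), abs(s7 - (F7(b) - F7(a))))
```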
[1] S. Milgram, Psychology Today 2, 60 (1967).
[2] D. J. Watts and S. H. Strogatz, Nature 393, 440 (1998).
[3] C. Asavathiratham, S. Roy, B. Lesieutre, and G. Vergh-
ese, IEEE Control Systems (2001).
[4] R. Albert, H. Jeong, and A.-L. Barabási, Nature 401,
130 (1999).
[5] R. Ferrer i Cancho and R. V. Solé, Proceedings of the
Royal Society of London B 268, 2261 (2001).
[6] A.-L. Barabási and R. Albert, Science 286, 509 (1999).
[7] R. Albert and A.-L. Barabási, Physical Review Letters
85, 5234 (2000).
[8] H. A. Simon, Biometrika 42, 425 (1955).
[9] P. Erdős and A. Rényi, Publicationes Mathematicae De-
brecen 6, 290 (1959).
[10] K. M. Page and M. A. Nowak, Journal of theoretical
biology 219, 93 (2002).
[11] M. A. Nowak, Z. Phys. Chem. 16, 5 (2002).
[12] M. Eigen and P. Schuster, Naturwissenschaften 64, 541
(1977).
[13] M. Eigen, J. McCaskill, and P. Schuster, Journal of Phys-
ical Chemistry 92, 6881 (1988).
[14] N. L. Komarova, Journal of Theoretical Biology 230, 227
(2004).
[15] F. Chung, S. Handjani, and D. Jungreis, Annals of Com-
binatorics 7, 141 (2003).
[16] M. Benäım, S. Schreiber, and P. Tarrès, Annals of Ap-
plied Probability 14, 1455 (2004).
[17] S. N. Dorogovtsev, J. F. F. Mendes, and A. N. Samukhin,
Physical Review Letters 85, 4633 (2000).
[18] M. Kimura, The Neutral Theory of Molecular Evolution
(Cambridge University Press, Cambridge, 1983).
[19] G. Caldarelli, A. Capocci, P. De Los Rios, and M. A.
Muñoz, Physical Review Letters 89 (2002).
[20] G. Bianconi and A.-L. Barabási, Europhysics Letters 54,
436 (2001).
[21] M. Eigen, J. McCaskill, and P. Schuster, Adv. Chem.
Phys. 75, 149 (1989).
[22] S. A. Golder and B. A. Huberman, Journal of Informa-
tion Science 32, 198 (2006).
[23] C. O. Wilke, C. Ronnewinkel, and T. Martinetz, Phys.
Rep. 349, 395 (2001).
[24] J. Leskovec, J. Kleinberg, and C. Faloutsos, in Proceed-
ings of KDD’05 (Chicago, Illinois, USA, 2005).
[25] B. Bollobás and O. Riordan, Combinatorica 24, 5 (2004).
[26] S. Swarup and L. Gasser, in From Animals to Animats 9:
Proceedings of the Ninth International Conference on the
Simulation of Adaptive Behavior (Rome, Italy, 2006).
[27] G. U. Yule, Philosophical Transactions of the Royal Soci-
ety of London. Series B, Containing Papers of a Biological
Character 213, 21 (1925).
[28] Simon (and Yule [27] before him) applied their stochas-
tic model to the estimation of numbers of species within
genera, but the notion of quasi-species was unknown at
the time, and it addresses a much wider range of issues
than species frequency.
0704.1812 | The LuckyCam Survey for Very Low Mass Binaries II: 13 new M4.5-M6.0 Binaries

Mon. Not. R. Astron. Soc. 000, 1–11
The LuckyCam Survey for Very Low Mass Binaries II: 13 new
M4.5-M6.0 Binaries⋆
N.M. Law1,2†, S.T. Hodgkin2 and C.D. Mackay2
1Department of Astronomy, Mail Code 105-24, California Institute of Technology, 1200 East California Blvd., Pasadena, CA 91125, USA
2Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK
ABSTRACT
We present results from a high-angular-resolution survey of 78 very low mass (VLM) stars
with 6.0 ≤ V-K colour ≤ 7.5 and proper motion > 0.15 arcsec/yr. Twenty-one VLM
binaries were detected, 13 of them new discoveries. The new binary systems range in separa-
tion between 0.18 arcsec and 1.3 arcsec. The distance-corrected binary fraction is 13.5^{+6.5}_{-4}%,
in agreement with previous results. Nine of the new binary systems have orbital radii > 10 AU,
including a new wide VLM binary with 27 AU projected orbital separation. One of the new
systems forms two components of a 2300 AU separation triple system. We find that the orbital
radius distribution of the binaries with V-K < 6.5 in this survey appears to be different from
that of redder (lower-mass) objects, suggesting a possible rapid change in the orbital radius
distribution at around the M5 spectral type. The target sample was also selected to investigate
X-ray activity among VLM binaries. There is no detectable correlation between excess X-Ray
emission and the frequency and binary properties of the VLM systems.
Key words: Binaries: close - Stars: low-mass, brown dwarfs - Instrumentation: high angular
resolution - Methods: observational - Techniques: high angular resolution
1 INTRODUCTION
Multiple star systems offer a powerful way to constrain the pro-
cesses of star formation. The distributions of companion masses,
orbital radii and thus binding energies provide important clues to
the systems’ formation processes. In addition, binaries provide us
with a method of directly determining the masses of the stars in
the systems. This is fundamental to the calibration of the mass-
luminosity relation (Henry & McCarthy 1993; Henry et al. 1999;
Ségransan et al. 2000).
A number of recent studies have tested the stellar multiplic-
ity fraction of low-mass and very-low-mass (VLM) stars. The
fraction of known directly-imaged companions to very-low-mass
stars is much lower than that of early M-dwarfs and solar type
stars. Around 57% of solar-type stars (F7–G9) have known stellar
companions (Abt & Levy 1976; Duquennoy & Mayor 1991), while
imaging and radial velocity surveys of early M dwarfs suggest that
between 25% & 42% have companions (Henry & McCarthy 1990;
Fischer & Marcy 1992; Leinert et al. 1997; Reid & Gizis 1997).
For M6–L1 primary spectral types direct imaging studies find bi-
nary fractions of only 10–20% (Close et al. 2003; Siegler et al.
⋆ Based on observations made with the Nordic Optical Telescope, operated
on the island of La Palma jointly by Denmark, Finland, Iceland, Norway,
and Sweden, in the Spanish Observatorio del Roque de los Muchachos of
the Instituto de Astrofisica de Canarias.
† E-mail: [email protected]
2005; Law et al. 2006; Montagnier et al. 2006), and similar binary
fractions have been found for still later spectral types (Bouy et al.
2003; Gizis et al. 2003; Burgasser et al. 2003). Recent radial-
velocity work has, however, suggested that a large fraction of ap-
parently single VLM stars are actually very close doubles, and the
VLM multiplicity fraction may thus be comparable to higher mass
stars (Jeffries & Maxted 2005; Basri & Reiners 2006).
Very low mass M, L and T systems appear to have a tighter
and closer distribution of orbital separations, peaking at around
4 AU compared to 30 AU for G dwarfs (Close et al. 2003). How-
ever, the relatively few known field VLM binaries limit the sta-
tistical analysis of the distribution, in particular for studying the
frequency of the rare large-orbital-radii systems which offer strong
constraints on some formation theories (eg. Bate & Bonnell 2005;
Phan-Bao et al. 2005; Law et al. 2006; Close et al. 2006; Caballero
2007; Artigau et al. 2007).
We have been engaged in a programme to image a large
and carefully selected sample of VLM stars, targeting separations
greater than 1 AU (Law et al. 2005, 2006). The programme has
yielded a total of 18 new VLM binary systems, where VLM is de-
fined as a primary mass <0.11 M⊙. This paper presents the second
of the surveys, targeting field stars in the range M4.5–M6.0. The
spectral type range of this survey is designed to probe the transition
between the properties of the 30 AU median-radius binaries of the
early M-dwarfs and the 4 AU median-radius late M-dwarf binaries.
We observed 78 field M-dwarf targets with estimated spec-
tral types between M4.5 and M6.0, searching for companions with
http://arxiv.org/abs/0704.1812v1
separations between 0.1 and 2.0 arcsec. The surveyed primary stel-
lar masses range from 0.089 M⊙ to 0.11 M⊙ using the models in
Baraffe et al. (1998).
It has been suggested in Makarov (2002) that F & G field
stars detected in the ROSAT Bright Source Catalogue are 2.4 times
more likely to be members of wide (> 0.3 arcsec) multiple sys-
tems than those not detected in X-Rays. There is also a well-known
correlation between activity and stellar rotation rates (eg. Simon
1990; Soderblom et al. 1993; Terndrup et al. 2002). A correlation
between binarity and rotation rate would thus be detectable as a
correlation between activity and binarity. To test these ideas, we di-
vided our targets into two approximately equal numbered samples
on the basis of X-ray activity.
All observations used LuckyCam, the Cambridge Lucky
Imaging system. The system has been demonstrated to reliably
achieve diffraction-limited images in I-band on 2.5m telescopes
(Law 2007; Law et al. 2006; Mackay et al. 2004; Tubbs et al. 2002;
Baldwin et al. 2001). A Lucky Imaging system takes many rapid
short-exposure images, typically at 20-30 frames per second. The
turbulence statistics are such that a high-quality, near-diffraction-
limited frame is recorded a few percent of the time; in Lucky Imag-
ing only those frames are aligned and co-added to produce a final
high-resolution image. Lucky Imaging is an entirely passive pro-
cess, and thus introduces no extra time overheads beyond those re-
quired for standard CCD camera observations. The system is thus
very well suited to rapid high-angular-resolution surveys of large
numbers of targets.
In section 2 we describe the survey sample and the X-Ray
activity selection. Section 3 describes the observations and their
sensitivity. Section 4 describes the properties of the 13 new VLM
binaries, and section 5 discusses the results.
2 THE SAMPLE
We selected a magnitude and colour limited sample of nearby late
M-dwarfs from the LSPM-North High Proper motion catalogue
(Lépine & Shara 2005). The LSPM-North catalogue is a survey of
the Northern sky for stars with annual proper motions greater than
0.15”/year. Most stars in the catalogue are listed with both 2MASS
IR photometry and V-band magnitudes estimated from the photo-
graphic BJ and RF bands.
The LSPM-North high proper motion cut ensures that all stars
are relatively nearby, and thus removes contaminating distant giant
stars from the sample. We cut the LSPM catalogue to include only
stars with V-K colour > 6 and ≤ 7.5, and K-magnitude brighter than
10. The colour cut selects approximately M4.5 to M6.0 stars; its
effectiveness is confirmed in Law et al. (2006).
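The sample cuts can be expressed as a simple filter. This is an illustrative sketch with hypothetical field names ('pm' in arcsec/yr, 'V', 'K'); the actual LSPM-North column names differ:

```python
def select_sample(catalogue):
    """Apply the survey's selection: proper motion > 0.15 arcsec/yr
    (guaranteed by LSPM membership), V-K colour > 6 and <= 7.5,
    and 2MASS K brighter than magnitude 10."""
    return [star for star in catalogue
            if star['pm'] > 0.15
            and 6.0 < star['V'] - star['K'] <= 7.5
            and star['K'] < 10.0]
```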
2.1 X-ray selection
After the colour and magnitude cuts the sample contained 231 late
M-dwarfs. We then divide the stars into two target lists on the basis
of X-ray activity. We mark a star as X-ray active if the target star
has a ROSAT All-Sky Survey detection from the Faint Source Cat-
alogue (Voges 2000) or the Bright Source catalogue (Voges 1999)
within 1.5× the 1σ uncertainty in the X-ray position. Known or
high-probability non-stellar X-Ray associations noted in the QORG
catalogue of radio/X-ray sources (Flesch & Hardcastle 2004) are
removed. Finally, we manually checked the Digitized Sky Survey
(DSS) field around each star to remove those stars which did not
show an unambiguous association with the position of the X-ray
Figure 1. The 2MASS K-magnitude, V-K colour and distance distributions
of the X-ray-active and non-X-ray-active samples. Distances are estimated
from the LSPM V-K colours of the samples and the V-K photometric ab-
solute magnitude relations detailed in Leggett (1992). The distances shown
in this figure have a precision of approximately 30%, and assume that all
targets are single stars.
detection. The completeness and biases of the X-Ray selection are
discussed in section 5.2.
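The 1.5σ positional-coincidence test can be sketched as follows. The function and field names are illustrative, not from the survey pipeline, and the small-angle separation formula assumes positions in degrees.

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation (arcsec) between two sky positions in degrees."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    return math.hypot(dra, dec1 - dec2) * 3600.0

def xray_associated(star_pos, src_pos, src_sigma_arcsec, n_sigma=1.5):
    """Accept a ROSAT association if the star lies within n_sigma of the
    X-ray position, mirroring the 1.5-sigma criterion in the text."""
    sep = angular_sep_arcsec(*star_pos, *src_pos)
    return sep <= n_sigma * src_sigma_arcsec
```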
It should be noted that the fraction of stars which show mag-
netic activity (as measured in Hα) reaches nearly 100% at a spectral
type of M7, and so the X-ray selection here picks only especially
active stars (Gizis et al. 2000; Schmitt & Liefke 2004). However,
for convenience, we here denote the stars without ROSAT evidence
for X-Ray activity as “non-X-ray active”.
One star in the remaining sample, LSPM J0336+3118, is
listed as a T-Tauri in the SIMBAD database, and was therefore re-
moved from the sample. We note that in the case of the newly de-
tected binary LSPM J0610+2234, which is ∼0.7σ away from the
ROSAT X-Ray source we associate with it, there is another bright
star at 1.5σ distance which may be the source of the X-Ray emis-
sion. GJ 376B is known to be a common-proper-motion companion
to the G star GJ 376, located at a distance of 134 arcsec (Gizis et al.
2000). Since the separation is very much greater than can be detected
in the LuckyCam survey, we treat it as a single star in the following
analysis.
2.2 Target distributions
These cuts left 51 X-ray active stars and 179 stars without evidence
for X-Ray activity. We drew roughly equal numbers of stars at random
from both of these lists to form the final observing target set
© RAS, MNRAS 000, 1–11
13 New VLM Binaries
LSPM ID Other Name K V-K Est. SpT PM/”/yr LSPM ID Other Name K V-K Est. SpT PM/”/yr
LSPM J0023+7711 LHS 1066 9.11 6.06 M4.5 0.839 LSPM J0722+7305 9.44 6.20 M4.5 0.178
LSPM J0035+0233 9.54 6.82 M5.0 0.299 LSPM J0736+0704 G 89-32 7.28 6.01 M4.5 0.383
LSPM J0259+3855 G 134-63 9.52 6.21 M4.5 0.252 LSPM J0738+4925 LHS 5126 9.70 6.34 M4.5 0.497
LSPM J0330+5413 9.28 6.92 M5.0 0.151 LSPM J0738+1829 9.81 6.58 M5.0 0.186
LSPM J0406+7916 G 248-12 9.19 6.43 M4.5 0.485 LSPM J0810+0109 9.74 6.10 M4.5 0.194
LSPM J0408+6910 G 247-12 9.40 6.08 M4.5 0.290 LSPM J0824+2555 9.70 6.10 M4.5 0.233
LSPM J0409+0546 9.74 6.34 M4.5 0.255 LSPM J0825+6902 LHS 246 9.16 6.47 M4.5 1.425
LSPM J0412+3529 9.79 6.25 M4.5 0.184 LSPM J0829+2646 V* DX Cnc 7.26 7.48 M5.5 1.272
LSPM J0414+8215 G 222-2 9.36 6.13 M4.5 0.633 LSPM J0841+5929 LHS 252 8.67 6.51 M5.0 1.311
LSPM J0417+0849 8.18 6.36 M4.5 0.405 LSPM J0849+3936 9.64 6.25 M4.5 0.513
LSPM J0420+8454 9.46 6.10 M4.5 0.279 LSPM J0858+1945 V* EI Cnc 6.89 7.04 M5.5 0.864
LSPM J0422+3900 9.67 6.10 M4.5 0.840 LSPM J0859+2918 LP 312-51 9.84 6.26 M4.5 0.434
LSPM J0439+1615 9.19 7.05 M5.5 0.797 LSPM J0900+2150 8.44 7.76 M6.5 0.782
LSPM J0501+2237 9.23 6.21 M4.5 0.248 LSPM J0929+2558 LHS 269 9.96 6.67 M5.0 1.084
LSPM J0503+2122 NLTT 14406 8.89 6.28 M4.5 0.177 LSPM J0932+2659 GJ 354.1 B 9.47 6.33 M4.5 0.277
LSPM J0546+0025 EM* RJHA 15 9.63 6.50 M4.5 0.309 LSPM J0956+2239 8.72 6.06 M4.5 0.533
LSPM J0602+4951 LHS 1809 8.44 6.20 M4.5 0.863 LSPM J1848+0741 7.91 6.72 M5.0 0.447
LSPM J0604+0741 9.78 6.15 M4.5 0.211 LSPM J2215+6613 7.89 6.02 M4.5 0.208
LSPM J0657+6219 GJ 3417 7.69 6.05 M4.5 0.611 LSPM J2227+5741 NSV 14168 4.78 6.62 M5.0 0.899
LSPM J0706+2624 9.95 6.26 M4.5 0.161 LSPM J2308+0335 9.86 6.18 M4.5 0.281
LSPM J0711+4329 LHS 1901 9.13 6.74 M5.0 0.676
Table 1. The observed non-X-ray-emitting sample. The quoted V & K magnitudes are taken from the LSPM catalogue. K magnitudes are based on 2MASS
photometry; the LSPM-North V-band photometry is estimated from photographic BJ and RF magnitudes and is thus approximate only, but is sufficient for
spectral type estimation – see section 4.2. Spectral types and distances are estimated from the V & K photometry (compared to SIMBAD spectral types) and
the young-disk photometric parallax relations described in Leggett (1992). Spectral types have a precision of approximately 0.5 spectral classes and distances
have a precision of ∼30%.
LSPM ID Other Name K V-K ST PM/as/yr ROSAT BSC/FSC ID ROSAT CPS
LSPM J0045+3347 9.31 6.50 M4.5 0.263 1RXS J004556.3+334718 2.522E-02
LSPM J0115+4702S 9.31 6.04 M4.5 0.186 1RXS J011549.5+470159 4.323E-02
LSPM J0200+1303 6.65 6.06 M4.5 2.088 1RXS J020012.5+130317 1.674E-01
LSPM J0207+6417 8.99 6.25 M4.5 0.283 1RXS J020711.8+641711 8.783E-02
LSPM J0227+5432 9.33 6.05 M4.5 0.167 1RXS J022716.4+543258 2.059E-02
LSPM J0432+0006 9.43 6.37 M4.5 0.183 1RXS J043256.1+000650 1.557E-02
LSPM J0433+2044 8.96 6.47 M4.5 0.589 1RXS J043334.8+204437 9.016E-02
LSPM J0610+2234 9.75 6.68 M5.0 0.166 1RXS J061022.8+223403 8.490E-02
LSPM J0631+4129 8.81 6.34 M4.5 0.212 1RXS J063150.6+412948 4.275E-02
LSPM J0813+7918 LHS 1993 9.13 6.07 M4.5 0.539 1RXS J081346.5+791822 1.404E-02
LSPM J0921+4330 GJ 3554 8.49 6.21 M4.5 0.319 1RXS J092149.3+433019 3.240E-02
LSPM J0953+2056 GJ 3571 8.33 6.15 M4.5 0.535 1RXS J095354.6+205636 2.356E-02
LSPM J0958+0558 9.04 6.17 M4.5 0.197 1RXS J095856.7+055802 2.484E-02
LSPM J1000+3155 GJ 376B 9.27 6.86 M5.0 0.523 1RXS J100050.9+315555 2.383E-01
LSPM J1001+8109 9.41 6.20 M4.5 0.363 1RXS J100121.0+810931 3.321E-02
LSPM J1002+4827 9.01 6.57 M5.0 0.426 1RXS J100249.7+482739 6.655E-02
LSPM J1125+4319 9.47 6.16 M4.5 0.579 1RXS J112502.7+431941 5.058E-02
LSPM J1214+0037 7.54 6.33 M4.5 0.994 1RXS J121417.5+003730 9.834E-02
LSPM J1240+1955 9.69 6.08 M4.5 0.307 1RXS J124041.4+195509 2.895E-02
LSPM J1300+0541 7.66 6.02 M4.5 0.959 1RXS J130034.2+054111 1.400E-01
LSPM J1417+3142 LP 325-15 7.61 6.19 M4.5 0.606 1RXS J141703.1+314249 1.145E-01
LSPM J1419+0254 9.07 6.29 M4.5 0.233 1RXS J141930.4+025430 2.689E-02
LSPM J1422+2352 LP 381-49 9.65 6.38 M4.5 0.248 1RXS J142220.3+235241 2.999E-02
LSPM J1549+7939 G 256-25 8.86 6.11 M4.5 0.251 1RXS J154954.7+793949 2.033E-02
LSPM J1555+3512 8.04 6.02 M4.5 0.277 1RXS J155532.2+351207 1.555E-01
LSPM J1640+6736 GJ 3971 8.95 6.91 M5.0 0.446 1RXS J164020.0+673612 7.059E-02
LSPM J1650+2227 8.31 6.38 M4.5 0.396 1RXS J165057.5+222653 6.277E-02
LSPM J1832+2030 9.76 6.28 M4.5 0.212 1RXS J183203.0+203050 1.634E-01
LSPM J1842+1354 7.55 6.28 M4.5 0.347 1RXS J184244.9+135407 1.315E-01
LSPM J1926+2426 8.73 6.37 M4.5 0.197 1RXS J192601.4+242618 1.938E-02
LSPM J1953+4424 6.85 6.63 M5.0 0.624 1RXS J195354.7+442454 1.982E-01
LSPM J2023+6710 9.17 6.60 M5.0 0.296 1RXS J202318.5+671012 2.561E-02
LSPM J2059+5303 GSC 03952-01062 9.12 6.34 M4.5 0.170 1RXS J205921.6+530330 4.892E-02
LSPM J2117+6402 9.18 6.62 M5.0 0.348 1RXS J211721.8+640241 3.628E-02
LSPM J2322+7847 9.52 6.97 M5.0 0.227 1RXS J232250.1+784749 2.631E-02
LSPM J2327+2710 9.42 6.07 M4.5 0.149 1RXS J232702.1+271039 4.356E-02
LSPM J2341+4410 5.93 6.48 M4.5 1.588 1RXS J234155.0+441047 1.772E-01
Table 2. The observed X-ray emitting sample. The star properties are estimated as described in the caption to table 1. ST is the estimated spectral type; the
ROSAT flux is given in units of counts per second.
4 N.M. Law et al.
Figure 2. The observed samples, plotted in a V/V-K colour-magnitude dia-
gram. The background distribution shows all stars in the LSPM-North cat-
alogue.
Name Ref.
GJ 3417 Henry et al. (1999)
G 89-32B Henry et al. (1997)
V* EI Cnc Gliese & Jahreiß (1991)
LP 595-21 Luyten (1997)
GJ 1245 McCarthy et al. (1988)
GJ 3928 McCarthy et al. (2001)
GJ 3839 Delfosse et al. (1999)
LHS 1901 Montagnier et al. (2006)
Table 3. The previously known binaries which were re-detected by Lucky-
Cam in this survey.
of 37 X-Ray active stars and 41 non-X-ray active stars (described
in tables 1 and 2). Four of the X-Ray active stars and four of the
non-X-ray stars were previously known to be binary systems (detailed
in table 3), but were reimaged with LuckyCam to ensure a uniform
survey sensitivity in both angular resolution and detectable com-
panion contrast ratio.
Figure 1 shows the survey targets’ distributions in K magni-
tude, V-K colour and photometrically estimated distance. Figure 2
compares the targets to the rest of the stars in the LSPM catalogue.
The X-ray and non-X-ray samples are very similar, although the
non-X-ray sample has a slightly higher median distance, at 15.4pc
rather than 12.2pc (the errors on the distance determination are
about 30%).
3 OBSERVATIONS
We imaged all 78 targets in a total of 11 hours of on-sky time in
June and November 2005, using LuckyCam on the 2.56m Nordic
Optical Telescope. Each target was observed for 100 seconds in
both i’ and the z’ filters. Most of the observations were performed
through varying cloud cover with a median extinction on the order
of three magnitudes. This did not significantly affect the imaging
performance, as all these stars are 3-4 magnitudes brighter than
the LuckyCam guide star requirements, but the sensitivity to faint
objects was reduced and no calibrated photometry was attempted.
3.1 Binary detection and photometry
Companions were detected according to the criteria described in
detail in Law et al. (2006). We required 10σ detections above both
photon and speckle noise; the detections must appear in both i’ and
z’ images. Detection is confirmed by comparison with point spread
function (PSF) reference stars imaged before and after each target.
In this case, because the observed binary fraction is only ∼30%,
other survey sample stars serve as PSF references. We measured
resolved photometry of each binary system by the fitting and sub-
traction of two identical PSFs to each image, modelled as Moffat
functions with an additional diffraction-limited core.
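A plausible radial form for such a PSF model is sketched below. The paper does not give the core's functional form, so a narrow Gaussian is used here as an illustrative stand-in for the diffraction-limited core added to the Moffat halo.

```python
import numpy as np

def psf_model(r, a_moffat, alpha, beta, a_core, core_sigma):
    """Radial PSF: Moffat halo plus a narrow Gaussian standing in for the
    diffraction-limited core. Fitting two identical copies of this profile
    to a binary image recovers the resolved photometry."""
    halo = a_moffat * (1.0 + (r / alpha) ** 2) ** (-beta)
    core = a_core * np.exp(-0.5 * (r / core_sigma) ** 2)
    return halo + core
```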
3.2 Sensitivity
The sensitivity of the survey was limited by the cloud cover. Be-
cause of the difficulty of flux calibration under very variable extinc-
tion conditions we do not give an overall survey sensitivity. How-
ever, a minimum sensitivity can be estimated. LuckyCam requires
an i’=+15.5m guide star to provide good correction; all stars in
this survey must appear to be at least that bright during the obser-
vations1. The sensitivity of the survey around a i=+15.5m star is
calculated in Law et al. (2006) and the sensitivity as a function of
companion separation is discussed in section 5.4.
The survey is also sensitive to white dwarf companions around
all stars in the sample. However, until calibrated resolved photom-
etry or spectroscopy is obtained for the systems it is not possi-
ble to distinguish between M-dwarf and white-dwarf companions.
Since a large sample of very close M-dwarf companions to white
dwarf primaries have been found spectroscopically (for example,
Delfosse et al. 1999; Raymond et al. 2003), but very few have been
resolved, it is unlikely that the companions are white dwarfs. It
will, however, be of interest to further constrain the frequency of
white-dwarf M-dwarf systems.
4 RESULTS & ANALYSIS
We found 13 new very low mass binaries. The binaries are shown
in figure 3 and the observed properties of the systems are detailed
in table 4. In addition to the new discoveries, we also confirmed
eight previously known binaries, detailed in tables 3 and 4.
4.1 Confirmation of physical association
Seven of the newly discovered binaries have moved more than one
DSS PSF-radius between the acquisition of DSS images and these
observations (table 5). With a limiting magnitude of iN ∼ +20.3m
(Gal et al. 2004), the DSS images are deep enough for clear de-
tection of all the companions found here, should they actually be
stationary background objects. None of the DSS images show an
object at the present position of the detected proposed companion,
confirming the common proper motions of these companions with
their primaries.
The other binaries require a probabilistic assessment. In the
entire LuckyCam VLM binary survey, covering a total area of
(22′′ × 14.4′′) × 122 fields, there are 10 objects which would have
1 LSPM J2023+6710 was observed through ∼5 magnitudes of cloud, much
more than any other target in the survey, and was therefore too faint for
good Lucky Imaging performance. However, its bright companion is at 0.9
arcsec separation and so was easily detected.
[Figure 3 image panels: LSPM J0035+0233, LSPM J0409+0546, NLTT 14406, LSPM J0610+2234, LHS 1901, LHS 5126, LP 312-51, LSPM J0045+3347, LSPM J0115+4702, LSPM J0227+5432, G 134-63, GJ 3554, LSPM J2023+6710, LSPM J1832+2030]
Figure 3. The newly discovered binaries. All images are orientated with North up and East to the left. The images are the results of a Lucky Imaging selection
of the best 10% of the frames taken in i’, with the following exceptions: LSPM J0409+0546, LSPM J0610+2234 and LP 312-51 are presented in the z’
band, as the cloud extinction was very large during their i’ observations. The image of LSPM LHS 5126 uses the best 2% of the frames taken and LSPM
J0115+4702S uses the best 1%, to improve the light concentration of the secondary. LSPM J2023+6710 was observed through more than 5 magnitudes of
cloud extinction, and was thus too faint for Lucky Imaging; a summed image with each frame centroid-centred is presented here, showing clear binarity. LHS
1901 was independently found by Montagnier et al. (2006) during a similar M-dwarf survey. We present our image here to confirm its binarity.
been detected as companions if they had happened to be close to the
target star. One of the detected objects is a known wide common
proper motion companion; others are due to random alignments.
For the purposes of this calculation we assume that all detected
widely separated objects are not physically associated with the tar-
get stars.
Limiting the detection radius to 2 arcsec around the target star
(we confirm wider binaries by testing for common proper motion
against DSS images) 0.026 random alignments would be expected
in our dataset. This corresponds to a probability of only 2.5 per
cent that one or more of the apparent binaries detected here is a
chance alignment of the stars. We thus conclude that all the detected
binaries are physically associated systems.
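The quoted probability follows from Poisson statistics on the expected number of random alignments:

```python
import math

mu = 0.026  # expected chance alignments within 2 arcsec (from the text)
p_one_or_more = 1.0 - math.exp(-mu)
# ~0.026, consistent with the ~2.5 per cent quoted in the text
```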
4.2 Constraints on the nature of the target stars
Clouds unfortunately prevented calibrated resolved photometry for
the VLM systems. However, unresolved V & K-band photometry
listed in the LSPM survey gives useful constraints on the spectral
types of the targets. About one third of the sample has a listed spec-
tral type in the SIMBAD database (from Jaschek 1978). To obtain
estimated spectral types for the VLM binary systems, we fit the
LSPM V-K colours to those spectral types. The fit has a 1σ un-
certainty of ∼0.5 spectral types. The colour-magnitude relations in
Leggett (1992) show the unresolved system colour is dominated
by the primary for all M2–M9 combinations of primary and sec-
ondary. We then estimate the secondaries’ spectral types by: 1/ as-
suming the estimated primary spectral type to be the true value and
2/ using the spectral type vs. i’ and z’ absolute magnitude relations
in Hawley et al. (2002) to estimate the difference in spectral types
between the primary and secondary. This procedure gives useful
constraints on the nature of the systems, although resolved spec-
troscopy is required for definitive determinations.
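The colour-to-spectral-type calibration can be sketched as a simple linear fit. The calibration pairs below are hypothetical stand-ins for the SIMBAD-typed third of the sample; the paper's actual fit has a 1σ scatter of ~0.5 subtypes.

```python
import numpy as np

# Hypothetical (V-K, M subtype) calibration pairs, standing in for the
# ~1/3 of the sample with SIMBAD spectral types.
colours = np.array([6.0, 6.3, 6.6, 7.0, 7.5])
subtypes = np.array([4.5, 4.5, 5.0, 5.5, 6.0])  # M4.5 ... M6.0

coeffs = np.polyfit(colours, subtypes, 1)  # linear colour -> subtype relation
est_subtype = np.polyval(coeffs, 6.8)      # e.g. V-K = 6.8 -> roughly M5
```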
Name ∆i′ ∆z′ Sep. (arcsec) P.A. (deg) Epoch X-ray emitter?
LSPM J0035+0233 1.30 ± 0.30 · · · 0.446 ± 0.01 14.3 ± 1.4 2005.9
LSPM J0409+0546 < 1.5 < 1.5 0.247 ± 0.01 40.0 ± 3.2 2005.9
NLTT 14406 1.30 ± 0.30 0.77 ± 0.30 0.310 ± 0.01 351.6 ± 1.1 2005.9
LSPM J0610+2234 < 1.0 < 1.0 0.255 ± 0.01 268.4 ± 2.7 2005.9 *
LHS 5126 0.50 ± 0.20 0.50 ± 0.30 0.256 ± 0.02 235.1 ± 3.4 2005.9
LP 312-51 0.74 ± 0.10 0.51 ± 0.10 0.716 ± 0.01 120.5 ± 1.1 2005.9
LSPM J0045+3347 0.80 ± 0.35 0.77 ± 0.35 0.262 ± 0.01 37.6 ± 1.9 2005.9 *
LSPM J0115+4702S 0.55 ± 0.25 0.73 ± 0.25 0.272 ± 0.01 249.8 ± 1.3 2005.9 *
LSPM J0227+5432 0.60 ± 0.10 0.59 ± 0.10 0.677 ± 0.01 275.8 ± 1.1 2005.9 *
G 134-63 1.55 ± 0.10 1.35 ± 0.10 0.897 ± 0.01 13.6 ± 1.1 2005.9
GJ 3554 0.51 ± 0.20 0.57 ± 0.20 0.579 ± 0.01 44.0 ± 1.1 2005.9 *
LSPM J2023+6710 0.55 ± 0.20 · · · 0.900 ± 0.15 232.5 ± 3.2 2005.9 *
LSPM J1832+2030 0.48 ± 0.10 0.45 ± 0.10 1.303 ± 0.01 20.6 ± 1.1 2005.4 *
GJ 3417 1.66 ± 0.10 1.42 ± 0.10 1.526 ± 0.01 −39.8 ± 1.0 2005.9
LHS 1901 1.30 ± 0.70 1.30 ± 0.70 0.177 ± 0.01 51.4 ± 1.6 2005.9
G 89-32 0.43 ± 0.10 0.38 ± 0.10 0.898 ± 0.01 61.3 ± 1.0 2005.9
V* EI Cnc 0.62 ± 0.10 0.49 ± 0.10 1.391 ± 0.01 76.6 ± 1.0 2005.9
LP 595-21 0.74 ± 0.10 0.60 ± 0.10 4.664 ± 0.01 80.9 ± 1.0 2005.9 *
GJ 1245AC 2.95 ± 0.20 2.16 ± 0.20 1.010 ± 0.01 −11.3 ± 1.0 2005.4 *
GJ 3928 2.32 ± 0.20 2.21 ± 0.20 1.556 ± 0.01 −10.7 ± 1.0 2005.4 *
LP 325-15 0.36 ± 0.10 0.33 ± 0.10 0.694 ± 0.01 −21.5 ± 1.0 2005.4 *
Table 4. The observed properties of the detected binaries. The top group are stars with newly detected companions; the bottom group are the previously known
systems. LSPM J0409+0546 and LSPM J0610+2234 were observed though thick cloud and in poor seeing, and so only upper limits on the contrast ratio are
given. LSPM J2023+6710 was not observed in z’, and cloud prevented useful z’ observations of LSPM J0035+0233.
LSPM ID Years since DSS obs. Dist. moved
1RXS J004556.3+334718 16.2 4.3”
G 134-63 16.2 4.1”
NLTT 14406 19.1 3.4”
LHS 5126 6.8 3.4”
LP 312-51 7.6 3.3”
GJ 3554 15.8 5.0”
LSPM J2023+6710 14.2 4.2”
Table 5. The newly discovered binaries which have moved far enough since
DSS observations to allow confirmation of the common proper motion of
their companions.
4.3 Distances
The measurement of the distances to the detected binaries is a vital
step in the determination of the orbital radii of the systems. None of
the newly discovered binaries presented here has a measured par-
allax (although four2 of the previously known systems do) and cal-
2 G 132-25 (NLTT 2511) is listed in Reid & Cruz (2002) and the SIM-
BAD database as having a trigonometric parallax of 14.7 ± 4.0 mas,
based on the Yale General Catalogue of Trigonometric Stellar Parallaxes
(van Altena et al. 2001). However, this appears to be a misidentification, as
the star is not listed in the Yale catalogue. The closest star listed, which
does have the parallax stated for G 132-25 in Reid & Cruz (2002), is LP
294-2 (NLTT 2532). This star has a very different proper motion speed and
direction to G 132-25 (0.886 arcsec/yr vs. 0.258 arcsec/yr in the LSPM cat-
alogue & SIMBAD). In addition, the G 132-25 LSPM V and K photometry
is inconsistent with that of an M-dwarf at a distance of 68pc. We thus do
not use the stated parallax for G 132-25.
ibrated resolved photometry is not available for almost all the sys-
tems. We therefore calculate distances to the newly discovered sys-
tems using the V-K colour-absolute magnitude relations described
in Leggett (1992). Calculation of the distances in this manner re-
quires care, as the V and K-band photometry is unresolved, and so
two luminous bodies contribute to the observed colours and mag-
nitudes.
The estimated distances to the systems, and the resulting or-
bital separations, are given in table 6. The stated 1σ distance ranges
include the following contributions:
• A 0.6 magnitude Gaussian-distributed uncertainty in the V-K
colour of the system (a combination of the colour uncertainty noted
in the LSPM catalogue and the maximum change in the V-K colour
of the primary induced by a close companion).
• A 0.3 magnitude Gaussian-distributed uncertainty in the abso-
lute K-band magnitude of the system, from the uncertainty in the
colour-absolute magnitude relations from Leggett 1992.
• A 0.75 magnitude flat-distributed uncertainty in the absolute
K-band magnitude of the system, to account for the unknown K-
band contrast ratio of the binary system.
The resulting distances have 1σ errors of approximately 35%,
with a tail towards larger distances due to the K-band contrast ratio
uncertainties.
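The error budget above lends itself to a Monte Carlo sketch. The absolute-magnitude relation is passed in as a callable (a fit to the Leggett 1992 tables in the paper; a toy linear relation in the test), and the sign of the flat contrast-ratio term is an assumption chosen to reproduce the tail towards larger distances.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_samples(K, V_minus_K, abs_mag_relation, n=100_000):
    """Monte Carlo photometric distance, mirroring the error budget in the
    text: 0.6 mag Gaussian on V-K, 0.3 mag Gaussian on the colour-absolute
    magnitude relation, and a 0.75 mag flat term for the unknown K-band
    contrast ratio. Returns the 16th/50th/84th distance percentiles in pc."""
    colour = V_minus_K + rng.normal(0.0, 0.6, n)
    M_K = abs_mag_relation(colour) + rng.normal(0.0, 0.3, n)
    M_K -= rng.uniform(0.0, 0.75, n)     # secondary brightens the system K
    d = 10.0 ** (0.2 * (K - M_K) + 1.0)  # inverted distance modulus, in pc
    return np.percentile(d, [16, 50, 84])
```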
4.4 NLTT 14406 – A Newly Discovered Triple System
We found NLTT 14406 to have a 0.31 arcsec separation companion.
NLTT 14406 is identified with LP 359-186 in the NLTT catalogue
(Luyten 1995), although it is not listed in the revised NLTT cat-
alogue (Salim & Gould 2003). LP 359-186 is a component of the
common-proper-motion (CPM) binary LDS 6160 (Luyten 1997),
with the primary being LP 359-216 (NLTT 14412), 167 arcsec dis-
tant and listed in the SIMBAD database as a M2.5 dwarf.
Name Parallax / mas Distance / pc Orbital rad. / AU Prim. ST (est.) Sec. ST (est.)
LSPM J0035+0233    ···            14.5 +6.3/−2.4     6.8 +···/−1.0      M5.0   M6.0
LSPM J0409+0546    ···            19.9 +9.1/−3.8     4.9 +···/−0.7      M4.5   ≤M6.0
NLTT 14406         ···            13.7 +6.5/−···     4.4 +2.3/−0.7      M4.5   M5.5
LSPM J0610+2234    ···            17.0 +7.5/−2.9     4.6 +···/−0.8      M5.0   ≤M6.0
LHS 5126           ···            19.5 +8.9/−3.7     4.9 +···/−0.6      M4.5   M5.0
LP 312-51          ···            21.5 +10.1/−4.0    16.1 +···/−2.7     M4.5   M5.0
LSPM J0045+3347    ···            14.9 +7.0/−2.6     4.0 +···/−0.6      M4.5   M5.5
LSPM J0115+4702S   ···            18.7 +9.3/−3.6     5.2 +···/−0.9      M4.5   M5.0
LSPM J0227+5432    ···            18.6 +9.5/−3.4     13.2 +···/−2.2     M4.5   M5.0
G 134-63           ···            18.8 +9.3/−3.4     17.6 +···/−2.8     M4.5   M5.5
GJ 3554            ···            11.8 +5.6/−2.2     7.1 +···/−1.2      M4.5   M4.5
LSPM J2023+6710    ···            13.6 +5.9/−2.5     12.8 +···/−2.6     M5.0   M5.0
LSPM J1832+2030    ···            20.6 +9.6/−3.9     27.0 +14.6/−4.0    M4.5   M5.0
GJ 3417            87.4 ± 2.3     11.4 +0.3/−0.3     17.5 +···/−0.5     M4.5   M6.5
G 89-32            ···            7.3 +3.9/−1.3      6.5 +···/−1.1      M4.5   M5.0
LHS 1901           ···            12.3 +5.6/−2.0     2.3 +···/−0.4      M4.5   M6.0
V* EI Cnc          191.2 ± 2.5    5.23 +0.07/−0.07   7.27 +0.11/−0.11   M5.5   M6.0
LP 595-21          ···            16.5 +8.2/−2.7     76.2 +38.7/−11.8   M4.5   M5.5
GJ 1245AC          220.2 ± 1.0    4.54 +0.02/−0.02   4.6 +0.05/−0.05    M5.0   M8.5
GJ 3928            ···            10.2 +5.6/−1.7     15.7 +···/−2.5     M4.5   M6.5
LP 325-15          62.2 ± 13.0    16.1 +3.4/−3.4     11.2 +···/−2.4     M4.5   M4.5
Table 6. The derived properties of the binary systems. The top group are stars with newly detected companions; the bottom group are the previously known
binaries. All parallaxes are from the Yale General Catalogue of Trigonometric Stellar Parallaxes (van Altena et al. 2001). Distances and orbital radii are
estimated as noted in the text; the stated errors are 1σ. The primaries’ spectral types have a 1σ uncertainty of ∼0.5 subtypes (section 4.2); the difference in
spectral types is accurate to ∼0.5 spectral subtypes.
The identification of these high proper motion stars can be
occasionally problematic when working over long time baselines.
As a confirmatory check, the LSPM-listed proper motion speeds
and directions of these candidate CPM stars agree to within 1σ
(using the stated LSPM proper motion errors).
In the LSPM catalogue, the two stars are separated by 166.3
arcsec at the J2000.0 epoch. We thus identify our newly discovered
4.4 AU separation companion to NLTT 14406 as a member of a
triple system with an M2.5 primary located at a separation of
2280 +1080/−420 AU.
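The wide-companion separation is simple arithmetic: by the definition of the parsec, one arcsecond at one parsec subtends one AU in projection.

```python
sep_arcsec = 166.3   # LSPM J2000.0 epoch separation from the text
distance_pc = 13.7   # photometric distance estimate for NLTT 14406
projected_sep_au = sep_arcsec * distance_pc
# ~2278 AU, i.e. the ~2280 AU projected separation quoted for the CPM primary
```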
5 DISCUSSION
5.1 The binary frequency of stars in this survey
We detected 13 new binaries in a sample of 78 VLM stars, as well
as a further 8 previously known binaries. This corresponds to a binary
fraction of 26.9 +5.5/−4.4%, assuming Poisson errors. However, the
binaries in our sample are brighter on average than single stars of
the same colour and so were selected from a larger volume. Correcting
for this, assuming a range of contrast ratio distributions between all
binaries being equal magnitude and all contrast ratios being equally
likely (Burgasser et al. 2003), we find a distance-corrected binary
fraction of 13.5 +6.5/−4%.
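The quoted asymmetric errors come from Poisson confidence intervals on the binary count; a simple symmetric √N approximation, sketched below, reproduces the scale.

```python
import math

n_binary, n_total = 21, 78           # 13 new + 8 known binaries in 78 targets
fraction = n_binary / n_total        # ~0.269, the 26.9% quoted
poisson_err = math.sqrt(n_binary) / n_total  # ~0.059, symmetric approximation
```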
However, because the binaries are more distant on average
than the single stars in this survey, they also have a lower aver-
age proper motion. The LSPM proper motion cut will thus pref-
erentially remove binaries from a sample which is purely selected
on magnitude and colour. The above correction factor for the in-
creased magnitude of binary systems does not include this effect,
and so will underestimate the true binary fraction of the survey.
5.2 Biases in the X-ray sample
Before testing for correlations between X-ray emission and binary
parameters, it is important to assess the biases introduced in the
selection of the X-ray sample. The X-ray flux assignment criteria
described in section 2.1 are conservative. To reduce false associa-
tions, the X-ray source must appear within 1.5σ of the candidate
star, which implies that ∼13% of true associations are rejected. The
requirement for an unambiguous association will also reject some
fraction of actual X-ray emitters (10% of the candidate emitting
systems were rejected on this basis). The non-X-ray emitting sam-
ple will thus contain some systems that do actually meet the X-ray
flux-emitting limit.
The X-ray source detection itself, which cuts only on the de-
tection limit in the ROSAT Faint Source catalogue, is biased both
towards some sky regions (the ROSAT All-Sky Survey does not
have uniform exposure time (Voges 1999)) and towards closer stars.
However, these biases have only a small effect: all but three of the
target stars fall within the relatively constant-exposure area of the
ROSAT survey, where the brightness-cutoff is constant to within
Figure 4. The fraction of stellar luminosity which appears as X-Ray emis-
sion. Empty circles denote single stars; filled circles denote the binaries
detected in this survey. No binarity correction is made to either the X-Ray
flux or K-magnitude. The two high points are likely to be due to flaring.
about 50%. The samples also do not show a large bias in distance
– the X-ray stars’ median distance is only about 10% smaller than
that of the non-X-ray sample (figure 1).
Finally, the X-Ray active stars also represent a different stel-
lar population from the non-active sample. In particular, the X-ray
active stars are more likely to be young (eg. Jeffries (1999) and ref-
erences therein). It may thus be difficult to disentangle the biases
introduced by selecting young stars from those intrinsic to the pop-
ulation of X-ray emitting older stars. As the results below show,
there are no detectable correlations between binarity and X-ray
emission. If correlations are detected in larger samples, constraints
on the ages of the targets would have to be found to investigate the
causes of the correlations.
5.3 Is X-ray activity an indicator of binarity?
11 of the 21 detected binaries are X-ray active. The non-distance-
corrected binary fraction of X-Ray active targets in our survey is
thus 30 +8/−6%, and that of non-X-ray-active targets is 24 +···/−5%.
X-Ray
activity therefore does not appear to be an indicator of binarity.
The fraction of the X-ray target’s bolometric luminosity which
is in the X-Ray emission (Lx/Lbol) is shown in figure 4, and again
no correlation with binarity is found. The two targets with very
large Lx/Lbol (GJ 376B and LSPM J1832+2030) are listed as flar-
ing sources in Fuhrmeister & Schmitt (2003) and thus were prob-
ably observed during flare events (although Gizis et al. (2000) ar-
gues that GJ 376B is simply very active).
This contrasts with the 2.4 times higher binarity among the
similarly-selected sample of F & G type X-ray active stars in
Makarov (2002). However, the binary fractions themselves are very
similar, with a 31% binary fraction among X-ray active F & G stars,
compared with 13% for X-ray mute F & G stars. Since the fraction
of stars showing X-Ray activity increases towards later types, it
is possible that the Makarov sample preferentially selects systems
containing an X-ray emitting late M-dwarf. However, most of the
stellar components detected in Makarov (2002) are F & G types.
The much longer spin-down timescales of late M-dwarfs,
in combination with the rotation-activity paradigm, may ex-
plain the lack of activity-binarity correlation in late M-dwarfs.
Delfosse et al. (1998) show that young disk M dwarfs with spectral
types later than around M3 are still relatively rapidly rotating (with
vsini’s up to 40 km/s and even 60 km/s in one case), while earlier
spectral types do not have detectable rotation periods to the limit of
their sensitivity (around 2 km/s). Indeed solar type stars spin down
Figure 5. The i-band contrast ratios of the detected binaries, plotted as a
function of binary separation in AU. For reasons of clarity, the 76AU binary
and the contrast ratio errorbars (table 4) have been omitted. Filled circles
are X-ray emitters.
Figure 6. The histogram distribution of the orbital radii of the detected
binaries in the sample.
on relatively short timescales, for example in the 200 Myr old open
cluster M34 Irwin et al. (2006) find that the majority of solar type
stars have spun down to periods of around 7 days (Vrot ∼ 7 km/sec).
The M-dwarfs thus have a high probability of fast rotation and thus
activity even when single, which could wash out any obvious binarity
correlation with X-ray activity.
5.4 Contrast ratios
In common with previous surveys, the new systems have low con-
trast ratios. All but two of the detected systems have contrast ratios
<1.7 mags. This is well above the survey sensitivity limits, as illus-
trated by the two binaries detected at much larger contrast ratios.
Although those two systems are at larger radii, they would have
been detected around most targets in the survey at as close as 0.2-
0.3 arcsec.
It is difficult to obtain good constraints on the mass contrast
ratio for these systems because of the lack of calibrated photome-
try, and so we leave the determination of the individual component
masses for future work. However, we note that an interesting fea-
ture of the sample is that no binaries with contrast ratios consistent
with equal mass stars are detected.
There is no obvious correlation between the orbital radius and
the i-band contrast ratios, nor between X-ray emission and the con-
trast ratios (figure 5).
5.5 The distribution of orbital radii
Early M-dwarf and G-dwarf binaries have a broad orbital
radius peak of around 30 AU (Fischer & Marcy 1992;
Duquennoy & Mayor 1991), while late M-dwarfs have a peak at
around 4 AU (eg. Close et al. 2005). Our survey covers a narrow
Figure 7. Orbital radius in the detected binaries as a function of colour. V-
K=6 corresponds approximately to M4.5, and V-K=7 to M5.5. Filled circles
are X-ray emitters. For clarity, the ∼0.3 mags horizontal error bars have
been omitted. There is no obvious correlation between X-ray emission and
orbital radius.
(0.02M⊙) mass range in the region between the two populations
and so allows us to test the rapidity of the transition in binary prop-
erties.
The orbital radius distribution derived in this survey (figure
6) replicates the previously known VLM-star 4 AU orbital radius
peak. However, 9 of the 21 systems are at a projected separation of
more than 10 AU. These wide VLM binaries are known to be rare
– for example, in the 36 M6-M7.5 M-dwarf sample of Siegler et al.
(2005) 5 binaries are detected but none are found to be wider than
10 AU.
To test for a rapid transition between the low-mass and very-
low-mass binary properties in the mass range covered by our sur-
vey, we supplemented the V-K > 6.5 systems from the LuckyCam
sample with the known VLM binaries from the Very Low Mass Bi-
naries archive3 (which, due to a different mass cut, all have a lower
system mass than the LuckyCam sample). To reduce selection ef-
fects from the instrumental resolution cut-offs we only considered
VLM binaries with orbital radius > 3.0 AU.
The resulting cumulative probability distributions are shown
in figure 8. There is a deficit in wider systems in the redder sample
compared to the more massive, bluer systems. A K-S test between
the two orbital radius distributions gives an 8% probability that they
are derived from the same underlying distribution. This suggests a
possibly rapid change in the incidence of systems with orbital radii
> 10AU, at around the M5-M5.5 spectral type. However, confirma-
tion of the rapid change will require a larger number of binaries and
a more precise mass determination for each system.
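The two-sample comparison described above can be sketched directly: the K-S statistic is the maximum distance between the two empirical CDFs, and the p-value follows from the standard asymptotic Kolmogorov series. The orbital radii below are invented for illustration only, not the survey's measured values:

```python
import math

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic D and asymptotic p-value."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        # empirical CDFs F_a(x) and F_b(x)
        fa = sum(1 for v in a if v <= x) / n
        fb = sum(1 for v in b if v <= x) / m
        d = max(d, abs(fa - fb))
    # asymptotic Kolmogorov distribution (approximate for small samples)
    lam = d * math.sqrt(n * m / (n + m))
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

# Illustrative (invented) orbital radii in AU: a bluer sample with a tail of
# wide systems versus a redder sample concentrated below ~10 AU.
blue = [3.2, 4.1, 5.0, 6.8, 9.5, 12.0, 15.5, 21.0, 27.0, 33.0, 45.0]
red = [3.1, 3.6, 4.2, 4.8, 5.5, 6.1, 7.0, 8.3, 9.1, 10.5]
D, p_value = ks_two_sample(blue, red)
```

With samples this small the asymptotic p-value is only approximate, which is one reason a larger number of binaries is needed to confirm the transition.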
5.6 The LuckyCam surveys in the context of formation
mechanisms
VLM star formation is currently usually modelled as fragmentation
of the initial molecular cloud core followed by ejection of the low
mass stellar embryos before mass accretion has completed – the
ejection hypothesis (Reipurth & Clarke 2001). Multiple systems
formed by fragmentation cannot be smaller than ∼10 AU because of the opacity limit (e.g. Boss 1988), although closer binaries can
3 collated by Nick Siegler; VLM there is defined at the slightly lower cutoff
of total system mass of < 0.2M⊙
Figure 8. The cumulative distribution of orbital radii (0-50 AU) of the detected binaries in the sample with V-K < 6.5 (dashed line). The solid line shows those with V-K > 6.5, with the addition of the full sample of known VLM binaries with total system masses < 0.2M⊙, collated by Siegler. Neither distribution reaches a fraction of 1.0 because of a small number of binaries wider than 50 AU.
be formed by dynamical interactions and orbital decay (Bate et al.
2002).
The ejection hypothesis predicted binary frequency is about
8% (Bate & Bonnell 2005); few very close (< 3AU) binaries are
expected (Umbreit et al. 2005) without appealing to orbital decay.
Few wide binaries with low binding energies are expected to sur-
vive the ejection, although recent models produce some systems
wider than 20AU when two stars are ejected simultaneously in the
same direction (Bate & Bonnell 2005). The orbital radius distribution predicted by the standard ejection hypothesis is thus rather tight and centered
at about 3-5 AU, although its width can be enlarged by appealing
to the above additional effects.
The LuckyCam VLM binary surveys (this work and Law et al.
2006) found several wide binary systems, with 11 of the 24 de-
tected systems at more than 10 AU orbital radius and 3 at more
than 20 AU. With the latest models, the ejection hypothesis can-
not be ruled out by these observations, and indeed (as suggested
in Bate & Bonnell 2005) the frequency of wider systems will be
very useful for constraining more sophisticated models capable
of predicting the frequency in detail. The observed distance-bias-
corrected binary frequency in the LuckyCam survey is consistent
with the ejection hypothesis models, but may be inconsistent when
the number of very close binaries undetected in the surveys is taken
into account (Maxted & Jeffries 2005; Jeffries & Maxted 2005).
For fragmentation to reproduce the observed orbital radius dis-
tribution, including the likely number of very close systems, dy-
namical interactions and orbital decay must be very important pro-
cesses. However, SPH models also predict very low numbers of
close binaries. An alternate mechanism for the production of the
closest binaries is thus required (Jeffries & Maxted 2005), as well
as modelling constraints to test against the observed numbers of
wider binaries. Radial velocity observations of the LuckyCam sam-
ples to test for much closer systems would offer a very useful in-
sight into the full orbital radius distribution that must be reproduced
by the models.
6 CONCLUSIONS
We found 21 very low mass binary systems in a 78 star sample,
including one close binary in a 2300 AU wide triple system and
10 N.M. Law et al.
one VLM system with a 27 AU orbital radius. 13 of the binary sys-
tems are new discoveries. All of the new systems are significantly
fainter than the previously known close systems in the sample. The
distance-corrected binary fraction is 13.5 (+6.5/−4)%, in agreement with
previous results. There is no detectable correlation between X-Ray
emission and binarity. The orbital radius distribution of the binaries
appears to show characteristics of both the late and early M-dwarf
distributions, with 9 systems having an orbital radius of more than
10 AU. We find that the orbital radius distribution of the binaries
with V-K < 6.5 in this survey appears to be different from that of
lower-mass objects, suggesting a possible sharp cutoff in the num-
ber of binaries wider than 10 AU at about the M5 spectral type.
ACKNOWLEDGEMENTS
The authors would like to particularly thank the staff members at
the Nordic Optical Telescope. We would also like to thank John
Baldwin and Peter Warner for many helpful discussions. NML ac-
knowledges support from the UK Particle Physics and Astronomy
Research Council (PPARC). This research has made use of the
SIMBAD database, operated at CDS, Strasbourg, France. We also
made use of NASA’s Astrophysics Data System Bibliographic Ser-
vices.
REFERENCES
Abt H. A., Levy S. G., 1976, ApJS, 30, 273
Artigau É., Lafrenière D., Doyon R., Albert L., Nadeau D., Robert
J., 2007, ApJL, 659, L49
Baldwin J. E., Tubbs R. N., Cox G. C., Mackay C. D., Wilson
R. W., Andersen M. I., 2001, A&A, 368, L1
Baraffe I., Chabrier G., Allard F., Hauschildt P. H., 1998, A&A,
337, 403
Basri G., Reiners A., 2006, ArXiv Astrophysics e-prints
Bate M. R., Bonnell I. A., 2005, MNRAS, 356, 1201
Bate M. R., Bonnell I. A., Bromm V., 2002, MNRAS, 336, 705
Boss A. P., 1988, ApJ, 331, 370
Bouy H., Brandner W., Martı́n E. L., Delfosse X., Allard F., Basri
G., 2003, AJ, 126, 1526
Burgasser A. J., Kirkpatrick J. D., Reid I. N., Brown M. E.,
Miskey C. L., Gizis J. E., 2003, ApJ, 586, 512
Caballero J. A., 2007, A&A, 462, L61
Close L. M., Lenzen R., Guirado J. C., Nielsen E. L., Mamajek
E. E., Brandner W., Hartung M., Lidman C., Biller B., 2005,
Nature, 433, 286
Close L. M., Siegler N., Freed M., Biller B., 2003, ApJ, 587, 407
Close L. M., Zuckerman B., Song I., Barman T., Marois C., Rice
E. L., Siegler N., Macintosh B., Becklin E. E., Campbell R., Lyke
J. E., Conrad A., Le Mignant D., 2006, ArXiv Astrophysics e-
prints
Delfosse X., Forveille T., Beuzit J.-L., Udry S., Mayor M., Perrier
C., 1999, A&A, 344, 897
Delfosse X., Forveille T., Perrier C., Mayor M., 1998, A&A, 331,
Duquennoy A., Mayor M., 1991, A&A, 248, 485
Fischer D. A., Marcy G. W., 1992, ApJ, 396, 178
Flesch E., Hardcastle M. J., 2004, A&A, 427, 387
Fuhrmeister B., Schmitt J. H. M. M., 2003, A&A, 403, 247
Gal R. R., de Carvalho R. R., Odewahn S. C., Djorgovski S. G.,
Mahabal A., Brunner R. J., Lopes P. A. A., 2004, AJ, 128, 3082
Gizis J. E., Monet D. G., Reid I. N., Kirkpatrick J. D., Burgasser
A. J., 2000, MNRAS, 311, 385
Gizis J. E., Reid I. N., Knapp G. R., Liebert J., Kirkpatrick J. D.,
Koerner D. W., Burgasser A. J., 2003, AJ, 125, 3302
Gliese W., Jahreiß H., 1991, Technical report, Preliminary Version
of the Third Catalogue of Nearby Stars
Hawley S. L., Covey K. R., Knapp G. R., Golimowski D. A., Fan
X., Anderson S. F., Gunn J. E., Harris H. C., Ivezić Ž., Long
G. M., Lupton R. H., et. al. 2002, AJ, 123, 3409
Henry T. J., Franz O. G., Wasserman L. H., Benedict G. F., Shelus
P. J., Ianna P. A., Kirkpatrick J. D., McCarthy D. W., 1999, ApJ,
512, 864
Henry T. J., Ianna P. A., Kirkpatrick J. D., Jahreiss H., 1997, AJ,
114, 388
Henry T. J., McCarthy D. W., 1990, ApJ, 350, 334
Henry T. J., McCarthy D. W., 1993, AJ, 106, 773
Irwin J., Aigrain S., Hodgkin S., Irwin M., Bouvier J., Clarke C.,
Hebb L., Moraux E., 2006, MNRAS, 370, 954
Jaschek M., 1978, Bulletin d’Information du Centre de Donnees
Stellaires, 15, 121
Jeffries R. D., 1999, in Butler C. J., Doyle J. G., eds, ASP Conf.
Ser. 158: Solar and Stellar Activity: Similarities and Differences
X-rays from Open Clusters. pp 75–+
Jeffries R. D., Maxted P. F. L., 2005, Astronomische Nachrichten,
326, 944
Lépine S., Shara M. M., 2005, AJ, 129, 1483
Law N. M., 2007, The Observatory, 127, 71
Law N. M., Hodgkin S. T., Mackay C. D., 2006, MNRAS
Law N. M., Hodgkin S. T., Mackay C. D., Baldwin J. E., 2005,
Astron. Nachr., 326, 1024
Law N. M., Mackay C. D., Baldwin J. E., 2006, A&A
Leggett S. K., 1992, ApJS, 82, 351
Leinert C., Henry T., Glindemann A., McCarthy D. W., 1997,
A&A, 325, 159
Luyten W. J., 1995, VizieR Online Data Catalog, 1098, 0
Luyten W. J., 1997, VizieR Online Data Catalog, 1130, 0
Mackay C. D., Baldwin J., Law N., Warner P., 2004, in Pro-
ceedings of the SPIE, Volume 5492, pp. 128-135 (2004). High-
resolution imaging in the visible from the ground without adap-
tive optics: new techniques and results. pp 128–135
Makarov V. V., 2002, ApJL, 576, L61
Maxted P. F. L., Jeffries R. D., 2005, MNRAS, 362, L45
McCarthy C., Zuckerman B., Becklin E. E., 2001, AJ, 121, 3259
McCarthy D. W., Henry T. J., Fleming T. A., Saffer R. A., Liebert
J., Christou J. C., 1988, ApJ, 333, 943
Montagnier G., Ségransan D., Beuzit J.-L., Forveille T., Delorme
P., Delfosse X., Perrier C., Udry S., Mayor M., Chauvin G., La-
grange A.-M., Mouillet D., Fusco T., Gigan P., Stadler E., 2006,
A&A, 460, L19
Phan-Bao N., Martı́n E. L., Reylé C., Forveille T., Lim J., 2005,
A&A, 439, L19
Raymond S. N., Szkody P., Hawley S. L., Anderson S. F.,
Brinkmann J., Covey K. R., McGehee P. M., Schneider D. P.,
West A. A., York D. G., 2003, AJ, 125, 2621
Reid I. N., Cruz K. L., 2002, AJ, 123, 2806
Reid I. N., Gizis J. E., 1997, AJ, 113, 2246
Reipurth B., Clarke C., 2001, AJ, 122, 432
Salim S., Gould A., 2003, ApJ, 582, 1011
Schmitt J. H. M. M., Liefke C., 2004, A&A, 417, 651
Ségransan D., Delfosse X., Forveille T., Beuzit J.-L., Udry S., Per-
rier C., Mayor M., 2000, A&A, 364, 665
Siegler N., Close L. M., Cruz K. L., Martı́n E. L., Reid I. N., 2005,
ApJ, 621, 1023
Simon T., 1990, ApJL, 359, L51
Soderblom D. R., Stauffer J. R., Hudon J. D., Jones B. F., 1993,
ApJS, 85, 315
Terndrup D. M., Pinsonneault M., Jeffries R. D., Ford A., Stauffer
J. R., Sills A., 2002, ApJ, 576, 950
Tubbs R. N., Baldwin J. E., Mackay C. D., Cox G. C., 2002, A&A,
387, L21
Umbreit S., Burkert A., Henning T., Mikkola S., Spurzem R.,
2005, ApJ, 623, 940
van Altena W. F., Lee J. T., Hoffleit E. D., 2001, VizieR Online
Data Catalog, 1238, 0
Voges W., et al., 1999, VizieR Online Data Catalog, 9010, 0
Voges W., et al., 2000, VizieR Online Data Catalog, 9029, 0
0704.1813 | The Discovery of a Companion to the Lowest Mass White Dwarf1
Mukremin Kilic2, Warren R. Brown3, Carlos Allende Prieto4, M. H. Pinsonneault2, and S.
J. Kenyon3
ABSTRACT
We report the detection of a radial velocity companion to SDSS
J091709.55+463821.8, the lowest mass white dwarf currently known with M ∼
0.17M⊙. The radial velocity of the white dwarf shows variations with a semi-
amplitude of 148.8 ± 6.9 km s−1 and a period of 7.5936 ± 0.0024 hours, which
implies a companion mass of M ≥ 0.28M⊙. The lack of evidence of a companion
in the optical photometry forces any main-sequence companion to be smaller than
0.1M⊙, hence a low mass main sequence star companion is ruled out for this sys-
tem. The companion is most likely another white dwarf, and we present tentative
evidence for an evolutionary scenario which could have produced it. However,
a neutron star companion cannot be ruled out and follow-up radio observations
are required to search for a pulsar companion.
Subject headings: stars: individual (SDSS J091709.55+463821.8) – low-mass –
white dwarfs
1. Introduction
Recent discoveries of several extremely low mass white dwarfs (WDs) in the field (Kilic
et al. 2007; Eisenstein et al. 2006; Kawka et al. 2006) and around pulsars (Bassa et al.
2006; van Kerkwijk et al. 1996) show that WDs with mass as low as 0.17M⊙ are formed
in the Galaxy. No galaxy is old enough to produce such extremely low mass WDs through
single star evolution. The oldest globular clusters in our Galaxy are currently producing
2Ohio State University, Department of Astronomy, 140 West 18th Avenue, Columbus, OH 43210
3Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138
4McDonald Observatory and Department of Astronomy, University of Texas, Austin, TX 78712
1Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian
Institution and the University of Arizona.
http://arxiv.org/abs/0704.1813v1
∼ 0.5M⊙ WDs (Hansen et al. 2007), therefore lower mass WDs must experience significant
mass loss. The most likely explanation is a close binary companion. If a WD forms in a
close binary, it can lose its outer envelope without reaching the asymptotic giant branch
and without ever igniting helium, ending up as a low mass, helium core WD. Confirmation
of the binary nature of several low mass WDs by Marsh et al. (1995) supports this binary
formation scenario.
White dwarf binaries provide an important tool for testing binary evolution, specifically
the efficiency of the mass loss process and the common envelope phase. Since WDs can
be created only at the cores of giant stars, their properties can be used to reconstruct the
properties of the progenitor binary systems. Using a simple core mass - radius relation for
giants and the known orbital period, the initial orbital parameters of the binary system can
be determined. For a review of binary evolution involving WDs, see e.g. Sarna et al. (1996),
Iben et al. (1997), Sandquist et al. (2000), Yungelson et al. (2000), Nelemans & Tout
(2005), and Benvenuto & De Vito (2005).
Known companions to low mass WDs include late type main sequence stars (Farihi et
al. 2005; Maxted et al. 2007), helium or carbon/oxygen core WDs (Marsh et al. 1995;
Marsh 2000; Napiwotzki et al. 2001), and in some cases neutron stars (Nice et al. 2005).
Late type stellar companions to low mass WDs have a distribution of masses with median
0.27M⊙ (Nelemans & Tout 2005). This median companion mass is nearly identical to the
peak companion mass of 0.3M⊙(spectral type M3.5) observed in the field population of late
type main sequence stars within 20 pc of the Earth (Farihi et al. 2005). Low mass WD -
WD binaries, on the other hand, tend to have equal mass WDs. The median mass for both
the brighter and dimmer components of the known low mass WD binary systems is 0.44M⊙
(Nelemans & Tout 2005).
The discovery of extremely low mass WDs around pulsars suggests that neutron star
companions may be responsible for creating white dwarfs with masses of about 0.2M⊙. PSR
J0437-4715, J0751+1807, J1012+5307, J1713+0747, B1855+09, and J1909-3744 are pulsars
in pulsar – He-WD binary systems with circular orbits and orbital periods of ∼0.2-100 days
(Nice et al. 2005). So far, only two of these companions are spectroscopically confirmed to
be helium-core WDs. Van Leeuwen et al. (2007) searched for radio pulsars around 8 low
mass WDs, but did not find any companions. They concluded that the fraction of low mass
helium-core WDs with neutron star companions is less than 18% ± 5%.
Kilic et al. (2007) reported the discovery of the lowest mass WD currently known: SDSS
J091709.55+463821.8 (hereafter J0917+46). With an estimated mass of 0.17M⊙, J0917+46
provides a unique opportunity to search for a binary companion and to test our understanding
of the formation scenarios for extremely low mass WDs. Do extremely low mass WDs form in
binaries with neutron stars, WDs, or late type stars? If the companion is a neutron star, the
mass of the neutron star can be used to constrain the neutron star equation of state. In case
of a WD or a late type star companion, the orbital parameters can be used to constrain the
common-envelope phase of binary evolution in these systems. In this Letter, we present new
optical spectroscopy and radial velocity measurements for J0917+46. Our observations are
discussed in §2, while an analysis of the spectroscopic data and the discovery of a companion
are discussed in §3. The nature of the companion is discussed in §4.
2. Observations
We used the 6.5m MMT telescope equipped with the Blue Channel Spectrograph to
obtain moderate resolution spectroscopy of SDSS J0917+46 nine times over the course of
five nights between UT 2006 December 22-27 and five times on UT 2007 March 19. The
spectrograph was operated with the 832 line mm−1 grating in second order, providing a
wavelength coverage of 3650 - 4500 Å. Most spectra were obtained with a 1.0′′ slit yielding
a resolving power of R = 4300, however a 1.25′′ slit was used on 2006 December 24, which
resulted in a resolving power of 3500. Exposure times ranged from 15 to 22 minutes and
yielded signal-to-noise ratio S/N > 20 in the continuum at 4000 Å. All spectra were obtained
at the parallactic angle, and comparison lamp exposures were obtained after every exposure.
The spectra were flux-calibrated using blue spectrophotometric standards (Massey et al.
1988).
Heliocentric radial velocities were measured using the cross-correlation package RVSAO
(Kurtz & Mink 1998). We obtained preliminary velocities by cross-correlating the obser-
vations with bright WD templates of known velocity. However, greater velocity precision
comes from cross-correlating J0917+46 with itself. Thus we shifted the individual spectra to
rest-frame and summed them together into a high S/N template spectrum. Our final veloc-
ities come from cross-correlating the individual observations with the J0917+46 template,
and are presented in Table 1.
3. The Discovery of a Companion
The radial velocity of the lowest mass WD varies by as much as 263 km s−1 between
different observations, revealing the presence of a companion object. We solve for the best-
fit orbit using the code of Kenyon & Garcia (1986). We find that the heliocentric radial
velocities of the WD are best fitted with a circular orbit and a radial velocity amplitude K =
148.8 ± 6.9 km s−1. We used the method of Lucy & Sweeney (1971) to show that there is no
evidence for an eccentric orbit from our data and that the 1σ upper limit to the eccentricity
is e = 0.06. We have assumed that the orbit is circular. The orbital period is 7.5936 ±
0.0024 hours with spectroscopic conjunction at HJD 2454091.73 ± 0.002. Figure 1 shows
the observed radial velocities and our best fit model for SDSS J0917+46.
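For a fixed trial period the circular-orbit model V(t) = γ + K sin(ωt + φ) is linear in the parameters (γ, K cos φ, K sin φ), so the semi-amplitude follows from ordinary least squares. The sketch below is not the Kenyon & Garcia (1986) code used by the authors; it simply recovers the paper's parameters from noise-free synthetic data:

```python
import math

def fit_circular_orbit(times, rvs, period_hr):
    """Least-squares fit of V(t) = gamma + A sin(wt) + B cos(wt) at a fixed
    trial period; the semi-amplitude is K = sqrt(A^2 + B^2)."""
    w = 2.0 * math.pi / (period_hr * 3600.0)
    basis = [lambda t: 1.0,
             lambda t, w=w: math.sin(w * t),
             lambda t, w=w: math.cos(w * t)]
    # normal equations for the three linear parameters (gamma, A, B)
    M = [[sum(bi(t) * bj(t) for t in times) for bj in basis] for bi in basis]
    y = [sum(bi(t) * v for t, v in zip(times, rvs)) for bi in basis]
    # solve the 3x3 system by Gaussian elimination with back-substitution
    for i in range(3):
        for j in range(i + 1, 3):
            fct = M[j][i] / M[i][i]
            M[j] = [a - fct * b for a, b in zip(M[j], M[i])]
            y[j] -= fct * y[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (y[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    gamma, A, B = x
    return gamma, math.hypot(A, B)

# Noise-free synthetic data with the paper's values: K = 148.8 km/s,
# P = 7.5936 hr, systemic velocity ~29 km/s; phase 0.7 rad is arbitrary.
P, K_true, gam_true = 7.5936, 148.8, 29.0
ts = [i * 1800.0 for i in range(14)]            # 14 epochs, 30 min apart
vs = [gam_true + K_true * math.sin(2 * math.pi * t / (P * 3600.0) + 0.7)
      for t in ts]
gamma_fit, K_fit = fit_circular_orbit(ts, vs, P)
```

In practice the period itself is found by repeating this linear fit over a grid of trial periods and keeping the one with the smallest residuals.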
Measurement of the orbital period P and the semi-amplitude of the radial velocity
variations K allows us to calculate the mass function
M^3 sin^3 i / (M_WD + M)^2 = 0.108 ± 0.018 M⊙,    (1)
where i is the orbital inclination angle, MWD is the WD mass (0.17M⊙ for SDSS J0917+46),
and M is the mass of the companion object. We can put a lower limit on the mass of the
companion by assuming an edge-on orbit (sin i = 1), for which the companion would be an
M = 0.28M⊙ object at an orbital separation of 1.5R⊙. Therefore, the companion mass is
M ≥ 0.28M⊙.
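Both the quoted mass function and the companion-mass limits can be reproduced numerically: f = PK^3/(2πG) follows from the measured orbit, and the cubic in M can be solved by bisection for any assumed inclination. A sketch (not the authors' code; constants are rounded standard values):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg

# Mass function f = P K^3 / (2 pi G) from the measured orbit
P = 7.5936 * 3600.0    # orbital period, s
K = 148.8e3            # RV semi-amplitude, m/s
f = P * K**3 / (2.0 * math.pi * G) / MSUN   # in solar masses

def companion_mass(f, m_wd, inclination_deg):
    """Solve M^3 sin^3 i / (m_wd + M)^2 = f for M by bisection;
    the left side is monotonically increasing in M."""
    sin3i = math.sin(math.radians(inclination_deg)) ** 3
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid**3 * sin3i / (m_wd + mid) ** 2 < f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m_min = companion_mass(f, 0.17, 90.0)   # edge-on orbit: minimum mass, ~0.28
m_60 = companion_mass(f, 0.17, 60.0)    # most likely inclination, ~0.36
```

The same routine at i = 60° gives the 0.36M⊙ value used later in the discussion of a white dwarf companion.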
4. The Nature of the Companion
To understand the nature of the companion, first we need to understand the properties
of the WD. Kilic et al.’s (2007) analysis of J0917+46 was based on a single 15 minute
exposure with S/N = 20 at 4000 Å. Here we use 14 different spectra of J0917+46 (4.3 hours
total exposure time) and repeat our spectroscopic analysis. We also combine all of these
spectra into a composite (weighted average) spectrum with R = 3500 that results in S/N =
80 at 4000 Å. Figure 2 shows the composite spectrum and our fits using the entire spectrum
(excluding the Ca K line) and also using only the Balmer lines (see Kilic et al. 2007 for the
details of the spectroscopic analysis). We find a best-fit solution of Teff = 11855 K and log g
= 5.55 if we use the observed composite spectrum. If we normalize (continuum-correct) the
composite spectrum and fit just the Balmer lines, then we obtain Teff = 11817 K and log g
= 5.51. Since we have 14 different spectra, we also fit each spectrum individually to obtain
a robust estimate of the errors in our analysis. For the first case where we fit the observed
spectra, we obtain a best fit solution of Teff = 11984 ± 168 K and log g = 5.57 ±0.05. For
the second case where we fit only the Balmer lines, we obtain Teff = 11811± 67 K and log g
= 5.51 ±0.02. Our results are consistent with each other and also with Kilic et al.’s analysis.
We confirm that our temperature and gravity estimates are robust; SDSS J0917+46 is still
the lowest gravity/mass WD currently known.
We adopt our best fit solution of Teff = 11855 ± 168 K and log g = 5.55 ±0.05 for
the remainder of the paper. Using our new temperature and gravity measurements and
Althaus et al. (2001) models, we estimate the absolute magnitude of the WD to be MV ∼
7.0, corresponding to a distance modulus of 11.8 (at 2.3 kpc) and a cooling age of about
500 Myr. At a Galactic latitude of +44◦, J0917+46 is located at 1.6 kpc above the plane.
The radial velocity of the binary system is ∼29 km s−1. J0917+46 displays a proper motion
of µRA cos δ = −2 ± 3.4 mas yr−1 and µDEC = 2 ± 3.4 mas yr−1 measured from its SDSS
and USNO-B positions (kindly provided by J. Munn). These measurements correspond to
a tangential velocity of 31 ± 52 km s−1. J0917+46 has disk kinematics, and its location
above the Galactic plane is consistent with a thick disk origin. Therefore the main sequence
age of the progenitor star of the lowest mass WD needs to be on the order of 10 Gyr; the
progenitor was a ∼1M⊙ main sequence star.
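The kinematic numbers above are straightforward to reproduce: the distance follows from the distance modulus, the height above the plane from the Galactic latitude, and the tangential velocity from the usual v_t = 4.74 µ d relation. A sketch (the quoted uncertainties are not propagated here):

```python
import math

# Distance from the distance modulus m - M = 11.8
mu = 11.8
d_pc = 10.0 ** (mu / 5.0 + 1.0)              # ~2290 pc, i.e. ~2.3 kpc

# Height above the Galactic plane at latitude b = +44 deg
z_pc = d_pc * math.sin(math.radians(44.0))   # ~1.6 kpc

# Tangential velocity: v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
pm_ra, pm_dec = -2.0, 2.0                    # proper motion components, mas/yr
pm_total = math.hypot(pm_ra, pm_dec)         # mas/yr
v_tan = 4.74 * (pm_total / 1000.0) * d_pc    # ~31 km/s
```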
The broad-band spectral energy distribution of J0917+46 is shown in panel c in Figure
2. The de-reddened SDSS photometry and the fluxes predicted for the parameters derived
from our spectroscopic analysis are shown as error bars and circles, respectively. The SDSS
photometry is consistent with our spectroscopic solution within the errors. The g-band
photometry is slightly brighter than expected, however it is within 1.7σ of our spectroscopic
solution and therefore the excess is not significant. We also note that many low mass WD
candidates analyzed by Kilic et al. (2007) had discrepant g-band photometry. A similar
problem may cause the observed slight g-band excess.
The mass function for our target plus the MMT spectroscopy and the SDSS photometry
can be used to constrain the nature of the companion star. Since the companion mass has
to be ≥ 0.28M⊙, it can be a low mass star, another WD, or a neutron star.
4.1. A Low Mass Star
4.1.1. Constraints from the SDSS Photometry
If the orbital inclination angle of the binary system is between 47◦ and 90◦, the compan-
ion mass would be 0.28-0.50M⊙, consistent with being an M dwarf. However, an M dwarf
companion would show up as an excess in the SDSS photometry. For example, an M6 dwarf
would cause a 10% excess in the z-band photometry of J0917+46 at its distance, and hence
any star with M ≥ 0.1M⊙ would be visible in the SDSS photometry. Since the mass function
derived from the observed orbital parameters of the binary limits the companion mass to
≥ 0.28M⊙ (earlier than M3.5V spectral type), and any such companion would be visible in
the SDSS photometry (see panel c in Figure 2), a main sequence star companion is ruled out.
4.1.2. Constraints from the MMT Spectroscopy
We search for spectral features from a companion by subtracting all 14 individual spectra
from our best-fit WD model after shifting the individual spectra to the rest-frame. The only
significant feature that we detect is a Ca K line. This same calcium line was also present in
the discovery spectrum of J0917+46 (Kilic et al. 2007). It had the same radial velocity as
the Balmer lines and hence, it was predicted to be photospheric. The Ca K line is visible in
all of our new spectra of this object, and its radial velocity changes with the radial velocity
of the Balmer lines; it is confirmed to be photospheric. The Ca H line overlaps with the
saturated Hǫ line and it is not detected in our spectrum. If J0917+46 had a low mass star
companion, it could contribute an additional calcium line in the spectrum.
Marsh et al. (1995) were able to find the companion to the low mass white dwarf
WD1713+332 by searching for asymmetries in the Hα line profile. We perform a similar
analysis for J0917+46 using the calcium line. We combine the spectra near maximum radial
velocity (V ≥ 125 km s−1) and near minimum radial velocity (V ≤ −80 km s−1) into two
composite spectra. If there is a contribution from a companion object, it should be visible
as an asymmetry in the line profile. This asymmetry should be on the blue side of the
red-shifted composite spectrum, and on the red side of the blue-shifted composite spectrum.
Figure 3 shows the red-shifted (solid line) and blue-shifted (dotted line) composite spectra
of J0917+46. The rest-frame wavelength of the Ca K line is shown as a dashed line. This
figure demonstrates that there is an asymmetry in the Ca K line profile of the red-shifted
spectrum, however the additional Ca K feature is close to the rest-frame velocity, and it is
not detected in the blue-shifted spectrum. The equivalent width of the main Ca K feature
in the red-shifted spectrum is 0.42Å, and the additional feature has an equivalent width of
∼0.16 Å. The blue shifted spectrum has a stronger Ca K feature with an equivalent width
of 0.57 Å, and it is consistent with a blend of the two Ca K features seen in the red-shifted
spectrum. The observed additional calcium feature seems to be stationary and it seems to
originate from interstellar absorption. Therefore, we conclude that our optical spectroscopy
does not reveal any spectral features from a companion object.
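The equivalent widths quoted above are integrals of the line depth over wavelength, EW = ∫(1 − F/F_c) dλ, evaluated on the continuum-normalized spectrum. A minimal sketch on a synthetic Gaussian Ca K-like line (not the actual J0917+46 data):

```python
import math

def equivalent_width(wavelengths, norm_flux):
    """EW = integral of (1 - F/F_c) dlambda, trapezoid rule, for a
    continuum-normalized flux sampled on an ascending wavelength grid."""
    ew = 0.0
    for i in range(len(wavelengths) - 1):
        dl = wavelengths[i + 1] - wavelengths[i]
        ew += 0.5 * ((1.0 - norm_flux[i]) + (1.0 - norm_flux[i + 1])) * dl
    return ew

# Synthetic absorption line: central depth 0.5, sigma 0.3 A, so the
# analytic EW is depth * sigma * sqrt(2*pi) ~ 0.376 A.
center, depth, sigma = 3933.66, 0.5, 0.3    # Ca K rest wavelength, A
wl = [center - 3.0 + 0.01 * i for i in range(601)]
flux = [1.0 - depth * math.exp(-0.5 * ((w - center) / sigma) ** 2) for w in wl]
ew = equivalent_width(wl, flux)
```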
4.2. Another White Dwarf
Using the most likely inclination angle for a random stellar sample, i = 60◦, we estimate
that the companion mass is most likely to be 0.36M⊙, another low mass WD. The orbital
separation of the system would be about 1.6R⊙. If the inclination angle is between 47◦ and 27◦ (21% likelihood), the companion mass would be 0.5-1.4M⊙, consistent with a normal
carbon/oxygen or a massive WD.
Liebert et al. (2004) argue that it is possible to create a 0.16-0.19M⊙ WD from a 1M⊙
progenitor star, if the binary separation is appropriate and the common-envelope phase is
sufficiently unstable so that the envelope can be lost quickly from the system. We re-visit
their claim to see if J0917+46 can be a low mass WD formed from such a progenitor system.
Close WD pairs can be created by two consecutive common envelope phases or an
Algol-like stable mass transfer phase followed by a common envelope phase (Iben et al.
1997). In the first scenario, due to orbital shrinkage the recently formed WD is expected to
be less massive than its companion by a factor of ≤0.55. On the other hand, the expected
mass ratio for the second scenario involving a stable mass transfer and a common-envelope
phase is around 1.1 (Nelemans et al. 2000). The mass ratio of the J0917+46 binary is
Mbright/Mdim ≤ 0.61, therefore the progenitor binary star system has probably gone through
two common envelope phases.
Nelemans & Tout (2005) found that the common-envelope evolution of close WD bina-
ries can be reconstructed with an algorithm (γ-algorithm) equating the angular momentum
balance. The final orbital separation of a binary system that went through a common enve-
lope phase is
a_final / a_initial = (M_giant / M_core)^2 × [(M_core + M_companion) / (M_giant + M_companion)] × [1 − γ M_envelope / (M_giant + M_companion)]^2,    (2)
where a is the orbital separation, M are the masses of the companion, core, giant, and the
envelope, and γ = 1.5 (Nelemans & Tout 2005).
We assume that the mass of the WD is the same as the mass of the core of the giant
at the onset of the mass transfer. Assuming a giant mass of 0.8-1.3M⊙, a core mass of
0.17M⊙, and possible WD companion masses of 0.28-1.39M⊙, we estimate the initial orbital
separation. Using the core-mass–radius relation for giants, R = 10^3.5 M_core^4 (Iben & Tutukov 1985), we find that the radius of the giant star that created J0917+46 was R = 2.6R⊙. For
M = 0.28-1.39M⊙ companions, we use the size of the Roche lobe RL as given by Eggleton
(1983) to determine the separation at the onset of mass transfer assuming Rgiant = RL. The
size of the Roche lobe depends on the mass ratio and the orbital separation of the system.
The initial separation and the size of the Roche lobe gives us a unique solution for the binary
mass ratio. Table 2 presents the companion mass, initial orbital separation and the orbital
period for 0.8-1.3M⊙ giants that could create the lowest mass WD with the observed orbital
parameters.
The mass function for J0917+46 favors a low mass WD companion, therefore the first
four scenarios in Table 2 seem to be more likely. For example, a 0.8M⊙ giant and a 0.33M⊙
WD companion with an initial orbital separation of 5.7R⊙ and an orbital period of 36 hr
would create a 0.17M⊙ WD with the observed orbital parameters.
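The 5.7R⊙ / 36 hr example can be reconstructed from the Eggleton (1983) Roche-lobe approximation R_L/a = 0.49 q^(2/3) / [0.6 q^(2/3) + ln(1 + q^(1/3))], the core-mass–radius relation, and Kepler's third law. A sketch of that reconstruction (not the authors' code):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
MSUN = 1.989e30     # kg
RSUN = 6.957e8      # m

def roche_lobe_fraction(q):
    """Eggleton (1983): R_L / a for mass ratio q = M_donor / M_companion."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

def onset_separation(m_giant, m_core, m_comp):
    """Separation (R_sun) at which the giant fills its Roche lobe, using the
    core-mass--radius relation R = 10^3.5 * M_core^4 (Iben & Tutukov 1985)."""
    r_giant = 10.0 ** 3.5 * m_core ** 4       # giant radius in R_sun
    return r_giant / roche_lobe_fraction(m_giant / m_comp)

def orbital_period_hr(a_rsun, m1, m2):
    """Kepler's third law; separation in R_sun, masses in M_sun."""
    a = a_rsun * RSUN
    return 2.0 * math.pi * math.sqrt(a**3 / (G * (m1 + m2) * MSUN)) / 3600.0

# First scenario in the text: 0.8 Msun giant, 0.17 Msun core, 0.33 Msun WD.
a_i = onset_separation(0.8, 0.17, 0.33)     # ~5.7-5.8 R_sun
p_i = orbital_period_hr(a_i, 0.8, 0.33)     # ~36 hr
```

The other rows of Table 2 follow from the same three relations with different giant and companion masses.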
The same procedures can be used to re-create the first common-envelope phase of the
binary evolution. The progenitor of the unseen companion to J0917+46 must be in the range
0.8-2.3M⊙. The lower mass limit is set by the fact that the unseen companion has to be more
massive than the lowest mass WD. The upper limit is set by the fact that more massive stars
do not form degenerate helium cores (Nelemans et al. 2000) and that a common envelope
phase with a more massive giant would end up in a merger and not in a binary system.
Assuming 0.8-2.3M⊙ giants and 0.8-1.3M⊙ main sequence companions, we calculate possible
evolutionary scenarios to create the orbital parameters of the binary system before the last
common envelope phase. We find that a 2.2M⊙ star and a 0.8M⊙ companion (which is the
progenitor of the lowest mass WD) with an orbital period of 51 days and orbital separation
of 0.4 AU would create a 0.33M⊙ WD. In addition, we find that the progenitor of the lowest
mass WD has to be less massive than 0.9M⊙ since a M ≥ 0.9M⊙ progenitor would require
a companion more massive than 2.3M⊙ to match the binary properties of the system before
the last common envelope phase. Therefore, the most likely evolutionary scenario for a WD
+ WD binary involving J0917+46 would be: 2.2M⊙ giant + 0.8M⊙ star −→ 0.33M⊙ WD
+ 0.8M⊙ star −→ 0.33M⊙ WD + 0.8M⊙ giant −→ 0.33M⊙ WD + 0.17M⊙ WD.
The main sequence lifetime of a 2.2M⊙ thick disk ([Fe/H] ≃ −0.7) star is about 650
Myr (Bertelli et al. 1994), and a 0.33M⊙ white dwarf created from such a system would
be a ∼10 Gyr old WD. According to Althaus et al. (2001) models, a 0.33M⊙ He-core WD
would cool down to 3700 K in 10 Gyr and it would have MV ∼ 15.8. This companion would
be several orders of magnitude fainter than the 0.17M⊙ WD observed today, and therefore
the lack of evidence of a companion in the SDSS photometry and our optical spectroscopy
is consistent with this formation scenario.
4.3. A Neutron Star
The formation scenarios for low mass helium WDs in close binary systems also include
neutron star companions. There are already several low mass WDs known to have neutron
star companions (van Kerkwijk et al. 1996; Nice et al. 2005; Bassa et al. 2006). According
to the theoretical calculations by Benvenuto & De Vito (2005), a 1M⊙ star with a neutron
star companion in an initial orbital period of 0.5-0.75 days would end up as a 0.05-0.21M⊙
WD with a 1.8-1.9M⊙ neutron star in an orbital period of 0.03-3.5 days. They expect a
0.17M⊙ WD + neutron star binary to have an orbital period of 9.6 hours (see their Figure
16). The orbital period of J0917+46 is consistent with their analysis.
If the orbital inclination angle of the J0917+46 binary system is less than 27◦, the
companion mass would be ≥1.4M⊙, consistent with being a neutron star. The probability
of observing a binary system at an angle less than 27◦ is only 11%. This is unlikely, but
cannot be definitely ruled out.
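The inclination argument follows from the standard binary mass function; a sketch using the measured K = 148.8 km/s and P = 7.5936 hr from this paper (the bisection solver and variable names are my own):

```python
# Sketch of the mass-function constraint, assuming M_WD = 0.17 Msun for the
# observed star: f(M) = P K^3 / (2 pi G) = (M2 sin i)^3 / (M1 + M2)^2.
import math

G = 6.674e-11
MSUN = 1.989e30
K = 148.8e3                    # radial-velocity semi-amplitude, m/s
P = 7.5936 * 3600.0            # orbital period, s
M1 = 0.17                      # mass of the observed WD, Msun

f = P * K**3 / (2.0 * math.pi * G) / MSUN
print(round(f, 3))             # ~0.108 Msun

def companion_mass(incl_deg):
    """Solve (M2 sin i)^3 / (M1 + M2)^2 = f for M2 by bisection."""
    s = math.sin(math.radians(incl_deg))
    lo, hi = 1e-3, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (mid * s)**3 / (M1 + mid)**2 < f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(companion_mass(27.0), 2))                # ~1.44 Msun at i = 27 deg
print(round(1.0 - math.cos(math.radians(27.0)), 3))  # P(i < 27 deg) ~ 0.109, i.e. ~11%
```

The last line uses the fact that, for randomly oriented orbits, the probability of inclination below i is 1 − cos i.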
5. Discussion
Our radial velocity measurements of J0917+46 show that it is in a binary system with
an orbital period of 7.59 hours. Short period binaries may merge within a Hubble time
by losing angular momentum through gravitational radiation. The merger time for such
binaries is given by
τ = (M1 + M2)^{1/3} / (M1 M2) × P^{8/3} × 10^7 yr (3)
where the masses are in solar units and the period is in hours (Landau & Lifshitz 1958;
Marsh et al. 1995). For the J0917+46 binary, if the companion is a low mass WD with
M ≤ 0.5M⊙, the merger time is longer than 23 Gyr. If the companion is a 1.4M⊙ neutron
star, then the merger time would be 10.8 Gyr. Thus the J0917+46 binary system will not merge
within the next 10 Gyr.
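Equation (3) can be evaluated directly for the two companion cases discussed above; a minimal sketch (the function name is mine):

```python
# Hedged sketch of Eq. (3): t_merge ~ (M1+M2)^(1/3) / (M1*M2) * P^(8/3) * 1e7 yr,
# with masses in solar units and the period in hours (Marsh et al. 1995).
def merger_time_gyr(m1, m2, period_hr):
    return (m1 + m2)**(1.0/3.0) / (m1 * m2) * period_hr**(8.0/3.0) * 1e7 / 1e9

P = 7.59  # hours
print(merger_time_gyr(0.17, 0.5, P))   # ~23 Gyr for a 0.5 Msun WD companion
print(merger_time_gyr(0.17, 1.4, P))   # ~10.8-10.9 Gyr for a 1.4 Msun neutron star
```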
J0917+46 has nearly solar calcium abundance, and it has more calcium than many
of the metal-rich WDs with circumstellar debris disks. The extremely short timescales for
gravitational settling of Ca imply that the WD requires an external source for the observed
metals (Koester & Wilken 2006; Kilic & Redfield 2007). The star is located far above the
Galactic plane where accretion from the ISM is unlikely. A possible scenario for explaining
the photospheric calcium is accretion from a circumbinary disk created during the mass
loss phase of the giant. Since J0917+46 went through a common envelope phase with a
companion, a left-over circumbinary disk is possible. An accretion disk is observed around
the companion to the giant Mira A (Ireland et al. 2007). In addition, a fallback disk is
observed around a young neutron star (Wang et al. 2006). A similar mechanism could
create a circumbinary disk around J0917+46.
Several calcium-rich WDs are known to host disks (Kilic et al. 2006; von Hippel et
al. 2007). These WDs are 0.2 - 1.0 Gyr old. The presumed origin of these disks is tidal
disruption of asteroids or comets; however, a leftover disk from the giant phase of the WD is
not completely ruled out (von Hippel et al. 2007). Su et al. (2007) detected a disk around the
central star of the Helix planetary nebula. This disk could form from the left-over material
from the giant phase that is ejected with less than the escape speed. The majority of the
metal-rich white dwarfs with disks show 30-100% excess in the K-band (Kilic & Redfield
2007). J0917+46 is expected to be fainter than 19th magnitude in the near-infrared, and it
is not detected in the Point Source Catalog (PSC) part of the Two Micron All Sky Survey
(2MASS; Skrutskie et al. 2006). The PSC 99% completeness limits are 15.8, 15.1 and 14.3
in J,H and Ks filters, respectively. These limits do not provide any additional constraints
on the existence of a disk or a companion object. Follow-up near-infrared observations with
an 8m class telescope are required to search for the signature of a debris disk which could
explain the observed calcium abundance in this WD.
6. Conclusions
SDSS J0917+46, the lowest gravity/mass WD currently known, has a radial velocity
companion. The lack of excess in the SDSS photometry and the orbital parameters of the
system rule out a low mass star companion. We find that the companion is likely to be
another WD, and most likely to be a low mass WD. We show that if the binary separation
is appropriate and the common-envelope phase is efficient, it is possible to create a 0.17M⊙
WD in a 7.59 hr orbit around another WD. A neutron star companion is also possible if the
inclination angle is smaller than 27◦; the likelihood of this is 11%. If the companion is a
neutron star, it would be a millisecond pulsar. Radio observations of J0917+46 are needed
to search for such a companion.
M. Kilic thanks A. Gould for helpful discussions.
Facilities: MMT (Blue Channel Spectrograph)
REFERENCES
Althaus, L. G., Serenelli, A. M., & Benvenuto, O. G. 2001, MNRAS, 323, 471
Bassa, C. G., van Kerkwijk, M. H., Koester, D., & Verbunt, F. 2006, A&A, 456, 295
Benvenuto, O. G., & De Vito, M. A. 2005, MNRAS, 362, 891
Bertelli, G., Bressan, A., Chiosi, C., Fagotto, F., & Nasi, E. 1994, A&AS, 106, 275
Brown, W. R., Geller, M. J., Kenyon, S. J., & Kurtz, M. J. 2006, ApJ, 647, 303
Eggleton, P. P. 1983, ApJ, 268, 368
Eisenstein, D. J. et al. 2006, ApJS, 167, 40
Farihi, J., Becklin, E. E., & Zuckerman, B. 2005, ApJS, 161, 394
Hansen, B. M. S., et al. 2007, ApJ, in press (astro-ph/0701738)
Iben, I. J., Tutukov, A. V., & Yungelson, L. R. 1997, ApJ, 475, 291
Ireland, M. J., et al. 2007, ArXiv Astrophysics e-prints, arXiv:astro-ph/0703244
Kawka, A., Vennes, S., Oswalt, T. D., Smith, J. A., & Silvestri, N. M. 2006, ApJ, 643, L123
Kenyon, S. J. & Garcia, M. R. 1986, AJ, 91, 125
Kilic, M., von Hippel, T., Leggett, S. K., & Winget, D. E. 2006, ApJ, 646, 474
Kilic, M., Allende Prieto, C., Brown, W. R., & Koester, D. 2007, ApJ, in press
Kilic, M. & Redfield, S. 2007, ApJ, in press
Koester, D. & Wilken, D. 2006, A&A, 453, 1051
Kurtz, M. J., & Mink, D. J. 1998, PASP, 110, 934
Landau, L. & Lifshitz, E. 1958, The Theory of Classical Fields, Pergamon Press, Oxford
Liebert, J., Bergeron, P., Eisenstein, D., Harris, H. C., Kleinman, S. J., Nitta, A., & Krzesinski, J. 2004, ApJ, 606, L147
Lucy, L. B. & Sweeney, M. A. 1971, AJ, 76, 544
Marsh, T. R., Dhillon, V. S., & Duck, S. R. 1995, MNRAS, 275, 828
Marsh, T. R. 2000, New Astronomy Review, 44, 119
Massey, P., Strobel, K., Barnes, J. V., & Anderson, E. 1988, ApJ, 328, 315
Maxted, P. F. L., O’Donoghue, D., Morales-Rueda, L., & Napiwotzki, R. 2007, ArXiv Astrophysics e-prints, arXiv:astro-ph/0702005
Napiwotzki, R., et al. 2001, Astronomische Nachrichten, 322, 411
Nelemans, G., & Tout, C. A. 2005, MNRAS, 356, 753
Nice, D. J., Splaver, E. M., Stairs, I. H., Löhmer, O., Jessner, A., Kramer, M., & Cordes,
J. M. 2005, ApJ, 634, 1242
Sandquist, E. L., Taam, R. E., & Burkert, A. 2000, ApJ, 533, 984
Sarna, M. J., Marks, P. B., & Connon Smith, R. 1996, MNRAS, 279, 88
Su, K. Y. L., et al. 2007, ApJ, 657, L41
van Kerkwijk, M. H., Bergeron, P., & Kulkarni, S. R. 1996, ApJ, 467, L89
van Leeuwen, J., Ferdman, R. D., Meyer, S., & Stairs, I. 2007, MNRAS, 374, 1437
von Hippel, T., Kuchner, M. J., Kilic, M., Mullally, F., & Reach, W. T. 2007, ApJ, in press
(astro-ph/0703473)
Wang, Z., Chakrabarty, D., & Kaplan, D. L. 2006, Nature, 440, 772
Yungelson, L. R., Nelemans, G., Zwart, S. F. P., & Verbunt, F. 2000, “The influence of
binaries on stellar population studies”, Brussels, (Kluwer, D. Vanbeveren ed)
This preprint was prepared with the AAS LATEX macros v5.2.
Table 1. Radial Velocity Measurements for SDSS J0917+46
Julian Date Heliocentric Radial Velocity
(km s−1)
2454091.77060 134.29 ± 5.45
2454091.85741 124.83 ± 8.56
2454093.82302 −82.91 ± 4.22
2454093.94566 41.98 ± 4.65
2454094.04293 152.42 ± 3.79
2454095.83998 39.88 ± 3.45
2454095.89902 172.60 ± 3.57
2454096.04373 −79.87 ± 3.76
2454096.95394 20.45 ± 7.49
2454178.62439 −86.78 ± 6.08
2454178.69885 −90.08 ± 3.90
2454178.72963 −11.16 ± 6.32
2454178.77125 106.28 ± 6.53
2454178.83047 160.16 ± 5.48
Table 2. The Last and the First Common-Envelope Phases
CE Phase Mgiant Mcompanion ainitial afinal Pinitial Pfinal
(M⊙) (M⊙) (R⊙) (R⊙) (hr) (hr)
2 0.8 0.33 5.70 1.55 35.6 7.6
2 0.9 0.39 5.73 1.61 33.6 7.6
2 1.0 0.45 5.78 1.66 32.1 7.6
2 1.1 0.50 5.78 1.71 30.5 7.6
2 1.2 0.56 5.85 1.75 29.7 7.6
2 1.3 0.61 5.84 1.80 28.4 7.6
1 2.2 0.80 83.52 5.70 1222.9 35.6
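Table 2's final separations and the 7.6 hr period can be cross-checked with Kepler's third law; a small sketch assuming circular Keplerian orbits (row 1 leaves the 0.33 + 0.17 M⊙ pair at a_final = 1.55 R⊙; constants and function name are mine):

```python
# Sanity check that the post-CE separation in Table 2 reproduces the
# observed ~7.6 hr orbital period via P = 2*pi*sqrt(a^3 / (G*M_total)).
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
RSUN = 6.957e8       # m

def period_hr(m1_msun, m2_msun, a_rsun):
    m = (m1_msun + m2_msun) * MSUN
    a = a_rsun * RSUN
    return 2.0 * math.pi * math.sqrt(a**3 / (G * m)) / 3600.0

print(period_hr(0.33, 0.17, 1.55))   # ~7.6 hr, matching P_final in the table
```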
Fig. 1.— The radial velocities of the white dwarf SDSS J0917+46 (black dots) observed in
2006 December (top panel) and 2007 March (bottom left panel). The bottom right panel
shows all of these data points phased with the best-fit period. The solid line represents the
best-fit model for a circular orbit with a radial velocity amplitude of 148.8 km s−1 and a
period of 7.5936 hours.
Fig. 2.— Spectral fits (solid lines) to the observed composite spectrum of SDSS J0917+46
(jagged lines, panel a) and to the flux-normalized line profiles (panel b). The Ca K line
region (3925 - 3940 Å) is not included in our fits. The SDSS photometry (error bars) and
the predicted fluxes from our best fit solution to the spectra (circles) are shown in panel
c. The dashed line shows the effect of adding an M3.5V (0.3M⊙) companion to our best-fit
white dwarf model.
Fig. 3.— The spectra averaged around the maximum and minimum radial velocity for
J0917+46. The red-shifted spectrum (solid line) is a combination of five spectra with V =
125, 134, 152, 160, and 173 km s−1 shifted to an average velocity of 149 km s−1. The blue-
shifted spectrum (dotted line) is a combination of four spectra with V = −80,−83,−87, and
−90 km s−1 shifted to an average velocity of −85 km s−1. The dashed line marks the rest
wavelength of the Ca K line.
arXiv:0704.1814v1 [hep-th] 13 Apr 2007
A Measure of de Sitter Entropy
and Eternal Inflation
Nima Arkani-Hameda, Sergei Dubovskya,b, Alberto Nicolisa,
Enrico Trincherinia, and Giovanni Villadoroa
a Jefferson Physical Laboratory,
Harvard University, Cambridge, MA 02138, USA
b Institute for Nuclear Research of the Russian Academy of Sciences,
60th October Anniversary Prospect, 7a, 117312 Moscow, Russia
Abstract
We show that in any model of non-eternal inflation satisfying the null energy condition,
the area of the de Sitter horizon increases by at least one Planck unit in each inflationary
e-folding. This observation gives an operational meaning to the finiteness of the entropy
SdS of an inflationary de Sitter space eventually exiting into an asymptotically flat region:
the asymptotic observer is never able to measure more than eSdS independent inflationary
modes. This suggests a limitation on the amount of de Sitter space outside the horizon that
can be consistently described at the semiclassical level, fitting well with other examples of
the breakdown of locality in quantum gravity, such as in black hole evaporation. The bound
does not hold in models of inflation that violate the null energy condition, such as ghost
inflation. This strengthens the case for the thermodynamical interpretation of the bound as
conventional black hole thermodynamics also fails in these models, strongly suggesting that
these theories are incompatible with basic gravitational principles.
1 Introduction
String theory appears to have a landscape of vacua [1, 2], and eternal inflation [3, 4] is a plausible
mechanism for populating them. In this picture there is an infinite volume of spacetime undergoing
eternal inflation, nucleating bubbles of other vacua that either themselves eternally inflate, or
end in asymptotically flat or AdS crunch space-times. These different regions are all space-like
separated from each other and are therefore naively completely independent. The infinite volumes
and infinite numbers of bubbles vex simple attempts to define a “measure” on the space of vacua,
since these involve ratios of infinite quantities.
This picture relies on an application of low-energy effective field theory to inflation and bubble
nucleation. On the face of it this is totally justified, since everywhere curvatures are low compared
to the Planck or string scales. However, we have long known that effective field theory can
break down dramatically even in regions of low curvature, indeed it is precisely the application of
effective field theory within its putative domain of validity that leads to the black hole information
paradox. Complementarity [5, 6] suggests that regions of low-curvature spacetime that are space-
like separated may nonetheless not be independent. How can we transfer these relatively well-
established lessons to de Sitter space and eternal inflation [7]?
In this note we begin with a brief discussion of why locality is necessarily an approximate
concept in quantum gravity, and why the failure of locality can sometimes manifest itself macro-
scopically as in the information paradox (see, e.g., [8, 9, 10] for related discussions with somewhat
different accents). Much of this material is review, though some of the emphasis is novel. The
conclusion is simple: effective field theory breaks down when it relies on the presence of eS states
behind a horizon of entropy S. Note that if the spacetime geometry is kept fixed as gravity is
decoupled G→ 0, the entropy goes to infinity and effective field theory is a perfectly valid descrip-
tion. In attempting to extend these ideas to de Sitter space, there is a basic confusion. It is very
natural to assign a finite number of states to a black hole, since it occupies a finite region of space
[11]. De Sitter space also has a finite entropy [12], but its spatially flat space-like surfaces have
infinite volume, and it is not completely clear what this finite entropy means operationally, though
clearly it must be associated with the fact that any given observer only sees a finite volume of
de Sitter space. We regulate this question by considering approximate de Sitter spaces which are
non-eternal inflation models, exiting into asymptotically flat space-times. We show that for a very
broad class of inflationary models, as long as the null-energy condition is satisfied, the area of the
de Sitter horizon grows by at least one Planck unit during each e-folding, so that dSdS/dNe ≫ 1,
and so the number of e-foldings of inflation down to a given value of inflationary Hubble is bounded
as Ne ≪ SdS (limits on the effective theory of inflation have also been considered in e.g. [13, 14]).
This provides an operational meaning to the finiteness of the de Sitter entropy: the asymptotic
observer detects a spectrum of scale-invariant perturbations that she associates with the early
de Sitter epoch; however, she never measures more than eSdS of these modes. The bound is vio-
lated when the conditions for eternal inflation are met; indeed, dSdS/dNe . 1 thereby provides a
completely macroscopic characterization of eternal inflation. This bound suggests that no more
than eSdS spacetime Hubble volumes can be consistently described within an effective field theory.
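For orientation, the growth of the horizon area can be made concrete in 4D slow-roll inflation; the following is a hedged back-of-the-envelope sketch using standard slow-roll relations (not a quote from the paper), with S_dS = π/(GH²) and dH/dN_e = Ḣ/H = −εH:

```latex
\frac{dS_{dS}}{dN_e}
  = \frac{\pi}{G}\,\frac{d}{dN_e}\!\left(\frac{1}{H^2}\right)
  = -\frac{2\pi}{G H^3}\,\frac{dH}{dN_e}
  = \frac{2\pi\,\varepsilon}{G H^2}
  = 2\,\varepsilon\, S_{dS}\,.
```

Thus dS_dS/dN_e ≳ 1 is the statement ε S_dS ≳ 1; since the scalar power is of order 1/(ε S_dS) up to O(1) factors, the opposite regime dS_dS/dN_e ≲ 1 is precisely where curvature fluctuations become of order one, i.e. where eternal inflation sets in.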
Our bound does not hold in models of inflation that violate the null-energy condition. Of
course most theories that violate this energy condition are obviously pathological, with instabilities present even at long distances. In recent years, however, a class of theories has been studied [15, 16, 17], loosely describing various “Higgs” phases of gravity, which appear to be
consistent as long-distance effective theories, and which (essentially as part of their raison d’etre)
violate the null energy condition. Our result suggests that these theories violate the thermody-
namic interpretation of de Sitter entropy—an asymptotic observer exiting into flat space from
ghost inflation [18] could, for instance, measure parametrically more than eSdS inflationary modes.
This fits nicely with other recent investigations [19, 20] that show that the second law of black
hole thermodynamics also fails for these models. Taken together these results strongly suggest
that, while these theories may be consistent as effective theories, they are in the “swampland” of
effective theories that are incompatible with basic gravitational principles [21, 22].
2 Locality, gravity, and black holes
2.1 Locality in gravity
Since the very early days of quantum gravity it has been appreciated that the notion of local
off-shell observables is not sharply well-defined (see e.g. [23]). It is important to realize that it is
dynamical gravity that is crucial for this conclusion, and not just the reparameterization invariance
of the theory. The existence of local operators clashes with causality in a theory with a dynamical
metric. Indeed, causality tells that the commutator of local operators taken at space-like separated
points should be zero,
[O(x),O(y)] = 0 if (x − y)^2 > 0
However, whether two points are space-like separated or not is determined by the metric, and is not
well defined if the metric itself fluctuates. Clearly, this argument crucially relies on the ability of the
metric to fluctuate, i.e. on the non-trivial dynamics of gravity. Another argument is that Green’s
functions of local field operators, such as 〈φ(x)φ(y) · · · 〉 are not invariant (as opposed to covariant)
under coordinate changes. Consequently, they cannot represent physical quantities in a theory of
gravity, where coordinate changes are gauge transformations. Related to this, there is no standard
notion of time evolution in gravity. Indeed, as a consequence of time reparameterization invariance,
the canonical quantization of general relativity leads to the Wheeler-de Witt equation [24], which
is analogous to the Schroedinger equation in ordinary quantum mechanics, but does not involve
time,
HΨ = 0 (1)
These somewhat formal arguments seem to rely only on the reparametrization invariance of
the theory, but of course this is incorrect—it is the dynamical gravity that is the culprit. To see
this, let us take the decoupling limit MPl → ∞, so that gravity becomes non-dynamical. If we are
in flat space, in this limit the metric gαβ must be diffeomorphic to ηαβ :
g_{αβ} = ∂_α Y^μ ∂_β Y^ν η_{μν} (2)
where η_{μν} is the Minkowski metric and the Y^μ’s are to be thought of as the component functions of the
space-time diffeomorphism (diff), x^μ → Y^μ(x). The resulting theory is still reparameterization
invariant, with matter fields transforming in the usual way under the space-time diffs x → ξ(x),
and the transformation rule of the Y^μ fields is
Y^μ → (ξ^{−1} ◦ Y)^μ
where ◦ is the natural multiplication of two diffeomorphisms. Nevertheless, there are local diff-
invariant observables now, such as 〈φ(Y (x))φ(Y (y)) · · · 〉. Of course this theory is just equivalent
to the conventional flat space field theory, which is recovered in the “unitary” gauge Y µ = xµ.
Conversely, any field theory can be made diff invariant by introducing the “Stueckelberg” fields
Y µ according to (2). Diff invariance by itself, like any gauge symmetry, is just a redundancy of
the description and cannot imply any physical consequences. Conventional time evolution is also
recovered in the decoupling limit; the Hamiltonian constraint (1) still holds as a consequence of
time reparameterization invariance, and in the decoupling limit it reads
(Ẏ^μ p_μ + H_M) Ψ[Y^μ, matter] = 0 (3)
where H_M is the matter Hamiltonian. Noting that the canonical conjugate momenta act as
p_μ = i ∂/∂Y^μ,
we find that the Hamiltonian constraint reduces to the conventional time-dependent Schroedinger
equation with Ψ depending on time through Y µ. In a sense, the gauge degrees of freedom Y µ’s
play the role of clocks and rods in the decoupling limit.
This is to be contrasted with what happens for finite MPl. In this case it is not possible to
explicitly disentangle the gauge degrees of freedom from the metric. As a result to recover the
conventional time evolution from the Wheeler-de Witt equation one has to specify some physical
clock field (for instance, it can be the scale factor of the Universe, or some rolling scalar field),
and use this field similarly to how we used Y µ’s in (3) to recover the time-dependent Schroedinger
equation [25, 26, 27]. This strongly suggests that with dynamical gravity one is forced to consider
whether there exist physical clocks that can resolve a given physical process. In particular, this
means that in a region of size L it does not make sense to discuss time evolution with resolution
better than δt ∼ (L M_Pl^2)^{−1}, as any physical clocks aiming to measure time with that precision by
the uncertainty principle would collapse the whole region into a black hole.
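Restoring units (an assumption of mine: in SI units the bound reads δt ∼ t_Pl · l_Pl / L), the limit is numerically absurd for any macroscopic region:

```python
# Back-of-envelope evaluation of the clock-resolution bound dt ~ (L * M_Pl^2)^(-1),
# rewritten in SI units as dt ~ t_Planck * (l_Planck / L).
t_pl = 5.39e-44   # Planck time, s
l_pl = 1.62e-35   # Planck length, m

def dt_min(L_m):
    """Minimal meaningful time resolution in a region of size L_m (meters)."""
    return t_pl * l_pl / L_m

print(dt_min(1.0))  # ~8.7e-79 s for a 1 m region: utterly negligible in practice
```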
What does the formal absence of local observables in gravity mean operationally? There
must be an intrinsic obstacle to measuring local observables with arbitrary precision; what is this
intrinsic uncertainty? Imagine we want to determine the value of the 2-point function 〈φ(x)φ(y)〉
of a scalar field φ(x) between two space-like separated points x and y. We have to set up an
apparatus that measures φ(x) and φ(y), repeat the experiment N times and collect the outcomes
φi(x), φi(y) for i = 1, · · · , N . We can then plot the values for the product φi(x)φi(y), which will
be peaked around some value. The width of the distribution will represent the uncertainty due
to quantum fluctuations. Without gravity there is no limit to the precision we can reach, just by
increasing N the width of the distribution decreases as 1/√N. The presence of gravity, however,
sets an intrinsic systematic uncertainty in the measurement. The Bekenstein bound [11], indeed,
limits the number of states in a localized region of space-time. This is due to the fact that, in
a theory with gravity, the object with the largest density of states is a black hole, whose size
R_S grows with its entropy (S_BH = R_S^{D−2}/4G), or equivalently, with the number of states it can contain (∼ e^{S_BH}). This means that an apparatus of finite size has a finite number of degrees of
freedom (d.o.f.), thus can reach only a finite precision, limited by the number of states. For an
apparatus with size smaller than r = |x − y|, the number of d.o.f. is bounded by S = rD−2/G.
Without gravity there is no limit to the number of d.o.f. a compact apparatus can have so that
the indetermination in the two-point function is only limited by the statistical error, which can be
reduced indefinitely by increasing the number of measurements N . With gravity instead this is
no longer true; an intrinsic systematic error (which must be a decreasing function of S) is always
present to fuzz the notion of locality. The only two ways to eliminate such indetermination are:
a) by switching off gravity (G → 0); b) by giving up with local observables and considering only
S-matrix elements (for asymptotically Minkowski spaces) where r → ∞: in this sense there are
no local (off-shell) observables in gravity.
Let us now try to quantify the amount of indetermination due to quantum gravity. The
parameter controlling the uncertainty 1/S = G/rD−2 is always tiny for distances larger than the
Planck length, which signals the fact that quantum gravity becomes important at this scale. We do
not expect the low-energy effective theory to break down at any order in perturbation theory, i.e.
at any order in 1/S. This is what perturbative string theory suggests by providing, in principle, a
well defined higher-derivative low-energy expansion at all order in G. Also, in our 2-point function
example, the natural limit on the resolution should be set by the number of states of the apparatus
(eS) instead of its number of d.o.f. (S). We thus expect the irreducible error due to quantum
gravity to be non-perturbative in the coupling G,
δ〈φ(x)φ(y)〉 ∼ e^{−S} ∼ e^{−(x−y)^{D−2}/G} (4)
The smallness and the non-perturbative nature of this effect suggest that it becomes important
only at very short distances, with the low-energy field theory remaining a very good approximation
at long distances. This is true except in special situations where the effective theory breaks down
when it is not naively expected to. However, before discussing this point further, let us examine
the issue of locality from another angle by looking at what it means in S-matrix language.
As is well-known, the S-matrix associated with a local theory enjoys analyticity properties. For
instance, for the 2 → 2 scattering, the amplitude must be an analytic function of the Maldelstam’s
variables s and t away from the real axis. It must also be exponentially bounded in energy—at
fixed angles, the amplitude cannot fall faster than e^{−√s log s} [28]. In local QFT, both of these
requirements follow directly from the sharp vanishing of field commutators outside the light-cone
in position space. A trivial example illustrates the point: consider a function f(x) that vanishes
sharply outside the interval [x1, x2]. What does this imply for the Fourier transform f̃(p)? Since
the integral for f̃(p) is over a finite range [x1, x2] and e^{ipx} is analytic in p, f̃(p) must be both
analytic and exponentially bounded in the complex p plane. Now amplitudes in UV complete
local quantum field theories certainly satisfy these requirements—they are analytic and fall off as
powers of energy. More significantly, amplitudes in perturbative string theory also satisfy these
bounds. That they are analytic is no surprise, since after all the Veneziano amplitude arose in
the context of the analytic S-matrix program. More non-trivially they are also exponentially
bounded—high energy amplitudes for E ≫ Ms are dominated by genus g = E/Ms and fall off
precisely as e^{−E log E}, saturating the locality bound [29] (see also [30] and references therein for
discussion of high-energy scattering in string theory). Thus despite naive appearances, the finite
extent of the string does not in itself give rise to any violations of locality. Indeed, we now know
of non-gravitational string theories—little string theories in six dimensions. These theories have a
definition in terms of four-dimensional gauge theories via deconstruction and are manifestly local
in this sense [31].
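The compact-support toy argument above is easy to illustrate numerically; a sketch of my own (the bump function and grid are arbitrary choices):

```python
# Numerical illustration: a function vanishing outside [-1, 1] has a Fourier
# transform f~(p) that is entire in p and exponentially bounded,
# |f~(ib)| <= e^{b} * ||f||_1, since |e^{i(ib)x}| = e^{-bx} <= e^{b} on the support.
import numpy as np

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
# Smooth bump supported on (-1, 1); clip avoids division by zero at the endpoints.
f = np.where(np.abs(x) < 1.0,
             np.exp(-1.0 / np.clip(1.0 - x**2, 1e-12, None)), 0.0)

def ft(p):
    """f~(p) = integral of f(x) e^{ipx} dx over the support; p may be complex."""
    return np.sum(f * np.exp(1j * p * x)) * dx

norm = np.sum(np.abs(f)) * dx
for b in (1.0, 5.0, 10.0):
    assert abs(ft(1j * b)) <= np.exp(b) * norm
print("exponential boundedness holds on the imaginary axis")
```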
It is possible that violations of locality do show up in the S-matrix when black hole production
becomes important. At high enough energies relative to the Planck scale, the two-particle scatter-
ing is dominated by black hole production, when the energy becomes larger than M_Pl divided by
some power of gs so the would-be BH becomes larger than the string scale. The 2 → 2 scattering
amplitude therefore cannot be smaller than e−S(E), and it is natural to conjecture that this lower
bound is met:
A_{2→2}(E ≫ M_Pl) ∼ e^{−S(E)} ∼ e^{−E R(E)} (5)
where R(E) ∼ (GE)1/(D−3) is the radius of the black hole formed with center of mass energy E
and S(E) is the associated entropy. Note that since R(E) grows as a power of energy, saturating
this lower bound leads to an amplitude falling faster than exponentially at high energies, so
that the only sharp mirror of locality in the scattering amplitude is lost. A heuristic measure
of the size of these non-local effects in position space can be obtained by Fourier transforming
the analytically continued A2→2 back to position space; a saddle point approximation using the
black-hole dominated amplitude gives a Fourier transform of order e^{−r^{D−2}/G} ∼ e^{−S}, in accordance
with our general expectations. Of course this asymptotic form of the scattering amplitude is a
guess; it is hard to imagine that the amplitude is smaller than this but one might imagine that it
can be larger (we thank J. Maldacena for pointing this out to us). The point is that there is no
reason to expect perturbative string effects to violate notions of locality—they certainly do not in
the S-matrix—while gravitational effects can plausibly do it.
Naively one would expect that the breakdown of locality only shows up when scales of order of
the Planck length or shorter are probed, while for IR physics the corrections are ridiculously tiny
(e−S) with no observable effects. This is however not true. There are several important cases where
the loss of locality by quantum gravity give O(1) effects. This happens when in processes with
O(eS) states, the tiny O(e−S) corrections sum to give O(1) effects. This is similar to renormalon
contributions in QCD. Independently of the value of αs, or equivalently of the energy considered,
every QCD amplitude is indeed affected by non-perturbative power corrections
Λ_QCD^2 / Q^2 = e^{−4π/(β_0 α_s(Q^2))} (6)
which limit the power of the “asymptotic” perturbative expansion. Because in the N -loop order
contributions, and equivalently in the N -point functions, combinatorics produce enhancing N !
factors, they start receiving O(1) corrections when N ∼ 1/αs. Analogously in gravity, we must
expect O(1) corrections from “non-perturbative” quantum gravity in processes with N -point func-
tions with N ≃ S. These contributions are not captured by the perturbative expansion, they show
the very nature of quantum gravity and its non-locality, which is usually thought to be confined
at the Planck scale. Indeed in eq. (5) it is the presence of eS states (the inclusive amplitude
is an S-point function) that suppresses exponentially the 2 → 2 amplitude, thus violating the
locality bound. An example where this effect becomes macroscopic is well-known as the black
hole information paradox [32], and will be reviewed more extensively below in section 2.2. Notice
however that only for specific questions e−S effects become relevant, in all other cases, where less
than O(S) quanta are involved, the low energy effective theory of gravity (or perturbative string
theory) remains an excellent tool for describing gravity at large distances.
2.2 The black hole information paradox
Since an effective field theory analysis of black hole information and evaporation leads to dramat-
ically incorrect conclusions, it is worth reviewing this well-worn territory in some detail, in order
to draw a lesson that can then be applied to cosmology.
Schwarzschild black hole solutions of mass M and radius R_S (with R_S^{D−3} ∼ GM) exist for
any D > 3 spacetime dimensions. Black holes lose mass via Hawking radiation [33] with a rate
dM/dt ∼ −R_S^{−2}, so that the evaporation time is
t_ev/R_S ∼ M R_S ∼ S_BH (7)
where
S_BH ∼ R_S^{D−2}/G (8)
is the black hole entropy, the large dimensionless parameter in the problem. Note that there is a
natural limit where the geometry (RS) is kept fixed, M → ∞ but G → 0 so that SBH → ∞. In
this limit there is still a black hole together with its horizon and singularity, and it emits Hawking
radiation with temperature T_H ∼ R_S^{−1}, but t_ev → ∞ so the black hole never evaporates.
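The scalings in Eqs. (7) and (8) fit together consistently; a quick numerical check of the dimensionless bookkeeping (my own sketch, with all order-one factors dropped):

```python
# With M ~ R^(D-3)/G and S_BH ~ R^(D-2)/G, one has M*R = S_BH and the
# evaporation time t_ev ~ M*R^2 (from dM/dt ~ -R^-2) equals R*S_BH,
# matching Eqs. (7)-(8) for any spacetime dimension D.
def mass(R, G, D):
    return R**(D - 3) / G

def entropy(R, G, D):
    return R**(D - 2) / G

for D in (4, 5, 10):
    R, G = 3.7, 0.01            # arbitrary test values
    M, S = mass(R, G, D), entropy(R, G, D)
    assert abs(M * R - S) < 1e-9 * S              # M R_S ~ S_BH
    assert abs(M * R**2 - R * S) < 1e-9 * R * S   # t_ev ~ M R_S^2 = R_S S_BH
print("scaling relations consistent for D = 4, 5, 10")
```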
Hawking radiation can certainly be computed using effective field theory, after all the horizon
of a macroscopic black hole is a region of spacetime with very small curvature and as a consequence
there should be a description of the evaporation where only low-energy degrees of freedom are
excited. In order to derive Hawking radiation, one has to be able to describe the evolution of
Figure 1: Nice slices in Kruskal coordinates (left) and in the Penrose diagram (right). The singularity is at T^2 − X^2 = 1.
an initial state on the black hole semiclassical background to some final state that has Hawking
quanta. Following the laws of quantum mechanics, all that is needed is a set of spatial slices
and the corresponding—in general time dependent—Hamiltonian. However, because the aim is
to compute the final state within a long distance effective field theory, the curvature of the sliced
region of spacetime must be low everywhere (the slices can also cross the horizon if they stay away
from the singularity) and the extrinsic curvature of the slices themselves has to be small as well.
Spatial surfaces with these properties are called “nice slices” [34, 35]. One can easily arrange for
this slicing to cover also most of the collapsing matter that forms the black hole. To be specific
we can take the first (t = 0) slice to be T = c_0 for X > 0 and the hyperbola T^2 − X^2 = c_0^2 for X < 0, where X and T are Kruskal coordinates; this slice has small extrinsic curvature by
construction. Then we take a second slice with c1 > c0 and we boost it in such a way that the
asymptotic Schwarzschild time on this slice is larger than the asymptotic time on the previous one.
We can build in this way a whole set of slices c0 < . . . < cn, all with small extrinsic curvature;
the region they cover inside the horizon stays far away from the singularity, while outside
they can be boosted arbitrarily far in the future (Fig. 1) so that they can intercept most of the
outgoing Hawking quanta. When the black hole evaporates the background geometry changes
and the slices can be smoothly adjusted with the change in the geometry until very late in the
evaporation process, when the curvature becomes Planckian and the black hole has lost most of
its mass.
Starting with a pure state |ψi〉 at t = 0, one can now evolve it using the Hamiltonian HNS
defined on this set of slices, never entering the regime of high curvature. We can now imagine
dividing the slices in a portion that is outside the horizon and one inside it; even if the state
on the entire slice is pure, we can consider the effective density operator outside the black hole
defined as ρout(t) = Trin |ψ(t)〉〈ψ(t)|. In principle we can measure ρout. As usual in quantum
mechanics, this is done by repeating exactly the same experiments an infinite number of times,
and measuring all the mutually commuting observables that are possible. We should certainly
expect that at early times ρout is a mixed state, representing the entanglement between infalling
matter and Hawking radiation along the early nice-slices. This can be quantified by looking at
the entanglement entropy associated with ρout:
Sent = −Tr ρout log ρout (9)
Clearly at early times Sent is non-vanishing. What happens at late times? Should we expect the
final state of the evolution to be |ψf〉 = |ψout〉 ⊗ |ψin〉, with no entanglement between inside and
outside and Sent = 0? The answer is negative because of the quantum Xerox principle [36]. If this
decomposition were correct, two different states |A〉 and |B〉 should evolve into
|A〉 → |Aout〉 ⊗ |Ain〉, |B〉 → |Bout〉 ⊗ |Bin〉 (10)
but a linear superposition of them evolves, by linearity, into
|A〉+ |B〉 → |Aout〉 ⊗ |Ain〉+ |Bout〉 ⊗ |Bin〉 (11)
which cannot be of the form (|A〉+ |B〉)out ⊗ (|A〉+ |B〉)in unless the states behind the horizon are equal
|Ain〉 = |Bin〉 for every A and B, and this is clearly impossible. No mystery then that the outgoing
Hawking radiation ρout looks thermal, being correlated with states behind the horizon.
Using nice slices one can compute in the low energy theory the entanglement entropy associated
with the density matrix ρout: as the horizon area shrinks the number of emitted quanta
increases, and the entanglement entropy of these thermal states grows monotonically as a function of
time until the black hole becomes Planckian, the effective field theory is no longer valid, and we
don't know what happens next without a UV completion (Fig. 2). This seems a generic prediction
of low energy EFT. It implies a peculiar fate for black hole evaporation: either the evolution
of a pure state ends in a mixed state, violating unitarity, or the black hole doesn’t evaporate
completely, a Planckian remnant is left and the information remains stored in the correlations
between Hawking radiation and the remnant. What cannot happen is that the purity of the final
state is recovered in the last moments of black hole evaporation, because the number of remaining
quanta is not large enough to carry all the information. This is the black hole information paradox.
It suggests that in order to preserve unitarity, effective field theory should break down earlier than
expected. If we believe in the holographic principle, the total dimension of the Hilbert space
of the region inside the black hole has to be bounded by the exponential of the horizon area in
Planckian units. Since the entropy of any density operator is always smaller than the logarithm
of the dimension of the Hilbert space, and since the entanglement entropy for a pure state divided
into two subsystems is the same for each of them, the correct value of the entanglement entropy
that is measured from ρout should start decreasing at a time of order tev, finally becoming zero
when the black hole evaporates and a pure state is recovered.
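The qualitative behavior just described, a monotonically growing EFT entanglement entropy against a shrinking holographic bound, with the unitary answer following the smaller of the two, can be sketched numerically. The curve shapes below are schematic choices for illustration, not derived:

```python
import numpy as np

# Schematic Page-type curves (arbitrary units, linear shapes assumed):
# the EFT entanglement entropy grows with the number of emitted quanta, while
# the holographic bound ~ horizon area decreases as the hole evaporates.
t = np.linspace(0.0, 1.0, 201)          # time in units of the evaporation time
S0 = 100.0                               # initial black hole entropy (toy value)

S_eft = S0 * t                           # monotonic EFT prediction (blue line)
S_bound = S0 * (1.0 - t)                 # shrinking holographic bound (dashed)
S_correct = np.minimum(S_eft, S_bound)   # unitary answer must respect both

# the two curves cross at a time of order the evaporation time
t_turnover = t[np.argmax(S_correct)]
assert abs(t_turnover - 0.5) < 0.01
assert S_correct[-1] == 0.0              # pure state recovered at the end
```

The only robust features are the ones the argument in the text uses: the correct entropy must start decreasing at a time of order t_ev and vanish when evaporation is complete.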
Figure 2: The entanglement entropy for an evaporating black hole as a function of time. After a time
of order of the evaporation time the EFT prediction (blue line) starts violating the holographic bound
(dashed line). The correct behavior (red line) must reduce to the former at early times and approach the
latter at late times. At the final stages, t ≃ tPlanckian, curvatures are large and EFT breaks down.
According to this picture, the difference between the prediction of EFT and the right answer
is of O(1) in a regime where curvature is low and there is no reason why effective field theory
should be breaking down. However, the way this O(1) difference manifests itself is rather subtle.
To understand this point let us first consider N spins σi = ±1/2 and take the following state:
|ψ〉 = 2^{−N/2} Σ_{σ1...σN} |σ1 . . . σN〉 e^{iθ(σ1,...,σN)} (12)
where θ(σ1, ..., σN) are random phases. If only k of the N spins are measured, the density matrix
ρk can be computed taking the trace over the remaining N − k spins
ρk = 2^{−k} Σ_{σ1...σk} |σ1 . . . σk〉〈σ1 . . . σk| + O(2^{−(N−k)/2})_{off-diagonal} (13)
the off-diagonal exponential suppression comes from averaging 2N−k random phases. When k ≪ N
this density matrix looks diagonal and maximally mixed. Let us now study the entanglement
entropy: for small k we can expand
Sent = −Tr ρk log ρk = k log 2 + O(2^{−N+2k}) (14)
and conclude that the effect of correlations becomes important only when k ≃ N/2 spins are
measured; finally when k ∼ N the entanglement entropy goes to zero as expected for a pure
state. A state that looks thermal instead of maximally mixed is |ψ〉 = Σ_n e^{−βEn/2} |En〉 e^{iθ(En)} with
random phases θ(En). This is of course why common pure states in nature, like the proverbial
“lump of coal” entangled with the photons it has emitted, look thermal when only a subset of the
states is observed.
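The random-phase example of eqs. (12)-(14) can be checked directly for a modest number of spins. This is a numpy sketch; N = 10 and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                                   # total number of spins
dim = 2**N

# |psi> = 2^{-N/2} sum_sigma e^{i theta(sigma)} |sigma>, with random phases theta
psi = np.exp(1j * rng.uniform(0, 2 * np.pi, dim)) / np.sqrt(dim)

def entanglement_entropy(psi, k):
    """S_ent of the first k spins, tracing out the remaining N - k."""
    m = psi.reshape(2**k, 2**(N - k))    # split |psi> into measured/traced factors
    p = np.linalg.svd(m, compute_uv=False)**2   # Schmidt spectrum = eigenvalues of rho_k
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

# small k: S_ent ~ k log 2, i.e. rho_k looks maximally mixed ...
assert abs(entanglement_entropy(psi, 2) - 2 * np.log(2)) < 0.1
# ... while for k = N the full state is pure and S_ent vanishes
assert abs(entanglement_entropy(psi, N)) < 1e-10
```

The deviation from k log 2 at small k is indeed of order 2^{−N+2k}, invisible until roughly half the spins are measured, as eq. (14) states.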
This is a simple illustration of a general result due to Page [37], showing how the difference
between a pure and a mixed state is exponentially small until a number of states of order of
the dimensionality of the Hilbert space is measured. Suppose we have to verify if the black hole
density operator has an entanglement entropy of order S, then we need to measure an e^S × e^S
matrix—the entropy of any N×N matrix is bounded by logN—with entries of order e^{−S}; in order
to see O(1) deviations from thermality in the spectrum, a huge number of Hawking states must be
measured with incredibly fine accuracy. Because it takes a time scale of order of the evaporation
time tev = RSSBH to emit order SBH quanta, before that time effective field theory predictions are
correct up to tiny e−S effects (Fig. 2). In particular this means that when looking at the N -point
functions of the theory, the exact value is the one obtained using EFT plus corrections that are
exponentially small until N ≃ S:
〈φ1 . . . φN〉correct = 〈φ1 . . . φN〉EFT +O(e−(S−N)) (15)
This can be explicitly seen with large black holes in AdS, as discussed by Maldacena [38] and
Hawking [41]. The semiclassical boundary two-point function for a massless scalar field falls off as
e−t/R. Its vanishing as t → ∞ is the information paradox in this context, while the CFT ensures
that this two-point function never drops below e−S; but the discrepancy of the semiclassical
approximation relative to the exact unitary CFT result for the two-point function is of order e−S.
There is another heuristic observation that supports the idea that the whole process of black
hole evaporation cannot be described within a single effective field theory. There is actually a
limitation in the slicing procedure that we described at the beginning of this section. In order for
the slices to extend arbitrarily in the future outside the black hole, they have to be closer and
closer inside the horizon. However, quantum mechanics plus gravitation put a strong constraint:
to measure shorter time intervals heavier clocks are needed. Of course they must be larger than
their own Schwarzschild radius but a clock has also to be smaller than the black hole itself. This
gives a bound on the shortest interval of time δt (the difference ck − ck−1 between two subsequent
slices in Fig. 1) that makes sense to talk about inside the horizon
δt ≳ ħ/Mclock ≳ ħ G/R_S^{D−3} (16)
In this equation we have temporarily restored ~ to highlight the fact that whenever ~ or G goes to
zero the bound becomes trivial. On the other hand, the proper time inside the black hole is finite
τin . RS. These two conditions imply a striking bound: the maximum number of slices inside the
black hole is also finite, Nmax ≃ τin/δt ≃ R_S^{D−2}/G ≃ SBH. How large is then the time interval that
we can cover outside? With a spacing between the slices of the order of the Planck length (ℓPl)
the total time interval is τout ≃ Nmax ℓPl ≃ R_S^{D−2} G^{(3−D)/(D−2)}. Note, however, that if we are only
interested in the Hawking quanta we may allow for a much less dense slicing: the spacing outside
can be of order of the typical wavelength of the radiation δtout ∼ 1/TBH ∼ RS. In this way we can
cover at most
τout . NmaxRS ≃ SBHRS (17)
which is precisely the evaporation time tev. Summarizing, the system of slices we need to define
the Hamiltonian evolution cannot cover enough space-time to describe the process of black hole
evaporation for time intervals parametrically larger than tev. With this argument we find that
effective field theory should break down exactly when it starts giving the wrong prediction for
the entanglement entropy (Fig. 2). Most previous estimates instead accounted for a much shorter
regime of validity, up to time-scales of order RS logRS [39, 40]. This would imply that the EFT
breakdown originates at some finite order in perturbation theory while in our case it comes from
non-perturbative O(e−S) effects.
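The slice-counting argument above can be verified symbolically. This is a sketch with O(1) factors and ħ dropped, taking δt ≳ G/R_S^{D−3} from eq. (16) and τ_in ≲ R_S as in the text:

```python
import sympy as sp

# Consistency check of the nice-slice counting in D spacetime dimensions.
R, G, D = sp.symbols('R G D', positive=True)

dt_min = G / R**(D - 3)      # shortest sensible slice spacing inside, eq. (16)
tau_in = R                    # proper time available inside the horizon
N_max = tau_in / dt_min       # maximum number of slices inside

S_BH = R**(D - 2) / G
assert sp.simplify(N_max - S_BH) == 0        # N_max ~ S_BH

# with spacing ~ 1/T_H ~ R_S outside, the slices cover at most
tau_out = N_max * R
assert sp.simplify(tau_out - S_BH * R) == 0  # tau_out ~ S_BH R_S ~ t_ev, eq. (17)
```

Both sides of the argument come out of the same combination R_S^{D−2}/G, which is why the breakdown time matches the entanglement-entropy turnover of Fig. 2.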
Because the EFT description of states is incorrect for late times t ≫ tev, also the association
of commuting operators to the 2-point function is wrong. It can well be that two observables
evaluated on the same nice slice, one inside the horizon and the other outside, will no longer
commute, even if they are at large spatial separation. This is consistent with the principle of black
hole complementarity [5, 6]. Notice however that this breaking down is not merely a kinematical
effect due to the presence of the horizon, after all EFT is perfectly good for computing Hawking
radiation. In fact in the limit described at the beginning of this section, when we keep the
geometry fixed and we decouple dynamical gravity, there still is a horizon but effective field
theory now gives the right answer for arbitrarily long time scale: the black hole doesn’t evaporate
and information is entangled with states behind the horizon. The limitation on the validity of
EFT comes from dynamical gravity. There is nothing wrong in talking about both the inside and
the outside of the horizon for time intervals parametrically smaller than tev and even if one goes
past that point, O(S) quanta have to be measured to see a deviation of O(1).
3 Limits on de Sitter space
We now consider de Sitter space. According to the covariant entropy bound, de Sitter space should
have a finite maximum entropy given in 4D by the horizon area in 4G units, SdS = π H^{−2}/G. For a
black hole in asymptotically flat space it makes sense that the number of internal quantum states
should be finite. After all for an external observer a black hole is a localized object, occupying a
limited region of space. But for de Sitter space it is less clear how to think about the finiteness
of the number of quantum states: de Sitter has infinitely large spatial sections, at least in flat
FRW slicing, and continuous non-compact isometries—features that seem to clash with the idea
of a finite-dimensional Hilbert space. In particular the de Sitter symmetry group SO(n, 1) has
no finite-dimensional representations, so it cannot be realized in the de Sitter Hilbert space (see
however Ref. [42] for a discussion on this point). However the fact that no single observer can
ever experience what is beyond his or her causal horizon makes it tempting to postulate some sort
of ‘complementarity’ between the outside and the inside of the horizon, in the same spirit as the
black hole complementarity. From this point of view the global picture of de Sitter space would
not make much sense at the quantum level.
It is plausible that the global picture of de Sitter space is only a semiclassical approximation,
which becomes strictly valid only in the limit where gravity is decoupled while the geometry is kept
fixed. In the same limit the entropy SdS diverges, and one recovers the infinite-dimensional Hilbert
space of a local QFT in a fixed de Sitter geometry. With dynamical gravity we expect tiny non-
perturbative effects of order e−SdS to put fundamental limitations on how sharply one can define
local observables, in the spirit of sect. 2.1. These tiny effects can have dramatic consequences
in situations where they are enhanced by huge ∼ e+SdS multiplicative factors. For instance it is
widely believed that on a timescale of order H−1eSdS—the Poincaré recurrence time—de Sitter
space necessarily suffers from instabilities and no consistent theory of pure de Sitter space is
possible; although this view has been seriously challenged by Banks [43, 44, 45]. Notice however
that the near-horizon geometry of de Sitter space is identical to that of a black hole—they are
both equivalent to Rindler space. As we discussed in sect. 2.2, in the black hole case the local
EFT description must break down at a time tev ∼ RS · SBH after the formation of the black hole
itself. It is natural to conjecture that a similar breakdown of EFT occurs in de Sitter space after
a time of order H^{−1} · SdS. This is a much shorter timescale than the Poincaré recurrence
time, which is instead exponential in the de Sitter entropy.
3.1 Slow-roll inflation
Is there a way to be more concrete? In pure de Sitter any observer has access only to a small portion
of the full spacetime, and it is not even clear what the observables are [46]. But we can make better
sense of de Sitter space if we regulate it by making it a part of inflation. If inflation eventually
ends in a flat FRW cosmology with zero cosmological constant, then asymptotically in the future
every observer will have access to the whole of spacetime. In particular an asymptotic observer can
detect—in the form of density perturbations—modes that exited the cosmological horizon during
the near-de Sitter inflationary epoch. Notice that from this point of view it looks perfectly sensible
to talk about what is outside the early de Sitter horizon—we even have experimental evidence
that computing density perturbations by following quantum fluctuations outside the horizon is
reliable—and a strict complementarity between the inside and the outside of the de Sitter horizon
seems too restrictive. Now, the interesting point is that the fact that an asymptotic observer
can detect modes coming from the early inflationary phase gives an operational meaning to the
de Sitter degrees of freedom, and to their number. Every detectable mode corresponds to a state
in the de Sitter Hilbert space.
Let’s consider for instance an early phase of ordinary slow-roll inflation. Classically the inflaton
φ rolls down its potential V (φ) with a small velocity φ̇cl ∼ V ′/H . On top of this classical motion
there are small quantum fluctuations. Modes get continuously stretched out of the de Sitter
horizon, and quantum fluctuations get frozen at their typical amplitude at horizon-crossing,
δφq ∼ H (18)
For a future observer, who makes observations in an epoch when the inflaton is no longer an
important degree of freedom, these fluctuations are just small fluctuations of the space-like hyper-
surface that determines the end of inflation. That is, since with good approximation inflation ends
at some fixed value of φ, small fluctuations in φ curve this hypersurface by perturbing the local
scale factor a,
δa/a ∼ H δt ∼ H δφq/φ̇cl (19)
Such a perturbation is locally unobservable as long as its wavelength is larger than the cosmological
horizon. But eventually every mode re-enters the horizon, and when this happens a perturbation
in the local a translates into a perturbation in the local energy density ρ,
δρ/ρ ∼ δa/a ∼ H²/φ̇cl (20)
where we made use of eq. (18).
By observing density perturbations in the sky an asymptotic observer is able to assign states
to the approximately de Sitter early phase. If we believe the finiteness of de Sitter entropy, the
maximum number of independent modes from inflation an observer can ever detect should be
bounded by the dimensionality of the de Sitter Hilbert space, dim(H) = eS. Of course slow-roll
inflation has a finite duration, thus only a finite number of modes can exit the horizon during
inflation and re-enter in the asymptotic future. Roughly speaking, if inflation lasts for a total
of Ntot e-foldings, the number of independent modes coming from inflation is of order e^{3Ntot}—it
is the number of different Hubble volumes that get populated starting from a single inflationary
Hubble patch. If the number of e-foldings during inflation gets larger than the de Sitter entropy,
Ntot & S, this operational definition of de Sitter degrees of freedom starts violating the entropy
bound.
In slow-roll inflation the Hubble rate slowly changes with time,
Ḣ = −(4πG) φ̇2 (21)
and so does the associated de Sitter entropy S = πH^{−2}/G. In particular, the rate of entropy
change per e-folding is
dS/dN = 8π² φ̇²/H⁴ ∼ (δρ/ρ)^{−2} (22)
where we made use of eq. (20). By integrating this equation we get a bound on the total number
of e-foldings,
Ntot ≲ (δρ/ρ)² · Send (23)
where Send is the de Sitter entropy at the end of inflation. We thus see that since δρ/ρ is smaller
than one, the total number of e-foldings is bounded by the de Sitter entropy. As a consequence
a future observer will never be able to associate more than eS states to the near-de Sitter early
phase!
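The step from eq. (22) to eq. (23) is a one-line integration; a sketch, treating δρ/ρ as roughly constant over the inflationary history:

```latex
% Integrating the entropy-production rate, eq. (22), over the inflationary history:
N_{\rm tot} = \int dN = \int \left(\frac{dS}{dN}\right)^{-1} dS
\sim \left(\frac{\delta\rho}{\rho}\right)^{2} \int_{0}^{S_{\rm end}} dS
\lesssim \left(\frac{\delta\rho}{\rho}\right)^{2} S_{\rm end} .
```

Since δρ/ρ < 1 throughout a controlled slow-roll phase, the prefactor only tightens the bound N_tot ≲ S_end.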
By adjusting the model parameters one can make the inflationary potential flatter and flatter,
thus enhancing the amplitude of density perturbations δρ/ρ. In this way, according to eq. (23) for
a fixed de Sitter entropy the allowed number of e-foldings can be made larger and larger. When
δρ/ρ becomes of order one we start saturating the de Sitter entropy bound, Ntot ∼ S. However
exactly when δρ/ρ is of order one we enter the regime of eternal inflation. Indeed quantum
fluctuations in the inflaton field, δφq ∼ H, are so large that they are of the same order as the
classical advancement of the inflaton itself in one Hubble time, ∆φcl ∼ φ̇cl · H^{−1},
δφq/∆φcl ∼ H²/φ̇cl ∼ δρ/ρ ∼ 1 (24)
Now in principle there is no limit to the total number of e-foldings one can have in an inflationary
patch—the field can fluctuate up the potential as easily as it is classically rolling down. Still
when a future observer starts detecting modes coming from an eternal-inflation phase, precisely
because they correspond to density perturbations of order unity the Hubble volume surrounding
the observer will soon get collapsed into a black hole [47, 48]. Therefore a future observer will not
be able to assign more than eS states to the inflationary phase.
Notice that when dealing with eternal inflation we are pushing the semiclassical analysis beyond
its regime of validity, by applying it to a regime of large quantum fluctuations. This is to be
contrasted with standard (i.e., non-eternal) slow-roll inflation, where the semiclassical computation
is under control and quantitatively reliable. This matches nicely with what we postulated above
by analogy with the black hole system—that in de Sitter space the local EFT description should
break down after a time of order H−1 ·S. Indeed in standard slow-roll inflation the near-de Sitter
phase cannot be kept for longer than Ntot ∼ S e-foldings.
Normally whether inflation is eternal or not is controlled by the microscopic parameters of the
inflaton potential. For slow-roll inflation we have just given instead a macroscopic characterization
of eternal inflation, involving geometric quantities only: an observer living in an inflationary
Universe can in principle measure the local H and Ḣ with good accuracy, and determine the rate
of entropy change per e-folding. If such a quantity is of order one, the observer lives in an eternally
inflating Universe.
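The macroscopic diagnostic just described can be phrased as a one-line function. This is a 4D sketch with S = πH^{−2}/G, so dS/dN = −2πḢ/(GH⁴); the threshold dS/dN ∼ 1 is the criterion quoted above:

```python
import math

def entropy_change_per_efold(H, Hdot, G=1.0):
    """dS/dN for S = pi/(G H^2): dS/dN = -2 pi Hdot / (G H^4).

    Diagnostic for an observer measuring the local H and Hdot:
    dS/dN >> 1 means standard (non-eternal) inflation;
    dS/dN of order 1 or less signals the eternal-inflation regime.
    """
    return -2.0 * math.pi * Hdot / (G * H**4)

# marginal case |Hdot| ~ G H^4: dS/dN is O(1), i.e. eternally inflating
assert abs(entropy_change_per_efold(H=1.0, Hdot=-1.0, G=1.0) - 2 * math.pi) < 1e-12

# a toy slow-roll example with |Hdot| >> G H^4: safely non-eternal
assert entropy_change_per_efold(H=1.0, Hdot=-1e4, G=1.0) > 1e4
```

Note that for single-field slow roll, substituting Ḣ = −4πGφ̇² reproduces eq. (22).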
Indeed we will see that this macroscopic characterization of eternal inflation is far more general
than the simple single-field slow-roll inflationary model we are discussing here. By now we know
several alternative mechanisms for driving inflation, well known examples being for instance DBI
inflation [49], locked inflation [50], k-inflation [51]. These models can be thought of as different
regularizations of de Sitter space—different ways of sustaining an approximately de Sitter early
phase for a finite period of time before matching onto an ordinary flat FRW cosmology, thus
allowing an asymptotic observer to gather information about de Sitter space. We will show in
a model-independent fashion that the absence of eternal inflation requires that the Hubble rate
decrease faster than a critical speed,
|Ḣ| ≫ GH4 (25)
This is a necessary condition for the classical motion not to be overwhelmed by quantum fluctua-
tions, so that the semiclassical analysis is trustworthy. In terms of the de Sitter entropy the above
inequality reads
dS/dN ≫ 1 (26)
which once integrated limits the total number of e-folds an inflationary model can achieve without
entering an eternal-inflation regime,
Ntot ≪ Send (27)
As pointed out by Bousso, Freivogel and Yang, the bound (26) is necessarily violated [48]
in slow-roll eternal inflation, thereby avoiding conflict with the second law of thermodynamics.
Indeed, during eternal inflation the evolution of the horizon area is dominated by quantum jumps
of the inflaton field and can go either way during each e-folding. From |dS/dN| < 1 one infers that
the entropy changes by less than one unit during each e-folding and, consequently, its decrease is
unobservable.
One notable exception is ghost inflation [18]. There φ̇ and Ḣ are not tightly bound to each other
like in eq. (21). Indeed there exists an exactly de Sitter solution with vanishing Ḣ but constant,
non-vanishing φ̇. This is because the stress-energy tensor of the ghost condensate vacuum is that
of a cosmological constant, even though the vacuum itself breaks Lorentz invariance through a
non-zero order parameter 〈φ̇〉 [15]. Therefore, the requirement of not being eternally inflating still
gives a lower bound on φ̇ but now this does not translate into a lower bound on |Ḣ|. Ḣ can be
strictly zero, still inflation is guaranteed to end by the incessant progression of the scalar, which
will eventually trigger a sudden drop in the cosmological constant [18]. Thus in ghost inflation
there is no analogue of the local bounds (25) and (26), nor is there any upper bound on the total
number of e-foldings.
Notice however that the ghost condensate is on the verge of violating the null energy condition,
having ρ + p = 0. Indeed small perturbations about the condensate do violate it. In the next
subsection we will prove that our bounds are guaranteed to hold for all inflationary systems that
do not admit violations of the null energy condition. This matches with the general discussion
of sect. 4: the NEC is known to play an important role in the holographic bound and in general
in limiting the accuracy with which one can define local observables in gravity. The fact that all
reliable NEC-respecting semiclassical models of inflation obey our bounds, suggests that the latter
really limit the portion of de Sitter space one can consistently talk about within local EFT.
3.2 General case
Let us consider a generic inflationary cosmology driven by a collection of matter fields ψm. We
want to see under what conditions the time-evolution of the system is mainly classical, with
quantum fluctuations giving only negligible corrections. We could work with a completely generic
matter Lagrangian, function of the matter fields and their first derivatives, and possibly including
higher-derivative terms, which in specific models like ghost inflation can play a significant role. We
should then: take the proper derivatives with respect to the metric to find the stress-energy tensor;
plug it into the Friedmann equations and solve them; expand the action at quadratic order in the
fluctuations around the classical solution; compute the size of typical quantum fluctuations; impose
Figure 3: Love in an inflationary Universe.
that they do not overcome the classical evolution. This procedure would be quite cumbersome, at
the very least.
Fortunately we can answer our question in general, with no reference to the actual system that
is driving inflation. To this purpose it is particularly convenient to work with the effective theory
for adiabatic scalar perturbations of a generic FRW Universe. This framework has been developed
in Ref. [52], to which we refer for details. The idea is to focus on a scalar excitation that is present
in virtually all expanding Universes: the Goldstone boson of broken time-translations. That is,
given the background solution for the matter fields ψm(t), we consider the matter fluctuation
δψm(x) ≡ ψm(t + π(x)) − ψm(t) (28)
parameterized by π(x), and the corresponding scalar perturbation of the metric as enforced by
Einstein equations (after fixing, e.g., Newtonian gauge). This fluctuation corresponds to a com-
mon, local shift in time for all matter fields and is what in the long wavelength limit is called an
‘adiabatic’ perturbation. As for all Goldstone bosons, its Lagrangian is largely dictated by sym-
metry considerations. This is clearly the relevant degree of freedom one has to consider to decide
whether eternal inflation is taking place or not. Minimally, a sufficient condition for having eternal
inflation is to have large quantum fluctuations back and forth along the classical trajectory. In
the presence of several matter fields other fluctuation modes will be present. For the moment we
concentrate on the Goldstone alone. As we will see at the end of this section, our conclusions are
unaltered by the presence of large mixings between π and extra degrees of freedom. The situation
is schematically depicted in Fig. 4.
Of course with dynamical gravity time-translations are gauged and formally there is no Gold-
stone boson at all—it is “eaten” by the gravitational degrees of freedom and one can always fix
the gauge π(x) = 0 (‘unitary gauge’). Still it remains a convenient parametrization of a particular
scalar fluctuation at short distances, shorter than the Hubble scale, which plays the role of the
Figure 4: A given cosmological history is a classical trajectory in field space (red line), parameterized by
time. The Goldstone field π describes small local fluctuations along the classical solution. In general other
light oscillation modes, transverse to the trajectory will also be present, and π can be mixed with them.
In the picture ϕ1 and ϕ2 are the modes that locally diagonalize the quadratic Lagrangian of perturbations.
The blue ellipsoid gives the typical size of quantum fluctuations.
graviton Compton wavelength. This is completely analogous to the case of massive gauge theories,
where the dynamics of longitudinal gauge bosons is well described by the “eaten” Goldstones at
energies higher than the mass.
This approach allows us to analyze essentially any model of inflation. The reason is that, no
matter what the underlying model is, it produces some a(t), and in unitary gauge the effective
Lagrangian breaks time diffs but as we will see is still quite constrained by preserving spatial diffs,
so a completely general model can be characterized in a systematic derivative expansion with only
a few parameters. The inside-horizon dynamics of the “clock” field can be simply obtained from
the unitary gauge Lagrangian by re-introducing the time diff Goldstone à la Stückelberg.
The construction of the Lagrangian for π is greatly simplified in ‘unitary’ gauge, π = 0. That
is, by its very definition eq. (28), π(x) can always be gauged away from the matter sector through
a time redefinition, t → t − π(x). Then the scalar fluctuation appears only in the metric, thus
its Lagrangian only involves the metric variables. We can reintroduce π at any stage of the
computation simply by performing the opposite time diffeomorphism t → t + π(x). Notice that
by construction π has dimension of length. All Lagrangian terms must be invariant under the
symmetries left unbroken by the background solution and by the unitary gauge choice. These are
time- and space-dependent spatial diffeomorphisms, xi → xi + ξi(t, ~x). At the lowest derivative
level the only such invariant is g00. Notice that, given the residual symmetries, the Lagrangian
terms will have explicitly time-dependent coefficients. From the top-down viewpoint this time-
dependence arises because we are expanding around the time-dependent background matter fields
ψm(t) and metric a(t). Because of this, we expect the typical time-variation rate to be of order
H , so that at frequencies larger than H it can be safely ignored.
The matter Lagrangian in unitary gauge takes the form [52]
Smatter = ∫ d⁴x √−g [ M²Pl Ḣ g^{00} − M²Pl (3H² + Ḣ) + F(g^{00} + 1) ] (29)
where the first two terms are fixed by imposing that the background a(t) solves Friedmann equa-
tions, since they contribute to ‘tadpole’ terms. F instead can be a generic function that starts
quadratic in its argument δg00 ≡ g00+1, so that it doesn’t contribute to the background equations
of motion, with time-dependent coefficients,
F (δg00) =M4(t) (δg00)2 + M̃4(t) (δg00)3 + . . . (30)
To match this description with a familiar situation, consider for instance the case of an ordinary
scalar φ with a potential V driving the expansion of the Universe. If we perturb the scalar and
the metric around the background solution φ0(t), a(t) and choose unitary gauge, φ(x) = φ0(t),
the Lagrangian is
Lφ = −½ g^{µν} ∂µφ ∂νφ − V(φ) = −½ φ̇0² g^{00} − V(φ0(t)) (31)
which, upon using the background Friedmann equations, reproduces exactly the first two terms
in eq. (29). Therefore an ordinary scalar corresponds to the case F (δg00) = 0.
We can now reintroduce the Goldstone π. This amounts to performing in eq. (29) the time
diffeomorphism
t → t + π ,    g^{00} → −1 − 2π̇ + (∂π)² (32)
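A minimal sketch of where the g^{00} transformation in eq. (32) comes from, evaluating the transformed scalar around the unperturbed metric η^{µν} = diag(−1,+1,+1,+1) (mostly-plus signature assumed):

```latex
% Under t \to t + \pi, the unitary-gauge scalar g^{00} = g^{\mu\nu}\partial_\mu t\,\partial_\nu t
% becomes
g^{00} \;\to\; \eta^{\mu\nu}\,\partial_\mu(t+\pi)\,\partial_\nu(t+\pi)
= -(1+\dot\pi)^2 + (\vec\nabla\pi)^2
= -1 - 2\dot\pi + (\partial\pi)^2 ,
\qquad (\partial\pi)^2 \equiv -\dot\pi^2 + (\vec\nabla\pi)^2 .
```

The same substitution applied to eq. (29) is what generates the π Lagrangian of eqs. (33)-(34).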
Notice that we should really evaluate all explicit functions of time like H , etc., at t+π rather than
at t. However, after expanding in π, this would give rise only to non-derivative terms suppressed
by H , Ḣ, etc., that can be safely neglected as long as we consider frequencies faster than H . Of
course in the end we are interested in the physics at freeze-out, i.e. exactly at frequencies of order
H . A correct analysis should then include these non-derivative terms for π, as well as the effect
of mixing with gravity—the Goldstone is a convenient parameterization only at high frequencies.
However, being only interested in orders of magnitude we can use the high-frequency Lagrangian
for π and simply extrapolate our estimates down to frequencies of order H. From eq. (29) we get
Lπ = M²Pl Ḣ (∂π)² + F(−2π̇ + (∂π)²) (33)
   = (4M⁴ − M²Pl Ḣ) π̇² + M²Pl Ḣ (∇π)² + higher orders (34)
where we neglected a total derivative term and we expanded F as in eq. (30). At the lowest
derivative level, the quadratic Lagrangian for π only has one free parameter, M4. The only
constraint onM4 is that it must be positive for the propagation speed of π fluctuations (the ‘speed
of sound’, from now on) c2 ≡ M2Pl|Ḣ|/(4M4 +M2Pl|Ḣ|) to be smaller than one. For instance, a
relativistic scalar with c2 = 1 corresponds to M4 = 0; a perfect fluid with constant equation of
state 0 < w < 1 corresponds to M4 =M2Pl|Ḣ| (1− w)/w.
If M4 ≲ M2Pl|Ḣ| the speed of sound is of order one and we can repeat exactly the same analysis
as in the case of slow roll inflation, modulo straightforward changes in the notation. Therefore,
let us concentrate on the case c2 ≪ 1, M4 ≫ M2Pl|Ḣ|; the Lagrangian further simplifies to
\[
\mathcal{L}_\pi = 4M^4\,\dot\pi^2 + M_{\rm Pl}^2\dot H\,(\vec\nabla\pi)^2 + \text{higher orders} \qquad (35)
\]
We now want to use the Lagrangian (35) to estimate the size of quantum fluctuations, and to
impose that they don’t overcome the classical evolution of the system. For the latter requirement
the π language is particularly convenient: π is the perturbation of the classical ‘clock’ t, directly
in time units, so we just have to impose π̇ ≪ 1 at freeze-out, that is at frequencies of order H .
Alternatively, in unitary gauge we can look at the dimensionless perturbation in the metric,
\[
\zeta = H\pi \qquad (36)
\]
so that imposing ζ ≪ 1 at freeze-out we get the same condition for π as above.
The typical size of the vacuum quantum fluctuations for a non-relativistic, canonically nor-
malized field φ with a generic speed of sound c ∼ ω/k at frequencies of order ω is
\[
\langle\phi^2\rangle_\omega \sim \frac{k^3}{\omega} \sim \frac{\omega^2}{c^3} \qquad (37)
\]
where the ω in the denominator comes from the canonical wave-function normalization, and the
k3 in the numerator from the measure in Fourier space. Taking into account the non-canonical
normalization of π, at frequencies of order H we have
\[
\langle\pi^2\rangle_H \sim \frac{H^2}{M^4\, c^3} \qquad (38)
\]
The size of quantum fluctuations is enhanced for smaller sound speeds c. And since c2 is proportional
to |Ḣ|, clearly there will be a lower bound on |Ḣ| below which the system is eternally
inflating. Indeed imposing 〈π̇2〉H ≪ 1 and using c2 = M2Pl|Ḣ|/M4 we directly get
\[
|\dot H| \gg \frac{1}{c}\,G H^4 \qquad (39)
\]
which in the limit c ≪ 1 is even stronger than eq. (25). From this the constraint dS ≫ 1
immediately follows.
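The algebra behind eqs. (38) and (39) is short enough to machine-check. The sketch below is only an illustration: the symbol names are mine, and all O(1) factors are dropped exactly as in the estimates of the text. It substitutes c2 = M2Pl|Ḣ|/M4 into 〈π̇2〉H ∼ H2〈π2〉H ∼ H4/(M4 c3) and confirms that demanding this be ≪ 1 is the same as |Ḣ| ≫ H4/(c M2Pl) ∼ GH4/c:

```python
import sympy as sp

# Positive symbols (my names): H = Hubble rate, c = sound speed,
# M4 = M^4, Mpl2 = M_Pl^2, Hdot = |H-dot|.  O(1) factors dropped throughout.
H, c, M4, Mpl2, Hdot = sp.symbols('H c M4 Mpl2 Hdot', positive=True)

# eq. (38): <pi^2>_H ~ H^2/(M^4 c^3), so <pidot^2>_H ~ H^2 <pi^2>_H
pidot2 = H**2 * H**2 / (M4 * c**3)

# Eliminate M^4 using c^2 = Mpl^2 |Hdot| / M^4
pidot2_sub = pidot2.subs(M4, Mpl2 * Hdot / c**2)

# <pidot^2>_H << 1 should then be equivalent to |Hdot| >> H^4/(c Mpl^2):
assert sp.simplify(pidot2_sub - H**4 / (c * Mpl2 * Hdot)) == 0
```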
This proves our bounds for all models in which the physics of fluctuations is correctly described
by the Goldstone two-derivative Lagrangian, eq. (35). This class includes for instance all single-
field inflationary models where the Lagrangian is a generic function of the field and its first
derivatives, L = P((∂φ)2, φ), from slow-roll inflation to k-inflation models [51]. It is however useful
to consider an even stronger bound that comes from taking into account non-linear interactions
of π. This bound will be easily generalizable to theories with sizable higher-derivative corrections
to the quadratic π Lagrangian, like the ghost condensate. This is where the null energy condition
comes in.
The null energy condition requires that the stress-energy tensor contracted with any null vector
nµ be non-negative, Tµν nµnν ≥ 0. We can read off the stress-energy tensor from the matter action
in unitary gauge eq. (29) by performing the appropriate derivatives with respect to the metric.
Given a generic null vector nµ = (n0, ~n) the relevant contraction is
\[
T_{\mu\nu}\,n^\mu n^\nu = -2\,(n^0)^2\left[M_{\rm Pl}^2\dot H + F'(\delta g^{00})\right] \qquad (40)
\]
where δg00 = g00+1 is the fluctuation in g00 around the background. In a more familiar notation,
for a scalar field with a generic Lagrangian L = P(X, φ), X ≡ (∂φ)2, the above contraction is just
Tµν nµnν = 2 (nµ∂µφ)2 ∂XP, so the NEC is equivalent to ∂XP ≥ 0.
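As a quick numerical sanity check of that last statement (an illustrative sketch with invented numbers for P, ∂XP and φ̇, not anything from the text), one can build Tµν = 2 ∂XP ∂µφ ∂νφ − gµν P for a homogeneous scalar in Minkowski space and contract it with a null vector; the −gµν P piece drops out because n is null:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus metric, g00 = -1
P, P_X = 3.0, 0.7                      # arbitrary values of P and dP/dX
dphi = np.array([2.0, 0.0, 0.0, 0.0])  # d_mu phi for phi = phi(t), phidot = 2

# Stress tensor for L = P(X, phi), X = g^{mu nu} d_mu phi d_nu phi:
T = 2.0 * P_X * np.outer(dphi, dphi) - eta * P

n = np.array([1.0, 1.0, 0.0, 0.0])     # null vector: n.n = -1 + 1 = 0
assert abs(n @ eta @ n) < 1e-12

contraction = n @ T @ n                # T_{mu nu} n^mu n^nu
# Should equal 2 (n^mu d_mu phi)^2 P_X = 2 * (2.0)^2 * 0.7 = 5.6
assert abs(contraction - 2.0 * (n @ dphi)**2 * P_X) < 1e-12
print(contraction)   # approximately 5.6
```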
On the background solution δg00 vanishes and since F ′(0) vanishes by construction, the NEC
is satisfied—of course as long as Ḣ is negative, as we are assuming. However F ′′(0) = M4 is
positive, making F ′ positive for positive δg00. As a consequence the r.h.s. of eq. (40) is pushed
towards negative values for positive δg00. So the NEC tends to be violated in the vicinity of the
background solution unless higher order terms in the expansion of F , eq. (30), save the day, see
Fig. 5. But this can only happen if their coefficient is large enough. For instance in order for
the n-th order term to keep eq. (40) positive definite its coefficient must be at least as large as
M4 (M4/M2Pl|Ḣ|)^(n−2). The smaller |Ḣ|, the closer is the background solution to violating the NEC,
and so the larger is the ‘correction’ needed not to violate it. But then if higher derivatives of F
on the background solution are large, self-interactions of π are strong. Minimally, we don’t want
π fluctuations to be strongly coupled at frequencies of order H . If this happened the semiclassical
approximation would break down, and the classical background solution could not be trusted at
all—quantum effects would be as important as the classical dynamics in determining the evolution
of the system, much like in the usual heuristic picture of eternal inflation.
Recall that the argument of F expressed in terms of the Goldstone is δg00 = −2π̇ − π̇2 +
(~∇π)2. Given an interaction term (δg00)n, it is easy to check that for fixed n the most relevant
π interactions come from taking only the linear π̇ term in δg00, i.e. (δg00)n → π̇n. Therefore, if
eq. (40) is kept positive definite thanks to the n-th order term in the Taylor expansion of F , the
ratio of the π self-interaction induced by this term and the free kinetic energy of π is
\[
\frac{M^4\left(M^4/M_{\rm Pl}^2|\dot H|\right)^{n-2}\dot\pi^{\,n}}{M^4\,\dot\pi^2}
= \left(\frac{M^4\,\dot\pi}{M_{\rm Pl}^2|\dot H|}\right)^{n-2}
\sim \left(\frac{M^2 H^2}{M_{\rm Pl}^2|\dot H|\cdot c^{3/2}}\right)^{n-2}
= \left(\frac{H^2}{\sqrt{M_{\rm Pl}^2|\dot H|\cdot c^5}}\right)^{n-2} \qquad (41)
\]
where we plugged in the size of typical quantum fluctuations at frequencies of orderH , eq. (38), and
we used the fact thatM2Pl|Ḣ| = c2M4. From eq. (41) it is evident that if we require that quantum
fluctuations be weakly coupled at frequencies of order H we automatically get the constraint
\[
|\dot H| \gg \frac{1}{c^5}\,G H^4\,, \qquad \frac{dS}{dN} \gg \frac{1}{c^5} \qquad (42)
\]
on the background classical solution.
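The chain of estimates in eq. (41) can likewise be verified symbolically. In this sketch (my own symbol names, with M2 = √(M4); all O(1) factors dropped as in the text) we take π̇ ∼ H〈π2〉^{1/2} ∼ H2/(M2 c^{3/2}) from eq. (38) together with M2Pl|Ḣ| = c2 M4, and check that one power of the expansion parameter indeed collapses to H2/√(M2Pl|Ḣ| c5):

```python
import sympy as sp

# Positive symbols (mine): M4 = M^4, Mpl2 = M_Pl^2, Hdot = |H-dot|
H, c, M4, Mpl2, Hdot = sp.symbols('H c M4 Mpl2 Hdot', positive=True)

# Typical fluctuation at freeze-out, eq. (38): pi ~ H/(M^2 c^{3/2}),
# so pidot ~ H*pi ~ H^2/(M^2 c^{3/2}), with M^2 = sqrt(M4)
pidot = H**2 / (sp.sqrt(M4) * c**sp.Rational(3, 2))

# One power of the expansion parameter in eq. (41): M^4 pidot / (Mpl^2 |Hdot|),
# eliminating |Hdot| via  Mpl^2 |Hdot| = c^2 M^4
ratio = (M4 * pidot / (Mpl2 * Hdot)).subs(Hdot, c**2 * M4 / Mpl2)

# Claimed closed form: H^2 / sqrt(Mpl^2 |Hdot| c^5), same substitution
target = (H**2 / sp.sqrt(Mpl2 * Hdot * c**5)).subs(Hdot, c**2 * M4 / Mpl2)

# Both reduce to H^2/(sqrt(M4) c^{7/2})
assert sp.simplify(ratio - target) == 0
```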
Figure 5: The null energy condition is violated whenever F ′(δg00) enters the shaded region, F ′+M2P lḢ >
0. Since F ′ starts with a strictly positive slope at the origin, to avoid this one needs that higher derivatives
of F bend F ′ away from the NEC-violating region. The smaller |Ḣ|, the stronger the needed ‘bending’.
This can make π fluctuations strongly coupled at H.
The above proof holds in all cases where the Goldstone two-derivative Lagrangian, eq. (35) is
a good description of the physics of fluctuations. However, when |Ḣ| is very small the (~∇π)2 term
appears in the Lagrangian with a very small coefficient, and one can worry that higher derivative
corrections to the π quadratic Lagrangian start dominating the gradient energy. This is exactly
what happens in ghost inflation, where the (~∇π)2 term is absent—in agreement with the vanishing
of Ḣ—and the spatial-gradient part of the quadratic Lagrangian is dominated by the (∇2π)2 term,
which enters the Lagrangian with an arbitrary coefficient [15, 18]. In such cases, at all scales where
the gradient energy is dominated by higher derivative terms one has M2Pl|Ḣ| < c2M4, where c
is the propagation speed, simply because the (∇π)2 term of eq. (35) is not the dominant source
of gradient energy, thus the sound speed is dominated by other sources. So the last equality in
eq. (41) becomes a ‘>’ sign, and our bound gets even stronger. Therefore our results equally
apply to theories where higher derivative corrections can play a significant role, like the ghost
condensate.
In summary: imposing that the NEC is not violated in the vicinity of the background solution
implies sizable non-linearities in the system. For smaller |Ḣ| the system is closer to violating the
NEC—Ḣ = 0 saturates the NEC. So the smaller |Ḣ|, the larger the non-linearities needed to
make the system healthy. Requiring that fluctuations not be strongly coupled at the scale H—a
necessary condition for the applicability of the semiclassical description—sets a lower bound on
|Ḣ|, eq. (42).
So far we neglected possible mixings of π with other light fluctuation modes. However our
conclusions are unaltered by the presence of such mixings. At any given moment of time t the
quadratic Lagrangian for fluctuations can be diagonalized,
\[
\mathcal{L} = \frac{1}{2}\sum_i \left[\dot\varphi_i^2 - c_i^2\,(\vec\nabla\varphi_i)^2\right] \qquad (43)
\]
Typical quantum fluctuations now define an ellipsoid in the ϕi’s space, whose semi-axes depend
on the individual speeds ci (see Fig. 4). The Goldstone π corresponds to some specific direction
in field space, and in any direction quantum fluctuations are bounded from below by the shortest
semi-axis. By requiring that the system does not enter eternal inflation it is straightforward to
show that our bound (39) generalizes to
\[
|\dot H| \gg \frac{1}{c_{\rm max}}\,G H^4\,, \qquad \frac{dS}{dN} \gg \frac{1}{c_{\rm max}} \qquad (44)
\]
where cmax ≤ 1 is the maximum of the ci’s. The generalization to theories in which higher spatial
derivative terms are important proceeds along the same lines as in the case of the π alone, by
imposing that the NEC is not violated along π and that π fluctuations are not strongly coupled
at H .
4 Null energy condition and thermodynamics of horizons
The proof of our central result (26) and the related interpretation of what finite de Sitter entropy
means crucially rely on the null energy condition,
\[
T_{\mu\nu}\,n^\mu n^\nu \ge 0 \qquad (45)
\]
where nµ is null. The history of general relativity knows many examples when the assumed “energy
conditions”—assumptions about the properties of physically allowed energy-momentum tensors—
turned out to be wrong. In the end, the NEC is also known to be violated both by quantum
effects (Casimir energy, Hawking evaporation) and by non-perturbative objects (orientifold planes
in string theory). So it is important to clarify to what extent the violation of the NEC needed to
get around the bound (26) is qualitatively different from these examples, and why the relevance
of the NEC in our proof is more than just a technicality.
Note first that all qualitative arguments of section 2.1, indicating that sharply defined local
observables are absent in quantum gravity, implicitly rely on the notion of positive gravitational
energy. Indeed, schematically these arguments reduce to saying that, by the uncertainty principle,
preparing arbitrarily precise clocks and rods requires concentrating indefinitely large energy in a
small volume. Then the self-gravity of clocks and rods themselves causes the volume to collapse
into a black hole and screws up the result of the measurement. Clearly this problem would not
be there if there were some negative gravitational energy available around. Using this energy one
would be able to screen the self-gravity of clocks and rods and to perform an arbitrarily precise
local measurement. NEC is a natural candidate to define what the positivity of energy means; at
the end it is the only energy condition in gravity that cannot be violated by just changing the
vacuum part of the energy-momentum, Tµν → Tµν+Λgµν . Indeed the NEC is a crucial assumption
in proving the positivity of the ADM mass in asymptotically flat spaces [53, 54].
Generically, classical field theoretic systems violating NEC suffer from either ghost or rapid
gradient instabilities. In a very broad class of systems, including conventional relativistic fluids,
these instabilities can be proven [55] to originate from the “clock and rod” sector of the system—one
of the Goldstones of the spontaneously broken space-time translations is either a ghost or has an
imaginary propagation speed. For instance, if space translations are not spontaneously broken
and only the Goldstone of time translations (the “clock” field π of section 3.2) is present, then
the instability is due to the wrong-sign gradient energy in the Goldstone Lagrangian (35) in the
NEC violating case Ḣ > 0. The examples of stable NEC violations we mentioned above avoid this
problem by either being quantum and non-local effects (Casimir energy and Hawking process) or
by projecting out the corresponding Goldstone mode (orientifold planes). This allows one to avoid
the instability, but simultaneously makes these systems incapable of providing the non-gravitating
clocks and rods.
Nevertheless, stable effective field theories describing non-gravitating systems of clocks and
rods can be constructed. This is the ghost condensate model [15] where space diffs are unbroken,
and so only the clock field appears, as well as more general models describing gravity in the Higgs
phase where Goldstones of the space diffs are present as well [16, 17]. All these setups provide
constructions of de Sitter space with an intrinsic clock variable and thus allow one to get around our
bound (26). Related to that, all these theories describe systems on the verge of violating NEC,
and small perturbations around their vacuum violate it. Nevertheless these effective theories avoid
rapid instabilities as a combined result of taking into account the higher derivative operators in
the Goldstone sector and of imposing special symmetries.
Does the existence of these counterexamples cause problems in relating the bound (26) to the
fundamental properties of de Sitter space in quantum gravity? We believe that the answer is no,
and that actually the opposite is true—this failure of the bound (26) provides a quite non-trivial
support to the idea that the bound is deeply related to de Sitter thermodynamics. The reason is
that the conventional black hole thermodynamics also fails in these models [19].
To see how this can be possible, note that, more or less by construction, all these models
spontaneously break Lorentz invariance. For instance, in the ghost condensate Minkowski or
de Sitter vacuum a non-vanishing time-like vector—the gradient of the ghost condensate field
∂µφ—is present. As usual in Lorentz violating theories the maximum propagation velocities need
not be universal for different fields, now as a consequence of the direct interactions with the ghost
condensate. Being a consistent non-linear effective theory, the ghost condensate allows one to study the
consequences of the velocity differences in a black hole background. The result is very simple—
the effective metric describing propagation of a field with v 6= 1 in a Schwarzschild background
has the Schwarzschild form with a different value of the mass. As one could have expected, the
black hole horizon appears larger for subluminal particles and smaller for superluminal ones. As a
consequence, the temperature of the Hawking radiation is not universal any longer; “slow” fields
are radiated with lower temperatures than “fast” fields.
Figure 6: In the presence of the ghost condensate black holes can have different temperatures for different
fields. This allows to perform thermodynamic transformations whose net effect is the transfer of heat Q2
from a cold reservoir at temperature T2 to a hotter one at temperature T1 (left). Then one can close a
cycle by feeding heat Q1 at the higher temperature T1 into a machine that produces work W and as a
byproduct releases heat Q2 at the lower temperature T2 (right). The net effect of the cycle is the conversion
of heat into mechanical work.
Also the horizon area does not have a universal meaning any longer, making it impossible to
define the black hole entropy just as a function of mass, angular momentum and gauge charges.
To make the conflict with thermodynamics explicit, let us consider a black hole radiating two
different non-interacting species with different Hawking temperatures TH1 > TH2. Let us bring
the black hole in thermal contact with two thermal reservoirs containing species 1 and 2 and
having temperatures T1 and T2 respectively. By tuning these temperatures one can arrange that
they satisfy
TH1 > T1 > T2 > TH2
and the thermal flux from the black hole to the first reservoir is exactly equal in magnitude to the
flux from the second reservoir to the black hole. As a result the mass of the black hole remains
unchanged and the heat is transferred from the cold to the hot body in contradiction with the
second law of thermodynamics, see Fig. 6.
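The conflict with the second law described above is just thermodynamic bookkeeping; a toy numerical version (all numbers invented for illustration) makes the sign of the total entropy change explicit:

```python
# Toy bookkeeping for the cycle of Fig. 6 (all numbers invented).
# A black hole radiates species 1 at TH1 and species 2 at TH2, with
# TH1 > T1 > T2 > TH2, so heat flows BH -> reservoir 1 and reservoir 2 -> BH.
TH1, T1, T2, TH2 = 500.0, 400.0, 300.0, 200.0  # temperatures, arbitrary units
Q = 1200.0   # heat per cycle, with the fluxes tuned so the BH mass is constant

# Net effect of the contact phase: Q moves from the cold reservoir (T2)
# to the hot one (T1), with the black hole unchanged.
dS_contact = Q / T1 - Q / T2
assert dS_contact < 0        # ordinary-matter entropy DECREASES

# Close the cycle with a reversible engine between T1 and T2: it absorbs Q
# at T1, dumps Q2 = Q*T2/T1 at T2, and outputs work W = Q - Q2 > 0.
W = Q * (1.0 - T2 / T1)
assert W > 0
print(dS_contact, W)   # -1.0 300.0: heat converted to work, total entropy down
```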
The case for violation of the second law of black hole thermodynamics in models with spontaneous
Lorentz violation is further strengthened by the observation [20] that the same conclusion can
be achieved purely at the classical level and without neglecting the interaction between the two
species. This classical process is analogous to the Penrose process. Namely, in a region between
the two horizons the energy of the “slow” field can be negative similarly to what happens in the
ergosphere of a Kerr black hole. The fast field can escape from this region making it possible
to arrange an analogue of the Penrose process. In the case at hand, this process just extracts
energy from the black hole by decreasing its mass. The mass decrease can be compensated by
throwing in more entropic stuff, which again results in an entropy decrease outside with the black
hole parameters remaining unchanged (this does not happen in the conventional Penrose process
because the angular momentum of the black hole changes).
Actually, it is not surprising at all that a violation of the NEC implies the breakdown of black
hole thermodynamics, as the NEC is needed in the proof [56] of the covariant entropy bound [57],
which is one of the basic ingredients of black hole thermodynamics and holography. Also note that
the above conflict with thermodynamics is just a consequence of spontaneous breaking of Lorentz
invariance (existence of non-gravitating clocks); in particular, it is there even if one assumes that
all fields propagate subluminally.
The second law of thermodynamics is a consequence of a few very basic properties, such as
unitarity, so it is expected to hold in any sensible quantum theory. Hence, the only chance for
Lorentz violating models to be embedded in a consistent microscopic theory is if black holes are not
actually black in these theories, so that the observer can measure both the inside and the outside
entropy and there is no need for a purely outside counting as provided by the Bekenstein formula
(this is indeed what happens if space diffs are broken as well, due to the existence of instantaneous
interactions). In any case, this definitely puts the ghost condensate with other Lorentz violating
models in a completely different ballpark from GR as far as the physics of horizons goes. That is
why we find it encouraging for a thermodynamical interpretation of the bound (26) that it is also
violated by the ghost condensate.
5 Open questions
We have seen that all NEC-obeying models of inflation that do not eternally inflate increase the
de Sitter entropy at a minimal rate, dS/dN ≫ 1, and therefore cannot sustain an approximate de Sitter
phase for longer than N ∼ S e-foldings. This gives an observational way of determining whether
or not inflation is eternal. For instance, if our current accelerating epoch lasts for longer than
∼ 10^130 years, or if (1 + w) is smaller than 10^−120, our current inflationary epoch is eternal. While
these are somewhat challenging measurements, they can at least be done at timescales shorter
than the recurrence time!
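Those numbers follow from applying the bound dS/dN ≫ 1 to the observed cosmological constant. Below is a rough numerical sketch (my own input values; order-of-magnitude only, and the resulting exponents differ from the quoted ones by the O(1) factors and conventions dropped here):

```python
import math

# Current de Sitter entropy S = A/4G ~ 8 pi^2 (Mpl/H0)^2 in natural units
Mpl = 2.4e18   # reduced Planck mass, GeV (assumed input)
H0 = 1.5e-42   # Hubble rate today, GeV (assumed input)
S = 8 * math.pi**2 * (Mpl / H0)**2

# Non-eternal acceleration can last at most N ~ S e-foldings, a time ~ S/H0
hubble_time_yr = 1.4e10                # H0^-1 in years, roughly
t_max_yr = S * hubble_time_yr

# And dS/dN >> 1 translates into (1+w) >~ 1/S for the current epoch
one_plus_w_min = 1.0 / S

print(f"S ~ 10^{math.log10(S):.0f}")                    # ~10^122
print(f"t_max ~ 10^{math.log10(t_max_yr):.0f} yr")      # ~10^132 yr
print(f"(1+w)_min ~ 10^{math.log10(one_plus_w_min):.0f}")
```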
This bound implies that an observer exiting into flat space in the asymptotic future cannot
detect more than e^S independent modes coming from inflation, which matches nicely with the idea
of de Sitter space having a finite-dimensional Hilbert space of dimension ∼ e^S. Although we are
not able to provide a microscopic counting of de Sitter entropy, we can at least give an operational
meaning to the number of de Sitter degrees of freedom. The NEC is very important in proving
our bound; indeed the NEC is crucial in existing derivations of various holographic bounds, and
indeed consistent EFTs that violate the NEC like the ghost condensate are also known to violate
the thermodynamics of black hole horizons. This suggests that our bound is related to holography.
We can view different inflationary models as possible regularizations of pure de Sitter space
in which a semiclassical analysis in terms of a local EFT is reliable. Then our universal bound
suggests that any semiclassical, local description of de Sitter space cannot be trusted past ∼ S
Hubble times and further than ∼ eS Hubble radii in space—perhaps a more covariant statement
Figure 7: A possible covariant generalization of our bound. Given an observer’s worldline and a“start”
and an “end” times (red dots), one identifies the portion of de Sitter spacetime that is detectable by
the observer in this time interval (shaded regions). Then EFT properly describes such a region only for
spacetime volumes smaller than ∼ e^S H^−4. If applied to eternal de Sitter (left) this gives the Poincaré
recurrence time e^S H^−1 times the causal patch volume H^−3. If applied to an FRW observer after inflation
(right) it gives S e-foldings times e^S Hubble volumes.
is that the largest four-volume one can consistently describe in terms of a local EFT is of order
e^S H^−4, see Fig. 7. Notice that this is analogous to what happens for a black hole: in order not to
violate unitarity the EFT description must break down after a time of order S Schwarzschild times,
when more than e^S modes must be invoked behind the horizon to accommodate the entanglement
entropy.
Ultimately we are interested in eternal inflation, in particular in its effectiveness in populating
the string landscape. In this case the relevant mechanism is false vacuum eternal inflation, in
which there is no classically rolling scalar to begin with, and the evolution of the Universe is
governed by quantum tunneling. Our analysis does not directly apply here—there is no classical
non-eternal version of this kind of inflation. In particular, in the slow roll eternal inflation case an
asymptotic future observer only has access to the late phase of inflation, when the Universe is not
eternally inflating. The eternal inflation part corresponds to density perturbations of order unity,
thus making the Hubble volume surrounding the observer collapse when they become observable.
As a consequence the number of possible independent measurements such an observer can make
is always bounded by e^S.
In the false vacuum eternal inflation case instead there can be asymptotic observers who live in
a zero cosmological constant bubble. This is the case if the theory does not have negative energy
vacua, or if the zero energy ones are supersymmetric, and therefore perfectly stable. Such zero-
energy bubbles are occasionally hit from outside by small bubbles that form in their vicinity, but
these collisions are not very energetic and do not perturb significantly the bubble evolution—the
Figure 8: (Left) In false vacuum eternal inflation there seems to be no limit to the spacetime volume of the
outside de Sitter space an asymptotic flat-space observer can detect. The spacetime volume diverges in
the shaded corners. Bubble collisions don’t alter this conclusion; the pattern of collisions is simply depicted
on a Poincaré disk representation of the hyperbolic FRW spatial slices (Right). Maloney, Shenker and
Susskind argue that observers in the bubbles can make an infinite number of observations and arrive at
sharply defined observables.
total probability of being hit and eaten by a large bubble is small, of order Γ H^−4 ≪ 1, where
Γ is the typical transition rate per unit volume. By measuring the remnants of such collisions
the observer inside the bubble can gather information about the outside de Sitter space and the
landscape of vacua [58]. Then, in this case these measurements play the same role in giving
an operational definition of de Sitter degrees of freedom as density perturbations did in slow-
roll inflation. But now there seems to be no limit to how many independent measurements an
asymptotic observer can make. The expected total number of bubble collisions experienced by a
zero-energy bubble is infinite, and with very good probability none of these collisions destroys the
bubble. It is true that as time goes on for such an observer it becomes more and more difficult to
perform these measurements—collisions get rarer and rarer, and their observational consequences
get more and more redshifted. Still we have not been able to find a physical reason why these
observations cannot be done, at least in principle. The asymptotic observer in the bubble can in
principle perform infinitely many independent measurements, and Maloney, Shenker and Susskind
argue that these might give sharply defined observables [58]. The case of collisions with negative
vacuum energy supersymmetric bubbles is particularly interesting; in this case, as the boundary
of the zero energy bubble is covered by an infinite fractal of domain-wall horizons [59], the pattern
of bubble collisions with other supersymmetric vacua as seen on the hyperbolic spatial slices of the
bubble FRW Universe is shown in Fig. 8, where the hyperbolic space is represented as a Poincaré
disk; at early times the walls are at the boundary while at infinite time they asymptote to fixed
Poincaré co-ordinates as shown. The pattern of collisions is scale-invariant, reflecting the origin of
the bubbles in the underlying de Sitter space. Still, it appears that an observer away from these
walls can make an infinite number of observations. This apparently violates the expectation that
one should not be able to assign more than e^S independent states to de Sitter space. Perhaps false
vacuum eternal inflation is a qualitatively different regularization of de Sitter space than offered
by the class of inflationary models we studied for our bound. There may be some more subtle
effect that prevents the bubble observer from making observations with better than e^−S accuracy
of the ambient de Sitter space. Or perhaps the limitation is correct, and it is the effective field
theory description that is breaking down when more than e^S observations are allowed, much as in
black hole evaporation. We believe these issues deserve further investigation.
Acknowledgments
We thank Tom Banks, Raphael Bousso, Ben Freivogel, Steve Giddings, David Gross, Don Marolf,
Joe Polchinski, Leonardo Senatore, and Andy Strominger for stimulating discussions. We es-
pecially thank Juan Maldacena for clarifying many aspects of the information paradox and de
Sitter entropy, and also Alex Maloney and Steve Shenker for extensive discussions of their work
in progress with Susskind. The work of Enrico Trincherini is supported by an INFN postdoctoral
fellowship.
References
[1] R. Bousso and J. Polchinski, “Quantization of four-form fluxes and dynamical neutralization
of the cosmological constant,” JHEP 0006, 006 (2000) [arXiv:hep-th/0004134].
[2] M. R. Douglas and S. Kachru, “Flux compactification,” arXiv:hep-th/0610102.
[3] A. Vilenkin, “The Birth Of Inflationary Universes,” Phys. Rev. D 27, 2848 (1983).
[4] A. D. Linde, “Eternally Existing Selfreproducing Chaotic Inflationary Universe,” Phys. Lett.
B 175, 395 (1986).
[5] G. ’t Hooft, “The black hole interpretation of string theory,” Nucl. Phys. B 335, 138 (1990).
[6] L. Susskind, L. Thorlacius and J. Uglum, “The Stretched Horizon And Black Hole Comple-
mentarity,” Phys. Rev. D 48, 3743 (1993) [arXiv:hep-th/9306069].
[7] T. Banks and W. Fischler, “M-theory observables for cosmological space-times,” arXiv:hep-
th/0102077.
[8] T. Banks, W. Fischler and S. Paban, “Recurrent nightmares?: Measurement theory in de
Sitter space,” JHEP 0212, 062 (2002) [arXiv:hep-th/0210160].
[9] S. B. Giddings, D. Marolf and J. B. Hartle, “Observables in effective gravity,” Phys. Rev. D
74, 064018 (2006) [arXiv:hep-th/0512200].
[10] S. B. Giddings, “Quantization in black hole backgrounds,” arXiv:hep-th/0703116.
[11] J. D. Bekenstein, “A Universal Upper Bound On The Entropy To Energy Ratio For Bounded
Systems,” Phys. Rev. D 23, 287 (1981).
[12] G. W. Gibbons and S. W. Hawking, “Cosmological Event Horizons, Thermodynamics, And
Particle Creation,” Phys. Rev. D 15, 2738 (1977).
[13] A. Albrecht, N. Kaloper and Y. S. Song, “Holographic limitations of the effective field theory
of inflation,” arXiv:hep-th/0211221.
[14] T. Banks and W. Fischler, “An upper bound on the number of e-foldings,” arXiv:astro-
ph/0307459.
[15] N. Arkani-Hamed, H. C. Cheng, M. A. Luty and S. Mukohyama, “Ghost condensation and a
consistent infrared modification of gravity,” JHEP 0405, 074 (2004) [arXiv:hep-th/0312099].
[16] V. A. Rubakov, “Lorentz-violating graviton masses: Getting around ghosts, low strong cou-
pling scale and VDVZ discontinuity,” arXiv:hep-th/0407104.
[17] S. L. Dubovsky, “Phases of massive gravity,” JHEP 0410, 076 (2004) [arXiv:hep-th/0409124].
[18] N. Arkani-Hamed, P. Creminelli, S. Mukohyama and M. Zaldarriaga, “Ghost inflation,” JCAP
0404, 001 (2004) [arXiv:hep-th/0312100].
[19] S. L. Dubovsky and S. M. Sibiryakov, “Spontaneous breaking of Lorentz invariance, black
holes and perpetuum mobile of the 2nd kind,” Phys. Lett. B 638, 509 (2006) [arXiv:hep-
th/0603158].
[20] C. Eling, B. Z. Foster, T. Jacobson and A. C. Wall, “Lorentz violation and perpetual motion,”
arXiv:hep-th/0702124.
[21] C. Vafa, “The string landscape and the swampland,” arXiv:hep-th/0509212.
[22] N. Arkani-Hamed, L. Motl, A. Nicolis and C. Vafa, “The string landscape, black holes and
gravity as the weakest force,” arXiv:hep-th/0601001.
[23] B. S. DeWitt, “Quantum theory of gravity. II. The manifestly covariant theory,” Phys. Rev.
162, 1195 (1967).
[24] B. S. DeWitt, “Quantum Theory of Gravity. I. The Canonical Theory,” Phys. Rev. 160, 1113 (1967);
J. A. Wheeler, “Superspace and the nature of quantum geometrodynamics,” in C. DeWitt and
J. A. Wheeler, eds., Battelle Rencontres: 1967 Lectures in Mathematics and Physics (W. A.
Benjamin, New York, 1968).
[25] V. G. Lapchinsky and V. A. Rubakov, “Canonical Quantization Of Gravity And Quantum
Field Theory In Curved Space-Time,” Acta Phys. Polon. B 10, 1041 (1979).
[26] T. Banks, “T C P, Quantum Gravity, The Cosmological Constant And All That..,” Nucl.
Phys. B 249, 332 (1985).
[27] T. Banks, W. Fischler and L. Susskind, “Quantum Cosmology In (2+1)-Dimensions And
(3+1)-Dimensions,” Nucl. Phys. B 262, 159 (1985).
[28] F. Cerulus and A. Martin, “A Lower Bound For Large Angle Elastic Scattering At High
Energies,” Phys. Lett. 8, 80 (1964);
A. Martin, Nuovo Cimento 37, 671 (1965).
[29] P. F. Mende and H. Ooguri, “Borel Summation Of String Theory For Planck Scale Scatter-
ing,” Nucl. Phys. B 339, 641 (1990).
[30] G. Veneziano, “String-theoretic unitary S-matrix at the threshold of black-hole production,”
JHEP 0411, 001 (2004) [arXiv:hep-th/0410166].
0704.1815 | Kondo-lattice screening in a d-wave superconductor | Kondo-lattice screening in a d-wave superconductor
Daniel E. Sheehy1,2 and Jörg Schmalian2
1Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803
2Department of Physics and Astronomy, Iowa State University and Ames Laboratory, Ames IA 50011
(Dated: November 28, 2018)
We show that local moment screening in a Kondo lattice with d-wave superconducting conduction
electrons is qualitatively different from the corresponding single Kondo impurity case. Despite the
conduction-electron pseudogap, Kondo-lattice screening is stable if the gap amplitude obeys ∆ < √(TKD), in contrast to the single impurity condition ∆ < TK (where TK is the Kondo temperature
for ∆ = 0 and D is the bandwidth). Our theory explains the heavy electron behavior in the d-wave
superconductor Nd2−xCexCuO4.
I. INTRODUCTION
The physical properties of heavy-fermion metals are
commonly attributed to the Kondo effect, which causes
the hybridization of local 4-f and 5-f electrons with itin-
erant conduction electrons. The Kondo effect for a single
magnetic ion in a metallic host is well understood1. In
contrast, the physics of the Kondo lattice, with one mag-
netic ion per crystallographic unit cell, is among the most
challenging problems in correlated electron systems. At
the heart of this problem is the need for a deeper un-
derstanding of the stability of collective Kondo screen-
ing. Examples are the stability with respect to com-
peting ordered states (relevant in the context of quan-
tum criticality2) or low conduction electron concentra-
tion (as discussed in the so-called exhaustion problem3).
In these cases, Kondo screening of the lattice is believed
to be more fragile in comparison to the single-impurity
case. In this paper, we analyze the Kondo lattice in a
host with a d-wave conduction electron pseudogap4. We
demonstrate that Kondo lattice screening is then sig-
nificantly more robust than single impurity screening.
The unexpected stabilization of the state with screened
moments is a consequence of the coherency of the hy-
bridized heavy Fermi liquid, i.e. it is a unique lattice ef-
fect. We believe that our results are of relevance for the
observed large low temperature heat capacity and sus-
ceptibility of Nd2−xCexCuO4, an electron-doped cuprate
superconductor5.
The stability of single-impurity Kondo screening has
been investigated by modifying the properties of the con-
duction electrons. Most notably, beginning with the work
of Withoff and Fradkin (WF)6, the suppression of the
single-impurity Kondo effect by the presence of d-wave
superconducting order has been studied. A variety of an-
alytic and numeric tools have been used to investigate the
single impurity Kondo screening in a system with conduc-
tion electron density of states (DOS) ρ (ω) ∝ |ω|r, with
variable exponent r (see Refs. 6,7,8,9,10,11,12). Here,
r = 1 corresponds to the case of a d-wave superconduc-
tor, i.e. is the impurity version of the problem discussed
in this paper. For r ≪ 1 the perturbative renormaliza-
tion group of the ordinary13 Kondo problem (r = 0), can
be generalized6. While the Kondo coupling J is marginal,
FIG. 1: The solid line is the critical pairing strength
∆c for T → 0 [Eq. (33)] separating the Kondo screened
(shaded) and local moment regimes in the Kondo-lattice
model Eq. (4). Following well-known results6,7 (see also Ap-
pendix A), the single-impurity Kondo effect is only stable for
∆ ≲ D exp(−2D/J) ∼ TK (dashed).
a fixed point value J∗ = r/ρ0 emerges for finite but small
r. Here, ρ0 is the DOS for ω = D with bandwidth D.
Kondo screening only occurs for J > J∗ and the transition
from the unscreened doublet state to a screened singlet
ground state is characterized by critical fluctuations in
time.
Numerical renormalization group (NRG) calculations
demonstrated the existence of such an impurity quan-
tum critical point even if r is not small but also
revealed that the perturbative renormalization group
breaks down, failing to correctly describe this critical
point9. For r = 1, Vojta and Fritz demonstrated that
the universal properties of the critical point can be un-
derstood using an infinite-U Anderson model where the
level crossing of the doublet and singlet ground states
is modified by a marginally irrelevant hybridization be-
tween those states10,11. NRG calculations further demon-
strate that the non-universal value for the Kondo cou-
pling at the critical point is still given by J∗ ≃ r/ρ0,
even if r is not small8. This result applies to the case of
broken particle-hole symmetry, relevant for our compari-
son with the Kondo lattice. In the case of perfect particle-hole symmetry it holds that8 J∗ → ∞ for r ≥ 1/2.
The result J∗ ≃ r/ρ0 may also be obtained from a large
N mean field theory6, which otherwise fails to properly
describe the critical behavior of the transition, in partic-
ular if r is not small. The result for J∗ as the transition
between the screened and unscreened states relies on the
assumption that the DOS behaves as ρ (ω) ∝ |ω|r all the
way to the bandwidth. However, in a superconductor
with nodes we expect that ρ (ω) ≃ ρ0 is essentially con-
stant for |ω| > ∆, with gap amplitude ∆, altering the
predicted location of the transition between the screened
and unscreened states. To see this, we note that, for en-
ergies above ∆, the approximately constant DOS implies
the RG flow will be governed by the standard metallic
Kondo result1,13 with r = 0, renormalizing the Kondo
coupling to J̃ = J/ (1− Jρ0 lnD/∆) with the effective
bandwidth ∆ (see Ref. 9). Then, we can use the above
result in the renormalized system, obtaining that Kondo
screening occurs for J̃ρ0 ≳ r, which is easily shown to be
equivalent to the condition ∆ ≲ ∆∗ with
∆∗ = e^(1/r) TK,    (1)

where

TK = D exp[−1/(ρ0J)],    (2)
is the Kondo temperature of the system in the absence
of pseudogap (which we are using here to clarify the typ-
ical energy scale for ∆∗). Setting r = 1 to establish the
implication of Eq. (1) for a d-wave superconductor, we
see that, due to the d-wave pseudogap in the density of
states, the conduction electrons can only screen the im-
purity moment if their gap amplitude is smaller than a
critical value of order the corresponding Kondo temper-
ature TK for constant density of states. In particular,
for ∆ large compared to the (often rather small) energy
scale TK, the local moment is unscreened, demonstrating
the sensitivity of the single impurity Kondo effect with
respect to the low energy behavior of the host.
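The algebra behind the threshold Eq. (1) is short; a sketch (using the flow result J̃ = J/(1 − Jρ0 ln(D/∆)) and TK = D exp[−1/(ρ0J)] quoted above):

```latex
\rho_0\tilde J \gtrsim r
\;\Longleftrightarrow\;
\frac{\rho_0 J}{1-\rho_0 J\ln(D/\Delta)}\gtrsim r
\;\Longleftrightarrow\;
\frac{1}{r}+\ln\frac{D}{\Delta}\gtrsim\frac{1}{\rho_0 J}
\;\Longleftrightarrow\;
\Delta \lesssim D\,e^{1/r}\,e^{-1/(\rho_0 J)} = e^{1/r}\,T_K \equiv \Delta^{*}.
```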
Given the complexity of the behavior for a single im-
purity in a conduction electron host with pseudogap, it
seems hopeless to study the Kondo lattice. We will show
below that this need not be the case and that, moreover,
Kondo screening is stable far beyond the single-impurity
result Eq. (1), as illustrated in Fig. 1 (the dashed line
in this plot is Eq. (1) with ρ0 = 1/2D). To do this,
we utilize a the large-N mean field theory of the Kondo
lattice to demonstrate that the transition between the
screened and unscreened case is discontinuous. Thus, at
least within this approach, no critical fluctuations oc-
cur (in contrast to the single-impurity case discussed
above). More importantly, our large-N analysis also finds
that the stability regime of the Kondo screened lattice
is much larger than that of the single impurity. Thus,
the screened heavy-electron state is more robust and the
local-moment phase only emerges if the conduction elec-
tron d-wave gap amplitude obeys
∆ > ∆c ≃ √(TKD) ≫ TK,    (3)
FIG. 2: (Color online) The solid line is a plot of the Kondo
temperature TK(∆), above which V = 0 (and Kondo screen-
ing is destroyed), normalized to its value at ∆ = 0 [Eq. (14)],
as a function of the d-wave pairing amplitude ∆, for the
case of J = 0.3D and µ = −0.1D. With these parameters,
TK(0) = 0.0014D, and ∆c, the point where TK(∆) reaches
zero, is 0.14D [given by Eq. (33)]. The dashed line indicates a
spinodal, along which the term proportional to V 2 in the free
energy vanishes. At very small ∆ < 2.7 × 10−4D, where the
transition is continuous, the dashed line coincides with the
solid line.
with D the conduction electron bandwidth. Below, we
shall derive a more detailed expression for ∆c; in Eq. (3)
we are simply emphasizing that ∆c is large compared to
TK [and, hence, Eq. (1)].
In addition, we find that for ∆ < ∆c, the renormalized
mass only weakly depends on ∆, except for the region
close to ∆c. We give a detailed explanation for this en-
hanced stability of Kondo lattice screening, demonstrat-
ing that it is a direct result of the opening of a hybridiza-
tion gap in the heavy Fermi liquid state. Since the re-
sult was obtained using a large-N mean field theory we
stress that such an approach is not expected to properly
describe the detailed nature close to the transition. It
should, however, give a correct order of magnitude result
for the location of the transition.
To understand the resilience of Kondo-lattice screen-
ing, recall that, in the absence of d-wave pairing, it is
well known that the lattice Kondo effect (and concomi-
tant heavy-fermion behavior) is due to a hybridization of
the conduction band with an f -fermion band that rep-
resents excitations of the lattice of spins. A hybridized
Fermi liquid emerges from this interaction. We shall see
that, due to the coherency of the Fermi liquid state, the
resulting hybridized heavy fermions are only marginally
affected by the onset of conduction-electron pairing. This
weak proximity effect, with a small d-wave gap ampli-
tude ∆f ≃ ∆TK/D for the heavy fermions, allows the
Kondo effect in a lattice system to proceed via f -electron-
dominated heavy-fermion states that screen the local mo-
ments, with such screening persisting up to much larger
values of the d-wave pairing amplitude than implied by
the single impurity result6,7, as depicted in Fig. 1 (which
applies at low T ). A typical finite-T phase diagram is
shown in Fig. 2.
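The scale of this weak proximity effect follows in one line from the T = 0 mean-field relations λ0 = V0²/D and TK ∼ λ0 of Sec. III:

```latex
\Delta_f=\left(\frac{\lambda}{V}\right)^{2}\Delta
=\frac{\lambda}{V^{2}}\,\lambda\,\Delta
\simeq\frac{\lambda}{D}\,\Delta
\sim\frac{T_K}{D}\,\Delta .
```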
Our theory directly applies to the electron-doped
cuprate Nd2−xCexCuO4, possessing both d-wave
superconductivity14,15 with Tc ≃ 20K and heavy
fermion behavior below5 TK ∼ 2 − 3K. The latter
is exhibited in a large linear heat capacity coefficient
γ ≃ 4J/(mol×K2) together with a large low-frequency
susceptibility χ with Wilson ratio R ≃ 1.6. The lowest
crystal field state of Nd3+ is a Kramers doublet, well
separated from higher crystal field levels16, supporting
Kondo lattice behavior of the Nd-spins. The supercon-
ducting Cu-O-states play the role of the conduction
electrons. Previous theoretical work on Nd2−xCexCuO4
discussed the role of conduction electron correlations17.
Careful investigations show that the single ion Kondo
temperature slightly increases in systems with elec-
tronic correlations18,19, an effect essentially caused by
the increase in the electronic density of states of the
conduction electrons. However, the fact that these con-
duction electrons are gapped has not been considered,
even though the Kondo temperature is significantly
smaller than the d-wave gap amplitude ∆ ≃ 3.7meV
(See Ref. 20). We argue that Kondo screening in
Nd2−xCexCuO4 with TK ≪ ∆ can only be understood
in terms of the mechanism discussed here.
We add for completeness that an alternative sce-
nario for the large low temperature heat capacity of
Nd2−xCexCuO4 is based on very low lying spin wave
excitations21. While such a scenario cannot account for
a finite value of C (T ) /T as T → 0, it is consistent
with the shift in the overall position of the Nd-crystal
field states upon doping. However, an analysis of the
spin wave contribution of the Nd-spins shows that for
realistic parameters C (T ) /T vanishes rapidly below the
Schottky anomaly22, in contrast to experiments. Thus
we believe that the large heat capacity and susceptibility
of Nd2−xCexCuO4 at low temperatures originates from
Kondo screening of the Nd-spins.
Despite its relevance for the d-wave superconductor
Nd2−xCexCuO4, we stress that our theory does not ap-
ply to heavy electron d-wave superconductors, such as
CeCoIn5 (see Ref. 23), in which the d-wave gap is not
a property of the conduction electron host, but a more
subtle aspect of the heavy electron state itself. The latter
gives rise to a heat capacity jump at the superconducting
transition ∆C (Tc) that is comparable to γTc, while in
our theory ∆C (Tc) ≪ γTc holds.
II. MODEL
The principal aim of this paper is to study the screen-
ing of local moments in a d-wave superconductor. Thus,
we consider the Kondo lattice Hamiltonian, possessing lo-
cal spins (Si) coupled to conduction electrons (ckα) that
are subject to a pairing interaction:
H = Σ_{k,α} ξk c†_{kα}c_{kα} + (J/2) Σ_{i,α,β} Si · c†_{iα}σ_{αβ}c_{iβ} + Upair.    (4)
Here, J is the exchange interaction between conduction
electrons and local spins and ξk = ǫk − µ with ǫk the
conduction-electron energy and µ the chemical potential.
The pairing term
Upair = − Σ_{k,k′} Ukk′ c†_{k↑}c†_{−k↓}c_{−k′↓}c_{k′↑},    (5)
is characterized by the attractive interaction between
conduction electrons Ukk′ . We shall assume the latter
stabilizes d-wave pairing with a gap ∆k = ∆ cos 2θ, with
θ the angle around the conduction-electron Fermi surface.
We are particularly interested in the low-temperature
strong-coupling phase of this model, which can be studied
by extending the conduction-electron and local-moment
spin symmetry to SU(N) and focusing on the large-N
limit24. In case of the single Kondo impurity, the large-
N approach is not able to reproduce the critical behavior
at the transition from a screened to an unscreeened state.
However, it does correctly determine the location of the
transition, i.e. the non-universal value for the strength of
the Kondo coupling where the transition from screened
to unscreened impurity takes place8. Since the location
of the transition and not the detailed nature of the tran-
sition is the primary focus of this paper, a mean field
theory is still useful.
Although the physical case corresponds to N = 2, the
large-N limit yields a valid description of the heavy Fermi
liquid Kondo-screened phase25. We thus write the spins
in terms of auxiliary f fermions as Si · σαβ → f †iαfiβ −
δαβ/2, subject to the constraint
Σ_α f†_{iα}f_{iα} = N/2.    (6)
To implement the large-N limit, we rescale the ex-
change coupling via J/2 → J/N and the conduction-
electron interaction as Uk,k′ → s−1Uk,k′ [where N ≡
(2s + 1)]. The utility of the large-N limit is that the
(mean-field) stationary-phase approximation to H is be-
lieved to be exact at large N . Performing this mean field
decoupling of H yields
HMF = Σ_{k,m=−s..s} [ ξk c†_{km}c_{km} + V (f†_{km}c_{km} + h.c.) + λ f†_{km}f_{km} ]
− Σ_k Σ_{m=1/2..s} ∆k (c†_{km}c†_{−k−m} + c_{−k−m}c_{km}) + E0,    (7)
with E0 a constant in the energy that is defined below.
The pairing gap, ∆k, and the hybridization between con-
duction and f -electrons, V , result from the mean field de-
coupling of the pairing and Kondo interactions, respec-
tively. The hybridization V (that we took to be real)
measures the degree of Kondo screening (and can be di-
rectly measured experimentally26) and λ is the Lagrange
multiplier that implements the above constraint, playing
the role of the f -electron level. The free energy F of this
single-particle problem can now be calculated, and has
the form:
F (V, λ, ∆k) = Σ_{k,k′} ∆k∆k′ U⁻¹_{k,k′} + NV²/J − Nλ/2
+ Σ_{k,α=±} { ½[(ξk + λ) − Ekα] − T ln(1 + e^{−βEkα}) },    (8)
where T = β−1 is the temperature. The first three terms
are the explicit expressions for E0 in Eq. (7), and Ek± is
Ek± = √( ½[ ∆k² + λ² + 2V² + ξk² ± √Sk ] ),    (9)

Sk = (∆k² + ξk² − λ²)² + 4V² [ (ξk + λ)² + ∆k² ],
describing the bands of our d-wave paired heavy-fermion
system.
The phase behavior of this Kondo lattice system for
given values of T , J and µ is determined by finding points
at which F is stationary with respect to the variational
parameters V , λ, and ∆k. For simplicity, henceforth we
take ∆k as given (and having d-wave symmetry as noted
above) with the goal of studying the effect of nonzero
pairing on the formation of the heavy-fermion metal char-
acterized by V and λ that satisfy the stationarity condi-
tions
∂F/∂V = 0,    (10a)
∂F/∂λ = 0,    (10b)
with the second equation enforcing the constraint,
Eq. (6). We shall furthermore restrict attention to µ < 0
(i.e., a less than half-filled conduction band).
Before we proceed we point out that the magnitude of
the pairing gap near the unpaired heavy-fermion Fermi
surface (located at ξ = V 2/λ) is remarkably small. Tay-
lor expanding Ek− near this point, we find
Ek− ≃ (λ/V)² √( (ξ − V²/λ)² + ∆k² ),    (11)

giving a heavy-fermion gap ∆fk = (λ/V)² ∆k [with amplitude ∆f = ∆(λ/V)²].
≪ 1 such that ∆fk ≪ ∆k. In Fig. 3, we plot the
lower heavy-fermion band for the unpaired case ∆k = 0
(dashed line) along with ±Ek− for the case of finite ∆k
(solid lines) in the vicinity of the unpaired heavy-fermion
Fermi surface, showing the small heavy-fermion gap ∆fk.
Thus, we find a weak proximity effect in which the heavy-
fermion quasiparticles, which are predominantly of f -
character, are only weakly affected by the presence of
d-wave pairing in the conduction electron band.
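This weak proximity effect can be cross-checked numerically. The sketch below (plain Python; the explicit form of Ek± from Eq. (9) and the Fig. 4 parameter values λ = 0.04D, V = 0.2D are taken as given) verifies that for ∆ = 0 the quasiparticle energies reduce to the magnitudes of the hybridized bands ½[ξ + λ ± √((ξ − λ)² + 4V²)], and that at the heavy Fermi surface ξ = V²/λ the remaining gap is close to (λ/V)²∆:

```python
import math

def E_paired(xi, lam, V, Dk, sign):
    # E_{k±}^2 = [Dk^2 + lam^2 + 2V^2 + xi^2 ± sqrt(S)]/2, with
    # S = (Dk^2 + xi^2 - lam^2)^2 + 4 V^2 [(xi + lam)^2 + Dk^2]  (Eq. 9)
    S = (Dk**2 + xi**2 - lam**2)**2 + 4*V**2*((xi + lam)**2 + Dk**2)
    val = 0.5*(Dk**2 + lam**2 + 2*V**2 + xi**2 + sign*math.sqrt(S))
    return math.sqrt(max(val, 0.0))   # guard against tiny negative rounding

def E_hyb(xi, lam, V, sign):
    # unpaired hybridized bands: [xi + lam ± sqrt((xi - lam)^2 + 4V^2)]/2
    return 0.5*(xi + lam + sign*math.sqrt((xi - lam)**2 + 4*V**2))

lam, V, Delta = 0.04, 0.2, 0.1        # in units of D (values of Fig. 4)

# (i) Delta = 0: quasiparticle energies equal |E_pm| of the Fermi liquid
for xi in (-0.8, -0.1, 0.0, 0.3, 1.2):
    paired   = sorted(E_paired(xi, lam, V, 0.0, s) for s in (+1, -1))
    unpaired = sorted(abs(E_hyb(xi, lam, V, s)) for s in (+1, -1))
    assert all(abs(a - b) < 1e-9 for a, b in zip(paired, unpaired))

# (ii) gap at the heavy Fermi surface xi = V^2/lam vs (lam/V)^2 * Delta
gap = E_paired(V**2/lam, lam, V, Delta, -1)
print(gap, (lam/V)**2 * Delta)        # both of order 4e-3
```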
FIG. 3: The dashed line is the lower heavy-fermion band
(crossing zero at the heavy-fermion Fermi surface) for the
unpaired (∆ = 0) case and the solid lines are ±Ek− for ∆k =
0.1D, showing a small f-electron gap ∆fk ≃ 0.014D.
FIG. 4: Plot of the energy bands E+(ξ) (top curve) and
E−(ξ) (bottom curve), defined in Eq. (13), in the heavy
Fermi liquid state (for ∆ = 0), for the case V = 0.2D and
λ = 0.04D, that has a heavy-fermion Fermi surface near
ξ = D and an experimentally-measurable hybridization gap26
(the minimum value of E+ − E−, i.e., the direct gap) equal
to 2V ∼ √(TKD). Note, however, the indirect gap is λ ∼ TK.
III. KONDO LATTICE SCREENING
A. Normal conduction electrons
A useful starting point for our analysis is to recall
the well-known27 unpaired (∆ = 0) limit of our model.
By minimizing the correpsonding free energy [simply the
∆ = 0 limit of Eq. (8)], one obtains, at low temperatures,
that the Kondo screening of the local moments is repre-
sented by the nontrivial stationary point of F at V = V0
and λ = λ0 = V0²/D, with

V0² = D(D + µ) exp(−2D/J),    (12)
Here we have taken the conduction electron density of
states to be a constant, ρ0 = (2D)⁻¹, with 2D the
bandwidth. The resulting phase is a metal accommo-
dating both the conduction and f -electrons with a large
density of states ∝ λ0⁻¹ near the Fermi surface at
ǫk ≃ µ + V 20 /λ0, revealing its heavy-fermion character.
In Fig. 4, we plot the energy bands
E±(ξk) = ½ [ ξk + λ ± √((ξk − λ)² + 4V²) ],    (13)
of this heavy Fermi liquid in the low-T limit.
With increasing T , the stationary V and λ decrease
monotonically, vanishing at the Kondo temperature
TK = √(D² − µ²) exp(−2D/J)    (14)

= √( (D − µ)/(D + µ) ) λ0.    (15)
Here, the second line is meant to emphasize that TK is
of the same order as the T = 0 value of the f -fermion
chemical potential λ0, and therefore TK ≪ V0, i.e., TK
is small compared to the zero-temperature hybridization
energy V0.
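As a numerical illustration (plain Python; the closed forms used below, V0² = D(D + µ)e^(−2D/J), λ0 = V0²/D and TK = √(D² − µ²) e^(−2D/J) with ρ0 = (2D)⁻¹, are reconstructions consistent with Eq. (15) and with the TK(0) ≈ 0.0014D quoted in Fig. 2), the parameters J = 0.3D, µ = −0.1D of Fig. 2 give TK ∼ 1.3×10⁻³D ≪ V0:

```python
import math

D = 1.0                       # half-bandwidth sets the energy unit
J, mu = 0.3, -0.1             # parameters used in Figs. 2 and 5

# assumed closed forms (rho_0 = 1/2D):
V0   = math.sqrt(D*(D + mu)*math.exp(-2*D/J))    # T = 0 hybridization
lam0 = V0**2/D                                   # f-level lambda_0
TK   = math.sqrt(D**2 - mu**2)*math.exp(-2*D/J)  # Kondo temperature

# Eq. (15)-type identity: TK = sqrt((D - mu)/(D + mu)) * lambda_0
assert abs(TK - math.sqrt((D - mu)/(D + mu))*lam0) < 1e-12

print(V0, lam0, TK)   # TK ~ 1.3e-3 D, while V0 ~ 3.4e-2 D >> TK
```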
It is well established that the phase transition-like be-
havior of V at TK is in fact a crossover once N is finite1,24.
Nevertheless, the large-N approach yields the correct or-
der of magnitude estimate for TK and provides a very use-
ful description of the strong coupling heavy-Fermi liquid
regime, including the emergence of a hybridization gap
in the energy spectrum.
B. d-wave paired conduction electrons
Next, we analyze the theory in the presence of d-wave
pairing with gap amplitude ∆. Thus, we imagine contin-
uously turning on the d-wave pairing amplitude ∆, and
study the stability of the Kondo-screened heavy-Fermi
liquid state characterized by the low-T hybridization V0,
Eq. (12). As we discussed in Sec. I, in the case of a single
Kondo impurity, it is well known that Kondo screening is
qualitatively different in the case of d-wave pairing, and
the single impurity is only screened by the conduction
electrons if the Kondo coupling exceeds a critical value
J∗ = 1 / [ ρ0 (1 + ln(D/∆)) ].    (16)
For J < J∗, the impurity is unscreened. This result for
J∗ can equivalently be expressed in terms of a critical
pairing strength ∆∗, beyond which Kondo screening is
destroyed for a given J :
∆∗ = D exp[1 − 1/(ρ0J)],    (17)
[equivalent to Eq. (1) for r = 1], which is proportional
to the Kondo temperature TK. This result, implying
FIG. 5: (Color online) Main: Mean-field Kondo parameter V
as a function of the d-wave pairing amplitude ∆, for exchange
coupling J = 0.30D and chemical potential µ = −0.1D, ac-
cording to the approximate formula Eq. (31) (solid line) and
via a direct minimization of Eq. (8) at T = 10−4D (points),
the latter exhibiting a first-order transition near ∆ = 0.086D.
that a d-wave superconductor can only screen a local
spin if the pairing strength is much smaller than TK,
can also be derived within the mean-field approach to
the Kondo problem, as shown in Appendix A (see also
Ref. 7). Within this approach, a continuous transition to
the unscreened phase (where V 2 → 0 continuously) takes
place at ∆ ≃ ∆∗.
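The contrast between the two critical pairing strengths is numerically stark. A sketch (plain Python; ∆∗ = D exp[1 − 1/(ρ0J)] is Eq. (17) with ρ0 = (2D)⁻¹, while ∆c = 0.14D is the lattice value quoted in the caption of Fig. 2) for J = 0.3D:

```python
import math

D, J = 1.0, 0.3
rho0 = 1.0/(2*D)

TK      = D*math.exp(-1.0/(rho0*J))        # = D exp(-2D/J), up to O(1) factors
Dstar   = D*math.exp(1.0 - 1.0/(rho0*J))   # single-impurity threshold, = e*TK
Dc_latt = 0.14*D                           # lattice value quoted in Fig. 2

print(Dstar, Dc_latt, Dc_latt/Dstar)       # lattice screening survives ~40x further
```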
Thus, calculations for the single impurity case indi-
cate that Kondo screening is rather sensitive to a d-wave
pairing gap. The question we wish to address is, how
does d-wave pairing affect Kondo screening in the lattice
case? In fact, we will see that the results are quite differ-
ent in the Kondo lattice case, such that Kondo screening
persists beyond the point ∆∗. To show this, we have nu-
merically studied the ∆-dependence of the saddle point
of the free energy Eq. (8), showing that, at low temper-
atures, V only vanishes, in a discontinuous manner, at
much larger values of ∆, as shown in Fig. 5 (solid dots)
for the case of J = 0.30D, µ = −0.1D and T = 10−4D
(i.e., T/TK ≃ 0.069). In Fig. 2, we plot the phase diagram
as a function of T and ∆, for the same values of J and
µ, with the solid line denoting the line of discontinuous
transitions.
The dashed line in Fig. 2 denotes the spinodal Ts of the
free energy F at which the quadratic coefficient of Eq. (8)
crosses zero. The significance of Ts is that, if the Kondo-
to-local moment transition were continuous (as it is for
∆ = 0), this would denote phase boundary; the T → 0
limit of this quantity coincides with the single-impurity
critical pairing Eq. (17). An explicit formula for Ts can
be easily obtained by finding the quadratic coefficient of
Eq. (8):
1/J = Σ_k tanh[ Ek/(2Ts(∆)) ] / (2Ek),    (18)

with Ek ≡ √(ξk² + ∆k²), and where we set λ = 0 [which
must occur at a continuous transition where V → 0, as
can be seen by analyzing Eq. (10b)]. As seen in Fig. 2,
the spinodal temperature is generally much smaller than
the true transition temperature; however, for very small
∆ → 0, Ts(∆) coincides with the actual transition (which
becomes continuous), as noted in the figure caption.
Our next task is to understand these results within
an approximate analytic analysis of Eq. (8); before do-
ing so, we stress again that the discontinuous transition
from a screened to an unscreened state as function of T
becomes a rapid crossover for finite N . The large N the-
ory is, however, expected to correctly determine where
this crossover takes place.
1. Low-T limit
According to the numerical data (points) plotted in
Fig. 5, the hybridization V is smoothly suppressed with
increasing pairing strength ∆ before undergoing a discon-
tinuous jump to V = 0. To understand, analytically, the
∆-dependence of V at low-T , we shall analyze the T = 0
limit of F , i.e., the ground-state energy E. The essen-
tial question concerns the stability of the Kondo-screened
state with respect to a d-wave pairing gap, characterized
by the following ∆-dependent hybridization
V (∆) = V0 ( 1 − ∆²/∆typ² ),    (19)
with ∆typ an energy scale, to be derived, that gives the
typical value of ∆ for which the heavy-fermion state is
affected by d−wave pairing.
To show that Eq. (19) correctly describes the smooth
suppression of the hybrization with increasing ∆, and to
obtain the scale ∆typ, we now consider the dimensionless
quantity
χ∆ ≡ − (1/ρ0) ∂E/∂(∆²),    (20)
that characterizes the change of the ground state en-
ergy with respect to the pairing gap. Separating the
amplitude of the gap from its momentum dependence,
i.e. writing ∆k = ∆φk, we obtain from the Hellmann-
Feynman theorem that:
χ∆ = − (1/(2ρ0∆)) ⟨∂H/∂∆⟩ = (1/(2ρ0∆)) Σ_k φk ⟨ c†_{k↑}c†_{−k↓} + h.c. ⟩.    (21)
For ∆ → 0 this yields
χ∆ = (T/ρ0) Σ_{k,iω} φk² Gcc(k, iω) Gcc(−k,−iω).    (22)
Here, Gcc (k,iω) is the conduction electron propagator.
As expected, χ∆ is the particle-particle correlator of the
conduction electrons. Thus, for T = 0 the particle-
particle response will be singular. This is the well known
Cooper instability. For V = 0 we obtain for example
χ∆(V = 0) = ½ ln[ (D² − µ²)/∆² ],    (23)
where we used ∆ as a lower cut off to control the Cooper
logarithm. Below we will see that, except for extremely
small values of ∆, the corresponding Cooper logarithm
is overshadowed by another logarithmic term that does
not have its origin in states close to the Fermi surface,
but rather results from states with typical energy V ≃ √(TKD).
In order to evaluate χ∆ in the heavy Fermi liquid state,
we start from:
Gcc(k, ω) = vk²/(ω − E+(ξk)) + uk²/(ω − E−(ξk)),    (24)
where E± is given in Eq. (13) and the coherence factors
of the hybridized Fermi liquid are:
uk² = ½ [ 1 − (ξk − λ)/√((ξk − λ)² + 4V²) ],
vk² = ½ [ 1 + (ξk − λ)/√((ξk − λ)² + 4V²) ].    (25)
Inserting Gcc (k,ω) into the above expression for χ∆
yields
χ∆ = ½ ∫_{−(D+µ)}^{D−µ} dξ [ v⁴/E+ + u⁴/|E−| + 4v²u² θ(E−)/(E+ + E−) ].    (26)
We used that E+ > 0 is always fulfilled, as we consider a
less than half filled conduction band.
Considering first the limit λ = 0, we have E−(ξ) < 0
and the last term in the above integral disappears. The
remaining terms simplify to
χ∆(λ = 0) = ½ ∫_{−(D+µ)}^{D−µ} dξ/√(ξ² + 4V²) ≈ ½ ln[ (D² − µ²)/V² ].    (27)
Even for λ nonzero, this is the dominant contribution to
χ∆ in the relevant limit λ ≪ V ≪ D. To demonstrate
this we analyze Eq. (26) for nonzero λ, but assuming
λ ≪ V as is indeed the case for small ∆. The calculation
is lengthy but straightforward. It follows:
χ∆ ≈ ½ ln[ (D² − µ²)/V² ] + (λ/D) ln[ D|µ|/∆² ].    (28)
The last term is the Cooper logarithm, but now in the
heavy fermion state. The prefactor λ/D ≃ TK/D is a
result of the small weight of the conduction electrons on
the Fermi surface (i.e. where ξ ≃ V 2/λ) as well as the
reduced velocity close to the heavy electron Fermi sur-
face. Specifically, it holds that u²(ξ ≃ V²/λ) ≃ λ²/V², as well as E−(ξ ≃ V²/λ) ≃ (λ²/V²)(ξ − V²/λ).
Thus, except for extremely small gap values where
∆² < D² (D/TK)^(−D/TK), χ∆ is dominated by the λ = 0
result, Eq. (27), and the Cooper logarithm plays no role
in our analysis. The logarithm in Eq. (27) is not origi-
nating from the heavy electron Fermi surface (i.e. it is
not from ξ ≃ V²/λ). Instead, it has its origin in the inte-
gration over states where E− < 0. The important term
in Eq. (26) is peaked for ξ ≃ 0, i.e. where
E±(ξ ≃ 0) = ±V, and is large as long as |ξ| ≲ V. For
ξ ≃ 0 it holds that v² ≃ u² ≃ 1/2. This peak at ξ ≃ 0 has
its origin in the competition between two effects. Usu-
ally, u or v are large when E± ≃ ξ. The only regime
where u or v are still sizable while E± remain small is
close to the bare conduction electron Fermi surface at
|ξ| ≃ V (the position of the level repulsion between the
two hybridizing bands). Thus, the logarithm is caused
by states that are close to the bare conduction electron
Fermi surface. Although these states have the strongest
response to a pairing gap, they don’t have much to do
with the heavy fermion character of the system. It is in-
teresting that this heavy fermion pairing response is the
same even in case of a Kondo insulator where λ = 0 and
the Fermi level is in the middle of the hybridization gap.
The purpose of the preceding analysis was to derive
an accurate expression for the ground-state energy E at
small ∆. Using Eq. (20) gives:
E = E(∆ = 0) − χ∆ ρ0 ∆²,    (29)
which, using Eq. (27) and considering the leading order
in λ ≪ V and ∆ ≪ V , safely neglecting the last term
of Eq. (28) according to the argument of the previous
paragraph, and dropping overall constants, yields
E ≃ 2V²/J − λ + V²ρ0 ln[ λ²/(D + µ)² ] − (ρ0∆²/2) ln[ (D² − µ²)/V² ].    (30)
Using Eq. (10), the stationary value of the hybridization
(to leading order in ∆2) is then obtained via minimization
with respect to V and λ. This yields
V (∆) ≃ V0 − ∆²/(8V0),    (31)

with the stationary value of λ = 2ρ0V², which estab-
lishes Eq. (19). A smooth suppression of the Kondo
hybridization from the ∆ = 0 value V0 [Eq. (12)] oc-
curs with increasing d-wave pairing amplitude ∆ at low
T . This result thus implies that the conduction electron
gap only causes a significant reduction of V and λ for
∆ ≃ ∆typ ∝ √(TKD).
In Fig. 5 we compare V (∆) of Eq. (31) (solid line)
with the numerical result (solid dots). As long as V
FIG. 6: Plot of the low-temperature specific heat coefficient
C/T = −∂²F/∂T², for the case of λ = 10⁻²D, V = 10⁻¹D, and
µ = −0.1D, for the metallic case (∆ = 0, dashed line) and
the case of nonzero d-wave pairing (∆ = 0.1D, solid line).
This shows that, even with nonzero ∆, the specific heat coef-
ficient will appear to saturate at a large value at low T (thus
exhibiting signatures of a heavy fermion metal), before van-
ishing at asymptotically low T ≪ ∆f (= ∆(λ/V)² = 10⁻⁴D).
Each curve is normalized to the T = 0 value for the metallic
case, γ0 ≃ (2π²/3) ρ0V²/λ².
stays finite, the simple relation Eq. (31) gives an ex-
cellent description of the heavy electron state. Above
the small f -electron gap ∆f , these values of V and λ
yield a large heat capacity coefficient (taking N = 2)
γ ≃ (2π²/3) ρ0V²/λ² and susceptibility χ ≃ 2ρ0V²/λ², re-
flecting the heavy-fermion character of this Kondo-lattice
system even in the presence of a d-wave pairing gap. Ac-
cording to our theory, this standard heavy-fermion be-
havior (as observed experimentally5 in Nd2−xCexCuO4)
will be observed for temperatures that are large com-
pared to the f -electron gap ∆f . However, for very small
T ≪ ∆f , the temperature dependence of the heat capac-
ity changes (due to the d-wave character of the f -fermion
gap), behaving as C = AT²/∆ with a large prefactor A ≃ (D/TK)². This leads to a sudden drop in the heat
capacity coefficient at low T , as depicted in Fig. 6.
The surprising robustness of the Kondo screening with
respect to d-wave pairing is rooted in the weak proximity
effect of the f -levels and the coherency as caused by the
formation of the hybridization gap. Generally, a pairing
gap affects states with energy ∆k from the Fermi en-
ergy. However, low energy states that are within TK of
the Fermi energy are predominantly of f -electron charac-
ter (a fact that follows from our large-N theory but also
from the much more general Fermi liquid description of
the Kondo lattice28) and are protected by the weak prox-
imity. These states only sense a gap ∆fk ≪ ∆k and can
readily participate in local-moment screening.
Furthermore, the opening of the hybridization gap co-
herently pushes conduction electrons to energies ≃ V
from the Fermi energy. Only for ∆ ≃ V ≃ √(TK D) will
the conduction electrons' ability to screen the local mo-
ments be affected by d-wave pairing. This situation is
very different from the single impurity Kondo problem
where conduction electron states come arbitrarily close
to the Fermi energy.
2. First-order transition
The result Eq. (31) of the preceding subsection strictly
applies for ∆ → 0, although as seen in Fig. 5, in practice
it agrees quite well with the numerical minimization of
the free energy until the first-order transition. To under-
stand the way in which V is destroyed with increasing ∆,
we must consider the V → 0 limit of the free energy.
We start with the ground-state energy. Expanding E
[the T → 0 limit of Eq. (8)] to leading order in V and
zeroth order in λ (valid for V → 0), we find (dropping
overall constants)
E ≃ −4ρ0V² ln(∆c/∆) + (16√2/3)(ρ0/∆)V³ , (32)
where we defined the quantity ∆c
∆c = 4√(D² − µ²) exp[−1/(2ρ0J)] , (33)
at which the minimum value of V in Eq. (32) vanishes
continuously, with the formula for V (∆) given by
V(∆) ≃ (1/2√2) ∆ ln(∆c/∆) , (34)
near the transition. According to Eq. (33), the equilib-
rium hybridization V vanishes (along with the destruc-
tion of Kondo screening) for pairing amplitude ∆c ∼ √(TK D), of the same order of magnitude as the T = 0
hybridization V0, as expected [and advertised above in
Eq. (3)].
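The chain of results in Eqs. (32)-(34) can be cross-checked numerically. The numerical coefficients below are this sketch's reconstruction (treat them as assumptions chosen to be mutually consistent with Eqs. (35)-(38), not the paper's exact values):

```python
import numpy as np

# Reconstructed expansion of Eq. (32); the coefficients are assumptions of
# this sketch, chosen for consistency with Eqs. (34)-(38).
rho0 = 1.0
Delta_c = 0.14          # critical pairing (units of the bandwidth), cf. Fig. 2
Delta = 0.10            # a pairing amplitude below Delta_c
log = np.log(Delta_c / Delta)

def E(V):
    return -4.0 * rho0 * V**2 * log + (16.0 * np.sqrt(2.0) / 3.0) * (rho0 / Delta) * V**3

# Analytic minimum, Eq. (34): V = Delta * ln(Delta_c/Delta) / (2*sqrt(2))
V_star = Delta * log / (2.0 * np.sqrt(2.0))

# Brute-force check on a fine grid
V = np.linspace(1e-6, 0.1, 200001)
V_num = V[np.argmin(E(V))]
print(f"analytic V = {V_star:.6f}, numeric V = {V_num:.6f}")

# The stationary energy reproduces Eq. (35): F_K = -(rho0/6) Delta^2 ln^3(Delta_c/Delta)
F_K = -(rho0 / 6.0) * Delta**2 * log**3
print(f"E(V*) = {E(V_star):.6e}, F_K = {F_K:.6e}")
```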
Equation (33) strictly applies only at T = 0, appar-
ently yielding a continuous transition at which V → 0
for ∆ → ∆c. What about T ≠ 0? We find that, for
small but nonzero T , Eq. (33) approximately yields the
correct location of the transition, but that the nature
of the transition changes from continuous to first-order.
Thus, for ∆ near ∆c, there is a discontinuous jump to
the local-moment phase that is best obtained numeri-
cally, as shown above in Figs. 5 and 2. However, we can
get an approximate analytic understanding of this first-
order transition by examining the low-T limit. Since ex-
citations are gapped, at low T the free energy FK of the
Kondo-screened (V ≠ 0) phase is well-approximated by
inserting the stationary solution Eq. (34) into Eq. (32):
FK ≃ −(ρ0/6) ∆² ln³(∆c/∆) , (35)
for FK at ∆ → ∆c. The discontinuous Kondo-to-local
moment transition occurs when the Kondo free energy
Eq. (35) is equal to the local-moment free energy. For
the latter we set V = λ = 0 in Eq. (8), obtaining

FLM = −(ρ0/2)(D + µ)² − (ρ0/4)(D² − µ²) − T ln 2 − T ∑_k ln(1 + e^−βEk) , (36)
where we dropped an overall constant depending on the
conduction-band interaction.
The term proportional to T in Eq. (36) comes from the
fact that Ek− = 0 for V = λ = 0, and corresponds to
the entropy of the local moments. At low T , the gapped
nature of the d-wave quasiparticles implies the last term
in Eq. (36) can be neglected (although the nodal quasi-
particles give a subdominant power-law contribution). In
deriving the Kondo free energy FK, Eq. (35), we dropped
overall constant terms; re-establishing these to allow a
comparison to FLM , and setting FLM = FK, we find
(ρ0/6) ∆² ln³(∆c/∆) = T ln 2 , (37)
that can be solved for temperature to find the transition
temperature TK for the first-order Kondo screened-to-
local moment phase transition:
TK(∆) = ρ0∆² ln³(∆c/∆) / (6 ln 2) , (38)
that is valid for ∆ → ∆c, providing an accurate ap-
proximation to the numerically-determined TK curve in
Fig. 2 (solid line) in the low temperature regime (i.e.,
near ∆c = 0.14D in Fig. 2).
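Evaluated numerically (with ∆c and all parameter values below purely illustrative), the reconstructed Eq. (38) shows the expected collapse of TK(∆) as ∆ → ∆c:

```python
import numpy as np

# First-order transition temperature, Eq. (38) as reconstructed above:
#   T_K(Delta) = rho0 Delta^2 ln^3(Delta_c/Delta) / (6 ln 2),  valid for Delta -> Delta_c
rho0 = 1.0
Delta_c = 0.14    # critical pairing of Fig. 2 (units of the bandwidth D)

def T_K(Delta):
    return rho0 * Delta**2 * np.log(Delta_c / Delta)**3 / (6.0 * np.log(2.0))

# T_K decreases monotonically and vanishes continuously as Delta -> Delta_c
for Delta in (0.10, 0.12, 0.135, 0.1399):
    print(f"Delta = {Delta:.4f}  ->  T_K = {T_K(Delta):.3e}")
```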
Equation (38) yields the temperature at which, within
mean-field theory, the screened Kondo lattice is destroyed
by the presence of nonzero d-wave pairing; thus, as long
as ∆ < TK(∆), heavy-fermion behavior is compatible
with d-wave pairing in our model. The essential feature
of this result is that TK(∆) is only marginally reduced
from the ∆ = 0 Kondo temperature Eq. (2), establishing
the stability of this state. In comparison, according to ex-
pectations based on a single-impurity analysis, one would
expect the Kondo temperature to follow the dashed line
in Fig. 2.
Away from this approximate result valid at large N ,
the RKKY interaction between moments is expected to
lower the local-moment free energy, altering the predicted
location of the phase boundary. Then, even for T =
0, a level crossing between the screened and unscreened
ground states occurs for a finite V . Still, as long as the
∆ = 0 heavy fermion state is robust, it will remain stable
at low T for ∆ small compared to ∆c, as summarized in
Figs. 1 and 2.
IV. CONCLUSIONS
We have shown that a lattice of Kondo spins coupled to
an itinerant conduction band experiences robust Kondo
screening even in the presence of d-wave pairing among
the conduction electrons. The heavy electron state is pro-
tected by the large hybridization energy V ≫ TK. The
d-wave gap in the conduction band induces a relatively
weak gap at the heavy-fermion Fermi surface, allowing
Kondo screening and heavy-fermion behavior to persist.
Our results demonstrate the importance of Kondo-lattice
coherency, manifested by the hybridization gap, which is
absent in the case of dilute Kondo impurities. As pointed
out in detail, the origin for the unexpected robustness of
the screened heavy electron state is the coherency of the
Fermi liquid state. With the opening of a hybridization
gap, conduction electron states are pushed to energies
of order √(TK D) away from the Fermi energy. Whether
or not these conduction electrons open up a d-wave gap
is therefore of minor importance for the stability of the
heavy electron state.
Our conclusions are based on a large-N mean field the-
ory. In the case of a single impurity, numerical renormaliza-
tion group calculations demonstrated that such a mean
field approach fails to reproduce the correct critical be-
havior where the transition between screened and un-
screened impurity takes place. However the mean field
theory yields the correct value for the strength of the
Kondo coupling at the transition. In our paper we are
not concerned with the detailed nature in the near vicin-
ity of the transition. Our focus is solely the location of
the boundary between the heavy Fermi liquid and un-
screened local moment phase, and we do expect that a
mean field theory gives the correct result. One possibility
to test the results of this paper is a combination of dy-
namical mean field theory and numerical renormalization
group for the pseudogap Kondo lattice problem.
In the case where Kondo screening is inefficient and ∆ > √(TK D), i.e., the “local moment” phase of Figs. 1 and 2,
the ground state of the moments will likely be magneti-
cally ordered. This can have interesting implications for
the superconducting state. Examples are reentrance into
a normal phase (similar to ErRh4B4, see Ref. 29) or a
modified vortex lattice in the low temperature magnetic
phase. In our theory we ignored these effects. This is no
problem as long as the superconducting gap amplitude
∆ is small compared to √(TK D) and the Kondo lattice is
well screened. Thus, the region of stability of the Kondo
screened state will not be significantly affected by includ-
ing the magnetic coupling between the f -electrons. Only
the nature of the transition and, of course, the physics
of the unscreened state will depend on it. Finally, our
theory offers an explanation for the heavy fermion state
in Nd2−xCexCuO4, where ∆ ≫ TK.
Acknowledgments — We are grateful for useful discus-
sions with A. Rosch and M. Vojta. This research was
supported by the Ames Laboratory, operated for the U.S.
Department of Energy by Iowa State University under
Contract No. DE-AC02-07CH11358. DES was also sup-
ported at the KITP under NSF grant PHY05-51164.
APPENDIX A: SINGLE IMPURITY CASE
For a single Kondo impurity a critical value J∗ for
the coupling between conduction electron and impurity
spin emerges, separating Kondo-screened from local mo-
ment behavior for a single spin impurity in a d-wave su-
perconductor, see Eq. (16). As discussed in the main
text, this is equivalent to a critical pairing Eq. (17)
above which Kondo screening does not occur. The re-
sult was obtained in careful numerical renormalization
group calculations8,9. In the present section, we demon-
strate that the same result also follows from a simple
large-N mean field approach. It is important to stress
that this approach fails to describe the detailed critical
behavior. However, here we are only concerned with the
approximate value of the non-universal quantity J∗. In-
deed, mean field theory is expected to give a reasonable
value for the location of the transition.
Our starting point is the model Hamiltonian

H = ∑_{k,m} ǫk c†_{km}c_{km} + (J/N) ∑_{m,m′,k,k′} f†_m f_{m′} c†_{k′m′}c_{km} + ∑_{k,k′} Ukk′ c†_{k↑}c†_{−k↓}c_{−k′↓}c_{k′↑} , (A1)
with the corresponding mean-field action S = Sf + Sb +
Sint, with (introducing the Lagrange multiplier λ and hybridization
V as usual, and making the BCS mean-field
approximation for the conduction fermions):

Sf + Sb = ∫₀^β dτ [ ∑_{k,m} c†_{km}(∂τ + ǫk)c_{km} + ∑_m f†_m(∂τ + λ)f_m + (N/J)V†V − λNq0 ] , (A2)

Sint = ∫₀^β dτ [ ∑_{k,m} ( f†_m c_{km} V + V† c†_{km} f_m ) + ∑_{k, m=±1/2} ( ∆̄k c_{−k,−m}c_{km} + c†_{km}c†_{−k,−m}∆k ) ] , (A3)
where the λ integral implements the constraint Nq0 = ∑_m f†_m f_m, with q0 = 1/2. Here, we have taken the large-N limit, with N = 2J + 1.
The mean-field approximation having been made, it is
now straightforward to trace over the fermionic degrees
of freedom to yield

F = N|V|²/J − λNq0 − NT ∑_{iω} ln[ (iω − λ − Γ1(iω))(iω + λ + Γ1(−iω)) − Γ2(iω)Γ̄2(iω) ] , (A4)
for the free energy contribution due to a single impurity
in a d-wave superconductor. Here, we dropped an overall
constant due to the conduction fermions only, as well as
the quadratic term in ∆k (which of course determines the
equilibrium value of ∆k; here, as in the main text, we’re
interested in the impact of a given ∆k on the degree of
Kondo screening), and defined the functions
Γ1(iω) = |V|² ∑_k (iω + ǫk) / [(iω)² − E²_k] , (A5)

Γ2(iω) = V² ∑_k ∆k / [(iω)² − E²_k] , (A6)

Γ̄2(iω) = (V†)² ∑_k ∆̄k / [(iω)² − E²_k] . (A7)
At this point we note that, for a d-wave superconduc-
tor, Γ2 = Γ̄2 = 0 due to the sign change of the d-wave
order parameter. The self-energy Γ1(iω) is nonzero and
essentially measures the density of states (DOS) ρd(ω) of
the d-wave superconductor. In fact, one can show that
the corresponding retarded function Γ1R(ω) satisfies
Γ1R(ω) = |V|² ∫ dz ρd(z) / (ω + iδ − z) , (A8)
with δ = 0⁺, so that the imaginary part Γ″1R(ω) =
−π|V|²ρd(ω) measures the DOS. Writing Γ1R(ω) ≡
|V|²G(ω), we have for the free energy
F = N|V|²/J − λNq0 − (N/π) ∫ dz nF(z) tan⁻¹[ −|V|²G″(z) / (z − λ − |V|²G′(z)) ] , (A9)
and for the stationarity conditions, Eq. (10),

1/J = (1/π) ∫ dz nF(z) G″(z)(z − λ) / [ (z − λ − |V|²G′(z))² + |V|⁴(G″(z))² ] , (A10)

q0 = −(1/π) ∫ dz nF(z) |V|²G″(z) / [ (z − λ − |V|²G′(z))² + |V|⁴(G″(z))² ] , (A11)
which can be evaluated numerically to determine V and
λ as a function of T and ∆.
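A minimal numerical route to the G′(z) and G″(z) needed in Eqs. (A10)-(A11) is a discretized Hilbert transform of the model DOS of Eq. (A14); the broadening η standing in for the iδ and all parameter values are assumptions of this sketch. The identity Im G = −πρd gives a built-in check:

```python
import numpy as np

# Model d-wave DOS of Eq. (A14) on a band z in (-(D+mu), D-mu); values illustrative.
rho0, D, mu, Delta = 1.0, 1.0, -0.1, 0.1
eta = 1e-3                       # small numerical broadening replacing i*delta

def rho_d(w):
    aw = np.abs(w)
    band = (w > -(D + mu)) & (w < D - mu)
    return np.where(aw < Delta, rho0 * aw / Delta, rho0) * band

z = np.linspace(-(D + mu), D - mu, 200001)
dz = z[1] - z[0]

def G(w):
    """Discretized Hilbert transform: G(w) = Int dz rho_d(z) / (w + i*eta - z)."""
    return np.sum(rho_d(z) / (w + 1j * eta - z)) * dz

w0 = 0.5 * Delta                 # a test frequency inside the linear-DOS region
print(f"Re G({w0}) = {G(w0).real:.4f}")
print(f"Im G({w0}) = {G(w0).imag:.4f}  vs  -pi*rho_d = {-np.pi * float(rho_d(w0)):.4f}")
```

With G′ = Re G and G″ = Im G in hand, the two stationarity integrals can be evaluated on the same grid for trial (V, λ) and solved by standard root finding.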
The Kondo temperature TK is defined by the temper-
ature at which V 2 → 0 continuously; at such a point, the
constraint Eq. (A11) requires λ → 0. Here, we are in-
terested in finding the pairing ∆ at which TK → 0; thus,
this is obtained by setting T = V = λ = 0 in Eq. (A10):

1/J = (1/π) ∫_{−(D+µ)}^{0} dz (−πρd(z)) / z (A12)
= −ρ0 log[∆/(D + µ)] + ρ0 , (A13)
where, for simplicity, in the final line we approximated
ρd(z) to be given by
ρd(ω) ≃ { ρ0|ω|/∆ for |ω| < ∆ ; ρ0 for |ω| > ∆ } , (A14)
that captures the essential features (except for the nar-
row peak near ω = ∆) of the true DOS of a d-wave
superconductor, with ρ0 the (assumed constant) DOS of
the underlying conduction band.
The solution to Eq. (A13) is:
∆* = (D + µ) exp[1 − 1/(ρ0J)] , (A15)
showing a destruction of the Kondo effect for ∆ → ∆∗, as
V → 0 continuously, thus separating the Kondo-screened
(for ∆ < ∆∗) from the local moment (for ∆ > ∆∗)
phases.
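The appendix chain closes on itself and can be verified numerically: quadrature of Eq. (A12) against the closed form (A13), followed by a consistency check of ∆* in Eq. (A15). Parameter values are illustrative:

```python
import numpy as np

# Evaluate Eq. (A12) for the model DOS (A14), compare with the closed form
# (A13), and confirm that Delta* of Eq. (A15) inverts the relation.
rho0, D, mu = 1.0, 1.0, -0.1
Delta = 0.05

def rho_d(w):
    aw = np.abs(w)
    return np.where(aw < Delta, rho0 * aw / Delta, rho0)

# Eq. (A12): 1/J* = (1/pi) Int_{-(D+mu)}^{0} dz [-pi rho_d(z)] / z  (midpoint rule)
edges = np.linspace(-(D + mu), 0.0, 400001)
zm = 0.5 * (edges[1:] + edges[:-1])
one_over_Jstar = float(np.sum(-rho_d(zm) / zm) * (edges[1] - edges[0]))

analytic = -rho0 * np.log(Delta / (D + mu)) + rho0          # Eq. (A13)
print(f"numeric 1/J* = {one_over_Jstar:.5f}, analytic = {analytic:.5f}")

# Eq. (A15): Delta* for this critical coupling should reproduce the input Delta
J = 1.0 / analytic
Delta_star = (D + mu) * np.exp(1.0 - 1.0 / (rho0 * J))
print(f"Delta* = {Delta_star:.5f} (input Delta = {Delta})")
```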
1 See, e.g., A.C. Hewson, The Kondo Problem to Heavy
Fermions (Cambridge University Press, Cambridge, Eng-
land, 1993).
2 P. Coleman, C. Pepin, Q. Si and R. Ramazashvili, Journ.
of Phys: Cond. Mat. 13, R723 (2001).
3 P. Nozières, Eur. Phys. J. B 6, 447 (1998).
4 By “pseudogap”, we are of course referring to the nodal
structure of d-wave pairing, not the pseudogap regime of
the high-Tc cuprates.
5 T. Brugger, T. Schreiner, G. Roth, P. Adelmann, and G.
Czjzek, Phys. Rev. Lett. 71, 2481 (1993).
6 D. Withoff and E. Fradkin, Phys. Rev. Lett. 64, 1835
(1990).
7 L.S. Borkowski and P.J. Hirschfeld, Phys. Rev. B 46, 9274
(1992).
8 K. Ingersent, Phys. Rev. B 54, 11936 (1996).
9 C. Gonzalez-Buxton and K. Ingersent, Phys. Rev. B 57,
14254 (1998).
10 M. Vojta and L. Fritz, Phys. Rev. B 70, 094502 (2004).
11 L. Fritz and M. Vojta, Phys. Rev. B 70, 214427 (2004).
12 M. Vojta, Philos. Mag. 86, 1807 (2006).
13 P.W. Anderson, J. Phys. C 3, 2439 (1970).
14 C.C. Tsuei and J.R. Kirtley, Phys. Rev. Lett. 85, 182
(2000).
15 R. Prozorov, R.W. Giannetta, P. Fournier, R.L. Greene,
Phys. Rev. Lett. 85, 3700 (2000).
16 N.T. Hien, V.H.M. Duijn, J.H.P. Colpa, J.J.M. Franse, and
A.A. Menovsky, Phys. Rev. B 57, 5906 (1998).
17 P. Fulde, V. Zevin, and G. Zwicknagl, Z. Phys. B 92, 133
(1993).
18 G. Khaliullin and P. Fulde, Phys. Rev. B 52, 9514 (1995).
19 W. Hofstetter, R. Bulla, and D. Vollhardt, Phys. Rev. Lett. 84, 4417 (2000).
20 Q. Huang, J.F. Zasadzinsky, N. Tralshawala, K.E. Gray,
D.G. Hinks, J.L. Peng and R.L. Greene, Nature 347, 369
(1990).
21 W. Henggeler and A.Furrer, Journ. of Phys. Cond. Mat.
10, 2579 (1998).
22 J. Bała, Phys. Rev. B 57, 14235 (1998).
23 C. Petrovic, P.G. Pagliuso, M.F. Hundley, R. Movshovich,
J.L. Sarrao, J.D. Thompson, Z. Fisk and P. Monthoux, J.
Phys. Condens. Matter 13, L337 (2001).
24 See, e.g., D.M. Newns and N. Read, Adv. Phys. 36, 799
(1987); P. Coleman, Phys. Rev. B 29, 3035 (1984); A.
Auerbach and K. Levin, Phys. Rev. Lett. 57, 877 (1986);
A.J. Millis and P.A. Lee, Phys. Rev. B 35, 3394 (1987).
25 H. Shiba and P. Fazekas, Prog. Theor. Phys. Suppl. 101,
403 (1990).
26 S. V. Dordevic, D. N. Basov, N. R. Dilley, E. D. Bauer, and
M. B. Maple, Phys. Rev. Lett. 86, 684 (2001); L. Degiorgi,
F.B.B. Anders, and G. Grüner, Eur. Phys. J. B 19, 167
(2001); J.N. Hancock, T. McKnew, Z. Schlesinger, J.L.
Sarrao, and Z. Fisk, Phys. Rev. Lett. 92, 186405 (2004).
27 P. Coleman, in Lectures on the Physics of Highly Correlated
Electron Systems VI , F. Mancini, Ed., American Institute
of Physics, New York (2002), p. 79 - 160.
28 K. Yamada and K. Yosida, Prog. Theor. Phys. 76, 621
(1986).
29 W. A. Fertig, D. C. Johnston, L. E. DeLong, R. W. McCal-
lum, M. B. Maple, and B. T. Matthias, Phys. Rev. Lett.
38, 987 (1977).
0704.1817 | Redefining the Missing Satellites Problem
REDEFINING THE MISSING SATELLITES PROBLEM
Louis E. Strigari, James S. Bullock, Manoj Kaplinghat, Juerg Diemand, Michael Kuhlen, and Piero Madau
Draft version May 29, 2018
ABSTRACT
Numerical simulations of Milky-Way size Cold Dark Matter (CDM) halos predict a steeply rising
mass function of small dark matter subhalos and a substructure count that greatly outnumbers the
observed satellites of the Milky Way. Several proposed explanations exist, but detailed comparison
between theory and observation in terms of the maximum circular velocity (Vmax) of the subhalos is
hampered by the fact that Vmax for satellite halos is poorly constrained. We present comprehensive
mass models for the well-known Milky Way dwarf satellites, and derive likelihood functions to show
that their masses within 0.6 kpc (M0.6) are strongly constrained by the present data. We show that
the M0.6 mass function of luminous satellite halos is flat between ∼ 107 and 108M⊙. We use the
“Via Lactea” N-body simulation to show that the M0.6 mass function of CDM subhalos is steeply
rising over this range. We rule out the hypothesis that the 11 well-known satellites of the Milky Way
are hosted by the 11 most massive subhalos. We show that models where the brightest satellites
correspond to the earliest forming subhalos or the most massive accreted objects both reproduce the
observed mass function. A similar analysis with the newly-discovered dwarf satellites will further test
these scenarios and provide powerful constraints on the CDM small-scale power spectrum and warm
dark matter models.
Subject headings: Cosmology: dark matter, theory–galaxies
1. INTRODUCTION
It is now well-established that numerical simulations
of Cold Dark Matter (CDM) halos predict many orders
of magnitude more dark matter subhalos around Milky
Way-sized galaxies than observed dwarf satellite galax-
ies (Klypin et al. 1999; Moore et al. 1999; Diemand et al.
2007). Within the context of the CDM paradigm, there
are well-motivated astrophysical solutions to this ‘Miss-
ing Satellites Problem’ (MSP) that rely on the idea
that galaxy formation is inefficient in the smallest dark
matter halos (Bullock et al. 2000; Benson et al. 2002;
Somerville 2002; Stoehr et al. 2002; Kravtsov et al. 2004;
Moore et al. 2006; Gnedin & Kravtsov 2006). However,
from an observational perspective, it has not been possi-
ble to distinguish between these solutions.
A detailed understanding of the MSP is limited by our
lack of a precise ability to characterize the dark matter
halos of satellite galaxies. From an observational per-
spective, the primary constraints come from the veloc-
ity dispersion of ∼ 200 stars in the population of dark
matter dominated dwarf spheroidal galaxies (dSphs)
(Wilkinson et al. 2004; Lokas et al. 2005; Munoz et al.
2005, 2006; Westfall et al. 2006; Walker et al. 2005, 2006;
Sohn et al. 2006; Gilmore et al. 2007). However, the ob-
served extent of the stellar populations in dSphs are ∼
kpc, so these velocity dispersion measurements are only
able to probe properties of the halos in this limited radial
1 Center for Cosmology, Department of Physics & Astronomy, University of California, Irvine, CA 92697
2 Department of Astronomy & Astrophysics, University of California, Santa Cruz, CA 95064
3 School of Natural Sciences, Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540
4 Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85740 Garching, Germany
5 Hubble Fellow
regime.
From the theoretical perspective, dissipationless nu-
merical simulations typically characterize subhalo counts
as a function of the total bound mass or maximum cir-
cular velocity, Vmax. While robustly determined in sim-
ulations, global quantities like Vmax are difficult to con-
strain observationally because dark halos can extend well
beyond the stellar radius of a satellite. Indeed stellar
kinematics alone provide only a lower limit on the halo
Vmax value (see below). This is a fundamental limita-
tion of stellar kinematics that cannot be remedied by
increasing the number of stars used in the velocity dis-
persion analysis (Strigari et al. 2007). Thus determining
Vmax values for satellite halos requires a theoretical ex-
trapolation. Any extrapolation of this kind is sensitive
to the predicted density structure of subhalos, which de-
pends on cosmology, power-spectrum normalization, and
the nature of dark matter (Zentner & Bullock 2003).
Our inability to determine Vmax is the primary lim-
itation to test solutions to the MSP. One particular so-
lution states that the masses of the dSphs have been
systematically underestimated, so that the ∼ 10 bright-
est satellites reside systematically in the ∼ 10 most mas-
sive subhalos (Stoehr et al. 2002; Hayashi et al. 2003). A
byproduct of this solution is that there must be a sharp
mass cutoff at some current subhalo mass, below which
galaxy formation is suppressed. Other solutions, based
on reionization suppression, or a characteristic halo mass
scale prior to subhalo accretion predict that the sup-
pression comes in gradually with current subhalo mass
(Bullock et al. 2000; Kravtsov et al. 2004; Moore et al.
2006).
In this paper, we provide a systematic investigation of
the masses of the Milky Way satellites. We highlight
that in all cases the total halo masses and maximum
circular velocities are not well-determined by the data.
We instead use the fact that the total cumulative mass
within a characteristic radius ∼ 0.6 kpc is much bet-
ter determined by the present data (Strigari et al. 2007).
We propose using this mass, which we define as M0.6,
as the favored means to compare to the dark halo pop-
ulation in numerical simulations. Unlike Vmax, M0.6 is
measured directly and requires no cosmology-dependent
or dark-matter-model-dependent theoretical prior.
In the following sections, we determine the M0.6 mass
function for the Milky Way satellites, and compare it
to the corresponding mass function measured directly in
the high-resolution “Via Lactea” substructure simulation
of Diemand et al. (2007). We rule out the possibility
that there is a one-to-one correspondence between the 11
most luminous satellites and the most massive subhalos
in Via Lactea. We find that MSP solutions based on
reionization and/or characteristic halo mass scales prior
to accretion are still viable.
2. MILKY WAY SATELLITES
Approximately 20 satellite galaxies can be classified
as residing in MW subhalos. Of these, ∼ 9 were
discovered within the last two years and have very
low luminosities and surface brightnesses (Willman et al.
2005a,b; Belokurov et al. 2006, 2007;
Zucker et al. 2006a,b). The lack of precision in these
numbers reflects the ambiguity in the classification of the
newly-discovered objects. The nine ‘older’ galaxies clas-
sified as dSphs are supported by their velocity dispersion,
and exhibit no rotational motion (Mateo 1998). Two
satellite galaxies, the Small Magellanic Cloud (SMC)
and Large Magellanic Cloud (LMC), are most likely sup-
ported by some combination of rotation and dispersion.
Stellar kinematics suggest that the LMC and SMC are
likely the most massive satellite systems of the Milky Way.
We focus on determining the masses of the nine most
well-studied dSphs. The dark matter masses of the dSphs
are determined from the line-of-sight velocities of the
stars, which trace the total gravitational potential. We
assume a negligible contribution of the stars to the grav-
itational potential, which we find to be an excellent ap-
proximation. The dSph with the smallest mass-to-light
ratio is Fornax, though even for this case we find that the
stars generally do not affect the dynamics of the system
(see Lokas 2002; Wu 2007, and below).
Under the assumptions of equilibrium and spherical
symmetry, the radial component of the stellar velocity
dispersion, σr, is linked to the total gravitational poten-
tial of the system via the Jeans equation,
r d(ν⋆σ²r)/dr = −ν⋆V²c − 2βν⋆σ²r . (1)
Here, ν⋆ is the stellar density profile, V²c = GM(r)/r
includes the total gravitating mass of the system, and
the parameter β(r) = 1 − σ²θ/σ²r characterizes the dif-
ference between the radial (σr) and tangential (σθ) ve-
locity dispersions. The observable velocity dispersion is
constructed by integrating the three-dimensional stellar
radial velocity dispersion profile along the line-of-sight,
σ²los(R) = (2/I⋆(R)) ∫_R^∞ [1 − β R²/r²] ν⋆σ²r r dr / √(r² − R²) , (2)
where R is the projected radius on the sky. The surface
density of stars in all dSphs are reasonably well-fit by a
spherically-symmetric King profile (King 1962),
I⋆(R) ∝ [ (1 + R²/r²king)^(−1/2) − (1 + r²t/r²king)^(−1/2) ]² ,  (3)
where rt and rking are fitting parameters denoted as the
tidal and core radii. 6 The spherically symmetric stellar
density can be obtained with an integral transformation
of the surface density. From the form of Eq. (2), the
normalization in Eq. (3) is irrelevant.
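The machinery of Eqs. (1)-(3) can be sketched end to end for an isotropic (β = 0) model. The 3D stellar profile below is a hypothetical Plummer-like stand-in for the deprojected King profile, and every parameter value is illustrative rather than a fit to any galaxy:

```python
import numpy as np

# Isotropic Jeans modeling of a dSph-like system (sketch; values illustrative).
G = 4.30e-6                  # G in kpc (km/s)^2 / Msun
r_king = 0.3                 # kpc
rho_s, r_s = 1.0e8, 1.0      # Msun/kpc^3, kpc (halo normalization and scale)
gamma, delta = 1.0, 2.0      # inner/outer slopes of the halo profile, Eq. (8)
r_max = 20.0                 # outer truncation, kpc

nu_star = lambda r: (1.0 + (r / r_king) ** 2) ** -2.5     # stand-in stellar profile
rho_dm = lambda r: rho_s / ((r / r_s) ** gamma * (1.0 + r / r_s) ** delta)

r = np.geomspace(1e-4, r_max, 4000)

# Cumulative halo mass M(r) = 4*pi * Int_0^r rho r'^2 dr'  (trapezoids)
dM = 4.0 * np.pi * rho_dm(r) * r ** 2
M = np.concatenate(([0.0], np.cumsum(0.5 * (dM[1:] + dM[:-1]) * np.diff(r))))

# Jeans equation (1), beta = 0:  nu sigma_r^2(r) = Int_r^rmax G nu M / r'^2 dr'
h = G * nu_star(r) * M / r ** 2
seg = 0.5 * (h[1:] + h[:-1]) * np.diff(r)
tail = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))
sigma_r2 = tail / nu_star(r)

def sigma_los(R):
    """Eq. (2) with beta = 0, substituting r = sqrt(R^2 + u^2) to tame the sqrt."""
    u = np.geomspace(1e-4, r_max, 2000)
    rr = np.sqrt(R ** 2 + u ** 2)
    num = np.interp(rr, r, nu_star(r) * sigma_r2)   # nu * sigma_r^2 along the LOS
    den = nu_star(rr)
    w = np.diff(u)
    return np.sqrt(np.sum(0.5 * (num[1:] + num[:-1]) * w) /
                   np.sum(0.5 * (den[1:] + den[:-1]) * w))

print(f"M(<0.6 kpc) = {np.interp(0.6, r, M):.2e} Msun")
print(f"sigma_los(0.3 kpc) = {sigma_los(0.3):.1f} km/s")
```

With these choices the enclosed mass lands in the ∼ 10⁷-10⁸ M⊙ range discussed in this paper, and σlos comes out at the ∼ 10 km/s level typical of dSphs.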
Some dSphs show evidence for multiple, dynamically
distinct stellar populations, with each population de-
scribed by its own surface density and velocity dispersion
(Harbeck et al. 2001; Tolstoy et al. 2004; Westfall et al.
2006; McConnachie et al. 2006; Ibata et al. 2006). In a
dSph with i = 1...Np populations of stars, standard ob-
servational methods will sample a density-weighted av-
erage of the populations:
ν⋆ = ∑_i νi , (4)

σ²r = (1/ν⋆) ∑_i νi σ²r,i , (5)
where νi and σi are the density profile and radial stellar
velocity dispersion of stellar component i. In principle,
each component has its own stellar velocity anisotropy
profile, βi(r). In this case, Equation (1) is valid for ν⋆ and
σr as long as an effective velocity anisotropy is defined
β(r) = (1/(ν⋆σ²r)) ∑_i βi νi σ²r,i . (6)
From these definitions, we also have ∑_i I⋆,i σ²los,i = I⋆ σ²los.
The conclusion we draw from this argument is that
the presence of multiple populations will not affect the
inferred mass structure of the system, provided that the
velocity anisotropy is modeled as a free function of ra-
dius. Since the functional form of the stellar velocity
anisotropy is unknown, we allow for a general, three pa-
rameter model of the velocity anisotropy profile,
β(r) = β∞ r² / (r²β + r²) + β0 . (7)
A profile of this form allows β(r) to change from radial to tangential orbits within the halo,
and a constant velocity anisotropy is recovered in the
limit β∞ → 0, and β0 → const.
In Equations (1) and (2), the radial stellar velocity dis-
persion, σr , depends on the total mass distribution, and
thus the parameters describing the dark matter density
profile. Dissipationless N-body simulations show that
the density profiles of CDM halos can be characterized by

ρ(r̃) = ρs / [ r̃^γ (1 + r̃)^δ ] ; r̃ = r/rs , (8)
6 Our results are insensitive to this particular parameterization
of the light profile.
where rs and ρs set a radial scale and density normaliza-
tion and γ and δ parameterize the inner and outer slopes
of the distribution. For dark matter halos unaffected
by tidal interactions, the most recent high-resolution
simulations find δ + γ ≈ 3 works well for the outer
slope, while 0.7 . γ . 1.2 works well down to ∼ 0.1%
of halo virial radii (Navarro et al. 2004; Diemand et al.
2005). This interior slope is not altered as a subhalo
loses mass from tidal interactions with the MW potential
(Kazantzidis et al. 2004). The outer slope, δ, depends
more sensitively on the tidal interactions in the halo. The
majority of the stripped material will be from the outer
parts of halos, and thus δ of subhalo density profiles will
become steeper than those of field halos. Subhalos are
characterized by outer slopes in the range 2 ≲ δ ≲ 3.
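The enclosed mass these profiles imply at small radii is easy to tabulate directly; the normalization ρs and scale rs below are illustrative (in practice they are constrained by the data), while γ and δ scan the ranges just quoted:

```python
import numpy as np

rho_s, r_s = 1.0e8, 1.0          # Msun/kpc^3, kpc (illustrative normalization)

def M_inner(gamma, delta, r_out=0.6, n=200000):
    """M(<r_out) = 4*pi Int_0^r_out rho(r) r^2 dr for the profile of Eq. (8)."""
    edges = np.linspace(0.0, r_out, n + 1)
    rm = 0.5 * (edges[1:] + edges[:-1])          # midpoints avoid r = 0
    rt = rm / r_s
    rho = rho_s / (rt ** gamma * (1.0 + rt) ** delta)
    return float(np.sum(4.0 * np.pi * rho * rm ** 2) * (edges[1] - edges[0]))

for gamma in (0.7, 1.0, 1.2):
    for delta in (2.0, 3.0):
        print(f"gamma = {gamma}, delta = {delta}:  M0.6 = {M_inner(gamma, delta):.2e} Msun")
```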
Given the uncertainty in the β(r) and ρ(r) profiles,
we are left with nine shape parameters that must be
constrained via line-of-sight velocity dispersion measure-
ments: ρs, rs, β0, β∞, rβ , γ, δ, rking , and rt. While the
problem as posed may seem impossible, there are a num-
ber of physical parameters, which are degenerate between
different profile shapes, that are well constrained. Specif-
ically, the stellar populations constrain Vc(r) within a
radius comparable to the stellar radius rt ∼ kpc. As
a result, quantities like the local density and integrated
mass within the stellar radius are determined with high
precision (Strigari et al. 2006), while quantities that de-
pend on the mass distribution at larger radii are virtually
unconstrained by the data.
It is useful to determine the value of the radius where
the constraints are maximized. The location of this char-
acteristic radius is determined by the form of the integral
in Eq. (2). We can gain some insight using the exam-
ple of a power-law stellar distribution ν⋆(r), power-law
dark matter density profile ρ ∝ r−γ⋆ , and constant ve-
locity anisotropy. The line-of-sight velocity dispersion
depends on the three-dimensional stellar velocity disper-
sion, which can be written as
σ²r(r) = (1/(ν⋆ r^2β)) ∫_r^∞ G ν⋆(r′) M(r′) r′^(2β−2) dr′ ∝ r^(2−γ⋆) (9)
From the shape of the King profile, the majority of
stars reside at projected radii rking ≲ R ≲ rt, where
the stellar distribution is falling rapidly, ν⋆ ∼ r^−3.5.
In this case, for β = 0, the LOS component scales as
σ²los(R) ∝ ∫_R^∞ r^(−0.5−γ⋆) (r² − R²)^(−1/2) dr and is dominated
by the mass and density profile at the smallest relevant
radii, r ∼ rking. For R ≲ rking, ν⋆ ∝ r^−1 and σ²los is simi-
larly dominated by r ∼ rking contributions. We note that
although the scaling arguments above hold for constant
velocity anisotropies, they can be extended by consider-
ing anisotropies that vary significantly in radius. They
are also independent of the precise form of I⋆, provided
there is a scale similar to rking .
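This dominance of r ∼ rking can be checked by tabulating where the projection integral accumulates, for the power-law example above (γ⋆ = 1, ν⋆ ∼ r^−3.5, β = 0):

```python
import numpy as np

gamma_star = 1.0
R = 0.3                                   # ~ r_king, kpc
u = np.geomspace(1e-4, 30.0, 200001)      # u^2 = r^2 - R^2
r = np.sqrt(R ** 2 + u ** 2)

# Substituting r = sqrt(R^2 + u^2), dr / sqrt(r^2 - R^2) = du / r, so the
# integrand of sigma_los^2(R) ~ Int r^(-0.5-g) (r^2 - R^2)^(-1/2) dr
# becomes r^(-1.5-g) du.
f = r ** (-1.5 - gamma_star)
seg = 0.5 * (f[1:] + f[:-1]) * np.diff(u)
rm = 0.5 * (r[1:] + r[:-1])

frac = np.sum(seg[rm < 2.0 * R]) / np.sum(seg)
print(f"fraction of sigma_los^2 contributed by r < 2R: {frac:.2f}")
```

Most of the integral indeed accumulates within a factor of two of R ∼ rking, which is why the mass within ≃ 2 rking is the best-constrained quantity.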
In Strigari et al. (2007), it was shown that typical ve-
locity dispersion profiles best constrain the mass (and
density) within a characteristic radius ≃ 2 rking . For
example, the total mass within 2rking is constrained to
within ∼ 20% for dSphs with ∼ 200 line-of-sight veloc-
ities. Note that when deriving constraints using only
the innermost stellar velocity dispersion and fixing the
anisotropy to β = 0, the characteristic radius for best
constraints decreases to ∼ 0.5 rking (e.g. Penarrubia et al.
2007).
As listed in Table 1, the Milky Way dSphs are ob-
served to have variety of rking values, but rking ∼ 0.3
kpc is typical. The values of rking and rt are taken from
Mateo (1998). In order to facilitate comparison with
simulated subhalos, we chose a single characteristic ra-
dius of 0.6 kpc for all the dwarfs, and we represent the
mass within this radius as M0.6 = M(< 0.6 kpc). The
relative errors on the derived masses are unaffected for
small variations in the characteristic radius in the range
∼ 1.5 − 2.5 rking. Deviations from a true King profile at
large radius (near rt) do not affect these arguments, as
long as there is a characteristic scale similar to rking in
the surface density. The only dSph significantly affected
by the choice of 0.6 kpc as the characteristic radius is
Leo II, which has rt = 0.5 kpc. Since the characteristic
radius is greater than twice rking , the constraints on its
mass will be weakest of all galaxies (with the exception
of Sagittarius, as discussed below).
3. DARK MATTER HALO MASSES AT THE
CHARACTERISTIC RADIUS
We use the following data sets: Wilkinson et al.
(2004); Munoz et al. (2005, 2006); Westfall et al. (2006);
Walker et al. (2005, 2006); Sohn et al. (2006); Siegal et
al. in preparation. These velocity dispersions are deter-
mined from the line-of-sight velocities of ∼ 200 stars in
each galaxy, although observations in the coming years
will likely increase this number by a factor ∼ 5 − 10.
From the data, we calculate the χ², defined in our case as

χ² = ∑_i (σobs,i − σth,i)² / ǫ²_i . (10)

Here σobs,i is the observed velocity dispersion in each bin, σth,i is the theoretical value, obtained from Eq. (2), and ǫi are the errors as determined from the observations.
It is easy to see that, when fitting to a single data set
of ∼ 200 stars, parameter degeneracies will be signifi-
cant. However, from the discussion in section 2, M0.6
is well-determined by the LOS data. To determine how
well M0.6 is constrained, we construct likelihood func-
tions for each galaxy. When thought of as a function
of the theoretical parameters, the likelihood function, L,
is defined as the probability that a data set is acquired
given a set of theoretical parameters. In our case L is a
function of the parameters γ, δ, rs, ρs, and β0, β∞, rβ ,
and is defined as L = e^(−χ²/2). In writing this likelihood
function, we assume that the errors on the measured ve-
locity dispersions are Gaussian, which we find to be an
excellent approximation to the errors for a dSph system
(Strigari et al. 2007). We marginalize over the parame-
ters γ, δ, rs, ρs, and β0, β∞, rβ at fixed M0.6, and the
optimal values for M0.6 are determined by the maximum
of L .
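The grid version of this procedure is easy to sketch. The one-parameter "model" below is a hypothetical stand-in for the full Jeans prediction (the real analysis marginalizes over the seven shape parameters listed above), and the data points are fake:

```python
import numpy as np

# Fake binned velocity-dispersion data (km/s) -- illustrative only
R_bins = np.array([0.1, 0.2, 0.3, 0.45, 0.6])            # kpc
sigma_obs = np.array([9.5, 10.1, 9.8, 9.0, 8.4])
err = np.full_like(sigma_obs, 1.2)

G = 4.30e-6                                              # kpc (km/s)^2 / Msun

def sigma_model(R, M06, shape):
    """Hypothetical model: amplitude set by M0.6, 'shape' a nuisance parameter."""
    return np.sqrt(G * M06 / 0.6) * (1.0 + shape * (R - 0.3))

def chi2(M06, shape):
    return float(np.sum((sigma_obs - sigma_model(R_bins, M06, shape)) ** 2 / err ** 2))

# Likelihood of M0.6, marginalized over the nuisance parameter on a grid
M_grid = np.geomspace(1e6, 1e8, 81)
shape_grid = np.linspace(-1.0, 1.0, 81)
L = np.array([sum(np.exp(-0.5 * chi2(M, s)) for s in shape_grid) for M in M_grid])
L /= L.max()                                             # normalize to unity at the peak

print(f"most likely M0.6 ~ {M_grid[np.argmax(L)]:.2e} Msun")
```

The same structure carries over to the real problem: the inner sum simply runs over a grid in (γ, δ, rs, ρs, β0, β∞, rβ) at fixed M0.6.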
We determine L for all nine dSphs with velocity dis-
persion measurements. For all galaxies we use the full
published velocity dispersion profiles. The only galaxy
that does not have a published velocity dispersion profile
is Sagittarius, and for this galaxy we use the central ve-
locity dispersion from Mateo (1998). The mass modeling
of Sagittarius is further complicated by the fact that it is
experiencing tidal interactions with the MW (Ibata et al.
Fig. 1.— The likelihood functions for the mass within 0.6 kpc
for the nine dSphs, normalized to unity at the peak.
1997; Majewski et al. 2003), so a mass estimate from the
Jeans equation is not necessarily reliable. We caution
that in this case the mass we determine is likely only an
approximation to the total mass of the system.
We determine the likelihoods by marginalizing over
the following ranges of the velocity anisotropy, inner
and outer slopes: −10 < β0 < 1, −10 < β∞ < 1,
0.1 < rβ < 10 kpc, 0.7 < γ < 1.2, and 2 < δ < 3.
As discussed above, these ranges for the asymptotic in-
ner and outer slopes are appropriate because we are con-
sidering CDM halos. It is important to emphasize that
these ranges are theoretically motivated and that obser-
vations alone do not demand such restrictive choices.
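The halo profile behind Eq. (2) is not reproduced in this excerpt; a standard double power law with asymptotic inner slope γ and outer slope δ, consistent with the ranges quoted above, can be sketched as follows. The exact functional form used in the paper may differ, so treat this as an illustrative assumption.

```python
def density(r, rho_s, r_s, gamma, delta):
    """Double power-law halo profile: rho ~ r**-gamma for r << r_s and
    rho ~ r**-delta for r >> r_s (assumed form, not necessarily Eq. 2).
    gamma = 1, delta = 3 recovers the familiar NFW shape."""
    x = r / r_s
    return rho_s * x ** (-gamma) * (1.0 + x) ** (gamma - delta)
```

The marginalization ranges in the text (0.7 < γ < 1.2, 2 < δ < 3) correspond to cuspy CDM-like versions of this profile.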
It is possible to fit all of the dSphs at present with
constant-density cores with scale-lengths ∼ 100 pc
(Strigari et al. 2006; Gilmore et al. 2007), although the
data by no means demand such a situation. Though
we consider inner and outer slopes in the ranges quoted
above, our results are not strongly affected if we widen
these intervals. For example, we find that if we allow
the inner slope to vary down to γ = 0, the widths of the
likelihoods are only changed by ∼ 10%. This reflects the
fact that there is a negligible degeneracy between M0.6
and the inner and outer slopes.
We are left to determine the regions of ρs and rs pa-
rameter space to marginalize over. In all dSphs, there
is a degeneracy in this parameter space, telling us that
it is not possible to derive an upper limit on this pair
of parameters from the data alone (Strigari et al. 2006).
While this degeneracy is not important when determin-
ing constraints on M0.6, it is the primary obstacle in
determining Vmax. From the fits we present below, we
find that the lowest rs value that provides an acceptable
fit is ∼ 0.1 kpc, and we use this as the lower limit in all
cases. In our fiducial mass models, we conservatively re-
strict the maximum value of rs using the known distance
to each dSph. In this case, we use 0.1 kpc < rs < D/2,
where D is the distance to the dSph.
In Figure 1 we show the M0.6 likelihood functions for
all of the dSphs. As is shown, we obtain strong con-
Fig. 2.— The velocity dispersion for Ursa Minor as a function of
radial distance, along with the model that maximizes the likelihood
function.
straints on M0.6 in all cases except Sagittarius, for which
we use only a central velocity dispersion. Table 1 sum-
marizes the best fitting M0.6 values for each dwarf. The
quoted errors correspond to the points where the likeli-
hood falls to 10% of its peak value. The upper panel of
Figure 3 shows M0.6 values for each dwarf as a function
of luminosity. In Figure 2 we show an example of the
velocity dispersion data as a function of radial distance
for Ursa Minor, along with the model that maximizes
the likelihood function. For all galaxies, we find χ² per
degree of freedom values ≲ 1.
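The quoted-error convention (the points where the likelihood falls to 10% of its peak) can be read off a tabulated likelihood curve; a sketch, assuming a one-dimensional grid of M0.6 values:

```python
import numpy as np

def ten_percent_interval(mass_grid, like):
    """Return (low, high) mass values where the likelihood curve is
    still at or above 10% of its peak (the paper's error convention)."""
    like = like / like.max()
    inside = mass_grid[like >= 0.1]
    return inside.min(), inside.max()
```

For a Gaussian likelihood of width σ this interval is ±σ√(2 ln 10) ≈ ±2.15σ about the peak.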
The maximum likelihood method also allows us to con-
strain the mass at other radii spanned by the stellar dis-
tribution. The sixth column of Table 1 provides the inte-
grated mass within each dwarf’s King tidal radius. This
radius roughly corresponds to the largest radius where
a reasonable mass constraint is possible. As expected,
the mass within rt is not as well determined as the mass
within 2 rking. From these masses we are able to deter-
mine the mass-to-light ratios within rt, which we present
in the seventh column of Table 1. In the bottom panel
of Figure 3, we show mass-to-light ratios within rt as a
function of dwarf luminosity. We see the standard result
that the observable mass-to-light ratio increases with de-
creasing luminosity (Mateo 1998). Note, however, that
our results are inconsistent with the idea that all of the
dwarfs have the same integrated mass within their stellar
extent. We note that for Sagittarius, we can only obtain
a lower limit on the total mass-to-light ratio.
The last two columns in Table 1 list constraints on
Vmax for the dSphs. Column 8 shows results for an anal-
ysis with limits on rs as described above. In this case, the
integrated mass within the stellar radius is constrained
by the velocity dispersion data, but the halo rotation ve-
locity curve, Vc(r), can continue to rise as r increases
beyond the stellar radius in an unconstrained manner.
The result is that the velocity dispersion data alone pro-
vide only a lower limit on Vmax.
Redefining the Missing Satellites Problem 5
TABLE 1
Parameters Describing Milky Way Satellites.
Galaxy       rking [kpc]  rt [kpc]  LV [10^6 L⊙]  M(<0.6 kpc) [10^7 M⊙]  M(<rt) [10^7 M⊙]  M(<rt)/L [M⊙/L⊙]  Vmax [km/s] (w/o prior)  Vmax [km/s] (with theory prior)
Draco        0.18         0.93      0.26          4.9                                      530               > 22                     28
Ursa Minor   0.30         1.50      0.29          5.3                                      790               > 21                     26
Leo I        0.20         0.80      4.79          4.3                                      106               > 14                     19
Fornax       0.39         2.70      15.5          4.3                                      28                > 20                     25
Leo II       0.19         0.52      0.58          2.1                                      128               > 17                     9
Carina       0.26         0.85      0.43          3.4                                      82                > 13                     15
Sculptor     0.28         1.63      2.15          2.7                                      68                > 20                     14
Sextans      0.40         4.01      0.50          0.9                                      260               > 8                      9
Sagittarius  0.3          4.0       18.1          20                     > 20              > 11              > 19                     —
Note. — Determination of the mass within 0.6 kpc and the maximum circular velocity for the dark matter halos of the dSphs.
The errors are determined as the location where the likelihood function falls off by 90% from its peak value. For Sagittarius, no
reliable estimate of Vmax with the CDM prior could be determined. The CDM prior is determined using the concordance cosmology
with σ8 = 0.74, n = 0.95 (see text for details).
Fig. 3.— The mass within 0.6 kpc (upper) and the mass-to-light
ratios within the King tidal radius (lower) for the Milky Way dSphs
as a function of dwarf luminosity. The error-bars here are defined as
the locations where the likelihoods fall to 40% of the peak values
(corresponding to ∼ 1σ errors). The lines denote, from top to
bottom, constant values of mass of 10^7, 10^8, 10^9 M⊙.
Stronger constraints on Vmax can be obtained if we
limit the range of rs by imposing a cosmology-dependent
prior on the dark matter mass profile. CDM simulations
have shown that there is a correlation between
Vmax and rmax for halos, where rmax is the radius where
the circular velocity peaks. Because subhalo densities
will depend on the collapse time and orbital evolution
of each system, the precise Vmax-rmax relation is sen-
sitive to cosmology (e.g. σ8) and the formation his-
tory of the host halo itself (e.g. Zentner & Bullock 2003;
Power 2003; Kazantzidis et al. 2004; Bullock & Johnston
2005; Bullock & Johnston 2006). When converted to the
relevant halo parameters, the imposed Vmax-rmax rela-
tion can be seen as a theoretical prior on CDM ha-
los, restricting the parameter space we need to inte-
grate over. In order to illustrate the technique, we adopt
log10(rmax/kpc) = 1.35 (log10(Vmax/km s^-1) − 1) − 0.196,
with a scatter of 0.2 in log10, as measured from simulated
subhalos within the Via Lactea host halo (Diemand et al.
2007). This simulation is for a LCDM cosmology with
σ8 = 0.74 and n = 0.95. The scatter in the subhalo mass
function increases at the very high mass end, which re-
flects the fact that these most massive subhalos are those
that are accreted most recently (Zentner & Bullock 2003;
van den Bosch et al. 2005). However, as we show below
our results are not strongly dependent on the large scat-
ter at the high mass end.
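The adopted median relation and its 0.2 dex scatter can be written as a log-normal prior weight over (Vmax, rmax) pairs; the function names and the unnormalized Gaussian form are illustrative.

```python
import numpy as np

def log_rmax_median(vmax_kms):
    """Median Via Lactea relation: log10(rmax/kpc) as a function of Vmax,
    log10(rmax) = 1.35 (log10(Vmax) - 1) - 0.196."""
    return 1.35 * (np.log10(vmax_kms) - 1.0) - 0.196

def rmax_prior(rmax_kpc, vmax_kms, scatter_dex=0.2):
    """Unnormalized Gaussian prior in log10(rmax) about the median
    relation, with the quoted 0.2 dex scatter."""
    z = (np.log10(rmax_kpc) - log_rmax_median(vmax_kms)) / scatter_dex
    return np.exp(-0.5 * z ** 2)
```

Multiplying the data likelihood by this weight restricts the (ρs, rs) degeneracy and is what converts lower limits on Vmax into the two-sided constraints of Column 9.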
Column 9 in Table 1 shows the allowed subhalo Vmax
values for the assumed prior. Note that in most cases,
this prior degrades the quality of the fit, and the likeli-
hood functions peak at a lower overall value. The mag-
nitude of this effect is not large except for the cases of
Leo II and Sagittarius. For Leo II, the peak likelihood
with the prior occurs at a value that is below the 10%
likelihood for the case without a prior on rs (i.e. the
data seem to prefer a puffier subhalo than would be ex-
pected in CDM). For Sagittarius, we are unable to obtain
a reasonable fit within a subhalo that is typical of those
expected. This is not too surprising. Sagittarius is be-
ing tidally disrupted and its dark matter halo is likely
atypical.
We emphasize that the Vmax determinations listed in
Column 9 are driven by theoretical assumptions, and can
only be fairly compared to predictions for this specific
cosmology (LCDM, σ8 = 0.74). The M0.6 values in Col-
umn 5 are applicable for any theoretical model, including
non-CDM models, or CDM models with any normaliza-
tion or power spectrum shape.
4. COMPARISON TO NUMERICAL SIMULATIONS
The recently-completed Via Lactea run is the highest-
resolution simulation of galactic substructure to date,
containing an order of magnitude more particles than
its predecessors (Diemand et al. 2007). As mentioned
above, Via Lactea assumes a LCDM cosmology with
σ8 = 0.74 and n = 0.95. For a detailed description of
the simulation, see Diemand et al. (2007). For our pur-
poses, the most important aspect of Via Lactea is its
ability to resolve the mass of subhalos on length scales
of the characteristic radius 0.6 kpc. In Via Lactea, the
force resolution is 90 pc and the smallest well-resolved
length scale is 300 pc, so that the mass within 0.6 kpc
is well-resolved in nearly all subhalos.
Fig. 4.— The mass within 0.6 kpc versus the maximum circular
velocity for the mass ranges of Via Lactea subhalos corresponding
to the population of satellites we study.
Due to the choice
of time steps we expect the simulation to underestimate
local densities in the densest regions (by about 10% at
densities of 9 × 10^7 M⊙/kpc^3). There is only one sub-
halo with a higher local density than this at 0.6 kpc. For
this subhalo, ρ(r = 0.6 kpc) = 1.4 × 10^8 M⊙/kpc^3, so
its local density might be underestimated by up to 10%,
and the errors in the enclosed mass might be ∼ 20%
(Diemand et al. 2005). For all other subhalos the densi-
ties at 0.6 kpc are well below the affected densities, and
the enclosed mass should not be affected by more than
10% by the finite numerical resolution.
We define subhalos in Via Lactea to be the self-bound
halos that lie within the radius R200 = 389 kpc, where
R200 is defined to enclose an average density 200 times
the mean matter density. We note that in comparing
to the observed MW dwarf population, we could have
conservatively chosen subhalos that are restricted to lie
within the same radius as the most distant MW dSph
(250 kpc). We find that this choice has a negligible effect
on our conclusions – it reduces the count of small halos
by ∼ 10%.
In Figure 4, we show how M0.6 relates to the more
familiar quantity Vmax in Via Lactea subhalos. We note
that the relationship between subhaloM0.6 and Vmax will
be sensitive to the power spectrum shape and normaliza-
tion, as well as the nature of dark matter (Bullock et al.
2001; Zentner & Bullock 2003). The relationship shown
is only valid for the Via Lactea cosmology, but serves as
a useful reference for this comparison.
Given likelihood functions for the dSph M0.6 values,
we are now in position to determine the M0.6 mass func-
tion for Milky Way (MW) satellites and compare this to
the corresponding mass function in Via Lactea. For both
the observations and the simulation, we count the num-
ber of systems in four mass bins from 4 × 10^6 < M0.6 <
4 × 10^8 M⊙. This mass range is chosen to span the M0.6
values allowed by the likelihood functions for the MW
satellites. We assume that the two non-dSph satellites,
the LMC and SMC, belong in the highest mass bin, cor-
responding to M0.6 > 10^8 M⊙ (Harris & Zaritsky 2006;
van der Marel et al. 2002).
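A minimal sketch of the binning just described, assuming the four bins are logarithmically spaced between 4 × 10^6 and 4 × 10^8 M⊙ (half a decade each; the paper does not state the spacing explicitly):

```python
import numpy as np

# Four logarithmically spaced M0.6 bins from 4e6 to 4e8 Msun
# (assumed half-decade spacing).
bin_edges = np.logspace(np.log10(4e6), np.log10(4e8), 5)

def mass_function(m06_values):
    """Count satellites (or subhalos) per M0.6 bin."""
    counts, _ = np.histogram(m06_values, bins=bin_edges)
    return counts
```

Applying the same counting to the observed satellites and to the Via Lactea subhalos is what produces the two curves compared in Figure 5.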
In Figure 5 we show resulting mass functions for MW
satellites (solid) and for Via Lactea subhalos (dashed,
with Poisson error-bars).
Fig. 5.— The M0.6 mass function of Milky Way satellites and
dark subhalos in the Via Lactea simulation. The red (short-dashed)
curve is the total subhalo mass function from the simulation. The
black (solid) curve is the median of the observed satellite mass
function. The error-bars on the observed mass function represent
the upper and lower limits on the number of configurations that
occur with a probability of > 10^-3.
For the MW satellites, the cen-
tral values correspond to the median number of galaxies
per bin, which are obtained from the maximum values
of the respective likelihood functions. The error-bars
on the satellite points are set by the upper and lower
configurations that occur with a probability of > 10^-3
after drawing 1000 realizations from the respective like-
lihood functions. As seen in Figure 5, the predicted dark
subhalo mass function rises as ∼ M0.6^-2 while the visi-
ble MW satellite mass function is relatively flat. The
lowest mass bin (M0.6 ∼ 9 × 10^6 M⊙) always contains 1
visible galaxy (Sextans). The second-to-lowest mass bin
(M0.6 ∼ 2.5 × 10^7 M⊙) contains between 2 and 4 satellites
(Carina, Sculptor, and Leo II). The fact that these two
lowest bins are not consistent with zero galaxies has im-
portant implications for the Stoehr et al. (2002) solution
to the MSP: specifically, it implies that the 11 well-known
MW satellites do not reside in subhalos that resemble the
11 most massive subhalos in Via Lactea.
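The Monte Carlo error-bar construction can be sketched as follows. Each satellite's M0.6 likelihood is represented here by a sampler callable, and for simplicity the per-bin extrema over all realizations stand in for the paper's > 10^-3 probability cut; both simplifications are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def satellite_count_range(mass_samplers, bin_edges, n_draws=1000):
    """Draw one M0.6 value per satellite from its likelihood (each
    sampler is a callable taking an rng), histogram each realization,
    and return per-bin (min, max) counts over all realizations."""
    counts = np.array([
        np.histogram([draw(rng) for draw in mass_samplers],
                     bins=bin_edges)[0]
        for _ in range(n_draws)
    ])
    return counts.min(axis=0), counts.max(axis=0)
```

With degenerate (constant) samplers every realization is identical, so the minimum and maximum per-bin counts coincide.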
To further emphasize this point, we see from Figure 5
that the mass of the 11th most massive subhalo in Via
Lactea is 4 × 10^7 M⊙. From the likelihood functions in
Figure 1, Sextans, Carina, Leo II, and Sculptor must
have values of M0.6 less than 4 × 10^7 M⊙ at 99% c.l.,
implying a negligible probability that all of these dSphs
reside in halos with M0.6 > 4 × 10^7 M⊙.
Using the M0.6 mass function of MW satellites, we
can test other CDM-based solutions to the MSP. Two
models of interest are based on reionization suppres-
sion (Bullock et al. 2000; Moore et al. 2006) and on there
being a characteristic halo mass scale prior to subhalo
accretion (Diemand et al. 2007). To roughly represent
these models, we focus on two subsamples of Via Lactea
subhalos: the earliest forming (EF) halos, and the largest
mass halos before they were accreted (LBA) into the
host halo. As described in Diemand et al. (2007), the
LBA sample is defined to be the 10 subhalos that had
the highest Vmax value throughout their entire history.
Fig. 6.— The solid and dashed curves show the MW satellites
and dark subhalos in Via Lactea, respectively. These lines are
reproduced from Figure 5, with error-bars suppressed for clarity.
The blue (dotted) curve represents the ten earliest forming halos
in Via Lactea, and the green (long-dashed) curve represents the 10
most massive halos before accretion into the Milky Way halo.
These systems all had Vmax > 37.3 km s^-1 at some point
in their history. The EF sample consists of the 10 sub-
halos with Vmax > 16.2 km s^-1 (the limit of atomic cool-
ing) at z = 9.6. The Kravtsov et al. (2004) model would
correspond to a selection intermediate between EF and
LBA. In Figure 6 we show the observed mass function of
MW satellites (solid, squares) along with the EF (dotted,
triangles) and LBA (long-dashed, circles) samples. We
conclude that both of these models are in agreement with
the MW satellite mass function. Future observations and
quantification of the masses of the newly-discovered MW
satellites will enable us to do precision tests of the viable
MSP solutions. Additionally, once the capability to do
numerical simulations of substructure in warm dark mat-
ter models becomes a reality, the M0.6 mass function will
provide an invaluable tool to place constraints on WDM
models.
5. SUMMARY AND DISCUSSION
We have provided comprehensive dark matter mass
constraints for the 9 well-studied dSph satellite galax-
ies of the Milky Way and investigated CDM-based solu-
tions for the missing satellite problem in light of these
constraints. While subhalo Vmax values are the tradi-
tional means by which theoretical predictions quantify
substructure counts, this is not the most direct way
to confront the observational constraints. Specifically,
Vmax is poorly constrained by stellar velocity dispersion
measurements, and can only be estimated by adopting
cosmology-dependent, theoretical extrapolations. We ar-
gue the comparison between theory and observation is
best made using the integrated mass within a fixed phys-
ical radius comparable to the stellar extent of the known
satellites, ∼ 0.6 kpc. This approach is motivated by
Strigari et al. (2007) who showed that the mass within
two stellar King radii is best constrained by typical ve-
locity dispersion data.
Using M0.6 to represent the dark matter mass within a
radius of 0.6 kpc, we computed M0.6 likelihood functions
for the MW dSphs based on published velocity dispersion
data. Our models allow for a wide range of underlying
dark matter halo profile shapes and stellar velocity dis-
persion profiles. With this broad allowance, we showed
that the M0.6 for most dwarf satellites is constrained to
within ∼ 30%.
We derived the M0.6 mass function of MW satellites
(with error bars) and compared it to the same mass func-
tion computed directly from the Via Lactea substruc-
ture simulation. While the observed M0.6 mass function
of luminous satellites is relatively flat, the comparable
CDM subhalo mass function rises as ∼ M0.6^-2. We rule
out the hypothesis that all of the well-known Milky Way
satellites strictly inhabit the most massive CDM subha-
los. If luminosity does track current subhalo mass, this
would only be possible if the subhalo population of the
Milky Way were drastically different than that predicted
in CDM. However, we show that other plausible CDM so-
lutions are consistent with the observed mass function.
Specifically, the earliest forming subhalos have a flatM0.6
mass function that is consistent with the satellite subhalo
mass function. This would be expected if the population
of bright dwarf spheroidals corresponds to the residual
halo population that accreted a significant amount of gas
before the epoch of reionization (Bullock et al. 2000).
We also tested the hypothesis that the present dwarf
spheroidal population corresponds to the subhalos that
were the most massive before they fell into the MW halo
(Kravtsov et al. 2004). This hypothesis is also consistent
with the current data.
In deriving the M0.6 mass function for this paper we
have set aside the issue of the most recently discovered
population of MW dwarfs. We aim to return to this
issue in later work, but it is worth speculating on the
expected impact that these systems would have on our
conclusions. If we had included the new systems, mak-
ing ∼ 20 satellites in all, would it be possible to place
these systems in the ∼ 20 most massive subhalos in Via
Lactea? Given the probable mass ranges for the new
dwarfs, we find that this is unlikely. We can get a rough
estimate of their masses from their observed luminosi-
ties. We start by considering the mass-to-light ratios
of the known dSph population from figure 3 and from
Mateo (1998). If we assume that the other dwarfs have
similar M/L range, we can assign a mass range for each
of them. In all cases, the new MW dwarfs are approxi-
mately 1 to 2 orders of magnitude smaller in luminosity
than the well-known dSph population. Using the central
points for the known dSphs, we obtain M0.6/L spanning
the range from 3 − 230. Considering the width of the
likelihoods, we can allow a slightly larger range, 2− 350.
If we place the new dwarfs in this latter range, the uncer-
tainty in their masses is (2 − 350)LM⊙/L⊙. Even with
this generous range we expect most of the new dwarfs
have M0.6 ≲ 10^7 M⊙.^7
7 These estimates are in rough agreement with recent de-
terminations from stellar velocity dispersion measurements in
the new dwarfs, as presented by N. Martin and J. Simon
at the 3rd Irvine Cosmology Workshop, March 22-24, 2007,
http://www.physics.uci.edu/Astrophysical-Probes/
The discovery of more members of the MW group,
and the precise determination of the M0.6 mass func-
tion, could bring the status of the remaining viable MSP
solutions into sharper focus. These measurements would
also provide important constraints on warm dark matter
models or on the small scale power spectrum in CDM.
6. ACKNOWLEDGMENTS
We thank Jason Harris, Tobias Kaufmann, Savvas
Koushiappas, Andrey Kravtsov, Steve Majewski, Nicolas
Martin, Josh Simon, and Andrew Zentner for discussions
on this topic. We thank Mike Siegal for sharing his Leo
II data. LES is supported in part by a Gary McCue post-
doctoral fellowship through the Center for Cosmology at
the University of California, Irvine. L.E.S., J.S.B., and
M.K. are supported in part by NSF grant AST-0607746.
M.K. acknowledges support from PHY-0555689. J. D.
acknowledges support from NASA through Hubble Fel-
lowship grant HST-HF-01194.01 awarded by the Space
Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy,
Inc., for NASA, under contract NAS 5-26555. P.M. ac-
knowledges support from NASA grants NAG5-11513 and
NNG04GK85G, and from the Alexander von Humboldt
Foundation. The Via Lactea simulation was performed
on NASA’s Project Columbia supercomputer system.
REFERENCES
Belokurov, V. et al. 2006, Astrophys. J., 647, L111
—. 2007, Astrophys. J., 654, 897
Benson, A. J., Lacey, C. G., Baugh, C. M., Cole, S., & Frenk,
C. S. 2002, Mon. Not. Roy. Astron. Soc., 333, 156
Bullock, J. S. & Johnston, K. V. 2005, ApJ, 635, 931
Bullock, J. S. & Johnston, K. V. 2006, To appear in the
proceedings of ’Island Universes: Structure and Evolution of
Disk Galaxies’, ed. R. de Jong (SPringer: Dordrecht)
Bullock, J. S., Kravtsov, A. V., & Weinberg, D. H. 2000,
Astrophys. J., 539, 517
Bullock, J. S. et al. 2001, Mon. Not. Roy. Astron. Soc., 321, 559
Diemand, J., Kuhlen, M., & Madau, P. 2007, Astrophys. J., 657,
Diemand, J., Kuhlen, M., & Madau, P. 2007, astro-ph/0703337
Diemand, J., Zemp, M., Moore, B., Stadel, J., & Carollo, M.
2005, Mon. Not. Roy. Astron. Soc., 364, 665
Gilmore, G. et al. 2007, astro-ph/0703308
Gnedin, N. Y. & Kravtsov, A. V. 2006, Astrophys. J., 645, 1054
Harbeck, D. et al. 2001, Astron. J., 122, 3092
Harris, J. & Zaritsky, D. 2006, AJ, 131, 2514
Hayashi, E., Navarro, et al. 2003, ApJ, 584, 541
Ibata, R., Chapman, S., Irwin, M., Lewis, G., & Martin, N. 2006,
Mon. Not. Roy. Astron. Soc. Lett., 373, L70
Ibata, R. A., Wyse, R. F. G., Gilmore, G., Irwin, M. U., &
Suntzeff, N. B. 1997, Astron. J., 113, 634
Kazantzidis, S., Mayer, L., Mastropietro, C., Diemand, J., Stadel,
J., & Moore, B. 2004, ApJ, 608, 663
King, I. 1962, Astron. J., 67, 471
Klypin, A., Kravtsov, A. V., Valenzuela, O., & Prada, F. 1999,
ApJ, 522, 82
Kravtsov, A. V., Gnedin, O. Y., & Klypin, A. A. 2004,
Astrophys. J., 609, 482
Lokas, E. L. 2002, Mon. Not. Roy. Astron. Soc., 333, 697
Lokas, E. L., Mamon, G. A., & Prada, F. 2005, Mon. Not. Roy.
Astron. Soc., 363, 918
Majewski, S. R., Skrutskie, M. F., Weinberg, M. D., &
Ostheimer, J. C. 2003, Astrophys. J., 599, 1082
Mateo, M. 1998, Ann. Rev. Astron. Astrophys., 36, 435
McConnachie, A. W., Penarrubia, J., & Navarro, J. F. 2006,
astro-ph/0608687
Moore, B., Diemand, J., Madau, P., Zemp, M., & Stadel, J. 2006,
Mon. Not. Roy. Astron. Soc., 368, 563
Moore, B. et al. 1999, ApJ, 524, L19
Munoz, R. R. et al. 2005, Astrophys. J., 631, L137
—. 2006, Astrophys. J., 649, 201
Navarro, J. F. et al. 2004, Mon. Not. Roy. Astron. Soc., 349, 1039
Penarrubia, J., McConnachie, A., & Navarro, J. F. 2007,
astro-ph/0701780
Power, C. 2003, Ph.D. Thesis, University of Durham
Sohn, S. T. et al. 2006, astro-ph/0608151
Somerville, R. S. 2002, ApJ, 572, L23
Stoehr, F., White, S. D. M., Tormen, G., & Springel, V. 2002,
Mon. Not. Roy. Astron. Soc., 335, L84
Strigari, L. E., Bullock, J. S., & Kaplinghat, M. 2007, Astrophys.
J., 657, L1
Strigari, L. E. et al. 2006, Astrophys. J., 652, 306
Strigari, L. E. et al. 2006, astro-ph/0611925
Tolstoy, E. et al. 2004, Astrophys. J., 617, L119
van den Bosch, F. C., Tormen, G., & Giocoli, C. 2005, Mon. Not.
Roy. Astron. Soc., 359, 1029
van der Marel, R. P., Alves, D. R., Hardy, E., & Suntzeff, N. B.
2002, AJ, 124, 2639
Walker, M. G., Mateo, M., Olszewski, E. W., Bernstein, R. A.,
Wang, X., & Woodroofe, M. 2005, AJ in press
(astro-ph/0511465)
Walker, M. G., Mateo, M., Olszewski, E. W., Pal, J. K., Sen, B.,
& Woodroofe, M. 2006, ApJ, 642, L41
Westfall, K. B., Majewski, S. R., Ostheimer, J. C., Frinchaboy,
P. M., Kunkel, W. E., Patterson, R. J., & Link, R. 2006, AJ,
131, 375
Wilkinson, M. I. et al. 2004, Astrophys. J., 611, L21
Willman, B. et al. 2005, AJ, 129, 2692
Willman, B. et al. 2005, Astrophys. J., 626, L85
Wu, X. 2007, astro-ph/0702233
Zentner, A. R. & Bullock, J. S. 2003, Astrophys. J., 598, 49
Zucker, D. B. et al. 2006a, Astrophys. J., 650, L41
—. 2006b, Astrophys. J., 643, L103
0704.1818
Low-density graph codes that are optimal for
source/channel coding and binning
Martin J. Wainwright
Dept. of Statistics, and
Dept. of Electrical Engineering and Computer Sciences
University of California, Berkeley
wainwrig@{eecs,stat}.berkeley.edu
Emin Martinian
Tilda Consulting, Inc.
Arlington, MA
[email protected]
Technical Report 730,
Department of Statistics, UC Berkeley,
April 2007
Abstract
We describe and analyze the joint source/channel coding properties of a class of sparse
graphical codes based on compounding a low-density generator matrix (LDGM) code with a
low-density parity check (LDPC) code. Our first pair of theorems establish that there exist
codes from this ensemble, with all degrees remaining bounded independently of block length,
that are simultaneously optimal as both source and channel codes when encoding and decoding
are performed optimally. More precisely, in the context of lossy compression, we prove that
finite degree constructions can achieve any pair (R,D) on the rate-distortion curve of the binary
symmetric source. In the context of channel coding, we prove that finite degree codes can achieve
any pair (C, p) on the capacity-noise curve of the binary symmetric channel. Next, we show that
our compound construction has a nested structure that can be exploited to achieve the Wyner-
Ziv bound for source coding with side information (SCSI), as well as the Gelfand-Pinsker bound
for channel coding with side information (CCSI). Although the current results are based on
optimal encoding and decoding, the proposed graphical codes have sparse structure and high
girth that renders them well-suited to message-passing and other efficient decoding procedures.
Keywords: Graphical codes; low-density parity check code (LDPC); low-density generator matrix
code (LDGM); weight enumerator; source coding; channel coding; Wyner-Ziv problem; Gelfand-
Pinsker problem; coding with side information; information embedding; distributed source coding.
1 Introduction
Over the past decade, codes based on graphical constructions, including turbo codes [3] and low-
density parity check (LDPC) codes [17], have proven extremely successful for channel coding prob-
lems. The sparse graphical nature of these codes makes them very well-suited to decoding using
efficient message-passing algorithms, such as the sum-product and max-product algorithms. The
asymptotic behavior of iterative decoding on graphs with high girth is well-characterized by the
density evolution method [25, 39], which yields a useful design principle for choosing degree dis-
tributions. Overall, suitably designed LDPC codes yield excellent practical performance under
iterative message-passing, frequently very close to Shannon limits [7].
However, many other communication problems involve aspects of lossy source coding, either
alone or in conjunction with channel coding, the latter case corresponding to joint source-channel
coding problems. Well-known examples include lossy source coding with side information (one
variant corresponding to the Wyner-Ziv problem [45]), and channel coding with side information
(one variant being the Gelfand-Pinsker problem [19]). The information-theoretic schemes achieving
the optimal rates for coding with side information involve delicate combinations of source and
channel coding. For problems of this nature—in contrast to the case of pure channel coding—the
use of sparse graphical codes and message-passing algorithms is not nearly as well understood. With
this perspective in mind, the focus of this paper is the design and analysis of sparse graphical codes
for lossy source coding, as well as joint source/channel coding problems. Our main contribution
is to exhibit classes of graphical codes, with all degrees remaining bounded independently of the
blocklength, that simultaneously achieve the information-theoretic bounds for both source and
channel coding under optimal encoding and decoding.
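Both benchmarks invoked here (the rate-distortion curve of the symmetric Bernoulli source and the capacity of the binary symmetric channel) reduce to the binary entropy function; a quick reference sketch of the standard formulas:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion_bsc_source(d):
    """R(D) = 1 - h2(D) for a Bernoulli(1/2) source under Hamming
    distortion, valid for 0 <= D <= 1/2."""
    return 1.0 - h2(d)

def capacity_bsc(p):
    """C = 1 - h2(p) for the binary symmetric channel with crossover p."""
    return 1.0 - h2(p)
```

A code ensemble achieving every pair (R, D) on the first curve and every pair (C, p) on the second, with bounded degrees, is exactly what Theorems 1 and 2 of the paper assert.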
1.1 Previous and ongoing work
A variety of code architectures have been suggested for lossy compression and related problems in
source/channel coding. One standard approach to lossy compression is via trellis-code quantization
(TCQ) [26]. The advantage of trellis constructions is that exact encoding and decoding can be
performed using the max-product or Viterbi algorithm [24], with complexity that grows linearly
in the trellis length but exponentially in the constraint length. Various researchers have exploited
trellis-based codes both for single-source and distributed compression [6, 23, 37, 46] as well as
information embedding problems [5, 15, 42]. One limitation of trellis-based approaches is the
fact that saturating rate-distortion bounds requires increasing the trellis constraint length [43],
which incurs exponential complexity (even for the max-product or sum-product message-passing
algorithms).
Other researchers have proposed and studied the use of low-density parity check (LDPC) codes
and turbo codes, which have proven extremely successful for channel coding, in application to
various types of compression problems. These techniques have proven particularly successful for
lossless distributed compression, often known as the Slepian-Wolf problem [18, 40]. An attractive
feature is that the source encoding step can be transformed to an equivalent noisy channel de-
coding problem, so that known constructions and iterative algorithms can be leveraged. For lossy
compression, other work [31] shows that it is possible to approach the binary rate-distortion bound
using LDPC-like codes, albeit with degrees that grow logarithmically with the blocklength.
A parallel line of work has studied the use of low-density generator matrix (LDGM) codes, which
correspond to the duals of LDPC codes, for lossy compression problems [30, 44, 9, 35, 34]. Focusing
on binary erasure quantization (a special compression problem dual to binary erasure channel
coding), Martinian and Yedidia [30] proved that LDGM codes combined with modified message-
passing can saturate the associated rate-distortion bound. Various researchers have used techniques
from statistical physics, including the cavity method and replica methods, to provide non-rigorous
analyses of LDGM performance for lossy compression of binary sources [8, 9, 35, 34]. In the limit
of zero-distortion, this analysis has been made rigorous in a sequence of papers [12, 32, 10, 14].
Moreover, our own recent work [28, 27] provides rigorous upper bounds on the effective rate-
distortion function of various classes of LDGM codes. In terms of practical algorithms for lossy
binary compression, researchers have explored variants of the sum-product algorithm [34] or survey
propagation algorithms [8, 44] for quantizing binary sources.
1.2 Our contributions
Classical random coding arguments [11] show that random binary linear codes will achieve both
channel capacity and rate-distortion bounds. The challenge addressed in this paper is the design
and analysis of codes with bounded graphical complexity, meaning that all degrees in a factor graph
representation of the code remain bounded independently of blocklength. Such sparsity is critical
if there is any hope to leverage efficient message-passing algorithms for encoding and decoding.
With this context, the primary contribution of this paper is the analysis of sparse graphical code
ensembles in which a low-density generator matrix (LDGM) code is compounded with a low-density
parity check (LDPC) code (see Fig. 2 for an illustration). Related compound constructions have
been considered in previous work, but focusing exclusively on channel coding [16, 36, 41]. In
contrast, this paper focuses on communication problems in which source coding plays an essential
role, including lossy compression itself as well as joint source/channel coding problems. Indeed, the
source coding analysis of the compound construction requires techniques fundamentally different
from those used in channel coding analysis. We also note that the compound code illustrated
in Fig. 2 can be applied to more general memoryless channels and sources; however, so as to bring
the primary contribution into sharp focus, this paper focuses exclusively on binary sources and/or
binary symmetric channels.
More specifically, our first pair of theorems establish that for any rate R ∈ (0, 1), there exist
codes from compound LDGM/LDPC ensembles with all degrees remaining bounded independently
of the blocklength that achieve both the channel capacity and the rate-distortion bound. To the
best of our knowledge, this is the first demonstration of code families with bounded graphical
complexity that are simultaneously optimal for both source and channel coding. Building on
these results, we demonstrate that codes from our ensemble have a naturally “nested” structure,
in which good channel codes can be partitioned into a collection of good source codes, and vice
versa. By exploiting this nested structure, we prove that codes from our ensembles can achieve
the information-theoretic limits for the binary versions of both the problem of lossy source coding
with side information (SCSI, known as the Wyner-Ziv problem [45]), and channel coding with side
information (CCSI, known as the Gelfand-Pinsker [19] problem). Although these results are based
on optimal encoding and decoding, a code drawn randomly from our ensembles will, with high
probability, have high girth and good expansion, and hence be well-suited to message-passing and
other efficient decoding procedures.
The remainder of this paper is organized as follows. Section 2 contains basic background
material and definitions for source and channel coding, and factor graph representations of binary
linear codes. In Section 3, we define the ensembles of compound codes that are the primary
focus of this paper, and state (without proof) our main results on their source and channel coding
optimality. In Section 4, we leverage these results to show that our compound codes can achieve
the information-theoretic limits for lossy source coding with side information (SCSI), and channel
coding with side information (CCSI). Sections 5 and 6 are devoted to proofs that codes from the
compound ensemble are optimal for lossy source coding (Section 5) and channel coding (Section 6)
respectively. We conclude the paper with a discussion in Section 7. Portions of this work have
previously appeared as conference papers [28, 29, 27].
2 Background
In this section, we provide relevant background material on source and channel coding, binary
linear codes, as well as factor graph representations of such codes.
2.1 Source and channel coding
A binary linear code C of block length n consists of all binary strings x ∈ {0, 1}^n satisfying a
set of m < n equations in modulo two arithmetic. More precisely, given a parity check matrix
H ∈ {0, 1}^{m×n}, the code is given by the null space

C := {x ∈ {0, 1}^n | Hx = 0} . (1)

Assuming the parity check matrix H is full rank, the code C consists of 2^{n−m} = 2^{nR} codewords,
where R = 1 − m/n is the code rate.
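As a concrete illustration of definition (1), the sketch below checks membership in the null space of a small parity check matrix over GF(2). The 2 × 4 matrix H is a made-up example for illustration, not a code from this paper.

```python
import numpy as np
from itertools import product

def in_code(H, x):
    """Membership test for C = {x : Hx = 0 mod 2}, per equation (1)."""
    return not np.any(H.dot(x) % 2)

# Hypothetical full-rank parity check matrix: m = 2 checks on n = 4 bits.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])

assert in_code(H, np.array([0, 0, 0, 0]))       # all-zeros is always a codeword
assert not in_code(H, np.array([1, 0, 0, 0]))   # violates the first check

# With H full rank, |C| = 2^(n-m) = 2^(nR), here with rate R = 1 - m/n = 1/2.
codewords = [x for x in product([0, 1], repeat=4) if in_code(H, np.array(x))]
assert len(codewords) == 2 ** (4 - 2)
```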
Channel coding: In the channel coding problem, the transmitter chooses some codeword x ∈ C
and transmits it over a noisy channel, so that the receiver observes a noise-corrupted version Y .
The channel behavior is modeled by a conditional distribution P(y | x) that specifies, for each
transmitted sequence x, a probability distribution over possible received sequences {Y = y}. In
many cases, the channel is memoryless, meaning that it acts on each bit of x in an independent
manner, so that the channel model decomposes as P(y | x) = ∏_{i=1}^n f_i(x_i; y_i). Here each function
f_i(x_i; y_i) = P(y_i | x_i) is simply the conditional probability of observing bit y_i given that x_i was
transmitted. As a simple example, in the binary symmetric channel (BSC), the channel flips each
transmitted bit x_i with probability p, so that P(y_i | x_i) = (1 − p) I[x_i = y_i] + p I[x_i ≠ y_i],
where I(A) represents an indicator function of the event A. With this set-up, the goal of the receiver
is to solve the channel decoding problem: estimate the most likely transmitted codeword,
given by x̂ := argmax_{x ∈ C} P(y | x). The Shannon capacity [11] of a channel specifies an upper bound
on the rate R of any code for which transmission can be asymptotically error-free. Continuing
with our example of the BSC with flip probability p, the capacity is given by C = 1− h(p), where
h(p) := −p log2 p− (1− p) log2(1− p) is the binary entropy function.
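Both closed-form expressions are easy to evaluate numerically. The helpers below are a minimal sketch; the function names are ours, not the paper's.

```python
import math

def binary_entropy(p):
    """h(p) = -p log2(p) - (1-p) log2(1-p), with h(0) = h(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity C = 1 - h(p) of the binary symmetric channel BSC(p)."""
    return 1.0 - binary_entropy(p)

assert abs(binary_entropy(0.5) - 1.0) < 1e-12   # a fair coin carries one full bit
assert bsc_capacity(0.0) == 1.0                 # noiseless channel
assert bsc_capacity(0.5) == 0.0                 # pure-noise channel carries nothing
assert 0.4 < bsc_capacity(0.11) < 0.6           # h(0.11) is roughly one half
```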
Lossy source coding: In a lossy source coding problem, the encoder observes some source se-
quence S ∈ S, corresponding to a realization of some random vector with i.i.d. elements Si ∼ PS .
The idea is to compress the source by representing each source sequence S by some codeword
x ∈ C. As a particular example, one might be interested in compressing a symmetric Bernoulli
source, consisting of binary strings S ∈ {0, 1}n, with each element Si drawn in an independent
and identically distributed (i.i.d.) manner from a Bernoulli distribution with parameter p = 1/2.
One could achieve a given compression rate R = m/n by mapping each source sequence to some
codeword x ∈ C from a code containing 2^m = 2^{nR} elements, say indexed by the binary sequences
z ∈ {0, 1}m. In order to assess the quality of the compression, we define a source decoding map
x 7→ Ŝ(x), which associates a source reconstruction Ŝ(x) with each codeword x ∈ C. Given some
distortion metric d : S × S → R+, the source encoding problem is to find the codeword with
minimal distortion, namely the optimal encoding x̂ := argmin_{x ∈ C} d(Ŝ(x), S). Classical rate-distortion
theory [11] specifies the optimal trade-offs between the compression rate R and the best achievable
average distortion D = E[d(Ŝ, S)], where the expectation is taken over the random source sequences
S. For instance, to follow up on the Bernoulli compression example, if we use the Hamming metric
d(Ŝ, S) = (1/n) ∑_{i=1}^n |Ŝ_i − S_i| as the distortion measure, then the rate-distortion function takes the
form R(D) = 1 − h(D), where h is the previously defined binary entropy function.
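For tiny blocklengths the optimal source encoding can be carried out by exhaustive search, which makes the distortion trade-off concrete. The four-word codebook below is a hypothetical example, not a code analyzed in this paper.

```python
import numpy as np

def optimal_encode(codebook, s):
    """Exhaustive-search encoder: return the codeword nearest to s in Hamming distance."""
    return min(codebook, key=lambda x: int(np.sum(np.array(x) != s)))

# Hypothetical rate-1/2 codebook on n = 4 bits (2^(nR) = 4 codewords).
codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1)]

s = np.array([1, 1, 0, 1])
x_hat = optimal_encode(codebook, s)
distortion = np.sum(np.array(x_hat) != s) / 4.0   # per-symbol Hamming distortion

assert x_hat == (1, 1, 1, 1)
assert distortion == 0.25
```

This brute-force search is exponential in the blocklength, which is exactly why the sparse graphical structure discussed next matters.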
We now provide definitions of “good” source and channel codes that are useful for future reference.
Definition 1. (a) A code family is a good D-distortion binary symmetric source code if for any
ǫ > 0, there exists a code with rate R < 1− h (D) + ǫ that achieves Hamming distortion less than
or equal to D.
(b) A code family is a good BSC(p)-noise channel code if for any ǫ > 0 there exists a code with
rate R > 1− h (p)− ǫ with error probability less than ǫ.
2.2 Factor graphs and graphical codes
Both the channel decoding and source encoding problems, if viewed naively, require searching over
an exponentially large codebook (since |C| = 2nR for a code of rate R). Therefore, any practically
useful code must have special structure that facilitates decoding and encoding operations. The
success of a large subclass of modern codes in use today, especially low-density parity check (LDPC)
codes [17, 38], is based on the sparsity of their associated factor graphs.
Figure 1. (a) Factor graph representation of a rate R = 0.5 low-density parity check (LDPC) code
with bit degree dv = 3 and check degree d'c = 6. (b) Factor graph representation of a rate R = 0.75
low-density generator matrix (LDGM) code with check degree dc = 3 and bit degree dv = 4.
Given a binary linear code C, specified by parity check matrix H, the code structure can be
captured by a bipartite graph, in which circular nodes (◦) represent the binary values xi (or columns
of H), and square nodes (�) represent the parity checks (or rows of H). For instance, Fig. 1(a)
shows the factor graph for a rate R = 1/2 code in parity check form, with m = 6 checks acting on
n = 12 bits. The edges in this graph correspond to 1’s in the parity check matrix, and reveal the
subset of bits on which each parity check acts. The parity check code in Fig. 1(a) is a regular
code with bit degree 3 and check degree 6. Such low-density constructions, meaning that both the
bit degrees and check degrees remain bounded independently of the block length n, are of most
practical use, since they can be efficiently represented and stored, and yield excellent performance
under message-passing decoding. In the context of a channel coding problem, the shaded circular
nodes at the top of the low-density parity check (LDPC) code in panel (a) represent the observed
variables yi received from the noisy channel.
Figure 1(b) shows a binary linear code represented in factor graph form by its generator matrix
G. In this dual representation, each codeword x ∈ {0, 1}n is generated by taking the matrix-vector
product of the form Gz, where z ∈ {0, 1}m is a sequence of information bits, and G ∈ {0, 1}n×m is
the generator matrix. For the code shown in panel (b), the blocklength is n = 12, and information
sequences are of length m = 9, for an overall rate of R = m/n = 0.75 in this case. The degrees of
the check and variable nodes in the factor graph are dc = 3 and dv = 4 respectively, so that the
associated generator matrix G has dc = 3 ones in each row, and dv = 4 ones in each column. When
the generator matrix is sparse in this setting, then the resulting code is known as a low-density
generator matrix (LDGM) code.
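In generator form, encoding is just a sparse matrix-vector product over GF(2). The small G below is a hypothetical 4 × 2 example for illustration.

```python
import numpy as np

# Hypothetical sparse generator matrix: n = 4 codeword bits, m = 2 information bits.
G = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [1, 0]])

def ldgm_encode(G, z):
    """Codeword x = Gz in modulo-two arithmetic."""
    return G.dot(z) % 2

x = ldgm_encode(G, np.array([1, 1]))
assert x.tolist() == [1, 1, 0, 1]

# Every information sequence z in {0,1}^m yields a codeword, so |C| <= 2^m.
```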
2.3 Weight enumerating functions
For future reference, it is useful to define the weight enumerating function of a code. Given a binary
linear code of blocklength m, its codewords x have renormalized Hamming weights w := ‖x‖₁/m that
range in the interval [0, 1]. Accordingly, it is convenient to define a function Wm : [0, 1] → R+ that,
for each w ∈ [0, 1], counts the number of codewords of weight w:

Wm(w) := |{x ∈ C | ‖x‖₁ = ⌈wm⌉}| , (2)
where ⌈·⌉ denotes the ceiling function. Although it is typically difficult to compute the weight enu-
merator itself, it is frequently possible to compute (or bound) the average weight enumerator, where
the expectation is taken over some random ensemble of codes. In particular, our analysis in the
sequel makes use of the average weight enumerator of a (dv, d'c)-regular LDPC code (see Fig. 1(a)),
defined as

Am(w; dv, d'c) := (1/m) log E[Wm(w)] , (3)

where the expectation is taken over the ensemble of all regular (dv, d'c)-LDPC codes. For such
regular LDPC codes, this average weight enumerator has been extensively studied in previous
work [17, 22].
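For a code small enough to enumerate, the weight enumerator of equation (2) can be computed directly; averaging this quantity over random draws of H would give (the finite-m analogue of) equation (3). The parity check matrix here is a made-up example, and for simplicity we bin by the exact normalized weight rather than applying the ceiling.

```python
import numpy as np
from itertools import product

def weight_enumerator(H):
    """W_m(w): number of codewords of each normalized weight w = ||y||_1 / m."""
    m = H.shape[1]
    W = {}
    for y in product([0, 1], repeat=m):
        if not np.any(H.dot(np.array(y)) % 2):
            w = sum(y) / m
            W[w] = W.get(w, 0) + 1
    return W

H = np.array([[1, 1, 0, 1],     # hypothetical 2 x 4 parity check matrix
              [0, 1, 1, 1]])
W = weight_enumerator(H)

assert W[0.0] == 1               # the all-zeros codeword is always present
assert sum(W.values()) == 4      # 2^(m - k) codewords in total
```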
3 Optimality of bounded degree compound constructions
In this section, we describe the compound LDGM/LDPC construction that is the focus of this
paper, and describe our main results on their source and channel coding optimality.
3.1 Compound construction
Our main focus is the construction illustrated in Fig. 2, obtained by compounding an LDGM code
(top two layers) with an LDPC code (bottom two layers). The code is defined by a factor graph
with three layers: at the top, a vector x ∈ {0, 1}n of codeword bits is connected to a set of n parity
checks, which are in turn connected by a sparse generator matrix G to a vector y ∈ {0, 1}m of
information bits in the middle layer. The information bits y are also codewords in an LDPC code,
defined by the parity check matrix H connecting the middle and bottom layers.
In more detail, considering first the LDGM component of the compound code, each codeword
x ∈ {0, 1}n in the top layer is connected via the generator matrix G ∈ {0, 1}n×m to an information
sequence y ∈ {0, 1}m in the middle layer; more specifically, we have the algebraic relation x = Gy.
Note that this LDGM code has rate RG ≤ m/n. Second, turning to the LDPC component of the
compound construction, its codewords correspond to a subset of information sequences y ∈ {0, 1}m
in the middle layer. In particular, any valid codeword y satisfies the parity check relation Hy = 0,
where H ∈ {0, 1}^{k×m} joins the middle and bottom layers of the construction. Overall, this defines
an LDPC code with rate RH = 1 − k/m, assuming that H has full row rank.
The overall code C obtained by concatenating the LDGM and LDPC codes has blocklength n,
and rate R upper bounded by RGRH . In algebraic terms, the code C is defined as
C := {x ∈ {0, 1}^n | x = Gy for some y ∈ {0, 1}^m such that Hy = 0} , (4)
where all operations are in modulo two arithmetic.
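The defining condition (4) can be checked by brute force for toy dimensions. The matrices G and H below are hypothetical small instances, chosen only to make the enumeration fit on a page.

```python
import numpy as np
from itertools import product

def compound_code(G, H):
    """Enumerate C = {Gy mod 2 : Hy = 0 mod 2}, following equation (4)."""
    m = G.shape[1]
    return {tuple(G.dot(np.array(y)) % 2)
            for y in product([0, 1], repeat=m)
            if not np.any(H.dot(np.array(y)) % 2)}

G = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1]])        # hypothetical LDGM generator: n = 4, m = 3
H = np.array([[1, 1, 1]])        # hypothetical LDPC check: k = 1

C = compound_code(G, H)
assert (0, 0, 0, 0) in C
assert len(C) <= 2 ** (3 - 1)    # at most 2^(m - k) codewords
```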
Figure 2. The compound LDGM/LDPC construction analyzed in this paper, consisting of an (n, m)
LDGM code over the middle and top layers, compounded with an (m, k) LDPC code over the middle
and bottom layers. Codewords x ∈ {0, 1}^n are placed on the top row of the construction, and are
associated with information bit sequences y ∈ {0, 1}^m in the middle layer. The LDGM code over the
top and middle layers is defined by a sparse generator matrix G ∈ {0, 1}^{n×m} with at most dc ones
per row. The bottom LDPC code over the middle and bottom layers is represented by a sparse parity
check matrix H ∈ {0, 1}^{k×m} with dv ones per column, and d'c ones per row.
Our analysis in this paper will be performed over random ensembles of compound LDGM/LDPC
ensembles. In particular, for each degree triplet (dc, dv, d'c), we focus on the following random
ensemble:
(a) For each fixed integer dc ≥ 4, the random generator matrix G ∈ {0, 1}^{n×m} is specified as
follows: for each of the n rows, we choose dc positions with replacement, and put a 1 in each
of these positions. This procedure yields a random matrix with at most dc ones per row,
since it is possible (although of asymptotically negligible probability for any fixed dc) that
the same position is chosen more than once.
(b) For each fixed degree pair (dv, d'c), the random LDPC matrix H ∈ {0, 1}^{k×m} is chosen uniformly
at random from the space of all matrices with exactly dv ones per column, and exactly
d'c ones per row. This ensemble is a standard (dv, d'c)-regular LDPC ensemble.
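A minimal sampler for these two steps is sketched below. Step (a) follows the with-replacement rule stated above; for step (b), the socket-permutation construction is one standard way to realize a regular ensemble, and this sketch simply cancels any repeated edge via XOR rather than resolving it more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ldgm_generator(n, m, dc):
    """Step (a): each of the n rows receives dc ones drawn with replacement."""
    G = np.zeros((n, m), dtype=int)
    for i in range(n):
        G[i, rng.integers(0, m, size=dc)] = 1   # repeats collapse, so <= dc ones per row
    return G

def sample_regular_ldpc(k, m, dv, dcp):
    """Step (b), sketched: permute the m*dv variable sockets among the k*dcp check sockets."""
    assert k * dcp == m * dv                    # socket counts must match
    sockets = rng.permutation(np.repeat(np.arange(m), dv))
    H = np.zeros((k, m), dtype=int)
    for e, col in enumerate(sockets):
        H[e // dcp, col] ^= 1                   # XOR cancels any repeated edge
    return H

G = sample_ldgm_generator(n=12, m=9, dc=4)
assert G.shape == (12, 9) and all(G[i].sum() <= 4 for i in range(12))

H = sample_regular_ldpc(k=3, m=9, dv=1, dcp=3)
assert H.shape == (3, 9) and all(H[:, j].sum() == 1 for j in range(9))
```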
We note that our reason for choosing the check-regular LDGM ensemble specified in step (a) is not
that it need define a particularly good code, but rather that it is convenient for theoretical purposes.
Interestingly, our analysis shows that the bounded degree dc check-regular LDGM ensemble, even
though it is sub-optimal for both source and channel coding in isolation [28, 29], defines optimal
source and channel codes when combined with a bottom LDPC code.
3.2 Main results
Our first main result is on the achievability of the Shannon rate-distortion bound using codes from
LDGM/LDPC compound construction with finite degrees (dc, dv, d'c). In particular, we make the
following claim:
Theorem 1. Given any pair (R,D) satisfying the Shannon bound, there is a set of finite degrees
(dc, dv, d'c) and a code from the associated LDGM/LDPC ensemble with rate R that is a D-good
source code (see Definition 1).
In other work [28, 27], we showed that standard LDGM codes from the check-regular ensemble
cannot achieve the rate-distortion bound with finite degrees. As will be highlighted by the proof of
Theorem 1 in Section 5, the inclusion of the LDPC lower code in the compound construction plays
a vital role in the achievability of the Shannon rate-distortion curve.
Our second main result is complementary in nature to Theorem 1, concerning the achievability
of the Shannon channel capacity using codes from the LDGM/LDPC compound construction with
finite degrees (dc, dv, d'c). In particular, we have:
Theorem 2. For all rate-noise pairs (R, p) satisfying the Shannon channel coding bound R <
1 − h (p), there is a set of finite degrees (dc, dv, d'c) and a code from the associated LDGM/LDPC
ensemble with rate R that is a p-good channel code (see Definition 1).
To put this result into perspective, recall that the overall rate of this compound construction is
given by R = RGRH . Note that an LDGM code on its own (i.e., without the lower LDPC code)
is a special case of this construction with RH = 1. However, a standard LDGM of this variety is
not a good channel code, due to the large number of low-weight codewords. Essentially, the proof
of Theorem 2 (see Section 6) shows that using a non-trivial LDPC lower code (with RH < 1) can
eliminate these troublesome low-weight codewords.
4 Consequences for coding with side information
We now turn to consideration of the consequences of our two main results for problems of coding
with side information. It is well-known from previous work [47] that achieving the information-
theoretic limits for these problems requires nested constructions, in which a collection of good source
codes are nested inside a good channel code (or vice versa). Accordingly, we begin in Section 4.1
by describing how our compound construction naturally generates such nested ensembles. In Sec-
tions 4.2 and 4.3 respectively, we discuss how the compound construction can be used to achieve
the information-theoretic optimum for binary source coding with side information (a version of the
Wyner-Ziv problem [45]), and binary information embedding (a version of “dirty paper coding”,
or the Gelfand-Pinsker problem [19]).
4.1 Nested code structure
The structure of the compound LDGM/LDPC construction lends itself naturally to nested code
constructions. In particular, we first partition the set of k lower parity checks into two disjoint
subsets K1 and K2, of sizes k1 and k2 respectively, as illustrated in Fig. 2. Let H1 and H2 denote
the corresponding partitions of the full parity check matrix H ∈ {0, 1}k×m. Now let us set all parity
bits in the subset K2 equal to zero, and consider the LDGM/LDPC code C(G,H1) defined by the
generator matrix G and the parity check (sub)matrix H1, as follows
C(G,H1) := {x ∈ {0, 1}^n | x = Gy for some y ∈ {0, 1}^m such that H1 y = 0} . (5)
Note that the rate of C(G,H1) is given by R′ = RG RH1 , which can be suitably adjusted by
modifying the LDGM and LDPC rates respectively. Moreover, by applying Theorems 1 and 2,
there exist finite choices of degree such that C(G,H1) will be optimal for both source and channel
coding.
Considering now the remaining k2 parity bits in the subset K2, suppose that we set them equal
to a fixed binary sequence m ∈ {0, 1}k2 . Now consider the code
C(m) := {x ∈ {0, 1}^n | x = Gy for some y ∈ {0, 1}^m such that H1 y = 0 and H2 y = m} . (6)
Note that for each binary sequence m ∈ {0, 1}^{k2}, the code C(m) is a subcode of C(G,H1); moreover,
the collection of these subcodes forms a disjoint partition as follows:

C(G,H1) = ⋃_{m ∈ {0,1}^{k2}} C(m). (7)
Again, Theorems 1 and 2 guarantee that (with suitable degree choices), each of the subcodes C(m)
is optimal for both source and channel coding. Thus, the LDGM/LDPC construction has a natural
nested property, in which a good source/channel code, namely C(G,H1), is partitioned into a
disjoint collection {C(m), m ∈ {0, 1}^{k2}} of good source/channel codes. We now illustrate how this
nested structure can be exploited for coding with side information.
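The partition (7) can be verified directly on a toy instance. The matrices below are hypothetical; note also that disjointness of the cosets additionally relies on G acting injectively on the information words, which happens to hold in this small example.

```python
import numpy as np
from itertools import product

G = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])   # hypothetical generator
H1 = np.array([[1, 1, 1]])   # k1 = 1 checks, frozen to zero
H2 = np.array([[1, 0, 0]])   # k2 = 1 checks, carrying the coset index m

def info_words():
    """Information sequences satisfying the frozen checks H1 y = 0."""
    for y in product([0, 1], repeat=3):
        y = np.array(y)
        if not np.any(H1.dot(y) % 2):
            yield y

parent = {tuple(G.dot(y) % 2) for y in info_words()}            # C(G, H1)
coset = {m: {tuple(G.dot(y) % 2) for y in info_words()
             if (H2.dot(y) % 2)[0] == m}
         for m in (0, 1)}                                       # subcodes C(m)

assert coset[0] | coset[1] == parent     # the subcodes cover C(G, H1) ...
assert not (coset[0] & coset[1])         # ... and are pairwise disjoint
```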
4.2 Source coding with side information
We begin by showing that the compound construction can be used to perform source coding with
side information (SCSI).
4.2.1 Problem formulation
Suppose that we wish to compress a symmetric Bernoulli source S ∼ Ber(1/2) so as to be able
to reconstruct it with Hamming distortion D. As discussed earlier in Section 2, the minimum
achievable rate is given by R(D) = 1− h (D). In the Wyner-Ziv extension of standard lossy com-
pression [45], there is an additional source of side information about S—say in the form Z = S ⊕W
where W ∼ Ber(δ) is observation noise—that is available only at the decoder. See Fig. 3 for a block
diagram representation of this problem.
Figure 3. Block diagram representation of source coding with side information (SCSI). A source S is
compressed to rate R. The decoder is given the compressed version, and side information Z = S⊕W ,
and wishes to use (Ŝ, Z) to reconstruct the source S up to distortion D.
For this binary version of source coding with side information (SCSI), it is known [2] that the
minimum achievable rate takes the form
RWZ(D, p) = l.c.e. {h (D ∗ p) − h (D) , (p, 0)} , (8)

where l.c.e. denotes the lower convex envelope, and D ∗ p := D(1 − p) + (1 − D)p denotes binary
convolution. Note that in the special case p = 1/2, the side
information is useless, so that the Wyner-Ziv rate reduces to classical rate-distortion. In the
discussion to follow, we focus only on achieving rates of the form h (D ∗ p)−h (D), as any remaining
rates on the Wyner-Ziv curve (8) can be achieved by time-sharing with the point (p, 0).
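The h(D ∗ p) − h(D) branch of (8) is straightforward to evaluate numerically; the sketch below also confirms the degenerate case p = 1/2 mentioned above. Helper names are ours.

```python
import math

def h(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bconv(d, p):
    """Binary convolution D * p = D(1 - p) + (1 - D)p."""
    return d * (1 - p) + (1 - d) * p

def wz_branch(D, p):
    """The h(D * p) - h(D) branch of the Wyner-Ziv rate (8), before taking the l.c.e."""
    return h(bconv(D, p)) - h(D)

# Useless side information (p = 1/2) recovers classical rate-distortion 1 - h(D).
assert abs(wz_branch(0.1, 0.5) - (1 - h(0.1))) < 1e-9
# Better side information (smaller p) can only lower the required rate.
assert wz_branch(0.1, 0.2) < wz_branch(0.1, 0.4)
```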
4.2.2 Coding procedure for SCSI
In order to achieve rates of the form R = h (D ∗ p)− h (D), we use the compound LDGM/LDPC
construction, as illustrated in Fig. 2, according to the following procedure.
Step #1, Source coding: The first step is a source coding operation, in which we transform the
source sequence S to a quantized representation S. In order to do so, we use the code C(G,H1), as
defined in equation (5) and illustrated in Fig. 4(a), composed of the generator matrix G and the
parity check matrix H1. Note that C(G,H1), when viewed as a code with blocklength n, has rate
R1 := (m/n)(1 − k1/m) = (m − k1)/n. Suppose that we choose¹ the middle and lower layer sizes m and k1
respectively such that

(m − k1)/n = 1 − h (D) + ǫ/2, (9)
where ǫ > 0 is arbitrary. For any such choice, Theorem 1 guarantees the existence of finite degrees
(dc, dv, d'c) such that C(G,H1) is a good D-distortion source code. Consequently, for the specified
rate R1, we can use C(G,H1) in order to transform the source to some quantized representation
Ŝ such that the error Ŝ ⊕ S has average Hamming weight bounded by D. Moreover, since Ŝ is a
codeword of C(G,H1), there is some sequence of information bits Ŷ ∈ {0, 1}^m such that Ŝ = GŶ
and H1Ŷ = 0.
Figure 4. (a) Source coding stage for the Wyner-Ziv procedure: the code C(G,H1), specified by the generator
matrix G ∈ {0, 1}^{n×m} and parity check matrix H1 ∈ {0, 1}^{k1×m}, is used to quantize the source vector
S ∈ {0, 1}^n, thereby obtaining a quantized version Ŝ ∈ {0, 1}^n and associated vector of information
bits Ŷ ∈ {0, 1}^m, such that Ŝ = G Ŷ and H1 Ŷ = 0.
Step #2. Channel coding: Given the output (Ŷ , Ŝ) of the source coding step, consider the
sequence H2Ŷ ∈ {0, 1}^{k2} of parity bits associated with the parity check matrix H2. Transmitting
this string of parity bits requires rate Rtrans = k2/n. Overall, the decoder receives both these k2 parity
bits, as well as the side information sequence Z = S ⊕W . Using these two pieces of information,
the goal of the decoder is to recover the quantized sequence Ŝ.
Viewing this problem as one of channel coding, the effective rate of this channel code is
R2 = (m − k1 − k2)/n. Note that the side information can be written in the form
Z = S ⊕W = Ŝ ⊕ E ⊕W,
¹Note that the choices of m and k1 need not be unique.
where E : = S⊕Ŝ is the quantization noise, andW ∼ Ber(p) is the channel noise. If the quantization
noise E were i.i.d. Ber(D), then the overall effective noise E ⊕W would be i.i.d. Ber(D ∗ p). (In
reality, the quantization noise is not exactly i.i.d. Ber(D), but it can be shown [47] that it can be
treated as such for theoretical purposes.) Consequently, if we choose k2 such that
(m − k1 − k2)/n = 1 − h (D ∗ p) − ǫ/2, (10)
for an arbitrary ǫ > 0, then Theorem 2 guarantees that the decoder will (w.h.p.) be able to recover
a codeword corrupted by (D ∗ p)-Bernoulli noise.
Summarizing our findings, we state the following:
Corollary 1. There exist finite choices of degrees (dc, dv, d'c) such that the compound LDGM/LDPC
construction achieves the Wyner-Ziv bound.
Proof. With the source coding rate R1 chosen according to equation (9), the encoder will return a
quantization Ŝ with average Hamming distance to the source S of at most D. With the channel
coding rate R2 chosen according to equation (10), the decoder can with high probability recover
the quantization Ŝ. The overall transmission rate of the scheme is
Rtrans = k2/n = (m − k1)/n − (m − k1 − k2)/n = R1 − R2
= (1 − h (D) + ǫ/2) − (1 − h (D ∗ p) − ǫ/2)
= h (D ∗ p) − h (D) + ǫ.
Since ǫ > 0 was arbitrary, we have established that the scheme can achieve rates arbitrarily close
to the Wyner-Ziv bound.
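The rate bookkeeping in this proof can be sanity-checked numerically for a particular choice of (D, p, ε); the snippet below simply replays equations (9) and (10). Helper names are ours.

```python
import math

def h(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

D, p, eps = 0.1, 0.05, 0.01
Dp = D * (1 - p) + (1 - D) * p          # binary convolution D * p

R1 = 1 - h(D) + eps / 2                 # source coding rate, equation (9)
R2 = 1 - h(Dp) - eps / 2                # channel coding rate, equation (10)
R_trans = R1 - R2                       # rate actually transmitted, k2/n

assert abs(R_trans - (h(Dp) - h(D) + eps)) < 1e-12
assert R2 < R1                          # so k2 = n(R1 - R2) parity bits are sent
```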
4.3 Channel coding with side information
We now show how the compound construction can be used to perform channel coding with side
information (CCSI).
4.3.1 Problem formulation
In the binary information embedding problem, given a specified input vector V ∈ {0, 1}n, the
channel output Z ∈ {0, 1}n is assumed to take the form
Z = V ⊕ S ⊕W, (11)
where S is a host signal (not under control of the user), and W ∼ Ber(p) corresponds to channel
noise. The encoder is free to choose the input vector V ∈ {0, 1}n, subject to an average channel
constraint
(1/n) E [‖V‖1] ≤ w, (12)

for some parameter w ∈ (0, 1/2]. The goal is to use a channel coding scheme that satisfies this
constraint (12) so as to maximize the number of possible messages m that can be reliably commu-
nicated. Moreover, we write V ≡ Vm to indicate that each channel input is implicitly identified
with some underlying message m. Given the channel output Z = Vm ⊕ S ⊕ W, the goal of the
decoder is to recover the embedded message m. The capacity for this binary information embedding
problem [2] is given by
RIE(w, p) = u.c.e. {h (w) − h (p) , (0, 0)} , (13)
where u. c. e. denotes the upper convex envelope. As before, we focus on achieving rates of the form
h (w) − h (p), since any remaining points on the curve (13) can be achieved via time-sharing with
the (0, 0) point.
Figure 5. Block diagram representation of channel coding with side information (CCSI). The
encoder embeds a message m into the channel input Vm, which is required to satisfy the average
channel constraint (1/n) E[‖Vm‖1] ≤ w. The channel produces the output Z = Vm ⊕ S ⊕ W, where S is a
host signal known only to the encoder, and W ∼ Ber(p) is channel noise. Given the channel output
Z, the decoder outputs an estimate m̂ of the embedded message.
4.3.2 Coding procedure for CCSI
In order to achieve rates of the form R = h (w)− h (p), we again use the compound LDGM/LDPC
construction in Fig. 2, now according to the following two step procedure.
Step #1: Source coding: The goal of the first stage is to embed the message into the transmitted
signal V via a quantization process. In order to do so, we use the code illustrated in Fig. 6(a),
specified by the generator matrix G and parity check matrices H1 and H2. The set K1 of k1 parity
bits associated with the check matrix H1 remain fixed to zero throughout the scheme. On the other
hand, we use the remaining k2 lower parity bits associated with H2 to specify a particular message
m ∈ {0, 1}k2 that the decoder would like to recover. In algebraic terms, the resulting code C(m)
has the form
C(m) := {x ∈ {0, 1}^n | x = Gy for some y ∈ {0, 1}^m such that H1 y = 0 and H2 y = m} . (14)
Since the encoder has access to host signal S, it may use this code C(m) in order to quantize the
host signal. After doing so, the encoder has a quantized signal Ŝm ∈ {0, 1}^n and an associated
sequence Ŷm ∈ {0, 1}^m of information bits such that Ŝm = GŶm. Note that the quantized signal
Ŝm (equivalently, the information sequence Ŷm) specifies the message m in an implicit manner, since
m = H2 Ŷm by construction of the code C(m).
Now suppose that we choose n,m and k such that
(m − k1 − k2)/n = 1 − h (w) + ǫ/2 (15)
for some ǫ > 0, then Theorem 1 guarantees that there exist finite degrees (dc, dv, d'c) such that
the resulting code is a good w-distortion source code. Otherwise stated, we are guaranteed that
w.h.p., the quantization error E := S ⊕ Ŝ has average Hamming weight upper bounded by wn.
Consequently, we may set the channel input V equal to the quantization noise (V = E), thereby
ensuring that the average channel constraint (12) is satisfied.
Step #2, Channel coding: In the second phase, the decoder is given a noisy channel observation
of the form
Z = E ⊕ S ⊕W = Ŝ ⊕W, (16)
and its task is to recover Ŝ. In terms of the code architecture, the k1 lower parity bits remain set to
zero; the remaining k2 parity bits, which represent the message m, are unknown to the decoder. The
resulting code, as illustrated in Fig. 6(b), can be viewed as a channel code with effective
Figure 6. (a) Source coding step for binary information embedding. The message m ∈ {0, 1}k2
specifies a particular coset; using this particular source code, the host signal S is compressed to Ŝ,
and the quantization error E = S ⊕ Ŝ is transmitted over the constrained channel. (b) Channel
coding step for binary information embedding. The decoder receives Z = Ŝ ⊕W where W ∼ Ber(p)
is channel noise, and seeks to recover Ŝ, and hence the embedded message m specifying the coset.
rate R2 = (m − k1)/n. Now suppose that we choose k1 such that the effective code used by the decoder has

R2 = (m − k1)/n = 1 − h (p) − ǫ/2, (17)
for some ǫ > 0. Since the channel noise W is Ber(p) and the rate R2 chosen according to (17),
Theorem 2 guarantees that the decoder will w.h.p. be able to recover the pair Ŝ and Ŷ . Moreover,
by design of the quantization procedure, we have the equivalence m = H2 Ŷ so that a simple
syndrome-forming procedure allows the decoder to recover the hidden message.
Summarizing our findings, we state the following:
Corollary 2. There exist finite choices of degrees (dc, dv, d'c) such that the compound LDGM/LDPC
construction achieves the binary information embedding (Gelfand-Pinsker) bound.
Proof. With the source coding rate R1 chosen according to equation (15), the encoder will return
a quantization Ŝ of the host signal S with average Hamming distortion upper bounded by w.
Consequently, transmitting the quantization error E = S ⊕ Ŝ will satisfy the average channel
constraint (12). With the channel coding rate R2 chosen according to equation (17), the decoder
can with high probability recover the quantized signal Ŝ, and hence the message m. Overall, the
scheme allows a total of 2k2 distinct messages to be embedded, so that the effective information
embedding rate is
Rtrans = k2/n = (m − k1)/n − (m − k1 − k2)/n = R2 − R1
= (1 − h (p) − ǫ/2) − (1 − h (w) + ǫ/2)
= h (w) − h (p) − ǫ,

where ǫ > 0 is arbitrary. Thus, we have shown that the proposed scheme achieves the binary information
embedding bound (13).
5 Proof of source coding optimality
This section is devoted to the proof of the previously stated Theorem 1 on the source coding
optimality of the compound construction.
5.1 Set-up
In establishing a rate-distortion result such as Theorem 1, perhaps the most natural focus is the
random variable
dn(S,C) := min_{x ∈ C} (1/n) ‖x − S‖1, (18)
corresponding to the (renormalized) minimum Hamming distance from a random source sequence
S ∈ {0, 1}n to the nearest codeword in the code C. Rather than analyzing this random variable
directly, our proof of Theorem 1 proceeds indirectly, by studying an alternative random variable.
Given a binary linear code with N codewords, let i = 0, 1, 2, . . . , N − 1 be indices for the
different codewords. We say that a codeword Xi is distortion D-good for a source sequence S if the
Hamming distance ‖Xi ⊕ S‖1 is at most Dn. We then set the indicator random variable Zi(D) = 1
when codeword Xi is distortion D-good. With these definitions, our proof is based on the following
random variable:

Tn(S,C;D) := ∑_{i=0}^{N−1} Zi(D). (19)
Note that Tn(S,C;D) simply counts the number of codewords that are distortion D-good for a
source sequence S. Moreover, for all distortions D, the random variable Tn(S,C;D) is linked to
dn(S,C) via the equivalence
P[Tn(S,C;D) > 0] = P[dn(S,C) ≤ D]. (20)
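The equivalence (20) is a finite statement that can be checked exhaustively on a toy code; the two-word codebook below is hypothetical.

```python
import numpy as np

def d_n(S, codebook):
    """Renormalized minimum Hamming distance, as in equation (18)."""
    n = len(S)
    return min(int(np.sum(np.array(x) != S)) for x in codebook) / n

def T_n(S, codebook, D):
    """Number of distortion D-good codewords, as in equation (19)."""
    n = len(S)
    return sum(int(np.sum(np.array(x) != S)) <= D * n for x in codebook)

codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]
S = np.array([1, 1, 0, 0])

for D in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert (T_n(S, codebook, D) > 0) == (d_n(S, codebook) <= D)   # equation (20)
```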
Throughout our analysis of P[Tn(S,C;D) > 0], we carefully track only its exponential behavior.
More precisely, the analysis to follow will establish an inverse polynomial lower bound of the
form P[Tn(S,C;D) > 0] ≥ 1/f(n) where f(·) collects various polynomial factors. The following
concentration result establishes that the polynomial factors in these bounds can be ignored:
Lemma 1 (Sharp concentration). Suppose that for some target distortion D, we have
P[Tn(S,C;D) > 0] ≥ 1/f(n), (21)
where f(·) is a polynomial function satisfying log f(n) = o(n). Then for all ǫ > 0, there exists a
fixed code C̄ of sufficiently large blocklength n such that E[dn(S; C̄)] ≤ D + ǫ.
Proof. Let us denote the random code C as (C1,C2), where C1 denotes the random LDGM top
code, and C2 denotes the random LDPC bottom code. Throughout the analysis, we condition on
some fixed LDPC bottom code, say C2 = C̄2. We begin by showing that the random variable
(dn(S,C) | C̄2) is sharply concentrated. In order to do so, we construct a vertex-exposure martin-
gale [33] of the following form. Consider a fixed sequential labelling {1, . . . , n} of the top LDGM
checks, with check i associated with source bit Si. We reveal the check and associated source
bit in a sequential manner for each i = 1, . . . , n, and so define a sequence of random variables
{U0, U1, . . . , Un} via U0 := E[dn(S,C) | C̄2], and

Ui := E[ dn(S,C) | S1, . . . , Si, C̄2 ], i = 1, . . . , n. (22)
By construction, we have Un = (dn(S,C) | C̄2). Moreover, this sequence satisfies the following
bounded difference property: adding any source bit Si and the associated check in moving from
Ui−1 to Ui can lead to a (renormalized) change in the minimum distortion of at most ci = 1/n.
Consequently, by applying Azuma's inequality [1], we have, for any ǫ > 0,

P[ |(dn(S,C) | C̄2) − E[dn(S,C) | C̄2]| ≥ ǫ ] ≤ 2 exp(−nǫ²/2). (23)
Next we observe that our assumption (21) of inverse polynomial decay implies that, for at least
one bottom code C̄2,
P[dn(S,C) ≤ D | C̄2] = P[Tn(S,C;D) > 0 | C̄2] ≥ 1/g(n), (24)
for some subexponential function g. Otherwise, there would exist some α > 0 such that
P[Tn(S,C;D) > 0 | C̄2] ≤ exp(−nα)
for all choices of bottom code C̄2, and taking averages would violate our assumption (21).
Finally, we claim that the concentration result (23) and inverse polynomial bound (24) yield
the result. Indeed, if for some ǫ > 0, we had D < E[dn(S,C) | C̄2] − ǫ, then the concentration
bound (23) would imply that the probability
P[dn(S,C) ≤ D | C̄2] ≤ P[ dn(S,C) ≤ E[dn(S,C) | C̄2] − ǫ | C̄2 ]
 ≤ P[ |(dn(S,C) | C̄2) − E[dn(S,C) | C̄2]| ≥ ǫ ]

decays exponentially, which would contradict the inverse polynomial bound (24) for sufficiently
large n. Thus, we have shown that assumption (21) implies that for all ǫ > 0, there exists a
sufficiently large n and fixed bottom code C̄2 such that E[dn(S,C) | C̄2] ≤ D + ǫ. If the average
over LDGM codes C1 satisfies this bound, then at least one choice of LDGM top code must also
satisfy it, whence we have established that there exists a fixed code C̄ such that E[dn(S; C̄)] ≤ D+ǫ,
as claimed.
5.2 Moment analysis
In order to analyze the probability P[Tn(S,C;D) > 0], we make use of the moment bounds given
in the following elementary lemma:
Lemma 2 (Moment methods). Given any random variable N taking non-negative integer values,
there holds

(E[N])² / E[N²] ≤ P[N > 0] ≤ E[N], (25)

where the lower bound is referred to as (a) and the upper bound as (b).
Proof. The upper bound (b) is an immediate consequence of Markov’s inequality, whereas the lower
bound (a) follows by applying the Cauchy-Schwarz inequality [20] as follows
(E[N])² = ( E[ N I[N > 0] ] )² ≤ E[N²] E[ I²[N > 0] ] = E[N²] P[N > 0].
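The sandwich of equation (25) can be checked directly on a toy random variable; the sketch below uses a hypothetical example, N ~ Binomial(3, 1/2), whose distribution is known exactly (it is not the Tn of the text).

```python
from math import comb

# Exact distribution of N ~ Binomial(3, 1/2): a toy nonnegative integer RV.
pmf = {k: comb(3, k) * 0.5**3 for k in range(4)}

mean   = sum(k * p for k, p in pmf.items())        # E[N]
second = sum(k * k * p for k, p in pmf.items())    # E[N^2]
p_pos  = sum(p for k, p in pmf.items() if k > 0)   # P[N > 0]

lower = mean**2 / second      # Cauchy-Schwarz (second moment) lower bound
assert lower <= p_pos <= mean # the sandwich of equation (25)
print(lower, p_pos, mean)     # prints: 0.75 0.875 1.5
```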
The remainder of the proof consists in applying these moment bounds to the random variable
Tn(S,C;D), in order to bound the probability P[Tn(S,C;D) > 0]. We begin by computing the first
moment:
Lemma 3 (First moment). For any code with rate R, the expected number of D-good codewords
scales exponentially as

(1/n) log E[Tn] = R − (1 − h(D)) ± o(1). (26)
Proof. First, by linearity of expectation, E[Tn] = Σ_{i=0}^{2^{nR}−1} P[Zi(D) = 1] = 2^{nR} P[Z0(D) = 1], where
we have used symmetry of the code construction to assert that P[Zi(D) = 1] = P[Z0(D) = 1] for
all indices i. Now the event {Z0(D) = 1} is equivalent to an i.i.d. Bernoulli(1/2) sequence of length
n having Hamming weight less than or equal to Dn. By standard large deviations theory (either
Sanov's theorem [11], or direct asymptotics of binomial coefficients), we have

(1/n) log P[Z0(D) = 1] = −(1 − h(D)) ± o(1),
which establishes the claim.
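For moderate n, the asymptotic exponent in (26) can be compared against an exact evaluation of E[Tn] = 2^{nR} P[Bin(n, 1/2) ≤ Dn]; the sketch below, with illustrative parameters, shows the two agree to within the o(1) correction.

```python
from math import comb, log2

def h(x: float) -> float:
    return -x * log2(x) - (1 - x) * log2(1 - x)

def log2_first_moment(n: int, R: float, D: float) -> float:
    """(1/n) log2 E[T_n], with E[T_n] = 2^{nR} P[Bin(n, 1/2) <= Dn]."""
    tail = sum(comb(n, k) for k in range(int(D * n) + 1)) / 2**n
    return R + log2(tail) / n

n, R, D = 2000, 0.5, 0.11
approx = R - (1 - h(D))              # asymptotic exponent of equation (26)
exact = log2_first_moment(n, R, D)   # exact finite-n computation
print(abs(exact - approx) < 0.01)
```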
Unfortunately, however, the first moment E[Tn] need not be representative of typical behavior
of the random variable Tn, and hence overall distortion performance of the code. As a simple
illustration, consider an imaginary code consisting of 2^{nR} copies of the all-zeroes codeword. Even
for this “code”, as long as R > 1 − h(D), the expected number of distortion-D optimal codewords
grows exponentially. Indeed, although Tn = 0 for almost all source sequences, for a small subset of
source sequences (of probability mass ≈ 2^{−n[1−h(D)]}), the random variable Tn takes on the enormous
value 2^{nR}, so that the first moment grows exponentially. However, the average distortion incurred
by using this code will be ≈ 0.5 for any rate, so that the first moment is entirely misleading. In order
to assess the representativeness of the first moment, one needs to ensure that it is of essentially the
same order as the variance, hence the comparison involved in the second moment bound (25)(a).
5.3 Second moment analysis
Our analysis of the second moment begins with the following alternative representation:
Lemma 4.

E[Tn²(D)] = E[Tn(D)] · { 1 + Σ_{j≠0} P[Zj(D) = 1 | Z0(D) = 1] }. (27)
Based on this lemma, proved in Appendix C, we see that the key quantity to control is the condi-
tional probability P[Zj(D) = 1 | Z0(D) = 1]. It is this overlap probability that differentiates the
low-density codes of interest here from the unstructured codebooks used in classical random coding
arguments.² For a low-density graphical code, the dependence between the events {Zj(D) = 1}
and {Z0(D) = 1} requires some analysis.
Before proceeding with this analysis, we require some definitions. Recall our earlier definition (3)
of the average weight enumerator associated with a (dv, d′c) LDPC code, denoted by Am(w).
Moreover, let us define for each w ∈ [0, 1] the probability
Q(w;D) := P [‖X(w) ⊕ S‖1 ≤ Dn | ‖S‖1 ≤ Dn] , (28)
where the quantity X(w) ∈ {0, 1}n denotes a randomly chosen codeword, conditioned on its under-
lying length-m information sequence having Hamming weight ⌈wm⌉. As shown in Lemma 9 (see
Appendix A), the random codeword X(w) has i.i.d. Bernoulli elements with parameter

δ∗(w; dc) = [1 − (1 − 2w)^{dc}] / 2. (29)
With these definitions, we now break the sum on the RHS of equation (27) intom terms, indexed
by t = 1, 2, . . . ,m, where term t represents the contribution of a given non-zero information sequence
y ∈ {0, 1}m with (Hamming) weight t. Doing so yields
Σ_{j≠0} P[Zj(D) = 1 | Z0(D) = 1] = Σ_{t=1}^{m} Am(t/m) Q(t/m;D)
 ≤ m max_{1≤t≤m} { Am(t/m) Q(t/m;D) }
 ≤ m max_{w∈[0,1]} { Am(w) Q(w;D) }.
Consequently, we need to control both the LDPC weight enumerator Am(w) and the probability
Q(w;D) over the range of possible fractional weights w ∈ [0, 1].
5.4 Bounding the overlap probability
The following lemma, proved in Appendix D, provides a large deviations bound on the probability
Q(w;D).
Lemma 5. For each w ∈ [0, 1], we have

(1/n) log Q(w;D) ≤ F(δ∗(w; dc);D) + o(1), (30)
² In the latter case, codewords are chosen independently from some ensemble, so that the overlap probability is
simply equal to P[Zj(D) = 1]. Thus, for the simple case of unstructured random coding, the second moment bound
actually provides the converse to Shannon's rate-distortion theorem for the symmetric Bernoulli source.
where for each t ∈ (0, 1/2] and D ∈ (0, 1/2], the error exponent is given by

F(t;D) := D log[ (1 − t)e^{λ∗} + t ] + (1 − D) log[ (1 − t) + te^{λ∗} ] − λ∗D. (31)

Here λ∗ := log[ (−b + √(b² − 4ac)) / (2a) ], where a := t(1 − t)(1 − D), b := (1 − 2D)t², and c := −t(1 − t)D.
In general, for any D ∈ (0, 1/2], the function F(·;D) has the following properties. At t = 0,
it achieves its maximum F(0;D) = 0, and it is strictly decreasing on the interval (0, 1/2], approaching
its minimum value −[1 − h(D)] as t → 1/2. Figure 7 illustrates the form of the function
F(δ∗(w; dc);D) for two different values of the distortion D, and for degrees dc ∈ {3, 4, 5}.

Figure 7. Plot of the upper bound (30) on the overlap probability (1/n) log Q(w;D) for different choices
of the degree dc, and distortion probabilities. (a) Distortion D = 0.1100. (b) Distortion D = 0.3160.

Note that increasing dc causes F(δ∗(w; dc);D) to approach its minimum −[1 − h(D)] more rapidly.
We are now equipped to establish the form of the effective rate-distortion function for any
compound LDGM/LDPC ensemble. Substituting the alternative form of E[Tn²] from equation (27)
into the second moment lower bound (25) yields

(1/n) log P[Tn(D) > 0] ≥ (1/n) log E[Tn(D)] − (1/n) log( 1 + Σ_{j≠0} P[Zj(D) = 1 | Z0(D) = 1] )
 ≥ R − (1 − h(D)) − max_{w∈[0,1]} { (1/n) log Am(w) + (1/n) log Q(w;D) } − o(1)
 ≥ R − (1 − h(D)) − max_{w∈[0,1]} { (R/RH) (1/m) log Am(w) + F(δ∗(w; dc),D) } − o(1), (32)
Figure 8. Plot of the function defining the lower bound (33) on the minimum achievable rate for a
specified distortion. Shown are curves with LDGM top degree dc = 4, comparing the uncoded case
(no bottom code, dotted curve) to a bottom (4, 6) LDPC code (solid line). (a) Distortion D = 0.1100.
(b) Distortion D = 0.3160.
where the last step follows by applying the upper bound on Q from Lemma 5, and the relation
m = RG n = (R/RH) n. Now letting B(w; dv, d′c) be any upper bound on the normalized log average
weight enumerator (1/m) log Am(w), we can then conclude that (1/n) log P[Tn(D) > 0] is asymptotically
non-negative for all rate-distortion pairs (R,D) satisfying

R ≥ max_{w∈[0,1]} [ 1 − h(D) + F(δ∗(w; dc),D) ] / [ 1 − B(w; dv, d′c)/RH ]. (33)
Figure 8 illustrates the behavior of the RHS of equation (33), whose maximum defines the effective
rate-distortion function, for the case of LDGM top degree dc = 4. Panels (a) and (b) show the
cases of distortion D = 0.1100 and D = 0.3160 respectively, for which the respective Shannon rates
are R = 0.50 and R = 0.10. Each panel shows two plots, one corresponding to the case of uncoded
information bits (a naive LDGM code), and the other to using a rate RH = 2/3 LDPC code with
degrees (dv, d′c) = (4, 6). In all cases, the minimum achievable rate for the given distortion is
obtained by taking the maximum for w ∈ [0, 0.5] of the plotted function. For any choice of D, the
plotted curve is equal to the Shannon bound RSha = 1 − h(D) at w = 0, and decreases to 0 for
w = 1/2.
Note the dramatic difference between the uncoded and compound constructions (LDPC-coded).
In particular, for both settings of the distortion (D = 0.1100 and D = 0.3160), the uncoded curves
rise from their initial values to maxima above the Shannon limit (dotted horizontal line). Con-
sequently, the minimum required rate using these constructions lies strictly above the Shannon
optimum. The compound construction curves, in contrast, decrease monotonically from their
maximum value, achieved at w = 0 and corresponding to the Shannon optimum. In the following
section, we provide an analytical proof of the fact that for any distortion D ∈ [0, 1/2), it is
always possible to choose finite degrees such that the compound construction achieves the Shannon
optimum.
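The gap between the naive LDGM curve and the Shannon limit can be reproduced numerically from (33) by setting B(w) = h(w) and RH = 1, corresponding to uncoded information bits where every information sequence is allowed. The sketch below, with illustrative values D = 0.11 and dc = 4, confirms that the naive construction then requires a rate strictly above 1 − h(D).

```python
from math import log2, sqrt

def h(x: float) -> float:
    return -x * log2(x) - (1 - x) * log2(1 - x) if 0 < x < 1 else 0.0

def F(t: float, D: float) -> float:
    """Error exponent of equation (31), using the closed-form λ*."""
    a, b, c = t*(1 - t)*(1 - D), (1 - 2*D)*t*t, -t*(1 - t)*D
    rho = (-b + sqrt(b*b - 4*a*c)) / (2*a)          # exp(λ*)
    return (D*log2((1 - t)*rho + t)
            + (1 - D)*log2((1 - t) + t*rho) - D*log2(rho))

def delta_star(w: float, dc: int) -> float:
    return (1 - (1 - 2*w)**dc) / 2

D, dc = 0.11, 4
shannon = 1 - h(D)
# Uncoded information bits: B(w) = h(w) and RH = 1 in the bound (33).
grid = [i / 1000 for i in range(1, 500)]
naive_rate = max((shannon + F(delta_star(w, dc), D)) / (1 - h(w)) for w in grid)
print(naive_rate > shannon)   # the naive construction needs extra rate
```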
5.5 Finite degrees are sufficient
In order to complete the proof of Theorem 1, we need to show that for all rate-distortion pairs
(R,D) satisfying the Shannon bound, there exist LDPC codes with finite degrees (dv, d′c) and a
suitably large but finite top degree dc such that the compound LDGM/LDPC construction achieves
the specified (R,D).
Our proof proceeds as follows. Recall that in moving from equation (32) to equation (33), we
assumed a bound on the average weight enumerator Am of the form

(1/m) log Am(w) ≤ B(w; dv, d′c) + o(1). (34)

For compactness in notation, we frequently write B(w), where the dependence on the degree pair
(dv, d′c) is understood implicitly. In the following paragraph, we specify a set of conditions on this
bounding function B, and we then show that under these conditions, there exists a finite degree dc
such that the compound construction achieves the specified rate-distortion point. In Appendix F, we
then prove that the weight enumerator of standard regular LDPC codes satisfies the assumptions
required by our analysis.
Assumptions on the weight enumerator bound. We require that our bound B on the weight
enumerator satisfy the following conditions:
A1: the function B is symmetric around 1/2, meaning that B(w) = B(1 − w) for all w ∈ [0, 1].

A2: the function B is twice differentiable on (0, 1) with B′(1/2) = 0 and B′′(1/2) < 0.

A3: the function B achieves its unique optimum at w = 1/2, where B(1/2) = RH.

A4: there exists some ǫ1 > 0 such that B(w) < 0 for all w ∈ (0, ǫ1), meaning that the ensemble
has linear minimum distance.
In order to establish our claim, it suffices to show that for all (R,D) such that R > 1 − h(D),
there exists a finite choice of dc such that

max_{w∈[0,1]} K(w; dc) ≤ R − [1 − h(D)] := ∆, where K(w; dc) := (R/RH) B(w) + F(δ∗(w; dc),D). (35)

Restricting to even dc ensures that the function F is symmetric about w = 1/2; combined with
assumption A1, this ensures that K is symmetric around 1/2, so that we may restrict the maximization
to [0, 1/2] without loss of generality. Our proof consists of the following steps:
(a) We first prove that there exists an ǫ1 > 0, independent of the choice of dc, such that
K(w; dc) ≤ ∆ for all w ∈ [0, ǫ1].

(b) We then prove that there exists ǫ2 > 0, again independent of the choice of dc, such that
K(w; dc) ≤ ∆ for all w ∈ [1/2 − ǫ2, 1/2].

(c) Finally, we specify a sufficiently large but finite degree d∗c that ensures the condition
K(w; d∗c) ≤ ∆ for all w ∈ [ǫ1, 1/2 − ǫ2].
5.5.1 Step A
By assumption A4 (linear minimum distance), there exists some ǫ1 > 0 such that B(w) ≤ 0 for all
w ∈ [0, ǫ1]. Since F(δ∗(w; dc);D) ≤ 0 for all w, we have K(w; dc) ≤ 0 < ∆ in this region. Note
that ǫ1 is independent of dc, since it is specified entirely by the properties of the bottom code.
5.5.2 Step B
For this step of the proof, we require the following lemma on the properties of the function F :
Lemma 6. For all choices of even degrees dc ≥ 4, the function G(w; dc) = F(δ∗(w; dc),D) is
differentiable in a neighborhood of w = 1/2, with

G(1/2; dc) = −[1 − h(D)], G′(1/2; dc) = 0, and G′′(1/2; dc) = 0. (36)
See Appendix E for a proof of this claim. Next observe that we have the uniform bound
G(w; dc) ≤ G(w; 4) for all dc ≥ 4 and w ∈ [0, 1/2]. This follows from the fact that F(u;D) is
decreasing in u, and that δ∗(w; 4) ≤ δ∗(w; dc) for all dc ≥ 4 and w ∈ [0, 1/2]. Since B is independent
of dc, this implies that K(w; dc) ≤ K(w; 4) for all w ∈ [0, 1/2]. Hence it suffices to set dc = 4, and
show that K(w; 4) ≤ ∆ for all w ∈ [1/2 − ǫ2, 1/2]. Using Lemma 6, assumption A2 concerning the
derivatives of B, and assumption A3 (that B(1/2) = RH), we have

K(1/2; 4) = R − [1 − h(D)] = ∆,
K′(1/2; 4) = (R/RH) B′(1/2) + G′(1/2; 4) = 0, and
K′′(1/2; 4) = (R/RH) B′′(1/2) + G′′(1/2; 4) = (R/RH) B′′(1/2) < 0.
By the continuity of K′′, the second derivative remains negative in a region around 1/2, say for all
w ∈ [1/2 − ǫ2, 1/2] for some ǫ2 > 0. Then, for all w ∈ [1/2 − ǫ2, 1/2], we have for some w̃ ∈ [w, 1/2] the second
order expansion

K(w; 4) = K(1/2; 4) + K′(1/2; 4)(w − 1/2) + (1/2) K′′(w̃; 4)(w − 1/2)²
 = ∆ + (1/2) K′′(w̃; 4)(w − 1/2)² ≤ ∆.

Thus, we have established that there exists an ǫ2 > 0, independent of the choice of dc, such that
for all even dc ≥ 4, we have

K(w; dc) ≤ K(w; 4) ≤ ∆ for all w ∈ [1/2 − ǫ2, 1/2]. (37)
5.5.3 Step C
Finally, we need to show that K(w; dc) ≤ ∆ for all w ∈ [ǫ1, 1/2 − ǫ2]. From assumption A3 and the
continuity of B, there exists some ρ(ǫ2) > 0 such that

B(w) ≤ RH [1 − ρ(ǫ2)] for all w ≤ 1/2 − ǫ2. (38)

From Lemma 6, lim_{u→1/2} F(u;D) = F(1/2;D) = −[1 − h(D)]. Moreover, as dc → +∞, we have
δ∗(ǫ1; dc) → 1/2. Therefore, for any ǫ3 > 0, there exists a finite degree d∗c such that

F(δ∗(ǫ1; d∗c);D) ≤ −[1 − h(D)] + ǫ3.

Since F(δ∗(w; d∗c);D) is non-increasing in w, we have F(δ∗(w; d∗c);D) ≤ −[1 − h(D)] + ǫ3 for all
w ∈ [ǫ1, 1/2 − ǫ2]. Putting together this bound with the earlier bound (38) yields that for all
w ∈ [ǫ1, 1/2 − ǫ2]:

K(w; d∗c) = (R/RH) B(w) + F(δ∗(w; d∗c),D)
 ≤ R [1 − ρ(ǫ2)] − [1 − h(D)] + ǫ3
 = {R − [1 − h(D)]} + (ǫ3 − R ρ(ǫ2))
 = ∆ + (ǫ3 − R ρ(ǫ2)).

Since we are free to choose ǫ3 > 0, we may set ǫ3 = R ρ(ǫ2)/2 to yield the claim.
6 Proof of channel coding optimality
In this section, we turn to the proof of the previously stated Theorem 2, concerning the channel
coding optimality of the compound construction.
If the codeword x ∈ {0, 1}n is transmitted, then the receiver observes V = x ⊕ W , where W
is a Ber(p) random vector. Our goal is to bound the probability that maximum likelihood (ML)
decoding fails where the probability is taken over the randomness in both the channel noise and
the code construction. To simplify the analysis, we focus on the following sub-optimal (non-ML)
decoding procedure. Let ǫn be any non-negative sequence such that ǫn/n → 0 but ǫn²/n → +∞;
for instance, ǫn = n^{2/3} suffices.

Definition 2 (Decoding rule). With the threshold d(n) := pn + ǫn, decode to codeword xi ⇐⇒
‖xi ⊕ V‖1 ≤ d(n), and no other codeword is within d(n) of V.
The extra term ǫn in the threshold d(n) is chosen for theoretical convenience. Using the following
two lemmas, we establish that this procedure has arbitrarily small probability of error, whence ML
decoding (which is at least as good) also has arbitrarily small error probability.
Lemma 7. Using the suboptimal procedure specified in Definition 2, the probability of decoding
error vanishes asymptotically provided that

RG B(w) − D(p||δ∗(w; dc) ∗ p) < 0 for all w ∈ (0, 1/2], (39)
where B is any function bounding the average weight enumerator as in equation (34).
Proof. Let N = 2^{nR} = 2^{mRH} denote the total number of codewords in the joint LDGM/LDPC
code. Due to the linearity of the code construction and symmetry of the decoding procedure, we
may assume without loss of generality that the all zeros codeword 0n was transmitted (i.e., x = 0n).
In this case, the channel output is simply V = W and so our decoding procedure will fail if and
only if one the following two conditions holds:
(i) either ‖W‖1 > d(n), or
(ii) there exists a non-zero sequence of information bits y ∈ {0, 1}m satisfying the parity check
equation Hy = 0 such that the codeword Gy satisfies ‖Gy ⊕ W‖1 ≤ d(n).
Consequently, using the union bound, we can upper bound the probability of error as follows:
perr ≤ P[‖W‖1 > d(n)] + P[ ∪_{y≠0} { ‖Gy ⊕ W‖1 ≤ d(n) } ]. (40)
Since E[‖W‖1] = pn, we may apply Hoeffding's inequality [13] to conclude that

P[‖W‖1 > d(n)] ≤ 2 exp(−2ǫn²/n) → 0 (41)
by our choice of ǫn. Now focusing on the second term, let us rewrite it as a sum over the possible
Hamming weights ℓ = 1, 2, . . . , m of information sequences (i.e., ‖y‖1 = ℓ) as follows:

P[ ∪_{y≠0} { ‖Gy ⊕ W‖1 ≤ d(n) } ] ≤ Σ_{ℓ=1}^{m} Am(ℓ/m) P[ ‖Gy ⊕ W‖1 ≤ d(n) | ‖y‖1 = ℓ ],

where we have used the fact that the (average) number of information sequences with fractional
weight ℓ/m is given by the LDPC weight enumerator Am(ℓ/m). Analyzing the probability terms
in this sum, we note that Lemma 9 (see Appendix A) guarantees that Gy has i.i.d. Ber(δ∗(ℓ/m; dc))
elements, where δ∗(·; dc) was defined in equation (29). Consequently, the vector Gy ⊕ W has i.i.d.
Ber(δ∗(ℓ/m; dc) ∗ p) elements. Applying Sanov's theorem [11] for the special case of binomial variables
yields that for any information bit sequence y with ℓ ones, we have

P[ ‖Gy ⊕ W‖1 ≤ d(n) | ‖y‖1 = ℓ ] ≤ f(n) 2^{−nD(p||δ∗(ℓ/m; dc) ∗ p)}, (42)
for some polynomial term f(n). We can then upper bound the second term in the error bound (40)
as

P[ ∪_{y≠0} { ‖Gy ⊕ W‖1 ≤ d(n) } ] ≤ m f(n) max_{1≤ℓ≤m} { Am(ℓ/m) 2^{−nD(p||δ∗(ℓ/m; dc) ∗ p)} }
 ≤ m f(n) 2^{ max_{1≤ℓ≤m} [ m B(ℓ/m) + o(m) − n D(p||δ∗(ℓ/m; dc) ∗ p) ] },

where we have used equation (42), as well as the assumed upper bound (34) on Am in terms of B.
Simplifying further, we take logarithms and rescale by m to assess the exponential rate of decay,
thereby obtaining

(1/m) log P[ ∪_{y≠0} { ‖Gy ⊕ W‖1 ≤ d(n) } ] ≤ max_{1≤ℓ≤m} { B(ℓ/m) − (n/m) D(p||δ∗(ℓ/m; dc) ∗ p) } + o(1)
 ≤ max_{w∈[0,1]} { B(w) − (1/RG) D(p||δ∗(w; dc) ∗ p) } + o(1),

thus establishing the claim.
Lemma 8. For any p ∈ (0, 1) and total rate R := RG RH < 1 − h(p), there exist finite choices
of the degree triplet (dc, dv, d′c) such that (39) is satisfied.
Proof. For notational convenience, we define

L(w) := RG B(w) − D(p||δ∗(w; dc) ∗ p). (43)
First of all, it is known [17] that a regular LDPC code with rate RH = 1 − dv/d′c < 1 and dv ≥ 3 has linear
minimum distance. More specifically, there exists a threshold ν∗ = ν∗(dv, d′c) such that B(w) ≤ 0
for all w ∈ [0, ν∗]. Hence, since D(p||δ∗(w; dc) ∗ p) > 0 for all w ∈ (0, 1), we have L(w) < 0 for
all w ∈ (0, ν∗].
Turning now to the interval [ν∗, 1/2], consider the function

L̃(w) := R h(w) − D(p||δ∗(w; dc) ∗ p). (44)
Since B(w) ≤ RH h(w), we have L(w) ≤ L̃(w), so that it suffices to upper bound L̃. Observe that
L̃(1/2) = R − (1 − h(p)) < 0 by assumption. Therefore, it suffices to show that, by appropriate
choice of dc, we can ensure that L̃(w) ≤ L̃(1/2). Noting that L̃ is infinitely differentiable, calculating
derivatives yields L̃′(1/2) = 0 and L̃′′(1/2) < 0. (See Appendix G for details of these derivative
calculations.) Hence, by second order Taylor series expansion around w = 1/2, we obtain

L̃(w) = L̃(1/2) + (1/2) L̃′′(w̄)(w − 1/2)²,

where w̄ ∈ [w, 1/2]. By continuity of L̃′′, we have L̃′′(w) < 0 for all w in some neighborhood of 1/2,
so that the Taylor series expansion implies that L̃(w) ≤ L̃(1/2) for all w in some neighborhood, say
(µ, 1/2].
It remains to bound L̃ on the interval [ν∗, µ]. On this interval, we have L̃(w) ≤ R h(µ) −
D(p||δ∗(ν∗; dc) ∗ p). By examining equation (29) from Lemma 9, we see that by choosing dc
sufficiently large, we can make δ∗(ν∗; dc) arbitrarily close to 1/2, and hence D(p||δ∗(ν∗; dc) ∗ p)
arbitrarily close to 1 − h(p). More precisely, let us choose dc large enough to guarantee that
D(p||δ∗(ν∗; dc) ∗ p) > (1 − ǫ)(1 − h(p)), where ǫ = R(1 − h(µ)) / (1 − h(p)). With this choice, we have, for all
w ∈ [ν∗, µ], the sequence of inequalities

L̃(w) ≤ R h(µ) − D(p||δ∗(ν∗; dc) ∗ p)
 < R h(µ) − [ (1 − h(p)) − R(1 − h(µ)) ]
 = R − (1 − h(p)) < 0,

which completes the proof.
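The anchor point of this argument, L̃(1/2) = R − (1 − h(p)), rests on the identity D(p||1/2) = 1 − h(p); the sketch below verifies it numerically for the illustrative values p = 0.11, R = 0.4, dc = 4 (all hypothetical, chosen only to satisfy R < 1 − h(p)).

```python
from math import log2

def h(x: float) -> float:
    return -x * log2(x) - (1 - x) * log2(1 - x) if 0 < x < 1 else 0.0

def kl(p: float, q: float) -> float:
    """Binary Kullback-Leibler divergence D(p || q), in bits."""
    return p * log2(p / q) + (1 - p) * log2((1 - p) / (1 - q))

def conv(a: float, b: float) -> float:
    """Binary convolution a * b = a(1 - b) + b(1 - a)."""
    return a * (1 - b) + b * (1 - a)

def delta_star(w: float, dc: int) -> float:
    return (1 - (1 - 2 * w) ** dc) / 2

p, R, dc = 0.11, 0.4, 4        # illustrative: R < 1 - h(p) ≈ 0.5

def L_tilde(w: float) -> float:
    return R * h(w) - kl(p, conv(delta_star(w, dc), p))

# At w = 1/2, δ*(1/2; dc) = 1/2, the convolution gives 1/2, and
# D(p || 1/2) = 1 - h(p), so L̃(1/2) = R - (1 - h(p)) < 0.
assert abs(L_tilde(0.5) - (R - (1 - h(p)))) < 1e-12
assert L_tilde(0.5) < 0
print("ok")
```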
7 Discussion
In this paper, we established that it is possible to achieve both the rate-distortion bound for
symmetric Bernoulli sources and the channel capacity for the binary symmetric channel using
codes with bounded graphical complexity. More specifically, we have established that there exist
low-density generator matrix (LDGM) codes and low-density parity check (LDPC) codes with
finite degrees that, when suitably compounded to form a new code, are optimal for both source
and channel coding. To the best of our knowledge, this is the first demonstration of classes of codes
with bounded graphical complexity that are optimal as source and channel codes simultaneously.
We also demonstrated that this compound construction has a naturally nested structure that can
be exploited to achieve the Wyner-Ziv bound [45] for lossy compression of binary data with side
information, as well as the Gelfand-Pinsker bound [19] for channel coding with side information.
Since the analysis of this paper assumed optimal decoding and encoding, the natural next step
is the development and analysis of computationally efficient algorithms for encoding and decoding.
Encouragingly, the bounded graphical complexity of our proposed codes ensures that they will, with
high probability, have high girth and good expansion, thus rendering them well-suited to message-
passing and other efficient decoding procedures. For pure channel coding, previous work [16, 36, 41]
has analyzed the performance of belief propagation when applied to various types of compound
codes, similar to those analyzed in this paper. On the other hand, for pure lossy source coding, our
own past work [44] provides empirical demonstration of the feasibility of modified message-passing
schemes for decoding of standard LDGM codes. It remains to extend both these techniques and their
analysis to more general joint source/channel coding problems, and the compound constructions
analyzed in this paper.
Acknowledgements
The work of MJW was supported by National Science Foundation grant CAREER-CCF-0545862,
a grant from Microsoft Corporation, and an Alfred P. Sloan Foundation Fellowship.
A Basic property of LDGM codes
For a given weight w ∈ (0, 1), suppose that we enforce that the information sequence y ∈ {0, 1}m
has exactly ⌈wm⌉ ones. Conditioned on this event, we can then consider the set of all codewords
X(w) ∈ {0, 1}n, where we randomize over low-density generator matrices G chosen as in step (a)
above. Note for any fixed code, X(w) is simply some codeword, but becomes a random variable
when we imagine choosing the generator matrix G randomly. The following lemma characterizes
this distribution as a function of the weight w and the LDGM top degree dc:
Lemma 9. Given a binary vector y ∈ {0, 1}m with a fraction w of ones, the distribution of
the random LDGM codeword X(w) induced by y is i.i.d. Bernoulli with parameter
δ∗(w; dc) = [1 − (1 − 2w)^{dc}] / 2.
Proof. Given a fixed sequence y ∈ {0, 1}m with a fraction w of ones, the random codeword bit Xi(w)
at bit i is formed by connecting dc edges to the set of information bits.³ Each edge acts as an i.i.d.
Bernoulli variable with parameter w, so that we can write
Xi(w) = V1 ⊕ V2 ⊕ . . .⊕ Vdc , (45)
where each Vk ∼ Ber(w) is independent and identically distributed. A straightforward calculation
using z-transforms (see [17]) or Fourier transforms over GF (2) yields that Xi(w) is Bernoulli with
parameter δ∗(w; dc) as defined.
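The parameter of Lemma 9 can be confirmed by an exact recursion on the XOR of dc independent Ber(w) variables (a small verification sketch, not part of the proof):

```python
def xor_one_prob(w: float, dc: int) -> float:
    """P[V1 ⊕ ... ⊕ V_dc = 1] for i.i.d. V_k ~ Ber(w), by direct recursion."""
    p = 0.0                               # P[running XOR = 1] after 0 terms
    for _ in range(dc):
        p = p * (1 - w) + (1 - p) * w     # XOR flips with probability w
    return p

for w in (0.1, 0.3, 0.5):
    for dc in (3, 4, 7):
        closed_form = (1 - (1 - 2 * w) ** dc) / 2   # δ*(w; dc) of Lemma 9
        assert abs(xor_one_prob(w, dc) - closed_form) < 1e-12
print("ok")
```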
B Bounds on binomial coefficients
The following bounds on binomial coefficients are standard (see Chap. 12, [11]): for 0 < k < n,

2^{n h(k/n)} / (n + 1) ≤ (n choose k) ≤ 2^{n h(k/n)}. (46)

Here, for α ∈ (0, 1), the quantity h(α) := −α log α − (1 − α) log(1 − α) is the binary entropy
function.
³ In principle, our procedure allows two different edges to choose the same information bit, but the probability of
such double-edges is asymptotically negligible.
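The bounds (46) can be verified exactly for moderate n using Python's arbitrary-precision binomial coefficients (a verification sketch; the choices of n and α are arbitrary):

```python
from math import ceil, comb, log2

def h(a: float) -> float:
    return -a * log2(a) - (1 - a) * log2(1 - a)

n = 500
for alpha in (0.1, 0.25, 0.4):
    k = ceil(alpha * n)
    exponent = log2(comb(n, k)) / n    # exact (1/n) log2 of the coefficient
    assert h(k / n) - log2(n + 1) / n <= exponent <= h(k / n)
print("ok")
```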
C Proof of Lemma 4
First, by the definition of Tn(D), we have

E[Tn²(D)] = E[ Σ_i Σ_j Zi(D) Zj(D) ] = E[Tn] + Σ_{j≠i} P[Zi(D) = 1, Zj(D) = 1].

To simplify the second term on the RHS, we first note that for any i.i.d. Bernoulli(1/2) sequence
S ∈ {0, 1}n and any codeword Xj, the binary sequence S′ := S ⊕ Xj is also i.i.d. Bernoulli(1/2).
Consequently, for each pair i ≠ j, we have

P[ Zi(D) = 1, Zj(D) = 1 ] = P[ ‖Xi ⊕ S‖1 ≤ Dn, ‖Xj ⊕ S‖1 ≤ Dn ]
 = P[ ‖Xi ⊕ S′‖1 ≤ Dn, ‖Xj ⊕ S′‖1 ≤ Dn ]
 = P[ ‖Xi ⊕ Xj ⊕ S‖1 ≤ Dn, ‖S‖1 ≤ Dn ].

Note that for each j ≠ i, the vector Xi ⊕ Xj is a non-zero codeword. For each fixed i, summing
over j ≠ i can be recast as summing over all non-zero codewords, so that

Σ_{i≠j} P[ Zi(D) = 1, Zj(D) = 1 ] = Σ_i Σ_{k≠0} P[ ‖Xk ⊕ S‖1 ≤ Dn, ‖S‖1 ≤ Dn ]
 = 2^{nR} Σ_{k≠0} P[ ‖Xk ⊕ S‖1 ≤ Dn, ‖S‖1 ≤ Dn ]
 = 2^{nR} P[ Z0(D) = 1 ] Σ_{k≠0} P[ Zk(D) = 1 | Z0(D) = 1 ]
 = E[Tn] Σ_{k≠0} P[ Zk(D) = 1 | Z0(D) = 1 ],

thus establishing the claim.
D Proof of Lemma 5
We reformulate the probability Q(w,D) as follows. Recall that Q involves conditioning the source
sequence S on the event ‖S‖1 ≤ Dn. Accordingly, we define a discrete variable T with distribution

P[T = t] = (n choose t) / Σ_{s=0}^{Dn} (n choose s), for t = 0, 1, . . . , Dn,

representing the (random) number of 1s in the source sequence S. Let Ui and Vj denote Bernoulli
random variables with parameters 1 − δ∗(w; dc) and δ∗(w; dc) respectively. With this set-up,
conditioned on the underlying information sequence having a fraction w of ones, the quantity
Q(w,D) is equivalent to the probability that the random variable

W := Σ_{i=1}^{T} Ui + Σ_{j=1}^{n−T} Vj (with the first sum empty when T = 0)
is at most Dn. To bound this probability, we use a Chernoff bound in the form

(1/n) log P[W ≤ Dn] ≤ inf_{λ≤0} { (1/n) log MW(λ) − λD }. (48)
We begin by computing the moment generating function MW. Taking conditional expectations and
using independence, we have

MW(λ) = Σ_{t=0}^{Dn} P[T = t] [MU(λ)]^t [MV(λ)]^{n−t}.
Here the cumulant generating functions have the form

log MU(λ) = log[ (1 − δ)e^λ + δ ], and (49a)
log MV(λ) = log[ (1 − δ) + δe^λ ], (49b)
where we have used (and will continue to use) δ as a shorthand for δ∗(w; dc).
Of interest to us is the exponential behavior of this expression in n. Using the standard entropy
approximations to the binomial coefficient (see Appendix B), we can bound MW(λ) as

MW(λ) ≤ f(n) Σ_{t=0}^{Dn} 2^{n g(t)}, where g(t) := h(t/n) − h(D) + (t/n) log MU(λ) + (1 − t/n) log MV(λ), (50)
where f(n) denotes a generic polynomial factor. Further analyzing this sum, we have

(1/n) log Σ_{t=0}^{Dn} 2^{n g(t)} ≤ max_{0≤t≤Dn} g(t) + (1/n) log f(n) + (1/n) log(nD)
 = max_{0≤t≤Dn} { h(t/n) − h(D) + (t/n) log MU(λ) + (1 − t/n) log MV(λ) } + o(1)
 ≤ max_{u∈[0,D]} { h(u) − h(D) + u log MU(λ) + (1 − u) log MV(λ) } + o(1).
Combining this upper bound on (1/n) log MW(λ) with the Chernoff bound (48) yields that

(1/n) log P[W ≤ Dn] ≤ inf_{λ≤0} max_{u∈[0,D]} G(u, λ; δ) + o(1), (51)

where the function G takes the form

G(u, λ; δ) := h(u) − h(D) + u log MU(λ) + (1 − u) log MV(λ) − λD. (52)
Finally, we establish that the solution (u∗, λ∗) to the min-max saddle point problem (51) is
unique, and specified by u∗ = D and λ∗ as in Lemma 5. First of all, observe that for any δ ∈ (0, 1),
the function G is continuous, strictly concave in u and strictly convex in λ. (The strict concavity
follows since h (u) is strictly concave with the remaining terms linear; the strict convexity follows
since cumulant generating functions are strictly convex.) Therefore, for any fixed λ < 0, the
maximum over u ∈ [0,D] is always achieved. On the other hand, for any D > 0, u ∈ [0,D] and
δ ∈ (0, 1), we have G(u, λ; δ) → +∞ as λ → −∞, so that the infimum is either achieved at some
λ∗ < 0, or at λ∗ = 0. We show below that it is always achieved at an interior point λ∗ < 0. Thus
far, using standard saddle point theory [21], we have established the existence and uniqueness of
the saddle point solution (u∗, λ∗).
To verify the fixed point conditions, we compute partial derivatives in order to find the optimum.
First, considering u, we compute

∂G/∂u (u, λ; δ) = log((1 − u)/u) + log MU(λ) − log MV(λ)
 = log((1 − u)/u) + log[ (1 − δ)e^λ + δ ] − log[ (1 − δ) + δe^λ ].

Solving the equation ∂G/∂u (u, λ; δ) = 0 yields the maximizer

u′ = [ (1 − δ)e^λ + δ ] / (1 + e^λ) = (1 − δ) · e^λ/(1 + e^λ) + δ · 1/(1 + e^λ). (53)

Since D ≤ 1/2, a bit of algebra shows that u′ ≥ D for all choices of λ. Since the maximization is
constrained to [0,D], the optimum is always attained at u∗ = D.
Turning now to the minimization over λ, we compute the partial derivative to find

∂G/∂λ (u, λ; δ) = u (1 − δ)e^λ / [ (1 − δ)e^λ + δ ] + (1 − u) δe^λ / [ (1 − δ) + δe^λ ] − D.

Setting this partial derivative to zero yields a quadratic equation a e^{2λ} + b e^λ + c = 0 with coefficients

a = δ(1 − δ)(1 − D), (54a)
b = u(1 − δ)² + (1 − u)δ² − D[ δ² + (1 − δ)² ], (54b)
c = −D δ(1 − δ). (54c)

The unique positive root ρ∗ of this quadratic equation is given by

ρ∗(δ, D, u) := (−b + √(b² − 4ac)) / (2a). (55)
It remains to show that ρ∗ ≤ 1, so that λ∗ := log ρ∗ < 0. A bit of algebra (using the fact that a ≥ 0)
shows that ρ∗ < 1 if and only if a + b + c > 0. We then note that at the optimal u∗ = D, we have
b = (1 − 2D)δ², whence

a + b + c = δ(1 − δ)(1 − D) + (1 − 2D)δ² − Dδ(1 − δ) = (1 − 2D) δ > 0,

since D < 1/2 and δ > 0. Hence, the optimal solution is λ∗ := log ρ∗ < 0, as specified in the lemma
statement.
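The closed-form root (55) can be checked by substituting ρ∗ back into the stationarity condition ∂G/∂λ = 0 at u = D; the sketch below does so for a few illustrative values of δ and D.

```python
from math import sqrt

def stationarity_residual(delta: float, D: float) -> float:
    """Substitute ρ* of equation (55) into ∂G/∂λ at u = D; should be ~0."""
    a = delta * (1 - delta) * (1 - D)
    b = (1 - 2 * D) * delta ** 2                     # value of b at u* = D
    c = -D * delta * (1 - delta)
    rho = (-b + sqrt(b * b - 4 * a * c)) / (2 * a)   # ρ* = exp(λ*)
    return (D * (1 - delta) * rho / ((1 - delta) * rho + delta)
            + (1 - D) * delta * rho / ((1 - delta) + delta * rho) - D)

for delta in (0.1, 0.25, 0.4):
    for D in (0.11, 0.3):
        assert abs(stationarity_residual(delta, D)) < 1e-9
        a = delta * (1 - delta) * (1 - D)
        b = (1 - 2 * D) * delta ** 2
        c = -D * delta * (1 - delta)
        assert (-b + sqrt(b * b - 4 * a * c)) / (2 * a) < 1   # so λ* < 0
print("ok")
```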
E Proof of Lemma 6
A straightforward calculation yields that

G(1/2) = F(δ∗(1/2; dc);D) = F(1/2;D) = −(1 − h(D)),
as claimed. Turning next to the derivatives, we note that by inspection, the solution λ∗(t) defined in
Lemma 5 is twice continuously differentiable as a function of t. Consequently, the function F (t,D)
is twice continuously differentiable in t. Moreover, the function δ∗(w; dc) is twice continuously
differentiable in w. Overall, we conclude that G(w) = F (δ∗(w; dc);D) is twice continuously differ-
entiable in w, and that we can obtain derivatives via chain rule. Computing the first derivative,
we have

G′(1/2) = δ′(1/2) F′(δ∗(1/2; dc);D) = 0,

since δ′(w) = dc(1 − 2w)^{dc−1}, which reduces to zero at w = 1/2. Turning to the second derivative,
we have

G′′(1/2) = δ′′(1/2) F′(δ∗(1/2; dc);D) + [δ′(1/2)]² F′′(δ∗(1/2; dc);D) = δ′′(1/2) F′(δ∗(1/2; dc);D),

since δ′(1/2) = 0. We again compute δ′′(w) = −2dc(dc − 1)(1 − 2w)^{dc−2}, which again reduces to zero
at w = 1/2 since dc ≥ 4 by assumption.
F Regular LDPC codes are sufficient
Consider a regular (d_v, d_c) code from the standard Gallager LDPC ensemble. In order to complete the proof of Theorem 1, we need to show that for suitable choices of degrees (d_v, d_c), the average weight enumerator of these codes can be suitably bounded, as in equation (34), by a function B that satisfies the conditions specified in Section 5.5.
It can be shown [17, 22] that for even degrees d′_c, the average weight enumerator of the regular Gallager ensemble, for any block length m, satisfies the bound

(1/m) log A_m(w) = B(w; d_v, d′_c) + o(1).

The function B in this relation is defined for w ∈ [0, 1/2] by

B(w; d_v, d′_c) := (1 − d_v) h(w) − (1 − R_H) + d_v inf_{λ≤0} { (1/d′_c) log[(1 + e^λ)^{d′_c} + (1 − e^λ)^{d′_c}] − λw }, (56)

and by B(w) = B(1 − w) for w ∈ [1/2, 1]. Given that the minimization problem (56) is strictly convex, a straightforward calculation of the derivative shows that the optimum is achieved at λ∗, where λ∗ ≤ 0 is the unique solution of the equation

e^λ [(1 + e^λ)^{d′_c−1} − (1 − e^λ)^{d′_c−1}] / [(1 + e^λ)^{d′_c} + (1 − e^λ)^{d′_c}] = w. (57)
Some numerical computation for R_H = 0.5 and different choices of (d_v, d_c) yields the curves shown in Fig. 9.
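Such curves can be regenerated directly from (56). The sketch below writes the exponent in bits and uses a crude grid minimization over λ ≤ 0; the base-of-logarithm conventions are our own, fixed so that B(1/2) = R_H as in assumption (A3):

```python
import math

def B(w, dv, dc, RH):
    # Weight-enumerator exponent (56) for a regular Gallager ensemble,
    # written in bits; a grid search stands in for the exact minimization.
    h = lambda x: -x * math.log2(x) - (1 - x) * math.log2(1 - x)
    def obj(lam):
        return math.log2((1 + math.exp(lam))**dc + (1 - math.exp(lam))**dc) / dc \
               - lam * w / math.log(2)
    best = min(obj(-20 * k / 4000) for k in range(4001))   # lambda in [-20, 0]
    return (1 - dv) * h(w) - (1 - RH) + dv * best

# At w = 1/2 the optimum is lambda* = 0 and B(1/2) = RH; away from 1/2, B < RH.
assert abs(B(0.5, 3, 6, 0.5) - 0.5) < 1e-9
assert B(0.3, 3, 6, 0.5) < 0.5
```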
We now show that for suitable choices of degrees (d_v, d_c), the function B defined in equation (56) satisfies the four assumptions specified in Section 5.5. First, for even degrees d′_c, the function B
Figure 9. Plots of LDPC weight enumerators for codes of rate R_H = 0.5, and check degrees d′_c ∈ {6, 8, 10}.
is symmetric about w = 1/2, so that assumption (A1) holds. Secondly, we have B(w) ≤ R_H, and moreover, for w = 1/2, the optimal λ∗(1/2) = 0, so that B(1/2) = R_H, and assumption (A3) is satisfied. Next, it is known from the work of Gallager [17], and moreover is clear from the plots in Fig. 9, that LDPC codes with d_v > 2 have linear minimum distance, so that assumption (A4) holds.
The final condition to verify is assumption (A2), concerning the differentiability of B. We
summarize this claim in the following:
Lemma 10. The function B is twice continuously differentiable on (0, 1), and in particular we have

B′(1/2) = 0, and B″(1/2) < 0. (58)
Proof. Note that for each fixed w ∈ (0, 1), the function

f(λ) = (1/d′_c) log[(1 + e^λ)^{d′_c} + (1 − e^λ)^{d′_c}] = λ + (1/d′_c) log[(e^{−λ} + 1)^{d′_c} + (e^{−λ} − 1)^{d′_c}]

is strictly convex and twice continuously differentiable as a function of λ. Moreover, the function f∗(w) := inf_{λ≤0} {f(λ) − λw} corresponds to the conjugate dual [21] of f(λ) + I_{≤0}(λ). Since the optimum is uniquely attained for each w ∈ (0, 1), an application of Danskin's theorem [4] yields that f∗ is differentiable with (d/dw) f∗(w) = −λ∗(w), where λ∗ is defined by equation (57). Putting together the pieces, we have B′(w) = (1 − d_v) h′(w) − d_v λ∗(w). Evaluating at w = 1/2 yields B′(1/2) = 0 − d_v λ∗(1/2) = 0 as claimed.
We now claim that λ∗(w) is differentiable. Indeed, let us write the defining relation (57) for λ∗(w) as F(λ, w) = 0, where F(λ, w) := f′(λ) − w. Note that F is twice continuously differentiable in both λ and w; moreover, ∂F/∂λ exists for all λ ≤ 0 and w, and satisfies ∂F/∂λ (λ, w) = f″(λ) > 0 by the strict convexity of f. Hence, applying the implicit function theorem [4] yields that λ∗(w) is differentiable, and moreover that (dλ∗/dw)(w) = 1/f″(λ∗(w)). Hence, combined with our earlier calculation of B′, we conclude that

B″(w) = (1 − d_v) h″(w) − d_v / f″(λ∗(w)).

Our final step is to compute the second derivative f″. In order to do so, it is convenient to define g = log f′, and exploit the relation g′ f′ = f″. By definition, we have
g(λ) = λ + log[(1 + e^λ)^{d′_c−1} − (1 − e^λ)^{d′_c−1}] − log[(1 + e^λ)^{d′_c} + (1 − e^λ)^{d′_c}],
whence
g′(λ) = 1 + e^λ (d′_c − 1) [(1 + e^λ)^{d′_c−2} + (1 − e^λ)^{d′_c−2}] / [(1 + e^λ)^{d′_c−1} − (1 − e^λ)^{d′_c−1}] − e^λ d′_c [(1 + e^λ)^{d′_c−1} − (1 − e^λ)^{d′_c−1}] / [(1 + e^λ)^{d′_c} + (1 − e^λ)^{d′_c}].
Evaluating at w = 1/2 corresponds to λ∗(1/2) = 0, so that

f″(λ∗(1/2)) = f′(0) g′(0) = (1/2) [1 + (d′_c − 1)/2 − d′_c/2] = 1/4.
Consequently, combining all of the pieces, we have

B″(1/2) = (1 − d_v) h″(1/2) − d_v / f″(λ∗(1/2)) = 4(d_v − 1) − 4 d_v = −4 < 0

as claimed.
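The two numerical facts driving this proof — f′(0) = 1/2 and f″(0) = f′(0) g′(0) = 1/4, independently of the degree — are easy to confirm by finite differences, taking f(λ) = (1/d′_c) log[(1 + e^λ)^{d′_c} + (1 − e^λ)^{d′_c}] as above (an illustrative check, not part of the proof):

```python
import math

def f(lam, d):
    # f(lambda) = (1/d) * log[(1 + e^lam)^d + (1 - e^lam)^d], natural log.
    return math.log((1 + math.exp(lam))**d + (1 - math.exp(lam))**d) / d

for d in (4, 6, 10):
    h = 1e-4
    f1 = (f(h, d) - f(-h, d)) / (2 * h)                # ~ f'(0)
    f2 = (f(h, d) - 2 * f(0, d) + f(-h, d)) / h**2     # ~ f''(0)
    assert abs(f1 - 0.5) < 1e-6    # f'(0) = 1/2 for every even degree d
    assert abs(f2 - 0.25) < 1e-4   # f''(0) = 1/4 for every even degree d
```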
G Derivatives of L̃
Here we calculate the first and second derivatives of the function L̃ defined in equation (44). The first derivative takes the form

L̃′(v) = R δ′(v; d_c)/δ(v; d_c) − (1 − p) δ′(v; d_c)/[1 − δ(v; d_c)],

where δ′(v; d_c) = d_c(1 − 2v)^{d_c−1}. Since δ′(1/2; d_c) = 0, we have L̃′(1/2) = 0 as claimed. Second, using
chain rule, we calculate
L̃″(v) = −R [δ″(v; d_c) δ(v; d_c) − [δ′(v; d_c)]²] / [δ(v; d_c)]² − (1 − p) { δ″(v; d_c)/[1 − δ(v; d_c)] + [δ′(v; d_c)]²/[1 − δ(v; d_c)]² },

where δ″(v; d_c) = −2 d_c (d_c − 1)(1 − 2v)^{d_c−2}. Now for d_c > 2, we have δ″(1/2; d_c) = 0, so that L̃″(1/2) = −4R < 0 as claimed.
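The vanishing of δ′ and δ″ at v = 1/2 used above can be confirmed directly; here we take δ(v; d_c) = (1 − (1 − 2v)^{d_c})/2, the form consistent with the derivative δ′(v; d_c) = d_c(1 − 2v)^{d_c−1} quoted above:

```python
def delta(v, dc):
    # delta(v; dc) = (1 - (1 - 2v)^dc) / 2
    return 0.5 * (1 - (1 - 2 * v)**dc)

dc, h = 4, 1e-3
d1 = (delta(0.5 + h, dc) - delta(0.5 - h, dc)) / (2 * h)                     # ~ delta'(1/2)
d2 = (delta(0.5 + h, dc) - 2 * delta(0.5, dc) + delta(0.5 - h, dc)) / h**2   # ~ delta''(1/2)
assert abs(d1) < 1e-8 and abs(d2) < 1e-4
```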
References
[1] N. Alon and J. Spencer. The Probabilistic Method. Wiley Interscience, New York, 2000.
[2] R. J. Barron, B. Chen, and G. W. Wornell. The duality between information embedding and source
coding with side information and some applications. IEEE Trans. Info. Theory, 49(5):1159–1180, 2003.
[3] C. Berrou and A. Glavieux. Near optimum error correcting coding and decoding: Turbo codes. IEEE
Trans. Commun., 44:1261–1271, October 1996.
[4] D. Bertsekas. Nonlinear programming. Athena Scientific, Belmont, MA, 1995.
[5] J. Chou, S. S. Pradhan, and K. Ramchandran. Turbo coded trellis-based constructions for data em-
bedding: Channel coding with side information. In Proceedings of the Asilomar Conference, November
2001.
[6] J. Chou, S. S. Pradhan, and K. Ramchandran. Turbo and trellis-based constructions for source coding
with side information. In Proceedings of the Data Compression Conference (DCC), 2003.
[7] S.-Y. Chung, G. D. Forney, T. Richardson, and R. Urbanke. On the design of low-density parity-check
codes within 0.0045 dB of the Shannon limit. IEEE Communications Letters, 5(2):58–60, February
2001.
[8] S. Ciliberti and M. Mézard. The theoretical capacity of the parity source coder. Technical report,
August 2005. arXiv:cond-mat/0506652.
[9] S. Ciliberti, M. Mézard, and R. Zecchina. Message-passing algorithms for non-linear nodes and data
compression. Technical report, November 2005. arXiv:cond-mat/0508723.
[10] S. Cocco, O. Dubois, J. Mandler, and R. Monasson. Rigorous decimation-based construction of ground
pure states for spin-glass models on random lattices. Physical Review Letters, 90(4), January 2003.
[11] T. Cover and J. Thomas. Elements of Information Theory. John Wiley and Sons, New York, 1991.
[12] N. Creignou, H. Daud/’e, and O. Dubois. Approximating the satisfiability threshold of random XOR
formulas. Combinatorics, Probability and Computing, 12:113–126, 2003.
[13] L. Devroye, L. Gyorfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag,
New York, 1996.
[14] O. Dubois and J. Mandler. The 3-XORSAT threshold. In Proc. 43rd Symp. FOCS, pages 769–778,
2002.
[15] U. Erez and S. ten Brink. A close-to-capacity dirty paper coding scheme. IEEE Trans. Info. Theory,
51(10):3417–3432, 2005.
[16] O. Etesami and A. Shokrollahi. Raptor codes on binary memoryless symmetric channels. IEEE Trans.
on Information Theory, 52(5):2033–2051, 2006.
[17] R. G. Gallager. Low-density parity check codes. MIT Press, Cambridge, MA, 1963.
[18] J. Garcia-Frias and Y. Zhao. Compression of binary memoryless sources using punctured turbo codes.
IEEE Communication Letters, 6(9):394–396, September 2002.
[19] S. I. Gelfand and M. S. Pinsker. Coding for channel with random parameters. Probl. Pered. Inform.
(Probl. Inf. Transmission), 9(1):19–31, 1983.
[20] G. Grimmett and D. Stirzaker. Probability and Random Processes. Oxford Science Publications, Claren-
don Press, Oxford, 1992.
[21] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms, volume 1.
Springer-Verlag, New York, 1993.
[22] S. Litsyn and V. Shevelev. On ensembles of low-density parity-check codes: asymptotic distance dis-
tributions. IEEE Trans. Info. Theory, 48(4):887–908, April 2002.
[23] A. Liveris, Z. Xiong, and C. Georghiades. Nested convolutional/turbo codes for the binary Wyner-Ziv
problem. In Proceedings of the International Conference on Image Processing (ICIP), volume 1, pages
601–604, September 2003.
[24] H. A. Loeliger. An introduction to factor graphs. IEEE Signal Processing Magazine, 21:28–41, 2004.
[25] M. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. Spielman. Improved low-density parity check
codes using irregular graphs. IEEE Trans. Info. Theory, 47:585–598, February 2001.
[26] M. W. Marcellin and T. R. Fischer. Trellis coded quantization of memoryless and Gauss-Markov
sources. IEEE Trans. Communications, 38(1):82–93, 1990.
[27] E. Martinian and M. J. Wainwright. Analysis of LDGM and compound codes for lossy compression
and binning. In Workshop on Information Theory and Applications (ITA), February 2006. Available
at arxiv:cs.IT/0602046.
[28] E. Martinian and M. J. Wainwright. Low density codes achieve the rate-distortion bound. In Data
Compression Conference, volume 1, March 2006. Available at arxiv:cs.IT/061123.
[29] E. Martinian and M. J. Wainwright. Low density codes can achieve the Wyner-Ziv and Gelfand-
Pinsker bounds. In International Symposium on Information Theory, July 2006. Available at
arxiv:cs.IT/0605091.
[30] E. Martinian and J. Yedidia. Iterative quantization using codes on graphs. In Allerton Conference on
Control, Computing, and Communication, October 2003.
[31] Y. Matsunaga and H. Yamamoto. A coding theorem for lossy data compression by LDPC codes. IEEE
Trans. Info. Theory, 49:2225–2229, 2003.
[32] M. Mézard, F. Ricci-Tersenghi, and R. Zecchina. Alternative solutions to diluted p-spin models and
XORSAT problems. Jour. of Statistical Physics, 111:105, 2002.
[33] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, Cambridge, UK,
1995.
[34] T. Murayama. Thouless-Anderson-Palmer approach for lossy compression. Physical Review E,
69:035105(1)–035105(4), 2004.
[35] T. Murayama and M. Okada. One step RSB scheme for the rate distortion function. J. Phys. A: Math.
Gen., 65:11123–11130, 2003.
[36] H. Pfister, I. Sason, and R. Urbanke. Capacity-achieving ensembles for the binary erasure channel with
bounded complexity. IEEE Trans. on Information Theory, 51(7):2352–2379, 2005.
[37] S. S. Pradhan and K. Ramchandran. Distributed source coding using syndromes (DISCUS): Design
and construction. IEEE Trans. Info. Theory, 49(3):626–643, 2003.
[38] T. Richardson, A. Shokrollahi, and R. Urbanke. Design of capacity-approaching irregular low-density
parity check codes. IEEE Trans. Info. Theory, 47:619–637, February 2001.
[39] T. Richardson and R. Urbanke. The capacity of low-density parity check codes under message-passing
decoding. IEEE Trans. Info. Theory, 47:599–618, February 2001.
[40] D. Schonberg, S. S. Pradhan, and K. Ramchandran. LDPC codes can approach the slepian-wolf bound
for general binary sources. In Proceedings of the 40th Annual Allerton Conference on Control, Com-
munication, and Computing, pages 576–585, October 2002.
[41] A. Shokrollahi. Raptor codes. IEEE Trans. on Information Theory, 52(6):2551–2567, 2006.
[42] Y. Sun, A. Liveris, V. Stankovic, and Z. Xiong. Near-capacity dirty-paper code designs based on TCQ
and IRA codes. In ISIT, September 2005.
[43] A. J. Viterbi and J. K. Omura. Trellis encoding of memoryless discrete-time sources with a fidelity
criterion. IEEE Trans. Info. Theory, IT-20(3):325–332, 1974.
[44] M. J. Wainwright and E. Maneva. Lossy source coding by message-passing and decimation over gen-
eralized codewords of LDGM codes. In International Symposium on Information Theory, Adelaide,
Australia, September 2005. Available at arxiv:cs.IT/0508068.
[45] A. D. Wyner and J. Ziv. The rate-distortion function for source coding with side information at the
decoder. IEEE Trans. Info. Theory, IT-22:1–10, January 1976.
[46] Y. Yang, V. Stankovic, Z. Xiong, and W. Zhao. On multiterminal source code design. In Proceedings
of the Data Compression Conference, 2005.
[47] R. Zamir, S. Shamai (Shitz), and U. Erez. Nested linear/lattice codes for structured multiterminal
binning. IEEE Trans. Info. Theory, 48(6):1250–1276, 2002.
0704.1819 | Comments on Charges and Near-Horizon Data of Black Rings | arXiv:0704.1819v3 [hep-th] 17 Dec 2007
Preprint typeset in JHEP style - HYPER VERSION TIT/HEP-570
arXiv:0704.1819
Comments on Charges and Near-Horizon Data
of Black Rings
Kentaro Hanaki1, Keisuke Ohashi2 and Yuji Tachikawa3
1 Department of Physics, University of Michigan,
Ann Arbor, MI 48109-1120, USA
E-mail: [email protected]
2 DAMTP, Centre for Mathematical Sciences, Cambridge University,
Wilberforce Road, Cambridge CB3OWA, UK
E-mail: [email protected]
3 School of Natural Sciences, Institute for Advanced Study,
Princeton, New Jersey 08540, USA
E-mail: [email protected]
Abstract: We study how the charges of the black rings measured at the asymptotic
infinity are encoded in the near-horizon metric and gauge potentials, independent of
the detailed structure of the connecting region. Our analysis clarifies how different sets
of four-dimensional charges can be assigned to a single five-dimensional object under
the Kaluza-Klein reduction. Possible choices are related by the Witten effect on dyons
and by a large gauge transformation in four and five dimensions, respectively.
Keywords: Black Rings, Page Charges.
Contents
1. Introduction
2. Near-Horizon Data and Conserved Charges
2.1 Electric charges
2.2 Angular momenta
2.3 Example 1: the black ring
2.3.1 Geometry
2.3.2 Electric charge
2.3.3 Angular momenta
2.4 Example 2: concentric black rings
2.5 Generalization
3. Relation to Four-Dimensional Charges
3.1 Mapping of the fields
3.2 Mapping of the charges
3.3 Reduction and the attractor
3.4 Gauge dependence and monodromy
3.5 Monodromy and Taub-NUT
4. Summary
A. Geometry of Concentric Black Rings
1. Introduction
One of the achievements of string/M theory is the microscopic explanation for the
Bekenstein-Hawking entropy for a class of four-dimensional supersymmetric black holes
[1, 2]. The microscopic counting predicts subleading corrections to the entropy, which
can also be calculated from the macroscopic point of view, i.e. from stringy modi-
fications to the Einstein-Hilbert Lagrangian [3]. Comparison of the two approaches
has proven to be very fruitful, e.g. it has led to the relation to the partition function
of topological strings [4]. Beginning in Ref. [5], it has also been generalized to non-supersymmetric extremal black holes using the fact that the near-horizon geometry has
enhanced symmetry. The analysis has also been extended to rotating black holes [6].
There is a richer set of supersymmetric black objects in five dimensions, including
black rings [7], on which we focus. The entropy is still given by the area law macro-
scopically to leading order, and it can be understood microscopically using a D-brane
construction [8, 9]. The understanding of higher-derivative corrections remains more
elusive [10, 11, 12]. One reason for this is that the supersymmetric higher-derivative
terms were not known until quite recently [13]. Even with this supersymmetric higher-
derivative action, it has been quite difficult to construct the black ring solution embed-
ded in the asymptotically flat spacetime, and it is preferable if we can only study the
near horizon geometry. Then the problem is to find the charges carried by the black
ring from its data at the near-horizon region.
The usual approach taken in the literature so far is to consider the dimensional
reduction along a circle down to four dimensions, and to study the charges there [12,
14, 15, 16]. Then, the attractor mechanism fixes the scalar vacuum expectation values
(vevs) and the metric at the horizon by the electric and magnetic charges [17, 18].
Conversely, the magnetic charge can be measured by the flux, and the electric charge
can be found by taking the variation of the Lagrangian by the gauge potential. In
this way, the entropy as a function of charges can be obtained from the analysis of
the near-horizon region alone [5, 6]. Nevertheless, it has not been clarified how to
reconcile the competing proposals [8, 9, 19, 20, 21] of the mapping between the four-
and five-dimensional charges of the black rings embedded in the asymptotically flat
spacetime.
Thus we believe it worthwhile to revisit the identification of the charges directly
in five dimensions, with local five-dimensional Lorentz symmetry intact. It poses two
related problems because of the presence of the Chern-Simons interaction in the La-
grangian. One is that, in the presence of the Chern-Simons interaction, the equation
of motion of the gauge field is given by
d ⋆ F = F ∧ F, (1.1)
which means that the topological density of the gauge field itself becomes the source of
electric charge. To put it differently, the attractor mechanism for the black rings [22]
determines the scalar vevs at the near-horizon region via the magnetic dipole charges
only, and the information about the electric charges seems to be lost. Then the electric
charge of a black ring seems to be diffusely distributed throughout the spacetime.
Eq.(1.1) can be rewritten in the form
d(⋆F −A ∧ F ) = 0, (1.2)
so that the integral ∫_Σ (⋆F − A ∧ F) is independent of the choice of Σ. This integral is called the Page charge.
Similar analysis can be done for angular momenta, and Suryanarayana and Wapler [23]
obtained a nice formula for them using the Noether charge of Wald.
There is a second problem remaining for black rings, which stems from the fact
that A is not a well-defined one-form there because of the presence of the magnetic
dipole. It makes ∫_Σ (⋆F − A ∧ F) ill-defined, because in the integral all the forms have to be well-defined. The same can be said for the angular momenta. The aim of this
paper is then to show how this second problem can be overcome, and to see how the
near-horizon region of a black ring encodes its charges measured at the asymptotic
infinity.
In Section 2, we use elementary methods to convert the integral at the asymptotic
infinity to the one at the horizon. We apply our formalism to the supersymmetric black
ring and check that it correctly reproduces known values for the conserved charges. We
will show how the gauge non-invariance of ∫ A ∧ F can be solved by using two coordinate
patches and a compensating term along the boundary of the patches. Then in Section
3 we will see that our viewpoint helps in identifying the relation of the charges under
the Kaluza-Klein reduction along S1. We will see that the change in the charges under
a large gauge transformation in five dimensions maps to the Witten effect on dyons [24]
in four dimensions. Proposals in the literature [8, 9, 19, 20, 21] will be found equivalent
under the transformation. We conclude with a summary in Section 4. In Appendix A
the geometry of the concentric rings is briefly reviewed.
2. Near-Horizon Data and Conserved Charges
To emphasize essential physical ideas, we discuss the problem first for the minimal
supergravity in five dimensions. Later in this section we will apply the technique to
the case with vector multiplets. The bosonic part of the Lagrangian of the minimal
supergravity theory is
⋆R − F ∧ ⋆F − (4/(3√3)) A ∧ F ∧ F. (2.1)
Our metric is mostly plus, and Rµν is defined to be positive for spheres. We define the
Hodge star operator for an n-form as
⋆(dx^{μ_0} ∧ · · · ∧ dx^{μ_{n−1}}) = [1/(5−n)!] ǫ^{μ_0···μ_{n−1}}{}_{μ_n···μ_4} dx^{μ_n} ∧ · · · ∧ dx^{μ_4}. (2.2)
with the Levi-Civita symbol ǫ_{01234} = +1 and ǫ^{01234} = −1 defined in local Lorentz coordinates. The equations of motion are

R_{μν} = −(1/3) g_{μν} F_{ρσ}F^{ρσ} + 2 F_{μρ}F_ν{}^ρ, (2.3)

d ⋆ F = −(2/√3) F ∧ F. (2.4)
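As a quick consistency check of (2.3) (with the coefficient 1/3 fixed by trace-reversing the Einstein equation in five dimensions), contracting with g^{μν} must give the Ricci scalar R = F_{ρσ}F^{ρσ}/3 for any antisymmetric F. A small numerical sketch in flat local Lorentz indices:

```python
import random

random.seed(0)
# Mostly-plus Minkowski metric in 5d; eta is its own inverse here.
eta = [[(-1 if i == 0 else 1) if i == j else 0 for j in range(5)] for i in range(5)]

# Random antisymmetric field strength F_{mn} (indices down).
F = [[0.0] * 5 for _ in range(5)]
for i in range(5):
    for j in range(i + 1, 5):
        F[i][j] = random.uniform(-1, 1)
        F[j][i] = -F[i][j]

# F^2 = F_{rs} F^{rs}, raising indices with eta.
Fup = [[sum(eta[m][a] * eta[n][b] * F[a][b] for a in range(5) for b in range(5))
        for n in range(5)] for m in range(5)]
F2 = sum(F[m][n] * Fup[m][n] for m in range(5) for n in range(5))

# Right-hand side of (2.3): R_{mn} = -(1/3) eta_{mn} F^2 + 2 F_{mr} F_n{}^r.
Ric = [[-F2 * eta[m][n] / 3
        + 2 * sum(F[m][r] * sum(eta[r][s] * F[n][s] for s in range(5))
                  for r in range(5))
        for n in range(5)] for m in range(5)]
R = sum(eta[m][n] * Ric[m][n] for m in range(5) for n in range(5))
assert abs(R - F2 / 3) < 1e-12   # trace of (2.3): R = F^2/3
```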
2.1 Electric charges
From the equation of motion of the gauge field (2.4), we see that F ∧ F is the electric current for the charge ∫ ⋆F. Thus, the charge is distributed diffusely in the spacetime, as was emphasized e.g. in [25]. However, the equation (2.4) can also be cast in the form

d(⋆F + (2/√3) A ∧ F) = 0. (2.5)

At the asymptotic infinity, A ∧ F decays sufficiently rapidly, so that we have

∫_∞ ⋆F = ∫_Σ (⋆F + (2/√3) A ∧ F), (2.6)

where the subscript ∞ indicates that the integral is taken at the S³ at the asymptotic infinity, and Σ is an arbitrary three-cycle surrounding the black object. Thus we can think of the electric charge as the integral of the quantity inside the bracket, which is called the Page charge.
One problem about the Page charge is that, even in the case where A is a globally defined one-form, it changes its value under a large gauge transformation. It is completely analogous to the fact that ∮_C A for an uncontractible circle C is only defined up to an integral multiple of 2π under a large gauge transformation. Indeed, let us parametrize C by 0 ≤ θ ≤ 2π and perform a gauge transformation by g(θ) ∈ U(1), i.e. we change A to A + i^{−1} g^{−1} dg. Such a continuous g(θ) can be written as g(θ) = exp(iφ(θ)). Then ∮_C A changes by ∮_C dφ(θ) = φ(2π) − φ(0), which can jump by a multiple of 2π. Thus, ∮_C A is invariant under a small gauge transformation φ(0) = φ(2π) but is not under a large gauge transformation φ(0) ≠ φ(2π). Exactly the same analysis can be done for ∫_Σ A ∧ F, and it changes under a large gauge transformation along C if Σ contains an intersecting one-cycle C and two-cycle S with ∫_S F ≠ 0.
However, this non-invariance under a large gauge transformation poses no problem if
Σ is at the asymptotic infinity of the flat space, because we usually demand that A
should decay sufficiently rapidly there, which removes the freedom to do a large gauge
transformation.
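The 2π jump can be made concrete on a discretized circle: under g(θ) = exp(inθ), the connection shifts by A_θ → A_θ + n, and the holonomy ∮_C A shifts by exactly 2πn. A toy numerical illustration (the profile chosen for A_θ is arbitrary):

```python
import math

N = 1000
dtheta = 2 * math.pi / N
A = [0.3 * math.sin(k * dtheta) for k in range(N)]   # arbitrary smooth A_theta on C

def holonomy(A_vals):
    # Riemann-sum approximation of oint_C A.
    return sum(A_vals) * dtheta

n = 2                                # winding number of g(theta) = exp(i*n*theta)
A_gauged = [a + n for a in A]        # A -> A + i^{-1} g^{-1} dg, i.e. A_theta + n
shift = holonomy(A_gauged) - holonomy(A)
assert abs(shift - 2 * math.pi * n) < 1e-9   # holonomy jumps by 2*pi*n
```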
These facts are well-known, and have been utilized previously e.g. in [14]. It is
the manifestation of the fact that there are several notions of electric charges in the
presence of Chern-Simons interactions, as clearly discussed by Marolf in Ref. [26]. One
is the Maxwell charge which is gauge-invariant but not conserved, and another is the
charge which is conserved but not gauge-invariant. In our case ∫_Σ ⋆F is the Maxwell charge and ∫_Σ (⋆F + (2/√3) A ∧ F) is the Page charge. Yet another notion of the charge
is the quantity which generates the symmetry in the Hamiltonian framework, which
can be constructed using Noether’s theorem and its generalization by the work of Wald
and collaborators [27, 28, 29, 30]. The charge thus constructed is called the Noether
charge, and in our case it agrees with the Page charge.
Unfortunately, the manipulation above cannot be directly applied to the black
rings with dipole charges. It is because A is not a globally well-defined one-form, and
the integrals are not even well-defined. The way out is to generalize the definition of
∫ A ∧ F to the case where A is a U(1) gauge field defined using two coordinate patches, so that

∫_B F ∧ F = “∫_{∂B} A ∧ F ” (2.7)
holds. Then the manipulation (2.6) makes sense. The essential idea is to introduce a
term localized in the boundary of the patches which compensates the gauge variation.
Copsey and Horowitz [31] used similar subtlety associated to the gauge transformation
between patches to study how the magnetic dipole enters in the first law of the black
rings.
Figure 1: Coordinate patches used to define ∫ A ∧ F consistently without ambiguity.
Let us assume the whole spacetime is covered by two coordinate patches, S and
T , see Figure 1. We denote the boundary of two regions by D = ∂S = −∂T . The
gauge field A is represented as well-defined one-forms AS and AT on the patches S and
T , respectively. These two are related by a gauge transformation, AS = AT + β with
dβ = 0 on the boundary D. Suppose the region B has the boundary C = ∂B. Then
we have

∫_B F ∧ F = ∫_{B∩S} F ∧ F + ∫_{B∩T} F ∧ F (2.8)

= ∫_{C∩S + D∩B} A_S ∧ F + ∫_{C∩T − D∩B} A_T ∧ F (2.9)

= (∫_{C∩S} A_S ∧ F + ∫_{C∩T} A_T ∧ F) + ∫_{D∩B} (A_S ∧ F − A_T ∧ F) (2.10)

= (∫_{C∩S} A_S ∧ F + ∫_{C∩T} A_T ∧ F) + ∫_{D∩C} A_S ∧ β. (2.11)
Now we define the symbol ∫_M A ∧ F for a three-cycle M to mean

“∫_M A ∧ F ” ≡ ∫_{M∩S} A_S ∧ F + ∫_{M∩T} A_T ∧ F + ∫_{D∩M} A_S ∧ β, (2.12)
then the relation (2.7) holds as is. The important point here is that we need a term
∫_{D∩M} A_S ∧ β which compensates the gauge variation localized at the boundary of the
coordinate patches.
One immediate concern might be the gauge invariance of the definition (2.12),
but it is guaranteed for C = ∂B from the very fact the relation (2.7) holds. It is
because its left hand side is obviously gauge invariant. For illustration, consider the
case ∂B = C1 − C2. The Page charges measured at C1, C2 themselves are affected by
a large gauge transformation, but their difference is not. When one takes C1 as the
asymptotic infinity, it is conventional to set the gauge potential to be zero there, thus
fixing the gauge freedom. Then the Page charge at the cycle C2 is defined without
ambiguity.
In the following, we drop the quotation marks around the generalized integral “∫ A ∧ F ”. We believe it does not cause any confusion.
2.2 Angular momenta
The technique similar to the one we used for electric charges can be applied to the
angular momenta, and we can obtain a formula which expresses them by the inte-
gral at the horizon. There is a general formalism, developed by Lee and Wald [27],
which constructs the appropriate integrand from a given arbitrary generally-covariant
Lagrangian, and the expression for the angular momenta was obtained in [23, 32]. Instead, here we will construct a suitable quantity in a more down-to-earth and direct way. We will see that the integrand contains the gauge field A without the exterior
derivative, and that it is ill-defined in the presence of magnetic dipole. We will use the
technique developed in the last section to make it well-defined.
Firstly, the angular momentum corresponding to the axial Killing vector ξ can be
measured at the asymptotic infinity by Komar’s formula

J_ξ = −(1/16πG) ∮_∞ ⋆∇ξ, (2.13)

where ∇ξ is an abbreviation for the two-form ∇_μξ_ν dx^μ ∧ dx^ν = dξ. Using the Killing identity, the divergence of the integrand is given by

d ⋆∇ξ = 2 ⋆ R_{μν} ξ^μ dx^ν, (2.14)

which vanishes in the pure gravity. Thus, the angular momentum of a black object of the pure gravity theory can be measured by ∮_S ⋆∇ξ for any surface S which surrounds
the object.
Let us analyze our case, where the equations of motion are given by (2.3) and (2.4).
We need to introduce some notations: £ξ denotes the Lie derivative along the vector
field ξ, ιξω denotes the interior product of a vector ξ to a differential form ω, i.e. the
contraction of the index of ξ to the first index of ω. Then £ξ = dιξ + ιξd when it acts
on the forms. For a vector ξ and a one-form A, we abbreviate ιξA as (ξ · A).
We will take the gauge where gauge potentials are invariant under the axial isometry
£ξA = 0. It can be achieved by averaging over the orbit of the isometry ξ. We
furthermore assume that every chain or cycle we use is invariant under the isometry ξ,
then any term of the form ιξ(· · · ) vanishes upon integration on such a chain or cycle.
Under these assumptions, the difference of the integral of ⋆∇ξ at the asymptotic infinity and at C is evaluated with the help of the Einstein equation (2.3) to be

∮_∞ ⋆∇ξ − ∮_C ⋆∇ξ = 2 ∫_B ⋆R_{μν}ξ^μ dx^ν = 4 ∫_B (ι_ξF) ∧ ⋆F, (2.15)

where B is a hypersurface connecting the asymptotic infinity and C. We dropped the term ι_ξ(⋆F²) because it vanishes upon integration.
The right hand side can be partially integrated using the following relations: one is

d[⋆(ξ·A)F] = −(ι_ξF) ∧ ⋆F − (2/√3)(ξ·A) F ∧ F, (2.16)

and another is

d[(ξ·A) A ∧ F] = (ξ·A) F ∧ F − (ι_ξF) ∧ A ∧ F (2.17)

= (3/2)(ξ·A) F ∧ F − (1/2) ι_ξ(A ∧ F ∧ F), (2.18)
– 7 –
of which the last term vanishes upon integration. Thus we have
dXξ[A] = −(ιξF ) ∧ ⋆F (2.19)
modulo the term of the form ιξ(· · · ), where
X_ξ[A] ≡ ⋆(ξ·A)F + (4/(3√3)) (ξ·A) A ∧ F. (2.20)
Xξ[A] is not a globally well-defined form. Thus, to perform the partial integration of
the right hand side of (2.19), compensating terms along the boundary of the coordinate
patches need to be introduced, just as we did in the previous section in the analysis of
the Page charge.
Let S and T be two coordinate patches, D = ∂S = −∂T be their common bound-
ary, and AS = AT + β as before. Let us call the correction term Yξ[β,AS] and we
define
∫_M X_ξ[A] ≡ ∫_{M∩S} X_ξ[A_S] + ∫_{M∩T} X_ξ[A_T] + ∫_{D∩M} Y_ξ[β, A_T]. (2.21)
We demand that it satisfies

∫_{∂B} X_ξ[A] = −∫_B (ι_ξF) ∧ ⋆F. (2.22)
Then Y_ξ[β, A] should solve

dY_ξ[β, A_T] = X_ξ[A_S] − X_ξ[A_T]. (2.23)

The right hand side is automatically closed since dX_ξ[A] is gauge invariant. Thus the equation above should have a solution if there is no cohomological obstruction. Indeed, substituting (2.20) in the above equation, we get

Y_ξ[β, A_T] = (ξ·β) Z − (4/(3√3)) [2(ξ·β) β ∧ A_T + (ξ·A_T) β ∧ A_T] (2.24)
modulo ι_ξ(···), where Z should satisfy

dZ = ⋆F + (2/√3) A_T ∧ F, (2.25)
the right hand side of which is closed using the equation of motion (2.4). Unfortunately
there seems to be no general way to write Z as a functional of A and β. We need to
choose Z by hand for each on-shell configuration. With these preparations, we can finally integrate the right hand side of (2.15) partially and conclude that

∮_C (⋆∇ξ + 4 X_ξ[A]) (2.26)

is invariant under continuous deformations of C.
Taking C to be the 3-sphere at the asymptotic infinity, the terms X_ξ[A] vanish too fast to contribute to the integral. Then, the integral above is proportional to the
Komar integral at the asymptotic infinity. Thus we arrive at the formula
J_ξ = −(1/16πG) ∮_Σ [⋆∇ξ + 4 ⋆(ξ·A)F + (16/(3√3)) (ξ·A) A ∧ F], (2.27)
where Σ is any surface enclosing the black object. The right hand side is precisely the
Noether charge of Wald as constructed in [23, 32].
The contribution ∮_Σ ⋆∇ξ to the angular momentum is gauge invariant but is not
conserved. It is expected, since the matter energy-momentum tensor carries the angular
momentum. The rest of the terms in (2.27) were obtained by partial integration of the
contribution from the matter energy-momentum tensor, and can also be obtained by
constructing the Noether charge. The price we paid is that it is now not invariant
under a gauge transformation.
2.3 Example 1: the black ring
Let us check our formulae against known examples. First we consider the celebrated
supersymmetric black ring in five dimensions [7].
2.3.1 Geometry
It has been known [33] that any supersymmetric solution of the minimal supergravity in the asymptotically flat R^{1,4} can be written in the form

ds² = −f²(dt + ω)² + f^{−1} ds²(R⁴), (2.28)

where f and ω are a function and a one-form on R⁴, respectively. For the supersymmetric black ring [7], we use a coordinate system adapted to a ring of radius R in the R⁴, given by
ds²(R⁴) = [R²/(x−y)²] [ dx²/(1−x²) + (1−x²) dφ₁² + dy²/(y²−1) + (y²−1) dφ₂² ], (2.29)
with the ranges −1 ≤ x ≤ 1, −∞ < y ≤ −1 and 0 ≤ φ₁,₂ < 2π.¹ The coordinates φ₁, φ₂ were denoted by φ, ψ in Ref. [7].
¹We fix the orientations so that ∫ dx ∧ dφ₁ ∧ dφ₂ > 0 and ∫ dx ∧ dφ₁ < 0 for the S² surrounding the ring.
The solution for the single black ring is parametrized by the radius R in the base R⁴ above, and two extra parameters q and Q. More details can be found in Appendix A. q controls the magnetic dipole through the S² surrounding the ring,

∮_{S²} F ∝ q. (2.30)
Conserved charges measured at the asymptotic infinity are as follows: the electric charge is proportional to Q, (2.31) while the angular momenta are

J₁ = −(1/16πG) ∮_∞ ⋆∇ξ₁ ∝ q(3Q − q²), (2.32)

J₂ = −(1/16πG) ∮_∞ ⋆∇ξ₂ ∝ q(6R² + 3Q − q²), (2.33)

where ξ₁, ξ₂ are the vector fields ∂_{φ₁}, ∂_{φ₂} respectively.
There is a magnetic flux through S2 surrounding the ring, so we need to introduce
two patches S, T . We choose S to cover the region x < 1−ǫ and T to cover 1−ǫ < x < 1,
with infinitesimal ǫ. The boundary D is at x = 1 − ǫ and parametrized by 0 ≤ φ₁, φ₂ < 2π.
We choose the gauge transformation between the two patches to be

A_T = A_S + (√3/2) q dφ₁, (2.34)

which is chosen to make A_T smooth at the origin of R⁴.
The horizon is located at y → −∞ and has the topology S^1 × S^2. The gauge potential near the horizon is

A_S = −(√3/4) q (x + 1) dχ ,   (2.35)

while the geometry near the horizon is given as

ds^2 = 2 dv dr + (4r/q) dv dψ + ℓ^2 dψ^2 + (q^2/4)(dθ^2 + sin^2 θ dχ^2)   (2.36)

where r = r(y) is chosen so that r → 0 corresponds to y → −∞, v is a combination of t and y, x = cos θ, ψ = φ_2 + C_1/r + C_0 for suitably chosen C_{0,1}, χ = φ_1 − φ_2, and

ℓ^2 = 3 [ (Q − q^2)^2/(4q^2) − R^2 ] .   (2.37)

It is a direct product of an extremal Bañados-Teitelboim-Zanelli (BTZ) black hole with horizon length 2πℓ and curvature radius q and of a round two-sphere with radius q/2.
ℓ is a more physical quantity characterizing the ring than R is, so it is preferable to express J_2, (2.33), using ℓ in the form

J_2 = (π/8G) [ q(−2ℓ^2 + 3Q − q^2) + 3(Q − q^2)^2/(2q) ] .   (2.38)

Our objective is to reproduce the conserved charges, (2.31), (2.32) and (2.38), purely from the near-horizon data, (2.35) and (2.36).
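As a consistency check, the rewriting (2.38) of J_2 in terms of ℓ can be verified symbolically. The π/8G prefactor and the exact form of (2.37) used below are the reconstructions printed above, so this verifies the internal consistency of those formulas rather than the original normalization:

```python
import sympy as sp

q, Q, R, ell, G = sp.symbols('q Q R ell G', positive=True)

# Angular momentum of the single ring, (2.33), with the assumed pi/8G prefactor
J2 = sp.pi/(8*G) * q*(6*R**2 + 3*Q - q**2)

# Horizon-radius relation (2.37) solved for R^2
R2 = (Q - q**2)**2/(4*q**2) - ell**2/3

# J_2 rewritten with ell in place of R, as in (2.38)
J2_ell = sp.pi/(8*G) * (q*(-2*ell**2 + 3*Q - q**2) + 3*(Q - q**2)**2/(2*q))

assert sp.simplify(J2.subs(R**2, R2) - J2_ell) == 0
print("(2.38) agrees with (2.33) given (2.37)")
```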
2.3.2 Electric charge
We use the formula (2.6) to get the electric charge. Using the form of the gauge field near the horizon, (2.34) and (2.35), the horizon integral splits into a bulk piece and a boundary piece,

∫_Σ A ∧ F = ∫_{Σ∩S} A_S ∧ F + ∫_D A_S ∧ β ,

whose contributions, proportional to (Q + q^2) and (Q − q^2) respectively, combine to give the total charge Q, (2.39), which correctly reproduces the charge measured at the asymptotic infinity. The vanishing of ⋆F at the horizon means that all of the Maxwell charge of the system is carried outside the horizon in the form of F ∧ F, while all of the Page charge is still inside the horizon.
One important fact behind the gauge invariance of the calculation above is that the integral of A_S along the ψ direction is not just defined mod integer, but is well-defined as a real number. This is because the circle along ψ, which is not contractible in the near-horizon region, becomes contractible in the full geometry.
2.3.3 Angular momenta
The integral on the right hand side of (2.25) can be made arbitrarily small by choosing very small ǫ, so that we can forget the complication coming from the choice of Z. Then for ξ_1 = ∂_{φ_1} = ∂_χ, we have

J_1 = −(1/16πG) [ ∫_{−1<x<1−ǫ} (ξ_1 · A_S) A_S ∧ F + ∫_{x=1−ǫ} (ξ_1 · β) β ∧ A_T ]
    = ((2π)^2/16πG) [ (1/2)(q^3 + qQ) + (−q^3 + qQ) ] = (π/8G) q(3Q − q^2) ,   (2.40)
reproducing (2.32).
For ξ_ψ = ∂_ψ = ∂_{φ_1} + ∂_{φ_2}, we have in addition a contribution from the horizon integral of ⋆∇ξ_ψ, which evaluates to 4π^2 qℓ^2. Adding the contribution from X[A], we obtain

J_ψ = (π/8G) [ −2qℓ^2 − 2q^3 + 6qQ + 3(Q − q^2)^2/(2q) ] ,   (2.41)

which matches J_1 + J_2, see (2.32) and (2.38).
The second and the third terms in the expression above are obtained by partial integration of the contribution from the angular part of the energy-momentum tensor of the gauge field. In this sense, part of the angular momentum is carried outside the horizon, while the part proportional to ℓ^2 is carried inside the horizon. However, the Noether charge of the black ring resides purely inside the horizon.
2.4 Example 2: concentric black rings
The concentric black-ring solution constructed in Ref. [34] is a superposition of the single black rings discussed in the previous subsection. We focus on the case where all the rings lie on a plane in the base R^4. For the superposition of N rings, the full geometry is parametrized by 3N parameters q_i, Q_i and R_i (i = 1, . . . , N); q_i is the dipole charge and R_i is the radius in the base R^4 of the i-th ring. For more details, see Appendix A.
We order the rings so that R_1 < R_2 < · · · < R_N. The conserved charges measured at infinity are known to be

Q = Σ_{i=1}^N (Q_i − q_i^2) + s^2 ,   (2.42)

J_1 = (π/8G) [ 2s^3 + 3s Σ_{j=1}^N (Q_j − q_j^2) ] ,   (2.43)

J_2 = (π/8G) [ 2s^3 + 3s Σ_{j=1}^N (Q_j − q_j^2) + 6 Σ_{j=1}^N q_j R_j^2 ]   (2.44)

where s is an abbreviation for the sum of the magnetic charges, i.e. s = Σ_{i=1}^N q_i. Our aim is to reproduce these results from the near-horizon data.
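Before doing so, one can verify that the concentric-ring charges reduce to the single-ring ones for N = 1. The π/8G normalization and the +s^2 term in (2.42) are reconstructions, so this is again an internal-consistency check, insensitive to the overall prefactor:

```python
import sympy as sp

q1, Q1, R1, G = sp.symbols('q1 Q1 R1 G', positive=True)
s = q1  # for N = 1 the dipole sum s reduces to q1

# Concentric-ring charges (2.42)-(2.44), specialized to N = 1
Q  = (Q1 - q1**2) + s**2
J1 = sp.pi/(8*G) * (2*s**3 + 3*s*(Q1 - q1**2))
J2 = sp.pi/(8*G) * (2*s**3 + 3*s*(Q1 - q1**2) + 6*q1*R1**2)

# They must reproduce the single-ring charges (2.31)-(2.33)
assert sp.simplify(Q - Q1) == 0
assert sp.simplify(J1 - sp.pi/(8*G)*q1*(3*Q1 - q1**2)) == 0
assert sp.simplify(J2 - sp.pi/(8*G)*q1*(6*R1**2 + 3*Q1 - q1**2)) == 0
print("N = 1 concentric formulas reduce to the single-ring charges")
```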
The near-horizon metric of the i-th ring has the form (2.36) with q, Q, R replaced by q_i, Q_i and R_i, respectively. The horizon radius ℓ_i is given by

ℓ_i^2 = 3 [ (Q_i − q_i^2)^2/(4q_i^2) − R_i^2 ] .   (2.45)
Since each ring has a magnetic dipole charge, we introduce coordinate patches S and T_i so that the gauge field is non-singular in each patch. Let T_i be the patch covering the region between the (i−1)-th and the i-th ring, and S be a patch covering the outer region. More precisely, we introduce the ring coordinates (2.29) for each of the rings, and choose S to cover −1 + ǫ < x_i < 1 − ǫ for each ring, while T_i covers 1 − ǫ < x_i < 1 for the i-th ring and −1 < x_{i−1} < −1 + ǫ for the (i−1)-th ring. Then, near the i-th horizon the gauge field on S is given by

A_S = −(√3/4) [ ( (Q_i − q_i^2)/q_i − q_i + 2s ) dψ + ( q_i(1 + x) + 2 Σ_{j=i+1}^N q_j ) dχ ] .   (2.46)

Its ψ component is determined in Appendix A, while the coefficient of dχ is determined so that the field strength is reproduced; the gauge field is then non-singular except at x = ±1 for the 1st to (N − 1)-th rings, and non-singular except at x = −1 for the N-th ring. The gauge field on T_i is given by

A_{T_i} = A_S + (√3/2) Σ_{j=i}^N q_j dφ_1 .   (2.47)
The electric charge is given by using (2.6) and β_i = A_S − A_{T_i} = −(√3/2) Σ_{j=i}^N q_j dφ_1. Each horizon Σ_i contributes ∫_{Σ_i} A_S ∧ F together with the boundary terms ∫_{Σ_i∩∂S} A_S ∧ β_i and ∫_{Σ_{i−1}∩∂S} A_S ∧ β_i; terms proportional to (Q_i − q_i^2) and 2sq_i appear at intermediate stages, and the total is

Q = Σ_{i=1}^N (Q_i − q_i^2) + s^2 .   (2.48)

This correctly reproduces the known result (2.42).
Let us move on to the evaluation of the angular momenta. Note that for certain configurations of charges, the concentric black rings develop singularities on the rotation axes. While the condition for the absence of singularities is not fully known, it was pointed out in Ref. [34] that there is no singularity on the rotation axes if all of the

Λ_i ≡ (Q_i − q_i^2)/q_i   (2.49)

are equal. We will show that we can obtain the correct angular momenta if this condition is satisfied.
The angular momentum associated with ξ_1 = ∂_{φ_1} = ∂_χ is given by

J_1 = −(1/16πG) Σ_i [ ∫_{Σ_i} (ξ_1 · A_S) A_S ∧ F + ( ∫_{Σ_i∩∂T_i} + ∫_{Σ_{i−1}∩∂T_i} ) (ξ_1 · β_i) β_i ∧ A_{T_i} ] .   (2.50)

After summing up the terms, we have

J_1 = (π/8G) [ 2s^3 + 6 Σ_i (Q_i − q_i^2) Σ_{j=i+1}^N q_j + 3 Σ_i q_i (Q_i − q_i^2) ] .   (2.51)

If the condition (2.49) is satisfied, J_1 computed above matches (2.43), and we have

J_1 = (π/8G) ( 2s^3 + 3Λ_i s^2 ) .   (2.52)
Finally, let us consider the angular momentum associated with ξ_ψ = ∂_ψ = ∂_{φ_1} + ∂_{φ_2}. In addition to (2.50) with ξ_1 replaced by ξ_ψ, here we have to consider two more contributions, namely

−(1/16πG) Σ_i ∫_{Σ_i} ⋆∇ξ_ψ − Σ_i ( ∫_{Σ_i∩∂T_i} + ∫_{Σ_{i−1}∩∂T_i} ) (ξ_ψ · A_{T_i}) β_i ∧ A_{T_i} .   (2.53)

It is easy to check that the sum of all the terms is given by

J_ψ = (π/8G) [ −2 Σ_i q_i ℓ_i^2 + 4s^3 + 6s Σ_i (Q_i − q_i^2) + (3/2) Σ_i (Q_i − q_i^2)^2/q_i ] .   (2.54)

When evaluated under the condition (2.49), this gives

J_ψ = (π/8G) [ −2 Σ_i q_i ℓ_i^2 + 4s^3 + 6Λ_i s^2 + (3/2) Λ_i^2 s ]   (2.55)

and agrees with J_ψ given as the sum of (2.43) and (2.44).
2.5 Generalization
It is straightforward to generalize the techniques we developed so far to a supergravity theory with n U(1) vector fields A^I (I = 1, . . . , n). There are (n − 1) vector multiplets, because the gravity multiplet also contains the graviphoton, which is itself a vector field.
The scalars in the vector multiplets are denoted by M^I, which are constrained by the condition

N ≡ c_{IJK} M^I M^J M^K = 1 .   (2.56)

Here c_{IJK} is a set of constants. The action for the boson fields is given by

S = (1/16πG) ∫ [ ⋆R − a_{IJ} dM^I ∧ ⋆dM^J − a_{IJ} F^I ∧ ⋆F^J − c_{IJK} A^I ∧ F^J ∧ F^K ]   (2.57)

where R is the Ricci scalar, and

a_{IJ} = −(1/2)(N_{IJ} − N_I N_J) .   (2.58)
In the last expression, N_I = ∂N/∂M^I and N_{IJ} = ∂²N/∂M^I∂M^J. This is the low-energy action of M-theory compactified on a Calabi-Yau manifold M with n = h^{1,1}(M), where

6c_{IJK} = ∫_M ω_I ∧ ω_J ∧ ω_K   (2.59)

is the triple intersection of the integrally-quantized two-forms ω_I on M. The action for the minimal supergravity (2.1) is obtained by setting n = 1, c_{111} = (2/√3)^3, and a_{11} = 2.
As for the calculation of the electric charges, one only needs to put the indices I, J, K on the vector fields, and the result is

Q_I = (1/16πG) ∫ ( 2 ⋆ a_{IJ} F^J + 3 c_{IJK} A^J ∧ F^K ) .   (2.60)
As for the angular momenta, there are extra terms coming from the energy-momentum tensor of the scalar fields on the right hand side of (2.15). Their contribution to the angular momenta vanishes upon integration, so that the result is

J_ξ = −(1/16πG) ∫ [ ⋆∇ξ + 2 ⋆ a_{IJ}(ξ · A^I) F^J + 2c_{IJK}(ξ · A^I) A^J ∧ F^K ] .   (2.61)
For a more complicated Lagrangian, e.g. with charged hypermultiplets and/or with higher-derivative corrections, it is easier to utilize the general framework set up by Wald than to find the partial integration in (2.15) by inspection. The charge constructed by this technique has an important property [27]: it acts as the Hamiltonian for the corresponding local symmetry in the Hamiltonian formulation of the theory, and it reproduces the Page charge and the angular momenta (2.61). Consequently, the charge as the generator of the symmetry is not the gauge-invariant Maxwell charge, but the Page charge, which depends on a large gauge transformation.
The integrands in the expressions above are not well-defined as differential forms when there are magnetic fluxes, and thus need to be defined appropriately, as we did in the previous sections. Generically, we would like to rewrite the integral of a gauge-invariant form ω over a region B as the integral, over its boundary ∂B, of a form ω_(1) satisfying

dω_(1) = ω .   (2.62)

The problem is that ω_(1) may depend on the gauge. On two patches S and T, it is represented by differential forms ω^S_(1) and ω^T_(1), respectively. Since ω is gauge-invariant, we have dω^S_(1) = dω^T_(1). Thus, if we take a sufficiently small coordinate patch, we can choose ω^{(S,T)} such that

dω^{(S,T)} = ω^S_(1) − ω^T_(1) .   (2.63)

Then one defines the integral of ω_(1) on C = ∂B via

∫_C ω_(1) ≡ ∫_S ω^S_(1) + ∫_T ω^T_(1) + ∫_D ω^{(S,T)} ,   (2.64)

where D = ∂S = −∂T. The equations (2.62), (2.63) are the so-called descent relations, which are important in the understanding of anomalies. It would be interesting to generalize our analysis to the case where there are more than two patches and multiple overlaps among them. Presumably we would need to include higher descendants ω^{(S_1,...,S_n)} as correction terms at the boundary of n patches S_1, . . . , S_n in the definition of the integral (2.64).
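As a toy illustration of this patchwise prescription (not an example from the paper): for a Dirac monopole on S^2, the flux of F, with F = dA computed patch by patch, reduces by Stokes' theorem to an equatorial boundary integral of the difference of the two gauge potentials, playing the role of the D-term in (2.64):

```python
import numpy as np

g = 1.7  # monopole strength (arbitrary test value)

# Patchwise gauge potentials A = A_phi(theta) dphi for a Dirac monopole:
# A_N is smooth everywhere except the south pole, A_S except the north pole.
A_N = lambda th: g*(1.0 - np.cos(th))
A_S = lambda th: -g*(1.0 + np.cos(th))

# With omega = F and omega_(1) = A, the flux through S^2 reduces via Stokes'
# theorem to the equatorial integral of (A_N - A_S) = 2g dphi.
theta_eq = np.pi/2
flux_from_patches = (A_N(theta_eq) - A_S(theta_eq)) * 2*np.pi

# Direct check: F = g sin(theta) dtheta ^ dphi integrated over the sphere.
th = np.linspace(0.0, np.pi, 100001)
f = g*np.sin(th)
flux_direct = np.sum(0.5*(f[1:] + f[:-1])*np.diff(th)) * 2*np.pi

assert abs(flux_from_patches - 4*np.pi*g) < 1e-12
assert abs(flux_direct - flux_from_patches) < 1e-6
print("two-patch flux equals 4*pi*g:", flux_from_patches)
```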
3. Relation to Four-Dimensional Charges
We have seen how the near-horizon data of the black rings encode the charges measured at the asymptotic infinity. We can also consider rings in the Taub-NUT space [19, 20, 21] instead of in the five-dimensional Minkowski space. Then the theory can also be thought of as a theory in four dimensions, via the Kaluza-Klein reduction along the S^1 of the Taub-NUT space. It has been established [35] that supersymmetric solutions of five-dimensional supergravity nicely reduce to supersymmetric solutions of the corresponding four-dimensional theory.
In four dimensions, there are no problems in defining the charges, because the equations of motion and Bianchi identities yield the relations

dF^I = 0 ,   dG_I = d( ⋆(g^{−2})_{IJ} F^J + θ_{IJ} F^J ) = 0   (3.1)

where (g^{−2})_{IJ} are the inverse coupling constants and θ_{IJ} are the theta angles. The electric and magnetic charges can be readily obtained by integrating G_I and F^I over the horizon. Then it is natural to expect that our formulae for the charges will yield the four-dimensional ones after the Kaluza-Klein reduction. One apparent problem is that the Page charges change under a large gauge transformation, whereas the four-dimensional charges are seemingly well-defined as they are. We will see that a large gauge transformation corresponds to the Witten effect on dyons in four dimensions.
3.1 Mapping of the fields
First let us recall the well-known mapping of the fields in four and five dimensions. The details can be found e.g. in [11, 12, 15, 16]. When we reduce a five-dimensional N = 2 supergravity with n vector fields along S^1, it results in a four-dimensional N = 2 supergravity with (n + 1) vector fields. The metrics in the respective dimensions are related by

ds^2_{5d} = e^{2ρ}(dψ − A^0)^2 + e^{−ρ} ds^2_{4d} ,   (3.2)

where we take the periodicity of ψ to be 2π, so that e^ρ is the five-dimensional radius of the Kaluza-Klein circle. The factor in front of the four-dimensional metric is chosen so that the four-dimensional Einstein-Hilbert term is canonical.
The gauge fields in four and five dimensions are related by

A^I_{5d} = a^I (dψ − A^0) + A^I_{4d}   (3.3)

where I = 1, . . . , n. This is chosen so that a gauge transformation of A^0 does not affect A^I_{4d}. We need to introduce coordinate patches when there is a flux for A^I_{5d}. We demand that the gauge transformations used between patches do not depend on ψ, so that the a^I are globally well-defined scalar fields.
Then, by the reduction of the five-dimensional action (2.57), the action of the four-dimensional gauge fields is determined to be²

L = −(e^{3ρ} + e^ρ a_{IJ} a^I a^J) F^0 ∧ ⋆F^0 − c_{IJK} a^I a^J a^K F^0 ∧ F^0
  + 2e^ρ a_{IJ} a^I F^0 ∧ ⋆F^J + 3c_{IJK} a^I a^J F^0 ∧ F^K
  − e^ρ a_{IJ} F^I ∧ ⋆F^J − 3c_{IJK} a^I F^J ∧ F^K .   (3.4)

Partial integrations are necessary to bring the naive Kaluza-Klein reduction to the form above. The resulting Lagrangian follows from the prepotential

F(X) = c_{IJK} X^I X^J X^K / X^0 ,   (3.5)
²We take the following conventions in four dimensions: the orientations in four and five dimensions are related such that ∫ dx^0 ∧ dx^1 ∧ dx^2 ∧ dx^3 ∧ dψ = 2π ∫ dx^0 ∧ dx^1 ∧ dx^2 ∧ dx^3. The Levi-Civita symbol in four dimensions is defined by ǫ_{0123} = +1 and ǫ^{0123} = −1 in local Lorentz coordinates.
if one defines the special coordinates z^I = X^I/X^0 by

z^I = a^I + i e^ρ M^I .   (3.6)

This relation can be checked without the detailed Kaluza-Klein reduction. Indeed, the ratio of a^I and M^I in (3.6) can be fixed by inspecting the mass squared of a hypermultiplet, and the fact that a^I should enter z^I linearly with unit coefficient is fixed by the monodromy.
3.2 Mapping of the charges
In many references, including Refs. [12, 16, 23], the charge of a black object in five dimensions is defined to be the charge in four dimensions after the dimensional reduction determined from the Lagrangian (3.4). This was motivated partly because the analysis of the charge in five dimensions was subtle due to the presence of the Chern-Simons interaction, whereas in Section 2 we studied how to obtain a formula for the charges which has five-dimensional general covariance. Now let us compare the charges thus defined in four and five dimensions.
Firstly, the magnetic charge

(1/2π) ∫_C F^0   (3.7)

in four dimensions counts the number of Kaluza-Klein monopoles inside C. It is also called the nut charge. The other magnetic charges in four dimensions,

(1/2π) ∫_C F^I ,   (3.8)

come directly from the dipole charges in five dimensions, as long as the surface C does not enclose the nut. When C does contain a nut, the Kaluza-Klein circle is non-trivially fibered over C. Thus, the surface C cannot be lifted to five dimensions. We will come back to this problem in Section 3.5.
The formulae for the electric charges follow from the Lagrangian:

Q_I^{4d} = ∫_C [ ⋆ 2e^ρ a_{IJ}(F^J − a^J F^0) + 6c_{IJK} a^J F^K − 3c_{IJK} a^J a^K F^0 ] ,   (3.9)

Q_0^{4d} = ∫_C [ ⋆ e^{3ρ} F^0 − ⋆ 2e^ρ a_{IJ} a^I (F^J − a^J F^0) + 2c_{IJK} a^I a^J a^K F^0 − 3c_{IJK} a^I a^J F^K ] .   (3.10)
It is easy to verify that the five-dimensional Page charges (2.60) and the Noether charge J_ψ (2.61) for the isometry ∂_ψ along the Kaluza-Klein circle are related to the four-dimensional electric charges via

Q_I^{4d} = −(1/2π) Q_I ,   Q_0^{4d} = −(1/2π) J_ψ .   (3.11)

An important point in the calculation is that the compensating term on the boundary of the coordinate patches vanishes, since a^I and F^J_{4d} are globally well-defined.
Thus we see that the four-dimensional charges are not the reduction of the gauge-invariant Maxwell charges ∫ ⋆F, or of the gauge-invariant “Maxwell-like” part of the angular momentum, ∫ ⋆∇ξ. They are rather the reduction of the Page or the Noether charges, which change under a large gauge transformation.
3.3 Reduction and the attractor
In the literature, the attractor equation is often analyzed after the reduction to four dimensions [12, 15, 16], while the five-dimensional attractor mechanism for the black rings in [22] only determines the scalar vacuum expectation values via the magnetic dipoles. As we saw in the previous sections, the electric charges at the asymptotic infinity are encoded in the Wilson lines along the horizon. We now show how these five-dimensional considerations reproduce the known attractor solution [36, 37] in four dimensions.
The five-dimensional metric is characterized by the magnetic charges q^I through the horizon, and by the physical radius of the horizon ℓ = e^ρ there. From the attractor mechanism for the black rings [22], the near-horizon geometry is of the form AdS_3 × S^2, and the curvature radii are q and q/2 in each factor, where q^3 = c_{IJK} q^I q^J q^K. The scalar vevs are fixed to be proportional to the magnetic dipoles, i.e. M^I = q^I/q.
For the calculation of the electric charges, the Wilson lines a^I along the horizon are also important. We can then evaluate the Page charges and angular momenta on the horizon to obtain

Q_I = 6c_{IJK} a^J q^K ,   Q_0 = qℓ^2 − 3c_{IJK} a^I a^J q^K .   (3.12)
We can solve the equations above for ℓ and a^I, so that we have a formula for the four-dimensional special coordinates z^I in terms of the charges. The result is

z^I = a^I + i e^ρ M^I = (1/2) D^{IJ} Q_J + i √(Q̂_0/D) q^I   (3.13)

where

D_{IJ} = 3c_{IJK} q^K ,   D^{IJ} D_{JK} = δ^I_K ,   (3.14)
D = q^3 = c_{IJK} q^I q^J q^K ,   Q̂_0 = qℓ^2 = Q_0 + (1/4) D^{IJ} Q_I Q_J .   (3.15)

This is the well-known solution of the attractor equation in four dimensions with q^0 = 0 [36, 37].
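For a single modulus (n = 1, c_{111} = c) the algebra leading from (3.12) to (3.13) and (3.15) can be verified directly; the factors 1/2 and 1/4 below follow the reconstruction printed above and should be read with that caveat:

```python
import sympy as sp

# One modulus (n = 1), c_111 = c. Near-horizon data: dipole q, radius ell,
# Wilson line a. qbar is the "q" of the AdS3 x S2 throat: qbar^3 = c q^3.
c, q, ell, a = sp.symbols('c q ell a', positive=True)
qbar = (c*q**3)**sp.Rational(1, 3)

# Page charges evaluated on the horizon, (3.12):
Q1 = 6*c*a*q
Q0 = qbar*ell**2 - 3*c*a**2*q

D11 = 3*c*q      # D_{IJ} = 3 c_{IJK} q^K
D   = c*q**3     # D = c_{IJK} q^I q^J q^K

# (3.15): Qhat_0 = Q_0 + D^{IJ} Q_I Q_J / 4 should equal qbar * ell^2
Qhat0 = Q0 + sp.Rational(1, 4)*Q1**2/D11
assert sp.simplify(Qhat0 - qbar*ell**2) == 0

# (3.13): Re z = D^{IJ} Q_J / 2 recovers the Wilson line a,
# and Im z = sqrt(Qhat_0/D) q recovers e^rho M = ell*q/qbar.
assert sp.simplify(sp.Rational(1, 2)*Q1/D11 - a) == 0
assert sp.simplify((sp.sqrt(Qhat0/D)*q)**2 - (ell*q/qbar)**2) == 0
print("(3.13) and (3.15) follow from the horizon data (3.12)")
```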
Thus, the combination of the attractor mechanism in five dimensions and the technique of Page charges yields the attractor mechanism in four dimensions. The point is that the Wilson lines a^I along the horizon of the black string carry the information of its electric charges. Conversely, the Wilson line at the horizon is determined by the electric charge, and the horizon length is likewise determined by the angular momentum. In this sense, the attractor mechanism for the black rings fixes all the relevant near-horizon data by means of the charges, angular momenta and dipoles.
3.4 Gauge dependence and monodromy
Let us now come back to the question of the variation of the Page charges under large gauge transformations. The problem is that the integral ∫_C A ∧ F depends on the shift A → A + β with dβ = 0 if C has a non-contractible loop ℓ with ∮_ℓ β ≠ 0. In the spacetime which asymptotes to R^{4,1}, the large gauge transformation can be fixed by demanding that the gauge potential vanish at the asymptotic infinity.
In the present case of reduction to four dimensions, however, the gauge potential
along the Kaluza-Klein circle is one of the moduli and is not a thing to be fixed. More
precisely, if the ψ direction is non-contractible, a large gauge transformation associated
to the Kaluza-Klein circle corresponds to a shift aI → aI + tI where tI are integers. In
four-dimensional language it is the shift
zI → zI + tI , (3.16)
and the gauge variation of the Page charge translates to the variation of the electric
charge under the transformation (3.16). This is precisely the Witten effect on dyons [24], if one recalls that the dynamical theta angles of the theory depend on z^I. In the terminology of N = 2 supergravity and special geometry, it is called the monodromy transformation associated with the shift (3.16), which acts symplectically on the charges (q^I, Q_I) and on the projective special coordinates (X^I, F_I).
For the M-theory compactification on the product of S^1 and a Calabi-Yau, the electric charges Q_I and the magnetic charges q^I correspond to the numbers of M2-branes and M5-branes wrapping the two-cycles Π_I and the four-cycles Σ^I, respectively. The relation (2.59) translates to 6c_{IJK} = #(Σ_I ∩ Σ_J ∩ Σ_K) in this language. The gauge fields A^I arise from the Kaluza-Klein reduction of the M-theory three-form C on Π_I. Thus, the results above imply that the M2-brane charges transform non-trivially in the presence of M5-branes under large gauge transformations of the C-field.
It might sound novel, but it can be clearly seen from the point of view of Type IIA string theory on the Calabi-Yau. Consider a soliton without D6-brane charge. There, the D2-brane charge Q_I of the soliton is induced by the world-volume gauge field F on the D4-brane wrapped on a four-cycle Σ = q^I Σ_I through the Chern-Simons coupling

∫ (F + B) ∧ C   (3.17)

where B is the NSNS two-form and C is the RR three-form. In this description, a^I is given by the integral of B over the corresponding two-cycle. The induced brane charge in the presence of a non-zero B-field is an intricate problem in itself, but the end result is that the large gauge transformation B → B + ω with ∫ ω = t^I changes the D2-brane charge of the system by 6c_{IJK} q^I t^J. It will be interesting to derive the same effect from the worldvolume Lagrangian [38] of the M5-brane, which is subtle because the worldvolume tensor field is self-dual.
The change in the M2-brane charge induces a change in the Kaluza-Klein momentum carried by the zero-mode on the black strings wrapped on S^1, so that Q_0 also changes [2]. The point is that the momentum carried by the non-zero modes, Q̂_0 defined in (3.15), is a monodromy-invariant quantity.
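For one modulus and vanishing nut charge, the invariance of Q̂_0 can be verified in a few lines; the shift rules used below are the q^0 = 0 specialization of the Witten-effect transformation discussed in Section 3.5:

```python
import sympy as sp

# Witten-effect shift of the charges under z -> z + t (one modulus, q^0 = 0):
#   Q_1 -> Q_1 + 6 c t q ,   Q_0 -> Q_0 - Q_1 t - 3 c t^2 q
c, q, t, Q1, Q0 = sp.symbols('c q t Q1 Q0', real=True)

Q1p = Q1 + 6*c*t*q
Q0p = Q0 - Q1*t - 3*c*t**2*q

# Qhat_0 = Q_0 + D^{IJ} Q_I Q_J / 4 with D_{11} = 3 c q  (n = 1)
Qhat0  = Q0  + Q1**2 /(12*c*q)
Qhat0p = Q0p + Q1p**2/(12*c*q)

assert sp.simplify(Qhat0p - Qhat0) == 0
print("Qhat_0 is invariant under the monodromy shift")
```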
Before leaving this section, it is worth noticing that if an M2-brane has the world-volume V, it enters the equation of motion for G = dC in the following way:

d ⋆ G + (1/2) G ∧ G = δ_V   (3.18)

where δ_V is the delta function supported on V. Thus, the quantized M2-brane charge is not the source of the Maxwell charge; it is rather the source of the Page charge. Essentially the same argument in five dimensions, using the specific decomposition (2.28), was made in Ref. [39].
3.5 Monodromy and Taub-NUT
If we use the Taub-NUT space in the dimensional reduction, in other words if there is a Kaluza-Klein monopole in the system, the Kaluza-Klein circle shrinks at the nut of the monopole. As the circle is now contractible, one might think that one can no longer perform a large gauge transformation and that it is natural to choose a^I = 0 at the nut. Nevertheless, from a four-dimensional standpoint the monodromy transformation should always be possible. How can these two points of view be reconciled?
Firstly, the fact that the five-dimensional spacetime is smooth at the nut only requires that the gauge field strength be zero there and that the integral of the gauge potential around the Kaluza-Klein circle be an integer. There should be a patch around the nut in the five-dimensional spacetime in which A^I is smooth, but it is not the patch connected to the asymptotic region of the Taub-NUT space where a^I is defined.
A similar problem was studied in Ref. [40]. There, it was shown how the winding number can still be conserved in a background with a nut, where the circle on which strings are wound degenerates. A crucial role is played by the normalizable self-dual two-form ω localized at the nut, which gives the worldvolume gauge field A of the D6-brane, realized as the M-theory Kaluza-Klein monopole, via C = A ∧ ω. It should enter the worldvolume Lagrangian in the combination dA + B, and the large gauge transformation affects the contribution from B.
Indeed, in the Kaluza-Klein ansatz for the gauge fields (3.3), one can make the combined shift

a^I → a^I + t^I ,   A^I_{4d} → A^I_{4d} + t^I A^0   (3.19)

without changing the five-dimensional gauge field strengths. Therefore, the magnetic charge also transforms as

q^I → q^I + t^I q^0 .   (3.20)
The action of the transformation (3.16) on the electric charges then becomes

Q_I → Q_I + 6c_{IJK} t^J q^K + 3c_{IJK} t^J t^K q^0 ,   (3.21)

Q_0 → Q_0 − Q_I t^I − 3c_{IJK} t^I t^J q^K − c_{IJK} t^I t^J t^K q^0 ,   (3.22)

which is exactly how the projective coordinates

X^0 ,   X^I ,   F_I = 3c_{IJK} X^J X^K / X^0 ,   F_0 = −c_{IJK} X^I X^J X^K / (X^0)^2   (3.23)

get transformed under the monodromy a^I → a^I + t^I. It was already noted in Ref. [21] that the same symmetry acts on the functions which characterize the supersymmetric solution on the Taub-NUT, (V, K^I, L_I, M) in their notation. The point is that it modifies the five-dimensional Page charges, and hence the four-dimensional charges.
If we neglect quantum corrections coming from instantons wrapping the Kaluza-Klein circle, we are allowed to perform the monodromy transformation z^I → z^I + t^I even with continuous parameters t^I. It maps a solution of the equations of motion to another, and the electric charges in four dimensions depend continuously on the vevs of the moduli a^I at the asymptotic infinity; the issue concerning the stability of the solitons can be safely ignored. In the analyses in Refs. [19, 20, 21], the proposals for the identification of the four-dimensional electric charges Q_I with the five-dimensional ones differed from one another. The source of the discrepancy in the identification is now clear after our long discussion. It can be readily checked that the differing proposals can be connected by the monodromy transformation with t^I = (1/2) q^I. Namely, the charges in the five-dimensional language are transformed as

Q_I → Q_I − 3c_{IJK} q^J q^K ,   J_ψ → J_ψ − J_φ   (3.24)

in the Q_0 ≫ q^3 limit.³ Thus they are equivalent under a large gauge transformation.
The analysis above also answers the question raised in Section 3.2 of how the dipole charges in five dimensions are related to the magnetic charges in four dimensions in the presence of the nut. It is instructive to consider the case of a black ring in the Taub-NUT space. From a five-dimensional viewpoint, the dipole charge is not a conserved
quantity measurable at the asymptotic infinity. Correspondingly, the surface of the
Dirac string necessary to define the gauge potential can be chosen to fill the disc inside
the black ring only, and not to extend to the asymptotic infinity. It was what we did
in Section 2.3.1 in defining the coordinate patches. However, the gauge transformation
required to achieve it necessarily depends on the ψ coordinate, which is the direction
along the Kaluza-Klein circle. Hence it is not allowed if one carries out the reduction to
four dimensions. In this case, the Dirac string emanating from the black ring necessarily
extends all the way to the spatial infinity, thus making the magnetic charge measurable
at the asymptotic infinity. A related point is that dipole charges enter the first law of black objects because of the existence of two patches [31].⁴ It is easier to understand this after the reduction, because the dipole is then a conserved quantity measurable at the asymptotic infinity.
As a final example illustrating the subtlety in the identification of the four- and five-dimensional charges, let us consider a two-centered Taub-NUT space with centers x_1 and x_2. There is an S^2 between the two centers, and one can introduce self-dual magnetic fluxes q^I through it. Although the Chern-Simons interactions put some constraint on the allowed q^I, there is a supersymmetric solution of this form [44]. In this configuration, the Wilson lines a^I at x_1 and x_2 necessarily differ by an amount proportional to the flux, and one cannot simultaneously make them both zero. An important consequence is that the magnetic charges of F^I_{4d} at the nuts x_1 and x_2 necessarily differ, in spite of the fact that the geometry and the gauge fields in five dimensions are completely symmetric under the exchange of x_1 and x_2.
³We noticed that a small discrepancy proportional to c_{IJK} q^I q^J q^K remains, which is related to the zero-point energy of the conformal field theory of the black string. Its effect on the entropy is subleading in the large Q_0 limit.
⁴The authors of [31] used the approach to the first law developed in [41]. There is another understanding of the appearance of the dipole charges in the first law [42] if one follows the approach in [43].
4. Summary
In this paper, we have first clarified how the near-horizon data of black objects encode the conserved charges measured at asymptotic infinity. Namely, the existence of the Chern-Simons coupling means that F ∧ F is a source of electric charge, so it was necessary to perform a partial integration to rewrite the asymptotic electric charge as the integral of A ∧ F over the horizon. Since F has magnetic flux through the horizon, A ∧ F cannot be naively defined, and we showed how to treat it consistently. Likewise, we obtained the formula for the angular momenta using the near-horizon data.
Then, we saw how our formula for the charges in five dimensions is related to the
four-dimensional formula under Kaluza-Klein reduction. We studied how the ambiguity
coming from large gauge transformations in five dimensions corresponds to the Witten
effect and the associated monodromy transformation in four dimensions.
It is now straightforward to obtain the correction to the entropy of the black rings,
since we now have the supersymmetric higher-derivative action [13], the near-horizon
geometry [45, 46, 47], and also the formulation developed in this paper to obtain con-
served charges from the near-horizon data alone. It would be interesting to see if it
matches with the microscopic calculation.
Acknowledgments
YT would like to thank Juan Maldacena, Masaki Shigemori and Johannes Walcher for
discussions. KH is supported by the Center-of-Excellence (COE) Program “Nanometer-
Scale Quantum Physics” conducted by Graduate Course of Solid State Physics and
Graduate Course of Fundamental Physics at Tokyo Institute of Technology. The work
of KO is supported by Japan Society for the Promotion of Science (JSPS) under the
Post-doctoral Research Program. YT is supported by the United States DOE Grant
DE-FG02-90ER40542.
A. Geometry of Concentric Black Rings
Any supersymmetric solution in the asymptotically flat R^{1,4} is known to be of the form

ds^2 = −f^2 (dt + ω)^2 + f^{−1} ds^2(R^4)   (A.1)

where f is a function and ω a one-form on R^4. We parametrize the base R^4 in the Gibbons-Hawking coordinate system

ds^2(R^4) = H [dr^2 + r^2(dθ^2 + sin^2 θ dχ^2)] + H^{−1}(2dψ + cos θ dχ)^2   (A.2)
where (r, θ, χ) parametrize a flat R^3, the periodicity of ψ is 2π, and H = 1/r. Our notation mostly follows the one in Ref. [34], with the change ψ_there = 2ψ_here. The quantities f, ω and the gauge field F = dA are determined by three functions K, L and M on the flat R^3. The relations we need are

f^{−1} = H^{−1}K^2 + L ,   ι_{∂ψ} ω = 2H^{−2}K^3 + 3H^{−1}KL + 2M ,   (A.3)

F = (√3/2) d[f(dt + ω)] − (1/√3) G^+ ,   ι_{∂ψ} G^+ = −3d(H^{−1}K)   (A.4)

where G^+ = f(dω + ⋆dω)/2 is a self-dual two-form on R^4.
To construct the concentric black ring solutions, we take N points x_i (i = 1, . . . , N) at r = R_i^2/4, θ = π on R^3. The orbit of x_i along the coordinate ψ is a ring of radius R_i embedded in R^4. We choose the functions K, L and M as

K = −(1/2) Σ_i q_i h_i ,   L = 1 + (1/4) Σ_i (Q_i − q_i^2) h_i ,   M = (3/4) Σ_i q_i (1 − |x_i| h_i)   (A.5)

where h_i(x) = 1/|x − x_i| are harmonic functions on R^3. For the case of a single ring, conversion to the ring coordinates used in (2.29) can be achieved via

φ_1 = ψ + χ/2 ,   φ_2 = ψ − χ/2   (A.6)

and

R √(y^2 − 1)/(x − y) = 2√r sin(θ/2) ,   R √(1 − x^2)/(x − y) = 2√r cos(θ/2) .   (A.7)
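The content of (A.6) and (A.7) is that (2.29) is just flat R^4 written with the radii ρ_1, ρ_2 of the two orthogonal planes; reading off ρ_1 = R√(1−x²)/(x−y) and ρ_2 = R√(y²−1)/(x−y) from (A.7), this can be checked with sympy:

```python
import sympy as sp

# Radii of the two orthogonal planes of R^4 = R^2 x R^2 in ring coordinates:
x, y, R = sp.symbols('x y R', real=True)
rho1 = R*sp.sqrt(1 - x**2)/(x - y)   # phi_1 plane
rho2 = R*sp.sqrt(y**2 - 1)/(x - y)   # phi_2 plane

# Pull back the flat metric d rho1^2 + rho1^2 dphi1^2 + d rho2^2 + rho2^2 dphi2^2
gxx = sp.diff(rho1, x)**2 + sp.diff(rho2, x)**2
gxy = sp.diff(rho1, x)*sp.diff(rho1, y) + sp.diff(rho2, x)*sp.diff(rho2, y)
gyy = sp.diff(rho1, y)**2 + sp.diff(rho2, y)**2

# Compare with the ring metric (2.29)
pref = R**2/(x - y)**2
assert sp.simplify(gxx - pref/(1 - x**2)) == 0
assert sp.simplify(gxy) == 0
assert sp.simplify(gyy - pref/(y**2 - 1)) == 0
assert sp.simplify(rho1**2 - pref*(1 - x**2)) == 0
assert sp.simplify(rho2**2 - pref*(y**2 - 1)) == 0
print("(2.29) is flat R^4 in ring coordinates")
```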
The behavior of ω and F at the asymptotic infinity, and the near-horizon metric (2.36), are well known and are not repeated here; the reader is referred to the original article, Ref. [34]. The gauge potential near the horizon can be obtained by the combination of (A.3) and (A.4). First we have

ι_{∂ψ} F = (√3/2)(−d ι_{∂ψ})[f(dt + ω)] + √3 d(KH^{−1}) ,   (A.8)

which can be integrated by inspection. Hence the ψ component of the gauge field is given by

ι_{∂ψ} A = √3 (H^{−1}KL/2 + M)/(H^{−1}K^2 + L) + c   (A.9)

for some constant c. By demanding ι_{∂ψ} A → 0 as r → ∞, we obtain

c = −(√3/2) Σ_i q_i .   (A.10)
Thus, we have

ι_{∂ψ} A = −(√3/4) [ (Q_i − q_i^2)/q_i + 2s ]   (A.11)

near the i-th horizon. The χ component of the gauge field is fixed by the magnetic dipole through the horizon.
|
0704.1820 | Odd Triplet Pairing in clean Superconductor/Ferromagnet heterostructures | Odd Triplet Pairing in clean Superconductor/Ferromagnet heterostructures
Klaus Halterman,1, ∗ Paul H. Barsic,2, † and Oriol T. Valls2, ‡
Physics and Computational Sciences, Research and Engineering Sciences Department,
Naval Air Warfare Center, China Lake, California 93555
School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455
(Dated: October 27, 2018)
We study triplet pairing correlations in clean Ferromagnet (F)/Superconductor (S) nanojunctions,
via fully self consistent solution of the Bogoliubov-de Gennes equations. We consider FSF trilayers,
with S being an s-wave superconductor, and an arbitrary angle α between the magnetizations of
the two F layers. We find that contrary to some previous expectations, triplet correlations, odd in
time, are induced in both the S and F layers in the clean limit. We investigate their behavior as a
function of time, position, and α. The triplet amplitudes are largest at times on the order of the
inverse “Debye” frequency, and at that time scale they are long ranged in both S and F. The zero
temperature condensation energy is found to be lowest when the magnetizations are antiparallel.
PACS numbers: 74.45.+c, 74.25.Bt, 74.78.Fk
The proximity effects in superconductor/ferromagnet
(SF) heterostructures lead to the coexistence of ferromag-
netic and superconducting ordering and to novel trans-
port phenomena[1, 2]. Interesting effects that arise from
the interplay between these orderings have potential tech-
nological applications in fields such as spintronics[3]. For
example, the relative orientation of the magnetizations in
the F layers in FSF trilayers can have a strong influence
on the conductivity[4, 5, 6, 7, 8], making them good spin
valve candidates. Such trilayers were first proposed[9] for
insulating F layers and later for metallic[10, 11] ones.
This interplay also results in fundamental new physics.
An outstanding example is the existence of “odd” triplet
superconductivity. This is an s-wave pairing triplet state
that is even in momentum, and therefore not destroyed
by nonmagnetic impurities, but with the triplet corre-
lations being odd in frequency, so that the equal time
triplet amplitudes vanish as required by the Pauli prin-
ciple. This exotic pairing state with total spin one was
proposed long ago [12] as a possible state in superfluid
3He. Although this type of pairing does not occur there,
it is possible in certain FSF systems[1, 2, 13, 14] with or-
dinary singlet pairing in S. This arrangement can induce,
via proximity effects, triplet correlations with m = 0 and
m = ±1 projections of the total spin. If the magnetiza-
tion orientations in both F layers are unidirectional and
along the quantization axis, symmetry arguments show
that only the m = 0 projection along that axis can exist.
Odd triplet pairing in F/S structures has been studied
in the dirty limit through linearized Usadel-type quasi-
classical equations [2, 13, 14, 15]. In this case, it was
found that m = 0 triplet pairs always exist. They are
suppressed in F over short length scales, just as the sin-
glet pairs. The m = ±1 components, for which the ex-
change field is not pair-breaking, can be long ranged, and
were found to exist for nonhomogeneous magnetization.
For FSF trilayers[2, 16, 17], the quasiclassical methods
predict that the structure contains a superposition of all
FIG. 1: Schematic of FSF junction. The left ferromagnetic
layer F1 has a magnetization oriented at an angle −α/2 in
the x− z plane, while the other ferromagnet, F2, has a mag-
netization orientation at an angle α/2 in the x− z plane.
three spin triplet projections except when the magneti-
zations of the F layers are collinear, in which case the
m = ±1 components along the magnetization axis van-
ish. It is noted in Ref. [1] that the existence of such effects
in the clean limit has not been established and may be
doubted. This we remedy in the present work, where
we establish that, contrary to the doubts voiced there,
induced, long-ranged, odd triplet pairing does occur in
clean FSF structures.
Experimental results that may argue for the existence
of long range triplet pairing of superconductors through
a ferromagnet have been obtained in superlattices[18]
with ferromagnetic spacers, and in two superconduc-
tors coupling through a single ferromagnet[19, 20].
Measurements[19] on a SQUID, in which a phase change
of π in the order parameter is found after inversion, in-
dicate an odd-parity state. Very recently, a Josephson
current through a strong ferromagnet was observed, in-
dicating the existence of a spin triplet state[20] induced
by NbTiN, an s-wave superconductor.
In this paper, we study the induced odd triplet
superconductivity in FSF trilayers in the clean limit
through a fully self-consistent solution of the microscopic
Bogoliubov-de Gennes (BdG) equations. We consider ar-
bitrary relative orientation of the magnetic moments in
the two F layers. We find that there are indeed induced
odd triplet correlations which can include both m = 0
and m = ±1 projections. We directly study their time
dependence and we find that they are largest for times of
order of the inverse cutoff “Debye” frequency. The corre-
lations are, at these time scales, long ranged in both the
S and F regions. We also find that the condensation en-
ergy depends on the relative orientation of the F layers,
being a minimum when they are antiparallel.
To find the triplet correlations arising from the non-
trivial spin structure in our FSF system, we use the BdG
equations with the BCS Hamiltonian, Heff :
Heff = ∫ d³r { Σ_δ ψ†_δ(r) (−∇²/2m − EF) ψ_δ(r) + (1/2)[ Σ_{δβ} (iσy)_{δβ} ∆(r) ψ†_δ(r) ψ†_β(r) + h.c. ] − Σ_{δβ} ψ†_δ(r)(h · σ)_{δβ} ψ_β(r) },
where ∆(r) is the pair potential, to be determined self-
consistently, ψ†_δ, ψ_δ are the creation and annihilation op-
erators with spin δ, EF is the Fermi energy, and σ are
the Pauli matrices. We describe the magnetism of the F
layers by an effective exchange field h(r) that vanishes in
the S layer. We will consider the geometry depicted in
Fig. 1, with the y axis normal to the layers and h(r) in
the x − z plane (which is infinite in extent) forming an
angle ±α/2 with the z axis in each F layer.
Next, we expand the field operators in terms of a Bo-
goliubov transformation which we write as:
ψ_δ(r) = Σ_n [ u_{nδ}(r) γ_n + η_δ v_{nδ}(r) γ†_n ], (1)
where ηδ ≡ 1(−1) for spin down (up), unδ and vnδ are the
quasiparticle and quasihole amplitudes. This transforma-
tion diagonalizes Heff: [Heff, γ_n] = −ǫ_n γ_n, [Heff, γ†_n] = ǫ_n γ†_n. By
taking the commutator [ψ_δ(r),Heff ], and with
h(r) in the x − z plane as explained above, we have the
following:
[ψ↑(r),Heff ] = (He − hz)ψ↑(r) − hx ψ↓(r) + ∆(r)ψ†↓(r),
[ψ↓(r),Heff ] = (He + hz)ψ↓(r) − hx ψ↑(r) − ∆(r)ψ†↑(r). (2)
Inserting (1) into (2) and introducing a set ρ of Pauli-like
matrices in particle-hole space, yields the spin-dependent
BdG equations:
[ ρz ⊗ (H0 1̂ − hz σz) + ∆(y) ρx ⊗ σx − hx 1̂ ⊗ σx ] Φn = ǫnΦn, (3)
where Φn ≡ (un↑(y), un↓(y), vn↑(y), vn↓(y))^T and H0 ≡
−∂²_y/(2m) + ε⊥ − EF. Here ε⊥ is the transverse kinetic
energy and a factor of e^{ik⊥·r} has been suppressed. In
deriving Eq. (3) care has been taken to consistently use
the phase conventions in Eq. (1). To find the quasiparti-
cle amplitudes along a different quantization axis in the
x−z plane, one performs a spin rotation: Φn → Û(α′)Φn,
where Û(α′) = cos(α′/2)1̂⊗ 1̂− i sin(α′/2)ρz ⊗ σy.
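The rotation matrix Û(α′) can be checked numerically. The sketch below (plain NumPy; conventions as read off from the text, so the sign of the rotation is illustrative) verifies that Û is unitary, composes additively in α′, and conjugates the z-exchange structure ρz ⊗ σz of Eq. (3) into the x one, 1̂ ⊗ σx:

```python
import numpy as np

# Pauli matrices; rho acts in particle-hole space, sigma in spin space
sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.diag([1., -1.])
I2 = np.eye(2)

def U(a):
    """U(a) = cos(a/2) 1x1 - i sin(a/2) rho_z x sigma_y, as quoted below Eq. (3)."""
    return np.cos(a / 2) * np.eye(4) - 1j * np.sin(a / 2) * np.kron(sz, sy)

X = np.kron(sz, sz)   # rho_z x sigma_z : the h_z exchange structure
Y = np.kron(I2, sx)   # 1     x sigma_x : the h_x exchange structure

a, b = 0.7, 0.4
assert np.allclose(U(a) @ U(a).conj().T, np.eye(4))   # unitary
assert np.allclose(U(a) @ U(b), U(a + b))             # composes like a rotation
# conjugation rotates an h_z-type term into an h_x-type term
assert np.allclose(U(a) @ X @ U(a).conj().T, np.cos(a) * X + np.sin(a) * Y)
```

The last assertion is the statement that tilting the quantization axis by α trades hz-like exchange terms for hx-like ones, which is exactly how the general geometry of Fig. 1 is handled.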
When the magnetizations of the F layers are collinear,
one can take hx = 0. For the general case shown in
Fig. 1 one has in the F1 layer, hx = h0 sin(−α/2) and
hz = h0 cos(−α/2), where h0 is the magnitude of h,
while in F2, hx = h0 sin(α/2), and hz = h0 cos(α/2).
With an appropriate choice of basis, Eqs. (3) are cast
into a matrix eigenvalue system that is solved itera-
tively with the self-consistency condition ∆(y) = g(y)f3, where
f3 = (1/2)[〈ψ↑(r)ψ↓(r)〉 − 〈ψ↓(r)ψ↑(r)〉]. In the F layers
we have g(y) = 0, while in S, g(y) = g, g being the usual
BCS singlet coupling constant there. Through Eqs. (1),
the self-consistency condition becomes a sum over states
restricted by the factor g to within ωD from the Fermi
surface. Iteration is performed until self-consistency is
reached. The numerical process is the same that was used
in previous work[24, 25], with now the hx term requiring
larger four-component matrices to be diagonalized.
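The structure of such a self-consistency loop can be illustrated on a toy single-mode BCS problem (one 2×2 BdG block at T = 0; the paper's actual calculation is spatially resolved and four-component, so everything below is a schematic stand-in). For a single state of energy ξ the pair amplitude is uv = ∆/2E with E = √(ξ² + ∆²), and the update ∆ ← g·uv is iterated to a fixed point:

```python
import math

g, xi = 1.0, 0.3      # toy coupling and single-particle energy (arbitrary units)
delta = 0.1           # initial guess for the pair potential
for _ in range(200):
    E = math.sqrt(xi**2 + delta**2)
    delta_new = g * delta / (2 * E)      # Delta = g * (u v) at T = 0
    if abs(delta_new - delta) < 1e-12:   # stop when self-consistent
        delta = delta_new
        break
    delta = delta_new
# nontrivial fixed point: Delta* = sqrt((g/2)**2 - xi**2) = 0.4 for these numbers
```

The iteration converges because the map ∆ → g∆/2E is a contraction near the nontrivial fixed point whenever ξ < g/2; the real calculation replaces the single mode by the full eigenspectrum of Eq. (3) on a grid.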
We now define the following time dependent triplet am-
plitude functions in terms of the field operators,
f̃0(r, t) = (1/2)[〈ψ↑(r, t)ψ↓(r, 0)〉 + 〈ψ↓(r, t)ψ↑(r, 0)〉] , (4a)
f̃1(r, t) = (1/2)[〈ψ↑(r, t)ψ↑(r, 0)〉 − 〈ψ↓(r, t)ψ↓(r, 0)〉] , (4b)
which, as required by the Pauli principle for these s-wave
amplitudes, vanish at t = 0, as we shall verify. Making
use of Eq. (1) and the commutators, one can derive and
formally integrate the Heisenberg equation of the motion
for the operators and obtain:
f̃0(y, t) = (1/2) Σ_n [un↑(y)vn↓(y) − un↓(y)vn↑(y)] ζn(t), (5a)
f̃1(y, t) = −(1/2) Σ_n [un↑(y)vn↑(y) + un↓(y)vn↓(y)] ζn(t), (5b)
FIG. 2: (Color online) The real part, f0, of the triplet ampli-
tude f̃0, for a FSF trilayer at 7 different times. We normalize
f0 by the singlet bulk pair amplitude, ∆0/g. The coordinate
y is scaled by the Fermi wavevector, Y ≡ kF y, and time by
the Debye frequency, τ ≡ ωDt. At τ = 0, f0 ≡ 0 as required
by the Pauli principle. The interface is marked by the verti-
cal dashed line, with an F region to the left and the S to the
right. Half of the S region and part of the left F layer are
shown. The inset shows the maximum value of f0 versus τ .
where ζn(t) ≡ cos(ǫnt)− i sin(ǫnt) tanh(ǫn/2T ).
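A toy discretization makes the Pauli-principle statement below Eq. (5) concrete: summing Eqs. (5) over the full spectrum of a particle-hole-symmetric BdG matrix forces f̃0 = f̃1 = 0 at t = 0 by completeness, while both are generically nonzero at finite t. The sketch below builds a small 1D FS chain with the block structure read off from Eq. (3); the profiles and parameters are arbitrary stand-ins, not the paper's:

```python
import numpy as np

sx, sz, I2 = np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.]), np.eye(2)
N = 30
y = np.arange(N)
F = (y < N // 2).astype(float)                 # F layer on the left
S = 1.0 - F                                    # S layer on the right
H0 = -np.eye(N, k=1) - np.eye(N, k=-1) + 0.3 * np.eye(N)   # toy 1D kinetic term
hz, hx, Delta = 0.5 * np.diag(F), 0.2 * np.diag(F), 0.1 * np.diag(S)

# H = rho_z x (H0 - hz sz) + Delta rho_x x sx - hx 1 x sx,
# acting on Phi_n = (u_up, u_dn, v_up, v_dn)^T at each grid point
H = (np.kron(sz, np.kron(I2, H0)) - np.kron(sz, np.kron(sz, hz))
     + np.kron(sx, np.kron(sx, Delta)) - np.kron(I2, np.kron(sx, hx)))
eps, W = np.linalg.eigh(H)                     # full spectrum, all 4N states
uu, ud, vu, vd = W[:N], W[N:2*N], W[2*N:3*N], W[3*N:]

def f0_f1(t, T=0.01):
    zeta = np.cos(eps * t) - 1j * np.sin(eps * t) * np.tanh(eps / (2 * T))
    f0 = 0.5 * (uu * vd - ud * vu) @ zeta      # Eq. (5a)
    f1 = -0.5 * (uu * vu + ud * vd) @ zeta     # Eq. (5b)
    return f0, f1

f0_0, f1_0 = f0_f1(0.0)    # both vanish identically (completeness of the basis)
f0_t, _ = f0_f1(4.0)       # induced m = 0 triplet component at finite time
```

Truncating the sum in Eqs. (5), for example by using a non-self-consistent spectrum, spoils the exact cancellation at t = 0, which is the violation of the exclusion principle mentioned in the text.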
The amplitudes in Eqs. (5) contain all information on
the space and time dependence of induced triplet correla-
tions throughout the FSF structure. The summations in
Eqs. (5) are over the entire self-consistent spectrum, en-
suring that f0 and f1 vanish identically at t = 0 and thus
obey the exclusion principle. Using a non self consistent
∆(y) leads to violations of this condition, particularly
near the interface where proximity effects are most pro-
nounced. Geometrically, the indirect coupling between
magnets is stronger with fairly thin S layers and rela-
tively thick F layers. We thus have chosen dS = (3/2)ξ0
and dF1 = dF2 = ξ0, with the BCS correlation length
ξ0 = 100 k_F^{-1}. We consider the low T limit and take
ωD = 0.04EF . The magnetic exchange is parametrized
via I ≡ h0/EF . Results shown are for I = 0.5 (unless
otherwise noted) and the magnetization orientation an-
gle, α, is swept over the range 0 ≤ α ≤ π. No triplet
amplitudes arise in the absence of magnetism (I = 0).
For the time scales considered here, the imaginary
parts of f̃0(y, t) and f̃1(y, t) at t ≠ 0 are considerably
smaller than their real parts, and thus we focus on the
latter, which we denote by f0(y, t) and f1(y, t). In Fig. 2,
the spatial dependence of f0 is shown for parallel mag-
netization directions (α = 0) at several times τ ≡ ωDt.
The spatial range shown includes part of the F1 layer
(to the left of the dashed line) and half of the S layer
(to the right). At finite τ , the maximum occurs in the
ferromagnet close to the interface, after which f0 under-
goes damped oscillations with the usual spatial length
FIG. 3: (Color online) Spatial and angular dependence of f1,
at τ = 4 ≈ τc and several α. Normalizations and ranges are
as in Fig. 2. Inset: maxima of f0 and f1 in F1 versus α.
scale ξf ≈ (kF↑ − kF↓)^{-1} ≈ k_F^{-1}/I. The height of the
main peak first increases with time, but drops off after
a characteristic time, τc ≈ 4, as seen in the inset, which
depicts the maximum value of f0 as a function of τ . As
τ increases beyond τc, the modulating f0 in F develops
more complicated atomic scale interference patterns and
becomes considerably longer ranged. In S, we see imme-
diately that f0 is also larger near the interface. Since
the triplet amplitudes vanish at τ = 0, short time scales
exhibit correspondingly short triplet penetration. The
figure shows, however, that the value of f0 in S is sub-
stantial for τ ≳ τc, extending over length scales on the
order of ξ0 without appreciable decay. In contrast, the
usual singlet correlations were found to monotonically
drop off from their τ = 0 value over τ scales of order
unity.
In the main plot of Fig. 3 we examine the spatial de-
pendence of the real part of the m = ±1 triplet ampli-
tude, f1. Normalizations and spatial ranges are as in
Fig. 2 but now the time is fixed at τ = 4 ≈ τc, and
five equally spaced magnetization orientations are con-
sidered. At α = 0, f1 vanishes identically at all τ , as
expected. For nonzero α, correlations in all triplet chan-
nels are present. As was found for f0, the plot clearly
shows that f1 is largest near the interface, in the F re-
gion. Our geometry and conventions imply (see Fig. 1)
that the magnetization has opposite x-components in the
F1 and F2 regions. The f1 triplet pair amplitude profile
is thus antisymmetric about the origin, in contrast to the
symmetric f0, implying the existence of one node in the
superconductor. Nevertheless, the penetration of the f1
correlations in S can be long ranged. We find that f1 and
f0 oscillate in phase and with the same wavelength, re-
gardless of α. The inset illustrates the maximum attained
values of f0 and f1 in F1 as α varies. It shows that for
FIG. 4: (Color online) The T = 0 condensation energy, ∆E0,
normalized by N(0)∆0² (N(0) is the usual density of states),
vs. the angle α for two values of I . When the two magne-
tizations are antiparallel (α = π) ∆E0 is lowest. The inset
shows the ordinary (singlet) pair potential averaged over the
S region, normalized to the bulk ∆0.
a broad range of α, α ≲ 3π/4, the maximum of f0 varies
relatively little, after which it drops off rapidly to zero at
α = π. This is to be expected as the anti-parallel orienta-
tion corresponds to the case in which the magnetization
is in the x direction, which is perpendicular to the axis
of quantization (see Fig. 1). The rise in the maximum
of f1 is monotonic, cresting at α = π, consistent with
the main plot. At this angle the triplet correlations ex-
tend considerably into the superconductor. At α = π/2
the maxima coincide since the two triplet components
are then identical throughout the whole space because
the magnetization vectors have equal projections on the
x and z axes. At α = π both magnetizations are normal
to the axis of quantization z (see Fig. 1). By making use
of the rotation matrix Û (see below Eq. 3) one can verify
that the m = ±1 components with respect to the axis x
along the magnetizations are zero.
We next consider the condensation energy, ∆E0, cal-
culated by subtracting the zero temperature supercon-
ducting and normal state free energies. The calculation
uses the self consistent spectra and ∆(y), and methods
explained elsewhere [25, 26]. In the main plot of Fig. 4,
we show ∆E0 (normalized at twice its bulk S value) at
two different values of I. The condensation energy results
clearly demonstrate that the antiparallel state (α = π) is
in general the lowest energy ground state. These results
are consistent with previous studies[8] of FSF structures
with parallel and antiparallel magnetizations. The inset
contains the magnitude of the spatially averaged pair po-
tential, normalized by ∆0, at the same values of I. The
inset correlates with the main plot, as it shows that the
singlet superconducting correlations in S increase with
α and are larger at I = 1 than at I = 0.5. The half-
metallic case of I = 1 illustrates that by having a single
spin band populated at the Fermi surface, Andreev reflec-
tion is suppressed, in effect keeping the superconductivity
more contained within S.
Thus, we have shown that in clean FSF trilayers in-
duced odd triplet correlations, with m = 0 and m = ±1
projections of the total spin, exist. We have used a mi-
croscopic self-consistent method to study the time and
angular dependence of these triplet correlations. The
correlations in all 3 triplet channels were found, at times
τ ≡ ωDt ≳ τc, where τc ≈ 4, to be long ranged in both
the F and S regions. Finally, study of the condensation
energy revealed that the ground state energy is always
lowest for antiparallel magnetizations.
This project was supported in part by a grant of HPC
resources from the ARSC at the University of Alaska
Fairbanks (part of the DoD HPCM program) and by the
University of Minnesota Graduate School.
∗ Electronic address: [email protected]
† Electronic address: [email protected]
‡ Electronic address: [email protected]; Also at Minnesota
Supercomputer Institute, University of Minnesota, Min-
neapolis, Minnesota 55455
[1] A.I. Buzdin, Rev. Mod. Phys. 77, 935 (2005).
[2] F.S. Bergeret, A.F Volkov, and K.B. Efetov, Rev. Mod.
Phys. 77, 1321 (2005).
[3] Igor Žutić, Jaroslav Fabian, S. Das Sarma, Rev. Mod.
Phys. 76, 323 (2004).
[4] J. Y. Gu et al., Phys. Rev. Lett. 89, 267001 (2002).
[5] I. C. Moraru, W. P. Pratt, N. O. Birge, Phys. Rev. Lett.
96, 037004 (2006).
[6] C. Bell, S. Turşucu, and J. Aarts, Phys. Rev. B 74,
214520 (2006).
[7] C. Visani et al., Phys. Rev. B 75, 054501 (2007).
[8] K. Halterman and O.T. Valls, Phys. Rev. B 72,
060514(R), (2005).
[9] P. G. de Gennes, Phys. Lett. 23, 10 (1966).
[10] L.R. Tagirov, Phys. Rev. Lett. 83, 2058 (1999).
[11] A.I. Buzdin, A.V. Vdyayev, and N.V. Ryzhanova, Euro-
phys. Lett. 48, 686 (1999).
[12] V.L. Berezinskii, JETP Lett. 20, 287 (1974).
[13] F.S. Bergeret, A.F Volkov, and K.B. Efetov, Phys. Rev.
Lett. 86, 3140 (2001).
[14] F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys.
Rev. B 68, 064513 (2003).
[15] T. Champel and M. Eschrig, Phys. Rev. B 72, 054523
(2005).
[16] T. Löfwander et al., Phys. Rev. Lett. 95, 187003 (2005).
[17] Ya. V. Fominov, A. A. Golubov, and M. Yu. Kupriyanov,
JETP Lett. 77, 510 (2003).
[18] V. Peña, et al., Phys. Rev. B 69, 224502 (2004).
[19] K. D. Nelson et al., Science 306, 1151 (2004).
[20] R. S. Keizer et al., Nature 439, 825, (2006).
[21] P.G. de Gennes, Superconductivity of Metals and Alloys
(Addison-Wesley, Reading, MA, 1989).
[22] A. F. Volkov, F. S. Bergeret, and K. B. Efetov, Phys.
Rev. Lett. 90, 117006 (2003).
[23] J. B. Ketterson and S. N. Song, Superconductivity
(Cambridge University Press, 1999), p. 286.
[24] K. Halterman and O.T. Valls, Phys. Rev. B 69, 014517
(2004).
[25] K. Halterman and O.T. Valls, Phys. Rev. B 70, 104516
(2004).
[26] I. Kosztin et al., Phys. Rev. B 58, 9365 (1998).
|
0704.1821 | The S-parameter in Holographic Technicolor Models | The S-parameter in
Holographic Technicolor Models
Kaustubh Agashea, Csaba Csákib, Christophe Grojeanc,d, and Matthew Reeceb
a Department of Physics, Syracuse University, Syracuse, NY 13244, USA
b Institute for High Energy Phenomenology
Newman Laboratory of Elementary Particle Physics,
Cornell University, Ithaca, NY 14853, USA
d CERN, Theory Division, CH 1211, Geneva 23, Switzerland
c Service de Physique Théorique, CEA Saclay, F91191, Gif-sur-Yvette, France
[email protected], [email protected],
[email protected], [email protected]
Abstract
We study the S parameter, considering especially its sign, in models of electroweak
symmetry breaking (EWSB) in extra dimensions, with fermions localized near the
UV brane. Such models are conjectured to be dual to 4D strong dynamics triggering
EWSB. The motivation for such a study is that a negative value of S can significantly
ameliorate the constraints from electroweak precision data on these models, allowing
lower mass scales (TeV or below) for the new particles and leading to easier discovery at
the LHC. We first extend an earlier proof of S > 0 for EWSB by boundary conditions in
arbitrary metric to the case of general kinetic functions for the gauge fields or arbitrary
kinetic mixing. We then consider EWSB in the bulk by a Higgs VEV showing that
S is positive for arbitrary metric and Higgs profile, assuming that the effects from
higher-dimensional operators in the 5D theory are sub-leading and can therefore be
neglected. For the specific case of AdS5 with a power law Higgs profile, we also show
that S ∼ +O(1), including effects of possible kinetic mixing from higher-dimensional
operator (of NDA size) in the 5D theory. Therefore, our work strongly suggests that
S is positive in calculable models in extra dimensions.
1 Introduction
One of the outstanding problems in particle physics is to understand the mechanism of
electroweak symmetry breaking. Broadly speaking, models of natural electroweak symmetry
breaking rely either on supersymmetry or on new strong dynamics at some scale near the
electroweak scale. However, it has long been appreciated that if the new strong dynamics is
QCD-like, it is in conflict with precision tests of electroweak observables [1]. Of particular
concern is the S parameter. It does not violate custodial symmetry; rather, it is directly
sensitive to the breaking of SU(2). As such, it is difficult to construct models that have S
consistent with data, without fine-tuning.
The search for a technicolor model consistent with data, then, must turn to non-QCD-
like dynamics. An example is “walking” [2], that is, approximately conformal dynamics,
which can arise in theories with extra flavors. It has been argued that such nearly-conformal
dynamics can give rise to a suppressed or even negative contribution to the S parameter [3].
However, lacking nonperturbative calculational tools, it is difficult to estimate S in a given
technicolor theory.
In recent years, a different avenue of studying dynamical EWSB models has opened up
via the realization that extra dimensional models [4] may provide a weakly coupled dual
description to technicolor type theories [5]. The most studied of these higgsless models [6] is
based on an AdS5 background in which the Higgs is localized on the TeV brane and has a very
large VEV, effectively decoupling from the physics. Unitarization is accomplished by gauge
KK modes, but this leads to a tension: these KK modes cannot be too heavy or perturbative
unitarity is lost, but if they are too light then there are difficulties with electroweak precision:
in particular, S is large and positive [7]. In this argument the fermions are assumed to be
elementary in the 4D picture (dual to them being localized on the Planck brane). A possible
way out is to assume that the direct contribution of the EWSB dynamics to the S-parameter
are compensated by contributions to the fermion-gauge boson vertices [8, 9]. In particular,
there exists a scenario where the fermions are partially composite in which S ≈ 0 [10],
corresponding to almost flat wave functions for the fermions along the extra dimension. The
price of this cancellation is a percent level tuning in the Lagrangian parameter determining
the shape of the fermion wave functions. Aside from the tuning itself, this is also undesirable
because it gives the model-builder very little freedom in addressing flavor problems: the
fermion profiles are almost completely fixed by consistency with electroweak precision.
While Higgsless models are the closest extra-dimensional models to traditional technicolor
models, models with a light Higgs in the spectrum do not require light gauge KK modes for
unitarization and can be thought of as composite Higgs models. Particularly appealing are
those where the Higgs is a pseudo-Nambu-Goldstone boson [11, 12]. In these models, the
electroweak constraints are less strong, simply because most of the new particles are heavy.
They still have a positive S, but it can be small enough to be consistent with data. Unlike
the Higgsless models where one is forced to delocalize the fermions, in these models with
a higher scale the fermions can be peaked near the UV brane so that flavor issues can be
addressed.
Recently, an interesting alternative direction to eliminating the S-parameter constraint
has been proposed in [13]. There it was argued, that by considering holographic models
of EWSB in more general backgrounds with non-trivial profiles of a bulk Higgs field one
could achieve S < 0. The aim of this paper is to investigate the feasibility of this proposal.
We will focus on the direct contribution of the strong dynamics to S. In particular, we
imagine that the SM fermions can be almost completely elementary in the 4D dual picture,
corresponding to them being localized near the UV brane. In this case, a negative S would
offer appealing new prospects for model-building since such values of S are less constrained
by data than a positive value [14]. Unfortunately we find that S > 0 quite generally,
and that backgrounds giving negative S appear to be pathological.
The outline of the paper is as follows. We first present a general plausibility argument
based purely on 4D considerations that one is unlikely to find models where S < 0. This
argument is independent from the rest of the paper, and the readers interested in the holo-
graphic considerations may skip directly to section 3. Here we first review the formalism
to calculate the S parameter in quite general models of EWSB using an extra dimension.
We also extend the proof of S > 0 for BC breaking [7] in arbitrary metric to the case of
arbitrary kinetic functions or localized kinetic mixing terms. These proofs quite clearly show
that no form of boundary condition breaking will result in S < 0. However, one may hope
that (as argued in [13]) one can significantly modify this result by using a bulk Higgs with a
profile peaked towards the IR brane to break the electroweak symmetry. Thus, in the crucial
section 4, we show that S > 0 for models with bulk breaking from a scalar VEV as well.
Since the gauge boson mass is the lowest dimensional operator sensitive to EWSB one would
expect that this is already sufficient to cover all interesting possibilities. However, since
the Higgs VEV can be very strongly peaked, one may wonder if other (higher dimensional)
operators could become important as well. In particular, the kinetic mixing operator of L,R
after Higgs VEV insertion would be a direct contribution to S. To study the effect of this
operator in section 5, it is shown that the bulk mass term for axial field can be converted to
kinetic functions as well, making a unified treatment of the effects of bulk mass terms and
the effects of the kinetic mixing from the higher-dimensional operator possible. Although we
do not have a general proof that S > 0 including the effects of the bulk kinetic mixing for
a general metric and Higgs profile, in section 5.2 we present a detailed scan for AdS metric
and for power-law Higgs vev profile using the technique of the previous section for arbitrary
kinetic mixings. We find S > 0 once we require that the higher-dimensional operator is of
NDA size, and that the theory is ghost-free. We summarize and conclude in section 6.
2 A plausibility argument for S > 0
In this section we define S and sketch a brief argument for its positivity in a general techni-
color model. The reader mainly interested in the extra-dimensional constructions can skip
this section since it is independent from the rest of the paper. However, we think it is worth-
while to try to understand why one might expect S > 0 on simple physical grounds. The
only assumptions we will make are that we have some strongly coupled theory that sponta-
neously breaks SU(2)L×SU(2)R down to SU(2)V , and that at high energies the symmetry is
restored. With these assumptions, S > 0 is plausible. S < 0 would require more complicated
dynamics, and might well be impossible, though we cannot prove it.1
Consider a strongly-interacting theory with SU(2) vector current $V^a_\mu$ and SU(2) axial
vector current $A^a_\mu$. We define (where $J$ represents $V$ or $A$):
$$ i \int d^4x\, e^{-iq\cdot x}\, \langle J^a_\mu(x)\, J^b_\nu(0) \rangle = \delta^{ab} \left( q_\mu q_\nu - g_{\mu\nu} q^2 \right) \Pi_J(q^2). \qquad (2.1) $$
We further define the left-right correlator, denoted simply $\Pi(q^2)$, as $\Pi_V(q^2) - \Pi_A(q^2)$. In the
usual way, $\Pi_V$ and $\Pi_A$ are related to positive spectral functions $\rho_V(s)$ and $\rho_A(s)$. Namely,
the Π functions are analytic functions of q2 everywhere in the complex plane except for
Minkowskian momenta, where poles and branch points can appear corresponding to physical
particles and multi-particle thresholds. The discontinuity across the singularities on the
$q^2 > 0$ axis is given by a spectral function. In particular, there is a dispersion relation
$$ \Pi_V(q^2) = \frac{1}{\pi} \int_0^\infty ds\, \frac{\rho_V(s)}{s - q^2 + i\epsilon}, \qquad (2.2) $$
with $\rho_V(s) > 0$, and similarly for $\Pi_A$.
Chiral symmetry breaking establishes that $\rho_A(s)$ contains a term $\pi f_\pi^2\, \delta(s)$. This is the
massless particle pole corresponding to the Goldstone of the spontaneously broken SU(2)
axial flavor symmetry. (The corresponding pions, of course, are eaten once we couple the
theory to the Standard Model, becoming the longitudinal components of the $W^\pm$ and $Z$
bosons. However, for now we consider the technicolor sector decoupled from the Standard
Model.) We define a subtracted correlator by $\bar\Pi(q^2) = \Pi(q^2) - f_\pi^2/q^2$ and a subtracted spectral
function by $\bar\rho_A(s) = \rho_A(s) - \pi f_\pi^2\, \delta(s)$. Now, the $S$ parameter is given by
$$ S = 4\pi\, \bar\Pi(0) = 4 \int_0^\infty \frac{ds}{s} \left( \rho_V(s) - \bar\rho_A(s) \right). \qquad (2.3) $$
Interestingly, there are multiple well-established nonperturbative facts about ΠV − ΠA, but
none are sufficient to prove that $S > 0$. There are the famous Weinberg sum rules [17]
$$ \frac{1}{\pi} \int_0^\infty ds\, \left( \rho_V(s) - \bar\rho_A(s) \right) = f_\pi^2, \qquad (2.4) $$
$$ \frac{1}{\pi} \int_0^\infty ds\, s \left( \rho_V(s) - \bar\rho_A(s) \right) = 0. \qquad (2.5) $$
Further, Witten proved that $\Sigma(Q^2) = -Q^2 \left( \Pi_V(Q^2) - \Pi_A(Q^2) \right) > 0$ for all Euclidean momenta
$Q^2 = -q^2 > 0$ [18]. However, the positivity of $S$ seems to be more difficult to prove.
Our plausibility argument is based on the function $\Sigma(Q^2)$. In terms of this function,
$S = -4\pi\, \Sigma'(0)$. (Note that in $\Sigma(Q^2)$ the $1/Q^2$ pole from $\Pi_A$ is multiplied by $Q^2$, yielding
a constant that does not contribute when we take the derivative. Thus when considering
$\Sigma$ we do not need to subtract the pion pole as we did in $\bar\Pi$.)
1For a related discussion of the calculation of S in strongly coupled theories, see [15].
We also know that $\Sigma(0) = f_\pi^2 > 0$. On the other hand, we know something else that is very general about theories that
spontaneously break chiral symmetry: at very large Euclidean Q2, we should see symmetry
restoration. More specifically, we expect behavior like
$$ \Sigma(Q^2) \to O\!\left( \frac{\Lambda^{2k}}{Q^{2k-2}} \right), \qquad (2.6) $$
where $\Lambda$ is the scale of the strong dynamics and $k$ is associated with the dimension of some operator that serves as an order parameter
for the symmetry breaking. (In some 5D models the decrease of ΠA − ΠV will actually be
faster, e.g. in Higgsless models one has exponential decrease.) While we are most familiar
with this from the OPE of QCD, it should be very general. If a theory did not have this
property and ΠV and ΠA differed significantly in the UV, we would not view it as a sponta-
neously broken symmetry, but as an explicitly broken one. Now, in this context, positivity
of S is just the statement that, because Σ(Q2) begins at a positive value and eventually
becomes very small, the smoothest behavior one can imagine is that it simply decreases
monotonically, and in particular, that $\Sigma'(0) < 0$ so that $S > 0$.² The alternative would be
that the chiral symmetry breaking effects push Σ(Q2) in different directions over different
ranges of Q2. We have not proved that this is impossible in arbitrary theories, but it seems
plausible that the simpler case is true, namely that chiral symmetry restoration always acts
to decrease Σ(Q2) as we move to larger Q2. Indeed, we will show below that in a wide variety
of perturbative holographic theories S is positive.
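This monotonicity argument is easy to play with numerically. The following sketch is our own toy construction with illustrative QCD-like numbers, not a computation taken from any specific technicolor model: it saturates the spectral functions with one narrow vector and one narrow axial resonance, fixes the residues by the Weinberg sum rules (2.4)-(2.5), and checks that $\Sigma(Q^2)$ indeed decreases monotonically from $f_\pi^2$ to zero, giving $S > 0$:

```python
import numpy as np

# One vector and one axial resonance (GeV units, illustrative values):
# rho_V(s) = pi F_V^2 delta(s - M_V^2),  rhobar_A(s) = pi F_A^2 delta(s - M_A^2)
fpi = 0.093
MV2, MA2 = 0.77**2, 1.2**2
FV2 = fpi**2 * MA2 / (MA2 - MV2)   # first Weinberg sum rule (2.4)
FA2 = fpi**2 * MV2 / (MA2 - MV2)   # second Weinberg sum rule (2.5)

# S from the spectral integral (2.3): S = 4 Int ds/s (rho_V - rhobar_A)
S = 4.0 * np.pi * (FV2 / MV2 - FA2 / MA2)

# Sigma(Q^2) = f_pi^2 - Q^2 Pibar(-Q^2) for Euclidean momenta
Q2 = np.linspace(0.0, 50.0, 2001)
Sigma = fpi**2 - Q2 * (FV2 / (MV2 + Q2) - FA2 / (MA2 + Q2))
```

With these inputs $S \approx 0.26$, and $\Sigma(Q^2)$ is strictly decreasing on the whole grid. Its $1/Q^4$ fall-off is a direct consequence of the two sum rules; dropping the first one would leave $\Sigma(\infty) \neq 0$, i.e. no symmetry restoration in the UV.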
3 Boundary-effective-action approach to oblique corrections. Simple cases with boundary breaking
In this section we review the existing results and calculational methods for the electroweak
precision observables (and in particular the S-parameter) in holographic models of elec-
troweak symmetry breaking. There are two equivalent formalisms for calculating these
parameters. One is using the on-shell wave function of the W/Z bosons [19], and the
electroweak observables are calculated from integrals over the extra dimension involving
these wave functions. The advantage of this method is that since it uses the physical wave
functions it is easier to find connections to the Z and the KK mass scales. The alternative
formalism proposed by Barbieri, Pomarol and Rattazzi [7] (and later extended in [20] to in-
clude observables off the Z-pole) uses the method of the boundary effective action [21], and
involves off-shell wave functions of the boundary fields extended into the bulk. This latter
method leads more directly to a general expression of the electroweak parameters, so we will
be applying this method throughout this paper. Below we will review the basic expressions
from [7].
A theory of electroweak symmetry breaking with custodial symmetry has an SU(2)L×
SU(2)R global symmetry, of which the SU(2)L×U(1)Y subgroup is gauged (since the S-
parameter is unaffected by the extra B − L factor we will ignore it in our discussion). At
2For a related discussion of the behaviour of $\Sigma(Q^2)$ in the case of large-$N_c$ QCD, see [16].
low energies, the global symmetry is broken to SU(2)D. In the holographic picture of [7] the
elementary SU(2)×U(1) gauge fields are extended into the bulk of the extra dimension. The
bulk wave functions are determined by solving the bulk EOM’s as a function of the boundary
fields, and the effective action is just the bulk action in terms of the boundary fields.
In order to first keep the discussion as general as possible, we use an arbitrary background
metric over an extra dimension parametrized by 0 < y < 1, where y = 0 corresponds to the
UV boundary, and y = 1 to the IR boundary. In order to simplify the bulk equations of
motion it is convenient to use coordinates in which the metric takes the form¹ [7]
$$ ds^2 = e^{2\sigma} dx^2 + e^{4\sigma} dy^2. \qquad (3.1) $$
The bulk action for the gauge fields is given by
$$ S = -\frac{1}{4 g_5^2} \int d^5x \sqrt{-g} \left[ (F^L_{MN})^2 + (F^R_{MN})^2 \right]. \qquad (3.2) $$
The bulk equations of motion are given by
$$ \partial_y^2 A^{L,R}_\mu - p^2 e^{2\sigma} A^{L,R}_\mu = 0, \qquad (3.3) $$
or equivalently the same equations for the combinations $V_\mu, A_\mu = (A^L_\mu \pm A^R_\mu)/\sqrt{2}$.
We assume that the (light) SM fermions are effectively localized on the Planck brane
and that they carry their usual quantum numbers under the SU(2)L × U(1)Y symmetry that remains
unbroken on the UV brane. The values of the gauge fields on the UV brane therefore have
standard couplings to the fermions, and they are the 4D interpolating fields we want to compute
an effective action for. This dictates the boundary conditions we want to impose on the UV
brane:
$$ A^{L\,a}_\mu(p^2, 0) = \bar A^{L\,a}_\mu(p^2), \quad A^{R\,3}_\mu(p^2, 0) = \bar A^{R\,3}_\mu(p^2), \quad A^{R\,1,2}_\mu(p^2, 0) = 0. \qquad (3.4) $$
The fields $A^{R\,1,2}_\mu$ are vanishing because they correspond to ungauged symmetry generators. The solutions
of the bulk equations of motion satisfying these UV BC's take the form
$$ V_\mu(p^2, y) = v(y, p^2)\, \bar V_\mu(p^2), \quad A_\mu(p^2, y) = a(y, p^2)\, \bar A_\mu(p^2), \qquad (3.5) $$
where the interpolating functions v and a satisfy the bulk equations
$$ \partial_y^2 f(y, p^2) - p^2 e^{2\sigma} f(y, p^2) = 0 \qquad (3.6) $$
and the UV BC's
$$ v(0, p^2) = 1, \quad a(0, p^2) = 1. \qquad (3.7) $$
The effective action for the boundary fields reduces to a pure boundary term, since upon
integrating by parts the bulk action vanishes by the EOM's:
$$ S_{\rm eff} = \frac{1}{2 g_5^2} \int d^4x \left( V_\mu \partial_y V^\mu + A_\mu \partial_y A^\mu \right)\Big|_{y=0} = \frac{1}{2 g_5^2} \int d^4p \left( \bar V_\mu^2\, \partial_y v + \bar A_\mu^2\, \partial_y a \right)\Big|_{y=0}. \qquad (3.8) $$
1In this paper, we use a (−+ . . .+) signature. 5D bulk indices are denoted by capital Latin indices while
we use Greek letters for 4D spacetime indices. 5D indices will be raised and lowered using the 5D metric
while the 4D Minkowski metric is used for 4D indices.
We thus obtain the non-trivial vacuum polarizations for the boundary vector fields:
$$ \Sigma_V(p^2) = -\frac{1}{g_5^2}\, \partial_y v(0, p^2), \quad \Sigma_A(p^2) = -\frac{1}{g_5^2}\, \partial_y a(0, p^2). \qquad (3.9) $$
The various oblique electroweak parameters are then obtained from the momentum expansion
of the vacuum polarizations in the effective action,
$$ \Sigma(p^2) = \Sigma(0) + p^2\, \Sigma'(0) + \frac{p^4}{2}\, \Sigma''(0) + \dots \qquad (3.10) $$
For example, the $S$-parameter is given by
$$ S = 16\pi\, \Sigma'_{3B}(0) = 8\pi \left( \Sigma'_V(0) - \Sigma'_A(0) \right). \qquad (3.11) $$
A similar momentum expansion can be performed on the interpolating functions $v$ and $a$:
$v(y, p^2) = v^{(0)}(y) + p^2 v^{(1)}(y) + \dots$, and similarly for $a$. The $S$-parameter is then simply
expressed as
$$ S = -\frac{8\pi}{g_5^2} \left( \partial_y v^{(1)} - \partial_y a^{(1)} \right)\Big|_{y=0}. \qquad (3.12) $$
The first general theorem was proved in [7]: for the case of boundary condition breaking in
a general metric, $S \geq 0$. The proof uses the explicit calculation of the functions $v^{(n)}, a^{(n)}$,
$n = 0, 1$. First, the bulk equations (3.3) become
$$ \partial_y^2 v^{(0)} = \partial_y^2 a^{(0)} = 0, \quad \partial_y^2 v^{(1)} = e^{2\sigma} v^{(0)}, \quad \partial_y^2 a^{(1)} = e^{2\sigma} a^{(0)}. \qquad (3.13) $$
The $p^2$-expanded UV BC's are
$$ v^{(0)} = a^{(0)} = 1, \quad v^{(1)} = a^{(1)} = 0 \quad \text{at } y = 0. \qquad (3.14) $$
Finally, we need to specify the BC's on the IR brane that correspond to the breaking
SU(2)L × SU(2)R → SU(2)D:
$$ \partial_y V_\mu = 0, \quad A_\mu = 0, \qquad (3.15) $$
which translates into simple BC's for the interpolating functions at $y = 1$:
$$ \partial_y v^{(n)} = a^{(n)} = 0, \quad n = 0, 1. \qquad (3.16) $$
The solutions of these equations are $v^{(0)} = 1$, $a^{(0)} = 1 - y$,
$$ v^{(1)}(y) = \int_0^y dy' \int_0^{y'} dy''\, e^{2\sigma(y'')} - y \int_0^1 dy'\, e^{2\sigma(y')}, $$
$$ a^{(1)}(y) = \int_0^y dy' \int_0^{y'} dy''\, e^{2\sigma(y'')} (1 - y'') - y \int_0^1 dy' \int_0^{y'} dy''\, e^{2\sigma(y'')} (1 - y''). $$
Consequently
$$ S = \frac{8\pi}{g_5^2} \left( \int_0^1 dy\, e^{2\sigma(y)} - \int_0^1 dy'\, (1 - y')^2 e^{2\sigma(y')} \right) = \frac{8\pi}{g_5^2} \int_0^1 dy \left[ 1 - (1 - y)^2 \right] e^{2\sigma(y)}, \qquad (3.17) $$
which is manifestly positive.
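The closed form (3.17) can be cross-checked against the momentum-expansion formula (3.12) by direct quadrature. The sketch below is our own numerical check, not part of the original derivation, with the illustrative choices $\sigma(y) = y$ and $g_5^2 = 1$:

```python
import numpy as np

g5sq = 1.0
y = np.linspace(0.0, 1.0, 4001)
w = np.exp(2.0 * y)                  # e^{2 sigma(y)} for the toy choice sigma(y) = y

def cumint(f):
    """Cumulative trapezoid integral of f from 0 to y."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(y))))

# v^(1):  v'' = e^{2s}, v(0)=0, v'(1)=0  =>  v'(y) = -Int_y^1 e^{2s}
I = cumint(w)
v1p = I - I[-1]
# a^(1):  a'' = e^{2s} a^(0) with a^(0) = 1-y, a(0)=0, a(1)=0
J = cumint(w * (1.0 - y))            # Int_0^y e^{2s(t)} (1-t) dt
a1p = -cumint(J)[-1] + J             # a'(0) fixed by the condition a^(1)(1) = 0

S_num = -8.0 * np.pi / g5sq * (v1p[0] - a1p[0])                        # eq. (3.12)
S_closed = 8.0 * np.pi / g5sq * cumint((1.0 - (1.0 - y)**2) * w)[-1]   # eq. (3.17)
```

Both evaluations agree at the level of the discretization error, and replacing $\sigma(y)$ by any other profile keeps the result positive, as the closed form makes manifest.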
3.1 S > 0 for BC breaking with boundary kinetic mixing
The first simple generalization of the BC breaking model is to consider the same model but
with an additional localized kinetic mixing operator added on the TeV brane (the effect of
this operator has been studied in flat space in [7] and in AdS space in [19]). The localized
Lagrangian is
$$ \mathcal{L}_{\rm IR} = -\frac{\tau}{4 g_5^2}\, V_{\mu\nu}^2 \Big|_{y=1}. \qquad (3.18) $$
This contains only the kinetic term for the vector field since the axial gauge field is set to
zero by the BC breaking. In this case the BC at $y = 1$ for the vector field is modified
to $\partial_y V_\mu + \tau p^2 V_\mu = 0$. In terms of the wave functions expanded in small momenta we get
$\partial_y v^{(1)} + \tau v^{(0)} = 0$ at $y = 1$. The only change in the solutions will be that now
$\partial_y v^{(1)}(y) = -\tau - \int_y^1 e^{2\sigma(y')} dy'$, resulting in
$$ S = \frac{8\pi}{g_5^2} \left( \int_0^1 e^{2\sigma(y)} dy - \int_0^1 (1 - y')^2 e^{2\sigma(y')} dy' + \tau \right). \qquad (3.19) $$
Thus as long as the localized kinetic term has the proper sign, the shift in the S-parameter
will be positive. If the sign is negative, there will be an instability in the theory since fields
localized very close to the TeV brane will feel a wrong sign kinetic term. Thus we conclude
that for the physically relevant case S remains positive.
3.2 S > 0 for BC breaking with arbitrary kinetic functions
The next simple extension of the BPR result is to consider the case when there is an arbitrary
y-dependent function in front of the bulk gauge kinetic terms. These could be interpreted as
effects of gluon condensates modifying the kinetic terms in the IR. In this case the action is
$$ S = -\frac{1}{4} \int d^5x \sqrt{-g} \left[ \phi_L^2(y)\, (F^L_{MN})^2 + \phi_R^2(y)\, (F^R_{MN})^2 \right]. \qquad (3.20) $$
φL,R(y) are arbitrary profiles for the gauge kinetic terms, which are assumed to be the
consequence of some bulk scalar field coupling to the gauge fields. Note that this case
also covers the situation when the gauge couplings are constant but $g_{5L} \neq g_{5R}$. The only
assumption we are making is that the gauge kinetic functions for L,R are strictly positive.
Otherwise one could create a wave packet localized around the region where the kinetic term
is negative which would have ghost-like behavior.
Due to the y-dependent kinetic terms it is not very useful to go into the V,A basis.
Instead we will directly solve the bulk equations in the original basis. The bulk equations of
motion for $L, R$ are given by
$$ \partial_y \left( \phi_{L,R}^2\, \partial_y A^{L,R}_\mu \right) - p^2 e^{2\sigma} \phi_{L,R}^2 A^{L,R}_\mu = 0. \qquad (3.21) $$
To find the boundary effective action needed to evaluate the S-parameter we perform the
following decomposition:
$$ A^L_\mu(p^2, y) = \bar L_\mu(p^2)\, L_L(y, p^2) + \bar R_\mu(p^2)\, L_R(y, p^2), $$
$$ A^R_\mu(p^2, y) = \bar L_\mu(p^2)\, R_L(y, p^2) + \bar R_\mu(p^2)\, R_R(y, p^2). \qquad (3.22) $$
Here L̄, R̄ are the boundary fields, and the fact that we have four wave functions expresses
the fact that these fields will be mixing due to the BC’s on the IR brane. The UV BC’s (3.4)
and the IR BC’s (3.15) can be written in terms of the interpolating functions as
$$ \text{(UV)} \quad L_L(0, p^2) = 1, \quad L_R(0, p^2) = 0, \quad R_L(0, p^2) = 0, \quad R_R(0, p^2) = 1. \qquad (3.23) $$
$$ \text{(IR)} \quad L_L(1, p^2) = R_L(1, p^2), \quad L_R(1, p^2) = R_R(1, p^2), $$
$$ \partial_y \left( L_L(1, p^2) + R_L(1, p^2) \right) = 0, \quad \partial_y \left( L_R(1, p^2) + R_R(1, p^2) \right) = 0. \qquad (3.24) $$
The solution of these equations with the proper boundary conditions and for small values of
p2 is rather lengthy, so we have placed the details in Appendix A. The end result is that
$$ S = -8\pi \left( \phi_L^2\, \partial_y L_R^{(1)} + \phi_R^2\, \partial_y R_L^{(1)} \right)\Big|_{y=0} = -8\pi \left( a_{LR} + a_{RL} \right), \qquad (3.25) $$
where the constants $a_{LR}, a_{RL}$ are negative, as their explicit expressions show. Therefore $S$ is
positive.
4 S > 0 in models with bulk Higgs
Having shown that S > 0 for arbitrary metric and EWSB through BC's, in this section,
we switch to considering breaking of electroweak symmetry by a bulk scalar (Higgs) vev.
We begin by neglecting the effects of kinetic mixing between SU(2)L and SU(2)R fields
coming from higher-dimensional operator in the 5D theory, expecting that their effect, being
suppressed by the 5D cut-off, is sub-leading. We will return to a consideration of such kinetic
mixing effects in the following sections.
We will again use the metric (3.1) and the bulk action (3.2). Instead of BC breaking we
assume that EWSB is caused by a bulk Higgs, which results in a $y$-dependent mass term for the
axial field:
$$ \mathcal{L}_M = -\frac{1}{2 g_5^2}\, M^2(y)\, e^{4\sigma}\, A_\mu A^\mu. \qquad (4.1) $$
Here M2 is a positive function of y corresponding to the background Higgs VEV. The bulk
equations of motion are:
$$ (\partial_y^2 - p^2 e^{2\sigma})\, V_\mu = 0, \quad (\partial_y^2 - p^2 e^{2\sigma} - M^2 e^{4\sigma})\, A_\mu = 0. \qquad (4.2) $$
On the IR brane, we want to impose regular Neumann BC’s that preserve the full SU(2)L×
SU(2)R gauge symmetry
(IR) ∂yVµ = 0, ∂yAµ = 0. (4.3)
As in the previous section, the BC’s on the UV brane just define the 4D interpolating fields
$$ \text{(UV)} \quad V_\mu(p^2, 0) = \bar V_\mu(p^2), \quad A_\mu(p^2, 0) = \bar A_\mu(p^2). \qquad (4.4) $$
The solutions of the bulk equations of motion satisfying these BC's take the form
$$ V_\mu(p^2, y) = v(y, p^2)\, \bar V_\mu(p^2), \quad A_\mu(p^2, y) = a(y, p^2)\, \bar A_\mu(p^2), \qquad (4.5) $$
where the interpolating functions v and a satisfy the bulk equations
$$ \partial_y^2 v - p^2 e^{2\sigma} v = 0, \quad \partial_y^2 a - p^2 e^{2\sigma} a - M^2 e^{4\sigma} a = 0. \qquad (4.6) $$
As before, these interpolating functions are expanded in powers of the momentum: v(y, p2) =
v(0)(y) + p2v(1)(y) + . . ., and similarly for a. The S-parameter is again given by the same
expression
$$ S = -\frac{8\pi}{g_5^2} \left( \partial_y v^{(1)} - \partial_y a^{(1)} \right)\Big|_{y=0}. \qquad (4.7) $$
We will not be able to find general solutions for $a^{(1)}$ and $v^{(1)}$, but we are going to prove that
$\partial_y a^{(1)} > \partial_y v^{(1)}$ on the UV brane, which is exactly what is needed to conclude that $S > 0$.
First, at zeroth order in $p^2$, the solution for $v^{(0)}$ is simply constant, $v^{(0)} = 1$, as before,
and $a^{(0)}$ is the solution of
$$ \partial_y^2 a^{(0)} = M^2 e^{4\sigma} a^{(0)}, \quad a^{(0)}|_{y=0} = 1, \quad \partial_y a^{(0)}|_{y=1} = 0. \qquad (4.8) $$
In particular, since $a^{(0)}$ is positive at $y = 0$, it remains positive everywhere: if $a^{(0)}$ crossed
through zero it would have to be decreasing there, but then equation (4.8) shows that its derivative
would keep decreasing and could never return to zero to satisfy the boundary condition at $y = 1$.
Now, since a(0) is positive, the equation of motion shows that it is always concave up, and
then the condition that its derivative is zero at y = 1 shows that it is a decreasing function
of y. In particular, we have for all y
$$ a^{(0)}(y) \leq v^{(0)}(y), \qquad (4.9) $$
with equality only at y = 0.
Next consider the order $p^2$ terms. What we wish to show is that $\partial_y a^{(1)} > \partial_y v^{(1)}$ at the
UV brane. First, let us examine the behavior of $v^{(1)}$: the boundary conditions are $v^{(1)}|_{y=0} = 0$
and $\partial_y v^{(1)}|_{y=1} = 0$. The equation of motion is
$$ \partial_y^2 v^{(1)} = e^{2\sigma} v^{(0)} = e^{2\sigma} > 0, \qquad (4.10) $$
so the derivative of v(1) must increase to reach zero at y = 1. Thus it is negative everywhere
except y = 1, and v(1) is a monotonically decreasing function of y. Since v(1)|y=0 = 0, v(1) is
strictly negative on (0, 1].
For the moment suppose that $a^{(1)}$ is also strictly negative; we will provide an argument
for this shortly. The equation of motion for $a^{(1)}$ is
$$ \partial_y^2 a^{(1)} = e^{2\sigma} a^{(0)} + M^2 e^{4\sigma} a^{(1)}. \qquad (4.11) $$
Now, we know that $a^{(0)} \leq v^{(0)}$, so under our assumption that $a^{(1)} < 0$, this means that
$$ \partial_y^2 a^{(1)} \leq \partial_y^2 v^{(1)}, \qquad (4.12) $$
with equality only at $y = 0$. But we also know that $\partial_y v^{(1)} = \partial_y a^{(1)}$ at $y = 1$, since they both
satisfy Neumann boundary conditions there. Since $\partial_y a^{(1)}$ grows strictly more slowly
over $(0, 1]$, it must start out at a higher value in order to reach the same boundary condition.
Thus we have that
$$ \partial_y a^{(1)}\big|_{y=0} > \partial_y v^{(1)}\big|_{y=0}. \qquad (4.13) $$
The assumption that we made is that $a^{(1)}$ is strictly negative over the interval $(0, 1]$. The
reason is the following: suppose that $a^{(1)}$ became positive at some value of $y$. Then as it
passes through zero it is increasing. But we also have $\partial_y^2 a^{(1)} = e^{2\sigma} a^{(0)} + M^2 e^{4\sigma} a^{(1)}$,
and we have argued above that $a^{(0)} > 0$. Thus once $a^{(1)}$ is positive, $\partial_y a^{(1)}$ remains positive,
because $\partial_y^2 a^{(1)}$ cannot become negative. In particular, it becomes impossible to reach the
boundary condition $\partial_y a^{(1)} = 0$ at $y = 1$. This fills the missing step in our argument and
shows that the $S$ parameter must be positive.
In the rest of this section we show that the above proof for the positivity of S remains
essentially unchanged in the case when the bulk gauge couplings for the SU(2)L and SU(2)R
gauge groups are not equal. In this case (in order to get diagonal bulk equations of motion)
one needs to also introduce the canonically normalized gauge fields. We start with the generic
action (metric factors are understood when contracting indices)
$$ S = \int d^5x \left[ -\frac{1}{4 g_{5L}^2} (F^L_{MN})^2 - \frac{1}{4 g_{5R}^2} (F^R_{MN})^2 - \frac{h^2(z)}{4} (L_M - R_M)^2 \right]. \qquad (4.14) $$
To get to a canonically normalized diagonal basis we redefine the fields as
$$ \tilde A = \frac{L - R}{\sqrt{g_{5L}^2 + g_{5R}^2}}, \qquad \tilde V = \frac{1}{\sqrt{g_{5L}^2 + g_{5R}^2}} \left( \frac{g_{5R}}{g_{5L}}\, L + \frac{g_{5L}}{g_{5R}}\, R \right). \qquad (4.15) $$
To get the boundary effective action, we write the fields $\tilde V, \tilde A$ as
$$ \tilde A(p^2, z) = \frac{\bar L(p^2) - \bar R(p^2)}{\sqrt{g_{5L}^2 + g_{5R}^2}}\, \tilde a(p^2, z), \qquad (4.16) $$
$$ \tilde V(p^2, z) = \frac{1}{\sqrt{g_{5L}^2 + g_{5R}^2}} \left( \frac{g_{5R}}{g_{5L}}\, \bar L(p^2) + \frac{g_{5L}}{g_{5R}}\, \bar R(p^2) \right) \tilde v(p^2, z). \qquad (4.17) $$
Here $\bar L, \bar R$ are the boundary effective fields (with non-canonical normalization exactly as
in [7]), while the profiles $\tilde a, \tilde v$ satisfy the same bulk equations and boundary conditions as
$a, v$ in (4.2)–(4.4) with the appropriate replacement $M^2 = (g_{5L}^2 + g_{5R}^2)\, h^2/2$. In terms of the
canonically normalized fields, the boundary effective action takes its usual form
$$ S_{\rm eff} = \frac{1}{2} \int d^4p \left( \tilde V \partial_y \tilde V + \tilde A \partial_y \tilde A \right)\Big|_{y=0}. \qquad (4.18) $$
From this we deduce the vacuum polarization
$$ \Sigma_{L_3 B}(p^2) = -\frac{1}{g_{5L}^2 + g_{5R}^2} \left( \partial_y \tilde v(0, p^2) - \partial_y \tilde a(0, p^2) \right), \qquad (4.19) $$
and finally the $S$-parameter is equal to
$$ S = -\frac{16\pi}{g_{5L}^2 + g_{5R}^2} \left( \partial_y \tilde v^{(1)} - \partial_y \tilde a^{(1)} \right). \qquad (4.20) $$
Since $\tilde a^{(n)}, \tilde v^{(n)}$, $n = 0, 1$ satisfy the same equations (4.2)–(4.4) as before, the proof goes
through unchanged and we conclude that $S > 0$.
5 Bulk Higgs and bulk kinetic mixing
Next, we wish to consider the effects of kinetic mixing from higher-dimensional operator in
the bulk involving the Higgs VEV – as mentioned earlier, this kinetic mixing is suppressed
by the 5D cut-off and hence expected to be a sub-leading effect. The reader might wonder
why we neglected it before but consider it now. The point is that, although the leading
effect on S parameter is positive as shown above, it can be accidentally suppressed so that
the formally sub-leading effects from the bulk kinetic mixing can be important, in particular,
such effects could change the sign of S. Also, the Higgs VEV can be large, especially when
the Higgs profile is “narrow” such that it approximates BC breaking, and thus the large
VEV can (at least partially) compensate the suppression from the 5D cut-off. Of course, in
this limit of BC breaking (δ-function VEV), we know that kinetic mixing gives S < 0 only
if tachyons are present in the spectrum, but we would like to cover the cases intermediate
between BC breaking limit and a broad Higgs profile as well. In this section, we develop
a formalism, valid for arbitrary metric and Higgs profile, to treat the bulk mass term and
kinetic mixing on the same footing and then we apply this technique to models in AdS space
and with power-law profiles for Higgs VEV in the next section.
We first present a discussion of how a profile for the y-dependent kinetic term is equivalent
to a bulk mass term. This is equivalent to the result [13] that a bulk mass term can be
equivalent to an effective metric. However, we find the particular formulation that we present
here to be more useful when we deal with the case of a kinetic mixing. Assume we have a
Lagrangian for a gauge field that has a kinetic term
$$ S = -\frac{1}{4} \int d^5x \sqrt{-g}\, \phi^2(y)\, F_{MN}^2. \qquad (5.1) $$
We work in the axial gauge A5 = 0 and again the metric takes the form (3.1). We redefine
the field to absorb the function φ: Ã(y) = φ(y)A(y). The action in terms of the new field is
then written as
$$ S = -\frac{1}{4} \int d^5x \left[ e^{2\sigma} \tilde F_{\mu\nu}^2 + 2 (\partial_y \tilde A_\mu)^2 + 2\, \frac{(\partial_y \phi)^2}{\phi^2}\, \tilde A_\mu^2 - 4\, \frac{\partial_y \phi}{\phi}\, (\partial_y \tilde A_\mu) \tilde A_\mu \right]. \qquad (5.2) $$
To see that the kinetic profile $\phi$ is equivalent to a mass term, we integrate by parts in the
mixed term:
$$ S = -\frac{1}{4} \int d^5x \left[ e^{2\sigma} \tilde F_{\mu\nu}^2 + 2 (\partial_y \tilde A_\mu)^2 + 2\, \frac{\partial_y^2 \phi}{\phi}\, \tilde A_\mu^2 \right] + \text{boundary terms}. \qquad (5.3) $$
Thus we find that a bulk kinetic profile is equivalent to a bulk mass plus a boundary mass.
The bulk equations of motion for the new variables will then be
$$ \partial_y^2 \tilde A_\mu - e^{2\sigma} p^2 \tilde A_\mu - \frac{\partial_y^2 \phi}{\phi}\, \tilde A_\mu = 0, \qquad (5.4) $$
and the boundary conditions become
$$ \partial_y \tilde A_\mu = \frac{\partial_y \phi}{\phi}\, \tilde A_\mu. \qquad (5.5) $$
Note that, despite the bulk mass term, there is still a massless mode, whose wavefunction is
simply $\phi(y)$. Now we can reverse the argument and say that a bulk mass must be equivalent
to a profile for the bulk kinetic term plus a boundary mass term.
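Since the translation between a kinetic profile and a bulk mass is purely algebraic, it can be verified symbolically. The sketch below is our own check using sympy: it substitutes $\tilde A = \phi A$ into the bulk-mass form (5.4), assuming only that $A$ obeys the kinetic-profile equation, and also confirms that $\tilde A = \phi$ satisfies the boundary condition (5.5):

```python
import sympy as sp

y, p2 = sp.symbols('y p2')
sigma = sp.Function('sigma')(y)
phi = sp.Function('phi', positive=True)(y)
A = sp.Function('A')(y)

# Kinetic-profile EOM: d_y(phi^2 d_y A) = p^2 e^{2 sigma} phi^2 A, solved for A''
App = sp.solve(sp.Eq(sp.diff(phi**2 * sp.diff(A, y), y),
                     p2 * sp.exp(2 * sigma) * phi**2 * A),
               sp.diff(A, y, 2))[0]

# Bulk-mass EOM (5.4) evaluated on A_tilde = phi * A
At = phi * A
residual = (sp.diff(At, y, 2) - p2 * sp.exp(2 * sigma) * At
            - sp.diff(phi, y, 2) / phi * At)
residual = sp.simplify(residual.subs(sp.diff(A, y, 2), App))

# Massless mode A_tilde = phi: boundary condition (5.5) is satisfied identically
bc = sp.simplify(sp.diff(phi, y) - (sp.diff(phi, y) / phi) * phi)
```

Both residuals simplify to zero for arbitrary $\phi(y)$ and $\sigma(y)$, which is the content of the equivalence claimed above.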
5.1 The general case
We have seen above how to go between a bulk mass term and a kinetic function. We
will now use this method to discuss the general case, in which there is electroweak symmetry
breaking due to a bulk Higgs with a sharply peaked profile toward the IR brane, and the same
Higgs introduces kinetic mixing between the L and R fields corresponding to a higher-dimensional
operator in the bulk. For now we assume that the Higgs field that breaks the electroweak
symmetry is in a (2,2) of SU(2)L×SU(2)R, with a VEV $\langle H \rangle = {\rm diag}(h(z), h(z))/\sqrt{2}$.1 This
Higgs profile $h$ has dimension 3/2. The 5D action is given by
$$ S = \int d^5x \sqrt{-g} \left[ -\frac{1}{4 g_5^2} \left( (F^L_{MN})^2 + (F^R_{MN})^2 \right) - (D_M H)^\dagger (D^M H) + \frac{\alpha}{\Lambda^2}\, {\rm Tr}\!\left( F^L_{MN} H^\dagger H F^{MN}_R \right) \right]. \qquad (5.6) $$
Here $\alpha$ is a coefficient of O(1) and $\Lambda$ is the 5D cutoff scale, given approximately by $\Lambda \sim 24\pi^3/g_5^2$.
The kinetic mixing term just generates a shift in the kinetic terms of the vector
and axial vector fields, and we will write the bulk mass term also as a shift in the kinetic
term for the axial vector field. The exact form of the translation between the two forms is
given by answering the question of how to redefine the field in an action of the form (note that $m^2$
has mass dimension 3)
$$ S = -\frac{1}{4 g_5^2} \int d^5x \left( w\, F_{MN}^2 + 2 g_5^2\, m^2\, e^{4\sigma} A_\mu A^\mu \right) \qquad (5.7) $$
to a theory with only a modified kinetic term. The appropriate field redefinition $A = \rho \tilde A$
will cancel the mass term if $\rho$ satisfies
$$ \partial_y \left( w\, \partial_y \rho \right) = m^2 g_5^2\, e^{4\sigma} \rho, \qquad (5.8) $$
1An alternative possibility would be to consider a Higgs in the (3,3) representation of SU(2)L×SU(2)R.
together with the boundary conditions $\partial_y \rho|_{y=1} = 0$, $\rho|_{y=0} = 1$. The relation between the new
and the old kinetic function will be $\tilde w = \rho^2 w$. The action in this case is given by
$$ S = -\frac{1}{4 g_5^2} \int d^5x \sqrt{-g}\, \tilde w\, \tilde F_{MN}^2 + \frac{1}{2 g_5^2} \int d^4x\, \tilde w(0)\, (\partial_y \rho)\, \tilde A_\mu^2 \Big|_{y=0}. \qquad (5.9) $$
This last boundary term is actually irrelevant for the $S$-parameter: since it does not contain
a derivative of the field it cannot acquire an explicit $p$-dependence, so it will not contribute to
$S$; for practical purposes this boundary term can therefore be neglected.
With this expression we can now calculate $S$. For this we need the modified version of
the formula from [13], where the breaking is not by boundary conditions but by a bulk Higgs.
The expression is
$$ S = \frac{8\pi}{g_5^2} \int_0^1 dy\, e^{2\sigma} \left( w_V - \tilde w_A \right). \qquad (5.10) $$
In our case $w_V = 1 - 2\alpha g_5^2 h^2(y)/\Lambda^2$, while $\tilde w_A = w_A \rho^2 = \left( 1 + 2\alpha g_5^2 h^2(y)/\Lambda^2 \right) \rho^2$.
This formula also gives another way to see that $S > 0$ in the absence of kinetic mixing,
without analyzing the functions $v^{(1)}$ and $a^{(1)}$ from Section 4 in detail. Without kinetic
mixing, $w_V = 1$ and $\tilde w_A = \rho^2$, and the equation of motion for $\rho$ is simply $\partial_y^2 \rho = m^2 g_5^2 e^{4\sigma} \rho$.
In that case $\rho$ is just the function we called $a^{(0)}$ in Section 4. Since we showed there that
$a^{(0)} \leq 1$, expression (5.10) gives an alternative argument that $S > 0$ without
kinetic mixing, because it is simply an integral of $e^{2\sigma}(1 - \rho^2) \geq 0$.
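The effect of the kinetic mixing on (5.10) can be illustrated with a small numerical sketch. Everything below is our own toy setup, not a computation from the paper: flat warping $\sigma = 0$, $g_5^2 = 1$, a $y^4$ profile for the Higgs-VEV squared, and a mixing parameter $\kappa$ playing the role of the combination $2\alpha g_5^2/\Lambda^2$ in suitable units:

```python
import numpy as np

n = 800
y = np.linspace(0.0, 1.0, n + 1)
h = y[1] - y[0]
h2 = y**4                       # normalized Higgs-VEV-squared profile h^2(y)
m2g52 = 30.0 * h2               # bulk axial mass M^2 g_5^2 (illustrative scale)

def solve_rho(w):
    """Solve (w rho')' = m2g52 rho with rho(0) = 1, rho'(1) = 0 (sigma = 0)."""
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = 1.0
    b[0] = 1.0
    wm = 0.5 * (w[1:] + w[:-1])          # w evaluated on the half-grid
    for i in range(1, n):
        A[i, i - 1] = wm[i - 1] / h**2
        A[i, i + 1] = wm[i] / h**2
        A[i, i] = -(wm[i - 1] + wm[i]) / h**2 - m2g52[i]
    A[n, n - 1] = 2.0 * wm[-1] / h**2    # ghost-point Neumann condition at y = 1
    A[n, n] = -2.0 * wm[-1] / h**2 - m2g52[n]
    return np.linalg.solve(A, b)

def S_of(kappa):
    wV, wA = 1.0 - kappa * h2, 1.0 + kappa * h2   # ghost-free requires wV > 0
    rho = solve_rho(wA)
    f = wV - wA * rho**2                          # integrand of (5.10)
    return 8.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1])) * h

S0, S_mix = S_of(0.0), S_of(0.8)
```

For $\kappa = 0$ this is just the $e^{2\sigma}(1 - \rho^2)$ positivity argument; turning on $\kappa$ lowers $S$ both through the explicit shift of $w_V$ and because the stiffer $w_A$ flattens $\rho$. Whether $S$ can be driven negative before $w_V$ develops a ghost is the question addressed by the scan in the next subsection.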
5.2 Scan of the parameter space for AdS backgrounds
Having developed the formalism for a unified treatment of bulk mass terms and bulk kinetic
mixing, we then apply it to the AdS case with a power-law profile for the Higgs vev. Requiring
(i) calculability of the 5D theory, i.e., NDA size of the higher-dimensional operator, (ii) that
excited W/Z’s are heavier than a few 100 GeV, and (iii) a ghost-free theory, i.e., positive
kinetic terms for both V and A fields, we find that S is always positive in our scan for
this model. We do not have a general proof that S > 0 for an arbitrary background with
arbitrary Higgs profiles, if we include the effects of the bulk kinetic mixing, but we feel that
such a possibility is quite unlikely based on our exhaustive scan. For this scan we will take
the parametrization of the Higgs profile from [22]. Here the metric is taken as AdS space
$$ ds^2 = \left( \frac{R}{z} \right)^2 \left( \eta_{\mu\nu} dx^\mu dx^\nu - dz^2 \right), \qquad (5.11) $$
where as usual R < z < R′. The bulk Higgs VEV is assumed to be a pure monomial in
z (rather than a combination of an increasing and a decreasing function). The reason for
this is that we are only interested in the effect of the strong dynamics on the electroweak
precision parameters. A term in the Higgs VEV growing toward the UV brane would mean
that the value of bulk Higgs field evaluated on the UV brane gets a VEV, implying that
there is EWSB also by an elementary Higgs (in addition to the strong dynamics) in the 4D
dual. We do not want to consider such a case. The form of the Higgs VEV is then assumed
to be
$$ v(z) = V \sqrt{ \frac{2 (1 + \beta) \log R'/R}{1 - (R/R')^{2+2\beta}} } \left( \frac{z}{R'} \right)^{2+\beta}, \qquad (5.12) $$
where the parameter β characterizes how peaked the Higgs profile is toward the TeV brane
(β → −1 corresponds to a flat profile, β → ∞ to an infinitely peaked one). The other
parameter V corresponds to an “effective Higgs VEV”, and is normalized such that for
V → 246 GeV we recover the SM and the KK modes decouple (R′ → ∞ irrespective of β).
For more details about the definitions of these parameters see [22].2
We first numerically fix the R′ parameter for every given V, β and kinetic mixing pa-
rameter α by requiring that the W -mass is reproduced. We do this approximately, since we
assume the simple matching relation 1/g2 = R log(R′/R)/g25 to numerically fix the value of
g5, which is only true to leading order, but due to wave function distortions and the extra
kinetic term will get corrected. Then, ρ can be numerically calculated by solving (5.8), and
from this S can be obtained via (5.10).
We see that $S$ decreases as we increase $\alpha$. On the other hand, the kinetic function for the
vector field ($w_V$) also decreases in this limit. So, in order to find the minimal value of $S$
consistent with the absence of ghosts in the theory, we find numerically the maximal value
of α for every value of V, β for which the kinetic function of the vectorlike gauge field is still
strictly positive. We then show contour plots for the minimal value of S taking this optimal
value of $\alpha$, as a function of $V, \beta$ in Fig. 1. In the first figure we fix $R = 10^{-8}$ GeV$^{-1}$, which
is the usual choice for higgsless models with light KK $W'$ and $Z'$ states capable of rendering
the model perturbative. In the second plot we choose the more conventional value $R = 10^{-18}$
GeV$^{-1}$. We can see that $S$ is positive in both cases over all of the physical parameter space.
We can estimate the corrections to the above matching relation from the wavefunction
distortion and the kinetic mixing as follows. The effect from wavefunction distortion is expected
to be $\sim g^2 S/(16\pi)$, which is $\lesssim 10\%$ if we restrict to regions of parameter space with $S \lesssim 10$.
Similarly, we estimate the effect due to the kinetic mixing by simply integrating the operator
over the extra dimension, finding a deviation $\sim g^6 (V R')^2 \log^2(R'/R)/(24\pi^3)^2$. So, if we restrict
$V \lesssim 1$ TeV and $1/R' \gtrsim 100$ GeV, this deviation is also small enough. We see that
both effects are small because the deviation is non-zero only near the IR brane – even though
it is O(1) in that region, the zero-mode profile used in the matching relation is
spread throughout the extra dimension.
In order to be able to make a more general statement (and to check that the neglected
additional contributions to the gauge coupling matching from the wavefunction distortions
and the kinetic mixing indeed do not significantly affect our results), we have performed an
additional scan over AdS space models where we do not require the correct physical value of MW
tional scan over AdS space models where we do not require the correct physical value of MW
to be reproduced. In this scan we then treat R′ as an independent free parameter. In this
case the correct matching between g and g5 is no longer important for the sign of S, since at
2Refs. [23] also considered similar models.
Figure 1: The contours of models with fixed values of the S-parameter due to the electroweak
breaking sector. In the left panel we fix 1/R = 108 GeV, while in the right 1/R = 1018 GeV.
The gauge kinetic mixing parameter α is fixed to be the maximal value corresponding to the
given V, β (and R′ chosen such that the W mass is approximately reproduced). In the left
panel the contours are S = 1, 2, 3, 4, 5, 6, while in the right S = 1, 1.5, 2.
every place where g5 appears it is multiplied by a parameter we are scanning over anyway
(V or α).
We performed the scan again for two values of the AdS curvature, 1/R = 108 and 1018
GeV. For the first case we find that if we restrict α < 10, 1/R′ < 1 TeV there is no case with
S < 0. However, there are some cases with S < 0 for α > 10, although in these cases the
theory is likely not predictive. For 1/R = 1018 GeV we find that S < 0 only for V ∼ 250
GeV and β ∼ 0, 1/R′ ∼ 1 TeV. In this case α is of order one (for example α ∼ 5). This case
corresponds to the composite Higgs model of [11] and it is quite plausible that at tree-level
S < 0 if a large kinetic mixing is added in the bulk. However in this case EWSB is mostly
due to a Higgs, albeit a composite particle of the strong dynamics, rather than directly by
the strong dynamics, so it does not contradict the expectation that when EWSB is triggered
directly via strong dynamics, then S is always large and positive. However, it shows that
any general proof for S > 0 purely based on analyzing the properties of Eqs. (5.8)-(5.10) is
doomed to failure, since these equations contain physical situations where EWSB is not due
to the strong dynamics but due to a light Higgs in the spectrum. Thus any general proof
likely needs to include more physical requirements on the decoupling of the physical Higgs.
6 Conclusions
In this paper, we have studied the S parameter in holographic technicolor models, focusing
especially on its sign. The motivation for our study was as follows. An alternative (to SUSY)
solution to the Planck-weak hierarchy involves a strongly interacting 4D sector spontaneously
breaking the EW symmetry. One possibility for such a strong sector is a scaled-up version
of QCD as in the traditional technicolor models. In such models, we can use the QCD
data to “calculate” S finding S ∼ +O(1) which is ruled out by the electroweak precision
data. Faced by this constraint, the idea of a “walking” dynamics was proposed and it can
be then argued that S < 0 is possible which is much less constrained by the data, but the
S parameter cannot be calculated in such models. In short, there is a dearth of calculable
models of (non-supersymmetric) strong dynamics in 4D.
Based on the AdS/CFT duality, the conjecture is that certain kinds of theories of strong
dynamics in 4D are dual to physics of extra dimensions. The idea then is to construct
models of EWSB in an extra dimension. Such constructions allow more possibilities for
model-building, at the same time maintaining calculability if the 5D strong coupling scale is
larger than the compactification scale, corresponding to large number of technicolors in the
4D dual.
It was already shown that S > 0 for boundary condition breaking for arbitrary metric
(the case of $S > 0$ for breaking by a localized Higgs vev was recently studied in
reference [24]). In this paper, we have extended the proof for boundary condition breaking
to the case of arbitrary bulk kinetic functions for gauge fields or gauge kinetic mixing.
Throughout this paper, we have assumed that the (light) SM fermions are effectively
localized near the UV brane so that flavor violation due to higher-dimensional operators
in the 5D theory can be suppressed, at the same time allowing for a solution to the flavor
hierarchy. Such a localization of the light SM fermions in the extra dimension is dual to SM
fermions being “elementary”, i.e., not mixing with composites from the 4D strong sector.
It is known that the S parameter can be suppressed (or even switch sign) for a flat profile
for SM fermions (or near the TeV brane) – corresponding to mixing of elementary fermions
with composites in the 4D dual, but in such a scenario flavor issues could be a problem.
We also considered the case of bulk breaking of the EW symmetry motivated by recent
arguments that S < 0 is possible with different effective metrics for vector and axial fields.
For arbitrary metric and Higgs profile, we showed that S > 0 at leading order, i.e., neglect-
ing effects from all higher-dimensional operators in the 5D theory (especially bulk kinetic
mixing), which are expected to be sub-leading effects being suppressed by the cut-off of the
5D theory. We also note that boundary mass terms can generally be mimicked to arbitrary
precision by localized contributions to the bulk scalar profile, so we do not expect a more
general analysis of boundary plus bulk breaking to find new features. Obtaining S < 0 must
then require either an unphysical Higgs profile or higher-dimensional operators to contribute
effects larger than NDA size, in which case we lose calculability of the 5D theory.
To make our case for S > 0 stronger, we then explored effects of the bulk kinetic mixing
between SU(2)L,R gauge fields due to Higgs vev coming from a higher-dimensional operator
in the 5D theory. Even though, as mentioned above, this effect is expected to be sub-leading,
it can nevertheless be important (especially for the sign of S) if the leading contribution to
S is accidentally suppressed. Also, the large Higgs VEV, allowed for narrow profiles in the
extra dimension (approaching the BC breaking limit), can compensate the suppression due to
the cut-off in this operator. For this analysis, we found it convenient to convert bulk (mass)2
for gauge fields also to kinetic functions. Although a general proof of S > 0 is lacking in
such a scenario, using the above method of treating the bulk mass for axial fields we found
that S ∼ +O(1) for the AdS5 model with a power-law Higgs profile in the viable (ghost-free)
and calculable regions of the parameter space.
In summary, our results combined with the previous literature strongly suggest that S is
positive in calculable models of technicolor in 4D and 5D. We also presented a plausibility
argument for S > 0 which is valid in general, i.e., even for non-calculable models.
7 Acknowledgments
We thank Giacomo Cacciapaglia, Cédric Delaunay, Antonio Delgado, Guido Marandella,
Riccardo Rattazzi, Matthew Schwartz and Raman Sundrum for discussions. We also thank
Johannes Hirn and Veronica Sanz for comments on the manuscript. As we were finishing the
paper, we learned that Raman Sundrum and Tom Kramer have also obtained results similar
to ours [25]. C.C. thanks the theory group members at CERN for their hospitality during
his visit. K.A. is supported in part by the U. S. DOE under Contract no. DE-FG-02-85ER
40231. The research of C.C. is supported in part by the DOE OJI grant DE-FG02-01ER41206
and in part by the NSF grant PHY-0355005. The research of M.R. is supported in part by
an NSF graduate student fellowship and in part by the NSF grant PHY-0355005. C.G.
is supported in part by the RTN European Program MRTN-CT-2004-503369 and by the
CNRS/USA exchange grant 3503.
Note added
After submitting our paper to the arXiv, we learned of [26] which gives a proof for S > 0
for an arbitrary bulk Higgs profile in AdS background that is similar to our proof in Section
4. However, our proof of S > 0 is valid for a general metric and we have also included the
effect of kinetic mixing between SU(2)L and SU(2)R fields via higher-dimensional operator
(with Higgs vev) for the calculation of S in AdS background. We thank Deog-Ki Hong and
Ho-Ung Yee for pointing out their paper to us.
A Details of BC breaking with arbitrary kinetic functions
Here we present the detailed calculation of S in the case with boundary breaking and
arbitrary kinetic functions described in Section 3.2. Recall that we had the following
decomposition:

A_{Lμ}(p², y) = L̄_μ(p²) L_L(y, p²) + R̄_μ(p²) L_R(y, p²),
A_{Rμ}(p², y) = L̄_μ(p²) R_L(y, p²) + R̄_μ(p²) R_R(y, p²),   (A.1)

with boundary conditions

(UV)  L_L(0, p²) = 1,  L_R(0, p²) = 0,  R_L(0, p²) = 0,  R_R(0, p²) = 1,   (A.2)

(IR)  L_L(1, p²) = R_L(1, p²),  L_R(1, p²) = R_R(1, p²),
      ∂_y(L_L(1, p²) + R_L(1, p²)) = 0,  ∂_y(L_R(1, p²) + R_R(1, p²)) = 0.   (A.3)

The action again reduces to a boundary term,

S_eff = ∫ [ φ_L²(0) L̄_μ ∂_y L^μ + φ_R²(0) R̄_μ ∂_y R^μ ],   (A.4)

so we find that

S = −8π ( φ_L² ∂_y L_R^{(2)} + φ_R² ∂_y R_L^{(2)} ) |_{y=0},   (A.5)

where we have done an expansion in the momentum for all the wave functions as usual,
L_L(y, p²) = L_L^{(0)}(y) + p² L_L^{(2)}(y) + … . The lowest order wave functions satisfy the
following bulk equations:

∂_y ( φ_I² ∂_y I_J^{(0)} ) = 0,   (A.6)

where I and J can refer to L or R. Imposing the BC's, these equations can be simply solved
in terms of the integrals

f_L(y) = c_L ∫₀^y dy′/φ_L²(y′),   f_R(y) = c_R ∫₀^y dy′/φ_R²(y′),   (A.7)

with the constants c_{L,R} fixed by the IR conditions, so that

L_L^{(0)} = 1 − f_L(y),  L_R^{(0)} = f_L(y),  R_L^{(0)} = f_R(y),  R_R^{(0)} = 1 − f_R(y).   (A.8)

In order to actually find S we need to go one step further, that is, calculate the next order
terms I_J^{(2)} in the wave functions. These will satisfy the equations

∂_y ( φ_I² ∂_y I_J^{(2)} ) = e^{2σ} φ_I² I_J^{(0)},   (A.9)

where for the I_J^{(0)} we use the solutions in (A.8). The form of the solutions will be given by

I_J^{(2)}(y) = ∫₀^y dy′/φ_I²(y′) [ ∫₀^{y′} du e^{2σ(u)} φ_I²(u) I_J^{(0)}(u) + a_I^J ],   (A.10)

where the a_I^J are constants. In terms of these quantities the S-parameter is just given by

S = −8π ( a_L^R + a_R^L ).   (A.11)

One can again solve the boundary conditions to find the constants a_R^L, a_L^R: each is given
by minus a sum of nested integrals of e^{2σ(y)} φ_{L,R}²(y) against the profiles f_R and 1 − f_L,
weighted by the positive factors 1/φ_{L,R}²(y) and φ_L²(1)/φ_R²(1) (eqs. (A.12)-(A.13)); a
similar expression applies for a_L^R with L ↔ R everywhere. Since 0 < f_{L,R} < 1, we can
see that every term in the expression is manifestly positive, so S is definitely positive.
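As a sanity check on the zeroth-order solutions (A.8) (ours, not part of the paper), the sketch below numerically builds f_{L,R}(y) = c_{L,R} ∫₀^y dy′/φ_{L,R}²(y′), fixes the two constants by the IR conditions of (A.3) (continuity plus the Neumann condition), and verifies the UV conditions (A.2) together with 0 < f_{L,R}(1) < 1. The kinetic functions φ_L², φ_R² below are invented for illustration.

```python
import numpy as np

# Illustrative (assumed) kinetic functions on the interval y in [0, 1].
y = np.linspace(0.0, 1.0, 20001)
phiL2 = 1.0 + 0.5 * y          # phi_L^2(y), positive
phiR2 = 2.0 - 0.5 * y**2       # phi_R^2(y), positive

def cumint(g):
    """Cumulative trapezoidal integral of g from 0 to y."""
    return np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(y))))

# Solutions of d_y(phi_I^2 d_y f_I) = 0 with f_I(0) = 0 are c_I * int_0^y dy'/phi_I^2.
FL = cumint(1.0 / phiL2)
FR = cumint(1.0 / phiR2)

# Fix c_L, c_R by the IR conditions (A.3):
#   continuity: 1 - c_L FL(1) = c_R FR(1)
#   Neumann:    d_y(L_L^(0) + R_L^(0))|_{y=1} = 0, i.e. c_L/phiL2(1) = c_R/phiR2(1)
A = np.array([[FL[-1], FR[-1]],
              [1.0 / phiL2[-1], -1.0 / phiR2[-1]]])
cL, cR = np.linalg.solve(A, np.array([1.0, 0.0]))
fL, fR = cL * FL, cR * FR

LL0, LR0, RL0, RR0 = 1 - fL, fL, fR, 1 - fR              # eq. (A.8)
assert abs(LL0[0] - 1) < 1e-12 and abs(RL0[0]) < 1e-12   # UV conditions (A.2)
assert abs(LL0[-1] - RL0[-1]) < 1e-9                     # IR continuity (A.3)
assert 0 < fL[-1] < 1 and 0 < fR[-1] < 1                 # 0 < f_{L,R} < 1
```

Only the values of c_L, c_R depend on the choice of φ_{L,R}²; the checks pass for any positive kinetic functions.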
References
[1] M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. 65, 964 (1990); B. Holdom and J. Tern-
ing, Phys. Lett. B 247, 88 (1990); M. Golden and L. Randall, Nucl. Phys. B 361, 3
(1991); M. E. Peskin and T. Takeuchi, Phys. Rev. D 46, 381 (1992).
[2] B. Holdom, Phys. Rev. D 24, 1441 (1981).
[3] R. Sundrum and S. D. H. Hsu, Nucl. Phys. B 391, 127 (1993) [arXiv:hep-ph/9206225];
T. Appelquist and F. Sannino, Phys. Rev. D 59, 067702 (1999) [arXiv:hep-ph/9806409].
[4] L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999) [arXiv:hep-ph/9905221].
[5] N. Arkani-Hamed, M. Porrati and L. Randall, JHEP 0108, 017 (2001)
[arXiv:hep-th/0012148].
[6] C. Csáki, C. Grojean, H. Murayama, L. Pilo and J. Terning, Phys. Rev. D 69, 055006
(2004) [arXiv:hep-ph/0305237]; C. Csáki, C. Grojean, L. Pilo and J. Terning, Phys.
Rev. Lett. 92, 101802 (2004) [arXiv:hep-ph/0308038].
[7] R. Barbieri, A. Pomarol and R. Rattazzi, Phys. Lett. B 591, 141 (2004)
[arXiv:hep-ph/0310285].
[8] C. Grojean, W. Skiba and J. Terning, Phys. Rev. D 73, 075008 (2006)
[arXiv:hep-ph/0602154].
[9] G. Cacciapaglia, C. Csáki, G. Marandella and A. Strumia, Phys. Rev. D 74, 033011
(2006) [arXiv:hep-ph/0604111].
[10] G. Cacciapaglia, C. Csáki, C. Grojean and J. Terning, Phys. Rev. D 71, 035015 (2005)
[arXiv:hep-ph/0409126]; R. Foadi, S. Gopalakrishna and C. Schmidt, Phys. Lett. B
606, 157 (2005) [arXiv:hep-ph/0409266]; R. S. Chivukula, E. H. Simmons, H. J. He,
M. Kurachi and M. Tanabashi, Phys. Rev. D 71, 115001 (2005) [arXiv:hep-ph/0502162].
[11] R. Contino, Y. Nomura and A. Pomarol, Nucl. Phys. B 671, 148 (2003)
[arXiv:hep-ph/0306259]; K. Agashe, R. Contino and A. Pomarol, Nucl. Phys. B 719,
165 (2005) [arXiv:hep-ph/0412089]; K. Agashe and R. Contino, Nucl. Phys. B 742, 59
(2006) [arXiv:hep-ph/0510164].
[12] G. F. Giudice, C. Grojean, A. Pomarol and R. Rattazzi, arXiv:hep-ph/0703164.
[13] J. Hirn and V. Sanz, Phys. Rev. Lett. 97, 121803 (2006) [arXiv:hep-ph/0606086]; J. Hirn
and V. Sanz, arXiv:hep-ph/0612239.
[14] W. M. Yao et al. [Particle Data Group], J. Phys. G 33, 1 (2006).
[15] M. Kurachi and R. Shrock, Phys. Rev. D 74, 056003 (2006) [arXiv:hep-ph/0607231].
[16] S. Friot, D. Greynat and E. de Rafael, JHEP 0410, 043 (2004) [arXiv:hep-ph/0408281].
[17] S. Weinberg, Phys. Rev. Lett. 18, 507 (1967).
[18] E. Witten, Phys. Rev. Lett. 51, 2351 (1983).
[19] C. Csáki, J. Erlich and J. Terning, Phys. Rev. D 66, 064021 (2002)
[arXiv:hep-ph/0203034]; G. Cacciapaglia, C. Csáki, C. Grojean and J. Terning, Phys.
Rev. D 70, 075014 (2004) [arXiv:hep-ph/0401160].
[20] R. Barbieri, A. Pomarol, R. Rattazzi and A. Strumia, Nucl. Phys. B 703, 127 (2004)
[arXiv:hep-ph/0405040].
[21] S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B 428, 105
(1998) [arXiv:hep-th/9802109]; E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998)
[arXiv:hep-th/9802150].
[22] G. Cacciapaglia, C. Csáki, G. Marandella and J. Terning, arXiv:hep-ph/0611358.
[23] H. Davoudiasl, B. Lillie and T. G. Rizzo, JHEP 0608, 042 (2006)
[arXiv:hep-ph/0508279]; C. D. Carone, J. Erlich and J. A. Tan, Phys. Rev. D
75, 075005 (2007) [arXiv:hep-ph/0612242]; M. Piai, arXiv:hep-ph/0609104 and
arXiv:hep-ph/0608241.
[24] A. Delgado and A. Falkowski, arXiv:hep-ph/0702234.
[25] T. Kramer and R. Sundrum, private communication.
[26] D. K. Hong and H. U. Yee, Phys. Rev. D 74, 015011 (2006) [arXiv:hep-ph/0602177].
0704.1822 | Energy Functionals for the Parabolic Monge-Ampere Equation

ENERGY FUNCTIONALS FOR THE PARABOLIC MONGE-AMPÈRE EQUATION
ZUOLIANG HOU AND QI LI
1. Introduction
Because of its close connection with the Kähler-Ricci flow, the parabolic
complex Monge-Ampère equation on complex manifolds has been studied by
many authors. See, for instance, [Cao85, CT02, PS06]. On the other hand,
theories for the complex Monge-Ampère equation on both bounded domains
and complex manifolds were developed in [BT76, Yau78, CKNS85, Koł98].
In this paper, we are going to study the parabolic complex Monge-Ampère
equation over a bounded domain.
Let Ω ⊂ Cn be a bounded domain with smooth boundary ∂Ω. Denote
QT = Ω × (0, T ) with T > 0, B = Ω × {0}, Γ = ∂Ω × {0} and ΣT = ∂Ω ×
(0, T ). Let ∂pQT be the parabolic boundary of QT , i.e. ∂pQT = B∪Γ∪ΣT .
Consider the following boundary value problem:

(1)  u_t − log det(u_{αβ̄}) = f(t, z, u) in Q_T,
     u = ϕ on ∂_p Q_T,
where f ∈ C∞(R× Ω̄×R) and ϕ ∈ C∞(∂pQT ). We will always assume that
Then we will prove that
Theorem 1. Suppose there exists a spatial plurisubharmonic (psh) function
u̲ ∈ C²(Q̄_T) such that

(3)  u̲_t − log det(u̲_{αβ̄}) ≤ f(t, z, u̲) in Q_T,
     u̲ ≤ ϕ on B and u̲ = ϕ on Σ_T ∪ Γ.

Then there exists a spatial psh solution u ∈ C∞(Q̄_T) of (1) with u ≥ u̲ if the
following compatibility condition is satisfied: for all z ∈ ∂Ω,

(4)  ϕ_t − log det(ϕ_{αβ̄}) = f(0, z, ϕ(z)),
     ϕ_tt − ∂_t [ log det(ϕ_{αβ̄}) ] = f_t(0, z, ϕ(z)) + f_u(0, z, ϕ(z)) ϕ_t.
Date: October 29, 2018.
Motivated by the energy functionals in the study of the Kähler-Ricci
flow, we introduce certain energy functionals for the complex Monge-Ampère
equation over a bounded domain. Given ϕ ∈ C∞(∂Ω), denote

(5) P(Ω, ϕ) = { u ∈ C²(Ω̄) | u is psh, and u = ϕ on ∂Ω },

then define the F⁰ functional by the following variational formula:

(6) δF⁰(u) = −∫_Ω δu (√−1 ∂∂̄u)ⁿ.

We shall show that the F⁰ functional is well-defined. Using this F⁰ func-
tional and following the ideas of [PS06], we prove that
Theorem 2. Assume that both ϕ and f are independent of t, and

(7) f_u ≤ 0 and f_{uu} ≤ 0.

Then the solution u of (1) exists for T = +∞, and as t approaches +∞,
u(·, t) approaches the unique solution v of the Dirichlet problem

(8) det(v_{αβ̄}) = e^{−f(z,v)} in Ω,  v = ϕ on ∂Ω,

in C^{1,α}(Ω̄) for any 0 < α < 1.
Remark : Similar energy functionals have been studied in [Bak83, Tso90,
Wan94, TW97, TW98] for the real Monge-Ampère equation and the real
Hessian equation with homogeneous boundary condition ϕ = 0, and the
convergence for the solution of the real Hessian equation was also proved in
[TW98]. Our construction of the energy functionals and the proof of the
convergence also work for these cases, and thus we also obtain an independent
proof of these results. Li [Li04] and Błocki [Bło05] studied the Dirichlet
problems for the complex k-Hessian equations over bounded complex domains.
Similar energy functionals can also be constructed for the parabolic
complex k-Hessian equations and used to prove convergence.
2. A priori C2 estimate
By the work of Krylov [Kry83], Evans [Eva82], Caffarelli et al. [CKNS85]
and Guan [Gua98], it is well known that in order to prove the existence and
smoothness for (1), we only need to establish the a priori C^{2,1}(Q̄_T)¹ estimate:
for a solution u ∈ C^{4,1}(Q̄_T) of (1) with

(9) u = u̲ on Σ_T ∪ Γ and u ≥ u̲ in Q_T,

we have

(10) ‖u‖_{C^{2,1}(Q_T)} ≤ M₂,

where M₂ only depends on Q_T, u̲, f and ‖u(·, 0)‖_{C²(Ω̄)}.

¹ C^{m,n}(Q_T) means m times and n times differentiable in the space and time
directions respectively; similarly for the C^{m,n}-norm.
Proof of (10). Since u is spatial psh and u ≥ u̲, the maximum principle gives

u̲ ≤ u ≤ sup_{∂pQ_T} ϕ,

so

(11) ‖u‖_{C⁰(Q_T)} ≤ M₀.
Step 1. |ut| ≤ C1 in Q̄T .
Let G = ut(2M0−u)−1. If G attains its minimum on Q̄T at the parabolic
boundary, then ut ≥ −C1 where C1 depends on M0 and u t on Σ. Otherwise,
at the point where G attains the minimum,
(12) G_t ≤ 0, i.e. u_tt + (2M₀ − u)^{−1}u_t² ≤ 0,
     G_α = 0, i.e. u_tα + (2M₀ − u)^{−1}u_t u_α = 0,
     G_β̄ = 0, i.e. u_tβ̄ + (2M₀ − u)^{−1}u_t u_β̄ = 0,
and the matrix (G_{αβ̄}) is non-negative, i.e.
(13) u_{tαβ̄} + (2M₀ − u)^{−1}u_t u_{αβ̄} ≥ 0.
Hence

(14) 0 ≤ u^{αβ̄} ( u_{tαβ̄} + (2M₀ − u)^{−1}u_t u_{αβ̄} ) = u^{αβ̄}u_{tαβ̄} + n(2M₀ − u)^{−1}u_t,

where (u^{αβ̄}) is the inverse matrix of (u_{αβ̄}), i.e. u^{αβ̄}u_{γβ̄} = δ^α_γ.
Differentiating (1) in t, we get

(15) u_tt − u^{αβ̄}u_{tαβ̄} = f_t + f_u u_t,

so

(2M₀ − u)^{−1}u_t² ≤ −u_tt = −u^{αβ̄}u_{tαβ̄} − f_t − f_u u_t ≤ n(2M₀ − u)^{−1}u_t − f_u u_t − f_t,

hence

u_t² − (n − (2M₀ − u)f_u)u_t + f_t(2M₀ − u) ≤ 0.
Therefore at point p, we get
(16) ut ≥ −C1
where C1 depends on M0 and f .
Similarly, by considering the function u_t(2M₀ + u)^{−1} we can show that
(17) ut ≤ C1.
Step 2. |∇u| ≤ M1
Extend u|_{Σ_T} to a spatial harmonic function h; then

(18) u̲ ≤ u ≤ h in Q_T and u̲ = u = h on Σ_T,

so

(19) |∇u| ≤ M₁ on Σ_T.
Let L be the linear differential operator defined by

(20) Lv = v_t − u^{αβ̄}v_{αβ̄} − f_u v.

Then

L(∇u + e^{λ|z|²}) = L(∇u) + L e^{λ|z|²} ≤ |∇f| − e^{λ|z|²} ( λ Σ_α u^{αᾱ} + f_u ).
Notice that both u and u̇ are bounded and that (1) gives

det ( u_{αβ̄} ) = e^{u̇−f},

so

(22) 0 < c₀ ≤ det ( u_{αβ̄} ) ≤ c₁,

where c₀ and c₁ depend on M₀ and f. Therefore

Σ_α u^{αᾱ} ≥ n c₁^{−1/n}.
Hence after taking λ large enough, we get

L(∇u + e^{λ|z|²}) ≤ 0,

so by the maximum principle

(24) |∇u| ≤ sup_{∂pQ_T} |∇u| + C₂ ≤ M₁.
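The eigenvalue bound Σ_α u^{αᾱ} ≥ n c₁^{−1/n} used in Step 2 is the arithmetic-geometric mean inequality applied to the eigenvalues of the inverse matrix: for a positive-definite Hermitian U = (u_{αβ̄}) with det U ≤ c₁, one has tr U^{−1} = Σ 1/λ_i ≥ n (Π 1/λ_i)^{1/n} = n (det U)^{−1/n} ≥ n c₁^{−1/n}. A quick numerical spot-check (ours, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
for _ in range(100):
    # random positive-definite Hermitian matrix U = B B^dagger + 0.1 I
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    U = B @ B.conj().T + 0.1 * np.eye(n)
    det = np.linalg.det(U).real
    tr_inv = np.trace(np.linalg.inv(U)).real
    # AM-GM applied to the (positive) eigenvalues of U^{-1}
    assert tr_inv >= n * det ** (-1.0 / n) - 1e-9
```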
Step 3. |∇2u| ≤ M2 on Σ.
At a point (p, t) ∈ Σ_T, we choose coordinates z₁, · · · , z_n for Ω such that
z₁ = · · · = z_n = 0 at p and the positive x_n axis is the interior normal
direction of ∂Ω at p. We set s₁ = y₁, s₂ = x₁, · · · , s_{2n−1} = y_n, s_{2n} = x_n and
s′ = (s₁, · · · , s_{2n−1}). We also assume that near p, ∂Ω is represented as a
graph

(25) x_n = ρ(s′) = Σ_{j,k<2n} B_{jk} s_j s_k + O(|s′|³).
Since (u− u)(s′, ρ(s′), t) = 0, we have for j, k < 2n,
(26) (u− u)sjsk(p, t) = −(u− u)xn(p, t)Bjk,
hence
(27) |usjsk(p, t)| ≤ C3,
where C3 depends on ∂Ω, u and M1.
We will follow the construction of the barrier function by Guan [Gua98] to
estimate |u_{x_n s_j}|. For δ > 0, denote Q_δ(p, t) = (Ω ∩ B_δ(p)) × (0, t).
Lemma 3. Define the functions

(28) d(z) = dist(z, ∂Ω)

and

(29) v = (u − u̲) + a(h − u) − N d².

Then for N sufficiently large and a, δ sufficiently small,

Lv ≥ ǫ(1 + Σ_α u^{αᾱ}) in Q_δ(p, t),
v ≥ 0 on ∂(B_δ(p) ∩ Ω) × (0, t),
v(z, 0) ≥ c₃|z| for z ∈ B_δ(p) ∩ Ω,

where ǫ depends on the uniform lower bound of the eigenvalues of {u̲_{αβ̄}}.

Proof. See the proof of Lemma 2.1 in [Gua98]. □
For j < 2n, consider the operator

T_j = ∂/∂s_j + ρ_{s_j} ∂/∂x_n .

Then

T_j(u − u̲) = 0 on (∂Ω ∩ B_δ(p)) × (0, t),
|T_j(u − u̲)| ≤ M₁ on (Ω ∩ ∂B_δ(p)) × (0, t),
|T_j(u − u̲)(z, 0)| ≤ C₄|z| for z ∈ B_δ(p).

So by Lemma 3 we may choose C₅ independent of u, and A >> B >> 1, so that

L( Av + B|z|² − C₅(u_{y_n} − u̲_{y_n})² ± T_j(u − u̲) ) ≥ 0 in Q_δ(p, t),
Av + B|z|² − C₅(u_{y_n} − u̲_{y_n})² ± T_j(u − u̲) ≥ 0 on ∂_p Q_δ(p, t).

Hence by the comparison principle,

Av + B|z|² − C₅(u_{y_n} − u̲_{y_n})² ± T_j(u − u̲) ≥ 0 in Q_δ(p, t),

and at (p, t)

(33) |u_{x_n y_j}| ≤ M₂.
To estimate |u_{x_n x_n}|, we will follow the simplification in [Tru95]. For
(p, t) ∈ Σ_T, define

λ(p, t) = min{ u_{ξξ̄} | ξ ∈ T_p∂Ω a complex vector, |ξ| = 1 }.

Claim: λ(p, t) ≥ c₄ > 0, where c₄ is independent of u.
Let us assume that λ attains its minimum at (z₀, t₀) with ξ ∈ T_{z₀}∂Ω. We may
assume that

λ(z₀, t₀) < ½ u̲_{ξξ̄}(z₀, t₀),

since otherwise we are done.
Take a unitary frame e1, · · · , en around z0, such that e1(z0) = ξ, and Re en =
γ is the interior normal of ∂Ω along ∂Ω. Let r be the function which defines
Ω, then
(u − u̲)_{11̄}(z, t) = −r_{11̄}(z)(u − u̲)_γ(z, t),  z ∈ ∂Ω.

Since u_{11̄}(z₀, t₀) < u̲_{11̄}(z₀, t₀)/2,

−r_{11̄}(z₀)(u − u̲)_γ(z₀, t₀) ≤ −½ u̲_{11̄}(z₀, t₀).

Hence

r_{11̄}(z₀)(u − u̲)_γ(z₀, t₀) ≥ ½ u̲_{11̄}(z₀, t₀) ≥ c₅ > 0.

Since both ∇u and ∇u̲ are bounded, we get

r_{11̄}(z₀) ≥ c₆ > 0,

and for δ sufficiently small (depending on r_{11̄}) and z ∈ B_δ(z₀) ∩ Ω,

r_{11̄}(z) ≥ c₆/2.
So by u_{11̄}(z, t) ≥ u_{11̄}(z₀, t₀), we get

u̲_{11̄}(z, t) − r_{11̄}(z)(u − u̲)_γ(z, t) ≥ u̲_{11̄}(z₀, t₀) − r_{11̄}(z₀)(u − u̲)_γ(z₀, t₀).

Hence if we let

Ψ(z, t) = (1/r_{11̄}(z)) [ r_{11̄}(z₀)(u − u̲)_γ(z₀, t₀) + u̲_{11̄}(z, t) − u̲_{11̄}(z₀, t₀) ],

then

(u − u̲)_γ(z, t) ≤ Ψ(z, t) on (∂Ω ∩ B_δ(z₀)) × (0, T),
(u − u̲)_γ(z₀, t₀) = Ψ(z₀, t₀).
Now take the coordinate system z₁, · · · , z_n as before. Then

(u − u̲)_{x_n}(z, t) ≤ Ψ(z, t)/γ_n(z) on (∂Ω ∩ B_δ(z₀)) × (0, T),
(u − u̲)_{x_n}(z₀, t₀) = Ψ(z₀, t₀)/γ_n(z₀),

where γ_n depends on ∂Ω. After taking C₆ independent of u and A >> B >> 1, we get

L( Av + B|z|² − C₆(u_{y_n} − u̲_{y_n})² + Ψ(z, t)/γ_n(z) − T_j(u − u̲) ) ≥ 0 in Q_δ(p, t),
Av + B|z|² − C₆(u_{y_n} − u̲_{y_n})² + Ψ(z, t)/γ_n(z) − T_j(u − u̲) ≥ 0 on ∂_p Q_δ(p, t).

So

Av + B|z|² − C₆(u_{y_n} − u̲_{y_n})² + Ψ(z, t)/γ_n(z) − T_j(u − u̲) ≥ 0 in Q_δ(p, t),

and

|u_{x_n x_n}(z₀, t₀)| ≤ C₇.
Therefore at (z₀, t₀), u_{αβ̄} is uniformly bounded, hence

u_{11̄}(z₀, t₀) ≥ c₄

with c₄ independent of u. Finally, from the equation det u_{αβ̄} = e^{u̇−f} we get

|u_{x_n x_n}| ≤ M₂.
Step 4. |∇²u| ≤ M₂ in Q_T.

By the concavity of log det, we have

L(∇²u + e^{λ|z|²}) ≤ O(1) − e^{λ|z|²} ( λ Σ_α u^{αᾱ} + f_u ).

So for λ large enough,

L(∇²u + e^{λ|z|²}) ≤ 0,

so

(35) sup_{Q_T} |∇²u| ≤ sup_{∂pQ_T} |∇²u| + C₈,

with C₈ depending on M₀, Ω and f.
3. The Functionals I, J and F⁰

Let us recall the definition of P(Ω, ϕ) in (5):

P(Ω, ϕ) = { u ∈ C²(Ω̄) | u is psh, and u = ϕ on ∂Ω }.

Fixing v ∈ P, for u ∈ P define

(36) I_v(u) = −∫_Ω (u − v) [ (√−1 ∂∂̄u)ⁿ − (√−1 ∂∂̄v)ⁿ ].
Proposition 4. There is a unique well-defined functional J_v on P(Ω, ϕ)
such that

(37) δJ_v(u) = −∫_Ω δu [ (√−1 ∂∂̄u)ⁿ − (√−1 ∂∂̄v)ⁿ ]

and J_v(v) = 0.
Proof. Notice that P is connected, so we can connect v to u ∈ P by a path
u_t, 0 ≤ t ≤ 1, such that u₀ = v and u₁ = u. Define

(38) J_v(u) = −∫₀¹ ∫_Ω u̇_t [ (√−1 ∂∂̄u_t)ⁿ − (√−1 ∂∂̄v)ⁿ ] dt.

We need to show that the integral in (38) is independent of the choice of
path u_t. Let δu_t = w_t be a variation of the path. Then

w₁ = w₀ = 0 and w_t = 0 on ∂Ω,
and

δ ( ∫₀¹ ∫_Ω u̇ [ (√−1 ∂∂̄u)ⁿ − (√−1 ∂∂̄v)ⁿ ] dt )
 = ∫₀¹ ∫_Ω ẇ [ (√−1 ∂∂̄u)ⁿ − (√−1 ∂∂̄v)ⁿ ] + u̇ n√−1 ∂∂̄w ∧ (√−1 ∂∂̄u)ⁿ⁻¹ dt.

Since w₀ = w₁ = 0, an integration by parts with respect to t gives

∫₀¹ ∫_Ω ẇ [ (√−1 ∂∂̄u)ⁿ − (√−1 ∂∂̄v)ⁿ ] dt = −∫₀¹ ∫_Ω n√−1 w ∂∂̄u̇ ∧ (√−1 ∂∂̄u)ⁿ⁻¹ dt.

Notice that both w and u̇ vanish on ∂Ω, so an integration by parts with
respect to z gives

∫_Ω n√−1 w ∂∂̄u̇ ∧ (√−1 ∂∂̄u)ⁿ⁻¹ = −∫_Ω n√−1 ∂w ∧ ∂̄u̇ ∧ (√−1 ∂∂̄u)ⁿ⁻¹
 = ∫_Ω n√−1 u̇ ∂∂̄w ∧ (√−1 ∂∂̄u)ⁿ⁻¹.

Hence

(39) δ ( ∫₀¹ ∫_Ω u̇ [ (√−1 ∂∂̄u)ⁿ − (√−1 ∂∂̄v)ⁿ ] dt ) = 0,

and the functional J is well defined. □
Using the J functional, we can define the F⁰ functional as

(40) F⁰_v(u) = J_v(u) − ∫_Ω (u − v) (√−1 ∂∂̄v)ⁿ.

Then by Proposition 4, we have

(41) δF⁰_v(u) = −∫_Ω δu (√−1 ∂∂̄u)ⁿ.
Proposition 5. The basic properties of I, J and F⁰ are the following:

(1) For any u ∈ P(Ω, ϕ), I_v(u) ≥ J_v(u) ≥ 0.
(2) F⁰ is convex on P(Ω, ϕ), i.e. for all u₀, u₁ ∈ P,

(42) F⁰( (u₀ + u₁)/2 ) ≤ ( F⁰(u₀) + F⁰(u₁) ) / 2.

(3) F⁰ satisfies the cocycle condition, i.e. for all u₁, u₂, u₃ ∈ P(Ω, ϕ),

(43) F⁰_{u₁}(u₂) + F⁰_{u₂}(u₃) = F⁰_{u₁}(u₃).
Proof. Let w = u − v and u_t = v + tw = (1 − t)v + tu. Then

(44) I_v(u) = −∫_Ω (u − v) [ (√−1 ∂∂̄u)ⁿ − (√−1 ∂∂̄v)ⁿ ]
 = −∫₀¹ ∫_Ω w (d/dt)(√−1 ∂∂̄u_t)ⁿ dt
 = −∫₀¹ ∫_Ω n√−1 w ∂∂̄w ∧ (√−1 ∂∂̄u_t)ⁿ⁻¹ dt
 = ∫₀¹ ∫_Ω n√−1 ∂w ∧ ∂̄w ∧ (√−1 ∂∂̄u_t)ⁿ⁻¹ dt ≥ 0,

and

(45) J_v(u) = −∫₀¹ ∫_Ω w [ (√−1 ∂∂̄u_t)ⁿ − (√−1 ∂∂̄v)ⁿ ] dt
 = −∫₀¹ ∫₀ᵗ ∫_Ω w (d/ds)(√−1 ∂∂̄u_s)ⁿ ds dt
 = −∫₀¹ ∫₀ᵗ ∫_Ω n√−1 w ∂∂̄w ∧ (√−1 ∂∂̄u_s)ⁿ⁻¹ ds dt
 = ∫₀¹ (1 − s) ∫_Ω n√−1 ∂w ∧ ∂̄w ∧ (√−1 ∂∂̄u_s)ⁿ⁻¹ ds ≥ 0.

Comparing (44) and (45), it is easy to see that

I_v(u) ≥ J_v(u) ≥ 0.

To prove (42), let u_t = (1 − t)u₀ + tu₁. Then

F⁰(u_{1/2}) − F⁰(u₀) = −∫₀^{1/2} ∫_Ω (u₁ − u₀) (√−1 ∂∂̄u_t)ⁿ dt,
F⁰(u₁) − F⁰(u_{1/2}) = −∫_{1/2}^{1} ∫_Ω (u₁ − u₀) (√−1 ∂∂̄u_t)ⁿ dt.

Since

[ F⁰(u₁) − F⁰(u_{1/2}) ] − [ F⁰(u_{1/2}) − F⁰(u₀) ]
 = ∫₀^{1/2} ∫_Ω (u₁ − u₀) (√−1 ∂∂̄u_t)ⁿ dt − ∫_{1/2}^{1} ∫_Ω (u₁ − u₀) (√−1 ∂∂̄u_t)ⁿ dt
 = ∫₀^{1/2} ∫_Ω (u₁ − u₀) [ (√−1 ∂∂̄u_t)ⁿ − (√−1 ∂∂̄u_{t+1/2})ⁿ ] dt
 = 2 ∫₀^{1/2} ∫_Ω (u_{t+1/2} − u_t) [ (√−1 ∂∂̄u_t)ⁿ − (√−1 ∂∂̄u_{t+1/2})ⁿ ] dt ≥ 0,

we get

F⁰(u₁) − F⁰(u_{1/2}) ≥ F⁰(u_{1/2}) − F⁰(u₀),

which is (42). The cocycle condition is a simple consequence of the variation
formula (41). □
4. The Convergence

In this section, let us assume that both f and ϕ are independent of t. For
u ∈ P(Ω, ϕ), define

(46) F(u) = F⁰(u) + ∫_Ω G(z, u) dV,

where dV is the volume element in Cⁿ, and G(z, s) is the function given by

G(z, s) = ∫₀ˢ e^{−f(z,t)} dt.

Then the variation of F is

(47) δF(u) = −∫_Ω δu [ det(u_{αβ̄}) − e^{−f(z,u)} ] dV.

Proof of Theorem 2. We will follow Phong and Sturm's proof of the conver-
gence of the Kähler-Ricci flow in [PS06]. For any t > 0, the function u(·, t)
is in P(Ω, ϕ). So by (47)

(d/dt) F(u) = −∫_Ω u̇ [ det(u_{αβ̄}) − e^{−f(z,u)} ] dV
 = −∫_Ω [ log det(u_{αβ̄}) − (−f(z, u)) ] [ det(u_{αβ̄}) − e^{−f(z,u)} ] dV ≤ 0.

Thus F(u(·, t)) is monotonically decreasing as t approaches +∞. On the other
hand, u(·, t) is uniformly bounded in C²(Ω) by (10), so both F⁰(u(·, t)) and
f(z, u(·, t)) are uniformly bounded, hence F(u) is bounded. Therefore

∫₀^∞ ∫_Ω [ log det(u_{αβ̄}) + f(z, u) ] [ det(u_{αβ̄}) − e^{−f(z,u)} ] dV dt < ∞.
Observe that by the Mean Value Theorem, for x, y ∈ R,

(x + y)(eˣ − e^{−y}) = (x + y)² e^η ≥ e^{min(x,−y)}(x + y)²,

where η is between x and −y. Thus

[ log det(u_{αβ̄}) + f ] [ det(u_{αβ̄}) − e^{−f} ] ≥ C₉ [ log det(u_{αβ̄}) + f ]² = C₉ |u̇|²,

where C₉ is independent of t. Hence

(49) ∫₀^∞ ‖u̇‖²_{L²(Ω)} dt < ∞.
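The Mean Value Theorem bound used here, (x + y)(eˣ − e^{−y}) = (x + y)² e^η ≥ e^{min(x,−y)}(x + y)², can be spot-checked numerically on random samples (our sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=10000)
y = rng.uniform(-3, 3, size=10000)

# e^x - e^{-y} = (x + y) e^eta for some eta between -y and x (Mean Value Theorem),
# so (x + y)(e^x - e^{-y}) = (x + y)^2 e^eta >= e^{min(x, -y)} (x + y)^2.
lhs = (x + y) * (np.exp(x) - np.exp(-y))
rhs = np.exp(np.minimum(x, -y)) * (x + y) ** 2
assert np.all(lhs - rhs >= -1e-9 * (1 + np.abs(rhs)))
```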
Let

(50) Y(t) = ∫_Ω |u̇(·, t)|² det(u_{αβ̄}) dV.

Then

Ẏ = ∫_Ω ( 2ü u̇ + u̇² u^{αβ̄} u̇_{αβ̄} ) det(u_{αβ̄}) dV.

Differentiating (1) in t,

(51) ü − u^{αβ̄} u̇_{αβ̄} = f_u u̇,

so

Ẏ = ∫_Ω [ 2u̇ u̇_{αβ̄} u^{αβ̄} + u̇² ( 2f_u + ü − f_u u̇ ) ] det(u_{αβ̄}) dV
 = ∫_Ω [ ( 2f_u + ü − f_u u̇ ) u̇² − 2u̇_α u̇_β̄ u^{αβ̄} ] det(u_{αβ̄}) dV.

From (51), we get

∂_t ü − u^{αβ̄} ü_{αβ̄} − f_u ü ≤ f_{uu} u̇².

Since f_u ≤ 0 and f_{uu} ≤ 0, ü is bounded from above by the maximum
principle. Therefore

Ẏ ≤ C₁₀ ∫_Ω u̇² det(u_{αβ̄}) dV = C₁₀ Y,

so

(52) Y(t) ≤ Y(s) e^{C₁₀(t−s)} for t > s,

where C₁₀ is independent of t. By (49), (52) and the uniform boundedness
of det(u_{αβ̄}), we get

lim_{t→+∞} ‖u̇(·, t)‖_{L²(Ω)} = 0.

Since Ω is bounded, the L² norm controls the L¹ norm, hence

lim_{t→+∞} ‖u̇(·, t)‖_{L¹(Ω)} = 0.
Notice that by the Mean Value Theorem,

|eˣ − 1| ≤ e^{|x|}|x|,

so

∫_Ω |e^{u̇} − 1| dV ≤ e^{sup |u̇|} ∫_Ω |u̇| dV.

Hence e^{u̇} converges to 1 in L¹(Ω) as t approaches +∞. Now u(·, t) is bounded
in C²(Ω), so u(·, t) converges to a unique function ũ, at least sequentially in
C¹(Ω); hence f(z, u) → f(z, ũ) and

det(ũ_{αβ̄}) = lim_{t→∞} det(u(·, t)_{αβ̄}) = lim_{t→∞} e^{u̇ − f(z,u)} = e^{−f(z,ũ)},

i.e. ũ solves (8). □
References
[Bak83] Ilya J. Bakelman. Variational problems and elliptic Monge-Ampère equations.
J. Differential Geom., 18(4):669–699 (1984), 1983.
[Bło05] Zbigniew Błocki. Weak solutions to the complex Hessian equation. Ann. Inst.
Fourier (Grenoble), 55(5):1735–1756, 2005.
[BT76] Eric Bedford and B. A. Taylor. The Dirichlet problem for a complex Monge-
Ampère equation. Invent. Math., 37(1):1–44, 1976.
[Cao85] Huai Dong Cao. Deformation of Kähler metrics to Kähler-Einstein metrics on
compact Kähler manifolds. Invent. Math., 81(2):359–372, 1985.
[CKNS85] L. Caffarelli, J. J. Kohn, L. Nirenberg, and J. Spruck. The Dirichlet problem
for nonlinear second-order elliptic equations. II. Complex Monge-Ampère, and
uniformly elliptic, equations. Comm. Pure Appl. Math., 38(2):209–252, 1985.
[CT02] X. X. Chen and G. Tian. Ricci flow on Kähler-Einstein surfaces. Invent. Math.,
147(3):487–544, 2002.
[Eva82] Lawrence C. Evans. Classical solutions of fully nonlinear, convex, second-order
elliptic equations. Comm. Pure Appl. Math., 35(3):333–363, 1982.
[Gua98] Bo Guan. The Dirichlet problem for complex Monge-Ampère equations and
regularity of the pluri-complex Green function. Comm. Anal. Geom., 6(4):687–
703, 1998.
[Koł98] Sławomir Kołodziej. The complex Monge-Ampère equation. Acta Math.,
180(1):69–117, 1998.
[Kry83] N. V. Krylov. Boundedly inhomogeneous elliptic and parabolic equations in a
domain. Izv. Akad. Nauk SSSR Ser. Mat., 47(1):75–108, 1983.
[Li04] Song-Ying Li. On the Dirichlet problems for symmetric function equations of
the eigenvalues of the complex Hessian. Asian J. Math., 8(1):87–106, 2004.
[PS06] Duong H. Phong and Jacob Sturm. On stability and the convergence of the
Kähler-Ricci flow. J. Differential Geom., 72(1):149–168, 2006.
[Tru95] Neil S. Trudinger. On the Dirichlet problem for Hessian equations. Acta Math.,
175(2):151–164, 1995.
[Tso90] Kaising Tso. On a real Monge-Ampère functional. Invent. Math., 101(2):425–
448, 1990.
[TW97] Neil S. Trudinger and Xu-Jia Wang. Hessian measures. I. Topol. Methods Non-
linear Anal., 10(2):225–239, 1997. Dedicated to Olga Ladyzhenskaya.
[TW98] Neil S. Trudinger and Xu-Jia Wang. A Poincaré type inequality for Hessian
integrals. Calc. Var. Partial Differential Equations, 6(4):315–328, 1998.
[Wan94] Xu Jia Wang. A class of fully nonlinear elliptic equations and related function-
als. Indiana Univ. Math. J., 43(1):25–54, 1994.
[Yau78] Shing Tung Yau. On the Ricci curvature of a compact Kähler manifold and the
complex Monge-Ampère equation. I. Comm. Pure Appl. Math., 31(3):339–411,
1978.
Mathematics Department, Columbia University, New York, NY 10027
E-mail address: [email protected]
Mathematics Department, Columbia University, New York, NY 10027
E-mail address: [email protected]
0704.1823 | Compatible Actions and Cohomology of Crystallographic Groups

COMPATIBLE ACTIONS AND COHOMOLOGY OF CRYSTALLOGRAPHIC GROUPS
ALEJANDRO ADEM∗, JIANQUAN GE, JIANZHONG PAN, AND NANSEN PETROSYAN
Abstract. We compute the cohomology of crystallographic groups Γ = Zn ⋊Z/p with
holonomy of prime order by establishing the collapse at E2 of the spectral sequence
associated to their defining extension. As an application we compute the group of gerbes
associated to many six–dimensional toroidal orbifolds arising in string theory.
1. Introduction
Given a finite group G and an integral representation L for G (i.e. a homomorphism
G → GLn(Z), where L is the underlying ZG–module), we can define the semi–direct
product Γ = L⋊G. Calculating the cohomology of these groups is a problem of instrinsic
algebraic interest; indeed if the representation is faithful then these groups can be thought
of as crystallographic groups (see [7], page 74).
From the geometric point of view, the action on L gives rise to a G–action on the
n–torus X = Tn; this approach can be used to derive important examples of orbifolds,
known as toroidal orbifolds (see [2]). In the case when n = 6 these are of particular
interest in string theory (see [4], [11]).
Given the split group extension 0 → L → Γ → G → 1, the basic problem which
we address is that of providing conditions which imply the collapse (without extension
problems) of the associated Lyndon–Hochschild–Serre spectral sequence. The conditions
which we establish are representation–theoretic, namely depending solely on the structure
of the integral representation L. This can be a difficult problem (see [15] for further
background) and there are well–known examples where the spectral sequence does not
collapse.
Our approach is to systematically apply the methods used in [2]. The key idea is to
construct a free resolution F for the semidirect product L ⋊ G such that the Lyndon–
Hochschild–Serre spectral sequence of the group extension collapses at E2. This requires
Date: October 30, 2018.
Key words and phrases. spectral sequence, group cohomology.
∗The first author was partially supported by the NSF and by NSERC.
a chain–level argument, more specifically the construction of a compatible G–action on
a certain free resolution for the torsion–free abelian group L (see §2 for details). We
concentrate on the case of G = Z/p, a cyclic group of prime order p, as the representation
theory is well–understood. Our main algebraic result is the following
Theorem 1.1. Let G = Z/p, where p is any prime. If L is any finitely generated ZG–
lattice1, and Γ = L⋊G is the associated semi–direct product group, then for each k ≥ 0
H^k(Γ, Z) ≅ ⊕_{i+j=k} H^i(G, ∧^j(L*)),
where ∧j(L∗) denotes the j-th exterior power of the dual module L∗ = Hom(L,Z).
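As a quick sanity check of the formula (ours, not from the paper), take p = 2 and L = Z with the generator of G = Z/2 acting by −1, so that Γ = Z ⋊ Z/2 is the infinite dihedral group and L* ≅ Z⁻ is the sign module:

```latex
% \wedge^0 L^* = \mathbb{Z} (trivial), \quad \wedge^1 L^* = \mathbb{Z}^- (sign)
H^k(\Gamma,\mathbb{Z}) \;\cong\; H^k(\mathbb{Z}/2,\mathbb{Z}) \oplus H^{k-1}(\mathbb{Z}/2,\mathbb{Z}^-)
\;\cong\;
\begin{cases}
\mathbb{Z} & k = 0,\\
0 & k \text{ odd},\\
\mathbb{Z}/2 \oplus \mathbb{Z}/2 & k \geq 2 \text{ even},
\end{cases}
```

using H^i(Z/2, Z) = Z, 0, Z/2, 0, Z/2, … and H^i(Z/2, Z⁻) = 0, Z/2, 0, Z/2, …; this agrees with the known integral cohomology of D_∞ ≅ Z/2 ∗ Z/2.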
Expressed differently: these results imply a complete calculation for the integral coho-
mology of crystallographic groups Zn ⋊ Z/p where p is prime. These calculations can be
made explicit.
The theorem has an interesting geometric application:
Theorem 1.2. Let G = Z/p, where p is any prime. Suppose that G acts on a space X
homotopy equivalent to (S1)n with XG 6= ∅, then for each k ≥ 0
H^k(EG ×_G X, Z) ≅ ⊕_{i+j=k} H^i(G, H^j(X, Z)) ≅ H^k(Γ, Z).
where Γ = π1(X)⋊G.
On the other hand, the explicit computation for torsion–free crystallographic groups
with holonomy of prime order was carried out long ago by Charlap and Vásquez (see [8],
page 556). Combining the two results we obtain a complete calculation:
Theorem 1.3. Let Γ denote a crystallographic group with holonomy of prime order p,
expressed as an extension
1 → L → Γ → Z/p → 1
where L is a free abelian group of finite rank.
(1) If Γ is torsion–free, then L ∼= N ⊕ Z (it splits off a trivial direct summand) and
Hk(Γ,Z) ∼= H0(Z/p,∧k(L∗))⊕H1(Z/p,∧k−1(N∗))
for 0 ≤ k ≤ rk(L); Hk(Γ,Z) = 0 for k > rk(L).
(2) If Γ is not torsion–free, then H∗(Γ,Z) can be computed using Theorem 1.1.
1A ZG–lattice is a ZG–module which happens to be a free abelian group.
In this paper we also consider the situation for the cyclic group of order four; some
partial results are obtained but a general collapse has not been established. However,
based on these and other computations we conjecture that for G any cyclic group, the
spectral sequence associated to a semi–direct product of the form Zn ⋊ G must collapse
at E2.
In the last section we give an application of our methods to calculations for six–
dimensional toroidal orbifolds, showing that among the 18 inequivalent N = 1 supersym-
metric string theories on symmetric orbifolds of (2, 2)–type without discrete background,
only two of them cannot be analyzed using our methods, i.e. we cannot show the existence
of compatible actions for the associated modules. If X = [T^6/G] is an orbifold arising
this way, then our results provide a complete calculation for its associated group of gerbes
Gb(X) ≅ H^3(EG ×_G T^6, Z) (see [2] for more details).
2. Preliminary Results
The notion of a compatible action was first introduced in [9]. If such an action exists
it allows one to construct practical projective resolutions and from these to compute the
cohomology of the group. We will give the basic definition and the main theorem that
follows. More details can be found in [2].
Let Γ = L ×ρ G = L ⋊ G be the semidirect product of a finite group G and a finite
dimensional Z-lattice L via a representation ρ : G → GL(L). G acts on the group L by the
homomorphism ρ, and this extends linearly to an action on the group algebra R[L], where
R denotes a commutative ring with unit. We write lg for ρ(g)l where l ∈ R[L], g ∈ G.
In the rest of this paper R will represent Z (the integers) or Z(p) (the ring of integers
localized at a fixed prime p).
Definition 2.1. Given a free resolution ǫ : F → R of R over R[L], we say that it admits
an action of G compatible with ρ if for all g ∈ G there is an augmentation-preserving
chain map τ(g) : F → F such that
(1) τ(g)[l · f ] = l^g · [τ(g)f ] for all l ∈ R[L] and f ∈ F ,
(2) τ(g)τ(g′) = τ(gg′) for all g, g′ ∈ G,
(3) τ(1) = 1F .
The following two lemmas (see [2]) reduce the construction of compatible actions to the
case of faithful indecomposable representations.
Lemma 2.2. If ǫi : Fi → R is a projective R[Li]-resolution of R for i = 1, 2, then
ǫ1 ⊗ ǫ2 : F1 ⊗ F2 → R is a projective R[L1 × L2]-resolution of R. Furthermore, if G acts
compatibly on Fi by τi for i = 1, 2, then a compatible action of G on ǫ1⊗ ǫ2 : F1⊗F2 → R
is given by τ(g)(f1 ⊗ f2) = τ1(g)(f1)⊗ τ2(g)(f2).
Lemma 2.3. If L is an R[G1]-module, π : G2 → G1 a group homomorphism, and ε : F → R
is an R[L]-resolution of R such that G1 acts compatibly on it by τ′, then G2 also acts
compatibly on it by τ(g)f = τ′(π(g))f for any g ∈ G2.
If a compatible action exists, we can give F a Γ-module structure as follows. An
element γ ∈ Γ can be expressed uniquely as γ = lg, with l ∈ L and g ∈ G. We set
γ · f = (lg) · f = l · τ(g)f . Note that given any G–module M , this inflates to a Γ–action
on M via the projection Γ → G.
We can always construct a special free resolution F of R over L, characterized by the
property that the cochain complex Hom_L(F, R) for computing the cohomology H^∗(L, R)
has all coboundary maps zero (more details will be provided in the next section). Using
this fact, the following was proved in [2]:
Theorem 2.4 (Adem-Pan). Let ε : F → R be a special free resolution of R over L and
suppose that there is a compatible action of G on F . Then for all integers k ≥ 0, we have
H^k(L ⋊ G, R) = ⊕_{i+j=k} H^i(G, H^j(L, R)).
This result can be interpreted as saying that the Lyndon-Hochschild-Serre spectral
sequence
E_2^{p,q} = H^p(G, H^q(L, R)) ⇒ H^{p+q}(L ⋊ G, R)
collapses at E2 without extension problems. Note that this is not always the case; in fact
there are examples of semi–direct products of the form Zn ⋊ (Z/p)2 where the associated
spectral sequence has non–trivial differentials (see [15]). This will be discussed in §5.
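As a quick illustration of the collapse in Theorem 2.4 (a worked example we add here; it is not discussed in the original text), take Γ = Z ⋊ Z/2 ≅ D∞, the infinite dihedral group, with Z/2 acting by −1. Then L* = Z^− is the sign module, so ∧^0(L*) = Z and ∧^1(L*) = Z^−, and the formula reads:

```latex
H^k(D_\infty,\mathbb{Z}) \;\cong\; H^k(\mathbb{Z}/2,\mathbb{Z}) \oplus H^{k-1}(\mathbb{Z}/2,\mathbb{Z}^-)
\;=\;\begin{cases}
\mathbb{Z} & k = 0,\\
\mathbb{Z}/2 \oplus \mathbb{Z}/2 & k \geq 2 \text{ even},\\
0 & k \text{ odd},
\end{cases}
```

using H^{2k}(Z/2, Z) = Z/2 and H^{2k−1}(Z/2, Z^−) = Z/2 for k ≥ 1; this agrees with the classical Mayer–Vietoris computation of H^∗(Z/2 ∗ Z/2, Z).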
3. Construction of Compatible Actions
Let R[L] denote the group ring of L, a free abelian group with basis {x1, . . . , xn}.
Then the elements x1 − 1, . . . , xn − 1 form a regular sequence in R[L], hence the Koszul
complex K∗ = K(x1−1, . . . , xn−1) is a free resolution of the trivial module R. It has the
additional property of being a differential graded algebra (or DGA). We briefly recall how
it looks. There are generators a1, . . . , an in degree one, and the graded basis for K∗ can
be identified with the usual basis for the exterior algebra they generate. The differential
is given by the following formula: if ai1...ip = ai1 . . . aip is a basis element in Kp, then
d(a_{i1...ip}) = Σ_{j=1}^{p} (−1)^{j−1} (x_{ij} − 1) a_{i1...îj...ip}.
Now the cohomology of the free abelian group L is precisely an exterior algebra on n one-
dimensional generators, which in fact can be identified with the dual elements a*_1, . . . , a*_n.
In particular we see that the cochain complex Hom_{R[L]}(K∗, R) has zero differentials, and
hence K∗ is a special free resolution of R over R[L] (this resolution also appears in [7],
pp. 96–97).
We now consider how to construct a compatible G–action on K∗, given a G–module
structure on L.
Theorem 3.1. If G acts on the lattice L, let K∗ = K(x1 − 1, . . . xn − 1) denote the
special free resolution of R over R[L] defined using the Koszul complex associated to the
elements x1 − 1, . . . , xn − 1, where {x1, . . . , xn} form a basis for L. Suppose that there is
a homomorphism τ : G → Aut(K1) such that for every g ∈ G and a ∈ K1 it satisfies
dτ(g)(a) = d(a)^g,
where d : K1 → K0 is the usual Koszul differential, and d(a)^g ∈ K0 = R[L]. Then τ
extends to K∗ using its DGA structure and so defines a compatible G–action on K∗.
Proof. First we observe that τ(g) acts on K0 = R[L] via the original G–action, i.e.
τ(g)(x) = x^g for any x ∈ K0. Next we define the action on the basis of K∗ as a graded
R[L]–module, namely:
τ(g)(ai1 . . . aip) = τ(g)(ai1) . . . τ(g)(aip).
If α ∈ R[L] and u ∈ K∗, we define τ(g)(αu) = α^g τ(g)(u). By linearity and the DGA
structure of K∗, this will define τ : G → Aut(K∗) with the desired properties. □
Generally speaking it can be quite difficult to construct a compatible action; however
there is an important special case where it is quite straightforward.
Theorem 3.2. Let φ : G → Σn denote a group homomorphism, where Σn denotes the
symmetric group on n elements. Let G act on Zn via2 this homomorphism. Then the
associated Koszul complex K∗ admits a compatible G–action.
2Such a module is called a permutation module.
Proof. By Lemma 2.3 we can assume that G is a subgroup of Σn, hence it will suffice to
prove this for Σn itself. If we take generators a1, . . . , an for the Koszul complex corre-
sponding to the elements x1, . . . , xn in the underlying module L, then we can define τ as
follows:
τ(σ)(ai) = aσ(i).
This obviously defines a permutation representation onK1, and compatibility follows from
the fact that for all a_i, 1 ≤ i ≤ n, and σ ∈ Σ_n we have
dτ(σ)(a_i) = d(a_{σ(i)}) = x_{σ(i)} − 1 = (x_i − 1)^σ = d(a_i)^σ.
This completes the proof. □
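The compatibility identity in this proof can be checked mechanically; the following short Python sketch (ours, for illustration) verifies dτ(σ)(a_i) = (x_i − 1)^σ for every σ ∈ Σ_3:

```python
from itertools import permutations

n = 3

def x_minus_1(i):
    """The Koszul differential of a_i, i.e. x_i - 1, as {exponent tuple: coeff}."""
    e = [0] * n
    e[i] = 1
    return {tuple(e): 1, (0,) * n: -1}

def act(sigma, p):
    """sigma in Sigma_n acts on R[L] by permuting the variables: x_i -> x_{sigma(i)}."""
    out = {}
    for m, c in p.items():
        m2 = [0] * n
        for i, e in enumerate(m):
            m2[sigma[i]] = e
        out[tuple(m2)] = out.get(tuple(m2), 0) + c
    return out

for sigma in permutations(range(n)):
    for i in range(n):
        # tau(sigma)(a_i) = a_{sigma(i)}, so d(tau(sigma)(a_i)) = x_{sigma(i)} - 1,
        # which must equal (d a_i)^sigma = (x_i - 1)^sigma:
        assert x_minus_1(sigma[i]) == act(sigma, x_minus_1(i))
```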
Aside from permutation representations, it is difficult to construct general examples
of compatible actions. However if G is a cyclic group then we can handle an important
additional type of module.
Proposition 3.3. Let the cyclic group G = ⟨t | t^n = 1⟩ act on Z^{n−1} by:

ξ1 : t ↦
[  0   1   0  ...   0   0 ]
[  0   0   1  ...   0   0 ]
[             ...         ]
[  0   0   0  ...   0   1 ]
[ −1  −1  −1  ...  −1  −1 ]
∈ GL_{n−1}(Z).
If x_1, . . . , x_{n−1} is the canonical basis under which the action is represented by the matrix
above, then the free resolution K∗ = K∗(x_1 − 1, . . . , x_{n−1} − 1) admits an action of G compatible
with ξ1, which can be defined by:
τ(t)(a_1) = −x_{n−1}^{−1} a_{n−1},   τ(t)(a_k) = −x_{n−1}^{−1}(a_{n−1} − a_{k−1}),  1 < k ≤ n − 1.
Proof. The proof is a straightforward calculation verifying that τ defines a compatible
action. First we verify that τ^n = 1. For this we observe that if A is the matrix in GL_{n−1}(Z)
representing the generator t, then, expressed in terms of the basis {a_1, . . . , a_{n−1}}, we have
that τ(t) = x_{n−1}^{−1} A. If we iterate this action and use the fact that τ(g)(αu) = α^g τ(g)(u),
then we obtain
τ(t)^n = (x_{n−1}^{−1})^{t^{n−1}} (x_{n−1}^{−1})^{t^{n−2}} · · · (x_{n−1}^{−1})^{t} (x_{n−1}^{−1}) A^n = 1.
This follows from the fact that the characteristic polynomial of A is the polynomial
p(z) = 1 + z + · · · + z^{n−1}, hence we have that p(A) = 0 on the underlying module
L, and so in multiplicative notation we have that u^{t^{n−1}} · · · u^{t} · u = 1 for any u ∈ L.
Next we verify compatibility:
τ(t)d(a_1) = τ(t)(x_1 − 1) = x_{n−1}^{−1} − 1,
dτ(t)(a_1) = d(−x_{n−1}^{−1} a_{n−1}) = −x_{n−1}^{−1}(x_{n−1} − 1) = x_{n−1}^{−1} − 1.
Similarly, for all 1 < k ≤ n − 1 we have that:
τ(t)d(a_k) = τ(t)(x_k − 1) = x_{k−1} x_{n−1}^{−1} − 1
= −x_{n−1}^{−1}((x_{n−1} − 1) − (x_{k−1} − 1)) = d(−x_{n−1}^{−1}(a_{n−1} − a_{k−1})) = dτ(t)(a_k). □
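The verification above can also be run by machine. The following Python sketch is our own illustration (not from the paper): we take n = 4, so G = Z/4 acts on Z^3, and we read the matrix ξ1 as acting on column vectors, so that column j of A gives the image x_j^t. It checks both compatibility and τ(t)^4 = 1 on K1:

```python
# G = Z/4 on L = Z^3 via xi_1; columns of A give the images:
# x_1^t = x_3^{-1}, x_2^t = x_1 x_3^{-1}, x_3^t = x_2 x_3^{-1}.
A = [[0, 1, 0],
     [0, 0, 1],
     [-1, -1, -1]]
n = 3

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
        if r[m] == 0:
            del r[m]
    return r

def pmul(p, q):
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = tuple(a + b for a, b in zip(m1, m2))
            out[m] = out.get(m, 0) + c1 * c2
    return {m: c for m, c in out.items() if c}

def act(p):
    """Action on Laurent polynomials: the monomial x^m goes to x^(A m)."""
    out = {}
    for m, c in p.items():
        m2 = tuple(sum(A[i][j] * m[j] for j in range(n)) for i in range(n))
        out[m2] = out.get(m2, 0) + c
    return {m: c for m, c in out.items() if c}

key = (0, 0, -1)                       # exponent vector of x_3^{-1}
u, neg_u = {key: 1}, {key: -1}
# tau(t) on the degree-one generators (Proposition 3.3, 0-based indices):
TAU = {0: {2: neg_u},
       1: {0: u, 2: neg_u},
       2: {1: u, 2: neg_u}}

def tau(chain):                        # chain in K1: {index: Laurent polynomial}
    out = {}
    for k, c in chain.items():
        for j, w in TAU[k].items():
            out[j] = padd(out.get(j, {}), pmul(act(c), w))
    return {k: v for k, v in out.items() if v}

def d(chain):                          # d(a_k) = x_k - 1
    total = {}
    for k, c in chain.items():
        xk = [0] * n
        xk[k] = 1
        total = padd(total, pmul(c, {tuple(xk): 1, (0,) * n: -1}))
    return total

for k in range(n):
    ak = {k: {(0,) * n: 1}}
    assert d(tau(ak)) == act(d(ak))    # compatibility: d tau(t) = (d .)^t
    c = ak
    for _ in range(4):
        c = tau(c)
    assert c == ak                     # tau(t)^4 = 1
```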
For G = Z/n, the module which gives rise to the matrix in 3.3 is the augmentation
ideal IG, which has rank equal to n − 1. The following proposition is an application of
the results in this section.
Proposition 3.4. Let G = Z/n, and assume that L is a ZG–lattice such that
L ≅ M ⊕ IG^t,
where M is a permutation module. Then, for any coefficient ring R, the special free
resolution K∗ over R[L] admits a compatible G–action.
Proof. This follows from applying Lemma 2.2 to Theorem 3.2 and Proposition 3.3. �
For our cohomology calculations it will be practical to use the coefficient ring R = Z(p),
where p is a prime. In this situation, for G = Z/p (see [10]) there are only three distinct
isomorphism classes of indecomposable RG–lattices, namely R (the trivial module), IG
(the augmentation ideal) and RG, the group ring. Moreover, if L is any finitely generated
ZG–lattice, we can construct a ZG-homomorphism f : L′ → L such that
• L′ ≅ Z^r ⊕ ZG^s ⊕ IG^t,
• f is an isomorphism after tensoring with R.
We shall call L′ a representation of type (r, s, t).
4. Applications to Cohomology
We are now ready to prove our main result.
Theorem 4.1. Let G = Z/p, where p is any prime. If L is any finitely generated ZG–
lattice, and Γ = L⋊G is the associated semi–direct product group, then for each k ≥ 0
H^k(Γ, Z) ≅ ⊕_{i+j=k} H^i(G, ∧^j(L*)),
where ∧^j(L*) denotes the j-th exterior power of the dual module L* = Hom(L, Z).
Proof. First, let us prove the analogous result for the cohomology with coefficients in
R = Z(p). We make the assumption that L is a module of type (r, s, t). We need to verify:
H^k(Γ, R) ≅ ⊕_{i+j=k} H^i(G, ∧^j(L_R*)),
where L_R* = L* ⊗ R. In fact, we see that in the associated Lyndon-Hochschild-Serre
spectral sequence for the extension 0 → L → Γ → G → 1, with
E_2^{i,j}(R) = H^i(G, H^j(L, R)) ⇒ H^{i+j}(Γ, R),
there are no differentials and no extension problems. This follows from applying Theorem
2.4 and the fact that the module L gives rise to a special resolution with a compatible
action by Proposition 3.4.
Now let us consider the case when L is not of type (r, s, t). As observed previously, we
can construct a ZG–lattice L′ and a map f : L′ → L such that L′ is of type (r, s, t) and
f is an isomorphism after tensoring with R. Under these conditions f will induce a map
between the spectral sequences with R–coefficients for the extensions corresponding to L
and L′. However, by our hypotheses, ∧^k(L′_R*) and ∧^k(L_R*) are isomorphic as RG–modules
for all k ≥ 0, with the isomorphism induced by f . Hence the corresponding E2–terms are
isomorphic, and so the spectral sequences both collapse and the result follows.
It now remains to prove the result with coefficients in the integers Z. Note that by
the universal coefficient theorem we have H^∗(Γ, Z(p)) ≅ H^∗(Γ, Z) ⊗ Z(p), hence the only
relevant discrepancy between H^∗(Γ, Z(p)) and H^∗(Γ, Z) might arise from the presence of
torsion prime to p in the integral cohomology of Γ. However, a quick inspection of the
spectral sequence of the extension 0 → L → Γ → G → 1 with Z(q) coefficients, for any
prime q ≠ p, shows that there is no torsion prime to p in the cohomology, as L is free
abelian and G is a p–group. This completes our proof. □
In [2], Corollary 3.3, it was observed that the spectral sequence for the extension L ⋊ G
collapses at E2 without extension problems if the same is true for all the restricted
extensions L ⋊ G_p, where the G_p ⊂ G are the p–Sylow subgroups of G. We obtain the
following
Corollary 4.2. Let G denote a finite group of square–free order, and L any finitely
generated ZG–lattice. Then for all k ≥ 0 we have
H^k(L ⋊ G, Z) ≅ ⊕_{i+j=k} H^i(G, ∧^j(L*)).
We now consider a more geometric situation. Suppose that the group G = Z/p acts on
a space X which has the homotopy type of a product of circles.
Theorem 4.3. Let G = Z/p, where p is any prime. Suppose G acts on a space X
homotopy equivalent to (S^1)^n with X^G ≠ ∅. Then for each k ≥ 0
H^k(EG ×_G X, Z) ≅ ⊕_{i+j=k} H^i(G, H^j(X, Z)) ≅ H^k(Γ, Z),
where Γ = π1(X) ⋊ G.
Proof. The space EG ×_G X fits into a fibration X ↪ EG ×_G X → BG, which has a section
due to the fact that X^G ≠ ∅. Let Γ denote the fundamental group of EG ×_G X. The long
exact sequence for the homotopy groups of the fibration gives rise to a split extension
1 → π1(X) → Γ → G → 1. Since π1(X) ≅ L, a ZG–lattice, this shows that Γ ≅ L ⋊ G,
where the G–action on L is induced by the action on the fiber. Note that EG ×_G X is
an Eilenberg-MacLane space of type K(Γ, 1). Hence H^∗(EG ×_G X, Z) ≅ H^∗(Γ, Z), and
the result follows from Theorem 4.1. □
Note that a special case of this result was proved in [1], namely for actions where
π1(X)⊗ Z(p) is isomorphic to a direct sum of indecomposables of rank p− 1. The terms
H^i(Z/p, ∧^j(L*)) can be computed if L is known up to isomorphism. In fact, all we need
is to know L up to Z/p cohomology, as this will determine its indecomposable factors (at
least up to Z(p)–equivalence).
As we mentioned in the introduction, our results complete the calculation for the co-
homology of crystallographic groups with prime order holonomy when combined with
previous work on the torsion–free case (Bieberbach groups). The terms appearing in the
formulas in Theorem 1.3 can be explicitly computed, as was observed in [8].
5. Extensions to Other Groups
In this section we explore to what degree our results can be extended to other groups.
In the case of the cyclic group of order four, the indecomposable integral representations
are easy to describe, so it is a useful test case.
Let G = Z/4. From [3] we can give a complete list of all nine indecomposable, pairwise
nonequivalent integral representations in the following adapted table, where a is a
generator for Z/4:
ρ1 : a → 1;   ρ2 : a → −1;   ρ3 : a → ;   ρ4 : a → ;

ρ5 : a →
[ 0  0 −1 ]
[ 1  0 −1 ]
[ 0  1 −1 ]
;   ρ6 : a →
[  0  1  0 ]
[ −1  0  1 ]
[  0  0  1 ]

ρ7 : a →
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ 1 0 0 0 ]
;   ρ8 : a →
[ 0 0 −1 1 ]
[ 1 0 −1 1 ]
[ 0 1 −1 0 ]
[ 0 0  0 1 ]

ρ9 : a →
[  0  1  0 0 ]
[ −1  0  0 1 ]
[  0  0 −1 1 ]
[  0  0  0 1 ]
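One can at least confirm by machine that the displayed matrices have order four in GL(Z). The following Python check is our own illustration (ρ3 and ρ4 are omitted, since their rank-two matrices did not survive the source extraction), using exact integer arithmetic:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def order(M, cap=24):
    """Smallest k >= 1 with M^k = I, or None if there is none up to cap."""
    n = len(M)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    P = I
    for k in range(1, cap + 1):
        P = matmul(P, M)
        if P == I:
            return k
    return None

rho5 = [[0, 0, -1], [1, 0, -1], [0, 1, -1]]
rho6 = [[0, 1, 0], [-1, 0, 1], [0, 0, 1]]
rho7 = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
rho8 = [[0, 0, -1, 1], [1, 0, -1, 1], [0, 1, -1, 0], [0, 0, 0, 1]]
rho9 = [[0, 1, 0, 0], [-1, 0, 0, 1], [0, 0, -1, 1], [0, 0, 0, 1]]

assert all(order(M) == 4 for M in (rho5, rho6, rho7, rho8, rho9))
```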
Theorem 5.1. Let G = Z/4 and L a finitely generated ZG–lattice. If L is a direct sum
of indecomposables of type ρi for i ≤ 7 and i ≠ 6, then there is a compatible action and
H^k(L ⋊ G, Z) = ⊕_{i+j=k} H^i(G, H^j(L, Z)).
Proof. For the indecomposables ρ1, ρ2, ρ3, ρ4, compatible actions are known to exist on
the associated resolutions by the results in [2]. The same is true3 for ρ5 and ρ7, by 3.3 and
3.2. Hence if L is any integral representation expressed as a direct sum of ρi, i ≤ 7 and
i ≠ 6, the result follows from 2.2 and 2.4. □
3In fact ρ5 corresponds to the dual module IG*, but for cyclic groups there is an isomorphism IG ≅ IG*.
In the case of ρ6, ρ8 and ρ9, a compatible action is not known to exist. However, via an
explicit computation done in [14], we can establish the collapse of the spectral sequence
for the extension Γ6 associated to ρ6, yielding

H^i(Γ6, Z) =  Z              if i = 0, 1
              Z/4 ⊕ Z        if i = 2
              Z/2 ⊕ Z        if i = 3
              Z/4 ⊕ Z/2      if i ≥ 4

which verifies the statement analogous to 5.1 for the cohomology of Γ6.
Indeed, for all the examples of semidirect products we have considered so far, there is
a collapse at E2 in the Lyndon-Hochschild-Serre spectral sequence of the group extension
0 → L → L ⋊ G → G → 1, and therefore we can make the following:
Conjecture 5.2. Suppose that G is a finite cyclic group and L a finitely generated ZG–
lattice; then for any k ≥ 0 we have
H^k(L ⋊ G, Z) = ⊕_{i+j=k} H^i(G, H^j(L, Z)).
In [15] examples were given of semi-direct products of the form L ⋊ (Z/p)2 where the
associated mod p Lyndon-Hochschild-Serre spectral sequence has non–zero differentials.
This relies on the fact that for G = Z/p × Z/p, there exist ZG–modules M which are
not realizable as the cohomology of a G–space. These are the counterexamples to the
Steenrod Problem given by G. Carlsson (see [5]), where M can be identified with L*. No
such counterexamples exist for finite cyclic groups Z/N , which means that disproving our
conjecture will require a different approach.
We should also mention that by using results due to Nakaoka (see [12], pages 19 and
50) we know that the spectral sequence for a wreath product Z^n ⋊ G, where G ⊂ Σ_n
(the symmetric group) acts on Z^n via permutations, will always collapse at E2 without
extension problems. This can be interpreted as the fact that a strong collapse theorem
holds for all permutation modules and all finite groups G. A simple proof of this result
can be obtained by applying Theorem 2.4 to Theorem 3.2.
As suggested by [15], the results here can be considered part of a very general problem,
which is both interesting and quite challenging:
Problem: Given a finite group G, find suitable conditions on a ZG–lattice L so that the
spectral sequence for L⋊G collapses at E2.
6. Application to Computations for Toroidal Orbifolds
Interesting examples arise from calculations for six–dimensional orbifolds, where the
usual spectral sequence techniques become rather complicated. Here our methods provide
an important new ingredient that allows us to compute rigorously beyond the known
range. An important class of examples in physics arises from actions of a cyclic group G =
Z/N on T^6. In our scheme, these come from six-dimensional integral representations of
Z/N. However, the constraints from physics impose certain restrictions on them (see [11],
[16]). If θ ∈ GL_6(Z) is an element of order N, then it can be diagonalized over the complex
numbers. The associated eigenvalues, denoted α1, α2, α3, should satisfy α1α2α3 = 1, and
in addition all of the αi ≠ 1. The first condition implies that the orbifold T^6 → T^6/G
is a Calabi–Yau orbifold, and so admits a crepant resolution. These more restricted
representations have been classified4 in [11], where it is shown that there are precisely 18
inequivalent lattices of this type.
It turns out that calculations are focused on computing the equivariant cohomology
H^∗(EG ×_G T^6, Z) (see [2] and [4] for more details). As was observed in [2], we can
compute the group of gerbes associated to the orbifold X = [T^6/G] via the isomorphism
Gb(X) ≅ H^3(EG ×_G T^6, Z) ≅ H^3(Z^6 ⋊ G, Z),
whence our methods can be used to obtain some fairly complete results in this setting.
Before proceeding we recall that, as in Corollary 4.2, the collapse of the spectral sequence
for an extension L ⋊ G will follow from the existence of compatible Syl_p(G)-actions on
the Koszul complex K∗ for every prime p dividing |G|. If these exist we shall say that K∗
admits a local compatible action.
Theorem 6.1. Among the 18 inequivalent integral representations associated to the six–
dimensional orbifolds T^6/(Z/N) described above, only two of them are not known to admit
(local) compatible actions. Hence for those 16 examples there is an isomorphism
H^k(E(Z/N) ×_{Z/N} T^6, Z) ≅ ⊕_{i+j=k} H^i(Z/N, H^j(T^6, Z)).
Proof. Consider the defining matrix of an indecomposable action of Z/N on Zn with
determinant one, expressed in canonical form as
4In the language of physics, they show that there exist 18 inequivalent N = 1 supersymmetric string
theories on symmetric orbifolds of (2, 2)–type without discrete background.
[ 0  0  ...  0  v1 ]
[ 1  0  ...  0  v2 ]
[ 0  1  ...  0  v3 ]
[          ...     ]
[ 0  0  ...  1  vn ]
where v1 = ±1.
In [11] it was determined that the matrices that specify the indecomposable modules
appearing as summands for the N = 1 supersymmetric Z/N -orbifolds can be given as
follows, where the vectors represent the values (v1, v2, . . . , vn):
Indecomposable matrices relevant for N = 1 supersymmetry

n = 1:  Z/2(1) : (−1)
n = 2:  Z/3(2) : (−1,−1);   Z/4(2) : (−1, 0);   Z/6(2) : (−1, 1)
n = 3:  Z/4(3) : (−1,−1,−1);   Z/6(3) : (−1, 0, 0)
n = 4:  Z/6(4) : (−1, 0,−1, 0);   Z/8(4) : (−1, 0, 0, 0);   Z/12(4) : (−1, 0, 1, 0)
n = 5:  Z/6(5) : (−1,−1,−1,−1,−1);   Z/8(5) : (−1,−1, 0, 0,−1)
n = 6:  Z/7(6) : (−1,−1,−1,−1,−1,−1);   Z/8(6) : (−1, 0,−1, 0,−1, 0);   Z/12(6) : (−1,−1, 0, 1, 0,−1)
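Each entry of the table determines the last column of a matrix in the canonical form displayed in the proof below. The following Python check (ours, added for illustration) builds those matrices and confirms that every listed vector really generates a cyclic action of the stated order N:

```python
def companion(v):
    """Matrix with ones on the subdiagonal and (v1, ..., vn) as last column;
    its characteristic polynomial is z^n - v_n z^(n-1) - ... - v_1."""
    n = len(v)
    M = [[0] * n for _ in range(n)]
    for i in range(1, n):
        M[i][i - 1] = 1
    for i in range(n):
        M[i][-1] = v[i]
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def order(M, cap=24):
    """Smallest k >= 1 with M^k = I."""
    n = len(M)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    P = I
    for k in range(1, cap + 1):
        P = matmul(P, M)
        if P == I:
            return k
    return None

table = {2: [(-1,)],
         3: [(-1, -1)],
         4: [(-1, 0), (-1, -1, -1)],
         6: [(-1, 1), (-1, 0, 0), (-1, 0, -1, 0), (-1, -1, -1, -1, -1)],
         7: [(-1, -1, -1, -1, -1, -1)],
         8: [(-1, 0, 0, 0), (-1, -1, 0, 0, -1), (-1, 0, -1, 0, -1, 0)],
         12: [(-1, 0, 1, 0), (-1, -1, 0, 1, 0, -1)]}

for N, vecs in table.items():
    for v in vecs:
        assert order(companion(v)) == N
```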
We will show that all of these, except possibly Z/8(5) and Z/12(6), admit local compat-
ible actions. The examples of rank two or less were dealt with in [2]; for N = 2, 3, 6, 7 the
result follows directly from 4.1 and 4.2. The case Z/4(3) was covered in 5.1. We will deal
explicitly with the cases Z/8(4), Z/12(4) and Z/8(6).
(1) The group Z/8 acts on Z^4 with generator represented by the matrix:
[ 0  0  0 −1 ]
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  1  0 ]
We define a compatible action by the following formulas:
τ(t)(a_1) = −x_4^{−1} a_4,   τ(t)(a_2) = a_1,   τ(t)(a_3) = a_2,   τ(t)(a_4) = a_3.
(2) The group Z/12 acts on Z^4 with generator represented by the matrix:
T =
[ 0  0  0 −1 ]
[ 1  0  0  0 ]
[ 0  1  0  1 ]
[ 0  0  1  0 ]
For this example it suffices to construct a compatible action for Z/4 (with generator represented
by the matrix T^3), as we already know that a compatible action exists restricted
to Z/3. Now we have that Z/4 acts on Z^4 with
T^3 =
[ 0 −1  0 −1 ]
[ 0  0 −1  0 ]
[ 0  1  0  0 ]
[ 1  0  1  0 ]
which is a matrix whose square is −I. This implies that the module is a sum of two copies
of the faithful rank two indecomposable (see §5), for which a compatible action is known
to exist (as explained in Theorem 5.1), and so this case is taken care of.
(3) The group Z/8 acts on Z^6 with generator represented by the matrix:
[ 0  0  0  0  0 −1 ]
[ 1  0  0  0  0  0 ]
[ 0  1  0  0  0 −1 ]
[ 0  0  1  0  0  0 ]
[ 0  0  0  1  0 −1 ]
[ 0  0  0  0  1  0 ]
The formulas for a compatible action are given by
τ(t)(a_1) = −x_6^{−1} a_6,   τ(t)(a_2) = a_1,   τ(t)(a_3) = x_6^{−1}(a_2 − a_6),
τ(t)(a_4) = a_3,   τ(t)(a_5) = x_6^{−1}(a_4 − a_6),   τ(t)(a_6) = a_5.
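Since the exponents in these formulas were partly lost in the source, here is a Python verification (ours, added for illustration) that the action as stated, with x_6^{-1} factors, is compatible and has order eight. We read the matrix as acting on row vectors, so that row i gives the image x_i^t (e.g. x_1^t = x_6^{-1}, x_2^t = x_1):

```python
A = [[0, 0, 0, 0, 0, -1],
     [1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, -1],
     [0, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0, -1],
     [0, 0, 0, 0, 1, 0]]
n = 6

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
        if r[m] == 0:
            del r[m]
    return r

def pmul(p, q):
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = tuple(a + b for a, b in zip(m1, m2))
            out[m] = out.get(m, 0) + c1 * c2
    return {m: c for m, c in out.items() if c}

def act(p):
    """x^m -> x^(m A): the action on Laurent monomials, rows giving images."""
    out = {}
    for m, c in p.items():
        m2 = tuple(sum(m[i] * A[i][j] for i in range(n)) for j in range(n))
        out[m2] = out.get(m2, 0) + c
    return {m: c for m, c in out.items() if c}

key = (0, 0, 0, 0, 0, -1)              # exponent vector of x_6^{-1}
u, neg_u = {key: 1}, {key: -1}
one = {(0,) * n: 1}
TAU = {0: {5: neg_u},                  # tau(a_1) = -x_6^{-1} a_6
       1: {0: one},                    # tau(a_2) = a_1
       2: {1: u, 5: neg_u},            # tau(a_3) = x_6^{-1}(a_2 - a_6)
       3: {2: one},                    # tau(a_4) = a_3
       4: {3: u, 5: neg_u},            # tau(a_5) = x_6^{-1}(a_4 - a_6)
       5: {4: one}}                    # tau(a_6) = a_5

def tau(chain):
    out = {}
    for k, c in chain.items():
        for j, w in TAU[k].items():
            out[j] = padd(out.get(j, {}), pmul(act(c), w))
    return {k: v for k, v in out.items() if v}

def d(chain):
    total = {}
    for k, c in chain.items():
        xk = [0] * n
        xk[k] = 1
        total = padd(total, pmul(c, {tuple(xk): 1, (0,) * n: -1}))
    return total

for k in range(n):
    ak = {k: {(0,) * n: 1}}
    assert d(tau(ak)) == act(d(ak))    # compatibility
    c = ak
    for _ in range(8):
        c = tau(c)
    assert c == ak                     # tau(t)^8 = 1
```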
We have shown that (local) compatible actions exist for all representations constructed
using indecomposables other than Z/8(5) and Z/12(6). However, these indecomposables
can only appear once in the list due to dimensional constraints, namely in the form
Z/8(5) ⊕ Z/2(1) and Z/12(6) itself. Thus our proof is complete. □
References
[1] Adem, A., Z/pZ actions on (Sn)k, Trans. Amer. Math. Soc. 300 (1987), no. 2, 791–809.
[2] Adem, A. and Pan, J., Toroidal Orbifolds, Gerbes and Group Cohomology, Trans. Amer. Math.
Soc. 358 (2006), 3969-3983.
[3] Berman, S. and Gudivok, P., Indecomposable representations of finite groups over the ring of
p-adic integers, Izv. Akad. Nauk. SSSR 28 (1964), 875–910.
[4] de Boer, J., Dijkgraaf, R., Hori, K., Keurentjes, A., Morgan, J., Morrison, D. and Sethi, S.,
Triples, Fluxes, and Strings, Adv. Theor. Math. Phys. 4 (2000), no. 5, 995–1186.
[5] Carlsson, G., A counterexample to a conjecture of Steenrod, Inv. Math. 64 (1981), no. 1, 171–174.
[6] Cartan, H. and Eilenberg, S., Homological Algebra, Oxford University Press, Oxford, 1956.
[7] Charlap, L., Bieberbach Groups and Flat Manifolds, Universitext, Springer–Verlag, Berlin,
1986.
[8] Charlap, L. and Vasquez, A., Compact Flat Riemannian Manifolds II: the Cohomology of Zp–
manifolds, Amer. J. Math. 87 (1965), 551–563.
[9] Brady, T., Free resolutions for semi-direct products, Tohoku Math. J. (2) 45 (1993), no. 4, 535–
[10] Curtis, C.W. and Reiner, I., Representation Theory of Finite Groups and Associative
Algebras Wiley-Interscience (1987).
[11] Erler, J. and Klemm, A., Comment on the generation number in orbifold compactifications,
Comm. Math. Phys. 153 (1993), 579–604.
[12] Evens, L., Cohomology of groups, Oxford Mathematical Monographs, Oxford University Press
(1991).
[13] Joyce, D., Deforming Calabi-Yau orbifolds, Asian J. Math. 3 (1999), no. 4, 853–867.
[14] Petrosyan, N. Jumps in cohomology of groups, periodicity, and semi–direct products, Ph.D. Dis-
sertation, University of Wisconsin–Madison (2006).
[15] Totaro, B., Cohomology of Semidirect Product Groups, J. of Algebra 182 (1996), 469–475.
[16] Vafa, C. and Witten, E., On orbifolds with discrete torsion, J. Geom. Phys. 15 (1995), no. 3,
189–214.
Department of Mathematics, University of British Columbia, Vancouver BC V6T 1Z2,
Canada
E-mail address : [email protected]
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China
E-mail address : [email protected]
Institute of Mathematics, Academia Sinica, Beijing 100080, China
E-mail address : [email protected]
Department of Mathematics, Indiana University, Bloomington IN 47405, USA
E-mail address : [email protected]
arXiv:0704.1824
Noise and Local Time
Yaozhong Hu∗ and David Nualart†
Department of Mathematics , University of Kansas
405 Snow Hall , Lawrence, Kansas 66045-2142
[email protected] and [email protected]
Abstract
The aim of this paper is to study the d-dimensional stochastic heat
equation with a multiplicative Gaussian noise which is white in space
and has the covariance of a fractional Brownian motion with Hurst
parameter H ∈ (0, 1) in time. Two types of equations are considered.
First we consider the equation in the Itô-Skorohod sense, and later in
the Stratonovich sense. An explicit chaos development for the solution is
obtained. On the other hand, the moments of the solution are expressed
in terms of the exponential moments of some weighted intersection local
time of the Brownian motion.
1 Introduction
This paper deals with the d-dimensional stochastic heat equation
∂u/∂t = (1/2)∆u + u ⋄ ∂²WH/∂t∂x,   (1.1)
driven by a Gaussian noise WH which is a white noise in the spatial variable
and a fractional Brownian motion with Hurst parameter H ∈ (0, 1) in the time
variable (see (2.1) in the next section for a precise definition of this noise). The
initial condition u0 is a bounded continuous function on R^d, and the solution will
be a random field {ut,x, t ≥ 0, x ∈ Rd}. The symbol ⋄ in Equation (1.1) denotes
the Wick product. For H = 1/2, ∂²WH/∂t∂x is a space-time white noise, and in this
case, Equation (1.1) coincides with the stochastic heat equation considered by
Walsh (see [17]). We know that in this case the solution exists only in dimension
one (d = 1).
There has been some recent interest in studying stochastic partial differential
equations driven by a fractional noise. Linear stochastic evolution equations in
∗Y. Hu is supported by the National Science Foundation under DMS0504783
†D. Nualart is supported by the National Science Foundation under DMS0604207
a Hilbert space driven by an additive cylindrical fBm with Hurst parameter H
were studied by Duncan et al. in [3] in the case H ∈ (1/2, 1) and by Tindel et al.
in [15] in the general case, where they provide necessary and sufficient conditions
for the existence and uniqueness of an evolution solution. In particular, the
heat equation on R^d has a unique solution if and only if H > d/4. The same result holds when
one adds to the above equation a nonlinearity of the form b(t, x, u), where b
satisfies the usual linear growth and Lipschitz conditions in the variable u, uni-
formly with respect to (t, x) (see Maslowski and Nualart in [9]). The stochastic
heat equation on [0,∞) × Rd with a multiplicative fractional white noise of
Hurst parameter H = (H0, H1, . . . , Hd) has been studied by Hu in [6] under the
conditions 1/2 < Hi < 1 for i = 0, . . . , d and ∑_{i=0}^{d} Hi < d − (2H0 − 1)/(2H0).
The main purpose of this paper is to find conditions on H and d for the
solution to Equation (1.1) to exist as a real-valued stochastic process, and to
relate the moments of the solution to the exponential moments of weighted in-
tersection local times. This relation is based on Feynman-Kac’s formula applied
to a regularization of Equation (1.1). In order to illustrate this fact, consider
the particular case d = 1 and H = 1/2. It is known that there is no Feynman-
Kac’s formula for the solution of the one-dimensional stochastic heat equation
driven by a space-time white noise. Nevertheless, using an approximation of the
solution by regularizing the noise we can establish the following formula for the
moments:
E[(u_{t,x})^k] = E[ ∏_{j=1}^{k} u0(x + B^j_t) exp( ∑_{1≤i<j≤k} ∫_0^t δ0(B^i_s − B^j_s) ds ) ],   (1.2)
for all k ≥ 2, where B_t = (B^1_t, . . . , B^k_t) is a k-dimensional Brownian motion independent of the
space-time white noise W^{1/2}. In the case H > 1/2 and d ≥ 1, a similar formula
holds, but ∫_0^t δ0(B^i_s − B^j_s) ds has to be replaced by the weighted intersection
local time
L_t = H(2H − 1) ∫_0^t ∫_0^t |s − r|^{2H−2} δ0(B^i_s − B^j_r) ds dr,   (1.3)
where B^j, j ≥ 1, are independent d-dimensional Brownian motions (see Theorem 5.3).
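The regularization idea behind these formulas can be made tangible numerically. The Python sketch below (ours, not from the paper; all parameter choices are illustrative) replaces δ0 by a Gaussian mollifier δ_ε and estimates E ∫_0^t δ_ε(B^1_s − B^2_s) ds by Monte Carlo, comparing against the exact value obtained from the fact that E δ_ε(B^1_s − B^2_s) is the N(0, 2s + ε) density at 0:

```python
import math
import random

random.seed(0)
eps, t, m, paths = 0.5, 1.0, 50, 4000
dt = t / m

def delta_eps(x):
    """Gaussian mollifier approximating the Dirac delta delta_0."""
    return math.exp(-x * x / (2 * eps)) / math.sqrt(2 * math.pi * eps)

est = 0.0
for _ in range(paths):
    b1 = b2 = 0.0
    acc = 0.0
    for _k in range(m):
        acc += delta_eps(b1 - b2) * dt          # left-endpoint Riemann sum
        b1 += random.gauss(0.0, math.sqrt(dt))
        b2 += random.gauss(0.0, math.sqrt(dt))
    est += acc / paths

# integral of (2*pi*(2s + eps))^(-1/2) over [0, t], computed in closed form:
exact = (math.sqrt(2 * t + eps) - math.sqrt(eps)) / math.sqrt(2 * math.pi)
assert abs(est - exact) / exact < 0.1
```

As ε ↓ 0 the limit of such mollified functionals is the intersection local time appearing in (1.2); the simulation is only a sanity check of the mollified identity, not of the limiting formula itself.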
The solution of Equation (1.1) has a formal Wiener chaos expansion u_{t,x} =
∑_{n=0}^{∞} I_n(f_n(·, t, x)). Then, for the existence of a real-valued square integrable
solution we need
∑_{n=0}^{∞} n! ‖f_n(·, t, x)‖²_{H_d^{⊗n}} < ∞,   (1.4)
where H_d is the Hilbert space associated with the covariance of the noise WH. It
turns out that, if H > 1/2, the asymptotic behavior of the norms ‖f_n(·, t, x)‖_{H_d^{⊗n}}
is similar to the behavior of the nth moment of the random variable L_t defined
in (1.3). More precisely, if u0 is a constant K, for all n ≥ 1 we have
(n!)² ‖f_n(·, t, x)‖²_{H_d^{⊗n}} = K² E(L^n_t).
These facts lead to the following results:

i) If d = 1 and H ≥ 1/2, the series (1.4) converges, and there exists a solution
to Equation (1.1) which has moments of all orders that can be expressed in
terms of the exponential moments of the weighted intersection local times
L_t. In the case H = 1/2 we just need the local time of a one-dimensional
standard Brownian motion (see (1.2)).

ii) If H > 1/2 and d < 4H, the norms ‖f_n(·, t, x)‖_{H_d^{⊗n}} are finite and
E(L^n_t) < ∞ for all n. In the particular case d = 2, the series (1.4) converges if t is
small enough, and the solution exists in a small time interval. Similarly,
if d = 2 the random variable L_t satisfies E(exp λL_t) < ∞ if λ and t are
small enough.

iii) If d = 1 and 3/8 < H < 1/2, the norms ‖f_n(·, t, x)‖_{H_d^{⊗n}} are finite and
E(L^n_t) < ∞ for all n.
A natural problem is to investigate what happens if we replace the Wick
product by the ordinary product in Equation (1.1), that is, we consider the
equation
∂u/∂t = (1/2)∆u + u ∂²WH/∂t∂x.   (1.5)
In terms of the mild formulation, the Wick product leads to the use of Itô-
Skorohod stochastic integrals, whereas the ordinary product requires the use
of Stratonovich integrals. For this reason, if we use the ordinary product we
must assume d = 1 and H > 1/2. In this case we show that the solution exists
and its moments can be computed in terms of exponential moments of weighted
intersection local times and weighted self-intersection local times in the case H > 3/4.
The paper is organized as follows. Section 2 contains some preliminaries on
the fractional noise WH and the Skorohod integral with respect to it. In Section
3 we present the results on the moments of the weighted intersection local times
assuming H ≥ 1/2. Section 4 is devoted to the study of the Wiener chaos expansion
of the solution to Equation (1.1). The case H < 1/2 is more involved because
it requires the use of fractional derivatives. We show here that if 3/8 < H < 1/2,
the norms ‖f_n(·, t, x)‖_{H_d^{⊗n}} are finite and they are related to the moments of a
fractional derivative of the intersection local time. We derive the formulas for
the moments of the solution in the case H ≥ 1/2 in Section 5. Finally, Section 6
deals with equations defined using ordinary product and Stratonovich integrals.
2 Preliminaries
Suppose that W^H = {W^H(t, A), t ≥ 0, A ∈ B(R^d), |A| < ∞}, where B(R^d) is
the Borel σ-algebra of R^d, is a zero mean Gaussian family of random variables
with the covariance function

E(W^H(t, A)W^H(s, B)) = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}) |A ∩ B|, (2.1)

defined in a complete probability space (Ω, F, P), where H ∈ (0, 1), and |A|
denotes the Lebesgue measure of A. Thus, for each Borel set A with finite
Lebesgue measure, {W^H(t, A), t ≥ 0} is a fractional Brownian motion (fBm)
with Hurst parameter H and variance t^{2H}|A|, and the fractional Brownian
motions corresponding to disjoint sets are independent.
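As a quick illustration (not from the paper), the covariance (2.1) for a fixed set A = B can be checked numerically to be a valid, positive-definite fBm covariance on a small time grid; the helper names `cov` and `cholesky` are hypothetical, and the Cholesky factorization is used only as a positive-definiteness test:

```python
import math

def cov(t, s, H, area):
    # covariance (2.1) with A = B and |A| = area
    return 0.5 * (t**(2*H) + s**(2*H) - abs(t - s)**(2*H)) * area

def cholesky(M):
    # plain Cholesky; raises ValueError if M is not positive definite
    n = len(M)
    L = [[0.0]*n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k]*L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

H, area = 0.7, 2.0
times = [0.5, 1.0, 1.5, 2.0]
M = [[cov(t, s, H, area) for s in times] for t in times]
L = cholesky(M)     # succeeds: (2.1) is a genuine covariance on this grid
print(M[1][1])      # variance t^{2H}|A| at t = 1 equals |A| = 2.0
```

The diagonal entries reproduce the stated variance t^{2H}|A|, and the successful factorization confirms positive definiteness for this H.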
Then, the multiplicative noise ∂^{d+1}W^H/(∂t ∂x_1 · · · ∂x_d) appearing in Equation (1.1) is the for-
mal derivative of the random measure W^H(t, A):

W^H(t, A) = ∫_0^t ∫_A (∂^{d+1}W^H/(∂s ∂x_1 · · · ∂x_d)) dx ds.
We know that there is an integral representation of the form

W^H(t, A) = ∫_0^t ∫_A K_H(t, s) W(ds, dx),

where W is a space-time white noise, and the square integrable kernel K_H is
given by

K_H(t, s) = c_H s^{1/2−H} ∫_s^t (u − s)^{H−3/2} u^{H−1/2} du,

for some constant c_H. We will set K_H(t, s) = 0 if s > t.
Denote by E the space of step functions on R_+. Let H be the closure of E
with respect to the inner product induced by

⟨1_{[0,t]}, 1_{[0,s]}⟩_H = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}).

The operator K*_H : E → L²(R_+) defined by K*_H(1_{[0,t]})(s) = K_H(t, s) provides
a linear isometry between H and L²(R_+).
The mapping 1_{[0,t]×A} → W^H(t, A) extends to a linear isometry between the
tensor product H ⊗ L²(R^d), denoted by H_d, and the Gaussian space spanned
by W^H. We will denote this isometry by W^H. Then, for each ϕ ∈ H_d we have

W^H(ϕ) = ∫_0^∞ ∫_{R^d} (K*_H ⊗ I)ϕ(t, x) W(dt, dx).

We will make use of the notation W^H(ϕ) = ∫_0^∞ ∫_{R^d} ϕ dW^H.
If H = 1/2, then H = L²(R_+), and the operator K*_H is the identity. In this
case, we have H_d = L²(R_+ × R^d).
Suppose now that H > 1/2. The operator K*_H can be expressed as a fractional
integral operator composed with power functions (see [11]). More precisely, for
any function ϕ ∈ E with support included in the time interval [0, T] we have

(K*_H ϕ)(t) = c_H Γ(H − 1/2) t^{1/2−H} ( I^{H−1/2}_{T−} ϕ(s) s^{H−1/2} )(t),

where I^{H−1/2}_{T−} is the right-sided fractional integral operator defined by

(I^{H−1/2}_{T−} f)(t) = (1/Γ(H − 1/2)) ∫_t^T (s − t)^{H−3/2} f(s) ds.
In this case the space H is not a space of functions (see [14]) because it contains
distributions. Denote by |H| the space of measurable functions on [0, T] such
that

∫_0^∞ ∫_0^∞ |r − u|^{2H−2} |ϕ_r||ϕ_u| dr du < ∞.

Then, |H| ⊂ H and the inner product in the space H can be expressed in the
following form for ϕ, ψ ∈ |H|:

⟨ϕ, ψ⟩_H = ∫_0^∞ ∫_0^∞ φ(r, u) ϕ_r ψ_u dr du, (2.2)

where φ(s, t) = H(2H − 1)|t − s|^{2H−2}.
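As a sanity check on (2.2) (an illustration, not part of the paper), ⟨1_{[0,1]}, 1_{[0,1]}⟩_H should equal Var(B^H_1) = 1. A midpoint-rule sketch, with offset grids so that the integrable singularity on the diagonal is never evaluated exactly:

```python
import math

H = 0.7
phi = lambda r, u: H*(2*H - 1)*abs(r - u)**(2*H - 2)  # kernel in (2.2)

# midpoint rule on offset grids so r never coincides with u
Nr, Nu = 800, 801
hr, hu = 1.0/Nr, 1.0/Nu
val = 0.0
for i in range(Nr):
    r = (i + 0.5)*hr
    for j in range(Nu):
        u = (j + 0.5)*hu
        val += phi(r, u)*hr*hu

print(val)  # should be close to 1 = <1_[0,1], 1_[0,1]>_H
```

The accuracy is limited by the |r − u|^{2H−2} singularity, so only rough agreement is expected; the exact value 1 follows from integrating the kernel in closed form.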
Using Hölder and Hardy-Littlewood inequalities, one can show (see [10])

‖ϕ‖_{H_d} ≤ β_H ‖ϕ‖_{L^{1/H}(R_+; L²(R^d))}, (2.3)

and this easily implies that

‖ϕ‖_{H_d^{⊗n}} ≤ β_H^n ‖ϕ‖_{L^{1/H}(R_+^n; L²(R^{nd}))}. (2.4)
If H < 1/2, the operator K*_H can be expressed as a fractional derivative
operator composed with power functions (see [11]). More precisely, for any
function ϕ ∈ E with support included in the time interval [0, T] we have

(K*_H ϕ)(t) = c_H Γ(H + 1/2) t^{1/2−H} ( D^{1/2−H}_{T−} ϕ(s) s^{H−1/2} )(t),

where D^{1/2−H}_{T−} is the right-sided fractional derivative operator defined by

(D^{1/2−H}_{T−} f)(t) = (1/Γ(H + 1/2)) ( f(t)/(T − t)^{1/2−H} + (H − 1/2) ∫_t^T (f(s) − f(t)) (s − t)^{H−3/2} ds ).

Moreover, for any γ > 1/2 − H and any T > 0 we have C^γ([0, T]) ⊂ H =
I^{1/2−H}_{T−}(L²([0, T])).
If ϕ is a function with support on [0, T], we can express the operator K*_H in
the following form

K*_H ϕ(t) = K_H(T, t)ϕ(t) + ∫_t^T [ϕ(s) − ϕ(t)] (∂K_H/∂s)(s, t) ds. (2.5)

We are going to use the following notation for the operator K*_H:

(K*_H ϕ)(r) = ∫_{[0,T]} ϕ(t) K*_H(dt, r). (2.6)

Notice that if H > 1/2, the kernel K_H vanishes on the diagonal and we have

K*_H(dt, r) = (∂K_H/∂t)(t, r) 1_{[r,T]}(t) dt.
Let us now present some preliminaries on the Skorohod integral and the Wick
product. The nth Wiener chaos, denoted by H_n, is defined as the closed linear
span of the random variables of the form H_n(W^H(ϕ)), where ϕ is an element of
H_d with norm one and H_n is the nth Hermite polynomial. We denote by I_n the
linear isometry between H_d^{⊗n} (equipped with the modified norm √(n!) ‖·‖_{H_d^{⊗n}})
and the nth Wiener chaos H_n, given by I_n(ϕ^{⊗n}) = n! H_n(W^H(ϕ)), for any
ϕ ∈ H_d with ‖ϕ‖_{H_d} = 1. Any square integrable random variable, which is
measurable with respect to the σ-field generated by W^H, has an orthogonal
Wiener chaos expansion of the form

F = E(F) + Σ_{n=1}^∞ I_n(f_n),

where f_n are symmetric elements of H_d^{⊗n}, uniquely determined by F.
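The orthogonality underlying the chaos decomposition can be checked exactly for a single Gaussian variable. A sketch (illustrative only; it uses the monic Hermite polynomials He_n, a normalization that may differ from the paper's H_n by a factor n!) computes E[He_n(X)He_m(X)] exactly via the Gaussian moments E[X^{2k}] = (2k − 1)!!:

```python
def he(n):
    # coefficients (low degree first) of monic Hermite He_n:
    # He_0 = 1, He_1 = x, He_{k+1} = x He_k - k He_{k-1}
    a, b = [1], [0, 1]
    if n == 0:
        return a
    for k in range(1, n):
        c = [0] + b                       # multiply He_k by x
        c = [c[i] - (k*a[i] if i < len(a) else 0) for i in range(len(c))]
        a, b = b, c
    return b

def gauss_moment(k):
    # E[X^k] for X ~ N(0,1): 0 if k odd, (k-1)!! if k even
    if k % 2:
        return 0
    m = 1
    for j in range(1, k, 2):
        m *= j
    return m

def inner(p, q):
    # E[p(X) q(X)] computed exactly from polynomial coefficients
    prod = [0]*(len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            prod[i+j] += pi*qj
    return sum(c*gauss_moment(k) for k, c in enumerate(prod))

print(inner(he(3), he(3)))  # E[He_3^2] = 3! = 6
print(inner(he(3), he(2)))  # orthogonality across chaoses: 0
```

The values E[He_n²] = n! and E[He_n He_m] = 0 for n ≠ m are exactly the orthogonality relations that make the chaos expansion above an orthogonal decomposition.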
Consider a random field u = {u_{t,x}, t ≥ 0, x ∈ R^d} such that E(u²_{t,x}) < ∞
for all t, x. Then, u has a Wiener chaos expansion of the form

u_{t,x} = E(u_{t,x}) + Σ_{n=1}^∞ I_n(f_n(·, t, x)), (2.7)

where the series converges in L²(Ω).
Definition 2.1 We say the random field u satisfying (2.7) is Skorohod inte-
grable if E(u) ∈ H_d, for all n ≥ 1, f_n ∈ H_d^{⊗(n+1)}, and the series

W^H(E(u)) + Σ_{n=1}^∞ I_{n+1}(f̃_n)

converges in L²(Ω), where f̃_n denotes the symmetrization of f_n. We will denote
the sum of this series by δ(u) = ∫_0^∞ ∫_{R^d} u δW^H.
The Skorohod integral coincides with the adjoint of the derivative operator.
That is, if we define the space D^{1,2} as the closure of the set of smooth and
cylindrical random variables of the form

F = f(W^H(h_1), . . . , W^H(h_n)),

h_i ∈ H_d, f ∈ C^∞_p(R^n) (f and all its partial derivatives have polynomial growth),
under the norm

‖F‖_{1,2} = ( E(F²) + E(‖DF‖²_{H_d}) )^{1/2},

where

DF = Σ_{j=1}^n (∂f/∂x_j)(W^H(h_1), . . . , W^H(h_n)) h_j,

then the following duality formula holds

E(δ(u)F) = E( ⟨DF, u⟩_{H_d} ), (2.8)

for any F ∈ D^{1,2} and any Skorohod integrable process u.
If F ∈ D^{1,2} and h is a function which belongs to H_d, then Fh is Skorohod
integrable and, by definition, the Wick product equals the Skorohod integral
of Fh:

δ(Fh) = F ⋄ W^H(h). (2.9)

This formula justifies the use of the Wick product in the formulation of Equation
(1.1).
Finally, let us remark that in the case H = 1/2, if u_{t,x} is an adapted stochastic
process such that E( ∫_0^∞ ∫_{R^d} u²_{t,x} dx dt ) < ∞, then u is Skorohod integrable and
δ(u) coincides with the Itô stochastic integral:

δ(u) = ∫_0^∞ ∫_{R^d} u_{t,x} W(dt, dx).
3 Weighted intersection local times for standard
Brownian motions
In this section we will introduce different kinds of weighted intersection local
times which are relevant in computing the moments of the solutions of stochastic
heat equations with multiplicative fractional noise.
Suppose first that B¹ and B² are independent d-dimensional standard Brow-
nian motions. Consider a nonnegative measurable function η(s, t) on R²_+. We
are interested in the weighted intersection local time formally defined by

I = ∫_0^T ∫_0^T η(s, t) δ_0(B¹_s − B²_t) ds dt. (3.1)
We will make use of the following conditions on the weight η:
C1) For all T > 0,

‖η‖_{1,T} := max( sup_{0≤t≤T} ∫_0^T η(s, t) ds, sup_{0≤s≤T} ∫_0^T η(s, t) dt ) < ∞.

C2) For all T > 0 there exist constants γ_T > 0 and H ∈ (1/2, 1) such that

η(s, t) ≤ γ_T |s − t|^{2H−2},

for all s, t ≤ T.
Clearly, C2) is stronger than C1). We will denote by p_t(x) the d-dimensional
heat kernel p_t(x) = (2πt)^{−d/2} e^{−|x|²/2t}. Consider the approximation of the inter-
section local time (3.1) defined by

I_ε = ∫_0^T ∫_0^T η(s, t) p_ε(B¹_s − B²_t) ds dt. (3.2)
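For η = 1 and d = 1, the mean of the approximation (3.2) can be computed in closed form, since B¹_s − B²_t ~ N(0, s + t) gives E p_ε(B¹_s − B²_t) = p_{s+t+ε}(0). A small deterministic sketch (illustrative, not from the paper) compares that closed form with a direct numerical time integral:

```python
import math

T, eps = 1.0, 0.1

# E p_eps(B1_s - B2_t) = p_{s+t+eps}(0) because B1_s - B2_t ~ N(0, s + t)
mean_kernel = lambda s, t: (2*math.pi*(s + t + eps))**-0.5

# closed form of E(I_eps) for eta = 1, d = 1:
#   (2 pi)^{-1/2} (4/3) [ (2T+eps)^{3/2} - 2 (T+eps)^{3/2} + eps^{3/2} ]
closed = (4.0/3.0)*((2*T + eps)**1.5 - 2*(T + eps)**1.5
                    + eps**1.5)/math.sqrt(2*math.pi)

# midpoint rule for the double time integral
N = 400
h = T/N
num = sum(mean_kernel((i + 0.5)*h, (j + 0.5)*h)
          for i in range(N) for j in range(N))*h*h

print(closed, num)   # the two values agree to several decimals
```

The integrand here is smooth (ε > 0 regularizes the kernel), so the midpoint rule converges quickly; the singular behavior only appears in the limit ε ↓ 0 studied below.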
Let us compute the kth moment of I_ε, where k ≥ 1 is an integer. We can write

E(I_ε^k) = ∫_{[0,T]^{2k}} Π_{i=1}^k η(s_i, t_i) ψ_ε(s, t) ds dt, (3.3)

where s = (s_1, . . . , s_k), t = (t_1, . . . , t_k) and

ψ_ε(s, t) = E( p_ε(B¹_{s_1} − B²_{t_1}) · · · p_ε(B¹_{s_k} − B²_{t_k}) ). (3.4)
Using the Fourier transform of the heat kernel we can write

ψ_ε(s, t) = (2π)^{−kd} ∫_{R^{kd}} E exp( i Σ_{j=1}^k ⟨ξ_j, B¹_{s_j} − B²_{t_j}⟩ ) exp( −(ε/2) Σ_{j=1}^k |ξ_j|² ) dξ
= (2π)^{−kd} ∫_{R^{kd}} exp( −(ε/2) Σ_{j=1}^k |ξ_j|² − (1/2) Σ_{j,l=1}^k ⟨ξ_j, ξ_l⟩ Cov( b¹_{s_j} − b²_{t_j}, b¹_{s_l} − b²_{t_l} ) ) dξ, (3.5)

where ξ = (ξ_1, . . . , ξ_k) and b^i_t, i = 1, 2, are independent one-dimensional Brow-
nian motions. Then ψ_ε(s, t) ≤ ψ(s, t), where

ψ(s, t) = (2π)^{−kd/2} [det (s_j ∧ s_l + t_j ∧ t_l)]^{−d/2}. (3.6)

Set

α_k = ∫_{[0,T]^{2k}} Π_{i=1}^k η(s_i, t_i) ψ(s, t) ds dt. (3.7)
Then, if α_k < ∞ for all k ≥ 1, the family I_ε converges in L^p, for all p ≥ 2, to a
limit I and E(I^k) = α_k. In fact,

lim_{ε,δ↓0} E(I_ε I_δ) = α_2,

so I_ε converges in L², and the convergence in L^p follows from the boundedness
in L^q for q > p. Then the following result holds.
Proposition 3.1 Suppose that C1) holds and d = 1. Then, for all λ > 0 the
random variable defined in (3.2) satisfies

E (exp (λI_ε)) ≤ 1 + Φ( (√T/2) ‖η‖_{1,T} λ ), (3.8)

where Φ(x) = Σ_{k=1}^∞ x^k / Γ(k/2 + 1). Also, I_ε converges in L^p for all p ≥ 2, and the
limit, denoted by I, satisfies the estimate (3.8).
Proof The term ψ(s, t) defined in (3.6) can be estimated using the Cauchy-
Schwarz inequality:

ψ(s, t) ≤ (2π)^{−k/2} 2^{−k/2} [det (s_j ∧ s_l)]^{−1/4} [det (t_j ∧ t_l)]^{−1/4} = 2^{−k} π^{−k/2} [β(s)β(t)]^{−1/4}, (3.9)

where for any element (s_1, . . . , s_k) ∈ (0, ∞)^k with s_i ≠ s_j if i ≠ j, we denote
by σ the permutation of its coordinates such that s_{σ(1)} < · · · < s_{σ(k)} and
β(s) = s_{σ(1)}(s_{σ(2)} − s_{σ(1)}) · · · (s_{σ(k)} − s_{σ(k−1)}). Therefore, from (3.9) and (3.7)
we obtain

α_k ≤ 2^{−k} π^{−k/2} ∫_{[0,T]^{2k}} Π_{i=1}^k η(s_i, t_i) [β(s)β(t)]^{−1/4} ds dt. (3.10)

Applying again the Cauchy-Schwarz inequality yields

α_k ≤ 2^{−k} π^{−k/2} ( ∫_{[0,T]^{2k}} Π_{i=1}^k η(s_i, t_i) [β(s)]^{−1/2} ds dt )^{1/2} ( ∫_{[0,T]^{2k}} Π_{i=1}^k η(s_i, t_i) [β(t)]^{−1/2} ds dt )^{1/2}
≤ 2^{−k} π^{−k/2} ‖η‖_{1,T}^k k! ∫_{T_k} [β(s)]^{−1/2} ds = k! 2^{−k} T^{k/2} ‖η‖_{1,T}^k / Γ(k/2 + 1), (3.11)

where T_k = {s = (s_1, . . . , s_k) : 0 < s_1 < · · · < s_k < T}, which implies the
estimate (3.8).
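The Dirichlet-type evaluation used in (3.11), ∫_{T_k} β(s)^{−1/2} ds = π^{k/2} T^{k/2} / Γ(k/2 + 1), can be spot-checked for k = 2 and T = 1, where both sides equal π. A small deterministic sketch (illustrative only):

```python
import math

# For k = 2, T = 1: the inner integral over s1 of [s1 (s2 - s1)]^{-1/2}
# on (0, s2) equals the Beta integral B(1/2, 1/2) for every s2, so the
# simplex integral equals B(1/2, 1/2) = Gamma(1/2)^2 / Gamma(1) = pi.
exact = math.gamma(0.5)**2 / math.gamma(1.0)

# compare with the closed form pi^{k/2} T^{k/2} / Gamma(k/2 + 1), k = 2, T = 1
k, T = 2, 1.0
formula = math.pi**(k/2) * T**(k/2) / math.gamma(k/2 + 1)

print(exact, formula)   # both equal pi
```

This is exactly the identity that makes the series Φ in (3.8) converge for every argument, since Γ(k/2 + 1) grows faster than any geometric sequence.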
This result can be extended to the case of a d-dimensional Brownian motion
under the stronger condition C2):

Proposition 3.2 Suppose that C2) holds and d < 4H. Then, lim_{ε↓0} I_ε = I
exists in L^p, for all p ≥ 2. Moreover, if d = 2 and λ < λ_0(T), where

λ_0(T) = H(2H − 1) 4π T^{1−2H} / ( γ_T β²_H Γ(1 − 1/(2H))^{2H} ), (3.12)

and β_H is the constant appearing in the inequality (2.3), then

sup_ε E (exp (λI_ε)) < ∞, (3.13)

and I satisfies E (exp (λI)) < ∞.
Proof As in the proof of Proposition 3.1, using condition C2) and the inequality
(2.4) we obtain the estimates

α_k ≤ γ_T^k 2^{−dk} π^{−dk/2} ∫_{[0,T]^{2k}} Π_{i=1}^k |t_i − s_i|^{2H−2} [β(s)β(t)]^{−d/4} ds dt
≤ γ_T^k 2^{−dk} π^{−dk/2} α_H^k ( ∫_{[0,T]^k} [β(s)]^{−d/(4H)} ds )^{2H}
= ( γ_T α_H 2^{−d} π^{−d/2} )^k (k!)^{2H} Γ(1 − d/(4H))^{2Hk} T^{k(2H−d/2)} / Γ(k(1 − d/(4H)) + 1)^{2H},

where α_H = β²_H / (H(2H − 1)). This allows us to conclude the proof.
If d = 2 and η(s, t) = 1, it is known that the intersection local time
∫_0^T ∫_0^T δ_0(B¹_s − B²_t) ds dt exists and it has finite exponential moments up to a critical exponent
λ_0 (see Le Gall [7] and Bass and Chen [1]).
Consider now a one-dimensional standard Brownian motion B, and the
weighted self-intersection local time

I = ∫_0^T ∫_0^T η(s, t) δ_0(B_s − B_t) ds dt.

As before, set

I_ε = ∫_0^T ∫_0^T η(s, t) p_ε(B_s − B_t) ds dt.
Proposition 3.3 Suppose that C2) holds. If H > 1/2, then we have

sup_ε E (exp (λ [I_ε − E(I_ε)])) < ∞, (3.14)

for all λ > 0. Moreover, the normalized local time I − E(I) exists as a limit in
L^p of I_ε − E(I_ε), for all p ≥ 2, and it has exponential moments of all orders.
If H > 3/4, then we have for all λ > 0

sup_ε E (exp (λI_ε)) < ∞, (3.15)

and the local time I exists as a limit in L^p of I_ε, for all p ≥ 2, and it is
exponentially integrable.
Proof We will follow the ideas of Le Gall in [7]. Suppose first that H > 1/2 and
let us show (3.14). To simplify the proof we assume T = 1. It suffices to show
these results for

J_ε := ∫∫_{0<s<t<1} η(s, t) p_ε(B_s − B_t) ds dt.

Denote, for n ≥ 1 and 1 ≤ k ≤ 2^{n−1},

A_{n,k} = [ (2k − 2)/2^n, (2k − 1)/2^n ] × [ (2k − 1)/2^n, 2k/2^n ],
α^ε_{n,k} = ∫∫_{A_{n,k}} η(s, t) p_ε(B_s − B_t) ds dt,
ᾱ^ε_{n,k} = α^ε_{n,k} − E( α^ε_{n,k} ).

Notice that the random variables α^ε_{n,k}, 1 ≤ k ≤ 2^{n−1}, are independent. We
have

J_ε = Σ_{n=1}^∞ Σ_{k=1}^{2^{n−1}} α^ε_{n,k},
J_ε − E(J_ε) = Σ_{n=1}^∞ Σ_{k=1}^{2^{n−1}} ᾱ^ε_{n,k}.
We can write

α^ε_{n,k} = 2^{−2n} ∫_0^1 ∫_0^1 η( (2k−1)2^{−n} − 2^{−n}s, (2k−1)2^{−n} + 2^{−n}t )
× p_ε( B_{(2k−1)2^{−n} − 2^{−n}s} − B_{(2k−1)2^{−n} + 2^{−n}t} ) ds dt
≤ γ_1 2^{−2n−(2H−2)n} ∫_0^1 ∫_0^1 (t + s)^{2H−2} p_ε( B_{(2k−1)2^{−n} − 2^{−n}s} − B_{(2k−1)2^{−n} + 2^{−n}t} ) ds dt,

which has the same distribution as

β^ε_{n,k} = γ_1 2^{−3n/2−(2H−2)n} ∫_0^1 ∫_0^1 (t + s)^{2H−2} p_{ε2^n}( B¹_s − B²_t ) ds dt,
where B¹ and B² are independent one-dimensional Brownian motions. Hence,
using the estimate (3.11), we obtain

E( exp( λᾱ^ε_{n,k} ) ) = 1 + Σ_{j=2}^∞ (λ^j / j!) E( (ᾱ^ε_{n,k})^j )
≤ 1 + Σ_{j=2}^∞ (λ^j / j!) 2^j E( (β^ε_{n,k})^j )
≤ 1 + Σ_{j=2}^∞ ( C_T 2^{−3n/2−(2H−2)n} λ )^j / Γ( j/2 + 1 ),

for some constant C_T. Hence,

E( exp( λᾱ^ε_{n,k} ) ) ≤ 1 + c_λ 2^{−3n−2(2H−2)n} λ², (3.16)

for some function c_λ.
Fix a > 0 such that a < 2(2H − 1). For any N ≥ 2 define

b_N = Π_{j=2}^N ( 1 − 2^{−a(j−1)} ),

and notice that lim_{N→∞} b_N = b_∞ > 0. Then, by Hölder's inequality, for all
N ≥ 2 we have

E[ exp( λ b_N Σ_{n=1}^N Σ_{k=1}^{2^{n−1}} ᾱ^ε_{n,k} ) ]
≤ { E[ exp( (λ b_N / (1 − 2^{−a(N−1)})) Σ_{n=1}^{N−1} Σ_{k=1}^{2^{n−1}} ᾱ^ε_{n,k} ) ] }^{1−2^{−a(N−1)}}
× { E[ exp( λ b_N 2^{a(N−1)} Σ_{k=1}^{2^{N−1}} ᾱ^ε_{N,k} ) ] }^{2^{−a(N−1)}}
≤ { E[ exp( λ b_{N−1} Σ_{n=1}^{N−1} Σ_{k=1}^{2^{n−1}} ᾱ^ε_{n,k} ) ] }^{1−2^{−a(N−1)}}
× { E[ exp( λ b_N 2^{a(N−1)} ᾱ^ε_{N,1} ) ] }^{2^{(1−a)(N−1)}},

where we used the independence of the ᾱ^ε_{N,k}, 1 ≤ k ≤ 2^{N−1}. Using (3.16), the second factor in the above expression can be dominated by

{ E[ exp( λ b_N 2^{a(N−1)} ᾱ^ε_{N,1} ) ] }^{2^{(1−a)(N−1)}}
≤ ( 1 + c_λ λ² b²_N 2^{2a(N−1)} 2^{−3N−2(2H−2)N} )^{2^{(1−a)(N−1)}}
≤ exp( κ c_λ λ² 2^{(a−2−2(2H−2))N} ),

for some constant κ > 0. Thus by induction we have

E[ exp( λ b_N Σ_{n=1}^N Σ_{k=1}^{2^{n−1}} ᾱ^ε_{n,k} ) ]
≤ exp( κ c_λ λ² Σ_{n=2}^N 2^{(a−2−2(2H−2))n} ) E( exp( λ b_1 ᾱ^ε_{1,1} ) )
≤ exp( κ c_λ λ² (1 − 2^{a+2−4H})^{−1} ) E( exp( λ ᾱ^ε_{1,1} ) ) < ∞,

because a < 2(2H − 1). By Fatou's lemma we see that

E( exp( λ b_∞ (J_ε − E(J_ε)) ) ) < ∞,

and (3.14) follows.
On the other hand, one can easily show that

lim_{ε,δ↓0} E( (J_ε − E(J_ε)) (J_δ − E(J_δ)) )
= (2π)^{−1} ∫_{s<t<1, s′<t′<1} η(s, t) η(s′, t′)
× ( [ det( t − s, |[s, t] ∩ [s′, t′]| ; |[s, t] ∩ [s′, t′]|, t′ − s′ ) ]^{−1/2} − ( (t − s)(t′ − s′) )^{−1/2} ) ds dt ds′ dt′ < ∞,

which implies the convergence of I_ε in L². The convergence in L^p for p ≥ 2 and
the estimate (3.14) follow immediately.
The proof of the inequality (3.15) is similar. The estimate (3.16) is replaced
by

E( exp( λα^ε_{n,k} ) ) ≤ 1 + d_λ 2^{−3n/2−(2H−2)n} λ, (3.17)

for a suitable function d_λ, and we obtain

E[ exp( λ b_N Σ_{n=1}^N Σ_{k=1}^{2^{n−1}} α^ε_{n,k} ) ]
≤ exp( κ d_λ λ Σ_{n=2}^N 2^{(3/2−2H)n} ) E( exp( λ α^ε_{1,1} ) )
≤ exp( κ d_λ λ (1 − 2^{3/2−2H})^{−1} ) E( exp( λ α^ε_{1,1} ) ) < ∞,

because H > 3/4. By Fatou's lemma we see that

E( exp( λ b_∞ J_ε ) ) < ∞,

which implies (3.15). The convergence in L^p of I_ε is proved as usual.
Notice that the condition H > 3/4 cannot be improved because

E( ∫_0^T ∫_0^T |t − s|^{−1/2} δ_0(B_s − B_t) ds dt ) = c ∫_0^T ∫_0^T |t − s|^{−1} ds dt = ∞.
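The divergence at H = 3/4 is just a borderline-exponent statement: E δ_0(B_s − B_t) = (2π|t − s|)^{−1/2}, so the weight |t − s|^{2H−2} produces the exponent 2H − 5/2, which is integrable on [0, T]² exactly when 2H − 5/2 > −1, i.e. H > 3/4. A small sketch (illustrative only) checks the elementary formula ∫∫_{[0,1]²} |t − s|^a ds dt = 2/((a + 1)(a + 2)) for a > −1 and its blow-up as a ↓ −1:

```python
def exact(a):
    # integral of |t-s|^a over [0,1]^2 equals 2/((a+1)(a+2)) for a > -1
    return 2.0/((a + 1)*(a + 2))

def numeric(a, N=1000):
    # reduce to 2 * int_0^1 (1-u) u^a du, then substitute v = u^{a+1}
    # to remove the singularity at u = 0:
    #   = (2/(a+1)) * int_0^1 (1 - v^{1/(a+1)}) dv   (smooth integrand)
    h = 1.0/N
    s = sum(1 - ((i + 0.5)*h)**(1.0/(a + 1)) for i in range(N))*h
    return 2.0*s/(a + 1)

a = 2*0.8 - 2.5   # exponent 2H - 5/2 at H = 0.8: finite since a > -1
print(exact(a), numeric(a))  # both near 18.18
```

Taking a toward −1 (H toward 3/4) makes the exact value blow up, matching the non-improvability remark above.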
4 Stochastic heat equation in the Itô-Skorohod
sense
In this section we study the stochastic partial differential equation (1.1) on
Rd, where WH is a zero mean Gaussian family of random variables with the
covariance function (2.1), defined on a complete probability space (Ω,F , P ),
and the initial condition u0 belongs to Cb(R
d). First we give the definition of a
solution using the Skorhohod integral, which corresponds formally to the Wick
product appearing in Equation (1.1).
For any t ≥ 0, we denote by Ft the σ-field generated by the random variables
{W (s, A), 0 ≤ s ≤ t, A ∈ B(Rd), |A| < ∞} and the P -null sets. A random field
u = {ut,x, t ≥ 0, x ∈ R} is adapted if for any (t, x), ut,x is Ft-measurable.
For any bounded Borel function ϕ on R we write ptϕ(x) =
pt(x −
y)ϕ(y)dy.
Definition 4.1 An adapted random field u = {ut,x, t ≥ 0, x ∈ Rd} such that
E(u2t,x) < ∞ for all (t, x) is a solution to Equation (1.1) if for any (t, x) ∈
[0,∞) × Rd, the process {pt−s(x − y)us,y1[0,t](s), s ≥ 0, y ∈ Rd} is Skorohod
integrable, and the following equation holds
ut,x = ptu0(x) +
pt−s(x − y)us,yδWHs,y . (4.1)
The fact that Equation (1.1) contains a multiplicative Gaussian noise allows
us to find recursively an explicit expression for the Wiener chaos expansion of
the solution. This approach has been extensively used in the literature. For instance,
we refer to the papers by Hu [6], Buckdahn and Nualart [2], Nualart and Zakai
[13], Nualart and Rozovskii [12], and Tudor [16], among others.
Suppose that u = {u_{t,x}, t ≥ 0, x ∈ R^d} is a solution to Equation (1.1). Then,
for any fixed (t, x), the random variable u_{t,x} admits the following Wiener chaos
expansion

u_{t,x} = Σ_{n=0}^∞ I_n(f_n(·, t, x)), (4.2)

where for each (t, x), f_n(·, t, x) is a symmetric element in H_d^{⊗n}. To find the
explicit form of f_n, substituting (4.2) in the Skorohod integral appearing in
(4.1) we obtain

∫_0^t ∫_{R^d} p_{t−s}(x − y) u_{s,y} δW^H_{s,y} = Σ_{n=0}^∞ ∫_0^t ∫_{R^d} I_n( p_{t−s}(x − y) f_n(·, s, y) ) δW^H_{s,y}
= Σ_{n=0}^∞ I_{n+1}( ( p_{t−s}(x − y) f_n(·, s, y) )^∼ ).

Here, ( p_{t−s}(x − y) f_n(·, s, y) )^∼ denotes the symmetrization of the function

p_{t−s}(x − y) f_n(s_1, x_1; . . . ; s_n, x_n; s, y)

in the variables (s_1, x_1), . . . , (s_n, x_n), (s, y), that is,

( p_{t−s}(x − y) f_n(·, s, y) )^∼ = (1/(n+1)) [ p_{t−s}(x − y) f_n(s_1, x_1, . . . , s_n, x_n, s, y)
+ Σ_{j=1}^n p_{t−s_j}(x − y_j) f_n(s_1, x_1, . . . , s_{j−1}, x_{j−1}, s, y, s_{j+1}, x_{j+1}, . . . , s_n, y_n, s_j, y_j) ].

Thus, Equation (4.1) is equivalent to saying that f_0(t, x) = p_t u_0(x), and

f_{n+1}(·, t, x) = ( p_{t−s}(x − y) f_n(·, s, y) )^∼ (4.3)

for all n ≥ 0. Notice that the adaptedness of the random field u
implies that f_n(s_1, x_1, . . . , s_n, x_n, t, x) = 0 if s_j > t for some j.
This leads to the following formula for the kernels f_n, for n ≥ 1:

f_n(s_1, x_1, . . . , s_n, x_n, t, x) = (1/n!) p_{t−s_{σ(n)}}(x − x_{σ(n)}) · · · p_{s_{σ(2)}−s_{σ(1)}}(x_{σ(2)} − x_{σ(1)}) p_{s_{σ(1)}} u_0(x_{σ(1)}), (4.4)

where σ denotes the permutation of {1, 2, . . . , n} such that 0 < s_{σ(1)} < · · · <
s_{σ(n)} < t. This implies that there is a unique solution to Equation (4.1), and the
kernels of its chaos expansion are given by (4.4). In order to show the existence
of a solution, it suffices to check that the kernels defined in (4.4) determine
an adapted random field satisfying the conditions of Definition 4.1. This is
equivalent to showing that for all (t, x) we have

Σ_{n=0}^∞ n! ‖f_n(·, t, x)‖²_{H_d^{⊗n}} < ∞. (4.5)
It is easy to show that (4.5) holds if H = 1/2 and d = 1. In fact, we have, assuming
|u_0| ≤ K, and with the notation x = (x_1, . . . , x_n) and s = (s_1, . . . , s_n):

‖f_n(·, t, x)‖²_{H_1^{⊗n}} = (1/n!)² ∫_{[0,t]^n} ∫_{R^n} p_{t−s_{σ(n)}}(x − x_{σ(n)})² · · · p_{s_{σ(2)}−s_{σ(1)}}(x_{σ(2)} − x_{σ(1)})²
× p_{s_{σ(1)}} u_0(x_{σ(1)})² dx ds
≤ K² (1/n!)² (4π)^{−n/2} ∫_{[0,t]^n} Π_{j=1}^n (s_{σ(j+1)} − s_{σ(j)})^{−1/2} ds
= K² (1/n!) (4π)^{−n/2} ∫_{T_n} Π_{j=1}^n (s_{j+1} − s_j)^{−1/2} ds,

where T_n = {(s_1, . . . , s_n) ∈ [0, t]^n : 0 < s_1 < · · · < s_n < t} and by convention
s_{n+1} = t. Hence,

‖f_n(·, t, x)‖²_{H_1^{⊗n}} ≤ K² 2^{−n} t^{n/2} / ( n! Γ(n/2 + 1) ),

which implies (4.5). On the other hand, if H = 1/2 and d ≥ 2, these norms are
infinite.
Notice that if u_0 = 1, then (n!)² ‖f_n(·, t, x)‖²_{H_1^{⊗n}} coincides with the moment
of order n of the local time at zero of the one-dimensional Brownian motion
with variance 2t, that is,

(n!)² ‖f_n(·, t, x)‖²_{H_1^{⊗n}} = E[ ( ∫_0^t δ_0(B_{2s}) ds )^n ].
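For small n this identity can be verified by hand: E[∫_0^t δ_0(B_{2s}) ds] = ∫_0^t (4πs)^{−1/2} ds = √(t/π), and the n = 2 moment reduces to a Beta integral, giving t/2. A deterministic sketch (illustrative, with u_0 = 1) compares both sides:

```python
import math

t = 2.0

def chaos_side(n):
    # (n!)^2 ||f_n||^2 = n! 2^{-n} t^{n/2} / Gamma(n/2 + 1)  for u_0 = 1
    return math.factorial(n) * 2.0**-n * t**(n/2) / math.gamma(n/2 + 1)

# n = 1: E[L] = int_0^t (4 pi s)^{-1/2} ds = sqrt(t/pi)
m1 = math.sqrt(t/math.pi)

# n = 2: E[L^2] = 2 (4 pi)^{-1} * (simplex integral of [s(r-s)]^{-1/2},
# which equals pi * t), so E[L^2] = t/2
m2 = t/2.0

print(chaos_side(1), m1)   # both equal sqrt(t/pi)
print(chaos_side(2), m2)   # both equal t/2
```

The agreement for n = 1, 2 is exact, which is a useful consistency check on the factor 1/n! in the kernel formula (4.4).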
To handle the case H > 1/2, we need the following technical lemma.

Lemma 4.2 Set

g_s(x_1, . . . , x_n) = p_{t−s_{σ(n)}}(x − x_{σ(n)}) · · · p_{s_{σ(2)}−s_{σ(1)}}(x_{σ(2)} − x_{σ(1)}). (4.6)

Then,

⟨g_s, g_t⟩_{L²(R^{nd})} = ψ(s, t),

where ψ(s, t) is defined in (3.6).
Proof By Plancherel's identity

⟨g_s, g_t⟩_{L²(R^{nd})} = (2π)^{−dn} ⟨Fg_s, Fg_t⟩_{L²(R^{nd})},

where F denotes the Fourier transform, given by

Fg_s(ξ_1, . . . , ξ_n) = ∫_{R^{nd}} Π_{j=1}^n (2π(s_{σ(j+1)} − s_{σ(j)}))^{−d/2}
× exp( i⟨ξ_j, x_j⟩ − |x_{σ(j+1)} − x_{σ(j)}|² / (2(s_{σ(j+1)} − s_{σ(j)})) ) dx,

with the convention x_{n+1} = x and s_{n+1} = t. Making the change of variables
u_j = x_{σ(j+1)} − x_{σ(j)} if 1 ≤ j ≤ n − 1, and u_n = x − x_{σ(n)}, we obtain

Fg_s(ξ_1, . . . , ξ_n) = E exp( i Σ_{j=1}^n ⟨ξ_{σ(j)}, x − B_t + B_{s_{σ(j)}}⟩ ) = E exp( i Σ_{j=1}^n ⟨ξ_j, x − B_t + B_{s_j}⟩ ),

where B is a d-dimensional Brownian motion. As a consequence,

⟨g_s, g_t⟩_{L²(R^{nd})} = (2π)^{−dn} ∫_{R^{nd}} E exp( i Σ_{j=1}^n ⟨ξ_j, B¹_{s_j} − B²_{t_j}⟩ ) dξ,

which implies the desired result.
In the case H > 1/2, and assuming that u_0 = 1, the next proposition shows
that the norm (n!)² ‖f_n(·, t, x)‖²_{H_d^{⊗n}} coincides with the nth moment of the in-
tersection local time of two independent d-dimensional Brownian motions with
weight φ(t, s).

Proposition 4.3 Suppose that H > 1/2 and d < 4H. Then, for all n ≥ 1,

(n!)² ‖f_n(·, t, x)‖²_{H_d^{⊗n}} ≤ ‖u_0‖²_∞ E[ ( ∫_0^t ∫_0^t φ(s, r) δ_0(B¹_s − B²_r) ds dr )^n ], (4.7)
with equality if u_0 is constant. Moreover, we have:
1. If d = 1, there exists a unique solution to Equation (4.1).
2. If d = 2, then there exists a unique solution in an interval [0, T] provided
T < T_0, where

T_0 = ( β_H Γ(1 − 1/(2H)) )^{−1/(2H−1)}. (4.8)

Proof We have

(n!)² ‖f_n(·, t, x)‖²_{H_d^{⊗n}} ≤ ‖u_0‖²_∞ ∫_{[0,t]^{2n}} Π_{j=1}^n φ(s_j, t_j) ⟨g_s, g_t⟩_{L²(R^{nd})} ds dt, (4.9)

where g_s is defined in (4.6). Then the results follow easily from Lemma
4.2 and Proposition 3.2.
In the two-dimensional case and assuming H > 1/2, the solution would exist
in any interval [0, T] as a distribution in the Watanabe space D^{−α,2} for some α > 0
(see [18]).
4.1 Case H < 1/2 and d = 1

We know that in this case, the norm in the space H is defined in terms of
fractional derivatives. The aim of this section is to show that ‖f_n(·, t, x)‖²_{H_1^{⊗n}}
is related to the nth moment of a fractional derivative of the intersection
local time of two independent one-dimensional Brownian motions, and these
moments are finite for all n ≥ 1, provided 3/8 < H < 1/2.
Consider the operator (K*_H)^{⊗2} on functions of two variables defined as the
action of the operator K*_H on each coordinate. That is, using the notation (2.5)
we have

(K*_H)^{⊗2} f(r_1, r_2) = K_H(T, r_1)K_H(T, r_2) f(r_1, r_2)
+ K_H(T, r_1) ∫_{r_2}^T (∂K_H/∂s)(s, r_2) ( f(r_1, s) − f(r_1, r_2) ) ds
+ K_H(T, r_2) ∫_{r_1}^T (∂K_H/∂v)(v, r_1) ( f(v, r_2) − f(r_1, r_2) ) dv
+ ∫_{r_1}^T ∫_{r_2}^T (∂K_H/∂s)(s, r_2) (∂K_H/∂v)(v, r_1) [ f(v, s) − f(r_1, s) − f(v, r_2) + f(r_1, r_2) ] ds dv.
Suppose that f(s, t) is a continuous function on [0, T]². Define the Hölder norms

‖f‖_{1,γ} = sup { |f(s_1, t) − f(s_2, t)| / |s_1 − s_2|^γ : s_1, s_2, t ∈ [0, T], s_1 ≠ s_2 },
‖f‖_{2,γ} = sup { |f(s, t_1) − f(s, t_2)| / |t_1 − t_2|^γ : t_1, t_2, s ∈ [0, T], t_1 ≠ t_2 },
‖f‖_{1,2,γ} = sup { |f(s_1, t_1) − f(s_1, t_2) − f(s_2, t_1) + f(s_2, t_2)| / ( |s_1 − s_2|^γ |t_1 − t_2|^γ ) },

where the last supremum is taken over the set {t_1, t_2, s_1, s_2 ∈ [0, T], s_1 ≠ s_2, t_1 ≠ t_2}. Set

‖f‖_{0,γ} = ‖f‖_{1,γ} + ‖f‖_{2,γ} + ‖f‖_{1,2,γ}.

Then, (K*_H)^{⊗2} f is well defined if ‖f‖_{0,γ} < ∞ for some γ > 1/2 − H. As
a consequence, if B¹ and B² are two independent one-dimensional Brownian
motions, the following random variable is well defined for all ε > 0:

J_ε = ∫_0^T (K*_H)^{⊗2} p_ε(B¹_· − B²_·)(r, r) dr. (4.10)
The next proposition asserts that J_ε converges in L^p for all p ≥ 2 to a fractional
derivative of the intersection local time of B¹ and B².

Proposition 4.4 Suppose that 3/8 < H < 1/2. Then, for any integer k ≥ 1 and
T > 0 we have E(J_ε^k) ≥ 0 and

sup_ε E(J_ε^k) < ∞.

Moreover, for all p ≥ 2, J_ε converges in L^p as ε tends to zero to a random
variable denoted by

∫_0^T (K*_H)^{⊗2} δ_0(B¹_· − B²_·)(r, r) dr.
Proof Fix k ≥ 1. Let us compute the moment of order k of J_ε. We can write

E(J_ε^k) = ∫_{[0,T]^k} E( Π_{i=1}^k (K*_H)^{⊗2} p_{2ε}(B¹ − B²)(r_i, r_i) ) dr. (4.11)

Using the expression (2.6) for the operator K*_H, and the notation (3.4), yields

E(J_ε^k) = ∫_{[0,T]^{3k}} ψ_ε(s, t) Π_{i=1}^k K*_H(ds_i, r_i) K*_H(dt_i, r_i) dr. (4.12)
As a consequence, using (3.5) we obtain

E(J_ε^k) = (2π)^{−k} ∫_{[0,T]^k} ∫_{R^k} ∫_{[0,T]^{2k}} exp( −(1/2) Σ_{j,l=1}^k ξ_j ξ_l Cov( B¹_{s_j} − B²_{t_j}, B¹_{s_l} − B²_{t_l} )
− (ε/2) Σ_{j=1}^k ξ_j² ) Π_{i=1}^k K*_H(ds_i, r_i) K*_H(dt_i, r_i) dξ dr.

Bounding the Gaussian integral in ξ as in the proof of Proposition 3.1, it suffices to show that for each k the following quantity is finite:

∫_{[0,T]^k} ∫_{T_k × T_k} Π_{j=1}^k [s_j − s_{j−1} + t_j − t_{j−1}]^{−1/2} Π_{i=1}^k K*_H(ds_i, r_i) Π_{i=1}^k K*_H(dt_i, r_i) dr, (4.13)
where T_k = {0 < t_1 < · · · < t_k < T}. Fix a constant a > 0. We are going to
compute

∫_{T_k} Π_{j=1}^k [t_j − t_{j−1} + a]^{−1/2} Π_{i=1}^k K*_H(dt_i, r_i).

To do this we need some notation. Let ∆_j and I_j be the operators defined on
a function f(t_1, . . . , t_k) by

∆_j f = f − f|_{t_j=r_j},
I_j f = f|_{t_j=r_j}.

The operator K*_H(dt_i, r_i) is the sum of two components (see (2.5)), and it suffices
to consider only the second one because the first one is easy to control. In this
way we need to estimate the following term

∫_{[0,T]^k} ∆_1 · · · ∆_k ( Π_{j=1}^k [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} ) Π_{j=1}^k (t_j − r_j)^{H−3/2} t_j^{H−1/2} r_j^{1/2−H} 1_{r_j<t_j} dt.

Because t_j^{H−1/2} r_j^{1/2−H} = (r_j/t_j)^{1/2−H} ≤ 1, we can disregard the factors r_j^{1/2−H} and t_j^{H−1/2}. Using
the rule

∆_j(FG) = F(t_j)G(t_j) − F(r_j)G(r_j)
= [F(t_j) − F(r_j)]G(t_j) + F(r_j) [G(t_j) − G(r_j)]
= ∆_jF G + I_jF ∆_jG,

we obtain

∆_1 · · · ∆_k ( Π_{j=1}^k [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} ) = Σ Π_{j=1}^k S_j( [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} ),

where S_j is an operator of the form:

II_j, I∆_j, ∆_{j−1}I_j, ∆_{j−1}∆_j,

and for each j, ∆_j must appear only once in the product Π_{j=1}^k S_j. Let us
estimate each one of the possible four terms. Fix ε > 0 such that H − 3/8 > 2ε.
1. Term II_j:

II_j( [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} ) = [r_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<r_j}.

2. Term I∆_j:

| I∆_j( [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} ) |
= | [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} − [r_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<r_j} |
≤ C [t_j − r_j]^{1/2−H+ε} [r_j − t_{j−1} + a]^{H−1−ε} 1_{t_{j−1}<r_j}
+ C [t_j − t_{j−1} + a]^{−1/2} 1_{r_j<t_{j−1}}.

3. Term ∆_{j−1}I_j:

| ∆_{j−1}I_j( [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} ) |
= | [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} − [t_j − r_{j−1} + a]^{−1/2} 1_{r_{j−1}<t_j} |
≤ C [t_{j−1} − r_{j−1}]^{1/2−H+ε} [t_j − t_{j−1} + a]^{H−1−ε} 1_{r_{j−1}<t_{j−1}<t_j}.

4. Term ∆_{j−1}∆_j:

| ∆_{j−1}∆_j( [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} ) |
= | [t_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<t_j} − [r_j − t_{j−1} + a]^{−1/2} 1_{t_{j−1}<r_j}
− [t_j − r_{j−1} + a]^{−1/2} 1_{r_{j−1}<t_j} + [r_j − r_{j−1} + a]^{−1/2} 1_{r_{j−1}<r_j} |
≤ C [t_j − r_j]^{1/2−H+ε} [t_{j−1} − r_{j−1}]^{1/2−H+ε} [r_j − t_{j−1} + a]^{2H−3/2−2ε} 1_{t_{j−1}<r_j<t_j}
+ C [t_{j−1} − r_{j−1}]^{1/2−H+ε} [t_j − t_{j−1} + a]^{H−1−ε} 1_{r_j<t_{j−1}<t_j}
+ C [r_j − r_{j−1} + a]^{−1/2} 1_{r_{j−1}<r_j<t_{j−1}<t_j}.
If we replace the constant a by s_j − s_{j−1} and we treat the term s_j − s_{j−1}
in the same way, using the inequality

(a + b)^{−α} ≤ a^{−α/2} b^{−α/2},

we obtain the same estimates as if we had started with

∫_{T_k} Π_{j=1}^k [t_j − t_{j−1}]^{−1/4} Π_{i=1}^k K*_H(dt_j, r_j)

instead of (4.13). As a consequence, it suffices to control the following integral

∫_{[0,T]^k} ( ∫_{T_k} Π_{j=1}^k A_j(t, r) dt )² dr, (4.14)
where A_j has one of the following forms:

A^{(1)}_j = [r_j − t_{j−1}]^{−1/4} 1_{t_{j−1}<r_j},
A^{(2)}_{j,1} = [t_j − r_j]^{−1+ε} [r_j − t_{j−1}]^{H−3/4−ε} 1_{t_{j−1}<r_j},
A^{(2)}_{j,2} = [t_j − t_{j−1}]^{−1/4} [t_j − r_j]^{H−3/2} 1_{r_j<t_{j−1}},
A^{(3)}_j = [t_{j−1} − r_{j−1}]^{−1+ε} [t_j − t_{j−1}]^{H−3/4−ε} 1_{r_{j−1}<t_{j−1}<t_j},
A^{(4)}_{j,1} = [t_j − r_j]^{−1+ε} [t_{j−1} − r_{j−1}]^{−1+ε} [r_j − t_{j−1}]^{2H−5/4−2ε} 1_{t_{j−1}<r_j<t_j},
A^{(4)}_{j,2} = [t_{j−1} − r_{j−1}]^{−1+ε} [t_j − t_{j−1}]^{H−3/4−ε} [t_j − r_j]^{H−3/2} 1_{r_j<t_{j−1}<t_j},
A^{(4)}_{j,3} = [r_j − r_{j−1}]^{−1/4} [t_j − r_j]^{H−3/2} [t_{j−1} − r_{j−1}]^{H−3/2} 1_{r_{j−1}<r_j<t_{j−1}<t_j},

and with the convention that any term of the form A^{(1)}_j or A^{(3)}_j must be followed
by A^{(3)}_{j+1} or A^{(4)}_{j+1}, and any term of the form A^{(2)}_j or A^{(4)}_j must be followed by
A^{(1)}_{j+1} or A^{(2)}_{j+1}. It is not difficult to check that the integral (4.14) is finite. For
instance, for a product of the form A^{(1)}_{j−1} A^{(4)}_{j,1} we get

∫_{r_{j−1}<t_{j−1}<r_j<t_j} [r_{j−1} − t_{j−2}]^{−1/4} [t_{j−1} − r_{j−1}]^{−1+ε} [r_j − t_{j−1}]^{2H−5/4−2ε}
× [t_j − r_j]^{−1+ε} dt_{j−1}
= C [r_{j−1} − t_{j−2}]^{−1/4} [r_j − r_{j−1}]^{2H−5/4−ε} [t_j − r_j]^{−1+ε},

and the integral in the variable r_j of the square of this expression will be finite
because 4H − 5/2 − 2ε > −1.
So, we have proved that sup_ε E(J_ε^k) < ∞ for all k. Notice that all these
moments are positive. It holds that lim_{ε,δ↓0} E(J_ε J_δ) exists, and this implies the
convergence in L², and also in L^p, for all p ≥ 2.
On the other hand, if the initial condition of Equation (1.1) is a constant
K, then for all n ≥ 1 we have

(n!)² ‖f_n(·, t, x)‖²_{H_1^{⊗n}} = K² E[ ( ∫_0^T (K*_H)^{⊗2} δ_0(B¹_· − B²_·)(r, r) dr )^n ],

provided H ∈ (3/8, 1/2). In fact, by Lemma 4.2 we have

(n!)² ‖f_n(·, t, x)‖²_{H_1^{⊗n}} = K² ∫_{[0,T]^n} ∫_{[0,t]^{2n}} ⟨g_s, g_t⟩_{L²(R^n)} Π_{i=1}^n K*_H(dt_i, r_i) K*_H(ds_i, r_i) dr
= K² ∫_{[0,T]^n} ∫_{[0,t]^{2n}} ψ(s, t) Π_{i=1}^n K*_H(dt_i, r_i) K*_H(ds_i, r_i) dr,

and it suffices to apply the above proposition.
However, we do not know the rate of convergence of the sequence ‖f_n(·, t, x)‖²_{H_1^{⊗n}}
as n tends to infinity, and for this reason we are not able to show the existence
of a solution to Equation (1.1) in this case.
5 Moments of the solution
In this section we introduce an approximation of the Gaussian noise WH by
means of an approximation of the identity. In the space variable we choose the
heat kernel to define this approximation and in the time variable we choose a
rectangular kernel. In this way, for any ε > 0 and δ > 0 we set
t,x =
ϕδ(t− s)pε(x− y)dWHs,y, (5.1)
where
ϕδ(t) =
1[0,δ](t).
Now we consider the approximation of Equation (1.1) defined by
t,x + u
t,x ⋄ Ẇ
t,x . (5.2)
We recall that the Wick product u
t,x⋄Ẇ
t,x is well defined as a square integrable
random variable provided the random variable u
t,x belongs to the space D
(see (2.9)), and in this case we have
uε,δs,y ⋄ Ẇ ε,δs,y =
ϕδ(s− r)pε(y − z)uε,δs,yδWHr,z . (5.3)
The mild or evolution version of Equation (5.2) will be
t,x = ptu0(y) +
pt−s(x− y)uε,δs,y ⋄ Ẇ ε,δs,y dsdy. (5.4)
Substituting (5.3) into (5.4), and formally applying Fubini’s theorem yields
t,x = ptu0(y)+
pt−s(x− y)ϕδ(s− r)pε(y − z)uε,δs,ydsdy
δWHr,z .
(5.5)
This leads to the following definition.
Definition 5.1 An adapted random field u^{ε,δ} = {u^{ε,δ}_{t,x}, t ≥ 0, x ∈ R^d} is a mild
solution to Equation (5.2) if for each (r, z) ∈ R_+ × R^d the integral

Y^{t,x}_{r,z} = ∫_0^t ∫_{R^d} p_{t−s}(x − y) ϕ_δ(s − r) p_ε(y − z) u^{ε,δ}_{s,y} ds dy

exists and Y^{t,x} is a Skorohod integrable process such that (5.5) holds for each
(t, x).

The above definition is equivalent to saying that u^{ε,δ}_{t,x} ∈ L²(Ω), and for any
random variable F ∈ D^{1,2} we have

E(F u^{ε,δ}_{t,x}) = E(F) p_t u_0(x)
+ E( ⟨ ∫_0^t ∫_{R^d} p_{t−s}(x − y) ϕ_δ(s − ·) p_ε(y − ·) u^{ε,δ}_{s,y} ds dy, DF ⟩_{H_d} ).
Our aim is to construct a solution of Equation (5.2) using a suitable version
of Feynman-Kac's formula. Suppose that B = {B_t, t ≥ 0} is a d-dimensional
Brownian motion starting at 0, independent of W. Set

∫_0^t Ẇ^{ε,δ}_{t−s, x+B_s} ds = ∫_0^t ∫_0^∞ ∫_{R^d} ϕ_δ(t − s − r) p_ε(B_s + x − y) dW^H_{r,y} ds
= ∫_0^∞ ∫_{R^d} A^{ε,δ}_{r,y} dW^H_{r,y},

where

A^{ε,δ}_{r,y} = ∫_0^t ϕ_δ(t − s − r) p_ε(B_s + x − y) ds. (5.6)

Define

u^{ε,δ}_{t,x} = E^B( u_0(x + B_t) exp( ∫_0^∞ ∫_{R^d} A^{ε,δ}_{r,y} dW^H_{r,y} − (1/2) α^{ε,δ} ) ), (5.7)

where α^{ε,δ} = ‖A^{ε,δ}‖²_{H_d}.
Proposition 5.2 The random field u^{ε,δ}_{t,x} given by (5.7) is a solution to Equation
(5.2).

Proof The proof is based on the notion of the S transform from white noise
analysis (see [5]). For any element ϕ ∈ H_d we define

S_{t,x}(ϕ) = E( u^{ε,δ}_{t,x} F_ϕ ),

where

F_ϕ = exp( W^H(ϕ) − (1/2) ‖ϕ‖²_{H_d} ).

From (5.7) we have

S_{t,x}(ϕ) = E( u_0(x + B_t) exp( W^H(A^{ε,δ} + ϕ) − (1/2) α^{ε,δ} − (1/2) ‖ϕ‖²_{H_d} ) )
= E( u_0(x + B_t) exp( ⟨A^{ε,δ}, ϕ⟩_{H_d} ) )
= E( u_0(x + B_t) exp( ∫_0^t ⟨ϕ_δ(t − s − ·) p_ε(B_s + x − ·), ϕ⟩_{H_d} ds ) ).

By the classical Feynman-Kac's formula, S_{t,x}(ϕ) satisfies the heat equation with
potential V(t, x) = ⟨ϕ_δ(t − ·) p_ε(x − ·), ϕ⟩_{H_d}, that is,

∂S_{t,x}(ϕ)/∂t = (1/2) ∆S_{t,x}(ϕ) + S_{t,x}(ϕ) ⟨ϕ_δ(t − ·) p_ε(x − ·), ϕ⟩_{H_d}.

As a consequence,

S_{t,x}(ϕ) = p_t u_0(x) + ∫_0^t ∫_{R^d} p_{t−s}(x − y) S_{s,y}(ϕ) ⟨ϕ_δ(s − ·) p_ε(y − ·), ϕ⟩_{H_d} ds dy.

Notice that DF_ϕ = ϕ F_ϕ. Hence, for any exponential random variable of this
form we have

E(u^{ε,δ}_{t,x} F_ϕ) = p_t u_0(x) + ∫_0^t ∫_{R^d} p_{t−s}(x − y) E( u^{ε,δ}_{s,y} ⟨ϕ_δ(s − ·) p_ε(y − ·), DF_ϕ⟩_{H_d} ) ds dy,

and we conclude by the duality relationship between the Skorohod integral and
the derivative operator.
The next theorem says that the random variables u^{ε,δ}_{t,x} have moments of all
orders, uniformly bounded in ε and δ, and converge to the solution to Equation
(1.1) as δ and ε tend to zero. Moreover, it provides an expression for the
moments of the solution to Equation (1.1).

Theorem 5.3 Suppose that H ≥ 1/2 and d = 1. Then, for any integer k ≥ 1 we
have

sup_{ε,δ} E( |u^{ε,δ}_{t,x}|^k ) < ∞, (5.8)

and the limit lim_{ε↓0} lim_{δ↓0} u^{ε,δ}_{t,x} exists in L^p, for all p ≥ 1, and it coincides with
the solution u_{t,x} of Equation (1.1). Furthermore, if U^B_0(t, x) = Π_{j=1}^k u_0(x + B^j_t),
where B^j are independent d-dimensional Brownian motions, we have for any
k ≥ 2

E( u_{t,x}^k ) = E( U^B_0(t, x) exp( Σ_{i<j} ∫_0^t δ_0(B^i_s − B^j_s) ds ) ) (5.9)

if H = 1/2, and

E( u_{t,x}^k ) = E( U^B_0(t, x) exp( Σ_{i<j} ∫_0^t ∫_0^t φ(s, r) δ_0(B^i_s − B^j_r) ds dr ) ) (5.10)

if H > 1/2.
In the case d = 2, for any integer k ≥ 2 there exists t_0(k) > 0 such that
for all t < t_0(k) (5.8) holds. If t < t_0(M) for some M ≥ 3, then the limit
lim_{ε↓0} lim_{δ↓0} u^{ε,δ}_{t,x} exists in L^p for all 2 ≤ p < M, and it coincides with the
solution u_{t,x} of Equation (1.1). Moreover, (5.10) holds for all 1 ≤ k ≤ M − 1.
Proof Fix an integer k ≥ 2. Suppose that B^i = {B^i_t, t ≥ 0}, i = 1, . . . , k, are
independent d-dimensional standard Brownian motions starting at 0, indepen-
dent of W^H. Then, using (5.7) we have

E( (u^{ε,δ}_{t,x})^k ) = E( Π_{j=1}^k u_0(x + B^j_t) exp( Σ_{j=1}^k ( ∫_0^∞ ∫_{R^d} A^{ε,δ,B^j}_{r,y} dW^H_{r,y} − (1/2) α^{ε,δ,B^j} ) ) ),

where A^{ε,δ,B^j}_{r,y} and α^{ε,δ,B^j} are computed using the Brownian motion B^j. There-
fore,

E( (u^{ε,δ}_{t,x})^k ) = E^B( U^B_0(t, x) exp( (1/2) ‖ Σ_{j=1}^k A^{ε,δ,B^j} ‖²_{H_d} − (1/2) Σ_{j=1}^k α^{ε,δ,B^j} ) )
= E^B( U^B_0(t, x) exp( Σ_{i<j} ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_d} ) ).

That is, the correction term (1/2)α^{ε,δ} in (5.7) due to the Wick product produces a
cancellation of the diagonal elements in the square norm of Σ_{j=1}^k A^{ε,δ,B^j}. The
next step is to compute the scalar product ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_d} for i ≠ j. We
consider two cases.
Case 1. Suppose first that H = 1/2 and d = 1. In this case we have

⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_1} = ∫_0^∞ ∫_R ∫_0^t ∫_0^t ϕ_δ(t − s_1 − r) p_ε(B^i_{s_1} + x − y)
× ϕ_δ(t − s_2 − r) p_ε(B^j_{s_2} + x − y) ds_1 ds_2 dr dy
= ∫_0^∞ ∫_0^t ∫_0^t ϕ_δ(t − s_1 − r) ϕ_δ(t − s_2 − r) p_{2ε}(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2 dr.

We have

∫_0^∞ ϕ_δ(t − s_1 − r) ϕ_δ(t − s_2 − r) dr
= δ^{−2} [ (t − s_1) ∧ (t − s_2) − (t − s_1 − δ)^+ ∨ (t − s_2 − δ)^+ ]^+
= η_δ(s_1, s_2).

It is easy to check that η_δ is a symmetric function on [0, t]² such that for any
continuous function g on [0, t]²,

lim_{δ↓0} ∫_0^t ∫_0^t η_δ(s_1, s_2) g(s_1, s_2) ds_1 ds_2 = ∫_0^t g(s, s) ds.

As a consequence the following limit holds almost surely:

lim_{δ↓0} ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_1} = ∫_0^t p_{2ε}(B^i_s − B^j_s) ds,

and by the properties of the local time of the one-dimensional Brownian motion
we obtain that, almost surely,

lim_{ε↓0} lim_{δ↓0} ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_1} = ∫_0^t δ_0(B^i_s − B^j_s) ds.

The function η_δ satisfies

sup_{0≤r≤t} ∫_0^t η_δ(s, r) ds ≤ 1,

and, as a consequence, the estimate (3.8) implies that for all λ > 0

sup_{ε,δ} E( exp( λ ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_1} ) ) < ∞.

Hence (5.8) holds and lim_{ε↓0} lim_{δ↓0} u^{ε,δ}_{t,x} := v_{t,x} exists in L^p, for all p ≥ 1.
Moreover, E(v_{t,x}^k) equals the right-hand side of Equation (5.9). Finally,
Equation (5.5) and the duality relationship (2.8) imply that for any random
variable F ∈ D^{1,2} with zero mean we have

E(F u^{ε,δ}_{t,x}) = E( ⟨ ∫_0^t ∫_R p_{t−s}(x − y) ϕ_δ(s − ·) p_ε(y − ·) u^{ε,δ}_{s,y} ds dy, DF ⟩_{H_1} ),

and letting δ and ε tend to zero we get

E( F v_{t,x} ) = lim_{ε↓0} lim_{δ↓0} E( ⟨ ∫_0^t ∫_R p_{t−s}(x − y) ϕ_δ(s − ·) p_ε(y − ·) v_{s,y} ds dy, DF ⟩_{H_1} ),

which implies that the process v is the solution of Equation (1.1), and by
uniqueness v_{t,x} = u_{t,x}.
Case 2. Consider now the case H > 1/2 and d = 2. We have

⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_d} = ∫_0^∞ ∫_0^∞ ∫_{R²} ∫_0^t ∫_0^t ϕ_δ(t − s_1 − r_1) p_ε(B^i_{s_1} + x − y)
× ϕ_δ(t − s_2 − r_2) p_ε(B^j_{s_2} + x − y) ds_1 ds_2 φ(r_1, r_2) dr_1 dr_2 dy
= ∫_0^∞ ∫_0^∞ ∫_0^t ∫_0^t ϕ_δ(t − s_1 − r_1) ϕ_δ(t − s_2 − r_2)
× p_{2ε}(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2 φ(r_1, r_2) dr_1 dr_2.

This scalar product can be written in the following form

⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_d} = ∫_0^t ∫_0^t η_δ(s_1, s_2) p_{2ε}(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2,

where

η_δ(s_1, s_2) = ∫_0^∞ ∫_0^∞ ϕ_δ(t − s_1 − r_1) ϕ_δ(t − s_2 − r_2) φ(r_1, r_2) dr_1 dr_2. (5.11)

We claim that there exists a constant γ such that

η_δ(s_1, s_2) ≤ γ |s_1 − s_2|^{2H−2}. (5.12)

In fact, if |s_2 − s_1| = s, we have

η_δ(s_1, s_2) ≤ H(2H − 1) δ^{−2} ∫∫ |u − v|^{2H−2} du dv
= (2δ²)^{−1} [ (s + δ)^{2H} + (s − δ)^{2H} − 2s^{2H} ]
= H δ^{−2} ∫_s^{s+δ} ( y^{2H−1} − (y − δ)^{2H−1} ) dy ≤ H δ^{2H−2} ≤ H 2^{2−2H} s^{2H−2},

if s ≤ 2δ. On the other hand, if s ≥ 2δ, we have

(2δ²)^{−1} [ (s + δ)^{2H} + (s − δ)^{2H} − 2s^{2H} ] = H δ^{−2} ∫_s^{s+δ} ( y^{2H−1} − (y − δ)^{2H−1} ) dy
≤ H(2H − 1)(s − δ)^{2H−2}
≤ H(2H − 1) 2^{2−2H} s^{2H−2}.
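The claim (5.12) can be probed numerically (an illustration, not part of the paper): with φ(x) = H(2H − 1)|x|^{2H−2}, the function η_δ only depends on s = |s_1 − s_2| and equals δ^{−2} ∫_0^δ ∫_0^δ φ(s + u − v) du dv. A sketch checking the bound H(2H − 1) 2^{2−2H} s^{2H−2} in the regime s ≥ 2δ:

```python
import math

H, delta = 0.7, 0.05
c = H*(2*H - 1)
phi = lambda x: c*abs(x)**(2*H - 2)

def eta(s, N=200):
    # eta_delta for |s1 - s2| = s, by midpoint rule over [0, delta]^2;
    # for s >= 2*delta the integrand is smooth (argument >= delta > 0)
    h = delta/N
    tot = 0.0
    for i in range(N):
        u = (i + 0.5)*h
        for j in range(N):
            v = (j + 0.5)*h
            tot += phi(s + u - v)
    return tot*h*h/delta**2

s = 3*delta                        # regime s >= 2*delta of the proof
bound = H*(2*H - 1)*2**(2 - 2*H)*s**(2*H - 2)
print(eta(s), bound)               # eta(s) stays below the bound
```

Since φ is convex away from the origin, η_δ(s) sits slightly above φ(s) but comfortably below the stated constant multiple of s^{2H−2}.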
It is easy to check that for any continuous function g on [0, t]²,

lim_{δ↓0} ∫_0^t ∫_0^t η_δ(s_1, s_2) g(s_1, s_2) ds_1 ds_2 = ∫_0^t ∫_0^t φ(s_1, s_2) g(s_1, s_2) ds_1 ds_2.

As a consequence the following limit holds almost surely:

lim_{ε↓0} lim_{δ↓0} ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_d} = ∫_0^t ∫_0^t φ(s_1, s_2) δ_0(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2.

From (5.12) and the estimate (3.13) we get

sup_{ε,δ} E( exp( λ ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_d} ) ) < ∞, (5.13)

if λ < λ_0(t), where λ_0(t) is defined in (3.12) with γ_T replaced by γ.
Hence, for any integer k ≥ 2, if t < t_0(k), where k(k−1)/2 = λ_0(t_0(k)), then
(5.8) holds because

E( (u^{ε,δ}_{t,x})^k ) ≤ ‖u_0‖^k_∞ Π_{i<j} ( E[ exp( (k(k − 1)/2) ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_d} ) ] )^{2/(k(k−1))}.

Finally, if t < t_0(M) and M ≥ 3, the limit lim_{ε↓0} lim_{δ↓0} u^{ε,δ}_{t,x} := v_{t,x} exists in L^p,
for all 2 ≤ p < M, and its moments are given by the right-hand side of Equation (5.10). As
in the case H = 1/2 we show that v_{t,x} = u_{t,x}.
6 Pathwise heat equation
In this section we consider the one-dimensional stochastic partial differential
equation
∆u+ uẆHt,x, (6.1)
where the product between the solution u and the noise Ẇ^H_{t,x} is now an ordinary
product. We first introduce a notion of solution using the Stratonovich integral
and a weak formulation of the mild solution. Given a random field v = {v_{t,x}, t ≥
0, x ∈ R} such that ∫_0^T ∫_R |v_{t,x}| dx dt < ∞ a.s. for all T > 0, the Stratonovich
integral ∫_0^T ∫_R v_{t,x} dW^H_{t,x} is defined as the following limit in probability if it exists
lim_{ε↓0} lim_{δ↓0} ∫_0^T ∫_R v_{t,x} Ẇ^{ε,δ}_{t,x} dx dt,
where Ẇ^{ε,δ}_{t,x} is the approximation of the noise W^H introduced in (5.1).
Definition 6.1 A random field u = {u_{t,x}, t ≥ 0, x ∈ R} is a weak solution to
Equation (6.1) if for any C^∞ function ϕ with compact support on R, we have
∫_R u_{t,x} ϕ(x) dx = ∫_R u_0(x) ϕ(x) dx + (1/2) ∫_0^t ∫_R u_{s,x} ϕ′′(x) dx ds + ∫_0^t ∫_R u_{s,x} ϕ(x) dW^H_{s,x}.
Consider the approximating stochastic heat equation
∂u^{ε,δ}/∂t = (1/2) Δu^{ε,δ} + u^{ε,δ} Ẇ^{ε,δ}_{t,x}. (6.2)
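For illustration only, an explicit finite-difference scheme with the structure of (6.2), where a fixed smooth function w stands in for the mollified noise Ẇ^{ε,δ} (simulating the fractional noise itself is outside the scope of this sketch):

```python
import numpy as np

# Explicit Euler scheme for u_t = 0.5 u_xx + u w(t, x) on [0, 1), periodic in x.
# w is a smooth surrogate for the mollified noise (an assumption made for this sketch).
nx, T = 100, 0.1
dx = 1.0 / nx
dt = 0.25 * dx**2                 # satisfies the stability condition 0.5*dt/dx^2 <= 0.5
nt = int(T / dt)
x = np.arange(nx) * dx
w = lambda t, x: np.sin(2 * np.pi * x) * np.cos(t)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)     # positive initial condition u_0
for k in range(nt):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # periodic Laplacian
    u = u + dt * (0.5 * lap + u * w(k * dt, x))
assert np.all(np.isfinite(u)) and np.all(u > 0.0)
```

The scheme preserves positivity of u_0 here because the explicit update has nonnegative coefficients under the chosen time step.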
Theorem 6.2 Suppose that H > 3/4. For any p ≥ 2, the limit
lim_{ε↓0} lim_{δ↓0} u^{ε,δ}_{t,x} = u_{t,x}
exists in L^p, and defines a weak solution to Equation (6.1) in the sense of
Definition 6.1. Furthermore, for any positive integer k
E(u^k_{t,x}) = E^B[U^B_0(t, x) exp((1/2) ∑_{i,j=1}^k ∫_0^t ∫_0^t φ(s_1, s_2) δ(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2)]
where U^B_0(t, x) has been defined in Theorem 5.3.
Proof By Feynman-Kac’s formula we can write
u^{ε,δ}_{t,x} = E^B[u_0(x + B_t) exp(∫_0^t ∫_R A^{ε,δ}_{r,y} dW^H_{r,y})], (6.3)
where A^{ε,δ}_{r,y} has been defined in (5.6). We will first show that for all k ≥ 1
sup_{ε,δ} E(|u^{ε,δ}_{t,x}|^k) < ∞. (6.4)
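The structure of the Feynman-Kac representation (6.3) can be sanity-checked in a degenerate case: with a constant potential c in place of the stochastic integral, u(t, x) = e^{ct} E[u_0(x + B_t)], which is explicit for a Gaussian u_0 (an illustrative check, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)
t, x, c, n = 0.5, 0.3, 0.7, 200000
u0 = lambda z: np.exp(-z**2 / 2)         # Gaussian initial condition, chosen for a closed form

# Monte Carlo Feynman-Kac for u_t = 0.5 u_xx + c u (constant potential c)
B_t = rng.normal(0.0, np.sqrt(t), n)
mc = np.exp(c * t) * u0(x + B_t).mean()

# Closed form: the Gaussian convolution gives (1 + t)^{-1/2} exp(-x^2 / (2 (1 + t)))
exact = np.exp(c * t) * np.exp(-x**2 / (2 * (1 + t))) / np.sqrt(1 + t)
assert abs(mc - exact) < 0.01
```

Replacing c by the mollified noise integral recovers the representation (6.3).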
Suppose that B^i = {B^i_t, t ≥ 0}, i = 1, . . . , k are independent standard Brownian
motions starting at 0, independent of W^H. Then, we have, as in the proof of
Theorem 5.3,
E[(u^{ε,δ}_{t,x})^k] = E^B[U^B_0(t, x) exp((1/2) ∑_{i,j=1}^k ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_1})].
Notice that
⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_1} = ∫_0^t ∫_0^t η_δ(s_1, s_2) p_{2ε}(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2,
where η_δ(s_1, s_2) satisfies (5.12). As a consequence, the inequalities (3.8) and
(3.15) imply that for all λ > 0, and all i, j we have
sup_{ε,δ} E[exp(λ ⟨A^{ε,δ,B^i}, A^{ε,δ,B^j}⟩_{H_1})] < ∞.
Thus, (6.4) holds, and
lim_{ε↓0} lim_{δ↓0} E[(u^{ε,δ}_{t,x})^k] = E^B[U^B_0(t, x) exp((1/2) ∑_{i,j=1}^k ∫_0^t ∫_0^t φ(s_1, s_2) δ_0(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2)].
In a similar way we can show that the limit lim_{ε,ε′↓0} lim_{δ,δ′↓0} E(u^{ε,δ}_{t,x} u^{ε′,δ′}_{t,x})
exists. Therefore, the iterated limit lim_{ε↓0} lim_{δ↓0} u^{ε,δ}_{t,x} exists in L^2.
Finally we need to show that
∫_0^T ∫_R u_{s,x} ϕ(x) dW^H_{s,x} − ∫_0^T ∫_R u^{ε,δ}_{s,x} ϕ(x) Ẇ^{ε,δ}_{s,x} ds dx → 0
in probability. We know that ∫_0^T ∫_R u^{ε,δ}_{s,x} ϕ(x) Ẇ^{ε,δ}_{s,x} ds dx converges in L^2 to some
random variable G. Hence, if
B^{ε,δ} = ∫_0^T ∫_R (u^{ε,δ}_{s,x} − u_{s,x}) ϕ(x) Ẇ^{ε,δ}_{s,x} ds dx (6.5)
converges in L^2 to zero, u_{s,x} ϕ(x) will be Stratonovich integrable and
∫_0^T ∫_R u_{s,x} ϕ(x) dW^H_{s,x} = G.
The convergence to zero of (6.5) is done as follows. First we remark that B^{ε,δ} =
δ(φ^{ε,δ}), where
φ^{ε,δ}_{r,z} = ∫_0^T ∫_R (u^{ε,δ}_{s,x} − u_{s,x}) ϕ(x) ϕ_δ(s − r) p_ε(x − z) ds dx.
Then, from the properties of the divergence operator, it suffices to show that
lim_{ε↓0} lim_{δ↓0} E(‖φ^{ε,δ}‖²_{H_1} + ‖Dφ^{ε,δ}‖²_{H_1⊗H_1}) = 0. (6.6)
It is clear that lim_{ε↓0} lim_{δ↓0} E(‖φ^{ε,δ}‖²_{H_1}) = 0. On the other hand,
D(φ^{ε,δ}_{r,z}) = ∫_0^T ∫_R [D(u^{ε,δ}_{s,x}) − D(u_{s,x})] ϕ(x) ϕ_δ(s − r) p_ε(x − z) ds dx,
and
D_{r,z}(u^{ε,δ}_{t,x}) = E^B[u_0(x + B_t) exp(∫_0^t ∫_R A^{ε,δ}_{s,y} dW^H_{s,y}) A^{ε,δ}_{r,z}].
Then, as before we can show that
lim_{ε,ε′↓0} lim_{δ,δ′↓0} E(⟨D(u^{ε,δ}_{s,x}), D(u^{ε′,δ′}_{s,x})⟩_{H_1})
= E^B[u_0(x + B^1_t) u_0(x + B^2_t) exp((1/2) ∑_{i,j=1}^2 ∫_0^t ∫_0^t φ(s_1, s_2) δ_0(B^i_{s_1} − B^j_{s_2}) ds_1 ds_2)
× ∫_0^t ∫_0^t φ(s_1, s_2) δ_0(B^1_{s_1} − B^2_{s_2}) ds_1 ds_2].
This implies that u^{ε,δ}_{s,x} converges in the space D^{1,2} to u_{s,x} as δ ↓ 0 and ε ↓ 0.
Actually, the limit is in the norm of the space D^{1,2}(H_1). Then, (6.6) follows
easily.
Since the solution is square integrable it admits a Wiener-Itô chaos expan-
sion. The explicit form of the Wiener chaos coefficients is given below.
Theorem 6.3 The solution to (6.1) is given by
u_{t,x} = ∑_{n=0}^∞ I_n(f_n(·, t, x)) (6.7)
where
f_n(t_1, x_1, . . . , t_n, x_n, t, x)
= (1/n!) E^B[u_0(x + B_t) exp((1/2) ∫_0^t ∫_0^t φ(s_1, s_2) δ_0(B_{s_1} − B_{s_2}) ds_1 ds_2)
× δ_0(B_{t_1} + x − x_1) · · · δ_0(B_{t_n} + x − x_n)]. (6.8)
Proof From the Feynman-Kac formula it follows that
u^{ε,δ}_{t,x} = E^B[u_0(x + B_t) exp(∫_0^t ∫_R A^{ε,δ}_{r,y} dW^H_{r,y})]
= E^B[u_0(x + B_t) exp((1/2) ‖A^{ε,δ}‖²_{H_1}) exp(∫_0^t ∫_R A^{ε,δ}_{r,y} dW^H_{r,y} − (1/2) ‖A^{ε,δ}‖²_{H_1})]
= ∑_{n=0}^∞ I_n(f^{ε,δ}_n(t, x)),
where
f^{ε,δ}_n(t_1, x_1, . . . , t_n, x_n, t, x) = (1/n!) E^B[u_0(x + B_t) exp((1/2) ‖A^{ε,δ}‖²_{H_1}) A^{ε,δ}_{t_1,x_1} · · · A^{ε,δ}_{t_n,x_n}].
Letting δ and ε go to 0, we obtain the chaos expansion of u_{t,x}.
Consider the stochastic partial differential equation (6.1) and its approxima-
tion (6.2). The initial condition is u_0(x). We shall study the strict positivity of
the solution. In particular we shall show that E(|u_{t,x}|^{−p}) < ∞.
Theorem 6.4 Let H > 3/4. If E(|u_0(B_t)|) > 0, then for any 0 < p < ∞, we
have that
E(|u_{t,x}|^{−p}) < ∞ (6.9)
and moreover,
E(|u_{t,x}|^{−p}) ≤ (E|u_0(x + B_t)|)^{−p−1} E^B[|u_0(x + B_t)|
× exp((p²/2) ∫_0^t ∫_0^t δ(B_{s_1} − B_{s_2}) φ(s_1, s_2) ds_1 ds_2)]. (6.10)
Proof Denote κ_p = (E^B(|u_0(x + B_t)|))^{−p−1}. Then, Jensen’s inequality ap-
plied to the equality u^{ε,δ}_{t,x} = E^B[u_0(x + B_t) exp(∫_0^t ∫_R A^{ε,δ}_{r,y} dW^H_{r,y})] implies
|u^{ε,δ}_{t,x}|^{−p} ≤ κ_p E^B[|u_0(x + B_t)| exp(−p ∫_0^t ∫_R A^{ε,δ}_{r,y} dW^H_{r,y})].
Therefore
E(|u^{ε,δ}_{t,x}|^{−p}) ≤ κ_p E^B[|u_0(x + B_t)| E^W(exp(−p ∫_0^t ∫_R A^{ε,δ}_{r,y} dW^H_{r,y}))]
= κ_p E^B[|u_0(x + B_t)| exp((p²/2) ‖A^{ε,δ}‖²_{H_1})],
and we can conclude as in the proof of Theorem 6.2.
Using the theory of rough path analysis (see [8]) and p-variation estimates,
Gubinelli, Lejay and Tindel [4] have proved that for H > 3/4, the equation
∂u/∂t = (1/2) Δu + σ(u) Ẇ^H_{t,x}
has a unique mild solution up to a random explosion time T > 0, provided
σ ∈ C²_b(R). In this sense, the restriction H > 3/4 that we found in the case
σ(x) = x is natural, and in this particular case, using chaos expansions and
Feynman-Kac’s formula we have been able to show the existence of a solution
for all times.
Acknowledgment. We thank Carl Mueller for discussions.
References
[1] R. F. Bass and X. Chen: Self-intersection local time: Critical exponent,
large deviations, and laws of the iterated logarithm. Ann. Probab. 32 (2004)
3221–3247.
[2] R. Buckdahn and D. Nualart: Linear stochastic differential equations and
Wick products. Probab. Theory Related Fields 99 (1994) 501–526.
[3] T. E. Duncan, B. Maslowski and B. Pasik-Duncan: Fractional Brownian
Motion and Stochastic Equations in Hilbert Spaces. Stochastics and Dy-
namics 2 (2002) 225–250.
[4] M. Gubinelli, A. Lejay and S. Tindel: Young integrals and SPDE. To
appear in Potential Analysis, 2006.
[5] T. Hida, H. H. Kuo, J. Potthoff, and L. Streit: White noise. An infinite-
dimensional calculus. Mathematics and its Applications, 253. Kluwer Aca-
demic Publishers Group, Dordrecht, 1993.
[6] Y. Hu: Heat equation with fractional white noise potentials. Appl. Math.
Optim. 43 (2001) 221–243.
[7] J.-F. Le Gall: Exponential moments for the renormalized self-intersection
local time of planar Brownian motion. Séminaire de Probabilités, XXVIII,
172–180, Lecture Notes in Math., 1583, Springer, Berlin, 1994.
[8] T. Lyons and Z. Qian: System control and rough paths. Oxford Mathemat-
ical Monographs. Oxford Science Publications. Oxford University Press,
Oxford, 2002.
[9] B. Maslowski and D. Nualart: Evolution equations driven by a fractional
Brownian motion. Journal of Functional Analysis. 202 (2003) 277-305.
[10] J. Memin, Y. Mishura and E. Valkeila: Inequalities for the moments of
Wiener integrals with respect to a fractional Brownian motion. Statist.
Probab. Lett. 51 (2001) 197–206.
[11] D. Nualart: The Malliavin Calculus and related topics. 2nd edition.
Springer-Verlag 2006.
[12] D. Nualart and B. Rozovskii: Weighted stochastic Sobolev spaces and bi-
linear SPDEs driven by space-time white noise. J. Funct. Anal. 149 (1997)
200–225.
[13] D. Nualart and M. Zakai: Generalized Brownian functionals and the solu-
tion to a stochastic partial differential equation. J. Funct. Anal. 84 (1989)
279–296.
[14] V. Pipiras and M. Taqqu: Integration questions related to fractional Brow-
nian motion. Probab. Theory Related Fields 118 (2000) 251–291.
[15] S. Tindel, C. Tudor and F. Viens: Stochastic evolution equations with
fractional Brownian motion. Probab. Theory Related Fields 127 (2003) 186–
[16] C. Tudor: Fractional bilinear stochastic equations with the drift in the first
fractional chaos. Stochastic Anal. Appl. 22 (2004) 1209–1233.
[17] J. B. Walsh: An introduction to stochastic partial differential equations.
In: Ecole d’Ete de Probabilites de Saint Flour XIV, Lecture Notes in
Mathematics 1180 (1986) 265–438.
[18] S. Watanabe: Lectures on stochastic differential equations and Malliavin
calculus. Published for the Tata Institute of Fundamental Research, Bom-
bay; by Springer-Verlag, Berlin, 1984.
0704.1825 | Extended envelopes around Galactic Cepheids III. Y Oph and alpha Per from near-infrared interferometry with CHARA/FLUOR
α Per from near-infrared interferometry with CHARA/FLUOR
Antoine Mérand
Center for High Angular Resolution Astronomy, Georgia State University, PO Box 3965,
Atlanta, Georgia 30302-3965, USA
[email protected]
Jason P. Aufdenberg
Embry-Riddle Aeronautical University, Physical Sciences Department, 600 S. Clyde Morris
Blvd, Daytona Beach, FL 32114, USA
Pierre Kervella and Vincent Coudé du Foresto
LESIA, UMR 8109, Observatoire de Paris, 5 place Jules Janssen, 92195 Meudon, France
Theo A. ten Brummelaar, Harold A. McAlister, Laszlo Sturmann, Judit Sturmann and
Nils H. Turner
Center for High Angular Resolution Astronomy, Georgia State University, PO Box 3965,
Atlanta, Georgia 30302-3965, USA
ABSTRACT
Unbiased angular diameter measurements are required for accurate distances
to Cepheids using the interferometric Baade Wesselink method (IBWM). The
precision of this technique is currently limited by interferometric measurements at
the 1.5% level. At this level, the center-to-limb darkening (CLD) and the presence
of circumstellar envelopes (CSE) seem to be the two main sources of bias. The
observations we performed aim at improving our knowledge of the interferometric
visibility profile of Cepheids. In particular, we assess the systematic presence of
CSE around Cepheids in order to determine accurate distances with the IBWM
free from CSE biased angular diameters. We observed a Cepheid (Y Oph) for
which the pulsation is well resolved and a non-pulsating yellow supergiant (α Per)
using long-baseline near-infrared interferometry. We interpreted these data using
a simple CSE model we previously developed. We found that our observations
of α Per do not provide evidence for a CSE. The measured CLD is explained by
a hydrostatic photospheric model. Our observations of Y Oph, when compared
to smaller baseline measurements, suggest that it is surrounded by a CSE with
similar characteristics to CSE found previously around other Cepheids. We have
determined the distance to Y Oph to be d = 491 ± 18 pc. Additional evidence
points toward the conclusion that most Cepheids are surrounded by faint CSE,
detected by near infrared interferometry: after observing four Cepheids, all show
evidence for a CSE. Our CSE non-detection around a non-pulsating supergiant
in the instability strip, α Per, provides confidence in the detection technique and
suggests a pulsation driven mass-loss mechanism for the Cepheids.
Subject headings: stars: variables: Cepheid - stars: circumstellar matter - stars:
individual (Y Oph) - stars: individual (α Per) - techniques: interferometric
1. Introduction
In our two previous papers (Kervella et al. 2006; Mérand et al. 2006b), hereafter Paper I
and Paper II, we reported the discovery of faint circumstellar envelopes (CSE) around Galactic
classical Cepheids. Interestingly, all the Cepheids we observed (ℓ Car in Paper I, α UMi,
and δ Cep in Paper II) were found to harbor CSE with similar characteristics: a CSE 3 to 4
times larger than the star which accounts for a few percent of the total flux in the infrared
K band. The presence of CSE was discovered in our attempt to improve our knowledge of
Cepheids in the context of distance determination via the interferometric Baade-Wesselink
method (IBWM). Part of the method requires the measurement of the angular diameter
variation of the star during its pulsation. The determination of the angular diameters from
sparse interferometric measurements is not straightforward because optical interferometers
gather high angular resolution data only at a few baselines at a time, thus good phase and
angular resolution coverage cannot be achieved in a short time. For Cepheids, the main
uncertainty in the IBWM was thought to be the center-to-limb darkening (CLD), which
biases the interferometric angular diameter measurements (Marengo et al. 2004).
The direct measurement of CLD is possible using an optical interferometer, given suffi-
cient angular resolution and precision. Among current optical interferometers, CHARA/FLUOR
(ten Brummelaar et al. 2005; Mérand et al. 2006a) is one of the few capable of such a mea-
surement for Cepheids. The only Cepheid accessible to CHARA/FLUOR, i.e. large enough
in angular diameter, for such a measurement is Polaris (α UMi), which we observed and
found to have a CLD compatible with hydrostatic photospheric models, though surrounded
by a CSE (Paper II). Polaris, however, is a very low amplitude pulsation Cepheid: 0.4%
in diameter, compared to 15 to 20% for type I Cepheids (Moskalik & Gorynya 2005), thus
the agreement is not necessarily expected for large amplitude Cepheids, whose photospheres
are more out of equilibrium. The direct measurement of CLD of a high amplitude Cepheid
during its pulsation phase remains to be performed.
Hydrodynamic simulations (Marengo et al. 2003) suggest that the CLD variations dur-
ing the pulsation do not account for more than a 0.2% bias in distance determination in the
near infrared using the IBWM, where most of the IBWM observing work has been done in
recent years: the best formal distance determination to date using the IBWM is of the order
of 1.5% (Mérand et al. 2005b).
Whereas the near infrared IBWM seems to be relatively immune to bias from CLD, the
recent discovery of CSEs raises the issue of possible bias in angular diameter measurements,
hence bias in distance estimations at the 10% level (Paper II). It is therefore important
to continue the study of CSE around Cepheids. We present here interferometric observa-
tions of the non-pulsating supergiant α Per and the low amplitude Cepheid Y Oph. We
obtained these results in the near infrared K-band, using the Fiber Linked Unit for Optical
Recombination — FLUOR — (Mérand et al. 2006a), installed at Georgia State University’s
Center for High Angular Resolution Astronomy (CHARA) Array located on Mount Wilson,
California (ten Brummelaar et al. 2005).
2. The low amplitude Cepheid Y Oph
In the General Catalog of Variable Stars (Kholopov et al. 1998), Y Oph is classified in
the DCEPS category, i.e. low amplitude Cepheids with almost symmetrical light curves and
with periods less than 7 days. The GCVS definition adds that DCEPS are first overtone and/or
crossing the instability strip for the first time. A decrease in photometric variation amplitude
over time has been measured, as well as a period change (Fernie et al. 1995b). Using this
period change rate, 7.2 ± 1.5 s yr^{−1}, and the period of 17.1207 days, the star can be identified
as crossing the instability strip for the third time, according to models (Turner et al. 2006).
The fact that Y Oph belongs to the DCEPS category is questionable: its period is longer
than 7 days by almost a factor of three, though its light curve is quasi-symmetric with a low
amplitude compared to other type I Cepheids of similar periods (Vinko et al. 1998). Indeed,
Y Oph is almost equally referred to in publications as being a fundamental-mode Cepheid
or a first overtone.
In this context, a direct determination of the linear diameter can settle whether Y Oph
belongs to the fundamental mode group or not. This is of prime importance: because of its
brightness and the large amount of observational data available, Y Oph is often used to cali-
brate the Period-Luminosity (PL) or the Period-Radius (PR) relations. The interferometric
Baade-Wesselink method offers this opportunity to geometrically measure the average linear
radius of pulsating stars: if Y Oph is not a fundamental pulsator, its average linear diameter
should depart from the classical PR relation.
2.1. Interferometric observations
The direct detection of angular diameter variations of a pulsating star has been achieved
for many stars now using optical interferometers (Lane et al. 2000; Kervella et al. 2004a;
Mérand et al. 2005b). We showed (Mérand et al. 2006b) that for a given average diameter,
one should use a baseline that maximizes the amplification factor between the variation in
angular diameter and observed squared visibility. This baseline provides an angular resolu-
tion of the order of Bθ/λ ≈ 1, in other words in the first lobe, just before the first minimum
(Bθ/λ ≈ 1.22 for a uniform disk model), where B is the baseline (in meters), θ the angular
diameter (in radians) and λ the wavelength of observation (in meters). According to pre-
vious interferometric measurements (Kervella et al. 2004a), the average angular diameter of
Y Oph is of the order of 1.45 mas (milli arcsecond).
Ideally, that would mean using a baseline of the order of 300 m, which is available at
the CHARA Array. Because of a trade we made with other observing programs, we used
only a 250 m baseline provided by telescopes S1 and E2.
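The baseline arithmetic can be reproduced with a few lines of Python; the uniform-disk squared visibility and its first null at Bθ/λ ≈ 1.22 follow from V = 2J_1(πBθ/λ)/(πBθ/λ), with J_1 evaluated from its integral representation (a numpy-only sketch, values illustrative):

```python
import numpy as np

def J1(x):
    # Bessel J1 from its integral representation: J1(x) = (1/pi) int_0^pi cos(tau - x sin(tau)) dtau
    tau = (np.arange(1500) + 0.5) * np.pi / 1500     # midpoint rule
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.mean(np.cos(tau - x[:, None] * np.sin(tau)), axis=1)

MAS = np.pi / (180 * 3600 * 1000)        # one milliarcsecond in radians
theta, lam = 1.45 * MAS, 2.13e-6         # Y Oph angular diameter, FLUOR K-band wavelength
B = lam / theta                          # baseline giving B*theta/lambda = 1
print(round(B), "m")                     # close to the 300 m quoted above

u = np.linspace(1.0, 1.5, 2001)          # u = B*theta/lambda
V2 = (2 * J1(np.pi * u) / (np.pi * u))**2
u_null = u[np.argmin(V2)]
assert abs(u_null - 1.22) < 0.01         # first null of the uniform-disk visibility
```

The first null sits at π u = 3.8317, the first zero of J_1, hence u ≈ 1.22 as quoted in the text.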
The fringe squared visibility is estimated by integrating the fringe power
spectrum. A full description of the algorithm can be found in Coude Du Foresto et al. (1997)
and Mérand et al. (2006a).
The raw squared visibilities have been calibrated using resolved calibrator stars, chosen
from a specific catalog (Mérand et al. 2005a) using criteria defined to minimize the calibra-
tion bias and maximize signal to noise. The error introduced by the uncertainty on each
calibrator’s estimated angular diameter has been properly propagated. Among the three
main calibrators (Tab. 1), one, HR 6639, turned out to be inconsistent with the others. The
raw visibility of this star was found to vary too much to be consistent with the expected
statistical dispersion. The quantity to calibrate, the interferometric efficiency (also called
instrument visibility), is very stable for an instrument using single mode fibers, such as
FLUOR. If this quantity is assumed to be constant over a long period of time, and if obser-
vations of a given simple star are performed several times during this period, one can check
whether or not the variation of the raw visibilities with respect to the projected baseline
is consistent with a uniform disk model. Doing so, HR 6639 was found inconsistent with
the other stars observed during the same night (Fig. 1). The inconsistency may be explained
by the presence of a faint companion with a magnitude difference of 3 or 4 with respect
to the primary. Two other calibrators, from another program, were also used as check stars:
HR 7809 and ρ Aql (Tab. 1). This latter calibrator is not part of the catalog by Mérand et al.
(2005a). Its angular diameter has been determined using the Kervella et al. (2004b) surface
brightness calibration applied to published photometric data in the visible and near infrared.
For each night we observed Y Oph, we determined a uniform disk diameter (Tab. 3)
based on several squared visibility measurements (Tab. 2). Each night was assigned a
unique pulsation phase established using the average date of observation and the Fernie et al.
(1995b) ephemeris, including the measured period change:
D = JD − 2440007.720 (1)
E = 0.05839 D − 3.865 × 10^{−10} D^2 (2)
P = 17.12507 + 3.88 × 10^{−6} E (3)
where E is the epoch of maximum light (the fractional part is the pulsation
phase) and P the period at this epoch.
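A direct transcription of the ephemeris (Eqs. 1-3) into a phase function (an illustrative helper, not from the paper):

```python
from math import floor

def yoph_phase(jd):
    """Pulsation phase of Y Oph from the ephemeris in Eqs. 1-3."""
    D = jd - 2440007.720
    E = 0.05839 * D - 3.865e-10 * D**2
    return E - floor(E)               # fractional part of the epoch E

def yoph_period(jd):
    """Period (days) at the epoch corresponding to jd, from Eq. 3."""
    D = jd - 2440007.720
    E = 0.05839 * D - 3.865e-10 * D**2
    return 17.12507 + 3.88e-6 * E

assert yoph_phase(2440007.720) < 1e-9     # maximum light at the reference date
assert 0.0 <= yoph_phase(2453000.0) < 1.0
```

The quadratic term in E encodes the measured period change; the period itself drifts slowly through Eq. 3.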
2.2. Pulsation
2.2.1. Radial Velocity integration
In order to measure the distance to a pulsating star, the IBWM makes use of radial
velocities and angular diameters. The latter is the integral over time of the former. The
radial velocities, which have been acquired at irregular intervals during the pulsation phase,
must be numerically integrated. This process is highly sensitive to noisy data and the best
way to achieve a robust integration is to interpolate the data before integration. For this
purpose, we use a periodic cubic spline function, defined by floating nodes (Fig. 2). The
coordinates of these nodes are adjusted such that the cubic spline function going through
these nodes provides the best fit to the data points. The phase positions φi of these nodes
are forced to be between 0 and 1, but they are replicated at φ_i + n, where n is an integer,
in order to obtain a periodic function of period 1.
Among published Y Oph radial velocities data, we chose Gorynya et al. (1998) because
of the uniform phase coverage and the algorithm used to extract radial velocities: the cross-
correlation method. As shown by Nardetto et al. (2004), the method used can influence
the distance determination via the choice of the so-called projection factor, which we shall
introduce in the following section. The pulsation phases have been also determined using
Eq. 2.
The data presented by Gorynya et al. (1998) were acquired between June 1996 and
August 1997. As we already mentioned, Y Oph is known for its changing period and photo-
metric amplitude. Based on Fernie et al. (1995b), the decrease in amplitude observed for the
photometric B and V bands does not have a measurable counterpart in radial velocity. This
is why we did not apply any correction in amplitude to the radial velocity data in order to
take into account the ten years between the spectroscopic and interferometric measurements.
2.2.2. Distance determination method
Once radial velocities v_rad are interpolated (Fig. 2) and integrated, the distance d is
determined by fitting the radial displacement to the measured angular diameters (Fig. 3):
θ_UD(T) − θ_UD(0) = −2 (kp/d) ∫_0^T v_rad(t) dt (4)
where θ_UD is the interferometric uniform disk diameter, and k is defined as the ratio between
θ_UD and the true stellar angular diameter. The projection factor, p, is the ratio between the
pulsation velocity and the spectroscopically measured radial velocity. The actual parameters
of the fit are the average angular diameter θ_UD(0) and the biased distance d/k.
This formalism assumes that both k and p do not vary during the pulsation. There is
evidence that this might be true for k, based on hydrodynamic simulation (Marengo et al.
2003), at the 0.2% level. Observational evidence exists as well: when we measured the p-
factor of δ Cep (Mérand et al. 2005b) we did not find any difference between the shapes of
the left and right parts of Eq. 4, therefore kp is probably constant over a pulsation period,
at least at the level of precision we have available.
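Since Eq. 4 is linear in the two fitted quantities θ_UD(0) and kp/d, the fit can be sketched as a linear least squares on synthetic data (all numbers illustrative; k = 1 assumed, and the mas-pc conversion written out explicitly):

```python
import numpy as np

rng = np.random.default_rng(2)
MAS = np.pi / (180 * 3600 * 1000)            # mas in rad
PC_KM = 3.0857e13                            # parsec in km

P = 17.125 * 86400.0                         # pulsation period, s
p, d_true, theta0 = 1.27, 491.0, 1.314       # values echoing the text (illustrative)
t = np.linspace(0.0, P, 40)
A_v = 10.0                                   # radial-velocity amplitude, km/s (assumed)
I = -A_v * P / (2 * np.pi) * (np.cos(2 * np.pi * t / P) - 1.0)   # int_0^t v_rad dt, km

theta = theta0 - 2 * p * I / (d_true * PC_KM) / MAS              # Eq. 4 with k = 1, in mas
theta_obs = theta + rng.normal(0.0, 1e-3, t.size)                # 0.001 mas measurement noise

M = np.column_stack([np.ones_like(t), I])    # linear model: theta = theta0 + slope * I
coef, *_ = np.linalg.lstsq(M, theta_obs, rcond=None)
d_fit = -2 * p / (coef[1] * MAS * PC_KM)
assert abs(d_fit - d_true) / d_true < 0.05
```

The fitted slope carries the distance; the intercept recovers the mean angular diameter.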
For this work, we will adopt the value for p we determined observationally for near
infrared interferometry/ cross correlation radial velocity: p = 1.27. This result has been
established for δ Cep (Mérand et al. 2005b). This is also the latest value computed from
hydrodynamical photospheric models (Nardetto et al. 2004). The IBWM fit yields a biased
distance d/k = 480± 18 pc and an average angular uniform disk diameter θUD(0) = 1.314±
0.005 mas. Note that we had to allow a phase shift between interferometric and radial
velocity observations: −0.074 ± 0.005 (Fig. 3). The final reduced χ2 is of the order of 3,
mostly due to one data point (φ = 0.887).
2.2.3. Choice of k
Usually, the choice of k is made assuming the star is a limb-darkened disk. The strength
of the CLD is computed using photospheric models, then a value of k is computed. This
approach is sometimes confusing because, even for a simple limb darkened disk, there is no
unique value of k, in the sense that this value varies with respect to angular resolution. The
uniform disk angular size depends upon which portion of the visibility curve is measured.
However, it is mostly unambiguous in the first lobe of visibility, i.e. at moderate angular
resolution: Bθ/λ ≤ 1.
However, as shown in Paper II, the presence of a faint CSE around Cepheids biases k
up to 10%, particularly when the angular resolution is moderate and the star is not well
resolved (V^2 ∼ 0.5). Under these conditions, the CSE is largely resolved, leading to a strong
bias if the CSE is omitted. On the other hand, at greater angular resolution (Bθ/λ ∼ 1), the
star is fully resolved (V^2 approaches its first null) and the bias from the CSE is minimized.
In any case, it is critical to determine whether or not Y Oph is surrounded by a CSE if an
accurate distance is to be derived.
2.3. Interferometric evidence of a CSE around Y Oph
We propose here to compare the uniform disk diameters obtained by VLTI/VINCI
(Kervella et al. 2004a) and CHARA/FLUOR (this work). This makes sense because these
two instruments are very similar. Both observe in the near infrared K-band. Moreover, both
instruments observed Y Oph in the first lobe of the visibility profile, though at different
baselines.
If Y Oph is truly a uniform disk, or even a limb-darkened disk, the two instruments
should give similar results. That is because the star’s first lobe of squared visibility is insen-
sitive to the CLD and only dependent on the size. Conversely, if Y Oph is surrounded
by a CSE, we expect a visibility deficit at smaller baseline (VLTI/VINCI), hence a larger
apparent uniform disk diameter (see Fig. 5). This is because the CSE is fully resolved at
baselines which barely resolve the star.
This is indeed the case, as seen on Fig. 4: VLTI/VINCI UD diameters are larger than
CHARA/FLUOR’s. Even if the angular resolution of VLTI/VINCI is smaller than for
CHARA/FLUOR, leading to less precise angular diameter estimations, the disagreement
is still statistically significant, of the order of 3 sigmas. Using the CSE model we can predict
the correct differential UD size between the two instruments, consistent with the presence
of a CSE around Y Oph. The amount of discrepancy can be used to estimate the flux ratio
between the CSE and the star. In the case of Y Oph, we find that the CSE accounts
for 5 ± 2% of the stellar flux. Note that for this comparison, we recomputed the phase of
VLTI/VINCI data using Fernie et al. (1995b) ephemeris presented in Eq. 2.
In Fig. 5, we plot k as a function of the observed squared visibility for different models:
hydrostatic CLD, 2% CSE and 5% CSE (K-Band flux ratio). For the hydrostatic model
we have k = 0.983. For the 5% CSE models, for CHARA/FLUOR (0.20 < V^2 < 0.35),
θUD/θ⋆ = 1.023. This is the value we shall adopt. If we ignore the presence of the CSE, the
bias introduced is 1.023/0.983 ≈ 1.04, or 4%.
2.4. Unbiased distance and linear radius
This presence of a 5% CSE leads to an unbiased distance of d = 491 ± 18 pc, which
corresponds to a 3.5% uncertainty on the distance. This is to be compared with the bias we
corrected for if one omits the CSE, of the order of 4%. Ignoring the CSE leads to a distance
of d = 472 ± 18 pc.
We note that k biases only the distance, so one can form the following quantity:
[θUD(0)]× [d/k], which is the product of the two adjusted parameters in the fit, both biased.
This quantity is by definition the linear diameter of the star, and does not depend on the
factor k, even if it is still biased by the choice of p. If θ is in mas and d in parsecs, then
the average linear radius in solar radii is: R = 0.1075θd. In the case of Y Oph, this leads to
R = 67.8± 2.5 R⊙.
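The coefficient 0.1075 and the quoted radius follow from unit conversion alone (a quick arithmetic check; the constants are standard values, not from the paper):

```python
import numpy as np

MAS = np.pi / (180 * 3600 * 1000)        # mas in rad
PC_M, RSUN_M = 3.0857e16, 6.957e8        # parsec and solar radius in meters (standard values)

# R [R_sun] = coeff * theta [mas] * d [pc], theta being an angular *diameter* (hence the /2)
coeff = MAS * PC_M / (2 * RSUN_M)
assert abs(coeff - 0.1075) < 0.0005

R = coeff * 1.314 * 480.0                # theta_UD(0) = 1.314 mas, biased distance d/k = 480 pc
assert abs(R - 67.8) < 0.3               # consistent with the radius quoted above
```

As the text notes, the product θ_UD(0) × (d/k) is free of the bias factor k, so this radius does not depend on the CSE model.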
2.5. Conclusion
Y Oph appears larger (over 2 sigma) in the infrared K-band at a 140 m baseline compared
to a 230 m baseline. Using a model of a star surrounded by a CSE we developed based
on observations of other Cepheids, this disagreement is explained both qualitatively and
quantitatively by a CSE accounting of 5% of the stellar flux in the near infrared K-Band.
This model allows us to unbias the distance estimate: d = 491 ± 18 pc. The linear radius
estimate is not biased by the presence of CSE and we found R = 67.8± 2.5 R⊙.
Our distance is not consistent with the estimation using the Barnes-Evans method:
Barnes et al. (2005) found d = 590 ± 42 pc (Bayesian) and d = 573 ± 8 pc (least squares).
For this work, they used the same set of radial velocities we used. Our estimate is even more
inconsistent with the other available interferometric estimate, by Kervella et al. (2004a): d =
690 ± 50 pc. This latter result has been established using a hybrid version of the BW method:
a value of the linear radius is estimated using the Period-Radius relation calibrated using
classical Cepheids, not measured from the data. This assumption is questionable, as we noted
before, since Y Oph is a low amplitude Cepheid. Kervella et al. (2004a) deduced R = 100±
8 R⊙ from the PR relation, whereas we measured R = 67.8 ± 2.5 R⊙. Because Y Oph’s measured
linear radius is not consistent with the PR relation for classical, fundamental mode Cepheids,
it is probably safe to exclude it from further calibrations. Interestingly, Barnes et al. (2005)
observationally determined Y Oph’s linear radius to be also slightly larger (2.5 sigma) than
what we find: R = 92.1±6.6 R⊙ (Bayesian) and R = 89.5±1.2 R⊙ (least squares). They use
a surface brightness technique using a visible-near infrared color, such as V-K. This method is
biased if the reddening is not well known. If the star is reddened, V magnitudes are increased
more than K-band magnitudes. This leads to an underestimated surface brightness, because
the star appears redder, thus cooler than it is. The total brightness (estimated from V) is also
underestimated. These two underestimates have opposing effects on the estimated angular
diameter: an underestimated surface brightness leads to an overestimated angular diameter,
whereas an underestimated luminosity leads to an underestimated angular diameter. In the
case of a reddening law, the two effects combine and give rise to a larger angular diameter: the
surface brightness effect wins over the total luminosity. Based on their angular diameter, θ ≈
1.45 mas, it appears that Barnes et al. (2005) overestimated Y Oph’s angular size. Among
Cepheids brighter than mV = 6.0, Y Oph has the strongest B-V color excess, E(B − V ) =
0.645 (Fernie et al. 1995a) and one of the highest fraction of polarized light, p = 1.34±0.12 %
(Heiles 2000). Indeed, Y Oph is within the galactic plane: this means that it probably has
a large extinction due to the interstellar medium.
3. The non-pulsating yellow super giant α Per
α Per (HR 1017, HD 20902, F5Ib) is among the most luminous non-pulsating stars
inside the Cepheids’ instability strip. The Doppler velocity variability has been found to be
very weak, of the order of 100 m/s in amplitude (Butler 1998). This amplitude is ten times
less than what is observed for the very low amplitude Cepheid Polaris (Hatzes & Cochran
2000).
α Per’s apparent angular size, approximately 3 mas (Nordgren et al. 2001), makes it
a perfect candidate for direct center-to-limb darkening detection with CHARA/FLUOR.
Following the approach we used for Polaris (Paper II), we observed α Per using three different
baselines, including one sampling the second lobe, in order to measure its CLD strength, but
also in order to be able to assess the possible presence of a CSE around this star.
3.1. Interferometric observations
If the star is purely a CLD disk, then only two baselines are required to measure the
angular diameter and the CLD strength. Observations must take place in the first lobe of the
squared visibility profile, in order to set the star’s angular diameter θ. The first lobe is defined
by Bθ_UD/λ < 1.22. Additional observations should be taken in the second lobe, in particular
near the local maximum (Bθ/λ ∼ 3/2), because the strength of the CLD is directly linked
to the height of the second lobe. To address the presence of a CSE, observations should be
made at a small baseline. Because the CSEs that were found around Cepheids are roughly 3
times larger than the star itself (Paper I and II), we chose a small baseline where Bθ/λ ∼ 1/3.
As demonstrated by our Polaris measurements, the presence of CSE is expected to weaken
the second lobe of visibility curve, mimicking stronger CLD.
FLUOR operates in the near infrared K-band, with a mean wavelength of λ0 ≈ 2.13 µm.
This sets the small, first-lobe and second-lobe baselines at approximately 50, 150 and 220
meters, which are well-matched to CHARA baselines W1-W2 (∼ 100 m), E2-W2 (∼ 150 m)
and E2-W1 (∼ 250 m). See Fig. 6 for a graphical representation of the baseline coverage.
The data reduction was performed using the same pipeline as for Y Oph. Squared
visibilities were calibrated using a similar strategy we adopted for Y Oph. We used two sets
of calibrators: one for the shorter baselines, W1-W2 and E2-W2, and one for the longest
baseline, E2-W1 (Tab. 4).
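A minimal sketch of this calibration step, assuming the standard scheme of dividing the raw science visibility by an interferometric efficiency estimated on a calibrator of known uniform-disk diameter (the actual FLUOR pipeline is more involved; function names are ours, and J1 is evaluated from its integral representation using only the standard library):

```python
import math

def v2_uniform_disk(b_m, theta_mas, lam=2.13e-6, n=1000):
    """Expected squared visibility of a uniform disk, |2 J1(x)/x|^2."""
    theta = theta_mas * math.pi / (180.0 * 3600.0 * 1000.0)
    x = math.pi * b_m * theta / lam
    if x == 0.0:
        return 1.0
    # J1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt, midpoint rule
    j1 = sum(math.cos(t - x * math.sin(t))
             for t in (math.pi * (k + 0.5) / n for k in range(n))) / n
    return (2.0 * j1 / x) ** 2

def calibrate(v2_sci_raw, v2_cal_raw, b_m, theta_cal_mas):
    """Divide out the interferometric efficiency measured on a calibrator."""
    efficiency = v2_cal_raw / v2_uniform_disk(b_m, theta_cal_mas)
    return v2_sci_raw / efficiency
```

Using separate calibrator sets per baseline, as done here, keeps the efficiency estimate local to each instrumental configuration.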
3.2. Simple Model
3.2.1. Limb darkened disk
To probe the shape of the measured visibility profile, we first used an analytical model
which includes the stellar diameter and a CLD law. Because of its versatility, we adopt
here a power law (Hestroffer 1997): I(µ)/I(0) = µα, with µ = cos(θ), where θ is the angle
between the normal to the stellar surface and the line of sight. The uniform disk case corresponds
to α = 0.0, whereas an increasing value of α corresponds to a stronger CLD.
All visibility models for this work have been computed taking into account instrumental
bandwidth smearing (Aufdenberg et al. 2006). From this two-parameter fit, we deduce a
stellar diameter of θα = 3.130 ± 0.007 mas and a power law coefficient α = 0.119 ± 0.016.
The fit yields a reduced χ2 = 1.0. There is a strong correlation (0.9) between the diameter and
the CLD coefficient: this is a well-known effect, since a change in CLD induces a change in
apparent diameter.
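A hedged sketch of how such a model can be evaluated numerically: for I(µ) = µ^α the visibility is the normalized Hankel transform of the intensity profile, V(x) = ∫₀¹ I(r) J0(xr) r dr / ∫₀¹ I(r) r dr, with x = πBθ/λ and µ = √(1 − r²); α = 0 recovers the uniform-disk visibility 2J1(x)/x. Stdlib-only Python, with J0 computed from its integral representation (function names are ours):

```python
import math

def _j0(x, n=500):
    """Bessel J0 from its integral representation (midpoint rule)."""
    return sum(math.cos(x * math.sin(math.pi * (k + 0.5) / n))
               for k in range(n)) / n

def visibility_power_law(x, alpha, m=300):
    """Visibility of a disk with I(mu) = mu**alpha, mu = sqrt(1 - r**2).

    x = pi * B * theta / lambda; alpha = 0 recovers the uniform disk."""
    num = den = 0.0
    for k in range(m):
        r = (k + 0.5) / m
        w = (1.0 - r * r) ** (alpha / 2.0) * r  # I(r) * r dr (up to 1/m)
        num += w * _j0(x * r)
        den += w
    return num / den
```

At fixed x in the first lobe, a larger α raises the visibility (the darkened disk looks effectively smaller, hence the diameter-CLD correlation quoted above), while near the second-lobe maximum it lowers |V|, which is why the second lobe pins down the CLD strength.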
This fit is entirely satisfactory, as the reduced χ2 is of the order of unity and the residuals
of the fit do not show any trend (Fig. 7). We note that this is not the case for Polaris (Paper
II). Polaris, with very similar u-v coverage to α Per, does not follow a simple LD disk model
because it is surrounded by a faint CSE.
3.2.2. Hydrostatic models
We computed a small grid consisting of six one-dimensional, spherical,
hydrostatic models using the PHOENIX general-purpose stellar and planetary atmosphere
code version 13.11; for a description see Hauschildt et al. (1999) and Hauschildt & Baron
(1999). The range of effective temperatures and surface gravities is based on a summary of
α Per’s parameters by Evans et al. (1996):
• effective temperatures, Teff = 6150 K, 6270 K, 6390 K;
• log(g) = 1.4, 1.7;
• radius, R = 3.92× 1012 cm;
• depth-independent micro-turbulence, ξ = 4.0 km s−1;
• mixing-length to pressure scale ratio, 1.5;
• solar elemental abundance LTE atomic and molecular line blanketing, typically 10^6
atomic lines and 3 × 10^5 molecular lines dynamically selected at run time;
• non-LTE line blanketing of H I (30 levels, 435 bound-bound transitions), He I (19 levels,
171 bound-bound transitions), and He II (10 levels, 45 bound-bound transitions);
• boundary conditions: outer pressure, 10^{-4} dynes cm^{-2}; extinction optical depth at
1.2 µm: outer 10^{-10}, inner 10^{2}.
For this grid of models the atmospheric structure is computed at 50 radial shells (depths)
and the radiative transfer is computed along 50 rays tangent to these shells and 14 additional
rays which impact the innermost shell, the so-called core-intersecting rays. The intersection
of the rays with the outermost radial shell describes a center-to-limb intensity profile with
64 angular points.
Power law fits to the hydrostatic model visibilities range from α = 0.132 to α = 0.137
(Tab. 6), which correspond to 0.8 to 1.1 sigma above the value we measured. Our measured
CLD is below that predicted by the models, or in other words the predicted darkening
is slightly overestimated.
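The quoted 0.8 to 1.1 sigma spread follows directly from the measured α = 0.119 ± 0.016 and the grid extremes:

```python
measured, err = 0.119, 0.016          # adjusted power-law fit
for model_alpha in (0.132, 0.137):    # PHOENIX grid extremes (Tab. 6)
    print(f"alpha = {model_alpha}: {(model_alpha - measured) / err:.1f} sigma")
```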
3.3. Possible presence of CSE
Firstly, we employ the CSE model used for Cepheids (Paper I, Paper II). This model is
purely geometrical and it consists of a limb darkened disk surrounded by a faint CSE whose
two-dimensional geometric projection (on the sky) is modeled by a ring. The ring parameters
consist of the angular diameter, width and flux ratio with respect to the star (Fs/F⋆). We
adopt the same geometry found for Cepheids: a ring with a diameter equal to 2.6 times the
stellar diameter (Paper II), with no dependence on the width (fixed to a small
fraction of the CSE angular diameter, say 1/5).
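For illustration, the star-plus-CSE visibility can be sketched as a flux-weighted combination of a uniform disk and a ring. In the sketch below (ours, not the paper's code) the ring is taken as infinitesimally thin, whereas the actual model uses a finite width of 1/5 of the CSE diameter, so numerical values are indicative only:

```python
import math

def _bessel(nu, x, n=500):
    """J_nu (integer nu) from the integral representation (midpoint rule)."""
    return sum(math.cos(nu * t - x * math.sin(t))
               for t in (math.pi * (k + 0.5) / n for k in range(n))) / n

def v2_star_plus_cse(b_m, theta_star_mas, flux_ratio,
                     ring_scale=2.6, lam=2.13e-6):
    """Squared visibility of a uniform disk plus a thin ring of relative flux."""
    theta = theta_star_mas * math.pi / (180.0 * 3600.0 * 1000.0)
    x = math.pi * b_m * theta / lam
    v_star = 2.0 * _bessel(1, x) / x if x else 1.0
    v_ring = _bessel(0, ring_scale * x)   # infinitesimally thin ring
    v = (v_star + flux_ratio * v_ring) / (1.0 + flux_ratio)
    return v * v
```

Because the ring is resolved at baselines where the star is not, even a percent-level flux ratio visibly lowers V² at short baselines, which is the CSE signature searched for here.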
This model restriction is justified because testing the agreement between a generic CSE
model and interferometric data requires a complete angular resolution coverage, from very
short to very large baselines. Though our α Per data set is diverse regarding the baseline
coverage, we lack a very short baseline (B ∼ 50 m), which was not available at that time at
the CHARA Array.
The simplest fit consists of adjusting only the geometrical scale, with the angular size ratio
between the CSE and the star held fixed. This yields a reduced χ2 of 3, compared to 1.1 for a
hydrostatic LD and 1.0 for an adjusted CLD law (Tab. 7).
We can force the data to fit a CSE model by relaxing the CLD of the star or the CSE
flux ratio. A fit of the size of the star and the brightness of the CSE leads to a reduced χ2 of
1.4 and results in a very small flux ratio between the CSE and the star of 0.0006± 0.0026,
which provides an upper limit for the CSE flux of 0.26% (compared to 1.5% for Polaris and
δ Cep for example, and 5% for Y Oph). On the other hand, forcing the flux ratio to match
that for Cepheids with CSEs and leaving the CLD free leads to a reduced χ2 of 1.4 but also
to a highly unrealistic α = 0.066± 0.004 (Tab. 7 and Fig. 7).
This leads to the conclusion that the presence of a CSE around α Per similar to the
measured Cepheid CSE is highly improbable. As shown above, the measured CLD is slightly
weaker (1 sigma) than the one predicted by model atmospheres. A compatible CSE model exists
only if the CLD is actually even weaker, in unrealistic proportions.
3.4. Conclusion
The observed α Per visibilities are not compatible with the presence of a CSE similar to
that detected around Cepheids. The data are well explained by an adjusted center-to-limb
darkening. The strength of this CLD is compatible with the hydrostatic model within one
sigma.
4. Discussion
Using the interferometric Baade-Wesselink method, we determined the distance to the
low amplitude Cepheid Y Oph to be d = 491 ± 18 pc. This distance has been corrected for the bias
introduced by the circumstellar envelope. This bias was found to be of the order of 4% for
our particular angular resolution and the amount of CSE we measured. Y Oph’s average
linear diameter has also been estimated to be R = 67.8 ± 2.5 R⊙. This latter quantity
is intrinsically unbiased by the center-to-limb darkening or the presence of a circumstellar
envelope. This value is found to be substantially lower, by almost 4 sigma, than the Period-
Radius relation prediction: R = 100 ± 8 R⊙. Among other peculiarities, we found a very
large phase shift between radial velocity measurements and interferometric measurements:
∆φ = −0.074 ± 0.005, which corresponds to more than one day. For these reasons, we
think Y Oph should be excluded from future calibrations of popular relations for classical
Cepheids, such as the Period-Radius and Period-Luminosity relations.
So far, the four Cepheids we looked at have a CSE: ℓ Car (Paper I), Polaris, δ Cep
(Paper II) and Y Oph (this work). On the other hand, we have presented here similar
observations of α Per, a non-pulsating supergiant inside the instability strip, which provides
no evidence for a circumstellar envelope. This non detection reinforces the confidence we
have in our CSE detection technique and draws more attention to a pulsation driven mass
loss mechanism.
Interestingly, there seems to be a correlation between the period and the CSE flux ratio
in the K-band: short-period Cepheids seem to have a fainter CSE compared to longer-period
Cepheids (Tab. 8). Cepheids with long periods have higher masses and larger radii, thus, if
we assume that the CSE K-band brightness is an indicator of the mass loss rate, this would
mean that heavier stars experience higher mass loss rates. This is of prime importance in the
context of calibrating relations between Cepheids’ fundamental parameters with respect to
their pulsation periods. If CSEs have some effects on the observational estimation of these
fundamental parameters (luminosity, mass, radius, etc.), a correlation between the period
and the relative flux of the CSE can lead to a biased calibration.
Research conducted at the CHARA Array is supported at Georgia State University
by the offices of the Dean of the College of Arts and Sciences and the Vice President for
Research. Additional support for this work has been provided by the National Science
Foundation through grants AST 03-07562 and 06-06958. We also wish to acknowledge the
support received from the W.M. Keck Foundation. This research has made use of SIMBAD
database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France.
REFERENCES
Aufdenberg, J. P., Mérand, A., Foresto, V. C. d., Absil, O., Di Folco, E., Kervella, P.,
Ridgway, S. T., Berger, D. H., Brummelaar, T. A. t., McAlister, H. A., Sturmann,
J., Sturmann, L., & Turner, N. H. 2006, ApJ, 645, 664
Barnes, III, T. G., Storm, J., Jefferys, W. H., Gieren, W. P., & Fouqué, P. 2005, ApJ, 631,
Butler, R. P. 1998, ApJ, 494, 342
Coude Du Foresto, V., Ridgway, S., & Mariotti, J.-M. 1997, A&AS, 121, 379
Evans, N. R., Teays, T. J., Taylor, L. L., Lester, J. B., & Hindsley, R. B. 1996, AJ, 111,
Fernie, J. D., Evans, N. R., Beattie, B., & Seager, S. 1995a, Informational Bulletin on
Variable Stars, 4148, 1
Fernie, J. D., Khoshnevissan, M. H., & Seager, S. 1995b, AJ, 110, 1326
Gorynya, N. A., Samus’, N. N., Sachkov, M. E., Rastorguev, A. S., Glushkova, E. V., &
Antipin, S. V. 1998, Astronomy Letters, 24, 815
Hatzes, A. P., & Cochran, W. D. 2000, AJ, 120, 979
Hauschildt, P. H., Allard, F., Ferguson, J., Baron, E., & Alexander, D. R. 1999, ApJ, 525,
Hauschildt, P. H., & Baron, E. 1999, J. Comp. and App. Math., 109, 41
Heiles, C. 2000, AJ, 119, 923
Hestroffer, D. 1997, A&A, 327, 199
Kervella, P., Mérand, A., Perrin, G., & Coudé Du Foresto, V. 2006, A&A, 448, 623
Kervella, P., Nardetto, N., Bersier, D., Mourard, D., & Coudé du Foresto, V. 2004a, A&A,
416, 941
Kervella, P., Thévenin, F., Di Folco, E., & Ségransan, D. 2004b, A&A, 426, 297
Kholopov, P. N., Samus, N. N., Frolov, M. S., Goranskij, V. P., Gorynya, N. A., Karit-
skaya, E. A., Kazarovets, E. V., Kireeva, N. N., Kukarkina, N. P., Kurochkin, N. E.,
Medvedeva, G. I., Pastukhova, E. N., Perova, N. B., Rastorguev, A. S., & Shugarov,
S. Y. 1998, in Combined General Catalogue of Variable Stars, 4.1 Ed (II/214A).
(1998), 0–+
Lane, B. F., Kuchner, M. J., Boden, A. F., Creech-Eakman, M., & Kulkarni, S. R. 2000,
Nature, 407, 485
Marengo, M., Karovska, M., Sasselov, D. D., Papaliolios, C., Armstrong, J. T., & Nordgren,
T. E. 2003, ApJ, 589, 968
Marengo, M., Karovska, M., Sasselov, D. D., & Sanchez, M. 2004, ApJ, 603, 285
Mérand, A., Bordé, P., & Coudé Du Foresto, V. 2005a, A&A, 433, 1155
Mérand, A., Coudé du Foresto, V., Kellerer, A., ten Brummelaar, T., Reess, J.-M., & Ziegler,
D. 2006a, in Advances in Stellar Interferometry. Edited by Monnier, John D.; Schöller,
Markus; Danchi, William C.. Proceedings of the SPIE, Volume 6268, pp. (2006).
Mérand, A., Kervella, P., Coudé Du Foresto, V., Perrin, G., Ridgway, S. T., Aufdenberg,
J. P., Ten Brummelaar, T. A., McAlister, H. A., Sturmann, L., Sturmann, J., Turner,
N. H., & Berger, D. H. 2006b, A&A, 453, 155
Mérand, A., Kervella, P., Coudé Du Foresto, V., Ridgway, S. T., Aufdenberg, J. P., Ten
Brummelaar, T. A., Berger, D. H., Sturmann, J., Sturmann, L., Turner, N. H., &
McAlister, H. A. 2005b, A&A, 438, L9
Moskalik, & Gorynya. 2005, Acta Astronomica, 55, 247
Nardetto, N., Fokin, A., Mourard, D., Mathias, P., Kervella, P., & Bersier, D. 2004, A&A,
428, 131
Nordgren, T. E., Sudol, J. J., & Mozurkewich, D. 2001, AJ, 122, 2707
ten Brummelaar, T. A., McAlister, H. A., Ridgway, S. T., Bagnuolo, Jr., W. G., Turner,
N. H., Sturmann, L., Sturmann, J., Berger, D. H., Ogden, C. E., Cadman, R.,
Hartkopf, W. I., Hopper, C. H., & Shure, M. A. 2005, ApJ, 628, 453
Turner, D. G., Abdel-Sabour Abdel-Latif, M., & Berdnikov, L. N. 2006, PASP, 118, 410
Vinko, J., Remage Evans, N., Kiss, L. L., & Szabados, L. 1998, MNRAS, 296, 824
This preprint was prepared with the AAS LATEX macros v5.2.
Fig. 1.— Evidence that calibrator HR 6639 is probably a binary star. Calibrated squared
visibility as a function of baseline for UT 2006/07/10. Left is Y Oph and right is HR 6639.
The interferometric efficiency was assumed to be constant during the night and established using
HR 7809 and ρ Aql with a reduced χ2 of 1.3. A UD fit, shown as a continuous line, works for
Y Oph (χ2r = 0.6) whereas it fails for HR 6639 (χ2r = 7.2). This could be the sign that HR 6639
has a faint companion: the visibility variation has a 3-5% amplitude, which corresponds to
a magnitude difference of 3 to 4 between the main star and its companion.
Fig. 2.— Y Oph: Radial velocities from Gorynya et al. (1998). The continuous line is the
periodic spline function defined by 3 adjustable floating nodes (large filled circles). The
systematic velocity, Vγ = −7.9 ± 0.1 km s−1, has been evaluated using the interpolation
function and removed. The lower panel displays the residuals of the fit.
Fig. 3.— Y Oph: angular diameter variations. Upper panel: CHARA/FLUOR uniform disk
angular diameters as a function of phase. Each data point corresponds to a given night,
which contains several individual squared visibility measurements (Tab. 3). The solid line
is the integration of the radial velocity (Fig. 2) with distance, average angular diameter and
phase shift adjusted. The dashed line has a phase shift set to zero. The lower panel shows
the residuals to the continuous line.
Fig. 4.— Y Oph: Comparison between CHARA/FLUOR and VLTI/VINCI observations.
Uniform disk angular diameter as a function of phase. Small data points (with small error
bars) and lower continuous line: CHARA/FLUOR observations and distance fit. Large open
squares: VLTI/VINCI observations. The distance fit and its uncertainty are represented by
the shaded band. Dashed and dotted lines: VLTI/VINCI expected biased observations based
on CHARA/FLUOR and our CSE model with a flux ratio of 2% (dashed) and 5% (dotted).
Fig. 5.— Y Oph: angular diameter correction factor. We plot here θUD/θ⋆ for three different
Cepheid models as a function of observed squared visibility (first lobe): the dashed line is
a simple limb darkened disk with the appropriate CLD strength; the continuous line is a
similar LD disk surrounded by a CSE with a 2% K-band flux (short period: Polaris and
δ Cep, see Paper II) and 5% (long period: ℓ Car, see Paper I). Note that, in the presence of
CSE, the bias is stronger at large visibilities (hence smaller angular resolution). The shaded
regions represent near-infrared Y Oph observations: from CHARA (this work) and the
VLTI (Kervella et al. 2004a).
Fig. 6.— α Per. Projected baselines, in meters. North is up, East is right. The shaded area
corresponds to the squared visibility’s second lobe. The baselines are W1-W2, E2-W2 and
E2-W1, from the shortest to the longest.
Fig. 7.— α Per squared visibility models: UD, adjusted CLD, PHOENIX and 2 different
CSE models are plotted here as the residuals to the PHOENIX CLD with respect to baseline.
The common point at B ≈ 175 m is the first minimum of the visibility function. The top
of the second lobe is reached for B ≈ 230 m. See Tab. 7 for parameters and reduced χ2 of
each model.
Fig. 8.— Measured relative K-band CSE fluxes (in percent) around Cepheids, as a function
of the pulsation period (in days). The non-pulsating yellow supergiant α Per is plotted with
Table 1. Y Oph calibrators.
Star S.T. UD Diam. Notes
(mas)
HD 153033 K5III 1.100± 0.015
HD 175583 K2III 1.021± 0.014
HR 6639 K0III 0.904± 0.012 rejected
HR 7809 K1III 1.055± 0.015
ρ Aql A2V 0.370± 0.005 not in M05
Note. — “S.T.” stands for spectral type. Uniform Disk diameters, given in mas, are only
intended for computing the expected squared visibility in the K-band. All stars but ρ Aql
are from M05 catalog (Mérand et al. 2005a). Refer to the text for an explanation of why HR 6639
has been rejected.
Table 2. Journal of observations: Y Oph.
MJD-53900 B P.A. V 2
(m) (deg)
18.260 229.808 63.540 0.2348± 0.0061
18.296 215.049 71.242 0.2823± 0.0065
18.321 207.247 77.913 0.3030± 0.0075
21.245 232.792 62.353 0.2511± 0.0065
21.283 216.576 70.243 0.2954± 0.0085
21.309 208.313 76.765 0.3194± 0.0085
22.212 245.894 58.023 0.2092± 0.0071
22.234 236.338 61.050 0.2352± 0.0078
22.251 228.941 63.902 0.2615± 0.0085
24.188 253.988 55.914 0.2108± 0.0062
24.212 243.641 58.680 0.2427± 0.0060
24.231 235.388 61.389 0.2634± 0.0065
26.168 259.377 54.714 0.2185± 0.0041
26.194 249.210 57.113 0.2526± 0.0039
26.224 235.772 61.251 0.2983± 0.0034
26.242 227.859 64.367 0.3171± 0.0035
27.194 247.788 57.495 0.2650± 0.0040
27.211 240.340 59.704 0.2854± 0.0038
27.228 232.806 62.348 0.3177± 0.0045
27.250 223.415 66.429 0.3385± 0.0045
28.241 225.978 65.207 0.3332± 0.0045
28.257 219.413 68.547 0.3629± 0.0061
29.219 234.329 61.775 0.2857± 0.0049
29.237 226.405 65.012 0.3148± 0.0053
29.253 219.709 68.380 0.3320± 0.0054
30.192 245.267 58.203 0.2510± 0.0035
30.216 234.738 61.624 0.2893± 0.0041
30.235 226.418 65.007 0.3112± 0.0044
30.244 222.544 66.866 0.3226± 0.0046
30.261 215.937 70.653 0.3462± 0.0048
Note. — Date of observation (modified julian day), telescope projected separation (m),
baseline projection angle (degrees), and squared visibility.
Table 3. Y Oph angular diameters.
MJD-53900 φ Nobs. B θUD χ2r
(m) (mas)
18.292 0.248 3 207-230 1.3912± 0.0067 0.55
21.279 0.423 3 208-233 1.3516± 0.0074 1.06
22.232 0.478 3 229-246 1.3387± 0.0077 0.05
24.210 0.594 3 235-254 1.2929± 0.0059 0.19
26.207 0.710 4 228-259 1.2488± 0.0029 0.61
27.221 0.770 4 223-248 1.2374± 0.0032 0.76
28.249 0.830 2 219-226 1.2394± 0.0055 0.30
29.237 0.887 3 220-234 1.2728± 0.0047 0.26
30.229 0.945 5 216-245 1.2715± 0.0030 0.56
Note. — Data points have been reduced to one uniform disk diameter per night. Average
date of observation (modified julian day), pulsation phase, number of calibrated V 2,
projected baseline range, uniform disk angular diameter and reduced χ2.
Table 4. α Per calibrators.
Name S.T. UD Diam. Baselines
(mas)
HD 18970 G9.5III 1.551± 0.021 W1-W2, E2-W2
HD 20762 K0II-III 0.881± 0.012 E2-W1
HD 22427 K2III-IV 0.913± 0.013 E2-W1
HD 31579 K3III 1.517± 0.021 W1-W2, E2-W2
HD 214995 K0III 0.947± 0.013 W1-W2
Note. — “S.T.” stands for spectral type. Uniform Disk diameters, given in mas, are only
intended for computing the expected squared visibility in the K-band. These stars come
from our catalog of interferometric calibrators (Mérand et al. 2005a).
Table 5. Journal of observations: α Per.
MJD-54000 B P.A. V 2
(m) (deg)
46.281 147.82 79.37 0.02350± 0.00075
46.321 153.87 69.04 0.01368± 0.00067
46.347 155.79 62.17 0.01093± 0.00060
46.372 156.27 55.30 0.01134± 0.00039
46.398 155.65 48.10 0.01197± 0.00051
46.421 154.41 41.32 0.01367± 0.00052
46.442 152.94 34.84 0.01581± 0.00045
46.466 151.13 27.03 0.01824± 0.00064
46.488 149.58 19.40 0.02167± 0.00071
46.510 148.49 12.06 0.02250± 0.00058
46.539 147.77 1.53 0.02370± 0.00070
47.225 96.66 -44.51 0.27974± 0.00282
47.233 97.75 -47.34 0.27321± 0.00274
47.252 100.37 -53.97 0.24858± 0.00258
47.274 103.03 -60.91 0.23529± 0.00242
47.327 249.17 81.65 0.01363± 0.00032
47.352 251.22 74.84 0.01340± 0.00032
47.374 251.04 68.91 0.01330± 0.00035
47.380 250.67 67.10 0.01316± 0.00035
Note. — Date of observation (modified julian day), telescope projected separation, baseline
projection angle, and squared visibility.
Table 6. α Per PHOENIX models.
Teff log g = 1.4 log g = 1.7
6150 0.137 0.136
6270 0.135 0.134
6390 0.133 0.132
Note. — Models tabulated for different effective temperatures and surface gravities. The
K-band CLD is condensed into a power law coefficient α: I(µ) ∝ µα.
Table 7. Models for α Per.
Model θ⋆ α Fs/F⋆ χ2r
(mas) (%)
UD 3.080±0.004 0.000 - 5.9
PHOENIX CLD 3.137±0.004 0.135 - 1.1
adjusted CLD 3.130±0.007 0.119±0.016 - 1.0
CSE “Polaris” 3.086±0.007 0.135 1.5 3.0
CSE (1) 3.048±0.007 0.066±0.004 1.5 1.4
CSE (2) 3.095±0.010 0.135 0.06±0.26 1.4
Note. — The parameters are: θ⋆ the stellar angular diameter, the CLD power law coef-
ficient α and, if relevant, the brightness ratio between the CSE and the star Fs/F⋆. The
first line is the uniform disk diameter, the second one expected CLD from the PHOENIX
model, the third one is the adjusted CLD. The fourth line is a scaled Polaris CSE model
(Paper II). The last two lines are attempts to force a CSE model to the data. Quoted
uncertainties correspond to fitted parameters; the absence of an uncertainty means that the
parameter is fixed.
Table 8. Relative flux in the K-band for the CSEs discovered around Cepheids and the non-
pulsating α Per.
Star Period CSE flux
(d) (%)
α UMi 3.97 1.5± 0.4
δ Cep 5.37 1.5± 0.4
Y Oph 17.13 5.0± 2.0
ℓ Car 35.55 4.2± 0.2
α Per - < 0.26
0704.1826 | Squark and Gaugino Hadroproduction and Decays in Non-Minimal Flavour
Violating Supersymmetry | Squark and Gaugino Hadroproduction and Decays in Non-Minimal
Flavour Violating Supersymmetry
Giuseppe Bozzi
Institut für Theoretische Physik, Universität Karlsruhe, Postfach 6980, D-76128 Karlsruhe, Germany
Benjamin Fuks, Björn Herrmann, and Michael Klasen∗
Laboratoire de Physique Subatomique et de Cosmologie,
Université Joseph Fourier/CNRS-IN2P3, 53 Avenue des Martyrs, F-38026 Grenoble, France
(Dated: October 29, 2018)
We present an extensive analysis of squark and gaugino hadroproduction and decays in non-
minimal flavour violating supersymmetry. We employ the so-called super-CKM basis to define the
possible misalignment of quark and squark rotations, and we use generalized (possibly complex)
charges to define the mutual couplings of (s)quarks and gauge bosons/gauginos. The cross sections
for all squark-(anti-)squark/gaugino pair and squark-gaugino associated production processes as
well as their decay widths are then given in compact analytic form. For four different constrained
supersymmetry breaking models with non-minimal flavour violation in the second/third generation
squark sector only, we establish the parameter space regions allowed/favoured by low-energy, elec-
troweak precision, and cosmological constraints and display the chirality and flavour decomposition
of all up- and down-type squark mass eigenstates. Finally, we compute numerically the dependence
of a representative sample of production cross sections at the LHC on the off-diagonal mass matrix
elements in the experimentally allowed/favoured ranges.
I. INTRODUCTION
KA-TP-07-2007
LPSC 07-023
SFB/CPP-07-09
Weak scale supersymmetry (SUSY) remains both a theoretically and phenomenologically attractive extension of
the Standard Model (SM) of particle physics [1, 2]. Apart from linking bosons with fermions and unifying internal and
external (space-time) symmetries, SUSY allows for a stabilization of the gap between the Planck and the electroweak
scale and for gauge coupling unification at high energies. It appears naturally in string theories, includes gravity, and
contains a stable lightest SUSY particle (LSP) as a dark matter candidate. Spin partners of the SM particles have not
yet been observed, and in order to remain a viable solution to the hierarchy problem, SUSY must be broken at low
energy via soft mass terms in the Lagrangian. As a consequence, the SUSY particles must be massive in comparison
to their SM counterparts, and the Tevatron and the LHC will perform a conclusive search covering a wide range of
masses up to the TeV scale.
If SUSY particles exist, they should also appear in virtual particle loops and affect low-energy and electroweak
precision observables. In particular, flavour-changing neutral currents (FCNC), which appear only at the one-loop
level even in the SM, put severe constraints on new physics contributions appearing at the same perturbative order.
Extended technicolour and many composite models have thus been ruled out, while the Minimal Supersymmetric
Standard Model (MSSM) has passed these crucial tests. This is largely due to the assumption of constrained Minimal
Flavour Violation (cMFV) [3, 4] or Minimal Flavour Violation (MFV) [5, 6, 7], where heavy SUSY particles may
appear in the loops, but flavour changes are either neglected or completely dictated by the structure of the Yukawa
couplings and thus the CKM-matrix [8, 9].
The squark mass matrices M²_ũ and M²_d̃ are usually expressed in the super-CKM flavour basis [10]. In MFV
SUSY scenarios, their flavour violating non-diagonal entries ∆ij , where i, j = L,R refer to the helicity of the (SM
partner of the) squark, stem from the trilinear Yukawa couplings of the fermion and Higgs supermultiplets and the
resulting different renormalizations of the quark and squark mass matrices, which induce additional flavour violation
at the weak scale through renormalization group running [11, 12, 13, 14], while in cMFV scenarios, these flavour
violating off-diagonal entries are simply neglected at both the SUSY-breaking and the weak scale.
When SUSY is embedded in larger structures such as grand unified theories (GUTs), new sources of flavour violation
can appear [15]. For example, local gauge symmetry allows for R-parity violating terms in the SUSY Lagrangian, but
these terms are today severely constrained by proton decay and collider searches. In non-minimal flavour violating
(NMFV) SUSY, additional sources of flavour violation are included in the mass matrices at the weak scale, and
∗[email protected]
their flavour violating off-diagonal terms cannot be simply deduced from the CKM matrix alone. NMFV is then
conveniently parameterized in the super-CKM basis by considering them as free parameters. The scaling of these
entries with the SUSY-breaking scale MSUSY implies a hierarchy ∆LL ≫ ∆LR,RL ≫ ∆RR [15].
Squark mixing is expected to be largest for the second and third generations due to the large Yukawa couplings
involved [16]. In addition, stringent experimental constraints for the first generation are imposed by precise mea-
surements of K0 − K̄0 mixing and first evidence of D0 − D̄0 mixing [17, 18, 19]. Furthermore, direct searches of
flavour violation depend on the possibility of flavour tagging, established experimentally only for heavy flavours. We
therefore consider here only flavour mixings of second- and third-generation squarks.
The direct search for SUSY particles constitutes a major physics goal of present (Tevatron) and future (LHC)
hadron colliders. SUSY particle hadroproduction and decay has therefore been studied in detail theoretically. Next-
to-leading order (NLO) SUSY-QCD calculations exist for the production of squarks and gluinos [20], sleptons [21],
and gauginos [22] as well as for their associated production [23]. The production of top [24] and bottom [25] squarks
with large helicity mixing has received particular attention. Recently, both QCD one-loop and electroweak tree-level
contributions have been calculated for non-diagonal, diagonal and mixed top and bottom squark pair production [26].
However, flavour violation has never been considered in the context of collider searches apart from the CKM-matrix
appearing in the electroweak stop-sbottom production channel [26].
It is the aim of this paper to investigate for the first time the possible effects of non-minimal flavour violation
at hadron colliders. To this end, we re-calculate all squark and gaugino production and decay helicity amplitudes,
keeping at the same time the CKM-matrix and the quark masses to account for non-diagonal charged-current gaugino
and Higgsino Yukawa interactions, and generalizing the two-dimensional helicity mixing matrices, often assumed to be
real, to generally complex six-dimensional helicity and generational mixing matrices. We keep the notation compact
by presenting all analytical expressions in terms of generalized couplings. In order to obtain numerical predictions
for hadron colliders, we have implemented all our results in a flexible computer program. In our phenomenological
analysis of NMFV squark and gaugino production, we concentrate on the LHC due to its larger centre-of-mass energy
and luminosity. We pay particular attention to the interesting interplay of parton density functions (PDFs), which
are dominated by light quarks, strong gluino contributions, which are generally larger than electroweak contributions
and need not be flavour-diagonal, and the appearance of third-generation squarks in the final state, which are easily
identified experimentally and generally lighter than first- and second-generation squarks.
After reviewing the MSSM with NMFV and setting up our notation in Sec. II, we define in Sec. III generalized
couplings of quarks, squarks, gauge bosons, and gauginos. We then use these couplings to present our analytical
calculations in concise form. In particular, we have computed partonic cross sections for NMFV squark-antisquark
and squark-squark pair production, squark and gaugino associated and gaugino pair production as well as NMFV
two-body decay widths of all squarks and gauginos. Section IV is devoted to a precise numerical analysis of the
experimentally allowed NMFV SUSY parameter space, an investigation of the corresponding helicity and flavour
decomposition of the up- and down-type squarks, and the definition of four collider-friendly benchmark points. These
points are then investigated in detail in Sec. V so as to determine the possible sensitivity of the LHC experiments on
the allowed NMFV parameter regions in the above-mentioned production channels. Our conclusions are presented in
Sec. VI.
II. NON-MINIMAL FLAVOUR VIOLATION IN THE MSSM
Within the SM, the only source of flavour violation arises through the rotation of the up-type (down-type) quark
interaction eigenstates u′_{L,R} (d′_{L,R}) to the basis of physical mass eigenstates u_{L,R} (d_{L,R}), such that

d_{L,R} = V^d_{L,R} d′_{L,R} and u_{L,R} = V^u_{L,R} u′_{L,R}. (1)
The four bi-unitary matrices V^{u,d}_{L,R} diagonalize the quark Yukawa matrices and render the charged-current interactions
proportional to the unitary CKM-matrix [8, 9]

V = V^u_L V^{d†}_L =
( V_ud V_us V_ub
  V_cd V_cs V_cb
  V_td V_ts V_tb ). (2)
In the super-CKM basis [10], the squark interaction eigenstates undergo the same rotations at high energy scale
as their quark counterparts, so that their charged-current interactions are also proportional to the SM CKM-matrix.
However, different renormalizations of quarks and squarks introduce a mismatch of quark and squark field rotations
at low energies, so that the squark mass matrices effectively become non-diagonal [11, 12, 13, 14]. NMFV is then
conveniently parameterized by non-diagonal entries ∆^{qq′}_{ij} with i, j = L,R in the squared squark mass matrices

M²_ũ =
( M²_{L̃u}     ∆^{uc}_{LL}   ∆^{ut}_{LL}   m_u X_u      ∆^{uc}_{LR}   ∆^{ut}_{LR}
  ∆^{uc∗}_{LL} M²_{L̃c}      ∆^{ct}_{LL}   ∆^{cu}_{RL}  m_c X_c       ∆^{ct}_{LR}
  ∆^{ut∗}_{LL} ∆^{ct∗}_{LL}  M²_{L̃t}      ∆^{tu}_{RL}  ∆^{tc}_{RL}   m_t X_t
  m_u X_u      ∆^{cu∗}_{RL}  ∆^{tu∗}_{RL}  M²_{R̃u}     ∆^{uc}_{RR}   ∆^{ut}_{RR}
  ∆^{uc∗}_{LR} m_c X_c       ∆^{tc∗}_{RL}  ∆^{uc∗}_{RR} M²_{R̃c}      ∆^{ct}_{RR}
  ∆^{ut∗}_{LR} ∆^{ct∗}_{LR}  m_t X_t       ∆^{ut∗}_{RR} ∆^{ct∗}_{RR}  M²_{R̃t} ), (3)

M²_d̃ =
( M²_{L̃d}     ∆^{ds}_{LL}   ∆^{db}_{LL}   m_d X_d      ∆^{ds}_{LR}   ∆^{db}_{LR}
  ∆^{ds∗}_{LL} M²_{L̃s}      ∆^{sb}_{LL}   ∆^{sd}_{RL}  m_s X_s       ∆^{sb}_{LR}
  ∆^{db∗}_{LL} ∆^{sb∗}_{LL}  M²_{L̃b}      ∆^{bd}_{RL}  ∆^{bs}_{RL}   m_b X_b
  m_d X_d      ∆^{sd∗}_{RL}  ∆^{bd∗}_{RL}  M²_{R̃d}     ∆^{ds}_{RR}   ∆^{db}_{RR}
  ∆^{ds∗}_{LR} m_s X_s       ∆^{bs∗}_{RL}  ∆^{ds∗}_{RR} M²_{R̃s}      ∆^{sb}_{RR}
  ∆^{db∗}_{LR} ∆^{sb∗}_{LR}  m_b X_b       ∆^{db∗}_{RR} ∆^{sb∗}_{RR}  M²_{R̃b} ), (4)
where the diagonal elements are given by
M²_{L̃q} = M²_{Q̃q} + m²_q + cos 2β m²_Z (T³_q − e_q s²_W), (5)
M²_{R̃q} = M²_{Ũq} + m²_q + cos 2β m²_Z e_q s²_W for up-type squarks, (6)
M²_{R̃q} = M²_{D̃q} + m²_q + cos 2β m²_Z e_q s²_W for down-type squarks, (7)

while the well-known squark helicity mixing is generated by the elements

X_q = A_q − µ cot β for up-type squarks, X_q = A_q − µ tan β for down-type squarks. (8)
Here, m_q, T³_q, and e_q denote the mass, weak isospin quantum number, and electric charge of the quark q. m_Z is
the Z-boson mass, and sW (cW ) is the sine (cosine) of the electroweak mixing angle θW . The soft SUSY-breaking
mass terms are MQ̃q and MŨq,D̃q for the left- and right-handed squarks. Aq and µ are the trilinear coupling and
off-diagonal Higgs mass parameter, respectively, and tanβ = vu/vd is the ratio of vacuum expectation values of the
two Higgs doublets. The scaling of the flavour violating entries $\Delta^{qq'}_{ij}$ with the SUSY-breaking scale $M_{\rm SUSY}$ implies a hierarchy $\Delta_{LL} \gg \Delta_{LR,RL} \gg \Delta_{RR}$ among them [15]. They are usually normalized to the diagonal entries [18], so that
$$ \Delta^{qq'}_{ij} = \lambda^{qq'}_{ij}\, M_{\tilde i_q} M_{\tilde j_{q'}}. \tag{9} $$
Note also that SU(2) gauge invariance relates the (numerically largest) $\Delta^{qq'}_{LL}$ of up- and down-type squarks through
the CKM-matrix, implying that a large difference between them is not allowed.
The diagonalization of the mass matrices $M^2_{\tilde u}$ and $M^2_{\tilde d}$ requires the introduction of two additional $6\times 6$ matrices $R^u$ and $R^d$ with
$$ \mathrm{diag}\big(m^2_{\tilde u_1},\dots,m^2_{\tilde u_6}\big) = R^u\, M^2_{\tilde u}\, R^{u\dagger} \quad\text{and}\quad \mathrm{diag}\big(m^2_{\tilde d_1},\dots,m^2_{\tilde d_6}\big) = R^d\, M^2_{\tilde d}\, R^{d\dagger}. \tag{10} $$
By convention, the masses are ordered according to $m_{\tilde q_1} < \ldots < m_{\tilde q_6}$. The physical mass eigenstates are given by
$$ \big(\tilde u_1,\dots,\tilde u_6\big)^T = R^u \big(\tilde u_L, \tilde c_L, \tilde t_L, \tilde u_R, \tilde c_R, \tilde t_R\big)^T \quad\text{and}\quad \big(\tilde d_1,\dots,\tilde d_6\big)^T = R^d \big(\tilde d_L, \tilde s_L, \tilde b_L, \tilde d_R, \tilde s_R, \tilde b_R\big)^T. \tag{11} $$
In the limit of vanishing off-diagonal parameters $\Delta^{qq'}_{ij}$, the matrices $R^q$ become flavour-diagonal, leaving only the well-known helicity mixing already present in cMFV.
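To make the diagonalization procedure of Eqs. (9)-(11) concrete, the following sketch (all mass values are hypothetical, in GeV) builds a real 6×6 up-squark mass matrix with a single non-zero $\lambda^{ct}_{LL}$ entry and diagonalizes it numerically; `numpy.linalg.eigh` conveniently returns eigenvalues in the ascending order required by the mass-ordering convention below Eq. (10).

```python
import numpy as np

# Toy inputs (hypothetical): flavour-diagonal soft masses squared and one
# non-zero lambda^{ct}_{LL} entry, following Eqs. (9)-(11).
M_LL = np.array([500.0**2, 500.0**2, 450.0**2])   # M^2_{L_q} for u, c, t
M_RR = np.array([480.0**2, 480.0**2, 420.0**2])   # M^2_{R_q}
mqXq = np.array([0.0, 0.0, 170.0 * 300.0])        # helicity mixing m_q X_q
lam_ctLL = 0.1                                    # lambda^{ct}_{LL}

M2 = np.zeros((6, 6))
M2[:3, :3] = np.diag(M_LL)
M2[3:, 3:] = np.diag(M_RR)
M2[:3, 3:] = np.diag(mqXq)        # LR block
M2[3:, :3] = np.diag(mqXq)        # RL block
# Eq. (9): Delta^{ct}_{LL} = lambda^{ct}_{LL} M_{L_c} M_{L_t}
M2[1, 2] = M2[2, 1] = lam_ctLL * np.sqrt(M_LL[1] * M_LL[2])

# Eq. (10): diag(m^2_1..6) = R M^2 R^dagger, masses ordered ascending
eigval, eigvec = np.linalg.eigh(M2)
R = eigvec.T                      # rows of R are the mass eigenstates
masses = np.sqrt(eigval)
print(np.round(masses, 1))
```

Off-diagonal λ entries shift the eigenvalues apart and mix the flavour content of the rows of $R$, which is exactly the effect exploited in the production channels below.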
III. ANALYTICAL RESULTS FOR PRODUCTION CROSS SECTIONS AND DECAY WIDTHS
In this section, we introduce concise definitions of generalized strong and electroweak couplings in NMFV SUSY
and compute analytically the corresponding partonic cross sections for squark and gaugino production as well as their
decay widths. The cross sections of the production processes
$$ a_{h_a}(p_a)\; b_{h_b}(p_b) \;\to\; \tilde q_i(p_1)\,\tilde q^{(*)}_j(p_2), \quad \tilde\chi^\pm_j(p_1)\,\tilde q_i(p_2), \quad \tilde\chi_i(p_1)\,\tilde\chi_j(p_2) \tag{12} $$
are presented for definite helicities ha,b of the initial partons a, b = q, q̄, g and expressed in terms of the squark,
chargino, neutralino, and gluino masses $m_{\tilde q_k}$, $m_{\tilde\chi^\pm_j}$, $m_{\tilde\chi^0_j}$, and $m_{\tilde g}$, the conventional Mandelstam variables,
$$ s = (p_a + p_b)^2, \quad t = (p_a - p_1)^2, \quad\text{and}\quad u = (p_a - p_2)^2, \tag{13} $$
and the masses of the neutral and charged electroweak gauge bosons $m_Z$ and $m_W$. Propagators appear as mass-subtracted Mandelstam variables,
$$ s_w = s - m_W^2, \quad s_z = s - m_Z^2, \quad t_{\tilde\chi^0_j} = t - m^2_{\tilde\chi^0_j}, \quad u_{\tilde\chi^0_j} = u - m^2_{\tilde\chi^0_j}, $$
$$ t_{\tilde\chi_j} = t - m^2_{\tilde\chi^\pm_j}, \quad u_{\tilde\chi_j} = u - m^2_{\tilde\chi^\pm_j}, \quad t_{\tilde g} = t - m^2_{\tilde g}, \quad u_{\tilde g} = u - m^2_{\tilde g}, \quad t_{\tilde q_i} = t - m^2_{\tilde q_i}, \quad u_{\tilde q_i} = u - m^2_{\tilde q_i}. \tag{14} $$
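A minimal numerical check of the Mandelstam definitions (13) and of the sum rule $s+t+u = \sum_i m_i^2$, for hypothetical 2→2 kinematics with massless initial partons and equal final-state masses:

```python
import numpy as np

def mdot(p, q):
    """Minkowski product with metric (+, -, -, -)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

E = 500.0                        # beam energy (GeV), illustrative
m = 300.0                        # final-state mass (GeV), illustrative
pa = np.array([E, 0.0, 0.0,  E])
pb = np.array([E, 0.0, 0.0, -E])
pz = np.sqrt(E**2 - m**2)
p1 = np.array([E, 0.0, 0.0,  pz])   # produced along the beam axis
p2 = np.array([E, 0.0, 0.0, -pz])

s = mdot(pa + pb, pa + pb)       # Eq. (13)
t = mdot(pa - p1, pa - p1)
u = mdot(pa - p2, pa - p2)
t_sub = t - m**2                 # mass-subtracted variable, Eq. (14)
print(s + t + u, 2 * m**2)       # equal by the Mandelstam sum rule
```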
Unpolarized cross sections, averaged over initial spins, can easily be derived from the expression
$$ \mathrm{d}\hat\sigma = \frac{\mathrm{d}\hat\sigma_{1,1} + \mathrm{d}\hat\sigma_{1,-1} + \mathrm{d}\hat\sigma_{-1,1} + \mathrm{d}\hat\sigma_{-1,-1}}{4}, \tag{15} $$
while single- and double-polarized cross sections, including the same average factor for initial spins, are given by
$$ \mathrm{d}\Delta\hat\sigma_L = \frac{\mathrm{d}\hat\sigma_{1,1} + \mathrm{d}\hat\sigma_{1,-1} - \mathrm{d}\hat\sigma_{-1,1} - \mathrm{d}\hat\sigma_{-1,-1}}{4} \quad\text{and}\quad \mathrm{d}\Delta\hat\sigma_{LL} = \frac{\mathrm{d}\hat\sigma_{1,1} - \mathrm{d}\hat\sigma_{1,-1} - \mathrm{d}\hat\sigma_{-1,1} + \mathrm{d}\hat\sigma_{-1,-1}}{4}, \tag{16} $$
so that the single- and double-spin asymmetries become
$$ A_L = \frac{\mathrm{d}\Delta\hat\sigma_L}{\mathrm{d}\hat\sigma} \quad\text{and}\quad A_{LL} = \frac{\mathrm{d}\Delta\hat\sigma_{LL}}{\mathrm{d}\hat\sigma}. \tag{17} $$
A. Generalized Strong and Electroweak Couplings in NMFV SUSY
Considering the strong interaction first, it is well known that the interaction of quarks, squarks, and gluinos, whose coupling is normally just given by $g_s = \sqrt{4\pi\alpha_s}$, can in general lead to flavour violation in the left- and right-handed sectors through non-diagonal entries in the matrices $R^q$,
$$ \big\{ L_{\tilde q_j q_k \tilde g},\; R_{\tilde q_j q_k \tilde g} \big\} = \big\{ R^q_{jk},\; -R^q_{j(k+3)} \big\}. \tag{18} $$
Of course, the involved quark and squark both have to be up- or down-type, since the gluino is electrically neutral.
For the electroweak interaction, we define the square of the weak coupling constant $g_W^2 = e^2/\sin^2\theta_W$ in terms of the electromagnetic fine structure constant $\alpha = e^2/(4\pi)$ and the squared sine of the electroweak mixing angle $x_W = \sin^2\theta_W = s_W^2 = 1 - \cos^2\theta_W = 1 - c_W^2$. Following the standard notation, the $W^\pm$-$\tilde\chi^0_i$-$\tilde\chi^\mp_j$, $Z$-$\tilde\chi^+_i$-$\tilde\chi^-_j$, and $Z$-$\tilde\chi^0_i$-$\tilde\chi^0_j$ interaction vertices are proportional to [2]
$$ O^L_{ij} = -\tfrac{1}{\sqrt 2}\, N_{i4} V^*_{j2} + N_{i2} V^*_{j1} \quad\text{and}\quad O^R_{ij} = \tfrac{1}{\sqrt 2}\, N^*_{i3} U_{j2} + N^*_{i2} U_{j1}, $$
$$ O'^L_{ij} = -V_{i1} V^*_{j1} - \tfrac12\, V_{i2} V^*_{j2} + \delta_{ij}\, x_W \quad\text{and}\quad O'^R_{ij} = -U^*_{i1} U_{j1} - \tfrac12\, U^*_{i2} U_{j2} + \delta_{ij}\, x_W, $$
$$ O''^L_{ij} = -\tfrac12\, N_{i3} N^*_{j3} + \tfrac12\, N_{i4} N^*_{j4} \quad\text{and}\quad O''^R_{ij} = \tfrac12\, N^*_{i3} N_{j3} - \tfrac12\, N^*_{i4} N_{j4}. \tag{19} $$
In NMFV, the coupling strengths of left- and right-handed (s)quarks to the electroweak gauge bosons are given by
$$ \{L_{qq'Z},\, R_{qq'Z}\} = \big(2\,T^3_q - 2\,e_q\,x_W\big)\times \delta_{qq'}, $$
$$ \{L_{\tilde q_i\tilde q_j Z},\, R_{\tilde q_i\tilde q_j Z}\} = \big(2\,T^3_{\tilde q} - 2\,e_{\tilde q}\,x_W\big)\times \sum_{k=1}^{3}\big\{R^u_{ik}\, R^{u*}_{jk},\; R^u_{i(3+k)}\, R^{u*}_{j(3+k)}\big\}, $$
$$ \{L_{qq'W},\, R_{qq'W}\} = \big\{\sqrt2\, c_W\, V_{qq'},\; 0\big\}, $$
$$ \{L_{\tilde u_i\tilde d_j W},\, R_{\tilde u_i\tilde d_j W}\} = \sum_{k,l=1}^{3}\big\{\sqrt2\, c_W\, V_{u_k d_l}\, R^u_{ik} R^{d*}_{jl},\; 0\big\}, \tag{20} $$
where the weak isospin quantum numbers are $T^3_{\{q,\tilde q\}} = \pm 1/2$ for left-handed and $T^3_{\{q,\tilde q\}} = 0$ for right-handed up- and down-type (s)quarks, their fractional electromagnetic charges are denoted by $e_{\{q,\tilde q\}}$, and $V_{kl}$ are the elements of the CKM-matrix defined in Eq. (2). To simplify the notation, we have introduced flavour indices in the latter, $d_1 = d$, $d_2 = s$, $d_3 = b$, $u_1 = u$, $u_2 = c$, and $u_3 = t$.
The SUSY counterparts of these vertices correspond to the quark-squark-gaugino couplings,
$$ L_{\tilde d_j d_k\tilde\chi^0_i} = \Big[(e_q - T^3_q)\, s_W\, N_{i1} + T^3_q\, c_W\, N_{i2}\Big] R^{d*}_{jk} + \frac{m_{d_k}\, c_W\, N_{i3}}{2\, m_W\cos\beta}\, R^{d*}_{j(k+3)}, $$
$$ -R^{*}_{\tilde d_j d_k\tilde\chi^0_i} = e_q\, s_W\, N_{i1}\, R^{d*}_{j(k+3)} - \frac{m_{d_k}\, c_W\, N_{i3}}{2\, m_W\cos\beta}\, R^{d*}_{jk}, $$
$$ L_{\tilde u_j u_k\tilde\chi^0_i} = \Big[(e_q - T^3_q)\, s_W\, N_{i1} + T^3_q\, c_W\, N_{i2}\Big] R^{u*}_{jk} + \frac{m_{u_k}\, c_W\, N_{i4}}{2\, m_W\sin\beta}\, R^{u*}_{j(k+3)}, $$
$$ -R^{*}_{\tilde u_j u_k\tilde\chi^0_i} = e_q\, s_W\, N_{i1}\, R^{u*}_{j(k+3)} - \frac{m_{u_k}\, c_W\, N_{i4}}{2\, m_W\sin\beta}\, R^{u*}_{jk}, $$
$$ L_{\tilde d_j u_l\tilde\chi^\pm_i} = \sum_{k=1}^{3}\Big[U_{i1}\, R^{d*}_{jk} - \frac{m_{d_k}\, U_{i2}\, R^{d*}_{j(k+3)}}{\sqrt2\, m_W\cos\beta}\Big] V_{u_l d_k}, \qquad R_{\tilde d_j u_l\tilde\chi^\pm_i} = -\sum_{k=1}^{3}\frac{m_{u_l}\, V^*_{i2}\, V_{u_l d_k}\, R^{d*}_{jk}}{\sqrt2\, m_W\sin\beta}, $$
$$ L_{\tilde u_j d_l\tilde\chi^\pm_i} = \sum_{k=1}^{3}\Big[V^*_{i1}\, R^{u*}_{jk} - \frac{m_{u_k}\, V^*_{i2}\, R^{u*}_{j(k+3)}}{\sqrt2\, m_W\sin\beta}\Big] V_{u_k d_l}, \qquad R_{\tilde u_j d_l\tilde\chi^\pm_i} = -\sum_{k=1}^{3}\frac{m_{d_l}\, U_{i2}\, V_{u_k d_l}\, R^{u*}_{jk}}{\sqrt2\, m_W\cos\beta}, \tag{21} $$
where the matrices $N$, $U$ and $V$ relate to the gaugino/Higgsino mixing (see App. A). All other couplings vanish due to (electromagnetic) charge conservation (e.g. $L_{\tilde u_j u_l\tilde\chi^\pm_i}$). These general expressions can be simplified by neglecting the Yukawa couplings except for the one of the top quark, whose mass is not small compared to $m_W$. For the sake of simplicity, we use the generic notation
$$ \big\{C^1_{abc},\, C^2_{abc}\big\} = \{L_{abc},\, R_{abc}\} \tag{22} $$
in the following.
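The chirality bookkeeping of Eq. (22) can be mirrored in code by storing each vertex as an $(L, R)$ pair; the vertex label and the coupling values below are placeholders, not model predictions:

```python
# Chirality bookkeeping of Eq. (22): C^1 = L, C^2 = R for each vertex abc.
# The coupling values are placeholders, not model predictions.
couplings = {
    ("sq1", "q", "chi0_1"): (0.12 + 0.00j, -0.03 + 0.01j),  # (L, R)
}

def C(m, abc):
    """Return C^m_{abc} with m = 1 (left) or m = 2 (right)."""
    return couplings[abc][m - 1]

# Products like C^{m*} C^n appear repeatedly in the form factors below:
m, n = 1, 2
prod = C(m, ("sq1", "q", "chi0_1")).conjugate() * C(n, ("sq1", "q", "chi0_1"))
print(prod)
```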
B. Squark-Antisquark Pair Production
The production of charged squark-antisquark pairs
$$ q(h_a, p_a)\; \bar q'(h_b, p_b) \to \tilde u_i(p_1)\; \tilde d^*_j(p_2), \tag{23} $$
where i, j = 1, ..., 6 label up- and down-type squark mass eigenstates, ha,b helicities, and pa,b,1,2 four-momenta,
proceeds from an equally charged quark-antiquark initial state through the tree-level Feynman diagrams shown in
Fig. 1. The corresponding cross section can be written in a compact way as
FIG. 1: Tree-level Feynman diagrams for the production of charged squark-antisquark pairs in quark-antiquark collisions.
$$ \frac{\mathrm{d}\hat\sigma^{h_a,h_b}}{\mathrm{d}t} = (1-h_a)(1+h_b)\Bigg[\frac{W}{s_w^2} + \sum_{k,l=1,\dots,4}\frac{N^{kl}_{11}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{G_{11}}{t_{\tilde g}^2} + \sum_{k=1,\dots,4}\frac{[NW]^k}{t_{\tilde\chi^0_k}\, s_w} + \frac{[GW]}{t_{\tilde g}\, s_w}\Bigg] $$
$$ +\,(1-h_a)(1-h_b)\Bigg[\sum_{k,l=1,\dots,4}\frac{N^{kl}_{12}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{G_{12}}{t_{\tilde g}^2}\Bigg] + (1+h_a)(1+h_b)\Bigg[\sum_{k,l=1,\dots,4}\frac{N^{kl}_{21}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{G_{21}}{t_{\tilde g}^2}\Bigg] + (1+h_a)(1-h_b)\Bigg[\sum_{k,l=1,\dots,4}\frac{N^{kl}_{22}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{G_{22}}{t_{\tilde g}^2}\Bigg] \tag{24} $$
thanks to the form factors
$$ W = \frac{\pi\alpha^2}{16\, x_W^2 (1-x_W)^2\, s^2}\, \Big|L^*_{qq'W}\, L_{\tilde u_i\tilde d_j W}\Big|^2 \big(u\,t - m^2_{\tilde u_i} m^2_{\tilde d_j}\big), $$
$$ N^{kl}_{mn} = \frac{\pi\alpha^2}{x_W^2 (1-x_W)^2\, s^2}\, \Big[C^{n}_{\tilde d_j q'\tilde\chi^0_k}\, C^{m*}_{\tilde u_i q\tilde\chi^0_k}\, C^{n*}_{\tilde d_j q'\tilde\chi^0_l}\, C^{m}_{\tilde u_i q\tilde\chi^0_l}\Big] \Big[\big(u\,t - m^2_{\tilde u_i} m^2_{\tilde d_j}\big)\,\delta_{mn} + m_{\tilde\chi^0_k} m_{\tilde\chi^0_l}\, s\,(1-\delta_{mn})\Big], $$
$$ G_{mn} = \frac{2\,\pi\alpha_s^2}{9\, s^2}\, \Big|C^{n*}_{\tilde d_j q'\tilde g}\, C^{m}_{\tilde u_i q\tilde g}\Big|^2 \Big[\big(u\,t - m^2_{\tilde u_i} m^2_{\tilde d_j}\big)\,\delta_{mn} + m^2_{\tilde g}\, s\,(1-\delta_{mn})\Big], $$
$$ [NW]^k = \frac{\pi\alpha^2}{6\, x_W^2 (1-x_W)^2\, s^2}\, \mathrm{Re}\Big[L^*_{qq'W}\, L_{\tilde u_i\tilde d_j W}\, L_{\tilde u_i q\tilde\chi^0_k}\, L^*_{\tilde d_j q'\tilde\chi^0_k}\Big] \big(u\,t - m^2_{\tilde u_i} m^2_{\tilde d_j}\big), $$
$$ [GW] = \frac{4\,\pi\alpha_s\alpha}{18\, x_W (1-x_W)\, s^2}\, \mathrm{Re}\Big[L^*_{\tilde u_i q\tilde g}\, L_{\tilde d_j q'\tilde g}\, L_{qq'W}\, L_{\tilde u_i\tilde d_j W}\Big] \big(u\,t - m^2_{\tilde u_i} m^2_{\tilde d_j}\big), \tag{25} $$
which combine coupling constants and Dirac traces of the squared and interference diagrams. In cMFV, superpartners
of heavy flavours can only be produced through the purely left-handed s-channel W -exchange, since the t-channel
diagrams are suppressed by the small bottom and negligible top quark densities in the proton, and one recovers the
result in Ref. [26]. In NMFV, t-channel exchanges can, however, contribute to heavy-flavour final state production
from light-flavour initial states and even become dominant, due to the strong gluino coupling.
Neutral squark-antisquark pair production proceeds either from equally neutral quark-antiquark initial states
$$ q(h_a, p_a)\; \bar q'(h_b, p_b) \to \tilde q_i(p_1)\; \tilde q^*_j(p_2), \tag{26} $$
through the five different gauge-boson/gaugino exchanges shown in Fig. 2 (top) or from gluon-gluon initial states
$$ g(h_a, p_a)\; g(h_b, p_b) \to \tilde q_i(p_1)\; \tilde q^*_i(p_2) \tag{27} $$
through the purely strong couplings shown in Fig. 2 (bottom). The differential cross section for quark-antiquark
FIG. 2: Tree-level Feynman diagrams for the production of neutral squark-antisquark pairs in quark-antiquark (top) and
gluon-gluon collisions (bottom).
scattering
$$ \frac{\mathrm{d}\hat\sigma^{h_a,h_b}}{\mathrm{d}t} = (1-h_a)(1+h_b)\Bigg[\frac{Y}{s^2} + \frac{Z_1}{s_z^2} + \frac{G}{s^2} + \frac{[YZ]_1}{s\,s_z} + \frac{[\tilde GY]_1}{t_{\tilde g}\, s} + \frac{[\tilde GZ]_1}{t_{\tilde g}\, s_z} + \frac{[\tilde GG]_1}{t_{\tilde g}\, s} + \frac{\tilde G_{11}}{t_{\tilde g}^2} + \sum_{k,l=1,\dots,4}\frac{N^{kl}_{11}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \sum_{k=1,\dots,4}\Bigg(\frac{[NY]^k_1}{t_{\tilde\chi^0_k}\, s} + \frac{[NZ]^k_1}{t_{\tilde\chi^0_k}\, s_z} + \frac{[NG]^k_1}{t_{\tilde\chi^0_k}\, s}\Bigg)\Bigg] $$
$$ +\,(1+h_a)(1-h_b)\Bigg[\frac{Y}{s^2} + \frac{Z_2}{s_z^2} + \frac{G}{s^2} + \frac{[YZ]_2}{s\,s_z} + \frac{[\tilde GY]_2}{t_{\tilde g}\, s} + \frac{[\tilde GZ]_2}{t_{\tilde g}\, s_z} + \frac{[\tilde GG]_2}{t_{\tilde g}\, s} + \frac{\tilde G_{22}}{t_{\tilde g}^2} + \sum_{k,l=1,\dots,4}\frac{N^{kl}_{22}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \sum_{k=1,\dots,4}\Bigg(\frac{[NY]^k_2}{t_{\tilde\chi^0_k}\, s} + \frac{[NZ]^k_2}{t_{\tilde\chi^0_k}\, s_z} + \frac{[NG]^k_2}{t_{\tilde\chi^0_k}\, s}\Bigg)\Bigg] $$
$$ +\,(1-h_a)(1-h_b)\Bigg[\sum_{k,l=1,\dots,4}\frac{N^{kl}_{12}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{\tilde G_{12}}{t_{\tilde g}^2}\Bigg] + (1+h_a)(1+h_b)\Bigg[\sum_{k,l=1,\dots,4}\frac{N^{kl}_{21}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{\tilde G_{21}}{t_{\tilde g}^2}\Bigg] \tag{28} $$
involves many different form factors,
$$ Y = \frac{\pi\alpha^2\, e_q^2 e_{\tilde q}^2\, \delta_{ij}\delta_{qq'}}{s^2}\, \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), \qquad Z_m = \frac{\pi\alpha^2\,\delta_{qq'}}{16\, s^2\, x_W^2 (1-x_W)^2}\, \big|L_{\tilde q_i\tilde q_j Z} + R_{\tilde q_i\tilde q_j Z}\big|^2 \big(C^m_{qq'Z}\big)^2 \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), $$
$$ G = \frac{2\,\pi\alpha_s^2\, \delta_{ij}\delta_{qq'}}{9\, s^2}\, \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), $$
$$ N^{kl}_{mn} = \frac{\pi\alpha^2}{x_W^2 (1-x_W)^2\, s^2}\, \Big[C^{m*}_{\tilde q_i q\tilde\chi^0_k}\, C^{m}_{\tilde q_i q\tilde\chi^0_l}\, C^{n}_{\tilde q_j q'\tilde\chi^0_k}\, C^{n*}_{\tilde q_j q'\tilde\chi^0_l}\Big] \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)\,\delta_{mn} + m_{\tilde\chi^0_k} m_{\tilde\chi^0_l}\, s\,(1-\delta_{mn})\Big], $$
$$ \tilde G_{mn} = \frac{2\,\pi\alpha_s^2}{9\, s^2}\, \Big|C^m_{\tilde q_i q\tilde g}\, C^{n}_{\tilde q_j q'\tilde g}\Big|^2 \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)\,\delta_{mn} + m^2_{\tilde g}\, s\,(1-\delta_{mn})\Big], $$
$$ [YZ]_m = \frac{\pi\alpha^2\, e_q e_{\tilde q}\, \delta_{ij}\delta_{qq'}}{2\, s^2\, x_W (1-x_W)}\, \big[L_{\tilde q_i\tilde q_j Z} + R_{\tilde q_i\tilde q_j Z}\big]\, C^m_{qq'Z}\, \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), $$
$$ [NY]^k_m = \frac{2\,\pi\alpha^2\, e_q e_{\tilde q}\, \delta_{ij}\delta_{qq'}}{3\, x_W (1-x_W)\, s^2}\, \mathrm{Re}\Big[C^m_{\tilde q_i q\tilde\chi^0_k}\, C^{m*}_{\tilde q_j q'\tilde\chi^0_k}\Big] \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), $$
$$ [NZ]^k_m = \frac{\pi\alpha^2}{6\, x_W^2 (1-x_W)^2\, s^2}\, \mathrm{Re}\Big[C^m_{\tilde q_i q\tilde\chi^0_k}\, C^{m*}_{\tilde q_j q'\tilde\chi^0_k}\Big] \big[L_{\tilde q_i\tilde q_j Z} + R_{\tilde q_i\tilde q_j Z}\big]\, C^m_{qq'Z}\, \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), $$
$$ [NG]^k_m = \frac{8\,\pi\alpha\alpha_s\, \delta_{ij}\delta_{qq'}}{9\, x_W (1-x_W)\, s^2}\, \mathrm{Re}\Big[C^m_{\tilde q_i q\tilde\chi^0_k}\, C^{m*}_{\tilde q_j q'\tilde\chi^0_k}\Big] \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), $$
$$ [\tilde GG]_m = -\frac{4\,\pi\alpha_s^2\, \delta_{ij}\delta_{qq'}}{27\, s^2}\, \mathrm{Re}\Big[C^{m*}_{\tilde q_i q\tilde g}\, C^{m}_{\tilde q_j q'\tilde g}\Big] \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), \qquad [\tilde GY]_m = \frac{8\,\pi\alpha\alpha_s\, e_q e_{\tilde q}\, \delta_{ij}\delta_{qq'}}{9\, s^2}\, \mathrm{Re}\Big[C^{m*}_{\tilde q_i q\tilde g}\, C^{m}_{\tilde q_j q'\tilde g}\Big] \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), $$
$$ [\tilde GZ]_m = \frac{2\,\pi\alpha\alpha_s}{9\, x_W (1-x_W)\, s^2}\, \mathrm{Re}\Big[C^{m*}_{\tilde q_i q\tilde g}\, C^{m}_{\tilde q_j q'\tilde g}\Big] \big[L_{\tilde q_i\tilde q_j Z} + R_{\tilde q_i\tilde q_j Z}\big]\, C^m_{qq'Z}\, \big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big), \tag{29} $$
FIG. 3: Tree-level Feynman diagrams for the production of one down-type squark ($\tilde q_i$) and one up-type squark ($\tilde q'_j$) in the collision of an up-type quark ($q$) and a down-type quark ($q'$).
since only very few interferences (those between strong and electroweak channels of the same propagator type) are
eliminated due to colour conservation. On the other hand, the gluon-initiated cross section
$$ \frac{\mathrm{d}\hat\sigma^{h_a,h_b}}{\mathrm{d}t} = \frac{\pi\alpha_s^2}{128\, s^2}\Bigg[1 - \frac{2\, t_{\tilde q_i} u_{\tilde q_i}}{s^2}\Bigg] \Bigg[(1-h_a h_b) - \frac{2\, s\, m^2_{\tilde q_i}}{t_{\tilde q_i} u_{\tilde q_i}}\Bigg((1-h_a h_b) - \frac{s\, m^2_{\tilde q_i}}{t_{\tilde q_i} u_{\tilde q_i}}\Bigg)\Bigg] \tag{30} $$
involves only the strong coupling constant and is thus quite compact. In the case of cMFV, for both diagonal and non-diagonal squark helicity, our results agree with those in Ref. [26]. Diagonal production of identical squark-antisquark
mass eigenstates is, of course, dominated by the strong quark-antiquark and gluon-gluon channels. Their relative im-
portance depends on the partonic luminosity and thus on the type and energy of the hadron collider under considera-
tion. Non-diagonal production of squarks of different helicity or flavour involves only electroweak and gluino-mediated
quark-antiquark scattering, and the relative importance of these processes depends largely on the gluino mass.
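Partonic cross sections such as Eq. (30) are eventually integrated over $t$ between the kinematic boundaries $t_\mp = m^2 - s(1\pm\beta)/2$ with $\beta = \sqrt{1-4m^2/s}$; the following minimal sketch (assuming a generic 2→2 process with equal final-state masses $m$ and using a placeholder integrand) sets up that integration numerically:

```python
import numpy as np

def t_range(s, m):
    """Kinematic t-boundaries for a 2->2 process with equal final masses m."""
    beta = np.sqrt(1.0 - 4.0 * m**2 / s)
    return m**2 - 0.5 * s * (1.0 + beta), m**2 - 0.5 * s * (1.0 - beta)

def integrate_dsigma(dsig_dt, s, m, n=2000):
    """Trapezoidal integration of a differential cross section over t."""
    tmin, tmax = t_range(s, m)
    t = np.linspace(tmin, tmax, n)
    return np.trapz(dsig_dt(t), t)

# Placeholder integrand dsigma/dt = 1: the integral is the t-range length,
# which equals s*beta for equal final-state masses.
s, m = 2000.0**2, 600.0
tmin, tmax = t_range(s, m)
total = integrate_dsigma(lambda t: np.ones_like(t), s, m)
print(total, tmax - tmin)   # equal: s*beta
```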
C. Squark Pair Production
While squark-antisquark pairs are readily produced in pp̄ collisions, e.g. at the Tevatron, from valence quarks and
antiquarks, pp colliders have a larger quark-quark luminosity and will thus more easily lead to squark pair production.
The production of one down-type and one up-type squark
$$ q(h_a, p_a)\; q'(h_b, p_b) \to \tilde d_i(p_1)\; \tilde u_j(p_2), \tag{31} $$
in the collision of an up-type quark q and a down-type quark q′ proceeds through the t-channel chargino or u-channel
neutralino and gluino exchanges shown in Fig. 3. The corresponding cross section
FIG. 4: Tree-level Feynman diagrams for the production of two up-type or down-type squarks.
$$ \frac{\mathrm{d}\hat\sigma^{h_a,h_b}}{\mathrm{d}t} = (1-h_a)(1-h_b)\Bigg[\sum_{k,l=1,2}\frac{C^{kl}_{11}}{t_{\tilde\chi_k} t_{\tilde\chi_l}} + \sum_{k,l=1,\dots,4}\frac{N^{kl}_{11}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{G_{11}}{u_{\tilde g}^2} + \sum_{k=1,2}\sum_{l=1,\dots,4}\frac{[CN]^{kl}_{11}}{t_{\tilde\chi_k} u_{\tilde\chi^0_l}} + \sum_{k=1,2}\frac{[CG]^{k}_{11}}{t_{\tilde\chi_k} u_{\tilde g}}\Bigg] $$
$$ +\,(1+h_a)(1+h_b)\Bigg[\sum_{k,l=1,2}\frac{C^{kl}_{22}}{t_{\tilde\chi_k} t_{\tilde\chi_l}} + \sum_{k,l=1,\dots,4}\frac{N^{kl}_{22}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{G_{22}}{u_{\tilde g}^2} + \sum_{k=1,2}\sum_{l=1,\dots,4}\frac{[CN]^{kl}_{22}}{t_{\tilde\chi_k} u_{\tilde\chi^0_l}} + \sum_{k=1,2}\frac{[CG]^{k}_{22}}{t_{\tilde\chi_k} u_{\tilde g}}\Bigg] $$
$$ +\,(1-h_a)(1+h_b)\Bigg[\sum_{k,l=1,2}\frac{C^{kl}_{12}}{t_{\tilde\chi_k} t_{\tilde\chi_l}} + \sum_{k,l=1,\dots,4}\frac{N^{kl}_{12}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{G_{12}}{u_{\tilde g}^2} + \sum_{k=1,2}\sum_{l=1,\dots,4}\frac{[CN]^{kl}_{12}}{t_{\tilde\chi_k} u_{\tilde\chi^0_l}} + \sum_{k=1,2}\frac{[CG]^{k}_{12}}{t_{\tilde\chi_k} u_{\tilde g}}\Bigg] $$
$$ +\,(1+h_a)(1-h_b)\Bigg[\sum_{k,l=1,2}\frac{C^{kl}_{21}}{t_{\tilde\chi_k} t_{\tilde\chi_l}} + \sum_{k,l=1,\dots,4}\frac{N^{kl}_{21}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{G_{21}}{u_{\tilde g}^2} + \sum_{k=1,2}\sum_{l=1,\dots,4}\frac{[CN]^{kl}_{21}}{t_{\tilde\chi_k} u_{\tilde\chi^0_l}} + \sum_{k=1,2}\frac{[CG]^{k}_{21}}{t_{\tilde\chi_k} u_{\tilde g}}\Bigg] \tag{32} $$
involves the form factors
$$ C^{kl}_{mn} = \frac{\pi\alpha^2}{4\, x_W^2\, s^2}\, \Big[C^{n}_{\tilde u_j q'\tilde\chi^\pm_k}\, C^{m*}_{\tilde d_i q\tilde\chi^\pm_k}\, C^{n*}_{\tilde u_j q'\tilde\chi^\pm_l}\, C^{m}_{\tilde d_i q\tilde\chi^\pm_l}\Big] \Big[\big(u\,t - m^2_{\tilde d_i} m^2_{\tilde u_j}\big)(1-\delta_{mn}) + m_{\tilde\chi^\pm_k} m_{\tilde\chi^\pm_l}\, s\, \delta_{mn}\Big], $$
$$ N^{kl}_{mn} = \frac{\pi\alpha^2}{x_W^2 (1-x_W)^2\, s^2}\, \Big[C^{m*}_{\tilde u_j q\tilde\chi^0_k}\, C^{m}_{\tilde u_j q\tilde\chi^0_l}\, C^{n}_{\tilde d_i q'\tilde\chi^0_k}\, C^{n*}_{\tilde d_i q'\tilde\chi^0_l}\Big] \Big[\big(u\,t - m^2_{\tilde d_i} m^2_{\tilde u_j}\big)(1-\delta_{mn}) + m_{\tilde\chi^0_k} m_{\tilde\chi^0_l}\, s\, \delta_{mn}\Big], $$
$$ G_{mn} = \frac{2\,\pi\alpha_s^2}{9\, s^2}\, \Big|C^m_{\tilde u_j q\tilde g}\, C^{n}_{\tilde d_i q'\tilde g}\Big|^2 \Big[\big(u\,t - m^2_{\tilde d_i} m^2_{\tilde u_j}\big)(1-\delta_{mn}) + m^2_{\tilde g}\, s\, \delta_{mn}\Big], $$
$$ [CN]^{kl}_{mn} = \frac{\pi\alpha^2}{3\, x_W^2 (1-x_W)\, s^2}\, \mathrm{Re}\Big[C^{n*}_{\tilde u_j q'\tilde\chi^\pm_k}\, C^{m}_{\tilde d_i q\tilde\chi^\pm_k}\, C^{m*}_{\tilde u_j q\tilde\chi^0_l}\, C^{n}_{\tilde d_i q'\tilde\chi^0_l}\Big] \Big[\big(u\,t - m^2_{\tilde d_i} m^2_{\tilde u_j}\big)(\delta_{mn}-1) + m_{\tilde\chi^\pm_k} m_{\tilde\chi^0_l}\, s\, \delta_{mn}\Big], $$
$$ [CG]^{k}_{mn} = \frac{4\,\pi\alpha\alpha_s}{9\, s^2\, x_W}\, \mathrm{Re}\Big[C^{n*}_{\tilde u_j q'\tilde\chi^\pm_k}\, C^{m}_{\tilde d_i q\tilde\chi^\pm_k}\, C^{m*}_{\tilde u_j q\tilde g}\, C^{n}_{\tilde d_i q'\tilde g}\Big] \Big[\big(u\,t - m^2_{\tilde d_i} m^2_{\tilde u_j}\big)(\delta_{mn}-1) + m_{\tilde\chi^\pm_k} m_{\tilde g}\, s\, \delta_{mn}\Big], \tag{33} $$
where the neutralino-gluino interference term is absent due to colour conservation. The cross section for the charge-
conjugate production of antisquarks from antiquarks can be obtained from the equations above by replacing ha,b →
−ha,b. Heavy-flavour final states are completely absent in cMFV due to the negligible top quark and small bottom
quark densities in the proton and can thus only be obtained in NMFV.
The Feynman diagrams for pair production of two up- or down-type squarks
$$ q(h_a, p_a)\; q'(h_b, p_b) \to \tilde q_i(p_1)\; \tilde q_j(p_2) \tag{34} $$
are shown in Fig. 4. In NMFV, neutralino and gluino exchanges can lead to identical squark flavours for different
quark initial states, so that both t- and u-channels contribute and may interfere. The cross section
$$ \frac{\mathrm{d}\hat\sigma^{h_a,h_b}}{\mathrm{d}t} = (1-h_a)(1-h_b)\Bigg[\sum_{k,l=1,\dots,4}\Bigg(\frac{[NT]^{kl}_{11}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{[NU]^{kl}_{11}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{[NTU]^{kl}_{11}}{t_{\tilde\chi^0_k} u_{\tilde\chi^0_l}}\Bigg) + \frac{[GT]_{11}}{t_{\tilde g}^2} + \frac{[GU]_{11}}{u_{\tilde g}^2} + \frac{[GTU]_{11}}{u_{\tilde g} t_{\tilde g}} + \sum_{k=1,\dots,4}\Bigg(\frac{[NGA]^k_{11}}{t_{\tilde\chi^0_k} u_{\tilde g}} + \frac{[NGB]^k_{11}}{u_{\tilde\chi^0_k} t_{\tilde g}}\Bigg)\Bigg]\frac{1}{1+\delta_{ij}} $$
$$ +\,(1+h_a)(1+h_b)\Bigg[\sum_{k,l=1,\dots,4}\Bigg(\frac{[NT]^{kl}_{22}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{[NU]^{kl}_{22}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{[NTU]^{kl}_{22}}{t_{\tilde\chi^0_k} u_{\tilde\chi^0_l}}\Bigg) + \frac{[GT]_{22}}{t_{\tilde g}^2} + \frac{[GU]_{22}}{u_{\tilde g}^2} + \frac{[GTU]_{22}}{u_{\tilde g} t_{\tilde g}} + \sum_{k=1,\dots,4}\Bigg(\frac{[NGA]^k_{22}}{t_{\tilde\chi^0_k} u_{\tilde g}} + \frac{[NGB]^k_{22}}{u_{\tilde\chi^0_k} t_{\tilde g}}\Bigg)\Bigg]\frac{1}{1+\delta_{ij}} $$
$$ +\,(1-h_a)(1+h_b)\Bigg[\sum_{k,l=1,\dots,4}\Bigg(\frac{[NT]^{kl}_{12}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{[NU]^{kl}_{12}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{[NTU]^{kl}_{12}}{t_{\tilde\chi^0_k} u_{\tilde\chi^0_l}}\Bigg) + \frac{[GT]_{12}}{t_{\tilde g}^2} + \frac{[GU]_{12}}{u_{\tilde g}^2} + \frac{[GTU]_{12}}{u_{\tilde g} t_{\tilde g}} + \sum_{k=1,\dots,4}\Bigg(\frac{[NGA]^k_{12}}{t_{\tilde\chi^0_k} u_{\tilde g}} + \frac{[NGB]^k_{12}}{u_{\tilde\chi^0_k} t_{\tilde g}}\Bigg)\Bigg]\frac{1}{1+\delta_{ij}} $$
$$ +\,(1+h_a)(1-h_b)\Bigg[\sum_{k,l=1,\dots,4}\Bigg(\frac{[NT]^{kl}_{21}}{t_{\tilde\chi^0_k} t_{\tilde\chi^0_l}} + \frac{[NU]^{kl}_{21}}{u_{\tilde\chi^0_k} u_{\tilde\chi^0_l}} + \frac{[NTU]^{kl}_{21}}{t_{\tilde\chi^0_k} u_{\tilde\chi^0_l}}\Bigg) + \frac{[GT]_{21}}{t_{\tilde g}^2} + \frac{[GU]_{21}}{u_{\tilde g}^2} + \frac{[GTU]_{21}}{u_{\tilde g} t_{\tilde g}} + \sum_{k=1,\dots,4}\Bigg(\frac{[NGA]^k_{21}}{t_{\tilde\chi^0_k} u_{\tilde g}} + \frac{[NGB]^k_{21}}{u_{\tilde\chi^0_k} t_{\tilde g}}\Bigg)\Bigg]\frac{1}{1+\delta_{ij}} \tag{35} $$
depends therefore on the form factors
$$ [NT]^{kl}_{mn} = \frac{\pi\alpha^2}{x_W^2 (1-x_W)^2\, s^2}\, \Big[C^{n*}_{\tilde q_j q'\tilde\chi^0_k}\, C^{m}_{\tilde q_i q\tilde\chi^0_k}\, C^{n}_{\tilde q_j q'\tilde\chi^0_l}\, C^{m*}_{\tilde q_i q\tilde\chi^0_l}\Big] \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m_{\tilde\chi^0_k} m_{\tilde\chi^0_l}\, s\, \delta_{mn}\Big], $$
$$ [NU]^{kl}_{mn} = \frac{\pi\alpha^2}{x_W^2 (1-x_W)^2\, s^2}\, \Big[C^{n*}_{\tilde q_i q'\tilde\chi^0_k}\, C^{m}_{\tilde q_j q\tilde\chi^0_k}\, C^{n}_{\tilde q_i q'\tilde\chi^0_l}\, C^{m*}_{\tilde q_j q\tilde\chi^0_l}\Big] \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m_{\tilde\chi^0_k} m_{\tilde\chi^0_l}\, s\, \delta_{mn}\Big], $$
$$ [NTU]^{kl}_{mn} = \frac{2\,\pi\alpha^2}{3\, x_W^2 (1-x_W)^2\, s^2}\, \mathrm{Re}\Big[C^{m*}_{\tilde q_i q\tilde\chi^0_k}\, C^{n*}_{\tilde q_j q'\tilde\chi^0_k}\, C^{n}_{\tilde q_i q'\tilde\chi^0_l}\, C^{m}_{\tilde q_j q\tilde\chi^0_l}\Big] \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m_{\tilde\chi^0_k} m_{\tilde\chi^0_l}\, s\, \delta_{mn}\Big], $$
$$ [GT]_{mn} = \frac{2\,\pi\alpha_s^2}{9\, s^2}\, \Big|C^{n}_{\tilde q_j q'\tilde g}\, C^{m}_{\tilde q_i q\tilde g}\Big|^2 \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m^2_{\tilde g}\, s\, \delta_{mn}\Big], $$
$$ [GU]_{mn} = \frac{2\,\pi\alpha_s^2}{9\, s^2}\, \Big|C^{m}_{\tilde q_i q'\tilde g}\, C^{n}_{\tilde q_j q\tilde g}\Big|^2 \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m^2_{\tilde g}\, s\, \delta_{mn}\Big], $$
$$ [GTU]_{mn} = -\frac{4\,\pi\alpha_s^2}{27\, s^2}\, \mathrm{Re}\Big[C^{m}_{\tilde q_i q\tilde g}\, C^{n}_{\tilde q_j q'\tilde g}\, C^{m*}_{\tilde q_i q'\tilde g}\, C^{n*}_{\tilde q_j q\tilde g}\Big] \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m^2_{\tilde g}\, s\, \delta_{mn}\Big], $$
$$ [NGA]^k_{mn} = \frac{8\,\pi\alpha\alpha_s}{9\, s^2\, x_W (1-x_W)}\, \mathrm{Re}\Big[C^{n*}_{\tilde q_j q'\tilde\chi^0_k}\, C^{m*}_{\tilde q_i q\tilde\chi^0_k}\, C^{m}_{\tilde q_i q'\tilde g}\, C^{n}_{\tilde q_j q\tilde g}\Big] \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m_{\tilde\chi^0_k} m_{\tilde g}\, s\, \delta_{mn}\Big], $$
$$ [NGB]^k_{mn} = \frac{8\,\pi\alpha\alpha_s}{9\, s^2\, x_W (1-x_W)}\, \mathrm{Re}\Big[C^{n*}_{\tilde q_i q'\tilde\chi^0_k}\, C^{m*}_{\tilde q_j q\tilde\chi^0_k}\, C^{n*}_{\tilde q_j q'\tilde g}\, C^{m}_{\tilde q_i q\tilde g}\Big] \Big[\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde q_j}\big)(1-\delta_{mn}) + m_{\tilde\chi^0_k} m_{\tilde g}\, s\, \delta_{mn}\Big]. \tag{36} $$
Gluinos will dominate over neutralino exchanges due to their strong coupling, and the two will only interfere in the
mixed t- and u-channels due to colour conservation. At the LHC, up-type squark pair production should dominate
over mixed up-/down-type squark production and down-type squark pair production, since the proton contains two
valence up-quarks and only one valence down-quark. As before, the charge-conjugate production of antisquark pairs is obtained by making the replacement $h_{a,b} \to -h_{a,b}$. If we neglect electroweak contributions as well as squark flavour and helicity mixing and sum over left- and right-handed squark states, our results agree with those of Ref. [27].

FIG. 5: Tree-level Feynman diagrams for the associated production of squarks and gauginos.
D. Associated Production of Squarks and Gauginos
The associated production of squarks and neutralinos or charginos
$$ q(h_a, p_a)\; g(h_b, p_b) \to \tilde\chi_j(p_1)\; \tilde q_i(p_2) \tag{37} $$
is a semi-weak process that originates from quark-gluon initial states and has both an s-channel quark and a t-channel
squark contribution. Both diagrams involve a quark-squark-gaugino vertex that can in general be flavour violating.
corresponding Feynman diagrams can be seen in Fig. 5. The squark-gaugino cross section
$$ \frac{\mathrm{d}\hat\sigma^{h_a,h_b}}{\mathrm{d}t} = \frac{\pi\,\alpha\,\alpha_s}{n_{\tilde\chi}\, s^2}\Bigg\{ -\frac{u_{\tilde\chi_j}}{s}\Big[(1-h_a)(1-h_b)\,\big|L_{\tilde q_i q\tilde\chi_j}\big|^2 + (1+h_a)(1+h_b)\,\big|R_{\tilde q_i q\tilde\chi_j}\big|^2\Big] - \frac{t_{\tilde\chi_j}\big(t + m^2_{\tilde q_i}\big)}{t^2_{\tilde q_i}}\Big[(1-h_a)\,\big|L_{\tilde q_i q\tilde\chi_j}\big|^2 + (1+h_a)\,\big|R_{\tilde q_i q\tilde\chi_j}\big|^2\Big] $$
$$ +\,\frac{2\,\big(u\,t - m^2_{\tilde q_i} m^2_{\tilde\chi_j}\big)}{s\, t_{\tilde q_i}}\Big[(1-h_a)(1-h_b)\,\big|L_{\tilde q_i q\tilde\chi_j}\big|^2 + (1+h_a)(1+h_b)\,\big|R_{\tilde q_i q\tilde\chi_j}\big|^2\Big] + \frac{t_{\tilde\chi_j}\big(t_{\tilde\chi_j} - u_{\tilde q_i}\big)}{s\, t_{\tilde q_i}}\Big[(1-h_a)\,\big|L_{\tilde q_i q\tilde\chi_j}\big|^2 + (1+h_a)\,\big|R_{\tilde q_i q\tilde\chi_j}\big|^2\Big]\Bigg\}, \tag{38} $$
where nχ̃ = 6xW (1− xW ) for neutralinos and nχ̃ = 12xW for charginos, is sufficiently compact to be written without
the definition of form factors. Note that the t-channel diagram involves the coupling of the gluon to scalars and
does thus not depend on its helicity hb. The cross section of the charge-conjugate process can be obtained by taking
ha → −ha. Third-generation squarks can only be produced in NMFV, preferably through a light (valence) quark in
the s-channel. For non-mixing squarks and gauginos, we agree again with the results of Ref. [27].
E. Gaugino Pair Production
Finally, we consider the purely electroweak production of gaugino pairs
$$ q(h_a, p_a)\; \bar q'(h_b, p_b) \to \tilde\chi_i(p_1)\; \tilde\chi_j(p_2) \tag{39} $$
from quark-antiquark initial states, where flavour violation can occur via the quark-squark-gaugino vertices in the t-
and u-channels (see Fig. 6). However, if it were not for different parton density weights, summation over complete
squark multiplet exchanges would make these channels insensitive to the exchanged squark flavour. Furthermore, there are no final-state squarks that could be experimentally tagged. The cross section can be expressed generically as
FIG. 6: Tree-level Feynman diagrams for the production of gaugino pairs.
$$ \frac{\mathrm{d}\hat\sigma^{h_a,h_b}}{\mathrm{d}t} = \frac{\pi\alpha^2}{3\, s^2}\Bigg\{ (1-h_a)(1+h_b)\Big[\,\big|Q^u_{LL}\big|^2\, u_{\tilde\chi_i} u_{\tilde\chi_j} + \big|Q^t_{LL}\big|^2\, t_{\tilde\chi_i} t_{\tilde\chi_j} + 2\,\mathrm{Re}\big[Q^{u*}_{LL} Q^t_{LL}\big]\, m_{\tilde\chi_i} m_{\tilde\chi_j}\, s\Big] $$
$$ +\,(1+h_a)(1-h_b)\Big[\,\big|Q^u_{RR}\big|^2\, u_{\tilde\chi_i} u_{\tilde\chi_j} + \big|Q^t_{RR}\big|^2\, t_{\tilde\chi_i} t_{\tilde\chi_j} + 2\,\mathrm{Re}\big[Q^{u*}_{RR} Q^t_{RR}\big]\, m_{\tilde\chi_i} m_{\tilde\chi_j}\, s\Big] $$
$$ +\,(1+h_a)(1+h_b)\Big[\,\big|Q^u_{RL}\big|^2\, u_{\tilde\chi_i} u_{\tilde\chi_j} + \big|Q^t_{RL}\big|^2\, t_{\tilde\chi_i} t_{\tilde\chi_j} + \mathrm{Re}\big[Q^{u*}_{RL} Q^t_{RL}\big]\big(u\,t - m^2_{\tilde\chi_i} m^2_{\tilde\chi_j}\big)\Big] $$
$$ +\,(1-h_a)(1-h_b)\Big[\,\big|Q^u_{LR}\big|^2\, u_{\tilde\chi_i} u_{\tilde\chi_j} + \big|Q^t_{LR}\big|^2\, t_{\tilde\chi_i} t_{\tilde\chi_j} + \mathrm{Re}\big[Q^{u*}_{LR} Q^t_{LR}\big]\big(u\,t - m^2_{\tilde\chi_i} m^2_{\tilde\chi_j}\big)\Big]\Bigg\}, \tag{40} $$
i.e. in terms of generalized charges. For $\tilde\chi^-_i\tilde\chi^+_j$-production, these charges are given by
$$ Q^{u\,-+}_{LL} = \frac{e_q\,\delta_{ij}\delta_{qq'}}{s} - \frac{L_{qq'Z}\, O'^L_{ij}}{2\, x_W (1-x_W)\, s_z} + \sum_{k=1}^{6}\frac{L_{\tilde d_k q'\tilde\chi^\pm_i}\, L^*_{\tilde d_k q\tilde\chi^\pm_j}}{2\, x_W\, u_{\tilde d_k}}, \qquad Q^{t\,-+}_{LL} = \frac{e_q\,\delta_{ij}\delta_{qq'}}{s} - \frac{L_{qq'Z}\, O'^L_{ij}}{2\, x_W (1-x_W)\, s_z} + \sum_{k=1}^{6}\frac{L^*_{\tilde u_k q'\tilde\chi^\pm_j}\, L_{\tilde u_k q\tilde\chi^\pm_i}}{2\, x_W\, t_{\tilde u_k}}, $$
$$ Q^{u\,-+}_{RR} = \frac{e_q\,\delta_{ij}\delta_{qq'}}{s} - \frac{R_{qq'Z}\, O'^R_{ij}}{2\, x_W (1-x_W)\, s_z} + \sum_{k=1}^{6}\frac{R_{\tilde d_k q'\tilde\chi^\pm_i}\, R^*_{\tilde d_k q\tilde\chi^\pm_j}}{2\, x_W\, u_{\tilde d_k}}, \qquad Q^{t\,-+}_{RR} = \frac{e_q\,\delta_{ij}\delta_{qq'}}{s} - \frac{R_{qq'Z}\, O'^R_{ij}}{2\, x_W (1-x_W)\, s_z} + \sum_{k=1}^{6}\frac{R^*_{\tilde u_k q'\tilde\chi^\pm_j}\, R_{\tilde u_k q\tilde\chi^\pm_i}}{2\, x_W\, t_{\tilde u_k}}, $$
$$ Q^{u\,-+}_{LR} = \sum_{k=1}^{6}\frac{R_{\tilde d_k q'\tilde\chi^\pm_i}\, L^*_{\tilde d_k q\tilde\chi^\pm_j}}{2\, x_W\, u_{\tilde d_k}}, \qquad Q^{t\,-+}_{LR} = \sum_{k=1}^{6}\frac{R^*_{\tilde u_k q'\tilde\chi^\pm_j}\, L_{\tilde u_k q\tilde\chi^\pm_i}}{2\, x_W\, t_{\tilde u_k}}, $$
$$ Q^{u\,-+}_{RL} = \sum_{k=1}^{6}\frac{L_{\tilde d_k q'\tilde\chi^\pm_i}\, R^*_{\tilde d_k q\tilde\chi^\pm_j}}{2\, x_W\, u_{\tilde d_k}}, \qquad Q^{t\,-+}_{RL} = \sum_{k=1}^{6}\frac{L^*_{\tilde u_k q'\tilde\chi^\pm_j}\, R_{\tilde u_k q\tilde\chi^\pm_i}}{2\, x_W\, t_{\tilde u_k}}. \tag{41} $$
Note that there is no interference between t- and u-channel diagrams due to (electromagnetic) charge conservation.
The cross section for chargino-pair production in $e^+e^-$-collisions can be deduced by setting $e_q \to e_l = -1$, $L_{qq'Z} \to L_{eeZ} = (2\,T^3_l - 2\, e_l\, x_W)$ and $R_{qq'Z} \to R_{eeZ} = -2\, e_l\, x_W$. Neglecting all Yukawa couplings, we can then reproduce the calculations of Ref. [28].
The charges of the chargino-neutralino associated production are given by
$$ Q^{u\,+0}_{LL} = \frac{1}{2\,(1-x_W)\, x_W}\Bigg[\frac{O^{L*}_{ji}\, L_{qq'W}}{\sqrt2\, s_w} + \sum_{k=1}^{6}\frac{L_{\tilde u_k q'\tilde\chi^0_j}\, L^*_{\tilde u_k q\tilde\chi^\pm_i}}{u_{\tilde u_k}}\Bigg], \qquad Q^{t\,+0}_{LL} = \frac{1}{2\,(1-x_W)\, x_W}\Bigg[\frac{O^{R*}_{ji}\, L_{qq'W}}{\sqrt2\, s_w} + \sum_{k=1}^{6}\frac{L^*_{\tilde d_k q\tilde\chi^\pm_i}\, L_{\tilde d_k q'\tilde\chi^0_j}}{t_{\tilde d_k}}\Bigg], $$
$$ Q^{u\,+0}_{RR} = \frac{1}{2\,(1-x_W)\, x_W}\sum_{k=1}^{6}\frac{R_{\tilde u_k q'\tilde\chi^0_j}\, R^*_{\tilde u_k q\tilde\chi^\pm_i}}{u_{\tilde u_k}}, \qquad Q^{t\,+0}_{RR} = \frac{1}{2\,(1-x_W)\, x_W}\sum_{k=1}^{6}\frac{R^*_{\tilde d_k q\tilde\chi^\pm_i}\, R_{\tilde d_k q'\tilde\chi^0_j}}{t_{\tilde d_k}}, $$
$$ Q^{u\,+0}_{LR} = \frac{1}{2\,(1-x_W)\, x_W}\sum_{k=1}^{6}\frac{R_{\tilde u_k q'\tilde\chi^0_j}\, L^*_{\tilde u_k q\tilde\chi^\pm_i}}{u_{\tilde u_k}}, \qquad Q^{t\,+0}_{LR} = \frac{1}{2\,(1-x_W)\, x_W}\sum_{k=1}^{6}\frac{L^*_{\tilde d_k q\tilde\chi^\pm_i}\, R_{\tilde d_k q'\tilde\chi^0_j}}{t_{\tilde d_k}}, $$
$$ Q^{u\,+0}_{RL} = \frac{1}{2\,(1-x_W)\, x_W}\sum_{k=1}^{6}\frac{L_{\tilde u_k q'\tilde\chi^0_j}\, R^*_{\tilde u_k q\tilde\chi^\pm_i}}{u_{\tilde u_k}}, \qquad Q^{t\,+0}_{RL} = \frac{1}{2\,(1-x_W)\, x_W}\sum_{k=1}^{6}\frac{R^*_{\tilde d_k q\tilde\chi^\pm_i}\, L_{\tilde d_k q'\tilde\chi^0_j}}{t_{\tilde d_k}}. \tag{42} $$
The charge-conjugate process is again obtained by making the replacement ha,b → −ha,b in Eq. (40). In the case of
non-mixing squarks with neglected Yukawa couplings, we agree with the results of Ref. [22], provided we correct a
sign in their Eq. (2) as described in Ref. [29].
Finally, the charges for the neutralino pair production are given by
$$ Q^{u\,00}_{LL} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\Bigg[\frac{L_{qq'Z}\, O''^L_{ij}}{s_z} + \sum_{k=1}^{6}\frac{L_{\tilde Q_k q'\tilde\chi^0_i}\, L^*_{\tilde Q_k q\tilde\chi^0_j}}{u_{\tilde Q_k}}\Bigg], \qquad Q^{t\,00}_{LL} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\Bigg[\frac{L_{qq'Z}\, O''^{L*}_{ij}}{s_z} + \sum_{k=1}^{6}\frac{L^*_{\tilde Q_k q\tilde\chi^0_i}\, L_{\tilde Q_k q'\tilde\chi^0_j}}{t_{\tilde Q_k}}\Bigg], $$
$$ Q^{u\,00}_{RR} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\Bigg[\frac{R_{qq'Z}\, O''^R_{ij}}{s_z} + \sum_{k=1}^{6}\frac{R_{\tilde Q_k q'\tilde\chi^0_i}\, R^*_{\tilde Q_k q\tilde\chi^0_j}}{u_{\tilde Q_k}}\Bigg], \qquad Q^{t\,00}_{RR} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\Bigg[\frac{R_{qq'Z}\, O''^{R*}_{ij}}{s_z} + \sum_{k=1}^{6}\frac{R^*_{\tilde Q_k q\tilde\chi^0_i}\, R_{\tilde Q_k q'\tilde\chi^0_j}}{t_{\tilde Q_k}}\Bigg], $$
$$ Q^{u\,00}_{LR} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\sum_{k=1}^{6}\frac{R_{\tilde Q_k q'\tilde\chi^0_i}\, L^*_{\tilde Q_k q\tilde\chi^0_j}}{u_{\tilde Q_k}}, \qquad Q^{t\,00}_{LR} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\sum_{k=1}^{6}\frac{L^*_{\tilde Q_k q\tilde\chi^0_i}\, R_{\tilde Q_k q'\tilde\chi^0_j}}{t_{\tilde Q_k}}, $$
$$ Q^{u\,00}_{RL} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\sum_{k=1}^{6}\frac{L_{\tilde Q_k q'\tilde\chi^0_i}\, R^*_{\tilde Q_k q\tilde\chi^0_j}}{u_{\tilde Q_k}}, \qquad Q^{t\,00}_{RL} = \frac{1}{2\, x_W (1-x_W)\, \sqrt{1+\delta_{ij}}}\sum_{k=1}^{6}\frac{R^*_{\tilde Q_k q\tilde\chi^0_i}\, L_{\tilde Q_k q'\tilde\chi^0_j}}{t_{\tilde Q_k}}, \tag{43} $$
which agrees with the results of Ref. [30] in the case of non-mixing squarks.
FIG. 7: Tree-level Feynman diagrams for squark decays into gauginos and quarks (top) and into electroweak gauge bosons and
lighter squarks (bottom).
F. Squark Decays
We turn now from SUSY particle production to decay processes and show in Fig. 7 the possible decays of squarks
into gauginos and quarks (top) as well as into electroweak gauge bosons and lighter squarks (bottom). Both processes
can in general induce flavour violation. The decay widths of the former are given by
$$ \Gamma_{\tilde q_i\to\tilde\chi^0_j q_k} = \frac{\alpha}{2\, m^3_{\tilde q_i}\, x_W (1-x_W)}\Big[\big(m^2_{\tilde q_i} - m^2_{\tilde\chi^0_j} - m^2_{q_k}\big)\Big(\big|L_{\tilde q_i q_k\tilde\chi^0_j}\big|^2 + \big|R_{\tilde q_i q_k\tilde\chi^0_j}\big|^2\Big) - 4\, m_{\tilde\chi^0_j} m_{q_k}\,\mathrm{Re}\Big[L_{\tilde q_i q_k\tilde\chi^0_j} R^*_{\tilde q_i q_k\tilde\chi^0_j}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde q_i}, m^2_{\tilde\chi^0_j}, m^2_{q_k}\big), \tag{44} $$
$$ \Gamma_{\tilde q_i\to\tilde\chi^\pm_j q'_k} = \frac{\alpha}{4\, m^3_{\tilde q_i}\, x_W}\Big[\big(m^2_{\tilde q_i} - m^2_{\tilde\chi^\pm_j} - m^2_{q'_k}\big)\Big(\big|L_{\tilde q_i q'_k\tilde\chi^\pm_j}\big|^2 + \big|R_{\tilde q_i q'_k\tilde\chi^\pm_j}\big|^2\Big) - 4\, m_{\tilde\chi^\pm_j} m_{q'_k}\,\mathrm{Re}\Big[L_{\tilde q_i q'_k\tilde\chi^\pm_j} R^*_{\tilde q_i q'_k\tilde\chi^\pm_j}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde q_i}, m^2_{\tilde\chi^\pm_j}, m^2_{q'_k}\big), \tag{45} $$
$$ \Gamma_{\tilde q_i\to\tilde g q_k} = \frac{2\,\alpha_s}{3\, m^3_{\tilde q_i}}\Big[\big(m^2_{\tilde q_i} - m^2_{\tilde g} - m^2_{q_k}\big)\Big(\big|L_{\tilde q_i q_k\tilde g}\big|^2 + \big|R_{\tilde q_i q_k\tilde g}\big|^2\Big) - 4\, m_{\tilde g} m_{q_k}\,\mathrm{Re}\Big[L_{\tilde q_i q_k\tilde g} R^*_{\tilde q_i q_k\tilde g}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde q_i}, m^2_{\tilde g}, m^2_{q_k}\big), \tag{46} $$
while those of the latter are given by
$$ \Gamma_{\tilde q_i\to Z\tilde q_k} = \frac{\alpha}{16\, m^3_{\tilde q_i}\, m^2_Z\, x_W (1-x_W)}\,\big|L_{\tilde q_i\tilde q_k Z} + R_{\tilde q_i\tilde q_k Z}\big|^2\,\lambda^{3/2}\big(m^2_{\tilde q_i}, m^2_Z, m^2_{\tilde q_k}\big), \tag{47} $$
$$ \Gamma_{\tilde q_i\to W^\pm\tilde q'_k} = \frac{\alpha}{16\, m^3_{\tilde q_i}\, m^2_W\, x_W (1-x_W)}\,\big|L_{\tilde q_i\tilde q'_k W}\big|^2\,\lambda^{3/2}\big(m^2_{\tilde q_i}, m^2_W, m^2_{\tilde q'_k}\big). \tag{48} $$
The usual Källen function is
$$ \lambda(x, y, z) = x^2 + y^2 + z^2 - 2\,(x\,y + y\,z + z\,x). \tag{49} $$
In cMFV, our results agree with those of Ref. [31].
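The Källén function (49) and the phase-space structure shared by the two-body widths above can be sketched as follows; the generic factor $1/(16\pi M^3)$ and the treatment of the squared matrix element are a standard two-body convention, not a formula taken from the text:

```python
import math

def kallen(x, y, z):
    """Kallen function of Eq. (49)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def width_2body(M, m1, m2, msq):
    """Generic two-body width skeleton: Gamma = msq * lambda^{1/2}/(16 pi M^3),
    where msq stands for the spin-averaged squared matrix element
    (model-dependent, e.g. the bracket of Eq. (44) times couplings)."""
    lam = kallen(M * M, m1 * m1, m2 * m2)
    if lam <= 0.0:          # below threshold: the decay channel is closed
        return 0.0
    return msq * math.sqrt(lam) / (16.0 * math.pi * M**3)

print(kallen(1.0, 0.0, 0.0))                 # 1.0
print(width_2body(100.0, 60.0, 50.0, 1.0))   # 0.0 (below threshold: 60+50>100)
```

Note that $\lambda(s, m^2, m^2) = s(s - 4m^2)$ vanishes exactly at threshold, which is why $\lambda^{1/2}$ (or $\lambda^{3/2}$ for the gauge-boson channels) controls the opening of each decay.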
G. Gluino Decays
Heavy gluinos can decay strongly into squarks and quarks as shown in Fig. 8. The corresponding decay width
FIG. 8: Tree-level Feynman diagram for gluino decays into squarks and quarks.
FIG. 9: Tree-level Feynman diagrams for gaugino decays into squarks and quarks (left) and into lighter gauginos and electroweak
gauge bosons (centre and right).
$$ \Gamma_{\tilde g\to\tilde q^*_j q_k} = \frac{\alpha_s}{8\, m^3_{\tilde g}}\Big[\big(m^2_{\tilde g} - m^2_{\tilde q_j} + m^2_{q_k}\big)\Big(\big|L_{\tilde q_j q_k\tilde g}\big|^2 + \big|R_{\tilde q_j q_k\tilde g}\big|^2\Big) + 4\, m_{\tilde g} m_{q_k}\,\mathrm{Re}\Big[L_{\tilde q_j q_k\tilde g} R^*_{\tilde q_j q_k\tilde g}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde g}, m^2_{\tilde q_j}, m^2_{q_k}\big) \tag{50} $$
can in general also induce flavour violation. In cMFV, our result agrees again with the one of Ref. [31].
H. Gaugino Decays
Heavier gauginos can decay into squarks and quarks as shown in Fig. 9 (left) or into lighter gauginos and electroweak
gauge bosons (Fig. 9 centre and right). The analytical decay widths are
$$ \Gamma_{\tilde\chi^\pm_i\to\tilde q_j\bar q'_k} = \frac{3\,\alpha}{4\, m^3_{\tilde\chi^\pm_i}\, x_W}\Big[\big(m^2_{\tilde\chi^\pm_i} - m^2_{\tilde q_j} + m^2_{q'_k}\big)\Big(\big|L_{\tilde q_j q'_k\tilde\chi^\pm_i}\big|^2 + \big|R_{\tilde q_j q'_k\tilde\chi^\pm_i}\big|^2\Big) + 4\, m_{\tilde\chi^\pm_i} m_{q'_k}\,\mathrm{Re}\Big[L_{\tilde q_j q'_k\tilde\chi^\pm_i} R^*_{\tilde q_j q'_k\tilde\chi^\pm_i}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde\chi^\pm_i}, m^2_{\tilde q_j}, m^2_{q'_k}\big), \tag{51} $$
$$ \Gamma_{\tilde\chi^\pm_i\to\tilde\chi^0_j W^\pm} = \frac{\alpha}{16\, m^3_{\tilde\chi^\pm_i}\, m^2_W\, x_W}\Big[\big(m^4_{\tilde\chi^\pm_i} + m^4_{\tilde\chi^0_j} - 2\, m^4_W + m^2_{\tilde\chi^\pm_i} m^2_W + m^2_{\tilde\chi^0_j} m^2_W - 2\, m^2_{\tilde\chi^\pm_i} m^2_{\tilde\chi^0_j}\big)\Big(\big|O^L_{ij}\big|^2 + \big|O^R_{ij}\big|^2\Big) - 12\, m_{\tilde\chi^\pm_i} m_{\tilde\chi^0_j} m^2_W\,\mathrm{Re}\Big[O^L_{ij} O^{R*}_{ij}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde\chi^\pm_i}, m^2_{\tilde\chi^0_j}, m^2_W\big), \tag{52} $$
$$ \Gamma_{\tilde\chi^\pm_i\to\tilde\chi^\pm_j Z} = \frac{\alpha}{16\, m^3_{\tilde\chi^\pm_i}\, m^2_Z\, x_W (1-x_W)}\Big[\big(m^4_{\tilde\chi^\pm_i} + m^4_{\tilde\chi^\pm_j} - 2\, m^4_Z + m^2_{\tilde\chi^\pm_i} m^2_Z + m^2_{\tilde\chi^\pm_j} m^2_Z - 2\, m^2_{\tilde\chi^\pm_i} m^2_{\tilde\chi^\pm_j}\big)\Big(\big|O'^L_{ij}\big|^2 + \big|O'^R_{ij}\big|^2\Big) - 12\, m_{\tilde\chi^\pm_i} m_{\tilde\chi^\pm_j} m^2_Z\,\mathrm{Re}\Big[O'^L_{ij} O'^{R*}_{ij}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde\chi^\pm_i}, m^2_{\tilde\chi^\pm_j}, m^2_Z\big) \tag{53} $$
for charginos and
$$ \Gamma_{\tilde\chi^0_i\to\tilde q_j\bar q_k} = \frac{3\,\alpha}{4\, m^3_{\tilde\chi^0_i}\, x_W (1-x_W)}\Big[\big(m^2_{\tilde\chi^0_i} - m^2_{\tilde q_j} + m^2_{q_k}\big)\Big(\big|L_{\tilde q_j q_k\tilde\chi^0_i}\big|^2 + \big|R_{\tilde q_j q_k\tilde\chi^0_i}\big|^2\Big) + 4\, m_{\tilde\chi^0_i} m_{q_k}\,\mathrm{Re}\Big[L_{\tilde q_j q_k\tilde\chi^0_i} R^*_{\tilde q_j q_k\tilde\chi^0_i}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde\chi^0_i}, m^2_{\tilde q_j}, m^2_{q_k}\big), \tag{54} $$
$$ \Gamma_{\tilde\chi^0_i\to\tilde\chi^\pm_j W^\mp} = \frac{\alpha}{16\, m^3_{\tilde\chi^0_i}\, m^2_W\, x_W}\Big[\big(m^4_{\tilde\chi^0_i} + m^4_{\tilde\chi^\pm_j} - 2\, m^4_W + m^2_{\tilde\chi^0_i} m^2_W + m^2_{\tilde\chi^\pm_j} m^2_W - 2\, m^2_{\tilde\chi^0_i} m^2_{\tilde\chi^\pm_j}\big)\Big(\big|O^L_{ij}\big|^2 + \big|O^R_{ij}\big|^2\Big) - 12\, m_{\tilde\chi^0_i} m_{\tilde\chi^\pm_j} m^2_W\,\mathrm{Re}\Big[O^L_{ij} O^{R*}_{ij}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde\chi^0_i}, m^2_{\tilde\chi^\pm_j}, m^2_W\big), \tag{55} $$
$$ \Gamma_{\tilde\chi^0_i\to\tilde\chi^0_j Z} = \frac{\alpha}{16\, m^3_{\tilde\chi^0_i}\, m^2_Z\, x_W (1-x_W)}\Big[\big(m^4_{\tilde\chi^0_i} + m^4_{\tilde\chi^0_j} - 2\, m^4_Z + m^2_{\tilde\chi^0_i} m^2_Z + m^2_{\tilde\chi^0_j} m^2_Z - 2\, m^2_{\tilde\chi^0_i} m^2_{\tilde\chi^0_j}\big)\Big(\big|O''^L_{ij}\big|^2 + \big|O''^R_{ij}\big|^2\Big) - 12\, m_{\tilde\chi^0_i} m_{\tilde\chi^0_j} m^2_Z\,\mathrm{Re}\Big[O''^L_{ij} O''^{R*}_{ij}\Big]\Big]\,\lambda^{1/2}\big(m^2_{\tilde\chi^0_i}, m^2_{\tilde\chi^0_j}, m^2_Z\big) \tag{56} $$
for neutralinos, respectively. Chargino decays into a slepton and a neutrino (lepton and sneutrino) can be deduced
from the previous equations by taking the proper limits, i.e. by removing colour factors and up-type masses in the
coupling definitions. Our results agree then with those of Ref. [32] in the limit of non-mixing sneutrinos. Note that
the same simplifications also permit a verification of our results for squark decays into a gaugino and a quark in Eqs.
(44) and (45) when compared to their leptonic counterparts in Ref. [32].
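Partial widths such as Eqs. (44)-(56) combine into branching ratios in the obvious way, $\mathrm{BR}_i = \Gamma_i/\sum_j \Gamma_j$; the channel names and numerical values below are placeholders:

```python
# Combining partial widths (e.g. from Eqs. (44)-(56)) into branching ratios;
# the channel labels and values are placeholders, not computed predictions.
partial = {"chi2 -> chi1 Z": 0.12, "chi2 -> sq q": 0.30, "chi2 -> chi1 W": 0.08}
total = sum(partial.values())                 # total width
br = {ch: g / total for ch, g in partial.items()}
print(round(br["chi2 -> sq q"], 3))           # 0.6
```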
IV. EXPERIMENTAL CONSTRAINTS, SCANS AND BENCHMARKS IN NMFV SUSY
In the absence of experimental evidence for supersymmetry, a large variety of data can be used to constrain the
MSSM parameter space. For example, sparticle mass limits can be obtained from searches of charginos (LEP2 bounds on $m_{\tilde\chi^\pm}$ for heavier sneutrinos), neutralinos ($m_{\tilde\chi^0} \geq 59$ GeV in minimal supergravity (mSUGRA) from the combination of LEP2 results), gluinos ($m_{\tilde g} \geq 195$ GeV from CDF), stops ($m_{\tilde t_1} \geq 95\ldots 96$ GeV for neutral- or charged-current decays from the combination of LEP2 results), and other squarks ($m_{\tilde q} \geq 300$ GeV for gluinos of equal mass from CDF) at colliders [33]. Note that all of these limits have been obtained assuming minimal flavour violation.
For non-minimal flavour violation, rather strong constraints can be obtained from low-energy, electroweak precision,
and cosmological observables. These are discussed in the next subsection, followed by several scans for experimentally
allowed/favoured regions of the constrained MSSM parameter space and the definition of four NMFV benchmark
points/slopes. Finally, we exhibit the corresponding chirality and flavour decomposition of the various squark mass
eigenstates.
A. Low-Energy, Electroweak Precision, and Cosmological Constraints
In a rather complete analysis of FCNC constraints more than ten years ago [18], upper limits from the neutral
kaon sector (on $\Delta m_K$, $\varepsilon$, $\varepsilon'/\varepsilon$), on B- ($\Delta m_B$) and D-meson oscillations ($\Delta m_D$), various rare decays (BR($b \to s\gamma$),
BR(µ → eγ), BR(τ → eγ), and BR(τ → µγ)), and electric dipole moments (dn and de) were used to impose
constraints on non-minimal flavour mixing in the squark and slepton sectors. The limit obtained for the absolute
value in the left-handed, down-type squark sector was rather weak ($|\lambda^{sb}_{LL}| < 4.4\ldots 26$ for varying gluino-to-squark
mass ratio), while the limits for the mixed/right-handed, imaginary or sleptonic parts were already several orders of
magnitude smaller. In the meantime, many of the experimental bounds have been improved or absolute values for the
observables have been determined, so that an updated analysis could be performed [34]. The results for the down-type
squark sector are cited in Tab. I. As can be seen and as has already been hinted at in the introduction, only mixing
between second- and third-generation squarks can be substantial, and this only in the left-left or right-right chiral
sectors, the latter being disfavoured by its scaling with the soft SUSY-breaking mass. Independent analyses focusing
TABLE I: The 95% probability bounds on $|\lambda_{ij}|$ obtained in Ref. [34].

  ij       LL          LR          RL          RR
  12    1.4×10^-2   9.0×10^-5   9.0×10^-5   9.0×10^-3
  13    9.0×10^-2   1.7×10^-2   1.7×10^-2   7.0×10^-2
  23    1.6×10^-1   4.5×10^-3   6.0×10^-3   2.2×10^-1
on this particular sector, i.e. on BR(b → sγ), BR(b → sµµ), and ∆mBs , have been performed recently by two other
groups [35, 36] with very similar results.
In our own analysis, we take implicitly into account all of the previously mentioned constraints by restricting
ourselves to the case of only one real NMFV parameter, $\lambda \equiv \lambda^{sb}_{LL} = \lambda^{ct}_{LL}$. Allowed regions for this parameter are then
obtained by imposing explicitly a number of low-energy, electroweak precision, and cosmological constraints. We start
by imposing the theoretically robust inclusive branching ratio
BR(b→ sγ) = (3.55± 0.26)× 10−4, (57)
obtained from the combined measurements of BaBar, Belle, and CLEO [37], at the 2σ-level on the two-loop QCD/one-
loop SUSY calculation [36, 38], which affects directly the allowed squark mixing between the second and third
generation.
A second important consequence of NMFV in the MSSM is the generation of large splittings between squark-mass
eigenvalues. The splitting within isospin doublets influences the Z- and W -boson self-energies at zero-momentum
ΣZ,W (0) in the electroweak ρ-parameter
$$ \Delta\rho = \Sigma_Z(0)/M_Z^2 - \Sigma_W(0)/M_W^2 \tag{58} $$
and consequently the W-boson mass $M_W$ and the squared sine of the weak mixing angle $\sin^2\theta_W$. The latest combined
fits of the Z-boson mass, width, pole asymmetry, W -boson and top-quark mass constrain new physics contributions
to T = −0.13± 0.11 [33] or
∆ρ = −αT = 0.00102± 0.00086, (59)
where we have used α(MZ) = 1/127.918. This value is then imposed at the 2σ-level on the one-loop NMFV and
two-loop cMFV SUSY calculation [39].
A third observable sensitive to SUSY loop-contributions is the anomalous magnetic moment aµ = (gµ − 2)/2 of the
muon, for which recent BNL data and the SM prediction disagree by [33]
∆aµ = (22± 10)× 10−10. (60)
In our calculation, we take into account the SM and MSSM contributions up to two loops [40, 41] and require them
to agree with the region above within two standard deviations.
For cosmological reasons, i.e. in order to have a suitable candidate for non-baryonic cold dark matter [42], we require
the lightest SUSY particle (LSP) to be stable, electrically neutral, and a colour singlet. The dark matter relic density
is then calculated using a modified version of DarkSUSY 4.1 [43], that takes into account the six-dimensional squark
helicity and flavour mixing, and constrained to the region
$$ 0.094 < \Omega_{CDM}h^2 < 0.136 \tag{61} $$
at 95% (2σ) confidence level. This limit has recently been obtained from the three-year data of the WMAP satellite,
combined with the SDSS and SNLS survey and Baryon Acoustic Oscillation data and interpreted within an eleven-
parameter inflationary model [44], which is more general than the usual six-parameter “vanilla” concordance model
of cosmology. Note that this range is well compatible with the older, independently obtained range of $0.094 < \Omega_{CDM}h^2 < 0.129$ [45].
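The four windows of Eqs. (57), (59), (60) and (61), imposed at the 2σ (respectively 95% confidence) level, amount to a simple acceptance test per parameter-space point; the sample observable values passed in below are invented for illustration:

```python
# 2-sigma windows from Eqs. (57), (59), (60) and the 95% interval (61).
def within(value, central, sigma, n=2.0):
    return abs(value - central) <= n * sigma

def point_allowed(br_bsg, delta_rho, delta_amu, omega_h2):
    """Return True if a parameter-space point passes all four constraints."""
    return (within(br_bsg,    3.55e-4, 0.26e-4) and   # Eq. (57)
            within(delta_rho, 0.00102, 0.00086) and   # Eq. (59)
            within(delta_amu, 22e-10,  10e-10)  and   # Eq. (60)
            0.094 < omega_h2 < 0.136)                 # Eq. (61)

print(point_allowed(3.3e-4, 0.0005, 15e-10, 0.11))   # True
print(point_allowed(4.2e-4, 0.0005, 15e-10, 0.11))   # False (b -> s gamma)
```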
B. Scans of the Constrained NMFV MSSM Parameter Space
The above experimental limits are now imposed on the constrained MSSM (cMSSM), or minimal supergravity
(mSUGRA), model with five free parameters m0, m1/2, tanβ, A0, and sgn(µ) at the grand unification scale. Since
FIG. 10: The (m0,m1/2)-planes for tan β = 10, A0 = 0 GeV, µ < 0, and λ = 0, 0.03, 0.05 and 0.1. We show WMAP (black)
favoured as well as b → sγ (blue) and charged LSP (beige) excluded regions of mSUGRA parameter space in minimal (λ = 0)
and non-minimal (λ > 0) flavour violation.
our scans of the cMSSM parameter space in the common scalar mass m0 and the common fermion mass m1/2 depend
very little on the trilinear coupling A0, we set it to zero in the following. Furthermore, we fix a small (10), intermediate
(30), and large (50) value for the ratio of the Higgs vacuum expectation values tanβ. The impact of the sign of the
off-diagonal Higgs mass parameter µ is investigated for tanβ = 10 only, before we set it to µ > 0 for tanβ = 30 and
50 (see below).
With these boundary conditions at the grand unification scale, we solve the renormalization group equations numer-
ically to two-loop order using the computer program SPheno 2.2.3 [46] and compute the soft SUSY-breaking masses
at the electroweak scale with the complete one-loop formulas, supplemented by two-loop contributions in the case of
the neutral Higgs bosons and the µ-parameter. At this point we generalize the squark mass matrices as described
in Sec. II in order to account for flavour mixing in the left-chiral sector of the second- and third-generation squarks,
diagonalize these mass matrices, and compute the low-energy, electroweak precision, and cosmological observables
with the computer programs FeynHiggs 2.5.1 [47] and DarkSUSY 4.1 [43].
For the masses and widths of the electroweak gauge bosons and the mass of the top quark, we use the current
values of mZ = 91.1876 GeV, mW = 80.403 GeV, mt = 174.2 GeV, ΓZ = 2.4952 GeV, and ΓW = 2.141 GeV. The
CKM-matrix elements are computed using the parameterization
c12c13 s12c13 s13e
−s12c23 − c12s23s13eiδ c12c23 − s12s23s13eiδ s23c13
s12s23 − c12c23s13eiδ −c12s23 − s12c23s13eiδ c23c13
, (62)
where sij = sin θij and cij = cos θij relate to the mixing of two specific generations i and j and δ is the SM CP -violating
complex phase. The numerical values are given by
s12 = 0.2243, s23 = 0.0413, s13 = 0.0037, and δ = 1.05. (63)
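Equations (62)-(63) fix the CKM matrix numerically; the following snippet builds it and verifies its unitarity:

```python
import numpy as np

s12, s23, s13, delta = 0.2243, 0.0413, 0.0037, 1.05   # Eq. (63)
c12, c23, c13 = (np.sqrt(1 - x**2) for x in (s12, s23, s13))
e = np.exp(1j * delta)

V = np.array([                                         # Eq. (62)
    [ c12*c13,                  s12*c13,                  s13/e   ],
    [-s12*c23 - c12*s23*s13*e,  c12*c23 - s12*s23*s13*e,  s23*c13 ],
    [ s12*s23 - c12*c23*s13*e, -c12*s23 - s12*c23*s13*e,  c23*c13 ],
])

print(np.allclose(V @ V.conj().T, np.eye(3)))   # True: V is unitary
```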
The squared sine of the electroweak mixing angle $\sin^2\theta_W = 1 - m_W^2/m_Z^2$ and the electromagnetic fine structure constant $\alpha = \sqrt{2}\, G_F\, m_W^2 \sin^2\theta_W/\pi$ are calculated in the improved Born approximation using the world average value of $G_F = 1.16637\cdot 10^{-5}$ GeV$^{-2}$ for Fermi's coupling constant [33].
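The improved-Born values of $\sin^2\theta_W$ and $\alpha$ quoted above follow directly from $m_W$, $m_Z$ and $G_F$:

```python
import math

m_Z, m_W = 91.1876, 80.403          # GeV, as quoted in the text
G_F = 1.16637e-5                    # GeV^-2

sin2_thW = 1.0 - m_W**2 / m_Z**2    # on-shell definition used in the text
alpha = math.sqrt(2.0) * G_F * m_W**2 * sin2_thW / math.pi

print(round(sin2_thW, 4), round(1.0 / alpha, 1))
```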
Typical scans of the cMSSM parameter space in m0 and m1/2 with a relatively small value of tanβ = 10 and A0 = 0
are shown in Figs. 10 and 11 for µ < 0 and µ > 0, respectively. All experimental limits described in Sec. IVA are
imposed at the 2σ-level. The b→ sγ excluded region depends strongly on flavour mixing, while the regions favoured
by gµ − 2 and the dark matter relic density are quite insensitive to variations of the λ-parameter. ∆ρ constrains the
parameter space only for heavy scalar masses m0 > 2000 GeV and heavy gaugino masses m1/2 > 1500 GeV, so that
the corresponding excluded regions are not shown here.
The dominant SUSY effects in the calculation of the anomalous magnetic moment of the muon come from induced
quantum loops of a gaugino and a slepton. Squarks contribute only at the two-loop level. This reduces the dependence
on flavour violation in the squark sector considerably. Furthermore, the region µ < 0 is disfavoured in all SUSY models,
since the one-loop SUSY contributions are approximately given by [48]
$$ a_\mu^{\rm SUSY,\,1\text{-}loop} \simeq 13\times 10^{-10}\left(\frac{100\text{ GeV}}{M_{\rm SUSY}}\right)^2 \tan\beta\; {\rm sgn}(\mu), \tag{64} $$
if all SUSY particles (the relevant ones are the smuon, sneutrino, chargino, and neutralino) have a common mass
MSUSY. Negative values of µ would then increase, not decrease, the disagreement between the experimental measure-
ments and the theoretical SM value of aµ. Furthermore, the measured b → sγ branching ratio excludes virtually all
FIG. 11: The (m0,m1/2)-planes for tanβ = 10, A0 = 0 GeV, µ > 0, and λ = 0, 0.03, 0.05 and 0.1. We show aµ (grey) and
WMAP (black) favoured as well as b → sγ (blue) and charged LSP (beige) excluded regions of mSUGRA parameter space in
minimal (λ = 0) and non-minimal (λ > 0) flavour violation.
FIG. 12: The (m0,m1/2) planes for tan β = 30, A0 = 0 GeV, µ > 0, and λ = 0, 0.03, 0.05 and 0.1. We show aµ (grey) and
WMAP (black) favoured as well as b → sγ (blue) and charged LSP (beige) excluded regions of mSUGRA parameter space in
minimal (λ = 0) and non-minimal (λ > 0) flavour violation.
of the region favoured by the dark matter relic density, except for very high scalar SUSY masses. We therefore do
not consider negative values of µ in the rest of this work. As stated above, we have also checked that the shape of the
different regions depends extremely weakly on the trilinear coupling A0.
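The one-loop estimate of Eq. (64) is simple enough to evaluate directly. The sketch below (illustrative only, using the common SUSY mass M_SUSY assumed in the text) shows numerically why µ < 0 moves the prediction away from the experimentally favoured (22 ± 10) × 10⁻¹⁰ while µ > 0 moves it towards it.

```python
import numpy as np

def a_mu_susy(m_susy_gev, tan_beta, sgn_mu):
    """One-loop SUSY contribution to a_mu as approximated in Eq. (64)."""
    return 13e-10 * (100.0 / m_susy_gev)**2 * tan_beta * np.sign(sgn_mu)

a_exp = 22e-10   # central value of the experimentally favoured range
for m in (200.0, 400.0, 800.0):
    plus = a_mu_susy(m, 10, +1)
    minus = a_mu_susy(m, 10, -1)
    # mu > 0 brings the prediction closer to the measurement than mu < 0 does
    assert abs(a_exp - plus) < abs(a_exp - minus)
```

For tanβ = 10 and M_SUSY between 200 and 800 GeV, the contribution with µ > 0 ranges from roughly 2 × 10⁻¹⁰ to 33 × 10⁻¹⁰, i.e. of the right sign and size, whereas µ < 0 always worsens the discrepancy.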
In Figs. 12 and 13, we show the (m0,m1/2)-planes for larger tanβ, namely tanβ = 30 and tanβ = 50, and for
µ > 0. The regions which are favoured both by the anomalous magnetic moment of the muon and by the cold dark
matter relic density, and which are not excluded by the b→ sγ measurements, are stringently constrained and do not
allow for large flavour violation.
FIG. 13: The (m0,m1/2) planes for tan β = 50, A0 = 0 GeV, µ > 0, and λ = 0, 0.03, 0.05 and 0.1. We show aµ (grey) and
WMAP (black) favoured as well as b → sγ (blue) and charged LSP (beige) excluded regions of mSUGRA parameter space in
minimal (λ = 0) and non-minimal (λ > 0) flavour violation.
TABLE II: Benchmark points allowing for flavour violation among the second and third generations for A0 = 0, µ > 0, and
three different values of tanβ. For comparison we also show the nearest pre-WMAP SPS [49, 50] and post-WMAP BDEGOP
[51] benchmark points and indicate the relevant cosmological regions.
m0 [GeV] m1/2 [GeV] A0 [GeV] tanβ sgn(µ) SPS BDEGOP Cosmol. Region
A 700 200 0 10 1 2 E’ Focus Point
B 100 400 0 10 1 3 C’ Co-Annihilation
C 230 590 0 30 1 1b I’ Co-Annihilation
D 600 700 0 50 1 4 L’ Bulk/Higgs-funnel
C. (c)MFV and NMFV Benchmark Points and Slopes
Restricting ourselves to non-negative values of µ, we now inspect the (m0,m1/2)-planes in Figs. 11-13 for cMSSM
scenarios that
• are allowed/favoured by low-energy, electroweak precision, and cosmological constraints,
• permit non-minimal flavour violation among left-chiral squarks of the second and third generation up to λ ≤ 0.1,
• and are at the same time collider-friendly, i.e. have relatively low values of m0 and m1/2.
Our choices are presented in Tab. II, together with the nearest pre-WMAP Snowmass Points (and Slopes, SPS) [49, 50]
and the nearest post-WMAP scenarios proposed in Ref. [51]. We also indicate the relevant cosmological region for
each point and attach a model line (slope) to it, given by
A : 180 GeV ≤ m1/2 ≤ 250 GeV , m0 = − 1936 GeV + 12.9m1/2,
B : 400 GeV ≤ m1/2 ≤ 900 GeV , m0 = 4.93 GeV + 0.229m1/2,
C : 500 GeV ≤ m1/2 ≤ 700 GeV , m0 = 54 GeV + 0.297m1/2,
D : 575 GeV ≤ m1/2 ≤ 725 GeV , m0 = 600 GeV. (65)
These slopes trace the allowed/favoured regions from lower to higher masses and can, of course, also be used in cMFV
scenarios with λ = 0. We have verified that in the case of MFV [7] the hierarchy ∆_LL ≫ ∆_{LR,RL} ≫ ∆_RR and the
equality λ^{sb}_{LL} = λ^{ct}_{LL} are still reasonably well fulfilled numerically, with the values of λ^{sb}_{LL} ≈ λ^{ct}_{LL} ranging from zero
to 5 × 10⁻³ ... 1 × 10⁻² for our four typical benchmark points.
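The model lines of Eq. (65) are easy to encode. The small sketch below (helper names are our own, purely illustrative) maps m1/2 to m0 along each slope and rejects values outside the quoted ranges.

```python
# Benchmark slopes of Eq. (65): (m1/2 range in GeV, m0(m1/2) in GeV).
SLOPES = {
    "A": (180.0, 250.0, lambda m12: -1936.0 + 12.9 * m12),
    "B": (400.0, 900.0, lambda m12: 4.93 + 0.229 * m12),
    "C": (500.0, 700.0, lambda m12: 54.0 + 0.297 * m12),
    "D": (575.0, 725.0, lambda m12: 600.0),
}

def m0_on_slope(point, m12):
    """Return m0 on the given model line, checking the allowed m1/2 range."""
    lo, hi, f = SLOPES[point]
    if not lo <= m12 <= hi:
        raise ValueError(f"m1/2 = {m12} GeV outside slope {point} range")
    return f(m12)
```

For example, m0_on_slope("C", 590.0) gives about 229 GeV, close to the benchmark point C of Tab. II at (m0, m1/2) = (230, 590) GeV.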
Starting with Fig. 11 and tanβ = 10, the bulk region of equally low scalar and fermion masses is all but excluded
by the b → sγ branching ratio. This leaves as a favoured region first the so-called focus point region of low fermion
massesm1/2, where the lightest neutralinos are relatively heavy, have a significant Higgsino component, and annihilate
dominantly into pairs of electroweak gauge bosons. Our benchmark point A lies in this region, albeit at smaller masses
than SPS 2 (m0 = 1450 GeV, m1/2 = 300 GeV) and BDEGOP E’ (m0 = 1530 GeV, m1/2 = 300 GeV), which lie
outside the region favoured by aµ (grey-shaded) and lead to collider-unfriendly squark and gaugino masses.
The second favoured region for small tanβ is the co-annihilation branch of low scalar masses m0, where the lighter
tau-slepton mass eigenstate is not much heavier than the lightest neutralino and the two have a considerable co-
annihilation cross section. This is where we have chosen our benchmark point B, which differs from the points SPS
3 (m0 = 90 GeV, m1/2 = 400 GeV) and BDEGOP C’ (m0 = 85 GeV, m1/2 = 400 GeV) only very little in the scalar
mass. This minor difference may be traced to the fact that we use DarkSUSY 4.1 [43] instead of the private dark
matter program SSARD of Ref. [51].
At the larger value of tanβ = 30 in Fig. 12, only the co-annihilation region survives the constraints coming from
b → sγ decays. Here we choose our point C, which has slightly higher masses than both SPS 1b (m0 = 200 GeV,
m1/2 = 400 GeV) and BDEGOP I’ (m0 = 175 GeV, m1/2 = 350 GeV), due to the ever more stringent constraints
from the above-mentioned rare B-decay.
For the very large value of tanβ = 50 in Fig. 13, the bulk region reappears at relatively heavy scalar and fermion
masses. Here, the couplings of the heavier scalar and pseudo-scalar Higgses H0 and A0 to bottom quarks and tau-
leptons and the charged-Higgs coupling to top-bottom pairs are significantly enhanced, resulting e.g. in increased
dark matter annihilation cross sections through s-channel Higgs-exchange into bottom-quark final states. So as tanβ
increases further, the so-called Higgs-funnel region eventually makes its appearance on the diagonal of large scalar and
fermion masses. We choose our point D in the concentrated (bulky) region favoured by cosmology and aµ at masses,
that are slightly higher than those of SPS 4 (m0 = 400 GeV, m1/2 = 300 GeV) and BDEGOP L’ (m0 = 300 GeV,
m1/2 = 450 GeV). We do so in order to escape again from the constraints of the b → sγ decay, which are stronger
today than they were a few years ago. In this scenario, squarks and gluinos are very heavy with masses above 1 TeV.
FIG. 14: Dependence of the precision variables BR(b → sγ), ∆ρ, and the cold dark matter relic density ΩCDMh² (top) as well
as of the lightest SUSY particle, up- and down-type squark masses (bottom) on the NMFV parameter λ in our benchmark
scenario A. The experimentally allowed ranges (within 2σ) are indicated by horizontal dashed lines.
D. Dependence of Precision Observables and Squark-Mass Eigenvalues on Flavour Violation
Let us now turn to the dependence of the precision variables discussed in Sec. IVA on the flavour violating parameter
λ in the four benchmark scenarios defined in Sec. IVC. As already mentioned, we expect the leptonic observable aµ
to depend weakly (at two loops only) on the squark sector, and this is confirmed by our numerical analysis. We find
constant values of 6, 14, 16, and 13×10−10 for the benchmarks A, B, C, and D, all of which lie well within 2σ (the
latter three even within 1σ) of the experimentally favoured range (22± 10)× 10−10.
The electroweak precision observable ∆ρ is shown first in Figs. 14-17 for the four benchmark scenarios A, B, C, and
D. On our logarithmic scale, only the experimental upper bound of the 2σ-range is visible as a dashed line. While the
self-energy diagrams of the electroweak gauge bosons depend obviously strongly on the helicities, flavours, and mass
eigenvalues of the squarks in the loop, the SUSY masses in our scenarios are sufficiently small and the experimental
error is still sufficiently large to allow for relatively large values of λ ≤ 0.57, 0.52, 0.38, and 0.32 for the benchmark
points A, B, C, and D, respectively. As mentioned above, ∆ρ conversely constrains SUSY models in cMFV only for
masses above 2000 GeV for m0 and 1500 GeV for m1/2.
The next diagram in Figs. 14-17 shows the dependence of the most stringent low-energy constraint, coming from
the good agreement between the measured b → sγ branching ratio and the two-loop SM prediction, on the NMFV
parameter λ. The dashed lines of the 2σ-bands exhibit two allowed regions, one close to λ = 0 (vertical green line)
and a second one around λ ≃ 0.57, 0.75, 0.62, and 0.57, respectively. As is well-known, the latter are, however,
disfavoured by b→ sµ+µ− data constraining the sign of the b→ sγ amplitude to be the same as in the SM [52]. We
will therefore limit ourselves later to the regions λ ≤ 0.05 (points A, C, and D) and λ ≤ 0.1 (point B) in the vicinity
of (c)MFV (see also Tab. I).
The 95% confidence-level (or 2σ) region for the cold dark matter density was given in absolute values in Ref. [44]
and is shown as a dashed band in the upper right part of Figs. 14-17. However, only the lower bound (0.094) is of
relevance, as the relic density falls with increasing λ. This is not so pronounced in our model B as in our model A,
where squark masses are light and the lightest neutralino has a sizable Higgsino-component, so that squark exchanges
contribute significantly to the annihilation cross sections. For models C and D there is little sensitivity of ΩCDMh²
(except at very large λ ≤ 1), as the squark masses are generally larger.
FIG. 15: Same as Fig. 14 for our benchmark scenario B.
FIG. 16: Same as Fig. 14 for our benchmark scenario C.
FIG. 17: Same as Fig. 14 for our benchmark scenario D.
The rapid fall-off of the relic density for very large λ ≤ 1 can be understood by looking at the resulting lightest up-
and down-type squark mass eigenvalues in the lower left part of Figs. 14-17. For maximal flavour violation, the off-
diagonal squark mass matrix elements are of similar size as the diagonal ones, leading to one squark mass eigenvalue
that approaches and finally falls below the lightest neutralino (dark matter) mass. Light squark propagators and
co-annihilation processes thus lead to a rapidly falling dark matter relic density and finally to cosmologically excluded
NMFV SUSY models, since the LSP must be electrically neutral and a colour singlet.
An interesting phenomenon of level reordering between neighbouring states can be observed in the lower central
diagrams of Figs. 14-17 for the two lowest mass eigenvalues of up-type squarks. The squark mass eigenstates are, by
definition, labeled in ascending order with the mass eigenvalues, so that ũ1 represents the lightest, ũ2 the second-
lightest, and ũ6 the heaviest up-type squark. As λ and the off-diagonal entries in the mass matrix increase, the splitting
between the lightest and highest mass eigenvalues naturally increases, whereas the intermediate squark masses (of
ũ3,4,5) are practically degenerate and insensitive to λ. These remarks also hold for the down-type squark masses
shown in the lower right diagrams of Figs. 14-17. However, for up-type squarks it is first the second-lowest mass that
decreases up to intermediate values of λ = 0.2...0.5, whereas the lowest mass is constant; only at this point does the
second-lowest mass become constant, taking approximately the value of the hitherto lowest squark mass, while the
lowest squark mass starts to decrease further with λ. These “avoided crossings” are a common phenomenon for
Hermitian matrices and reminiscent of meta-stable systems in quantum mechanics. At the point where the two
levels should cross, the corresponding squark eigenstates mix and change character, as will be explained in the next
subsection. For scenario C (Fig. 16), the phenomenon occurs even a second time with an additional avoided crossing
between the states ũ2 and ũ3 at λ ≃ 0.05. For scenario B (Fig. 15), this takes place at λ ≃ 0.1, and there is even
another crossing at λ ≃ 0.02. For down-type squarks, the level-reordering phenomenon is not so pronounced.
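The avoided-crossing behaviour described above can be reproduced with a toy 2 × 2 Hermitian matrix. The matrix below is purely illustrative (not an actual squark mass matrix): its diagonal entries would cross as the mixing parameter grows, but the eigenvalues repel as soon as the off-diagonal entry is non-zero, while the eigenstates swap character across the would-be crossing.

```python
import numpy as np

lams = np.linspace(0.0, 0.5, 501)
levels = []
ground_char = []   # dominant basis component of the lightest eigenstate
for lam in lams:
    # Toy Hermitian matrix: diagonals would cross at lam = 0.25,
    # off-diagonal mixing 0.2*lam prevents the actual crossing.
    H = np.array([[1.0 - lam, 0.2 * lam],
                  [0.2 * lam, 0.5 + lam]])
    vals, vecs = np.linalg.eigh(H)   # ascending order, like u~_1 ... u~_6
    levels.append(vals)
    ground_char.append(int(np.argmax(np.abs(vecs[:, 0]))))
levels = np.array(levels)

gap = levels[:, 1] - levels[:, 0]
assert gap.min() > 0.0                     # the levels never actually cross ...
assert ground_char[0] != ground_char[-1]   # ... but the states change character
```

The minimal gap near the would-be crossing is set by the off-diagonal coupling, exactly as the squark-mass gaps in Figs. 14-17 are set by the NMFV entries of the mass matrix.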
E. Chirality and Flavour Decomposition of Squark Mass Eigenstates
In NMFV, squarks will not only exhibit the traditional mixing of left- and right-handed helicities of third-generation
flavour eigenstates, but will in addition exhibit generational mixing. As discussed before, we restrict ourselves here to
the simultaneous mixing of left-handed second- and third-generation up- and down-type squarks. For our benchmark
scenario A, the helicity and flavour decomposition of the six up-type (left) and down-type (right) squark mass eigen-
FIG. 18: Dependence of the chirality (L, R) and flavour (u, c, t; d, s, and b) content of up- (ũi) and down-type (d̃i) squark
mass eigenstates on the NMFV parameter λ ∈ [0; 1] for benchmark point A.
FIG. 19: Same as Fig. 18 for λ ∈ [0; 0.1].
FIG. 20: Same as Fig. 18 for benchmark point B.
FIG. 21: Same as Fig. 20 for λ ∈ [0; 0.1].
FIG. 22: Same as Fig. 18 for benchmark point C.
FIG. 23: Same as Fig. 22 for λ ∈ [0; 0.1].
FIG. 24: Same as Fig. 18 for benchmark point D.
FIG. 25: Same as Fig. 24 for λ ∈ [0; 0.1].
states is shown in Fig. 18 for the full range of the parameter λ ∈ [0; 1] and in Fig. 19 for the experimentally favoured
range in the vicinity of (c)MFV, λ ∈ [0; 0.1]. First-generation and right-handed second-generation squarks remain, of
course, helicity- and flavour-diagonal,
ũ3 = c̃R , d̃2 = s̃R,
ũ4 = ũR , d̃3 = d̃R,
ũ5 = ũL , d̃5 = d̃L, (66)
with the left-handed and first-generation squarks being slightly heavier due to their weak isospin coupling (see Eqs. (5)-
(7)) and different renormalization-group running effects. Production of these states will benefit from t- and u-channel
contributions of first- and second-generation quarks with enhanced parton densities in the external hadrons, but they
will not be identified easily with heavy-flavour tagging and are of little interest for our study of flavour violation.
The lightest up-type squark ũ1 remains the traditional mixture of left- and right-handed stops over a large region of
λ ≤ 0.4, but it shows at this point an interesting flavour transition, which is in fact expected from the level reordering
phenomenon discussed in the lower central plot of Fig. 14. The transition happens, however, above the experimental
limit of λ ≤ 0.1. Below this limit, it is the states ũ2, ũ6, d̃1, and in particular d̃4 and d̃6 that show, in addition to
helicity mixing, the most interesting and smooth variation of second- and third-generation flavour content (see Fig.
19). Note that at very low λ ≃ 0.002 the states d̃L and s̃L rapidly switch levels. This numerically small change was
not visible on the linear scale in Fig. 14.
For the benchmark point B, whose helicity and flavour decomposition is shown in Fig. 20, level reordering occurs
at λ ≃ 0.4 for the intermediate-mass up-type squarks,
ũ3,4 = c̃R , d̃2 = s̃R,
ũ4,3 = ũR , d̃3 = d̃R,
ũ5 = ũL , d̃5 = d̃L (67)
whereas the ordering of down-type squarks is very similar to scenario A. Close inspection of Fig. 21 shows, however,
that also d̃R and s̃R switch levels at low values of λ ≃ 0.02. At λ ≃ 0.01, in addition s̃R and b̃L switch levels, and at
λ ≃ 0.002 it is the states ũL and c̃L. The lightest up-type squark is again nothing but a mix of left- and right-handed
stops up to λ ≤ 0.4. Phenomenologically smooth transitions below λ ≤ 0.1 involving taggable third-generation squarks
are observed for ũ4, ũ6, d̃1, and d̃6.
The helicity and flavour decomposition for our scenario D, shown in Fig. 24, is rather similar to the one in scenario
A, i.e.
ũ3 = c̃R , d̃3 = s̃R,
ũ4 = ũR , d̃4 = d̃R,
ũ5 = ũL , d̃5 = d̃L (68)
are exactly the same in the up-squark sector, and only the mixed down-type state d̃4 is now lighter and becomes d̃2.
The lightest up-type squark, ũ1, is again mostly a mix of left- and right-handed top squarks up to λ ≃ 0.4, where
the level reordering and generation mixing occurs (see lower central part of Fig. 17). At the experimentally favoured
lower values of λ ≤ 0.1, the states ũ2, ũ6, d̃1, d̃2, and d̃6 exhibit some smooth variations, shown in detail in Fig. 25,
albeit to a lesser extent than in scenario A. At very low λ ≃ 0.004, it is now the up-type squarks ũL and c̃L that
rapidly switch levels. This numerically small change was again not visible on a linear scale (see Fig. 17).
For our scenario C, shown in Fig. 22, the assignment of the intermediate states
ũ3 = c̃R , d̃3 = s̃R,
ũ4 = ũR , d̃4 = d̃R,
ũ5 = ũL , d̃5 = d̃L (69)
is the same as for scenario D above λ ≥ 0.1. Just below, ũR and c̃R as well as d̃R and s̃R rapidly switch levels, whereas
ũL and c̃L switch levels at very low λ ≃ 0.002. These changes were already visible upon close inspection of the lower
central and right plots in Fig. 16. On the other hand, the lightest squarks ũ1 and d̃1 only acquire significant flavour
admixtures at relatively large λ ≃ 0.2...0.4, whereas they are mostly superpositions of left- and right-handed stops
and sbottoms in the experimentally favourable range of λ ≤ 0.1 shown in Fig. 23. Here, the heaviest states ũ6 and d̃6
show already smooth admixtures of third-generation squarks as it was the case for the scenarios A and D discussed
above. The most interesting states are, however, ũ2, ũ4, d̃2, and d̃4, respectively, since they represent combinations
of up to four different helicity and flavour states and have a significant, taggable third-generation flavour content.
V. NUMERICAL PREDICTIONS FOR NMFV SUSY PARTICLE PRODUCTION AT THE LHC
In this section, we present numerical predictions for the production cross sections of squark-antisquark pairs, squark
pairs, the associated production of squarks and gauginos, and gaugino pairs in NMFV SUSY at the CERN LHC, i.e.
for pp-collisions at √S = 14 TeV centre-of-mass energy. Thanks to the QCD factorization theorem, total unpolarized
hadronic cross sections

σ = ∫_{4m²/S}^{1} dτ ∫_{(1/2) ln τ}^{−(1/2) ln τ} dy ∫_{t_min}^{t_max} dt f_{a/A}(x_a, M_a²) f_{b/B}(x_b, M_b²) dσ̂/dt

can be calculated by convolving the relevant partonic cross sections dσ̂/dt, computed in Sec. III, with universal parton
densities f_{a/A} and f_{b/B} of partons a, b in the hadrons A, B, which depend on the longitudinal momentum fractions
of the two partons x_{a,b} = √τ e^{±y} and on the unphysical factorization scales M_{a,b}. For consistency with our leading
order (LO) QCD calculation in the collinear approximation, where all squared quark masses (except for the top-quark
mass) m²_q ≪ s, we employ the LO set of the latest CTEQ6 global parton density fit [53], which includes nf = 5 “light”
(including the bottom) quark flavours and the gluon, but no top-quark density. Whenever it occurs, i.e. for gluon
initial states and gluon or gluino exchanges, the strong coupling constant αs(µR) is calculated with the corresponding
LO value of Λ^{LO} = 165 MeV. We identify the renormalization scale µR with the factorization scales Ma = Mb and
set the scales to the average mass of the final state SUSY particles i and j, m = (mi +mj)/2.
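The structure of the factorization formula above can be made explicit in a toy numerical convolution. In the sketch below, the parton density and partonic cross section are invented stand-ins (NOT the CTEQ6 fit and NOT the partonic cross sections of Sec. III); only the convolution over τ and the rapidity y, with x_{a,b} = √τ e^{±y}, mirrors the text.

```python
import numpy as np

S = 14000.0**2          # hadronic centre-of-mass energy squared, (14 TeV)^2
m = 500.0               # toy average final-state mass in GeV

def f(x):
    """Toy valence-like parton density (a stand-in, not CTEQ6)."""
    return 3.0 * (1.0 - x)**2

def sigma_hat(s):
    """Toy partonic cross section with production threshold s > 4 m^2."""
    return 1.0 / s if s > 4.0 * m**2 else 0.0

tau_min = 4.0 * m**2 / S
taus = np.linspace(tau_min, 1.0, 400)
sigma = 0.0
for i in range(len(taus) - 1):
    tau = 0.5 * (taus[i] + taus[i + 1])   # midpoint in tau
    ymax = -0.5 * np.log(tau)             # rapidity range from x_{a,b} <= 1
    ys = np.linspace(-ymax, ymax, 200)
    dy = ys[1] - ys[0]
    xa = np.sqrt(tau) * np.exp(ys)        # x_a = sqrt(tau) e^{+y}
    xb = np.sqrt(tau) * np.exp(-ys)       # x_b = sqrt(tau) e^{-y}
    sigma += (taus[i + 1] - taus[i]) * np.sum(f(xa) * f(xb)) * dy * sigma_hat(tau * S)
```

A realistic computation replaces the toy ingredients by the CTEQ6 densities evaluated at M_a = M_b = (m_i + m_j)/2 and by the partonic cross sections dσ̂/dt of Sec. III integrated over t.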
The numerical cross sections for charged squark-antisquark and squark-squark production, neutral up- and down-
type squark-antisquark and squark-squark pair production, associated production of squarks with charginos and
neutralinos, and gaugino pair production are shown in Fig. 26 for our benchmark scenario A, in Fig. 27 for scenario
B, in Fig. 28 for scenario C, and in Fig. 29 for scenario D. The magnitudes of the cross sections vary from the barely
visible level of 10−2 fb for weak production of heavy final states over the semi-strong production of average squarks
and gauginos and quark-gluon initial states to large cross sections of 102 to 103 fb for the strong production of diagonal
squark-(anti)squark pairs or weak production of very light gaugino pairs. Unfortunately, these processes, whose cross
sections are largest (top right, center left, and lower right parts of Figs. 26-29), are practically insensitive to the flavour
violation parameter λ, as the strong gauge interaction is insensitive to quark flavours and gaugino pair production
cross sections are summed over exchanged squark flavours.
Some of the subleading, non-diagonal cross sections show, however, sharp transitions, in particular down-type
squark-antisquark production at the benchmark point B (centre-left part of Fig. 27), but also the other squark-
antisquark and squark-squark production processes. At λ = 0.02, the cross sections for d̃1d̃6 and d̃3d̃6 switch places.
Since the concerned sstrange and sbottom mass differences are rather small, this is mainly due to the different strange
and bottom quark densities in the proton. The cross section is mainly due to the exchange of strongly coupled gluinos
despite their larger mass. At λ = 0.035 the cross sections for d̃3d̃6 and d̃1d̃3 increase sharply, since d̃3 = d̃R can then
be produced from down-type valence quarks. The cross section of the latter process increases with the strange squark
content of d̃1.
At the benchmark point C (Fig. 28), sharp transitions occur between the ũ2/ũ4 and d̃2/d̃4 states, which are pure
charm/strange squarks below/above λ = 0.035, for all types of charged and neutral squark-antisquark and squark-
squark production and also squark-gaugino associated production. As a side-remark we note that an interesting
perspective might be the exploitation of these t-channel contributions to second- and third-generation squark pro-
duction for the determination of heavy-quark densities in the proton. This requires, of course, efficient experimental
techniques for heavy-flavour tagging.
Smooth transitions and semi-strong cross sections of about 1 fb are observed for the associated production of third-
generation squarks with charginos (lower left diagrams) and neutralinos (lower centre diagrams) and in particular
for the scenarios A and B. For benchmark point A (Fig. 26), the cross section for d̃4 production decreases with its
strange squark content, while the bottom squark content increases at the same time. For benchmark point B (Fig.
27), the same (opposite) happens for d̃6 (d̃1), while the cross sections for ũ6 increase/decrease with its charm/top
squark content. Even in minimal flavour violation, the associated production of stops and charginos is a particularly
useful channel for SUSY particle spectroscopy, as can be seen from the fact that cross sections vary over several orders
of magnitude among our four benchmark points (see also Ref. [54]).
An illustrative summary of flavour violating hadroproduction cross section contributions for third-generation squarks
and/or gauginos is presented in Tab. III, together with the competing flavour-diagonal contributions.
FIG. 26: Cross sections for charged squark-antisquark (top left) and squark-squark (top centre) production, neutral up-type (top
right) and down-type (centre left) squark-antisquark and squark-squark pair (centre and centre right) production, associated
production of squarks with charginos (bottom left) and neutralinos (bottom centre), and gaugino pair production (bottom
right) at the LHC in our benchmark scenario A.
VI. CONCLUSIONS
In conclusion, we have performed an extensive analysis of squark and gaugino hadroproduction and decays in non-
minimal flavour violating supersymmetry. Within the super-CKM basis, we have taken into account the possible
misalignment of quark and squark rotations and computed all squared helicity amplitudes for the production and the
decay widths of squarks and gauginos in compact analytic form, verifying that our results agree with the literature in
the case of non-mixing squarks whenever possible. Flavour violating effects have also been included in our analysis
of dark matter (co-)annihilation processes. We have then analyzed the NMFV SUSY parameter space for regions
allowed by low-energy, electroweak precision, and cosmological data and defined four new post-WMAP benchmark
points and slopes equally valid in minimal and non-minimal flavour violating SUSY. We found that left-chiral mixing
of second- and third-generation squarks is more strongly constrained than previously believed, mostly owing to smaller
experimental errors on the b → sγ branching ratio and the cold dark matter relic density. For our four benchmark
points, we have presented the dependence of squark mass eigenvalues and the flavour and helicity decomposition of
FIG. 27: Same as Fig. 26 for our benchmark scenario B.
TABLE III: Dominant s-, t-, and u-channel contributions to the flavour violating hadroproduction of third-generation squarks
and/or gauginos and the competing dominant flavour-diagonal contributions.
Final state     s-channel     t-channel     u-channel
t̃b̃∗            W             NMFV-g̃        -
b̃s̃∗            NMFV-Z        NMFV-g̃        -
t̃c̃∗            NMFV-Z        NMFV-g̃        -
t̃b̃             -             -             NMFV-g̃
b̃b̃             -             g̃             g̃
t̃t̃             -             NMFV-g̃        NMFV-g̃
χ̃0b̃            b             b̃             -
χ̃±b̃            NMFV-c        NMFV-b̃        -
χ̃0t̃            NMFV-c        NMFV-t̃        -
χ̃±t̃            b             t̃             -
χ̃χ̃ γ, Z,W q̃ q̃
FIG. 28: Same as Fig. 26 for our benchmark scenario C.
the squark mass eigenstates on the flavour violating parameter λ. We have computed numerically all production
cross sections for the LHC and discussed in detail their dependence on flavour violation. A full experimental study
including heavy-flavour tagging efficiencies, detector resolutions, and background processes would, of course, be very
interesting in order to establish the experimental significance of NMFV. While the implementation of our analytical
results in a general-purpose Monte Carlo generator should now be straightforward, such a detailed experimental
study represents a research project of its own [55] and is beyond the scope of the work presented here.
Acknowledgments
A large part of this work has been performed in the context of the CERN 2006/2007 workshop on “Flavour in the
Era of the LHC”. The authors also acknowledge interesting discussions with J. Debove, A. Djouadi, W. Porod, J.M.
Richard, and P. Skands. This work was supported by two Ph.D. fellowships of the French ministry for education and
research.
FIG. 29: Same as Fig. 26 for our benchmark scenario D.
APPENDIX A: GAUGINO AND HIGGSINO MIXING
The soft SUSY-breaking terms in the minimally supersymmetric Lagrangian include a term
L ⊃ −(1/2) (ψ0)T Y ψ0 + h.c., (A1)
which is bilinear in the (2-component) fermionic partners
ψ0j = (−iB̃,−iW̃ 3, H̃01 , H̃02 )T (A2)
of the neutral electroweak gauge and Higgs bosons and proportional to the, generally complex and symmetric, neu-
tralino mass matrix
Y = (  M1            0            −mZ sW cβ     mZ sW sβ
       0             M2            mZ cW cβ    −mZ cW sβ
      −mZ sW cβ      mZ cW cβ      0           −µ
       mZ sW sβ     −mZ cW sβ     −µ            0        ) . (A3)
Here, M1, M2, and µ are the SUSY-breaking bino, wino, and off-diagonal higgsino mass parameters with tanβ =
sβ/cβ = vu/vd being the ratio of the vacuum expectation values vu,d of the two Higgs doublets, while mZ is the SM Z-
boson mass and sW (cW ) is the sine (co-sine) of the electroweak mixing angle θW . After electroweak gauge-symmetry
breaking and diagonalization of the mass matrix Y , one obtains the neutralino mass eigenstates
χ0i = Nij ψ0j , i, j = 1, . . . , 4, (A4)
where N is a unitary matrix satisfying the relation
N∗ Y N−1 = diag (mχ̃01 , mχ̃02 , mχ̃03 , mχ̃04 ). (A5)
In 4-component notation, the Majorana-fermionic neutralino mass eigenstates can be written as
χ̃0i = (χ0i , χ̄0i )T , i = 1, . . . , 4. (A6)
Their mass eigenvalues mχ̃0i can, e.g., be found in analytic form in [56] and can be chosen to be real and non-negative.
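As a numerical cross-check of Eqs. (A3) and (A5), the matrix Y can be diagonalized with standard linear-algebra routines. The following Python/NumPy sketch is illustrative only (the parameter values and the default mZ and sin²θW are rough assumptions, not benchmark values from this paper): it builds the real tree-level matrix, diagonalizes it, and absorbs negative eigenvalues into phases so that the masses come out non-negative.

```python
import numpy as np

def neutralino_masses(M1, M2, mu, tan_beta, mZ=91.19, sw2=0.231):
    """Diagonalize the real tree-level neutralino mass matrix Y of Eq. (A3).

    Returns (m, N, Y) with non-negative masses m sorted ascending and a
    unitary N satisfying N* Y N^{-1} = diag(m), as in Eq. (A5).
    The default mZ and sin^2(theta_W) are rough illustrative values.
    """
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    b = np.arctan(tan_beta)
    sb, cb = np.sin(b), np.cos(b)
    Y = np.array([
        [M1,             0.0,           -mZ * sw * cb,  mZ * sw * sb],
        [0.0,            M2,             mZ * cw * cb, -mZ * cw * sb],
        [-mZ * sw * cb,  mZ * cw * cb,   0.0,          -mu],
        [ mZ * sw * sb, -mZ * cw * sb,  -mu,            0.0],
    ])
    d, O = np.linalg.eigh(Y)                      # Y = O diag(d) O^T, O orthogonal
    P = np.diag(np.where(d < 0.0, 1j, 1.0 + 0j))  # phases absorb negative eigenvalues
    N = P @ O.T                                    # unitary by construction
    m = np.abs(d)
    order = np.argsort(m)                          # label states by ascending mass
    return m[order], N[order], Y
```

Sorting the rows of N together with m keeps the defining relation intact, since a row permutation of N only permutes the entries of the resulting diagonal matrix.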
The chargino mass term in the SUSY Lagrangian
L ⊃ −(1/2) (ψ+, ψ−) ( 0 XT ; X 0 ) (ψ+, ψ−)T + h.c. (A7)
is bilinear in the (2-component) fermionic partners
ψ±j = (−iW̃±, H̃±2,1)T (A8)
of the charged electroweak gauge and Higgs bosons and proportional to the, generally complex, chargino mass matrix
X = ( M2           √2 mW sβ
      √2 mW cβ     µ        ) , (A9)
where mW is the mass of the SM W -boson. Its diagonalization leads to the chargino mass eigenstates
χ+i = Vij ψ+j , χ−i = Uij ψ−j , i, j = 1, 2, (A10)
where the matrices U and V satisfy the relation
U∗X V −1 = diag (mχ̃±1 , mχ̃±2 ). (A11)
In 4-component notation, the Dirac-fermionic chargino mass eigenstates can be written as
χ̃±i = (χ±i , χ̄∓i )T , i = 1, 2. (A12)
The mass eigenvalues can be chosen to be real and non-negative and are given by [2]
m2χ̃±1,2 = (1/2) { M22 + µ2 + 2m2W ∓ [ (M22 − µ2)2 + 4m4W c22β + 4m2W (M22 + µ2 + 2M2 µ s2β) ]1/2 } , (A13)
while the matrices
U = O− and V = { O+ if detX ≥ 0 ; σ3 O+ if detX < 0 } , with O± = ( cos θ± sin θ± ; −sin θ± cos θ± ) (A14)
are determined by the mixing angles θ± with 0 ≤ θ± ≤ π/2 and
tan 2θ+ = 2√2 mW (M2 sinβ + µ cosβ) / (M22 − µ2 + 2m2W cos 2β) and tan 2θ− = 2√2 mW (M2 cosβ + µ sinβ) / (M22 − µ2 − 2m2W cos 2β) . (A15)
[1] H. P. Nilles, Phys. Rept. 110 (1984) 1.
[2] H. E. Haber and G. L. Kane, Phys. Rept. 117 (1985) 75.
[3] M. Ciuchini, G. Degrassi, P. Gambino and G. F. Giudice, Nucl. Phys. B 534 (1998) 3.
[4] A. J. Buras, P. Gambino, M. Gorbahn, S. Jäger and L. Silvestrini, Phys. Lett. B 500 (2001) 161.
[5] L. J. Hall and L. Randall, Phys. Rev. Lett. 65 (1990) 2939.
[6] G. D’Ambrosio, G. F. Giudice, G. Isidori and A. Strumia, Nucl. Phys. B 645 (2002) 155.
[7] W. Altmannshofer, A. J. Buras and D. Guadagnoli, arXiv:hep-ph/0703200.
[8] N. Cabibbo, Phys. Rev. Lett. 10 (1963) 531.
[9] M. Kobayashi and T. Maskawa, Prog. Theor. Phys. 49 (1973) 652.
[10] L. J. Hall, V. A. Kostelecky and S. Raby, Nucl. Phys. B 267 (1986) 415.
[11] J. F. Donoghue, H. P. Nilles and D. Wyler, Phys. Lett. B 128 (1983) 55.
[12] M. J. Duncan, Nucl. Phys. B 221 (1983) 285.
[13] A. Bouquet, J. Kaplan and C. A. Savoy, Phys. Lett. B 148 (1984) 69.
[14] F. Borzumati and A. Masiero, Phys. Rev. Lett. 57 (1986) 961.
[15] F. Gabbiani and A. Masiero, Nucl. Phys. B 322 (1989) 235.
[16] P. Brax and C. A. Savoy, Nucl. Phys. B 447 (1995) 227.
[17] J. S. Hagelin, S. Kelley and T. Tanaka, Nucl. Phys. B 415 (1994) 293.
[18] F. Gabbiani, E. Gabrielli, A. Masiero and L. Silvestrini, Nucl. Phys. B 477 (1996) 321.
[19] M. Ciuchini, E. Franco, D. Guadagnoli, V. Lubicz, M. Pierini, V. Porretti and L. Silvestrini, arXiv:hep-ph/0703204.
[20] W. Beenakker, R. Höpker, M. Spira and P. M. Zerwas, Nucl. Phys. B 492 (1997) 51;
E. L. Berger, M. Klasen and T. Tait, Phys. Rev. D 59 (1999) 074024.
[21] H. Baer, B. W. Harris and M. H. Reno, Phys. Rev. D 57 (1998) 5871.
[22] W. Beenakker, M. Klasen, M. Krämer, T. Plehn, M. Spira and P. M. Zerwas, Phys. Rev. Lett. 83 (1999) 3780.
[23] E. L. Berger, M. Klasen and T. Tait, Phys. Lett. B 459 (1999) 165;
E. L. Berger, M. Klasen and T. M. P. Tait, Phys. Rev. D 62 (2000) 095014 [Erratum-ibid. D 67 (2003) 099901].
[24] W. Beenakker, M. Krämer, T. Plehn, M. Spira and P. M. Zerwas, Nucl. Phys. B 515 (1998) 3.
[25] E. L. Berger, B. W. Harris, D. E. Kaplan, Z. Sullivan, T. M. P. Tait and C. E. M. Wagner, Phys. Rev. Lett. 86 (2001)
4231.
[26] G. Bozzi, B. Fuks and M. Klasen, Phys. Rev. D 72 (2005) 035016.
[27] S. Dawson, E. Eichten and C. Quigg, Phys. Rev. D 31 (1985) 1581.
[28] S. Y. Choi, A. Djouadi, H. S. Song and P. M. Zerwas, Eur. Phys. J. C 8 (1999) 669.
[29] M. Spira, Nucl. Phys. Proc. Suppl. 89 (2000) 222.
[30] G. J. Gounaris, J. Layssac, P. I. Porfyriadis and F. M. Renard, Phys. Rev. D 70 (2004) 033011.
[31] A. Bartl, W. Majerotto and W. Porod, Z. Phys. C 64 (1994) 499 [Erratum-ibid. C 68 (1995) 518].
[32] M. Obara and N. Oshimo, JHEP 0608 (2006) 054.
[33] W. M. Yao et al. [Particle Data Group], J. Phys. G 33 (2006) 1.
[34] M. Ciuchini, A. Masiero, P. Paradisi, L. Silvestrini, S. K. Vempati and O. Vives, arXiv:hep-ph/0702144.
[35] J. Foster, K. I. Okumura and L. Roszkowski, Phys. Lett. B 641 (2006) 452.
[36] T. Hahn, W. Hollik, J. I. Illana and S. Penaranda, arXiv:hep-ph/0512315.
[37] E. Barberio et al. [Heavy Flavor Averaging Group (HFAG)], arXiv:hep-ex/0603003.
[38] A. L. Kagan and M. Neubert, Phys. Rev. D 58 (1998) 094012.
[39] S. Heinemeyer, W. Hollik, F. Merz and S. Penaranda, Eur. Phys. J. C 37 (2004) 481.
[40] S. Heinemeyer, D. Stöckinger and G. Weiglein, Nucl. Phys. B 690 (2004) 62.
[41] S. Heinemeyer, D. Stöckinger and G. Weiglein, Nucl. Phys. B 699 (2004) 103.
[42] J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, K. A. Olive and M. Srednicki, Nucl. Phys. B 238 (1984) 453.
[43] P. Gondolo, J. Edsjo, P. Ullio, L. Bergstrom, M. Schelke and E. A. Baltz, JCAP 0407 (2004) 008.
[44] J. Hamann, S. Hannestad, M. S. Sloth and Y. Y. Y. Wong, Phys. Rev. D 75 (2007) 023522.
[45] J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Phys. Lett. B 565 (2003) 176.
[46] W. Porod, Comput. Phys. Commun. 153 (2003) 275.
[47] S. Heinemeyer, W. Hollik and G. Weiglein, Comput. Phys. Commun. 124 (2000) 76.
[48] T. Moroi, Phys. Rev. D 53 (1996) 6565 [Erratum-ibid. D 56 (1997) 4424].
[49] B. C. Allanach et al., Eur. Phys. J. C 25 (2002) 113.
[50] J. A. Aguilar-Saavedra et al., Eur. Phys. J. C 46 (2006) 43.
[51] M. Battaglia, A. De Roeck, J. R. Ellis, F. Gianotti, K. A. Olive and L. Pape, Eur. Phys. J. C 33 (2004) 273.
[52] P. Gambino, U. Haisch and M. Misiak, Phys. Rev. Lett. 94 (2005) 061803.
[53] J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky and W. K. Tung, JHEP 0207 (2002) 012.
[54] M. Beccaria, G. Macorini, L. Panizzi, F. M. Renard and C. Verzegnassi, Phys. Rev. D 74 (2006) 093009.
[55] B. Fuks, M. Klasen and P. Skands, in preparation.
[56] M. M. El Kheishen, A. A. Aboshousha and A. A. Shafik, Phys. Rev. D 45 (1992) 4345.
0704.1827 | Transaction-Oriented Simulation In Ad Hoc Grids
TRANSACTION-ORIENTED SIMULATION IN
AD HOC GRIDS
Gerald Krafft
This report is submitted in partial fulfilment of the requirements of the
M.Sc. degree in Advanced Computer Science at the University of Westminster.
Supervisor: Vladimir Getov
Submitted on: 24th January 2007
Abstract
Computer Simulation is an important area of Computer Science that is used in many
other research fields, for instance engineering, military applications, biology and climate
research. The growing demand for ever more complex simulations, however, can lead to
long runtimes even on modern computer systems. Performing complex Computer
Simulations in parallel, distributed across several processors or computing nodes within
a network, has proven to reduce the runtime of such complex simulations.

Large-scale parallel computer systems are usually very expensive. Grid Computing is a
cost-effective way to perform resource-intensive computing tasks because it allows
several organisations to share their computing resources. Besides more traditional
Computing Grids, the concept of Ad Hoc Grids has emerged, offering a dynamic and
transient resource-sharing infrastructure that is suitable for short-term collaborations and
has a very small administrative overhead, allowing even small organisations or individual
users to form Computing Grids. A Grid framework that fulfils the requirements of Ad
Hoc Grids is ProActive.

This paper analyses the possibilities of performing parallel transaction-oriented
simulation, with a special focus on the space-parallel approach and on discrete event
simulation synchronisation algorithms that are suitable for transaction-oriented
simulation and for the target environment of Ad Hoc Grids. To demonstrate the findings, a
Java-based parallel transaction-oriented simulator is implemented on the basis of the
promising Shock Resistant Time Warp synchronisation algorithm, using the Grid
framework ProActive. The validation of this parallel simulator shows that the Shock
Resistant Time Warp algorithm can successfully reduce the number of rolled-back
Transaction moves, but it also reveals circumstances in which the Shock Resistant Time
Warp algorithm can be outperformed by the normal Time Warp algorithm. The
conclusion of this paper suggests possible improvements to the Shock Resistant Time
Warp algorithm to avoid such problems.
Table of Contents

1 Introduction
2 Fundamental Concepts
2.1 Grid Computing
2.1.1 Ad Hoc Grids
2.2 Granularity and Hardware Architecture
2.3 Simulation Types
2.3.1 Transaction-Oriented Simulation and GPSS
2.4 Parallelisation of Discrete Event Simulation
2.5 Synchronisation Algorithms
2.5.1 Conservative Algorithms
2.5.2 Optimistic Algorithms
3 Ad Hoc Grid Aspects
3.1 Considerations
3.1.1 Service Deployment
3.1.2 Service Migration
3.1.3 Fault Tolerance
3.1.4 Resource Discovery
3.2 ProActive
3.2.1 Descriptor-Based Deployment
3.2.2 Peer-to-Peer Infrastructure
3.2.3 Active Object Migration
3.2.4 Transparent Fault Tolerance
4 Parallel Transaction-Oriented Simulation
4.1 Past Research Work
4.1.1 Transactions as Events
4.1.2 Accessing Objects in Other LPs
4.1.3 Analysis of GPSS Language
4.2 Synchronisation Algorithm
4.2.1 Requirements
4.2.2 Algorithm Selection
4.2.3 Shock Resistant Time Warp Algorithm
4.3 GVT Calculation
4.4 End of Simulation
4.5 Cancellation Techniques
4.6 Load Balancing
4.7 Model Partitioning
5 Implementation
5.1 Implementation Considerations
5.1.1 Overall Architecture
5.1.2 Transaction Chain and Scheduling
5.1.3 Generation and Termination of Transactions
5.1.4 Supported GPSS Syntax
5.1.5 Simulation Termination at Specific Simulation Time
5.2 Implementation Phases
5.2.1 Model Parsing
5.2.2 Basic GPSS Simulation Engine
5.2.3 Time Warp Parallel Simulation Engine
5.2.4 Shock Resistant Time Warp
5.3 Specific Implementation Details
5.3.1 Scheduling
5.3.2 GVT Calculation and End of Simulation
5.3.3 State Saving and Rollbacks
5.3.4 Memory Management
5.3.5 Logging
5.4 Running the Parallel Simulator
5.4.1 Prerequisites
5.4.2 Files
5.4.3 Configuration
5.4.4 Starting a Simulation
5.4.5 Increasing Memory Provided by JVM
6 Validation of the Parallel Simulator
6.1 Validation 1
6.2 Validation 2
6.3 Validation 3
6.4 Validation 4
6.5 Validation 5
6.6 Validation 6
6.7 Validation Analysis
7 Conclusions
References
Appendix A: Detailed GPSS Syntax
Appendix B: Simulator Configuration Settings
Appendix C: Simulator Log4j Loggers
Appendix D: Structure of the Attached CD
Appendix E: Documentation of Selected Classes
Appendix F: Validation Output Logs
Abbreviations
GFT Global Furthest Time
GPSS General Purpose Simulation System
GPW Global Progress Window
GVT Global Virtual Time
J2SE Java 2 Platform, Standard Edition
JRE Java Runtime Environment
JVM Java Virtual Machine
LP Logical Process
LPCC Logical Process Control Component
LVT Local Virtual Time
NAT Network Address Translation
Figures

Figure 1: Ad Hoc Grid architecture overview [27]
Figure 2: Comparison of fine grained and coarse grained granularity
Figure 3: Classification of simulation types
Figure 4: Parallelisation of independent simulation runs
Figure 5: Event execution with rollback [21]
Figure 6: Comparison of aggressive and lazy cancellation
Figure 7: Accessing objects in other LPs
Figure 8: Global Progress Window with its zones [33]
Figure 9: Overview of LP and LPCC in Shock Resistant Time Warp [8]
Figure 10: Cancellation in transaction-oriented simulation
Figure 11: Architecture overview
Figure 12: Single synchronised Termination Counter
Figure 13: Simulation model class hierarchies for parsing and simulation runtime
Figure 14: Parallel simulator main class overview (excluding LPCC)
Figure 15: Main communication sequence diagram
Figure 16: Parallel simulator main class overview
Figure 17: Scheduling flowchart - part 1
Figure 18: Scheduling flowchart - part 2
Figure 19: Scheduling flowchart - part 3
Figure 20: Extended parallel simulation scheduling flowchart
Figure 21: GVT calculation sequence diagram
Figure 22: State saving example
Figure 23: Validation 5.1 Actuator value graph
Tables

Table 1: Change of Transaction execution path
Table 2: Access to objects
Table 3: Shock Resistant Time Warp sensors
Table 4: Shock Resistant Time Warp indicators
Table 5: Transaction-oriented sensor names
Table 6: Transaction-oriented indicator names
Table 7: Overview of supported GPSS block types
Table 8: Methods implementing basic GPSS scheduling functionality
Table 9: Methods implementing extended parallel simulation scheduling
Table 10: Validation 5.1 LPCC Actuator values
Table 11: LP2 processing statistics of validation 5
1 Introduction
Computer Simulation is one of the oldest areas in Computer Science. It provides
answers about the behaviour of real or imaginary systems that otherwise could only be
gained at great expenditure of time, at high cost, or not at all. Computer Simulation uses
simulation models that are usually simpler than the systems they represent but that are
expected to behave as analogously as possible or as required. The growing demand for
complex Computer Simulations, for instance in engineering, military applications, biology
and climate research, has also led to a growing demand for computing power. One
possibility for reducing the runtime of large, complex Computer Simulations is to
distribute them across several CPUs or computing nodes. This has driven the
development of high-performance parallel computer systems. Even though the
performance of such systems has constantly increased, the ever-growing demand to
simulate more and more complex systems means that suitable high-performance systems
are still very expensive.
Grid computing promises to provide large-scale computing resources at lower costs by
allowing several organisations to share their resources. But traditional Computing Grids
are relatively static environments that require a dedicated administrative authority and
are therefore less well suited for transient short-term collaborations and small
organisations with fewer resources. Ad Hoc Grids provide such a dynamic and transient
resource-sharing infrastructure, allowing even small organisations or individual users
to form Computing Grids. They will make Grid computing and Grid resources widely
available to small organisations and mainstream users, allowing them to perform
resource-intensive computing tasks like Computer Simulations.
There are several approaches to performing Computer Simulations distributed across a
parallel computer system. The space-parallel approach [12] is robust, applicable to many
different simulation types, and able to speed up single simulation runs. It requires the
simulation model to be partitioned into relatively independent sub-systems that are then
executed in parallel on several nodes. Synchronisation between these nodes is still
required because the model sub-systems are usually not fully independent. A lot of past
research has concentrated on different synchronisation algorithms for parallel
simulation. Some of these are only suitable for certain types of parallel systems, for
instance shared-memory systems.
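To make the space-parallel idea concrete, the following minimal Python sketch (illustrative only; the simulator described later in this work is written in Java, and all names here are invented) shows a logical process that consumes events in timestamp order and treats a message from another LP arriving in its local past as a causality violation, which is exactly the situation that conservative algorithms prevent and optimistic algorithms such as Time Warp repair by rollback.

```python
import heapq

class LogicalProcess:
    """One partition (sub-system) of a space-parallel simulation.

    Illustrative sketch only. Each LP keeps a local virtual time (LVT)
    and processes events in timestamp order; events may also arrive
    from other LPs.
    """

    def __init__(self, name):
        self.name = name
        self.lvt = 0              # local virtual time
        self.queue = []           # min-heap of (timestamp, event)
        self.processed = []

    def schedule(self, timestamp, event):
        # Equal timestamps would need an explicit tie-breaker in real code.
        heapq.heappush(self.queue, (timestamp, event))

    def receive(self, timestamp, event):
        """Event sent by another LP. A timestamp below the current LVT is
        a 'straggler' that violates causality: conservative algorithms
        prevent this case, optimistic ones roll the LP back instead."""
        if timestamp < self.lvt:
            raise RuntimeError(
                f"{self.name}: straggler at t={timestamp} < LVT={self.lvt}")
        heapq.heappush(self.queue, (timestamp, event))

    def step(self):
        timestamp, event = heapq.heappop(self.queue)
        self.lvt = timestamp      # advance the local clock
        self.processed.append((timestamp, event))
        return event

lp = LogicalProcess("LP1")
lp.schedule(5, "arrive")
lp.schedule(2, "generate")
lp.step()                  # processes t=2 first, LVT becomes 2
lp.receive(3, "transfer")  # fine: still in this LP's local future
```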
This work investigates the possibility of performing parallel transaction-oriented
simulation in an Ad Hoc Grid environment, with the main focus on the aspects of
parallel simulation. Potential synchronisation algorithms and other simulation aspects
are analysed with respect to their suitability for transaction-oriented simulation and for Ad
Hoc Grids as the target environment, and the chosen solutions are described and the reasons
for their choice given. A past attempt to investigate the parallelisation of transaction-
oriented simulation was presented in [19], with the result that the synchronisation
algorithm employed was not well suited for transaction-oriented simulation. Lessons
from this past attempt have been learned and included in the considerations of this
work. Furthermore, this work outlines certain requirements that a Grid environment
needs to fulfil in order to be appropriate for Ad Hoc Grids. The proposed solutions are
demonstrated by implementing a Java-based parallel transaction-oriented simulator
using the Grid middleware ProActive [15], which fulfils the requirements described
before.
Transaction-oriented simulation was chosen as the specific simulation type because it is
still taught at many universities and is therefore well known. It uses a relatively simple
modelling language that does not require extensive programming skills, and it is a
special type of discrete event simulation, so most findings can also be applied to this
wider simulation class.
The remainder of this report is organised as follows. Section 2 introduces the
fundamental concepts and terminology essential for the understanding of this work. In
section 3 the specific requirements of Ad Hoc Grids are outlined and the Grid
middleware ProActive is briefly described as an environment that fulfils these
requirements. Section 4 focuses on the aspects of parallel simulation and their
application to transaction-oriented simulation. Past research results are discussed,
requirements for a suitable synchronisation algorithm outlined, and the most promising
algorithm selected. This section also addresses other points related to parallel
transaction-oriented simulation, such as GVT calculation, handling of the simulation end,
suitable cancellation techniques and the influence of the model partitioning. Section 5,
the largest section of this report, describes the implementation of the parallel
transaction-oriented simulator, from the initial implementation considerations
and the implementation phases to specific details of the implementation and how the
simulator is used. The functionality of the implemented parallel simulator is then
validated in section 6 and the final conclusions are presented in section 7.
2 Fundamental Concepts
This main section introduces the fundamental concepts and terminology essential for the
understanding of this work. It covers areas like Grid Computing and the relation
between the granularity of parallel algorithms and their expected target hardware
architecture. It also describes the classification of simulation models as well as different
approaches to parallel discrete event simulation and the main groups of synchronisation
algorithms.
2.1 Grid Computing
The term “the Grid” first appeared in the mid-1990s in connection with a proposed
distributed computing infrastructure for advanced science and engineering [9]. Today
Grid computing is commonly used for a “distributed computing infrastructure that
supports the creation and operation of virtual organizations by providing mechanisms
for controlled, cross-organizational resource sharing” [9]. Similar to electric power
grids, Grid computing provides computational resources to clients using a network of
multi-organisational resource providers establishing a collaboration. In the context of Grid
computing resource sharing means the “access to computers, software, data, and other
resources” [9]. The sharing of resources needs to be controlled: it must be defined who
provides and who consumes resources, what is shared and under which conditions the
sharing occurs. These sharing rules and the group of organisations or individuals
defined by them form a so-called Virtual Organisation (VO).
Grid computing technology has evolved and gone through several phases since its
beginning [9]. The first phase was characterised by custom solutions to Grid computing
problems. These were usually built directly on top of Internet protocols with limited
functionality for security, scalability and robustness; interoperability was not
considered important. From 1997 the emerging open source Globus Toolkit
version 2 (GT2) became the de facto standard for Grid computing. It provided usability
and interoperability via a set of protocols, APIs and services and was used in many Grid
deployments worldwide. With the Open Grid Service Architecture (OGSA), which is a
true community standard, came the shift of Grid computing towards a service-oriented
architecture. In addition to a set of standard interfaces and services OGSA provides the
framework in which a wide range of interoperable and portable services can be defined.
2.1.1 Ad Hoc Grids
Traditional computing Grids share certain characteristics [1]. They usually use a
dedicated administrative authority, which often consists of a group of trained
professionals to regulate and control membership and sharing rules of the Virtual
Organisations. This includes administration of policy enforcement, monitoring and
maintenance of the Grid resources. Well-defined policies are used for access privileges
and the deployment of Grid applications and services.
It can be seen that these common characteristics are not ideal for a transient short-term
collaboration with a dynamically changing structure because the administrative
overhead for establishing and maintaining such a Virtual Organisation could outweigh
its benefits [2]. Ad Hoc Grids provide this kind of dynamic and transient resource
sharing infrastructure. According to [27] “An Ad Hoc Grid is a spontaneous formation
of cooperating heterogeneous computing nodes into a logical community without a
preconfigured fixed infrastructure and with only minimal administrative requirements”.
The transient dynamic structure of an Ad Hoc Grid means that new nodes can join or
leave the collaboration at almost any time but Ad Hoc Grids can also contain permanent
nodes.
Figure 1 [27] on the next page shows two example Ad Hoc Grid structures. Ad Hoc Grid
A is a collaboration of nodes from two organisations. It contains permanent nodes in
the form of dedicated high-performance computers but also transient nodes in the form
of non-dedicated workstations. Compared to this, Ad Hoc Grid B is an example of a
more personal Grid system. It consists entirely of transient individual nodes. A practical
example for the application of an Ad Hoc Grid is a group of scientists who want to
collaborate and share computing resources for a specific scientific experiment. Using
Ad Hoc Grid technology they can establish a short-term collaboration lasting only for
the time of the experiment. These scientists might be part of research organisations, but
as the example of Ad Hoc Grid B in Figure 1 shows, Ad Hoc Grids allow even
individuals to form Grid collaborations without the resources of large organisations.
This way Ad Hoc Grids offer a path to more mainstream and personal Grid computing.
Figure 1: Ad Hoc Grid architecture overview [27]
2.2 Granularity and Hardware Architecture
When evaluating the suitability of different parallel algorithms for a specific parallel
hardware architecture it is important to consider the granularity of the parallel
algorithms and to compare the granularity to the processing and communication
performance provided by the hardware architecture.
Definition: granularity
The granularity of a parallel algorithm can be defined as the ratio of the amount of
computation to the amount of communication performed [18].
According to this definition, parallel algorithms with a fine grained granularity perform
a large amount of communication compared to the actual computation, as opposed to
parallel algorithms with a coarse grained granularity, which only perform a small
amount of communication compared to the computation. The following diagram
illustrates the difference between fine grained and coarse grained granularity.
[Figure 2 plots granularity over the chronological sequence of the algorithm,
contrasting a fine grained algorithm (frequent communication phases between short
computation phases) with a coarse grained one (long computation phases with little
communication).]
Figure 2: Comparison of fine grained and coarse grained granularity
Independent of the exact performance figures of a parallel hardware architecture it can
be seen that a hardware architecture with a high communication performance is well
suited for a fine grained parallel algorithm, whereas a hardware architecture with a low
communication performance will require a coarse grained parallel algorithm [23].
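Expressed as code, the granularity definition above is just a ratio of measured times. The following toy Java sketch makes the idea concrete; the class and method names as well as the classification threshold are invented for this illustration, and a real assessment would judge the ratio relative to the communication performance of the target hardware:

```java
// Toy sketch of the granularity definition: the ratio of the time spent on
// computation to the time spent on communication. The threshold used by
// classify() is an arbitrary choice for this illustration.
public class Granularity {
    static double granularity(double computationTime, double communicationTime) {
        return computationTime / communicationTime;
    }

    static String classify(double granularity) {
        return granularity >= 10.0 ? "coarse grained" : "fine grained";
    }
}
```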
2.3 Simulation Types
Simulation models are classified into continuous and discrete simulation according to
when state transitions can occur [14]. Figure 3 below illustrates the classification of
simulation types.
[Figure 3 shows a tree: simulation divides into continuous and discrete simulation;
discrete simulation into time-controlled and discrete-event simulation; and
discrete-event simulation into event-oriented, activity-oriented, process-oriented and
transaction-oriented simulation.]
Figure 3: Classification of simulation types
In continuous simulation the state can change continuously with the time. This type of
simulation uses differential equations, which are solved numerically to calculate the
state. Continuous models are for instance used to simulate the flow of liquids or gases.
Discrete simulation models allow the changing of the state only at discrete time
intervals. They can be divided further according to whether the discrete time intervals
are of fixed or variable length. In a time-controlled simulation model the time advances
in fixed steps changing the state after each step as required.
But for many systems the state only changes in variable intervals, which are determined
during the simulation. For these systems the discrete event simulation model is used. In
discrete event simulation the state of the entities in the system is changed by events.
Each event is linked to a specific simulated time and the simulation system keeps a list
of events sorted by their time [35]. It then selects the next event according to its time
stamp and executes it resulting in a change of the system state. The simulated time then
jumps to the time of the next event that will be executed. The execution of an event can
create new events with a time greater than the current simulated time that will be sorted
into the event list according to their time stamp. Discrete event simulation is very
flexible and can be applied to many groups of systems, which is why many general-
purpose simulation systems use the discrete event model and a lot of research has gone
into this model.
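The event-list mechanism described above can be sketched in a few lines of Java. This is a minimal illustration only; the class and method names are invented here and do not refer to the simulator implemented later in this work:

```java
import java.util.PriorityQueue;

// Minimal discrete event simulation core: events are executed in time-stamp
// order and may schedule new events at a later simulated time.
public class EventLoop {
    static class Event implements Comparable<Event> {
        final double time;      // simulated time at which the event occurs
        final Runnable action;  // state change performed by the event
        Event(double time, Runnable action) { this.time = time; this.action = action; }
        public int compareTo(Event o) { return Double.compare(time, o.time); }
    }

    private final PriorityQueue<Event> eventList = new PriorityQueue<>();
    double clock = 0.0;         // current simulated time

    void schedule(double time, Runnable action) {
        // new events must not lie in the simulated past
        if (time < clock) throw new IllegalArgumentException("event in the past");
        eventList.add(new Event(time, action));
    }

    // Execute events in time-stamp order until the event list is empty.
    void run() {
        while (!eventList.isEmpty()) {
            Event next = eventList.poll();
            clock = next.time;  // the simulated time jumps to the event's time
            next.action.run();  // may call schedule() for future events
        }
    }
}
```

Note how the simulated clock only moves when the next event is taken from the list, exactly as described above: time jumps from event to event instead of advancing in fixed steps.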
2.3.1 Transaction-Oriented Simulation and GPSS
A special case of the discrete event simulation is the transaction-oriented simulation.
Transaction-oriented simulation uses two types of objects. There are stationary objects
that make up the model of the system and then there are mobile objects called
Transactions that move through the system and that can change the state of the
stationary objects. The movement of a Transaction happens at a certain time (i.e. the
time does not progress while the Transaction is moved), which is equivalent to an event
in the discrete event model. But stationary objects can delay the movement of a
Transaction by a random or fixed time. They can also spawn one Transaction into
several or assemble several sub-Transactions back to one. The fact that transaction-
oriented simulation systems are usually a bit simpler than full discrete event simulation
systems makes them very useful for teaching purpose and academic use, especially
because most discrete event simulation aspects can be applied to transaction-oriented
simulation and vice versa.
The best-known transaction-oriented simulation language is GPSS, which stands for
General Purpose Simulation System. GPSS was developed by Geoffrey Gordon at IBM
around 1960 and has contributed important concepts to discrete event simulation. Later
improved versions of the GPSS language were implemented in many systems, two of
which are GPSS/H [36] and GPSS/PC. A detailed description of transaction-oriented
simulation and the improved GPSS/H language can be found in [26].
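As an illustration of the modelling style, the classic single-server queue can be written in a few GPSS blocks. This is a textbook-style fragment in the spirit of [26], not a model taken from this work; the arrival and service times are arbitrary example values:

```
        SIMULATE
        GENERATE  18,6        Transactions arrive every 18 +/- 6 time units
        QUEUE     LINE        enter the waiting-line statistics
        SEIZE     SERVER      occupy the single server facility
        DEPART    LINE        leave the waiting line
        ADVANCE   16,4        service time of 16 +/- 4 time units
        RELEASE   SERVER      free the server
        TERMINATE 1           the Transaction leaves the model
        START     1000        run until 1000 Transactions have terminated
        END
```

The stationary objects (the queue LINE and the facility SERVER) make up the model, while the Transactions created by the GENERATE block move through it, being delayed by the ADVANCE block as described above.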
2.4 Parallelisation of Discrete Event Simulation
Parallelisation of computer simulation is important because the growing performance of
modern computer systems leads to a demand for simulating ever more complex systems,
which still results in excessive simulation times. Parallelisation reduces the
time required for simulating such complex systems by performing different parts of the
simulation in parallel on multiple CPUs or multiple computers within a network.
There are different approaches for the parallelisation of discrete event simulation that
also cover different levels of parallelisation. One approach is to perform independent
simulation runs in parallel [21]. Little communication is needed for this
approach, as it is limited to sending the model and a set of parameters to each node and
collecting the simulation results after the simulation runs have finished. But this
approach is relatively trivial and does not reduce the simulation time of a single
simulation run. It can be used for simulations that consist of many shorter simulation
runs, but these runs have to be independent of each other (i.e. the parameters of one
simulation run must not depend on the results of another).
[Figure 4 shows the workflow: parameter generation, distribution of the parameters to
the nodes, execution of the independent simulation runs in parallel, collection of the
results, and finally analysis and visualisation.]
Figure 4: Parallelisation of independent simulation runs
Two other approaches for the parallelisation of discrete event simulation are the time-
parallel approach and the space-parallel approach [12]. Both can be used to reduce the
simulation time of single simulation runs. The time-parallel approach partitions the
simulated time into intervals [T1, T2], [T2, T3], …, [Ti, Ti+1]. Each of these time intervals
is then run on separate processors or nodes. This approach relies on being able to
determine the starting state of each time interval before the simulation of the earlier time
interval has been completed, e.g. it has to be possible to determine the state at time T2
before the simulation of the time interval [T1, T2] has been completed, which is only
possible for certain systems to be simulated, e.g. systems with state recurrences.
For the space-parallel approach the system model is partitioned into relatively
independent sub-systems. Each of these sub-systems is then assigned and performed by
a logical process (LP) with different LPs running on separate processors or nodes. In
most cases these sub-systems will not be completely independent from each other,
which is why the LPs will have to communicate with each other in order to exchange
events. The space-parallel approach offers greater robustness and is applicable to most
discrete event systems but the resulting speedup will depend on how the system is
partitioned and how independent the resulting sub-systems are. A high
dependency between the sub-systems will result in an increased synchronisation and
communication overhead between the LPs. It will further depend on the synchronisation
algorithm used.
2.5 Synchronisation Algorithms
The central problem for the space-parallel simulation approach is the synchronisation of
the event execution. This synchronisation is also called time management. In discrete
event simulation each event has a time stamp, which is the simulated time at which the
event occurs. If two events are causally dependent on each other then they have to be
performed in the correct order. Because causally dependent events can originate in
different LPs, synchronisation between the LPs becomes very important.
There are two main classes of algorithms for the event synchronisation between LPs,
which are the classes of conservative and optimistic algorithms.
2.5.1 Conservative Algorithms
Conservative algorithms prevent causally dependent events from being executed out of
order by executing only “safe” events [12]. An LP will consider an event to be “safe” if it is
guaranteed that the LP cannot later receive an event with an earlier time stamp. The
main task of conservative algorithms is to provide such guarantees so that LPs can
determine which of the events are guaranteed and can be executed.
Definition: guaranteed event
An event e with time stamp t which is to be executed in LPi is called a guaranteed
event if LPi knows all events with a time stamp t’ < t that it will need to execute during
the whole simulation.
One drawback of conservative algorithms is that LPs will have to wait or block if they
do not have any “safe” events. This can even lead to deadlocks where all LPs are waiting
for guarantees so that they can execute their events. Many of the conservative
algorithms also require additional information about the simulation model like the
communication topology1 or lookahead2 information. Further details about conservative
algorithms can be found in [12], [34].
2.5.2 Optimistic Algorithms
Optimistic algorithms allow causally dependent events to be executed out of order first
but provide mechanisms to later recover from possible violations of the causal
order. The best-known and most analysed optimistic algorithm is Time Warp [16], on
which many other optimistic algorithms are based. In Time Warp an LP will first
execute all events in its local event list, but if it receives an event from another LP with a
time stamp smaller than the ones already executed then it will roll back all its events that
should have been executed after the event just received. State checkpointing3 (also
1 describes which LP can send events to which other LP
2 the model's ability to predict the future course of events [5]
3 the state of the simulation is saved into a state list together with the current simulation
time after the execution of each event or in other defined intervals
known as state saving) is used in order to be able to roll back the state of the LP if
required.
[Figure 5 shows LP1 on a simulated-time axis: it executes e1 (time 5) and e2 (time 10),
then receives the straggler e3 (time 8), rolls back e2 and executes e3 and e2 in the
correct order.]
Figure 5: Event execution with rollback [21]
Figure 5 shows an example LP that performs the local events e1 and e2 with the time
stamps of 5 and 10 but then receives another event e3 with a time stamp of 8 from a
different LP. At this point LP1 will roll back the execution of event e2, then execute the
newly received event e3 and afterwards execute event e2 again in order to retain the
causal order of the events. The rollback of already executed events can result in having
to roll back events that have already been sent to other LPs. To achieve this, anti-events
are sent to the same LPs as the original events, which will result in the original event
being deleted if it has not been executed yet, or in a rollback of the received event and all
many events having to be executed again. It is also possible that after the rollback the
same events that have been rolled back are executed again, sending out the same events
to other LPs for which anti-events were sent during the rollback. In order to avoid this, a
different mechanism for cancelling events exists which is called lazy cancellation [13].
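The interplay of optimistic execution, state saving and rollback can be sketched for a single LP as follows. This is a deliberately simplified Java illustration with invented names: the LP state is a single counter, events are identified only by their time stamps, and the sending of anti-events to other LPs is omitted:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.PriorityQueue;

// Sketch of Time Warp state saving and rollback for a single LP whose entire
// state is one integer counter incremented by each event.
public class TimeWarpLP {
    static final class Checkpoint {
        final double time; final int state;
        Checkpoint(double time, int state) { this.time = time; this.state = state; }
    }

    final PriorityQueue<Double> pending = new PriorityQueue<>(); // event time stamps
    final Deque<Double> executed = new ArrayDeque<>();           // in execution order
    final Deque<Checkpoint> checkpoints = new ArrayDeque<>();
    double lvt = 0.0;  // local virtual time
    int state = 0;     // the LP's simulation state

    TimeWarpLP() { checkpoints.push(new Checkpoint(0.0, 0)); }

    // Optimistically execute all locally pending events in time-stamp order,
    // saving a state checkpoint after each event.
    void executePending() {
        while (!pending.isEmpty()) {
            double t = pending.poll();
            lvt = t;
            state++;                                    // the "event effect"
            executed.push(t);
            checkpoints.push(new Checkpoint(t, state)); // state saving
        }
    }

    // Receive an event from another LP; a straggler triggers a rollback.
    void receive(double t) {
        if (t < lvt) rollback(t);
        pending.add(t);
    }

    private void rollback(double t) {
        // undo events that should have run after the straggler
        while (!executed.isEmpty() && executed.peek() > t) pending.add(executed.pop());
        // restore the latest saved state with a time stamp <= t (t >= 0 assumed)
        while (checkpoints.peek().time > t) checkpoints.pop();
        Checkpoint c = checkpoints.peek();
        lvt = c.time;
        state = c.state;
    }
}
```

The rolled-back events are simply moved back into the pending list, so after the straggler has been inserted they are re-executed in the correct causal order.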
Compared to the original cancellation mechanism, which is also called aggressive
cancellation and was suggested by Jefferson [16], the lazy cancellation mechanism does
not send out anti-events immediately during the rollback but instead keeps a history of
the sent events that have been rolled back and only sends out an anti-event when an
event that was sent and rolled back is not re-executed. If for instance the LP is rolled
back from the simulation time t’ to the new simulation time t’’ ≤ t’ then the lazy
cancellation mechanism will re-execute the events in the interval [t’’,t’] and will only
send anti-events for events that had been sent during the first execution of that time
interval but that were not generated during the re-execution. The difference between aggressive
cancellation and lazy cancellation can be seen in the following diagram. In this diagram
the event index is describing the scheduled time of the event.
[Figure 6 compares aggressive and lazy cancellation on a time axis: with aggressive
cancellation the rollback immediately issues an anti-event for the sent event e4, which
is then sent again during the re-execution; with lazy cancellation no anti-event is
needed because e4 is generated again. The legend distinguishes events sent to other
LPs, anti-events and the rollback.]
Figure 6: Comparison of aggressive and lazy cancellation
As shown in Figure 6, lazy cancellation can reduce cascaded rollbacks but it can also
allow false events to propagate further and therefore lead to longer cascaded rollbacks
when such false events are cancelled.
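The bookkeeping behind lazy cancellation can be sketched as follows. This is a simplified Java illustration with invented names in which events are identified only by their time stamps: after a rollback the events sent during the first execution are remembered, and an anti-event is only issued for a remembered event that the re-execution does not generate again.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the lazy cancellation bookkeeping for one rollback interval.
public class LazyCancellation {
    private final Set<Double> sentBeforeRollback = new HashSet<>();
    private final Set<Double> regenerated = new HashSet<>();

    // Remember an event that was sent and then rolled back.
    void rolledBackSend(double t) { sentBeforeRollback.add(t); }

    // Called for each event generated during the re-execution; returns true if
    // the event still has to be transmitted (i.e. it was not already sent
    // before the rollback and left uncancelled).
    boolean mustSend(double t) {
        regenerated.add(t);
        return !sentBeforeRollback.contains(t);
    }

    // Anti-events for sends of the first execution that were not regenerated.
    Set<Double> antiEvents() {
        Set<Double> anti = new HashSet<>(sentBeforeRollback);
        anti.removeAll(regenerated);
        return anti;
    }
}
```

Aggressive cancellation corresponds to issuing anti-events for all entries of the history immediately during the rollback instead of waiting for the re-execution.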
The concept of Global Virtual Time (GVT) is used to regain memory and to control the
overall global progress of the simulation. The GVT is defined as the minimum of the
local simulation time, also called Local Virtual Time (LVT), of all LPs and of the time
stamps of all events that have been sent but not yet processed by the receiving LP [16].
The GVT describes the minimum simulation time any unexecuted event can have at a
particular point in real time. It therefore acts as a guarantee for all executed events with
a time stamp smaller than the GVT, which can now be deleted. Further memory is freed
by removing all state checkpoints with a virtual time less than the GVT except the one
closest to the GVT.
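The GVT definition amounts to a global minimum, as the following minimal Java sketch shows. The method and parameter names are invented here; the real difficulty in practice is obtaining a consistent snapshot of the local virtual times and in-transit event time stamps across all LPs, which this sketch ignores:

```java
import java.util.List;

// Sketch of the GVT definition: the minimum over all local virtual times and
// over the time stamps of all events in transit between LPs.
public class Gvt {
    static double gvt(List<Double> localVirtualTimes, List<Double> inTransitEventTimes) {
        double min = Double.POSITIVE_INFINITY;
        for (double t : localVirtualTimes) min = Math.min(min, t);
        for (double t : inTransitEventTimes) min = Math.min(min, t);
        return min;
    }

    // Executed events (and all but the closest checkpoint) with a time stamp
    // below the GVT can be freed, as described above.
    static boolean canFree(double timeStamp, double gvt) { return timeStamp < gvt; }
}
```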
Both conservative and optimistic algorithms have their advantages and disadvantages.
The speedup of conservative algorithms can be limited because only guaranteed events
are executed. Compared to conservative algorithms, optimistic algorithms can offer
greater exploitation of parallelism [5] and they are less reliant on application specific
information [11] or information about the communication topology. But optimistic
algorithms have the overhead of maintaining rollback information and over-optimistic
event execution in some LPs can lead to frequent and cascaded rollbacks and result in a
degradation of the effective processing rate of events. Therefore research has focused on
combining the advantages of conservative and optimistic algorithms creating so-called
hybrid algorithms and on controlling the optimism in Time Warp. Such attempts to limit
the optimism in Time Warp can be grouped into non-adaptive algorithms, adaptive
algorithms with local state and adaptive algorithms with global state. Carl Tropper [34]
and Samir R. Das [5] both give a good overview on algorithms in these categories.
The group of non-adaptive algorithms for instance contains algorithms that use time
windows in order to limit how far ahead of the current GVT single LPs can process their
events, which limits the frequency and length of rollbacks. Other algorithms in this
group add conservative ideas to Time Warp. One example for this is the Breathing Time
Buckets algorithm (also known as SPEEDES algorithm) [30]. Like Time Warp this
algorithm executes all local events immediately and performs local rollbacks if required
but it only sends events to other LPs that have been guaranteed by the current GVT and
by doing so avoids cascaded rollbacks. The problem of all these algorithms is that either
the effectiveness depends on finding the optimum value for static parameters like the
window size, or conservative aspects of the algorithm limit its effectiveness for models
with certain characteristics. Finding the optimum value for such control parameters can
be difficult for simulation modellers, and many simulation models show a very dynamic
behaviour of their characteristics, which would require different parameters at different
times of the simulation.
Adaptive algorithms solve this problem by dynamically adapting the control parameters
of the synchronisation algorithm according to “selected aspects of the state of the
simulation” [24]. Some of these algorithms use mainly global state information like the
Adaptive Memory Management algorithm [6], which uses the total amount of memory
used by all LPs or the Near Perfect State Information algorithms [28] that are based on
the availability of a reduced information set that almost perfectly describes the current
global state of the simulation. Adaptive algorithms based on local state use only local
information available to each LP in order to change the control parameters. They collect
historic local state information and from these try to predict future local states and the
required control parameter. Some examples for adaptive algorithms using local state
information are Adaptive Time Warp [4], Probabilistic Direct Optimism Control [7] and
the Shock Resistant Time Warp algorithm [8].
3 Ad Hoc Grid Aspects
There are certain requirements that a Grid environment needs to fulfil in order to be
suitable for Ad Hoc Grids. These are outlined in this main section of the report and the
Grid middleware ProActive [15] is chosen for the planned implementation of a Grid-
based parallel simulator because it fulfils the requirements mentioned.
3.1 Considerations
In section 2.1.1 Ad Hoc Grids were described as dynamic, spontaneous and transient
resource sharing infrastructures. The dynamic and transient structure of Ad Hoc Grids
and the fact that Ad Hoc Grids should only have a minimal administrative overhead
compared to traditional Grids creates special requirements that a Grid environment
needs to fulfil in order to be suitable for Ad Hoc Grids. These requirements include
automatic service deployment, service migration, fault tolerance and the discovery of
resources.
3.1.1 Service Deployment
In a traditional Grid environment the deployment of Grid services is performed by an
administrative authority that is also responsible for the usage policy and the monitoring
of the Grid services. Grid services are usually deployed by installing a service factory
onto the nodes. A service is then instantiated by calling the service factory for that
service which will return a handle to the newly created service instance. In traditional
Grid environments the deployment of service factories requires special access
permissions and is performed by administrators.
Because of their dynamically changing structure Ad Hoc Grids need different ways of
deploying Grid services that impose less administrative overhead. Automatic or hot
service deployment has been suggested as a possible solution [10]. A Grid environment
suitable for Ad Hoc Grids will have to provide means of installing services onto nodes
either automatically or with very little administrative overhead.
3.1.2 Service Migration
Because Ad Hoc Grids allow a transient collaboration in which nodes can join or leave
at different times, a Grid application cannot rely on the discovered resources being
available for the whole runtime of the application. One
solution to reach some degree of certainty about the availability of resources within an
Ad Hoc Grid is the introduction of a scheme where individual nodes of the Grid
guarantee the availability of the resources provided by them for a certain time as
suggested in [1]. But such guarantees might not be possible for all transient nodes,
especially for personal individual nodes as shown in the example Ad hoc Grid B in
section 2.1.1.
Whether or not guarantees are used for the availability of resources an application
running within an Ad Hoc Grid will have to migrate services or other resources from a
node that wishes to leave the collaboration to another node that is available.
The migration of services or processes within distributed systems is a known problem
and a detailed description can be found in [32]. A Grid environment for Ad Hoc Grids
will have to support service migration in order to adapt to the dynamically changing
structure of the Grid.
3.1.3 Fault Tolerance
Ad Hoc Grids can contain transient nodes like personal computers and there might be
no guarantee for how long such nodes are available to the Grid application. In addition,
Ad Hoc Grids might be based on off-the-shelf computing and networking hardware that
is more susceptible to hardware faults than special-purpose-built hardware.
A Grid environment suitable for Ad Hoc Grids will therefore have to provide
mechanisms that offer fault tolerance and that can handle the loss of the connection to a
node or the unexpected disappearing of a node in a manner that is transparent to Grid
applications using the Ad Hoc Grid.
3.1.4 Resource Discovery
Resource discovery is one of the main tasks of Grid environments. It is often
implemented by a special resource discovery service that keeps a directory of available
resources and their specifications. But in an Ad Hoc Grid this becomes more of a
challenge because of its dynamically changing structure. The task of the resource
discovery can be divided further into the subtasks of node discovery and node property
assessment [27]. The node discovery task deals with the detection of new nodes that are
joining and existing nodes that are leaving the collaboration. In an Ad Hoc Grid this
detection has to be optimised towards the detection of frequent changes in the Grid
structure. When a new node has joined the collaboration then its properties and shared
resources will have to be discovered which is described by the node property
assessment task. In addition to this high-level resource information some Grid
environments also provide low-level resource information about the nodes. Such low-
level resource information can include properties like the operating system type and
available hardware resources. But depending on the abstraction level implemented by
the Grid environment, such low-level resource information might be neither needed nor
accessible for Grid applications.
The minimum resource discovery functionality that an Ad Hoc Grid environment has to
provide is the node discovery and more specifically the detection of new nodes joining
the Grid structure and existing nodes that are leaving the structure.
3.2 ProActive
ProActive is a Grid middleware implemented in Java that supports parallel, distributed,
and concurrent computing including mobility and security features within a uniform
framework [15]. It is developed and maintained as an open-source project at INRIA4
and uses the Active Object pattern to provide remotely accessible objects that can act as
Grid services or mobile agents. Calls to such Active Objects can be performed
asynchronously using a future-based synchronisation scheme known as wait-by-necessity
for return values. A detailed documentation including programming tutorials as well as
the full source code can be found at the ProActive Web site [15].
4 Institut national de recherche en informatique et en automatique (National Institute for
Research in Computer Science and Control)
ProActive was chosen as the Grid environment for the implementation of this project
because it fulfils the specific requirements of Ad Hoc Grids as outlined in 3.1. As such it
is very well suited for the dynamic and transient structure of Ad Hoc Grids and allows
the setup of Grid structures with very little administrative overhead. The next few
sections will briefly describe the features of ProActive that make it especially suited for
Ad Hoc Grids.
3.2.1 Descriptor-Based Deployment
ProActive uses a deployment descriptor XML file to separate Grid applications and
their source code from deployment related information. The source code of such Grid
applications will only refer to virtual nodes. The actual mapping from a virtual node to
real ProActive nodes is defined by the deployment descriptor file. When a Grid
application is started ProActive will read the deployment descriptor file and will provide
access to the actual nodes within the Grid application. The deployment descriptor file
includes information about how the nodes are acquired or created. ProActive supports
the creation of its nodes on physical nodes via several protocols, these include for
instance ssh, rsh, rlogin as well as other Grid environments like Globus Toolkit or glite.
Alternatively ProActive nodes can be started manually using the startNode.sh script
provided. For the actual communication between Grid nodes, ProActive can use a
variety of communication protocols like for instance rmi, http or soap. Even file transfer
is supported as part of the deployment process. Further details about the deployment
functionality provided by ProActive can be found in its documentation at the ProActive
Web site [15].
3.2.2 Peer-to-Peer Infrastructure
ProActive provides a self-organising Peer-to-Peer functionality that can be used to
discover new nodes, which are not defined within the deployment descriptor file of a
Grid application. The only thing required is an entry point into an existing ProActive-
based Peer-to-Peer network, for instance through a known node that is already part of
that network. Further nodes from the Peer-to-Peer network can then be discovered and
used by the Grid application. The Peer-to-Peer functionality of ProActive is not limited
to sub-networks, it can communicate through firewalls and NAT routers and is therefore
suitable for Internet-based Peer-to-Peer infrastructures. It is also self-organising which
means that an existing Peer-to-Peer network tries to keep itself alive as long as there are
nodes belonging to it.
3.2.3 Active Object Migration
In ProActive Active Objects can easily be migrated between different nodes. This can
either be triggered by the Active Object itself or by an external tool. The migration
functionality is based on standard Java serialisation, which is why Active Objects that
need to be migrated and their private passive objects have to be serialisable. A detailed
description of the migration functionality including examples can be found in the
ProActive documentation.
3.2.4 Transparent Fault Tolerance
ProActive can provide fully transparent fault tolerance to Grid applications. Fault
tolerance can be enabled for a Grid application simply by configuring it within the
deployment descriptor. The only requirement is that Active Objects for
which fault tolerance is to be enabled need to be serialisable.
There are currently two fault tolerance protocols provided by ProActive. Both protocols
use checkpointing and are based on the standard Java serialisation functionality. Further
details about how the fault tolerance works and how it is configured can be found in the
ProActive documentation.
Parallel Transaction-oriented Simulation
4 Parallel Transaction-oriented Simulation
4.1 Past research work
Past research performed by the author looked at the parallelisation of transaction-
oriented simulation using an existing Matlab-based5 GPSS simulator and Message-
Passing for the communication [19]. It was shown that the Breathing Time Buckets
algorithm, which is also known as SPEEDES algorithm (a description can be found in
section 2.5.2), can be applied to transaction-oriented simulation. This algorithm uses a
relatively simple communication scheme without anti-events and cancellations.
But further evaluation has revealed that the Breathing Time Buckets algorithm is not
well suited for transaction-oriented simulation. The reason for this is that the Breathing
Time Buckets algorithm makes use of what is known as the event horizon [29]. This
event horizon is the time stamp of the earliest new event generated by the execution of
the current events. Using this event horizon the Breathing Time Buckets algorithm can
execute local events until it reaches the time of a new event that needs to be sent to
another LP. At this point a GVT calculation is required because only events guaranteed
by the GVT can be sent. The Breathing Time Buckets algorithm works well for discrete
event models that have a large event horizon, i.e. where current events create new
events that are relatively far in the future so that many local events can be executed
before a GVT calculation is required. This is where the Breathing Time Buckets
algorithm fails when it is applied to transaction-oriented simulation. In transaction-
oriented simulation the simulation time does not change while a Transaction is moved.
Whenever a Transaction moves from one LP to another this results in an event horizon
of zero because the time stamp of the Transaction in the new LP will be the same as
the time stamp it had in the LP from which it was sent. The validation of the parallel
transaction-oriented simulator based on Breathing Time Buckets (alias SPEEDES)
showed that a GVT calculation was required each time a Transaction needed to be sent
to another LP.
5 MATLAB is a numerical computing environment and programming language created
by The MathWorks Inc.
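The event horizon behaviour described above can be illustrated with a small sketch (illustrative Python, not code from any of the simulators discussed; the function name and tuple representation are assumptions):

```python
def event_horizon(generated_events):
    """Event horizon [29]: the time stamp of the earliest new event
    generated while executing the current events.

    generated_events is a list of (time_stamp, target_lp) tuples
    (an illustrative representation, not simulator code).
    """
    if not generated_events:
        return float("inf")  # no new events: local execution can continue
    return min(ts for ts, _ in generated_events)

# Discrete event model with a large event horizon: new events lie well in
# the future, so many local events can be executed before a GVT calculation.
assert event_horizon([(17.0, "LP2"), (25.0, "LP1")]) == 17.0

# Transaction-oriented model: a Transaction transferred at simulation time
# 10 keeps its time stamp, so the event horizon collapses to the current
# time and a GVT calculation is needed for every transfer.
assert event_horizon([(10.0, "LP2")]) == 10.0
```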
The past research described above concludes that the Breathing Time Buckets
algorithm does not perform well for transaction-oriented simulation, but it still
provides some useful findings about the application of discrete event simulation
algorithms to transaction-oriented simulation. Those findings that also apply to
this work are outlined in the following sections.
4.1.1 Transactions as events
An event can be described as a change of the state at a specified time. From the
simulation perspective this change of state is always caused by an action (e.g. the
execution of an event procedure). Therefore an event can also be seen as an action that
is performed at a specific point in time. In transaction-oriented simulation the state of
the simulation system is changed by the execution of blocks through Transactions.
Transactions are moved from block to block at a specific point in time as long as they
are movable, i.e. not advanced and not terminated. The movement of a Transaction
at a specific point in time, for as long as the Transaction is movable, therefore
describes an action and is thus equivalent to an event in the discrete event model.
Considering this equivalence it is generally possible to apply synchronisation
algorithms and other techniques for discrete event simulation also to transaction-
oriented simulation. However, because transaction-oriented simulation has specific
properties, some algorithms are better suited to it than others.
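This equivalence can be made concrete with a minimal sketch (illustrative Python; the Move class, the scheduling key, and the block names in the example are assumptions, not simulator code): a Transaction movement is an action bound to a simulation time and a priority, and can therefore be scheduled exactly like a discrete event.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Move:
    """A Transaction movement treated as an event: an action bound to a
    simulation time and a priority (higher priority moves first at equal time)."""
    time: float
    priority: int
    action: Callable[[], None]

log: List[str] = []
moves = [
    Move(5.0, 0, lambda: log.append("xact2 enters QUEUE")),
    Move(3.0, 0, lambda: log.append("xact1 seizes FACILITY")),
    Move(3.0, 5, lambda: log.append("xact0 seizes FACILITY first")),
]

# An event-list scheduler can process Transaction moves exactly like
# discrete events: ordered by time stamp, then by descending priority.
for m in sorted(moves, key=lambda m: (m.time, -m.priority)):
    m.action()

assert log == ["xact0 seizes FACILITY first",
               "xact1 seizes FACILITY",
               "xact2 enters QUEUE"]
```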
4.1.2 Accessing objects in other LPs
In a simulation that partitions the simulation model across different LPs it is
possible that the simulation of the model partition within one LP needs to access an
object in another LP. For instance this could be a TEST block within one LP trying to
access the properties of a STORAGE entity within another LP. The main problem for
accessing objects like this in other LPs is that at a certain point of real time each LP can
have a different simulation time. Figure 7 shows an example for this problem. In this
example the LP1 that contains object o1 has already reached the simulation time 12 and
the LP2, which is trying to access the object o1 has reached the simulation time 5. It can
be seen that event e4 from LP2 would potentially read the wrong value for object o1
because this read access should happen at the simulation time 5 which is before the
event e2 at LP1 overwrote the value of o1. Instead event e4 reads the value of o1 as it
appears at the simulation time 12.
[Figure: LP1 (current simulation time = 12, current event = e3) holds object o1; its event list contains e1 (3, read o1) and e2 (8, write o1) as past events and e3 (12, ...) at the current event position. LP2 (current simulation time = 5, current event = e4) has e4 (5, read o1) at its current event position.]
Figure 7: Accessing objects in other LPs
Because accessing an object within another LP is an action that is linked to a specific
point of simulation time it can also be viewed as an event according to the description of
events in section 4.1.1. Like other events they have to be executed in the correct causal
order. This means that event e4 that is reading the value of object o1 has to happen at the
simulation time 5. Sending this event to LP1 would cause LP1 to roll back to the
simulation time 5 so that e4 would read the correct value.
Treating the access to objects as a special kind of event solves the problem mentioned
above. Such a solution can also be applied to transaction-oriented simulation by
implementing a simulation scheduler that besides Transactions can also handle these
kinds of object access events. Alternatively the object access could be implemented as a
pseudo Transaction that does not point to its next block but instead to an access method
that when executed performs the object access and for a read access returns the value.
Such a pseudo Transaction would send the value back to the originating LP and then be
deleted. Depending on the synchronisation algorithm it can also be useful to treat read and
write access differently. If for instance an optimistic synchronisation algorithm is used
that saves system states for possible required rollbacks then rollbacks as a result of
object read access events can be avoided if the LP that contains the object has passed
the time of the read access. In this case the object value for the required point in
simulation time could be read from the saved system state instead of rolling back the
whole LP.
Another even simpler solution to the problem of accessing objects in other LPs is to
prevent the access of objects in other LPs altogether, i.e. to allow access only to
objects within the same LP. This sounds like a contradiction, but if one LP is prevented
from accessing local objects in another LP, the event that wants to access a particular
object needs to be moved to the LP that holds that object. For the example from Figure
7 this means that instead of synchronizing the object access from event e4 on LP2 to
object o1 held by LP1 the event e4 is moved to LP1 that holds object o1 so that accessing
the object can be performed as a local action. This solution reduces the problem to the
general problem of moving events and synchronisation between LPs as solved by
discrete event synchronisation algorithms (see section 2.5).
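The second solution can be sketched as follows (illustrative Python; the LP class and its methods are assumptions, not simulator code): the access event itself is shipped to the LP that owns the object and executed there as a local action at the requested simulation time.

```python
class LP:
    """Minimal logical process: owns objects and executes access events locally."""

    def __init__(self, name):
        self.name = name
        self.objects = {}   # locally owned entities, e.g. a STORAGE or Facility
        self.pending = []   # (simulation_time, action) events received from other LPs

    def receive(self, time, action):
        # In a real optimistic LP, a time stamp in the LP's past would
        # trigger a rollback here (see section 2.5).
        self.pending.append((time, action))

    def execute_pending(self):
        results = []
        for time, action in sorted(self.pending, key=lambda e: e[0]):
            results.append(action(self))
        self.pending.clear()
        return results

lp1 = LP("LP1")
lp1.objects["o1"] = 42

# LP2 wants to read o1 at simulation time 5: instead of reading the remote
# (possibly future) value, the access event is moved to LP1 and the read
# becomes a local action executed at the correct simulation time.
lp1.receive(5, lambda lp: lp.objects["o1"])
assert lp1.execute_pending() == [42]
```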
4.1.3 Analysis of GPSS language
A synchronisation strategy is a requirement for parallel discrete event simulation
because LPs cannot predict the correct causal order of the events they will execute as
they can receive further events from other LPs at any time. When applying discrete
event synchronisation algorithms to transaction-oriented simulation based on GPSS/H it
is first of interest to analyse which of the GPSS/H blocks6 can actually cause the
transfer of Transactions to another LP or which of them require access to objects that
might be located at a different LP. Because Transactions usually move from one block
to the next a transfer to a different LP can only be the result of a block that causes the
execution of a Transaction to jump to a different block than the next following including
blocks that can cause a conditional branching of the execution path. The following two
tables list GPSS/H blocks that can change the execution path of a Transaction or that
access other objects within the model.
6 A detailed description of the GPSS/H language and its block types can be found in
[26].
Blocks that can change the execution path
Block Change of execution path
TRANSFER Jump to specified block
SPLIT Jump of Transaction copy to specified block
GATE Jump to specified block depending on Logic Switch
TEST Jump to specified block depending on condition
LINK Jump to specified block depending on condition
UNLINK Jump of the unlinked Transactions to specified block and possible
jump of Transaction causing the unlink operation
Table 1: Change of Transaction execution path
Blocks that can access objects
Block Access to object
SEIZE, RELEASE Access to Facility object
ENTER, LEAVE Access to Storage object
QUEUE, DEPART Access to Queue object
LOGIC Access to Logic Switch
UNLINK Access to User Chain
TERMINATE Access to Termination Counter
Table 2: Access to objects
4.2 Synchronisation algorithm
An important conclusion from section 4.1 is that the choice of synchronisation
algorithm has a large influence on how much of the parallelism that exists in a
simulation model can be utilised by the parallel simulation system. A basic overview of
the classification of synchronisation algorithms for discrete event simulation was given
in section 2.5. Conservative algorithms utilise the parallelism less well than optimistic
algorithms because they require guarantees, which are often derived from additional
knowledge about the behaviour of the simulation model, like for instance the
communication topology or lookahead attributes of the model. For this reason
conservative algorithms are often used to simulate very specific systems where such
knowledge is given or can easily be derived from the model. For general simulation
systems optimistic algorithms are better suited as they can utilise the parallelism within
a model to a higher degree without requiring any guarantees or additional knowledge.
Another important aspect for choosing the right synchronisation algorithm is the
relation between the performance properties of the expected parallel hardware
architecture and the granularity of the parallel algorithm as outlined in section 2.2. In
order for the parallel algorithm to perform well in general on the target hardware
environment the granularity of the algorithm, i.e. the ratio between computation and
communication, has to fit the ratio of the computation performance and communication
performance of the parallel hardware.
The goal of this work is to provide a basic parallel transaction-oriented simulation
system for Ad Hoc Grid environments. Ad Hoc Grids can make use of special high
performance hardware but more likely will be based on standard hardware machines
using Intranet or Internet as the communication channel. It can therefore be expected
that Ad Hoc Grids will mostly be targeted at parallel systems with reasonable
computation performance but relatively poor communication performance.
4.2.1 Requirements
Considering the target environment of Ad Hoc Grids and the goal of designing and
implementing a general parallel simulation system based on the transaction-oriented
simulation language GPSS it can be concluded that the best suitable synchronisation
algorithm is an optimistic or hybrid algorithm that has a coarse grained granularity. The
algorithm should require only little communication compared to the amount of
computation it performs. At the same time the algorithm should be flexible enough to
adapt to a changing environment, as this is the case in Ad Hoc Grids. A further
requirement is that the algorithm can be adapted to and is suitable for transaction-
oriented simulation. Finding such an algorithm is a condition for achieving the outlined
goals.
4.2.2 Algorithm selection
Most optimistic algorithms are based on the Time Warp algorithm but attempt to limit
the optimism. As described in section 2.5.2 these algorithms can be grouped into non-
adaptive algorithms, adaptive algorithms with local state and adaptive algorithms with
global state. Non-adaptive algorithms usually rely on external parameters (e.g. the
window size for window based algorithms) to specify how strongly the optimism is
limited. Such algorithms are not ideal for a general simulation system as it can be
difficult for a simulation modeller to find the optimum parameters for each simulation
model. It is also common that simulation models change their behaviour during the
runtime of the simulation.
As a result later research has focused more on the adaptive algorithms, which qualify
for a general simulation system. They are also better suited for dynamically changing
environments like Ad Hoc Grids.
Two interesting adaptive algorithms are the Elastic Time algorithm [28] and the
Adaptive Memory Management algorithm [6]. The Elastic Time algorithm is based on
Near Perfect State Information (NPSI). It requires a feedback system that constantly
receives input state vectors from all LPs, processes these using several functions and
then returns output vectors to all LPs that describe how the optimism of each LP needs
to be controlled. As described in [28] for a shared memory system such a near-perfect
state information feedback system can be implemented using a dedicated set of
processes and processors but for a distributed memory system a high speed
asynchronous reduction network would be needed. This shows that the Elastic Time
algorithm is not suited for a parallel simulation system based on Grid environments
where communication links between nodes might use the Internet and nodes might not
be physically close to each other. Similar to the Elastic Time algorithm the Adaptive
Memory Management algorithm is also best suited for shared memory systems. The
Adaptive Memory Management algorithm is based on the link between optimism and
memory usage in optimistic algorithms. The more over-optimistic an LP is, the more
memory it uses to store the executed events and state information, which cannot be
committed and fossil collected as they are far ahead of the GVT. It is shown that by
limiting the overall memory available to the optimistic simulation artificially, the
optimism can also be controlled. For this the Adaptive Memory Management algorithm
uses a shared memory pool providing the memory used by all LPs. The algorithm then
dynamically changes the size of the memory pool and therefore the total amount of
memory available to the simulation based on several parameters like frequency of
rollbacks, fossil collections and cancel backs in order to find the optimum amount of
memory for the best performance. The required shared memory pool can easily be
provided in a shared memory system but in a distributed memory system implementing
it would require extensive synchronisation and communication between the nodes
which makes this algorithm unsuitable for this work.
An algorithm that is more applicable to Grid environments as it does not need a shared
memory or a high speed reduction network is the algorithm suggested in [33]. This
algorithm uses a Global Progress Window (GPW) described by the GVT and the Global
Furthest Time (GFT). Because the GVT is equivalent to the LVT of the slowest LP and
the GFT is the LVT of the LP furthest ahead in simulation time the GPW represents the
window in simulation time in which all LPs are located. This time window is then
divided further into the slow zone, the fast zone and the hysteresis zone as shown in
Figure 8.
GVT GFT
hysteresis
hl hu
Slow Zone Fast Zone
Figure 8: Global Progress Window with its zones [33]
The algorithm will slow down LPs in the fast zone and try to accelerate LPs in the slow
zone with the hysteresis zone acting as a buffer between the other two zones. This
algorithm could be implemented without any additional communication overhead
because the GFT can be determined and passed back to the LPs by the same process that
performs the GVT calculation. It is therefore well suited for very loosely coupled
systems based on relatively slow communication channels. The only small disadvantage
is that similar to many other algorithms the fast LPs will always be penalized even if
they don’t actually contribute to the majority of the cascaded rollback. In [33] the
authors also explore how feasible LP migration and load balancing is for reducing the
runtime of a parallel simulation.
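The zone test of the Global Progress Window can be sketched as follows (illustrative Python; how the boundaries hl and hu of the hysteresis zone are derived from the window is an assumption, [33] only defines the zones as shown in Figure 8):

```python
def gpw_zone(lvt, gvt, gft, hysteresis=0.2):
    """Classify an LP's local virtual time within the Global Progress
    Window [GVT, GFT]; LPs in the fast zone are slowed down, LPs in the
    slow zone accelerated, with the hysteresis zone acting as a buffer."""
    if gft == gvt:
        return "hysteresis"            # degenerate window: all LPs level
    pos = (lvt - gvt) / (gft - gvt)    # relative position in the window
    hl, hu = 0.5 - hysteresis / 2, 0.5 + hysteresis / 2
    if pos < hl:
        return "slow"
    if pos > hu:
        return "fast"
    return "hysteresis"

# With GVT = 10 and GFT = 30, an LP at LVT 12 lags behind and an LP at
# LVT 29 is far ahead of the pack.
assert gpw_zone(lvt=12, gvt=10, gft=30) == "slow"
assert gpw_zone(lvt=29, gvt=10, gft=30) == "fast"
assert gpw_zone(lvt=20, gvt=10, gft=30) == "hysteresis"
```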
The most promising algorithm with regard to the requirements outlined in 4.2.1 is the Shock
Resistant Time Warp algorithm [8]. This algorithm follows ideas similar to those of the Elastic
Time algorithm and the Adaptive Memory Management algorithm mentioned above but
at the same time is very different. Similar to the Elastic Time algorithm state vectors are
used to describe the current states of all LPs plus a set of functions to determine the
output vector but the Shock Resistant Time Warp algorithm does not require a global
state. Instead each LP tries to optimise its parameters towards the best performance.
Similar to the Adaptive Memory Management algorithm, the optimism is controlled
indirectly by setting artificial memory limits, but each LP will artificially limit its own
memory instead of using an overall memory limit for the whole simulation.
The Shock Resistant Time Warp algorithm was chosen for the implementation of the
parallel transaction-oriented simulator because it promises to be very adaptable and at
the same time is very flexible regarding changes in the environment, and it does not create
any additional communication overhead compared to Time Warp. The following section
will describe this algorithm in more detail.
4.2.3 Shock resistant Time Warp Algorithm
The Shock Resistant Time Warp algorithm [8] is a fully distributed approach to
controlling the optimism in Time Warp LPs that requires no additional communication
between the LPs. It is based on the Time Warp algorithm but extends each LP with a
control component called LPCC that constantly collects information about the current
state of the LP using a set of sensors. These sets of sensor values are then translated into
sets of indicator values representing state vectors for the LP. The LPCC will keep a
history of such state vectors so that it can search for past state vectors that are similar to
the current state vector but provide a better performance indicator. An actuator value
will be derived from the most similar of such state vectors that is subsequently used to
control the optimism of the LP. Figure 9 gives an overview of the interaction between
LPCC and LP.
Figure 9: Overview of LP and LPCC in Shock Resistant Time Warp [8]
The specific sensors used by the LPCC are described in Table 3 but other or additional
sensors could be used if appropriate. There are two types of sensors. The point sample
sensors describe a momentary value of a performance or state metric, which can
fluctuate significantly, whereas the cumulative sensors characterise metrics that contain
a sum value produced over the runtime of the simulation. The indicator for each sensor
is calculated depending on which type of sensor it is. For cumulative sensors the rate of
increase over a specified time period is used as the indicator value, and for point sample
sensors the arithmetic mean value over the same time period is used. Table 4 shows the
corresponding indicators.
Sensor Type Description
CommittedEvents cumulative total number of events committed
SimulatedEvents cumulative total number of events simulated
MsgsSent cumulative total number of (positive) messages sent
AntiMsgsSent cumulative total number of anti-messages sent
MsgsRcvd cumulative total number of (positive) messages received
AntiMsgsRcvd cumulative total number of anti-messages received
EventsRollback cumulative total number of events rolled back
EventsUsed point sample momentary number of events in use
Table 3: Shock Resistant Time Warp sensors
Indicator Description
EventRate number of events committed per second
SimulationRate number of events simulated per second
MsgsSentRate number of (positive) messages sent per second
AntiMsgsSentRate number of anti-messages sent per second
MsgsRcvdRate number of (positive) messages received per second
AntiMsgsRcvdRate number of anti-messages received per second
EventsRollbackRate number of events rolled back per second
MemoryConsumption average number of events in use
Table 4: Shock Resistant Time Warp indicators
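The translation of sensor values into indicator values can be sketched as follows (illustrative Python; the function names are assumptions): cumulative sensors yield a rate of increase over the sampling period, point sample sensors an arithmetic mean over the same period.

```python
def cumulative_indicator(previous_total, current_total, period_seconds):
    """Rate of increase of a cumulative sensor over the sampling period,
    e.g. CommittedEvents -> EventRate (events committed per second)."""
    return (current_total - previous_total) / period_seconds

def point_sample_indicator(samples):
    """Arithmetic mean of point sample readings over the sampling period,
    e.g. EventsUsed -> MemoryConsumption (average number of events in use)."""
    return sum(samples) / len(samples)

# 500 events committed during a 10 second sampling period:
assert cumulative_indicator(1000, 1500, 10.0) == 50.0
# EventsUsed sampled four times during the same period:
assert point_sample_indicator([40, 60, 50, 50]) == 50.0
```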
Two of these indicators are slightly special. The EventRate indicator, which describes
the number of events committed per second during a time period, is the performance
indicator used to identify how much useful work has been performed. The actuator
value MemoryLimit is derived from the MemoryConsumption indicator. For a state
vector with n different indicator values the LPCC will use an n-dimensional state vector
space to store and compare the state vectors. The similarity of two state vectors within
this state vector space is characterised by the Euclidean distance between the vectors.
When searching for the most similar historic state vector with a higher performance
indicator, the Euclidean distance is calculated ignoring the indicators EventRate
and MemoryConsumption, because the EventRate is the indicator that the LPCC is
trying to optimise and the MemoryConsumption is directly linked to the MemoryLimit
actuator controlled by the LPCC.
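This similarity search can be sketched as follows (illustrative Python; representing state vectors as dictionaries and the particular indicator subset are assumptions for the sketch):

```python
import math

IGNORED = {"EventRate", "MemoryConsumption"}

def distance(v1, v2):
    """Euclidean distance between two state vectors (dicts of indicator
    name -> value), ignoring the optimised and the controlled indicator."""
    keys = [k for k in v1 if k not in IGNORED]
    return math.sqrt(sum((v1[k] - v2[k]) ** 2 for k in keys))

def best_similar_state(current, history):
    """Most similar past state vector that has a higher EventRate."""
    candidates = [h for h in history if h["EventRate"] > current["EventRate"]]
    return min(candidates, key=lambda h: distance(current, h), default=None)

current = {"EventRate": 100, "MemoryConsumption": 80, "EventsRollbackRate": 30}
history = [
    {"EventRate": 150, "MemoryConsumption": 60, "EventsRollbackRate": 25},
    {"EventRate": 90,  "MemoryConsumption": 40, "EventsRollbackRate": 5},
]
# Only the first historic state qualifies (higher EventRate); its
# MemoryConsumption would then drive the MemoryLimit actuator.
assert best_similar_state(current, history)["MemoryConsumption"] == 60
```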
Keeping a full history of the past state vectors would require a large amount of memory
and would create an exponentially increasing performance overhead. For these reasons
the Shock Resistant Time Warp algorithm uses a clustering mechanism to cluster similar
state vectors. The algorithm will keep a defined number of clusters. At first each new
state is stored as a new cluster, but when the cluster limit is reached, a new state is
added to an existing cluster if the distance between the state and that cluster is smaller
than any of the inter-cluster distances; otherwise the two closest clusters are merged into
one and the freed cluster is replaced with the new state vector. The clustering
mechanism limits the total number of clusters stored and at the same time clusters will
move their location within the state space to reflect the mean position of the state
vectors they represent.
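The clustering mechanism can be sketched as follows (illustrative Python; the exact merge rule and the centroid update towards the mean are assumptions based on the description above, not the algorithm's published pseudocode):

```python
import math

MAX_CLUSTERS = 3

def add_state(clusters, state):
    """Insert a state vector (tuple) into a bounded list of clusters.

    Each cluster is (centroid, count). Below the limit, every new state
    becomes its own cluster; afterwards the state joins the nearest cluster
    (moving its centroid towards the mean of the represented vectors) if it
    is closer to that cluster than the two closest clusters are to each
    other; otherwise those two clusters are merged and the freed slot is
    taken by the new state.
    """
    if len(clusters) < MAX_CLUSTERS:
        clusters.append((state, 1))
        return
    nearest = min(range(len(clusters)),
                  key=lambda j: math.dist(clusters[j][0], state))
    d_state = math.dist(clusters[nearest][0], state)
    pairs = [(a, b) for a in range(len(clusters))
             for b in range(a + 1, len(clusters))]
    a, b = min(pairs, key=lambda p: math.dist(clusters[p[0]][0],
                                              clusters[p[1]][0]))
    if d_state < math.dist(clusters[a][0], clusters[b][0]):
        centroid, n = clusters[nearest]
        clusters[nearest] = (tuple((c * n + s) / (n + 1)
                                   for c, s in zip(centroid, state)), n + 1)
    else:
        (ca, na), (cb, nb) = clusters[a], clusters[b]
        clusters[a] = (tuple((x * na + y * nb) / (na + nb)
                             for x, y in zip(ca, cb)), na + nb)
        clusters[b] = (state, 1)

clusters = []
for s in [(0.0, 0.0), (10.0, 10.0), (20.0, 20.0), (0.5, 0.5)]:
    add_state(clusters, s)

assert len(clusters) == MAX_CLUSTERS       # history stays bounded
assert clusters[0] == ((0.25, 0.25), 2)    # centroid moved towards the mean
```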
The Shock Resistant Time Warp algorithm as described in [8] is specific to discrete
event simulation but it can also be applied to transaction-oriented simulation because of
the equivalence between events in discrete event simulation and the movement of
Transactions in transaction-oriented simulation as outlined in 4.1.1. Because the
transaction-oriented simulation does not know events as such the names of the sensors
and indicators described above need to be changed to avoid confusion when applying
the Shock Resistant Time Warp algorithm to transaction-oriented simulation. The two
tables below show the sensor and indicator names that will be used for this work.
Discrete event sensor Transaction-oriented sensor
CommittedEvents CommittedMoves
EventsUsed UncommittedMoves
SimulatedEvents SimulatedMoves
MsgsSent XactsSent
AntiMsgsSent AntiXactsSent
MsgsRcvd XactsReceived
AntiMsgsRcvd AntiXactsReceived
EventsRollback MovesRolledback
Table 5: Transaction-oriented sensor names
Discrete event indicator Transaction-oriented indicator
EventRate CommittedMoveRate
MemoryConsumption AvgUncommittedMoves
SimulationRate SimulatedMoveRate
MsgsSentRate XactSentRate
AntiMsgsSentRate AntiXactSentRate
MsgsRcvdRate XactReceivedRate
AntiMsgsRcvdRate AntiXactReceivedRate
EventsRollbackRate MovesRolledbackRate
Table 6: Transaction-oriented indicator names
4.3 GVT Calculation
The concept of Global Virtual Time (GVT) was mentioned and briefly explained in
2.5.2. GVT is a fundamental concept of optimistic synchronisation algorithms and
describes a lower bound on the simulation times of all LPs. Its main purpose is to
guarantee past simulation states as being correct so that the memory for these saved
states can be reclaimed through fossil collection. Another important purpose is to
determine the overall progress of the simulation, which includes the detection of the
simulation end. Besides these reasons optimistic parallel simulations can often run
without any additional GVT calculations for long time periods or even until they reach
the simulation end if enough memory for the required state saving is available. In
environments with a relatively low communication performance like Computing Grids
it is desirable to minimise the need for GVT calculations because the GVT calculation
process is based on the exchange of messages and adds a communication overhead.
The best-known GVT calculation algorithm was suggested by Jefferson [16]. It defines
the GVT as the minimum of all local simulation times and the time stamps of all events
sent but not yet acknowledged as being handled by the receiving LP. The planned
parallel simulator will use this algorithm for the GVT calculation because it is relatively
easy to implement and well studied. Future work could also look at alternative GVT
algorithms that might be suitable for Grid environments, like the one suggested in [20].
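Jefferson's GVT definition can be sketched as follows (illustrative Python; in a real system collecting the inputs requires a message-based protocol, which is omitted here):

```python
def calculate_gvt(local_virtual_times, unacknowledged_sends):
    """GVT after Jefferson [16]: the minimum of all local simulation times
    and the time stamps of all events sent but not yet acknowledged as
    being handled by the receiving LP."""
    return min(list(local_virtual_times) + list(unacknowledged_sends))

# Three LPs; one Transaction with time stamp 8 is still in transit, so the
# GVT must not advance past it even though all LVTs are higher.
assert calculate_gvt([12, 15, 20], [8]) == 8
# Without messages in transit, the slowest LP determines the GVT.
assert calculate_gvt([12, 15, 20], []) == 12
```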
The movement of a Transaction in transaction-oriented simulation can be seen as
equivalent to an event being executed in discrete event simulation as concluded in 4.1.1.
But in transaction-oriented simulation the causal order is not only determined by the
movement time of a Transaction but also by its priority because if several Transactions
exist that have the same move time then they are moved through the system in order of
their priority, i.e. Transactions with higher priority first. As a result the priority had to be
included in the GVT calculation in [19] because the Breathing Time Buckets algorithm
(SPEEDES algorithm) used there needs the GVT to guarantee outgoing Transactions.
For a parallel transaction-oriented simulator based on the Time Warp algorithm or the
Shock Resistant Time Warp algorithm it is not necessary to include the Transaction
priority in the GVT calculation because the GVT is only used to determine the progress
of the overall simulation and to regain memory through fossil collection. For the Shock
Resistant Time Warp algorithm one additional use of the GVT is to determine realistic
values for the CommittedEvents sensor. Events are committed when receiving a GVT
that is greater than the event’s time. As a result the number of committed events during
a certain period of time is only known if GVT calculations have been performed. The
suggested parallel simulator based on the Shock Resistant Time Warp algorithm will
therefore synchronise the processing of its LPCC with GVT calculations.
4.4 End of Simulation
In transaction-oriented simulation a simulation is complete when the defined end state is
reached, i.e. the termination counter reaches a value less or equal to zero. When using
an optimistic synchronisation algorithm for the parallelisation of transaction-oriented
simulation it is crucial to consider that optimistic algorithms will first execute all local
events without guarantee that the causal order is correct. They will recover from wrong
states by performing a rollback if it later turns out that the causal order was violated.
Therefore any local state reached by an optimistic LP has to be considered provisional
until a GVT has been received that guarantees the state. In addition, at any point in
real time each of the LPs has most likely reached a different local simulation time.
Therefore, after an end state that is guaranteed by a GVT has been reached by one of
the LPs, it is important to synchronise the states of all LPs so that the combined end
state from all model partitions is equivalent to the model end state that would have
been reached in a sequential simulator.
To summarise, a parallel transaction-oriented simulation based on an optimistic
algorithm is only complete when the defined end state has been reached in one of the
LPs and when this state has been confirmed by a GVT. Furthermore if the confirmed
end of the simulation has been reached by one of the LPs then the states of all the other
LPs need to be synchronised so that they all reflect the state that would exist within the
model when the Transaction causing the simulation end executed its TERMINATE
block. These significant aspects regarding the simulation end of a parallel transaction-
oriented simulation had not been considered in [19].
A mechanism is suggested for this work that leads to a consistent and correct global end
state of the simulation considering the problems mentioned above. For this mechanism
the LP reaching a provisional end state is switched into the provisional end mode. In this
mode the LP will stop processing any further Transactions, leaving the local model
partition in the same state but it will still respond to and process control messages like
GVT parameter requests and it will receive Transactions from other LPs that might
cause a rollback. The LP will stay in this provisional end mode until the end of the
simulation is confirmed by a GVT or a received Transaction causes a rollback with a
potential re-execution that is not resulting in the same end state. While the LP is in the
provisional end mode additional GVT parameters are passed on for every GVT
calculation denoting the fact that a provisional end state has been reached and the
simulation time and priority of the Transaction that caused the provisional end. The
GVT calculation process can then assess whether the earliest current provisional end
state is guaranteed by the GVT. If this is the case then all other LPs are forced to
synchronise to the correct end state by rolling back using the simulation time and
priority of the Transaction that caused the provisional end and the simulation is stopped.
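The confirmation step of the suggested mechanism can be sketched as follows (illustrative Python; the function and parameter names are assumptions, and the tuple comparison reflects the move time/priority order described in section 4.3):

```python
def confirmed_end(gvt, provisional_ends):
    """Check whether the earliest provisional end state is guaranteed.

    provisional_ends maps LP name -> (end_time, priority) of the
    Transaction that caused the provisional end. Higher priority moves
    first at equal time, so the earliest end is the one with the lowest
    time and, at equal time, the highest priority.
    """
    if not provisional_ends:
        return None
    lp, (t, prio) = min(provisional_ends.items(),
                        key=lambda item: (item[1][0], -item[1][1]))
    # The end state is only final once the GVT guarantees that no
    # Transaction with an earlier time stamp can still arrive.
    return (lp, t, prio) if gvt >= t else None

ends = {"LP2": (50.0, 1), "LP3": (50.0, 5)}
# GVT still behind the provisional ends: nothing is confirmed yet.
assert confirmed_end(42.0, ends) is None
# GVT has caught up: LP3 ends first (same time, higher priority), and all
# other LPs are rolled back using time 50.0 and priority 5.
assert confirmed_end(50.0, ends) == ("LP3", 50.0, 5)
```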
4.5 Cancellation Techniques
Transaction-oriented simulation has some specific properties compared to discrete event simulation. One of these properties is that Transactions do not consume simulation time while they are moving from block to block. This influences not only which synchronisation algorithms are suitable for transaction-oriented simulation, as described in 4.1, but also the cancellation techniques used. If a Transaction moves from LP1 to LP2 then it will arrive at LP2 with the same simulation time that it had at LP1. A Transaction moving from one LP to another is therefore equivalent to an event in discrete event simulation that, when executed, creates another event for the other LP with exactly the same time stamp. Because simulation models can contain loops, as is common in models of quality control systems where an item failing the quality control needs to loop back through the production process (see [26] for an example), this specific behaviour of transaction-oriented simulation can lead to endless rollback loops if aggressive cancellation is used (cancellation techniques were briefly described in 2.5.2).
The example in Figure 10 demonstrates this effect. It shows the movement of a Transaction x1 from LP1 to LP2; without a delay in simulation time, the Transaction is transferred straight back to LP1. As a result LP1 is rolled back to the simulation time just before x1 was moved. At this point two copies of Transaction x1 exist in LP1: the first is x1 itself, which needs to be moved again, and the second is x1', the copy that was sent back from LP2. This is the point where the execution differs between lazy cancellation and aggressive cancellation. With lazy cancellation x1 is moved again, resulting in the same transfer to LP2. But because x1 was already sent to LP2, it is not transferred again and no anti-Transaction is sent. From here LP1 simply proceeds moving the Transactions in its Transaction chain according to their simulation time (Transaction priorities are ignored for this example). As opposed to that, the rollback under aggressive cancellation results in an anti-Transaction being sent out for x1 immediately, which causes a second rollback in LP2 and another anti-Transaction for x1' being sent back to LP1. At the end both LPs are back in the same state in which they were before x1 was moved by LP1, and the same cycle of events starts again without any actual simulation progress.
[Figure omitted: timelines of LP1 and LP2 contrasting lazy and aggressive cancellation for Transactions x1/x1' and x2 (anti-Transactions x1-/x1'-); legend: Transaction transferred to other LP, Rollback, Anti-Transaction for other LP; under aggressive cancellation cycle 1 repeats endlessly.]
Figure 10: Cancellation in transaction-oriented simulation
It can therefore be concluded that lazy cancellation needs to be used for a parallel
transaction-oriented simulation based on an optimistic algorithm in order to avoid such
endless loops.
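The lazy-cancellation rule that breaks this loop can be sketched as follows: after a rollback, a re-sent Transaction is checked against the log of messages already sent, and identical re-sends are suppressed so that no anti-Transaction is generated. The class and method names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of lazy cancellation: identical messages regenerated
// after a rollback are matched against the send log and suppressed.
class LazySendLog {
    private final List<String> sentAfterRollbackPoint = new ArrayList<>();

    void recordSend(String transactionKey) {
        sentAfterRollbackPoint.add(transactionKey);
    }

    // Returns true only if the Transaction really has to be transferred again;
    // a match means no new transfer and no anti-Transaction is needed.
    boolean mustResend(String transactionKey) {
        return !sentAfterRollbackPoint.remove(transactionKey);
    }
}
```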
4.6 Load Balancing
Load balancing and the automatic migration of slow LPs to nodes or processors with a lighter workload has been suggested as a way to reduce the runtime of parallel simulations. This has been explored by the authors of [33], who concluded that the migration of LPs involves a “substantial amount of overheads in saving the process context, flushing the communication channels to prevent loss of messages”. Especially on loosely coupled systems with relatively slow communication channels, sending the full process context of an LP from one node to another can add a significant performance penalty to the overall simulation. This penalty depends on the size of the process context as well as the communication performance between the nodes involved in the migration. The gained performance, on the other hand, depends on the difference in processing performance and the other workload on these nodes. Reliably determining when such an automatic migration is beneficial within a loosely coupled, dynamically changing Ad Hoc Grid environment would be difficult, and it is likely that the performance penalty outweighs the gains.
This work will therefore not investigate load balancing and automatic LP migration for performance reasons, but will only support automatic LP migration as part of the fault tolerance functionality provided by ProActive and described in 3.1.3. Manual LP migration will be supported by the parallel simulator using ProActive tools.
4.7 Model Partitioning
Besides the chosen synchronisation algorithm, the partitioning of the simulation model also has a large influence on the performance of the parallel simulation, because the communication required between the Logical Processes depends to a large degree on how independent the partitions of a simulation model are. Looking at the requirements of a general-purpose transaction-oriented simulation system for Ad Hoc Grid environments in 4.2, the conclusion was drawn that the required communication needs to be kept to a minimum in order to reach acceptable performance results through parallelisation in such environments. The communication required for the exchange of Transactions between the Logical Processes is part of this overall communication.
A simulation model that is supposed to run in a Grid based parallel simulation system therefore needs to be partitioned in such a way that the expected number of Transactions moved within the partitions is significantly larger than the number of Transactions that need to be transferred between these partitions. This means that Grid based parallel simulation systems are best suited for the simulation of systems that contain relatively independent sub-systems.
In practice the ratio of computation performance to communication performance
provided by the underlying hardware architecture of the Grid environment will have to
match the ratio of computation performance to communication performance required by
the parallel simulation as reasoned in 2.2. Whether a partitioned simulation model will
perform well will therefore also depend on the underlying hardware architecture.
5 Implementation
The GPSS based parallel transaction-oriented simulator will be implemented using the
JavaTM 2 Platform Standard Edition 5.0, also known as J2SE5.0 [31] and ProActive
version 3.1 [15] as the Grid environment. An object-oriented design will be applied for
the implementation of the simulator and resulting classes will be grouped into a
hierarchy of packages according to the functional parts of the parallel simulator and the
implementation phases. The parallel simulator will use the logging library log4j [3] for all its output, which provides a flexible means to enable or disable specific parts of the output as required. log4j is also the logging library used by ProActive, so a single configuration file suffices to configure the logging of both ProActive and the parallel simulator.
5.1 Implementation Considerations
5.1.1 Overall Architecture
Figure 11 shows the suggested architecture of the parallel simulator including its main
components. The main parts of the parallel simulator will be the Simulation Controller
and the Logical Processes. The Simulation Controller controls the overall simulation. It
is created when the user starts the simulation and will use the Model Parser component to read the simulation model file and parse it into an in-memory object structure representation of the model. After the model is parsed the Simulation Controller will create the Logical Process instances, one for each model partition contained within the simulation model. The Simulation Controller and the Logical Processes will be implemented as ProActive Active Objects so that they can communicate with each other via method calls. Communication will take place between the Simulation Controller and the Logical Processes, but also among the Logical Processes themselves, for instance in order to exchange Transactions. Note that the communication between the Logical Processes is not illustrated in Figure 11. After the Logical Process instances have been created, they will be initialised, receive the model partitions that they are going to simulate from the Simulation Controller, and the simulation will be started.
[Figure omitted: the Simulation Controller with its Model Parser, GVT Calculation and Reporting components, and a Logical Process with its State List, State Cluster Space and Simulation Engine, the latter containing the Model Partition and the Transaction Chain.]
Figure 11: Architecture overview
Each Logical Process implements an LP according to the Shock Resistant Time Warp
algorithm. The main component of the Logical Process is the Simulation Engine, which
contains the Transaction chain and the model partition that is simulated. The Simulation
Engine is the part that performs the actual simulation: it moves the Transactions from block to block, executing each block's functionality on the Transactions passing through it.
Another important part of the Logical Process is the State List. It contains historic
simulation states in order to allow rollbacks as required by optimistic synchronisation
algorithms. Note that there will be other lists, like the list of Transactions received and the list of Transactions sent to other Logical Processes, which are not shown in Figure 11. Furthermore the Logical Process will contain the Logical Process Control Component (LPCC) according to the Shock Resistant Time Warp algorithm described in 4.2.3. Using specific sensors within the Logical Process, the LPCC will limit the optimism by means of an artificial memory limit if this promises better simulation performance under the current circumstances.
The Simulation Controller will perform GVT calculations in order to establish the overall progress of the simulation, and whenever one of the Logical Processes requests it because it needs to reclaim memory using fossil collection. GVT calculation will also be used to confirm a provisional simulation end state that might be reached by one of the Logical Processes.
When the end of the simulation is reached, the Simulation Controller will ensure that the partial models in all Logical Processes are set to the correct and consistent end state, and it will collect information from all Logical Processes in order to assemble and output the post simulation report.
5.1.2 Transaction Chain and Scheduling
Thomas J. Schriber gives an overview in sections 4 and 7 of [26] of how the Transaction chains and the scheduling work in the original GPSS/H implementation. The scheduling and the Transaction chains of the parallel transaction-oriented simulator of this work will be based on his description, but with some significant differences. Only one Transaction chain is used, containing Transactions for the current and future simulation time in sorted order. The Transactions within the chain are sorted ascending by their next moving time, and Transactions for the same time are additionally sorted descending by their priority. Transactions will be taken out of the chain before they are moved and put back into the chain after they have been moved (unless the Transaction has been terminated). The functionality that puts a Transaction back into the chain does so at the right position, ensuring the correct sort order of the Transactions within the chain. The scheduling will be slightly simpler than described in [26] because no “Model’s Status changed flag” is needed.
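The chain ordering just described (ascending by next moving time, descending by priority for equal times) can be sketched as follows; the class names are illustrative and not the simulator's actual API:

```java
import java.util.Comparator;
import java.util.LinkedList;
import java.util.List;

// Illustrative sketch of the single sorted Transaction chain.
class Txn {
    final long moveTime; final int priority; final int id;
    Txn(int id, long moveTime, int priority) {
        this.id = id; this.moveTime = moveTime; this.priority = priority;
    }
}

class TransactionChain {
    // Ascending by next moving time; for equal times, descending by priority.
    private static final Comparator<Txn> ORDER =
        Comparator.comparingLong((Txn t) -> t.moveTime)
                  .thenComparing((Txn t) -> t.priority, Comparator.reverseOrder());

    private final List<Txn> chain = new LinkedList<>();

    // Put a Transaction (back) into the chain at the position that keeps it sorted.
    void insert(Txn t) {
        int i = 0;
        while (i < chain.size() && ORDER.compare(chain.get(i), t) <= 0) i++;
        chain.add(i, t);
    }

    // Take the next Transaction to move out of the chain.
    Txn takeFirst() { return chain.remove(0); }
}
```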
Another difference is that, in order to keep the time management as simple as possible, the proposed parallel simulator will restrict the simulation time and time-related parameters to integer values instead of floating point values. At first this might seem like a major restriction, but it is not, because decimal places can be represented using scaling. If for instance a simulation time with three decimal places is needed, then a scale of 1000:1 can be used, which means 1000 time units of the parallel simulator represent one second of simulation time, or a single time unit represents 1 ms. The Java integer type long will be used for time values, providing a large value range that allows flexible scaling for different required precisions.
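The scaling idea can be illustrated in a few lines of Java; SimClock and its methods are hypothetical helper names, not part of the simulator:

```java
// Illustrative 1000:1 time scaling: 1000 internal units represent one second,
// so a single unit represents 1 ms, stored in a long as described above.
class SimClock {
    static final long SCALE = 1000; // units per second (three decimal places)

    static long toUnits(double seconds) {
        return Math.round(seconds * SCALE);
    }

    static double toSeconds(long units) {
        return units / (double) SCALE;
    }
}
```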
The actual scheduling will be implemented using a SimulationEngine class. This class will contain, for instance, the functionality for moving Transactions, the updating of the simulation time, and the Transaction chain itself. The SimulationEngine class will be implemented as part of the Basic GPSS Simulation Engine implementation phase detailed in section 5.2.2, and a description of how the scheduling was implemented can be found in 5.3.1.
5.1.3 Generation and Termination of Transactions
Performing transaction-oriented simulation in a parallel simulator also has an influence
on how Transactions are generated and how they are terminated.
Generating Transactions
Transactions are generated in the GENERATE blocks of the simulation. During the
generation each Transaction receives a unique numerical ID that identifies the
Transaction during the rest of the simulation. In a parallel transaction-oriented simulator
GENERATE blocks can exist in any of the model partitions and therefore in any of the
LPs. This requires a scheme that ensures the Transaction IDs generated in each LP are unique across the overall parallel simulation. Ideally such a scheme should require as little communication between the LPs as possible.
The scheme used by this parallel simulator will generate unique Transaction IDs without any additional communication overhead. This is achieved by partitioning the value range of the numeric IDs according to the number of partitions in the simulation model. The only requirement of this scheme is that all LPs are aware of the total number of partitions and LPs within the simulation; this information will be passed to them during the initialisation. The scheme is based on an offset that depends on the total number of LPs. Each LP has its own counter that is used to generate unique Transaction IDs. These counters are initialised with different starting values, and the same offset is used to increment a counter each time one of its values has been used for a Transaction ID. A simulation with n LPs (i.e. n partitions) will use the offset n to increment the local Transaction ID counters, and each LP will initialise its counter with its own number in the list of LPs. In a simulation with 3 LPs, LP1 would initialise its counter to the value 1, LP2 to 2 and LP3 to 3, and the increment offset used would be the total number of LPs, which is 3. The sequence of IDs generated by these LPs would be
LP1: 1, 4, 7, …; LP2: 2, 5, 8, …; LP3: 3, 6, 9, …; and so forth. Further advantages of this scheme are that it partitions the possible ID value range into equally large numerical partitions independent of the total size of the value range, and that it makes it possible to determine from a Transaction's ID in which partition it was generated.
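The ID scheme can be sketched as follows; the class and method names are illustrative only:

```java
// Illustrative sketch of the offset-based Transaction ID scheme: with n LPs,
// LP number k (1-based) issues the IDs k, k+n, k+2n, …, and the originating
// LP can be recovered from any ID without communication.
class TransactionIdGenerator {
    private final int totalLps;
    private long next;

    TransactionIdGenerator(int lpNumber, int totalLps) {
        this.totalLps = totalLps;
        this.next = lpNumber;      // counter starts at the LP's own number
    }

    long nextId() {
        long id = next;
        next += totalLps;          // increment by the offset n
        return id;
    }

    // Recover the 1-based number of the LP that generated a given ID.
    static int originatingLp(long id, int totalLps) {
        int r = (int) (id % totalLps);
        return r == 0 ? totalLps : r;
    }
}
```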
Terminating Transactions
The termination of Transactions raises similar problems to their generation. According to GPSS, Transactions are terminated in TERMINATE blocks. Each time a Transaction is terminated, a global Termination Counter is decremented by the decrement parameter of the TERMINATE block involved. A GPSS simulation initialises the Termination Counter with a positive value at the start of the simulation, and the counter is then used to detect the end of the simulation, which is reached as soon as the Termination Counter has a value of zero or less. The required global Termination Counter could be located in one of the LPs, but accessing it from other LPs would require additional synchronisation. The problem of accessing such a Termination Counter is the same as accessing other objects from different LPs, as outlined in 4.1.2.
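The counter mechanics themselves amount to little more than the following sketch (the names are hypothetical):

```java
// Illustrative sketch of a local Termination Counter: initialised with a
// positive value, decremented by each TERMINATE block, and signalling the
// end of the simulation once it reaches zero or less.
class TerminationCounter {
    private long count;

    TerminationCounter(long start) { count = start; }

    // Executed by a TERMINATE block with the given decrement parameter.
    // Returns true when this LP has reached the simulation end.
    boolean terminate(int decrement) {
        count -= decrement;
        return count <= 0;
    }
}
```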
In order to avoid the additional complexity and communication overhead of implementing a global Termination Counter, the parallel simulator will use a separate local Termination Counter in each LP. This solution performs simulations that do not require a global Termination Counter without additional communication overhead. For simulations that do require a global Termination Counter, the problem can be reduced to the synchronisation and the movement of Transactions between LPs, as already solved by the synchronisation algorithm. In this case all TERMINATE blocks of such a simulation need to be located within the same partition. This results in additional communication and synchronisation when Transactions are moved from other LPs to the one containing the TERMINATE blocks. Figure 12 below shows how a simulation model with two partitions that needs a single Termination Counter can be converted into equivalent simulation models so that the simulation effectively uses a single Termination Counter.
[Figure omitted: the original partitioning with a TERMINATE block in each of Partition 1 and Partition 2, and two equivalent repartitionings using a single synchronised Termination Counter — Option 1 replaces both TERMINATE blocks with TRANSFER blocks into a new Partition 3 containing the only TERMINATE block; Option 2 keeps all TERMINATE blocks in Partition 2 and lets Partition 1 TRANSFER its Transactions there.]
Figure 12: Single synchronised Termination Counter
5.1.4 Supported GPSS Syntax
In order to ease the migration of existing GPSS models to this new parallel GPSS simulator, the supported GPSS syntax will be kept as close as possible to the original GPSS/H language described in [26]. Only a sub-set of the full GPSS/H language will be implemented, but this sub-set will include all the main GPSS functionality, including the functionality needed to demonstrate the parallel simulation of partitioned models on more than one LP. The simulator will not support Transaction cloning, Logic Switches, User Chains or user-defined properties for Transactions, but such functionality can easily be added in the future if required.
A detailed description of the GPSS syntax expected by the parallel GPSS simulator can be found in Appendix A. In particular the simulator will support the generation, delay and termination of Transactions as well as their transfer to other partitions of the model. It will also support Facilities, Queues, Storages and labels. The following table gives an overview of the supported GPSS block types.
Block type Short description
GENERATE Generate Transactions
TERMINATE Terminate Transactions
ADVANCE Delay the movement of Transactions
SEIZE Capture Facility
RELEASE Release Facility
ENTER Capture Storage units
LEAVE Release Storage units
QUEUE Enter Queue
DEPART Leave Queue
TRANSFER Jump to a different block than the next one in sequence
Table 7: Overview of supported GPSS block types
In addition to the GPSS functionality described above the new reserved word
PARTITION is introduced. This reserved word marks the start of a new partition within
the model. If the model does not start with such a partition definition then a default
partition is created for the following blocks. All block definitions following a partition
definition are automatically assigned to that partition. During the simulation each
partition will be performed on a separate LP.
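The assignment of blocks to partitions, including the default partition, can be sketched as follows; PartitionSplitter is a hypothetical name, and the real parser (see 5.2.1) builds a full object structure rather than lists of strings:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the PARTITION keyword handling: blocks following a
// PARTITION line belong to that partition; if the model does not start with
// one, a default partition is created for the following blocks.
class PartitionSplitter {
    static Map<String, List<String>> split(List<String> modelLines) {
        Map<String, List<String>> partitions = new LinkedHashMap<>();
        String current = null;
        for (String line : modelLines) {
            String trimmed = line.trim();
            if (trimmed.toUpperCase().startsWith("PARTITION")) {
                current = trimmed.split("\\s+", 2)[1]; // partition name/parameters
                partitions.put(current, new ArrayList<>());
            } else {
                if (current == null) {                  // implicit default partition
                    current = "DEFAULT";
                    partitions.put(current, new ArrayList<>());
                }
                partitions.get(current).add(trimmed);
            }
        }
        return partitions;
    }
}
```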
5.1.5 Simulation Termination at Specific Simulation Time
The parallel simulator will not provide any syntax or configuration options for terminating a simulation when a specific simulation time is reached, but such behaviour can easily be modelled in any simulation model using an additional GENERATE and TERMINATE block. For instance, if a simulation is supposed to terminate when it reaches a simulation time of 10,000, then an additional GENERATE block is added that generates a single Transaction for the simulation time 10,000, immediately followed by a TERMINATE block that stops the simulation when that Transaction is terminated. This additional pair of GENERATE and TERMINATE blocks can either be added to the end of an existing partition or as an additional partition. All other TERMINATE blocks in such a simulation will need to have a decrement parameter of 0. The following GPSS code shows an example model that will terminate at the simulation time 10,000.
PARTITION Partition1,1 sets Termination Counter to 1
… original model partition
GENERATE 1,0,10000 generates a Transaction for time 10000
TERMINATE 1 end of simulation after 1 Transaction
5.2 Implementation Phases
The following sections will describe the four main development phases of the parallel
simulator.
5.2.1 Model Parsing
The classes for parsing and validating the GPSS model read from the model file can be
found in the package parallelJavaGpssSimulator.gpss.parser. A GPSS model file is
parsed by calling the method ModelFileParser.parseFile(). This method returns an
instance of the class Model from the package parallelJavaGpssSimulator.gpss that
contains the whole GPSS model as an object structure. The Model instance contains a
list of model partitions represented by instances of the class Partition and each Partition
instance contains a list of GPSS blocks and lists of other entities like labels, queues,
facilities and storages that make up the model partition.
Global GPSS block references
GPSS simulators require a way of referencing GPSS blocks. A TRANSFER block, for instance, needs to reference the block it should transfer Transactions to. Sequential simulators often just use the block index within the model to refer to a specific block. But in a parallel GPSS simulator each Logical Process only knows its own model partition. A block reference therefore needs to uniquely identify a GPSS block within the whole simulation model, and ideally it should also be possible to determine the target partition from a block reference. The GlobalBlockReference class in the package parallelJavaGpssSimulator.gpss implements a block reference that fulfils these criteria, and it is used to represent global block references and also block labels within the runtime model object structure and other parts of the simulator.
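A block reference meeting these criteria essentially pairs a partition index with a block index. The following sketch illustrates the idea; BlockRef is a hypothetical stand-in for the actual GlobalBlockReference class:

```java
// Illustrative sketch of a globally unique block reference that also
// reveals its target partition.
class BlockRef {
    final int partitionIndex; // which model partition / LP
    final int blockIndex;     // block position within that partition

    BlockRef(int partitionIndex, int blockIndex) {
        this.partitionIndex = partitionIndex;
        this.blockIndex = blockIndex;
    }

    @Override public boolean equals(Object o) {
        return o instanceof BlockRef
            && ((BlockRef) o).partitionIndex == partitionIndex
            && ((BlockRef) o).blockIndex == blockIndex;
    }

    @Override public int hashCode() { return 31 * partitionIndex + blockIndex; }

    @Override public String toString() { return partitionIndex + ":" + blockIndex; }
}
```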
Parser class hierarchy
A parallel class hierarchy and the Builder design pattern [22] are used in order to separate the code for parsing and validating the GPSS model from the code that represents the model at the runtime of the simulation. This second class hierarchy is found in the parallelJavaGpssSimulator.gpss.parser package and contains a builder class for each element type that can make up the model structure at runtime. Figure 13 shows the UML diagram of the two class hierarchies including some of the relevant methods.
When loading and parsing a GPSS model file, the ModelFileParser instance internally creates an instance of the ModelBuilder class, which holds a PartitionBuilder instance for each partition found in the model file; each PartitionBuilder in turn holds builder classes for all blocks and other entities found in that partition. The parsing and validation of the different model elements is delegated to these builder classes. In addition, the Factory design pattern [22] is used by the BlockBuilderFactory class, which creates the correct builder class for a GPSS block depending on the block type keyword found in the model file. All builder classes have a build() method that returns an instance of the corresponding simulation runtime class for that element. These build() methods are called recursively: the ModelBuilder.build() method calls the build() method of the PartitionBuilder instances it contains, and each PartitionBuilder instance calls the build() method of all builder classes it contains. This delegation of responsibility within the class hierarchy makes it possible to return an instance of the Model class representing the whole GPSS model just by calling the ModelBuilder.build() method.
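The recursive build() delegation can be illustrated with a stripped-down sketch in which the "runtime objects" are plain strings; the real builders return Block, Partition and Model instances:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the recursive build() delegation: each builder
// builds its own element and delegates to the builders it contains.
class BlockBuilder {
    private final String type;
    BlockBuilder(String type) { this.type = type; }
    String build() { return type; }            // would return a Block instance
}

class PartitionBuilder {
    private final List<BlockBuilder> blocks = new ArrayList<>();
    void add(BlockBuilder b) { blocks.add(b); }
    List<String> build() {                     // would return a Partition instance
        List<String> built = new ArrayList<>();
        for (BlockBuilder b : blocks) built.add(b.build());
        return built;
    }
}

class ModelBuilder {
    private final List<PartitionBuilder> partitions = new ArrayList<>();
    void add(PartitionBuilder p) { partitions.add(p); }
    List<List<String>> build() {               // would return a Model instance
        List<List<String>> built = new ArrayList<>();
        for (PartitionBuilder p : partitions) built.add(p.build());
        return built;
    }
}
```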
As mentioned above, the package parallelJavaGpssSimulator.gpss.parser is only used to load, parse and verify a GPSS model from file into the object structure used at simulation runtime. For this reason the only class in this package with public visibility is ModelFileParser; all other classes of this package are only visible within the package itself.
[Figure omitted: UML diagram of the two parallel class hierarchies — the runtime classes Model, Partition, Block (with subclasses such as AdvanceBlock and TransferBlock), FacilityEntity and StorageEntity in parallelJavaGpssSimulator.gpss, and the corresponding builder classes ModelBuilder, PartitionBuilder, BlockBuilder, AdvanceBlockBuilder, TransferBlockBuilder, FacilityEntityBuilder and StorageEntityBuilder, each with a build() method, plus ModelFileParser with parseFile(fileName : String) : Model, in parallelJavaGpssSimulator.gpss.parser.]
Figure 13: Simulation model class hierarchies for parsing and simulation runtime
Test and Debugging of the Model Parsing
In order to test and debug the parsing of the GPSS model file and the correct creation of the object structure representing the GPSS model at simulation runtime, the toString() methods of all classes from the runtime class hierarchy were implemented to output their properties in textual form and to recursively call the toString() methods of any sub-element contained. A test application class with a main() method was implemented to load a GPSS model from a file using the ModelFileParser.parseFile() method and to output the whole structure of the model in textual form. Using this application, different GPSS test models were parsed that contained all of the supported GPSS entities, and the textual output of the resulting object structures was checked. These tests included checking the default values of the different GPSS block types and the parsing errors for invalid model files.
5.2.2 Basic GPSS Simulation Engine
The second implementation phase focused on the development of the basic GPSS
simulation functionality. A GPSS simulation engine was implemented that can perform
the sequential simulation of one model partition. This sequential simulation engine will
be the basis of the parallel simulation engine implemented in the third phase.
The classes for the basic GPSS simulation functionality can be found in the package
parallelJavaGpssSimulator.gpss. The main class in this package is the
SimulationEngine class that encapsulates the GPSS simulation engine functionality. It
uses the runtime model object structure class hierarchy mentioned in 5.2.1 to represent
the model partition and the model state in memory. The runtime model object structure
class hierarchy contains classes for the GPSS block types plus some additional classes
to represent other GPSS entities like Facilities, Queues and Storages. Each of these
classes implements the functionality that will be performed when a block of that type is
executed or the GPSS entity is used by a Transaction. Two further classes in this
package are the Transaction class representing a single Transaction and the
GlobalBlockReference class introduced in 5.2.1.
The basic GPSS simulation scheduling is also implemented using the SimulationEngine
class (a detailed description of the scheduling can be found in 5.3.1). It will generate
Transactions and move them through the simulation model partition using the runtime
model object structure. All block instances within this runtime model object structure
inherit from the abstract Block class and therefore have to implement the execute()
method. When a Transaction is moved through the model it will call this execute()
method for each block it enters.
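The execute() contract can be sketched as follows; the class names carry a "Sketch" suffix to make clear that they are illustrative and not the simulator's actual classes:

```java
// Illustrative sketch of the abstract Block hierarchy: each block type
// implements execute(), which is called as a Transaction enters the block.
abstract class BlockSketch {
    abstract void execute(TxnSketch t);
}

class TxnSketch {
    long moveTime; // simulation time of the Transaction's next move
    TxnSketch(long moveTime) { this.moveTime = moveTime; }
}

// ADVANCE delays the movement of a Transaction by a given amount.
class AdvanceBlockSketch extends BlockSketch {
    private final long delay;
    AdvanceBlockSketch(long delay) { this.delay = delay; }
    @Override void execute(TxnSketch t) { t.moveTime += delay; }
}
```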
Test and Debugging of Basic GPSS Simulation functionality
The test application class TestSimulationApp was used to test and debug the basic GPSS simulation functionality implemented during this phase. This class contains a main() method and can therefore be run as an application. It allows the simulation of a single model partition using the SimulationEngine class. The exact simulation processing can be followed using the log4j logger parallelJavaGpssSimulator.gpss (details of the logging can be found in 5.3.5); in debug mode this logger outputs detailed steps of the simulation. Several test models were used to test the correct implementation of the basic scheduling, the GPSS blocks and the other entities. They will not be described in more detail here because the same functionality is tested again in the final version of the simulator (see validation phase in section 6).
5.2.3 Time Warp Parallel Simulation Engine
During the third development phase the parallel simulator was implemented based on
the Time Warp synchronisation algorithm. The functionality of the parallel simulator is
split into the Simulation Controller side and the Logical Process side, each found in a
different package. Figure 14 shows the general architecture and the main classes
involved at each side during this development phase. Classes marked with (AO) are
instantiated as ProActive Active Objects. Instances of the LogicalProcess class also
communicate with each other, for instance in order to exchange Transactions, which is
not displayed in Figure 14.
[Figure omitted: the Simulate application class and the SimulationController (AO) with its Configuration and ModelFileParser, connected to several LogicalProcess (AO) instances, each containing a ParallelSimulationEngine with its model Partition.]
Figure 14: Parallel simulator main class overview (excluding LPCC)
Simulation Controller side
At the Simulation Controller side is the root package parallelJavaGpssSimulator. It contains the class Simulate, which is the application class used to start the parallel simulator. When run, the Simulate class loads the configuration from command line arguments or the configuration file, loads and parses the simulation model file, and then creates an Active Object instance of the SimulationController class (the JavaDoc documentation for this class can be found in Appendix E). The SimulationController instance receives the configuration settings and the simulation model when its simulate() method is called. As a result of this call the SimulationController class will read the deployment descriptor file and create the required number of LogicalProcess instances at the specified nodes.
Logical Process side
The functionality of the Logical Processes is found in the package
parallelJavaGpssSimulator.lp. This package contains the LogicalProcess class, the
ParallelSimulationEngine class (the JavaDoc documentation for both can be found in
Appendix E) and a few helper classes. The LogicalProcess instances are created as
Active Objects by the Simulation Controller. After their creation the LogicalProcess
instances receive the simulation model partitions and the configuration when their
initialize() method is called. When all LogicalProcess instances are initialised then the
Simulation Controller calls their startSimulation() method to start the simulation. Figure
15 illustrates the communication flow between the Simulation Controller and the
Logical Processes before and at the end of the simulation. The method calls just
described can be found at the start of this communication flow. When the Simulation Controller detects that a confirmed simulation end has been reached, all Logical Processes are requested, via the endOfSimulationByTransaction() method, to end the simulation in a consistent state matching that confirmed simulation end. The Logical Processes confirm once they have reached that consistent simulation state, after which the Simulation Controller requests the post simulation report details from each Logical Process. Further specific details about the implementation of the parallel simulator can be found in section 5.3.
[Figure omitted: sequence diagram between the Simulation Controller and LP 1 … LP n showing the communication phases initialisation (initialize()), simulation (startSimulation()), end of simulation (endOfSimulationByTransaction()) and post simulation reporting (getSimulationReport()).]
Figure 15: Main communication sequence diagram
Test and Debugging of the Time Warp parallel simulator
The parallel simulator resulting from this development phase was tested and debugged
with extensive logging enabled and using different models. The functionality was tested
again in the final version of the parallel simulator as part of the validation phase, of
which details can be found in section 6.
5.2.4 Shock Resistant Time Warp
This development phase extended the Time Warp based parallel simulator from the
former development phase to support the Shock Resistant Time Warp algorithm by
adding the LPCC and the required sensor value functionality to the LogicalProcess
class. The functionality for the Shock Resistant Time Warp algorithm is found in the
package parallelJavaGpssSimulator.lp.lpcc. The main class in this package is the class
LPControlComponent that implements the LPCC (see Appendix E for the JavaDoc
documentation of this class). The package also contains two classes that represent the
sets of sensor and indicator values and the class StateClusterSpace that encapsulates the
functionality to store and retrieve past indicator state information using the cluster
technique described in [8]. The Shock Resistant Time Warp algorithm of the parallel
simulator is implemented so that it can be enabled and disabled by a configuration
setting of the parallel simulator as required. If the LPCC, and therefore the Shock
Resistant Time Warp algorithm, is disabled, the parallel simulator simulates
according to the normal Time Warp algorithm; if it is enabled, the Shock Resistant
Time Warp algorithm is used. The option to enable/disable the LPCC makes it
possible to compare the performance of both algorithms for specific simulation models
and hardware setup using the same parallel simulator.
The Logical Process Control Component (LPCC) implemented by the
LPControlComponent class is used by the LogicalProcess instances during a simulation
according to the Shock Resistant Time Warp algorithm. It is the main component of this
algorithm: it attempts to steer the parameters of the LPs towards values of past states
that promise better performance, using an actuator that limits the number of
uncommitted Transaction moves allowed. Figure 16 shows the architecture and main
classes used by the final version of the parallel simulator including the
LPControlComponent class representing the LPCC.
[Class diagram: a SimulationController Active Object (with Configuration and
ModelFileParser) sends Simulate requests to several LogicalProcess Active Objects,
each containing a ParallelSimulationEngine, a Partition, and an LPControlComponent
with its StateClusterSpace.]
Figure 16: Parallel simulator main class overview
The LPCC receives the current sensor values with each simulation time update cycle
(details of the scheduling can be found in 5.3.1) but the main processing of the LPCC is
only called during specified time intervals as set in the configuration file of the
simulator. When the main processing of the LPCC is invoked via its
processSensorValues() method, the LPCC creates a set of indicator values from
the accumulated sensor values. Using the State Cluster Space it searches for a similar
indicator set that promises better performance and sets the actuator according to
the indicator set found. Finally the current indicator set is added to the State
Cluster Space. The LPCC is also used to check whether the current number of
uncommitted Transaction moves exceeds the current actuator limit. Within the
scheduling cycle the LP will call the isUncommittedMovesValueWithinActuatorRange()
method of its LPControlComponent instance to perform this check. As a result the
number of uncommitted Transaction moves passed in is compared to the maximum
actuator limit determined by the mean actuator value and the standard deviation with a
confidence level of 95% as described in [8]. The method will return false if the number
of uncommitted Transaction moves exceeds the maximum actuator limit forcing the LP
into cancelback mode (see 5.3.4).
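As a rough illustration, the actuator check could look as follows. The class and method names below are hypothetical, and the 1.96 multiplier (the standard normal quantile giving a 95% upper bound) is an assumption; the exact bound is defined in [8].

```java
// Hypothetical sketch of the actuator-limit check performed by the LPCC.
// Returns false when the number of uncommitted Transaction moves exceeds
// the maximum actuator limit, forcing the LP into cancelback mode.
public class ActuatorCheck {
    public static boolean withinActuatorRange(double uncommittedMoves,
                                              double meanActuator,
                                              double stdDev) {
        // Assumed bound: mean + 1.96 * standard deviation (95% confidence).
        double maxLimit = meanActuator + 1.96 * stdDev;
        return uncommittedMoves <= maxLimit;
    }
}
```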
State Cluster Space
The StateClusterSpace class encapsulates the functionality to store sets of indicator
values and to return a similar indicator set for a given one. Each stored indicator set is
treated as a vector in an n-dimensional vector space with n being the number of
indicators per set. The similarity between two indicator sets is determined by their
Euclidean vector distance. A clustering technique is used that groups similar indicator
sets into clusters to limit the amount of memory required when large numbers of
indicator sets are stored.
The two main public methods provided by the StateClusterSpace class are
addIndicatorSet() and getClosestIndicatorSetForHigherCommittedMoveRate(). The
first method adds a new indicator set to the State Cluster Space and the second returns
the indicator set most similar to the one passed in that has a higher CommittedMoveRate
indicator value. Note that the two indicators AvgUncommittedMoves and
CommittedMoveRate are ignored when determining the similarity by calculating the
Euclidean distance, because AvgUncommittedMoves is directly linked to the actuator and
CommittedMoveRate is the performance indicator that the algorithm aims to maximise.
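The similarity measure just described can be sketched as below. The class name and map-based representation are assumptions for illustration; only the exclusion of the two indicators and the Euclidean distance follow the text.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the indicator-set similarity used by the State
// Cluster Space: Euclidean distance over the indicators of the first set,
// skipping the two excluded indicators.
public class IndicatorDistance {
    // Indicators excluded from the distance calculation (per the text).
    static final Set<String> EXCLUDED =
        Set.of("AvgUncommittedMoves", "CommittedMoveRate");

    public static double distance(Map<String, Double> a, Map<String, Double> b) {
        double sum = 0.0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            if (EXCLUDED.contains(e.getKey())) continue;
            double diff = e.getValue() - b.getOrDefault(e.getKey(), 0.0);
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
}
```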
Test and Debugging of the Shock Resistant Time Warp and the State Cluster Space
The State Cluster Space was tested and debugged using the test application class
TestStateClusterSpaceApp, which allows for the StateClusterSpace class to be tested
outside the parallel simulator. Using this class the detailed functionality of the State
Cluster Space was tested using specific scenarios that would have been difficult to
create within the parallel simulator. The test application class is left in the project so that
possible future changes or enhancements to the StateClusterSpace class can also be
tested outside the parallel simulator.
The implementation of the Shock Resistant Time Warp algorithm was tested and
debugged in the final version of the parallel simulator using a selection of different
models of which a significant one was chosen for validation 5 in section 6.5.
5.3 Specific Implementation Details
The following sections describe some specific implementation details of the parallel
simulator.
5.3.1 Scheduling
The scheduling of the parallel simulator was implemented in two phases. The first part
is the basic scheduling of the GPSS simulation that was implemented using the
SimulationEngine class as described in section 5.2.2. This scheduling algorithm was
later extended for the parallel simulation by the LogicalProcess class and the
ParallelSimulationEngine class, which inherits from the SimulationEngine class as part
of the Time Warp parallel simulator implementation phase described in 5.2.3.
Basic GPSS Scheduling
The basic GPSS scheduling is implemented using the functionality provided by the
SimulationEngine class. A flowchart diagram of the scheduling algorithm is shown in
Figure 17. As seen from this diagram the scheduling algorithm will first initialise the
GENERATE blocks in order to create the first Transactions. Subsequent Transactions
are created whenever a Transaction leaves a GENERATE block. The algorithm then
updates the simulation time to the move time of the earliest movable Transaction. After
the simulation time has been updated all movable Transactions with a move time of the
current simulation time are moved through the model as far as possible. Unless this
results in the simulation being completed the algorithm will repeat the cycle of updating
the simulation time and moving the Transactions.
[Flowchart: Start → Initialise GENERATE blocks → Update simulation time → Move all
Transactions for current simulation time → repeat until the simulation is finished.]
Figure 17: Scheduling flowchart - part 1
Figure 18 shows the flowchart of the Move all Transactions for current simulation time
processing block from Figure 17. The algorithm for this block will retrieve the first
movable Transaction for the current simulation time and take this Transaction out of the
Transaction chain. If no such Transaction is found then the processing block is left.
Otherwise the Transaction is moved through the model as far as possible. If the
Transaction is not terminated as a result then it is chained back into the Transaction
chain at the correct position according to its move time and priority (note that the move
time and priority could have changed while the Transaction was moved).
[Flowchart of the Move all Transactions for current simulation time block: Chain out
next movable Transaction for current time → if a Transaction was found, Move
Transaction as far as possible → if the Transaction was not terminated, Chain in
Transaction → repeat until no movable Transaction is found.]
Figure 18: Scheduling flowchart - part 2
The Move Transaction as far as possible processing block is split down further and its
algorithm illustrated in the flowchart shown in Figure 19.
[Flowchart of the Move Transaction as far as possible block: if the current block is
GENERATE, Execute GENERATE block; then Execute next block repeatedly until the
Transaction time has changed, the Transaction is terminated, or the next block is not
in the same partition.]
Figure 19: Scheduling flowchart - part 3
The algorithm first checks whether the Transaction is currently within a
GENERATE block and, if so, executes the GENERATE block. Then the Transaction is
moved into the following block by executing it. Unless the move time of the
Transaction has changed, the Transaction has been terminated, or the next block of the
Transaction lies within a different partition, the algorithm repeatedly executes the next
block for the Transaction in a loop and thereby moves the Transaction from block to
block. The flowchart shows that the execution of GENERATE blocks is treated
differently from the execution of other blocks. The reason is that GENERATE blocks
are the only blocks that are executed when a Transaction leaves the block, whereas all
other blocks are executed when the Transaction enters them. This allows a GENERATE
block to create the next Transaction when the last one created leaves it. The table below
lists the methods that implement the flowchart processing blocks described.
Flowchart processing block                           Method
Initialise GENERATE blocks                           SimulationEngine.initializeGenerateBlocks()
Update simulation time                               SimulationEngine.updateClock()
Move all Transactions for current simulation time    SimulationEngine.moveAllTransactionsAtCurrentTime()
Chain out next movable Transaction for current time  SimulationEngine.chainOutNextMovableTransactionForCurrentTime()
Move Transaction as far as possible                  SimulationEngine.moveTransaction()
Chain in Transaction                                 SimulationEngine.chainIn()
Execute GENERATE block                               GenerateBlock.execute()
Execute next block                                   Calls the execute() method of the next block instance for the Transaction
Table 8: Methods implementing basic GPSS scheduling functionality
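The cycle of Figure 17 can be sketched as a loop over a time-ordered Transaction chain. In this simplified illustration Transactions are reduced to bare move times and the class name is hypothetical; the real SimulationEngine moves each Transaction through the model blocks rather than just removing it.

```java
import java.util.PriorityQueue;

// Simplified sketch of the basic scheduling cycle: advance the clock to the
// earliest move time, then move every Transaction scheduled at that time.
public class BasicScheduler {
    // Returns the number of clock-update/move cycles needed to exhaust the chain.
    public static int run(PriorityQueue<Integer> chain) {
        int cycles = 0;
        while (!chain.isEmpty()) {
            int clock = chain.peek();                     // updateClock(): earliest move time
            while (!chain.isEmpty() && chain.peek() == clock) {
                chain.poll();                             // move a Transaction at the current time
            }
            cycles++;                                     // one full scheduling cycle completed
        }
        return cycles;
    }
}
```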
Extended parallel simulation scheduling
For the parallel simulator the simulation scheduling is implemented in the Logical
Processes. It integrates the Active Object request processing of the LogicalProcess class
and the synchronisation algorithm of the parallel simulation. This results in a scheduling
algorithm that looks quite different to the one for the basic GPSS simulation. A slightly
simplified flowchart of this algorithm can be found in Figure 20 (note that the darker
flowchart processing blocks are blocks that already existed in the basic GPSS
scheduling algorithm).
Because the LogicalProcess class is used as an Active Object, its scheduling algorithm is
implemented in the runActivity() method inherited from the
org.objectweb.proactive.RunActive interface that is part of the ProActive library. The
algorithm first checks
whether the body of the Active Object is still active and then processes any Active
Object method requests received. If the Logical Process is not in the mode
SIMULATING then the algorithm will return and loop through checking the body and
processing Active Object requests. If the mode is changed to SIMULATING then it will
proceed to update the simulation time. This step existed already in the basic GPSS
scheduling algorithm. Note that the functionality to initialize the GENERATE blocks is
not part of the actual scheduling algorithm any more as it is performed when the
LogicalProcess class is initialized using the initialize() method. After the simulation
time has been updated the start state for the new simulation time will be saved. The state
saving and rollback process is described in detail in section 5.3.3. The next step is to
handle received Transactions, which includes anti-Transactions and cancelbacks. They
are received via ProActive remote method calls and stored in an input list during the
Process Active Object requests step. Normal received Transactions are handled by
chaining them into the Transaction chain. This might require a rollback if the local
simulation time has already passed the move time of the new Transaction. In order to
handle a received anti-Transaction the matching normal Transaction has to be found and
deleted. If the normal Transaction has been moved through the model already then a
rollback is required as well. Cancelback requests are also handled by performing a
rollback (see section 5.3.4 for details of the memory management and cancelback).
[Flowchart: while the Active Object body is active, Process Active Object requests; when
Mode = SIMULATING, Update simulation time → Save current state → Handle received
Transactions → Send lazy-cancellation anti-Transactions → if movable Transactions
exist and the LP is not in cancelback mode, Move all Transactions for current simulation
time → Send outgoing Transactions → Update LPCC.]
Figure 20: Extended parallel simulation scheduling flowchart
Following the handling of received Transactions and anti-Transactions the scheduling
algorithm will send out any anti-Transactions required by the lazy-cancellation
mechanism. It will identify all Transactions that have been sent out for an earlier
simulation time and which have been rolled back and subsequently not sent again. Such
Transactions need to be cancelled by sending out anti-Transactions. If following the
lazy-cancellation handling the Simulation Engine has movable Transactions and is not
in cancelback mode then all movable Transactions for the current simulation time are
moved through the simulation model. Any outgoing Transactions are sent to their
destination Logical Process and the LPCC sensors are updated. The whole scheduling
algorithm will be repeated until the LogicalProcess instance is terminated and its Active
Object body becomes inactive. The methods implementing the flowchart processing
blocks described are shown below.
Flowchart processing block                           Method
Process Active Object requests                       LogicalProcess.processActiveObjectRequests()
Update simulation time                               SimulationEngine.updateClock()
Save current state                                   LogicalProcess.saveCurrentState()
Handle received Transactions                         LogicalProcess.handleReceivedTransactions()
Send lazy-cancellation anti-Transactions             LogicalProcess.sendLazyCancellationAntiTransactions()
Move all Transactions for current simulation time    ParallelSimulationEngine.moveAllTransactionsAtCurrentTime()
Send outgoing Transactions                           LogicalProcess.sendTransactionsFromSimulationEngine()
Update LPCC                                          LogicalProcess.updateLPControlComponent()
Table 9: Methods implementing extended parallel simulation scheduling
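The control flow of Figure 20 can be outlined as follows. The method names mirror those in Table 9, but the bodies below are placeholders that merely record the call order; the ProActive request handling, the SIMULATING-mode check, and the cancelback test are simplified to flags for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical outline of the runActivity() scheduling loop of a Logical Process.
public class LpLoopSketch {
    final List<String> trace = new ArrayList<>();
    boolean simulating = true;   // stands in for the SIMULATING-mode check
    int cyclesLeft = 1;          // run one scheduling cycle for illustration

    boolean bodyIsActive() { return cyclesLeft-- > 0; }

    public void runActivity() {
        while (bodyIsActive()) {
            trace.add("processActiveObjectRequests");
            if (!simulating) continue;            // loop until mode is SIMULATING
            trace.add("updateClock");
            trace.add("saveCurrentState");
            trace.add("handleReceivedTransactions");        // may trigger rollbacks
            trace.add("sendLazyCancellationAntiTransactions");
            trace.add("moveAllTransactionsAtCurrentTime");  // skipped in cancelback mode
            trace.add("sendTransactionsFromSimulationEngine");
            trace.add("updateLPControlComponent");
        }
    }
}
```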
5.3.2 GVT Calculation and End of Simulation
Details of why and how the GVT is calculated during the simulation have already been
described in 4.3 but here the focus lies on the actual implementation. The GVT
calculation is performed by the SimulationController class within the private method
performGvtCalculation(). During the GVT calculation the Simulation Controller will
request the required parameters from each LP, determine the GVT and pass the GVT
back to the LPs so that these can perform the fossil collection. Figure 21 shows the
sequence diagram of the GVT calculation process.
[Sequence diagram: the Simulation Controller calls requestGvtParameter() on
LP 1 … LP n, runs performGvtCalculation(), and distributes the result via receiveGvt().]
Figure 21: GVT calculation sequence diagram
There are different circumstances that can cause a GVT calculation within the parallel
simulator. First LPs can request a GVT calculation from the Simulation Controller by
calling its requestGvtCalculation() method. This happens when an LP reaches certain
defined memory limits (as described in 5.3.4) or when a provisional simulation end is
reached by one of the LPs, which is described in more detail further below. Another
reason for a GVT calculation is that the LPCC processing is required because the
defined processing time interval has passed. For the Shock Resistant Time Warp
algorithm the LPCC processing is linked to a GVT calculation so that the sensor and
indicator for the number of committed Transaction moves have realistic values that
reflect the simulation progress made during the time interval. For this reason the LPCC
processing times are controlled by the Simulation Controller and linked to GVT
calculations that are triggered when the next LPCC processing is needed. An additional
parameter for the method receiveGvt() of the LogicalProcess class indicates to the LP
that an LPCC processing is needed after the new GVT has been received. Finally the
user can also trigger a GVT calculation, which is useful for simulations in normal Time
Warp mode that might not require any GVT calculation for large parts of the simulation.
Forcing a GVT calculation allows the user to check what progress the simulation has
made so far, as the GVT is an approximation of the confirmed simulation time that has
been reached by all LPs.
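At its core, the GVT is a lower bound obtained by taking the minimum over the parameters reported by every LP (its local virtual time and the earliest time of any Transaction still in transit). The following sketch shows only that reduction step; the parameter gathering and the class name are assumptions, since the thesis performs this inside SimulationController.performGvtCalculation().

```java
import java.util.List;

// Hypothetical sketch of the GVT reduction: the minimum over all
// per-LP minimum times reported during a GVT calculation.
public class GvtSketch {
    public static int computeGvt(List<Integer> lpMinTimes) {
        int gvt = Integer.MAX_VALUE;
        for (int t : lpMinTimes) {
            gvt = Math.min(gvt, t);  // GVT can never exceed any LP's minimum time
        }
        return gvt;
    }
}
```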
End of simulation
The detection of the simulation end is closely linked to the GVT calculation because a
provisional simulation end reached by one of the LPs can only be confirmed by a GVT.
The background of detecting the simulation end has already been discussed in 4.4 but
the actual implementation will be explained here. When an LP reaches a provisional
simulation end, the parallel simulator attempts to confirm this simulation end as soon
as possible, if it can be confirmed at all. First the LP reaching the provisional simulation end
will request a GVT calculation from the Simulation Controller. But the resulting GVT
might not confirm the provisional simulation end if the LP is ahead of other LPs in
respect of the simulation time. For this case a scheme is introduced in which the LP
reaching the provisional simulation end tells all other LPs to request a GVT calculation
themselves if they pass the simulation time of that provisional simulation end. The
method forceGvtAt() of the LogicalProcess class is used to tell other LPs about the
provisional simulation end time. Because it is possible for more than one LP to reach a
provisional simulation end before any of them is confirmed this method will keep a list
of the times at which the LPs need to request GVT calculations.
Whether or not a provisional simulation end reached by one of the LPs is confirmed, is
detected by the private method performGvtCalculation() of the SimulationController
class that also performs the calculation of the GVT. In order to make this possible the
method requestGvtCalculation() of the LogicalProcess class returns additional
information about a possible simulation end reached by that LP. This way the GVT
calculation process described above is also used to confirm a provisional simulation
end. Such a simulation end is confirmed during the GVT calculation when it is found
that all other LPs have reached a later simulation time than the one that reported the
provisional simulation end. In this case no future rollback could occur that can undo the
provisional simulation end, which is therefore guaranteed. If the GVT calculation
confirms a simulation end then no GVT is sent back to the LPs but instead the
Simulation Controller calls the method endOfSimulationByTransaction() of all LPs as
shown in Figure 15 of section 5.2.3.
5.3.3 State Saving and Rollbacks
Optimistic synchronisation algorithms execute all local events without any guarantee
that additional events received later will not violate the causal order of events already
executed. In order to correct such causal violations they have to provide a means to
restore a past state from before the causal violation occurred, so that the new event can
be inserted into the event chain and the events executed again in the correct order. A
common technique to allow the restoration of past states is called State Saving or State
Checkpointing: an LP saves the state of the simulation into a list of simulation
states each time the state changes or at defined intervals.
The parallel simulator implemented employs a relatively simple state saving scheme.
Each time the simulation time is incremented the LP serialises the state of the
Simulation Engine and saves it together with the corresponding simulation time into a
state list. Each state record therefore describes the simulation state at the beginning of
that time, i.e. before any Transactions were moved. To keep the complexity of this
solution low the standard object serialisation functionality provided by Java is used to
serialise and deserialise the state to and from a Stream object that is then stored in the
state list. The state list keeps all states sorted by their time. The saving of the state is
implemented in the method saveCurrentState() of the LogicalProcess class.
The purpose of saving the simulation state is to allow LPs to rollback to past simulation
states if required. The functionality to rollback to a past simulation state is implemented
in the method rollbackState(). Using the example shown in Figure 22 the principle of
rolling back to a past simulation state is briefly explained.
[Diagram: over real time an LP passes the simulation times t = 0, 3, 8 and 12, saving the
states S(0), S(3), S(8) and S(12) into its list of saved states; the current simulation
time is 12.]
Figure 22: State saving example
Figure 22 shows the state information of an LP that has gone through the simulation
times 0, 3, 8 and 12 and that has saved the simulation state at the beginning of each of
these times. There are two possible options for a rollback depending on whether a state
for the simulation time that needs to be restored exists in the state list or not. If for
instance the LP receives a Transaction for the time 3 then the LP will just restore the
state of the time 3, chain in the new Transaction and proceed moving the Transactions in
their required order. But if a Transaction for the simulation time 5 is received, which
implies that a rollback to the simulation time 5 is needed then the state of the time 8 is
restored because this is the same state that would have existed at the simulation time 5.
To recapitulate: if no saved state exists for the simulation time to which a rollback is
needed, the rollback functionality restores the state with the next higher simulation
time.
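The save/restore scheme just described can be sketched with standard Java serialization and a time-ordered map. The class and method names below are illustrative, not the LogicalProcess code; only the serialisation approach and the next-higher-time lookup follow the text.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.TreeMap;

// Hypothetical sketch of the state list used for state saving and rollbacks.
public class StateSaver {
    private final TreeMap<Integer, byte[]> states = new TreeMap<>();

    // Serialise the engine state saved at the beginning of simTime.
    public void saveState(int simTime, Serializable engineState) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(engineState);
            }
            states.put(simTime, bos.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Return the state for simTime or, if none was saved for that time, the
    // state with the next higher simulation time, which describes the same
    // simulation state (see Figure 22); null if no suitable state exists.
    public Object restoreState(int simTime) {
        Integer key = states.ceilingKey(simTime);
        if (key == null) return null;
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(states.get(key)))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```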
In addition to the basic task of restoring the correct simulation state the rollbackState()
method also performs a few related tasks like chaining in any Transactions that were
received after that restored state was saved or marking any Transactions sent out after
the rollback simulation time for the lazy-cancellation mechanism.
A further task related to the state management is performing the fossil collection which
is implemented by the commitState() method of the LogicalProcess class. This method
is called when the LP receives a new GVT. It will remove any past simulation states and
other information, for instance about Transactions received or sent, that are not needed
any more.
Because of the time scale of this project and in order to keep the complexity of the
implementation low, the state saving scheme used by the parallel simulator is a
relatively basic periodic state saving scheme. Future work on the simulator could look at
enhancing the state saving using an adaptive periodic checkpointing scheme with
variable state saving intervals as suggested in [25]. Alternatively an incremental state
saving scheme could be used but this would drastically increase the complexity of the
state saving because the standard Java serialisation functionality could not be used or
would need to be extended. An incremental state saving scheme would also add an
additional overhead for restoring a specific state so that an adaptive periodic
checkpointing scheme appears to be the best option for future enhancements.
5.3.4 Memory Management
Optimistic synchronisation algorithms make extensive use of the available memory in
order to save the state information that allows the restoration of, and rollback to, a past
simulation state, which is required if an LP receives an event or Transaction that would
violate the causal order of events or Transactions already executed. At the same time a
parallel simulator has to avoid running out of available memory completely, as this
would mean aborting the simulation. The parallel simulator implemented here therefore
uses a relatively simple mechanism to avoid reaching the given memory limits. It will
monitor the memory available to the LP within the JVM during the simulation and
perform defined actions if the available memory drops below certain limits. The first
limit is defined at 5MB. If the amount of available memory goes below this limit then
the LP will request a GVT calculation from the Simulation Controller in the expectation
that a new GVT will confirm some of the uncommitted Transaction moves and saved
simulation states so that fossil collection can free up some of the memory currently
used.
In some circumstances GVT calculations will not free up any, or not enough, of the
memory used by the LP. This is for instance the case when the LP is far ahead in
simulation time compared to the other LPs. If none of its uncommitted Transaction
moves or saved simulation states are confirmed by the new GVT then no memory will
be freed by fossil collection. It is also possible that only very few uncommitted
Transaction moves and saved simulation states are confirmed by the new GVT,
resulting in very little memory being freed. The parallel simulator defines a second
memory limit of 1MB for the case that GVT calculations did not help in freeing
memory. When the memory available to the LP drops below this second limit then the
LP switches into cancelback
mode. A cancelback strategy was already mentioned by David Jefferson [17] but the
cancelback strategy used here will differ slightly from the one suggested by him. When
the LP operates in cancelback mode then it will still respond to control messages and
will still receive Transactions from other LPs but it will stop moving or processing any
local Transactions, so that no simulation progress is made by the LP and no simulation
state information is saved as a result. Further, the LP will attempt to cancel back
Transactions that it received from other LPs in order to free memory, or at least stop
memory usage growing further. To cancel back a Transaction means that all local traces
that the Transaction was received are removed and the Transaction is sent back to its
original sender, which will roll back to the move time of that Transaction. The main
methods involved with the cancelback mechanism are the method
LogicalProcess.needToCancelBackTransactions() which is called by an LP that is in
cancelback mode and the method LogicalProcess.cancelBackTransaction() which is
used to send a cancelled Transaction back to the sender LP. This cancelback mechanism
of the parallel simulator is not only used for the general memory management but also
when the Actuator value of the LPCC has been exceeded.
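The two-level memory check can be sketched as below. The thresholds (5MB and 1MB) follow the text; the class name, enum, and action dispatch are assumptions. In practice the free-memory figure would come from Runtime.getRuntime().freeMemory() inside the JVM.

```java
// Hypothetical sketch of the LP memory check: below 5 MB request a GVT
// calculation (hoping fossil collection frees memory); below 1 MB switch
// into cancelback mode.
public class MemoryMonitor {
    static final long GVT_REQUEST_LIMIT = 5L * 1024 * 1024;  // 5 MB
    static final long CANCELBACK_LIMIT  = 1L * 1024 * 1024;  // 1 MB

    public enum Action { NONE, REQUEST_GVT, CANCELBACK }

    public static Action check(long freeMemoryBytes) {
        if (freeMemoryBytes < CANCELBACK_LIMIT) return Action.CANCELBACK;
        if (freeMemoryBytes < GVT_REQUEST_LIMIT) return Action.REQUEST_GVT;
        return Action.NONE;
    }
}
```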
5.3.5 Logging
The parallel simulator uses the Java logging library log4j [3] for its logging and
standard user output. It is the same logging library that is used by ProActive. The log4j
library makes it possible to enable or disable parts or all of the logging or to change the
detail of logging by means of a configuration file without any changes to the Java code.
Using the same logging library for ProActive and the parallel simulator means that a
single configuration file can be used to configure the logging output for both. A
hierarchical structure of loggers combined with inheritance between loggers makes it
very easy and fast to configure the logging of the simulator. A detailed description of the
log4j library and its configuration can be found at [3]. The specific loggers used by the
parallel simulator are described in Appendix C.
As mentioned above, the parallel simulator uses the same log4j configuration file as
ProActive. By default this is the file proactive-log4j, but a different file can be specified
as described in the ProActive documentation [15]. The log4j root logger for all output
from the parallel simulator is parallelJavaGpssSimulator (in the log4j configuration file
all loggers have to be prefixed with “log4j.logger.” so that this logger would appear as
log4j.logger.parallelJavaGpssSimulator). A hierarchy of lower-level loggers allows
configuring which information is output or logged by the parallel simulator.
The log4j logging library supports the inheritance of logger properties, which means
that a lower level logger that is not specifically configured will inherit the configuration
from a logger at a higher level within the same name space. For example if only the
logger parallelJavaGpssSimulator is configured then all other loggers of the parallel
simulator would inherit the same configuration settings from it.
5.4 Running the Parallel Simulator
5.4.1 Prerequisites
The parallel GPSS simulator was implemented using the JavaTM 2 Platform Standard
Edition 5.0, also known as J2SE5.0 or Java 1.5 [31] and ProActive version 3.1 [15] as
the Grid environment. J2SE5.0 or the JRE of the same version plus ProActive 3.1 need
to be installed on all nodes that are supposed to be used by the parallel GPSS simulator.
The parallel simulator might also work with later versions of the Java Runtime
Environment and ProActive as long as these are backwards compatible but the author of
this work can give no guarantees in this respect.
Because the parallel simulator and the libraries it uses are written in Java it can be run
on many different platforms. But the main target platforms of this work are Unix based
systems because the scripts that are part of the parallel simulator are only provided as
Unix shell scripts. These relatively basic scripts will need to be rewritten before the
parallel simulator can be used on Windows or other non-Unix based platforms.
5.4.2 Files
The following files are required or are optionally needed in order to run the parallel
simulator. They can be found in the folder /ParallelJavaGpssSimulator/ on the attached
CD and will briefly be described here.
deploymentDescriptor.xml
This is the ProActive deployment descriptor file mentioned in 3.2.1. It is read by
ProActive to determine which nodes the parallel simulator should use and how these
need to be accessed. A detailed description of this file and the deployment configuration
of ProActive can be found at the ProActive project Web site [15]. ProActive uses the
concept of virtual nodes for its deployment. For the parallel simulator the ProActive
deployment descriptor file needs to contain the virtual node ParallelJavaGpssSimulator.
If this virtual node is not found then the parallel simulator will abort with an error
message. In addition the deployment descriptor file needs to define enough JVM nodes
linked to this virtual node for the number of partitions within the simulation model to be
simulated.
DescriptorSchema.xsd
This is the XML schema file that describes the structure of the deployment descriptor
XML file. It is used by ProActive to verify that the XML structure of the file
deploymentDescriptor.xml mentioned above is correct.
env.sh
This Unix shell script is part of ProActive and is only included because it is needed by
the file startNode.sh described further down. It can also be found in the ProActive
installation. Together with the file startNode.sh it is used to start ProActive nodes
directly from this folder. But first the environment variable PROACTIVE defined in the
beginning of this file might have to be changed to point to the installation location of the
ProActive library.
ParallelJavaGpssSimulator.jar
This is the JAR file (Java archive) that contains the Java class files, which make up the
parallel simulator. It is required by the script simulate.sh described further down in
order to start and run the parallel simulator.
proactive.java.policy
This is a copy of the default security policy file provided by ProActive. It can also be
found in the ProActive installation and is provided here so that the parallel simulator
can be run straight from this folder. This security policy file basically disables any
access restrictions by granting all permissions. It should only be used when no security
and access restrictions are needed. Please refer to the ProActive documentation [15]
on defining a proper security policy for a ProActive Grid application like the
parallel simulator.
proactive-log4j
This is the log4j logging configuration file used by the parallel simulator and ProActive.
A description of this file and how logging is configured for the parallel simulator can be
found in section 5.3.5.
simulate.config
This is the default configuration file for the parallel simulator. The configuration of the
parallel simulator is explained in detail in 5.4.3.
simulate.sh
This Unix shell script is used to start the parallel simulator. It defines the two
environment variables PROACTIVE and SIMULATOR. Both might need to be changed
before the parallel simulator can be run so that PROACTIVE points to the ProActive
installation directory and SIMULATOR points to the directory containing the parallel
simulator JAR file ParallelJavaGpssSimulator.jar. Further details about how to run the
parallel simulator can be found in 5.4.4.
startNode.sh
This Unix shell script is part of ProActive and is used to start a ProActive node. It is a
copy of the same file found in the ProActive installation and is only provided here
so that ProActive nodes for the LPs of the parallel simulator can be started straight from
the same directory. The file env.sh is called by this script to set up all environment
variables needed by ProActive.
5.4.3 Configuration
The parallel simulator can be configured using command line arguments or by a
configuration file. The reading of the configuration settings from the command line
arguments or from the configuration file is handled by the Configuration class in the
root package. If the parallel simulator is started with no further command line
arguments after the simulation model file name then the default configuration file
simulate.config is used for the configuration. If the next command line argument after
the simulation model file name has the format ConfigFile=… then the specified
configuration file is used. Otherwise the configuration is read from the existing
command line arguments and default values are used for any configuration settings not
specified.
Configuration settings have the format <parameter name>=<value>. Boolean
configuration settings can be specified without a value and equals sign, in which case
they are set to true. This is useful when specifying configuration settings as command line
arguments. For instance to get the parallel simulator to output the parsed simulation
model it is enough to add the command line argument ParseModelOnly instead of
ParseModelOnly=true. A detailed description of the configuration settings can be found
in Appendix B.
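The parsing convention just described can be sketched as follows. This is a minimal illustration, not the simulator's actual Configuration class; the class name ConfigSketch is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the <parameter name>=<value> argument convention.
// A bare name without '=' is treated as a Boolean flag set to "true".
public class ConfigSketch {
    public static Map<String, String> parse(String[] args) {
        Map<String, String> settings = new HashMap<>();
        for (String arg : args) {
            int eq = arg.indexOf('=');
            if (eq < 0) {
                // e.g. "ParseModelOnly" is shorthand for "ParseModelOnly=true"
                settings.put(arg, "true");
            } else {
                // e.g. "LpccUpdateInterval=5"
                settings.put(arg.substring(0, eq), arg.substring(eq + 1));
            }
        }
        return settings;
    }

    public static void main(String[] args) {
        Map<String, String> s =
            parse(new String[] { "ParseModelOnly", "LpccUpdateInterval=5" });
        System.out.println(s.get("ParseModelOnly") + " " + s.get("LpccUpdateInterval"));
        // prints: true 5
    }
}
```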
5.4.4 Starting a Simulation
Before a simulation model can be simulated using the parallel simulator the
deploymentDescriptor.xml needs to contain enough node definitions linked to the virtual
node ParallelJavaGpssSimulator for the number of partitions within the simulation
model. If the deployment descriptor file does not define how ProActive can
automatically start the required nodes then the ProActive nodes have to be created
manually on the relevant machines using the startNode.sh script before the parallel
simulator can be started.
The parallel simulator is started using the shell script simulate.sh. The exact syntax is:
simulate.sh <simulation model file> [<command line argument>] […]
The configuration of the parallel simulator and possible command line arguments are
described in 5.4.3 and the files required to run the parallel simulator and their meaning
are explained in 5.4.2.
5.4.5 Increasing Memory Provided by JVM
By default the JVM of J2SE5.0 provides only a maximum of 64MB of memory to the
Java applications that run inside it (Maximum Memory Allocation Pool Size).
Considering that at the time of this paper standard PCs already come with a physical
memory of around 1GB and dedicated server machines even more, the Maximum
Memory Allocation Pool Size of the JVM does not seem appropriate. Therefore in order
to make the best possible use of the memory provided by the Grid nodes the Maximum
Memory Allocation Pool Size of the JVM needs to be increased to the amount of
memory available. This is especially important for long running simulations and
complex simulation models.
The Maximum Memory Allocation Pool Size of the JVM can be set using the command
line argument -Xmxn of the java command (see Java documentation for more details
[31]). If the ProActive nodes running the LPs of the parallel simulator are started using
the startNode.sh script then this command line argument with the appropriate memory
size can be added to this script, otherwise if the nodes are started via the deployment
descriptor file then the command line argument has to be added there. The following
example shows how the startNode.sh script needs to be changed in order to increase the
Maximum Memory Allocation Pool Size from its default value to 512MB.
$JAVACMD org.objectweb.proactive.core.node.StartNode $1 $2 $3 $4
$5 $6 $7
Extract of the startNode.sh script with default memory pool size
$JAVACMD -Xmx512m org.objectweb.proactive.core.node.StartNode $1
$2 $3 $4 $5 $6 $7
Extract of the startNode.sh script with memory pool size of 512MB
6 Validation of the Parallel Simulator
The functionality of the parallel simulator was validated using a set of example
simulation models. These simulation models were deliberately kept very simple in order
to evaluate specific aspects of the parallel simulator as complex models would possibly
hide some of the findings and would make the analysis of the results more difficult.
Each of the validations evaluates a particular part of the overall functionality and the
example simulation models were specifically chosen for that evaluation. They therefore
do not represent any real-life systems. Of course it cannot be expected that this
validation using example simulation models will prove the absolute correctness of the
implemented functionality. Instead, the different validation runs performed provide a
sufficient level of confidence that the functionality of the parallel simulator is correct.
All files required to perform these validations including the specific configuration files
and the resulting validation output log files can be found in specific sub folders of the
attached CD. For further details about the CD see Appendix D. The relevant output log
files of the validation runs performed are also included in Appendix F. Line numbers in
brackets were added to all lines of the output log files in order to make it possible to
refer to a particular line. The log4j logging system [3] was specifically configured for
each validation run to include certain details or exclude details that were not relevant to
that particular validation. The Termination Counters for the validation runs were chosen
so that the simulation runs were long enough to evaluate the specific aspects but also
kept as short as possible in order to avoid unnecessarily long output log files.
Nevertheless some of the validations still resulted in long output log files. In these cases
some of the lines that were not relevant to the validation have been removed from the
output logs listed in Appendix F. The complete output log files can still be found on the
attached CD.
The validation runs were performed on a standard PC with a single CPU (Intel Pentium
4 with 3.2GHz, 1GB RAM) running SuSE Linux 10.0. As the validation was performed
only on a single CPU it should be noted that it does not represent a detailed
investigation into the performance of the parallel simulator. Such an investigation would
exceed the expected time scale of this project because the performance of a parallel
simulation depends on a lot of different factors besides the simulation system (e.g.
simulation model, computation and communication performance of the hardware used)
and would need to be analysed using a variety of simulation models and on different
systems in order to draw any reliable conclusions. Nevertheless some basic performance
conclusions were made as part of Validations 5 and 6.
6.1 Validation 1
The first validation checks the correct parsing of the supported GPSS syntax elements.
Two models are used to evaluate the parser component of the parallel simulator. Both
include examples of all GPSS block types and other GPSS entities but in the first model
all possible parameters of the blocks and entities are used whereas in the second model
all optional parameters are left out in order to test the correct defaulting by the parser.
For both models the simulator was started using the ParseModelOnly command line
argument option. When this option is specified then the simulator will not actually
perform the simulation but instead parse the specified simulation model and either
output the parsed in memory object structure representation of the simulation model or
parsing errors if found.
Validation 1.1
The first simulation model used is shown below:
PARTITION Partition1,5
STORAGE Storage1,2
GENERATE 1,0,100,50,5
ENTER Storage1,1
ADVANCE 5,3
LEAVE Storage1,1
TRANSFER 0.5,Label1
TERMINATE 1
PARTITION Partition2,10
Label1 QUEUE Queue1
DEPART Queue1
SEIZE Facility1
RELEASE Facility1
TERMINATE 1
Simulation model file model_validation1.1.gps
The output log for this simulation model can be found in Appendix F. A comparison of
the original simulation model file and the in memory object structure representation that
was output by the simulator shows that they are equivalent and that the parser correctly
parsed all lines of the simulation model.
Validation 1.2
The simulation model for this validation is based on the earlier simulation model but all
optional elements of the model were removed.
STORAGE Storage1
GENERATE
ENTER Storage1
ADVANCE
LEAVE Storage1
TRANSFER Label1
TERMINATE
PARTITION Partition2
Label1 QUEUE Queue1
DEPART Queue1
SEIZE Facility1
RELEASE Facility1
TERMINATE
Simulation model file model_validation1.2.gps
As described, this simulation model tests the parser with regard to setting default values
for optional elements and parameters. Comparing the simulation model file to the
corresponding output log in Appendix F, it can be seen that the parser automatically
created a new partition before parsing the first line of the model so that the in memory
representation of the model contains two partitions (see line 2 of output log). The
default name given to this partition by the parser is ‘Partition 1’ (see line 6 of output
log). Line 9 shows that the Storage size was set to its maximum value of 2147483647.
The GENERATE block at line 10 was parsed with all its parameters set to their default
values as described in Appendix A. This also applies to the ADVANCE block at line
12 and the TERMINATE blocks at line 15 and 27 of the output log. The usage count of
the ENTER and LEAVE block at the lines 11 and 13 were set to the expected default
value of 1 and the TRANSFER block at line 14 of the output log also has the default
transfer probability of 1 so that all Transactions would be transferred to the specified
label. It can be seen that all the missing parameters were set to their expected default
values.
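The default values observed in this output log can be summarised in a small sketch. The constants mirror the values reported above; the class itself is hypothetical and is not the simulator's parser:

```java
// Illustrative defaults matching the values observed in the output log:
// maximum Storage size, ENTER/LEAVE usage count and TRANSFER probability.
public class ParserDefaults {
    static final int DEFAULT_STORAGE_CAPACITY = Integer.MAX_VALUE; // 2147483647
    static final int DEFAULT_USAGE_COUNT = 1;                      // ENTER/LEAVE
    static final double DEFAULT_TRANSFER_PROBABILITY = 1.0;        // transfer all

    public static void main(String[] args) {
        System.out.println(DEFAULT_STORAGE_CAPACITY);
        // prints: 2147483647
    }
}
```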
6.2 Validation 2
This validation evaluates the basic GPSS functionality of the parallel simulator. This
includes the basic scheduling and the movement of Transactions as well as the correct
processing of the GPSS blocks. The simulation model used for this contains only a
single partition but otherwise all possible GPSS block types and entities. There is even a
TRANSFER block that transfers Transactions with a probability of 0.5. The model is
shown below:
PARTITION Partition1,4
STORAGE Storage1,2
GENERATE 3,2
QUEUE Queue1
ENTER Storage1,1
ADVANCE 5,3
LEAVE Storage1,1
DEPART Queue1
TRANSFER 0.5,Label1
SEIZE Facility1
RELEASE Facility1
Label1 TERMINATE 1
Simulation model file model_validation2.gps
The model is simulated with the log4j loggers parallelJavaGpssSimulator.simulation
and parallelJavaGpssSimulator.gpss set to DEBUG (see configuration file proactive-
log4j at the corresponding sub folder on the attached CD). The last of these two loggers
will result in a very detailed logging of the GPSS processing and Transaction
movement. For this reason the Termination Counter is kept very small, i.e. set to 4 so
that the simulation is stopped after 4 Transactions have been terminated. Otherwise the
output log would be too long to be useful. The deployment descriptor XML file is set to
a single ProActive node as the model contains exactly one partition and the simulation
will require only one LP.
The interesting output for this validation is the output log of the LP. Following this
output log the simulation starts with initialising the GENERATE block (line 4 to 6).
This results in a new Transaction with the ID 1 being chained in for the move time 4.
The model above shows the GENERATE block with an average interarrival time of 3
and a half range of 2. This means that the interarrival times of the generated
Transactions will lie in the open interval (1,5) with possible values of 2, 3 or 4. The
current block of the new Transaction is (1,1) which is the GENERATE block itself as
this Transaction has not been moved yet (in the logging of the parallel simulator a block
reference is shown as a comma separated set of the partition number and the block
number within that partition). The next step of the simulator found in the log is the
updating of the simulation time to the value 4 at line 7 because the first movable
Transaction (the one just generated) has a move time of 4. The lines 8 to 16 show how
this Transaction is moved through the model until it reaches the ADVANCE block
where it is delayed. The first block to be executed by the Transaction is the GENERATE
block which results in a second Transaction being created when the first one is leaving
this block as shown in line 10 and 11. The lines 12 and 13 show the first Transaction
executing the QUEUE and ENTER block until it reaches the ADVANCE block at line
14. The ADVANCE block changes the move time of the Transaction from 4 to 9 (a delay
of 5), which means that this Transaction is no longer movable at the simulation
time of 4. At line 16 the Transaction is therefore chained back into the Transaction chain
and because there is no other movable Transaction for the time of 4 the current
simulation time is updated to the move time of the next movable Transaction, which is
the one with an ID of 2 and a move time of 7. In the lines 18 to 26 the second
Transaction is going through the same move process as the first Transaction before
and when it is leaving the GENERATE block this results in a third Transaction with a
move time of 10 being created and chained in. When the ADVANCE block changes the
move time of the second Transaction from 7 to 13 as shown in line 24 the current
simulation time is updated to the value of 9 and the first Transaction starts moving again
(see line 27 to 35). It will execute the LEAVE and DEPART block before reaching the
TRANSFER block at line 34. Here it is transferred directly to the TERMINATE block
which can be seen from the next block property of the Transaction jumping from the
block (1,7) to block (1,10). After executing the TERMINATE block the Transaction
stops moving but is not chained back into the Transaction chain as it has been
terminated (see line 35 and 36). The simulator proceeds with updating the simulation
time and moving the next Transaction. The rest of the output log can be followed
analogously.
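The chain-in/chain-out pattern traced in the log above can be sketched as follows. This is a minimal illustration, not the simulator's actual classes; the names Txn and ChainSketch are hypothetical:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of the scheduling pattern seen in the log: Transactions wait in a
// chain ordered by move time, and the clock jumps to the move time of the
// next movable Transaction whenever one is chained out.
public class ChainSketch {
    static class Txn {
        final int id;
        long moveTime;
        Txn(int id, long moveTime) { this.id = id; this.moveTime = moveTime; }
    }

    final PriorityQueue<Txn> chain =
        new PriorityQueue<>(Comparator.comparingLong(t -> t.moveTime));
    long clock = 0;

    void chainIn(Txn t) { chain.add(t); }

    // Chain out the next movable Transaction and update the simulation time.
    Txn chainOutNext() {
        Txn t = chain.poll();
        if (t != null && t.moveTime > clock) clock = t.moveTime;
        return t;
    }

    public static void main(String[] args) {
        ChainSketch lp = new ChainSketch();
        lp.chainIn(new Txn(1, 4));
        Txn t = lp.chainOutNext();   // clock becomes 4
        t.moveTime += 5;             // an ADVANCE-style delay: 4 -> 9
        lp.chainIn(t);               // chained back in
        lp.chainIn(new Txn(2, 7));
        System.out.println(lp.chainOutNext().id + " at " + lp.clock);
        // prints: 2 at 7
    }
}
```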
The output log of the Simulate process is also found in Appendix F. This log contains
the post simulation report and shows the interesting fact that all of the 4 Transactions
that executed the TRANSFER block were transferred directly to the TERMINATE
block. There should have been a ratio of 50% of the Transactions transferred but
because the number of Transactions is very low this results in a large statistical error.
Nevertheless Validation 3 will show that for a large number of Transactions the
statistical behaviour of the TRANSFER block is correct.
The validation has shown that the Transaction scheduling and movement as well as the
processing of the blocks is performed by the simulator as expected.
6.3 Validation 3
The third validation focuses on the exchange of Transactions between LPs. It evaluates
that the sending of Transactions from one LP to another works correctly including the
correct functioning of the TRANSFER block. In addition it shows that an LP can
correctly handle the situation where it has no movable Transactions left. In a sequential
simulator this would lead to an error and the abortion of the simulation but in a parallel
simulator this is a valid state as long as at least one of the LPs has movable
Transactions. Further this validation shows the correct processing of the simulation end
by the Simulation Controller and the LPs.
Validation 3.1
This validation run uses a very simple model with two partitions. The first partition
contains a GENERATE block and a TRANSFER block and the second partition the
TERMINATE block. When run, the model will generate Transactions in the first
partition and then transfer them to the second partition where they are terminated.
Detailed GPSS logging is used again in order to follow the Transaction processing. The
loggers enabled for debug output are shown below.
log4j.logger.parallelJavaGpssSimulator.gpss=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.rollback=DEBUG
log4j.logger.parallelJavaGpssSimulator.simulation=DEBUG
Extract of the used log4j configuration file proactive-log4j
To avoid unnecessarily long output log files the Termination Counter for the partitions is
set to 4 again so that the simulation will be finished after 4 Transactions have been
terminated. In addition the GENERATE block has a limit count parameter of 10 so that
it will only create a maximum of 10 Transactions. This limit is used because otherwise
LP1 simulating the first partition would create more Transactions before the Simulation
Controller has established the confirmed end of the simulation, resulting in a longer
output log with details that are not relevant to the simulation. The whole simulation
model is shown below.
PARTITION Partition1,4
GENERATE 3,2,,10
TRANSFER Label1
PARTITION Partition2,4
Label1 TERMINATE 1
Simulation model file model_validation3.1.gps
Looking at the output log of LP1 the start of the Transaction processing is similar to the
one of Validation 2. The initialisation of the GENERATE block creates the first
Transaction (see line 5 to 7) and when the Transaction leaves the GENERATE block it
creates the next Transaction and so forth. After the first Transaction with the ID 1
executed the TRANSFER block at line 15 it stops moving but is not chained back into
the Transaction chain because it has been transferred to LP2 that simulates partition 2.
The next block property of the Transaction shown in that line now points to the first
block of partition two, i.e. has the value (2,1). Swapping to the output log of LP2 it can
be seen at line 10 that LP2 just received the Transaction with the ID 1 and that this
Transaction is chained in. Because of the communication delay and LP1 being ahead in
simulation time LP2 already receives the next few Transactions as well as shown in line
12 and 13. In the lines 14 to 16 the first Transaction received is now chained out and
moved into the TERMINATE block where it is terminated. The processing of any
subsequent Transactions within LP1 and LP2 follows the same pattern.
The correct handling of the situation when an LP has no movable Transaction can be
seen at line 9 of the output log of LP2. The LP will just stay in a waiting position not
moving any Transaction until it either receives a Transaction from another LP or until
the Simulation Controller establishes that the end of the simulation has been reached.
Using all three output log files from LP1, LP2 and the simulate process the correct
processing of the simulation end can be followed. The first step of the simulation end is
the fourth Transaction being terminated in LP2 (see line 32 of the output log of LP2). The
LP detects that a provisional simulation end has been reached (line 33) and requests a
GVT calculation from the Simulation Controller (line 34). Subsequently it is still
receiving Transactions from LP1 (e.g. line 35, 38 and 41) but no Transactions are
moved because the LP is in the provisional simulation end mode. The output log of the
simulate process shows at the lines 14 to 16 that the Simulation Controller performs a
GVT calculation and receives the information that LP1 has reached the simulation time
26 and LP2 has reached a provisional simulation end at the time 11. Because all other
LPs except LP2 have passed the provisional simulation end time the Simulation
Controller concludes that the simulation end is confirmed. It now informs the LPs of the
simulation end which can be seen in line 55 of the output log of LP2 and line 83 of the
output log of LP1. Because LP1 is ahead of the simulation end time this information
causes it to rollback to the state at the start of simulation time 11 (line 84). The rollback
leads to the Transaction with ID 7 being moved to the TRANSFER block again to reach
the exact state needed for the simulation end. It can be seen from the output log of LP1
that the lines 32 to 37 are identical to the lines 85 to 90 which is a result of the rollback
and re-execution in order to reach a state that is consistent with the simulation end in
LP2. Both LPs confirm to the Simulation Controller that they reached the consistent
simulation end state, which then outputs the combined post simulation report showing
the correct counts as seen in line 23 to 30 of the simulate process output log. This post
simulation report confirms that four Transactions were moved through all blocks and a
5th is already waiting in the GENERATE block. The GENERATE block has not yet been
executed by the 5th Transaction because GENERATE blocks are executed when a
Transaction leaves them.
The output logs confirm that the transfer of Transactions between LPs and the handling
of the simulation end reached by one of the LPs works correctly as expected.
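The Simulation Controller's end-of-simulation decision described above can be sketched as a simple predicate. EndCheck is a hypothetical helper, not the simulator's actual class:

```java
// A provisional simulation end at time endTime is confirmed once every LP
// has reached at least that simulation time; otherwise some LP could still
// send Transactions with earlier move times and force a rollback.
public class EndCheck {
    public static boolean endConfirmed(long provisionalEndTime, long[] lpTimes) {
        for (long t : lpTimes) {
            if (t < provisionalEndTime) return false; // some LP is still behind
        }
        return true;
    }

    public static void main(String[] args) {
        // Validation 3.1: LP1 at time 26, LP2 at its provisional end time 11.
        System.out.println(endConfirmed(11, new long[] { 26, 11 }));
        // prints: true
    }
}
```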
Validation 3.2
The second validation of this validation group looks at the correct statistical behaviour
of the TRANSFER block when it is used with a transfer probability parameter. The
model used for the validation run is similar to the model used by validation 3.1 but
differs in the fact that this time the partition 1 has its own TERMINATE block and that
the TRANSFER block only transfers 25% of the Transactions to partition 2. Below is
the complete simulation model used for this run.
PARTITION Partition1,750
GENERATE 3,2
TRANSFER 0.25,Label1
TERMINATE 1
PARTITION Partition2,750
Label1 TERMINATE 1
Simulation model file model_validation3.2.gps
Another difference is that larger Termination Counter values are used in order to get
reliable values for the statistical behaviour. With such large Termination Counter values
the output logs of the LPs would be very long and not of much use, which is why they
are not included in Appendix F. The interesting output log for this validation run is the
one of the simulate process and specifically the post simulation report. The expected
simulation behaviour from the model shown above would be that LP1 reaches the
simulation end after 750 Transactions have been terminated in its TERMINATE block.
Because the TRANSFER block transfers 25% of the Transactions to LP2 this means
that about 1000 Transactions would need to be generated in LP1 of which around 250
should end up in LP2. The output log of the simulate process confirms in the lines 27 to
31 that this is the case. In fact the number of Transactions that reached LP2 and the
overall count are only two Transactions short. This confirms that the statistical behaviour
of the TRANSFER block is correct.
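The expected counts can be reproduced with a quick Monte Carlo sketch that is independent of the simulator; the 0.25 transfer probability and the Termination Counter of 750 are taken from the model above:

```java
import java.util.Random;

// Generate Transactions until 750 have been terminated in partition 1,
// transferring each with probability 0.25 to partition 2, and report the
// counts. Expectation: ~1000 generated, ~250 transferred.
public class TransferCheck {
    public static int[] run(long seed) {
        Random rnd = new Random(seed);
        int generated = 0, toPartition2 = 0, terminatedIn1 = 0;
        while (terminatedIn1 < 750) {
            generated++;
            if (rnd.nextDouble() < 0.25) toPartition2++;  // sent to partition 2
            else terminatedIn1++;                         // terminated locally
        }
        return new int[] { generated, toPartition2 };
    }

    public static void main(String[] args) {
        int[] counts = run(42);
        System.out.println(counts[0] + " generated, " + counts[1] + " transferred");
    }
}
```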
6.4 Validation 4
Evaluating the memory management of the parallel simulator is the subject of this
validation. It will show that the simulator performs the correct actions when the two
different defined memory limits are reached. The simulation model used for this
validation is shown below.
PARTITION Partition1,2000
GENERATE 1,0
Label1 QUEUE Queue1
DEPART Queue1
TERMINATE 1
PARTITION Partition2
GENERATE 1,0,2000
TRANSFER Label1
Simulation model file model_validation4.gps
The simulation model contains two partitions. The first partition has a GENERATE
block that will create a Transaction for each time unit. All Transactions will be
terminated in the TERMINATE block of the first partition. The only purpose of the
QUEUE and DEPART block in between is to slightly slow down the processing of the
Transactions by the LP. The second partition also generates Transactions for each time
unit but with an offset of 2000 so that its Transactions will start from the simulation
time 2000. The second partition will therefore always be ahead in simulation time
compared to the first partition and all its Transactions are transferred to the QUEUE
block within the first partition. From the design of this model it can be seen that the first
partition will become a bottleneck because on top of its own Transactions it will also
receive Transactions from the second partition. The number of Transactions in its
Transaction chain, i.e. Transactions that still need to be moved, will constantly grow.
In order to reach the memory limits of the parallel simulator more quickly the script
startNode.sh, which is used to start the ProActive nodes for the LPs, is changed so that
the command line argument -Xmx12m is passed to the Java Virtual Machine. This
instructs the JVM to make only 12MB of memory available to its Java programs. The
LPs for this validation are therefore run with a memory limit of 12MB. To avoid any
memory management side effects introduced by the LPCC, the LPCC is switched off in
the config file simulate.config. The Termination Counter for the simulation model was
chosen as small as possible but just about large enough for the simulation to reach the
desired effects on the hardware used. The logging configuration was changed to include
the current time and the debug output was enabled only for the loggers shown below.
log4j.logger.parallelJavaGpssSimulator.lp=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.commit=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.rollback=DEBUG
log4j.logger.parallelJavaGpssSimulator.simulation=DEBUG
Extract of the used log4j configuration file proactive-log4j
The output log of LP1 shows at line 7 that the memory limit 1 of 5MB available
memory left is already reached after around 2 minutes of simulating. As expected the LP
requests a GVT calculation so that some of its past states can be confirmed and fossil
collected in order to free up memory. The GVT is received from the Simulation
Controller and possible states are committed and fossil collected at lines 9 and 10.
Because Java uses garbage collection the memory freed by the LP does not become
available immediately. As a result the LP is still operating past the memory limit 1 and
requests a few further GVT calculations until between line 65 and 66 there is more than
one minute of simulation without GVT calculation because the garbage collector has
freed enough memory for the LP to be out of memory limit 1. This pattern repeats itself
several times as the memory used by the LP keeps growing until at line 376 of the output
log LP1 reaches the memory limit 2 of 1MB of available memory left. At this point LP1
switches into cancelback mode and cancels back 25 of the Transactions it received from
LP2 in order to free up memory. This can be seen at line 377. After the cancelback of
these Transactions and another GVT calculation the memory available to LP1 rises
above the memory limit 2 and the LP changes back from cancelback mode into normal
simulation mode as shown in line 381. The effects of the Transactions cancelled back on
LP2 can be seen in line 294, 295 and the following lines of the output log of LP2.
Because Transactions are cancelled back one by one as they might have been received
from different LPs they do not all arrive at LP2 at once. The log shows that the 25
Transactions are received back by LP2 in groups of 9, 9, 5 and 2 Transactions. It also
shows that LP2 has reached the memory limit 2 even earlier than LP1. This fact can be
explained by looking at the output log of the simulate process. The GVT calculation
shown at the lines 311 to 313 indicates that LP2 has a lot more saved simulation states
than LP1. LP1 has started simulating at the time 1, has created one Transaction for
every time unit and has reached a simulation time of 1520. That makes 1520 saved
simulation states of which some will have been fossil collected already. LP2 has started
simulating from the time 2000 and has reached a simulation time of 4481 which means
2481 saved simulation states of which none will have been fossil collected as the GVT
has not yet reached 2000. Saved simulation states require more memory than
outstanding Transactions. This explains why LP2 had reached the memory limit 2 and
the cancelback mode before LP1. But because LP2 does not receive Transactions from
other LPs it has no Transactions that it can cancel back.
The validation showed that the memory management of the parallel simulator works as
expected. The LPs performed the required actions when they reached the defined memory
limits and avoided Out Of Memory Exceptions by not running completely out of
memory.
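The two thresholds involved can be sketched as a simple check. The 5MB and 1MB values are taken from the text above; the class and method names are hypothetical and do not reflect the simulator's actual API:

```java
// Sketch of the two memory limits: reaching limit 1 triggers a GVT request
// and fossil collection, reaching limit 2 puts the LP into cancelback mode.
public class MemoryLimits {
    static final long LIMIT1_BYTES = 5 * 1024 * 1024; // request GVT + fossil collect
    static final long LIMIT2_BYTES = 1 * 1024 * 1024; // enter cancelback mode

    /** Returns 0 = normal, 1 = limit 1 reached, 2 = limit 2 reached. */
    public static int check(long freeMemoryBytes) {
        if (freeMemoryBytes <= LIMIT2_BYTES) return 2;
        if (freeMemoryBytes <= LIMIT1_BYTES) return 1;
        return 0;
    }

    public static void main(String[] args) {
        long free = Runtime.getRuntime().freeMemory();
        System.out.println("free=" + free + " level=" + check(free));
    }
}
```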
6.5 Validation 5
The fifth validation evaluates the correct functioning of the Shock Resistant Time Warp
algorithm and its main component, the LPCC. It will show that the LPCC within the
LPs is able to steer the simulation towards an actuator value that results in less rolled
back Transaction moves compared to normal Time Warp.
The simulation model used for this validation contains two partitions. Both partitions
have a GENERATE block and a TERMINATE block but in addition partition 1 also
contains a TRANSFER block that with a very small probability of 0.001 sends some of
its Transactions to partition 2. The whole model is constructed so that partition 2 is
usually ahead in simulation time compared to partition 1, achieved through the different
configuration of the GENERATE blocks, and that occasionally partition 2 receives a
Transaction from the first partition. Because partition 2 is usually ahead in simulation
time this will lead to rollbacks in this partition. The simulation stops when 20000
Transactions have been terminated in partition 2. This model attempts to emulate the
common scenario where a distributed simulation uses nodes with different performance
parameters or partitions that create different loads so that during the simulation the LPs
drift apart and some of them are further ahead in simulation time than others leading to
rollbacks and re-execution. The details of the model used can be seen below.
PARTITION Partition1,20000
GENERATE 1,0
TRANSFER 0.001,Label1
TERMINATE 0
PARTITION Partition2,20000
GENERATE 4,0,5000
Label1 TERMINATE 1
Simulation model file model_validation5.gps
In order to reduce the influence of the general memory management on this validation,
the amount of memory available to the LPs was increased from the default value of
64MB to 128MB by adding the JVM command line argument -Xmx128m in the
startNode.sh script used to start the ProActive nodes for the LPs. The logging
configuration was extended to get additional debug output relevant to the processing of
the LPCC and some additional LP statistics at the end of the simulation. The loggers for
which debug logging was enabled are shown below.
log4j.logger.parallelJavaGpssSimulator.lp=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.commit=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.rollback=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.stats=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.lpcc=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.lpcc.statespace=DEBUG
log4j.logger.parallelJavaGpssSimulator.simulation=DEBUG
Extract of the used log4j configuration file proactive-log4j
Validation 5.1
The first validation run of this model was performed with the LPCC enabled and the
LPCC processing time interval set to 5 seconds. The extract of the simulation
configuration file below shows all the configuration settings relevant to the LPCC.
LpccEnabled=true
LpccClusterNumber=500
LpccUpdateInterval=5
Extract of the configuration file simulate.config for validation 5.1
From the output log of LP2 it can be seen that LP2 constantly has to roll back to an
earlier simulation time because of Transactions it receives from LP1. For instance, in
line 6 of this output log LP2 has to roll back from simulation time 13332 to time
1133, and in line 7 from time 6288 to 1439. The LPCC processes the indicator set
around every 5 seconds. Such a processing step is shown for instance in lines 18 to
25 and lines 37 to 44. During these first two LPCC processing steps no actuator is
set (9223372036854775807 is the Java value of Long.MAX_VALUE, meaning no
actuator is set) because no past indicator set that promises a better performance
indicator could be found. At the third LPCC indicator processing a better indicator
set is found, as shown in lines 55 to 62, and the actuator is set to 4967 as a result. At
this point the number of uncommitted Transaction moves does not reach this limit, but
slightly later in the simulation, when the actuator limit is 4388, the LP reaches a
number of uncommitted Transaction moves that exceeds the current actuator limit,
forcing the LP into cancelback mode. This can be found in the output log from line 109.
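The check that forces an LP into cancelback mode can be sketched as below. The class and method names are illustrative, not the simulator's actual API; the only facts taken from the text are that Long.MAX_VALUE means "no actuator set" and that exceeding the actuator limit with the number of uncommitted Transaction moves triggers cancelback mode.

```java
// Sketch of the actuator check described above. Long.MAX_VALUE
// (9223372036854775807) stands for "no actuator set"; names are illustrative.
public class ActuatorCheck {
    public static final long NO_LIMIT = Long.MAX_VALUE;

    /** True if the uncommitted Transaction moves exceed the actuator limit,
     *  forcing the LP into cancelback mode. */
    public static boolean mustCancelback(long uncommittedMoves, long actuatorLimit) {
        return uncommittedMoves > actuatorLimit;
    }
}
```

With no actuator set, the comparison against Long.MAX_VALUE can never be exceeded, so the LP behaves like plain Time Warp.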
While in cancelback mode, LP2 does not cancel back any Transactions received from
LP1, as these are earlier than any Transaction generated within LP2 and have therefore
already been executed and terminated. But being in cancelback mode also means that no
local Transactions are processed, reducing the lead in simulation time of LP2 over
LP1. LP2 stays in cancelback mode until Transaction moves are committed during
the next GVT calculation, reducing the number of uncommitted Transaction moves
below the actuator limit. The following table shows the actuator values set by the
LPCC during the simulation and whether the actuator limit was exceeded, resulting in
cancelback mode.
LPCC processing step Time Actuator limit Limit exceeded
1 19:37:10 no limit No
2 19:37:15 no limit No
3 19:37:20 4967 No
4 19:37:25 4267 No
5 19:37:30 4388 Yes
6 19:37:35 4396 No
7 19:37:40 3135 Yes
8 19:37:45 3146 Yes
9 19:37:50 3762 No
10 19:37:55 2817 No
Table 10: Validation 5.1 LPCC Actuator values
From this table it is possible to see that the LPCC is limiting the number of
uncommitted Transactions and therefore the progress of LP2 in order to reduce the
number of rolled back Transaction moves and increase the number of committed
Transaction moves per second, which is the performance indicator. The graph below
shows the same Actuator values in graphical form.
Figure 23: Validation 5.1 Actuator value graph (actuator limit over LPCC processing steps 1 to 10)
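The LPCC processing behind these actuator values — searching the recorded past indicator sets for a state that promises a better performance indicator than the current one — can be sketched as follows. This is a hypothetical simplification: names are invented, and the clustering of past states (the LpccClusterNumber setting) is omitted.

```java
// Hypothetical sketch of how the LPCC derives an actuator value from past
// state information: if a recorded past state promises a higher performance
// indicator (committed Transaction moves per second) than the current one,
// its uncommitted-moves value becomes the new actuator limit; otherwise no
// limit is set (Long.MAX_VALUE).
public class LpccSketch {
    public static final long NO_LIMIT = Long.MAX_VALUE;

    /**
     * @param pastUncommitted  uncommitted Transaction moves of the recorded past states
     * @param pastIndicator    performance indicator of the recorded past states
     * @param currentIndicator current committed Transaction moves per second
     */
    public static long chooseActuator(long[] pastUncommitted, double[] pastIndicator,
                                      double currentIndicator) {
        long actuator = NO_LIMIT;
        double best = currentIndicator;
        for (int i = 0; i < pastIndicator.length; i++) {
            if (pastIndicator[i] > best) {      // a past state with better performance found
                best = pastIndicator[i];
                actuator = pastUncommitted[i];  // steer towards that state
            }
        }
        return actuator;
    }
}
```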
Validation 5.2
Exactly the same simulation model and logging configuration were used for the second
simulation run, but this time the LPCC was switched off so that the normal Time Warp
algorithm was used to simulate the model. The only configuration setting changed for
this simulation is the following:
LpccEnabled=false
Extract of the configuration file simulate.config for validation 5.2
Regarding rollbacks, the output log for LP2 looks similar to that of validation 5.1 but
does not contain any logging from the LPCC, as this was switched off. Therefore the LP
does not reach any actuator limit and does not switch into cancelback mode.
Comparison of validation 5.1 and 5.2
Comparing the output logs of validations 5.1 and 5.2 shows that simulating the given
model with the Shock Resistant Time Warp algorithm reduces the number of rolled
back Transaction moves. This is especially visible when comparing the statistics
output by LP2 at the end of both simulation runs, as found in the output log files
for LP2 or in the table below.
LP statistic item Validation 5.1 Validation 5.2
Total committed Transaction moves 19639 19953
Total Transaction moves rolled back 70331 77726
Total simulated Transaction moves 90330 97725
Table 11: LP2 processing statistics of validation 5
Table 11 shows that the simulation run of validation 5.1 using the Shock Resistant Time
Warp algorithm required around 7400 fewer rolled back Transaction moves, which is
about 10% less than the simulation run of validation 5.2 using the Time Warp
algorithm. As a result the total number of Transaction moves performed by the
simulation was reduced as well. The simulation run using the Shock Resistant Time
Warp algorithm also performed slightly better than the run using the Time
Warp algorithm. This can be seen from the simulation performance information output
as part of the post simulation report found in the output logs of the simulate process for
both simulation runs. For validation 5.1 the simulation performance was 1640 time units
per second real time, and for validation 5.2 it was 1607 time units per second real time. The
performance difference is quite small, which suggests that for the example model used,
the processing saved on rolled back Transaction moves only just outweighs the extra
processing required for the LPCC, the additional GVT calculations and the extra logging
for the LPCC (the LP2 output log of validation 5.2 is only around 3% of the size of the one of
validation 5.1). But for more complex simulation models, where rollbacks in one LP lead
to cascaded rollbacks in other LPs, a much larger saving in the number of rolled back
Transaction moves can be expected. It also needs to be considered that the hardware
setup used (i.e. all nodes running on a single CPU machine) is not ideal for a
performance evaluation, as the main purpose of this validation is to evaluate the
functionality of the parallel simulator.
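The saving reported above can be reproduced directly from the LP2 statistics in Table 11; the class and method names below are only illustrative.

```java
// Recomputing the validation 5 saving from the LP2 statistics in Table 11:
// 77726 rolled back moves (Time Warp) vs 70331 (Shock Resistant Time Warp).
public class Validation5Stats {
    /** Absolute reduction in rolled back Transaction moves. */
    public static long savedMoves(long rolledBackTimeWarp, long rolledBackShockResistant) {
        return rolledBackTimeWarp - rolledBackShockResistant;
    }
    /** Relative reduction in percent of the Time Warp run. */
    public static double savedPercent(long rolledBackTimeWarp, long rolledBackShockResistant) {
        return 100.0 * savedMoves(rolledBackTimeWarp, rolledBackShockResistant)
                / rolledBackTimeWarp;
    }
}
```

savedMoves(77726, 70331) gives 7395 moves, i.e. around 7400, and savedPercent gives roughly 9.5%, i.e. the "about 10%" stated above.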
6.6 Validation 6
During the testing of the parallel simulator it became apparent that in some cases the
normal Time Warp algorithm can outperform the Shock Resistant Time Warp algorithm.
This last validation shows this with an example. The simulation model used is very
similar to the one used for validation 5. It contains two partitions, with the first partition
transferring some of its Transactions to the second partition, but this time the
GENERATE blocks are configured so that the first partition is ahead in simulation time
compared to the second. The simulation is finished when 3000 Transactions have been
terminated in one of the partitions. The complete simulation model can be seen here:
PARTITION Partition1,3000
GENERATE 1,0,2000
TRANSFER 0.3,Label1
TERMINATE 1
PARTITION Partition2,3000
GENERATE 1,0
Label1 TERMINATE 1
Simulation model file model_validation6.gps
As a result of the changed GENERATE block configuration and the first partition being
ahead of the second partition in simulation time, all Transactions received by partition 2
from partition 1 are in the future for partition 2 and will cause no rollbacks. But it
will lead to an increase in the number of outstanding Transactions within partition 2,
pushing up the number of uncommitted Transaction moves during the simulation.
The logging configuration for this validation is also similar to the one used for
validation 5 except that the LP statistic is not needed this time and is therefore switched
off. The extract below shows the loggers for which debug output was enabled.
log4j.logger.parallelJavaGpssSimulator.lp=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.commit=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.rollback=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.lpcc=DEBUG
log4j.logger.parallelJavaGpssSimulator.lp.lpcc.statespace=DEBUG
log4j.logger.parallelJavaGpssSimulator.simulation=DEBUG
Extract of the used log4j configuration file proactive-log4j
Like for validation 5 the script startNode.sh used to run the LP nodes is changed to
extend the memory limit of the JVM to 128MB.
Validation 6.1
For the first validation run the LPCC was enabled using the same configuration as for
validation 5.1, resulting in the model being simulated using the Shock Resistant Time
Warp algorithm. The significant effect of this simulation run is that the LPCC in LP2
starts setting actuator values in order to steer the local simulation processing towards a
state that promises better performance. But because the number of uncommitted
Transaction moves within the second partition increases as a result of the Transactions
received from partition 1, the actuator limits set by the LPCC are reached and the LP is
switched into cancelback mode, slowing down its simulation progress. In
addition LP1 is also slowed down by the Transactions cancelled back from LP2, as
indicated by the output log of LP1. The actuator being set for LP2 can for instance be
seen in lines 25 to 28 of the output log of LP2. Subsequently the actuator limit is reached,
as shown in lines 32, 36 and 42 of the output log, and the LP is turned into cancelback mode
in the lines below. LP2 keeps switching into cancelback mode and keeps cancelling
back Transactions to LP1 for large parts of the simulation, resulting in a significant
slowdown of the overall simulation progress.
Validation 6.2
The output logs for validation 6.2 are very short compared to the former simulation run
because the simulation is performed using the normal Time Warp algorithm with the
LPCC switched off. Therefore no actuator values are set and none of the LPs
switches into cancelback mode. There are also no rollbacks, so the simulation
progresses with the optimum performance for the model and setup used.
Comparison of validation 6.1 and 6.2
The simulation model for this validation is processed with optimum simulation
performance by the normal Time Warp algorithm. As a result no performance increase
can be expected from the Shock Resistant Time Warp algorithm. But the Shock
Resistant Time Warp algorithm performs significantly worse than the normal Time
Warp algorithm. This is caused by the Shock Resistant Time Warp algorithm slowing
down the LP that is already behind in simulation time, i.e. the slowest LP. The
validation shows that the approach of the Shock Resistant Time Warp algorithm of
optimising the parameters of each LP by only considering local status information
within these LPs does not always work.
6.7 Validation Analysis
The first few validations evaluate basic functional aspects of the parallel simulator. For
instance, validation 1 focuses on the GPSS parser component of the simulator and
validation 2 on the basic GPSS simulation engine functionality. The transfer of
Transactions between LPs is the main subject of validation 3, and the memory
management is evaluated by validation 4. Validation 5 examines the correct functioning
of the LPCC as the main component of the Shock Resistant Time Warp algorithm.
Using specific simulation models and configurations, these validations demonstrate with
a certain degree of confidence that the parallel simulator is functionally correct and
working as expected.
In addition, validations 5 and 6 give a basic idea of the performance of parallel
transaction-oriented simulation based on the Shock Resistant Time Warp algorithm and
of the performance of the parallel simulator, even though the validations performed here
cannot be seen as proper performance validations. Validation 5 shows that the Shock
Resistant Time Warp algorithm can successfully reduce the number of rolled back
Transaction moves, resulting in more useful processing during the simulation and
possibly better performance. But validation 6 revealed that the Shock Resistant Time
Warp algorithm can also perform significantly worse than normal Time Warp. This is
usually the case when the LPCC of the LP that is already furthest behind in simulation
time decides to set an actuator value that limits the simulation progress, resulting in the
LP and the overall simulation progress being slowed down further.
The problem found with the Shock Resistant Time Warp algorithm is a direct result of the
fact that, if implemented as described in [8], the control decisions of each LPCC are
based only on local sensor values within each LP and not on an overall picture of the
simulation progress. Such problems could possibly be avoided by combining the Shock
Resistant Time Warp algorithm with ideas from the adaptive throttle scheme suggested
in [33], which is also briefly described in section 4.2.2. The GFT needed by this
algorithm in order to describe the Global Progress Window could easily be determined
and passed back to the LPs together with the GVT after the GVT calculation, without
much additional processing or communication overhead.
Using the information of such a Global Progress Window, one option to improve the
Shock Resistant Time Warp algorithm would be to add another sensor to the LPCC that
describes the position of the current LP within the Global Progress Window. Another
option, promising greater influence of the global progress information on the local
LPCC control decisions, would be to change the function that determines the new
actuator value so that it makes direct use of the global progress information. Such a
function could for instance ignore the actuator value resulting from the best past state
found if the position of the LP within the Global Progress Window is very close
to the GVT. It could also increase or decrease the actuator value returned by the original
function depending on whether the LP is located within the slow or the fast zone of the
Global Progress Window (see Figure 8 in 4.2.2). A finer influence of the position
within the Global Progress Window could be reached by dividing this window into
more than two zones, each resulting in a slightly different effect on the actuator value and
the future progress of the LP. Future work on this parallel simulator could investigate
and compare these options with the prospect of creating a synchronisation algorithm that
combines the advantages of both these algorithms.
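The second option sketched above could look as follows. This is only one possible shape of such a function: the zone boundaries (10% and 50% of the window) and the scaling factors are invented for illustration and would need tuning; only the idea of never throttling an LP close to the GVT and of relaxing/tightening the limit by zone comes from the text.

```java
// Hypothetical sketch of adjusting the actuator value proposed by the
// original LPCC function according to the LP's position within the
// Global Progress Window (GVT..GFT). Zone boundaries and factors are
// illustrative assumptions.
public class GpwActuatorAdjustment {
    public static final long NO_LIMIT = Long.MAX_VALUE;

    /**
     * @param lpccActuator actuator value proposed by the original LPCC function
     * @param lvt          local virtual time of this LP
     * @param gvt          Global Virtual Time (rear of the window)
     * @param gft          Global Furthest Time (front of the window)
     */
    public static long adjust(long lpccActuator, long lvt, long gvt, long gft) {
        if (gft <= gvt) return lpccActuator;          // degenerate window: keep proposal
        double position = (double) (lvt - gvt) / (gft - gvt);
        if (position < 0.1) return NO_LIMIT;          // very close to GVT: never throttle
        if (lpccActuator == NO_LIMIT) return NO_LIMIT;
        if (position < 0.5) return lpccActuator * 2;  // slow zone: relax the limit
        return lpccActuator / 2;                      // fast zone: tighten the limit
    }
}
```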
7 Conclusions
Even though the performance of modern computer systems has steadily increased during
the last decades, the ever growing demand for the simulation of more and more complex
systems can still lead to long runtimes. The runtime of a simulation can often be
reduced by performing its parts distributed across several processors or nodes of a
parallel computer system. Purpose-built parallel computer systems are usually very
expensive. This is where Computing Grids provide a cost-saving alternative by allowing
several organisations to share their computing resources. A certain type of Computing
Grid called an Ad Hoc Grid offers a dynamic and transient resource-sharing infrastructure,
suitable for short-term collaborations and with a very small administrative overhead, which
makes it possible even for small organisations or individual users to form Computing
Grids.
In the first part of this paper the requirements of Ad Hoc Grids are outlined and the Grid
framework ProActive [15] is identified as a Grid environment that fulfils these
requirements. The second part analyses the possibilities of performing parallel
transaction-oriented simulation with a special focus on the space-parallel approach and
synchronisation algorithms for discrete event simulation. From the algorithms
considered the Shock Resistant Time Warp algorithm [8] was chosen as the most
suitable for transaction-oriented simulation as well as the target environment of Ad Hoc
Grids. This algorithm was subsequently applied to transaction-oriented simulation,
considering further aspects and properties of this simulation type. These additional
considerations included the GVT calculation, detection of the simulation end,
cancellation techniques suitable for transaction-oriented simulation and the influence of
the model partitioning. Following the theoretical decisions a Grid-based parallel
transaction-oriented simulator was then implemented in order to demonstrate the
decisions made. Finally the functionality of the simulator was evaluated using different
simulation models in several validation runs in order to show the correctness of the
implementation.
The main contribution of this work is to provide a Grid-based parallel transaction-
oriented simulator that can be used for further research, for educational purposes or even
for real-life simulations. The chosen Grid framework ProActive ensures its suitability
for Ad Hoc Grids. The parallel simulator can operate according to the normal Time
Warp or the Shock Resistant Time Warp algorithm, allowing large-scale performance
comparisons of these two synchronisation algorithms using different simulation models
and on different hardware environments. It was shown that the Shock Resistant Time
Warp algorithm can successfully reduce the number of rolled back Transaction moves,
which for simulations with many or long cascaded rollbacks will lead to better
simulation performance. But this work also revealed a problem of the Shock Resistant
Time Warp algorithm as implemented according to [8]. Because in this
algorithm all LPs try to optimise their properties based only on local information, it is
possible for the Shock Resistant Time Warp algorithm to perform significantly worse
than the normal Time Warp algorithm. Future work on this simulator could improve the
Shock Resistant Time Warp algorithm by making the LPs aware of their position within
the GPW as suggested in [33]. Combining these two synchronisation algorithms would
create an algorithm that has the advantages of both without any major additional
communication and processing overhead.
Future work on improving this parallel transaction-oriented simulator could also look at
employing different GVT algorithms and state saving schemes. Possible options were
suggested in 4.3 and 5.3.3. This work also discussed the aspect of accessing objects in a
different LP including a single shared Termination Counter. As mentioned in the report
these options were not implemented in the parallel simulator and could be considered
for future enhancements. Finally the simulator does not support the full GPSS/H
language but only a large sub-set of the most important entities, which leaves further
room for improvements.
References
[1] Amin K. An Integrated Architecture for Ad Hoc Grids [online]. 2004 [cited 8 Jan
2007]. Available from: URL: http://students.csci.unt.edu/~amin/publications/
phd-thesis-proposal/proposal.pdf
[2] Amin K, von Laszewski G and Mikler A. Toward an Architecture for Ad Hoc
Grids. In: 12th International Conference on Advanced Computing and
Communications (ADCOM 2004); 15-18 Dec 2004; Ahmedabad Gujarat, India
[online]. 2004 [cited 8 Jan 2007]. Available from: URL: http://www.mcs.anl.gov/
~gregor/papers/vonLaszewski-adhoc-adcom2004.pdf
[3] Apache Software Foundation. log4j logging library [online]. [cited 8 Jan 2007].
Available from: URL: http://logging.apache.org/log4j/docs/
[4] Ball D and Hoyt S. The adaptive Time-Warp concurrency control algorithm.
Proceedings of the SCS Multiconference on Distributed Simulation; San Diego,
CA, USA; 22(1):174-177, 1990.
[5] Das S R. Adaptive protocols for parallel discrete event simulation. Proceedings of
the 28th conference on Winter simulation; 08-11 Dec 1996; Coronado, CA,
USA; 186-193. New York: ACM Press; 1996.
[6] Das S R and Fujimoto R M. Adaptive memory management and optimism control
in time warp. ACM Transactions on Modeling and Computer Simulation
(TOMACS); 7(2):239-271. New York: ACM Press; 1997.
[7] Ferscha A. Probabilistic adaptive direct optimism control in Time Warp.
Proceedings of the ninth workshop on Parallel and distributed simulation; Lake
Placid, New York; 120-129. Washington, DC: IEEE Computer Society; 1995.
[8] Ferscha A and Johnson J. Shock resistant Time Warp. Proceedings of the
thirteenth workshop on Parallel and distributed simulation; Atlanta, Georgia,
USA; 92-100. Washington, DC: IEEE Computer Society; 1999.
[9] Foster I and Kesselman C. The Grid: Blueprint for a New Computing
Infrastructure. 2nd edition. San Francisco: Morgan Kaufmann; 2004.
[10] Friese T, Smith M and Freisleben B. Hot service deployment in an ad hoc grid
environment. Proceedings of the 2nd international conference on Service
oriented computing; New York; 75-83. New York: ACM Press; 2004.
[11] Fujimoto R M. Parallel and distributed simulation. Proceedings of the 31st
conference on Winter simulation; Phoenix, Arizona, USA; 122-131. New York:
ACM Press; 1999.
[12] Fujimoto R M. Parallel and Distributed discrete event simulation: algorithms and
applications. Proceedings of the 25th conference on Winter simulation; Los
Angeles, USA. New York: ACM Press; 1993.
[13] Gafni A. Space Management and Cancellation Mechanisms for Time Warp [Ph.D.
dissertation]. Dept. of Computer Science, University of Southern California, TR-
85-341; 1985.
[14] GDPA. Category of Methods "Simulation Models" (SIMU) [online]. [cited 8 Jan
2007]. Available from: URL: http://www.informatik.uni-bremen.de/gdpa/
methods/ma-simu.htm
[15] INRIA. ProActive - Programming, Composing, Deploying on the Grid [online].
[cited 8 Jan 2007]. Available from: http://www-sop.inria.fr/oasis/ProActive/
[16] Jefferson D R. Virtual time. ACM Transactions on Programming Languages and
Systems (TOPLAS); 7(3):404-425. New York: ACM Press; 1985.
[17] Jefferson D R. Virtual Time II: storage management in conservative and optimistic
systems. Proceedings of the ninth annual ACM symposium on Principles of
distributed computing; Quebec City, Quebec, Canada; 75-89. New York: ACM
Press; 1990.
[18] Konieczny D, Kwiatkowski J and Skrzypczynski G. Parallel Search Algorithms
for the Distributed environments. Proceedings of the 16th IASTED International
Conference APPLIED INFORMATICS; 23-25 Feb 1998; Garmisch-
Partenkirchen, Germany; 324-327. 1998.
[19] Krafft G. Parallelisation of Transaction-Oriented Simulation using Message
Passing in SCEs [BSc thesis in German] [online]. Hochschule Wismar -
University of Technology, Business & Design; 15 Jul 1998 [cited 8 Jan 2007].
Available from: URL: http://www.mb.hs-wismar.de/~pawel/
Study_DiplomBelege/WWW_Seiten/98_Diplom_Krafft/index_en.html
[20] Mattern F. Efficient Algorithms for Distributed Snapshots and Global Virtual Time
Approximation. Journal of Parallel and Distributed Computing, Aug 1993;
18(4):423-434.
[21] Mehl H. Methoden verteilter Simulation. Braunschweig: Vieweg; 1994.
[22] Metsker S J. Design Patterns - Java Workbook. Boston: Addison-Wesley; 2002.
[23] Pawletta S. Erweiterung eines wissenschaftlich-technischen Berechnungs und
Visualisierungssystems zu einer Entwicklungsumgebung für parallele
Applikationen. Wien: ARGESIM/ASIM-Verlag; 2000.
[24] Reynolds Jr P F. A spectrum of options for parallel simulation. Proceedings of the
20th conference on Winter simulation; San Diego, California, USA; 325-332.
New York: ACM Press; 1988.
[25] Rönngren R and Ayani R. Adaptive Checkpointing in Time Warp. Proceedings of
the eighth workshop on Parallel and distributed simulation; Edinburgh, Scotland,
UK; 110-117. New York: ACM Press; 1994.
[26] Schriber T J. An Introduction to Simulation Using GPSS/H. John Wiley & Sons;
1991.
[27] Smith M, Friese T and Freisleben B. Towards a Service Oriented Ad-Hoc Grid.
Proceedings of the 3rd International Symposium On Parallel and Distributed
Computing; Cork, Ireland; 201-208. Washington, DC: IEEE Computer Society;
2004.
[28] Srinivasan S and Reynolds Jr P F. NPSI adaptive synchronization algorithms for
PDES. Proceedings of the 27th conference on Winter simulation; Arlington,
Virginia, USA; 658-665. New York: ACM Press; 1995.
[29] Steinman J. Breathing Time Warp. Proceedings of the seventh workshop on
Parallel and distributed simulation; San Diego, California, USA; 109-118. New
York: ACM Press; 1993.
[30] Steinman J. SPEEDES: Synchronous Parallel Environment for Emulation and
Discrete Event Simulation. Proceedings of the SCS Western Multiconference on
Advances in Parallel and Distributed Simulation; 1991.
[31] Sun Microsystems, Inc. Java Technology [online]. [cited 8 Jan 2007]. Available
from: URL: http://java.sun.com/
[32] Tanenbaum A and van Steen M. Distributed Systems, Principles and Paradigms.
New Jersey: Prentice Hall; 2002.
[33] Tay S C, Teo Y M and Kong S T. Speculative parallel simulation with an adaptive
throttle scheme. Proceedings of the eleventh workshop on Parallel and
distributed simulation; Lockenhaus, Austria; 116-123. Washington, DC: IEEE
Computer Society; 1997.
[34] Tropper C. Parallel discrete-event simulation applications. Journal of Parallel and
Distributed Computing, Mar 2002; 62(3):327-335.
[35] Wikipedia. Computer simulation [online]. [cited 3 Dec 2005]. Available from:
URL: http://en.wikipedia.org/wiki/Computer_simulation
[36] Wolverine Software. GPSS/H - Serving the simulation community since 1977
[online]. [cited 3 Dec 2005]. Available from: URL:
http://www.wolverinesoftware.com/GPSSHOverview.htm
Appendix A: Detailed GPSS Syntax
This is a detailed description of the GPSS syntax supported by the parallel GPSS
simulator. The correct syntax of the simulation models loaded into the simulator is
validated by the GPSS parser within the simulator (see section 5.2.1).
GPSS simulation model files are line-based text files. Each line contains a block
definition, a Storage definition or a partition definition. Comment lines starting with the
sign * or empty lines are ignored by the parser.
A block definition line starts with an optional label followed by the reserved word of a
block type and an optional comma separated list of parameters. The label is used to
reference the block definition for jumps or a branched execution path from other blocks.
The label, the block type and the parameter list need to be separated by at least one
space. Note that a comma-separated parameter list cannot contain any spaces. Any other
characters following the comma-separated parameter list are considered to be comments
and are ignored. Labels as well as other entity names (i.e. for Facilities, Queues and
Storages) can contain any alphanumerical characters but no spaces, and they must be
different from the defined reserved words. Labels of two different block definitions within
the same model cannot be the same. Labels are case-sensitive, so two labels that only
differ in the case of their characters are considered different. In contrast, reserved
words are not case-sensitive, but it is an accepted convention in GPSS to use upper-case
letters for GPSS reserved words. Storage definitions and partition definitions cannot
start with a label. They therefore start with the reserved word STORAGE or
PARTITION. Below the syntax of the different GPSS definitions is explained in more
detail. For a detailed description of the actual GPSS functionality see [26].
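The line rules above (optional case-sensitive label, case-insensitive reserved word, space-free comma-separated parameter list, trailing comment, * comment lines) can be sketched as the following tokenizer. This is not the simulator's actual parser; the reserved-word list is restricted to the block types appearing in this appendix and in the validation models.

```java
// Illustrative sketch of parsing one GPSS definition line according to the
// syntax rules above; not the simulator's actual parser implementation.
import java.util.Arrays;
import java.util.List;

public class GpssLineParser {
    private static final List<String> RESERVED = Arrays.asList(
        "ADVANCE", "DEPART", "ENTER", "GENERATE", "LEAVE",
        "QUEUE", "STORAGE", "PARTITION", "TERMINATE", "TRANSFER");

    /** Returns {label, blockType, paramList}, or null for comment/empty lines. */
    public static String[] parse(String line) {
        String trimmed = line.trim();
        if (trimmed.isEmpty() || trimmed.startsWith("*")) return null; // ignored line
        String[] tokens = trimmed.split("\\s+");
        String label = "";
        int i = 0;
        // Reserved words are case-insensitive; a leading token that is not a
        // reserved word is taken as the (case-sensitive) label.
        if (!RESERVED.contains(tokens[0].toUpperCase())) { label = tokens[0]; i = 1; }
        String type = tokens[i].toUpperCase();
        // The comma-separated parameter list contains no spaces; anything
        // after it is treated as a comment and ignored.
        String params = (i + 1 < tokens.length) ? tokens[i + 1] : "";
        return new String[] { label, type, params };
    }
}
```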
Partition definition:
Reserved word: PARTITION
Syntax: PARTITION [<partition name>[,<termination counter>]]
Description:
The partition definition declares the beginning of a new partition. If the optional
partition name parameter is not specified then it will default to ‘Partition x’ with x being
the number of the partition within the model. The partition name cannot contain any
spaces. If the optional termination counter parameter is not specified then the default
termination counter value from the simulator configuration will be used. If specified
then the termination counter parameter has to be a positive integer value.
Storage definition:
Reserved word: STORAGE
Syntax: STORAGE <Storage name>[,<Storage capacity>]
Description:
The Storage definition declares a Storage entity. The Storage name parameter is
required. If the optional Storage capacity parameter is not specified then it will default
to the maximum value of the Java int type. If specified then the Storage capacity
parameter needs to be a positive integer value. The Storage definition has to appear in
the simulation model before any block that uses the specific Storage.
Block definitions:
Reserved word: ADVANCE
Syntax: [<label>] ADVANCE [<average holding time>[,<half range>]]
Description:
The ADVANCE block delays Transactions by a fixed or random amount of time. The
optional average holding time parameter describes the average time by which the
Transaction is delayed, defaulting to zero. Together with the average holding time, the
second parameter, the half range, describes the uniformly distributed
random value range from which the actual delay time is drawn. If the half range
parameter is not specified then the delay time will always have the deterministic value
of the average holding time parameter. Both parameters can either be a positive integer
value or zero. In addition the half range parameter cannot be greater than the average
holding time parameter.
Reserved word: DEPART
Syntax: [<label>] DEPART <Queue name>
Description:
The DEPART block causes the Transaction to leave the specified Queue entity. The
required Queue name parameter describes the Queue entity that the Transaction will
leave.
Reserved word: ENTER
Syntax: [<label>] ENTER <Storage name>[,<usage count>]
Description:
Through the ENTER block a Transaction will capture a certain number of units from the
specified Storage entity. The Storage name parameter defines the Storage entity used
and the second optional usage count parameter specifies how many Storage units will
be captured by the Transaction. If not specified this parameter will default to 1.
Otherwise this parameter has to have a positive integer value that is less than or equal to the
size of the Storage. If the specified number of units is not available for that Storage
then the Transaction will be blocked until they become available.
Reserved word: GENERATE
Syntax: [<label>] GENERATE [<average interarrival time>[,<half range>
[,<time offset>[,<limit count>[,<priority>]]]]]
Description:
The GENERATE block generates new Transactions that enter the simulation. The first
two parameters, average interarrival time and half range, describe the uniformly
distributed random value range from which the interarrival time is drawn. The
interarrival time is the time between the last Transaction entering the simulation through
this block and the next one. Both these parameters default to zero if not specified. The
next parameter, the time offset, describes the arrival time of the first
Transaction. If it is not specified then the arrival time of the first Transaction is
determined via a uniformly distributed random sample using the first two parameters.
The limit count parameter specifies the total count of Transactions that will enter the
simulation through this GENERATE block; once the count is reached, no further
Transactions are generated. If this parameter is not specified then no limit applies. The
priority parameter specifies the priority value assigned to the generated Transactions
and defaults to 0. If specified, each of these parameters must have a positive
integer value or zero. In addition, the half range parameter cannot be greater than the
average interarrival time parameter.
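As a hypothetical illustration of the parameter order described above, the following block (the label `ARR` is invented for the example) creates Transactions with interarrival times drawn uniformly from 9 to 15, lets the first Transaction arrive at time 0, stops after 100 Transactions, and assigns them priority 1:

```gpss
ARR     GENERATE 12,3,0,100,1
```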
Reserved word: LEAVE
Syntax: [<label>] LEAVE <Storage name>[,<usage count>]
Description:
The LEAVE block releases the specified number of Storage units held by the
Transaction. The Storage name parameter defines the Storage entity from which units
will be released, and the second, optional usage count parameter specifies how many
Storage units will be released by the Transaction. If not specified, this parameter
defaults to 1. Otherwise it must have a positive integer value that is less than or
equal to the size of the Storage. If the specified number of units is greater than the
number of units currently held by the Transaction then a runtime error will occur.
Reserved word: QUEUE
Syntax: [<label>] QUEUE <Queue name>
Description:
The QUEUE block causes the Transaction to enter the specified Queue entity. The
required Queue name parameter describes the Queue entity that the Transaction will
enter.
Reserved word: RELEASE
Syntax: [<label>] RELEASE <Facility name>
Description:
The RELEASE block will release the specified Facility entity held by the Transaction.
The Facility name parameter defines the Facility entity that will be released. If the
Facility entity is not currently held by the Transaction then a runtime error will occur.
Reserved word: SEIZE
Syntax: [<label>] SEIZE <Facility name>
Description:
Through the SEIZE block a Transaction will capture the specified Facility entity. The
Facility name parameter defines the Facility entity that will be captured. If the Facility
entity is already held by another Transaction then the Transaction will be blocked until
it becomes available.
Reserved word: TERMINATE
Syntax: [<label>] TERMINATE [<Termination Counter decrement>]
Description:
TERMINATE blocks are used to destroy Transactions. When a Transaction enters a
TERMINATE block, the Transaction is removed from the model and not chained back
into the Transaction chain. Each time a Transaction is destroyed by a TERMINATE
block, the local Termination Counter is decremented by the decrement specified for that
TERMINATE block. The Termination Counter decrement parameter is optional and
defaults to zero if it is not specified. TERMINATE blocks with a zero decrement
parameter will not change the Termination Counter when they destroy a Transaction.
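The blocks described so far can be combined into a minimal single-server model. The following sketch uses hypothetical entity names and is not taken from the report: Transactions arrive every 10 plus or minus 5 time units, wait in a Queue for a Facility, hold it for 8 plus or minus 2 time units, and each destroyed Transaction decrements the Termination Counter by 1:

```gpss
        GENERATE  10,5
        QUEUE     WaitLine
        SEIZE     Server
        DEPART    WaitLine
        ADVANCE   8,2
        RELEASE   Server
        TERMINATE 1
```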
Reserved word: TRANSFER
Syntax: [<label>] TRANSFER [<transfer probability>,]<destination label>
Description:
A TRANSFER block changes the execution path of a Transaction based on the specified
probability. Normally a Transaction moves from one block to the next, but when it
executes a TRANSFER block the Transaction can be transferred to a block other
than the next following one. This destination can even be located in a different partition of
the model. To decide whether a Transaction is transferred, a random
value is drawn and compared to the specified probability: if the random value is less
than or equal to the probability then the Transaction is transferred. The transfer
probability parameter needs to be a floating point number between 0 and 1 (inclusive).
It is optional and defaults to 1, which transfers all Transactions.
The destination label parameter has to be a valid block label within the model.
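The transfer decision rule described above can be sketched in Java as follows. This is an illustrative re-statement of the rule only, not the simulator's actual implementation; the class and method names are invented for the example:

```java
import java.util.Random;

// Sketch of the TRANSFER decision rule: a uniform random value in [0,1)
// is drawn and the Transaction is transferred when the value is less than
// or equal to the specified transfer probability.
public class TransferDecision {

    // Decision rule applied to an already-drawn random value.
    public static boolean isTransferred(double drawnValue, double transferProbability) {
        return drawnValue <= transferProbability;
    }

    // Convenience overload that draws the random value itself.
    public static boolean isTransferred(Random rng, double transferProbability) {
        return isTransferred(rng.nextDouble(), transferProbability);
    }
}
```

Note that a drawn value exactly equal to the probability still transfers the Transaction, matching the "less than or equal" rule, and a probability of 1 transfers every Transaction since `nextDouble()` is always strictly below 1.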
Appendix B: Simulator configuration settings
This appendix describes the configuration settings that can be used for the parallel
simulator. Most of these settings can be applied as command line arguments or as
settings within the simulate.config file. A general description of the simulator
configuration can be found in section 5.4.3.
Setting: ConfigFile
Default value: simulate.config
Description:
This configuration setting can only be used as a command line argument, and it has to
follow immediately after the simulation model file name. It specifies the name of the
configuration file used by the parallel simulator.
Setting: DefaultTC
Default value: none
Description:
This configuration setting defines the default Termination Counter used for
partitions that do not have a Termination Counter defined in the simulation model file.
If specified, it must have a non-negative numeric value.
Setting: DeploymentDescriptor
Default value: ./deploymentDescriptor.xml
Description:
The ProActive deployment descriptor file used by the parallel simulator is specified
using this configuration setting.
Setting: LogConfigDetails
Default value: false
Description:
If this Boolean configuration setting is switched on then the parallel simulator will
output all configuration settings in use, including default values, at the start of
a simulation. This can be useful for debugging purposes.
Setting: LpccClusterNumber
Default value: 1000
Description:
This numeric configuration setting sets the maximum number of clusters stored in the
State Cluster Space used by the LPCC. If a new indicator set is added to the State
Cluster Space and the maximum number of clusters has already been reached, then either
two clusters or a cluster and the new indicator set are merged. The larger the value of
this setting, the more distinct state indicator sets can be stored and used by the Shock
Resistant Time Warp algorithm, but the more memory is needed to store this
information.
Setting: LpccEnabled
Default value: true
Description:
If this Boolean configuration setting is set to true, which is the default, then the LPCC
is enabled and the simulation is performed according to the Shock Resistant Time Warp
algorithm. Otherwise the LPCC is switched off and the normal Time Warp algorithm is
used for the parallel simulation.
Setting: LpccUpdateInterval
Default value: 10
Description:
This configuration setting defines the LPCC processing time interval. Its value
has to be a number greater than zero and describes the number of seconds
between LPCC processing steps. It also specifies how often the LPCC tries to find and
set a new actuator value. For long simulation runs on systems with large memory pools,
larger LPCC processing intervals can be beneficial because fewer GVT calculations are
needed. On the other hand, if the systems used have frequently changing additional loads
and the simulation model is known to have a frequently changing behaviour pattern, then
small values might give better results.
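For illustration, several of the settings above might be combined in a simulate.config file. The key=value layout shown here is an assumption made for this example; the exact file format is described in section 5.4.3:

```properties
DefaultTC=1000
LpccEnabled=true
LpccUpdateInterval=20
LpccClusterNumber=2000
```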
Setting: ParseModelOnly
Default value: false
Description:
If this configuration setting is enabled then the parallel simulator will parse the
simulation model file and output the in-memory representation of the model, but no
simulation will be performed. This setting can therefore be used to evaluate whether the
simulation model was parsed correctly and to check which defaults have been set for
optional GPSS parameters.
Appendix C: Simulator log4j loggers
The following loggers of the parallel simulator can be configured in its log4j
configuration file. In this section each logger is briefly described and its supported log
levels are mentioned.
Logger: parallelJavaGpssSimulator.gpss
Log levels used: debug
Description:
This logger is used to output debug information of the GPSS block and Transaction
processing during the simulation. It creates a detailed log of when a Transaction is
moved, which blocks it executes and when it is chained in or out. It is also the root
logger for any logging related to the GPSS simulation processing.
Logger: parallelJavaGpssSimulator.gpss.facility
Log levels used: debug
Description:
Whenever a Transaction releases a Facility entity this logger outputs detailed
information about the Transaction and when it captured and released the Facility.
Logger: parallelJavaGpssSimulator.gpss.queue
Log levels used: debug
Description:
Whenever a Transaction leaves a Queue entity this logger outputs detailed information
about the Transaction and when it entered and left the Queue.
Logger: parallelJavaGpssSimulator.gpss.storage
Log levels used: debug
Description:
Whenever a Transaction releases a Storage entity this logger outputs detailed
information about the Transaction and when it captured and released the Storage.
Logger: parallelJavaGpssSimulator.lp
Log levels used: debug, info, error, fatal
Description:
This logger is the root logger for any output of the LPs. The default log level is info,
which outputs some basic information about which partition was assigned to the LP and
when the simulation is completed. Errors within the LP are also output using this logger,
and if the debug log level is enabled then it outputs detailed information about the
communication and processing of the LP related to the synchronisation algorithm.
Logger: parallelJavaGpssSimulator.lp.commit
Log levels used: debug
Description:
This logger outputs information about when simulation states are committed and for
which simulation time.
Logger: parallelJavaGpssSimulator.lp.rollback
Log levels used: debug
Description:
A logger that outputs information about the rollbacks performed.
Logger: parallelJavaGpssSimulator.lp.memory
Log levels used: debug
Description:
This logger outputs detailed information about the current memory usage of the LP and
the amount of available memory within the JVM. It is called with each scheduling cycle
and can therefore create very large logs if enabled.
Logger: parallelJavaGpssSimulator.lp.stats
Log levels used: debug
Description:
This logger outputs the values of the sensor counters that are also used by the LPCC as a
statistic of the overall LP processing at the end of the simulation.
Logger: parallelJavaGpssSimulator.lp.lpcc
Log levels used: debug
Description:
A logger that outputs detailed information about the processing of the LPCC, including
for instance any actuator values set or when an actuator limit has been exceeded.
Logger: parallelJavaGpssSimulator.lp.lpcc.statespace
Log levels used: debug
Description:
This logger outputs information about the processing of the State Cluster Space. This
includes details of new indicator sets added or possible past indicator sets found that
promise better performance.
Logger: parallelJavaGpssSimulator.simulation
Log levels used: debug, info, error, fatal
Description:
This is the root logger for all general output about the simulation, the Simulation
Controller and the simulate process. The default log level is info, which outputs the
standard information about the simulation. The logger also outputs errors thrown during
the simulation and in debug mode gives detailed information about the processing of the
Simulation Controller.
Logger: parallelJavaGpssSimulator.simulation.gvt
Log levels used: debug, info
Description:
A logger that outputs detailed information about GVT calculations when set to the
debug level. If the logger is set to the info log level then only basic information about
the GVT reached is logged.
Logger: parallelJavaGpssSimulator.simulation.report
Log levels used: info
Description:
This logger is the root logger for the post simulation report. It can be used to switch off
the output of the post simulation report by setting the log level to off.
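Assuming a standard log4j properties configuration file, the post simulation report could, for instance, be switched off while debug output for rollbacks is enabled; the logger names are those listed in this appendix:

```properties
log4j.logger.parallelJavaGpssSimulator.simulation.report=OFF
log4j.logger.parallelJavaGpssSimulator.lp.rollback=DEBUG
```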
Logger: parallelJavaGpssSimulator.simulation.report.block
Log levels used: info
Description:
This is the logger that is used for the block section of the post simulation report. It
allows this section to be switched off if required.
Logger: parallelJavaGpssSimulator.simulation.report.summary
Log levels used: info
Description:
This is the logger that is used for the summary section of the post simulation report. It
allows this section to be switched off if required.
Logger: parallelJavaGpssSimulator.simulation.report.chain
Log levels used: info
Description:
This is the logger that is used for the Transaction chain section of the post simulation
report. It allows this section to be switched off if required.
Appendix D: Structure of the attached CD
The folder structure of the attached CD is briefly explained in this section. The root
folder of the CD also contains this report as a Microsoft Word and PDF document.
/ParallelJavaGpssSimulator
This is the main folder of the parallel simulator. It contains all the files required to run
the simulator as described in 5.4.2. It also contains some of the folders mentioned
below.
/ParallelJavaGpssSimulator/bin
The folder structure within this folder contains all the binary Java class files of the
parallel simulator. The same Java class files are also included in the main JAR file of
the simulator.
/ParallelJavaGpssSimulator/doc
This folder contains the full JavaDoc documentation of the parallel simulator. The
JavaDoc documentation can be viewed by opening the index.html file within this folder
in a Web browser. It describes the source code of the parallel simulator and is generated
from comments within the source code using the JavaDoc tool.
/ParallelJavaGpssSimulator/src
The src folder contains the actual source code of the parallel simulator, i.e. all the java
files.
/ParallelJavaGpssSimulator/validation
This folder contains a sub-folder for each validation. All files required to repeat the
validation runs can be found in these sub-folders, including simulation models,
configuration and all the output log files of the validation runs described in section 6.
The validation runs can be performed directly from these folders.
/ProActive
This additional folder contains the compressed archive of the ProActive version used.
Appendix E: Documentation of selected classes
This section contains the JavaDoc documentation of the following selected classes.
• parallelJavaGpssSimulator.SimulationController
• parallelJavaGpssSimulator.lp.LogicalProcess
• parallelJavaGpssSimulator.lp.ParallelSimulationEngine
• parallelJavaGpssSimulator.lp.lpcc.LPControlComponent
The full JavaDoc documentation of all classes can be found on the attached CD, see
Appendix D for further details.
Appendix E: Documentation of selected classes – SimulationController
parallelJavaGpssSimulator
Class SimulationController

java.lang.Object
  parallelJavaGpssSimulator.SimulationController

All Implemented Interfaces:
java.io.Serializable, org.objectweb.proactive.Active, org.objectweb.proactive.RunActive

Author:
Gerald Krafft
See Also:
Serialized Form

Field Summary
VIRTUAL_NODE_NAME
Name of the virtual node that needs to be defined in the deployment descriptor file.

Constructor Summary
SimulationController()
Main constructor

Method Summary
- createActiveInstance(node): Static method that creates an Active Object SimulationController instance on the specified node.
- getSimulationState(): Returns the state of the simulation.
- reportException(Exception, LP index): Called by logical process instances to report exceptions thrown by the simulation.
- requestGvtCalculation(): Called by LPs to request a GVT calculation by the SimulationController.
- runActivity(Body): Implements the main activity loop of the Active Object.
- simulate(Model, Configuration): Starts parallel simulation of the specified model using the specified configuration.
- terminateLPs(): Terminates all LPs.

Methods inherited from class java.lang.Object

Field Detail

VIRTUAL_NODE_NAME
Name of the virtual node that needs to be defined in the deployment descriptor
file. Its value is "ParallelJavaGpssSimulator".
See Also:
Constant Field Values

Constructor Detail

SimulationController()
Main constructor

Method Detail

runActivity(Body)
Implements the main activity loop of the Active Object.
Specified by:
runActivity in interface org.objectweb.proactive.RunActive
Parameters:
- body of the Active Object

createActiveInstance(node)
Static method that creates an Active Object SimulationController instance on the
specified node.
Parameters:
- node at which the instance will be created, or within the current JVM if null
Returns:
active instance of SimulationController

simulate(Model, Configuration)
Starts parallel simulation of the specified model using the specified
configuration.
Parameters:
- GPSS model that will be simulated
- configuration settings
Throws:
CriticalSimulatorException - critical error that makes simulation impossible;
further exceptions can be thrown by ProActive

terminateLPs()
This method terminates all LPs. It is called by the main application Simulate
class. It returns an exception if terminating the LPs fails, which automatically
forces calls to this method to be synchronous.

reportException(Exception, LP index)
Called by logical process instances to report exceptions thrown by the
simulation. This method is used for exceptions that occur within runActivity() of
these instances and not within remote method calls. Exceptions thrown within
remote method calls are automatically passed back by ProActive.
Parameters:
- Exception that was thrown in LP
- index of the LP that reports the exception

getSimulationState()
Returns the state of the simulation.
Returns:
state of the simulation

requestGvtCalculation()
Called by LPs to request a GVT calculation by the SimulationController.
Appendix E: Documentation of selected classes – LogicalProcess
��������������
��
�����&��
����� ��������
�����
����������� �
��������������
��
�����������$� ��"�� �
All Implemented Interfaces:
java.io.Serializable,
org.objectweb.proactive.Active,
org.objectweb.proactive.RunActive
�� ��� ����� ��$� ��"�� �
��� ����������� �
��� ����� �
���������
���������
� ���������
������� �
���� �
�� ��� ���
��
�����
������� "��#
������
� �� ��� �
����
���
��
����
��
���
�������� $
�� �����
������� ����
�����%��
������
�% ��� ����
���
���� � �����
���������
�� �
��
��������� ������ �� ����� ���� �����
��
�� !��� �
�� ����� �
����� ��
�� ��
����
�
� !��� ��� ����
���
��
��� �� �
��
�� %�� ���
��
��� �% ��� ����
���
�
Author:
Gerald Krafft
See Also:
Serialized Form
Constructor Summary
LogicalProcess��
Main constructor (also used for serialization purpose)
Appendix E: Documentation of selected classes – LogicalProcess
Method Summary
���� cancelBackTransaction�Transaction �����
Called by other Logical Processes to force a cancel
back of the specified Transaction sent by this LP.
� ���� commitState����� ����
Performs fossil collection for changes earlier than the
������ LogicalProcess createActiveInstance������ �
���������
This static method is called by the Simulation
Controller in order to create a ProActive Active Object
instance of the LogicalProcess class.
����� �
���������
����������
��#���
��+����
endOfSimulationByTransaction�Transaction �����
Requests the Logical Process to end the simulation at
the specified Transaction.
���� forceGvtAt����� ���
Calling this method will force the Logical Process to
request a GVT calculation as soon as it passes the specified
simulation time.
SimulationReportSet getSimulationReport� ���
�� ������
'(����
�����
Returns the simulation report.
� ���� handleReceivedTransactions��
Goes through the list of received Transactions and anti-
Transactions and either chains the new Transaction in or
undoes the original Transaction for received anti-Transaction.
Appendix E: Documentation of selected classes – LogicalProcess
���� initialize�Partition ����������
����� �
���������
�������,���� �������!���
��,�����
SimulationController ����������'�������
Configuration ���%����������
Initializes the Logical Process.
� ���� needToCancelBackTransactions����� ������
Cancel back a certain number of Transactions.
���� receiveGvt����� ���� ���
�� ����!���
������
Called by SimulationController to send the calculated
GVT (global virtual time).
����� �
���������
������
�����
��#���
��+����
receiveTransaction�Transaction ����� ���
�� �����
Public method that is used by other Logical Processes
to send a Transaction or anti-Transaction to this Logical
Process.
LocalGvtParameter requestGvtParameter��
Returns the parameters of this Logical Process required
for the GVT calculation.
� ���� rollbackState����� ���
Rolls the state of the simulation engine back to the state
for the given time or the next later state.
���� runActivity������ �
���������
�#��$ ��$�
Implements the main activity loop of the Active Object
� ���� saveCurrentState��
Saves the current state of the simulation engine into the
local state list (unless an unconfirmed end of simulation has
Appendix E: Documentation of selected classes – LogicalProcess
been reached by this LP or the LP is in Cancelback mode, in
both cases the local simulation time would have the value of
Long.MAX_VALUE).
� ���� sendLazyCancellationAntiTransactions��
This method performs the main lazy-cancellation for
Transactions that have been sent and subsequently rolled
back.
���� startSimulation��
Tells the LP to start simulating the local partition of the
simulation model.
Methods inherited from class java.lang.Object
&����� %������
�'����� (��('��
� ����%$� ����%$���� ��������� ����� ����� ����
Constructor Detail
��$� ��"�� �
�� ��� ��$� ��"�� � ��
Main constructor (also used for serialization purpose)
Method Detail
������ ������ ��� �
�� ��� ������ -������!���
������ ������ ��� ������� �
���������
�(���� ����� �
���������
������
����� ��
������
����� �
���������
�����
Appendix E: Documentation of selected classes – LogicalProcess
This static method is called by the Simulation Controller in order to create a
ProActive Active Object instance of the LogicalProcess class. The Active Object
instance is created at the specified node.
Parameters:
- node at which the Active Object LogicalProcess instance will be created
Returns:
the Active Object LogicalProcess instance (i.e. a stub of the LogicalProcess
instance)
Throws:
����� �
���������
������
����� ��
�����
����� �
���������
�����
��������'�
�� ��� ���� ��������&��!�������� ����������
����� �
���������
�������,���� �������!���
��,�����
����������'�������
� ����������'�������
'��%��������� ���%����������
Initializes the Logical Process. This method is called by the Simulation
Controller. The initialization is done outside the constructor because it requires
the group of all Logical Process active objects to be passed in. The
LogicalProcess instance cannot be used before it is initialized.
Parameters:
��������� - the simulation model partition that this LP will process
�������!���
��,���� - a group containing all LPs (i.e. stubs to all LPs)
Appendix E: Documentation of selected classes – LogicalProcess
�
�� �����!
�� ��� ���� �
�� �����!������ �
���������
�#��$ ��$�
Implements the main activity loop of the Active Object
Specified by:
����������$ in interface ����� �
���������
���������
Parameters:
��$ - body of the Active Object
See Also:
��������
�����������$������ �
���������
�#��$�
����
��
������
�� ��� ���� ����
��
��������
Tells the LP to start simulating the local partition of the simulation model. The
LP needs to be initialized by calling initialize() before the simulation can be
started. This method is called by the Simulation Controller after it created and
initialized all LogicalProcess instances.
�� �������� � ����
�� ��� ����� �
���������
�����������
��#���
��+����
�� �������� � �����.���������� ����� ���
�� �����
Public method that is used by other Logical Processes to send a Transaction or
anti-Transaction to this Logical Process.
Parameters:
���� - Transaction received from other LP
���� - true if an anti-Transaction has been received
Appendix E: Documentation of selected classes – LogicalProcess
Returns:
Returns a Future object which allows the send to verify that is has been received
�� ��(� )���� � ����
�� ��� ���� �� ��'� (���� � �����.���������� �����
Called by other Logical Processes to force a cancel back of the specified
Transaction sent by this LP.
Parameters:
���� - Transaction that needs to be cancelled back
*��+���� ����+���� � ����
� ���� )��*���� ����*���� � ���� ��
Goes through the list of received Transactions and anti-Transactions and either
chains the new Transaction in or undoes the original Transaction for received
anti-Transaction. This method also handles received cancelbacks.
���+����� ��(� )���� � ����
� ���� ���*����� ��'� (���� � ���� ����� ������
Cancel back a certain number of Transactions. This method is called by the
Logical Process if it is in CancelBack mode and it will attempt to cancel back
the specified number of received Transactions from the end of the Transaction
chain, i.e. the Transactions that are furthest ahead in simulation time and that
where received from other LPs.
Parameters:
����� - number of Transactions to cancel back
Appendix E: Documentation of selected classes – LogicalProcess
��+��'!��� ���������������� � ����
� ���� ��*��&!��� ���������������� � ���� ��
This method performs the main lazy-cancellation for Transactions that have been
sent and subsequently rolled back. The method is called after the simulation time
has been updated (increased). It looks for any past sent and rolled back
Transactions that still exist in rolledBackSentHistoryList (i.e. that had not been
re-sent in identical form after the rollback) and sends out anti-Transactions for
these.
�����
� ���� �����
��������� ����
Performs fossil collection for changes earlier than the GVT. This will remove
any saved state information and any records in the sent and received history lists
that are not needed any more.
Parameters:
�������
- time until which all Transaction movements are guarantied, this
means there cannot be any rollback to a time before this time
����,� )
� ���� ����+� (
��������� ���
Rolls the state of the simulation engine back to the state for the given time or the
next later state. This also changes some of the information within the Logical
Process back to what it was at the time to which the simulation engine is rolled
back.
Parameters:
- time to which the simulation state will be rolled back
Appendix E: Documentation of selected classes – LogicalProcess
����
�����
� ���� ����
�����
������
Saves the current state of the simulation engine into the local state list (unless an
unconfirmed end of simulation has been reached by this LP or the LP is in
Cancelback mode, in both cases the local simulation time would have the value
of Long.MAX_VALUE).
��%
� ����"��������
�� ��� -����,��!����
� ��%
� ����"����������
Returns the parameters of this Logical Process required for the GVT calculation.
This method is called by the Simulation Controller when it performs a GVT
calculation. The parameters include the minimum time of all received and not
executed Transactions (i.e. either in receivedList or in the simulation engine
queue) and the minimum time of any Transaction in transit (i.e. sent but not yet
received).
Returns:
GVT parameter object
�� �������
�� ��� ���� �� ������������ ���� ���
�� ����!���
������
Called by SimulationController to send the calculated GVT (global virtual time).
This time guarantees all executed Transactions and state changes with a time
smaller than the GVT and as a result the Logical Process can perform fossil
collection by committing any changes that happened before the GVT.
Parameters:
��� - GVT (global virtual time)
Appendix E: Documentation of selected classes – LogicalProcess
-�� ������
�� ��� ���� ,�� ����������� ���
Calling this method will force the Logical Process to request a GVT calculation
as soon as it passes the specified simulation time. If the specified time has been
passed already then a GVT calculation is requested at the next simulation
scheduling cycle. This method is called by a Logical Process that reached an
unconfirmed End of Simulation in order to force other LPs to request a GVT
calculation when they pass the provisional End of Simulation time.
Parameters:
- simulation time after which a GVT calculation should be requested
��+�-
��
������(!���� � ����
�� ��� ����� �
���������
�����������
��#���
��+����
��*�,
��
������'!���� � �����.���������� �����
Requests the Logical Process to end the simulation at the specified Transaction.
This method is called by the Simulation Controller when a GVT calculation
confirms a provisional End of Simulation reached by one of the LPs. If this is
the LP that reported the unconfirmed End of Simulation by this Transaction then
it will have stopped simulating already. All other LPs will be rolled back to the
time of this Transaction and then they will simulate any Transactions for the
same time that in a sequential simulator would have been executed before the
specified Transaction. Afterwards the simulation is stopped and completion is
reported back to the SimulationController.
Parameters:
���� - Transaction that finished the simulation
Returns:
BooleanWrapper to indicate to the SimulationController that the LP completed
the simulation at the specified end
Appendix E: Documentation of selected classes – LogicalProcess
Returns the simulation report. This method is called by the Simulation
Controller after the simulation has finished in order to output the combined
simulation report from all LPs. The simulation report can optionally contain the
Transaction chain report section. This additional section is optional because it
can be very large. It is therefore only returned if needed, i.e. requested by the
user.
Parameters:
- include Transaction chain report section
Returns:
populated instance of SimulationReportSet
Appendix E: Documentation of selected classes – ParallelSimulationEngine
Class ParallelSimulationEngine

All Implemented Interfaces:
java.io.Serializable

public class ParallelSimulationEngine
extends SimulationEngine
implements java.io.Serializable
Author:
Gerald Krafft
See Also:
SimulationEngine, Serialized Form
Constructor Summary
ParallelSimulationEngine()
Constructor for serialization purpose
ParallelSimulationEngine(Partition partition)
Main constructor
Appendix E: Documentation of selected classes – ParallelSimulationEngine
Method Summary
chainIn(Transaction inXact)
Overrides chainIn() from class parallelJavaGpssSimulator.gpss.SimulationEngine.
deleteLaterTransactions(Transaction xact)
Removes all Transactions from the local chain that would be executed/moved
after the specified Transaction, i.e. all Transactions that have a move time later
than the specified Transaction or with the same move time but a lower priority.
deleteTransaction(Transaction xact)
Removes the specified Transaction from the Transaction chain.
getMinChainTime()
Returns the minimum time of all movable Transactions in the Transaction chain.
getNoOfTransactionsInChain()
Returns the number of Transactions currently in the chain.
getTotalTransactionMoves()
Returns the total number of Transaction moves performed since the start of the
simulation.
getTransactionChain()
Gives access to the Transaction chain for classes that inherit from
SimulationEngine and makes this visible within the current package.
getTransactionToSendList()
Returns the out list of Transactions that need to be sent to other LPs.
getUnconfirmedEndOfSimulationXact()
Returns the Transaction that caused an unconfirmed end of simulation within
this simulation engine.
moveAllTransactionsAtCurrentTime()
Moves all Transactions that are movable at the current simulation time.
moveTransaction(Transaction xact)
Overrides the inherited method in order to add some sensor information used
by the LP Control Component.
setCurrentSimulationTime(long currentSimulationTime)
Sets the simulation time to the specified value.
unconfirmedEndOfSimulationReached()
Returns whether an unconfirmed end of simulation has been reached by this
engine.
updateClock()
Overrides the inherited method.
Methods inherited from class parallelJavaGpssSimulator.gpss.SimulationEngine
chainOutNextMovableTransactionForCurrentTime, getBlockForBlockReference,
getBlockReferenceForLocalBlock, getBlockReport, getChainReport, getCurrentSimulationTime,
getFacilitySummaryReport, getNextTransactionId, getNoOfTransactionsAtBlock, getPartition,
getQueueSummaryReport, getStorageSummaryReport, initializeGenerateBlocks, isTransactionBlocked,
setBlockReferenceToLocalBlock
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait
Constructor Detail
ParallelSimulationEngine
public ParallelSimulationEngine()
Constructor for serialization purpose
ParallelSimulationEngine
public ParallelSimulationEngine(Partition partition)
Main constructor
Parameters:
partition - model partition that will be simulated by this simulation engine
Method Detail
chainIn
public void chainIn(Transaction inXact)
Overrides chainIn() from class
parallelJavaGpssSimulator.gpss.SimulationEngine. If the next block of the
Transaction to be chained in lies in a different partition then the Transaction is
stored in the out list so that it can later be sent to the LP of that partition,
otherwise the inherited chainIn() method is called.
Overrides:
chainIn in class SimulationEngine
Parameters:
inXact - Transaction to be added to the chain
See Also:
SimulationEngine.chainIn(Transaction)
moveAllTransactionsAtCurrentTime
public void moveAllTransactionsAtCurrentTime()
throws InvalidBlockReferenceException
Moves all Transactions that are movable at the current simulation time. This
method overrides the same method from class
parallelJavaGpssSimulator.gpss.SimulationEngine in order to implement end of
simulation detection for parallel simulation.
Overrides:
moveAllTransactionsAtCurrentTime in class SimulationEngine
Throws:
InvalidBlockReferenceException
See Also:
SimulationEngine.moveAllTransactionsAtCurrentTime()
moveTransaction
public void moveTransaction(Transaction xact)
Overrides the inherited method in order to add some sensor information used by
the LP Control Component. In addition it calls the inherited method to perform
the actual movement of the Transaction.
Overrides:
moveTransaction in class SimulationEngine
Parameters:
xact - Transaction to move
See Also:
SimulationEngine.moveTransaction(Transaction)
updateClock
public boolean updateClock()
Overrides the inherited method. The inherited method is only called and the
simulation time updated if no provisional End of Simulation has been reached.
Overrides:
updateClock in class SimulationEngine
Returns:
true if a movable Transaction was found, otherwise false
See Also:
SimulationEngine.updateClock()
deleteTransaction
public boolean deleteTransaction(Transaction xact)
Removes the specified Transaction from the Transaction chain.
Parameters:
xactId - Id of the Transaction
Returns:
true if the Transaction was found and removed, otherwise false
deleteLaterTransactions
public void deleteLaterTransactions(Transaction xact)
Removes all Transactions from the local chain that would be executed/moved
after the specified Transaction, i.e. all Transactions that have a move time later
than the specified Transaction or with the same move time but a lower priority.
Parameters:
xact - Transaction for which any later Transactions will be removed
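The removal rule ("later move time, or same move time but lower priority") can be sketched as a predicate; Xact here is a hypothetical stand-in for the simulator's Transaction class:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the rule used by deleteLaterTransactions: a
// chained Transaction is removed if it would be executed after the given
// one, i.e. it has a later move time, or the same move time but a lower
// priority. Xact is a stand-in, not the simulator's Transaction class.
public class LaterXactSketch {
    static class Xact {
        final long moveTime;
        final int priority; // higher value = higher priority = moves first
        Xact(long moveTime, int priority) {
            this.moveTime = moveTime;
            this.priority = priority;
        }
    }

    static boolean executedAfter(Xact candidate, Xact reference) {
        return candidate.moveTime > reference.moveTime
                || (candidate.moveTime == reference.moveTime
                    && candidate.priority < reference.priority);
    }

    static List<Xact> deleteLater(List<Xact> chain, Xact reference) {
        List<Xact> kept = new ArrayList<>();
        for (Xact x : chain) {
            if (!executedAfter(x, reference)) kept.add(x);
        }
        return kept;
    }

    public static void main(String[] args) {
        Xact ref = new Xact(100, 5);
        List<Xact> chain = new ArrayList<>();
        chain.add(new Xact(90, 0));   // earlier time: kept
        chain.add(new Xact(100, 7));  // same time, higher priority: kept
        chain.add(new Xact(100, 3));  // same time, lower priority: removed
        chain.add(new Xact(110, 9));  // later time: removed
        System.out.println(deleteLater(chain, ref).size()); // prints 2
    }
}
```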
getTransactionChain
java.util.ArrayList<Transaction> getTransactionChain()
Gives access to the Transaction chain for classes that inherit from
SimulationEngine and makes this visible within the current package.
Overrides:
getTransactionChain in class SimulationEngine
Returns:
Returns the Transaction chain.
getMinChainTime
public long getMinChainTime()
Returns the minimum time of all movable Transactions in the Transaction chain.
This is the current simulation time unless there are no Transactions in the chain
or an unconfirmed end of simulation has been reached in which case
Long.MAX_VALUE is returned. This method is used by the LP to determine the
local time that will be sent to the GVT calculation.
Returns:
minimum local chain time
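The way this value feeds into the GVT calculation can be sketched as taking the minimum over all LPs (illustrative only; a real GVT algorithm must also account for Transactions in transit between LPs, folded in here as a plain array):

```java
// Illustrative sketch: each LP reports its minimum local chain time
// (Long.MAX_VALUE if it has nothing movable), and the GVT is the minimum
// over all reported values and over all message times still in transit.
public class GvtSketch {
    static long localMinTime(long[] movableTimes) {
        long min = Long.MAX_VALUE;
        for (long t : movableTimes) min = Math.min(min, t);
        return min;
    }

    static long computeGvt(long[][] perLpMovableTimes, long[] inTransitTimes) {
        long gvt = Long.MAX_VALUE;
        for (long[] lp : perLpMovableTimes) gvt = Math.min(gvt, localMinTime(lp));
        for (long t : inTransitTimes) gvt = Math.min(gvt, t);
        return gvt;
    }

    public static void main(String[] args) {
        long[][] lps = { {120, 140}, {}, {95, 200} }; // middle LP has nothing movable
        long[] inTransit = { 110 };
        System.out.println(computeGvt(lps, inTransit)); // prints 95
    }
}
```

An LP that reports Long.MAX_VALUE simply does not constrain the GVT, which is why that value is returned for an empty chain or an unconfirmed end of simulation.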
getNoOfTransactionsInChain
public int getNoOfTransactionsInChain()
Returns the number of Transactions currently in the chain
Returns:
number of Transactions in the chain
setCurrentSimulationTime
public void setCurrentSimulationTime(long currentSimulationTime)
Description copied from class: SimulationEngine
Sets the simulation time to the specified value
Overrides:
setCurrentSimulationTime in class SimulationEngine
Parameters:
currentSimulationTime - new current simulation time
getUnconfirmedEndOfSimulationXact
public Transaction getUnconfirmedEndOfSimulationXact()
Returns the Transaction that caused an unconfirmed end of simulation within
this simulation engine.
Returns:
Returns Transaction that caused an unconfirmed end of simulation
unconfirmedEndOfSimulationReached
public boolean unconfirmedEndOfSimulationReached()
Returns whether an unconfirmed end of simulation has been reached by this
engine
Returns:
true if unconfirmed end of simulation has been reached
getTotalTransactionMoves
public long getTotalTransactionMoves()
Returns the total number of Transaction moves performed since the start of the
simulation. This information is required by the LPCC as a sensor value.
Returns:
total number of Transaction moves performed
getTransactionToSendList
public java.util.ArrayList<Transaction> getTransactionToSendList()
Returns the out list of Transactions that need to be sent to other LPs.
Returns:
outgoing list of Transactions
Appendix E: Documentation of selected classes – LPControlComponent
Class LPControlComponent
All Implemented Interfaces:
java.io.Serializable
public class LPControlComponent
implements java.io.Serializable
Author:
Gerald Krafft
See Also:
Serialized Form
Constructor Summary
LPControlComponent()
Constructor for serialization
LPControlComponent(int maxClusters)
Main constructor, initializes the cluster space with the maximum number of
clusters to be held.
Method Summary
getCurrentUncommittedMovesMeanLimit()
Returns the mean limit for uncommitted Transaction moves
(AvgUncommittedMoves) that is based on the indicators passed in the last
time processSensorValues() was called.
getCurrentUncommittedMovesUpperLimit()
Returns the current upper limit for uncommitted Transaction moves as
determined by the LPCC.
getLastSensorProcessingTime()
Returns the last time the sensor values were processed in milliseconds.
SensorSet getSensorSet()
Returns the sensor set with the current sensor values.
isUncommittedMovesValueWithinActuatorRange(long sampleValue)
Returns true if the value is within the actuator limit using the
UncommittedMoves standard deviation and a confidence level of 95%.
processSensorValues()
This method performs the main processing of the sensor values which
will result in a new actuator value.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait
Constructor Detail
LPControlComponent
public LPControlComponent()
Constructor for serialization
LPControlComponent
public LPControlComponent(int maxClusters)
Main constructor, initializes the cluster space with the maximum number of
clusters to be held.
Parameters:
maxClusters - maximum number of clusters to be held
Method Detail
getSensorSet
public SensorSet getSensorSet()
Returns the sensor set with the current sensor values.
Returns:
sensor set
getLastSensorProcessingTime
public long getLastSensorProcessingTime()
Returns the last time the sensor values were processed in milliseconds.
Returns:
the last time the sensor values were processed.
processSensorValues
public void processSensorValues()
This method performs the main processing of the sensor values which will result
in a new actuator value. It generates an indicator set for the sensor values,
determines the closest (most similar) past indicator set with a higher
performance indicator (CommittedMoveRate) using a state cluster space and
then adds the current indicator set to the state cluster space.
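The lookup described above can be sketched as a nearest-neighbour search over past indicator sets (illustrative; IndicatorSet and its fields are hypothetical, and the thesis's state cluster space is more elaborate than a flat list):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the cluster-space lookup: among past indicator
// sets whose performance indicator (committed move rate) is higher than
// the current one, pick the most similar (closest) one.
public class ClusterLookupSketch {
    static class IndicatorSet {
        final double[] indicators;
        final double committedMoveRate;
        IndicatorSet(double[] indicators, double committedMoveRate) {
            this.indicators = indicators;
            this.committedMoveRate = committedMoveRate;
        }
    }

    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Returns the closest past set with a strictly better performance,
    // or null if no past state performed better than the current one.
    static IndicatorSet closestBetter(List<IndicatorSet> past, IndicatorSet current) {
        IndicatorSet best = null;
        double bestDist = Double.MAX_VALUE;
        for (IndicatorSet p : past) {
            if (p.committedMoveRate <= current.committedMoveRate) continue;
            double d = distance(p.indicators, current.indicators);
            if (d < bestDist) {
                bestDist = d;
                best = p;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<IndicatorSet> past = new ArrayList<>();
        past.add(new IndicatorSet(new double[]{1.0, 2.0}, 50.0));
        past.add(new IndicatorSet(new double[]{1.1, 2.1}, 80.0));
        past.add(new IndicatorSet(new double[]{9.0, 9.0}, 90.0));
        IndicatorSet current = new IndicatorSet(new double[]{1.0, 2.0}, 60.0);
        System.out.println(closestBetter(past, current).committedMoveRate); // prints 80.0
    }
}
```

The actuator value for the new control cycle would then be taken from the returned indicator set.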
getCurrentUncommittedMovesMeanLimit
public long getCurrentUncommittedMovesMeanLimit()
Returns the mean limit for uncommitted Transaction moves
(AvgUncommittedMoves) that is based on the indicators passed in the last time
processSensorValues() was called.
Returns:
mean actuator limit
getCurrentUncommittedMovesUpperLimit
public long getCurrentUncommittedMovesUpperLimit()
Returns the current upper limit for uncommitted Transaction moves as
determined by the LPCC. This is the upper limit based on the average
uncommitted moves limit, the standard deviation and a confidence level of 95%.
Returns:
upper actuator value
isUncommittedMovesValueWithinActuatorRange
public boolean isUncommittedMovesValueWithinActuatorRange(long sampleValue)
Returns true if the value is within the actuator limit using the
UncommittedMoves standard deviation and a confidence level of 95%.
Parameters:
sampleValue - current UncommittedMoves sample value
Returns:
true if UncommittedMoves sample value is within actuator limits, otherwise
false
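The 95% check can be sketched with the usual two-sided normal quantile of 1.96 (an assumption; the exact bound and formula used by the LPCC may differ):

```java
// Illustrative sketch of an actuator-range check at a 95% confidence
// level: a sample is acceptable if it does not exceed the mean limit by
// more than 1.96 standard deviations. The 1.96 quantile is the usual
// two-sided 95% bound of the normal distribution; the exact formula in
// the simulator may differ.
public class ActuatorRangeSketch {
    static final double Z_95 = 1.96;

    static double upperLimit(double meanLimit, double stdDeviation) {
        return meanLimit + Z_95 * stdDeviation;
    }

    static boolean withinActuatorRange(double sample, double meanLimit, double stdDeviation) {
        return sample <= upperLimit(meanLimit, stdDeviation);
    }

    public static void main(String[] args) {
        double mean = 1000.0, sigma = 100.0;
        System.out.println(upperLimit(mean, sigma));                  // prints 1196.0
        System.out.println(withinActuatorRange(1150.0, mean, sigma)); // prints true
        System.out.println(withinActuatorRange(1300.0, mean, sigma)); // prints false
    }
}
```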
Appendix F: Validation output logs
This appendix contains the relevant output log files resulting from the validation runs
performed as part of the validation in section 6. Line numbers in brackets were added to
all lines of the output log files in order to make it possible to refer to a specific line. For
very long output log files non-relevant lines were removed and replaced with "...". But
the complete output log files can still be found on the attached CD.
Appendix F: Validation output logs – validation 1
Appendix F: Validation output logs – validation 2
Appendix F: Validation output logs – validation 3
Appendix F: Validation output logs – validation 4
Appendix F: Validation output logs – validation 5
Appendix F: Validation output logs – validation 6
0704.1828
Gauge invariance in gravity-like descriptions of massive gauge field theories
Dennis D. Dietrich
Institut für Theoretische Physik, Universität Heidelberg, Heidelberg, Germany
(Dated: January 13, 2019)
We discuss gravity-like formulations of massive Abelian and non-Abelian gauge field theories in
four space-time dimensions with particular emphasis on the issue of gauge invariance. Alternative
descriptions in terms of antisymmetric tensor fields and geometric variables, respectively, are anal-
ysed. In both approaches Stückelberg degrees of freedom factor out. We also demonstrate, in the
Abelian case, that the massless limit for the gauge propagator, which does not exist in the vector
potential formulation, is well-defined for the antisymmetric tensor fields.
PACS numbers: 11.15.-q, 11.15.Ex, 11.30.Qc, 12.15.-y
I. INTRODUCTION
Massive gauge bosons belong to the fundamental con-
cepts we use for picturing nature. Prominent exam-
ples are found in the physics of electroweak interac-
tions, superconductivity, and confinement. Even more
than in the massless case, gauge invariance is a severe
constraint for the construction of massive gauge field
theories. Usually additional fields beyond the original
gauge field have to be included in order to obtain gauge
invariant expressions.[26] Technically, this is linked to
the fact that the aforementioned gauge field—the Yang–
Mills connection—changes inhomogeneously under gauge
transformations and encodes also spurious degrees of
freedom arising from the construction principle of gauge
invariance. This complicates the extraction of physical
quantities. A variety of approaches has been developed
in order to deal with this situation. Wilson loops [1]
represent gauge invariant but non-local variables.[27] Al-
ternatively, there exist decomposition techniques like the
one due to Cho, Faddeev, and Niemi [2]. Here we first
pursue a reformulation of massive Yang–Mills theories in
terms of antisymmetric gauge algebra valued tensor fields
Baµν (Sect. II) and subsequently continue with a represen-
tation in terms of geometric variables (Sect. III).
In Sect. II A we review the massless case. It is re-
lated to gravity [3] formulated as BF gravity [4] and
thus linked to quantum gravity. The antisymmetric ten-
sor field can be seen as dual field strength and transforms
homogeneously under gauge transformations. This fact
already makes it simpler to keep track of gauge invari-
ance. In Sect. II B the generalisation to the massive case
is presented. In the Baµν field representation the (non-
Abelian) Stückelberg fields, which are commonly present
in massive gauge field theories and needed there in or-
der to keep track of gauge invariance, factor out com-
pletely. In other words, no scalar fields are needed for a
gauge invariant formulation of massive gauge field theo-
ries in terms of antisymmetric tensor fields. The case of
a constant mass is linked to sigma models (gauged and
ungauged) in different respects. Sect. II B 1 contains the
generalisation to a position dependent mass, which cor-
responds to introducing the Higgs degree of freedom. In
Sect. II B 2 non-diagonal mass terms are admitted. This
is necessary to accommodate the Weinberg–Salam model,
which is studied as a particular case.
Sect. III presents a description of the massive case,
with constant and varying mass, in terms of geometric
variables. In this step the remaining gauge degrees of
freedom are eliminated. The emergent description is in
terms of local colour singlet variables. Finally, Sect. III A
is concerned with the geometric representation of the
Weinberg–Salam model.
The Appendix treats the Abelian case. It allows one to
better interpret and understand several of the findings in
the non-Abelian settings. Of course, in the Abelian case
already the Bµν field is gauge invariant. Among other
things, we demonstrate that the m → 0 limit of the gauge
propagator for the Bµν fields is well-defined as opposed
to the ill-defined limit for the Aµ field propagator.
Sect. IV summarises the paper.
II. ANTISYMMETRIC TENSOR FIELDS
A. Massless
Before we investigate massive gauge field theories let
us recall some details about the massless case. The parti-
tion function of a massless non-Abelian gauge field theory
without fermions is given by
Z = ∫[dA] exp{i ∫ d4x L}, (1)
with the Lagrangian density
L = L0 := −(1/4g2) FaµνFaµν, (2)
and the field tensor
Faµν := ∂µAaν − ∂νAaµ + fabcAbµAcν. (3)
Aaµ stands for the gauge field, fabc for the antisymmetric
structure constant, and g for the coupling constant. [28]
Variation of the classical action with respect to the gauge
field gives the classical Yang–Mills equations
Dabµ(A)Fbµν = 0, (4)
where the covariant derivative is defined as Dabµ(A) :=
δab∂µ + facbAcµ. The partition function in the first-order
formalism can be obtained after multiplying Eq. (1) with
a prefactor in form of a Gaussian integral over an anti-
symmetric tensor field Baµν ,
Z ∼= ∫[dA][dB] exp{i ∫ d4x [L0 − (g2/4) BaµνBaµν]}. (5)
(”∼=” indicates that in the last step the normalisation of
the partition function has been changed.) Subsequently,
the field Baµν is shifted by (1/g2)F̃aµν, where the dual field
tensor is defined as F̃aµν := (1/2)ǫµνκλFaκλ:

Z ∼= ∫[dA][dB] exp{i ∫ d4x [−(1/2) B̃aµνFaµν − (g2/4) BaµνBaµν]}. (6)
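The shift can be checked by completing the square (a sketch using the conventions above, together with F̃aµνF̃aµν = −FaµνFaµν in four-dimensional Minkowski space):

```latex
-\frac{g^2}{4}\left(B^a_{\mu\nu}+\frac{1}{g^2}\tilde F^a_{\mu\nu}\right)
 \left(B^{a\mu\nu}+\frac{1}{g^2}\tilde F^{a\mu\nu}\right)
= -\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu}
  -\frac{1}{2}B^a_{\mu\nu}\tilde F^{a\mu\nu}
  -\frac{1}{4g^2}\tilde F^a_{\mu\nu}\tilde F^{a\mu\nu} .
```

The last term equals +(1/4g2)FaµνFaµν and cancels L0, leaving the first-order form of Eq. (6); note also the identity BaµνF̃aµν = B̃aµνFaµν.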
In this form the partition function is formulated in terms
of the Yang–Mills connection Aaµ and the antisymmetric
tensor field Baµν as independent variables. Variation of
the classical action with respect to these variables leads
to the classical equations of motion
g2Baµν = −F̃aµν and Dabµ(A)B̃bµν = 0, (7)
where B̃aµν := (1/2)ǫκλµνBaκλ. By eliminating Baµν the
original Yang–Mills equation (4) is reproduced. Every term in
the classical action in the partition function (6) contains
at most one derivative as opposed to two in Eq. (1). This
explains the name ”first-order” formalism. The classical
action in Eq. (6) is invariant under simultaneous gauge
transformations of the independent variables according
AaµT a =: Aµ → AUµ := U[Aµ − iU†(∂µU)]U†, (8)
BaµνT a =: Bµν → BUµν := UBµνU†, (9)
or infinitesimally,
δAaµ = ∂µθa + fabcAbµθc, δBaµν = fabcBbµνθc. (10)
The T a stand for the generators of the gauge group. From
the Bianchi identity Dabµ(A)F̃bµν = 0 follows a second
symmetry of the BF term alone: Infinitesimally, for un-
changed Aaµ,
δBaµν = ∂µϑaν − ∂νϑaµ + fabc(Abµϑcν − Abνϑcµ). (11)
A particular combination of the transformations (10) and
(11), θa = nµAaµ and ϑaν = nµBaµν, corresponds to the
transformation of a tensor under an infinitesimal local
coordinate transformation xµ → xµ − nµ(x),
δBµν = Bλν∂µnλ + Bµλ∂νnλ + nλ∂λBµν, (12)
that is a diffeomorphism. Hence, the BF term is dif-
feomorphism invariant, which explains why this theory
is also known as BF gravity. The BB term is not dif-
feomorphism invariant and, hence, imposes a constraint.
The combination of the two terms amounts to an action
of Plebanski type which are studied in the context of
quantum gravity [3, 4].
We now would like to eliminate the Yang–Mills connec-
tion by integrating it out. For fixed Baµν the integrand
of the path integral is not gauge invariant with respect
to gauge transformations of the gauge field Aaµ alone; the
field tensor F aµν transforms homogeneously and the corre-
sponding gauge transformations are not absorbed if Baµν
is held fixed. Therefore, the integral over the gauge group
is in general not cyclic which otherwise would render the
path integral ill-defined. The term in the exponent linear
in the gauge field Aaµ, Aaν∂µB̃aµν, is obtained by carrying
out a partial integration in which surface terms are
ignored. Afterwards it is absorbed by shifting Aaµ by
(B−1)abµν(∂λB̃bλν), where Bbcµν := fabcB̃aµν. In general its
inverse (B−1)abµν, defined by (B−1)abµνBbcνλ = δacgµλ,
exists in three or more space-time dimensions [5]. We
are left with a Gaussian integral in Aaµ giving the inverse
square-root of the determinant of Babµν ,
∫[da] exp{−(i/2) ∫ d4x aaµBabµνabν} ∝ Det−1/2B. (13)
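The absorption of the linear term and the resulting determinant follow the standard Gaussian pattern (schematically, with indices suppressed and the overall sign of the source term fixed by the contractions in the surrounding equations):

```latex
\int [da]\, \exp\left\{ i\int \mathrm{d}^4x
  \left( -\tfrac{1}{2}\, a\,\mathcal{B}\, a + J\, a \right)\right\}
\;\propto\; \mathrm{Det}^{-1/2}\mathcal{B}\;
\exp\left\{ \tfrac{i}{2}\int \mathrm{d}^4x \; J\, \mathcal{B}^{-1} J \right\},
\qquad J^{a\nu} := \partial_\mu \tilde B^{a\mu\nu},
```

obtained by shifting a → a + B−1J, i.e. by evaluating the integrand at the saddle point of the quadratic form.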
In the last expression Babµν appears in the place of an in-
verse gluon propagator, that is sandwiched between two
gauge fields. This analogy carries even further: Interpret-
ing ∂µB̃aµν as a current, (B−1)abµν(∂λB̃bλν), the current
together with the ”propagator” (B−1)abµν, is exactly the
abovementioned term to be absorbed in the gauge field
Aaµ. Finally, we obtain,
Z ∼= ∫[dB] Det−1/2B exp{i ∫ d4x [−(g2/4) BaµνBaµν − (1/2)(∂κB̃aκµ)(B−1)abµν(∂λB̃bλν)]}. (14)
This result is known from [5, 6, 7]. The exponent in
the previous expression corresponds to the value of the
[dA] integral at the saddle-point value Ăaµ of the gauge
field. It obeys the classical field equation (7). Using
Ăaµ(B) = (B−1)abµν(∂λB̃bλν) the second term in the above
exponent can be rewritten as −(i/2) ∫ d4x B̃aµνFaµν[Ă(B)],
which involves an integration by parts and makes its
gauge invariance manifest. The fluctuations aaµ around
the saddle point Ăaµ, contributing to the partition func-
tion (6), are Gaussian because the action in the first-order
formalism is only of second order in the gauge field Aaµ.
They give rise to the determinant (13). What happens
if a zero of the determinant is encountered can be un-
derstood by looking at the Abelian case discussed in Ap-
pendix A. There the BF term does not fix a gauge for the
integration over the gauge field Aµ because the Abelian
field tensor Fµν is gauge invariant. If it is performed
nevertheless one encounters a functional δ distribution
which enforces the vanishing of the current ∂µB̃µν. In
this sense the zeros of the determinant in the non-Abelian
case arise if B̃aµν is such that the BF term does not totally
fix a gauge for the [dA] integration, but leaves behind a
residual gauge invariance. It in turn corresponds to van-
ishing components of the current ∂µB̃aµν. (Technically,
there then is at least one flat direction in the otherwise
Gaussian integrand. The flat directions are along those
eigenvectors of B possessing zero eigenvalues.)
When incorporated with the exponent, which requires
a regularisation [8], the determinant contributes a term
proportional to (1/2) ln detB to the action. This term to-
gether with the BB term constitutes the effective poten-
tial, which is obtained from the exponent in the partition
function after dropping all terms containing derivatives of
fields. The effective potential becomes singular for field
configurations for which detB = 0. It is gauge invari-
ant because all contributing addends are gauge invariant
separately.
The classical equations of motion obtained by varying
the action in Eq. (14) with respect to the dual antisym-
metric tensor field B̃aµν are given by
g2B̃aµν = (gρνgσµ − gρµgσν)∂ρ(B−1)abσκ(∂λB̃bλκ) − (∂ρB̃dρκ)(B−1)dbκµfabc(B−1)ceνλ(∂σB̃eσλ), (15)
which coincides with the first of Eqs. (7) with the
field tensor evaluated at the saddle point of the action,
F aµν [Ă(B)]. Taking into account additionally the effect
due to fluctuations of Aaµ contributes a term proportional
to (δDetB/δB̃aµν) det−1B to the previous equation.
B. Massive
In the massive case the prototypical Lagrangian is of
the form L = L0 + Lm, where Lm := (m2/2)AaµAaµ. (Due to
our conventions the physical mass is given by mphys :=
mg.) This contribution to the Lagrangian is of course
not gauge invariant. Putting it, regardlessly, into the
partition function, gives
Z = ∫[dA] exp{i ∫ d4x [L0 + (m2/2)AaµAaµ]}, (16)
which can be interpreted as the unitary gauge represen-
tation of an extended theory. In order to see this let us
split the functional integral over Aaµ into an integral over
the gauge group [dU ] and gauge inequivalent field config-
urations [dA]′. Usually this separation is carried out by
fixing a gauge according to
∫[dA]′ := ∫[dA] δ[fa(A) − Ca] ∆f(A). (17)
fa(A) = Ca is the gauge condition and ∆f (A) stands
for the Faddeev–Popov determinant, defined through
∆f(A) ∫[dU] δ[fa(AU) − Ca] = 1 [29]. Introducing this
reparametrisation into the partition function (16) yields,
Z ∼= ∫[dA]′[dU] exp(i ∫ d4x {−(1/4g2) FaµνFaµν +
(m2/2)[Aµ − iU†(∂µU)]a[Aµ − iU†(∂µU)]a}). (18)
L0 is gauge invariant in any case and remains thus un-
affected. In the mass term the gauge transformations
appear explicitly [9]. We now replace all of these gauge
transformations with an auxiliary (gauge group valued)
scalar field Φ, U † → Φ, obeying the constraint
Φ†Φ !≡ 1. (19)
The field Φ can be expressed as Φ =: e−iθ, where
θ =: θaT a is the gauge algebra valued non-Abelian gen-
eralisation of the Stückelberg field [10]. For a massive
gauge theory they are a manifestation of the longitudi-
nal degrees of freedom of the gauge bosons. In the con-
text of symmetry breaking they arise as Goldstone modes
(”pions”). In the context of the Thirring model these ob-
servations have been made in [11]. There it was noted
as well that the θ is also the field used in the canonical
Hamiltonian Batalin–Fradkin–Vilkovisky formalism [12].
We can extract the manifestly gauge invariant classical
Lagrangian
Lcl := −(1/4g2) FaµνFaµν + m2 tr[(DµΦ)†(DµΦ)], (20)
where the scalars have been rearranged making use of the
product rule of differentiation and the cyclic property of
the trace and where DµΦ := ∂µΦ − iAµΦ. Eq. (20) resem-
bles the Lagrangian density of a non-linear gauged sigma
model. In the Abelian case the fields θ decouple from
the dynamics. For non-Abelian gauge groups they do
not and one would have to deal with the non-polynomial
coupling to them.
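In the Abelian case this decoupling can be made explicit (a sketch, with the trace normalisation absorbed into m2): writing Φ = e−iθ,

```latex
D_\mu\Phi = (\partial_\mu - iA_\mu)\, e^{-i\theta}
          = -i\,(\partial_\mu\theta + A_\mu)\, e^{-i\theta},
\qquad
(D_\mu\Phi)^\dagger (D^\mu\Phi)
          = (\partial_\mu\theta + A_\mu)(\partial^\mu\theta + A^\mu),
```

so the shift Aµ → Aµ − ∂µθ, which leaves Fµν unchanged, removes θ from the action altogether.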
In the following we show that these spurious degrees of
freedom can be absorbed when making the transition to a
formulation based on the antisymmetric tensor field Baµν .
Introducing the antisymmetric tensor field into the corre-
sponding partition function, like in the previous section,
results in,
Z ∼= ∫[dA][dΦ][dB] exp(i ∫ d4x {−(g2/4) BaµνBaµν − (1/2) B̃aµνFaµν +
(m2/2)[Aµ − iΦ(∂µΦ†)]a[Aµ − iΦ(∂µΦ†)]a}). (21)
Removing the gauge scalars $\Phi$ from the mass term by a gauge transformation of the gauge field $A^a_\mu$ makes them explicit in the BF term,
$$Z = \int[dA][d\Phi][dB]\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu} - \mathrm{tr}(\Phi\tilde F_{\mu\nu}\Phi^\dagger B^{\mu\nu}) + \frac{m^2}{2}A^a_\mu A^{a\mu}\Big]\Big\}. \tag{22}$$
In the next step we would like to integrate over the Yang–
Mills connection Aaµ. Already in the previous expression,
however, we can perceive that the final result will only
depend on the combination of fields $\Phi^\dagger B_{\mu\nu}\Phi$. [The $\Phi$ field can also be made explicit in the $BB$ term in form of the constraint (19).] Therefore, the functional integral over $\Phi$ only covers multiple times the range which
is already covered by the [dB] integration. Hence the
degrees of freedom of the field Φ have become obsolete
in this formulation and the [dΦ] integral can be factored
out. Thus, we could have performed the unitary gauge
calculation right from the start. In either case, the final
result reads,
$$Z = \int[dB]\,\mathrm{Det}^{-\frac12}M\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu} - \frac12(\partial_\kappa\tilde B^{a\kappa\mu})(M^{-1})^{ab}_{\mu\nu}(\partial_\lambda\tilde B^{b\lambda\nu})\Big]\Big\}, \tag{23}$$
where $M^{ab}_{\mu\nu} := B^{ab}_{\mu\nu} - m^2\delta^{ab}g_{\mu\nu}$, which coincides with [13]. $M^{ab}_{\mu\nu}$ and hence $(M^{-1})^{ab}_{\mu\nu}$ transform homogeneously
under the adjoint representation. In Eq. (14) the cen-
tral matrix (B−1)abµν in the analogous term transformed
in exactly the same way. There this behaviour ensured
the gauge invariance of this term’s contribution to the
classical action. Consequently, the classical action in the
massive case has the same invariance properties. In par-
ticular, the aforementioned gauge invariant classical ac-
tion describes a massive gauge theory without having to
resort to additional scalar fields. For $\det B \neq 0$, the limit $m \to 0$ is smooth. For $\det B = 0$ the conserved current
components alluded to above would have to be separated
appropriately in order to recover the corresponding δ dis-
tributions present in these situations in the massless case.
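The structure of Eq. (23) can be traced to the standard Gaussian integral over the gauge field. Schematically (our sketch; signs and normalisations depend on the conventions, and $j^{a\mu}$ abbreviates the current built from $\partial_\lambda\tilde B^{a\lambda\mu}$), shifting $A \to A + M^{-1}j$ completes the square:
$$\int[dA]\,\exp\Big\{i\int d^4x\Big[-\tfrac12A^a_\mu M^{ab\,\mu\nu}A^b_\nu + A^a_\mu j^{a\mu}\Big]\Big\} \propto \mathrm{Det}^{-\frac12}M\;\exp\Big\{\tfrac{i}{2}\int d^4x\; j^{a\mu}(M^{-1})^{ab}_{\mu\nu}j^{b\nu}\Big\}.$$
This is the origin of both the determinant prefactor and the current–current term in Eq. (23).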
Again the effective action is dominated by the term proportional to $\ln\det M$. The contribution from the mass to $M$ shifts the eigenvalues from the values obtained for
B. Hence the singular contributions are typically ob-
tained for eigenvalues of B of the order of m2. The ef-
fective potential is again gauge invariant, for the same
reason as in the massless case.
The classical equations of motion obtained by variation
of the action in Eq. (21) are given by,
$$g^2B^a_{\mu\nu} = -\tilde F^a_{\mu\nu},$$
$$D^{ab}_\mu(A)\tilde B^{b\mu\nu} = -m^2[A^\nu - i\Phi(\partial^\nu\Phi^\dagger)]^a,$$
$$0 = \frac{\delta}{\delta\Phi}\int d^4x\,\big\{[A_\mu - i\Phi(\partial_\mu\Phi^\dagger)]^a\big\}^2. \tag{24}$$
In these equations a unique solution can be chosen, that
is a gauge be fixed, by selecting the scalar field Φ. Φ ≡ 1
gives the unitary gauge, in which the last of the above
equations drops out. The general non-Abelian case is
difficult to handle already on the classical level, which
is one of the main motivations to look for an alternative
formulation. In the non-Abelian case, the equation of
motion obtained from Eq. (23) resembles strongly the
massless case,
$$g^2\tilde B^{a\mu\nu} = (g^{\sigma\mu}g^{\rho\nu} - g^{\rho\mu}g^{\sigma\nu})\,\partial_\rho(M^{-1})^{ab}_{\sigma\kappa}(\partial_\lambda\tilde B^{b\lambda\kappa}) - (\partial_\rho\tilde B^{d\rho\kappa})(M^{-1})^{db\,\mu}_{\kappa}\,f^{abc}\,(M^{-1})^{ce\,\nu}_{\lambda}(\partial_\sigma\tilde B^{e\sigma\lambda}), \tag{25}$$
insofar as all occurrences of (B−1)abµν have been replaced
by $(M^{-1})^{ab}_{\mu\nu}$. Incorporation of the effect of the Gaussian fluctuations of the gauge field $A^a_\mu$ would give rise to a contribution proportional to $\delta\,(\det^{-1}M)/\delta\tilde B^a_{\mu\nu}$ in the previous equation.
Before we go over to more general cases of massive
non-Abelian gauge field theories, let us have a look at
the weak coupling limit: There the BB term in Eq. (21)
is neglected. Subsequently, integrating out the Baµν field
enforces F aµν ≡ 0. [This condition also arises from the
classical equation of motion (24) for g=0.] Hence, for
vanishing coupling exclusively pure gauge configurations
of the gauge field Aaµ contribute. They can be combined
with the Φ fields and one is left with a non-linear reali-
sation of a partition function,
$$Z\big|_{g=0} \cong \int[d\Phi]\,\exp\Big\{im^2\int d^4x\;\mathrm{tr}[(\partial_\mu\Phi^\dagger)(\partial^\mu\Phi)]\Big\}, \tag{26}$$
of a free massless scalar [13]. Setting g = 0 interchanges
with integrating out the Baµν field from the partition func-
tion (21). Thus, the partition function (23) with g = 0 is
equivalent to (26). That a scalar degree of freedom can
be described by means of an antisymmetric tensor field
has been noticed in [14].
1. Position-dependent mass and the Higgs
One possible generalisation of the above set-up is ob-
tained by softening the constraint (19). This can be seen
as allowing for a position dependent mass. The new
degree of freedom ultimately corresponds to the Higgs.
When introducing the mass m as new degree of freedom
(as ”mass scalar”) we can restrict its variation by in-
troducing a potential term V (m2), which remains to be
specified, and a kinetic term K(m), which we choose in
its canonical form $K(m) = \frac12(\partial_\mu m)(\partial^\mu m)$. It gives a
penalty for fast variations of m between neighbouring
space-time points. The fixed mass model is obtained in
the limit of an infinitely sharp potential with its mini-
mum located at a non-zero value for the mass. Putting
together the partition function in unitary gauge leads to,
$$Z = \int[dA][dm]\,\exp\Big\{i\int d^4x\Big[-\frac{1}{4g^2}F^a_{\mu\nu}F^{a\mu\nu} + \frac{m^2}{2N}A^a_\mu A^{a\mu} + K(m) + V(m^2)\Big]\Big\}, \tag{27}$$
where we have introduced the normalisation constant
N := dim R, with R standing for the representation of
the scalars. This factor allows us to keep the canonical
normalisation of the mass scalar m. We can now repeat
the same steps as in the previous section in order to iden-
tify the classical Lagrangian,
$$\mathcal{L}_{cl} := -\frac{1}{4g^2}F^a_{\mu\nu}F^{a\mu\nu} + N^{-1}\,\mathrm{tr}[(D_\mu\phi)^\dagger(D^\mu\phi)] + V(|\phi|^2),$$
where now $\phi := m\Phi$. In order to reformulate the parti-
tion function in terms of the antisymmetric tensor field
we can once more repeat the steps in the previous sec-
tion. Again the spurious degrees of freedom represented
by the field Φ can be factored out. Finally, this gives
[15],
$$Z = \int[dB][dm]\,\mathrm{Det}^{-\frac12}M\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu} - \frac12(\partial_\kappa\tilde B^{a\kappa\mu})(M^{-1})^{ab}_{\mu\nu}(\partial_\lambda\tilde B^{b\lambda\nu}) + K(m) + V(m^2)\Big]\Big\}, \tag{28}$$
where $M^{ab}_{\mu\nu} = B^{ab}_{\mu\nu} - m^2N^{-1}\delta^{ab}g_{\mu\nu}$ depends on the space-
time dependent mass m. The determinant can as usual
be included with the exponent in the form of a term proportional to $\ln\det M$, the pole of which will dominate the effective potential. As just mentioned, however, $M$ is also a function of $m$. Hence, in order to find the minimum,
the effective potential must also be varied with respect
to the mass m.
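The role of the normalisation constant $N$ in $\mathcal{L}_{cl}$ can be checked explicitly. The following is a sketch under the conventions assumed here ($\phi = m\Phi$ with $\Phi^\dagger\Phi = 1$, so that $\mathrm{tr}(\Phi^\dagger\Phi) = N$):
$$N^{-1}\,\mathrm{tr}[(D_\mu\phi)^\dagger(D^\mu\phi)] = N^{-1}\Big[(\partial_\mu m)(\partial^\mu m)\,\mathrm{tr}(\Phi^\dagger\Phi) + m^2\,\mathrm{tr}[(D_\mu\Phi)^\dagger(D^\mu\Phi)]\Big],$$
since the cross terms are proportional to $\mathrm{tr}\,\partial_\mu(\Phi^\dagger\Phi) = \partial_\mu N = 0$. The mass scalar therefore retains a canonically normalised kinetic term (up to the factor $\frac12$ convention adopted in $K(m)$), and the fixed-mass Lagrangian is recovered for constant $m$.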
Carrying the representation in terms of antisymmetric
tensor fields another step further, the partition function
containing the kinetic term K(m) of the mass scalar can
be expressed as an Abelian version of Eq. (26),
$$\int[db][da]\,\exp\Big\{i\int d^4x\Big[-\frac12\tilde b^{\mu\nu}f_{\mu\nu} + \frac12 a_\mu a^\mu\Big]\Big\} = \int[dm]\,\exp\Big\{i\int d^4x\,\frac12(\partial_\mu m)(\partial^\mu m)\Big\}, \tag{29}$$
where here the mass scalar m is identified with the
Abelian gauge parameter. Combining the last equation
with the partition function (28) all occurrences of the
mass scalar $m$ can be replaced by the phase integral $\int dx^\mu a_\mu$. The $bf$ term enforces the curvature $f_{\mu\nu}$ to vanish, which constrains $a_\mu$ to pure gauges $\partial_\mu m$, and the aforementioned integral becomes path-independent.
2. Non-diagonal mass term and the Weinberg–Salam model
The mass terms investigated so far had in common that
all the bosonic degrees of freedom they described possessed the same mass. A more general mass term would be given by $L_m := \frac{m^2}{2}\,m^{ab}A^a_\mu A^{b\mu}$. Another similar approach is based on the Lagrangian $L_m := m^2\,\mathrm{tr}\{A_\mu A^\mu\Psi\}$, where $\Psi$ is group valued and constant. We shall begin our
discussion with this second variant and limit ourselves to
a Ψ with real entries and trΨ = 1, which, in fact, does
not impose additional constraints. Using this expression
in the partition function (27) and making explicit the
gauge scalars yields,
$$Z = \int[dA][dm]\,\exp\Big\{i\int d^4x\Big[-\frac{1}{4g^2}F^a_{\mu\nu}F^{a\mu\nu} + \mathrm{tr}\{(D_\mu\phi)^\dagger(D^\mu\phi)\Psi\} + V(m^2)\Big]\Big\}. \tag{30}$$
Expressed in terms of the antisymmetric tensor field $B^a_{\mu\nu}$, the corresponding partition function coincides with Eq. (28) but with $M^{ab}_{\mu\nu}$ replaced by $\bar M^{ab}_{\mu\nu} := B^{ab}_{\mu\nu} - m^2\,\mathrm{tr}\{T^aT^b\Psi\}g_{\mu\nu}$.
Let us now consider directly the SU(2)× U(1)
Weinberg–Salam model. Its partition function can be
expressed as,
$$Z = \int[dA][d\psi]\,\exp\Big\{i\int d^4x\Big[-\frac{1}{4g^2}F^a_{\mu\nu}F^{a\mu\nu} + \psi^\dagger(\overleftarrow{\partial}_\mu + iA_\mu)(\overrightarrow{\partial}^\mu - iA^\mu)\psi + V(|\psi|^2)\Big]\Big\}, \tag{31}$$
where $\psi$ is a complex scalar doublet, $A_\mu := A^a_\mu T^a$, with $a \in \{0;\dots;3\}$; $T^a$ here stands for the generators of SU(2) in fundamental representation, and, accordingly, $T^0$ for $\frac{g_0}{2g}$ times the $2\times2$ unit matrix, with the U(1) coupling constant $g_0$. The partition function can be reparametrised with $\psi = m\Phi\hat\psi$, where $m = \sqrt{|\psi|^2}$, $\Phi$ is a group valued scalar field as above, and $\hat\psi$ is a constant doublet with $|\hat\psi|^2 = 1$. The partition function then becomes,
$$Z = \int[dA][d\Phi][dm]\,\exp\Big(i\int d^4x\Big\{-\frac{1}{4g^2}F^a_{\mu\nu}F^{a\mu\nu} + m^2\,\mathrm{tr}[\Phi^\dagger(\overleftarrow{\partial}_\mu + iA_\mu)(\overrightarrow{\partial}^\mu - iA^\mu)\Phi\Psi] + \frac12(\partial_\mu m)(\partial^\mu m) + V(m^2)\Big\}\Big), \tag{32}$$
where
$$\Psi = \hat\psi\otimes\hat\psi^\dagger. \tag{33}$$
Making the transition to the first order formalism leads to,
$$Z = \int[dA][dB][d\Phi][dm]\,\exp\Big(i\int d^4x\Big\{-\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu} - \frac12F^a_{\mu\nu}\tilde B^{a\mu\nu} + K(m) + V(m^2) + m^2\,\mathrm{tr}[\Phi^\dagger(\overleftarrow{\partial}_\mu + iA_\mu)(\overrightarrow{\partial}^\mu - iA^\mu)\Phi\Psi]\Big\}\Big). \tag{34}$$
As in the previous case, a gauge transformation of the
gauge field Aaµ can remove the gauge scalar Φ from the
mass term (despite the matrix $\Psi$). Thereafter $\Phi$ only appears in the combination $\Phi^\dagger B_{\mu\nu}\Phi$ and the integral $[d\Phi]$ merely leads to repetitions of the $[dB]$ integral. [The U(1)
part drops out completely right away.] Therefore the [dΦ]
integration can be factored out,
$$Z = \int[dA][dB][dm]\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu} - \frac12F^a_{\mu\nu}\tilde B^{a\mu\nu} + m^2\,\mathrm{tr}(A_\mu A^\mu\Psi) + K(m) + V(m^2)\Big]\Big\}. \tag{35}$$
The subsequent integration over the gauge fields $A^a_\mu$ leads to,
$$Z = \int[dB][dm]\,\mathrm{Det}^{-\frac12}\bar M\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu} - \frac12(\partial_\kappa\tilde B^{a\kappa\mu})(\bar M^{-1})^{ab}_{\mu\nu}(\partial_\lambda\tilde B^{b\lambda\nu}) + K(m) + V(m^2)\Big]\Big\}, \tag{36}$$
where $\bar M^{ab}_{\mu\nu} := B^{ab}_{\mu\nu} - m^2\,\mathrm{tr}(T^aT^b\Psi)g_{\mu\nu}$.
From hereon we continue our discussion based on the mass matrix
$$m^{ab} := \tfrac12\,\mathrm{tr}(\{T^a,T^b\}\Psi), \tag{37}$$
which had already been mentioned at the beginning of Sect. II B 2. $m^{ab}$ is real and has been chosen to be symmetric. (Antisymmetric parts are projected out by the contraction with the symmetric $A^a_\mu A^{b\mu}$.) Thus it possesses a complete orthonormal set of eigenvectors $\mu^a_j$ with the associated real eigenvalues $m_j$, $m^{ab} = \sum_j m_j\mu^a_j\mu^b_j$. With the help of these normalised eigenvectors one can construct projectors $\pi^{ab}_j := \mu^a_j\mu^b_j$ (no summation over $j$) and decompose the mass matrix, $m^{ab} = m_j\pi^{ab}_j$. The projectors are complete, $1^{ab} = \sum_j\pi^{ab}_j$, idempotent, $\pi^{ab}_j\pi^{bc}_j = \pi^{ac}_j$ (no summation over $j$), and satisfy $\pi^{ab}_j\pi^{bc}_k = 0$ for $j\neq k$. The matrix $B^{ab}_{\mu\nu}$, the antisymmetric tensor field $B^a_{\mu\nu}$, and the gauge field $A^a_\mu$ can also be decomposed with the help of the eigenvectors: $B^{ab}_{\mu\nu} = \mu^a_j\,b^{jk}_{\mu\nu}\,\mu^b_k$, where $b^{jk}_{\mu\nu} := \mu^a_jB^{ab}_{\mu\nu}\mu^b_k$; $B^a_{\mu\nu} = b^j_{\mu\nu}\mu^a_j$, where $b^j_{\mu\nu} := B^a_{\mu\nu}\mu^a_j$; and $A^a_\mu = a^j_\mu\mu^a_j$, where $a^j_\mu := A^a_\mu\mu^a_j$.
Using this decomposition in the partition function (36)
leads to,
$$Z = \int[db][dm]\,\mathrm{Det}^{-\frac12}m\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}b^j_{\mu\nu}b^{j\mu\nu} - \frac12(\partial_\kappa\tilde b^{j\kappa\mu})(m^{-1})^{jk}_{\mu\nu}(\partial_\lambda\tilde b^{k\lambda\nu}) + K(m) + V(m^2)\Big]\Big\}, \tag{38}$$
where $m^{jk}_{\mu\nu} := b^{jk}_{\mu\nu} - m^2_{jl}\delta^{kl}g_{\mu\nu}$.
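The algebraic properties of the projectors used in this decomposition can be verified numerically. The sketch below is our illustration (not part of the original derivation); it uses a random real symmetric matrix in place of $m^{ab}$ and checks completeness, idempotency, mutual orthogonality, and the spectral reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
m = (S + S.T) / 2                     # real symmetric stand-in for the mass matrix m^{ab}

vals, vecs = np.linalg.eigh(m)        # eigenvalues m_j and orthonormal eigenvectors mu_j
projectors = [np.outer(vecs[:, j], vecs[:, j]) for j in range(4)]

# completeness: sum_j pi_j = 1
assert np.allclose(sum(projectors), np.eye(4))
# idempotency and mutual orthogonality: pi_j pi_k = delta_{jk} pi_j
for j in range(4):
    for k in range(4):
        expected = projectors[j] if j == k else np.zeros((4, 4))
        assert np.allclose(projectors[j] @ projectors[k], expected)
# spectral reconstruction: m^{ab} = sum_j m_j pi_j^{ab}
assert np.allclose(sum(v * p for v, p in zip(vals, projectors)), m)
```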
Making use of the concrete form of $m^{ab}$ given in Eq. (37), inserting $\Psi$ from Eq. (33), and subsequent diagonalisation leads to the eigenvalues $0$, $\tfrac14$, $\tfrac14$, and $\tfrac14(1 + g_0^2/g^2)$. These correspond to the photon, the two W bosons and the heavier Z boson, respectively. The thus obtained tree-level Z to W mass ratio squared consistently reproduces the cosine of the Weinberg angle in terms of the coupling constants, $\cos^2\vartheta_w = \frac{g^2}{g^2+g_0^2}$. Due to the masslessness of the photon one addend in the sum over $l$ in the expression for $m^{jk}_{\mu\nu}$ above does not contribute. Still, the total $m^{jk}_{\mu\nu}$ does
not vanish like in the case of a single massless Abelian
gauge boson (see Appendix A). Physically this corre-
sponds to the coupling of the photon to the W and Z
bosons.
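The quoted eigenvalue pattern can be reproduced numerically. In the sketch below (our illustration; the embedding $T^0 = \frac{g_0}{2g}\mathbb{1}$ and the choice $\hat\psi = (0,1)^T$ are assumptions consistent with the conventions of this section) the mass matrix $m^{ab} = \frac12\mathrm{tr}(\{T^a,T^b\}\Psi)$ is diagonalised and the tree-level relation $\cos^2\vartheta_w = g^2/(g^2+g_0^2)$ is checked:

```python
import numpy as np

g, g0 = 0.65, 0.35                       # SU(2) and U(1) couplings (arbitrary test values)
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]
T = [g0 / (2 * g) * np.eye(2, dtype=complex)] + [s / 2 for s in sigma]  # T^0 .. T^3

psi_hat = np.array([0.0, 1.0], complex)  # constant doublet with |psi_hat|^2 = 1
Psi = np.outer(psi_hat, psi_hat.conj())  # Psi = psi_hat (x) psi_hat^dagger, tr Psi = 1

# m^{ab} = (1/2) tr({T^a, T^b} Psi)
mab = np.array([[0.5 * np.trace((T[a] @ T[b] + T[b] @ T[a]) @ Psi).real
                 for b in range(4)] for a in range(4)])

vals = np.sort(np.linalg.eigvalsh(mab))
# photon (0), two W bosons (1/4), and the heavier Z boson ((g^2 + g0^2)/(4 g^2))
assert np.allclose(vals, [0.0, 0.25, 0.25, (g**2 + g0**2) / (4 * g**2)])

cos2_thetaw = vals[1] / vals[3]          # m_W^2 / m_Z^2
assert np.isclose(cos2_thetaw, g**2 / (g**2 + g0**2))
```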
III. GEOMETRIC REPRESENTATION
The fact that the antisymmetric tensor field Baµν trans-
forms homogeneously represents already an advantage
over the formulation in terms of the inhomogeneously
transforming gauge fields Aaµ. Still, B
µν contains de-
grees of freedom linked to the gauge transformations (9).
These can be eliminated by making the transition to
a formulation in terms of geometric variables. In this
section we provide a classically equivalent description of
the massive gauge field theories in terms of geometric
variables in Euclidean space for two colours by adapt-
ing Ref. [16] to include mass. The first-order action is
quadratic in the gauge field $A^a_\mu$ [30]. Thus the evaluation
of the classical action at the saddle point yields the ex-
pression equivalent to the different exponents obtained
after integrating out the gauge field Aaµ in the various
partition functions in the previous section. In Euclidean
space the classical massive Yang–Mills action in the first
order formalism reads
$$S = \int d^4x\,(\mathcal{L}_{BB} + \mathcal{L}_{BF} + \mathcal{L}_{AA}), \tag{39}$$
where
$$\mathcal{L}_{BB} = -\frac{g^2}{4}B^a_{\mu\nu}B^{a\mu\nu}, \tag{40}$$
$$\mathcal{L}_{BF} = +\frac{i}{4}\epsilon^{\mu\nu\kappa\lambda}B^a_{\mu\nu}F^a_{\kappa\lambda}, \tag{41}$$
$$\mathcal{L}_{AA} = -\frac{m^2}{2}A^a_\mu A^{a\mu}. \tag{42}$$
At first we will investigate the situation for the unitary
gauge mass term LAA and study the role played by the
scalars Φ afterwards.
As starting point it is important to note that a metric can be constructed that makes the tensor $B^a_{\mu\nu}$ self-dual [7]. In order to exploit this fact, it is convenient to define the antisymmetric tensor ($j \in \{1;2;3\}$)
$$T^j_{\mu\nu} := \eta^j_{AB}\,e^A_\mu e^B_\nu, \tag{43}$$
with the self-dual 't Hooft symbol $\eta^j_{AB}$ [17][31] and the tetrad $e^A_\mu$. From there we construct a metric $g_{\mu\nu}$ in terms of the tensor $T^j_{\mu\nu}$,
$$g_{\mu\nu} \equiv e^A_\mu e^A_\nu = \tfrac16\,\epsilon^{jkl}\,T^j_{\mu\kappa}T^{k\kappa\lambda}T^l_{\lambda\nu}, \tag{44}$$
where
$$\tilde T^j_{\mu\nu} := \frac{1}{2\sqrt g}\,\epsilon_{\mu\nu\kappa\lambda}T^{j\kappa\lambda}, \tag{45}$$
$$(\sqrt g)^3 := \tfrac{1}{6^2}\,\big(\epsilon^{jkl}T^j_{\mu_1\nu_1}T^k_{\mu_2\nu_2}T^l_{\mu_3\nu_3}\big)\big(\epsilon^{j'k'l'}T^{j'}_{\kappa_1\lambda_1}T^{k'}_{\kappa_2\lambda_2}T^{l'}_{\kappa_3\lambda_3}\big)\,\epsilon^{\mu_1\nu_1\kappa_1\lambda_1}\epsilon^{\mu_2\nu_2\kappa_2\lambda_2}\epsilon^{\mu_3\nu_3\kappa_3\lambda_3}. \tag{46}$$
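The self-duality of the 't Hooft symbol entering Eq. (43) can be checked directly. The sketch below is our illustration; the index convention (the "4" direction mapped to the last index, with $\eta^a_{mn} = \epsilon_{amn}$, $\eta^a_{m4} = +\delta_{am}$, $\eta^a_{4n} = -\delta_{an}$) is an assumption consistent with footnote [31]:

```python
from itertools import permutations
import numpy as np

# 3d Levi-Civita symbol
eps3 = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps3[i, j, k], eps3[i, k, j] = 1.0, -1.0

# self-dual 't Hooft symbol eta^a_{mu nu}, Euclidean indices 0..3 ("4" -> 3)
eta = np.zeros((3, 4, 4))
eta[:, :3, :3] = eps3
for a in range(3):
    eta[a, a, 3], eta[a, 3, a] = 1.0, -1.0

# 4d Levi-Civita symbol with eps_{0123} = 1
def parity(p):
    p = list(p); sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]; p[i], p[j] = p[j], p[i]; sign = -sign
    return sign

eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps4[p] = parity(p)

# self-duality: eta^a_{mu nu} = (1/2) eps_{mu nu kappa lam} eta^a_{kappa lam}
dual = 0.5 * np.einsum('mnkl,akl->amn', eps4, eta)
assert np.allclose(dual, eta)
# normalisation: eta^a_{mu nu} eta^b_{mu nu} = 4 delta^{ab}
assert np.allclose(np.einsum('amn,bmn->ab', eta, eta), 4 * np.eye(3))
```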
Subsequently, we introduce a triad $d^a_j$ such that
$$B^a_{\mu\nu} =: d^a_j\,T^j_{\mu\nu}. \tag{47}$$
This permits us to reexpress the BB term of the classical Lagrangian,
$$\mathcal{L}_{BB} = -\frac{g^2}{4}\,T^j_{\mu\nu}h_{jk}T^{k\mu\nu}, \tag{48}$$
where $h_{jk} := d^a_jd^a_k$. Putting Eqs. (47) and (45) into the saddle point condition
$$\epsilon^{\kappa\lambda\mu\nu}D^{ab}_\mu(\breve A)B^b_{\kappa\lambda} = +im^2\breve A^{a\nu} \tag{49}$$
gives
$$D^{ab}_\mu(\breve A)\big(\sqrt g\,d^b_jT^{j\mu\nu}\big) = +im^2\breve A^{a\nu}. \tag{50}$$
In the following we define the connection coefficients $\gamma_{\mu|}{}^k_j$ as expansion parameters of the covariant derivative of the triads at the saddle point in terms of the triads,
$$D^{ab}_\mu(\breve A)\,d^b_j =: \gamma_{\mu|}{}^k_j\,d^a_k. \tag{51}$$
This would not be directly possible for more than two
colours, as then the set of triads is not complete. The
connection coefficients allow us to define covariant derivatives according to
$$\nabla_{\mu|}{}^k_j := \partial_\mu\delta^k_j + \gamma_{\mu|}{}^k_j. \tag{52}$$
These, in turn, permit us to rewrite the saddle point condition (49) as
$$d^a_k\nabla_{\mu|}{}^k_j\big(\sqrt g\,T^{j\mu\nu}\big) = im^2\breve A^{a\nu}, \tag{53}$$
and the mass term in the classical Lagrangian becomes
$$\mathcal{L}_{AA} = \frac{1}{2m^2}\,\big[\nabla_{\mu|}{}^k_i(\sqrt g\,T^{i\mu\nu})\big]\,h_{kl}\,\big[\nabla_{\kappa|}{}^l_j(\sqrt g\,T^{j\kappa\nu})\big]. \tag{54}$$
In the limit $m \to 0$ this term enforces the covariant conservation condition $\nabla_{\mu|}{}^k_i(\sqrt g\,T^{i\mu\nu}) \equiv 0$, known for the massless case. It results also directly from the saddle point condition (53). Here $d^a_k\nabla_{\mu|}{}^k_i(\sqrt g\,T^{i\mu\nu})$ are the direct analogues of the Abelian currents $\epsilon^{\mu\nu\kappa\lambda}\partial_\mu B_{\kappa\lambda}$, which are conserved in the massless case [see Eq. (A6)] and distributed following a Gaussian distribution in the massive case [see Eq. (A10)].
The commutator of the above covariant derivatives yields a Riemann-like tensor $R^k{}_{j\mu\nu}$,
$$R^k{}_{j\mu\nu} := [\nabla_\mu,\nabla_\nu]^k{}_j. \tag{55}$$
By evaluating, in adjoint representation (marked by $\mathring{\ }$), the following difference of double commutators $[\mathring D_\mu(\breve A),[\mathring D_\nu(\breve A),\mathring d_j]] - (\mu\leftrightarrow\nu)$ in two different ways, one can show that
$$i[\mathring d_j,\mathring F_{\mu\nu}(\breve A)] = \mathring d_k\,R^k{}_{j\mu\nu}, \tag{56}$$
or in components,
$$F^a_{\mu\nu}(\breve A) = \tfrac12\,\epsilon^{abc}\,d^b_j\,d^{ck}\,R^k{}_{j\mu\nu}, \tag{57}$$
where $d^a_jd^{ak} := \delta^k_j$ defines the inverse triad, $d^{aj} = h^{jk}d^a_k$. Hence, we are now in the position to rewrite the remaining BF term of the Lagrangian density. Introducing Eqs. (47) and (57) into Eq. (41) results in
$$\mathcal{L}_{BF} = \frac{i}{4}\,\sqrt g\,T^{j\mu\nu}R^k{}_{l\mu\nu}\,\epsilon_{jmk}h^{lm}. \tag{58}$$
Let us now repeat the previous steps with a mass term in which the gauge scalars $\Phi$ are explicit,
$$\mathcal{L}^\Phi_{AA} := -\frac{m^2}{2}\,[A_\mu - i\Phi(\partial_\mu\Phi^\dagger)]^a[A^\mu - i\Phi(\partial^\mu\Phi^\dagger)]^a. \tag{59}$$
In that case the saddle point condition (49) is given by,
$$\epsilon^{\kappa\lambda\mu\nu}D^{ab}_\mu(\breve A)B^b_{\kappa\lambda} = im^2[\breve A^\nu - i\Phi(\partial^\nu\Phi^\dagger)]^a, \tag{60}$$
or in the form of Eq. (53), that is with the left-hand side replaced,
$$d^a_k\nabla_{\mu|}{}^k_j\big(\sqrt g\,T^{j\mu\nu}\big) = im^2[\breve A^\nu - i\Phi(\partial^\nu\Phi^\dagger)]^a. \tag{61}$$
Reexpressing LΦAA with the help of the previous equation
reproduces exactly the unitary gauge result (54) for the
mass term.
Finally, the tensor $B$ appearing in the determinant (13), which accounts for the Gaussian fluctuations of the gauge field $A^a_\mu$, formulated in the new variables reads $B^{ab}_{\mu\nu} = \sqrt g\,f^{abc}d^c_iT^i_{\mu\nu}$. Now all ingredients are known
which are needed to express the equivalent of the parti-
tion function (16) in terms of the new variables. For a
position-dependent mass the discussion does not change
materially. The potential and kinematic term for the
mass scalar m have to be added to the action.
Contrary to the massless case the Aaµ dependent part
of the Euclidean action is genuinely complex. Without
mass only the T-odd and hence purely imaginary BF
term was Aaµ dependent. With mass there contributes
the additional T-even and thus real mass term. There-
fore the saddle point value Ăaµ for the gauge field becomes
complex. This is a known phenomenon and in this con-
text it is essential to deform the integration contour of
the path integral in the partition function to run through
the saddle point [18]. For the Gaussian integrals which
are under consideration here, in doing so, we do not re-
ceive additional contributions. The imaginary part $\Im\breve A^a_\mu$ of the saddle point value of the gauge field transforms
homogeneously under gauge transformations. The com-
plex valued saddle point of the gauge field which is in-
tegrated out does not affect the real-valuedness of the
remaining fields, here $B^a_{\mu\nu}$. In this sense the field $B^a_{\mu\nu}$ represents a parameter for the integration over $A^a_\mu$. The
tensor $T^j_{\mu\nu}$ is real-valued by definition and therefore the same holds also for the triad $d^a_j$ [see Eq. (47)]. $h_{kl}$ is composed of the triads and, consequently, real-valued as well. The imaginary part of the saddle point value of the gauge field, $\Im\breve A^a_\mu$, enters the connection coefficients (51).
Through them it affects the covariant derivative (52) and
the Riemann-like tensor (55). More concretely the connection coefficients $\gamma_{\mu|}{}^k_j$ can be decomposed according to
$$D^{ab}_\mu(\Re\breve A)\,d^b_j = (\Re\gamma_{\mu|}{}^k_j)\,d^a_k, \tag{62}$$
$$f^{abc}(\Im\breve A^c_\mu)\,d^b_j = (\Im\gamma_{\mu|}{}^k_j)\,d^a_k, \tag{63}$$
with the obvious consequences for the covariant derivative,
$$\nabla_{\mu|}{}^k_j = \Re\nabla_{\mu|}{}^k_j + i\,\Im\nabla_{\mu|}{}^k_j, \tag{64}$$
$$\Re\nabla_{\mu|}{}^k_j = \partial_\mu\delta^k_j + \Re\gamma_{\mu|}{}^k_j, \tag{65}$$
$$\Im\nabla_{\mu|}{}^k_j = \Im\gamma_{\mu|}{}^k_j. \tag{66}$$
This composition reflects in the mass term,
$$\Re\mathcal{L}_{AA} = \frac{1}{2m^2}\Big\{\big[\Re\nabla_{\mu|}{}^k_i(\sqrt g\,T^{i\mu\nu})\big]h_{kl}\big[\Re\nabla_{\kappa|}{}^l_j(\sqrt g\,T^{j\kappa\nu})\big] - \big[\Im\gamma_{\mu|}{}^k_i(\sqrt g\,T^{i\mu\nu})\big]h_{kl}\big[\Im\gamma_{\kappa|}{}^l_j(\sqrt g\,T^{j\kappa\nu})\big]\Big\},$$
$$\Im\mathcal{L}_{AA} = \frac{1}{m^2}\,\big[\Re\nabla_{\mu|}{}^k_i(\sqrt g\,T^{i\mu\nu})\big]h_{kl}\big[\Im\nabla_{\kappa|}{}^l_j(\sqrt g\,T^{j\kappa\nu})\big],$$
on one hand, and in the Riemann-like tensor,
$$\Re R^k{}_{j\mu\nu} = [\Re\nabla_\mu,\Re\nabla_\nu]^k{}_j - [\Im\nabla_\mu,\Im\nabla_\nu]^k{}_j, \tag{67}$$
$$\Im R^k{}_{j\mu\nu} = [\Re\nabla_\mu,\Im\nabla_\nu]^k{}_j + [\Im\nabla_\mu,\Re\nabla_\nu]^k{}_j, \tag{68}$$
on the other. The connection to the imaginary part of $\breve A^a_\mu$ is more direct in Eq. (57), which yields,
$$\Re F^a_{\mu\nu}(\breve A) = \tfrac12\,\epsilon^{abc}d^b_jd^{ck}\,\Re R^k{}_{j\mu\nu}, \tag{69}$$
$$\Im F^a_{\mu\nu}(\breve A) = \tfrac12\,\epsilon^{abc}d^b_jd^{ck}\,\Im R^k{}_{j\mu\nu}. \tag{70}$$
Finally, the BF term becomes,
$$\Re\mathcal{L}_{BF} = -\frac14\,\sqrt g\,T^{j\mu\nu}\epsilon_{jmk}h^{lm}\,\Im R^k{}_{l\mu\nu}, \tag{71}$$
$$\Im\mathcal{L}_{BF} = +\frac14\,\sqrt g\,T^{j\mu\nu}\epsilon_{jmk}h^{lm}\,\Re R^k{}_{l\mu\nu}. \tag{72}$$
Summing up, at the complex saddle point of the [dA] in-
tegration the emerging Euclidean LAA and LBF are both
complex, whereas before they were real and purely imag-
inary, respectively. Both terms together determine the
saddle point value Ăaµ. Therefore, they become coupled
and cannot be considered separately anymore. This was
already to be expected from the analysis in Minkowski
space in Sect. II, where the matrixMabµν combines T-odd
and T-even contributions, which originate from LAA and
LBF , respectively. There the different contributions be-
come entangled when the inverse (M−1)abµν is calculated.
A. Weinberg–Salam model
Finally, let us reformulate the Weinberg–Salam model
in geometric variables. We omit here the kinetic term $K(m)$ and the potential term $V(m^2)$ for the sake of
brevity because they do not interfere with the calcula-
tions and can be reinstated at every time. The remaining
terms of the classical action are
$$S = \int d^4x\,(\mathcal{L}^{\rm Abel}_{BB} + \mathcal{L}^{\rm Abel}_{BF} + \mathcal{L}_{BB} + \mathcal{L}_{BF} + \mathcal{L}_{AA}),$$
where
$$\mathcal{L}_{AA} := -\frac{m^2}{2}\,m^{ab}A^a_\mu A^{b\mu}, \tag{73}$$
$$\mathcal{L}^{\rm Abel}_{BB} := -\frac{g^2}{4}B^0_{\mu\nu}B^{0\mu\nu}, \tag{74}$$
$$\mathcal{L}^{\rm Abel}_{BF} := +\frac{i}{4}\epsilon^{\mu\nu\kappa\lambda}B^0_{\mu\nu}F^0_{\kappa\lambda}, \tag{75}$$
and LBB as well as LBF have been defined in Eqs. (40)
and (41), respectively.
The saddle point conditions for the [dA] integration with this action are given by
$$\epsilon^{\kappa\lambda\mu\nu}D^{ab}_\mu(\breve A)B^b_{\kappa\lambda} = +im^2m^{ab}\breve A^{b\nu}, \tag{76}$$
$$\epsilon^{\kappa\lambda\mu\nu}\partial_\mu B^0_{\kappa\lambda} = +im^2m^{0b}\breve A^{b\nu}. \tag{77}$$
For the following it is convenient to use linear combinations of these equations, which are obtained by contraction with the eigenvectors $\mu^a_l$ of the matrix $m^{ab}$ (defined between Eqs. (37) and (38)),
$$\epsilon^{\kappa\lambda\mu\nu}\big[\mu^a_lD^{ab}_\mu(\breve A)B^b_{\kappa\lambda} + \mu^0_l\partial_\mu B^0_{\kappa\lambda}\big] = im^2\mu^a_lm^{ab}\breve A^{b\nu}. \tag{78}$$
The non-Abelian term on the left-hand side can be rewritten using the results from the first part of Sect. III. The right-hand side may be expressed in terms of eigenvalues of the matrix $m^{ab}$. We find (no summation over $l$),
$$\mu^a_lX^{a\nu} = im^2m_l\,\breve a^\nu_l, \tag{79}$$
where
$$X^{a\nu} := d^a_j\nabla_{\mu|}{}^j_k\big(\sqrt g\,T^{k\mu\nu}\big) + \tfrac12\,\mu^a_l\mu^0_l\,\epsilon^{\kappa\lambda\mu\nu}\partial_\mu B^0_{\kappa\lambda}. \tag{80}$$
The mass term can be decomposed in the eigenbasis of $m^{ab}$ as well and, subsequently, be formulated in terms of the geometric variables,
$$\mathcal{L}_{AA} = -\frac{1}{2m^2}\,(\bar m^{-1})^{ab}X^a_{\ \nu}X^{b\nu}, \tag{81}$$
where
$$(\bar m^{-1})^{ab} := \sum_{\forall m_l\neq0}\frac{\mu^a_l\mu^b_l}{m_l}. \tag{82}$$
With the help of these relations and the results from
the beginning of Sect. III we are now in the posi-
tion to express the classical action in geometric vari-
ables: The mass term is given in the previous expres-
sion. It describes a Gaussian distribution of a com-
posite current. The components of the current are su-
perpositions of Abelian and non-Abelian contributions.
This mixture is caused by the symmetry breaking pat-
tern SU(2)L × U(1)Y → U(1)em which leaves unbroken
U(1)em and not the U(1)Y which is a symmetry in the
unbroken phase. The Abelian antisymmetric fields $B^0_{\mu\nu}$ in $\mathcal{L}^{\rm Abel}_{BB}$ are gauge invariant and we leave $\mathcal{L}^{\rm Abel}_{BB}$ as defined in Eq. (74). In geometric variables $\mathcal{L}_{BB}$ is given by Eq. (48) and $\mathcal{L}_{BF}$ by Eq. (58). At the end the kinetic term $K(m)$ and the potential term $V(m^2)$ should be reinstated.
Additional contributions from fluctuations give rise to an addend (on the level of the Lagrangian) proportional to $\ln\det m$, where $m$ can be expressed in the new variables, $m^{jk}_{\mu\nu} = f^{abc}\mu^a_j\mu^b_k\,d^c_l\,\sqrt g\,T^l_{\mu\nu} - m^2_{jl}\delta^{kl}g_{\mu\nu}$.
Repeating the entire calculation not in unitary gauge,
but with explicit gauge scalars Φ, yields exactly the same
result because the mass term and the saddle point con-
dition change in unison, such that Eq. (79) is obtained
again. This has already been demonstrated explicitly for
a massive Yang–Mills theory just before Sect. III A.
IV. SUMMARY
We have discussed the formulation of massive gauge
field theories in terms of antisymmetric tensor fields
(Sect. II) and of geometric variables (Sect. III). The
description in terms of an antisymmetric tensor field $B^a_{\mu\nu}$ has the advantage that it transforms homogeneously under gauge transformations, whereas the usual gauge field $A^a_\mu$ transforms inhomogeneously, which complicates
a gauge-independent treatment of massive gauge field
theories. In fact, the (Stückelberg-like) degrees of freedom needed for a gauge-invariant formulation in terms of a Yang–Mills connection are directly absorbed in the antisymmetric tensor fields. No scalar field is re-
quired in order to construct a gauge invariant massive
theory in terms of the new variables. After recapitu-
lating the massless case in Sect. IIA, we have treated
the massive setting in Sect. IIB. After the fixed mass
case, at the beginning of Sect. IIB, this section encom-
passes also a position dependent mass (Sect. IIB1), that
is the Higgs degree of freedom, and a non-diagonal mass
term (Sect. IIB2). This is required for describing the
Weinberg–Salam model. In this context, we have identified the degrees of freedom which represent the different electroweak gauge bosons in the $B^a_{\mu\nu}$ representation by a gauge-invariant eigenvector decomposition.
The Abelian section (App. A) serves as basis for an
easier understanding of some issues arising in the non-
Abelian case, like for example vanishing conserved cur-
rents. In that section we also address the massless limits
of propagators in the $A_\mu$ and $B_{\mu\nu}$ representations, respectively. We notice that while the limit is ill-defined for the
Aµ fields it is well-defined for the Bµν fields. That is due
to the consistent treatment of gauge degrees of freedom
in the latter case.
In Sect. III we continue with a description of massive
gauge field theories in terms of geometric variables in
four space-time dimensions and for two colours. Thereby
we can eliminate the remaining degrees of freedom which are still encoded in the $B^a_{\mu\nu}$ fields. After deriving the
expressions for a fixed mass and in the presence of the
Higgs degree of freedom, respectively, we also investigate
the Weinberg–Salam model (Sect. III A).
Acknowledgments
DDD would like to thank Gerald Dunne and Stefan
Hofmann for helpful, informative and inspiring discus-
sions. Thanks are again due to Stefan Hofmann for read-
ing the manuscript.
APPENDIX A: ABELIAN
1. Massless
The partition function of an Abelian gauge field theory
without fermions is given by
$$Z = \int[dA]\,\exp\Big\{i\int d^4x\,\mathcal{L}\Big\} \tag{A1}$$
with the Lagrangian density
$$\mathcal{L} = \mathcal{L}_0 := -\frac{1}{4g^2}F_{\mu\nu}F^{\mu\nu} \tag{A2}$$
and the field tensor
$$F_{\mu\nu} := \partial_\mu A_\nu - \partial_\nu A_\mu. \tag{A3}$$
g stands for the coupling constant. The transition to the
first-order formalism can be performed just like in the
non-Abelian case, which is treated in the main body of
the paper. We find the partition function,
[dA][dB] ×
× exp{i
d4x[− 1
F̃µνB
µν − g
µν ]}. (A4)
Here the antisymmetric tensor field Bµν , like the field
tensor Fµν , is gauge invariant. The classical equations of
motion are given by
$$\partial_\mu\tilde B^{\mu\nu} = 0 \qquad\text{and}\qquad g^2B_{\mu\nu} = -\tilde F_{\mu\nu}, \tag{A5}$$
which after elimination of Bµν reproduce the Maxwell
equations one would obtain from Eq. (A2). Now we can formally integrate out the gauge field $A_\mu$. Since the Abelian field tensor $F_{\mu\nu}$ is gauge invariant, no gauge is fixed by the BF term, and the integration gives rise to a functional $\delta$ distribution. This constrains the allowed field configurations to those for which the conserved current $\partial_\mu\tilde B^{\mu\nu}$ vanishes,
$$Z = \int[dB]\,\delta(\partial_\mu\tilde B^{\mu\nu})\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}B_{\mu\nu}B^{\mu\nu}\Big]\Big\}. \tag{A6}$$
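Configurations of the form $B_{\mu\nu} = \partial_\mu a_\nu - \partial_\nu a_\mu$ satisfy this constraint automatically, by the Abelian Bianchi identity. A quick symbolic check of this (our illustration, not part of the paper) with a generic potential $a_\mu$:

```python
import sympy as sp

x = sp.symbols('x0:4')
a = [sp.Function(f'a{mu}')(*x) for mu in range(4)]
# field-strength configuration B_{mu nu} = d_mu a_nu - d_nu a_mu
B = [[sp.diff(a[nu], x[mu]) - sp.diff(a[mu], x[nu]) for nu in range(4)]
     for mu in range(4)]

# conserved current: d_mu Btilde^{mu nu} = (1/2) eps^{mu nu kappa lam} d_mu B_{kappa lam}
for nu in range(4):
    current = sum(sp.Rational(1, 2) * sp.LeviCivita(mu, nu, kap, lam)
                  * sp.diff(B[kap][lam], x[mu])
                  for mu in range(4) for kap in range(4) for lam in range(4))
    assert sp.simplify(current) == 0  # vanishes identically (Bianchi identity)
```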
2. Massive
In the massive case the Lagrangian density becomes
$\mathcal{L} = \mathcal{L}_0 + \mathcal{L}_m$, where $\mathcal{L}_m := \frac{m^2}{2}A_\mu A^\mu$. First, we here
repeat some steps carried out above in the non-Abelian
case: We can directly write down the partition function
in unitary gauge. Regauging like in Eq. (18) leads to
$$Z = \int[dA]'[dU]\,\exp\Big(i\int d^4x\Big\{-\frac{1}{4g^2}F_{\mu\nu}F^{\mu\nu} + \frac{m^2}{2}[A_\mu - iU^\dagger(\partial_\mu U)][A^\mu - iU^\dagger(\partial^\mu U)]\Big\}\Big). \tag{A7}$$
The corresponding gauge-invariant Lagrangian then
reads,
$$\mathcal{L}_{cl} := -\frac{1}{4g^2}F_{\mu\nu}F^{\mu\nu} + \frac{m^2}{2}(D_\mu\Phi)^\dagger(D^\mu\Phi), \tag{A8}$$
with the constraint $\Phi^\dagger\Phi = 1$. Constructing a partition
function in the first-order formalism from the previous
Lagrangian yields,
$$Z = \int[dA][d\Phi][dB]\,\exp\Big(i\int d^4x\Big\{-\frac12B_{\mu\nu}\tilde F^{\mu\nu} - \frac{g^2}{4}B_{\mu\nu}B^{\mu\nu} + \frac{m^2}{2}[A_\mu - i\Phi(\partial_\mu\Phi^\dagger)][A^\mu - i\Phi(\partial^\mu\Phi^\dagger)]\Big\}\Big). \tag{A9}$$
The Φ fields can be absorbed entirely in a gauge-
transformation of the gauge field Aµ. The integration
over Φ decouples. This can also be seen by putting the
parametrisation Φ = e−iθ into the previous equation and
carrying out the [dA] integration,
$$Z = \int[dB][d\theta]\,\exp\Big\{i\int d^4x\Big[-\frac{g^2}{4}B_{\mu\nu}B^{\mu\nu} + \frac{1}{2m^2}(\partial_\kappa\tilde B^{\kappa\mu})g_{\mu\nu}(\partial_\lambda\tilde B^{\lambda\nu}) - (\partial_\mu\theta)(\partial_\kappa B^{\kappa\mu})\Big]\Big\}. \tag{A10}$$
The only θ dependent term in the exponent is a total
derivative and drops out, leading to a factorisation of
the θ integral.
A third way, which yields the same final result, starts by integrating out the $\theta$ field first. This gives a transverse mass term $\sim A_\mu\big(g^{\mu\nu} - \frac{\partial^\mu\partial^\nu}{\Box}\big)A_\nu$. Integration over $A_\mu$ then leads to the same result as before.
Instead of a vanishing current $\partial_\mu\tilde B^{\mu\nu}$ like in the massless case, in the massive case the current has a Gaussian
distribution. The distribution’s width is proportional to
the mass of the gauge boson.
3. m → 0 limit
In the gauge-field representation the massless limits of the classical actions discussed above are smooth. In terms of the $B_{\mu\nu}$ field the mass $m$ ends up in the denominator of the corresponding term in the action. Together with the $m$ dependent normalisation factors arising from the integrations over the gauge-field in the course of the
derivation of the Bµν representation, however, the limit
m → 0 still yields the m = 0 result for the partition
function (A6).
Still, it is known that the perturbative propagator for
a massive photon is ill-defined if the mass goes to zero:
Variation of the exponent of the Abelian massive parti-
tion function in unitary gauge with respect to Aκ and Aλ
gives the inverse propagator for the gauge fields,
$$(G^{-1})^{\kappa\lambda} = [(p^2 - m^2_{\rm phys})g^{\kappa\lambda} - p^\kappa p^\lambda], \tag{A11}$$
which here is already transformed to momentum space.
The corresponding equation of motion,
$$(G^{-1})^{\kappa\lambda}G_{\lambda\mu} = g^\kappa_{\ \mu}, \tag{A12}$$
is solved by
$$G_{\lambda\mu} = \frac{g_{\lambda\mu}}{p^2 - m^2_{\rm phys}} - \frac{p_\lambda p_\mu}{m^2_{\rm phys}\,(p^2 - m^2_{\rm phys})}, \tag{A13}$$
with boundary conditions (an ǫ prescription) to be spec-
ified and mphys := mg. This propagator diverges in the
limit m→ 0.
In the representation based on the antisymmetric ten-
sor fields, variation of the exponent of the partition func-
tion (A10) with respect to the fields B̃µν and B̃κλ yields
the inverse propagator
$$(G^{-1})^{\mu\nu|\kappa\lambda} = g^{\mu\kappa}g^{\nu\lambda} - g^{\nu\kappa}g^{\mu\lambda} + m^{-2}_{\rm phys}\big(\partial^\mu\partial^\kappa g^{\nu\lambda} - \partial^\nu\partial^\kappa g^{\mu\lambda} - \partial^\mu\partial^\lambda g^{\nu\kappa} + \partial^\nu\partial^\lambda g^{\mu\kappa}\big), \tag{A14}$$
already expressed in momentum space. Variation with
respect to B̃µν instead of Bµν corresponds only to a
reshuffling of the Lorentz indices and gives an equiva-
lent description. The antisymmetric structure of the in-
verse propagator is due to the antisymmetry of B̃µν . The
equation of motion is then given by
$$(G^{-1})^{\mu\nu|\kappa\lambda}G_{\kappa\lambda|\rho\sigma} = g^\mu_{\ \rho}g^\nu_{\ \sigma} - g^\mu_{\ \sigma}g^\nu_{\ \rho} \tag{A15}$$
and solved by
$$2G_{\kappa\lambda|\rho\sigma} = (g_{\kappa\rho}g_{\lambda\sigma} - g_{\kappa\sigma}g_{\lambda\rho}) - \frac{1}{p^2 - m^2_{\rm phys}}\big(p_\kappa p_\rho g_{\lambda\sigma} - p_\kappa p_\sigma g_{\lambda\rho} - p_\lambda p_\rho g_{\kappa\sigma} + p_\lambda p_\sigma g_{\kappa\rho}\big). \tag{A16}$$
Here we observe that the limit $m\to0$ is well-defined,
$$2G_{\kappa\lambda|\rho\sigma}\ \xrightarrow{\,m\to0\,}\ g_{\kappa\rho}g_{\lambda\sigma} - g_{\kappa\sigma}g_{\lambda\rho} - \frac{1}{p^2}\big(p_\kappa p_\rho g_{\lambda\sigma} - p_\kappa p_\sigma g_{\lambda\rho} - p_\lambda p_\rho g_{\kappa\sigma} + p_\lambda p_\sigma g_{\kappa\rho}\big). \tag{A17}$$
This is due to the consistent treatment of the gauge de-
grees of freedom in the second approach.
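The statement that Eq. (A16) inverts Eq. (A14) in the sense of Eq. (A15), and that its $m \to 0$ limit stays finite, can be verified numerically. The sketch below is our illustration; it works in Euclidean signature for simplicity, so index raising is trivial, while the tensor algebra behind (A15) is the same in either signature:

```python
import numpy as np

g = np.eye(4)                      # flat Euclidean metric (illustrative simplification)
p = np.array([0.7, -0.3, 1.1, 0.2])
p2 = float(p @ p)
M2 = 1.3                           # stand-in for m_phys^2

# antisymmetrised unit and momentum structures of Eqs. (A14) and (A16)
A = np.einsum('mk,nl->mnkl', g, g) - np.einsum('nk,ml->mnkl', g, g)
P = (np.einsum('m,k,nl->mnkl', p, p, g) - np.einsum('n,k,ml->mnkl', p, p, g)
     - np.einsum('m,l,nk->mnkl', p, p, g) + np.einsum('n,l,mk->mnkl', p, p, g))

Ginv = A - P / M2                  # inverse propagator, Eq. (A14) in momentum space
G = 0.5 * (A - P / (p2 - M2))      # propagator, Eq. (A16)

# Eq. (A15): the contraction reproduces the antisymmetrised identity
assert np.allclose(np.einsum('mnkl,klrs->mnrs', Ginv, G), A)

# m -> 0: the propagator stays finite and approaches Eq. (A17)
G0 = 0.5 * (A - P / p2)
G_small = 0.5 * (A - P / (p2 - 1e-10))
assert np.allclose(G_small, G0, atol=1e-8)
```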
[1] A. M. Polyakov, Nucl. Phys. B 164 (1980) 171;
Yu. M. Makeenko and A. A. Migdal, Phys. Lett. B 88
(1979) 135 [Erratum-ibid. B 89 (1980) 437];
Nucl. Phys. B 188 (1981) 269 [Sov. J. Nucl. Phys. 32
(1980) 431; Yad. Fiz. 32 (1980) 838].
[2] Y. M. Cho, Phys. Rev. D 21 (1980) 1080;
L. D. Faddeev and A. J. Niemi, Phys. Rev. Lett. 82
(1999) 1624 [arXiv:hep-th/9807069];
K.-I. Kondo, Phys. Rev. D 74 (2006) 125003
[arXiv:hep-th/0609166].
[3] S. W. MacDowell and F. Mansouri, Phys. Rev. Lett. 38
(1977) 739 [Erratum-ibid. 38 (1977) 1376].
[4] J. F. Plebanski, J. Math. Phys. 12 (1977) 2511;
J. C. Baez, Lect. Notes Phys. 543 (2000) 25
[arXiv:gr-qc/9905087];
T. Thiemann, Lect. Notes Phys. 631 (2003) 41
[arXiv:gr-qc/0210094].
[5] S. Deser and C. Teitelboim, Phys. Rev. D 13 (1976) 1592.
[6] M. B. Halpern, Phys. Rev. D 16 (1977) 1798.
[7] O. Ganor and J. Sonnenschein, Int. J. Mod. Phys. A 11
(1996) 5701 [arXiv:hep-th/9507036].
[8] M. Schaden, H. Reinhardt, P. A. Amundsen and
M. J. Lavelle, Nucl. Phys. B 339 (1990) 595.
[9] T. Kunimasa and T. Goto, Prog. Theor. Phys. 37 (1967),
[10] E. C. G. Stückelberg, Annals Phys. 21 (1934) 367-389
and 744.
[11] K. I. Kondo, Prog. Theor. Phys. 98 (1997) 211
[arXiv:hep-th/9603151].
[12] I. A. Batalin and E. S. Fradkin, Phys. Lett. B 180 (1986);
Nucl. Phys. B 279 (1987) 514;
E. S. Fradkin and G. A. Vilkovisky, Phys. Lett. B 55
(1975) 224;
I. A. Batalin and E. S. Fradkin, Riv. Nuovo Cimento 9
(1986) 1;
M. Henneaux, Phys. Rep. 126 (1985) 1;
M. Henneaux and C. Teitelboim, Quantization of Gauge
Systems, (Princeton Univ. Press, 1992);
T. Fujiwara, Y. Igarashi, and J. Kubo, Nucl. Phys. B
341 (1990) 695;
Phys. Lett. B 261 (1990) 427;
K.-I. Kondo, Nucl. Phys. B 450 (1995) 251.
[13] D. Z. Freedman and P. K. Townsend, Nucl. Phys. B 177 (1981) 282.
[14] K. Hayashi, Phys. Lett. B 44 (1973) 497;
M. Kalb and P. Ramond, Phys. Rev. D 9 (1974) 2273;
E. Cremmer and J. Scherk, Nucl. Phys. B 72 (1974) 117;
Y. Nambu, Phys. Rept. 23 (1976) 250;
P. K. Townsend, Phys. Lett. B 88 (1979) 97.
[15] K. Seo, M. Okawa, and A. Sugamoto, Phys. Rev. D 12
(1979) 3744;
K. Seo and M. Okawa, Phys. Rev. D 6 (1980) 1614.
[16] D. Diakonov and V. Petrov, Grav. Cosmol. 8 (2002) 33
[arXiv:hep-th/0108097].
[17] G. ’t Hooft, Phys. Rev. D 14 (1976) 3432 [Erratum-ibid.
D 18 (1978) 2199].
[18] see, e.g. G. Alexanian, R. MacKenzie, M. B. Paranjape
and J. Ruel, arXiv:hep-th/0609146.
[19] S. S. Chern and J. Simons, Annals Math. 99 (1974) 48;
S. Deser, R. Jackiw and S. Templeton, Annals Phys. 140
(1982) 372 [Erratum-ibid. 185 (1988) 406; 281 (2000)
409-449];
Phys. Rev. Lett. 48 (1982) 975.
[20] D. Karabali and V. P. Nair, Nucl. Phys. B 464 (1996)
D. Karabali and V. P. Nair, Phys. Lett. B 379 (1996)
[21] D. Karabali, C. J. Kim, and V. P. Nair, Nucl. Phys. B
524 (1998) 661 [arXiv:hep-th/9705087];
D. Karabali, C. J. Kim, and V. P. Nair, Phys. Lett. B
434 (1998) 103 [arXiv:hep-th/9804132];
D. Karabali, C. J. Kim, and V. P. Nair, Phys. Rev. D 64
(2001) 025011 [arXiv:hep-th/0007188].
[22] L. Freidel, arXiv:hep-th/0604185;
L. Freidel, R. G. Leigh, and D. Minic,
arXiv:hep-th/0604184.
[23] I. Bars, Phys. Rev. Lett. 40 (1978) 688;
Nucl. Phys. B 149 (1979) 39;
I. Bars and F. Green, Nucl. Phys. B 148 (1979) 445.
[24] V. N. Gribov, Nucl. Phys. B 139 (1978) 1.
[25] F. A. Lunev, J. Math. Phys. 37 (1996) 5351
[arXiv:hep-th/9503133].
[26] We do not discuss here odd dimensional cases, where
mass can be created topologically by means of Chern–
Simons terms [19].
[27] Again in an odd number of space-time dimensions (2+1)
Karabali and Nair [20] also together with Kim [21] have
carried out a gauge invariant analysis of non-Abelian
gauge field theories. As pointed out by Freidel et al. [22]
this approach appears to be extendable to 3+1 dimensions making use of the so-called "corner variables" [23].
[28] The functional integral over a gauge field is ill-defined as
long as no gauge is fixed. We have to keep this fact in
mind at all times and will discuss it in detail when a field
is really integrated out.
[29] Due to the possible occurrence of what is known as Gribov copies [24] this method might not achieve a unique splitting of the two parts of the integration. For our illustrative purposes, however, this is not important.
[30] For three colours and four space-time dimensions there
exists a treatment of the massless setting due to Lunev
[25], different from the one which here is extended to the
massive case.
[31] One possible representation is η^a_mn = ε_amn, η^a_4n = −δ^a_n, η^a_m4 = −δ^a_m, and η^a_44 = 0, where m, n ∈ {1, 2, 3}.
An Electromagnetic Calorimeter for the JLab Real Compton Scattering Experiment
D. J. Hamiltona,∗, A. Shahinyanb, B. Wojtsekhowskic, J. R. M. Annanda, T.-H. Changd, E. Chudakovc, A. Danagouliand,
P. Degtyarenkoc, K. Egiyanb,1, R. Gilmane, V. Gorbenkof, J. Hinesg,2, E. Hovhannisyanb,1, C. E. Hyde-Wrighth, C.W. de Jagerc,
A. Ketikyanb, V. H. Mamyanb,c,3, R. Michaelsc, A. M. Nathand, V. Nelyubini, I. Rachekj, M. Roedelbromd, A. Petrosyanb,1,
R. Pomatsalyukf, V. Popovc, J. Segalc, Y. Shestakovj, J. Templong,4, H. Voskanyanb
aUniversity of Glasgow, Glasgow G12 8QQ, Scotland, UK
bYerevan Physics Institute, Yerevan 375036, Armenia
cThomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA
dUniversity of Illinois, Urbana-Champaign, IL 61801, USA
eRutgers University, Piscataway, NJ 08855, USA
fKharkov Institute of Physics and Technology, Kharkov 61108, Ukraine
gThe University of Georgia, Athens, GA 30602, USA
hOld Dominion University, Norfolk, VA 23529, USA
i St. Petersburg Nuclear Physics Institute, Gatchina, 188350, Russia
jBudker Institute for Nuclear Physics, Novosibirsk 630090, Russia
Abstract
A lead-glass hodoscope calorimeter that was constructed for use in the Jefferson Lab Real Compton Scattering experiment is
described. The detector provides a measurement of the coordinates and the energy of scattered photons in the GeV energy range
with resolutions of 5 mm and 6%/√(Eγ [GeV]), respectively. Features of both the detector design and its performance in the high-luminosity
environment during the experiment are presented.
Keywords: Calorimeters, Čerenkov detectors
PACS: 29.40.Vj, 29.40.Ka
1. Introduction
A calorimeter was constructed as part of the instrumentation
of the Jefferson Lab (JLab) Hall A experiment E99-114, “Ex-
clusive Compton Scattering on the Proton” [1], the schematic
layout for which is shown in Fig. 1. The study of elastic pho-
ton scattering provides important information about nucleon
structure, which is complementary to that obtained from elastic
electron scattering [2]. Experimental data on the Real Comp-
ton Scattering (RCS) process at large photon energies and large
scattering angles are rather scarce, due mainly to the absence of
high luminosity facilities with suitable high-resolution photon
detectors. Such data are however crucial, as the basic mecha-
nism of the RCS reaction is the subject of active debate [3, 4, 5].
The only data available before the JLab E99-114 experiment
were obtained at Cornell about 30 years ago [6].
The construction of the CEBAF (Continuous Electron Beam
Accelerator Facility) accelerator has led to an extension of
many experiments with electron and photon beams in the GeV
energy range and much improved precision. This is the result of
a number of fundamental improvements to the electron beam,
∗Tel.: +44-141-330-5898; Fax: +44-141-330-5889
Email address: [email protected] (D. J. Hamilton)
1deceased
2present address: Applied Biosystems/MDS, USA
3present address: University of Virginia, Charlottesville, VA 22901, USA
4present address: NIKHEF, 1009 DB Amsterdam, The Netherlands
(Figure 1 labels: beam dump; cable lines; calorimeter; front-end electronics; beam line; magnet; radiator and target.)
Figure 1: Layout of the RCS experiment in Hall A. An electron beam incident
on a radiator produces an intense flux of high energy photons.
including a 100% duty cycle, low emittance and high polariza-
tion, in addition to new dedicated target and detector systems.
The CEBAF duty factor provides an improvement of a factor of
15 compared to the best duty factor of a beam extracted from a
synchrotron, at a similar instantaneous rate in the detectors.
In 1994 work began on the development of a technique for an
RCS experiment at JLab, leading in 1997 to the instigation of
a large-scale prototyping effort. The results of the subsequent
test runs in 1998 and 1999 [7] provided sufficient information
Preprint submitted to NIM A August 16, 2018
Figure 2: A photograph of the experimental set-up for E99-114, showing the calorimeter (center) and part of the proton spectrometer (rear).
for the final design of the apparatus presented in the present
article. The fully realized physics experiment took place in
2002 (see Fig. 1) at a photon-nucleon luminosity which was
a factor of 1300 higher than in the previous Cornell experi-
ment. The experimental technique involves utilizing a mixed
electron-photon beam which is incident on a liquid hydrogen
target and passes to a beam dump. The scattered photons are
detected in the calorimeter, while the recoiling protons are de-
tected in a high resolution magnetic spectrometer (HRS-L). A
magnet between the hydrogen target and the calorimeter de-
flects the scattered electrons, which then allows for clean sep-
aration between Compton scattering and elastic e-p scattering
events. The Data Acquisition Electronics (DAQ) is shielded by
a 4 inch thick concrete wall from the beam dump and the target.
Figure 2 shows a photograph of the experimental set-up with
the calorimeter in the center.
The experiment relied on a proton-photon time coincidence
and an accurate measurement of the proton-photon kinematic
correlation for event selection. The improvement in the event
rate over the previous measurement was achieved through the
use of a mixed electron-photon beam, which in turn required
a veto detector in front of the calorimeter or the magnetic de-
flection of the scattered electron [1]. In order to ensure redun-
dancy and cross-checking, both a veto and deflection magnet
were designed and built. The fact that a clean photon beam was
not required meant that the photon radiator could be situated
very close to the hydrogen target, leading to a much reduced
background near the beam line and a dramatic reduction of the
photon beam size. This small beam size in combination with
the large dispersion in the HRS-L proton detector system [8]
resulted in very good momentum and angle resolution for the
recoiling proton without the need for a tracking detector near
the target, where the background rate is high.
Good energy and coordinate resolutions were key features
of the photon detector design goals, both of which were sig-
nificantly improved in the JLab experiment as compared to the
Cornell one. An energy resolution of at least 10% is required to
separate cleanly RCS events from electron bremsstrahlung and
neutral pion events. In order to separate further the background
from neutral pion photo-production, which is the dominant component of the high-energy background in this measurement, a
high angular resolution between proton and photon detectors is
crucial. This was achieved on the photon side by constructing a
highly segmented calorimeter of 704 channels. The RCS exper-
iment was the first instance of a calorimeter being operated at
an effective electron-nucleon luminosity of 10^39 cm^-2 s^-1 [9, 10]
(a 40 µA electron beam on a 6% Cu radiator upstream of a
15 cm long liquid hydrogen target). It was observed in the test
runs that the counting rate in the calorimeter fell rapidly as the
threshold level was increased, which presented an opportunity
to maintain a relatively low trigger rate even at high luminosity.
However, on-line use of the calorimeter signal required a set of
summing electronics and careful equalizing and monitoring of
the individual channel outputs during the experiment.
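The stated resolution goal can be put in numbers. A minimal sketch (the function name and the assumption that the 6%/√E stochastic term quoted in the abstract dominates are ours, not the authors'):

```python
import math

def sigma_e_over_e(e_gev, stochastic=0.06):
    """Fractional energy resolution from the stochastic term alone: sigma/E = a/sqrt(E)."""
    return stochastic / math.sqrt(e_gev)

# At typical RCS photon energies the stochastic term alone stays well
# below the ~10% needed to separate RCS photons from the background.
for e in (1.0, 2.0, 4.0):
    print(f"E = {e:.1f} GeV: sigma_E/E = {sigma_e_over_e(e):.1%}")
```

At a few GeV this is comfortably inside the 10% requirement, which is why the constant and noise terms could be neglected in the design estimate.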
As the RCS experiment represented the first use of such a
calorimeter at very high luminosity, a detailed study of the
calorimeter performance throughout the course of the experi-
ment has been conducted. This includes a study of the rela-
tionship between luminosity, trigger rate, energy resolution and
ADC pedestal widths. An observed fall-off in energy resolu-
tion as the experiment progressed allowed for characterization
of radiation damage sustained by the lead-glass blocks. It was
possible to mitigate this radiation damage after the experiment
by annealing, with both UV curing and heating proving effec-
tive.
We begin by discussing the various components which make
up the calorimeter and the methods used in their construction.
This is followed by a description of veto hodoscopes which
were used for particle identification purposes. An overview
of the high-voltage and data acquisition systems is then pre-
sented, followed, finally, by a discussion on the performance
of the calorimeter in the unique high-luminosity environment
during the RCS experiment.
2. Calorimeter
The concepts and technology associated with a fine-
granularity lead-glass Čerenkov electromagnetic calorimeter
(GAMS) were developed by Yu. Prokoshkin and collaborators
at the Institute of High Energy Physics (IHEP) in Serpukhov,
Russia [11]. The GAMS type concept has since been employed
for detection of high-energy electrons and photons in several
experiments at JLab, IHEP, CERN, FNAL and DESY (see for
example [12]). Many of the design features of the calorimeter
presented in this article are similar to those of Serpukhov. A
schematic showing the overall design of the RCS calorimeter
can be seen in Fig. 3. The main components are:
• the lead-glass blocks;
• a light-tight box containing the PhotoMultiplier Tubes
(PMTs);
• a gain-monitoring system;
• doubly-segmented veto hodoscopes;
• the front-end electronics;
• an elevated platform;
• a lifting frame.
The calorimeter frame hosts a matrix of 22×32 lead-glass
blocks together with their associated PMTs and High Voltage
(HV) dividers. Immediately in front of the lead-glass blocks
is a sheet of UltraViolet-Transmitting (UVT) Lucite, which is
used to distribute calibration light pulses for gain-monitoring
purposes uniformly among all 704 blocks. The light-tight box
provides protection of the PMTs from ambient light and con-
tains an air-cooling system as well as the HV and signal cable
systems. Two veto hodoscopes, operating as Čerenkov coun-
ters with UVT Lucite as a radiator, are located in front of the
calorimeter. The front-end electronics located a few feet be-
hind the detector were assembled in three relay racks. They are
comprised of 38 analog summers, trigger logic and patch pan-
els. The elevated platform was needed to bring the calorimeter
to the level of the beam line, while the lifting frame was used
to re-position the calorimeter in the experimental hall by means
of an overhead crane. This procedure, which took on average
around two hours, was performed more than 25 times during
the course of the experiment.
(Figure 3 labels: elevated platform; calorimeter; lifting frame; light-tight box; gain-monitoring system; cable route; front-end electronics; beam line; veto counters.)
Figure 3: Schematic side view of the RCS calorimeter detector system.
2.1. Calorimeter Design
The main frame of the calorimeter is made of 10 inch wide
steel C-channels. A thick flat aluminum plate was bolted to the
bottom of the frame, with a second plate installed vertically and
aligned to 90◦ with respect to the first one by means of align-
ment screws (see Fig. 4). Another set of screws, mounted inside
and at the top of the main frame on the opposite side of the ver-
tical alignment plate, was used to compress all gaps between the
lead-glass modules and to fix their positions. The load was ap-
plied to the lead-glass blocks through 1 inch× 1 inch × 0.5 inch
plastic plates and a 0.125 inch rubber pad. In order to further as-
sist block alignment, 1 inch wide stainless steel strips of 0.004
inch thickness running from top to bottom of the frame were
inserted between every two columns of the lead-glass modules.
2.1.1. Air Cooling
All PMTs and HV dividers are located inside a light-tight
box, as shown in Fig. 5. As the current on each HV divider
is 1 mA, simultaneous operation of all PMTs would, without
cooling, lead to a temperature rise inside the box of around
50-70◦C. An air-cooling system was developed to prevent the
PMTs from overheating, and to aid the stable operation of the
calorimeter. The air supply was provided by two parallel oil-
less regenerative blowers of R4110-2 type5, which are capable
5Manufactured by S&F Supplies, Brooklyn, NY 11205, USA.
(Figure 4 labels: plastic plate; alignment screw; vertical alignment plate; horizontal alignment plate; lead-glass; compacting force; main frame; SS strip; rubber.)
Figure 4: Front cross-section of the calorimeter, showing the mechanical com-
ponents.
of supplying air at a maximum pressure of 52 inches of water and
a maximum flow of 92 CFM. The air is directed toward the HV
divider via vertical collector tubes and numerous outlets. When
the value on any one of the temperature sensors installed in sev-
eral positions inside the box exceeds a preset limit, the HV on
the PMTs is turned off by an interlock system. The air line is
equipped with a flow switch of type FST-321-SPDT which was
included in the interlock system. The average temperature in-
side the box during the entire experimental run did not exceed
the preset limit of 55◦C.
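The need for forced-air cooling follows from simple arithmetic. A rough sketch, assuming all 704 dividers draw the nominal 1 mA at the 1800 V operating voltage quoted later in Section 2.2.2 (the function name is illustrative):

```python
def divider_heat_load_watts(n_channels=704, divider_current_a=1.0e-3, hv_volts=1800.0):
    """Ohmic power dissipated by all HV dividers inside the light-tight box."""
    return n_channels * divider_current_a * hv_volts

# Roughly 1.3 kW must be removed by the two blowers
# to keep the box below the 55 C interlock limit.
print(f"{divider_heat_load_watts():.0f} W")
```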
2.1.2. Cabling System
A simple and reliable cabling system is one of the key fea-
tures of multichannel detectors, with easy access to the PMTs
and HV dividers for installation and repair being one of the key
features. The cabling system includes:
• 1 foot long HV and signal pig-tails soldered to the HV
divider;
• patch panels for Lemo and HV connectors;
• 10 feet long cables from those patch panels to the front-
end electronics and the HV distribution boxes;
• the HV distribution boxes themselves;
• BNC-BNC patch panels for the outputs of the front-end
modules;
(Figure 5 labels: patch panel; air outlet; pressure gauge; flow switch; blower R4110-2; cable bundle; lead glass; PMT tube; air supply; HV and signal cables.)
Figure 5: A schematic showing the calorimeter air cooling and cabling systems.
• BNC-BNC patch panels on the DAQ side for the analog
signals;
• BNC-Lemo patch panels on the DAQ side for the veto-
counter lines.
Figure 6 shows the cabling arrangement inside the PMT box.
The patch panels, which are custom-built and mounted on the
air supply tubes, have the ability to swing to the side in order to
allow access to the PMTs and the HV dividers. The box has two
moving doors, the opening of which leads to activation of an in-
terlock system connected to the HV supply. In order to reduce
the diameter of the cable bundle from the PMT box, RG-174
cable (diameter 0.1 inch) was used for the PMT signals, and a
twisted pair for the HV connection (two individually insulated
inner 26 AWG conductors with an overall flame-retardant PVC
jacket, part number 001-21803 from the General Wire Product
company). The box patch panels used for the HV lines each
convert 24 of the above twisted pairs (single HV line) to the
multi-wire HV cables (the part 001-21798 made by General
Wire Product), which run to the HV power supply units located
in the shielded area near DAQ racks.
2.2. Lead-Glass Counter
The basic components of the segmented calorimeter are the
TF-1 lead-glass blocks and the FEU 84-3 PMTs. In the 1980s
the Yerevan Physics Institute (YerPhI) purchased a consignment
of TF-1 lead-glass blocks of 4 cm × 4 cm ×40 cm and FEU 84-3
PMTs of 34 mm diameter (with an active photo-cathode diam-
eter of 25 mm) for the construction of a calorimeter to be used
(Figure 6 labels: Lemo connector from HV divider; HV connector; patch panel; Al T-channel; HV cables; signal cables; HV and signal cables.)
Figure 6: A photograph of the cabling inside the PMT box.
in several experiments at the YerPhI synchrotron. In January
of 1998 the RCS experiment at JLab was approved and soon
after these calorimeter components were shipped from Yerevan
to JLab. This represented the YerPhI contribution to the exper-
iment, as the properties of the TF-1 lead-glass met the require-
ments of the experiment in terms of photon/electron detection
with reasonable energy and position resolution and radiation
hardness. The properties of TF-1 lead-glass [12, 13] are given
in Table 1.
Table 1: Important properties of TF-1 lead-glass.
Density: 3.86 g/cm^3
Refractive Index: 1.65
Radiation Length: 2.5 cm
Molière Radius: 3.50 cm
Critical Energy: 15 MeV
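These material constants determine how well a single 4 cm × 4 cm × 40 cm block contains a shower. A small sketch using the Table 1 values (variable names are ours):

```python
X0_CM = 2.5     # radiation length of TF-1 (Table 1)
RM_CM = 3.5     # Moliere radius of TF-1 (Table 1)
BLOCK_W_CM, BLOCK_D_CM = 4.0, 40.0   # transverse size and depth of one block

# 16 radiation lengths deep: GeV showers are longitudinally contained.
depth_in_x0 = BLOCK_D_CM / X0_CM

# A ~2 R_M wide shower core spans almost two blocks, so energy always
# leaks into neighbours; this is what makes the fine position
# reconstruction (and the multi-block trigger sums) necessary.
blocks_per_2rm = 2 * RM_CM / BLOCK_W_CM

print(depth_in_x0, blocks_per_2rm)
```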
All PMTs had to pass a performance test with the follow-
ing selection criteria: a dark current less than 30 nA, a gain of
10^6 with stable operation over the course of the experiment (2
months), a linear dependence of the PMT response (within 2 %)
on an incident optical pulse of 300 to 30000 photons. 704 PMTs
out of the 900 available were selected as a result of these per-
formance tests. Furthermore, the dimensional tolerances were
checked for all lead-glass blocks, with strict requirements de-
manded on the length (400±2 mm) and transverse dimensions
(40±0.2 mm).
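The acceptance criteria above can be written as a simple filter; this is only an illustrative sketch (the function and its argument names are ours):

```python
def pmt_accepted(dark_current_na, gain, nonlinearity_pct):
    """True if a PMT meets the selection criteria quoted in the text."""
    return (dark_current_na < 30.0        # dark current below 30 nA
            and gain >= 1.0e6             # gain of 10^6
            and nonlinearity_pct <= 2.0)  # response linear within 2%

print(pmt_accepted(12.0, 1.2e6, 1.5))  # a passing tube
print(pmt_accepted(45.0, 1.2e6, 1.5))  # rejected: dark current too high
```

Applied to the 900 available tubes, criteria of this kind selected the 704 PMTs actually installed.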
2.2.1. Design of the Counter
In designing the individual counters for the RCS calorime-
ter, much attention was paid to reliability, simplicity and the
possibility to quickly replace a PMT and/or HV divider. The
individual counter design is shown in Fig. 7. A titanium flange
is glued to one end of the lead-glass block by means of EPOXY-
190. Titanium was selected because its thermal expansion coef-
ficient is very close to that of the lead glass. The PMT housing,
which is bolted to the Ti flange, is made of an anodized Al
flange and an Al tube. The housing contains the PMT and a µ-
metal shield, the HV divider, a spring, a smaller Al tube which
transfers a force from the spring to the PMT, and a ring-shaped
spring holder. The optical contact between the PMT and the
lead-glass block is achieved by use of optical grease, type BC-
630 (Bicron), which was found to increase the amount of light
detected by the PMT by 30-40% compared to the case without
grease. The PMT is pressed to the lead-glass block by means
of a spring, which pushes the HV base with a force of 0.5-1 lbs.
Such a large force is essential for the stability of the optical
contact over time at the elevated temperature of the PMTs. The
glue-joint between the lead glass and the Ti flange, which holds
that force, failed after several months in a significant fraction
(up to 5%) of the counters. An alternative scheme of force
compensation was realized in which the force was applied to
the PMT housing from the external bars placed horizontally be-
tween the PMT housing and the patch-panel assembly. Each
individual lead-glass block was wrapped in aluminized Mylar
film and black Tedlar (a polyvinyl fluoride film from DuPont)
for optimal light collection and inter-block isolation. Single-
side aluminized Mylar film was used with the Al layer on the
opposite side of the glass. Such an orientation of the film limits
the diffusion of Al atoms into the glass and the non-oxidized
surface of aluminum, which is protected by Mylar, provides a
better reflectivity. The wrapping covers the side surface of the
lead-glass block, leaving the front face open for the gain mon-
itoring. The signal and the HV cables are each one foot long.
They are soldered to the HV divider on one end and terminated
with Lemo© 00 and circular plastic connectors (cable mount re-
ceptacle from Hypertronics) on the other end. The cables leave
the PMT housing through the open center of the spring holder.
2.2.2. HV Divider
At the full luminosity of the RCS experiment (0.5 ×
10^39 cm^-2 s^-1) and at a distance of 6 m from the target the back-
ground energy load per lead-glass block reaches a level of
10^8 MeVee (electron equivalent) per second, which was found
from the average value of anode current in the PMTs and the
shift of the ADC pedestals for a 150 ns gate width. At least
30% of this energy flux is due to high energy particles which
define the counting rate. The average energy of the signals for
that component, according to the observed rate distribution, is
in the range of 100-300 MeVee, depending on the beam en-
ergy and the detector angle. The corresponding charge in the
PMT pulse is around 5-15 pC collected in 10-20 ns. The elec-
tronic scheme and the selected scale of 1 MeV per ADC chan-
nel (50 fC) resulted in an average anode current of 5 µA due to
background load. A high-current HV base (1 mA) was there-
fore chosen to reduce the effect of the beam intensity variation
on the PMT amplitude and the corresponding energy resolu-
tion to the level of 1%. The scheme of the HV base is shown
in Fig. 8. According to the specification data for the FEU 84-
3 PMTs the maximum operation voltage is 1900 V. Therefore
a nominal voltage value of 1800 V and a current value in the
voltage divider of 1 mA were chosen.
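The quoted numbers are mutually consistent, as a quick sketch shows (it assumes the 1 MeV per channel, 50 fC per channel ADC scale and the 10^8 MeVee/s background load stated above; the function names are ours):

```python
COULOMB_PER_MEV = 50e-15   # ADC scale: 1 MeV per channel, 50 fC per channel

def anode_current_amps(load_mevee_per_s=1e8):
    """Average anode current implied by the background energy load."""
    return load_mevee_per_s * COULOMB_PER_MEV

def pedestal_shift_mev(load_mevee_per_s=1e8, gate_s=150e-9):
    """Average pile-up energy inside the ADC gate (shifts the pedestal)."""
    return load_mevee_per_s * gate_s

print(anode_current_amps())   # about 5e-06 A, the 5 uA quoted in the text
print(pedestal_shift_mev())   # about 15 MeV average pile-up in the 150 ns gate
```

A 5 µA average anode current is only 0.5% of the 1 mA divider current, which is how the high-current base keeps gain shifts from beam-intensity variation at the 1% level.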
(Figure 7 labels: lead-glass; Ti flange; glue joint; optical grease; FEU 84-3; µ-metal; Al flange; Al tube; housing; spring; spring holder; signal; Al mylar; Tedlar.)
Figure 7: Schematic of the lead-glass module structure.
(Figure 9 labels: calorimeter platform: LG PMT signals, 600 central channels; LG PMT signals, 104 rim channels; Sum-8; LRS428F; PS706D; PS755; PS757; SUM-32 analog signals, 56 channels; SUM-32 logic signals, 56 channels; 100 m cable lines; DAQ platform: Fastbus ADCs, Fastbus TDC, VME scalers, LRS3412; photon trigger to Trigger Supervisor.)
Figure 9: A block diagram of the calorimeter electronics.
(Figure 8 labels: resistors R1-R12; diodes D9, D10.)
Figure 8: Schematic of the high-voltage divider for the FEU 84-3 PMT. The
values of the resistors are R(1 − 10) = 100 kΩ, R11 = 130 kΩ,R12 =
150 kΩ, R13 = 200 kΩ, R14 = 150 kΩ, R15 = 10 kΩ,R16 = 10 MΩ, R17 =
4 kΩ. The capacitance C is 10 nF.
2.3. Electronics
The calorimeter electronics were distributed over two loca-
tions; see the block diagram in Fig. 9. The first group of
modules (front-end) is located in three racks mounted on the
calorimeter platform in close vicinity to the lead-glass blocks.
These are the trigger electronics modules which included a mix
of custom-built and commercially available NIM units:
• 38 custom-built analog summing modules used for level-
one signal summing 6;
• 14 linear fan-in/fan-out modules (LeCroy model 428F) for
a second-level signal summation;
• 4 discriminator units (Phillips Scientific model 706);
• a master OR circuit, realized with Phillips Scientific logic
units (four model 755 and one model 757 modules);
• several additional NIM modules used to provide auxiliary
trigger signals for the calorimeter calibration with cosmics
and for the PMT gain-monitoring system.
The second group of electronic modules, which include
charge and time digitizers as well as equipment for the Data
Acquisition, High Voltage supply and slow-control systems, is
6This module was designed by S. Sherman, Rutgers University.
placed behind a radiation-protecting concrete wall. All 704
lead-glass PMT signals and 56 SUM-32 signals are digitized by
LeCroy 1881M FastBus ADC modules. In addition, 56 SUM-
32 discriminator pulses are directed to scalers and to LeCroy
1877 FastBus TDCs. Further detailed information about the
electronics is presented in Section 5.
The signals between these locations are transmitted via
patch-panels and coaxial cables, consisting of a total number
of 1040 signal and 920 HV lines. The length of the signal ca-
bles is about 100 m, which serve as delay lines allowing the
timing of the signals at the ADC inputs to be properly set with
respect to the ADC gate, formed by the experiment trigger. The
width of the ADC gate (150 ns) was made much wider than
the duration of the PMT pulse in order to accommodate the wider
pulses caused by propagation in the 500 ns delay RG-58 signal
cables. The cables are placed on a chain of bogies, which per-
mits the calorimeter platform to be moved in the experimental
hall without disconnecting the cables. This helped allow for a
quick change of kinematics.
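The 100 m runs double as analog delay lines. A back-of-envelope check (the 0.66 velocity factor typical of solid-dielectric RG-58 is our assumption; the text itself only quotes the resulting 500 ns delay):

```python
C_M_PER_NS = 0.2998      # speed of light in m/ns
VELOCITY_FACTOR = 0.66   # typical for solid-dielectric RG-58 (assumed)

def cable_delay_ns(length_m=100.0):
    """Propagation delay of a PMT pulse through the coaxial run to the ADCs."""
    return length_m / (C_M_PER_NS * VELOCITY_FACTOR)

print(f"{cable_delay_ns():.0f} ns")  # ~505 ns, consistent with the quoted delay
```

That delay is what allows the trigger decision to be formed before the undigitized analog pulses reach the ADC gate.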
2.3.1. Trigger Scheme
The fast on-line photon trigger is based on PMT signals from
the calorimeter counters. The principle of its operation is a sim-
ple constant-threshold method, in which a logic pulse is pro-
duced if the energy deposition in the calorimeter is above a
given magnitude. Since the Molière radius of the calorimeter
material is RM ≈ 3.5 cm, the transverse size of the electro-
magnetic shower in the calorimeter exceeds the size of a single
lead-glass block. This enables a good position sensitivity of
the device, while at the same time making it mandatory for the
trigger scheme to sum up signals from several adjacent coun-
ters to get a signal proportional to the energy deposited in the
calorimeter.
From an electronics point of view, the simplest realization of
such a trigger would be a summation of all blocks followed by
a single discriminator. However, such a design is inappropri-
ate for a high-luminosity experiment due to the very high back-
ground level. The opposing extreme approach would be to form
a summing signal for a small group including a single counter
hit and its 8 adjacent counters, thus forming a 3 × 3 block struc-
ture. This would have to be done for every lead-glass block, ex-
cept for those at the calorimeter’s edges, leading to an optimal
signal-to-background ratio, but an impractical 600 channels of
analog splitter→analog summer→discriminator circuitry fol-
lowed by a 600-input fan-in module. The trigger scheme that
was adopted and is shown in Fig. 10 is a trade-off between the
above extreme cases. This scheme contains two levels of ana-
log summation followed by appropriate discriminators and an
OR-circuit. It involved the following functions:
• the signals from each PMT in the 75 2×4 sub-arrays of ad-
jacent lead-glass blocks, excluding the outer-most blocks,
are summed in a custom-made analog summing module
to give a SUM-8 signal (this module duplicates the signals
from the PMTs with less than 1% integral nonlinearity);
• these signals, in turn, are further summed in overlapping
groups of four in LeCroy LRS428F NIM modules to pro-
duce 56 SUM-32 signals. Thus, each SUM-32 signal is
proportional to the energy deposition in a subsection of the
calorimeter of 4 blocks high and 8 blocks wide, i.e. 16×32
cm2. Although this amounts to only 5% of the calorime-
ter acceptance, for any photon hit (except for those at the
edges) there will be at least one segment which contains
the whole electromagnetic shower.
• the SUM-32 signals are sent to constant-threshold discrim-
inators, from which the logical pulses are OR-ed to form
the photon singles trigger T1 (see Section 5). The discrim-
inator threshold is remotely adjustable, and was typically
set to around half of the RCS photon energy for a given
kinematic setting.
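The counting in the scheme above can be reproduced directly. A sketch (the 4-wide by 2-high orientation of the "2×4" sub-arrays is our assumption; it is the one that reproduces both quoted counts):

```python
# 22 x 32 block matrix; the outermost blocks are excluded from the trigger,
# leaving a 20-wide x 30-high core summed into SUM-8 sub-arrays.
SUB_COLS = 20 // 4   # sub-arrays assumed 4 blocks wide
SUB_ROWS = 30 // 2   # and 2 blocks high

sum8 = [(c, r) for c in range(SUB_COLS) for r in range(SUB_ROWS)]

# Each SUM-32 sums a 2 x 2 group of adjacent sub-arrays (8 blocks wide,
# 4 blocks high, i.e. 16 x 32 cm^2); neighbouring groups overlap by one
# sub-array in each direction.
sum32 = [(c, r) for c in range(SUB_COLS - 1) for r in range(SUB_ROWS - 1)]

print(len(sum8), len(sum32))  # 75 sub-arrays and 56 SUM-32 signals
```

The overlap guarantees that any shower away from the calorimeter edge is fully contained in at least one SUM-32 segment.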
(Figure 10 labels: block numbers 01-75 marking the eight-block sub-arrays; SUM-32 sums S01-S56.)
Figure 10: The principle of two-level summation of signals for the hardware
trigger: 75 eight-block sub-arrays and 56 overlapping groups of four sub-arrays
forming SUM-32 signals labeled as S01-S56. In the highlighted example the
sums 02,03,07, and 08 form a S02 signal.
2.4. Gain Monitoring System
The detector is equipped with a system that distributes light
pulses to each calorimeter module. The main purpose of this
system is to provide a quick way to check the detector opera-
tion and to calibrate the dependence of the signal amplitudes on
the applied HV. The detector response to photons of a given en-
ergy may drift with time, due to drifts in the PMT gains and to
changes in the glass transparency caused by radiation damage.
For this reason, the gain monitoring system also allowed mea-
surements of the relative gains of all detector channels during
the experiment. In designing the gain-monitoring system, ideas
developed for a large lead-glass calorimeter at BNL [14] were
used.
The system includes two components: a stable light source
and a system to distribute the light to all calorimeter modules.
The light source consists of an LN300 nitrogen laser7, which
provides 5 ns long, 300 µJ ultraviolet light pulses of 337 nm
7 Manufactured by Laser Photonics, Inc, FL 32826, USA.
wavelength. The light pulse coming out of the laser is atten-
uated, typically by two orders of magnitude, and monitored
using a silicon photo-diode S1226-18BQ8 mounted at 150◦ to
the laser beam. The light passes through an optical filter, sev-
eral of which of varying densities are mounted on a remotely
controlled wheel with lenses, before arriving at a wavelength
shifter. The wavelength shifter used is a 1 inch diameter semi-
spherical piece of plastic scintillator, in which the ultraviolet
light is fully absorbed and converted to a blue (∼ 425 nm) light
pulse, radiated isotropically. Surrounding the scintillator about
40 plastic fibers (2 mm thick and 4 m long) are arranged, in or-
der to transport the light to the sides of a Lucite plate. This plate
is mounted adjacent to the front face of the lead-glass calorime-
ter and covers its full aperture (see Fig. 11). The light passes
through the length of the plate, causing it to glow due to light
scattering in the Lucite. Finally, in order to eliminate the cross-
talk between adjacent counters a mask is inserted between the
Lucite plate and the detector face. This mask, which reduces
the cross-talk by at least a factor of 100, is built of 12.7 mm
thick black plastic and contains a 2 cm × 2 cm hole in front of
each module.
Light distributor
Optic fibers
Lead−glass modules
Light source
Figure 11: Schematic of the Gain-monitoring system.
Such a system was found to provide a rather uniform light
collection for all modules, and proved useful for detector test-
ing and tuning, as well as for troubleshooting during the exper-
iment. However, it was found that monitoring over extended
periods of time proved to be less informative than first thought.
The reason for this is due to the fact that the main radiation dam-
age to the lead-glass blocks occurred at a depth of about 2-4 cm
8Manufactured by Hamamatsu Photonics, Hamamatsu, Japan.
from the front face. The monitoring light passes through the
damaged area, while an electromagnetic shower has its maxi-
mum at a depth of about 10 cm. As a result of this radiation
damage, the magnitude of the monitoring signals therefore drops
more quickly than that of the real signals. Consequently, the
change in light-output during the experiment was char-
acterized primarily through online analysis of dedicated elastic
e-p scattering runs. This data was then used for periodic re-
calibration of the individual calorimeter gains.
3. Veto Hodoscopes
In order to ensure clean identification of the scattered pho-
tons through rejection of high-energy electrons in the compli-
cated environment created by the mixed electron-photon beam,
a veto detector which utilizes UVT Lucite as a Čerenkov radi-
ator was developed. This veto detector proved particularly use-
ful for low luminosity runs, where its use made it possible to
take data without relying on the deflection magnet (see Fig. 1).
The veto detector consists of two separate hodoscopes located
in front of the calorimeter’s gain monitoring system. The first
hodoscope has 80 counters oriented vertically, while the second
has 110 counters oriented horizontally as shown in Fig. 12. The
segmentation scheme for the veto detector was chosen so that
it was consistent with the position resolution of the lead-glass
calorimeter. The effective dead time of an individual counter is
about 100 ns, due to the combined double-pulse resolution of the
PMT, the front-end electronics, the TDC, and the ADC gate-
width.
Figure 12: Cut-off view of the “horizontal” veto hodoscope.
Each counter is made of a UVT Lucite bar with a PMT glued
directly to one of its ends, as can be seen in Fig. 13. The Lu-
cite bar of 2×2 cm2 cross section was glued to a XP2971 PMT
and wrapped in aluminized Mylar and black Tedlar. Counters
are mounted on a light honeycomb plate via an alignment
groove and fixed by tape. The counters are staggered so that
the PMTs and the counters can overlap.
The average PMT pulse generated by a high-energy electron
corresponds to 20 photo-electrons. An amplifier, powered by
the HV line current, was added to the standard HV divider, in
order that the PMT gain could be reduced by a factor of 10
[15, 16].

Figure 13: Schematic of the veto counter.

After gain-matching by using cosmic ray data, a good
rate uniformity was achieved, as can be seen in the experimental
rate distribution of the counters shown in Fig. 14. The regular
variation in this distribution reflects the shielding effect result-
ing from the staggered arrangement of the counters.

Figure 14: The counting rate in the veto counters observed at a luminosity of 1.5 · 10^38 cm^-2/s.

A significant reduction of the rate (by a factor of 5) was achieved by
adding a 2 inch polyethylene plate in front of the hodoscopes.
Such a reduction as a result of this additional shielding is con-
sistent with the observed variation of the rate (see Fig. 14) and
indicates that the typical energy of the dominant background
is around a few MeV. The veto plane efficiency measured for
different beam intensities is shown in Table 2. It drops signif-
icantly at high rate due to electronic dead-time, which limited
the beam intensity to 3-5 µA in data-taking runs with the veto.
An analysis of the experimental data with and without veto de-
tectors showed that the deflection of the electrons by the magnet
Table 2: The efficiency of the veto hodoscopes and the rate of a single counter
at different beam currents. The detector was installed at 30◦ with respect to the
beam at a distance 13 m from the target. The radiator had been removed from
the beam path, the deflection magnet was off and the 2 inch thick polyethylene
protection plate was installed.
Run Beam Rate of the Efficiency Efficiency
current counter V12 horizontal vertical
[µA] [MHz] hodoscope hodoscope
1811 2.5 0.5 96.5% 96.8%
1813 5.0 1.0 95.9% 95.0%
1814 7.5 1.5 95.0% 94.0%
1815 10. 1.9 94.4% 93.0%
1816 14. 2.5 93.4% 91.0%
1817 19 3.2 92.2% 89.3%
provided a sufficiently clean photon event sample. As a result
the veto hodoscopes were switched off during most high lu-
minosity data-taking runs, although they proved important in
analysis of low luminosity runs and in understanding various
aspects of the experiment.
4. High Voltage System
Each PMT high-voltage supply was individually monitored
and controlled by the High Voltage System (HVS). The HVS
consists of six power supply crates of LeCroy type 1458 with
high-voltage modules of type 1461N, a cable system, and a set
of software programs. The latter allows the user to control, monitor,
download and save the high-voltage settings, and is described
below in more detail. Automatic HV monitoring provides an
alarm feature with a verbal announcement and a flashing signal
on the terminal. The controls are implemented over an Ethernet
network using TCP/IP protocol. A Graphical User Interface
(GUI) running on a Linux PC provides access to all features
of the LeCroy system, loading the settings and saving them in
a file. A sample distribution of the HV settings is shown in
Fig. 15.
The connections between the outputs of the high-voltage
modules and the PMT dividers were arranged using 100 m long
multi-wire cables. The transition from the individual HV sup-
ply outputs to a multi-wire cable and back to the individual
PMT was arranged via high-voltage distribution boxes that are
located inside the DAQ area and front-end patch panels outside
the PMT box. These boxes have input connectors for individual
channels on one side and two high-voltage multi-pin connec-
tors (27 pins from FISCHER part number D107 A051-27) on
the other. High-voltage distribution boxes were mounted on the
side of the calorimeter stand and on the electronics rack.
5. Data Acquisition System
Since the calorimeter was intended to be used in Hall A
at JLab together with the standard Hall A detector devices,
the Data Acquisition System of the calorimeter is part of the
standard Hall A DAQ system. The latter uses CODA (CE-
BAF On-line Data Acquisition system) [17] developed by the
JLab data-acquisition group.

Figure 15: The HV settings for the calorimeter PMTs at the start and at the end of the experiment.

The calorimeter DAQ includes
one Fastbus crate with a single-board VME computer installed
using a VME-Fastbus interface and a trigger supervisor mod-
ule [18], which synchronizes the read-out of all the information
in a given event. The most important software components are
a Read-Out Controller (ROC), which runs on the VME com-
puter under the VxWorks OS, and an Event Builder and Event
Recorder which both run on a Linux workstation. For a detailed
description of the design and operation of the Hall A DAQ sys-
tem see [8] and references therein.
All 704 PMT signals and 56 SUM-32 signals are digitized by
LeCroy 1881M FastBus ADC modules. The 56 SUM-32 dis-
criminator pulses are also read-out by scalers and LeCroy 1877
FastBus TDCs. During the RCS experiment the calorimeter
was operating in conjunction with one of the High Resolution
Spectrometers (HRS), which belong to the standard Hall A de-
tector equipment [8]. The Hall A Data Acquisition System is
able to accumulate data involving several event types simulta-
neously. In the RCS experiment there were 8 types of trigger
signals and corresponding event types. Trigger signals from the
HRS are generated by three scintillator planes: S0, S1 and S2
(see Fig. 8 in [8]). In the standard configuration the main sin-
gle arm trigger in the spectrometer is formed by a coincidence
of signals from S1 and S2. An alternative trigger, logically de-
scribed by (S0 AND S1) OR (S0 AND S2), is used to measure
the trigger efficiency. In the RCS experiment one more proton
arm trigger was used, defined as being a single hit in the S0
plane. As this is the fastest signal produced in the proton arm,
it was better suited to form a fast coincidence trigger with the
photon calorimeter.
The logic of the Photon Arm singles trigger was described in
detail in Section 2.3. Besides this singles trigger there are two
auxiliary triggers that serve to monitor the calorimeter blocks
and electronics. The first is a photon arm cosmics trigger, which
was defined by a coincidence between signals from two plas-
tic scintillator paddles, placed on top and under the bottom of
the calorimeter.

Figure 16: Schematic diagram of the DAQ trigger logic.

The other trigger is the light-calibration (laser)
trigger which was used for gain monitoring purposes.
The two-arm coincidence trigger is formed by a time overlap
of the main calorimeter trigger and the signal from the S0 scin-
tillator plane in the HRS. The width of the proton trigger pulse
is set to 100 ns, while the photon trigger pulse, which is delayed
in a programmable delay line, is set to 10 ns. As a result, the co-
incidence events are synchronized with the photon trigger, and
a correct timing relation between trigger signals from two arms
is maintained for all 25 kinematic configurations of the RCS
experiment. Finally, a 1024 Hz pulse generator signal forms a
pulser trigger, which was used to measure the dead time of the
electronics.
All 8 trigger signals are sent to the Trigger Supervisor mod-
ule which starts the DAQ readout. Most inputs of the Trigger
Supervisor can be individually pre-scaled. Triggers which are
accepted by the DAQ are then re-timed with the scintillators of
a corresponding arm to make gates for ADCs and TDCs. This
re-timing removes trigger time jitter and ensures the timing is
independent of the trigger type. Table 3 includes information
on the trigger and event types used in the RCS experiment and
shows typical pre-scale factors used during the data-taking. A
schematic diagram of the overall RCS experiment DAQ trigger
logic is shown in Fig.16.
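The effect of pre-scaling on the recorded event mix can be illustrated with a short sketch; `recorded` is a hypothetical model of the Trigger Supervisor's behaviour, not actual DAQ code:

```python
def recorded(trigger_counts, prescale):
    """Events passed to the DAQ when only every `prescale`-th trigger is
    accepted (a hypothetical model of Trigger Supervisor pre-scaling)."""
    return trigger_counts // prescale

# With the run-1819 factors of Table 3: 10^7 photon-arm singles (T1,
# pre-scale 100,000) give only 100 recorded singles events, while every
# coincidence trigger (T5, pre-scale 1) is recorded.
print(recorded(10_000_000, 100_000), recorded(5_000, 1))  # 100 5000
```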
6. Calorimeter Performance
The calorimeter used in the RCS experiment had three related
purposes. The first purpose is to provide a coincidence trigger
Table 3: A list of triggers used in the RCS experiment. Typical pre-scale factors
which were set during a data-taking run (run #1819) are shown.
Trigger Trigger Description pre-scale
ID factor
T1 Photon arm singles trigger 100,000
T2 Photon arm cosmics trigger 100,000
T3 Main Proton arm trigger:
(S1 AND S2)
T4 Additional Proton arm trigger:
(S0 AND S1) OR (S0 AND S2)
T5 Coincidence trigger 1
T6 Calorimeter light-calibration
trigger
T7 Signal from the HRS S0 scintillator plane 65,000
T8 1024 Hz pulser trigger 1,024
Figure 17: The time of the calorimeter trigger relative to the recoil proton trig-
ger for a production run in kinematic 3E at maximum luminosity (detected
Eγ = 1.31 GeV). The solid curve shows all events, while the dashed curve
shows events with a cut on energy in the most energetic cluster > 1.0 GeV.
signal for operation of the DAQ. Fig. 17 shows the coincidence
time distribution, where one can see a clear relation between en-
ergy threshold and time resolution. The observed resolution of
around 8 ns (FWHM) was sufficient to cleanly identify coinci-
dence events over the background, which meant that no off-line
corrections were applied for variation of the average time of in-
dividual SUM-32 summing modules. The second purpose
is determination of the energy of the scattered photon/electron
to within an accuracy of a few percent, while the third is rea-
sonably accurate reconstruction of the photon/electron hit co-
ordinates in order that kinematic correlation cuts between the
scattered photon/electron and the recoil proton can be made.
The off-line analysis procedure and the observed position and
energy resolutions are presented and discussed in the following
two sections.
6.1. Shower Reconstruction Analysis and Position Resolution
The off-line shower reconstruction involves a search for clus-
ters and can be characterized by the following definitions:
1. a cluster is a group of adjacent blocks;
2. a cluster occupies 9 (3 × 3) blocks of the calorimeter;
3. the distribution of the shower energy deposition over the
cluster blocks (the so-called shower profile) satisfies the
following conditions:
(a) the maximum energy deposition is in the central
block;
(b) the energy deposition in the corner blocks is less than
that in each of two neighboring blocks;
(c) around 50% of the total shower energy must be de-
posited in the central row (and column) of the cluster.
For an example in which the shower center is in the middle
of the central block, around 84% of the total shower energy
is in the central block, about 14% is in the four neighboring
blocks, and the remaining 2% is in the corner blocks. Even at
the largest luminosity used in the RCS experiment the proba-
bility of observing two clusters with energies above 50% of the
elastic value was less than 10%, so for the 704 block hodoscope
a two-cluster overlap was very unlikely.
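For illustration, the shower-profile conditions above can be sketched in code; `passes_shower_profile` is a hypothetical helper acting on a 3 × 3 array of block energies, and the "around 50%" requirement of condition 3(c) is implemented here as a simple ≥ 0.5 cut:

```python
import numpy as np

def passes_shower_profile(cluster):
    """Check the shower-profile conditions on a 3x3 array of block
    energies (hypothetical helper; indices [row][col], centre at [1][1])."""
    c = np.asarray(cluster, dtype=float)
    total = c.sum()
    # (a) maximum energy deposition in the central block
    if c[1, 1] != c.max():
        return False
    # (b) each corner block carries less energy than each of its
    #     two edge neighbours
    corners = {(0, 0): [(0, 1), (1, 0)], (0, 2): [(0, 1), (1, 2)],
               (2, 0): [(2, 1), (1, 0)], (2, 2): [(2, 1), (1, 2)]}
    for (r, k), neighbours in corners.items():
        if any(c[r, k] >= c[nr, nk] for nr, nk in neighbours):
            return False
    # (c) around half of the shower energy in the central row and column
    if c[1, :].sum() < 0.5 * total or c[:, 1].sum() < 0.5 * total:
        return False
    return True

# Idealized central hit: ~84% centre, ~14% in edge neighbours, ~2% corners
example = [[0.005, 0.035, 0.005],
           [0.035, 0.840, 0.035],
           [0.005, 0.035, 0.005]]
print(passes_shower_profile(example))  # True
```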
The shower energy reconstruction requires both hardware
and software calibration of the calorimeter channels. On the
hardware side, equalization of the counter gains was initially
done with cosmic muons, which produce 20 MeV energy equiv-
alent light output per 4 cm path (muon trajectories perpendic-
ular to the long axis of the lead-glass blocks). The calibration
was done by selecting cosmic events for which the signals in
both counters above and below a given counter were large. The
final adjustment of each counter’s gain was done by using cali-
bration with elastic e-p events. This calibration provided PMT
gain values which on average differed from the initial cosmic-
ray settings by 20%.
The purpose of the software calibration is to define the co-
efficients for transformation of the ADC amplitudes to energy
deposition for each calorimeter module. These calibration co-
efficients are obtained from elastic e-p data by minimizing the
function:
χ² = Σ_{n=1}^{N} [ Σ_{i∈M_n} C_i · (A_i^n − P_i) − E_e^n ]²
where:
n = 1 … N — index of the selected calibration event;
i — index of a block included in the cluster;
M_n — set of block indices in the cluster;
A_i^n — amplitude in the i-th block;
P_i — pedestal of the i-th block;
E_e^n — known energy of the electron;
C_i — calibration coefficients, which need to be fitted.
The scattered electron energy Ene is calculated by using the en-
ergy of the primary electron beam and the scattered electron
angle. A cut on the proton momentum-angle correlation is used
to select clean elastic events.
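Since the function being minimized is quadratic in the coefficients C_i, the fit reduces to linear least squares. The following is a minimal sketch of that idea with toy data (the coefficient values and event counts are invented; the real analysis restricts the inner sum to the cluster blocks M_n, whereas here every block is used):

```python
import numpy as np

def calibrate(amplitudes, pedestals, energies):
    """Fit calibration coefficients C_i by minimizing
    sum_n ( sum_i C_i * (A_i^n - P_i) - E_e^n )^2,
    which is linear in C_i and so reduces to linear least squares."""
    D = np.asarray(amplitudes, float) - np.asarray(pedestals, float)
    C, *_ = np.linalg.lstsq(D, np.asarray(energies, float), rcond=None)
    return C

# Toy check: two blocks with made-up "true" coefficients
rng = np.random.default_rng(0)
true_C = np.array([0.004, 0.005])               # GeV per ADC channel (invented)
ped = np.array([20.0, 30.0])                    # pedestals
A = rng.uniform(100.0, 900.0, size=(50, 2)) + ped
E = (A - ped) @ true_C                          # exact "elastic" energies
print(np.allclose(calibrate(A, ped, E), true_C))  # True
```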
Following calculation of the calibration coefficients, the total
energy deposition E, as well as the X and Y coordinates of the
shower center of gravity are calculated by the formulae:
E = Σ_{i∈M} E_i ,   X = Σ_{i∈M} E_i · X_i / E ,   Y = Σ_{i∈M} E_i · Y_i / E
where M is the set of blocks numbers which make up the clus-
ter, Ei is the energy deposition in the i-th block, and Xi and Yi
are the coordinates of the i-th block center. The coordinates cal-
culated by this simple center of gravity method are then used for
a more accurate determination of the incident hit position. This
second iteration was developed during the second test run [7],
in which a two-layer MWPC was constructed and positioned
directly in front of the calorimeter. This chamber had 128 sen-
sitive wires in both X and Y directions, with a wire spacing of
2 mm and a position resolution of 1 mm. In this more refined
procedure, the coordinate xo of the shower center of gravity in-
side the cell (relative to the cell’s low boundary) is used. An
estimate of the coordinate xe can be determined from a polyno-
mial in this coordinate (P(xo)):
x_e = P(x_o) = a_1 · x_o + a_3 · x_o^3 + a_5 · x_o^5 + a_7 · x_o^7 + a_9 · x_o^9
For symmetry reasons, only odd degrees of the polynomial are
used. The coefficients an are calculated by minimizing the func-
tional:
Σ_{i=1}^{N} [ P(a_n, x_o^i) − x_t^i ]²
where:
i = 1 … N — event index;
x_o^i — coordinate of the shower center of gravity inside the cell;
x_t^i — coordinate of the track (MWPC) on the calorimeter plane;
a_n — coordinate transformation coefficients to be fitted.
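The two-step position reconstruction described above can be sketched as follows; the three-block cluster and the polynomial coefficients are placeholders for illustration, not values from the actual fit:

```python
import numpy as np

def cluster_position(energies, x_centers, y_centers):
    """Centre of gravity of a cluster:
    E = sum E_i,  X = sum E_i*X_i / E,  Y = sum E_i*Y_i / E."""
    E = float(np.sum(energies))
    X = float(np.dot(energies, x_centers)) / E
    Y = float(np.dot(energies, y_centers)) / E
    return E, X, Y

def corrected_coordinate(x_o, a):
    """Second-iteration correction x_e = P(x_o) using only odd powers;
    the coefficients a = (a1, a3, a5, a7, a9) would come from the MWPC
    fit described in the text (placeholders here, not the real fit)."""
    return sum(c * x_o ** k for c, k in zip(a, (1, 3, 5, 7, 9)))

# Symmetric 3-block toy cluster (energies in GeV, positions in cm):
E, X, Y = cluster_position([0.84, 0.07, 0.07], [0.0, -4.0, 4.0], [0.0, 0.0, 0.0])
print(round(E, 2), X, Y)   # symmetric deposit -> centre of gravity at 0
# With a1 = 1 and higher coefficients 0 the correction is the identity:
print(corrected_coordinate(0.5, (1.0, 0.0, 0.0, 0.0, 0.0)))  # 0.5
```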
The resulting resolution obtained from such a fitting proce-
dure was found to be around 5.5 mm for a scattered electron
energy of 2.3 GeV. For the case of production data, where the
MWPC was not used, Fig. 18 shows a scatter plot of events
on the front face of the calorimeter. The parameter plotted
is the differences between the observed hit coordinates in the
calorimeter and the coordinates calculated from the proton pa-
rameters and an assumed two-body kinematic correlation. The
dominant contribution to the widths of the RCS and e-p peaks
that can be seen in this figure is from the angular resolution of
the detected proton, which is itself dominated by multiple scat-
tering. As the calorimeter distance varied during the experiment
between 5.5 m and 20 m, the contribution to the combined an-
gular resolution from the calorimeter position resolution of a
few millimeters was minimal.
6.2. Trigger Rate and Energy Resolution
At high luminosity, when a reduction of the accidental co-
incidences in the raw trigger rate is very important, the trigger
threshold should be set as close to the signal amplitude for elas-
tic RCS photons as practical. However, the actual value of the
threshold for an individual event has a significant uncertainty
due to pile-up of the low-amplitude signals, fluctuations of the
signal shape (mainly due to summing of the signals from the
PMTs with different HV and transit time), and inequality of the
gain in the individual counters.

Figure 18: Scatter plot of p − γ(e) events in the plane of the calorimeter front face; the axes are δx and δy (cm), and the visible event loci correspond to the p(γ, γ'p), p(e, e'p), and p(γ, π0 p) reactions.

Too high a threshold, therefore,
can lead to a loss in detection efficiency.
The counting rate of the calorimeter trigger, f , which defines
a practical level of operational luminosity has an exponential
dependence on the threshold, as can be seen in Fig. 19. It can
be described by a function of Ethr:
f = A × exp(−B × E_thr/E_max),
where Emax is the maximum energy of an elastically scat-
tered photon/electron for a given scattering angle, A an angle-
dependent constant, and B a universal constant ≈ 9 ± 1.

Figure 19: Calorimeter trigger rate vs threshold level.

The angular variation of the constant A, after normalization to a fixed
luminosity and the calorimeter solid angle, is less than a factor
of 2 for the RCS kinematics. The threshold for all kinemat-
ics was chosen to be around half of the elastic energy, thereby
balancing the need for a low trigger rate without affecting the
detection efficiency.
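The strong sensitivity of the rate to the threshold choice follows directly from the exponential form above; a minimal numerical sketch (the normalization A is an assumed placeholder, since only its angular variation is quoted in the text):

```python
import math

def trigger_rate(E_thr, E_max, A, B=9.0):
    """Trigger rate f = A * exp(-B * E_thr / E_max), with the universal
    constant B ≈ 9 ± 1; the normalization A is angle-dependent and the
    value used below is purely illustrative."""
    return A * math.exp(-B * E_thr / E_max)

# Moving the threshold from 0.5*E_max to 0.6*E_max suppresses the rate
# by exp(-0.9), i.e. roughly a factor of 2.5:
ratio = trigger_rate(0.6, 1.0, A=1.0e6) / trigger_rate(0.5, 1.0, A=1.0e6)
print(round(1.0 / ratio, 1))  # 2.5
```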
In order to ensure proper operation and to monitor the perfor-
mance of each counter the widths of the ADC pedestals were
used (see Fig. 20). One can see that these widths vary slightly
with block number, which reflects the position of the block in
the calorimeter and its angle with respect to the beam direction.
This pedestal width also allows for an estimate of the contri-
bution of the background-induced base-line fluctuations to the
overall energy resolution. For the example shown in Fig. 20 the
width of 6 MeV per block leads to energy spectrum noise of
about 20 MeV because a 9-block cluster is used in the off-line
analysis.
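The quoted ~20 MeV cluster-level noise follows from combining the per-block pedestal widths; a one-line check, assuming the base-line fluctuations of the nine cluster blocks are uncorrelated and therefore add in quadrature:

```python
import math

# Uncorrelated base-line fluctuations add in quadrature over the
# 9 blocks of an off-line cluster: sigma_cluster = sqrt(9) * sigma_block.
sigma_block = 6.0                            # MeV per block (Fig. 20 example)
sigma_cluster = math.sqrt(9) * sigma_block
print(sigma_cluster)  # 18.0 -> about 20 MeV, as quoted in the text
```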
Figure 20: The width of the ADC pedestals for the calorimeter in a typical run.
The observed reduction of the width vs the block number reflects the lower
background at larger detector angle with respect to the beam direction.
The energy resolution of the calorimeter was measured by
using elastic e-p scattering. Such data were collected many
times during the experiment for kinematic checks and calorime-
ter gain calibration. Table 4 presents the observed resolution
and the corresponding ADC pedestal widths over the course of
the experiment. For completeness, the pedestal widths for cos-
mic and production data are also included. At high luminosity
the energy resolution degrades due to fluctuations of the base
line (pedestal width) and the inclusion of more accidental hits
during the ADC gate period. However, for the 9-block cluster
size used in the data analysis the contribution of the base line
fluctuations to the energy resolution is just 1-2%. The measured
widths of ADC pedestals confirmed the results of Monte Carlo
simulations and test runs that the radiation background is three
times higher with the 6% Cu radiator upstream of the target than
without it.
The resolution obtained from e-p calibration runs was cor-
rected for the drift of the gains so it could be attributed directly
to the effect of lead glass radiation damage. It degraded over
the course of the experiment from 5.5% (for a 1 GeV photon
energy) at the start to more than 10% by the end. It was es-
timated that this corresponds to a final accumulated radiation
dose of about 3-10 kRad, which is in agreement with the known
level of radiation hardness of the TF-1 lead glass [19]. This
observed radiation dose corresponds to a 500 hour experiment
with a 15 cm LH2 target and 50 µA beam.
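The conversion of a resolution measured at one energy to the quoted value at 1 GeV can be sketched as follows, assuming a purely stochastic σ_E/E ∝ 1/√E behaviour; this scaling is an assumption, but it approximately reproduces the two resolution columns of Table 4:

```python
import math

def sigma_at_1gev(sigma_over_E_percent, E_gev):
    """Scale a measured relative resolution (in %) to E = 1 GeV under the
    stochastic-term assumption sigma_E/E ~ 1/sqrt(E)."""
    return sigma_over_E_percent * math.sqrt(E_gev)

# Run 1811 of Table 4: 4.2% measured at 2.78 GeV -> 7.0% at 1 GeV
print(round(sigma_at_1gev(4.2, 2.78), 1))  # 7.0
```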
6.3. Annealing of the radiation damage
The front face of the calorimeter during the experiment was
protected by plastic material with an effective thickness of
10 g/cm2. For the majority of the time the calorimeter was lo-
cated at a distance of 5-8 m and an angle of 40-50◦ with respect
to the electron beam direction. The transparency of 20 lead-
glass blocks was measured after the experiment, the results of
which are shown in Fig. 21. This plot shows the relative trans-
mission through 4 cm of glass in the direction transverse to
the block length at different locations. The values were nor-
malized to the transmission through similar lead-glass blocks
which were not used in the experiment. The transmission mea-
surement was done with a blue LED (λmax of 430 nm) and a
Hamamatsu photo-diode (1226-44).
Figure 21: The blue light attenuation in 4 cm of lead-glass vs distance from
the front face of calorimeter measured before (solid) and after (dashed) UV
irradiation.
A UV technique was developed and used in order to cure ra-
diation damage. The UV light was produced by a 10 kW total
power 55-inch long lamp9, which was installed vertically at a
distance of 45 inches from the calorimeter face and a quartz
plate (C55QUARTZ) was used as an infrared filter. The inten-
sity of the UV light at the face of the lead-glass blocks was
found to be 75 mW/cm2 by using a UVX digital radiometer10.
In situ UV irradiation without disassembly of the lead-glass
stack was performed over an 18 hour period. All PMTs were
removed before irradiation to ensure the safety of the photo-
cathode. The resultant improvement in transparency can be
seen in Fig. 21. An alternative but equally effective method
to restore the lead-glass transparency, which involved heating
of the lead-glass blocks to 250◦C for several hours, was also
9Type A94551FCB manufactured by American Ultraviolet, Lebanon, IN
46052, USA
10 Manufactured by UVP, Inc., Upland, CA 91786, USA
Table 4: Pedestal widths and calorimeter energy resolution at different stages of the RCS experiment for cosmic (c), electron (e) and production (γ) runs in order of increasing effective luminosity.
Runs L_eff Beam Current Accumulated Detected E_e/γ σ_E/E σ_E/E at Eγ = 1 GeV Θ_cal σ_ped
(10^38 cm^-2/s) (µA) Beam Charge (C) (GeV) (%) (%) (degrees) (MeV)
1517 (c) - - - - - - - 1.5
1811 (e) 0.1 2.5 2.4 2.78 4.2 7.0 30 1.7
1488 (e) 0.2 5 0.5 1.32 4.9 5.5 46 1.75
2125 (e) 1.0 25 6.6 2.83 4.9 8.2 34 2.6
2593 (e) 1.5 38 14.9 1.32 9.9 11.3 57 2.0
1930 (e) 1.6 40 4.4 3.39 4.2 7.7 22 3.7
1938 (γ) 1.8 15 4.5 3.23 - - 22 4.1
2170 (γ) 2.4 20 6.8 2.72 - - 34 4.0
1852 (γ) 4.2 35 3.0 1.63 - - 50 5.0
tested. The net effect of heating on the transparency of the lead-
glass was similar to the UV curing results.
In summary, operation of the calorimeter at high luminos-
ity, particularly when the electron beam was incident on the
bremsstrahlung radiator, led to a degradation in energy resolu-
tion due to fluctuations in the base-line and a higher accidental
rate within the ADC gate period. For typical clusters this effect
was found to be around a percent or two. By far the largest con-
tributor to the observed degradation in resolution was radiation
damage sustained by the lead-glass blocks, which led to the res-
olution being a factor of two larger at the end of the experiment.
The resulting estimates of the total accumulated dose were con-
sistent with expectations for this type of lead-glass. Finally, it
was found that both UV curing and heating of the lead-glass
were successful in annealing this damage.
7. Summary
The design of a segmented electromagnetic calorimeter
which was used in the JLab RCS experiment has been de-
scribed. The performance of the calorimeter in an unprece-
dented high luminosity, high background environment has been
discussed. Good energy and position resolution enabled a suc-
cessful measurement of the RCS process over a wide range of
kinematics.
8. Acknowledgments
We acknowledge the RCS collaborators who helped to oper-
ate the detector and the JLab technical staff for providing out-
standing support, and especially D. Hayes, T. Hartlove, T. Hun-
yady, and S. Mayilyan for help in the construction of the lead-
glass modules. We appreciate S. Corneliussen’s careful reading
of the manuscript and his valuable suggestions. This work was
supported in part by the National Science Foundation in grants
for the University of Illinois University and by DOE contract
DE-AC05-84ER40150 under which the Southeastern Universi-
ties Research Association (SURA) operates the Thomas Jeffer-
son National Accelerator Facility for the United States Depart-
ment of Energy.
|
0704.1831 | Multiplicity Fluctuations in Au+Au Collisions at RHIC | January 4, 2019
Multiplicity Fluctuations in Au+Au Collisions at RHIC
V.P. Konchakovski,1, 2 M.I. Gorenstein,1, 3 and E.L. Bratkovskaya3
1Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine
2Helmholtz Research School, University of Frankfurt, Frankfurt, Germany
3Frankfurt Institute for Advanced Studies, Frankfurt, Germany
Abstract
The preliminary data of the PHENIX collaboration for the scaled variances of charged hadron
multiplicity fluctuations in Au+Au at √s = 200 GeV are analyzed within the model of independent
sources. We use the HSD transport model to calculate the participant number fluctuations and
the number of charged hadrons per nucleon participant in different centrality bins. This combined
picture leads to a good agreement with the PHENIX data and suggests that the measured multi-
plicity fluctuations result dominantly from participant number fluctuations. The role of centrality
selection and acceptance is discussed separately.
PACS numbers: 24.10.Lx, 24.60.Ky, 25.75.-q
Keywords: nucleus-nucleus collisions, fluctuations, transport models
The event-by-event fluctuations in high energy nucleus-nucleus (A+A) collisions (see e.g.,
the reviews [1, 2]) are expected to provide signals of the transition between different phases
(see e.g., Refs. [3, 4]) and the QCD critical point [5]. In the present letter we study the
charged multiplicity fluctuations in Au+Au collisions at RHIC energies. The preliminary
data of the PHENIX collaboration [6] at √s = 200 GeV are analyzed within the model
of independent sources while employing the microscopic Hadron-String-Dynamics (HSD)
transport model [7, 8] to define the centrality selection and to calculate the properties of
hadron production sources.
The centrality selection is an important aspect of fluctuation studies in A+A collisions. At the SPS fixed target experiments the samples of collisions with a fixed number of projectile participants, N_P^proj, can be selected to minimize the participant number fluctuations in the sample of collision events. This selection is possible due to a measurement of the number of nucleon spectators from the projectile, N_S^proj, in each individual collision by a calorimeter which covers the projectile fragmentation domain. However, even in the sample with N_P^proj = const the number of target participants fluctuates considerably. In the following the variance, Var(n) ≡ ⟨n²⟩ − ⟨n⟩², and scaled variance, ω ≡ Var(n)/⟨n⟩, where n stands for a given random variable and ⟨· · · ⟩ for event-by-event averaging, will be used to quantify fluctuations. In each sample with N_P^proj = const the number of target participants fluctuates around its mean value, ⟨N_P^targ⟩ = N_P^proj, with the scaled variance ω_P^targ. Within the HSD and UrQMD transport models it was found in Ref. [9] that the fluctuations of N_P^targ strongly influence the charged hadron fluctuations. The constant values of N_P^proj and fluctuations of N_P^targ lead also to an asymmetry between the fluctuations in the projectile and target hemispheres. The consequences of this asymmetry depend on the A+A dynamics as discussed in Ref. [10].
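The variance and scaled variance defined above are straightforward to compute from event-by-event multiplicities. The following sketch is illustrative (not from the paper); it checks the textbook fact that a Poisson-distributed multiplicity has ω = 1:

```python
import numpy as np

def scaled_variance(n):
    """Scaled variance omega = Var(n)/<n>, with Var(n) = <n^2> - <n>^2."""
    n = np.asarray(n, dtype=float)
    mean = n.mean()
    var = (n ** 2).mean() - mean ** 2
    return var / mean

# Sanity check: a Poisson-distributed multiplicity has omega = 1.
rng = np.random.default_rng(0)
events = rng.poisson(lam=20.0, size=200_000)
print(scaled_variance(events))  # close to 1.0
```

Values ω > 1 then signal fluctuations beyond the Poisson baseline, which is what the participant-number term contributes below.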
In Au+Au collisions at RHIC a different centrality selection is used. There are two kinds
of detectors which define the centrality of Au+Au collision, Beam-Beam Counters (BBC)
and Zero Degree Calorimeters (ZDC). At the c.m. pair energy √s = 200 GeV, the BBC
measure the charged particle multiplicity in the pseudorapidity range 3.0 < |η| < 3.9, and
the ZDC – the number of neutrons with |η| > 6.0 [6]. These neutrons are part of the nucleon
spectators. Due to technical reasons the neutron spectators can be only detected by the ZDC
(not protons and nuclear fragments), but in both hemispheres. The BBC distribution will be
used in the HSD calculations to divide Au+Au collision events into 5% centrality samples.
HSD does not specify different spectator groups – neutrons, protons, and nuclear fragments – so we cannot use the ZDC information. In Fig. 1 (left) the HSD results for the BBC distribution and centrality classes in Au+Au collisions at √s = 200 GeV are shown. We
find a good agreement between the HSD shape of the BBC distribution and the PHENIX
data [6]. The experimental estimates of 〈NP 〉 for different centrality classes are based on
the Glauber model. These estimates vary by less than 0.5% depending on the shape of the
cut in the ZDC/BBC space or whether the BBC alone is used as a centrality measure [6].
Note, however, that the HSD 〈NP 〉 numbers are not exactly equal to the PHENIX values.
It is also not obvious that different definitions for the 5% centrality classes give the same
values of the scaled variance ωP for the participant number fluctuations.
FIG. 1: HSD model results for Au+Au collisions at √s = 200 GeV. Left: Centrality classes defined via the BBC distribution (charge in the BBC). Right: The average number of participants, ⟨N_P⟩, and the scaled variance of the participant number fluctuations, ω_P, calculated for the 5% BBC centrality classes.
Defining the centrality selection via the HSD transport model (which is similar to the
BBC in the PHENIX experiment) we calculate the mean number of nucleon participants,
〈NP 〉, and the scaled variance of its fluctuations, ωP , in each 5% centrality sample. The
results are shown in Fig. 1, right. Fig. 2 (left) shows the HSD results for the mean number of charged hadrons per nucleon participant, n_i = ⟨N_i⟩/⟨N_P⟩, where the index i stands for “−”, “+”, and “ch”, i.e., negatively charged, positively charged, and all charged final hadrons. Note that the centrality dependence of n_i is opposite to that of ω_P: n_i increases with ⟨N_P⟩, whereas ω_P decreases.
The PHENIX detector accepts charged particles in a small region of the phase space with pseudorapidity |η| < 0.26, azimuthal angle φ < 245°, and the p_T range from 0.2 to 2.0 GeV/c [6]. The fraction of the accepted particles q_i = ⟨N_i^acc⟩/⟨N_i⟩ calculated within the HSD model is shown in the r.h.s. of Fig. 2. According to the HSD results only 3 ÷ 3.5% of charged particles are accepted by the mid-rapidity PHENIX detector.
FIG. 2: HSD results for different BBC centrality classes in Au+Au collisions at √s = 200 GeV. Left: The mean number of charged hadrons per participant, n_i = ⟨N_i⟩/⟨N_P⟩. Right: The fraction of accepted particles, q_i = ⟨N_i^acc⟩/⟨N_i⟩.
To estimate the role of the participant number event-by-event fluctuations we use the model of independent sources (see e.g., Refs. [1, 9, 10]),

ω_i = ω*_i + n_i ω_P .   (1)

The first term in the r.h.s. of Eq. (1) corresponds to the fluctuations of the hadron multiplicity from one source, and the second term, n_i ω_P, gives additional fluctuations due to the fluctuations of the number of sources. As usual, we have assumed that the number of sources is proportional to the number of nucleon participants. The value of n_i in Eq. (1) is then the average number of i’th particles per participant, n_i = ⟨N_i⟩/⟨N_P⟩. We also assume that nucleon-nucleon collisions define the fluctuations ω*_i from a single source. To calculate the fluctuations ω_i^acc in the PHENIX acceptance we use the acceptance scaling formula (see e.g., Refs. [1, 9, 10]):

ω_i^acc = 1 − q_i + q_i ω_i ,   (2)

where q_i is the fraction of the i’th hadrons accepted by the PHENIX detector. Using Eq. (1) for ω_i one finds,

ω_i^acc = 1 − q_i + q_i ω*_i + q_i n_i ω_P .   (3)

The HSD results for ω_P (Fig. 1, right), n_i (Fig. 2, left), and q_i (Fig. 2, right), together with the HSD nucleon-nucleon values ω*_− = 3.0, ω*_+ = 2.7, and ω*_ch = 5.7 at √s = 200 GeV, completely define the results for ω_i^acc according to Eq. (3). We find a surprisingly good agreement of the results given by Eq. (3) with the PHENIX data shown in Fig. 3. Note that the centrality dependence of ω_i^acc stems from the product n_i · ω_P in the last term of the r.h.s. of Eq. (3).
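Eq. (3) is simple enough to sketch in code. The single-source values below are the HSD nucleon-nucleon numbers quoted in the text; the q, n, and ω_P inputs in the example are illustrative placeholders, not the actual HSD centrality-binned results:

```python
def omega_accepted(q, omega_star, n, omega_p):
    """Eq. (3): omega_acc = 1 - q + q*omega_star + q*n*omega_P."""
    return 1.0 - q + q * omega_star + q * n * omega_p

# HSD nucleon-nucleon single-source values at sqrt(s) = 200 GeV (from the text).
OMEGA_STAR = {"-": 3.0, "+": 2.7, "ch": 5.7}

# Placeholder inputs for one centrality bin: q ~ 3.5% acceptance,
# n ~ 25 charged hadrons per participant, omega_P ~ 1.
print(omega_accepted(q=0.035, omega_star=OMEGA_STAR["ch"], n=25.0, omega_p=1.0))
```

Note that for full acceptance (q = 1, ω_P = 0) the formula reduces to the single-source value ω*, and for q → 0 it tends to the Poisson limit ω_acc → 1, as the acceptance scaling of Eq. (2) requires.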
FIG. 3: The scaled variance of charged particle fluctuations in Au+Au collisions at √s = 200 GeV with the PHENIX acceptance. The circles are the PHENIX data [6] while the open points (connected by the solid line) correspond to Eq. (3) with the HSD results for ω_P, n_i, and q_i.
In summary, the preliminary PHENIX data [6] for the scaled variances of charged hadron multiplicity fluctuations in Au+Au collisions at √s = 200 GeV have been analyzed within the model of independent sources. Assuming that the number of hadron sources is proportional to the number of nucleon participants, the HSD transport model was used to calculate the scaled variance of participant number fluctuations, ω_P, and the number of i’th hadrons per nucleon accepted by the mid-rapidity PHENIX detector, q_i n_i, in different Beam-Beam Counter centrality classes. The HSD model for nucleon-nucleon collisions was also used to estimate the fluctuations from a single source, ω*_i. We find that this model description is in good agreement with the PHENIX data [6]. In different (5%) centrality classes ω_P goes down and q_i n_i goes up with increasing ⟨N_P⟩. This results in the non-monotonic dependence of ω_i^acc on ⟨N_P⟩ seen in the PHENIX data.
We conclude that both qualitative and quantitative features of the centrality dependence of the fluctuations seen in the present PHENIX data are consequences of participant number fluctuations. To avoid a dominance of the participant number fluctuations one needs to analyze most central collisions with a much more rigid (≤ 1%) centrality selection. The statistical model then predicts ω± < 1 [11], whereas the HSD transport model predicts values of ω± much larger than unity at √s = 200 GeV [12]. To allow for a clear distinction between these predictions it is mandatory to enlarge the acceptance of the mid-rapidity detector up to about 10% (see the discussion in Ref. [12]).
Acknowledgments: We would like to thank V.V. Begun, W. Cassing, M. Gaździcki, W. Greiner, M. Hauer, B. Lungwitz, I.N. Mishustin and J.T. Mitchell for useful discussions. One of the authors (M.I.G.) is thankful to the Humboldt Foundation for financial support.
[1] H. Heiselberg, Phys. Rep. 351, 161 (2001).
[2] S. Jeon and V. Koch, Review for Quark-Gluon Plasma 3, eds. R.C. Hwa and X.-N. Wang,
World Scientific, Singapore, 430-490 (2004) [arXiv:hep-ph/0304012].
[3] M. Gaździcki, M. I. Gorenstein, and S. Mrowczynski, Phys. Lett. B 585, 115 (2004);
M. I. Gorenstein, M. Gaździcki, and O. S. Zozulya, Phys. Lett. B 585, 237 (2004).
[4] I.N. Mishustin, Phys. Rev. Lett. 82, 4779 (1999); Nucl. Phys. A 681, 56 (2001); H. Heiselberg
and A.D. Jackson, Phys. Rev. C 63, 064904 (2001).
[5] M.A. Stephanov, K. Rajagopal, and E.V. Shuryak, Phys. Rev. Lett. 81, 4816 (1998); Phys.
Rev. D 60, 114028 (1999); M.A. Stephanov, Acta Phys. Polon. B 35, 2939 (2004);
[6] S.S. Adler et al., [PHENIX Collaboration], arXiv: nucl-ex/0409015; J.T. Mitchell [PHENIX
Collaboration], J. Phys. Conf. Ser. 27, 88 (2005); J.T. Mitchell, private communications.
[7] W. Ehehalt and W. Cassing, Nucl. Phys. A 602, 449 (1996); E.L. Bratkovskaya and
W. Cassing, Nucl. Phys. A 619 413 (1997); W. Cassing and E.L. Bratkovskaya, Phys. Rep.
308, 65 (1999); W. Cassing, E.L. Bratkovskaya, and A. Sibirtsev, Nucl. Phys. A 691, 753
(2001); W. Cassing, E. L. Bratkovskaya, and S. Juchem, Nucl. Phys. A 674, 249 (2000).
[8] H. Weber, et al., Phys. Rev. C67, 014904 (2003); E.L. Bratkovskaya, et al., Phys. Rev. C 67,
054905 (2003); ibid, 69, 054907 (2004); Prog. Part. Nucl. Phys. 53, 225 (2004).
[9] V.P. Konchakovski, et al., Phys. Rev. C 73, 034902 (2006); ibid. C 74, 064911 (2006).
[10] M. Gaździcki and M.I. Gorenstein, Phys. Lett. B 640, 155 (2006).
[11] V.V. Begun, et al., Phys. Rev. C 74, 044903 (2006); nucl-th/0611075.
[12] V.P. Konchakovski, E.L. Bratkovskaya, and M.I. Gorenstein, nucl-th/0703052.
|
0704.1832 | Near and Mid-IR Photometry of the Pleiades, and a New List of Substellar
Candidate Members | ApJS, in press; version with embedded figures can be obtained
at http://spider.ipac.caltech.edu/staff/stauffer/
Near and Mid-IR Photometry of the Pleiades, and a New List of
Substellar Candidate Members1,2
John R. Stauffer
Spitzer Science Center, Caltech 314-6, Pasadena, CA 91125
[email protected]
Lee W. Hartmann
Astronomy Department, University of Michigan
Giovanni G. Fazio, Lori E. Allen, Brian M. Patten
Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138
Patrick J. Lowrance, Robert L. Hurt, Luisa M. Rebull
Spitzer Science Center, Caltech , Pasadena, CA 91125
Roc M. Cutri, Solange V. Ramirez
Infrared Processing and Analysis Center, Caltech 220-6, Pasadena, CA 91125
Erick T. Young, George H. Rieke, Nadya I. Gorlova3, James C. Muzerolle
Steward Observatory, University of Arizona, Tucson, AZ 85726
Cathy L. Slesnick
Astronomy Department, Caltech, Pasadena, CA 91125
Michael F. Skrutskie
Astronomy Department, University of Virginia, Charlottesville, VA 22903
1This work is based (in part) on observations made with the Spitzer Space Telescope, which is operated
by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407.
2This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project
of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute
of Technology, funded by the National Aeronautics and Space Administration and the National Science
Foundation.
3Current address: University of Florida, 211 Bryant Space Center, Gainesville, FL 32611
ABSTRACT
We make use of new near and mid-IR photometry of the Pleiades cluster in
order to help identify proposed cluster members. We also use the new photometry
with previously published photometry to define the single-star main sequence
locus at the age of the Pleiades in a variety of color-magnitude planes.
The new near and mid-IR photometry extend effectively two magnitudes
deeper than the 2MASS All-Sky Point Source catalog, and hence allow us to
select a new set of candidate very low mass and sub-stellar mass members of the
Pleiades in the central square degree of the cluster. We identify 42 new candi-
date members fainter than Ks =14 (corresponding to 0.1 Mo). These candidate
members should eventually allow a better estimate of the cluster mass function
to be made down to of order 0.04 solar masses.
We also use new IRAC data, in particular the images obtained at 8 um,
in order to comment briefly on interstellar dust in and near the Pleiades. We
confirm, as expected, that – with one exception – a sample of low mass stars
recently identified as having 24 um excesses due to debris disks do not have
significant excesses at IRAC wavelengths. However, evidence is also presented
that several of the Pleiades high mass stars are found to be impacting with local
condensations of the molecular cloud that is passing through the Pleiades at the
current epoch.
Subject headings: stars: low mass — young; open clusters — associations:
individual (Pleiades)
1. Introduction
Because of its proximity, youth, richness, and location in the northern hemisphere, the
Pleiades has long been a favorite target of observers. The Pleiades was one of the first open
clusters to have members identified via their common proper motion (Trumpler 1921), and
the cluster has since then been the subject of more than a dozen proper motion studies.
Some of the earliest photoelectric photometry was for members of the Pleiades (Cummings
1921), and the cluster has been the subject of dozens of papers providing additional optical
photometry of its members. The youth and nearness of the Pleiades make it a particularly
attractive target for identifying its substellar population, and it was the first open cluster
studied for those purposes (Jameson & Skillen 1989; Stauffer et al. 1989). More than 20 papers have been subsequently published, identifying additional substellar candidate members of the Pleiades or studying their properties.
We have three primary goals for this paper. First, while extensive optical photometry
for Pleiades members is available in the literature, photometry in the near and mid-IR is
relatively spotty. We will remedy this situation by using new 2MASS JHKs and Spitzer
IRAC photometry for a large number of Pleiades members. We will use these data to help
identify cluster non-members and to define the single-star locus in color-magnitude diagrams
for stars of 100 Myr age. Second, we will use our new IR imaging photometry of the center of
the Pleiades to identify a new set of candidate substellar members of the cluster, extending
down to stars expected to have masses of order 0.04 M⊙. Third, we will use the IRAC data
to briefly comment on the presence of circumstellar debris disks in the Pleiades and the
interaction of the Pleiades stars with the molecular cloud that is currently passing through
the cluster.
In order to make best use of the IR imaging data, we will begin with a necessary
digression. As noted above, more than a dozen proper motion surveys of the Pleiades have
been made in order to identify cluster members. However, no single catalog of the cluster
has been published which attempts to collect all of those candidate members in a single
table and cross-identify those stars. Another problem is that while there have been many
papers devoted to providing optical photometry of cluster members, that photometry has
been bewilderingly inhomogeneous in terms of the number of photometric systems used. In
Sec. 3 and in the Appendix, we describe our efforts to create a reasonably complete catalog
of candidate Pleiades members and to provide optical photometry transformed to the best
of our ability onto a single system.
2. New Observational Data
2.1. 2MASS “6x” Imaging of the Pleiades
During the final months of Two Micron All Sky Survey (2MASS; Skrutskie et al. (2006))
operations, a series of special observations were carried out that employed exposures six times
longer than used for the primary survey. These so-called “6x” observations targeted 30
regions of scientific interest including a 3 deg x 2 deg area centered on the Pleiades cluster.
The 2MASS 6x data were reduced using an automated processing pipeline similar to that
used for the main survey data, and a calibrated 6x Image Atlas and extracted 6x Point and
Extended Source Catalogs (6x-PSC and 6x-XSC) analogous to the 2MASS All-Sky Atlas,
PSC and XSC have been released as part of the 2MASS Extended Mission. A description
of the content and formats of the 6x image and catalog products, and details about the 6x
observations and data reduction are given by Cutri et al. (2006; section A3). 1 The 2MASS
6x Atlas and Catalogs may be accessed via the on-line services of the NASA/IPAC Infrared
Science Archive (http://irsa.ipac.caltech.edu).
Figure 1 shows the area on the sky imaged by the 2MASS 6x observations in the Pleiades
field. The region was covered by two rows of scans, each scan being one degree long (in
declination) and 8.5’ wide in right ascension. Within each row, the scans overlap by approximately one arcminute in right ascension. There are small gaps in coverage in the declination
boundary between the rows, and one complete scan in the southern row is missing because
the data in that scan did not meet the minimum required photometric quality. The total
area covered by the 6x Pleiades observations is approximately 5.3 sq. degrees.
There are approximately 43,000 sources extracted from the 6x Pleiades observations
in the 2MASS 6x-PSC, and nearly 1,500 in the 6x-XSC. Because there are at most about
1000 Pleiades members expected in this region, only ∼2% of the 6x-PSC sources are cluster
members, and the rest are field stars and background galaxies. The 6x-XSC objects are
virtually all resolved background galaxies. Near infrared color-magnitude and color-color
diagrams of the unresolved sources from the 2MASS 6x-PSC and all sources in the 6x-XSC
sources from the Pleiades region are shown in Figures 2 and 3, respectively. The extragalactic
sources tend to be redder than most stars, and the galaxies become relatively more numerous
towards fainter magnitudes. Unresolved galaxies dominate the point sources that are fainter
than Ks > 15.5 and redder than J −Ks > 1.2 mag.
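The color-magnitude cut just described can be expressed as a simple flag. This is a hypothetical sketch: the thresholds Ks > 15.5 and J − Ks > 1.2 are from the text, but the function name and the sample photometry are invented for illustration:

```python
import numpy as np

def likely_unresolved_galaxy(j_mag, ks_mag):
    """Flag point sources in the regime where unresolved galaxies dominate."""
    j_mag, ks_mag = np.asarray(j_mag), np.asarray(ks_mag)
    return (ks_mag > 15.5) & (j_mag - ks_mag > 1.2)

# A bright star, a faint red object, and a faint blue object:
j = np.array([10.2, 17.4, 16.0])
ks = np.array([9.9, 15.9, 15.7])
print(likely_unresolved_galaxy(j, ks))  # [False  True False]
```

In practice such a cut only removes the bulk of the extragalactic contamination; proper motions and spectroscopy are still needed for individual membership decisions, as discussed in Section 3.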
The 2MASS 6x observations were conducted using the same freeze-frame scanning technique used for the primary survey (Skrutskie et al. 2006). The longer exposure times were achieved by increasing the “READ2-READ1” integration to 7.8 sec from the 1.3 sec used for the primary survey. However, the 51 ms “READ1” exposure time was not changed for the 6x observations. As a result, there is an effective “sensitivity gap” in the 8-11 mag region where objects may be saturated in the 7.8 sec READ2-READ1 6x exposures, but too faint to be detected in the 51 ms READ1 exposures. Because the sensitivity gap can result in incompleteness and/or flux bias in the photometric overlap regime, the near infrared photometry for sources brighter than J = 11 mag in the 6x-PSC was taken from the 2MASS All-Sky PSC during compilation of the catalog of Pleiades candidate members presented in Table 2 (c.f. Section 3).
1 http://www.ipac.caltech.edu/2mass/releases/allsky/doc/explsup.html
2.2. Shallow IRAC Imaging
Imaging of the Pleiades with Spitzer was obtained in April 2004 as part of a joint GTO
program conducted by the IRAC instrument team and the MIPS instrument team. Initial
results of the MIPS survey of the Pleiades have already been reported in Gorlova et al. (2006).
The IRAC observations were obtained as two astronomical observing requests (AORs). One
of them was centered near the cluster center, at RA=03h47m00.0s and Dec=24d07m (2000),
and consisted of a 12 row by 12 column map, with “frametimes” of 0.6 and 12.0 seconds
and two dithers at each map position. The map steps were 290′′ in both the column and
row direction. The resultant map covers a region of approximately one square degree, and
a total integration time per position of 24 sec over most of the map. The second AOR used
the same basic mapping parameters, except it was smaller (9 rows by 9 columns) and was
instead centered northwest from the cluster center at RA=03h44m36.0s and Dec=25d24m.
A two-band color image of the AOR covering the center of the Pleiades is shown in Figure 4.
A pictorial guide to the IRAC image providing Greek names for a few of the brightest stars,
and Hertzsprung (1947) numbers for several stars mentioned in Section 6 is provided in
Figure 5.
We began our analysis with the basic calibrated data (BCDs) from the Spitzer pipeline,
using the S13 version of the Spitzer Science Center pipeline software. Artifact mitigation
and masking was done using the IDL tools provided on the Spitzer contributed software
website. For each AOR, the artifact-corrected BCDs were combined into single mosaics
for each channel using the post-BCD “MOPEX” package (Makovoz & Marleau 2005). The
mosaic images were constructed with 1.22×1.22 arcsecond pixels (i.e., approximately the
same pixel size as the native IRAC arrays).
We derived aperture photometry for stars present in these IRAC mosaics using both
APEX (a component of the MOPEX package) and the “phot” routine in DAOPHOT. In
both cases, we used a 3 pixel radius aperture and a sky annulus from 3 to 7 pixels (except
that for Channel 4, for the phot package we used a 2 pixel radius aperture and a 2 to 6 pixel
annulus because that provided more reliable fluxes at low flux levels). We used the flux for
zero magnitude calibrations provided in the IRAC data handbook (280.9, 179.7, 115.0 and
64.1 Jy for Ch 1 through Ch 4, respectively), and the aperture corrections provided in the
same handbook (multiplicative flux correction factors of 1.124, 1.127, 1.143 and 1.584 for Ch
1-4, inclusive. The Ch4 correction factor is much bigger because it is for an aperture radius
of 2 rather than 3 pixels.).
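The flux-to-magnitude conversion implied above, using the handbook zero-magnitude fluxes and multiplicative aperture corrections quoted in the text, can be sketched as follows (the function name is ours, not from any pipeline):

```python
import math

# Zero-magnitude fluxes (Jy) and multiplicative aperture corrections for
# IRAC Ch 1-4, as quoted in the text from the IRAC data handbook.
F0_JY = {1: 280.9, 2: 179.7, 3: 115.0, 4: 64.1}
AP_CORR = {1: 1.124, 2: 1.127, 3: 1.143, 4: 1.584}

def irac_mag(aperture_flux_jy, channel):
    """Aperture flux (Jy) -> IRAC magnitude after the aperture correction."""
    corrected = aperture_flux_jy * AP_CORR[channel]
    return -2.5 * math.log10(corrected / F0_JY[channel])

# A source whose corrected Ch 1 flux equals the zero-point flux has mag ~ 0;
# a source ten times fainter is ~ 2.5 mag.
print(irac_mag(280.9 / 1.124, 1))
print(irac_mag(28.09 / 1.124, 1))
```

The much larger Ch 4 correction simply compensates for the smaller (2 pixel) aperture used there, so magnitudes from the two aperture choices land on the same scale.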
Figure 6 and Figure 7 provide two means to assess the accuracy of the IRAC photometry.
The first figure compares the aperture photometry from APEX to that from phot, and shows
that the two packages yield very similar results when used in the same way. For this reason,
we have simply averaged the fluxes from the two packages to obtain our final reported value.
The second figure shows the difference between the derived 3.6 and 4.5 µm magnitudes
for Pleiades members. Based on previous studies (e.g. Allen et al. (2004)), we expected
this difference to be essentially zero for most stars, and the Pleiades data corroborate that
expectation. For [3.6]<10.5, the RMS dispersion of the magnitude difference between the two
channels is 0.024 mag. Assuming that each channel has similar uncertainties, this indicates
an internal 1-σ accuracy of order 0.017 mag. The absolute calibration uncertainty for the
IRAC fluxes is currently estimated at of order 0.02 mag. Figure 7 also shows that fainter than
[3.6]=10.5 (spectral type later than about M0), the [3.6]−[4.5] color for M dwarfs departs
slightly from zero, becoming increasingly redder to the limit of the data (about M6).
3. A Catalog of Pleiades Candidate Members
If one limits oneself to only stars visible with the naked eye, it is easy to identify which
stars are members of the Pleiades – all of the stars within a degree of the cluster center that
have V < 6 are indeed members. However, if one were to try to identify the M dwarf stellar
members of the cluster (roughly 14 < V < 23), only of order 1% of the stars towards the
cluster center are likely to be members, and it is much harder to construct an uncontaminated
catalog. The problem is exacerbated by the fact that the Pleiades is old enough that mass
segregation through dynamical processes has occurred, and therefore one has to survey a
much larger region of the sky in order to include all of the M dwarf members.
The other primary difficulty in constructing a comprehensive member catalog for the
Pleiades is that the pedigree of the candidates varies greatly. For the best studied stars,
astrometric positions can be measured over temporal baselines ranging up to a century or
more, and the separation of cluster members from field stars in a vector point diagram
(VPD) can be extremely good. In addition, accurate radial velocities and other spectral
indicators are available for essentially all of the bright cluster members, and these further
allow membership assessment to be essentially definitive. Conversely, at the faint end (for
stars near the hydrogen burning mass limit in the Pleiades), members are near the detection
limit of the existing wide-field photographic plates, and the errors on the proper motions
become correspondingly large, causing the separation of cluster members from field stars
in the VPD to become poor. These stars are also sufficiently faint that spectra capable of
discriminating members from field dwarfs can only be obtained with 8m class telescopes, and
only a very small fraction of the faint candidates have had such spectra obtained. Therefore,
any comprehensive catalog created for the Pleiades will necessarily have stars ranging from
certain members to candidates for which very little is known, and where the fraction of
spurious candidate members increases to lower masses.
In order to address the membership uncertainties and biases, we have chosen a sliding
scale for inclusion in our catalog. For all stars, we require that the available photometry yields
location in color-color and color-magnitude diagrams consistent with cluster membership.
For the stars with well-calibrated photoelectric photometry, this means the star should not
fall below the Pleiades single-star locus by more than about 0.2 mag or above that locus
by more than about 1.0 mag (the expected displacement for a hierarchical triple with three
nearly equal mass components). For stars with only photographic optical photometry, where
the 1-σ uncertainties are of order 0.1 to 0.2 mag, we still require the star’s photometry to
be consistent with membership, but the allowed displacements from the single star locus are
considerably larger. Where accurate radial velocities are known, we require that the star
be considered a radial velocity member based on the paper where the radial velocities were
presented. Where stars have been previously identified as non-members based on photometric
or spectroscopic indices, we adopt those conclusions.
Two other relevant pieces of information are sometimes available. In some cases, individual proper motion membership probabilities are provided by the various membership surveys. If no other information is available, and if the membership probability for a given
candidate is less than 0.1, we exclude that star from our final catalog. However, often a star
appears in several catalogs; if it appears in two or more proper motion membership lists we
include it in the final catalog even if P < 0.1 in one of those catalogs. Second, an entirely
different means to identify candidate Pleiades members is via flare star surveys towards the
cluster (Haro et al. 1982; Jones 1981). A star with a formally low membership probability
in one catalog but whose photometry is consistent with membership and that was identified
as a flare star is retained in our catalog.
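The membership rules of the two preceding paragraphs can be condensed into a single decision function. This is an illustrative reading of the text, not code from the paper, and the argument names are invented:

```python
def keep_candidate(photometry_ok, rv_member, pm_probabilities, is_flare_star):
    """Sketch of the catalog inclusion logic described in the text.

    photometry_ok    -- location in color-color/color-magnitude diagrams
                        is consistent with cluster membership
    rv_member        -- radial-velocity verdict where available (None if none)
    pm_probabilities -- proper motion membership probabilities, one per survey
    is_flare_star    -- identified as a flare star toward the cluster
    """
    if not photometry_ok:
        return False
    if rv_member is False:            # explicit spectroscopic non-member
        return False
    # A membership probability below 0.1 excludes the star only when no
    # other information is available: appearing in two or more proper
    # motion lists, or being a flare star, keeps it in the catalog.
    low_p = [p < 0.1 for p in pm_probabilities]
    if any(low_p) and len(pm_probabilities) < 2 and not is_flare_star:
        return False
    return True

# Low P in one survey, but listed in two proper motion catalogs: retained.
assert keep_candidate(True, None, [0.05, 0.7], False)
# Low P, single catalog, no flare-star detection: excluded.
assert not keep_candidate(True, None, [0.05], False)
```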
Further details of the catalog construction are provided in the appendix, as are details of
the means by which the B, V, and I photometry have been homogenized. A full discussion
and listing of all of the papers from which we have extracted astrometric and photometric
information is also provided in the appendix. Here we simply provide a very brief description
of the inputs to the catalog.
We include candidate cluster members from the following proper motion surveys: Trumpler
(1921), Hertzsprung (1947), Jones (1981), Pels and Lub – as reported in van Leeuwen, Alphenaar, & Brand
(1986), Stauffer et al. (1991), Artyukhina (1969), Hambly et al. (1993), Pinfield et al. (2000),
Adams et al. (2001) and Deacon & Hambly (2004). Another important compilation which
provides the initial identification of a significant number of low mass cluster members is the
flare star catalog of Haro et al. (1982). Table 1 provides a brief synopsis of the characteristics
of the candidate member catalogs from these papers. The Trumpler paper is listed twice
in Table 1 because there are two membership surveys included in that paper, with differing
spatial coverages and different limiting magnitudes.
In our final catalog, we have attempted to follow the standard naming convention
whereby the primary name is derived from the paper where it was first identified as a cluster
member. An exception to this arises for stars with both Trumpler (1921) and Hertzsprung
(1947) names, where we use the Hertzsprung numbers as the standard name because that
is the most commonly used designation for these stars in the literature. The failure of
the Trumpler numbers to be given precedence perhaps stems from the fact
that the Trumpler catalog was published in the Lick Observatory Bulletins as opposed to a
refereed journal. In addition to providing a primary name for each star, we provide cross-
identifications to some of the other catalogs, particularly where there is existing photometry
or spectroscopy of that star using the alternate names. For the brightest cluster members, we
provide additional cross-references (e.g., Greek names, Flamsteed numbers, HD numbers).
For each star, we attempt to include an estimate for Johnson B and V, and for Cousins
I (IC). Only a very small fraction of the cluster members have photoelectric photometry in
these systems, unfortunately. Photometry for many of the stars has often been obtained in
other systems, including Walraven, Geneva, Kron, and Johnson. We have used previously
published transformations from the appropriate indices in those systems to Johnson BV or
Cousins I. In other cases, photometry is available in a natural I band system, primarily
for some of the relatively faint cluster members. We have attempted to transform those I
band data to IC by deriving our own conversion using stars for which we already have an IC
estimate as well as the natural I measurement. Details of these issues are provided in the
Appendix.
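The natural-I to IC conversion described here amounts to a least-squares fit over the overlap stars. A minimal sketch (all magnitudes below are invented; the actual transformation is derived in the Appendix):

```python
def fit_linear(x, y):
    """Ordinary least-squares fit of y = a + b*x, in pure Python."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Overlap stars with both a natural-system I magnitude and an existing
# I_C estimate (hypothetical values):
i_natural = [12.1, 13.4, 14.2, 15.0, 15.9]
i_cousins = [12.0, 13.2, 14.0, 14.8, 15.6]
a, b = fit_linear(i_natural, i_cousins)

def natural_to_ic(i_nat):
    """Convert a natural-I magnitude to the Cousins system."""
    return a + b * i_nat
```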
Finally, we have cross-correlated the cluster candidates catalog with the 2MASS All-Sky
PSC and also with the 6x-PSC for the Pleiades. For every star in the catalog, we obtain
JHKs photometry and 2MASS positions. Where we have both main survey 2MASS data
and data from the 6x catalog, we adopt the 6x data for stars with J >11, and data from
the standard 2MASS catalog otherwise. We verified that the two catalogs do not have any
obvious photometric or astrometric offsets relative to each other. The coordinates we list in
our catalog are entirely from these 2MASS sources, and hence they inherit the very good
and homogeneous 2MASS positional accuracies of order 0.1 arcseconds RMS.
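The photometry-adoption rule stated above can be written down directly; a trivial sketch (the function name is invented):

```python
def adopt_photometry_source(j_mag, in_allsky_psc, in_6x_psc):
    """Pick the 2MASS source per the rule in the text: where both
    catalogs contain the star, adopt the deeper 6x-PSC for faint
    stars (J > 11) and the All-Sky PSC otherwise."""
    if in_6x_psc and (j_mag > 11.0 or not in_allsky_psc):
        return "6x-PSC"
    if in_allsky_psc:
        return "All-Sky PSC"
    return None
```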
We have then plotted the candidate Pleiades members in a variety of color-magnitude
diagrams and color-color diagrams, and required that a star must have photometry that is
consistent with cluster membership. Figure 8 illustrates this process, and indicates why (for
example) we have excluded HII 1695 from our final catalog.
Table 2 provides the collected data for the 1417 stars we have retained as candidate
Pleiades members. The first two columns are the J2000 RA and Dec from 2MASS; the next
are the 2MASS JHKs photometry and their uncertainties, and the 2MASS photometric
quality flag (“ph-qual”). If the number following the 2MASS quality flag is a 1, the 2MASS
data come from the 2MASS All-Sky PSC; if it is a 2, the data come from the 6x-PSC. The
next three columns provide the B, V and IC photometry, followed by a flag which indicates
the provenance of that photometry. The last column provides the most commonly used
names for these stars. The hydrogen burning mass limit for the Pleiades occurs at about
V=22, I=18, Ks=14.4. Fifty-three of the candidate members in the catalog are fainter than
this limit, and hence should be sub-stellar if they are indeed Pleiades members.
Table 3 provides the IRAC [3.6], [4.5], [5.8] and [8.0] photometry we have derived for
Pleiades candidate members included within the region covered by the IRAC shallow survey
of the Pleiades (see section 2). The brightest stars are saturated even in our short integration
frame data, particularly for the more sensitive 3.6 and 4.5 µm channels. At the faint end,
we provide photometry only for 3.6 and 4.5 µm because the objects are undetected in the
two longer wavelength channels. At the “top” and “bottom” of the survey region, we have
incomplete wavelength coverage for a band of width about 5′, and for stars in those areas
we report only photometry in either the 3.6 and 5.8 bands or in 4.5 and 8.0 bands.
Because Table 2 is an amalgam of many previous catalogs, each of which has different
spatial coverage, magnitude limits and other idiosyncrasies, it is necessarily incomplete and
inhomogeneous. It also certainly includes some non-members. For V < 12, we expect very
few non-members because of the extensive spectroscopic data available for those stars; the
fraction of non-members will likely increase toward fainter magnitudes, particularly for stars
located far from the cluster center. The catalog is simply an attempt to collect all of the
available data, identify some of the non-members and eliminate duplications. We hope that
it will also serve as a starting point for future efforts to produce a “cleaner” catalog.
Figure 9 shows the distribution on the sky of the stars in Table 2. The complete
spatial distribution of all members of the Pleiades may differ slightly from what is shown
due to the inhomogeneous properties of the proper motion surveys. However, we believe
that those effects are relatively small and the distribution shown is mostly representative
of the parent population. One thing that is evident in Figure 9 is mass segregation – the
highest mass cluster members are much more centrally located than the lowest mass cluster
members. This fact is reinforced by calculating the cumulative number of stars as a function
of distance from the cluster center for different absolute magnitude bins, as shown in Figure 10.
Another property of the Pleiades illustrated by Figure 9 is that the cluster appears
to be elongated parallel to the galactic plane, as expected from n-body simulations of galactic
clusters (Terlevich 1987). Similar plots showing the flattening of the cluster and evidence for
mass segregation for the V < 12 cluster members were provided by Raboud & Mermilliod
(1998).
4. Empirical Pleiades Isochrones and Comparison to Model Isochrones
Young, nearby, rich open clusters like the Pleiades can and should be used to provide
template data which can help interpret observations of more distant clusters or to test
theoretical models. The identification of candidate members of distant open clusters is often
based on plots of stars in a color-magnitude diagram, overlaid upon which is a line meant
to define the single-star locus at the distance of the cluster. The stars lying near or slightly
above the locus are chosen as possible or probable cluster members. The data we have
collected for the Pleiades provide a means to define the single-star locus for 100 Myr, solar
metallicity stars in a variety of widely used color systems down to and slightly below the
hydrogen burning mass limit. Figure 11 and Figure 12 illustrate the appearance of the
Pleiades stars in two of these diagrams, and the single-star locus we have defined. The curve
defining the single-star locus was drawn entirely “by eye.” It is displaced slightly above the
lower envelope to the locus of stars to account for photometric uncertainties (which increase
to fainter magnitudes). We attempted to use all of the information available to us, however.
That is, there should also be an upper envelope to the Pleiades locus in these diagrams, since
equal mass binaries should be displaced above the single star sequence by 0.7 magnitudes
(and one expects very few systems of higher multiplicity). Therefore, the single star locus
was defined with that upper envelope in mind. Table 4 provides the single-star loci for the
Pleiades for BV IcJKs plus the four IRAC channels. We have dereddened the empirical loci
by the canonical mean extinction to the Pleiades of AV = 0.12 (and, correspondingly, AB
= 0.16, AI = 0.07, AJ = 0.03, AK = 0.01, as per the reddening law of Rieke & Lebofsky
(1985)).
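Dereddening the empirical loci is then a per-band subtraction; a minimal sketch using the extinction values quoted above:

```python
# Mean Pleiades extinction per band, as quoted in the text
# (A_V = 0.12 with the Rieke & Lebofsky 1985 reddening law).
A = {"B": 0.16, "V": 0.12, "I": 0.07, "J": 0.03, "K": 0.01}

def deredden(mag, band):
    """Apparent magnitude corrected for the mean cluster extinction."""
    return mag - A[band]

def deredden_color(color, band1, band2):
    """Observed color corrected for the corresponding color excess,
    e.g. (B - V)_0 = (B - V) - (A_B - A_V)."""
    return color - (A[band1] - A[band2])
```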
The other benefit to constructing the new catalog is that it can provide an improved
comparison dataset to test theoretical isochrones. The new catalog provides homogeneous
photometry in many photometric bands for stars ranging from several solar masses down
to below 0.1 M⊙. We take the distance to the Pleiades as 133 pc, and refer the reader to
Soderblom et al. (2005) for a discussion and a listing of the most recent determinations. The
age of the Pleiades is not as well-defined, but is probably somewhere between 100 and 125
Myr (Meynet, Mermilliod, & Maeder 1993; Stauffer et al. 1998). We adopt 100 Myr for the
purposes of this discussion; our conclusions relative to the theoretical isochrones would not be
affected significantly if we instead chose 125 Myr. As noted above, we adopt AV=0.12 as the
mean Pleiades extinction, and apply that value to the theoretical isochrones. A small number
of Pleiades members have significantly larger extinctions (Breger 1986; Stauffer & Hartmann
1987), and we have dereddened those stars individually to the mean cluster reddening.
Figures 13 and 14 compare theoretical 100 Myr isochrones from Siess et al. (2000) and
Baraffe et al. (1998) to the Pleiades member photometry from Table 2 for stars for which
we have photoelectric photometry. Neither set of isochrones is a good fit to the V − I
based color-magnitude diagram. For Baraffe et al. (1998) this is not a surprise because they
illustrated that their isochrones are too blue in V − I for cool stars in their paper, and ascribed
the problem to a likely incomplete line list, resulting in too little absorption in
the V band. For Siess et al. (2000), the poor fit in the V − I CMD is somewhat unexpected
in that they transform from the theoretical to the observational plane using empirical color-
temperature relations. In any event, it is clear that neither set of model isochrones matches the
shape of the Pleiades locus in the V vs. V − I plane, and therefore use of these V − I based
isochrones for younger clusters is not likely to yield accurate results (unless the color-Teff
relation is recalibrated, as described for example in Jeffries & Oliveira (2005)). On the other
hand, the Baraffe et al. (1998) model provides a quite good fit to the Pleiades single star
locus for an age of 100 Myr in the K vs. I − K plane.² This perhaps lends support to
the hypothesis that the misfit in the V vs. V − I plane is due to missing opacity in their
V band atmospheres for low mass stars (see also Chabrier et al. (2000) for further evidence
in support of this idea). The Siess et al. (2000) isochrones do not fit the Pleiades locus in
the K vs. I − K plane particularly well, being too faint near I − K = 2 and too bright for
I − K > 2.5.
5. Identification of New Very Low Mass Candidate Members
The highest spatial density for Pleiades members of any mass should be at the cluster
center. However, searches for substellar members of the Pleiades have generally avoided
the cluster center because of the deleterious effects of scattered light from the high mass
cluster members and because of the variable background from the Pleiades reflection nebulae.
The deep 2MASS and IRAC 3.6 and 4.5 µm imaging provide accurate photometry to well
below the hydrogen burning mass limit, and are less affected by the nebular emission than
shorter wavelength images. We therefore expect that it should be possible to identify a new
² These isochrones are calculated for the standard K filter, rather than Ks. However, the difference in
location of the isochrones in these plots because of this should be very slight, and we do not believe our
conclusions are significantly affected.
set of candidate Pleiades substellar members by combining our new near and mid-infrared
photometry.
The substellar mass limit in the Pleiades occurs at about Ks=14.4, near the limit of
the 2MASS All-Sky PSC. As illustrated in Figure 2, the deep 2MASS survey of the Pleiades
should easily detect objects at least two magnitudes fainter than the substellar limit. The
key to actually identifying those objects and separating them from the background sources
is to find color-magnitude or color-color diagrams which separate the Pleiades members from
the other objects. As shown in Figure 15, late-type Pleiades members separate fairly well
from most field stars towards the Pleiades in a Ks vs. Ks − [3.6] color-magnitude diagram.
However, as illustrated in Figure 2, in the Ks magnitude range of interest there is also a
large population of red galaxies, and they are in fact the primary contaminants to identi-
fying Pleiades substellar objects in the Ks vs. Ks − [3.6] plane. Fortunately, most of the
contaminant galaxies are slightly resolved in the 2MASS and IRAC imaging, and we have
found that we can eliminate most of the red galaxies by their non-stellar image shape.
Figure 15 shows the first step in our process of identifying new very low mass members of
the Pleiades. The red plus symbols are the known Pleiades members from Table 2. The red
open circles are candidate Pleiades substellar members from deep imaging surveys published
in the literature, mostly of parts of the cluster exterior to the central square degree, where
the IRAC photometry is from Lowrance et al. (2007). The blue, filled circles are field M
and L dwarfs, placed at the distance of the Pleiades, using photometry from Patten et al.
(2006). Because the Pleiades is ∼100 Myr old, its very low mass stellar and substellar objects
will be displaced about 0.7 mag above the locus of the field M and L dwarfs according to the
Baraffe et al. (1998) and Chabrier et al. (2000) models, in accord with the location in the
diagram of the previously identified, candidate VLM and substellar objects. The trapezoidal
shaped region outlined with a dashed line is the region in the diagram which we define as
containing candidate new VLM and substellar members of the Pleiades. We place the faint
limit of this region at Ks=16.2 in order to avoid the large apparent increase in faint, red
objects for Ks> 16.2, caused largely by increasing errors in the Ks photometry. Also, the
2MASS extended object flags cease to be useful fainter than about Ks= 16.
We took the following steps to identify a set of candidate substellar members of the
Pleiades:
• keep only objects which fall in the trapezoidal region in Figure 15;
• remove objects flagged as non-stellar by the 2MASS pipeline software;
• remove objects which appear non-stellar to the eye in the IRAC images;
• remove objects which do not fall in or near the locus of field M and L dwarfs in a J−H
vs. H −Ks diagram;
• remove objects which have 3.6 and 4.5 µm magnitudes that differ by more than 0.2 mag;
• remove objects which fall below the ZAMS in a J vs. J −Ks diagram.
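The culling steps above can be sketched as a single pass of boolean filters. The field names are invented for illustration; only the thresholds (the trapezoidal region, the 0.2 mag IRAC consistency cut) come from the text:

```python
def passes_culling(obj):
    """Apply the substellar-candidate culling steps listed above to a
    candidate record (a dict with hypothetical field names)."""
    checks = [
        obj["in_trapezoid"],                    # Ks vs Ks-[3.6] region, Fig. 15
        not obj["nonstellar_2mass_pipeline"],   # 2MASS extended-object flags
        not obj["nonstellar_irac_by_eye"],      # visual check of IRAC images
        obj["near_dwarf_locus_jh_hks"],         # J-H vs H-Ks M/L dwarf locus
        abs(obj["m36"] - obj["m45"]) <= 0.2,    # consistent [3.6] and [4.5]
        obj["above_zams_j_jks"],                # J vs J-Ks ZAMS cut
    ]
    return all(checks)

candidate = {"in_trapezoid": True, "nonstellar_2mass_pipeline": False,
             "nonstellar_irac_by_eye": False, "near_dwarf_locus_jh_hks": True,
             "m36": 14.81, "m45": 14.73, "above_zams_j_jks": True}
assert passes_culling(candidate)
```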
As shown in Figure 15, all stars earlier than about mid-M have Ks − [3.6] colors bluer
than 0.4. This ensures that for most of the area of the trapezoidal region, the primary
contaminants are distant galaxies. Fortunately, the 2MASS catalog provides two types of
flags for identifying extended objects. For each filter, a chi-square flag measures the match
between the object's shape and the instrumental PSF, with values greater than 2.0 generally
indicative of a non-stellar object. In order not to be misguided by an image artifact in one
filter, we throw out the most discrepant of the three flags and average the other two. We
discard objects with mean χ2 greater than 1.9. The other indicator is the 2MASS extended
object flag, which is the synthesis of several independent tests of the object's shape, surface
brightness and color (see Jarrett et al. (2000) for a description of this process). If one
simply excludes the objects classified as extended in the 2MASS 6x image by either of these
techniques, the number of candidate VLM and substellar objects lying inside the trapezoidal
region decreases by nearly a half.
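The χ² averaging rule admits a short implementation. "Most discrepant" is read here as the value farthest from the median of the three — one plausible interpretation, since the text does not define it formally:

```python
def mean_shape_chi2(chi2_jhk):
    """Average the two least discrepant of the three per-band shape
    chi^2 flags, discarding the value farthest from the median."""
    med = sorted(chi2_jhk)[1]
    keep = sorted(chi2_jhk, key=lambda v: abs(v - med))[:2]
    return sum(keep) / 2.0

def looks_stellar(chi2_jhk, threshold=1.9):
    """Objects with mean chi^2 above 1.9 are treated as non-stellar."""
    return mean_shape_chi2(chi2_jhk) <= threshold

# A single-band artifact (chi^2 = 8.0 in one filter) is ignored:
assert looks_stellar([1.1, 1.3, 8.0])
# Consistently poor PSF fits in all three bands mark the object extended:
assert not looks_stellar([2.4, 2.6, 2.2])
```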
We have one additional means to demonstrate that many of the identified objects are
probably Pleiades members, and that is via proper motions. The mean Pleiades proper
motion is ∆RA = 20 mas yr−1 and ∆Dec = −45 mas yr−1 (Jones 1973). With an epoch
difference of only 3.5 years between the deep 2MASS and IRAC imaging, the expected motion
for a Pleiades member is only 0.07 arcseconds in RA and −0.16 arcseconds in Dec. Given
the relatively large pixel size for the two cameras, and the undersampled nature of the IRAC
3.6 and 4.5 µm images, it is not a priori obvious that one would expect to reliably detect the
Pleiades motion. However, both the 2MASS and IRAC astrometric solutions have been very
accurately calibrated. Also, for the present purpose, we only ask whether the data support a
conclusion that most of the identified substellar candidates are true Pleiades members (i.e.,
as an ensemble), rather than that each star is well enough separated in a vector point diagram to derive a
high membership probability.
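The expected ensemble shift quoted in this section is simple arithmetic on the mean proper motion and the epoch baseline:

```python
# Mean Pleiades proper motion (Jones 1973) and the 2MASS-6x -> IRAC
# epoch difference quoted in the text.
mu_ra_mas_yr, mu_dec_mas_yr = 20.0, -45.0
baseline_yr = 3.5

shift_ra_arcsec = mu_ra_mas_yr * baseline_yr / 1000.0    # +0.07
shift_dec_arcsec = mu_dec_mas_yr * baseline_yr / 1000.0  # -0.1575, i.e. ~ -0.16
```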
Figure 16 provides a set of plots that we believe support the conclusion that the majority
of the surviving VLM and substellar candidates are Pleiades members. The first plot shows
the measured motions between the epoch of the 2MASS and IRAC observations for all known
Pleiades members from Table 2 that lie in the central square degree region and have 11 <
Ks < 14 (i.e., just brighter than the substellar candidates). The mean offset of the Pleiades
stellar members from the background population is well-defined and is quantitatively of the
expected magnitude and sign (+0.07 arcsec in RA and −0.16 arcsec in Dec). The RMS
dispersion of the coordinate difference for the field population in RA and Dec is 0.076 and
0.062 arcseconds, supportive of our claim that the relative astrometry for the two cameras
is quite good. Because we expect that the background population should have essentially
no mean proper motion, the non-zero mean “motion” of the field population of about
⟨∆RA⟩ = 0.3 arcseconds is presumably not real. Instead, the offset is probably due to the
uncertainty in transferring the Spitzer coordinate zero-point between the warm star-tracker
and the cryogenic focal plane. Because it is simply a zero-point offset applicable to all the
objects in the IRAC catalog, it has no effect on the ability to separate Pleiades members
from the field star population.
The second panel in Figure 16 shows the proper motion of the candidate Pleiades VLM
and substellar objects. While these objects do not show as clean a distribution as the known
members, their mean motion is clearly in the same direction. After removing 2-σ deviants,
the median offsets for the substellar candidates are 0.04 and −0.11 arcseconds in RA and
Dec, respectively. The objects whose motions differ significantly from the Pleiades mean
may be non-members or they may be members with poorly determined motions (since a few
of the high probability members in the first panel also show discrepant motions).
The other two panels in Figure 16 show the proper motions of two possible control
samples. The first control sample was defined as the set of stars that fall up to 0.3 magnitudes
below the lower sloping boundary of the trapezoid in Figure 15. These objects should be late
type dwarfs that are either older or more distant than the Pleiades or red galaxies. We used
the 2MASS data to remove extended or blended objects from the sample in the same way as
for the Pleiades candidates. If the objects are nearby field stars, we expect to see large proper
motions; if galaxies, the real proper motions would be small – but relatively large apparent
proper motions due to poor centroiding or different centroids at different effective wavelengths
could be present. The second control set was defined to have −0.1 < K − [3.6] < 0.1 and
14.0 < K < 14.5, and to be stellar based on the 2MASS flags. This control sample should
therefore be relatively distant G and K dwarfs primarily. Both control samples have proper
motion distributions that differ greatly from the Pleiades samples and that make sense for,
respectively, a nearby and a distant field star sample.
Figure 17 shows the Pleiades members from Table 2 and the 55 candidate VLM and
substellar members that survived all of our culling steps. We cross-correlated this list with the
stars from Table 2 and with a list of the previously identified candidate substellar members
of the cluster from other deep imaging surveys. Fourteen of the surviving objects correspond
to previously identified Pleiades VLM and substellar candidates. We provide the new list
of candidate members in Table 5. The columns marked as µ(RA) and µ(DEC) are the
measured motions, in arcsec over the 3.5 year epoch difference between the 2MASS-6x and
IRAC observations. Forty-two of these objects have Ks> 14.0, and hence inferred masses
less than about 0.1 M⊙; thirty-one of them have Ks> 14.4, and hence have inferred masses
below the hydrogen burning mass limit.
Our candidate list could be contaminated by foreground late type dwarfs that happen
to lie in the line of sight to the Pleiades. How many such objects should we expect? In
order to pass our culling steps, such stars would have to be mid to late M dwarfs, or early
to mid L dwarfs. We use the known M dwarfs within 8 pc to estimate how many field M
dwarfs should lie in a one square degree region and at distance between 70 and 100 parsecs
(so they would be coincident in a CMD with the 100 Myr Pleiades members). The result is
∼3 such field M dwarf contaminants. Cruz et al. (2006) estimate that the volume density of
L dwarfs is comparable to that for late-M dwarfs, and therefore a very conservative estimate
is that there might also be 3 field L dwarfs contaminating our sample. We regard this (6
contaminating field dwarfs) as an upper limit because our various selection criteria would
exclude early M dwarfs and late L dwarfs. Bihain et al. (2006) made an estimate of the
number of contaminating field dwarfs in their Pleiades survey of 1.8 square degrees; for the
spectral type range of our objects, their algorithm would have predicted just one or two
contaminating field dwarfs for our survey.
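An order-of-magnitude version of the contamination estimate: count the M dwarfs expected in a 1 deg² cone between 70 and 100 pc, scaling from the local volume density. The density below is an assumed, round solar-neighborhood value, not a number taken from the paper:

```python
import math

# Volume of a 1 deg^2 truncated cone between 70 and 100 pc.
omega_sr = math.radians(1.0) ** 2                 # 1 deg^2 in steradians
d_near_pc, d_far_pc = 70.0, 100.0
volume_pc3 = omega_sr / 3.0 * (d_far_pc**3 - d_near_pc**3)  # ~67 pc^3

n_mdwarfs_per_pc3 = 0.05                          # assumed local density
expected_contaminants = n_mdwarfs_per_pc3 * volume_pc3      # ~3
```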
How many substellar Pleiades members should there be in the region we have surveyed?
That is, of course, part of the question we are trying to answer. However, previous studies
have estimated that the Pleiades stellar mass function for M < 0.5 M⊙ can be approximated
as a power-law with an exponent of −1 (dN/dM ∝ M^−1). Using the known Pleiades members
from Table 2 that lie within the region of the IRAC survey and that have masses of 0.2 <
M/M⊙< 0.5 (as estimated from the Baraffe et al. (1998) 100 Myr isochrone) to normalize
the relation, the M^−1 mass function predicts about 48 members in our search region and
with 14 < K < 16.2 (corresponding to 0.1 < M/M⊙< 0.035). Other studies have suggested
that the mass function in the Pleiades becomes shallower below 0.1 M⊙, dN/dM ∝ M^−0.6.
Using the same normalization as above, this functional form for the Pleiades mass function
for M < 0.1 M⊙ yields a prediction of 20 VLM and substellar members in our survey. The
number of candidates we have found falls between these two estimates. Better proper motions
and low-resolution spectroscopy will almost certainly eliminate some of these candidates as
non-members.
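The two predictions above come from integrating the assumed power law and rescaling by the count of 0.2-0.5 M⊙ members. A sketch of that calculation (n_norm is a placeholder for the actual number counted in the survey region; the exact outputs depend on the adopted normalization, so the quoted figures of 48 and 20 are not reproduced exactly here):

```python
import math

def predicted_count(n_norm, alpha, m_lo=0.035, m_hi=0.1,
                    norm_lo=0.2, norm_hi=0.5):
    """Number of members expected in [m_lo, m_hi] (solar masses) from
    dN/dM ∝ M^alpha, normalized to n_norm members in [norm_lo, norm_hi]."""
    def integral(a, b):
        if alpha == -1.0:
            return math.log(b / a)
        p = alpha + 1.0
        return (b**p - a**p) / p
    return n_norm * integral(m_lo, m_hi) / integral(norm_lo, norm_hi)

# For the same normalization, the alpha = -1 slope predicts roughly
# twice as many VLM/substellar members as the shallower alpha = -0.6
# slope, bracketing the number of candidates actually found.
ratio = predicted_count(100, -1.0) / predicted_count(100, -0.6)  # ~2
```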
6. Mid-IR Observations of Dust and PAHS in the Pleiades
Since the earliest days of astrophotography, it has been clear that the Pleiades stars
are in relatively close proximity to interstellar matter whose optical manifestation is the
spider-web like network of filaments seen particularly strongly towards several of the B stars
in the cluster. High resolution spectra of the brightest Pleiades stars as well as CO maps
towards the cluster show that there is gas as well as dust present, and that the (primary)
interstellar cloud has a significant radial velocity offset relative to the Pleiades (White 2003;
Federman & Willson 1984). The gas and dust, therefore, are not a remnant from the forma-
tion of the cluster but are simply evidence of a transitory event as this small cloud passes by
the cluster in our line of sight (see also Breger (1986)). There are at least two claimed mor-
phological signatures of a direct interaction of the Pleiades with the cloud. White & Bally
(1993) provided evidence that the IRAS 60 and 100 µm image of the vicinity of the Pleiades
showed a dark channel immediately to the east of the Pleiades, which they interpreted as
the “wake” of the Pleiades as it plowed through the cloud from the east. Herbig & Simon
(2001) provided a detailed analysis of the optically brightest nebular feature in the Pleiades
– IC 349 (Barnard’s Merope nebula) – and concluded that the shape and structure of that
nebula could best be understood if the cloud was running into the Pleiades from the south-
east. Herbig & Simon (2001) concluded that the IC 349 cloudlet, and by extension the rest
of the gas and dust enveloping the Pleiades, are relatively distant outliers of the Taurus
molecular clouds (see also Eggen (1950) for a much earlier discussion ascribing the Merope
nebulae as outliers of the Taurus clouds). White (2003) has more recently proposed a hybrid
model, where there are two separate interstellar cloud complexes with very different space
motions, both of which are colliding simultaneously with the Pleiades and with each other.
Breger (1986) provided polarization measurements for a sample of member and back-
ground stars towards the Pleiades, and argued that the variation in polarization signatures
across the face of the cluster was evidence that some of the gas and dust was within the clus-
ter. In particular, Figure 6 of that paper showed a fairly distinct interface region, with little
residual polarization to the NE portion of the cluster and an L-shaped boundary running
EW along the southern edge of the cluster and then north-south along the western edge of
the cluster. Stars to the south and west of that boundary show relatively large polarizations
and consistent angles (see also our Figure 5 where we provide a few polarization vectors from
Breger (1986) to illustrate the location of the interface region and the fact that the position
angle of the polarization correlates well with the location in the interface).
There is a general correspondence between the polarization map and what is seen with
IRAC, in the sense that the B stars in the NE portion of the cluster (Atlas and Alcyone)
have little nebular emission in their vicinity, whereas those in the western part of the cluster
(Maia, Electra and Asterope) have prominent, filamentary dust emission in their vicinity.
The L-shaped boundary is in fact visible in Figure 4 as enhanced nebular emission running
between and below a line roughly joining Merope and Electra, and then making a right
angle and running roughly parallel to a line running from Electra to Maia to HII1234 (see
Figure 5).
6.1. Pleiades Dust-Star Encounters Imaged with IRAC
The Pleiades dust filaments are most strongly evident in IRAC’s 8 µm channel, as
evidenced by the distinct red color of the nebular features in Figure 4. The dominance at 8
µm is an expected feature of reflection nebulae, as exemplified by NGC 7023 (Werner et al.
2004), where most of the mid-infrared emission arises from polycyclic aromatic hydrocarbons
(PAHs) whose strongest bands in the 3 to 10 µm region fall at 7.7 and 8.6 µm. One might
expect that if portions of the passing cloud were particularly near to one of the Pleiades
members, it might be possible to identify such interactions by searching for stars with 8.0
µm excesses or for stars with extended emission at 8 µm. Figure 18 provides two such plots.
Four stars stand out as having significant extended 8 µm emission, with two of those stars
also having an 8 µm excess based on their [3.6]−[8.0] color. All of these stars, plus IC 349,
are located approximately along the interface region identified by Breger (1986).
We have subtracted a PSF from the 8 µm images for the stars with extended emission,
and those PSF-subtracted images are provided in Figure 19. The image for HII 1234 has
the appearance of a bow-shock. The shape is reminiscent of predictions for what one should
expect from a collision between a large cloud or a sheet of gas and an A star as described
in Artymowicz & Clampin (1997). The Artymowicz & Clampin (1997) model posits that
A stars encountering a cloud will carve a paraboloidal shaped cavity in the cloud via radi-
ation pressure. The exact size and shape of the cavity depend on the relative velocity of
the encounter, the star’s mass and luminosity and properties of the ISM grains. For typical
parameters, the predicted characteristic size of the cavity is of order 1000 AU, quite compa-
rable to the size of the structures around HII 652 and HII 1234. The observed appearance
of the cavity depends on the view angle to the observer. However, in any case, the direction
from which the gas is moving relative to the star can be inferred from the location of the
star relative to the curved rim of the cavity; the “wind” originates approximately from the
direction connecting the star and the apex of the rim. For HII 1234, this indicates the cloud
which it is encountering has a motion relative to HII 1234 from the SSE, in accord with a
Taurus origin and not in accord for where a cloud is impacting the Pleiades from the west
as posited in White (2003). The nebular emission for HII 652 is less strongly bow-shaped,
but the peak of the excess emission is displaced roughly southward from the star, consistent
with the Taurus model and inconsistent with gas flowing from the west.
Despite being the brightest part of the Pleiades nebulae in the optical, IC 349 appears
to be undetected in the 8 µm image. This is not because the 8 µm image is insensitive to
the nebular emission - there is generally good agreement between the structures seen in the
optical and at 8 µm, and most of the filaments present in optical images of the Pleiades are
also visible on the 8 µm image (see Figures 4 and 19) and even the psf-subtracted image of
Merope shows well-defined nebular filaments. The lack of enhanced 8 µm emission from the
region of IC 349 is probably because all of the small particles have been scoured away from
this cloudlet, consistent with Herbig’s model to explain the HST surface photometry and
colors. There is no PAH emission from IC 349 because there are none of the small molecules
that are the postulated source of the PAH emission.
IC 349 is very bright in the optical, and undetected to a good sensitivity limit at 8 µm; it
must therefore be detectable via imaging at some wavelength between 5000 Å and 8 µm. We
checked our 3.6 µm data for this purpose. In the standard BCD mosaic image, we were unable
to discern an excess at the location of IC 349, either by displaying the image with various
stretches or by taking cuts through the image. We performed a PSF subtraction of Merope
from the image in order to improve our ability to detect faint, extended emission 30″ from
Merope; unfortunately, bright stars have ghost images in IRAC Ch. 1, and in this case the
ghost image falls almost exactly at the location of IC 349. IC 349 is also not detected in
visual inspection of our 2MASS 6x images.
6.2. Circumstellar Disks and IRAC
As part of the Spitzer FEPS (Formation and Evolution of Planetary Systems) Legacy
program, using pointed MIPS photometry, Stauffer et al. (2005) identified three G dwarfs
in the Pleiades as having 24 µm excesses probably indicative of circumstellar dust disks.
Gorlova et al. (2006) reported results of a MIPS GTO survey of the Pleiades, and identified
nine cluster members that appear to have 24 µm excesses due to circumstellar disks. However,
it is possible that in a few cases these apparent excesses could be due instead to a knot of
the passing interstellar dust impacting the cluster member, or that the 24 µm excess could
be flux from a background galaxy projected onto the line of sight to the Pleiades member.
Careful analysis of the IRAC images of these cluster members may help confirm that the
MIPS excesses are evidence for debris disks rather than the other possible explanations.
Six of the Pleiades members with probable 24 µm excesses are included in the region
mapped with IRAC. However, only four of them have data at 8 µm – the other two fall
near the edge of the mapped region and only have data at 3.6 and 5.8 µm. None of the
six stars appear to have significant local nebular dust from visual inspection of the IRAC
mosaic images. Also, none of them appear problematic in Figure 18. For a slightly more
quantitative analysis of possible nebular contamination, we also constructed aperture growth
curves for the six stars, and compared them to other Pleiades members. All but one of the
six show aperture growth curves that are normal and consistent with the expected IRAC
PSF. The one exception is HII 489, which has a slight excess at large aperture sizes as is
illustrated in Figure 20. Because HII 489 only has a small 24 µm excess, it is possible that
the 24 µm excess is due to a local knot of the interstellar cloud material and is not due to a
debris disk. For the other five 24 µm excess stars we find no such problem, and we conclude
that their 24 µm excesses are indeed best explained as due to debris disks.
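The aperture-growth-curve check described above can be illustrated with a short sketch; this is our own illustration using a synthetic Gaussian star, not the photometry pipeline actually used for the paper. The function names and the test source are ours.

```python
import numpy as np

# Illustrative sketch (not the pipeline used in the paper): a curve of
# growth sums the flux inside circular apertures of increasing radius,
# normalized to the largest aperture. A source contaminated by local
# nebulosity keeps rising at large radii instead of flattening out.

def growth_curve(image, x0, y0, radii):
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    flux = np.array([image[r <= rad].sum() for rad in radii])
    return flux / flux[-1]

# synthetic star: circular Gaussian (sigma = 3 pixels) on a blank field
yy, xx = np.indices((101, 101))
star = np.exp(-((xx - 50.0)**2 + (yy - 50.0)**2) / (2.0 * 3.0**2))
curve = growth_curve(star, 50, 50, radii=[3, 6, 9, 12, 24])
```

For an isolated point source the normalized curve flattens to unity within a few stellar FWHM; an excess like that of HII 489 would appear as a curve still climbing at the outermost apertures.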
7. Summary and Conclusions
We have collated the primary membership catalogs for the Pleiades to produce the first
catalog of the cluster extending from its highest mass members to the substellar limit. At the
bright end, we expect this catalog to be essentially complete and with few or no non-member
contaminants. At the faint end, the data establishing membership are much sparser, and we
expect a significant number of objects will be non-members. We hope that the creation of
this catalog will spur efforts to obtain accurate radial velocities and proper motions for the
faint candidate members in order to eventually provide a well-vetted membership catalog
for the stellar members of the Pleiades. Towards that end, it would be useful to update
the current catalog with other data – such as radial velocities, lithium equivalent widths,
x-ray fluxes, Hα equivalent widths, etc. – which could be used to help accurately establish
membership for the low mass cluster candidates. It is also possible to make more use of
“negative information” present in the proper motion catalogs. That is, if a member from
one catalog is not included in another study but does fall within its areal and luminosity
coverage, that suggests that it likely failed the membership criteria of the second study. For
a few individual stars, we have done this type of comparison, but a systematic analysis of
the proper motion catalogs should be conducted. We intend to undertake these tasks, and
plan to establish a website where these data would be hosted.
We have used the new Pleiades member catalog to define the single-star locus at 100
Myr for BV IcKs and the four IRAC bands. These curves can be used as empirical calibration
curves when attempting to identify members of less well-studied, more distant clusters of
similar age. We compared the Pleiades photometry to theoretical isochrones from Siess et al.
(2000) and Baraffe et al. (1998). The Siess et al. (2000) isochrones are not, in detail, a good
fit to the Pleiades photometry, particularly for low mass stars. The Baraffe et al. (1998) 100
Myr isochrone does fit the Pleiades photometry very well in the I vs. I −K plane.
We have identified 31 new substellar candidate members of the Pleiades using our com-
bined seven-band infrared photometry, and have shown that the majority of these objects
appear to share the Pleiades proper motion. We believe that most of the objects that may
be contaminating our list of candidate brown dwarfs are likely to be unresolved galaxies, and
therefore low resolution spectroscopy should be able to provide a good criterion for culling
our list of non-members.
The IRAC images, particularly the 8 µm mosaic, provide vivid evidence of the strong in-
teraction of the Pleiades stars and the interstellar cloud that is passing through the Pleiades.
Our data are supportive of the model proposed by Herbig & Simon (2001) whereby the pass-
ing cloud is part of the Taurus cloud complex and hence is encountering the Pleiades from
the SSE direction. White & Bally (1993) had proposed a model whereby the cloud was
encountering the Pleiades from the west and used this to explain features in the IRAS 60
and 100 µm images of the region as the wake of the Pleiades moving through the cloud.
Our data do not support that hypothesis, which leaves the apparent structure in the IRAS
maps unexplained.
Most of the support for this work was provided by the Jet Propulsion Laboratory, Cal-
ifornia Institute of Technology, under NASA contract 1407. This research has made use of
NASA’s Astrophysics Data System (ADS) Abstract Service, and of the SIMBAD database,
operated at CDS, Strasbourg, France. This research has made use of data products from
the Two Micron All-Sky Survey (2MASS), which is a joint project of the University of Mas-
sachusetts and the Infrared Processing and Analysis Center, funded by the National Aero-
nautics and Space Administration and the National Science Foundation. These data were
served by the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion
Laboratory, California Institute of Technology, under contract with the National Aeronautics
and Space Administration. The research described in this paper was partially carried out at
the Jet Propulsion Laboratory, California Institute of Technology, under contract with the
National Aeronautics and Space Administration.
A. APPENDIX
A.1. Membership Catalogs
Membership lists of the Pleiades date back to antiquity if one includes historical and
literary references to the Seven Sisters (Alcyone, Maia, Merope, Electra, Taygeta, Asterope
and Celeno) and their parents (Atlas and Pleione). The first paper discussing relative proper
motions of a large sample of stars in the Pleiades (based on visual observations) was published
by Pritchard (1884). The best of the early proper motion surveys of the Pleiades derived
from photographic plate astrometry was that by Trumpler (1921), based on plates obtained
at Yerkes and Lick observatories. The candidate members from that survey were presented
in two tables, with the first being devoted to candidate members within about one degree
from the cluster center (operationally, within one degree from Alcyone) and the second table
being devoted to candidates further than one degree from the cluster center. Most of the
latter stars were denoted by Trumpler by an S or R, followed by an identification number.
We use Tr to designate the Trumpler stars (hence Trnnn for a star from the 1st table and
the small number of stars in the second table without an “S” or an “R”, and TrSnnn or
TrRnnn for the other stars). For the central region, Trumpler’s catalog extends to V ∼ 13,
while the outer region catalog includes stars only to about V ∼ 9.
The most heavily referenced proper motion catalog of the Pleiades is that provided by
Hertzsprung (1947). That paper makes reference to two separate catalogs: a photometric
catalog of the Pleiades published by Hertzsprung in 1923 (Hertzsprung 1923), whose members
are commonly referred to by HI numbers, and the new proper motion catalog from the
1947 paper, commonly referenced as the HII catalog. While both HI and HII numbers
have been used in subsequent observational papers, it is the HII identification numbers
that predominate. That catalog – derived from Carte du Ciel blue-sensitive plates from
14 observatories – includes stars in the central 2×2 degree region of the cluster, and has
a faint limit of about V = 15.5. Johnson system BV I photometry is provided for most
of the proposed Hertzsprung members in Johnson & Mitchell (1958) and Iriarte (1967).
Additional Johnson B and V photometry plus Kron I photometry for a fairly large number
of the Hertzsprung members can be found in Stauffer (1980), Stauffer (1982), and Stauffer
(1984). Other Johnson BV photometry for a scattering of stars can be found in Jones (1973),
Robinson & Kraft (1974), Messina (2001). Spectroscopic confirmation, primarily via radial
velocities, that these are indeed Pleiades members has been provided in Soderblom et al.
(1993); Queloz et al. (1998) and Mermilliod et al. (1997).
Two other proper motion surveys provide relatively bright candidate members relatively
far from the cluster center: Artyukhina & Kalinina (1970) and van Leeuwen, Alphenaar, & Brand
(1986). Stars from the Artyukhina catalog are designated as AK followed by the region from
which the star was identified followed by an identification number. The new members pro-
vided in the van Leeuwen paper were taken from an otherwise unpublished proper motion
study by Pels, where the first 118 stars were considered probable members and the remaining
75 stars were considered possible members. Van Leeuwen categorized a number of the Pels
stars as non-members based on the Walraven photometry they obtained, and we adopt those
findings. Radial velocities for stars in these two catalogs have been obtained by Rosvick et al
(1992), Mermilliod et al. (1997), and Queloz et al. (1998), and those authors identified a list
of the candidate members that they considered confirmed by the high resolution spectroscopy.
For these outlying candidate members, to be included in Table 2 we require that the star
be a radial velocity member from one of the above three surveys, or be indicated as having
“no dip” in the Coravel cross-correlation (indicating rapid rotation, which at least for the
later type stars is suggestive of membership). Geneva photometry of the Artyukhina stars
considered as likely members was provided by Mermilliod et al. (1997). The magnitude limit
of these surveys was not well-defined, but most of the Artyukhina and Pels stars are brighter
than V=13.
Jones (1973) provided proper motion membership probabilities for a large sample of
proposed Pleiades members, and for a set of faint, red stars towards the Pleiades. A few
star identification names from the sources considered by Jones appear in Table 2, including
MT (McCarthy & Treanor 1964), VM (van Maanen 1946), ALR (Ahmed et al. 1965), and
J (Jones 1973).
The chronologically next significant source of new Pleiades candidate members was
the flare star survey of the Pleiades conducted at several observatories in the 1960s, and
summarized in Haro et al. (1982), hereafter HCG. The logic behind these surveys was that
even at 100 Myr, late type dwarfs have relatively frequent and relatively high luminosity
flares (as demonstrated by Johnson & Mitchell (1958) having detected two flares during
their photometric observations of the Pleiades), and therefore wide area, rapid cadence
imaging of the Pleiades at blue wavelengths should be capable of identifying low mass cluster
members. However, such surveys also will detect relatively young field dwarfs, and therefore
it is best to combine the flare star surveys with proper motions. Dedicated proper motion
surveys of the HCG flare stars were conducted by Jones (1981) and Stauffer et al. (1991),
with the latter also providing photographic V I photometry (Kron system). Photoelectric
photometry for some of the HCG stars have been reported in Stauffer (1982), Stauffer (1984),
Stauffer & Hartmann (1987), and Prosser et al. (1991). High resolution spectroscopy of
many of the HCG stars is reported in Stauffer (1984), Stauffer & Hartmann (1987) and
Terndrup et al. (2000). Because a number of the papers providing additional observational
data for the flare stars were obtained prior to 1982, we also include in Table 2 the original
flare star names which were derived from the observatory where the initial flare was detected.
Those names are of the form an initial letter indicating the observatory – A (Asiago), B
(Byurakan), K (Konkoly), T (Tonantzintla) – followed by an identification number.
Stauffer et al. (1991) conducted two proper motion surveys of the Pleiades over an
approximately 4×4 degree region of the cluster based on plates obtained with the Lick 20-inch
astrographic telescope. The first survey was essentially unbiased, except for the requirement
that the stars fall approximately in the region of the V vs. V − I color-magnitude diagram
where Pleiades members should lie. Candidate members from this survey are designated
by SK numbers. The second survey was a proper motion survey of the HCG stars. Photo-
graphic V I photometry of all the stars was provided as well as proper motion membership
probabilities. Photoelectric photometry for some of the candidate members was obtained as
detailed above in the section on the HCG catalog stars. The faint limit of these surveys is
about V=18.
Hambly et al. (1991) provided a significantly deeper, somewhat wider area proper mo-
tion survey, with the faintest members having V ≃ 20 and the total area covered being
of order 25 square degrees. The survey utilized red sensitive plates from the Palomar and
UK Schmidt telescopes. Due to incomplete coverage at one epoch, there is a vertical swath
slightly east of the cluster center where no membership information is available. Stars from
this survey are designated by their HHJ numbers. Hambly et al. (1993) provide RI photo-
graphic photometry on a natural system for all of their candidate members, plus photoelectric
Cousins RI photometry for a small number of stars and JHK photometry for a larger sam-
ple. Some spectroscopy to confirm membership has been reported in Stauffer et al. (1994),
Stauffer et al. (1995), Oppenheimer et al. (1997), Stauffer et al. (1998), and Steele et al.
(1995), though for most of the HHJ stars there is no spectroscopic membership confirma-
tion.
Pinfield et al. (2000) provide the deepest wide-field proper motion survey of the Pleiades.
That survey combines CCD imaging of six square degrees of the Pleiades obtained with the
Burrell Schmidt telescope (as five separate, non-overlapping fields near but outside the cluster
center) with deep photographic plates which provide the 1st epoch positions. Candidate
members are designated by BPL numbers (for Burrell Pleiades), with the faintest stars
having I ≃ 19.5, corresponding to V > 23. Only the stars brighter than about I= 17 have
sufficiently accurate proper motions to use to identify Pleiades members. Fainter than I=
17, the primary selection criteria are that the star fall in an appropriate place in both an I
vs. I − Z and an I vs. I −K CMD.
Adams et al. (2001) combined the 2MASS and digitized POSS databases to produce a
very wide area proper motion survey of the Pleiades. By design, that survey was very inclu-
sive - covering the entire physical area of the cluster and extending to the hydrogen burning
mass limit. However, it was also very “contaminated”, with many suspected non-members.
The catalog of possible members was not published. We have therefore not included stars
from this study in Table 2; we have used the proper motion data from Adams et al. (2001)
to help decide cases where a given star has ambiguous membership data from the other
surveys.
Deacon & Hambly (2004) provided another deep and very wide area proper motion
survey of the Pleiades. The survey covers a circular area of approximately five degrees
radius to R ∼ 20, or V ∼ 22. Candidate members are designated by DH. Deacon & Hambly
(2004) also provide membership probabilities based on proper motions for many candidate
cluster members from previous surveys. For stars where Deacon & Hambly (2004) derive
P < 0.1 and where we have no other proper motion information or where another proper
motion survey also finds low membership probability, we exclude the star from our catalog.
For cases where two of our proper motion catalogs differ significantly in their membership
assessment, with one survey indicating the star is a probable member, we retain the star
in the catalog as the conservative choice. Examples of the latter where Deacon & Hambly
(2004) derive P < 0.1 include HII 1553, HII 2147, HII 2278 and HII 2665 – all of which
we retain in our catalog because other surveys indicate these are high probability Pleiades
members.
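The retention rule described above can be paraphrased in code. The function below is our sketch of the stated logic, with 0.1 taken as the "low probability" threshold for all surveys; it is not the authors' actual procedure.

```python
def retain_candidate(p_dh, other_probs):
    """Sketch of the catalog-retention rule described in the text.

    p_dh        -- Deacon & Hambly (2004) membership probability
                   (None if the star is not in that survey)
    other_probs -- membership probabilities from other proper motion
                   surveys (may be empty)
    """
    LOW = 0.1
    if p_dh is None or p_dh >= LOW:
        return True
    if not other_probs:
        # P < 0.1 and no other proper motion information: exclude
        return False
    if any(p >= LOW for p in other_probs):
        # surveys disagree and one indicates a probable member:
        # retain as the conservative choice (e.g. HII 1553, HII 2147)
        return True
    # another survey also finds a low membership probability: exclude
    return False
```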
A.2. Photometry
Photometry for stars in open cluster catalogs can be used to help confirm cluster mem-
bership and to help constrain physical properties of those stars or of the cluster. For a
variety of reasons, photometry of stars in the Pleiades has been obtained in a panoply of
different photometric systems. For our own goals, which are to use the photometry to help
verify membership and to define the Pleiades single-star locus in color magnitude diagrams,
we have attempted to convert photometry in several of these systems to a common sys-
tem (Johnson BV and Cousins I). We detail below the sources of the photometry and the
conversions we have employed.
Photoelectric photometry of Pleiades members dates back to at least 1921 (Cummings
1921). However, as far as we are aware the first “modern” photoelectric photometry for the
Pleiades, using a potassium hydride photoelectric cell, is that of Calder & Shapley (1937).
Eggen (1950) provided photoelectric photometry using a 1P21 phototube (but calibrated
to a no-longer-used photographic system) for most of the known Pleiades members within
one degree of the cluster center and with magnitudes < 11. The first phototube photom-
etry of Pleiades stars calibrated more-or-less to the modern UBV system was provided by
Johnson & Morgan (1951). An update of that paper, and the oldest photometry included
here was reported in Johnson & Mitchell (1958), which provided UBV Johnson system pho-
tometry for a large sample of HII and Trumpler candidate Pleiades members. Iriarte (1967)
later reported Johnson system V − I colors for most of these stars. We have converted
Iriarte’s V − I photometry to estimated Cousins V − I colors using a formula from Bessell
(1979):
V − I (Cousins) = 0.778 × [V − I (Johnson)]. (A1)
BV RI photometry for most of the Hertzsprung members fainter than V= 10 has been
published by Stauffer (1980), Stauffer (1982), Stauffer (1984), and Stauffer & Hartmann
(1987). The BV photometry is Johnson system, whereas the RI photometry is on the Kron
system. The Kron V − I colors were converted to Cousins V − I using a transformation
provided by Bessell & Weis (1987):
V − I (Cousins) = 0.227 + 0.9567 (V − I)k + 0.0128 (V − I)k² − 0.0053 (V − I)k³ (A2)
Other Kron system V−I colors have been published for Pleiades candidates in Stauffer et al.
(1991) (photographic photometry) and in Prosser et al. (1991). These Kron-system colors
have also been converted to Cousins V − I using the above formula.
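Equations A1 and A2 are straightforward to apply; a minimal sketch follows. The function names are ours; the coefficients are those quoted above from Bessell (1979) and Bessell & Weis (1987).

```python
# Minimal sketch of the color transformations quoted in eqs. A1 and A2.

def vmi_johnson_to_cousins(vmi_johnson):
    """Cousins V-I from a Johnson V-I color (eq. A1)."""
    return 0.778 * vmi_johnson

def vmi_kron_to_cousins(vmi_kron):
    """Cousins V-I from a Kron-system V-I color (eq. A2)."""
    return (0.227 + 0.9567 * vmi_kron
            + 0.0128 * vmi_kron**2
            - 0.0053 * vmi_kron**3)
```

For example, a Kron color of (V − I)k = 2.0, typical of a mid-M Pleiades member, converts to (V − I) Cousins ≈ 2.15.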
Johnson/Cousins UBV R photometry for a set of low mass Pleiades members was pro-
vided by Landolt (1979). We only use the BV magnitudes from that study. Additional John-
son system UBV photometry for small numbers of stars is provided in Robinson & Kraft
(1974), Messina (2001) and Jones (1973).
van Leeuwen, Alphenaar, & Meys (1987) provided Walraven V BLUW photometry for
nearly all of the Hertzsprung members brighter than V ∼ 13.5 and for the Pels candidate
members. Van Leeuwen provided an estimated Johnson V derived from the Walraven V
in his tables. We have transformed the Walraven V − B color into an estimate of Johnson
B − V using a formula from Rosvick et al (1992):
B − V (Johnson) = 2.571 (V − B) − 1.02 (V − B)² + 0.5 (V − B)³ − 0.01 (A3)
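The Walraven transformation (eq. A3) in the same form; the function name is ours, the coefficients are those of Rosvick et al. (1992) as quoted above.

```python
def bmv_from_walraven(walraven_vmb):
    """Johnson B-V from a Walraven V-B color (eq. A3)."""
    v = walraven_vmb
    return 2.571 * v - 1.02 * v**2 + 0.5 * v**3 - 0.01
```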
Hambly et al. (1993) provided photographic V RI photometry for all of the HHJ candidate
members, and V RI Cousins photoelectric photometry for a small fraction of those stars. We
took all of the HHJ stars with photographic photometry for which we also have photoelectric
V I photometry on the Cousins system, and plotted V (Cousins) vs. V (HHJ) and I(Cousins)
vs. I(HHJ). While there is some evidence for slight systematic departures of the HHJ photo-
graphic photometry from the Cousins system, those departures are relatively small and we
have chosen simply to retain the HHJ values and treat them as Cousins system.
Pinfield et al. (2000) reported their I magnitudes in an instrumental system which they
designated as Ikp. We identified all BPL candidate members for which we had photoelectric
Cousins I estimates, and plotted Ikp vs. IC. Figure 21 shows this correlation, and the
piecewise linear fit we have made to convert from Ikp to IC. Our catalog lists these converted
IC measures for the BPL stars for which we have no other photoelectric I estimates.
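A piecewise linear conversion of this kind is a one-liner with `numpy.interp`. The node values below are hypothetical placeholders, since the actual fit is only shown graphically in Figure 21 and its coefficients are not tabulated in the text.

```python
import numpy as np

# Hypothetical node values -- the real piecewise fit is shown in
# Figure 21 and is not tabulated in the text.
IKP_NODES = np.array([13.5, 16.0, 19.5])  # instrumental Ikp
IC_NODES = np.array([13.3, 15.9, 19.6])   # calibrated Cousins Ic

def ikp_to_ic(ikp):
    """Piecewise linear Ikp -> Ic conversion (cf. Pinfield et al. 2000)."""
    return np.interp(ikp, IKP_NODES, IC_NODES)
```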
Deacon & Hambly (2004) derived RI photometry from the scans of their plates, and
calibrated that photometry by reference to published photometry from the literature. When
we plotted the difference between their I-band photometry and literature values (where
available), we discovered a significant dependence on right ascension. Unfortunately, because
the DH survey extended over larger spatial scales than the calibrating photometry, we could
not derive a correction which we could apply to all the DH stars. We therefore developed
the following indirect scheme. We used the stars for which we have estimated IC magnitudes
(from photoelectric photometry) to define the relation between J and (IC−J) for Pleiades
members. For each DH star, we combined that relation and the 2MASS J magnitude to yield
a predicted IC. Figure 22 shows a plot of the difference of this predicted IC and I(DH) with
right ascension. The solid line shows the relation we adopt. Figure 23 shows the relation
between the corrected I(DH) values and Table 2 IC measures from photoelectric sources.
There is still a significant amount of scatter but the corrected I(DH) photometry appears
to be accurately calibrated to the Cousins system.
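The three-step recalibration scheme just described can be sketched with synthetic data. The color-magnitude relation, scatter, and RA-dependent zero-point error below are invented for illustration and are not the measured Pleiades values.

```python
import numpy as np

# Sketch of the indirect recalibration, on synthetic data:
#  (1) fit Ic - J vs. J for members with photoelectric Ic,
#  (2) predict Ic for each DH star from its 2MASS J magnitude,
#  (3) fit the (predicted Ic - I_DH) residual against right ascension
#      and add it back to I_DH.
rng = np.random.default_rng(0)

# step (1): calibrators following an invented linear color relation
j_cal = rng.uniform(10.0, 15.0, 100)
ic_cal = j_cal + 0.4 + 0.12 * (j_cal - 10.0) + rng.normal(0.0, 0.02, 100)
color_fit = np.polyfit(j_cal, ic_cal - j_cal, 1)

def predicted_ic(j):
    # step (2): predicted Cousins Ic from 2MASS J
    return j + np.polyval(color_fit, j)

# DH stars with an invented RA-dependent zero-point error
ra = rng.uniform(52.0, 60.0, 200)
j_dh = rng.uniform(10.0, 15.0, 200)
true_ic = j_dh + 0.4 + 0.12 * (j_dh - 10.0)
i_dh = true_ic - 0.05 * (ra - 56.0)   # the trend to be removed

# step (3): fit and remove the right-ascension dependence
ra_fit = np.polyfit(ra, predicted_ic(j_dh) - i_dh, 1)
i_dh_corrected = i_dh + np.polyval(ra_fit, ra)
```

On this synthetic input the corrected magnitudes recover the true values to well within the photometric scatter, which is the behavior Figure 23 demonstrates for the real data.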
In a very few cases (specifically, just five stars), we provide an estimate of Ic based
on data from a wide-area CCD survey of Taurus obtained with the Quest-2 camera on the
Palomar 48 inch Samuel Oschin telescope (Slesnick et al. 2006). That survey calibrated their
photometry to the Sloan i system, and we have converted the Sloan i magnitudes to Ic. We
intend to make more complete use of the Quest-2 data in a subsequent paper.
When we have multiple sources of photometry for a given star, we consider how to com-
bine them. In most cases, if we have photoelectric data, that is given preference. However,
if we have photographic V and I, and only a photoelectric measurement for I, we do not
replace the photographic I with the photoelectric value because these stars are variable and
the photographic measurements are at least in some cases from nearly simultaneous expo-
sures. Where we have multiple sources for photoelectric photometry, and no strong reason
to favor one measurement or set of measurements over another, we have averaged the pho-
tometry for a given star. In most cases, where we have multiple photometry the individual
measurements agree reasonably well but with the caveat that the Pleiades low mass stars are
in many cases heavily spotted and “active” chromospherically, and hence are photometrically
variable. In a few cases, even given the expectation that spots and other phenomena may
affect the photometry, there seems to be more discrepancy between reported V magnitudes
than we expect. We note two such cases here. We suspect these results indicate that at least
some of the Pleiades low mass stars have long-term photometric variability larger than their
short period (rotational) modulation.
HII 882 has at least four presumably accurate V magnitude measurements reported in
the literature. Those measures are: V=12.66 Johnson & Mitchell (1958); V=12.95 Stauffer
(1982); V=12.898 van Leeuwen, Alphenaar, & Brand (1986); and V=12.62 Messina (2001).
HII 345 has at least three presumably accurate V magnitude measurements. Those
measurements are: V=11.65 Landolt (1979); V=11.73 van Leeuwen, Alphenaar, & Brand
(1986); V=11.43 Messina (2001).
At the bottom of Table 2, we provide a key to the source(s) of the optical photometry
provided in the table.
This research made use of the SIMBAD database operated at CDS, Strasbourg, France,
and also of the NED and NStED databases operated at IPAC, Pasadena, USA. A large
amount of data for the Pleiades (and other open clusters) can also be found at the open
cluster database WEBDA (http://www.univie.ac.at/webda/), operated in Vienna by Ernst
Paunzen.
REFERENCES
Adams, J., Stauffer, J., Monet, D., Skrutskie, M., & Beichman, C. 2001, AJ, 121, 2053
Allen, L. et al. 2004, ApJS, 154, 363
Ahmed, F., Lawrence, L., & Reddish, V. 1965, PROE, 3, 187
Artymowicz, P. & Clampin, M. 1997, ApJ, 490, 863
Artyukhina, N. 1969, Soviet Astronomy, 12, 987
Artyukhina, N. & Kalinina, E. 1970, Trudy Sternberg Astron Inst. 39, 111
Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. 1998, A&A, 337, 403
Bessell, M. 1979, PASP, 91, 589
Bessell, M. & Weis, E. 1987, PASP, 99, 642
Bihain, G. et al. 2006, A&A, 458, 805
Breger, M. 1986, ApJ, 309, 311
Calder, W. & Shapley, H. 1937, Ann. Ast. Obs. Harvard College, 105, 453
Chabrier, G., Baraffe, I., Allard, F., & Hauschildt, P. 2000, ApJ, 542, 464
Cruz, K. et al. 2006,
Cummings, E. 1921, PASP, 33, 214
Deacon, N., & Hambly, N. 2004, A&A, 416, 125
Eggen, O. 1950, ApJ, 111, 81
Federman, S. & Willson, R. 1984, ApJ, 283, 626
Festin, L. 1998, A&A, 333, 497
Gorlova, N. et al. 2006, ApJ, 649, 1028
Hambly, N., Hawkins, M.R.S., & Jameson, R. 1991, MNRAS, 253, 1
Hambly, N., Hawkins, M.R.S., & Jameson, R. 1993, A&AS, 100, 607
Haro, G., Chavira, E. & Gonzalez, G. 1982, Bol Inst Tonantzintla 3, 1
Herbig, G. & Simon, T. 2001, AJ, 121, 3138
Hertzsprung, E. 1923, Mem. Danish Acad. 4, No. 4
Hertzsprung, E. 1947, Ann.Leiden Obs. 19, Part1A
Iriarte, B. 1967, Boll. Obs. Tonantzintla Tacubaya 4, 79
Jameson, R. & Skillen, I. 1989, MNRAS, 239, 247
Jarrett, T., Chester, T., Cutri, R., Schneider, S., Skrutskie, M., & Huchra, J. 2000, AJ, 119,
Jeffries, R.D., & Oliveira, J. 2005, MNRAS, 358, 13.
Johnson, H. L., & Mitchell, R. I. 1958, ApJ, 128, 31 (JM)
Johnson, H.L. & Morgan, W.W. 1951, ApJ, 114, 522
Jones, B.F. 1973, A&AS, 9, 313
Jones, B.F. 1981, AJ, 86, 290
Krishnamurthi, A. et al. 1998, ApJ, 493, 914
Kraemer, K., et al. 2003, AJ, 126, 1423
Landolt, A. 1979, ApJ, 231, 468
van Leeuwen, F., Alphenaar, P., & Brand, J. 1986, A&AS, 65, 309
van Leeuwen, F., Alphenaar, P., & Meys, J. J. M. 1987, A&AS, 67, 483
van Maanen, A. 1946, ApJ, 102, 26
Lowrance, P. et al. 2007, in preparation
Makovoz, D., & Marleau, F. 2005 PASP, 117, 1113
Marilli, E., Catalano, S., & Frasca, A. 1997, MemSAI, 68, 895
McCarthy, M. & Treanor, P. 1964, Ric. Astron. Specola Vat. Astron. 6, 535
Mendoza, E. E. 1967, Boletin Observatorio Tonantzintla y Tacuba, 4, 149
Mermilliod, J.-C., Rosvick, J., Duquennoy, A., Mayor, M. 1992, A&A, 265, 513
Mermilliod, J.-C., Bratschi, P., & Mayor, M. 1997, A&A, 320, 74
Mermilliod, J.-C. & Mayor, M. 1999, A&A, 352, 479
Messina, S. 2001, A&A, 371, 1024
Meynet, G., Mermilliod, J.-C., & Maeder, A. 1993, A&AS, 98, 477
Oppenheimer, B., Basri, G., Nakajima, T., & Kulkarni, S. 1997, AJ, 113, 296
Patten, B., et al. 2006, ApJ, 651, 502
Pinfield, D., Hodgkin, S., Jameson, R., Cossburn, M., Hambly, N., & Devereux, N. 2000,
MNRAS, 313, 347
Pritchard, R. 1884, MNRAS, 44, 355
Prosser, C., Stauffer, J., & Kraft, R. 1991, AJ, 101, 1361
Queloz, D., Allain, S., Mermilliod, J.-C., Bouvier, J., & Mayor, M. 1998, A&A, 335, 183
Raboud, D., & Mermilliod, J.-C. 1998, A&A, 329, 101
Rieke, G. & Lebofsky, M. 1985, ApJ, 288, 618
Robinson, E.L. & Kraft, R.P. 1974, AJ, 79, 698
Rosvick, J., Mermilliod, J., & Mayor, M. 1992, A&A, 255, 130
Siess, L., Dufour, E., & Forestini, M. 2000, A&A, 358, 593
Skrutskie, M. et al. 2006, AJ, 131, 1163
Slesnick, C., Carpenter, J., Hillenbrand, L., & Mamajek, E. 2006, AJ, 132, 2665
Soderblom, D. R., Jones, B. R., Balachandran, S., Stauffer, J. R., Duncan, D. K., Fedele, S.
B., & Hudon, J. 1993, AJ, 106, 1059
Soderblom, D., Nelan, E., Benedict, G., McArthur, B., Ramirez, I., Spiesman, W., & Jones,
B. 2005, AJ, 129, 1616
Stauffer, J. 1980, AJ, 85, 1341
Stauffer, J. R. 1982, AJ, 87, 1507
Stauffer, J. 1984, ApJ, 280, 189
Stauffer, J. R., Hartmann, L. W., Soderblom, D. R., & Burnham, N. 1984, ApJ, 280, 202
Stauffer, J. R., & Hartmann, L. W. 1987, ApJ, 318, 337
Stauffer, J., Hamilton, D., Probst, R., Rieke, G., & Mateo, M. 1989, ApJ, 344, L21
Stauffer, J., Klemola, A., Prosser, C. & Probst, R. 1991, AJ, 101, 980
Stauffer, J. R., Caillault, J.-P., Gagne, M., Prosser, C. F., & Hartmann, L. W. 1994, ApJS,
91, 625
Stauffer, J. R., Liebert, J., & Giampapa, M. 1995, AJ, 109, 298
Stauffer, J. R., et al. 1999, ApJ, 527, 219
Stauffer, J. R., et al. 2003, AJ, 126, 833
Stauffer, J. R., et al. 2005, AJ, 130, 1834
Steele, I. et al. 1995, MNRAS, 272, 630
Terlevich, E. 1987, MNRAS, 224, 193
Terndrup, D. M, Stauffer, J. R., Pinsonneault, M. H., Sills, A., Yuan, Y., Jones, B. F.,
Fischer, D., & Krishnamurthi, A. 2000, AJ, 119, 1303
Trumpler, R.J. 1921, Lick Obs. Bull. 10, 110
Ventura, P., Zeppieri, A., Mazzitelli, I., & D’Antona, F. 1998, A&A, 334, 953
Werner, M. et al. 2004, ApJS, 154, 309
White, R. E. 2003, ApJS, 148, 487
White, R. E. & Bally, J. 1993, ApJ, 409, 234
This preprint was prepared with the AAS LATEX macros v5.2.
Table 1. Pleiades Membership Surveys used as Sources
Reference Area Covered (Sq. Deg.) Magnitude Range (and band) Number Candidates Name Prefix
Trumpler (1921) 3 2.5< B <14.5 174 Tr
Trumpler (1921) (a) 24 2.5< B <10 72 Tr
Hertzsprung (1947) 4 2.5< V <15.5 247 HII
Artyukhina (1969) 60 2.5< B <12.5 ∼200 AK
Haro et al. (1982) 20 11< V <17.5 519 HCG
van Leeuwen et al. (1986) 80 2.5< B <13 193 PELS
Stauffer et al. (1991) 16 14< V <18 225 SK
Hambly et al. (1993) 23 10< I <17.5 440 HHJ
Pinfield et al. (2000) 6 13.5< I <19.5 339 BPL
Adams et al. (2001) 300 8< Ks <14.5 1200 ...
Deacon & Hambly (2004) 75 10< R <19 916 DH
(a) The Trumpler paper is listed twice because there are two membership surveys included in that
paper, with differing spatial coverages and different limiting magnitudes.
Fig. 1.— Spatial coverage of the six times deeper “2MASS 6x” observations of the Pleiades.
The 2MASS survey region is approximately centered on Alcyone, the most massive member
of the Pleiades. The trapezoidal box roughly indicates the region covered with the shallow
IRAC survey of the cluster core. The star symbols correspond to the brightest B star
members of the cluster. The red points are the location of objects in the 2MASS 6x Point
Source Catalog.
Fig. 2.— Color-magnitude diagram for the Pleiades derived from the 2MASS 6x obser-
vations. The red dots correspond to objects identified as unresolved, whereas the green
dots correspond to extended sources (primarily background galaxies). The lack of green
dots fainter than K = 16 indicates that there are too few photons to identify sources as
extended; the extragalactic population presumably increases toward fainter magnitudes.
Fig. 3.— As for Figure 2, except in this case the axes are J − H and H − Ks. The
extragalactic objects are very red in both colors.
Fig. 4.— FIGURE REMOVED TO FIT WITHIN ASTRO-PH FILESIZE GUIDELINES.
See http://spider.ipac.caltech.edu/staff/stauffer/pleiades07/ for the full-resolution version. Two-color
(4.5 and 8.0 micron) mosaic of the central square degree of the Pleiades from the IRAC
survey. North is approximately vertical, and East is approximately to the left. The bright
star nearest the center is Alcyone; the bright star at the left of the mosaic is Atlas; and the
bright star at the right of the mosaic is Electra.
Fig. 5.— Finding chart corresponding approximately to the region imaged with IRAC.
The large, five-pointed stars are all of the Pleiades members brighter than V= 5.5. The
small open circles correspond to other cluster members. Several stars with 8 µm excesses are
labelled by their HII numbers, and are discussed further in Section 6. The short lines through
several of the stars indicate the size and position angle of the residual optical polarization
(after subtraction of a constant foreground component), as provided in Figure 6 of Breger
(1986).
Fig. 6.— Comparison of aperture photometry for Pleiades members derived from the
IRAC 3.6 µm mosaic using the Spitzer APEX package and the IRAF implementation of
DAOPHOT.
Fig. 7.— Difference between aperture photometry for Pleiades members for IRAC channels
1 and 2. The [3.6]−[4.5] color begins to depart from essentially zero at magnitudes ∼10.5,
corresponding approximately to spectral type M0 in the Pleiades.
Fig. 8.— Ks vs. Ks −[4.5] CMD for Pleiades candidate members, illustrating why we have
excluded HII 1695 from the final catalog of cluster members. The “X” symbol marks the
location of HII 1695 in this diagram.
Fig. 9.— Spatial plot of the candidate Pleiades members from Table 2. The large star
symbols are members brighter than Ks= 6; the open circles are stars with 6 < Ks < 9; and
the dots are candidate members fainter than Ks= 9. The solid line is parallel to the galactic
plane.
Fig. 10.— The cumulative radial density profiles for Pleiades members in several magnitude
ranges: heavy, long dash – Ks < 6; dots – 6 < Ks < 9; short dash – 9 < Ks < 12; light, long
dash – Ks > 12.
Fig. 11.— V vs. (V − I)c CMD for Pleiades members with photoelectric photometry. The
solid curve is the “by eye” fit to the single-star locus for Pleiades members.
Fig. 12.— Ks vs. Ks − [3.6] CMD for Pleiades candidate members from Table 2 (dots) and
from deeper imaging of a set of Pleiades VLM and brown dwarf candidate members from
Lowrance et al. (2007) (squares). The solid curve is the single-star locus from Table 3.
Fig. 13.— V vs. (V − I)c CMD for Pleiades candidate members from Table 2 for which we
have photoelectric photometry, compared to theoretical isochrones from Siess et al. (2000)
(left) and from Baraffe et al. (1998) (right). For the left panel, the curves correspond to 10,
50, 100 Myr and a ZAMS; the right panel includes curves for 50, 100 Myr and a ZAMS.
Fig. 14.— K vs. (I −K) CMD for Pleiades candidate members from Table 2, compared to
theoretical isochrones from Siess et al. (2000) (left) and from Baraffe et al. (1998) (right).
The curves correspond to 50 Myr, 100 Myr and a ZAMS.
Fig. 15.— Ks vs. Ks−[3.6] CMD for the objects in the central one square degree of the
Pleiades, combining data from the IRAC shallow survey and 2MASS. The symbols are defined
within the figure (and see text for details). The dashed-line box indicates the region within
which we have searched for new candidate Pleiades VLM and substellar members. The solid
curve is a DUSTY 100 Myr isochrone from Chabrier et al. (2000), for masses from 0.1 to
0.03 M⊙.
Fig. 16.— Proper motion vector point diagrams (VPDs) for various stellar samples in the
central one degree field, derived from combining the IRAC and 2MASS 6x observations.
Top left: VPD comparing all objects in the field (small black dots) to Pleiades members
with 11 < Ks < 14 (large blue dots). Top right: same, except the blue dots are the new
candidate VLM and substellar Pleiades members. Bottom left: same, except the blue dots
are a nearby, low mass field star sample from a box just blueward of the trapezoidal region
in Figure 15. Bottom right: VPD just showing a second, distant field star sample as described in
the text.
Fig. 17.— Same as Fig. 15, except that the new candidate VLM and substellar objects from
Table 4 are now indicated as small, red squares.
Fig. 18.— Two plots intended to isolate Pleiades members with excess and/or extended 8
µm emission. The plot with [3.6]−[8.0] micron colors shows data from Table 3 (and hence is
for aperture sizes of 3 pixel and 2 pixel radius, respectively). The increased vertical spread
in the plots at faint magnitudes is simply due to decreasing signal to noise at 8 µm. The
numbers labelling stars with excesses are the HII identification numbers for those stars.
Fig. 19.— Postage stamp images extracted from individual, 8 µm BCDs for the stars with
extended 8 µm emission, from which we have subtracted an empirical PSF. Clockwise from
the upper left, the stars shown are HII1234, HII859, Merope and HII652. The five-pointed
star indicates the astrometric position of the star (often superposed on a few black pixels
where the 8 µm image was saturated). The circle in the Merope image is centered on the
location of IC349 and has diameter about 25” (the size of IC349 in the optical is of order
10” x 10”).
Fig. 20.— Aperture growth curves from the 8 µm mosaic for stars with 24 µm excesses from
Gorlova et al. (2006) and for a set of control objects (dashed curves). All of the objects have
been scaled to common zero-point magnitudes for 9 pixel apertures, with the 24 µm excess
stars offset from the control objects by 0.1 mag. The three Gorlova et al. (2006) stars with
no excess at 8 µm are HII 996, HII 1284 and HII 2195. The Gorlova et al. (2006) star with
a slight excess at 8 µm is HII 489.
Fig. 21.— Calibration derived relating Ikp from Pinfield et al. (2000) and IC. The dots are
stars for which we have both Ikp and IC measurements (small dots: photographic IC; large
dots: photoelectric IC), and the solid line indicates the piecewise linear fit we use to convert
the Ikp values to IC for stars for which we only have Ikp.
Fig. 22.— Difference between the predicted IC and Deacon & Hambly (2004) I magnitude
as a function of right ascension for the DH stars. No obvious dependence is present versus
declination.
Fig. 23.— Comparison of the recalibrated DH I photometry with estimates of IC for stars
in Table 2 with photoelectric data.
0704.1833 | Analysis of the 802.11e Enhanced Distributed Channel Access Function

Analysis of the 802.11e Enhanced Distributed Channel Access Function †
Inanc Inan, Feyza Keceli, and Ender Ayanoglu
Center for Pervasive Communications and Computing
Department of Electrical Engineering and Computer Science
The Henry Samueli School of Engineering
University of California, Irvine, 92697-2625
Email: {iinan, fkeceli, ayanoglu}@uci.edu
Abstract
The IEEE 802.11e standard revises the Medium Access Control (MAC) layer of the former IEEE 802.11
standard for Quality-of-Service (QoS) provision in the Wireless Local Area Networks (WLANs). The Enhanced
Distributed Channel Access (EDCA) function of 802.11e defines multiple Access Categories (AC) with AC-specific
Contention Window (CW) sizes, Arbitration Interframe Space (AIFS) values, and Transmit Opportunity (TXOP)
limits to support MAC-level QoS and prioritization. We propose an analytical model for the EDCA function which
incorporates an accurate CW, AIFS, and TXOP differentiation at any traffic load. The proposed model is also
shown to capture the effect of MAC layer buffer size on the performance. Analytical and simulation results are
compared to demonstrate the accuracy of the proposed approach for varying traffic loads, EDCA parameters, and
MAC layer buffer space.
I. INTRODUCTION
The IEEE 802.11 standard [1] defines the Distributed Coordination Function (DCF) which provides
best-effort service at the Medium Access Control (MAC) layer of the Wireless Local Area Networks
(WLANs). The recently ratified IEEE 802.11e standard [2] specifies the Hybrid Coordination Function
(HCF) which enables prioritized and parameterized Quality-of-Service (QoS) services at the MAC layer,
on top of DCF. The HCF combines a distributed contention-based channel access mechanism, referred
to as Enhanced Distributed Channel Access (EDCA), and a centralized polling-based channel access
mechanism, referred to as HCF Controlled Channel Access (HCCA).
† This work is supported by the Center for Pervasive Communications and Computing, and by National Science Foundation under Grant
No. 0434928. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the
authors and do not necessarily reflect the views of the National Science Foundation.
We confine our analysis to the EDCA scheme, which uses Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA) and slotted Binary Exponential Backoff (BEB) mechanism as the basic access
method. The EDCA defines multiple Access Categories (AC) with AC-specific Contention Window (CW)
sizes, Arbitration Interframe Space (AIFS) values, and Transmit Opportunity (TXOP) limits to support
MAC-level QoS and prioritization [2].
In order to assess the performance of these functions, simulations or mathematical analysis can be used.
Although simulation models may capture system dynamics very closely, they lack explicit mathematical
relations between the network parameters and performance measures. A number of networking functions
would benefit from the insights provided by such mathematical relations. For example, analytical modeling
is a more convenient way to assist embedded QoS-aware MAC scheduling and Call Admission Control
(CAC) algorithms. Theoretical analysis can provide invaluable insights for QoS provisioning in the WLAN.
On the other hand, analytical modeling can potentially be complex, where the effect of multiple layer
network parameters makes the task of deriving a simple and accurate analytical model highly difficult.
However, a set of appropriate assumptions may lead to simple yet accurate analytical models.
The majority of analytical work on the performance of 802.11e EDCA (and of 802.11 DCF) assumes
that every station always has backlogged data ready to transmit in its buffer (i.e., in saturation), as
will be discussed in Section III. Analysis of the system in this state (saturation analysis) provides accurate
and practical asymptotic figures. However, the saturation assumption is unlikely to hold in practice,
since the bandwidth demanded by most Internet traffic is variable, with significant
idle periods. Our main contribution is an accurate EDCA analytical model which releases the saturation
assumption. The model is shown to predict EDCA performance accurately for the whole traffic load range
from a lightly loaded non-saturated channel to a heavily congested saturated medium for a range of traffic
models.
Similarly, the majority of analytical work on the performance of 802.11e EDCA (and of 802.11
DCF) assumes constant collision probability for any transmitted packet at an arbitrary backoff slot
independent of the number of retransmissions it has experienced. A complementary assumption is the
constant transmission probability for any AC at an arbitrary backoff slot independent of the number of
retransmissions it has experienced. As will be discussed in Section III, these approximations lead to
accurate analysis in saturation. Our analysis shows that the slot homogeneity assumption leads to accurate
performance prediction even when the saturation assumption is released.
Furthermore, the majority of analytical work on the performance of 802.11e EDCA (and of 802.11
DCF) in non-saturated conditions assumes either a very small or an infinitely large MAC layer buffer
space. Our analysis removes such assumptions by incorporating the finite size MAC layer queue (interface
queue between Link Layer (LL) and MAC layer) into the model. The finite size queue analysis shows
the effect of MAC layer buffer space on EDCA performance which we will show to be significant.
A key contribution of this work is that the proposed analytical model incorporates all EDCA QoS
parameters, CW, AIFS, and TXOP. The model also considers varying collision probabilities at different
AIFS slots which is a direct result of varying number of contending stations. Comparing with simulations,
we show that our model can provide accurate results for an arbitrary selection of AC-specific EDCA
parameters at any load.
We present a Markov model whose states represent the state of the backoff process and the MAC
buffer occupancy. To enable analysis in the Markov framework, we assume a constant probability of packet
arrival per state (for the sake of simplicity, Poisson arrivals). We also show that
the results hold for a range of traffic types.
II. EDCA OVERVIEW
The IEEE 802.11e EDCA is a QoS extension of IEEE 802.11 DCF. The major enhancement to support
QoS is that EDCA differentiates packets using different priorities and maps them to specific ACs that are
buffered in separate queues at a station. Each ACi within a station (0 ≤ i ≤ imax, imax = 3 in [2]) having
its own EDCA parameters contends for the channel independently of the others. Following the convention
of [2], the larger the index i is, the higher the priority of the AC is. Levels of service are provided
through different assignments of the AC-specific EDCA parameters: AIFS, CW, and TXOP limits.
If there is a packet ready for transmission in the MAC queue of an AC, the EDCA function must sense
the channel to be idle for a complete AIFS before it can start the transmission. The AIFS of an AC is
determined by using the MAC Information Base (MIB) parameters as
AIFS = SIFS + AIFSN × Tslot, (1)
where AIFSN is the AC-specific AIFS number, SIFS is the length of the Short Interframe Space and
Tslot is the duration of a time slot.
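As a quick numerical illustration of Eq. (1), the sketch below evaluates AIFS for the four default ACs. The SIFS and slot durations are the 802.11a PHY figures, and the AIFSN assignments are the 802.11e defaults; they are used here purely as illustrative inputs, not values derived in this paper.

```python
# Sketch of Eq. (1): AIFS = SIFS + AIFSN * T_slot.
# SIFS and T_slot below are the 802.11a PHY values (16 us and 9 us);
# the per-AC AIFSN values are the 802.11e defaults.

SIFS = 16e-6      # seconds (802.11a)
T_SLOT = 9e-6     # seconds (802.11a)

def aifs(aifsn: int) -> float:
    """AIFS duration in seconds for a given AC-specific AIFSN."""
    return SIFS + aifsn * T_SLOT

# Higher-priority ACs are assigned smaller AIFSN values.
AIFSN = {"AC_VO": 2, "AC_VI": 2, "AC_BE": 3, "AC_BK": 7}

for ac, n in AIFSN.items():
    print(f"{ac}: AIFS = {aifs(n) * 1e6:.0f} us")   # AC_VO: 34 us, ..., AC_BK: 79 us
```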
If the channel is idle when the first packet arrives at the AC queue, the packet can be directly transmitted
as soon as the channel is sensed to be idle for AIFS. Otherwise, a backoff procedure is completed following
the completion of AIFS before the transmission of this packet. A uniformly distributed random integer,
namely a backoff value, is selected from the range [0, W ]. Should the channel be sensed busy at any time
slot during AIFS or backoff, the backoff procedure is suspended at the current backoff value. The backoff
resumes as soon as the channel is sensed to be idle for AIFS again. When the backoff counter reaches
zero, the packet is transmitted in the following slot.
The value of W depends on the number of retransmissions the current packet experienced. The initial
value of W is set to the AC-specific CWmin. If the transmitter cannot receive an Acknowledgment (ACK)
packet from the receiver in a timeout interval, the transmission is labeled as unsuccessful and the packet
is scheduled for retransmission. At each unsuccessful transmission, the value of W is doubled until the
maximum AC-specific CWmax limit is reached. The value of W is reset to the AC-specific CWmin if the
transmission is successful, or the retry limit is reached thus the packet is dropped.
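The CW update rule just described can be sketched as follows. Doubling is expressed as W → 2(W + 1) − 1, which matches the W_{i,j} = 2^min(j,m) (CW_min + 1) − 1 form used later in the model. The CW_min/CW_max values are the 802.11e best-effort defaults over an 802.11a PHY, and the retry limit of 7 is an illustrative assumption.

```python
# Sketch of the binary exponential backoff described above: W starts at
# CW_min, doubles (as 2*(W+1)-1) after each unsuccessful transmission up to
# CW_max, and resets to CW_min after a success or a packet drop.

import random

CW_MIN, CW_MAX, RETRY_LIMIT = 15, 1023, 7   # retry limit is an assumption

def next_cw(w: int) -> int:
    """Double the contention window after an unsuccessful transmission."""
    return min(2 * (w + 1) - 1, CW_MAX)

def draw_backoff(w: int) -> int:
    """Uniformly distributed backoff value from the range [0, W]."""
    return random.randint(0, w)

# CW trajectory over successive retransmissions of a single packet:
w = CW_MIN
trajectory = [w]
for _ in range(RETRY_LIMIT - 1):
    w = next_cw(w)
    trajectory.append(w)
print(trajectory)   # [15, 31, 63, 127, 255, 511, 1023]
```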
The higher priority ACs are assigned smaller AIFSN. Therefore, the higher priority ACs can either
transmit or decrement their backoff counters while lower priority ACs are still waiting in AIFS. This
results in higher priority ACs enjoying a lower average probability of collision and relatively faster
progress through backoff slots. Moreover, in EDCA, the ACs with higher priority may select backoff
values from a comparably smaller CW range. This approach prioritizes the access since a smaller CW
value means a smaller backoff delay before the transmission.
Upon gaining access to the medium, each AC may carry out multiple frame exchange sequences as
long as the total access duration does not exceed the TXOP limit. Within a TXOP, the transmissions are
separated by SIFS. Multiple frame transmissions in a TXOP can reduce the overhead due to contention.
A TXOP limit of zero corresponds to only one frame exchange per access.
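As a rough sketch of this TXOP accounting, the helper below counts how many SIFS-separated data + ACK exchange sequences fit within a TXOP limit. The packing formula and the timing values are our own illustrative assumptions (the paper introduces the corresponding quantity Ni in Section IV).

```python
# Sketch: number of frame exchange sequences (data + SIFS + ACK) that fit in
# one TXOP, with successive sequences separated by SIFS as described above.
# All durations are in seconds and are illustrative assumptions.

SIFS = 16e-6

def exchanges_per_txop(txop_limit: float, t_data: float, t_ack: float) -> int:
    """Maximum number of data+ACK exchanges within txop_limit.
    A TXOP limit of zero permits exactly one exchange per access."""
    if txop_limit == 0:
        return 1
    t_exchange = t_data + SIFS + t_ack            # one frame exchange sequence
    # n exchanges occupy n*t_exchange + (n-1)*SIFS <= txop_limit
    n = int((txop_limit + SIFS) // (t_exchange + SIFS))
    return max(n, 1)

# e.g. a 3.008 ms TXOP (the 802.11e default for AC_VI) with ~300 us exchanges:
print(exchanges_per_txop(3.008e-3, t_data=250e-6, t_ack=30e-6))   # 9
```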
An internal (virtual) collision within a station is handled by granting the access to the AC with the
highest priority. The ACs with lower priority that suffer from a virtual collision run the collision procedure
as if an external collision has occurred [2].
III. RELATED WORK
In this section, we provide a brief summary of the theoretical DCF and EDCA function performance
analysis in the literature.
The majority of previous work carries out performance analysis for asymptotical conditions assuming
each station is in saturation. Three major saturation performance models have been proposed for DCF;
i) assuming constant collision probability for each station, Bianchi [3] developed a simple Discrete-Time
Markov Chain (DTMC) and the saturation throughput is obtained by applying regenerative analysis to
a generic slot time, ii) Cali et al. [4],[5] employed renewal theory to analyze a p-persistent variant of
DCF with persistence factor p derived from the CW, and iii) Tay et al. [6] instead used an average value
mathematical method to model DCF backoff procedure and to calculate the average number of interruptions
that the backoff timer experiences. Having the common assumption of slot homogeneity (for an arbitrary
station, constant collision or transmission probability at an arbitrary slot), these models define all different
renewal cycles all of which lead to accurate saturation performance analysis. Similarly, Medepalli et al.
[7] provided explicit expressions for average DCF cycle time and system throughput. Pointing out another
direction for future performance studies, Hui et al. [8] recently proposed the application of metamodeling
techniques in order to find approximate closed-form mathematical models.
These major methods are modified by several researchers to include the extra features of the EDCA
function in the saturation analysis. Xiao [9],[10] extended [3] to analyze only the CW differentiation.
Kong et al. [11] took AIFS differentiation into account via a 3-dimensional DTMC. On the other hand,
these EDCA extensions miss the treatment of varying collision probabilities at different AIFS slots due
to varying number of contending stations. Robinson et al. [12],[13] proposed an average analysis on
the collision probability for different contention zones during AIFS and employed calculated average
collision probability on a 2-dimensional DTMC. Hui et al. [14],[15] unified several major approaches
into one approximate average model taking into account varying collision probability in different backoff
subperiods (corresponds to contention zones in [12]). Zhu et al. [16] proposed another analytical EDCA
Markov model averaging the transition probabilities based on the number and the parameters of high
priority flows. Inan et al. [17] proposed a simple DTMC which provides accurate treatment of AIFS
and CW differentiation between the ACs for the constant transmission probability assumption. Another
3-dimensional DTMC is proposed by Tao et al. [18],[19] in which the third dimension models the state
of backoff slots between successive transmission periods. In [18],[19], the fact that the number of idle
slots between successive transmissions can be at most the minimum of AC-specific CWmax values is
considered. Independent from [18],[19], Zhao et al. [20] had previously proposed a similar model for
the heterogeneous case where each station has traffic of only one AC. Banchs et al. [21],[22] proposed
another model which considers varying collision probability among different AIFS slots due to a variable
number of stations. Chen et al. [23], Kuo et al. [24], and Lin et al. [25] extended [6] in order to include
mean value analysis for AIFS and CW differentiation.
Although it has not yet received much attention, the research that releases the saturation assumption
basically follows two major methods; i) modeling the non-saturated behavior of DCF or EDCA function
via Markov analysis, ii) employing queueing theory [26] and calculating certain quantities through average
or Markov analysis. Our approach in this work falls into the first category.
Markov analysis for the non-saturated case still assumes slot homogeneity and extends [3] with necessary
extra Markov states and transitions. Duffy et al. [27] and Alizadeh-Shabdiz et al. [28],[29] proposed similar
extensions of [3] for non-saturated analysis of 802.11 DCF. Due to specific structure of the proposed
DTMCs, these extensions assume a MAC layer buffer size of one packet. We show that this assumption
may lead to significant performance prediction errors for EDCA in the case of larger buffers. Cantieni et
al. [30] extended the model of [28] assuming infinitely large station buffers and the MAC queue being
empty with constant probability regardless of the backoff stage at which the previous transmission took place. Li et
al. [31] proposed an approximate model for non-saturation where only CW differentiation is considered.
Engelstad et al. [32] used a DTMC model to perform delay analysis for both DCF and EDCA considering
queue utilization probability as in [30]. Zaki et al. [33] proposed yet another Markov model with states
that are of fixed real-time duration which cannot capture the pre-saturation DCF throughput peak.
A number of models employing queueing theory have also been developed for 802.11(e) performance
analysis in non-saturated conditions. These models are assisted by independent analysis for the calculation
of some quantities such as collision and transmission probabilities. Tickoo et al. [34],[35] modeled each
802.11 node as a discrete time G/G/1 queue to derive the service time distribution, but the models are
based on an assumption that the saturated setting provides good approximation for certain quantities in
non-saturated conditions. Chen et al. [36] employed both G/M/1 and G/G/1 queue models on top of [10]
which only considers CW differentiation. Lee et al. [37] analyzed the use of M/G/1 queueing model
while employing a simple non-saturated Markov model to calculate necessary quantities. Medepalli et al.
[38] built upon the average cycle time derivation [7] to obtain individual queue delays using both M/G/1
and G/G/1 queueing models. Foh et al. [39] proposed a Markov framework to analyze the performance
of DCF under statistical traffic. This framework models the number of contending nodes as an M/Ej/1/k
queue. Tantra et al. [40] extended [39] to include service differentiation in EDCA. However, such analysis
is only valid for a restricted scenario where all nodes have a MAC queue size of one packet.
There are also a few studies that investigated the effect of EDCA TXOPs on 802.11e performance
for a saturated scenario. Mangold et al. [41] and Suzuki et al. [42] carried out the performance analysis
through simulation. The efficiency of burst transmissions with block acknowledgements is studied in [43].
Tinnirello et al. [44] also proposed different TXOP managing policies for temporal fairness provisioning.
Peng et al. [45] proposed an analytical model to study the effect of burst transmissions and showed that
improved service differentiation can be achieved using a novel scheme based on TXOP thresholds.
A thorough and careful literature survey shows that an EDCA analytical model which incorporates all
EDCA QoS parameters, CW, AIFS, and TXOP, for any traffic load has not been designed yet.
IV. EDCA DISCRETE-TIME MARKOV CHAIN MODEL
Assuming slot homogeneity, we propose a novel DTMC to model the behavior of the EDCA function
of any AC at any load. The main contribution of this work is that the proposed model considers the
effect of all EDCA QoS parameters (CW, AIFS, and TXOP) on the performance for the whole traffic load
range from a lightly-loaded non-saturated channel to a heavily congested saturated medium. Although we
assume constant probability of packet arrival per state (for the sake of simplicity, Poisson arrivals), we
show that the model provides accurate performance analysis for a range of traffic types.
The state of the EDCA function of any AC at an arbitrary time t depends on several MAC layer events
that may have occurred before t. We model the MAC layer state of an ACi, 0 ≤ i ≤ 3, with a 3-dimensional
Markov process, (si(t), bi(t), qi(t)). The stochastic process si(t) represents the value of the backoff stage
at time t, i.e., the number of retransmissions that the packet to be transmitted currently has experienced
until time t. The stochastic process bi(t) represents the state of the backoff counter at time t. Up to this
point, the definition of the first two dimensions follows [3] which is introduced for DCF. In order to
enable the accurate non-saturated analysis considering EDCA TXOPs, we introduce another dimension
which models the stochastic process qi(t) denoting the number of packets buffered for transmission at the
MAC layer. Moreover, as will be described in detail in the sequel, in our model bi(t) represents not only
the value of the backoff counter, but also the number of transmissions carried out during the
current EDCA TXOP (when the value of the backoff counter is actually zero).
Using the assumption of independent and constant collision probability at an arbitrary backoff slot, the
3-dimensional process (si(t), bi(t), qi(t)) is represented as a Discrete-Time Markov Chain (DTMC) with
states (j, k, l) and index i. We define the limits on state variables as 0 ≤ j ≤ ri − 1, −Ni ≤ k ≤ Wi,j,
and 0 ≤ l ≤ QSi. In these inequalities, we let ri be the retransmission limit of a packet of ACi; Ni
be the maximum number of successful packet exchange sequences of ACi that can fit into one TXOPi;
Wi,j = 2^min(j,mi) · (CWi,min + 1) − 1 be the CW size of ACi at backoff stage j, where CWi,max =
2^mi · (CWi,min + 1) − 1 with 0 ≤ mi < ri; and QSi be the maximum number of packets that can be
buffered at the MAC layer, i.e., the MAC queue size. Moreover, it is important to note that a couple
of restrictions apply to
the state indices.
• When there are not any buffered packets at the AC queue, the EDCA function of the corresponding
AC cannot be in a retransmitting state. Therefore, if l = 0, then j = 0 should hold. Such backoff
states represent the postbackoff process [1],[2], and are therefore called postbackoff slots in the sequel.
The postbackoff procedure ensures that the transmitting station waits at least another backoff between
successive TXOPs. Note that, when l > 0 and k ≥ 0, these states are named backoff slots.
• The states with indices −Ni ≤ k ≤ −1 represent the negation of the number of packets that are
successfully transmitted at the current TXOP rather than the value of the backoff counter (which is
zero during a TXOP). For simplicity, in the design of the Markov chain, we introduced such states in
the second dimension. Therefore, if −Ni ≤ k ≤ −1, we set j = 0. As will become clear in the sequel,
the addition of these states enables EDCA TXOP analysis.
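A minimal sketch of the resulting state space enumerates the valid (j, k, l) triples under these two restrictions; the CW sizes follow the Wi,j formula above. The parameter values (retry limit, TXOP count, CW_min, doubling limit, queue size) are small illustrative assumptions, not the 802.11e defaults.

```python
# Sketch enumerating the valid (j, k, l) states of the proposed DTMC for one
# AC: an empty queue (l = 0) forces j = 0, and the TXOP bookkeeping states
# (k < 0) also force j = 0. Parameter values are illustrative assumptions.

from itertools import product

r, N, QS = 3, 2, 4                 # retry limit, TXOP exchanges, queue size
CW_MIN, m = 7, 2                   # CW_min and the doubling limit m < r
# W_j = 2^min(j, m) * (CW_min + 1) - 1, as in the text above
W = [min(2**j, 2**m) * (CW_MIN + 1) - 1 for j in range(r)]   # [7, 15, 31]

states = [
    (j, k, l)
    for j, l in product(range(r), range(QS + 1))
    for k in range(-N, W[j] + 1)
    if not (l == 0 and j != 0)     # empty queue only at backoff stage 0
    if not (k < 0 and j != 0)      # TXOP states only at backoff stage 0
]
print(len(states))   # 242
```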
Let pci denote the average conditional probability that a packet from ACi experiences either an external
or an internal collision after the EDCA function decides on the transmission. Let pnt(l′, T | l) be the
probability that there are l′ packets in the MAC buffer at time t + T given that there were l packets
at t and no transmission has been made during the interval T. Similarly, let pst(l′, T | l) be the probability
that there are l′ packets in the MAC buffer at time t + T given that there were l packets at time t and a
transmission has been made during the interval T. Note that since we assume Poisson arrivals, the exponential
interarrival distributions are independent, and pnt and pst only depend on the interval length T and are
independent of time t. Then, the nonzero state transition probabilities of the proposed Markov model
for ACi, denoted as Pi(j′, k′, l′ | j, k, l), adopting the notation of [3], are calculated as follows.
1) The backoff counter is decremented by one at the slot boundary. Note that we define the postbackoff
or the backoff slot as Bianchi defines the slot time [3]. Then, for 0 ≤ j ≤ ri − 1, 1 ≤ k ≤ Wi,j, and
0 ≤ l ≤ l′ ≤ QSi,
$$P_i(j, k-1, l' \mid j, k, l) = p_{nt}(l', T_{i,bs} \mid l) \qquad (2)$$
It is important to note that the proposed DTMC's evolution is not in real time and the state duration varies depending on the state. The average duration of a backoff slot, Ti,bs, is calculated by (29), which will be derived later. Note that, in (2), we consider the probability of packet arrivals during Ti,bs (the buffer size l′ after the state transition depends on this probability).
2) We assume the transmitted packet experiences a collision with constant probability pci (slot homo-
geneity). In the following, note that the cases when the retry limit is reached and when the MAC
buffer is full are treated separately, since the transition probabilities should follow different rules.
Let Ti,s and Ti,c be the time spent in a successful transmission and a collision by ACi respectively, which will be derived later. Then, for 0 ≤ j ≤ ri − 1, 0 ≤ l ≤ QSi − 1, and max(0, l − 1) ≤ l′ ≤ QSi,
$$P_i(0, -1, l' \mid j, 0, l) = (1 - p_{c_i}) \cdot p_{st}(l', T_{i,s} \mid l) \qquad (3)$$
$$P_i(0, -1, QS_i - 1 \mid j, 0, QS_i) = 1 - p_{c_i} \qquad (4)$$
For 0 ≤ j ≤ ri − 2, 0 ≤ k ≤ Wi,j+1, and 0 ≤ l ≤ l′ ≤ QSi,

$$P_i(j+1, k, l' \mid j, 0, l) = \frac{p_{c_i} \cdot p_{nt}(l', T_{i,c} \mid l)}{W_{i,j+1} + 1} \qquad (5)$$
For 0 ≤ k ≤ Wi,0, 0 ≤ l ≤ QSi − 1, and max(0, l − 1) ≤ l′ ≤ QSi,

$$P_i(0, k, l' \mid r_i - 1, 0, l) = \frac{p_{st}(l', T_{i,s} \mid l)}{W_{i,0} + 1} \qquad (6)$$
$$P_i(0, k, QS_i - 1 \mid r_i - 1, 0, QS_i) = \frac{1}{W_{i,0} + 1} \qquad (7)$$
Note that we use pnt in (5) although a transmission has been made. This is because the packet has collided and is still at the MAC queue for retransmission, as if no transmission had occurred. This is not the case in (3) and (6), since in these transitions a successful transmission or a drop occurs.
When the MAC buffer is full, any arriving packet is discarded as (4) and (7) imply.
3) Once the TXOP is started, the EDCA function may continue with as many packet SIFS-separated
exchange sequences as it can fit into the TXOP duration. Let Ti,exc be the average duration of a
successful packet exchange sequence for ACi, which will be derived in (24). Then, for −Ni + 1 ≤ k ≤ −1, 1 ≤ l ≤ QSi, and max(0, l − 1) ≤ l′ ≤ QSi,

$$P_i(0, k-1, l' \mid 0, k, l) = p_{st}(l', T_{i,exc} \mid l) \qquad (8)$$
When the next transmission cannot fit into the remaining TXOP, the current TXOP is immediately
concluded and the unused portion of the TXOP is returned. By design, our model includes the maximum number of packets that can fit into one TXOP. Then, for 0 ≤ k ≤ Wi,0 and 1 ≤ l ≤ QSi,
$$P_i(0, k, l \mid 0, -N_i, l) = \frac{1}{W_{i,0} + 1} \qquad (9)$$
The TXOP ends when the MAC queue is empty. Then, for 0 ≤ k′ ≤ Wi,0 and −Ni ≤ k ≤ −1,
$$P_i(0, k', 0 \mid 0, k, 0) = \frac{1}{W_{i,0} + 1} \qquad (10)$$
Note that no time passes in (9) and (10), so the definition of these states and transitions is actually
not necessary for accuracy. On the other hand, they simplify the DTMC structure and symmetry.
4) If the queue is still empty when the postbackoff counter reaches zero, the EDCA function enters
the idle state until another packet arrival. Note that (0, 0, 0) also represents the idle state. We make two assumptions: i) at most one packet may arrive during Tslot with constant probability ρi (considering
the fact that Tslot is in the order of microseconds, the probability that multiple packets can arrive in
this interval is very small), ii) if the channel is idle at the slot in which the packet arrives at an empty queue, the transmission will be successful at AIFS completion without any backoff. The latter assumption is justified as follows: while the probability of the channel becoming busy during AIFS or of a collision occurring for the transmission at AIFS completion is very small in a lightly loaded scenario, the probability of a packet arrival to an empty queue is very small in a highly loaded scenario. As
observed via simulations, these assumptions do not lead to any noticeable changes in the results while
simplifying the Markov chain structure and symmetry. Then, for 0 ≤ k ≤ Wi,0 and 1 ≤ l ≤ QSi,
$$P_i(0, 0, 0 \mid 0, 0, 0) = (1 - p_{c_i}) \cdot (1 - \rho_i) + p_{c_i} \cdot p_{nt}(0, T_{i,b} \mid 0) \qquad (11)$$
$$P_i(0, k, l \mid 0, 0, 0) = \frac{p_{c_i}}{W_{i,0} + 1} \cdot p_{nt}(l, T_{i,b} \mid 0) \qquad (12)$$
$$P_i(0, -1, l \mid 0, 0, 0) = (1 - p_{c_i}) \cdot \rho_i \cdot p_{nt}(l, T_{i,s} \mid 0) \qquad (13)$$
Let Ti,b in (11) and (12) be the length of a backoff slot given it is not idle. Note that actually a
successful transmission occurs in the state transition in (13). On the other hand, the transmitted packet
is not reflected in the initial queue size state which is 0. Therefore, pnt is used instead of pst.
Parts of the proposed DTMC model are illustrated in Fig. 1 for an arbitrary ACi with Ni = 2. Fig. 1(a)
shows the state transitions for l = 0. Note that in Fig. 1(a) the states with −Ni ≤ k ≤ −2 can only be
reached from the states with l = 1. Fig. 1(b) presents the state transitions for 0 < l < QSi and 0 ≤ j < ri.
Note that only the transition probabilities and the states marked with rectangles differ when j = ri − 1
(as in (6)). Therefore, we do not include an extra figure for this case. Fig. 1(c) shows the state transitions
when l = QSi. Note also that the states marked with rectangles differ when j = ri − 1 (as in (7)). The
combination of these small chains for all j, k, l constitutes our DTMC model.
A. Steady-State Solution
Let bi,j,k,l be the steady-state probability of the state (j, k, l) of the proposed DTMC with index i, which can be solved using (2)-(13) subject to $\sum_{j,k,l} b_{i,j,k,l} = 1$ (the proposed DTMC is ergodic and irreducible). Let τi be the probability that an ACi transmits at an arbitrary backoff or postbackoff slot

$$\tau_i = \frac{\sum_{j=0}^{r_i-1} \sum_{l=1}^{QS_i} b_{i,j,0,l} + b_{i,0,0,0} \cdot \rho_i \cdot (1 - p_{c_i})}{\sum_{j=0}^{r_i-1} \sum_{k=0}^{W_{i,j}} \sum_{l=0}^{QS_i} b_{i,j,k,l}} \qquad (14)$$
Note that −Ni ≤ k ≤ −1 is not included in the normalization in (14), since these states represent a
continuation in the EDCA TXOP rather than a contention for the access. The value of τi depends on the
values of the average conditional collision probability pci , the various state durations Ti,bs, Ti,b, Ti,s and
Ti,c, and the conditional queue state transition probabilities pnt and pst.
1) Average conditional collision probability pci: The difference in AIFS of each AC in EDCA creates
the so-called contention zones as shown in Fig. 2 [12]. In each contention zone, the number of contending
stations may vary. The collision probability cannot simply be assumed to be constant among all ACs.
We can define pci,x as the conditional probability that ACi experiences either an external or an internal
collision given that it has observed the medium idle for AIFSx and transmits in the current slot (note
AIFSx ≥ AIFSi should hold). For the following, in order to be consistent with the notation of [2], we
assume AIFS0 ≥ AIFS1 ≥ AIFS2 ≥ AIFS3. Let di = AIFSi − AIFS3. Also, let the total number of ACi flows be fi. Then, for the heterogeneous scenario in which each station has only one AC,
$$p_{c_i,x} = 1 - \frac{\prod_{i':\, d_{i'} \le d_x} (1 - \tau_{i'})^{f_{i'}}}{1 - \tau_i} \qquad (15)$$
When each station has multiple ACs that are active, internal collisions may occur. Then, for the scenario
in which each station has all 4 ACs active
$$p_{c_i,x} = 1 - \prod_{i':\, d_{i'} \le d_x} (1 - \tau_{i'})^{f_{i'}-1} \prod_{i'' > i} (1 - \tau_{i''}) \qquad (16)$$
Similar extensions for the cases in which the number of active ACs is 2 or 3 are straightforward.
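For concreteness, the heterogeneous-case zone collision probability of eq. (15) can be prototyped as follows. This is a minimal Python sketch with our own parameter layout (plain lists indexed by AC), not part of the paper's model:

```python
from math import prod

def p_ci_x(i, x, tau, f, d):
    """Zone-specific collision probability p_{c_i,x} of eq. (15):
    ACi transmits after sensing the medium idle for AIFS_x
    (heterogeneous case, one AC per station).
    tau[j] : transmission probability of AC j
    f[j]   : number of AC j flows
    d[j]   : AIFS_j - AIFS_3 in backoff slots
    """
    # product of "no transmission" probabilities over every AC allowed
    # to contend in zone x; dividing by (1 - tau[i]) removes the tagged
    # ACi's own contribution, as in eq. (15)
    no_tx = prod((1 - tau[j]) ** f[j] for j in range(len(tau)) if d[j] <= d[x])
    return 1 - no_tx / (1 - tau[i])
```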
We use the Markov chain shown in Fig. 3 to find the long term occupancy of contention zones. Each
state represents the nth backoff slot after completion of the AIFS3 idle interval following a transmission
period. The Markov chain model uses the fact that a backoff slot is reached if and only if no transmission
occurs in the previous slot. Moreover, the number of states is limited by the maximum idle time between
two successive transmissions which is Wmin = min(CWi,max) for a saturated scenario. Although this is
not the case for a non-saturated scenario, we do not change this limit. As the comparison with simulation results shows, this approximation does not result in significant prediction errors. The probability that at
least one transmission occurs in a backoff slot in contention zone x is
$$p^{tr}_x = 1 - \prod_{i':\, d_{i'} \le d_x} (1 - \tau_{i'})^{f_{i'}} \qquad (17)$$
Note that the contention zones are labeled with x according to the indices of d. In the case of equal AIFS values, the contention zone is labeled with the index of the AC with higher priority.
Given the state transition probabilities as in Fig. 3, the long-term occupancy of the backoff slots b′n can be obtained from the steady-state solution of the Markov chain. Then, the AC-specific average collision probability pci is found by weighting the zone-specific collision probabilities pci,x according to the long-term occupancy of the contention zones (thus backoff slots)
$$p_{c_i} = \frac{\sum_{n=d_i+1}^{W_{min}} p_{c_i,x} \cdot b'_n}{\sum_{n=d_i+1}^{W_{min}} b'_n} \qquad (18)$$

where $x = \max\{y \mid d_y = \max(d_z \mid d_z \le n)\}$, which shows that x is assigned the highest index value within the set of ACs that have an AIFS smaller than or equal to n + AIFS3. This ensures that at backoff slot n, ACi has sensed the medium idle for AIFSx. Therefore, the calculation in (18) fits the definition of pci,x.
Note that the average collision probability calculation in [12, Section IV-D] is a special case of our
calculation for two ACs.
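The weighted average of eq. (18), together with the zone-index rule, can be sketched as below (Python; the list layout and helper name `zone` are ours, not the paper's):

```python
def avg_collision_prob(i, p_zone, b, d, W_min):
    """AC-specific average collision probability p_{c_i} of eq. (18).
    p_zone[x] : zone-specific collision probability p_{c_i,x} for ACi
    b[n]      : long-term occupancy b'_n of the n-th backoff slot
                after AIFS_3, n = 1..W_min (b[0] is unused)
    d[j]      : AIFS_j - AIFS_3 in backoff slots
    """
    def zone(n):
        # x = max{ y : d_y = max(d_z | d_z <= n) }: the highest-indexed
        # AC whose AIFS has already elapsed by backoff slot n
        dmax = max(dz for dz in d if dz <= n)
        return max(y for y, dy in enumerate(d) if dy == dmax)

    num = sum(p_zone[zone(n)] * b[n] for n in range(d[i] + 1, W_min + 1))
    den = sum(b[n] for n in range(d[i] + 1, W_min + 1))
    return num / den
```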
2) The state duration Ti,s and Ti,c: Let Ti,p be the average payload transmission time for ACi (Ti,p
includes the transmission time of MAC and PHY headers), δ be the propagation delay, Tack be the time
required for acknowledgment packet (ACK) transmission. Then, for the basic access scheme, we define
the time spent in a successful transmission Ti,s and a collision Ti,c for any ACi as
Ti,s =Ti,p + δ + SIFS + Tack + δ + AIFSi (19)
Ti,c =Ti,p∗ + ACK Timeout + AIFSi (20)
where Ti,p∗ is the average transmission time of the longest packet payload involved in a collision [3]. For simplicity, we assume the packet size to be equal for any AC, so Ti,p∗ = Ti,p. Since it is not explicitly specified in the standards, we set the ACK Timeout, using the Extended Inter Frame Space (EIFS), as EIFSi − AIFSi.
The extensions of (19) and (20) for the Request-to-Send/Clear-to-Send (RTS/CTS) scheme are
Ti,s =Trts + δ + SIFS + Tcts + δ + SIFS + Ti,p + δ + SIFS + Tack + δ + AIFSi (21)
Ti,c =Trts + CTS Timeout + AIFSi (22)
where Trts and Tcts are the times required for RTS and CTS packet transmissions respectively. Since it is not explicitly specified in the standards, we set the CTS Timeout in the same way as the ACK Timeout.
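A small sketch of eqs. (19)-(20) for the basic access scheme follows (Python; the microsecond timing constants and the payload/ACK durations in the example are assumed placeholders, not values computed by the model, and the EIFS-based ACK Timeout is one common reading of EIFSi − AIFSi = SIFS + Tack + δ):

```python
SIFS, SLOT = 10.0, 9.0   # assumed 802.11g-style values, microseconds

def aifs(aifsn):
    # AIFS_i = SIFS + AIFSN_i * SLOT
    return SIFS + aifsn * SLOT

def basic_access_times(T_p, T_ack, aifsn, delta=1.0):
    """T_{i,s} and T_{i,c} of eqs. (19)-(20), with T_{i,p*} = T_{i,p}
    and ACK_Timeout = EIFS_i - AIFS_i = SIFS + T_ack + delta."""
    T_s = T_p + delta + SIFS + T_ack + delta + aifs(aifsn)
    ack_timeout = SIFS + T_ack + delta          # EIFS_i - AIFS_i
    T_c = T_p + ack_timeout + aifs(aifsn)
    return T_s, T_c
```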
3) The state duration Ti,bs and Ti,b: The average time between successive backoff counter decrements
is denoted by Ti,bs. The backoff counter decrement may be at the slot boundary of an idle backoff slot
or the last slot of AIFS following an EDCA TXOP or a collision period. We start with calculating the
average duration of an EDCA TXOP for ACi, Ti,txop, as

$$T_{i,txop} = \frac{\sum_{l=0}^{QS_i} b_{i,0,-N_i,l} \cdot \big((N_i - 1) \cdot T_{i,exc} + T_{i,s}\big) + \sum_{k=-N_i+1}^{-1} b_{i,0,k,0} \cdot \big((-k - 1) \cdot T_{i,exc} + T_{i,s}\big)}{\sum_{k=-N_i+1}^{-1} b_{i,0,k,0} + \sum_{l=0}^{QS_i} b_{i,0,-N_i,l}} \qquad (23)$$
where Ti,exc is defined as the duration of a successful packet exchange sequence within a TXOP. Since
the packet exchanges within a TXOP are separated by SIFS rather than AIFS,
Ti,exc = Ti,s − AIFSi + SIFS, (24)
Ni = max(1, ⌊(TXOPi + SIFS)/Ti,exc⌋). (25)
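Equation (25) can be checked numerically with a two-line sketch (Python; the 300 µs exchange duration used in the example is an assumed round value, not one derived from the model):

```python
import math

def packets_per_txop(txop_limit, T_exc, SIFS=10e-6):
    """N_i of eq. (25): number of SIFS-separated packet exchange
    sequences that fit into one TXOP (at least one is always sent)."""
    return max(1, math.floor((txop_limit + SIFS) / T_exc))
```

For instance, with an assumed T_exc of 300 µs, a 1.504 ms TXOP limit yields 5 exchanges, and a TXOP limit of 0 degenerates to a single exchange per access.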
Given τi and fi, simple probability theory can be used to calculate the conditional probability of no transmission ($p^{idle}_{x,i}$), only one transmission from ACi′ ($p^{suc_{i'}}_{x,i}$), or at least two transmissions ($p^{col}_{x,i}$) at contention zone x, given that one ACi is in backoff.
$$p^{idle}_{x,i} = \begin{cases} \prod_{i':\, d_{i'} \le d_x} (1 - \tau_{i'})^{f_{i'}}, & \text{if } d_i > d_x \\[1ex] \dfrac{\prod_{i':\, d_{i'} \le d_x} (1 - \tau_{i'})^{f_{i'}}}{1 - \tau_i}, & \text{if } d_i \le d_x \end{cases} \qquad (26)$$

$$p^{suc_{i'}}_{x,i} = \begin{cases} 0, & \text{if } d_x < d_{i'} \\[1ex] f_{i'}\tau_{i'}(1 - \tau_{i'})^{f_{i'}-1} \prod_{i'':\, d_{i''} \le d_x,\, i'' \ne i'} (1 - \tau_{i''})^{f_{i''}}, & \text{if } d_i > d_x \text{ and } d_{i'} \le d_x \\[1ex] \dfrac{f_{i'}\tau_{i'}(1 - \tau_{i'})^{f_{i'}-1}}{1 - \tau_i} \prod_{i'':\, d_{i''} \le d_x,\, i'' \ne i'} (1 - \tau_{i''})^{f_{i''}}, & \text{if } d_i \le d_x \text{ and } d_{i'} \le d_x \end{cases} \qquad (27)$$

$$p^{col}_{x,i} = 1 - p^{idle}_{x,i} - \sum_{i'} p^{suc_{i'}}_{x,i} \qquad (28)$$
Let xi be the first contention zone in which ACi can transmit. Then,
$$T_{i,bs} = \sum_{x_i < x' \le 3} \Big( p^{idle}_{x',i} \cdot T_{slot} + p^{col}_{x',i} \cdot T_{i,c} + \sum_{i'} p^{suc_{i'}}_{x',i} \cdot T_{i',txop} \Big) \cdot pz_{x'} \qquad (29)$$
where pzx denotes the stationary distribution for a random backoff slot being in zone x. Note that, in
(29), the fractional term before summation accounts for the busy periods experienced before AIFSi is
completed. Therefore, if we let d−1 = Wmin,

$$pz_x = \sum_{n=d_x+1}^{\min(d_{x'} \mid d_{x'} > d_x)} b'_n \qquad (30)$$
The expected duration of a backoff slot, given that it is busy and one ACi is in the idle state, is calculated as

$$T_{i,b} = \sum_{x'} \Big( \frac{p^{col}_{x',i}}{1 - p^{idle}_{x',i}} \cdot T_{i,c} + \sum_{i'} \frac{p^{suc_{i'}}_{x',i}}{1 - p^{idle}_{x',i}} \cdot T_{i',txop} \Big) \cdot pz_{x'} \qquad (31)$$
4) The conditional queue state transition probabilities pnt and pst: We assume the packets arrive at
the AC queue with size QSi according to a Poisson process with rate λi packets per second. Using the
probability distribution function of the Poisson process, the probability of k arrivals occurring in a time interval t can be calculated as

$$\Pr(N_{t,i} = k) = \frac{e^{-\lambda_i t} (\lambda_i t)^k}{k!} \qquad (32)$$
Then, pnt(l′, T | l) and pst(l′, T | l) can be calculated as follows. Note that the finite buffer space is considered throughout the calculations, since the number of packets that may arrive during T can be larger than the available queue space.

$$p_{nt}(l', T \mid l) = \begin{cases} \Pr(N_{T,i} = l' - l), & \text{if } l' < QS_i \\[1ex] 1 - \sum_{l'=l}^{QS_i - 1} \Pr(N_{T,i} = l' - l), & \text{if } l' = QS_i \end{cases} \qquad (33)$$

$$p_{st}(l', T \mid l) = \begin{cases} \Pr(N_{T,i} = l' - l + 1), & \text{if } l' < QS_i \\[1ex] 1 - \sum_{l'=l-1}^{QS_i - 1} \Pr(N_{T,i} = l' - l + 1), & \text{if } l' = QS_i \end{cases} \qquad (34)$$
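The queue-transition kernels of eqs. (32)-(34) are straightforward to compute; the Python sketch below uses our own function names and treats the full-queue state as absorbing all overflowing arrival counts:

```python
import math

def poisson_pmf(k, lam, t):
    # Pr(N_t = k) of eq. (32)
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

def p_nt(l_next, T, l, lam, QS):
    """Eq. (33): queue goes from l to l_next packets over interval T
    with no packet leaving the queue."""
    if l_next < QS:
        return poisson_pmf(l_next - l, lam, T) if l_next >= l else 0.0
    # full queue: all arrival counts that would overflow collapse here
    return 1.0 - sum(poisson_pmf(m - l, lam, T) for m in range(l, QS))

def p_st(l_next, T, l, lam, QS):
    """Eq. (34): as p_nt, but one packet departs during T."""
    if l_next < QS:
        return poisson_pmf(l_next - l + 1, lam, T) if l_next >= l - 1 else 0.0
    return 1.0 - sum(poisson_pmf(m - l + 1, lam, T) for m in range(l - 1, QS))
```

A convenient sanity check is that, for each starting queue size l, the probabilities over all reachable l′ sum to one.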
Note that in (11)-(14), ρi = 1−Pr(NTslot,i = 0). Together with the steady-state transition probabilities,
(14)-(34) represent a nonlinear system which can be solved using numerical methods.
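One common way to solve such a coupled nonlinear system is damped fixed-point iteration over (τi, pci). The following is only a schematic driver, not the paper's solver: `solve_dtmc` and `collision_prob` are assumed callbacks standing in for the steady-state solution of the chain (eq. (14)) and the collision-probability calculation (eqs. (15)-(18)):

```python
def fixed_point(solve_dtmc, collision_prob, n_ac=4, damp=0.5, tol=1e-8):
    """Damped fixed-point iteration for the coupled (tau, p_c) system.
    solve_dtmc(p_c)       -> per-AC tau from the DTMC steady state
    collision_prob(tau)   -> per-AC p_c from the contention-zone model
    """
    tau = [0.1] * n_ac
    p_c = [0.1] * n_ac
    for _ in range(10_000):
        tau_new = solve_dtmc(p_c)           # eq. (14) from b_{i,j,k,l}
        p_c_new = collision_prob(tau_new)   # eqs. (15)-(18)
        err = max(abs(a - b) for a, b in zip(tau_new, tau))
        # damping stabilizes the iteration when the mapping is stiff
        tau = [damp * t + (1 - damp) * tn for t, tn in zip(tau, tau_new)]
        p_c = [damp * p + (1 - damp) * pn for p, pn in zip(p_c, p_c_new)]
        if err < tol:
            break
    return tau, p_c
```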
B. Normalized Throughput Analysis
The normalized throughput of a given ACi, Si, is defined as the fraction of the time occupied by the successfully transmitted information. Then,

$$S_i = \frac{p_{s_i} N_{i,txop} T_{i,p}}{p_I T_{slot} + \sum_{i'} p_{s_{i'}} T_{i',txop} + \big(1 - p_I - \sum_{i'} p_{s_{i'}}\big) T_c} \qquad (35)$$

pI is the probability of the channel being idle at a backoff slot, psi is the conditional successful transmission probability of ACi at a backoff slot, and Ni,txop = (Ti,txop − AIFSi + SIFS)/Ti,exc. Note that we consider Ni,txop and Ti,txop in (35) to define the generic slot time and the time occupied by the successfully transmitted information in the case of EDCA TXOPs.
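Equation (35) can be evaluated directly once the fixed point is known; the sketch below (Python, per-AC quantities passed as lists, all durations in seconds) uses our own parameter layout:

```python
def normalized_throughput(i, p_s, p_I, T_txop, T_p, T_exc, T_c, AIFS,
                          T_slot=9e-6, SIFS=10e-6):
    """Normalized throughput S_i of eq. (35).  p_s, T_txop, T_p, T_exc
    and AIFS are per-AC lists; T_c is the collision duration."""
    # N_{i,txop}: average packet exchanges per TXOP of ACi
    N_txop = (T_txop[i] - AIFS[i] + SIFS) / T_exc[i]
    idle = p_I * T_slot
    busy = sum(p_s[j] * T_txop[j] for j in range(len(p_s)))
    coll = (1 - p_I - sum(p_s)) * T_c
    return p_s[i] * N_txop * T_p[i] / (idle + busy + coll)
```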
The probability of a slot being idle, pI, depends on the state of previous slots. For example, conditioned on the previous slot being busy (pB = 1 − pI), pI only depends on the transmission probability of the ACs with the smallest AIFS, since the others have to wait extra AIFS slots. Generalizing this to all AIFS slots, pI
can be calculated as

$$p_I = \sum_{n=0}^{W_{min}} \gamma_n p_B (p_I)^n \approx \sum_{n=0}^{d_0 - 1} \gamma_n p_B p_I^n + \frac{\gamma_{d_0} p_B p_I^{d_0}}{1 - p_I} \qquad (36)$$

where γn denotes the probability of no transmission occurring at the (n + 1)th AIFS slot after AIFS3. Substituting γn = γd0 for n ≥ d0, and relaxing the upper limit of the summation from Wmin to ∞, pI can be approximated as in (36). According to the simulation results, this approximation works well. Note that $\gamma_n = 1 - p^{tr}_x$ where $x = \max\{y \mid d_y = \max(d_z \mid d_z \le n)\}$.
The probability of successful transmission psi is conditioned on the states of the previous slots as well.
This is again because the number of stations that can contend at an arbitrary backoff slot differs depending
on the number of previous consecutive idle backoff slots. Therefore, for the heterogeneous case, in which
each station only has one AC, psi can be calculated as

$$p_{s_i} = \frac{f_i \tau_i}{1 - \tau_i} \left[ \sum_{n=d_i+1}^{d_0} p_B (p_I)^{n-1} \prod_{i':\, 0 \le d_{i'} \le n-1} (1 - \tau_{i'})^{f_{i'}} + (p_I)^{d_0} \prod_{i'} (1 - \tau_{i'})^{f_{i'}} \right] \qquad (37)$$
Similarly, for the scenario in which each station has four active ACs,

$$p_{s_i} = \frac{f_i \tau_i}{1 - \tau_i} \left[ \sum_{n=d_i+1}^{d_0} p_B (p_I)^{n-1} \prod_{i':\, 0 \le d_{i'} \le n-1} (1 - \tau_{i'})^{f_{i'}-1} \prod_{i'' > i} (1 - \tau_{i''}) + (p_I)^{d_0} \prod_{i'} (1 - \tau_{i'})^{f_{i'}-1} \prod_{i'' > i} (1 - \tau_{i''}) \right] \qquad (38)$$
C. Average Delay Analysis
Our goal is to find total average delay E[Di] which is defined as the average time from when a packet
enters the MAC layer queue of ACi until it is successfully transmitted. Di has two components: i) the queueing time Qi and ii) the access time Ai. Qi is the period that a packet waits in the queue for the packets in front of it to be transmitted. Ai is the period a packet waits at the head of the queue until it is transmitted
successfully (backoff and transmission period). We carry out a recursive calculation as in [11] to find
E[Ai] for ACi. Then, using E[Ai] and bi,j,k,l, we calculate E[Di]=E[Qi]+E[Ai]. Note that, E[Ai] differs
depending on whether the EDCA function is idle or not when the packet arrives. We will treat these cases
separately. In the sequel, Ai,idle denotes the access delay when the EDCA function is idle at the time a
packet arrives.
The recursive calculation is carried out in a bottom-to-top and left-to-right manner on the AC-specific
DTMC. For the analysis, let Ai(j, k) denote the time delay from the current state (j, k, l) until the packet
at the head of the ACi queue is transmitted successfully (l ≥ 1). The initial condition on the recursive
calculation is
Ai(ri − 1, 0) = Ti,s. (39)
Recursive delay calculations for 0 ≤ j ≤ ri − 1 are

$$A_i(j,k) = \begin{cases} A_i(j, k-1) + T_{i,bs}, & \text{if } 1 \le k \le W_{i,j} \\[1ex] (1 - p_{c_i}) T_{i,s} + p_{c_i} \left( \dfrac{\sum_{k'=0}^{W_{i,j+1}} A_i(j+1, k')}{W_{i,j+1} + 1} + T_{i,c} \right), & \text{if } k = 0 \text{ and } j \ne r_i - 1 \end{cases} \qquad (40)$$
Then,

$$E[A_i] = \frac{\sum_{k=0}^{W_{i,0}} A_i(0,k)}{W_{i,0} + 1} \qquad (41)$$
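The bottom-to-top, left-to-right recursion of eqs. (39)-(41) maps naturally onto a memoized function. The sketch below is ours (Python; parameter layout assumed, not the paper's implementation):

```python
from functools import lru_cache

def expected_access_delay(W, T_bs, T_s, T_c, p_c, r):
    """E[A_i] of eqs. (39)-(41).  W[j] is the contention window at
    backoff stage j, so stage j holds counter values 0..W[j];
    r is the retry limit."""
    @lru_cache(maxsize=None)
    def A(j, k):
        if k >= 1:                      # backoff countdown, top of eq. (40)
            return A(j, k - 1) + T_bs
        if j == r - 1:                  # last retry: eq. (39)
            return T_s
        # transmit: success, or collide and restart at stage j+1
        nxt = sum(A(j + 1, kk) for kk in range(W[j + 1] + 1)) / (W[j + 1] + 1)
        return (1 - p_c) * T_s + p_c * (nxt + T_c)

    # eq. (41): average over the uniformly chosen initial counter
    return sum(A(0, k) for k in range(W[0] + 1)) / (W[0] + 1)
```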
Following the assumptions made in (11)-(13) and considering the packet loss probability due to the retry limit, $p_{l,r} = (p_{c_i})^{r_i}$ (note that the delay a dropped packet experiences cannot be considered in a total delay calculation), E[Ai,idle] can be calculated as

$$E[A_{i,idle}] = T_{i,s} \cdot (1 - p_{c_i}) + (E[A_i] + T_{i,b}) \cdot p_{c_i} \cdot (1 - p_{l,r}) \qquad (42)$$
In this case, the average access delay is equal to the total average delay, i.e., Di(0, 0, 0) = E[Ai,idle].
We perform another recursive calculation to calculate the total delay a packet experiences Di(j, k, l)
(given that the packet arrives while the EDCA function is at state (j, k, l)). In the calculations, we account
for the remaining access delay for the packet at the head of the MAC queue and the probability that this
packet may be dropped due to the retry limit.
Let Ai,d(j, k) be the access delay conditioned on the packet being dropped. Ai,d(j, k) can easily be calculated by modifying the recursive method of calculating Ai(j, k). The initial condition on this recursive calculation is
Ai,d(ri − 1, 0) = Ti,c. (43)
Recursive delay calculations for 0 ≤ j ≤ ri − 1 are

$$A_{i,d}(j,k) = \begin{cases} A_{i,d}(j, k-1) + T_{i,bs}, & \text{if } 1 \le k \le W_{i,j} \\[1ex] \dfrac{\sum_{k'=0}^{W_{i,j+1}} A_{i,d}(j+1, k')}{W_{i,j+1} + 1} + T_{i,c}, & \text{if } k = 0 \text{ and } j \ne r_i - 1 \end{cases} \qquad (44)$$
Then,

$$E[A_{i,d}] = \frac{\sum_{k=0}^{W_{i,0}} A_{i,d}(0,k)}{W_{i,0} + 1} \qquad (45)$$
If a packet arrives during the backoff of another packet, it is delayed at least for the remaining access
time. Depending on the queue size, it may be transmitted at the current TXOP, or may be delayed till
further accesses are gained. Then, for 0 ≤ j ≤ ri − 1, 0 ≤ k ≤ Wi,j, and 1 ≤ l ≤ QSi,
$$D_i(j,k,l) = (1 - p_{l,r}) \cdot \big(A_i(j,k) + \min(N_i - 1, l - 1) \cdot T_{i,exc} + D_i(-1,-1,l - N_i)\big) + p_{l,r} \cdot \big(A_{i,d}(j,k) + D_i(-1,-1,l-1)\big) \qquad (46)$$
When the packet arrives during postbackoff, the total delay is equal to the access delay. Then, for 0 ≤
k ≤ Wi,j and l = 0,
Di(j, k, l) = Ai(j, k). (47)
When the packet arrives during a TXOP, it may be transmitted at the current TXOP, or it may wait for
further accesses. Then, for −Ni + 1 ≤ k ≤ −1 and 1 ≤ l ≤ QSi,
Di(j, k, l) = min(k − 1, l) · Ti,exc + Di(−1,−1, l − k + 1). (48)
Di(−1,−1, l) is calculated recursively according to the value of l:

$$D_i(-1,-1,l) = \begin{cases} 0, & \text{if } l \le 0 \\ E[A_i] \cdot (1 - p_{l,r}), & \text{if } l = 1 \\ \chi, & \text{if } l > 1 \end{cases} \qquad (49)$$

where

$$\chi = (1 - p_{l,r}) \cdot \big(E[A_i] + \min(N_i - 1, l - 1) \cdot T_{i,exc} + D_i(-1,-1,l - N_i)\big) + p_{l,r} \cdot \big(E[A_{i,d}] + D_i(-1,-1,l-1)\big) \qquad (50)$$
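The recursion (49)-(50) translates directly into code; the sketch below is ours (Python, with the expected access delays E[Ai] and E[Ai,d] passed in as precomputed scalars):

```python
def head_of_queue_delay(l, EA, EAd, N, T_exc, p_lr):
    """D_i(-1,-1,l) of eqs. (49)-(50): remaining delay seen by the
    packet that is l-th in the queue once the current service ends.
    EA, EAd : E[A_i] and E[A_{i,d}]; N packets fit in one TXOP."""
    if l <= 0:
        return 0.0
    if l == 1:
        return EA * (1 - p_lr)
    served = min(N - 1, l - 1)          # extra packets in the same TXOP
    return ((1 - p_lr) * (EA + served * T_exc
                          + head_of_queue_delay(l - N, EA, EAd, N, T_exc, p_lr))
            + p_lr * (EAd + head_of_queue_delay(l - 1, EA, EAd, N, T_exc, p_lr)))
```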
Let the probability of an arriving packet seeing the EDCA function at state (j, k, l) be b̄i,j,k,l. Since we assume independent and exponentially distributed packet interarrivals, b̄i,j,k,l can simply be calculated by normalizing bi,j,k,l while excluding the states in which no time passes, i.e., all (j, k, l) such that (0, −Ni, 1 ≤ l ≤ QSi) or (0, −Ni ≤ k ≤ −1, 0). Note that b̄i,j,k,l is zero for these states

$$\bar{b}_{i,j,k,l} = \frac{b_{i,j,k,l}}{1 - \sum_{l=1}^{QS_i} b_{i,0,-N_i,l} - \sum_{k=-N_i}^{-1} b_{i,0,k,0}} \qquad (51)$$
Then, the total average delay a successful packet experiences, E[Di], can be calculated by averaging Di(j, k, l) over all possible states

$$E[D_i] = E[A_{i,idle}] \cdot \bar{b}_{i,0,0,0} + \sum_{\forall(j,k,l) \ne (0,0,0)} D_i(j,k,l) \cdot \bar{b}_{i,j,k,l} \qquad (52)$$
D. Average Packet Loss Ratio
We consider two types of packet losses: i) the packet is dropped when the MAC layer retry limit is reached, and ii) the packet is dropped if the MAC queue is full at the time of packet arrival. Let plri denote the average packet loss ratio for ACi. We use the steady-state probabilities bi,j,k,l to find the probability of the MAC queue being full or not at the time of packet arrival. If the queue is full, the arriving packet is dropped (second term in (53)). Otherwise, the packet is dropped with probability $(p_{c_i})^{r_i}$, i.e., only if the retry limit is reached (first term in (53)). Note that we consider packet retransmissions only due to packet collisions. Then,

$$plr_i = \sum_{j,k} \sum_{l=0}^{QS_i - 1} b_{i,j,k,l} \cdot (p_{c_i})^{r_i} + \sum_{j,k} b_{i,j,k,QS_i} \qquad (53)$$
E. Queue Size Distribution
Due to the specific structure of the proposed model, it is straightforward to calculate the MAC queue size distribution for ACi. Note that we use the queue size distribution in the calculation of the average packet loss ratio.

$$\Pr(l = l') = \sum_{j,k} b_{i,j,k,l'} \qquad (54)$$
V. NUMERICAL AND SIMULATION RESULTS
We validate the accuracy of the numerical results calculated via the proposed EDCA model by comparing
them with the simulations results obtained from ns-2 [46]. For the simulations, we employ the IEEE
802.11e HCF MAC simulation model for ns-2.28 that we developed [47]. This module implements all
the EDCA and HCCA functionalities stated in [2].
As in all work on the subject in the literature, we consider ACs that transmit fixed-size User Datagram
Protocol (UDP) packets. In simulations, we consider two ACs, one high priority and one low priority.
Each station runs only one traffic class. Unless otherwise stated, the packets are generated according to
a Poisson process with equal rate for both ACs. We set AIFSN1 = 3, AIFSN3 = 2, CW1,min = 15,
CW3,min = 7, m1 = m3 = 3, r1 = r3 = 7. For both ACs, the payload size is 1034 bytes. Again, as in most of the work on the subject, the simulation results are reported for a wireless channel that is assumed to be error-free during transmission. The errored channel case is left for future
study. All the stations have 802.11g Physical Layer (PHY) using 54 Mbps and 6 Mbps as the data and
basic rate respectively (Tslot = 9 µs, SIFS = 10 µs) [48]. The simulation runtime is 100 seconds.
Fig. 4 shows the differentiation of throughput for two ACs when EDCA TXOP limits of both are set
to 0 (1 packet exchange per EDCA TXOP). In this scenario, there are 5 stations for both ACs and they
are transmitting to an AP. The normalized throughput per AC as well as the total system throughput
is plotted for increasing offered load per AC. We have carried out the analysis for maximum MAC
buffer sizes of 2 packets and 10 packets. The comparison between analytical and simulation results
shows that our model can accurately capture the linear relationship between throughput and offered load
under low loads, the complex transition in throughput between under-loaded and saturation regimes, and
the saturation throughput. Although we do not present the corresponding results here, considerable inaccuracy is observed if the postbackoff procedure, the varying collision probability among different AIFS zones, and the varying service time among different backoff stages are not modeled as proposed. The results also show that the slot homogeneity assumption works accurately for throughput estimation in a non-saturated model.
The proposed model can also capture the throughput variation with respect to the size of the MAC buffer.
The results reveal how significantly the size of the MAC buffer affects the throughput in the transition period from an underloaded to a highly loaded channel. This also shows that the small interface buffer assumptions of previous models [27],[28],[29],[40] can lead to considerable analytical inaccuracies. Although the small buffer size case yields higher total throughput in the transition region for this specific example, this cannot be generalized. The reason is that AC1 suffers from low throughput for QS1 = 10 due to the selection of EDCA parameters, which affects the total throughput.
It is also important to note that the throughput performance does not differ significantly (around 1%-2%) for buffer sizes larger than 10 packets in the given scenarios. Therefore, we do not include such
cases in order not to complicate the figures. Since the complexity of the mathematical solution increases
with the increasing size of the third dimension of DTMC, it may be preferable to implement the model
for smaller queue sizes when the throughput performance is not expected to be affected by the selection.
Fig. 5 depicts the differentiation of throughput for two ACs when EDCA TXOP limits are set to 1.504
ms and 3.008 ms for high and low priority ACs respectively. For TXOP limits, we use the suggested values
for voice and video ACs in [2]. It is important to note that the model works for an arbitrary selection of the TXOP limit. According to the selected TXOP limits, N3 = 5 and N1 = 11. The normalized throughput per
AC as well as the total system throughput is plotted while increasing offered load per AC. We have done
the analysis for maximum MAC buffer sizes of 2 packets and 10 packets. The model accurately captures
the throughput for any traffic load. As expected, increasing maximum buffer size to 10 packets increases
the throughput both in the transition and the saturation region. Note that when more than a packet fits
into EDCA TXOPs, this decreases contention overhead which in turn increases channel utilization and
throughput (comparison of Fig. 5 with Fig. 4). Although corresponding results are not presented here, the
model works accurately for higher queue sizes in the case of EDCA TXOPs as well.
Fig. 6 displays the differentiation of throughput for two ACs when packet arrival rate is fixed to 2
Mbps and the station number per AC is increased. We have done the analysis for the MAC buffer size
of 10 packets with EDCA TXOPs enabled. The analytical and simulation results are well in accordance.
As the traffic load increases, the differentiation in throughput between the ACs is observed.
Fig. 7 shows the normalized throughput for two ACs when offered load per AC is not equal. In this
scenario, we set the packet arrival rate per AC1 to 2 Mbps and the packet arrival rate per AC3 to 0.5
Mbps. The analytical and simulation results are well in accordance. As the traffic load increases, AC3
maintains linear increase with respect to offered load, while AC1 experiences decrease in throughput due
to larger settings of AIFS and CW if the total number of stations exceeds 22.
In the design of the model, we assume constant packet arrival probability per state. The Poisson arrival
process fits this definition because of the independent exponentially distributed interarrival times. We have
also compared the throughput estimates obtained from the analytical model with the simulation results
obtained using an On/Off traffic model in Fig. 8. A similar study was first made for DCF in [27]. We modeled the high priority AC with an On/Off traffic model with exponentially distributed idle and active intervals of mean length 1.5 s. In the active interval, packets are generated at a Constant Bit Rate (CBR).
The low priority traffic uses Poisson distributed arrivals. Note that we leave the packet size unchanged,
but normalize the packet arrival rate according to the on/off pattern so that total offered load remains
constant to have a fair comparison. The analytical predictions closely follow the simulation results for the
given scenario. We have observed that the predictions are more sensitive when the transition region is entered with a small number of stations (5 stations per AC).
Our model also provides a very good match in terms of the throughput for CBR traffic. In Fig. 9, we
compare the throughput prediction of the proposed model with simulations using CBR traffic. The packet
arrival rate is fixed to 2 Mbps for both ACs and the station number per AC is increased. MAC buffer size
is 10 packets and EDCA TXOPs are enabled.
Fig. 10 depicts the total average packet delay with respect to increasing traffic load per AC. We present
the results for two different scenarios. In the first scenario, TXOP limits are set to 0 ms for both ACs.
In the second scenario, TXOP limits are set to 1.504 ms and 3.008 ms for high and low priority ACs
respectively. The analysis is carried out for a buffer size of 10 packets. As the results imply, the analytical
results closely follow the simulation results for both scenarios. In the lightly loaded region, the delays
are considerably small. The increase in the transition region is steeper when TXOP limits are 0. In the
specific example, enabling TXOPs decreases the total delay where the decrease is more considerable for
the low priority AC (due to selection of parameters). Since the buffer size is limited, the total average
delay converges to a specific value as the load increases. Still this limit is not of interest, since the packet
loss rate at this region is unpractically large. Note that this limit will be higher for larger buffers. The
region of interest is the start of the transition region (between 2 Mbps and 3 Mbps for the example in
Fig. 10). On the other hand, we also display other data points to show the performance of the model for
the whole load span.
Fig. 11 depicts the average packet loss ratio with respect to increasing traffic load per AC. We present
the results for two different scenarios. In the first scenario, TXOP limits are set to 0 ms for both ACs.
In the second scenario, TXOP limits are set to 1.504 ms and 3.008 ms for high and low priority ACs
respectively. The analysis is carried out for a buffer size of 10 packets. As the results imply, the analytical
results closely follow the simulation results for both scenarios. Although it is not presented in Fig. 11,
the packet loss ratio drops exponentially to 0 when the offered load per AC is lower than 2.5 Mbps.
The results presented in this paper fix the AIFS and CW parameters for each AC. The results are compared for different TXOP values at varying traffic load. Therefore, the presented results mainly indicate the effects of TXOP on the maximum throughput. The model can also be used to investigate the effects of AIFS and CW on the maximum throughput.
As the comparison of Fig. 4 and Fig. 5 reveals, the total throughput can be maximized with the
introduction of EDCA TXOPs which enable multiple frame transmissions in one channel access (note
that MAC buffer sizes for each AC should be equal to or larger than the number of packets that can fit
to the AC-specific TXOP in order to efficiently utilize each TXOP gained). EDCA TXOPs decrease the
channel contention overhead and the ACs can efficiently utilize the resources. Note also that the effect of EDCA TXOPs in the lightly loaded region is marginal compared to the highly loaded region. This is expected since the MAC queues do not build up in the lightly loaded scenario, where stations usually have just one packet to send at each access to the channel.
As Fig. 4 shows, the saturation throughput is usually less than the maximum throughput that can be obtained. This is also observed for DCF in [3]. Similarly, in Fig. 6-Fig. 9, the total throughput slightly decreases as the total load increases. As the load in the system increases, the collision overhead becomes
significant which decreases the total channel utilization. On the other hand, as also discussed in [3], the
point where the maximum throughput is observed is unstable in a random access system. Therefore, a
good admission control algorithm should be defined to operate the system at the point right before the
lightly loaded to highly loaded transition region starts.
VI. CONCLUSION
We have presented an accurate Markov model for analytically calculating the EDCA throughput and
delay for the whole traffic load range from a lightly loaded non-saturated channel to a heavily congested
saturated medium. The model demonstrates, over the whole traffic range, the accuracy of the homogeneous
slot assumption (constant collision and transmission probability at an arbitrary backoff slot), which has
previously been studied mainly in saturation scenarios. It accurately captures the linear relationship
between throughput and offered load at low loads and the limiting behavior of throughput at saturation.
The key contribution of this paper is that the model accounts for all of the differentiation mechanisms
EDCA proposes. The analytical model can incorporate any selection of AC-specific AIFS, CW, and TXOP
values for any number of ACs. The model also considers varying collision probabilities at different con-
tention zones which provides accurate AIFS differentiation analysis. Although not presented explicitly in
this paper, it is straightforward to extend the presented model for scenarios where the stations run multiple
ACs (virtual collisions may take place) or RTS/CTS protection mechanism is used. The approximations
made for the sake of DTMC simplicity and symmetry may also be removed easily for increased accuracy,
although they are shown to be highly accurate.
We have also shown that the MAC buffer size significantly affects EDCA performance between the underloaded
and saturated regimes (saturation included), especially when EDCA TXOPs are enabled. The presented
model captures this complex transition accurately, which underlines that an accurate queue treatment
is vital. Incorporating MAC queue states also enables EDCA TXOP analysis, so that the TXOP continuation
process is modeled in considerable detail. To the authors' knowledge, this is the first analytic model
that includes the EDCA TXOP procedure under finite load.
It is also worth noting that our model can easily be simplified to model DCF behavior. Moreover, with
appropriate modifications, the model can analyze the throughput of an infrastructure WLAN with
transmissions in both the uplink and the downlink (note that in a WLAN the downlink traffic load may
differ significantly from the uplink load).
Although the Markov analysis assumes that packets are generated according to a Poisson process, the
comparison with simulation results shows that the throughput analysis is valid for a range of traffic
types such as CBR and On/Off traffic (the On/Off model is widely used for voice and telnet traffic).
The lack of a closed-form solution for the Markov model limits its practical use. On the other hand,
the accurate saturation throughput analysis can highlight the strengths and shortcomings of EDCA in
varying scenarios and can provide invaluable insights. The model can effectively assist EDCA
parameter adaptation or a call admission control algorithm for improved QoS support in the WLAN.
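Although our EDCA DTMC has no closed form, models of this family are solved numerically as fixed points. As a minimal sketch of that solution style, the snippet below iterates the classic single-class DCF saturation fixed point of [3] (not this paper's DTMC), with example values W = 32 and m = 5:

```python
# Bianchi's DCF saturation fixed point [3], solved by damped iteration.
# Illustrative of the numerical solution style only; the EDCA DTMC of this
# paper is larger but is solved in the same fixed-point spirit.
def bianchi_fixed_point(n, W=32, m=5, iters=500):
    """Iterate tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)^m)),
    with p = 1 - (1-tau)^(n-1), for n saturated stations."""
    tau = 0.05
    for _ in range(iters):
        p = 1 - (1 - tau)**(n - 1)
        tau_new = 2*(1 - 2*p) / ((1 - 2*p)*(W + 1) + p*W*(1 - (2*p)**m))
        tau = 0.5*tau + 0.5*tau_new   # damping keeps the iteration stable
    return tau, p

tau, p = bianchi_fixed_point(n=10)
print(round(tau, 4), round(p, 4))
```

An admission controller could evaluate such a fixed point for each candidate load to keep the system below the throughput-collapse region discussed above.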
REFERENCES
[1] IEEE Standard 802.11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications, IEEE 802.11 Std., 1999.
[2] IEEE Standard 802.11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications: Medium access control
(MAC) Quality of Service (QoS) Enhancements, IEEE 802.11e Std., 2005.
[3] G. Bianchi, “Performance Analysis of the IEEE 802.11 Distributed Coordination Function,” IEEE Trans. Commun., pp. 535–547, March
2000.
[4] F. Cali, M. Conti, and E. Gregori, “IEEE 802.11 Wireless LAN: Capacity Analysis and Protocol Enhancement,” in Proc. IEEE Infocom
’98, March 1998.
[5] ——, “Dynamic Tuning of the IEEE 802.11 Protocol to Achieve a Theoretical Throughput Limit,” IEEE/ACM Trans. Netw., pp.
785–799, December 2000.
[6] J. C. Tay and K. C. Chua, “A Capacity Analysis for the IEEE 802.11 MAC Protocol,” Wireless Netw., pp. 159–171, July 2001.
[7] K. Medepalli and F. A. Tobagi, “Throughput Analysis of IEEE 802.11 Wireless LANs using an Average Cycle Time Approach,” in
Proc. IEEE Globecom ’05, November 2005.
[8] J. Hui and M. Devetsikiotis, “Metamodeling of Wi-Fi Performance,” in Proc. IEEE ICC ’06, June 2006.
[9] Y. Xiao, “An Analysis for Differentiated Services in IEEE 802.11 and IEEE 802.11e Wireless LANs,” in Proc. IEEE ICDCS ’04,
March 2004.
[10] ——, “Performance Analysis of Priority Schemes for IEEE 802.11 and IEEE 802.11e Wireless LANs,” IEEE Trans. Wireless Commun.,
pp. 1506–1515, July 2005.
[11] Z. Kong, D. H. K. Tsang, B. Bensaou, and D. Gao, “Performance Analysis of the IEEE 802.11e Contention-Based Channel Access,”
IEEE J. Select. Areas Commun., pp. 2095–2106, December 2004.
[12] J. W. Robinson and T. S. Randhawa, “Saturation Throughput Analysis of IEEE 802.11e Enhanced Distributed Coordination Function,”
IEEE J. Select. Areas Commun., pp. 917–928, June 2004.
[13] ——, “A Practical Model for Transmission Delay of IEEE 802.11e Enhanced Distributed Channel Access,” in Proc. IEEE PIMRC ’04,
September 2004.
[14] J. Hui and M. Devetsikiotis, “Performance Analysis of IEEE 802.11e EDCA by a Unified Model,” in Proc. IEEE Globecom ’04,
December 2004.
[15] ——, “A Unified Model for the Performance Analysis of IEEE 802.11e EDCA,” IEEE Trans. Commun., pp. 1498–1510, September
2005.
[16] H. Zhu and I. Chlamtac, “Performance Analysis for IEEE 802.11e EDCF Service Differentiation,” IEEE Trans. Wireless Commun., pp.
1779–1788, July 2005.
[17] I. Inan, F. Keceli, and E. Ayanoglu, “Saturation Throughput Analysis of the 802.11e Enhanced Distributed Channel Access Function,”
to appear in Proc. IEEE ICC ’07.
[18] Z. Tao and S. Panwar, “An Analytical Model for the IEEE 802.11e Enhanced Distributed Coordination Function,” in Proc. IEEE ICC
’04, May 2004.
[19] ——, “Throughput and Delay Analysis for the IEEE 802.11e Enhanced Distributed Channel Access,” IEEE Trans. Commun., pp.
596–602, April 2006.
[20] J. Zhao, Z. Guo, Q. Zhang, and W. Zhu, “Performance Study of MAC for Service Differentiation in IEEE 802.11,” in Proc. IEEE
Globecom ’02, November 2002.
[21] A. Banchs and L. Vollero, “A Delay Model for IEEE 802.11e EDCA,” IEEE Commun. Lett., pp. 508–510, June 2005.
[22] ——, “Throughput Analysis and Optimal Configuration of IEEE 802.11e EDCA,” Comp. Netw., pp. 1749–1768, August 2006.
[23] Y. Chen, Q.-A. Zeng, and D. P. Agrawal, “Performance Analysis of IEEE 802.11e Enhanced Distributed Coordination Function,” in
Proc. IEEE ICON ’03, September 2003.
[24] Y.-L. Kuo, C.-H. Lu, E. H.-K. Wu, G.-H. Chen, and Y.-H. Tseng, “Performance Analysis of the Enhanced Distributed Coordination
Function in the IEEE 802.11e,” in Proc. IEEE VTC ’03 - Fall, October 2003.
[25] Y. Lin and V. W. Wong, “Saturation Throughput of IEEE 802.11e EDCA Based on Mean Value Analysis,” in Proc. IEEE WCNC ’06,
April 2006.
[26] L. Kleinrock, Queueing Systems. John Wiley and Sons, 1975.
[27] K. Duffy, D. Malone, and D. J. Leith, “Modeling the 802.11 Distributed Coordination Function in Non-Saturated Conditions,” IEEE
Commun. Lett., pp. 715–717, August 2005.
[28] F. Alizadeh-Shabdiz and S. Subramaniam, “Analytical Models for Single-Hop and Multi-Hop Ad Hoc Networks,” in Proc. ACM
Broadnets ’04, October 2004.
[29] ——, “Analytical Models for Single-Hop and Multi-Hop Ad Hoc Networks,” Mobile Networks and Applications, pp. 75–90, February
2006.
[30] G. R. Cantieni, Q. Ni, C. Barakat, and T. Turletti, “Performance Analysis under Finite Load and Improvements for Multirate 802.11,”
Comp. Commun., pp. 1095–1109, June 2005.
[31] B. Li and R. Battiti, “Analysis of the IEEE 802.11 DCF with Service Differentiation Support in Non-Saturation Conditions,” in QoFIS
’04, September 2004.
[32] P. E. Engelstad and O. N. Osterbo, “Analysis of the Total Delay of IEEE 802.11e EDCA and 802.11 DCF,” in Proc. IEEE ICC ’06,
June 2006.
[33] A. N. Zaki and M. T. El-Hadidi, “Throughput Analysis of IEEE 802.11 DCF Under Finite Load Traffic,” in Proc. First International
Symposium on Control, Communications and Signal Processing, 2004.
[34] O. Tickoo and B. Sikdar, “Queueing Analysis and Delay Mitigation in IEEE 802.11 Random Access MAC based Wireless Networks,”
in Proc. IEEE Infocom ’04, March 2004.
[35] ——, “A Queueing Model for Finite Load IEEE 802.11 Random Access MAC,” in Proc. IEEE ICC ’04, June 2004.
[36] X. Chen, H. Zhai, X. Tian, and Y. Fang, “Supporting QoS in IEEE 802.11e Wireless LANs,” IEEE Trans. Wireless Commun., pp.
2217–2227, August 2006.
[37] W. Lee, C. Wang, and K. Sohraby, “On Use of Traditional M/G/1 Model for IEEE 802.11 DCF in Unsaturated Traffic Conditions,” in
Proc. IEEE WCNC ’06, May 2006.
[38] K. Medepalli and F. A. Tobagi, “System Centric and User Centric Queueing Models for IEEE 802.11 based Wireless LANs,” in Proc.
IEEE Broadnets ’05, October 2005.
[39] C. H. Foh and M. Zukerman, “A New Technique for Performance Evaluation of Random Access Protocols,” in Proc. European Wireless
’02, February 2002.
[40] J. W. Tantra, C. H. Foh, I. Tinnirello, and G. Bianchi, “Analysis of the IEEE 802.11e EDCA Under Statistical Traffic,” in Proc. IEEE
ICC ’06, June 2006.
[41] S. Mangold, S. Choi, P. May, and G. Hiertz, “IEEE 802.11e - Fair Resource Sharing Between Overlapping Basic Service Sets,” in
Proc. IEEE PIMRC ’02, September 2002.
[42] T. Suzuki, A. Noguchi, and S. Tasaka, “Effect of TXOP-Bursting and Transmission Error on Application-Level and User-Level QoS
in Audio-Video Transmission with 802.11e EDCA,” in Proc. IEEE PIMRC ’06, September 2006.
[43] I. Tinnirello and S. Choi, “Efficiency Analysis of Burst Transmissions with Block ACK in Contention-Based 802.11e WLANs,” in
Proc. IEEE ICC ’05, May 2005.
[44] ——, “Temporal Fairness Provisioning in Multi-Rate Contention-Based 802.11e WLANs,” in Proc. IEEE WoWMoM ’05, June 2005.
[45] F. Peng, H. M. Alnuweiri, and V. C. M. Leung, “Analysis of Burst Transmission in IEEE 802.11e Wireless LANs,” in Proc. IEEE
ICC ’06, June 2006.
[46] (2006) The Network Simulator, ns-2. [Online]. Available: http://www.isi.edu/nsnam/ns
[47] IEEE 802.11e HCF MAC model for ns-2.28. [Online]. Available: http://newport.eecs.uci.edu/~fkeceli/ns.htm
[48] IEEE Standard 802.11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications: Further Higher Data
Rate Extension in the 2.4 GHz Band, IEEE 802.11g Std., 2003.
Fig. 1. Parts of the proposed DTMC model for Ni = 2. The combination of these small chains for all j, k, l constitutes the proposed DTMC
model. (a) l = 0. (b) 0 < l < QSi. (c) l = QSi. Remarks: (i) the transition probabilities and the states marked with rectangles differ when
j = ri − 1 (as in (6) and (7)); (ii) the limits for l′ follow the rules in (2)-(13).
Fig. 2. EDCA backoff after busy medium.
Fig. 3. Transition through backoff slots in different contention zones for the example given in Fig. 2.
Fig. 4. Normalized throughput prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing load per AC
at each station and varying MAC buffer size in basic access mode (TXOP3 = 0, TXOP1 = 0). Simulation results are also added for
comparison.
Fig. 5. Normalized throughput prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing load per AC
at each station and varying MAC buffer size in basic access mode (TXOP3 = 1.504 ms, TXOP1 = 3.008 ms). Simulation results are also
added for comparison.
Fig. 6. Normalized throughput prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing number
of stations when MAC buffer size is 10 packets and total offered load per AC is 2 Mbps (TXOP3 = 1.504 ms, TXOP1 = 3.008 ms).
Simulation results are also added for comparison.
Fig. 7. Normalized throughput prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing number of
stations when MAC buffer size is 10 packets (TXOP3 = 1.504 ms, TXOP1 = 3.008 ms). Total offered load per AC3 is 0.5 Mbps while
total offered load per AC1 is 2 Mbps. Simulation results are also added for comparison.
Fig. 8. Normalized throughput prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing number of
stations when total offered load per AC is 0.5 Mbps (TXOP3 = 1.504 ms, TXOP1 = 3.008 ms). Simulation results are also added for
the scenario in which AC3 uses On/Off traffic with exponentially distributed idle and active times, both with mean 1.5 s, while AC1 uses
Poisson packet arrivals.
Fig. 9. Normalized throughput prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing number
of stations when MAC buffer size is 10 packets and total offered load per AC is 2 Mbps (TXOP3 = 1.504 ms, TXOP1 = 3.008 ms).
Simulation results are also added for the scenario in which both AC1 and AC3 use CBR traffic.
Fig. 10. Total average delay prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing load per AC at
each station. In the first scenario, TXOP limits are set to 0 ms for both ACs. In the second scenario, TXOP limits are set to 1.504 ms and
3.008 ms for the high- and low-priority ACs, respectively. Simulation results are also added for comparison.
Fig. 11. Average packet loss ratio prediction of the proposed model for the 2-AC heterogeneous scenario with respect to increasing load per
AC at each station. In the first scenario, TXOP limits are set to 0 ms for both ACs. In the second scenario, TXOP limits are set to 1.504
ms and 3.008 ms for the high- and low-priority ACs, respectively. Simulation results are also added for comparison.
Charge Ordering in Half-Doped Manganites: Weak Charge Disproportion and Leading Mechanisms
Dmitri Volja1,2, Wei-Guo Yin1 (a) and Wei Ku1,2 (b)
1 Condensed Matter Physics and Materials Science Department, Brookhaven National Laboratory, Upton, New York
11973, USA
2 Physics Department, State University of New York, Stony Brook, New York 11790, USA
PACS 75.47.Lx – Manganites (magnetotransport materials)
PACS 71.45.Lr – Charge-density waves - collective excitations
PACS 71.10.Fd – Lattice fermion models (Hubbard model, etc.)
PACS 71.30.+h – Metal-insulator transitions and other electronic transitions
Abstract. - The apparent contradiction between the recently observed weak charge disproportion
and the traditional Mn3+/Mn4+ picture of the charge-orbital orders in half-doped manganites is
resolved by a novel Wannier states analysis of the LDA+U electronic structure. Strong electron
itinerancy in this charge-transfer system significantly delocalizes the occupied low-energy “Mn3+”
Wannier states such that charge leaks into the “Mn4+”-sites. Furthermore, the leading mechanisms
of the charge order are quantified via our first-principles derivation of the low-energy effective
Hamiltonian. The electron-electron interaction is found to play a role as important as the electron-
lattice interaction.
Introduction. – The exploration of the interplay among distinct orders lies at the heart of condensed matter
physics and materials science, as this interplay often gives rise to tunable properties of practical interest,
such as exotic states and colossal responses to external stimuli.
Manganese oxides such as La1−xCaxMnO3, which host
rich charge, orbital, spin, and lattice degrees of freedom,
have thus attracted great attention [1]. In particular, the
vastly interesting colossal magnetoresistance (CMR) ef-
fect for x ∼ 0.2 − 0.4 exemplifies rich physics originating
from proximity of competing orders. In a slightly more
doped system (x = 0.5), all these orders coexist in an in-
sulating state [2], providing a unique opportunity for a
clean investigation of the strength and origin of each or-
der. Therefore, the study of half-doped manganites is key
to a realistic understanding of the physics of manganites
in general and the CMR effect in particular [3, 4].
The peculiar multiple orders in half-doped manganites
have long been understood in the Goodenough model [2]
of a Mn 3+/4+ checkerboard charge order (CO) with the
occupied Mn3+ eg orbitals zigzag ordered in the CE-type
antiferromagnetic background [2]. Pertaining to CMR, it is broadly believed that a key ingredient is the
emergence of nanoscale charge-ordered insulating regions of Goodenough type at intermediate temperatures,
which could melt rapidly in the magnetic field [4, 5].
(a) E-mail: [email protected]
(b) E-mail: [email protected]
Nonetheless, the simple yet profound Goodenough picture has been vigorously challenged by recent
experimental observations of nearly indiscernible charge disproportion (CD) in a number of half-doped
manganites
[6–13]. Such weak CD has also been observed in first-
principles computations [14, 15] and charge transfer be-
tween Mn and O sites was reported as well [15]. In
essence, these findings have revived a broader discussion
on the substantial mismatch of valence and charge in most
charge-transfer insulators. Indeed, extensive experimen-
tal and theoretical effort has been made in light of the
novel Zener-polaron model [9,33] in which all the Mn sites
become equivalent with valence being +3.5. Amazingly,
most of these investigations concluded in favor of two
distinct Mn sites as predicted in the Goodenough model
[11–13,16, 17], calling for understanding the emergence of
weak CD within the 3+/4+ valence picture.
Another closely related crucial issue is the roles of dif-
ferent microscopic interactions in the observed charge-
orbital orders, in particular the relevance of electron-
electron (e-e) interactions in comparison with the well-
accepted electron-lattice (e-l) interactions. For example,
∆n, the difference in the electron occupation number be-
tween Mn3+ and Mn4+ states, was shown to be small in an
e-e only picture [18]; this is however insufficient to explain
the observed weak CD, since the established e-l interac-
tions will cause a large ∆n [19]. Moreover, despite the
common belief that e-l interactions dominate the general
physics of eg electrons in the manganites [2, 20], a recent
theoretical study [21] showed that e-e interaction plays an
essential and leading role in ordering the eg orbitals in the
parent compound. It is thus important to quantify the
leading mechanisms in the doped case and uncover the ef-
fects of the additional charge degree of freedom in general.
In this Letter, we present a general, simple, yet quanti-
tative picture of doped holes in strongly correlated charge-
transfer systems, and apply it to resolve the above con-
temporary fundamental issues concerning the charge order
in half-doped manganites. Based on recently developed
first-principles Wannier states (WSs) analysis [21,22,34] of
the LDA+U electronic structure in prototypical Ca-doped
manganites, the doped holes are found to reside primarily
in the oxygen atoms. They are entirely coherent in short
range [23], forming a Wannier orbital of Mn eg symme-
try at low-energy scale. This hybrid orbital, together with
the unoccupied Mn 3d orbital, forms the effective “Mn eg”
basis in the low-energy theory with conventional 3+/4+
valence picture, but simultaneously results in a weak CD
owing to the similar degree of mixing with the intrinsic
Mn orbitals, thus reconciling the current conceptual con-
tradictions. Moreover, our first-principles derivation of the
low-energy interacting Hamiltonian reveals a surprisingly
essential role of e-e interactions in the observed charge or-
der, contrary to the current lore. Our theoretical method
and the resulting simple picture provide a general frame-
work to utilize the powerful valence picture even with weak
CD, and can be directly applied to a wide range of doped
charge-transfer insulators.
Small CD vs. 3+/4+ valence Picture. – To pro-
ceed with our WSs analysis, the first-principles electronic
structure needs to reproduce all the relevant experimental
observations, including a band gap of ∼ 1.3 eV, CE-type
magnetic and orbital orders, and weak CD, as well as two
distinct Mn sites. We find that the criteria are met by the
LDA+U (8 eV) [14,24] band structure of the prototypical half-doped manganite La1/2Ca1/2MnO3 based
on the realistic crystal structure [25] supplemented with an assumed alternating La/Ca order. Hence, a
proper analysis of this
LDA+U electronic structure is expected to illustrate the
unified picture of weak CD with the Mn3+/Mn4+ assign-
ment, which can be easily extended to other cases. In
practice, we shall focus on the most relevant low-energy
(near the Fermi level EF) bands—they are 16 “eg” spin-
majority bands (corresponding to 8 “spin-up” Mn atoms
in our unit cell) spanning an energy window of 3.2 eV,
as clearly shown in Fig. 1(a). For short notation, the Mn
bridge- and corner-sites in the zigzag ferromagnetic chain
are abbreviated to B- and C-sites, respectively.
Fig. 1: (Color online) (a) LDA+U Band structures (dots). The
(red) lines result from the Wannier states analysis of the four
occupied spin-majority Mn 3d-derived bands. (b) An occupied
B-site (“Mn3+”) Wannier orbital in a spin-up (up arrow) zig-
zag chain, showing remarkable delocalization to the neighbor-
ing Mn C-sites. (c) Low-energy Mn atomic-like Wannier states
containing in their tails the integrated out O 2p orbitals.
The simplest yet realistic picture of the CO can be obtained by constructing occupation-resolved WSs
(ORWSs) from the four fully occupied bands, each centered at one B-site, as illustrated in Fig. 1(b).
This occupied B-site eg Wannier orbital of 3x2 − r2 or 3y2 − r2 symmetry (so the formal valency is 3+)
contains in its tail the integrated-out O 2p orbitals with considerable weight, indicative of the
charge-transfer nature [15]. Moreover, this “molecular or-
bital in the crystal” extends significantly to neighboring
C-sites on the same zig-zag chain. Therefore, although by
construction the C-site eg ORWSs (not shown) are com-
pletely unoccupied (so formal valency is 4+), appreciable
charge is still accumulated within the C-site Mn atomic
sphere owing to the large tails of the two occupied OR-
WSs centered at the two neighboring B-sites. Integrating the charges within the atomic spheres around
the B- and C-site Mn atoms leads to a CD of a mere 0.14 e, in agreement with the experimental
0.1−0.2 e [7–12]. In this simple
picture, one finds a large difference in the occupation num-
bers of the ORWSs at the B- and C- sites (∆n = 1), but
a small difference in real charge. That is, the convenient
3+/4+ picture is perfectly applicable and it allows weak
CD, as long as the itinerant nature of manganites is incor-
porated via low-energy WSs rather than standard “atomic
states.”
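The bookkeeping behind "∆n = 1 yet weak CD" can be made concrete with a toy weight budget. The weights below are invented purely for illustration (they are not taken from the LDA+U calculation); they only show how tails of the B-centered Wannier states inside the two neighboring C spheres shrink the real-space charge difference.

```python
# Toy bookkeeping of "Delta n = 1 yet weak CD". The weights below are
# invented purely for illustration; they are NOT fitted to the LDA+U data.
w_B = 0.40      # weight of an occupied B-centered WS inside the B Mn sphere
w_tail = 0.13   # its tail inside each of the two neighboring C Mn spheres
w_O = 1.0 - w_B - 2 * w_tail   # remainder, mostly on the O 2p network

n_B_sphere = w_B            # one occupied eg WS is centered on each B-site
n_C_sphere = 2 * w_tail     # each C-site collects tails from two B neighbors

delta_n_WS = 1 - 0          # occupation difference of the ORWSs (3+ vs 4+)
CD = n_B_sphere - n_C_sphere   # real-space charge disproportion

print(delta_n_WS, round(CD, 2), round(w_O, 2))  # 1 0.14 0.34
```

With these hypothetical weights the Wannier occupations still differ by exactly one electron, while the integrated sphere charges differ by only 0.14 e, matching the scale reported above.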
In comparison, to make connection with the conventional atomic picture and to formulate the spontaneous
symmetry breaking from a symmetric starting point (cf. the next section), we construct from the 16
low-energy bands "atomic-like" WSs (AWSs) of Mn d3z2−r2 and dx2−y2 symmetry, as shown in Fig. 1(c).
In this picture, both B- and C-site AWSs are partially occupied with
∆n = 0.6. Now weak CD results from large hybridization
with O 2p orbitals, which significantly decreases the charge
within the Mn atomic sphere. Obviously, this picture is
less convenient for an intuitive and quantitative under-
standing of weak CD compatible with the 3+/4+ picture
than the previous one, as the latter builds the information
of the Hamiltonian and the resulting reduced density ma-
trix into the basis. On the other hand, it indeed implies
that strong charge-transfer in the system renders it highly
inappropriate to associate CD with the difference in the
occupation numbers of atomic-like states.
Furthermore, we find that the above conclusions are
generic in manganites by also looking into the two end lim-
its of La1−xCaxMnO3 (x = 0, 1). As shown in Fig. 2, Mn d
charge (within the Mn atomic sphere) is found to change
only insignificantly upon doping, in agreement with ex-
periments [7, 26]. This indicates that doped holes reside
primarily in the oxygen atoms, but are entirely coherent
in short-range and form additional effective “Mn eg” or-
bitals in order to gain the most kinetic energy from the
d-p hybridization, as shown in Fig. 1(a)-(b), in spirit sim-
ilar to hole-doped cuprates [23]. This justifies the present
simplest description of the charge-orbital orders with only
the above Mn-centering eg WSs [27].
Leading Mechanisms. – To identify the leading
mechanisms of the charge-orbital orders in a rigorous for-
malism, we proceed to derive a realistic effective low-
energy Hamiltonian, Heff , following our recently devel-
oped first-principles WS approach [21, 34]. As clearly
shown in Fig. 1, the low-energy physics concerning charge
and orbital orders is mainly the physics of one zig-zag FM
Fig. 2: The calculated d charge within the Mn atomic sphere
for La1−xCaxMnO3 (x = 0, 1/2, and 1). The results are shown
in terms of (i) B-site (filled symbols), C-site (empty symbols),
and site-average (half-filled symbols); (ii) spin-majority (up tri-
angles), spin-minority (down triangles), and total charge (dia-
monds). The lines are guides to the eye.
chain, since electron hopping between the antiferromag-
netically arranged chains is strongly suppressed by the
double-exchange effect [18, 28, 29]. Our unbiased first-
principles analysis of the 16-band one-particle LDA+U
Hamiltonian in the above AWS representation reveals
[18, 28, 29, 36]
Heff = − Σ_{⟨ij⟩γγ′} t^{γγ′}_{ij} (d†_{jγ′} d_{iγ} + h.c.) − Ez Σ_i T^z_i + Ueff Σ_i n_{i↑} n_{i↓} + V Σ_{⟨ij⟩} n_i n_j + g Σ_i (n_i Q_{1i} + T^x_i Q_{2i} + T^z_i Q_{3i})   (1)
in addition to the elastic energy K({Qi}). Here d†_{iγ} and d_{iγ} are electron creation and annihilation
operators at site i with "pseudo-spin" γ defined as |↑〉 = |3z2 − r2〉 and |↓〉 = |y2 − x2〉 AWSs,
corresponding to the pseudo-spin operators T^x_i = (d†_{i↑} d_{i↓} + d†_{i↓} d_{i↑})/2 and
T^z_i = (d†_{i↑} d_{i↑} − d†_{i↓} d_{i↓})/2. n_i = d†_{i↑} d_{i↑} + d†_{i↓} d_{i↓} is the electron
occupation number.
The in-plane hoppings are basically symmetry related: t^{↑↑} = t/4, t^{↓↓} = 3t/4,
t^{↑↓} = t^{↓↑} = ±√3 t/4, where the signs
depend on hopping along the x or y direction. Ez stands
for the oxygen octahedral-tilting induced crystal field. Ueff
and V are effective on-site and nearest-neighbor e-e in-
teractions, respectively. g is the e-l coupling constant.
Qi = (Q1i, Q2i, Q3i) is the standard octahedral-distortion
vector, where Q1i is the breathing mode (BM), and Q2i
and Q3i are the Jahn-Teller (JT) modes [18,19,28,29,36].
In Eq. (1) the electron-lattice couplings have been con-
strained to be invariant under the transformation of the
cubic group [36].
The effective Hamiltonian is determined by matching its self-consistent Hartree-Fock (HF) expression with
Table 1: Contributions of different terms to the energy gain (in units of meV per Mn) due to the CO
formation in self-consistent mean-field theory. BM (JT) denotes the contribution from electronic coupling
to the breathing (Jahn-Teller) mode. K denotes the contribution from the elastic energy.

Qi        | Total | Ueff | V   | t  | BM  | JT   | K
0         | -13   | -11  | -15 | 12 | 0   | 0    | 0
realistic | -127  | -22  | -42 | 71 | -42 | -113 | 24
HLDA+U [21, 34] owing to the analytical structure of the
LDA+U approximation [30]. An excellent mapping re-
sults from t = 0.6 eV, Ez = −0.08 eV, Ueff = 1.65 eV,
V = 0.44 eV, and g = 2.35 eV/Å. These numbers are close to those obtained for undoped LaMnO3 [21]
(excluding V, which is inert in the undoped case), and indicate that the spin-majority eg electrons in
the manganites are still in the intermediate e-e interaction regime, with comparable e-l interaction.
Note that Ueff should be understood
as an effective repulsion of corresponding Mn-centered WS
playing the role of “d” states, rather than the “bare” d-d
interaction U = 8 eV [21]. Furthermore, the rigidity of the
low-energy parameters upon significant (x = 0.5) doping
verifies the validity of using a single set of parameters for
a wide range of doping levels, a common practice that is
not a priori justified for low-energy effective Hamiltoni-
ans. Clearly, the observed optical gap energy scale of ∼ 2
eV originates mainly from Ueff instead of the JT splitting
widely assumed in existing theories [5, 20].
Now based on the AWSs, CO is measured by ∆n =
〈ni∈B〉 − 〈nj∈C〉 = 0.6. It deviates from unity because the
kinetic energy ensures that the ground state is a hybrid of
both B- and C-site AWSs, like in the usual tight-binding
modeling based on atomic orbitals. However, since the
AWSs considerably extend to neighboring oxygen atoms,
the actual CD is much smaller than ∆n.
With the successful derivation of Heff , the microscopic
mechanisms of the charge-orbital orders emerge. First of
all, note that the kinetic term alone is able to produce the
orbital ordered insulating phase [18,29]: Since the intersite
interorbital hoppings of the Mn eg electrons along the x
and y directions have opposite signs, the occupied bonding
state is gapped from the unoccupied nonbonding and an-
tibonding states (by t and 2t, respectively) in the enlarged
unit cell. As for orbital ordering, the Mn dy2−z2 (dx2−z2)
orbital on any B-site bridging two C-sites along the x (y) direction is irrelevant, as the hopping
integrals involving it vanish. That is, only the d3x2−r2 (d3y2−r2) orbital on that B-site is active,
and the B-sites on the zigzag FM chains have to form an "ordered" pattern of alternating
(3x2 − r2)/(3y2 − r2) orbitals. However, the kinetic term alone gives only ∆n = 0. Clearly, CO is
induced by the
interactions, Ueff , V , or g.
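The kinetic-only spectrum quoted above (occupied bonding bands separated from the nonbonding and antibonding states by t and 2t) can be checked numerically. The sketch below diagonalizes the Bloch Hamiltonian of a single zigzag chain, with two eg orbitals per C-site and one active directional orbital per B-site; the orbital sign conventions are a choice and drop out of the spectrum.

```python
import numpy as np

t = 1.0  # eg hopping scale

# Directional eg orbitals in the (|3z2-r2>, |x2-y2>) basis; overall signs
# are convention-dependent and do not affect the eigenvalues.
psi_x = np.array([-0.5,  np.sqrt(3) / 2])   # |3x2-r2>
psi_y = np.array([-0.5, -np.sqrt(3) / 2])   # |3y2-r2>

def bands(k):
    """Bloch bands of one zigzag CE chain.

    Unit cell C0, B1 (x-bridge), C2, B3 (y-bridge); basis ordering
    [B1, B3, C0a, C0b, C2a, C2b]. Hoppings connect B and C sites only,
    so the spectrum is particle-hole symmetric with exact zero modes."""
    H = np.zeros((6, 6), complex)
    H[0, 2:4] = t * psi_x                    # B1 - C0 (x bond)
    H[0, 4:6] = t * psi_x                    # B1 - C2 (x bond)
    H[1, 4:6] = t * psi_y                    # B3 - C2 (y bond)
    H[1, 2:4] = t * psi_y * np.exp(1j * k)   # B3 - C0 of the next cell
    H += H.conj().T
    return np.linalg.eigvalsh(H)             # sorted ascending

E = np.array([bands(k) for k in np.linspace(-np.pi, np.pi, 401)])

top_bonding = E[:, 1].max()   # top of the two occupied bonding bands
flat = max(np.abs(E[:, 2]).max(), np.abs(E[:, 3]).max())  # nonbonding bands
bottom_anti = E[:, 4].min()   # bottom of the antibonding bands

print(top_bonding, flat, bottom_anti)  # ~ -t, ~0, ~ +t  =>  gaps t and 2t
```

The occupied bands top out at −t, the nonbonding bands sit exactly at zero, and the antibonding bands start at +t, reproducing the gaps of t and 2t stated in the text.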
To quantify their relative importance for CO, we calcu-
late their individual contributions to the total energy gain
with respect to the aforementioned ∆n = 0 but orbital-
ordered insulating state in the self-consistent mean-field
theory. The results are listed in Table 1. The first row
obtained without lattice distortions provides a measure of
the purely electronic mechanisms for CO. Interestingly, al-
though the tendency is weak (−13 meV), e-e interactions
all together are sufficient to induce a CO, consistent with
the results of our first-principles calculations. The second
row is obtained for the realistic lattice distortions, which
shows a dramatic enhancement of CO by the JT distor-
tions (−113 meV), given that only half of the Mn atoms
are JT active. Together with a −42 meV gain from the
BM distortion, the e-l interactions overwhelm the cost of
the kinetic (71 meV) and elastic (24 meV) energy by −63
meV, further stabilizing the observed CO. Nevertheless,
the −64 meV energy gain from the overall e-e interac-
tions accounts for half of the total energy gain, illustrating
clearly their importance to the realization of the resulting
∆n = 0.6. Indeed, a further analysis reveals that e-l cou-
plings alone (Ueff = V = 0) produce ∆n ∼ 0.3, only half
of the realistic ∆n, manifesting the necessity of including
e-e interactions.
Further insights can be obtained by considering the in-
dividual microscopic roles of these interactions acting on
the kinetic-only starting point. First, infinitesimal Ueff or
g can induce CO, as a result of exploiting the fact that
the B- (C-) site has one (two) active eg orbital: (i) Ueff
has no effect on the B-sites; therefore, Ueff pushes the eg
electrons to the B-sites, in order to lower the Coulomb
energy on the C-sites [18]. This is opposite to its normal
behavior of favoring charge homogeneity in systems such
as straight FM chains (realized in the C-type antiferro-
magnet). (ii) It is favorable to cooperatively induce the
(3x2 − r2)/(3y2 − r2)-type JT distortions on the B-sites
and the BM distortions on the C-sites in order to mini-
mize elastic energy [2]. These lattice distortions lower the
relative potential energy of the active orbitals in the B-
sites, also driving the eg electrons there. Hence, Ueff and
e-l interactions work cooperatively in the CO formation.
Unexpectedly, we find that V alone must be larger than
Vc ≃ 0.8 eV to induce CO. This is surprising in compar-
ison to the well-known Vc = 0 for straight FM chains.
The existence of Vc is in fact a general phenomenon in
a “pre-gapped” system. Generally speaking, in a sys-
tem with a charge gap, ∆0, before CO takes place (e.g.
the zigzag chain discussed here), forming CO always costs
non-negligible kinetic energy due to the mixing of states
across the gap. As a consequence, unlike the first order
energy gain from Ueff and g, the second order energy gain
from V is insufficient to overcome this cost until V is large
enough (of order ∆0). In this specific case, V = 0.44 eV is
insufficient to induce CO by itself, but it does contribute
significantly to the total energy gain once CO is triggered
by Ueff or g, as discussed above.
It is worth mentioning that the contribution from the
EzTz term is neglected from Table 1 because of the small
coefficient of Ez ≃ 8 meV, consistent with the previous
study for undoped LaMnO3 [21]. In perovskites, the tilt-
ing of the oxygen octahedra could yield the Jahn-Teller-
like distortion of GdFeO3 type, which can be mathemat-
ically described by the EzTz or ExTx term. However, in
the perovskite manganites these terms are shown here and
in Ref. [21] to be negligibly small. In addition, in half-
doped manganites, the pattern of the orbital order is pre-
dominantly pinned by the zigzag pathway of the itinerant
electrons and thus the effect of tilting is less relevant. On
the other hand, the EzTz term is very effective in explaining
the zigzag orbital ordering of the (x2 − z2)/(y2 − z2) pattern in
half-doped layered manganites, such as La0.5Sr1.5MnO4
[35], where pseudo-spin-up (the 3z2 − r2 orbital) is
favored by a much stronger Ez due to the elongation of
the oxygen octahedra along the c-axis. Described via
the pseudo-spin angle, θ = arctan(Tx/Tz) [21], Ez signif-
icantly reduces θ from ±120◦ [i.e., (3x2 − r2)/(3y2 − r2)]
to ±30◦ [i.e., (x2 − z2)/(y2 − z2)].
The present results would impose stringent constraints
on the general understanding of the manganites. For ex-
ample, Zener polarons were shown to coexist with the CE
phase within a purely electronic modeling of near half-
doped manganites [31]. However, to predict the realistic
phase diagram of the manganites, one must take into ac-
count the lattice degree of freedom. In fact, when propos-
ing the CE phase, Goodenough considered its advantage
of minimizing the strain energy cost. Note that the pre-
vious HF theory indicated a decrease of the total energy
by 0.5 eV per unit cell with Zener-polaron-like displace-
ment [15]. The present LDA+U calculations reveal an
increase of the total energy by 1.07 eV per unit cell with
the same Zener-polaron-like displacement. This discrep-
ancy is quite understandable from the characteristics of
the LDA functional, which favors covalent bonding, while the
HF approximation tends to over-localize the orbitals and
disfavor bonding. To resolve the competition between the
CE phase and the Zener polaron phase, real structural
optimization is necessary and will be presented elsewhere.
As another example, in the absence of e-l interactions the
holes were predicted to localize in the B-site-type region
(i.e., the straight segment portion of the zigzag FM chain)
in the CxE1−x phase of doped E-type manganites [32].
However, since the B-sites are susceptible to the JT dis-
tortion, they are more likely to favor electron localization
instead; future experimental verification is desirable.
Summary. – A general first-principles Wannier func-
tion based method and the resulting valence picture of
doped holes in strongly correlated charge-transfer systems
are presented. Application to the charge order in half-
doped manganites reconciles the current fundamental con-
tradictions between the traditional 3+/4+ valence picture
and the recently observed small charge disproportion. In
essence, while the doped holes primarily reside in the oxy-
gen atoms, the local orbitals are entirely coherent, following
the symmetry of the Mn eg orbitals, giving rise to an effective
valence picture with weak CD. Furthermore, our first-
principles derivation of realistic low-energy Hamiltonian
reveals a surprisingly important role of electron-electron
interactions in ordering charges, contrary to current lore.
Our theoretical method and the resulting flexible valence
picture can be applied to a wide range of doped charge-
transfer insulators for realistic investigations and interpre-
tations of the rich properties of the doped holes.
∗ ∗ ∗
We thank E. Dagotto for stimulating discussions and V.
Ferrari and P. B. Littlewood for clarifying their Hartree-
Fock results [15]. The work was supported by U.S. Depart-
ment of Energy under Contract No. DE-AC02-98CH10886
and DOE-CMSN.
REFERENCES
[1] Dagotto E., Nanoscale Phase Separation and Colossal Mag-
netoresistance (Springer-Verlag, Berlin) 2002.
[2] Goodenough J., Phys. Rev., 100 (1955) 564.
[3] Akahoshi D. et al., Phys. Rev. Lett., 90 (2003) 177203;
Mathieu R. et al., ibid., 93 (2004) 227202; Takeshita N. et
al., Phys. Rev. B, 69 (2004) 180405(R); Alvarez G. et al.,
ibid., 73 (2006) 224426.
[4] Moreo A. et al., Science, 283 (1999) 2034.
[5] Sen C. et al., Phys. Rev. Lett., 98 (2007) 127202.
[6] Coey M., Nature, 430 (2004) 155.
[7] Tyson T. A. et al., Phys. Rev. B, 60 (1999) 4665.
[8] García J. et al., J. Phys.: Condens. Matter, 13 (2001) 3229.
[9] Daoud-Aladine A. et al., Phys. Rev. Lett., 89 (2002)
097205.
[10] Thomas K. J. et al., Phys. Rev. Lett., 92 (2004) 237204.
[11] Grenier S. et al., Phys. Rev. B, 69 (2004) 134419.
[12] Herrero-Martín J. et al., Phys. Rev. B, 70 (2004) 024408.
[13] Goff R. J. et al., Phys. Rev. B, 70 (2004) 140404.
[14] Anisimov V. I. et al., Phys. Rev. B, 55 (1997) 15494.
[15] Ferrari V. et al., Phys. Rev. Lett., 91 (2003) 277202.
[16] Trokiner A. et al., Phys. Rev. B, 74 (2006) 092403.
[17] Patterson C. H., Phys. Rev. B, 72 (2005) 085125.
[18] van den Brink J. et al., Phys. Rev. Lett., 83 (1999) 5118.
[19] Popović Z. and Satpathy S., Phys. Rev. Lett., 88 (2002)
197201.
[20] Millis A. J., Nature, 392 (1998) 147.
[21] Yin W.-G. et al., Phys. Rev. Lett., 96 (2006) 116405.
[22] Ku W. et al., Phys. Rev. Lett., 89 (2002) 167204.
[23] Zhang F. C. and Rice T. M., Phys. Rev. B, 37 (1988)
3759.
[24] The WIEN2k [Blaha P. et al., Comput. Phys. Commun.,
147 (2002) 71] implementation of the full potential LAPW
method is employed.
[25] Radaelli P. G. et al., Phys. Rev. B, 55 (1997) 3015.
[26] Herrero-Martín J. et al., Phys. Rev. B, 72 (2005) 085106.
[27] For explicit inclusion of the oxygen orbitals in a more com-
plete model, see Mostovoy M. V. and Khomskii D. I., Phys.
Rev. Lett., 92 (2004) 167201.
[28] Hotta T. et al., Phys. Rev. B, 62 (2000) 9432.
[29] Solovyev I. V. and Terakura K., Phys. Rev. Lett., 83
(1999) 2825.
[30] Unlike the local interactions, the intersite interactions in
LDA+U is treated via LDA functional, which has dominant
Hartree contribution that we used for this mapping.
[31] Efremov D. V. et al., Nature Materials, 3 (2004) 853.
[32] Hotta T. et al., Phys. Rev. Lett., 90 (2003) 247203.
[33] Jooss Ch. et al., PNAS, 104 (2007) 13597.
[34] Yin W.-G. and Ku W., Phys. Rev. B, 79 (2009) 214512.
[35] Huang D. J. et al., Phys. Rev. Lett., 92 (2004) 087202.
[36] Allen P. B. and Perebeinos V., Phys. Rev. B, 60 (1999) 10747.
0704.1835 | Direct extraction of one-loop integral coefficients | arXiv:0704.1835v1 [hep-ph] 15 Apr 2007
SLAC–PUB–12455 UCLA/07/TEP/12
Direct extraction of one-loop integral coefficients ∗
Darren Forde
Stanford Linear Accelerator Center
Stanford University
Stanford, CA 94309, USA,
Department of Physics and Astronomy, UCLA
Los Angeles, CA 90095–1547, USA.
(Dated: 12th April 2007)
Abstract
We present a general procedure for obtaining the coefficients of the scalar bubble and triangle
integral functions of one-loop amplitudes. Coefficients are extracted by considering two-particle and
triple unitarity cuts of the corresponding bubble and triangle integral functions. After choosing
a specific parameterisation of the cut loop momentum we can uniquely identify the coefficients
of the desired integral functions simply by examining the behaviour of the cut integrand as the
unconstrained parameters of the cut loop momentum approach infinity. In this way we can produce
compact forms for scalar integral coefficients. Applications of this method are presented for both
QCD and electroweak processes, including an alternative form for the recently computed three-mass triangle coefficient in the six-photon amplitude A_6(1^−, 2^+, 3^−, 4^+, 5^−, 6^+). The direct nature
of this extraction procedure allows for a very straightforward automation of the procedure.
PACS numbers: 11.15.Bt, 11.25.Db, 12.15.Lk, 12.38.Bx
∗ Research supported in part by the US Department of Energy under contracts DE–FG03–91ER40662 and
DE–AC02–76SF00515.
I. INTRODUCTION
Maximising the discovery potential of future colliders such as CERN’s Large Hadron
Collider (LHC) will rely upon a detailed understanding of Standard Model processes. Dis-
tinguishing signals of new physics from background processes requires precise theoretical
calculations. These background processes need to be known to at least a next-to-leading
order (NLO) level. This in turn entails the need for computation of one-loop amplitudes.
Whilst much progress has been made in calculating such processes, the feasibility of produc-
ing these needed higher multiplicity amplitudes, such as one-loop processes with one or more
vector bosons (W’s, Z’s and photons) along with multiple jets, strains standard Feynman
diagram techniques.
Direct calculations using Feynman diagrams are generally inefficient; the large number
of terms and diagrams involved has by necessity demanded (semi)numerical approaches be
taken when dealing with higher multiplicity amplitudes. Much progress has been made in
this way, numerical evaluations of processes with up to six partons have been performed [1,
2, 3, 4, 5]. On assembling complete amplitudes from Feynman diagrams it is commonly
found that large cancellations take place between the various terms. The remaining result is
then far more compact than would naively be expected from the complexity of the original
Feynman diagrams. The greater simplicity of these final forms has spurred the development
of alternative more direct and efficient techniques for calculating these processes.
The elegant and efficient approach of recursion relations has long been a staple part of the
tree level calculational approach [6, 7]. Recent progress, inspired by developments in twistor
string theory [8, 9], builds upon the idea of recursion relations, but centred around the use of
gauge-independent or on-shell intermediate quantities and hence negating a potential source
of large cancellations between terms. Britto, Cachazo and Feng [10] initially wrote down a set
of tree level recursion relations utilising on-shell amplitudes with complex values of external
momenta. Then, along with Witten [11], they proved these on-shell recursion relations using
just a knowledge of the factorisation properties of the amplitudes and Cauchy’s theorem.
The generality of the proof has led to their application in many diverse areas beyond that
of massless gluons and fermions in gauge theory [10, 13]. There have been extensions to
theories with massive scalars and fermions [14, 15, 16] as well as amplitudes in gravity [12].
Similarly “on-shell” approaches can also be constructed at loop level. The unitarity
of the perturbative S-matrix can be used to produce compact analytical results by “glu-
ing” together on-shell tree amplitudes to form the desired loop amplitude. This unitarity
approach has been developed into a practical technique for the construction of loop ampli-
tudes [17, 18, 19], initially, for computational reasons, for the construction of amplitudes
where the loop momentum was kept in D = 4 dimensions. This limited its applicability to
computations of the “cut-constructible” parts of an amplitude only, i.e. (poly)logarithmic
containing terms and any associated π2 constants. Amplitudes consisting of only such terms,
such as supersymmetric amplitudes, can therefore be completely constructed in this way.
QCD amplitudes contain in addition rational pieces which cannot be derived using such cuts.
The “missing” rational parts are constructible directly from the unitarity approach only by
taking the cut loop momentum to be in D = 4 − 2ǫ dimensions [20]. The greater difficulty
of such calculations has, with only a few exceptions [21, 22], restricted the application of
this approach, although recent developments [23, 24, 25] have provided new promise for this
direction.
The generality of the foundation of on-shell recursion relation techniques does not limit
their applicability to tree level processes only. The “missing” rational pieces at one-loop, in
QCD and other similar theories, can be constructed in an analogous way to (rational) tree
level amplitudes [26, 27]. The “unitarity on-shell bootstrap” technique combines unitarity
with on-shell recursion, and provides, in an efficient manner, the complete one-loop ampli-
tude. This approach has been used to produce various new analytic results for amplitudes
containing both fixed numbers as well as arbitrary numbers of external legs [28, 29, 30].
Other newly developed alternative methods have also proved fruitful for calculating rational
terms [31, 32, 33, 34]. In combination with the required cut-containing terms [35, 36, 37]
these new results for the rational loop contributions combine to give the complete analytic
form for the one-loop QCD six-gluon amplitude.
The development of efficient techniques for calculating, what were previously difficult
to derive rational terms, has emphasised the need to optimise the derivation of the cut-
constructible pieces of the amplitude. One-loop amplitudes can be decomposed entirely in
terms of a basis of scalar bubble, scalar triangle and scalar box integral functions. Deriving
cut-constructible terms therefore reduces to the problem of finding the coefficients of these
basis integrals. For the coefficients of scalar box integrals it was shown in [38] that a
combination of generalised unitarity [19, 39, 40, 41], quadruple cuts in this case, along with
the use of complex momenta could be used, within a purely algebraic approach, to extract
the desired coefficient from the cut integrand of the associated box topology.
Extracting triangle and bubble coefficients presents more of a problem. Unlike for the case
of box coefficients, cutting all the propagators associated with the desired integral topology
does not uniquely isolate a single integral coefficient. Inside a particular two-particle or triple
cut lie multiple scalar integral coefficients corresponding to integrals with topologies sharing
not only the same cuts but also additional propagators. These coefficients must therefore
be disentangled in some way. There are multiple directions within the literature which have
been taken to effect this separation. The pioneering work by Bern, Dixon, Dunbar and
Kosower related unitarity cuts to Feynman diagrams and thence to the scalar integral basis,
this then allowed for the derivation of many important results [17, 18, 19]. More recently the
technique of Britto et al. [23, 24, 25, 35, 36] for two-particle cuts, and its extension to
triple cuts by Mastrolia [42], has highlighted the benefits of working in a spinor formalism, where
the cut integrals can be integrated directly. Important results obtained in this way include
the most difficult of the cut-constructable pieces for the one-loop amplitude for six gluons
with the helicity configurations A6(+−+−+−) and A6(−+−−++). The cut-constructible
parts of Maximum-Helicity-Violating (MHV) one-loop amplitudes were found by joining
MHV amplitudes together in a manner similar to that used at tree level [43]. This method has been
applied by Bedford, Brandhuber, Spence and Travaglini to produce new QCD results [37].
In the approach of Ossola, Papadopoulos and Pittau [44, 45] it is possible to avoid the need
to perform any integration or use any integral reduction techniques. Coefficients are instead
extracted by solving sets of equations. The solutions of these equations include the desired
coefficients, along with additional “spurious” terms corresponding to coefficients of terms
which vanish after integrating over the loop momenta.
The many-fold different processes and their differing parton contents that will be needed
at current and future collider experiments suggests that some form of automation, even of
the more efficient “on-shell” techniques, will be required. From an efficiency standpoint,
therefore, we would ideally wish to minimise the degree of calculation required for each step
of any such process. Here we propose a new method for the extraction of scalar integral
coefficients which aims to meet this goal. The technique follows in the spirit of the simplicity
of the derivation of scalar box coefficients given in ref. [38]. Desired coefficients can be
constructed directly using two-particle or triple cuts. The complete one-loop amplitude
can then be obtained by summing over all such cuts and adding any box terms and rational
pieces. Alternatively our technique can be used to extract the bubble and triangle coefficients
from a one-loop amplitude, generated for example from a Feynman diagram. Hence the
technique is acting as an efficient way to perform the integration.
We use unitarity cuts to freeze some of the degrees of freedom of the integral loop mo-
mentum, whilst leaving others unconstrained. This then isolates a specific single bubble
or triangle integral topology and hence its coefficient. Within each cut there remain ad-
ditional coefficients. In the triangle case those of scalar box integrals. In the bubble case
both scalar box and scalar triangle integrals contribute. Disentangling our desired coefficient
from these extra contributions is a straightforward two step procedure. First one rewrites
the loop momentum inside the cut integrand in terms of its unconstrained parameters. In
the triangle case there is a single parameter, and in the bubble case there are a pair of
parameters. Examining the behaviour of the integrand as these unconstrained parameters
approach infinity then allows for a straightforward separation of the desired coefficient from
any extra contributions. The coefficient of each basis integral function can therefore be
extracted individually in an efficient manner with no further computation.
This paper is organised as follows. In section II we outline the notation used throughout
this paper. In section III we proceed to present the basic structure of a one-loop amplitude
in terms of a basis of scalar integral functions. We describe in section IV our procedure
for extracting the coefficients of scalar triangle coefficients through the use of a particular
loop-momentum parameterisation for the triple cuts along with the properties of the cut
as the single free integral parameter tends to infinity. Section V extends this formalism
to include the extraction of scalar bubble coefficients. The two-particle cut used in this
case contains an additional free parameter and requires an additional step in our procedure.
Finally in section VI we conclude by providing some applications which act as checks of
our method. Initially we examine the extraction of various basis integral coefficients from
some common one-loop integral functions. We then turn our attention to the construction
of the coefficients of some more phenomenologically interesting processes. These include the
three-mass triangle coefficient for the six photon amplitude A6(− + − + −+), as well as
a representative three-mass triangle coefficient of the process e+e− → q+q−g−g+. Finally
we construct the complete cut-containing part of the amplitude A_5^{1−loop}(1^−, 2^−, 3^+, 4^+, 5^+)
and discuss further comparisons against coefficients of more complicated gluon amplitudes
contained in the literature.
II. NOTATION
In this section we summarise the notation used in the remainder of the paper. We will
use the spinor helicity formalism [47, 48], in which the amplitudes are expressed in terms of
spinor inner-products,
〈j l〉 = 〈j^−|l^+〉 = ū_−(k_j) u_+(k_l) , [j l] = 〈j^+|l^−〉 = ū_+(k_j) u_−(k_l) , (2.1)
where u±(k) is a massless Weyl spinor with momentum k and positive or negative chirality.
The notation used here follows the QCD literature, with [i j] = sign(k_i^0 k_j^0) 〈j i〉* for real
momenta, so that
〈i j〉[j i] = 2 k_i · k_j = s_{ij} . (2.2)
Our convention is that all legs are outgoing. We also define,
λi ≡ u+(ki), λ̃i ≡ u−(ki) . (2.3)
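As a quick numerical check of these conventions, the sketch below builds angle and square spinors for real massless momenta using one standard explicit construction (an assumption made here for illustration — it requires k^0 + k^3 > 0 and is not fixed by the text) and verifies 〈i j〉[j i] = 2 k_i · k_j of eq. (2.2).

```python
import cmath

def angle_spinor(k):
    """lambda_i for massless k = (k0, k1, k2, k3); assumes k0 + k3 > 0."""
    sq = cmath.sqrt(k[0] + k[3])
    return (sq, (k[1] + 1j * k[2]) / sq)

def ang(ki, kj):
    """Spinor product <i j>."""
    a, b = angle_spinor(ki), angle_spinor(kj)
    return a[0] * b[1] - a[1] * b[0]

def sqr(ki, kj):
    """Spinor product [i j] = <j i>* for real positive-energy momenta."""
    return ang(kj, ki).conjugate()

def dot(p, q):
    """Minkowski product with mostly-minus metric."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

k1 = (5.0, 3.0, 4.0, 0.0)     # massless: 5^2 = 3^2 + 4^2
k2 = (13.0, 12.0, 3.0, 4.0)   # massless: 13^2 = 12^2 + 3^2 + 4^2
s12 = ang(k1, k2) * sqr(k2, k1)           # <1 2>[2 1]
assert abs(s12 - 2 * dot(k1, k2)) < 1e-9  # eq. (2.2)
```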
We denote the sums of cyclicly-consecutive external momenta by
K_{i...j}^µ ≡ k_i^µ + k_{i+1}^µ + · · · + k_{j−1}^µ + k_j^µ , (2.4)
where all indices are mod n for an n-gluon amplitude. The invariant mass of this vector is
s_{i...j} ≡ K_{i...j}^2 . (2.5)
Special cases include the two- and three-particle invariant masses, which are denoted by
s_{ij} ≡ K_{ij}^2 ≡ (k_i + k_j)^2 = 2 k_i · k_j , s_{ijk} ≡ (k_i + k_j + k_k)^2 . (2.6)
We also define spinor strings,
〈i^−| (/a ± /b) |j^−〉 = 〈i a〉[a j] ± 〈i b〉[b j] ,
〈i^+| (/a + /b)(/c + /d) |j^+〉 = [i a] 〈a^−| (/c + /d) |j^+〉 + [i b] 〈b^−| (/c + /d) |j^+〉 . (2.7)
III. UNITARITY CUTTING TECHNIQUES AND THE ONE-LOOP INTEGRAL
BASIS
Our starting point will be the general dimensionally-regularised decomposition of a one-loop amplitude into a basis of scalar integral functions [18, 53],
A_n^{1−loop} = R_n + r_Γ (µ^2)^ǫ/(4π)^{2−ǫ} [ Σ_i b_i B_0(K_i^2) + Σ_{ij} c_{ij} C_0(K_i^2, K_j^2) + Σ_{ijk} d_{ijk} D_0(K_i^2, K_j^2, K_k^2) ] . (3.1)
The scalar bubble, triangle and box integral functions are denoted by B0, C0 and D0 respec-
tively, and along with rΓ their explicit forms can be found in Appendix C. The bi, cij and
dijk are their corresponding rational coefficients. Any ǫ dependence within these coefficients
has been removed and placed into the rational, Rn, term. The problem of deriving the
one-loop amplitude is therefore reduced to that of finding the coefficients of these scalar
integral functions and any rational terms when working in D = 4 dimensions.
We are going to consider obtaining these coefficients via the application of various cuts
within the framework of generalised unitarity [19, 39, 40, 41]. In general our cut momenta
will be complex, so for our purposes we define a “cut” as the replacement
i/(l + K_i)^2 → (2π) δ((l + K_i)^2) . (3.2)
By systematically constructing all possible unitarity cuts we can reproduce every integral
coefficient of a particular amplitude. Alternatively, application of the same procedure of
“cutting” legs can be used to extract from a one-loop integral the corresponding coefficients
of the standard basis integrals making up that particular integral, in a sense acting as a form
of specialised integral reduction. This approach follows in a similar vein to that adopted by
Ossola, Papadopoulos and Pittau [44].
The most straightforward implementation of the technique we present here is when the cut
loop momentum is massless and kept in D = 4 dimensions. Eq. 3.1 therefore contains, within
the term Rn, any rational terms missed by performing cuts in only D = 4. Approaches for
deriving such terms independently of unitarity cuts exist and so we do not concern ourselves
with these here [23, 24, 26, 27, 29, 30, 31, 32, 33, 34, 44, 45].
As was demonstrated in [38], the application of a quadruple cut, as shown in figure 1, to
A_n^{1−loop} uniquely identifies a particular box integral topology D_0(K_i^2, K_j^2, K_k^2) and hence its coefficient.

FIG. 1: A generic quadruple cut used to isolate the scalar box integral D_0(K_i^2, K_j^2, K_k^2).

This coefficient is then given by
d_{ijk} = (1/2) Σ_{a=1,2} A_1(l_{ijk;a}) A_2(l_{ijk;a}) A_3(l_{ijk;a}) A_4(l_{ijk;a}) , (3.3)
where l_{ijk;a} is the a-th solution of the cut loop momentum l that isolates the scalar box
function D_0(K_i^2, K_j^2, K_k^2); there are 2 such solutions. Eq. 3.3 applies as well to the cases
when one or more of the four legs of the box is massless. This is a result of the existence, for
complex momenta, of a well-defined three-point tree amplitude corresponding to any corner
of a box containing a massless leg.
Applying a triple cut to the amplitude A_n^{1−loop} does not isolate a single basis integral.
Instead we have a triangle integral plus a sum of box integrals obtained by “opening” a
fourth propagator. This can be represented schematically via
r_Γ (µ^2)^ǫ/(4π)^{2−ǫ} [ c_{ij} C_0(K_i^2, K_j^2) + Σ_k d_{ijk} D_0(K_i^2, K_j^2, K_k^2) + . . . ] , (3.4)
where the additional terms correspond to “opening” the Ki leg or the Kj leg instead of the
−(Ki + Kj) leg. Similarly in the case of a two-particle cut we again cannot isolate a single
basis integral by itself. Instead we get additional triangle and box integrals corresponding
to “opening” third and fourth propagators. Schematically this is given by
r_Γ (µ^2)^ǫ/(4π)^{2−ǫ} [ b_i B_0(K_i^2) + Σ_j c_{ij} C_0(K_i^2, K_j^2) + Σ_{jk} d_{ijk} D_0(K_i^2, K_j^2, K_k^2) + . . . ] , (3.5)
where again the additional terms are boxes with the Ki leg or the Kj legs “opened”. Whilst
not isolating a single integral each of the above cuts does single out either one scalar triangle,
in the triple cut case, or one scalar bubble, in the two-particle cut case. Disentangling
these single bubble or triangle integral functions from the contributions of the remaining
basis integrals will allow us to directly read off the corresponding coefficient. Applying all
possible two-particle, triple and quadruple cuts then enables us to derive the coefficients of
every basis integral function.
IV. TRIPLE CUTS AND SCALAR TRIANGLE COEFFICIENTS
A triple cut contains not only contributions for the corresponding scalar triangle integral,
but also contributions from scalar box integrals which share the same three cuts as the
triangle. Of the four propagators of a scalar box integral, three will be given by the three
cut legs of the triple cut loop integral. The fourth propagator will be contained inside the cut
integrand in a denominator factor of the form (l − P )2, which corresponds to a propagator
pole. Ideally we want to separate terms containing such poles from the remainder of the
cut integrand. The remaining term will be the scalar triangle integral multiplied by its
coefficient for that particular cut.
The three delta functions of a triple cut constrain the cut loop momentum such that
only a single free parameter of the integral remains, which we label t. We can express the
loop momentum in terms of this parameter using the orthogonal null four-vectors, a_i^µ, with
i = 0, 1, 2; specific forms for these basis vectors are presented in section IVA. The loop
momentum is then given by
l^µ = a_0^µ t + a_1^µ + a_2^µ/t . (4.1)
Denominator factors of the cut integrand depending upon the cut loop momentum, can be
written as propagators of the general form, (l − P )2. When these propagators go on-shell
they will correspond to poles in t. These poles will be solutions of the following equation
(l − P)^2 = 0 ⇒ 2(a_0 · P) t + 2(a_1 · P) + 2(a_2 · P)/t − P^2 = 0 . (4.2)
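Multiplying the on-shell condition by t turns it into a quadratic, so each propagator contributes two poles t±, the structure exploited in eq. (4.7) below. A small symbolic check with stand-in numerical coefficients (the values below are arbitrary illustrations, not derived from any amplitude):

```python
import sympy as sp

t = sp.symbols('t')
# Arbitrary stand-ins for 2(a0.P), 2(a1.P), 2(a2.P) and P^2.
c0, c1, c2, P2 = 6, 2, 4, 5
onshell = c0*t + c1 + c2/t - P2       # schematic on-shell condition in t
quadratic = sp.expand(onshell * t)    # 6*t**2 - 3*t + 4
tp, tm = sp.solve(quadratic, t)       # the two (here complex) poles t_+-
# The cut propagator factorises over the two poles, as used in eq. (4.7):
assert sp.simplify(c0*(t - tp)*(t - tm) - quadratic) == 0
```

Note the poles are complex for these values, which is unproblematic since the cut momenta are complex in general.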
If we consider t to be a complex parameter then we can use a partial fraction decomposi-
tion in terms of t to rewrite an arbitrary triple-cut integral. For the extraction of integral
coefficients we need only work with integrals in D = 4 dimensions. We also drop an overall
denominator factor of 1/(2π)4 which multiplies all integrals. The partial fraction decompo-
sition is therefore given, in the case when we have applied a triple cut on the legs l2, (l−K1)2
and (l − K2)2, by
(2π)^3 ∫ d^4l Π_{i=0}^{2} δ(l_i^2) A_1A_2A_3 = (2π)^3 ∫ d^4l Π_{i=0}^{2} δ(l_i^2) ( [Inf_t A_1A_2A_3](t) + Σ_{poles {j}} ( Res_{t=t_j} A_1A_2A_3 ) / (t − t_j) ) , (4.3)
where li = l −Ki and l0 = l. This is a sum of all possible poles of t, labelled here as the set
{j}, contained in the cut integrand denoted by A1A2A3. Pieces of the integrand without a
pole are contained in the Inf term, originally given in [30], and defined such that
lim_{t→∞} ( [Inf_t A_1A_2A_3](t) − A_1(t)A_2(t)A_3(t) ) = 0 . (4.4)
In general [Inf_t A_1A_2A_3](t) will be some polynomial in t,
[Inf_t A_1A_2A_3](t) = Σ_{i=0}^{m} f_i t^i , (4.5)
where m is the leading degree of large t behaviour and depends upon the specific integrand
in question.
After applying the three delta functions constraints we see that taking the residue of
A1A2A3 at a particular pole, t = t0, removes any remaining dependence upon the loop
momentum. Hence we can write
δ(l2i )
Rest=t0 A1A2A3
t − t0
∼ lim
[(t − t0)A1A2A3]
δ(l2i )
t − t0
. (4.6)
Where on the right hand side of this we understand the integral, ∫ d^4l, as over the parameterised form of l in terms of t and the three other degrees of freedom. In the cut integrand
the only source of poles in t is from propagator terms of the type 1/(l − P )2. Generally
each such propagator, when on-shell, contains two poles due to the quadratic nature, in t,
of eq. (4.2). If we label these solutions t± then we can write a triple-cut scalar box in terms
of these poles as
∫ d^4l Π_i δ(l_i^2) 1/(l − P)^2 ∼ 1/(t_+ − t_−) ( ∫ d^4l Π_i δ(l_i^2) 1/(t − t_+) − ∫ d^4l Π_i δ(l_i^2) 1/(t − t_−) ) . (4.7)
From comparing this to eq. (4.6) we see that all residue terms of eq. (4.3) simply correspond
to pieces of triple-cut scalar box functions multiplied by various coefficients.
Therefore we can associate all residue terms with scalar boxes, meaning that our triple
cut amplitude can be written simply as
(2π)^3 ∫ d^4l Π_i δ(l_i^2) A_1A_2A_3 = (2π)^3 Σ_{i=0}^{m} f_i ∫ dt J_t t^i + Σ_{boxes {l}} d_l D_0^{cut,l} . (4.8)
This is a sum over the set {l} of possible cut scalar boxes, D_0^{cut}, and their associated
coefficients, d_l, along with a power series in positive powers of t. In eq. (4.8) we have
integrated over the three delta functions after performing the integral transformation from
lµ to t, the Jacobian of which, and any additional factors picked up from the integration
is contained in the factor Jt. The limit m of the summation is the maximum power of
t appearing in the integrand, which in turn is the maximum power of l appearing in the
numerator of the integrand. In general for renormalisable theories, such as QCD amplitudes,
m ≤ 3.
We must now turn our attention to the question of what the remaining terms correspond to. To do this we need to understand the behaviour of the integrals
over positive powers of t. There is a freedom in our choice of the parameterisation of the
cut-loop momentum. This freedom extends, as we will prove in section IVA, to choosing
a parametrisation where the integrals over all positive powers of t vanish. Doing this then
reduces the cut integrand to
∫ d⁴l (2π)³ ∏_i δ(l_i²) A1A2A3 = (2π)³ f_0 ∫ dt J_t + Σ_{boxes {l}} d_l D_0^{cut}. (4.9)
The remaining integral is now simply that of a triple-cut scalar triangle, multiplied by the coefficient f_0. For the triple-cut scalar triangle integral, C_0^{cut}(K_i, K_j), given by −(2π)³ ∫ dt J_t, the triple-cut form of eq. (C4), we find that its corresponding coefficient is given simply by
c_{ij} = − [Inf_t A1A2A3](t) |_{t=0}, (4.10)
which is just the first term in the series expansion in t of the cut-integrand at infinity.
The simplicity of this result relies crucially upon two facts. The first is that on the
triple cut the integral is sufficiently simple that it can be decomposed into either a triangle
contribution or a box contribution. This is important as it allows us to easily distinguish
between the two types of term. As an example consider a linear box which contains a
numerator factor constructed such that it vanishes at the pole contained in the denominator,
but without being proportional to the denominator itself. To which basis integral does such a term contribute? In the simplest case such a term would look like
∫ d⁴l ∏_i δ(l_i²) ⟨lW⟩/⟨lP⟩ = ∫ d⁴l ∏_i δ(l_i²) ⟨aW⟩(t − t_0)/(⟨aP⟩(t − t_0)) = (⟨aW⟩/⟨aP⟩) ∫ d⁴l ∏_i δ(l_i²),
and hence must contribute entirely to the triangle integral; it contains no box terms. Here we have chosen a simplified loop-momentum parameterisation in terms of two basis spinors |a⁺⟩ and |b⁺⟩ such that ⟨lP⟩ = t⟨aP⟩ + ⟨bP⟩. This then contains a pole in t at t_0 = −⟨bP⟩/⟨aP⟩, and we have chosen the spinor |W⁺⟩ such that ⟨bW⟩ = −t_0⟨aW⟩.
The second crucial fact is the vanishing of the other integrals over t so that the complete
scalar triangle integral is given by only the remaining integral over t0. Hence the coefficient
is given by a single term. Furthermore, the use of a complex loop momentum also means
that we can apply this formalism to the extraction of scalar coefficients corresponding to
one- and two-mass triangles as well as three-mass triangles. As discussed above for the case
of box coefficients, this is a result of the possibility of a well-defined three-point vertex when
using complex momentum, enabling in these cases the construction of non-vanishing cut
integrands.
A. The momentum parameterisation
We wish to compute the coefficient of the scalar triangle singled out by the triple cut
given in figure 2. The cut integral when written in terms of tree amplitudes is
FIG. 2: The triple cut used to compute the scalar triangle coefficient of C_0(K_1², K_2²).
∫ d⁴l (2π)³ ∏_i δ(l_i²) A^tree_{c_3−c_1+2}(−l, c_1, …, (c_3 − 1), l_1) A^tree_{c_2−c_3+2}(−l_1, c_3, …, (c_2 − 1), l_2) × A^tree_{n−c_2+c_1+2}(−l_2, c_2, …, (c_1 − 1), l), (4.11)
with l1 = l − K1 = l − Kc1...c3−1 and l2 = l − K2 = l + Kc2...c1−1, so that K1 = Kc1...c3−1 and
K2 = −Kc2...c1−1.
Our first step will be to find a parameterisation of l in terms of the single free integral parameter remaining after satisfying all three of the cut delta-function constraints,
l² = 0, l_1² = (l − K_1)² = 0, and l_2² = (l − K_2)² = 0. (4.12)
Each of the three legs can be massive or massless. We will deal with the general case of three
massive legs explicitly here. The cases with massless legs are then easily found by setting
the relevant mass in the parameterisation to zero. We will find it very convenient to express l^μ in terms of a basis of momenta identical to the momenta l_1 and l_2 used by Ossola, Papadopoulos and Pittau [44]. We will write these momenta in the suggestive notation K_1^♭ and K_2^♭ and define them via
K_1^μ = K_1^{♭,μ} + (S_1/γ) K_2^{♭,μ},  K_2^μ = K_2^{♭,μ} + (S_2/γ) K_1^{♭,μ}, (4.13)
with γ = ⟨K_1^{♭,−}|/K_2|K_1^{♭,−}⟩ ≡ ⟨K_2^{♭,−}|/K_1|K_2^{♭,−}⟩ and S_i = K_i². Each momentum K_1^♭, K_2^♭ is
the massless projection of one of the massive legs in the direction of the other masslessly
projected leg. A more practical definition of K_1^♭ and K_2^♭, in terms of the external momenta alone, can be found by solving the above equations for K_1^♭ and K_2^♭, so that in terms of S_1, S_2, K_1 and K_2 we have
K_1^{♭,μ} = (K_1^μ − (S_1/γ) K_2^μ)/(1 − S_1S_2/γ²),  K_2^{♭,μ} = (K_2^μ − (S_2/γ) K_1^μ)/(1 − S_1S_2/γ²). (4.14)
In addition γ can be expressed in terms of the external momenta,
γ_± = (K_1·K_2) ± √Δ,  Δ = (K_1·K_2)² − K_1²K_2². (4.15)
When using eq. (4.10) we must average over the number of solutions of γ. In the three-mass
case there are a pair of solutions. For the one- and two-mass cases, when either K_1² = 0 or K_2² = 0, there is only a single solution.
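These definitions are easy to check numerically; the sketch below (the momenta are made-up values, not taken from any amplitude) constructs K_1^♭ and K_2^♭ for both γ_± and verifies eqs. (4.13)-(4.15):

```python
import numpy as np

def mdot(a, b):
    """Minkowski dot product, signature (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

K1 = np.array([5.0, 1.0, 2.0, 3.0])    # hypothetical massive external momenta
K2 = np.array([4.0, -2.0, 1.0, 0.5])
S1, S2 = mdot(K1, K1), mdot(K2, K2)

Delta = mdot(K1, K2)**2 - S1*S2        # eq. (4.15)
for gamma in (mdot(K1, K2) + np.sqrt(Delta), mdot(K1, K2) - np.sqrt(Delta)):
    r = 1 - S1*S2/gamma**2
    K1f = (K1 - (S1/gamma)*K2)/r       # eq. (4.14)
    K2f = (K2 - (S2/gamma)*K1)/r
    assert abs(mdot(K1f, K1f)) < 1e-8  # massless projections are null
    assert abs(mdot(K2f, K2f)) < 1e-8
    assert abs(2*mdot(K1f, K2f) - gamma) < 1e-7  # gamma = 2 K1b . K2b
    # eq. (4.13): K1 = K1b + (S1/gamma) K2b and K2 = K2b + (S2/gamma) K1b
    assert np.allclose(K1, K1f + (S1/gamma)*K2f)
    assert np.allclose(K2, K2f + (S2/gamma)*K1f)
print("flat-basis checks passed for both gamma solutions")
```

Both roots γ_± give valid massless projections, which is why the three-mass triangle coefficient is averaged over the two solutions.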
After satisfying the three constraints given by eq. (4.12) we write the spinor components
of l^μ in terms of our basis K_1^♭ and K_2^♭,
⟨l^−| = t⟨K_1^{♭,−}| + α_{01}⟨K_2^{♭,−}|,  ⟨l^+| = (α_{02}/t)⟨K_1^{♭,+}| + ⟨K_2^{♭,+}|, (4.16)
where
α_{01} = S_1(γ − S_2)/(γ² − S_1S_2),  α_{02} = S_2(γ − S_1)/(γ² − S_1S_2). (4.17)
Written as a four-vector, lµ is given by
l^μ = α_{02} K_1^{♭,μ} + α_{01} K_2^{♭,μ} + (t/2)⟨K_1^{♭,−}|γ^μ|K_2^{♭,−}⟩ + (α_{01}α_{02}/(2t))⟨K_2^{♭,−}|γ^μ|K_1^{♭,−}⟩. (4.18)
We can also use momentum conservation to write component forms for the other two cut
momenta li with i = 1, 2,
⟨l_i^−| = t⟨K_1^{♭,−}| + α_{i1}⟨K_2^{♭,−}|,  ⟨l_i^+| = (α_{i2}/t)⟨K_1^{♭,+}| + ⟨K_2^{♭,+}|, (4.19)
where the αij are given in Appendix A.
A final point is that after having integrated over the three delta function constraints and
performed the change of variables to the momentum parameterisation of eq. (4.16) we have
the factor Jt = 1/(tγ) contained in eq. (4.8). We always associate this factor with the scalar
triangle integral and so its explicit form does not play a role in our formalism.
B. Vanishing integrals
As we have remarked previously, the simplicity of the method outlined here rests crucially
upon the properties of the momentum parameterisation we have used. The key feature is the
vanishing of the integrals over t. It can easily be shown that within our chosen momentum
parameterisation, of section IVA, any integral of a positive or negative power of t vanishes.
Following an argument very similar to that used by Ossola, Papadopoulos and Pittau [44]
we use ⟨K_1^{♭,±}|/K_1|K_2^{♭,±}⟩ = 0, ⟨K_1^{♭,±}|/K_2|K_2^{♭,±}⟩ = 0 and ⟨K_1^{♭,−}|γ^μ|K_2^{♭,−}⟩⟨K_1^{♭,−}|γ_μ|K_2^{♭,−}⟩ = 0, to show that
∫ d⁴l ⟨K_1^{♭,−}|/l|K_2^{♭,−}⟩^n/(l² l_1² l_2²) = 0 ⇒ ∫ dt J_t t^{−n} = 0 for n ≥ 1,
∫ d⁴l ⟨K_2^{♭,−}|/l|K_1^{♭,−}⟩^n/(l² l_1² l_2²) = 0 ⇒ ∫ dt J_t t^n = 0 for n ≥ 1. (4.20)
The vanishing of these terms then leads directly to our general procedure, encapsulated in eq. (4.10), which is to simply express the triple cut of the desired scalar triangle in the momentum parameterisation given by eq. (4.16) and then take the t⁰ component of a series expansion in t around infinity.
V. TWO-PARTICLE CUTS AND SCALAR BUBBLE COEFFICIENTS
In the same spirit as the triangle case we now wish to extract the coefficients of scalar
bubble terms using, in this case, a two-particle cut. Now a two-particle cut will contain in
addition to our desired scalar bubble both scalar boxes and triangles, all of which need to
be disentangled. What we will find, though, is that naively applying the technique as given
for the scalar triangle coefficients will not give us the complete scalar bubble contribution.
The reason for this is straightforward to see. A two-particle cut places only two constraints
on the loop momentum and so we can parameterise it in terms of two free variables, which we
will label t and y. Consider rewriting the cut integrand in a partial fraction decomposition
in terms of y. Schematically, therefore, the two-particle cut of the legs l² and (l − K_1)² can be written as
∫ d⁴l (2π)² ∏_i δ(l_i²) A1A2 = (2π)² ∫ dt dy J_{t,y} ( [Inf_y A1A2](y) + Σ_{poles {j}} (Res_{y=y_j} A1A2)/(y − y_j) ), (5.1)
where again {j} is the sum over all possible poles, this time in y, and Jt,y contains any
terms from the change into the parameterisation of y and t as well as any pieces picked up
by integrating over the two delta functions. So far this seems to be similar to the triangle
case, but with the residue terms now corresponding to triangles as well as boxes. As we
have two parameters though we can consider a further partial fraction decomposition, this
time with t, giving
∫ d⁴l (2π)² ∏_i δ(l_i²) A1A2 = (2π)² ∫ dt dy J_{t,y} ( [Inf_t [Inf_y A1A2](y)](t) + [Inf_t Σ_{poles {j}} (Res_{y=y_j} A1A2)/(y − y_j)](t) + Σ_{poles {l}} (Res_{t=t_l} [Inf_y A1A2](y))/(t − t_l) + Σ_{poles {j},{l}} (Res_{t=t_l} Res_{y=y_j} A1A2)/((t − t_l)(y − y_j)) ), (5.2)
where here {l} is the sum over all possible poles in t. The general dependence of the cut integral momentum, l^μ, on the free integral parameters t and y can be written in terms of null four-vectors a_i^μ, with i = 1, …, 4, such that l² = 0. An explicit form for these will be presented in section V A. We then define l^μ by
l^μ = a_1^μ + y a_2^μ + t a_3^μ + (y(1 − y)/t) a_4^μ. (5.3)
Again residues of pole terms will correspond to the solutions of (l − P)² = 0 and hence it is straightforward to see that the final term of eq. (5.2), containing the sum of residues in
both y and t, has both of these free parameters fixed. Any such terms must contain at least
one propagator pole. Also the numerator will be independent of any integration variables,
as both y and t are fixed. Thus all such terms will correspond to purely scalar triangle and
scalar box terms. Looking at the second and third terms of eq. (5.2) we might also, at least
initially, want to associate these terms with contributions to scalar triangle terms only and
hence naively conclude that only the first term of eq. (5.2) contributes to the scalar bubble
coefficient. This assumption though would be wrong.
The crucial difference between the single residue terms of eq. (5.2) and those of eq. (4.3)
is the parameterisation of the loop momentum which is being used. Taking the residue of
a pole term at a particular point y freezes y such that we force a particular momentum
parameterisation upon these triple-cut terms. Importantly, in general this particular forced
momentum parameterisation is such that the integrals over t in the second and third terms
of eq. (5.2) now no longer vanish.
If only scalar triangle contributions came from the integrals over t then this would not
be an issue; we could just discard these terms as not relevant for the extraction of our
bubble coefficient. What we find though, through a simple application of Passarino-Veltman
reduction techniques, is that these integrals contain scalar bubble contributions, B0, with
coefficients b,
∫ dt J′_t t^n = b B_0 + c C_0, (5.4)
where J ′t is the relevant Jacobian for this parameterisation of the loop momentum and c is
the coefficient corresponding to the scalar triangle contribution, C0. We cannot therefore
simply discard the residue pieces of eq. (5.2), as we could in the triangle case, if we want to
derive the full scalar bubble coefficient. Furthermore, there is an additional complication.
We will see that the integrals over powers of y contained in the first term of eq. (5.2) also
do not vanish in general and hence must also be taken into account.
There is a limit to the maximum positive powers of y and t that appear in the rewritten
partial-fractioned decomposition of the integral. For renormalisable theories, such as QCD,
up to three powers of t appear for triangle coefficients and up to four powers of y for bubble
coefficients. Therefore the power series in y and t of the Inf operators will always terminate
at these fixed points. It is then straightforward, as we will discuss in section VD and
section VB, to derive the general form for all possible non-vanishing contributing integrals,
over powers of y and t, in terms of their scalar bubble contributions.
Calculation of the scalar bubble coefficient therefore requires a two-stage process. First take the Inf_y and Inf_t pieces of the cut integrand and replace any integrals over y with their known general forms; as we shall see, integrals proportional to t will vanish. Secondly,
compute all possible triple cuts that could be generated by applying a third cut to the two-
particle cut we are considering. To these terms then apply, not the parameterisation we used
in section IV, but the parameterisation forced upon us by taking the residues of the poles in
y, which we will derive in section VC. This is equivalent to calculating all the contributions
from the residues of the partial fraction decomposed cut integrand of eq. (5.2). Within these
terms we then replace any integrals of powers of t with their known general forms. Finally
we sum all the contributing pieces together to get the full scalar bubble contribution and
hence its coefficient. Our final result for assembling the bubble coefficient is then given by
eq. (5.28).
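The first stage of this recipe can be mimicked on a toy integrand (a made-up rational function, not a product of tree amplitudes; the overall −i and the triple-cut term of the final formula are omitted): take Inf in y, then in t, then apply t → 0 and the y^m replacement rule.

```python
import sympy as sp

y, t = sp.symbols('y t')

def inf_part(expr, x):
    """Polynomial part of expr as x -> oo (the [Inf_x] operation)."""
    num, den = sp.fraction(sp.cancel(expr))
    quo, _ = sp.div(sp.Poly(num, x), sp.Poly(den, x))
    return quo.as_expr()

# Toy two-particle cut integrand with a single pole in y (not a real amplitude).
A = (y**2*t + y**2 + 4*y + 3)/(y - 2)

step1 = inf_part(A, y)                  # [Inf_y A](y)  ->  (t + 1)*y + 2*t + 6
step2 = sp.expand(inf_part(step1, t))   # [Inf_t [Inf_y A](y)](t)

# Replacement rules: t^n integrals vanish for n != 0 (eq. (5.12)),
# while y^m -> 1/(m+1) (eq. (5.14)).
poly = sp.Poly(step2.subs(t, 0), y)
first_term = sum(c*sp.Rational(1, m + 1)
                 for (m,), c in zip(poly.monoms(), poly.coeffs()))
print(step2, first_term)   # t*y + y + 2*t + 6, 13/2
```

In the full method this double-Inf piece is then combined with the residue (triple-cut) contributions to give the complete bubble coefficient.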
A. The momentum parameterisation for the two-particle cut
We want to extract the scalar bubble coefficient obtainable from the application of the
two-particle cut given in figure 3. This two-particle cut can be expressed in terms of tree
amplitudes as
∫ d⁴l (2π)² ∏_i δ(l_i²) A^tree_{c_2−c_1+2}(−l, (c_1 + 1), …, c_2, l_1) A^tree_{n−c_2+c_1+2}(−l_1, (c_2 + 1), …, c_1, l), (5.5)
with l_1 = l − K_{c_1+1…c_2} = l − K_1.
A bubble can be classified entirely in terms of the momentum of one of its two legs, which
we label K1, and so we will find it useful to express the cut loop momentum l in terms of
FIG. 3: The two-particle cut for computing the scalar bubble coefficient of B_0(K_1²).
the pair of massless momenta K_1^♭ and χ defined via
K_1^μ = K_1^{♭,μ} + (S_1/γ) χ^μ, (5.6)
where γ = ⟨χ^±|/K_1|χ^±⟩ ≡ ⟨χ^±|/K_1^♭|χ^±⟩. The arbitrary vector χ can be chosen independently
for each bubble coefficient as a result of the independence of the choice of basis representation
for the cut momentum. In the two-particle cut case we have only two momentum constraints
l² = 0 and l_1² = (l − K_1)² = 0, (5.7)
and so we have two free parameters which we will label y and t. The loop momentum can
then be expressed in terms of spinor components as
⟨l^−| = t⟨K_1^{♭,−}| + (S_1/γ)(1 − y)⟨χ^−|,  ⟨l^+| = (y/t)⟨K_1^{♭,+}| + ⟨χ^+|. (5.8)
Written as a four-vector l^μ is
l^μ = y K_1^{♭,μ} + (S_1/γ)(1 − y) χ^μ + (t/2)⟨K_1^{♭,−}|γ^μ|χ^−⟩ + (S_1 y(1 − y)/(2γt))⟨χ^−|γ^μ|K_1^{♭,−}⟩. (5.9)
We can also use momentum conservation to write a component form for the other cut
momentum. We have
⟨l_1^−| = ⟨K_1^{♭,−}| − (S_1 y/(γt))⟨χ^−|,  ⟨l_1^+| = (y − 1)⟨K_1^{♭,+}| + t⟨χ^+|. (5.10)
Furthermore after rewriting the integral in this cut-momentum parameterisation and
integrating over the two delta function constraints we find the following simple result for
the constant J_{t,y} contained in eq. (5.1), namely J_{t,y} = 1.
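The on-shell property of this parameterisation can be verified numerically. In the sketch below the explicit σ-matrix construction of the spinors λ and λ̃, and the sample momenta, are our own conventions and stand-in values, not taken from the paper:

```python
import numpy as np

sigma = [np.eye(2), np.array([[0., 1.], [1., 0.]]),
         np.array([[0., -1j], [1j, 0.]]), np.array([[1., 0.], [0., -1.]])]

def mdot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def spinors(p):
    """lambda, lambda-tilde of a real null p with p[0] > 0, via the rank-1
    bispinor P = p_mu sigma^mu = lam (lam*)^T (our convention)."""
    P = sum(p[mu]*sigma[mu] for mu in range(4))
    a = 0 if abs(P[0, 0]) >= abs(P[1, 1]) else 1
    lam = P[:, a]/np.sqrt(P[a, a])
    return lam, lam.conj()

def vector(lam, lamt):
    """Rebuild the (generally complex) four-vector from a spinor pair."""
    P = np.outer(lam, lamt)
    return np.array([np.trace(P @ s)/2 for s in sigma])

K1 = np.array([5.0, 1.0, 2.0, 3.0])   # massive leg (made-up values)
chi = np.array([2.0, 0.0, 0.0, 2.0])  # arbitrary massless reference vector
S1 = mdot(K1, K1)
gamma = 2*mdot(chi, K1)               # gamma = <chi-|/K1|chi->
K1f = K1 - (S1/gamma)*chi             # massless projection, eq. (5.6)

lamF, lamtF = spinors(K1f)
lamC, lamtC = spinors(chi)

for y, t in [(0.3, 1.7), (-2.1, 0.4)]:
    lam_l = t*lamF + (S1/gamma)*(1 - y)*lamC   # <l-|, eq. (5.8)
    lamt_l = (y/t)*lamtF + lamtC               # <l+|
    l = vector(lam_l, lamt_l)
    assert abs(mdot(l, l)) < 1e-9              # l^2 = 0 for any y, t
    assert abs(mdot(l - K1, l - K1)) < 1e-9    # (l - K1)^2 = 0 for any y, t
print("two-particle cut constraints hold for arbitrary y and t")
```

Note that l is a complex momentum for generic y and t; only the two cut constraints of eq. (5.7) are imposed, leaving y and t free.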
B. Non-vanishing integrals
In the case of the scalar triangles of section IVB crucial simplifications occurred as a result
of our chosen cut momentum parameterisation. Any integral over a power of t vanished,
leaving only a single contribution corresponding to the desired coefficient. For the scalar
bubble coefficient things are not quite as simple.
We can use ⟨K_1^{♭,±}|/K_1|χ^±⟩ = 0 as well as ⟨K_1^{♭,−}|γ^μ|χ^−⟩⟨K_1^{♭,−}|γ_μ|χ^−⟩ = 0 to show that
∫ d⁴l ⟨χ^−|/l|K_1^{♭,−}⟩^n/(l² l_1²) = 0 ⇒ ∫ dt dy t^n = 0,
∫ d⁴l ⟨K_1^{♭,−}|/l|χ^−⟩^n/(l² l_1²) = 0 ⇒ ∫ dt dy y^n(1 − y)^n t^{−n} = 0. (5.11)
Hence the integrals over all positive and negative powers of t vanish,
∫ dt dy t^n = 0 for n ≠ 0. (5.12)
Integrals over positive powers of y, contained within the double Inf piece of the first term of eq. (5.2), will not vanish. These integrals are straightforwardly derivable with the aid of identities involving the four-vector n^μ = K_1^{♭,μ} − (S_1/γ)χ^μ, which satisfies the constraints (K_1 · n) = 0 and n² = −S_1. It is then possible to show the following relations in D = 4 dimensions, remembering that J_{t,y} = 1,
∫ d⁴l (l·n)^{2m−1}/(l² l_1²) = 0 ⇒ ∫ dt dy (y − 1/2)^{2m−1} = 0,
∫ d⁴l (l·n)^{2m}/(l² l_1²) = S_1^{2m} B^m_{PV} ⇒ S_1^{2m} ∫ dt dy (y − 1/2)^{2m} = S_1^{2m} B̃^m_{PV},
∫ d⁴l (l·K_1)^{2m}/(l² l_1²) = (2m + 1) S_1^{2m} B^m_{PV} ⇒ (S_1/2)^{2m} ∫ dt dy = (2m + 1) S_1^{2m} B̃^m_{PV}, (5.13)
where B^m_{PV} and B̃^m_{PV} are Passarino-Veltman reduction coefficients, the explicit forms of which
are not needed. Solving these equations for the integral of y^m leads to the result
∫ dt dy y^m = (1/(m + 1)) ∫ dt dy  for m ≥ 0. (5.14)
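The moment relations of eqs. (5.13)-(5.14) are precisely those of a uniform weight for y on [0, 1] once the t-integral is factored out (this reading of the normalisation is our own assumption); under it sympy confirms all three rules:

```python
import sympy as sp

y = sp.symbols('y')

for m in range(5):
    # Odd moments about y = 1/2 vanish (first relation of eq. (5.13)).
    assert sp.integrate((y - sp.Rational(1, 2))**(2*m + 1), (y, 0, 1)) == 0
    # Even moments about y = 1/2 give 1/(4^m (2m+1)) times the basic integral.
    even = sp.integrate((y - sp.Rational(1, 2))**(2*m), (y, 0, 1))
    assert even == sp.Rational(1, 4**m*(2*m + 1))
    # And y^m integrates to 1/(m+1) of the basic integral, eq. (5.14).
    assert sp.integrate(y**m, (y, 0, 1)) == sp.Rational(1, m + 1)
print("moment identities of eqs. (5.13)-(5.14) verified")
```

This is the origin of the y^m → 1/(m+1) replacement rule used when assembling the bubble coefficient.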
Contributions to our desired scalar bubble coefficient from the double Inf piece of eq. (5.2) therefore come not only from the single constant t⁰y⁰ term but also from terms proportional to integrals of t⁰y^m. This is not the end of the story. As described above, there can be further
contributions from the second and third residue terms generated in the decomposition of
eq. (5.2). We could proceed from the cut integrand to explicitly calculate these residue
terms. However, as we shall see, a more straightforward approach is to derive these terms by relating them to triple cuts.
C. The momentum parameterisation for triple cut contributions
We wish to relate the contributions to the bubble coefficient of the residue pieces, sep-
arated in the decomposition of eq. (5.2), to triple cuts in a specific basis of the cut-loop
momentum. To find this basis we will apply the additional constraint
(l + K_2)² = 0, (5.15)
to the two-particle cut momentum of section VA. Note that here we label the “K2” leg as K2
in contrast to (−K2) as we did in the triangle coefficient case of section IVA. This constraint
corresponds to the application of an additional cut which would appear as δ((l + K_2)²) inside the integral. This additional constraint, applied to the starting point of the two-particle cut
the integral. This additional constraint, applied to the starting point of the two-particle cut
loop momentum, forces us to use K♭1 and χ as the momentum basis vectors of l. Importantly,
this differs from the basis choice for the triple cut momenta developed in section IVA, which
leads to the differing behaviour of these triple-cut contributions.
The presence of y in both ⟨l^−| and ⟨l^+| directs us, for reasons of efficiency, to use eq. (5.15) to first constrain y, leaving t free. Looking at eq. (5.9) we see that, as l^μ is quadratic in y, there are two solutions to this constraint, y_±, which are given by
y_± = (1/(2S_1⟨χ^−|/K_2|K_1^{♭,−}⟩)) [ (γ⟨K_1^{♭,−}|/K_2|K_1^{♭,−}⟩ − S_1⟨χ^−|/K_2|χ^−⟩) t + S_1⟨χ^−|/K_2|K_1^{♭,−}⟩ ± √( (S_1⟨χ^−|/K_2|K_1^{♭,−}⟩ + 2tγ(K_1·K_2))² − 4S_1S_2γt (tγ − ⟨χ^−|/K_2|K_1^{♭,−}⟩) ) ]. (5.16)
On substituting these two solutions into the two-particle cut momentum of eq. (5.8) we
obtain our desired triple-cut momentum parameterisation.
Our final step is then to relate the triple-cut integrals defined in this basis to the residue
terms of eq. (5.2). Rewriting the triple-cut integral after the change of momentum parameterisation and integrating over all but the third delta function gives the general form
(2π)³ ∫ dt dy J′_t (δ(y − y_+) + δ(y − y_−)) M(y, t), (5.17)
where M(y, t) is a general cut integrand and
J′_t = γt / √( (S_1⟨χ^−|/K_2|K_1^{♭,−}⟩ + 2tγ(K_1·K_2))² − 4S_1S_2γt (tγ − ⟨χ^−|/K_2|K_1^{♭,−}⟩) ). (5.18)
Upon examination of a general residue term we find that it corresponds to an integral of the form
(2π)² i ∫ dt dy J_{t,y} Res_{y=y_j} [ M(y, t)/(l + K_2)² ] ≡ −((2π)³/2) ∫ dt dy J′_t (δ(y − y_+) + δ(y − y_−)) M(y, t), (5.19)
and hence that residue contributions are given, up to a factor of (−1/2), by the triple cut.
This result applies equally when S_2 = K_2² = 0, corresponding to a one- or two-mass triangle, when the appropriate scale is set to zero in eq. (5.16) and eq. (5.18). The momentum
parameterisation in this simplified case is contained in Appendix B.
D. More non-vanishing integrals and bubble coefficients
There is a direct correspondence between a triple cut contribution and a residue contri-
bution. The sum of all possible triple cuts, which contain the original two-particle cut, will
therefore correspond to the sum of all residue terms. We must now examine how such terms
contribute to the bubble coefficient itself.
Unlike for the case of triple cut integrands as parameterised in section IVA we will find
that there are contributions, specifically in this case bubble coefficient contributions, coming
from the integrals over t. To see this let us investigate the integrals over t in more detail.
As an example consider extracting the scalar bubble term coming from a two-mass linear
triangle (with the massless leg K2 so that S2 = 0). We would start from a two-particle cut
which, after decomposing as eq. (5.1), would give
(2π)² ∫ d⁴l ∏_i δ(l_i²) ⟨K_2^−|/l|a^−⟩/(l + K_2)²
= (2π)² ∫ d⁴l ∏_i δ(l_i²) [la]/[lK_2]
= (2π)² ∫ dt dy ( [K_1^♭a]/[K_1^♭K_2] − t[χK_1^♭][K_2a] / ([K_1^♭K_2](y[K_1^♭K_2] + t[χK_2])) ). (5.20)
The first term of this is clearly not the complete coefficient, and so we need to obtain the
bubble contribution contained within the second term. Consider reconstructing this term
using a triple cut with the cut loop momentum parameterised in a form given by setting y
equal to its value at the residue of the pole of this second term. This triple cut term is given
−(2π)³ i ∫ d⁴l ∏_i δ(l_i²) δ((l + K_2)²) ⟨K_2^−|/l|a^−⟩
= −(2π)³ i ∫ dt J′_t ( [χK_1^♭][K_2a]/[K_1^♭K_2]² ) ( ⟨K_2^−|/K_1|K_2^−⟩ t + (S_1/γ)⟨χ^−|/K_2|K_1^{♭,−}⟩ ), (5.21)
where we have used the parameterisation of lµ given by eq. (B9) and added an extra overall
factor of i which would come from the additional tree amplitude in a triple cut.
Of this triple cut integrand only the first, t dependent, term can give anything other than
a scalar triangle contribution. To derive the result of this integral over t we will, as we have
done previously, use our parameterisation of the cut momentum, eq. (5.9), to pick out the
integral as follows
∫ d⁴l ⟨χ^−|/l|K_1^{♭,−}⟩/(l² l_1² (l + K_2)²) ≡ (2π)³ γ ∫ dt J′_t t. (5.22)
Using Passarino-Veltman reduction on the single tensor integral on the left-hand side of this, as well as dropping anything but the contributing bubble integrals of our particular cut, leaves us with the result
∫ dt J′_t t = (1/(2π)³) ( S_1⟨χK_2⟩[K_2K_1^♭] / (γ⟨K_2^−|/K_1|K_2^−⟩²) ) B_0^{cut}(K_1²), (5.23)
where B_0^{cut}(K_1²) is the cut form of the scalar bubble integral of eq. (C1). This non-vanishing
result for the integral over t, in contrast to that of section IVA, is a direct consequence of
the cut momentum parameterisation forced upon us when taking the residues contained in
the two-particle cut integrand with which we started.
On substituting the result of eq. (5.23) into eq. (5.21) we find that we can write eq. (5.20), using the bubble integral given in eq. (C1), as
(2π)² ∫ d⁴l ∏_i δ(l_i²) ⟨K_2^−|/l|a^−⟩/(l + K_2)²
= −i ( S_1[χK_1^♭][K_2a]⟨χ^−|/K_2|K_1^{♭,−}⟩ / (γ[K_1^♭K_2]²⟨K_2^−|/K_1|K_2^−⟩) ) B_0^{cut}(K_1²) + i ( [K_1^♭a]/[K_1^♭K_2] ) B_0^{cut}(K_1²)
= i ( ⟨K_2^−|/K_1|a^−⟩/⟨K_2^−|/K_1|K_2^−⟩ ) B_0^{cut}(K_1²), (5.24)
which is the known coefficient of the scalar bubble contained inside the linear triangle.
Of course, if we had chosen χ = K_2 from the beginning, then the first term on the right-hand side of eq. (5.20) would have been the complete bubble coefficient. In general, if we are able to rewrite a two-particle cut integrand such that each term contains only a single propagator then we can always choose a different χ = K_2^♭, defined via
K_2^{♭,μ} = K_2^μ − (S_2/⟨K_1^{♭,−}|/K_2|K_1^{♭,−}⟩) K_1^{♭,μ}, (5.25)
for each term individually such that there are no contributions from the residue terms.
Whether this is both feasible and a more computationally effective approach than calculating
the residue contributions through the use of triple cuts would depend upon the cut integrand
in question.
In general we will be considering processes which contain terms with powers of up to t³, so we will need to know these integrals. Again these can be found using a straightforward application of tensor reduction techniques. When all three legs in the cut are massive these integrals over t are given, after dropping an overall factor of 1/(2π)³ which always cancels out of the final coefficient, by
T(j) = ∫ dt J′_t t^j = ⟨χ^−|/K_2|K_1^{♭,−}⟩^j (K_1·K_2)^{j−1} Σ_{l=1}^{j} C_{jl} ( S_2^{l−1}/(K_1·K_2)^{l−1} ) B_0^{cut}(K_1²). (5.26)
Simply taking the relevant mass to zero gives the forms in the one- and two-mass cases. Δ was previously defined in eq. (4.15) and we have
C11 =
C21 = −
, C22 = −
C31 = −
(K1 · K2)2
, C32 =
, C33 =
. (5.27)
Also for later use we define T (0) = 0.
E. The bubble coefficient
We have now assembled all the pieces necessary to compute our desired scalar bubble coefficient, b_j, corresponding to the cut scalar bubble integral B_0^{cut}(K_j²). It is given in general not as the coefficient of a single term but by summing together the t⁰y^m terms from both the double Inf in y followed by t as well as residue contributions which we derive by considering all possible triple cuts contained in the two-particle cut. The coefficient is given by
b_j = −i [Inf_t [Inf_y A1A2](y)](t) |_{t→0, y^m→1/(m+1)} − (1/2) Σ_{{C_tri}} [Inf_t A1A2A3](t) |_{t^j→T(j)}, (5.28)
where T (j) is defined in eq. (5.26) and the sum over the set {Ctri} is a sum over all triple
cuts obtainable by cutting one more leg of the two-particle cut integrand A1A2.
When computing with eq. (5.28) there is a freedom in the choice of χ, and a suitable choice can simplify the degree of computation involved in extracting a particular coefficient.
Particular choices of χ can eliminate the need to calculate the second term of eq. (5.28)
completely, as discussed in section VD. We also note that there are choices of χ which
eliminate the need to evaluate the first term of eq. (5.28), so that the coefficient comes
entirely from the second term of eq. (5.28) instead.
VI. APPLICATIONS
To demonstrate our method we now present the recalculation of some representative
triangle and bubble integral coefficients. We also discuss checks we have made against various state-of-the-art cut-constructible coefficients contained in the literature.
A. Extracting coefficients
To highlight the application of our procedure to the extraction of basis integral coefficients
we consider deriving the coefficients of some simple integral functions which commonly
appear, for example, in one-loop Feynman diagrams.
1. The triangle coefficient of a linear two-mass triangle
First we consider deriving the scalar triangle coefficient of a linear two-mass triangle with
massive leg K1, massless leg K2, and a and b arbitrary massless four-vectors not equal to
K2. This is given by the integral
∫ d⁴l ⟨a^−|/l|b^−⟩/(l²(l − K_1)²(l + K_2)²). (6.1)
Extracting the triangle coefficient requires cutting all three propagators of the integrand.
We do this here by simply removing the “cut” propagator as we are interested only in the
integrand. This leaves only
〈a−|/l|b−〉. (6.2)
Rewriting this integrand in terms of the parameterisation of eq. (4.16) gives
α_{01}⟨a^−|/K_2|b^−⟩ + t⟨aK_1^♭⟩[K_2b]. (6.3)
As S_2 = 0 we see that α_{01} = S_1/γ and that γ = 2(K_1·K_2). Then taking the t⁰ component of the [Inf_t] of this in accordance with eq. (4.10) leaves us with our desired coefficient
−(S_1/⟨K_2^−|/K_1|K_2^−⟩) ⟨a^−|/K_2|b^−⟩, (6.4)
which matches the expected result.
2. The bubble contributions of a three-mass linear triangle
Consider a linear triangle, in this case with three massive legs, so now K_2 is massive but again a and b are arbitrary massless four-vectors,
∫ d⁴l ⟨a^−|/l|b^−⟩/(l²(l − K_1)²(l + K_2)²). (6.5)
Extracting the bubble coefficient of the integral B_0(K_1²) is done by cutting the two propagators l² and (l − K_1)². Again cutting the legs is done by removing the relevant propagators
gators l2 and (l−K1)2. Again cutting the legs is done by removing the relevant propagators
from the integrand so that it is given by
⟨a^−|/l|b^−⟩/(l + K_2)². (6.6)
As this contains a single propagator, and therefore a single pole, we could choose to set
χ = K♭2 (as defined in eq. (5.25)), before performing the series expansions in y and t. For
this choice of χ the bubble coefficient comes entirely from the two-particle cut. Using the
first term of eq. (5.28) gives directly
−i ( γ⟨a^−|/K_1^♭|b^−⟩ − S_1⟨a^−|/K_2^♭|b^−⟩ )/(γ² − S_1S_2), (6.7)
where γ = ⟨K_2^{♭,−}|/K_1^♭|K_2^{♭,−}⟩, a result which is equivalent to the expected answer.
In order to demonstrate the procedure of using triple cut contributions in extracting a
bubble coefficient we will now reproduce this by assuming χ ≠ K_2^♭. For this case the first
term of eq. (5.28) then gives
−i ⟨aχ⟩[K_1^♭b] / ⟨χ^−|/K_2|K_1^{♭,−}⟩, (6.8)
which upon choosing χ = a vanishes and so the complete contribution will come from the
triple cut pieces of eq. (6.5). Cutting the remaining propagator in eq. (6.6) gives us the
single triple cut term which will contribute. The integrand of this is given, after multiplying
by an additional factor of i which would come from the third tree amplitude if this was a
triple cut, by
i ( ⟨al⟩[lb]|_{y=y_+} + ⟨al⟩[lb]|_{y=y_−} ) = i ( (y_+ + y_−)⟨aK_1^♭⟩[K_1^♭b] + 2t⟨aK_1^♭⟩[ab] ), (6.9)
where we have set χ = a. From eq. (5.16) we have
y_+ + y_− = ( (γ⟨K_1^{♭,−}|/K_2|K_1^{♭,−}⟩ − S_1⟨a^−|/K_2|a^−⟩) t + S_1⟨a^−|/K_2|K_1^{♭,−}⟩ ) / ( S_1⟨a^−|/K_2|K_1^{♭,−}⟩ ). (6.10)
Hence taking the [Inf_t] of the cut integrand, eq. (6.9), and dropping any terms not proportional to t leaves
i t ⟨a^−|/K_1|b^−⟩ ( (γ⟨K_1^{♭,−}|/K_2|K_1^{♭,−}⟩ − S_1⟨a^−|/K_2|a^−⟩) / (S_1⟨a^−|/K_2|K_1^{♭,−}⟩) + 2[ab]/[K_1^♭b] ), (6.11)
which, after inserting the result for the t integral given by eq. (5.26) and substituting this into the second term of eq. (5.28), gives for our desired coefficient
−i (1/(2Δ)) ( (K_1·K_2)⟨a^−|/K_1|b^−⟩ − S_1⟨a^−|/K_2|b^−⟩ ), (6.12)
where Δ was given in eq. (4.15). This matches both the expected result and eq. (6.7).
B. Constructing the one-loop six-photon amplitude A_6(1^−, 2^+, 3^−, 4^+, 5^−, 6^+)
Recently an analytic form for the last unknown six-photon one-loop amplitude was ob-
tained by Binoth, Heinrich, Gehrmann and Mastrolia in ref. [46]. This result was used
to confirm a previous numerical result [50]. More recently still further corroboration has
been provided by [45]. Here we reproduce, as an example, the calculation of the three-mass
triangle and bubble coefficients, again confirming part of these results.
Firstly it is a very simple exercise to demonstrate by explicit computation that all bubble
coefficients vanish. If we were to use the basis of finite box integrals, as defined in [35], then
there is only a single unique three-mass triangle coefficient, a complete explicit derivation
of which we now present. Starting from the cut in the 12 : 34 : 56 channel shown in figure 4
we can write the cut integrand as
FIG. 4: Triple cut six-photon amplitude in the 12 : 34 : 56 channel.
16 A_4(−l_q^{−h}, 1^−, 2^+, l_{2,q}^{h_2}) A_4(−l_{2,q}^{−h_2}, 3^−, 4^+, l_{1,q}^{h_1}) A_4(−l_{1,q}^{−h_1}, 5^−, 6^+, l_q^{h}), (6.13)
with all unlabelled legs photons and l1 = l − K56 and l2 = l + K12. The overall factor of
16 comes from the differing normalisation conventions between QCD colour-ordered ampli-
tudes and QED photon amplitudes. Both helicity choices h = h1 = h2 = ± give identical
contributions. Written explicitly, eq. (6.13) is
16 ⟨l1⟩²⟨l_2 3⟩²⟨l_1 5⟩² / ( ⟨l2⟩⟨l_2 2⟩⟨l_1 4⟩⟨l_2 4⟩⟨l6⟩⟨l_1 6⟩ ). (6.14)
After inserting the momentum parameterisation of eq. (4.16) this becomes
16 (t⟨K_1^♭1⟩ + α_{01}⟨K_2^♭1⟩)² (t⟨K_1^♭3⟩ + α_{21}⟨K_2^♭3⟩)² (t⟨K_1^♭5⟩ + α_{11}⟨K_2^♭5⟩)² / [ (t⟨K_1^♭2⟩ + α_{01}⟨K_2^♭2⟩)(t⟨K_1^♭2⟩ + α_{21}⟨K_2^♭2⟩)(t⟨K_1^♭4⟩ + α_{11}⟨K_2^♭4⟩)(t⟨K_1^♭4⟩ + α_{21}⟨K_2^♭4⟩)(t⟨K_1^♭6⟩ + α_{01}⟨K_2^♭6⟩)(t⟨K_1^♭6⟩ + α_{11}⟨K_2^♭6⟩) ]. (6.15)
Applying eq. (4.10) implies taking only the t0 piece of the [Inft] of this expression. Averaging
over both solutions leaves us with our form for the three mass triangle coefficient
−16 i ⟨K_1^♭1⟩²⟨K_1^♭3⟩²⟨K_1^♭5⟩² / ( ⟨K_1^♭2⟩²⟨K_1^♭4⟩²⟨K_1^♭6⟩² ), (6.16)
where K♭1 depends upon the form of γ± as given in eq. (4.14). Numerical comparison with
the analytic result of [46] shows complete agreement.
C. Contributions to the one-loop A_6(1_q^+, 2_q^−, 3^−, 4^+; 5_e^−, 6_e^+) amplitude
This particular amplitude was originally obtained by Bern, Dixon and Kosower in [19].
Making up this amplitude are many box, triangle and bubble integrals along with rational
terms. Here we will recompute one particular representative three-mass triangle coefficient
in order to highlight the application of our technique to a phenomenologically interesting
process.
Following the notation of [19], we wish to calculate the three-mass triangle coefficient of I_3^{3m}(s_{14}, s_{23}, s_{56}) ≡ C_0(s_{14}, s_{56}) of the F^{cc} term. The only contributing cut is shown in figure 5. We begin by writing down the triple cut integrand for this case
FIG. 5: Triple cut in the 14 : 23 : 56 channel.
A_4(−l_{1,q̄}^{−h_1}, 5_e^−, 6_e^+, l_{2,q}^{h_2}) A_4(−l_{2,q}^{−h_2}, 4^+, 1_q^+, l_g^{h}) A_4(−l_g^{−h}, 2_q^−, 3^−, l_{1,q}^{h_1}), (6.17)
where l1 = l − K23 and l2 = l + K14. Only when h = −, h1 = + and h2 = + do we get a
contribution. It can be written explicitly as
⟨l_2 5⟩²⟨l l_2⟩²⟨23⟩² / ( ⟨14⟩⟨56⟩⟨4 l_2⟩⟨2 l⟩⟨l l_1⟩⟨l_1 l_2⟩ ). (6.18)
Rewriting this in terms of the loop momentum parametrisation of eq. (4.16) gives
( (t⟨K_1^♭5⟩ + α_{21}⟨K_2^♭5⟩)² (α_{21} − α_{01})² ⟨23⟩² ) / ( ⟨14⟩⟨56⟩ (α_{11} − α_{01})(α_{21} − α_{11}) (t⟨4K_1^♭⟩ + α_{21}⟨4K_2^♭⟩)(t⟨2K_1^♭⟩ + α_{01}⟨2K_2^♭⟩) ). (6.19)
The two solutions of γ are given by γ_± = −(K_{23}·K_{14}) ± √( (K_{23}·K_{14})² − s_{23}s_{14} ); the α_{ij}'s are given in Appendix A.
The application of eq. (4.10) involves taking [Inft] of eq. (6.19), dropping all but the t0
component of the result and then averaging over both solutions of γ giving the coefficient
−γ 〈K♭₁5〉²〈23〉² / [ S1 (1 − S1/γ) 〈14〉〈56〉〈4K♭₁〉〈2K♭₁〉 ] , (6.20)
where again K♭1 depends upon γ±. Numerical comparison against the solution for this
coefficient presented in [19],
( 〈2−| /K14 /K23|5+〉² − 〈25〉² s14 s23 ) / ( 〈14〉[23]〈56〉〈2−| /K14|3−〉〈2−| /K34|1−〉 ) + flip, (6.21)
shows complete agreement, where the operation flip is defined as the exchanges 1 ↔ 2,
3 ↔ 4, 5 ↔ 6, 〈ab〉 ↔ [ab].
The remaining triangle and bubble coefficients can be derived in an analogous way. We
have computed a selection of these coefficients for A6(1+q, 2−q, 3−, 4+; 5−e, 6+e), along with coefficients of other amplitudes given in [19], and find complete agreement.
D. Bubble coefficients of the one-loop 5-gluon QCD amplitude A5(1−, 2−, 3+, 4+, 5+)
This result for the one-loop 5-gluon QCD amplitude A5(1−, 2−, 3+, 4+, 5+) was originally calculated by Bern, Dixon, Dunbar and Kosower in [18]. It contains neither box nor triangle integrals, only bubbles; we therefore need only compute bubble coefficients. There are only a pair of such coefficients, with masses s23 and s234 = s51.
For the first cut in the channel K1 = K23 we have, for the sum of the two possible helicity
configurations, the two-particle cut integrand
〈1l₁〉²〈1l〉〈2l〉〈2l₁〉² / ( 〈23〉〈45〉〈51〉 〈4l₁〉〈3l₁〉〈ll₁〉² ) , (6.22)
and for the second, in the channel K1 = K234,
〈1l₁〉²〈1l〉〈2l〉〈2l₁〉² / ( 〈23〉〈34〉〈51〉 〈4l₁〉〈5l₁〉〈ll₁〉² ) . (6.23)
Focus upon the K1 = K23 cut initially. There are two pole-containing terms in the
denominator of this cut. We could choose to partial fraction these terms and then pick χ =
K2 in each case to extract the coefficient. Instead though we will derive the coefficient using
triple cut contributions. We choose χ = k1, so that after inserting the cut loop momentum parameterisation of eq. (5.8) the cut integrand becomes
2γ2〈1K♭1〉
S21〈23〉〈45〉〈51〉
〈2K♭1〉 − S1γ
t〈2K♭1〉 + S1γ (1 − y) 〈21〉
〈3K♭1〉 − S1γ
〈4K♭1〉 − S1γ
) , (6.24)
and hence produces no [Infy[Inft]] term. Consequently the two-particle cut contribution
to the bubble coefficient vanishes. The same choice of χ similarly removes all two-particle
cut contributions in the channel K1 = K234 from the corresponding scalar bubble coefficient.
Examining the triple cuts of the bubble in the K23 channel shows only two possible
contributions, again after summing over both contributing helicities, given by
[3l][3l₂]〈1l₁〉〈1l〉²〈2l₁〉〈2l〉 / ( 〈45〉〈51〉〈ll₁〉〈l₁l₂〉[ll₂]〈l4〉 ) , (6.25)
when K2 = k3 and
−2i [4l][4l₂]〈1l₁〉〈1l₂〉²〈2l₁〉〈2l〉² / ( 〈23〉〈51〉〈ll₁〉〈l₁l₂〉[ll₂]〈5l₂〉〈3l〉 ) , (6.26)
when K2 = k4. In both cases K2 is massless and is of positive helicity so we use the
parameterisation of the triple cut momenta for y+ given in eq. (B2). Setting χ = k1 then gives for the first triple cut integrand
2i〈1K♭1〉2〈23〉
〈13〉〈34〉〈45〉〈51〉
〈1−|/2|3−〉
〈1K♭1〉
〈3−| /K23|3−〉
〈1K♭1〉
〈13〉 〈23〉+
, (6.27)
and for the second
− 2i〈1K
1〉2〈24〉2
〈23〉〈34〉〈45〉〈51〉〈14〉
〈4−| /K23|4−〉
〈14〉 −
〈1−| /K23|4−〉
〈1K♭1〉
〈1K♭1〉
〈14〉 〈24〉+
S1〈21〉
.(6.28)
Applying these integrands to the second term of eq. (5.28) by taking [Inft], dropping any
terms not proportional to t and then performing the substitution ti → T (i) gives for the
coefficient of the first triple cut simply 1
Atree5 , and for the second triple cut
〈1+|/2/4 /K23|1+〉² / ( 〈4−| /K23|4−〉² s12 ) − 〈1+|/2/4 /K23|1+〉 / 〈4−| /K23|4−〉 . (6.29)
After following the same series of steps as above for the second bubble coefficient with
K1 = K234 we find only a single triple cut contributing term corresponding to K2 = k4. This
is related to the second triple cut coefficient derived above via the replacement K23 → K234
and swapping the overall sign.
After combining the three triple cut pieces above we arrive at the following form for the
cut-constructible pieces of this amplitude
(4π)^{2−ǫ} [ Atree5 B0(s23) + ( 〈1+|/2/4 /K23|1+〉² / (〈4−| /K23|4−〉² s12) − 〈1+|/2/4 /K23|1+〉 / 〈4−| /K23|4−〉 ) (B0(s23) − B0(s234)) ] , (6.30)
which can easily be shown to match the result given in [18].
While this example is particularly simple we have also performed additional compar-
isons against other results in the literature. Such tests include the cut constructible pieces
of all two-minus gluon amplitudes with up to seven external legs, originally obtained in
[18, 37]. Additionally we find agreement for the case when, with six gluon legs, three
are of negative helicity and adjacent to each other and the remainder are positive helic-
ity, which was originally obtained in [49]. We have also successfully reproduced the known
three mass triangle coefficients in N = 1 supersymmetry for A6(1−, 2+, 3−, 4+, 5−, 6+) and
A6(1−, 2−, 3+, 4−, 5+, 6+), originally obtained in [35].
VII. CONCLUSIONS
The calculation of Standard Model background processes at the LHC requires efficient
techniques for the production of amplitudes. The large numbers of processes involved along
with their differing partonic makeups suggests that as much automation as possible is de-
sired. In this paper we have presented a new formalism which directs us towards this goal.
Coefficients of the basis scalar integrals making up a one-loop amplitude are constructed in
a straightforward manner involving only a simple change of variables and a series expansion,
thus avoiding the need to perform any integration or calculate any extraneous intermediate
quantities. The main results of this paper can be encapsulated simply by eq. (4.10) and
eq. (5.28) along with the cut loop momentum given by eq. (4.16), eq. (5.8) and eq. (5.16).
Although this technique has been presented mainly in the context of using generalised
unitarity [19, 39, 40, 41] to construct coefficients, and hence the cut-constructible part of
the amplitude, it can also be used as an efficient method of performing one-loop integration.
Using the idea of “cutting” two, three or four of the propagators inside an integral, we
isolate and then extract scalar basis coefficients. This procedure then allows us to rewrite
the integral in terms of the scalar one-loop basis integrals, hence giving us a result for the
integral.
Different unitarity cuts isolate particular basis integrals. For the extraction of triangle
integral coefficients this means triple cuts and for bubble coefficients we use a combination
of two-particle and triple cuts. Extracting the desired coefficients from these cut integrands
is then a two step process. The first step is to rewrite the cut loop momentum in terms of a
parameterisation which depends upon the remaining free parameters of the integral after all
the cut delta functions have been applied. Triangle coefficients are then found by taking the
terms independent of the sole free integral parameter as this parameter is taken to infinity.
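For a rational toy integrand, the two steps above — rewrite in the free parameter, then keep only the piece that survives as that parameter is taken to infinity — can be sketched in a few lines. This is an illustration of the idea only; the helper names below (`poly_div`, `inf_t_constant`) are ours, not the paper's implementation.

```python
from fractions import Fraction

def poly_div(num, den):
    """Long division of polynomials given as coefficient lists
    (highest power of t first); returns (quotient, remainder)."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        c = num[0] / den[0]
        quot.append(c)
        num = [num[i] - c * den[i] if i < len(den) else num[i]
               for i in range(len(num))][1:]
    return quot, num

def inf_t_constant(num, den):
    """t^0 piece of the large-t (polynomial) part of num(t)/den(t):
    the analogue, for a rational toy integrand, of keeping the term
    independent of the free parameter as it is taken to infinity."""
    quot, _ = poly_div(num, den)
    return quot[-1] if quot else Fraction(0)

# (2t+3)^2 / ((t+1)(4t+5)) = 1 + 3/(4t) + ..., so the t^0 piece is 1
print(inf_t_constant([4, 12, 9], [4, 9, 5]))  # -> 1
```

Exact rational arithmetic is used so that the extracted constant is free of floating-point noise.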
Bubble coefficients are calculated in a similar if slightly more complicated way. The pres-
ence of a second free parameter in the bubble case means that we must take into account, not
only the constant term in the expansion of the cut integrand as the free integral parameters
are taken to infinity, but also powers of one of these parameters. The limit on the maximum
power of lµ appearing in the cut integral restricts the appearance of such terms and hence
we need consider only finite numbers of powers of these free parameters. Additionally it can
also be necessary to take into account contributions from terms generated by applying an
additional cut to the bubble integral. The flexibility in our choice of the cut-loop momentum
parameterisation allows us to directly control whether we need compute any of these triple
cut terms. Furthermore we can control which of these triple cut terms appears, in cases
when their computation is necessary.
As we consider the application of this procedure to more diverse processes than those
detailed here, we should also investigate the “complexity” of the generated coefficients.
In the applications we have presented we can see that we produce “compact” forms with
minimal amounts of simplification required. This is important if we are to consider further
automation. The straightforward nature of this technique combined with the minimal need
for simplification means that efficient computer implementations can easily be produced. As
a test of this assertion we have implemented the formalism within a Mathematica program
which has been used to perform checks against state-of-the-art results contained in the
literature. Such checks have included various helicity configurations of up to seven external
gluons as well as the bubble and three-mass triangle coefficients of the six photon A6(−+−+−+) amplitude. In addition representative coefficients of processes of the type e+e− → qq̄gg
have been successfully obtained.
Our procedure as presented has mainly been in the context of massless theories. Funda-
mentally there is no restriction to the application of this to theories also involving massive
fields circulating in the loop. Extensions to include masses should require only a suitable
momentum parameterisation for the cut loop momentum; the procedure is then expected to
apply as before.
In conclusion therefore we believe that the technique presented here shows great potential
for easing the calculation of needed one-loop integrals for current and future colliders.
Acknowledgements
I would like to thank David Kosower for collaboration in the early stages of this work
and also Zvi Bern and Lance Dixon for many interesting and productive discussions as well
as for useful comments on this manuscript. I would also like to thank the hospitality of
Saclay where early portions of this work were carried out. The figures were generated using
Jaxodraw [51], based on Axodraw [52].
APPENDIX A: THE TRIPLE CUT PARAMETERISATION
In this appendix we give the complete detail of the triple cut parameterisation along with
some other useful results. The three cut momenta are given by
〈l⁻ᵢ| = t〈K♭,−1| + αi1〈K♭,−2| ,   〈l⁺ᵢ| = (αi2/t)〈K♭,+1| + 〈K♭,+2| , (A1)
with
α01 = S1(γ − S2)/(γ² − S1S2) ,   α02 = S2(γ − S1)/(γ² − S1S2) ,
α11 = α01 − S1/γ = −S1S2(1 − S1/γ)/(γ² − S1S2) ,   α12 = α02 − 1 = γ(S2 − γ)/(γ² − S1S2) ,
α21 = α01 − 1 = γ(S1 − γ)/(γ² − S1S2) ,   α22 = α02 − S2/γ = −S1S2(1 − S2/γ)/(γ² − S1S2) , (A2)
along with the identities α01α02 = α11α12 and α01α02 = α21α22. When written as four-vectors
the cut momenta are given by
lμᵢ = αi2 K♭μ1 + αi1 K♭μ2 + (t/2)〈K♭,−1|γμ|K♭,−2〉 + (αi1αi2/(2t))〈K♭,−2|γμ|K♭,−1〉 . (A3)
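The coefficients of eq. (A2), the product identities quoted with them, and the S3 combination of eq. (A5) all hold algebraically, so they can be checked with exact rational arithmetic for any values of S1, S2 and γ with γ² ≠ S1S2. A minimal sketch:

```python
from fractions import Fraction as F

# arbitrary exact values; the identities below hold for any S1, S2 and
# any gamma with gamma^2 != S1*S2
S1, S2, g = F(3), F(7), F(5)
D = g * g - S1 * S2

a01 = S1 * (g - S2) / D
a02 = S2 * (g - S1) / D
a11 = a01 - S1 / g
a12 = a02 - 1
a21 = a01 - 1
a22 = a02 - S2 / g

# closed forms of eq. (A2)
assert a11 == -S1 * S2 * (1 - S1 / g) / D and a12 == g * (S2 - g) / D
assert a21 == g * (S1 - g) / D and a22 == -S1 * S2 * (1 - S2 / g) / D

# product identities quoted after eq. (A2)
assert a01 * a02 == a11 * a12 == a21 * a22

# the S3 combination of eq. (A5)
assert -(1 - S2 / g) * (1 - S1 / g) * g == -g - S1 * S2 / g + S1 + S2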
From these parameterised forms we have the following spinor product identities
[ll1] = (1/t)(α12 − α02)[K♭1K♭2] = −(1/t)[K♭1K♭2] ,
〈ll1〉 = t(α11 − α01)〈K♭1K♭2〉 = −t(S1/γ)〈K♭1K♭2〉 ,
[ll2] = (1/t)(α22 − α02)[K♭1K♭2] = −(S2/(tγ))[K♭1K♭2] ,
〈ll2〉 = t(α21 − α01)〈K♭1K♭2〉 = −t〈K♭1K♭2〉 ,
[l1l2] = (1/t)(α22 − α12)[K♭1K♭2] = (1/t)(1 − S2/γ)[K♭1K♭2] ,
〈l1l2〉 = t(α11 − α21)〈K♭1K♭2〉 = t(1 − S1/γ)〈K♭1K♭2〉 , (A4)
and we note that
−(1 − S2/γ)(1 − S1/γ)γ = −γ − S1S2/γ + S1 + S2 = (K1 − K2)² = S3 , (A5)
and so with l ≡ l0 we have 〈lilj〉[ljli] = Si+j, as expected.
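The closure checks 〈lilj〉[ljli] = Si+j rest on the elementary identity 〈ab〉[ba] = 2ka·kb for massless momenta. The sketch below verifies that identity numerically using one common light-cone spinor construction; the conventions (and function names) here are ours and not necessarily those of the paper.

```python
import math, random

def lam(k):
    """Holomorphic spinor of a real massless k = (E, kx, ky, kz), E > 0,
    built from light-cone components k+ = E + kz and k_perp = kx + i*ky."""
    kplus = k[0] + k[3]
    r = math.sqrt(kplus)
    return (r, complex(k[1], k[2]) / r)

def angle(ki, kj):
    """Spinor product <i j> = lam_i^1 lam_j^2 - lam_i^2 lam_j^1."""
    a, b = lam(ki), lam(kj)
    return a[0] * b[1] - a[1] * b[0]

def square(ki, kj):
    """Spinor product [i j]; for real positive-energy momenta [i j] = <j i>*."""
    return angle(kj, ki).conjugate()

def mdot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def random_massless(rng):
    v = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    return (math.sqrt(v[0]**2 + v[1]**2 + v[2]**2), v[0], v[1], v[2])

rng = random.Random(7)
p, q = random_massless(rng), random_massless(rng)
# <p q>[q p] = 2 p.q = s_pq for massless p, q
assert abs(angle(p, q) * square(q, p) - 2 * mdot(p, q)) < 1e-9
assert abs(angle(p, q) + angle(q, p)) < 1e-12  # antisymmetry
```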
APPENDIX B: THE TRIPLE CUT BUBBLE CONTRIBUTION MOMENTUM PARAMETERISATION WHEN K2² = 0
In this appendix we give the forms for the triple cut momentum of section VC in the
case when S2 = 0, i.e. we have a one or two mass triangle. Firstly in these cases the K2 leg
is attached to a three-point vertex and so the amplitude for this will contain either [K2l] or
〈K2l〉 depending upon the helicity of K2. This means that in the positive helicity case only
the delta function solution δ(y − y+) survives and for a negative helicity K2 the δ(y − y−)
survives. We have for both solutions
J ′t =
S1〈χ−| /K2|K♭,−1 〉 + tγ〈K−2 | /K1|K−2 〉
) . (B1)
The momentum parameterisation for the y+ solution is given in spinor components by
〈l−| = 〈χK
〈χK2〉
〈K−2 |, 〈l+| = 〈K
1 | −
〈χK2〉
〈K−2 | /K1, (B2)
and as a 4-vector by
〈χK♭1〉
2〈χK2〉
S1〈χK2〉
〈K−2 |γµ /K1|K+2 〉 + 〈K−2 |γµ|K
. (B3)
The other momenta are given by
〈l−1 | = t
〈χK♭1〉
〈χK2〉
〈K−2 | −
〈χ−|, 〈l+1 | = −
S1〈χK2〉
〈K−2 | /K1,
〈l−2 | =
〈χK♭1〉
〈χK2〉
〈K−2 |, 〈l+2 | = −
〈χ−| /K3
〈χK♭1〉
〈χK2〉
〈K−2 | /K1. (B4)
where we have moved the overall factor of t from 〈l−1 | to 〈l+1 | to avoid the presence of a 1/t
term for aesthetic reasons. The spinor products formed from these are given by
〈ll1〉 =
〈χK♭1〉, [ll1] = [K♭1χ],
〈ll2〉 = 0, [ll2] = −
〈χK2〉
〈χK♭1〉
[lK2],
〈l1l2〉 = −
〈χK♭1〉, [l1l2] = −
[K♭1χ]. (B5)
and we see that again, as expected, with l = l0, we have 〈lilj〉[ljli] = Si+j. As we have massless legs, some spinor products will consequently vanish. In the two-mass case these are
〈ll2〉 = 0, 〈lK2〉 = 0, [l2K2] = 0, (B6)
and for the one-mass case
[l1l2] = 0, 〈ll2〉 = 0, 〈lK2〉 = 0, 〈l2K2〉 = 0, [l1K3] = 0, [l2K3] = 0, (B7)
where K3 is the momentum of the third leg.
The momentum parameterisation for the y− solution is given in spinor components by
〈l−| = t
〈K+2 | /K1 +
〈χ−|, 〈l+| = [χK
〈K+2 |, (B8)
and as a 4-vector by
[χK♭1]
2[K2K
〈K+2 | /K1γµ|K−2 〉 +
〈χ−|γµ|K−2 〉
. (B9)
The other momenta are given by
〈l−1 | =
〈K+2 | /K1, 〈l+1 | = −t
[K♭1χ]
〈K+2 | − 〈K
〈l−2 | =
[χK♭1]
〈K♭,+1 | /K3 +
〈K+2 | /K1, 〈l+2 | =
[χK♭1]
〈K+2 |. (B10)
The spinor products formed from these are given by
〈ll1〉 =
〈χK♭1〉, [ll1] = [K♭1χ],
〈ll2〉 =
[χK♭1]
〈K2l〉, [ll2] = 0,
〈l1l2〉 =
〈K♭1χ〉, [l1l2] = [χK♭1]. (B11)
and again 〈lilj〉[ljli] = Si+j as expected. The vanishing spinor products in the two mass case
[ll2] = 0, [lK2] = 0, [l2K2] = 0, (B12)
and in the one mass case
〈l1l2〉 = 0, [ll2] = 0, [lK2] = 0, [l2K2] = 0, 〈l1K3〉 = 0, 〈l2K3〉 = 0. (B13)
APPENDIX C: THE SCALAR INTEGRAL FUNCTIONS
The scalar bubble integral with massive leg K1 given in figure 6 is defined as
FIG. 6: The scalar bubble integral with a leg of mass K1².
B0(K1²) = (−i)(4π)^{2−ǫ} ∫ d^{4−2ǫ}l/(2π)^{4−2ǫ} · 1/( l²(l − K1)² ) , (C1)
and is given by
B0(K1²) = rΓ/(ǫ(1 − 2ǫ)) (−K1²)^{−ǫ} = rΓ ( 1/ǫ − ln(−K1²) + 2 ) + O(ǫ), (C2)
with
rΓ = Γ(1 + ǫ)Γ²(1 − ǫ)/Γ(1 − 2ǫ) . (C3)
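The ǫ-expansion quoted in eq. (C2) can be checked numerically against the all-orders form: their difference should vanish linearly in ǫ. A sketch, written for spacelike K1² so that `mK2` (standing for −K1²) is a positive real number:

```python
import math

def r_gamma(eps):
    """r_Gamma = Gamma(1+eps) * Gamma(1-eps)^2 / Gamma(1-2*eps), eq. (C3)."""
    return math.gamma(1 + eps) * math.gamma(1 - eps) ** 2 / math.gamma(1 - 2 * eps)

def bubble_exact(mK2, eps):
    """All-orders form of eq. (C2), with mK2 = -K1^2 > 0 (spacelike K1)."""
    return r_gamma(eps) / (eps * (1 - 2 * eps)) * mK2 ** (-eps)

def bubble_expanded(mK2, eps):
    """Truncation r_Gamma * (1/eps - ln(-K1^2) + 2); differs from the
    exact form only at order eps."""
    return r_gamma(eps) * (1 / eps - math.log(mK2) + 2)

# the remainder shrinks linearly with eps, as an O(eps) term should
d1 = bubble_exact(2.0, 1e-3) - bubble_expanded(2.0, 1e-3)
d2 = bubble_exact(2.0, 1e-4) - bubble_expanded(2.0, 1e-4)
assert abs(d1) < 1e-2 and 8 < d1 / d2 < 12
```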
The general form of the scalar triangle integral with the masses of its legs labelled K1², K2² and K3², given in figure 7, is defined as
FIG. 7: The scalar triangle with its three legs of mass K1², K2² and K3².
C0(K1², K2²) = i(4π)^{2−ǫ} ∫ d^{4−2ǫ}l/(2π)^{4−2ǫ} · 1/( l²(l − K1)²(l − K2)² ) , (C4)
and separates into three cases depending upon the masses of these external legs. In the one
mass case we have K2² = 0 and K3² = 0 and the corresponding integral is given by
C0(K1², 0) = (rΓ/ǫ²)(−K1²)^{−1−ǫ} = rΓ/(−K1²) ( 1/ǫ² − (1/ǫ) ln(−K1²) + ½ ln²(−K1²) ) + O(ǫ), (C5)
If two legs are massive the integral, assuming K3² = 0, is given by
C0(K1², K2²) = (rΓ/ǫ²) [ (−K1²)^{−ǫ} − (−K2²)^{−ǫ} ] / [ (−K1²) − (−K2²) ]
= rΓ/[ (−K1²) − (−K2²) ] ( −[ln(−K1²) − ln(−K2²)]/ǫ + ½[ln²(−K1²) − ln²(−K2²)] ) + O(ǫ). (C6)
Finally if all three legs are massive then the integral is as given in [53, 54]
C0(K1², K2²) = (i/√∆3) Σ_{j=1}^{3} [ Li2( (1 + iδj)/(1 − iδj) ) − Li2( (1 − iδj)/(1 + iδj) ) ] + O(ǫ), (C7)
where
δ1 = ( K1² − K2² − (K1 + K2)² ) / √∆3 ,  δ2 = ( −K1² + K2² − (K1 + K2)² ) / √∆3 ,  δ3 = ( −K1² − K2² + (K1 + K2)² ) / √∆3 , (C8)
∆3 = −(K1²)² − (K2²)² − (K3²)² + 2(K1²K2² + K3²K1² + K2²K3²) = −4∆, (C9)
with ∆ given by eq. (4.15).
The general form for a scalar box function is given by
I4(K1², K2², K3²) = (−i)(4π)^{2−ǫ} ∫ d^{4−2ǫ}l/(2π)^{4−2ǫ} · 1/( l²(l − K1)²(l − K2)²(l − K3)² ) . (C10)
The solution of this integral is split up into classes depending upon the masses of the external legs. These solutions are labelled as zero mass I4^{0m}, one mass I4^{1m}, two mass hard I4^{2mh}, two mass easy I4^{2me}, three mass I4^{3m} and four mass I4^{4m} integrals. The results for these can be found in the literature, for example in [17].
[1] A. Denner, S. Dittmaier, M. Roth and L. H. Wieders, Phys. Lett. B612:223 (2005) [hep-
ph/0502063]; Nucl. Phys. B724:247 (2005) [hep-ph/0505042].
[2] W. T. Giele and E. W. N. Glover, JHEP 0404:029 (2004) [hep-ph/0402152];
R. K. Ellis, W. T. Giele and G. Zanderighi, Phys. Rev. D72:054018 (2005) [hep-ph/0506196].
[3] R. K. Ellis, W. T. Giele and G. Zanderighi, Phys. Rev. D73:014027 (2006) [hep-ph/0508308].
[4] R. K. Ellis, W. T. Giele and G. Zanderighi, JHEP 0605:027 (2006) [hep-ph/0602185].
[5] T. Binoth, G. Heinrich and N. Kauer, Nucl. Phys. B654:277 (2003) [hep-ph/0210023];
M. Kramer and D. E. Soper, Phys. Rev. D66:054017 (2002) [hep-ph/0204113];
Z. Nagy and D. E. Soper, JHEP 0309:055 (2003) [hep-ph/0308127];
T. Binoth, J. P. Guillet, G. Heinrich, E. Pilon and C. Schubert, JHEP 0510:015 (2005) [hep-
ph/0504267];
T. Binoth, M. Ciccolini and G. Heinrich, Nucl. Phys. Proc. Suppl. 157:48 (2006) [hep-
ph/0601254].
A. Lazopoulos, K. Melnikov and F. Petriello, hep-ph/0703273.
[6] F. A. Berends and W. T. Giele, Nucl. Phys. B306:759 (1988).
[7] D. A. Kosower, Nucl. Phys. B335:23 (1990).
[8] E. Witten, Commun. Math. Phys. 252:189 (2004) [hep-th/0312171];
R. Roiban, M. Spradlin and A. Volovich, JHEP 0404:012 (2004) [hep-th/0402016];
R. Roiban and A. Volovich, Phys. Rev. Lett. 93:131602 (2004) [hep-th/0402121];
R. Roiban, M. Spradlin and A. Volovich, Phys. Rev. D70:026009 (2004) [hep-th/0403190];
F. Cachazo and P. Svrček, in Proceedings of the RTN Winter School on Strings, Supergrav-
ity and Gauge Theories, edited by M. Bertolini et al. (Proceedings of Science, 2005) [hep-
th/0504194].
[9] F. Cachazo, P. Svrček and E. Witten, JHEP 0409:006 (2004) [hep-th/0403047];
C. J. Zhu, JHEP 0404:032 (2004) [hep-th/0403115];
G. Georgiou and V. V. Khoze, JHEP 0405:070 (2004) [hep-th/0404072];
J. B. Wu and C. J. Zhu, JHEP 0407:032 (2004) [hep-th/0406085];
J. B. Wu and C. J. Zhu, JHEP 0409:063 (2004) [hep-th/0406146];
D. A. Kosower, Phys. Rev. D71:045007 (2005) [hep-th/0406175];
G. Georgiou, E. W. N. Glover and V. V. Khoze, JHEP 0407:048 (2004) [hep-th/0407027];
Y. Abe, V. P. Nair and M. I. Park, Phys. Rev. D71:025002 (2005) [hep-th/0408191];
L. J. Dixon, E. W. N. Glover and V. V. Khoze, JHEP 0412:015 (2004) [hep-th/0411092];
Z. Bern, D. Forde, D. A. Kosower and P. Mastrolia, Phys. Rev. D72:025006 (2005) [hep-
ph/0412167];
T. G. Birthwright, E. W. N. Glover, V. V. Khoze and P. Marquard, JHEP 0505:013 (2005)
[hep-ph/0503063]; JHEP 0507:068 (2005) [hep-ph/0505219].
[10] R. Britto, F. Cachazo and B. Feng, Nucl. Phys. B 715, 499 (2005) [hep-th/0412308].
[11] R. Britto, F. Cachazo, B. Feng and E. Witten, Phys. Rev. Lett. 94, 181602 (2005) [hep-
th/0501052].
[12] J. Bedford, A. Brandhuber, B. Spence and G. Travaglini, Nucl. Phys. B721:98 (2005) [hep-
th/0502146];
F. Cachazo and P. Svrček, hep-th/0502160;
N. E. J. Bjerrum-Bohr, D. C. Dunbar, H. Ita, W. B. Perkins and K. Risager, JHEP 0601:009
(2006) [hep-th/0509016].
[13] M. Luo and C. Wen, JHEP 0503:004 (2005) [hep-th/0501121].
[14] S. D. Badger, E. W. N. Glover, V. V. Khoze and P. Svrček, JHEP 0507:025 (2005) [hep-
th/0504159].
[15] S. D. Badger, E. W. N. Glover and V. V. Khoze, JHEP 0601:066 (2006) [hep-th/0507161];
D. Forde and D. A. Kosower, Phys. Rev. D73:065007 (2006) [hep-th/0507292];
C. Schwinn and S. Weinzierl, JHEP 0603:030 (2006) [hep-th/0602012];
P. Ferrario, G. Rodrigo and P. Talavera, Phys. Rev. Lett. 96:182001 (2006) [hep-th/0602043].
[16] C. Schwinn and S. Weinzierl, hep-ph/0703021.
[17] Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, Nucl. Phys. B425:217 (1994) [hep-
ph/9403226].
[18] Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, Nucl. Phys. B435:59 (1995) [hep-
ph/9409265].
[19] Z. Bern, L. J. Dixon and D. A. Kosower, Nucl. Phys. B 513, 3 (1998) [hep-ph/9708239].
[20] W. L. van Neerven, Nucl. Phys. B 268, 453 (1986).
[21] Z. Bern and A. G. Morgan, Nucl. Phys. B467:479 (1996) [hep-ph/9511336];
Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, Phys. Lett. B394:105 (1997) [hep-
th/9611127].
[22] A. Brandhuber, S. McNamara, B. Spence and G. Travaglini, JHEP 0510:011 (2005) [hep-
th/0506068].
[23] C. Anastasiou, R. Britto, B. Feng, Z. Kunszt and P. Mastrolia, Phys. Lett. B 645, 213 (2007)
[hep-ph/0609191].
[24] C. Anastasiou, R. Britto, B. Feng, Z. Kunszt and P. Mastrolia, hep-ph/0612277.
[25] R. Britto and B. Feng, hep-ph/0612089.
[26] Z. Bern, L. J. Dixon and D. A. Kosower, Phys. Rev. D 71, 105013 (2005) [hep-th/0501240].
[27] Z. Bern, L. J. Dixon and D. A. Kosower, Phys. Rev. D 73, 065013 (2006) [hep-ph/0507005].
[28] D. Forde and D. A. Kosower, Phys. Rev. D 73, 061701 (2006) [hep-ph/0509358].
[29] C. F. Berger, Z. Bern, L. J. Dixon, D. Forde and D. A. Kosower, Phys. Rev. D 75, 016006
(2007) [hep-ph/0607014].
[30] C. F. Berger, Z. Bern, L. J. Dixon, D. Forde and D. A. Kosower, Phys. Rev. D 74, 036009
(2006) [hep-ph/0604195].
[31] Z. Xiao, G. Yang and C. J. Zhu, Nucl. Phys. B 758, 1 (2006) [hep-ph/0607015].
[32] X. Su, Z. Xiao, G. Yang and C. J. Zhu, Nucl. Phys. B 758, 35 (2006) [hep-ph/0607016].
[33] Z. Xiao, G. Yang and C. J. Zhu, Nucl. Phys. B 758, 53 (2006) [hep-ph/0607017].
[34] T. Binoth, J. P. Guillet and G. Heinrich, hep-ph/0609054.
[35] R. Britto, E. Buchbinder, F. Cachazo and B. Feng, Phys. Rev. D 72, 065012 (2005) [hep-
ph/0503132].
[36] R. Britto, B. Feng and P. Mastrolia, Phys. Rev. D 73, 105004 (2006) [hep-ph/0602178].
[37] J. Bedford, A. Brandhuber, B. Spence and G. Travaglini, Nucl. Phys. B712:59 (2005) [hep-
th/0412108].
[38] R. Britto, F. Cachazo and B. Feng, Nucl. Phys. B 725, 275 (2005) [hep-th/0412103].
[39] L. D. Landau, Nucl. Phys. 13:181 (1959);
S. Mandelstam, Phys. Rev. 112:1344 (1958);
R. E. Cutkosky, J. Math. Phys. 1:429 (1960);
R. J. Eden, P. V. Landshoff, D. I. Olive, J. C. Polkinghorne, The Analytic S Matrix (Cambridge
University Press, 1966).
[40] Z. Bern, L. J. Dixon and D. A. Kosower, JHEP 0408:012 (2004) [hep-ph/0404293].
[41] Z. Bern, V. Del Duca, L. J. Dixon and D. A. Kosower, Phys. Rev. D71:045006 (2005) [hep-
th/0410224].
[42] P. Mastrolia, Phys. Lett. B 644, 272 (2007) [hep-th/0611091].
[43] A. Brandhuber, B. Spence and G. Travaglini, Nucl. Phys. B706:150 (2005) [hep-th/0407214];
C. Quigley and M. Rozali, JHEP 0501:053 (2005) [hep-th/0410278];
J. Bedford, A. Brandhuber, B. Spence and G. Travaglini, Nucl. Phys. B706:100 (2005) [hep-
th/0410280].
[44] G. Ossola, C. G. Papadopoulos and R. Pittau, Nucl. Phys. B 763, 147 (2007) [hep-
ph/0609007].
[45] G. Ossola, C. G. Papadopoulos and R. Pittau, arXiv:0704.1271 [hep-ph].
[46] T. Binoth, T. Gehrmann, G. Heinrich and P. Mastrolia, hep-ph/0703311.
[47] F. A. Berends, R. Kleiss, P. De Causmaecker, R. Gastmans and T. T. Wu, Phys. Lett. B103:124
(1981);
P. De Causmaecker, R. Gastmans, W. Troost and T. T. Wu, Nucl. Phys. B206:53 (1982);
Z. Xu, D. H. Zhang and L. Chang, TUTP-84/3-TSINGHUA;
R. Kleiss and W. J. Stirling, Nucl. Phys. B262:235 (1985);
J. F. Gunion and Z. Kunszt, Phys. Lett. B161:333 (1985);
Z. Xu, D. H. Zhang and L. Chang, Nucl. Phys. B291:392 (1987).
[48] M. L. Mangano and S. J. Parke, Phys. Rept. 200:301 (1991);
L. J. Dixon, in QCD & Beyond: Proceedings of TASI ’95, ed. D. E. Soper (World Scientific,
1996) [hep-ph/9601359].
[49] Z. Bern, N. E. J. Bjerrum-Bohr, D. C. Dunbar and H. Ita, JHEP 0511, 027 (2005) [hep-
ph/0507019].
[50] Z. Nagy and D. E. Soper, Phys. Rev. D 74, 093006 (2006) [hep-ph/0610028].
[51] D. Binosi and L. Theussl, Comput. Phys. Commun. 161:76 (2004) [hep-ph/0309015].
[52] J. A. M. Vermaseren, Comput. Phys. Commun. 83:45 (1994).
[53] Z. Bern, L. J. Dixon and D. A. Kosower, Nucl. Phys. B 412, 751 (1994) [hep-ph/9306240].
[54] H.-J. Lu and C. Perez, SLAC-PUB-5809.
0704.1836 | Comment on Electroweak Higgs as a Pseudo-Goldstone Boson of Broken Scale Invariance | arXiv:0704.1836v1 [hep-ph] 13 Apr 2007
CSULB–PA–07–4
Comment on Electroweak Higgs as a
Pseudo-Goldstone Boson of Broken Scale Invariance
Hitoshi NISHINO1) and Subhash RAJPOOT2)
Department of Physics & Astronomy
California State University
1250 Bellflower Boulevard
Long Beach, CA 90840
Abstract
The first model of Foot, Kobakhidze and Volkas described in their work in
arXiv:0704.1165 [hep-ph] is a tailored version of our model on broken scale invari-
ance in the standard model presented in hep-th/0403039.
1) E-Mail: [email protected]
2) E-Mail: [email protected]
The merits of implementing scale invariance in the standard model of particle interactions
were enunciated by us in [1]. An extended version of this work that also addresses the
important issue of unification of elementary particle interactions will appear in [2]. The
salient features of our model were recently recapitulated in our comment [3].
Here we point out that in the work of Foot, Kobakhidze and Volkas [4] on broken scale
invariance in the standard model, their first model corresponds to a tailored version of our
model.
R. Foot et al. [4] are fully aware of the fact that scale invariance symmetry can be realised
as a local symmetry,3) in which case breaking it results in an additional neutral gauge boson.
In our work [1] we dubbed this gauge boson the Weylon, named after Hermann Weyl [5].
References
[1] H. Nishino and S. Rajpoot, ‘Broken Scale Invariance in the Standard Model’, CSULB-
PA-04-2, hep-th/0403039.
[2] H. Nishino and S. Rajpoot, ‘Standard Model and SU(5) GUT with Local Scale Invariance
and the Weylon’, CSULB-PA-06-4, to appear in CICHEP-II Conference Proceedings,
2006, published by AIP.
[3] H. Nishino and S. Rajpoot, ‘Comment on Shadow and Non-Shadow Extensions of the
Standard Model’, hep-th/0702080.
[4] R. Foot, A. Kobakhidze and R.R. Volkas, ‘Electroweak Higgs as a Pseudo-Goldstone
Boson of Broken Scale Invariance’, arXiv:0704.1165 [hep-ph].
[5] H. Weyl, S.-B. Preuss. Akad. Wiss. 465 (1918); Math. Z. 2 (1918) 384; Ann. Phys. 59
(1919) 101; Raum, Zeit, Materie, vierte erweiterte Auflage: Julius Springer (1921).
3) Cf. Footnote #2 in [4].
|
0704.1837 | Hard x-ray photoemission study of LaAlO3/LaVO3 multilayers | Hard x-ray photoemission study of LaAlO3/LaVO3 multilayers
H. Wadati,1, ∗ Y. Hotta,2 A. Fujimori,1 T. Susaki,2 H. Y. Hwang,2, 3 Y. Takata,4 K. Horiba,4
M. Matsunami,4 S. Shin,4, 5 M. Yabashi,6, 7 K. Tamasaku,6 Y. Nishino,6 and T. Ishikawa6, 7
1Department of Physics, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
2Department of Advanced Materials Science, University of Tokyo, Kashiwa, Chiba 277-8561, Japan
3Japan Science and Technology Agency, Kawaguchi 332-0012, Japan
4Soft X-ray Spectroscopy Laboratory, RIKEN/SPring-8,
1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5148, Japan
5Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
6Coherent X-ray Optics Laboratory, RIKEN/SPring-8,
1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5148, Japan
7JASRI/SPring-8, 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198, Japan
(Dated: November 28, 2018)
We have studied the electronic structure of multilayers composed of a band insulator LaAlO3
(LAO) and a Mott insulator LaVO3 (LVO) by means of hard x-ray photoemission spectroscopy,
which has a probing depth as large as ∼ 60 Å. The Mott-Hubbard gap of LVO remained open at
the interface, indicating that the interface is insulating unlike the LaTiO3/SrTiO3 multilayers. We
found that the valence of V in LVO were partially converted from V3+ to V4+ only at the interface
on the top side of the LVO layer and that the amount of V4+ increased with LVO layer thickness. We
suggest that the electronic reconstruction to eliminate the polarity catastrophe inherent in the polar
heterostructure is the origin of the highly asymmetric valence change at the LVO/LAO interfaces.
PACS numbers: 71.28.+d, 73.20.-r, 79.60.Dp, 71.30.+h
I. INTRODUCTION
The interfaces of hetero-junctions composed of
transition-metal oxides have recently attracted great in-
terest. For example, it has been suggested that the inter-
face between a band insulator SrTiO3 (STO) and a Mott
insulator LaTiO3 (LTO) shows metallic conductivity.1,2,3
Recently, Takizawa et al.4 measured photoemission spec-
tra of this interface and observed a clear Fermi cut-off,
indicating that an electronic reconstruction indeed oc-
curs at this interface. In the case of STO/LTO, elec-
trons penetrate from the layers of the Mott insulator to
the layers of the band insulator, resulting in the inter-
mediate band filling and hence the metallic conductivity
of the interfaces. It is therefore interesting to investi-
gate how electrons behave if we confine electrons in the
layers of the Mott insulator. In this paper, we inves-
tigate the electronic structure of multilayers consisting
of a band insulator LaAlO3 (LAO) and a Mott insula-
tor LaVO3 (LVO). LAO is a band insulator with a large
band gap of about 5.6 eV. LVO is a Mott-Hubbard in-
sulator with a band gap of about 1.0 eV.5 This material
shows G-type orbital ordering and C-type spin ordering
below the transition temperature TOO = TSO = 143 K.
From the previous results of photoemission and inverse
photoemission spectroscopy, it was revealed that in the
valence band there are O 2p bands at 4 − 8 eV and V
3d bands (lower Hubbard bands; LHB) at 0 − 3 eV and
that above EF there are upper Hubbard bands (UHB) of
V 3d origin separated by a band gap of about 1 eV from
the LHB.7 Since the bottom of the conduction band of
LAO has predominantly La 5d character and its energy
position is well above that of the LHB of LVO,8 the V
3d electrons are expected to be confined within the LVO
layers as a “quantum well” and not to penetrate into
the LAO layers, making this interface insulating unlike
the LTO/STO case.1,2,3,4 Recently, Hotta et al.9 investi-
gated the electronic structure of 1-5 unit cell thick layers
of LVO embedded in LAO by means of soft x-ray (SX)
photoemission spectroscopy. They found that the V 2p
core-level spectra had both V3+ and V4+ components
and that the V4+ was localized in the topmost layer.
However, due to the surface sensitivity of SX photoemis-
sion, the information about deeply buried interfaces in
the multilayers is still lacking. Also, they used an un-
monochromatized x-ray source, whose energy resolution
was not sufficient for detailed studies of the valence band.
In the present work, we have investigated the electronic
structure of the LAO/LVO interfaces by means of hard
x-ray (HX) photoemission spectroscopy (hν = 7937 eV)
at SPring-8 BL29XU. HX photoemission spectroscopy is
a bulk-sensitive experimental technique compared with
ultraviolet and SX photoemission spectroscopy, and is
very powerful for investigating buried interfaces in mul-
tilayers. From the valence-band spectra, we found that
a Mott-Hubbard gap of LVO remained open at the in-
terface, indicating the insulating nature of this interface.
From the V 1s and 2p core-level spectra, the valence of V
in LVO was found to be partially converted from V3+ to
V4+ at the interface, confirming the previous study.9 Fur-
thermore, the amount of V3+ was found to increase with
LVO layer thickness. We attribute this valence change to
the electronic reconstruction due to polarity of the layers.
FIG. 1: Schematic view of the LaAlO3/LaVO3 multilayer samples. Sample A: LaAlO3 (3ML)/LaVO3 (3ML)/LaAlO3 (30ML)/SrTiO3. Sample B: LaAlO3 (3ML)/LaVO3 (50ML)/SrTiO3. Sample C: LaVO3 (50ML)/SrTiO3.
II. EXPERIMENT
The LAO/LVO multilayer thin films were fabricated on TiO2-terminated STO(001) substrates10 using the pulsed
laser deposition (PLD) technique. An infrared heat-
ing system was used for heating the substrates. The
films were grown on the substrates at an oxygen pres-
sure of 10−6 Torr using a KrF excimer laser (λ = 248
nm) operating at 4 Hz. The laser fluency to ablate
LaVO4 polycrystalline and LAO single crystal targets
was ∼ 2.5 J/cm2. The film growth was monitored us-
ing real-time reflection high-energy electron diffraction
(RHEED). Schematic views of the fabricated thin films
are shown in Fig. 1. Sample A consisted of 3ML LVO
capped with 3ML LAO. Below the 3ML LVO, 30ML
LAO was grown, making LVO sandwiched by LAO. Sam-
ple B consisted of 50ML LVO capped with 3ML LAO.
Sample C was 50ML LVO without LAO capping lay-
ers. Details of the fabrication and characterization of the
films were described elsewhere.11 The characterization of
the electronic structure of uncapped LaVO3 thin films
by x-ray photoemission spectroscopy will be described
elsewhere.12 HX photoemission experiments were per-
formed at an undulator beamline, BL29XU, of SPring-8.
The experimental details are described in Refs. 13,14,15.
The total energy resolution was set to about 180 meV.
All the spectra were measured at room temperature. The
Fermi level (EF ) position was determined by measuring
gold spectra.
III. RESULTS AND DISCUSSION
Figure 2 shows the valence-band photoemission spectra
of the LAO/LVO multilayer samples. Figure 2 (a) shows
the entire valence-band region. Compared with the pre-
vious photoemission results,7 structures from 9 to 3 eV
are assigned to the O 2p dominant bands, and emission
from 3 eV to EF to the V 3d bands. The energy posi-
tions of the O 2p bands were almost the same in these
three samples, indicating that the band bending effect
at the interface of LAO and LVO was negligible. Fig-
FIG. 2: Valence-band photoemission spectra of the
LaAlO3/LaVO3 multilayer samples. (a) Valence-band spec-
tra over a wide energy range. (b) V 3d band region.
ure 2 (b) shows an enlarged plot of the spectra in the V
3d-band region. A Mott-Hubbard gap of LVO remained
open at the interface between LAO and LVO, indicat-
ing that this interface is insulating unlike the STO/LTO
interfaces.1,2,3,4 The line shapes of the V 3d bands were
almost the same in these three samples, except for the
energy shift in sample A. We estimated the value of the
band gap from the linear extrapolation of the rising part
of the peak as shown in Fig. 2 (b). The gap size of sample
B was almost the same (∼ 100 meV) as that of sample
C, while that of sample A was much larger (∼ 400 meV)
due to the energy shift of the V 3d bands. The origin
of the enhanced energy gap is unclear at present, but
an increase of the on-site Coulomb repulsion U in the
thin LVO layers compared to the thick LVO layers or
bulk LVO due to a decrease of dielectric screening may
explain the experimental observation.
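The gap sizes quoted above come from a linear extrapolation of the rising part of the V 3d peak down to the baseline. A minimal sketch of that procedure on synthetic data (the spectrum shape, onset position, and fit window are illustrative, not the measured ones):

```python
import numpy as np

def gap_from_rising_edge(energy, intensity, fit_window):
    """Estimate a gap edge by linearly extrapolating the rising part of a
    peak down to zero intensity, as done for the V 3d band in the text.
    energy, intensity: 1-D arrays (binding energy in eV, counts)
    fit_window: (lo, hi) energy range covering the linear rising edge
    Returns the zero crossing of the fitted line, i.e. the gap size
    relative to the Fermi level at E = 0."""
    lo, hi = fit_window
    mask = (energy >= lo) & (energy <= hi)
    slope, intercept = np.polyfit(energy[mask], intensity[mask], 1)
    return -intercept / slope

# synthetic spectrum: a V 3d-like edge whose onset sits at 0.1 eV
energy = np.linspace(-0.5, 3.0, 351)
onset = 0.1
intensity = 5.0 * np.clip(energy - onset, 0.0, None)
gap = gap_from_rising_edge(energy, intensity, fit_window=(0.3, 1.0))
print(round(gap, 3))  # recovers the 0.1 eV onset
```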
Figure 3 shows the V 1s core-level photoemission spec-
tra of the LAO/LVO multilayer samples. The V 1s spec-
tra had a main peak at 5467 eV and a satellite structure
at 5478 eV. The main peaks were not simple symmet-
ric peaks but exhibited complex line shapes. We there-
fore consider that the main peaks consisted of V3+ and
V4+ components. In sample C, there is a considerable
amount of V4+ probably due to the oxidation of the sur-
face of the uncapped LVO. A satellite structure has also
been observed in the V 1s spectrum of V2O3,16 and interpreted as a charge-transfer (CT) satellite arising from the 1s13d3L final state, where L denotes a hole in the O
2p band. Screening-derived peaks at the lower-binding-
energy side of V 1s, which have been observed in the
metallic phase of V2−xCrxO3,16,17 were not observed in
the present samples, again indicating the insulating na-
ture of these interfaces.
Figure 4 shows the O 1s and V 2p core-level photoemis-
sion spectra of the LAO/LVO multilayer samples. The
O 1s spectra consisted of single peaks without surface
contamination signal on the higher-binding-energy side,
indicating the bulk sensitivity of HX photoemission spec-
troscopy. The energy position of the O 1s peak of sample
FIG. 3: V 1s core-level photoemission spectra of the
LaAlO3/LaVO3 multilayer samples.
FIG. 4: O 1s and V 2p core-level photoemission spectra of the LaAlO3/LaVO3 multilayer samples. (a) shows the wide energy region and (b) is an enlarged plot of the V 2p3/2 spectra.
A, whose LVO layer thickness was only 3 ML, was differ-
ent from those of the rest because LAO and LVO have
different energy positions of the O 1s core levels. Fig-
ure 4 (b) shows an enlarged plot of the V 2p3/2 spectra.
Here again, the V 2p3/2 photoemission spectra showed
complex line shapes consisting of V3+ and V4+ com-
ponents, and no screening-derived peaks on the lower-
binding-energy side of V 2p3/2 were observed. The line
shapes of the V 2p3/2 spectra were very similar for sam-
ples A and B. The amount of V4+ was larger in sample
C, consistent with the case of V 1s and again shows the
effect of the oxidation of the uncapped LVO.
We have fitted the core-level spectra of samples A and
B to a Gaussian convoluted with a Lorentzian to esti-
mate the amount of V3+, V4+ and V5+ at the interface
following the procedure of Ref. 9. Figure 5 shows the
fitting results of the V 1s and V 2p3/2 core-level spec-
tra. Here, the spectra were decomposed into the V3+
and V4+ components, and the V5+ component was not
necessary. The full width at half maximum (FWHM) of
the Lorentzian has been fixed to 1.01 eV for V 1s and to
0.24 eV for V 2p3/2 according to Ref. 18. The FWHM of
the Gaussian has been chosen 0.90 eV for V 1s and 1.87
eV for V 2p3/2, reflecting the larger multiplet splitting
for V 2p than for V 1s. In Fig. 6, we summarize the ratio
of the V3+ component thus estimated, together with the
results of the emission angle (θe) dependence of the V
FIG. 5: Fitting results for the V 1s and 2p3/2 core-level spec-
tra. (a) V 1s core level of sample A (LaVO3 3ML), (b) V
2p3/2 core level of sample A (LaVO3 3ML), (c) V 1s core
level of sample B (LaVO3 50ML), (d) V 2p3/2 core level of
sample B (LaVO3 50ML).
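The decomposition described above (each valence component a Gaussian convoluted with a Lorentzian of fixed width) can be sketched as follows. The FWHM values 0.24 eV (Lorentzian) and 1.87 eV (Gaussian) are the V 2p3/2 values quoted in the text; the peak positions and the 70/30 V3+/V4+ mixture are illustrative assumptions, and the component amplitudes are recovered by linear least squares rather than the full fit used in the paper:

```python
import numpy as np

FWHM_L = 0.24   # Lorentzian FWHM for V 2p3/2 (from the text)
FWHM_G = 1.87   # Gaussian FWHM for V 2p3/2 (from the text)

def component(E, center):
    """Unit-area Gaussian-convoluted-Lorentzian line on a uniform grid E."""
    dE = E[1] - E[0]
    sigma = FWHM_G / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((E - center) / sigma) ** 2)
    x = E - E[E.size // 2]                 # Lorentzian kernel centred on grid
    gl = FWHM_L / 2.0
    lorentz = (gl / np.pi) / (x ** 2 + gl ** 2)
    prof = np.convolve(gauss, lorentz, mode="same") * dE
    return prof / (prof.sum() * dE)        # normalise to unit area

# synthetic V 2p3/2 spectrum: 70% V3+ + 30% V4+ (centres are assumed)
E = np.linspace(512.0, 520.0, 801)
c_v3, c_v4 = component(E, 515.7), component(E, 517.2)
spectrum = 0.7 * c_v3 + 0.3 * c_v4

# decompose: the amplitudes enter linearly, so least squares suffices here
A = np.column_stack([c_v3, c_v4])
a3, a4 = np.linalg.lstsq(A, spectrum, rcond=None)[0]
print(round(a4 / (a3 + a4), 3))            # recovered V4+ fraction
```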
FIG. 6: Ratio of V4+ or V4+ + V5+ determined under various experimental conditions using hard x-rays and soft x-rays.9 (a) Sample A (3ML LaVO3), (b) Sample B (50ML LaVO3). Here, SX denotes soft x-ray photoemission and HX hard x-ray photoemission. In the case of SX, the values in parentheses denote the values of θe.
2p core-level SX photoemission spectra measured using a
laboratory SX source.9
In order to interpret those results qualitatively, first we
have to know the probing depth of photoemission spec-
troscopy under various measurement conditions. From
the kinetic energies of photoelectrons, the mean free
paths of the respective measurements are obtained as described in Refs. 19 and 20. When we measure V 2p3/2 spectra
with the Mg Kα line (hν = 1253.6 eV), the kinetic en-
ergy of photoelectrons is about 700 eV, and the mean
free path is estimated to be about 10 Å. Likewise, we
also estimate the mean free path in the HX case. The
values are summarized in Table I. In the SX case, these
values are 10 cos θe Å. One can obtain the most surface-sensitive spectra under the condition of SX with θe = 70◦ [denoted by SX(70◦)] and the most bulk-sensitive spectra
for HX measurements of the V 2p3/2 core level [denoted
TABLE I: Mean free path of photoelectrons (in units of Å)
SX (70◦): 3.4 | SX (55◦): 5.7 | SX (30◦): 8.7 | HX (V 1s): 30 | HX (V 2p): 60
FIG. 7: Models for the V valence distributions in the
LaAlO3/LaVO3 multilayer samples. A: LaVO3 3ML. A-1 is
an asymmetric model, whereas A-2 is a symmetric model. B:
LaVO3 50ML.
by HX(V 2p)]. From Fig. 6 and Table I, one observes a
larger amount of V4+ components under more surface-
sensitive conditions. These results demonstrate that the
valence of V in LVO is partially converted from V3+ to
V4+ at the interface.
In order to reproduce the present experimental result
and the result reported in Ref. 9 (shown in Fig. 6), we
propose a model of the V valence distribution at the in-
terface as shown in Fig. 7. For sample A, we consider two
models, that is, an asymmetric model and a symmetric
model. In the asymmetric model (A-1), no symmetry is
assumed between the first and the third layers. As shown
in Fig. 6, the best fit result was obtained for the valence
distribution that 70 % of the first layer is V4+ and there
are no V4+ in the second and third layers, assuming the
above-mentioned mean free paths in Table I and expo-
nential decrease of the number of photoelectrons. In the
symmetric model (A-2), it is assumed that the electronic
structures are symmetric between the first and the third
layers. The best fit was obtained when 50 % of the first
and third layers are V4+. In sample B, a model (B) where
85 % of the first layer and 50 % of the second layer are
V4+ best reproduced the experimental result. As shown
in Fig. 6, for the 3ML case, the model (A-2) did not
reproduce the experimental results well compared to (A-
1), which demonstrates that the valence distribution of
V was highly asymmetric at these interfaces.
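The layer models above weight each V monolayer by an exponentially decaying signal contribution. A sketch of that weighting, assuming a monolayer thickness of about 4 Å (an assumption; the text does not quote one) and the mean free paths of Table I:

```python
import numpy as np

def apparent_v4_fraction(f4_layers, lam, cap_ml=3, ml=4.0):
    """Signal-weighted V4+ fraction for a stack of LVO monolayers buried
    under `cap_ml` LAO monolayers, assuming an exponential decrease of the
    photoelectron signal with depth (mean free path lam, in Å).
    f4_layers[0] is the topmost LVO monolayer; ml is an assumed
    monolayer thickness in Å."""
    depths = ml * (cap_ml + np.arange(len(f4_layers)) + 0.5)
    w = np.exp(-depths / lam)   # signal weight of each V-containing layer
    return float(np.sum(w * np.asarray(f4_layers)) / np.sum(w))

# model A-1 of the text: 70% V4+ in the first LVO layer, none below
model_a1 = [0.70, 0.0, 0.0]
for lam in (3.4, 30.0, 60.0):   # SX(70 deg), HX(V 1s), HX(V 2p) of Table I
    print(lam, round(apparent_v4_fraction(model_a1, lam), 3))
```

More bulk-sensitive conditions (larger lam) yield a smaller apparent V4+ fraction, which is the trend seen in Fig. 6.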
The origin of this highly asymmetric valence change
from V3+ to V4+ at the interfaces can be interpreted
in two ways. One possible scenario is a simple chemical
effect during the fabrication process of the PLD technique. The topmost LVO layer spends a longer time before the next deposition of LAO than the rest of the LVO layers, and therefore the oxidation process may proceed more easily
at the topmost layer. In this scenario, if we make sam-
ples under different experimental conditions, especially
under different oxygen pressures, the amount of V4+ at
the interface may change greatly. In the other scenario,
we consider that the polarity of the LAO/LVO multilay-
ers plays an essential role. In the present samples, both
the LAO and LVO layers are polar and do not consist of charge-neutral layers; that is, they consist of an alternating stack of LaO+ and AlO2− or VO2− layers. As recently
discussed by Nakagawa et al.,23 electronic reconstruction
occurs during the fabrication of the polar layers in or-
der to prevent the divergence of Madelung potential, i.e.,
so-called polar catastrophe.24 We consider that the elec-
tronic reconstruction occurs in the present samples, and
that the valence change of V at the interface is a result of
this reconstruction. This effect explains 0.5 ML of V4+,
but it cannot account for the total amount of V4+ exceeding 0.5 ML; we must also consider chemical effects, namely that V atoms are relatively easily oxidized at the
topmost layer. Similar studies on samples with different
termination layers will be necessary to test this scenario.
Recently, Huijben et al.25 studied STO/LAO multilayers
and found a critical thickness of LAO and STO, below
which a decrease of the interface conductivity and car-
rier density occurs. Therefore, changing the numbers of
LAO capping layers may also change the valence of V at
the interface. Further systematic studies, including other
systems like LTO/STO1,2,3,4 and LAO/STO23,25,26, will
reveal the origin of the valence change at the interface.
IV. CONCLUSION
We have investigated the electronic structure of the
multilayers composed of a band insulator LaAlO3 and a
Mott insulator LaVO3 (LVO) by means of HX photoe-
mission spectroscopy. The Mott-Hubbard gap of LVO
remained open at the interface, indicating that the inter-
face is insulating and the delocalization of 3d electrons
does not occur unlike the LaTiO3/SrTiO3 multilayers.
From the V 1s and 2p core-level photoemission intensi-
ties, we found that the valence of V in LVO was partially
converted from V3+ to V4+ at the interface only on the
top side of the LVO layer and that the amount of V4+
increased with LVO layer thickness. We constructed a
model for the V valence redistribution in order to ex-
plain the experimental result and found that the V4+ is
preferentially distributed on the top of the LVO layers.
We suggest that the electronic reconstruction to elimi-
nate polar catastrophe may be the origin of the highly
asymmetric valence change at the interfaces.
V. ACKNOWLEDGMENTS
The HX photoemission experiments reported here have
benefited tremendously from the efforts of Dr. D. Miwa
of the coherent x-ray optics laboratory RIKEN/SPring-
8, Japan and we dedicate this work to him. This work
was supported by a Grant-in-Aid for Scientific Research
(A16204024) from the Japan Society for the Promotion
of Science (JSPS) and a Grant-in-Aid for Scientific Re-
search in Priority Areas “Invention of Anomalous Quan-
tum Materials” from the Ministry of Education, Culture,
Sports, Science and Technology. H. W. acknowledges fi-
nancial support from JSPS. Y. H. acknowledges support
from QPEC, Graduate School of Engineering, University
of Tokyo.
∗ Electronic address: [email protected];
URL: http://www.geocities.jp/qxbqd097/index2.htm;
Present address: Department of Physics and Astron-
omy, University of British Columbia, Vancouver, British
Columbia V6T-1Z1, Canada
1 A. Ohtomo, D. A. Muller, J. L. Grazul, and H. Y. Hwang,
Nature 419, 378 (2002).
2 K. Shibuya, T. Ohnishi, M. Kawasaki, H. Koinuma, and
M. Lippmaa, Jpn. J. Appl. Phys. 43, L1178 (2004).
3 S. Okamoto and A. J. Millis, Nature 428, 630 (2004).
4 M. Takizawa, H. Wadati, K. Tanaka, M. Hashimoto, T.
Yoshida, A. Fujimori, A. Chikamatsu, H. Kumigashira, M.
Oshima, K. Shibuya, T. Mihara, T. Ohnishi, M. Lippmaa,
M. Kawasaki, H. Koinuma, S. Okamoto, and A. J. Millis,
Phys. Rev. Lett. 97, 057601 (2006).
5 T. Arima, Y. Tokura, and J. B. Torrance, Phys. Rev. B
48, 17006 (1993).
6 S. Miyasaka, Y. Okimoto, M. Iwama, and Y. Tokura, Phys.
Rev. B 68, 100406(R) (2003).
7 K. Maiti and D. D. Sarma, Phys. Rev. B 61, 2525 (2000).
8 S.-G. Lim, S. Kriventsov, T. N. Jackson, J. H. Haeni,
D. G. Schlom, A. M. Balbashov, R. Uecker, P. Reiche,
J. L. Freeouf, and G. Lucovsky, J. Appl. Phys. 91, 4500
(2002).
9 Y. Hotta, H. Wadati, A. Fujimori, T. Susaki, and H. Y.
Hwang, Appl. Phys. Lett. 89, 251916 (2006).
10 M. Kawasaki, K. Takahashi, T. Maeda, R. Tsuchiya, M.
Shinohara, O. Ishihara, T. Yonezawa, M. Yoshimoto, and
H. Koinuma, Science 266, 1540 (1994).
11 Y. Hotta, Y. Mukunoki, T. Susaki, H. Y. Hwang, L. Fit-
ting, and D. A. Muller, Appl. Phys. Lett. 89, 031918
(2006).
12 H. Wadati, Y. Hotta, M. Takizawa, A. Fujimori, T. Susaki,
and H. Y. Hwang (unpublished).
13 K. Tamasaku, Y. Tanaka, M. Yabashi, H. Yamazaki, N.
Kawamura, M. Suzuki, and T. Ishikawa, Nucl. Instrum.
Methods A 467/468, 686 (2001).
14 T. Ishikawa, K. Tamasaku, and M. Yabashi, Nucl. Instrum.
Methods A 547, 42 (2005).
15 Y. Takata, M. Yabashi, K. Tamasaku, Y. Nishino, D.
Miwa, T. Ishikawa, E. Ikenaga, K. Horiba, S. Shin, M.
Arita, K. Shimada, H. Namatame, M. Taniguchi, H. No-
hira, T. Hattori, S. Sodergren, B. Wannberg, and K.
Kobayashi, Nucl. Instrum. Methods A 547, 50 (2005).
16 N. Kamakura, M. Taguchi, A. Chainani, Y. Takata, K.
Horiba, K. Yamamoto, K. Tamasaku, Y. Nishino, D. Miwa,
E. Ikenaga, M. Awaji, A. Takeuchi, H. Ohashi, Y. Senba,
H. Namatame, M. Taniguchi, T. Ishikawa, K. Kobayashi,
and S. Shin, Europhys. Lett. 68, 557 (2004).
17 M. Taguchi, A. Chainani, N. Kamakura, K. Horiba, Y.
Takata, M. Yabashi, K. Tamasaku, Y. Nishino, D. Miwa,
T. Ishikawa, S. Shin, E. Ikenaga, T. Yokoya, K. Kobayashi,
T. Mochiku, K. Hirata, and K. Motoya, Phys. Rev. B 71,
155102 (2005).
18 M. O. Krause and J. H. Oliver, J. Phys. Chem. Ref. Data
8, 329 (1979).
19 S. Tanuma, C. J. Powell, and D. R. Penn, Surf. Sci. 192,
L849 (1987).
20 The mean free paths in HX photoemission were recently
determined experimentally as described in Refs. 21,22.
21 C. Dallera, L. Duo, L. Braicovich, G. Panaccione, G. Pao-
licelli, B. Cowie, and J. Zegenhagen, Appl. Phys. Lett. 85,
4532 (2004).
22 M. Sacchi, F. Offi, P. Torelli, A. Fondacaro, C. Spezzani,
M. Cautero, G. Cautero, S. Huotari, M. Grioni, R. De-
launay, M. Fabrizioli, G. Vanko, G. Monaco, G. Paolicelli,
G. Stefani, and G. Panaccione, Phys. Rev. B 71, 155117
(2005).
23 N. Nakagawa, H. Y. Hwang, and D. A. Muller, Nature
Materials 5, 204 (2006).
24 W. A. Harrison, E. A. Kraut, J. R. Waldrop, and R. W.
Grant, Phys. Rev. B 18, 4402 (1978).
25 M. Huijben, G. Rijnders, D. H. A. Blank, S. Bals, S. V.
Aert, J. Verbeeck, G. V. Tendeloo, A. Brinkman, and H.
Hilgenkamp, Nature Materials 5, 556 (2006).
26 A. Ohtomo and H. Y. Hwang, Nature 427, 423 (2004).
|
0704.1838 | Performance Analysis of the IEEE 802.11e Enhanced Distributed
Coordination Function using Cycle Time Approach | Performance Analysis of the IEEE 802.11e
Enhanced Distributed Coordination Function using
Cycle Time Approach †
Inanc Inan, Feyza Keceli, and Ender Ayanoglu
Center for Pervasive Communications and Computing
Department of Electrical Engineering and Computer Science
The Henry Samueli School of Engineering
University of California, Irvine, 92697-2625
Email: {iinan, fkeceli, ayanoglu}@uci.edu
Abstract
The recently ratified IEEE 802.11e standard defines the Enhanced Distributed Channel Access (EDCA) function
for Quality-of-Service (QoS) provisioning in the Wireless Local Area Networks (WLANs). The EDCA uses Carrier
Sense Multiple Access with Collision Avoidance (CSMA/CA) and slotted Binary Exponential Backoff (BEB)
mechanism. We present a simple mathematical analysis framework for the EDCA function. Our analysis considers
the fact that the distributed random access systems exhibit cyclic behavior where each station successfully transmits
a packet in a cycle. Our analysis shows that an AC-specific cycle time exists for the EDCA function. Validating
the theoretical results via simulations, we show that the proposed analysis accurately captures EDCA saturation
performance in terms of average throughput, medium access delay, and packet loss ratio. The cycle time analysis
is a simple and insightful substitute for previously proposed more complex EDCA models.
I. INTRODUCTION
The IEEE 802.11e standard [1] specifies the Hybrid Coordination Function (HCF) which enables
prioritized and parameterized Quality-of-Service (QoS) services at the MAC layer. The HCF combines
a distributed contention-based channel access mechanism, referred to as Enhanced Distributed Channel
Access (EDCA), and a centralized polling-based channel access mechanism, referred to as HCF Con-
trolled Channel Access (HCCA). We confine our analysis to the EDCA scheme, which uses Carrier
† This work is supported by the Center for Pervasive Communications and Computing, and by Natural Science Foundation under Grant No.
0434928. Any opinions, findings, and conclusions or recommendations expressed in this material are those of authors and do not necessarily
reflect the view of the Natural Science Foundation.
Sense Multiple Access with Collision Avoidance (CSMA/CA) and slotted Binary Exponential Backoff
(BEB) mechanism as the basic access method. The EDCA defines multiple Access Categories (AC) with
AC-specific Contention Window (CW) sizes, Arbitration Interframe Space (AIFS) values, and Transmit
Opportunity (TXOP) limits to support MAC-level QoS and prioritization.
We evaluate the EDCA performance for the saturation (asymptotic) case. The saturation analysis
provides the limits reached by the system throughput and protocol service time in stable conditions
when every station has always backlogged data ready to transmit in its buffer. The analysis of the
saturation provides in-depth understanding and insights into the random access schemes and the effects
of different contention parameters on the performance. The results of such analysis can be employed in
access parameter adaptation or in a call admission control algorithm.
Our analysis is based on the fact that a random access system exhibits cyclic behavior. A cycle time is
defined as the duration in which an arbitrary tagged user successfully transmits one packet on average [2].
We will derive the explicit mathematical expression of the AC-specific EDCA cycle time. The derivation
considers the AIFS and CW differentiation by employing a simple average collision probability analysis.
We will use the EDCA cycle time to predict the first moments of the saturation throughput, the service
time, and the packet loss probability. We will show that the results obtained using the cycle time model
closely follow the accurate predictions of the previously proposed more complex analytical models and
simulation results. Our cycle time analysis can serve as a simple and practical alternative model for EDCA
saturation throughput analysis.
II. EDCA OVERVIEW
The IEEE 802.11e EDCA is a QoS extension of IEEE 802.11 Distributed Coordination Function (DCF).
The major enhancement to support QoS is that EDCA differentiates packets using different priorities and
maps them to specific ACs that are buffered in separate queues at a station. Each ACi within a station
(0 ≤ i ≤ imax, imax = 3 in [1]) having its own EDCA parameters contends for the channel independently
of the others. Following the convention of [1], the larger the index i is, the higher the priority of the AC
is. Levels of services are provided through different assignments of the AC-specific EDCA parameters;
AIFS, CW, and TXOP limits.
If there is a packet ready for transmission in the MAC queue of an AC, the EDCA function must sense
the channel to be idle for a complete AIFS before it can start the transmission. The AIFS of an AC is
determined by using the MAC Information Base (MIB) parameters as AIFS = SIFS+AIFSN ×Tslot,
where AIFSN is the AC-specific AIFS number, SIFS is the length of the Short Interframe Space, and
Tslot is the duration of a time slot.
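As a quick numeric illustration of AIFS = SIFS + AIFSN × Tslot; the PHY timing values assume an 802.11a-like OFDM PHY and the AIFSN values are the default 802.11e per-AC assignments, both assumptions here rather than values taken from this paper:

```python
SIFS = 16e-6    # s, assumed 802.11a-like PHY timing
T_SLOT = 9e-6   # s

def aifs(aifsn):
    """AIFS = SIFS + AIFSN * Tslot, per the text."""
    return SIFS + aifsn * T_SLOT

# default 802.11e AIFSN per AC: smaller AIFSN means higher priority
for name, n in [("AC_VO", 2), ("AC_VI", 2), ("AC_BE", 3), ("AC_BK", 7)]:
    print(f"{name}: AIFSN={n}, AIFS={aifs(n) * 1e6:.0f} us")
```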
If the channel is idle when the first packet arrives at the AC queue, the packet can be directly transmitted
as soon as the channel is sensed to be idle for AIFS. Otherwise, a backoff procedure is completed following
the completion of AIFS before the transmission of this packet. A uniformly distributed random integer,
namely a backoff value, is selected from the range [0,W ]. The backoff counter is decremented at the slot
boundary if the previous time slot is idle. Should the channel be sensed busy at any time slot during AIFS
or backoff, the backoff procedure is suspended at the current backoff value. The backoff resumes as soon
as the channel is sensed to be idle for AIFS again. When the backoff counter reaches zero, the packet is
transmitted in the following slot.
The value of W depends on the number of retransmissions the current packet experienced. The initial
value of W is set to the AC-specific CWmin. If the transmitter cannot receive an Acknowledgment (ACK)
packet from the receiver in a timeout interval, the transmission is labeled as unsuccessful and the packet
is scheduled for retransmission. At each unsuccessful transmission, the value of W is doubled until the
maximum AC-specific CWmax limit is reached. The value of W is reset to the AC-specific CWmin if the transmission is successful, or if the retry limit is reached and the packet is dropped.
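The doubling of W up to CWmax can be sketched as below, assuming the usual 2^k − 1 form of contention-window values (15, 31, 63, ...); the CWmin/CWmax numbers are illustrative:

```python
import random

def cw_after_failures(cw_min, cw_max, failures):
    """Contention window after `failures` unsuccessful transmissions:
    W starts at CWmin and is doubled (as 2*(W+1)-1) on each failure,
    capped at CWmax. A success or a packet drop resets W to CWmin."""
    w = cw_min
    for _ in range(failures):
        w = min(2 * (w + 1) - 1, cw_max)
    return w

def draw_backoff(w):
    """Uniformly distributed backoff value from [0, W]."""
    return random.randint(0, w)

print(cw_after_failures(15, 1023, 3))    # 15 -> 31 -> 63 -> 127
print(cw_after_failures(15, 1023, 10))   # capped at CWmax
```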
The higher priority ACs are assigned smaller AIFSN. Therefore, the higher priority ACs can either
transmit or decrement their backoff counters while lower priority ACs are still waiting in AIFS. This
results in higher priority ACs facing a lower average probability of collision and relatively faster progress
through backoff slots. Moreover, in EDCA, the ACs with higher priority may select backoff values from
a comparably smaller CW range. This approach prioritizes the access since a smaller CW value means a
smaller backoff delay before the transmission.
Upon gaining the access to the medium, each AC may carry out multiple frame exchange sequences as
long as the total access duration does not go over a TXOP limit. Within a TXOP, the transmissions are
separated by SIFS. Multiple frame transmissions in a TXOP can reduce the overhead due to contention.
A TXOP limit of zero corresponds to only one frame exchange per access.
An internal (virtual) collision within a station is handled by granting the access to the AC with the
highest priority. The ACs with lower priority that suffer from a virtual collision run the collision procedure
as if an external collision had occurred.
III. RELATED WORK
In this section, we provide a brief summary of the studies in the literature on the theoretical DCF and
EDCA function saturation performance analysis.
Three major saturation performance models have been proposed for DCF; i) assuming constant collision
probability for each station, Bianchi [3] developed a simple Discrete-Time Markov Chain (DTMC) and the
saturation throughput is obtained by applying regenerative analysis to a generic slot time, ii) Cali et al. [4]
employed renewal theory to analyze a p-persistent variant of DCF with persistence factor p derived from
the CW, and iii) Tay et al. [5] instead used an average value mathematical method to model DCF backoff
procedure and to calculate the average number of interruptions that the backoff timer experiences. Having
the common assumption of slot homogeneity (for an arbitrary station, constant collision or transmission
probability at an arbitrary slot), these models define all different renewal cycles all of which lead to
accurate saturation performance analysis.
These major methods (especially [3]) are modified by several researchers to include the extra features of
the EDCA function in the saturation analysis. Xiao [6] extended [3] to analyze only the CW differentiation.
Kong et al. [7] took AIFS differentiation into account. On the other hand, these EDCA extensions miss the
treatment of varying collision probabilities at different AIFS slots due to varying number of contending
stations. Robinson et al. [8] proposed an average analysis on the collision probability for different
contention zones during AIFS. Hui et al. [9] unified several major approaches into one approximate average
model taking into account varying collision probability in different backoff subperiods (corresponds to
contention zones in [8]). Zhu et al. [10] proposed another analytical EDCA Markov model averaging the
transition probabilities based on the number and the parameters of high priority flows. Inan et al. [11]
proposed a 3-dimensional DTMC which provides accurate treatment of AIFS and CW differentiation.
Another 3-dimensional DTMC is proposed by Tao et al. [12] in which the third dimension models the
state of backoff slots between successive transmission periods. The fact that the number of idle slots
between successive transmissions can be at most the minimum of AC-specific CWmax values is considered.
Independently, Zhao et al. [13] had previously proposed a similar model for the heterogeneous case where
each station has traffic of only one AC. Banchs et al. [14] proposed another model which considers varying
collision probability among different AIFS slots due to a variable number of stations. Lin et al. [15]
extended [5] in order to carry out mean value analysis for approximating AIFS and CW differentiation.
Our approach is based on the observation that the transmission behavior in the 802.11 WLAN follows a
pattern of periodic cycles. Previously, Medepalli et al. [2] provided explicit expressions for average DCF
cycle time and system throughput. Similarly, Kuo et al. [16] calculated the EDCA transmission cycle
assuming constant collision probability for any traffic class. On the other hand, such an assumption leads
to analytical inaccuracies [7]-[15]. Our main contribution is to incorporate an accurate AIFS and CW differentiation calculation into the EDCA cycle time analysis. We show that the cyclic behavior is observed
on a per AC basis in the EDCA. To maintain the simplicity of the cycle time analysis, we employ averaging
on the AC-specific collision probability. The comparison with more complex and detailed theoretical and
simulation models reveals that the analytical accuracy is preserved.
IV. EDCA CYCLE TIME ANALYSIS
In this section, we will first derive the AC-specific average collision probability. Next, we will calculate
the AC-specific average cycle time. Finally, we will relate the average cycle time and the average collision
probability to the average normalized throughput, EDCA service time, and packet loss probability.
A. AC-specific Average Collision Probability
The difference in AIFS of each AC in EDCA creates the so-called contention zones or periods as shown
in Fig. 1 [8],[9]. In each contention zone, the number of contending stations may vary. We employ an
average analysis on the AC-specific collision probability rather than calculating it separately for different
AIFS and backoff slots as in [11]-[14]. We calculate the AC-specific collision probability according to
the long term occupancy of AIFS and backoff slots.
We define pci,x as the conditional probability that ACi experiences either an external or an internal
collision given that it has observed the medium idle for AIFSx and transmits in the current slot (note
AIFSx ≥ AIFSi should hold). For the following, in order to be consistent with the notation of [1],
we assume AIFS0 ≥ AIFS1 ≥ AIFS2 ≥ AIFS3. Let di = AIFSNi − AIFSN3. Following the slot
homogeneity assumption of [3], assume that each ACi transmits with constant probability, τi. Also, let
the total number ACi flows be Ni. Then, for the heterogeneous scenario in which each station has only
one AC
p_{c_{i,x}} = 1 - \frac{\prod_{i': d_{i'} \le d_x} (1 - \tau_{i'})^{N_{i'}}}{1 - \tau_i}.   (1)
We only formulate the situation when there is only one AC per station, therefore no internal collisions
can occur. Note that this simplification does not cause any loss of generality, because the proposed model
can be extended for the case of higher number of ACs per station as in [7],[11].
We use the Markov chain shown in Fig. 2 to find the long term occupancy of the contention zones.
Each state represents the nth backoff slot after the completion of the AIFS3 idle interval following a
transmission period. The Markov chain model uses the fact that a backoff slot is reached if and only if
no transmission occurs in the previous slot. Moreover, the number of states is limited by the maximum
idle time between two successive transmissions which is Wmin = min(CWi,max) for a saturated scenario.
The probability that at least one transmission occurs in a backoff slot in contention zone x is
p_{tr_x} = 1 - \prod_{i': d_{i'} \le d_x} (1 - \tau_{i'})^{N_{i'}}.   (2)
Note that the contention zones are labeled with x regarding the indices of d. In the case of an equality in
AIFS values of different ACs, the contention zone is labeled with the index of AC with higher priority.
Given the state transition probabilities as in Fig. 2, the long term occupancy of the backoff slots b′n can
be obtained from the steady-state solution of the Markov chain. Then, the AC-specific average collision
probability pci is found by weighing zone specific collision probabilities pci,x according to the long term
occupancy of contention zones (thus backoff slots)
p_{c_i} = \frac{\sum_{n=d_i+1}^{W_{min}} p_{c_{i,x}} b'_n}{\sum_{n=d_i+1}^{W_{min}} b'_n}   (3)
where x = \max\{ y \mid d_y = \max(d_z \mid d_z \le n) \},
which shows x is assigned the highest index value within
a set of ACs that have AIFSN smaller than or equal to n+AIFSN3. This ensures that at backoff slot n,
ACi has observed the medium idle for AIFSx. Therefore, the calculation in (3) fits into the definition of
pci,x .
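The averaging described above can be sketched numerically: the occupancy b'_n follows from the fact that backoff slot n+1 is reached only if no transmission occurs at slot n, and the zone-specific collision probabilities are then weighted by b'_n. All parameter values below (τ, N, d, Wmin) are illustrative, and one AC per station is assumed:

```python
import numpy as np

def avg_collision_probability(taus, Ns, ds, w_min):
    """Per-AC average collision probability via Eqs. (1)-(3).
    taus[i], Ns[i], ds[i]: transmission probability, station count and
    AIFSN offset d_i of AC_i; one AC per station is assumed."""
    acs = range(len(taus))

    def p_no_tx(n):                     # probability backoff slot n is idle
        p = 1.0
        for i in acs:
            if ds[i] <= n - 1:          # AC_i contends in this zone
                p *= (1.0 - taus[i]) ** Ns[i]
        return p

    # occupancy of slots n = 1..w_min: slot n+1 is reached only if
    # no transmission occurred at slot n
    b = np.zeros(w_min + 1)
    b[1] = 1.0
    for n in range(1, w_min):
        b[n + 1] = b[n] * p_no_tx(n)
    b /= b.sum()

    pc = []
    for i in acs:
        num = den = 0.0
        for n in range(ds[i] + 1, w_min + 1):
            pci_n = 1.0 - p_no_tx(n) / (1.0 - taus[i])   # Eq. (1)
            num += pci_n * b[n]
            den += b[n]
        pc.append(num / den)                              # Eq. (3)
    return pc

# two ACs, 5 stations each: high priority (d=0) vs low priority (d=2)
pc_hi, pc_lo = avg_collision_probability([0.05, 0.05], [5, 5], [0, 2], 32)
print(round(pc_hi, 4), round(pc_lo, 4))
```

The high-priority AC sees a lower average collision probability because it also contends in the early slots where the low-priority AC is still waiting in AIFS.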
B. AC-Specific Average Cycle Time
Intuitively, it can be seen that each user transmitting at the same AC has equal cycle time, while the
cycle time may differ among ACs. Our analysis will also mathematically show this is the case. Let Ei[tcyc]
be average cycle time for a tagged ACi user. Ei[tcyc] can be calculated as the sum of average duration for
i) the successful transmissions, Ei[tsuc], ii) the collisions, Ei[tcol], and iii) the idle slots, Ei[tidle] in one
cycle.
In order to calculate the average time spent on successful transmissions during an ACi cycle time, we
should find the expected number of total successful transmissions between two successful transmissions
of ACi. Let Qi represent this random variable. Also, let γi be the probability that the transmitted packet
belongs to an arbitrary user from ACi given that the transmission is successful. Then,
γi = [ Σn=di+1..Wmin b′n psi,n / Ni ] / [ Σj Σn=dj+1..Wmin b′n psj,n ] (4)
where
psi,n = Ni τi (1 − τi)^(Ni−1) ∏i′≠i: di′≤n−1 (1 − τi′)^Ni′, if n ≥ di + 1;
psi,n = 0, if n < di + 1. (5)
Then, the Probability Mass Function (PMF) of Qi is

Pr(Qi = k) = γi (1 − γi)^k, k ≥ 0. (6)
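The PMF in (6) is geometric, so its mean is (1 − γi)/γi, the value inserted into (7) below. A quick numerical check (γi = 0.2 is an arbitrary illustrative value; the sum is truncated where the tail is negligible):

```python
# Numerical check of the mean of the geometric PMF in Eq. (6):
# E[Q_i] = sum_k k * gamma * (1-gamma)^k = (1-gamma)/gamma.
gamma = 0.2
mean = sum(k * gamma * (1 - gamma) ** k for k in range(2000))
assert abs(mean - (1 - gamma) / gamma) < 1e-6
```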
We can calculate expected number of successful transmissions of any ACj during the cycle time of
ACi, STj,i, as
STj,i = Nj γj E[Qi] / (1 − γi). (7)
Inserting E[Qi] = (1 − γi)/γi in (7), our intuition that each user from ACi can transmit successfully
once on average during the cycle time of another ACi user, i.e., STi,i = Ni, is confirmed. Therefore, the
average cycle time of any user belonging to the same AC is equal in a heterogeneous scenario where each
station runs only one AC. Including the own successful packet transmission time of tagged ACi user in
Ei[tsuc], we find
Ei[tsuc] = Σj STj,i Tsj (8)
where Tsj is defined as the time required for a successful packet exchange sequence. Tsj will be derived
in (16).
To obtain Ei[tcol], we need to calculate the average number of users involved in a collision, Ncn, at the nth slot after the last busy time, for given Ni and τi, ∀i. Let Yn denote the total number of users transmitting at the nth slot after the last busy time. Yn is the sum of Binomial(Ni, τi) random variables, ∀i : di ≤ n − 1. Employing simple probability theory, we can calculate Ncn = E[Yn | Yn ≥ 2]. After some
simplification,
Ncn = [ Σi: di≤n−1 Ni τi − Σi: di≤n−1 psi,n ] / [ 1 − ∏i: di≤n−1 (1 − τi)^Ni − Σi: di≤n−1 psi,n ]. (9)
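The conditional-expectation identity behind the Ncn expression above can be verified numerically for the single-AC case: E[Y | Y ≥ 2] computed directly from the binomial PMF must equal (Nτ − ps) / (1 − (1−τ)^N − ps), with ps = Nτ(1−τ)^(N−1). The values N = 10, τ = 0.1 below are arbitrary illustrative choices.

```python
# Check E[Y | Y >= 2] against the closed form used in Eq. (9), single AC.
from math import comb

N, tau = 10, 0.1
pmf = [comb(N, k) * tau**k * (1 - tau) ** (N - k) for k in range(N + 1)]
direct = sum(k * pmf[k] for k in range(2, N + 1)) / sum(pmf[2:])

p_s = N * tau * (1 - tau) ** (N - 1)          # prob. exactly one transmits
formula = (N * tau - p_s) / (1 - (1 - tau) ** N - p_s)
assert abs(direct - formula) < 1e-12
```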
If we let the average number of users involved in a collision at an arbitrary backoff slot be Nc, then
Nc = Σn b′n Ncn. (10)
We can also calculate the expected number of collisions that an ACj user experiences during the cycle
time of an ACi, CTj,i, as
CTj,i = [ pcj / (1 − pcj) ] STj,i. (11)
Then, defining Tcj as the time wasted in a collision period (to be derived in (17)),
Ei[tcol] = Σj CTj,i Tcj. (12)
Given pci , we can calculate the expected number of backoff slots Ei[tbo] that ACi waits before attempting
a transmission. Let Wi,k be the CW size of ACi at backoff stage k [11]. Note that, when the retry limit
ri is reached, any packet is discarded. Therefore, another Ei[tbo] passes between two transmissions with probability (pci)^ri. Averaging over the backoff stages,

Ei[tbo] = Σk=1..ri [ (pci)^(k−1) (1 − pci) / (1 − (pci)^ri) ] (Wi,k − 1)/2. (13)
Noticing that between two successful transmissions, ACi also experiences CTi,i collisions,
Ei[tidle] = Ei[tbo](CTi,i/Ni + 1)tslot. (14)
As shown in [9], the transmission probability of a user using ACi is

τi = 1 / (Ei[tbo] + 1). (15)
Note that, in [9], it is proven that the mean value analysis for the average transmission probability as
in (15) matches the Markov analysis of [3].
The fixed-point equations (1)-(15) can numerically be solved for τi and pci , ∀i. Then, each component
of the average cycle time for ACi, ∀i, can be calculated using (4)-(14).
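The fixed-point solution referred to above can be sketched as an iteration between the per-AC transmission probabilities τi and collision probabilities pci. In the following Python sketch, collision_prob() is a deliberately simplified placeholder for the Markov-chain computation of Section IV-A, and E_tbo() a placeholder for Eq. (13) with doubling contention windows; the CW values and retry limit are assumed, not taken from the paper.

```python
# Hedged sketch of the fixed-point solution of Eqs. (1)-(15).
def collision_prob(tau, N):
    # Placeholder: collision seen by a tagged user if any other station transmits.
    total = sum(N)
    return [1.0 - (1 - t) ** (total - 1) for t in tau]

def E_tbo(pc, W=(16, 32, 64), r=7):
    # Placeholder for Eq. (13): stage-averaged mean backoff with doubling CW.
    out = []
    for p in pc:
        num = sum(p ** k * (1 - p) * (W[min(k, len(W) - 1)] - 1) / 2
                  for k in range(r))
        out.append(num / (1 - p ** r))
    return out

def solve(N, iters=200):
    tau = [0.1] * len(N)                       # initial guess
    for _ in range(iters):
        pc = collision_prob(tau, N)
        tau = [1.0 / (b + 1) for b in E_tbo(pc)]   # Eq. (15)
    return tau, pc

tau, pc = solve([10, 10])
```

With symmetric inputs the iteration yields identical τi for both ACs, mirroring the symmetry of the model.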
C. Performance Analysis
Let Tpi be the average payload transmission time for ACi (Tpi includes the transmission time of MAC
and PHY headers), δ be the propagation delay, Tack be the time required for acknowledgment packet (ACK)
transmission. Then, for the basic access scheme, we define the time spent in a successful transmission
Tsi and a collision Tci for any ACi as
Tsi = Tpi + δ + SIFS + Tack + δ + AIFSi (16)
Tci = Tp∗i + ACK Timeout + AIFSi (17)
where Tp∗i is the average transmission time of the longest packet payload involved in a collision [3]. For simplicity, we assume the packet size to be equal for any AC; then Tp∗i = Tpi. Since it is not explicitly specified in the standards, we set the ACK Timeout, using the Extended Inter Frame Space (EIFS), as EIFSi − AIFSi.
Note that the extensions of (16) and (17) for the RTS/CTS scheme are straightforward [3].
The average cycle time of an AC represents the renewal cycle for each AC. Then, the normalized
throughput of ACi is defined as the successfully transmitted information per renewal cycle:

Ni Tpi / ( Ei[tsuc] + Ei[tcol] + Ei[tidle] ). (18)
The AC-specific cycle time is directly related but not equal to the mean protocol service time. By
definition, the cycle time is the duration between successful transmissions. We define the average protocol
service time such that it also considers the service time of packets which are dropped due to retry limit.
On the average, 1/pi,drop service intervals correspond to 1/pi,drop − 1 cycles. Therefore, the mean service
time µi can be calculated as
µi = (1− pi,drop)Ei[tcyc]. (19)
Simply, the average packet drop probability due to MAC layer collisions is

pi,drop = (pci)^ri. (20)
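The relation between cycle time, service time and drop probability in (19)-(20) is simple arithmetic. In the sketch below, pci = 0.2, ri = 7 and a cycle time of 5.0 ms are assumed illustrative values:

```python
# Illustrative use of Eqs. (19)-(20): a packet is dropped after r_i failed
# retries, and the mean service time is the cycle time scaled by (1 - p_drop).
p_c, r = 0.2, 7
E_tcyc = 5.0                      # ms, assumed average cycle time
p_drop = p_c ** r                 # Eq. (20)
mu = (1 - p_drop) * E_tcyc        # Eq. (19)
```

With these numbers the drop probability is tiny (0.2^7 ≈ 1.3e-5), so the service time is only marginally below the cycle time.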
V. NUMERICAL AND SIMULATION RESULTS
We validate the accuracy of the numerical results by comparing them to the simulation results obtained
from ns-2 [17]. For the simulations, we employ the IEEE 802.11e HCF MAC simulation model for ns-2.28
[18]. This module implements all the EDCA and HCCA functionalities stated in [1].
In simulations, we consider two ACs, one high priority (AC3) and one low priority (AC1). Each station
runs only one AC. Each AC has always buffered packets that are ready for transmission. For both ACs,
the payload size is 1000 bytes. The RTS/CTS handshake is turned on. The simulation results are reported for a wireless channel that is assumed to be error-free during transmission. The errored-channel case is left for future study. All the stations have an 802.11g Physical Layer (PHY) using 54 Mbps
and 6 Mbps as the data and basic rate respectively (Tslot = 9 µs, SIFS = 10 µs) [19]. The simulation
runtime is 100 seconds.
In the first set of experiments, we set AIFSN1 = 3, AIFSN3 = 2, CW1,min = 31, CW3,min = 15,
m1 = m3 = 3, r1 = r3 = 7. Fig. 3 shows the normalized throughput of each AC when both N1 and N3
are varied from 5 to 30 and equal to each other. As the comparison with a more detailed analytical model
[11] and the simulation results reveal, the cycle time analysis can predict saturation throughput accurately.
Fig. 4 and Fig. 5 display the mean protocol service time and packet drop probability respectively for
the same scenario of Fig. 3. As comparison with [11] and the simulation results show, both performance
measures can accurately be predicted by the proposed cycle time model. Although not included in the
figures, a similar discussion holds for the comparison with other detailed and/or complex models of
[12]-[14].
In the second set of experiments, we fix the EDCA parameters of one AC and vary the parameters
of the other AC in order to show the proposed cycle time model accurately captures the normalized
throughput for different sets of EDCA parameters. In the simulations, both N1 and N3 are set to 10.
Fig. 6 shows the normalized throughput of each AC when we set AIFSN3 = 2, CW3,min = 15, and vary
AIFSN1 and CW1,min. Fig. 7 shows the normalized throughput of each AC when we set AIFSN1 = 4,
CW1,min = 127, and vary AIFSN3 and CW3,min. As the comparison with the simulation results shows, the predictions of the proposed cycle time model are accurate. We do not include the results for packet drop probability and service time for this experiment; no discernible error trends are observed for them either.
VI. CONCLUSION
We have presented an accurate cycle time model for predicting the EDCA saturation performance
analytically. The model accounts for AIFS and CW differentiation mechanisms of EDCA. We employ a
simple average collision probability calculation regarding AIFS and CW differentiation mechanisms of
EDCA. Instead of generic slot time analysis of [3], we use the AC-specific cycle time as the renewal cycle.
We show that the proposed simple cycle time model performs as accurately as the more detailed and complex
models previously proposed in the literature. The mean saturation throughput, protocol service time and
packet drop probability are calculated using the model. This analysis also highlights some commonalities
between approaches in EDCA saturation performance analysis. The simple cycle time analysis can provide
invaluable insights for QoS provisioning in the WLAN.
REFERENCES
[1] IEEE Standard 802.11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications: Medium access control
(MAC) Quality of Service (QoS) Enhancements, IEEE 802.11e Std., 2005.
[2] K. Medepalli and F. A. Tobagi, “Throughput Analysis of IEEE 802.11 Wireless LANs using an Average Cycle Time Approach,” in
Proc. IEEE Globecom ’05, November 2005.
[3] G. Bianchi, “Performance Analysis of the IEEE 802.11 Distributed Coordination Function,” IEEE Trans. Commun., pp. 535–547, March
2000.
[4] F. Cali, M. Conti, and E. Gregori, “Dynamic Tuning of the IEEE 802.11 Protocol to Achieve a Theoretical Throughput Limit,”
IEEE/ACM Trans. Netw., pp. 785–799, December 2000.
[5] J. C. Tay and K. C. Chua, “A Capacity Analysis for the IEEE 802.11 MAC Protocol,” Wireless Netw., pp. 159–171, July 2001.
[6] Y. Xiao, “Performance Analysis of Priority Schemes for IEEE 802.11 and IEEE 802.11e Wireless LANs,” IEEE Trans. Wireless
Commun., pp. 1506–1515, July 2005.
[7] Z. Kong, D. H. K. Tsang, B. Bensaou, and D. Gao, “Performance Analysis of the IEEE 802.11e Contention-Based Channel Access,”
IEEE J. Select. Areas Commun., pp. 2095–2106, December 2004.
[8] J. W. Robinson and T. S. Randhawa, “Saturation Throughput Analysis of IEEE 802.11e Enhanced Distributed Coordination Function,”
IEEE J. Select. Areas Commun., pp. 917–928, June 2004.
[9] J. Hui and M. Devetsikiotis, “A Unified Model for the Performance Analysis of IEEE 802.11e EDCA,” IEEE Trans. Commun., pp.
1498–1510, September 2005.
[10] H. Zhu and I. Chlamtac, “Performance Analysis for IEEE 802.11e EDCF Service Differentiation,” IEEE Trans. Wireless Commun., pp.
1779–1788, July 2005.
[11] I. Inan, F. Keceli, and E. Ayanoglu, “Saturation Throughput Analysis of the 802.11e Enhanced Distributed Channel Access Function,”
to appear in Proc. IEEE ICC ’07.
[12] Z. Tao and S. Panwar, “Throughput and Delay Analysis for the IEEE 802.11e Enhanced Distributed Channel Access,” IEEE Trans.
Commun., pp. 596–602, April 2006.
[13] J. Zhao, Z. Guo, Q. Zhang, and W. Zhu, “Performance Study of MAC for Service Differentiation in IEEE 802.11,” in Proc. IEEE
Globecom ’02, November 2002.
[14] A. Banchs and L. Vollero, “Throughput Analysis and Optimal Configuration of IEEE 802.11e EDCA,” Comp. Netw., pp. 1749–1768,
August 2006.
[15] Y. Lin and V. W. Wong, “Saturation Throughput of IEEE 802.11e EDCA Based on Mean Value Analysis,” in Proc. IEEE WCNC ’06,
April 2006.
[16] Y.-L. Kuo, C.-H. Lu, E. H.-K. Wu, G.-H. Chen, and Y.-H. Tseng, “Performance Analysis of the Enhanced Distributed Coordination
Function in the IEEE 802.11e,” in Proc. IEEE VTC ’03 - Fall, October 2003.
[17] (2006) The Network Simulator, ns-2. [Online]. Available: http://www.isi.edu/nsnam/ns
[18] IEEE 802.11e HCF MAC model for ns-2.28. [Online]. Available: http://newport.eecs.uci.edu/~fkeceli/ns.htm
[19] IEEE Standard 802.11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications: Further Higher Data
Rate Extension in the 2.4 GHz Band, IEEE 802.11g Std., 2003.
Fig. 1. EDCA backoff after busy medium.
Fig. 2. Transition through backoff slots in different contention zones for the example given in Fig. 1.
Fig. 3. Analyzed and simulated normalized throughput of each AC when both N1 and N3 are varied from 5 to 30 and equal to each other for the cycle time analysis. Analytical results of the model proposed in [11] are also added for comparison.
Fig. 4. Analyzed and simulated mean protocol service time of each AC when both N1 and N3 are varied from 5 to 30 and equal to each other for the proposed cycle time analysis and the model in [11].
Fig. 5. Analyzed and simulated mean packet drop probability of each AC when both N1 and N3 are varied from 5 to 30 and equal to each other for the proposed cycle time analysis and the model in [11].
Fig. 6. Analytically calculated and simulated performance of each AC when AIFSN3 = 2, CW3,min = 15, N1 = N3 = 10, AIFSN1 varies from 2 to 4, and CW1,min takes values from the set {15, 31, 63, 127, 255}. Note that AIFSN1 − AIFSN3 is denoted by A.
Fig. 7. Analytically calculated and simulated performance of each AC when AIFSN1 = 4, CW1,min = 127, N1 = N3 = 10, AIFSN3 varies from 2 to 4, and CW3,min takes values from the set {15, 31, 63, 127}. Note that AIFSN1 − AIFSN3 is denoted by A.
0704.1839 | ALHEP symbolic algebra program for high-energy physics
V. Makarenko 1
NC PHEP BSU, 153 Bogdanovicha str., 220040 Minsk, Belarus
Abstract
ALHEP is a symbolic algebra program for high-energy physics. It deals with amplitude calculation, matrix element squaring, the Wick theorem, dimensional regularization, tensor reduction of loop integrals and simplification of final expressions. The program output includes: Fortran code for the differential cross section, Mathematica files to view results and intermediate steps, and TeX source for Feynman diagrams. A PYTHIA interface is available.
The project website http://www.hep.by/alhep contains up-to-date executables, the manual and script examples.
1 Introduction
The analytical calculations in high-energy physics are mostly impossible without a powerful computing tool. A big variety of packages is commonly used [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. Some are general-purpose symbolic algebra
programs with specific HEP-related plug-ins (REDUCE [15], Mathematica [16]), some are designed especially for particle physics (CompHEP [1], SANC [2], GRACE [3], etc.) and some are created for a specific interaction class or a specific task. Many of them use an external symbolic algebra core (Form [17], MathLink [16]). They can deal with matrix element squaring (FeynCalc [7]) or calculate helicity amplitudes directly (MadGraph [4], CompHEP [1], O'Mega [5]). Some packages provide numerical calculations; some require an external Monte Carlo generator to be linked. Some programs also contain one-loop calculation routines (FormCalc [6], GRACE [3], SANC [2]). Nevertheless, there is no uniform program that meets all user requirements.
Email address: [email protected] (V. Makarenko ).
1 Supported by INTAS YS Grant no. 05-112-5429
Preprint submitted to Elsevier 17 June 2021
http://arxiv.org/abs/0704.1839v1
Every calculation requires a program-independent check. The optimal tactic is the simultaneous use of two (or more) different symbolic algebra packages.
ALHEP is a symbolic algebra program that performs the full path from the Standard Model Lagrangian to the amplitude or squared matrix element for a specified scattering process. It can also be useful for loop diagram analysis. The basic features are:
• Diagrams generation using Wick theorem and SM Lagrangian.
• Amplitude calculation or matrix element squaring.
• Bondarev functions method for traces calculation.
• Tensor reduction of loop integrals.
• Dimensional regularization scheme.
• Generation of Fortran procedures for numerical analysis (PYTHIA [18] and
LoopTools [19] interfaces are implemented).
The current ALHEP version has several implementation restrictions that will be lifted in the future. The following features are in progress of implementation:
• The bremsstrahlung part of the radiative corrections (the integration over the real photon phase space).
• A complete one-loop renormalization scheme, including derivation of the renormalization constants.
• Arbitrary Lagrangian assignment.
Once these methods are implemented, the complete one-loop analysis will be available. Please refer to the project website for program updates.
The ALHEP website http://www.hep.by/alhep contains the up-to-date executables (for both Linux & Win32 platforms), the manual and script examples. The mirror at http://cern.ch/~makarenko/alhep is also kept up to date.
2 ALHEP Review. Program Structure
The ALHEP program internal structure can be outlined as follows:
• The native symbolic algebra core.
• Common algebra libraries:
Dirac matrices, tensor and spinor algebra, field operators & particle wave
functions zoo.
• Specific HEP functions and libraries.
These include Feynman diagram generation, trace calculations, the helicity amplitude method, HEP-specific simplification procedures, tensor integral reduction and others.
• Interfaces to Mathematica, Fortran, TeX and internal I/O format.
Fortran code is used for further numerical analysis. Mathematica code can be used to view any symbolic expression in the program, but no backward interface from Mathematica is currently implemented. TeX output can be generated to view Feynman diagrams. An internal I/O format is implemented for most symbolic expressions, allowing a calculation to be saved and restored at intermediate steps.
• Command script processor.
User interface is implemented in terms of command scripts. The ALHEP
script language have C-like syntax, variables, arithmetic operations, func-
tion calls. All HEP-related tasks are implemented as build-in functions.
2.1 Getting Started
To use ALHEP one should download the pre-compiled executable for the appropriate platform (Linux and Win32 are available) and write a control script describing the task. The ALHEP program should be launched with a single argument: the name of the script file to be invoked. The following steps are required to
create a workspace:
• Download ALHEP executables from project website: alhep.gz (for Linux)
or alhep.zip (Win32). Unpack executable, e.g.
gzip -d alhep.gz
• Download the up-to-date command list: ALHEPCommands.txt. The set of commands (or options) may change in future versions and this manual may be somewhat obsolete. Please refer to the ALHEPCommands.txt file, which always corresponds to the latest ALHEP version. The latest changes are outlined in the RecentChanges.txt file at the website.
• Create a working directory and compose a command script file therein. For example, consider the uū → W+W−γ process with {−,+} helicities of the initial quarks. To calculate the amplitude we create the following script file (call it "test.al"):
SetKinematics(2, 3 // 2->3 process
,QUARK_U,"p_1","e_1" // u
,-QUARK_U,"p_2","e_2" // u-bar
,WBOZON,"f_1","g_1" // W{+}
,PHOTON,"f_0","g_0" // photon
,-WBOZON, "f_2", "g_2" ); // W{-}
SetDiagramPhysics(PHYS_SM_Q1GEN);// SM with 2 quarks only
SetMassCalcOrder(QUARK_U, 0); // consider massless
SetMassCalcOrder(QUARK_D, 0); // consider massless
diags = ComposeDiagrams(3); // create diagrams, e^3
DrawDiagrams(diags, "res.tex",
DD_SMALL|DD_SWAP_TALES, FILE_START);
SetFermionHelicity(1, -1);
SetFermionHelicity(2, 1);
SetParameter(PAR_TRACES_BONDEREV, 1);
ampl = CalcAmplitude(RetrieveME(diags));
ampl = KinArrange(ampl); // arrange result
ampl = Minimize(ampl); // minimize result
SaveNB("res.nb",ampl,"",FILE_START|FILE_CLOSE);
f = NewFortranFile("res.F", CODE_F77);
CreateFortranProc(f, "UUWWA", ampl,
CODE_IS_AMPLITUDE|CODE_COMPLEX16
|CODE_CHECK_DENOMINATORS|CODE_PYTHIA_VECTORS);
The SetKinematics() function declares the particles, momenta and polarization symbols. The physics is declared with the PHYS_SM_Q1GEN option to restrict the number of diagrams. The amplitude for all (d, s, b) internal quarks can be obtained from the one generated here by the quark-mixing-matrix replacement U²ud → U²ud + U²us + U²ub (chiral limit). Diagrams are created by the ComposeDiagrams() call. The CalcAmplitude() function creates the symbolic value of the process amplitude. For a detailed discussion of this example see sec. 15.1.
• Create simple batch file "run.me" like:
~/alhep_bin_path/alhep test.al
The ALHEP program creates some console output (current commands, scroll bars and some debugging data). If this is not desired, one should redirect the console output to a file here.
The batch execution with the test.al command file takes about 1 minute on a 1.8 GHz P4 processor. The following files are created:
res.nb: Mathematica file containing the symbolic expression of the amplitude. Created by the SaveNB() function.
res.F: F77 code for numerical analysis created by the NewFortranFile() and CreateFortranProc() calls. The library file alhep_lib.F is required for code compilation and should be downloaded from the project website. See sec. 11.1 for a Fortran generation and compilation review.
res.tex: TeX source for the 19 Feynman graphs generated. The AxoDraw [23] LaTeX package is used. The res.tex file should be included into your LaTeX document using the \input res.tex command. A template document for including your diagrams can be found at the program website. See fig. 1 in sec. 15.1 for the diagrams generated.
debug.nb: Mathematica file with debugging information and some intermediate steps. The amount of debugging information is declared in the debug.ini file in the working directory. See sec. 13 for further details.
See sec. 15 and the project website for other examples. It is convenient to use an example script as a template and modify it for your purposes.
2.2 Calculation scheme
The usual ALHEP script contains several steps:
(1) Initialization section
Declaration of process kinematics: initial and final-state particles, titles
for particle momenta & polarization vectors (SetKinematics()).
Physics model definition. The SM physics or a part of the SM Hamiltonian should be specified (SetDiagramPhysics()). The shorter the Hamiltonian selected, the faster the Wick theorem invocation.
Polarization declaration. Every particle is considered polarized with an abstract polarization vector by default. A specific helicity value can be set manually, or a particle can be marked as unpolarized. Several ways of involving polarization are implemented; see sec. 4.4 (SetPolarized(), SetFermionHelicity(), SetPhotonHelicity(), ...).
Setting mass-order rules for specific particles (SetMassCalcOrder()). One can demand a massless calculation for a light particle, which greatly saves evaluation time. One can also demand keeping the particle mass to a specific order M^n and dropping higher-order expressions like M^(n+1). This allows one to consider the leading mass contribution without calculating precisely.
(2) Diagrams generation
Feynman diagrams are generated for a specific e^n order using the Wick theorem algorithm (ComposeDiagrams()). The user may draw the diagrams here (DrawDiagrams()), halt the program (Halt()) and check whether the diagrams are generated correctly.
After the diagram set is generated, one may cut off uninteresting diagrams to work with a shorter set, or select a single diagram to work with (SelectDiagrams()). Loop corrections are calculated faster when processed as single diagrams.
Then matrix element is retrieved from diagrams set (RetrieveME()).
Before any operation with loop matrix element one should declare
the N−dimensional space (SetNDimensionSpace()). Some procedures
involve N−dimensional mode automatically for loop objects, but most
functions (arranging, simplification etc.) don’t know the nature of expres-
sion they work with. Therefore the dimensional mode should be forced.
The diagram set and any symbolic expression may be saved and restored in the next session to avoid repeating work (Save(), Load()).
(3) Amplitude calculation
The amplitude evaluation (CalcAmplitude()) is the faster way for multi-diagram process analysis.
All the particles are considered polarized. The spinor objects are projected onto an abstract basis spinor. The basis vectors are generated in the numerical code to meet the non-zero-denominator condition. See sec. 7 for
(4) Matrix element squaring (coupling to other)
The squaring procedure (SquareME()) is controlled by plenty of options, intended mostly for performance tuning and debugging.
It basically includes the reduction of gamma-matrix sequences coupled to kinematically dependent vectors, which reduces the number of matrices in every product to a minimum. Item-by-item squaring follows. For
loop × born∗ couplings the virtual integrals are involved. See sec. 8 for
details.
Amplitude calculation is fast, but its result may be more complicated than the squaring expressions. The amplitude depends on particle momenta, polarization vectors and additional basis vectors. For an unpolarized process the averaging cycle is generated in the Fortran code, and complex-number calculations must be performed. The squaring result for an unpolarized process is a polynomial of momenta couplings only. Hence there is no unique answer as to which result is simpler for a 10-15 diagram reaction. One should definitely use the amplitude method when squaring more than 10-15 diagrams.
(5) Loop diagrams analysis
The tensor virtual integrals are reduced to scalar ones (Evaluate(),
sec. 9). The scalar coefficients of the tensor integral decompositions are also reduced to scalar integrals in most cases (for 1-4 point integrals).
For scalar loop integrals tabulated values are used. There is no reason to tabulate integrals with a complicated structure; hence the scalar integrals table contains the A0 and B0 integrals with different mass configurations. It also contains a useful D0 chiral decomposition. Other integrals should be resolved using LoopTools-like [19,20] numerical programs.
The renormalization procedure is currently under construction. The counter-term (CT) part of the Lagrangian leads to the generation of CT diagrams. In the near future the abstract renormalization constants (δm, δf, etc.) will be involved and tabulated for the minimal on-shell scheme. The automatic derivation of the constants is supposed to be implemented later. Please refer to the ALHEP website for the implementation progress.
(6) Simplification
The kinematic simplification procedure is available (KinSimplify(), sec. 12). It reduces an expression using all the possible kinematic relations between momenta and invariants. The minimization of +/* operations in huge expressions can also be performed (Minimize()).
(7) Fortran procedure creation for numerical analysis
F90 or F77 syntax is used for the generated procedures. The generated code can be linked to PYTHIA, LoopTools and any Monte Carlo generator for numerical analysis.
3 ALHEP script language
The script syntax is similar to the C/Java languages. A command breaks at the ";" symbol only. Comments are marked as // or /*...*/. Operands may be variables or function calls; if no function with a given title is defined, the operand is considered a variable. The notation is case-sensitive.
The script language has no user-defined functions, classes or loop operators. These seem to be unnecessary in the current version.
All ALHEP features are implemented as build-in functions. The execution
starts from the first line and finishes at the end of file (or at Halt() command).
Variable types are cast and checked automatically at run time. No manual type specification or casting is available. List of ALHEP script internal
types:
• Abstract Symbolic Expression (expr).
Result of function operations. Can be stored to a file (Save()) and loaded back (Load()). Basic operations: +,-,*. Division is implemented using the Frac(a,b) function. Expressions are not supposed to be input manually, although a few commands for manual input exist.
• Integer Value (int).
Any numeric parameter will be cast to an integer value. Basic operations: +,-,*,|. Division is performed using the Frac(a,b) function. No fractional values are currently available.
• String (str).
String parameters are opened and closed by double quotes ("string variable"). Used to specify symbol notations (e.g. momenta titles), file names, etc. Basic operation: + (concatenate strings).
• Set of Feynman Diagrams (diagrams).
Result of diagrams composing function. Basic operations: Save(), Load()
and SelectDiagrams(). Diagrams can also be converted into TeX graphics.
• Matrix Element (me).
Contains the symbolic expression for a matrix element and information on its use: the list of virtual momenta (integration list), etc.
The complete up-to-date function list can be found in the ALHEPCommands.txt file at the project website.
4 Initialization section
4.1 Particles
Particles are determined by an integer number, called the particle kind (PK). The following integer constants are defined: ELECTRON, MUON, TAULEPTON, PHOTON, ZBOZON, WBOZON, QUARK_D, QUARK_U, QUARK_S, QUARK_C, QUARK_B, QUARK_T, NEUTRINO_ELECTRON, NEUTRINO_MUON, NEUTRINO_TAU. Ghost and scalar particles are not supposed to be external, so their codes are unavailable. An antiparticle has a negative PK, obtained by the "-PK" operation.
Once the kinematics is declared, particles can be selected using the particle ID (PID) number. Initial particles have negative IDs (-1, -2) and final ones are denoted by positive numbers (1, 2, 3...).
4.2 Kinematic selection
Before any computations may be performed one needs to declare the kinematic
conditions. They are: number of initial and final state particles, PK codes,
symbols for momentum and polarization vector for every particle.
SetKinematics((int)N Initials, (int)N Finals,
[(int)PK I, (str)momentum I, (str)polarization I, ...]);
Here N_Initials and N_Finals are the numbers of initial and final particles in the kinematics. The next parameters are the particle kind, momentum and polarization symbols, repeated for every particle. For example, the e−(k1, e1) e+(k2, e2) → µ−(p1, e3) µ+(p2, e4) process should be declared as follows:
SetKinematics(2, 2,
ELECTRON,"k_1","e_1",
-ELECTRON,"k_2","e_2",
MUON, "p_1", "e_3",
-MUON, "p_2", "e_4");
4.3 Particle masses
All particles are considered massive by default. A particle can be declared massless using the SetMassCalcOrder function.
SetMassCalcOrder((int)PK, (int)order);
PK: particle kind,
order: maximum order of the particle mass to be kept in calculations. A zero value means massless calculations for the specified particle. A negative value declares mass-exact operations.
The mass symbols are generated automatically and look like me, mW, etc. Hence, all the electrons in a process will have the same mass symbol. To declare unique mass symbols for specific particles the SetMassSym() function is used. Distinct masses for individual particles are often required to involve the Breit-Wigner distribution for particle masses.
SetMassSym((int)PID, (str or expr)mass);
PID: particle ID in kinematics (< 0 for initial and > 0 for final particles),
mass: new mass symbol, like "m_X" for mX.
4.4 Polarization data
All particles are considered polarized initially. The default polarization vector symbols are set together with the kinematic data (in SetKinematics()). To declare a particle unpolarized, the SetPolarized() function is used:
SetPolarized((int)PID, (int)polarized);
PID: particle ID in kinematics,
polarized: 1 (polarized) or 0 (unpolarized).
One can declare a specific polarization state (helicity value) for particles. The photon (k, e) helicities are involved in terms of two outer physical momenta:
e±µ = [ (p+ · k) p−µ − (p− · k) p+µ ± i ǫµαβν p+α p−β kν ] / Nk,p±. (1)
SetPhotonHelicity((int)PID, (int)h, (str)”pP”, (str)”pM”]);
PID: photon particle ID in kinematics,
h: helicity value: ±1 or 0. A zero value clears the helicity information.
pP, pM: the p± base vectors in formula (1). The γ-coupled forms of (1) (precise
or chiral [26]) are used automatically if available.
One may select the transverse unpolarized photon density matrix (instead of the usual −(1/2)g_µν):

ρ_µν → −g_µν + (P_µ k_ν + k_µ P_ν)/(P · k) − P² k_µ k_ν/(P · k)². (2)
SetPhotonDMBase((int)PID, (str)"P");
PID: photon particle ID in kinematics,
"P": basis vector P in (2).
The fermion (k, e) helicity is declared using:
SetFermionHelicity((int)PID, (int)h);
SetFermionHelicity((int)PID, (str)"h");
PID: fermion particle ID in kinematics,
h: helicity value: ±1 or 0. A zero value clears the helicity information. The density matrix is the usual one:
uū → p̂γ± (massless fermion).
"h": symbol for the scalar parameter in the following density matrix (massless fermion):
uū → (1/2)p̂(1 + hγ5).
Notes:
• A SetPolarized(PID, 0) call also clears helicity data.
• SetXXXHelicity(PID, 1, ...) also marks the particle as polarized (a previous SetPolarized(PID, 0) call is canceled).
• SetXXXHelicity(PID, 0, ...) clears helicity data but does not mark the particle unpolarized.
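A brief sketch of typical polarization settings (the PIDs and values are illustrative):

```
SetFermionHelicity(1, -1);               // fix helicity -1 for fermion 1
SetPhotonHelicity(2, 1, "p\_1", "p\_2"); // photon 2: helicity +1, base vectors p_1, p_2
SetPolarized(3, 0);                      // average over particle 3 polarizations
```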
4.5 Physics selection
One can either declare the full Standard Model (in unitary or Feynman gauge) or select the parts of the SM Lagrangian to be used. The QED physics is used by default. The Feynman rules correspond to the paper [27].
SetDiagramPhysics( (int)physID );
physID: physics descriptor, to be constructed from the following flags:
PHYS_QED: Pure QED interactions,
PHYS_Z and PHYS_W: Z- and W-boson vertices,
PHYS_MU, PHYS_TAU: Muons and tau leptons (and corresponding neutrinos),
PHYS_SCALARS: Scalar particles including Higgs bosons,
PHYS_GHOSTS: Faddeev-Popov ghosts,
PHYS_GAUGE_UNITARY: Use unitary gauge,
PHYS_QUARKS, PHYS_QUARKS_2GEN and PHYS_QUARKS_3GEN: {d, u}, {d, u, s, c}
and {d, u, s, c, b, t} sets of quarks,
PHYS_RARE_VERICES: vertices with 3 or more SCALAR/GHOST tails,
PHYS_CT: Renormalization counter-terms (implementation in progress),
PHYS_SM: full SM physics in unitary gauge (all the flags above except for
PHYS_CT),
PHYS_SM_Q1GEN: the SM physics in unitary gauge with only first generation
of quarks (d, u),
PHYS_ELW: SM in Feynman gauge with no rare 3-scalar vertices,
PHYS_EONLY: no muons, tau-leptons and adjacent neutrinos,
PHYS_NOQUARKS: no quarks,
PHYS_4BOSONS_ANOMALOUS: anomalous quartic gauge boson interactions (see [28]); affects the AAWW and AZWW vertices.
The fewer items are selected in the Lagrangian, the faster the diagram generation is performed.
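For example, a hypothetical selection of pure QED plus Z-boson vertices with two quark generations:

```
SetDiagramPhysics(PHYS_QED|PHYS_Z|PHYS_QUARKS_2GEN);
```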
5 Bondarev functions
The Bondarev method of trace calculation is implemented according to the paper [25]. The trace of a product of γµ-matrices becomes much shorter in terms of F-functions. The number of items for Tr[(1 − γ5)â1â2 · · · â2n] is 2^n. For example, the trace of 12 matrices usually contains 10395 items, while the new method leads to a 64-item sum.
Eight complex functions are introduced:
F1(a, b) = 2[(aq−)(bq+)− (ae+)(be−)],
F2(a, b) = 2[(ae+)(bq−)− (aq−)(be+)],
F3(a, b) = 2[(aq+)(bq−)− (ae−)(be+)],
F4(a, b) = 2[(ae−)(bq+)− (aq+)(be−)], (3)
F5(a, b) = 2[(aq−)(bq+)− (ae−)(be+)],
F6(a, b) = 2[(ae−)(bq−)− (aq−)(be−)],
F7(a, b) = 2[(aq+)(bq−)− (ae+)(be−)],
F8(a, b) = 2[(ae+)(bq+)− (aq+)(be+)].
The basis vectors q± and e± are selected as follows:

q^µ_± = (1, ±1, 0, 0), e^µ_± = (0, 0, 1, ±i). (4)
The results of trace evaluation look as follows:
Tr[(1 − γ5)â1â2] = F1(a1, a2) + F3(a1, a2),
Tr[(1 − γ5)â1â2â3â4] = F1(a1, a2)F1(a3, a4) + F2(a1, a2)F4(a3, a4) +
+ F3(a1, a2)F3(a3, a4) + F4(a1, a2)F2(a3, a4).
Please refer to the paper [25] for method details.
SetParameter(PAR_TRACES_BONDEREV, (int)par);
par: 1 or 0 – allow or forbid the usage of Bondarev functions.
The numerical code for the F-functions (3) is contained in the alhep_lib.F library file. The code is available for scalar couplings only: F^µν p_µ q_ν. If vector Bondarev functions remain in the result, the Fortran-generation procedure fails. In that case one should repeat the whole calculation without Bondarev functions.
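Enabling the method is a single parameter call; a sketch, including the fallback for the failure case described above:

```
SetParameter(PAR_TRACES_BONDEREV, 1);    // use Bondarev F-functions
// ...
// SetParameter(PAR_TRACES_BONDEREV, 0); // fall back if Fortran generation fails
```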
6 Diagrams generation
The diagrams are generated after the kinematics and physics model are de-
clared. The Wick-theorem-based method is implemented.
The only difference from the usual Feynman diagrams is the following: the vertex rules have additional 1/2 factors for every pair of identical lines. This has two effects:
- Crossed diagrams are usually included only if their topology differs from the original one; i.e. if two external photon lines start from a single vertex, they are not crossed. In ALHEP, however, all the similar external lines have crossings.
- Some ALHEP diagrams have 2^n factors due to identical internal lines. For example, the W+W− → {γγWW vertex} → γγ → {γγWW vertex} → W+W− diagram will have an additional factor 4. The result remains correct due to the 1/2 factors at every γγW+W− vertex.
The diagrams are generated without any crossings. The crossed graphs are
added automatically during the squaring or amplitude calculation procedures.
diagrams ComposeDiagrams((int)n);
n: order of the diagrams to be created, M_{X→Y} ∼ e^n.
Uses the current physics and kinematics information. Returns the generated diagrams.
One may select specified diagrams into another diagram set:
diagrams SelectDiagrams( (diagrams)d, (int)i0 [, i1, i2...]);
d: initial diagrams set. Remains unaffected during the procedure.
i0..iN: numbers of the diagrams to be selected. The first diagram is "0".
The new diagrams set is returned.
To retrieve matrix element from the diagram set the RetrieveME() is used.
me RetrieveME( (diagrams)d );
d: diagrams list.
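A minimal sketch combining these calls (the diagram numbers are illustrative):

```
diags = ComposeDiagrams(2);          // tree-level (e^2) diagrams
sub   = SelectDiagrams(diags, 0, 1); // keep diagrams #0 and #1 only
me    = RetrieveME(sub);             // matrix element of the selected set
```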
7 Helicity amplitudes
The amplitudes are calculated according to the paper [24]. Every spinor in the matrix element is projected onto a common abstract spinor:

ū_p = (ū_Q u_p ū_p)/(ū_Q u_p) = e^{iC} ū_Q P_p / (Tr[P_p P_Q])^{1/2},
u_p = (u_p ū_p u_Q)/(ū_p u_Q) = e^{iC} P_p u_Q / (Tr[P_Q P_p])^{1/2},

where P_p = u_p ū_p.
The e^{iC} factor is equal for all diagrams and may be neglected.
The projection operator P_Q = u_Q ū_Q is chosen as follows:
• P_Q = Q̂(1 + γ5)/2 – for massive external fermions,
• P_Q = Q̂(1 + Ê_Q)/2 – if one of the fermions is massless.
The value of the vector Q is selected arbitrarily in the Fortran numerical procedure. The additional basis vector E_Q (if it exists) is selected to meet the polarization requirements ((E_Q · Q) = 0, (E_Q · E_Q) = −1). Fractions like 1/Tr[P_Q P_p] may turn into 1/0 at some Q and E_Q values. Denominator-check procedures are generated in the Fortran code (the CODE_CHECK_DENOMINATORS key should be used in the CreateFortranProc() call). If the |Tr[P_Q P_p]| > δ check fails, new Q and E_Q values are generated.
expr CalcAmplitude( (me)ME );
ME: matrix element retrieved from the whole diagrams set.
The resulting expression is a function of all the particle helicity vectors. The averaging over polarization vectors is performed numerically. The numerical averaging procedure is automatically generated in the Fortran output if unpolarized particles are declared.
The CODE_IS_AMPLITUDE key in the CreateFortranProc() procedure declares the expression as an amplitude and leads to the proper numerical code (Ampl × Ampl∗).
8 Matrix Element squaring
The squaring procedure has the following steps:
• matrix element simplification to minimize the number of γ-matrices,
• denominator caching to make the procedure faster,
• item-by-item squaring (coupling each term to the conjugated ones),
• a memory-saving mechanism to avoid arranging huge sums (SQR_SAVE_MEMORY option),
• virtual integrals reconstruction.
expr SquareME((me)ME1, [(me)ME2,] [(int)flags ]);
ME1: matrix element #1,
ME2: matrix element #2 (should be omitted for squaring),
flags: method options (default is "0"):
SQR_CMS: c.m.s. consideration (initial momenta are collinear). The additional
pseudo-covariant relations (p1.ε2 → 0, p2.ε1 → 0) appear that simplify work
with abstract polarization vectors.
SQR_NO_CROSSING_1: do not involve crossings for ME #1,
SQR_NO_CROSSING_2: do not involve crossings for ME #2,
SQR_MANDELSTAMS: allow Mandelstam variable usage,
SQR_PH_GAMMA_CHIRAL: tries to involve photon helicities in the short chiral ê± form (see sec. 4.4, [26]).
SQR_PH_GAMMA_PRECISE: involve the precise ê± form for photon helicities. If the polarization vectors are not coupled to γµ, the vector form e^±_µ is used (see sec. 4.4).
SQR_SAVE_MEMORY: save memory and processor time for huge matrix element squaring. Minimizes sub-results after every 1000 items and skips the final arranging of the whole sum. No huge sums occur in the calculation in this mode, but the result is also not minimal. If the result expression contains a sum of 10^5-10^6 items (when expanded), the arranging time is significant and the SQR_SAVE_MEMORY flag should be used. If no results are obtained in reasonable time, the CalcAmplitude() procedure (see sec. 7) should be used.
If two matrix elements are given, the first will be conjugated, i.e. the result is
ME1∗ ×ME2.
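For example (the variable names are illustrative): squaring a born-level matrix element, and building a loop-born interference term:

```
sqr_born = SquareME(me_born, SQR_CMS|SQR_MANDELSTAMS);  // |ME_born|^2
sqr_int  = SquareME(me_loop, me_born, SQR_MANDELSTAMS); // ME_loop* x ME_born
```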
9 Virtual integrals operations
The tensor virtual integrals are reduced to scalar ones using two methods. If a tensor integral is coupled to an external momentum, I_µ p^µ, and the vector p can be decomposed over the integral's vector parameters, the fast reduction is used. The D_x integrals for a 2 → 2 process contain the whole basis of the 4-dimensional space, so D_x couplings to any external momentum can be decomposed. This works well if all the polarization vectors are constructed in terms of external momenta.
The common tensor reduction scheme is used otherwise. Tensor integrals are decomposed over the vector basis, e.g.

I_µν(p, q) = I_1 p_µ p_ν + I_2 q_µ q_ν + I_3 p_(µ q_ν) + I_4 g_µν.

The linear system for the scalar coefficients is composed and solved. This is implemented for B_j and C_i, C_ij integrals only. The other scalar coefficients should be calculated numerically [19].
expr Evaluate( (expr)src );
Evaluate: Reduction of tensor virtual integrals to scalar ones. The scalar
coefficients in tensor VI decomposition are also evaluated (not for all integrals).
expr ConvertInvariantVI( (expr)src );
ConvertInvariantVI: vector parameters are substituted by scalars, i.e.:
C0(k1, k2, m1, m2, m3) → C0(k1², (k1 − k2)², k2², m1, m2, m3).
Note: A- and B- integrals are converted automatically during arrangement.
expr CalcScalarVI( (expr)src );
CalcScalarVI: substitutes known scalar integrals with their values. Most of the UV-divergent integrals (A0, B0) are substituted. The chiral decomposition for the D0-integral is also applied. The complicated integrals should be calculated numerically [19].
The source expressions are unaffected in all the functions above.
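The typical reduction chain built from these calls, as also used in the example of sec. 15.2:

```
me_sqr = Evaluate(me_sqr);           // tensor VI -> scalar VI
me_sqr = ConvertInvariantVI(me_sqr); // vector arguments -> invariants
me_sqr = CalcScalarVI(me_sqr);       // substitute known scalar integrals
```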
10 Regularization
The dimensional regularization scheme is implemented. One may change the
space-time dimension before every operation:
SetNDimensionSpace( (int)val );
val: 1 (n-dimensions) or 0 (4-dimensions space).
expr SingularArrange( (expr)src );
SingularArrange: turns the expression into 4-dimensional form. Calculates the (n−4)^i factor in every item and drops all the negligible contributions.
SetDRMassPK( (int)PK )
SetDRMassPK: sets the particle to be used as the DR mass regulator.
PK: particle kind (see sec. 4.1). A "0" value declares the default "µ" DR mass.
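A sketch of the regularization workflow around a loop-level calculation (the variable name is illustrative):

```
SetNDimensionSpace(1);      // switch to n-dimensional space
// ... loop-level calculation ...
res = SingularArrange(res); // expand in (n-4), drop vanishing terms
SetNDimensionSpace(0);      // return to 4-dimensional space
```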
11 ALHEP interfaces
11.1 Fortran numerical code
Numerical analysis in particle physics is commonly performed using the Fortran programming language. Hence, Fortran code is provided to meet the variety of existing Monte-Carlo generators.
To start a new Fortran file the NewFortranFile() function is used:
int NewFortranFile( (str)fn [, (int)type]);
fn: output file name
type: Fortran compiler conventions: CODE_F77 or CODE_F90. The FORTRAN 77
conventions are presumed by default.
Function returns the ID of created file.
Then we can add a function to the Fortran file:
CreateFortranProc( (int)fID, (str)name, (expr)src [, (int)keys]);
fID: file ID returned by NewFortranFile() call.
name: Fortran function name.
src: symbolic expression to be calculated. The source may be | M |2, dσ/dΓ (use the CODE_IS_DIFF_CS flag) or an amplitude (CODE_IS_AMPLITUDE flag). The result is always a dσ/dΓ calculation procedure.
keys: option flags for code generation (default is CODE_REAL8):
CODE_REAL8: treat all the symbols in the expression as REAL*8 values.
CODE_COMPLEX16: declare the variable type as COMPLEX*16.
CODE_IS_DIFF_CS: the source expression is a differential cross section.
CODE_IS_AMPLITUDE: the source expression is an amplitude and squaring code should be generated: AMPL*DCONJG(AMPL).
CODE_CHECK_DENOMINATORS: check denominators for zero. Used to re-generate free basis vectors in amplitude code (see sec. 7).
CODE_LOOPTOOLS: create LoopTools [19] calls for virtual integrals.
CODE_SEPARATE_VI: create a unique title for every virtual integral function (according to parameter values). Do not use with CODE_LOOPTOOLS.
CODE_PYTHIA_VECTORS: retrieve vector values from the PYTHIA [18] user process PUP(I,J) array.
CODE_POWER_PARAMS: factorize and pre-calculate powers if possible.
CODE_NO_4VEC_ENTRY: do not create a 4-vector entry for the function.
CODE_NO_CONSTANTS: do not use predefined physics constants. All variables become external parameters.
CODE_NO_SPLIT: do not split functions every 100 lines.
CODE_NO_COMMON: do not use a COMMON block to keep internal variables.
The scalar vector couplings, Bondarev functions (see sec. 5) and ε_{abcd} contraction objects are replaced with scalar parameters to be calculated once. These functions are calculated using alhep_lib.F library procedures.
Some compilers work extremely slowly with long procedures. Therefore Fortran functions are automatically split after every 100 lines for faster compilation. The internal functions have the TEMPXXX() notation. To avoid problems when linking several ALHEP-generated files, the TEMP prefix should be renamed to make the internal functions unique.
SetParameterS(PAR_STR_FORTRAN_TEMP, (str)prefix);
prefix: "TMP1" or another unique prefix for the temporary functions' names.
To obtain better performance of numerical calculations, ALHEP provides a mechanism for minimizing the number of +/× operations in an expression. We recommend invoking the Minimize() function before Fortran generation (see sec. 12).
11.1.1 PYTHIA interface
The PYTHIA [18] interface is implemented in terms of the UPINIT/UPEVNT procedures and a 2 → N phase-space generator. The momenta of external particles are retrieved from the PYTHIA user process event common block (PUP(I,J)). The order of particles in the kinematics should match the order of the generated particles, and the CODE_PYTHIA_VECTORS option should be used.
The template UPINIT/UPEVNT procedures for the ALHEP → PYTHIA junction can be found at the ALHEP website. However, one should modify them by adjusting the generated particle sequence, including user cutting rules, using symmetries to calculate similar processes in a single function, etc.
The plain 2 → N phase-space generator in the alhep_lib.F library file was written by V. Mossolov. It is desirable to replace the plain phase-space generator with an adaptive one for multiparticle production processes.
More automation will be implemented in the future. Please refer to the ALHEP website for details.
11.2 Mathematica
The output interface to a Mathematica Notebook [16] file is basically used to view expressions in a convenient form. It is implemented for all the symbolic objects in ALHEP.
SaveNB( (str)fn, (expr or me)val [, (str)comm ][, (int)flags ]);
MarkNB( (str)fn [, (str)comm ][, (int)flags ]);
fn: Mathematica output file name.
val: expression to be stored.
comm: comment text to appear in output file ("" if no comments are required).
flags: file open flags (default is "0"):
0: append to the existing file and do not close it afterward,
FILE_START: delete the previous file, start a new one and add the Mathematica header,
FILE_CLOSE: add the closing Mathematica block.
The MarkNB function is used to add comments only.
The Mathematica program can open valid files only. A valid x.nb file should be started once (FILE_START) and closed once (FILE_CLOSE). It is convenient to insert MarkNB("x.nb","",FILE_START) and MarkNB("x.nb","",FILE_CLOSE) calls at the start and end of your script file.
No backward interface (Mathematica → ALHEP) is currently available.
11.3 LaTeX
The LaTeX interface in ALHEP is implemented only for Feynman diagram drawing. The diagrams are drawn in terms of the AxoDraw [23] package.
DrawDiagrams( (diagrams)d, (str)fn [, (int)flags][, (int)draw]);
MarkTeX( (str)fn, [, (str)comm][, (int)flags]);
d: diagrams set.
fn: output LaTeX file name.
flags: file open flags (0 (default): append, FILE_START: truncate the old file),
comm: comment text to appear in output file ("" if no comments are required).
draw: draw options (default is DD_SMALL):
DD_SMALL: diagrams with small font captions.
DD_LARGE: diagrams with large font captions.
DD_MOMENTA: print particles momenta.
DD_SWAP_TALES: allow arbitrary order for final-state lines (the option is included automatically for 2 → N kinematics).
DD_DONT_SWAP_TALES: deny the DD_SWAP_TALES option. The order of the final lines is the same as in the kinematics declaration.
11.4 ALHEP native save/load operations
The native ALHEP serialization format is XML-structured. It may be edited outside ALHEP for debugging purposes.
Save( (str)fn, (diagrams or expr)val);
object Load( (str)fn );
fn: output XML file name.
val: diagrams set or symbolic expression to be stored.
12 Common algebra utilities
The following set of common symbolic operations is available:
• Expand(): expands all the brackets and arranges the result. Works slowly with huge expressions.
• Arrange(): arranges the expression (makes alphabetic order in commutative sequences). Most ALHEP functions perform arranging automatically and there is no need to call Arrange() directly.
• Minimize(): reduces the number of "sum-multiply" operations in the expression. Should be used for the simplification of big sums before numerical calculations.
• Factor(): factorizes the expression.
• KinArrange(): arranges the expression using kinematic relations.
• KinSimplify(): simplifies the expression using kinematic relations. Works very slowly with large expressions. Mostly useful for 2 → 2 processes.
expr Expand( (expr)src );
expr Arrange( (expr)src );
expr Minimize( (expr)src [, (int)flags]);
expr Factor( (expr)src [, (int)flags]);
expr KinArrange( (expr)src [, (int)flags]);
expr KinSimplify( (expr) src [, (int)flags]);
src: source expression (remains unaffected).
Minimize() function flags (default is MIN_DEFAULT):
MIN_DEFAULT = MIN_FUNCTIONS|MIN_DENS|MIN_NUMERATORS,
MIN_FUNCTIONS: factorize functions,
MIN_DENS: factorize denominators,
MIN_NUMERATORS: factorize numerators,
MIN_NUMBERS: factorize numbers,
MIN_ALL_DENOMINATORS: factorize all denominators: 1/(a + b) + 1/x → (x + a + b)/(x(a + b)),
MIN_ALL_SINGLE_DENS: factorize single denominators (but not sums, products etc.),
MIN_VERIFY: verify result (self-check: expand result back and compare to
source).
Factor() function flags (default is 0):
FACT_NO_NUMBERS: do not factorize numbers,
FACT_NO_DENS: do not factorize fraction denominators,
FACT_ALL_DENS: factorize all denominators,
FACT_ALL_SINGLE_DENS: factorize all single denominators (but not sums,
products etc.),
FACT_VERIFY: verify result (self-check: expand result back and compare to
source).
KinArrange() function flags (default is 0):
KA_MASS_EXACT: do not truncate masses (neglecting SetMassCalcOrder()
settings),
KA_MANDELSTAMS: involve Mandelstam variables (for 2 → 2 kinematics),
KA_NO_EXPAND: do not expand source.
KinSimplify() function flags (default is KS_FACTORIZE_DEFAULT):
KS_FACTORIZE_DEFAULT: factorize functions and denominators (the first
two flags below),
KS_FACTORIZE_FUNCTIONS: factorize functions before simplification,
KS_FACTORIZE_DENS: factorize denominators (including partial factorization) before simplification,
KS_FACTORIZE_ALL_DENS: factorize all denominators,
KS_MASS_EXACT: do not truncate masses (neglecting SetMassCalcOrder()
settings),
KS_MANDELSTAMS: involve Mandelstam variables (for 2 → 2 kinematics),
KS_NO_EXPAND: do not expand source (if no simplification are found).
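A typical simplification chain before numerical output might look like this (the flags and variable name are illustrative):

```
res = KinArrange(res, KA_MANDELSTAMS);       // apply kinematic relations
res = KinSimplify(res);                      // deeper kinematic simplification
res = Minimize(res, MIN_DEFAULT|MIN_VERIFY); // reduce +/x operations, self-check
```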
13 Debugging tools
ALHEP allows the user to control most of the internal calculation flow. The debug info is stored to the debug.nb file (critical messages will also appear in the console). The debug.ini file contains numerical criteria for the messages to be logged and the debug levels for different internal classes. Warning: raising the debug.ini values leads to a significant performance drop and enormous growth of the debug.nb file.
If you experience problems with ALHEP usage, please contact the author for assistance (attaching your script file). Do not waste time on manual debugging using debug.ini.
There are some specific commands to view internal data. For example, the
whole list of tensor virtual integrals reduction results is kept in internal data
storage and can be dumped to Mathematica file for viewing:
• str ViewParticleData((int)PID): returns brief information on the particle settings. Puts the information string to the console and also returns it as a string variable.
PID: particle ID in kinematics.
• ViewFeynmanRules((str)nb_file, (int)flags) stores the Feynman rules of the current physics to a Mathematica file.
• ViewTensorVITable((str)nb_file, (int)flags) stores the tensor integrals reduction table to a Mathematica file. The VI reduction table is filled during the Evaluate() operation.
• ViewScalarVICache((str)nb_file, (int)flags) stores the scalar loop integrals values cache. The scalar VI cache is filled during the CalcScalarVI() invocation.
nb_file: output Mathematica file name,
flags: access flags to the Mathematica file: 0, FILE_START and/or FILE_CLOSE. A call without parameters directs output to the debug.nb file.
14 System commands
The two system commands are useful:
• Halt(): stops further script processing. May be used to test the first part of a script and save (Save()) the internal result. If the first part finishes successfully, it may be commented out (/*...*/) and followed by a loading procedure (Load()). Then the script execution is restarted.
• Timer(): shows the time elapsed since the last Timer() call (from program start for the first call).
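A sketch of the breakpoint pattern built from these commands (the file name is illustrative):

```
Timer();                    // start timing
diags = ComposeDiagrams(3);
Timer();                    // print time spent on diagram generation
Save("diags.xml", diags);
Halt();                     // stop here; on the next run comment out the
                            // lines above and use:
//diags = Load("diags.xml");
```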
15 Examples
15.1 Amplitude for qq̄ → W+W−γ
Let's consider the example from sec. 2.1 in detail. We also extend it with anomalous quartic gauge boson interactions [28], and we do not use the Bondarev method for trace calculation this time. Please refer to sec. 2.1 for ALHEP installation notes.
We start test.al script from output files creation:
nbfile = "uuWWA_MPXXX.nb"; // Mathematica file name
MarkNB(nbfile, "", FILE_START); // Create file
texfile = "res.tex"; // LaTeX file name
MarkTeX(texfile, "", FILE_START); // Create file
Then 2 → 3 process kinematics and physics are declared:
u(p1, e1) ū(p2, e2) → γ(f0, g0) W+(f1, g1) W−(f2, g2). (5)
SetKinematics(2, 3 // 2->3 process
,QUARK_U,"p\_1","e\_1" // u
,-QUARK_U,"p\_2","e\_2" // u-bar
,WBOZON,"f\_1","g\_1" // W{+}
,PHOTON,"f\_0","g\_0" // photon
,-WBOZON, "f\_2", "g\_2" ); // W{-}
SetDiagramPhysics(PHYS_SM_Q1GEN|PHYS_4BOSONS_ANOMALOUS);
We declare the physics with u- and d-quarks only. The amplitude will be summed over all the possible internal quarks numerically. It requires simply replacing the quark mixing matrix element in the resulting Fortran code: U²_ud → U²_ud + U²_us + U²_ub.
Next we declare the u- and d-quarks massless:
SetMassCalcOrder(QUARK_U, 0); // consider massless
SetMassCalcOrder(QUARK_D, 0); // consider massless
Set polarizations to ”-+UUU”:
SetFermionHelicity(1, -1); // u
SetFermionHelicity(2, 1); // u-bar
Create diagrams set and store it to LaTeX file:
diags = ComposeDiagrams(3); //e^3 order
DrawDiagrams(diags, texfile);
Next we include the following lines:
Save("diags.xml",diags); //save to XML file
//Halt(); //stop execution
//diags = Load("diags.xml"); //load from XML file
We can save the diagrams, stop the program now and view the generated diagrams. To stop the ALHEP session, the Halt() line should be uncommented. Then we modify our script as follows:
Fig. 1. The diagrams generated for uū → W−W+γ process (see res.tex file). The
anomalous quartic gauge boson interaction affects the first two diagrams.
/* diags = ComposeDiagrams(3); // commented
... // commented
Halt(); */ // commented
diags = Load("diags.xml"); // uncommented
If we run the script again, it will skip the diagrams generation step and load
diagrams from XML file.
Matrix element retrieval:
me = RetrieveME(diags); //get matrix element
SaveNB(nbfile, me, "Matrix element"); //view
Calculate helicity amplitude, arrange result and minimize the +/× operations
number:
ampl = CalcAmplitude(me);
SaveNB(nbfile, ampl, "Amplitude after CalcAmplitude()");
ampl = KinArrange(ampl);
SaveNB(nbfile, ampl, "Amplitude after KinArrange()");
ampl = Minimize(ampl);
SaveNB(nbfile, ampl, "Amplitude after Minimize()");
Another breakpoint can be inserted here. The amplitude result is saved; the Halt and Load commands are commented out for further use:
Save("ampl.xml", ampl); // save amplitude
//MarkNB(nbfile, FILE_CLOSE); Halt(); // close NB and exit
//ampl = Load("ampl.xml"); // load amplitude
This breakpoint allows repeating the next Fortran creation step without recalculating the matrix element.
Let's average over the final state polarizations in the subsequent numerical procedure. Set the final particles unpolarized:
SetPolarized(-1, 0); // set unpolarized
SetPolarized(-2, 0); // set unpolarized
SetPolarized(-3, 0); // set unpolarized
The Fortran output for differential cross section:
SetParameterS(PAR_STR_FORTRAN_TEMP, "TMP1");
f = NewFortranFile("uuWWA.F", CODE_F77); //f77 file
CreateFortranProc(f, "uuWWA", ampl,
CODE_IS_AMPLITUDE| //square amplitude
CODE_CHECK_DENOMINATORS| //check 1/0 limits
CODE_COMPLEX16| //complex values
CODE_POWER_PARAMS| //F(M^2) instead of F(M)
CODE_PYTHIA_VECTORS); //use PYTHIA PUP(I,J) vectors
The SetParameterS call sets a unique notation for internal variables and functions. Please do not make it too long. The complex-type code is required for proper amplitude calculation.
Close Mathematica output file at the end of script:
MarkNB(nbfile, FILE_CLOSE);
The execution of this script takes less than 2 minutes on a 1.8 GHz P4 processor. We will not discuss the structure of the generated uuWWA.F file in detail, but some remarks should be made:
Line 5: The main function call. The following parameters are declared (the order is changed here): all the parameters (except kQOrig) are of COMPLEX*16 type. Once the CODE_COMPLEX16 option is set, all the real objects are treated as complex.
kQOrig (INTEGER): The ID of u(first)-quark in PYTHIA PUP(I,J) array.
Possible values: 1 or 2.
PAR_a_0, PAR_a_c, PAR_a_n, PAR_ah_c, PAR_ahat_n: anomalous quartic gauge boson interaction constants a0, ac, an, âc, ân [28].
PAR_CapitalLambda: scale factor Λ for the anomalous interaction [28].
PAR_VudP2: quark mixing matrix element squared |U_ud|². The U²_ud + U²_us + U²_ub value may be passed to sum over the whole set of diagrams (neglecting quark masses). The numbers for the mixing matrix elements may be obtained using the QMIX_VAL(ID1,ID2), QMIX_SQR_SUM(ID) and QMIX_PROD_SUM functions of the alhep_lib.F library.
Line 29: Internal COMMON block with the PAR(XX) array. All the scalar couplings and other compound objects are precalculated and stored in PAR(XX).
Lines 49-55: External momenta initialization from the PYTHIA PUP(I,J) array. The order of the external vectors is expected as follows:
PUP(I,1), PUP(I,2): initial particles. If kQOrig=2, the order is backward: PUP(I,2), PUP(I,1).
PUP(I,3..N): final particles in the same order as in the SetKinematics() call. One should modify this section (or the SetKinematics() parameters) to get the proper particle order.
DO 10 I=1,4
p_1(I) = DCMPLX(PUP(I,kQ1Orig),0D0) // kQ1Orig = kQOrig
f_0(I) = DCMPLX(PUP(I,4),0D0)
p_2(I) = DCMPLX(PUP(I,kQ2Orig),0D0) // kQ2Orig = 3-kQOrig
f_1(I) = DCMPLX(PUP(I,3),0D0)
f_2(I) = DCMPLX(PUP(I,5),0D0)
10 CONTINUE
Lines 69-231: Polarization averaging and basis vector generation cycle. For every momenta set the PAR(XXX) array is filled. Then the denominator checks and amplitude averaging are performed.
Lines 244-252, 438-444, ...: Interaction constants and particle mass definitions in sub-procedures. The constants can be declared as main function parameters using the CODE_NO_CONSTANTS option in the CreateFortranProc function.
For the complete pp̄ → W+W−γ analysis the following steps are required:
• The PYTHIA client program should be written. The template files are available at the ALHEP website.
• The other helicity configuration, +-UUU, should be calculated separately.
• The other channels q_i q̄_j → W+W−γ (i ≠ j) should be calculated and included into the generator.
15.2 Z-boxes for e−e+ → µ−µ+
Let's calculate some box diagrams now. Consider the following process:

e−(p1, e1) e+(p2, e2) → µ−(f1, g1) µ+(f2, g2). (6)

As in the previous example, we start the command script with file initialization:
nbfile = "Zbox.nb"; // Mathematica file name
MarkNB(nbfile, "", FILE_START); // Create file
texfile = "res.tex"; // LaTeX file name
MarkTeX(texfile, "", FILE_START); // Create file
The 2 → 2 kinematics declaration:
SetKinematics(2, 2, // 2->2 process
ELECTRON, "p\_1", "e\_1" , // e^{-}
-ELECTRON, "p\_2", "e\_2", // e^{+}
MUON, "f\_1", "g\_1", // mu^{-}
-MUON, "f\_2", "g\_2"); // mu^{+}
The Standard Model physics in Feynman gauge (we omit quarks for faster diagram generation):
SetDiagramPhysics(PHYS_ELW|PHYS_NOQUARKS);
Set the leptons massless and declare the n-dimensional space:
SetMassCalcOrder(ELECTRON, 0); // massless electrons
SetMassCalcOrder(MUON, 0); // massless muons
SetNDimensionSpace(1);
Use Mandelstam variables throughout the calculation:
SetParameter(PAR_MANDELSTAMS,1);
Consider unpolarized particles:
SetPolarized(1, 0); // unpolarized e^{-}
SetPolarized(2, 0); // unpolarized e^{+}
SetPolarized(-1, 0); // unpolarized mu^{-}
SetPolarized(-2, 0); // unpolarized mu^{+}
Compose born and one-loop diagrams:
diags_born = ComposeDiagrams(2); //e^2 order
DrawDiagrams(diags_born, texfile);
diags_loop = ComposeDiagrams(4); //e^4 order
Save("diags_loop.xml",diags_loop);
//diags_loop=Load("diags_loop.xml");
DrawDiagrams(diags_loop, texfile);
The 220 loop diagrams are created and saved to the internal format and to the TeX file. The ComposeDiagrams(4) procedure takes 5-10 minutes here. It is convenient to comment out the ComposeDiagrams(4)-Save() lines on a second run and load the loop diagrams from disk.
Fig. 2. Born level diagrams for e−e+ → µ−µ+.
Fig. 3. Part of 220 loop diagrams stored to res.tex file
Let's select the double Z-exchange box graphs from the whole set (diagrams 194 and 195 in fig. 3):
diag_box = SelectDiagrams(diags_loop,194,195);
Next we couple the loop and born matrix elements:
me_born = RetrieveME(diags_born);
me_box = RetrieveME(diag_box);
me_sqr = SquareME(me_box, me_born);
The simplification procedures are not included in the SquareME implementation, since arranging the items of a huge expression may take a long time. Therefore all the simplification procedures are optional and should be called manually:
me_sqr = KinArrange(me_sqr);
me_sqr = KinSimplify(me_sqr);
SaveNB(nbfile, me_sqr, "squared & simplified");
The reduction of tensor virtual integrals follows:
me_sqr = Evaluate(me_sqr);
me_sqr = KinArrange(me_sqr);
me_sqr = KinSimplify(me_sqr);
SaveNB(nbfile, me_sqr, "VI evaluated");
Next we convert scalar integrals to invariant-dependent form and replace with
tabulated values:
me_sqr = ConvertInvariantVI(me_sqr);
me_sqr = CalcScalarVI(me_sqr); // use pre-calculated values
me_sqr = KinArrange(me_sqr);
SaveNB(nbfile, me_sqr, "VI scalars ");
Turn to the 4-dimensional space, drop the (n − 4)^i items and perform the final simplification:
me_sqr = SingularArrange(me_sqr);
SetNDimensionSpace(0);
me_sqr = KinArrange(me_sqr);
me_sqr = KinSimplify(me_sqr);
Save result and create Fortran code with LoopTools [19] interface:
Save("ZBox.xml",me_sqr); // save result
//me_sqr = Load("ZBox.xml"); // reload result
SaveNB(nbfile, me_sqr, "Z boxes result"); // view result
f = NewFortranFile("ZBOX.F", CODE_F77);
CreateFortranProc(f, "ZBOX", me_sqr,
CODE_POWER_PARAMS|CODE_LOOPTOOLS);
View tensor integrals reduction table and close Mathematica output file:
ViewTensorVITable(nbfile);
MarkNB(nbfile, FILE_CLOSE);
The script runs about 15 minutes on a 1.8 GHz P4 processor. Half of this time is taken by the ComposeDiagrams(4) procedure.
Fig. 4. The resulting squared matrix element, as rendered in the ZBox.nb file (the full expression is not reproduced here). The result contains the UV-regulator term χ, which should cancel the corresponding term in the B0(s, MZ, MZ) integral; this can be checked using the GetUVTerm() function.
Code remarks:
Line 5: The main function call. The 3 parameters are the usual Mandelstam variables (s, t, u; type is complex). The current ALHEP version does not track interdependencies between parameters in the Fortran output, so all three Mandelstam variables may appear in the parameter list. Future versions will remove this limitation.
Lines 14, 50: Include the LoopTools header file ("looptools.h"). See the LoopTools manual [19] for details.
Lines 20, 21: Retrieve the LoopTools values for the UV regulator (getdelta()) and the squared DR mass (getmudim()).
The complete code of the examples, including scripts, batch files and output files, is available at the ALHEP website.
16 Conclusions
A new program for symbolic computations in high-energy physics has been presented. In spite of several restrictions remaining in the current version, it can be useful for the computation of observables in particle collision experiments, covering both multiparticle production amplitudes and the analysis of loop diagrams.
The nearest development plans are:
• improvement of the Bondarev functions method,
• a complete renormalization scheme for the SM,
• a complete covariant analysis of the one-loop radiative corrections, including the hard-bremsstrahlung scattering contribution,
• arbitrary Lagrangian assignment.
Refer to the ALHEP project websites for program updates:
http://www.hep.by/alhep ,
http://cern.ch/~makarenko/alhep .
References
[1] CompHEP, http://theory.sinp.msu.ru/comphep,
E. Boos et al., Nucl.Instrum.Meth. A534 (2004) 250, hep-ph/0403113.
[2] SANC, http://sanc.jinr.ru/ (or http://pcphsanc.cern.ch/),
A. Andonov et al., hep-ph/0411186, to appear in Comp.Phys.Comm.; D.
Bardin, P. Christova, L. Kalinovskaya, Nucl.Phys.Proc.Suppl. B116 (2003) 48.
[3] GRACE, http://minami-home.kek.jp/,
G. Belanger et al., LAPTH-982/03, KEK-CP-138, hep-ph/0308080 (One-
loop review); J. Fujimoto et al., Comput. Phys. Commun. 153 (2003) 106,
hep-ph/0208036 (SUSY review);
[4] MadGraph, http://madgraph.hep.uiuc.edu/,
T. Stelzer, W. F. Long, Comput. Phys. Commun. 81 (1994) 357,
hep-ph/9401258.
[5] O’Mega, http://theorie.physik.uni-wuerzburg.de/~ohl/omega/
M. Moretti, T. Ohl, J. Reuter, IKDA 2001/06, LC-TOOL-2001-040,
hep-ph/0102195.
[6] FormCalc, http://www.feynarts.de/formcalc,
T. Hahn, M. Perez-Victoria, Comput.Phys.Commun. 118 (1999) 153,
hep-ph/9807565.
[7] FeynCalc, http://www.feyncalc.org/
R. Mertig, M. Bohm, A. Denner, Comput.Phys.Commun. 64 (1991) 345.
[8] Amegic, F. Krauss, R. Kuhn, G. Soff, JHEP 0202 (2002) 044, hep-ph/0109036.
[9] AlpGen, http://mlm.home.cern.ch/mlm/alpgen/,
M. Mangano et al., CERN-TH-2002-129, JHEP 0307 (2003) 001,
hep-ph/0206293.
[10] HELAC-PHEGAS, C. Papadopoulos, Comput.Phys.Commun. 137 (2001) 247,
hep-ph/0007335; A. Kanaki, C. Papadopoulos, hep-ph/0012004.
[11] xloops, http://wwwthep.physik.uni-mainz.de/~xloops/,
L. Brucher, J. Franzkowski, D. Kreimer, Comput. Phys. Commun. 115 (1998)
[12] aITALC, http://www-zeuthen.desy.de/theory/aitalc/,
A. Lorca, T. Riemann, DESY 04-110, SFB/CPP-04-22, hep-ph/0407149;
J. Fleischer, A. Lorca, T. Riemann, DESY 04-161, SFB/CPP-04-38,
hep-ph/0409034.
[13] MINCER,
http://www.nikhef.nl/~form/maindir/packages/mincer,
S.A. Larin, F.V. Tkachov, J.A.M. Vermaseren, NIKHEF-H/91-18;
S.G. Gorishnii, S.A. Larin, L.R. Surguladze, F.V. Tkachov, Comput. Phys.
Commun. 55 (1989) 381.
[14] DIANA, http://www.physik.uni-bielefeld.de/~tentukov/diana.html,
M. Tentyukov, J. Fleischer, Comput.Phys.Commun. 160 (2004) 167,
hep-ph/0311111; M. Tentyukov, J. Fleischer, Comput.Phys.Commun. 132
(2000) 124, hep-ph/9904258.
[15] REDUCE by A. Hearn, http://www.reduce-algebra.com/.
[16] Mathematica by S.Wolfram,
http://www.wolfram.com/products/mathematica/.
[17] FORM by J. Vermaseren, http://www.nikhef.nl/~form/.
[18] PYTHIA 6.4, http://projects.hepforge.org/pythia6/,
T.Sjostrand, S.Mrenna, P.Skands, JHEP 0605 (2006) 026, LU-TP-06-13,
hep-ph/0603175.
[19] LoopTools, http://www.feynarts.de/looptools/,
T. Hahn, M. Perez-Victoria, Comput.Phys.Commun. 118
(1999) 153, hep-ph/9807565; T. Hahn, Nucl.Phys.Proc.Suppl. 89 (2000) 231,
hep-ph/0005029.
[20] FF, http://www.xs4all.nl/~gjvo/FF.html,
G.J. van Oldenborgh, J.A.M. Vermaseren, Z.Phys. C46 (1990) 425, NIKHEF-
H/89-17.
[21] FeynArts, http://www.feynarts.de/,
T. Hahn, Comput.Phys.Commun. 140 (2001) 418, hep-ph/0012260; T. Hahn, C.
Schappacher, Comput.Phys.Commun. 143 (2002) 54, hep-ph/0105349 (MSSM).
[22] WHIZARD, http://www-ttp.physik.uni-karlsruhe.de/whizard/,
W. Kilian, LC-TOOL-2001-039.
[23] AxoDraw LaTeX style package,
J.A.M. Vermaseren, Comput.Phys.Commun. 83 (1994) 45.
[24] A.L. Bondarev, Talk at 9th Annual RDMS CMS Collaboration Conference,
Minsk, 2004, hep-ph/0511324.
[25] A.L. Bondarev, Nucl.Phys. B733 (2006) 48, hep-ph/0504223.
[26] P. De Causmaecker, R. Gastmans, W. Troost, Tai Tsun Wu, Nucl. Phys. B206
(1982) 53.
[27] A. Denner, Fortsch.Phys. 41 (1993) 307.
[28] A.Denner, S.Dittmaier, M.Roth,D.Wackeroth, Eur.Phys.J.C 20 (2001) 201.
0704.1840 | Simulations of aging and plastic deformation in polymer glasses
Simulations of Aging and Plastic Deformation in Polymer Glasses
Mya Warren∗ and Jörg Rottler
Department of Physics and Astronomy, The University of British Columbia,
6224 Agricultural Road, Vancouver, BC, V6T 1Z1, Canada
(Dated: August 14, 2018)
We study the effect of physical aging on the mechanical properties of a model polymer glass
using molecular dynamics simulations. The creep compliance is determined simultaneously with
the structural relaxation under a constant uniaxial load below yield at constant temperature. The
model successfully captures universal features found experimentally in polymer glasses, including
signatures of mechanical rejuvenation. We analyze microscopic relaxation timescales and show
that they exhibit the same aging characteristics as the macroscopic creep compliance. In addition,
our model indicates that the entire distribution of relaxation times scales identically with age.
Despite large changes in mobility, we observe comparatively little structural change except for a
weak logarithmic increase in the degree of short-range order that may be correlated to an observed
decrease in aging with increasing load.
PACS numbers: 81.40.Lm, 81.40.Lg, 83.10.Rs
I. INTRODUCTION
Glassy materials are unable to reach equilibrium over
typical experimental timescales [1, 2, 3]. Instead, the
presence of disorder at temperatures below the glass tran-
sition permits only a slow exploration of the configura-
tional degrees of freedom. The resulting structural re-
laxation, also known as physical aging [4], is one of the
hallmarks of glassy dynamics and leads to material prop-
erties that depend on the wait time tw since the glass was
formed. While thermodynamic variables such as energy
and density typically evolve only logarithmically, the re-
laxation times grow much more rapidly with wait time
[3, 4, 5].
Aging is a process observed in many different glassy
systems, including colloidal glasses [6], microgel pastes
[7], and spin glasses [8], but is most frequently studied
in polymers due to their good glass-forming ability and
ubiquitous use in structural applications. Of particular
interest is therefore to understand the effect of aging on
their mechanical response during plastic deformation [5].
In a classic series of experiments, Struik [4] studied many
different polymer glasses and determined that their stiff-
ness universally increases with wait time. However, it has
also been found that large mechanical stimuli can alter
the intrinsic aging dynamics of a glass. Cases of both
decreased aging (rejuvenation) [4] and increased aging
(overaging) [9, 10] have been observed, but the interpre-
tation of these findings in terms of the structural evolu-
tion remains controversial [11, 12].
The formulation of a comprehensive molecular model
of the non-equilibrium dynamics of glasses has been im-
peded by the fact that minimal structural change oc-
curs during aging. Traditional interpretations of aging
presume that structural relaxation is accompanied by a
∗Electronic address: [email protected]
decrease in free volume available to molecules and an as-
sociated reduction in molecular mobility [4]. While this
idea is intuitive, it suffers from several limitations. First,
the free volume has been notoriously difficult to define ex-
perimentally. Also, this model does not seem compatible
with the observed aging in glassy solids under constant
volume conditions [13], and cannot predict the aging be-
havior under complex thermo-mechanical histories. Mod-
ern energy landscape theories describe the aging process
as a series of hops between progressively deeper traps in
configuration space [14, 15]. These models have had some
success in capturing experimental trends, but have yet to
directly establish a connection between macroscopic ma-
terial response and the underlying molecular level pro-
cesses. Recent efforts to formulate a molecular theory of
aging are promising but require knowledge of how local
density fluctuations control the relaxation times in the
glass [16].
Molecular simulations using relatively simple models
of glass forming solids, such as the binary Lennard-Jones
glass [17] or the bead spring model [18] for polymers,
have shown rich aging phenomenology. For instance,
calculations of particle correlation functions have shown
explicitly that the characteristic time scale for particle
relaxations increases with wait time [19]. Recent work
[13, 20] has focused on the effect of aging on the mechan-
ical properties; results showed that the shear yield stress
(defined as the overshoot or maximum of the stress-strain
curve) in deformation at constant strain rate generally in-
creases logarithmically with tw. Based on a large number
of simulations at different strain rates and temperatures,
a phenomenological rate-state model was developed that
describes the combined effect of rate and age on the shear
yield stress for many temperatures below the glass tran-
sition [21].
In contrast to the strain-controlled studies described
above, experiments on aging typically impose a small,
constant stress and measure the resulting creep as a func-
tion of time and tw [4]. In this study, we perform molecu-
lar dynamics simulations on a coarse grained, glass form-
ing polymer model in order to investigate the relation-
ship between macroscopic creep response and microscopic
structure and dynamics. In Section IIIA, we determine
creep compliance curves for different temperatures and
applied loads (in the sub-yield regime) and find that, as
in experiments, curves for different ages can be super-
imposed by rescaling time. The associated shift factors
exhibit a power-law dependence on the wait time, and
the effect of aging can be captured by an effective time
as originally envisioned by Struik [4]. In Section III B, we
compute microscopic mobilities and the full spectrum of
relaxation times and show their relationship to the creep
response. Additionally, we study several parameters that
are sensitive to the degree of short-range order in Section
III C. We detect very little evolution toward increased
local order in our polymer model, indicating that short
range order is not a sensitive measure of the mechanical
relaxation times responsible for the creep compliance of
glassy polymers.
II. SIMULATIONS
We perform molecular dynamics (MD) simulations
with a well-known model polymer glass on the bead-
spring level. The beads interact via a non-specific van
der Waals interaction given by a 6-12 Lennard-Jones po-
tential, and the covalent bonds are modeled with a stiff
spring that prevents chain crossing [22]. This level of
modeling does not include chemical specificity, but al-
lows us to study longer aging times than a fully atomistic
model and seems appropriate to examine a universal phe-
nomenon found in all glassy polymers. All results will be
given in units of the diameter a of the bead, the mass m,
and the Lennard-Jones energy scale, u0. The characteristic timescale is therefore $\tau_{LJ} = \sqrt{ma^2/u_0}$, and the pressure and stress are in units of $u_0/a^3$. The Lennard-Jones
interaction is truncated at 1.5a and adjusted vertically
for continuity. All polymers have a length of 100 beads,
and unless otherwise noted, we analyze 870 polymers in a
periodic simulation box. Results are obtained either with
one large simulation containing the full number of poly-
mers, or with several smaller simulations, each starting
from a unique configuration, whose results are averaged.
The large simulations and the averaged small simulations
provide identical results. The small simulations are used
to estimate uncertainties caused by the finite size of the
simulation volume.
To create the glass, we begin with a random dis-
tribution of chains and relax in an ensemble at con-
stant volume and at a melt temperature of 1.2u0/kB.
Once the system is fully equilibrated, it is cooled over
750τLJ to a temperature below the glass transition at
Tg ≈ 0.35u0/kB [18]. The density of the melt is chosen such that after cooling the pressure is zero.
FIG. 1: Simulated creep compliance J(t, tw) at a glassy temperature of T = 0.2u0/kB for various wait times tw (indicated in the legend in units of τLJ). A uniaxial load of (a) σ = 0.4u0/a³ and (b) σ = 0.5u0/a³ is applied to the aged glasses. The strain during creep is monitored over time to give the creep compliance.
We then switch to an NPT ensemble - the pressure and temperature are controlled via a Nosé-Hoover thermostat/barostat - with zero pressure and age for various
wait times (tw) between 500 and 75,000τLJ. The aged
samples undergo a computer creep experiment where a
uniaxial tensile stress (in the z-direction) is ramped up
quickly over 75τLJ , and then held constant at a value
of σ, while the strain ǫ = ∆Lz/Lz is monitored. After
an initial elastic deformation, the glass slowly elongates
in the direction of applied stress due to structural relax-
ations. In the two directions perpendicular to the applied
stress, the pressure is maintained at zero.
III. RESULTS
A. Macroscopic Mechanical Deformation
Historically, measurements of the creep compliance
have been instrumental in probing the relaxation dynam-
ics of glasses, and continue to be the preferred tool in
investigating the aging of glassy polymers [15, 23, 24]. In
his seminal work on aging in polymer glasses, Struik [4]
performed an exhaustive set of creep experiments on dif-
ferent materials, varying the temperature and the applied
load. In this section, we perform a similar set of exper-
iments with our model polymer glass. The macroscopic
creep compliance is defined as
$J(t, t_w) = \epsilon(t, t_w)/\sigma$ .  (1)
FIG. 2: The same data as Fig. 1 is shown with the curves
shifted by aJ (tw) to form a master curve. The dashed lines are
fits to the master curves using the effective time formulation,
and the dotted line is a short-time fit for comparison (see
text).
Compliance curves J(t, tw) for several temperatures and
stresses were obtained as a function of wait time since
the quench; representative data is shown in Figure 1.
The curves for different wait times appear similar and
agree qualitatively with experiment. An initially rapid
rise in compliance crosses over into a slower, logarith-
mic increase at long times. The crossover between the
two regimes increases with increasing wait time. Struik
showed that experimental creep compliance curves for
different ages can be superimposed by rescaling the time
variable by a shift factor, aJ ,
$J(t, t_w) = J(t\,a_J, t'_w)$ .  (2)
This result is called time-aging time superposition [4, 5].
Simulated creep compliance curves from Fig. 1 can simi-
larly be superimposed, and the resulting master curve is
shown in Fig. 2.
Shift factors required for this data collapse are plot-
ted versus the wait time in Fig. 3. All data fall along a
straight line in the double-logarithmic plot, clearly indi-
cating power law behavior:
$a_J \propto t_w^{\mu}$ .  (3)
This power law in the shift factor is characteristic of ag-
ing. µ is called the aging exponent, and has been found
experimentally to be close to unity for a wide variety of
glasses in a temperature range near Tg [4].
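The shift-factor extraction described above lends itself to a short numerical sketch. The code below is illustrative only (the master-curve shape, wait times and exponent are invented for the demonstration, and `find_shift` is our own helper, not analysis code from the paper): it superimposes synthetic compliance curves onto a reference curve by a grid search over the rescaling factor, then reads off the aging exponent µ from the log-log slope of eq. (3).

```python
import numpy as np

def find_shift(ref_t, ref_J, t, J, candidates):
    """Grid-search the rescaling factor a that best superimposes the
    curve (t * a, J) onto the reference curve (ref_t, ref_J)."""
    best_a, best_err = 1.0, np.inf
    for a in candidates:
        ts = t * a                              # shifted time axis
        mask = (ts >= ref_t[0]) & (ts <= ref_t[-1])
        if mask.sum() < 10:                     # require some overlap
            continue
        err = np.mean((np.interp(ts[mask], ref_t, ref_J) - J[mask]) ** 2)
        if err < best_err:
            best_a, best_err = a, err
    return best_a

# synthetic data obeying J(t, tw) = M(t * (tw_ref/tw)^mu)
mu_true, tw_ref = 0.9, 500.0
master = lambda x: 0.1 * np.log1p(x / 50.0)     # stand-in master curve
t = np.logspace(0, 5, 400)
waits = np.array([500.0, 1500.0, 5000.0, 15000.0])
curves = [master(t * (tw_ref / tw) ** mu_true) for tw in waits]

cands = np.logspace(-3, 1, 2000)
shifts = np.array([find_shift(t, curves[0], t, J, cands) for J in curves])
# aging exponent from the log-log slope of a_J versus tw, as in Fig. 3
mu_est = -np.polyfit(np.log(waits), np.log(shifts), 1)[0]
```

The same two-step procedure - superimpose, then fit the slope - recovers the exponent that was put into the synthetic curves.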
Figure 4 shows the effect of stress and temperature on
the aging exponent, as determined from linear fits to the
data in Fig. 3. At T = 0.2u0/kB, µ is close to one for
small stresses, but decreases strongly with stress. This
apparent erasure of aging by large mechanical deforma-
tions has been called “mechanical rejuvenation” [25].
FIG. 3: Plot of the shift factors found by superimposing the creep compliance curves, aJ (circles), the mean-squared displacement curves, aMSD (triangles), and the incoherent scattering function curves, aC (×), at different wait times (see text). The solid lines are linear fits to the data.
Experiments have frequently found a stress dependence of
the aging exponent [4], although it is not always the case
that the aging process slows down with applied stress;
stress has been known to increase the rate of aging in
some circumstances as well [9, 10]. The structural ori-
gins of this effect are not well understood [11, 12].
At T = 0.3u0/kB, we find that the aging exponent is
somewhat smaller than at T = 0.2u0/kB and varies much
less with applied stress. This behavior is most likely
due to the fact that the temperature is approaching Tg.
Indeed, experiments show that µ rapidly drops to zero above Tg. The compliance is an order of magnitude larger at T = 0.3u0/kB than at T = 0.2u0/kB, and the data do not fully superimpose in a master curve at long times where J > 0.2 a³/u0. Shift factors were obtained from the small-creep portion of the curves.
The relatively simple relationship between shift factors
and wait time permits construction of an expression that
describes the entire master curve in Fig. 2. For creep
times that are short compared to the wait time - such
that minimal physical aging occurs over the timescale of
the experiment - experimental creep compliance curves
can be fit to a stretched exponential (typical of processes
with a spectrum of relaxation times),
$J(t) = J_0 \exp[(t/t_0)^m]$  (4)
where t0 is the retardation factor, and the exponent, m,
has been found to be close to 1/3 for most glasses [4].
A fit of this expression to our simulated creep compliance curves is shown in Fig. 2 (dotted line).
FIG. 4: The aging exponent, µ, determined from the slopes of log(aJ) versus log(tw) (from Fig. 3), plotted versus stress (open symbols). The solid symbols at zero stress refer to shift factors determined from aMSD (eq. 7) and aC (eq. 6) data only. The dashed lines are guides to the eye.
This expression is clearly only consistent with the data at times
t < tw. At times much longer than the wait time, the
creep compliance varies more slowly due to the stiffen-
ing caused by aging during the course of the experiment.
Struik suggested that eq. (4) could be extended to the
long-time creep regime, where the experimental timescale
may be longer than the wait time, by introducing an ef-
fective time to account for the slowdown in the relaxation
timescales:
$t_{\mathrm{eff}} = \int_0^{t} \left( \frac{t_w}{t_w + t'} \right)^{\mu} dt'$  (5)
Upon replacing t with teff , eq. (4) may be used to de-
scribe the entire creep curve. Creep compliance curves
from Fig. 2 can indeed be fit to this form (dashed lines)
for a known wait time, tw, and aging exponent, µ, as
obtained from the master curve. We find m ≈ 0.5 ± 0.1
for all stresses at T = 0.2u0/kB; a relatively broad range of values for J0 and t0 is consistent with the data.
For the simple thermo-mechanical history prescribed by
the creep experiment, Struik’s effective time formulation
appears to work quite well.
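For a constant aging exponent, the integral in eq. (5) can be evaluated in closed form (this evaluation is ours, added for clarity):

```latex
t_{\mathrm{eff}} = \int_0^t \left(\frac{t_w}{t_w+t'}\right)^{\mu} dt'
= \begin{cases}
\dfrac{t_w}{1-\mu}\left[\left(1+\dfrac{t}{t_w}\right)^{1-\mu}-1\right], & \mu \neq 1,\\[2ex]
t_w \ln\!\left(1+\dfrac{t}{t_w}\right), & \mu = 1.
\end{cases}
```

For t ≪ tw this reduces to t_eff ≈ t, so short-time creep is unaffected by aging, while for t ≫ tw with µ = 1 the effective time grows only logarithmically, consistent with the slow long-time rise of the master curve.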
The present results parallel those of a recent simula-
tion study of the shear yield stress in glassy solids [21].
In this work, the glassy solid was deformed at constant
strain rate, and two different regimes of strong and weak
rate dependence emerged depending on the time to reach
the yield point relative to the wait time. In order to ra-
tionalize these results, a rate-state model was developed
that accounted for the internal evolution of the material
with age through a single state variable Θ(t). This formu-
lation successfully collapses yield stress data for different
ages and strain rates in a universal curve by adapting the
evolution of the state variable during the strain interval.
We note here that this state variable is closely related to
Struik’s effective time, as it tries to subsume the modified
aging dynamics during deformation in a single variable
and in particular easily includes the effects of overaging
or rejuvenation.
B. Microscopic Dynamics
The aging behavior of the simulated mechanical re-
sponse functions agrees remarkably well with experiment.
Additional microscopic information from simulations al-
lows us to obtain more directly the relevant timescales
of the system, and the relevant microscopic processes re-
sponsible for aging. One parameter which has been use-
ful in studying glassy dynamics is the “self” part of the
incoherent scattering factor [19],
$C_q(t, t_w) = \frac{1}{N} \sum_{j} \exp\!\big(i\,\vec q \cdot [\vec r_j(t_w + t) - \vec r_j(t_w)]\big)$  (6)
where $\vec r_j$ is the position of the $j$-th atom, and $\vec q$ is the
wave-vector. Cq curves as a function of age are shown
in Fig. 5 and exhibit three distinct regions. Initially,
Cq decreases as particles make very small unconstrained
excursions about their positions. There follows a long
plateau, where the correlation function does not change
considerably. In this regime, atoms are not free to dif-
fuse, but are trapped in local cages formed by their near-
est neighbours. For this reason, the time spent in the
plateau regime is often associated with a “cage time”.
The plateau region ends when particles finally escape
from local cages (α-relaxation), and larger atomic rear-
rangements begin to take place. The cage time corre-
sponds closely to the transition from short-time to long-
time regime observed in the creep compliance. Structural
rearrangements taking place in the α-relaxation regime
are clearly associated with the continued aging observed
in the creep compliance, as well as plastic deformations
occurring in that region.
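Equation (6) is straightforward to evaluate from stored trajectory frames. The sketch below is our own illustration (the array layout and names are assumptions, and the average over atoms stands in for the full ensemble average):

```python
import numpy as np

def self_isf(positions, q):
    """Self part of the incoherent scattering function, eq. (6).

    positions: array of shape (n_frames, n_atoms, 3); frame 0 plays
    the role of the configuration at the wait time t_w.
    Returns one (real) value of C_q per frame."""
    dr = positions - positions[0]            # r_j(t_w + t) - r_j(t_w)
    phase = dr @ np.asarray(q, dtype=float)  # q . dr for every atom
    return np.exp(1j * phase).mean(axis=1).real

# frozen configuration: no atom moves, so C_q stays at 1
frozen = np.random.default_rng(0).normal(size=(1, 100, 3)).repeat(4, axis=0)
cq = self_isf(frozen, (0.0, 0.0, 2 * np.pi))
```

A decaying cq signals cage escape; the frozen-configuration check above merely confirms the normalization.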
The correlation functions for different ages are similar
in form, but the time spent in the plateau region in-
creases with age. Just as creep compliance curves can be
shifted in time to form a master curve, we may overlap
the long-time, cage-escape regions of Cq by rescaling the
time variable of the correlation data at different ages (see
inset of Fig. 5). The corresponding shift factors aC(tw)
are also shown in Fig. 3, where we see that the increase
in cage time with age follows the same power law as the
shift factors of the creep compliance. These results are
qualitatively similar to the scaling of the relaxation times
with age found in [19] with no load.
The real space quantity corresponding to Cq is the
mean squared displacement,
$\langle \vec r(t, t_w)^2 \rangle = \frac{1}{N} \sum_{j} \Delta\vec r_j(t, t_w)^2$  (7)
FIG. 5: Incoherent scattering factor (eq. 6) for different
wait times measured under the same loading conditions as
in Fig. 1(a) for q = (0, 0, 2π). The inset shows the master
curve created by rescaling the time variable by aC . Symbols
as in Fig. 1(b).
FIG. 6: Mean-squared displacement (eq. 7) for different wait
times measured under the same loading conditions as in
Fig. 1(a). The inset shows the master curve created by rescal-
ing the time variable by aMSD. Symbols as in Fig. 1(b).
where $\Delta\vec r_j(t, t_w) = \vec r_j(t_w + t) - \vec r_j(t_w)$. This function
is shown in Fig. 6. Again we see three characteristic re-
gions of unconstrained (ballistic), caged, and cage-escape
behavior. The departure from the cage plateau likewise
increases with age, and a master curve can be constructed
by shifting the curves with a factor aMSD (see inset of
Fig. 6). Shift factors aMSD are plotted in Fig. 3, along
with shifts for creep compliance and incoherent scatter-
ing function. As anticipated, the shifts versus wait time
fully agree with those obtained from Cq and J .
This clearly demonstrates that for our model, the cage
escape time is indeed the controlling factor in the aging
dynamics of the mechanical response functions.
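The mean-squared displacement of eq. (7) can be computed from the same trajectory array; again, a minimal sketch with our own naming, checked against the ballistic limit:

```python
import numpy as np

def msd(positions):
    """Mean-squared displacement, eq. (7).

    positions: array of shape (n_frames, n_atoms, 3); frame 0 is the
    configuration at the wait time t_w."""
    dr = positions - positions[0]
    return (dr ** 2).sum(axis=2).mean(axis=1)

# uniform drift v per frame gives the ballistic law <dr^2> = |v|^2 k^2
v = np.array([0.1, 0.0, 0.0])
frames = np.array([np.zeros((50, 3)) + k * v for k in range(5)])
curve = msd(frames)
```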
FIG. 7: The displacement probability distribution versus time
measured under the same loading conditions as in Fig. 1(a),
with a wait time of 500τLJ . The solid lines from left to right
are obtained at times t of 75, 750, 7500, and 75000τLJ . The
dashed lines show fits to the double Gaussian distribution (see
text, eq. 8).
Additional information about microscopic processes
can be obtained by studying not only the mean of the dis-
placements, but also the full spectrum of relaxation dy-
namics as a function of time and wait time. To this end,
we measure the probability distribution P (∆r(t, tw)
2) of
atomic displacements during time intervals, t, for glasses
at various ages, tw. This quantity is complementary to
the measurements of dynamical heterogeneities detailed
in [26], where the spatial variations of the vibrational am-
plitudes were measured at a snapshot in time to show the
correlations of mobile particles in space. In our study, we
omit the spatial information, but retain all of the dynam-
ical information.
Representative distribution functions are shown in
Fig. 7 for a constant wait time of tw = 500τLJ and var-
ious time intervals t. The distributions were obtained
from a smaller system of only 271 polymer chains due to
memory constraints. The data does not reflect a simple
Gaussian distribution, but clearly exhibits the presence
of two distinct scales: there is a narrow distribution of
caged particles and a wider distribution of particles that
have escaped from their cages. This behavior can be de-
scribed by the sum of two Gaussian peaks,
$P(\Delta r^2) = N_1 \exp\!\left(-\frac{\Delta r^2}{2\sigma_1^2}\right) + N_2 \exp\!\left(-\frac{\Delta r^2}{2\sigma_2^2}\right)$ .  (8)
Deviations from purely Gaussian behavior are common in
glassy systems and are a signature of dynamical hetero-
geneities [26, 27]. Experiments on colloidal glasses [28]
show a similar separation of displacement distributions
into fast and slow particles.
FIG. 8: The Gaussian fit parameters for the distribution of displacements (see text, eq. 8): (a) N1/N, (b) σ1², and (c) σ2², measured under the same loading conditions as in Fig. 1(a). The curves are for wait times increasing from left to right from 500τLJ to 15000τLJ.
A fit of the normalized distributions to eq. (8) (dashed lines in Fig. 7) requires adjustment of three parameters: the variances of the caged and mobile particle distributions, σ1² and σ2², as well as their relative contributions N1/N, where N = N1 + N2. These parameters are sufficient to
describe the full evolution of the displacement distribu-
tion during aging. In Fig. 8, we show the fit parameters
as a function of time and wait time. Again two distinct
time scales are evident. At short times, most of the parti-
cles are caged (N1/N ≈ 1), and the variance of the cage
peak is also changing very little. There are few rear-
rangements in this regime, however Fig. 8(c) shows that
a small fraction of particles are mobile at even the short-
est times. At a time corresponding to the onset of cage
escape, the number of particles in the cage peak begins
to rapidly decay, and the variance of the cage peak in-
creases. This indicates that the cage has become more
malleable - small, persistent rearrangements occur lead-
ing to eventual cage escape. In this regime, the variance
of the mobile peak increases very little. Note that the
typical length scale of rearrangements is less than a par-
ticle diameter even in the cage escape regime, but the
number of particles undergoing rearrangements changes
by more than 50%.
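Because eq. (8) is linear in the amplitudes N1 and N2 once the two variances are fixed, a fit can be organized as a grid search over candidate variance pairs with a linear least-squares solve at each grid point. The sketch below is our own illustration of that idea on a synthetic target distribution (the grid, tolerances and parameter values are invented; this is not the fitting code behind Fig. 8):

```python
import numpy as np

def fit_double_gaussian(dr2, p):
    """Fit p(dr2) ~ N1*exp(-dr2/2*s1) + N2*exp(-dr2/2*s2), eq. (8).

    For each candidate variance pair (s1, s2) the amplitudes enter
    linearly and are obtained by least squares; the best pair wins."""
    grid = np.logspace(-3, 0, 31)            # candidate variances
    best_err, best = np.inf, None
    for s1 in grid:
        for s2 in grid:
            if s2 <= s1:                     # peak 2 = mobile (wider) peak
                continue
            A = np.column_stack([np.exp(-dr2 / (2 * s1)),
                                 np.exp(-dr2 / (2 * s2))])
            coef, *_ = np.linalg.lstsq(A, p, rcond=None)
            err = np.sum((A @ coef - p) ** 2)
            if err < best_err:
                best_err, best = err, (coef[0], s1, coef[1], s2)
    return best

# synthetic "caged + mobile" distribution with known parameters
dr2 = np.linspace(0.0, 1.0, 200)
target = 0.8 * np.exp(-dr2 / (2 * 0.01)) + 0.2 * np.exp(-dr2 / (2 * 0.1))
n1, s1, n2, s2 = fit_double_gaussian(dr2, target)
```

Splitting the nonlinear parameters (variances) from the linear ones (amplitudes) keeps the search two-dimensional and avoids a general nonlinear optimizer.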
FIG. 9: The displacement probability distribution measured under the same loading conditions and wait times as in Fig. 1(a), plotted at times chosen such that $\langle r^2(t, t_w)\rangle = 0.7$ (shown in the inset as a dashed line). The legend indicates the wait time.
Similar to the compliance and mean-squared displacement curves, the data in Fig. 8(a) and (b) can also be
superimposed by shifting time. Shift factors for N1/N and σ1² coincide exactly with shifts for the mean; however, the data for σ2² (Fig. 8(c)) seem to be much less affected by the wait time. The aging dynamics appears to be entirely determined by the cage dynamics, and not by larger rearrangements within the glass.
Since the fit parameters exhibit the same scaling with
wait time as the mean, one might expect that the en-
tire distribution of displacements under finite load scales
with the evolution of the mean. In Fig. 9, we plot dis-
placement distributions for several wait times at time in-
tervals chosen such that the mean squared displacements
are identical (see inset). Indeed, we find that all curves
overlap, indicating that the entire relaxation spectrum
ages in the same way. A similar observation was recently
made in simulations of a model for a metallic glass aging
at zero stress [29], although in this study the tails of the
distribution were better described by stretched exponen-
tials.
In order to study the effect of load on the relaxation
dynamics, we compare in Figure 10 the fit parameters for
a sample undergoing creep (replotted from Fig. 8) and a
reference sample without load. It is clear that the dy-
namics are strongly affected by the applied stress only in
the region characterized by α-relaxations. For the stress
applied here, the onset of cage-escape does not appear to
be greatly modified by the stress, however the decay in
N1/N and the widening of the cage peak are accelerated.
The stress does not modify the variance of the mobile
peak, confirming again the importance of local rearrange-
ments as compared to large-scale motion in the dynamics
of the system. The accelerated structural rearrangements
caused by the stress result in creep on the macroscopic
scale, but may also be responsible for the modification of
the aging dynamics with stress as observed in Fig. 4.
FIG. 10: The Gaussian fit parameters to the displacement
distributions (see text, eq. 8): (a) N1/N, (b) σ1², and (c) σ2²,
for a sample aged at T = 0.2u0/kB for tw = 500τLJ, and then
either undergoing a creep experiment at σ = 0.4u0/a³ (black),
or simply aging further at zero stress (red).
C. Structural evolution
The connection between the dynamics and the struc-
ture of a glass during aging remains uncertain, mostly
because no structural parameter has been found that
strongly depends on wait time. Recent simulation stud-
ies of metallic glasses have shown the existence of sev-
eral short range order parameters that can distinguish
between glassy states created through different quench-
ing protocols [30, 31, 32]. A strong correlation has been
found between “ordered” regions of the glass and strain
localization. Many metallic glasses are known to form
quasi-crystalline structures that optimize local packing.
It remains to be seen whether the short-range order
evolves in the context of aging and in other glass for-
mers such as polymers and colloids. A recent experimen-
tal study of aging in colloidal glasses found no change in
the distribution function of a tetrahedral order parame-
ter [33]. Below, we investigate several measures of local
order in our model as they evolve with age and under
load.
Since Lennard-Jones liquids are known to condense
into a crystal with fcc symmetry at low temperatures,
it is reasonable to look for the degree of local fcc order in
our polymer model. The level of fcc order can be quan-
tified via the bond orientational parameter [34],
Q_6 = \left[\frac{4\pi}{13}\sum_{m=-6}^{6}\left|\overline{Y_{6m}(\theta,\phi)}\right|^{2}\right]^{1/2}. (9)
This parameter has been successfully used to character-
ize the degree of order in systems of hard sphere glasses.
Q6 is determined for each atom by projecting the bond
angles of the nearest neighbours onto the spherical har-
monics, Y6m(θ, φ). The overbar denotes an average over
all bonds. Nearest neighbours are defined as all atoms
within a cutoff radius, rc, of the central atom. For all of
the order parameters discussed here, the cutoff radius is
defined by the first minimum in the pair correlation func-
tion, in this case 1.45a. Q6 is approximately 0.575 for a
perfect fcc crystal; for jammed structures, it can exhibit
a large range of values less than about 0.37 [34]. The full
distribution of Q6 for our model glass is shown for several
ages as well as an initial melt state in Fig. 11(a). We see
that there is very little difference even between melt and
glassy states in our model, and no discernible difference
at all with increasing age.
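Eq. (9) can be evaluated directly from particle coordinates. The sketch below is an independent illustration (not the authors' code): it computes Q6 for a single atom using SciPy's spherical harmonics and the rc = 1.45a neighbour cutoff quoted in the text.

```python
import numpy as np

try:                                    # SciPy >= 1.15
    from scipy.special import sph_harm_y
    def Y6m(m, theta, phi):             # theta = polar, phi = azimuthal
        return sph_harm_y(6, m, theta, phi)
except ImportError:                     # older SciPy
    from scipy.special import sph_harm  # args: (m, n, azimuthal, polar)
    def Y6m(m, theta, phi):
        return sph_harm(m, 6, phi, theta)

def q6(positions, center_idx, rc=1.45):
    """Bond-orientational parameter Q6 (eq. 9): average Y6m over all
    bonds to neighbours within rc, then sum |average|^2 over m."""
    bonds = positions - positions[center_idx]
    d = np.linalg.norm(bonds, axis=1)
    keep = (d > 1e-9) & (d < rc)
    bonds, d = bonds[keep], d[keep]
    theta = np.arccos(np.clip(bonds[:, 2] / d, -1.0, 1.0))  # polar angle
    phi = np.arctan2(bonds[:, 1], bonds[:, 0])              # azimuthal angle
    total = sum(abs(np.mean(Y6m(m, theta, phi)))**2 for m in range(-6, 7))
    return np.sqrt(4.0 * np.pi / 13.0 * total)
```

For the perfect 12-neighbour fcc shell this returns approximately 0.575, matching the value quoted in the text.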
Locally, close-packing is achieved by tetrahedral order-
ing and not fcc ordering, however, tetrahedral orderings
cannot span the system. The glass formation process has
been described in terms of frustration between optimal
local and global close-packing structures. To investigate
the type of local ordering in this model, we investigate
a 3-body angular correlation function, P (θ). θ is de-
fined as the internal angle created by a central atom and
individual pairs of nearest-neighbours, and P (θ) is the
probability of occurrence of θ. Results for this corre-
lation are shown in Fig. 11(b). Two peaks at approxi-
mately 60◦ and 110◦ indicate tetrahedral ordering. The
peaks sharpen under quenching from the melt, but the
distribution does not evolve significantly during aging.
In contrast, simulations of metallic glass formers showed
a stronger sensitivity of this parameter to the quench
protocol [31], but most of those changes may be due to
rearrangements in the supercooled liquid state and not
in the glassy state.
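A minimal implementation of the 3-body angular correlation is to collect the internal angles over all central atoms and all pairs of their nearest neighbours, then histogram them; the sketch below (an illustration, not the authors' code) assumes the same rc = 1.45a cutoff as above.

```python
import numpy as np
from itertools import combinations

def bond_angles(positions, rc=1.45):
    """All internal angles theta (degrees) formed at each central atom
    by pairs of its nearest neighbours (separation < rc)."""
    angles = []
    for center in positions:
        bonds = positions - center
        d = np.linalg.norm(bonds, axis=1)
        nbrs = bonds[(d > 1e-9) & (d < rc)]
        for u, v in combinations(nbrs, 2):
            c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            angles.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
    return np.array(angles)

# P(theta) is then the normalised histogram of these angles, e.g.
# P, edges = np.histogram(bond_angles(pos), bins=90, range=(0, 180),
#                         density=True)
```

A perfectly tetrahedral environment produces a single angle of arccos(-1/3) ≈ 109.47°, one of the two peaks seen in Fig. 11(b).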
Another parameter that has been successful in classi-
fying glasses is the triangulated surface order parameter
[32],
S = \sum_{q} (6 - q)\,\nu_q, (10)
which measures the degree of quasi-crystalline order. The
surface coordination number, q, is defined for each atom
of the coordination shell as the number of neighbouring
atoms also residing in the coordination shell; νq is the
number of atoms in the coordination shell with surface
coordination q. Ordered systems have been identified
with S equal to 12 (icosahedron), 14, 15 and 16. Figure
11(c) shows the probability distribution for P (S) for the
melt and for glassy states with short and long wait times.
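Eq. (10) translates almost directly into code. In the sketch below (an illustration, not the authors' implementation), shell adjacency uses the same cutoff rc as the coordination shell itself; this adjacency criterion is an assumption, one plausible reading of the definition.

```python
import numpy as np

def surface_order(positions, center_idx, rc=1.45):
    """Triangulated surface order S = sum_q (6 - q) * nu_q (eq. 10).
    For each atom of the coordination shell, q is its surface
    coordination: the number of other shell atoms within rc of it
    (adjacency cutoff assumed equal to the shell cutoff)."""
    d = np.linalg.norm(positions - positions[center_idx], axis=1)
    shell = positions[(d > 1e-9) & (d < rc)]
    s = 0
    for atom in shell:
        dd = np.linalg.norm(shell - atom, axis=1)
        q = int(np.sum((dd > 1e-9) & (dd < rc)))  # surface coordination
        s += 6 - q
    return s
```

For a perfect 13-atom icosahedron (q = 5 for all twelve shell atoms) this gives S = 12, one of the ordered values quoted in the text.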
FIG. 11: Short-range order parameters: (a) the bond-
orientational parameter Q6, (b) the three-body angular corre-
lations P(θ) (θ in degrees), (c) the surface triangulated order S
(see text for discussion). x's show the melt state, circles show
the sample aged for tw = 500τLJ, and triangles show the sample
aged for 500,000τLJ.
The peak of the distribution moves toward lower S (more
ordered) upon cooling, and continues to evolve slowly in
the glass. The mean of S relative to the as-quenched
state, 〈S〉, is shown in Fig. 12 as a function of wait time
at two temperatures. We see that 〈S〉 is a logarithmi-
cally decreasing function of wait time. Even though this
is not a strong dependence, this order parameter is sig-
nificantly more sensitive to age than the others that have
been investigated.
Figure 12 also shows the order parameter 〈S〉 after the
ramped-up stress has been applied to the aged samples.
We can see that at T = 0.2u0/kB, some of the order
that developed during age is erased, while no appreciable
change occurs at the higher temperature T = 0.3u0/kB.
These results correlate well with the behavior of the ag-
ing exponent found in Fig. 4, where mechanical rejuve-
nation was found at lower temperatures but was much
less pronounced at higher T . The load is applied very
quickly, and most of the deformation in this regime is
affine, however, the strain during this time was similar
for both temperatures, therefore the effect is not simply
due to a change in density. More work is needed to clarify
the nature of the structural changes during rejuvenation.
FIG. 12: Percent change in the triangulated surface order
parameter with wait time as compared to the just-quenched
sample. Circles are for samples aged at zero pressure for the
time tw. Triangles are for the same samples immediately after
ramping up to the creep stress. For T = 0.2u0/kB this stress
is 0.4u0/a³, and for T = 0.3u0/kB the stress is 0.1u0/a³.
IV. CONCLUSIONS
We investigate the effects of aging on both macroscopic
creep response and underlying microscopic structure and
dynamics through simulations on a simple model polymer
glass. The model qualitatively reproduces key experi-
mental trends in the mechanical behavior of glasses under
sustained stress. We observe a power-law dependence of
the relaxation time on the wait time with an aging expo-
nent of approximately unity, and a decrease in the aging
exponent with increasing load that indicates the presence
of mechanical rejuvenation. The model creep compliance
curves can be fit in the short and long-time regimes using
Struik’s effective time formulation. Additionally, inves-
tigation of the microscopic dynamics through two-time
correlation functions has shown that, for our model glass,
the aging dynamics of the creep compliance exactly corre-
sponds to the increase in the cage escape or α-relaxation
time.
A detailed study of the entire distribution of parti-
cle displacements yields an interesting picture of the mi-
croscopic dynamics during aging. The distribution can
be described by the sum of two Gaussians, reflecting
the presence of caged and mobile particles. The frac-
tion of mobile particles and the amplitude of rearrange-
ments in the cage strongly increase at the cage escape
time. However, in analogy with results in colloidal glasses
[35], structural rearrangements occur even for times well
within the “caged” regime, and fairly independent of wait
time and stress. For our model glass, we find that the
entire distribution of displacements scales with age in
the same way as the mean. At times when the long-
time portion of the mean squared displacement overlaps,
the distribution of displacements at different wait times
completely superimpose, confirming that all of the me-
chanical relaxation times scale in the same way with age.
To characterize the evolution of the structure during
aging, we investigate several measures of short-range or-
der in our model glass. We find that the short-range order
does not evolve strongly during aging. The triangulated
surface order [32], however, shows a weak logarithmic de-
pendence on age. Results also show a change in structure
when a load is rapidly applied, and this seems to be cor-
related with an observed decrease in the aging exponent
under stress.
This study has characterized the dynamics of a model
glass prepared by a rapid quench below Tg, followed by
aging at constant T and subsequent application of a con-
stant load. For such simple thermo-mechanical histo-
ries, existing phenomenological models work well, how-
ever, the dynamics of glasses are in general much more
complex. For instance, large stresses in the non-linear
regime modify the aging dynamics and cause nontrivial
effects such as mechanical rejuvenation and over-aging
[10, 11]. Also, experiments have shown that the time-age
time superposition no longer holds when polymer glasses
undergo more complex thermal histories such as a tem-
perature jump [23]. The success of our study in analyzing
simple aging situations indicates that the present simula-
tion methodology will be able to shed more light on these
topics in the near future.
Acknowledgments
We thank the Natural Sciences and Engineering Coun-
cil of Canada (NSERC) for financial support. Computing
time was provided by WestGrid. Simulations were per-
formed with the LAMMPS molecular dynamics package
[36].
[1] J.-L. Barrat, M. V. Feigelman, J. Kurchan, and J. D.
(Eds.), Slow Relaxations and Nonequilibrium Dynam-
ics in Condensed Matter: Les Houches Session LXXVII
(Springer, 2002).
[2] C. A. Angell, Science 267, 1924 (1995).
[3] C. A. Angell, K. L. Ngai, G. B. McKenna, P. F. McMil-
lan, and S. W. Martin, J. Applied Physics 88, 3113
(2000).
[4] L. C. E. Struik, Physical Aging in Amorphous Polymers
and Other Materials (Elsevier, Amsterdam, 1978).
[5] J. M. Hutchinson, Prog. Polym. Sci. 20, 703 (1978).
[6] B. Abou, D. Bonn, and J. Meunier, Phys. Rev. E 64,
021510 (2001).
[7] M. Cloitre, R. Borrega, and L. Leibler, Phys. Rev. Lett.
85, 4819 (2000).
[8] T. Jonsson, J. Mattsson, C. Djurberg, F. A. Khan,
P. Nordblad, and P. Svedlindh, Phys. Rev. Lett. 75, 4138
(1995).
[9] V. Viasnoff and F. Lequeux, Phys. Rev. Lett. 89, 065701
(2002).
[10] D. J. Lacks and M. J. Osborne, Phys. Rev. Lett. 93,
255501 (2004).
[11] G. B. McKenna, J. Phys.: Condens. Matter 15, S737
(2003).
[12] L. C. E. Struik, Polymer 38, 4053 (1997).
[13] M. Utz, P. G. Debenedetti, and F. H. Stillinger, Phys.
Rev. Lett. 84, 1471 (2000).
[14] C. Monthus and J. P. Bouchaud, J. Phys. A: Math. Gen.
29, 3847 (1996).
[15] P. Sollich, F. Lequeux, P. Hebraud, and M. E. Cates,
Phys. Rev. Lett. 78, 2020 (1997).
[16] K. Chen and K. S. Schweizer (2007), preprint, personal
communication.
[17] W. Kob, J. Phys.: Condens. Mat. 11, R85 (1999).
[18] C. Bennemann, W. Paul, K. Binder, and B. Duenweg,
Phys. Rev. E 57, 843 (1998).
[19] W. Kob and J.-L. Barrat, Phys. Rev. Lett. 78, 4581
(1997).
[20] F. Varnik, L. Bocquet, and J.-L. Barrat, J. Chem. Phys
120, 2788 (2004).
[21] J. Rottler and M. O. Robbins, Phys. Rev. Lett. 95,
225504 (2005).
[22] K. Kremer and G. S. Grest, J. Chem. Phys 92, 5057
(1990).
[23] H. Montes, V. Viasnoff, S. Juring, and F. Lequeux, J.
Stat. Mech.: Theory and Experiment p. P03003 (2006).
[24] L. C. Brinson and T. S. Gates, Int. J. Solids and Struc-
tures 32, 827 (1995).
[25] G. B. McKenna and A. J. Kovacs, Polym. Eng. Sci. 24,
1131 (1984).
[26] K. Vollmayr-Lee and A. Zippelius, Phys. Rev. E 72,
041507 (2005).
[27] W. Kob, C. Donati, S. J. Plimpton, P. H. Poole, and
S. C. Glotzer, Phys. Rev. Lett. 79, 2827 (1997).
[28] E. R. Weeks and D. A. Weitz, Chem. Phys. 284, 361
(2002).
[29] H. E. Castillo and A. Parsaeian, Nature Physics 3, 26
(2007).
[30] Y. Shi and M. L. Falk, Phys. Rev. Lett. 95, 095502
(2005).
[31] F. Albano and M. L. Falk, J. Chem. Phys 122, 154508
(2005).
[32] Y. Shi and M. L. Falk, Phys. Rev. B 73, 214201 (2006).
[33] G. C. Cianci, R. E. Courtland, and E. R. Weeks, in
Flow Dynamics: The Second International Conference
on Flow Dynamics (AIP Conf. Proc., 2006), vol. 832,
pp. 21–25.
[34] S. Torquato, T. M. Truskett, and P. G. Debenedetti,
Phys. Rev. Lett. 84, 2064 (2000).
[35] R. E. Courtland and E. R. Weeks, J. Phys.:Condens.
Matter 15, S359 (2003).
[36] http://lammps.sandia.gov.
Joan R. Najita
National Optical Astronomy Observatory
John S. Carr
Naval Research Laboratory
Alfred E. Glassgold
University of California, Berkeley
Jeff A. Valenti
Space Telescope Science Institute
As the likely birthplaces of planets and an essential conduit for the buildup of stellar
masses, inner disks are of fundamental interest in star and planet formation. Studies of the
gaseous component of inner disks are of interest because of their ability to probe the dynamics,
physical and chemical structure, and gas content of this region. We review the observational
and theoretical developments in this field, highlighting the potential of such studies to, e.g.,
measure inner disk truncation radii, probe the nature of the disk accretion process, and chart the
evolution in the gas content of disks. Measurements of this kind have the potential to provide
unique insights on the physical processes governing star and planet formation.
1. INTRODUCTION
Circumstellar disks play a fundamental role in the for-
mation of stars and planets. A significant fraction of the
mass of a star is thought to be built up by accretion through
the disk. The gas and dust in the inner disk (r <10 AU)
also constitute the likely material from which planets form.
As a result, observations of the gaseous component of in-
ner disks have the potential to provide critical clues to the
physical processes governing star and planet formation.
From the planet formation perspective, probing the
structure, gas content, and dynamics of inner disks is of
interest, since they all play important roles in establish-
ing the architectures of planetary systems (i.e., planetary
masses, orbital radii, and eccentricities). For example, the
lifetime of gas in the inner disk (limited by accretion onto
the star, photoevaporation, and other processes) places an
upper limit on the timescale for giant planet formation (e.g.,
Zuckerman et al., 1995).
The evolution of gaseous inner disks may also bear on
the efficiency of orbital migration and the eccentricity evo-
lution of giant and terrestrial planets. Significant inward
orbital migration, induced by the interaction of planets with
a gaseous disk, is implied by the small orbital radii of extra-
solar giant planets compared to their likely formation dis-
tances (e.g., Ida and Lin, 2004). The spread in the orbital
radii of the planets (0.05–5 AU) has been further taken to in-
dicate that the timing of the dissipation of the inner disk sets
the final orbital radius of the planet (Trilling et al., 2002).
Thus, understanding how inner disks dissipate may impact
our understanding of the origin of planetary orbital radii.
Similarly, residual gas in the terrestrial planet region may
play a role in defining the final masses and eccentricities of
terrestrial planets. Such issues have a strong connection to
the question of the likelihood of solar systems like our own.
An important issue from the perspective of both star and
planet formation is the nature of the physical mechanism
that is responsible for disk accretion. Among the proposed
mechanisms, perhaps the foremost is the magnetorotational
instability (Balbus and Hawley, 1991) although other pos-
sibilities exist. Despite the significant theoretical progress
that has been made in identifying plausible accretion mech-
anisms (e.g., Stone et al., 2000), there is little observational
evidence that any of these processes are active in disks.
Studies of the gas in inner disks offer opportunities to probe
the nature of the accretion process.
For these reasons, it is of interest to probe the dynami-
cal state, physical and chemical structure, and the evolution
of the gas content of inner disks. We begin this Chapter
with a brief review of the development of this field and an
overview of how high resolution spectroscopy can be used
to study the properties of inner disks (Section 1). Previ-
ous reviews provide additional background on these top-
ics (e.g., Najita et al., 2000). In Sections 2 and 3, we re-
view recent observational and theoretical developments in
this field, first describing observational work to date on the
gas in inner disks, and then describing theoretical models
for the surface and interior regions of disks. In Section 4,
we look to the future, highlighting several topics that can be
explored using the tools discussed in Sections 2 and 3.
1.1 Historical Perspective
One of the earliest studies of gaseous inner disks was
the work by Kenyon and Hartmann on FU Orionis objects.
They showed that many of the peculiarities of these sys-
tems could be explained in terms of an accretion outburst in
a disk surrounding a low-mass young stellar object (YSO;
cf. Hartmann and Kenyon, 1996). In particular, the varying
spectral type of FU Ori objects in optical to near-infrared
spectra, evidence for double-peaked absorption line pro-
files, and the decreasing widths of absorption lines from
the optical to the near-infrared argued for an origin in an
optically thick gaseous atmosphere in the inner region of a
rotating disk. Around the same time, observations of CO
vibrational overtone emission, first in the BN object (Scov-
ille et al., 1983) and later in other high and low mass objects
(Thompson, 1985; Geballe and Persson, 1987; Carr, 1989),
revealed the existence of hot, dense molecular gas plausibly
located in a disk. One of the first models for the CO over-
tone emission (Carr, 1989) placed the emitting gas in an
optically-thin inner region of an accretion disk. However,
only the observations of the BN object had sufficient spec-
tral resolution to constrain the kinematics of the emitting gas.
The circumstances under which a disk would produce
emission or absorption lines of this kind were explored in
early models of the atmospheres of gaseous accretion disks
under the influence of external irradiation (e.g., Calvet et
al., 1991). The models interpreted the FU Ori absorption
features as a consequence of midplane accretion rates high
enough to overwhelm external irradiation in establishing
a temperature profile that decreases with disk height. At
lower accretion rates, the external irradiation of the disk was
expected to induce a temperature inversion in the disk atmo-
sphere, producing emission rather than absorption features
from the disk atmosphere. Thus the models potentially pro-
vided an explanation for the FU Ori absorption features and
CO emission lines that had been detected.
By PPIV (Najita et al., 2000), high-resolution spec-
troscopy had demonstrated that CO overtone emission
shows the dynamical signature of a rotating disk (Carr et
al., 1993; Chandler et al., 1993), thus confirming theoreti-
cal expectations and opening the door to the detailed study
of gaseous inner disks in a larger number of YSOs. The
detection of CO fundamental emission (Section 2.3) and
emission lines of hot H2O (Section 2.2) had also added new
probes of the inner disk gas.
Seven years later, at PPV, we find both a growing number
of diagnostics available to probe gaseous inner disks as well
as increasingly detailed information that can be gleaned
from these diagnostics. Disk diagnostics familiar from
PPIV have been used to infer the intrinsic line broaden-
ing of disk gas, possibly indicating evidence for turbulence
in disks (Section 2.1). They also demonstrate the differen-
tial rotation of disks, provide evidence for non-equilibrium
molecular abundances (Section 2.2), probe the inner radii
of gaseous disks (Section 2.3), and are being used to probe
the gas dissipation timescale in the terrestrial planet region
(Section 4.1). Along with these developments, new spectral
line diagnostics have been used as probes of the gas in inner
disks. These include transitions of molecular hydrogen at
UV, near-infrared, and mid-infrared wavelengths (Sections
2.4, 2.5) and the fundamental ro-vibrational transitions of
the OH molecule (Section 2.2). Additional potential diag-
nostics are discussed in Section 2.6.
1.2 High Resolution Spectroscopy of Inner Disks
The growing suite of diagnostics can be used to probe in-
ner disks using standard high resolution spectroscopic tech-
niques. Although inner disks are typically too small to
resolve spatially at the distance of the nearest star form-
ing regions, we can utilize the likely differential rotation
of the disk along with high spectral resolution to separate
disk radii in velocity. At the warm temperatures (∼100 K–
5000 K) and high densities of inner disks, molecules are ex-
pected to be abundant in the gas phase and sufficiently ex-
cited to produce rovibrational features in the infrared. Com-
plementary atomic transitions are likely to be good probes
of the hot inner disk and the photodissociated surface lay-
ers at larger radii. By measuring multiple transitions of
different species, we should therefore be able to probe the
temperatures, column densities, and abundances of gaseous
disks as a function of radius.
With high spectral resolution we can resolve individual
lines, which facilitates the detection of weak spectral fea-
tures. We can also work around telluric absorption fea-
tures, using the radial velocity of the source to shift its spec-
tral features out of telluric absorption cores. This approach
makes it possible to study a variety of atomic and molecular
species, including those present in the Earth’s atmosphere.
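As a concrete illustration of this telluric workaround, the non-relativistic Doppler shift shows how far a modest radial velocity displaces a line; the rest wavelength, radial velocity, and resolving power below are made-up illustrative numbers, not values from any source.

```python
# A source radial velocity of a few tens of km/s shifts a line by
# several resolution elements at high spectral resolution, enough to
# move it out of a telluric absorption core.
C_KMS = 299792.458   # speed of light, km/s
lam_rest = 4.6575    # micron; hypothetical CO fundamental line wavelength
v_rad = 30.0         # km/s; assumed source radial velocity
R = 50000.0          # assumed resolving power lambda / d_lambda

lam_obs = lam_rest * (1.0 + v_rad / C_KMS)
shift_in_resolution_elements = (v_rad / C_KMS) * R
print(lam_obs, shift_in_resolution_elements)
```

Here the line moves by about five resolution elements, which is why sources with suitable radial velocities can be observed even in species present in the Earth's atmosphere.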
Gaseous spectral features are expected in a variety of
situations. As already mentioned, significant vertical vari-
ation in the temperature of the disk atmosphere will pro-
duce emission (absorption) features if the temperature in-
creases (decreases) with height (Calvet et al., 1991; Mal-
bet and Bertout, 1991). In the general case, when the disk
is optically thick, observed spectral features measure only
the atmosphere of the disk and are unable to probe directly
the entire disk column density, a situation familiar from the
study of stellar atmospheres.
Gaseous emission features are also expected from re-
gions of the disk that are optically thin in the continuum.
Such regions might arise as a result of dust sublimation
(e.g., Carr, 1989) or as a consequence of grain growth and
planetesimal formation. In these scenarios, the disk would
have a low continuum opacity despite a potentially large gas
column density. Optically thin regions can also be produced
by a significant reduction in the total column density of the
disk. This situation might occur as a consequence of giant
planet formation, in which the orbiting giant planet carves
out a “gap” in the disk. Low column densities would also
be characteristic of a dissipating disk. Thus, we should be
able to use gaseous emission lines to probe the properties of
inner disks in a variety of interesting evolutionary phases.
2. OBSERVATIONS OF GASEOUS INNER DISKS
2.1 CO Overtone Emission
The CO molecule is expected to be abundant in the gas
phase over a wide range of temperatures, from the tem-
perature at which it condenses on grains (∼20 K) up to its
thermal dissociation temperature (∼4000 K at the densities
of inner disks). As a result, CO transitions are expected
to probe disks from their cool outer reaches (>100 AU) to
their innermost radii. Among these, the overtone transitions
of CO (∆v=2, λ=2.3µm) were the emission line diagnostics
first recognized to probe the gaseous inner disk.
CO overtone emission is detected in both low and high
mass young stellar objects, but only in a small fraction of
the objects observed. It appears more commonly among
higher luminosity objects. Among the lower luminosity
stars, it is detected from embedded protostars or sources
with energetic outflows (Geballe and Persson, 1987; Carr,
1989; Greene and Lada, 1996; Hanson et al., 1997; Luh-
man et al., 1998; Ishii et al., 2001; Figueredo et al., 2002;
Doppmann et al., 2005). The conditions required to excite
the overtone emission, warm temperatures (≳2000 K) and
high densities (>10¹⁰ cm⁻³), may be met in disks (Scoville
et al., 1983; Carr, 1989; Calvet et al., 1991), inner winds
(Carr, 1989), or funnel flows (Martin, 1997).
High resolution spectroscopy can be used to distinguish
among these possibilities. The observations typically find
strong evidence for the disk interpretation. The emission
line profiles of the v=2–0 bandhead in most cases show the
characteristic signature of bandhead emission from sym-
metric, double-peaked line profiles originating in a rotating
disk (e.g., Carr et al., 1993; Chandler et al., 1993; Na-
jita et al., 1996; Blum et al., 2004). The symmetry of the
observed line profiles argues against the likelihood that the
emission arises in a wind or funnel flow, since inflowing
or outflowing gas is expected to produce line profiles with
red- or blue-shifted absorption components (alternatively
line asymmetries) of the kind that are seen in the hydrogen
Balmer lines of T Tauri stars (TTS). Thus high resolution
spectra provide strong evidence for rotating inner disks.
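The double-peaked signature can be reproduced with a toy model: sum the emission from annuli of a geometrically thin disk in Keplerian rotation between assumed inner and outer radii. All numbers below (stellar mass, radii, inclination, brightness power law) are illustrative choices, not fits to any observed source.

```python
import numpy as np

G, MSUN, AU = 6.674e-8, 1.989e33, 1.496e13  # cgs units

def disk_line_profile(mstar=1.0, r_in=0.05, r_out=0.3, incl=60.0,
                      nr=200, nphi=360, pexp=1.5):
    """Schematic double-peaked line profile from a Keplerian disk.
    Each annulus contributes at v_los = v_kep(r) * sin(i) * cos(phi);
    surface brightness is assumed to fall off as r**-pexp."""
    vgrid = np.linspace(-150.0, 150.0, 301)       # km/s
    prof = np.zeros_like(vgrid)
    radii = np.linspace(r_in, r_out, nr) * AU     # cm
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    sin_i = np.sin(np.radians(incl))
    for r in radii:
        vkep = np.sqrt(G * mstar * MSUN / r) / 1e5   # km/s
        vlos = vkep * sin_i * np.cos(phi)
        weight = (r / radii[0]) ** (-pexp) * r       # brightness * area ~ r
        idx = np.clip(np.searchsorted(vgrid, vlos), 0, vgrid.size - 1)
        np.add.at(prof, idx, weight)                 # accumulate into bins
    return vgrid, prof / prof.max()

vg, prof = disk_line_profile()
# the two horns sit near the projected Keplerian velocity of the
# outer emitting radius; the high-velocity wings trace the inner edge
```

Symmetric horns with a central dip, as computed here, are the signature seen in the v=2–0 bandhead profiles, while inflow or outflow would instead skew the profile.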
The velocity profiles of the CO overtone emission are
normally very broad (>100km s−1). In lower mass stars
(∼1M⊙), the emission profiles show that the emission ex-
tends from very close to the star, ∼0.05 AU, out to ∼0.3 AU
(e.g., Chandler et al., 1993; Najita et al., 2000). The small
radii are consistent with the high excitation temperatures
measured for the emission (∼1500–4000K). Velocity re-
solved spectra have also been modeled in a number of high
mass stars (Blum et al., 2004; Bik and Thi, 2004), where the
CO emission is found to arise at radii ∼ 3AU.
The large near-infrared excesses of the sources in which
CO overtone emission is detected imply that the warm emit-
ting gas is located in a vertical temperature inversion re-
gion in the disk atmosphere. Possible heating sources for
the temperature inversion include: external irradiation by
the star at optical through UV wavelengths (e.g., Calvet
et al., 1991; D’Alessio et al., 1998) or by stellar X-rays
(Glassgold et al., 2004; henceforth GNI04); turbulent heat-
ing in the disk atmosphere generated by a stellar wind flow-
ing over the disk surface (Carr et al., 1993); or the dissi-
pation of turbulence generated by disk accretion (GNI04).
Detailed predictions of how these mechanisms heat the
gaseous atmosphere are needed in order to use the observed
bandhead emission strengths and profiles to investigate the
origin of the temperature inversion.
The overtone emission provides an additional clue that
suggests a role for turbulent dissipation in heating disk at-
mospheres. Since the CO overtone bandhead is made up
of closely spaced lines with varying inter-line spacing and
optical depth, the emission is sensitive to the intrinsic line
broadening of the emitting gas (as long as the gas is not op-
tically thin). It is therefore possible to distinguish intrinsic
line broadening from macroscopic motions such as rotation.
In this way, one can deduce from spectral synthesis model-
ing that the lines are suprathermally broadened, with line
widths approximately Mach 2 (Carr et al., 2004; Najita et
al., 1996). Hartmann et al. (2004) find further evidence for
turbulent motions in disks based on high resolution spec-
troscopy of CO overtone absorption in FU Ori objects.
Thus disk atmospheres appear to be turbulent. The tur-
bulence may arise as a consequence of turbulent angular
momentum transport in disks, as in the magnetorotational
instability (MRI; Balbus and Hawley, 1991) or the global
baroclinic instability (Klahr and Bodenheimer, 2003). Tur-
bulence in the upper disk atmosphere may also be generated
by a wind blowing over the disk surface.
2.2 Hot Water and OH Fundamental Emission
Water molecules are also expected to be abundant in
disks over a range of disk radii, from the temperature at
which water condenses on grains (∼150 K) up to its thermal
dissociation temperature (∼2500 K). Like the CO overtone
transitions, the rovibrational transitions of water are also ex-
pected to probe the high density conditions in disks. While
the strong telluric absorption produced by water vapor in
the Earth’s atmosphere will restrict the study of cool wa-
ter to space or airborne platforms, it is possible to observe
from the ground water that is much hotter than the Earth’s
atmosphere. Very strong emission from hot water can be
detected in the near-infrared even at low spectral resolution
(e.g., SVS-13; Carr et al., 2004). More typically, high reso-
lution spectroscopy of individual lines is required to detect
much weaker emission lines.
For example, emission from individual lines of water in
the K- and L-bands have been detected in a few stars (both
low and high mass) that also show CO overtone emission
(Carr et al., 2004; Najita et al., 2000; Thi and Bik, 2005).
Velocity resolved spectra show that the widths of the water
lines are consistently narrower than those of the CO emis-
sion lines. Spectral synthesis modeling further shows that
the excitation temperature of the water emission (typically
∼1500 K), is less than that of the CO emission. These re-
sults are consistent with both the water and CO originat-
ing in a differentially rotating disk with an outwardly de-
creasing temperature profile. That is, given the lower dis-
sociation temperature of water (∼2500 K) compared to CO
(∼4000 K), CO is expected to extend inward to smaller radii
than water, i.e., to higher velocities and temperatures.
The ∆v=1 OH fundamental transitions at 3.6µm have
also been detected in the spectra of two actively accreting
sources, SVS-13 and V1331 Cyg, that also show CO over-
tone and hot water emission (Carr et al., in preparation).
As shown in Fig. 1, these features arise in a region that
is crowded with spectral lines of water and perhaps other
species. Determining the strengths of the OH lines will,
therefore, require making corrections for spectral features
that overlap closely in wavelength.
Spectral synthesis modeling of the detected CO, H2O
and OH features reveals relative abundances that depart sig-
nificantly from chemical equilibrium (cf. Prinn, 1993), with
the relative abundances of H2O and OH a factor of 2–10
below that of CO in the region of the disk probed by both
diagnostics (Carr et al., 2004; Carr et al., in preparation;
see also Thi and Bik, 2005). These abundance ratios may
arise from strong vertical abundance gradients produced by
the external irradiation of the disk (see Section 3.4).
2.3 CO Fundamental Emission
The fundamental (∆v=1) transitions of CO at 4.6µm are
an important probe of inner disk gas in part because of their
broader applicability compared, e.g., to the CO overtone
lines. As a result of their comparatively small A-values, the
CO overtone transitions require large column densities of
warm gas (typically in a disk temperature inversion region)
in order to produce detectable emission. Such large column
densities of warm gas may be rare except in sources with
the largest accretion rates, i.e., those best able to tap a large
Fig. 1.— OH fundamental ro-vibrational emission from SVS-13
on a relative flux scale.
Fig. 2.— Gaseous inner disk radii for TTS from CO fundamental
emission (filled squares) compared with corotation radii for the
same sources. Also shown are dust inner radii from near-infrared
interferometry (filled circles; Akeson et al., 2005a,b) or spectral
energy distributions (open circles; Muzerolle et al., 2003). The
solid and dashed lines indicate an inner radius equal to, twice,
and 1/2 the corotation radius. The points for the three stars with
measured inner radii for both the gas and dust are connected by
dotted lines. Gas is observed to extend inward of the dust inner
radius and typically inward of the corotation radius.
accretion energy budget and heat a large column density
of the disk atmosphere. In contrast, the CO fundamental
transitions, with their much larger A-values, should be de-
tectable in systems with more modest column densities of
warm gas, i.e., in a broader range of sources. This is borne
out in high resolution spectroscopic surveys for CO funda-
mental emission from TTS (Najita et al., 2003) and Herbig
AeBe stars (Blake and Boogert, 2004) which detect emis-
sion from essentially all sources with accretion rates typical
of these classes of objects.
In addition, the lower temperatures required to excite
the CO v=1–0 transitions make these transitions sensitive
to cooler gas at larger disk radii, beyond the region probed
by the CO overtone lines. Indeed, the measured line profiles
for the CO fundamental emission are broad (typically 50–
100km s−1 FWHM) and centrally peaked, in contrast to
the CO overtone lines which are typically double-peaked.
These velocity profiles suggest that the CO fundamental
emission arises from a wide range of radii, from ≲0.1 AU
out to 1–2 AU in disks around low mass stars, i.e., the ter-
restrial planet region of the disk (Najita et al., 2003).
CO fundamental emission spectra typically show sym-
metric emission lines from multiple vibrational states (e.g.,
v=1–0, 2–1, 3–2); lines of 13CO can also be detected when
the emission is strong and optically thick. The ability to
study multiple vibrational states as well as isotopic species
within a limited spectral range makes the CO fundamental
lines an appealing choice to probe gas in the inner disk over
a range of temperatures and column densities. The relative
strengths of the lines also provide insight into the excitation
mechanism for the emission.
In one source, the Herbig AeBe star HD141569, the
excitation temperature of the rotational levels (∼200 K) is
much lower than the excitation temperature of the vibra-
tional levels (v=6 is populated), which is suggestive of UV
pumping of cold gas (Brittain et al., 2003). The emis-
sion lines from the source are narrow, indicating an origin
at ≳17 AU. The lack of fluorescent emission from smaller
radii strongly suggests that the region within 17 AU is de-
pleted of gaseous CO. Thus detailed models of the fluores-
cence process can be used to constrain the gas content in the
inner disk region (S. Brittain, personal communication).
Thus far HD141569 appears to be an unusual case. For
the majority of sources from which CO fundamental is de-
tected, the relative line strengths are consistent with emis-
sion from thermally excited gas. They indicate typical ex-
citation temperatures of 1000–1500K and CO column den-
sities of ∼10^18 cm−2 for low-mass stars. These temper-
atures are much warmer than the dust temperatures at the
same radii implied by spectral energy distributions (SEDs)
and the expectations of some disk atmosphere models (e.g.,
D’Alessio et al., 1998). The temperature difference can be
accounted for by disk atmosphere models that allow for the
thermal decoupling of the gas and dust (Section 3.2).
For CTTS systems in which the inclination is known, we
can convert a measured HWZI (half-width at zero intensity)
velocity for the emission to an inner radius, assuming
Keplerian rotation. The CO inner radii, thus derived, are typ-
ically ∼0.04 AU for TTS (Najita et al., 2003; Carr et al.,
in preparation), smaller than the inner radii that are mea-
sured for the dust component either through interferometry
(e.g., Eisner et al., 2005; Akeson et al., 2005a; Colavita et
al., 2003; see chapter by Millan-Gabet et al.) or through the
interpretation of SEDs (e.g., Muzerolle et al., 2003). This
shows that gaseous disks extend inward to smaller radii than
dust disks, a result that is not surprising given the relatively
low sublimation temperature of dust grains (∼1500–2000
K) compared to the CO dissociation temperature (∼4000
K). These results are consistent with the suggestion that the
inner radius of the dust disk is defined by dust sublimation
rather than by physical truncation (Muzerolle et al., 2003;
Eisner et al., 2005).
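The conversion from line width to inner radius described above amounts to assuming the fastest-moving gas orbits at the Keplerian speed, projected by sin(i). A minimal sketch of that arithmetic, with purely illustrative stellar mass, line width, and inclination (not values quoted for any particular source):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]

def keplerian_inner_radius(v_hwzi_kms, incl_deg, m_star_msun):
    """Inner radius implied by the HWZI velocity of a disk emission line.
    Deproject by sin(i), then invert v = sqrt(GM/R) to get R = GM/v^2 (in AU)."""
    v = (v_hwzi_kms * 1e3) / math.sin(math.radians(incl_deg))  # true orbital speed [m/s]
    return G * m_star_msun * M_SUN / v**2 / AU

# Illustrative TTS values: 0.5 Msun star, 100 km/s HWZI, 45 deg inclination
r_in = keplerian_inner_radius(100.0, 45.0, 0.5)
print(f"inner radius ~ {r_in:.3f} AU")
```

With these assumed inputs the result is a few hundredths of an AU, of the same order as the ∼0.04 AU radii quoted in the text.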
Perhaps more interestingly, the inner radius of the CO
emission appears to extend up to and usually within the
corotation radius (i.e., the radius at which the disk rotates
at the same angular velocity as the star; Fig. 2). In the cur-
rent paradigm for TTS, a strong stellar magnetic field trun-
cates the disk near the corotation radius. The coupling be-
tween the stellar magnetic field and the gaseous inner disk
regulates the rotation of the star, bringing the star into coro-
tation with the disk at the coupling radius. From this re-
gion emerge both energetic (X-)winds and magnetospheric
accretion flows (funnel flows; Shu et al., 1994). The ve-
locity extent of the CO fundamental emission shows that
gaseous circumstellar disks indeed extend inward beyond
Fig. 3.— The distribution of gaseous inner radii, measured with
the CO fundamental transitions, compared to the distribution of
orbital radii of short-period extrasolar planets. A minimum plane-
tary orbital radius of ∼0.04 AU is similar to the minimum gaseous
inner radius inferred from the CO emission line profiles.
the dust destruction radius to the corotation radius (and be-
yond), providing the material that feeds both X-winds and
funnel flows. Such small coupling radii are consistent with
the rotational rates of young stars.
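The corotation radius invoked above follows from equating the disk's Keplerian angular velocity to the stellar spin, Ω* = 2π/P. A short sketch with an assumed (hypothetical) rotation period:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]

def corotation_radius(m_star_msun, p_rot_days):
    """Radius where the Keplerian angular velocity sqrt(GM/R^3)
    equals the stellar angular velocity 2*pi/P; returns AU."""
    p = p_rot_days * 86400.0
    return (G * m_star_msun * M_SUN * p**2 / (4 * math.pi**2)) ** (1 / 3) / AU

# Illustrative values: a 0.5 Msun TTS rotating with a 7-day period
r_co = corotation_radius(0.5, 7.0)
print(f"corotation radius ~ {r_co:.3f} AU")
```

For these assumed numbers the corotation radius comes out near 0.06 AU, comparable to the gaseous inner radii measured from CO.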
It is also interesting to compare the distribution of inner
radii for the CO emission with the orbital radii of the “close-
in” extrasolar giant planets (Fig. 3). Extra-solar planets
discovered by radial velocity surveys are known to pile-up
near a minimum radius of 0.04 AU. The similarity between
these distributions is roughly consistent with the idea that
the truncation of the inner disk can halt the inward orbital
migration of a giant planet (Lin et al., 1996). In detail, how-
ever, the planet is expected to migrate slightly inward of
the truncation radius, to the 2:1 resonance, an effect that
is not seen in the present data. A possible caveat is that
the wings of the CO lines may not trace Keplerian motion
or that the innermost gas is not dynamically significant. It
would be interesting to explore this issue further since the
results impact our understanding of planet formation and
the origin of planetary architectures. In particular, the ex-
istence of a stopping mechanism implies a lower efficiency
for giant planet formation, e.g., compared to a scenario in
which multiple generations of planets form and only the last
generation survives (e.g., Trilling et al., 2002).
2.4 UV Transitions of Molecular Hydrogen
Among the diagnostics of inner disk gas developed since
PPIV, perhaps the most interesting are those of H2. H2 is
presumably the dominant gaseous species in disks, due to
high elemental abundance, low depletion onto grains, and
robustness against dissociation. Despite its expected ubiq-
uity, H2 is difficult to detect because permitted electronic
transitions are in the far ultraviolet (FUV) and accessible
only from space. Optical and rovibrational IR transitions
have radiative rates that are 14 orders of magnitude smaller.
Fig. 4.— Lyα emission from TW Hya, an accreting T Tauri star,
and a reconstruction of the Lyα profile seen by the circumstel-
lar H2. Each observed H2 progression (with a common excited
state) yields a single point in the reconstructed Lyα profile. The
wavelength of each point in the reconstructed Lyα profile corre-
sponds to the wavelength of the upward transition that pumps the
progression. The required excitation energies for the H2 before the
pumping is indicated in the inset energy level diagram. There are
no low excitation states of H2 with strong transitions that overlap
Lyα. Thus, the H2 must be very warm to be preconditioned for
pumping and subsequent fluorescence.
Considering only radiative transitions with spontaneous
rates above 10^7 s−1, H2 has about 9000 possible Lyman-
band (B-X) transitions from 850-1650 Å and about 5000
possible Werner-band (C-X) transitions from 850-1300 Å
(Abgrall et al., 1993a,b). However, only about 200 FUV
transitions have actually been detected in spectra of accret-
ing TTS. Detected H2 emission lines in the FUV all origi-
nate from about two dozen radiatively pumped states, each
more than 11 eV above ground. These pumped states of
H2 are the only ones connected to the ground electronic
configuration by strong radiative transitions that overlap the
broad Lyα emission that is characteristic of accreting TTS
(see Fig. 4). Evidently, absorption of broad Lyα emission
pumps the H2 fluorescence. The two dozen strong H2 tran-
sitions that happen to overlap the broad Lyα emission are
all pumped out of high v and/or high J states at least 1 eV
above ground (see inset in Fig. 4). This means some mech-
anism must excite H2 in the ground electronic configura-
tion, before Lyα pumping can be effective. If the excitation
mechanism is thermal, then the gas must be roughly 103 K
to obtain a significant H2 population in excited states.
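The need for roughly 10^3 K gas can be made quantitative with a Boltzmann factor: the relative population of a level ∼1 eV above ground rises by dozens of orders of magnitude between molecular-cloud temperatures and ∼10^3 K. A back-of-the-envelope sketch (ignoring level degeneracies and the partition function):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant [eV/K]

def boltzmann_factor(e_ev, t_k):
    """Relative population exp(-E/kT) of a level an energy E above ground."""
    return math.exp(-e_ev / (K_B_EV * t_k))

# Population of a 1 eV level at increasing gas temperatures
for t in (100, 300, 1000, 2500):
    print(f"T = {t:5d} K: exp(-E/kT) for E = 1 eV -> {boltzmann_factor(1.0, t):.2e}")
```

At 100 K the factor is effectively zero (~10^-51), while by 1000 K it reaches ~10^-5, enough excited H2 for Lyα pumping to operate on.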
H2 emission is a ubiquitous feature of accreting TTS.
Fluoresced H2 is detected in the spectra of 22 out of 24 ac-
creting TTS observed in the FUV by HST/STIS (Herczeg et
al., 2002; Walter et al., 2003; Calvet et al., 2004; Bergin et
al., 2004; Gizis et al., 2005; Herczeg et al., 2005; unpub-
lished archival data). Similarly, H2 is detected in all 8 ac-
creting TTS observed by HST/GHRS (Ardila et al., 2002)
and all 4 published FUSE spectra (Wilkinson et al., 2002;
Herczeg et al., 2002; 2004; 2005). Fluoresced H2 was even
detected in 13 out of 39 accreting TTS observed by IUE,
despite poor sensitivity (Brown et al., 1981; Valenti et al.,
2000). Fluoresced H2 has not been detected in FUV spec-
tra of non-accreting TTS, despite observations of 14 stars
with STIS (Calvet et al., 2004; unpublished archival data),
1 star with GHRS (Ardila et al., 2002), and 19 stars with
IUE (Valenti et al., 2000). However, the existing observa-
tions are not sensitive enough to prove that the circumstellar
H2 column decreases contemporaneously with the dust con-
tinuum of the inner disk. When accretion onto the stellar
surface stops, fluorescent pumping becomes less efficient
because the strength and breadth of Lyα decrease signifi-
cantly and because the H2 excitation needed to prime the
pumping mechanism may weaken. COS, if installed on
HST, will have the sensitivity to set interesting limits on H2
around non-accreting TTS in the TW Hya association.
The intrinsic Lyα profile of a TTS is not observable at
Earth, except possibly in the far wings, due to absorption by
neutral hydrogen along the line of sight. However, observa-
tions of H2 line fluxes constrain the Lyα profile seen by the
fluoresced H2. The rate at which a particular H2 upward
transition absorbs Lyα photons is equal to the total rate
of observed downward transitions out of the pumped state,
corrected for missing lines, dissociation losses, and propa-
gation losses. If the total number of excited H2 molecules
before pumping is known (e.g., by assuming a temperature),
then the inferred pumping rate yields a Lyα flux point at the
wavelength of each pumping transition (Fig. 4).
Herczeg et al. (2004) applied this type of analysis to
TW Hya, treating the circumstellar H2 as an isothermal,
self-absorbing slab. Fig. 4 shows reconstructed Lyα flux
points for the upward pumping transitions, assuming the
fluoresced H2 is at 2500 K. The smoothness of the recon-
structed Lyα flux points implies that the H2 level popula-
tions are consistent with thermal excitation. Assuming an
H2 temperature warmer or cooler by a few hundred degrees
leads to unrealistic discontinuities in the reconstructed Lyα
flux points. The reconstructed Lyα profile has a narrow
absorption component that is blueshifted by −90 kms−1,
presumably due to an intervening flow.
The spatial morphology of fluoresced H2 around TTS
is diverse. Herczeg et al. (2002) used STIS to observe TW
Hya with 50 mas angular resolution, corresponding to a spa-
tial resolution of 2.8 AU at a distance of 56 pc, finding no
evidence that the fluoresced H2 is extended. At the other
extreme, Walter et al. (2003) detected fluoresced H2 up to
9 arcsec from T Tau N, but only in progressions pumped by
H2 transitions near the core of Lyα. Fluoresced H2 lines
have a main velocity component at or near the stellar radial
velocity and perhaps a weaker component that is blueshifted
by tens of km s−1 (Herczeg et al., 2006). These two compo-
nents are attributed to the disk and the outflow, respectively.
TW Hya has H2 lines with no net velocity shift, consistent
with formation in the face-on disk (Herczeg et al., 2002).
On the other hand, RU Lup has H2 lines that are blueshifted
by 12 kms−1, suggesting formation in an outflow. In both
of these stars, absorption in the blue wing of the C II 1335
Å wind feature strongly attenuates H2 lines that happen to
overlap in wavelength, so in either case H2 forms inside the
warm atomic wind (Herczeg et al., 2002; 2005).
The velocity widths of fluoresced H2 lines (after re-
moving instrumental broadening) range from 18 km s−1 to
28 km s−1 for the 7 accreting TTS observed at high spec-
tral resolution with STIS (Herczeg et al., 2006). Line width
does not correlate well with inclination. For example, TW
Hya (nearly face-on disk) and DF Tau (nearly edge-on disk)
both have line widths of 18 km s−1. Thermal broadening
is negligible, even at 2000 K. Keplerian motion, enforced
corotation, and outflow may all contribute to H2 line width
in different systems. More data are needed to understand
how velocity widths (and shifts) depend on disk inclination,
accretion rate, and other factors.
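The claim that thermal broadening is negligible is easy to check: for H2 at 2000 K the Doppler FWHM is only a few km/s, well below the observed 18–28 km/s line widths. A short sketch using standard kinetic-theory relations (the numbers below are illustrative, not taken from the text):

```python
import math

K_B = 1.381e-23   # Boltzmann constant [J/K]
M_H = 1.673e-27   # hydrogen atom mass [kg]

def thermal_fwhm_kms(t_k, mass_amu):
    """Doppler FWHM (km/s) of a thermally broadened line:
    FWHM = 2*sqrt(ln 2) * v_th, with v_th = sqrt(2kT/m)."""
    v_th = math.sqrt(2 * K_B * t_k / (mass_amu * M_H))
    return 2 * math.sqrt(math.log(2)) * v_th / 1e3

print(f"H2 at 2000 K: thermal FWHM ~ {thermal_fwhm_kms(2000, 2):.1f} km/s")
```

The result is roughly 7 km/s, so Keplerian motion, enforced corotation, or outflow must dominate the observed widths.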
2.5 Infrared Transitions of Molecular Hydrogen
Transitions of molecular hydrogen have also been stud-
ied at longer wavelengths, in the near- and mid-infrared.
The v=1–0 S(1) transition of H2 (at 2µm) has been de-
tected in emission in a small sample of classical T Tauri
stars (CTTS) and one weak T Tauri star (WTTS; Bary et
al., 2003 and references therein). The narrow emission
lines (≲10 km s−1), if arising in a disk, indicate an origin at
large radii, probably beyond 10 AU. The high temperatures
required to excite these transitions thermally (1000s K), in
contrast to the low temperatures expected for the outer disk,
suggest that the emission is non-thermally excited, possibly
by X-rays (Bary et al., 2003). The measurement of other
rovibrational transitions of H2 is needed to confirm this.
The gas mass detectable by this technique depends on
the depth to which the exciting radiation can penetrate the
disk. Thus, the emission strength may be limited either by
the strength of the radiation field, if the gas column density
is high, or by the mass of gas present, if the gas column
density is low. While it is therefore difficult to measure to-
tal gas masses with this approach, clearly non-thermal pro-
cesses can light up cold gas, making it easier to detect.
Emission from a WTTS is surprising since WTTS are
thought to be largely devoid of circumstellar dust and gas,
given the lack of infrared excesses and the low accre-
tion rates for these systems. The Bary et al. results call
this assumption into question and suggest that longer lived
gaseous reservoirs may be present in systems with low ac-
cretion rates. We return to this issue in Section 4.1.
At longer wavelengths, the pure rotational transitions of
H2 are of considerable interest because molecular hydrogen
carries most of the mass of the disk, and these mid-infrared
transitions are capable of probing the ∼100 K temperatures
that are expected for the giant planet region of the disk.
These transitions present both advantages and challenges
as probes of gaseous disks. On the one hand, their small A-
values make them sensitive, in principle, to very large gas
masses (i.e., the transitions do not become optically thick
until large gas column densities NH ≈ 10^23–10^24 cm−2 are
reached). On the other hand, the small A-values also im-
ply small critical densities, which allows the possibility of
contaminating emission from gas at lower densities not as-
sociated with the disk, including shocks in outflows and UV
excitation of ambient gas.
In considering the detectability of H2 emission from
gaseous disks mixed with dust, one issue is that the dust
continuum can become optically thick over column densi-
ties NH ≪ 10^23–10^24 cm−2. Therefore, in a disk that is
optically thick in the continuum (i.e., in CTTS), H2 emis-
sion may probe smaller column densities. In this case, the
line-to-continuum contrast may be low unless there is a
strong temperature inversion in the disk atmosphere, and
high signal-to-noise observations may be required to detect
the emission. In comparison, in disk systems that are op-
tically thin in the continuum (e.g., WTTS), H2 could be a
powerful probe as long as there are sufficient heating mech-
anisms (e.g., beyond gas-grain coupling) to heat the H2.
A thought-provoking result from ISO was the report
of approximately Jupiter-masses of warm gas residing in
∼20 Myr old debris disk systems (Thi et al., 2001) based
on the detection of the 28 µm and 17 µm lines of H2. This
result was surprising because of the advanced age of the
sources in which the emission was detected; gaseous reser-
voirs are expected to dissipate on much shorter timescales
(Section 4.1). This intriguing result is, thus far, uncon-
firmed by either ground-based studies (Richter et al., 2002;
Sheret et al., 2003; Sako et al., 2005) or studies with Spitzer
(e.g., Chen et al., 2004).
Nevertheless, ground-based studies have detected pure
rotational H2 emission from some sources. Detections to
date include AB Aur (Richter et al., 2002). The narrow
width of the emission in AB Aur (∼10 kms−1 FWHM),
if arising in a disk, locates the emission beyond the giant
planet region. Thus, an important future direction for these
studies is to search for H2 emission in a larger number of
sources and at higher velocities, in the giant planet region
of the disk. High resolution mid-IR spectrographs on >3-
m telescopes will provide the greater sensitivity needed for
such studies.
2.6 Potential Disk Diagnostics
In a recent development, Acke et al. (2005) have reported
high resolution spectroscopy of the [OI] 6300 Å line in Her-
big AeBe stars. The majority of the sources show a narrow
(<50 km s−1 FWHM), fairly symmetric emission compo-
nent centered at the radial velocity of the star. In some
cases, double-peaked lines are detected. These features are
interpreted as arising in a surface layer of the disk that is
irradiated by the star. UV photons incident on the disk sur-
face are thought to photodissociate OH and H2O, produc-
ing a non-thermal population of excited neutral oxygen that
decays radiatively, producing the observed emission lines.
Fractional OH abundances of ∼10^−7–10^−6 are needed to
account for the observed line luminosities.
Another recent development is the report of strong ab-
sorption in the rovibrational bands of C2H2, HCN, and CO2
in the 13–15 µm spectrum of a low-mass class I source in
Ophiuchus, IRS 46 (Lahuis et al., 2006). The high excita-
tion temperature of the absorbing gas (400-900 K) suggests
an origin close to the star, an interpretation that is consis-
tent with millimeter observations of HCN which indicate a
source size ≪100 AU. Surprisingly, high dispersion obser-
vations of rovibrational CO (4.7 µm) and HCN (3.0 µm)
show that the molecular absorption is blueshifted relative
to the molecular cloud. If IRS 46 is similarly blueshifted
relative to the cloud, the absorption may arise in the at-
mosphere of a nearly edge-on disk. A disk origin for the
absorption is consistent with the observed relative abun-
dances of C2H2, HCN, and CO2 (10^−6–10^−5), which are
close to those predicted by Markwick et al. (2002) for the
inner region of gaseous disks (≲2 AU; see Section 3). Alter-
natively, if IRS 46 has approximately the same velocity as
the cloud, then the absorbing gas is blueshifted with respect
to the star and the absorption may arise in an outflowing
wind. Winds launched from the disk, at AU distances, may
have molecular abundances similar to those observed if the
chemical properties of the wind change slowly as the wind
is launched. Detailed calculations of the chemistry of disk
winds are needed to explore this possibility. The molecular
abundances in the inner disk midplane (Section 3.3) provide
the initial conditions for such studies.
3. THERMAL-CHEMICAL MODELING
3.1 General Discussion
The results discussed in the previous section illustrate
the growing potential for observations to probe gaseous in-
ner disks. While, as already indicated, some conclusions
can be drawn directly from the data coupled with simple
spectral synthesis modeling, harnessing the full diagnos-
tic potential of the observations will likely rely on detailed
models of the thermal-chemical structure (and dynamics)
of disks. Fortunately, the development of such models has
been an active area of recent research. Although much of
the effort has been devoted to understanding the outer re-
gions of disks (∼100 AU; e.g., Langer et al., 2000; chap-
ters by Bergin et al. and Dullemond et al.), recent work has
begun to focus on the region within 10 AU.
Because disks are intrinsically complex structures, the
models include a wide array of processes. These encom-
pass heating sources such as stellar irradiation (including
far UV and X-rays) and viscous accretion; chemical pro-
cesses such as photochemistry and grain surface reactions;
and mass transport via magnetocentrifugal winds, surface
evaporative flows, turbulent mixing, and accretion onto the
star. The basic goal of the models is to calculate the den-
sity, temperature, and chemical abundance structures that
result from these processes. Ideally, the calculation would
be fully self-consistent, although approximations are made
to simplify the problem.
A common simplification is to adopt a specified density
distribution and then solve the rate equations that define the
chemical model. This is justifiable where the thermal and
chemical timescales are short compared to the dynamical
timescale. A popular choice is the α-disk model (Shakura
and Sunyaev, 1973; Lynden-Bell and Pringle, 1974) in
which a phenomenological parameter α characterizes the
efficiency of angular momentum transport; its vertically av-
eraged value is estimated to be ∼10^−2 for T Tauri disks
on the basis of measured accretion rates (Hartmann et al.,
1998). Both vertically isothermal α-disk models and the
Hayashi minimum mass solar nebula (e.g., Aikawa et al.,
1999) were adopted in early studies.
An improved method removes the assumption of vertical
isothermality and calculates the vertical thermal structure of
the disk including viscous accretion heating at the midplane
(specified by α) and stellar radiative heating under the as-
sumption that the gas and dust temperatures are the same
(Calvet et al., 1991; D’Alessio et al., 1999). Several chemi-
cal models have been built using the D’Alessio density dis-
tribution (e.g., Aikawa and Herbst, 1999; GNI04; Jonkheid
et al., 2004).
Starting about 2001, theoretical models showed that the
gas temperature can become much larger than the dust tem-
perature in the atmospheres of outer (Kamp and van Zadel-
hoff, 2001) and inner (Glassgold and Najita, 2001) disks.
This suggested the need to treat the gas and dust as two in-
dependent but coupled thermodynamic systems. As an ex-
ample of this approach, Gorti and Hollenbach (2004) have
iteratively solved a system of chemical rate equations along
with the equations of hydrostatic equilibrium and thermal
balance for both the gas and the dust.
The chemical models developed so far are character-
ized by diversity as well as uncertainty. There is diver-
sity in the adopted density distribution and external radi-
ation field (UV, X-rays, and cosmic rays; the relative im-
portance of these depends on the evolutionary stage) and
in the thermal and chemical processes considered. The rel-
evant heating processes are less well understood than line
cooling. One issue is how UV, X-ray, and cosmic rays
heat the gas. Another is the role of mechanical heating
associated with various flows in the disk, especially accre-
tion (GNI04). The chemical processes are also less cer-
tain. Our understanding of astrochemistry is based mainly
on the interstellar medium, where densities and tempera-
tures are low compared to those of inner disks, except per-
haps in shocks and photon-dominated regions. New reac-
tion pathways or processes may be important at the higher
densities (> 10^7 cm−3) and higher radiation fields of in-
ner disks. A basic challenge is to understand the thermal-
chemical role of dust grains and PAHs. Indeed, perhaps the
most significant difference between models is the treatment
of grain chemistry. The more sophisticated models include
adsorption of gas onto grains in cold regions and desorption
Fig. 5.— Temperature profiles from GNI04 for a protoplanetary
disk atmosphere. The lower solid line shows the dust temperature
of D’Alessio et al. (1999) at a radius of 1 AU and a mass accretion
rate of 10^−8 M⊙ yr−1. The upper curves show the corresponding
gas temperature as a function of the phenomenological mechani-
cal heating parameter defined by Equation 1, αh = 1 (solid line),
0.1 (dotted line), and 0.01 (dashed line). The αh = 0.01 curve
closely follows the limiting case of pure X-ray heating. The lower
vertical lines indicate the major chemical transitions, specifically
CO forming at ∼10^21 cm−2, H2 forming at ∼6 × 10^21 cm−2,
and water forming at higher column densities.
in warm regions. Yet another level of complexity is intro-
duced by transport processes which can affect the chemistry
through vertical or radial mixing.
An important practical issue in thermal-chemical mod-
eling is that self-consistent calculations become increas-
ingly difficult as the density, temperature, and the number
of species increase. Almost all models employ truncated
chemistries with somewhere from 25 to 215 species,
compared with 396 in the UMIST data base (Le Teuff et
al., 2000). The truncation process is arbitrary, determined
largely by the goals of the calculations. Wiebe et al. (2003)
have developed an objective method for selecting the most important
reactions from large data bases. Additional insights into
disk chemistry are offered in the chapter by Bergin et al.
3.2 The Disk Atmosphere
As noted above, Kamp and van Zadelhoff (2001) con-
cluded in their model of debris disks that the gas and dust
temperature can differ, as did Glassgold and Najita (2001)
for T Tauri disks. The former authors developed a compre-
hensive thermal-chemical model where the heating is pri-
marily from the dissipation of the drift velocity of the dust
through the gas. For T Tauri disks, stellar X-rays, known
to be a universal property of low-mass YSOs, heat the gas
to temperatures thousands of degrees hotter than the dust
temperature.
Fig. 5 shows the vertical temperature profile obtained
by Glassgold et al. (2004) with a thermal-chemical model
based on the dust model of D’Alessio et al. (1999) for a
generic T Tauri disk. Near the midplane, the densities are
high enough to strongly couple the dust and gas. At higher
altitudes, the disk becomes optically thin to the stellar opti-
cal and infrared radiation, and the temperature of the (small)
grains rises, as does the still closely-coupled gas tempera-
ture. However, at still higher altitudes, the gas responds
strongly to the less attenuated X-ray flux, and its tempera-
ture rises much above the dust temperature. The presence
of a hot X-ray heated layer above a cold midplane layer was
obtained independently by Alexander et al. (2004).
GNI04 also considered the possibility that the surface
layers of protoplanetary disks are heated by the dissipation
of mechanical energy. This might arise through the interac-
tion of a wind with the upper layers of the disk or through
disk angular momentum transport. Since the theoretical un-
derstanding of such processes is incomplete, a phenomeno-
logical treatment is required. In the case of angular momen-
tum transport, the most widely accepted mechanism is the
MRI (Balbus and Hawley, 1991; Stone et al., 2000), which
leads to the local heating formula,
Γacc = (9/4) αh ρ c^2 Ω,    (1)
where ρ is the mass density, c is the isothermal sound speed,
Ω is the angular rotation speed, and αh is a phenomeno-
logical parameter that depends on how the turbulence dis-
sipates. One can argue, on the basis of simulations by
Miller and Stone (2000), that midplane turbulence gener-
ates Alfvén waves which, on reaching the diffuse surface
regions, produce shocks and heating. Wind-disk heating
can be represented by a similar expression on the basis of
dimensional arguments. Equation 1 is essentially an adap-
tation of the expression for volumetric heating in an α-disk
model, where α can in general depend on position. GNI04
used the notation αh to distinguish its value in the disk at-
mosphere from the usual midplane value.
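Equation 1 can be evaluated directly for representative disk-atmosphere conditions. The density, temperature, radius, and stellar mass below are assumptions chosen for illustration, not values taken from GNI04:

```python
import math

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # solar mass [g]
AU = 1.496e13       # astronomical unit [cm]
K_B = 1.381e-16     # Boltzmann constant [erg/K]
M_H = 1.673e-24     # hydrogen atom mass [g]

def gamma_acc(alpha_h, n_h, t_gas, r_au, m_star_msun, mu=1.0):
    """Volumetric heating rate of Equation 1,
    Gamma_acc = (9/4) * alpha_h * rho * c^2 * Omega, in erg cm^-3 s^-1."""
    rho = n_h * mu * M_H                                        # mass density
    c2 = K_B * t_gas / (mu * M_H)                               # isothermal sound speed squared
    omega = math.sqrt(G * m_star_msun * M_SUN / (r_au * AU) ** 3)  # Keplerian angular speed
    return 2.25 * alpha_h * rho * c2 * omega

# Illustrative atmosphere at 1 AU: atomic gas, n_H = 1e9 cm^-3, T = 1000 K, 0.5 Msun star
print(f"Gamma_acc ~ {gamma_acc(1.0, 1e9, 1000.0, 1.0, 0.5):.2e} erg cm^-3 s^-1")
```

Scaling αh from 1 down to 0.01 reduces the heating rate proportionally, which is the dependence traced by the three gas-temperature curves in Fig. 5.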
In the top layers fully exposed to X-rays, the gas tem-
perature at 1 AU is ∼5000 K. Further down, there is a warm
transition region (500–2000 K) composed mainly of atomic
hydrogen but with carbon fully associated into CO. The
conversion from atomic H to H2 is reached at a column
density of ∼6 × 10^21 cm−2, with more complex molecules
such as water forming deeper in the disk. The location
and thickness of the warm molecular region depends on the
strength of the surface heating. The curves in Fig. 5 illus-
trate this dependence for a T Tauri disk at r = 1AU. With
αh = 0.01, X-ray heating dominates this region, whereas
with αh > 0.1, mechanical heating dominates.
Gas temperature inversions can also be produced by
UV radiation operating on small dust grains and PAHs, as
demonstrated by the thermal-chemical models of Jonkheid
et al. (2004) and Kamp and Dullemond (2004). Jonkheid et
al. use the D’Alessio et al. (1999) model and focus on the
disk beyond 50 AU. At this radius, the gas temperature can
rise to 800 K or 200 K, depending on whether small grains
are well mixed or settled. For a thin disk and a high stel-
lar UV flux, Kamp and Dullemond obtain temperatures that
asymptote to several 1000 K inside 50 AU. Of course these
results are subject to various assumptions that have been
made about the stellar UV, the abundance of PAHs, and the
growth and settling of dust grains.
Many of the earlier chemical models, oriented towards
outer disks (e.g., Willacy and Langer, 2000; Aikawa and
Herbst, 1999; 2001; Markwick et al., 2002), adopt a value
for the stellar UV radiation field that is 10^4 times larger
than Galactic at a distance of 100 AU. This choice can be
traced back to early IUE measurements of the stellar UV
beyond 1400 Å for several TTS (Herbig and Goodrich,
1986). Although the UV flux from TTS covers a range
of values and is undoubtedly time-variable, detailed stud-
ies with IUE (e.g., Valenti et al., 2000; Johns-Krull et al.,
2000) and FUSE (e.g., Wilkinson et al., 2002; Bergin et al.,
2003) indicate that it decreases into the FUV domain with a
typical value ∼10^-15 erg cm^-2 s^-1 Å^-1, much smaller than
earlier estimates. A flux of ∼10^-15 erg cm^-2 s^-1 Å^-1 at
Earth translates into a value at 100 AU of ∼100 times the
traditional Habing value for the interstellar medium. The
data in the FUV range are sparse, unfortunately, as a func-
tion of age or the evolutionary state of the system. More
measurements of this kind are needed since it is obviously
important to use realistic fluxes in the crucial FUV band be-
tween 912 and 1100 Å where atomic C can be photoionized
and H2 and CO photodissociated (Bergin et al., 2003 and
the chapter by Bergin et al.).
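The flux conversion quoted above (from a value at Earth to ∼100 times the Habing field at 100 AU) follows from inverse-square scaling. The sketch below assumes a source distance of 140 pc (typical of Taurus), an effective FUV bandwidth of ∼1000 Å, and the Habing value of 1.6 × 10^-3 erg cm^-2 s^-1; all three are illustrative assumptions, not numbers from the cited studies.

```python
# Inverse-square scaling of an FUV flux density measured at Earth back to
# 100 AU from the star, compared with the Habing interstellar field.
# Distance, bandwidth, and G0 below are assumed illustrative values.
AU_PER_PC = 206265.0

f_earth   = 1e-15     # erg cm^-2 s^-1 A^-1, FUV flux density at Earth
d_pc      = 140.0     # assumed distance to the T Tauri star, pc
bandwidth = 1000.0    # assumed effective FUV bandwidth, A
G0        = 1.6e-3    # erg cm^-2 s^-1, Habing interstellar field

f_100AU = f_earth * (d_pc * AU_PER_PC / 100.0) ** 2   # flux density at 100 AU
ratio   = f_100AU * bandwidth / G0                    # band-integrated / Habing
print(ratio)   # -> a few tens, i.e. of order the ~100 quoted above
```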
Whether stellar FUV or X-ray radiation dominates the
ionization, chemistry, and heating of protoplanetary disks is
important because of the vast difference in photon energy.
The most direct physical consequence is that FUV photons
cannot ionize H, and thus the abundance of carbon provides
an upper limit to the ionization level produced by the photoionization of heavy atoms, xe ∼ 10^-4–10^-3. Next, FUV
photons are absorbed much more readily than X-rays, al-
though this depends on the size and spatial distribution of
the dust grains, i.e., on grain growth and sedimentation. Using realistic numbers for the FUV and X-ray luminosities
of TTS, we estimate that LFUV ∼ LX. The rates used in
many early chemical models correspond to LX ≪ LFUV.
This suggests that future chemical modeling of protoplan-
etary disks should consider both X-rays and FUV in their
treatment of ionization, heating, and chemistry.
3.3 The Midplane Region
Unlike the warm upper atmosphere of the disk, which
is accessible to observation, the optically thick midplane is
much more difficult to study. Nonetheless, it is extremely
important for understanding the dynamics of the basic flows
in star formation such as accretion and outflow. The impor-
tant role of the ionization level for disk accretion via the
MRI was pointed out by Gammie (1996). The physical rea-
son is that collisional coupling between electrons and neu-
trals is required to transfer the turbulence in the magnetic
field to the neutral material of the disk. Gammie found
that Galactic cosmic rays cannot penetrate beyond a surface
layer of the disk. He suggested that accretion only occurs in
the surface of the inner disk (the “active region”) and not in
the much thicker midplane region (the “dead zone”) where
the ionization level is too small to mediate the MRI.
Glassgold et al. (1997) argued that the Galactic cosmic
rays never reach the inner disk because they are blown away
by the stellar wind, much as the solar wind excludes Galac-
tic cosmic rays. They showed that YSO X-rays do almost
as good a job as cosmic rays in ionizing surface regions,
thus preserving the layered accretion model of the MRI
for YSOs. Igea and Glassgold (1999) supported this con-
clusion with a Monte Carlo calculation of X-ray transport
through disks, demonstrating that scattering plays an impor-
tant role in the MRI by extending the active surface layer to
column densities greater than 10^25 cm^-2, approaching the
Galactic cosmic ray range used by Gammie (1996). This
early work showed that the theory of disk ionization and
chemistry is crucial for understanding the role of the MRI
for YSO disk accretion and possibly for planet formation.
Indeed, Glassgold, Najita, and Igea suggested that Gam-
mie’s dead zone might provide a good environment for the
formation of planets.
These challenges have been taken up by several groups
(e.g., Sano et al., 2000; Fromang et al., 2002; Semenov et
al., 2004; Kunz and Balbus, 2004; Desch, 2004; Matsumura
and Pudritz, 2003, 2005; and Ilgner and Nelson, 2006a,b).
Fromang et al. discussed many of the issues that affect the
size of the dead zone: differences in the disk model, such as
a Hayashi disk or a standard α-disk; temporal evolution of
the disk; the role of a small abundance of heavy atoms that
recombine mainly radiatively; and the value of the mag-
netic Reynolds number. Sano et al. (2000) explored the
role played by small dust grains in reducing the electron
fraction when it becomes as small as the abundance of dust
grains. They showed that the dead zone decreases and even-
tually vanishes as the grain size increases or as sedimenta-
tion towards the midplane proceeds. More recently, Inut-
suka and Sano (2005) have suggested that a small fraction
of the energy dissipated by the MRI leads to the produc-
tion of fast electrons with energies sufficient to ionize H2.
When coupled with vertical mixing of highly ionized sur-
face regions, Inutsuka and Sano argue that the MRI can self
generate the ionization it needs to be operative throughout
the entire disk.
Recent chemical modeling (Semenov et al., 2004; Ilgner
and Nelson, 2006a,b) confirms that the level of ionization in
the midplane is affected by many microphysical processes.
These include the abundances of radiatively-recombining
atomic ions, molecular ions, small grains, and PAHs. The
proper treatment of the ions represents a great challenge for
disk chemistry, one made particularly difficult by the lack
of observations of the dense gas at the midplane of the in-
ner disk. Thus the uncertainties in inner disk chemistry pre-
clude definitive quantitative conclusions about the midplane
ionization of protoplanetary disks. Perhaps the biggest wild
card is the issue of grain growth, emphasized anew by Semenov et al. (2004). If the disk grain size distribution were
close to interstellar, then the small grains would be effective
in reducing the electron fraction and producing dead zones.
But significant grain growth is expected and observed in the
disks of YSOs, limiting the extent of dead zones (e.g., Sano
et al., 2002).
The broader chemical properties of the inner midplane
region are also of great interest since most of the gas in the
disk is within one or two scale heights. The chemical com-
position of the inner midplane gas is important because it
provides the initial conditions for outflows and for the for-
mation of planets and other small bodies; it also determines
whether the MRI operates. Relatively little work has been
done on the midplane chemistry of the inner disk. For ex-
ample, GNI04 excluded N and S species and restricted the
carbon chemistry to species closely related to CO. However,
Willacy et al. (1998), Markwick et al. (2002), and Ilgner et
al. (2004) have carried out interesting calculations that shed
light on a possible rich organic chemistry in the inner disk.
Using essentially the same chemical model, these au-
thors follow mass elements in time as they travel in a steady
accretion flow towards the star. At large distances, the gas
is subject to adsorption, and at small distances to thermal
desorption. In between it reacts on the surface of the dust
grains; on being liberated from the dust, it is processed by
gas phase chemical reactions. The gas and dust are assumed
to have the same temperature, and all effects of stellar radi-
ation are ignored. The ionizing sources are cosmic rays and
^26Al. Since the collisional ionization of low ionization potential atoms is ignored, a very low ionization level results.
Markwick et al. improve on Willacy et al. by calculating the
vertical variation of the temperature, and Ilgner et al. con-
sider the effects of mixing. Near 1 AU, H2O and CO are
very abundant, as predicted by simpler models, but Mark-
wick et al. find that CH4 and CO have roughly equal abun-
dances. Nitrogen-bearing molecules, such as NH3, HCN,
and HNC are also predicted to be abundant, as are a vari-
ety of hydrocarbons such as CH4, C2H2, C2H3, C2H4, etc.
Markwick et al. also simulate the presence of penetrating X-
rays and find increased column densities of CN and HCO+.
Despite many uncertainties, these predictions are of interest
for our future understanding of the midplane region.
Infrared spectroscopic searches for hydrocarbons in
disks may be able to test these predictions. For example,
Gibb et al. (2004) searched for CH4 in absorption toward
HL Tau. The upper limit on the abundance of CH4 relative
to CO (<1%) in the absorbing gas may contradict the pre-
dictions of Markwick et al. (2002) if the absorption arises in
the disk atmosphere. However, some support for the Mark-
wick et al. (2002) model comes from a recent report by
Lahuis et al. (2006) of a rare detection by Spitzer of C2H2
and HCN in absorption towards a YSO, with ratios close to
those predicted for the inner disk (Section 2.6).
3.4 Modeling Implications
An interesting implication of the irradiated disk atmo-
sphere models discussed above is that the region of the at-
mosphere over which the gas and dust temperatures differ
includes the region that is accessible to observational study.
Indeed, the models have interesting implications for some
of the observations presented in Section 2. They can ac-
count roughly for the unexpectedly warm gas temperatures
that have been found for the inner disk region based on the
CO fundamental (Section 2.3) and UV fluorescent H2 tran-
sitions (Section 2.4). In essence, the warm gas temperatures
arise from the direct heating of the gaseous component and
the poor thermal coupling between the gas and dust com-
ponents at the low densities characteristic of upper disk at-
mospheres. The role of X-rays in heating disk atmospheres
has some support from the results of Bergin et al. (2004);
they suggested that some of the UV H2 emission from TTS
arises from excitation by fast electrons produced by X-rays.
In the models, CO is found to form at a column density NH ≃ 10^21 cm^-2 and temperature ∼1000 K in the radial range 0.5–2 AU (GNI04; Fig. 5), conditions similar to
those deduced for the emitting gas from the CO fundamen-
tal lines (Najita et al., 2003). Moreover, CO is abundant in
a region of the disk that is predominantly atomic hydrogen,
a situation that is favorable for exciting the rovibrational
transitions because of the large collisional excitation cross
section for H + CO inelastic scattering. Interestingly, X-
ray irradiation alone is probably insufficient to explain the
strength of the CO emission observed in actively-accreting
TTS. This suggests that other processes may be important
in heating disk atmospheres. GNI04 have explored the role
of mechanical heating. Other possible heating processes are
FUV irradiation of grains and or PAHs.
Molecular hydrogen column densities comparable to the
UV fluorescent column of ∼5 × 10^18 cm^-2 observed from TW Hya are reached at 1 AU at a total vertical hydrogen column density of ∼5 × 10^21 cm^-2, where the fractional abundance of H2 is ∼10^-3 (GNI04; Fig. 5). Since Lyα photons must traverse the entire ∼5 × 10^21 cm^-2 in order to
excite the emission, the line-of-sight dust opacity through
this column must be relatively low. Observations of this
kind, when combined with atmosphere models, may be able
to constrain the gas-to-dust ratio in disk atmospheres, with
consequent implications for grain growth and settling.
Work in this direction has been carried out by Nomura
and Millar (2005). They have made a detailed thermal
model of a disk that includes the formation of H2 on grains,
destruction via FUV lines, and excitation by Lyα photons.
The gas at the surface is heated primarily by the photoelec-
tric effect on dust grains and PAHs, with a dust model ap-
propriate for interstellar clouds, i.e., one that reflects little
grain growth. Both interstellar and stellar UV radiation are
included, the latter based on observations of TW Hya. The
gas temperature at the surface of their flaring disk model
reaches 1500 K at 1 AU. They are partially successful in ac-
counting for the measurements of Herczeg et al. (2002), but
their model fluxes fall short by a factor of five or so. A
likely defect in their model is that the calculated tempera-
ture of the disk surface is too low, a problem that might be
remedied by reducing the UV attenuation by dust and by
including X-ray or other surface heating processes.
The relative molecular abundances that are predicted by
these non-turbulent, layered model atmospheres are also of
interest. At a distance of 1 AU, the calculations of GNI04
indicate that the relative abundance of H2O to CO is ∼10 in the disk atmosphere for column densities <10^22 cm^-2; only at column densities >10^23 cm^-2 are H2O and CO
comparably abundant. The abundance ratio in the atmo-
sphere is significantly lower than the few relative abun-
dances measurements to date (0.1–0.5) at <0.3 AU (Carr
et al., 2004; Section 2.2). Perhaps layered model atmo-
spheres, when extended to these small radii, will be able to
account for the abundant water that is detected. If not, the
large water abundance may be evidence of strong vertical
(turbulent) mixing that carries abundant water from deeper
in the disk up to the surface. Thus, it would be of great in-
terest to develop the modeling for the sources and regions
where water is observed in the context of both layered mod-
els and those with vertical mixing. Work in this direction
has the potential to place unique constraints on the dynam-
ical state of the disk.
4. CURRENT AND FUTURE DIRECTIONS
As described in the previous sections, significant progress
has been made in developing both observational probes of
gaseous inner disks as well as the theoretical models that
are needed to interpret the observations. In this section,
we describe some areas of current interest as well as future
directions for studies of gaseous inner disks.
4.1 Gas Dissipation Timescale
The lifetime of gas in the inner disk is of interest in the
context of both giant and terrestrial planet formation. Since
significant gas must be present in the disk in order for a
gas giant to form, the gas dissipation timescale in the gi-
ant planet region of the disk can help to identify dominant
pathways for the formation of giant planets. A short dissi-
pation time scale favors processes such as gravitational in-
stabilities which can form giant planets on short time scales
(< 1000 yr; Boss, 1997; Mayer et al., 2002). A longer dissi-
pation time scale accommodates the more leisurely forma-
tion of planets in the core accretion scenario (few–10 Myr;
Bodenheimer and Lin, 2002).
Similarly, the outcome of terrestrial planet formation
(the masses and eccentricities of the planets and their con-
sequent habitability) may depend sensitively on the resid-
ual gas in the terrestrial planet region of the disk at ages
of a few Myr. For example, in the picture of terrestrial
planet formation described by Kominami and Ida (2002),
if the gas column density in this region is ≫1 g cm^-2 at the
epoch when protoplanets assemble to form terrestrial plan-
ets, gravitational gas drag is strong enough to circularize
the orbits of the protoplanets, making it difficult for them
to collide and build Earth-mass planets. In contrast, if the
gas column density is ≪1 g cm^-2, Earth-mass planets can
be produced, but gravitational gas drag is too weak to re-
circularize their orbits. As a result, only a narrow range of
gas column densities around ∼1 g cm^-2 is expected to lead
to planets with the Earth-like masses and low eccentricities
that we associate with habitability on Earth.
From an observational perspective, relatively little is
known about the evolution of the gaseous component. Disk
lifetimes are typically inferred from infrared excesses that
probe the dust component of the disk, although processes
such as grain growth, planetesimal formation, and rapid
grain inspiraling produced by gas drag (Takeuchi and Lin,
2005) can compromise dust as a tracer of the gas. Our un-
derstanding of disk lifetimes can be improved by directly
probing the gas content of disks and using indirect probes
of disk gas content such as stellar accretion rates (see Na-
jita, 2006 for a review of this topic).
Several of the diagnostics described in Section 2 may be
suitable as direct probes of disk gas content. For example,
transitions of H2 and other molecules and atoms at mid-
through far-infrared wavelengths are thought to be promis-
ing probes of the giant planet region of the disk (Gorti and
Hollenbach, 2004). This is currently an important area of investigation for the Spitzer Space Telescope and, in the future,
for Herschel and 8- to 30-m ground-based telescopes.
Studies of the lifetime of gas in the terrestrial planet
region are also in progress. The CO transitions are well
suited for this purpose because the transitions of CO and its
isotopes probe gas column densities in the range of inter-
est (10^-4–1 g cm^-2). A current study by Najita, Carr,
and Mathieu, which explores the residual gas content of
optically thin disks (Najita, 2004), illustrates some of the
challenges in probing the residual gas content of disks.
Firstly, given the well-known correlation between IR ex-
cess and accretion rate in young stars (e.g., Kenyon and
Hartmann, 1995), CO emission from sources with optically
thin inner disks may be intrinsically weak if accretion con-
tributes significantly to heating disk atmospheres. Thus,
high signal-to-noise spectra may be needed to detect this
emission. Secondly, since the line emission may be intrin-
sically weak, structure in the stellar photosphere may com-
plicate the identification of emission features. Fig. 6 shows
an example in which CO absorption in the stellar photo-
sphere of TW Hya likely veils weak emission from the disk.
Correcting for the stellar photosphere would not only am-
plify the strong v=1–0 emission that is clearly present (cf.
Rettig et al., 2004), it would also uncover weak emission
in the higher vibrational lines, confirming the presence of
the warmer gas probed by the UV fluorescent lines of H2
(Herczeg et al., 2002).
Stellar accretion rates provide a complementary probe
of the gas content of inner disks. In a steady accretion disk,
the column density Σ is related to the disk accretion rate Ṁ
by a relation of the form Σ ∝ Ṁ/αT , where T is the disk
temperature. A relation of this form allows us to infer Σ
from Ṁ given a value for the viscosity parameter α. Alter-
natively, the relation could be calibrated empirically using
measured disk column densities.
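The steady-disk relation above can be made concrete. For a viscous α-disk, Ṁ = 3π ν Σ with ν = α cs^2/Ω, so Σ = Ṁ Ω / (3π α cs^2), which has the quoted form Σ ∝ Ṁ/αT. The sketch below assumes a 1 M⊙ star, a mean molecular weight μ = 2.3, and an illustrative midplane temperature of 200 K at 1 AU.

```python
# Column density of a steady alpha-disk from the accretion rate:
# Sigma = Mdot * Omega / (3*pi*alpha*cs^2), i.e. Sigma ∝ Mdot/(alpha*T).
# Stellar mass, temperature, and mu below are illustrative assumptions.
import math

k_B, m_H = 1.381e-16, 1.673e-24   # erg/K, g
G, M_sun = 6.674e-8, 1.989e33     # cgs
AU, YR   = 1.496e13, 3.156e7      # cm, s

def sigma_steady(Mdot_msun_yr, alpha, T, r_AU, mu=2.3):
    """Steady-disk column density (g cm^-2) for a 1 M_sun star."""
    Mdot  = Mdot_msun_yr * M_sun / YR               # g/s
    Omega = math.sqrt(G * M_sun / (r_AU * AU)**3)   # Keplerian angular speed
    cs2   = k_B * T / (mu * m_H)                    # sound speed squared
    return Mdot * Omega / (3 * math.pi * alpha * cs2)

# Typical TTS accretion rate, alpha = 0.01, at 1 AU:
print(sigma_steady(1e-8, 0.01, 200.0, 1.0))   # -> of order 10^2 g cm^-2
```

Since Σ scales as 1/α, a measured column density together with a measured accretion rate pins down α, which is the empirical calibration suggested above.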
Fig. 6.— (Top) Spectrum of the transitional disk system TW
Hya at 4.6 µm (histogram). The strong emission in the v=1–0
CO fundamental lines extends above the plotted region. Although
the model stellar photospheric spectrum (light solid line) fits the
weaker features in the TW Hya spectrum, it predicts stronger ab-
sorption in the low vibrational CO transitions (indicated by the
lower vertical lines) than is observed. This suggests that the stellar
photosphere is veiled by CO emission from warm disk gas. (Bot-
tom) CO fundamental emission from the transitional disk system
V836 Tau. Vertical lines mark the approximate line centers at the
velocity of the star. The velocity widths of the lines locate the
emission within a few AU of the star, and the relative strengths of
the lines suggest optically thick emission. Thus, a large reservoir
of gas may be present in the inner disk despite the weak infrared
excess from this portion of the disk.
Accretion rates are available for many sources in the
age range 0.5–10 Myr (e.g., Gullbring et al., 1998; Hart-
mann et al., 1998; Muzerolle et al., 1998, 2000). A typical
value of 10^-8 M⊙ yr^-1 for TTS corresponds to an (active)
disk column density of ∼100 g cm^-2 at 1 AU for α=0.01
(D’Alessio et al., 1998). The accretion rates show an overall
decline with time with a large dispersion at any given age.
The existence of 10 Myr old sources with accretion rates as
large as 10^-8 M⊙ yr^-1 (Sicilia-Aguilar et al., 2005) suggests that gaseous disks may be long lived in some systems.
Even the lowest measured accretion rates may be dy-
namically significant. For a system like V836 Tau (Fig. 6), a
∼3 Myr old (Siess et al., 1999) system with an optically thin
inner disk, the stellar accretion rate of 4 × 10^-10 M⊙ yr^-1 (Hartigan et al., 1995; Gullbring et al., 1998) would correspond to ∼4 g cm^-2 at 1 AU. Although the accretion rate is
irrelevant for the buildup of the stellar mass, it corresponds
to a column density that would favorably impact terrestrial
planet formation. More interesting perhaps is St34, a TTS
with a Li depletion age of 25 Myr; its stellar accretion rate
of 2 × 10^-10 M⊙ yr^-1 (White and Hillenbrand, 2005) suggests a dynamically significant reservoir of gas in the inner
disk region. These examples suggest that dynamically sig-
nificant reservoirs of gas may persist even after inner disks
become optically thin and over the timescales needed to in-
fluence the outcome of terrestrial planet formation.
The possibility of long lived gaseous reservoirs can be
confirmed by using the diagnostics described in Section 2
to measure total disk column densities. Equally important,
a measured disk column density, combined with the stellar accretion rate, would allow us to infer a value for the viscosity parameter α for the system. This would be another way
of constraining the disk accretion mechanism.
4.2 Nature of Transitional Disk Systems
Measurements of the gas content and distribution in in-
ner disks can help us to identify systems in various states
of planet formation. Among the most interesting objects to
study in this context are the transitional disk systems, which
possess optically thin inner and optically thick outer disks.
Examples of this class of objects include TW Hya, GM Aur,
DM Tau, and CoKu Tau/4 (Calvet et al., 2002; Rice et al.,
2003; Bergin et al., 2004; D’Alessio et al., 2005; Calvet et
al., 2005). It was suggested early on that optically thin in-
ner disks might be produced by the dynamical sculpting of
the disk by orbiting giant planets (Skrutskie et al., 1990; see
also Marsh and Mahoney, 1992).
Indeed, optically thin disks may arise in multiple phases
of disk evolution. For example, as a first step in planet for-
mation (via core accretion), grains are expected to grow into
planetesimals and eventually rocky planetary cores, produc-
ing a region of the disk that has reduced continuum opac-
ity but is gas-rich. These regions of the disk may there-
fore show strong line emission. Determining the fraction
of sources in this phase of evolution may help to establish
the relative time scales for planetary core formation and the
accretion of a gaseous envelope.
If a planetary core accretes enough gas to produce a low
mass giant planet (∼1MJ), it is expected to carve out a gap
in its vicinity (e.g., Takeuchi et al., 1996). Gap crossing
streams can replenish an inner disk and allow further accre-
tion onto both the star and planet (Lubow et al., 1999). The
small solid angle subtended by the accretion streams would
produce a deficit in the emission from both gas and dust in
the vicinity of the planet’s orbit. We would also expect to
detect the presence of an inner disk. Possible examples of
systems in this phase of evolution include GM Aur and TW
Hya in which hot gas is detected close to the star as is accre-
tion onto the star (Bergin et al., 2004; Herczeg et al., 2002;
Muzerolle et al., 2000). The absence of gas in the vicinity of
the planet’s orbit would help to confirm this interpretation.
Once the planet accretes enough mass via the accretion
streams to reach a mass ∼5–10MJ , it is expected to cut off
further accretion (e.g., Lubow et al., 1999). The inner disk
will accrete onto the star, leaving a large inner hole and no
trace of stellar accretion. CoKu Tau/4 is a possible example
of a system in this phase of evolution (cf. Quillen et al.,
2004) since it appears to have a large inner hole and a low
to negligible accretion rate (< a few × 10^-10 M⊙ yr^-1). This
interpretation predicts little gas anywhere within the orbit
of the planet.
At late times, when the disk column density around
10 AU has decreased sufficiently that the outer disk is being
photoevaporated away faster than it can resupply material
to the inner disk via accretion, the outer disk will decouple
from the inner disk, which will accrete onto the star, leav-
ing an inner hole that is devoid of gas and dust (the “UV
Switch” model; Clarke et al., 2001). Measurements of the
disk gas column density and the stellar accretion rate can
be used to test this possibility. As an example, TW Hya
is in the age range (∼10 Myr) where photoevaporation is
likely to be significant. However, the accretion rate onto the star, the gas content of the inner disk (Sections 2 and 4), as well
as the column density inferred for the outer disk (32 g cm^-2
at 20 AU based on the dust SED; Calvet et al., 2002) are all
much larger than is expected in the UV switch model. Al-
though this mechanism is, therefore, unlikely to explain the
SED for TW Hya, it may explain the presence of inner holes
in less massive disk systems of comparable age.
4.3 Turbulence in Disks
Future studies of gaseous inner disks may also help to
clarify the nature of the disk accretion process. As indicated
in Section 2.1, evidence for suprathermal line broadening in
disks supports the idea of a turbulent accretion process. A
turbulent inner disk may have important consequences for
the survival of terrestrial planets and the cores of giant plan-
ets. An intriguing puzzle is how these objects avoid Type-I
migration, which is expected to cause the object to lose an-
gular momentum and spiral into the star on short timescales
(e.g., Ward, 1997). A recent suggestion is that if disk accre-
tion is turbulent, terrestrial planets will scatter off turbulent
fluctuations, executing a “random walk” which greatly in-
creases the migration time as well as the chances of survival
(Nelson et al., 2000; see chapter by Nelson et al.).
It would be interesting to explore this possible connec-
tion further by extending the approach used for the CO over-
tone lines to a wider range of diagnostics to probe the intrin-
sic line width as a function of radius and disk height. By
comparing the results to the detailed predictions of theoret-
ical models, it may be possible to distinguish the turbulent signature produced, e.g., by the MRI instability from the turbulence that might be produced by, e.g., a wind
blowing over the disk.
A complementary probe of turbulence may come from
exploring the relative molecular abundances in disks. As
noted in Section 3.4, if relative abundances cannot be ex-
plained by model predictions for non-turbulent, layered ac-
cretion flows, a significant role for strong vertical mixing
produced by turbulence may be implied. Although model-
dependent, this approach toward diagnosing turbulent ac-
cretion appears to be less sensitive to confusion from wind-
induced turbulence, especially if one can identify diagnos-
tics that require vertical mixing from deep down in the disk.
Another complementary approach toward probing the ac-
cretion process, discussed in Section 4.1, is to measure to-
tal gas column densities in low column density, dissipating
disks in order to infer values for the viscosity parameter α.
5. SUMMARY AND CONCLUSIONS
Recent work has lent new insights on the structure, dy-
namics, and gas content of inner disks surrounding young
stars. Gaseous atmospheres appear to be hotter than the dust
in inner disks. This is a consequence of irradiative (and pos-
sibly mechanical) heating of the gas as well as the poor ther-
mal coupling between the gas and dust at the low densities
of disk atmospheres. In accreting systems, the gaseous disk
appears to be turbulent and extends inward beyond the dust
sublimation radius to the vicinity of the corotation radius.
There is also evidence that dynamically significant reser-
voirs of gas can persist even after the inner disk becomes
optically thin in the continuum. These results bear on im-
portant star and planet formation issues such as the origin
of winds, funnel flows, and the rotation rates of young stars;
the mechanism(s) responsible for disk accretion; and the
role of gas in determining the architectures of terrestrial and giant planets. Although significant future work
is needed to reach any conclusions on these issues, the fu-
ture for such studies is bright. Increasingly detailed studies
of the inner disk region should be possible with the advent
of powerful spectrographs and interferometers (infrared and
submillimeter) as well as sophisticated models that describe
the coupled thermal, chemical, and dynamical state of the
disk.
Acknowledgments. We thank Stephen Strom who con-
tributed significantly to the discussion on the nature of tran-
sitional disk systems. We also thank Fred Lahuis and Matt
Richter for sharing manuscripts of their work in advance of
publication. AEG acknowledges support from the NASA
Origins and NSF Astronomy programs. JSC and JRN also
thank the NASA Origins program for its support.
REFERENCES
Abgrall H., Roueff E., Launay F., Roncin J. Y., and Subtil, J. L.
(1993) Astron. Astrophys. Suppl., 101, 273-321.
Abgrall H., Roueff E., Launay F., Roncin J. Y., and Subtil J. L.
(1993) Astron. Astrophys. Suppl., 101, 323-362.
Acke B., van den Ancker M. E., and Dullemond C. P. (2005), As-
tron. Astrophys., 436, 209-230.
Aikawa Y., Miyama S. M., Nakano T., and Umebayashi T. (1996)
Astrophys. J., 467, 684-697.
Aikawa Y., Umebayashi T., Nakano T., and Miyama S. M. (1997)
Astrophys. J., 486, L51-L54.
Aikawa Y., Umebayashi T., Nakano T., and Miyama S. M. (1999)
Astrophys. J., 519, 705-725.
Aikawa Y. and Herbst E. (1999) Astrophys. J., 526, 314-326.
Aikawa Y. and Herbst E. (2001) Astron. Astrophys., 371, 1107-
1117.
Akeson R. L., Walker, C. H., Wood, K., Eisner, J. A., Scire, E. et
al. (2005a) Astrophys. J., 622, 440-450.
Akeson R. L., Boden, A. F., Monnier, J. D., Millan-Gabet, R.,
Beichman, C. et al. (2005b) Astrophys. J., 635, 1173-1181.
Alexander R. D., Clarke C. J., and Pringle J. E. (2004) Mon. Not.
R. Astron. Soc., 354, 71-80.
Ardila D. R., Basri G., Walter F. M., Valenti J. A., and Johns-Krull
C. M. (2002) Astrophys. J., 566, 1100-1123.
Balbus S. A. and Hawley J. F. (1991) Astrophys. J., 376, 214-222.
Bary J. S., Weintraub D. A., and Kastner J. H. (2003) Astrophys.
J., 586, 1138-1147.
Bergin E., Calvet, N., Sitko M. L., Abgrall H., D’Alessio, P. et al.
(2004) Astrophys. J., 614, L133-Ll37.
Bergin E., Calvet N., D’Alessio P., and Herczeg G. J. (2003) As-
trophys. J., 591, L159-L162.
Blake G. A. and Boogert A. C. A. (2004) Astrophys. J., 606, L73.
Blum R. D., Barbosa C. L., Damineli A., Conti P. S., and Ridgway
S. (2004) Astrophys. J., 617, 1167-1176.
Bodenheimer P. and Lin D. N. C. (2002) Ann. Rev. Earth Planet.
Sci., 30, 113-148.
Boss A. P. (1997) Science, 276, 1836-1839.
Brittain S. D., Rettig T. W., Simon T., Kulesa C., DiSanti M. A.,
and Dello Russo N. (2003) Astrophys. J., 588, 535-544.
Brown A., Jordan C., Millar T. J., Gondhalekar P., and Wilson R.
(1981) Nature, 290, 34-36.
Calvet N., Patino A., Magris G., and D’Alessio P. (1991) Astro-
phys. J., 380, 617-630.
Calvet N., D’Alessio P., Hartmann L, Wilner D., Walsh A., and
Sitko M. (2002) Astrophys. J., 568, 1008-1016.
Calvet N., Muzerolle J., Briceño C., Hernández J., Hartmann L.,
Saucedo J. L., and Gordon K. D. (2004) Astron. J., 128, 1294-
1318.
Calvet N., D’Alessio P., Watson D. M., Franco-Hernández R.,
Furlan, E. et al. (2005) Astrophys. J., 630 L185-L188.
Carr J. S. (1989) Astrophys. J., 345, 522-535.
Carr J. S., Tokunaga A. T., Najita J., Shu F. H., and Glassgold A.
E. (1993) Astrophys. J., 411, L37-L40.
Carr J. S., Tokunaga A. T., and Najita J. (2004) Astrophys. J., 603,
213-220.
Chandler C. J., Carlstrom J. E., Scoville N. Z., Dent W. R. F., and
Geballe T. R. (1993) Astrophys. J., 412, L71-L74.
Chen C. H., Van Cleve J. E., Watson D. M., Houck J. R., Werner
M. W., Stapelfeldt K. R., Fazio G. G., and Rieke G. H. (2004)
AAS Meeting Abstracts, 204, 4106.
Clarke C. J., Gendrin A., and Sotomayor M. (2001) MNRAS, 328,
485-491.
Colavita M., Akeson R., Wizinowich P., Shao M., Acton S. et al.
(2003) Astrophys. J., 592, L83-L86.
D’Alessio P., Canto J., Calvet N., and Lizano S. (1998) Astrophys.
J., 500, 411.
D’Alessio P., Calvet N., and Hartmann L., Lizano S., and Cantoó
J. (1999) Astrophys. J., 527, 893-909.
D’Alessio P., Calvet N., and Hartmann L. (2001) Astrophys. J.,
553, 321-334.
D’Alessio P., Hartmann L., Calvet N., Franco-Hernández R., For-
rest W. J. et al. (2005) Astrophys. J., 621, 461-472.
Desch S. (2004) Astrophys. J., 608, 509
Doppmann G. W., Greene T. P., Covey K. R., and Lada C. J. (2005)
Astrophys. J., 130, 1145-1170.
Edwards S., Fischer W., Kwan J., Hillenbrand L., and Dupree A.
K. (2003) Astrophys. J., 599, L41-L44.
Eisner J. A., Hillenbrand L. A., White R. J., Akeson R. L., and
Sargent A. E. (2005) Astrophys. J., 623, 952-966.
Feigelson E. D. and Montmerle T. (1999) Ann. Rev. Astron. As-
trophys., 37, 363-408.
Figuerêdo E., Blum R. D., Damineli A., and Conti P. S. (2002) AJ,
124, 2739-2748.
Fromang S., Terquem C., and Balbus S. A. (2002) Mon. Not. R.
Astron. Soc.339, 19
Gammie C. F. (1996) Astrophys. J., 457, 355-362.
Geballe T. R. and Persson S. E., (1987) Astrophys. J., 312, 297-
Gibb E. L., Rettig T., Brittain S. and Haywood R. (2004) Astro-
phys. J., 610, L113-L116.
Gizis J. E., Shipman H. L., and Harvin J. A. (2005) Astrophys. J.,
630, L89-L91.
Glassgold A. E., Najita J., and Igea J. (1997) Astrophys. J., 480,
344-350 (GNI97).
Glassgold A. E. and Najita J.(2001) in Young Stars Near Earth,
ASP Conf. Ser. vol. 244 (R. Jayawardhana and T. Greene,
eds.) pp. 251-255. ASP, San Francisco.
Glassgold A. E., Najita J., and Igea J. (2004) Astrophys. J., 615,
972-990 (GNI04).
Gorti U. and Hollenbach D. H. (2004) Astrophys. J., 613, 424-447.
Greene T. P. and Lada C. J. (1996) AJ, 112, 2184-2221.
Gullbring E., Hartmann L., Briceño C., and Calvet N. (1998) As-
trophys. J., 492, 323-341.
Hanson M. M., Howarth I. D., and Conti P. S. (1997) Astrophys.
J., 489, 698-718.
Hartmann L. and Kenyon S. (1996) Ann. Rev. Astron. Astrophys.,
34, 207-240.
Hartmann L., Calvet N., Gullbring E. and D’Alessio P. (1998) As-
trophys. J.495, 385-400.
Hawley J. F., Gammie C. F., and Balbus S. A. (1995) Astrophys.
J., 440, 742-763.
Hartmann L., Hinkle K., and Calvet N. (2004) Astrophys. J., 609,
906-916.
Herczeg G. J., Linsky J. L., Valenti J. A., Johns-Krull C. M., and
Wood B. E. (2002) Astrophys. J., 572, 310-325.
Herczeg G. J., Wood B. E., Linsky J. L., Valenti J. A., and Johns-
Krull C. M. (2004) Astrophys. J., 607, 369-383.
Herczeg G. J., Walter F. M., Linsky J. L., Gahm G. F., Ardila D.
R. et al. (2005) Astron. J., 129, 2777-2791.
Herczeg G. J., Linsky J. L., Walter F. M., Gahm G. F., and Johns-
Krull C. M. (2006) in preparation
Hollenbach D. J. Yorke H. W., Johnstone D. (2000) in Protostars
and Planets IV, (V. Mannings et al., eds.), pp. 401-428. Univ.
of Arizona, Tucson.
Ida S. and Lin D. N. C. (2004) Astrophys. J., 604, 388-413.
Igea J. and Glassgold A. E. (1999) Astrophys. J., 518, 848-858.
Ilgner M., Henning Th., Markwick A. J., and Millar T. J. (2004)
Astron. Astrophys., 415, 643-659.
Ilgner M. and Nelson R. P. (2006a) Astron. Astrophys., 445, 205-
Ilgner M. and Nelson R. P. (2006b) Astron. Astrophys., 445, 223-
Inutsuka S. and Sano T. (2005) Astrophys. J., 628, L155-L158.
Ishii M., Nagata T., Sato S., Yao Y., Jiang Z., and Nakaya H.
(2001) Astron. Astrophys., 121, 3191-3206.
Jonkheid B., Faas F. G. A., van Zadelhoff G.-J., and van Dishoeck
E. F. (2004) Astron. Astrophys.428, 511-521.
Johns-Krull C. M., Valenti J. A., and Linsky J. L. (2000) Astro-
phys. J. 539, 815-833.
Kamp I. and van Zadelhoff G.-J. (2001) Astron. Astrophys., 373,
641-656.
Kamp I. and Dullemond C. P. (2004) Astrophys. J., 615, 991-999.
Kenyon S. J. and Hartmann L. (1995) Astrophys. J. Suppl., 101,
117-171.
Klahr H. H. and Bodenheimer P. (2003) Astrophys. J., 582, 869-
Kominami J. and Ida S. (2002) Icarus, 157, 43-56.
Kunz M. W. and Balbus S. A. (2004) Mon. Not. R. Astron. Soc.,
348, 355-360.
Lahuis F., van Dishoeck E. F., Boogert A. C. A., Pontoppidan K.
M., Blake G. A. et al. (2006) Astrophys. J., 636, L145-L148.
Langer W. et al. (2000) in Protostars and Planets IV, (V. Mannings
et al., eds.), pp. 29-. Univ. of Arizona, Tucson.
Le Teuff Y., Markwick, A., and Millar, T. (2000) Astron. Astro-
phys., 146, 157-168.
Ida S. and Lin D. N. C. (2004) Astrophys. J., 604, 388-413.
Lin D. N. C., Bodenheimer P., and Richardson D. C. (1996) Na-
ture, 380, 606-607.
Lubow S. H., Seibert M., and Artymowicz P. (1999) Astrophys. J.,
526, 1001-1012.
Luhman K. L., Rieke G. H., Lada C. J., and Lada E. A. (1998)
Astrophys. J., 508, 347-369.
Lynden-Bell D. and Pringle J. E. (1974) Mon. Not. R. Astron. Soc.,
168, 603-637.
Malbet F. and Bertout C. (1991) Astrophys. J., 383, 814-819.
Markwick A. J., Ilgner M., Millar T. J., and Henning Th. (2002)
Astron. Astrophys., 385, 632-646.
Marsh K. A. and Mahoney M. J. (1992) Astrophys. J., 395, L115-
L118.
Martin S. C. (1997) Astrophys. J., 478, L33-L36.
Matsumura S. and Pudritz R. E. (2003) Astrophys. J., 598, 645-
Matsumura S. and Pudritz R. E. (2005) Astrophys. J., 618, L137-
L140.
Mayer L., Quinn T., Wadsley J., and Stadel J. (2002) Science, 298,
1756-1759.
Miller K. A. and Stone J. M. (2000) Astrophys. J., 534, 398-419.
Muzerolle J., Hartmann L., and Calvet N. (1998) Astron. J., 116,
2965-2974.
Muzerolle J., Calvet N., Briceño C., Hartmann L., and Hillenbrand
L. (2000) Astrophys. J., 535, L47-L50.
Muzerolle J., Calvet N., Hartmann L., and D’Alessio P. (2003)
Astrophys. J., 597, L149-152.
Najita J., Carr J. S., Glassgold A. E., Shu F. H., and Tokunaga A.
T. (1996) Astrophys. J., 462, 919-936.
Najita J., Edwards S., Basri G., and Carr J. (2000) in Protostars
and Planets IV, (V. Mannings et al., eds.), p. 457-483. Univ.
of Arizona, Tucson.
Najita J., Carr J. S., and Mathieu R. D. (2003) Astrophys. J., 589,
931-952.
Najita, J. (2004) in Star Formation in the Interstellar Medium ASP
Conf. Ser., vol. 323, (D. Johnstone et al., eds.) pp. 271-277.
ASP, San Francisco.
Najita, J. (2006) in A Decade of Extrasolar Planets Around Nor-
mal Stars STScI Symposium Series, vol. 19, (M. Livio, ed.)
Cambridge U. Press, Cambridge, in press.
Nelson R. P., Papaloizou J. C. B., Masset F., and Kley W. (2000)
MNRAS, 318, 18-36.
Nomura H. and Millar T. J. (2005) Astron. Astrophys., 438, 923-
Quillen A. C., Blackman E. G., Frank A., and Varnière P. (2004)
Astrophys. J., 612, L137-L140.
Prinn R. (1993) in Protostars and Planets III, (E. Levy and J. Lu-
nine, eds.) pp. 1005-1028. Univ. of Arizona, Tucson.
Rettig T. W., Haywood J., Simon T., Brittain S. D., and Gibb E.
(2004) Astrophys. J., 616, L163-L166.
Rice W. K. M., Wood K., Armitage P. J., Whitney B. A., and
Bjorkman J. E. (2003) MNRAS, 342, 79-85.
Richter M. J., Jaffe D. T., Blake G. A., and Lacy J. H. (2002)
Astrophys. J., 572, L161-L164.
Sako S., Yamashita T., Kataza H., Miyata T., Okamoto Y. K. et al.
(2005) Astrophys. J., 620, 347-354.
Sano T., Miyama S., Umebayashi T., amd Nakano T. (2000) As-
trophys. J., 543, 486-501.
Scoville N., Kleinmann S. G., Hall D. N. B., and Ridgway S. T.
(1983) Astrophys. J., 275, 201-224.
Scoville N. Z. (1985) in Protostars and Planets II, pp. 188-200.
Univ. of Arizona, Tucson.
Semenov D., Widebe D., and Henning Th. (2004) Astron. Astro-
phys., 417, 93-106.
Shakura N. I. and Sunyaev R. A. (1973) Astron. Astrophys., 24,
337-355.
Sheret I., Ramsay Howat S. K., Dent W. R. F. (2003) MNRAS, 343,
L65-L68.
Shu F. H., Johnstone D., Hollenbach D. (1993) Icarus, 106, 92.
Skrutskie M. F., Dutkevitch D., Strom S. E., Edwards S., Strom
K. M., and Shure M. A. (1990) Astron. J., 99, 1187-1195.
Sicilia-Aguilar A., Hartmann L. W., Hernández J., Briceño C., and
Calvet N. (2005) Astron. J., 130, 188-209.
Siess L., Forestini M., and Bertout C. (1999) Astron. Astrophys.,
342, 480-491.
Stone J. M., Gammie C. F., Balbus S. A., and Hawley J. F. (2000)
in Protostars and Planets IV, (V. Mannings et al., eds) pp. 589-
611. Univ. Arizona, Tucson.
Strom K. M., Strom S. E., Edwards S., Cabrit S., and Skrutskie M.
F. (1989) Astron. J., 97, 1451-1470.
Takeuchi T., Miyama S. M., and Lin D. N. C. (1996) Astrophys.
J., 460, 832-847.
Takeuchi T., and Lin D. N. C. (2005) Astrophys. J., 623, 482-492.
Thi W. F., van Dishoeck E. F., Blake G. A., van Zadelhoff G. J.,
Horn J. et al. (2001) Astrophys. J., 561, 1074-1094.
Thompson R. (1985) Astrophys. J., 299, L41-L44.
Trilling D. D., Lunine J. I., and Benz W. (2002) Astron. Astro-
phys.394, 241-251.
Valenti J. A., Johns-Krull C. M., and Linsky J. L. (2000) Astro-
phys. J. Suppl. 129, 399-420.
Walter F. M., Herczeg G., Brown A., Ardila D. R., Gahm G. F.,
Johns-Krull C. M., Lissauer J. J., Simon M., and Valenti J. A.
(2003) Astron. J., 126, 3076-3089.
Ward W. R. (1997) Icarus, 126, 261-281.
White R. J. and Hillenbrand L. A. (2005) Astrophys. J., 621, L65-
Wiebe D., Semenov D. and Henning Th. (2003) Astron. Astro-
phys., 399, 197-210.
Wilkinson E., Harper G. M., Brown A. and Herczeg G. J. (2002)
Astron. J., 124, 1077-1081.
Willacy K. and Langer W. D. (2000) Astrophys. J., 544, 903-920.
Willacy K., Klahr H. H., Millar T. J., and Henning Th. (1998)
Astron. Astrophys., 338, 995-1005.
Zuckerman B., Forveille T., and Kastner J. H. (1995) Nature, 373,
494-496